* Fix multiple instances running from same dataDir
* Add exclusive lock on lock file
* Unlock lock file on process exit
* Fix minor issues in lock file implementation
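A minimal sketch of the locking approach on POSIX, assuming `lockf` semantics and a hypothetical lock file name (the actual implementation may differ):

```nim
import std/[os, posix]

proc lockDataDir(dataDir: string): File =
  ## Take an exclusive, non-blocking lock on a file inside dataDir
  ## so that a second instance fails fast instead of corrupting state.
  let path = dataDir / "nimbus.lock"   # hypothetical file name
  result = open(path, fmWrite)
  if lockf(result.getFileHandle(), F_TLOCK, 0) != 0:
    quit "Data directory is in use by another instance: " & dataDir

proc unlockDataDir(f: File) =
  ## Release the lock on process exit.
  discard lockf(f.getFileHandle(), F_ULOCK, 0)
  f.close()
```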
The revalidateMax value is lowered to ramp up the radiusCache more
quickly and to keep it healthier.
The defaultMaxGossipNodes value is lowered because with the
current value a Nodes lookup is almost always triggered.
This value depends on the content replication factor, which in turn
depends on the network (and subnetwork) because of the number of
nodes and their radius/storage capacity.
* Support RPC API namespaces as cli parameter.
* Fluffy now uses rpcFlags on startup.
* Update testnet script to enable all RPC APIs.
* Update Fluffy book and move web3 call into eth calls.
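A purely hypothetical invocation; the flag name and namespace spellings are assumptions, not the verified CLI surface:

```sh
# hypothetical: expose only the eth and portal namespaces
./fluffy --rpc --rpc-api=eth,portal
```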
This bug had the effect that our radius cache would not get
filled by any outgoing pings, causing:
- Node lookups to always occur on NH gossip
- POKEs to happen much more rarely
Also add metrics for the number of offers done via the POKE mechanism.
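A sketch of how such a counter might be declared with nim-metrics; the metric name here is illustrative, not necessarily the one that was added:

```nim
import metrics

# illustrative name; the actual metric added may differ
declareCounter portal_poke_offers,
  "Number of offers sent via the POKE mechanism"

proc onPoke() =
  # ... send the offer, then count it
  portal_poke_offers.inc()
```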
* fix: nimbus state ahead of era history
* comments
* fix: suggestions
* fix: messages
* fix edge case resume
* check from last file
* formatting
* fix: typo
* fix: unwanted quit before rlp import
* Make stop functions wait for completion before returning.
* Implement graceful shutdown.
* Shut down RPC and metrics servers if enabled.
* Move metrics and rpc servers out of PortalNode.
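A minimal chronos-based sketch of the stop-and-wait pattern, using stand-in types rather than the real server handles:

```nim
import chronos

type Server = ref object
  stopped: bool

proc stop(s: Server) {.async.} =
  # stand-in for awaiting outstanding work; the point is that `stop`
  # only returns once shutdown has actually completed
  await sleepAsync(10.milliseconds)
  s.stopped = true

proc shutdown(servers: seq[Server]) {.async.} =
  for s in servers:
    await s.stop()   # each server is fully stopped before the next

when isMainModule:
  waitFor shutdown(@[Server(), Server()])
```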
* batch database key writes during `computeKey` calls
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that can be loaded directly from
the vertex
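A toy illustration of the batching and progress-logging idea, with a `Table` standing in for the real database:

```nim
import std/tables

var db: Table[int, int]              # stand-in for the key-value store
const batchSize = 4096

proc flush(batch: var seq[(int, int)]) =
  # one bulk pass instead of a round trip per key
  for (k, v) in batch:
    db[k] = v
  batch.setLen(0)

var batch: seq[(int, int)]
for i in 0 ..< 10_000:
  batch.add((i, i * 2))
  if batch.len >= batchSize:
    flush(batch)
    echo "progress: ", i + 1, " keys updated"   # log on long runs
flush(batch)
```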
* Enable state network by default. Create status log loop for state and beacon networks. Create status log loop for portal node. Implement stop functions.
* Add missing leaf cache update when a leaf turns into a branch with two
leaves (on merge) and vice versa (on delete) - this could lead to stale
leaves being returned from the cache, causing validation failures - in
practice it didn't happen because the leaf caches were not being used
efficiently :)
* Replace `seq` with `ArrayBuf` in `Hike`, allowing it to become
allocation-free - this PR also works around an inefficiency in Nim when
returning large types via a `var` parameter
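A sketch of the idea behind a fixed-capacity `ArrayBuf` (the real type lives in the support libraries and is more complete):

```nim
type
  ArrayBuf[N: static int, T] = object
    # storage is inline, so no heap allocation as with `seq`
    buf: array[N, T]
    n: int

proc add[N: static int, T](b: var ArrayBuf[N, T], v: T) =
  assert b.n < N          # capacity is fixed at compile time
  b.buf[b.n] = v
  inc b.n

var steps: ArrayBuf[64, int]   # lives entirely on the stack
steps.add(1)
```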
* Use the leaf cache instead of `getVtxRc` to fetch recent leaves - this
makes the vertex cache more efficient at caching branches because fewer
leaf requests pass through it.
The storage leaf cache was being circumvented when actually fetching
leaves and was instead only ever being filled with items :/ (see the
sketch below)
Also avoids an expensive copy when fetching account data (broadly,
variant objects are comparatively expensive to copy and fetching
accounts is a hotspot)
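A minimal sketch of the corrected read-through path, with illustrative names standing in for the real cache and vertex layer:

```nim
import std/[options, tables]

type Leaf = object
  data: seq[byte]

var leafCache: Table[string, Leaf]     # stand-in for the leaf cache

proc getVtxRc(path: string): Option[Leaf] =
  ## stand-in for the more expensive vertex-layer lookup
  some Leaf(data: @[0x01'u8])

proc fetchLeaf(path: string): Option[Leaf] =
  # read through the cache instead of only ever filling it
  if path in leafCache:
    return some leafCache[path]
  result = getVtxRc(path)
  if result.isSome:
    leafCache[path] = result.get
```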
* move pfx out of the variant, which avoids pointless field type panic
checks and copies on access
* make `VertexRef` a non-inheritable object which reduces its memory
footprint and simplifies its use - it's also unclear from a semantic
point of view why inheritance makes sense for storing keys
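A simplified sketch of the reshaped type; the field names are illustrative, not the actual `VertexRef` layout:

```nim
type
  VertexType = enum Leaf, Branch
  VertexRef = ref object       # plain ref object, no inheritance
    pfx: seq[byte]             # outside the variant: reading it needs
                               # no field-discriminator check
    case vType: VertexType
    of Leaf:
      lData: seq[byte]
    of Branch:
      bVid: array[16, uint64]

let v = VertexRef(vType: Leaf, pfx: @[0x0a'u8], lData: @[])
echo v.pfx.len
```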
Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
* Improve state endpoint genesis test and cover cases when accounts, code and slots don't exist.
* Refactor state endpoints to support returning partial proofs.
* Implement getProofs in state endpoints.
* Add tests for getProofs and improve code.
* Implement eth_getProof JSON-RPC api in Fluffy.
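For reference, the standard EIP-1186 request shape that such an endpoint serves (whether every block parameter form is supported here is not confirmed by this entry):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getProof",
  "params": [
    "0x7f0d15c7faae65896648c8273b6d7e43f58fa842",
    ["0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421"],
    "latest"
  ]
}
```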
- Add new content + content key functionality for header by number
- Remove EpochRecords from the network
- Add pruning call for the EpochRecords plus the required deprecation
functionality
- Adjust getBlock and getBlockHashByNumber to make use of the
new functionality instead
- Delete content_verifier as it was only verifying the now
deprecated EpochRecord
detail:
For practical reasons, if an account without a storage tree is asked
for a slot, an empty proof list is returned. It is up to the user to
provide an account proof that shows that there is no storage tree.
* Reverse order in staged blob lists
why:
having the largest block number at the lowest header list index `0`
makes it easier to grow the list with parent headers, i.e. with
decreasing block numbers.
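A toy illustration of why the reversed order helps, with bare numbers standing in for headers:

```nim
var staged = @[100'u64]        # index 0 holds the largest block number
staged.add(staged[^1] - 1)     # growing towards the parent is a plain add
staged.add(staged[^1] - 1)
assert staged == @[100'u64, 99, 98]
```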
* Set a header response threshold for when to ditch a peer
* Refactor extension of staged header chains record
why:
Was cobbled together as a proof of concept after trying several
approaches to running the download.
* TODO update
* Make debugging code independent of `release` flag
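One common way to do this in Nim is an explicit `booldefine` switch; the define name here is illustrative:

```nim
# enable with -d:chainDebug, independently of -d:release
const chainDebug {.booldefine.} = false

when chainDebug:
  proc dumpState() = echo "debug dump"
```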
* Update import from jacek