* fix: nimbus state ahead of era history
* comments
* fix: suggestions
* fix: messages
* fix edge case resume
* check from last file
* formatting
* fix: typo
* fix: unwanted quit before rlp import
* Make stop functions wait for completion before returning.
* Implement graceful shutdown.
* Shutdown rpc and metric servers if enabled.
* Move metrics and rpc servers out of PortalNode.
* batch database key writes during `computeKey` calls (see the sketch after this list)
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that directly can be loaded from the
vertex
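A minimal sketch of the batched key writes and progress logging from the first two bullets above; all names and types are illustrative, not the actual Aristo API:

```nim
import std/tables

# Hypothetical sketch: collect computed keys and write them to the backend
# in batches, logging progress when many keys are involved.
const
  BatchSize = 1024        # flush threshold (illustrative)
  LogEvery  = 100_000     # progress log interval (illustrative)

var
  backend: Table[uint64, array[32, byte]]    # stand-in for the database table
  pending: seq[(uint64, array[32, byte])]    # batched key writes
  written = 0

proc flush() =
  for item in pending:
    backend[item[0]] = item[1]
  pending.setLen(0)

proc putKey(vid: uint64; key: array[32, byte]) =
  pending.add((vid, key))
  if pending.len >= BatchSize:
    flush()
  inc written
  if written mod LogEvery == 0:
    echo "computeKey progress: ", written, " keys updated"
```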
* Enable state network by default. Create status log loops for the state and beacon networks and for the portal node. Implement stop functions.
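A minimal sketch of such a periodic status log loop; the real code uses the project's async framework (chronos) and structured logging, whereas this illustration uses `std/asyncdispatch` and `echo`:

```nim
import std/asyncdispatch

proc statusLogLoop(name: string; intervalMs: int) {.async.} =
  # Periodically emit a one-line status summary for a (sub)network.
  while true:
    echo "status: ", name, " network running"
    await sleepAsync(intervalMs)

when isMainModule:
  asyncCheck statusLogLoop("state", 60_000)
  asyncCheck statusLogLoop("beacon", 60_000)
  runForever()   # a real node would also wire up stop functions for shutdown
```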
* Add missing leaf cache update when a leaf turns to a branch with two
leaves (on merge) and vice versa (on delete) - this could lead to stale
leaves being returned from the cache causing validation failures - it
didn't happen because the leaf caches were not being used efficiently :)
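A hypothetical sketch of the cache rule being fixed here (plain strings stand in for the actual Aristo path and leaf types): when a path stops being a leaf on merge, or becomes one again on delete, its leaf cache entry must be dropped or refreshed.

```nim
import std/tables

var leafCache: Table[string, string]   # path -> leaf payload (stand-in types)

proc onLeafBecomesBranch(path: string) =
  # A merge turned this leaf into a branch: drop the now-stale cached leaf.
  leafCache.del(path)

proc onBranchCollapsesToLeaf(path, payload: string) =
  # After a delete only one leaf survives: make sure the cache sees it.
  leafCache[path] = payload
```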
* Replace `seq` with `ArrayBuf` in `Hike` allowing it to become
allocation-free - this PR also works around an inefficiency in nim in
returning large types via a `var` parameter
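A minimal sketch of the idea behind such a fixed-capacity buffer (not the actual `ArrayBuf` implementation): the elements live inline in an array, so growing and shrinking the hike never allocates.

```nim
type
  ArrayBufSketch[N: static int; T] = object
    data: array[N, T]   # storage lives inline, no heap allocation
    count: int

proc add[N: static int; T](b: var ArrayBufSketch[N, T]; item: T) =
  assert b.count < N, "capacity exceeded"
  b.data[b.count] = item
  inc b.count

proc `[]`[N: static int; T](b: ArrayBufSketch[N, T]; i: int): T =
  b.data[i]

var hikeLegs: ArrayBufSketch[64, int]   # e.g. one slot per trie level
hikeLegs.add 7
echo hikeLegs[0]
```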
* Use the leaf cache instead of `getVtxRc` to fetch recent leaves - this
makes the vertex cache more efficient at caching branches because fewer
leaf requests pass through it.
The storage leaf cache was being circumvented when actually fetching
leaves and was instead only being filled with items :/
Also avoids an expensive copy when fetching account data (broadly,
variant objects are comparatively expensive to copy and fetching
accounts is a hotspot)
* move pfx out of the variant part, which avoids pointless field type panic
checks and copies on access (see the sketch below)
* make `VertexRef` a non-inheritable object which reduces its memory
footprint and simplifies its use - it's also unclear from a semantic
point of view why inheritance makes sense for storing keys
* Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
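A hypothetical sketch (not the actual Aristo definitions) of the two vertex changes described above: the shared path prefix sits outside the variant part, so reading it involves no field discriminator check, and the object is plain (non-inheritable), which keeps it small.

```nim
type
  VertexType = enum Leaf, Branch

  # Plain object, no inheritance: no hidden type header, smaller footprint.
  VertexSketch = object
    pfx: seq[byte]                 # shared by all kinds, outside the variant
    case vType: VertexType
    of Leaf:
      payload: seq[byte]
    of Branch:
      children: array[16, uint64]

let v = VertexSketch(vType: Leaf, pfx: @[0x1'u8, 0x2], payload: @[0xff'u8])
echo v.pfx.len   # no discriminator check needed, whatever vType is
```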
* Improve state endpoint genesis test and cover cases when accounts, code and slots don't exist.
* Refactor state endpoints to support returning partial proofs.
* Implement getProofs in state endpoints.
* Add tests for getProofs and improve code.
* Implement eth_getProof JSON-RPC api in Fluffy.
- Add new content + content key functionality for header by number
- Remove EpochRecords from the network
- Add pruning call for the EpochRecords + required deprecated
functionality
- Adjust getBlock and getBlockHashByNumber to make use of the
new functionality instead
- Delete content_verifier as it was only verifying the now
deprecated EpochRecord
detail:
For practical reasons, if such an account (one without a storage trie) is
asked for a slot, an empty proof list is returned. It is up to the user to
provide an account proof that shows that there is no storage trie.
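For reference, the request shape of the standard `eth_getProof` endpoint (EIP-1186) that the item above implements, sketched with Nim's `std/json`; the address, slot key and block tag are placeholders.

```nim
import std/json

let request = %* {
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getProof",
  "params": [
    "0x0000000000000000000000000000000000000001",   # account address (placeholder)
    ["0x0000000000000000000000000000000000000000000000000000000000000000"],
    "latest"                                        # block number or tag
  ]
}
echo request.pretty()
# The response carries an account proof plus one storage proof per requested
# slot; per the note above, an account without a storage trie gets an empty
# storage proof list.
```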
* Reverse order in staged blob lists
why:
having the largest block number at the lowest header list index (`0`)
makes it easier to grow the list with parent headers, i.e. decreasing
block numbers.
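A small sketch of the ordering argument, using plain block numbers as stand-ins for headers: with the largest block number at index `0`, growing the list towards lower block numbers is a simple append.

```nim
var staged = @[1_000_000'u64]   # header at the top of the staged range

proc addParent(list: var seq[uint64]) =
  # the parent of the last entry simply goes at the end of the list
  list.add list[^1] - 1

for _ in 0 ..< 3:
  staged.addParent()
echo staged   # @[1000000, 999999, 999998, 999997]
```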
* Set a header response threshold for when to ditch a peer
* Refactor extension of staged header chains record
why:
It was cobbled together as a proof of concept after trying several
approaches to running the download.
* TODO update
* Make debugging code independent of `release` flag
* Update import from jacek
* Block header download starting at Beacon down to Era1
details:
The header download implementation is intended to be extended into a
full sync facility.
Downloaded block headers are stored in a `CoreDb` table. Later on they
should be fetched, complemented by a block body, executed/imported,
and deleted from the table.
The Era1 repository may be partial or missing. Era1 headers are neither
downloaded nor stored in the `CoreDb` table.
Headers are downloaded top down (largest block number first): one peer
follows the chain of block header hashes while other peers fetch headers
opportunistically by block number.
Observed download times for 14m `MainNet` headers vary between 30min
and 1h (Era1 size truncated to 66m blocks); a full download took 52min
(anecdotal). The number of peers downloading concurrently is crucial
here.
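A heavily simplified sketch of the top-down walk (all names hypothetical); it follows parent hashes downwards from the beacon head until the range covered by Era1 is reached, and leaves out the peers that fill in headers by block number.

```nim
type
  Hash32 = array[32, byte]
  Header = object
    number: uint64
    parentHash: Hash32

proc fetchHeaderByHash(hash: Hash32): Header =
  # placeholder: ask a peer for the header with this hash
  discard

proc downloadTopDown(head: Header; eraTop: uint64) =
  var h = head
  while h.number > eraTop + 1:   # stop where the Era1 repository takes over
    let parent = fetchHeaderByHash(h.parentHash)
    # a real implementation stores `parent` in the CoreDb staging table here
    h = parent
```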
* Activate `flare` by command line option
* Fix copyright year
* To save both memory and processing, we can move entries from one
savepoint to another, especially when the target is empty, as it often is
during transaction processing.
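A minimal sketch of the move-instead-of-copy idea, with plain tables standing in for the savepoint layers (the real layers carry more than a single table):

```nim
import std/tables

proc mergeInto(dst, src: var Table[string, seq[byte]]) =
  if dst.len == 0:
    swap(dst, src)      # O(1): hand the whole table over, no per-entry copies
  else:
    for k, v in src:
      dst[k] = v        # fall back to copying when the target already has data
    src.clear()
```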
* Store the radius in the database. This avoids always restarting the
node with a full radius, which causes the node to be bombarded with
offers that it later has to delete anyhow.
In order to implement this functionality, several changes were made, as
the radius needed to move from its current location in the Portal wire
protocol to the contentDB and beaconDB, which is conceptually more
correct anyhow.
The radius is now part of the database objects, and a handler is used in
the portal wire protocol to access its value.
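A small sketch of the handler wiring (names are illustrative and a plain integer stands in for the actual radius type): the wire protocol keeps a callback instead of owning the radius itself.

```nim
type
  RadiusHandler = proc(): uint64 {.closure.}
  ContentDbSketch = ref object
    radius: uint64   # lives with the database object

proc radiusHandler(db: ContentDbSketch): RadiusHandler =
  # The portal wire protocol calls this to read the current radius value.
  result = proc(): uint64 = db.radius

let db = ContentDbSketch(radius: high(uint64))
let getRadius = radiusHandler(db)
echo getRadius()
```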
* replace rocksdb row cache with larger rdb lru caches - these serve the
same purpose but are more efficient because they skip serialization,
locking and rocksdb layering
* don't append fresh items to the cache - this has the effect of evicting
the existing items and replacing them with low-value entries that might
never be read - during write-heavy periods of processing, the
newly-added entries were themselves evicted during the store loop (see
the sketch after this list)
* allow tuning rdb lru size at runtime
* add (hidden) option to print lru stats at exit (replacing the
compile-time flag)
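A minimal sketch of the write-path policy from the second bullet above, with a generic table standing in for the rdb LRU: a freshly written value only refreshes an entry that is already cached, so writes cannot evict entries that readers actually use.

```nim
import std/tables

var readCache: Table[string, seq[byte]]   # stand-in for the rdb LRU cache

proc onWrite(key: string; value: seq[byte]) =
  if key in readCache:
    readCache[key] = value   # keep an already-cached entry up to date
  # otherwise do not insert: a later read decides whether the key is worth caching
```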
pre:
```
INF 2024-09-03 15:07:01.136+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.675 tps=1870.397 mgps=176.819 avgBps=10.288
avgTps=1574.889 avgMGps=155.952 elapsed=19m26s458ms
```
post:
```
INF 2024-09-03 13:54:26.730+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.637 tps=1864.384 mgps=176.250 avgBps=11.202
avgTps=1714.920 avgMGps=169.818 elapsed=17m51s211ms
```
9%-ish import perf improvement on similar mem usage :)
* Cosmetics, spelling, etc.
* Aristo: make sure that a save cycle always commits even when empty
why:
If `Kvt` is tied to the `Aristo` DB save cycle, then this save cycle
must also be committed if there is no data to save for `Aristo`.
Otherwise this will lead to excessive core memory use under a fringe
condition where Eth headers (or blocks) are downloaded while syncing
but not really stored on disk.
* CoreDb: Correct persistent save mode
why:
Saving `Kvt` first is seen as a harbinger (or canary) for `Aristo`, as
both run in sync. If `Kvt` succeeds in saving first, then `Aristo` must
succeed next; anything else is a defect.
* Wiring ForkedChainRef to other components
- Disable the majority of hive simulators
- Only enable pyspec_sim for the moment
- The pyspec_sim is using a smaller RPC service wired to ForkedChainRef
- The RPC service will gradually grow
* Addressing PR review
* Fix test_beacon/setup_env
* Enable consensus_sim (#2441)
* Enable consensus_sim
* Remove isFile check
* Enable Engine API jwt auth tests and exchange cap tests
* Enable engine api in build_sim.sh
* Wire ForkedChainRef to Engine API newPayload
* Wire Engine API getBodies to ForkedChainRef
* Wire Engine API api_forkchoice to ForkedChainRef
* Wire more RPC methods to ForkedChainRef
* Implement eth_syncing
* Implement eth_call and eth_getlogs
* TxPool: simplify smartHead
* Fix smartHead usage
* Fix txpool headDiff
* Remove hasBlockHeader and use headerExists
* Addressing review