When walking AriVtx, parsing integers and nibbles actually becomes a
hotspot - these trivial changes reduce CPU usage during initial key
cache computation by ~15%.
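As a rough illustration of the kind of trivial change meant here (the helper below is hypothetical, not the actual aristo code), nibbles can be pulled out of a packed path with shifts and masks rather than via intermediate conversions or allocations:

```nim
# Hypothetical helper, for illustration only - not the actual aristo code.
proc nibble(path: openArray[byte], i: int): byte {.inline.} =
  ## Return the i-th nibble of a packed path, high nibble of each byte first.
  if (i and 1) == 0:
    path[i shr 1] shr 4
  else:
    path[i shr 1] and 0x0f
```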
Currently, computed hash keys are stored in a separate column family
with respect to the MPT data they're generated from - this has several
disadvantages:
* A lot of space is wasted because the lookup key (`RootedVertexID`) is
repeated in both tables - this is 30% of the `AriKey` content!
* rocksdb must maintain in-memory bloom filters and LRU caches for said
keys, doubling its "minimal efficient cache size"
* An extra disk traversal must be made to check for the existence of a
cached hash key
* Doubles the number of files on disk due to each column family being
its own set of files
Here, the two CFs are joined such that both key and data are stored in
`AriVtx` (a sketch of the joined layout follows the list below). This means:
* we save ~30% disk space on repeated lookup keys
* we save ~2gb of memory overhead that can be used to cache data instead
of indices
* we can skip storing hash keys for MPT leaf nodes - these are trivial
to compute and waste a lot of space - previously they had to be present
in the `AriKey` CF to avoid having to look in two tables on the happy path.
* There is a small increase in write amplification because when a hash
value is updated for a branch node, we must write both key and branch
data - previously we would write only the key
* There's a small shift in CPU usage - instead of performing lookups in
the database, hashes for leaf nodes are (re)-computed on the fly
* We can return to slightly smaller on-disk SST files since there are
fewer of them, which should reduce disk traffic a bit
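As a rough sketch of the joined layout (the types and names below are hypothetical stand-ins, not aristo's actual ones), the single `AriVtx` column family now conceptually stores one record like this per `RootedVertexID`:

```nim
import std/options

type
  RootedVertexIdSketch = tuple[root, vid: uint64] # stand-in for the real lookup key
  JoinedRecordSketch = object
    vertexBlob: seq[byte]            # serialized vertex (branch / extension / leaf)
    hashKey: Option[array[32, byte]] # cached hash key; omitted for leaves,
                                     # which are cheap to recompute on the fly
```

Storing the key next to the vertex means a single point lookup serves both, and leaving `hashKey` empty for leaves gives the space savings described above.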
Internally, there are also other advantages:
* when clearing keys, we no longer have to store a zero hash in memory -
instead, we deduce staleness of the cached key from the presence of an
updated VertexRef (see the sketch after this list) - this saves ~1gb of
memory overhead during import
* hash key cache becomes dedicated to branch keys since leaf keys are no
longer stored in memory, reducing churn
* key computation is a lot faster thanks to the skipped second disk
traversal - a key computation for mainnet can be completed in 11 hours
instead of ~2 days (!) thanks to better cache usage and less read
amplification - with additional improvements to the on-disk format, we
can probably get rid of the initial full traversal method of seeding the
key cache on first start after import
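A minimal sketch of the staleness rule mentioned above (names and structures are hypothetical, not aristo's layer types): instead of recording a zero hash, the mere presence of an updated vertex marks any cached key for that vertex as stale:

```nim
import std/[options, sets, tables]

type LayerSketch = object
  updatedVtx: HashSet[uint64]               # vertices rewritten in this layer
  cachedKey: Table[uint64, array[32, byte]] # keys cached by earlier computations

proc lookupKey(layer: LayerSketch, vid: uint64): Option[array[32, byte]] =
  ## A cached key is only trusted if no newer vertex shadows it.
  if vid in layer.updatedVtx:
    none(array[32, byte])          # stale - recompute from the updated vertex
  elif vid in layer.cachedKey:
    some(layer.cachedKey[vid])
  else:
    none(array[32, byte])
```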
All in all, this PR reduces the size of a mainnet database from 160gb to
110gb and the peak memory footprint during import by ~1-2gb.
This kind of data is not used except in tests, where it only serves to
create databases that don't match actual usage of aristo.
Removing it simplifies future optimizations that can focus on processing
specific leaf types more efficiently.
A casualty of this removal is some test code as well as some unused proof
generation code - on the surface, it looks like it should be possible to
port both of these to the more specific data types. Doing so would ensure
that a database written by one part of the codebase can interact with the
other - as it stands, there is confusion on this point since using the
proof generation code results in a database whose shape is incompatible
with the rest of eth1.
* switch to Nim v2.0.12
* fix LruCache capitalization for styleCheck
* KzgProof/KzgCommitment for styleCheck
* TxEip4844 for styleCheck
* styleCheck issues in nimbus/beacon/payload_conv.nim
* ENode for styleCheck
* isOk for styleCheck
* some more styleCheck fixes
* more styleCheck fixes
---------
Co-authored-by: jangko <jangko128@gmail.com>
* prefer the spec-derived name where possible
* don't pass stateRoot to LedgerRef and friends (it doesn't do anything)
* add deprecation warning in graphql - it needs updating to use
forkedchain instead
When `nimbus import` runs, we end up with a database without MPT roots,
leading to long startup times the first time one is needed.
Computing the state root is slow because the on-disk order based on
VertexID sorting does not match the trie traversal order, which makes
lookups inefficient.
Here we introduce a helper that speeds up this computation by traversing
the trie in on-disk order and computing the trie hashes bottom up
instead - even though this leads to some redundant reads of nodes whose
hashes we cannot yet compute, it's still a net win as leaves and "bottom"
branches make up the majority of the database.
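A toy sketch of the bottom-up idea (the data structures are hypothetical; the real helper works against the aristo database, not an in-memory table): vertices are visited in on-disk order, and a vertex gets its key as soon as all of its children have theirs, repeating passes until everything is covered:

```nim
import std/tables

type VtxSketch = object
  children: seq[uint64]        # empty for leaf vertices

proc computeKeysBottomUp(db: OrderedTable[uint64, VtxSketch]): Table[uint64, int] =
  ## Stand-in "keys" are just the pass in which a vertex became computable; the
  ## real code hashes the vertex payload together with its children's keys.
  ## Assumes a well-formed trie where every referenced child exists in `db`.
  var keys = initTable[uint64, int]()
  var pass = 0
  while keys.len < db.len:
    inc pass
    for vid, vtx in db.pairs:            # on-disk (VertexID) order, not trie order
      if vid in keys:
        continue
      var ready = true
      for child in vtx.children:
        if child notin keys:
          ready = false                  # redundant read: revisit in a later pass
          break
      if ready:
        keys[vid] = pass
  keys
```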
This PR also addresses a few other sources of inefficiency, largely due
to the separation of AriKey and AriVtx into their own column families.
Each column family is its own LSM tree that produces hundreds of SST
files - with a limit of 512 open files, rocksdb must keep closing and
opening files, which leads to expensive metadata reads during random
access.
Each rocksdb lookup may have to consult several levels of files. Ribbon
filters are used to skip over files that don't have the requested data,
but when these filters are not in memory, reading them is slow - this
happens in two cases: when opening a file and when the filter has been
evicted from the LRU cache. Addressing the open file limit solves one
source of inefficiency, but we must also increase the block cache size
to deal with this problem.
* rocksdb.max_open_files increased to 2048
* per-file size limits increased so that fewer files are created
* WAL size increased to avoid partial flushes which lead to small files
* rocksdb block cache increased
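A hedged sketch of what the bullets above correspond to in rocksdb terms - the option names are standard rocksdb settings, but apart from `max_open_files = 2048` (stated above) the concrete values here are illustrative, not the exact values used:

```nim
# Illustrative values only; names map to standard rocksdb options.
const rdbTuningSketch = (
  maxOpenFiles: 2048,                         # max_open_files (stated above)
  maxTotalWalSize: 1024'i64 * 1024 * 1024,    # larger WAL -> fewer partial flushes / tiny SSTs
  targetFileSizeBase: 256'i64 * 1024 * 1024,  # bigger per-file size limit -> fewer files
  blockCacheSize: 2'i64 * 1024 * 1024 * 1024  # bigger block cache keeps filters/indexes hot
)
```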
All these increases of course lead to increased memory usage, but at
least performance is acceptable - in the future, we'll need to explore
options such as joining AriVtx and AriKey and/or reducing the row count
(by grouping branch layers under a single vertexid).
With this PR, the mainnet state root can be computed in ~8 hours (down
from 2-3 days) - not great, but still better.
Further, we now write all keys to the database, including those that are
less than 32 bytes - because the MPT path is part of the input, it is very
rare that we actually hit a key like this (about 200k such entries on
mainnet), so the code complexity of special-casing them is not worth the
benefit in the current database layout / design.
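For context on "keys less than 32 bytes", the helper below is a generic illustration of the MPT rule, not aristo code: a node whose RLP encoding is shorter than 32 bytes is embedded verbatim in its parent instead of being referenced by a 32-byte hash, which is why such short keys exist at all:

```nim
# Generic MPT rule sketch; `hashIt` is a hypothetical hash callback.
proc nodeReferenceSketch(rlpNode: seq[byte],
                         hashIt: proc(b: seq[byte]): array[32, byte]): seq[byte] =
  if rlpNode.len < 32:
    rlpNode             # short node: embedded as-is, no separate hash key needed
  else:
    @(hashIt(rlpNode))  # regular node: referenced by its 32-byte hash
```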
* partial commit
* fixes
* remove converters too
* revert changes on nimbus_verified_proxy
* revert changes in converter
* revert changes(re-xport) in rpc_types
* update copyright year
* replace types in other binaries
* chain config bug
* fix rebase conflict incomplete buffer
* fix more rebase buffers
* remove ditto types and converters
* fix the tests
* update copyright year
* Dissolve legacy `sync/types.nim` into `*/eth/eth_types.nim`
* Flare sync: Simplify scheduler and remove `runSingle()` method
why:
`runSingle()` is not used anymore (its main purpose was negotiating
best headers in legacy full sync).
Also, `runMulti()` was renamed `runPeer()`
* Flare sync: Move `chain` field from `sync_desc` -> `worker_desc`
* Flare sync: Remove handler descriptor lateral reference
why:
Not used anymore. It made it possible to turn eth handler activity on/off
with regard to the sync state, i.e. from within the sync worker.
* Flare sync: Update `Hash256` and other deprecated `std/eth` symbols
* Protocols: Update `Hash256` and other deprecated `std/eth` symbols
* Eth handler: Update `Hash256` and other deprecated `std/eth` symbols
* Update flare TODO
* Remove redundant `sync/type` import
why:
The import module `type` has been removed
* Remove duplicate implementation
This is a minimal set of changes to make things work with the new types
in nim-eth - the PR merely resolves incompatibilities, while the full
change set would include more cleanup and migration.
* ForkedChainRef.forkchoice: Skip newBase calculation and skip chain finalization if finalizedHash is zero
* Fix ForkedChainRef.forkChoice: do nothing if headHash is the same as cursorHash
* Fix stupid bug in engine API FCU when calling ForkedChainRef.forkChoice
* Wire RPC server API to nimbus RPC manager
* Add test case
* Use default(Hash256) in ForkedChainRef
* init style for Hash256
https://github.com/status-im/nim-eth/pull/733 updates `Hash256` to
become an array instead of an object - unfortunately, nim does not allow
constructing arrays with `name()`, so this PR changes it to `default`
which works with both.
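A minimal sketch of the language point (the types here are stand-ins, not the real `Hash256` definition):

```nim
type
  ObjHash = object          # old-style: an object wrapping the bytes
    data: array[32, byte]
  ArrHash = array[32, byte] # new-style: a plain array

let a = ObjHash()        # object construction syntax works for object types
let b = default(ArrHash) # `ArrHash()` would not compile; `default` works here...
let c = default(ObjHash) # ...and also for the object variant, so it covers both
```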
* lint
* Add missing leaf cache update when a leaf turns into a branch with two
leaves (on merge) and vice versa (on delete) - this could lead to stale
leaves being returned from the cache causing validation failures - it
didn't happen because the leaf caches were not being used efficiently :)
* Replace `seq` with `ArrayBuf` in `Hike` allowing it to become
allocation-free - this PR also works around an inefficiency in nim in
returning large types via a `var` parameter
* Use the leaf cache instead of `getVtxRc` to fetch recent leaves - this
makes the vertex cache more efficient at caching branches because fewer
leaf requests pass through it.
* move pfx out of variant which avoids pointless field type panic checks
and copies on access
* make `VertexRef` a non-inheritable object which reduces its memory
footprint and simplifies its use - it's also unclear from a semantic
point of view why inheritance makes sense for storing keys
* replace rocksdb row cache with larger rdb lru caches - these serve the
same purpose but are more efficient because they skip serialization,
locking and rocksdb layering
* don't append fresh items to the cache - doing so evicts the existing
items and replaces them with low-value entries that might never be read -
during write-heavy periods of processing, the newly-added entries were
themselves evicted during the store loop (see the sketch after this list)
* allow tuning rdb lru size at runtime
* add (hidden) option to print lru stats at exit (replacing the
compile-time flag)
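A minimal sketch of the write-path policy from the list above (hypothetical cache type, LRU bookkeeping and eviction elided - not aristo's actual cache API):

```nim
import std/tables

type CacheSketch[K, V] = object
  entries: Table[K, V]

proc updateIfCached[K, V](c: var CacheSketch[K, V], key: K, val: V) =
  ## Write path: refresh an entry that is already cached, but never insert a
  ## fresh one - store loops would otherwise evict read-path entries.
  if key in c.entries:
    c.entries[key] = val

proc insertOnRead[K, V](c: var CacheSketch[K, V], key: K, val: V) =
  ## Read path: entries that were actually requested are worth caching
  ## (eviction elided in this sketch).
  c.entries[key] = val
```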
pre:
```
INF 2024-09-03 15:07:01.136+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.675 tps=1870.397 mgps=176.819 avgBps=10.288
avgTps=1574.889 avgMGps=155.952 elapsed=19m26s458ms
```
post:
```
INF 2024-09-03 13:54:26.730+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.637 tps=1864.384 mgps=176.250 avgBps=11.202
avgTps=1714.920 avgMGps=169.818 elapsed=17m51s211ms
```
9%-ish import perf improvement on similar mem usage :)
* Wiring ForkedChainRef to other components
- Disable majority of hive simulators
- Only enable pyspec_sim for the moment
- The pyspec_sim is using a smaller RPC service wired to ForkedChainRef
- The RPC service will gradually grow
* Addressing PR review
* Fix test_beacon/setup_env
* Enable consensus_sim (#2441)
* Enable consensus_sim
* Remove isFile check
* Enable Engine API jwt auth tests and exchange cap tests
* Enable engine api in build_sim.sh
* Wire ForkedChainRef to Engine API newPayload
* Wire Engine API getBodies to ForkedChainRef
* Wire Engine API api_forkchoice to ForkedChainRef
* Wire more RPC methods to ForkedChainRef
* Implement eth_syncing
* Implement eth_call and eth_getlogs
* TxPool: simplify smartHead
* Fix smartHead usage
* Fix txpool headDiff
* Remove hasBlockHeader and use headerExists
* Addressing review
* pre-allocate `blobify` data and remove redundant error handling
(cannot fail on correct data)
* use threadvar for temporary storage when decoding rdb, avoiding the
closure env (see the sketch below)
* speed up database walkers by avoiding many temporaries
~5% perf improvement on block import, 100x on database iteration (useful
for building analysis tooling)
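As a rough illustration of the threadvar trick mentioned above (hypothetical names, not the actual rdb decoder): a per-thread buffer can be reused across decode calls without capturing it in a closure environment or allocating a temporary per call:

```nim
var decodeBufSketch {.threadvar.}: seq[byte]  # one reusable buffer per thread

proc decodeRowSketch(raw: openArray[byte]): int =
  ## Reuse the thread-local buffer instead of allocating per call (the real
  ## decoder parses vertex data; here we just copy and report the byte count).
  decodeBufSketch.setLen(raw.len)
  if raw.len > 0:
    copyMem(addr decodeBufSketch[0], unsafeAddr raw[0], raw.len)
  decodeBufSketch.len
```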