This is a minimal set of changes to make things work with the new types
in nim-eth - this is the minimal PR that merely resolves
incompatibilities while the full change set would include more cleanup
and migration.
* init style for Hash256
https://github.com/status-im/nim-eth/pull/733 updates `Hash256` to
become an array instead of an object - unfortunately, nim does not allow
constructing arrays with `name()`, so this PR changes it to `default`
which works with both.
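As a sketch of the difference, with stand-in type names (not the actual nim-eth definitions):
```nim
type
  HashObj = object              # old shape: an object wrapping the bytes
    data: array[32, byte]
  HashArr = array[32, byte]     # new shape: a bare byte array

let viaObj = HashObj()          # `Name()` construction works for objects...
# let viaArr = HashArr()        # ...but does not compile for an array type
let zeroObj = default(HashObj)  # `default` yields the zero value for both
let zeroArr = default(HashArr)  # shapes, hence the switch at the call sites
echo zeroObj.data[0], " ", zeroArr[0], " ", viaObj.data.len
```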
* lint
* batch database key writes during `computeKey` calls
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that can be loaded directly from the
vertex
* Add missing leaf cache update when a leaf turns into a branch with two
leaves (on merge) and vice versa (on delete) - this could lead to stale
leaves being returned from the cache, causing validation failures - it
didn't happen in practice because the leaf caches were not being used
efficiently :)
* Replace `seq` with `ArrayBuf` in `Hike` allowing it to become
allocation-free - this PR also works around an inefficiency in nim in
returning large types via a `var` parameter
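Roughly, the idea behind `ArrayBuf` is a fixed-capacity buffer kept inline - a simplified sketch with assumed names and capacity:
```nim
const MaxHikeLen = 64            # assumed capacity, e.g. one entry per trie level

type
  HikeBuf = object               # storage lives inline: no heap allocation at all
    data: array[MaxHikeLen, uint64]
    len: int

proc add(h: var HikeBuf, leg: uint64) =
  assert h.len < MaxHikeLen, "capacity exceeded"
  h.data[h.len] = leg
  inc h.len

var hike: HikeBuf                # a `seq` would allocate on first use instead
hike.add 7'u64
```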
* Use the leaf cache instead of `getVtxRc` to fetch recent leaves - this
makes the vertex cache more efficient at caching branches because fewer
leaf requests pass through it.
The storage leaf cache was being circumvented when actually fetching
leaves and was instead only being filled with items :/
Also avoids an expensive copy when fetching account data (broadly,
variant objects are comparatively expensive to copy and fetching
accounts is a hotspot)
* move pfx out of variant which avoids pointless field type panic checks
and copies on access
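A sketch of why this helps, with assumed field names: in nim, reading a field that sits inside the `case` section of a variant object is guarded by a discriminator check, while a field outside it is not.
```nim
type
  VKind = enum
    Leaf, Branch
  VertexSketch = object
    pfx: seq[byte]                  # hoisted out: plain field, no kind check on access
    case kind: VKind
    of Leaf:
      payload: seq[byte]            # still guarded: reading this checks `kind` first
    of Branch:
      children: array[16, uint64]

proc prefixLen(v: VertexSketch): int =
  result = v.pfx.len                # no field-type defect possible here
```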
* make `VertexRef` a non-inheritable object which reduces its memory
footprint and simplifies its use - it's also unclear from a semantic
point of view why inheritance makes sense for storing keys
Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
detail:
For practical reasons, if such an account is asked for a slot, an empty
proof list is returned. It is up to the user to provide an account
proof that shows that there is no storage tree.
* replace rocksdb row cache with larger rdb lru caches - these serve the
same purpose but are more efficient because they skip serialization,
locking and rocksdb layering
* don't append fresh items to cache - this has the effect of evicting
the existing items and replacing them with low-value entries that might
never be read - during write-heavy periods of processing, the
newly-added entries were evicted during the store loop
* allow tuning rdb lru size at runtime
* add (hidden) option to print lru stats at exit (replacing the
compile-time flag)
pre:
```
INF 2024-09-03 15:07:01.136+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.675 tps=1870.397 mgps=176.819 avgBps=10.288
avgTps=1574.889 avgMGps=155.952 elapsed=19m26s458ms
```
post:
```
INF 2024-09-03 13:54:26.730+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.637 tps=1864.384 mgps=176.250 avgBps=11.202
avgTps=1714.920 avgMGps=169.818 elapsed=17m51s211ms
```
~9% import perf improvement on similar mem usage :)
* Cosmetics, spelling, etc.
* Aristo: make sure that a save cycle always commits even when empty
why:
If `Kvt` is tied to the `Aristo` DB save cycle, then this save cycle
must also be committed if there is no data to save for `Aristo`.
Otherwise this will lead to excessive core memory use under a fringe
condition where Eth headers (or blocks) are downloaded while syncing but
not really stored on disk.
* CoreDb: Correct persistent save mode
why:
Saving `Kvt` first serves as a harbinger (or canary) for `Aristo` as
both run in sync. If `Kvt` saves successfully first, then `Aristo` must
also save successfully next; anything else is a defect.
This is a first step towards measuring the efficiency of the LRU caches
over time - metrics can be collected during import or when running
regularly.
Since `nim-metrics` carries some overhead for its default way of
reporting metrics, this PR implements a custom collector over atomic
counters, given that this is one of the hottest spots in the block
processing pipeline.
Using a compile-time flag, the same metrics can be printed on exit which
is useful when comparing different strategies for caching - here's a
recent run over blocks 16000001-1616384 - this is a good candidate to
expose in a better way in the future, maybe:
```
state vtype miss hit total hitrate
Account Leaf 4909417 4466215 9375632 47.64%
Account Branch 20742574 72015123 92757697 77.64%
World Leaf 940483 1140946 2081429 54.82%
World Branch 8224151 131496580 139720731 94.11%
all all 34816625 209118864 243935489 85.73%
```
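A minimal sketch of the atomic-counter idea behind these numbers (assumed names, not the actual collector):
```nim
import std/atomics

type
  CacheStats = object
    hits, misses: Atomic[uint64]

proc recordHit(s: var CacheStats) =
  discard s.hits.fetchAdd(1'u64, moRelaxed)    # hot path: one relaxed atomic add

proc recordMiss(s: var CacheStats) =
  discard s.misses.fetchAdd(1'u64, moRelaxed)

proc hitRate(s: var CacheStats): float =
  let h = s.hits.load(moRelaxed)
  let m = s.misses.load(moRelaxed)
  result = if h + m == 0: 0.0 else: h.float / float(h + m)
```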
* pre-allocate `blobify` data and remove redundant error handling
(cannot fail on correct data)
* use threadvar for temporary storage when decoding rdb, avoiding
closure env (see the sketch after this list)
* speed up database walkers by avoiding many temporaries
~5% perf improvement on block import, 100x on database iteration (useful
for building analysis tooling)
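For the threadvar item above, a sketch of the pattern (assumed names): a per-thread scratch buffer is reused across calls, so no closure environment or per-call allocation is needed.
```nim
var scratch {.threadvar.}: seq[byte]   # one buffer per thread, reused across calls

proc decodeRow(raw: openArray[byte]): seq[byte] =
  if scratch.len < raw.len:
    scratch.setLen(raw.len)            # grows rarely, then stays allocated
  for i, b in raw:
    scratch[i] = b                     # stand-in for the real decoding work
  result = scratch[0 ..< raw.len]      # only the final result is copied out
```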
When the stack has an empty layer on top, there's no need to copy the
contents of `top` to it since it would be the same.
~13% processing saved (!)
pre
```
INF 2024-08-17 19:11:31.748+02:00 Imported blocks
blockNumber=18667648 blocks=12000 importedSlot=7860043 txs=1797812
mgas=181135.177 bps=8.763 tps=1375.062 mgps=132.125 avgBps=6.798
avgTps=1018.501 avgMGps=102.617 elapsed=29m25s154ms
```
post
```
INF 2024-08-17 18:22:52.513+02:00 Imported blocks
blockNumber=18667648 blocks=12000 importedSlot=7860043 txs=1797812
mgas=181135.177 bps=9.648 tps=1513.961 mgps=145.472 avgBps=7.876
avgTps=1179.998 avgMGps=118.888 elapsed=25m23s572ms
```
* Cleaning up, removing cruft and debugging statements
* Make `aristo_delta` fluffy compatible
why:
A sub-module that uses `chronicles` must import all possible
modules used by a parent module that imports the sub-module.
* update TODO
* Extract sub-tree deletion functions into separate sub-modules
* Move/rename `aristo_desc.accLruSize` => `aristo_constants.ACC_LRU_SIZE`
* Lazily delete sub-trees
why:
This gives some control of the memory used to keep the deleted vertices
in the cached layers. For larger sub-trees, keys and vertices might
reside on the persistent backend to a large extent, and deleting them
eagerly would pull that extra information from the backend into the
cached layer.
For lazy deletion it is enough to remember sub-trees by a small set of
(at most 16) sub-roots to be processed when storing persistent data.
Marking the tree root deleted immediately lets most of the code base
work as before.
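A simplified sketch of the lazy scheme (assumed type and field names):
```nim
import std/sets

type
  VertexID = uint64
  LayerSketch = object
    delRoots: HashSet[VertexID]   # at most a handful of sub-tree roots

proc deleteSubTree(layer: var LayerSketch, root: VertexID) =
  # O(1): only the sub-root is recorded, nothing is pulled from the backend
  layer.delRoots.incl root

proc persist(layer: var LayerSketch) =
  # the actual sweep happens once, when data is stored persistently
  for root in layer.delRoots:
    discard root                  # here the backend would drop everything under `root`
  layer.delRoots.clear()
```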
* Comments and cosmetics
* No need to import all for `Aristo` here
* Kludge to make `chronicle` usage in sub-modules work with `fluffy`
why:
The fact that `fluffy` would not run with any logging in `core_db` is a
problem I have known about for a while. Up to now, logging was only used
for debugging.
With the current `Aristo` PR, there are cases where logging might be
wanted but this works only if `chronicles` runs without the
`json[dynamic]` sinks.
So this should be re-visited.
* More of a kludge
why:
It is not safe in general to recycle vertex IDs while the `RocksDb`
cache has `VertexID` rather than `RootedVertexID` where the former
type seems preferable.
In some fringe cases one might remove a vertex with key `(root1,vid)`
and insert another vertex with key `(root2,vid)` while re-using the
vertex ID `vid`. Without knowledge of `root1` and `root2`, the LRU
cache will return the same vertex for `(root2,vid)` also for
`(root1,vid)`.
* Provide portal proof functions in `aristo_api`
why:
So it can be fully supported by `CoreDb`
* Fix prototype in `kvt_api`
* Fix node constructor for account leaves with storage trees
* Provide simple path check based on portal proof functionality
* Provide portal proof functionality in `CoreDb`
* Update TODO list
* Extracted `test_tx.testTxMergeProofAndKvpList()` => separate file
* Fix serialiser
why:
A typo led to duplicate rlp-encoded nodes in the chain
* Remove cruft
* Implement portal proof nodes generators `partXxxTwig()`
* Add unit test for portal proof nodes generator `partAccountTwig()`
* Cosmetics
* Simplify serialiser return code format
* Fix proof generator for extension nodes
why:
Code was simply bonkers, not detected before the unit tests were
adapted to check for just this.
* Implemented portal proof nodes verifier `partUntwig()`
* Cosmetics
* Fix `testutp` cli problem
* Implement partial trees
why:
This is currently needed for unit tests to pre-load the database
with test data similar to `proof` node pre-load.
The basic features for `snap-sync` boundary proofs are available
as well for future use. What is missing is the final proof verification
and a complete storage data load/merge function (stub is available.)
* Cosmetics, clean up
* Update config for Ledger and CoreDb
why:
Prepare for tracer which depends on the API jump table (as well as
the profiler.) The API jump table is now enabled in unit/integration
test mode piggybacking on the `unittest2DisableParamFiltering`
compiler flag or on an extra compiler flag `dbjapi_enabled`.
* No need for error field in `NodeRef`
why:
Was only needed by the proof nodes pre-loader which will be re-implemented
* Cosmetics
* Aristo: Merge `delta_siblings` module into `deltaPersistent()`
* Aristo: Add `isEmpty()` for canonical checking whether a layer is empty
* Aristo: Merge `LayerDeltaRef` into `LayerObj`
why:
No need to maintain nested object refs anymore. Previously the
`LayerDeltaRef` object had a companion `LayerFinalRef` which held
non-delta layer information.
* Kvt: Merge `LayerDeltaRef` into `LayerRef`
why:
No need to maintain nested object refs (as with `Aristo`)
* Kvt: Re-write balancer logic similar to `Aristo`
why:
Although `Kvt` was a cheap copy of `Aristo` it sort of got out of
sync and the balancer code was wrong.
* Update iterator over forked peers
why:
Yield an additional field `isLast` indicating that the last iteration
cycle has been reached.
* Optimise balancer calculation.
why:
One can often avoid providing a new object containing the merge of two
layers for the balancer. This avoids copying tables. In some cases this
is replaced by `hasKey()` lookups, though. One of the two layers is used
as the combination target and the other is merged into it.
Of course, this needs some checks to make sure that none of the
components being merged is possibly shared with something else.
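In table terms the optimisation looks roughly like this (illustrative only, not the actual balancer code): when one of the two layer tables is known not to be shared, it is reused as the merge target instead of allocating a third table.
```nim
import std/tables

type DeltaTab = Table[uint64, seq[byte]]

proc mergeInto(dst: var DeltaTab, src: DeltaTab) =
  # fold `src` into `dst` in place - no third table is allocated
  for k, v in src:
    dst[k] = v                       # entries from `src` win on conflict

var lower: DeltaTab = {1'u64: @[1'u8]}.toTable
let upper: DeltaTab = {1'u64: @[2'u8], 2'u64: @[3'u8]}.toTable
lower.mergeInto(upper)               # `lower` now holds the combined view
```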
* Fix copyright year
When lazily verifying state roots, we may end up with an entire state
whose roots have not yet been computed, meaning the computation covers
the whole database - in the current design, that would result in hashes
for the entire trie being held in memory.
Since the hash depends only on the data in the vertex, we can store it
directly at the top-most level derived from the vertices it depends on
- be that memory or database - this makes the memory usage broadly
linear with respect to the already-existing in-memory change set stored
in the layers.
It also ensures that if we have multiple forks in memory, hashes get
cached in the correct layer maximising reuse between forks.
The same layer numbering scheme as elsewhere is reused, where -2 is the
backend, -1 is the balancer and 0+ is the in-memory layer stack.
A downside of this approach is that we create many small batches - a
future improvement could be to collect all such writes in a single
batch, though the memory profile of this approach should be examined
first (where is the batch kept, exactly?).
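A sketch of the lookup order implied by that numbering (simplified - the real layers hold more than just keys, and the stack orientation here is assumed):
```nim
import std/[options, tables]

type
  HashKey = array[32, byte]
  LayerSketch = object
    keys: Table[uint64, HashKey]     # cached hashes, keyed by vertex id

proc lookupKey(stack: seq[LayerSketch]; balancer, backend: LayerSketch;
               vid: uint64): Option[HashKey] =
  # walk the in-memory stack layers first, then the balancer ("layer -1"),
  # finally the backend ("layer -2")
  for i in countdown(stack.high, 0):
    if vid in stack[i].keys:
      return some(stack[i].keys[vid])
  if vid in balancer.keys:
    return some(balancer.keys[vid])
  if vid in backend.keys:
    return some(backend.keys[vid])
  return none(HashKey)
```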
* Remove `chunkedMpt` from `persistent()`/`stow()` function
why:
Proof-mode code was removed with PR #2445 and needs to be re-designed.
* Remove unused `beStateRoot` argument from `deltaMerge()`
* Update/drastically simplify `txStow()`
why:
Got rid of many boundary conditions
details:
Many pre-conditions have changed. In particular, previous versions
used the account state (hash) which was conveniently available and
checked it against the backend in order to find out whether there
was something to do at all. Currently, the balancer update is only
skipped when all tables in the delta layer are empty.
Notable changes are:
* no check against account state (see above)
* balancer filters have no hash signature (some legacy stuff left over
from journals)
* no (snap sync) proof data which made the generation of a top layer
more complex
* Cosmetics, cruft removal
* Update unit test file & function name
why:
Was legacy module
* Remove cruft left-over from PR #2494
* TODO
* Update comments on `HashKey` type values
* Remove obsolete hash key conversion flag `forceRoot`
why:
Is treated implicitly by having vertex keys as `HashKey` type and
root vertex states converted to `Hash256`
* Imported/rebase from `no-ext`, PR #2485
Store extension nodes together with the branch
Extension nodes must be followed by a branch - as such, it makes sense
to store the two together both in the database and in memory:
* fewer reads, writes and updates to traverse the tree
* simpler logic for maintaining the node structure
* less space used, both memory and storage, because there are fewer
nodes overall
There is also a downside: hashes can no longer be cached for an
extension - instead, only the extension+branch hash can be cached - this
seems like a fine tradeoff since computing it should be fast.
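The combined node shape boils down to something like this (field names assumed):
```nim
type
  VertexID = uint64
  # before: a separate extension node pointed at a branch node
  # after: the branch itself carries the (possibly empty) shared prefix
  BranchSketch = object
    pfx: seq[byte]                   # nibble prefix; empty when there was no extension
    children: array[16, VertexID]    # child vertex ids, 0 meaning "no child"
```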
TODO: fix commented code
* Fix merge functions and `toNode()`
* Update `merkleSignCommit()` prototype
why:
Result is always a 32 byte hash
* Update short Merkle hash key generation
details:
Ethereum reference MPTs use Keccak hashes as node links if the size of
an RLP encoded node is at least 32 bytes. Otherwise, the RLP encoded
node value is used as a pseudo node link (rather than a hash.) This is
specified in the yellow paper, appendix D.
Unlike the `Aristo` implementation, the reference MPT would not store
such a node in the key-value database. Rather, the RLP encoded node
value is embedded directly in the parent node in place of a node link.
Only the top level (root) node is always referred to by its hash.
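The rule itself is small - a sketch, with the hash function passed in to keep the example self-contained:
```nim
proc nodeLink(rlpNode: seq[byte],
              keccak256: proc(data: seq[byte]): array[32, byte]): seq[byte] =
  ## Appendix D rule: an encoding shorter than 32 bytes is embedded directly
  ## in the parent node, anything longer is referenced by its Keccak hash.
  if rlpNode.len < 32:
    return rlpNode                   # pseudo node link: the RLP encoding itself
  return @(keccak256(rlpNode))       # otherwise a 32 byte hash link
```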
* Fix/update `Extension` sections
why:
Were commented out after removal of a dedicated `Extension` type which
left the system dysfunctional.
* Clean up unused error codes
* Update unit tests
* Update docu
---------
Co-authored-by: Jacek Sieka <jacek@status.im>
This PR adds a storage hike cache similar to the account hike cache
already present - this cache is less efficient because account storage
is already partially cached in the account ledger but nonetheless helps
keep hiking down.
Notably, there's an opportunity to optimise this cache and the others so
that they cooperate better instead of overlapping, which is left for a
future PR.
This PR also fixes an O(N) memory usage for storage slots where the
delete would keep the full storage in a work list which on mainnet can
grow very large - the work list is replaced with a more conventional
recursive `O(log N)` approach.
The Vertex type unifies branches, extensions and leaves into a single
memory area where the largest member is the branch (128 bytes + overhead) -
the payloads we have are all smaller than 128 bytes, thus wrapping them
in an extra layer of `ref` is wasteful from a memory usage perspective.
Further, the refs must be visited during the mark-and-sweep (M&S) phase
of garbage collection - since we keep millions of these, many of them
short-lived, this takes up significant CPU time.
```
Function CPU Time: Total CPU Time: Self Module Function (Full) Source File Start Address
system::markStackAndRegisters 10.0% 4.922s nimbus system::markStackAndRegisters(var<system::GcHeap>).constprop.0 gc.nim 0x701230
```
* Updates and corrections
* Extract `CoreDb` configuration from `base.nim` into separate module
why:
This makes it easier to avoid circular imports, in particular
when the capture journal (aka tracer) is revived.
* Extract `Ledger` configuration from `base.nim` into separate module
why:
This makes it easier to avoid circular imports (if any.)
also:
Move `accounts_ledger.nim` file to sub-folder `backend`. That way the
layout resembles that of the `core_db`.
hike allocations (and the garbage collection maintenance that follows)
are responsible for some 10% of cpu time (not wall time!) at this point
- this PR avoids them by stepping through the layers one step at a time,
simplifying the code at the same time.
* Rename `newKvt()` -> `ctx.getKvt()`
why:
Clean up legacy shortcut. Also, the `KVT` returned is not instantiated
but refers to the shared `KVT` that resides in a context which is a
generalisation of an in-memory database fork. The function `ctx`
retrieves the default context.
* Rename `newTransaction()` -> `ctx.newTransaction()`
why:
Clean up legacy shortcut. The transaction is applied to a context as a
generalisation of an in-memory database fork. The function `ctx`
retrieves the default context.
* Rename `getColumn(CtGeneric)` -> `getGeneric()`
why:
A list of well known sub-tries is no longer needed, a single one is
enough. In fact, `getColumn()` only supported a single sub-tree by now.
* Reduce TODO list
This trivial bump should improve performance a bit without costing too
much memory - as the trie grows, so does the number of levels in it and
creating hikes becomes ever more expensive - hopefully this cache
increase should give a nice little boost even if it's not a lot.
Introduce a new `StoData` payload type similar to `AccountData`
* slightly more efficient storage format
* typed api
* fewer seqs
* fix encoding docs - it wasn't rlp after all :)
The state and account MPTs currently share key space in the database,
and vertex ids are assigned essentially randomly, which means that when
two adjacent slot values from the same contract are accessed, they might
reside at a large distance from each other.
Here, we prefix each vertex id with its root, causing them to be sorted
together and thus bringing all data belonging to a particular contract
closer together - the same effect also happens for the main state MPT
whose nodes now end up clustered together more tightly.
In the future, the prefix given to the storage keys can also be used to
perform range operations such as reading all the storage at once and/or
deleting an account with a batch operation.
Notably, parts of the API already supported this rooting concept while
parts didn't - this PR makes the API consistent by always working with a
root+vid.
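What the root prefix buys can be sketched with an illustrative key layout (not the actual encoding):
```nim
type
  VertexID = uint64
  RootedVertexID = tuple[root: VertexID, vid: VertexID]

proc dbKey(rvid: RootedVertexID): array[16, byte] =
  # the root goes first (big endian), so all vertices under one root - e.g.
  # one contract's storage trie - sort next to each other in the key space
  for i in 0 ..< 8:
    result[i] = byte((rvid.root shr (8 * (7 - i))) and 0xff)
    result[8 + i] = byte((rvid.vid shr (8 * (7 - i))) and 0xff)
```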
The storage id is frequently accessed when executing contract code and
finding the path via the database requires several hops making the
process slow - here, we add a cache to keep the most recently used
account storage ids in memory.
A possible future improvement would be to cache all account accesses so
that for example updating the balance doesn't cause several hikes.
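A sketch of the cache (assumed names; the real cache is a bounded LRU rather than an unbounded table):
```nim
import std/tables

type
  AccountKey = array[32, byte]
  StorageID = uint64

var stoIdCache: Table[AccountKey, StorageID]

proc storageID(acc: AccountKey,
               resolve: proc(a: AccountKey): StorageID): StorageID =
  # fast path: repeated slot accesses for the same contract skip the trie walk
  if acc in stoIdCache:
    return stoIdCache[acc]
  result = resolve(acc)          # slow path: several hops through the database
  stoIdCache[acc] = result
```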