* batch database key writes during `computeKey` calls
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that can be loaded directly from the
vertex
* Implement partial trees
why:
This is currently needed for unit tests to pre-load the database
with test data similar to `proof` node pre-load.
The basic features for `snap-sync` boundary proofs are available
as well for future use. What is missing is the final proof verification
and a complete storage data load/merge function (a stub is available.)
* Cosmetics, clean up
When lazily verifying state roots, we may end up with an entire state
whose roots have not yet been computed for the whole database - in the
current design, that would result in hashes for the entire trie being
held in memory.
Since the hash depends only on the data in the vertex, we can store it
directly at the top-most level derived from the vertices it depends on
- be that memory or database - this makes the memory usage broadly
linear with respect to the already-existing in-memory change set stored
in the layers.
It also ensures that if we have multiple forks in memory, hashes get
cached in the correct layer maximising reuse between forks.
The same layer numbering scheme as elsewhere is reused, where -2 is the
backend, -1 is the balancer, then 0+ is the transaction stack, with 0
being the top (see the sketch below).
A downside of this approach is that we create many small batches - a
future improvement could be to collect all such writes in a single
batch, though the memory profile of this approach should be examined
first (where is the batch kept, exactly?).
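A minimal self-contained sketch of the lookup order this numbering
implies; all type and proc names here are stand-ins, not the actual
`Aristo` API:
```nim
import std/[options, tables]

type
  VertexID = uint64
  HashKey  = array[32, byte]
  Layer    = Table[VertexID, HashKey]

  Db = object
    stack: seq[Layer]    # layers 0+, last entry is the most recent
    balancer: Layer      # layer -1, staging area for the backend
    backend: Layer       # layer -2, stand-in for the persistent store

# The first hit wins while walking from the most recent layer down to
# the backend, so a hash computed on a fork is cached in that fork's
# own layer and entries in shared layers stay reusable between forks.
proc getKey(db: Db; vid: VertexID): Option[HashKey] =
  for i in countdown(db.stack.high, 0):
    if vid in db.stack[i]:
      return some(db.stack[i][vid])
  if vid in db.balancer:
    return some(db.balancer[vid])
  if vid in db.backend:
    return some(db.backend[vid])
  none(HashKey)
```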
The state and account MPTs currently share key space in the database,
and vertex IDs are assigned essentially randomly, which means that when
two adjacent slot values from the same contract are accessed, they
might reside at a large distance from each other.
Here, we prefix each vertex ID by its root, causing them to be sorted
together and thus bringing all data belonging to a particular contract
closer together - the same effect also applies to the main state MPT,
whose nodes now end up clustered more tightly (see the key-layout
sketch below).
In the future, the prefix given to the storage keys can also be used to
perform range operations such as reading all the storage at once and/or
deleting an account with a batch operation.
Notably, parts of the API already supported this rooting concept while
parts didn't - this PR makes the API consistent by always working with a
root+vid.
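As an illustration, a rooted key could be serialised along these lines
(a hypothetical layout assuming 64 bit IDs; the real encoding may
differ):
```nim
type
  VertexID = uint64
  RootedVertexID = tuple[root, vid: VertexID]

# Big-endian root ID first, so all vertices belonging to one root sort
# next to each other; the vertex ID breaks ties within that cluster.
proc toDbKey(rvid: RootedVertexID): array[16, byte] =
  for i in 0 .. 7:
    result[i]     = byte((rvid.root shr (56 - 8 * i)) and 0xff'u64)
    result[8 + i] = byte((rvid.vid  shr (56 - 8 * i)) and 0xff'u64)
```
With such a layout, a range scan over the 8 byte root prefix yields
exactly the data of one contract, which is what the future range
read/delete operations mentioned above would build on.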
* Remove all journal related stuff
* Refactor function names journal*() => delta*(), filter*() => delta*()
* remove `trg` field from `FilterRef`
why:
Same as `kMap[$1]`
* Re-type FilterRef.src as `HashKey`
why:
So it is directly comparable to `kMap[$1]`
* Moved `vGen[]` field from `LayerFinalRef` to `LayerDeltaRef`
why:
Then a separate `FilterRef` type is no longer needed
* Rename `roFilter` field in `AristoDbRef` => `balancer`
why:
New name more appropriate.
* Replace `FilterRef` by `LayerDeltaRef` type
why:
This avoids copying into the `balancer` (see next patch set)
most of the time. Typically, only one instance is running on the backend
and the `balancer` is only used as a stage before saving data.
* Refactor way how to store data persistently
why:
Avoid a useless copy when staging the `top` layer for persistent
saving to the backend.
* Fix copyright header
* Add persistent last state stamp feature
why:
This allows running `CoreDb` without a journal
* Start `CoreDb` without journal
* Remove journal related functions from `CoreDb`
* Aristo: Generalise alien/guest interface for piggybacking on the database
* Aristo: Code cosmetics
* CoreDb+Kvt: Update transaction API
why:
Use the single addressable function `forkTx(backLevel: int)` as used
in `Aristo`. So `Kvt` can be synced simultaneously to `Aristo`.
also:
Refactored `kvt_tx.nim` in a similar fashion to `Aristo`.
* Kvt: Replace `LayerDelta` object by reference
why:
Will be needed when introducing filters
* Kvt: Remodel backend filter facility similar to `Aristo`
why:
This allows operating on several KVT instances simultaneously.
* CoreDb+Kvt: Fix on-disk storage
why:
Overlooked name change: `stow()` => `persist()` for permanent storage
* Fix copyright headers
* Aristo: Code cosmetics, e.g. update some CamelCase names
* CoreDb+Aristo: Provide oldest known state root implied
details:
The Aristo journal allows recovering some earlier state roots, but not
all of them.
* Aristo: Fix journal backward index operator, e.g. `[^1]`
* Aristo: Fix journal updater
why:
The `fifosStore()` store function slightly misinterpreted the update
instructions when translating them to database `put()` functions. The
effect was that the journal was ever-growing due to stale entries which
were never deleted.
* CoreDb+Aristo: Provide utils for purging stale data from the KVT
details:
See earlier patch, not all state roots are available. This patch
provides a mapping from some state root to a block number and allows
removing all KVT data related to a particular block number
* Aristo+Kvt: Implement a clean up schedule for expired data in KVT
why:
For a single state ledger like `Aristo`, there is only a limited
backlog of states. So KVT data (e.g. headers) are cleaned up
regularly
* Fix copyright year
* Remove cruft
* Docu/code cosmetics
* Aristo: Update `forkBase()`
why:
Was not up to the job
* Update/correct tracer for running against `Aristo`
details:
This patch makes sure that before creating a new `BaseVMState` the
`CoreDb` context is adjusted to accommodate the state root that
is passed to the `BaseVMState` constructor.
* CoreDb+legacy: Always return current context with `ctxFromTx()`
why:
There was an experimental setting trying to find the node with the
proper setting in the KVT (not the hexary trie layer) which currently
does not work reliably, probably due to `Ledger` caching effects.
* Aristo: Reorg `hashify()` using different schedule algorithm
why:
Directly calculating the search tree top down from the roots turns
out to be faster than using the cached structures left over by `merge()`
and `delete()`.
The time gain is just short of 20%
* Aristo: Remove `lTab[]` leaf entry object type
why:
Not used anymore. It was previously needed to build the schedule for
`hashify()`.
* Aristo: Avoid unnecessary re-org of the vertex ID recycling list
why:
This list can become quite large, so a heuristic is employed to decide
whether a re-org makes sense.
Also, the re-org check is only done by `delete()` functions.
* Aristo: Remove key/reverse lookup table from tx layers
why:
It is ignored except for handling proof nodes and costs unnecessary
run-time resources.
This feature was originally needed to accommodate the mental transition
from the legacy MPT to the `Aristo` trie :).
* Fix copyright year
* CoreDb: update test suite
* Aristo: Simplify reverse key map
why:
The reverse key map `pAmk: (root,key) -> {vid,..}` has been simplified
to `pAmk: key -> {vid,..}` since the state `root` domain argument is no
longer used
* Aristo: Remove `HashLabel` object type and replace it by `HashKey`
why:
The `HashLabel` object attaches a root hash to a hash key. It is no
longer used anywhere.
* Fix copyright
* Aristo: Update unit test suite
* Aristo/Kvt: Fix iterators
why:
Generic iterators were not properly updated after the backend change
* Aristo: Add sub-trie deletion functionality
why:
For storage tries linked to an account payload vertex ID, the whole
storage trie needs to be deleted together with the account.
* Aristo: Reserve vertex ID numbers for static custom state roots
why:
Static custom state roots may be controlled by an application,
e.g. for a receipt or a transaction root. The `Aristo` functions
are agnostic of what the static state roots are when different
from the internal tree vertex ID 1.
details:
The `merge()` function applied to a non-static state root (assumed
to be a storage root) will check the payload of an account's leaf
and mark its Merkle keys to be re-checked.
* Aristo: Correct error code symbol
* Aristo: Update error code symbols
* Aristo: Code cosmetics/comments
* Aristo: Fix hashify schedule calculator
why:
Had a tendency to stop early leaving an incomplete job
* Fix kvt headers
* Provide differential layers for KVT transaction stack
why:
Significant performance improvement
* Provide abstraction layer for database top cache layer
why:
This will eventually be implemented as differential database or
transaction layers. The latter is needed to improve performance.
behavioural changes:
Zero vertices and keys (i.e. delete requests) are not optimised out
until the last layer is written to the database.
* Provide differential layers for Aristo transaction stack
why:
Significant performance improvement
* Register paths for added leaves because of trie re-balancing
why:
While the payload would not change, the prefix in the leaf vertex
would. So it needs to be flagged for hash recomputation by the
`hashify()` module.
also:
Make sure that `Hike` paths which might have vertex links into the
backend filter are replaced by vertex copies before manipulating.
Otherwise the vertices on the immutable filter might be involuntarily
changed.
* Also check for paths where the leaf vertex is already on the backend
why:
A path can have some vertices in the top layer cache with the
`Leaf` vertex on the backend.
* Re-define a void `HashLabel` type.
why:
A `HashLabel` type is a pair `(root-vertex-ID, Keccak-hash)`. Previously,
a valid `HashLabel` consisted of a non-empty hash and a non-zero vertex
ID. That definition led to a non-unique representation of a void
`HashLabel`, with either the root ID or the hash being void. This has
been changed so that a `HashLabel` is void exactly when its hash entry
is void, making the representation unique.
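A self-contained sketch of the new rule (stand-in types, not the
actual definitions):
```nim
type
  VertexID = uint64
  HashKey  = seq[byte]     # zero length encodes a void hash here
  HashLabel = object
    root: VertexID
    key: HashKey

# Validity now depends on the hash alone; the root ID no longer takes
# part, so the void label has a single canonical representation.
proc isValid(lbl: HashLabel): bool =
  lbl.key.len > 0
```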
* Update consistency checkers
* Re-org `hashify()` procedure
why:
Syncing against the block chain exposed serious deficiencies which
produced wrong hashes or simply bailed out with an error.
All fringe cases (mainly due to deleted entries) are now integrated
into the labelling schedule rather than handled separately.
* Fix debug noise in `hashify()` for perfectly normal situation
why:
Was previously considered a fixable error
* Fix test sample file names
why:
The larger test file `goerli68161.txt.gz` is already in the local
archive. So there is no need to use the smaller one from the external
repo.
* Activate `accounts_cache` module from `db/ledger`
why:
A copy of the original `accounts_cache.nim` source to be integrated
into the `Ledger` module wrapper, which allows switching between
different `accounts_cache` implementations under the same API.
details:
At a later stage, the `db/accounts_cache.nim` wrapper will be
removed so that there is only one access to that module via
`db/ledger/accounts_cache.nim`.
* Fix copyright headers in source code
* Aristo: Provide key-value list signature calculator
detail:
Simple wrappers around `Aristo` core functionality
* Update new API for `CoreDb`
details:
+ Renamed new API functions `contains()` => `hasKey()` or `hasPath()`
which disables the `in` operator on non-boolean `contains()` functions
+ The functions `get()` and `fetch()` always return a not-found error
if there is no item available. The new functions `getOrEmpty()` and
`mergeOrEmpty()` return an empty `Blob` if no such key is found.
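A rough usage sketch of the convention, with a plain table standing in
for the database (assumed behaviour, not the real `CoreDb` signatures):
```nim
import std/tables

type Blob = seq[byte]

var kvt: Table[Blob, Blob]        # stand-in for the key-value store
let key = @[byte 1, 2, 3]

# `hasKey()`/`hasPath()` keep existence checks boolean, so `in` works:
if key in kvt:
  discard kvt[key]

# `getOrEmpty()` style: a missing key yields an empty `Blob` instead of
# the not-found error that `get()`/`fetch()` would return.
let val = kvt.getOrDefault(key)   # @[] when `key` is absent
echo val.len
```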
* Rewrite `core_apps.nim` using new API from `CoreDb`
* Use `Aristo` functionality for calculating Merkle signatures
details:
For debugging, the `VerifyAristoForMerkleRootCalc` can be set so
that `Aristo` results will be verified against the legacy versions.
* Provide general interface for Merkle signing key-value tables
details:
Export `Aristo` wrappers
* Activate `CoreDb` tests
why:
Now, the API seems to be stable enough for general tests.
* Update `toHex()` usage
why:
Byteutils' `toHex()` is superior to `toSeq.mapIt(it.toHex(2)).join`
* Split `aristo_transcode` => `aristo_serialise` + `aristo_blobify`
why:
+ Different modules for different purposes
+ `aristo_serialise`: RLP encoding/decoding
+ `aristo_blobify`: Aristo database encoding/decoding
* Compacted representation of small nodes' links instead of Keccak hashes
why:
Ethereum MPTs use Keccak hashes as node links if the size of an RLP
encoded node is at least 32 bytes. Otherwise, the RLP encoded node
value is used as a pseudo node link (rather than a hash.) Such a node
is not stored in the key-value database. Rather, the RLP encoded node
value is stored in the parent node in place of a node link. Only for
the root hash is the top level node always referred to by its hash.
This feature needed an abstraction of the `HashKey` object, which is
now either a hash or a blob of length at most 31 bytes. This leaves two
ways of representing an empty/void `HashKey` type: either as an empty
blob of zero length, or as the hash of an empty blob.
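The abstraction could look roughly like this (an illustrative layout
only; the actual object may be packed differently):
```nim
type
  # A link is either a 32 byte Keccak hash or, for RLP encodings shorter
  # than 32 bytes, the encoding itself stored inline.
  HashKey = object
    len: int                 # 0..31: inline blob length, 32: hash
    buf: array[32, byte]

proc isHash(key: HashKey): bool =
  key.len == 32

# Fixing the void key as the zero length blob picks one of the two
# possible void representations mentioned above.
proc isVoid(key: HashKey): bool =
  key.len == 0
```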
* Update `CoreDb` interface (mainly reducing logger noise)
* Fix copyright years (to make `Lint` happy)
* Aristo: remove obsolete functions
* Aristo: Fix error code for non-available hash keys
why:
Must not return `not-found` when the key is not available (i.e. the
current changes have not been hashified yet.)
* CoreDB: Provide TDD and test framework
* Split `core_db/base.nim` into several sources
* Rename `core_db/legacy.nim` => `core_db/legacy_db.nim`
* Update `CoreDb` API, dual methods returning `Result[]` or plain value
detail:
Plain value methods implement the legacy API; they raise a defect on
error results
* Redesign `CoreDB` direct backend access
why:
Made the `backend` directive an integral part of the API
* Discontinue providing unused or otherwise available functions
details:
+ setTransactionID() removed, not used and not easily replicable in Aristo
+ maybeGet() removed, available via direct backend access
+ newPhk() removed, never used & was experimental anyway
* Update/reorg backend API
why:
+ Added error print function `$$()`
+ General descriptor completion (and optional validation) via `bless()`
* Update `Aristo`/`Kvt` exception handling
why:
Avoid `CatchableError` exceptions, rather pass them on as error codes
where appropriate.
* More `CoreDB` compliant `Aristo` and `Kvt` methods
details:
+ Providing functions like `contains()`, `getVtxRc()` (returns `Result[]`).
+ Additional error code: `NotImplemented`
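Assumed shape of such a `Result[]` returning getter, using the
`results` package (nim-results); types and the lookup table are
stand-ins:
```nim
import results

type
  VertexID  = uint64
  VertexRef = ref object
  AristoError = enum
    NothingSerious, GetVtxNotFound

# A missing vertex shows up as an error code rather than an exception,
# which the `CoreDb` wrappers can pass through directly.
proc getVtxRc(tab: seq[VertexRef]; vid: VertexID): Result[VertexRef, AristoError] =
  if vid.int < tab.len and not tab[vid.int].isNil:
    return ok(tab[vid.int])
  err(GetVtxNotFound)
```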
* Rewrite/reorg of Aristo DB constructor
why:
Previously used global object `DefaultQidLayoutRef` as default
initialiser. This object was created at compile time which led to
non-GC-safe functions.
* Update nimbus/db/core_db/legacy_db.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Update nimbus/db/aristo/aristo_transcode.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Update nimbus/db/core_db/legacy_db.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
---------
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Rename FilterID => QueueID
why:
The current usage does not identify a particular filter but uses it as
a storage tag to manage it on the database (to be organised in a set of
FIFOs or queues.)
* Split `aristo_filter` source into sub-files
why:
Make space for filter management API
* Store filter queue IDs in pairs on the backend
why:
Any pair will describe a FIFO accessed by bottom/top IDs
* Reorg some source file names
why:
The "aristo_" prefix for make local/private files is tedious to
use, so removed.
* Implement filter slot scheduler
details:
Filters will be stored in the database in cascaded FIFOs. When a FIFO
queue is full, some filter items are bundled together and moved to the
next FIFO.
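A toy version of the cascade (the capacity and the bundling rule here
are invented for illustration):
```nim
type
  Filter = object
    id: int
  Fifo = seq[Filter]   # index 0 = newest, last index = oldest

# When a queue overflows, bundle its two oldest filters into one and
# push the bundle down to the next FIFO; the last queue absorbs the rest.
proc push(fifos: var seq[Fifo]; fltr: Filter; cap = 4) =
  var item = fltr
  for i in 0 ..< fifos.len:
    fifos[i].insert(item, 0)
    if fifos[i].len <= cap or i == fifos.high:
      return
    let a = fifos[i].pop()
    let b = fifos[i].pop()
    item = Filter(id: min(a.id, b.id))
```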
* Remove concept of empty/blind filters
why:
Not needed. A non-existent filter is coded as a nil reference.
* Slightly generalised backend iterators
why:
* VertexID as key for the ID generator state makes no sense
* there will be more tables addressed by non-VertexID keys
* Store serialised/blobified vertices on memory backend
why:
This is more in line with the RocksDB backend so more appropriate
for testing when comparing behaviour. For a speedy memory database,
a backend-less variant should be used.
* Drop the `Aristo` prefix from names `AristoLayerRef`, etc.
* Suppress compiler warning
why:
duplicate imports
* Add filter serialisation transcoder
why:
Will be used as storage format
* Renamed type `NoneBackendRef` => `VoidBackendRef`
* Clarify names: `BE=filter+backend` and `UBE=backend (unfiltered)`
why:
Most functions used full names such as `getVtxUnfilteredBackend()` or
`getKeyBackend()`. After defining abbreviations (and its meaning) it
seems easier to use `getVtxUBE()` and `getKeyBE()`.
* Integrate `hashify()` process into transaction logic
why:
Is now transparent unless explicitly controlled.
details:
Cache changes imply setting a `dirty` flag which in turn triggers
`hashify()` processing in transaction and `pack()` directives.
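In sketch form (hypothetical stand-ins for the descriptor and procs):
```nim
type
  Db = ref object
    dirty: bool        # set by any cache-changing operation

proc hashify(db: Db) =
  # stand-in: recompute the Merkle keys for all pending changes
  db.dirty = false

# Committing (or packing) checks the flag, so callers never need to
# invoke `hashify()` themselves.
proc commit(db: Db) =
  if db.dirty:
    db.hashify()
```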
* Removed `aristo_tx.exec()` directive
why:
Inconsistent implementation, functionality will be provided with a
different paradigm.
* Provide deep copy for each transaction layer
why:
Localising changes. Selective deep copy was just overlooked.
* Generalise vertex ID generator state reorg function `vidReorg()`
why:
makes it somewhat easier to handle when saving layers.
* Provide dummy back end descriptor `NoneBackendRef`
* Optional read-only filter between backend and transaction cache
why:
Some staging area for accumulating changes to the backend DB. This
will eventually be an access layer for emulating a backend with
multiple/historic state roots.
* Re-factor `persistent()` with filter between backend/tx-cache => `stow()`
why:
The filter provides an abstraction from the physically stored data on
disk. So, there can be several MPT instances using the same disk data
with different state roots. Of course, all the MPT instances should
not differ too much for practical reasons :).
TODO:
Filter administration tools need to be provided.
* Slightly tighten some self-check conditions
* Redefined the database descriptor object as reference (to the object)
why:
The upcoming transaction wrapper will work with a database reference
rather than the object itself
* Append state before `save()` to the Aristo descriptor
why:
This state was previously returned by the function. Appending it to
a field of the Aristo descriptor seems easier to handle.
* Generalised Aristo DB constructor for any type of backend
details:
* Records to be deleted are represented as key-void (rather than
key-value) pairs by the put-function arguments
* Allow direct driver access, iterators as example implementation and
for testing.
* Provide backend storage interface
details:
Stores the top layer onto backend tables
* Implemented Rocks DB backend
details:
Transaction based `put()` functionality
Iterators (based on direct RocksDB access)
* Fix include
why:
Eth67 is not the default yet, so this got missed
* Rename `LeafKey` => `LeafTie`
why:
Name is a pen picture of what this object is for. Also, it avoids the
ubiquitous term `key`.
* Provided `getOrVoid()` wrapper for `getOrDefault()`
also:
Provide `isValid()` syntactic sugar for `.isNil.not`, `!= 0` etc.
(see the sketch below)
Reorg descriptor source, split into sub-sources
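A small sketch of the sugar for one concrete type (the real helpers
are presumably overloaded per table and reference type):
```nim
import std/tables

type VertexID = uint64

# `!= 0` spelled as readable sugar
proc isValid(vid: VertexID): bool =
  vid != 0

# `getOrDefault()` wrapped so a miss yields the type's void value,
# letting call sites read as `tab.getOrVoid(key).isValid`.
proc getOrVoid[K](tab: Table[K, VertexID]; key: K): VertexID =
  tab.getOrDefault(key, VertexID(0))
```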
* Bundled `NodeKey` objects with a root ID and called the result `HashLabel`
why:
`NodeKey` (aka repurposed `Hash256`) objects are unique only within a
particular sub-trie (e.g. storage slots) which are kept separated
(i.e. non-interleaved) by design. This does not apply to the backend,
as the map VertexID->NodeKey labelling the nodes need not be injective.
For the in-memory database (transaction) layers, the injective map
VertexID->(VertexID,NodeKey) is used, where the first field of the
image tuple is the root ID of the sub-trie for which the `NodeKey`
object is valid. So identical storage tries for different accounts can
be represented.
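The shape of that label, as a stand-in type sketch:
```nim
type
  VertexID = uint64
  NodeKey  = array[32, byte]   # repurposed 256 bit hash

  # The key is only meaningful together with the root of the sub-trie
  # it is valid for, which keeps identical storage tries of different
  # accounts distinguishable in the layer maps.
  HashLabel = object
    root: VertexID
    key: NodeKey
```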
* Exclude some storage tests
why:
These tests running on external dumps slipped through. The particular
dumps were reported earlier as somehow dodgy.
This was changed in `#1457` but having a second look, the change on
hexary_interpolate.nim(350) might be incorrect.
* Redesign `Aristo DB` descriptor for transaction based layers
why:
Previous descriptor layout made it cumbersome to push/pop
database delta layers.
The new architecture keeps each layer with the full delta set
relative to the database backend.
* Keep root ID as part of the `Patricia Trie` leaf path
why:
That way, forests are supported
* Keep vertex ID generator state with each db-layer
why:
The vertex ID generator state is part of the difference to the below
layer
* Move otherwise unused source to test directory
* Add Merkle hash generator
also:
* Verification facility for debugging
* Empty Merkle key hashes encoded as `EMPTY_ROOT_HASH`
details:
1. Merging a leaf vertex merges a `Patricia Trie` path (while
adding/modifying vertices) and adds a leaf node with payload
2. Merging a Merkle node merges a single vertex to the `Patricia Trie`
and registers Merkle hashes
3. Action 2 can be used before action 1 in order to construct a
Merkle proof as required for handling `snap/1` data.
4. Unit tests show that action 3 is benign for now :)
* Cosmetics, renamed fields (eVtx, bVtx) -> (eVid, bVid)
* Multilayered delta architecture for Aristo DB
details:
Any VertexID or data retrieval needs to go down the rabbit hole and
fetch/get/manipulate the bottom layer -- even without explicit
backend.
* Direct reference to backend from top-level layer
why:
Some services, such as the vid management, need to be synchronised
among all layers. So access is optimised.