* pre-allocate `blobify` data and remove redundant error handling
(cannot fail on correct data)
* use threadvar for temporary storage when decoding rdb, avoiding
closure env (see the sketch after this list)
* speed up database walkers by avoiding many temporaries
~5% perf improvement on block import, 100x on database iteration (useful
for building analysis tooling)
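The threadvar pattern referred to above looks roughly like the following
minimal sketch (names are illustrative, not the actual rdb decoder): a
per-thread scratch buffer is reused across calls instead of being captured
in a closure environment, so the hot decoding path stops allocating once
the buffer has grown to its working size.

```nim
var tmpBuf {.threadvar.}: seq[byte]    # per-thread scratch storage

proc decodeValue(data: openArray[byte]): int =
  tmpBuf.setLen(data.len)              # reuse thread-local storage
  for i, b in data:
    tmpBuf[i] = b                      # stand-in for the real decoding work
  for b in tmpBuf:
    result += int(b)

doAssert decodeValue([1'u8, 2, 3]) == 6
```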
The state and account MPTs currently share key space in the database, and
since vertex IDs are assigned essentially randomly, two adjacent slot
values from the same contract may end up stored far apart from each other.
Here, we prefix each vertex ID with its root, causing entries that share a
root to sort together and thus bringing all data belonging to a particular
contract closer together. The same effect applies to the main state MPT,
whose nodes now end up clustered together more tightly.
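A minimal sketch of the effect on key ordering (the key layout here is
hypothetical, not the exact Aristo encoding): with the root ID in front,
all keys belonging to one trie become adjacent in the backend's sorted key
space.

```nim
import std/[algorithm, strutils]

proc dbKey(root, vid: uint64): string =
  # hypothetical layout: big-endian root prefix followed by the vertex ID
  root.toHex(16) & vid.toHex(16)

var keys = @[dbKey(2, 7), dbKey(1, 9), dbKey(2, 3), dbKey(1, 4)]
keys.sort()                          # lexicographic, like RocksDB key order
doAssert keys == @[dbKey(1, 4), dbKey(1, 9), dbKey(2, 3), dbKey(2, 7)]
# All root-1 keys now precede all root-2 keys, i.e. a contract's storage
# entries (and the main state MPT's nodes) end up clustered together.
```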
In the future, the prefix given to the storage keys can also be used to
perform range operations such as reading all the storage at once and/or
deleting an account with a batch operation.
Notably, parts of the API already supported this rooting concept while
parts didn't - this PR makes the API consistent by always working with a
root+vid.
These representations use ~15-20% less data than the status quo, mainly by
removing redundant zeroes in the integer encodings (see the sketch below).
A significant effect of this change is that the various RocksDB caches
become more efficient since more items fit in the same amount of space.
* use RLP encoding for `VertexID` and `UInt256` wherever it appears
* pack `VertexRef`/`PayloadRef` more tightly
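A minimal sketch of where the saving comes from (general idea only, not the
full RLP encoder): integers are written big-endian with leading zero bytes
stripped, so small values such as typical vertex IDs need only a byte or
two instead of a fixed-width field.

```nim
proc trimmedBE(x: uint64): seq[byte] =
  var started = false
  for i in countdown(7, 0):
    let b = byte((x shr (8 * i)) and 0xff)
    if b != 0: started = true
    if started: result.add b
  # zero encodes as the empty byte string, as in RLP

doAssert trimmedBE(0x01'u64) == @[0x01'u8]
doAssert trimmedBE(0x0102'u64) == @[0x01'u8, 0x02'u8]
doAssert trimmedBE(0'u64).len == 0
```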
* Use RocksDb column families instead of a prefixed single column
why:
Better performance
* Use structural objects `VertexRef` and `HashKey` in LRU cache for RocksDb
why:
Avoids repeated de/serialisation
* Aristo+Kvt: Better RocksDB profiling
why:
Providing more detailed information, mainly for `Aristo`
* Aristo: Renamed journal `stats()` to `capacity()`
why:
`Stats()` was a misnomer
* Aristo: Provide backend read caches for key and vertex IDs
why:
Dedicated LRU caching for particular types gives a throughput advantage.
The sizes of the LRU queues used for caching are currently constant
but might be adjusted at a later time.
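A minimal fixed-capacity LRU sketch of what such a read cache does
(illustrative only, not the cache implementation used by Aristo): because
the already-decoded object is kept, a cache hit skips both the backend read
and the deserialisation step, which is also the point of the
`VertexRef`/`HashKey` cache change above.

```nim
import std/tables

type
  LruCache[K, V] = object
    capacity: int
    data: OrderedTable[K, V]       # insertion order doubles as LRU order

proc get[K, V](c: var LruCache[K, V], key: K, val: var V): bool =
  if key in c.data:
    val = c.data[key]
    c.data.del(key)                # re-insert to mark as most recently used
    c.data[key] = val
    result = true

proc put[K, V](c: var LruCache[K, V], key: K, val: V) =
  if key notin c.data and c.data.len >= c.capacity:
    var oldest: K
    for k in c.data.keys:          # first key is the least recently used
      oldest = k
      break
    c.data.del(oldest)
  c.data[key] = val

var vtxCache = LruCache[uint64, string](
  capacity: 2, data: initOrderedTable[uint64, string]())
vtxCache.put(1, "decoded vertex 1")
vtxCache.put(2, "decoded vertex 2")
vtxCache.put(3, "decoded vertex 3")  # evicts vertex 1
var v: string
doAssert not vtxCache.get(1, v)
doAssert vtxCache.get(3, v) and v == "decoded vertex 3"
```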
* Fix copyright year
* Aristo+RocksDB: Update backend drivers
why:
The RocksDB update allows using some of the newly provided methods which
were previously implemented directly against the C backend (for lack of
Nim methods).
* Aristo+RocksDB: Simplify drivers wrapper
* Kvt: Update backend drivers and wrappers similar to `Aristo`
* Aristo+Kvt: Use column families for RocksDB
* Aristo+MemoryDB: Code cosmetics
* Aristo: Provide guest column family for export
why:
So `Kvt` can piggyback on `Aristo`, avoiding the need to run a second
DBMS system in parallel.
* Kvt: Provide import mechanism for RocksDB guest column family
why:
So `Kvt` can piggyback on `Aristo`, avoiding the need to run a second
DBMS system in parallel.
* CoreDb+Aristo: Run persistent `Kvt` DB piggybacked on `Aristo`
why:
Avoids running two DBMS systems in parallel.
* Fix copyright year
* Ditto
* Fix debug noise in `hashify()` for a perfectly normal situation
why:
Was previously considered a fixable error
* Fix test sample file names
why:
The larger test file `goerli68161.txt.gz` is already in the local
archive. So there is no need to use the smaller one from the external
repo.
* Activate `accounts_cache` module from `db/ledger`
why:
A copy of the original `accounts_cache.nim` source to be integrated into
the `Ledger` module wrapper, which allows switching between different
`accounts_cache` implementations under the same API.
details:
At a later stage, the `db/accounts_cache.nim` wrapper will be removed so
that the only access to that module is via
`db/ledger/accounts_cache.nim`.
* Fix copyright headers in source code
* Rewrite remaining `AristoError` return code into `Result[void,AristoError]`
why:
Better code maintenance
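A minimal sketch of the before/after, assuming the usual nim-results
helpers (`ok`, `err`, `?`); the error values here are illustrative.

```nim
import results

type AristoError = enum
  NoError, SomeFailure             # illustrative values only

# Before: a bare error code, where one enum value doubles as "success".
proc doTaskOld(good: bool): AristoError =
  if good: NoError else: SomeFailure

# After: success and failure are explicit in the return type, and errors
# can be propagated with `?` instead of manual return-code checks.
proc doTask(good: bool): Result[void, AristoError] =
  if not good:
    return err(SomeFailure)
  ok()

proc caller(): Result[void, AristoError] =
  ? doTask(true)                   # early-returns the error, if any
  ok()

doAssert caller().isOk
```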
* Update import sections
* Update Aristo DB paths
why:
More systematic so directory can be shared with other DB types
* More cosmetics
* Update unit tests runners
why:
Proper handling of persistent and mem-only DB. The latter can be
consistently triggered by an empty DB path.
* Rename FilterID => QueueID
why:
The current usage does not identify a particular filter but rather uses
it as a storage tag for managing filters on the database (to be organised
in a set of FIFOs or queues).
* Split `aristo_filter` source into sub-files
why:
Make space for filter management API
* Store filter queue IDs in pairs on the backend
why:
Each pair describes a FIFO accessed via bottom/top IDs
* Reorg some source file names
why:
The "aristo_" prefix for make local/private files is tedious to
use, so removed.
* Implement filter slot scheduler
details:
Filters will be stored in the database in cascaded FIFOs. When a FIFO
queue is full, some filter items are bundled together and stored in the
next FIFO.
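A conceptual sketch of the cascading (capacities and the payload type are
illustrative, not the Aristo journal layout): each level is a bounded FIFO,
and on overflow the oldest entries of a level are bundled into a single
item and pushed onto the next level.

```nim
import std/deques

const levelCap = 4

proc pushFilter(levels: var seq[Deque[seq[int]]], item: int) =
  levels[0].addLast(@[item])
  for i in 0 ..< levels.len - 1:
    if levels[i].len > levelCap:
      var bundle = levels[i].popFirst()   # bundle the two oldest items ...
      bundle.add(levels[i].popFirst())
      levels[i + 1].addLast(bundle)       # ... and move them one level on

var levels = @[initDeque[seq[int]](), initDeque[seq[int]]()]
for n in 1 .. 12:
  levels.pushFilter(n)
doAssert levels[0].len <= levelCap        # the first level stays bounded
```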