24 Commits

Jacek Sieka
7b88bb3b30
Add branch cache (#2923)
Now that branches are small, we can add a branch cache that fits more
vertices in memory by only storing the branch portion (16 bytes) of the
VertexRef (136 bytes).

Where the original vertex cache hovers around a ~60% hit rate, this
branch cache instead reaches a >90% hit rate around block 20M, which
gives a nice boost to processing.

A downside of this approach is that a new VertexRef must be allocated
for every cache hit instead of reusing an existing instance - this
causes some GC overhead that needs to be addressed.

Nice 15% improvement nonetheless, can't complain!

```
blocks: 19630784, baseline: 161h18m38s, contender: 136h23m23s
Time (total): -24h55m14s, -15.45%
```
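
As a rough illustration of the idea (the stand-in types below are
illustrative, not the actual Aristo definitions), only a compact branch
record is cached and a full `VertexRef` is allocated on every hit, which
is where the GC overhead mentioned above comes from:

```nim
import std/tables

type
  VertexID = uint64
  BranchEntry = tuple[startVid: uint64, used: uint16]  # the small cached portion
  VertexRef = ref object                               # stand-in for the full vertex
    startVid: uint64
    used: uint16

var branchCache: Table[VertexID, BranchEntry]

proc cacheGet(vid: VertexID): VertexRef =
  if vid in branchCache:
    let e = branchCache[vid]
    # every hit allocates a fresh VertexRef from the compact entry
    result = VertexRef(startVid: e.startVid, used: e.used)

proc cachePut(vid: VertexID, vtx: VertexRef) =
  branchCache[vid] = (startVid: vtx.startVid, used: vtx.used)
```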
2024-12-11 11:53:26 +01:00
Jacek Sieka
01ca415721
Store keys together with node data (#2849)
Currently, computed hash keys are stored in a separate column family
from the MPT data they're generated from - this has several
disadvantages:

* A lot of space is wasted because the lookup key (`RootedVertexID`) is
repeated in both tables - this is 30% of the `AriKey` content!
* rocksdb must maintain in-memory bloom filters and LRU caches for said
keys, doubling its "minimal efficient cache size"
* An extra disk traversal must be made to check for the existence of a
cached hash key
* Doubles the amount of files on disk due to each column family being
its own set of files

Here, the two CFs are joined such that both key and data are stored in
`AriVtx`. This means:

* we save ~30% disk space on repeated lookup keys
* we save ~2gb of memory overhead that can be used to cache data instead
of indices
* we can skip storing hash keys for MPT leaf nodes - these are trivial
to compute and waste a lot of space - previously they had to be present
in the `AriKey` CF to avoid having to look in two tables on the happy path.
* There is a small increase in write amplification because when a hash
value is updated for a branch node, we must write both key and branch
data - previously we would write only the key
* There's a small shift in CPU usage - instead of performing lookups in
the database, hashes for leaf nodes are (re)-computed on the fly
* We can return to slightly smaller on-disk SST files since there are
fewer of them, which should reduce disk traffic a bit

Internally, there are also other advantages:

* when clearing keys, we no longer have to store a zero hash in memory -
instead, we deduce staleness of the cached key from the presence of an
updated VertexRef - this saves ~1gb of mem overhead during import
* hash key cache becomes dedicated to branch keys since leaf keys are no
longer stored in memory, reducing churn
* key computation is a lot faster thanks to the skipped second disk
traversal - a key computation for mainnet can be completed in 11 hours
instead of ~2 days (!) thanks to better cache usage and less read
amplification - with additional improvements to the on-disk format, we
can probably get rid of the initial full traversal method of seeding the
key cache on first start after import

All in all, this PR reduces the size of a mainnet database from 160gb to
110gb and the peak memory footprint during import by ~1-2gb.
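
A sketch of the joined record, assuming a simple length-prefixed layout
(the actual `AriVtx` encoding may differ): the cached hash key, when
present, lives in the same value as the serialised vertex, so a single
read returns both, and an absent key (e.g. for leaves) costs one length
byte:

```nim
type AriRecord = object
  key: seq[byte]      # empty when no hash key is cached
  vertex: seq[byte]   # blobified vertex data

proc encode(rec: AriRecord): seq[byte] =
  result = @[byte(rec.key.len)]       # length prefix for the key part
  result.add rec.key
  result.add rec.vertex

proc decode(data: openArray[byte]): AriRecord =
  let keyLen = int(data[0])
  result.key = @(data.toOpenArray(1, keyLen))
  result.vertex = @(data.toOpenArray(keyLen + 1, data.high))
```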
2024-11-20 09:56:27 +01:00
Jacek Sieka
188d689d9d
Speed up initial MPT root computation after import (#2788)
When `nimbus import` runs, we end up with a database without MPT roots
leading to long startup times the first time one is needed.

Computing the state root is slow because the on-disk order based on
VertexID sorting does not match the trie traversal order and therefore
makes lookups inefficient.

Here we introduce a helper that speeds up this computation by traversing
the trie in on-disk order and computing the trie hashes bottom up
instead - even though this leads to some redundant reads of nodes that
we cannot yet compute, it's still a net win as leaves and "bottom"
branches make up the majority of the database.

This PR also addresses a few other sources of inefficiency largely due
to the separation of AriKey and AriVtx into their own column families.

Each column family is its own LSM tree that produces hundreds of SST
files - with a limit of 512 open files, rocksdb must keep closing and
opening files, which leads to expensive metadata reads during random
access.

When rocksdb makes a lookup, it has to read several layers of files for
each lookup. Ribbon filters are used to skip over files that don't have
the requested data, but when these filters are not in memory, reading
them is slow - this happens in two cases: when opening a file and when
the filter has been evicted from the LRU cache. Addressing the open file
limit solves one source of inefficiency, but we must also increase the
block cache size to deal with this problem.

* rocksdb.max_open_files increased to 2048
* per-file size limits increased so that fewer files are created
* WAL size increased to avoid partial flushes which lead to small files
* rocksdb block cache increased

All these increases of course lead to increased memory usage, but at
least performance is acceptable - in the future, we'll need to explore
options such as joining AriVtx and AriKey and/or reducing the row count
(by grouping branch layers under a single vertexid).

With this PR, the mainnet state root can be computed in ~8 hours (down
from 2-3 days) - not great, but still better.

Further, we now write all keys to the database, including those that are
shorter than 32 bytes - because the MPT path is part of the input, it is
very rare that we actually hit such a key (about 200k entries on
mainnet), so the code complexity of special-casing them is not worth the
benefit in the current database layout / design.
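
To make the idea concrete, here is a self-contained toy of the bottom-up
pass, with the real trie replaced by a flat id -> node table (all names
and types here are illustrative): hash whatever is computable from
already-known children during a single linear scan, then finish the
leftovers with the usual recursive descent:

```nim
import std/tables

type Node = object
  children: seq[int]   # empty for leaves
  payload: int

var
  nodes: Table[int, Node]
  keys: Table[int, int]   # toy stand-in for the computed hash keys

proc mix(payload: int, childKeys: seq[int]): int =
  result = payload
  for k in childKeys:
    result = result * 31 + k

proc computeRecursive(id: int): int =
  if id in keys:
    return keys[id]
  var ck: seq[int]
  for c in nodes[id].children:
    ck.add computeRecursive(c)
  result = mix(nodes[id].payload, ck)
  keys[id] = result

proc computeBottomUp() =
  var pending: seq[int]
  for id, node in nodes:              # single linear scan ("on-disk" order)
    var ck: seq[int]
    var ready = true
    for c in node.children:
      if c in keys:
        ck.add keys[c]
      else:
        ready = false
        break
    if ready:
      keys[id] = mix(node.payload, ck)
    else:
      pending.add id                  # redundant read, resolved in pass two
  for id in pending:
    discard computeRecursive(id)
```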
2024-10-27 11:08:37 +00:00
Jacek Sieka
b4b4d16729
speed up key computation (#2642)
* batch database key writes during `computeKey` calls (see the sketch below)
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that can be loaded directly from the
vertex
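
A minimal sketch of the write batching from the first bullet (the flush
callback and batch size are assumptions, not the Aristo API): computed
keys are collected in memory and written to the backend in one batch
instead of one write per key:

```nim
import std/tables

const batchSize = 16384

type KeyBatch = object
  flush: proc(items: Table[uint64, array[32, byte]])
  pending: Table[uint64, array[32, byte]]

proc putKey(b: var KeyBatch, vid: uint64, key: array[32, byte]) =
  b.pending[vid] = key
  if b.pending.len >= batchSize:
    b.flush(b.pending)       # one backend write covering many keys
    b.pending.clear()
```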
2024-09-20 07:43:53 +02:00
Jacek Sieka
5c1e2e7d3b
Migrate keyed_queue to minilru (#2608)
Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
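
For orientation only, a bare-bones LRU sketch on top of `OrderedTable`
(this is not `minilru`'s implementation or API): each key is stored once
in the table, which is the kind of single-copy layout that avoids the
redundancy mentioned above:

```nim
import std/tables

type Lru[K, V] = object
  cap: int
  data: OrderedTable[K, V]   # insertion order doubles as recency order

proc get[K, V](c: var Lru[K, V], k: K): V =
  result = c.data[k]
  c.data.del k               # re-insert to mark as most recently used
  c.data[k] = result

proc put[K, V](c: var Lru[K, V], k: K, v: V) =
  if k notin c.data and c.data.len >= c.cap:
    var oldest: K
    for key in c.data.keys:  # first key is the least recently used
      oldest = key
      break
    c.data.del oldest
  c.data[k] = v
```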
2024-09-13 15:47:50 +02:00
Jacek Sieka
d39c589ec3
lru cache updates (#2590)
* replace rocksdb row cache with larger rdb lru caches - these serve the
same purpose but are more efficient because they skip serialization,
locking and rocksdb layering
* don't append fresh items to cache (see the sketch after this list) -
this has the effect of evicting the existing items and replacing them
with low-value entries that might never be read - during write-heavy
periods of processing, the newly-added entries were evicted during the
store loop
* allow tuning rdb lru size at runtime
* add (hidden) option to print lru stats at exit (replacing the
compile-time flag)
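
A toy of the "don't append on write" policy from the bullet above
(illustrative names, not the actual cache code): freshly written rows
only refresh an entry that is already cached, so read-hot entries are not
evicted by the write-heavy store loop:

```nim
import std/tables

var readCache: Table[uint64, seq[byte]]

proc onRead(vid: uint64, data: seq[byte]) =
  readCache[vid] = data            # reads populate the cache

proc onWrite(vid: uint64, data: seq[byte]) =
  if vid in readCache:
    readCache[vid] = data          # keep an already-cached copy coherent
  # otherwise skip: a freshly stored row may never be read again
```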

pre:
```
INF 2024-09-03 15:07:01.136+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.675 tps=1870.397 mgps=176.819 avgBps=10.288
avgTps=1574.889 avgMGps=155.952 elapsed=19m26s458ms
```

post:
```
INF 2024-09-03 13:54:26.730+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.637 tps=1864.384 mgps=176.250 avgBps=11.202
avgTps=1714.920 avgMGps=169.818 elapsed=17m51s211ms
```

9%:ish import perf improvement on similar mem usage :)
2024-09-05 11:18:32 +02:00
Jacek Sieka
ef1bab0802
avoid some trivial memory allocations (#2587)
* pre-allocate `blobify` data and remove redundant error handling
(cannot fail on correct data)
* use threadvar for temporary storage when decoding rdb, avoiding
closure env (sketched below)
* speed up database walkers by avoiding many temporaries

~5% perf improvement on block import, 100x on database iteration (useful
for building analysis tooling)
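
Sketch of the threadvar pattern from the list above - the callback-based
lookup shape is an assumption rather than the actual kvstore API: because
the callback writes into a per-thread variable instead of a captured
local, no closure environment needs to be allocated per call:

```nim
var getBuf {.threadvar.}: seq[byte]

proc rawGet(key: int, onData: proc(data: openArray[byte])): bool =
  let stored = @[byte 1, 2, 3]     # stand-in for the backend read
  onData(stored)
  true

proc get(key: int): seq[byte] =
  proc onData(data: openArray[byte]) =
    getBuf = @data                 # writes to the threadvar, captures nothing
  if rawGet(key, onData):
    result = getBuf
```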
2024-09-02 16:03:10 +02:00
Jacek Sieka
81e75622cf
storage: store root id together with vid, for better locality of refe… (#2449)
The state and account MPTs currently share key space in the database,
and vertex IDs are assigned essentially randomly, which means
that when two adjacent slot values from the same contract are accessed,
they might reside at a large distance from each other.

Here, we prefix each vertex ID with its root, causing them to be sorted
together and thus bringing all data belonging to a particular contract
closer together - the same effect also applies to the main state MPT,
whose nodes now end up clustered together more tightly.

In the future, the prefix given to the storage keys can also be used to
perform range operations such as reading all the storage at once and/or
deleting an account with a batch operation.

Notably, parts of the API already supported this rooting concept while
parts didn't - this PR makes the API consistent by always working with a
root+vid.
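
An illustrative key encoding only (the real on-disk format may differ):
putting the root id in the most significant position of the lookup key
makes all vertices belonging to one root sort next to each other:

```nim
type RootedVertexID = tuple[root: uint64, vid: uint64]

proc toDbKey(rv: RootedVertexID): array[16, byte] =
  for i in 0 ..< 8:
    result[i] = byte((rv.root shr (56 - i * 8)) and 0xff)     # big-endian root prefix
    result[i + 8] = byte((rv.vid shr (56 - i * 8)) and 0xff)  # then the vertex id
```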
2024-07-04 15:46:52 +02:00
Jacek Sieka
c364426422
Smaller in-database representations (#2436)
These representations use ~15-20% less data compared to the status quo,
mainly by removing redundant zeroes in the integer encodings - a
significant effect of this change is that the various rocksdb caches see
better efficiency since more items fit in the same amount of space.

* use RLP encoding for `VertexID` and `UInt256` wherever it appears
* pack `VertexRef`/`PayloadRef` more tightly
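
A toy version of the "drop redundant zeroes" idea (not the actual RLP or
blobify format): a small `VertexID` is stored in as few bytes as needed
rather than a fixed eight-byte field:

```nim
proc trimmed(v: uint64): seq[byte] =
  var x = v
  while x > 0:
    result.insert(byte(x and 0xff), 0)   # big-endian, no leading zeroes
    x = x shr 8

proc untrimmed(data: openArray[byte]): uint64 =
  for b in data:
    result = (result shl 8) or uint64(b)
```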
2024-07-02 20:25:06 +02:00
web3-developer
ea94e8a351
Use RocksDb column family handles instead of name strings. (#2418)
* Bump RocksDb to latest and update Nimbus database to pass column family handles to RocksDb API.

* Bump RocksDb version.
2024-06-27 16:51:43 +08:00
Jacek Sieka
3e001e322c
Fix memory usage spikes during sync, give memory to rocksdb (#2413)
* creating a seq from a table that holds lots of changes means copying
all of that data out of the table - this can be several GB of data while
syncing blocks
* nim fails to optimize the moving of the `WidthFirstForest` - the real
solution is to not construct a `wff` to begin with, but this PR provides
relief while that is being worked on

This spike fix allows us to bump the rocksdb cache by another 2 GB and
still have a significantly lower peak memory usage during sync.
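
A toy contrast of the two shapes (an assumed simplification of the actual
code path): materialising the table into a seq duplicates every element,
while iterating the table directly avoids the second copy:

```nim
import std/[tables, sequtils]

var changes: Table[uint64, seq[byte]]

proc persistCopying() =
  let all = toSeq(changes.values)   # duplicates several GB worth of changes
  for v in all:
    discard v.len                   # stand-in for the actual write

proc persistInPlace() =
  for v in changes.values:          # no intermediate seq
    discard v.len
```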
2024-06-25 13:39:53 +02:00
Jordan Hrycaj
5a5cc6295e
Triggered write event for kvt (#2351)
* bump rocksdb

* Rename `KVT` objects related to filters according to `Aristo` naming

details:
  filter* => delta*
  roFilter => balancer

* Compulsory error handling if `persistent()` fails

* Add return code to `reCentre()`

why:
  Might eventually fail if re-centring is blocked. Some logic will be
  added in subsequent patch sets.

* Add column families from earlier session to rocksdb in opening procedure

why:
  All previously used CFs must be declared when re-opening an existing
  database.

* Update `init()` and add rocksdb `reinit()` methods for changing parameters

why:
  Opening a set of column families (with different open options) must span
  at least the ones that are already on disk.

* Provide write-trigger-event interface into `Aristo` backend

why:
  This allows data from a guest application (think `KVT`) to be saved in
  sync with the write cycle so that the guest and `Aristo` persist
  atomically.

* Use `KVT` with new column family interface from `Aristo`

* Remove obsolete guest interface

* Implement `KVT` piggyback on `Aristo` backend

* CoreDb: Add separate `KVT`/`Aristo` backend mode for debugging

* Remove `rocks_db` import from `persist()` function

why:
  Some systems (in particular `fluffy` and friends) use the `Aristo` memory
  backend emulation and do not link against rocksdb when building the
  application. So this should fix that problem.
2024-06-13 18:15:11 +00:00
Jordan Hrycaj
a347291413
Aristo use rocksdb cf instead of key pfx (#2332)
* Use RocksDb column families instead of a prefixed single column

why:
  Better performance

* Use structural objects `VertexRef` and `HashKey` in LRU cache for RocksDb

why:
  Avoids repeated de/serialisation
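
A tiny illustration of the second point (toy types, not the real cache):
caching the decoded object itself means a hit skips de/serialisation
entirely, whereas caching raw bytes would decode on every hit:

```nim
import std/tables

type VertexRef = ref object
  data: string

var decoded: Table[uint64, VertexRef]   # structural objects, not blobs

proc getVtx(vid: uint64, fetch: proc(vid: uint64): string): VertexRef =
  if vid in decoded:
    return decoded[vid]                 # no de/serialisation on a hit
  result = VertexRef(data: fetch(vid))  # decode once, reuse thereafter
  decoded[vid] = result
```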
2024-06-10 12:04:22 +00:00
Jordan Hrycaj
b9187e0493
Aristo selective read caching for rocksdb backend (#2145)
* Aristo+Kvt: Better RocksDB profiling

why:
  Providing more detailed information, mainly for `Aristo`

* Aristo: Renamed journal `stats()` to `capacity()`

why:
  `Stats()` was a misnomer

* Aristo: Provide backend read caches for key and vertex IDs

why:
  Dedicated LRU caching for particular types gives a throughput advantage.
  The sizes of the LRU queues used for caching are currently constant
  but might be adjusted at a later time.

* Fix copyright year
2024-04-22 19:02:22 +00:00
Jordan Hrycaj
e8eb3268f5
Generalise prune mode option 4 different db models (#2139)
* Update README

* Nimbus-main: replaced `PruneMode` options by `ChainDbMode` options

details:
  For the legacy database, this changes the phrase
  - `conf.pruneMode == PruneMode.Full` to the expression
  + `conf.chainDbMode == ChainDbMode.Prune`.

* Fix issues moaned about by NIM compiler

* Fix copyright year
2024-04-17 18:09:55 +00:00
Jordan Hrycaj
d6a4205324
Aristo update rocksdb backend drivers (#2135)
* Aristo+RocksDB: Update backend drivers

why:
  The RocksDB update allows using some of the newly provided methods which
  were previously implemented by going directly through the C backend (for
  lack of NIM methods.)

* Aristo+RocksDB: Simplify drivers wrapper

* Kvt: Update backend drivers and wrappers similar to `Aristo`

* Aristo+Kvt: Use column families for RocksDB

* Aristo+MemoryDB: Code cosmetics

* Aristo: Provide guest column family for export

why:
  So `Kvt` can piggyback on `Aristo`, thereby avoiding having to run a
  second DBMS system in parallel.

* Kvt: Provide import mechanism for RocksDB guest column family

why:
  So `Kvt` can piggyback on `Aristo`, thereby avoiding having to run a
  second DBMS system in parallel.

* CoreDb+Aristo: Run persistent `Kvt` DB piggybacked on `Aristo`

why:
  Avoids running two DBMS systems in parallel.

* Fix copyright year

* Ditto
2024-04-16 20:39:11 +00:00
web3-developer
11691c33e9
Update Nimbus codebase to use the new nim-rocksdb API. (#2054)
* Bump nim-rocksdb.

* Update codebase to use latest nim-rocksdb API.

* Update copyright notices.

* Fix memory leak due to allocCStringArray without deallocCStringArray.

* Improve kvstore_rocksdb code.

* Refactor and cleanup RocksStoreRef.

* Update nim-rocksdb submodule to latest.
2024-03-05 12:54:42 +08:00
Jordan Hrycaj
5462c05dc6
Core db update api tracking (#1907)
* Fix copyright year

* Show elapsed times with enabled `CoreDb` API tracking

* Show elapsed times with enabled `LedgerRef` API tracking

* Reorg `CoreDb` auto destructors for `Aristo` DB

why:
  While `Aristo` supports some parallelism for concurrent database access,
  this comes at the price of management overhead. With a naive approach,
  the auto-destructor will slow down execution because the ledger and
  evm treat the database in a shared mode where a DB descriptor is just
  created and thrown away shortly after.

  This is reflected in the `Coredb` abstraction layer above `Aristo`/`Kvt`
  where a few `Shared` type descriptors are cached and a shared reference
  is returned rather than a disposable new object.

* For `CoreDb` support transaction level tracking

details:
  This is mainly an extra for the legacy DB as `Aristo` and `Kvt` support
  this already.

  Also return an error on the legacy DB backend when `persistent()` is
  called while there are transactions pending (the `persistent()` call
  does nothing otherwise on the legacy backend.)

* Clear compiler warnings (remove unused variables etc.)
2023-11-24 22:16:21 +00:00
Jordan Hrycaj
610e2d338d
Core db fix legacy db root vertex fetcher (#1899)
* Using different `tmp` directories for `Kvt` and `Aristo`

why:
  Closing one database would leave the other set of directories
  incomplete.

* Code cosmetics, silence compiler

* Fix typo `EMPTY_ROOT_HASH` vs. `EMPTY_CODE_HASH`

* Fix copyright years
2023-11-20 20:22:27 +00:00
Jordan Hrycaj
6e0397e276
Aristo and ledger small updates (#1888)
* Fix debug noise in `hashify()` for a perfectly normal situation

why:
  Was previously considered a fixable error

* Fix test sample file names

why:
  The larger test file `goerli68161.txt.gz` is already in the local
  archive. So there is no need to use the smaller one from the external
  repo.

* Activate `accounts_cache` module from `db/ledger`

why:
  A copy of the original `accounts_cache.nim` source to be integrated
  into the `Ledger` module wrapper, which allows switching between
  different `accounts_cache` implementations under the same API.

details:
  At a later state, the `db/accounts_cache.nim` wrapper will be
  removed so that there is only one access to that module via
  `db/ledger/accounts_cache.nim`.

* Fix copyright headers in source code
2023-11-08 16:52:25 +00:00
Jordan Hrycaj
6d132811ba
Core db update providing additional results code interface (#1776)
* Split `core_db/base.nim` into several sources

* Rename `core_db/legacy.nim` => `core_db/legacy_db.nim`

* Update `CoreDb` API, dual methods returning `Result[]` or plain value

detail:
  Plain value methods implement the legacy API; they raise a Defect on error results
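
A hedged sketch of the dual-method idea (assuming the nim-results
`Result` type; the error values and names here are illustrative): the
`Result`-returning method is the primary one, and the plain-value variant
simply unwraps it, raising a Defect on error:

```nim
import results

type AristoError = enum
  NothingSerious
  GetLeafNotFound

proc getRc(key: int): Result[string, AristoError] =
  ## CoreDB-style API returning Result[]
  if key == 0:
    return err(GetLeafNotFound)
  ok("value " & $key)

proc get(key: int): string =
  ## legacy-style API: defects on error instead of returning an error code
  getRc(key).expect("entry should exist")
```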

* Redesign `CoreDB` direct backend access

why:
  Made the `backend` directive an integral part of the API

* Discontinue providing unused or otherwise available functions

details:
+ setTransactionID() removed, not used and not easily replicable in Aristo
+ maybeGet() removed, available via direct backend access
+ newPhk() removed, never used & was experimental anyway

* Update/reorg backend API

why:
+ Added error print function `$$()`
+ General descriptor completion (and optional validation) via `bless()`

* Update `Aristo`/`Kvt` exception handling

why:
  Avoid `CatchableError` exceptions; rather, pass them as error codes where
  appropriate.

* More `CoreDB` compliant `Aristo` and `Kvt` methods

details:
+ Providing functions like `contains()`, `getVtxRc()` (returns `Result[]`).
+ Additional error code: `NotImplemented`

* Rewrite/reorg of Aristo DB constructor

why:
  Previously used the global object `DefaultQidLayoutRef` as the default
  initialiser. This object was created at compile time, which led to
  non-GC-safe functions.

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/aristo/aristo_transcode.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

---------

Co-authored-by: Kim De Mey <kim.demey@gmail.com>
2023-09-26 10:21:13 +01:00
Jordan Hrycaj
8e00143313
Aristo db code massage n cosmetics (#1745)
* Rewrite remaining `AristoError` return code into `Result[void,AristoError]`

why:
  Better code maintenance

* Update import sections

* Update Aristo DB paths

why:
  More systematic, so the directory can be shared with other DB types

* More cosmetics

* Update unit tests runners

why:
  Proper handling of persistent and mem-only DB. The latter can be
  consistently triggered by an empty DB path.
2023-09-12 19:45:12 +01:00
Jordan Hrycaj
8e46953390
Aristo db state root repos and reorg (#1744)
* Reorg of distributed backend access

details:
  Now handled via API provided in `aristo_desc`.

* Rename `checkCache()` => `checkTop()`

why:
  Better naming for top layer cache checker

also:
  Provide cascaded fifos checker

* Provide `eq` directive for finding filter by exact filter ID (think block number)

* Some code beautification (for better code reading)

* State root reposition and reorg

details:
  Repositioning is supported by forking a new descriptor. Reorg is then
  accomplished by writing this forked state on the backend database.
2023-09-11 21:38:49 +01:00
Jordan Hrycaj
465d694834
Aristo db implement filter storage scheduler (#1713)
* Rename FilterID => QueueID

why:
  The current usage does not identify a particular filter but uses it as a
  storage tag to manage it on the database (to be organised in a set of
  FIFOs or queues.)

* Split `aristo_filter` source into sub-files

why:
  Make space for filter management API

* Store filter queue IDs in pairs on the backend

why:
  Any pair will describe a FIFO accessed by bottom/top IDs

* Reorg some source file names

why:
  The "aristo_" prefix for make local/private files is tedious to
  use, so removed.

* Implement filter slot scheduler

details:
  Filters will be stored on the database on cascaded FIFOs. When a FIFO
  queue is full, some filter items are bundled together and stored on the
  next FIFO.
2023-08-25 23:53:59 +01:00