When `nimbus import` runs, we end up with a database without MPT roots, leading to long startup times the first time one is needed. Computing the state root is slow because the on-disk order based on VertexID sorting does not match the trie traversal order, which makes lookups inefficient. Here we introduce a helper that speeds up this computation by traversing the trie in on-disk order and computing the trie hashes bottom-up instead - even though this leads to some redundant reads of nodes whose hashes cannot yet be computed, it's still a net win, as leaves and "bottom" branches make up the majority of the database (the first sketch below illustrates the idea).

This PR also addresses a few other sources of inefficiency, largely due to the separation of AriKey and AriVtx into their own column families. Each column family is its own LSM tree that produces hundreds of SST files - with a limit of 512 open files, rocksdb must keep closing and opening files, which leads to expensive metadata reads during random access. Each lookup has to consult several layers of SST files; Ribbon filters allow skipping files that don't have the requested data, but when these filters are not in memory, reading them is slow. This happens in two cases: when opening a file, and when the filter has been evicted from the LRU cache. Addressing the open file limit solves one source of inefficiency, but we must also increase the block cache size to deal with this problem (the second sketch below shows the settings in rocksdb terms):

* rocksdb.max_open_files increased to 2048
* per-file size limits increased so that fewer files are created
* WAL size increased to avoid partial flushes which lead to small files
* rocksdb block cache increased

All these increases of course lead to increased memory usage, but at least performance is acceptable - in the future, we'll need to explore options such as joining AriVtx and AriKey and/or reducing the row count (by grouping branch layers under a single VertexID).

With this PR, the mainnet state root can be computed in ~8 hours (down from 2-3 days) - not great, but still better.

Further, we write all keys to the database, including those that are less than 32 bytes - because the MPT path is part of the input, it is very rare that we actually hit a key like this (about 200k such entries on mainnet), so the code complexity is not really worth the benefit in the current database layout / design.
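To make the bottom-up idea concrete, here is a minimal, self-contained sketch. All names (`VertexID`, `Node`, `hashNode`, `computeHashesBottomUp`) are hypothetical stand-ins invented for illustration, the hash function is a placeholder rather than the real MPT node encoding plus keccak256, and the actual helper works against the database iterator rather than an in-memory map:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

using VertexID = std::uint64_t;                 // hypothetical
using Hash256  = std::array<std::uint8_t, 32>;  // hypothetical

struct Node {
    std::vector<VertexID> children;  // empty for leaf nodes
};

// Placeholder standing in for encoding the node and hashing it.
static Hash256 hashNode(const Node& n, const std::vector<Hash256>& kids) {
    Hash256 h{};
    h[0] = static_cast<std::uint8_t>(n.children.size());
    for (const auto& k : kids)
        for (std::size_t i = 0; i < h.size(); ++i)
            h[i] ^= k[i];
    return h;
}

// Sweep the nodes repeatedly in on-disk (VertexID) order. Leaves hash on
// the first pass; a branch hashes as soon as all of its children have
// been resolved by an earlier pass. Nodes whose children are not ready
// yet are read again on a later sweep - redundant work, but a net win
// because each sweep is sequential on disk and leaves plus "bottom"
// branches dominate the database.
std::map<VertexID, Hash256>
computeHashesBottomUp(const std::map<VertexID, Node>& nodes) {
    std::map<VertexID, Hash256> hashes;
    bool progress = true;
    while (progress && hashes.size() < nodes.size()) {
        progress = false;
        for (const auto& [vid, node] : nodes) {
            if (hashes.count(vid) > 0) continue;  // already computed
            std::vector<Hash256> kids;
            bool ready = true;
            for (VertexID child : node.children) {
                auto it = hashes.find(child);
                if (it == hashes.end()) { ready = false; break; }
                kids.push_back(it->second);
            }
            if (ready) {
                hashes[vid] = hashNode(node, kids);
                progress = true;
            }
        }
    }
    return hashes;
}
```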
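On the tuning side, the list above maps onto rocksdb options roughly as follows. This is a hedged sketch against the plain RocksDB C++ API (nimbus drives rocksdb through nim-rocksdb instead), and apart from `max_open_files = 2048` the concrete numbers are illustrative placeholders, not the values chosen in this PR:

```cpp
#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

rocksdb::Options makeTunedOptions() {
    rocksdb::Options opts;
    // Keep more SST files open so random access does not repeatedly pay
    // for close/open cycles and the table metadata reads they trigger.
    opts.max_open_files = 2048;
    // Larger per-file size targets -> fewer, bigger SST files.
    opts.target_file_size_base = 256 << 20;       // illustrative: 256 MiB
    opts.max_bytes_for_level_base = 1ull << 30;   // illustrative: 1 GiB
    // A larger WAL bound avoids premature memtable flushes, which would
    // otherwise produce many small files.
    opts.max_total_wal_size = 4ull << 30;         // illustrative: 4 GiB

    rocksdb::BlockBasedTableOptions table;
    // A bigger block cache keeps filter and index blocks resident, so a
    // lookup rarely has to re-read a Ribbon filter from disk.
    table.block_cache = rocksdb::NewLRUCache(8ull << 30);  // illustrative
    table.cache_index_and_filter_blocks = true;
    table.filter_policy.reset(rocksdb::NewRibbonFilterPolicy(9.9));
    opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
    return opts;
}
```

The trade-off, as noted above, is that every one of these knobs buys speed with memory.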
Nimbus-eth1 -- Ethereum execution layer database architecture
Last update: 2024-03-08
The following diagram gives a simplified view of how the components relate with regard to data storage management.
An arrow between components a and b (as in a->b) is meant to be read as: a relies directly on b, or a is served by b. To classify the functional type of a component in the diagram below, the abstraction type is enclosed in brackets after the component name.
- (application)
  This is a group of software modules at the top level of the hierarchy. In the diagram below, the EVM is used as an example; another application might be the RPC service.
- (API)
  The API classification is used for a thin software layer hiding a set of different drivers, where only one driver is active for the same API instance. It serves as a sort of logical switch.
- (concentrator)
  The concentrator merges several sub-module instances and provides their collected services as a single unified instance. There is not much additional logic implemented besides what the sub-modules provide.
- (driver)
  The driver instances are sort of the lower-layer workhorses. They implement the logic for solving a particular problem, providing a typically well-defined service, etc.
- (engine)
  This is a bottom-level driver in the diagram below.

                        +-------------------+
                        | EVM (application) |
                        +-------------------+
                              |         |
                              v         |
        +-----------------------------+ |
        |  State DB (concentrator)   | |
        +-----------------------------+ |
              |                 |       |
              v                 |       |
    +------------------------+  |       |
    |      Ledger (API)      |  |       |
    +------------------------+  |       |
          |             |       |       |
          v             |       |       |
    +--------------+    |       |       |
    | ledger cache |    |       |       |
    |   (driver)   |    |       |       |
    +--------------+    |       |       |
            |           |       |       |
            v           |       |       |
    +----------------+  |       |       |
    |     Common     |  |       |       |
    | (concentrator) |  |       |       |
    +----------------+  |       |       |
            |           |       |       |
            v           v       v       v
    +---------------------------------------+
    |             Core DB (API)             |
    +---------------------------------------+
                        |
                        v
    +---------------------------------------+
    |    Aristo DB (driver,concentrator)    |
    +---------------------------------------+
           |                      |
           v                      v
    +--------------+  +---------------------+
    | Kvt (driver) |  | Aristo MPT (driver) |
    +--------------+  +---------------------+
           |                      |
           v                      v
    +---------------------------------------+
    |           Rocks DB (engine)           |
    +---------------------------------------+
Here is a list of path references for the components, with some explanation. The sources for the components are not always complete but indicate the main locations where to start looking.
- Aristo DB (driver,concentrator)
  - Sources:
    ./nimbus/db/core_db/backend/aristo_*
  - Synopsis:
    Combines both the Kvt and the Aristo driver sub-modules, providing an interface similar to the legacy DB (concentrator) module.

- Aristo MPT (driver)
  - Sources:
    ./nimbus/db/aristo*
  - Synopsis:
    Revamped implementation of a hexary Merkle Patricia Tree.

- Common (concentrator)
  - Sources:
    ./nimbus/common*
  - Synopsis:
    Collected information for running blockchain execution layer applications.

- Core DB (API)
  - Sources:
    ./nimbus/db/core_db*
  - Synopsis:
    Database abstraction layer. Except for legacy applications, there should be no need to reach out to the layers below.

- EVM (application)
  - Sources:
    ./nimbus/core/executor/*
    ./nimbus/evm/*
  - Synopsis:
    An implementation of the Ethereum Virtual Machine.

- Hexary DB (driver)
  - Sources:
    ./vendor/nim-eth/eth/trie/hexary.nim
  - Synopsis:
    Implementation of an MPT (see compact Merkle Patricia Tree).

- Key-value table (driver)
  - Sources:
    ./vendor/nim-eth/eth/trie/db.nim
  - Synopsis:
    Key-value table interface to be used directly for key-value storage or by the Hexary DB (driver) module for storage. Some magic is applied in order to treat hexary data accordingly (based on key length).

- Kvt (driver)
  - Sources:
    ./nimbus/db/kvt*
  - Synopsis:
    Key-value table interface for the Aristo DB (driver) module. Contrary to the Key-value table (driver), it is not used for MPT data.

- Ledger (API)
  - Sources:
    ./nimbus/db/ledger*
  - Synopsis:
    Abstraction layer for either the legacy accounts cache (driver) (which works with the legacy DB (driver) backend only) or the ledger cache (driver) re-write, which is supposed to work with all Core DB (API) backends.

- ledger cache (driver)
  - Sources:
    ./nimbus/db/ledger/accounts_ledger.nim
    ./nimbus/db/ledger/backend/accounts_ledger*
    ./nimbus/db/ledger/distinct_ledgers.nim
  - Synopsis:
    Management of accounts and storage data. This is a re-write of the legacy DB (driver) which is supposed to work with all Core DB (API) backends.

- legacy DB (driver)
  - Sources:
    ./nimbus/db/core_db/backend/legacy_*
  - Synopsis:
    Legacy database abstraction. It mostly forwards requests directly to the Key-value table (driver) and/or the Hexary DB (driver).

- Rocks DB (engine)
  - Sources:
    ./vendor/nim-rocksdb/*
  - Synopsis:
    Persistent storage engine.

- State DB (concentrator)
  - Sources:
    ./nimbus/evm/state.nim
    ./nimbus/evm/types.nim
  - Synopsis:
    Integrated collection of modules and methods relevant for the EVM.