nimbus-eth1/scripts
Jacek Sieka 01ca415721
Store keys together with node data (#2849)
Currently, computed hash keys are stored in a column family separate
from the MPT data they're generated from - this has several
disadvantages:

* A lot of space is wasted because the lookup key (`RootedVertexID`) is
repeated in both tables - this is 30% of the `AriKey` content!
* rocksdb must maintain in-memory bloom filters and LRU caches for said
keys, doubling its "minimal efficient cache size"
* An extra disk traversal must be made to check for the existence of a
cached hash key
* Doubles the number of files on disk, since each column family maintains
its own set of files

Here, the two CFs are joined such that both key and data are stored in
`AriVtx` (a sketch of the layout change follows the list below). This means:

* we save ~30% disk space on repeated lookup keys
* we save ~2gb of memory overhead that can be used to cache data instead
of indices
* we can skip storing hash keys for MPT leaf nodes - these are trivial
to compute and waste a lot of space - previously they had to be present in
the `AriKey` CF to avoid having to look in two tables on the happy path.
* There is a small increase in write amplification because when a hash
value is updated for a branch node, we must write both key and branch
data - previously we would write only the key
* There's a small shift in CPU usage - instead of performing lookups in
the database, hashes for leaf nodes are (re)-computed on the fly
* We can return to slightly smaller on-disk SST files since there are
fewer of them, which should reduce disk traffic a bit
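
For illustration, a minimal Python sketch of the layout change described above (the implementation itself is Nim; `RootedVertexID`, `AriVtx` and `AriKey` come from this commit, while the record encoding and helper names below are hypothetical):

```python
# Model of the layout change, using dicts in place of RocksDB column
# families. The encoding and function names are illustrative only.

# Before: two column families, both keyed by RootedVertexID, so the lookup
# key (30% of the AriKey content) is duplicated and bloom-filtered twice.
ari_vtx_old = {}  # RootedVertexID -> serialized MPT vertex
ari_key_old = {}  # RootedVertexID -> 32-byte hash key

# After: a single AriVtx column family whose value carries the vertex plus
# an optional hash key - branch nodes only, since leaf keys are recomputed.
ari_vtx = {}  # RootedVertexID -> (serialized vertex, hash key or None)

def put_branch(vid, vtx_blob, hash_key):
    ari_vtx[vid] = (vtx_blob, hash_key)

def put_leaf(vid, vtx_blob):
    ari_vtx[vid] = (vtx_blob, None)  # key omitted: trivial to recompute

def get(vid, compute_leaf_key):
    # One disk traversal yields both vertex and key; a missing key means a
    # leaf, whose hash is (re)computed on the fly - CPU traded for I/O.
    vtx_blob, key = ari_vtx[vid]
    if key is None:
        key = compute_leaf_key(vtx_blob)
    return vtx_blob, key
```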

Internally, there are also other advantages:

* when clearing keys, we no longer have to store a zero hash in memory -
instead, we deduce staleness of the cached key from the presence of an
updated VertexRef (sketched after this list) - this saves ~1gb of memory
overhead during import
* hash key cache becomes dedicated to branch keys since leaf keys are no
longer stored in memory, reducing churn
* key computation is a lot faster thanks to the skipped second disk
traversal - a key computation for mainnet now completes in 11 hours
instead of ~2 days (!) owing to better cache usage and less read
amplification - with additional improvements to the on-disk format, we
can probably get rid of the initial full-traversal method of seeding the
key cache on first start after import
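
A rough Python sketch of the staleness rule above (the layer structure and field names are stand-ins, not Aristo's actual types):

```python
# Stand-in for a layered in-memory cache: each layer records updated
# vertices (vtx) and cached branch hash keys (key). Names are hypothetical.

class Layer:
    def __init__(self):
        self.vtx = {}  # vid -> updated VertexRef
        self.key = {}  # vid -> cached branch hash key

def cached_key(layers, vid):
    """Return a valid cached key for vid, or None if it must be recomputed."""
    for layer in reversed(layers):  # newest layer first
        if vid in layer.vtx:
            # Vertex was updated after the key was cached: the key is stale.
            # No zero-hash sentinel needs to be stored to express this.
            return None
        if vid in layer.key:
            return layer.key[vid]
    return None
```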

All in all, this PR reduces the size of a mainnet database from 160gb to
110gb and the peak memory footprint during import by ~1-2gb.
2024-11-20 09:56:27 +01:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| .gitignore | eth: bump (#2308) | 2024-06-06 23:39:09 +00:00 |
| README.md | Script for comparing csv outputs from block import | 2024-06-06 14:33:49 +02:00 |
| block-import-stats.py | stats: interpolate, remove some broken stats | 2024-06-29 06:36:35 +02:00 |
| check_copyright_year.sh | Cleanup stateless and block witness code. (#2295) | 2024-06-08 15:05:00 +07:00 |
| make_dist.sh | ci: fix nightly build | 2023-02-23 18:34:04 +07:00 |
| make_states.sh | Store keys together with node data (#2849) | 2024-11-20 09:56:27 +01:00 |
| print_version.nims | Add check copyright year linter to CI | 2023-11-01 10:41:20 +07:00 |
| requirements.in | Script for comparing csv outputs from block import | 2024-06-06 14:33:49 +02:00 |
| requirements.txt | increase Python dependencies to address urllib3 vuln and certifi root cert (#2605) | 2024-09-10 06:36:28 +00:00 |

README.md

# Utility scripts

## block-import-stats.py

This script compares outputs from two `nimbus import --debug-csv-stats` runs, a baseline and a contender.

To use it, set up a virtual environment:

```sh
# Create a venv for the tool
python -m venv stats
. stats/bin/activate
pip install -r requirements.txt
```

* Generate a baseline version by processing a long range of blocks using `nimbus import`
* Modify your code and commit to git (to generate a unique identifier for the code)
* Re-run the same import over the range of blocks of interest, saving the import statistics to a new CSV
* Pass the two CSV files to the script:

```sh
python block-import-stats.py
```

By default, the script will skip block numbers below 500k since these are mostly uninteresting.

See `-h` for help text on running the script.

## Testing a particular range of blocks

As long as block import is run on similar hardware, each run can be saved for future reference using the git hash.

The block import can be run repeatedly with `--max-blocks` to stop after processing a number of blocks - by copying the state at that point, one can resume or replay the import of a particular block range.

See make_states.sh for such an example.
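
For illustration, a minimal Python sketch of this replay workflow; only `--max-blocks` and `make_states.sh` come from this README, while the `--data-dir` flag, paths, and block count below are assumptions:

```python
#!/usr/bin/env python3
# Sketch: snapshot the database after a bounded import so a block range can
# be replayed later. The --data-dir flag and paths are illustrative.
import shutil
import subprocess

DATA_DIR = "data"          # hypothetical nimbus data directory
SNAPSHOT = "data-8500k"    # copy to resume/replay from this point

# Import up to a chosen block, then stop (via --max-blocks, see above)
subprocess.run(
    ["nimbus", "import", "--data-dir", DATA_DIR, "--max-blocks", "8500000"],
    check=True,
)

# Preserve the state; replaying the next range starts from this copy
shutil.copytree(DATA_DIR, SNAPSHOT)
```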