nimbus-eth1/scripts
Jacek Sieka f034af422a
Pre-allocate vids for branches (#2882)
Each branch node may have up to 16 sub-items - currently, these are
assigned VertexIDs when they are first needed, leading to a
mostly-random vertex id order across sub-items.

Here, we pre-allocate all 16 vertex ids such that when a branch subitem
is filled, it already has a vertexid waiting for it. This brings several
important benefits:

* sub-items are sorted and "close" in their id sequencing - this means
that when rocksdb stores them, they are likely to end up in the same
data block, thus improving read efficiency
* because the ids are consecutive, we can store just the starting id and
a bitmap representing which sub-items are in use (see the sketch after
this list) - this reduces disk space usage for branches, allowing more
of them to fit into a single disk read and further improving disk read
and caching performance - disk usage at block 18M is down from 84 to
78gb!
* the in-memory footprint of VertexRef is reduced, allowing more
instances to fit into caches and less memory to be used overall.
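
A minimal sketch of the starting-id-plus-bitmap encoding, written in
Python rather than the repo's Nim - the names and layout here are
illustrative only, not the actual on-disk format:

```
# Hypothetical sketch: represent a branch's 16 sub-item slots as a
# pre-allocated starting vertex id plus a 16-bit "in use" bitmap,
# instead of 16 independently-allocated vertex ids.

def encode_branch(start_vid: int, used_slots: list[int]) -> tuple[int, int]:
    """Return (start_vid, bitmap) where bit i is set if slot i is in use."""
    bitmap = 0
    for slot in used_slots:
        assert 0 <= slot < 16
        bitmap |= 1 << slot
    return start_vid, bitmap

def decode_branch(start_vid: int, bitmap: int) -> dict[int, int]:
    """Recover slot -> vertex id for every slot marked used in the bitmap."""
    return {slot: start_vid + slot for slot in range(16) if bitmap & (1 << slot)}

# A branch with slots 0, 3 and 15 populated, vids pre-allocated from 1000:
start, bits = encode_branch(1000, [0, 3, 15])
assert decode_branch(start, bits) == {0: 1000, 3: 1003, 15: 1015}
```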

Because of the increased locality of reference, it turns out that we no
longer need to iterate over the entire database to generate the hash key
database efficiently - the normal computation is now fast enough. This
also significantly benefits "live" chain processing, where each dirtied
key must be accompanied by a read of all branch sub-items next to it -
most of the performance benefit in this branch comes from this
locality-of-reference improvement.

On a sample resync, there's already a ~20% improvement, with later
blocks seeing increasing benefit (because the trie is deeper in later
blocks, leading to more benefit from the branch read perf improvements):

```
blocks: 18729664, baseline: 190h43m49s, contender: 153h59m0s
Time (total): -36h44m48s, -19.27%
```
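
As a quick sanity check of the quoted percentage (a throwaway Python
snippet, not part of the repo):

```
# baseline 190h43m49s vs contender 153h59m0s, as wall-clock seconds
baseline = 190 * 3600 + 43 * 60 + 49   # 686629 s
contender = 153 * 3600 + 59 * 60 + 0   # 554340 s
print(f"{(contender - baseline) / baseline:+.2%}")  # -19.27%
```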

Note: clients need to be resynced as the PR changes the on-disk format

R.I.P. little bloom filter - your life in the repo was short but
valuable
2024-12-04 11:42:04 +01:00
.gitignore eth: bump (#2308) 2024-06-06 23:39:09 +00:00
README.md Script for comparing csv outputs from block import 2024-06-06 14:33:49 +02:00
block-import-stats.py stats: interpolate, remove some broken stats 2024-06-29 06:36:35 +02:00
check_copyright_year.sh Cleanup stateless and block witness code. (#2295) 2024-06-08 15:05:00 +07:00
make_dist.sh ci: fix nightly build 2023-02-23 18:34:04 +07:00
make_states.sh Store keys together with node data (#2849) 2024-11-20 09:56:27 +01:00
print_version.nims Add check copyright year linter to CI 2023-11-01 10:41:20 +07:00
requirements.in Script for comparing csv outputs from block import 2024-06-06 14:33:49 +02:00
requirements.txt Pre-allocate vids for branches (#2882) 2024-12-04 11:42:04 +01:00

README.md

Utility scripts

block-import-stats.py

This script compares the outputs of two nimbus import --debug-csv-stats runs, a baseline and a contender.

To use it, set up a virtual environment:

```
# Create a venv for the tool
python -m venv stats
. stats/bin/activate
pip install -r requirements.txt

python block-import-stats.py
```
  • Generate a baseline version by processing a long range of blocks using nimbus import
  • Modify your code and commit to git (to generate a unique identifier for the code)
  • Re-run the same import over the range of blocks of interest, saving the import statistics to a new CSV
  • Pass the two CSV files to the script

By default, the script will skip block numbers below 500k since these are mostly uninteresting.

See -h for help text on running the script.
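
For intuition, here is a rough sketch of the kind of per-block comparison the script performs - this is not the script itself, and the column names ("block_number", "time") are assumptions rather than the actual CSV schema emitted by nimbus import:

```
import pandas as pd

baseline = pd.read_csv("baseline.csv")
contender = pd.read_csv("contender.csv")

# Join the two runs on block number and compare per-block import time.
merged = baseline.merge(contender, on="block_number", suffixes=("_base", "_cont"))
merged = merged[merged.block_number >= 500_000]  # skip mostly-uninteresting early blocks
delta = (merged.time_cont - merged.time_base) / merged.time_base
print(f"mean per-block time delta: {delta.mean():+.2%}")
```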

Testing a particular range of blocks

As long as block import is run on similar hardware, each run can be saved for future reference using the git hash.

The block import can be run repeatedly with --max-blocks to stop after processing a given number of blocks. By copying the state at that point, one can resume or replay the import of a particular block range.

See make_states.sh for such an example.