# Nimbus-bench
Nbench is a profiler dedicated to the Nimbus Beacon Chain.

It is built as a domain-specific profiler that aims to be as unintrusive as possible while providing reports complementary to dedicated tools like `perf`, Apple Instruments, or Intel VTune, which let you dive down to a specific line or assembly instruction. In particular, those tools cannot tell you that your cryptographic subsystem, your parsing routines, or your random number generation should be revisited; they may sample at too coarse a resolution (milliseconds) instead of producing exact per-function statistics; and they are much less useful without debugging symbols, which require a lot of space.

In other words, `perf` and other generic profilers give you a laser-focused picture, while nbench strives to give you the big picture.
## Features
- By default, nbench collects the number of calls and the time spent in each function (a sketch of this bookkeeping follows the list).
- Like ncli or nfuzz, you can provide nbench isolated scenarios in SSZ format to analyze Nimbus behaviour.
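
For illustration, here is a minimal Nim sketch of that per-function bookkeeping. It is not nbench's actual implementation, only the general technique: count calls and accumulate wall-clock time per function. The `instrument` template and `stateTransitionDemo` proc are invented for this example.

```nim
# Minimal sketch of per-function call counting and timing; not nbench's code.
import std/[monotimes, times, tables]

type FuncStats = tuple[calls: int, nanos: int64]

var metrics: Table[string, FuncStats]

template instrument(name: string, body: untyped) =
  # Record one call and the wall-clock time spent executing `body`.
  let start = getMonoTime()
  body
  let elapsed = (getMonoTime() - start).inNanoseconds
  let cur = metrics.getOrDefault(name)
  metrics[name] = (calls: cur.calls + 1, nanos: cur.nanos + elapsed)

proc stateTransitionDemo() =
  # Stand-in workload for an instrumented function.
  instrument("stateTransitionDemo"):
    var acc = 0
    for i in 0 ..< 1_000_000:
      acc += i
    doAssert acc > 0

when isMainModule:
  for _ in 0 ..< 3:
    stateTransitionDemo()
  for name, s in metrics:
    echo name, ": ", s.calls, " calls, ", s.nanos, " ns total"
```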
## Usage

```sh
nim c -d:const_preset=mainnet -d:nbench -d:release -o:build/nbench nbench/nbench.nim

export SCENARIOS=vendor/nim-eth2-scenarios/tests-v0.11.1/mainnet/phase0

# Full state transition
build/nbench cmdFullStateTransition -d="${SCENARIOS}"/sanity/blocks/pyspec_tests/voluntary_exit/ -q=2

# Slot processing
build/nbench cmdSlotProcessing -d="${SCENARIOS}"/sanity/slots/pyspec_tests/slots_1

# Justification-Finalisation
build/nbench cmdEpochProcessing --epochProcessingCat=catJustificationFinalization -d="${SCENARIOS}"/epoch_processing/justification_and_finalization/pyspec_tests/234_ok_support/

# Registry updates
build/nbench cmdEpochProcessing --epochProcessingCat=catRegistryUpdates -d="${SCENARIOS}"/epoch_processing/registry_updates/pyspec_tests/activation_queue_efficiency/

# Slashings
build/nbench cmdEpochProcessing --epochProcessingCat=catSlashings -d="${SCENARIOS}"/epoch_processing/slashings/pyspec_tests/max_penalties/

# Block header processing
build/nbench cmdBlockProcessing --blockProcessingCat=catBlockHeader -d="${SCENARIOS}"/operations/block_header/pyspec_tests/proposer_slashed/

# Proposer slashing
build/nbench cmdBlockProcessing --blockProcessingCat=catProposerSlashings -d="${SCENARIOS}"/operations/proposer_slashing/pyspec_tests/invalid_proposer_index/

# Attester slashing
build/nbench cmdBlockProcessing --blockProcessingCat=catAttesterSlashings -d="${SCENARIOS}"/operations/attester_slashing/pyspec_tests/success_surround/

# Attestation processing
build/nbench cmdBlockProcessing --blockProcessingCat=catAttestations -d="${SCENARIOS}"/operations/attestation/pyspec_tests/success_multi_proposer_index_iterations/

# Deposit processing
build/nbench cmdBlockProcessing --blockProcessingCat=catDeposits -d="${SCENARIOS}"/operations/deposit/pyspec_tests/new_deposit_max/

# Voluntary exit
build/nbench cmdBlockProcessing --blockProcessingCat=catVoluntaryExits -d="${SCENARIOS}"/operations/voluntary_exit/pyspec_tests/validator_exit_in_future/
```
## Running the whole Eth2.0 specs test suite

Warning: this is a proof-of-concept; there is a slight degree of interleaving in the output. Furthermore, benchmarks are run in parallel and might interfere with each other.

```sh
nim c -d:const_preset=mainnet -d:nbench -d:release -o:build/nbench nbench/nbench.nim
nim c -o:build/nbench_tests nbench/nbench_spec_scenarios.nim
build/nbench_tests --nbench=build/nbench --tests=vendor/nim-eth2-scenarios/tests-v0.11.1/mainnet/
```
## TODO: Reporting
- Dump reports as CSV files for archival, a perf regression suite, and/or data mining.
- Piggyback on eth-metrics to report over Prometheus or StatsD.
- Let reports be augmented via label pragmas, applicable file-wide, to tag subsystems such as "cryptography", "block_transition", or "database" and get a global view of the system (a hypothetical sketch follows).
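
Purely as a hypothetical sketch of that last item (nbench ships no such pragma; `subsystem` and `mixBytes` are invented names), a label pragma could be a macro that tags each instrumented proc with a subsystem name:

```nim
# Hypothetical sketch only; nbench does not provide a `subsystem` pragma.
import std/macros

macro subsystem(tag: static string, fn: untyped): untyped =
  # A real implementation would register `tag` alongside the proc's
  # metrics; here we only emit a compile-time hint to show the hook point.
  hint("[" & tag & "] instrumenting " & $fn.name)
  result = fn

proc mixBytes(data: openArray[byte]): int {.subsystem: "cryptography".} =
  # Stand-in for a real cryptographic routine.
  for b in data:
    result = result * 31 + int(b)

when isMainModule:
  echo mixBytes([1'u8, 2, 3])
```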