This PR shares pubkey data between validators via a thread-local cache,
netting roughly a 400 MB memory usage reduction on holesky: we keep 3
permanent + several ephemeral state copies in memory at all times, and
each state copy holds a full copy of every validator record.
The PR also introduces a hash cache for the key, which gives a ~14%
speedup for a full-state `hash_tree_root` - the key makes up a large part
of the `Validator` htr time.
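A minimal sketch of the combined idea, with hypothetical names (`CachedKey`, `keyCache`, `getOrCreateKey`, `keyRoot` are illustrative, not the PR's identifiers) and nimcrypto assumed for hashing: a thread-local table maps the compressed key bytes to a single heap allocation that also stores the key's `hash_tree_root`, computed once.

```nim
import std/tables
import nimcrypto/sha2

type
  CachedKey = object
    blob: array[48, byte]  # compressed pubkey bytes, shared across states
    root: array[32, byte]  # hash_tree_root of the key, computed once

proc keyRoot(raw: array[48, byte]): array[32, byte] =
  # SSZ hash_tree_root of a 48-byte vector: pad to two 32-byte chunks
  # and hash them together, per the SSZ merkleization rules.
  var padded: array[64, byte]
  for i in 0 ..< 48:
    padded[i] = raw[i]
  sha256.digest(padded).data

# Thread-local, so no locking is needed; each thread builds its own cache.
var keyCache {.threadvar.}: Table[array[48, byte], ptr CachedKey]

proc getOrCreateKey(raw: array[48, byte]): ptr CachedKey =
  ## Return the shared entry for `raw`, allocating and hashing on first use.
  result = keyCache.getOrDefault(raw)
  if result == nil:
    result = create(CachedKey)
    result.blob = raw
    result.root = keyRoot(raw)
    keyCache[raw] = result
```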
Finally, the time it takes to copy a state also goes down, from ~80 ms to
~60 ms, for reasons similar to htr.
We use a `ptr` even though a `ref` could in theory have been used - there
is not much practical benefit to a `ref` (given the data is mutable),
while a `ptr` is cheaper and easier to copy (when copying temporary
states).
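To illustrate the `ptr` choice with stand-in types (these are not the actual `Validator` fields): copying a record that holds a raw pointer copies just one machine word, with no reference-count or GC traffic.

```nim
type
  CachedKeyStub = object  # stand-in for the cache entry sketched above
    blob: array[48, byte]
    root: array[32, byte]

  # Hypothetical, trimmed-down validator record: copying it copies only
  # one machine word for the key - no allocation, no GC bookkeeping -
  # which is what makes temporary state copies cheap.
  ValidatorSketch = object
    pubkeyData: ptr CachedKeyStub
    effectiveBalance: uint64
```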
We could go further and cache a cooked pubkey, but that turns out to be
quite intrusive - in all the relevant places we already use a cooked key
from the immutable validator data, so there are no immediate performance
gains from doing so, while managing the compressed -> cooked key mapping
would become more difficult. Something for a future PR, perhaps.
Co-authored-by: Etan Kissling <etan@status.im>
* compute post-merge randao mix without loading state
* avoid copying state on shuffling computation and compute epochref
* speed up state copy for block production
When using checkpoint sync, only the checkpoint state is available; the
corresponding block is not downloaded initially and is backfilled later.
`dag.backfill` tracks the latest filled `slot` and the latest
`parent_root` for which no block has been synced yet.
In checkpoint sync this assumption is broken: there, the initial
`dag.backfill.slot` is set based on the checkpoint state's slot, while
the checkpoint block itself is likewise unavailable.
However, the sync manager in backward mode also requests
`dag.backfill.slot`, and `block_clearance` then backfills the checkpoint
block once it is synced. But there is no guarantee that a peer ever sends
us that block: they could send us all the parent blocks and omit solely
the checkpoint block itself. In that situation, we would accept the
parent blocks, advance `dag.backfill`, and subsequently never request the
checkpoint block again, resulting in a gap in the blocks DB that is never
filled.
To mitigate that, the assumption is restored that `dag.backfill.slot` is
the latest filled `slot` and `dag.backfill.parent_root` is the root of
the next block that needs to be synced. By setting `slot` to
`tail.slot + 1` and `parent_root` to `tail.root`, we put a fake summary
into `dag.backfill`, so that `block_clearance` only proceeds once the
checkpoint block exists.
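A sketch of the fake summary under illustrative types (the real code uses the DAG's own summary record and digest type): `slot` points one past the tail and `parent_root` at the tail itself, so the next block that backward sync requests is exactly the checkpoint block.

```nim
type
  Eth2DigestSketch = array[32, byte]  # illustrative stand-in
  BackfillSummary = object            # hypothetical, mirrors the prose
    slot: uint64
    parent_root: Eth2DigestSketch

proc fakeBackfill(tailSlot: uint64,
                  tailRoot: Eth2DigestSketch): BackfillSummary =
  # With `slot = tail.slot + 1` and `parent_root = tail.root`, the next
  # block requested backward is the checkpoint block itself, so
  # `block_clearance` cannot advance past it until it actually arrives.
  BackfillSummary(slot: tailSlot + 1, parent_root: tailRoot)
```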
After checkpoint sync, historical block IDs cannot yet be queried.
However, they are needed to compute dependent roots of `ShufflingRef`.
To allow the lookup, enable `getBlockIdAtSlot` to answer from compatible
states in memory: as long as they descend from the finalized checkpoint
and the requested slot is sufficiently recent, `block_roots` contains
everything needed to recover the `BlockSlotId`, up to
`SLOTS_PER_HISTORICAL_ROOT` slots back.
This is similar to how `attester_dependent_root` etc. are computed.
This accelerates the first couple of minutes of checkpoint sync on
Mainnet, especially the time until finality advances past the synced
checkpoint.
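A sketch of the lookup, following the spec's `get_block_root_at_slot` indexing; the helper name and free-standing signature are illustrative:

```nim
const SLOTS_PER_HISTORICAL_ROOT = 8192  # Mainnet preset value

proc blockRootAtSlot(
    stateSlot: uint64,
    blockRoots: array[SLOTS_PER_HISTORICAL_ROOT, array[32, byte]],
    slot: uint64): array[32, byte] =
  # Same indexing as the spec's `get_block_root_at_slot`: valid only while
  # the requested slot is recent enough to still be in the ring buffer.
  doAssert slot < stateSlot and
    stateSlot <= slot + SLOTS_PER_HISTORICAL_ROOT.uint64
  blockRoots[int(slot mod SLOTS_PER_HISTORICAL_ROOT.uint64)]
```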
Finish the rename started in #4809 for consistent naming.
`ExecutionPayloadHash` suggests a hash over the payload rather than over
the block; `BlockHash` is also the canonical name in the engine API.
Avoid marking blocks invalid when the corresponding `blobSidecarsByRange`
request returns an incomplete or incorrect response while syncing. The
block itself may still be valid in that scenario.
This PR causes a few new warnings to appear - these are harmless but
will need addressing separately as they span several libraries.
* new asyncraises syntax (see the sketch after this list)
* asyncraises support in several modules
* `sink` usage reduces spurious copying
* `?` compatibility for `async` + `results`
* remove `-d:chronosStrictException` (obsolete)
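For illustration, a hedged example of the annotated style - the exact pragma spelling has varied across chronos versions, shown here in the `async: (raises: ...)` form - where the declared exception set is checked at compile time:

```nim
import chronos

proc fetchValue(): Future[int] {.async: (raises: [CancelledError]).} =
  # Only `CancelledError` may escape this proc; raising e.g. `ValueError`
  # in the body would be a compile-time error.
  await sleepAsync(10.milliseconds)
  return 42
```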
This requires all object types to be explicitly white-listed for default
serialization. The PR makes the minimal changes, although a number of
similar mechanisms in `eth2_rest_serialization` can now be removed.
When the BN exits after writing a new `head` to the database but before
completing the `updateFinalizedBlocks` call, the database is left
slightly inconsistent by the partial write. We currently fail to start up
after that. Fix this by catching up on partial `updateFinalizedBlocks`
tasks on startup, and add a test for this edge case.
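A sketch of the recovery idea with hypothetical stand-ins (`BeaconDbSketch` and `loadHead` are invented here; `updateFinalizedBlocks` mirrors the call named above): because the task is idempotent, re-running it for the stored head on startup completes any partial write from a previous run.

```nim
type
  BeaconDbSketch = object  # hypothetical stand-in for the beacon database
    head: array[32, byte]

proc loadHead(db: BeaconDbSketch): array[32, byte] =
  db.head  # hypothetical: read the head recorded before shutdown

proc updateFinalizedBlocks(db: BeaconDbSketch, head: array[32, byte]) =
  # Hypothetical and idempotent: entries already written by an
  # interrupted run are skipped, so re-running is harmless.
  discard

proc recoverOnStartup(db: BeaconDbSketch) =
  # Catch up on any `updateFinalizedBlocks` work a previous run left
  # unfinished; a no-op when the database is already consistent.
  db.updateFinalizedBlocks(db.loadHead())
```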
This PR brings down the time to send 100 attestations from ~1 s to
~100 ms, making it feasible to run 10k validators on a single node (which
regularly sends 300 attestations / slot).
This is done by batching the slashing protection database writes into a
single transaction, thus avoiding a slow fsync for every signature - the
effect is more pronounced on slow drives.
The benefit applies to both beacon-node and validator-client validators.
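A sketch of the batching with hypothetical stand-ins (`DbHandle`, `exec`, and `registerSignature` are illustrative, not the actual slashing protection API): one transaction means one fsync at COMMIT for the whole batch instead of one per signature.

```nim
type
  DbHandle = object  # hypothetical database handle

proc exec(db: DbHandle, sql: string) =
  discard  # hypothetical: run a statement against the database

proc registerSignature(db: DbHandle, signature: seq[byte]) =
  discard  # hypothetical: one slashing protection insert

proc registerBatch(db: DbHandle, signatures: seq[seq[byte]]) =
  # All inserts join one open transaction, so the database fsyncs once
  # at COMMIT instead of once per signature - the difference between
  # ~1 s and ~100 ms for 100 attestations, most visible on slow drives.
  db.exec("BEGIN")
  try:
    for s in signatures:
      db.registerSignature(s)
    db.exec("COMMIT")
  except CatchableError:
    db.exec("ROLLBACK")
    raise
```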
* Initial commit.
* Fix issues and tests.
* Fix test compilation issue.
* Update AllTests.
* Change the poorest score name from <lowest> to <bad>.
Split the sync committee message score into ranges, so that lexicographic
scores do not intersect with normal ones. Lexicographic scores should sit
below normal scores (see the sketch after this list).
* Address review comments.
Fix aggregated attestation scoring to use `MAX_VALIDATORS_PER_COMMITTEE`.
Fix sync committee contribution scoring to use `SYNC_SUBCOMMITTEE_SIZE`.
Add getUniqueVotes test vectors.
* Post-rebase fixes.
* Address review comments.
* Return to score calculation based on the actual bit length.
* AllTests modification.
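A sketch of the score-range split mentioned in the sync committee scoring item above; the band constants and helpers are hypothetical, not the PR's actual values:

```nim
const
  lexicographicMax = 999'u64         # tie-break scores live in [0, 999]
  normalBase = lexicographicMax + 1  # normal scores start above the band

func lexicographicScore(raw: uint64): uint64 =
  # Capped to the low band: used only to order responses that have no
  # proper score, so it can never outrank a normal score.
  min(raw, lexicographicMax)

func normalScore(raw: uint64): uint64 =
  # Offset into the high band, so any normal score beats any
  # lexicographic score.
  normalBase + raw
```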