* Clarify addOrphan error/logging
`addOrphan` returned a `bool` to indicate success. Change this to a `Result`
so that different errors can be distinguished.
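A minimal sketch of the shape of this change, with hypothetical error cases and simplified types (the real `addOrphan` takes a block rather than a root, and `Result` comes from the nim-results package used throughout the codebase):

```nim
import std/tables
import results  # nim-results package

type
  BlockRoot = array[32, byte]
  AddOrphanError = enum  # hypothetical error cases, for illustration only
    aoeDuplicate = "orphan already known"
    aoeFull      = "quarantine full"
  Quarantine = object
    orphans: Table[BlockRoot, string]  # placeholder payload type

const MaxOrphans = 16

proc addOrphan(q: var Quarantine, root: BlockRoot,
               blck: string): Result[void, AddOrphanError] =
  ## Previously returned `bool`; with `Result`, callers can log *why*
  ## an orphan was rejected instead of a generic failure.
  if root in q.orphans:
    return err(aoeDuplicate)
  if q.orphans.len >= MaxOrphans:
    return err(aoeFull)
  q.orphans[root] = blck
  ok()
```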
* Update beacon_chain/consensus_object_pools/block_quarantine.nim
Co-authored-by: tersec <tersec@users.noreply.github.com>
* Update beacon_chain/gossip_processing/gossip_validation.nim
---------
Co-authored-by: tersec <tersec@users.noreply.github.com>
* replace optimisticRoots table with field in BlockRef
* copyright year
* mark finalized blocks as verified on load
* Update beacon_chain/consensus_object_pools/block_dag.nim
Co-authored-by: Etan Kissling <etan@status.im>
* expand non-optimistic block checking to all pre-merge blocks
* refactor `markBlockVerified` to use `BlockRef` rather than block root; remove the superfluous caller in the `newPayload` path, replaced by `addResolvedHeadBlock` `BlockRef` construction
* don't treat finalized block specially; VALID status is sticky
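A sketch of the resulting shape, with illustrative names (the real `BlockRef` carries more fields, and the actual flag name may differ):

```nim
type
  Eth2Digest = array[32, byte]
  BlockRef = ref object
    root: Eth2Digest
    parent: BlockRef      # nil for the tail/genesis block
    executionValid: bool  # replaces the separate optimisticRoots table

proc markBlockVerified(blck: BlockRef) =
  ## VALID status is sticky: once a block is marked valid it stays valid,
  ## and so do all of its ancestors, so the walk can stop early.
  var b = blck
  while b != nil and not b.executionValid:
    b.executionValid = true
    b = b.parent
```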
---------
Co-authored-by: Etan Kissling <etan@status.im>
`SyncCommitteeMsgPool` grouped messages by their `beacon_block_root`.
This is problematic around sync committee period boundaries and forks.
Around sync committee period boundaries, members from both the current
and next sync committee may sign the same `beacon_block_root`; mixing
the signatures from both committees together is a mistake. Likewise,
around fork transitions, the `signing_root` changes, so those messages
also need to be segregated.
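One possible keying that keeps these messages apart, sketched with illustrative names rather than the actual pool internals:

```nim
import std/[hashes, tables]

type
  Eth2Digest = array[32, byte]
  MsgKey = object
    blockRoot: Eth2Digest    # what is being signed
    period: uint64           # sync committee period of the message slot
    signingRoot: Eth2Digest  # changes across forks for the same blockRoot

proc hash(k: MsgKey): Hash =
  var h: Hash = 0
  h = h !& hash(k.blockRoot)
  h = h !& hash(k.period)
  h = h !& hash(k.signingRoot)
  !$h

# Messages from the current and next committee signing the same block root
# around a period boundary now land in distinct buckets, as do messages
# whose signing root differs due to a fork transition.
var pool: Table[MsgKey, seq[uint64]]  # participant indices, as a placeholder
```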
When an uncached `ShufflingRef` is requested, we currently replay state,
which can take several seconds. Acceleration is possible as follows:
1. Start from any state with locked-in `get_active_validator_indices`.
Any blocks / slots applied to such a state can only affect that
result for future epochs, so the state remains viable for querying
the target epoch.
`compute_activation_exit_epoch(state.slot.epoch) > target.epoch`
2. Determine highest common ancestor among `state` and `target.blck`.
At the ancestor slot, the same rules apply re `get_active_validator_indices`.
`compute_activation_exit_epoch(ancestorSlot.epoch) > target.epoch`
3. We now have a `state` that shares history with `target.blck` up
through a common ancestor slot. Any blocks / slots that the `state`
contains, which are not part of the `target.blck` history, affect
`get_active_validator_indices` at epochs _after_ `target.epoch`.
4. Select the `state.randao_mixes[N]` entry closest to the common ancestor.
Either direction is fine (above / below the ancestor).
5. From that RANDAO mix, mix in / out all RANDAO reveals from blocks
in-between. This is just an XOR operation, so fully reversible.
`mix = mix xor SHA256(blck.message.body.randao_reveal)`
6. Compute the attester dependent slot from `target.epoch`.
`if target.epoch >= 2: (target.epoch - 1).start_slot - 1 else: GENESIS_SLOT`
7. Trace back from `target.blck` to the attester dependent slot.
This yields `dependentBlck`, the block for which we want to recover
the RANDAO value.
8. Mix in all RANDAO reveals from blocks up through the `dependentBlck`.
Same method, no special handling necessary for epoch transitions.
9. Combine `get_active_validator_indices` from `state` at `target.epoch`
with the recovered RANDAO value at `dependentBlck` to obtain the
requested shuffling, and construct the `ShufflingRef` without replay.
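The reversible mixing in steps 4, 5 and 8 is plain XOR; a self-contained sketch (with a stand-in digest type) showing that mixing the same reveal hash twice restores the original mix:

```nim
type
  Digest = array[32, byte]  # stand-in for Eth2Digest

func mixReveal(mix, revealHash: Digest): Digest =
  ## `mix xor SHA256(blck.message.body.randao_reveal)`; XOR is self-inverse,
  ## so the same function mixes reveals *out* (rewinding from the selected
  ## `randao_mixes[N]` back to the common ancestor) and *in* (walking
  ## forward to `dependentBlck`).
  for i in 0 ..< 32:
    result[i] = mix[i] xor revealHash[i]

when isMainModule:
  var mix, h: Digest
  for i in 0 ..< 32:
    h[i] = byte(i)  # pretend SHA256(randao_reveal)
  doAssert mixReveal(mixReveal(mix, h), h) == mix  # fully reversible
```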
* more tests and simplify logic
* test with different number of deposits per branch
* Update beacon_chain/consensus_object_pools/blockchain_dag.nim
Co-authored-by: Jacek Sieka <jacek@status.im>
* `commonAncestor` tests
* lint
---------
Co-authored-by: Jacek Sieka <jacek@status.im>
* Incremental pruning
When pruning is turned on for the first time, the current algorithm
prunes the full database at startup. This delays the restart
unnecessarily, since the reclaimed space is not needed all at once.
This PR introduces incremental pruning such that a single pass never
prunes more than 32 blocks, or as many as the sync speed requires,
whichever is higher.
This mode is expected to become the default in a follow-up release.
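A sketch of what a capped pruning pass could look like, with hypothetical helper names; the exact budget accounting in the real implementation may differ:

```nim
const MinBlocksPerPass = 32'u64  # cap from the PR description

proc prunePass(tailSlot: var uint64, horizonSlot, syncSpeed: uint64) =
  ## Prune at most max(32, syncSpeed) blocks per pass, so enabling pruning
  ## never triggers a full-database prune at startup.
  let budget = max(MinBlocksPerPass, syncSpeed)
  var pruned = 0'u64
  while tailSlot < horizonSlot and pruned < budget:
    # db.delBlock(tailSlot)  # hypothetical database call
    inc tailSlot
    inc pruned
```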
Post-Capella, historical roots are computed from historical summaries
instead of being directly stored in the beacon state.
It is slightly messy to pass both lists around, but doing so avoids
computing the historical root unnecessarily.
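A sketch of resolving a historical index against both lists, with hypothetical names; `htr` stands in for `hash_tree_root`, which is only invoked in the post-Capella branch:

```nim
type
  Root = array[32, byte]
  HistoricalSummary = object
    block_summary_root, state_summary_root: Root

func htr(s: HistoricalSummary): Root =
  ## Placeholder for hash_tree_root; hash_tree_root(HistoricalSummary)
  ## equals hash_tree_root of the corresponding pre-Capella HistoricalBatch
  ## by construction, which is what makes the substitution work.
  discard

func historicalRootAt(roots: seq[Root],
                      summaries: seq[HistoricalSummary], i: int): Root =
  if i < roots.len:
    roots[i]                       # pre-Capella: stored in the state
  else:
    htr(summaries[i - roots.len])  # post-Capella: computed on demand
```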
The consensus-spec-tests already cover the scenarios of our custom test
runner, so the custom tests can be removed. Also cleans up unused config
flags and related unreachable logic.
The 'peek' name was incorrect, as the proc actually removed the entry
from the table. It was consequently used incorrectly in block
processing: the blobless block wasn't returned to the table when it
should have been.
* Simplify block quarantine blobless
The quarantine blobless table was initially keyed on (Eth2Digest,
ValidatorSig), modelled on the orphan table. The presence of the
signature in the key is necessary for orphans, because we can't
verify the signature of an orphan. That is not the case for a
blobless block, whose signature can be verified.
So this PR changes the blobless block table to be keyed on an
Eth2Digest only. This simplifies the retrieval and handling of
blobless blocks.
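A sketch of the simplified table together with an explicitly named `pop`, using illustrative types (the real code uses its own option and block types):

```nim
import std/[options, tables]

type
  Eth2Digest = array[32, byte]
  BloblessBlock = object  # placeholder for the stored signed block

# Before: Table[(Eth2Digest, ValidatorSig), BloblessBlock], mirroring the
# orphan table, where the unverifiable signature must be part of the key.
# After: the signature of a blobless block is verifiable, so the root
# suffices:
var blobless: Table[Eth2Digest, BloblessBlock]

proc popBlobless(t: var Table[Eth2Digest, BloblessBlock],
                 root: Eth2Digest): Option[BloblessBlock] =
  ## Named `pop` rather than `peek`, since it removes the entry; callers
  ## that still need the block must put it back explicitly.
  var b: BloblessBlock
  if t.pop(root, b):
    some(b)
  else:
    none(BloblessBlock)
```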
* review feedback
Just the variable, not yet `lcDataForkAtStateFork` / `atStateFork`.
- Shorten comment in `light_client.nim` to keep line width
- Do not rename `stateFork` mention in `runProposalForkchoiceUpdated`.
- Do not rename `stateFork` in `getStateField(dag.headState, fork)`
The rest is just a mechanical mass replace.