When using checkpoint sync, only the checkpoint state is available; the
corresponding block is not downloaded immediately and is only backfilled
later.
`dag.backfill` tracks the latest filled `slot` and the latest `parent_root`
for which no block has been synced yet.
In checkpoint sync, this assumption is broken: the initial
`dag.backfill.slot` is set from the checkpoint state's slot, but the
corresponding block is not available either.
However, the sync manager in backward mode also requests `dag.backfill.slot`,
and `block_clearance` then backfills the checkpoint block once it is synced.
There is no guarantee, though, that a peer ever sends us that block. They
could send us all parent blocks and omit only the checkpoint block itself.
In that situation, we would accept the parent blocks, advance `dag.backfill`,
and subsequently never request the checkpoint block again, resulting in a gap
in the blocks DB that is never filled.
To mitigate that, the assumption is restored that `dag.backfill.slot` is the
latest filled `slot` and that `dag.backfill.parent_root` is the root of the
next block that needs to be synced. By setting `slot` to `tail.slot + 1` and
`parent_root` to `tail.root`, we put a fake summary into `dag.backfill`, so
that `block_clearance` only proceeds once the checkpoint block exists.
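A minimal sketch of the fake-summary trick, using simplified stand-in types
and a hypothetical helper name (the real code operates on the chain DAG's
`BeaconBlockSummary` and tail `BlockId`):
```nim
type
  Eth2Digest = array[32, byte]
  BlockId = object
    slot: uint64
    root: Eth2Digest
  BeaconBlockSummary = object
    slot: uint64
    parent_root: Eth2Digest

proc initCheckpointBackfill(tail: BlockId): BeaconBlockSummary =
  ## Pretend the slot after the checkpoint block is already filled and that
  ## the checkpoint block root is the next parent to fetch: backward sync then
  ## keeps requesting the checkpoint block until it is actually stored.
  BeaconBlockSummary(slot: tail.slot + 1, parent_root: tail.root)
```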
After checkpoint sync, historical block IDs cannot yet be queried.
However, they are needed to compute the dependent roots of `ShufflingRef`.
To allow the lookup, enable `getBlockIdAtSlot` to answer from compatible
states in memory: as long as a state descends from the finalized checkpoint
and the requested slot is sufficiently recent, its `block_roots` field
contains everything needed to recover the `BlockSlotId`, for up to
`SLOTS_PER_HISTORICAL_ROOT` slots.
This is similar to how `attester_dependent_root` etc. are computed.
This accelerates the first couple of minutes of checkpoint sync on Mainnet,
especially the time until finality advances past the synced checkpoint.
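To illustrate why a recent in-memory state suffices, here is roughly the
spec's `get_block_root_at_slot` lookup over the state's `block_roots`
circular buffer, with simplified stand-in types (not the actual nimbus-eth2
code); recovering the full `BlockSlotId` additionally has to account for
empty slots, where consecutive entries repeat the same root:
```nim
const SLOTS_PER_HISTORICAL_ROOT = 8192  # mainnet preset

type
  Eth2Digest = array[32, byte]
  Slot = int  # simplified; the real code uses a distinct uint64 type
  MiniState = object
    slot: Slot  # the state's own slot
    block_roots: array[SLOTS_PER_HISTORICAL_ROOT, Eth2Digest]

proc blockRootAtSlot(state: MiniState, slot: Slot): Eth2Digest =
  ## The root is recoverable as long as `slot < state.slot` and the requested
  ## slot is recent enough to still be covered by the circular buffer.
  doAssert slot < state.slot and state.slot <= slot + SLOTS_PER_HISTORICAL_ROOT
  state.block_roots[slot mod SLOTS_PER_HISTORICAL_ROOT]
```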
Finish the rename started in #4809 for consistent naming:
`ExecutionPayloadHash` suggests a hash over the payload rather than over the
block, and `BlockHash` is also the canonical name in the engine API.
When using checkpoint sync, the initial block is missing from the DB.
Update the LC data collector initialization to account for that,
avoiding a spurious error message when it is incorrectly accessed:
```
ERR 2024-02-07 11:21:55.416+01:00 Block failed to load unexpectedly topics="chaindag_lc" bid=d30517a7:8257504 tail=8257504
```
Also fixes a regression from #5691 that resulted in similar messages
while importing the first few blocks after checkpoint sync.
Thanks to @arnetheduck for reporting this.
Full caches should not be used to mark blocks as unviable. The unviable
status is quite persistent: a block marked as such won't be processed again
even after the cache empties. The problem was originally introduced in #4808.
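A hypothetical sketch of the distinction (illustrative names, not the actual
nimbus-eth2 quarantine API): a full cache should produce a transient drop
that can be retried later, while the sticky unviable verdict stays reserved
for blocks that can never become canonical.
```nim
type
  BlockVerdict = enum
    Accepted, MissingParent, UnviableFork, Dropped

proc classify(descendsFromFinalized, cacheFull: bool): BlockVerdict =
  if not descendsFromFinalized:
    UnviableFork   # permanent verdict: the block can never become canonical
  elif cacheFull:
    Dropped        # transient: the block may be requested and retried later
  else:
    MissingParent  # keep in the cache until the parent arrives
```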
If the initial state replays cover the finalized head, import the matching
`LightClientBootstrap` into the database.
This also addresses the following error when a light client requests a
bootstrap for the genesis slot on networks that launch with Altair enabled:
```
{"lvl":"DBG","ts":"2023-10-04 11:17:49.665+00:00","msg":"LC bootstrap unavailable: Sync committee branch not cached","topics":"chaindag_lc","slot":0}
```
With Capella, a `bls_to_execution_change` SSE event should be emitted on the
event stream whenever a new `SignedBLSToExecutionChange` is received.
Add this missing functionality for compatibility with the beacon-API specs.
- https://github.com/ethereum/beacon-APIs/pull/248
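A hypothetical wiring sketch (illustrative names, not the actual nimbus-eth2
API) of where the event fires:
```nim
type
  SignedBLSToExecutionChange = object  # stand-in for the real SSZ type
    validatorIndex: uint64
  OnBlsToExecutionChange = proc (change: SignedBLSToExecutionChange)

proc processBlsToExecutionChange(change: SignedBLSToExecutionChange,
                                 passedValidation: bool,
                                 onEvent: OnBlsToExecutionChange) =
  ## Once the message has passed gossip validation and is kept for future
  ## block inclusion, notify subscribers of the beacon-API event stream's
  ## `bls_to_execution_change` topic.
  if passedValidation:
    onEvent(change)
```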
* use `PayloadAttributesV3` in `nimbus_light_client` for Deneb
From Deneb onward, `forkchoiceUpdated` requires `PayloadAttributesV3`, while
`nimbus_light_client` still used `PayloadAttributesV2`; see the sketch after
this list.
Also clean up two other locations that were already correctly using
`PayloadAttributesV3`, to reduce code duplication.
* fix letter case
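A sketch of the version rule from the first bullet above, with simplified
stand-in enums (the real code passes the engine-API
`PayloadAttributesV1`/`V2`/`V3` structs):
```nim
type
  ConsensusFork = enum
    Bellatrix, Capella, Deneb

  PayloadAttributesVersion = enum
    V1  # Bellatrix: timestamp, prevRandao, suggestedFeeRecipient
    V2  # Capella: adds withdrawals
    V3  # Deneb onward: adds parentBeaconBlockRoot

proc requiredPayloadAttributes(fork: ConsensusFork): PayloadAttributesVersion =
  ## From Deneb onward, `forkchoiceUpdated` must be given `PayloadAttributesV3`.
  case fork
  of Bellatrix: V1
  of Capella:   V2
  of Deneb:     V3
```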
When the BN exits after writing a new `head` to the database, but before
completing the `updateFinalizedBlocks` call, the database is left slightly
inconsistent by the partial write, and we currently fail to start up after
that. Fix this by catching up on partial `updateFinalizedBlocks` tasks on
startup, and add a test for this edge case.
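The catch-up works because the finalization move is idempotent, so the
interrupted tail can simply be re-run at startup. A simplified, hypothetical
sketch (seqs standing in for the DB tables):
```nim
type
  Eth2Digest = array[32, byte]
  SummaryRow = tuple[slot: uint64, root: Eth2Digest]

proc updateFinalizedBlocks(nonFinalized: var seq[SummaryRow],
                           finalized: var seq[SummaryRow],
                           finalizedSlot: uint64) =
  ## Move every block up to `finalizedSlot` into the finalized table. Because
  ## re-running this is harmless, a partial write left behind by an unclean
  ## shutdown is completed by simply calling it again on startup.
  var remaining: seq[SummaryRow]
  for row in nonFinalized:
    if row.slot <= finalizedSlot:
      finalized.add row
    else:
      remaining.add row
  nonFinalized = remaining
```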
Simplify best `LightClientUpdate` collection by tracking only canonical
data instead of tracking the best update across all branches within the
sync committee period.
- https://github.com/ethereum/consensus-specs/pull/3553
* reorder gossip validation checks
Doing the coverage check only after the corresponding committee index is
known allows invalid data to be rejected early, as sketched below.
* use the same helper for individual attestations as well
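A sketch of the reordering only (hypothetical helpers): resolve the cheap
committee-index check first, so clearly invalid messages are rejected before
the more expensive coverage lookup runs.
```nim
type ValidationResult = enum Accept, Reject, Ignore

proc validateAggregate(hasValidCommitteeIndex: bool,
                       addsNewCoverage: proc (): bool): ValidationResult =
  if not hasValidCommitteeIndex:
    return Reject            # cheap check: fail fast on invalid data
  if not addsNewCoverage():  # only now pay for the coverage check
    return Ignore
  Accept
```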
When creating new LC updates, information about the parent block's post
state must be available (cached), but information about the current block's
post state is not yet required. Caching information about the current
block's post state can therefore be delayed, simplifying the LC data
collection logic a bit and allowing more future flexibility with the cache
design.
When new finality is reached without supermajority sync committee support,
trigger another event push on the beacon-API and libp2p once the finality
gains supermajority support.
- https://github.com/ethereum/consensus-specs/pull/3549
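A hypothetical sketch of the re-push condition (illustrative names;
supermajority meaning at least two thirds of the sync committee, as in the
light client specs):
```nim
const SYNC_COMMITTEE_SIZE = 512

type FinalityPushState = object
  lastPushedSlot: uint64
  lastPushHadSupermajority: bool

proc hasSupermajority(numSignatures: int): bool =
  numSignatures * 3 >= SYNC_COMMITTEE_SIZE * 2

proc shouldPushFinalityUpdate(state: var FinalityPushState,
                              finalizedSlot: uint64,
                              numSignatures: int): bool =
  ## Push when finality advances; push once more for the same finalized slot
  ## if the earlier push lacked supermajority support but this one has it.
  let super = hasSupermajority(numSignatures)
  if finalizedSlot > state.lastPushedSlot or
      (finalizedSlot == state.lastPushedSlot and
        super and not state.lastPushHadSupermajority):
    state.lastPushedSlot = finalizedSlot
    state.lastPushHadSupermajority = super
    return true
  false
```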
Replace the LC data collection sections that need to be maintained with
every `ConsensusFork` with generic logic that keeps working when unrelated
parts of Ethereum change.
`v1.4.0-beta.4` made the gossip rules stricter: blobs from other branches
must now be ignored if there are equivocating blocks.
Those blobs are only requestable via Req/Resp.
* ShufflingRef approach to next-epoch validator duty calculation/prediction
* refactor `action_tracker.updateActions` to take `ShufflingRef` + `beacon_proposers`; refactor `maybeUpdateActionTrackerNextEpoch` into a separate, reusable function; add actual fallback logic
* document one possible set of conditions
* check epoch participation flags and inactivity scores to ensure no penalties are pending, and `MAX_EFFECTIVE_BALANCE` to ensure rewards don't matter (see the sketch after this list)
* correctly (un)shuffle each proposer index
* remove debugging assertion
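A hypothetical sketch of the fallback precondition described in the list
above (illustrative names and a simplified flag encoding, not the actual
nimbus-eth2 check): the next-epoch prediction is only reused when the
validator's effective balance cannot change at the upcoming epoch transition.
```nim
const
  MAX_EFFECTIVE_BALANCE = 32_000_000_000'u64  # Gwei
  TIMELY_FLAGS_MASK = 0b111'u8                # source, target and head flags

type MiniValidatorStatus = object
  effectiveBalance: uint64
  participationFlags: uint8
  inactivityScore: uint64

proc nextEpochPredictionSafe(v: MiniValidatorStatus): bool =
  ## No pending penalties (all timely flags set, zero inactivity score) and
  ## already at the balance cap, so rewards cannot raise the effective balance.
  (v.participationFlags and TIMELY_FLAGS_MASK) == TIMELY_FLAGS_MASK and
    v.inactivityScore == 0 and
    v.effectiveBalance == MAX_EFFECTIVE_BALANCE
```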
Directly initialize `ForkedLightClientObj` instead of first setting the
`kind` (which zero-initializes everything) and then assigning the forky data
after that.
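A self-contained illustration with a made-up variant type (the real
`ForkedLightClientObj` has one branch per `LightClientDataFork`):
```nim
type
  DataFork = enum
    forkNone, forkAltair

  ForkedObj = object
    case kind: DataFork
    of forkNone:
      discard
    of forkAltair:
      altairData: int

# Before: set `kind` first (zero-initializing the branch), then fill it in.
var a = ForkedObj(kind: forkAltair)
a.altairData = 42

# After: construct the object directly with its forky data in one step.
let b = ForkedObj(kind: forkAltair, altairData: 42)
doAssert a.altairData == b.altairData
```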