The `eth2-networks` repo often receives metadata updates with a delay.
Switch to the `goerli` repo to obtain the latest config (Dencun scheduling)
when it is updated. This is in line with how Sepolia / Holesky work.
- https://github.com/eth-clients/goerli/pull/178
With Capella, a `bls_to_execution_change` SSE event should be emitted on the
event stream whenever a new `SignedBLSToExecutionChange` is received.
Add this missing functionality for compatibility with beacon-API specs.
- https://github.com/ethereum/beacon-APIs/pull/248
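A minimal sketch of how a consumer might pick up the new topic from the beacon-API event stream (`/eth/v1/events`); the node URL / port below are assumptions for a locally running node:

```python
# Subscribe to `bls_to_execution_change` SSE events from a beacon node.
# The URL is an assumption for a local setup; the topic / event name and
# payload shape follow the beacon-API spec.
import json
import requests

BEACON_API = "http://localhost:5052"

def watch_bls_changes() -> None:
    url = f"{BEACON_API}/eth/v1/events"
    with requests.get(url, params={"topics": "bls_to_execution_change"},
                      headers={"Accept": "text/event-stream"},
                      stream=True) as resp:
        resp.raise_for_status()
        event_type = None
        for line in resp.iter_lines(decode_unicode=True):
            if not line:
                event_type = None  # blank line terminates one SSE event
            elif line.startswith("event:"):
                event_type = line.split(":", 1)[1].strip()
            elif line.startswith("data:") and event_type == "bls_to_execution_change":
                change = json.loads(line.split(":", 1)[1].strip())
                print("validator", change["message"]["validator_index"],
                      "->", change["message"]["to_execution_address"])

if __name__ == "__main__":
    watch_bls_changes()
```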
This requires all object types to be explicitly white-listed for
default serialization. The PR makes only the minimal changes, although a
number of similar mechanisms in `eth2_rest_serialization` can now be
removed.
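The white-listing idea, as a language-agnostic sketch (Python for brevity; not the actual mechanism in `eth2_rest_serialization`): only explicitly registered types get the default encoding, everything else must provide a dedicated writer or is rejected.

```python
# Illustrative only: default serialization restricted to an explicit
# allow-list, so new object types cannot silently fall through.
import json
from dataclasses import asdict, dataclass, is_dataclass

DEFAULT_SERIALIZABLE: set = set()  # types allowed to use the default encoder
CUSTOM_ENCODERS: dict = {}         # type -> dedicated encoder

def default_serializable(cls):
    DEFAULT_SERIALIZABLE.add(cls)
    return cls

def custom_encoder(cls):
    def register(fn):
        CUSTOM_ENCODERS[cls] = fn
        return fn
    return register

def encode(obj) -> str:
    t = type(obj)
    if t in CUSTOM_ENCODERS:
        return json.dumps(CUSTOM_ENCODERS[t](obj))
    if t in DEFAULT_SERIALIZABLE and is_dataclass(obj):
        return json.dumps(asdict(obj))
    raise TypeError(f"{t.__name__} is not white-listed for default serialization")

@default_serializable
@dataclass
class Checkpoint:
    epoch: int
    root: str
```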
As a followup to #5654, ensure that we keep the comment noting that the
correct `forkchoiceUpdated` version to use is driven by the fork
schedule.
* use `PayloadAttributesV3` in `nimbus_light_client` for Deneb
From Deneb onward, `forkchoiceUpdated` requires `PayloadAttributesV3`,
but `nimbus_light_client` was still using `PayloadAttributesV2` (see the
sketch after the bullets below for the field difference).
Also clean up two other locations that were already correctly using
`PayloadAttributesV3`, to reduce code duplication.
* fix letter case
* avoid potentially subtle template/function symbol name interactions
* use `warn` instead of `error` in the `getExecutionPayload` codepath to avoid ambiguity
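For context, `PayloadAttributesV3` extends `PayloadAttributesV2` with `parentBeaconBlockRoot` (engine API Cancun spec, needed for EIP-4788). A minimal sketch of the two shapes, with illustrative types and placeholder defaults rather than the Nimbus definitions:

```python
# Sketch of the payload-attributes shapes; field names follow the engine
# API, but these dataclasses are illustrative, not Nimbus types.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Withdrawal:
    index: int
    validator_index: int
    address: str   # 20-byte execution address, hex
    amount: int    # Gwei

@dataclass
class PayloadAttributesV2:
    timestamp: int
    prev_randao: str              # 32-byte hex value
    suggested_fee_recipient: str  # 20-byte execution address, hex
    withdrawals: List[Withdrawal] = field(default_factory=list)

@dataclass
class PayloadAttributesV3(PayloadAttributesV2):
    # New with Deneb / Cancun: parent beacon block root (EIP-4788).
    parent_beacon_block_root: str = "0x" + "00" * 32
```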
When the BN exits after writing the new `head` to the database, but
before completing the `updateFinalizedBlocks` call, the database is left
slightly inconsistent by the partial write, and we currently fail to
start up after that. Fix this by catching up on partial
`updateFinalizedBlocks` tasks on startup, and add a test for this edge case.
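A minimal sketch of the catch-up idea under assumed table / key names (not the actual Nimbus schema): on startup, notice that `head` was written further than the finalized-blocks table reflects and finish the interrupted work before resuming normal operation.

```python
# Illustrative only: replay a partially completed finalization step at
# startup; safe to re-run because the operations are idempotent.
import sqlite3

def update_finalized_blocks(db: sqlite3.Connection, up_to_slot: int) -> None:
    # Move blocks at or below `up_to_slot` into the finalized table in a
    # single transaction (committed or rolled back atomically).
    with db:
        db.execute(
            "INSERT OR REPLACE INTO finalized_blocks (slot, root) "
            "SELECT slot, root FROM unfinalized_blocks WHERE slot <= ?",
            (up_to_slot,))
        db.execute("DELETE FROM unfinalized_blocks WHERE slot <= ?", (up_to_slot,))

def startup_catch_up(db: sqlite3.Connection) -> None:
    head_slot = db.execute(
        "SELECT value FROM kv WHERE key = 'head_slot'").fetchone()[0]
    finalized_slot = db.execute(
        "SELECT COALESCE(MAX(slot), -1) FROM finalized_blocks").fetchone()[0]
    if finalized_slot < head_slot:
        # `head` was written but the finalization step did not complete:
        # finish it now instead of refusing to start.
        update_finalized_blocks(db, head_slot)
```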
Simplify best `LightClientUpdate` collection by tracking only canonical
data instead of tracking the best update across all branches within the
sync committee period.
- https://github.com/ethereum/consensus-specs/pull/3553
* reorder gossip validation checks
Doing the coverage check only after the corresponding committee index is
known allows invalid data to be rejected earlier, as sketched below.
* use same helper for individual attestations as well
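An illustrative ordering sketch (names invented, not the Nimbus internals): resolve the committee index first, then run the coverage check against that committee, so invalid or redundant messages are dropped before any expensive work.

```python
# Illustrative only: cheap, early-rejecting checks run before the coverage
# check, and the coverage check runs before expensive signature work.
from typing import Dict, Optional, Set

def validate_aggregate(msg, known_committees: Set[int],
                       covered: Dict[int, Set[int]]) -> Optional[str]:
    # 1. Structural checks that need no derived data.
    if msg.slot is None or not msg.attester_indices:
        return "reject: malformed message"

    # 2. Resolve / validate the committee index; unknown index -> early reject.
    if msg.committee_index not in known_committees:
        return "reject: unknown committee index"

    # 3. Coverage check only now, against the already-known committee.
    if msg.attester_indices <= covered.get(msg.committee_index, set()):
        return "ignore: adds no new attesters"

    # 4. Expensive work (e.g. signature verification) happens last.
    return None  # accept
```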
This PR brings the time to send 100 attestations down from ~1s to
~100ms, making it feasible to run 10k validators on a single node (such
a node regularly sends ~300 attestations / slot).
This is done by batching the slashing protection database writes in a
single transaction, thus avoiding a slow fsync for every signature; the
effect is more pronounced on slow drives.
The benefit applies to validators attached to the beacon node as well as
those running through the validator client.
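A sketch of the batching idea using Python's built-in sqlite3 (the real slashing protection schema differs): one transaction, and therefore one fsync, for a whole batch of signatures instead of one per signature.

```python
# Illustrative only: batch slashing-protection writes into one transaction.
import sqlite3

def record_attestations_batched(db: sqlite3.Connection, rows) -> None:
    # rows: iterable of (validator, source_epoch, target_epoch, signing_root)
    with db:  # single BEGIN ... COMMIT -> a single fsync for the whole batch
        db.executemany(
            "INSERT INTO signed_attestations "
            "(validator, source_epoch, target_epoch, signing_root) "
            "VALUES (?, ?, ?, ?)",
            rows)

# The slow path this change avoids commits (and fsyncs) once per signature:
#   for row in rows:
#       with db:
#           db.execute("INSERT INTO signed_attestations ... VALUES (?, ?, ?, ?)", row)
```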