* extend light client protocol for Electra
Add missing Electra support for light client protocol:
- https://github.com/ethereum/consensus-specs/pull/3811
Tested against the PR's consensus-spec-tests; the test runner automatically
picks up the new tests once they become available.
* workaround `version-2-0`: `Error: cannot instantiate: 'SomeUnsignedInt'`
* fix initialization when Electra not scheduled
* try to reduce stack size in test
* put correct sync committee branch version into DB
* adjust fork schedule in light client data tests
* further reduce stack size
* split function into multiple parts
* rename variable
* regenerate test reports to cover new Electra tests
* add Nim bug reference
Including sync contributions in a block affects validator rewards.
When we have not received aggregate sync contributions but have seen the
individual messages, we can produce the contributions locally, improving
validator rewards when subscribing to all subnets or when a
non-aggregating attached validator is in the sync committee.
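A minimal sketch of that fallback, with hypothetical pool and helper names
(the real plumbing lives in the sync committee message pool):
```nim
# Hypothetical sketch: derive a contribution for (slot, root, subcommittee)
# from individual sync committee messages when no aggregate was received.
var contribution = SyncCommitteeContribution(
  slot: slot,
  beacon_block_root: root,
  subcommittee_index: subcommitteeIdx)
var sigs: seq[CookedSig]
for msg in pool.syncMessages(slot, root, subcommitteeIdx):  # hypothetical iterator
  contribution.aggregation_bits.setBit(msg.positionInSubcommittee)
  sigs.add msg.signature
if sigs.len > 0:
  contribution.signature = aggregateAll(sigs).toValidatorSig()  # hypothetical helper
```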
Addresses two inaccuracies in the light client data size documentation:
1. `SyncCommittee` pubkeys serialize to 48 bytes each, not 64 bytes
2. Some of the estimates used 1000 instead of 1024 bytes/KB; aligned to 1024
For example, the 512 `SyncCommittee` pubkeys serialize to 512 × 48 = 24576
bytes, i.e., exactly 24 KB at 1024 bytes/KB.
Bellatrix light client data does not contain the EL block hash, so we
had to follow block gossip to learn the EL `block_hash` of such blocks.
Now that Bellatrix is obsolete, we can simplify EL syncing logic under
light client scenarios. Bellatrix light client data can still be used
to advance the light client sync itself, but will no longer result in
`engine_forkchoiceUpdated` calls until the sync reaches Capella. This
also frees up some memory as we no longer have to retain blocks.
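A minimal sketch of the resulting gating, using a hypothetical `elManager`
call (the fork dispatch helpers exist in the codebase; the call shape here
is illustrative):
```nim
# Bellatrix light client data lacks the EL block hash, so only forward
# fork choice updates once the sync has reached Capella data or later.
withForkyHeader(header):
  when lcDataFork >= LightClientDataFork.Capella:
    # Capella onward embeds the execution payload header, so the EL
    # `block_hash` is known without following block gossip.
    await elManager.forkchoiceUpdated(forkyHeader.execution.block_hash)  # hypothetical
```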
Using `let contextFork = consensusFork` no longer seems to work to avoid
capturing the `var` loop variable; it ends up being `Electra` for all
handlers. Use `closureScope` as a more sustainable fix.
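A self-contained illustration of the gotcha and the fix (enum values only
for demonstration):
```nim
type ConsensusFork = enum
  Phase0, Altair, Bellatrix, Capella, Deneb, Electra

var handlers: seq[proc() {.closure.}]
for consensusFork in ConsensusFork:
  closureScope:  # system template: each closure gets its own environment
    let contextFork = consensusFork
    handlers.add(proc() = echo contextFork)

for handler in handlers:
  handler()  # prints Phase0 .. Electra, not Electra six times
```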
`ValidatorSig` uses `blob` while `TrustedSig` uses `data`; aligning the
names reduces code duplication and improves clarity. It also simplifies
`StableContainer` compatibility checks.
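A self-contained sketch of why the matching names help (array sizes
illustrative):
```nim
type
  ValidatorSig = object
    blob: array[96, byte]
  TrustedSig = object
    blob: array[96, byte]  # previously `data`; renamed to match ValidatorSig

template toRaw(sig: ValidatorSig | TrustedSig): array[96, byte] =
  sig.blob  # one generic helper instead of per-type duplicates
```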
In nim-web3, all `std.Option` uses are replaced by `results.Opt`. The same
goes for nim-eth, with additional field name changes and `GasInt` changed
from `int64` to `uint64`.
* Beacon node side implementation.
* Validator client side implementation.
* Address review comments and fix the test.
* Only 400 errors can be `IndexedErrorMessage`; 500 errors are always `ErrorMessage`.
* Remove VC shutdown functionality.
* Remove magic constants.
* Make arguments more visible and disable default values.
* Address review comments.
`sizeof` also includes padding between fields, while SSZ defines
`fixedPortionSize` (on a type) or `sszSize` (on a value) to denote the
number of bytes required to encode. Switch forked block/state readers to the SSZ size.
As blocks/states are much larger than the padding, this doesn't affect
practical use cases but is slightly more correct this way.
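A self-contained example of the discrepancy (the SSZ sizes in the comments
follow from the SSZ fixed-size rules):
```nim
type Example = object
  a: uint8    # 1 byte in SSZ
  b: uint64   # 8 bytes in SSZ

echo sizeof(Example)  # typically 16: includes 7 bytes of alignment padding
# with ssz_serialization: fixedPortionSize(Example) == 9 == sszSize(value)
```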
* electra attestation updates
In Electra, we have two attestation formats: on-chain and on-network -
the former combines all committees of a slot in a single committee bit
list.
This PR makes a number of cleanups to move towards fixing this;
attestation packing, however, still needs to be fixed, as it currently
creates attestations with a single committee only, which is very
inefficient.
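A simplified sketch of the two shapes (constants from the mainnet preset;
the real types use SSZ bit lists rather than plain Nim containers):
```nim
const
  MAX_COMMITTEES_PER_SLOT = 64
  MAX_VALIDATORS_PER_COMMITTEE = 2048

type
  # on-network: an aggregate covers a single committee
  NetworkAggregate = object
    committee_index: uint64
    aggregation_bits: seq[bool]  # <= MAX_VALIDATORS_PER_COMMITTEE entries

  # on-chain: all committees of a slot folded into one attestation
  OnChainAttestation = object
    committee_bits: array[MAX_COMMITTEES_PER_SLOT, bool]
    aggregation_bits: seq[bool]  # concatenated bits of every set committee
```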
* more attestations in the blocks
* signing and aggregation fixes
* tool fix
* test, import
* Make listen-address default to use dualstack.
* Use correct newProtocol().
* Bump nim-eth.
* Bump nim-eth one more time.
* Use `*` instead of IPv6 address for dualstack sockets.
* Bump chronos and nim-eth.
* Use new constructor.
* Fix: `listenAddress` should be `Opt[T]`, not `Option[T]`.
* Fix options.md.
- add support for setting protocol handlers with `{.raises.}` annotation
- fix: valueOr and withValue utilities
- fix: remove explicit param from GossipSubParams constructor
Add support for using an era file for the initial checkpoint block.
This should also avoid an error when the beacon node is restarted
before the backfill process has made any progress (#6059).
The `<` function to compare peers was not exported, leading to the same
peer being acquired over and over again until kicked. `mixin` doesn't pull
it into `peerCmp` without the `*` export, and with the export no `mixin` is
needed.
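A minimal sketch of the fix, with a toy `Peer` type:
```nim
type Peer = object
  score: int

proc `<`*(a, b: Peer): bool =
  # the `*` export makes the symbol visible to generic code in other
  # modules; `mixin` alone cannot find an unexported symbol
  a.score < b.score
```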
The `wss_sim` has not been properly maintained since Bellatrix. The missing
functionality is now added, including:
- Bellatrix: Connect to an EL for execution payload production
- Capella: Correct withdrawals processing, which is mandatory
- Deneb: Dump blob sidecars into the output directory
See https://ethresear.ch/t/insecura-my-consensus-for-the-pyrmont-network/11833
Iterating peers should only yield peers present in the registry; otherwise
`nil` pointers are returned and, depending on the comparison function, this
will break, see #6149.
When initializing from a state that's not aligned to an epoch boundary,
an earlier state is loaded that's epoch aligned, and subsequently topped
up with the missing blocks. `dag.headSyncCommittee` is initialized prior
to topping up the missing blocks, though. If the sync committee changes
while applying the blocks (e.g., a sync committee period boundary hits),
the cached information becomes unlinked from `dag.head`, leading to
valid blocks based on that chain being rejected. To fix this, move cache
initialization after the top up with blocks. This has been observed on
Goerli by initializing from 7919502 and attempting to top up 7920111.
The block gets rejected with an invalid state root on nodes that have
restarted after setting 7920111 as head, while it gets accepted by all
other nodes. Error message is `block: state root verification failed`.
The incorrect initialization behaviour was introduced in #4592, before
which the sync committee cache was initialized after applying blocks.
The fallback for when the blobless quarantine contains a block with all
blobs available modifies the collection while iterating over it, potentially
asserting if that path is reached. Using a second loop to process this
situation resolves that.
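A self-contained sketch of the two-pass pattern, with a plain table
standing in for the blobless quarantine:
```nim
import std/tables

var blobless = {"rootA": "blckA", "rootB": "blckB"}.toTable
var ready: seq[string]

for root, blck in blobless:
  if blck.len > 0:    # stand-in for "all blobs are now available"
    ready.add root    # record only; deleting here could assert

for root in ready:
  blobless.del root   # safe: the iteration has finished
```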
`batchVerify`'s precondition is a non-empty signature list:
```nim
if input.len == 0:
# Spec precondition
return false
```
This means that in eras without any blocks (as has happened on Goerli),
calling it leads to era files being reported as invalid.
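A minimal sketch of the fix, with a hypothetical call shape around the
quoted precondition:
```nim
# Treat an era without blocks as trivially valid instead of tripping
# batchVerify's empty-input precondition.
if sigs.len > 0:
  if not batchVerify(sigs):  # hypothetical call shape
    return err("era: signature verification failed")
ok()  # no blocks -> nothing to verify -> era accepted
```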
When the network is partitioned for a long time, e.g., Goerli, branches
start forming where different peers have distinct views about the chain
state. The current syncing solution with sync manager doesn't handle the
case well, as it is optimized for a healthy network where syncing can be
parallelized across different peers. To support the sync manager in
discovering additional branches, a new module is added that pulls in
histories from peers on unknown branches, working backwards.
There can be situations where proposals need to be made on top of a stale
parent state. Because the distance to the wall slot is large, the proposal
state (`getProposalState`) has to be computed incrementally to avoid lag
spikes. Proposals have to be timely to have a chance to propagate on the
network. Peak memory requirements don't change, but the proposal state
needs to be allocated for a longer duration than usual.
Using a dedicated branch for researching the effectiveness of split view
scenario handling simplifies testing and avoids having partial work on
`unstable`. If we want, we can reintroduce it under a `--debug` flag at
a later time. But for now, Goerli is a rare opportunity to test this,
maybe just for another week or so.
- https://github.com/status-im/infra-nimbus/pull/179
In a split view situation, the canonical chain may only be served by a
tiny number of peers, and branches may span long durations. Minority
branches may still have a large weight from attestations and should
be discovered. To assist with that, add a branch discovery module that
assists in such a situation by specifically targeting peers with unknown
histories and downloading from them, in addition to sync manager work
which handles popular branches.