* add csc to enr and metadata
* add column filtering into RequestManager
* nits
* add comment
* resolved reviews 1
* added local custody column set into RequestManager as a field
* faster lookups with hashsets (see the sketch after this list)
* fix regressions, fix other reviews, fix response checking for columns
* simpler fix for hashsets
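A minimal sketch of how the custody column set and hashset lookups could fit together; the names `RequestManager`, `ColumnIndex` and `localCustodyColumns` are illustrative here, not the actual nimbus-eth2 definitions:

```nim
import std/sets

type
  ColumnIndex = uint64
  RequestManager = object
    # Local custody column set kept as a field for O(1) membership checks.
    localCustodyColumns: HashSet[ColumnIndex]

proc filterColumns(rman: RequestManager,
                   wanted: openArray[ColumnIndex]): seq[ColumnIndex] =
  # Only keep columns that fall within our local custody set, e.g. when
  # deciding what to request or when checking a peer's response.
  for col in wanted:
    if col in rman.localCustodyColumns:
      result.add col

let rman = RequestManager(
  localCustodyColumns: toHashSet([0'u64, 1, 5, 9]))
echo rman.filterColumns([1'u64, 2, 5, 7])  # -> @[1, 5]
```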
* remove option to select Capella fork choice algo
With Deneb having run stable for quite a while now, it's time to remove
the option to select the prior fork choice algo from Capella.
* also remove usage from test
To avoid "forked" types creeping into `BlobSidecar`, move the reduction
to `BlobSidecarInfoObject` to the sole caller. The info object is fork
agnostic, so it will not need a "forked" variant even if `BlobSidecar`
changes in a future fork.
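A rough, generic illustration of that pattern; the fork names and fields below are made up, not the actual `BlobSidecar` / `BlobSidecarInfoObject` definitions:

```nim
type
  Fork = enum forkA, forkB
  ForkedSidecar = object    # fork-aware wrapper stays at the call site
    case kind: Fork
    of forkA:
      dataA: int
    of forkB:
      dataB: int
  InfoObject = object       # fork-agnostic summary type
    value: int

proc toInfoObject(item: ForkedSidecar): InfoObject =
  # The reduction lives in the sole caller, so InfoObject itself never
  # has to learn about forks, even if the wrapped type gains new ones.
  case item.kind
  of forkA: InfoObject(value: item.dataA)
  of forkB: InfoObject(value: item.dataB)

echo toInfoObject(ForkedSidecar(kind: forkB, dataB: 42))
```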
Bellatrix light client data does not contain the EL block hash, so we
had to follow block gossip to learn the EL `block_hash` of such blocks.
Now that Bellatrix is obsolete, we can simplify EL syncing logic under
light client scenarios. Bellatrix light client data can still be used
to advance the light client sync itself, but will no longer result in
`engine_forkchoiceUpdated` calls until the sync reaches Capella. This
also frees up some memory as we no longer have to retain blocks.
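A hedged sketch of the gating this enables; `LcHeader`, `executionBlockHash` and `notifyForkchoiceUpdated` are stand-in names, not the actual nimbus-eth2 identifiers:

```nim
import std/options

type
  BlockHash = array[32, byte]
  LcHeader = object
    slot: uint64
    # Bellatrix light client headers carry no EL block hash;
    # Capella and later headers do.
    executionBlockHash: Option[BlockHash]

proc notifyForkchoiceUpdated(blockHash: BlockHash) =
  echo "engine_forkchoiceUpdated with head ", blockHash[0]

proc onOptimisticLcHeader(header: LcHeader) =
  # Light client sync itself still advances on Bellatrix data, but EL
  # syncing only starts once a header exposes its execution block hash.
  if header.executionBlockHash.isSome:
    notifyForkchoiceUpdated(header.executionBlockHash.get)
  else:
    echo "pre-Capella header at slot ", header.slot, ", no EL call"

onOptimisticLcHeader(LcHeader(slot: 123))  # Bellatrix-style: no EL call
```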
* Beacon node side implementation.
* Validator client side implementation.
* Address review comments and fix the test.
* Only 400 errors can be IndexedErrorMessage; 500 errors are always ErrorMessage (see the sketch after this list).
* Remove VC shutdown functionality.
* Remove magic constants.
* Make arguments more visible and disable default values.
* Address review comments.
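A small sketch of that decoding rule; the simplified enum below only hints at the beacon API error schemas:

```nim
type
  ErrorKind = enum plainError, indexedError

proc possibleErrorKinds(status: int): set[ErrorKind] =
  # Only 4xx responses may carry per-item failures (IndexedErrorMessage);
  # 5xx responses always decode as a plain ErrorMessage.
  if status div 100 == 4:
    {plainError, indexedError}
  else:
    {plainError}

echo possibleErrorKinds(400)  # {plainError, indexedError}
echo possibleErrorKinds(500)  # {plainError}
```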
Using a dedicated branch for researching the effectiveness of split view
scenario handling simplifies testing and avoids having partial work on
`unstable`. If we want, we can reintroduce it under a `--debug` flag at
a later time. But for now, Goerli is a rare opportunity to test this,
maybe just for another week or so.
- https://github.com/status-im/infra-nimbus/pull/179
In a split view situation, the canonical chain may only be served by a
tiny number of peers, and branches may span long durations. Minority
branches may still have a large weight from attestations and should
be discovered. To assist with that, add a branch discovery module that
specifically targets peers with unknown histories and downloads from
them, in addition to the sync manager work which handles popular
branches.
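An illustrative sketch of the targeting idea; `Peer`, `statusHead` and the download call are hypothetical names, not the actual branch discovery API:

```nim
import std/sets

type
  BlockRoot = string
  Peer = object
    id: string
    statusHead: BlockRoot   # head root the peer advertised in its Status

proc discoverBranches(peers: seq[Peer], knownHeads: HashSet[BlockRoot]) =
  # The sync manager already follows popular branches; branch discovery
  # specifically targets peers whose advertised head cannot be connected
  # to any known history and backfills their branch from them.
  for p in peers:
    if p.statusHead notin knownHeads:
      echo "downloading unknown branch from peer ", p.id

discoverBranches(
  @[Peer(id: "A", statusHead: "0xcanonical"),
    Peer(id: "B", statusHead: "0xminority")],
  toHashSet(["0xcanonical"]))
```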
There are situations where all states in the `blockchain_dag` are
occupied and cannot be borrowed.
- headState: Many places in the code assume that it is never advanced
- clearanceState: Resets every time a new block gets imported, including
blocks from non-canonical branches
- epochRefState: Used even more frequently than clearanceState
This means that during the catch-up mechanism, where the head state is
slowly advanced towards the wall clock to catch up on validator duties
while the canonical head is way behind non-canonical heads,
we cannot use any of the three existing states. In that situation,
Nimbus already consumes an increased amount of memory due to all the
`BlockRef`, fork choice states and so on, so experience is degraded.
It seems reasonable to allocate a fourth state temporarily during that
mechanism, until a new proposal can be made on the canonical chain.
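A very rough sketch of how such a temporarily allocated state could look; `StateData`, `catchUpState` and `advanceTowardsWallClock` are hypothetical names, not the dag API:

```nim
import std/options

type
  StateData = object
    slot: uint64
  Dag = object
    headState, clearanceState, epochRefState: StateData
    # Fourth state, only materialised while the head is being advanced
    # towards the wall clock, and released after the next proposal.
    catchUpState: Option[ref StateData]

proc advanceTowardsWallClock(dag: var Dag, wallSlot: uint64) =
  if dag.catchUpState.isNone:
    # Allocate the extra state lazily, starting from a copy of headState.
    dag.catchUpState = some((ref StateData)(slot: dag.headState.slot))
  let st = dag.catchUpState.get
  st.slot = min(st.slot + 1, wallSlot)   # advance one slot per call

var dag = Dag(headState: StateData(slot: 100))
dag.advanceTowardsWallClock(105)
echo dag.catchUpState.get.slot   # 101
```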
Note that currently, on `unstable`, proposals _do_ happen every couple
of hours because the sync manager doesn't manage to discover additional
heads in a split-view scenario on Goerli. However, with the branch
discovery module, new blocks are discovered all the time, and the
clearanceState may no longer be borrowed as it is reset to a different
branch too often.
The extra state could also find other uses in the future, e.g., for
incremental computations as in reindexing the database, or online
collection of historical light client data.
When restarting the beacon node, orphaned blocks remain in the database,
but on startup, only the canonical chain as selected by fork choice is
loaded. When a new block is discovered that builds on top of an orphaned
block, the orphaned block is re-downloaded via the sync/request manager
despite already being present on disk. Such queries can be answered
locally to improve discovery speed of alternate forks.
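A sketch of serving such queries locally first; `getBlockFromDb` and the surrounding names are placeholders, not the actual sync/request manager interface:

```nim
import std/[options, tables]

type
  BlockRoot = string
  SignedBlock = object
    root: BlockRoot

# Stand-in for the on-disk block store; orphaned blocks survive restarts
# here even though only the canonical chain is loaded into the dag.
var db = {"0xorphan": SignedBlock(root: "0xorphan")}.toTable

proc getBlockFromDb(root: BlockRoot): Option[SignedBlock] =
  if root in db: some(db[root]) else: none(SignedBlock)

proc fetchMissingParent(root: BlockRoot): SignedBlock =
  # Check the database before asking the network, so alternate forks
  # that build on known-but-orphaned blocks resolve quickly.
  let local = getBlockFromDb(root)
  if local.isSome:
    result = local.get
  else:
    echo "requesting ", root, " from peers"
    result = SignedBlock(root: root)

echo fetchMissingParent("0xorphan")
```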
Nimbus currently stops performing validator duties if the blockchain
does not progress for `node.config.syncHorizon` slots. This means that
the chain won't recover because no new blocks are proposed. To fix that,
continue performing validator duties if no progress is registered for a
long time, and none of our peers is indicating any progress.
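A condensed sketch of the relaxed condition; `syncHorizon` mirrors `node.config.syncHorizon`, while `headSlot` and `bestPeerHeadSlot` are illustrative:

```nim
type
  NodeView = object
    wallSlot, headSlot, bestPeerHeadSlot: uint64
    syncHorizon: uint64

proc shouldPerformDuties(n: NodeView): bool =
  # Within the sync horizon: business as usual.
  if n.wallSlot <= n.headSlot + n.syncHorizon:
    return true
  # Beyond the horizon, keep performing duties anyway when no peer
  # advertises a head ahead of ours, so a stalled chain can recover.
  n.bestPeerHeadSlot <= n.headSlot

# Chain stalled for a long time, but peers are equally stuck -> propose.
echo shouldPerformDuties(NodeView(
  wallSlot: 2000, headSlot: 1000, bestPeerHeadSlot: 1000, syncHorizon: 50))
```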
#6087 introduced a subtle change to `nim-web3`, resulting in `Gwei` being
serialized differently than before. Using a `distinct` type for `Gwei`
improves type safety and avoids such problems in the future.
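A minimal illustration of the `distinct` approach (the real definition and its serialization hooks live in `nim-web3` / nimbus-eth2):

```nim
type Gwei = distinct uint64

# Borrow only the operations we want; everything else is a type error.
proc `+`(a, b: Gwei): Gwei {.borrow.}
proc `$`(v: Gwei): string {.borrow.}

let balance = Gwei(32_000_000_000)
# let bad = balance + 5'u64   # rejected: uint64 and Gwei no longer mix
echo balance + Gwei(1)        # 32000000001
```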