Normally, running LC and DAG sync at the same time is fine, but on a tiny
devnet where some peers may not support the LC data, a peer can end up
disconnected even while the DAG is in sync: DAG sync never issues any
req/resp on a local devnet (perfect network conditions), so the stuck LC
sync removes the peer over the course of a few minutes.
We don't need to actively sync the LC from the network if the DAG is
already synced, which prevents this specific low-peer devnet issue
(others remain). The LC is still updated locally whenever the DAG's
finalized checkpoint advances.
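A minimal sketch of the resulting gating condition; the function and
parameter names are illustrative, not the actual Nimbus code:

```python
# Minimal sketch with hypothetical names.
def should_sync_light_client_from_network(dag_is_synced: bool) -> bool:
    # Skip network-driven LC sync once the DAG is synced; the LC store is
    # then updated locally whenever the finalized checkpoint advances.
    return not dag_is_synced
```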
* Fix blob syncing for Electra
`BlobSidecar` requests on libp2p have a context prefix based on:
> The `<context-bytes>` field is calculated as context =
> `compute_fork_digest(fork_version, genesis_validators_root)`
We currently only process blobs if the context indicates Deneb, meaning
that on Electra we incorrectly report `InvalidContextBytes` and refuse to
process the blob response data.
Fix this, and also ensure that the code no longer needs updating for
every fork that leaves blobs unchanged.
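A rough sketch of the relaxed check, assuming the set of valid digests is
precomputed from the fork schedule via `compute_fork_digest` as quoted
above (names are placeholders, not the actual Nimbus code):

```python
# Sketch only: accept the digest of any fork that supports blobs (Deneb and
# later) instead of the Deneb digest alone. `blob_fork_digests` is assumed to
# be derived from the fork schedule, so new forks need no code changes here.
def check_blob_context(context_bytes: bytes, blob_fork_digests: set) -> None:
    if context_bytes not in blob_fork_digests:
        raise ValueError("InvalidContextBytes")
```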
* Fix potential assertion in blobless quarantine fallback
The fallback for when the blobless quarantine contains a block with all
of its blobs modifies the collection while iterating over it, which can
trigger an assertion if that path is reachable.
Using a second loop to process this situation resolves that.
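The pattern of the fix, as a generic sketch with placeholder names:

```python
# Sketch of the two-pass approach; `quarantine` maps block roots to entries,
# and `has_all_blobs` is a caller-supplied predicate (both hypothetical).
def pop_ready_entries(quarantine: dict, has_all_blobs) -> list:
    # First loop: collect matching roots without mutating the collection.
    ready = [root for root, entry in quarantine.items() if has_all_blobs(entry)]
    # Second loop: mutate only after iteration has finished, avoiding the
    # "collection modified while iterating" assertion.
    return [quarantine.pop(root) for root in ready]
```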
When the network is partitioned for a long time, e.g., on Goerli,
branches start forming where different peers have distinct views of the
chain state. The current sync manager based solution doesn't handle this
case well, as it is optimized for a healthy network where syncing can be
parallelized across different peers. To support the sync manager in
discovering additional branches, a new module is added that pulls in
histories from peers on unknown branches, working backwards.
Using a dedicated branch for researching the effectiveness of split-view
scenario handling simplifies testing and avoids having partial work on
`unstable`. If we want, we can reintroduce it under a `--debug` flag at
a later time. But for now, Goerli is a rare opportunity to test this,
maybe just for another week or so.
- https://github.com/status-im/infra-nimbus/pull/179
In a split-view situation, the canonical chain may only be served by a
tiny number of peers, and branches may span long durations. Minority
branches may still carry significant attestation weight and should be
discovered. To assist with that, add a branch discovery module that
specifically targets peers with unknown histories and downloads from
them, in addition to the sync manager, which handles popular branches.
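Conceptually, the module walks an unknown head backwards until it meets a
locally known ancestor; a sketch under assumed helper names (not the
actual module API):

```python
# Conceptual sketch with hypothetical callbacks: `is_known_locally` checks the
# local DAG/database, `request_block_by_root` performs a byRoot request.
def discover_branch(peer, head_root, is_known_locally, request_block_by_root):
    branch = []
    root = head_root
    while not is_known_locally(root):
        block = request_block_by_root(peer, root)
        if block is None:
            return []  # peer could not serve this branch
        branch.append(block)
        root = block.parent_root
    branch.reverse()  # process from the oldest missing ancestor to the head
    return branch
```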
When checking for `MissingParent`, it may be that the parent block was
already discovered as part of a prior run. In that case, it can be
loaded from storage and processed without having to rediscover the
entire branch from the network. This is similar to #6112 but for blocks
that are discovered via gossip / sync mgr instead of via request mgr.
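The idea, as a small sketch with hypothetical helpers:

```python
# Sketch: try the local database before going to the network; the parent may
# have been stored in a previous run without being part of the loaded DAG.
def resolve_missing_parent(parent_root, db_get_block, request_from_network):
    block = db_get_block(parent_root)
    if block is not None:
        return block  # process locally, no network round trip needed
    return request_from_network(parent_root)
```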
During sync, we can skip the `blobSidecarsByRange` request when none of
the blocks in the range have `kzg_commitments`. This avoids running into
throttling from peers during long periods of non-finality.
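A sketch of the check, assuming post-Deneb block bodies with a
`blob_kzg_commitments` field (names illustrative):

```python
# Only issue blobSidecarsByRange when at least one block in the range
# actually carries KZG commitments.
def needs_blob_request(blocks) -> bool:
    return any(len(b.body.blob_kzg_commitments) > 0 for b in blocks)
```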
Each individual blob currently uses as much quota from the network limit
as an entire block does (128 items per second, shared across all peers).
Blobs are 128 KB each instead of up to several MB and are simpler to
encode. There can be multiple per block (currently 6), so allow 2000
blobs per second across all peers. That decreases the cost per block
from `3125 + 3125 * blobs.len` quota (= `[3125, 21875]`) to a lower
`3125 + 200 * blobs.len` quota (= `[3125, 4325]`), accounting for the
slight increase in data transfer and encoding time.
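A worked example of the cost change, using the constants from the
paragraph above:

```python
# A block costs 3125 quota; each blob previously also cost 3125, now 200.
def response_cost(num_blobs: int, blob_cost: int) -> int:
    return 3125 + blob_cost * num_blobs

assert response_cost(6, 3125) == 21875  # old cost for a block with 6 blobs
assert response_cost(6, 200) == 4325    # new cost for a block with 6 blobs
```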
When restarting the beacon node, orphaned blocks remain in the database,
but on startup only the canonical chain as selected by fork choice is
loaded. When a new block is discovered that builds on top of an orphaned
block, the orphaned block is re-downloaded using the sync/request
manager, despite already being present on disk. Such queries can be
answered locally to improve the discovery speed of alternate forks.