* reworked some of the DAS core specs, PR'd to check whether the conflicting type issue is specific to my machine or not
* bumped nim-blscurve to 9c6e80c6109133c0af3025654f5a8820282cff05, same as unstable
* bumped nim-eth2-scenarios and nim-nat-traversal to par with unstable, added more patches, made the peerdas devnet branch backward compatible, peerdas now passes the new SSZ tests as per alpha3, disabled Electra fixture tests as the branch hasn't been rebased for a while
* refactor test fixture files
* rm: serializeDataColumn
* refactor: moved data columns extracted from blobs during block proposal to the heap
* disable blob broadcast in peerdas devnet
* fix addBlock in message router
* fix: data column iterator
* added debug checkpoints to check CI
* refactor if else conditions
* add: updated DAS core specs to alpha 3; unit tests pass
The fallback for when the blobless quarantine contains a block that
already has all of its blobs modifies the collection while iterating
over it, potentially hitting an assertion if that path is reachable.
Processing such blocks in a second loop resolves that.
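A minimal sketch of the two-pass pattern, with a `Table` standing in for
the blobless quarantine; all names here are illustrative:

```nim
import std/tables

type BlockRoot = int  # stand-in for an Eth2 block root

proc hasAllBlobs(root: BlockRoot): bool =
  root mod 2 == 0  # placeholder predicate

var blobless = {1: "block A", 2: "block B"}.toTable

# pass 1: collect matching roots without mutating the collection
var ready: seq[BlockRoot]
for root in blobless.keys:
  if hasAllBlobs(root):
    ready.add root

# pass 2: now it is safe to remove entries and process the blocks
for root in ready:
  blobless.del root
```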
Using a dedicated branch for researching the effectiveness of split view
scenario handling simplifies testing and avoids having partial work on
`unstable`. If we want, we can reintroduce it under a `--debug` flag at
a later time. But for now, Goerli is a rare opportunity to test this,
maybe just for another week or so.
- https://github.com/status-im/infra-nimbus/pull/179
In a split view situation, the canonical chain may only be served by a
small number of peers, and branches may span long durations. Minority
branches may still have a large weight from attestations and should be
discovered. To assist with that, add a branch discovery module that
specifically targets peers with unknown histories and downloads from
them, in addition to the sync manager work which handles popular
branches.
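A rough sketch of the peer-targeting idea; `Root`, `Peer` and the set of
known roots are stand-ins, not types or names from the actual module:

```nim
import std/sets

type
  Root = string
  Peer = object
    id: string
    headRoot: Root   # head advertised by the peer via Status

# roots of blocks we already have on some known branch
let knownRoots = ["a", "b"].toHashSet

proc wantsBranchDiscovery(p: Peer): bool =
  # a peer whose advertised head cannot be connected to any known branch
  # is a candidate for targeted download of its history
  p.headRoot notin knownRoots

for p in [Peer(id: "p1", headRoot: "a"), Peer(id: "p2", headRoot: "z")]:
  if wantsBranchDiscovery(p):
    echo "targeting ", p.id, " for branch discovery"   # only p2
```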
When checking for `MissingParent`, it may be that the parent block was
already discovered as part of a prior run. In that case, it can be
loaded from storage and processed without having to rediscover the
entire branch from the network. This is similar to #6112 but for blocks
that are discovered via gossip / sync mgr instead of via request mgr.
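A sketch of the storage-first lookup, with a `Table` playing the role of
the on-disk block store; none of these names are the actual API:

```nim
import std/[options, tables]

type
  Root = string
  BlockData = string

# stand-in for the database of previously downloaded blocks
let db = {"0xparent": "parent block bytes"}.toTable

proc getStoredBlock(root: Root): Option[BlockData] =
  if root in db: some(db[root]) else: none(BlockData)

proc onMissingParent(parentRoot: Root) =
  let stored = getStoredBlock(parentRoot)
  if stored.isSome:
    # the parent was already fetched in a prior run: process it from disk
    echo "processing stored parent: ", stored.get
  else:
    # only now fall back to rediscovering the branch from the network
    echo "requesting ", parentRoot, " from peers"

onMissingParent("0xparent")
```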
During sync, we can skip the `blobSidecarsByRange` request when none of
the blocks in the range carry `kzg_commitments`. This avoids running
into throttling from peers during long periods of non-finality.
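A minimal sketch of the check, using a stub block type; only the idea of
counting commitments mirrors the real code:

```nim
type BlockStub = object
  blobKzgCommitmentCount: int   # number of blob commitments in the block body

func needsBlobSidecars(blocks: openArray[BlockStub]): bool =
  # only issue blobSidecarsByRange if at least one block carries commitments
  for blck in blocks:
    if blck.blobKzgCommitmentCount > 0:
      return true
  false

assert not needsBlobSidecars([BlockStub(blobKzgCommitmentCount: 0)])
assert needsBlobSidecars([BlockStub(blobKzgCommitmentCount: 3)])
```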
Each individual blob currently uses as much quota from the network limit
as an entire block does, 128 items per second shared across all peers.
Blobs are 128 KB each instead of up to several MB and are simpler to
encode. There can be multiple per block (6 currently), so allow 2000
blobs per second across all peers. That decreases the cost per block
from `3125 + 3125 * blobs.len` quota (= `[3125, 21875]`) to a lower
`3125 + 200 * blobs.len` quota (= `[3125, 4325]`), accounting for the
slight increase in data transfer and encoding time.
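Working the numbers with the constants above (assuming at most 6 blobs
per block):

```nim
const
  blockQuota   = 3125   # cost of one block
  oldBlobQuota = 3125   # previous per-blob cost (same as a full block)
  newBlobQuota = 200    # new per-blob cost

func costPerBlock(blobQuota, numBlobs: int): int =
  blockQuota + blobQuota * numBlobs

assert costPerBlock(oldBlobQuota, 6) == 21875  # old worst case
assert costPerBlock(newBlobQuota, 6) == 4325   # new worst case
```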
When restarting the beacon node, orphaned blocks remain in the database,
but on startup only the canonical chain as selected by fork choice is
loaded. When a new block is discovered that builds on top of an orphaned
block, the orphaned block is re-downloaded via the sync/request manager
despite already being present on disk. Such queries can be answered
locally, improving the discovery speed of alternate forks.
Avoid marking blocks invalid when the corresponding
`blobSidecarsByRange` request returns an incomplete / incorrect response
while syncing. The block itself may still be valid in that scenario.
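A sketch of the distinction, with an illustrative verdict type; only the
rule that a short response does not imply an invalid block reflects the
change:

```nim
type SyncVerdict = enum
  svOk,             # all expected sidecars present
  svRetryElsewhere  # bad / incomplete response: blame the peer, not the block

func classifyBlobResponse(expected, received: int): SyncVerdict =
  # an incomplete blobSidecarsByRange reply says nothing about the block
  # itself; keep the block pending and try another peer
  if received == expected: svOk
  else: svRetryElsewhere

assert classifyBlobResponse(6, 4) == svRetryElsewhere
```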
* Avoid global in p2p macro (fixes #4578)
* copy p2p macro to this repo and start de-crufting it
* make protocol registration dynamic, removing light client hacks et al
* split out light client protocol into its own file
* cleanups
* Option -> Opt
* remove more cruft
* further split beacon_sync
this allows the light client to respond to peer metadata messages
without exposing the block sync protocol
* better protocol init
* "constant" protocol index
* avoid casts
* copyright
* move some discovery code to discovery
* avoid extraneous data copy when sending chunks
* remove redundant forkdigest field
* document how to connect to a specific peer
* make `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` configurable
Gnosis uses a custom `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` to account
for its faster slot timing, so that blobs remain available for roughly
the same amount of real time.
Also extend the REST config endpoint with the full config from
`v1.4.0-beta.4`, and extend compatibility checks when loading configs to
reduce warnings.
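To illustrate the scaling: the mainnet numbers below are the spec
defaults, while the Gnosis slot/epoch timing is an assumption here and
the actual Gnosis config value may differ:

```nim
const
  mainnetMinEpochs = 4096        # MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS default
  mainnetEpochSecs = 32 * 12     # SLOTS_PER_EPOCH * SECONDS_PER_SLOT
  gnosisEpochSecs  = 16 * 5      # assumed Gnosis timing: 16 slots of 5 s

# real-time retention window targeted by mainnet, in seconds (~18.2 days)
const retentionSecs = mainnetMinEpochs * mainnetEpochSecs

# epochs needed with the faster timing to keep roughly the same window
echo retentionSecs div gnosisEpochSecs   # 19660, ~4.8x the mainnet epoch count
```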
Directly initialize `ForkedLightClientObj` instead of separately first
setting the `kind` (initializing everything to zero) and then assigning
the forky data after that.
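In Nim terms, this is the difference between the two styles below, using
a simplified stand-in for `ForkedLightClientObj`:

```nim
type
  DemoFork = enum
    forkAltair, forkCapella
  ForkedDemo = object
    case kind: DemoFork
    of forkAltair: altairData: int
    of forkCapella: capellaData: string

# before: set the kind first (branch fields are zero-initialized), then
# assign the forky data in a second step
var a = ForkedDemo(kind: forkCapella)
a.capellaData = "forky data"

# after: construct the variant in one step with the branch data in place
let b = ForkedDemo(kind: forkCapella, capellaData: "forky data")
```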
When the request manager is busy fetching blocks, the queue might get
filled with multiple entries for the same root - since there is no
deduplication, requests containing the same root multiple times will be
sent out.
Also, because items can sit in the queue for a long time, a request
might already be stale by the time the manager is done with the previous
one.
This PR removes the queue and instead fetches the blocks to download
directly from the quarantine, which solves both problems (the quarantine
already de-duplicates and is free of stale information).
Removing the queue for blobs is left for a future PR.
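A sketch of the new flow, with a `HashSet` standing in for the
quarantine's set of missing block roots; the names are illustrative:

```nim
import std/sets

let missing = ["a", "b", "a"].toHashSet   # already de-duplicated: {"a", "b"}

proc fetchMissingBlocks(roots: HashSet[string]) =
  # consult the quarantine at request time instead of draining a queue,
  # so the roots are always current and free of stale entries
  for root in roots:
    echo "requesting ", root

fetchMissingBlocks(missing)
```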
Co-authored-by: tersec <tersec@users.noreply.github.com>
The helper function that computes the delay until the next light client
sync task can be useful in more general-purpose contexts. Move it to
helpers, and change it to return `Duration` instead of `BeaconTime` for
flexibility.
- When syncing `LightClientUpdatesByRange` and the peer replies with
fewer periods than requested, no need to delay the next request.
- When `FinalityUpdate` / `OptimisticUpdate` sync fails,
no need to retry immediately.
The `UpdatesByRange` API takes `startPeriod` / `count`, but is
internally called with a `Slice`. Move the logic that converts from the
`Slice` into the caller, to reduce complexity inside the `doRequest`
function.
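A minimal sketch of the conversion now done at the call site; the period
type here is a plain alias standing in for the real one:

```nim
type
  SyncCommitteePeriod = uint64
  RangeRequest = tuple[startPeriod: SyncCommitteePeriod, count: uint64]

func toRequestParams(periods: Slice[SyncCommitteePeriod]): RangeRequest =
  # an inclusive period slice a..b maps to (startPeriod = a, count = b - a + 1)
  result.startPeriod = periods.a
  result.count = periods.b - periods.a + 1

let (startPeriod, count) =
  toRequestParams(SyncCommitteePeriod(5) .. SyncCommitteePeriod(7))
assert startPeriod == 5 and count == 3
```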