During sync, we can skip the `blobSidecarsByRange` request when there
are no blocks with `kzg_commitments` in the block data. This avoids
running into throttling from peers during long periods of non-finality.
Avoid marking blocks invalid when the corresponding `blobSidecarsByRange`
request returns an incomplete / incorrect response while syncing. The block
itself may still be valid in that scenario.
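A minimal sketch of the skip condition described above, using placeholder
types rather than the actual nimbus-eth2 ones:

```nim
type
  SignedBlockSummary = object   # placeholder, not the real block type
    slot: uint64
    kzgCommitmentCount: int     # number of `kzg_commitments` in the body

# Only issue `blobSidecarsByRange` if at least one block in the batch
# actually carries KZG commitments.
func hasBlobs(blocks: openArray[SignedBlockSummary]): bool =
  for blck in blocks:
    if blck.kzgCommitmentCount > 0:
      return true
  # `result` defaults to false
```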
* Avoid global in p2p macro (fixes #4578)
* copy p2p macro to this repo and start de-crufting it
* make protocol registration dynamic, removing light client hacks et al
* split out light client protocol into its own file
* cleanups
* Option -> Opt
* remove more cruft
* further split beacon_sync
this allows the light client to respond to peer metadata messages
without exposing the block sync protocol
* better protocol init
* "constant" protocol index
* avoid casts
* copyright
* move some discovery code to discovery
* avoid extraneous data copy when sending chunks
* remove redundant forkdigest field
* document how to connect to a specific peer
* make `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` configurable
Gnosis uses a custom `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` to account
for its faster slot timing, so that blobs still remain available for
roughly the same amount of real time.
Also extend the REST config endpoint with the full config from `v1.4.0-beta.4`,
and extend compatibility checks when loading configs to reduce warnings.
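For intuition, the retention window in wall-clock time is simply the epoch
count multiplied by the network's slot timing; a small illustrative helper
(mainnet values shown, not the actual config code):

```nim
# Approximate wall-clock retention for blob sidecars given a network's
# timing parameters; Gnosis overrides these values.
func blobRetentionSeconds(minEpochs, slotsPerEpoch, secondsPerSlot: uint64): uint64 =
  minEpochs * slotsPerEpoch * secondsPerSlot

when isMainModule:
  # mainnet: 4096 epochs * 32 slots * 12 s, roughly 18 days
  echo blobRetentionSeconds(4096, 32, 12)
```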
* Update sync to use post-decoupling RPCs
blob_sidecars_by_range returns a flat list of sidecars, which must
then be grouped per slot (see the sketch after this list).
* Add test for groupBlobs
* createBlobs: convert proc to func
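A rough sketch of the per-slot grouping with placeholder types (the real
`groupBlobs` operates on the actual sidecar and block types):

```nim
import std/tables

type
  Slot = uint64
  BlobSidecarStub = object   # placeholder for the real BlobSidecar type
    slot: Slot
    index: uint64

# Group the flat `blob_sidecars_by_range` response per slot so it can be
# matched against the corresponding blocks.
func groupBySlot(sidecars: openArray[BlobSidecarStub]): Table[Slot, seq[BlobSidecarStub]] =
  result = initTable[Slot, seq[BlobSidecarStub]]()
  for sc in sidecars:
    result.mgetOrPut(sc.slot, @[]).add sc
```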
Other changes:
Renamed the `EIP_4844_FORK_*` config constants to `DENEB_FORK_*` as
this matches the latest spec and it's already used in the official
Sepolia config.
While syncing the finalized portion of the chain, the execution client
cannot efficiently sync and most of the time returns `SYNCING` - in this
PR, we use CL-verified optimistic sync as long as the block is claimed to
be finalized, only occasionally updating the EL with progress.
Although a peer might lie about what is finalized and what isn't,
eventually we'll call the execution client - thus, all a dishonest
peer can do is delay execution verification slightly. Gossip blocks in
particular are never assumed to be finalized.
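A simplified decision rule capturing this policy - names and the cadence
constant are hypothetical, not the actual implementation:

```nim
const elProgressInterval = 32   # hypothetical cadence for EL progress updates

# Decide whether a payload should be forwarded to the execution client.
func shouldCallExecutionClient(claimedFinalized: bool,
                               blocksSinceLastCall: int): bool =
  if not claimedFinalized:
    true   # non-finalized (e.g. gossip) blocks are always EL-verified
  else:
    blocksSinceLastCall >= elProgressInterval
```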
When backfilling, we only need to download blocks that are newer than
`MIN_EPOCHS_FOR_BLOCK_REQUESTS` - the rest cannot reliably be fetched from
the network and does not have to be provided to others.
This change affects only trusted-node-synced clients - genesis sync
continues to work as before (because it needs to construct a state by
building it from genesis).
Those wishing to complete a backfill should do so with era files
instead.
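For reference, the cutoff follows from the spec constant and the current
slot; a rough sketch with mainnet values (not the actual nimbus-eth2
helper):

```nim
const
  SLOTS_PER_EPOCH = 32'u64                    # mainnet
  MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024'u64   # mainnet spec value

# Oldest slot that peers are required to serve; backfilling past this
# point is skipped since the data cannot reliably be fetched anyway.
func earliestRequiredSlot(currentSlot: uint64): uint64 =
  let window = MIN_EPOCHS_FOR_BLOCK_REQUESTS * SLOTS_PER_EPOCH
  if currentSlot > window: currentSlot - window
  else: 0'u64
```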
The various `PeerScore` constants are used for both beacon blocks and
LC objects, and will likely also find use for EIP4844 blob sidecars.
Rename them to more generically applicable names that no longer refer
to blocks explicitly.
* cap maximum number of chunks to download from peer (fixes #1620)
* drop support for requesting blocks via v1 / phase0 protocol
* tighten bounds checking of fixed-size messages
The sync protocol does not distinguish between:
- All requested slots are empty
- Peer does not have data available about requested range
Therefore, we treat an EOF response to `beacon_blocks_by_range` as
valid, as if the entire epoch really contained no block for any slot.
Once a follow-up response
provides new blocks, we detect that some blocks were missing and rewind.
During backfill, we also request the known-to-exist `backfill.slot`,
so we can actually detect whether an epoch really does not have blocks
or whether a response is incomplete (`PeerScoreNoBlocks`).
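A tiny illustration of that completeness check (illustrative names only):
because the request range includes a slot already known to hold a block,
a response missing it cannot be an "empty epoch".

```nim
# `knownSlot` is known to contain a block; a response without it is
# incomplete and the peer is penalized (`PeerScoreNoBlocks`) instead of
# the range being treated as empty.
func responseLooksComplete(responseSlots: openArray[uint64],
                           knownSlot: uint64): bool =
  knownSlot in responseSlots
```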
Corrects an off-by-1 in the reported sync percentage computation.
New logic is based on `SyncQueue.total` and `SyncQueue.progress`
with `pivot` instead of `sq.startSlot`.
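In essence, the corrected percentage is a ratio of processed work to total
queued work; a simplified version (field names differ in the real
`SyncQueue`):

```nim
# Progress reported from queue totals rather than raw slot arithmetic.
func syncProgressPercent(progress, total: uint64): float =
  if total == 0: 100.0
  else: 100.0 * float(progress) / float(total)
```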
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
Introduces a new library for syncing using the libp2p-based light client
sync protocol, and adds a new `nimbus_light_client` executable that uses
this library for syncing. The new executable emits log messages when
new beacon block headers are received, and is integrated into testing.
* Add `NoMonitor` flag to stop SyncManager from monitoring the sync situation.
* Remove `toleranceValue` and `PeerScoreHeadTooNew`.
Co-authored-by: Etan Kissling <etan@status.im>
Some upstream repos still need fixes, but this gets us close enough that
style hints can be enabled by default.
In general, "canonical" spellings are preferred even if they violate
NEP-1 - this applies in particular to spec-related stuff like
`genesis_validators_root`, which appears throughout the codebase.
* era: load blocks and states
Era files contain finalized history and can be thought of as an
alternative source for block and state data that allows clients to avoid
syncing this information from the P2P network - the P2P network is then
used to "top up" the client with the most recent data. They can be
freely shared in the community via whatever means (http, torrent, etc)
and serve as a permanent cold store of consensus data (and, after the
merge, execution data) for history buffs and bean counters alike.
This PR gently introduces support for loading blocks and states in two
cases: block requests from rest/p2p and front-filling when doing
checkpoint sync.
The era files are used as a secondary source if the information is not
found in the database - compared to the database, there are a few key
differences:
* the database stores the block indexed by block root while the era file
indexes by slot (see the slot-to-era sketch after this list) - the former
is used only in rest, while the latter is used both by p2p and rest.
* when loading blocks from era files, the root is no longer trivially
available - if it is needed, it must either be computed (slow) or cached
(messy) - the good news is that for p2p requests, it is not needed
* in era files, "framed" snappy encoding is used while in the database
we store unframed snappy - for p2p requests, the latter requires
recompression while the former could avoid it
* front-filling is the process of using era files to replace backfilling
- in theory this front-filling could happen from any block and
front-fills with gaps could also be entertained, but our backfilling
algorithm cannot take advantage of this because there's no (simple) way
to tell it to "skip" a range.
* front-filling, as implemented, is a bit slow (10s to load mainnet): we
load the full BeaconState for every era to grab the roots of the blocks
- it would be better to partially load the state - as such, it would
also be good to be able to partially decompress snappy blobs
* lookups from REST via root are served by first looking up a block
summary in the database, then using the slot to load the block data from
the era file - however, there needs to be an option to create the
summary table from era files to fully support historical queries
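Since era files are indexed by slot, locating the right file is plain
integer arithmetic on the slot; a minimal sketch (file naming omitted):

```nim
const SLOTS_PER_HISTORICAL_ROOT = 8192'u64   # one era spans 8192 slots (mainnet)

# Era that contains a given slot - no block root needed, which is why
# p2p range requests can be served without computing roots.
func eraOf(slot: uint64): uint64 =
  slot div SLOTS_PER_HISTORICAL_ROOT
```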
To test this, `ncli_db` has an era file exporter: the files it creates
should be placed in an `era` folder next to `db` in the data directory.
What's interesting in particular about this setup is that `db` remains
as the source of truth for security purposes - it stores the latest
synced head root which in turn determines where a node "starts" its
consensus participation - the era directory however can be freely shared
between nodes / people without any (significant) security implications,
assuming the era files are consistent / not broken.
There are lots of future improvements to be had:
* we can drop the in-memory `BlockRef` index almost entirely - at this
point, resident memory usage of Nimbus should drop to a cool 500-600 mb
* we could serve era files via REST trivially: this would drop backfill
times to whatever time it takes to download the files - unlike the
current implementation that downloads block by block, downloading an era
at a time almost entirely cuts out request overhead
* we can "reasonably" recreate detailed state history from almost any
point in time, turning an O(slot) process into O(1) effectively - we'll
still need caches and indices to do this with sufficient efficiency for
the rest api, but at least it cuts the whole process down to minutes
instead of hours, for arbitrary points in time
* CI: ignore failures with Nim-1.6 (temporary)
* test fixes
Co-authored-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>
* Refactor and optimize logs.
* Introduce shortLog(SyncRequest).
* Address review comment.
* make sync queue logs more consistent
Adds a few minor logging improvements:
- Fixes a typo (`was happened` -> `has happened`)
- Avoids passing `reset_slot` argument to log statement multiple times
- Uses same `rewind_to_slot` label when logging in both sync directions
- Consistent rewind point logging
Co-authored-by: cheatfate <eugene.kabanov@status.im>
When initializing backfill sync, the implementation intends to start at
the first unknown slot (`1` before tail). However, an incorrect variable
is passed, and backfill sync actually starts at the tail slot instead.
This patch corrects this by passing the intended variable. The problem
was introduced with the original backfill implementation at #3263.
The `SyncManager` has a leftover optional `sleepTime` parameter in
its constructor that used to configure the sync loop polling rate.
This parameter was replaced with a constant in #1602 and is no longer
functional. This patch removes the `sleepTime` leftovers.
#3304 introduced a regression to the sync status string displayed in the
status bar; during the main forward sync, the current slot is no longer
reported and always displays as `0`. This patch corrects the computation
to accurately report the current slot once more.
The `SyncManager` has a leftover optional `maxStatusAge` parameter in
its constructor that used to configure the libp2p `Status` polling rate.
This parameter was replaced with a constant in #1827 and is no longer
functional. This patch removes the `maxStatusAge` leftovers.
* harden and speed up block sync
The `GetBlockBy*` server implementation currently reads SSZ bytes from
the database, deserializes them into a Nim object, then serializes them right
back to SSZ - here, we eliminate the deser/ser steps and send the bytes
straight to the network. Unfortunately, the snappy recoding must still
be done because of differences in framing.
Also, the quota system makes one giant request for quota right before
sending all blocks - this means that a 1024 block request will be
"paused" for a long time, then all blocks will be sent at once causing a
spike in database reads which potentially will see the reading client
time out before any block is sent.
Finally, on the reading side we make several copies of blocks as they
travel through various queues - this was not noticeable before but
becomes a problem in two cases: Bellatrix blocks are up to 10 MB (instead
of roughly 30-40 kB), and when backfilling, we process a lot more of them
a lot faster.
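One way to picture the quota fix is to reserve bandwidth per block instead
of once for the whole response; the sketch below is self-contained with
stubbed I/O and hypothetical helper names, not the actual networking code:

```nim
import std/asyncdispatch

# Stand-ins for the real rate limiter and stream writer.
proc acquireQuota(bytes: int) {.async.} =
  await sleepAsync(bytes div 1024)   # pretend quota cost scales with size

proc sendChunk(ssz: seq[byte]) {.async.} =
  discard                            # pretend to write one response chunk

# Send raw SSZ bytes straight from the database, requesting quota per
# block so the client starts receiving data immediately instead of after
# one giant upfront reservation.
proc sendBlocks(blocks: seq[seq[byte]]) {.async.} =
  for ssz in blocks:
    await acquireQuota(ssz.len)
    await sendChunk(ssz)
```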
* fix status comparisons for nodes syncing from genesis (#3327 was a bit
too hard)
* don't hit the database at all for post-Altair slots in GetBlock v1
requests