Commit Graph

38 Commits

Author SHA1 Message Date
Eugene Kabanov caaad4777c
BN+LC+SN: Fix cancellation deprecation warnings. (#5455)
* Fix deprecation cancel() warnings in SN, BN, LC.

* Fix missing import.
2023-09-22 11:06:27 +00:00
henridf 1da82a5dcd
Remove async blob queue from Request Manager (#5198)
* Remove async blob queue from Request Manager

* Review feedback

* Review feedback
2023-08-01 22:39:14 +02:00
Jacek Sieka 0660ffcd3e
fix debug logging 2023-07-11 21:48:27 +02:00
Jacek Sieka ca1775f725
Fetch by-root request directly from quarantine (#5167)
When the request manager is busy fetching blocks, the queue might get
filled with multiple entries of the same root - since there is no
deduplication, requests containing the same root multiple times will be
sent out.

Also, because the items sit in the queue for a long time potentially,
the request might be stale by the time that the manager is ready with
the previous request.

This PR removes the queue and directly fetches the blocks to download
from the quarantine which solves both problems (the quarantine already
de-duplicates and is clean of stale information).

Removing the queue for blobs is left for a future PR.

Co-authored-by: tersec <tersec@users.noreply.github.com>
2023-07-11 18:22:02 +02:00
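
The de-duplication described in the commit above can be illustrated with a small sketch: keeping missing blocks keyed by root in a table means a root can only appear once, and the request manager reads the current set at the moment it sends a request instead of draining a possibly stale queue. The names below (`Quarantine`, `Eth2Digest`, `peekMissing`) are illustrative stand-ins, not the actual nimbus-eth2 API.

```nim
import std/tables

type
  Eth2Digest = array[32, byte]       # stand-in for the real hash type
  MissingBlock = object
    tries: int                       # how often peers have been asked already
  Quarantine = object
    missing: Table[Eth2Digest, MissingBlock]

proc addMissing(q: var Quarantine, root: Eth2Digest) =
  # Table insertion de-duplicates: adding the same root twice is a no-op.
  discard q.missing.hasKeyOrPut(root, MissingBlock())

proc peekMissing(q: Quarantine, maxItems: int): seq[Eth2Digest] =
  # Called right before sending a by-root request, so the result is never stale.
  for root in q.missing.keys:
    if result.len >= maxItems: break
    result.add root
```
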
henridf be3f5b1eac
More blob tweaks/fixes from running in devnet (#4933)
* BeaconNode: don't call fetchMissingblobs with empty list

* More logging

* BlockProcessor.checkBloblessSignature: Add missing return value
2023-05-11 00:36:35 +00:00
henridf ef0b95dfbc
RequestManager: add support for fetching Blobs (#4844)
* RequestManager: add support for fetching Blobs

* Review feedback

* Lint

* Change peekSortedBlobless -> peekBlobless
2023-04-28 12:57:35 +00:00
henridf 12d640b691
Request manager: Handle blobless blocks (#4833)
Post-Deneb, when the request manager receives a missing block from a
peer, it needs to check if the corresponding blobs are available, and
if so pass them along. If they aren't available, the newly-fetched
block must be put in blobless quarantine (while the blobs are
retrieved, coming in next commit).
2023-04-19 19:37:38 +03:00
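
A rough sketch of the flow described in the commit above, with simplified stand-in types (the field and proc names here are not the real nimbus-eth2 ones): a fetched post-Deneb block either proceeds together with its blobs or is parked in a blobless quarantine until the sidecars arrive.

```nim
import std/[options, tables]

type
  BlockRoot = array[32, byte]
  SignedBlock = object
    root: BlockRoot
    blobCommitmentCount: int         # how many sidecars the block expects
  BlobSidecars = seq[int]            # stand-in for the real sidecar type
  Quarantine = object
    blobless: Table[BlockRoot, SignedBlock]

proc popBlobs(root: BlockRoot): Option[BlobSidecars] =
  # Placeholder: would look the sidecars up in the blob quarantine.
  none(BlobSidecars)

proc processFetchedBlock(q: var Quarantine, blck: SignedBlock) =
  if blck.blobCommitmentCount == 0:
    echo "no blobs expected, enqueue block directly"
  else:
    let blobs = popBlobs(blck.root)
    if blobs.isSome:
      echo "blobs available, enqueue block together with blobs"
    else:
      # Park the block; it is re-queued once the missing sidecars show up.
      q.blobless[blck.root] = blck
```
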
henridf 8f2f40f0af
Remove blockBlobsVerifier (#4829)
Having a separate 'blockBlobsVerifier' function for post-Deneb blocks
is no longer of any benefit. This commit removes it.
2023-04-18 02:12:57 +02:00
Etan Kissling 969c6f73ae
misc local `EIP4844` > `Deneb` bumps (#4717)
* misc local `EIP4844` > `Deneb` bumps

* fix
2023-03-11 00:28:19 +00:00
henridf 90640cce05
Update sync to use post-decoupling RPC (#4701)
* Update sync to use post-decoupling RPCs

blob_sidecars_by_range returns a flat list of sidecars, which must
then be grouped per-slot.

* Add test for groupBlobs

* createBlobs: convert proc to func
2023-03-07 20:19:17 +00:00
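
The per-slot grouping mentioned in the commit above can be sketched roughly like this; the types and the `groupBlobs` signature are simplified stand-ins rather than the real ones. `blob_sidecars_by_range` returns one flat, slot-ordered list, and the caller splits it into one group per requested slot.

```nim
import std/sequtils

type
  Slot = uint64
  BlobSidecar = object
    slot: Slot
    index: int

proc groupBlobs(startSlot: Slot, count: uint64,
                flat: seq[BlobSidecar]): seq[seq[BlobSidecar]] =
  # One (possibly empty) group per slot in [startSlot, startSlot + count).
  result = newSeqWith(int(count), newSeq[BlobSidecar]())
  for sc in flat:
    let offset = sc.slot - startSlot
    if offset < count:
      result[int(offset)].add sc

when isMainModule:
  let flat = @[BlobSidecar(slot: 10, index: 0), BlobSidecar(slot: 12, index: 0)]
  doAssert groupBlobs(10, 4, flat)[2].len == 1   # slot 12 lands in group 2
```
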
henridf a999da3361
Remove beacon_block_and_blobs_sidecar_by_root RPC (#4672)
Along with its use in the request manager.
2023-02-26 23:17:40 +00:00
zah 067ba13c52
Capella metadata for Sepolia (#4615)
Other changes:

Renamed the `EIP_4844_FORK_*` config constants to `DENEB_FORK_*` as
this matches the latest spec and it's already used in the official
Sepolia config.
2023-02-15 14:44:09 +00:00
henridf 59e41dc65d
EIP4844 sync (#4581)
* EIP4844 Sync

* Pass eip4844 fork epoch rather than cfg to syncmanager

* Fix sync

* Update test

* map->mapIt
2023-02-11 20:48:35 +00:00
Jacek Sieka f3ddea6c86
Skip execution payload verification for finalized blocks (#4591)
While syncing the finalized portion of the chain, the execution client
cannot efficiently sync and most of the time returns `SYNCING` - in this
PR, we use CL-verified optimistic sync as long as the block is claimed to
be finalized, only occasionally updating the EL with progress.

Although a peer might lie about what is finalized and what isn't,
eventually we'll call the execution client - thus, all a dishonest
client can do is delay execution verification slightly. Gossip blocks in
particular are never assumed to be finalized.
2023-02-06 08:22:08 +01:00
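
Condensed to a sketch (names and the interval constant are made up, not the PR's actual logic), the rule described above looks roughly like this: blocks claimed to be finalized are accepted via CL-verified optimistic sync and the execution client is only pinged every so often, while gossip and non-finalized blocks always go through EL verification.

```nim
type Slot = uint64

const elSyncInterval = Slot(32)      # hypothetical "occasionally" interval

proc needsExecutionVerification(blockSlot, finalizedSlot, lastElUpdate: Slot,
                                fromGossip: bool): bool =
  if fromGossip or blockSlot > finalizedSlot:
    true                             # gossip / non-finalized: always verify
  else:
    # finalized range: only ping the EL now and then to report progress
    blockSlot >= lastElUpdate + elSyncInterval
```
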
henridf 9233a52e4b
Don't pass RuntimeConfig to RequestManager (#4589) 2023-02-04 12:13:09 +01:00
henridf 94837caa2a
Eip4844 sync fixes (#4577)
* fixes

* Make some log messages blob-aware

* remove redundant optBlobs()
2023-02-01 14:14:50 +00:00
henridf d7316ac863
Wire up blob fetching with request manager (#4527)
* Wire up blob fetching and request manager

* Update beacon_chain/sync/request_manager.nim

Co-authored-by: tersec <tersec@users.noreply.github.com>

* Review feedback: remove redundant RequestManager field.

* Fix syntax

* Improve blob fetching logic

* fix previous commit

* rman.isBlobsTime(): Remove spurious parameter

* Review feedback

* expressionify an if

---------

Co-authored-by: tersec <tersec@users.noreply.github.com>
2023-02-01 00:25:08 +01:00
tersec aacc8d702d
remove Nim 1.2-compatible `push raise`s and update copyright notice years (#4528) 2023-01-20 14:14:37 +00:00
Etan Kissling 2eb56f6e1b
rename `PeerScoreXyzBlocks` -> `PeerScoreXyzValues` (#4318)
The various `PeerScore` constants are used for both beacon blocks and
LC objects, and will likely also find use for EIP4844 blob sidecars.
Renaming them to use more generically applicable names that do not
refer to blocks explicitly anymore.
2022-11-11 11:34:28 +00:00
Etan Kissling 48994f67d3
rename `BlockError` -> `VerifierError` (#4310)
We currently use `BlockError` for both beacon blocks and LC objects.
In light of EIP4844, we will likely also use it for blob sidecars.
To avoid confusion, renaming it to a more generic `VerifierError`
and updating its documentation to be more generic.

To avoid long lines as a followup, also renaming the `block_processor`'s
`BlockProcessingCompleted.completed`->`ProcessingStatus.completed` and
`BlockProcessingCompleted.notCompleted`->`ProcessingStatus.notCompleted`
2022-11-10 17:40:27 +00:00
Jacek Sieka 63a3f2b1ad
Tighten chunk decoding limits (#4264)
* cap maximum number of chunks to download from peer (fixes #1620)
* drop support for requesting blocks via v1 / phase0 protocol
* tighten bounds checking of fixed-size messages
2022-10-27 18:51:43 +02:00
Miran dfd4afc9f2
compatibility with Nim 1.4+ (#3888) 2022-07-29 10:53:42 +00:00
tersec ae05ba9a48
reduce received invalid sync block logging to notice; decimal TTD logging (#3839) 2022-07-06 13:34:12 +03:00
Etan Kissling 15967c4076
keep track of latest blocks for optimistic sync (#3715)
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
2022-06-10 14:16:37 +00:00
Jacek Sieka c7abc97545
harden and speed up block sync (#3358)
* harden and speed up block sync

The `GetBlockBy*` server implementation currently reads SSZ bytes from
database, deserializes them into a Nim object then serializes them right
back to SSZ - here, we eliminate the deser/ser steps and send the bytes
straight to the network. Unfortunately, the snappy recoding must still
be done because of differences in framing.

Also, the quota system makes one giant request for quota right before
sending all blocks - this means that a 1024 block request will be
"paused" for a long time, then all blocks will be sent at once causing a
spike in database reads which potentially will see the reading client
time out before any block is sent.

Finally, on the reading side we make several copies of blocks as they
travel through various queues - this was not noticeable before but
becomes a problem in two cases: bellatrix blocks are up to 10mb (instead
of .. 30-40kb) and when backfilling, we process a lot more of them a lot
faster.

* fix status comparisons for nodes syncing from genesis (#3327 was a bit
too hard)
* don't hit database at all for post-altair slots in GetBlock v1
requests
2022-02-07 19:20:10 +02:00
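
A minimal sketch of the serving-side change described above, with hypothetical helpers (`getBlockSsz`, `takeQuota`, `sendChunk` are placeholders, not real nimbus-eth2 procs): stored SSZ bytes are forwarded without a decode/encode round-trip, and quota is taken per block instead of once for the whole range, avoiding the long pause followed by a read spike.

```nim
type
  BlockRoot = array[32, byte]

proc getBlockSsz(root: BlockRoot): seq[byte] =
  # Placeholder: would read the raw SSZ bytes straight from the database.
  discard root

proc takeQuota(bytes: int) = discard       # placeholder rate limiting
proc sendChunk(data: seq[byte]) = discard  # placeholder network write

proc serveBlocksByRoot(roots: seq[BlockRoot]) =
  for root in roots:
    let ssz = getBlockSsz(root)
    if ssz.len == 0: continue
    takeQuota(ssz.len)                     # pay as we go, no giant up-front request
    sendChunk(ssz)                         # no deserialize/serialize round-trip
```
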
Jacek Sieka f70aceef37
Harden handling of unviable forks (#3312)
* Harden handling of unviable forks

In our current handling of unviable forks, we allow peers to send us
blocks that come from a different fork - this is not necessarily an
error as it can happen naturally, but it does open up the client to a
case where the same unviable fork keeps getting requested - rather than
allowing this to happen, we'll now give these peers a small negative
score - if it keeps happening, we'll disconnect them.

* keep track of unviable forks in quarantine, to avoid filling it with
known junk
* collect peer scores in single module
* descore peers when they send unviable blocks during sync
* don't give score for duplicate blocks
* increase quarantine size to a level that allows finality to happen
under optimal conditions - this helps avoid downloading the same blocks
over and over in case of an unviable fork
* increase initial score for new peers to make room for one more failure
before disconnection
* log and score invalid/unviable blocks in requestmanager too
* avoid ChainDAG dependency in quarantine
* reject gossip blocks with unviable parent
* continue processing unviable sync blocks in order to build unviable
dag

* docs

* Update beacon_chain/consensus_object_pools/block_pools_types.nim

* add unviable queue test
2022-01-26 13:20:08 +01:00
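
The descoring rule from the list above, reduced to a sketch (the constants and names are invented for illustration): peers that keep serving blocks from a known-unviable fork lose score and are eventually disconnected.

```nim
import std/tables

type
  PeerId = string
  PeerPool = object
    scores: Table[PeerId, int]

const
  PeerScoreUnviableFork = -50     # hypothetical penalty per offence
  PeerScoreLowLimit = -200        # hypothetical disconnect threshold

proc reportUnviable(pool: var PeerPool, peer: PeerId): bool =
  ## Returns true if the peer has been descored enough to be disconnected.
  pool.scores.mgetOrPut(peer, 0) += PeerScoreUnviableFork
  pool.scores[peer] <= PeerScoreLowLimit
```
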
Jacek Sieka 118840d241
SyncManager cleanups for backfill support (#3189)
* SyncManager cleanups for backfill support

Cleanups, fixes and simplifications, in anticipation of backfill support
for the `SyncManager`:

* reformat sync progress indicator to show time left and % done more
prominently:
  * old: `sync="sPssPsssss:2:2.4229:00h57m (2706898)"`
  * new: `sync="14d12h31m (0.52%) 1.1378slots/s (wQQQQQDDQQ:1287520)"`
* reset average speed when going out of sync
* pass all block errors to sync manager, including duplicate/unviable
* penalize peers for reporting a head block that is outside of our
expected wall clock time (they're likely on a different network or
trying to disrupt sync)
* remove `SyncFailureKind` (unused)
* remove `inRange` (unused)
* add `Q` for sync queue requests that are in the `SyncQueue` but not
yet in the `BlockProcessor` queue
* update last slot in `SyncQueue` after getting peer status
* fix race condition between `wakeupWaiters` and `resetWait`, where
workers would not be correctly reset if block verification returned a
completed future without event loop
* log syncmanager direction

* Fix ordering issue.
Some requests whose size is not equal to `chunkSize` could be processed in the wrong order, which could lead to sync process freezes.

Co-authored-by: cheatfate <eugene.kabanov@status.im>
2021-12-16 15:57:16 +01:00
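
The wall-clock sanity check from the list above could look roughly like this (the tolerance value is a made-up placeholder): a peer advertising a head slot far beyond what the local clock allows is likely on a different network or trying to disrupt sync, and is penalized.

```nim
type Slot = uint64

const headSlotTolerance = Slot(4)   # hypothetical slack for clock skew

proc plausibleHead(peerHeadSlot, wallSlot: Slot): bool =
  # A head further in the future than the wall clock permits cannot be honest.
  peerHeadSlot <= wallSlot + headSlotTolerance
```
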
Jacek Sieka 1a8b7469e3
move quarantine outside of chaindag (#3124)
* move quarantine outside of chaindag

The quarantine has been part of the ChainDAG for the longest time, but
this design has a few issues:

* the function in which blocks are verified and added to the dag becomes
reentrant and therefore difficult to reason about - we're currently
using a stateful flag to work around it
* quarantined blocks bypass the processing queue leading to a processing
stampede
* the quarantine flow is unsuitable for orphaned attestations - these
should also be quarantined eventually

Instead of processing the quarantine inside ChainDAG, this PR moves
re-queueing to `block_processor` which already is responsible for
dealing with follow-up work when a block is added to the dag

This sets the stage for keeping attestations in the quarantine as well.

Also:

* make `BlockError` `{.pure.}`
* avoid use of `ValidationResult` in block clearance (that's for gossip)
2021-12-06 10:49:01 +01:00
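
A sketch of the re-queueing flow described above (types and names are illustrative, not the real module API): once a block has been added to the dag, any quarantined children waiting on it are fed back into the `block_processor` queue instead of being added recursively inside the ChainDAG.

```nim
import std/tables

type
  Root = array[32, byte]
  QueuedBlock = object
    root, parentRoot: Root
  BlockProcessor = object
    queue: seq[QueuedBlock]                  # stand-in for the processing queue
    orphans: Table[Root, seq[QueuedBlock]]   # quarantined, keyed by missing parent

proc onBlockAdded(bp: var BlockProcessor, added: Root) =
  # Follow-up work lives here, keeping the dag-addition path non-reentrant.
  var children: seq[QueuedBlock]
  if bp.orphans.take(added, children):
    for child in children:
      bp.queue.add child
```
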
Jacek Sieka df3fc9525f
import cleanup (#2997)
* import cleanup

...and remove some unused types

* add random imports

* more imports
2021-10-19 16:09:26 +02:00
Jacek Sieka a7a65bce42
disentangle eth2 types from the ssz library (#2785)
* reorganize ssz dependencies

This PR continues the work in
https://github.com/status-im/nimbus-eth2/pull/2646,
https://github.com/status-im/nimbus-eth2/pull/2779 as well as past
issues with serialization and types, to disentangle SSZ from eth2 and at
the same time simplify imports and exports with a structured approach.

The principal idea here is that when a library wants to introduce SSZ
support, they do so via 3 files:

* `ssz_codecs` which imports and reexports `codecs` - this covers the
basic byte conversions and ensures no overloads get lost
* `xxx_merkleization` imports and exports `merkleization` to specialize
and get access to `hash_tree_root` and friends
* `xxx_ssz_serialization` imports and exports `ssz_serialization` to
specialize ssz for a specific library

Those that need to interact with SSZ always import the `xxx_` versions
of the modules and never `ssz` itself so as to keep imports simple and
safe.

This is similar to how the REST / JSON-RPC serializers are structured in
that someone wanting to serialize spec types to REST-JSON will import
`eth2_rest_serialization` and nothing else.

* split up ssz into a core library that is independent of eth2 types
* rename `bytes_reader` to `codec` to highlight that it contains coding
and decoding of bytes and native ssz types
* remove tricky List init overload that causes compile issues
* get rid of top-level ssz import
* reenable merkleization tests
* move some "standard" json serializers to spec
* remove `ValidatorIndex` serialization for now
* remove test_ssz_merkleization
* add tests for over/underlong byte sequences
* fix broken seq[byte] test - seq[byte] is not an SSZ type

There are a few things this PR doesn't solve:

* like #2646 this PR is weak on how to handle root and other
dontSerialize fields that "sometimes" should be computed - the same
problem appears in REST / JSON-RPC etc

* Fix a build problem on macOS

* Another way to fix the macOS builds

Co-authored-by: Zahary Karadjov <zahary@gmail.com>
2021-08-18 20:57:58 +02:00
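
The three-module pattern described above, sketched for a hypothetical `myspec` library (shown as comments because it describes a file layout rather than a single runnable module; the module paths are illustrative):

```nim
# myspec_ssz_codecs.nim
#   import ssz_serialization/codecs      # path is illustrative
#   export codecs                        # byte conversions, no lost overloads
#
# myspec_merkleization.nim
#   import ssz_serialization/merkleization
#   export merkleization                 # hash_tree_root and friends
#
# myspec_ssz_serialization.nim
#   import ssz_serialization
#   export ssz_serialization             # SSZ specialized for myspec's types
#
# Consumers import only the myspec_* modules, never `ssz` itself, keeping
# imports simple and safe, much like eth2_rest_serialization for REST-JSON.
```
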
Jacek Sieka 7a622e8505
rework spec imports (#2779)
The spec imports are a mess to work with, so this branch cleans them up
a bit to ensure that we avoid generic sandwiches and that importing
stuff generally becomes easier.

* reexport crypto/digest/presets because these are part of the public
symbol set of the rest of the spec types
* don't export `merge` types from `base` - this causes circular deps
* fix circular deps in `ssz/spec_types` - this is the first step in
disentangling ssz from spec
* be explicit about phase0 vs altair - longer term, `altair` will become
the "natural" type set, then merge and so on, so no point in giving
`phase0` special preferential treatment
2021-08-12 13:08:20 +00:00
Jacek Sieka 9697b73e71
forkedbeaconstate_helpers -> forks (#2772)
Simpler module name for stuff that covers forks

* check that runtime config matches database state
* also include some assorted altair cleanups
* use "standard" genesis fork in local testnet to work around missing
runtime config support
2021-08-10 22:46:35 +02:00
Jacek Sieka 2d6a661ac6
Syncv2 (#2723)
* bump libp2p

* altair sync v2

Use V2 sync requests after the altair fork has happened, according to
the wall clock

* Fix the behavior of the v1 req/resp calls after Altair

Co-authored-by: Zahary Karadjov <zahary@gmail.com>
2021-07-15 21:01:07 +02:00
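
The version selection described above, boiled down to a sketch (names are placeholders): the wall clock, not the peer's advertised state, decides whether the fork-aware v2 by-range requests are used.

```nim
type Epoch = uint64

proc useV2Requests(wallEpoch, altairForkEpoch: Epoch): bool =
  # Once the Altair fork has happened according to the wall clock,
  # always use the v2 req/resp calls.
  wallEpoch >= altairForkEpoch
```
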
Jacek Sieka 7f52ffb8d9
clean up block processing (#2610)
* gossip_to_consensus -> block_processor (it's processing only blocks,
but not only from gossip)
* measure queue and validation time for blocks
* measure assignment and state loading times for updateStateData
* avoid some unnecessary block copies in block sync
* warn that database is corrupt if we hit tail without a state
2021-05-28 19:34:00 +03:00
Jacek Sieka 2695cfa864
EH cleanup (#2455)
almost 100% raises in nimbus-eth2 now!

* fix some rare exception-related crashes in json-rpc
2021-03-26 07:52:01 +01:00
Mamy Ratsimbazafy c47d636cb3
Split Eth2Processor in prep for batching (#2396)
* Split Eth2Processor in gossip and consensus part and materialize the shared block queue

* Update initialization in test_sync_manager
2021-03-11 11:10:57 +01:00
Mamy Ratsimbazafy d47f53cd9d
Reorg (5/5) (#2377)
* Reorg things left into networking and gossip_processing

* time -> beacon_clock

* fix builds
2021-03-05 14:12:00 +01:00
Mamy Ratsimbazafy 3276dfc683
Consolidate modules by areas [part 1] (#2365)
* Move sync in subfolder

* move validator related thingies in validators

* fix binary builds

* update bounds comment [skip ci]
2021-03-02 11:27:45 +01:00