Commit Graph

34 Commits

Author SHA1 Message Date
tersec 7fd8beb418
rm unused code in {ncli,research,tests}/ (#5809) 2024-01-21 07:55:03 +01:00
Jacek Sieka 62cbdeefc5
verify `genesis_time` more strictly (fixes #1667) (#5694)
Bogus values lead to crashes down the line when timers overflow
2024-01-06 15:26:56 +01:00
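
A minimal sketch of the kind of strict check this commit describes - the bounds below are illustrative assumptions, not Nimbus's actual values:

```nim
# Sketch: reject bogus genesis_time values before they can overflow timer
# arithmetic downstream. MinGenesisTime/MaxGenesisTime are assumed bounds.
const
  MinGenesisTime = 1_606_824_000'u64   # illustrative lower bound
  MaxGenesisTime = uint64.high div 2   # headroom so offsets cannot overflow

proc checkGenesisTime(genesisTime: uint64): bool =
  genesisTime in MinGenesisTime .. MaxGenesisTime

when isMainModule:
  doAssert checkGenesisTime(1_606_824_023'u64)
  doAssert not checkGenesisTime(uint64.high)   # would overflow timer math
```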
tersec 6fd4983f6b
EIP-4788/parentBeaconBlockRoot fixes (#5318) 2023-08-19 08:38:17 +00:00
Jacek Sieka 49729e1ef3
prevent concurrent `storeBlock` calls (fixes #5285) (#5295)
When a block is introduced to the system via REST and gossip at the
same time, we call `storeBlock` from two locations, leading to a
duplicate-check race condition while we wait for the EL.

This issue may manifest in particular when using an external block
builder that itself publishes the block onto the gossip network.

* refactor enqueue flow
* simplify calling `addBlock`
* complete request manager verifier future for blobless blocks
* re-verify parent conditions before adding block

Among other things, the parent might have gone stale or been finalized
between one call and the other.
2023-08-17 15:12:37 +02:00
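
A sketch of the dedup idea with assumed names - the real code guards the async `storeBlock` flow, but the core is tracking roots that are already being stored:

```nim
import std/[sets, hashes]

type Eth2Digest = array[32, byte]    # stand-in for the real digest type

var inFlight = initHashSet[Eth2Digest]()

proc tryBeginStore(root: Eth2Digest): bool =
  ## Returns false if a `storeBlock` for this root is already in progress,
  ## e.g. because the block arrived via REST and gossip at the same time.
  if root in inFlight:
    return false
  inFlight.incl root
  true

proc endStore(root: Eth2Digest) =
  ## Called once the EL has answered and the block is (or isn't) stored.
  inFlight.excl root
```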
Jacek Sieka b8a32419b8
async batch verification (+40% sig verification throughput) (#5176)
* async batch verification

While batch verification runs, the main thread is blocked, reducing
concurrency.

With this PR, the new thread signalling primitive in chronos is used to
offload the full batch verification process to a separate thread,
allowing the main thread to continue async operations while the other
threads verify signatures.

Similar to previous behavior, the number of ongoing batch verifications
is capped to prevent runaway resource usage.

In addition to the asynchronous processing, three additional changes
help drive throughput:

* A loop is used for batch accumulation: this prevents a stampede of
small batches in eager mode, where both the eager and the scheduled
batch runner would prematurely pick "fresh" batches off the queue
* An additional small wait is introduced for small batches - this helps
create slightly larger batches which make better use of the increased
concurrency
* Up to 2 batches are scheduled to the threadpool during high pressure,
reducing startup latency for the threads

Together, these changes increase attestation verification throughput
under load by up to 30%.

* fixup

* Update submodules

* fix blst build issues (and a PIC warning)

* bump

---------

Co-authored-by: Zahary Karadjov <zahary@gmail.com>
2023-08-03 11:36:45 +03:00
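
A conceptual sketch of the accumulation loop described above, without chronos - constants and names are illustrative, not Nimbus's tuning values:

```nim
import std/deques

const
  BatchSize = 72            # assumed target batch size
  MaxInFlight = 2           # cap scheduled batches to bound resource usage

type SigSet = object        # stand-in for a batched signature set
  id: int

var
  queue = initDeque[SigSet]()
  inFlight = 0

proc nextBatch(): seq[SigSet] =
  ## Accumulate in a loop instead of one item per wakeup, so eager and
  ## scheduled runners don't race each other for "fresh" single items.
  while queue.len > 0 and result.len < BatchSize:
    result.add queue.popFirst()

proc trySchedule(): bool =
  ## Hand at most MaxInFlight batches to the verifier threads at a time.
  if inFlight >= MaxInFlight or queue.len == 0:
    return false
  let batch = nextBatch()
  inc inFlight              # decremented when the thread signals completion
  discard batch.len         # placeholder: dispatch `batch` to the threadpool
  true
```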
Jacek Sieka ca1775f725
Fetch by-root request directly from quarantine (#5167)
When the request manager is busy fetching blocks, the queue might get
filled with multiple entries of the same root - since there is no
deduplication, requests containing the same root multiple times will be
sent out.

Also, because items potentially sit in the queue for a long time, a
request might be stale by the time the manager is done with the
previous one.

This PR removes the queue and directly fetches the blocks to download
from the quarantine which solves both problems (the quarantine already
de-duplicates and is clean of stale information).

Removing the queue for blobs is left for a future PR.

Co-authored-by: tersec <tersec@users.noreply.github.com>
2023-07-11 18:22:02 +02:00
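
In sketch form, the quarantine already holds exactly the information the queue duplicated - simplified types and illustrative names:

```nim
import std/[tables, sets, hashes]

type
  Eth2Digest = array[32, byte]
  Quarantined = object
    root, parentRoot: Eth2Digest

var quarantine = initTable[Eth2Digest, Quarantined]()  # orphan root -> block

proc wantedRoots(maxItems: int): seq[Eth2Digest] =
  ## Roots to request next, derived directly from the quarantine: already
  ## de-duplicated, and never stale since resolved orphans are removed.
  var seen = initHashSet[Eth2Digest]()
  for orphan in quarantine.values:
    if orphan.parentRoot notin seen:
      seen.incl orphan.parentRoot
      result.add orphan.parentRoot
      if result.len >= maxItems:
        break
```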
Etan Kissling 2722778ce5
reduce `nim-eth` dependencies just for RNG (#5099)
We have several modules that import `nim-eth` for the sole purpose of
its `keys.newRng` function. This function is now a simple wrapper
around `nim-bearssl`'s `HmacDrbgContext.new()`, so the import no longer
serves a purpose. Replace `keys.newRng` with the direct call
to reduce `nim-eth` imports.
2023-06-19 22:43:50 +00:00
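
The before/after in code - the `bearssl/rand` module path is an assumption about nim-bearssl's layout, while `HmacDrbgContext.new()` is the call the commit itself names:

```nim
# Before: importing nim-eth solely for the RNG wrapper
#   import eth/keys
#   let rng = keys.newRng()

# After: calling nim-bearssl directly
import bearssl/rand
let rng = HmacDrbgContext.new()
```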
henridf 1cf777c64b
Fix sync for blocks older than MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS (#4977)
When doing sync for blocks older than
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, we skip the blobs-by-range
request, but we then pass an empty blob sequence to validation, which
then fails.

To fix this: Use an Option[Blobsidecars] to allow expressing the
distinction between "empty blob sequence" and "blobs unavailable". Use
the latter for "old" blocks, and don't attempt to run blob validation.
2023-05-19 16:25:11 +00:00
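
A sketch of the distinction with simplified types (the real code uses the fork-specific sidecar types): `none` means "blobs unavailable, skip validation", while an empty-but-present sequence is validated as usual:

```nim
import std/options

type BlobSidecar = object
  index: uint64

proc validateBlobs(blobs: Option[seq[BlobSidecar]]): bool =
  if blobs.isNone:
    return true          # older than the retention window: nothing to check
  let sidecars = blobs.get
  # a real implementation would check sidecars against the block commitments
  sidecars.len >= 0      # placeholder for the actual validation

when isMainModule:
  doAssert validateBlobs(none(seq[BlobSidecar]))       # "old" block: skipped
  doAssert validateBlobs(some(newSeq[BlobSidecar]()))  # empty, still validated
```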
tersec 74511f61d1
Use withdrawal credentials as default fee recipient (#4968) 2023-05-17 07:56:37 +03:00
henridf 573228ffa0
Rename eth1/ -> el/ and eth1_monitor.nim -> el_monitor.nim (#4944) 2023-05-15 05:05:12 +00:00
henridf 021de18e06
Quarantine and reassembly of gossiped blobs and blocks (#4808) 2023-04-13 19:11:40 +00:00
tersec c31fbc3977
remove most std/options imports (#4778) 2023-03-31 20:46:47 +00:00
tersec 63b1b0840f
5 more modules of eip4844.foo to deneb.foo renames (#4698) 2023-03-06 18:45:52 +00:00
zah 8771e91d53
Support for driving multiple EL nodes from a single Nimbus BN (#4465)
* Support for driving multiple EL nodes from a single Nimbus BN

Full list of changes:

* Eth1Monitor has been renamed to ELManager to match its current
  responsibilities better.

* The ELManager is no longer optional in the code (it won't have
  a nil value under any circumstances).

* The support for subscribing for headers was removed as it only
  worked with WebSockets and contributed significant complexity
  while bringing only a very minor advantage.

* The `--web3-url` parameter has been deprecated in favor of a
  new `--el` parameter. The new parameter has a reasonable default
  value and supports specifying a different JWT for each connection.
  Each connection can also be configured with a different set of
  responsibilities (e.g. download deposits, validate blocks and/or
  produce blocks). On the command-line, these properties can be
  configured through URL properties stored in the #anchor part of
  the URL. In TOML files, they come with a very natural syntax
  (although the URL scheme is also supported).

* The previously scattered EL-related state and logic is now moved
  to `eth1_monitor.nim` (this module will be renamed to `el_manager.nim`
  in a follow-up commit). State is assigned properly either to the
  `ELManager` or to the individual `ELConnection` objects where
  appropriate.

  The ELManager executes all Engine API requests against all attached
  EL nodes, in parallel. It compares their results and if there is a
  disagreement regarding the validity of a certain payload, this is
  detected and the beacon node is protected from publishing a block
  with a potential execution layer consensus bug in it.

  The BN provides metrics per EL node for the number of successful or
  failed requests for each type of Engine API request. If an EL node
  goes offline and connectivity is restored later, we report the
  problem and the remedy in an edge-triggered fashion.

* More progress towards implementing Deneb block production in the VC
  and comparing the value of blocks produced by the EL and the builder
  API.

* Adds a Makefile target for the zhejiang testnet
2023-03-05 01:40:21 +00:00
tersec d058aa09c8
more withdrowls (#4674) 2023-03-02 17:13:35 +01:00
henridf 3681177cf4
Remove ForkySignedBeaconBlockMaybeBlobs (#4681)
This commit removes ForkySignedBeaconBlockMaybeBlobs and all
references to it. I tried to pull that thread only as far as was needed
to get rid of it. A placeholder BlobSidecar array (in lieu of
Opt[BlobsSidecar]) is left in a few places; this will be used as we
rebuild the decoupled implementation.
2023-02-28 11:36:17 +00:00
zah ff464e49cf
Implement the set of gas_limit end-points in the Keymanager API (#4612)
Fixes #3946
2023-02-15 15:10:31 +00:00
henridf 64878888bd
Blob storage (#4454)
* Blob storage

* fix indentation

* Fix build (none->Opt.none)

* putBlobs -> putBlobsSidecar

* getBlobs -> getBlobsSidecar

* Check blob correctness when storing a backfill block

* Blobs table: rename and conditionally create

* Check block<->blob match in storeBackfillBlock

* Use when .. toFork() to condition on type

* Check blob viability in block_processor.storeBlock()

* Fix build

* Review feedback
2023-01-09 18:42:10 +00:00
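
A toy model of the interface these renames converge on - the in-memory table below stands in for the real database table:

```nim
import std/[tables, options, hashes]

type
  Eth2Digest = array[32, byte]
  BlobsSidecar = object
    beaconBlockRoot: Eth2Digest

var blobsTable = initTable[Eth2Digest, BlobsSidecar]()

proc putBlobsSidecar(sidecar: BlobsSidecar) =
  ## Store under the owning block's root, mirroring the commit's naming.
  blobsTable[sidecar.beaconBlockRoot] = sidecar

proc getBlobsSidecar(root: Eth2Digest): Option[BlobsSidecar] =
  if root in blobsTable:
    some(blobsTable[root])
  else:
    none(BlobsSidecar)
```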
Etan Kissling 48994f67d3
rename `BlockError` -> `VerifierError` (#4310)
We currently use `BlockError` for both beacon blocks and LC objects.
In light of EIP4844, we will likely also use it for blob sidecars.
To avoid confusion, rename it to the more generic `VerifierError`,
and update its documentation to be more generic as well.

As a follow-up, to avoid long lines, also rename the `block_processor`'s
`BlockProcessingCompleted.completed`->`ProcessingStatus.completed` and
`BlockProcessingCompleted.notCompleted`->`ProcessingStatus.notCompleted`.
2022-11-10 17:40:27 +00:00
tersec fb6e6d9cf4
remove `newPayload` from block production flow (#4186)
* remove `newPayload` from block production flow

* refactor block_processor to run `newPayload` as part of `storeBlock`
2022-10-14 22:48:56 +03:00
tersec c367b14ad9
deprecate `--safe-slots-to-import-optimistically` (#4182) 2022-09-29 06:29:49 +00:00
Jacek Sieka ef8bab58eb
load suggested fee recipient file also when keymanager is disabled (#4078)
Since these files may have been created in a previous run or manually,
we want to keep loading them even on nodes that don't enable the
keystore API (for example, static setups).

Other changes:

* log keystore loading progressively (#3699)
* print initial fee recipient when loading validators
* log dynamic fee recipient updates
2022-09-17 08:30:07 +03:00
tersec bf3a014287
more efficient forkchoiceUpdated usage (#4055)
* more efficient forkchoiceUpdated usage

* await rather than asyncSpawn; ensure head update before dag.updateHead

* use action tracker rather than attached validators to check for next slot proposal; use wall slot + 1 rather than state slot + 1 to correctly check when missing blocks

* re-add two-fcU case for when newPayload not VALID

* check dynamicFeeRecipientsStore for potential proposal

* remove duplicate checks for whether next proposer
2022-09-07 20:34:52 +02:00
tersec 1d55743ebb
allow execution clients several seconds to construct blocks (#4012) 2022-08-23 19:19:52 +03:00
tersec 06c8e10ae2
move consensus_manager to consensus_object_pools (#3852) 2022-07-13 14:13:54 +00:00
tersec 1250c56e32
less async optimistic sync (#3842)
* less async optimistic sync

* use asyncSpawn; adapt changes to message router
2022-07-07 16:57:52 +00:00
tersec 1221bb66e8
optimistic sync (#3793)
* optimistic sync

* flag that initially loaded blocks from database might need execution block root filled in

* return optimistic status in REST calls

* refactor blockslot pruning

* ensure beacon_blocks_by_{root,range} do not provide optimistic blocks

* handle forkchoice head being pre-merge with block being postmerge

* re-enable blocking head updates on validator duties

* fix is_optimistic_candidate_block per spec; don't crash with nil future

* mark blocks sans execution payloads valid during head update
2022-07-04 23:35:33 +03:00
tersec a07d14cd99
remove unused imports in tests/ (#3713) 2022-06-07 17:05:06 +00:00
tersec 9b43a76f2f
kiln beacon node (#3540)
* kiln bn

* use  version of beacon_chain_db

* have Eth1Monitor abstract more tightly over web3provider
2022-03-25 11:40:10 +00:00
Jacek Sieka c64bf045f3
remove StateData (#3507)
One more step on the journey to reduce `BlockRef` usage across the
codebase - this one gets rid of `StateData`, whose job was to keep track
of which block was last assigned to a state. These duties have now been
taken over by `latest_block_root`, a fairly recent addition that
computes this block root from state data (at a small cost that should be
insignificant).

99% mechanical change.
2022-03-16 08:20:40 +01:00
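
A simplified model of why `latest_block_root` can take over: the state's latest block header carries a zeroed `state_root` until the next block arrives, so filling it in and hashing the header yields the root of the block last applied to the state. Types and hashing below are stand-ins, not the real SSZ code:

```nim
import std/hashes

type
  Digest = Hash                      # stand-in for Eth2Digest
  BeaconBlockHeader = object
    slot: uint64
    stateRoot: Digest
  BeaconState = object
    latestBlockHeader: BeaconBlockHeader

proc hashTreeRoot(h: BeaconBlockHeader): Digest =
  !$(hash(h.slot) !& hash(h.stateRoot))  # placeholder for SSZ hashing

proc hashTreeRoot(s: BeaconState): Digest =
  hashTreeRoot(s.latestBlockHeader)

proc latestBlockRoot(state: BeaconState): Digest =
  var header = state.latestBlockHeader
  if header.stateRoot == default(Digest):  # zeroed until the next block
    header.stateRoot = hashTreeRoot(state)
  hashTreeRoot(header)
```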
Jacek Sieka 61342c2449
limit by-root requests to non-finalized blocks (#3293)
* limit by-root requests to non-finalized blocks

Presently, we keep a mapping from block root to `BlockRef` in memory -
this has simplified reasoning about the dag, but is not sustainable as
the chain grows.

We can distinguish between two cases where by-root access is useful:

* unfinalized blocks - this is where the beacon chain is operating
generally, by validating incoming data as interesting for future fork
choice decisions - bounded by the length of the unfinalized period
* finalized blocks - historical access in the REST API etc - no bounds,
really

In this PR, we limit the by-root block index to the first use case:
finalized chain data can more efficiently be addressed by slot number.

Future work includes:

* limiting the `BlockRef` horizon in general - each instance is 40
bytes+overhead which adds up - this needs further refactoring to deal
with the tail vs state problem
* persisting the finalized slot-to-hash index - this one also keeps
growing unbounded (albeit slowly)

Anyway, this PR easily shaves ~128 MB of memory usage at the time of
writing.

* No longer honor `BeaconBlocksByRoot` requests outside of the
non-finalized period - previously, Nimbus would generously return any
block through this libp2p request - per the spec, finalized blocks
should be fetched via `BeaconBlocksByRange` instead.
* return `Opt[BlockRef]` instead of `nil` when blocks can't be found -
this becomes a lot more common now and thus deserves more attention
* `dag.blocks` -> `dag.forkBlocks` - this index only carries unfinalized
blocks from now - `finalizedBlocks` covers the other `BlockRef`
instances
* in backfill, verify that the last backfilled block leads back to
genesis, or panic
* add backfill timings to log
* fix missing check that `BlockRef` block can be fetched with
`getForkedBlock` reliably
* shortcut doppelganger check when feature is not enabled
* in REST/JSON-RPC, fetch blocks without involving `BlockRef`

* fix dag.blocks ref
2022-01-21 13:33:16 +02:00
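
The split, sketched with simplified types: by-root lookups now answer only for the non-finalized period, while finalized blocks are addressed by slot:

```nim
import std/[tables, options]

type BlockId = object
  slot: uint64
  root: string

var
  forkBlocks = initTable[string, BlockId]()   # unfinalized blocks, by root
  finalizedBlocks: seq[Option[BlockId]]       # by slot; gaps for empty slots

proc getForkBlock(root: string): Option[BlockId] =
  ## By-root access, limited to the non-finalized period.
  if root in forkBlocks:
    return some(forkBlocks[root])
  none(BlockId)

proc getFinalizedBlock(slot: uint64): Option[BlockId] =
  ## Finalized data is addressed by slot number instead.
  if slot < finalizedBlocks.len.uint64:
    return finalizedBlocks[slot.int]
  none(BlockId)
```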
Jacek Sieka 805e85e1ff
time: spring cleaning (#3262)
Time in the beacon chain is expressed relative to the genesis time -
this PR creates a `beacon_time` module that collects helpers and
utilities for dealing with time units - the new module does not deal
with actual wall time (that remains in `beacon_clock`).

Collecting the time-related stuff in one place makes it easier to find,
avoids some circular imports and makes it easier to identify the code
that actually needs wall time to operate.

* move genesis-time-related functionality into `spec/beacon_time`
* avoid using `chronos.Duration` for time differences - it does not
support negative values (such as when something happens earlier than it
should)
* saturate conversions between `FAR_FUTURE_XXX`, so as to avoid
overflows
* fix delay reporting in validator client so it uses the expected
deadline of the slot, not "closest wall slot"
* simplify looping over the slots of an epoch
* `compute_start_slot_at_epoch` -> `start_slot`
* `compute_epoch_at_slot` -> `epoch`

A follow-up PR will (likely) introduce saturating arithmetic for the
time units - this PR is merely code moves, renames and fixes for small
bugs.
2022-01-11 11:01:54 +01:00
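
In sketch form, the helpers being collected and renamed (SLOTS_PER_EPOCH is 32 on mainnet; the saturating conversion mirrors the FAR_FUTURE handling described above):

```nim
const SLOTS_PER_EPOCH = 32'u64   # mainnet value

type
  Slot = uint64
  Epoch = uint64

const
  FAR_FUTURE_SLOT = high(Slot)
  FAR_FUTURE_EPOCH = high(Epoch)

proc start_slot(e: Epoch): Slot =
  ## `compute_start_slot_at_epoch`, renamed; saturates instead of overflowing.
  if e >= FAR_FUTURE_EPOCH div SLOTS_PER_EPOCH:
    FAR_FUTURE_SLOT
  else:
    e * SLOTS_PER_EPOCH

proc epoch(s: Slot): Epoch =
  ## `compute_epoch_at_slot`, renamed.
  s div SLOTS_PER_EPOCH

when isMainModule:
  doAssert start_slot(Epoch(1)) == Slot(32)
  doAssert epoch(Slot(63)) == Epoch(1)
  doAssert start_slot(FAR_FUTURE_EPOCH) == FAR_FUTURE_SLOT
```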
Jacek Sieka c270ec21e4
Validator monitoring (#2925)
Validator monitoring, based on and mostly compatible with the
implementation in Lighthouse - tracks additional logs and metrics for
specified validators so as to stay on top of performance.

The implementation works more or less the following way:
* Validator pubkeys are singled out for monitoring - these can be
running on the node or not
* For every action that the validator takes, we record steps in the
process such as messages being seen on the network or published in the
API
* When the dust settles at the end of an epoch, we report the
information from one epoch before that, which coincides with the
balances being updated - this is a tradeoff between being correct
(waiting for finalization) and providing relevant information in a
timely manner
2021-12-20 20:20:31 +01:00
Jacek Sieka 1a8b7469e3
move quarantine outside of chaindag (#3124)
* move quarantine outside of chaindag

The quarantine has been part of the ChainDAG for the longest time, but
this design has a few issues:

* the function in which blocks are verified and added to the dag becomes
reentrant and therefore difficult to reason about - we're currently
using a stateful flag to work around it
* quarantined blocks bypass the processing queue leading to a processing
stampede
* the quarantine flow is unsuitable for orphaned attestations - these
should also be quarantined eventually

Instead of processing the quarantine inside ChainDAG, this PR moves
re-queueing to `block_processor`, which is already responsible for
dealing with follow-up work when a block is added to the dag.

This sets the stage for keeping attestations in the quarantine as well.

Also:

* make `BlockError` `{.pure.}`
* avoid use of `ValidationResult` in block clearance (that's for gossip)
2021-12-06 10:49:01 +01:00
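
The re-queueing flow after the move, sketched with illustrative names - when a block lands in the dag, orphans waiting on it re-enter the regular processing queue rather than being cleared recursively inside ChainDAG:

```nim
import std/[tables, deques]

type
  Root = string
  Quarantined = object
    root, parentRoot: Root

var
  orphansByParent = initTable[Root, seq[Quarantined]]()
  processingQueue = initDeque[Quarantined]()   # block_processor's queue

proc onBlockAdded(root: Root) =
  ## Called after a block is added to the dag: re-enqueue any orphans that
  ## were waiting for this parent, avoiding both the reentrancy and the
  ## processing stampede described above.
  if root in orphansByParent:
    for orphan in orphansByParent[root]:
      processingQueue.addLast orphan
    orphansByParent.del root
```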