nimbus-eth2/beacon_chain/gossip_processing
zah 8771e91d53
Support for driving multiple EL nodes from a single Nimbus BN (#4465)

Full list of changes:

* Eth1Monitor has been renamed to ELManager to match its current
  responsibilities better.

* The ELManager is no longer optional in the code (it won't have
  a nil value under any circumstances).

* Support for subscribing to headers has been removed, as it worked
  only with WebSockets and contributed significant complexity while
  bringing only a very minor advantage.

* The `--web3-url` parameter has been deprecated in favor of a
  new `--el` parameter. The new parameter has a reasonable default
  value and supports specifying a different JWT for each connection.
  Each connection can also be configured with a different set of
  responsibilities (e.g. downloading deposits, validating blocks and/or
  producing blocks). On the command line, these properties can be
  configured through URL properties stored in the #anchor part of
  the URL. In TOML files, they come with a very natural syntax
  (although the URL scheme is also supported); see the configuration
  sketch after this list.

* The previously scattered EL-related state and logic has been moved
  to `eth1_monitor.nim` (this module will be renamed to `el_manager.nim`
  in a follow-up commit). State is assigned properly either to the
  `ELManager` or to the individual `ELConnection` objects, where
  appropriate.

  The ELManager executes all Engine API requests against all attached
  EL nodes in parallel. It compares their results, and if the nodes
  disagree regarding the validity of a certain payload, the disagreement
  is detected and the beacon node is protected from publishing a block
  with a potential execution-layer consensus bug in it (see the fan-out
  sketch after this list).

  The BN provides per-EL-node metrics for the number of successful and
  failed requests for each type of Engine API request. If an EL node
  goes offline and connectivity is restored later, we report the
  problem and the remedy in an edge-triggered fashion (sketched after
  this list).

* More progress towards implementing Deneb block production in the VC
  and comparing the value of blocks produced by the EL and the builder
  API.

* Adds a Makefile target for the Zhejiang testnet
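
  The snippet below illustrates how such a multi-EL setup might look in
  TOML. It is only a rough sketch: the key names (`url`, `jwt-secret`,
  `responsibilities`) are assumptions for illustration, not taken from
  the Nimbus documentation, which should be consulted for the exact
  syntax.

```toml
# Hypothetical sketch of the multi-EL TOML syntax; the key names are
# illustrative assumptions, not the documented Nimbus option names.
[[el]]
url = "http://127.0.0.1:8551"
jwt-secret = "/secrets/geth-jwt.hex"

[[el]]
url = "ws://127.0.0.1:8546"
jwt-secret = "/secrets/nethermind-jwt.hex"
# A connection can be limited to a subset of responsibilities:
responsibilities = ["download-deposits"]
```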
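
  A minimal sketch of the fan-out-and-compare idea follows, with
  assumed names and chronos-style async (this is not the actual
  ELManager code): every attached EL is asked to validate the same
  payload in parallel, and any disagreement marks the payload as unsafe.

```nim
# Sketch only: assumed names, not the actual ELManager implementation.
import std/sequtils
import chronos

type PayloadStatus = enum
  psValid, psInvalid, psSyncing

proc newPayload(conn: string): Future[PayloadStatus] {.async.} =
  # Stand-in for a real `engine_newPayload` round-trip to one EL node.
  return psValid

proc newPayloadAllELs(conns: seq[string]): Future[PayloadStatus] {.async.} =
  var futs = conns.mapIt(newPayload(it))
  await allFutures(futs)             # all requests run in parallel
  let answers = futs.mapIt(it.read)
  if answers.allIt(it == answers[0]):
    return answers[0]                # all EL nodes agree
  # Disagreement hints at an execution-layer consensus bug; refuse to
  # treat the payload as valid, so no block is published on top of it.
  return psInvalid
```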
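
  The edge-triggered reporting can be pictured as follows (an
  illustrative sketch with assumed names): a connectivity change is
  reported exactly once, on the transition, rather than on every
  request.

```nim
# Sketch of edge-triggered connectivity reporting (assumed names).
type ELConnStatus = object
  name: string
  online: bool

proc recordOutcome(conn: var ELConnStatus, requestSucceeded: bool) =
  if requestSucceeded and not conn.online:
    echo conn.name, ": connectivity restored"       # rising edge
  elif not requestSucceeded and conn.online:
    echo conn.name, ": EL node appears to be down"  # falling edge
  conn.online = requestSucceeded                    # remember the state
```
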
2023-03-05 01:40:21 +00:00
README.md automated update of v1.3.0-rc.2 to v1.3.0-rc.3 consensus spec URLs (#4647) 2023-02-21 16:43:21 +00:00
batch_validation.nim refactor batch validation not to require genesis_validators_root each time (#4640) 2023-02-20 09:26:22 +01:00
block_processor.nim Support for driving multiple EL nodes from a single Nimbus BN (#4465) 2023-03-05 01:40:21 +00:00
eth2_processor.nim Remove ForkySignedBeaconBlockMaybeBlobs (#4681) 2023-02-28 11:36:17 +00:00
gossip_validation.nim more withdrowls (#4674) 2023-03-02 17:13:35 +01:00
light_client_processor.nim avoid sending redundant LC finality updates (#4546) 2023-01-24 17:44:55 +00:00
optimistic_processor.nim remove Nim 1.2-compatible `push raise`s and update copyright notice years (#4528) 2023-01-20 14:14:37 +00:00

README.md

Gossip Processing

This folder holds a collection of modules to:

  • validate raw gossip data before
    • rebroadcasting it (potentially aggregated)
    • sending it to one of the consensus object pools
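
In sketch form (assumed names, heavily simplified), the flow looks like this: a validator proc produces a verdict, and both libp2p and the local consumers act on it.

```nim
# Shape of the validate-then-dispatch flow (assumed names, simplified).
type ValidationResult = enum
  Accept, Ignore, Reject

proc validateGossip(raw: seq[byte]): ValidationResult =
  # Only lightweight gossip checks here; full verification comes later.
  if raw.len == 0: Reject else: Accept

proc onGossipMessage(raw: seq[byte]) =
  case validateGossip(raw)
  of Accept:
    discard   # libp2p rebroadcasts; the decoded object also goes to a
              # consensus object pool or processing queue
  of Ignore:
    discard   # dropped quietly (e.g. a duplicate)
  of Reject:
    discard   # dropped; gossipsub scoring penalizes the sending peer
```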

Validation

Gossip validation is different from consensus verification; this matters in particular for blocks, which are rebroadcast after relatively lightweight checks and fully verified only later.

There are multiple consumers of validated consensus objects:

  • a ValidationResult.Accept output triggers rebroadcasting in libp2p
    • We jump into method validate(PubSub, Message) in libp2p/protocols/pubsub/pubsub.nim
    • which is called by rpcHandler(GossipSub, PubSubPeer, RPCMsg)
  • a blockValidator message enqueues the validated object to the processing queue in block_processor (see the queue sketch after this list)
    • blockQueue: AsyncQueue[BlockEntry] (shared with request_manager and sync_manager)
    • This queue is then processed at regular intervals, making the objects available to the consensus object pools.
  • an xyzValidator message adds the validated object to a pool in eth2_processor
    • Attestations (unaggregated and aggregated) get collected into batches.
    • Once a threshold is exceeded or a timeout fires, they get validated together using BatchCrypto (see the batching sketch after this list).
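
The queue hand-off can be sketched as follows, using chronos' AsyncQueue and assumed entry fields (a simplification of the real block_processor types):

```nim
# Producer/consumer sketch of the blockQueue hand-off (assumed types).
import chronos

type BlockEntry = object
  slot: uint64                  # stand-in for the real entry fields

let blockQueue = newAsyncQueue[BlockEntry]()

proc blockValidator(entry: BlockEntry) {.async.} =
  # Called once the gossip checks pass; just hands the work over.
  await blockQueue.addLast(entry)

proc runBlockProcessor() {.async.} =
  while true:
    let entry = await blockQueue.popFirst()
    # Full consensus verification happens here; afterwards the block
    # becomes available to the consensus object pools.
    echo "processing block at slot ", entry.slot
```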
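
The threshold-or-timeout batching idea, with illustrative constants and a stand-in for BatchCrypto, might look like this:

```nim
# Sketch of batch-then-verify (illustrative constants and types).
import std/times

type
  PendingSig = object
    data: string                # stand-in for a signature to verify

  Batch = object
    pending: seq[PendingSig]
    deadline: Time

const
  sizeThreshold = 72                              # illustrative
  batchTimeout = initDuration(milliseconds = 10)  # illustrative

proc flush(b: var Batch) =
  # Stand-in for BatchCrypto: verify everything collected in one go.
  echo "batch-verifying ", b.pending.len, " signatures"
  b.pending.setLen(0)

proc add(b: var Batch, sig: PendingSig) =
  if b.pending.len == 0:
    b.deadline = getTime() + batchTimeout
  b.pending.add sig
  if b.pending.len >= sizeThreshold or getTime() >= b.deadline:
    b.flush()
```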

Security concerns

As the first line of defense in Nimbus, these modules must be able to handle bursts of data that may come:

  • from malicious nodes trying to DoS us
  • from long periods of non-finality, which create lots of forks and attestations