# Gossip Processing
This folder holds a collection of modules to:

- validate raw gossip data before
  - rebroadcasting it (potentially aggregated)
  - sending it to one of the consensus object pools
## Validation
Gossip validation is different from consensus verification, in particular for blocks (a sketch of the general shape of these checks follows the list below).
- Blocks: https://github.com/ethereum/consensus-specs/blob/v1.3.0-rc.3/specs/phase0/p2p-interface.md#beacon_block
- Attestations (aggregated): https://github.com/ethereum/consensus-specs/blob/v1.3.0-rc.3/specs/phase0/p2p-interface.md#beacon_aggregate_and_proof
- Attestations (unaggregated): https://github.com/ethereum/consensus-specs/blob/v1.3.0-rc.3/specs/phase0/p2p-interface.md#attestation-subnets
- Voluntary exits: https://github.com/ethereum/consensus-specs/blob/v1.3.0-rc.3/specs/phase0/p2p-interface.md#voluntary_exit
- Proposer slashings: https://github.com/ethereum/consensus-specs/blob/v1.3.0-rc.3/specs/phase0/p2p-interface.md#proposer_slashing
- Attester slashing: https://github.com/ethereum/consensus-specs/blob/v1.3.0-rc.3/specs/phase0/p2p-interface.md#attester_slashing
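
A rough sketch in plain Nim of what a gossip validation rule looks like: cheap structural and timeliness checks that run before any expensive consensus verification, classifying each message along the spec's [IGNORE]/[REJECT] conventions. This is illustrative only, not the actual Nimbus code; `SignedBeaconBlock`, `validateBlockGossip`, and the specific checks are simplified stand-ins for the spec rules linked above.

```nim
type
  ValidationResult = enum
    Accept  # well-formed and timely: rebroadcast and process further
    Ignore  # not actionable right now (e.g. duplicate): drop silently
    Reject  # provably invalid: drop and penalize the sending peer

  # Simplified stand-in for the real block type.
  SignedBeaconBlock = object
    slot: uint64
    proposerIndex: uint64

proc validateBlockGossip(blck: SignedBeaconBlock, currentSlot: uint64,
                         seen: var seq[(uint64, uint64)]): ValidationResult =
  if blck.slot > currentSlot:
    return Ignore  # [IGNORE] blocks from future slots are not useful yet
  if (blck.slot, blck.proposerIndex) in seen:
    return Ignore  # [IGNORE] only the first block per (slot, proposer) counts
  # [REJECT]-class checks (proposer signature, known parent, ...) would follow
  seen.add((blck.slot, blck.proposerIndex))
  Accept

when isMainModule:
  var seen: seq[(uint64, uint64)]
  let blck = SignedBeaconBlock(slot: 10, proposerIndex: 3)
  echo validateBlockGossip(blck, 12, seen)  # Accept
  echo validateBlockGossip(blck, 12, seen)  # Ignore (duplicate)
```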
There are multiple consumers of validated consensus objects:

- a `ValidationResult.Accept` output triggers rebroadcasting in libp2p
  - We jump into method `validate(PubSub, Message)` in libp2p/protocols/pubsub/pubsub.nim
  - which was called by `rpcHandler(GossipSub, PubSubPeer, RPCMsg)`
- a `blockValidator` message enqueues the validated object to the processing queue in `block_processor`
  - `blockQueue: AsyncQueue[BlockEntry]` (shared with `request_manager` and `sync_manager`)
  - This queue is then regularly processed to be made available to the consensus object pools.
- a `xyzValidator` message adds the validated object to a pool in `eth2_processor`
  - Attestations (unaggregated and aggregated) get collected into batches.
  - Once a threshold is exceeded or after a timeout, they get validated together using `BatchCrypto` (see the sketch after this list).
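
The threshold-or-timeout batching policy from the last item can be pictured in a few lines of plain Nim. This is a minimal sketch, not the actual `BatchCrypto` code: `Attestation`, `verifyBatch`, and the constants are illustrative stand-ins, and a real implementation would also flush on a timer rather than only when a new item arrives.

```nim
import std/monotimes, std/times

type
  Attestation = object
    data: string

  Batcher = object
    pending: seq[Attestation]
    firstQueued: MonoTime

const
  BatchThreshold = 3                              # flush once this many queue up...
  BatchTimeout = initDuration(milliseconds = 50)  # ...or once the oldest has waited this long

proc verifyBatch(batch: seq[Attestation]) =
  # In Nimbus this would be one aggregated BLS verification, amortizing the
  # expensive cryptography across the whole batch.
  echo "verifying ", batch.len, " attestations together"

proc add(b: var Batcher, a: Attestation) =
  if b.pending.len == 0:
    b.firstQueued = getMonoTime()
  b.pending.add(a)
  if b.pending.len >= BatchThreshold or
      getMonoTime() - b.firstQueued >= BatchTimeout:
    verifyBatch(b.pending)
    b.pending.setLen(0)

when isMainModule:
  var b: Batcher
  for i in 1 .. 7:
    b.add(Attestation(data: "att" & $i))  # flushes after items 3 and 6; 7 stays pending
```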
## Security concerns
As the first line of defense in Nimbus, modules must be able to handle bursts of data that may come:

- from malicious nodes trying to DOS us (see the bounded-queue sketch below)
- from long periods of non-finality, creating lots of forks and attestations
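
One standard defense against such bursts, sketched below assuming the `chronos` library that Nimbus builds on: give queues a hard capacity, so a flood produces explicit drops (or backpressure) instead of unbounded memory growth. The drop-on-full policy here is illustrative, not necessarily what Nimbus applies to every queue.

```nim
import chronos

# A bounded queue absorbs short bursts and fails fast on sustained floods,
# preventing a malicious peer from growing our memory without limit.
let queue = newAsyncQueue[int](maxsize = 2)

for i in 1 .. 5:
  try:
    queue.addLastNoWait(i)   # non-blocking enqueue; raises when the queue is full
  except AsyncQueueFullError:
    echo "queue full, dropping item ", i
```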