Gossip Processing

This folder holds a collection of modules to:

  • validate raw gossip data before
    • rebroadcasting it (potentially aggregated)
    • sending it to one of the consensus object pools

Validation

Gossip validation is different from consensus verification, in particular for blocks: it applies only the cheap checks needed to decide whether a message should be propagated, while full verification happens later in the processing pipeline.
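
To make the distinction concrete, here is a minimal sketch with hypothetical types and checks (SignedBlock, gossipValidate and the seen list are illustrative, not the real Nimbus APIs); the enum mirrors the Accept/Reject/Ignore outcomes used by nim-libp2p:

```nim
type
  ValidationResult {.pure.} = enum
    Accept  # propagate to peers and hand off to consensus processing
    Reject  # provably invalid: drop and penalize the sending peer
    Ignore  # not actionable right now, e.g. duplicate or from the future

  SignedBlock = object
    slot: uint64
    signatureOk: bool  # placeholder for a real signature check

proc gossipValidate(blk: SignedBlock, currentSlot: uint64,
                    seen: var seq[uint64]): ValidationResult =
  # Cheap gossip-level checks only; full state-transition verification
  # is deferred to the consensus machinery described below.
  if blk.slot > currentSlot:
    return ValidationResult.Ignore  # from the future: not useful yet
  if blk.slot in seen:
    return ValidationResult.Ignore  # duplicate: drop without penalty
  if not blk.signatureOk:
    return ValidationResult.Reject  # invalid: the sender gets penalized
  seen.add blk.slot
  ValidationResult.Accept
```

Only Accept leads to rebroadcasting; Reject and Ignore both stop propagation and differ mainly in how the sending peer is scored.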

There are multiple consumers of validated consensus objects:

  • a ValidationResult.Accept output triggers rebroadcasting in libp2p
    • the result is returned to method validate(PubSub, Message) in libp2p/protocols/pubsub/pubsub.nim,
    • which is called from rpcHandler(GossipSub, PubSubPeer, RPCMsg)
  • a blockValidator message enqueues the validated object to the processing queue in block_processor
    • blockQueue: AsyncQueue[BlockEntry] (shared with request_manager and sync_manager)
    • This queue is processed regularly, making the validated objects available to the consensus object pools.
  • an xyzValidator message adds the validated object to a pool in eth2_processor
    • Attestations (unaggregated and aggregated) are collected into batches.
    • Once a threshold is exceeded or a timeout expires, they are validated together using BatchCrypto, as sketched below.
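
The batching step can be pictured with the sketch below; batchThreshold, pending and verifyBatch are illustrative stand-ins rather than the actual BatchCrypto interface:

```nim
type
  Attestation = object      # simplified stand-in for the real type
    signature: seq[byte]

const batchThreshold = 72   # illustrative size trigger

var pending: seq[Attestation]

proc verifyBatch(batch: seq[Attestation]) =
  # Stand-in for BatchCrypto: verifying many BLS signatures in one
  # batched operation amortizes the expensive cryptographic work
  # compared to checking each attestation on its own.
  discard batch.len

proc addAttestation(a: Attestation) =
  pending.add a
  if pending.len >= batchThreshold:
    # The real processor also arms a timer so that a partially filled
    # batch is flushed after a timeout, bounding validation latency.
    verifyBatch(move(pending))
```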

Security concerns

As the first line of defense in Nimbus, these modules must be able to handle bursts of data that may come:

  • from malicious nodes trying to DoS us
  • from long periods of non-finality, creating lots of forks and attestations; the sketch below shows one way to absorb such bursts
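
One way to absorb such bursts is backpressure through a bounded queue. The sketch below assumes chronos (newAsyncQueue, addLast, popFirst) and a simplified BlockEntry; the maxsize value is illustrative, not necessarily how the real blockQueue is configured:

```nim
import chronos

type BlockEntry = object    # simplified stand-in for the real type
  slot: uint64

# Bounding the queue means a burst suspends producers in addLast
# instead of growing memory without limit.
let blockQueue = newAsyncQueue[BlockEntry](maxsize = 1024)

proc enqueueValidated(entry: BlockEntry) {.async.} =
  await blockQueue.addLast(entry)  # waits while the queue is full

proc processLoop() {.async.} =
  while true:
    let entry = await blockQueue.popFirst()
    discard entry                  # consensus verification happens here
```

Whether to block, drop or down-score when the queue fills up is a policy choice; blocking, as sketched here, naturally slows intake during non-finality storms.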