# Deneb -- Fork Choice
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Introduction](#introduction)
- [Containers](#containers)
- [Helpers](#helpers)
    - [`validate_blobs`](#validate_blobs)
    - [`is_data_available`](#is_data_available)
- [Updated fork-choice handlers](#updated-fork-choice-handlers)
  - [`on_block`](#on_block)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
## Introduction

This is the modification of the fork choice accompanying the Deneb upgrade.

## Containers
## Helpers

#### `validate_blobs`

```python
def validate_blobs(expected_kzg_commitments: Sequence[KZGCommitment],
                   blobs: Sequence[Blob],
                   proofs: Sequence[KZGProof]) -> None:
    assert len(expected_kzg_commitments) == len(blobs)
    assert len(blobs) == len(proofs)

    # Clients MAY use `verify_blob_kzg_proof_multi` for efficiency
    for commitment, blob, proof in zip(expected_kzg_commitments, blobs, proofs):
        assert verify_blob_kzg_proof(commitment, blob, proof)
```
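
The loop above can be replaced by a single batched verification, as the comment suggests. The following is a non-normative sketch: it assumes `verify_blob_kzg_proof_multi` accepts the three sequences at once and returns `True` only if every `(commitment, blob, proof)` triple verifies; the normative name and signature of the batch helper are defined in the polynomial commitments specification.

```python
def validate_blobs_batched(expected_kzg_commitments: Sequence[KZGCommitment],
                           blobs: Sequence[Blob],
                           proofs: Sequence[KZGProof]) -> None:
    # Same length checks as `validate_blobs`
    assert len(expected_kzg_commitments) == len(blobs)
    assert len(blobs) == len(proofs)
    # One batched check instead of one pairing check per blob.
    # The signature of `verify_blob_kzg_proof_multi` is assumed here.
    assert verify_blob_kzg_proof_multi(expected_kzg_commitments, blobs, proofs)
```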
#### `is_data_available`
The implementation of `is_data_available` will become more sophisticated during later scaling upgrades.
Initially, verification requires every verifying actor to retrieve all matching `Blob`s and `KZGProof`s, and validate them with `validate_blobs`.
The block MUST NOT be considered valid until all valid `Blob`s have been downloaded. Blocks that have been previously validated as available SHOULD be considered available even if the associated `Blob`s have subsequently been pruned.
```python
def is_data_available(beacon_block_root: Root, blob_kzg_commitments: Sequence[KZGCommitment]) -> bool:
    # `retrieve_blobs_and_proofs` is implementation and context dependent.
    # It returns all the blobs for the given block root, and raises an exception if they are not available.
    # Note: the p2p network does not guarantee sidecar retrieval outside of `MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS`
    blobs, proofs = retrieve_blobs_and_proofs(beacon_block_root)

    # For testing, `retrieve_blobs_and_proofs` returns ("TEST", "TEST").
    # TODO: Remove it once we have a way to inject `BlobSidecar` into tests.
    if isinstance(blobs, str):
        return True

    validate_blobs(blob_kzg_commitments, blobs, proofs)
    return True
```
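
As noted above, a block previously validated as available SHOULD remain available even after its blobs are pruned. A non-normative sketch of how a client might record availability verdicts, assuming a hypothetical in-memory `available_roots` set that is not part of this specification:

```python
# Hypothetical client-side cache of availability verdicts; not part of this spec.
available_roots: Set[Root] = set()

def is_data_available_cached(beacon_block_root: Root,
                             blob_kzg_commitments: Sequence[KZGCommitment]) -> bool:
    if beacon_block_root in available_roots:
        # Previously validated as available: still considered available,
        # even if the associated blobs have since been pruned.
        return True
    # `is_data_available` raises if the blobs cannot be retrieved or validated.
    assert is_data_available(beacon_block_root, blob_kzg_commitments)
    available_roots.add(beacon_block_root)
    return True
```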
## Updated fork-choice handlers
### `on_block`
*Note*: The only modification is the addition of the blob data availability check.
```python
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
    """
    Run ``on_block`` upon receiving a new block.
    """
    block = signed_block.message
    # Parent block must be known
    assert block.parent_root in store.block_states
    # Make a copy of the state to avoid mutability issues
    pre_state = copy(store.block_states[block.parent_root])
    # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
    assert get_current_slot(store) >= block.slot

    # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
    finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
    assert block.slot > finalized_slot
    # Check block is a descendant of the finalized block at the checkpoint finalized slot
    assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root

    # [New in Deneb]
    # Check if blob data is available
    # If not, this block MAY be queued and subsequently considered when blob data becomes available
    assert is_data_available(hash_tree_root(block), block.body.blob_kzg_commitments)

    # Check the block is valid and compute the post-state
    state = pre_state.copy()
    state_transition(state, signed_block, True)

    # Check the merge transition
    if is_merge_transition_block(pre_state, block.body):
        validate_merge_block(block)

    # Add new block to the store
    store.blocks[hash_tree_root(block)] = block
    # Add new state for this block to the store
    store.block_states[hash_tree_root(block)] = state

    # Add proposer score boost if the block is timely
    time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
    is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
    if get_current_slot(store) == block.slot and is_before_attesting_interval:
        store.proposer_boost_root = hash_tree_root(block)

    # Update justified checkpoint
    if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
        if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
            store.best_justified_checkpoint = state.current_justified_checkpoint
        if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
            store.justified_checkpoint = state.current_justified_checkpoint

    # Update finalized checkpoint
    if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
        store.finalized_checkpoint = state.finalized_checkpoint
        store.justified_checkpoint = state.current_justified_checkpoint
```
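
The `on_block` comment above notes that a block whose blob data is missing MAY be queued and retried later. A non-normative sketch of one such retry loop follows, assuming a hypothetical `pending_blocks` map and an `on_blob_sidecar_received` hook, neither of which is part of this specification; a production client would distinguish "blob data not yet available" from other import failures rather than catching `AssertionError` wholesale.

```python
# Hypothetical client-side queue for blocks awaiting blob data; not part of this spec.
pending_blocks: Dict[Root, SignedBeaconBlock] = {}

def on_block_with_queue(store: Store, signed_block: SignedBeaconBlock) -> None:
    block_root = hash_tree_root(signed_block.message)
    try:
        on_block(store, signed_block)
    except AssertionError:
        # A precondition failed, possibly `is_data_available`:
        # queue the block and retry when more blob data arrives.
        pending_blocks[block_root] = signed_block

def on_blob_sidecar_received(store: Store, block_root: Root) -> None:
    # Retry a queued block now that additional blob data is available.
    if block_root in pending_blocks:
        on_block_with_queue(store, pending_blocks.pop(block_root))
```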