# Capella -- The Beacon Chain
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Introduction](#introduction)
- [Custom types](#custom-types)
- [Constants](#constants)
  - [Domain types](#domain-types)
- [Preset](#preset)
  - [Misc](#misc)
  - [State list lengths](#state-list-lengths)
  - [Max operations per block](#max-operations-per-block)
  - [Execution](#execution)
- [Configuration](#configuration)
- [Containers](#containers)
  - [New containers](#new-containers)
    - [`Withdrawal`](#withdrawal)
    - [`BLSToExecutionChange`](#blstoexecutionchange)
    - [`SignedBLSToExecutionChange`](#signedblstoexecutionchange)
    - [`HistoricalBatchSummary`](#historicalbatchsummary)
  - [Extended Containers](#extended-containers)
    - [`ExecutionPayload`](#executionpayload)
    - [`ExecutionPayloadHeader`](#executionpayloadheader)
    - [`BeaconBlockBody`](#beaconblockbody)
    - [`BeaconState`](#beaconstate)
- [Helpers](#helpers)
  - [Beacon state mutators](#beacon-state-mutators)
    - [`withdraw_balance`](#withdraw_balance)
  - [Predicates](#predicates)
    - [`has_eth1_withdrawal_credential`](#has_eth1_withdrawal_credential)
    - [`is_fully_withdrawable_validator`](#is_fully_withdrawable_validator)
    - [`is_partially_withdrawable_validator`](#is_partially_withdrawable_validator)
- [Beacon chain state transition function](#beacon-chain-state-transition-function)
  - [Epoch processing](#epoch-processing)
    - [Full withdrawals](#full-withdrawals)
    - [Partial withdrawals](#partial-withdrawals)
    - [Historical batches updates](#historical-batches-updates)
  - [Block processing](#block-processing)
    - [New `process_withdrawals`](#new-process_withdrawals)
    - [Modified `process_execution_payload`](#modified-process_execution_payload)
    - [Modified `process_operations`](#modified-process_operations)
    - [New `process_bls_to_execution_change`](#new-process_bls_to_execution_change)
- [Testing](#testing)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
## Introduction
Capella is a consensus-layer upgrade containing a number of features related
to validator withdrawals, including:

* Automatic withdrawals of `withdrawable` validators.
* A partial withdrawals sweep for validators with 0x01 withdrawal
  credentials and balances in excess of `MAX_EFFECTIVE_BALANCE`.
* An operation to change from `BLS_WITHDRAWAL_PREFIX` to
  `ETH1_ADDRESS_WITHDRAWAL_PREFIX` versioned withdrawal credentials, enabling withdrawals for a validator (the byte layouts of both credential versions are sketched below).
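
For orientation, a minimal sketch of the two withdrawal-credential layouts referenced above. This is illustrative only and not part of the spec: the helper names are hypothetical, and `hash` denotes the spec's SHA-256 helper.

```python
def bls_withdrawal_credentials(bls_pubkey: BLSPubkey) -> Bytes32:
    # 0x00-prefixed credentials: prefix byte followed by the last 31 bytes of hash(pubkey)
    return Bytes32(BLS_WITHDRAWAL_PREFIX + hash(bls_pubkey)[1:])


def eth1_withdrawal_credentials(execution_address: ExecutionAddress) -> Bytes32:
    # 0x01-prefixed credentials: prefix byte, 11 zero bytes of padding, then the 20-byte execution address
    return Bytes32(ETH1_ADDRESS_WITHDRAWAL_PREFIX + b'\x00' * 11 + execution_address)
```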
## Custom types
We define the following Python custom types for type hinting and readability:

| Name | SSZ equivalent | Description |
| - | - | - |
| `WithdrawalIndex` | `uint64` | an index of a `Withdrawal` |
## Constants

### Domain types

| Name | Value |
| - | - |
| `DOMAIN_BLS_TO_EXECUTION_CHANGE` | `DomainType('0x0A000000')` |

## Preset

### Misc

| Name | Value |
| - | - |
| `MAX_PARTIAL_WITHDRAWALS_PER_EPOCH` | `uint64(2**8)` (= 256) |

### State list lengths

| Name | Value | Unit |
| - | - | :-: |
| `WITHDRAWAL_QUEUE_LIMIT` | `uint64(2**40)` (= 1,099,511,627,776) | withdrawals enqueued in state |

### Max operations per block

| Name | Value |
| - | - |
| `MAX_BLS_TO_EXECUTION_CHANGES` | `2**4` (= 16) |

### Execution

| Name | Value | Description |
| - | - | - |
| `MAX_WITHDRAWALS_PER_PAYLOAD` | `uint64(2**4)` (= 16) | Maximum amount of withdrawals allowed in each payload |
## Configuration
## Containers
### New containers

#### `Withdrawal`

```python
class Withdrawal(Container):
    index: WithdrawalIndex
    validator_index: ValidatorIndex
    address: ExecutionAddress
    amount: Gwei
```
#### `BLSToExecutionChange`

```python
class BLSToExecutionChange(Container):
    validator_index: ValidatorIndex
    from_bls_pubkey: BLSPubkey
    to_execution_address: ExecutionAddress
```

#### `SignedBLSToExecutionChange`

```python
class SignedBLSToExecutionChange(Container):
    message: BLSToExecutionChange
    signature: BLSSignature
```
#### `HistoricalBatchSummary`

```python
class HistoricalBatchSummary(Container):
    """
    `HistoricalBatchSummary` matches the components of the phase0 `HistoricalBatch`,
    making the two hash_tree_root-compatible.
    """
    block_batch_root: Root
    state_batch_root: Root
```
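
To illustrate the docstring's claim (a sketch, not part of the spec; `check_batch_summary_compatibility` is a hypothetical helper): an SSZ container commits to the `hash_tree_root` of each of its fields, so a `HistoricalBatchSummary` built from the roots of the two vectors merkleizes to the same root as the corresponding phase0 `HistoricalBatch`.

```python
def check_batch_summary_compatibility(state: BeaconState) -> bool:
    # Hypothetical helper: both containers should merkleize to the same root,
    # since the summary stores exactly the hash_tree_root of each vector.
    batch = HistoricalBatch(block_roots=state.block_roots, state_roots=state.state_roots)
    summary = HistoricalBatchSummary(
        block_batch_root=hash_tree_root(state.block_roots),
        state_batch_root=hash_tree_root(state.state_roots),
    )
    return hash_tree_root(batch) == hash_tree_root(summary)
```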
### Extended Containers
#### `ExecutionPayload`

```python
class ExecutionPayload(Container):
    # Execution block header fields
    parent_hash: Hash32
    fee_recipient: ExecutionAddress  # 'beneficiary' in the yellow paper
    state_root: Bytes32
    receipts_root: Bytes32
    logs_bloom: ByteVector[BYTES_PER_LOGS_BLOOM]
    prev_randao: Bytes32  # 'difficulty' in the yellow paper
    block_number: uint64  # 'number' in the yellow paper
    gas_limit: uint64
    gas_used: uint64
    timestamp: uint64
    extra_data: ByteList[MAX_EXTRA_DATA_BYTES]
    base_fee_per_gas: uint256
    # Extra payload fields
    block_hash: Hash32  # Hash of execution block
    transactions: List[Transaction, MAX_TRANSACTIONS_PER_PAYLOAD]
    withdrawals: List[Withdrawal, MAX_WITHDRAWALS_PER_PAYLOAD]  # [New in Capella]
```

#### `ExecutionPayloadHeader`

```python
class ExecutionPayloadHeader(Container):
    # Execution block header fields
    parent_hash: Hash32
    fee_recipient: ExecutionAddress
    state_root: Bytes32
    receipts_root: Bytes32
    logs_bloom: ByteVector[BYTES_PER_LOGS_BLOOM]
    prev_randao: Bytes32
    block_number: uint64
    gas_limit: uint64
    gas_used: uint64
    timestamp: uint64
    extra_data: ByteList[MAX_EXTRA_DATA_BYTES]
    base_fee_per_gas: uint256
    # Extra payload fields
    block_hash: Hash32  # Hash of execution block
    transactions_root: Root
    withdrawals_root: Root  # [New in Capella]
```
#### `BeaconBlockBody`

```python
class BeaconBlockBody(Container):
    randao_reveal: BLSSignature
    eth1_data: Eth1Data  # Eth1 data vote
    graffiti: Bytes32  # Arbitrary data
    # Operations
    proposer_slashings: List[ProposerSlashing, MAX_PROPOSER_SLASHINGS]
    attester_slashings: List[AttesterSlashing, MAX_ATTESTER_SLASHINGS]
    attestations: List[Attestation, MAX_ATTESTATIONS]
    deposits: List[Deposit, MAX_DEPOSITS]
    voluntary_exits: List[SignedVoluntaryExit, MAX_VOLUNTARY_EXITS]
    sync_aggregate: SyncAggregate
    # Execution
    execution_payload: ExecutionPayload
    # Capella operations
    bls_to_execution_changes: List[SignedBLSToExecutionChange, MAX_BLS_TO_EXECUTION_CHANGES]  # [New in Capella]
```
#### `BeaconState`

```python
class BeaconState(Container):
    # Versioning
    genesis_time: uint64
    genesis_validators_root: Root
    slot: Slot
    fork: Fork
    # History
    latest_block_header: BeaconBlockHeader
    block_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
    state_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
    historical_roots: List[Root, HISTORICAL_ROOTS_LIMIT]  # Frozen in Merge, replaced by historical_batches
    historical_batches: List[HistoricalBatchSummary, HISTORICAL_ROOTS_LIMIT]  # Valid from Merge onwards
    # Eth1
    eth1_data: Eth1Data
    eth1_data_votes: List[Eth1Data, EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH]
    eth1_deposit_index: uint64
    # Registry
    validators: List[Validator, VALIDATOR_REGISTRY_LIMIT]
    balances: List[Gwei, VALIDATOR_REGISTRY_LIMIT]
    # Randomness
    randao_mixes: Vector[Bytes32, EPOCHS_PER_HISTORICAL_VECTOR]
    # Slashings
    slashings: Vector[Gwei, EPOCHS_PER_SLASHINGS_VECTOR]  # Per-epoch sums of slashed effective balances
    # Participation
    previous_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
    current_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
    # Finality
    justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH]  # Bit set for every recent justified epoch
    previous_justified_checkpoint: Checkpoint
    current_justified_checkpoint: Checkpoint
    finalized_checkpoint: Checkpoint
    # Inactivity
    inactivity_scores: List[uint64, VALIDATOR_REGISTRY_LIMIT]
    # Sync
    current_sync_committee: SyncCommittee
    next_sync_committee: SyncCommittee
    # Execution
    latest_execution_payload_header: ExecutionPayloadHeader
    # Withdrawals
    withdrawal_queue: List[Withdrawal, WITHDRAWAL_QUEUE_LIMIT]  # [New in Capella]
    next_withdrawal_index: WithdrawalIndex  # [New in Capella]
    next_partial_withdrawal_validator_index: ValidatorIndex  # [New in Capella]
```
## Helpers

### Beacon state mutators

#### `withdraw_balance`

```python
def withdraw_balance(state: BeaconState, validator_index: ValidatorIndex, amount: Gwei) -> None:
    # Decrease the validator's balance
    decrease_balance(state, validator_index, amount)
    # Create a corresponding withdrawal receipt
    withdrawal = Withdrawal(
        index=state.next_withdrawal_index,
        validator_index=validator_index,
        address=ExecutionAddress(state.validators[validator_index].withdrawal_credentials[12:]),
        amount=amount,
    )
    state.next_withdrawal_index = WithdrawalIndex(state.next_withdrawal_index + 1)
    state.withdrawal_queue.append(withdrawal)
```
### Predicates

#### `has_eth1_withdrawal_credential`

```python
def has_eth1_withdrawal_credential(validator: Validator) -> bool:
    """
    Check if ``validator`` has an 0x01 prefixed "eth1" withdrawal credential.
    """
    return validator.withdrawal_credentials[:1] == ETH1_ADDRESS_WITHDRAWAL_PREFIX
```

#### `is_fully_withdrawable_validator`

```python
def is_fully_withdrawable_validator(validator: Validator, balance: Gwei, epoch: Epoch) -> bool:
    """
    Check if ``validator`` is fully withdrawable.
    """
    return (
        has_eth1_withdrawal_credential(validator)
        and validator.withdrawable_epoch <= epoch
        and balance > 0
    )
```

#### `is_partially_withdrawable_validator`

```python
def is_partially_withdrawable_validator(validator: Validator, balance: Gwei) -> bool:
    """
    Check if ``validator`` is partially withdrawable.
    """
    has_max_effective_balance = validator.effective_balance == MAX_EFFECTIVE_BALANCE
    has_excess_balance = balance > MAX_EFFECTIVE_BALANCE
    return has_eth1_withdrawal_credential(validator) and has_max_effective_balance and has_excess_balance
```
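
As a worked example (illustrative only, not part of the spec): a validator with 0x01 credentials, an effective balance equal to `MAX_EFFECTIVE_BALANCE`, and a balance of 33 ETH is partially withdrawable, and the sweep below would withdraw the 1 ETH excess.

```python
# Illustrative values; MAX_EFFECTIVE_BALANCE is 32 * 10**9 Gwei in the preset.
validator = Validator(
    withdrawal_credentials=ETH1_ADDRESS_WITHDRAWAL_PREFIX + b'\x00' * 11 + b'\x11' * 20,
    effective_balance=MAX_EFFECTIVE_BALANCE,
)
balance = Gwei(MAX_EFFECTIVE_BALANCE + 10**9)  # 33 ETH total: 1 ETH in excess of the maximum

assert is_partially_withdrawable_validator(validator, balance)
assert balance - MAX_EFFECTIVE_BALANCE == Gwei(10**9)  # amount the partial-withdrawals sweep enqueues
```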
## Beacon chain state transition function

### Epoch processing

```python
def process_epoch(state: BeaconState) -> None:
    process_justification_and_finalization(state)
    process_inactivity_updates(state)
    process_rewards_and_penalties(state)
    process_registry_updates(state)
    process_slashings(state)
    process_eth1_data_reset(state)
    process_effective_balance_updates(state)
    process_slashings_reset(state)
    process_randao_mixes_reset(state)
    process_historical_batches_update(state)
    process_participation_flag_updates(state)
    process_sync_committee_updates(state)
    process_full_withdrawals(state)  # [New in Capella]
    process_partial_withdrawals(state)  # [New in Capella]
```
#### Full withdrawals

*Note*: The function `process_full_withdrawals` is new.

```python
def process_full_withdrawals(state: BeaconState) -> None:
    current_epoch = get_current_epoch(state)
    for index in range(len(state.validators)):
        balance = state.balances[index]
        validator = state.validators[index]
        if is_fully_withdrawable_validator(validator, balance, current_epoch):
            withdraw_balance(state, ValidatorIndex(index), balance)
```

#### Partial withdrawals

*Note*: The function `process_partial_withdrawals` is new.

```python
def process_partial_withdrawals(state: BeaconState) -> None:
    partial_withdrawals_count = 0
    # Begin where we left off last time
    validator_index = state.next_partial_withdrawal_validator_index
    for _ in range(len(state.validators)):
        balance = state.balances[validator_index]
        validator = state.validators[validator_index]
        if is_partially_withdrawable_validator(validator, balance):
            withdraw_balance(state, validator_index, balance - MAX_EFFECTIVE_BALANCE)
            partial_withdrawals_count += 1

        # Iterate to next validator to check for partial withdrawal
        validator_index = ValidatorIndex((validator_index + 1) % len(state.validators))

        # Exit if performed maximum allowable withdrawals
        if partial_withdrawals_count == MAX_PARTIAL_WITHDRAWALS_PER_EPOCH:
            break

    state.next_partial_withdrawal_validator_index = validator_index
```

#### Historical batches updates

*Note*: The function `process_historical_batches_update` replaces `process_historical_roots_update` in phase0.

```python
def process_historical_batches_update(state: BeaconState) -> None:
    # Set historical block root accumulator
    next_epoch = Epoch(get_current_epoch(state) + 1)
    if next_epoch % (SLOTS_PER_HISTORICAL_ROOT // SLOTS_PER_EPOCH) == 0:
        historical_batch = HistoricalBatchSummary(
            block_batch_root=hash_tree_root(state.block_roots),
            state_batch_root=hash_tree_root(state.state_roots))
        state.historical_batches.append(historical_batch)
```
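
For intuition (a sketch, not part of the spec; `verify_block_root_batch` is a hypothetical helper): once a `HistoricalBatchSummary` has been appended, a batch of `SLOTS_PER_HISTORICAL_ROOT` block roots obtained out of band can be checked against the state with a single root comparison.

```python
def verify_block_root_batch(state: BeaconState, batch_index: uint64, block_roots: Sequence[Root]) -> bool:
    # Recompute the vector root of the candidate batch and compare it with the accumulator entry
    assert len(block_roots) == SLOTS_PER_HISTORICAL_ROOT
    candidate_root = hash_tree_root(Vector[Root, SLOTS_PER_HISTORICAL_ROOT](*block_roots))
    return candidate_root == state.historical_batches[batch_index].block_batch_root
```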
### Block processing

```python
def process_block(state: BeaconState, block: BeaconBlock) -> None:
    process_block_header(state, block)
    if is_execution_enabled(state, block.body):
        process_withdrawals(state, block.body.execution_payload)  # [New in Capella]
        process_execution_payload(state, block.body.execution_payload, EXECUTION_ENGINE)  # [Modified in Capella]
    process_randao(state, block.body)
    process_eth1_data(state, block.body)
    process_operations(state, block.body)
    process_sync_aggregate(state, block.body.sync_aggregate)
```
#### New `process_withdrawals`

```python
def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
    num_withdrawals = min(MAX_WITHDRAWALS_PER_PAYLOAD, len(state.withdrawal_queue))
    dequeued_withdrawals = state.withdrawal_queue[:num_withdrawals]

    assert len(dequeued_withdrawals) == len(payload.withdrawals)
    for dequeued_withdrawal, withdrawal in zip(dequeued_withdrawals, payload.withdrawals):
        assert dequeued_withdrawal == withdrawal

    # Remove dequeued withdrawals from state
    state.withdrawal_queue = state.withdrawal_queue[num_withdrawals:]
```
#### Modified `process_execution_payload`

*Note*: The function `process_execution_payload` is modified to use the new `ExecutionPayloadHeader` type.

```python
def process_execution_payload(state: BeaconState, payload: ExecutionPayload, execution_engine: ExecutionEngine) -> None:
    # Verify consistency of the parent hash with respect to the previous execution payload header
    if is_merge_transition_complete(state):
        assert payload.parent_hash == state.latest_execution_payload_header.block_hash
    # Verify prev_randao
    assert payload.prev_randao == get_randao_mix(state, get_current_epoch(state))
    # Verify timestamp
    assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
    # Verify the execution payload is valid
    assert execution_engine.notify_new_payload(payload)
    # Cache execution payload header
    state.latest_execution_payload_header = ExecutionPayloadHeader(
        parent_hash=payload.parent_hash,
        fee_recipient=payload.fee_recipient,
        state_root=payload.state_root,
        receipts_root=payload.receipts_root,
        logs_bloom=payload.logs_bloom,
        prev_randao=payload.prev_randao,
        block_number=payload.block_number,
        gas_limit=payload.gas_limit,
        gas_used=payload.gas_used,
        timestamp=payload.timestamp,
        extra_data=payload.extra_data,
        base_fee_per_gas=payload.base_fee_per_gas,
        block_hash=payload.block_hash,
        transactions_root=hash_tree_root(payload.transactions),
        withdrawals_root=hash_tree_root(payload.withdrawals),  # [New in Capella]
    )
```
#### Modified `process_operations`

*Note*: The function `process_operations` is modified to process `BLSToExecutionChange` operations included in the block.

```python
def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
    # Verify that outstanding deposits are processed up to the maximum number of deposits
    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)

    def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
        for operation in operations:
            fn(state, operation)

    for_ops(body.proposer_slashings, process_proposer_slashing)
    for_ops(body.attester_slashings, process_attester_slashing)
    for_ops(body.attestations, process_attestation)
    for_ops(body.deposits, process_deposit)
    for_ops(body.voluntary_exits, process_voluntary_exit)
    for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)  # [New in Capella]
```
#### New `process_bls_to_execution_change`

```python
def process_bls_to_execution_change(state: BeaconState,
                                    signed_address_change: SignedBLSToExecutionChange) -> None:
    address_change = signed_address_change.message

    assert address_change.validator_index < len(state.validators)

    validator = state.validators[address_change.validator_index]

    assert validator.withdrawal_credentials[:1] == BLS_WITHDRAWAL_PREFIX
    assert validator.withdrawal_credentials[1:] == hash(address_change.from_bls_pubkey)[1:]

    domain = get_domain(state, DOMAIN_BLS_TO_EXECUTION_CHANGE)
    signing_root = compute_signing_root(address_change, domain)
    assert bls.Verify(address_change.from_bls_pubkey, signing_root, signed_address_change.signature)

    validator.withdrawal_credentials = (
        ETH1_ADDRESS_WITHDRAWAL_PREFIX
        + b'\x00' * 11
        + address_change.to_execution_address
    )
```
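
For context (a sketch, not part of the state transition; `build_signed_address_change` is a hypothetical helper, and `bls.Sign` is assumed to be the spec's BLS signing helper): a message satisfying the checks above could be produced roughly as follows by the holder of the withdrawal BLS key.

```python
def build_signed_address_change(state: BeaconState,
                                validator_index: ValidatorIndex,
                                from_bls_privkey: int,
                                from_bls_pubkey: BLSPubkey,
                                to_execution_address: ExecutionAddress) -> SignedBLSToExecutionChange:
    # Hypothetical helper: construct the change message and sign it over the
    # DOMAIN_BLS_TO_EXECUTION_CHANGE domain so that process_bls_to_execution_change accepts it.
    message = BLSToExecutionChange(
        validator_index=validator_index,
        from_bls_pubkey=from_bls_pubkey,
        to_execution_address=to_execution_address,
    )
    domain = get_domain(state, DOMAIN_BLS_TO_EXECUTION_CHANGE)
    signing_root = compute_signing_root(message, domain)
    signature = bls.Sign(from_bls_privkey, signing_root)
    return SignedBLSToExecutionChange(message=message, signature=signature)
```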
## Testing
*Note*: The function `initialize_beacon_state_from_eth1` is modified for pure Capella testing only.
Modifications include:
1. Use `CAPELLA_FORK_VERSION` as the previous and current fork version.
2. Utilize the Capella `BeaconBlockBody` when constructing the initial `latest_block_header`.
```python
def initialize_beacon_state_from_eth1(eth1_block_hash: Hash32,
                                      eth1_timestamp: uint64,
                                      deposits: Sequence[Deposit],
                                      execution_payload_header: ExecutionPayloadHeader=ExecutionPayloadHeader()
                                      ) -> BeaconState:
    fork = Fork(
        previous_version=CAPELLA_FORK_VERSION,  # [Modified in Capella] for testing only
        current_version=CAPELLA_FORK_VERSION,  # [Modified in Capella]
        epoch=GENESIS_EPOCH,
    )
    state = BeaconState(
        genesis_time=eth1_timestamp + GENESIS_DELAY,
        fork=fork,
        eth1_data=Eth1Data(block_hash=eth1_block_hash, deposit_count=uint64(len(deposits))),
        latest_block_header=BeaconBlockHeader(body_root=hash_tree_root(BeaconBlockBody())),
        randao_mixes=[eth1_block_hash] * EPOCHS_PER_HISTORICAL_VECTOR,  # Seed RANDAO with Eth1 entropy
    )

    # Process deposits
    leaves = list(map(lambda deposit: deposit.data, deposits))
    for index, deposit in enumerate(deposits):
        deposit_data_list = List[DepositData, 2**DEPOSIT_CONTRACT_TREE_DEPTH](*leaves[:index + 1])
        state.eth1_data.deposit_root = hash_tree_root(deposit_data_list)
        process_deposit(state, deposit)

    # Process activations
    for index, validator in enumerate(state.validators):
        balance = state.balances[index]
        validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
        if validator.effective_balance == MAX_EFFECTIVE_BALANCE:
            validator.activation_eligibility_epoch = GENESIS_EPOCH
            validator.activation_epoch = GENESIS_EPOCH

    # Set genesis validators root for domain separation and chain versioning
    state.genesis_validators_root = hash_tree_root(state.validators)

    # Fill in sync committees
    # Note: A duplicate committee is assigned for the current and next committee at genesis
    state.current_sync_committee = get_next_sync_committee(state)
    state.next_sync_committee = get_next_sync_committee(state)

    # Initialize the execution payload header
    state.latest_execution_payload_header = execution_payload_header

    return state
```