Merge branch 'dev' into peer-das

This commit is contained in:
Hsiao-Wei Wang 2024-04-04 22:23:02 +09:00
commit 8728561da3
No known key found for this signature in database
GPG Key ID: AE3D6B174F971DE4
77 changed files with 4048 additions and 156 deletions

View File

@ -181,6 +181,19 @@ jobs:
command: make citest fork=eip7002
- store_test_results:
path: tests/core/pyspec/test-reports
test-eip7549:
docker:
- image: circleci/python:3.9
working_directory: ~/specs-repo
steps:
- restore_cache:
key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
- restore_pyspec_cached_venv
- run:
name: Run py-tests
command: make citest fork=eip7549
- store_test_results:
path: tests/core/pyspec/test-reports
test-whisk:
docker:
- image: circleci/python:3.9
@ -333,6 +346,9 @@ workflows:
- test-eip7002:
requires:
- install_pyspec_test
- test-eip7549:
requires:
- install_pyspec_test
- test-whisk:
requires:
- install_pyspec_test

View File

@ -71,7 +71,7 @@ jobs:
needs: [preclear,lint,codespell,table_of_contents]
strategy:
matrix:
version: ["phase0", "altair", "bellatrix", "capella", "deneb", "eip6110", "eip7002", "whisk", "eip7594"]
version: ["phase0", "altair", "bellatrix", "capella", "deneb", "eip6110", "eip7002", "eip7549", "whisk", "eip7594"]
steps:
- name: Checkout this repo
uses: actions/checkout@v3.2.0

2
.gitignore vendored
View File

@ -23,7 +23,9 @@ tests/core/pyspec/eth2spec/capella/
tests/core/pyspec/eth2spec/deneb/
tests/core/pyspec/eth2spec/eip6110/
tests/core/pyspec/eth2spec/eip7002/
tests/core/pyspec/eth2spec/eip7549/
tests/core/pyspec/eth2spec/whisk/
tests/core/pyspec/eth2spec/eip7251/
tests/core/pyspec/eth2spec/eip7594/
# coverage reports

View File

@ -35,7 +35,7 @@ MARKDOWN_FILES = $(wildcard $(SPEC_DIR)/*/*.md) \
$(wildcard $(SPEC_DIR)/_features/*/*/*.md) \
$(wildcard $(SSZ_DIR)/*.md)
ALL_EXECUTABLE_SPEC_NAMES = phase0 altair bellatrix capella deneb eip6110 eip7002 whisk
ALL_EXECUTABLE_SPEC_NAMES = phase0 altair bellatrix capella deneb eip6110 eip7002 eip7549 whisk
# The parameters for commands. Use `foreach` to avoid listing specs again.
COVERAGE_SCOPE := $(foreach S,$(ALL_EXECUTABLE_SPEC_NAMES), --cov=eth2spec.$S.$(TEST_PRESET_TYPE))
PYLINT_SCOPE := $(foreach S,$(ALL_EXECUTABLE_SPEC_NAMES), ./eth2spec/$S)

View File

@ -21,11 +21,11 @@ Features are researched and developed in parallel, and then consolidated into se
| 1 | **Altair** | `74240` | <ul><li>Core</li><ul><li>[Beacon chain changes](specs/altair/beacon-chain.md)</li><li>[Altair fork](specs/altair/fork.md)</li></ul><li>Additions</li><ul><li>[Light client sync protocol](specs/altair/light-client/sync-protocol.md) ([full node](specs/altair/light-client/full-node.md), [light client](specs/altair/light-client/light-client.md), [networking](specs/altair/light-client/p2p-interface.md))</li><li>[Honest validator guide changes](specs/altair/validator.md)</li><li>[P2P networking](specs/altair/p2p-interface.md)</li></ul></ul> |
| 2 | **Bellatrix** <br/> (["The Merge"](https://ethereum.org/en/upgrades/merge/)) | `144896` | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/bellatrix/beacon-chain.md)</li><li>[Bellatrix fork](specs/bellatrix/fork.md)</li><li>[Fork choice changes](specs/bellatrix/fork-choice.md)</li></ul><li>Additions</li><ul><li>[Honest validator guide changes](specs/bellatrix/validator.md)</li><li>[P2P networking](specs/bellatrix/p2p-interface.md)</li></ul></ul> |
| 3 | **Capella** | `194048` | <ul><li>Core</li><ul><li>[Beacon chain changes](specs/capella/beacon-chain.md)</li><li>[Capella fork](specs/capella/fork.md)</li></ul><li>Additions</li><ul><li>[Light client sync protocol changes](specs/capella/light-client/sync-protocol.md) ([fork](specs/capella/light-client/fork.md), [full node](specs/capella/light-client/full-node.md), [networking](specs/capella/light-client/p2p-interface.md))</li></ul><ul><li>[Validator additions](specs/capella/validator.md)</li><li>[P2P networking](specs/capella/p2p-interface.md)</li></ul></ul> |
| 4 | **Deneb** | `269568` | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/deneb/beacon-chain.md)</li><li>[Deneb fork](specs/deneb/fork.md)</li><li>[Polynomial commitments](specs/deneb/polynomial-commitments.md)</li><li>[Fork choice changes](specs/deneb/fork-choice.md)</li></ul><li>Additions</li><ul><li>[Light client sync protocol changes](specs/deneb/light-client/sync-protocol.md) ([fork](specs/deneb/light-client/fork.md), [full node](specs/deneb/light-client/full-node.md), [networking](specs/deneb/light-client/p2p-interface.md))</li></ul><ul><li>[Honest validator guide changes](specs/deneb/validator.md)</li><li>[P2P networking](specs/deneb/p2p-interface.md)</li></ul></ul> |
### In-development Specifications
| Code Name or Topic | Specs | Notes |
| - | - | - |
| Deneb (tentative) | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/deneb/beacon-chain.md)</li><li>[Deneb fork](specs/deneb/fork.md)</li><li>[Polynomial commitments](specs/deneb/polynomial-commitments.md)</li><li>[Fork choice changes](specs/deneb/fork-choice.md)</li></ul><li>Additions</li><ul><li>[Light client sync protocol changes](specs/deneb/light-client/sync-protocol.md) ([fork](specs/deneb/light-client/fork.md), [full node](specs/deneb/light-client/full-node.md), [networking](specs/deneb/light-client/p2p-interface.md))</li></ul><ul><li>[Honest validator guide changes](specs/deneb/validator.md)</li><li>[P2P networking](specs/deneb/p2p-interface.md)</li></ul></ul> |
| Sharding (outdated) | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/_features/sharding/beacon-chain.md)</li></ul><li>Additions</li><ul><li>[P2P networking](specs/_features/sharding/p2p-interface.md)</li></ul></ul> |
| Custody Game (outdated) | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/_features/custody_game/beacon-chain.md)</li></ul><li>Additions</li><ul><li>[Honest validator guide changes](specs/_features/custody_game/validator.md)</li></ul></ul> | Dependent on sharding |
| Data Availability Sampling (outdated) | <ul><li>Core</li><ul><li>[Core types and functions](specs/_features/das/das-core.md)</li><li>[Fork choice changes](specs/_features/das/fork-choice.md)</li></ul><li>Additions</li><ul><li>[P2P Networking](specs/_features/das/p2p-interface.md)</li><li>[Sampling process](specs/_features/das/sampling.md)</li></ul></ul> | <ul><li> Dependent on sharding</li><li>[Technical explainer](https://hackmd.io/@HWeNw8hNRimMm2m2GH56Cw/B1YJPGkpD)</li></ul> |

View File

@ -49,15 +49,24 @@ CAPELLA_FORK_VERSION: 0x03000000
CAPELLA_FORK_EPOCH: 194048 # April 12, 2023, 10:27:35pm UTC
# Deneb
DENEB_FORK_VERSION: 0x04000000
DENEB_FORK_EPOCH: 18446744073709551615
DENEB_FORK_EPOCH: 269568 # March 13, 2024, 01:55:35pm UTC
# Electra
ELECTRA_FORK_VERSION: 0x05000000
ELECTRA_FORK_EPOCH: 18446744073709551615
# EIP6110
EIP6110_FORK_VERSION: 0x05000000 # temporary stub
EIP6110_FORK_VERSION: 0x06000000 # temporary stub
EIP6110_FORK_EPOCH: 18446744073709551615
# EIP7002
EIP7002_FORK_VERSION: 0x05000000 # temporary stub
EIP7002_FORK_VERSION: 0x07000000 # temporary stub
EIP7002_FORK_EPOCH: 18446744073709551615
# EIP7251
EIP7251_FORK_VERSION: 0x06000000 # temporary stub
EIP7251_FORK_EPOCH: 18446744073709551615
# EIP7549
EIP7549_FORK_VERSION: 0x06000000 # temporary stub
EIP7549_FORK_EPOCH: 18446744073709551615
# WHISK
WHISK_FORK_VERSION: 0x06000000 # temporary stub
WHISK_FORK_VERSION: 0x08000000 # temporary stub
WHISK_FORK_EPOCH: 18446744073709551615
# EIP7594
EIP7594_FORK_VERSION: 0x06000001
@ -162,3 +171,7 @@ WHISK_PROPOSER_SELECTION_GAP: 2
NUMBER_OF_COLUMNS: 128
DATA_COLUMN_SIDECAR_SUBNET_COUNT: 32
MAX_REQUEST_DATA_COLUMN_SIDECARS: 16384
# [New in EIP7251]
MIN_PER_EPOCH_CHURN_LIMIT_EIP7251: 128000000000 # 2**7 * 10**9 (= 128,000,000,000)
MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT: 256000000000 # 2**8 * 10**9 (= 256,000,000,000)

View File

@ -46,17 +46,26 @@ BELLATRIX_FORK_EPOCH: 18446744073709551615
# Capella
CAPELLA_FORK_VERSION: 0x03000001
CAPELLA_FORK_EPOCH: 18446744073709551615
# DENEB
# Deneb
DENEB_FORK_VERSION: 0x04000001
DENEB_FORK_EPOCH: 18446744073709551615
# Electra
ELECTRA_FORK_VERSION: 0x05000001
ELECTRA_FORK_EPOCH: 18446744073709551615
# EIP6110
EIP6110_FORK_VERSION: 0x05000001
EIP6110_FORK_VERSION: 0x06000001
EIP6110_FORK_EPOCH: 18446744073709551615
# EIP7002
EIP7002_FORK_VERSION: 0x05000001
EIP7002_FORK_VERSION: 0x07000001
EIP7002_FORK_EPOCH: 18446744073709551615
# EIP7251
EIP7251_FORK_VERSION: 0x06000001 # temporary stub
EIP7251_FORK_EPOCH: 18446744073709551615
# EIP7549
EIP7549_FORK_VERSION: 0x06000001 # temporary stub
EIP7549_FORK_EPOCH: 18446744073709551615
# WHISK
WHISK_FORK_VERSION: 0x06000001
WHISK_FORK_VERSION: 0x08000001
WHISK_FORK_EPOCH: 18446744073709551615
# EIP7594
EIP7594_FORK_VERSION: 0x06000001
@ -160,3 +169,7 @@ WHISK_PROPOSER_SELECTION_GAP: 1
NUMBER_OF_COLUMNS: 128
DATA_COLUMN_SIDECAR_SUBNET_COUNT: 32
MAX_REQUEST_DATA_COLUMN_SIDECARS: 16384
# [New in EIP7251]
MIN_PER_EPOCH_CHURN_LIMIT_EIP7251: 64000000000 # 2**6 * 10**9 (= 64,000,000,000)
MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT: 128000000000 # 2**7 * 10**9 (= 128,000,000,000)

View File

@ -0,0 +1,28 @@
# Mainnet preset - EIP7251
# Gwei values
# ---------------------------------------------------------------
# 2**5 * 10**9 (= 32,000,000,000) Gwei
MIN_ACTIVATION_BALANCE: 32000000000
# 2**11 * 10**9 (= 2,048,000,000,000) Gwei
MAX_EFFECTIVE_BALANCE_EIP7251: 2048000000000
# State list lengths
# ---------------------------------------------------------------
PENDING_BALANCE_DEPOSITS_LIMIT: 134217728
PENDING_PARTIAL_WITHDRAWALS_LIMIT: 134217728
PENDING_CONSOLIDATIONS_LIMIT: 262144
# Reward and penalty quotients
# ---------------------------------------------------------------
MIN_SLASHING_PENALTY_QUOTIENT_EIP7251: 4096
WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251: 4096
# Max operations per block
# ---------------------------------------------------------------
MAX_CONSOLIDATIONS: 1
# Execution
# ---------------------------------------------------------------
# 2**3 (= 8) partial withdrawals
MAX_PARTIAL_WITHDRAWALS_PER_PAYLOAD: 8

View File

@ -0,0 +1,8 @@
# Mainnet preset - EIP7549
# Max operations per block
# ---------------------------------------------------------------
# `uint64(2**0)` (= 1)
MAX_ATTESTER_SLASHINGS_EIP7549: 1
# `uint64(2**3)` (= 8)
MAX_ATTESTATIONS_EIP7549: 8

View File

@ -4,5 +4,7 @@
# ---------------------------------------------------------------
# `uint64(2**6)` (= 64)
FIELD_ELEMENTS_PER_CELL: 64
# `uint64(2 * 4096)` (= 8192)
FIELD_ELEMENTS_PER_EXT_BLOB: 8192
# uint64(floorlog2(get_generalized_index(BeaconBlockBody, 'blob_kzg_commitments')))
KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH: 4

View File

@ -0,0 +1,30 @@
# Minimal preset - EIP7251
# Gwei values
# ---------------------------------------------------------------
# 2**5 * 10**9 (= 32,000,000,000) Gwei
MIN_ACTIVATION_BALANCE: 32000000000
# 2**11 * 10**9 (= 2,048,000,000,000) Gwei
MAX_EFFECTIVE_BALANCE_EIP7251: 2048000000000
# State list lengths
# ---------------------------------------------------------------
PENDING_BALANCE_DEPOSITS_LIMIT: 134217728
# [customized] smaller queue
PENDING_PARTIAL_WITHDRAWALS_LIMIT: 64
# [customized] smaller queue
PENDING_CONSOLIDATIONS_LIMIT: 64
# Reward and penalty quotients
# ---------------------------------------------------------------
MIN_SLASHING_PENALTY_QUOTIENT_EIP7251: 4096
WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251: 4096
# Max operations per block
# ---------------------------------------------------------------
MAX_CONSOLIDATIONS: 1
# Execution
# ---------------------------------------------------------------
# [customized] 2**1 (= 2)
MAX_PARTIAL_WITHDRAWALS_PER_PAYLOAD: 2

View File

@ -0,0 +1,8 @@
# Minimal preset - EIP7549
# Max operations per block
# ---------------------------------------------------------------
# `uint64(2**0)` (= 1)
MAX_ATTESTER_SLASHINGS_EIP7549: 1
# `uint64(2**3)` (= 8)
MAX_ATTESTATIONS_EIP7549: 8

View File

@ -4,5 +4,7 @@
# ---------------------------------------------------------------
# `uint64(2**6)` (= 64)
FIELD_ELEMENTS_PER_CELL: 64
# `uint64(2 * 4096)` (= 8192)
FIELD_ELEMENTS_PER_EXT_BLOB: 8192
# uint64(floorlog2(get_generalized_index(BeaconBlockBody, 'blob_kzg_commitments')))
KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH: 4

View File

@ -6,6 +6,8 @@ CAPELLA = 'capella'
DENEB = 'deneb'
EIP6110 = 'eip6110'
EIP7002 = 'eip7002'
EIP7251 = 'eip7251'
EIP7549 = 'eip7549'
WHISK = 'whisk'
EIP7594 = 'eip7594'

View File

@ -7,8 +7,10 @@ from .constants import (
CAPELLA,
DENEB,
EIP6110,
WHISK,
EIP7002,
EIP7251,
EIP7549,
WHISK,
EIP7594,
)
@ -20,8 +22,10 @@ PREVIOUS_FORK_OF = {
CAPELLA: BELLATRIX,
DENEB: CAPELLA,
EIP6110: DENEB,
EIP7549: DENEB,
WHISK: CAPELLA,
EIP7002: CAPELLA,
EIP7251: DENEB,
EIP7594: DENEB,
}

View File

@ -5,7 +5,9 @@ from .capella import CapellaSpecBuilder
from .deneb import DenebSpecBuilder
from .eip6110 import EIP6110SpecBuilder
from .eip7002 import EIP7002SpecBuilder
from .eip7549 import EIP7549SpecBuilder
from .whisk import WhiskSpecBuilder
from .eip7251 import EIP7251SpecBuilder
from .eip7594 import EIP7594SpecBuilder
@ -13,6 +15,7 @@ spec_builders = {
builder.fork: builder
for builder in (
Phase0SpecBuilder, AltairSpecBuilder, BellatrixSpecBuilder, CapellaSpecBuilder, DenebSpecBuilder,
EIP6110SpecBuilder, EIP7002SpecBuilder, WhiskSpecBuilder, EIP7594SpecBuilder,
EIP6110SpecBuilder, EIP7002SpecBuilder, EIP7549SpecBuilder, WhiskSpecBuilder, EIP7251SpecBuilder,
EIP7594SpecBuilder,
)
}

View File

@ -0,0 +1,24 @@
from typing import Dict
from .base import BaseSpecBuilder
from ..constants import EIP7251
class EIP7251SpecBuilder(BaseSpecBuilder):
    fork: str = EIP7251

    @classmethod
    def imports(cls, preset_name: str):
        return super().imports(preset_name) + f'''
from eth2spec.deneb import {preset_name} as deneb
'''

    ## TODO: deal with changed gindices
    @classmethod
    def hardcoded_ssz_dep_constants(cls) -> Dict[str, str]:
        return {
            'FINALIZED_ROOT_GINDEX': 'GeneralizedIndex(169)',
            'CURRENT_SYNC_COMMITTEE_GINDEX': 'GeneralizedIndex(86)',
            'NEXT_SYNC_COMMITTEE_GINDEX': 'GeneralizedIndex(87)',
        }

View File

@ -0,0 +1,11 @@
from .base import BaseSpecBuilder
from ..constants import EIP7549
class EIP7549SpecBuilder(BaseSpecBuilder):
    fork: str = EIP7549

    @classmethod
    def imports(cls, preset_name: str):
        return super().imports(preset_name) + f'''
'''

View File

@ -24,4 +24,5 @@ from eth2spec.deneb import {preset_name} as deneb
def hardcoded_func_dep_presets(cls, spec_object) -> Dict[str, str]:
return {
'KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH': spec_object.preset_vars['KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH'].value,
'FIELD_ELEMENTS_PER_EXT_BLOB': spec_object.preset_vars['FIELD_ELEMENTS_PER_EXT_BLOB'].value,
}

View File

@ -98,8 +98,8 @@ get_matching_head_attestations = cache_this(
_get_attesting_indices = get_attesting_indices
get_attesting_indices = cache_this(
lambda state, data, bits: (
lambda state, attestation: (
state.randao_mixes.hash_tree_root(),
state.validators.hash_tree_root(), data.hash_tree_root(), bits.hash_tree_root()
state.validators.hash_tree_root(), attestation.hash_tree_root()
),
_get_attesting_indices, lru_size=SLOTS_PER_EPOCH * MAX_COMMITTEES_PER_SLOT * 3)'''

View File

@ -401,7 +401,7 @@ def process_chunk_challenge(state: BeaconState, challenge: CustodyChunkChallenge
# Verify responder is slashable
assert is_slashable_validator(responder, get_current_epoch(state))
# Verify the responder participated in the attestation
attesters = get_attesting_indices(state, challenge.attestation.data, challenge.attestation.aggregation_bits)
attesters = get_attesting_indices(state, challenge.attestation)
assert challenge.responder_index in attesters
# Verify shard transition is correctly given
assert hash_tree_root(challenge.shard_transition) == challenge.attestation.data.shard_transition_root
@ -594,7 +594,7 @@ def process_custody_slashing(state: BeaconState, signed_custody_slashing: Signed
assert len(custody_slashing.data) == shard_transition.shard_block_lengths[custody_slashing.data_index]
assert hash_tree_root(custody_slashing.data) == shard_transition.shard_data_roots[custody_slashing.data_index]
# Verify existence and participation of claimed malefactor
attesters = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
attesters = get_attesting_indices(state, attestation)
assert custody_slashing.malefactor_index in attesters
# Verify the malefactor custody key

View File

@ -28,7 +28,7 @@ Warning: this configuration is not definitive.
| Name | Value |
| - | - |
| `EIP6110_FORK_VERSION` | `Version('0x05000000')` |
| `EIP6110_FORK_VERSION` | `Version('0x06000000')` |
| `EIP6110_FORK_EPOCH` | `Epoch(18446744073709551615)` **TBD** |
## Helper functions

View File

@ -28,7 +28,7 @@ Warning: this configuration is not definitive.
| Name | Value |
| - | - |
| `EIP7002_FORK_VERSION` | `Version('0x05000000')` |
| `EIP7002_FORK_VERSION` | `Version('0x07000000')` |
| `EIP7002_FORK_EPOCH` | `Epoch(18446744073709551615)` **TBD** |
## Helper functions

View File

@ -0,0 +1,963 @@
# EIP7251 - Spec
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Introduction](#introduction)
- [Constants](#constants)
- [Withdrawal prefixes](#withdrawal-prefixes)
- [Domains](#domains)
- [Presets](#presets)
- [Gwei values](#gwei-values)
- [Rewards and penalties](#rewards-and-penalties)
- [Max operations per block](#max-operations-per-block)
- [Execution](#execution)
- [State list lengths](#state-list-lengths)
- [Configuration](#configuration)
- [Validator cycle](#validator-cycle)
- [Containers](#containers)
- [New containers](#new-containers)
- [New `PendingBalanceDeposit`](#new-pendingbalancedeposit)
- [New `PendingPartialWithdrawal`](#new-pendingpartialwithdrawal)
- [New `ExecutionLayerWithdrawRequest`](#new-executionlayerwithdrawrequest)
- [New `Consolidation`](#new-consolidation)
- [New `SignedConsolidation`](#new-signedconsolidation)
- [New `PendingConsolidation`](#new-pendingconsolidation)
- [Extended Containers](#extended-containers)
- [`BeaconState`](#beaconstate)
- [`BeaconBlockBody`](#beaconblockbody)
- [Helpers](#helpers)
- [Predicates](#predicates)
- [Updated `is_eligible_for_activation_queue`](#updated-is_eligible_for_activation_queue)
- [New `is_compounding_withdrawal_credential`](#new-is_compounding_withdrawal_credential)
- [New `has_compounding_withdrawal_credential`](#new-has_compounding_withdrawal_credential)
- [New `has_execution_withdrawal_credential`](#new-has_execution_withdrawal_credential)
- [Updated `is_fully_withdrawable_validator`](#updated-is_fully_withdrawable_validator)
- [Updated `is_partially_withdrawable_validator`](#updated-is_partially_withdrawable_validator)
- [Beacon state accessors](#beacon-state-accessors)
- [New `get_validator_max_effective_balance`](#new-get_validator_max_effective_balance)
- [New `get_churn_limit`](#new-get_churn_limit)
- [New `get_activation_exit_churn_limit`](#new-get_activation_exit_churn_limit)
- [New `get_consolidation_churn_limit`](#new-get_consolidation_churn_limit)
- [New `get_active_balance`](#new-get_active_balance)
- [Beacon state mutators](#beacon-state-mutators)
- [Updated `initiate_validator_exit`](#updated--initiate_validator_exit)
- [New `set_compounding_withdrawal_credentials`](#new-set_compounding_withdrawal_credentials)
- [New `switch_to_compounding_validator`](#new-switch_to_compounding_validator)
- [New `queue_excess_active_balance`](#new-queue_excess_active_balance)
- [New `compute_exit_epoch_and_update_churn`](#new-compute_exit_epoch_and_update_churn)
- [New `compute_consolidation_epoch_and_update_churn`](#new-compute_consolidation_epoch_and_update_churn)
- [Updated `slash_validator`](#updated-slash_validator)
- [Beacon chain state transition function](#beacon-chain-state-transition-function)
- [Epoch processing](#epoch-processing)
- [Updated `process_epoch`](#updated-process_epoch)
- [Updated `process_registry_updates`](#updated--process_registry_updates)
- [New `process_pending_balance_deposits`](#new-process_pending_balance_deposits)
- [New `process_pending_consolidations`](#new-process_pending_consolidations)
- [Updated `process_effective_balance_updates`](#updated-process_effective_balance_updates)
- [Block processing](#block-processing)
- [Updated `get_expected_withdrawals`](#updated-get_expected_withdrawals)
- [Updated `process_withdrawals`](#updated-process_withdrawals)
- [Operations](#operations)
- [Updated `process_operations`](#updated-process_operations)
- [Deposits](#deposits)
- [Updated `apply_deposit`](#updated--apply_deposit)
- [New `is_valid_deposit_signature`](#new-is_valid_deposit_signature)
- [Modified `add_validator_to_registry`](#modified-add_validator_to_registry)
- [Updated `get_validator_from_deposit`](#updated-get_validator_from_deposit)
- [Withdrawals](#withdrawals)
- [New `process_execution_layer_withdraw_request`](#new-process_execution_layer_withdraw_request)
- [Consolidations](#consolidations)
- [New `process_consolidation`](#new-process_consolidation)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
## Introduction
See [a modest proposal](https://notes.ethereum.org/@mikeneuder/increase-maxeb), the [diff view](https://github.com/michaelneuder/consensus-specs/pull/3/files) and
[security considerations](https://notes.ethereum.org/@fradamt/meb-increase-security).
*Note:* This specification is built upon [Deneb](../../deneb/beacon-chain.md).
## Constants
The following values are (non-configurable) constants used throughout the specification.
### Withdrawal prefixes
| Name | Value |
| - | - |
| `BLS_WITHDRAWAL_PREFIX` | `Bytes1('0x00')` |
| `ETH1_ADDRESS_WITHDRAWAL_PREFIX` | `Bytes1('0x01')` |
| `COMPOUNDING_WITHDRAWAL_PREFIX` | `Bytes1('0x02')` |
### Domains
| Name | Value |
| - | - |
| `DOMAIN_CONSOLIDATION` | `DomainType('0x0B000000')` |
## Presets
### Gwei values
| Name | Value |
| - | - |
| `MIN_ACTIVATION_BALANCE` | `Gwei(2**5 * 10**9)` (= 32,000,000,000) |
| `MAX_EFFECTIVE_BALANCE_EIP7251` | `Gwei(2**11 * 10**9)` (= 2,048,000,000,000) |
### Rewards and penalties
| Name | Value |
| - | - |
| `MIN_SLASHING_PENALTY_QUOTIENT_EIP7251` | `uint64(2**12)` (= 4,096) |
| `WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251` | `uint64(2**12)` (= 4,096) |
### Max operations per block
| Name | Value |
| - | - |
| `MAX_CONSOLIDATIONS` | `uint64(1)` |
### Execution
| Name | Value | Description |
| - | - | - |
| `MAX_PARTIAL_WITHDRAWALS_PER_PAYLOAD` | `uint64(2**3)` (= 8) | Maximum number of partial withdrawals allowed in each payload |
### State list lengths
| Name | Value | Unit |
| - | - | :-: |
| `PENDING_BALANCE_DEPOSITS_LIMIT` | `uint64(2**27)` (= 134,217,728) | pending balance deposits |
| `PENDING_PARTIAL_WITHDRAWALS_LIMIT` | `uint64(2**27)` (= 134,217,728) | pending partial withdrawals |
| `PENDING_CONSOLIDATIONS_LIMIT` | `uint64(2**18)` (= 262,144) | pending consolidations |
## Configuration
### Validator cycle
| Name | Value | Description |
| - | - | - |
| `MIN_PER_EPOCH_CHURN_LIMIT_EIP7251` | `Gwei(2**7 * 10**9)` (= 128,000,000,000) | Equivalent to four 32 ETH validators |
| `MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT` | `Gwei(2**8 * 10**9)` (= 256,000,000,000) | |
## Containers
### New containers
#### New `PendingBalanceDeposit`
```python
class PendingBalanceDeposit(Container):
    index: ValidatorIndex
    amount: Gwei
```
#### New `PendingPartialWithdrawal`
```python
class PendingPartialWithdrawal(Container):
    index: ValidatorIndex
    amount: Gwei
    withdrawable_epoch: Epoch
```
#### New `ExecutionLayerWithdrawRequest`
```python
class ExecutionLayerWithdrawRequest(Container):
    source_address: ExecutionAddress
    validator_pubkey: BLSPubkey
    amount: Gwei
```
#### New `Consolidation`
```python
class Consolidation(Container):
    source_index: ValidatorIndex
    target_index: ValidatorIndex
    epoch: Epoch
```
#### New `SignedConsolidation`
```python
class SignedConsolidation(Container):
    message: Consolidation
    signature: BLSSignature
```
#### New `PendingConsolidation`
```python
class PendingConsolidation(Container):
    source_index: ValidatorIndex
    target_index: ValidatorIndex
```
### Extended Containers
#### `BeaconState`
```python
class BeaconState(Container):
# Versioning
genesis_time: uint64
genesis_validators_root: Root
slot: Slot
fork: Fork
# History
latest_block_header: BeaconBlockHeader
block_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
state_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
historical_roots: List[Root, HISTORICAL_ROOTS_LIMIT]
# Eth1
eth1_data: Eth1Data
eth1_data_votes: List[Eth1Data, EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH]
eth1_deposit_index: uint64
# Registry
validators: List[Validator, VALIDATOR_REGISTRY_LIMIT]
balances: List[Gwei, VALIDATOR_REGISTRY_LIMIT]
# Randomness
randao_mixes: Vector[Bytes32, EPOCHS_PER_HISTORICAL_VECTOR]
# Slashings
slashings: Vector[Gwei, EPOCHS_PER_SLASHINGS_VECTOR] # Per-epoch sums of slashed effective balances
# Participation
previous_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
current_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
# Finality
justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH] # Bit set for every recent justified epoch
previous_justified_checkpoint: Checkpoint
current_justified_checkpoint: Checkpoint
finalized_checkpoint: Checkpoint
# Inactivity
inactivity_scores: List[uint64, VALIDATOR_REGISTRY_LIMIT]
# Sync
current_sync_committee: SyncCommittee
next_sync_committee: SyncCommittee
# Execution
latest_execution_payload_header: ExecutionPayloadHeader
# Withdrawals
next_withdrawal_index: WithdrawalIndex
next_withdrawal_validator_index: ValidatorIndex
# Deep history valid from Capella onwards
historical_summaries: List[HistoricalSummary, HISTORICAL_ROOTS_LIMIT]
# EIP-7251
deposit_balance_to_consume: Gwei # [New in EIP-7251]
exit_balance_to_consume: Gwei # [New in EIP-7251]
earliest_exit_epoch: Epoch # [New in EIP-7251]
consolidation_balance_to_consume: Gwei # [New in EIP-7251]
earliest_consolidation_epoch: Epoch # [New in EIP-7251]
pending_balance_deposits: List[PendingBalanceDeposit, PENDING_BALANCE_DEPOSITS_LIMIT] # [New in EIP-7251]
pending_partial_withdrawals: List[PendingPartialWithdrawal, PENDING_PARTIAL_WITHDRAWALS_LIMIT] # [New in EIP-7251]
pending_consolidations: List[PendingConsolidation, PENDING_CONSOLIDATIONS_LIMIT] # [New in EIP-7251]
```
#### `BeaconBlockBody`
```python
class BeaconBlockBody(Container):
randao_reveal: BLSSignature
eth1_data: Eth1Data # Eth1 data vote
graffiti: Bytes32 # Arbitrary data
# Operations
proposer_slashings: List[ProposerSlashing, MAX_PROPOSER_SLASHINGS]
attester_slashings: List[AttesterSlashing, MAX_ATTESTER_SLASHINGS]
attestations: List[Attestation, MAX_ATTESTATIONS]
deposits: List[Deposit, MAX_DEPOSITS]
voluntary_exits: List[SignedVoluntaryExit, MAX_VOLUNTARY_EXITS]
sync_aggregate: SyncAggregate
# Execution
execution_payload: ExecutionPayload
bls_to_execution_changes: List[SignedBLSToExecutionChange, MAX_BLS_TO_EXECUTION_CHANGES]
blob_kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
consolidations: List[SignedConsolidation, MAX_CONSOLIDATIONS] # [New in EIP-7251]
```
## Helpers
### Predicates
#### Updated `is_eligible_for_activation_queue`
```python
def is_eligible_for_activation_queue(validator: Validator) -> bool:
"""
Check if ``validator`` is eligible to be placed into the activation queue.
"""
return (
validator.activation_eligibility_epoch == FAR_FUTURE_EPOCH
and validator.effective_balance >= MIN_ACTIVATION_BALANCE # [Modified in EIP7251]
)
```
#### New `is_compounding_withdrawal_credential`
```python
def is_compounding_withdrawal_credential(withdrawal_credentials: Bytes32) -> bool:
    return withdrawal_credentials[:1] == COMPOUNDING_WITHDRAWAL_PREFIX
```
#### New `has_compounding_withdrawal_credential`
```python
def has_compounding_withdrawal_credential(validator: Validator) -> bool:
"""
Check if ``validator`` has an 0x02 prefixed "compounding" withdrawal credential.
"""
return is_compounding_withdrawal_credential(validator.withdrawal_credentials)
```
#### New `has_execution_withdrawal_credential`
```python
def has_execution_withdrawal_credential(validator: Validator) -> bool:
"""
Check if ``validator`` has a 0x01 or 0x02 prefixed withdrawal credential.
"""
return has_compounding_withdrawal_credential(validator) or has_eth1_withdrawal_credential(validator)
```
#### Updated `is_fully_withdrawable_validator`
```python
def is_fully_withdrawable_validator(validator: Validator, balance: Gwei, epoch: Epoch) -> bool:
"""
Check if ``validator`` is fully withdrawable.
"""
return (
has_execution_withdrawal_credential(validator) # [Modified in EIP7251]
and validator.withdrawable_epoch <= epoch
and balance > 0
)
```
#### Updated `is_partially_withdrawable_validator`
```python
def is_partially_withdrawable_validator(validator: Validator, balance: Gwei) -> bool:
"""
Check if ``validator`` is partially withdrawable.
"""
max_effective_balance = get_validator_max_effective_balance(validator)
has_max_effective_balance = validator.effective_balance == max_effective_balance # [Modified in EIP7251]
has_excess_balance = balance > max_effective_balance # [Modified in EIP7251]
return (
has_execution_withdrawal_credential(validator) # [Modified in EIP7251]
and has_max_effective_balance
and has_excess_balance
)
```
### Beacon state accessors
#### New `get_validator_max_effective_balance`
```python
def get_validator_max_effective_balance(validator: Validator) -> Gwei:
"""
Get max effective balance for ``validator``.
"""
if has_compounding_withdrawal_credential(validator):
return MAX_EFFECTIVE_BALANCE_EIP7251
else:
return MIN_ACTIVATION_BALANCE
```
#### New `get_churn_limit`
```python
def get_churn_limit(state: BeaconState) -> Gwei:
"""
Return the churn limit for the current epoch.
"""
churn = max(
MIN_PER_EPOCH_CHURN_LIMIT_EIP7251,
get_total_active_balance(state) // CHURN_LIMIT_QUOTIENT
)
return churn - churn % EFFECTIVE_BALANCE_INCREMENT
```
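As a rough worked example (illustrative numbers only, and assuming the mainnet values `CHURN_LIMIT_QUOTIENT = 65536` and `EFFECTIVE_BALANCE_INCREMENT` = 1 ETH, which are not restated in this document):

```python
# Illustrative arithmetic only; plain Python, not part of the spec.
GWEI_PER_ETH = 10**9
MIN_PER_EPOCH_CHURN_LIMIT_EIP7251 = 128 * GWEI_PER_ETH
CHURN_LIMIT_QUOTIENT = 65536               # assumed mainnet value
EFFECTIVE_BALANCE_INCREMENT = 1 * GWEI_PER_ETH  # assumed mainnet value

# Suppose roughly 10,000,000 ETH is actively staked.
total_active_balance = 10_000_000 * GWEI_PER_ETH

churn = max(MIN_PER_EPOCH_CHURN_LIMIT_EIP7251, total_active_balance // CHURN_LIMIT_QUOTIENT)
churn -= churn % EFFECTIVE_BALANCE_INCREMENT
assert churn == 152 * GWEI_PER_ETH  # about 152 ETH of balance may churn per epoch
```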
#### New `get_activation_exit_churn_limit`
```python
def get_activation_exit_churn_limit(state: BeaconState) -> Gwei:
"""
Return the churn limit for the current epoch dedicated to activations and exits.
"""
return min(MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT, get_churn_limit(state))
```
#### New `get_consolidation_churn_limit`
```python
def get_consolidation_churn_limit(state: BeaconState) -> Gwei:
    return get_churn_limit(state) - get_activation_exit_churn_limit(state)
```
#### New `get_active_balance`
```python
def get_active_balance(state: BeaconState, validator_index: ValidatorIndex) -> Gwei:
    max_effective_balance = get_validator_max_effective_balance(state.validators[validator_index])
    return min(state.balances[validator_index], max_effective_balance)
```
### Beacon state mutators
#### Updated `initiate_validator_exit`
```python
def initiate_validator_exit(state: BeaconState, index: ValidatorIndex) -> None:
"""
Initiate the exit of the validator with index ``index``.
"""
# Return if validator already initiated exit
validator = state.validators[index]
if validator.exit_epoch != FAR_FUTURE_EPOCH:
return
# Compute exit queue epoch [Modified in EIP 7251]
exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
# Set validator exit epoch and withdrawable epoch
validator.exit_epoch = exit_queue_epoch
validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
```
#### New `set_compounding_withdrawal_credentials`
```python
def set_compounding_withdrawal_credentials(state: BeaconState, index: ValidatorIndex) -> None:
    validator = state.validators[index]
    if has_eth1_withdrawal_credential(validator):
        validator.withdrawal_credentials[:1] = COMPOUNDING_WITHDRAWAL_PREFIX
```
#### New `switch_to_compounding_validator`
```python
def switch_to_compounding_validator(state: BeaconState, index: ValidatorIndex) -> None:
    validator = state.validators[index]
    if has_eth1_withdrawal_credential(validator):
        validator.withdrawal_credentials[:1] = COMPOUNDING_WITHDRAWAL_PREFIX
        queue_excess_active_balance(state, index)
```
#### New `queue_excess_active_balance`
```python
def queue_excess_active_balance(state: BeaconState, index: ValidatorIndex) -> None:
    balance = state.balances[index]
    if balance > MIN_ACTIVATION_BALANCE:
        excess_balance = balance - MIN_ACTIVATION_BALANCE
        state.balances[index] = MIN_ACTIVATION_BALANCE
        state.pending_balance_deposits.append(
            PendingBalanceDeposit(index=index, amount=excess_balance)
        )
```
#### New `compute_exit_epoch_and_update_churn`
```python
def compute_exit_epoch_and_update_churn(state: BeaconState, exit_balance: Gwei) -> Epoch:
    earliest_exit_epoch = compute_activation_exit_epoch(get_current_epoch(state))
    per_epoch_churn = get_activation_exit_churn_limit(state)
    # New epoch for exits.
    if state.earliest_exit_epoch < earliest_exit_epoch:
        state.earliest_exit_epoch = earliest_exit_epoch
        state.exit_balance_to_consume = per_epoch_churn
    if exit_balance <= state.exit_balance_to_consume:
        # Exit fits in the current earliest epoch.
        state.exit_balance_to_consume -= exit_balance
    else:
        # Exit doesn't fit in the current earliest epoch.
        balance_to_process = exit_balance - state.exit_balance_to_consume
        additional_epochs, remainder = divmod(balance_to_process, per_epoch_churn)
        state.earliest_exit_epoch += additional_epochs + 1
        state.exit_balance_to_consume = per_epoch_churn - remainder
    return state.earliest_exit_epoch
```
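A worked example may help; the numbers below are illustrative and do not come from the spec. Suppose the per-epoch activation/exit churn is 256 ETH, 64 ETH of churn remains in the current earliest exit epoch, and a validator with 2048 ETH of effective balance initiates an exit:

```python
# Plain-Python sketch of the churn accounting above; illustrative values only.
GWEI_PER_ETH = 10**9
per_epoch_churn = 256 * GWEI_PER_ETH           # get_activation_exit_churn_limit(state)
exit_balance_to_consume = 64 * GWEI_PER_ETH    # churn left in state.earliest_exit_epoch
exit_balance = 2048 * GWEI_PER_ETH             # exiting validator's effective balance

# The exit does not fit into the remaining churn of the earliest exit epoch,
# so it is pushed back by however many extra full epochs of churn it needs.
balance_to_process = exit_balance - exit_balance_to_consume
additional_epochs, remainder = divmod(balance_to_process, per_epoch_churn)
assert additional_epochs + 1 == 8                        # exit epoch moves back by 8 epochs
assert per_epoch_churn - remainder == 64 * GWEI_PER_ETH  # churn left over in that new epoch
```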
#### New `compute_consolidation_epoch_and_update_churn`
```python
def compute_consolidation_epoch_and_update_churn(state: BeaconState, consolidation_balance: Gwei) -> Epoch:
    earliest_consolidation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
    per_epoch_consolidation_churn = get_consolidation_churn_limit(state)
    # New epoch for consolidations.
    if state.earliest_consolidation_epoch < earliest_consolidation_epoch:
        state.earliest_consolidation_epoch = earliest_consolidation_epoch
        state.consolidation_balance_to_consume = per_epoch_consolidation_churn
    if consolidation_balance <= state.consolidation_balance_to_consume:
        # Consolidation fits in the current earliest consolidation epoch.
        state.consolidation_balance_to_consume -= consolidation_balance
    else:
        # Consolidation doesn't fit in the current earliest epoch.
        balance_to_process = consolidation_balance - state.consolidation_balance_to_consume
        additional_epochs, remainder = divmod(balance_to_process, per_epoch_consolidation_churn)
        state.earliest_consolidation_epoch += additional_epochs + 1
        state.consolidation_balance_to_consume = per_epoch_consolidation_churn - remainder
    return state.earliest_consolidation_epoch
```
#### Updated `slash_validator`
```python
def slash_validator(state: BeaconState,
                    slashed_index: ValidatorIndex,
                    whistleblower_index: ValidatorIndex=None) -> None:
    """
    Slash the validator with index ``slashed_index``.
    """
    epoch = get_current_epoch(state)
    initiate_validator_exit(state, slashed_index)
    validator = state.validators[slashed_index]
    validator.slashed = True
    validator.withdrawable_epoch = max(validator.withdrawable_epoch, Epoch(epoch + EPOCHS_PER_SLASHINGS_VECTOR))
    state.slashings[epoch % EPOCHS_PER_SLASHINGS_VECTOR] += validator.effective_balance
    slashing_penalty = validator.effective_balance // MIN_SLASHING_PENALTY_QUOTIENT_EIP7251  # [Modified in EIP7251]
    decrease_balance(state, slashed_index, slashing_penalty)
    # Apply proposer and whistleblower rewards
    proposer_index = get_beacon_proposer_index(state)
    if whistleblower_index is None:
        whistleblower_index = proposer_index
    whistleblower_reward = Gwei(
        validator.effective_balance // WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251)  # [Modified in EIP7251]
    proposer_reward = Gwei(whistleblower_reward * PROPOSER_WEIGHT // WEIGHT_DENOMINATOR)
    increase_balance(state, proposer_index, proposer_reward)
    increase_balance(state, whistleblower_index, Gwei(whistleblower_reward - proposer_reward))
```
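As a rough numeric illustration (not normative), and assuming the Altair incentive weights `PROPOSER_WEIGHT = 8` and `WEIGHT_DENOMINATOR = 64` which are not restated here, slashing a validator with a 2048 ETH effective balance works out as follows:

```python
# Illustrative arithmetic only; plain Python, not part of the spec.
GWEI_PER_ETH = 10**9
MIN_SLASHING_PENALTY_QUOTIENT_EIP7251 = 4096
WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251 = 4096
PROPOSER_WEIGHT, WEIGHT_DENOMINATOR = 8, 64  # assumed Altair weights

effective_balance = 2048 * GWEI_PER_ETH
slashing_penalty = effective_balance // MIN_SLASHING_PENALTY_QUOTIENT_EIP7251
whistleblower_reward = effective_balance // WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251
proposer_reward = whistleblower_reward * PROPOSER_WEIGHT // WEIGHT_DENOMINATOR

assert slashing_penalty == GWEI_PER_ETH // 2      # 0.5 ETH initial penalty
assert whistleblower_reward == GWEI_PER_ETH // 2  # 0.5 ETH whistleblower reward
assert proposer_reward == 62_500_000              # 0.0625 ETH of it goes to the proposer
```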
## Beacon chain state transition function
### Epoch processing
#### Updated `process_epoch`
```python
def process_epoch(state: BeaconState) -> None:
    process_justification_and_finalization(state)
    process_inactivity_updates(state)
    process_rewards_and_penalties(state)
    process_registry_updates(state)  # [Modified in EIP7251]
    process_slashings(state)
    process_eth1_data_reset(state)
    process_pending_balance_deposits(state)  # New in EIP7251
    process_pending_consolidations(state)  # New in EIP7251
    process_effective_balance_updates(state)  # [Modified in EIP7251]
    process_slashings_reset(state)
    process_randao_mixes_reset(state)
```
#### Updated `process_registry_updates`
```python
def process_registry_updates(state: BeaconState) -> None:
    # Process activation eligibility and ejections
    for index, validator in enumerate(state.validators):
        if is_eligible_for_activation_queue(validator):
            validator.activation_eligibility_epoch = get_current_epoch(state) + 1
        if (
            is_active_validator(validator, get_current_epoch(state))
            and validator.effective_balance <= EJECTION_BALANCE
        ):
            initiate_validator_exit(state, ValidatorIndex(index))
    # Activate all eligible validators
    activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
    for validator in state.validators:
        if is_eligible_for_activation(state, validator):
            validator.activation_epoch = activation_epoch
```
#### New `process_pending_balance_deposits`
```python
def process_pending_balance_deposits(state: BeaconState) -> None:
    available_for_processing = state.deposit_balance_to_consume + get_activation_exit_churn_limit(state)
    processed_amount = 0
    next_deposit_index = 0
    for deposit in state.pending_balance_deposits:
        if processed_amount + deposit.amount > available_for_processing:
            break
        increase_balance(state, deposit.index, deposit.amount)
        processed_amount += deposit.amount
        next_deposit_index += 1
    state.pending_balance_deposits = state.pending_balance_deposits[next_deposit_index:]
    if len(state.pending_balance_deposits) == 0:
        state.deposit_balance_to_consume = 0
    else:
        state.deposit_balance_to_consume = available_for_processing - processed_amount
```
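The carry-over of `deposit_balance_to_consume` can be seen with small illustrative numbers (not from the spec): with 256 ETH of churn available per epoch and three pending deposits of 100 ETH each, only the first two are processed this epoch and the 56 ETH of unused allowance is carried forward.

```python
# Plain-Python sketch of the queue consumption above; illustrative values only.
GWEI_PER_ETH = 10**9
churn_per_epoch = 256 * GWEI_PER_ETH
deposit_balance_to_consume = 0
pending = [100 * GWEI_PER_ETH] * 3   # three pending 100 ETH deposits

available = deposit_balance_to_consume + churn_per_epoch
processed, next_index = 0, 0
for amount in pending:
    if processed + amount > available:
        break
    processed += amount
    next_index += 1

pending = pending[next_index:]
deposit_balance_to_consume = 0 if len(pending) == 0 else available - processed
assert next_index == 2 and deposit_balance_to_consume == 56 * GWEI_PER_ETH
```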
#### New `process_pending_consolidations`
```python
def process_pending_consolidations(state: BeaconState) -> None:
    next_pending_consolidation = 0
    for pending_consolidation in state.pending_consolidations:
        source_validator = state.validators[pending_consolidation.source_index]
        if source_validator.slashed:
            next_pending_consolidation += 1
            continue
        if source_validator.withdrawable_epoch > get_current_epoch(state):
            break
        # Churn any target excess active balance of target and raise its max
        switch_to_compounding_validator(state, pending_consolidation.target_index)
        # Move active balance to target. Excess balance is withdrawable.
        active_balance = get_active_balance(state, pending_consolidation.source_index)
        decrease_balance(state, pending_consolidation.source_index, active_balance)
        increase_balance(state, pending_consolidation.target_index, active_balance)
        next_pending_consolidation += 1
    state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
```
#### Updated `process_effective_balance_updates`
```python
def process_effective_balance_updates(state: BeaconState) -> None:
    # Update effective balances with hysteresis
    for index, validator in enumerate(state.validators):
        balance = state.balances[index]
        HYSTERESIS_INCREMENT = uint64(EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT)
        DOWNWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_DOWNWARD_MULTIPLIER
        UPWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_UPWARD_MULTIPLIER
        EFFECTIVE_BALANCE_LIMIT = (
            MAX_EFFECTIVE_BALANCE_EIP7251 if has_compounding_withdrawal_credential(validator)
            else MIN_ACTIVATION_BALANCE
        )
        if (
            balance + DOWNWARD_THRESHOLD < validator.effective_balance
            or validator.effective_balance + UPWARD_THRESHOLD < balance
        ):
            validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, EFFECTIVE_BALANCE_LIMIT)
```
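With the hysteresis parameters assumed here (mainnet: `EFFECTIVE_BALANCE_INCREMENT` = 1 ETH, `HYSTERESIS_QUOTIENT` = 4, downward/upward multipliers 2 and 5, none of which are restated in this document), a compounding validator's effective balance only moves once the actual balance drifts more than 0.5 ETH below or 1.25 ETH above it:

```python
# Illustrative arithmetic only; plain Python with assumed mainnet hysteresis values.
GWEI_PER_ETH = 10**9
EFFECTIVE_BALANCE_INCREMENT = 1 * GWEI_PER_ETH
HYSTERESIS_QUOTIENT, HYSTERESIS_DOWNWARD_MULTIPLIER, HYSTERESIS_UPWARD_MULTIPLIER = 4, 2, 5
MAX_EFFECTIVE_BALANCE_EIP7251 = 2048 * GWEI_PER_ETH

hysteresis_increment = EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT
downward = hysteresis_increment * HYSTERESIS_DOWNWARD_MULTIPLIER   # 0.5 ETH
upward = hysteresis_increment * HYSTERESIS_UPWARD_MULTIPLIER       # 1.25 ETH

effective_balance = 64 * GWEI_PER_ETH
balance = 65_300_000_000  # 65.3 ETH actual balance on a compounding (0x02) validator

if balance + downward < effective_balance or effective_balance + upward < balance:
    effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE_EIP7251)
assert effective_balance == 65 * GWEI_PER_ETH  # crossed the 1.25 ETH upward threshold
```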
### Block processing
```python
def process_block(state: BeaconState, block: BeaconBlock) -> None:
    process_block_header(state, block)
    process_withdrawals(state, block.body.execution_payload)  # [Modified in EIP7251]
    process_execution_payload(state, block.body, EXECUTION_ENGINE)
    process_randao(state, block.body)
    process_eth1_data(state, block.body)
    process_operations(state, block.body)  # [Modified in EIP7251]
    process_sync_aggregate(state, block.body.sync_aggregate)
```
##### Updated `get_expected_withdrawals`
```python
def get_expected_withdrawals(state: BeaconState) -> Tuple[Sequence[Withdrawal], uint64]:
epoch = get_current_epoch(state)
withdrawal_index = state.next_withdrawal_index
validator_index = state.next_withdrawal_validator_index
withdrawals: List[Withdrawal] = []
# [New in EIP7251] Consume pending partial withdrawals
for withdrawal in state.pending_partial_withdrawals:
if withdrawal.withdrawable_epoch > epoch or len(withdrawals) == MAX_PARTIAL_WITHDRAWALS_PER_PAYLOAD:
break
validator = state.validators[withdrawal.index]
if validator.exit_epoch == FAR_FUTURE_EPOCH and state.balances[withdrawal.index] > MIN_ACTIVATION_BALANCE:
withdrawable_balance = min(state.balances[withdrawal.index] - MIN_ACTIVATION_BALANCE, withdrawal.amount)
withdrawals.append(Withdrawal(
index=withdrawal_index,
validator_index=withdrawal.index,
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
amount=withdrawable_balance,
))
withdrawal_index += WithdrawalIndex(1)
partial_withdrawals_count = len(withdrawals)
# END: Consume pending partial withdrawals
# Sweep for remaining.
bound = min(len(state.validators), MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP)
for _ in range(bound):
validator = state.validators[validator_index]
balance = state.balances[validator_index]
if is_fully_withdrawable_validator(validator, balance, epoch):
withdrawals.append(Withdrawal(
index=withdrawal_index,
validator_index=validator_index,
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
amount=balance,
))
withdrawal_index += WithdrawalIndex(1)
elif is_partially_withdrawable_validator(validator, balance):
withdrawals.append(Withdrawal(
index=withdrawal_index,
validator_index=validator_index,
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
amount=balance - get_validator_max_effective_balance(validator), # [Modified in EIP7251]
))
withdrawal_index += WithdrawalIndex(1)
if len(withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
break
validator_index = ValidatorIndex((validator_index + 1) % len(state.validators))
return withdrawals, partial_withdrawals_count
```
##### Updated `process_withdrawals`
```python
def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
expected_withdrawals, partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in EIP7251]
assert len(payload.withdrawals) == len(expected_withdrawals)
for expected_withdrawal, withdrawal in zip(expected_withdrawals, payload.withdrawals):
assert withdrawal == expected_withdrawal
decrease_balance(state, withdrawal.validator_index, withdrawal.amount)
# [New in EIP7251] update pending partial withdrawals
state.pending_partial_withdrawals = state.pending_partial_withdrawals[partial_withdrawals_count:]
# Update the next withdrawal index if this block contained withdrawals
if len(expected_withdrawals) != 0:
latest_withdrawal = expected_withdrawals[-1]
state.next_withdrawal_index = WithdrawalIndex(latest_withdrawal.index + 1)
# Update the next validator index to start the next withdrawal sweep
if len(expected_withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
# Next sweep starts after the latest withdrawal's validator index
next_validator_index = ValidatorIndex((expected_withdrawals[-1].validator_index + 1) % len(state.validators))
state.next_withdrawal_validator_index = next_validator_index
else:
# Advance sweep by the max length of the sweep if there was not a full set of withdrawals
next_index = state.next_withdrawal_validator_index + MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP
next_validator_index = ValidatorIndex(next_index % len(state.validators))
state.next_withdrawal_validator_index = next_validator_index
```
#### Operations
##### Updated `process_operations`
```python
def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
# Verify that outstanding deposits are processed up to the maximum number of deposits
assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)
def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
for operation in operations:
fn(state, operation)
for_ops(body.proposer_slashings, process_proposer_slashing)
for_ops(body.attester_slashings, process_attester_slashing)
for_ops(body.attestations, process_attestation)
for_ops(body.deposits, process_deposit) # [Modified in EIP7251]
for_ops(body.voluntary_exits, process_voluntary_exit)
for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
for_ops(body.execution_payload.withdraw_requests, process_execution_layer_withdraw_request) # New in EIP7251
for_ops(body.consolidations, process_consolidation) # New in EIP7251
```
##### Deposits
###### Updated `apply_deposit`
```python
def apply_deposit(state: BeaconState,
pubkey: BLSPubkey,
withdrawal_credentials: Bytes32,
amount: uint64,
signature: BLSSignature) -> None:
validator_pubkeys = [v.pubkey for v in state.validators]
if pubkey not in validator_pubkeys:
# Verify the deposit signature (proof of possession) which is not checked by the deposit contract
if is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature):
add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
else:
# Increase balance by deposit amount
index = ValidatorIndex(validator_pubkeys.index(pubkey))
state.pending_balance_deposits.append(
PendingBalanceDeposit(index=index, amount=amount)) # [Modified in EIP-7251]
# Check if valid deposit switch to compounding credentials
if (
is_compounding_withdrawal_credential(withdrawal_credentials)
and has_eth1_withdrawal_credential(state.validators[index])
and is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature)
):
switch_to_compounding_validator(state, index)
```
###### New `is_valid_deposit_signature`
```python
def is_valid_deposit_signature(pubkey: BLSPubkey,
                               withdrawal_credentials: Bytes32,
                               amount: uint64,
                               signature: BLSSignature) -> bool:
    deposit_message = DepositMessage(
        pubkey=pubkey,
        withdrawal_credentials=withdrawal_credentials,
        amount=amount,
    )
    domain = compute_domain(DOMAIN_DEPOSIT)  # Fork-agnostic domain since deposits are valid across forks
    signing_root = compute_signing_root(deposit_message, domain)
    return bls.Verify(pubkey, signing_root, signature)
```
###### Modified `add_validator_to_registry`
```python
def add_validator_to_registry(state: BeaconState,
pubkey: BLSPubkey,
withdrawal_credentials: Bytes32,
amount: uint64) -> None:
index = get_index_for_new_validator(state)
validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
set_or_append_list(state.validators, index, validator)
set_or_append_list(state.balances, index, 0) # [Modified in EIP7251]
set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000))
set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000))
set_or_append_list(state.inactivity_scores, index, uint64(0))
state.pending_balance_deposits.append(PendingBalanceDeposit(index=index, amount=amount)) # [New in EIP7251]
```
###### Updated `get_validator_from_deposit`
```python
def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32) -> Validator:
return Validator(
pubkey=pubkey,
withdrawal_credentials=withdrawal_credentials,
activation_eligibility_epoch=FAR_FUTURE_EPOCH,
activation_epoch=FAR_FUTURE_EPOCH,
exit_epoch=FAR_FUTURE_EPOCH,
withdrawable_epoch=FAR_FUTURE_EPOCH,
effective_balance=0, # [Modified in EIP7251]
)
```
##### Withdrawals
###### New `process_execution_layer_withdraw_request`
```python
def process_execution_layer_withdraw_request(
state: BeaconState,
execution_layer_withdraw_request: ExecutionLayerWithdrawRequest
) -> None:
amount = execution_layer_withdraw_request.amount
is_full_exit_request = amount == 0
# If partial withdrawal queue is full, only full exits are processed
if len(state.pending_partial_withdrawals) >= PENDING_PARTIAL_WITHDRAWALS_LIMIT and not is_full_exit_request:
return
validator_pubkeys = [v.pubkey for v in state.validators]
index = ValidatorIndex(validator_pubkeys.index(execution_layer_withdraw_request.validator_pubkey))
validator = state.validators[index]
# Same conditions as in EIP-7002
if not (
has_execution_withdrawal_credential(validator)
# Verify withdrawal credentials
and validator.withdrawal_credentials[12:] == execution_layer_withdraw_request.source_address
# Verify the validator is active
and is_active_validator(validator, get_current_epoch(state))
# Verify exit has not been initiated, and slashed
and validator.exit_epoch == FAR_FUTURE_EPOCH
# Verify the validator has been active long enough
and get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
):
return
# New condition: only allow partial withdrawals with compounding withdrawal credentials
if not (is_full_exit_request or has_compounding_withdrawal_credential(validator)):
return
pending_balance_to_withdraw = sum(
item.amount for item in state.pending_partial_withdrawals if item.index == index
)
# only exit validator if it has no pending withdrawals in the queue
if is_full_exit_request and pending_balance_to_withdraw > 0:
return
if is_full_exit_request:
initiate_validator_exit(state, index)
elif state.balances[index] > MIN_ACTIVATION_BALANCE + pending_balance_to_withdraw:
to_withdraw = min(
state.balances[index] - MIN_ACTIVATION_BALANCE - pending_balance_to_withdraw,
amount
)
exit_queue_epoch = compute_exit_epoch_and_update_churn(state, to_withdraw)
withdrawable_epoch = Epoch(exit_queue_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
state.pending_partial_withdrawals.append(PendingPartialWithdrawal(
index=index,
amount=to_withdraw,
withdrawable_epoch=withdrawable_epoch,
))
```
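For the partial-withdrawal branch, a small illustrative calculation (numbers not from the spec): a compounding validator with a 40 ETH balance, 3 ETH already queued for partial withdrawal, and a 10 ETH request can only withdraw 5 ETH, because only the balance above `MIN_ACTIVATION_BALANCE` plus the already-pending amount remains withdrawable.

```python
# Plain-Python sketch of the amount computation above; illustrative values only.
GWEI_PER_ETH = 10**9
MIN_ACTIVATION_BALANCE = 32 * GWEI_PER_ETH

balance = 40 * GWEI_PER_ETH
pending_balance_to_withdraw = 3 * GWEI_PER_ETH  # already queued partial withdrawals
requested_amount = 10 * GWEI_PER_ETH            # amount in the withdraw request

assert balance > MIN_ACTIVATION_BALANCE + pending_balance_to_withdraw
to_withdraw = min(balance - MIN_ACTIVATION_BALANCE - pending_balance_to_withdraw, requested_amount)
assert to_withdraw == 5 * GWEI_PER_ETH
```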
##### Consolidations
###### New `process_consolidation`
```python
def process_consolidation(state: BeaconState, signed_consolidation: SignedConsolidation) -> None:
# If the pending consolidations queue is full, no consolidations are allowed in the block
assert len(state.pending_consolidations) < PENDING_CONSOLIDATIONS_LIMIT
# If there is too little available consolidation churn limit, no consolidations are allowed in the block
assert get_consolidation_churn_limit(state) > MIN_ACTIVATION_BALANCE
consolidation = signed_consolidation.message
# Verify that source != target, so a consolidation cannot be used as an exit.
assert consolidation.source_index != consolidation.target_index
source_validator = state.validators[consolidation.source_index]
target_validator = state.validators[consolidation.target_index]
# Verify the source and the target are active
current_epoch = get_current_epoch(state)
assert is_active_validator(source_validator, current_epoch)
assert is_active_validator(target_validator, current_epoch)
# Verify exits for source and target have not been initiated
assert source_validator.exit_epoch == FAR_FUTURE_EPOCH
assert target_validator.exit_epoch == FAR_FUTURE_EPOCH
# Consolidations must specify an epoch when they become valid; they are not valid before then
assert current_epoch >= consolidation.epoch
# Verify the source and the target have Execution layer withdrawal credentials
assert has_execution_withdrawal_credential(source_validator)
assert has_execution_withdrawal_credential(target_validator)
# Verify the same withdrawal address
assert source_validator.withdrawal_credentials[1:] == target_validator.withdrawal_credentials[1:]
# Verify consolidation is signed by the source and the target
domain = compute_domain(DOMAIN_CONSOLIDATION, genesis_validators_root=state.genesis_validators_root)
signing_root = compute_signing_root(consolidation, domain)
pubkeys = [source_validator.pubkey, target_validator.pubkey]
assert bls.FastAggregateVerify(pubkeys, signing_root, signed_consolidation.signature)
# Initiate source validator exit and append pending consolidation
active_balance = get_active_balance(state, consolidation.source_index)
source_validator.exit_epoch = compute_consolidation_epoch_and_update_churn(state, active_balance)
source_validator.withdrawable_epoch = Epoch(
source_validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY
)
state.pending_consolidations.append(PendingConsolidation(
source_index=consolidation.source_index,
target_index=consolidation.target_index
))
```

View File

@ -0,0 +1,139 @@
# EIP7251 -- Fork Logic
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Introduction](#introduction)
- [Configuration](#configuration)
- [Helper functions](#helper-functions)
- [Misc](#misc)
- [Modified `compute_fork_version`](#modified-compute_fork_version)
- [Fork to EIP7251](#fork-to-eip7251)
- [Fork trigger](#fork-trigger)
- [Upgrading the state](#upgrading-the-state)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Introduction
This document describes the process of the EIP7251 upgrade.
## Configuration
Warning: this configuration is not definitive.
| Name | Value |
| - | - |
| `EIP7251_FORK_VERSION` | `Version('0x06000000')` |
| `EIP7251_FORK_EPOCH` | `Epoch(18446744073709551615)` **TBD** |
## Helper functions
### Misc
#### Modified `compute_fork_version`
```python
def compute_fork_version(epoch: Epoch) -> Version:
"""
Return the fork version at the given ``epoch``.
"""
if epoch >= EIP7251_FORK_EPOCH:
return EIP7251_FORK_VERSION
if epoch >= DENEB_FORK_EPOCH:
return DENEB_FORK_VERSION
if epoch >= CAPELLA_FORK_EPOCH:
return CAPELLA_FORK_VERSION
if epoch >= BELLATRIX_FORK_EPOCH:
return BELLATRIX_FORK_VERSION
if epoch >= ALTAIR_FORK_EPOCH:
return ALTAIR_FORK_VERSION
return GENESIS_FORK_VERSION
```
## Fork to EIP7251
### Fork trigger
TBD. This fork is defined for testing purposes; the EIP may be combined with other consensus-layer upgrades.
For now, we assume the condition will be triggered at epoch `EIP7251_FORK_EPOCH`.
Note that for pure EIP7251 networks, we don't apply `upgrade_to_eip7251` since it starts with EIP7251 version logic.
### Upgrading the state
If `state.slot % SLOTS_PER_EPOCH == 0` and `compute_epoch_at_slot(state.slot) == EIP7251_FORK_EPOCH`,
an irregular state change is made to upgrade to EIP7251.
```python
def upgrade_to_eip7251(pre: deneb.BeaconState) -> BeaconState:
post = BeaconState(
# Versioning
genesis_time=pre.genesis_time,
genesis_validators_root=pre.genesis_validators_root,
slot=pre.slot,
fork=Fork(
previous_version=pre.fork.current_version,
current_version=EIP7251_FORK_VERSION,
epoch=deneb.get_current_epoch(pre),
),
# History
latest_block_header=pre.latest_block_header,
block_roots=pre.block_roots,
state_roots=pre.state_roots,
historical_roots=pre.historical_roots,
# Eth1
eth1_data=pre.eth1_data,
eth1_data_votes=pre.eth1_data_votes,
eth1_deposit_index=pre.eth1_deposit_index,
# Registry
validators=pre.validators,
balances=pre.balances,
# Randomness
randao_mixes=pre.randao_mixes,
# Slashings
slashings=pre.slashings,
# Participation
previous_epoch_participation=pre.previous_epoch_participation,
current_epoch_participation=pre.current_epoch_participation,
# Finality
justification_bits=pre.justification_bits,
previous_justified_checkpoint=pre.previous_justified_checkpoint,
current_justified_checkpoint=pre.current_justified_checkpoint,
finalized_checkpoint=pre.finalized_checkpoint,
# Inactivity
inactivity_scores=pre.inactivity_scores,
# Sync
current_sync_committee=pre.current_sync_committee,
next_sync_committee=pre.next_sync_committee,
# Execution-layer
latest_execution_payload_header=pre.latest_execution_payload_header,
# Withdrawals
next_withdrawal_index=pre.next_withdrawal_index,
next_withdrawal_validator_index=pre.next_withdrawal_validator_index,
# Deep history valid from Capella onwards
historical_summaries=pre.historical_summaries,
# [New in EIP7251]
deposit_balance_to_consume=0,
exit_balance_to_consume=get_activation_exit_churn_limit(pre),
earliest_exit_epoch=max([v.exit_epoch for v in pre.validators if v.exit_epoch != FAR_FUTURE_EPOCH]) + 1,
consolidation_balance_to_consume=get_consolidation_churn_limit(pre),
earliest_consolidation_epoch=compute_activation_exit_epoch(get_current_epoch(pre)),
pending_balance_deposits=[],
pending_partial_withdrawals=[],
pending_consolidations=[],
)
# Ensure early adopters of compounding credentials go through the activation churn
for index, validator in enumerate(post.validators):
if has_compounding_withdrawal_credential(validator):
queue_excess_active_balance(post, index)
return post
```

View File

@ -0,0 +1,163 @@
# EIP-7549 -- The Beacon Chain
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Introduction](#introduction)
- [Preset](#preset)
- [Containers](#containers)
- [Modified containers](#modified-containers)
- [`Attestation`](#attestation)
- [`IndexedAttestation`](#indexedattestation)
- [`BeaconBlockBody`](#beaconblockbody)
- [Helper functions](#helper-functions)
- [Misc](#misc)
- [`get_committee_indices`](#get_committee_indices)
- [Beacon state accessors](#beacon-state-accessors)
- [Modified `get_attesting_indices`](#modified-get_attesting_indices)
- [Block processing](#block-processing)
- [Modified `process_attestation`](#modified-process_attestation)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
## Introduction
This is the beacon chain specification for moving the attestation committee index outside of the signed attestation message. For motivation, refer to [EIP-7549](https://eips.ethereum.org/EIPS/eip-7549).
*Note:* This specification is built upon [Deneb](../../deneb/beacon-chain.md) and is under active development.
## Preset
| Name | Value | Description |
| - | - | - |
| `MAX_ATTESTER_SLASHINGS_EIP7549` | `2**0` (= 1) | Maximum number of attester slashings per block |
| `MAX_ATTESTATIONS_EIP7549` | `2**3` (= 8) | Maximum number of attestations per block |
## Containers
### Modified containers
#### `Attestation`
```python
class Attestation(Container):
aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE * MAX_COMMITTEES_PER_SLOT] # [Modified in EIP7549]
data: AttestationData
committee_bits: Bitvector[MAX_COMMITTEES_PER_SLOT] # [New in EIP7549]
signature: BLSSignature
```
#### `IndexedAttestation`
```python
class IndexedAttestation(Container):
# [Modified in EIP7549]
attesting_indices: List[ValidatorIndex, MAX_VALIDATORS_PER_COMMITTEE * MAX_COMMITTEES_PER_SLOT]
data: AttestationData
signature: BLSSignature
```
#### `BeaconBlockBody`
```python
class BeaconBlockBody(Container):
randao_reveal: BLSSignature
eth1_data: Eth1Data # Eth1 data vote
graffiti: Bytes32 # Arbitrary data
# Operations
proposer_slashings: List[ProposerSlashing, MAX_PROPOSER_SLASHINGS]
attester_slashings: List[AttesterSlashing, MAX_ATTESTER_SLASHINGS_EIP7549] # [Modified in EIP7549]
attestations: List[Attestation, MAX_ATTESTATIONS_EIP7549] # [Modified in EIP7549]
deposits: List[Deposit, MAX_DEPOSITS]
voluntary_exits: List[SignedVoluntaryExit, MAX_VOLUNTARY_EXITS]
sync_aggregate: SyncAggregate
# Execution
execution_payload: ExecutionPayload
bls_to_execution_changes: List[SignedBLSToExecutionChange, MAX_BLS_TO_EXECUTION_CHANGES]
blob_kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
```
## Helper functions
### Misc
#### `get_committee_indices`
```python
def get_committee_indices(committee_bits: Bitvector) -> Sequence[CommitteeIndex]:
    return [CommitteeIndex(index) for index, bit in enumerate(committee_bits) if bit]
```
### Beacon state accessors
#### Modified `get_attesting_indices`
```python
def get_attesting_indices(state: BeaconState, attestation: Attestation) -> Set[ValidatorIndex]:
"""
Return the set of attesting indices corresponding to ``aggregation_bits`` and ``committee_bits``.
"""
output: Set[ValidatorIndex] = set()
committee_indices = get_committee_indices(attestation.committee_bits)
committee_offset = 0
for index in committee_indices:
committee = get_beacon_committee(state, attestation.data.slot, index)
committee_attesters = set(
index for i, index in enumerate(committee) if attestation.aggregation_bits[committee_offset + i])
output = output.union(committee_attesters)
committee_offset += len(committee)
return output
```
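For illustration, the flattened `aggregation_bits` field is simply the participating committees' bitfields concatenated in increasing committee-index order. A plain-Python sketch with hypothetical committee sizes (not part of the spec):
```python
# Toy example of the EIP-7549 bitfield layout; committee sizes are hypothetical.
committee_bits = [False, True, False, True]   # committees 1 and 3 participate in the aggregate
committee_sizes = {1: 3, 3: 2}                # hypothetical committee sizes at this slot
# aggregation_bits = committee 1's bits (3 of them) followed by committee 3's bits (2 of them)
aggregation_bits = [True, False, True, False, True]
committee_indices = [i for i, bit in enumerate(committee_bits) if bit]  # -> [1, 3]
offset = 0
attesters = []
for committee_index in committee_indices:
    size = committee_sizes[committee_index]
    attesters += [(committee_index, i) for i in range(size) if aggregation_bits[offset + i]]
    offset += size
assert attesters == [(1, 0), (1, 2), (3, 1)]  # (committee index, position within committee)
```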
### Block processing
#### Modified `process_attestation`
```python
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
data = attestation.data
assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
assert data.target.epoch == compute_epoch_at_slot(data.slot)
assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot
# [Modified in EIP7549]
assert data.index == 0
committee_indices = get_committee_indices(attestation.committee_bits)
participants_count = 0
for index in committee_indices:
assert index < get_committee_count_per_slot(state, data.target.epoch)
committee = get_beacon_committee(state, data.slot, index)
participants_count += len(committee)
assert len(attestation.aggregation_bits) == participants_count
# Participation flag indices
participation_flag_indices = get_attestation_participation_flag_indices(state, data, state.slot - data.slot)
# Verify signature
assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))
# Update epoch participation flags
if data.target.epoch == get_current_epoch(state):
epoch_participation = state.current_epoch_participation
else:
epoch_participation = state.previous_epoch_participation
proposer_reward_numerator = 0
for index in get_attesting_indices(state, attestation):
for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
proposer_reward_numerator += get_base_reward(state, index) * weight
# Reward proposer
proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
```

View File

@ -0,0 +1,141 @@
# EIP-7549 -- Fork Logic
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Introduction](#introduction)
- [Configuration](#configuration)
- [Helper functions](#helper-functions)
- [Misc](#misc)
- [Modified `compute_fork_version`](#modified-compute_fork_version)
- [Fork to EIP-7549](#fork-to-eip-7549)
- [Fork trigger](#fork-trigger)
- [Upgrading the state](#upgrading-the-state)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Introduction
This document describes the process of the EIP-7549 upgrade.
## Configuration
Warning: this configuration is not definitive.
| Name | Value |
| - | - |
| `EIP7549_FORK_VERSION` | `Version('0x06000000')` |
| `EIP7549_FORK_EPOCH` | `Epoch(18446744073709551615)` **TBD** |
## Helper functions
### Misc
#### Modified `compute_fork_version`
```python
def compute_fork_version(epoch: Epoch) -> Version:
"""
Return the fork version at the given ``epoch``.
"""
if epoch >= EIP7549_FORK_EPOCH:
return EIP7549_FORK_VERSION
if epoch >= DENEB_FORK_EPOCH:
return DENEB_FORK_VERSION
if epoch >= CAPELLA_FORK_EPOCH:
return CAPELLA_FORK_VERSION
if epoch >= BELLATRIX_FORK_EPOCH:
return BELLATRIX_FORK_VERSION
if epoch >= ALTAIR_FORK_EPOCH:
return ALTAIR_FORK_VERSION
return GENESIS_FORK_VERSION
```
## Fork to EIP-7549
### Fork trigger
TBD. This fork is defined for testing purposes; the EIP may be combined with other consensus-layer upgrades.
For now, we assume the condition will be triggered at epoch `EIP7549_FORK_EPOCH`.
Note that for pure EIP-7549 networks, we don't apply `upgrade_to_eip7549` since they start with EIP-7549 version logic.
### Upgrading the state
If `state.slot % SLOTS_PER_EPOCH == 0` and `compute_epoch_at_slot(state.slot) == EIP7549_FORK_EPOCH`,
an irregular state change is made to upgrade to EIP-7549.
```python
def upgrade_to_eip7549(pre: deneb.BeaconState) -> BeaconState:
    epoch = deneb.get_current_epoch(pre)
post = BeaconState(
# Versioning
genesis_time=pre.genesis_time,
genesis_validators_root=pre.genesis_validators_root,
slot=pre.slot,
fork=Fork(
previous_version=pre.fork.current_version,
current_version=EIP7549_FORK_VERSION, # [Modified in EIP-7549]
epoch=epoch,
),
# History
latest_block_header=pre.latest_block_header,
block_roots=pre.block_roots,
state_roots=pre.state_roots,
historical_roots=pre.historical_roots,
# Eth1
eth1_data=pre.eth1_data,
eth1_data_votes=pre.eth1_data_votes,
eth1_deposit_index=pre.eth1_deposit_index,
# Registry
validators=pre.validators,
balances=pre.balances,
# Randomness
randao_mixes=pre.randao_mixes,
# Slashings
slashings=pre.slashings,
# Participation
previous_epoch_participation=pre.previous_epoch_participation,
current_epoch_participation=pre.current_epoch_participation,
# Finality
justification_bits=pre.justification_bits,
previous_justified_checkpoint=pre.previous_justified_checkpoint,
current_justified_checkpoint=pre.current_justified_checkpoint,
finalized_checkpoint=pre.finalized_checkpoint,
# Inactivity
inactivity_scores=pre.inactivity_scores,
# Sync
current_sync_committee=pre.current_sync_committee,
next_sync_committee=pre.next_sync_committee,
# Execution-layer
        latest_execution_payload_header=pre.latest_execution_payload_header,
# Withdrawals
next_withdrawal_index=pre.next_withdrawal_index,
next_withdrawal_validator_index=pre.next_withdrawal_validator_index,
# Deep history valid from Capella onwards
historical_summaries=pre.historical_summaries,
)
return post
```

View File

@ -0,0 +1,52 @@
# EIP-7549 -- Networking
This document contains the consensus-layer networking specification for EIP-7549.
The specification of these changes continues in the same format as the network specifications of previous upgrades, and assumes them as a prerequisite.
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Modifications in EIP-7549](#modifications-in-eip-7549)
- [The gossip domain: gossipsub](#the-gossip-domain-gossipsub)
- [Topics and messages](#topics-and-messages)
- [Global topics](#global-topics)
- [`beacon_aggregate_and_proof`](#beacon_aggregate_and_proof)
- [`beacon_attestation_{subnet_id}`](#beacon_attestation_subnet_id)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
## Modifications in EIP-7549
### The gossip domain: gossipsub
#### Topics and messages
The `beacon_aggregate_and_proof` and `beacon_attestation_{subnet_id}` topics are modified to support the gossip of a new attestation type.
##### Global topics
###### `beacon_aggregate_and_proof`
*[Modified in EIP7549]*
The following convenience variables are re-defined:
- `index = get_committee_indices(aggregate.committee_bits)[0]`
The following validations are added:
* [REJECT] `len(committee_indices) == 1`, where `committee_indices = get_committee_indices(aggregate.committee_bits)`.
* [REJECT] `aggregate.data.index == 0`
###### `beacon_attestation_{subnet_id}`
The following convenience variables are re-defined:
- `index = get_committee_indices(attestation.committee_bits)[0]`
The following validations are added:
* [REJECT] `len(committee_indices) == 1`, where `committee_indices = get_committee_indices(attestation.committee_bits)`.
* [REJECT] `attestation.data.index == 0`
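For illustration, the two new checks amount to requiring exactly one committee selected in `committee_bits` and a zeroed in-data index. A plain-Python sketch (not part of the spec):
```python
# Sketch of the two EIP-7549 index checks applied to a gossiped (aggregate) attestation.
def passes_eip7549_index_checks(committee_bits, data_index):
    committee_indices = [i for i, bit in enumerate(committee_bits) if bit]
    return len(committee_indices) == 1 and data_index == 0

assert passes_eip7549_index_checks([False, True, False, False], 0)       # single committee: accept
assert not passes_eip7549_index_checks([True, True, False, False], 0)    # two committees: reject
assert not passes_eip7549_index_checks([False, True, False, False], 1)   # non-zero data.index: reject
```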

View File

@ -0,0 +1,74 @@
# EIP-7549 -- Honest Validator
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Modifications in EIP-7549](#modifications-in-eip-7549)
- [Block proposal](#block-proposal)
- [Constructing the `BeaconBlockBody`](#constructing-the-beaconblockbody)
- [Attester slashings](#attester-slashings)
- [Attestations](#attestations)
- [Attesting](#attesting)
- [Construct attestation](#construct-attestation)
- [Attestation aggregation](#attestation-aggregation)
- [Construct aggregate](#construct-aggregate)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
## Modifications in EIP-7549
### Block proposal
#### Constructing the `BeaconBlockBody`
##### Attester slashings
Changed the maximum number of attester slashings per block to `MAX_ATTESTER_SLASHINGS_EIP7549`.
##### Attestations
The network attestation aggregates contain only the assigned committee attestations.
Attestation aggregates received by the block proposer from the committee aggregators with disjoint `committee_bits` sets and equal `AttestationData` SHOULD be consolidated into a single `Attestation` object.
The proposer should run the following function to construct an on-chain final aggregate from a list of network aggregates with equal `AttestationData`:
```python
def compute_on_chain_aggregate(network_aggregates: Sequence[Attestation]) -> Attestation:
aggregates = sorted(network_aggregates, key=lambda a: get_committee_indices(a.committee_bits)[0])
data = aggregates[0].data
aggregation_bits = Bitlist[MAX_VALIDATORS_PER_COMMITTEE * MAX_COMMITTEES_PER_SLOT]()
for a in aggregates:
for b in a.aggregation_bits:
aggregation_bits.append(b)
signature = bls.Aggregate([a.signature for a in aggregates])
committee_indices = [get_committee_indices(a.committee_bits)[0] for a in aggregates]
committee_flags = [(index in committee_indices) for index in range(0, MAX_COMMITTEES_PER_SLOT)]
committee_bits = Bitvector[MAX_COMMITTEES_PER_SLOT](committee_flags)
return Attestation(aggregation_bits, data, committee_bits, signature)
```
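For example, given two network aggregates with equal `AttestationData`, one for committee `0` over a committee of 5 validators and one for committee `2` over a committee of 4, `compute_on_chain_aggregate` returns an `Attestation` whose `committee_bits` has bits `0` and `2` set, whose 9-bit `aggregation_bits` is the concatenation of the two inputs' bits in committee-index order, and whose `signature` is the BLS aggregate of the two input signatures.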
### Attesting
#### Construct attestation
- Set `attestation_data.index = 0`.
- Let `attestation.aggregation_bits` be a `Bitlist[MAX_VALIDATORS_PER_COMMITTEE * MAX_COMMITTEES_PER_SLOT]` of length `len(committee)`, where the bit of the index of the validator in the `committee` is set to `0b1`.
- Let `attestation.committee_bits` be a `Bitvector[MAX_COMMITTEES_PER_SLOT]`, where the bit at the index associated with the validator's committee is set to `0b1`.
*Note*: Calling `get_attesting_indices(state, attestation)` should return a list of length equal to 1, containing `validator_index`.
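A minimal sketch of the two bitfields described above (plain Python, not part of the spec; the `single_attester_bits` helper is hypothetical, and `max_committees_per_slot` defaults to the mainnet `MAX_COMMITTEES_PER_SLOT` of 64):
```python
# Hypothetical helper: bitfields for a single attester under EIP-7549.
def single_attester_bits(committee_index, index_in_committee, committee_length, max_committees_per_slot=64):
    aggregation_bits = [False] * committee_length
    aggregation_bits[index_in_committee] = True     # exactly one participant bit set
    committee_bits = [False] * max_committees_per_slot
    committee_bits[committee_index] = True          # exactly one committee selected
    return aggregation_bits, committee_bits

agg_bits, com_bits = single_attester_bits(committee_index=3, index_in_committee=5, committee_length=8)
assert sum(agg_bits) == 1 and sum(com_bits) == 1
```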
### Attestation aggregation
#### Construct aggregate
- Set `attestation_data.index = 0`.
- Let `aggregation_bits` be a `Bitlist[MAX_VALIDATORS_PER_COMMITTEE * MAX_COMMITTEES_PER_SLOT]` of length `len(committee)`, where each bit set from each individual attestation is set to `0b1`.
- Set `attestation.committee_bits = committee_bits`, where `committee_bits` has the same value as in each individual attestation.
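A minimal sketch of the bit-merging step described above (plain Python, not part of the spec; real aggregation also BLS-aggregates the individual signatures):
```python
# Merge single-attester aggregation_bits for the same committee and AttestationData by bitwise OR.
def merge_aggregation_bits(bitlists):
    length = len(bitlists[0])
    assert all(len(bits) == length for bits in bitlists)
    return [any(bits[i] for bits in bitlists) for i in range(length)]

assert merge_aggregation_bits([[True, False, False], [False, False, True]]) == [True, False, True]
```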

View File

@ -42,6 +42,9 @@
- [`verify_cell_proof`](#verify_cell_proof)
- [`verify_cell_proof_batch`](#verify_cell_proof_batch)
- [Reconstruction](#reconstruction)
- [`construct_vanishing_polynomial`](#construct_vanishing_polynomial)
- [`recover_shifted_data`](#recover_shifted_data)
- [`recover_original_data`](#recover_original_data)
- [`recover_polynomial`](#recover_polynomial)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
@ -59,7 +62,7 @@ Public functions MUST accept raw bytes as input and perform the required cryptog
| Name | SSZ equivalent | Description |
| - | - | - |
| `PolynomialCoeff` | `List[BLSFieldElement, 2 * FIELD_ELEMENTS_PER_BLOB]` | A polynomial in coefficient form |
| `PolynomialCoeff` | `List[BLSFieldElement, FIELD_ELEMENTS_PER_EXT_BLOB]` | A polynomial in coefficient form |
| `Cell` | `Vector[BLSFieldElement, FIELD_ELEMENTS_PER_CELL]` | The unit of blob data that can come with their own KZG proofs |
| `CellID` | `uint64` | Cell identifier |
| `RowIndex` | `uint64` | Row identifier |
@ -78,9 +81,10 @@ Cells are the smallest unit of blob data that can come with their own KZG proofs
| Name | Value | Description |
| - | - | - |
| `FIELD_ELEMENTS_PER_EXT_BLOB` | `2 * FIELD_ELEMENTS_PER_BLOB` | Number of field elements in a Reed-Solomon extended blob |
| `FIELD_ELEMENTS_PER_CELL` | `uint64(64)` | Number of field elements in a cell |
| `BYTES_PER_CELL` | `FIELD_ELEMENTS_PER_CELL * BYTES_PER_FIELD_ELEMENT` | The number of bytes in a cell |
| `CELLS_PER_BLOB` | `((2 * FIELD_ELEMENTS_PER_BLOB) // FIELD_ELEMENTS_PER_CELL)` | The number of cells in a blob |
| `CELLS_PER_BLOB` | `FIELD_ELEMENTS_PER_EXT_BLOB // FIELD_ELEMENTS_PER_CELL` | The number of cells in a blob |
| `RANDOM_CHALLENGE_KZG_CELL_BATCH_DOMAIN` | `b'RCKZGCBATCH__V1_'` |
## Helper functions
@ -194,12 +198,15 @@ def multiply_polynomialcoeff(a: PolynomialCoeff, b: PolynomialCoeff) -> Polynomi
"""
Multiplies the coefficient form polynomials ``a`` and ``b``
"""
assert len(a) + len(b) <= FIELD_ELEMENTS_PER_EXT_BLOB
r = [0]
for power, coef in enumerate(a):
summand = [0] * power + [int(coef) * int(x) % BLS_MODULUS for x in b]
r = add_polynomialcoeff(r, summand)
return r
```
#### `divide_polynomialcoeff`
```python
@ -354,7 +361,7 @@ def coset_for_cell(cell_id: CellID) -> Cell:
"""
assert cell_id < CELLS_PER_BLOB
roots_of_unity_brp = bit_reversal_permutation(
compute_roots_of_unity(2 * FIELD_ELEMENTS_PER_BLOB)
compute_roots_of_unity(FIELD_ELEMENTS_PER_EXT_BLOB)
)
return Cell(roots_of_unity_brp[FIELD_ELEMENTS_PER_CELL * cell_id:FIELD_ELEMENTS_PER_CELL * (cell_id + 1)])
```
@ -404,7 +411,7 @@ def compute_cells(blob: Blob) -> Vector[Cell, CELLS_PER_BLOB]:
polynomial_coeff = polynomial_eval_to_coeff(polynomial)
extended_data = fft_field(polynomial_coeff + [0] * FIELD_ELEMENTS_PER_BLOB,
compute_roots_of_unity(2 * FIELD_ELEMENTS_PER_BLOB))
compute_roots_of_unity(FIELD_ELEMENTS_PER_EXT_BLOB))
extended_data_rbo = bit_reversal_permutation(extended_data)
return [extended_data_rbo[i * FIELD_ELEMENTS_PER_CELL:(i + 1) * FIELD_ELEMENTS_PER_CELL]
for i in range(CELLS_PER_BLOB)]
@ -473,73 +480,99 @@ def verify_cell_proof_batch(row_commitments_bytes: Sequence[Bytes48],
## Reconstruction
### `recover_polynomial`
### `construct_vanishing_polynomial`
```python
def recover_polynomial(cell_ids: Sequence[CellID],
cells_bytes: Sequence[Vector[Bytes32, FIELD_ELEMENTS_PER_CELL]]) -> Polynomial:
def construct_vanishing_polynomial(missing_cell_ids: Sequence[CellID]) -> Tuple[
        Sequence[BLSFieldElement],
        Sequence[BLSFieldElement],
        Sequence[BLSFieldElement]]:
"""
Recovers a polynomial from 2 * FIELD_ELEMENTS_PER_CELL evaluations, half of which can be missing.
This algorithm uses FFTs to recover cells faster than using Lagrange implementation. However,
a faster version thanks to Qi Zhou can be found here:
https://github.com/ethereum/research/blob/51b530a53bd4147d123ab3e390a9d08605c2cdb8/polynomial_reconstruction/polynomial_reconstruction_danksharding.py
Public method.
Given the cells that are missing from the data, compute the polynomial that vanishes at every point that
corresponds to a missing field element.
"""
assert len(cell_ids) == len(cells_bytes)
cells = [bytes_to_cell(cell_bytes) for cell_bytes in cells_bytes]
assert len(cells) >= CELLS_PER_BLOB // 2
missing_cell_ids = [cell_id for cell_id in range(CELLS_PER_BLOB) if cell_id not in cell_ids]
# Get the small domain
roots_of_unity_reduced = compute_roots_of_unity(CELLS_PER_BLOB)
# Compute polynomial that vanishes at all the missing cells (over the small domain)
short_zero_poly = vanishing_polynomialcoeff([
roots_of_unity_reduced[reverse_bits(cell_id, CELLS_PER_BLOB)]
for cell_id in missing_cell_ids
roots_of_unity_reduced[reverse_bits(missing_cell_id, CELLS_PER_BLOB)]
for missing_cell_id in missing_cell_ids
])
full_zero_poly = []
for i in short_zero_poly:
full_zero_poly.append(i)
full_zero_poly.extend([0] * (FIELD_ELEMENTS_PER_CELL - 1))
full_zero_poly = full_zero_poly + [0] * (2 * FIELD_ELEMENTS_PER_BLOB - len(full_zero_poly))
# Extend vanishing polynomial to full domain using the closed form of the vanishing polynomial over a coset
zero_poly_coeff = [0] * FIELD_ELEMENTS_PER_EXT_BLOB
for i, coeff in enumerate(short_zero_poly):
zero_poly_coeff[i * FIELD_ELEMENTS_PER_CELL] = coeff
zero_poly_eval = fft_field(full_zero_poly,
compute_roots_of_unity(2 * FIELD_ELEMENTS_PER_BLOB))
# Compute evaluations of the extended vanishing polynomial
zero_poly_eval = fft_field(zero_poly_coeff,
compute_roots_of_unity(FIELD_ELEMENTS_PER_EXT_BLOB))
zero_poly_eval_brp = bit_reversal_permutation(zero_poly_eval)
for cell_id in missing_cell_ids:
start = cell_id * FIELD_ELEMENTS_PER_CELL
end = (cell_id + 1) * FIELD_ELEMENTS_PER_CELL
assert zero_poly_eval_brp[start:end] == [0] * FIELD_ELEMENTS_PER_CELL
for cell_id in cell_ids:
start = cell_id * FIELD_ELEMENTS_PER_CELL
end = (cell_id + 1) * FIELD_ELEMENTS_PER_CELL
assert all(a != 0 for a in zero_poly_eval_brp[start:end])
extended_evaluation_rbo = [0] * (FIELD_ELEMENTS_PER_BLOB * 2)
# Sanity check
for cell_id in range(CELLS_PER_BLOB):
start = cell_id * FIELD_ELEMENTS_PER_CELL
end = (cell_id + 1) * FIELD_ELEMENTS_PER_CELL
if cell_id in missing_cell_ids:
assert all(a == 0 for a in zero_poly_eval_brp[start:end])
else: # cell_id in cell_ids
assert all(a != 0 for a in zero_poly_eval_brp[start:end])
return zero_poly_coeff, zero_poly_eval, zero_poly_eval_brp
```
### `recover_shifted_data`
```python
def recover_shifted_data(cell_ids: Sequence[CellID],
cells: Sequence[Cell],
zero_poly_eval: Sequence[BLSFieldElement],
zero_poly_coeff: Sequence[BLSFieldElement],
roots_of_unity_extended: Sequence[BLSFieldElement]) -> Tuple[
Sequence[BLSFieldElement],
Sequence[BLSFieldElement],
BLSFieldElement]:
"""
Given Z(x), return polynomial Q_1(x)=(E*Z)(k*x) and Q_2(x)=Z(k*x) and k^{-1}.
"""
shift_factor = BLSFieldElement(PRIMITIVE_ROOT_OF_UNITY)
shift_inv = div(BLSFieldElement(1), shift_factor)
extended_evaluation_rbo = [0] * FIELD_ELEMENTS_PER_EXT_BLOB
for cell_id, cell in zip(cell_ids, cells):
start = cell_id * FIELD_ELEMENTS_PER_CELL
end = (cell_id + 1) * FIELD_ELEMENTS_PER_CELL
extended_evaluation_rbo[start:end] = cell
extended_evaluation = bit_reversal_permutation(extended_evaluation_rbo)
# Compute (E*Z)(x)
extended_evaluation_times_zero = [BLSFieldElement(int(a) * int(b) % BLS_MODULUS)
for a, b in zip(zero_poly_eval, extended_evaluation)]
roots_of_unity_extended = compute_roots_of_unity(2 * FIELD_ELEMENTS_PER_BLOB)
extended_evaluations_fft = fft_field(extended_evaluation_times_zero, roots_of_unity_extended, inv=True)
shift_factor = BLSFieldElement(PRIMITIVE_ROOT_OF_UNITY)
shift_inv = div(BLSFieldElement(1), shift_factor)
# Compute (E*Z)(k*x)
shifted_extended_evaluation = shift_polynomialcoeff(extended_evaluations_fft, shift_factor)
shifted_zero_poly = shift_polynomialcoeff(full_zero_poly, shift_factor)
# Compute Z(k*x)
shifted_zero_poly = shift_polynomialcoeff(zero_poly_coeff, shift_factor)
eval_shifted_extended_evaluation = fft_field(shifted_extended_evaluation, roots_of_unity_extended)
eval_shifted_zero_poly = fft_field(shifted_zero_poly, roots_of_unity_extended)
return eval_shifted_extended_evaluation, eval_shifted_zero_poly, shift_inv
```
### `recover_original_data`
```python
def recover_original_data(eval_shifted_extended_evaluation: Sequence[BLSFieldElement],
eval_shifted_zero_poly: Sequence[BLSFieldElement],
shift_inv: BLSFieldElement,
roots_of_unity_extended: Sequence[BLSFieldElement]) -> Sequence[BLSFieldElement]:
"""
Given Q_1, Q_2 and k^{-1}, compute P(x).
"""
# Compute Q_3 = Q_1(x)/Q_2(x) = P(k*x)
eval_shifted_reconstructed_poly = [
div(a, b)
for a, b in zip(eval_shifted_extended_evaluation, eval_shifted_zero_poly)
@ -547,10 +580,59 @@ def recover_polynomial(cell_ids: Sequence[CellID],
shifted_reconstructed_poly = fft_field(eval_shifted_reconstructed_poly, roots_of_unity_extended, inv=True)
# Unshift P(k*x) by k^{-1} to get P(x)
reconstructed_poly = shift_polynomialcoeff(shifted_reconstructed_poly, shift_inv)
reconstructed_data = bit_reversal_permutation(fft_field(reconstructed_poly, roots_of_unity_extended))
return reconstructed_data
```
### `recover_polynomial`
```python
def recover_polynomial(cell_ids: Sequence[CellID],
cells_bytes: Sequence[Vector[Bytes32, FIELD_ELEMENTS_PER_CELL]]) -> Polynomial:
"""
Recover original polynomial from FIELD_ELEMENTS_PER_EXT_BLOB evaluations, half of which can be missing. This
algorithm uses FFTs to recover cells faster than using Lagrange implementation, as can be seen here:
https://ethresear.ch/t/reed-solomon-erasure-code-recovery-in-n-log-2-n-time-with-ffts/3039
A faster version thanks to Qi Zhou can be found here:
https://github.com/ethereum/research/blob/51b530a53bd4147d123ab3e390a9d08605c2cdb8/polynomial_reconstruction/polynomial_reconstruction_danksharding.py
Public method.
"""
assert len(cell_ids) == len(cells_bytes)
# Check we have enough cells to be able to perform the reconstruction
    assert CELLS_PER_BLOB // 2 <= len(cell_ids) <= CELLS_PER_BLOB
# Check for duplicates
assert len(cell_ids) == len(set(cell_ids))
# Get the extended domain
roots_of_unity_extended = compute_roots_of_unity(FIELD_ELEMENTS_PER_EXT_BLOB)
# Convert from bytes to cells
cells = [bytes_to_cell(cell_bytes) for cell_bytes in cells_bytes]
missing_cell_ids = [cell_id for cell_id in range(CELLS_PER_BLOB) if cell_id not in cell_ids]
zero_poly_coeff, zero_poly_eval, zero_poly_eval_brp = construct_vanishing_polynomial(missing_cell_ids)
eval_shifted_extended_evaluation, eval_shifted_zero_poly, shift_inv = recover_shifted_data(
cell_ids,
cells,
zero_poly_eval,
zero_poly_coeff,
roots_of_unity_extended,
)
reconstructed_data = recover_original_data(
eval_shifted_extended_evaluation,
eval_shifted_zero_poly,
shift_inv,
roots_of_unity_extended,
)
for cell_id, cell in zip(cell_ids, cells):
start = cell_id * FIELD_ELEMENTS_PER_CELL
end = (cell_id + 1) * FIELD_ELEMENTS_PER_CELL

View File

@ -41,7 +41,7 @@ Warning: this configuration is not definitive.
| Name | Value |
| -------------------- | ----------------------- |
| `WHISK_FORK_VERSION` | `Version('0x05000000')` |
| `WHISK_FORK_VERSION` | `Version('0x08000000')` |
| `WHISK_FORK_EPOCH` | `Epoch(18446744073709551615)` **TBD** |
## Fork to Whisk

View File

@ -496,7 +496,7 @@ def process_attestation(state: BeaconState, attestation: Attestation) -> None:
epoch_participation = state.previous_epoch_participation
proposer_reward_numerator = 0
for index in get_attesting_indices(state, data, attestation.aggregation_bits):
for index in get_attesting_indices(state, attestation):
for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
epoch_participation[index] = add_flag(epoch_participation[index], flag_index)

View File

@ -69,7 +69,7 @@ def translate_participation(state: BeaconState, pending_attestations: Sequence[p
# Apply flags to all attesting validators
epoch_participation = state.previous_epoch_participation
for index in get_attesting_indices(state, data, attestation.aggregation_bits):
for index in get_attesting_indices(state, attestation):
for flag_index in participation_flag_indices:
epoch_participation[index] = add_flag(epoch_participation[index], flag_index)

View File

@ -168,7 +168,7 @@ Similar to the discussion about the maximum gossip size increase, the
`ExecutionPayload` type can cause `BeaconBlock`s to exceed the 1 MiB bounds put
in place during Phase 0.
As with the gossip limit, 10 MiB is selected because this is firmly below any
As with the gossip limit, 10 MiB is selected because this is firmly above any
valid block sizes in the range of gas limits expected in the medium term.
As with both gossip and req/rsp maximum values, type-specific limits should

View File

@ -1,7 +1,5 @@
# Capella -- Fork Choice
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- TOC -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->

View File

@ -1,7 +1,5 @@
# Capella -- Honest Validator
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- TOC -->

View File

@ -1,7 +1,5 @@
# Deneb -- The Beacon Chain
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- TOC -->
@ -337,7 +335,7 @@ def process_attestation(state: BeaconState, attestation: Attestation) -> None:
epoch_participation = state.previous_epoch_participation
proposer_reward_numerator = 0
for index in get_attesting_indices(state, data, attestation.aggregation_bits):
for index in get_attesting_indices(state, attestation):
for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
epoch_participation[index] = add_flag(epoch_participation[index], flag_index)

View File

@ -1,7 +1,5 @@
# Deneb -- Fork Logic
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
@ -29,7 +27,7 @@ Warning: this configuration is not definitive.
| Name | Value |
| - | - |
| `DENEB_FORK_VERSION` | `Version('0x04000000')` |
| `DENEB_FORK_EPOCH` | `Epoch(18446744073709551615)` **TBD** |
| `DENEB_FORK_EPOCH` | `Epoch(269568)` (March 13, 2024, 01:55:35pm UTC) |
## Helper functions

View File

@ -183,7 +183,7 @@ The following validations MUST pass before forwarding the `blob_sidecar` on the
- _[REJECT]_ The current finalized_checkpoint is an ancestor of the sidecar's block -- i.e. `get_checkpoint_block(store, block_header.parent_root, store.finalized_checkpoint.epoch) == store.finalized_checkpoint.root`.
- _[REJECT]_ The sidecar's inclusion proof is valid as verified by `verify_blob_sidecar_inclusion_proof(blob_sidecar)`.
- _[REJECT]_ The sidecar's blob is valid as verified by `verify_blob_kzg_proof(blob_sidecar.blob, blob_sidecar.kzg_commitment, blob_sidecar.kzg_proof)`.
- _[IGNORE]_ The sidecar is the first sidecar for the tuple (block_header.slot, block_header.proposer_index, blob_sidecar.index) with valid header signature, sidecar inclusion proof, and kzg proof.
- _[IGNORE]_ The sidecar is the first sidecar for the tuple `(block_header.slot, block_header.proposer_index, blob_sidecar.index)` with valid header signature, sidecar inclusion proof, and kzg proof.
- _[REJECT]_ The sidecar is proposed by the expected `proposer_index` for the block's slot in the context of the current shuffling (defined by `block_header.parent_root`/`block_header.slot`).
If the `proposer_index` cannot immediately be verified against the expected shuffling, the sidecar MAY be queued for later processing while proposers for the block's branch are calculated -- in such a case _do not_ `REJECT`, instead `IGNORE` this message.

View File

@ -1,7 +1,5 @@
# Deneb -- Honest Validator
**Notice**: This document is a work-in-progress for researchers and implementers.
## Table of contents
<!-- TOC -->

View File

@ -178,6 +178,8 @@ The following values are (non-configurable) constants used throughout the specif
| Name | Value |
| - | - |
| `UINT64_MAX` | `uint64(2**64 - 1)` |
| `UINT64_MAX_SQRT` | `uint64(4294967295)` |
| `GENESIS_SLOT` | `Slot(0)` |
| `GENESIS_EPOCH` | `Epoch(0)` |
| `FAR_FUTURE_EPOCH` | `Epoch(2**64 - 1)` |
@ -599,6 +601,8 @@ def integer_squareroot(n: uint64) -> uint64:
"""
Return the largest integer ``x`` such that ``x**2 <= n``.
"""
if n == UINT64_MAX:
return UINT64_MAX_SQRT
x = n
y = (x + 1) // 2
while y < x:
@ -1082,7 +1086,7 @@ def get_indexed_attestation(state: BeaconState, attestation: Attestation) -> Ind
"""
Return the indexed attestation corresponding to ``attestation``.
"""
attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
attesting_indices = get_attesting_indices(state, attestation)
return IndexedAttestation(
attesting_indices=sorted(attesting_indices),
@ -1094,14 +1098,12 @@ def get_indexed_attestation(state: BeaconState, attestation: Attestation) -> Ind
#### `get_attesting_indices`
```python
def get_attesting_indices(state: BeaconState,
data: AttestationData,
bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]) -> Set[ValidatorIndex]:
def get_attesting_indices(state: BeaconState, attestation: Attestation) -> Set[ValidatorIndex]:
"""
Return the set of attesting indices corresponding to ``data`` and ``bits``.
"""
committee = get_beacon_committee(state, data.slot, data.index)
return set(index for i, index in enumerate(committee) if bits[i])
committee = get_beacon_committee(state, attestation.data.slot, attestation.data.index)
return set(index for i, index in enumerate(committee) if attestation.aggregation_bits[i])
```
### Beacon state mutators
@ -1339,7 +1341,7 @@ def get_unslashed_attesting_indices(state: BeaconState,
attestations: Sequence[PendingAttestation]) -> Set[ValidatorIndex]:
output = set() # type: Set[ValidatorIndex]
for a in attestations:
output = output.union(get_attesting_indices(state, a.data, a.aggregation_bits))
output = output.union(get_attesting_indices(state, a))
return set(filter(lambda index: not state.validators[index].slashed, output))
```
@ -1512,7 +1514,7 @@ def get_inclusion_delay_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequ
for index in get_unslashed_attesting_indices(state, matching_source_attestations):
attestation = min([
a for a in matching_source_attestations
if index in get_attesting_indices(state, a.data, a.aggregation_bits)
if index in get_attesting_indices(state, a)
], key=lambda a: a.inclusion_delay)
rewards[attestation.proposer_index] += get_proposer_reward(state, index)
max_attester_reward = Gwei(get_base_reward(state, index) - get_proposer_reward(state, index))

View File

@ -245,7 +245,7 @@ The following gossipsub [parameters](https://github.com/libp2p/specs/blob/master
- `fanout_ttl` (ttl for fanout maps for topics we are not subscribed to but have published to, seconds): 60
- `mcache_len` (number of windows to retain full messages in cache for `IWANT` responses): 6
- `mcache_gossip` (number of windows to gossip about): 3
- `seen_ttl` (number of heartbeat intervals to retain message IDs): 550
- `seen_ttl` (expiry time for cache of seen message ids, seconds): SECONDS_PER_SLOT * SLOTS_PER_EPOCH * 2
*Note*: Gossipsub v1.1 introduces a number of
[additional parameters](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#overview-of-new-parameters)
@ -353,26 +353,32 @@ The following validations MUST pass before forwarding the `signed_beacon_block`
The `beacon_aggregate_and_proof` topic is used to propagate aggregated attestations (as `SignedAggregateAndProof`s)
to subscribing nodes (typically validators) to be included in future blocks.
We define the following variables for convenience:
- `aggregate_and_proof = signed_aggregate_and_proof.message`
- `aggregate = aggregate_and_proof.aggregate`
- `index = aggregate.data.index`
- `aggregation_bits = aggregate.aggregation_bits`
The following validations MUST pass before forwarding the `signed_aggregate_and_proof` on the network.
(We define the following for convenience -- `aggregate_and_proof = signed_aggregate_and_proof.message` and `aggregate = aggregate_and_proof.aggregate`)
- _[REJECT]_ The committee index is within the expected range -- i.e. `aggregate.data.index < get_committee_count_per_slot(state, aggregate.data.target.epoch)`.
- _[REJECT]_ The committee index is within the expected range -- i.e. `index < get_committee_count_per_slot(state, aggregate.data.target.epoch)`.
- _[IGNORE]_ `aggregate.data.slot` is within the last `ATTESTATION_PROPAGATION_SLOT_RANGE` slots (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) --
i.e. `aggregate.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= aggregate.data.slot`
(a client MAY queue future aggregates for processing at the appropriate slot).
- _[REJECT]_ The aggregate attestation's epoch matches its target -- i.e. `aggregate.data.target.epoch ==
compute_epoch_at_slot(aggregate.data.slot)`
- _[REJECT]_ The number of aggregation bits matches the committee size -- i.e.
`len(aggregate.aggregation_bits) == len(get_beacon_committee(state, aggregate.data.slot, aggregate.data.index))`.
`len(aggregation_bits) == len(get_beacon_committee(state, aggregate.data.slot, index))`.
- _[REJECT]_ The aggregate attestation has participants --
that is, `len(get_attesting_indices(state, aggregate.data, aggregate.aggregation_bits)) >= 1`.
that is, `len(get_attesting_indices(state, aggregate)) >= 1`.
- _[IGNORE]_ A valid aggregate attestation defined by `hash_tree_root(aggregate.data)` whose `aggregation_bits` is a non-strict superset has _not_ already been seen.
(via aggregate gossip, within a verified block, or through the creation of an equivalent aggregate locally).
- _[IGNORE]_ The `aggregate` is the first valid aggregate received for the aggregator
with index `aggregate_and_proof.aggregator_index` for the epoch `aggregate.data.target.epoch`.
- _[REJECT]_ The attestation has participants -- that is, `len(get_attesting_indices(state, aggregate)) >= 1`.
- _[REJECT]_ `aggregate_and_proof.selection_proof` selects the validator as an aggregator for the slot --
i.e. `is_aggregator(state, aggregate.data.slot, aggregate.data.index, aggregate_and_proof.selection_proof)` returns `True`.
i.e. `is_aggregator(state, aggregate.data.slot, index, aggregate_and_proof.selection_proof)` returns `True`.
- _[REJECT]_ The aggregator's validator index is within the committee --
i.e. `aggregate_and_proof.aggregator_index in get_beacon_committee(state, aggregate.data.slot, aggregate.data.index)`.
i.e. `aggregate_and_proof.aggregator_index in get_beacon_committee(state, aggregate.data.slot, index)`.
- _[REJECT]_ The `aggregate_and_proof.selection_proof` is a valid signature
of the `aggregate.data.slot` by the validator with index `aggregate_and_proof.aggregator_index`.
- _[REJECT]_ The aggregator signature, `signed_aggregate_and_proof.signature`, is valid.
@ -429,10 +435,14 @@ Attestation subnets are used to propagate unaggregated attestations to subsectio
The `beacon_attestation_{subnet_id}` topics are used to propagate unaggregated attestations
to the subnet `subnet_id` (typically beacon and persistent committees) to be aggregated before being gossiped to `beacon_aggregate_and_proof`.
We define the following variables for convenience:
- `index = attestation.data.index`
- `aggregation_bits = attestation.aggregation_bits`
The following validations MUST pass before forwarding the `attestation` on the subnet.
- _[REJECT]_ The committee index is within the expected range -- i.e. `attestation.data.index < get_committee_count_per_slot(state, attestation.data.target.epoch)`.
- _[REJECT]_ The committee index is within the expected range -- i.e. `index < get_committee_count_per_slot(state, attestation.data.target.epoch)`.
- _[REJECT]_ The attestation is for the correct subnet --
i.e. `compute_subnet_for_attestation(committees_per_slot, attestation.data.slot, attestation.data.index) == subnet_id`,
i.e. `compute_subnet_for_attestation(committees_per_slot, attestation.data.slot, index) == subnet_id`,
where `committees_per_slot = get_committee_count_per_slot(state, attestation.data.target.epoch)`,
which may be pre-computed along with the committee information for the signature check.
- _[IGNORE]_ `attestation.data.slot` is within the last `ATTESTATION_PROPAGATION_SLOT_RANGE` slots
@ -442,9 +452,9 @@ The following validations MUST pass before forwarding the `attestation` on the s
- _[REJECT]_ The attestation's epoch matches its target -- i.e. `attestation.data.target.epoch ==
compute_epoch_at_slot(attestation.data.slot)`
- _[REJECT]_ The attestation is unaggregated --
that is, it has exactly one participating validator (`len([bit for bit in attestation.aggregation_bits if bit]) == 1`, i.e. exactly 1 bit is set).
that is, it has exactly one participating validator (`len([bit for bit in aggregation_bits if bit]) == 1`, i.e. exactly 1 bit is set).
- _[REJECT]_ The number of aggregation bits matches the committee size -- i.e.
`len(attestation.aggregation_bits) == len(get_beacon_committee(state, attestation.data.slot, attestation.data.index))`.
`len(aggregation_bits) == len(get_beacon_committee(state, attestation.data.slot, index))`.
- _[IGNORE]_ There has been no other valid attestation seen on an attestation subnet
that has an identical `attestation.data.target.epoch` and participating validator index.
- _[REJECT]_ The signature of `attestation` is valid.

View File

@ -494,7 +494,7 @@ Set `attestation.data = attestation_data` where `attestation_data` is the `Attes
- Let `attestation.aggregation_bits` be a `Bitlist[MAX_VALIDATORS_PER_COMMITTEE]` of length `len(committee)`, where the bit of the index of the validator in the `committee` is set to `0b1`.
*Note*: Calling `get_attesting_indices(state, attestation.data, attestation.aggregation_bits)` should return a list of length equal to 1, containing `validator_index`.
*Note*: Calling `get_attesting_indices(state, attestation)` should return a list of length equal to 1, containing `validator_index`.
##### Aggregate signature

View File

@ -1 +1 @@
1.4.0-beta.6
1.4.0

View File

View File

@ -8,7 +8,7 @@ from eth2spec.utils import bls
from .exceptions import SkippedTest
from .helpers.constants import (
PHASE0, ALTAIR, BELLATRIX, CAPELLA, DENEB,
EIP6110, EIP7002, EIP7594,
EIP6110, EIP7002, EIP7251, EIP7549, EIP7594,
WHISK,
MINIMAL,
ALL_PHASES,
@ -120,8 +120,7 @@ def scaled_churn_balances_min_churn_limit(spec: Spec):
def scaled_churn_balances_equal_activation_churn_limit(spec: Spec):
"""
Helper method to create enough validators to scale the churn limit.
(This is *firmly* over the churn limit -- thus the +2 instead of just +1)
Usage: `@with_custom_state(balances_fn=scaled_churn_balances_exceed_activation_churn_limit, ...)`
Usage: `@with_custom_state(balances_fn=scaled_churn_balances_equal_activation_churn_limit, ...)`
"""
num_validators = spec.config.CHURN_LIMIT_QUOTIENT * (spec.config.MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT)
return [spec.MAX_EFFECTIVE_BALANCE] * num_validators
@ -137,6 +136,19 @@ def scaled_churn_balances_exceed_activation_churn_limit(spec: Spec):
return [spec.MAX_EFFECTIVE_BALANCE] * num_validators
def scaled_churn_balances_exceed_activation_exit_churn_limit(spec: Spec):
"""
Helper method to create enough validators to scale the churn limit.
    (The number of validators is double the amount needed for the max activation/exit churn limit)
    Usage: `@with_custom_state(balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, ...)`
"""
num_validators = (
2 * spec.config.CHURN_LIMIT_QUOTIENT
* spec.config.MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT
// spec.MIN_ACTIVATION_BALANCE)
return [spec.MIN_ACTIVATION_BALANCE] * num_validators
with_state = with_custom_state(default_balances, default_activation_threshold)
@ -509,8 +521,10 @@ with_capella_and_later = with_all_phases_from(CAPELLA)
with_deneb_and_later = with_all_phases_from(DENEB)
with_eip6110_and_later = with_all_phases_from(EIP6110)
with_eip7002_and_later = with_all_phases_from(EIP7002)
with_eip7549_and_later = with_all_phases_from(EIP7549)
with_whisk_and_later = with_all_phases_from(WHISK, all_phases=ALLOWED_TEST_RUNNER_FORKS)
with_eip7594_and_later = with_all_phases_from(EIP7594, all_phases=ALLOWED_TEST_RUNNER_FORKS)
with_eip7251_and_later = with_all_phases_from(EIP7251, all_phases=ALLOWED_TEST_RUNNER_FORKS)
class quoted_str(str):

View File

@ -0,0 +1,76 @@
from eth2spec.test.context import (
ForkMeta,
with_fork_metas,
with_presets,
)
from eth2spec.test.helpers.constants import (
AFTER_DENEB_PRE_POST_FORKS,
MINIMAL,
)
from eth2spec.test.helpers.keys import pubkeys
from eth2spec.test.helpers.fork_transition import (
do_fork,
transition_to_next_epoch_and_append_blocks,
transition_until_fork,
)
def mock_activated_validators(spec, state, mock_activations):
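    # Append `mock_activations` fully-funded validators and activate them in the current epoch,
    # temporarily inflating the churn limit derived from the active validator count.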
validator_count = len(state.validators)
for i in range(mock_activations):
index = validator_count + i
validator = spec.Validator(
pubkey=pubkeys[index],
withdrawal_credentials=spec.ETH1_ADDRESS_WITHDRAWAL_PREFIX + b'\x00' * 11 + b'\x56' * 20,
activation_eligibility_epoch=0,
activation_epoch=spec.FAR_FUTURE_EPOCH,
exit_epoch=spec.FAR_FUTURE_EPOCH,
withdrawable_epoch=spec.FAR_FUTURE_EPOCH,
effective_balance=spec.MAX_EFFECTIVE_BALANCE,
)
state.validators.append(validator)
state.balances.append(spec.MAX_EFFECTIVE_BALANCE)
state.previous_epoch_participation.append(spec.ParticipationFlags(0b0000_0000))
state.current_epoch_participation.append(spec.ParticipationFlags(0b0000_0000))
state.inactivity_scores.append(0)
state.validators[index].activation_epoch = spec.get_current_epoch(state)
@with_fork_metas([ForkMeta(pre_fork_name=pre, post_fork_name=post, fork_epoch=2)
for pre, post in AFTER_DENEB_PRE_POST_FORKS])
@with_presets([MINIMAL], reason="churn limit update needs enough validators")
def test_higher_churn_limit_to_lower(state, fork_epoch, spec, post_spec, pre_tag, post_tag):
"""
Test if churn limit goes from high to low due to EIP-7514.
"""
# Create high churn limit
mock_activations = post_spec.config.MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT * spec.config.CHURN_LIMIT_QUOTIENT
mock_activated_validators(spec, state, mock_activations)
transition_until_fork(spec, state, fork_epoch)
churn_limit_0 = spec.get_validator_churn_limit(state)
assert churn_limit_0 > post_spec.config.MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT
# check pre state
assert spec.get_current_epoch(state) < fork_epoch
yield "pre", state
# irregular state transition to handle fork
blocks = []
state, block = do_fork(state, spec, post_spec, fork_epoch)
blocks.append(post_tag(block))
# check post state
assert spec.get_current_epoch(state) == fork_epoch
# continue regular state transition with new spec into next epoch
transition_to_next_epoch_and_append_blocks(post_spec, state, post_tag, blocks, only_last_block=True)
yield "blocks", blocks
yield "post", state
churn_limit_1 = post_spec.get_validator_activation_churn_limit(state)
assert churn_limit_1 == post_spec.config.MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT
assert churn_limit_1 < churn_limit_0

View File

@ -0,0 +1,631 @@
from eth2spec.test.helpers.constants import MINIMAL
from eth2spec.test.context import (
spec_state_test,
with_eip7251_and_later,
with_presets,
always_bls,
spec_test, single_phase,
with_custom_state,
scaled_churn_balances_exceed_activation_exit_churn_limit,
default_activation_threshold,
)
from eth2spec.test.helpers.keys import pubkey_to_privkey
from eth2spec.test.helpers.consolidations import (
run_consolidation_processing,
sign_consolidation,
)
from eth2spec.test.helpers.withdrawals import (
set_eth1_withdrawal_credential_with_balance,
set_compounding_withdrawal_credential,
)
# ***********************
# * CONSOLIDATION TESTS *
# ***********************
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_basic_consolidation(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
source_index = spec.get_active_validator_indices(state, current_epoch)[0]
target_index = spec.get_active_validator_indices(state, current_epoch)[1]
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, source_index)
set_eth1_withdrawal_credential_with_balance(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation)
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
# Check consolidation churn is decremented correctly
assert state.consolidation_balance_to_consume == consolidation_churn_limit - spec.MIN_ACTIVATION_BALANCE
# Check exit epoch
assert state.validators[0].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_basic_consolidation_with_compounding_credential(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
source_index = spec.get_active_validator_indices(state, current_epoch)[0]
target_index = spec.get_active_validator_indices(state, current_epoch)[1]
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_compounding_withdrawal_credential(spec, state, source_index)
set_compounding_withdrawal_credential(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation)
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
# Check consolidation churn is decremented correctly
assert state.consolidation_balance_to_consume == consolidation_churn_limit - spec.MIN_ACTIVATION_BALANCE
# Check exit epoch
assert state.validators[0].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_consolidation_churn_limit_balance(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
source_index = spec.get_active_validator_indices(state, current_epoch)[0]
# Set source balance to consolidation churn limit
state.balances[source_index] = consolidation_churn_limit
target_index = spec.get_active_validator_indices(state, current_epoch)[1]
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_compounding_withdrawal_credential(spec, state, source_index)
set_compounding_withdrawal_credential(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation)
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
# Check consolidation churn is decremented correctly
assert state.consolidation_balance_to_consume == 0
# Check exit epoch
assert state.validators[0].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_consolidation_balance_larger_than_churn_limit(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
source_index = spec.get_active_validator_indices(state, current_epoch)[0]
# Set source balance higher than consolidation churn limit
state.balances[source_index] = consolidation_churn_limit + 1
target_index = spec.get_active_validator_indices(state, current_epoch)[1]
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_compounding_withdrawal_credential(spec, state, source_index)
set_compounding_withdrawal_credential(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation)
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch) + 1
# Check consolidation churn is decremented correctly
assert state.consolidation_balance_to_consume == consolidation_churn_limit - 1
# Check exit epoch
assert state.validators[0].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_consolidation_balance_twice_the_churn_limit(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
source_index = spec.get_active_validator_indices(state, current_epoch)[0]
target_index = spec.get_active_validator_indices(state, current_epoch)[1]
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_compounding_withdrawal_credential(spec, state, source_index)
set_compounding_withdrawal_credential(spec, state, target_index)
# Set source balance higher than consolidation churn limit
state.balances[source_index] = 2 * consolidation_churn_limit
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation)
# when exiting a multiple of the churn limit greater than 1, an extra exit epoch is added
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch) + 2
assert state.validators[0].exit_epoch == expected_exit_epoch
# since the earliest exit epoch moves to a new one, consolidation balance is back to full
assert state.consolidation_balance_to_consume == consolidation_churn_limit
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_multiple_consolidations_below_churn(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
yield "pre", state
# Prepare a bunch of consolidations, based on the current state
consolidations = []
for i in range(3):
source_index = 2 * i
target_index = 2 * i + 1
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, source_index)
set_eth1_withdrawal_credential_with_balance(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
consolidations.append(signed_consolidation)
# Now run all the consolidations
for consolidation in consolidations:
# the function yields data, but we are just interested in running it here, ignore yields.
for _ in run_consolidation_processing(spec, state, consolidation):
continue
yield "post", state
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
assert state.earliest_consolidation_epoch == expected_exit_epoch
assert state.consolidation_balance_to_consume == consolidation_churn_limit - 3 * spec.MIN_ACTIVATION_BALANCE
for i in range(3):
assert state.validators[2 * i].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_multiple_consolidations_equal_churn(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
yield "pre", state
# Prepare a bunch of consolidations, based on the current state
consolidations = []
for i in range(4):
source_index = 2 * i
target_index = 2 * i + 1
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, source_index)
set_eth1_withdrawal_credential_with_balance(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
consolidations.append(signed_consolidation)
# Now run all the consolidations
for consolidation in consolidations:
# the function yields data, but we are just interested in running it here, ignore yields.
for _ in run_consolidation_processing(spec, state, consolidation):
continue
yield "post", state
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
assert state.earliest_consolidation_epoch == expected_exit_epoch
assert state.consolidation_balance_to_consume == 0
for i in range(4):
assert state.validators[2 * i].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_multiple_consolidations_above_churn(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
# Prepare a bunch of consolidations, based on the current state
consolidations = []
for i in range(4):
source_index = 2 * i
target_index = 2 * i + 1
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, source_index)
set_eth1_withdrawal_credential_with_balance(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
consolidations.append(signed_consolidation)
# Now run all the consolidations
for consolidation in consolidations:
# the function yields data, but we are just interested in running it here, ignore yields.
for _ in run_consolidation_processing(spec, state, consolidation):
continue
# consolidate an additional validator
source_index = spec.get_active_validator_indices(state, current_epoch)[-2]
target_index = spec.get_active_validator_indices(state, current_epoch)[-1]
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, source_index)
set_eth1_withdrawal_credential_with_balance(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
# This is the interesting part of the test: on a pre-state with a full consolidation queue,
# processing an additional consolidation results in an exit in a later epoch
yield from run_consolidation_processing(spec, state, signed_consolidation)
expected_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
assert state.earliest_consolidation_epoch == expected_exit_epoch + 1
assert state.consolidation_balance_to_consume == consolidation_churn_limit - spec.MIN_ACTIVATION_BALANCE
assert state.validators[source_index].exit_epoch == expected_exit_epoch + 1
for i in range(4):
assert state.validators[2 * i].exit_epoch == expected_exit_epoch
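# Worked numbers for the rollover above: the per-epoch consolidation churn is 128 ETH, so the
# four 32 ETH consolidations fill the first available consolidation epoch exactly. The fifth
# one spills into the next epoch, which is why its source exits one epoch later and why that
# epoch's remaining budget is churn_limit - MIN_ACTIVATION_BALANCE.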
@with_eip7251_and_later
@with_presets([MINIMAL], "need sufficient consolidation churn limit")
@with_custom_state(
balances_fn=scaled_churn_balances_exceed_activation_exit_churn_limit, threshold_fn=default_activation_threshold)
@spec_test
@single_phase
def test_multiple_consolidations_equal_twice_churn(spec, state):
# This state has 256 validators each with 32 ETH in MINIMAL preset, 128 ETH consolidation churn
consolidation_churn_limit = spec.get_consolidation_churn_limit(state)
# Set the consolidation balance to consume equal to churn limit
state.consolidation_balance_to_consume = consolidation_churn_limit
current_epoch = spec.get_current_epoch(state)
yield "pre", state
# Prepare a bunch of consolidations, based on the current state
consolidations = []
for i in range(8):
source_index = 2 * i
target_index = 2 * i + 1
source_privkey = pubkey_to_privkey[state.validators[source_index].pubkey]
target_privkey = pubkey_to_privkey[state.validators[target_index].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, source_index)
set_eth1_withdrawal_credential_with_balance(spec, state, target_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=source_index,
target_index=target_index),
source_privkey, target_privkey)
consolidations.append(signed_consolidation)
# Now run all the consolidations
for consolidation in consolidations:
# the function yields data, but we are just interested in running it here, ignore yields.
for _ in run_consolidation_processing(spec, state, consolidation):
continue
yield "post", state
first_exit_epoch = spec.compute_activation_exit_epoch(current_epoch)
assert state.consolidation_balance_to_consume == 0
assert state.earliest_consolidation_epoch == first_exit_epoch + 1
for i in range(4):
assert state.validators[2 * i].exit_epoch == first_exit_epoch
for i in range(4, 8):
assert state.validators[2 * i].exit_epoch == first_exit_epoch + 1
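# Worked numbers: eight 32 ETH consolidations total 256 ETH, exactly twice the 128 ETH
# per-epoch consolidation churn, so the first four sources exit in the first available
# consolidation epoch, the remaining four in the epoch after it, and that second epoch's
# budget is fully used up (consolidation_balance_to_consume == 0).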
# Failing tests
@with_eip7251_and_later
@spec_state_test
def test_invalid_source_equals_target(spec, state):
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
# Set withdrawal credentials to eth1
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=validator_index,
target_index=validator_index),
validator_privkey, validator_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_exceed_pending_consolidations_limit(spec, state):
state.pending_consolidations = (
[spec.PendingConsolidation(source_index=0, target_index=1)] * spec.PENDING_CONSOLIDATIONS_LIMIT
)
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(epoch=current_epoch, source_index=0, target_index=1),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_exited_source(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(epoch=current_epoch, source_index=0, target_index=1),
source_privkey, target_privkey)
# exit source
spec.initiate_validator_exit(state, 0)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_exited_target(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
# exit target
spec.initiate_validator_exit(state, 1)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_inactive_source(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
# set source validator as not yet activated
state.validators[0].activation_epoch = spec.FAR_FUTURE_EPOCH
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_inactive_target(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
# set target validator as not yet activated
state.validators[1].activation_epoch = spec.FAR_FUTURE_EPOCH
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_no_execution_withdrawal_credential(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_different_credentials(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
# Set source and target withdrawal credentials to different eth1 credentials
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1, address=b'\x10' * 20)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_invalid_source_signature(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
# Change the pubkey of the source validator, invalidating its signature
state.validators[0].pubkey = state.validators[1].pubkey
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_invalid_target_signature(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch,
source_index=0,
target_index=1),
source_privkey, target_privkey)
# Change the pubkey of the target validator, invalidating its signature
state.validators[1].pubkey = state.validators[2].pubkey
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_before_specified_epoch(spec, state):
current_epoch = spec.get_current_epoch(state)
source_privkey = pubkey_to_privkey[state.validators[0].pubkey]
target_privkey = pubkey_to_privkey[state.validators[1].pubkey]
# Set source and target withdrawal credentials to the same eth1 credential
set_eth1_withdrawal_credential_with_balance(spec, state, 0)
set_eth1_withdrawal_credential_with_balance(spec, state, 1)
# set epoch=current_epoch + 1, so it's too early to process it
signed_consolidation = sign_consolidation(spec, state,
spec.Consolidation(
epoch=current_epoch + 1,
source_index=0,
target_index=1),
source_privkey, target_privkey)
yield from run_consolidation_processing(spec, state, signed_consolidation, valid=False)

View File

@ -0,0 +1,284 @@
from eth2spec.test.helpers.deposits import (
build_deposit,
prepare_state_and_deposit,
run_deposit_processing_eip7251,
run_deposit_processing_eip7251_with_specific_fork_version,
sign_deposit_data,
)
from eth2spec.test.helpers.keys import privkeys, pubkeys
from eth2spec.test.context import (
spec_state_test,
with_eip7251_and_later,
always_bls,
)
@with_eip7251_and_later
@spec_state_test
def test_new_deposit_under_min_activation_balance(spec, state):
# fresh deposit = next validator index = validator appended to registry
validator_index = len(state.validators)
# effective balance will be 1 EFFECTIVE_BALANCE_INCREMENT smaller because of this small decrement.
amount = spec.MIN_ACTIVATION_BALANCE - 1
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
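# The comment above leans on the flooring rule from the phase0 deposit path, assumed here to
# carry over to EIP-7251 with a credential-dependent cap:
#     effective_balance = min(amount - amount % spec.EFFECTIVE_BALANCE_INCREMENT, cap)
# so a deposit of MIN_ACTIVATION_BALANCE - 1 floors down by one full EFFECTIVE_BALANCE_INCREMENT.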
@with_eip7251_and_later
@spec_state_test
def test_new_deposit_min(spec, state):
# fresh deposit = next validator index = validator appended to registry
validator_index = len(state.validators)
amount = spec.MIN_DEPOSIT_AMOUNT
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
def test_new_deposit_between_min_and_max(spec, state):
# fresh deposit = next validator index = validator appended to registry
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE_EIP7251 // 2
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
def test_new_deposit_max(spec, state):
# fresh deposit = next validator index = validator appended to registry
validator_index = len(state.validators)
# effective balance will be exactly the same as balance.
amount = spec.MAX_EFFECTIVE_BALANCE_EIP7251
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
def test_new_deposit_over_max(spec, state):
# fresh deposit = next validator index = validator appended to registry
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE_EIP7251 + 1
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
# @with_eip7251_and_later
# @spec_state_test
# def test_top_up__max_effective_balance(spec, state):
# validator_index = 0
# amount = spec.MAX_EFFECTIVE_BALANCE_EIP7251 // 4
# deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
# state.balances[validator_index] = spec.MAX_EFFECTIVE_BALANCE_EIP7251
# state.validators[validator_index].effective_balance = spec.MAX_EFFECTIVE_BALANCE_EIP7251
# yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
# assert state.balances[validator_index] == spec.MAX_EFFECTIVE_BALANCE_EIP7251 + amount
# assert state.validators[validator_index].effective_balance == spec.MAX_EFFECTIVE_BALANCE_EIP7251
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_correct_sig_but_forked_state(spec, state):
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE
# deposits will always be valid, regardless of the current fork
state.fork.current_version = spec.Version('0x1234abcd')
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_incorrect_sig_new_deposit(spec, state):
# fresh deposit = next validator index = validator appended to registry
validator_index = len(state.validators)
amount = spec.MIN_ACTIVATION_BALANCE
deposit = prepare_state_and_deposit(spec, state, validator_index, amount)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index, effective=False)
@with_eip7251_and_later
@spec_state_test
def test_top_up__max_effective_balance(spec, state):
validator_index = 0
amount = spec.MAX_EFFECTIVE_BALANCE // 4
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
state.balances[validator_index] = spec.MAX_EFFECTIVE_BALANCE
state.validators[validator_index].effective_balance = spec.MAX_EFFECTIVE_BALANCE
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
assert state.validators[validator_index].effective_balance == spec.MAX_EFFECTIVE_BALANCE
@with_eip7251_and_later
@spec_state_test
def test_top_up__less_effective_balance(spec, state):
validator_index = 0
amount = spec.MAX_EFFECTIVE_BALANCE // 4
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
initial_balance = spec.MAX_EFFECTIVE_BALANCE - 1000
initial_effective_balance = spec.MAX_EFFECTIVE_BALANCE - spec.EFFECTIVE_BALANCE_INCREMENT
state.balances[validator_index] = initial_balance
state.validators[validator_index].effective_balance = initial_effective_balance
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
# unchanged effective balance
assert state.validators[validator_index].effective_balance == initial_effective_balance
@with_eip7251_and_later
@spec_state_test
def test_top_up__zero_balance(spec, state):
validator_index = 0
amount = spec.MAX_EFFECTIVE_BALANCE // 4
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
initial_balance = 0
initial_effective_balance = 0
state.balances[validator_index] = initial_balance
state.validators[validator_index].effective_balance = initial_effective_balance
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
# unchanged effective balance
assert state.validators[validator_index].effective_balance == initial_effective_balance
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_incorrect_sig_top_up(spec, state):
validator_index = 0
amount = spec.MAX_EFFECTIVE_BALANCE // 4
deposit = prepare_state_and_deposit(spec, state, validator_index, amount)
# invalid signatures, in top-ups, are allowed!
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
def test_incorrect_withdrawal_credentials_top_up(spec, state):
validator_index = 0
amount = spec.MAX_EFFECTIVE_BALANCE // 4
withdrawal_credentials = spec.BLS_WITHDRAWAL_PREFIX + spec.hash(b"junk")[1:]
deposit = prepare_state_and_deposit(
spec,
state,
validator_index,
amount,
withdrawal_credentials=withdrawal_credentials
)
# inconsistent withdrawal credentials, in top-ups, are allowed!
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
def test_invalid_wrong_deposit_for_deposit_count(spec, state):
deposit_data_leaves = [spec.DepositData() for _ in range(len(state.validators))]
# build root for deposit_1
index_1 = len(deposit_data_leaves)
pubkey_1 = pubkeys[index_1]
privkey_1 = privkeys[index_1]
_, _, deposit_data_leaves = build_deposit(
spec,
deposit_data_leaves,
pubkey_1,
privkey_1,
spec.MAX_EFFECTIVE_BALANCE,
withdrawal_credentials=b'\x00' * 32,
signed=True,
)
deposit_count_1 = len(deposit_data_leaves)
# build root for deposit_2
index_2 = len(deposit_data_leaves)
pubkey_2 = pubkeys[index_2]
privkey_2 = privkeys[index_2]
deposit_2, root_2, deposit_data_leaves = build_deposit(
spec,
deposit_data_leaves,
pubkey_2,
privkey_2,
spec.MAX_EFFECTIVE_BALANCE,
withdrawal_credentials=b'\x00' * 32,
signed=True,
)
# state has root for deposit_2 but is at deposit_count for deposit_1
state.eth1_data.deposit_root = root_2
state.eth1_data.deposit_count = deposit_count_1
yield from run_deposit_processing_eip7251(spec, state, deposit_2, index_2, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_bad_merkle_proof(spec, state):
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE
deposit = prepare_state_and_deposit(spec, state, validator_index, amount)
# mess up merkle branch
deposit.proof[5] = spec.Bytes32()
sign_deposit_data(spec, deposit.data, privkeys[validator_index])
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_key_validate_invalid_subgroup(spec, state):
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE
# An all-zero pubkey would not pass `bls.KeyValidate`, but `process_deposit` would not throw an exception.
pubkey = b'\x00' * 48
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, pubkey=pubkey, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
def test_key_validate_invalid_decompression(spec, state):
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE
# `deserialization_fails_infinity_with_true_b_flag` BLS G1 deserialization test case.
# This pubkey would not pass `bls.KeyValidate`, but `process_deposit` would not throw an exception.
pubkey_hex = 'c01000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000'
pubkey = bytes.fromhex(pubkey_hex)
deposit = prepare_state_and_deposit(spec, state, validator_index, amount, pubkey=pubkey, signed=True)
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index)
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_ineffective_deposit_with_bad_fork_version(spec, state):
yield from run_deposit_processing_eip7251_with_specific_fork_version(
spec,
state,
fork_version=spec.Version('0xAaBbCcDd'),
effective=False,
)

View File

@ -0,0 +1,213 @@
from eth2spec.test.context import (
spec_state_test,
expect_assertion_error,
with_eip7251_and_later,
with_presets,
)
from eth2spec.test.helpers.constants import MINIMAL
from eth2spec.test.helpers.state import (
get_validator_index_by_pubkey,
)
from eth2spec.test.helpers.withdrawals import (
set_eth1_withdrawal_credential_with_balance,
)
# The only failing test from the capella process_withdrawals suite is
# test_success_excess_balance_but_no_max_effective_balance.
# The tests below are modified from EIP-7002 and only exercise EL-triggered exits, not partial withdrawals.
@with_eip7251_and_later
@spec_state_test
def test_basic_exit(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_pubkey = state.validators[validator_index].pubkey
address = b'\x22' * 20
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index, address=address)
execution_layer_withdraw_request = spec.ExecutionLayerWithdrawRequest(
source_address=address,
validator_pubkey=validator_pubkey,
amount=0,
)
yield from run_execution_layer_withdraw_request_processing(spec, state, execution_layer_withdraw_request)
@with_eip7251_and_later
@spec_state_test
def test_incorrect_source_address(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_pubkey = state.validators[validator_index].pubkey
address = b'\x22' * 20
incorrect_address = b'\x33' * 20
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index, address=address)
execution_layer_withdraw_request = spec.ExecutionLayerWithdrawRequest(
source_address=incorrect_address,
validator_pubkey=validator_pubkey,
amount=0,
)
yield from run_execution_layer_withdraw_request_processing(
spec, state, execution_layer_withdraw_request, success=False
)
@with_eip7251_and_later
@spec_state_test
def test_incorrect_withdrawal_credential_prefix(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_pubkey = state.validators[validator_index].pubkey
address = b'\x22' * 20
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index, address=address)
# Set incorrect prefix
state.validators[validator_index].withdrawal_credentials = (
spec.BLS_WITHDRAWAL_PREFIX
+ state.validators[validator_index].withdrawal_credentials[1:]
)
execution_layer_withdraw_request = spec.ExecutionLayerWithdrawRequest(
source_address=address,
validator_pubkey=validator_pubkey,
amount=0,
)
yield from run_execution_layer_withdraw_request_processing(
spec, state, execution_layer_withdraw_request, success=False
)
@with_eip7251_and_later
@spec_state_test
def test_on_exit_initiated_validator(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_pubkey = state.validators[validator_index].pubkey
address = b'\x22' * 20
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index, address=address)
# Initiate exit earlier
spec.initiate_validator_exit(state, validator_index)
execution_layer_withdraw_request = spec.ExecutionLayerWithdrawRequest(
source_address=address,
validator_pubkey=validator_pubkey,
amount=0,
)
yield from run_execution_layer_withdraw_request_processing(
spec, state, execution_layer_withdraw_request, success=False
)
@with_eip7251_and_later
@spec_state_test
def test_activation_epoch_less_than_shard_committee_period(spec, state):
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_pubkey = state.validators[validator_index].pubkey
address = b'\x22' * 20
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index, address=address)
execution_layer_withdraw_request = spec.ExecutionLayerWithdrawRequest(
source_address=address,
validator_pubkey=validator_pubkey,
amount=0,
)
assert spec.get_current_epoch(state) < (
state.validators[validator_index].activation_epoch + spec.config.SHARD_COMMITTEE_PERIOD
)
yield from run_execution_layer_withdraw_request_processing(
spec, state, execution_layer_withdraw_request, success=False
)
# Partial withdrawals tests
@with_eip7251_and_later
@spec_state_test
@with_presets([MINIMAL])
def test_partial_withdrawal_queue_full(spec, state):
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
validator_pubkey = state.validators[validator_index].pubkey
address = b'\x22' * 20
set_eth1_withdrawal_credential_with_balance(spec, state, validator_index, address=address)
execution_layer_withdraw_request = spec.ExecutionLayerWithdrawRequest(
source_address=address,
validator_pubkey=validator_pubkey,
amount=10 ** 9,
)
partial_withdrawal = spec.PendingPartialWithdrawal(index=0, amount=1, withdrawable_epoch=current_epoch)
state.pending_partial_withdrawals = [partial_withdrawal] * spec.PENDING_PARTIAL_WITHDRAWALS_LIMIT
yield from run_execution_layer_withdraw_request_processing(
spec, state, execution_layer_withdraw_request, success=False
)
#
# Run processing
#
def run_execution_layer_withdraw_request_processing(
spec, state, execution_layer_withdraw_request, valid=True, success=True
):
"""
Run ``process_execution_layer_withdraw_request``, yielding:
- pre-state ('pre')
- execution_layer_withdraw_request ('execution_layer_withdraw_request')
- post-state ('post').
If ``valid == False``, run expecting ``AssertionError``
If ``success == False``, the request is processed but does not take effect (no exit initiated, no withdrawal queued)
"""
validator_index = get_validator_index_by_pubkey(state, execution_layer_withdraw_request.validator_pubkey)
yield 'pre', state
yield 'execution_layer_withdraw_request', execution_layer_withdraw_request
if not valid:
expect_assertion_error(
lambda: spec.process_execution_layer_withdraw_request(state, execution_layer_withdraw_request))
yield 'post', None
return
pre_exit_epoch = state.validators[validator_index].exit_epoch
pre_pending_partial_withdrawals = state.pending_partial_withdrawals
pre_balance = state.balances[validator_index]
spec.process_execution_layer_withdraw_request(state, execution_layer_withdraw_request)
yield 'post', state
if execution_layer_withdraw_request.amount == 0:
if success:
assert pre_exit_epoch == spec.FAR_FUTURE_EPOCH
assert state.validators[validator_index].exit_epoch < spec.FAR_FUTURE_EPOCH
else:
assert state.validators[validator_index].exit_epoch == pre_exit_epoch
else:
if success:
assert state.validators[validator_index].exit_epoch == spec.FAR_FUTURE_EPOCH
assert state.balances[validator_index] == pre_balance
post_length = len(state.pending_partial_withdrawals)
assert post_length == len(pre_pending_partial_withdrawals) + 1
assert post_length < spec.PENDING_PARTIAL_WITHDRAWALS_LIMIT
assert state.pending_partial_withdrawals[post_length - 1].index == validator_index
else:
assert state.pending_partial_withdrawals == pre_pending_partial_withdrawals
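# Illustrative usage of the runner above, mirroring the tests in this file (the test-vector
# framework consumes the yielded 'pre' / 'execution_layer_withdraw_request' / 'post' items):
#
#     # processing is expected to raise AssertionError
#     yield from run_execution_layer_withdraw_request_processing(spec, state, request, valid=False)
#     # processing runs, but no exit is initiated and no partial withdrawal is queued
#     yield from run_execution_layer_withdraw_request_processing(spec, state, request, success=False)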

View File

@ -0,0 +1,441 @@
from eth2spec.test.helpers.constants import (MINIMAL, MAINNET)
from eth2spec.test.context import (
spec_state_test,
with_eip7251_and_later,
with_presets,
always_bls,
spec_test, single_phase,
with_custom_state,
scaled_churn_balances_min_churn_limit,
)
from eth2spec.test.helpers.keys import pubkey_to_privkey
from eth2spec.test.helpers.voluntary_exits import (
run_voluntary_exit_processing,
sign_voluntary_exit,
)
# ********************
# * EXIT QUEUE TESTS *
# ********************
@with_eip7251_and_later
@spec_state_test
def test_min_balance_exit(spec, state):
# This state has 64 validators each with 32 ETH
expected_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
churn_limit = spec.get_activation_exit_churn_limit(state)
# Set the balance to consume equal to churn limit
state.exit_balance_to_consume = churn_limit
yield "pre", state
# Exit a single validator; it fits within the churn limit
spec.initiate_validator_exit(state, 0)
yield "post", state
# Check exit queue churn is set
assert state.exit_balance_to_consume == churn_limit - spec.MIN_ACTIVATION_BALANCE
# Check exit epoch
assert state.validators[0].exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@spec_state_test
def test_min_balance_exits_up_to_churn(spec, state):
# This state has 64 validators each with 32 ETH
single_validator_balance = spec.MIN_ACTIVATION_BALANCE
expected_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
churn_limit = spec.get_activation_exit_churn_limit(state)
# Set the balance to consume equal to churn limit
state.exit_balance_to_consume = churn_limit
yield "pre", state
# Exit validators, all of which fit within the churn limit
for i in range(churn_limit // spec.MIN_ACTIVATION_BALANCE):
validator_index = i
spec.initiate_validator_exit(state, validator_index)
yield f"post{i}", state
# Check exit queue churn is set
assert state.exit_balance_to_consume == churn_limit - single_validator_balance * (i + 1)
# Check exit epoch
assert state.validators[validator_index].exit_epoch == expected_exit_epoch
yield "post", state
@with_eip7251_and_later
@spec_state_test
def test_min_balance_exits_above_churn(spec, state):
# This state has 64 validators each with 32 ETH
single_validator_balance = spec.MIN_ACTIVATION_BALANCE
expected_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
churn_limit = spec.get_activation_exit_churn_limit(state)
# Set the balance to consume equal to churn limit
state.exit_balance_to_consume = churn_limit
yield "pre", state
# Exit validators, all of which fit within the churn limit
for i in range(churn_limit // spec.MIN_ACTIVATION_BALANCE):
validator_index = i
spec.initiate_validator_exit(state, validator_index)
# Check exit queue churn is set
assert state.exit_balance_to_consume == churn_limit - single_validator_balance * (i + 1)
# Check exit epoch
assert state.validators[validator_index].exit_epoch == expected_exit_epoch
# Exit balance has been fully consumed
assert state.exit_balance_to_consume == 0
# Exit one additional validator; it doesn't fit in the churn limit, so its
# exit epoch is incremented
validator_index = churn_limit // spec.MIN_ACTIVATION_BALANCE
spec.initiate_validator_exit(state, validator_index)
yield "post", state
# Check exit epoch
assert state.validators[validator_index].exit_epoch == expected_exit_epoch + 1
# Check exit balance to consume is set
assert state.exit_balance_to_consume == churn_limit - single_validator_balance
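# Worked accounting for the rollover above: churn_limit // MIN_ACTIVATION_BALANCE exits of
# 32 ETH consume the first available exit epoch exactly (exit_balance_to_consume hits 0), so
# the next exit lands one epoch later and that epoch's remaining budget is
# churn_limit - MIN_ACTIVATION_BALANCE.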
# @with_eip7251_and_later
# @spec_state_test
# def test_exit_balance_to_consume_large_validator(spec, state):
# # Set 0th validator effective balance to 2048 ETH
# state.validators[0].effective_balance = spec.MAX_EFFECTIVE_BALANCE_EIP7251
# churn_limit = spec.get_validator_churn_limit(state)
# expected_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
# expected_exit_epoch += spec.MAX_EFFECTIVE_BALANCE_EIP7251 // churn_limit
# validator_index = 0
# spec.initiate_validator_exit(state, validator_index)
# # Check exit epoch
# assert state.validators[validator_index].exit_epoch == expected_exit_epoch
# # Check exit_balance_to_consume
# assert state.exit_balance_to_consume == churn_limit - (spec.MAX_EFFECTIVE_BALANCE_EIP7251 % churn_limit)
# # Check earliest_exit_epoch
# assert state.earliest_exit_epoch == expected_exit_epoch
@with_eip7251_and_later
@spec_state_test
@with_presets([MAINNET], "With CHURN_LIMIT_QUOTIENT=32, can't change validator balance without changing churn_limit")
def test_max_balance_exit(spec, state):
churn_limit = spec.get_activation_exit_churn_limit(state)
assert churn_limit == spec.MIN_ACTIVATION_BALANCE * spec.config.MIN_PER_EPOCH_CHURN_LIMIT
# Set 0th validator effective balance to 2048 ETH
state.validators[0].effective_balance = spec.MAX_EFFECTIVE_BALANCE_EIP7251
yield 'pre', state
expected_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
# Validator consumes exit churn for 16 epochs, exits at the 17th one
expected_exit_epoch += (spec.MAX_EFFECTIVE_BALANCE_EIP7251 // churn_limit)
validator_index = 0
spec.initiate_validator_exit(state, validator_index)
yield 'post', state
# Check exit epoch
assert state.validators[validator_index].exit_epoch == expected_exit_epoch
# Check exit_balance_to_consume
assert state.exit_balance_to_consume == churn_limit
# Check earliest_exit_epoch
assert state.earliest_exit_epoch == expected_exit_epoch
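# Worked numbers on MAINNET: churn_limit == MIN_ACTIVATION_BALANCE * MIN_PER_EPOCH_CHURN_LIMIT
# == 128 ETH, so the 2048 ETH (MAX_EFFECTIVE_BALANCE_EIP7251) exit consumes 2048 / 128 == 16
# whole epochs of churn. Because it divides evenly, exit_balance_to_consume is left at the
# full churn limit for the epoch in which the validator finally exits.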
@with_eip7251_and_later
@spec_state_test
@with_presets([MAINNET], "With CHURN_LIMIT_QUOTIENT=32, can't change validator balance without changing churn_limit")
def test_exit_with_balance_equal_to_churn_limit(spec, state):
churn_limit = spec.get_activation_exit_churn_limit(state)
# Set 0th validator effective balance to churn_limit
state.validators[0].effective_balance = churn_limit
yield 'pre', state
validator_index = 0
spec.initiate_validator_exit(state, validator_index)
yield 'post', state
# Validator consumes churn limit fully in the current epoch
assert (state.validators[validator_index].exit_epoch ==
spec.compute_activation_exit_epoch(spec.get_current_epoch(state)))
# Check exit_balance_to_consume
assert state.exit_balance_to_consume == 0
# Check earliest_exit_epoch
assert state.earliest_exit_epoch == state.validators[validator_index].exit_epoch
@with_eip7251_and_later
@spec_state_test
@with_presets([MAINNET], "With CHURN_LIMIT_QUOTIENT=32, can't change validator balance without changing churn_limit")
def test_exit_churn_limit_balance_existing_churn(spec, state):
cl = spec.get_activation_exit_churn_limit(state)
# set exit epoch to the first available one and set exit balance to consume to full churn limit
state.earliest_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
state.exit_balance_to_consume = cl
# consume some churn in exit epoch
state.exit_balance_to_consume -= 1000000000
# Set 0th validator effective balance to the churn limit
state.validators[0].effective_balance = cl
yield 'pre', state
# The existing 1 ETH of consumed churn will push the exit out by an extra epoch
expected_exit_epoch = state.earliest_exit_epoch + 1
validator_index = 0
spec.initiate_validator_exit(state, validator_index)
yield 'post', state
# Check exit epoch
assert state.validators[validator_index].exit_epoch == expected_exit_epoch
# Check balance consumed in exit epoch is the remainder 1 ETH
assert state.exit_balance_to_consume == cl - 1000000000
# check earliest exit epoch
assert expected_exit_epoch == state.earliest_exit_epoch
@with_eip7251_and_later
@spec_state_test
@with_presets([MAINNET], "With CHURN_LIMIT_QUOTIENT=32, can't change validator balance without changing churn_limit")
def test_multi_epoch_exit_existing_churn(spec, state):
cl = spec.get_activation_exit_churn_limit(state)
# set exit epoch to the first available one and set exit balance to consume to full churn limit
state.earliest_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state))
state.exit_balance_to_consume = cl
# consume some churn in exit epoch
state.exit_balance_to_consume -= 1000000000
# Set 0th validator effective balance to 2x the churn limit
state.validators[0].effective_balance = 2 * cl
yield 'pre', state
# Two extra epochs will be necessary
expected_exit_epoch = spec.compute_activation_exit_epoch(spec.get_current_epoch(state)) + 2
validator_index = 0
spec.initiate_validator_exit(state, validator_index)
yield 'post', state
# Check exit epoch
assert state.validators[validator_index].exit_epoch == expected_exit_epoch
# Check balance consumed in exit epoch is the remainder 1 ETH
assert state.exit_balance_to_consume == cl - 1000000000
# check earliest exit epoch
assert expected_exit_epoch == state.earliest_exit_epoch
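# Worked Gwei accounting for the assertions above: the pre-consumed 1 ETH leaves
# churn_limit - 1 ETH in the first available exit epoch, so an exit of 2 * churn_limit uses
# that remainder, a full churn_limit in the next epoch, and 1 ETH in the epoch after that.
# The validator therefore exits two epochs past the base epoch, with churn_limit - 1 ETH of
# budget left in its exit epoch.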
# Repurposed from the phase0 voluntary exit tests; the phase0 versions should be disabled
def run_test_success_exit_queue(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
churn_limit = spec.get_activation_exit_churn_limit(state)
# exit as many validators as fit within one epoch of exit churn
max_exits = churn_limit // spec.MIN_ACTIVATION_BALANCE
initial_indices = spec.get_active_validator_indices(state, current_epoch)[:max_exits]
# Prepare a bunch of exits, based on the current state
exit_queue = []
for index in initial_indices:
privkey = pubkey_to_privkey[state.validators[index].pubkey]
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=index), privkey)
exit_queue.append(signed_voluntary_exit)
# Now run all the exits
for voluntary_exit in exit_queue:
# the function yields data, but we are just interested in running it here, ignore yields.
for _ in run_voluntary_exit_processing(spec, state, voluntary_exit):
continue
# exit an additional validator
validator_index = spec.get_active_validator_indices(state, current_epoch)[-1]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=validator_index), privkey)
# This is the interesting part of the test: on a pre-state with a full exit queue,
# when processing an additional exit, it results in an exit in a later epoch
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit)
for index in initial_indices:
assert (
state.validators[validator_index].exit_epoch ==
state.validators[index].exit_epoch + 1
)
assert state.earliest_exit_epoch == state.validators[validator_index].exit_epoch
consumed_churn = spec.MIN_ACTIVATION_BALANCE * (max_exits + 1)
assert state.exit_balance_to_consume == churn_limit - (consumed_churn % churn_limit)
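# Worked accounting: max_exits == churn_limit // MIN_ACTIVATION_BALANCE exits fit into the
# first available exit epoch, so the additional exit is scheduled one epoch later. The final
# assertion then checks that only the spill-over portion of
# consumed_churn == MIN_ACTIVATION_BALANCE * (max_exits + 1) is charged against that later
# epoch's budget (the `% churn_limit` strips the epochs that were already filled).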
@with_eip7251_and_later
@spec_state_test
def test_success_exit_queue__min_churn(spec, state):
yield from run_test_success_exit_queue(spec, state)
@with_eip7251_and_later
@with_presets([MINIMAL],
reason="mainnet config leads to larger validator set than limit of public/private keys pre-generated")
@spec_test
@with_custom_state(balances_fn=scaled_churn_balances_min_churn_limit,
threshold_fn=lambda spec: spec.config.EJECTION_BALANCE)
@single_phase
def test_success_exit_queue__scaled_churn(spec, state):
churn_limit = spec.get_activation_exit_churn_limit(state)
assert churn_limit > spec.config.MIN_PER_EPOCH_CHURN_LIMIT
yield from run_test_success_exit_queue(spec, state)
# No modifications were made beyond this point; these tests could remain in phase0 as is
@with_eip7251_and_later
@spec_state_test
def test_basic(spec, state):
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=validator_index), privkey)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit)
assert state.validators[validator_index].exit_epoch == spec.compute_activation_exit_epoch(current_epoch)
@with_eip7251_and_later
@spec_state_test
@always_bls
def test_invalid_incorrect_signature(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
voluntary_exit = spec.VoluntaryExit(
epoch=current_epoch,
validator_index=validator_index,
)
signed_voluntary_exit = sign_voluntary_exit(spec, state, voluntary_exit, 12345)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_default_exit_epoch_subsequent_exit(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=validator_index), privkey)
# Exit one validator prior to this new one
exited_index = spec.get_active_validator_indices(state, current_epoch)[-1]
state.validators[exited_index].exit_epoch = current_epoch - 1
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit)
assert state.validators[validator_index].exit_epoch == spec.compute_activation_exit_epoch(current_epoch)
@with_eip7251_and_later
@spec_state_test
def test_invalid_validator_exit_in_future(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
voluntary_exit = spec.VoluntaryExit(
epoch=current_epoch + 1,
validator_index=validator_index,
)
signed_voluntary_exit = sign_voluntary_exit(spec, state, voluntary_exit, privkey)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_validator_incorrect_validator_index(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
voluntary_exit = spec.VoluntaryExit(
epoch=current_epoch,
validator_index=len(state.validators),
)
signed_voluntary_exit = sign_voluntary_exit(spec, state, voluntary_exit, privkey)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_validator_not_active(spec, state):
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
state.validators[validator_index].activation_epoch = spec.FAR_FUTURE_EPOCH
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=validator_index), privkey)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_validator_already_exited(spec, state):
# move state forward SHARD_COMMITTEE_PERIOD epochs to allow the validator to exit
state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
# but the validator has already initiated an exit
state.validators[validator_index].exit_epoch = current_epoch + 2
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=validator_index), privkey)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit, valid=False)
@with_eip7251_and_later
@spec_state_test
def test_invalid_validator_not_active_long_enough(spec, state):
current_epoch = spec.get_current_epoch(state)
validator_index = spec.get_active_validator_indices(state, current_epoch)[0]
privkey = pubkey_to_privkey[state.validators[validator_index].pubkey]
signed_voluntary_exit = sign_voluntary_exit(
spec, state, spec.VoluntaryExit(epoch=current_epoch, validator_index=validator_index), privkey)
assert (
current_epoch - state.validators[validator_index].activation_epoch <
spec.config.SHARD_COMMITTEE_PERIOD
)
yield from run_voluntary_exit_processing(spec, state, signed_voluntary_exit, valid=False)

View File

@ -0,0 +1,99 @@
from eth2spec.test.helpers.epoch_processing import run_epoch_processing_with
from eth2spec.test.context import (
spec_state_test,
with_eip7251_and_later,
)
@with_eip7251_and_later
@spec_state_test
def test_pending_deposit_min_activation_balance(spec, state):
index = 0
amount = spec.MIN_ACTIVATION_BALANCE
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=index, amount=amount))
pre_balance = state.balances[index]
yield from run_epoch_processing_with(spec, state, 'process_pending_balance_deposits')
assert state.balances[index] == pre_balance + amount
# No leftover deposit balance to consume when there are no deposits left to process
assert state.deposit_balance_to_consume == 0
assert state.pending_balance_deposits == []
@with_eip7251_and_later
@spec_state_test
def test_pending_deposit_balance_equal_churn(spec, state):
index = 0
amount = spec.get_activation_exit_churn_limit(state)
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=index, amount=amount))
pre_balance = state.balances[index]
yield from run_epoch_processing_with(spec, state, 'process_pending_balance_deposits')
assert state.balances[index] == pre_balance + amount
assert state.deposit_balance_to_consume == 0
assert state.pending_balance_deposits == []
@with_eip7251_and_later
@spec_state_test
def test_pending_deposit_balance_above_churn(spec, state):
index = 0
amount = spec.get_activation_exit_churn_limit(state) + 1
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=index, amount=amount))
pre_balance = state.balances[index]
yield from run_epoch_processing_with(spec, state, 'process_pending_balance_deposits')
# deposit was above churn, balance hasn't changed
assert state.balances[index] == pre_balance
# deposit balance to consume is the full churn limit
assert state.deposit_balance_to_consume == spec.get_activation_exit_churn_limit(state)
# deposit is still in the queue
assert state.pending_balance_deposits == [spec.PendingBalanceDeposit(index=index, amount=amount)]
@with_eip7251_and_later
@spec_state_test
def test_pending_deposit_preexisting_churn(spec, state):
index = 0
amount = 10 ** 9 + 1
state.deposit_balance_to_consume = 2 * amount
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=index, amount=amount))
pre_balance = state.balances[index]
yield from run_epoch_processing_with(spec, state, 'process_pending_balance_deposits')
# balance was deposited correctly
assert state.balances[index] == pre_balance + amount
# No leftover deposit balance to consume when there are no deposits left to process
assert state.deposit_balance_to_consume == 0
# queue emptied
assert state.pending_balance_deposits == []
@with_eip7251_and_later
@spec_state_test
def test_multiple_pending_deposits_below_churn(spec, state):
amount = 10**9
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=0, amount=amount))
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=1, amount=amount))
pre_balances = state.balances
yield from run_epoch_processing_with(spec, state, 'process_pending_balance_deposits')
for i in [0, 1]:
assert state.balances[i] == pre_balances[i] + amount
# No leftover deposit balance to consume when there are no deposits left to process
assert state.deposit_balance_to_consume == 0
assert state.pending_balance_deposits == []
@with_eip7251_and_later
@spec_state_test
def test_multiple_pending_deposits_above_churn(spec, state):
# set third deposit to be over the churn
amount = (spec.get_activation_exit_churn_limit(state) // 3) + 1
for i in [0, 1, 2]:
state.pending_balance_deposits.append(spec.PendingBalanceDeposit(index=i, amount=amount))
pre_balances = state.balances
yield from run_epoch_processing_with(spec, state, 'process_pending_balance_deposits')
# The first two deposits are processed; the third is not because it exceeds the remaining churn
for i in [0, 1]:
assert state.balances[i] == pre_balances[i] + amount
assert state.balances[2] == pre_balances[2]
# Only first two subtract from the deposit balance to consume
assert state.deposit_balance_to_consume == spec.get_activation_exit_churn_limit(state) - 2 * amount
# third deposit is still in the queue
assert state.pending_balance_deposits == [spec.PendingBalanceDeposit(index=2, amount=amount)]
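# Worked arithmetic for the three deposits above: each amount is churn // 3 + 1 Gwei, so any
# two of them still fit under the per-epoch activation/exit churn while all three together
# exceed it. The first two are therefore applied, the third stays queued, and the leftover
# budget is churn - 2 * amount, which is exactly what the assertions check.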

View File

@ -0,0 +1,57 @@
from eth2spec.test.helpers.epoch_processing import run_epoch_processing_with
from eth2spec.test.context import (
spec_state_test,
with_eip7251_and_later,
)
# ***********************
# * CONSOLIDATION TESTS *
# ***********************
@with_eip7251_and_later
@spec_state_test
def test_basic_pending_consolidation(spec, state):
current_epoch = spec.get_current_epoch(state)
source_index = spec.get_active_validator_indices(state, current_epoch)[0]
target_index = spec.get_active_validator_indices(state, current_epoch)[1]
# append pending consolidation
state.pending_consolidations.append(spec.PendingConsolidation(source_index=source_index, target_index=target_index))
# Set withdrawable epoch to current epoch to allow processing
state.validators[source_index].withdrawable_epoch = spec.get_current_epoch(state)
yield from run_epoch_processing_with(spec, state, "process_pending_consolidations")
assert state.balances[target_index] == 2 * spec.MIN_ACTIVATION_BALANCE
assert state.balances[source_index] == 0
@with_eip7251_and_later
@spec_state_test
def test_skip_consolidation_when_source_slashed(spec, state):
current_epoch = spec.get_current_epoch(state)
source0_index = spec.get_active_validator_indices(state, current_epoch)[0]
target0_index = spec.get_active_validator_indices(state, current_epoch)[1]
source1_index = spec.get_active_validator_indices(state, current_epoch)[2]
target1_index = spec.get_active_validator_indices(state, current_epoch)[3]
# append pending consolidation
state.pending_consolidations.append(
spec.PendingConsolidation(source_index=source0_index, target_index=target0_index)
)
state.pending_consolidations.append(
spec.PendingConsolidation(source_index=source1_index, target_index=target1_index)
)
# Set withdrawable epoch of sources to current epoch to allow processing
state.validators[source0_index].withdrawable_epoch = spec.get_current_epoch(state)
state.validators[source1_index].withdrawable_epoch = spec.get_current_epoch(state)
# set first source as slashed
state.validators[source0_index].slashed = True
yield from run_epoch_processing_with(spec, state, "process_pending_consolidations")
# first pending consolidation should not be processed
assert state.balances[target0_index] == spec.MIN_ACTIVATION_BALANCE
assert state.balances[source0_index] == spec.MIN_ACTIVATION_BALANCE
# second pending consolidation should be processed: first one is skipped and doesn't block the queue
assert state.balances[target1_index] == 2 * spec.MIN_ACTIVATION_BALANCE
assert state.balances[source1_index] == 0
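# A minimal sketch of the queue semantics the two consolidation tests above rely
# on: a slashed source is skipped without blocking later entries, a source that
# is not yet withdrawable stops processing, and otherwise the source's balance
# moves to the target. Plain Python stand-ins, not the spec's SSZ containers or
# balance helpers.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ValidatorSketch:
    withdrawable_epoch: int
    slashed: bool = False


@dataclass
class PendingConsolidationSketch:
    source_index: int
    target_index: int


@dataclass
class ConsolidationQueueState:
    validators: List[ValidatorSketch]
    balances: List[int]
    pending_consolidations: List[PendingConsolidationSketch] = field(default_factory=list)


def process_pending_consolidations_sketch(state: ConsolidationQueueState, current_epoch: int) -> None:
    processed = 0
    for pc in state.pending_consolidations:
        source = state.validators[pc.source_index]
        if source.slashed:
            processed += 1  # skipped, but does not block the rest of the queue
            continue
        if source.withdrawable_epoch > current_epoch:
            break  # not withdrawable yet: retry from here in a later epoch
        state.balances[pc.target_index] += state.balances[pc.source_index]
        state.balances[pc.source_index] = 0
        processed += 1
    state.pending_consolidations = state.pending_consolidations[processed:]


# Mirrors test_skip_consolidation_when_source_slashed: the slashed source keeps
# its balance while the second consolidation still completes.
_s = ConsolidationQueueState(
    validators=[ValidatorSketch(0, slashed=True), ValidatorSketch(0), ValidatorSketch(0), ValidatorSketch(0)],
    balances=[32, 32, 32, 32],
    pending_consolidations=[PendingConsolidationSketch(0, 1), PendingConsolidationSketch(2, 3)],
)
process_pending_consolidations_sketch(_s, current_epoch=0)
assert _s.balances == [32, 32, 0, 64]
assert _s.pending_consolidations == []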

View File

@ -0,0 +1,12 @@
from eth2spec.test.context import (
single_phase,
spec_test,
with_eip7251_and_later,
)
@with_eip7251_and_later
@spec_test
@single_phase
def test_withdrawals(spec):
assert spec.MAX_PARTIAL_WITHDRAWALS_PER_PAYLOAD < spec.MAX_WITHDRAWALS_PER_PAYLOAD

View File

@ -2,6 +2,7 @@ import random
from eth2spec.test.context import (
spec_test,
single_phase,
expect_assertion_error,
with_eip7594_and_later,
)
from eth2spec.test.helpers.sharding import (
@ -101,3 +102,19 @@ def test_recover_polynomial(spec):
# Now flatten the cells and check that they match the entirety of the recovered data
flattened_cells = [x for xs in cells for x in xs]
assert flattened_cells == recovered_data
@with_eip7594_and_later
@spec_test
@single_phase
def test_multiply_polynomial_degree_overflow(spec):
rng = random.Random(5566)
# Perform a legitimate-but-maxed-out polynomial multiplication
poly1_coeff = [rng.randint(0, BLS_MODULUS - 1) for _ in range(spec.FIELD_ELEMENTS_PER_BLOB)]
poly2_coeff = [rng.randint(0, BLS_MODULUS - 1) for _ in range(spec.FIELD_ELEMENTS_PER_BLOB)]
_ = spec.multiply_polynomialcoeff(poly1_coeff, poly2_coeff)
# Now overflow the degree by pumping the degree of one of the inputs by one
poly2_coeff = [rng.randint(0, BLS_MODULUS - 1) for _ in range(spec.FIELD_ELEMENTS_PER_BLOB + 1)]
expect_assertion_error(lambda: spec.multiply_polynomialcoeff(poly1_coeff, poly2_coeff))
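# A minimal sketch of coefficient-form polynomial multiplication with a degree
# guard, to show why the test above expects an assertion error once one input is
# a single coefficient too long. The bound and modulus here (FIELD_ELEMENTS and
# MODULUS) are illustrative stand-ins, not the spec's constants or its exact code.
FIELD_ELEMENTS = 8                 # stands in for FIELD_ELEMENTS_PER_BLOB
EXT_FIELD_ELEMENTS = 2 * FIELD_ELEMENTS
MODULUS = 2**31 - 1                # stands in for BLS_MODULUS


def multiply_polynomialcoeff_sketch(a, b):
    # The product of polynomials with len(a) and len(b) coefficients has
    # len(a) + len(b) - 1 coefficients; the guard keeps the result inside
    # the extended domain.
    assert len(a) + len(b) <= EXT_FIELD_ELEMENTS
    result = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            result[i + j] = (result[i + j] + ca * cb) % MODULUS
    return result


# Two maximal-length inputs pass; one extra coefficient trips the guard.
ok = multiply_polynomialcoeff_sketch([1] * FIELD_ELEMENTS, [1] * FIELD_ELEMENTS)
assert len(ok) == 2 * FIELD_ELEMENTS - 1
try:
    multiply_polynomialcoeff_sketch([1] * FIELD_ELEMENTS, [1] * (FIELD_ELEMENTS + 1))
    raise RuntimeError("expected the degree guard to fire")
except AssertionError:
    pass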

View File

@ -1,6 +1,6 @@
from eth2spec.test.context import (
spec_test,
single_phase,
spec_test,
with_eip7594_and_later,
)
@ -18,3 +18,7 @@ def test_invariants(spec):
assert spec.config.MAX_REQUEST_DATA_COLUMN_SIDECARS == (
spec.config.MAX_REQUEST_BLOCKS_DENEB * spec.config.NUMBER_OF_COLUMNS
)
@with_eip7594_and_later
@spec_test
@single_phase
def test_polynomical_commitments_sampling(spec):
assert spec.FIELD_ELEMENTS_PER_EXT_BLOB == 2 * spec.FIELD_ELEMENTS_PER_BLOB

View File

@ -5,7 +5,7 @@ from typing import List
from eth2spec.test.context import expect_assertion_error
from eth2spec.test.helpers.state import state_transition_and_sign_block, next_epoch, next_slot
from eth2spec.test.helpers.block import build_empty_block_for_next_slot
from eth2spec.test.helpers.forks import is_post_altair, is_post_deneb
from eth2spec.test.helpers.forks import is_post_altair, is_post_deneb, is_post_eip7549
from eth2spec.test.helpers.keys import privkeys
from eth2spec.utils import bls
from eth2spec.utils.ssz.ssz_typing import Bitlist
@ -78,7 +78,7 @@ def build_attestation_data(spec, state, slot, index, beacon_block_root=None, sha
data = spec.AttestationData(
slot=slot,
index=index,
index=0 if is_post_eip7549(spec) else index,
beacon_block_root=beacon_block_root,
source=spec.Checkpoint(epoch=source_epoch, root=source_root),
target=spec.Checkpoint(epoch=spec.compute_epoch_at_slot(slot), root=epoch_boundary_root),
@ -104,20 +104,21 @@ def get_valid_attestation(spec,
attestation_data = build_attestation_data(spec, state, slot=slot, index=index, beacon_block_root=beacon_block_root)
beacon_committee = spec.get_beacon_committee(
state,
attestation_data.slot,
attestation_data.index,
)
beacon_committee = spec.get_beacon_committee(state, slot, index)
committee_size = len(beacon_committee)
aggregation_bits = Bitlist[spec.MAX_VALIDATORS_PER_COMMITTEE](*([0] * committee_size))
attestation = spec.Attestation(
aggregation_bits=aggregation_bits,
data=attestation_data,
)
if is_post_eip7549(spec):
# will fill aggregation_bits later
attestation = spec.Attestation(data=attestation_data)
else:
committee_size = len(beacon_committee)
aggregation_bits = Bitlist[spec.MAX_VALIDATORS_PER_COMMITTEE](*([0] * committee_size))
attestation = spec.Attestation(
aggregation_bits=aggregation_bits,
data=attestation_data,
)
# fill the attestation with (optionally filtered) participants, and optionally sign it
fill_aggregate_attestation(spec, state, attestation, signed=signed, filter_participant_set=filter_participant_set)
fill_aggregate_attestation(spec, state, attestation, signed=signed,
filter_participant_set=filter_participant_set, committee_index=index)
return attestation
@ -144,11 +145,7 @@ def sign_indexed_attestation(spec, state, indexed_attestation):
def sign_attestation(spec, state, attestation):
participants = spec.get_attesting_indices(
state,
attestation.data,
attestation.aggregation_bits,
)
participants = spec.get_attesting_indices(state, attestation)
attestation.signature = sign_aggregate_attestation(spec, state, attestation.data, participants)
@ -167,7 +164,7 @@ def compute_max_inclusion_slot(spec, attestation):
return attestation.data.slot + spec.SLOTS_PER_EPOCH
def fill_aggregate_attestation(spec, state, attestation, signed=False, filter_participant_set=None):
def fill_aggregate_attestation(spec, state, attestation, committee_index, signed=False, filter_participant_set=None):
"""
`signed`: Signing is optional.
`filter_participant_set`: Optional, filters the full committee indices set (default) to a subset that participates
@ -175,15 +172,27 @@ def fill_aggregate_attestation(spec, state, attestation, signed=False, filter_pa
beacon_committee = spec.get_beacon_committee(
state,
attestation.data.slot,
attestation.data.index,
committee_index,
)
# By default, have everyone participate
participants = set(beacon_committee)
# But optionally filter the participants to a smaller amount
if filter_participant_set is not None:
participants = filter_participant_set(participants)
if is_post_eip7549(spec):
attestation.committee_bits = spec.Bitvector[spec.MAX_COMMITTEES_PER_SLOT]()
attestation.committee_bits[committee_index] = True
attestation.aggregation_bits = get_empty_eip7549_aggregation_bits(
spec, state, attestation.committee_bits, attestation.data.slot)
for i in range(len(beacon_committee)):
attestation.aggregation_bits[i] = beacon_committee[i] in participants
if is_post_eip7549(spec):
offset = get_eip7549_aggregation_bits_offset(
spec, state, attestation.data.slot, attestation.committee_bits, committee_index)
aggregation_bits_index = offset + i
attestation.aggregation_bits[aggregation_bits_index] = beacon_committee[i] in participants
else:
attestation.aggregation_bits[i] = beacon_committee[i] in participants
if signed and len(participants) > 0:
sign_attestation(spec, state, attestation)
@ -392,3 +401,34 @@ def cached_prepare_state_with_attestations(spec, state):
# Put the LRU cache result into the state view, as if we transitioned the original view
state.set_backing(_prep_state_cache_dict[key])
def get_max_attestations(spec):
if is_post_eip7549(spec):
return spec.MAX_ATTESTATIONS_EIP7549
else:
return spec.MAX_ATTESTATIONS
def get_empty_eip7549_aggregation_bits(spec, state, committee_bits, slot):
committee_indices = spec.get_committee_indices(committee_bits)
participants_count = 0
for index in committee_indices:
committee = spec.get_beacon_committee(state, slot, index)
participants_count += len(committee)
aggregation_bits = Bitlist[spec.MAX_VALIDATORS_PER_COMMITTEE * spec.MAX_COMMITTEES_PER_SLOT](
[False] * participants_count
)
return aggregation_bits
def get_eip7549_aggregation_bits_offset(spec, state, slot, committee_bits, committee_index):
committee_indices = spec.get_committee_indices(committee_bits)
assert committee_index in committee_indices
offset = 0
for i in committee_indices:
if committee_index == i:
break
committee = spec.get_beacon_committee(state, slot, i)
offset += len(committee)
return offset
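# A small worked example of the on-chain attestation layout these helpers build
# under EIP-7549: per-slot committees are selected via committee_bits and their
# members' bits are concatenated, in committee order, into one aggregation_bits
# list. The committee sizes below are made up purely for illustration.
committee_sizes = {0: 3, 1: 4}      # hypothetical sizes of committees 0 and 1
committee_bits = [True, True]       # both committees participate

total = sum(size for idx, size in committee_sizes.items() if committee_bits[idx])
aggregation_bits = [False] * total  # mirrors get_empty_eip7549_aggregation_bits
assert len(aggregation_bits) == 7


def offset_of(committee_index):
    # mirrors get_eip7549_aggregation_bits_offset: sum the sizes of the
    # participating committees that come before this one
    offset = 0
    for idx in sorted(committee_sizes):
        if not committee_bits[idx]:
            continue
        if idx == committee_index:
            return offset
        offset += committee_sizes[idx]
    raise ValueError("committee not selected in committee_bits")


assert offset_of(0) == 0
assert offset_of(1) == 3  # committee 1's bits start right after committee 0's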

View File

@ -1,4 +1,5 @@
from eth2spec.test.helpers.attestations import get_valid_attestation, sign_attestation, sign_indexed_attestation
from eth2spec.test.helpers.forks import is_post_eip7549
def get_valid_attester_slashing(spec, state, slot=None, signed_1=False, signed_2=False, filter_participant_set=None):
@ -62,3 +63,10 @@ def get_attestation_1_data(spec, att_slashing):
def get_attestation_2_data(spec, att_slashing):
return att_slashing.attestation_2.data
def get_max_attester_slashings(spec):
if is_post_eip7549(spec):
return spec.MAX_ATTESTER_SLASHINGS_EIP7549
else:
return spec.MAX_ATTESTER_SLASHINGS

View File

@ -0,0 +1,61 @@
from eth2spec.utils import bls
from eth2spec.test.context import expect_assertion_error
from eth2spec.test.helpers.keys import privkeys
def prepare_signed_consolidations(spec, state, index_pairs, fork_version=None):
def create_signed_consolidation(source_index, target_index):
consolidation = spec.Consolidation(
epoch=spec.get_current_epoch(state),
source_index=source_index,
target_index=target_index,
)
return sign_consolidation(spec, state, consolidation, privkeys[source_index], privkeys[target_index],
fork_version=fork_version)
return [create_signed_consolidation(source_index, target_index) for (source_index, target_index) in index_pairs]
def sign_consolidation(spec, state, consolidation, source_privkey, target_privkey, fork_version=None):
domain = spec.compute_domain(spec.DOMAIN_CONSOLIDATION, genesis_validators_root=state.genesis_validators_root)
signing_root = spec.compute_signing_root(consolidation, domain)
return spec.SignedConsolidation(
message=consolidation,
signature=bls.Aggregate([bls.Sign(source_privkey, signing_root), bls.Sign(target_privkey, signing_root)])
)
def run_consolidation_processing(spec, state, signed_consolidation, valid=True):
"""
Run ``process_consolidation``, yielding:
- pre-state ('pre')
- consolidation ('consolidation')
- post-state ('post').
If ``valid == False``, run expecting ``AssertionError``
"""
source_validator = state.validators[signed_consolidation.message.source_index]
target_validator = state.validators[signed_consolidation.message.target_index]
yield 'pre', state
yield 'consolidation', signed_consolidation
if not valid:
expect_assertion_error(lambda: spec.process_consolidation(state, signed_consolidation))
yield 'post', None
return
pre_exit_epoch = source_validator.exit_epoch
spec.process_consolidation(state, signed_consolidation)
yield 'post', state
assert source_validator.withdrawal_credentials[1:] == target_validator.withdrawal_credentials[1:]
assert pre_exit_epoch == spec.FAR_FUTURE_EPOCH
assert state.validators[signed_consolidation.message.source_index].exit_epoch < spec.FAR_FUTURE_EPOCH
assert state.validators[signed_consolidation.message.source_index].exit_epoch == state.earliest_consolidation_epoch
assert state.pending_consolidations[len(state.pending_consolidations) - 1] == spec.PendingConsolidation(
source_index=signed_consolidation.message.source_index,
target_index=signed_consolidation.message.target_index
)
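# Why sign_consolidation aggregates two signatures: a SignedConsolidation carries
# a single BLS signature that must cover both the source and the target keys over
# the same signing root. A minimal demonstration using py_ecc directly (one of
# the BLS backends the pyspec can use); the keys and message are arbitrary values
# chosen here for illustration, not spec data.
from py_ecc.bls import G2ProofOfPossession as bls_pop

source_privkey, target_privkey = 41, 42
signing_root = b"\x01" * 32

aggregate = bls_pop.Aggregate([
    bls_pop.Sign(source_privkey, signing_root),
    bls_pop.Sign(target_privkey, signing_root),
])
assert bls_pop.FastAggregateVerify(
    [bls_pop.SkToPk(source_privkey), bls_pop.SkToPk(target_privkey)],
    signing_root,
    aggregate,
)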

View File

@ -18,7 +18,9 @@ CUSTODY_GAME = SpecForkName('custody_game')
DAS = SpecForkName('das')
EIP6110 = SpecForkName('eip6110')
EIP7002 = SpecForkName('eip7002')
EIP7549 = SpecForkName('eip7549')
WHISK = SpecForkName('whisk')
EIP7251 = SpecForkName('eip7251')
EIP7594 = SpecForkName('eip7594')
#
@ -38,6 +40,8 @@ ALL_PHASES = (
# Experimental patches
EIP6110,
EIP7002,
EIP7251,
EIP7549,
EIP7594,
)
# The forks that have light client specs
@ -59,6 +63,8 @@ PREVIOUS_FORK_OF = {
EIP6110: DENEB,
WHISK: CAPELLA,
EIP7002: CAPELLA,
EIP7549: DENEB,
EIP7251: DENEB,
EIP7594: DENEB,
}
@ -89,4 +95,4 @@ ALL_PRESETS = (MINIMAL, MAINNET)
#
# Number
#
MAX_UINT_64 = 2**64 - 1
UINT64_MAX = 2**64 - 1

View File

@ -390,3 +390,88 @@ def run_deposit_receipt_processing_with_specific_fork_version(
valid=valid,
effective=effective
)
# *******************
# *     EIP7251     *
# *******************
def run_deposit_processing_eip7251(spec, state, deposit, validator_index, valid=True, effective=True):
"""
Run ``process_deposit``, yielding:
- pre-state ('pre')
- deposit ('deposit')
- post-state ('post').
If ``valid == False``, run expecting ``AssertionError``
"""
pre_validator_count = len(state.validators)
pre_pending_deposits = len(state.pending_balance_deposits)
pre_balance = 0
pre_effective_balance = 0
is_top_up = False
# is a top-up
if validator_index < pre_validator_count:
is_top_up = True
pre_balance = get_balance(state, validator_index)
pre_effective_balance = state.validators[validator_index].effective_balance
yield 'pre', state
yield 'deposit', deposit
if not valid:
expect_assertion_error(lambda: spec.process_deposit(state, deposit))
yield 'post', None
return
spec.process_deposit(state, deposit)
yield 'post', state
if not effective or not bls.KeyValidate(deposit.data.pubkey):
assert len(state.validators) == pre_validator_count
assert len(state.balances) == pre_validator_count
else:
# no balance changes on deposit processing
assert get_balance(state, validator_index) == pre_balance
assert state.validators[validator_index].effective_balance == pre_effective_balance
if is_top_up:
assert len(state.validators) == pre_validator_count
assert len(state.balances) == pre_validator_count
else:
# new validator
assert len(state.validators) == pre_validator_count + 1
assert len(state.balances) == pre_validator_count + 1
# new correct balance deposit has been appended
assert len(state.pending_balance_deposits) == pre_pending_deposits + 1
assert state.pending_balance_deposits[pre_pending_deposits].amount == deposit.data.amount
assert state.pending_balance_deposits[pre_pending_deposits].index == validator_index
assert state.eth1_deposit_index == state.eth1_data.deposit_count
def run_deposit_processing_eip7251_with_specific_fork_version(
spec,
state,
fork_version,
valid=True,
effective=True):
validator_index = len(state.validators)
amount = spec.MAX_EFFECTIVE_BALANCE
pubkey = pubkeys[validator_index]
privkey = privkeys[validator_index]
withdrawal_credentials = spec.BLS_WITHDRAWAL_PREFIX + spec.hash(pubkey)[1:]
deposit_message = spec.DepositMessage(pubkey=pubkey, withdrawal_credentials=withdrawal_credentials, amount=amount)
domain = spec.compute_domain(domain_type=spec.DOMAIN_DEPOSIT, fork_version=fork_version)
deposit_data = spec.DepositData(
pubkey=pubkey, withdrawal_credentials=withdrawal_credentials, amount=amount,
signature=bls.Sign(privkey, spec.compute_signing_root(deposit_message, domain))
)
deposit, root, _ = deposit_from_context(spec, [deposit_data], 0)
state.eth1_deposit_index = 0
state.eth1_data.deposit_root = root
state.eth1_data.deposit_count = 1
yield from run_deposit_processing_eip7251(spec, state, deposit, validator_index, valid=valid, effective=effective)

View File

@ -1,6 +1,6 @@
from .constants import (
PHASE0, ALTAIR, BELLATRIX, CAPELLA, DENEB,
EIP6110, EIP7002, WHISK,
EIP6110, EIP7002, EIP7251, EIP7549, WHISK,
PREVIOUS_FORK_OF,
)
@ -45,6 +45,14 @@ def is_post_eip7002(spec):
return is_post_fork(spec.fork, EIP7002)
def is_post_eip7251(spec):
return is_post_fork(spec.fork, EIP7251)
def is_post_eip7549(spec):
return is_post_fork(spec.fork, EIP7549)
def is_post_whisk(spec):
return is_post_fork(spec.fork, WHISK)

View File

@ -6,7 +6,7 @@ from eth2spec.test.helpers.execution_payload import (
compute_el_header_block_hash,
)
from eth2spec.test.helpers.forks import (
is_post_altair, is_post_bellatrix, is_post_capella, is_post_eip6110, is_post_eip7002, is_post_whisk,
is_post_altair, is_post_bellatrix, is_post_capella, is_post_eip6110, is_post_eip7002, is_post_whisk, is_post_eip7251
)
from eth2spec.test.helpers.keys import pubkeys
from eth2spec.test.helpers.whisk import compute_whisk_initial_tracker_cached, compute_whisk_initial_k_commitment_cached
@ -149,4 +149,11 @@ def create_genesis_state(spec, validator_balances, activation_threshold):
for i in range(spec.WHISK_PROPOSER_TRACKERS_COUNT):
state.whisk_proposer_trackers[i] = compute_whisk_initial_tracker_cached(i % vc)
if is_post_eip7251(spec):
state.deposit_balance_to_consume = 0
state.exit_balance_to_consume = 0
state.earliest_exit_epoch = spec.GENESIS_EPOCH
state.pending_balance_deposits = []
state.pending_partial_withdrawals = []
return state

View File

@ -13,7 +13,7 @@ from eth2spec.test.helpers.sync_committee import (
)
from eth2spec.test.helpers.proposer_slashings import get_valid_proposer_slashing
from eth2spec.test.helpers.attester_slashings import get_valid_attester_slashing_by_indices
from eth2spec.test.helpers.attestations import get_valid_attestation
from eth2spec.test.helpers.attestations import get_valid_attestation, get_max_attestations
from eth2spec.test.helpers.deposits import build_deposit, deposit_from_context
from eth2spec.test.helpers.voluntary_exits import prepare_signed_exits
from eth2spec.test.helpers.bls_to_execution_changes import get_signed_address_change
@ -101,7 +101,7 @@ def get_random_attester_slashings(spec, state, rng, slashed_indices=[]):
def get_random_attestations(spec, state, rng):
num_attestations = rng.randrange(1, spec.MAX_ATTESTATIONS)
num_attestations = rng.randrange(1, get_max_attestations(spec))
attestations = [
get_valid_attestation(

View File

@ -201,7 +201,7 @@ def run_get_inclusion_delay_deltas(spec, state):
# Track proposer of earliest included attestation for the validator defined by index
earliest_attestation = min([
a for a in eligible_attestations
if index in spec.get_attesting_indices(state, a.data, a.aggregation_bits)
if index in spec.get_attesting_indices(state, a)
], key=lambda a: a.inclusion_delay)
rewarded_proposer_indices.add(earliest_attestation.proposer_index)

View File

@ -54,3 +54,11 @@ def prepare_expected_withdrawals(spec, state,
set_validator_partially_withdrawable(spec, state, index)
return fully_withdrawable_indices, partial_withdrawals_indices
def set_compounding_withdrawal_credential(spec, state, index, address=None):
if address is None:
address = b'\x11' * 20
validator = state.validators[index]
validator.withdrawal_credentials = spec.COMPOUNDING_WITHDRAWAL_PREFIX + b'\x00' * 11 + address
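# The 32-byte credential layout this helper writes: a 1-byte prefix, 11 zero
# bytes, then the 20-byte execution address. The prefix value below is an
# assumed stand-in for COMPOUNDING_WITHDRAWAL_PREFIX, used only for illustration.
COMPOUNDING_PREFIX_SKETCH = b"\x02"   # assumed value, not read from the spec
address = b"\x11" * 20
credentials = COMPOUNDING_PREFIX_SKETCH + b"\x00" * 11 + address
assert len(credentials) == 32
assert credentials[0:1] == COMPOUNDING_PREFIX_SKETCH
assert credentials[12:] == address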

View File

@ -236,11 +236,7 @@ def test_invalid_future_target_epoch(spec, state):
attestation = get_valid_attestation(spec, state)
participants = spec.get_attesting_indices(
state,
attestation.data,
attestation.aggregation_bits
)
participants = spec.get_attesting_indices(state, attestation)
attestation.data.target.epoch = spec.get_current_epoch(state) + 1 # target epoch will be too new to handle
# manually add signature for correct participants

View File

@ -15,6 +15,7 @@ from eth2spec.test.helpers.attester_slashings import (
get_valid_attester_slashing_by_indices,
get_valid_attester_slashing,
get_indexed_attestation_participants,
get_max_attester_slashings,
)
from eth2spec.test.helpers.proposer_slashings import get_valid_proposer_slashing, check_proposer_slashing_effect
from eth2spec.test.helpers.attestations import get_valid_attestation
@ -30,7 +31,11 @@ from eth2spec.test.helpers.sync_committee import (
compute_sync_committee_participant_reward_and_penalty,
)
from eth2spec.test.helpers.constants import PHASE0, MINIMAL
from eth2spec.test.helpers.forks import is_post_altair, is_post_bellatrix, is_post_capella
from eth2spec.test.helpers.forks import (
is_post_altair,
is_post_bellatrix,
is_post_capella,
)
from eth2spec.test.context import (
spec_test, spec_state_test, dump_skipping_message,
with_phases, with_all_phases, single_phase,
@ -550,7 +555,7 @@ def test_attester_slashing(spec, state):
@with_all_phases
@spec_state_test
def test_invalid_duplicate_attester_slashing_same_block(spec, state):
if spec.MAX_ATTESTER_SLASHINGS < 2:
if get_max_attester_slashings(spec) < 2:
return dump_skipping_message("Skip test if config cannot handle multiple AttesterSlashings per block")
attester_slashing = get_valid_attester_slashing(spec, state, signed_1=True, signed_2=True)
@ -578,7 +583,7 @@ def test_invalid_duplicate_attester_slashing_same_block(spec, state):
@with_all_phases
@spec_state_test
def test_multiple_attester_slashings_no_overlap(spec, state):
if spec.MAX_ATTESTER_SLASHINGS < 2:
if get_max_attester_slashings(spec) < 2:
return dump_skipping_message("Skip test if config cannot handle multiple AttesterSlashings per block")
# copy for later balance lookups.
@ -618,7 +623,7 @@ def test_multiple_attester_slashings_no_overlap(spec, state):
@with_all_phases
@spec_state_test
def test_multiple_attester_slashings_partial_overlap(spec, state):
if spec.MAX_ATTESTER_SLASHINGS < 2:
if get_max_attester_slashings(spec) < 2:
return dump_skipping_message("Skip test if config cannot handle multiple AttesterSlashings per block")
# copy for later balance lookups.

View File

@ -2,6 +2,7 @@ from eth2spec.test.context import with_all_phases, spec_state_test
from eth2spec.test.helpers.block import build_empty_block_for_next_slot
from eth2spec.test.helpers.attestations import get_valid_attestation, sign_attestation
from eth2spec.test.helpers.constants import ALL_PHASES
from eth2spec.test.helpers.forks import is_post_eip7549
from eth2spec.test.helpers.state import transition_to, state_transition_and_sign_block, next_epoch, next_slot
from eth2spec.test.helpers.fork_choice import get_genesis_forkchoice_store
@ -325,6 +326,9 @@ def test_on_attestation_invalid_attestation(spec, state):
attestation = get_valid_attestation(spec, state, slot=block.slot, signed=True)
# make invalid by using an invalid committee index
attestation.data.index = spec.MAX_COMMITTEES_PER_SLOT * spec.SLOTS_PER_EPOCH
if is_post_eip7549(spec):
attestation.committee_bits = spec.Bitvector[spec.MAX_COMMITTEES_PER_SLOT]()
else:
attestation.data.index = spec.MAX_COMMITTEES_PER_SLOT * spec.SLOTS_PER_EPOCH
run_on_attestation(spec, state, store, attestation, False)

View File

@ -0,0 +1,29 @@
import random
from math import isqrt
from eth2spec.test.context import (
spec_test,
single_phase,
with_all_phases,
)
@with_all_phases
@spec_test
@single_phase
def test_integer_squareroot(spec):
values = [0, 100, 2**64 - 2, 2**64 - 1]
for n in values:
uint64_n = spec.uint64(n)
assert spec.integer_squareroot(uint64_n) == isqrt(n)
rng = random.Random(5566)
for _ in range(10):
n = rng.randint(0, 2**64 - 1)
uint64_n = spec.uint64(n)
assert spec.integer_squareroot(uint64_n) == isqrt(n)
try:
spec.integer_squareroot(spec.uint64(2**64))
assert False
except ValueError:
pass
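# A sketch of the Newton-style iteration that integer_squareroot is typically
# implemented with (returning the largest x with x*x <= n), checked against
# math.isqrt the same way the test above does. Plain Python ints here, so the
# uint64 overflow case in the test does not arise -- that comes from the spec's
# uint64 type, not from the iteration itself.
from math import isqrt


def integer_squareroot_sketch(n: int) -> int:
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x


for n in (0, 100, 2**64 - 2, 2**64 - 1):
    assert integer_squareroot_sketch(n) == isqrt(n)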

View File

@ -2,9 +2,9 @@ from eth2spec.test.context import (
spec_state_test,
with_all_phases,
)
from eth2spec.test.helpers.constants import MAX_UINT_64
from eth2spec.test.helpers.constants import UINT64_MAX
from eth2spec.test.helpers.forks import (
is_post_altair, is_post_bellatrix,
is_post_altair, is_post_bellatrix, is_post_eip7251,
)
@ -16,9 +16,9 @@ def check_bound(value, lower_bound, upper_bound):
@with_all_phases
@spec_state_test
def test_validators(spec, state):
check_bound(spec.VALIDATOR_REGISTRY_LIMIT, 1, MAX_UINT_64)
check_bound(spec.MAX_COMMITTEES_PER_SLOT, 1, MAX_UINT_64)
check_bound(spec.TARGET_COMMITTEE_SIZE, 1, MAX_UINT_64)
check_bound(spec.VALIDATOR_REGISTRY_LIMIT, 1, UINT64_MAX)
check_bound(spec.MAX_COMMITTEES_PER_SLOT, 1, UINT64_MAX)
check_bound(spec.TARGET_COMMITTEE_SIZE, 1, UINT64_MAX)
# Note: can be less if you assume stricter bounds on the validator set based on total ETH supply
maximum_validators_per_committee = (
@ -30,24 +30,24 @@ def test_validators(spec, state):
check_bound(spec.config.MIN_PER_EPOCH_CHURN_LIMIT, 1, spec.VALIDATOR_REGISTRY_LIMIT)
check_bound(spec.config.CHURN_LIMIT_QUOTIENT, 1, spec.VALIDATOR_REGISTRY_LIMIT)
check_bound(spec.config.MIN_GENESIS_ACTIVE_VALIDATOR_COUNT, spec.TARGET_COMMITTEE_SIZE, MAX_UINT_64)
check_bound(spec.config.MIN_GENESIS_ACTIVE_VALIDATOR_COUNT, spec.TARGET_COMMITTEE_SIZE, UINT64_MAX)
@with_all_phases
@spec_state_test
def test_balances(spec, state):
assert spec.MAX_EFFECTIVE_BALANCE % spec.EFFECTIVE_BALANCE_INCREMENT == 0
check_bound(spec.MIN_DEPOSIT_AMOUNT, 1, MAX_UINT_64)
check_bound(spec.MAX_EFFECTIVE_BALANCE, spec.MIN_DEPOSIT_AMOUNT, MAX_UINT_64)
check_bound(spec.MAX_EFFECTIVE_BALANCE, spec.EFFECTIVE_BALANCE_INCREMENT, MAX_UINT_64)
check_bound(spec.MIN_DEPOSIT_AMOUNT, 1, UINT64_MAX)
check_bound(spec.MAX_EFFECTIVE_BALANCE, spec.MIN_DEPOSIT_AMOUNT, UINT64_MAX)
check_bound(spec.MAX_EFFECTIVE_BALANCE, spec.EFFECTIVE_BALANCE_INCREMENT, UINT64_MAX)
@with_all_phases
@spec_state_test
def test_hysteresis_quotient(spec, state):
check_bound(spec.HYSTERESIS_QUOTIENT, 1, MAX_UINT_64)
check_bound(spec.HYSTERESIS_QUOTIENT, 1, UINT64_MAX)
check_bound(spec.HYSTERESIS_DOWNWARD_MULTIPLIER, 1, spec.HYSTERESIS_QUOTIENT)
check_bound(spec.HYSTERESIS_UPWARD_MULTIPLIER, spec.HYSTERESIS_QUOTIENT, MAX_UINT_64)
check_bound(spec.HYSTERESIS_UPWARD_MULTIPLIER, spec.HYSTERESIS_QUOTIENT, UINT64_MAX)
@with_all_phases
@ -58,6 +58,8 @@ def test_incentives(spec, state):
assert spec.MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX <= spec.WHISTLEBLOWER_REWARD_QUOTIENT
elif is_post_altair(spec):
assert spec.MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR <= spec.WHISTLEBLOWER_REWARD_QUOTIENT
elif is_post_eip7251(spec):
assert spec.MIN_SLASHING_PENALTY_QUOTIENT_EIP7251 <= spec.WHISTLEBLOWER_REWARD_QUOTIENT_EIP7251
else:
assert spec.MIN_SLASHING_PENALTY_QUOTIENT <= spec.WHISTLEBLOWER_REWARD_QUOTIENT
@ -68,7 +70,7 @@ def test_time(spec, state):
assert spec.SLOTS_PER_EPOCH <= spec.SLOTS_PER_HISTORICAL_ROOT
assert spec.MIN_SEED_LOOKAHEAD < spec.MAX_SEED_LOOKAHEAD
assert spec.SLOTS_PER_HISTORICAL_ROOT % spec.SLOTS_PER_EPOCH == 0
check_bound(spec.SLOTS_PER_HISTORICAL_ROOT, spec.SLOTS_PER_EPOCH, MAX_UINT_64)
check_bound(spec.SLOTS_PER_HISTORICAL_ROOT, spec.SLOTS_PER_EPOCH, UINT64_MAX)
check_bound(spec.MIN_ATTESTATION_INCLUSION_DELAY, 1, spec.SLOTS_PER_EPOCH)

View File

@ -7,7 +7,7 @@ generation and verification of merkle proofs based on static data.
Tests for each individual SSZ type are grouped into a `suite` indicating the SSZ type name.
### `object.yaml`
### `object.ssz_snappy`
An SSZ-snappy encoded object from which other data is generated. The SSZ type can be determined from the test `suite` name.

View File

@ -18,6 +18,7 @@ from eth2spec.test.altair.transition import (
)
from eth2spec.test.deneb.transition import (
test_operations as test_deneb_operations,
test_transition as test_deneb_transition,
)
@ -47,6 +48,7 @@ if __name__ == "__main__":
test_altair_slashing,
test_altair_operations,
test_deneb_operations,
test_deneb_transition,
)
for transition_test_module in all_tests:
for pre_fork, post_fork in ALL_PRE_POST_FORKS: