Merge branch 'dev'

commit 9be1e12618
@@ -9,7 +9,7 @@ build/
 output/
 dist/

-eth2.0-spec-tests/
+consensus-spec-tests/

 .pytest_cache
 .mypy_cache

Makefile
@@ -6,7 +6,7 @@ TEST_GENERATORS_DIR = ./tests/generators
 PY_SPEC_DIR = $(TEST_LIBS_DIR)/pyspec
 ETH2SPEC_MODULE_DIR = $(PY_SPEC_DIR)/eth2spec
 TEST_REPORT_DIR = $(PY_SPEC_DIR)/test-reports
-TEST_VECTOR_DIR = ../eth2.0-spec-tests/tests
+TEST_VECTOR_DIR = ../consensus-spec-tests/tests
 GENERATOR_DIR = ./tests/generators
 SOLIDITY_DEPOSIT_CONTRACT_DIR = ./solidity_deposit_contract
 SOLIDITY_DEPOSIT_CONTRACT_SOURCE = ${SOLIDITY_DEPOSIT_CONTRACT_DIR}/deposit_contract.sol

README.md
@@ -1,17 +1,17 @@
-# Ethereum 2.0 Specifications
+# Ethereum Proof-of-Stake Consensus Specifications

 [![Join the chat at https://discord.gg/qGpsxSA](https://img.shields.io/badge/chat-on%20discord-blue.svg)](https://discord.gg/qGpsxSA) [![Join the chat at https://gitter.im/ethereum/sharding](https://badges.gitter.im/ethereum/sharding.svg)](https://gitter.im/ethereum/sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

-To learn more about sharding and Ethereum 2.0 (Serenity), see the [sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs) and the [research compendium](https://notes.ethereum.org/s/H1PGqDhpm).
+To learn more about proof-of-stake and sharding, see the [PoS FAQ](https://eth.wiki/en/concepts/proof-of-stake-faqs), [sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs) and the [research compendium](https://notes.ethereum.org/s/H1PGqDhpm).

-This repository hosts the current Eth2 specifications. Discussions about design rationale and proposed changes can be brought up and discussed as issues. Solidified, agreed-upon changes to the spec can be made through pull requests.
+This repository hosts the current Ethereum proof-of-stake specifications. Discussions about design rationale and proposed changes can be brought up and discussed as issues. Solidified, agreed-upon changes to the spec can be made through pull requests.


 ## Specs

 [![GitHub release](https://img.shields.io/github/v/release/ethereum/eth2.0-specs)](https://github.com/ethereum/eth2.0-specs/releases/) [![PyPI version](https://badge.fury.io/py/eth2spec.svg)](https://badge.fury.io/py/eth2spec)

-Core specifications for Eth2 clients can be found in [specs](specs/). These are divided into features.
+Core specifications for Ethereum proof-of-stake clients can be found in [specs](specs/). These are divided into features.
+Features are researched and developed in parallel, and then consolidated into sequential upgrades when ready.

 The current features are:
@@ -73,13 +73,12 @@ Sharding follows the merge, and is divided into three parts:

 Additional specifications and standards outside of requisite client functionality can be found in the following repos:

-* [Eth2 APIs](https://github.com/ethereum/eth2.0-apis)
-* [Eth2 Metrics](https://github.com/ethereum/eth2.0-metrics/)
-* [Interop Standards in Eth2 PM](https://github.com/ethereum/eth2.0-pm/tree/master/interop)
+* [Beacon APIs](https://github.com/ethereum/beacon-apis)
+* [Beacon Metrics](https://github.com/ethereum/beacon-metrics/)

 ## Design goals

-The following are the broad design goals for Ethereum 2.0:
+The following are the broad design goals for the Ethereum proof-of-stake consensus specifications:
 * to minimize complexity, even at the cost of some losses in efficiency
 * to remain live through major network partitions and when very large portions of nodes go offline
 * to select all components such that they are either quantum secure or can be easily swapped out for quantum secure counterparts when available
@@ -97,3 +96,7 @@ The following are the broad design goals for Ethereum 2.0:
 Documentation on the different components used during spec writing can be found here:
 * [YAML Test Generators](tests/generators/README.md)
 * [Executable Python Spec, with Py-tests](tests/core/pyspec/README.md)
+
+## Consensus spec tests
+
+Conformance tests built from the executable python spec are available in the [Ethereum Proof-of-Stake Consensus Spec Tests](https://github.com/ethereum/consensus-spec-tests) repo. Compressed tarballs are available in [releases](https://github.com/ethereum/consensus-spec-tests/releases).

@@ -1,22 +1,24 @@
 # Mainnet preset - Sharding

 # Beacon-chain
 # ---------------------------------------------------------------
 # Misc
 # ---------------------------------------------------------------
 # 2**10 (= 1,024)
 MAX_SHARDS: 1024
-# 2**6 = 64
+# 2**6 (= 64)
 INITIAL_ACTIVE_SHARDS: 64
 # 2**3 (= 8)
-GASPRICE_ADJUSTMENT_COEFFICIENT: 8
+SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT: 8
 # 2**4 (= 16)
 MAX_SHARD_PROPOSER_SLASHINGS: 16

 # Shard block configs
 # ---------------------------------------------------------------
 #
 MAX_SHARD_HEADERS_PER_SHARD: 4
 # 2**8 (= 256)
 SHARD_STATE_MEMORY_SLOTS: 256
+# 2**40 (= 1,099,511,627,776)
+BLOB_BUILDER_REGISTRY_LIMIT: 1099511627776

 # Shard blob samples
 # ---------------------------------------------------------------
 # 2**11 (= 2,048)
 MAX_SAMPLES_PER_BLOCK: 2048
 # 2**10 (= 1,024)
@@ -25,6 +27,6 @@ TARGET_SAMPLES_PER_BLOCK: 1024
 # Gwei values
 # ---------------------------------------------------------------
 # 2**33 (= 8,589,934,592) Gwei
-MAX_GASPRICE: 8589934592
+MAX_SAMPLE_PRICE: 8589934592
 # 2**3 (= 8) Gwei
-MIN_GASPRICE: 8
+MIN_SAMPLE_PRICE: 8
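The power-of-two annotations in these preset files are easy to get out of sync with the values they describe. A hedged sanity check, assuming PyYAML and an illustrative local file name `sharding.yaml` (both assumptions of this sketch, not part of the repository), can confirm each mainnet value matches its annotated power; the minimal preset deliberately customizes several values and would fail such a check.

```python
# Illustrative preset check; the file name and the expected-powers table are
# assumptions of this sketch, not part of the spec repository.
import yaml

EXPECTED_POWERS = {
    "MAX_SHARDS": 2**10,
    "INITIAL_ACTIVE_SHARDS": 2**6,
    "SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT": 2**3,
    "MAX_SHARD_PROPOSER_SLASHINGS": 2**4,
    "SHARD_STATE_MEMORY_SLOTS": 2**8,
    "BLOB_BUILDER_REGISTRY_LIMIT": 2**40,
    "MAX_SAMPLES_PER_BLOCK": 2**11,
    "TARGET_SAMPLES_PER_BLOCK": 2**10,
    "MAX_SAMPLE_PRICE": 2**33,
    "MIN_SAMPLE_PRICE": 2**3,
}

with open("sharding.yaml") as f:
    preset = yaml.safe_load(f)

for name, expected in EXPECTED_POWERS.items():
    assert preset[name] == expected, f"{name}: {preset[name]} != {expected}"
```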
@@ -1,6 +1,6 @@
 # Minimal preset - Sharding

 # Beacon-chain
-# Misc
 # ---------------------------------------------------------------
+# Misc
 # [customized] reduced for testing
@@ -8,15 +8,18 @@ MAX_SHARDS: 8
 # [customized] reduced for testing
 INITIAL_ACTIVE_SHARDS: 2
 # 2**3 (= 8)
-GASPRICE_ADJUSTMENT_COEFFICIENT: 8
+SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT: 8
 # [customized] reduced for testing
 MAX_SHARD_PROPOSER_SLASHINGS: 4

 # Shard block configs
 # ---------------------------------------------------------------
 #
 MAX_SHARD_HEADERS_PER_SHARD: 4
 # 2**8 (= 256)
 SHARD_STATE_MEMORY_SLOTS: 256
+# 2**40 (= 1,099,511,627,776)
+BLOB_BUILDER_REGISTRY_LIMIT: 1099511627776

 # Shard blob samples
 # ---------------------------------------------------------------
 # 2**11 (= 2,048)
 MAX_SAMPLES_PER_BLOCK: 2048
 # 2**10 (= 1,024)
@@ -25,6 +28,6 @@ TARGET_SAMPLES_PER_BLOCK: 1024
 # Gwei values
 # ---------------------------------------------------------------
 # 2**33 (= 8,589,934,592) Gwei
-MAX_GASPRICE: 8589934592
+MAX_SAMPLE_PRICE: 8589934592
 # 2**3 (= 8) Gwei
-MIN_GASPRICE: 8
+MIN_SAMPLE_PRICE: 8

setup.py
@@ -56,7 +56,7 @@ def floorlog2(x: int) -> uint64:


 OPTIMIZED_BLS_AGGREGATE_PUBKEYS = '''
-def eth2_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
+def eth_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
     return bls.AggregatePKs(pubkeys)
 '''

@@ -480,8 +480,8 @@ def get_generalized_index(ssz_class: Any, *path: Sequence[PyUnion[int, SSZVariab

     @classmethod
     def implement_optimizations(cls, functions: Dict[str, str]) -> Dict[str, str]:
-        if "eth2_aggregate_pubkeys" in functions:
-            functions["eth2_aggregate_pubkeys"] = OPTIMIZED_BLS_AGGREGATE_PUBKEYS.strip()
+        if "eth_aggregate_pubkeys" in functions:
+            functions["eth_aggregate_pubkeys"] = OPTIMIZED_BLS_AGGREGATE_PUBKEYS.strip()
         return super().implement_optimizations(functions)

 #
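For context, `implement_optimizations` is a hook in the pyspec build: each spec function's source text travels through the build as a dict keyed by function name, and a fork's builder may swap a reference body for an optimized one before the combined module is written out. A stripped-down sketch of that pattern, with dict contents invented purely for illustration:

```python
# Stripped-down sketch of the source-substitution pattern in setup.py above.
# The input dict contents are invented for illustration.
from typing import Dict

OPTIMIZED_BLS_AGGREGATE_PUBKEYS = '''
def eth_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
    return bls.AggregatePKs(pubkeys)
'''

def implement_optimizations(functions: Dict[str, str]) -> Dict[str, str]:
    # Swap the reference aggregation loop for the backend-provided one-liner.
    if "eth_aggregate_pubkeys" in functions:
        functions["eth_aggregate_pubkeys"] = OPTIMIZED_BLS_AGGREGATE_PUBKEYS.strip()
    return functions

functions = {"eth_aggregate_pubkeys": "def eth_aggregate_pubkeys(pubkeys): ..."}
assert "AggregatePKs" in implement_optimizations(functions)["eth_aggregate_pubkeys"]
```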

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Altair Beacon chain changes
+# Altair -- The Beacon Chain

 ## Table of contents

@@ -287,7 +287,7 @@ def get_next_sync_committee(state: BeaconState) -> SyncCommittee:
     """
     indices = get_next_sync_committee_indices(state)
     pubkeys = [state.validators[index].pubkey for index in indices]
-    aggregate_pubkey = eth2_aggregate_pubkeys(pubkeys)
+    aggregate_pubkey = eth_aggregate_pubkeys(pubkeys)
     return SyncCommittee(pubkeys=pubkeys, aggregate_pubkey=aggregate_pubkey)
 ```

@@ -544,7 +544,7 @@ def process_sync_aggregate(state: BeaconState, sync_aggregate: SyncAggregate) ->
     previous_slot = max(state.slot, Slot(1)) - Slot(1)
     domain = get_domain(state, DOMAIN_SYNC_COMMITTEE, compute_epoch_at_slot(previous_slot))
     signing_root = compute_signing_root(get_block_root_at_slot(state, previous_slot), domain)
-    assert eth2_fast_aggregate_verify(participant_pubkeys, signing_root, sync_aggregate.sync_committee_signature)
+    assert eth_fast_aggregate_verify(participant_pubkeys, signing_root, sync_aggregate.sync_committee_signature)

     # Compute participant and proposer rewards
     total_active_increments = get_total_active_balance(state) // EFFECTIVE_BALANCE_INCREMENT

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Altair BLS extensions
+# Altair -- BLS extensions

 ## Table of contents

@@ -9,8 +9,8 @@
 - [Introduction](#introduction)
 - [Constants](#constants)
 - [Extensions](#extensions)
-  - [`eth2_aggregate_pubkeys`](#eth2_aggregate_pubkeys)
-  - [`eth2_fast_aggregate_verify`](#eth2_fast_aggregate_verify)
+  - [`eth_aggregate_pubkeys`](#eth_aggregate_pubkeys)
+  - [`eth_fast_aggregate_verify`](#eth_fast_aggregate_verify)

 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
 <!-- /TOC -->
@@ -29,14 +29,14 @@ Knowledge of the [phase 0 specification](../phase0/beacon-chain.md) is assumed,

 ## Extensions

-### `eth2_aggregate_pubkeys`
+### `eth_aggregate_pubkeys`

 An additional function `AggregatePKs` is defined to extend the
 [IETF BLS signature draft standard v4](https://tools.ietf.org/html/draft-irtf-cfrg-bls-signature-04)
 spec referenced in the phase 0 document.

 ```python
-def eth2_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
+def eth_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
     """
     Return the aggregate public key for the public keys in ``pubkeys``.

@@ -46,16 +46,19 @@ def eth2_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
     Refer to the BLS signature draft standard for more information.
     """
     assert len(pubkeys) > 0
+    # Ensure that the given inputs are valid pubkeys
+    assert all(bls.KeyValidate(pubkey) for pubkey in pubkeys)
+
     result = copy(pubkeys[0])
     for pubkey in pubkeys[1:]:
         result += pubkey
     return result
 ```

-### `eth2_fast_aggregate_verify`
+### `eth_fast_aggregate_verify`

 ```python
-def eth2_fast_aggregate_verify(pubkeys: Sequence[BLSPubkey], message: Bytes32, signature: BLSSignature) -> bool:
+def eth_fast_aggregate_verify(pubkeys: Sequence[BLSPubkey], message: Bytes32, signature: BLSSignature) -> bool:
     """
     Wrapper to ``bls.FastAggregateVerify`` accepting the ``G2_POINT_AT_INFINITY`` signature when ``pubkeys`` is empty.
     """
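The hunk above ends inside the function body. Going only by the docstring, a hedged sketch of the wrapper's behavior looks like this (`G2_POINT_AT_INFINITY` is defined in this document's Constants section; treat the body as illustrative rather than verbatim spec text):

```python
# Hedged sketch based on the docstring above; illustrative, not verbatim.
def eth_fast_aggregate_verify_sketch(pubkeys: Sequence[BLSPubkey],
                                     message: Bytes32,
                                     signature: BLSSignature) -> bool:
    # An empty participant set is only acceptable with the infinity signature.
    if len(pubkeys) == 0 and signature == G2_POINT_AT_INFINITY:
        return True
    return bls.FastAggregateVerify(pubkeys, message, signature)
```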

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Altair fork
+# Altair -- Fork Logic

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -17,7 +17,7 @@

 ## Introduction

-This document describes the process of the first upgrade of Ethereum 2.0: the Altair hard fork, introducing light client support and other improvements.
+This document describes the process of the first upgrade of the beacon chain: the Altair hard fork, introducing light client support and other improvements.

 ## Configuration

@@ -1,6 +1,6 @@
-# Ethereum Altair networking specification
+# Altair -- Networking

-This document contains the networking specification for Ethereum 2.0 clients added during the Altair deployment.
+This document contains the networking specification for Altair.
 This document should be viewed as additive to the [document from Phase 0](../phase0/p2p-interface.md) and will be referred to as the "Phase 0 document" hereafter.
 Readers should understand the Phase 0 document and use it as a basis to understand the changes outlined in this document.

@@ -1,4 +1,4 @@
-# Minimal Light Client
+# Altair -- Minimal Light Client

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -28,8 +28,8 @@

 ## Introduction

-Eth2 is designed to be light client friendly for constrained environments to
-access Eth2 with reasonable safety and liveness.
+The beacon chain is designed to be light client friendly for constrained environments to
+access Ethereum with reasonable safety and liveness.
 Such environments include resource-constrained devices (e.g. phones for trust-minimised wallets)
 and metered VMs (e.g. blockchain VMs for cross-chain bridges).

@@ -184,7 +184,7 @@ def process_light_client_update(store: LightClientStore, update: LightClientUpda
     ):
         # Apply update if (1) 2/3 quorum is reached and (2) we have a finality proof.
         # Note that (2) means that the current light client design needs finality.
-        # It may be changed to re-organizable light client design. See the on-going issue eth2.0-specs#2182.
+        # It may be changed to re-organizable light client design. See the on-going issue consensus-specs#2182.
         apply_light_client_update(store.snapshot, update)
         store.valid_updates = set()
     elif current_slot > store.snapshot.header.slot + update_timeout:

@@ -1,6 +1,6 @@
-# Ethereum 2.0 Altair -- Honest Validator
+# Altair -- Honest Validator

-This is an accompanying document to [Ethereum 2.0 Altair -- The Beacon Chain](./beacon-chain.md), which describes the expected actions of a "validator" participating in the Ethereum 2.0 protocol.
+This is an accompanying document to [Altair -- The Beacon Chain](./beacon-chain.md), which describes the expected actions of a "validator" participating in the Ethereum proof-of-stake protocol.

 ## Table of contents

@@ -49,8 +49,8 @@ This is an accompanying document to [Ethereum 2.0 Altair -- The Beacon Chain](./

 ## Introduction

-This document represents the expected behavior of an "honest validator" with respect to the Altair upgrade of the Ethereum 2.0 protocol.
-It builds on the [previous document for the behavior of an "honest validator" from Phase 0](../phase0/validator.md) of the Ethereum 2.0 protocol.
+This document represents the expected behavior of an "honest validator" with respect to the Altair upgrade of the Ethereum proof-of-stake protocol.
+It builds on the [previous document for the behavior of an "honest validator" from Phase 0](../phase0/validator.md) of the Ethereum proof-of-stake protocol.
 This previous document is referred to below as the "Phase 0 document".

 Altair introduces a new type of committee: the sync committee. Sync committees are responsible for signing each block of the canonical chain and there exists an efficient algorithm for light clients to sync the chain using the output of the sync committees.

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Custody Game -- Beacon Chain
+# Custody Game -- The Beacon Chain

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -57,7 +57,7 @@

 ## Introduction

-This document details the beacon chain additions and changes of Ethereum 2.0 to support the shard data custody game,
+This document details the beacon chain additions and changes to support the shard data custody game,
 building upon the [Sharding](../sharding/beacon-chain.md) specification.

 ## Constants

@@ -1,8 +1,8 @@
-# Ethereum 2.0 Custody Game -- Honest Validator
+# Custody Game -- Honest Validator

 **Notice**: This document is a work-in-progress for researchers and implementers.
-This is an accompanying document to the [Ethereum 2.0 Custody Game](./), which describes the expected actions of a "validator"
-participating in the Ethereum 2.0 Custody Game.
+This is an accompanying document to the [Custody Game](./), which describes the expected actions of a "validator"
+participating in the shard data Custody Game.

 ## Table of contents


@@ -1,4 +1,4 @@
-# Ethereum 2.0 Data Availability Sampling -- Core
+# Data Availability Sampling -- Core

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Data Availability Sampling -- Fork Choice
+# Data Availability Sampling -- Fork Choice

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -17,7 +17,7 @@

 ## Introduction

-This document is the beacon chain fork choice spec for Ethereum 2.0 Data Availability Sampling. The only change that we add from phase 0 is that we add a concept of "data dependencies";
+This document is the beacon chain fork choice spec for Data Availability Sampling. The only change that we add from phase 0 is that we add a concept of "data dependencies";
 a block is only eligible for consideration in the fork choice after a data availability test has been successfully completed for all dependencies.
 The "root" of a shard block for data dependency purposes is considered to be a `DataCommitment` object, which is a pair of a Kate commitment and a length.

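The `DataCommitment` pairing described above can be sketched as an SSZ container; the field names below follow the sharding specification's conventions and are assumptions here, since the definition itself is not part of this diff:

```python
# Hedged sketch of the commitment pair described above; field names assumed.
class DataCommitment(Container):
    point: BLSCommitment  # Kate (KZG) commitment to the blob data
    length: uint64        # length of the underlying data
```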

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Data Availability Sampling -- Network specification
+# Data Availability Sampling -- Networking

 **Notice**: This document is a work-in-progress for researchers and implementers.


@@ -1,4 +1,4 @@
-# Ethereum 2.0 Data Availability Sampling
+# Data Availability Sampling -- Sampling

 **Notice**: This document is a work-in-progress for researchers and implementers.


@@ -1,4 +1,4 @@
-# Ethereum 2.0 The Merge
+# The Merge -- The Beacon Chain

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -12,6 +12,8 @@
 - [Custom types](#custom-types)
 - [Constants](#constants)
   - [Execution](#execution)
+- [Configuration](#configuration)
+  - [Genesis testing settings](#genesis-testing-settings)
 - [Containers](#containers)
   - [Extended containers](#extended-containers)
     - [`BeaconBlockBody`](#beaconblockbody)
@@ -31,6 +33,7 @@
     - [`on_payload`](#on_payload)
 - [Block processing](#block-processing)
   - [Execution payload processing](#execution-payload-processing)
+    - [`is_valid_gas_limit`](#is_valid_gas_limit)
     - [`process_execution_payload`](#process_execution_payload)
 - [Testing](#testing)

@@ -59,6 +62,19 @@ This patch adds transaction execution to the beacon chain as part of the Merge f
 | `MAX_BYTES_PER_OPAQUE_TRANSACTION` | `uint64(2**20)` (= 1,048,576) |
 | `MAX_TRANSACTIONS_PER_PAYLOAD` | `uint64(2**14)` (= 16,384) |
 | `BYTES_PER_LOGS_BLOOM` | `uint64(2**8)` (= 256) |
+| `GAS_LIMIT_DENOMINATOR` | `uint64(2**10)` (= 1,024) |
+| `MIN_GAS_LIMIT` | `uint64(5000)` (= 5,000) |
+
+## Configuration
+
+### Genesis testing settings
+
+*Note*: These configuration settings do not apply to the mainnet and are utilized only by pure Merge testing.
+
+| Name | Value |
+| - | - |
+| `GENESIS_GAS_LIMIT` | `uint64(30000000)` (= 30,000,000) |
+| `GENESIS_BASE_FEE_PER_GAS` | `Bytes32('0x00ca9a3b00000000000000000000000000000000000000000000000000000000')` (= 1,000,000,000) |
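The `GENESIS_BASE_FEE_PER_GAS` row pairs a little-endian `Bytes32` with its decimal reading; the two are easy to reconcile with plain Python (a standalone check, not spec code):

```python
# Standalone check of the little-endian encoding in the table above.
encoded = bytes.fromhex('00ca9a3b' + '00' * 28)  # the Bytes32 value
assert int.from_bytes(encoded, 'little') == 1_000_000_000  # 10**9 wei = 1 Gwei
```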

 ## Containers

@@ -128,6 +144,8 @@ class BeaconState(Container):

 #### `ExecutionPayload`

+*Note*: The `base_fee_per_gas` field is serialized in little-endian.
+
 ```python
 class ExecutionPayload(Container):
     # Execution block header fields
@@ -141,6 +159,7 @@ class ExecutionPayload(Container):
     gas_limit: uint64
     gas_used: uint64
     timestamp: uint64
+    base_fee_per_gas: Bytes32  # base fee introduced in EIP-1559, little-endian serialized
     # Extra payload fields
     block_hash: Hash32  # Hash of execution block
     transactions: List[Transaction, MAX_TRANSACTIONS_PER_PAYLOAD]
@@ -161,6 +180,7 @@ class ExecutionPayloadHeader(Container):
     gas_limit: uint64
     gas_used: uint64
     timestamp: uint64
+    base_fee_per_gas: Bytes32
     # Extra payload fields
     block_hash: Hash32  # Hash of execution block
     transactions_root: Root
@@ -239,17 +259,41 @@ def process_block(state: BeaconState, block: BeaconBlock) -> None:

 ### Execution payload processing

+#### `is_valid_gas_limit`
+
+```python
+def is_valid_gas_limit(payload: ExecutionPayload, parent: ExecutionPayloadHeader) -> bool:
+    parent_gas_limit = parent.gas_limit
+
+    # Check if the payload used too much gas
+    if payload.gas_used > payload.gas_limit:
+        return False
+
+    # Check if the payload changed the gas limit too much
+    if payload.gas_limit >= parent_gas_limit + parent_gas_limit // GAS_LIMIT_DENOMINATOR:
+        return False
+    if payload.gas_limit <= parent_gas_limit - parent_gas_limit // GAS_LIMIT_DENOMINATOR:
+        return False
+
+    # Check if the gas limit is at least the minimum gas limit
+    if payload.gas_limit < MIN_GAS_LIMIT:
+        return False
+
+    return True
+```
+
 #### `process_execution_payload`

 *Note:* This function depends on `process_randao` function call as it retrieves the most recent randao mix from the `state`. Implementations that are considering parallel processing of execution payload with respect to beacon chain state transition function should work around this dependency.

 ```python
 def process_execution_payload(state: BeaconState, payload: ExecutionPayload, execution_engine: ExecutionEngine) -> None:
-    # Verify consistency of the parent hash, block number and random
+    # Verify consistency of the parent hash, block number, random, base fee per gas and gas limit
     if is_merge_complete(state):
         assert payload.parent_hash == state.latest_execution_payload_header.block_hash
         assert payload.block_number == state.latest_execution_payload_header.block_number + uint64(1)
         assert payload.random == get_randao_mix(state, get_current_epoch(state))
+        assert is_valid_gas_limit(payload, state.latest_execution_payload_header)
     # Verify timestamp
     assert payload.timestamp == compute_timestamp_at_slot(state, state.slot)
     # Verify the execution payload is valid
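To make the `is_valid_gas_limit` bounds concrete: with the genesis gas limit as parent and the constants from the table above, the per-block change is bounded strictly by 1/1024 of the parent limit (plain arithmetic, not spec code):

```python
# Worked example of the is_valid_gas_limit bounds, using values from this document.
GAS_LIMIT_DENOMINATOR = 1024
MIN_GAS_LIMIT = 5000
parent_gas_limit = 30_000_000  # GENESIS_GAS_LIMIT

max_delta = parent_gas_limit // GAS_LIMIT_DENOMINATOR  # 29,296
# Valid child gas limits lie strictly between these bounds:
lower_exclusive = parent_gas_limit - max_delta  # 29,970,704
upper_exclusive = parent_gas_limit + max_delta  # 30,029,296
assert lower_exclusive > MIN_GAS_LIMIT
```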
@@ -266,6 +310,7 @@ def process_execution_payload(state: BeaconState, payload: ExecutionPayload, exe
         gas_limit=payload.gas_limit,
         gas_used=payload.gas_used,
         timestamp=payload.timestamp,
+        base_fee_per_gas=payload.base_fee_per_gas,
         block_hash=payload.block_hash,
         transactions_root=hash_tree_root(payload.transactions),
     )
@@ -321,6 +366,8 @@ def initialize_beacon_state_from_eth1(eth1_block_hash: Bytes32,
     state.latest_execution_payload_header.block_hash = eth1_block_hash
     state.latest_execution_payload_header.timestamp = eth1_timestamp
     state.latest_execution_payload_header.random = eth1_block_hash
+    state.latest_execution_payload_header.gas_limit = GENESIS_GAS_LIMIT
+    state.latest_execution_payload_header.base_fee_per_gas = GENESIS_BASE_FEE_PER_GAS

     return state
 ```

@@ -1,4 +1,4 @@
-# Ethereum 2.0 The Merge
+# The Merge -- Fork Choice

 **Notice**: This document is a work-in-progress for researchers and implementers.

@@ -129,7 +129,7 @@ def on_block(store: Store, signed_block: SignedBeaconBlock, transition_store: Tr
     assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root

     # [New in Merge]
-    if (transition_store is not None) and is_merge_block(pre_state, block):
+    if (transition_store is not None) and is_merge_block(pre_state, block.body):
         # Delay consideration of block until PoW block is processed by the PoW node
         pow_block = get_pow_block(block.body.execution_payload.parent_hash)
         pow_parent = get_pow_block(pow_block.parent_hash)

@@ -1,4 +1,4 @@
-# Ethereum 2.0 The Merge
+# The Merge -- Fork Logic

 **Notice**: This document is a work-in-progress for researchers and implementers.


@@ -0,0 +1,131 @@
+# The Merge -- Networking
+
+This document contains the networking specification for the Merge.
+
+The specification of these changes continues in the same format as the network specifications of previous upgrades, and assumes them as pre-requisite. This document should be viewed as additive to the documents from [Phase 0](../phase0/p2p-interface.md) and from [Altair](../altair/p2p-interface.md)
+and will be referred to as the "Phase 0 document" and "Altair document" respectively, hereafter.
+Readers should understand the Phase 0 and Altair documents and use them as a basis to understand the changes outlined in this document.
+
+## Table of contents
+
+<!-- TOC -->
+<!-- START doctoc generated TOC please keep comment here to allow auto update -->
+<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+- [Warning](#warning)
+- [Modifications in the Merge](#modifications-in-the-merge)
+  - [The gossip domain: gossipsub](#the-gossip-domain-gossipsub)
+    - [Topics and messages](#topics-and-messages)
+      - [Global topics](#global-topics)
+        - [`beacon_block`](#beacon_block)
+    - [Transitioning the gossip](#transitioning-the-gossip)
+  - [The Req/Resp domain](#the-reqresp-domain)
+    - [Messages](#messages)
+      - [BeaconBlocksByRange v2](#beaconblocksbyrange-v2)
+      - [BeaconBlocksByRoot v2](#beaconblocksbyroot-v2)
+
+<!-- END doctoc generated TOC please keep comment here to allow auto update -->
+<!-- /TOC -->
+
+## Warning
+
+This document is currently illustrative for early Merge testnets and some parts are subject to change.
+Refer to the note in the [validator guide](./validator.md) for further details.
+
+# Modifications in the Merge
+
+## The gossip domain: gossipsub
+
+Some gossip meshes are upgraded in the Merge to support upgraded types.
+
+### Topics and messages
+
+Topics follow the same specification as in prior upgrades.
+All topics remain stable except the beacon block topic which is updated with the modified type.
+
+The specification around the creation, validation, and dissemination of messages has not changed from the Phase 0 and Altair documents.
+
+The derivation of the `message-id` remains stable.
+
+The new topics along with the type of the `data` field of a gossipsub message are given in this table:
+
+| Name | Message Type |
+| - | - |
+| `beacon_block` | `SignedBeaconBlock` (modified) |
+
+Note that the `ForkDigestValue` path segment of the topic separates the old and the new `beacon_block` topics.
+
+#### Global topics
+
+The Merge changes the type of the global beacon block topic.
+
+##### `beacon_block`
+
+The *type* of the payload of this topic changes to the (modified) `SignedBeaconBlock` found in the Merge.
+Specifically, this type changes with the addition of `execution_payload` to the inner `BeaconBlockBody`.
+See the Merge [state transition document](./beacon-chain.md#beaconblockbody) for further details.
+
+In addition to the gossip validations for this topic from prior specifications,
+the following validations MUST pass before forwarding the `signed_beacon_block` on the network.
+Alias `block = signed_beacon_block.message`, `execution_payload = block.body.execution_payload`.
+- If the merge is complete with respect to the head state -- i.e. `is_merge_complete(state)` --
+  then validate the following:
+  - _[REJECT]_ The block's execution payload must be non-empty --
+    i.e. `execution_payload != ExecutionPayload()`
+- If the execution is enabled for the block -- i.e. `is_execution_enabled(state, block.body)` --
+  then validate the following:
+  - _[REJECT]_ The block's execution payload timestamp is correct with respect to the slot
+    -- i.e. `execution_payload.timestamp == compute_timestamp_at_slot(state, block.slot)`.
+  - _[REJECT]_ Gas used is less than the gas limit --
+    i.e. `execution_payload.gas_used <= execution_payload.gas_limit`.
+  - _[REJECT]_ The execution payload block hash is not equal to the parent hash --
+    i.e. `execution_payload.block_hash != execution_payload.parent_hash`.
+  - _[REJECT]_ The execution payload transaction list data is within expected size limits,
+    the data MUST NOT be larger than the SSZ list-limit,
+    and a client MAY be more strict.
+
+*Note*: Additional [gossip validations](https://github.com/ethereum/devp2p/blob/master/caps/eth.md#block-encoding-and-validity)
+(see block "data validity" conditions) that rely more heavily on execution-layer state and logic are currently under consideration.
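The conditions above translate almost mechanically into code. A hedged sketch, expressed as assertions (the helpers `is_merge_complete`, `is_execution_enabled`, and `compute_timestamp_at_slot` come from the Merge beacon chain document in this same commit; the function itself is illustrative):

```python
# Hedged sketch of the [REJECT] gossip conditions listed above; illustrative.
def validate_beacon_block_gossip(state: BeaconState, block: BeaconBlock) -> None:
    execution_payload = block.body.execution_payload
    if is_merge_complete(state):
        # [REJECT] The execution payload must be non-empty
        assert execution_payload != ExecutionPayload()
    if is_execution_enabled(state, block.body):
        # [REJECT] The payload timestamp must be correct for the slot
        assert execution_payload.timestamp == compute_timestamp_at_slot(state, block.slot)
        # [REJECT] Gas used must not exceed the gas limit
        assert execution_payload.gas_used <= execution_payload.gas_limit
        # [REJECT] The payload's block hash must differ from its parent hash
        assert execution_payload.block_hash != execution_payload.parent_hash
```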
+
+### Transitioning the gossip
+
+See gossip transition details found in the [Altair document](../altair/p2p) for
+details on how to handle transitioning gossip topics for the Merge.
+
+## The Req/Resp domain
+
+### Messages
+
+#### BeaconBlocksByRange v2
+
+**Protocol ID:** `/eth2/beacon_chain/req/beacon_blocks_by_range/2/`
+
+Request and Response remain unchanged.
+The Merge fork-digest is introduced to the `context` enum to specify the Merge block type.
+
+Per `context = compute_fork_digest(fork_version, genesis_validators_root)`:
+
+[0]: # (eth2spec: skip)
+
+| `fork_version` | Chunk SSZ type |
+| ------------------------ | -------------------------- |
+| `GENESIS_FORK_VERSION` | `phase0.SignedBeaconBlock` |
+| `ALTAIR_FORK_VERSION` | `altair.SignedBeaconBlock` |
+| `MERGE_FORK_VERSION` | `merge.SignedBeaconBlock` |
+
+#### BeaconBlocksByRoot v2
+
+**Protocol ID:** `/eth2/beacon_chain/req/beacon_blocks_by_root/2/`
+
+Request and Response remain unchanged.
+The Merge fork-digest is introduced to the `context` enum to specify the Merge block type.
+
+Per `context = compute_fork_digest(fork_version, genesis_validators_root)`:
+
+[1]: # (eth2spec: skip)
+
+| `fork_version` | Chunk SSZ type |
+| ------------------------ | -------------------------- |
+| `GENESIS_FORK_VERSION` | `phase0.SignedBeaconBlock` |
+| `ALTAIR_FORK_VERSION` | `altair.SignedBeaconBlock` |
+| `MERGE_FORK_VERSION` | `merge.SignedBeaconBlock` |
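Both tables encode the same rule: the `context` bytes received with a chunk select which `SignedBeaconBlock` variant to decode. A hedged sketch of that dispatch (the fork-version constants and per-fork modules are assumed from the respective fork documents):

```python
# Illustrative dispatch from fork version to chunk SSZ type, mirroring the tables.
def chunk_block_type(fork_version: Version):
    return {
        GENESIS_FORK_VERSION: phase0.SignedBeaconBlock,
        ALTAIR_FORK_VERSION: altair.SignedBeaconBlock,
        MERGE_FORK_VERSION: merge.SignedBeaconBlock,
    }[fork_version]

# A responder matches the received context, compute_fork_digest(fork_version,
# genesis_validators_root), against the digests of known fork versions to
# recover fork_version before decoding.
```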

@@ -1,4 +1,4 @@
-# Ethereum 2.0 The Merge
+# The Merge -- Honest Validator

 **Notice**: This document is a work-in-progress for researchers and implementers.


@@ -1,4 +1,4 @@
-# Ethereum 2.0 Phase 0 -- The Beacon Chain
+# Phase 0 -- The Beacon Chain

 ## Table of contents
 <!-- TOC -->
@@ -140,10 +140,10 @@

 ## Introduction

-This document represents the specification for Phase 0 of Ethereum 2.0 -- The Beacon Chain.
+This document represents the specification for Phase 0 -- The Beacon Chain.

-At the core of Ethereum 2.0 is a system chain called the "beacon chain". The beacon chain stores and manages the registry of validators. In the initial deployment phases of Ethereum 2.0, the only mechanism to become a validator is to make a one-way ETH transaction to a deposit contract on Ethereum 1.0. Activation as a validator happens when Ethereum 1.0 deposit receipts are processed by the beacon chain, the activation balance is reached, and a queuing process is completed. Exit is either voluntary or done forcibly as a penalty for misbehavior.
-The primary source of load on the beacon chain is "attestations". Attestations are simultaneously availability votes for a shard block (in a later Eth2 upgrade) and proof-of-stake votes for a beacon block (Phase 0).
+At the core of Ethereum proof-of-stake is a system chain called the "beacon chain". The beacon chain stores and manages the registry of validators. In the initial deployment phases of proof-of-stake, the only mechanism to become a validator is to make a one-way ETH transaction to a deposit contract on the Ethereum proof-of-work chain. Activation as a validator happens when deposit receipts are processed by the beacon chain, the activation balance is reached, and a queuing process is completed. Exit is either voluntary or done forcibly as a penalty for misbehavior.
+The primary source of load on the beacon chain is "attestations". Attestations are simultaneously availability votes for a shard block (in a later upgrade) and proof-of-stake votes for a beacon block (Phase 0).

 ## Notation

@@ -647,6 +647,7 @@ The [IETF BLS signature draft standard v4](https://tools.ietf.org/html/draft-irt
 - `def Aggregate(signatures: Sequence[BLSSignature]) -> BLSSignature`
 - `def FastAggregateVerify(pubkeys: Sequence[BLSPubkey], message: Bytes, signature: BLSSignature) -> bool`
 - `def AggregateVerify(pubkeys: Sequence[BLSPubkey], messages: Sequence[Bytes], signature: BLSSignature) -> bool`
+- `def KeyValidate(pubkey: BLSPubkey) -> bool`

 The above functions are accessed through the `bls` module, e.g. `bls.Verify`.

@@ -1165,13 +1166,13 @@ def slash_validator(state: BeaconState,

 ## Genesis

-Before the Ethereum 2.0 genesis has been triggered, and for every Ethereum 1.0 block, let `candidate_state = initialize_beacon_state_from_eth1(eth1_block_hash, eth1_timestamp, deposits)` where:
+Before the Ethereum beacon chain genesis has been triggered, and for every Ethereum proof-of-work block, let `candidate_state = initialize_beacon_state_from_eth1(eth1_block_hash, eth1_timestamp, deposits)` where:

-- `eth1_block_hash` is the hash of the Ethereum 1.0 block
+- `eth1_block_hash` is the hash of the Ethereum proof-of-work block
 - `eth1_timestamp` is the Unix timestamp corresponding to `eth1_block_hash`
 - `deposits` is the sequence of all deposits, ordered chronologically, up to (and including) the block with hash `eth1_block_hash`

-Eth1 blocks must only be considered once they are at least `SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE` seconds old (i.e. `eth1_timestamp + SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE <= current_unix_time`). Due to this constraint, if `GENESIS_DELAY < SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE`, then the `genesis_time` can happen before the time/state is first known. Values should be configured to avoid this case.
+Proof-of-work blocks must only be considered once they are at least `SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE` seconds old (i.e. `eth1_timestamp + SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE <= current_unix_time`). Due to this constraint, if `GENESIS_DELAY < SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE`, then the `genesis_time` can happen before the time/state is first known. Values should be configured to avoid this case.

 ```python
 def initialize_beacon_state_from_eth1(eth1_block_hash: Bytes32,
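As a concrete check of the timing constraint just stated, using the mainnet configuration values (assumed here; they live in the mainnet config rather than in this diff):

```python
# Mainnet configuration values (assumptions for this worked example).
SECONDS_PER_ETH1_BLOCK = 14
ETH1_FOLLOW_DISTANCE = 2048
GENESIS_DELAY = 604800  # 7 days

# A proof-of-work block may only be considered once it is this old:
follow_time = SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE  # 28,672 s, about 8 hours

# GENESIS_DELAY must not be smaller, or genesis_time could precede the first
# moment the genesis state is knowable.
assert GENESIS_DELAY >= follow_time
```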

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Phase 0 -- Deposit Contract
+# Phase 0 -- Deposit Contract

 ## Table of contents
 <!-- TOC -->
@@ -8,7 +8,7 @@
 - [Introduction](#introduction)
 - [Constants](#constants)
 - [Configuration](#configuration)
-- [Ethereum 1.0 deposit contract](#ethereum-10-deposit-contract)
+- [Staking deposit contract](#staking-deposit-contract)
   - [`deposit` function](#deposit-function)
     - [Deposit amount](#deposit-amount)
     - [Withdrawal credentials](#withdrawal-credentials)
@@ -20,7 +20,7 @@

 ## Introduction

-This document represents the specification for the beacon chain deposit contract, part of Ethereum 2.0 Phase 0.
+This document represents the specification for the beacon chain deposit contract, part of Phase 0.

 ## Constants

@@ -42,9 +42,9 @@ These configurations are updated for releases and may be out of sync during `dev
 | `DEPOSIT_NETWORK_ID` | `1` |
 | `DEPOSIT_CONTRACT_ADDRESS` | `0x00000000219ab540356cBB839Cbe05303d7705Fa` |

-## Ethereum 1.0 deposit contract
+## Staking deposit contract

-The initial deployment phases of Ethereum 2.0 are implemented without consensus changes to Ethereum 1.0. A deposit contract at address `DEPOSIT_CONTRACT_ADDRESS` is added to the Ethereum 1.0 chain defined by the [chain-id](https://eips.ethereum.org/EIPS/eip-155) -- `DEPOSIT_CHAIN_ID` -- and the network-id -- `DEPOSIT_NETWORK_ID` -- for deposits of ETH to the beacon chain. Validator balances will be withdrawable to the shards in Phase 2.
+The initial deployment phases of Ethereum proof-of-stake are implemented without consensus changes to the existing Ethereum proof-of-work chain. A deposit contract at address `DEPOSIT_CONTRACT_ADDRESS` is added to the Ethereum proof-of-work chain defined by the [chain-id](https://eips.ethereum.org/EIPS/eip-155) -- `DEPOSIT_CHAIN_ID` -- and the network-id -- `DEPOSIT_NETWORK_ID` -- for deposits of ETH to the beacon chain. Validator balances will be withdrawable to the execution-layer in a followup fork after the Merge.

 _Note_: See [here](https://chainid.network/) for a comprehensive list of public Ethereum chain chain-id's and network-id's.

@@ -54,7 +54,7 @@ The deposit contract has a public `deposit` function to make deposits. It takes

 #### Deposit amount

-The amount of ETH (rounded down to the closest Gwei) sent to the deposit contract is the deposit amount, which must be of size at least `MIN_DEPOSIT_AMOUNT` Gwei. Note that ETH consumed by the deposit contract is no longer usable on Ethereum 1.0.
+The amount of ETH (rounded down to the closest Gwei) sent to the deposit contract is the deposit amount, which must be of size at least `MIN_DEPOSIT_AMOUNT` Gwei. Note that ETH consumed by the deposit contract is no longer usable on the execution-layer until sometime after the Merge.

 #### Withdrawal credentials
@@ -68,7 +68,7 @@ Support for new withdrawal prefixes can be added without modifying the deposit c

 #### `DepositEvent` log

-Every Ethereum 1.0 deposit emits a `DepositEvent` log for consumption by the beacon chain. The deposit contract does little validation, pushing most of the validator onboarding logic to the beacon chain. In particular, the proof of possession (a BLS12-381 signature) is not verified by the deposit contract.
+Every deposit emits a `DepositEvent` log for consumption by the beacon chain. The deposit contract does little validation, pushing most of the validator onboarding logic to the beacon chain. In particular, the proof of possession (a BLS12-381 signature) is not verified by the deposit contract.

 ## Solidity code

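Returning to the deposit-amount rule shown earlier in this file: a worked example of the Gwei rounding, with `MIN_DEPOSIT_AMOUNT` assumed at its Phase 0 value of 1 ETH (10**9 Gwei):

```python
# Worked example of the deposit-amount rule; MIN_DEPOSIT_AMOUNT value assumed.
MIN_DEPOSIT_AMOUNT = 10**9  # Gwei (1 ETH)

def deposit_amount_gwei(amount_wei: int) -> int:
    amount_gwei = amount_wei // 10**9  # rounded down to the closest Gwei
    assert amount_gwei >= MIN_DEPOSIT_AMOUNT
    return amount_gwei

assert deposit_amount_gwei(32 * 10**18) == 32 * 10**9  # a full 32 ETH deposit
```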

@@ -1,4 +1,4 @@
-# Ethereum 2.0 Phase 0 -- Beacon Chain Fork Choice
+# Phase 0 -- Beacon Chain Fork Choice

 ## Table of contents
 <!-- TOC -->
@@ -35,7 +35,7 @@

 ## Introduction

-This document is the beacon chain fork choice spec, part of Ethereum 2.0 Phase 0. It assumes the [beacon chain state transition function spec](./beacon-chain.md).
+This document is the beacon chain fork choice spec, part of Phase 0. It assumes the [beacon chain state transition function spec](./beacon-chain.md).

 ## Fork choice

@@ -51,7 +51,7 @@ Any of the above handlers that trigger an unhandled exception (e.g. a failed ass

 1) **Leap seconds**: Slots will last `SECONDS_PER_SLOT + 1` or `SECONDS_PER_SLOT - 1` seconds around leap seconds. This is automatically handled by [UNIX time](https://en.wikipedia.org/wiki/Unix_time).
 2) **Honest clocks**: Honest nodes are assumed to have clocks synchronized within `SECONDS_PER_SLOT` seconds of each other.
-3) **Eth1 data**: The large `ETH1_FOLLOW_DISTANCE` specified in the [honest validator document](./validator.md) should ensure that `state.latest_eth1_data` of the canonical Ethereum 2.0 chain remains consistent with the canonical Ethereum 1.0 chain. If not, emergency manual intervention will be required.
+3) **Eth1 data**: The large `ETH1_FOLLOW_DISTANCE` specified in the [honest validator document](./validator.md) should ensure that `state.latest_eth1_data` of the canonical beacon chain remains consistent with the canonical Ethereum proof-of-work chain. If not, emergency manual intervention will be required.
 4) **Manual forks**: Manual forks may arbitrarily change the fork choice rule but are expected to be enacted at epoch transitions, with the fork details reflected in `state.fork`.
 5) **Implementation**: The implementation found in this specification is constructed for ease of understanding rather than for optimization in computation, space, or any other resource. A number of optimized alternatives can be found [here](https://github.com/protolambda/lmd-ghost).

@@ -327,9 +327,13 @@ def on_tick(store: Store, time: uint64) -> None:
     # Not a new epoch, return
     if not (current_slot > previous_slot and compute_slots_since_epoch_start(current_slot) == 0):
         return
-    # Update store.justified_checkpoint if a better checkpoint is known
+
+    # Update store.justified_checkpoint if a better checkpoint on the store.finalized_checkpoint chain
     if store.best_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
-        store.justified_checkpoint = store.best_justified_checkpoint
+        finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
+        ancestor_at_finalized_slot = get_ancestor(store, store.best_justified_checkpoint.root, finalized_slot)
+        if ancestor_at_finalized_slot == store.finalized_checkpoint.root:
+            store.justified_checkpoint = store.best_justified_checkpoint
 ```

 #### `on_block`

@@ -1,13 +1,13 @@
-# Ethereum 2.0 networking specification
+# Phase 0 -- Networking

-This document contains the networking specification for Ethereum 2.0 clients.
+This document contains the networking specification for Phase 0.

 It consists of four main sections:

 1. A specification of the network fundamentals.
-2. A specification of the three network interaction *domains* of Eth2: (a) the gossip domain, (b) the discovery domain, and (c) the Req/Resp domain.
+2. A specification of the three network interaction *domains* of the proof-of-stake consensus layer: (a) the gossip domain, (b) the discovery domain, and (c) the Req/Resp domain.
 3. The rationale and further explanation for the design choices made in the previous two sections.
-4. An analysis of the maturity/state of the libp2p features required by this spec across the languages in which Eth2 clients are being developed.
+4. An analysis of the maturity/state of the libp2p features required by this spec across the languages in which clients are being developed.

 ## Table of contents
 <!-- TOC -->
@@ -19,7 +19,7 @@ It consists of four main sections:
   - [Encryption and identification](#encryption-and-identification)
   - [Protocol Negotiation](#protocol-negotiation)
   - [Multiplexing](#multiplexing)
-- [Eth2 network interaction domains](#eth2-network-interaction-domains)
+- [Consensus-layer network interaction domains](#consensus-layer-network-interaction-domains)
   - [Configuration](#configuration)
   - [MetaData](#metadata)
   - [The gossip domain: gossipsub](#the-gossip-domain-gossipsub)
@@ -113,7 +113,7 @@ It consists of four main sections:

 # Network fundamentals

-This section outlines the specification for the networking stack in Ethereum 2.0 clients.
+This section outlines the specification for the networking stack in Ethereum consensus-layer clients.

 ## Transport

@@ -163,7 +163,7 @@ and MAY support [yamux](https://github.com/hashicorp/yamux/blob/master/spec.md).
 If both are supported by the client, yamux MUST take precedence during negotiation.
 See the [Rationale](#design-decision-rationale) section below for tradeoffs.

-# Eth2 network interaction domains
+# Consensus-layer network interaction domains

 ## Configuration

@@ -435,7 +435,7 @@ The following validations MUST pass before forwarding the `attestation` on the s
 Attestation broadcasting is grouped into subnets defined by a topic.
 The number of subnets is defined via `ATTESTATION_SUBNET_COUNT`.
 The correct subnet for an attestation can be calculated with `compute_subnet_for_attestation`.
-`beacon_attestation_{subnet_id}` topics, are rotated through throughout the epoch in a similar fashion to rotating through shards in committees (future Eth2 upgrade).
+`beacon_attestation_{subnet_id}` topics are rotated through throughout the epoch in a similar fashion to rotating through shards in committees (future beacon chain upgrade).
 The subnets are rotated through with `committees_per_slot = get_committee_count_per_slot(state, attestation.data.target.epoch)` subnets per slot.

 Unaggregated attestations are sent as `Attestation`s to the subnet topic,
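For reference, the `compute_subnet_for_attestation` helper referenced in this hunk is defined in the Phase 0 validator document; it is reproduced here for convenience (not part of this diff) to show the rotation:

```python
def compute_subnet_for_attestation(committees_per_slot: uint64,
                                   slot: Slot,
                                   committee_index: CommitteeIndex) -> uint64:
    """
    Compute the correct subnet for an attestation for Phase 0.
    """
    slots_since_epoch_start = uint64(slot % SLOTS_PER_EPOCH)
    committees_since_epoch_start = committees_per_slot * slots_since_epoch_start
    return uint64((committees_since_epoch_start + committee_index) % ATTESTATION_SUBNET_COUNT)
```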
@@ -904,7 +904,7 @@ This integration enables the libp2p stack to subsequently form connections and s

 ### ENR structure

-The Ethereum Node Record (ENR) for an Ethereum 2.0 client MUST contain the following entries
+The Ethereum Node Record (ENR) for an Ethereum consensus client MUST contain the following entries
 (exclusive of the sequence number and signature, which MUST be present in an ENR):

 - The compressed secp256k1 publickey, 33 bytes (`secp256k1` field).
@@ -933,7 +933,7 @@ If a node's `MetaData.attnets` is composed of all zeros, the ENR MAY optionally
 #### `eth2` field

 ENRs MUST carry a generic `eth2` key with a 16-byte value of the node's current fork digest, next fork version,
-and next fork epoch to ensure connections are made with peers on the intended eth2 network.
+and next fork epoch to ensure connections are made with peers on the intended Ethereum network.

 | Key | Value |
 |:-------------|:--------------------|
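The 16-byte layout breaks down as 4 + 4 + 8 bytes; in the Phase 0 document this is the SSZ container `ENRForkID`, reproduced here for convenience (not part of this diff):

```python
class ENRForkID(Container):
    fork_digest: ForkDigest     # 4 bytes: current fork digest
    next_fork_version: Version  # 4 bytes
    next_fork_epoch: Epoch      # 8 bytes
```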
@@ -998,7 +998,7 @@ The libp2p QUIC transport inherently relies on TLS 1.3 per requirement in sectio
 of the [QUIC protocol specification](https://tools.ietf.org/html/draft-ietf-quic-transport-22#section-7)
 and the accompanying [QUIC-TLS document](https://tools.ietf.org/html/draft-ietf-quic-tls-22).

-The usage of one handshake procedure or the other shall be transparent to the Eth2 application layer,
+The usage of one handshake procedure or the other shall be transparent to the application layer,
 once the libp2p Host/Node object has been configured appropriately.

 ### What are the advantages of using TCP/QUIC/Websockets?

@@ -1019,7 +1019,7 @@ Provided that we use the same port numbers and encryption mechanisms as HTTP/3,
 and we may only become subject to standard IP-based firewall filtering—something we can counteract via other mechanisms.

 WebSockets and/or WebRTC transports are necessary for interaction with browsers,
-and will become increasingly important as we incorporate browser-based light clients to the Eth2 network.
+and will become increasingly important as we incorporate browser-based light clients to the Ethereum network.

 ### Why do we not just support a single transport?

@@ -1186,7 +1186,7 @@ Enforcing hashes for topic names would preclude us from leveraging such features
 No security or privacy guarantees are lost as a result of choosing plaintext topic names,
 since the domain is finite anyway, and calculating a digest's preimage would be trivial.

-Furthermore, the Eth2 topic names are shorter than their digest equivalents (assuming SHA-256 hash),
+Furthermore, the topic names are shorter than their digest equivalents (assuming SHA-256 hash),
 so hashing topics would bloat messages unnecessarily.

 ### Why are we using the `StrictNoSign` signature policy?

@@ -1211,7 +1211,7 @@ Some examples of where messages could be duplicated:
 ### Why are these specific gossip parameters chosen?

 - `D`, `D_low`, `D_high`, `D_lazy`: recommended defaults.
-- `heartbeat_interval`: 0.7 seconds, recommended for eth2 in the [GossipSub evaluation report by Protocol Labs](https://gateway.ipfs.io/ipfs/QmRAFP5DBnvNjdYSbWhEhVRJJDFCLpPyvew5GwCCB4VxM4).
+- `heartbeat_interval`: 0.7 seconds, recommended for the beacon chain in the [GossipSub evaluation report by Protocol Labs](https://gateway.ipfs.io/ipfs/QmRAFP5DBnvNjdYSbWhEhVRJJDFCLpPyvew5GwCCB4VxM4).
 - `fanout_ttl`: 60 seconds, recommended default.
   Fanout is primarily used by committees publishing attestations to subnets.
   This happens once per epoch per validator and the subnet changes each epoch

@@ -1285,7 +1285,7 @@ due to not being fully synced to ensure that such (amplified) DOS attacks are no

 In Phase 0, peers for attestation subnets will be found using the `attnets` entry in the ENR.

-Although this method will be sufficient for early phases of Eth2, we aim to use the more appropriate discv5 topics for this and other similar tasks in the future.
+Although this method will be sufficient for the early upgrades of the beacon chain, we aim to use the more appropriate discv5 topics for this and other similar tasks in the future.
 ENRs should ultimately not be used for this purpose.
 They are best suited to store identity, location, and capability information, rather than more volatile advertisements.

@@ -1319,7 +1319,7 @@ Requests are segregated by protocol ID to:
 4. Enable flexibility and agility for clients adopting spec changes that impact the request, by signalling to peers exactly which subset of new/old requests they support.
 5. Enable clients to explicitly choose backwards compatibility at the request granularity.
    Without this, clients would be forced to support entire versions of the coarser request protocol.
-6. Parallelise RFCs (or Eth2 EIPs).
+6. Parallelise RFCs (or EIPs).
    By decoupling requests from one another, each RFC that affects the request protocol can be deployed/tested/debated independently
    without relying on a synchronization point to version the general top-level protocol.
    1. This has the benefit that clients can explicitly choose which RFCs to deploy

@@ -1476,8 +1476,8 @@ discv5 supports self-certified, flexible peer records (ENRs) and topic-based adv
 On the other hand, libp2p Kademlia DHT is a fully-fledged DHT protocol/implementation
 with content routing and storage capabilities, both of which are irrelevant in this context.

-Eth 1.0 nodes will evolve to support discv5.
-By sharing the discovery network between Eth 1.0 and 2.0,
+Ethereum execution-layer nodes will evolve to support discv5.
+By sharing the discovery network between Ethereum consensus-layer and execution-layer clients,
 we benefit from the additive effect on network size that enhances resilience and resistance against certain attacks,
 to which smaller networks are more vulnerable.
 It should also help light clients of both networks find nodes with specific capabilities.

@@ -1502,17 +1502,17 @@ discv5 uses ENRs and we will presumably need to:

 1. Add `multiaddr` to the dictionary, so that nodes can advertise their multiaddr under a reserved namespace in ENRs. – and/or –
 2. Define a bi-directional conversion function between multiaddrs and the corresponding denormalized fields in an ENR
-   (ip, ip6, tcp, tcp6, etc.), for compatibility with nodes that do not support multiaddr natively (e.g. Eth 1.0 nodes).
+   (ip, ip6, tcp, tcp6, etc.), for compatibility with nodes that do not support multiaddr natively (e.g. Ethereum execution-layer nodes).

 ### Why do we not form ENRs and find peers until genesis block/state is known?

-Although client software might very well be running locally prior to the solidification of the eth2 genesis state and block,
+Although client software might very well be running locally prior to the solidification of the beacon chain genesis state and block,
 clients cannot form valid ENRs prior to this point.
 ENRs contain `fork_digest` which utilizes the `genesis_validators_root` for a cleaner separation between chains
 so prior to knowing genesis, we cannot use `fork_digest` to cleanly find peers on our intended chain.
 Once genesis data is known, we can then form ENRs and safely find peers.

-When using an eth1 deposit contract for deposits, `fork_digest` will be known `GENESIS_DELAY` (7 days in mainnet configuration) before `genesis_time`,
+When using a proof-of-work deposit contract for deposits, `fork_digest` will be known `GENESIS_DELAY` (7 days in mainnet configuration) before `genesis_time`,
 providing ample time to find peers and form initial connections and gossip subnets prior to genesis.

 ## Compression/Encoding

@@ -1586,4 +1586,4 @@ It is advisable to derive these lengths from the SSZ type definitions in use, to
 # libp2p implementations matrix

 This section will soon contain a matrix showing the maturity/state of the libp2p features required
-by this spec across the languages in which Eth2 clients are being developed.
+by this spec across the languages in which clients are being developed.
@ -1,6 +1,6 @@
|
|||
# Ethereum 2.0 Phase 0 -- Honest Validator
|
||||
# Phase 0 -- Honest Validator
|
||||
|
||||
This is an accompanying document to [Ethereum 2.0 Phase 0 -- The Beacon Chain](./beacon-chain.md), which describes the expected actions of a "validator" participating in the Ethereum 2.0 protocol.
|
||||
This is an accompanying document to [Phase 0 -- The Beacon Chain](./beacon-chain.md), which describes the expected actions of a "validator" participating in the Ethereum proof-of-stake protocol.
|
||||
|
||||
## Table of contents
|
||||
|
||||
|
@ -74,9 +74,9 @@ This is an accompanying document to [Ethereum 2.0 Phase 0 -- The Beacon Chain](.
|
|||
|
||||
## Introduction
|
||||
|
||||
This document represents the expected behavior of an "honest validator" with respect to Phase 0 of the Ethereum 2.0 protocol. This document does not distinguish between a "node" (i.e. the functionality of following and reading the beacon chain) and a "validator client" (i.e. the functionality of actively participating in consensus). The separation of concerns between these (potentially) two pieces of software is left as a design decision that is out of scope.
|
||||
This document represents the expected behavior of an "honest validator" with respect to Phase 0 of the Ethereum proof-of-stake protocol. This document does not distinguish between a "node" (i.e. the functionality of following and reading the beacon chain) and a "validator client" (i.e. the functionality of actively participating in consensus). The separation of concerns between these (potentially) two pieces of software is left as a design decision that is out of scope.
|
||||
|
||||
A validator is an entity that participates in the consensus of the Ethereum 2.0 protocol. This is an optional role for users in which they can post ETH as collateral and verify and attest to the validity of blocks to seek financial returns in exchange for building and securing the protocol. This is similar to proof-of-work networks in which miners provide collateral in the form of hardware/hash-power to seek returns in exchange for building and securing the protocol.
|
||||
A validator is an entity that participates in the consensus of the Ethereum proof-of-stake protocol. This is an optional role for users in which they can post ETH as collateral and verify and attest to the validity of blocks to seek financial returns in exchange for building and securing the protocol. This is similar to proof-of-work networks in which miners provide collateral in the form of hardware/hash-power to seek returns in exchange for building and securing the protocol.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
@ -162,7 +162,7 @@ The `withdrawal_credentials` field must be such that:
|
|||
* `withdrawal_credentials[1:12] == b'\x00' * 11`
|
||||
* `withdrawal_credentials[12:] == eth1_withdrawal_address`
|
||||
|
||||
After the merge of the current Ethereum application layer (Eth1) into the Beacon Chain (Eth2),
|
||||
After the merge of the current Ethereum application layer into the Beacon Chain,
|
||||
withdrawals to `eth1_withdrawal_address` will be normal ETH transfers (with no payload other than the validator's ETH)
|
||||
triggered by a user transaction that will set the gas price and gas limit as well pay fees.
|
||||
As long as the account or contract with address `eth1_withdrawal_address` can receive ETH transfers,
|
||||
|
@ -170,7 +170,7 @@ the future withdrawal protocol is agnostic to all other implementation details.
|
|||
|
||||
### Submit deposit
|
||||
|
||||
In Phase 0, all incoming validator deposits originate from the Ethereum 1.0 chain defined by `DEPOSIT_CHAIN_ID` and `DEPOSIT_NETWORK_ID`. Deposits are made to the [deposit contract](./deposit-contract.md) located at `DEPOSIT_CONTRACT_ADDRESS`.
|
||||
In Phase 0, all incoming validator deposits originate from the Ethereum proof-of-work chain defined by `DEPOSIT_CHAIN_ID` and `DEPOSIT_NETWORK_ID`. Deposits are made to the [deposit contract](./deposit-contract.md) located at `DEPOSIT_CONTRACT_ADDRESS`.
|
||||
|
||||
To submit a deposit:
|
||||
|
||||
|
@ -182,13 +182,13 @@ To submit a deposit:
|
|||
- Let `deposit_message` be a `DepositMessage` with all the `DepositData` contents except the `signature`.
|
||||
- Let `signature` be the result of `bls.Sign` over `compute_signing_root(deposit_message, domain)` with `domain=compute_domain(DOMAIN_DEPOSIT)`. (_Warning_: Deposits _must_ be signed with `GENESIS_FORK_VERSION`; calling `compute_domain` without a second argument defaults to the correct version).
|
||||
- Let `deposit_data_root` be `hash_tree_root(deposit_data)`.
|
||||
- Send a transaction on the Ethereum 1.0 chain to `DEPOSIT_CONTRACT_ADDRESS` executing `def deposit(pubkey: bytes[48], withdrawal_credentials: bytes[32], signature: bytes[96], deposit_data_root: bytes32)` along with a deposit of `amount` Gwei.
|
||||
- Send a transaction on the Ethereum proof-of-work chain to `DEPOSIT_CONTRACT_ADDRESS` executing `def deposit(pubkey: bytes[48], withdrawal_credentials: bytes[32], signature: bytes[96], deposit_data_root: bytes32)` along with a deposit of `amount` Gwei.
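
A sketch of the steps above, assuming the pyspec types and helpers (`DepositMessage`, `DepositData`, `compute_domain`, `compute_signing_root`, `bls`, `hash_tree_root`) are in scope; this is illustrative, not normative code:

```python
def build_deposit_data(pubkey: BLSPubkey,
                       withdrawal_credentials: Bytes32,
                       amount: Gwei,
                       privkey: int) -> Tuple[DepositData, Root]:
    deposit_message = DepositMessage(
        pubkey=pubkey,
        withdrawal_credentials=withdrawal_credentials,
        amount=amount,
    )
    # Defaults to GENESIS_FORK_VERSION, which deposits MUST be signed with
    domain = compute_domain(DOMAIN_DEPOSIT)
    signature = bls.Sign(privkey, compute_signing_root(deposit_message, domain))
    deposit_data = DepositData(
        pubkey=pubkey,
        withdrawal_credentials=withdrawal_credentials,
        amount=amount,
        signature=signature,
    )
    # The deposit_data_root is passed to the contract alongside the other fields
    return deposit_data, hash_tree_root(deposit_data)
```
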
|
||||
|
||||
*Note*: Deposits made for the same `pubkey` are treated as for the same validator. A singular `Validator` will be added to `state.validators` with each additional deposit amount added to the validator's balance. A validator can only be activated when total deposits for the validator pubkey meet or exceed `MAX_EFFECTIVE_BALANCE`.
|
||||
|
||||
### Process deposit
|
||||
|
||||
Deposits cannot be processed into the beacon chain until the Eth1 block in which they were deposited or any of its descendants is added to the beacon chain `state.eth1_data`. This takes _a minimum_ of `ETH1_FOLLOW_DISTANCE` Eth1 blocks (~8 hours) plus `EPOCHS_PER_ETH1_VOTING_PERIOD` epochs (~6.8 hours). Once the requisite Eth1 data is added, the deposit will normally be added to a beacon chain block and processed into the `state.validators` within an epoch or two. The validator is then in a queue to be activated.
|
||||
Deposits cannot be processed into the beacon chain until the proof-of-work block in which they were deposited or any of its descendants is added to the beacon chain `state.eth1_data`. This takes _a minimum_ of `ETH1_FOLLOW_DISTANCE` Eth1 blocks (~8 hours) plus `EPOCHS_PER_ETH1_VOTING_PERIOD` epochs (~6.8 hours). Once the requisite proof-of-work block data is added, the deposit will normally be added to a beacon chain block and processed into the `state.validators` within an epoch or two. The validator is then in a queue to be activated.
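
For reference, the quoted delays follow from the mainnet constants; the ~14s average proof-of-work block time used below is an empirical assumption, not a protocol constant:

```python
ETH1_FOLLOW_DISTANCE = 2**11          # 2,048 Eth1 blocks
EPOCHS_PER_ETH1_VOTING_PERIOD = 2**6  # 64 epochs
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12
AVG_ETH1_BLOCK_TIME = 14              # seconds (empirical assumption)

follow_hours = ETH1_FOLLOW_DISTANCE * AVG_ETH1_BLOCK_TIME / 3600
voting_hours = EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 3600
print(f"~{follow_hours:.1f}h follow distance + ~{voting_hours:.1f}h voting period")  # ~8.0h + ~6.8h
```
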
|
||||
|
||||
### Validator index
|
||||
|
||||
|
@ -401,9 +401,9 @@ Up to `MAX_ATTESTATIONS`, aggregate attestations can be included in the `block`.
|
|||
|
||||
##### Deposits
|
||||
|
||||
If there are any unprocessed deposits for the existing `state.eth1_data` (i.e. `state.eth1_data.deposit_count > state.eth1_deposit_index`), then pending deposits _must_ be added to the block. The expected number of deposits is exactly `min(MAX_DEPOSITS, eth1_data.deposit_count - state.eth1_deposit_index)`. These [`deposits`](./beacon-chain.md#deposit) are constructed from the `Deposit` logs from the [Eth1 deposit contract](./deposit-contract.md) and must be processed in sequential order. The deposits included in the `block` must satisfy the verification conditions found in [deposits processing](./beacon-chain.md#deposits).
|
||||
If there are any unprocessed deposits for the existing `state.eth1_data` (i.e. `state.eth1_data.deposit_count > state.eth1_deposit_index`), then pending deposits _must_ be added to the block. The expected number of deposits is exactly `min(MAX_DEPOSITS, eth1_data.deposit_count - state.eth1_deposit_index)`. These [`deposits`](./beacon-chain.md#deposit) are constructed from the `Deposit` logs from the [deposit contract](./deposit-contract.md) and must be processed in sequential order. The deposits included in the `block` must satisfy the verification conditions found in [deposits processing](./beacon-chain.md#deposits).
|
||||
|
||||
The `proof` for each deposit must be constructed against the deposit root contained in `state.eth1_data` rather than the deposit root at the time the deposit was initially logged from the 1.0 chain. This entails storing a full deposit merkle tree locally and computing updated proofs against the `eth1_data.deposit_root` as needed. See [`minimal_merkle.py`](https://github.com/ethereum/research/blob/master/spec_pythonizer/utils/merkle_minimal.py) for a sample implementation.
|
||||
The `proof` for each deposit must be constructed against the deposit root contained in `state.eth1_data` rather than the deposit root at the time the deposit was initially logged from the proof-of-work chain. This entails storing a full deposit merkle tree locally and computing updated proofs against the `eth1_data.deposit_root` as needed. See [`minimal_merkle.py`](https://github.com/ethereum/research/blob/master/spec_pythonizer/utils/merkle_minimal.py) for a sample implementation.
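
As a sketch of the inclusion rule above (assuming pyspec types in scope; the helper name is illustrative):

```python
def get_expected_deposit_count(state: BeaconState) -> uint64:
    # Deposits must be included until the on-chain deposit index catches up
    # with the deposit count voted in via state.eth1_data
    return min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)
```
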
|
||||
|
||||
##### Voluntary exits
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Ethereum 2.0 Phase 0 -- Weak Subjectivity Guide
|
||||
# Phase 0 -- Weak Subjectivity Guide
|
||||
|
||||
## Table of contents
|
||||
|
||||
|
@ -26,11 +26,11 @@
|
|||
|
||||
## Introduction
|
||||
|
||||
This document is a guide for implementing the Weak Subjectivity protections in Phase 0 of Ethereum 2.0.
|
||||
This document is a guide for implementing the Weak Subjectivity protections in Phase 0.
|
||||
This document is still a work-in-progress, and is subject to large changes.
|
||||
For more information about weak subjectivity and why it is required, please refer to:
|
||||
|
||||
- [Weak Subjectivity in Eth2.0](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2)
|
||||
- [Weak Subjectivity in Ethereum Proof-of-Stake](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2)
|
||||
- [Proof of Stake: How I Learned to Love Weak Subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/)
|
||||
|
||||
## Prerequisites
|
||||
|
@ -77,7 +77,7 @@ a safety margin of at least `1/3 - SAFETY_DECAY/100`.
|
|||
|
||||
A detailed analysis of the calculation of the weak subjectivity period is made in [this report](https://github.com/runtimeverification/beacon-chain-verification/blob/master/weak-subjectivity/weak-subjectivity-analysis.pdf).
|
||||
|
||||
*Note*: The expressions in the report use fractions, whereas eth2.0-specs uses only `uint64` arithmetic. The expressions have been simplified to avoid computing fractions, and more details can be found [here](https://www.overleaf.com/read/wgjzjdjpvpsd).
|
||||
*Note*: The expressions in the report use fractions, whereas the consensus-specs only use `uint64` arithmetic. The expressions have been simplified to avoid computing fractions, and more details can be found [here](https://www.overleaf.com/read/wgjzjdjpvpsd).
|
||||
|
||||
*Note*: The calculations here use `Ether` instead of `Gwei`, because the large magnitude of balances in `Gwei` can cause an overflow when computing with `uint64` arithmetic. Using `Ether` reduces the magnitude of the multiplicative factors by an order of `ETH_TO_GWEI` (`= 10**9`) and avoids the scope for overflows in `uint64`.
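
A quick illustration of the overflow concern (not spec code; the total balance is an example figure):

```python
ETH_TO_GWEI = 10**9
UINT64_MAX = 2**64 - 1

total_balance_eth = 10_000_000  # a plausible total active balance
total_balance_gwei = total_balance_eth * ETH_TO_GWEI

# A modest multiplicative factor already overflows uint64 when working in Gwei...
assert total_balance_gwei * 2**11 > UINT64_MAX
# ...but the same computation stays comfortably in range in Ether
assert total_balance_eth * 2**11 < UINT64_MAX
```
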
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Ethereum 2.0 Sharding -- Beacon Chain changes
|
||||
# Sharding -- The Beacon Chain
|
||||
|
||||
**Notice**: This document is a work-in-progress for researchers and implementers.
|
||||
|
||||
|
@ -9,14 +9,18 @@
|
|||
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
|
||||
|
||||
- [Introduction](#introduction)
|
||||
- [Glossary](#glossary)
|
||||
- [Custom types](#custom-types)
|
||||
- [Constants](#constants)
|
||||
- [Misc](#misc)
|
||||
- [Domain types](#domain-types)
|
||||
- [Shard Work Status](#shard-work-status)
|
||||
- [Preset](#preset)
|
||||
- [Misc](#misc-1)
|
||||
- [Shard block samples](#shard-block-samples)
|
||||
- [Participation flag indices](#participation-flag-indices)
|
||||
- [Incentivization weights](#incentivization-weights)
|
||||
- [Preset](#preset)
|
||||
- [Misc](#misc-2)
|
||||
- [Shard blob samples](#shard-blob-samples)
|
||||
- [Precomputed size verification points](#precomputed-size-verification-points)
|
||||
- [Gwei values](#gwei-values)
|
||||
- [Configuration](#configuration)
|
||||
|
@ -25,26 +29,29 @@
|
|||
- [`BeaconBlockBody`](#beaconblockbody)
|
||||
- [`BeaconState`](#beaconstate)
|
||||
- [New containers](#new-containers)
|
||||
- [`Builder`](#builder)
|
||||
- [`DataCommitment`](#datacommitment)
|
||||
- [`AttestedDataCommitment`](#attesteddatacommitment)
|
||||
- [`ShardBlobBody`](#shardblobbody)
|
||||
- [`ShardBlobBodySummary`](#shardblobbodysummary)
|
||||
- [`ShardBlob`](#shardblob)
|
||||
- [`ShardBlobHeader`](#shardblobheader)
|
||||
- [`SignedShardBlob`](#signedshardblob)
|
||||
- [`SignedShardBlobHeader`](#signedshardblobheader)
|
||||
- [`PendingShardHeader`](#pendingshardheader)
|
||||
- [`ShardBlobReference`](#shardblobreference)
|
||||
- [`SignedShardBlobReference`](#signedshardblobreference)
|
||||
- [`ShardProposerSlashing`](#shardproposerslashing)
|
||||
- [`ShardWork`](#shardwork)
|
||||
- [Helper functions](#helper-functions)
|
||||
- [Misc](#misc-2)
|
||||
- [Misc](#misc-3)
|
||||
- [`next_power_of_two`](#next_power_of_two)
|
||||
- [`compute_previous_slot`](#compute_previous_slot)
|
||||
- [`compute_updated_gasprice`](#compute_updated_gasprice)
|
||||
- [`compute_updated_sample_price`](#compute_updated_sample_price)
|
||||
- [`compute_committee_source_epoch`](#compute_committee_source_epoch)
|
||||
- [`batch_apply_participation_flag`](#batch_apply_participation_flag)
|
||||
- [Beacon state accessors](#beacon-state-accessors)
|
||||
- [Updated `get_committee_count_per_slot`](#updated-get_committee_count_per_slot)
|
||||
- [`get_active_shard_count`](#get_active_shard_count)
|
||||
- [`get_shard_committee`](#get_shard_committee)
|
||||
- [`compute_proposer_index`](#compute_proposer_index)
|
||||
- [`get_shard_proposer_index`](#get_shard_proposer_index)
|
||||
- [`get_start_shard`](#get_start_shard)
|
||||
- [`compute_shard_from_committee_index`](#compute_shard_from_committee_index)
|
||||
|
@ -56,7 +63,6 @@
|
|||
- [`process_shard_proposer_slashing`](#process_shard_proposer_slashing)
|
||||
- [Epoch transition](#epoch-transition)
|
||||
- [`process_pending_shard_confirmations`](#process_pending_shard_confirmations)
|
||||
- [`charge_confirmed_shard_fees`](#charge_confirmed_shard_fees)
|
||||
- [`reset_pending_shard_work`](#reset_pending_shard_work)
|
||||
|
||||
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
|
||||
|
@ -69,6 +75,12 @@ This document describes the extensions made to the Phase 0 design of The Beacon
|
|||
based on the ideas [here](https://hackmd.io/G-Iy5jqyT7CXWEz8Ssos8g) and more broadly [here](https://arxiv.org/abs/1809.09044),
|
||||
using KZG10 commitments to commit to data to remove any need for fraud proofs (and hence, safety-critical synchrony assumptions) in the design.
|
||||
|
||||
### Glossary
|
||||
|
||||
- **Data**: A list of KZG points that a byte string is translated into
|
||||
- **Blob**: Data with commitments and meta-data, like a flattened bundle of L2 transactions.
|
||||
- **Builder**: Independent actor that builds blobs and bids for proposal slots via fee-paying blob-headers, responsible for availability.
|
||||
- **Shard proposer**: Validator taking bids from blob builders for shard data opportunity, co-signs with builder to propose the blob.
|
||||
|
||||
## Custom types
|
||||
|
||||
|
@ -79,6 +91,7 @@ We define the following Python custom types for type hinting and readability:
|
|||
| `Shard` | `uint64` | A shard number |
|
||||
| `BLSCommitment` | `Bytes48` | A G1 curve point |
|
||||
| `BLSPoint` | `uint256` | A number `x` in the range `0 <= x < MODULUS` |
|
||||
| `BuilderIndex` | `uint64` | Builder registry index |
|
||||
|
||||
## Constants
|
||||
|
||||
|
@ -97,8 +110,7 @@ The following values are (non-configurable) constants used throughout the specif
|
|||
|
||||
| Name | Value |
|
||||
| - | - |
|
||||
| `DOMAIN_SHARD_PROPOSER` | `DomainType('0x80000000')` |
|
||||
| `DOMAIN_SHARD_COMMITTEE` | `DomainType('0x81000000')` |
|
||||
| `DOMAIN_SHARD_BLOB` | `DomainType('0x80000000')` |
|
||||
|
||||
### Shard Work Status
|
||||
|
||||
|
@ -108,6 +120,30 @@ The following values are (non-configurable) constants used throughout the specif
|
|||
| `SHARD_WORK_CONFIRMED` | `1` | Confirmed, reduced to just the commitment |
|
||||
| `SHARD_WORK_PENDING` | `2` | Pending, a list of competing headers |
|
||||
|
||||
### Misc
|
||||
|
||||
TODO: `PARTICIPATION_FLAG_WEIGHTS` backwards-compatibility is difficult, depends on usage.
|
||||
|
||||
| Name | Value |
|
||||
| - | - |
|
||||
| `PARTICIPATION_FLAG_WEIGHTS` | `[TIMELY_SOURCE_WEIGHT, TIMELY_TARGET_WEIGHT, TIMELY_HEAD_WEIGHT, TIMELY_SHARD_WEIGHT]` |
|
||||
|
||||
### Participation flag indices
|
||||
|
||||
| Name | Value |
|
||||
| - | - |
|
||||
| `TIMELY_SHARD_FLAG_INDEX` | `3` |
|
||||
|
||||
### Incentivization weights
|
||||
|
||||
TODO: determine weight for shard attestations
|
||||
|
||||
| Name | Value |
|
||||
| - | - |
|
||||
| `TIMELY_SHARD_WEIGHT` | `uint64(8)` |
|
||||
|
||||
TODO: `WEIGHT_DENOMINATOR` needs to be adjusted, but this breaks a lot of Altair code.
|
||||
|
||||
## Preset
|
||||
|
||||
### Misc
|
||||
|
@ -115,17 +151,19 @@ The following values are (non-configurable) constants used throughout the specif
|
|||
| Name | Value | Notes |
|
||||
| - | - | - |
|
||||
| `MAX_SHARDS` | `uint64(2**10)` (= 1,024) | Theoretical max shard count (used to determine data structure sizes) |
|
||||
| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `uint64(2**3)` (= 8) | Gasprice may decrease/increase by at most exp(1 / this value) *per epoch* |
|
||||
| `INITIAL_ACTIVE_SHARDS` | `uint64(2**6)` (= 64) | Initial shard count |
|
||||
| `SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT` | `uint64(2**3)` (= 8) | Sample price may decrease/increase by at most exp(1 / this value) *per epoch* |
|
||||
| `MAX_SHARD_PROPOSER_SLASHINGS` | `2**4` (= 16) | Maximum amount of shard proposer slashing operations per block |
|
||||
| `MAX_SHARD_HEADERS_PER_SHARD` | `4` | |
|
||||
| `SHARD_STATE_MEMORY_SLOTS` | `uint64(2**8)` (= 256) | Number of slots for which shard commitments and confirmation status are directly available in the state |
|
||||
| `BLOB_BUILDER_REGISTRY_LIMIT` | `uint64(2**40)` (= 1,099,511,627,776) | Maximum number of shard blob builders |
|
||||
|
||||
### Shard block samples
|
||||
### Shard blob samples
|
||||
|
||||
| Name | Value | Notes |
|
||||
| - | - | - |
|
||||
| `MAX_SAMPLES_PER_BLOCK` | `uint64(2**11)` (= 2,048) | 248 * 2,048 = 507,904 bytes |
|
||||
| `TARGET_SAMPLES_PER_BLOCK` | `uint64(2**10)` (= 1,024) | 248 * 1,024 = 253,952 bytes |
|
||||
| `MAX_SAMPLES_PER_BLOB` | `uint64(2**11)` (= 2,048) | 248 * 2,048 = 507,904 bytes |
|
||||
| `TARGET_SAMPLES_PER_BLOB` | `uint64(2**10)` (= 1,024) | 248 * 1,024 = 253,952 bytes |
|
||||
|
||||
### Precomputed size verification points
|
||||
|
||||
|
@ -133,20 +171,19 @@ The following values are (non-configurable) constants used throughout the specif
|
|||
| - | - |
|
||||
| `G1_SETUP` | Type `List[G1]`. The G1-side trusted setup `[G, G*s, G*s**2....]`; note that the first point is the generator. |
|
||||
| `G2_SETUP` | Type `List[G2]`. The G2-side trusted setup `[G, G*s, G*s**2....]` |
|
||||
| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(MAX_SAMPLES_PER_BLOCK * POINTS_PER_SAMPLE), MODULUS)` |
|
||||
| `ROOT_OF_UNITY` | `pow(PRIMITIVE_ROOT_OF_UNITY, (MODULUS - 1) // int(MAX_SAMPLES_PER_BLOB * POINTS_PER_SAMPLE), MODULUS)` |
|
||||
|
||||
### Gwei values
|
||||
|
||||
| Name | Value | Unit | Description |
|
||||
| - | - | - | - |
|
||||
| `MAX_GASPRICE` | `Gwei(2**33)` (= 8,589,934,592) | Gwei | Max gasprice charged for a TARGET-sized shard block |
|
||||
| `MIN_GASPRICE` | `Gwei(2**3)` (= 8) | Gwei | Min gasprice charged for a TARGET-sized shard block |
|
||||
| `MAX_SAMPLE_PRICE` | `Gwei(2**33)` (= 8,589,934,592) | Gwei | Max sample charged for a TARGET-sized shard blob |
|
||||
| `MIN_SAMPLE_PRICE` | `Gwei(2**3)` (= 8) | Gwei | Min sample price charged for a TARGET-sized shard blob |
|
||||
|
||||
## Configuration
|
||||
|
||||
| Name | Value | Notes |
|
||||
| - | - | - |
|
||||
| `INITIAL_ACTIVE_SHARDS` | `uint64(2**6)` (= 64) | Initial shard count |
|
||||
Note: Some preset variables may become run-time configurable for testnets, but default to a preset while the spec is unstable.
|
||||
E.g. `INITIAL_ACTIVE_SHARDS`, `MAX_SAMPLES_PER_BLOB` and `TARGET_SAMPLES_PER_BLOB`.
|
||||
|
||||
## Updated containers
|
||||
|
||||
|
@ -163,8 +200,8 @@ class AttestationData(Container):
|
|||
# FFG vote
|
||||
source: Checkpoint
|
||||
target: Checkpoint
|
||||
# Shard header root
|
||||
shard_header_root: Root # [New in Sharding]
|
||||
# Hash-tree-root of ShardBlob
|
||||
shard_blob_root: Root # [New in Sharding]
|
||||
```
|
||||
|
||||
### `BeaconBlockBody`
|
||||
|
@ -179,19 +216,24 @@ class BeaconBlockBody(merge.BeaconBlockBody): # [extends The Merge block body]
|
|||
|
||||
```python
|
||||
class BeaconState(merge.BeaconState):
|
||||
# [Updated fields] (Warning: this changes with Altair, Sharding will rebase to use participation-flags)
|
||||
previous_epoch_attestations: List[PendingAttestation, MAX_ATTESTATIONS * SLOTS_PER_EPOCH]
|
||||
current_epoch_attestations: List[PendingAttestation, MAX_ATTESTATIONS * SLOTS_PER_EPOCH]
|
||||
# [New fields]
|
||||
# Blob builder registry.
|
||||
blob_builders: List[Builder, BLOB_BUILDER_REGISTRY_LIMIT]
|
||||
blob_builder_balances: List[Gwei, BLOB_BUILDER_REGISTRY_LIMIT]
|
||||
# A ring buffer of the latest slots, with information per active shard.
|
||||
shard_buffer: Vector[List[ShardWork, MAX_SHARDS], SHARD_STATE_MEMORY_SLOTS]
|
||||
shard_gasprice: uint64
|
||||
shard_sample_price: uint64
|
||||
```
|
||||
|
||||
## New containers
|
||||
|
||||
The shard data itself is network-layer only, and can be found in the [P2P specification](./p2p-interface.md).
|
||||
The beacon chain registers just the commitments of the shard data.
|
||||
### `Builder`
|
||||
|
||||
```python
|
||||
class Builder(Container):
|
||||
pubkey: BLSPubkey
|
||||
# TODO: fields for either an expiry mechanism (refunding execution account with remaining balance)
|
||||
# and/or a builder-transaction mechanism.
|
||||
```
|
||||
|
||||
### `DataCommitment`
|
||||
|
||||
|
@ -200,41 +242,117 @@ class DataCommitment(Container):
|
|||
# KZG10 commitment to the data
|
||||
point: BLSCommitment
|
||||
# Length of the data in samples
|
||||
length: uint64
|
||||
samples_count: uint64
|
||||
```
|
||||
|
||||
### `AttestedDataCommitment`
|
||||
|
||||
```python
|
||||
class AttestedDataCommitment(Container):
|
||||
# KZG10 commitment to the data, and length
|
||||
commitment: DataCommitment
|
||||
# hash_tree_root of the ShardBlobHeader (stored so that attestations can be checked against it)
|
||||
root: Root
|
||||
# The proposer who included the shard-header
|
||||
includer_index: ValidatorIndex
|
||||
```
|
||||
|
||||
### `ShardBlobBody`
|
||||
|
||||
Unsigned shard data, bundled by a shard-builder.
|
||||
The body must be unique; signing different bodies as shard proposer for the same `(slot, shard)` is slashable.
|
||||
|
||||
```python
|
||||
class ShardBlobBody(Container):
|
||||
# The actual data commitment
|
||||
commitment: DataCommitment
|
||||
# Proof that the degree < commitment.samples_count * POINTS_PER_SAMPLE
|
||||
degree_proof: BLSCommitment
|
||||
# The actual data. Should match the commitment and degree proof.
|
||||
data: List[BLSPoint, POINTS_PER_SAMPLE * MAX_SAMPLES_PER_BLOB]
|
||||
# Latest block root of the Beacon Chain, before shard_blob.slot
|
||||
beacon_block_root: Root
|
||||
# fee payment fields (EIP 1559 like)
|
||||
# TODO: express in MWei instead?
|
||||
max_priority_fee_per_sample: Gwei
|
||||
max_fee_per_sample: Gwei
|
||||
```
|
||||
|
||||
### `ShardBlobBodySummary`
|
||||
|
||||
Summary version of the `ShardBlobBody`, omitting the data payload, while preserving the data-commitments.
|
||||
|
||||
The commitments are not further collapsed to a single hash,
|
||||
to avoid an extra network roundtrip between proposer and builder, so that the header can be included on-chain more quickly.
|
||||
|
||||
```python
|
||||
class ShardBlobBodySummary(Container):
|
||||
# The actual data commitment
|
||||
commitment: DataCommitment
|
||||
# Proof that the degree < commitment.length
|
||||
# Proof that the degree < commitment.samples_count * POINTS_PER_SAMPLE
|
||||
degree_proof: BLSCommitment
|
||||
# Hash-tree-root as summary of the data field
|
||||
data_root: Root
|
||||
# Latest block root of the Beacon Chain, before shard_blob.slot
|
||||
beacon_block_root: Root
|
||||
# fee payment fields (EIP 1559 like)
|
||||
# TODO: express in MWei instead?
|
||||
max_priority_fee_per_sample: Gwei
|
||||
max_fee_per_sample: Gwei
|
||||
```
|
||||
|
||||
### `ShardBlob`
|
||||
|
||||
`ShardBlobBody` wrapped with the header data that is unique to the shard blob proposal.
|
||||
|
||||
```python
|
||||
class ShardBlob(Container):
|
||||
slot: Slot
|
||||
shard: Shard
|
||||
# Builder of the data, pays data-fee to proposer
|
||||
builder_index: BuilderIndex
|
||||
# Proposer of the shard-blob
|
||||
proposer_index: ValidatorIndex
|
||||
# Blob contents
|
||||
body: ShardBlobBody
|
||||
```
|
||||
|
||||
### `ShardBlobHeader`
|
||||
|
||||
Header version of `ShardBlob`.
|
||||
|
||||
```python
|
||||
class ShardBlobHeader(Container):
|
||||
# Slot and shard that this header is intended for
|
||||
slot: Slot
|
||||
shard: Shard
|
||||
# SSZ-summary of ShardBlobBody
|
||||
body_summary: ShardBlobBodySummary
|
||||
# Builder of the data, pays data-fee to proposer
|
||||
builder_index: BuilderIndex
|
||||
# Proposer of the shard-blob
|
||||
proposer_index: ValidatorIndex
|
||||
# Blob contents, without the full data
|
||||
body_summary: ShardBlobBodySummary
|
||||
```
|
||||
|
||||
### `SignedShardBlob`
|
||||
|
||||
Full blob data, signed by the shard builder (ensuring fee payment) and shard proposer (ensuring a single proposal).
|
||||
|
||||
```python
|
||||
class SignedShardBlob(Container):
|
||||
message: ShardBlob
|
||||
signature: BLSSignature
|
||||
```
|
||||
|
||||
### `SignedShardBlobHeader`
|
||||
|
||||
Header of the blob; the signature is equally applicable to `SignedShardBlob`.
|
||||
Shard proposers can accept `SignedShardBlobHeader` as a data-transaction by co-signing the header.
|
||||
|
||||
```python
|
||||
class SignedShardBlobHeader(Container):
|
||||
message: ShardBlobHeader
|
||||
# Signature by builder.
|
||||
# Once accepted by the proposer, the signature is the aggregate of both.
|
||||
signature: BLSSignature
|
||||
```
|
||||
|
||||
|
@ -242,10 +360,8 @@ class SignedShardBlobHeader(Container):
|
|||
|
||||
```python
|
||||
class PendingShardHeader(Container):
|
||||
# KZG10 commitment to the data
|
||||
commitment: DataCommitment
|
||||
# hash_tree_root of the ShardHeader (stored so that attestations can be checked against it)
|
||||
root: Root
|
||||
# The commitment that is attested
|
||||
attested: AttestedDataCommitment
|
||||
# Who voted for the header
|
||||
votes: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]
|
||||
# Sum of effective balances of votes
|
||||
|
@ -256,41 +372,43 @@ class PendingShardHeader(Container):
|
|||
|
||||
### `ShardBlobReference`
|
||||
|
||||
Reference version of `ShardBlobHeader`, substituting the body for just a hash-tree-root.
|
||||
|
||||
```python
|
||||
class ShardBlobReference(Container):
|
||||
# Slot and shard that this reference is intended for
|
||||
slot: Slot
|
||||
shard: Shard
|
||||
# Hash-tree-root of ShardBlobBody
|
||||
body_root: Root
|
||||
# Builder of the data
|
||||
builder_index: BuilderIndex
|
||||
# Proposer of the shard-blob
|
||||
proposer_index: ValidatorIndex
|
||||
```
|
||||
|
||||
### `SignedShardBlobReference`
|
||||
|
||||
```python
|
||||
class SignedShardBlobReference(Container):
|
||||
message: ShardBlobReference
|
||||
signature: BLSSignature
|
||||
# Blob hash-tree-root for slashing reference
|
||||
body_root: Root
|
||||
```
|
||||
|
||||
### `ShardProposerSlashing`
|
||||
|
||||
```python
|
||||
class ShardProposerSlashing(Container):
|
||||
signed_reference_1: SignedShardBlobReference
|
||||
signed_reference_2: SignedShardBlobReference
|
||||
slot: Slot
|
||||
shard: Shard
|
||||
proposer_index: ValidatorIndex
|
||||
builder_index_1: BuilderIndex
|
||||
builder_index_2: BuilderIndex
|
||||
body_root_1: Root
|
||||
body_root_2: Root
|
||||
signature_1: BLSSignature
|
||||
signature_2: BLSSignature
|
||||
```
|
||||
|
||||
### `ShardWork`
|
||||
|
||||
```python
|
||||
class ShardWork(Container):
|
||||
# Upon confirmation the data is reduced to just the header.
|
||||
# Upon confirmation the data is reduced to just the commitment.
|
||||
status: Union[ # See Shard Work Status enum
|
||||
None, # SHARD_WORK_UNCONFIRMED
|
||||
DataCommitment, # SHARD_WORK_CONFIRMED
|
||||
AttestedDataCommitment, # SHARD_WORK_CONFIRMED
|
||||
List[PendingShardHeader, MAX_SHARD_HEADERS_PER_SHARD] # SHARD_WORK_PENDING
|
||||
]
|
||||
```
|
||||
|
@ -316,18 +434,17 @@ def compute_previous_slot(slot: Slot) -> Slot:
|
|||
return Slot(0)
|
||||
```
|
||||
|
||||
#### `compute_updated_gasprice`
|
||||
#### `compute_updated_sample_price`
|
||||
|
||||
```python
|
||||
def compute_updated_gasprice(prev_gasprice: Gwei, shard_block_length: uint64, adjustment_quotient: uint64) -> Gwei:
|
||||
if shard_block_length > TARGET_SAMPLES_PER_BLOCK:
|
||||
delta = max(1, prev_gasprice * (shard_block_length - TARGET_SAMPLES_PER_BLOCK)
|
||||
// TARGET_SAMPLES_PER_BLOCK // adjustment_quotient)
|
||||
return min(prev_gasprice + delta, MAX_GASPRICE)
|
||||
def compute_updated_sample_price(prev_price: Gwei, samples_length: uint64, active_shards: uint64) -> Gwei:
|
||||
adjustment_quotient = active_shards * SLOTS_PER_EPOCH * SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT
|
||||
if samples_length > TARGET_SAMPLES_PER_BLOB:
|
||||
delta = max(1, prev_price * (samples_length - TARGET_SAMPLES_PER_BLOB) // TARGET_SAMPLES_PER_BLOB // adjustment_quotient)
|
||||
return min(prev_price + delta, MAX_SAMPLE_PRICE)
|
||||
else:
|
||||
delta = max(1, prev_gasprice * (TARGET_SAMPLES_PER_BLOCK - shard_block_length)
|
||||
// TARGET_SAMPLES_PER_BLOCK // adjustment_quotient)
|
||||
return max(prev_gasprice, MIN_GASPRICE + delta) - delta
|
||||
delta = max(1, prev_price * (TARGET_SAMPLES_PER_BLOB - samples_length) // TARGET_SAMPLES_PER_BLOB // adjustment_quotient)
|
||||
return max(prev_price, MIN_SAMPLE_PRICE + delta) - delta
|
||||
```
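
For intuition, a worked example with the preset values above (illustrative numbers, not spec code):

```python
SLOTS_PER_EPOCH = 32
SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT = 8
TARGET_SAMPLES_PER_BLOB = 2**10
MAX_SAMPLES_PER_BLOB = 2**11

active_shards = 64
adjustment_quotient = active_shards * SLOTS_PER_EPOCH * SAMPLE_PRICE_ADJUSTMENT_COEFFICIENT  # 16,384

# A maximally full blob moves the price up by ~1/adjustment_quotient:
prev_price = 10**6  # Gwei, arbitrary example value
delta = max(1, prev_price * (MAX_SAMPLES_PER_BLOB - TARGET_SAMPLES_PER_BLOB)
            // TARGET_SAMPLES_PER_BLOB // adjustment_quotient)
print(delta)  # 61 Gwei, i.e. ~0.006% per confirmed full blob
```
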
|
||||
|
||||
#### `compute_committee_source_epoch`
|
||||
|
@ -343,6 +460,20 @@ def compute_committee_source_epoch(epoch: Epoch, period: uint64) -> Epoch:
|
|||
return source_epoch
|
||||
```
|
||||
|
||||
#### `batch_apply_participation_flag`
|
||||
|
||||
```python
|
||||
def batch_apply_participation_flag(state: BeaconState, bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE],
|
||||
epoch: Epoch, full_committee: Sequence[ValidatorIndex], flag_index: int) -> None:
|
||||
if epoch == get_current_epoch(state):
|
||||
epoch_participation = state.current_epoch_participation
|
||||
else:
|
||||
epoch_participation = state.previous_epoch_participation
|
||||
for bit, index in zip(bits, full_committee):
|
||||
if bit:
|
||||
epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
|
||||
```
|
||||
|
||||
### Beacon state accessors
|
||||
|
||||
#### Updated `get_committee_count_per_slot`
|
||||
|
@ -369,73 +500,17 @@ def get_active_shard_count(state: BeaconState, epoch: Epoch) -> uint64:
|
|||
return INITIAL_ACTIVE_SHARDS
|
||||
```
|
||||
|
||||
#### `get_shard_committee`
|
||||
|
||||
```python
|
||||
def get_shard_committee(beacon_state: BeaconState, epoch: Epoch, shard: Shard) -> Sequence[ValidatorIndex]:
|
||||
"""
|
||||
Return the shard committee of the given ``epoch`` of the given ``shard``.
|
||||
"""
|
||||
source_epoch = compute_committee_source_epoch(epoch, SHARD_COMMITTEE_PERIOD)
|
||||
active_validator_indices = get_active_validator_indices(beacon_state, source_epoch)
|
||||
seed = get_seed(beacon_state, source_epoch, DOMAIN_SHARD_COMMITTEE)
|
||||
return compute_committee(
|
||||
indices=active_validator_indices,
|
||||
seed=seed,
|
||||
index=shard,
|
||||
count=get_active_shard_count(beacon_state, epoch),
|
||||
)
|
||||
```
|
||||
|
||||
#### `compute_proposer_index`
|
||||
|
||||
Updated version to get a proposer index that will only allow proposers with a certain minimum balance,
|
||||
ensuring that the balance is always sufficient to cover gas costs.
|
||||
|
||||
```python
|
||||
def compute_proposer_index(beacon_state: BeaconState,
|
||||
indices: Sequence[ValidatorIndex],
|
||||
seed: Bytes32,
|
||||
min_effective_balance: Gwei = Gwei(0)) -> ValidatorIndex:
|
||||
"""
|
||||
Return from ``indices`` a random index sampled by effective balance.
|
||||
"""
|
||||
assert len(indices) > 0
|
||||
MAX_RANDOM_BYTE = 2**8 - 1
|
||||
i = uint64(0)
|
||||
total = uint64(len(indices))
|
||||
while True:
|
||||
candidate_index = indices[compute_shuffled_index(i % total, total, seed)]
|
||||
random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
|
||||
effective_balance = beacon_state.validators[candidate_index].effective_balance
|
||||
if effective_balance <= min_effective_balance:
|
||||
continue
|
||||
if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
|
||||
return candidate_index
|
||||
i += 1
|
||||
```
|
||||
|
||||
#### `get_shard_proposer_index`
|
||||
|
||||
```python
|
||||
def get_shard_proposer_index(beacon_state: BeaconState, slot: Slot, shard: Shard) -> ValidatorIndex:
|
||||
def get_shard_proposer_index(state: BeaconState, slot: Slot, shard: Shard) -> ValidatorIndex:
|
||||
"""
|
||||
Return the proposer's index of shard block at ``slot``.
|
||||
"""
|
||||
epoch = compute_epoch_at_slot(slot)
|
||||
committee = get_shard_committee(beacon_state, epoch, shard)
|
||||
seed = hash(get_seed(beacon_state, epoch, DOMAIN_SHARD_PROPOSER) + uint_to_bytes(slot))
|
||||
|
||||
# Proposer must have sufficient balance to pay for worst case fee burn
|
||||
EFFECTIVE_BALANCE_MAX_DOWNWARD_DEVIATION = (
|
||||
EFFECTIVE_BALANCE_INCREMENT - EFFECTIVE_BALANCE_INCREMENT
|
||||
* HYSTERESIS_DOWNWARD_MULTIPLIER // HYSTERESIS_QUOTIENT
|
||||
)
|
||||
min_effective_balance = (
|
||||
beacon_state.shard_gasprice * MAX_SAMPLES_PER_BLOCK // TARGET_SAMPLES_PER_BLOCK
|
||||
+ EFFECTIVE_BALANCE_MAX_DOWNWARD_DEVIATION
|
||||
)
|
||||
return compute_proposer_index(beacon_state, committee, seed, min_effective_balance)
|
||||
seed = hash(get_seed(state, epoch, DOMAIN_SHARD_BLOB) + uint_to_bytes(slot) + uint_to_bytes(shard))
|
||||
indices = get_active_validator_indices(state, epoch)
|
||||
return compute_proposer_index(state, indices, seed)
|
||||
```
|
||||
|
||||
#### `get_start_shard`
|
||||
|
@ -445,7 +520,7 @@ def get_start_shard(state: BeaconState, slot: Slot) -> Shard:
|
|||
"""
|
||||
Return the start shard at ``slot``.
|
||||
"""
|
||||
epoch = compute_epoch_at_slot(Slot(_slot))
|
||||
epoch = compute_epoch_at_slot(Slot(slot))
|
||||
committee_count = get_committee_count_per_slot(state, epoch)
|
||||
active_shard_count = get_active_shard_count(state, epoch)
|
||||
return committee_count * slot % active_shard_count
|
||||
|
@ -500,13 +575,19 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
|
|||
for_ops(body.attester_slashings, process_attester_slashing)
|
||||
# New shard proposer slashing processing
|
||||
for_ops(body.shard_proposer_slashings, process_shard_proposer_slashing)
|
||||
# Limit is dynamic based on active shard count
|
||||
|
||||
# Limit is dynamic: based on active shard count
|
||||
assert len(body.shard_headers) <= MAX_SHARD_HEADERS_PER_SHARD * get_active_shard_count(state, get_current_epoch(state))
|
||||
for_ops(body.shard_headers, process_shard_header)
|
||||
|
||||
# New attestation processing
|
||||
for_ops(body.attestations, process_attestation)
|
||||
for_ops(body.deposits, process_deposit)
|
||||
for_ops(body.voluntary_exits, process_voluntary_exit)
|
||||
|
||||
# TODO: to avoid parallel shards racing, and avoid inclusion-order problems,
|
||||
# update the fee price per slot, instead of per header.
|
||||
# state.shard_sample_price = compute_updated_sample_price(state.shard_sample_price, ?, shard_count)
|
||||
```
|
||||
|
||||
##### Extended Attestation processing
|
||||
|
@ -514,31 +595,47 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
|
|||
```python
|
||||
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
|
||||
altair.process_attestation(state, attestation)
|
||||
update_pending_shard_work(state, attestation)
|
||||
process_attested_shard_work(state, attestation)
|
||||
```
|
||||
|
||||
```python
|
||||
def update_pending_shard_work(state: BeaconState, attestation: Attestation) -> None:
|
||||
def process_attested_shard_work(state: BeaconState, attestation: Attestation) -> None:
|
||||
attestation_shard = compute_shard_from_committee_index(
|
||||
state,
|
||||
attestation.data.slot,
|
||||
attestation.data.index,
|
||||
)
|
||||
full_committee = get_beacon_committee(state, attestation.data.slot, attestation.data.index)
|
||||
|
||||
buffer_index = attestation.data.slot % SHARD_STATE_MEMORY_SLOTS
|
||||
committee_work = state.shard_buffer[buffer_index][attestation_shard]
|
||||
|
||||
# Skip attestation vote accounting if the header is not pending
|
||||
if committee_work.status.selector != SHARD_WORK_PENDING:
|
||||
# TODO In Altair: set participation bit flag, if attestation matches winning header.
|
||||
# If the data was already confirmed, check if this matches, to apply the flag to the attesters.
|
||||
if committee_work.status.selector == SHARD_WORK_CONFIRMED:
|
||||
attested: AttestedDataCommitment = committee_work.status.value
|
||||
if attested.root == attestation.data.shard_blob_root:
|
||||
batch_apply_participation_flag(state, attestation.aggregation_bits,
|
||||
attestation.data.target.epoch,
|
||||
full_committee, TIMELY_SHARD_FLAG_INDEX)
|
||||
return
|
||||
|
||||
current_headers: Sequence[PendingShardHeader] = committee_work.status.value
|
||||
|
||||
# Find the corresponding header, abort if it cannot be found
|
||||
header_index = [header.root for header in current_headers].index(attestation.data.shard_header_root)
|
||||
header_index = len(current_headers)
|
||||
for i, header in enumerate(current_headers):
|
||||
if attestation.data.shard_blob_root == header.attested.root:
|
||||
header_index = i
|
||||
break
|
||||
|
||||
# Attestations for an unknown header do not count towards shard confirmations, but can otherwise be valid.
|
||||
if header_index == len(current_headers):
|
||||
# Note: Attestations may be re-included if headers are included late.
|
||||
return
|
||||
|
||||
pending_header: PendingShardHeader = current_headers[header_index]
|
||||
full_committee = get_beacon_committee(state, attestation.data.slot, attestation.data.index)
|
||||
|
||||
# The weight may be outdated if it is not the initial weight, and from a previous epoch
|
||||
if pending_header.weight != 0 and compute_epoch_at_slot(pending_header.update_slot) < get_current_epoch(state):
|
||||
|
@ -559,8 +656,11 @@ def update_pending_shard_work(state: BeaconState, attestation: Attestation) -> N
|
|||
|
||||
# Check if the PendingShardHeader is eligible for expedited confirmation, requiring 2/3 of balance attesting
|
||||
if pending_header.weight * 3 >= full_committee_balance * 2:
|
||||
# TODO In Altair: set participation bit flag for voters of this early winning header
|
||||
if pending_header.commitment == DataCommitment():
|
||||
# participants of the winning header are remembered with participation flags
|
||||
batch_apply_participation_flag(state, pending_header.votes, attestation.data.target.epoch,
|
||||
full_committee, TIMELY_SHARD_FLAG_INDEX)
|
||||
|
||||
if pending_header.attested.commitment == DataCommitment():
|
||||
# The committee voted to not confirm anything
|
||||
state.shard_buffer[buffer_index][attestation_shard].status.change(
|
||||
selector=SHARD_WORK_UNCONFIRMED,
|
||||
|
@ -569,7 +669,7 @@ def update_pending_shard_work(state: BeaconState, attestation: Attestation) -> N
|
|||
else:
|
||||
state.shard_buffer[buffer_index][attestation_shard].status.change(
|
||||
selector=SHARD_WORK_CONFIRMED,
|
||||
value=pending_header.commitment,
|
||||
value=pending_header.attested,
|
||||
)
|
||||
```
|
||||
|
||||
|
@ -577,49 +677,89 @@ def update_pending_shard_work(state: BeaconState, attestation: Attestation) -> N
|
|||
|
||||
```python
|
||||
def process_shard_header(state: BeaconState, signed_header: SignedShardBlobHeader) -> None:
|
||||
header = signed_header.message
|
||||
header: ShardBlobHeader = signed_header.message
|
||||
slot = header.slot
|
||||
shard = header.shard
|
||||
|
||||
# Verify the header is not 0, and not from the future.
|
||||
assert Slot(0) < header.slot <= state.slot
|
||||
header_epoch = compute_epoch_at_slot(header.slot)
|
||||
assert Slot(0) < slot <= state.slot
|
||||
header_epoch = compute_epoch_at_slot(slot)
|
||||
# Verify that the header is within the processing time window
|
||||
assert header_epoch in [get_previous_epoch(state), get_current_epoch(state)]
|
||||
# Verify that the shard is active
|
||||
assert header.shard < get_active_shard_count(state, header_epoch)
|
||||
# Verify that the shard is valid
|
||||
shard_count = get_active_shard_count(state, header_epoch)
|
||||
assert shard < shard_count
|
||||
# Verify that a committee is able to attest this (slot, shard)
|
||||
start_shard = get_start_shard(state, slot)
|
||||
committee_index = (shard_count + shard - start_shard) % shard_count
|
||||
committees_per_slot = get_committee_count_per_slot(state, header_epoch)
|
||||
assert committee_index < committees_per_slot
|
||||
|
||||
# Verify that the block root matches,
|
||||
# to ensure the header will only be included in this specific Beacon Chain sub-tree.
|
||||
assert header.body_summary.beacon_block_root == get_block_root_at_slot(state, header.slot - 1)
|
||||
assert header.body_summary.beacon_block_root == get_block_root_at_slot(state, slot - 1)
|
||||
|
||||
# Check that this data is still pending
|
||||
committee_work = state.shard_buffer[header.slot % SHARD_STATE_MEMORY_SLOTS][header.shard]
|
||||
committee_work = state.shard_buffer[slot % SHARD_STATE_MEMORY_SLOTS][shard]
|
||||
assert committee_work.status.selector == SHARD_WORK_PENDING
|
||||
|
||||
# Check that this header is not yet in the pending list
|
||||
current_headers: List[PendingShardHeader, MAX_SHARD_HEADERS_PER_SHARD] = committee_work.status.value
|
||||
header_root = hash_tree_root(header)
|
||||
assert header_root not in [pending_header.root for pending_header in current_headers]
|
||||
assert header_root not in [pending_header.attested.root for pending_header in current_headers]
|
||||
|
||||
# Verify proposer
|
||||
assert header.proposer_index == get_shard_proposer_index(state, header.slot, header.shard)
|
||||
# Verify signature
|
||||
signing_root = compute_signing_root(header, get_domain(state, DOMAIN_SHARD_PROPOSER))
|
||||
assert bls.Verify(state.validators[header.proposer_index].pubkey, signing_root, signed_header.signature)
|
||||
# Verify proposer matches
|
||||
assert header.proposer_index == get_shard_proposer_index(state, slot, shard)
|
||||
|
||||
# Verify builder and proposer aggregate signature
|
||||
blob_signing_root = compute_signing_root(header, get_domain(state, DOMAIN_SHARD_BLOB))
|
||||
builder_pubkey = state.blob_builders[header.builder_index].pubkey
|
||||
proposer_pubkey = state.validators[header.proposer_index].pubkey
|
||||
assert bls.FastAggregateVerify([builder_pubkey, proposer_pubkey], blob_signing_root, signed_header.signature)
|
||||
|
||||
# Verify the length by verifying the degree.
|
||||
body_summary = header.body_summary
|
||||
if body_summary.commitment.length == 0:
|
||||
points_count = body_summary.commitment.samples_count * POINTS_PER_SAMPLE
|
||||
if points_count == 0:
|
||||
assert body_summary.degree_proof == G1_SETUP[0]
|
||||
assert (
|
||||
bls.Pairing(body_summary.degree_proof, G2_SETUP[0])
|
||||
== bls.Pairing(body_summary.commitment.point, G2_SETUP[-body_summary.commitment.length])
|
||||
== bls.Pairing(body_summary.commitment.point, G2_SETUP[-points_count])
|
||||
)
|
||||
|
||||
# Charge EIP 1559 fee, builder pays for opportunity, and is responsible for later availability,
|
||||
# or fails to publish at their own expense.
|
||||
samples = body_summary.commitment.samples_count
|
||||
# TODO: overflows, need bigger int type
|
||||
max_fee = body_summary.max_fee_per_sample * samples
|
||||
|
||||
# Builder must have sufficient balance, even if max_fee is not completely utilized
|
||||
assert state.blob_builder_balances[header.builder_index] >= max_fee
|
||||
|
||||
base_fee = state.shard_sample_price * samples
|
||||
# Base fee must be paid
|
||||
assert max_fee >= base_fee
|
||||
|
||||
# Remaining fee goes towards proposer for prioritizing, up to a maximum
|
||||
max_priority_fee = body_summary.max_priority_fee_per_sample * samples
|
||||
priority_fee = min(max_fee - base_fee, max_priority_fee)
|
||||
|
||||
# Burn base fee, take priority fee
|
||||
# priority_fee <= max_fee - base_fee, thus priority_fee + base_fee <= max_fee, thus sufficient balance.
|
||||
state.blob_builder_balances[header.builder_index] -= base_fee + priority_fee
|
||||
# Pay out priority fee
|
||||
increase_balance(state, header.proposer_index, priority_fee)
|
||||
|
||||
# Initialize the pending header
|
||||
index = compute_committee_index_from_shard(state, header.slot, header.shard)
|
||||
committee_length = len(get_beacon_committee(state, header.slot, index))
|
||||
index = compute_committee_index_from_shard(state, slot, shard)
|
||||
committee_length = len(get_beacon_committee(state, slot, index))
|
||||
initial_votes = Bitlist[MAX_VALIDATORS_PER_COMMITTEE]([0] * committee_length)
|
||||
pending_header = PendingShardHeader(
|
||||
commitment=body_summary.commitment,
|
||||
root=header_root,
|
||||
attested=AttestedDataCommitment(
|
||||
commitment=body_summary.commitment,
|
||||
root=header_root,
|
||||
includer_index=get_beacon_proposer_index(state),
|
||||
),
|
||||
votes=initial_votes,
|
||||
weight=0,
|
||||
update_slot=state.slot,
|
||||
|
@ -638,27 +778,36 @@ The goal is to ensure that a proof can only be constructed if `deg(B) < l` (ther
|
|||
|
||||
```python
|
||||
def process_shard_proposer_slashing(state: BeaconState, proposer_slashing: ShardProposerSlashing) -> None:
|
||||
reference_1 = proposer_slashing.signed_reference_1.message
|
||||
reference_2 = proposer_slashing.signed_reference_2.message
|
||||
slot = proposer_slashing.slot
|
||||
shard = proposer_slashing.shard
|
||||
proposer_index = proposer_slashing.proposer_index
|
||||
|
||||
# Verify header slots match
|
||||
assert reference_1.slot == reference_2.slot
|
||||
# Verify header shards match
|
||||
assert reference_1.shard == reference_2.shard
|
||||
# Verify header proposer indices match
|
||||
assert reference_1.proposer_index == reference_2.proposer_index
|
||||
# Verify the headers are different (i.e. different body)
|
||||
reference_1 = ShardBlobReference(slot=slot, shard=shard,
|
||||
proposer_index=proposer_index,
|
||||
builder_index=proposer_slashing.builder_index_1,
|
||||
body_root=proposer_slashing.body_root_1)
|
||||
reference_2 = ShardBlobReference(slot=slot, shard=shard,
|
||||
proposer_index=proposer_index,
|
||||
builder_index=proposer_slashing.builder_index_2,
|
||||
body_root=proposer_slashing.body_root_2)
|
||||
|
||||
# Verify the signed messages are different
|
||||
assert reference_1 != reference_2
|
||||
# Verify the proposer is slashable
|
||||
proposer = state.validators[reference_1.proposer_index]
|
||||
assert is_slashable_validator(proposer, get_current_epoch(state))
|
||||
# Verify signatures
|
||||
for signed_header in (proposer_slashing.signed_reference_1, proposer_slashing.signed_reference_2):
|
||||
domain = get_domain(state, DOMAIN_SHARD_PROPOSER, compute_epoch_at_slot(signed_header.message.slot))
|
||||
signing_root = compute_signing_root(signed_header.message, domain)
|
||||
assert bls.Verify(proposer.pubkey, signing_root, signed_header.signature)
|
||||
|
||||
slash_validator(state, reference_1.proposer_index)
|
||||
# Verify the proposer is slashable
|
||||
proposer = state.validators[proposer_index]
|
||||
assert is_slashable_validator(proposer, get_current_epoch(state))
|
||||
|
||||
# The builders are not slashed; the proposer co-signed with them
|
||||
builder_pubkey_1 = state.blob_builders[proposer_slashing.builder_index_1].pubkey
|
||||
builder_pubkey_2 = state.blob_builders[proposer_slashing.builder_index_2].pubkey
|
||||
domain = get_domain(state, DOMAIN_SHARD_PROPOSER, compute_epoch_at_slot(slot))
|
||||
signing_root_1 = compute_signing_root(reference_1, domain)
|
||||
signing_root_2 = compute_signing_root(reference_2, domain)
|
||||
assert bls.FastAggregateVerify([builder_pubkey_1, proposer.pubkey], signing_root_1, proposer_slashing.signature_1)
|
||||
assert bls.FastAggregateVerify([builder_pubkey_2, proposer.pubkey], signing_root_2, proposer_slashing.signature_2)
|
||||
|
||||
slash_validator(state, proposer_index)
|
||||
```
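
As a non-normative sketch, a slashing object could be assembled from two conflicting signed headers as follows. The helper name is hypothetical; the field names follow the containers above, and `hash_tree_root` is assumed in scope. Since `ShardBlobBodySummary` and `ShardBlobBody` share a hash-tree-root, the summary root serves as the reference `body_root`.

```python
def build_shard_proposer_slashing(h1: SignedShardBlobHeader,
                                  h2: SignedShardBlobHeader) -> ShardProposerSlashing:
    m1, m2 = h1.message, h2.message
    # Only equivocations by the same proposer on the same (slot, shard) are slashable
    assert (m1.slot, m1.shard, m1.proposer_index) == (m2.slot, m2.shard, m2.proposer_index)
    body_root_1 = hash_tree_root(m1.body_summary)
    body_root_2 = hash_tree_root(m2.body_summary)
    # The signed messages must actually differ
    assert (m1.builder_index, body_root_1) != (m2.builder_index, body_root_2)
    return ShardProposerSlashing(
        slot=m1.slot,
        shard=m1.shard,
        proposer_index=m1.proposer_index,
        builder_index_1=m1.builder_index,
        builder_index_2=m2.builder_index,
        body_root_1=body_root_1,
        body_root_2=body_root_2,
        signature_1=h1.signature,
        signature_2=h2.signature,
    )
```
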
|
||||
|
||||
### Epoch transition
|
||||
|
@ -669,13 +818,12 @@ This epoch transition overrides the Merge epoch transition:
|
|||
def process_epoch(state: BeaconState) -> None:
|
||||
# Sharding pre-processing
|
||||
process_pending_shard_confirmations(state)
|
||||
charge_confirmed_shard_fees(state)
|
||||
reset_pending_shard_work(state)
|
||||
|
||||
# Base functionality
|
||||
process_justification_and_finalization(state)
|
||||
process_inactivity_updates(state)
|
||||
process_rewards_and_penalties(state)
|
||||
process_rewards_and_penalties(state) # Note: modified, see new TIMELY_SHARD_FLAG_INDEX
|
||||
process_registry_updates(state)
|
||||
process_slashings(state)
|
||||
process_eth1_data_reset(state)
|
||||
|
@ -706,46 +854,10 @@ def process_pending_shard_confirmations(state: BeaconState) -> None:
|
|||
committee_work = state.shard_buffer[buffer_index][shard_index]
|
||||
if committee_work.status.selector == SHARD_WORK_PENDING:
|
||||
winning_header = max(committee_work.status.value, key=lambda header: header.weight)
|
||||
# TODO In Altair: set participation bit flag of voters for winning header
|
||||
if winning_header.commitment == DataCommitment():
|
||||
if winning_header.attested.commitment == DataCommitment():
|
||||
committee_work.status.change(selector=SHARD_WORK_UNCONFIRMED, value=None)
|
||||
else:
|
||||
committee_work.status.change(selector=SHARD_WORK_CONFIRMED, value=winning_header.commitment)
|
||||
```
|
||||
|
||||
#### `charge_confirmed_shard_fees`
|
||||
|
||||
```python
|
||||
def charge_confirmed_shard_fees(state: BeaconState) -> None:
|
||||
new_gasprice = state.shard_gasprice
|
||||
previous_epoch = get_previous_epoch(state)
|
||||
previous_epoch_start_slot = compute_start_slot_at_epoch(previous_epoch)
|
||||
adjustment_quotient = (
|
||||
get_active_shard_count(state, previous_epoch)
|
||||
* SLOTS_PER_EPOCH * GASPRICE_ADJUSTMENT_COEFFICIENT
|
||||
)
|
||||
# Iterate through confirmed shard-headers
|
||||
for slot in range(previous_epoch_start_slot, previous_epoch_start_slot + SLOTS_PER_EPOCH):
|
||||
buffer_index = slot % SHARD_STATE_MEMORY_SLOTS
|
||||
for shard_index in range(len(state.shard_buffer[buffer_index])):
|
||||
committee_work = state.shard_buffer[buffer_index][shard_index]
|
||||
if committee_work.status.selector == SHARD_WORK_CONFIRMED:
|
||||
commitment: DataCommitment = committee_work.status.value
|
||||
# Charge EIP 1559 fee
|
||||
proposer = get_shard_proposer_index(state, slot, Shard(shard_index))
|
||||
fee = (
|
||||
(state.shard_gasprice * commitment.length)
|
||||
// TARGET_SAMPLES_PER_BLOCK
|
||||
)
|
||||
decrease_balance(state, proposer, fee)
|
||||
|
||||
# Track updated gas price
|
||||
new_gasprice = compute_updated_gasprice(
|
||||
new_gasprice,
|
||||
commitment.length,
|
||||
adjustment_quotient,
|
||||
)
|
||||
state.shard_gasprice = new_gasprice
|
||||
committee_work.status.change(selector=SHARD_WORK_CONFIRMED, value=winning_header.attested)
|
||||
```
|
||||
|
||||
#### `reset_pending_shard_work`
|
||||
|
@ -773,8 +885,7 @@ def reset_pending_shard_work(state: BeaconState) -> None:
|
|||
selector=SHARD_WORK_PENDING,
|
||||
value=List[PendingShardHeader, MAX_SHARD_HEADERS_PER_SHARD](
|
||||
PendingShardHeader(
|
||||
commitment=DataCommitment(),
|
||||
root=Root(),
|
||||
attested=AttestedDataCommitment(),
|
||||
votes=Bitlist[MAX_VALIDATORS_PER_COMMITTEE]([0] * committee_length),
|
||||
weight=0,
|
||||
update_slot=slot,
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Ethereum 2.0 Sharding -- Network specification
|
||||
# Sharding -- Networking
|
||||
|
||||
**Notice**: This document is a work-in-progress for researchers and implementers.
|
||||
|
||||
|
@ -11,16 +11,13 @@
|
|||
- [Introduction](#introduction)
|
||||
- [Constants](#constants)
|
||||
- [Misc](#misc)
|
||||
- [New containers](#new-containers)
|
||||
- [ShardBlobBody](#shardblobbody)
|
||||
- [ShardBlob](#shardblob)
|
||||
- [SignedShardBlob](#signedshardblob)
|
||||
- [Gossip domain](#gossip-domain)
|
||||
- [Topics and messages](#topics-and-messages)
|
||||
- [Shard blob subnets](#shard-blob-subnets)
|
||||
- [`shard_blob_{subnet_id}`](#shard_blob_subnet_id)
|
||||
- [Global topics](#global-topics)
|
||||
- [`shard_header`](#shard_header)
|
||||
- [`shard_blob_header`](#shard_blob_header)
|
||||
- [`shard_blob_tx`](#shard_blob_tx)
|
||||
- [`shard_proposer_slashing`](#shard_proposer_slashing)
|
||||
|
||||
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
|
||||
|
@ -39,47 +36,9 @@ The adjustments and additions for Shards are outlined in this document.
|
|||
| Name | Value | Description |
|
||||
| ---- | ----- | ----------- |
|
||||
| `SHARD_BLOB_SUBNET_COUNT` | `64` | The number of `shard_blob_{subnet_id}` subnets used in the gossipsub protocol. |
|
||||
| `SHARD_TX_PROPAGATION_GRACE_SLOTS` | `4` | The number of slots for a late transaction to propagate |
|
||||
| `SHARD_TX_PROPAGATION_BUFFER_SLOTS` | `8` | The number of slots for an early transaction to propagate |
|
||||
|
||||
## New containers
|
||||
|
||||
### ShardBlobBody
|
||||
|
||||
```python
|
||||
class ShardBlobBody(Container):
|
||||
# The actual data commitment
|
||||
commitment: DataCommitment
|
||||
# Proof that the degree < commitment.length
|
||||
degree_proof: BLSCommitment
|
||||
# The actual data. Should match the commitment and degree proof.
|
||||
data: List[BLSPoint, POINTS_PER_SAMPLE * MAX_SAMPLES_PER_BLOCK]
|
||||
# Latest block root of the Beacon Chain, before shard_blob.slot
|
||||
beacon_block_root: Root
|
||||
```
|
||||
|
||||
The user MUST always verify the commitments in the `body` are valid for the `data` in the `body`.
|
||||
|
||||
### ShardBlob
|
||||
|
||||
```python
|
||||
class ShardBlob(Container):
|
||||
# Slot and shard that this blob is intended for
|
||||
slot: Slot
|
||||
shard: Shard
|
||||
# Shard data with related commitments and beacon anchor
|
||||
body: ShardBlobBody
|
||||
# Proposer of the shard-blob
|
||||
proposer_index: ValidatorIndex
|
||||
```
|
||||
|
||||
This is the expanded form of the `ShardBlobHeader` type.
|
||||
|
||||
### SignedShardBlob
|
||||
|
||||
```python
|
||||
class SignedShardBlob(Container):
|
||||
message: ShardBlob
|
||||
signature: BLSSignature
|
||||
```
|
||||
|
||||
## Gossip domain
|
||||
|
||||
|
@ -87,21 +46,22 @@ class SignedShardBlob(Container):
|
|||
|
||||
Following the same scheme as the [Phase0 gossip topics](../phase0/p2p-interface.md#topics-and-messages), names and payload types are:
|
||||
|
||||
| Name | Message Type |
|
||||
|----------------------------------|---------------------------|
|
||||
| `shard_blob_{subnet_id}` | `SignedShardBlob` |
|
||||
| `shard_header` | `SignedShardBlobHeader` |
|
||||
| `shard_proposer_slashing` | `ShardProposerSlashing` |
|
||||
| Name | Message Type |
|
||||
|---------------------------------|--------------------------|
|
||||
| `shard_blob_{subnet_id}` | `SignedShardBlob` |
|
||||
| `shard_blob_header` | `SignedShardBlobHeader` |
|
||||
| `shard_blob_tx` | `SignedShardBlobHeader` |
|
||||
| `shard_proposer_slashing` | `ShardProposerSlashing` |
|
||||
|
||||
The [DAS network specification](./das-p2p.md) defines additional topics.
|
||||
|
||||
#### Shard blob subnets
|
||||
|
||||
Shard blob subnets are used to propagate shard blobs to subsections of the network.
|
||||
Shard blob subnets are used by builders to make their blobs available after selection by shard proposers.
|
||||
|
||||
##### `shard_blob_{subnet_id}`
|
||||
|
||||
Shard block data, in the form of a `SignedShardBlob` is published to the `shard_blob_{subnet_id}` subnets.
|
||||
Shard blob data, in the form of a `SignedShardBlob`, is published to the `shard_blob_{subnet_id}` subnets.
|
||||
|
||||
```python
def compute_subnet_for_shard_blob(state: BeaconState, slot: Slot, shard: Shard) -> uint64:

@ -117,51 +77,94 @@ def compute_subnet_for_shard_blob(state: BeaconState, slot: Slot, shard: Shard)

    return uint64((committees_since_epoch_start + committee_index) % SHARD_BLOB_SUBNET_COUNT)
```
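A small usage sketch, hedged: the `/eth2/{fork_digest}/{name}/ssz_snappy` topic-string convention is carried over from Phase 0, and the helper below is illustrative rather than part of the spec:

```python
def shard_blob_topic(state: BeaconState, blob: ShardBlob, fork_digest: str) -> str:
    # Illustrative helper: derive the gossipsub topic string a blob should
    # be published on, following the Phase 0 topic naming convention.
    subnet_id = compute_subnet_for_shard_blob(state, blob.slot, blob.shard)
    return f"/eth2/{fork_digest}/shard_blob_{subnet_id}/ssz_snappy"
```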
The following validations MUST pass before forwarding the `signed_blob`
on the horizontal subnet or creating samples for it. Alias `blob = signed_blob.message`.
A condensed sketch of the first few checks follows the list.

- _[IGNORE]_ The `blob` is published at most 1 slot early (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) --
  i.e. validate that `blob.slot <= current_slot + 1`
  (a client MAY queue future blobs for propagation at the appropriate slot).
- _[IGNORE]_ The `blob` is new enough to still be processed --
  i.e. validate that `compute_epoch_at_slot(blob.slot) >= get_previous_epoch(state)`
- _[REJECT]_ The shard blob is for an active shard --
  i.e. `blob.shard < get_active_shard_count(state, compute_epoch_at_slot(blob.slot))`
- _[REJECT]_ The `blob.shard` MUST have a committee at the `blob.slot` --
  i.e. validate that `compute_committee_index_from_shard(state, blob.slot, blob.shard)` doesn't raise an error
- _[REJECT]_ The shard blob is for the correct subnet --
  i.e. `compute_subnet_for_shard_blob(state, blob.slot, blob.shard) == subnet_id`
- _[IGNORE]_ The blob is the first blob with valid signature received for the `(blob.proposer_index, blob.slot, blob.shard)` combination.
- _[REJECT]_ The blob is not too large -- the data MUST NOT be larger than the SSZ list-limit, and a client MAY apply stricter bounds.
- _[REJECT]_ The `blob.body.data` MUST NOT contain any point `p >= MODULUS`. Although it is a `uint256`, not the full 256-bit range is valid.
- _[REJECT]_ The blob builder defined by `blob.builder_index` exists and has sufficient balance to back the fee payment.
- _[REJECT]_ The blob signature, `signed_blob.signature`, is valid for the aggregate of proposer and builder --
  i.e. `bls.FastAggregateVerify([builder_pubkey, proposer_pubkey], blob_signing_root, signed_blob.signature)`.
- _[REJECT]_ The blob is proposed by the expected `proposer_index` for the blob's `slot` and `shard`,
  in the context of the current shuffling (defined by `blob.body.beacon_block_root`/`slot`).
  If the `proposer_index` cannot immediately be verified against the expected shuffling,
  the blob MAY be queued for later processing while proposers for the blob's branch are calculated --
  in such a case _do not_ `REJECT`, instead `IGNORE` this message.
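A condensed, assertion-based sketch of the first few checks above, for illustration only. A real client distinguishes IGNORE (queue or drop) from REJECT (penalize the peer) rather than raising uniformly, and the clock-disparity allowance is elided:

```python
def validate_shard_blob_gossip(state: BeaconState,
                               signed_blob: SignedShardBlob,
                               subnet_id: uint64,
                               current_slot: Slot) -> None:
    # Hypothetical sketch of the timeliness, shard, and subnet checks.
    blob = signed_blob.message
    # [IGNORE] published at most 1 slot early
    assert blob.slot <= current_slot + 1
    # [IGNORE] still recent enough to process
    assert compute_epoch_at_slot(blob.slot) >= get_previous_epoch(state)
    # [REJECT] active shard with a committee at the blob's slot
    assert blob.shard < get_active_shard_count(state, compute_epoch_at_slot(blob.slot))
    compute_committee_index_from_shard(state, blob.slot, blob.shard)  # raises if no committee
    # [REJECT] correct subnet
    assert compute_subnet_for_shard_blob(state, blob.slot, blob.shard) == subnet_id
```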
#### Global topics

There are three additional global topics for Sharding:

- `shard_blob_header`: co-signed headers to be included on-chain and to serve as a signal to the builder to publish full data.
- `shard_blob_tx`: builder-signed headers, also known as "data transactions".
- `shard_proposer_slashing`: slashings of duplicate shard proposals.
##### `shard_blob_header`

Shard header data, in the form of a `SignedShardBlobHeader`, is published to the global `shard_blob_header` subnet.
Shard blob headers select shard blob bids by builders
and should be timely to ensure builders can publish the full shard blob before subsequent attestations.

The following validations MUST pass before forwarding the `signed_blob_header` on the network. Alias `header = signed_blob_header.message`.
A sketch of the aggregate-signature check follows the list.

- _[IGNORE]_ The `header` is published at most 1 slot early (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) --
  i.e. validate that `header.slot <= current_slot + 1`
  (a client MAY queue future headers for propagation at the appropriate slot).
- _[IGNORE]_ The header is new enough to still be processed --
  i.e. validate that `compute_epoch_at_slot(header.slot) >= get_previous_epoch(state)`
- _[REJECT]_ The shard header is for an active shard --
  i.e. `header.shard < get_active_shard_count(state, compute_epoch_at_slot(header.slot))`
- _[REJECT]_ The `header.shard` MUST have a committee at the `header.slot` --
  i.e. validate that `compute_committee_index_from_shard(state, header.slot, header.shard)` doesn't raise an error.
- _[IGNORE]_ The header is the first header with valid signature received for the `(header.proposer_index, header.slot, header.shard)` combination.
- _[REJECT]_ The blob builder defined by `header.builder_index` exists and has sufficient balance to back the fee payment.
- _[REJECT]_ The header signature, `signed_blob_header.signature`, is valid for the aggregate of proposer and builder --
  i.e. `bls.FastAggregateVerify([builder_pubkey, proposer_pubkey], blob_signing_root, signed_blob_header.signature)`.
- _[REJECT]_ The header is proposed by the expected `proposer_index` for the blob's `header.slot` and `header.shard`
  in the context of the current shuffling (defined by `header.body_summary.beacon_block_root`/`slot`).
  If the `proposer_index` cannot immediately be verified against the expected shuffling,
  the blob MAY be queued for later processing while proposers for the blob's branch are calculated --
  in such a case _do not_ `REJECT`, instead `IGNORE` this message.
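A hedged sketch of the aggregate check above. The builder pubkey lookup is left to the caller since the builder registry layout is not shown in this document, and `DOMAIN_SHARD_BLOB` is assumed from the sharding beacon-chain spec:

```python
def verify_blob_header_signature(state: BeaconState,
                                 signed_blob_header: SignedShardBlobHeader,
                                 builder_pubkey: BLSPubkey) -> bool:
    # Sketch only: the header is co-signed by builder and proposer
    # over the same signing root, verified as a fast aggregate.
    header = signed_blob_header.message
    proposer_pubkey = state.validators[header.proposer_index].pubkey
    domain = get_domain(state, DOMAIN_SHARD_BLOB, compute_epoch_at_slot(header.slot))
    blob_signing_root = compute_signing_root(header, domain)
    return bls.FastAggregateVerify(
        [builder_pubkey, proposer_pubkey], blob_signing_root, signed_blob_header.signature)
```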
##### `shard_blob_tx`

Shard data-transactions, in the form of a `SignedShardBlobHeader`, are published to the global `shard_blob_tx` subnet.
These shard blob headers are signed solely by the blob-builder.

The following validations MUST pass before forwarding the `signed_blob_header` on the network. Alias `header = signed_blob_header.message`.
A sketch of the fee-replacement rule follows the list.

- _[IGNORE]_ The header is not propagating more than `SHARD_TX_PROPAGATION_BUFFER_SLOTS` slots ahead of time --
  i.e. validate that `header.slot <= current_slot + SHARD_TX_PROPAGATION_BUFFER_SLOTS`.
- _[IGNORE]_ The header is not propagating later than `SHARD_TX_PROPAGATION_GRACE_SLOTS` slots too late --
  i.e. validate that `header.slot + SHARD_TX_PROPAGATION_GRACE_SLOTS >= current_slot`
- _[REJECT]_ The shard header is for an active shard --
  i.e. `header.shard < get_active_shard_count(state, compute_epoch_at_slot(header.slot))`
- _[REJECT]_ The `header.shard` MUST have a committee at the `header.slot` --
  i.e. validate that `compute_committee_index_from_shard(state, header.slot, header.shard)` doesn't raise an error.
- _[IGNORE]_ The header is not stale -- i.e. the corresponding shard proposer has not already selected a header for `(header.slot, header.shard)`.
- _[IGNORE]_ The header is the first header with valid signature received for the `(header.builder_index, header.slot, header.shard)` combination.
- _[REJECT]_ The blob builder, defined by `header.builder_index`, exists and has sufficient balance to back the fee payment.
- _[IGNORE]_ The header fee SHOULD be higher than previously seen headers for `(header.slot, header.shard)`, from any builder.
  Propagating nodes MAY increase the required fee increment in case of spam.
- _[REJECT]_ The header signature, `signed_blob_header.signature`, is valid for ONLY the builder --
  i.e. `bls.Verify(builder_pubkey, blob_signing_root, signed_blob_header.signature)`. The signature is not an aggregate with the proposer.
- _[REJECT]_ The header is designated for proposal by the expected `proposer_index` for the blob's `header.slot` and `header.shard`
  in the context of the current shuffling (defined by `header.body_summary.beacon_block_root`/`slot`).
  If the `proposer_index` cannot immediately be verified against the expected shuffling,
  the blob MAY be queued for later processing while proposers for the blob's branch are calculated --
  in such a case _do not_ `REJECT`, instead `IGNORE` this message.
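A hedged sketch of the fee-based replacement rule. Where the fee is read from is left abstract (the header's fee field is not shown in this document), and the cache below is a hypothetical local structure, not a spec requirement:

```python
from typing import Dict, Tuple

# Hypothetical local cache of the best fee seen per (slot, shard), from any builder.
seen_tx_fees: Dict[Tuple[Slot, Shard], Gwei] = {}


def should_propagate_blob_tx(header: ShardBlobHeader, fee: Gwei) -> bool:
    # Sketch of the fee-replacement rule: only forward a data transaction
    # that pays more than every previously seen header for the same
    # (slot, shard). A spam-resistant node might demand a larger increment.
    key = (header.slot, header.shard)
    if key in seen_tx_fees and fee <= seen_tx_fees[key]:
        return False
    seen_tx_fees[key] = fee
    return True
```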
##### `shard_proposer_slashing`

@ -169,6 +172,6 @@ Shard proposer slashings, in the form of `ShardProposerSlashing`, are published

The following validations MUST pass before forwarding the `shard_proposer_slashing` on the network.

- _[IGNORE]_ The shard proposer slashing is the first valid shard proposer slashing received
  for the proposer with index `proposer_slashing.proposer_index`.
  The `proposer_slashing.slot` and `proposer_slashing.shard` are ignored, there are no repeated or per-shard slashings.
- _[REJECT]_ All of the conditions within `process_shard_proposer_slashing` pass validation.
@ -1,6 +1,6 @@
# Executable Python Spec (PySpec)

The executable Python spec is built from the consensus specifications,
complemented with the necessary helper functions for hashing, BLS, and more.

With this executable spec,

@ -27,7 +27,7 @@ to enable debuggers to navigate between packages and generated code, without fra
By default, when installing `eth2spec` as a package in non-develop mode,
the distutils implementation of `setup` runs `build`, which is extended to run the same `pyspec` work,
but outputs into the standard `./build/lib` output.
This enables the `consensus-specs` repository to be installed like any other Python package.


## Py-tests
@ -1 +1 @@
1.1.0-beta.2
1.1.0-beta.3
@ -1,4 +1,4 @@
# Consensus specs config util

For run-time configuration, see [Configs documentation](../../../../../configs/README.md).
@ -13,7 +13,7 @@ from eth2spec.phase0 import mainnet as spec
from pathlib import Path

# To load the default configurations
config_util.load_defaults(Path("consensus-specs/configs"))  # change path to point to the equivalent of the specs `configs` dir.
# After loading the defaults, a config can be chosen: 'mainnet', 'minimal', or a custom network config (by file path)
spec.config = spec.Configuration(**config_util.load_config_file(Path('mytestnet.yaml')))
```
@ -1,4 +1,4 @@
# Consensus test generator helpers

## `gen_base`
@ -96,6 +96,8 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]):
    if len(presets) != 0:
        print(f"Filtering test-generator runs to only include presets: {', '.join(presets)}")

    generated_test_count = 0
    skipped_test_count = 0
    for tprov in test_providers:
        # runs anything that we don't want to repeat for every test case.
        tprov.prepare()

@ -110,6 +112,7 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]):

            if case_dir.exists():
                if not args.force and not incomplete_tag_file.exists():
                    skipped_test_count += 1
                    print(f'Skipping already existing test: {case_dir}')
                    continue
                else:

@ -149,6 +152,7 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]):
                        output_part("ssz", name, dump_ssz_fn(data, name, file_mode))
            except SkippedTest as e:
                print(e)
                skipped_test_count += 1
                shutil.rmtree(case_dir)
                continue

@ -172,10 +176,13 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]):
            if not written_part:
                shutil.rmtree(case_dir)
            else:
                generated_test_count += 1
                # Only remove the `INCOMPLETE` tag file
                os.remove(incomplete_tag_file)

    summary_message = f"completed generation of {generator_name} with {generated_test_count} tests"
    summary_message += f" ({skipped_test_count} skipped tests)"
    print(summary_message)


def dump_yaml_fn(data: Any, name: str, file_mode: str, yaml_encoder: YAML):
@ -2,7 +2,6 @@ import random
from eth2spec.test.helpers.block import (
    build_empty_block_for_next_slot,
)
from eth2spec.test.helpers.block_processing import run_block_processing_to
from eth2spec.test.helpers.state import (
    state_transition_and_sign_block,
    transition_to,

@ -12,60 +11,17 @@ from eth2spec.test.helpers.constants (
)
from eth2spec.test.helpers.sync_committee import (
    compute_aggregate_sync_committee_signature,
    compute_sync_committee_participant_reward_and_penalty,
    compute_sync_committee_proposer_reward,
    compute_committee_indices,
    get_committee_indices,
    run_sync_committee_processing,
    run_successful_sync_committee_test,
)
from eth2spec.test.context import (
    default_activation_threshold,
    expect_assertion_error,
    misc_balances,
    single_phase,
    with_altair_and_later,
    with_custom_state,
    with_presets,
    spec_state_test,
    always_bls,
    spec_test,
)
from eth2spec.utils.hash_function import hash


def run_sync_committee_processing(spec, state, block, expect_exception=False):
    """
    Processes everything up to the sync committee work, then runs the sync committee work in isolation, and
    produces a pre-state and post-state (None if exception) specifically for sync-committee processing changes.
    """
    # process up to the sync committee work
    call = run_block_processing_to(spec, state, block, 'process_sync_aggregate')
    yield 'pre', state
    yield 'sync_aggregate', block.body.sync_aggregate
    if expect_exception:
        expect_assertion_error(lambda: call(state, block))
        yield 'post', None
    else:
        call(state, block)
        yield 'post', state


def get_committee_indices(spec, state, duplicates=False):
    """
    This utility function allows the caller to ensure there are or are not
    duplicate validator indices in the returned committee based on
    the boolean ``duplicates``.
    """
    state = state.copy()
    current_epoch = spec.get_current_epoch(state)
    randao_index = (current_epoch + 1) % spec.EPOCHS_PER_HISTORICAL_VECTOR
    while True:
        committee = spec.get_next_sync_committee_indices(state)
        if duplicates:
            if len(committee) != len(set(committee)):
                return committee
        else:
            if len(committee) == len(set(committee)):
                return committee
        state.randao_mixes[randao_index] = hash(state.randao_mixes[randao_index])


@with_altair_and_later

@ -177,58 +133,6 @@ def test_invalid_signature_extra_participant(spec, state):
    yield from run_sync_committee_processing(spec, state, block, expect_exception=True)


def validate_sync_committee_rewards(spec, pre_state, post_state, committee_indices, committee_bits, proposer_index):
    for index in range(len(post_state.validators)):
        reward = 0
        penalty = 0
        if index in committee_indices:
            _reward, _penalty = compute_sync_committee_participant_reward_and_penalty(
                spec,
                pre_state,
                index,
                committee_indices,
                committee_bits,
            )
            reward += _reward
            penalty += _penalty

        if proposer_index == index:
            reward += compute_sync_committee_proposer_reward(
                spec,
                pre_state,
                committee_indices,
                committee_bits,
            )

        assert post_state.balances[index] == pre_state.balances[index] + reward - penalty


def run_successful_sync_committee_test(spec, state, committee_indices, committee_bits):
    pre_state = state.copy()

    block = build_empty_block_for_next_slot(spec, state)
    block.body.sync_aggregate = spec.SyncAggregate(
        sync_committee_bits=committee_bits,
        sync_committee_signature=compute_aggregate_sync_committee_signature(
            spec,
            state,
            block.slot - 1,
            [index for index, bit in zip(committee_indices, committee_bits) if bit],
        )
    )

    yield from run_sync_committee_processing(spec, state, block)

    validate_sync_committee_rewards(
        spec,
        pre_state,
        state,
        committee_indices,
        committee_bits,
        block.proposer_index,
    )


@with_altair_and_later
@with_presets([MINIMAL], reason="to create nonduplicate committee")
@spec_state_test

@ -502,150 +406,3 @@ def test_proposer_in_committee_with_participation(spec, state):
        else:
            state_transition_and_sign_block(spec, state, block)
    raise AssertionError("failed to find a proposer in the sync committee set; check test setup")
@ -0,0 +1,165 @@
import random
from eth2spec.test.helpers.constants import (
    MAINNET, MINIMAL,
)
from eth2spec.test.helpers.sync_committee import (
    get_committee_indices,
    run_successful_sync_committee_test,
)
from eth2spec.test.context import (
    with_altair_and_later,
    spec_state_test,
    default_activation_threshold,
    misc_balances,
    single_phase,
    with_custom_state,
    with_presets,
    spec_test,
)


def _test_harness_for_randomized_test_case(spec, state, duplicates=False, participation_fn=None):
    committee_indices = get_committee_indices(spec, state, duplicates=duplicates)

    if participation_fn:
        participating_indices = participation_fn(committee_indices)
    else:
        participating_indices = committee_indices

    committee_bits = [index in participating_indices for index in committee_indices]
    committee_size = len(committee_indices)
    if duplicates:
        assert committee_size > len(set(committee_indices))
    else:
        assert committee_size == len(set(committee_indices))

    yield from run_successful_sync_committee_test(spec, state, committee_indices, committee_bits)


@with_altair_and_later
@with_presets([MAINNET], reason="to create duplicate committee")
@spec_state_test
def test_random_only_one_participant_with_duplicates(spec, state):
    rng = random.Random(101)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        duplicates=True,
        participation_fn=lambda comm: [rng.choice(comm)],
    )


@with_altair_and_later
@with_presets([MAINNET], reason="to create duplicate committee")
@spec_state_test
def test_random_low_participation_with_duplicates(spec, state):
    rng = random.Random(201)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        duplicates=True,
        participation_fn=lambda comm: rng.sample(comm, int(len(comm) * 0.25)),
    )


@with_altair_and_later
@with_presets([MAINNET], reason="to create duplicate committee")
@spec_state_test
def test_random_high_participation_with_duplicates(spec, state):
    rng = random.Random(301)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        duplicates=True,
        participation_fn=lambda comm: rng.sample(comm, int(len(comm) * 0.75)),
    )


@with_altair_and_later
@with_presets([MAINNET], reason="to create duplicate committee")
@spec_state_test
def test_random_all_but_one_participating_with_duplicates(spec, state):
    rng = random.Random(401)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        duplicates=True,
        participation_fn=lambda comm: rng.sample(comm, len(comm) - 1),
    )


@with_altair_and_later
@with_presets([MAINNET], reason="to create duplicate committee")
@spec_test
@with_custom_state(balances_fn=misc_balances, threshold_fn=default_activation_threshold)
@single_phase
def test_random_misc_balances_and_half_participation_with_duplicates(spec, state):
    rng = random.Random(1401)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        duplicates=True,
        participation_fn=lambda comm: rng.sample(comm, len(comm) // 2),
    )


@with_altair_and_later
@with_presets([MINIMAL], reason="to create nonduplicate committee")
@spec_state_test
def test_random_only_one_participant_without_duplicates(spec, state):
    rng = random.Random(501)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        participation_fn=lambda comm: [rng.choice(comm)],
    )


@with_altair_and_later
@with_presets([MINIMAL], reason="to create nonduplicate committee")
@spec_state_test
def test_random_low_participation_without_duplicates(spec, state):
    rng = random.Random(601)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        participation_fn=lambda comm: rng.sample(comm, int(len(comm) * 0.25)),
    )


@with_altair_and_later
@with_presets([MINIMAL], reason="to create nonduplicate committee")
@spec_state_test
def test_random_high_participation_without_duplicates(spec, state):
    rng = random.Random(701)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        participation_fn=lambda comm: rng.sample(comm, int(len(comm) * 0.75)),
    )


@with_altair_and_later
@with_presets([MINIMAL], reason="to create nonduplicate committee")
@spec_state_test
def test_random_all_but_one_participating_without_duplicates(spec, state):
    rng = random.Random(801)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        participation_fn=lambda comm: rng.sample(comm, len(comm) - 1),
    )


@with_altair_and_later
@with_presets([MINIMAL], reason="to create nonduplicate committee")
@spec_test
@with_custom_state(balances_fn=misc_balances, threshold_fn=default_activation_threshold)
@single_phase
def test_random_misc_balances_and_half_participation_without_duplicates(spec, state):
    rng = random.Random(1501)
    yield from _test_harness_for_randomized_test_case(
        spec,
        state,
        participation_fn=lambda comm: rng.sample(comm, len(comm) // 2),
    )
@ -3,16 +3,22 @@ from random import Random
from eth2spec.test.context import spec_state_test, with_altair_and_later
from eth2spec.test.helpers.inactivity_scores import randomize_inactivity_scores, zero_inactivity_scores
from eth2spec.test.helpers.state import (
    next_epoch,
    next_epoch_via_block,
    set_full_participation,
    set_empty_participation,
)
from eth2spec.test.helpers.voluntary_exits import (
    exit_validators,
    get_exited_validators
)
from eth2spec.test.helpers.epoch_processing import (
    run_epoch_processing_with
)
from eth2spec.test.helpers.random import (
    randomize_attestation_participation,
    randomize_previous_epoch_participation,
    randomize_state,
)
from eth2spec.test.helpers.rewards import leaking

@ -266,3 +272,108 @@ def test_some_slashed_full_random_leaking(spec, state):

    # Check still in leak
    assert spec.is_in_inactivity_leak(state)


@with_altair_and_later
@spec_state_test
@leaking()
def test_some_exited_full_random_leaking(spec, state):
    rng = Random(1102233)

    exit_count = 3

    # randomize ahead of time to check exited validators do not have
    # mutations applied to their inactivity scores
    randomize_inactivity_scores(spec, state, rng=rng)

    assert not any(get_exited_validators(spec, state))
    exited_indices = exit_validators(spec, state, exit_count, rng=rng)
    assert not any(get_exited_validators(spec, state))

    # advance the state to effect the exits
    target_epoch = max(state.validators[index].exit_epoch for index in exited_indices)
    # validators that have exited in the previous epoch or earlier will not
    # have their inactivity scores modified, the test advances the state past this point
    # to confirm this invariant:
    previous_epoch = spec.get_previous_epoch(state)
    for _ in range(target_epoch - previous_epoch):
        next_epoch(spec, state)
    assert len(get_exited_validators(spec, state)) == exit_count

    previous_scores = state.inactivity_scores.copy()

    yield from run_inactivity_scores_test(
        spec, state,
        randomize_previous_epoch_participation, rng=rng,
    )

    # ensure exited validators have their score "frozen" at exit
    # but otherwise there was a change
    some_changed = False
    for index in range(len(state.validators)):
        if index in exited_indices:
            assert previous_scores[index] == state.inactivity_scores[index]
        else:
            previous_score = previous_scores[index]
            current_score = state.inactivity_scores[index]
            if previous_score != current_score:
                some_changed = True
    assert some_changed

    # Check still in leak
    assert spec.is_in_inactivity_leak(state)


def _run_randomized_state_test_for_inactivity_updates(spec, state, rng=Random(13377331)):
    randomize_inactivity_scores(spec, state, rng=rng)
    randomize_state(spec, state, rng=rng)

    exited_validators = get_exited_validators(spec, state)
    exited_but_not_slashed = []
    for index in exited_validators:
        validator = state.validators[index]
        if validator.slashed:
            continue
        exited_but_not_slashed.append(index)

    assert len(exited_but_not_slashed) > 0

    some_exited_validator = exited_but_not_slashed[0]

    pre_score_for_exited_validator = state.inactivity_scores[some_exited_validator]

    assert pre_score_for_exited_validator != 0

    assert len(set(state.inactivity_scores)) > 1

    yield from run_inactivity_scores_test(spec, state)

    post_score_for_exited_validator = state.inactivity_scores[some_exited_validator]
    assert pre_score_for_exited_validator == post_score_for_exited_validator


@with_altair_and_later
@spec_state_test
def test_randomized_state(spec, state):
    """
    This test ensures that once a validator has exited,
    their inactivity score does not change.
    """
    rng = Random(10011001)
    yield from _run_randomized_state_test_for_inactivity_updates(spec, state, rng=rng)


@with_altair_and_later
@spec_state_test
@leaking()
def test_randomized_state_leaking(spec, state):
    """
    This test ensures that once a validator has exited,
    their inactivity score does not change, even during a leak.
    Note that slashed validators are still subject to mutations
    (see ``get_eligible_validator_indices``).
    """
    rng = Random(10011002)
    yield from _run_randomized_state_test_for_inactivity_updates(spec, state, rng=rng)
    # Check still in leak
    assert spec.is_in_inactivity_leak(state)
@ -0,0 +1,438 @@
|
|||
"""
|
||||
This module is generated from the ``random`` test generator.
|
||||
Please do not edit this file manually.
|
||||
See the README for that generator for more information.
|
||||
"""
|
||||
|
||||
from eth2spec.test.helpers.constants import ALTAIR
|
||||
from eth2spec.test.context import (
|
||||
misc_balances_in_default_range_with_many_validators,
|
||||
with_phases,
|
||||
zero_activation_threshold,
|
||||
only_generator,
|
||||
)
|
||||
from eth2spec.test.context import (
|
||||
always_bls,
|
||||
spec_test,
|
||||
with_custom_state,
|
||||
single_phase,
|
||||
)
|
||||
from eth2spec.test.utils.randomized_block_tests import (
|
||||
run_generated_randomized_test,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_0(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:random_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_1(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:random_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_2(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:last_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_3(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:last_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:last_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_4(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:last_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_5(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:random_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:random_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_6(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_7(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_8(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:epochs_until_leak,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:random_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
|
||||
yield from run_generated_randomized_test(
|
||||
spec,
|
||||
state,
|
||||
scenario,
|
||||
)
|
||||
|
||||
|
||||
@only_generator("randomized test for broad coverage, not point-to-point CI")
|
||||
@with_phases([ALTAIR])
|
||||
@with_custom_state(
|
||||
balances_fn=misc_balances_in_default_range_with_many_validators,
|
||||
threshold_fn=zero_activation_threshold
|
||||
)
|
||||
@spec_test
|
||||
@single_phase
|
||||
@always_bls
|
||||
def test_randomized_9(spec, state):
|
||||
# scenario as high-level, informal text:
|
||||
# epochs:epochs_until_leak,slots:0,with-block:no_block
|
||||
# epochs:1,slots:0,with-block:no_block
|
||||
# epochs:0,slots:random_slot_in_epoch,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:no_block
|
||||
# epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
|
||||
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([ALTAIR])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_10(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([ALTAIR])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_11(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([ALTAIR])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_12(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([ALTAIR])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_13(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([ALTAIR])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_14(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([ALTAIR])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_15(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block_altair_with_cycling_sync_committee_participation
scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block_altair_with_cycling_sync_committee_participation', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state_altair'} # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )
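
# A scenario is plain data: each transition entry names its handlers, which the
# runner resolves by name at execution time. A minimal sketch of that dispatch
# (editor's illustration only; the `_resolve` helper below is hypothetical,
# `run_generated_randomized_test` performs an equivalent lookup internally):
#
#     def _resolve(transition, handlers):
#         # e.g. handlers['no_block'], handlers['random_slot_in_epoch'], ...
#         return {key: handlers.get(value, value) for key, value in transition.items()}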
@@ -152,6 +152,22 @@ def misc_balances(spec):
    return balances


def misc_balances_in_default_range_with_many_validators(spec):
    """
    Helper method to create a series of balances that includes some misc. balances but
    none that are below the ``EJECTION_BALANCE``.
    """
    # Double validators to facilitate randomized testing
    num_validators = spec.SLOTS_PER_EPOCH * 8 * 2
    floor = spec.config.EJECTION_BALANCE + spec.EFFECTIVE_BALANCE_INCREMENT
    balances = [
        max(spec.MAX_EFFECTIVE_BALANCE * 2 * i // num_validators, floor) for i in range(num_validators)
    ]
    rng = Random(1234)
    rng.shuffle(balances)
    return balances
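
# Usage sketch (editor's illustration, assuming the minimal preset where
# SLOTS_PER_EPOCH == 8): the helper yields 128 shuffled balances, all at or
# above the ejection floor, so no validator is ejected at the start of a test:
#
#     balances = misc_balances_in_default_range_with_many_validators(spec)
#     floor = spec.config.EJECTION_BALANCE + spec.EFFECTIVE_BALANCE_INCREMENT
#     assert len(balances) == spec.SLOTS_PER_EPOCH * 8 * 2
#     assert all(b >= floor for b in balances)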


def low_single_balance(spec):
    """
    Helper method to create a single balance of 1 Gwei.
@@ -440,6 +456,17 @@ with_altair_and_later = with_phases([ALTAIR, MERGE])
with_merge_and_later = with_phases([MERGE])  # TODO: include sharding when spec stabilizes.


def only_generator(reason):
    def _decorator(inner):
        def _wrapper(*args, **kwargs):
            if is_pytest:
                dump_skipping_message(reason)
                return None
            return inner(*args, **kwargs)
        return _wrapper
    return _decorator


def fork_transition_test(pre_fork_name, post_fork_name, fork_epoch=None):
    """
    A decorator to construct a "transition" test from one fork of the eth2 spec
@@ -217,30 +217,13 @@ def next_slots_with_attestations(spec,
    post_state = state.copy()
    signed_blocks = []
    for _ in range(slot_count):
        block = build_empty_block_for_next_slot(spec, post_state)
        if fill_cur_epoch and post_state.slot >= spec.MIN_ATTESTATION_INCLUSION_DELAY:
            slot_to_attest = post_state.slot - spec.MIN_ATTESTATION_INCLUSION_DELAY + 1
            if slot_to_attest >= spec.compute_start_slot_at_epoch(spec.get_current_epoch(post_state)):
                attestations = _get_valid_attestation_at_slot(
                    post_state,
                    spec,
                    slot_to_attest,
                    participation_fn=participation_fn
                )
                for attestation in attestations:
                    block.body.attestations.append(attestation)
        if fill_prev_epoch:
            slot_to_attest = post_state.slot - spec.SLOTS_PER_EPOCH + 1
            attestations = _get_valid_attestation_at_slot(
                post_state,
                spec,
                slot_to_attest,
                participation_fn=participation_fn
            )
            for attestation in attestations:
                block.body.attestations.append(attestation)

        signed_block = state_transition_and_sign_block(spec, post_state, block)
        signed_block = state_transition_with_full_block(
            spec,
            post_state,
            fill_cur_epoch,
            fill_prev_epoch,
            participation_fn,
        )
        signed_blocks.append(signed_block)

    return state, signed_blocks, post_state
@@ -249,7 +232,8 @@ def next_slots_with_attestations(spec,
def next_epoch_with_attestations(spec,
                                 state,
                                 fill_cur_epoch,
                                 fill_prev_epoch):
                                 fill_prev_epoch,
                                 participation_fn=None):
    assert state.slot % spec.SLOTS_PER_EPOCH == 0

    return next_slots_with_attestations(
@@ -258,9 +242,76 @@ def next_epoch_with_attestations(spec,
        spec.SLOTS_PER_EPOCH,
        fill_cur_epoch,
        fill_prev_epoch,
        participation_fn,
    )


def state_transition_with_full_block(spec, state, fill_cur_epoch, fill_prev_epoch, participation_fn=None):
    """
    Build and apply a block with attestations at the calculated `slot_to_attest` of current epoch and/or previous epoch.
    """
    block = build_empty_block_for_next_slot(spec, state)
    if fill_cur_epoch and state.slot >= spec.MIN_ATTESTATION_INCLUSION_DELAY:
        slot_to_attest = state.slot - spec.MIN_ATTESTATION_INCLUSION_DELAY + 1
        if slot_to_attest >= spec.compute_start_slot_at_epoch(spec.get_current_epoch(state)):
            attestations = _get_valid_attestation_at_slot(
                state,
                spec,
                slot_to_attest,
                participation_fn=participation_fn
            )
            for attestation in attestations:
                block.body.attestations.append(attestation)
    if fill_prev_epoch:
        slot_to_attest = state.slot - spec.SLOTS_PER_EPOCH + 1
        attestations = _get_valid_attestation_at_slot(
            state,
            spec,
            slot_to_attest,
            participation_fn=participation_fn
        )
        for attestation in attestations:
            block.body.attestations.append(attestation)

    signed_block = state_transition_and_sign_block(spec, state, block)
    return signed_block
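
# Usage sketch (editor's illustration, assuming the usual `spec`/`state` fixtures):
# advance one slot per call while packing in the freshly valid attestations.
#
#     for _ in range(spec.SLOTS_PER_EPOCH):
#         signed_block = state_transition_with_full_block(
#             spec, state, fill_cur_epoch=True, fill_prev_epoch=False)
#         # each iteration builds, signs, and applies exactly one block to `state`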


def state_transition_with_full_attestations_block(spec, state, fill_cur_epoch, fill_prev_epoch):
    """
    Build and apply a block with attestations at all valid slots of current epoch and/or previous epoch.
    """
    # Build a block with previous attestations
    block = build_empty_block_for_next_slot(spec, state)
    attestations = []

    if fill_cur_epoch:
        # current epoch
        slots = state.slot % spec.SLOTS_PER_EPOCH
        for slot_offset in range(slots):
            target_slot = state.slot - slot_offset
            attestations += _get_valid_attestation_at_slot(
                state,
                spec,
                target_slot,
            )

    if fill_prev_epoch:
        # attest previous epoch
        slots = spec.SLOTS_PER_EPOCH - state.slot % spec.SLOTS_PER_EPOCH
        for slot_offset in range(1, slots):
            target_slot = state.slot - (state.slot % spec.SLOTS_PER_EPOCH) - slot_offset
            attestations += _get_valid_attestation_at_slot(
                state,
                spec,
                target_slot,
            )

    block.body.attestations = attestations
    signed_block = state_transition_and_sign_block(spec, state, block)
    return signed_block


def prepare_state_with_attestations(spec, state, participation_fn=None):
    """
    Prepare state with attestations according to the ``participation_fn``.
@@ -48,11 +48,9 @@ def run_block_processing_to(spec, state, block, process_name: str):
    A test prepares a pre-state by calling this function, outputs the pre-state,
    and it can then proceed to run the returned callable, and output a post-state.
    """
    print(f"state.slot {state.slot} block.slot {block.slot}")
    # transition state to slot before block state transition
    if state.slot < block.slot:
        spec.process_slots(state, block.slot)
    print(f"state.slot {state.slot} block.slot {block.slot} A")

    # process components of block transition
    for name, call in get_process_calls(spec).items():
@@ -1,5 +1,6 @@
from random import Random

from eth2spec.test.context import is_post_altair
from eth2spec.test.helpers.keys import pubkeys, privkeys
from eth2spec.utils import bls
from eth2spec.utils.merkle_minimal import calc_merkle_tree_from_leaves, get_merkle_proof
@@ -15,6 +16,8 @@ def mock_deposit(spec, state, index):
    state.validators[index].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
    state.validators[index].activation_epoch = spec.FAR_FUTURE_EPOCH
    state.validators[index].effective_balance = spec.MAX_EFFECTIVE_BALANCE
    if is_post_altair(spec):
        state.inactivity_scores[index] = 0
    assert not spec.is_active_validator(state.validators[index], spec.get_current_epoch(state))
@@ -20,6 +20,7 @@ def build_empty_execution_payload(spec, state, randao_mix=None):
        gas_limit=latest.gas_limit,  # retain same limit
        gas_used=0,  # empty block, 0 gas
        timestamp=timestamp,
        base_fee_per_gas=latest.base_fee_per_gas,  # retain same base_fee
        block_hash=spec.Hash32(),
        transactions=empty_txs,
    )
@@ -41,6 +42,7 @@ def get_execution_payload_header(spec, execution_payload):
        gas_limit=execution_payload.gas_limit,
        gas_used=execution_payload.gas_used,
        timestamp=execution_payload.timestamp,
        base_fee_per_gas=execution_payload.base_fee_per_gas,
        block_hash=execution_payload.block_hash,
        transactions_root=spec.hash_tree_root(execution_payload.transactions)
    )
@@ -1,4 +1,8 @@
from eth_utils import encode_hex
from eth2spec.test.helpers.attestations import (
    next_epoch_with_attestations,
    next_slots_with_attestations,
)


def get_anchor_root(spec, state):
@@ -18,23 +22,20 @@ def add_block_to_store(spec, store, signed_block):
    spec.on_block(store, signed_block)


def tick_and_run_on_block(spec, store, signed_block, test_steps=None):
    if test_steps is None:
        test_steps = []

def tick_and_add_block(spec, store, signed_block, test_steps, valid=True, allow_invalid_attestations=False):
    pre_state = store.block_states[signed_block.message.parent_root]
    block_time = pre_state.genesis_time + signed_block.message.slot * spec.config.SECONDS_PER_SLOT

    if store.time < block_time:
        on_tick_and_append_step(spec, store, block_time, test_steps)

    yield from run_on_block(spec, store, signed_block, test_steps)
    post_state = yield from add_block(
        spec, store, signed_block, test_steps, valid=valid, allow_invalid_attestations=allow_invalid_attestations)

    return post_state
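
# Usage sketch (editor's illustration): a typical fork choice test drives the
# store with this helper and collects the steps for the test-vector output,
# as the on_block tests later in this commit do:
#
#     signed_block = state_transition_and_sign_block(spec, state, block)
#     post_state = yield from tick_and_add_block(spec, store, signed_block, test_steps)
#     assert spec.get_head(store) == signed_block.message.hash_tree_root()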


def tick_and_run_on_attestation(spec, store, attestation, test_steps=None):
    if test_steps is None:
        test_steps = []

def tick_and_run_on_attestation(spec, store, attestation, test_steps):
    parent_block = store.blocks[attestation.data.beacon_block_root]
    pre_state = store.block_states[spec.hash_tree_root(parent_block)]
    block_time = pre_state.genesis_time + parent_block.slot * spec.config.SECONDS_PER_SLOT
@@ -49,6 +50,37 @@ def tick_and_run_on_attestation(spec, store, attestation, test_steps=None):
    test_steps.append({'attestation': get_attestation_file_name(attestation)})


def add_attestation(spec, store, attestation, test_steps, valid=True):
    yield get_attestation_file_name(attestation), attestation

    if not valid:
        try:
            run_on_attestation(spec, store, attestation, valid=True)
        except AssertionError:
            test_steps.append({
                'attestation': get_attestation_file_name(attestation),
                'valid': False,
            })
            return
        else:
            assert False

    run_on_attestation(spec, store, attestation, valid=True)
    test_steps.append({'attestation': get_attestation_file_name(attestation)})


def run_on_attestation(spec, store, attestation, valid=True):
    if not valid:
        try:
            spec.on_attestation(store, attestation)
        except AssertionError:
            return
        else:
            assert False

    spec.on_attestation(store, attestation)


def get_genesis_forkchoice_store(spec, genesis_state):
    store, _ = get_genesis_forkchoice_store_and_block(spec, genesis_state)
    return store
@@ -73,25 +105,53 @@ def on_tick_and_append_step(spec, store, time, test_steps):
    test_steps.append({'tick': int(time)})


def run_on_block(spec, store, signed_block, test_steps, valid=True):
def run_on_block(spec, store, signed_block, valid=True):
    if not valid:
        try:
            spec.on_block(store, signed_block)

        except AssertionError:
            return
        else:
            assert False

    spec.on_block(store, signed_block)
    assert store.blocks[signed_block.message.hash_tree_root()] == signed_block.message


def add_block(spec, store, signed_block, test_steps, valid=True, allow_invalid_attestations=False):
    """
    Run on_block and on_attestation
    """
    yield get_block_file_name(signed_block), signed_block

    if not valid:
        try:
            run_on_block(spec, store, signed_block, valid=True)
        except AssertionError:
            test_steps.append({
                'block': get_block_file_name(signed_block),
                'valid': False,
            })
            return
        else:
            assert False

    run_on_block(spec, store, signed_block, valid=True)
    test_steps.append({'block': get_block_file_name(signed_block)})

    # An on_block step implies receiving block's attestations
    for attestation in signed_block.message.body.attestations:
        spec.on_attestation(store, attestation)
    try:
        for attestation in signed_block.message.body.attestations:
            run_on_attestation(spec, store, attestation, valid=True)
    except AssertionError:
        if allow_invalid_attestations:
            pass
        else:
            raise

    assert store.blocks[signed_block.message.hash_tree_root()] == signed_block.message
    block_root = signed_block.message.hash_tree_root()
    assert store.blocks[block_root] == signed_block.message
    assert store.block_states[block_root].hash_tree_root() == signed_block.message.state_root
    test_steps.append({
        'checks': {
            'time': int(store.time),
@@ -102,6 +162,8 @@ def run_on_block(spec, store, signed_block, test_steps, valid=True):
        }
    })

    return store.block_states[signed_block.message.hash_tree_root()]


def get_formatted_head_output(spec, store):
    head = spec.get_head(store)
@@ -110,3 +172,49 @@ def get_formatted_head_output(spec, store):
        'slot': int(slot),
        'root': encode_hex(head),
    }


def apply_next_epoch_with_attestations(spec,
                                       state,
                                       store,
                                       fill_cur_epoch,
                                       fill_prev_epoch,
                                       participation_fn=None,
                                       test_steps=None):
    if test_steps is None:
        test_steps = []

    _, new_signed_blocks, post_state = next_epoch_with_attestations(
        spec, state, fill_cur_epoch, fill_prev_epoch, participation_fn=participation_fn)
    for signed_block in new_signed_blocks:
        block = signed_block.message
        yield from tick_and_add_block(spec, store, signed_block, test_steps)
        block_root = block.hash_tree_root()
        assert store.blocks[block_root] == block
        last_signed_block = signed_block

    assert store.block_states[block_root].hash_tree_root() == post_state.hash_tree_root()

    return post_state, store, last_signed_block


def apply_next_slots_with_attestations(spec,
                                       state,
                                       store,
                                       slots,
                                       fill_cur_epoch,
                                       fill_prev_epoch,
                                       test_steps,
                                       participation_fn=None):
    _, new_signed_blocks, post_state = next_slots_with_attestations(
        spec, state, slots, fill_cur_epoch, fill_prev_epoch, participation_fn=participation_fn)
    for signed_block in new_signed_blocks:
        block = signed_block.message
        yield from tick_and_add_block(spec, store, signed_block, test_steps)
        block_root = block.hash_tree_root()
        assert store.blocks[block_root] == block
        last_signed_block = signed_block

    assert store.block_states[block_root].hash_tree_root() == post_state.hash_tree_root()

    return post_state, store, last_signed_block
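
# Usage sketch (editor's illustration): building out a finalized chain, as the
# on_block tests later in this commit do, is a matter of repeating this helper:
#
#     for _ in range(4):
#         state, store, last_signed_block = yield from apply_next_epoch_with_attestations(
#             spec, state, store, True, False, test_steps=test_steps)
#     assert store.finalized_checkpoint.epoch > 0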
@@ -78,5 +78,7 @@ def create_genesis_state(spec, validator_balances, activation_threshold):
    # Initialize the execution payload header (with block number and genesis time set to 0)
    state.latest_execution_payload_header.block_hash = eth1_block_hash
    state.latest_execution_payload_header.random = eth1_block_hash
    state.latest_execution_payload_header.gas_limit = spec.GENESIS_GAS_LIMIT
    state.latest_execution_payload_header.base_fee_per_gas = spec.GENESIS_BASE_FEE_PER_GAS

    return state
@@ -7,6 +7,10 @@ from eth2spec.test.helpers.state import (
from eth2spec.test.helpers.block import (
    build_empty_block_for_next_slot,
)
from eth2spec.test.helpers.sync_committee import (
    compute_committee_indices,
    compute_aggregate_sync_committee_signature,
)
from eth2spec.test.helpers.proposer_slashings import get_valid_proposer_slashing
from eth2spec.test.helpers.attester_slashings import get_valid_attester_slashing_by_indices
from eth2spec.test.helpers.attestations import get_valid_attestation
@@ -44,8 +48,12 @@ def run_slash_and_exit(spec, state, slash_index, exit_index, valid=True):


def get_random_proposer_slashings(spec, state, rng):
    num_slashings = rng.randrange(spec.MAX_PROPOSER_SLASHINGS)
    indices = spec.get_active_validator_indices(state, spec.get_current_epoch(state)).copy()
    num_slashings = rng.randrange(1, spec.MAX_PROPOSER_SLASHINGS)
    active_indices = spec.get_active_validator_indices(state, spec.get_current_epoch(state)).copy()
    indices = [
        index for index in active_indices
        if not state.validators[index].slashed
    ]
    slashings = [
        get_valid_proposer_slashing(
            spec, state,
@@ -56,14 +64,33 @@ def get_random_proposer_slashings(spec, state, rng):
    return slashings


def get_random_attester_slashings(spec, state, rng):
    num_slashings = rng.randrange(spec.MAX_ATTESTER_SLASHINGS)
    indices = spec.get_active_validator_indices(state, spec.get_current_epoch(state)).copy()
def get_random_attester_slashings(spec, state, rng, slashed_indices=[]):
    """
    Caller can supply ``slashed_indices`` if they are aware of other indices
    that will be slashed by other operations in the same block as the one that
    contains the output of this function.
    """
    # ensure at least one attester slashing, the max count
    # is small so not much room for random inclusion
    num_slashings = rng.randrange(1, spec.MAX_ATTESTER_SLASHINGS)
    active_indices = spec.get_active_validator_indices(state, spec.get_current_epoch(state)).copy()
    indices = [
        index for index in active_indices
        if (
            not state.validators[index].slashed
            and index not in slashed_indices
        )
    ]
    sample_upper_bound = 4
    max_slashed_count = num_slashings * sample_upper_bound - 1
    if len(indices) < max_slashed_count:
        return []

    slot_range = list(range(state.slot - spec.SLOTS_PER_HISTORICAL_ROOT + 1, state.slot))
    slashings = [
        get_valid_attester_slashing_by_indices(
            spec, state,
            sorted([indices.pop(rng.randrange(len(indices))) for _ in range(rng.randrange(1, 4))]),
            sorted([indices.pop(rng.randrange(len(indices))) for _ in range(rng.randrange(1, sample_upper_bound))]),
            slot=slot_range.pop(rng.randrange(len(slot_range))),
            signed_1=True, signed_2=True,
        )
@@ -73,7 +100,7 @@ def get_random_attester_slashings(spec, state, rng):


def get_random_attestations(spec, state, rng):
    num_attestations = rng.randrange(spec.MAX_ATTESTATIONS)
    num_attestations = rng.randrange(1, spec.MAX_ATTESTATIONS)

    attestations = [
        get_valid_attestation(
@@ -87,7 +114,7 @@ def get_random_attestations(spec, state, rng):


def prepare_state_and_get_random_deposits(spec, state, rng):
    num_deposits = rng.randrange(spec.MAX_DEPOSITS)
    num_deposits = rng.randrange(1, spec.MAX_DEPOSITS)

    deposit_data_leaves = [spec.DepositData() for _ in range(len(state.validators))]
    deposits = []
@@ -117,24 +144,65 @@ def prepare_state_and_get_random_deposits(spec, state, rng):
    return deposits


def _eligible_for_exit(spec, state, index):
    validator = state.validators[index]

    not_slashed = not validator.slashed

    current_epoch = spec.get_current_epoch(state)
    activation_epoch = validator.activation_epoch
    active_for_long_enough = current_epoch >= activation_epoch + spec.config.SHARD_COMMITTEE_PERIOD

    not_exited = validator.exit_epoch == spec.FAR_FUTURE_EPOCH

    return not_slashed and active_for_long_enough and not_exited


def get_random_voluntary_exits(spec, state, to_be_slashed_indices, rng):
    num_exits = rng.randrange(spec.MAX_VOLUNTARY_EXITS)
    indices = set(spec.get_active_validator_indices(state, spec.get_current_epoch(state)).copy())
    num_exits = rng.randrange(1, spec.MAX_VOLUNTARY_EXITS)
    active_indices = set(spec.get_active_validator_indices(state, spec.get_current_epoch(state)).copy())
    indices = set(
        index for index in active_indices
        if _eligible_for_exit(spec, state, index)
    )
    eligible_indices = indices - to_be_slashed_indices
    exit_indices = [eligible_indices.pop() for _ in range(num_exits)]
    indices_count = min(num_exits, len(eligible_indices))
    exit_indices = [eligible_indices.pop() for _ in range(indices_count)]
    return prepare_signed_exits(spec, state, exit_indices)


def run_test_full_random_operations(spec, state, rng=Random(2080)):
    # move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
    state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH
def get_random_sync_aggregate(spec, state, slot, fraction_participated=1.0, rng=Random(2099)):
    committee_indices = compute_committee_indices(spec, state, state.current_sync_committee)
    participant_count = int(len(committee_indices) * fraction_participated)
    participant_indices = rng.sample(range(len(committee_indices)), participant_count)
    participants = [
        committee_indices[index]
        for index in participant_indices
    ]
    signature = compute_aggregate_sync_committee_signature(
        spec,
        state,
        slot,
        participants,
    )
    return spec.SyncAggregate(
        sync_committee_bits=[index in participant_indices for index in range(len(committee_indices))],
        sync_committee_signature=signature,
    )
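
# Usage sketch (editor's illustration): attach a partially participating
# aggregate to the next block; `fraction_participated` scales the bit count.
#
#     block = build_empty_block_for_next_slot(spec, state)
#     block.body.sync_aggregate = get_random_sync_aggregate(
#         spec, state, block.slot - 1, fraction_participated=0.5)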


def build_random_block_from_state_for_next_slot(spec, state, rng=Random(2188)):
    # prepare state for deposits before building block
    deposits = prepare_state_and_get_random_deposits(spec, state, rng)

    block = build_empty_block_for_next_slot(spec, state)
    block.body.proposer_slashings = get_random_proposer_slashings(spec, state, rng)
    block.body.attester_slashings = get_random_attester_slashings(spec, state, rng)
    proposer_slashings = get_random_proposer_slashings(spec, state, rng)
    block.body.proposer_slashings = proposer_slashings
    slashed_indices = [
        slashing.signed_header_1.message.proposer_index
        for slashing in proposer_slashings
    ]
    block.body.attester_slashings = get_random_attester_slashings(spec, state, rng, slashed_indices)
    block.body.attestations = get_random_attestations(spec, state, rng)
    block.body.deposits = deposits

@@ -148,6 +216,15 @@ def run_test_full_random_operations(spec, state, rng=Random(2080)):
        slashed_indices = slashed_indices.union(attester_slashing.attestation_2.attesting_indices)
    block.body.voluntary_exits = get_random_voluntary_exits(spec, state, slashed_indices, rng)

    return block


def run_test_full_random_operations(spec, state, rng=Random(2080)):
    # move state forward SHARD_COMMITTEE_PERIOD epochs to allow for exit
    state.slot += spec.config.SHARD_COMMITTEE_PERIOD * spec.SLOTS_PER_EPOCH

    block = build_random_block_from_state_for_next_slot(spec, state, rng)

    yield 'pre', state

    signed_block = state_transition_and_sign_block(spec, state, block)
@@ -20,16 +20,20 @@ def set_some_new_deposits(spec, state, rng):
            state.validators[index].activation_eligibility_epoch = spec.get_current_epoch(state)


def exit_random_validators(spec, state, rng):
def exit_random_validators(spec, state, rng, fraction=None):
    if fraction is None:
        # Exit ~1/2
        fraction = 0.5

    if spec.get_current_epoch(state) < 5:
        # Move epochs forward to allow for some validators already exited/withdrawable
        for _ in range(5):
            next_epoch(spec, state)

    current_epoch = spec.get_current_epoch(state)
    # Exit ~1/2 of validators
    for index in spec.get_active_validator_indices(state, current_epoch):
        if rng.choice([True, False]):
        sampled = rng.random() < fraction
        if not sampled:
            continue

        validator = state.validators[index]
@@ -41,11 +45,15 @@ def exit_random_validators(spec, state, rng):
            validator.withdrawable_epoch = current_epoch + 1


def slash_random_validators(spec, state, rng):
    # Slash ~1/2 of validators
def slash_random_validators(spec, state, rng, fraction=None):
    if fraction is None:
        # Slash ~1/2 of validators
        fraction = 0.5

    for index in range(len(state.validators)):
        # slash at least one validator
        if index == 0 or rng.choice([True, False]):
        sampled = rng.random() < fraction
        if index == 0 or sampled:
            spec.slash_validator(state, index)


@@ -115,8 +123,8 @@ def randomize_attestation_participation(spec, state, rng=Random(8020)):
    randomize_epoch_participation(spec, state, spec.get_current_epoch(state), rng)


def randomize_state(spec, state, rng=Random(8020)):
def randomize_state(spec, state, rng=Random(8020), exit_fraction=None, slash_fraction=None):
    set_some_new_deposits(spec, state, rng)
    exit_random_validators(spec, state, rng)
    slash_random_validators(spec, state, rng)
    exit_random_validators(spec, state, rng, fraction=exit_fraction)
    slash_random_validators(spec, state, rng, fraction=slash_fraction)
    randomize_attestation_participation(spec, state, rng)
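
# Usage sketch (editor's illustration): the new keyword arguments let a caller
# bias how much of the registry is perturbed; the defaults keep the old ~1/2 rate.
#
#     randomize_state(spec, state, exit_fraction=0.1, slash_fraction=0.1)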
@@ -1,5 +1,6 @@
from eth2spec.test.context import expect_assertion_error, is_post_altair
from eth2spec.test.helpers.block import apply_empty_block, sign_block, transition_unsigned_block
from eth2spec.test.helpers.voluntary_exits import get_exited_validators


def get_balance(state, index):
@@ -133,3 +134,24 @@ def _set_empty_participation(spec, state, current=True, previous=True):

def set_empty_participation(spec, state, rng=None):
    _set_empty_participation(spec, state)


def ensure_state_has_validators_across_lifecycle(spec, state):
    """
    Scan the validator registry to ensure there is at least 1 validator
    for each of the following lifecycle states:
    1. Pending / deposited
    2. Active
    3. Exited
    4. Slashed
    """
    has_pending = any(filter(spec.is_eligible_for_activation_queue, state.validators))

    current_epoch = spec.get_current_epoch(state)
    has_active = any(filter(lambda v: spec.is_active_validator(v, current_epoch), state.validators))

    has_exited = any(get_exited_validators(spec, state))

    has_slashed = any(filter(lambda v: v.slashed, state.validators))

    return has_pending and has_active and has_exited and has_slashed
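
# Usage sketch (editor's illustration): randomized tests can guard their setup
# with this predicate before generating a scenario, e.g. right after
# `randomize_state(spec, state)`:
#
#     assert ensure_state_has_validators_across_lifecycle(spec, state)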
@@ -1,10 +1,15 @@
from collections import Counter

from eth2spec.test.context import (
    expect_assertion_error,
)
from eth2spec.test.helpers.keys import privkeys
from eth2spec.test.helpers.block import (
    build_empty_block_for_next_slot,
)
from eth2spec.test.helpers.block_processing import run_block_processing_to
from eth2spec.utils import bls
from eth2spec.utils.hash_function import hash


def compute_sync_committee_signature(spec, state, slot, privkey, block_root=None, domain_type=None):
@@ -75,5 +80,104 @@ def compute_committee_indices(spec, state, committee):
    Given a ``committee``, calculate and return the related indices
    """
    all_pubkeys = [v.pubkey for v in state.validators]
    committee_indices = [all_pubkeys.index(pubkey) for pubkey in committee.pubkeys]
    return committee_indices
    return [all_pubkeys.index(pubkey) for pubkey in committee.pubkeys]


def validate_sync_committee_rewards(spec, pre_state, post_state, committee_indices, committee_bits, proposer_index):
    for index in range(len(post_state.validators)):
        reward = 0
        penalty = 0
        if index in committee_indices:
            _reward, _penalty = compute_sync_committee_participant_reward_and_penalty(
                spec,
                pre_state,
                index,
                committee_indices,
                committee_bits,
            )
            reward += _reward
            penalty += _penalty

        if proposer_index == index:
            reward += compute_sync_committee_proposer_reward(
                spec,
                pre_state,
                committee_indices,
                committee_bits,
            )

        assert post_state.balances[index] == pre_state.balances[index] + reward - penalty


def run_sync_committee_processing(spec, state, block, expect_exception=False):
    """
    Processes everything up to the sync committee work, then runs the sync committee work in isolation, and
    produces a pre-state and post-state (None if exception) specifically for sync-committee processing changes.
    """
    pre_state = state.copy()
    # process up to the sync committee work
    call = run_block_processing_to(spec, state, block, 'process_sync_aggregate')
    yield 'pre', state
    yield 'sync_aggregate', block.body.sync_aggregate
    if expect_exception:
        expect_assertion_error(lambda: call(state, block))
        yield 'post', None
    else:
        call(state, block)
        yield 'post', state
    if expect_exception:
        assert pre_state.balances == state.balances
    else:
        committee_indices = compute_committee_indices(
            spec,
            state,
            state.current_sync_committee,
        )
        committee_bits = block.body.sync_aggregate.sync_committee_bits
        validate_sync_committee_rewards(
            spec,
            pre_state,
            state,
            committee_indices,
            committee_bits,
            block.proposer_index
        )


def _build_block_for_next_slot_with_sync_participation(spec, state, committee_indices, committee_bits):
    block = build_empty_block_for_next_slot(spec, state)
    block.body.sync_aggregate = spec.SyncAggregate(
        sync_committee_bits=committee_bits,
        sync_committee_signature=compute_aggregate_sync_committee_signature(
            spec,
            state,
            block.slot - 1,
            [index for index, bit in zip(committee_indices, committee_bits) if bit],
        )
    )
    return block


def run_successful_sync_committee_test(spec, state, committee_indices, committee_bits):
    block = _build_block_for_next_slot_with_sync_participation(spec, state, committee_indices, committee_bits)
    yield from run_sync_committee_processing(spec, state, block)


def get_committee_indices(spec, state, duplicates=False):
    """
    This utility function allows the caller to ensure there are or are not
    duplicate validator indices in the returned committee based on
    the boolean ``duplicates``.
    """
    state = state.copy()
    current_epoch = spec.get_current_epoch(state)
    randao_index = (current_epoch + 1) % spec.EPOCHS_PER_HISTORICAL_VECTOR
    while True:
        committee = spec.get_next_sync_committee_indices(state)
        if duplicates:
            if len(committee) != len(set(committee)):
                return committee
        else:
            if len(committee) == len(set(committee)):
                return committee
        state.randao_mixes[randao_index] = hash(state.randao_mixes[randao_index])
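
# Usage sketch (editor's illustration): the loop above re-rolls the RANDAO mix
# on a copied state until the sampled committee matches the requested shape.
#
#     committee_indices = get_committee_indices(spec, state, duplicates=True)
#     assert len(committee_indices) > len(set(committee_indices))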
@@ -1,3 +1,4 @@
from random import Random
from eth2spec.utils import bls
from eth2spec.test.helpers.keys import privkeys
@@ -23,3 +24,21 @@ def sign_voluntary_exit(spec, state, voluntary_exit, privkey):
        message=voluntary_exit,
        signature=bls.Sign(privkey, signing_root)
    )


#
# Helpers for applying effects of a voluntary exit
#
def get_exited_validators(spec, state):
    current_epoch = spec.get_current_epoch(state)
    return [index for (index, validator) in enumerate(state.validators) if validator.exit_epoch <= current_epoch]


def exit_validators(spec, state, validator_count, rng=None):
    if rng is None:
        rng = Random(1337)

    indices = rng.sample(range(len(state.validators)), validator_count)
    for index in indices:
        spec.initiate_validator_exit(state, index)
    return indices
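
# Usage sketch (editor's illustration): exit a third of the registry, then
# advance epochs until `get_exited_validators` reports the same indices once
# their exit epochs are reached.
#
#     exited_indices = exit_validators(spec, state, len(state.validators) // 3)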
@@ -306,7 +306,7 @@ def test_att1_empty_indices(spec, state):
    attester_slashing = get_valid_attester_slashing(spec, state, signed_1=False, signed_2=True)

    attester_slashing.attestation_1.attesting_indices = []
    attester_slashing.attestation_1.signature = spec.bls.Z2_SIGNATURE
    attester_slashing.attestation_1.signature = spec.bls.G2_POINT_AT_INFINITY

    yield from run_attester_slashing_processing(spec, state, attester_slashing, False)

@@ -318,7 +318,7 @@ def test_att2_empty_indices(spec, state):
    attester_slashing = get_valid_attester_slashing(spec, state, signed_1=True, signed_2=False)

    attester_slashing.attestation_2.attesting_indices = []
    attester_slashing.attestation_2.signature = spec.bls.Z2_SIGNATURE
    attester_slashing.attestation_2.signature = spec.bls.G2_POINT_AT_INFINITY

    yield from run_attester_slashing_processing(spec, state, attester_slashing, False)

@@ -330,10 +330,10 @@ def test_all_empty_indices(spec, state):
    attester_slashing = get_valid_attester_slashing(spec, state, signed_1=False, signed_2=False)

    attester_slashing.attestation_1.attesting_indices = []
    attester_slashing.attestation_1.signature = spec.bls.Z2_SIGNATURE
    attester_slashing.attestation_1.signature = spec.bls.G2_POINT_AT_INFINITY

    attester_slashing.attestation_2.attesting_indices = []
    attester_slashing.attestation_2.signature = spec.bls.Z2_SIGNATURE
    attester_slashing.attestation_2.signature = spec.bls.G2_POINT_AT_INFINITY

    yield from run_attester_slashing_processing(spec, state, attester_slashing, False)
@@ -11,12 +11,12 @@ from eth2spec.test.helpers.block import build_empty_block_for_next_slot
from eth2spec.test.helpers.constants import MINIMAL
from eth2spec.test.helpers.fork_choice import (
    tick_and_run_on_attestation,
    tick_and_run_on_block,
    tick_and_add_block,
    get_anchor_root,
    get_genesis_forkchoice_store_and_block,
    get_formatted_head_output,
    on_tick_and_append_step,
    run_on_block,
    add_block,
)
from eth2spec.test.helpers.state import (
    next_epoch,
@@ -68,12 +68,12 @@ def test_chain_no_attestations(spec, state):
    # On receiving a block of `GENESIS_SLOT + 1` slot
    block_1 = build_empty_block_for_next_slot(spec, state)
    signed_block_1 = state_transition_and_sign_block(spec, state, block_1)
    yield from tick_and_run_on_block(spec, store, signed_block_1, test_steps)
    yield from tick_and_add_block(spec, store, signed_block_1, test_steps)

    # On receiving a block of next epoch
    block_2 = build_empty_block_for_next_slot(spec, state)
    signed_block_2 = state_transition_and_sign_block(spec, state, block_2)
    yield from tick_and_run_on_block(spec, store, signed_block_2, test_steps)
    yield from tick_and_add_block(spec, store, signed_block_2, test_steps)

    assert spec.get_head(store) == spec.hash_tree_root(block_2)
    test_steps.append({
@@ -107,14 +107,14 @@ def test_split_tie_breaker_no_attestations(spec, state):
    block_1_state = genesis_state.copy()
    block_1 = build_empty_block_for_next_slot(spec, block_1_state)
    signed_block_1 = state_transition_and_sign_block(spec, block_1_state, block_1)
    yield from tick_and_run_on_block(spec, store, signed_block_1, test_steps)
    yield from tick_and_add_block(spec, store, signed_block_1, test_steps)

    # additional block at slot 1
    block_2_state = genesis_state.copy()
    block_2 = build_empty_block_for_next_slot(spec, block_2_state)
    block_2.body.graffiti = b'\x42' * 32
    signed_block_2 = state_transition_and_sign_block(spec, block_2_state, block_2)
    yield from tick_and_run_on_block(spec, store, signed_block_2, test_steps)
    yield from tick_and_add_block(spec, store, signed_block_2, test_steps)

    highest_root = max(spec.hash_tree_root(block_1), spec.hash_tree_root(block_2))
    assert spec.get_head(store) == highest_root
@@ -150,14 +150,14 @@ def test_shorter_chain_but_heavier_weight(spec, state):
    for _ in range(3):
        long_block = build_empty_block_for_next_slot(spec, long_state)
        signed_long_block = state_transition_and_sign_block(spec, long_state, long_block)
        yield from tick_and_run_on_block(spec, store, signed_long_block, test_steps)
        yield from tick_and_add_block(spec, store, signed_long_block, test_steps)

    # build short tree
    short_state = genesis_state.copy()
    short_block = build_empty_block_for_next_slot(spec, short_state)
    short_block.body.graffiti = b'\x42' * 32
    signed_short_block = state_transition_and_sign_block(spec, short_state, short_block)
    yield from tick_and_run_on_block(spec, store, signed_short_block, test_steps)
    yield from tick_and_add_block(spec, store, signed_short_block, test_steps)

    short_attestation = get_valid_attestation(spec, short_state, short_block.slot, signed=True)
    yield from tick_and_run_on_attestation(spec, store, short_attestation, test_steps)
@@ -200,7 +200,7 @@ def test_filtered_block_tree(spec, state):
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    for signed_block in signed_blocks:
        yield from run_on_block(spec, store, signed_block, test_steps)
        yield from add_block(spec, store, signed_block, test_steps)

    assert store.justified_checkpoint == state.current_justified_checkpoint

@@ -247,7 +247,7 @@ def test_filtered_block_tree(spec, state):
    on_tick_and_append_step(spec, store, current_time, test_steps)

    # include rogue block and associated attestations in the store
    yield from run_on_block(spec, store, signed_rogue_block, test_steps)
    yield from add_block(spec, store, signed_rogue_block, test_steps)

    for attestation in attestations:
        yield from tick_and_run_on_attestation(spec, store, attestation, test_steps)
@@ -0,0 +1,689 @@
import random

from eth2spec.utils.ssz.ssz_impl import hash_tree_root
from eth2spec.test.context import MINIMAL, spec_state_test, with_all_phases, with_presets
from eth2spec.test.helpers.attestations import (
    next_epoch_with_attestations,
    next_slots_with_attestations,
    state_transition_with_full_block,
    state_transition_with_full_attestations_block,
)
from eth2spec.test.helpers.block import (
    build_empty_block_for_next_slot,
    build_empty_block,
    transition_unsigned_block,
    sign_block,
)
from eth2spec.test.helpers.fork_choice import (
    get_genesis_forkchoice_store_and_block,
    on_tick_and_append_step,
    add_block,
    tick_and_add_block,
    apply_next_epoch_with_attestations,
    apply_next_slots_with_attestations,
)
from eth2spec.test.helpers.state import (
    next_epoch,
    next_slots,
    state_transition_and_sign_block,
    transition_to,
)


rng = random.Random(2020)


def _drop_random_one_third(_slot, _index, indices):
    committee_len = len(indices)
    assert committee_len >= 3
    filter_len = committee_len // 3
    participant_count = committee_len - filter_len
    return rng.sample(indices, participant_count)
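
# Editor's note: `_drop_random_one_third` has the participation_fn shape
# (slot, committee_index, committee_indices) consumed by the attestation
# helpers, so it can thin out participation when driving the store, e.g.
# (illustration only):
#
#     state, store, _ = yield from apply_next_epoch_with_attestations(
#         spec, state, store, True, False,
#         participation_fn=_drop_random_one_third, test_steps=test_steps)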
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_basic(spec, state):
|
||||
test_steps = []
|
||||
# Initialization
|
||||
store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
|
||||
yield 'anchor_state', state
|
||||
yield 'anchor_block', anchor_block
|
||||
current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
|
||||
on_tick_and_append_step(spec, store, current_time, test_steps)
|
||||
assert store.time == current_time
|
||||
|
||||
# On receiving a block of `GENESIS_SLOT + 1` slot
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
yield from tick_and_add_block(spec, store, signed_block, test_steps)
|
||||
assert spec.get_head(store) == signed_block.message.hash_tree_root()
|
||||
|
||||
# On receiving a block of next epoch
|
||||
store.time = current_time + spec.config.SECONDS_PER_SLOT * spec.SLOTS_PER_EPOCH
|
||||
block = build_empty_block(spec, state, state.slot + spec.SLOTS_PER_EPOCH)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
yield from tick_and_add_block(spec, store, signed_block, test_steps)
|
||||
assert spec.get_head(store) == signed_block.message.hash_tree_root()
|
||||
|
||||
yield 'steps', test_steps
|
||||
|
||||
# TODO: add tests for justified_root and finalized_root
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
@with_presets([MINIMAL], reason="too slow")
|
||||
def test_on_block_checkpoints(spec, state):
|
||||
test_steps = []
|
||||
# Initialization
|
||||
store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
|
||||
yield 'anchor_state', state
|
||||
yield 'anchor_block', anchor_block
|
||||
current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
|
||||
on_tick_and_append_step(spec, store, current_time, test_steps)
|
||||
assert store.time == current_time
|
||||
|
||||
# Run for 1 epoch with full attestations
|
||||
next_epoch(spec, state)
|
||||
on_tick_and_append_step(spec, store, store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT, test_steps)
|
||||
|
||||
state, store, last_signed_block = yield from apply_next_epoch_with_attestations(
|
||||
spec, state, store, True, False, test_steps=test_steps)
|
||||
last_block_root = hash_tree_root(last_signed_block.message)
|
||||
assert spec.get_head(store) == last_block_root
|
||||
|
||||
# Forward 1 epoch
|
||||
next_epoch(spec, state)
|
||||
on_tick_and_append_step(spec, store, store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT, test_steps)
|
||||
|
||||
# Mock the finalized_checkpoint and build a block on it
|
||||
fin_state = store.block_states[last_block_root].copy()
|
||||
fin_state.finalized_checkpoint = store.block_states[last_block_root].current_justified_checkpoint.copy()
|
||||
|
||||
block = build_empty_block_for_next_slot(spec, fin_state)
|
||||
signed_block = state_transition_and_sign_block(spec, fin_state.copy(), block)
|
||||
yield from tick_and_add_block(spec, store, signed_block, test_steps)
|
||||
assert spec.get_head(store) == signed_block.message.hash_tree_root()
|
||||
yield 'steps', test_steps
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_future_block(spec, state):
|
||||
test_steps = []
|
||||
# Initialization
|
||||
store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
|
||||
yield 'anchor_state', state
|
||||
yield 'anchor_block', anchor_block
|
||||
current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
|
||||
on_tick_and_append_step(spec, store, current_time, test_steps)
|
||||
assert store.time == current_time
|
||||
|
||||
# Do NOT tick time to `GENESIS_SLOT + 1` slot
|
||||
# Fail receiving block of `GENESIS_SLOT + 1` slot
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
yield from add_block(spec, store, signed_block, test_steps, valid=False)
|
||||
|
||||
yield 'steps', test_steps
|
||||
|
||||
|
||||
@with_all_phases
@spec_state_test
def test_on_block_bad_parent_root(spec, state):
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # Fail receiving a block with an unknown parent root
    block = build_empty_block_for_next_slot(spec, state)
    transition_unsigned_block(spec, state, block)
    block.state_root = state.hash_tree_root()

    block.parent_root = b'\x45' * 32

    signed_block = sign_block(spec, state, block)

    yield from add_block(spec, store, signed_block, test_steps, valid=False)

    yield 'steps', test_steps
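# Here the import fails on the parent lookup; the phase 0 `on_block` handler
# requires (sketch):
#
#     assert block.parent_root in store.block_states
#
# so a block whose parent root is unknown to the store is rejected outright.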
@with_all_phases
@spec_state_test
@with_presets([MINIMAL], reason="too slow")
def test_on_block_before_finalized(spec, state):
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # Fork
    another_state = state.copy()

    # Create a finalized chain
    for _ in range(4):
        state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, state, store, True, False, test_steps=test_steps)
    assert store.finalized_checkpoint.epoch == 2

    # Fail receiving a block that builds on the fork preceding finalization
    block = build_empty_block_for_next_slot(spec, another_state)
    block.body.graffiti = b'\x12' * 32
    signed_block = state_transition_and_sign_block(spec, another_state, block)
    assert signed_block.message.hash_tree_root() not in store.blocks
    yield from tick_and_add_block(spec, store, signed_block, test_steps, valid=False)

    yield 'steps', test_steps
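# This test (and the two `finalized_skip_slots` tests below) exercises the
# finality guards of the phase 0 `on_block` handler (sketch):
#
#     finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
#     assert block.slot > finalized_slot
#     assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root
#
# The forked block above sits at a slot no later than the finalized slot, so
# the first assertion fails; the next two tests probe the ancestor check.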
@with_all_phases
@spec_state_test
@with_presets([MINIMAL], reason="too slow")
def test_on_block_finalized_skip_slots(spec, state):
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # Create a finalized chain
    for _ in range(4):
        state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, state, store, True, False, test_steps=test_steps)
    assert store.finalized_checkpoint.epoch == 2

    # Another chain
    another_state = store.block_states[store.finalized_checkpoint.root].copy()
    # Build a block that includes the skipped slots up to finality in chain
    block = build_empty_block(spec,
                              another_state,
                              spec.compute_start_slot_at_epoch(store.finalized_checkpoint.epoch) + 2)
    block.body.graffiti = b'\x12' * 32
    signed_block = state_transition_and_sign_block(spec, another_state, block)

    yield from tick_and_add_block(spec, store, signed_block, test_steps)

    yield 'steps', test_steps
@with_all_phases
@spec_state_test
@with_presets([MINIMAL], reason="too slow")
def test_on_block_finalized_skip_slots_not_in_skip_chain(spec, state):
    test_steps = []
    # Initialization
    transition_to(spec, state, state.slot + spec.SLOTS_PER_EPOCH - 1)
    block = build_empty_block_for_next_slot(spec, state)
    transition_unsigned_block(spec, state, block)
    block.state_root = state.hash_tree_root()
    store = spec.get_forkchoice_store(state, block)
    yield 'anchor_state', state
    yield 'anchor_block', block

    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    pre_finalized_checkpoint_epoch = store.finalized_checkpoint.epoch

    # Finalized
    for _ in range(3):
        state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, state, store, True, False, test_steps=test_steps)
    assert store.finalized_checkpoint.epoch == pre_finalized_checkpoint_epoch + 1

    # Now build a block at a later slot than the finalized epoch
    # It includes the finalized block in its chain, but not at the appropriate skip slot
    pre_state = store.block_states[block.hash_tree_root()].copy()
    block = build_empty_block(spec,
                              state=pre_state,
                              slot=spec.compute_start_slot_at_epoch(store.finalized_checkpoint.epoch) + 2)
    block.body.graffiti = b'\x12' * 32
    signed_block = sign_block(spec, pre_state, block)
    yield from tick_and_add_block(spec, store, signed_block, test_steps, valid=False)

    yield 'steps', test_steps
@with_all_phases
@spec_state_test
@with_presets([MINIMAL], reason="mainnet config requires too many pre-generated public/private keys")
def test_on_block_update_justified_checkpoint_within_safe_slots(spec, state):
    """
    Test `should_update_justified_checkpoint`:
    compute_slots_since_epoch_start(get_current_slot(store)) < SAFE_SLOTS_TO_UPDATE_JUSTIFIED
    """
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # Skip epochs 0 & 1
    for _ in range(2):
        next_epoch(spec, state)
    # Fill epoch 2
    state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, state, store, True, False, test_steps=test_steps)
    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 0
    assert state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 2
    # Skip epochs 3 & 4
    for _ in range(2):
        next_epoch(spec, state)
    # Epoch 5: Attest to the current epoch
    state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, state, store, True, False, participation_fn=_drop_random_one_third, test_steps=test_steps)
    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 0
    assert state.current_justified_checkpoint.epoch == 2
    assert store.justified_checkpoint.epoch == 2
    assert state.current_justified_checkpoint == store.justified_checkpoint

    # Skip epoch 6
    next_epoch(spec, state)

    pre_state = state.copy()

    # Build a block to justify epoch 5
    signed_block = state_transition_with_full_block(spec, state, True, True)
    assert state.finalized_checkpoint.epoch == 0
    assert state.current_justified_checkpoint.epoch == 5
    assert state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch
    assert spec.get_current_slot(store) % spec.SLOTS_PER_EPOCH < spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED
    # Run on_block
    yield from tick_and_add_block(spec, store, signed_block, test_steps)
    # Ensure justified_checkpoint has been changed but finality is unchanged
    assert store.justified_checkpoint.epoch == 5
    assert store.justified_checkpoint == state.current_justified_checkpoint
    assert store.finalized_checkpoint.epoch == pre_state.finalized_checkpoint.epoch == 0

    yield 'steps', test_steps
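# For reference, the helper under test, per the phase 0 fork choice spec
# (sketch; the spec text is authoritative):
#
#     def should_update_justified_checkpoint(store, new_justified_checkpoint):
#         if compute_slots_since_epoch_start(get_current_slot(store)) < SAFE_SLOTS_TO_UPDATE_JUSTIFIED:
#             return True
#         justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
#         if get_ancestor(store, new_justified_checkpoint.root, justified_slot) != store.justified_checkpoint.root:
#             return False
#         return True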
@with_all_phases
@with_presets([MINIMAL], reason="It assumes that `MAX_ATTESTATIONS` >= 2/3 attestations of an epoch")
@spec_state_test
def test_on_block_outside_safe_slots_but_finality(spec, state):
    """
    Test `should_update_justified_checkpoint` case
    - compute_slots_since_epoch_start(get_current_slot(store)) > SAFE_SLOTS_TO_UPDATE_JUSTIFIED
    - new_justified_checkpoint and store.justified_checkpoint.root are NOT conflicting

    Thus should_update_justified_checkpoint returns True.

    Part of this script is similar to `test_new_justified_is_later_than_store_justified`.
    """
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # Skip epoch 0
    next_epoch(spec, state)
    # Fill epochs 1 to 3, attesting to the current epoch
    for _ in range(3):
        state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, state, store, True, False, test_steps=test_steps)
    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 2
    assert state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 3

    # Skip epochs 4-6
    for _ in range(3):
        next_epoch(spec, state)

    # epoch 7
    state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, state, store, True, True, test_steps=test_steps)
    assert state.finalized_checkpoint.epoch == 2
    assert state.current_justified_checkpoint.epoch == 7

    # epoch 8, attest the first 5 blocks
    state, store, _ = yield from apply_next_slots_with_attestations(
        spec, state, store, 5, True, True, test_steps)
    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 2
    assert state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 7

    # Propose a block at epoch 9, 5th slot
    next_epoch(spec, state)
    next_slots(spec, state, 4)
    signed_block = state_transition_with_full_attestations_block(spec, state, True, True)
    yield from tick_and_add_block(spec, store, signed_block, test_steps)
    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 2
    assert state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 7

    # Propose an empty block at epoch 10, SAFE_SLOTS_TO_UPDATE_JUSTIFIED + 2 slot
    # This block would trigger justification and finality updates on store
    next_epoch(spec, state)
    next_slots(spec, state, 4)
    block = build_empty_block_for_next_slot(spec, state)
    signed_block = state_transition_and_sign_block(spec, state, block)
    assert state.finalized_checkpoint.epoch == 7
    assert state.current_justified_checkpoint.epoch == 8
    # Step time past safe slots and run on_block
    if store.time < spec.compute_time_at_slot(state, signed_block.message.slot):
        time = store.genesis_time + signed_block.message.slot * spec.config.SECONDS_PER_SLOT
        on_tick_and_append_step(spec, store, time, test_steps)
    assert spec.get_current_slot(store) % spec.SLOTS_PER_EPOCH >= spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED
    yield from add_block(spec, store, signed_block, test_steps)

    # Ensure both the justified and finalized checkpoints have been updated
    assert store.finalized_checkpoint.epoch == 7
    assert store.finalized_checkpoint == state.finalized_checkpoint
    assert store.justified_checkpoint.epoch == 8
    assert store.justified_checkpoint == state.current_justified_checkpoint

    yield 'steps', test_steps
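# The store updates asserted above correspond to the tail of the phase 0
# `on_block` handler (sketch; see the fork choice spec for the authoritative
# text):
#
#     if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
#         if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
#             store.best_justified_checkpoint = state.current_justified_checkpoint
#         if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
#             store.justified_checkpoint = state.current_justified_checkpoint
#
#     if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
#         store.finalized_checkpoint = state.finalized_checkpoint
#         # ... justified may then be pulled along with the new finality,
#         # which is what the next tests exercise.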
@with_all_phases
@with_presets([MINIMAL], reason="It assumes that `MAX_ATTESTATIONS` >= 2/3 attestations of an epoch")
@spec_state_test
def test_new_justified_is_later_than_store_justified(spec, state):
    """
    J: Justified
    F: Finalized
    fork_1_state (forked from genesis):
        epoch
        [0] <- [1] <- [2] <- [3] <- [4]
         F                    J

    fork_2_state (forked from fork_1_state's epoch 2):
        epoch
                       └──── [3] <- [4] <- [5] <- [6]
         F                                  J

    fork_3_state (forked from genesis):
        [0] <- [1] <- [2] <- [3] <- [4] <- [5]
                              F      J
    """
    # The 1st fork, from genesis
    fork_1_state = state.copy()
    # The 3rd fork, from genesis
    fork_3_state = state.copy()

    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # ----- Process fork_1_state
    # Skip epoch 0
    next_epoch(spec, fork_1_state)
    # Fill epoch 1 with previous epoch attestations
    fork_1_state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, fork_1_state, store, False, True, test_steps=test_steps)

    # Fork `fork_2_state` at the start of epoch 2
    fork_2_state = fork_1_state.copy()
    assert spec.get_current_epoch(fork_2_state) == 2

    # Skip epoch 2
    next_epoch(spec, fork_1_state)
    # Fill epochs 3 & 4 with previous epoch attestations
    for _ in range(2):
        fork_1_state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, fork_1_state, store, False, True, test_steps=test_steps)

    assert fork_1_state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 0
    assert fork_1_state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 3
    assert store.justified_checkpoint == fork_1_state.current_justified_checkpoint

    # ------ fork_2_state: Create a chain to set store.best_justified_checkpoint
    # NOTE: The goal is to make `store.best_justified_checkpoint.epoch > store.justified_checkpoint.epoch`
    all_blocks = []

    # Propose an empty block at epoch 2, 1st slot
    block = build_empty_block_for_next_slot(spec, fork_2_state)
    signed_block = state_transition_and_sign_block(spec, fork_2_state, block)
    yield from tick_and_add_block(spec, store, signed_block, test_steps)
    assert fork_2_state.current_justified_checkpoint.epoch == 0

    # Skip to epoch 4
    for _ in range(2):
        next_epoch(spec, fork_2_state)
    assert fork_2_state.current_justified_checkpoint.epoch == 0

    # Propose a block at epoch 4, 5th slot
    # Propose a block at epoch 5, 5th slot
    for _ in range(2):
        next_epoch(spec, fork_2_state)
        next_slots(spec, fork_2_state, 4)
        signed_block = state_transition_with_full_attestations_block(spec, fork_2_state, True, True)
        yield from tick_and_add_block(spec, store, signed_block, test_steps)
    assert fork_2_state.current_justified_checkpoint.epoch == 0

    # Propose a block at epoch 6, SAFE_SLOTS_TO_UPDATE_JUSTIFIED + 2 slot
    next_epoch(spec, fork_2_state)
    next_slots(spec, fork_2_state, spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED + 2)
    signed_block = state_transition_with_full_attestations_block(spec, fork_2_state, True, True)
    assert fork_2_state.finalized_checkpoint.epoch == 0
    assert fork_2_state.current_justified_checkpoint.epoch == 5
    # Check SAFE_SLOTS_TO_UPDATE_JUSTIFIED
    spec.on_tick(store, store.genesis_time + fork_2_state.slot * spec.config.SECONDS_PER_SLOT)
    assert spec.compute_slots_since_epoch_start(spec.get_current_slot(store)) >= spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED
    # Run on_block
    yield from add_block(spec, store, signed_block, test_steps)
    assert store.finalized_checkpoint.epoch == 0
    assert store.justified_checkpoint.epoch == 3
    assert store.best_justified_checkpoint.epoch == 5

    # ------ fork_3_state: Create another chain to test the
    # "Update justified if new justified is later than store justified" case
    all_blocks = []
    for _ in range(3):
        next_epoch(spec, fork_3_state)

    # epoch 3
    _, signed_blocks, fork_3_state = next_epoch_with_attestations(spec, fork_3_state, True, True)
    all_blocks += signed_blocks
    assert fork_3_state.finalized_checkpoint.epoch == 0

    # epoch 4, attest the first 5 blocks
    _, blocks, fork_3_state = next_slots_with_attestations(spec, fork_3_state, 5, True, True)
    all_blocks += blocks.copy()
    assert fork_3_state.finalized_checkpoint.epoch == 0

    # Propose a block at epoch 5, 5th slot
    next_epoch(spec, fork_3_state)
    next_slots(spec, fork_3_state, 4)
    signed_block = state_transition_with_full_block(spec, fork_3_state, True, True)
    all_blocks.append(signed_block.copy())
    assert fork_3_state.finalized_checkpoint.epoch == 0

    # Propose a block at epoch 6, 5th slot
    next_epoch(spec, fork_3_state)
    next_slots(spec, fork_3_state, 4)
    signed_block = state_transition_with_full_block(spec, fork_3_state, True, True)
    all_blocks.append(signed_block.copy())
    assert fork_3_state.finalized_checkpoint.epoch == 3
    assert fork_3_state.current_justified_checkpoint.epoch == 4

    # FIXME: pending on the `on_block`, `on_attestation` fix
    # # Apply blocks of `fork_3_state` to `store`
    # for block in all_blocks:
    #     if store.time < spec.compute_time_at_slot(fork_2_state, block.message.slot):
    #         spec.on_tick(store, store.genesis_time + block.message.slot * spec.config.SECONDS_PER_SLOT)
    #     # valid_attestations=False because the attestations are outdated (older than previous epoch)
    #     yield from add_block(spec, store, block, test_steps, allow_invalid_attestations=False)

    # assert store.finalized_checkpoint == fork_3_state.finalized_checkpoint
    # assert (store.justified_checkpoint
    #         == fork_3_state.current_justified_checkpoint
    #         != store.best_justified_checkpoint)
    # assert (store.best_justified_checkpoint
    #         == fork_2_state.current_justified_checkpoint)

    yield 'steps', test_steps
@with_all_phases
@spec_state_test
def test_new_finalized_slot_is_not_justified_checkpoint_ancestor(spec, state):
    """
    J: Justified
    F: Finalized
    state (forked from genesis):
        epoch
        [0] <- [1] <- [2] <- [3] <- [4] <- [5]
         F                    J

    another_state (forked from epoch 0):
         └──── [1] <- [2] <- [3] <- [4] <- [5]
                       F      J
    """
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # ----- Process state
    # Goal: make `store.finalized_checkpoint.epoch == 0` and `store.justified_checkpoint.epoch == 3`
    # Skip epoch 0
    next_epoch(spec, state)

    # Fork another_state
    another_state = state.copy()

    # Fill epoch 1 with previous epoch attestations
    state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, state, store, False, True, test_steps=test_steps)
    # Skip epoch 2
    next_epoch(spec, state)
    # Fill epochs 3 & 4 with previous epoch attestations
    for _ in range(2):
        state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, state, store, False, True, test_steps=test_steps)

    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 0
    assert state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 3
    assert store.justified_checkpoint == state.current_justified_checkpoint

    # Create another chain
    # Goal: make `another_state.finalized_checkpoint.epoch == 2` and `another_state.justified_checkpoint.epoch == 3`
    all_blocks = []
    # Fill epochs 1, 2 & 3 with previous + current epoch attestations
    for _ in range(3):
        _, signed_blocks, another_state = next_epoch_with_attestations(spec, another_state, True, True)
        all_blocks += signed_blocks

    assert another_state.finalized_checkpoint.epoch == 2
    assert another_state.current_justified_checkpoint.epoch == 3
    assert state.finalized_checkpoint != another_state.finalized_checkpoint
    assert state.current_justified_checkpoint != another_state.current_justified_checkpoint

    # pre_store_justified_checkpoint_root = store.justified_checkpoint.root

    # FIXME: pending on the `on_block`, `on_attestation` fix
    # # Apply blocks of `another_state` to `store`
    # for block in all_blocks:
    #     # NOTE: Do not call `on_tick` here
    #     yield from add_block(spec, store, block, test_steps, allow_invalid_attestations=True)

    # finalized_slot = spec.compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
    # ancestor_at_finalized_slot = spec.get_ancestor(store, pre_store_justified_checkpoint_root, finalized_slot)
    # assert ancestor_at_finalized_slot != store.finalized_checkpoint.root

    # assert store.finalized_checkpoint == another_state.finalized_checkpoint
    # assert store.justified_checkpoint == another_state.current_justified_checkpoint

    yield 'steps', test_steps
@with_all_phases
@spec_state_test
def test_new_finalized_slot_is_justified_checkpoint_ancestor(spec, state):
    """
    J: Justified
    F: Finalized
    state:
        epoch
        [0] <- [1] <- [2] <- [3] <- [4] <- [5]
                       F             J

    another_state (forked from state at epoch 3):
                              └──── [4] <- [5]
                               F      J
    """
    test_steps = []
    # Initialization
    store, anchor_block = get_genesis_forkchoice_store_and_block(spec, state)
    yield 'anchor_state', state
    yield 'anchor_block', anchor_block
    current_time = state.slot * spec.config.SECONDS_PER_SLOT + store.genesis_time
    on_tick_and_append_step(spec, store, current_time, test_steps)
    assert store.time == current_time

    # Process state
    next_epoch(spec, state)
    spec.on_tick(store, store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT)

    state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, state, store, False, True, test_steps=test_steps)

    state, store, _ = yield from apply_next_epoch_with_attestations(
        spec, state, store, True, False, test_steps=test_steps)
    next_epoch(spec, state)

    for _ in range(2):
        state, store, _ = yield from apply_next_epoch_with_attestations(
            spec, state, store, False, True, test_steps=test_steps)

    assert state.finalized_checkpoint.epoch == store.finalized_checkpoint.epoch == 2
    assert state.current_justified_checkpoint.epoch == store.justified_checkpoint.epoch == 4
    assert store.justified_checkpoint == state.current_justified_checkpoint

    # Create another chain
    # Forked from epoch 3
    all_blocks = []
    slot = spec.compute_start_slot_at_epoch(3)
    block_root = spec.get_block_root_at_slot(state, slot)
    another_state = store.block_states[block_root].copy()
    for _ in range(2):
        _, signed_blocks, another_state = next_epoch_with_attestations(spec, another_state, True, True)
        all_blocks += signed_blocks

    assert another_state.finalized_checkpoint.epoch == 3
    assert another_state.current_justified_checkpoint.epoch == 4

    pre_store_justified_checkpoint_root = store.justified_checkpoint.root
    for block in all_blocks:
        # FIXME: Once `on_block` and `on_attestation` logic is fixed,
        # fix this test case and remove the allow_invalid_attestations flag
        yield from tick_and_add_block(spec, store, block, test_steps, allow_invalid_attestations=True)

    finalized_slot = spec.compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
    ancestor_at_finalized_slot = spec.get_ancestor(store, pre_store_justified_checkpoint_root, finalized_slot)
    assert ancestor_at_finalized_slot == store.finalized_checkpoint.root

    assert store.finalized_checkpoint == another_state.finalized_checkpoint
    assert store.justified_checkpoint != another_state.current_justified_checkpoint

    yield 'steps', test_steps
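# `get_ancestor` walks the block tree back to the requested slot; the phase 0
# definition is approximately (sketch; the spec text is authoritative):
#
#     def get_ancestor(store, root, slot):
#         block = store.blocks[root]
#         if block.slot > slot:
#             return get_ancestor(store, block.parent_root, slot)
#         # root is at or before the queried slot (skip slots resolve to the
#         # most recent ancestor), so this root is the answer
#         return root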
@@ -0,0 +1,438 @@
"""
|
||||
This module is generated from the ``random`` test generator.
|
||||
Please do not edit this file manually.
|
||||
See the README for that generator for more information.
|
||||
"""
|
||||
|
||||
from eth2spec.test.helpers.constants import PHASE0
|
||||
from eth2spec.test.context import (
|
||||
misc_balances_in_default_range_with_many_validators,
|
||||
with_phases,
|
||||
zero_activation_threshold,
|
||||
only_generator,
|
||||
)
|
||||
from eth2spec.test.context import (
|
||||
always_bls,
|
||||
spec_test,
|
||||
with_custom_state,
|
||||
single_phase,
|
||||
)
|
||||
from eth2spec.test.utils.randomized_block_tests import (
|
||||
run_generated_randomized_test,
|
||||
)
|
||||
|
||||
|
||||
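# Each scenario below is a plain dict: a 'state_randomizer' plus a list of
# 'transitions', where every transition carries 'epochs_to_skip',
# 'slots_to_skip', a 'block_producer' ('no_block' or 'random_block') and a
# 'validation' hook. Roughly, the runner interprets one transition like this
# (illustrative sketch only; the real logic lives in
# eth2spec.test.utils.randomized_block_tests and these helper names are
# hypothetical):
#
#     def apply_transition(spec, state, transition):
#         skip_epochs(spec, state, transition['epochs_to_skip'])
#         skip_slots(spec, state, transition['slots_to_skip'])
#         if transition['block_producer'] == 'random_block':
#             yield produce_random_block(spec, state)
#         run_validation(spec, state, transition['validation'])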
@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_0(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_1(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_2(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_3(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )
@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_4(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_5(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_6(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_7(spec, state):
    # scenario as high-level, informal text:
    # epochs:0,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'validation': 'validate_is_not_leaking', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )
@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_8(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_9(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_10(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_11(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )
@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_12(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:last_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'last_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_13(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:random_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'random_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_14(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:penultimate_slot_in_epoch,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 0, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 'penultimate_slot_in_epoch', 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )


@only_generator("randomized test for broad coverage, not point-to-point CI")
@with_phases([PHASE0])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_15(spec, state):
    # scenario as high-level, informal text:
    # epochs:epochs_until_leak,slots:0,with-block:no_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    # epochs:1,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:no_block
    # epochs:0,slots:0,with-block:random_block
    scenario = {'transitions': [{'epochs_to_skip': 'epochs_until_leak', 'validation': 'validate_is_leaking', 'slots_to_skip': 0, 'block_producer': 'no_block'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}, {'epochs_to_skip': 1, 'slots_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'slots_to_skip': 0, 'epochs_to_skip': 0, 'block_producer': 'no_block', 'validation': 'no_op_validation'}, {'block_producer': 'random_block', 'epochs_to_skip': 0, 'slots_to_skip': 0, 'validation': 'no_op_validation'}], 'state_randomizer': 'randomize_state'}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )
@@ -1051,25 +1051,25 @@ def test_eth1_data_votes_no_consensus(spec, state):
    yield 'post', state


@with_phases([PHASE0])
@with_all_phases
@spec_state_test
def test_full_random_operations_0(spec, state):
    yield from run_test_full_random_operations(spec, state, rng=Random(2020))


@with_phases([PHASE0])
@with_all_phases
@spec_state_test
def test_full_random_operations_1(spec, state):
    yield from run_test_full_random_operations(spec, state, rng=Random(2021))


@with_phases([PHASE0])
@with_all_phases
@spec_state_test
def test_full_random_operations_2(spec, state):
    yield from run_test_full_random_operations(spec, state, rng=Random(2022))


@with_phases([PHASE0])
@with_all_phases
@spec_state_test
def test_full_random_operations_3(spec, state):
    yield from run_test_full_random_operations(spec, state, rng=Random(2023))
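# In the hunk above, each old `@with_phases([PHASE0])` decorator is superseded
# by the `@with_all_phases` line that follows it, so these seeded scenarios
# (Random(2020)..Random(2023), reproducible by construction) now run against
# every phase's spec rather than phase 0 only.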
@@ -1,239 +1,47 @@
from copy import deepcopy

from eth2spec.utils.ssz.ssz_impl import hash_tree_root

from eth2spec.test.context import with_all_phases, spec_state_test
from eth2spec.test.helpers.attestations import next_epoch_with_attestations
from eth2spec.test.helpers.block import build_empty_block_for_next_slot, sign_block, transition_unsigned_block, \
    build_empty_block
from eth2spec.test.helpers.fork_choice import get_genesis_forkchoice_store
from eth2spec.test.helpers.state import next_epoch, state_transition_and_sign_block, transition_to


def run_on_block(spec, store, signed_block, valid=True):
    if not valid:
        try:
            spec.on_block(store, signed_block)
        except AssertionError:
            return
        else:
            assert False

    spec.on_block(store, signed_block)
    assert store.blocks[hash_tree_root(signed_block.message)] == signed_block.message
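# The try/except/else dance above asserts that `on_block` raises for invalid
# blocks; with pytest this could be written more directly (sketch):
#
#     import pytest
#     with pytest.raises(AssertionError):
#         spec.on_block(store, signed_block)
#
# The rewritten suite replaces this helper with `add_block(..., valid=False)`.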
def apply_next_epoch_with_attestations(spec, state, store):
|
||||
_, new_signed_blocks, post_state = next_epoch_with_attestations(spec, state, True, False)
|
||||
for signed_block in new_signed_blocks:
|
||||
block = signed_block.message
|
||||
block_root = hash_tree_root(block)
|
||||
store.blocks[block_root] = block
|
||||
store.block_states[block_root] = post_state
|
||||
last_signed_block = signed_block
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
return post_state, store, last_signed_block
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_basic(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 100
|
||||
spec.on_tick(store, time)
|
||||
assert store.time == time
|
||||
|
||||
# On receiving a block of `GENESIS_SLOT + 1` slot
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
# On receiving a block of next epoch
|
||||
store.time = time + spec.config.SECONDS_PER_SLOT * spec.SLOTS_PER_EPOCH
|
||||
block = build_empty_block(spec, state, state.slot + spec.SLOTS_PER_EPOCH)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
# TODO: add tests for justified_root and finalized_root
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_checkpoints(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 100
|
||||
spec.on_tick(store, time)
|
||||
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
state, store, last_signed_block = apply_next_epoch_with_attestations(spec, state, store)
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
last_block_root = hash_tree_root(last_signed_block.message)
|
||||
|
||||
# Mock the finalized_checkpoint
|
||||
fin_state = store.block_states[last_block_root]
|
||||
fin_state.finalized_checkpoint = (
|
||||
store.block_states[last_block_root].current_justified_checkpoint
|
||||
)
|
||||
|
||||
block = build_empty_block_for_next_slot(spec, fin_state)
|
||||
signed_block = state_transition_and_sign_block(spec, deepcopy(fin_state), block)
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_future_block(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
|
||||
# do not tick time
|
||||
|
||||
# Fail receiving block of `GENESIS_SLOT + 1` slot
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
run_on_block(spec, store, signed_block, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_bad_parent_root(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 100
|
||||
spec.on_tick(store, time)
|
||||
|
||||
# Fail receiving block of `GENESIS_SLOT + 1` slot
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
transition_unsigned_block(spec, state, block)
|
||||
block.state_root = state.hash_tree_root()
|
||||
|
||||
block.parent_root = b'\x45' * 32
|
||||
|
||||
signed_block = sign_block(spec, state, block)
|
||||
|
||||
run_on_block(spec, store, signed_block, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_before_finalized(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 100
|
||||
spec.on_tick(store, time)
|
||||
|
||||
store.finalized_checkpoint = spec.Checkpoint(
|
||||
epoch=store.finalized_checkpoint.epoch + 2,
|
||||
root=store.finalized_checkpoint.root
|
||||
)
|
||||
|
||||
# Fail receiving block of `GENESIS_SLOT + 1` slot
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
run_on_block(spec, store, signed_block, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_finalized_skip_slots(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 100
|
||||
spec.on_tick(store, time)
|
||||
|
||||
store.finalized_checkpoint = spec.Checkpoint(
|
||||
epoch=store.finalized_checkpoint.epoch + 2,
|
||||
root=store.finalized_checkpoint.root
|
||||
)
|
||||
|
||||
# Build block that includes the skipped slots up to finality in chain
|
||||
block = build_empty_block(spec, state, spec.compute_start_slot_at_epoch(store.finalized_checkpoint.epoch) + 2)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_finalized_skip_slots_not_in_skip_chain(spec, state):
|
||||
# Initialization
|
||||
transition_to(spec, state, state.slot + spec.SLOTS_PER_EPOCH - 1)
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
transition_unsigned_block(spec, state, block)
|
||||
block.state_root = state.hash_tree_root()
|
||||
store = spec.get_forkchoice_store(state, block)
|
||||
store.finalized_checkpoint = spec.Checkpoint(
|
||||
epoch=store.finalized_checkpoint.epoch + 2,
|
||||
root=store.finalized_checkpoint.root
|
||||
)
|
||||
|
||||
# First transition through the epoch to ensure no skipped slots
|
||||
state, store, _ = apply_next_epoch_with_attestations(spec, state, store)
|
||||
|
||||
# Now build a block at a later slot than the finalized epoch
|
||||
# Includes finalized block in chain, but not at appropriate skip slot
|
||||
block = build_empty_block(spec, state, spec.compute_start_slot_at_epoch(store.finalized_checkpoint.epoch) + 2)
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
run_on_block(spec, store, signed_block, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_update_justified_checkpoint_within_safe_slots(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 0
|
||||
spec.on_tick(store, time)
|
||||
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
state, store, last_signed_block = apply_next_epoch_with_attestations(spec, state, store)
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
last_block_root = hash_tree_root(last_signed_block.message)
|
||||
|
||||
# Mock the justified checkpoint
|
||||
just_state = store.block_states[last_block_root]
|
||||
new_justified = spec.Checkpoint(
|
||||
epoch=just_state.current_justified_checkpoint.epoch + 1,
|
||||
root=b'\x77' * 32,
|
||||
)
|
||||
just_state.current_justified_checkpoint = new_justified
|
||||
|
||||
block = build_empty_block_for_next_slot(spec, just_state)
|
||||
signed_block = state_transition_and_sign_block(spec, deepcopy(just_state), block)
|
||||
assert spec.get_current_slot(store) % spec.SLOTS_PER_EPOCH < spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
assert store.justified_checkpoint == new_justified
|
||||
from eth2spec.test.context import (
|
||||
spec_state_test,
|
||||
with_all_phases,
|
||||
)
|
||||
from eth2spec.test.helpers.block import (
|
||||
build_empty_block_for_next_slot,
|
||||
)
|
||||
from eth2spec.test.helpers.fork_choice import (
|
||||
get_genesis_forkchoice_store,
|
||||
run_on_block,
|
||||
apply_next_epoch_with_attestations,
|
||||
)
|
||||
from eth2spec.test.helpers.state import (
|
||||
next_epoch,
|
||||
state_transition_and_sign_block,
|
||||
)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_outside_safe_slots_and_multiple_better_justified(spec, state):
|
||||
"""
|
||||
NOTE: test_new_justified_is_later_than_store_justified also tests best_justified_checkpoint
|
||||
"""
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 0
|
||||
spec.on_tick(store, time)
|
||||
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
state, store, last_signed_block = apply_next_epoch_with_attestations(spec, state, store)
|
||||
spec.on_tick(store, store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
state, store, last_signed_block = yield from apply_next_epoch_with_attestations(
|
||||
spec, state, store, True, False)
|
||||
last_block_root = hash_tree_root(last_signed_block.message)
|
||||
|
||||
# Mock fictitious justified checkpoint in store
|
||||
# NOTE: Mock fictitious justified checkpoint in store
|
||||
store.justified_checkpoint = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(last_signed_block.message.slot),
|
||||
root=spec.Root("0x4a55535449464945440000000000000000000000000000000000000000000000")
|
||||
)
|
||||
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
spec.on_tick(store, store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
|
||||
# Create new higher justified checkpoint not in branch of store's justified checkpoint
|
||||
just_block = build_empty_block_for_next_slot(spec, state)
|
||||
|
@ -243,11 +51,13 @@ def test_on_block_outside_safe_slots_and_multiple_better_justified(spec, state):
|
|||
spec.on_tick(store, store.time + spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED * spec.config.SECONDS_PER_SLOT)
|
||||
assert spec.get_current_slot(store) % spec.SLOTS_PER_EPOCH >= spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED
|
||||
|
||||
previously_finalized = store.finalized_checkpoint
|
||||
previously_justified = store.justified_checkpoint
|
||||
|
||||
# Add a series of new blocks with "better" justifications
|
||||
best_justified_checkpoint = spec.Checkpoint(epoch=0)
|
||||
for i in range(3, 0, -1):
|
||||
# Mutate store
|
||||
just_state = store.block_states[last_block_root]
|
||||
new_justified = spec.Checkpoint(
|
||||
epoch=previously_justified.epoch + i,
|
||||
|
@ -261,63 +71,17 @@ def test_on_block_outside_safe_slots_and_multiple_better_justified(spec, state):
|
|||
block = build_empty_block_for_next_slot(spec, just_state)
|
||||
signed_block = state_transition_and_sign_block(spec, deepcopy(just_state), block)
|
||||
|
||||
# NOTE: Mock store so that the modified state could be accessed
|
||||
parent_block = store.blocks[last_block_root].copy()
|
||||
parent_block.state_root = just_state.hash_tree_root()
|
||||
store.blocks[block.parent_root] = parent_block
|
||||
store.block_states[block.parent_root] = just_state.copy()
|
||||
assert block.parent_root in store.blocks.keys()
|
||||
assert block.parent_root in store.block_states.keys()
|
||||
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
assert store.finalized_checkpoint == previously_finalized
|
||||
assert store.justified_checkpoint == previously_justified
|
||||
# ensure the best from the series was stored
|
||||
assert store.best_justified_checkpoint == best_justified_checkpoint
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_on_block_outside_safe_slots_but_finality(spec, state):
|
||||
# Initialization
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
time = 100
|
||||
spec.on_tick(store, time)
|
||||
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
state, store, last_signed_block = apply_next_epoch_with_attestations(spec, state, store)
|
||||
last_block_root = hash_tree_root(last_signed_block.message)
|
||||
|
||||
# Mock fictitious justified checkpoint in store
|
||||
store.justified_checkpoint = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(last_signed_block.message.slot),
|
||||
root=spec.Root("0x4a55535449464945440000000000000000000000000000000000000000000000")
|
||||
)
|
||||
|
||||
next_epoch(spec, state)
|
||||
spec.on_tick(store, store.time + state.slot * spec.config.SECONDS_PER_SLOT)
|
||||
|
||||
# Create new higher justified checkpoint not in branch of store's justified checkpoint
|
||||
just_block = build_empty_block_for_next_slot(spec, state)
|
||||
store.blocks[just_block.hash_tree_root()] = just_block
|
||||
|
||||
# Step time past safe slots
|
||||
spec.on_tick(store, store.time + spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED * spec.config.SECONDS_PER_SLOT)
|
||||
assert spec.get_current_slot(store) % spec.SLOTS_PER_EPOCH >= spec.SAFE_SLOTS_TO_UPDATE_JUSTIFIED
|
||||
|
||||
# Mock justified and finalized update in state
|
||||
just_fin_state = store.block_states[last_block_root]
|
||||
new_justified = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(just_block.slot) + 1,
|
||||
root=just_block.hash_tree_root(),
|
||||
)
|
||||
assert new_justified.epoch > store.justified_checkpoint.epoch
|
||||
new_finalized = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(just_block.slot),
|
||||
root=just_block.parent_root,
|
||||
)
|
||||
assert new_finalized.epoch > store.finalized_checkpoint.epoch
|
||||
just_fin_state.current_justified_checkpoint = new_justified
|
||||
just_fin_state.finalized_checkpoint = new_finalized
|
||||
|
||||
# Build and add block that includes the new justified/finalized info
|
||||
block = build_empty_block_for_next_slot(spec, just_fin_state)
|
||||
signed_block = state_transition_and_sign_block(spec, deepcopy(just_fin_state), block)
|
||||
|
||||
run_on_block(spec, store, signed_block)
|
||||
|
||||
assert store.finalized_checkpoint == new_finalized
|
||||
assert store.justified_checkpoint == new_justified
|
||||
|
|
|
@ -1,5 +1,13 @@
|
|||
from eth2spec.test.context import with_all_phases, spec_state_test
|
||||
from eth2spec.test.helpers.fork_choice import get_genesis_forkchoice_store
|
||||
from eth2spec.test.helpers.block import (
|
||||
build_empty_block_for_next_slot,
|
||||
)
|
||||
from eth2spec.test.helpers.state import (
|
||||
next_epoch,
|
||||
state_transition_and_sign_block,
|
||||
transition_to,
|
||||
)
|
||||
|
||||
|
||||
def run_on_tick(spec, store, time, new_justified_checkpoint=False):
|
||||
|
@ -26,18 +34,92 @@ def test_basic(spec, state):
|
|||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_update_justified_single(spec, state):
|
||||
def test_update_justified_single_on_store_finalized_chain(spec, state):
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
next_epoch = spec.get_current_epoch(state) + 1
|
||||
next_epoch_start_slot = spec.compute_start_slot_at_epoch(next_epoch)
|
||||
seconds_until_next_epoch = next_epoch_start_slot * spec.config.SECONDS_PER_SLOT - store.time
|
||||
|
||||
store.best_justified_checkpoint = spec.Checkpoint(
|
||||
epoch=store.justified_checkpoint.epoch + 1,
|
||||
root=b'\x55' * 32,
|
||||
# [Mock store.best_justified_checkpoint]
|
||||
# Create a block at epoch 1
|
||||
next_epoch(spec, state)
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
state_transition_and_sign_block(spec, state, block)
|
||||
store.blocks[block.hash_tree_root()] = block.copy()
|
||||
store.block_states[block.hash_tree_root()] = state.copy()
|
||||
parent_block = block.copy()
|
||||
# To make compute_slots_since_epoch_start(current_slot) == 0, transition to the end of the epoch
|
||||
slot = state.slot + spec.SLOTS_PER_EPOCH - state.slot % spec.SLOTS_PER_EPOCH - 1
|
||||
transition_to(spec, state, slot)
|
||||
# Create a block at the start of epoch 2
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
# Mock state
|
||||
state.current_justified_checkpoint = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(parent_block.slot),
|
||||
root=parent_block.hash_tree_root(),
|
||||
)
|
||||
state_transition_and_sign_block(spec, state, block)
|
||||
store.blocks[block.hash_tree_root()] = block
|
||||
store.block_states[block.hash_tree_root()] = state
|
||||
# Mock store.best_justified_checkpoint
|
||||
store.best_justified_checkpoint = state.current_justified_checkpoint.copy()
|
||||
|
||||
run_on_tick(
|
||||
spec,
|
||||
store,
|
||||
store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT,
|
||||
new_justified_checkpoint=True
|
||||
)
|
||||
|
||||
run_on_tick(spec, store, store.time + seconds_until_next_epoch, True)
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_update_justified_single_not_on_store_finalized_chain(spec, state):
|
||||
store = get_genesis_forkchoice_store(spec, state)
|
||||
init_state = state.copy()
|
||||
|
||||
# Chain grows
|
||||
# Create a block at epoch 1
|
||||
next_epoch(spec, state)
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
block.body.graffiti = b'\x11' * 32
|
||||
state_transition_and_sign_block(spec, state, block)
|
||||
store.blocks[block.hash_tree_root()] = block.copy()
|
||||
store.block_states[block.hash_tree_root()] = state.copy()
|
||||
# Mock store.finalized_checkpoint
|
||||
store.finalized_checkpoint = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(block.slot),
|
||||
root=block.hash_tree_root(),
|
||||
)
|
||||
|
||||
# [Mock store.best_justified_checkpoint]
|
||||
# Create a block at epoch 1
|
||||
state = init_state.copy()
|
||||
next_epoch(spec, state)
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
block.body.graffiti = b'\x22' * 32
|
||||
state_transition_and_sign_block(spec, state, block)
|
||||
store.blocks[block.hash_tree_root()] = block.copy()
|
||||
store.block_states[block.hash_tree_root()] = state.copy()
|
||||
parent_block = block.copy()
|
||||
# To make compute_slots_since_epoch_start(current_slot) == 0, transition to the end of the epoch
|
||||
slot = state.slot + spec.SLOTS_PER_EPOCH - state.slot % spec.SLOTS_PER_EPOCH - 1
|
||||
transition_to(spec, state, slot)
|
||||
# Create a block at the start of epoch 2
|
||||
block = build_empty_block_for_next_slot(spec, state)
|
||||
# Mock state
|
||||
state.current_justified_checkpoint = spec.Checkpoint(
|
||||
epoch=spec.compute_epoch_at_slot(parent_block.slot),
|
||||
root=parent_block.hash_tree_root(),
|
||||
)
|
||||
state_transition_and_sign_block(spec, state, block)
|
||||
store.blocks[block.hash_tree_root()] = block.copy()
|
||||
store.block_states[block.hash_tree_root()] = state.copy()
|
||||
# Mock store.best_justified_checkpoint
|
||||
store.best_justified_checkpoint = state.current_justified_checkpoint.copy()
|
||||
|
||||
run_on_tick(
|
||||
spec,
|
||||
store,
|
||||
store.genesis_time + state.slot * spec.config.SECONDS_PER_SLOT,
|
||||
)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
from .utils import (
|
||||
vector_test,
|
||||
with_meta_tags,
|
||||
build_transition_test,
|
||||
)
|
||||
|
||||
|
||||
__all__ = [ # avoid "unused import" lint error
|
||||
"vector_test",
|
||||
"with_meta_tags",
|
||||
"build_transition_test",
|
||||
]
|
|
@ -0,0 +1,319 @@
|
|||
"""
|
||||
Utility code to generate randomized block tests
|
||||
"""
|
||||
|
||||
import sys
|
||||
import warnings
|
||||
from random import Random
|
||||
from typing import Callable
|
||||
|
||||
from eth2spec.test.helpers.multi_operations import (
|
||||
build_random_block_from_state_for_next_slot,
|
||||
get_random_sync_aggregate,
|
||||
)
|
||||
from eth2spec.test.helpers.inactivity_scores import (
|
||||
randomize_inactivity_scores,
|
||||
)
|
||||
from eth2spec.test.helpers.random import (
|
||||
randomize_state as randomize_state_helper,
|
||||
)
|
||||
from eth2spec.test.helpers.state import (
|
||||
next_slot,
|
||||
next_epoch,
|
||||
ensure_state_has_validators_across_lifecycle,
|
||||
state_transition_and_sign_block,
|
||||
)
|
||||
|
||||
# primitives:
|
||||
# state
|
||||
|
||||
|
||||
def randomize_state(spec, state, exit_fraction=0.1, slash_fraction=0.1):
|
||||
randomize_state_helper(spec, state, exit_fraction=exit_fraction, slash_fraction=slash_fraction)
|
||||
|
||||
|
||||
def randomize_state_altair(spec, state):
|
||||
randomize_state(spec, state, exit_fraction=0.1, slash_fraction=0.1)
|
||||
randomize_inactivity_scores(spec, state)
|
||||
|
||||
|
||||
# epochs
|
||||
|
||||
def epochs_until_leak(spec):
|
||||
"""
|
||||
State is "leaking" if the current epoch is at least
|
||||
this value after the last finalized epoch.
|
||||
"""
|
||||
return spec.MIN_EPOCHS_TO_INACTIVITY_PENALTY + 1
|
||||
|
||||
|
||||
def epochs_for_shard_committee_period(spec):
|
||||
return spec.config.SHARD_COMMITTEE_PERIOD
|
||||
|
||||
|
||||
# slots
|
||||
|
||||
def last_slot_in_epoch(spec):
|
||||
return spec.SLOTS_PER_EPOCH - 1
|
||||
|
||||
|
||||
def random_slot_in_epoch(spec, rng=Random(1336)):
|
||||
return rng.randrange(1, spec.SLOTS_PER_EPOCH - 2)
|
||||
|
||||
|
||||
def penultimate_slot_in_epoch(spec):
|
||||
return spec.SLOTS_PER_EPOCH - 2
|
||||
|
||||
|
||||
# blocks
|
||||
|
||||
def no_block(_spec, _pre_state, _signed_blocks):
|
||||
return None
|
||||
|
||||
|
||||
# May need to make several attempts to find a block that does not correspond to a slashed
|
||||
# proposer with the randomization helpers...
|
||||
BLOCK_ATTEMPTS = 32
|
||||
|
||||
|
||||
def _warn_if_empty_operations(block):
|
||||
if len(block.body.deposits) == 0:
|
||||
warnings.warn(f"deposits missing in block at slot {block.slot}")
|
||||
|
||||
if len(block.body.proposer_slashings) == 0:
|
||||
warnings.warn(f"proposer slashings missing in block at slot {block.slot}")
|
||||
|
||||
if len(block.body.attester_slashings) == 0:
|
||||
warnings.warn(f"attester slashings missing in block at slot {block.slot}")
|
||||
|
||||
if len(block.body.attestations) == 0:
|
||||
warnings.warn(f"attestations missing in block at slot {block.slot}")
|
||||
|
||||
if len(block.body.voluntary_exits) == 0:
|
||||
warnings.warn(f"voluntary exits missing in block at slot {block.slot}")
|
||||
|
||||
|
||||
def random_block(spec, state, _signed_blocks):
|
||||
"""
|
||||
Produce a random block.
|
||||
NOTE: this helper may mutate state, as it will attempt
|
||||
to produce a block over ``BLOCK_ATTEMPTS`` slots in order
|
||||
to find a valid block in the event that the proposer has already been slashed.
|
||||
"""
|
||||
# NOTE: ``state`` has been "randomized" at this point and so will likely
|
||||
# contain a large number of slashed validators. This function needs to return
|
||||
# a valid block so it needs to check that the proposer of the next slot is not
|
||||
# slashed.
|
||||
# To do this, generate a ``temp_state`` to use for checking the proposer in the next slot.
|
||||
# This ensures no accidental mutations happen to the ``state`` the caller expects to get back
|
||||
# after this function returns.
|
||||
# Using a copy of the state for proposer sampling is also sound as any inputs used for the
|
||||
# shuffling are fixed a few epochs prior to ``spec.get_current_epoch(state)``.
|
||||
temp_state = state.copy()
|
||||
next_slot(spec, temp_state)
|
||||
for _ in range(BLOCK_ATTEMPTS):
|
||||
proposer_index = spec.get_beacon_proposer_index(temp_state)
|
||||
proposer = state.validators[proposer_index]
|
||||
if proposer.slashed:
|
||||
next_slot(spec, state)
|
||||
next_slot(spec, temp_state)
|
||||
else:
|
||||
block = build_random_block_from_state_for_next_slot(spec, state)
|
||||
_warn_if_empty_operations(block)
|
||||
return block
|
||||
else:
|
||||
raise AssertionError("could not find a block with an unslashed proposer, check ``state`` input")
|
||||
|
||||
|
||||
SYNC_AGGREGATE_PARTICIPATION_BUCKETS = 4
|
||||
|
||||
|
||||
def random_block_altair_with_cycling_sync_committee_participation(spec,
|
||||
state,
|
||||
signed_blocks):
|
||||
block = random_block(spec, state, signed_blocks)
|
||||
block_index = len(signed_blocks) % SYNC_AGGREGATE_PARTICIPATION_BUCKETS
|
||||
fraction_missed = block_index * (1 / SYNC_AGGREGATE_PARTICIPATION_BUCKETS)
|
||||
fraction_participated = 1.0 - fraction_missed
|
||||
block.body.sync_aggregate = get_random_sync_aggregate(
|
||||
spec,
|
||||
state,
|
||||
block.slot - 1,
|
||||
fraction_participated=fraction_participated,
|
||||
)
|
||||
return block
|
||||
|
||||
|
||||
# validations
|
||||
|
||||
def no_op_validation(spec, state):
|
||||
return True
|
||||
|
||||
|
||||
def validate_is_leaking(spec, state):
|
||||
return spec.is_in_inactivity_leak(state)
|
||||
|
||||
|
||||
def validate_is_not_leaking(spec, state):
|
||||
return not validate_is_leaking(spec, state)
|
||||
|
||||
|
||||
# transitions
|
||||
|
||||
def with_validation(transition, validation):
|
||||
if isinstance(transition, Callable):
|
||||
transition = transition()
|
||||
transition["validation"] = validation
|
||||
return transition
|
||||
|
||||
|
||||
def no_op_transition():
|
||||
return {}
|
||||
|
||||
|
||||
def epoch_transition(n=0):
|
||||
return {
|
||||
"epochs_to_skip": n,
|
||||
}
|
||||
|
||||
|
||||
def slot_transition(n=0):
|
||||
return {
|
||||
"slots_to_skip": n,
|
||||
}
|
||||
|
||||
|
||||
def transition_to_leaking():
|
||||
return {
|
||||
"epochs_to_skip": epochs_until_leak,
|
||||
"validation": validate_is_leaking,
|
||||
}
|
||||
|
||||
|
||||
transition_without_leak = with_validation(no_op_transition, validate_is_not_leaking)
|
||||
|
||||
# block transitions
|
||||
|
||||
|
||||
def transition_with_random_block(block_randomizer):
|
||||
"""
|
||||
Build a block transition with randomized data.
|
||||
Provide optional sub-transitions to advance some
|
||||
number of epochs or slots before applying the random block.
|
||||
"""
|
||||
return {
|
||||
"block_producer": block_randomizer,
|
||||
}
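# A minimal usage sketch (hypothetical, for illustration only): compose a
# scenario from the builders above in the shape consumed by
# ``run_generated_randomized_test`` below. The generator script is assumed
# to normalize each transition first (filling in ``epochs_to_skip``,
# ``slots_to_skip``, ``block_producer`` and ``validation`` defaults).
_example_scenario = {
    "state_randomizer": randomize_state,
    "transitions": [
        transition_to_leaking(),
        transition_with_random_block(random_block),
        epoch_transition(n=1),
    ],
}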
|
||||
|
||||
|
||||
# setup and test gen
|
||||
|
||||
|
||||
def _randomized_scenario_setup(state_randomizer):
|
||||
"""
|
||||
Return a sequence of pairs of ("mutation", "validation"),
|
||||
a function that accepts (spec, state) arguments and performs some change
|
||||
and a function that accepts (spec, state) arguments and validates some change was made.
|
||||
"""
|
||||
def _skip_epochs(epoch_producer):
|
||||
def f(spec, state):
|
||||
"""
|
||||
The unoptimized spec implementation is too slow to advance via ``next_epoch``.
|
||||
Instead, just overwrite the ``state.slot`` and continue...
|
||||
"""
|
||||
epochs_to_skip = epoch_producer(spec)
|
||||
slots_to_skip = epochs_to_skip * spec.SLOTS_PER_EPOCH
|
||||
state.slot += slots_to_skip
|
||||
return f
|
||||
|
||||
def _simulate_honest_execution(spec, state):
|
||||
"""
|
||||
Want tests to start in a non-leaking state; prior (arbitrary)
mutations may leave finality data that does not reflect this
condition, so this mutator repairs it.
|
||||
"""
|
||||
state.justification_bits = (True, True, True, True)
|
||||
previous_epoch = spec.get_previous_epoch(state)
|
||||
previous_root = spec.get_block_root(state, previous_epoch)
|
||||
previous_previous_epoch = max(spec.GENESIS_EPOCH, spec.Epoch(previous_epoch - 1))
|
||||
previous_previous_root = spec.get_block_root(state, previous_previous_epoch)
|
||||
state.previous_justified_checkpoint = spec.Checkpoint(
|
||||
epoch=previous_previous_epoch,
|
||||
root=previous_previous_root,
|
||||
)
|
||||
state.current_justified_checkpoint = spec.Checkpoint(
|
||||
epoch=previous_epoch,
|
||||
root=previous_root,
|
||||
)
|
||||
state.finalized_checkpoint = spec.Checkpoint(
|
||||
epoch=previous_previous_epoch,
|
||||
root=previous_previous_root,
|
||||
)
|
||||
|
||||
return (
|
||||
# NOTE: the block randomization function assumes at least 1 shard committee period
|
||||
# so advance the state before doing anything else.
|
||||
(_skip_epochs(epochs_for_shard_committee_period), no_op_validation),
|
||||
(_simulate_honest_execution, no_op_validation),
|
||||
(state_randomizer, ensure_state_has_validators_across_lifecycle),
|
||||
)
|
||||
|
||||
# Run the generated tests:
|
||||
|
||||
|
||||
# while the test implementation works via code-gen,
|
||||
# references to helper code in this module are serialized as str names.
|
||||
# to resolve these references at runtime, we need a reference to this module:
|
||||
_this_module = sys.modules[__name__]
|
||||
|
||||
|
||||
def _resolve_ref(ref):
|
||||
if isinstance(ref, str):
|
||||
return getattr(_this_module, ref)
|
||||
return ref
|
||||
|
||||
|
||||
def _iter_temporal(spec, description):
|
||||
"""
|
||||
Intended to advance some number of {epochs, slots}.
|
||||
Caller can provide a constant integer or a callable deriving a number from
|
||||
the ``spec`` under consideration.
|
||||
"""
|
||||
numeric = _resolve_ref(description)
|
||||
if isinstance(numeric, Callable):
|
||||
numeric = numeric(spec)
|
||||
for i in range(numeric):
|
||||
yield i
|
||||
|
||||
|
||||
def run_generated_randomized_test(spec, state, scenario):
|
||||
if "setup" not in scenario:
|
||||
state_randomizer = _resolve_ref(scenario.get("state_randomizer", randomize_state))
|
||||
scenario["setup"] = _randomized_scenario_setup(state_randomizer)
|
||||
|
||||
for mutation, validation in scenario["setup"]:
|
||||
mutation(spec, state)
|
||||
validation(spec, state)
|
||||
|
||||
yield "pre", state
|
||||
|
||||
blocks = []
|
||||
for transition in scenario["transitions"]:
|
||||
epochs_to_skip = _iter_temporal(spec, transition["epochs_to_skip"])
|
||||
for _ in epochs_to_skip:
|
||||
next_epoch(spec, state)
|
||||
slots_to_skip = _iter_temporal(spec, transition["slots_to_skip"])
|
||||
for _ in slots_to_skip:
|
||||
next_slot(spec, state)
|
||||
|
||||
block_producer = _resolve_ref(transition["block_producer"])
|
||||
block = block_producer(spec, state, blocks)
|
||||
if block:
|
||||
signed_block = state_transition_and_sign_block(spec, state, block)
|
||||
blocks.append(signed_block)
|
||||
|
||||
validation = _resolve_ref(transition["validation"])
|
||||
assert validation(spec, state)
|
||||
|
||||
yield "blocks", blocks
|
||||
yield "post", state
|
|
@ -10,9 +10,8 @@ bls = py_ecc_bls
|
|||
|
||||
STUB_SIGNATURE = b'\x11' * 96
|
||||
STUB_PUBKEY = b'\x22' * 48
|
||||
Z1_PUBKEY = b'\xc0' + b'\x00' * 47
|
||||
Z2_SIGNATURE = b'\xc0' + b'\x00' * 95
|
||||
STUB_COORDINATES = _signature_to_G2(Z2_SIGNATURE)
|
||||
G2_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 95
|
||||
STUB_COORDINATES = _signature_to_G2(G2_POINT_AT_INFINITY)
|
||||
|
||||
|
||||
def use_milagro():
|
||||
|
@ -95,6 +94,12 @@ def signature_to_G2(signature):
|
|||
|
||||
@only_with_bls(alt_return=STUB_PUBKEY)
|
||||
def AggregatePKs(pubkeys):
|
||||
if bls == py_ecc_bls:
|
||||
assert all(bls.KeyValidate(pubkey) for pubkey in pubkeys)
|
||||
elif bls == milagro_bls:
|
||||
# milagro_bls._AggregatePKs checks KeyValidate internally
|
||||
pass
|
||||
|
||||
return bls._AggregatePKs(list(pubkeys))
|
||||
|
||||
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
# General test format
|
||||
|
||||
This document defines the YAML format and structure used for Eth2 testing.
|
||||
This document defines the YAML format and structure used for consensus spec testing.
|
||||
|
||||
## Table of contents
|
||||
<!-- TOC -->
|
||||
|
@ -151,7 +151,7 @@ Between all types of tests, a few formats are common:
|
|||
- **`.yaml`**: A YAML file containing structured data to describe settings or test contents.
|
||||
- **`.ssz`**: A file containing raw SSZ-encoded data. Previously widely used in tests, but replaced with the compressed variant.
|
||||
- **`.ssz_snappy`**: Like `.ssz`, but compressed with Snappy block compression.
|
||||
Snappy block compression is already applied to SSZ in Eth2 gossip, available in client implementations, and thus chosen as compression method.
|
||||
Snappy block compression is already applied to SSZ in consensus-layer gossip, available in client implementations, and thus chosen as compression method.
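As a rough illustration, reading such a file might look like the following sketch (assuming the `python-snappy` package and an SSZ class exposing `decode_bytes`, as the pyspec's SSZ types do):

```python
import snappy  # python-snappy

def load_ssz_snappy(path, ssz_type):
    """Decompress a .ssz_snappy test file and deserialize it into ssz_type."""
    with open(path, "rb") as f:
        data = snappy.decompress(f.read())  # Snappy *block* format, not framed
    return ssz_type.decode_bytes(data)
```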
|
||||
|
||||
|
||||
#### Special output parts
|
||||
|
|
|
@ -7,6 +7,8 @@ The BLS test suite runner has the following handlers:
|
|||
|
||||
- [`aggregate_verify`](./aggregate_verify.md)
|
||||
- [`aggregate`](./aggregate.md)
|
||||
- [`eth_aggregate_pubkeys`](./eth_aggregate_pubkeys.md)
|
||||
- [`eth_fast_aggregate_verify`](./eth_fast_aggregate_verify.md)
|
||||
- [`fast_aggregate_verify`](./fast_aggregate_verify.md)
|
||||
- [`sign`](./sign.md)
|
||||
- [`verify`](./verify.md)
|
||||
|
|
|
@ -14,6 +14,8 @@ output: BLS Signature -- expected output, single BLS signature or empty.
|
|||
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes (192 nibbles), prefixed with `0x`.
|
||||
- No output value if the input is invalid.
|
||||
|
||||
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
|
||||
|
||||
## Condition
|
||||
|
||||
The `aggregate` handler should aggregate the signatures in the `input`, and the result should match the expected `output`.
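For illustration, a minimal sketch of such a check (runner scaffolding assumed; `bls.Aggregate` is the pyspec BLS wrapper):

```python
from eth2spec.utils import bls

def check_aggregate_case(case):
    # Decode the 0x-prefixed hex signatures, aggregate, and compare.
    signatures = [bytes.fromhex(s[2:]) for s in case["input"]]
    try:
        result = "0x" + bls.Aggregate(signatures).hex()
    except Exception:
        result = None  # invalid input: no output value expected
    assert result == case.get("output")
```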
|
||||
|
|
|
@ -8,10 +8,17 @@ The test data is declared in a `data.yaml` file:
|
|||
|
||||
```yaml
|
||||
input:
|
||||
pubkeys: List[bytes48] -- the pubkeys
|
||||
pubkeys: List[BLS Pubkey] -- the pubkeys
|
||||
messages: List[bytes32] -- the messages
|
||||
signature: bytes96 -- the signature to verify against pubkeys and messages
|
||||
output: bool -- VALID or INVALID
|
||||
signature: BLS Signature -- the signature to verify against pubkeys and messages
|
||||
output: bool -- true (VALID) or false (INVALID)
|
||||
```
|
||||
|
||||
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96 nibbles), prefixed with `0x`.
|
||||
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes (192 nibbles), prefixed with `0x`.
|
||||
|
||||
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
|
||||
|
||||
## Condition
|
||||
|
||||
The `aggregate_verify` handler should verify the signature with pubkeys and messages in the `input`, and the result should match the expected `output`.
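A minimal sketch of this check (case loading assumed; the argument order matches the pyspec's `bls.AggregateVerify(pubkeys, messages, signature)`):

```python
from eth2spec.utils import bls

def check_aggregate_verify_case(case):
    inp = case["input"]
    pubkeys = [bytes.fromhex(pk[2:]) for pk in inp["pubkeys"]]
    messages = [bytes.fromhex(m[2:]) for m in inp["messages"]]
    signature = bytes.fromhex(inp["signature"][2:])
    assert bls.AggregateVerify(pubkeys, messages, signature) == case["output"]
```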
|
||||
|
|
|
@ -0,0 +1,19 @@
|
|||
# Test format: Ethereum-customized BLS pubkey aggregation
|
||||
|
||||
A BLS pubkey aggregation combines a series of pubkeys into a single pubkey.
|
||||
|
||||
## Test case format
|
||||
|
||||
The test data is declared in a `data.yaml` file:
|
||||
|
||||
```yaml
|
||||
input: List[BLS Pubkey] -- list of input BLS pubkeys
|
||||
output: BLS Pubkey -- expected output, single BLS pubkey or empty.
|
||||
```
|
||||
|
||||
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96 nibbles), prefixed with `0x`.
|
||||
- No output value if the input is invalid.
|
||||
|
||||
## Condition
|
||||
|
||||
The `eth_aggregate_pubkeys` handler should aggregate the pubkeys in the `input`, and the result should match the expected `output`.
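A minimal sketch of this check (case loading assumed; `eth_aggregate_pubkeys` is the Altair spec helper exercised by the generator):

```python
from eth2spec.altair import spec

def check_eth_aggregate_pubkeys_case(case):
    pubkeys = [bytes.fromhex(pk[2:]) for pk in case["input"]]
    try:
        result = "0x" + spec.eth_aggregate_pubkeys(pubkeys).hex()
    except Exception:
        result = None  # invalid input: no output value expected
    assert result == case.get("output")
```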
|
|
@ -0,0 +1,24 @@
|
|||
# Test format: Ethereum-customized BLS fast aggregate verify
|
||||
|
||||
Verify the signature against the given pubkeys and one message.
|
||||
|
||||
## Test case format
|
||||
|
||||
The test data is declared in a `data.yaml` file:
|
||||
|
||||
```yaml
|
||||
input:
|
||||
pubkeys: List[BLS Pubkey] -- list of input BLS pubkeys
|
||||
message: bytes32 -- the message
|
||||
signature: BLS Signature -- the signature to verify against pubkeys and message
|
||||
output: bool -- true (VALID) or false (INVALID)
|
||||
```
|
||||
|
||||
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96 nibbles), prefixed with `0x`.
|
||||
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes (192 nibbles), prefixed with `0x`.
|
||||
|
||||
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
|
||||
|
||||
## Condition
|
||||
|
||||
The `eth_fast_aggregate_verify` handler should verify the signature with pubkeys and message in the `input`, and the result should match the expected `output`.
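A minimal sketch of this check (case loading assumed). Note the Ethereum-customized variant's special case: an empty pubkey list with the G2 point-at-infinity signature is VALID, unlike plain `fast_aggregate_verify`:

```python
from eth2spec.altair import spec

def check_eth_fast_aggregate_verify_case(case):
    inp = case["input"]
    pubkeys = [bytes.fromhex(pk[2:]) for pk in inp["pubkeys"]]
    message = bytes.fromhex(inp["message"][2:])
    signature = bytes.fromhex(inp["signature"][2:])
    assert spec.eth_fast_aggregate_verify(pubkeys, message, signature) == case["output"]
```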
|
|
@ -1,4 +1,4 @@
|
|||
# Test format: BLS sign message
|
||||
# Test format: BLS fast aggregate verify
|
||||
|
||||
Verify the signature against the given pubkeys and one message.
|
||||
|
||||
|
@ -8,10 +8,17 @@ The test data is declared in a `data.yaml` file:
|
|||
|
||||
```yaml
|
||||
input:
|
||||
pubkeys: List[bytes48] -- the pubkey
|
||||
pubkeys: List[BLS Pubkey] -- list of input BLS pubkeys
|
||||
message: bytes32 -- the message
|
||||
signature: bytes96 -- the signature to verify against pubkeys and message
|
||||
output: bool -- VALID or INVALID
|
||||
signature: BLS Signature -- the signature to verify against pubkeys and message
|
||||
output: bool -- true (VALID) or false (INVALID)
|
||||
```
|
||||
|
||||
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96 nibbles), prefixed with `0x`.
|
||||
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes (192 nibbles), prefixed with `0x`.
|
||||
|
||||
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
|
||||
|
||||
## Condition
|
||||
|
||||
The `fast_aggregate_verify` handler should verify the signature with pubkeys and message in the `input`, and the result should match the expected `output`.
|
||||
|
|
|
@ -28,7 +28,11 @@ The steps to execute in sequence. There may be multiple items of the following t
|
|||
The parameter that is required for executing `on_tick(store, time)`.
|
||||
|
||||
```yaml
|
||||
{ tick: int } -- to execute `on_tick(store, time)`
|
||||
{
|
||||
tick: int -- to execute `on_tick(store, time)`.
|
||||
valid: bool -- optional, defaults to `true`.
|
||||
If it's `false`, this execution step is expected to be invalid.
|
||||
}
|
||||
```
|
||||
|
||||
After this step, the `store` object may have been updated.
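A hedged sketch of how a runner might execute this step (dispatch scaffolding assumed; the expected-invalid branch mirrors the `try`/`except`/`else` pattern used by `run_on_block` in the test helpers):

```python
def run_tick_step(spec, store, step):
    if step.get("valid", True):
        spec.on_tick(store, step["tick"])
        return
    # The step is expected to be invalid, i.e. to fail an assertion.
    try:
        spec.on_tick(store, step["tick"])
    except AssertionError:
        pass
    else:
        raise Exception("expected on_tick step to be invalid")
```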
|
||||
|
@ -38,7 +42,12 @@ After this step, the `store` object may have been updated.
|
|||
The parameter that is required for executing `on_attestation(store, attestation)`.
|
||||
|
||||
```yaml
|
||||
{ attestation: string } -- the name of the `attestation_<32-byte-root>.ssz_snappy` file. To execute `on_attestation(store, attestation)` with the given attestation.
|
||||
{
|
||||
attestation: string -- the name of the `attestation_<32-byte-root>.ssz_snappy` file.
|
||||
To execute `on_attestation(store, attestation)` with the given attestation.
|
||||
valid: bool -- optional, defaults to `true`.
|
||||
If it's `false`, this execution step is expected to be invalid.
|
||||
}
|
||||
```
|
||||
The file is located in the same folder (see below).
|
||||
|
||||
|
@ -49,7 +58,12 @@ After this step, the `store` object may have been updated.
|
|||
The parameter that is required for executing `on_block(store, block)`.
|
||||
|
||||
```yaml
|
||||
{ block: string } -- the name of the `block_<32-byte-root>.ssz_snappy` file. To execute `on_block(store, block)` with the given attestation.
|
||||
{
|
||||
block: string -- the name of the `block_<32-byte-root>.ssz_snappy` file.
|
||||
To execute `on_block(store, block)` with the given block.
|
||||
valid: bool -- optional, defaults to `true`.
|
||||
If it's `false`, this execution step is expected to be invalid.
|
||||
}
|
||||
```
|
||||
The file is located in the same folder (see below).
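A hedged sketch of executing this step (the `.ssz_snappy` file naming and `decode_bytes` deserialization are assumptions, per the pyspec SSZ types):

```python
import snappy  # python-snappy

def run_block_step(spec, store, step, case_dir):
    # Load the named SSZ-snappy encoded signed block from the case folder.
    with open(f"{case_dir}/{step['block']}.ssz_snappy", "rb") as f:
        signed_block = spec.SignedBeaconBlock.decode_bytes(snappy.decompress(f.read()))
    if step.get("valid", True):
        spec.on_block(store, signed_block)
        return
    # The step is expected to be invalid, i.e. to fail an assertion.
    try:
        spec.on_block(store, signed_block)
    except AssertionError:
        pass
    else:
        raise Exception("expected on_block step to be invalid")
```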
|
||||
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
# SSZ, static tests
|
||||
|
||||
This set of test-suites provides static testing for SSZ:
|
||||
to instantiate just the known Eth2 SSZ types from binary data.
|
||||
to instantiate just the known Ethereum SSZ types from binary data.
|
||||
|
||||
This series of tests is based on the spec-maintained `eth2spec/utils/ssz/ssz_impl.py`, i.e. fully consistent with the SSZ spec.
|
||||
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
# Eth2 test generators
|
||||
# Consensus test generators
|
||||
|
||||
This directory contains all the generators for tests, consumed by Eth2 client implementations.
|
||||
This directory contains all the generators for tests, consumed by consensus-layer client implementations.
|
||||
|
||||
Any issues with the generators and/or generated tests should be filed in the repository that hosts the generator outputs,
|
||||
here: [ethereum/eth2.0-spec-tests](https://github.com/ethereum/eth2.0-spec-tests).
|
||||
here: [ethereum/consensus-spec-tests](https://github.com/ethereum/consensus-spec-tests).
|
||||
|
||||
On releases, test generators are run by the release manager. Test-generation of mainnet tests can take a significant amount of time, and is better left out of a CI setup.
|
||||
|
||||
|
@ -36,7 +36,7 @@ Prerequisites:
|
|||
|
||||
### Cleaning
|
||||
|
||||
This removes the existing virtual environments (`/tests/generators/<generator>/venv`) and generated tests (`../eth2.0-spec-tests/tests`).
|
||||
This removes the existing virtual environments (`/tests/generators/<generator>/venv`) and generated tests (`../consensus-spec-tests/tests`).
|
||||
|
||||
```bash
|
||||
make clean
|
||||
|
@ -226,5 +226,5 @@ Do note that generators should be easy to maintain, lean, and based on the spec.
|
|||
If a test generator is not needed anymore, undo the steps described above and make a new release:
|
||||
|
||||
1. Remove the generator directory.
|
||||
2. Remove the generated tests in the [`eth2.0-spec-tests`](https://github.com/ethereum/eth2.0-spec-tests) repository by opening a pull request there.
|
||||
2. Remove the generated tests in the [`consensus-spec-tests`](https://github.com/ethereum/consensus-spec-tests) repository by opening a pull request there.
|
||||
3. Make a new release.
|
||||
|
|
|
@ -12,8 +12,10 @@ from eth_utils import (
|
|||
import milagro_bls_binding as milagro_bls
|
||||
|
||||
from eth2spec.utils import bls
|
||||
from eth2spec.test.helpers.constants import PHASE0
|
||||
from eth2spec.test.helpers.constants import PHASE0, ALTAIR
|
||||
from eth2spec.test.helpers.typing import SpecForkName
|
||||
from eth2spec.gen_helpers.gen_base import gen_runner, gen_typing
|
||||
from eth2spec.altair import spec
|
||||
|
||||
|
||||
def to_bytes(i):
|
||||
|
@ -51,9 +53,12 @@ PRIVKEYS = [
|
|||
]
|
||||
PUBKEYS = [bls.SkToPk(privkey) for privkey in PRIVKEYS]
|
||||
|
||||
Z1_PUBKEY = b'\xc0' + b'\x00' * 47
|
||||
NO_SIGNATURE = b'\x00' * 96
|
||||
Z2_SIGNATURE = b'\xc0' + b'\x00' * 95
|
||||
ZERO_PUBKEY = b'\x00' * 48
|
||||
G1_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 47
|
||||
|
||||
ZERO_SIGNATURE = b'\x00' * 96
|
||||
G2_POINT_AT_INFINITY = b'\xc0' + b'\x00' * 95
|
||||
|
||||
ZERO_PRIVKEY = 0
|
||||
ZERO_PRIVKEY_BYTES = b'\x00' * 32
|
||||
|
||||
|
@ -146,13 +151,13 @@ def case02_verify():
|
|||
}
|
||||
|
||||
# Invalid pubkey and signature with the point at infinity
|
||||
assert not bls.Verify(Z1_PUBKEY, SAMPLE_MESSAGE, Z2_SIGNATURE)
|
||||
assert not milagro_bls.Verify(Z1_PUBKEY, SAMPLE_MESSAGE, Z2_SIGNATURE)
|
||||
assert not bls.Verify(G1_POINT_AT_INFINITY, SAMPLE_MESSAGE, G2_POINT_AT_INFINITY)
|
||||
assert not milagro_bls.Verify(G1_POINT_AT_INFINITY, SAMPLE_MESSAGE, G2_POINT_AT_INFINITY)
|
||||
yield f'verify_infinity_pubkey_and_infinity_signature', {
|
||||
'input': {
|
||||
'pubkey': encode_hex(Z1_PUBKEY),
|
||||
'pubkey': encode_hex(G1_POINT_AT_INFINITY),
|
||||
'message': encode_hex(SAMPLE_MESSAGE),
|
||||
'signature': encode_hex(Z2_SIGNATURE),
|
||||
'signature': encode_hex(G2_POINT_AT_INFINITY),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
@ -178,10 +183,10 @@ def case03_aggregate():
|
|||
}
|
||||
|
||||
# Valid to aggregate G2 point at infinity
|
||||
aggregate_sig = bls.Aggregate([Z2_SIGNATURE])
|
||||
assert aggregate_sig == milagro_bls.Aggregate([Z2_SIGNATURE]) == Z2_SIGNATURE
|
||||
aggregate_sig = bls.Aggregate([G2_POINT_AT_INFINITY])
|
||||
assert aggregate_sig == milagro_bls.Aggregate([G2_POINT_AT_INFINITY]) == G2_POINT_AT_INFINITY
|
||||
yield f'aggregate_infinity_signature', {
|
||||
'input': [encode_hex(Z2_SIGNATURE)],
|
||||
'input': [encode_hex(G2_POINT_AT_INFINITY)],
|
||||
'output': encode_hex(aggregate_sig),
|
||||
}
|
||||
|
||||
|
@ -237,32 +242,32 @@ def case04_fast_aggregate_verify():
|
|||
}
|
||||
|
||||
# Invalid pubkeys and signature -- len(pubkeys) == 0 and signature == G2_POINT_AT_INFINITY
|
||||
assert not bls.FastAggregateVerify([], message, Z2_SIGNATURE)
|
||||
assert not milagro_bls.FastAggregateVerify([], message, Z2_SIGNATURE)
|
||||
assert not bls.FastAggregateVerify([], message, G2_POINT_AT_INFINITY)
|
||||
assert not milagro_bls.FastAggregateVerify([], message, G2_POINT_AT_INFINITY)
|
||||
yield f'fast_aggregate_verify_na_pubkeys_and_infinity_signature', {
|
||||
'input': {
|
||||
'pubkeys': [],
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(Z2_SIGNATURE),
|
||||
'signature': encode_hex(G2_POINT_AT_INFINITY),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# Invalid pubkeys and signature -- len(pubkeys) == 0 and signature == 0x00...
|
||||
assert not bls.FastAggregateVerify([], message, NO_SIGNATURE)
|
||||
assert not milagro_bls.FastAggregateVerify([], message, NO_SIGNATURE)
|
||||
yield f'fast_aggregate_verify_na_pubkeys_and_na_signature', {
|
||||
assert not bls.FastAggregateVerify([], message, ZERO_SIGNATURE)
|
||||
assert not milagro_bls.FastAggregateVerify([], message, ZERO_SIGNATURE)
|
||||
yield f'fast_aggregate_verify_na_pubkeys_and_zero_signature', {
|
||||
'input': {
|
||||
'pubkeys': [],
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(NO_SIGNATURE),
|
||||
'signature': encode_hex(ZERO_SIGNATURE),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# Invalid pubkeys and signature -- pubkeys contains point at infinity
|
||||
pubkeys = PUBKEYS.copy()
|
||||
pubkeys_with_infinity = pubkeys + [Z1_PUBKEY]
|
||||
pubkeys_with_infinity = pubkeys + [G1_POINT_AT_INFINITY]
|
||||
signatures = [bls.Sign(privkey, SAMPLE_MESSAGE) for privkey in PRIVKEYS]
|
||||
aggregate_signature = bls.Aggregate(signatures)
|
||||
assert not bls.FastAggregateVerify(pubkeys_with_infinity, SAMPLE_MESSAGE, aggregate_signature)
|
||||
|
@ -317,31 +322,31 @@ def case05_aggregate_verify():
|
|||
}
|
||||
|
||||
# Invalid pubkeys and signature -- len(pubkeys) == 0 and signature == G2_POINT_AT_INFINITY
|
||||
assert not bls.AggregateVerify([], [], Z2_SIGNATURE)
|
||||
assert not milagro_bls.AggregateVerify([], [], Z2_SIGNATURE)
|
||||
assert not bls.AggregateVerify([], [], G2_POINT_AT_INFINITY)
|
||||
assert not milagro_bls.AggregateVerify([], [], G2_POINT_AT_INFINITY)
|
||||
yield f'aggregate_verify_na_pubkeys_and_infinity_signature', {
|
||||
'input': {
|
||||
'pubkeys': [],
|
||||
'messages': [],
|
||||
'signature': encode_hex(Z2_SIGNATURE),
|
||||
'signature': encode_hex(G2_POINT_AT_INFINITY),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# Invalid pubkeys and signature -- len(pubkeys) == 0 and signature == 0x00...
|
||||
assert not bls.AggregateVerify([], [], NO_SIGNATURE)
|
||||
assert not milagro_bls.AggregateVerify([], [], NO_SIGNATURE)
|
||||
yield f'aggregate_verify_na_pubkeys_and_na_signature', {
|
||||
assert not bls.AggregateVerify([], [], ZERO_SIGNATURE)
|
||||
assert not milagro_bls.AggregateVerify([], [], ZERO_SIGNATURE)
|
||||
yield f'aggregate_verify_na_pubkeys_and_zero_signature', {
|
||||
'input': {
|
||||
'pubkeys': [],
|
||||
'messages': [],
|
||||
'signature': encode_hex(NO_SIGNATURE),
|
||||
'signature': encode_hex(ZERO_SIGNATURE),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# Invalid pubkeys and signature -- pubkeys contains point at infinity
|
||||
pubkeys_with_infinity = pubkeys + [Z1_PUBKEY]
|
||||
pubkeys_with_infinity = pubkeys + [G1_POINT_AT_INFINITY]
|
||||
messages_with_sample = messages + [SAMPLE_MESSAGE]
|
||||
assert not bls.AggregateVerify(pubkeys_with_infinity, messages_with_sample, aggregate_signature)
|
||||
assert not milagro_bls.AggregateVerify(pubkeys_with_infinity, messages_with_sample, aggregate_signature)
|
||||
|
@ -355,7 +360,150 @@ def case05_aggregate_verify():
|
|||
}
|
||||
|
||||
|
||||
def create_provider(handler_name: str,
|
||||
def case06_eth_aggregate_pubkeys():
|
||||
for pubkey in PUBKEYS:
|
||||
encoded_pubkey = encode_hex(pubkey)
|
||||
aggregate_pubkey = spec.eth_aggregate_pubkeys([pubkey])
|
||||
# Should be unchanged
|
||||
assert aggregate_pubkey == milagro_bls._AggregatePKs([pubkey]) == pubkey
|
||||
# Valid pubkey
|
||||
yield f'eth_aggregate_pubkeys_valid_{(hash(bytes(encoded_pubkey, "utf-8"))[:8]).hex()}', {
|
||||
'input': [encode_hex(pubkey)],
|
||||
'output': encode_hex(aggregate_pubkey),
|
||||
}
|
||||
|
||||
# Valid pubkeys
|
||||
aggregate_pubkey = spec.eth_aggregate_pubkeys(PUBKEYS)
|
||||
assert aggregate_pubkey == milagro_bls._AggregatePKs(PUBKEYS)
|
||||
yield f'eth_aggregate_pubkeys_valid_pubkeys', {
|
||||
'input': [encode_hex(pubkey) for pubkey in PUBKEYS],
|
||||
'output': encode_hex(aggregate_pubkey),
|
||||
}
|
||||
|
||||
# Invalid pubkeys -- len(pubkeys) == 0
|
||||
expect_exception(spec.eth_aggregate_pubkeys, [])
|
||||
expect_exception(milagro_bls._AggregatePKs, [])
|
||||
yield f'eth_aggregate_pubkeys_empty_list', {
|
||||
'input': [],
|
||||
'output': None,
|
||||
}
|
||||
|
||||
# Invalid pubkeys -- [ZERO_PUBKEY]
|
||||
expect_exception(spec.eth_aggregate_pubkeys, [ZERO_PUBKEY])
|
||||
expect_exception(milagro_bls._AggregatePKs, [ZERO_PUBKEY])
|
||||
yield f'eth_aggregate_pubkeys_zero_pubkey', {
|
||||
'input': [encode_hex(ZERO_PUBKEY)],
|
||||
'output': None,
|
||||
}
|
||||
|
||||
# Invalid pubkeys -- G1 point at infinity
|
||||
expect_exception(spec.eth_aggregate_pubkeys, [G1_POINT_AT_INFINITY])
|
||||
expect_exception(milagro_bls._AggregatePKs, [G1_POINT_AT_INFINITY])
|
||||
yield f'eth_aggregate_pubkeys_infinity_pubkey', {
|
||||
'input': [encode_hex(G1_POINT_AT_INFINITY)],
|
||||
'output': None,
|
||||
}
|
||||
|
||||
# Invalid pubkeys -- b'\x40\x00\x00\x00....\x00' pubkey
|
||||
x40_pubkey = b'\x40' + b'\x00' * 47
|
||||
expect_exception(spec.eth_aggregate_pubkeys, [x40_pubkey])
|
||||
expect_exception(milagro_bls._AggregatePKs, [x40_pubkey])
|
||||
yield f'eth_aggregate_pubkeys_x40_pubkey', {
|
||||
'input': [encode_hex(x40_pubkey)],
|
||||
'output': None,
|
||||
}
|
||||
|
||||
|
||||
def case07_eth_fast_aggregate_verify():
|
||||
"""
|
||||
Similar to `case04_fast_aggregate_verify` except for the empty case
|
||||
"""
|
||||
for i, message in enumerate(MESSAGES):
|
||||
privkeys = PRIVKEYS[:i + 1]
|
||||
sigs = [bls.Sign(privkey, message) for privkey in privkeys]
|
||||
aggregate_signature = bls.Aggregate(sigs)
|
||||
pubkeys = [bls.SkToPk(privkey) for privkey in privkeys]
|
||||
pubkeys_serial = [encode_hex(pubkey) for pubkey in pubkeys]
|
||||
|
||||
# Valid signature
|
||||
identifier = f'{pubkeys_serial}_{encode_hex(message)}'
|
||||
assert spec.eth_fast_aggregate_verify(pubkeys, message, aggregate_signature)
|
||||
yield f'eth_fast_aggregate_verify_valid_{(hash(bytes(identifier, "utf-8"))[:8]).hex()}', {
|
||||
'input': {
|
||||
'pubkeys': pubkeys_serial,
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(aggregate_signature),
|
||||
},
|
||||
'output': True,
|
||||
}
|
||||
|
||||
# Invalid signature -- extra pubkey
|
||||
pubkeys_extra = pubkeys + [bls.SkToPk(PRIVKEYS[-1])]
|
||||
pubkeys_extra_serial = [encode_hex(pubkey) for pubkey in pubkeys_extra]
|
||||
identifier = f'{pubkeys_extra_serial}_{encode_hex(message)}'
|
||||
assert not spec.eth_fast_aggregate_verify(pubkeys_extra, message, aggregate_signature)
|
||||
yield f'eth_fast_aggregate_verify_extra_pubkey_{(hash(bytes(identifier, "utf-8"))[:8]).hex()}', {
|
||||
'input': {
|
||||
'pubkeys': pubkeys_extra_serial,
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(aggregate_signature),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# Invalid signature -- tampered with signature
|
||||
tampered_signature = aggregate_signature[:-4] + b'\xff\xff\xff\xff'
|
||||
identifier = f'{pubkeys_serial}_{encode_hex(message)}'
|
||||
assert not spec.eth_fast_aggregate_verify(pubkeys, message, tampered_signature)
|
||||
yield f'eth_fast_aggregate_verify_tampered_signature_{(hash(bytes(identifier, "utf-8"))[:8]).hex()}', {
|
||||
'input': {
|
||||
'pubkeys': pubkeys_serial,
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(tampered_signature),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# NOTE: Unlike `FastAggregateVerify`, len(pubkeys) == 0 and signature == G2_POINT_AT_INFINITY is VALID
|
||||
assert spec.eth_fast_aggregate_verify([], message, G2_POINT_AT_INFINITY)
|
||||
yield f'eth_fast_aggregate_verify_na_pubkeys_and_infinity_signature', {
|
||||
'input': {
|
||||
'pubkeys': [],
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(G2_POINT_AT_INFINITY),
|
||||
},
|
||||
'output': True,
|
||||
}
|
||||
|
||||
# Invalid pubkeys and signature -- len(pubkeys) == 0 and signature == 0x00...
|
||||
assert not spec.eth_fast_aggregate_verify([], message, ZERO_SIGNATURE)
|
||||
yield f'eth_fast_aggregate_verify_na_pubkeys_and_zero_signature', {
|
||||
'input': {
|
||||
'pubkeys': [],
|
||||
'message': encode_hex(message),
|
||||
'signature': encode_hex(ZERO_SIGNATURE),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
# Invalid pubkeys and signature -- pubkeys contains point at infinity
|
||||
pubkeys = PUBKEYS.copy()
|
||||
pubkeys_with_infinity = pubkeys + [G1_POINT_AT_INFINITY]
|
||||
signatures = [bls.Sign(privkey, SAMPLE_MESSAGE) for privkey in PRIVKEYS]
|
||||
aggregate_signature = bls.Aggregate(signatures)
|
||||
assert not spec.eth_fast_aggregate_verify(pubkeys_with_infinity, SAMPLE_MESSAGE, aggregate_signature)
|
||||
yield f'eth_fast_aggregate_verify_infinity_pubkey', {
|
||||
'input': {
|
||||
'pubkeys': [encode_hex(pubkey) for pubkey in pubkeys_with_infinity],
|
||||
'message': encode_hex(SAMPLE_MESSAGE),
|
||||
'signature': encode_hex(aggregate_signature),
|
||||
},
|
||||
'output': False,
|
||||
}
|
||||
|
||||
|
||||
def create_provider(fork_name: SpecForkName,
|
||||
handler_name: str,
|
||||
test_case_fn: Callable[[], Iterable[Tuple[str, Dict[str, Any]]]]) -> gen_typing.TestProvider:
|
||||
|
||||
def prepare_fn() -> None:
|
||||
|
@ -368,7 +516,7 @@ def create_provider(handler_name: str,
|
|||
print(data)
|
||||
(case_name, case_content) = data
|
||||
yield gen_typing.TestCase(
|
||||
fork_name=PHASE0,
|
||||
fork_name=fork_name,
|
||||
preset_name='general',
|
||||
runner_name='bls',
|
||||
handler_name=handler_name,
|
||||
|
@ -383,9 +531,13 @@ def create_provider(handler_name: str,
|
|||
if __name__ == "__main__":
|
||||
bls.use_py_ecc() # Py-ecc is chosen instead of Milagro, since the code is better understood to be correct.
|
||||
gen_runner.run_generator("bls", [
|
||||
create_provider('sign', case01_sign),
|
||||
create_provider('verify', case02_verify),
|
||||
create_provider('aggregate', case03_aggregate),
|
||||
create_provider('fast_aggregate_verify', case04_fast_aggregate_verify),
|
||||
create_provider('aggregate_verify', case05_aggregate_verify),
|
||||
# PHASE0
|
||||
create_provider(PHASE0, 'sign', case01_sign),
|
||||
create_provider(PHASE0, 'verify', case02_verify),
|
||||
create_provider(PHASE0, 'aggregate', case03_aggregate),
|
||||
create_provider(PHASE0, 'fast_aggregate_verify', case04_fast_aggregate_verify),
|
||||
create_provider(PHASE0, 'aggregate_verify', case05_aggregate_verify),
|
||||
# ALTAIR
|
||||
create_provider(ALTAIR, 'eth_aggregate_pubkeys', case06_eth_aggregate_pubkeys),
|
||||
create_provider(ALTAIR, 'eth_fast_aggregate_verify', case07_eth_fast_aggregate_verify),
|
||||
])
|
||||
|
|
|
@ -5,6 +5,7 @@ from eth2spec.test.helpers.constants import PHASE0, ALTAIR, MERGE
|
|||
if __name__ == "__main__":
|
||||
phase_0_mods = {key: 'eth2spec.test.phase0.fork_choice.test_' + key for key in [
|
||||
'get_head',
|
||||
'on_block',
|
||||
]}
|
||||
# No additional Altair specific finality tests, yet.
|
||||
altair_mods = phase_0_mods
|
||||
|
|
|
@ -12,8 +12,9 @@ if __name__ == "__main__":
|
|||
'voluntary_exit',
|
||||
]}
|
||||
altair_mods = {
|
||||
**{key: 'eth2spec.test.altair.block_processing.test_process_' + key for key in [
|
||||
**{key: 'eth2spec.test.altair.block_processing.sync_aggregate.test_process_' + key for key in [
|
||||
'sync_aggregate',
|
||||
'sync_aggregate_random',
|
||||
]},
|
||||
**phase_0_mods,
|
||||
} # also run the previous phase 0 tests
|
||||
|
|
|
@ -0,0 +1,8 @@
all:
	if ! test -d venv; then python3 -m venv venv; fi
	# each recipe line runs in its own shell, so the venv is re-activated per line
	. ./venv/bin/activate && pip3 install -r requirements.txt
	rm -f ../../core/pyspec/eth2spec/test/phase0/random/test_random.py
	rm -f ../../core/pyspec/eth2spec/test/altair/random/test_random.py
	. ./venv/bin/activate && python3 generate.py phase0 > ../../core/pyspec/eth2spec/test/phase0/random/test_random.py
	. ./venv/bin/activate && python3 generate.py altair > ../../core/pyspec/eth2spec/test/altair/random/test_random.py
@ -0,0 +1,31 @@
# Randomized tests

Randomized tests in the format of the `sanity` block tests, with randomized operations.

Information on the format of the tests can be found in the [sanity test formats documentation](../../formats/sanity/README.md).

# To generate test sources

```bash
$ make
```

The necessary commands are in the `Makefile`, as the only target.

The generated files are committed to the repo, so you should not need to do this.
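
For reference, each module is produced by piping `generate.py` for the relevant phase to the right destination (these are the same commands the `Makefile` runs):

```bash
python3 generate.py phase0 > ../../core/pyspec/eth2spec/test/phase0/random/test_random.py
python3 generate.py altair > ../../core/pyspec/eth2spec/test/altair/random/test_random.py
```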

# To run tests

Each of the generated tests produces a `pytest` test instance, but by default it is
currently skipped. Running the tests via the generator (see next) will trigger any errors
that would arise when running them under `pytest`.

# To generate spec tests (from the generated files)

Run the test generator in the usual way.

E.g. from the root of this repo, you can run:

```bash
$ make gen_random
```
@ -0,0 +1,258 @@
"""
This test format currently uses code generation to assemble the tests
as the current test infra does not have a facility to dynamically
generate tests that can be seen by ``pytest``.

This will likely change in future releases of the testing infra.

NOTE: To add additional scenarios, add test cases below in ``_generate_randomized_scenarios``.
"""

import sys
import random
import warnings
from typing import Callable
import itertools

from eth2spec.test.utils.randomized_block_tests import (
    no_block,
    no_op_validation,
    randomize_state,
    randomize_state_altair,
    random_block,
    random_block_altair_with_cycling_sync_committee_participation,
    last_slot_in_epoch,
    random_slot_in_epoch,
    penultimate_slot_in_epoch,
    epoch_transition,
    slot_transition,
    transition_with_random_block,
    transition_to_leaking,
    transition_without_leak,
)
from eth2spec.test.helpers.constants import PHASE0, ALTAIR


# Ensure this many blocks are present in *each* randomized scenario
BLOCK_TRANSITIONS_COUNT = 2


def _normalize_transition(transition):
    """
    Provide "empty" or "no op" sub-transitions
    to a given transition.
    """
    if isinstance(transition, Callable):
        transition = transition()
    if "epochs_to_skip" not in transition:
        transition["epochs_to_skip"] = 0
    if "slots_to_skip" not in transition:
        transition["slots_to_skip"] = 0
    if "block_producer" not in transition:
        transition["block_producer"] = no_block
    if "validation" not in transition:
        transition["validation"] = no_op_validation
    return transition
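# Illustration (hypothetical input; not part of the generator): given
# {"slots_to_skip": 2}, _normalize_transition fills in the remaining keys,
# returning {"slots_to_skip": 2, "epochs_to_skip": 0,
# "block_producer": no_block, "validation": no_op_validation}.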


def _normalize_scenarios(scenarios):
    """
    "Normalize" a "scenario" so that a producer of a test case
    does not need to provide every expected key/value.
    """
    for scenario in scenarios:
        transitions = scenario["transitions"]
        for i, transition in enumerate(transitions):
            transitions[i] = _normalize_transition(transition)


def _flatten(t):
    leak_transition = t[0]
    result = [leak_transition]
    for transition_batch in t[1]:
        for transition in transition_batch:
            if isinstance(transition, tuple):
                for subtransition in transition:
                    result.append(subtransition)
            else:
                result.append(transition)
    return result
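# Shape note (illustrative): _flatten((leak, (((epoch, slot), block),))) yields
# [leak, epoch, slot, block] -- nested tuples of sub-transitions are expanded
# in order into one flat transition list.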


def _generate_randomized_scenarios(block_randomizer):
    """
    Generates a set of randomized testing scenarios.
    Return a sequence of "scenarios" where each scenario:
    1. Provides some setup
    2. Provides a sequence of transitions that mutate the state in some way,
       possibly yielding blocks along the way
    NOTE: scenarios are "normalized" with empty/no-op elements before returning
    to the test generation to facilitate brevity when writing scenarios by hand.
    NOTE: the main block driver builds a block for the **next** slot, so
    the slot transitions are offset by -1 to target certain boundaries.
    """
    # go forward 0 or 1 epochs
    epochs_set = (
        epoch_transition(n=0),
        epoch_transition(n=1),
    )
    # within those epochs, go forward to:
    slots_set = (
        # the first slot in an epoch (see note in docstring about offsets...)
        slot_transition(last_slot_in_epoch),
        # the second slot in an epoch
        slot_transition(n=0),
        # some random number of slots, but not at epoch boundaries
        slot_transition(random_slot_in_epoch),
        # the last slot in an epoch (see note in docstring about offsets...)
        slot_transition(penultimate_slot_in_epoch),
    )
    # and produce a block...
    blocks_set = (
        transition_with_random_block(block_randomizer),
    )

    rng = random.Random(1447)
    all_skips = list(itertools.product(epochs_set, slots_set))
    randomized_skips = (
        rng.sample(all_skips, len(all_skips))
        for _ in range(BLOCK_TRANSITIONS_COUNT)
    )

    # build a set of block transitions from combinations of sub-transitions
    transitions_generator = (
        itertools.product(prefix, blocks_set)
        for prefix in randomized_skips
    )
    block_transitions = zip(*transitions_generator)

    # and preface each set of block transitions with the possible leak transitions
    leak_transitions = (
        transition_without_leak,
        transition_to_leaking,
    )
    scenarios = [
        {"transitions": _flatten(t)}
        for t in itertools.product(leak_transitions, block_transitions)
    ]
    _normalize_scenarios(scenarios)
    return scenarios
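# Sizing note (illustrative, not part of the generator logic): with the sets
# above there are 2 epochs x 4 slots = 8 skip combinations and 2 leak variants,
# giving 16 scenarios, each driving BLOCK_TRANSITIONS_COUNT randomized blocks.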


def _id_from_scenario(test_description):
    """
    Construct a name for the scenario based on its data.
    """
    def _to_id_part(prefix, x):
        suffix = str(x)
        if isinstance(x, Callable):
            suffix = x.__name__
        return f"{prefix}{suffix}"

    def _id_from_transition(transition):
        return ",".join((
            _to_id_part("epochs:", transition["epochs_to_skip"]),
            _to_id_part("slots:", transition["slots_to_skip"]),
            _to_id_part("with-block:", transition["block_producer"])
        ))

    return "|".join(map(_id_from_transition, test_description["transitions"]))


test_imports_template = """\"\"\"
This module is generated from the ``random`` test generator.
Please do not edit this file manually.
See the README for that generator for more information.
\"\"\"

from eth2spec.test.helpers.constants import {phase}
from eth2spec.test.context import (
    misc_balances_in_default_range_with_many_validators,
    with_phases,
    zero_activation_threshold,
    only_generator,
)
from eth2spec.test.context import (
    always_bls,
    spec_test,
    with_custom_state,
    single_phase,
)
from eth2spec.test.utils.randomized_block_tests import (
    run_generated_randomized_test,
)"""

test_template = """
@only_generator(\"randomized test for broad coverage, not point-to-point CI\")
@with_phases([{phase}])
@with_custom_state(
    balances_fn=misc_balances_in_default_range_with_many_validators,
    threshold_fn=zero_activation_threshold
)
@spec_test
@single_phase
@always_bls
def test_randomized_{index}(spec, state):
    # scenario as high-level, informal text:
{name_as_comment}
    scenario = {scenario}  # noqa: E501
    yield from run_generated_randomized_test(
        spec,
        state,
        scenario,
    )"""


def _to_comment(name, indent_level):
    parts = name.split("|")
    indentation = "    " * indent_level
    parts = [
        indentation + "# " + part for part in parts
    ]
    return "\n".join(parts)


def run_generate_tests_to_std_out(phase, state_randomizer, block_randomizer):
    scenarios = _generate_randomized_scenarios(block_randomizer)
    test_content = {"phase": phase.upper()}
    test_imports = test_imports_template.format(**test_content)
    test_file = [test_imports]
    for index, scenario in enumerate(scenarios):
        # required for setup phase
        scenario["state_randomizer"] = state_randomizer.__name__

        # need to pass name, rather than function reference...
        transitions = scenario["transitions"]
        for transition in transitions:
            for name, value in transition.items():
                if isinstance(value, Callable):
                    transition[name] = value.__name__

        test_content = test_content.copy()
        name = _id_from_scenario(scenario)
        test_content["name_as_comment"] = _to_comment(name, 1)
        test_content["index"] = index
        test_content["scenario"] = scenario
        test_instance = test_template.format(**test_content)
        test_file.append(test_instance)
    print("\n\n".join(test_file))


if __name__ == "__main__":
    did_generate = False
    if PHASE0 in sys.argv:
        did_generate = True
        run_generate_tests_to_std_out(
            PHASE0,
            state_randomizer=randomize_state,
            block_randomizer=random_block,
        )
    if ALTAIR in sys.argv:
        did_generate = True
        run_generate_tests_to_std_out(
            ALTAIR,
            state_randomizer=randomize_state_altair,
            block_randomizer=random_block_altair_with_cycling_sync_committee_participation,
        )
    if not did_generate:
        warnings.warn("no phase given for test generation")
@ -0,0 +1,18 @@
from eth2spec.test.helpers.constants import PHASE0, ALTAIR
from eth2spec.gen_helpers.gen_from_tests.gen import run_state_test_generators


if __name__ == "__main__":
    phase_0_mods = {key: 'eth2spec.test.phase0.random.test_' + key for key in [
        'random',
    ]}
    altair_mods = {key: 'eth2spec.test.altair.random.test_' + key for key in [
        'random',
    ]}

    all_mods = {
        PHASE0: phase_0_mods,
        ALTAIR: altair_mods,
    }

    run_state_test_generators(runner_name="random", all_mods=all_mods)
@ -0,0 +1,2 @@
pytest>=4.4
../../../[generator]
@ -1,10 +1,10 @@
# Shuffling Tests

Tests for the swap-or-not shuffling in Eth2.
Tests for the swap-or-not shuffling in the beacon chain.

Tips for initial shuffling write:
- run with `round_count = 1` first, do the same with pyspec.
- start with permute index (see the sketch below)
- optimized shuffling implementations:
  - vitalik, Python: https://github.com/ethereum/eth2.0-specs/pull/576#issue-250741806
  - vitalik, Python: https://github.com/ethereum/consensus-specs/pull/576#issue-250741806
  - protolambda, Go: https://github.com/protolambda/eth2-shuffle
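
For orientation when writing that first permute-index implementation, here is a minimal self-contained sketch (illustrative only -- it assumes SHA-256 and the mainnet `SHUFFLE_ROUND_COUNT = 90`; the pyspec's `compute_shuffled_index` remains the authoritative reference):

```python
from hashlib import sha256

SHUFFLE_ROUND_COUNT = 90  # mainnet preset value (assumption for this sketch)


def compute_shuffled_index(index: int, index_count: int, seed: bytes) -> int:
    """Swap-or-not shuffle: map ``index`` to its shuffled position."""
    assert index < index_count
    for current_round in range(SHUFFLE_ROUND_COUNT):
        round_byte = current_round.to_bytes(1, 'little')
        # pick a pivot for this round from the seed
        pivot = int.from_bytes(sha256(seed + round_byte).digest()[:8], 'little') % index_count
        flip = (pivot + index_count - index) % index_count
        position = max(index, flip)
        # one bit of the source hash decides whether to swap
        source = sha256(seed + round_byte + (position // 256).to_bytes(4, 'little')).digest()
        byte = source[(position % 256) // 8]
        bit = (byte >> (position % 8)) % 2
        index = flip if bit else index
    return index
```

Setting `SHUFFLE_ROUND_COUNT = 1` makes it easy to compare intermediate values against the pyspec, as the first tip above suggests.
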
@ -1,6 +1,6 @@
# SSZ-static

The purpose of this test-generator is to provide test-vectors for the most important applications of SSZ:
the serialization and hashing of Eth2 data types.
the serialization and hashing of Ethereum data types.

Test-format documentation can be found [here](../../formats/ssz_static/README.md).