Merge branch 'dev' into proto-merge-test-gen

Commit: 69af7f733e

@@ -19,6 +19,8 @@ Accompanying documents can be found in [specs](specs) and include

* [BLS signature verification](specs/bls_signature.md)
* [General test format](specs/test-format.md)
* [Honest validator implementation doc](specs/validator/0_beacon-chain-validator.md)
* [Merkle proof formats](specs/light_client/merkle_proofs.md)
* [Light client syncing protocol](specs/light_client/sync_protocol.md)

### Design goals
@@ -0,0 +1,20 @@

# Constant Presets

This directory contains a set of constants presets used for testing, testnets, and mainnet.

A preset file contains all the constants known for its target.
Later-fork constants can be ignored; e.g. a client that currently only supports phase 0 can ignore the phase 1 constants.

## Format

Each preset is a key-value mapping.

**Key**: an `UPPER_SNAKE_CASE` (a.k.a. "macro case") formatted string, the name of the constant.

**Value**: can be any of:

- an unsigned integer number, up to 64 bits (inclusive)
- a hexadecimal string, prefixed with `0x`

Presets may contain comments to describe the values.

See `mainnet.yaml` for a complete example.
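The key/value rules above can be checked mechanically. A minimal sketch, assuming the preset has already been parsed into a plain mapping by a YAML library; `validate_preset` is a hypothetical helper, not part of the spec:

```python
def validate_preset(preset: dict) -> bool:
    """Check a parsed preset mapping against the documented format."""
    for name, value in preset.items():
        # Keys are UPPER_SNAKE_CASE constant names.
        assert name == name.upper() and " " not in name, f"bad key: {name}"
        if isinstance(value, int):
            # Unsigned integers, up to 64 bits (inclusive).
            assert 0 <= value < 2**64, f"{name} does not fit in a uint64"
        else:
            # Otherwise a 0x-prefixed hexadecimal string.
            assert isinstance(value, str) and value.startswith("0x"), f"bad value for {name}"
    return True
```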
@@ -0,0 +1,124 @@

# Mainnet preset

# Note: the intention of this file (for now) is to illustrate what a mainnet configuration could look like.
# Some of these constants may still change before the launch of Phase 0.


# Misc
# ---------------------------------------------------------------
# 2**10 (= 1,024)
SHARD_COUNT: 1024
# 2**7 (= 128)
TARGET_COMMITTEE_SIZE: 128
# 2**5 (= 32)
MAX_BALANCE_CHURN_QUOTIENT: 32
# 2**12 (= 4,096)
MAX_ATTESTATION_PARTICIPANTS: 4096
# 2**2 (= 4)
MAX_EXIT_DEQUEUES_PER_EPOCH: 4
# See issue 563
SHUFFLE_ROUND_COUNT: 90


# Deposit contract
# ---------------------------------------------------------------
# **TBD**
DEPOSIT_CONTRACT_ADDRESS: 0x1234567890123567890123456789012357890
# 2**5 (= 32)
DEPOSIT_CONTRACT_TREE_DEPTH: 32


# Gwei values
# ---------------------------------------------------------------
# 2**0 * 10**9 (= 1,000,000,000) Gwei
MIN_DEPOSIT_AMOUNT: 1000000000
# 2**5 * 10**9 (= 32,000,000,000) Gwei
MAX_DEPOSIT_AMOUNT: 32000000000
# 2**4 * 10**9 (= 16,000,000,000) Gwei
EJECTION_BALANCE: 16000000000
# 2**0 * 10**9 (= 1,000,000,000) Gwei
HIGH_BALANCE_INCREMENT: 1000000000


# Initial values
# ---------------------------------------------------------------
GENESIS_FORK_VERSION: 0x00000000
# 2**32, GENESIS_EPOCH is derived from this constant
GENESIS_SLOT: 4294967296
GENESIS_START_SHARD: 0
# 2**64 - 1
FAR_FUTURE_EPOCH: 18446744073709551615
BLS_WITHDRAWAL_PREFIX_BYTE: 0x00


# Time parameters
# ---------------------------------------------------------------
# 6 seconds
SECONDS_PER_SLOT: 6
# 2**2 (= 4) slots, 24 seconds
MIN_ATTESTATION_INCLUSION_DELAY: 4
# 2**6 (= 64) slots, 6.4 minutes
SLOTS_PER_EPOCH: 64
# 2**0 (= 1) epochs, 6.4 minutes
MIN_SEED_LOOKAHEAD: 1
# 2**2 (= 4) epochs, 25.6 minutes
ACTIVATION_EXIT_DELAY: 4
# 2**4 (= 16) epochs, ~1.7 hours
EPOCHS_PER_ETH1_VOTING_PERIOD: 16
# 2**13 (= 8,192) slots, ~13 hours
SLOTS_PER_HISTORICAL_ROOT: 8192
# 2**8 (= 256) epochs, ~27 hours
MIN_VALIDATOR_WITHDRAWABILITY_DELAY: 256
# 2**11 (= 2,048) epochs, ~9 days
PERSISTENT_COMMITTEE_PERIOD: 2048
# 2**6 (= 64)
MAX_CROSSLINK_EPOCHS: 64


# State list lengths
# ---------------------------------------------------------------
# 2**13 (= 8,192) epochs, ~36 days
LATEST_RANDAO_MIXES_LENGTH: 8192
# 2**13 (= 8,192) epochs, ~36 days
LATEST_ACTIVE_INDEX_ROOTS_LENGTH: 8192
# 2**13 (= 8,192) epochs, ~36 days
LATEST_SLASHED_EXIT_LENGTH: 8192


# Reward and penalty quotients
# ---------------------------------------------------------------
# 2**5 (= 32)
BASE_REWARD_QUOTIENT: 32
# 2**9 (= 512)
WHISTLEBLOWING_REWARD_QUOTIENT: 512
# 2**3 (= 8)
PROPOSER_REWARD_QUOTIENT: 8
# 2**24 (= 16,777,216)
INACTIVITY_PENALTY_QUOTIENT: 16777216
# 2**5 (= 32)
MIN_PENALTY_QUOTIENT: 32


# Max operations per block
# ---------------------------------------------------------------
# 2**4 (= 16)
MAX_PROPOSER_SLASHINGS: 16
# 2**0 (= 1)
MAX_ATTESTER_SLASHINGS: 1
# 2**7 (= 128)
MAX_ATTESTATIONS: 128
# 2**4 (= 16)
MAX_DEPOSITS: 16
# 2**4 (= 16)
MAX_VOLUNTARY_EXITS: 16
# 2**4 (= 16)
MAX_TRANSFERS: 16


# Signature domains
# ---------------------------------------------------------------
DOMAIN_BEACON_BLOCK: 0
DOMAIN_RANDAO: 1
DOMAIN_ATTESTATION: 2
DOMAIN_DEPOSIT: 3
DOMAIN_VOLUNTARY_EXIT: 4
DOMAIN_TRANSFER: 5
@@ -0,0 +1,124 @@

# Minimal preset


# Misc
# ---------------------------------------------------------------
# [customized] Just 8 shards for testing purposes
SHARD_COUNT: 8
# [customized] insecure, but fast
TARGET_COMMITTEE_SIZE: 4
# 2**5 (= 32)
MAX_BALANCE_CHURN_QUOTIENT: 32
# 2**12 (= 4,096)
MAX_ATTESTATION_PARTICIPANTS: 4096
# 2**2 (= 4)
MAX_EXIT_DEQUEUES_PER_EPOCH: 4
# See issue 563
SHUFFLE_ROUND_COUNT: 90


# Deposit contract
# ---------------------------------------------------------------
# **TBD**
DEPOSIT_CONTRACT_ADDRESS: 0x1234567890123567890123456789012357890
# 2**5 (= 32)
DEPOSIT_CONTRACT_TREE_DEPTH: 32


# Gwei values
# ---------------------------------------------------------------
# 2**0 * 10**9 (= 1,000,000,000) Gwei
MIN_DEPOSIT_AMOUNT: 1000000000
# 2**5 * 10**9 (= 32,000,000,000) Gwei
MAX_DEPOSIT_AMOUNT: 32000000000
# 2**4 * 10**9 (= 16,000,000,000) Gwei
EJECTION_BALANCE: 16000000000
# 2**0 * 10**9 (= 1,000,000,000) Gwei
HIGH_BALANCE_INCREMENT: 1000000000


# Initial values
# ---------------------------------------------------------------
GENESIS_FORK_VERSION: 0x00000000
# 2**32, GENESIS_EPOCH is derived from this constant
GENESIS_SLOT: 4294967296
GENESIS_START_SHARD: 0
# 2**64 - 1
FAR_FUTURE_EPOCH: 18446744073709551615
BLS_WITHDRAWAL_PREFIX_BYTE: 0x00


# Time parameters
# ---------------------------------------------------------------
# 6 seconds
SECONDS_PER_SLOT: 6
# [customized] 2 slots
MIN_ATTESTATION_INCLUSION_DELAY: 2
# [customized] fast epochs
SLOTS_PER_EPOCH: 8
# 2**0 (= 1) epochs
MIN_SEED_LOOKAHEAD: 1
# 2**2 (= 4) epochs
ACTIVATION_EXIT_DELAY: 4
# 2**4 (= 16) epochs
EPOCHS_PER_ETH1_VOTING_PERIOD: 16
# [customized] smaller state
SLOTS_PER_HISTORICAL_ROOT: 64
# 2**8 (= 256) epochs
MIN_VALIDATOR_WITHDRAWABILITY_DELAY: 256
# 2**11 (= 2,048) epochs
PERSISTENT_COMMITTEE_PERIOD: 2048
# 2**6 (= 64)
MAX_CROSSLINK_EPOCHS: 64


# State list lengths
# ---------------------------------------------------------------
# [customized] smaller state
LATEST_RANDAO_MIXES_LENGTH: 64
# [customized] smaller state
LATEST_ACTIVE_INDEX_ROOTS_LENGTH: 64
# [customized] smaller state
LATEST_SLASHED_EXIT_LENGTH: 64


# Reward and penalty quotients
# ---------------------------------------------------------------
# 2**5 (= 32)
BASE_REWARD_QUOTIENT: 32
# 2**9 (= 512)
WHISTLEBLOWING_REWARD_QUOTIENT: 512
# 2**3 (= 8)
PROPOSER_REWARD_QUOTIENT: 8
# 2**24 (= 16,777,216)
INACTIVITY_PENALTY_QUOTIENT: 16777216
# 2**5 (= 32)
MIN_PENALTY_QUOTIENT: 32


# Max operations per block
# ---------------------------------------------------------------
# 2**4 (= 16)
MAX_PROPOSER_SLASHINGS: 16
# 2**0 (= 1)
MAX_ATTESTER_SLASHINGS: 1
# 2**7 (= 128)
MAX_ATTESTATIONS: 128
# 2**4 (= 16)
MAX_DEPOSITS: 16
# 2**4 (= 16)
MAX_VOLUNTARY_EXITS: 16
# 2**4 (= 16)
MAX_TRANSFERS: 16


# Signature domains
# ---------------------------------------------------------------
DOMAIN_BEACON_BLOCK: 0
DOMAIN_RANDAO: 1
DOMAIN_ATTESTATION: 2
DOMAIN_DEPOSIT: 3
DOMAIN_VOLUNTARY_EXIT: 4
DOMAIN_TRANSFER: 5
@@ -0,0 +1,18 @@

# Fork timelines

This directory contains a set of fork timelines used for testing, testnets, and mainnet.

A timeline file contains all the forks known for its target.
Later forks can be ignored; e.g. a client that currently only supports phase 0 can ignore the `phase1` fork.

## Format

Each timeline is a key-value mapping.

**Key**: a `lower_snake_case` (a.k.a. "python case") formatted string, the name of the fork.

**Value**: an unsigned integer number, the epoch at which the fork activates.

Timelines may contain comments to describe the values.

See `mainnet.yaml` for a complete example.
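Given such a mapping, a client can resolve which fork is active at a given epoch with a small lookup. A sketch; `active_fork` is an illustrative helper, not part of the spec:

```python
def active_fork(timeline: dict, epoch: int) -> str:
    """Return the name of the latest fork activated at or before ``epoch``."""
    candidates = {name: start for name, start in timeline.items() if start <= epoch}
    assert candidates, "no fork active at this epoch"
    # The active fork is the one with the greatest activation epoch so far.
    return max(candidates, key=candidates.get)
```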
@@ -0,0 +1,12 @@

# Mainnet fork timeline

# Equal to GENESIS_EPOCH
phase0: 67108864

# Example 1:
# phase0_funny_fork_name: 67116000

# Example 2:
# Should be equal to PHASE_1_GENESIS_EPOCH
# (placeholder example value here)
# phase1: 67163000
@@ -178,6 +178,10 @@ Code snippets appearing in `this style` are to be interpreted as Python code.

## Constants

Note: the default mainnet values for the constants are included here for spec-design purposes.
The different configurations for mainnet, testnets, and YAML-based testing can be found in the `configs/constant_presets/` directory.
These configurations are updated for releases, but may be out of sync during `dev` changes.

### Misc

| Name | Value |
@@ -1041,9 +1045,12 @@ def verify_merkle_branch(leaf: Bytes32, proof: List[Bytes32], depth: int, index:

```python
def get_crosslink_committee_for_attestation(state: BeaconState,
                                            attestation_data: AttestationData) -> List[ValidatorIndex]:
    """
    Return the crosslink committee corresponding to ``attestation_data``.
    """
    crosslink_committees = get_crosslink_committees_at_slot(state, attestation_data.slot)

    # Find the committee in the list with the desired shard
    assert attestation_data.shard in [shard for _, shard in crosslink_committees]
    crosslink_committee = [committee for committee, shard in crosslink_committees if shard == attestation_data.shard][0]
```
@@ -1160,9 +1167,9 @@ def verify_bitfield(bitfield: bytes, committee_size: int) -> bool:

### `convert_to_indexed`

```python
def convert_to_indexed(state: BeaconState, attestation: Attestation) -> IndexedAttestation:
    """
    Convert ``attestation`` to (almost) indexed-verifiable form.
    """
    attesting_indices = get_attestation_participants(state, attestation.data, attestation.aggregation_bitfield)
    custody_bit_1_indices = get_attestation_participants(state, attestation.data, attestation.custody_bitfield)

@@ -1172,7 +1179,7 @@ def convert_to_indexed(state: BeaconState, attestation: Attestation) -> IndexedAttestation:

        custody_bit_0_indices=custody_bit_0_indices,
        custody_bit_1_indices=custody_bit_1_indices,
        data=attestation.data,
        aggregate_signature=attestation.aggregate_signature,
    )
```
@@ -1661,6 +1668,7 @@ def lmd_ghost(store: Store, start_state: BeaconState, start_block: BeaconBlock)

        children = get_children(store, head)
        if len(children) == 0:
            return head
        # Ties broken by favoring block with lexicographically higher root
        head = max(children, key=lambda x: (get_vote_count(x), hash_tree_root(x)))
```
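The tie-break added above relies on `max` comparing tuples lexicographically: vote count first, then root bytes. A toy illustration with made-up data (not real block roots):

```python
# Two children with equal vote counts but different roots.
children = [
    ("block_a", 3, b"\x01" * 32),
    ("block_b", 3, b"\x02" * 32),
]
# The key returns (vote_count, root); ties on votes fall through to the root bytes.
head = max(children, key=lambda c: (c[1], c[2]))
assert head[0] == "block_b"  # the lexicographically higher root wins the tie
```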
@@ -1,3 +1,11 @@

**NOTICE**: This document is a work-in-progress for researchers and implementers.

### Constants

| Name | Value |
| - | - |
| `LENGTH_FLAG` | `2**64 - 1` |

### Generalized Merkle tree index

In a binary Merkle tree, we define a "generalized index" of a node as `2**depth + index`. Visually, this looks as follows:

@@ -36,17 +44,34 @@ y_data_root len(y)

.......
```
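The `2**depth + index` definition can be sketched directly; the helper names below (`generalized_index`, `left_child`, `right_child`, `parent`) are illustrative, not part of the spec:

```python
def generalized_index(depth: int, index: int) -> int:
    # The root is 1; each level down doubles the index range.
    return 2**depth + index

def left_child(g: int) -> int:
    return 2 * g

def right_child(g: int) -> int:
    return 2 * g + 1

def parent(g: int) -> int:
    return g // 2
```

For example, the fourth node at depth 2 has generalized index `2**2 + 3 = 7`, and both children of any node share it as their parent.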
We can now define a concept of a "path", a way of describing a function that takes as input an SSZ object and outputs some specific (possibly deeply nested) member. For example, `foo -> foo.x` is a path, as are `foo -> len(foo.y)` and `foo -> foo.y[5].w`. We'll describe paths as lists, which can have two representations. In "human-readable form", they are `["x"]`, `["y", "__len__"]` and `["y", 5, "w"]` respectively. In "encoded form", they are lists of `uint64` values, in these cases (assuming the fields of `foo` in order are `x` then `y`, and `w` is the first field of `y[i]`) `[0]`, `[1, 2**64-1]` and `[1, 5, 0]`.

```python
def path_to_encoded_form(obj: Any, path: List[str or int]) -> List[int]:
    if len(path) == 0:
        return []
    if path[0] == "__len__":
        assert len(path) == 1
        return [LENGTH_FLAG]
    elif isinstance(path[0], str) and hasattr(obj, "fields"):
        return [list(obj.fields.keys()).index(path[0])] + path_to_encoded_form(getattr(obj, path[0]), path[1:])
    elif isinstance(obj, (StaticList, DynamicList)):
        return [path[0]] + path_to_encoded_form(obj[path[0]], path[1:])
    else:
        raise Exception("Unknown type / path")
```

We can now define a function `get_generalized_indices(obj: Any, path: List[int], root: int=1) -> List[int]` that converts an object and a path to a set of generalized indices (note that for constant-sized objects, there is only one generalized index and it only depends on the path, but for dynamically sized objects the indices may depend on the object itself too). For dynamically-sized objects, the set of indices will have more than one member because of the need to access an array's length to determine the correct generalized index for some array access.

```python
def get_generalized_indices(obj: Any, path: List[int], root=1) -> List[int]:
    if len(path) == 0:
        return [root]
    elif isinstance(obj, StaticList):
        items_per_chunk = (32 // len(serialize(obj[0]))) if isinstance(obj[0], int) else 1
        new_root = root * next_power_of_2(len(obj) // items_per_chunk) + path[0] // items_per_chunk
        return get_generalized_indices(obj[path[0]], path[1:], new_root)
    elif isinstance(obj, DynamicList) and path[0] == LENGTH_FLAG:
        return [root * 2 + 1]
    elif isinstance(obj, DynamicList) and isinstance(path[0], int):
        assert path[0] < len(obj)

@@ -54,9 +79,9 @@ def get_generalized_indices(obj: Any, path: List[int], root=1) -> List[int]:

        new_root = root * 2 * next_power_of_2(len(obj) // items_per_chunk) + path[0] // items_per_chunk
        return [root * 2 + 1] + get_generalized_indices(obj[path[0]], path[1:], new_root)
    elif hasattr(obj, "fields"):
        field = list(obj.fields.keys())[path[0]]
        new_root = root * next_power_of_2(len(obj.fields)) + path[0]
        return get_generalized_indices(getattr(obj, field), path[1:], new_root)
    else:
        raise Exception("Unknown type / path")
```

@@ -109,6 +134,8 @@ def get_proof_indices(tree_indices: List[int]) -> List[int]:

Generating a proof is simply a matter of taking the nodes of the SSZ hash tree at each index given by `get_proof_indices` applied to the union of the given generalized indices, and outputting the list of nodes in the same order.

Here is the verification function:

```python
def verify_multi_proof(root, indices, leaves, proof):
    tree = {}

@@ -127,8 +154,24 @@ def verify_multi_proof(root, indices, leaves, proof):

    return (indices == []) or (1 in tree and tree[1] == root)
```
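For intuition, the single-leaf special case of multiproof verification walks one branch from leaf to root, steered by the bits of the generalized index. `verify_branch` below is an illustrative sketch (using SHA-256 as the pair hash, which is an assumption here), not the spec's verifier:

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def verify_branch(leaf: bytes, proof: list, gindex: int, root: bytes) -> bool:
    """Walk from a leaf at generalized index ``gindex`` up to the root.

    At each step the low bit of the index says whether the current node
    is a right child (1) or a left child (0) of its parent.
    """
    node = leaf
    for sibling in proof:
        node = hash_pair(sibling, node) if gindex & 1 else hash_pair(node, sibling)
        gindex //= 2
    return node == root
```

In a 4-leaf tree, leaf 2 sits at generalized index 6, and its proof is its sibling leaf plus the root of the opposite subtree.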

### MerklePartial

We define:

#### `MerklePartial`

```python
{
    "root": "bytes32",
    "indices": ["uint64"],
    "values": ["bytes32"],
    "proof": ["bytes32"]
}
```

#### Proofs for execution

We define `MerklePartial(f, arg1, arg2..., focus=0)` as being a `MerklePartial` object wrapping a Merkle multiproof of the set of nodes in the hash tree of the SSZ object `arg[focus]` that is needed to authenticate the parts of the object needed to compute `f(arg1, arg2...)`.

Ideally, any function which accepts an SSZ object should also be able to accept a `MerklePartial` object as a substitute.
@@ -47,26 +47,15 @@ We add a data type `PeriodData` and four helpers:

```python
def get_earlier_start_epoch(slot: Slot) -> int:
    return slot - slot % PERSISTENT_COMMITTEE_PERIOD - PERSISTENT_COMMITTEE_PERIOD * 2

def get_later_start_epoch(slot: Slot) -> int:
    return slot - slot % PERSISTENT_COMMITTEE_PERIOD - PERSISTENT_COMMITTEE_PERIOD

def get_period_data(block: ExtendedBeaconBlock, shard_id: Shard, later: bool) -> PeriodData:
    period_start = get_later_start_epoch(block.slot) if later else get_earlier_start_epoch(block.slot)
    validator_count = len(get_active_validator_indices(block.state, period_start))
    committee_count = validator_count // (SHARD_COUNT * TARGET_COMMITTEE_SIZE) + 1
    indices = get_period_committee(block.state, shard_id, period_start, 0, committee_count)
    return PeriodData(
        validator_count,
        generate_seed(block.state, period_start),

@@ -80,18 +69,18 @@ A light client will keep track of:

* A random `shard_id` in `[0...SHARD_COUNT-1]` (selected once and retained forever)
* A block header that they consider to be finalized (`finalized_header`) and do not expect to revert.
* `later_period_data = get_period_data(finalized_header, shard_id, later=True)`
* `earlier_period_data = get_period_data(finalized_header, shard_id, later=False)`

We use the struct `validator_memory` to keep track of these variables.

### Updating the shuffled committee

If a client's `validator_memory.finalized_header` changes so that `header.slot // PERSISTENT_COMMITTEE_PERIOD` increases, then the client can ask the network for a `new_committee_proof = MerklePartial(get_period_data, validator_memory.finalized_header, shard_id, later=True)`. It can then compute:

```python
earlier_period_data = later_period_data
later_period_data = get_period_data(new_committee_proof, finalized_header, shard_id, later=True)
```

The maximum size of a proof is `128 * ((22-7) * 32 + 110) = 75520` bytes for validator records and `(22-7) * 32 + 128 * 8 = 1504` for the active index proof (much smaller because the relevant active indices are all beside each other in the Merkle tree). This needs to be done once per `PERSISTENT_COMMITTEE_PERIOD` epochs (2048 epochs / ~9 days), or ~38 bytes per epoch.

@@ -106,13 +95,13 @@ def compute_committee(header: BeaconBlockHeader,

    earlier_validator_count = validator_memory.earlier_period_data.validator_count
    later_validator_count = validator_memory.later_period_data.validator_count
    maximal_earlier_committee = validator_memory.earlier_period_data.committee
    maximal_later_committee = validator_memory.later_period_data.committee
    earlier_start_epoch = get_earlier_start_epoch(header.slot)
    later_start_epoch = get_later_start_epoch(header.slot)
    epoch = slot_to_epoch(header.slot)

    committee_count = max(
        earlier_validator_count // (SHARD_COUNT * TARGET_COMMITTEE_SIZE),
        later_validator_count // (SHARD_COUNT * TARGET_COMMITTEE_SIZE),
    ) + 1
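The period boundary arithmetic in `get_earlier_start_epoch` and `get_later_start_epoch` above can be sanity-checked in isolation; here the constant is inlined and plain `int` is used instead of `Slot`, purely for illustration:

```python
PERSISTENT_COMMITTEE_PERIOD = 2048  # 2**11 epochs

def get_earlier_start_epoch(slot: int) -> int:
    # Round down to the current period boundary, then step back two periods.
    return slot - slot % PERSISTENT_COMMITTEE_PERIOD - PERSISTENT_COMMITTEE_PERIOD * 2

def get_later_start_epoch(slot: int) -> int:
    # Round down to the current period boundary, then step back one period.
    return slot - slot % PERSISTENT_COMMITTEE_PERIOD - PERSISTENT_COMMITTEE_PERIOD
```

For example, any slot in `[4096, 6143]` rounds down to the boundary 4096, giving an earlier start of 0 and a later start of 2048.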
@ -1,71 +0,0 @@
|
||||||
# General test format [WIP]

This document defines the general YAML format to which all tests should conform. Testing specifications in Eth 2.0 are still a work in progress. _Expect breaking changes._

## ToC

* [About](#about)
* [YAML fields](#yaml-fields)
* [Example test suite](#example-test-suite)

## About

Ethereum 2.0 uses YAML as the format for all cross-client tests. This document describes at a high level the general format to which all test files should conform.

The particular formats of specific types of tests (test suites) are defined in separate documents.

## YAML fields

`title` _(required)_

`summary` _(optional)_

`test_suite` _(required)_ string defining the test suite to which the test cases conform

`fork` _(required)_ production release versioning

`version` _(required)_ version for the particular test document

`test_cases` _(required)_ list of test cases, each of which is formatted to conform to the `test_case` standard defined by `test_suite`. All test cases have optional `name` and `description` string fields.

## Example test suite

`shuffle` is a test suite that defines test cases for the `shuffle()` helper function defined in the `beacon-chain` spec.

Test cases that conform to the `shuffle` test suite have the following fields:

* `input` _(required)_ the list of items passed into `shuffle()`
* `output` _(required)_ the expected list returned by `shuffle()`
* `seed` _(required)_ the seed of entropy passed into `shuffle()`

As for all test cases, `name` and `description` are optional string fields.

The following is a sample YAML document for the `shuffle` test suite:

```yaml
title: Shuffling Algorithm Tests
summary: Test vectors for shuffling a list based upon a seed using `shuffle`
test_suite: shuffle
fork: tchaikovsky
version: 1.0

test_cases:
- input: []
  output: []
  seed: !!binary ""
- name: boring_list
  description: List with a single element, 0
  input: [0]
  output: [0]
  seed: !!binary ""
- input: [255]
  output: [255]
  seed: !!binary ""
- input: [4, 6, 2, 6, 1, 4, 6, 2, 1, 5]
  output: [1, 6, 4, 1, 6, 6, 2, 2, 4, 5]
  seed: !!binary ""
- input: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
  output: [4, 7, 10, 13, 3, 1, 2, 9, 12, 6, 11, 8, 5]
  seed: !!binary ""
- input: [65, 6, 2, 6, 1, 4, 6, 2, 1, 5]
  output: [6, 65, 2, 5, 4, 2, 6, 6, 1, 1]
  seed: !!binary |
    JlAYJ5H2j8g7PLiPHZI/rTS1uAvKiieOrifPN6Moso0=
```
@@ -0,0 +1,171 @@
# General test format

This document defines the YAML format and structure used for Eth 2.0 testing.

## ToC

* [About](#about)
* [Glossary](#glossary)
* [Test format philosophy](#test-format-philosophy)
* [Test Suite](#test-suite)
* [Config](#config)
* [Fork-timeline](#fork-timeline)
* [Config sourcing](#config-sourcing)
* [Test structure](#test-structure)
## About

Ethereum 2.0 uses YAML as the format for all cross-client tests. This document describes at a high level the general format to which all test files should conform.

The particular formats of specific types of tests (test suites) are defined in separate documents.
## Glossary

- `generator`: a program that outputs one or more `suite` files.
  - A generator should only output one `type` of test.
  - A generator is free to output multiple `suite` files, optionally with different `handler`s.
- `type`: the specialization of one single `generator`.
- `suite`: a YAML file with:
  - a header: describes the `suite` and defines what the `suite` is for
  - a list of test cases
- `runner`: where a generator is a *"producer"*, this is the *"consumer"*.
  - A `runner` focuses on *only one* `type`, and each type has *only one* `runner`.
- `handler`: a `runner` may sometimes be too limited; a `suite` with a specific focus may require a different format.
  To facilitate this, the suite specifies a `handler`: the runner can then deal with the format by delegating to the specified handler.
  Using a `handler` in a `runner` is optional.
- `case`: a test case, an entry in the `test_cases` list of a `suite`. A case can be anything in general,
  but its format should be well-defined in the documentation corresponding to the `type` (and `handler`).
  A case has the exact same configuration and fork context as the other entries in the `test_cases` list of its `suite`.
- `forks_timeline`: a fork timeline definition, a YAML file containing a key for each fork name, with an epoch number as value.
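As a rough illustration of how these terms might map onto client-side types, a minimal Python sketch; the field names follow the suite header defined later in this document, but the class names and structure are our own, not prescribed by the format:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class TestCase:
    # A case is free-form; its format is defined by the type (and handler) docs.
    data: Dict[str, Any]
    name: Optional[str] = None
    description: Optional[str] = None

@dataclass
class TestSuite:
    # Header fields: describe the suite and define what it is for.
    title: str
    summary: str
    forks_timeline: str
    forks: List[str]
    config: str
    runner: str
    handler: Optional[str] = None
    # The list of test cases, all sharing the suite's config and fork context.
    test_cases: List[TestCase] = field(default_factory=list)
```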
## Test format philosophy

### Config design

After long discussion, the following types of configured constants were identified:

- Never changing: genesis data.
- Changing, but reliant on the old value: e.g. an epoch time may change, but if you want to do the conversion
  `(genesis data, timestamp) -> epoch number` you end up needing both constants.
- Changing, but kept around during the fork transition: finalization may take a while,
  e.g. an executable has to deal with new deposits and old deposits at the same time. Another example may be economic constants.
- Additional, backwards compatible: new constants are introduced for later phases.
- Changing: there is a very small chance some constant may really be *replaced*.
  In this off-chance, it is likely better to include it as an additional variable,
  and some clients may simply stop supporting the old one if they do not want to sync from genesis.

Based on these types of changes, we model the config as a list of key-value pairs
that only grows with every fork (they may change in development versions of forks, however; git manages this).
With this approach, configurations are backwards compatible (older clients ignore unknown variables) and easy to maintain.
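The "older clients ignore unknown variables" behaviour can be sketched minimally, assuming a hypothetical phase-0-only client; the constant names here are illustrative:

```python
# Constants a hypothetical phase-0-only client knows about (names illustrative).
KNOWN_CONSTANTS = {"SHARD_COUNT", "TARGET_COMMITTEE_SIZE", "GENESIS_SLOT"}

def load_config(preset: dict) -> dict:
    """Keep only the constants this client knows; later-fork additions are ignored."""
    return {k: v for k, v in preset.items() if k in KNOWN_CONSTANTS}

preset = {
    "SHARD_COUNT": 1024,
    "TARGET_COMMITTEE_SIZE": 128,
    "GENESIS_SLOT": 0,
    "PHASE_1_ONLY_CONSTANT": 42,  # unknown to this client; safely ignored
}
config = load_config(preset)
```

Because the preset only ever grows, the same filter keeps working as new forks add constants.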
### Fork config design

There are two types of fork data:

1) Timeline: when does a fork take place?
2) Coverage: which forks are covered by a test?

The first is neat to have as a separate form: we prevent duplication, and can run with different presets
(e.g. a fork timeline for a minimal local test, for a public testnet, or for mainnet).

The second is still somewhat ambiguous: some tests may want to cover multiple forks, and can do so in different ways:

- run one test, transitioning from one fork to the other
- run the same test for both forks
- run a test for every transition from one fork to the other
- more

There is a common factor here, however: the options are exclusive, and give a clear idea of which test suites need to be run to cover testing for a specific fork.
How this list of forks is interpreted is up to the test runner:
state-transition test suites may want to just declare the forks that are covered in the test suite,
whereas shuffling test suites may want to declare a list of forks to test the shuffling algorithm for individually.
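For the "run a test for every transition" option, a runner might enumerate consecutive fork pairs from the suite's declared `forks` list; a sketch, not prescribed by this format:

```python
def transition_pairs(forks):
    """Enumerate consecutive (pre, post) fork pairs from a suite's fork list;
    each pair is one transition a runner could exercise."""
    return list(zip(forks, forks[1:]))

# With three declared forks there are two transitions to run.
pairs = transition_pairs(["phase0", "phase1", "phase2"])
```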
### Test completeness

Tests should be independent of any sync data. If one wants to run a test, the input data should be available from the YAML.
The aim is to provide clients with a well-defined scope of work to run a particular set of test suites.

- Clients that are complete are expected to contribute to testing, seeking better resources to achieve conformance with the spec and with other clients.
- Clients that are not complete in functionality can choose to ignore suites that use certain test runners, or specific handlers of these test runners.
- Clients that are on older versions can test their work based on older releases of the generated tests, and catch up with newer releases when possible.

## Test Suite

```
title: <required, string, short, one line> -- Display name for the test suite
summary: <required, string, average, 1-3 lines> -- Summarizes the test suite
forks_timeline: <required, string, reference to a fork definition file, without extension> -- Used to determine the forking timeline
forks: <required, list of strings> -- Runner decides what to do: run for each fork, or run for all at once, each fork transition, etc.
  - ... <required, string, first the fork name, then the spec version>
config: <required, string, reference to a config file, without extension> -- Used to determine which set of constants to run (possibly compile time) with
runner: <required, string, no spaces, python-like naming format> *MUST be consistent with folder structure*
handler: <optional, string, no spaces, python-like naming format> *MUST be consistent with folder structure*

test_cases: <list, values being maps defining a test case each>
   ...
```
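A minimal sketch of how a runner might validate this header before dispatching to a handler; the field names come from the block above, but the validation logic itself is illustrative:

```python
# Required and optional suite-header fields, per the format above.
REQUIRED = {"title", "summary", "forks_timeline", "forks", "config", "runner", "test_cases"}
OPTIONAL = {"handler"}

def validate_suite(suite: dict) -> list:
    """Return a list of problems; an empty list means the header looks well-formed."""
    problems = [f"missing required field: {k}" for k in sorted(REQUIRED - suite.keys())]
    problems += [f"unknown field: {k}" for k in sorted(suite.keys() - REQUIRED - OPTIONAL)]
    if not isinstance(suite.get("forks", []), list):
        problems.append("forks must be a list")
    return problems
```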
## Config

A configuration is a separate YAML file.
Separation of configuration and tests aims to:

- Prevent duplication of configuration
- Make all tests easy to upgrade (e.g. when a new config constant is introduced)
- Clearly define which constants to use
- Be shareable between clients, for cross-client short- or long-lived testnets
- Minimize the number of different constant permutations to compile as a client

Note: some clients prefer compile-time constants and optimizations.
They should compile once for each configuration, and run the corresponding tests per build target.

The format is described in `configs/constant_presets`.
## Fork-timeline

A fork timeline is (preferably) loaded into a client as a configuration object, as opposed to the constants configuration:

- we do not allocate or optimize any code based on epoch numbers
- when we transition from one fork to the other, it is preferred to stay online
- we may decide on an epoch number for a fork based on external events (e.g. an Eth1 log event),
  so a client should be able to activate a fork dynamically

The format is described in `configs/fork_timelines`.
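Dynamic fork activation based on such a timeline could be sketched as follows; the fork names and epoch numbers are illustrative, not taken from any real timeline file:

```python
def active_fork(timeline: dict, epoch: int) -> str:
    """Given a forks_timeline mapping fork name -> activation epoch,
    return the fork active at the given epoch (the most recently activated one)."""
    activated = [(e, name) for name, e in timeline.items() if e <= epoch]
    return max(activated)[1]

timeline = {"phase0": 0, "phase1": 600}
```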
## Config sourcing

The constants configurations are located in:

```
<specs repo root>/configs/constant_presets/<config name>.yaml
```

And copied by CI for testing purposes to:

```
<tests repo root>/configs/constant_presets/<config name>.yaml
```

The fork timelines are located in:

```
<specs repo root>/configs/fork_timelines/<timeline name>.yaml
```

And copied by CI for testing purposes to:

```
<tests repo root>/configs/fork_timelines/<timeline name>.yaml
```
## Test structure

To prevent parsing of hundreds of different YAML files to test a specific test type,
or, even more specifically, just a handler, tests should be structured in the following nested form:

```
.                          <--- root of eth2.0 tests repository
├── bls                    <--- collection of handlers for a specific test-runner, example runner: "bls"
│   ├── signing            <--- collection of test suites for a specific handler, example handler: "signing". If there is no handler, use a dummy folder "main"
│   │   ├── sign_msg.yml   <--- a suite file, containing a list of test suites
│   │   ...                <--- more suite files (optional)
│   ...                    <--- more handlers
...                        <--- more test types
```