From 7cb163090246e152927bbc5857b4fd8c9ad53bf9 Mon Sep 17 00:00:00 2001 From: Age Manning Date: Thu, 30 Mar 2023 14:09:16 +1100 Subject: [PATCH 01/22] Attnet revamp draft --- specs/phase0/validator.md | 33 ++++++++++++++++++++++++--------- 1 file changed, 24 insertions(+), 9 deletions(-) diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 54b344791..2ed047f0f 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -88,10 +88,11 @@ All terminology, constants, functions, and protocol mechanics defined in the [Ph | Name | Value | Unit | Duration | | - | - | :-: | :-: | -| `TARGET_AGGREGATORS_PER_COMMITTEE` | `2**4` (= 16) | validators | | -| `RANDOM_SUBNETS_PER_VALIDATOR` | `2**0` (= 1) | subnets | | -| `EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION` | `2**8` (= 256) | epochs | ~27 hours | +| `TARGET_AGGREGATORS_PER_COMMITTEE` | `2**4` (= 16) | validators | +| `EPOCHS_PER_SUBNET_SUBSCRIPTION` | `2**8` (= 256) | epochs | ~27 hours | | `ATTESTATION_SUBNET_COUNT` | `64` | The number of attestation subnets used in the gossipsub protocol. | +| `ATTESTATION_SUBNET_EXTRA_BITS` | 0 | The number of extra bits of a NodeId to use when mapping to a subscribed subnet | +| `SUBNETS_PER_NODE` | 2 | The number of long-lived subnets a beacon node should be subscribed to. | ## Containers @@ -606,15 +607,29 @@ def get_aggregate_and_proof_signature(state: BeaconState, ## Phase 0 attestation subnet stability -Because Phase 0 does not have shards and thus does not have Shard Committees, there is no stable backbone to the attestation subnets (`beacon_attestation_{subnet_id}`). To provide this stability, each validator must: +Because Phase 0 does not have shards and thus does not have Shard Committees, there is no stable backbone to the attestation subnets (`beacon_attestation_{subnet_id}`). To provide this stability, each beacon node should: -* Randomly select and remain subscribed to `RANDOM_SUBNETS_PER_VALIDATOR` attestation subnets -* Maintain advertisement of the randomly selected subnets in their node's ENR `attnets` entry by setting the randomly selected `subnet_id` bits to `True` (e.g. `ENR["attnets"][subnet_id] = True`) for all persistent attestation subnets -* Set the lifetime of each random subscription to a random number of epochs between `EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION` and `2 * EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION]`. At the end of life for a subscription, select a new random subnet, update subnet subscriptions, and publish an updated ENR +* Remain subscribed to `SUBNETS_PER_NODE` for `SUBNET_DURATION_IN_EPOCHS` epochs. +* Maintain advertisement of the selected subnets in their node's ENR `attnets` entry by setting the selected `subnet_id` bits to `True` (e.g. `ENR["attnets"][subnet_id] = True`) for all persistent attestation subnets. +* Select these subnets based on their node-id as specified by the following + `compute_subnets(node_id,epoch)` function. -*Note*: Short lived beacon committee assignments should not be added in into the ENR `attnets` entry. +```python +ATTESTATION_SUBNET_PREFIX_BITS = ceil(log2(ATTESTATION_SUBNET_COUNT)) + ATTESTATION_SUBNET_EXTRA_BITS -*Note*: When preparing for a hard fork, a validator must select and subscribe to random subnets of the future fork versioning at least `EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION` epochs in advance of the fork. These new subnets for the fork are maintained in addition to those for the current fork until the fork occurs. 
After the fork occurs, let the subnets from the previous fork reach the end of life with no replacements. +def compute_subnet(node_id, epoch, index): + node_id_prefix = node_id >> (256 - ATTESTATION_SUBNET_PREFIX_BITS) + permutation_seed = hash(uint_to_bytes(epoch // SUBNET_DURATION_IN_EPOCHS)) + permutated_prefix = compute_shuffled_index(node_id_prefix, 1 << ATTESTATION_SUBNET_PREFIX_BITS, permutation_seed) + return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT + +def compute_subnets(node_id, epoch): + return [compute_subnet(node_id, epoch, idx) for idx in range(SUBNETS_PER_NODE)] +``` + +*Note*: Nodes should subscribe to new subnets and remain subscribed to old subnets for at least one epoch. Nodes should pick a random duration to unsubscribe from old subnets to smooth the transition on the exact epoch boundary of which the shuffling changes. + +*Note*: When preparing for a hard fork, a validator must select and subscribe to subnets of the future fork versioning at least `EPOCHS_PER_SUBNET_SUBSCRIPTION` epochs in advance of the fork. These new subnets for the fork are maintained in addition to those for the current fork until the fork occurs. After the fork occurs, let the subnets from the previous fork reach the end of life with no replacements. ## How to avoid slashing From 0dd8db76cd7d21a2853f0aad5995d027daf8c0e3 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Thu, 30 Mar 2023 14:51:41 +0800 Subject: [PATCH 02/22] Make linter happy. Add `SUBNET_DURATION_IN_EPOCHS` definition. --- specs/phase0/validator.md | 27 ++++++++++--------- .../unittests/test_config_invariants.py | 2 +- 2 files changed, 15 insertions(+), 14 deletions(-) diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 2ed047f0f..4df4437d0 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -91,8 +91,10 @@ All terminology, constants, functions, and protocol mechanics defined in the [Ph | `TARGET_AGGREGATORS_PER_COMMITTEE` | `2**4` (= 16) | validators | | `EPOCHS_PER_SUBNET_SUBSCRIPTION` | `2**8` (= 256) | epochs | ~27 hours | | `ATTESTATION_SUBNET_COUNT` | `64` | The number of attestation subnets used in the gossipsub protocol. | -| `ATTESTATION_SUBNET_EXTRA_BITS` | 0 | The number of extra bits of a NodeId to use when mapping to a subscribed subnet | -| `SUBNETS_PER_NODE` | 2 | The number of long-lived subnets a beacon node should be subscribed to. | +| `ATTESTATION_SUBNET_EXTRA_BITS` | `0` | The number of extra bits of a NodeId to use when mapping to a subscribed subnet | +| `SUBNETS_PER_NODE` | `2` | The number of long-lived subnets a beacon node should be subscribed to. | +| `ATTESTATION_SUBNET_PREFIX_BITS` | `(ceillog2(ATTESTATION_SUBNET_COUNT) + ATTESTATION_SUBNET_EXTRA_BITS)` | | +| `SUBNET_DURATION_IN_EPOCHS` | `2` | | ## Containers @@ -611,20 +613,19 @@ Because Phase 0 does not have shards and thus does not have Shard Committees, th * Remain subscribed to `SUBNETS_PER_NODE` for `SUBNET_DURATION_IN_EPOCHS` epochs. * Maintain advertisement of the selected subnets in their node's ENR `attnets` entry by setting the selected `subnet_id` bits to `True` (e.g. `ENR["attnets"][subnet_id] = True`) for all persistent attestation subnets. -* Select these subnets based on their node-id as specified by the following - `compute_subnets(node_id,epoch)` function. +* Select these subnets based on their node-id as specified by the following `compute_subnets(node_id,epoch)` function. 
```python -ATTESTATION_SUBNET_PREFIX_BITS = ceil(log2(ATTESTATION_SUBNET_COUNT)) + ATTESTATION_SUBNET_EXTRA_BITS +def compute_subnet(node_id: int, epoch: Epoch, index: int) -> int: + node_id_prefix = node_id >> (256 - ATTESTATION_SUBNET_PREFIX_BITS) + permutation_seed = hash(uint_to_bytes(epoch // SUBNET_DURATION_IN_EPOCHS)) + permutated_prefix = compute_shuffled_index(node_id_prefix, 1 << ATTESTATION_SUBNET_PREFIX_BITS, permutation_seed) + return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT +``` -def compute_subnet(node_id, epoch, index): - node_id_prefix = node_id >> (256 - ATTESTATION_SUBNET_PREFIX_BITS) - permutation_seed = hash(uint_to_bytes(epoch // SUBNET_DURATION_IN_EPOCHS)) - permutated_prefix = compute_shuffled_index(node_id_prefix, 1 << ATTESTATION_SUBNET_PREFIX_BITS, permutation_seed) - return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT - -def compute_subnets(node_id, epoch): - return [compute_subnet(node_id, epoch, idx) for idx in range(SUBNETS_PER_NODE)] +```python +def compute_subnets(node_id: int, epoch: Epoch) -> Sequence[int]: + return [compute_subnet(node_id, epoch, idx) for idx in range(SUBNETS_PER_NODE)] ``` *Note*: Nodes should subscribe to new subnets and remain subscribed to old subnets for at least one epoch. Nodes should pick a random duration to unsubscribe from old subnets to smooth the transition on the exact epoch boundary of which the shuffling changes. diff --git a/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py b/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py index 9b27d1deb..69aa3eb2a 100644 --- a/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py +++ b/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py @@ -75,7 +75,7 @@ def test_time(spec, state): @with_all_phases @spec_state_test def test_networking(spec, state): - assert spec.RANDOM_SUBNETS_PER_VALIDATOR <= spec.ATTESTATION_SUBNET_COUNT + assert spec.SUBNETS_PER_NODE <= spec.ATTESTATION_SUBNET_COUNT @with_all_phases From a0d03378fabf76cb91897ffe17310050f3996ee2 Mon Sep 17 00:00:00 2001 From: Age Manning Date: Thu, 6 Apr 2023 12:40:55 +1000 Subject: [PATCH 03/22] Correct subnet subscription duration variable --- specs/phase0/validator.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 4df4437d0..1b06aecfb 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -94,7 +94,6 @@ All terminology, constants, functions, and protocol mechanics defined in the [Ph | `ATTESTATION_SUBNET_EXTRA_BITS` | `0` | The number of extra bits of a NodeId to use when mapping to a subscribed subnet | | `SUBNETS_PER_NODE` | `2` | The number of long-lived subnets a beacon node should be subscribed to. | | `ATTESTATION_SUBNET_PREFIX_BITS` | `(ceillog2(ATTESTATION_SUBNET_COUNT) + ATTESTATION_SUBNET_EXTRA_BITS)` | | -| `SUBNET_DURATION_IN_EPOCHS` | `2` | | ## Containers @@ -611,14 +610,14 @@ def get_aggregate_and_proof_signature(state: BeaconState, Because Phase 0 does not have shards and thus does not have Shard Committees, there is no stable backbone to the attestation subnets (`beacon_attestation_{subnet_id}`). To provide this stability, each beacon node should: -* Remain subscribed to `SUBNETS_PER_NODE` for `SUBNET_DURATION_IN_EPOCHS` epochs. +* Remain subscribed to `SUBNETS_PER_NODE` for `EPOCHS_PER_SUBNET_SUBSCRIPTION` epochs. 
* Maintain advertisement of the selected subnets in their node's ENR `attnets` entry by setting the selected `subnet_id` bits to `True` (e.g. `ENR["attnets"][subnet_id] = True`) for all persistent attestation subnets. * Select these subnets based on their node-id as specified by the following `compute_subnets(node_id,epoch)` function. ```python def compute_subnet(node_id: int, epoch: Epoch, index: int) -> int: node_id_prefix = node_id >> (256 - ATTESTATION_SUBNET_PREFIX_BITS) - permutation_seed = hash(uint_to_bytes(epoch // SUBNET_DURATION_IN_EPOCHS)) + permutation_seed = hash(uint_to_bytes(epoch // EPOCHS_PER_SUBNET_SUBSCRIPTION)) permutated_prefix = compute_shuffled_index(node_id_prefix, 1 << ATTESTATION_SUBNET_PREFIX_BITS, permutation_seed) return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT ``` From 6e423f6c4275608515158b5c483c351f0eb61b19 Mon Sep 17 00:00:00 2001 From: Age Manning Date: Wed, 12 Apr 2023 11:29:48 +1000 Subject: [PATCH 04/22] Stagger node rotations --- specs/phase0/validator.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 1b06aecfb..56ca50732 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -612,23 +612,22 @@ Because Phase 0 does not have shards and thus does not have Shard Committees, th * Remain subscribed to `SUBNETS_PER_NODE` for `EPOCHS_PER_SUBNET_SUBSCRIPTION` epochs. * Maintain advertisement of the selected subnets in their node's ENR `attnets` entry by setting the selected `subnet_id` bits to `True` (e.g. `ENR["attnets"][subnet_id] = True`) for all persistent attestation subnets. -* Select these subnets based on their node-id as specified by the following `compute_subnets(node_id,epoch)` function. +* Select these subnets based on their node-id as specified by the following `compute_subscribed_subnets(node_id,epoch)` function. ```python -def compute_subnet(node_id: int, epoch: Epoch, index: int) -> int: +def compute_subscribed_subnet(node_id: int, epoch: Epoch, index: int) -> int: node_id_prefix = node_id >> (256 - ATTESTATION_SUBNET_PREFIX_BITS) - permutation_seed = hash(uint_to_bytes(epoch // EPOCHS_PER_SUBNET_SUBSCRIPTION)) + node_offset = node_id % EPOCHS_PER_SUBNET_SUBSCRIPTION + permutation_seed = hash(uint_to_bytes((epoch + node_offset) // EPOCHS_PER_SUBNET_SUBSCRIPTION)) permutated_prefix = compute_shuffled_index(node_id_prefix, 1 << ATTESTATION_SUBNET_PREFIX_BITS, permutation_seed) return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT ``` ```python -def compute_subnets(node_id: int, epoch: Epoch) -> Sequence[int]: - return [compute_subnet(node_id, epoch, idx) for idx in range(SUBNETS_PER_NODE)] +def compute_subscribed_subnets(node_id: int, epoch: Epoch) -> Sequence[int]: + return [compute_subscribed_subnet(node_id, epoch, idx) for idx in range(SUBNETS_PER_NODE)] ``` -*Note*: Nodes should subscribe to new subnets and remain subscribed to old subnets for at least one epoch. Nodes should pick a random duration to unsubscribe from old subnets to smooth the transition on the exact epoch boundary of which the shuffling changes. - *Note*: When preparing for a hard fork, a validator must select and subscribe to subnets of the future fork versioning at least `EPOCHS_PER_SUBNET_SUBSCRIPTION` epochs in advance of the fork. These new subnets for the fork are maintained in addition to those for the current fork until the fork occurs. 
After the fork occurs, let the subnets from the previous fork reach the end of life with no replacements. ## How to avoid slashing From 745d529598632029fee9820590b033b7e0e23935 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Tue, 25 Apr 2023 12:57:42 +0800 Subject: [PATCH 05/22] Add `compute_subscribed_subnets` unittests and fix typing errors --- specs/phase0/validator.md | 10 +++-- .../validator/test_validator_unittest.py | 39 ++++++++++++++++++- 2 files changed, 45 insertions(+), 4 deletions(-) diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 56ca50732..5266fec7a 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -616,10 +616,14 @@ Because Phase 0 does not have shards and thus does not have Shard Committees, th ```python def compute_subscribed_subnet(node_id: int, epoch: Epoch, index: int) -> int: - node_id_prefix = node_id >> (256 - ATTESTATION_SUBNET_PREFIX_BITS) + node_id_prefix = node_id >> (256 - int(ATTESTATION_SUBNET_PREFIX_BITS)) node_offset = node_id % EPOCHS_PER_SUBNET_SUBSCRIPTION - permutation_seed = hash(uint_to_bytes((epoch + node_offset) // EPOCHS_PER_SUBNET_SUBSCRIPTION)) - permutated_prefix = compute_shuffled_index(node_id_prefix, 1 << ATTESTATION_SUBNET_PREFIX_BITS, permutation_seed) + permutation_seed = hash(uint_to_bytes(uint64((epoch + node_offset) // EPOCHS_PER_SUBNET_SUBSCRIPTION))) + permutated_prefix = compute_shuffled_index( + node_id_prefix, + 1 << int(ATTESTATION_SUBNET_PREFIX_BITS), + permutation_seed, + ) return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT ``` diff --git a/tests/core/pyspec/eth2spec/test/phase0/unittests/validator/test_validator_unittest.py b/tests/core/pyspec/eth2spec/test/phase0/unittests/validator/test_validator_unittest.py index cf7ef392f..177748eac 100644 --- a/tests/core/pyspec/eth2spec/test/phase0/unittests/validator/test_validator_unittest.py +++ b/tests/core/pyspec/eth2spec/test/phase0/unittests/validator/test_validator_unittest.py @@ -1,6 +1,12 @@ +import random + from eth2spec.test.context import ( + single_phase, spec_state_test, - always_bls, with_phases, with_all_phases, + spec_test, + always_bls, + with_phases, + with_all_phases, ) from eth2spec.test.helpers.constants import PHASE0 from eth2spec.test.helpers.attestations import build_attestation_data, get_valid_attestation @@ -476,3 +482,34 @@ def test_get_aggregate_and_proof_signature(spec, state): privkey=privkey, pubkey=pubkey, ) + + +def run_compute_subscribed_subnets_arguments(spec, rng=random.Random(1111)): + node_id = rng.randint(0, 2**40 - 1) # try VALIDATOR_REGISTRY_LIMIT + epoch = rng.randint(0, 2**64 - 1) + subnets = spec.compute_subscribed_subnets(node_id, epoch) + assert len(subnets) == spec.SUBNETS_PER_NODE + + +@with_all_phases +@spec_test +@single_phase +def test_compute_subscribed_subnets_random_1(spec): + rng = random.Random(1111) + run_compute_subscribed_subnets_arguments(spec, rng) + + +@with_all_phases +@spec_test +@single_phase +def test_compute_subscribed_subnets_random_2(spec): + rng = random.Random(2222) + run_compute_subscribed_subnets_arguments(spec, rng) + + +@with_all_phases +@spec_test +@single_phase +def test_compute_subscribed_subnets_random_3(spec): + rng = random.Random(3333) + run_compute_subscribed_subnets_arguments(spec, rng) From e31fcbd6a9f795100ec6f1de434ffd4555a0f0e2 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Fri, 28 Apr 2023 23:09:10 +0800 Subject: [PATCH 06/22] Add `GetPayloadResponse` for `get_payload` API --- setup.py | 2 +- specs/_features/eip4788/validator.md | 
4 +-- specs/bellatrix/validator.md | 19 ++++++++--- specs/capella/validator.md | 14 ++++++-- specs/deneb/validator.md | 50 ++++++++++++++++++++++------ tests/formats/fork_choice/README.md | 4 +-- 6 files changed, 70 insertions(+), 23 deletions(-) diff --git a/setup.py b/setup.py index fc3acb806..a3e94642e 100644 --- a/setup.py +++ b/setup.py @@ -588,7 +588,7 @@ class NoopExecutionEngine(ExecutionEngine): payload_attributes: Optional[PayloadAttributes]) -> Optional[PayloadId]: pass - def get_payload(self: ExecutionEngine, payload_id: PayloadId) -> ExecutionPayload: + def get_payload(self: ExecutionEngine, payload_id: PayloadId) -> GetPayloadResponse: # pylint: disable=unused-argument raise NotImplementedError("no default block production") diff --git a/specs/_features/eip4788/validator.md b/specs/_features/eip4788/validator.md index 421e297ce..3140cdb21 100644 --- a/specs/_features/eip4788/validator.md +++ b/specs/_features/eip4788/validator.md @@ -13,7 +13,7 @@ - [Helpers](#helpers) - [Protocols](#protocols) - [`ExecutionEngine`](#executionengine) - - [`get_payload`](#get_payload) + - [Modified `get_payload`](#modified-get_payload) - [Beacon chain responsibilities](#beacon-chain-responsibilities) - [Block proposal](#block-proposal) - [Constructing the `BeaconBlockBody`](#constructing-the-beaconblockbody) @@ -40,7 +40,7 @@ Please see related Beacon Chain doc before continuing and use them as a referenc ### `ExecutionEngine` -#### `get_payload` +#### Modified `get_payload` `get_payload` returns the upgraded EIP-4788 `ExecutionPayload` type. diff --git a/specs/bellatrix/validator.md b/specs/bellatrix/validator.md index a176d7534..dea763cde 100644 --- a/specs/bellatrix/validator.md +++ b/specs/bellatrix/validator.md @@ -9,6 +9,7 @@ - [Introduction](#introduction) - [Prerequisites](#prerequisites) - [Helpers](#helpers) + - [`GetPayloadResponse`](#getpayloadresponse) - [`get_pow_block_at_terminal_total_difficulty`](#get_pow_block_at_terminal_total_difficulty) - [`get_terminal_pow_block`](#get_terminal_pow_block) - [Protocols](#protocols) @@ -36,6 +37,14 @@ Please see related Beacon Chain doc before continuing and use them as a referenc ## Helpers +### `GetPayloadResponse` + +```python +@dataclass +class GetPayloadResponse(object): + execution_payload: ExecutionPayload +``` + ### `get_pow_block_at_terminal_total_difficulty` ```python @@ -83,13 +92,13 @@ The Engine API may be used to implement it with an external execution engine. #### `get_payload` -Given the `payload_id`, `get_payload` returns the most recent version of the execution payload that -has been built since the corresponding call to `notify_forkchoice_updated` method. +Given the `payload_id`, `get_payload` returns `GetPayloadResponse` with the most recent version of +the execution payload that has been built since the corresponding call to `notify_forkchoice_updated` method. ```python -def get_payload(self: ExecutionEngine, payload_id: PayloadId) -> ExecutionPayload: +def get_payload(self: ExecutionEngine, payload_id: PayloadId) -> GetPayloadResponse: """ - Return ``execution_payload`` object. + Return ``GetPayloadResponse`` object. """ ... 
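    # Illustrative note (an assumption, not part of the spec): clients
    # typically back this abstract method with the Engine API's
    # engine_getPayload call, whose response carries the execution payload
    # that GetPayloadResponse wraps here.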
``` @@ -162,7 +171,7 @@ def get_execution_payload(payload_id: Optional[PayloadId], execution_engine: Exe # Pre-merge, empty payload return ExecutionPayload() else: - return execution_engine.get_payload(payload_id) + return execution_engine.get_payload(payload_id).execution_payload ``` *Note*: It is recommended for a validator to call `prepare_execution_payload` as soon as input parameters become known, diff --git a/specs/capella/validator.md b/specs/capella/validator.md index 644ee476f..29cff8c61 100644 --- a/specs/capella/validator.md +++ b/specs/capella/validator.md @@ -11,9 +11,10 @@ - [Introduction](#introduction) - [Prerequisites](#prerequisites) - [Helpers](#helpers) + - [Modified `GetPayloadResponse`](#modified-getpayloadresponse) - [Protocols](#protocols) - [`ExecutionEngine`](#executionengine) - - [`get_payload`](#get_payload) + - [Modified `get_payload`](#modified-get_payload) - [Beacon chain responsibilities](#beacon-chain-responsibilities) - [Block proposal](#block-proposal) - [Constructing the `BeaconBlockBody`](#constructing-the-beaconblockbody) @@ -39,11 +40,20 @@ Please see related Beacon Chain doc before continuing and use them as a referenc ## Helpers +### Modified `GetPayloadResponse` + +```python +@dataclass +class GetPayloadResponse(object): + execution_payload: ExecutionPayload + block_value: uint256 +``` + ## Protocols ### `ExecutionEngine` -#### `get_payload` +#### Modified `get_payload` `get_payload` returns the upgraded Capella `ExecutionPayload` type. diff --git a/specs/deneb/validator.md b/specs/deneb/validator.md index b627de023..6562c91dd 100644 --- a/specs/deneb/validator.md +++ b/specs/deneb/validator.md @@ -11,7 +11,11 @@ - [Introduction](#introduction) - [Prerequisites](#prerequisites) - [Helpers](#helpers) - - [`get_blobs_and_kzg_commitments`](#get_blobs_and_kzg_commitments) + - [`BlobsBundle`](#blobsbundle) + - [Modified `GetPayloadResponse`](#modified-getpayloadresponse) +- [Protocol](#protocol) + - [`ExecutionEngine`](#executionengine) + - [Modified `get_payload`](#modified-get_payload) - [Beacon chain responsibilities](#beacon-chain-responsibilities) - [Block and sidecar proposal](#block-and-sidecar-proposal) - [Constructing the `BeaconBlockBody`](#constructing-the-beaconblockbody) @@ -36,17 +40,40 @@ Please see related Beacon Chain doc before continuing and use them as a referenc ## Helpers -### `get_blobs_and_kzg_commitments` - -The interface to retrieve blobs and corresponding kzg commitments. - -Note: This API is *unstable*. `get_blobs_and_kzg_commitments` and `get_payload` may be unified. -Implementers may also retrieve blobs individually per transaction. +### `BlobsBundle` ```python -def get_blobs_and_kzg_commitments( - payload_id: PayloadId -) -> Tuple[Sequence[Blob], Sequence[KZGCommitment], Sequence[KZGProof]]: +@dataclass +class BlobsBundle(object): + commitments: Sequence[KZGCommitment] + proofs: Sequence[KZGProof] + blobs: Sequence[Blob] +``` + +### Modified `GetPayloadResponse` + +```python +@dataclass +class GetPayloadResponse(object): + execution_payload: ExecutionPayload + block_value: uint256 + blobs_bundle: BlobsBundle +``` + +## Protocol + +### `ExecutionEngine` + +#### Modified `get_payload` + +Given the `payload_id`, `get_payload` returns the most recent version of the execution payload that +has been built since the corresponding call to `notify_forkchoice_updated` method. 
+ +```python +def get_payload(self: ExecutionEngine, payload_id: PayloadId) -> GetPayloadResponse: + """ + Return ExecutionPayload, uint256, BlobsBundle objects. + """ # pylint: disable=unused-argument ... ``` @@ -62,7 +89,8 @@ All validator responsibilities remain unchanged other than those noted below. ##### Blob KZG commitments 1. After retrieving the execution payload from the execution engine as specified in Capella, -use the `payload_id` to retrieve `blobs` and `blob_kzg_commitments` via `get_blobs_and_kzg_commitments(payload_id)`. +use the `payload_id` to retrieve `blobs`, `blob_kzg_commitments`, and `blob_kzg_proofs` +via `get_payload(payload_id).blobs_bundle`. 2. Validate `blobs` and `blob_kzg_commitments`: ```python diff --git a/tests/formats/fork_choice/README.md b/tests/formats/fork_choice/README.md index c94b95933..3b28837de 100644 --- a/tests/formats/fork_choice/README.md +++ b/tests/formats/fork_choice/README.md @@ -114,8 +114,8 @@ Optional step for optimistic sync tests. This step sets the [`payloadStatus`](https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#PayloadStatusV1) value that Execution Layer client mock returns in responses to the following Engine API calls: -* [`engine_newPayloadV1(payload)`](https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#engine_newpayloadv1) if `payload.blockHash == payload_info.block_hash` -* [`engine_forkchoiceUpdatedV1(forkchoiceState, ...)`](https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#engine_forkchoiceupdatedv1) if `forkchoiceState.headBlockHash == payload_info.block_hash` +* [`engine_newPayloadV1(payload)`](https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#engine_newpayloadv1) if `payload.blockHash == payload_info.block_hash` +* [`engine_forkchoiceUpdatedV1(forkchoiceState, ...)`](https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#engine_forkchoiceupdatedv1) if `forkchoiceState.headBlockHash == payload_info.block_hash` *Note:* Status of a payload must be *initialized* via `on_payload_info` before the corresponding `on_block` execution step. From 79b8a9abecded921179d2b2854d8dc7b8c570d5d Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Thu, 4 May 2023 18:09:01 +0800 Subject: [PATCH 07/22] Apply suggestions from code review --- specs/phase0/validator.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 5266fec7a..92eadde5f 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -612,7 +612,7 @@ Because Phase 0 does not have shards and thus does not have Shard Committees, th * Remain subscribed to `SUBNETS_PER_NODE` for `EPOCHS_PER_SUBNET_SUBSCRIPTION` epochs. * Maintain advertisement of the selected subnets in their node's ENR `attnets` entry by setting the selected `subnet_id` bits to `True` (e.g. `ENR["attnets"][subnet_id] = True`) for all persistent attestation subnets. -* Select these subnets based on their node-id as specified by the following `compute_subscribed_subnets(node_id,epoch)` function. +* Select these subnets based on their node-id as specified by the following `compute_subscribed_subnets(node_id, epoch)` function. 
```python def compute_subscribed_subnet(node_id: int, epoch: Epoch, index: int) -> int: @@ -629,7 +629,7 @@ def compute_subscribed_subnet(node_id: int, epoch: Epoch, index: int) -> int: ```python def compute_subscribed_subnets(node_id: int, epoch: Epoch) -> Sequence[int]: - return [compute_subscribed_subnet(node_id, epoch, idx) for idx in range(SUBNETS_PER_NODE)] + return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)] ``` *Note*: When preparing for a hard fork, a validator must select and subscribe to subnets of the future fork versioning at least `EPOCHS_PER_SUBNET_SUBSCRIPTION` epochs in advance of the fork. These new subnets for the fork are maintained in addition to those for the current fork until the fork occurs. After the fork occurs, let the subnets from the previous fork reach the end of life with no replacements. From 5cb2733ed5271c582fb2235367558ff8950dd7a2 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Thu, 4 May 2023 18:50:13 +0800 Subject: [PATCH 08/22] Add custom types `NodeID` and `SubnetID` and constant `NODE_ID_BITS` --- setup.py | 4 +-- specs/phase0/validator.md | 25 ++++++++++++++----- .../unittests/test_config_invariants.py | 2 ++ 3 files changed, 23 insertions(+), 8 deletions(-) diff --git a/setup.py b/setup.py index 52bad2b71..f1130eb58 100644 --- a/setup.py +++ b/setup.py @@ -383,7 +383,7 @@ from typing import ( from eth2spec.utils.ssz.ssz_impl import hash_tree_root, copy, uint_to_bytes from eth2spec.utils.ssz.ssz_typing import ( - View, boolean, Container, List, Vector, uint8, uint32, uint64, + View, boolean, Container, List, Vector, uint8, uint32, uint64, uint256, Bytes1, Bytes4, Bytes32, Bytes48, Bytes96, Bitlist) from eth2spec.utils.ssz.ssz_typing import Bitvector # noqa: F401 from eth2spec.utils import bls @@ -551,7 +551,7 @@ class BellatrixSpecBuilder(AltairSpecBuilder): return super().imports(preset_name) + f''' from typing import Protocol from eth2spec.altair import {preset_name} as altair -from eth2spec.utils.ssz.ssz_typing import Bytes8, Bytes20, ByteList, ByteVector, uint256 +from eth2spec.utils.ssz.ssz_typing import Bytes8, Bytes20, ByteList, ByteVector ''' @classmethod diff --git a/specs/phase0/validator.md b/specs/phase0/validator.md index 92eadde5f..604350ed8 100644 --- a/specs/phase0/validator.md +++ b/specs/phase0/validator.md @@ -10,6 +10,7 @@ This is an accompanying document to [Phase 0 -- The Beacon Chain](./beacon-chain - [Introduction](#introduction) - [Prerequisites](#prerequisites) +- [Custom types](#custom-types) - [Constants](#constants) - [Misc](#misc) - [Containers](#containers) @@ -82,6 +83,15 @@ A validator is an entity that participates in the consensus of the Ethereum proo All terminology, constants, functions, and protocol mechanics defined in the [Phase 0 -- The Beacon Chain](./beacon-chain.md) and [Phase 0 -- Deposit Contract](./deposit-contract.md) doc are requisite for this document and used throughout. Please see the Phase 0 doc before continuing and use as a reference throughout. 
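For orientation: the `node_id` consumed by `compute_subscribed_subnets` is the node's discv5 identifier, which discv5 derives as the keccak-256 hash of the node's 64-byte uncompressed secp256k1 public key. A minimal sketch of that derivation, assuming `eth_utils.keccak` is available; `node_id_from_pubkey` is an illustrative helper, not a spec function:

```python
from eth_utils import keccak


def node_id_from_pubkey(uncompressed_pubkey: bytes) -> int:
    # discv5 node ID: keccak256 over the 64-byte uncompressed secp256k1
    # public key (x || y, without the 0x04 prefix), read as big-endian.
    assert len(uncompressed_pubkey) == 64
    return int.from_bytes(keccak(uncompressed_pubkey), 'big')


# Hypothetical usage with the function defined in this patch series:
# subnets = compute_subscribed_subnets(node_id_from_pubkey(pubkey_bytes), epoch)
```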
+## Custom types + +We define the following Python custom types for type hinting and readability: + +| Name | SSZ equivalent | Description | +| - | - | - | +| `NodeID` | `uint256` | node identifier | +| `SubnetID` | `uint64` | subnet identifier | + ## Constants ### Misc @@ -94,6 +104,7 @@ All terminology, constants, functions, and protocol mechanics defined in the [Ph | `ATTESTATION_SUBNET_EXTRA_BITS` | `0` | The number of extra bits of a NodeId to use when mapping to a subscribed subnet | | `SUBNETS_PER_NODE` | `2` | The number of long-lived subnets a beacon node should be subscribed to. | | `ATTESTATION_SUBNET_PREFIX_BITS` | `(ceillog2(ATTESTATION_SUBNET_COUNT) + ATTESTATION_SUBNET_EXTRA_BITS)` | | +| `NODE_ID_BITS` | `256` | The bit length of uint256 is 256 | ## Containers @@ -515,7 +526,9 @@ The `subnet_id` for the `attestation` is calculated with: - Let `subnet_id = compute_subnet_for_attestation(committees_per_slot, attestation.data.slot, attestation.data.index)`. ```python -def compute_subnet_for_attestation(committees_per_slot: uint64, slot: Slot, committee_index: CommitteeIndex) -> uint64: +def compute_subnet_for_attestation(committees_per_slot: uint64, + slot: Slot, + committee_index: CommitteeIndex) -> SubnetID: """ Compute the correct subnet for an attestation for Phase 0. Note, this mimics expected future behavior where attestations will be mapped to their shard subnet. @@ -523,7 +536,7 @@ def compute_subnet_for_attestation(committees_per_slot: uint64, slot: Slot, comm slots_since_epoch_start = uint64(slot % SLOTS_PER_EPOCH) committees_since_epoch_start = committees_per_slot * slots_since_epoch_start - return uint64((committees_since_epoch_start + committee_index) % ATTESTATION_SUBNET_COUNT) + return SubnetID((committees_since_epoch_start + committee_index) % ATTESTATION_SUBNET_COUNT) ``` ### Attestation aggregation @@ -615,8 +628,8 @@ Because Phase 0 does not have shards and thus does not have Shard Committees, th * Select these subnets based on their node-id as specified by the following `compute_subscribed_subnets(node_id, epoch)` function. 
```python -def compute_subscribed_subnet(node_id: int, epoch: Epoch, index: int) -> int: - node_id_prefix = node_id >> (256 - int(ATTESTATION_SUBNET_PREFIX_BITS)) +def compute_subscribed_subnet(node_id: NodeID, epoch: Epoch, index: int) -> SubnetID: + node_id_prefix = node_id >> (NODE_ID_BITS - int(ATTESTATION_SUBNET_PREFIX_BITS)) node_offset = node_id % EPOCHS_PER_SUBNET_SUBSCRIPTION permutation_seed = hash(uint_to_bytes(uint64((epoch + node_offset) // EPOCHS_PER_SUBNET_SUBSCRIPTION))) permutated_prefix = compute_shuffled_index( @@ -624,11 +637,11 @@ def compute_subscribed_subnet(node_id: int, epoch: Epoch, index: int) -> int: 1 << int(ATTESTATION_SUBNET_PREFIX_BITS), permutation_seed, ) - return (permutated_prefix + index) % ATTESTATION_SUBNET_COUNT + return SubnetID((permutated_prefix + index) % ATTESTATION_SUBNET_COUNT) ``` ```python -def compute_subscribed_subnets(node_id: int, epoch: Epoch) -> Sequence[int]: +def compute_subscribed_subnets(node_id: NodeID, epoch: Epoch) -> Sequence[SubnetID]: return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)] ``` diff --git a/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py b/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py index 69aa3eb2a..b0fd06374 100644 --- a/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py +++ b/tests/core/pyspec/eth2spec/test/phase0/unittests/test_config_invariants.py @@ -76,6 +76,8 @@ def test_time(spec, state): @spec_state_test def test_networking(spec, state): assert spec.SUBNETS_PER_NODE <= spec.ATTESTATION_SUBNET_COUNT + node_id_length = spec.NodeID(1).type_byte_length() # in bytes + assert node_id_length * 8 == spec.NODE_ID_BITS # in bits @with_all_phases From 9f5bb03cb4fea394a0d64963c3e3b820a5da5813 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Thu, 10 Nov 2022 16:33:50 -0500 Subject: [PATCH 09/22] Refactor `run_generator` --- .../gen_helpers/gen_base/gen_runner.py | 296 +++++++++++------- 1 file changed, 175 insertions(+), 121 deletions(-) diff --git a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py index 0a831a592..5e6ea93d3 100644 --- a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py +++ b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py @@ -1,4 +1,7 @@ -from eth_utils import encode_hex +from dataclasses import ( + dataclass, + field, +) import os import time import shutil @@ -15,6 +18,8 @@ from ruamel.yaml import ( from filelock import FileLock from snappy import compress +from eth_utils import encode_hex + from eth2spec.test import context from eth2spec.test.exceptions import SkippedTest @@ -28,6 +33,14 @@ context.is_pytest = False TIME_THRESHOLD_TO_PRINT = 1.0 # seconds +@dataclass +class Diagnostics(object): + collected_test_count: int = 0 + generated_test_count: int = 0 + skipped_test_count: int = 0 + test_identifiers: list = field(default_factory=list) + + def validate_output_dir(path_str): path = Path(path_str) @@ -40,6 +53,47 @@ def validate_output_dir(path_str): return path +def get_test_case_dir(test_case, output_dir): + return ( + Path(output_dir) / Path(test_case.preset_name) / Path(test_case.fork_name) + / Path(test_case.runner_name) / Path(test_case.handler_name) + / Path(test_case.suite_name) / Path(test_case.case_name) + ) + + +def get_test_identifier(test_case): + return "::".join([ + test_case.preset_name, + test_case.fork_name, + test_case.runner_name, + 
test_case.handler_name, + test_case.suite_name, + test_case.case_name + ]) + + +def get_incomplete_tag_file(case_dir): + return case_dir / "INCOMPLETE" + + +def should_skip_case_dir(case_dir, is_force, diagnostics_obj): + is_skip = False + incomplete_tag_file = get_incomplete_tag_file(case_dir) + + if case_dir.exists(): + if not is_force and not incomplete_tag_file.exists(): + diagnostics_obj.skipped_test_count += 1 + print(f'Skipping already existing test: {case_dir}') + is_skip = True + else: + print(f'Warning, output directory {case_dir} already exist,' + ' old files will be deleted and it will generate test vector files with the latest version') + # Clear the existing case_dir folder + shutil.rmtree(case_dir) + + return is_skip, diagnostics_obj + + def run_generator(generator_name, test_providers: Iterable[TestProvider]): """ Implementation for a general test generator. @@ -129,10 +183,8 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): print(f"Filtering test-generator runs to only include presets: {', '.join(presets)}") collect_only = args.collect_only - collected_test_count = 0 - generated_test_count = 0 - skipped_test_count = 0 - test_identifiers = [] + + diagnostics_obj = Diagnostics() provider_start = time.time() for tprov in test_providers: @@ -145,146 +197,114 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): if len(presets) != 0 and test_case.preset_name not in presets: continue - case_dir = ( - Path(output_dir) / Path(test_case.preset_name) / Path(test_case.fork_name) - / Path(test_case.runner_name) / Path(test_case.handler_name) - / Path(test_case.suite_name) / Path(test_case.case_name) - ) - collected_test_count += 1 + case_dir = get_test_case_dir(test_case, output_dir) print(f"Collected test at: {case_dir}") + diagnostics_obj.collected_test_count += 1 - incomplete_tag_file = case_dir / "INCOMPLETE" + is_skip, diagnostics_obj = should_skip_case_dir(case_dir, args.force, diagnostics_obj) + if is_skip: + continue - if case_dir.exists(): - if not args.force and not incomplete_tag_file.exists(): - skipped_test_count += 1 - print(f'Skipping already existing test: {case_dir}') - continue - else: - print(f'Warning, output directory {case_dir} already exist,' - f' old files will be deleted and it will generate test vector files with the latest version') - # Clear the existing case_dir folder - shutil.rmtree(case_dir) - - print(f'Generating test: {case_dir}') - test_start = time.time() - - written_part = False - - # Add `INCOMPLETE` tag file to indicate that the test generation has not completed. - case_dir.mkdir(parents=True, exist_ok=True) - with incomplete_tag_file.open("w") as f: - f.write("\n") - - try: - def output_part(out_kind: str, name: str, fn: Callable[[Path, ], None]): - # make sure the test case directory is created before any test part is written. 
- case_dir.mkdir(parents=True, exist_ok=True) - try: - fn(case_dir) - except IOError as e: - error_message = ( - f'[Error] error when dumping test "{case_dir}", part "{name}", kind "{out_kind}": {e}' - ) - # Write to error log file - with log_file.open("a+") as f: - f.write(error_message) - traceback.print_exc(file=f) - f.write('\n') - - sys.exit(error_message) - - meta = dict() - - try: - for (name, out_kind, data) in test_case.case_fn(): - written_part = True - if out_kind == "meta": - meta[name] = data - elif out_kind == "cfg": - output_part(out_kind, name, dump_yaml_fn(data, name, file_mode, cfg_yaml)) - elif out_kind == "data": - output_part(out_kind, name, dump_yaml_fn(data, name, file_mode, yaml)) - elif out_kind == "ssz": - output_part(out_kind, name, dump_ssz_fn(data, name, file_mode)) - else: - assert False # Unknown kind - except SkippedTest as e: - print(e) - skipped_test_count += 1 - shutil.rmtree(case_dir) - continue - - # Once all meta data is collected (if any), write it to a meta data file. - if len(meta) != 0: - written_part = True - output_part("data", "meta", dump_yaml_fn(meta, "meta", file_mode, yaml)) - - if not written_part: - print(f"test case {case_dir} did not produce any test case parts") - except Exception as e: - error_message = f"[ERROR] failed to generate vector(s) for test {case_dir}: {e}" - # Write to error log file - with log_file.open("a+") as f: - f.write(error_message) - traceback.print_exc(file=f) - f.write('\n') - traceback.print_exc() - else: - # If no written_part, the only file was incomplete_tag_file. Clear the existing case_dir folder. - if not written_part: - shutil.rmtree(case_dir) - else: - generated_test_count += 1 - test_identifier = "::".join([ - test_case.preset_name, - test_case.fork_name, - test_case.runner_name, - test_case.handler_name, - test_case.suite_name, - test_case.case_name - ]) - test_identifiers.append(test_identifier) - # Only remove `INCOMPLETE` tag file - os.remove(incomplete_tag_file) - test_end = time.time() - span = round(test_end - test_start, 2) - if span > TIME_THRESHOLD_TO_PRINT: - print(f' - generated in {span} seconds') + # generate test vector + is_skip, diagnostics_obj = generate_test_vector_and_diagnose( + test_case, case_dir, log_file, file_mode, + cfg_yaml, yaml, diagnostics_obj, + ) + if is_skip: + continue provider_end = time.time() span = round(provider_end - provider_start, 2) if collect_only: - print(f"Collected {collected_test_count} tests in total") + print(f"Collected {diagnostics_obj.collected_test_count} tests in total") else: - summary_message = f"completed generation of {generator_name} with {generated_test_count} tests" - summary_message += f" ({skipped_test_count} skipped tests)" + summary_message = f"completed generation of {generator_name} with {diagnostics_obj.generated_test_count} tests" + summary_message += f" ({diagnostics_obj.skipped_test_count} skipped tests)" if span > TIME_THRESHOLD_TO_PRINT: summary_message += f" in {span} seconds" print(summary_message) - diagnostics = { - "collected_test_count": collected_test_count, - "generated_test_count": generated_test_count, - "skipped_test_count": skipped_test_count, - "test_identifiers": test_identifiers, + + diagnostics_output = { + "collected_test_count": diagnostics_obj.collected_test_count, + "generated_test_count": diagnostics_obj.generated_test_count, + "skipped_test_count": diagnostics_obj.skipped_test_count, + "test_identifiers": diagnostics_obj.test_identifiers, "durations": [f"{span} seconds"], } - diagnostics_path = 
Path(os.path.join(output_dir, "diagnostics.json")) - diagnostics_lock = FileLock(os.path.join(output_dir, "diagnostics.json.lock")) + diagnostics_path = Path(os.path.join(output_dir, "diagnostics_obj.json")) + diagnostics_lock = FileLock(os.path.join(output_dir, "diagnostics_obj.json.lock")) with diagnostics_lock: diagnostics_path.touch(exist_ok=True) if os.path.getsize(diagnostics_path) == 0: with open(diagnostics_path, "w+") as f: - json.dump(diagnostics, f) + json.dump(diagnostics_output, f) else: with open(diagnostics_path, "r+") as f: existing_diagnostics = json.load(f) - for k, v in diagnostics.items(): + for k, v in diagnostics_output.items(): existing_diagnostics[k] += v with open(diagnostics_path, "w+") as f: json.dump(existing_diagnostics, f) - print(f"wrote diagnostics to {diagnostics_path}") + print(f"wrote diagnostics_obj to {diagnostics_path}") + + +def generate_test_vector_and_diagnose(test_case, case_dir, log_file, file_mode, cfg_yaml, yaml, diagnostics_obj): + is_skip = False + + print(f'Generating test: {case_dir}') + test_start = time.time() + + written_part = False + + # Add `INCOMPLETE` tag file to indicate that the test generation has not completed. + incomplete_tag_file = get_incomplete_tag_file(case_dir) + case_dir.mkdir(parents=True, exist_ok=True) + with incomplete_tag_file.open("w") as f: + f.write("\n") + + try: + meta = dict() + try: + written_part, meta = execute_test(test_case, case_dir, meta, log_file, file_mode, cfg_yaml, yaml) + except SkippedTest as e: + print(e) + diagnostics_obj.skipped_test_count += 1 + shutil.rmtree(case_dir) + is_skip = True + return is_skip, diagnostics_obj + + # Once all meta data is collected (if any), write it to a meta data file. + if len(meta) != 0: + written_part = True + output_part(case_dir, log_file, "data", "meta", dump_yaml_fn(meta, "meta", file_mode, yaml)) + + if not written_part: + print(f"test case {case_dir} did not produce any test case parts") + except Exception as e: + error_message = f"[ERROR] failed to generate vector(s) for test {case_dir}: {e}" + # Write to error log file + with log_file.open("a+") as f: + f.write(error_message) + traceback.print_exc(file=f) + f.write('\n') + traceback.print_exc() + else: + # If no written_part, the only file was incomplete_tag_file. Clear the existing case_dir folder. + if not written_part: + shutil.rmtree(case_dir) + else: + diagnostics_obj.generated_test_count += 1 + test_identifier = get_test_identifier(test_case) + diagnostics_obj.test_identifiers.append(test_identifier) + # Only remove `INCOMPLETE` tag file + os.remove(incomplete_tag_file) + test_end = time.time() + span = round(test_end - test_start, 2) + if span > TIME_THRESHOLD_TO_PRINT: + print(f' - generated in {span} seconds') + + return is_skip, diagnostics_obj def dump_yaml_fn(data: Any, name: str, file_mode: str, yaml_encoder: YAML): @@ -295,6 +315,40 @@ def dump_yaml_fn(data: Any, name: str, file_mode: str, yaml_encoder: YAML): return dump +def output_part(case_dir, log_file, out_kind: str, name: str, fn: Callable[[Path, ], None]): + # make sure the test case directory is created before any test part is written. 
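+    # exist_ok=True keeps the call idempotent: the directory already exists
+    # whenever a later part of the same test case is written.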
+ case_dir.mkdir(parents=True, exist_ok=True) + try: + fn(case_dir) + except IOError as e: + error_message = f'[Error] error when dumping test "{case_dir}", part "{name}", kind "{out_kind}": {e}' + # Write to error log file + with log_file.open("a+") as f: + f.write(error_message) + traceback.print_exc(file=f) + f.write('\n') + + sys.exit(error_message) + + +def execute_test(test_case, case_dir, meta, log_file, file_mode, cfg_yaml, yaml): + result = test_case.case_fn() + for (name, out_kind, data) in result: + written_part = True + if out_kind == "meta": + meta[name] = data + elif out_kind == "cfg": + output_part(case_dir, log_file, out_kind, name, dump_yaml_fn(data, name, file_mode, cfg_yaml)) + elif out_kind == "data": + output_part(case_dir, log_file, out_kind, name, dump_yaml_fn(data, name, file_mode, yaml)) + elif out_kind == "ssz": + output_part(case_dir, log_file, out_kind, name, dump_ssz_fn(data, name, file_mode)) + else: + raise ValueError("Unknown out_kind %s" % out_kind) + + return written_part, meta + + def dump_ssz_fn(data: AnyStr, name: str, file_mode: str): def dump(case_path: Path): out_path = case_path / Path(name + '.ssz_snappy') From aeccd20fd102021984073162e48fc84abc900601 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Fri, 5 May 2023 23:03:25 +0800 Subject: [PATCH 10/22] Try multiprocessing --- setup.py | 2 +- .../gen_helpers/gen_base/gen_runner.py | 141 ++++++++++++------ tests/core/pyspec/eth2spec/test/context.py | 6 +- 3 files changed, 100 insertions(+), 49 deletions(-) diff --git a/setup.py b/setup.py index 5d2736979..a82f55e02 100644 --- a/setup.py +++ b/setup.py @@ -1184,7 +1184,7 @@ setup( extras_require={ "test": ["pytest>=4.4", "pytest-cov", "pytest-xdist"], "lint": ["flake8==5.0.4", "mypy==0.981", "pylint==2.15.3"], - "generator": ["python-snappy==0.6.1", "filelock"], + "generator": ["python-snappy==0.6.1", "filelock", "pathos==0.3.0"], "docs": ["mkdocs==1.4.2", "mkdocs-material==9.1.5", "mdx-truly-sane-lists==1.3", "mkdocs-awesome-pages-plugin==2.8.0"] }, install_requires=[ diff --git a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py index 5e6ea93d3..56b1bcab1 100644 --- a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py +++ b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py @@ -11,12 +11,16 @@ import sys import json from typing import Iterable, AnyStr, Any, Callable import traceback +import multiprocessing +from collections import namedtuple + from ruamel.yaml import ( YAML, ) from filelock import FileLock from snappy import compress +from pathos.multiprocessing import ProcessingPool as Pool from eth_utils import encode_hex @@ -32,6 +36,12 @@ context.is_pytest = False TIME_THRESHOLD_TO_PRINT = 1.0 # seconds +# Generator mode setting +MODE_SINGLE_PROCESS = 'MODE_SINGLE_PROCESS' +MODE_MULTIPROCESSING = 'MODE_MULTIPROCESSING' + +GENERATOR_MODE = MODE_SINGLE_PROCESS + @dataclass class Diagnostics(object): @@ -41,6 +51,45 @@ class Diagnostics(object): test_identifiers: list = field(default_factory=list) +TestCaseParams = namedtuple( + 'TestCaseParams', [ + 'test_case', 'case_dir', 'log_file', 'file_mode', + ]) + + +def worker_function(item): + return generate_test_vector(*item) + + +def get_default_yaml(): + yaml = YAML(pure=True) + yaml.default_flow_style = None + + def _represent_none(self, _): + return self.represent_scalar('tag:yaml.org,2002:null', 'null') + + yaml.representer.add_representer(type(None), _represent_none) + + return yaml + + +def 
get_cfg_yaml(): + # Spec config is using a YAML subset + cfg_yaml = YAML(pure=True) + cfg_yaml.default_flow_style = False # Emit separate line for each key + + def cfg_represent_bytes(self, data): + return self.represent_int(encode_hex(data)) + + cfg_yaml.representer.add_representer(bytes, cfg_represent_bytes) + + def cfg_represent_quoted_str(self, data): + return self.represent_scalar(u'tag:yaml.org,2002:str', data, style="'") + + cfg_yaml.representer.add_representer(context.quoted_str, cfg_represent_quoted_str) + return cfg_yaml + + def validate_output_dir(path_str): path = Path(path_str) @@ -148,28 +197,6 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): else: file_mode = "w" - yaml = YAML(pure=True) - yaml.default_flow_style = None - - def _represent_none(self, _): - return self.represent_scalar('tag:yaml.org,2002:null', 'null') - - yaml.representer.add_representer(type(None), _represent_none) - - # Spec config is using a YAML subset - cfg_yaml = YAML(pure=True) - cfg_yaml.default_flow_style = False # Emit separate line for each key - - def cfg_represent_bytes(self, data): - return self.represent_int(encode_hex(data)) - - cfg_yaml.representer.add_representer(bytes, cfg_represent_bytes) - - def cfg_represent_quoted_str(self, data): - return self.represent_scalar(u'tag:yaml.org,2002:str', data, style="'") - - cfg_yaml.representer.add_representer(context.quoted_str, cfg_represent_quoted_str) - log_file = Path(output_dir) / 'testgen_error_log.txt' print(f"Generating tests into {output_dir}") @@ -185,8 +212,11 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): collect_only = args.collect_only diagnostics_obj = Diagnostics() - provider_start = time.time() + + if GENERATOR_MODE == MODE_MULTIPROCESSING: + all_test_case_params = [] + for tprov in test_providers: if not collect_only: # runs anything that we don't want to repeat for every test case. @@ -205,13 +235,20 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): if is_skip: continue - # generate test vector - is_skip, diagnostics_obj = generate_test_vector_and_diagnose( - test_case, case_dir, log_file, file_mode, - cfg_yaml, yaml, diagnostics_obj, - ) - if is_skip: - continue + if GENERATOR_MODE == MODE_SINGLE_PROCESS: + result = generate_test_vector(test_case, case_dir, log_file, file_mode) + write_result_into_diagnostics_obj(result, diagnostics_obj) + elif GENERATOR_MODE == MODE_MULTIPROCESSING: + item = TestCaseParams(test_case, case_dir, log_file, file_mode) + all_test_case_params.append(item) + + if GENERATOR_MODE == MODE_MULTIPROCESSING: + num_process = multiprocessing.cpu_count() // 2 - 1 + with Pool(processes=num_process) as pool: + results = pool.map(worker_function, iter(all_test_case_params)) + + for result in results: + write_result_into_diagnostics_obj(result, diagnostics_obj) provider_end = time.time() span = round(provider_end - provider_start, 2) @@ -249,54 +286,55 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): print(f"wrote diagnostics_obj to {diagnostics_path}") -def generate_test_vector_and_diagnose(test_case, case_dir, log_file, file_mode, cfg_yaml, yaml, diagnostics_obj): - is_skip = False +def generate_test_vector(test_case, case_dir, log_file, file_mode): + cfg_yaml = get_cfg_yaml() + yaml = get_default_yaml() + + written_part = False print(f'Generating test: {case_dir}') test_start = time.time() - written_part = False - # Add `INCOMPLETE` tag file to indicate that the test generation has not completed. 
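    # If a run is interrupted here, the tag file is left behind; on the next
    # invocation should_skip_case_dir treats the case as incomplete and
    # regenerates it instead of skipping it.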
incomplete_tag_file = get_incomplete_tag_file(case_dir) case_dir.mkdir(parents=True, exist_ok=True) with incomplete_tag_file.open("w") as f: f.write("\n") + result = None try: meta = dict() try: written_part, meta = execute_test(test_case, case_dir, meta, log_file, file_mode, cfg_yaml, yaml) except SkippedTest as e: + result = 0 # 0 means skipped print(e) - diagnostics_obj.skipped_test_count += 1 shutil.rmtree(case_dir) - is_skip = True - return is_skip, diagnostics_obj + return result # Once all meta data is collected (if any), write it to a meta data file. if len(meta) != 0: written_part = True output_part(case_dir, log_file, "data", "meta", dump_yaml_fn(meta, "meta", file_mode, yaml)) - if not written_part: - print(f"test case {case_dir} did not produce any test case parts") except Exception as e: + result = -1 # -1 means error error_message = f"[ERROR] failed to generate vector(s) for test {case_dir}: {e}" # Write to error log file with log_file.open("a+") as f: f.write(error_message) traceback.print_exc(file=f) f.write('\n') + print(error_message) traceback.print_exc() else: # If no written_part, the only file was incomplete_tag_file. Clear the existing case_dir folder. if not written_part: + print(f"test case {case_dir} did not produce any written_part") shutil.rmtree(case_dir) + result = -1 else: - diagnostics_obj.generated_test_count += 1 - test_identifier = get_test_identifier(test_case) - diagnostics_obj.test_identifiers.append(test_identifier) + result = get_test_identifier(test_case) # Only remove `INCOMPLETE` tag file os.remove(incomplete_tag_file) test_end = time.time() @@ -304,7 +342,19 @@ def generate_test_vector_and_diagnose(test_case, case_dir, log_file, file_mode, if span > TIME_THRESHOLD_TO_PRINT: print(f' - generated in {span} seconds') - return is_skip, diagnostics_obj + return result + + +def write_result_into_diagnostics_obj(result, diagnostics_obj): + if result == -1: # error + pass + elif result == 0: + diagnostics_obj.skipped_test_count += 1 + elif result is not None: + diagnostics_obj.generated_test_count += 1 + diagnostics_obj.test_identifiers.append(result) + else: + raise Exception(f"Unexpected result: {result}") def dump_yaml_fn(data: Any, name: str, file_mode: str, yaml_encoder: YAML): @@ -312,6 +362,7 @@ def dump_yaml_fn(data: Any, name: str, file_mode: str, yaml_encoder: YAML): out_path = case_path / Path(name + '.yaml') with out_path.open(file_mode) as f: yaml_encoder.dump(data, f) + f.close() return dump @@ -320,14 +371,14 @@ def output_part(case_dir, log_file, out_kind: str, name: str, fn: Callable[[Path case_dir.mkdir(parents=True, exist_ok=True) try: fn(case_dir) - except IOError as e: + except (IOError, ValueError) as e: error_message = f'[Error] error when dumping test "{case_dir}", part "{name}", kind "{out_kind}": {e}' # Write to error log file with log_file.open("a+") as f: f.write(error_message) traceback.print_exc(file=f) f.write('\n') - + print(error_message) sys.exit(error_message) diff --git a/tests/core/pyspec/eth2spec/test/context.py b/tests/core/pyspec/eth2spec/test/context.py index 901fd273a..626ffc1db 100644 --- a/tests/core/pyspec/eth2spec/test/context.py +++ b/tests/core/pyspec/eth2spec/test/context.py @@ -560,7 +560,7 @@ def _get_basic_dict(ssz_dict: Dict[str, Any]) -> Dict[str, Any]: return result -def _get_copy_of_spec(spec): +def get_copy_of_spec(spec): fork = spec.fork preset = spec.config.PRESET_BASE module_path = f"eth2spec.{fork}.{preset}" @@ -601,14 +601,14 @@ def with_config_overrides(config_overrides, emitted_fork=None, 
emit=True): def decorator(fn): def wrapper(*args, spec: Spec, **kw): # Apply config overrides to spec - spec, output_config = spec_with_config_overrides(_get_copy_of_spec(spec), config_overrides) + spec, output_config = spec_with_config_overrides(get_copy_of_spec(spec), config_overrides) # Apply config overrides to additional phases, if present if 'phases' in kw: phases = {} for fork in kw['phases']: phases[fork], output = spec_with_config_overrides( - _get_copy_of_spec(kw['phases'][fork]), config_overrides) + get_copy_of_spec(kw['phases'][fork]), config_overrides) if emitted_fork == fork: output_config = output kw['phases'] = phases From 98d0ca48b8730f05ed180182971f629bfedd8d0d Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Sat, 6 May 2023 17:45:22 +0800 Subject: [PATCH 11/22] Fix `test_randomized_state` and `test_randomized_state_leaking` --- .../test_process_inactivity_updates.py | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/tests/core/pyspec/eth2spec/test/altair/epoch_processing/test_process_inactivity_updates.py b/tests/core/pyspec/eth2spec/test/altair/epoch_processing/test_process_inactivity_updates.py index 0816dfad6..57fe8b9ca 100644 --- a/tests/core/pyspec/eth2spec/test/altair/epoch_processing/test_process_inactivity_updates.py +++ b/tests/core/pyspec/eth2spec/test/altair/epoch_processing/test_process_inactivity_updates.py @@ -54,7 +54,15 @@ def test_genesis_random_scores(spec, state): # def run_inactivity_scores_test(spec, state, participation_fn=None, inactivity_scores_fn=None, rng=Random(10101)): - next_epoch_via_block(spec, state) + while True: + try: + next_epoch_via_block(spec, state) + except AssertionError: + # If the proposer is slashed, we skip this epoch and try to propose block at the next epoch + next_epoch(spec, state) + else: + break + if participation_fn is not None: participation_fn(spec, state, rng=rng) if inactivity_scores_fn is not None: @@ -363,7 +371,7 @@ def test_randomized_state(spec, state): their inactivity score does not change. """ rng = Random(10011001) - _run_randomized_state_test_for_inactivity_updates(spec, state, rng=rng) + yield from _run_randomized_state_test_for_inactivity_updates(spec, state, rng=rng) @with_altair_and_later @@ -377,6 +385,6 @@ def test_randomized_state_leaking(spec, state): (refer ``get_eligible_validator_indices`). 
""" rng = Random(10011002) - _run_randomized_state_test_for_inactivity_updates(spec, state, rng=rng) + yield from _run_randomized_state_test_for_inactivity_updates(spec, state, rng=rng) # Check still in leak assert spec.is_in_inactivity_leak(state) From 3ae4bf14f1d607a19f5b3bd694b841fff0545d98 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Sat, 6 May 2023 20:11:50 +0800 Subject: [PATCH 12/22] Fix and set to `MODE_MULTIPROCESSING` --- .../core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py index 56b1bcab1..c8b7f0c03 100644 --- a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py +++ b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py @@ -40,7 +40,7 @@ TIME_THRESHOLD_TO_PRINT = 1.0 # seconds MODE_SINGLE_PROCESS = 'MODE_SINGLE_PROCESS' MODE_MULTIPROCESSING = 'MODE_MULTIPROCESSING' -GENERATOR_MODE = MODE_SINGLE_PROCESS +GENERATOR_MODE = MODE_MULTIPROCESSING @dataclass @@ -330,7 +330,7 @@ def generate_test_vector(test_case, case_dir, log_file, file_mode): else: # If no written_part, the only file was incomplete_tag_file. Clear the existing case_dir folder. if not written_part: - print(f"test case {case_dir} did not produce any written_part") + print(f"[Error] test case {case_dir} did not produce any written_part") shutil.rmtree(case_dir) result = -1 else: @@ -384,6 +384,7 @@ def output_part(case_dir, log_file, out_kind: str, name: str, fn: Callable[[Path def execute_test(test_case, case_dir, meta, log_file, file_mode, cfg_yaml, yaml): result = test_case.case_fn() + written_part = False for (name, out_kind, data) in result: written_part = True if out_kind == "meta": From 1008714e545b5a1e60cca772271f7df0592db394 Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Tue, 9 May 2023 21:49:41 +0800 Subject: [PATCH 13/22] Add `settings.py` of testgen --- .../gen_helpers/gen_base/gen_runner.py | 20 ++++++++----------- .../eth2spec/gen_helpers/gen_base/settings.py | 13 ++++++++++++ 2 files changed, 21 insertions(+), 12 deletions(-) create mode 100644 tests/core/pyspec/eth2spec/gen_helpers/gen_base/settings.py diff --git a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py index c8b7f0c03..2562c7fad 100644 --- a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py +++ b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/gen_runner.py @@ -11,7 +11,6 @@ import sys import json from typing import Iterable, AnyStr, Any, Callable import traceback -import multiprocessing from collections import namedtuple from ruamel.yaml import ( @@ -28,21 +27,19 @@ from eth2spec.test import context from eth2spec.test.exceptions import SkippedTest from .gen_typing import TestProvider +from .settings import ( + GENERATOR_MODE, + MODE_MULTIPROCESSING, + MODE_SINGLE_PROCESS, + NUM_PROCESS, + TIME_THRESHOLD_TO_PRINT, +) # Flag that the runner does NOT run test via pytest context.is_pytest = False -TIME_THRESHOLD_TO_PRINT = 1.0 # seconds - -# Generator mode setting -MODE_SINGLE_PROCESS = 'MODE_SINGLE_PROCESS' -MODE_MULTIPROCESSING = 'MODE_MULTIPROCESSING' - -GENERATOR_MODE = MODE_MULTIPROCESSING - - @dataclass class Diagnostics(object): collected_test_count: int = 0 @@ -243,8 +240,7 @@ def run_generator(generator_name, test_providers: Iterable[TestProvider]): all_test_case_params.append(item) if GENERATOR_MODE == 
MODE_MULTIPROCESSING: - num_process = multiprocessing.cpu_count() // 2 - 1 - with Pool(processes=num_process) as pool: + with Pool(processes=NUM_PROCESS) as pool: results = pool.map(worker_function, iter(all_test_case_params)) for result in results: diff --git a/tests/core/pyspec/eth2spec/gen_helpers/gen_base/settings.py b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/settings.py new file mode 100644 index 000000000..b8c3e591f --- /dev/null +++ b/tests/core/pyspec/eth2spec/gen_helpers/gen_base/settings.py @@ -0,0 +1,13 @@ +import multiprocessing + + +# Generator mode setting +MODE_SINGLE_PROCESS = 'MODE_SINGLE_PROCESS' +MODE_MULTIPROCESSING = 'MODE_MULTIPROCESSING' +# Test generator mode +GENERATOR_MODE = MODE_MULTIPROCESSING +# Number of subprocesses when using MODE_MULTIPROCESSING +NUM_PROCESS = multiprocessing.cpu_count() // 2 - 1 + +# Diagnostics +TIME_THRESHOLD_TO_PRINT = 1.0 # seconds From 847553783b786686e7f24c69d104c055cbf9508b Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Mon, 15 May 2023 17:11:50 +0800 Subject: [PATCH 14/22] Add `test_zeroed_commitment` --- .../eth2spec/test/deneb/sanity/test_blocks.py | 20 +++++++++++++++++++ .../pyspec/eth2spec/test/helpers/sharding.py | 19 +++++++++--------- 2 files changed, 30 insertions(+), 9 deletions(-) diff --git a/tests/core/pyspec/eth2spec/test/deneb/sanity/test_blocks.py b/tests/core/pyspec/eth2spec/test/deneb/sanity/test_blocks.py index 111565cce..b518f2b55 100644 --- a/tests/core/pyspec/eth2spec/test/deneb/sanity/test_blocks.py +++ b/tests/core/pyspec/eth2spec/test/deneb/sanity/test_blocks.py @@ -150,3 +150,23 @@ def test_incorrect_block_hash(spec, state): yield 'blocks', [signed_block] yield 'post', state + + +@with_deneb_and_later +@spec_state_test +def test_zeroed_commitment(spec, state): + """ + The blob is invalid, but the commitment is in correct form. 
+ """ + yield 'pre', state + + block = build_empty_block_for_next_slot(spec, state) + opaque_tx, _, blob_kzg_commitments, _ = get_sample_opaque_tx(spec, blob_count=1, is_valid_blob=False) + assert all(commitment == b'\x00' * 48 for commitment in blob_kzg_commitments) + block.body.blob_kzg_commitments = blob_kzg_commitments + block.body.execution_payload.transactions = [opaque_tx] + block.body.execution_payload.block_hash = compute_el_block_hash(spec, block.body.execution_payload) + signed_block = state_transition_and_sign_block(spec, state, block) + + yield 'blocks', [signed_block] + yield 'post', state diff --git a/tests/core/pyspec/eth2spec/test/helpers/sharding.py b/tests/core/pyspec/eth2spec/test/helpers/sharding.py index 6b913b90e..16b5a0234 100644 --- a/tests/core/pyspec/eth2spec/test/helpers/sharding.py +++ b/tests/core/pyspec/eth2spec/test/helpers/sharding.py @@ -50,12 +50,9 @@ class SignedBlobTransaction(Container): signature: ECDSASignature -def get_sample_blob(spec, rng=None): - if rng is None: - rng = random.Random(5566) - +def get_sample_blob(spec, rng=random.Random(5566), is_valid_blob=True): values = [ - rng.randint(0, spec.BLS_MODULUS - 1) + rng.randint(0, spec.BLS_MODULUS - 1) if is_valid_blob else spec.BLS_MODULUS for _ in range(spec.FIELD_ELEMENTS_PER_BLOB) ] @@ -98,15 +95,19 @@ def get_poly_in_both_forms(spec, rng=None): return coeffs, evals -def get_sample_opaque_tx(spec, blob_count=1, rng=None): +def get_sample_opaque_tx(spec, blob_count=1, rng=random.Random(5566), is_valid_blob=True): blobs = [] blob_kzg_commitments = [] blob_kzg_proofs = [] blob_versioned_hashes = [] for _ in range(blob_count): - blob = get_sample_blob(spec, rng) - blob_commitment = spec.KZGCommitment(spec.blob_to_kzg_commitment(blob)) - blob_kzg_proof = spec.compute_blob_kzg_proof(blob, blob_commitment) + blob = get_sample_blob(spec, rng, is_valid_blob=is_valid_blob) + if is_valid_blob: + blob_commitment = spec.KZGCommitment(spec.blob_to_kzg_commitment(blob)) + blob_kzg_proof = spec.compute_blob_kzg_proof(blob, blob_commitment) + else: + blob_commitment = spec.KZGCommitment() + blob_kzg_proof = spec.KZGProof() blob_versioned_hash = spec.kzg_commitment_to_versioned_hash(blob_commitment) blobs.append(blob) blob_kzg_commitments.append(blob_commitment) From 5a6052f46caa540ccceaaf6fab6a2d4c91354dfd Mon Sep 17 00:00:00 2001 From: Alex Stokes Date: Mon, 15 May 2023 16:51:52 -0600 Subject: [PATCH 15/22] Update fork-choice.md Stylistic change to be in line with validations in other specifications --- specs/deneb/fork-choice.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/specs/deneb/fork-choice.md b/specs/deneb/fork-choice.md index 9faa11077..ce9973f13 100644 --- a/specs/deneb/fork-choice.md +++ b/specs/deneb/fork-choice.md @@ -30,9 +30,8 @@ This is the modification of the fork choice accompanying the Deneb upgrade. def validate_blobs(expected_kzg_commitments: Sequence[KZGCommitment], blobs: Sequence[Blob], proofs: Sequence[KZGProof]) -> None: - assert len(expected_kzg_commitments) == len(blobs) - assert len(blobs) == len(proofs) - + assert len(expected_kzg_commitments) == len(blobs) == len(proofs) + assert verify_blob_kzg_proof_batch(blobs, expected_kzg_commitments, proofs) ``` From bb38c56ddd0b6c29d472e6eb96b63a44d7c89abc Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Tue, 16 May 2023 20:07:21 +0800 Subject: [PATCH 16/22] Fix `dump_kzg_trusted_setup_files`. 
Use Fastest BLS lib (#3358) --- tests/core/pyspec/eth2spec/utils/kzg.py | 29 ++++++++++--------------- 1 file changed, 12 insertions(+), 17 deletions(-) diff --git a/tests/core/pyspec/eth2spec/utils/kzg.py b/tests/core/pyspec/eth2spec/utils/kzg.py index 4b5454f66..cef1faa66 100644 --- a/tests/core/pyspec/eth2spec/utils/kzg.py +++ b/tests/core/pyspec/eth2spec/utils/kzg.py @@ -10,20 +10,13 @@ from typing import ( from pathlib import Path from eth_utils import encode_hex -from py_ecc.optimized_bls12_381 import ( # noqa: F401 - G1, - G2, - Z1, - Z2, - curve_order as BLS_MODULUS, - add, - multiply, - neg, -) from py_ecc.typing import ( Optimized_Point3D, ) from eth2spec.utils import bls +from eth2spec.utils.bls import ( + BLS_MODULUS, +) PRIMITIVE_ROOT_OF_UNITY = 7 @@ -35,7 +28,7 @@ def generate_setup(generator: Optimized_Point3D, secret: int, length: int) -> Tu """ result = [generator] for _ in range(1, length): - result.append(multiply(result[-1], secret)) + result.append(bls.multiply(result[-1], secret)) return tuple(result) @@ -49,9 +42,9 @@ def fft(vals: Sequence[Optimized_Point3D], modulus: int, domain: int) -> Sequenc R = fft(vals[1::2], modulus, domain[::2]) o = [0] * len(vals) for i, (x, y) in enumerate(zip(L, R)): - y_times_root = multiply(y, domain[i]) - o[i] = add(x, y_times_root) - o[i + len(L)] = add(x, neg(y_times_root)) + y_times_root = bls.multiply(y, domain[i]) + o[i] = bls.add(x, y_times_root) + o[i + len(L)] = bls.add(x, bls.neg(y_times_root)) return o @@ -90,12 +83,14 @@ def get_lagrange(setup: Sequence[Optimized_Point3D]) -> Tuple[bytes]: # TODO: introduce an IFFT function for simplicity fft_output = fft(setup, BLS_MODULUS, domain) inv_length = pow(len(setup), BLS_MODULUS - 2, BLS_MODULUS) - return tuple(bls.G1_to_bytes48(multiply(fft_output[-i], inv_length)) for i in range(len(fft_output))) + return tuple(bls.G1_to_bytes48(bls.multiply(fft_output[-i], inv_length)) for i in range(len(fft_output))) def dump_kzg_trusted_setup_files(secret: int, g1_length: int, g2_length: int, output_dir: str) -> None: - setup_g1 = generate_setup(bls.G1, secret, g1_length) - setup_g2 = generate_setup(bls.G2, secret, g2_length) + bls.use_fastest() + + setup_g1 = generate_setup(bls.G1(), secret, g1_length) + setup_g2 = generate_setup(bls.G2(), secret, g2_length) setup_g1_lagrange = get_lagrange(setup_g1) roots_of_unity = compute_roots_of_unity(g1_length) From 32358e8fad3425eb23e406413d60744e0f274d40 Mon Sep 17 00:00:00 2001 From: Justin Traglia <95511699+jtraglia@users.noreply.github.com> Date: Wed, 17 May 2023 11:24:48 -0500 Subject: [PATCH 17/22] Add comment about zero elements in batch verification (#3367) --- specs/deneb/polynomial-commitments.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/deneb/polynomial-commitments.md b/specs/deneb/polynomial-commitments.md index e23c31fab..39758ac88 100644 --- a/specs/deneb/polynomial-commitments.md +++ b/specs/deneb/polynomial-commitments.md @@ -566,7 +566,7 @@ def verify_blob_kzg_proof_batch(blobs: Sequence[Blob], proofs_bytes: Sequence[Bytes48]) -> bool: """ Given a list of blobs and blob KZG proofs, verify that they correspond to the provided commitments. - + Will return True if there are zero blobs/commitments/proofs. Public method. 
""" From db2e613aac14012234edf94d42cc34b8b5eaa49e Mon Sep 17 00:00:00 2001 From: terencechain Date: Wed, 17 May 2023 11:08:33 -0700 Subject: [PATCH 18/22] Nitpick: blob -> blob sidecar --- specs/deneb/p2p-interface.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/specs/deneb/p2p-interface.md b/specs/deneb/p2p-interface.md index 3c6f3c88a..507cd4c0c 100644 --- a/specs/deneb/p2p-interface.md +++ b/specs/deneb/p2p-interface.md @@ -226,7 +226,7 @@ No more than `MAX_REQUEST_BLOB_SIDECARS` may be requested at a time. The response MUST consist of zero or more `response_chunk`. Each _successful_ `response_chunk` MUST contain a single `BlobSidecar` payload. -Clients MUST support requesting sidecars since `minimum_request_epoch`, where `minimum_request_epoch = max(finalized_epoch, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH)`. If any root in the request content references a block earlier than `minimum_request_epoch`, peers MAY respond with error code `3: ResourceUnavailable` or not include the blob in the response. +Clients MUST support requesting sidecars since `minimum_request_epoch`, where `minimum_request_epoch = max(finalized_epoch, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH)`. If any root in the request content references a block earlier than `minimum_request_epoch`, peers MAY respond with error code `3: ResourceUnavailable` or not include the blob sidecar in the response. Clients MUST respond with at least one sidecar, if they have it. Clients MAY limit the number of blocks and sidecars in the response. From 44394ad1216b68dbcb5e00c9239e8b74998ee0de Mon Sep 17 00:00:00 2001 From: Hsiao-Wei Wang Date: Thu, 18 May 2023 20:58:24 +0800 Subject: [PATCH 19/22] Fix CircleCI Python version --- .circleci/config.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.circleci/config.yml b/.circleci/config.yml index 5958a2fc6..fcdf483d5 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -157,7 +157,7 @@ jobs: path: tests/core/pyspec/test-reports test-eip6110: docker: - - image: circleci/python:3.8 + - image: circleci/python:3.9 working_directory: ~/specs-repo steps: - restore_cache: From 2f218f83368562abd83fa236e542033e5ebcbafc Mon Sep 17 00:00:00 2001 From: Suphanat Chunhapanya Date: Fri, 5 May 2023 15:09:43 +0700 Subject: [PATCH 20/22] Specify the number of sidecar subnets Previously the number of subnets is equal to MAX_BLOBS_PER_BLOCK which specifies the number of blobs per block. This commit now makes the number of subnets equal to BLOB_SIDECAR_SUBNET_COUNT instead. The advantage of doing this is that we can change MAX_BLOBS_PER_BLOCK without worrying about the p2p network structure and the number of subnets. 
---
 specs/deneb/p2p-interface.md |  8 ++++----
 specs/deneb/validator.md     | 21 ++++++++++++++++++++-
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/specs/deneb/p2p-interface.md b/specs/deneb/p2p-interface.md
index 507cd4c0c..3136f190d 100644
--- a/specs/deneb/p2p-interface.md
+++ b/specs/deneb/p2p-interface.md
@@ -22,7 +22,7 @@ The specification of these changes continues in the same format as the network s
 - [Topics and messages](#topics-and-messages)
   - [Global topics](#global-topics)
     - [`beacon_block`](#beacon_block)
-    - [`blob_sidecar_{index}`](#blob_sidecar_index)
+    - [`blob_sidecar_{subnet_id}`](#blob_sidecar_subnet_id)
 - [Transitioning the gossip](#transitioning-the-gossip)
 - [The Req/Resp domain](#the-reqresp-domain)
   - [Messages](#messages)
@@ -107,7 +107,7 @@ The new topics along with the type of the `data` field of a gossipsub message ar
 
 | Name | Message Type |
 | - | - |
-| `blob_sidecar_{index}` | `SignedBlobSidecar` (new) |
+| `blob_sidecar_{subnet_id}` | `SignedBlobSidecar` (new) |
 
 ##### Global topics
 
@@ -117,13 +117,13 @@ Deneb introduces new global topics for blob sidecars.
 
 The *type* of the payload of this topic changes to the (modified) `SignedBeaconBlock` found in deneb.
 
-###### `blob_sidecar_{index}`
+###### `blob_sidecar_{subnet_id}`
 
 This topic is used to propagate signed blob sidecars, one for each sidecar index. The number of indices is defined by `MAX_BLOBS_PER_BLOCK`.
 
 The following validations MUST pass before forwarding the `signed_blob_sidecar` on the network, assuming the alias `sidecar = signed_blob_sidecar.message`:
 
-- _[REJECT]_ The sidecar is for the correct topic -- i.e. `sidecar.index` matches the topic `{index}`.
+- _[REJECT]_ The sidecar is for the correct subnet -- i.e. `compute_subnet_for_blob_sidecar(sidecar.index) == subnet_id`.
 - _[IGNORE]_ The sidecar is not from a future slot (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- i.e. validate that `sidecar.slot <= current_slot` (a client MAY queue future sidecars for processing at the appropriate slot).
 - _[IGNORE]_ The sidecar is from a slot greater than the latest finalized slot -- i.e. validate that `sidecar.slot > compute_start_slot_at_epoch(state.finalized_checkpoint.epoch)`
 - _[IGNORE]_ The sidecar's block's parent (defined by `sidecar.block_parent_root`) has been seen (via both gossip and non-gossip sources) (a client MAY queue sidecars for processing once the parent block is retrieved).

diff --git a/specs/deneb/validator.md b/specs/deneb/validator.md
index 6562c91dd..4bd9ab4a5 100644
--- a/specs/deneb/validator.md
+++ b/specs/deneb/validator.md
@@ -10,6 +10,8 @@
 
 - [Introduction](#introduction)
 - [Prerequisites](#prerequisites)
+- [Constants](#constants)
+  - [Misc](#misc)
 - [Helpers](#helpers)
   - [`BlobsBundle`](#blobsbundle)
   - [Modified `GetPayloadResponse`](#modified-getpayloadresponse)
@@ -38,6 +40,14 @@ All behaviors and definitions defined in this document, and documents it extends
 
 All terminology, constants, functions, and protocol mechanics defined in the updated [Beacon Chain doc of Deneb](./beacon-chain.md) are requisite for this document and used throughout. Please see related Beacon Chain doc before continuing and use them as a reference throughout.
 
+## Constants
+
+### Misc
+
+| Name | Value | Unit |
+| - | - | :-: |
+| `BLOB_SIDECAR_SUBNET_COUNT` | `4` | The number of blob sidecar subnets used in the gossipsub protocol. |
+
 ## Helpers
 
 ### `BlobsBundle`
@@ -136,7 +146,7 @@ def get_blob_sidecars(block: BeaconBlock,
 ```
 
-Then for each sidecar, `signed_sidecar = SignedBlobSidecar(message=sidecar, signature=signature)` is constructed and published to the `blob_sidecar_{index}` topics according to its index.
+Then for each sidecar, `signed_sidecar = SignedBlobSidecar(message=sidecar, signature=signature)` is constructed and published to the associated sidecar topic, the `blob_sidecar_{subnet_id}` pubsub topic.
 
 `signature` is obtained from:
 
@@ -149,6 +159,15 @@ def get_blob_sidecar_signature(state: BeaconState,
     return bls.Sign(privkey, signing_root)
 ```
 
+The `subnet_id` for the `signed_sidecar` is calculated with:
+- Let `blob_index = signed_sidecar.message.index`.
+- Let `subnet_id = compute_subnet_for_blob_sidecar(blob_index)`.
+
+```python
+def compute_subnet_for_blob_sidecar(blob_index: BlobIndex) -> uint64:
+    return uint64(blob_index % BLOB_SIDECAR_SUBNET_COUNT)
+```
+
 After publishing, the peers on the network may request the sidecar through sync-requests, or a local user may be interested.
 
 The validator MUST hold on to sidecars for `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` epochs and serve when capable,

From 08a13261c209ca2ebb7d099b4cc6240a7b6ad8b7 Mon Sep 17 00:00:00 2001
From: Suphanat Chunhapanya
Date: Tue, 9 May 2023 21:03:59 +0700
Subject: [PATCH 21/22] Use SubnetID instead of uint64

---
 specs/deneb/validator.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/specs/deneb/validator.md b/specs/deneb/validator.md
index 4bd9ab4a5..4303c90b5 100644
--- a/specs/deneb/validator.md
+++ b/specs/deneb/validator.md
@@ -164,8 +164,8 @@ The `subnet_id` for the `signed_sidecar` is calculated with:
 - Let `subnet_id = compute_subnet_for_blob_sidecar(blob_index)`.
 
 ```python
-def compute_subnet_for_blob_sidecar(blob_index: BlobIndex) -> uint64:
-    return uint64(blob_index % BLOB_SIDECAR_SUBNET_COUNT)
+def compute_subnet_for_blob_sidecar(blob_index: BlobIndex) -> SubnetID:
+    return SubnetID(blob_index % BLOB_SIDECAR_SUBNET_COUNT)
 ```

From 7097dcf27ae357a57c107f6939d1b96e202665f4 Mon Sep 17 00:00:00 2001
From: Alex Stokes
Date: Thu, 18 May 2023 15:27:47 -0600
Subject: [PATCH 22/22] Clarify blob subnets

---
 specs/deneb/p2p-interface.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/specs/deneb/p2p-interface.md b/specs/deneb/p2p-interface.md
index 3136f190d..d67c144b2 100644
--- a/specs/deneb/p2p-interface.md
+++ b/specs/deneb/p2p-interface.md
@@ -119,7 +119,7 @@ The *type* of the payload of this topic changes to the (modified) `SignedBeaconB
 
 ###### `blob_sidecar_{subnet_id}`
 
-This topic is used to propagate signed blob sidecars, one for each sidecar index. The number of indices is defined by `MAX_BLOBS_PER_BLOCK`.
+This topic is used to propagate signed blob sidecars, where each blob index maps to some `subnet_id`.
 
 The following validations MUST pass before forwarding the `signed_blob_sidecar` on the network, assuming the alias `sidecar = signed_blob_sidecar.message`:
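
To close, a minimal sketch of how a client might apply the `[REJECT]` subnet check above. Only `BLOB_SIDECAR_SUBNET_COUNT` and the modulo mapping come from the patches above; the reduced `BlobSidecar` stand-in and the helper name `is_on_correct_subnet` are hypothetical:

```python
from dataclasses import dataclass

BLOB_SIDECAR_SUBNET_COUNT = 4  # per the validator.md change above


@dataclass
class BlobSidecar:
    # Reduced stand-in for the spec's `BlobSidecar` container;
    # only the field needed for subnet validation is modeled.
    index: int  # blob index within its block


def compute_subnet_for_blob_sidecar(blob_index: int) -> int:
    # Mirrors the spec helper: blob index -> gossip subnet id.
    return blob_index % BLOB_SIDECAR_SUBNET_COUNT


def is_on_correct_subnet(sidecar: BlobSidecar, subnet_id: int) -> bool:
    # [REJECT] condition: the sidecar's index must map to the topic's subnet_id.
    return compute_subnet_for_blob_sidecar(sidecar.index) == subnet_id


# A sidecar with blob index 5 belongs on subnet 1 (5 % 4), not subnet 5:
assert is_on_correct_subnet(BlobSidecar(index=5), subnet_id=1)
assert not is_on_correct_subnet(BlobSidecar(index=5), subnet_id=5)
```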