Merge branch 'dev' into pr3052

Hsiao-Wei Wang 2022-11-18 02:12:07 +08:00
commit f1d4c9047a
No known key found for this signature in database
GPG Key ID: AE3D6B174F971DE4
13 changed files with 59 additions and 22 deletions


@@ -122,7 +122,7 @@ This backbone is based on a pure function of the *node* identity and time:
peers on a vertical topic can be found by searching the local peerstore for identities that hash to the desired topic(s),
assuming the peerstore already has a large enough variety of peers.
- Nodes can be held accountable for contributing to the backbone:
-peers that particpate in DAS but are not active on the appropriate backbone topics can be scored down.
+peers that participate in DAS but are not active on the appropriate backbone topics can be scored down.
*Note: This is experimental, DAS should be light enough for all participants to run, but scoring needs to undergo testing*
A node should anticipate backbone topics to subscribe to based their own identity.
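For intuition, a backbone assignment that is a pure function of node identity and time could look like the sketch below. Every name and constant here is an illustrative assumption rather than part of the spec; the point is only that any peer can recompute another node's expected vertical topics from its identity and the current epoch.

```python
from hashlib import sha256

# Illustrative parameters only (not spec values).
BACKBONE_ROTATION_EPOCHS = 256    # how long an assignment lasts
BACKBONE_TOPICS_PER_NODE = 2      # how many vertical topics each node serves
NUM_VERTICAL_TOPICS = 512         # total number of vertical (column) topics

def anticipated_backbone_topics(node_id: bytes, epoch: int) -> set:
    """Pure function of (node identity, time): any peer can evaluate this
    locally to predict which backbone topics a given node should be on."""
    period = epoch // BACKBONE_ROTATION_EPOCHS
    topics = set()
    for i in range(BACKBONE_TOPICS_PER_NODE):
        seed = node_id + period.to_bytes(8, "little") + i.to_bytes(8, "little")
        topics.add(int.from_bytes(sha256(seed).digest()[:8], "little") % NUM_VERTICAL_TOPICS)
    return topics
```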


@@ -23,6 +23,7 @@ The specification of these changes continues in the same format as the network s
- [Messages](#messages)
- [BeaconBlocksByRange v2](#beaconblocksbyrange-v2)
- [BeaconBlocksByRoot v2](#beaconblocksbyroot-v2)
+- [BeaconBlockAndBlobsSidecarByRoot v1](#beaconblockandblobssidecarbyroot-v1)
- [BlobsSidecarsByRange v1](#blobssidecarsbyrange-v1)
- [Design decision rationale](#design-decision-rationale)
- [Why are blobs relayed as a sidecar, separate from beacon blocks?](#why-are-blobs-relayed-as-a-sidecar-separate-from-beacon-blocks)
@@ -130,7 +131,8 @@ Per `context = compute_fork_digest(fork_version, genesis_validators_root)`:
**Protocol ID:** `/eth2/beacon_chain/req/beacon_blocks_by_root/2/`
The EIP-4844 fork-digest is introduced to the `context` enum to specify EIP-4844 beacon block type.
After `EIP4844_FORK_EPOCH`, `BeaconBlocksByRootV2` is replaced by `BeaconBlockAndBlobsSidecarByRootV1`
clients MUST support requesting blocks by root for pre-fork-epoch blocks.
Per `context = compute_fork_digest(fork_version, genesis_validators_root)`:
@@ -142,7 +144,42 @@ Per `context = compute_fork_digest(fork_version, genesis_validators_root)`:
| `ALTAIR_FORK_VERSION` | `altair.SignedBeaconBlock` |
| `BELLATRIX_FORK_VERSION` | `bellatrix.SignedBeaconBlock` |
| `CAPELLA_FORK_VERSION` | `capella.SignedBeaconBlock` |
| `EIP4844_FORK_VERSION` | `eip4844.SignedBeaconBlock` |
#### BeaconBlockAndBlobsSidecarByRoot v1
**Protocol ID:** `/eth2/beacon_chain/req/beacon_block_and_blobs_sidecar_by_root/1/`
Request Content:
```
(
List[Root, MAX_REQUEST_BLOCKS]
)
```
Response Content:
```
(
List[SignedBeaconBlockAndBlobsSidecar, MAX_REQUEST_BLOCKS]
)
```
Requests blocks by block root (= `hash_tree_root(SignedBeaconBlockAndBlobsSidecar.beacon_block.message)`).
The response is a list of `SignedBeaconBlockAndBlobsSidecar` whose length is less than or equal to the number of requests.
It may be less in the case that the responding peer is missing blocks and sidecars.
No more than `MAX_REQUEST_BLOCKS` may be requested at a time.
`BeaconBlockAndBlobsSidecarByRoot` is primarily used to recover recent blocks and sidecars (e.g. when receiving a block or attestation whose parent is unknown).
The response MUST consist of zero or more `response_chunk`.
Each _successful_ `response_chunk` MUST contain a single `SignedBeaconBlockAndBlobsSidecar` payload.
Clients MUST support requesting blocks and sidecars since the latest finalized epoch.
Clients MUST respond with at least one block and sidecar, if they have it.
Clients MAY limit the number of blocks and sidecars in the response.
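As a rough illustration of the rules above, a responder could assemble the reply as in the following sketch. The store lookup and the `MAX_REQUEST_BLOCKS` value are assumptions for illustration, not spec definitions.

```python
MAX_REQUEST_BLOCKS = 1024  # assumed limit for illustration; use the spec's configured value

def serve_block_and_blobs_sidecar_by_root(requested_roots, store):
    """Return at most one SignedBeaconBlockAndBlobsSidecar per requested root,
    skipping roots this node does not have; each returned item corresponds to
    one successful response_chunk."""
    assert len(requested_roots) <= MAX_REQUEST_BLOCKS
    response = []
    for root in requested_roots:
        item = store.get(root)  # hypothetical mapping: block root -> SignedBeaconBlockAndBlobsSidecar
        if item is not None:
            response.append(item)
    return response
```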
#### BlobsSidecarsByRange v1


@@ -32,7 +32,7 @@
- [`interpolate_polynomial`](#interpolate_polynomial)
- [`evaluate_polynomial_in_evaluation_form`](#evaluate_polynomial_in_evaluation_form)
- [KZG Operations](#kzg-operations)
-- [Elliptic curve helper functoins](#elliptic-curve-helper-functoins)
+- [Elliptic curve helper functions](#elliptic-curve-helper-functions)
- [`elliptic_curve_lincomb`](#elliptic_curve_lincomb)
- [Hash to field](#hash-to-field)
- [`hash_to_bls_field`](#hash_to_bls_field)
@@ -47,7 +47,7 @@
## Introduction
-This document specifies basic polynomial operations and KZG polynomial commitment operations as they are needed for the sharding specification. The implementations are not optimized for performance, but readability. All practical implementations should optimize the polynomial operations, and hints what the best known algorithms for these implementations are are included below.
+This document specifies basic polynomial operations and KZG polynomial commitment operations as they are needed for the sharding specification. The implementations are not optimized for performance, but readability. All practical implementations should optimize the polynomial operations, and hints what the best known algorithms for these implementations are included below.
## Constants
@@ -313,7 +313,7 @@ def evaluate_polynomial_in_evaluation_form(poly: BLSPolynomialByEvaluations, x:
We are using the KZG10 polynomial commitment scheme (Kate, Zaverucha and Goldberg, 2010: https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf).
-### Elliptic curve helper functoins
+### Elliptic curve helper functions
#### `elliptic_curve_lincomb`
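As an illustration of the readability-over-performance trade-off described in the introduction, a naive G1 linear combination can be written directly with py_ecc primitives, as sketched below. This is not the spec's definition of `elliptic_curve_lincomb`, only the shape of the operation; optimized implementations would replace the loop with a multi-scalar multiplication algorithm such as Pippenger's.

```python
from py_ecc.optimized_bls12_381 import Z1, add, multiply

def naive_g1_lincomb(points, scalars):
    """Compute sum_i scalars[i] * points[i] in G1, one scalar multiplication at a time."""
    assert len(points) == len(scalars)
    result = Z1  # the point at infinity (group identity)
    for point, scalar in zip(points, scalars):
        result = add(result, multiply(point, scalar))
    return result
```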


@@ -143,7 +143,7 @@ This ensures that blocks are only optimistically imported if one or more of the
following are true:
1. The parent of the block has execution enabled.
-1. The current slot (as per the system clock) is at least
+2. The current slot (as per the system clock) is at least
`SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY` ahead of the slot of the block being
imported.
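Taken together, the two conditions amount to a check along the lines of the sketch below, written in the pyspec style; the `is_execution_block` helper and the shape of the block store are assumed here rather than quoted from the optimistic sync spec.

```python
def should_optimistically_import(blocks, current_slot, block) -> bool:
    # Condition 1: the parent block already has execution enabled.
    if is_execution_block(blocks[block.parent_root]):  # assumed helper
        return True
    # Condition 2: the block's slot is far enough behind the wall clock.
    if block.slot + SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY <= current_slot:
        return True
    return False
```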


@@ -10,12 +10,12 @@ Use an OS that has Python 3.8 or above. For example, Debian 11 (bullseye)
```sh
sudo apt install -y make git wget python3-venv gcc python3-dev
```
-1. Download the latest [consensus specs](https://github.com/ethereum/consensus-specs)
+2. Download the latest [consensus specs](https://github.com/ethereum/consensus-specs)
```sh
git clone https://github.com/ethereum/consensus-specs.git
cd consensus-specs
```
-1. Create the specifications and tests:
+3. Create the specifications and tests:
```sh
make install_test
make pyspec
@@ -31,12 +31,12 @@ To read more about creating the environment, [see here](core/pyspec/README.md).
cd ~/consensus-specs
. venv/bin/activate
```
-1. Run a sanity check test against Altair fork:
+2. Run a sanity check test against Altair fork:
```sh
cd tests/core/pyspec
python -m pytest -k test_empty_block_transition --fork altair eth2spec
```
-1. The output should be similar to:
+3. The output should be similar to:
```
============================= test session starts ==============================
platform linux -- Python 3.9.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
@@ -114,7 +114,7 @@ In Python `yield` is used by [generators](https://wiki.python.org/moin/Generator
we can treat it as a partial return statement that doesn't stop the function's processing, only adds to a list
of return values. Here we add two values, the string `'pre'` and the initial state, to the list of return values.
-[You can read more about test generators and how the are used here](generators).
+[You can read more about test generators and how they are used here](generators).
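In the test body, the step just described is literally a `yield` of a label and a value; a minimal generator-style sketch (not the full test) looks like this:

```python
def test_sketch(spec, state):
    # Each yield contributes one named piece of the generated test vector.
    yield 'pre', state  # the string 'pre' and the initial state, as described above
```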
```python
block = build_empty_block_for_next_slot(spec, state)
@@ -417,7 +417,7 @@ In the last line you can see two conditions being asserted:
1. `data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot` which verifies that the attestation doesn't
arrive too early.
-1. `state.slot <= data.slot + SLOTS_PER_EPOCH` which verifies that the attestation doesn't
+2. `state.slot <= data.slot + SLOTS_PER_EPOCH` which verifies that the attestation doesn't
arrive too late.
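For reference, the assertion being discussed combines both bounds in a single chained comparison, along these lines (paraphrased from the phase 0 `process_attestation` logic):

```python
assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= data.slot + SLOTS_PER_EPOCH
```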
This is how the consensus layer tests deal with edge cases, by asserting the conditions required for the
@@ -431,7 +431,7 @@ Now we'll write a similar test that verifies that being `SLOTS_PER_EPOCH` away i
`test_after_epoch_slots` function. We need two changes:
1. Call `transition_to_slot_via_block` with one less slot to advance
-1. Don't tell `run_attestation_processing` to return an empty post state.
+2. Don't tell `run_attestation_processing` to return an empty post state.
The modified function is:


@@ -212,7 +212,7 @@ def slash_some_validators_for_inactivity_scores_test(spec, state, rng=Random(404
next_epoch_via_block(spec, future_state)
proposer_index = spec.get_beacon_proposer_index(future_state)
-# Slash ~1/4 of validaors
+# Slash ~1/4 of validators
for validator_index in range(len(state.validators)):
if rng.choice(range(4)) == 0 and validator_index != proposer_index:
spec.slash_validator(state, validator_index)


@@ -136,6 +136,6 @@ def test_inactivity_scores_full_participation_leaking(spec, state):
yield 'blocks', [signed_block]
yield 'post', state
-# Full particiaption during a leak so all scores should decrease by 1
+# Full participation during a leak so all scores should decrease by 1
for pre, post in zip(previous_inactivity_scores, state.inactivity_scores):
assert post == pre - 1


@@ -55,7 +55,7 @@ def test_is_assigned_to_sync_committee(spec, state):
disqualified_pubkeys = set(
filter(lambda key: key not in sync_committee_pubkeys, active_pubkeys)
)
-# NOTE: only check `disqualified_pubkeys` if SYNC_COMMITEE_SIZE < validator count
+# NOTE: only check `disqualified_pubkeys` if SYNC_COMMITTEE_SIZE < validator count
if disqualified_pubkeys:
sample_size = 3
assert validator_count >= sample_size


@@ -255,7 +255,7 @@ def state_transition_with_full_block(spec,
sync_aggregate=None,
block=None):
"""
-Build and apply a block with attestions at the calculated `slot_to_attest` of current epoch and/or previous epoch.
+Build and apply a block with attestations at the calculated `slot_to_attest` of current epoch and/or previous epoch.
"""
if block is None:
block = build_empty_block_for_next_slot(spec, state)


@@ -206,7 +206,7 @@ def run_get_inclusion_delay_deltas(spec, state):
rewarded_proposer_indices.add(earliest_attestation.proposer_index)
# Ensure all expected proposers have been rewarded
-# Track rewarde indices
+# Track reward indices
proposing_indices = [a.proposer_index for a in eligible_attestations]
for index in proposing_indices:
if index in rewarded_proposer_indices:


@@ -7,7 +7,7 @@ def set_validator_fully_withdrawable(spec, state, index, withdrawable_epoch=None
validator = state.validators[index]
validator.withdrawable_epoch = withdrawable_epoch
-# set exit epoch as well to avoid interactions with other epoch process, e.g. forced ejecions
+# set exit epoch as well to avoid interactions with other epoch process, e.g. forced ejections
if validator.exit_epoch > withdrawable_epoch:
validator.exit_epoch = withdrawable_epoch


@@ -88,7 +88,7 @@ def test_activation_queue_no_activation_no_finality(spec, state):
def test_activation_queue_sorting(spec, state):
churn_limit = spec.get_validator_churn_limit(state)
-# try to activate more than the per-epoch churn linmit
+# try to activate more than the per-epoch churn limit
mock_activations = churn_limit * 2
epoch = spec.get_current_epoch(state)


@@ -40,7 +40,7 @@ Blocks must be processed in order, following the main transition function
Blocks are encoded as `SignedBeaconBlock`s from the relevant spec version
as indicated by the `post_fork` and `fork_block` data in the `meta.yaml`.
-As blocks span fork boundaires, a `fork_block` number is given in
+As blocks span fork boundaries, a `fork_block` number is given in
the `meta.yaml` to help resolve which blocks belong to which fork.
The `fork_block` is the index in the test data of the **last** block