Apply suggestions from @djrtwo

Co-authored-by: Danny Ryan <dannyjryan@gmail.com>
Diederik Loerakker 2021-03-30 01:33:17 +02:00 committed by GitHub
parent d28cac0e8f
commit 430627f290
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
7 changed files with 32 additions and 34 deletions

View File

@@ -2,7 +2,7 @@
# Fork
# ---------------------------------------------------------------
SHARDING_FORK_VERSION: 0x03000000
SHARDING_FORK_VERSION: 0x03000001
# TBD
SHARDING_FORK_SLOT: 0
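
For orientation, a minimal sketch of how the bumped version feeds into fork separation, assuming the phase0 `compute_fork_digest(current_version, genesis_validators_root)` helper and `Version` type; the `genesis_validators_root` value here is a placeholder.

```python
# Sketch only, using phase0 helpers: the new SHARDING_FORK_VERSION yields a new
# fork digest, which separates gossip topics and Req-Resp across the fork.
SHARDING_FORK_VERSION = Version('0x03000001')
fork_digest = compute_fork_digest(SHARDING_FORK_VERSION, genesis_validators_root)
```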

View File

@@ -1,8 +1,8 @@
# Ethereum 2.0 Phase 1 -- Honest Validator
# Ethereum 2.0 Custody Game -- Honest Validator
**Notice**: This document is a work-in-progress for researchers and implementers.
This is an accompanying document to [Ethereum 2.0 Phase 1](./), which describes the expected actions of a "validator"
participating in the Ethereum 2.0 Phase 1 protocol.
This is an accompanying document to the [Ethereum 2.0 Custody Game](./), which describes the expected actions of a "validator"
participating in the Ethereum 2.0 Custody Game.
## Table of contents
@@ -29,14 +29,14 @@ participating in the Ethereum 2.0 Phase 1 protocol.
## Prerequisites
This document is an extension of the [Phase 0 -- Validator](../phase0/validator.md). All behaviors and definitions defined in the Phase 0 doc carry over unless explicitly noted or overridden.
This document is an extension of the [Sharding -- Validator](../sharding/validator.md). All behaviors and definitions defined in the Sharding doc carry over unless explicitly noted or overridden.
All terminology, constants, functions, and protocol mechanics defined in the [Phase 1 -- The Beacon Chain](./beacon-chain.md) and [Phase 1 -- Custody Game](./custody-game.md)
docs are requisite for this document and used throughout. Please see the Phase 1 docs before continuing and use them as a reference throughout.
All terminology, constants, functions, and protocol mechanics defined in the [Custody Game -- The Beacon Chain](./beacon-chain.md)
docs are requisite for this document and used throughout. Please see the Custody Game docs before continuing and use them as a reference throughout.
## Becoming a validator
Becoming a validator in Phase 1 is unchanged from Phase 0. See the [Phase 0 validator guide](../phase0/validator.md#becoming-a-validator) for details.
Becoming a validator in Custody Game is unchanged from Phase 0. See the [Phase 0 validator guide](../phase0/validator.md#becoming-a-validator) for details.
## Beacon chain validator assignments

View File

@@ -1,4 +1,4 @@
# Ethereum 2.0 Data Availability Sampling Core
# Ethereum 2.0 Data Availability Sampling -- Core
**Notice**: This document is a work-in-progress for researchers and implementers.

View File

@@ -1,4 +1,4 @@
# Ethereum 2.0 Data Availability Sampling Fork Choice
# Ethereum 2.0 Data Availability Sampling -- Fork Choice
**Notice**: This document is a work-in-progress for researchers and implementers.
@@ -17,9 +17,9 @@
## Introduction
This document is the beacon chain fork choice spec for part of Ethereum 2.0 Phase 1. The only change that we add from phase 0 is that we add a concept of "data dependencies";
This document is the beacon chain fork choice spec for Ethereum 2.0 Data Availability Sampling. The only change that we add from phase 0 is that we add a concept of "data dependencies";
a block is only eligible for consideration in the fork choice after a data availability test has been successfully completed for all dependencies.
The "root" of a shard block for data dependency purposes is considered to be a DataCommitment object, which is a pair of a Kate commitment and a length.
The "root" of a shard block for data dependency purposes is considered to be a `DataCommitment` object, which is a pair of a Kate commitment and a length.
## Dependency calculation

View File

@@ -1,4 +1,4 @@
# Ethereum 2.0 Data Availability Sampling Network specification
# Ethereum 2.0 Data Availability Sampling -- Network specification
**Notice**: This document is a work-in-progress for researchers and implementers.
@@ -68,7 +68,7 @@ At full operation, the network has one proposer, per shard, per slot.
In the push-model, there are:
- *Vertical subnets*: Sinks can subscribe to indices of samples: there is a sample-to-subnet mapping.
- *Horizontal subnets*: Sources need to distribute samples to all vertical networks: they participate in a fanout layer.
- *Horizontal subnets*: Sources need to distribute samples to all vertical networks: they participate in a fan-out layer.
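
To make the sample-to-subnet mapping concrete, a hypothetical sketch follows; the spec leaves the hash undefined (see the `TODO: define hash` note in the validation conditions below), so `sha256` and the `subnet_count` parameter are illustration-only assumptions.

```python
from hashlib import sha256

def compute_subnet_for_sample(shard: int, slot: int, index: int, subnet_count: int) -> int:
    # Hypothetical: hash the (shard, slot, index) tuple into a vertical subnet index.
    data = shard.to_bytes(8, 'little') + slot.to_bytes(8, 'little') + index.to_bytes(8, 'little')
    return int.from_bytes(sha256(data).digest()[:8], 'little') % subnet_count
```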
### Horizontal subnets
@@ -84,7 +84,7 @@ it may publish to all its peers on the subnet, instead of just those in its mesh
#### Horizontal propagation
Peers on the horizontal subnet are expected to at least perform regular propagation of shard blocks, like how do would participate in any other topic.
Peers on the horizontal subnet are expected to at least perform regular propagation of shard blocks, like participation in any other topic.
*Although this may be sufficient for testnets, expect parameter changes in the spec here.*
@@ -137,7 +137,7 @@ Backbone subscription work is outlined in the [DAS participation spec](sampling.
#### Quick Rotation: Sampling
A node MUST maintain `k` random subscriptions to topics, and rotate these according to the [DAS participation spec](sampling.md#quick-rotation-sampling).
If the node does not already have connected peers on the topic it needs to sample, it can search its peerstore, and if necessary in the DHT, for peers in the topic backbone.
If the node does not already have connected peers on the topic it needs to sample, it can search its peerstore and, if necessary, in the DHT for peers in the topic backbone.
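
A minimal sketch of such a rotation, with the function name and eviction policy as assumptions; the authoritative policy lives in the DAS participation spec.

```python
import random
from typing import Set

def rotate_sample_subscriptions(subscribed: Set[int], k: int, subnet_count: int) -> Set[int]:
    # Sketch: drop one randomly chosen subscription, then top back up to `k`
    # subscriptions with fresh random subnet indices.
    subs = set(subscribed)
    if subs:
        subs.remove(random.choice(sorted(subs)))
    while len(subs) < k:
        subs.add(random.randrange(subnet_count))
    return subs
```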
## DAS in the Gossip domain: Push
@@ -148,13 +148,13 @@ Following the same scheme as the [Phase0 gossip topics](../phase0/p2p-interface.
| Name | Message Type |
|----------------------------------|---------------------------|
| `das_sample_{subnet_index}` | `DASSample` |
Also see the [Phase1 general networking spec](./p2p-phase1.md) for important topics such as that of the shard-blobs and shard-headers.
Also see the [Sharding general networking spec](../sharding/p2p-interface.md) for important topics such as that of the shard-blobs and shard-headers.
#### Horizontal subnets: `shard_blob_{shard}`
Extending the regular `shard_blob_{shard}` as [defined in the Phase1 networking specification](./p2p-phase1.md#shard-blobs-shard_blob_shard)
Extending the regular `shard_blob_{shard}` as [defined in the Sharding networking specification](../sharding/p2p-interface.md#shard-blobs-shard_blob_shard)
If participating in DAS, upon receiving a `signed_blob` for the first time, with a `slot` not older than `MAX_RESAMPLE_TIME`,
If participating in DAS, upon receiving a `signed_blob` for the first time with a `slot` not older than `MAX_RESAMPLE_TIME`,
a subscriber of a `shard_blob_{shard}` SHOULD reconstruct the samples and publish them to vertical subnets.
Take `blob = signed_blob.blob`:
1. Extend the data: `extended_data = extend_data(blob.data)`
@@ -171,20 +171,20 @@ against the commitment to blob polynomial, specific to that `(shard, slot)` key.
The following validations MUST pass before forwarding the `sample` on the vertical subnet.
- _[IGNORE]_ The commitment for the (`sample.shard`, `sample.slot`, `sample.index`) tuple must be known.
If not known, the client MAY queue the sample, if it passes formatting conditions.
If not known, the client MAY queue the sample if it passes formatting conditions.
- _[REJECT]_ `sample.shard`, `sample.slot` and `sample.index` are hashed into a `subnet_index` (TODO: define hash) which MUST match the topic `{subnet_index}` parameter.
- _[REJECT]_ `sample.shard` must be within valid range: `0 <= sample.shard < get_active_shard_count(state, compute_epoch_at_slot(sample.slot))`.
- _[REJECT]_ `sample.index` must be within valid range: `0 <= sample.index < sample_count`, where:
- `sample_count = (points_count + POINTS_PER_SAMPLE - 1) // POINTS_PER_SAMPLE`
- `points_count` is the length as claimed along with the commitment, which must be smaller than `MAX_SAMPLES_PER_BLOCK`.
- _[IGNORE]_ The `sample` is not from a future slot (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) --
i.e. validate that `sample.slot <= current_slot`. A client MAY queue future samples for processing at the appropriate slot, if it passed formatting conditions.
i.e. validate that `sample.slot <= current_slot`. A client MAY queue future samples for processing at the appropriate slot if it passed formatting conditions.
- _[IGNORE]_ This is the first received sample with the (`sample.shard`, `sample.slot`, `sample.index`) key tuple.
- _[REJECT]_ As already limited by the SSZ list-limit, it is important that the sample data is well-formatted and not too large.
- _[REJECT]_ The `sample.data` MUST NOT contain any point `p >= MODULUS`. Although it is a `uint256`, not the full 256-bit range is valid.
- _[REJECT]_ The `sample.proof` MUST be valid: `verify_sample(sample, sample_count, commitment)`
Upon receiving a valid sample, it SHOULD be retained for a buffer period, if the local node is part of the backbone that covers this sample.
Upon receiving a valid sample, it SHOULD be retained for a buffer period if the local node is part of the backbone that covers this sample.
This is to serve other peers that may have missed it.
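
As a worked example of the `sample_count` bound in the validation list above (the `POINTS_PER_SAMPLE = 8` value is an assumption for this example):

```python
POINTS_PER_SAMPLE = 8  # assumption for this example

points_count = 100  # length claimed along with the commitment
sample_count = (points_count + POINTS_PER_SAMPLE - 1) // POINTS_PER_SAMPLE
assert sample_count == 13  # 12 full samples of 8 points, plus one partial sample
```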
@@ -194,7 +194,7 @@ To pull samples from nodes, in case of network instability when samples are unav
This builds on top of the protocol identification and encoding spec which was introduced in [the Phase0 network spec](../phase0/p2p-interface.md).
Note that the Phase1 DAS networking uses a different protocol prefix: `/eth2/das/req`
Note that DAS networking uses a different protocol prefix: `/eth2/das/req`
The result codes are extended with:
- 3: **ResourceUnavailable** -- when the request was valid but cannot be served at this point in time.
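
For illustration, a full protocol ID under this prefix might look as follows, following the phase0 `/ProtocolPrefix/MessageName/SchemaVersion/Encoding` convention; the method name `query` and the `ssz_snappy` encoding are hypothetical here, not taken from the spec.

```python
DAS_PROTOCOL_PREFIX = '/eth2/das/req'

# Hypothetical message name and encoding, per the phase0 protocol ID convention.
protocol_id = f'{DAS_PROTOCOL_PREFIX}/query/1/ssz_snappy'
assert protocol_id == '/eth2/das/req/query/1/ssz_snappy'
```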

View File

@@ -90,12 +90,11 @@ The following values are (non-configurable) constants used throughout the specif
| Name | Value | Notes |
| - | - | - |
| `MAX_SHARDS` | `uint64(2**10)` (= 1024) | Theoretical max shard count (used to determine data structure sizes) |
| `MAX_SHARDS` | `uint64(2**10)` (= 1,024) | Theoretical max shard count (used to determine data structure sizes) |
| `INITIAL_ACTIVE_SHARDS` | `uint64(2**6)` (= 64) | Initial shard count |
| `GASPRICE_ADJUSTMENT_COEFFICIENT` | `uint64(2**3)` (= 8) | Gasprice may decrease/increase by at most exp(1 / this value) *per epoch* |
| `MAX_SHARD_HEADERS_PER_SHARD` | `4` | |
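
To make the `GASPRICE_ADJUSTMENT_COEFFICIENT` note concrete: `exp(1/8) ≈ 1.133`, so the gasprice moves by at most roughly +13.3% (or the inverse, about -11.7%) in a single epoch. A quick check:

```python
import math

GASPRICE_ADJUSTMENT_COEFFICIENT = 8
max_increase = math.exp(1 / GASPRICE_ADJUSTMENT_COEFFICIENT)   # ~1.1331 per epoch
max_decrease = math.exp(-1 / GASPRICE_ADJUSTMENT_COEFFICIENT)  # ~0.8825 per epoch
assert 1.13 < max_increase < 1.14 and 0.88 < max_decrease < 0.89
```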
### Shard block configs
| Name | Value | Notes |
@@ -147,7 +146,7 @@ class AttestationData(Container):
source: Checkpoint
target: Checkpoint
# Shard header root
shard_header_root: Root
shard_header_root: Root # [New in Sharding]
```
### `BeaconBlockBody`
@@ -419,8 +418,8 @@ def process_block(state: BeaconState, block: BeaconBlock) -> None:
process_block_header(state, block)
process_randao(state, block.body)
process_eth1_data(state, block.body)
process_operations(state, block.body) # [Modified]
process_application_payload(state, block.body) # [Part of the Merge]
process_operations(state, block.body) # [Modified in Sharding]
process_application_payload(state, block.body) # [New in Merge]
```
#### Operations
@@ -559,7 +558,7 @@ def process_epoch(state: BeaconState) -> None:
# Sharding
process_pending_headers(state)
charge_confirmed_header_fees(state)
process_confirmed_header_fees(state)
reset_pending_headers(state)
# Final updates
@@ -577,7 +576,6 @@ def process_epoch(state: BeaconState) -> None:
#### Pending headers
```python
def process_pending_headers(state: BeaconState) -> None:
# Pending header processing applies to the previous epoch.
# Skip if `GENESIS_EPOCH` because no prior epoch to process.
@@ -668,7 +666,7 @@ def reset_pending_headers(state: BeaconState) -> None:
next_epoch = get_current_epoch(state) + 1
next_epoch_start_slot = compute_start_slot_at_epoch(next_epoch)
for slot in range(next_epoch_start_slot, next_epoch_start_slot + SLOTS_PER_EPOCH):
for index in range(get_committee_count_per_slot(next_epoch)
for index in range(get_committee_count_per_slot(state, next_epoch)):
shard = compute_shard_from_committee_index(state, slot, index)
committee_length = len(get_beacon_committee(state, slot, index))
state.current_epoch_pending_shard_headers.append(PendingShardHeader(