commit fde9b410f0
@@ -79,16 +79,16 @@ jobs:
       # Restore git repo at point close to target branch/revision, to speed up checkout
       - restore_cache:
           keys:
-            - v2-specs-repo-{{ .Branch }}-{{ .Revision }}
-            - v2-specs-repo-{{ .Branch }}-
-            - v2-specs-repo-
+            - v3-specs-repo-{{ .Branch }}-{{ .Revision }}
+            - v3-specs-repo-{{ .Branch }}-
+            - v3-specs-repo-
       - checkout
       - run:
           name: Clean up git repo to reduce cache size
           command: git gc
       # Save the git checkout as a cache, to make cloning next time faster.
       - save_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
           paths:
             - ~/specs-repo
   install_pyspec_test:
@@ -97,7 +97,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
       - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
       - restore_pyspec_cached_venv
       - run:
           name: Install pyspec requirements
@@ -109,7 +109,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
       - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
       - restore_pyspec_cached_venv
       - run:
           name: Run py-tests
@@ -140,7 +140,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
       - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
       - restore_pyspec_cached_venv
       - run:
           name: Run linter
@@ -152,7 +152,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
      - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
      - restore_deposit_contract_compiler_cached_venv
      - run:
          name: Install deposit contract compiler requirements
@@ -164,7 +164,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
      - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
      - restore_deposit_contract_tester_cached_venv
      - run:
          name: Install deposit contract tester requirements
@@ -176,7 +176,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
      - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
      - restore_deposit_contract_compiler_cached_venv
      - run:
          name: Run deposit contract compile test
@@ -187,7 +187,7 @@ jobs:
     working_directory: ~/specs-repo
     steps:
      - restore_cache:
-          key: v2-specs-repo-{{ .Branch }}-{{ .Revision }}
+          key: v3-specs-repo-{{ .Branch }}-{{ .Revision }}
      - restore_deposit_contract_tester_cached_venv
      - run:
          name: Run deposit contract test
Makefile
@@ -117,7 +117,7 @@ install_deposit_contract_compiler:
 
 compile_deposit_contract:
 	cd $(DEPOSIT_CONTRACT_COMPILER_DIR); . venv/bin/activate; \
-	python3.7 deposit_contract/compile.py contracts/validator_registration.vy
+	python3.7 deposit_contract/compile.py ../contracts/validator_registration.vy
 
 test_compile_deposit_contract:
 	cd $(DEPOSIT_CONTRACT_COMPILER_DIR); . venv/bin/activate; \
@@ -682,14 +682,10 @@ def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationDa
 ```python
 def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
     """
-    Check if ``indexed_attestation`` has valid indices and signature.
+    Check if ``indexed_attestation`` has sorted and unique indices and a valid aggregate signature.
     """
-    indices = indexed_attestation.attesting_indices
-
-    # Verify max number of indices
-    if not len(indices) <= MAX_VALIDATORS_PER_COMMITTEE:
-        return False
     # Verify indices are sorted and unique
+    indices = indexed_attestation.attesting_indices
     if not indices == sorted(set(indices)):
         return False
     # Verify aggregate signature
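Note on the retained check: `sorted(set(indices))` both deduplicates and orders the indices, so the equality test rejects duplicates and out-of-order lists. A throwaway sketch (plain Python, not spec code; the helper name is invented here):

```python
def has_sorted_unique_indices(indices: list) -> bool:
    # sorted(set(...)) removes duplicates and orders ascending, so any
    # duplicated or unsorted input fails the equality check.
    return indices == sorted(set(indices))

assert has_sorted_unique_indices([1, 5, 9])          # sorted and unique
assert not has_sorted_unique_indices([5, 1, 9])      # out of order
assert not has_sorted_unique_indices([1, 5, 5, 9])   # duplicate index
```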
@@ -1192,7 +1188,7 @@ Let `genesis_block = BeaconBlock(state_root=hash_tree_root(genesis_state))`.
 
 ## Beacon chain state transition function
 
-The post-state corresponding to a pre-state `state` and a signed block `signed_block` is defined as `state_transition(state, signed_block)`. State transitions that trigger an unhandled exception (e.g. a failed `assert` or an out-of-range list access) are considered invalid.
+The post-state corresponding to a pre-state `state` and a signed block `signed_block` is defined as `state_transition(state, signed_block)`. State transitions that trigger an unhandled exception (e.g. a failed `assert` or an out-of-range list access) are considered invalid. State transitions that cause a `uint64` overflow or underflow are also considered invalid.
 
 ```python
 def state_transition(state: BeaconState, signed_block: SignedBeaconBlock, validate_result: bool=True) -> BeaconState:
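Note on the added overflow/underflow sentence: any `uint64` arithmetic that would leave the 64-bit range makes the transition invalid, just like a failed `assert`. A minimal illustrative sketch (not spec code; the helper name is invented):

```python
UINT64_MAX = 2**64 - 1

def checked_uint64_sub(a: int, b: int) -> int:
    # Surfacing an out-of-range result as an exception is one way an
    # implementation can make the enclosing state transition invalid.
    result = a - b
    if result < 0 or result > UINT64_MAX:
        raise ValueError("uint64 overflow/underflow: state transition is invalid")
    return result
```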
@@ -319,7 +319,7 @@ Attestation subnets are used to propagate unaggregated attestations to subsectio
 - The attestation's committee index (`attestation.data.index`) is for the correct subnet.
 - `attestation.data.slot` is within the last `ATTESTATION_PROPAGATION_SLOT_RANGE` slots (within a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance) -- i.e. `attestation.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= attestation.data.slot` (a client MAY queue future attestations for processing at the appropriate slot).
 - The attestation is unaggregated -- that is, it has exactly one participating validator (`len([bit for bit in attestation.aggregation_bits if bit == 0b1]) == 1`).
-- There has been no other attestation seen on an attestation subnet that has an identical `attestation.data.target.epoch` and participating validator index.
+- There has been no other valid attestation seen on an attestation subnet that has an identical `attestation.data.target.epoch` and participating validator index.
 - The block being voted for (`attestation.data.beacon_block_root`) passes validation.
 - The signature of `attestation` is valid.
 
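Note: two of the conditions above are purely arithmetic and can be checked locally. A rough sketch (helper names invented; `current_slot` and the constants are assumed to come from the client's clock and configuration):

```python
def within_propagation_window(attestation_slot: int, current_slot: int,
                              propagation_slot_range: int) -> bool:
    # attestation.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= attestation.data.slot
    return attestation_slot + propagation_slot_range >= current_slot >= attestation_slot

def is_unaggregated(aggregation_bits) -> bool:
    # Unaggregated means exactly one participating validator.
    return len([bit for bit in aggregation_bits if bit == 0b1]) == 1
```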
@@ -345,7 +345,9 @@ Topics are post-fixed with an encoding. Encodings define how the payload of a go
 
 #### Mainnet
 
-- `ssz_snappy` - All objects are SSZ-encoded and then compressed with [Snappy](https://github.com/google/snappy). Example: The beacon aggregate attestation topic string is `/eth2/beacon_aggregate_and_proof/ssz_snappy`, and the data field of a gossipsub message is an `AggregateAndProof` that has been SSZ-encoded and then compressed with Snappy.
+- `ssz_snappy` - All objects are SSZ-encoded and then compressed with [Snappy](https://github.com/google/snappy) block compression. Example: The beacon aggregate attestation topic string is `/eth2/beacon_aggregate_and_proof/ssz_snappy`, and the data field of a gossipsub message is an `AggregateAndProof` that has been SSZ-encoded and then compressed with Snappy.
+
+Snappy has two formats: "block" and "frames" (streaming). Gossip messages remain relatively small (100s of bytes to 100s of kilobytes) so [basic snappy block compression](https://github.com/google/snappy/blob/master/format_description.txt) is used to avoid the additional overhead associated with snappy frames.
 
 Implementations MUST use a single encoding. Changing an encoding will require coordination between participating implementations.
 
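Note: for gossip the payload is compressed with Snappy's block format. A minimal sketch, assuming the `python-snappy` package (import name `snappy`), whose `compress`/`uncompress` functions operate on that block format:

```python
import snappy  # assumption: python-snappy is installed

def encode_gossip_payload(ssz_bytes: bytes) -> bytes:
    # SSZ serialization is assumed to have happened already.
    return snappy.compress(ssz_bytes)

def decode_gossip_payload(compressed: bytes) -> bytes:
    return snappy.uncompress(compressed)
```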
@@ -448,7 +450,7 @@ Here, `result` represents the 1-byte response code.
 The token of the negotiated protocol ID specifies the type of encoding to be used for the req/resp interaction. Two values are possible at this time:
 
 - `ssz`: the contents are [SSZ-encoded](../../ssz/simple-serialize.md). This encoding type MUST be supported by all clients. For objects containing a single field, only the field is SSZ-encoded not a container with a single field. For example, the `BeaconBlocksByRoot` request is an SSZ-encoded list of `Root`'s.
-- `ssz_snappy`: The contents are SSZ-encoded and then compressed with [Snappy](https://github.com/google/snappy). MAY be supported in the interoperability testnet; MUST be supported in mainnet.
+- `ssz_snappy`: The contents are SSZ-encoded and then compressed with [Snappy](https://github.com/google/snappy) frames compression. MAY be supported in the interoperability testnet; MUST be supported in mainnet.
 
 #### SSZ-encoding strategy (with or without Snappy)
 
@@ -461,7 +463,7 @@ Snappy has two formats: "block" and "frames" (streaming). To support large reque
 Since snappy frame contents [have a maximum size of `65536` bytes](https://github.com/google/snappy/blob/master/framing_format.txt#L104)
 and frame headers are just `identifier (1) + checksum (4)` bytes, the expected buffering of a single frame is acceptable.
 
-**Encoding-dependent header:** Req/Resp protocols using the `ssz` or `ssz_snappy` encoding strategies MUST encode the length of the raw SSZ bytes, encoded as an unsigned [protobuf varint](https://developers.google.com/protocol-buffers/docs/encoding#varints).
+**Encoding-dependent header:** Req/Resp protocols using the `ssz` or `ssz_snappy` encoding strategies MUST encode the length of the raw SSZ bytes, encoded as an unsigned [protobuf varint](https://developers.google.com/protocol-buffers/docs/encoding#varints).
 
 *Writing*: By first computing and writing the SSZ byte length, the SSZ encoder can then directly write the chunk contents to the stream.
 If Snappy is applied, it can be passed through a buffered Snappy writer to compress frame by frame.
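Note: the length prefix is the raw (uncompressed) SSZ byte length written as an unsigned protobuf varint, before the (optionally snappy-framed) payload. A small sketch of the varint encoding (helper name invented; the `300 -> 0xAC 0x02` check is the example from the protobuf docs):

```python
def encode_unsigned_varint(value: int) -> bytes:
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # set the continuation bit
        else:
            out.append(byte)
            return bytes(out)

assert encode_unsigned_varint(300) == b"\xac\x02"
```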
@@ -575,7 +577,7 @@ Response Content:
 )
 ```
 
-Requests count beacon blocks from the peer starting from `start_slot`, leading up to the current head block as selected by fork choice. `step` defines the slot increment between blocks. For example, requesting blocks starting at `start_slot` 2 with a step value of 2 would return the blocks at slots [2, 4, 6, …]. In cases where a slot is empty for a given slot number, no block is returned. For example, if slot 4 were empty in the previous example, the returned array would contain [2, 6, …]. A step value of 1 returns all blocks on the range `[start_slot, start_slot + count)`.
+Requests beacon blocks in the slot range `[start_slot, start_slot + count * step)`, leading up to the current head block as selected by fork choice. `step` defines the slot increment between blocks. For example, requesting blocks starting at `start_slot` 2 with a step value of 2 would return the blocks at slots [2, 4, 6, …]. In cases where a slot is empty for a given slot number, no block is returned. For example, if slot 4 were empty in the previous example, the returned array would contain [2, 6, …].
 
 `BeaconBlocksByRange` is primarily used to sync historical blocks.
 
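Note: the new wording pins the requested range to `[start_slot, start_slot + count * step)`. A worked example matching the text (plain Python):

```python
# start_slot 2, count 3, step 2 -> slots 2, 4, 6
start_slot, count, step = 2, 3, 2
requested_slots = list(range(start_slot, start_slot + count * step, step))
assert requested_slots == [2, 4, 6]
# If slot 4 is empty, only the blocks at slots 2 and 6 are returned.
```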
@@ -755,7 +757,7 @@ where the fields of `ENRForkID` are defined as
 * `next_fork_version` is the fork version corresponding to the next planned hard fork at a future epoch. If no future fork is planned, set `next_fork_version = current_fork_version` to signal this fact
 * `next_fork_epoch` is the epoch at which the next fork is planned and the `current_fork_version` will be updated. If no future fork is planned, set `next_fork_epoch = FAR_FUTURE_EPOCH` to signal this fact
 
-*Note*: `fork_digest` is composed of values that are not known until the genesis block/state are available. Due to this, clients SHOULD NOT form ENRs and begin peer discovery until genesis values are known.
+*Note*: `fork_digest` is composed of values that are not known until the genesis block/state are available. Due to this, clients SHOULD NOT form ENRs and begin peer discovery until genesis values are known. One notable exception to this rule is the distribution of bootnode ENRs prior to genesis. In this case, bootnode ENRs SHOULD be initially distributed with `eth2` field set as `ENRForkID(fork_digest=compute_fork_digest(GENESIS_FORK_VERSION, b'\x00'*32), next_fork_version=GENESIS_FORK_VERSION, next_fork_epoch=FAR_FUTURE_EPOCH)`. After genesis values are known, the bootnodes SHOULD update ENRs to participate in normal discovery operations.
 
 Clients SHOULD connect to peers with `fork_digest`, `next_fork_version`, and `next_fork_epoch` that match local values.
 
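Note: a sketch of the pre-genesis bootnode `eth2` value described in the added text. The digest computation below is a simplified stand-in for the spec's `compute_fork_digest` (assuming the two-field `ForkData` container hashes as one SHA-256 over the 32-byte-padded version followed by the root), and the `GENESIS_FORK_VERSION` value is a placeholder:

```python
from hashlib import sha256

GENESIS_FORK_VERSION = b"\x00\x00\x00\x00"  # placeholder; set by network configuration in practice
FAR_FUTURE_EPOCH = 2**64 - 1

def compute_fork_digest(current_version: bytes, genesis_validators_root: bytes) -> bytes:
    # Simplified: hash_tree_root(ForkData) for two fixed-size fields reduces to
    # one SHA-256 over the padded version chunk followed by the 32-byte root.
    fork_data_root = sha256(current_version + b"\x00" * 28 + genesis_validators_root).digest()
    return fork_data_root[:4]

# Before genesis the validators root is unknown, so b'\x00' * 32 is used.
bootnode_eth2 = dict(
    fork_digest=compute_fork_digest(GENESIS_FORK_VERSION, b"\x00" * 32),
    next_fork_version=GENESIS_FORK_VERSION,
    next_fork_epoch=FAR_FUTURE_EPOCH,
)
```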
@@ -1 +1 @@
-0.11.1
+0.11.2