commit 0642ff2cad

Makefile
@@ -89,7 +89,7 @@ $(PY_SPEC_PHASE_0_TARGETS): $(PY_SPEC_PHASE_0_DEPS)
 	python3 $(SCRIPT_DIR)/build_spec.py -p0 $(SPEC_DIR)/core/0_beacon-chain.md $(SPEC_DIR)/core/0_fork-choice.md $(SPEC_DIR)/validator/0_beacon-chain-validator.md $@
 
 $(PY_SPEC_DIR)/eth2spec/phase1/spec.py: $(PY_SPEC_PHASE_1_DEPS)
-	python3 $(SCRIPT_DIR)/build_spec.py -p1 $(SPEC_DIR)/core/0_beacon-chain.md $(SPEC_DIR)/core/1_custody-game.md $(SPEC_DIR)/core/1_shard-data-chains.md $(SPEC_DIR)/core/0_fork-choice.md $@
+	python3 $(SCRIPT_DIR)/build_spec.py -p1 $(SPEC_DIR)/core/0_beacon-chain.md $(SPEC_DIR)/core/0_fork-choice.md $(SPEC_DIR)/light_client/merkle_proofs.md $(SPEC_DIR)/core/1_custody-game.md $(SPEC_DIR)/core/1_shard-data-chains.md $(SPEC_DIR)/core/1_beacon-chain-misc.md $@
 
 CURRENT_DIR = ${CURDIR}
README.md
@@ -1,15 +1,15 @@
 # Ethereum 2.0 Specifications
 
-[![Join the chat at https://gitter.im/ethereum/sharding](https://badges.gitter.im/ethereum/sharding.svg)](https://gitter.im/ethereum/sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+[![Join the chat at https://discord.gg/hpFs23p](https://img.shields.io/badge/chat-on%20discord-blue.svg)](https://discord.gg/hpFs23p) [![Join the chat at https://gitter.im/ethereum/sharding](https://badges.gitter.im/ethereum/sharding.svg)](https://gitter.im/ethereum/sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
 To learn more about sharding and Ethereum 2.0 (Serenity), see the [sharding FAQ](https://github.com/ethereum/wiki/wiki/Sharding-FAQ) and the [research compendium](https://notes.ethereum.org/s/H1PGqDhpm).
 
-This repository hosts the current Eth 2.0 specifications. Discussions about design rationale and proposed changes can be brought up and discussed as issues. Solidified, agreed-upon changes to the spec can be made through pull requests.
+This repository hosts the current Eth2 specifications. Discussions about design rationale and proposed changes can be brought up and discussed as issues. Solidified, agreed-upon changes to the spec can be made through pull requests.
 
 ## Specs
 
-Core specifications for Eth 2.0 client validation can be found in [specs/core](specs/core). These are divided into phases. Each subsequent phase depends upon the prior. The current phases specified are:
+Core specifications for Eth2 client validation can be found in [specs/core](specs/core). These are divided into phases. Each subsequent phase depends upon the prior. The current phases specified are:
 
 ### Phase 0
 * [The Beacon Chain](specs/core/0_beacon-chain.md)
@@ -20,12 +20,13 @@ Core specifications for Eth 2.0 client validation can be found in [specs/core](s
 ### Phase 1
 * [Custody Game](specs/core/1_custody-game.md)
 * [Shard Data Chains](specs/core/1_shard-data-chains.md)
+* [Misc beacon chain updates](specs/core/1_beacon-chain-misc.md)
 
 ### Phase 2
 
 Phase 2 is still actively in R&D and does not yet have any formal specifications.
 
-See the [Eth 2.0 Phase 2 Wiki](https://hackmd.io/UzysWse1Th240HELswKqVA?view) for current progress, discussions, and definitions regarding this work.
+See the [Eth2 Phase 2 Wiki](https://hackmd.io/UzysWse1Th240HELswKqVA?view) for current progress, discussions, and definitions regarding this work.
 
 ### Accompanying documents can be found in [specs](specs) and include:
@@ -34,8 +35,14 @@ See the [Eth 2.0 Phase 2 Wiki](https://hackmd.io/UzysWse1Th240HELswKqVA?view) fo
 * [General test format](specs/test_formats/README.md)
 * [Merkle proof formats](specs/light_client/merkle_proofs.md)
 * [Light client syncing protocol](specs/light_client/sync_protocol.md)
 * [Beacon node API for validator](specs/validator/0_beacon-node-validator-api.md)
 
+## Additional specifications for client implementers
+
+Additional specifications and standards outside of requisite client functionality can be found in the following repos:
+
+* [Eth2 APIs](https://github.com/ethereum/eth2.0-apis)
+* [Eth2 Metrics](https://github.com/ethereum/eth2.0-metrics/)
+* [Interop Standards in Eth2 PM](https://github.com/ethereum/eth2.0-pm/tree/master/interop)
+
 ## Design goals
@@ -47,8 +54,15 @@ The following are the broad design goals for Ethereum 2.0:
 * to allow for a typical consumer laptop with `O(C)` resources to process/validate `O(1)` shards (including any system level validation such as the beacon chain)
 
+## Useful external resources
+
+* [Design Rationale](https://notes.ethereum.org/s/rkhCgQteN#)
+* [Phase 0 Onboarding Document](https://notes.ethereum.org/s/Bkn3zpwxB)
+
 ## For spec contributors
 
 Documentation on the different components used during spec writing can be found here:
 * [YAML Test Generators](test_generators/README.md)
 * [Executable Python Spec, with Py-tests](test_libs/pyspec/README.md)
@@ -3,7 +3,7 @@
 This directory contains a set of constants presets used for testing, testnets, and mainnet.
 
 A preset file contains all the constants known for its target.
-Later-fork constants can be ignored, e.g. ignore phase1 constants as a client that only supports phase 0 currently.
+Later-fork constants can be ignored, e.g. ignore Phase 1 constants as a client that only supports Phase 0 currently.
 
 ## Forking
@@ -14,9 +14,8 @@ Instead, for forks that introduce changes in a constant, the constant name is pr
 Over time, the need to sync an older state may be deprecated.
 In this case, the prefix on the new constant may be removed, and the old constant will keep a special name before completely being removed.
 
-A previous iteration of forking made use of "timelines", but this collides with the definitions used in the spec (constants for special forking slots etc.),
-and was not integrated sufficiently in any of the spec tools or implementations.
-Instead, the config essentially doubles as fork definition now, changing the value for e.g. `PHASE_1_GENESIS_SLOT` changes the fork.
+A previous iteration of forking made use of "timelines", but this collides with the definitions used in the spec (constants for special forking slots, etc.), and was not integrated sufficiently in any of the spec tools or implementations.
+Instead, the config essentially doubles as fork definition now, e.g. changing the value for `PHASE_1_GENESIS_SLOT` changes the fork.
 
 Another reason to prefer forking through constants is the ability to program a forking moment based on context, instead of being limited to a static slot number.
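The "config doubles as fork definition" idea above can be sketched in Python. This is an illustrative sketch only: `active_phase` is a hypothetical helper, and the constant values are taken from the minimal preset in this commit.

```python
# Sketch: deriving the active fork from config constants rather than a
# hard-coded timeline. Changing the constant changes the fork moment.
SLOTS_PER_EPOCH = 8       # minimal preset value
PHASE_1_FORK_EPOCH = 8    # minimal preset value

def compute_epoch_at_slot(slot: int) -> int:
    # An epoch is a fixed-size window of slots.
    return slot // SLOTS_PER_EPOCH

def active_phase(slot: int) -> int:
    # The fork boundary is defined entirely by a config constant.
    return 1 if compute_epoch_at_slot(slot) >= PHASE_1_FORK_EPOCH else 0
```

With these values, slot 63 is still epoch 7 (Phase 0) and slot 64 is the first Phase 1 slot.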
@@ -5,12 +5,12 @@
 
 # Misc
 # ---------------------------------------------------------------
-# 2**10 (= 1,024)
-SHARD_COUNT: 1024
+# 2**6 (= 64)
+MAX_COMMITTEES_PER_SLOT: 64
 # 2**7 (= 128)
 TARGET_COMMITTEE_SIZE: 128
-# 2**12 (= 4,096)
-MAX_VALIDATORS_PER_COMMITTEE: 4096
+# 2**11 (= 2,048)
+MAX_VALIDATORS_PER_COMMITTEE: 2048
 # 2**2 (= 4)
 MIN_PER_EPOCH_CHURN_LIMIT: 4
 # 2**16 (= 65,536)
@@ -51,16 +51,16 @@ BLS_WITHDRAWAL_PREFIX: 0x00
 
 # Time parameters
 # ---------------------------------------------------------------
-# 6 seconds 6 seconds
-SECONDS_PER_SLOT: 6
+# 12 seconds
+SECONDS_PER_SLOT: 12
 # 2**0 (= 1) slots 6 seconds
 MIN_ATTESTATION_INCLUSION_DELAY: 1
-# 2**6 (= 64) slots 6.4 minutes
-SLOTS_PER_EPOCH: 64
+# 2**5 (= 32) slots 6.4 minutes
+SLOTS_PER_EPOCH: 32
 # 2**0 (= 1) epochs 6.4 minutes
 MIN_SEED_LOOKAHEAD: 1
 # 2**2 (= 4) epochs 25.6 minutes
-ACTIVATION_EXIT_DELAY: 4
+MAX_SEED_LOOKAHEAD: 4
 # 2**10 (= 1,024) slots ~1.7 hours
 SLOTS_PER_ETH1_VOTING_PERIOD: 1024
 # 2**13 (= 8,192) slots ~13 hours
@@ -116,18 +116,15 @@ MAX_ATTESTATIONS: 128
 MAX_DEPOSITS: 16
 # 2**4 (= 16)
 MAX_VOLUNTARY_EXITS: 16
-# Originally 2**4 (= 16), disabled for now.
-MAX_TRANSFERS: 0
 
 
 # Signature domains
 # ---------------------------------------------------------------
 DOMAIN_BEACON_PROPOSER: 0x00000000
-DOMAIN_RANDAO: 0x01000000
-DOMAIN_ATTESTATION: 0x02000000
+DOMAIN_BEACON_ATTESTER: 0x01000000
+DOMAIN_RANDAO: 0x02000000
 DOMAIN_DEPOSIT: 0x03000000
 DOMAIN_VOLUNTARY_EXIT: 0x04000000
-DOMAIN_TRANSFER: 0x05000000
 DOMAIN_CUSTODY_BIT_CHALLENGE: 0x06000000
 DOMAIN_SHARD_PROPOSER: 0x80000000
 DOMAIN_SHARD_ATTESTER: 0x81000000
@@ -4,12 +4,12 @@
 # Misc
 # ---------------------------------------------------------------
 
-# [customized] Just 8 shards for testing purposes
-SHARD_COUNT: 8
+# [customized] Just 4 committees per slot for testing purposes
+MAX_COMMITTEES_PER_SLOT: 4
 # [customized] unsecure, but fast
 TARGET_COMMITTEE_SIZE: 4
-# 2**12 (= 4,096)
-MAX_VALIDATORS_PER_COMMITTEE: 4096
+# 2**11 (= 2,048)
+MAX_VALIDATORS_PER_COMMITTEE: 2048
 # 2**2 (= 4)
 MIN_PER_EPOCH_CHURN_LIMIT: 4
 # 2**16 (= 65,536)
@@ -50,7 +50,7 @@ BLS_WITHDRAWAL_PREFIX: 0x00
 
 # Time parameters
 # ---------------------------------------------------------------
-# 6 seconds 6 seconds
+# [customized] Faster for testing purposes
 SECONDS_PER_SLOT: 6
 # 2**0 (= 1) slots 6 seconds
 MIN_ATTESTATION_INCLUSION_DELAY: 1
@@ -59,7 +59,7 @@ SLOTS_PER_EPOCH: 8
 # 2**0 (= 1) epochs
 MIN_SEED_LOOKAHEAD: 1
 # 2**2 (= 4) epochs
-ACTIVATION_EXIT_DELAY: 4
+MAX_SEED_LOOKAHEAD: 4
 # [customized] higher frequency new deposits from eth1 for testing
 SLOTS_PER_ETH1_VOTING_PERIOD: 16
 # [customized] smaller state
@@ -74,6 +74,10 @@ MAX_EPOCHS_PER_CROSSLINK: 4
 MIN_EPOCHS_TO_INACTIVITY_PENALTY: 4
 # [customized] 2**12 (= 4,096) epochs
 EARLY_DERIVED_SECRET_PENALTY_MAX_FUTURE_EPOCHS: 4096
+# 2**2 (= 4) epochs
+EPOCHS_PER_CUSTODY_PERIOD: 4
+# 2**2 (= 4) epochs
+CUSTODY_PERIOD_TO_RANDAO_PADDING: 4
 
 
 # State vector lengths
@@ -114,18 +118,25 @@ MAX_ATTESTATIONS: 128
 MAX_DEPOSITS: 16
 # 2**4 (= 16)
 MAX_VOLUNTARY_EXITS: 16
-# Originally 2**4 (= 16), disabled for now.
-MAX_TRANSFERS: 0
 
 
 # Signature domains
 # ---------------------------------------------------------------
 DOMAIN_BEACON_PROPOSER: 0x00000000
-DOMAIN_RANDAO: 0x01000000
-DOMAIN_ATTESTATION: 0x02000000
+DOMAIN_BEACON_ATTESTER: 0x01000000
+DOMAIN_RANDAO: 0x02000000
 DOMAIN_DEPOSIT: 0x03000000
 DOMAIN_VOLUNTARY_EXIT: 0x04000000
-DOMAIN_TRANSFER: 0x05000000
 DOMAIN_CUSTODY_BIT_CHALLENGE: 0x06000000
 DOMAIN_SHARD_PROPOSER: 0x80000000
 DOMAIN_SHARD_ATTESTER: 0x81000000
 
 
+# Phase 1
+# ---------------------------------------------------------------
+SHARD_SLOTS_PER_BEACON_SLOT: 2
+EPOCHS_PER_SHARD_PERIOD: 4
+# PHASE_1_FORK_EPOCH >= EPOCHS_PER_SHARD_PERIOD * 2
+PHASE_1_FORK_EPOCH: 8
+# PHASE_1_FORK_SLOT = PHASE_1_FORK_EPOCH * SLOTS_PER_EPOCH
+PHASE_1_FORK_SLOT: 64
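The comments in the Phase 1 section of the minimal preset pin two invariants: `PHASE_1_FORK_EPOCH >= EPOCHS_PER_SHARD_PERIOD * 2` and `PHASE_1_FORK_SLOT = PHASE_1_FORK_EPOCH * SLOTS_PER_EPOCH`. With the minimal preset's `SLOTS_PER_EPOCH: 8`, the arithmetic checks out; a quick sanity check:

```python
# Minimal-preset values as set in this config.
SLOTS_PER_EPOCH = 8
EPOCHS_PER_SHARD_PERIOD = 4
PHASE_1_FORK_EPOCH = 8
PHASE_1_FORK_SLOT = 64

# Invariants stated in the config comments:
assert PHASE_1_FORK_EPOCH >= EPOCHS_PER_SHARD_PERIOD * 2
assert PHASE_1_FORK_SLOT == PHASE_1_FORK_EPOCH * SLOTS_PER_EPOCH
```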
File diff suppressed because one or more lines are too long

@@ -1,10 +1,11 @@
+# Vyper target 0.1.0b12
 MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000  # Gwei
 DEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32
 MAX_DEPOSIT_COUNT: constant(uint256) = 4294967295  # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1
 PUBKEY_LENGTH: constant(uint256) = 48  # bytes
 WITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32  # bytes
+AMOUNT_LENGTH: constant(uint256) = 8  # bytes
 SIGNATURE_LENGTH: constant(uint256) = 96  # bytes
-AMOUNT_LENGTH: constant(uint256) = 8  # bytes
 
 DepositEvent: event({
     pubkey: bytes[48],
@@ -42,7 +43,7 @@ def to_little_endian_64(value: uint256) -> bytes[8]:
 
 @public
 @constant
-def get_hash_tree_root() -> bytes32:
+def get_deposit_root() -> bytes32:
     zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
     node: bytes32 = zero_bytes32
     size: uint256 = self.deposit_count
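The contract's `to_little_endian_64` serializes a 64-bit value (deposit count or amount) as 8 little-endian bytes, matching SSZ `uint64` serialization. A Python sketch of the equivalent (not the Vyper code itself):

```python
def to_little_endian_64(value: int) -> bytes:
    # SSZ uint64 serialization: 8 bytes, least-significant byte first.
    return value.to_bytes(8, 'little')
```

For example, a deposit count of 3 serializes to `b'\x03\x00\x00\x00\x00\x00\x00\x00'`.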
@@ -65,13 +66,16 @@ def get_deposit_count() -> bytes[8]:
 @public
 def deposit(pubkey: bytes[PUBKEY_LENGTH],
             withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH],
-            signature: bytes[SIGNATURE_LENGTH]):
+            signature: bytes[SIGNATURE_LENGTH],
+            deposit_data_root: bytes32):
     # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`)
     assert self.deposit_count < MAX_DEPOSIT_COUNT
 
-    # Validate deposit data
+    # Check deposit amount
     deposit_amount: uint256 = msg.value / as_wei_value(1, "gwei")
     assert deposit_amount >= MIN_DEPOSIT_AMOUNT
 
+    # Length checks to facilitate formal verification (see https://github.com/ethereum/eth2.0-specs/pull/1362/files#r320361859)
     assert len(pubkey) == PUBKEY_LENGTH
     assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH
     assert len(signature) == SIGNATURE_LENGTH
@@ -80,7 +84,7 @@ def deposit(pubkey: bytes[PUBKEY_LENGTH],
     amount: bytes[8] = self.to_little_endian_64(deposit_amount)
     log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))
 
-    # Compute `DepositData` hash tree root
+    # Compute deposit data root (`DepositData` hash tree root)
     zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
     pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))
     signature_root: bytes32 = sha256(concat(
@@ -91,8 +95,10 @@ def deposit(pubkey: bytes[PUBKEY_LENGTH],
         sha256(concat(pubkey_root, withdrawal_credentials)),
         sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),
     ))
+    # Verify computed and expected deposit data roots match
+    assert node == deposit_data_root
 
-    # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node)
+    # Add deposit data root to Merkle tree (update a single `branch` node)
     self.deposit_count += 1
     size: uint256 = self.deposit_count
     for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):
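The root computation in the contract mirrors SSZ hash-tree-root of a four-field `DepositData`: each field is zero-padded to 32-byte chunks and the chunks are hashed pairwise with sha256. A Python sketch of that fixed merkleization, assuming the byte lengths from the contract's constants (48-byte pubkey, 32-byte withdrawal credentials, 8-byte little-endian amount, 96-byte signature):

```python
from hashlib import sha256

def h(data: bytes) -> bytes:
    return sha256(data).digest()

def deposit_data_root(pubkey: bytes, withdrawal_credentials: bytes,
                      amount_le8: bytes, signature: bytes) -> bytes:
    # pubkey (48 bytes) padded to 64 bytes = two 32-byte chunks, hashed once
    pubkey_root = h(pubkey + b'\x00' * (64 - len(pubkey)))
    # signature (96 bytes) = three chunks, merkleized as a two-level subtree
    signature_root = h(h(signature[:64]) + h(signature[64:] + b'\x00' * 32))
    # combine: left pair (pubkey_root, credentials), right pair (amount, signature_root)
    return h(h(pubkey_root + withdrawal_credentials) +
             h(amount_le8 + b'\x00' * 24 + signature_root))
```

Changing any field (e.g. the amount) changes the resulting 32-byte root, which is what the contract's `assert node == deposit_data_root` relies on.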

@@ -1,5 +1,5 @@
 eth-tester[py-evm]==0.1.0b39
-vyper==0.1.0b10
+vyper==0.1.0b12
 web3==5.0.0b2
 pytest==3.6.1
 ../test_libs/pyspec
@@ -6,7 +6,6 @@ import pytest
 
 import eth_utils
 from tests.contracts.conftest import (
-    DEPOSIT_CONTRACT_TREE_DEPTH,
     FULL_DEPOSIT_AMOUNT,
     MIN_DEPOSIT_AMOUNT,
 )
@@ -14,29 +13,42 @@ from tests.contracts.conftest import (
 from eth2spec.phase0.spec import (
     DepositData,
 )
-from eth2spec.utils.hash_function import hash
+from eth2spec.utils.ssz.ssz_typing import List
 from eth2spec.utils.ssz.ssz_impl import (
     hash_tree_root,
 )
 
 
+SAMPLE_PUBKEY = b'\x11' * 48
+SAMPLE_WITHDRAWAL_CREDENTIALS = b'\x22' * 32
+SAMPLE_VALID_SIGNATURE = b'\x33' * 96
+
+
 @pytest.fixture
-def deposit_input():
+def deposit_input(amount):
     """
     pubkey: bytes[48]
     withdrawal_credentials: bytes[32]
     signature: bytes[96]
+    deposit_data_root: bytes[32]
     """
     return (
-        b'\x11' * 48,
-        b'\x22' * 32,
-        b'\x33' * 96,
+        SAMPLE_PUBKEY,
+        SAMPLE_WITHDRAWAL_CREDENTIALS,
+        SAMPLE_VALID_SIGNATURE,
+        hash_tree_root(
+            DepositData(
+                pubkey=SAMPLE_PUBKEY,
+                withdrawal_credentials=SAMPLE_WITHDRAWAL_CREDENTIALS,
+                amount=amount,
+                signature=SAMPLE_VALID_SIGNATURE,
+            ),
+        )
+    )
 
 
 @pytest.mark.parametrize(
-    'success,deposit_amount',
+    ('success', 'amount'),
     [
         (True, FULL_DEPOSIT_AMOUNT),
         (True, MIN_DEPOSIT_AMOUNT),
@@ -47,18 +59,24 @@ def deposit_input():
 def test_deposit_amount(registration_contract,
                         w3,
                         success,
-                        deposit_amount,
+                        amount,
                         assert_tx_failed,
                         deposit_input):
     call = registration_contract.functions.deposit(*deposit_input)
     if success:
-        assert call.transact({"value": deposit_amount * eth_utils.denoms.gwei})
+        assert call.transact({"value": amount * eth_utils.denoms.gwei})
     else:
         assert_tx_failed(
-            lambda: call.transact({"value": deposit_amount * eth_utils.denoms.gwei})
+            lambda: call.transact({"value": amount * eth_utils.denoms.gwei})
         )
 
 
+@pytest.mark.parametrize(
+    'amount',
+    [
+        (FULL_DEPOSIT_AMOUNT)
+    ]
+)
 @pytest.mark.parametrize(
     'invalid_pubkey,invalid_withdrawal_credentials,invalid_signature,success',
     [
@@ -71,38 +89,62 @@ def test_deposit_amount(registration_contract,
 def test_deposit_inputs(registration_contract,
                         w3,
                         assert_tx_failed,
-                        deposit_input,
+                        amount,
                         invalid_pubkey,
                         invalid_withdrawal_credentials,
                         invalid_signature,
                         success):
-    pubkey = deposit_input[0][2:] if invalid_pubkey else deposit_input[0]
-    if invalid_withdrawal_credentials:  # this one is different to satisfy linter
-        withdrawal_credentials = deposit_input[1][2:]
-    else:
-        withdrawal_credentials = deposit_input[1]
-    signature = deposit_input[2][2:] if invalid_signature else deposit_input[2]
+    pubkey = SAMPLE_PUBKEY[2:] if invalid_pubkey else SAMPLE_PUBKEY
+    withdrawal_credentials = (
+        SAMPLE_WITHDRAWAL_CREDENTIALS[2:] if invalid_withdrawal_credentials
+        else SAMPLE_WITHDRAWAL_CREDENTIALS
+    )
+    signature = SAMPLE_VALID_SIGNATURE[2:] if invalid_signature else SAMPLE_VALID_SIGNATURE
 
     call = registration_contract.functions.deposit(
         pubkey,
         withdrawal_credentials,
         signature,
+        hash_tree_root(
+            DepositData(
+                pubkey=SAMPLE_PUBKEY if invalid_pubkey else pubkey,
+                withdrawal_credentials=(
+                    SAMPLE_WITHDRAWAL_CREDENTIALS if invalid_withdrawal_credentials
+                    else withdrawal_credentials
+                ),
+                amount=amount,
+                signature=SAMPLE_VALID_SIGNATURE if invalid_signature else signature,
+            ),
+        )
     )
     if success:
-        assert call.transact({"value": FULL_DEPOSIT_AMOUNT * eth_utils.denoms.gwei})
+        assert call.transact({"value": amount * eth_utils.denoms.gwei})
     else:
         assert_tx_failed(
-            lambda: call.transact({"value": FULL_DEPOSIT_AMOUNT * eth_utils.denoms.gwei})
+            lambda: call.transact({"value": amount * eth_utils.denoms.gwei})
         )
 
 
-def test_deposit_event_log(registration_contract, a0, w3, deposit_input):
+def test_deposit_event_log(registration_contract, a0, w3):
     log_filter = registration_contract.events.DepositEvent.createFilter(
         fromBlock='latest',
     )
 
     deposit_amount_list = [randint(MIN_DEPOSIT_AMOUNT, FULL_DEPOSIT_AMOUNT * 2) for _ in range(3)]
 
     for i in range(3):
+        deposit_input = (
+            SAMPLE_PUBKEY,
+            SAMPLE_WITHDRAWAL_CREDENTIALS,
+            SAMPLE_VALID_SIGNATURE,
+            hash_tree_root(
+                DepositData(
+                    pubkey=SAMPLE_PUBKEY,
+                    withdrawal_credentials=SAMPLE_WITHDRAWAL_CREDENTIALS,
+                    amount=deposit_amount_list[i],
+                    signature=SAMPLE_VALID_SIGNATURE,
+                ),
+            )
+        )
         registration_contract.functions.deposit(
             *deposit_input,
         ).transact({"value": deposit_amount_list[i] * eth_utils.denoms.gwei})
@@ -118,7 +160,7 @@ def test_deposit_event_log(registration_contract, a0, w3, deposit_input):
         assert log['index'] == i.to_bytes(8, 'little')
 
 
-def test_deposit_tree(registration_contract, w3, assert_tx_failed, deposit_input):
+def test_deposit_tree(registration_contract, w3, assert_tx_failed):
     log_filter = registration_contract.events.DepositEvent.createFilter(
         fromBlock='latest',
     )
@@ -126,6 +168,20 @@ def test_deposit_tree(registration_contract, w3, assert_tx_failed, deposit_input
     deposit_amount_list = [randint(MIN_DEPOSIT_AMOUNT, FULL_DEPOSIT_AMOUNT * 2) for _ in range(10)]
+    deposit_data_list = []
     for i in range(0, 10):
+        deposit_data = DepositData(
+            pubkey=SAMPLE_PUBKEY,
+            withdrawal_credentials=SAMPLE_WITHDRAWAL_CREDENTIALS,
+            amount=deposit_amount_list[i],
+            signature=SAMPLE_VALID_SIGNATURE,
+        )
+        deposit_input = (
+            SAMPLE_PUBKEY,
+            SAMPLE_WITHDRAWAL_CREDENTIALS,
+            SAMPLE_VALID_SIGNATURE,
+            hash_tree_root(deposit_data),
+        )
+        deposit_data_list.append(deposit_data)
 
         tx_hash = registration_contract.functions.deposit(
             *deposit_input,
         ).transact({"value": deposit_amount_list[i] * eth_utils.denoms.gwei})
@@ -138,12 +194,8 @@ def test_deposit_tree(registration_contract, w3, assert_tx_failed, deposit_input
 
         assert log["index"] == i.to_bytes(8, 'little')
 
-        deposit_data_list.append(DepositData(
-            pubkey=deposit_input[0],
-            withdrawal_credentials=deposit_input[1],
-            amount=deposit_amount_list[i],
-            signature=deposit_input[2],
-        ))
-
         # Check deposit count and root
         count = len(deposit_data_list).to_bytes(8, 'little')
         assert count == registration_contract.functions.get_deposit_count().call()
         root = hash_tree_root(List[DepositData, 2**32](*deposit_data_list))
-        assert root == registration_contract.functions.get_hash_tree_root().call()
+        assert root == registration_contract.functions.get_deposit_root().call()

@@ -28,6 +28,7 @@ from eth2spec.utils.ssz.ssz_typing import (
     Bytes1, Bytes4, Bytes8, Bytes32, Bytes48, Bytes96, Bitlist, Bitvector,
 )
 from eth2spec.utils.bls import (
+    bls_aggregate_signatures,
     bls_aggregate_pubkeys,
     bls_verify,
     bls_verify_multiple,
@@ -37,7 +38,10 @@ from eth2spec.utils.bls import (
 from eth2spec.utils.hash_function import hash
 '''
 PHASE1_IMPORTS = '''from typing import (
-    Any, Dict, Optional, Set, Sequence, MutableSequence, Tuple,
+    Any, Dict, Set, Sequence, MutableSequence, NewType, Tuple, Union,
 )
+from math import (
+    log2,
+)
 
 from dataclasses import (
@@ -48,20 +52,30 @@ from dataclasses import (
 from eth2spec.utils.ssz.ssz_impl import (
     hash_tree_root,
     signing_root,
     serialize,
+    is_zero,
 )
 from eth2spec.utils.ssz.ssz_typing import (
-    bit, boolean, Container, List, Vector, Bytes, uint64,
-    Bytes1, Bytes4, Bytes8, Bytes32, Bytes48, Bytes96, Bitlist, Bitvector,
+    BasicValue, Elements, BaseBytes, BaseList, SSZType,
+    Container, List, Vector, Bytes, BytesN, Bitlist, Bitvector, Bits,
+    Bytes1, Bytes4, Bytes8, Bytes32, Bytes48, Bytes96,
+    uint64, bit, boolean, byte,
 )
 from eth2spec.utils.bls import (
     bls_aggregate_pubkeys,
     bls_verify,
     bls_verify_multiple,
     bls_signature_to_G2,
 )
 
 from eth2spec.utils.hash_function import hash
 
 
+SSZVariableName = str
+GeneralizedIndex = NewType('GeneralizedIndex', int)
 '''
+SUNDRY_CONSTANTS_FUNCTIONS = '''
+def ceillog2(x: uint64) -> int:
+    return (x - 1).bit_length()
+'''
 SUNDRY_FUNCTIONS = '''
 # Monkey patch hash cache
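The `ceillog2` helper injected via `SUNDRY_CONSTANTS_FUNCTIONS` computes the ceiling of log2 using only integer bit arithmetic: for `x >= 1`, `(x - 1).bit_length()` equals `ceil(log2(x))`. A standalone version to illustrate:

```python
def ceillog2(x: int) -> int:
    # (x - 1).bit_length() is the number of bits needed to represent x - 1,
    # which for x >= 1 equals the ceiling of log2(x).
    return (x - 1).bit_length()
```

For example, exact powers of two map to their exponent (`ceillog2(8) == 3`), while `ceillog2(9) == 4` rounds up.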
@@ -105,13 +119,20 @@ def apply_constants_preset(preset: Dict[str, Any]) -> None:
         global_vars[k] = v
 
     # Deal with derived constants
-    global_vars['GENESIS_EPOCH'] = compute_epoch_of_slot(GENESIS_SLOT)
+    global_vars['GENESIS_EPOCH'] = compute_epoch_at_slot(GENESIS_SLOT)
 
     # Initialize SSZ types again, to account for changed lengths
     init_SSZ_types()
 '''
 
 
+def remove_for_phase1(functions: Dict[str, str]):
+    for key, value in functions.items():
+        lines = value.split("\n")
+        lines = filter(lambda s: "[to be removed in phase 1]" not in s, lines)
+        functions[key] = "\n".join(lines)
+
+
 def strip_comments(raw: str) -> str:
     comment_line_regex = re.compile(r'^\s+# ')
     lines = raw.split('\n')
@@ -142,10 +163,15 @@ def objects_to_spec(functions: Dict[str, str],
             ]
         )
     )
+    for k in list(functions):
+        if "ceillog2" in k:
+            del functions[k]
     functions_spec = '\n\n'.join(functions.values())
     for k in list(constants.keys()):
+        if k.startswith('DOMAIN_'):
+            constants[k] = f"DomainType(({constants[k]}).to_bytes(length=4, byteorder='little'))"
         if k == "BLS12_381_Q":
             constants[k] += "  # noqa: E501"
     constants_spec = '\n'.join(map(lambda x: '%s = %s' % (x, constants[x]), constants))
     ssz_objects_instantiation_spec = '\n\n'.join(ssz_objects.values())
     ssz_objects_reinitialization_spec = (
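The `DOMAIN_` rewrite above re-emits each integer domain constant as its 4-byte little-endian form via `to_bytes(length=4, byteorder='little')`. A minimal sketch of just that conversion (how a particular spec constant parses into an integer is outside this sketch):

```python
def domain_bytes(value: int) -> bytes:
    # Re-emit an integer constant as 4 bytes, least-significant byte first,
    # mirroring the to_bytes(length=4, byteorder='little') call above.
    return value.to_bytes(length=4, byteorder='little')
```

For instance, the integer `1` becomes `b'\x01\x00\x00\x00'`, while `0x03000000` becomes `b'\x00\x00\x00\x03'`.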
@@ -158,6 +184,7 @@ def objects_to_spec(functions: Dict[str, str],
     spec = (
         imports
         + '\n\n' + new_type_definitions
+        + '\n' + SUNDRY_CONSTANTS_FUNCTIONS
         + '\n\n' + constants_spec
         + '\n\n\n' + ssz_objects_instantiation_spec
         + '\n\n' + functions_spec
@@ -187,13 +214,13 @@ ignored_dependencies = [
     'bit', 'boolean', 'Vector', 'List', 'Container', 'Hash', 'BLSPubkey', 'BLSSignature', 'Bytes', 'BytesN'
     'Bytes1', 'Bytes4', 'Bytes32', 'Bytes48', 'Bytes96', 'Bitlist', 'Bitvector',
     'uint8', 'uint16', 'uint32', 'uint64', 'uint128', 'uint256',
-    'bytes'  # to be removed after updating spec doc
+    'bytes', 'byte', 'BytesN'  # to be removed after updating spec doc
 ]
 
 
 def dependency_order_ssz_objects(objects: Dict[str, str], custom_types: Dict[str, str]) -> None:
     """
-    Determines which SSZ Object is depenedent on which other and orders them appropriately
+    Determines which SSZ Object is dependent on which other and orders them appropriately
     """
     items = list(objects.items())
     for key, value in items:
@@ -263,17 +290,26 @@ def build_phase0_spec(phase0_sourcefile: str, fork_choice_sourcefile: str,
     return spec
 
 
-def build_phase1_spec(phase0_sourcefile: str,
+def build_phase1_spec(phase0_beacon_sourcefile: str,
+                      phase0_fork_choice_sourcefile: str,
+                      merkle_proofs_sourcefile: str,
                       phase1_custody_sourcefile: str,
                       phase1_shard_sourcefile: str,
-                      fork_choice_sourcefile: str,
+                      phase1_beacon_misc_sourcefile: str,
                       outfile: str=None) -> Optional[str]:
-    phase0_spec = get_spec(phase0_sourcefile)
-    phase1_custody = get_spec(phase1_custody_sourcefile)
-    phase1_shard_data = get_spec(phase1_shard_sourcefile)
-    fork_choice_spec = get_spec(fork_choice_sourcefile)
-    spec_objects = phase0_spec
-    for value in [phase1_custody, phase1_shard_data, fork_choice_spec]:
+    all_sourcefiles = (
+        phase0_beacon_sourcefile,
+        phase0_fork_choice_sourcefile,
+        merkle_proofs_sourcefile,
+        phase1_custody_sourcefile,
+        phase1_shard_sourcefile,
+        phase1_beacon_misc_sourcefile,
+    )
+    all_specs = [get_spec(spec) for spec in all_sourcefiles]
+    for spec in all_specs:
+        remove_for_phase1(spec[0])
+    spec_objects = all_specs[0]
+    for value in all_specs[1:]:
         spec_objects = combine_spec_objects(spec_objects, value)
     spec = objects_to_spec(*spec_objects, PHASE1_IMPORTS)
     if outfile is not None:
@@ -286,17 +322,19 @@ if __name__ == '__main__':
     description = '''
 Build the specs from the md docs.
 
 If building phase 0:
-    1st argument is input spec.md
-    2nd argument is input fork_choice.md
-    3rd argument is input validator_guide.md
+    1st argument is input /core/0_beacon-chain.md
+    2nd argument is input /core/0_fork-choice.md
+    3rd argument is input /core/0_beacon-chain-validator.md
     4th argument is output spec.py
 
 If building phase 1:
-    1st argument is input spec_phase0.md
-    2nd argument is input spec_phase1_custody.md
-    3rd argument is input spec_phase1_shard_data.md
-    4th argument is input fork_choice.md
-    5th argument is output spec.py
+    1st argument is input /core/0_beacon-chain.md
+    2nd argument is input /core/0_fork-choice.md
+    3rd argument is input /light_client/merkle_proofs.md
+    4th argument is input /core/1_custody-game.md
+    5th argument is input /core/1_shard-data-chains.md
+    6th argument is input /core/1_beacon-chain-misc.md
+    7th argument is output spec.py
 '''
     parser = ArgumentParser(description=description)
     parser.add_argument("-p", "--phase", dest="phase", type=int, default=0, help="Build for phase #")
@@ -309,10 +347,15 @@ If building phase 1:
         else:
             print(" Phase 0 requires spec, forkchoice, and v-guide inputs as well as an output file.")
     elif args.phase == 1:
-        if len(args.files) == 5:
+        if len(args.files) == 7:
             build_phase1_spec(*args.files)
         else:
-            print(" Phase 1 requires 4 input files as well as an output file: "
-                  + "(phase0.md and phase1.md, phase1.md, fork_choice.md, output.py)")
+            print(
+                " Phase 1 requires input files as well as an output file:\n"
+                "\t core/phase_0: (0_beacon-chain.md, 0_fork-choice.md)\n"
+                "\t light_client: (merkle_proofs.md)\n"
+                "\t core/phase_1: (1_custody-game.md, 1_shard-data-chains.md, 1_beacon-chain-misc.md)\n"
+                "\t and output.py"
+            )
     else:
         print("Invalid phase: {0}".format(args.phase))

@@ -81,7 +81,7 @@ def get_spec(file_name: str) -> SpecObject:
                 if c not in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789':
                     is_constant_def = False
             if is_constant_def:
-                constants[row[0]] = row[1].replace('**TBD**', '0x1234567890123456789012345678901234567890')
+                constants[row[0]] = row[1].replace('**TBD**', '2**32')
             elif row[1].startswith('uint') or row[1].startswith('Bytes'):
                 custom_types[row[0]] = row[1]
     return functions, custom_types, constants, ssz_objects, inserts
@@ -19,13 +19,12 @@
 - [State list lengths](#state-list-lengths)
 - [Rewards and penalties](#rewards-and-penalties)
 - [Max operations per block](#max-operations-per-block)
-- [Signature domain types](#signature-domain-types)
+- [Domain types](#domain-types)
 - [Containers](#containers)
 - [Misc dependencies](#misc-dependencies)
 - [`Fork`](#fork)
 - [`Checkpoint`](#checkpoint)
 - [`Validator`](#validator)
-- [`Crosslink`](#crosslink)
 - [`AttestationData`](#attestationdata)
 - [`AttestationDataAndCustodyBit`](#attestationdataandcustodybit)
 - [`IndexedAttestation`](#indexedattestation)
@@ -33,7 +32,6 @@
 - [`Eth1Data`](#eth1data)
 - [`HistoricalBatch`](#historicalbatch)
 - [`DepositData`](#depositdata)
-- [`CompactCommittee`](#compactcommittee)
 - [`BeaconBlockHeader`](#beaconblockheader)
 - [Beacon operations](#beacon-operations)
 - [`ProposerSlashing`](#proposerslashing)
@@ -41,7 +39,6 @@
 - [`Attestation`](#attestation)
 - [`Deposit`](#deposit)
 - [`VoluntaryExit`](#voluntaryexit)
-- [`Transfer`](#transfer)
 - [Beacon blocks](#beacon-blocks)
 - [`BeaconBlockBody`](#beaconblockbody)
 - [`BeaconBlock`](#beaconblock)
@@ -68,9 +65,10 @@
 - [`is_valid_merkle_branch`](#is_valid_merkle_branch)
 - [Misc](#misc-1)
 - [`compute_shuffled_index`](#compute_shuffled_index)
+- [`compute_proposer_index`](#compute_proposer_index)
 - [`compute_committee`](#compute_committee)
-- [`compute_epoch_of_slot`](#compute_epoch_of_slot)
-- [`compute_start_slot_of_epoch`](#compute_start_slot_of_epoch)
+- [`compute_epoch_at_slot`](#compute_epoch_at_slot)
+- [`compute_start_slot_at_epoch`](#compute_start_slot_at_epoch)
 - [`compute_activation_exit_epoch`](#compute_activation_exit_epoch)
 - [`compute_domain`](#compute_domain)
 - [Beacon state accessors](#beacon-state-accessors)
@@ -82,13 +80,9 @@
 - [`get_active_validator_indices`](#get_active_validator_indices)
 - [`get_validator_churn_limit`](#get_validator_churn_limit)
 - [`get_seed`](#get_seed)
-- [`get_committee_count`](#get_committee_count)
-- [`get_crosslink_committee`](#get_crosslink_committee)
-- [`get_start_shard`](#get_start_shard)
-- [`get_shard_delta`](#get_shard_delta)
+- [`get_committee_count_at_slot`](#get_committee_count_at_slot)
+- [`get_beacon_committee`](#get_beacon_committee)
 - [`get_beacon_proposer_index`](#get_beacon_proposer_index)
-- [`get_attestation_data_slot`](#get_attestation_data_slot)
-- [`get_compact_committees_root`](#get_compact_committees_root)
 - [`get_total_balance`](#get_total_balance)
 - [`get_total_active_balance`](#get_total_active_balance)
 - [`get_domain`](#get_domain)
@@ -106,7 +100,6 @@
 - [Epoch processing](#epoch-processing)
 - [Helper functions](#helper-functions-1)
 - [Justification and finalization](#justification-and-finalization)
-- [Crosslinks](#crosslinks)
 - [Rewards and penalties](#rewards-and-penalties-1)
 - [Registry updates](#registry-updates)
 - [Slashings](#slashings)
@@ -121,7 +114,6 @@
 - [Attestations](#attestations)
 - [Deposits](#deposits)
 - [Voluntary exits](#voluntary-exits)
-- [Transfers](#transfers)

 <!-- /TOC -->
@@ -130,7 +122,7 @@
 This document represents the specification for Phase 0 of Ethereum 2.0 -- The Beacon Chain.

 At the core of Ethereum 2.0 is a system chain called the "beacon chain". The beacon chain stores and manages the registry of validators. In the initial deployment phases of Ethereum 2.0, the only mechanism to become a validator is to make a one-way ETH transaction to a deposit contract on Ethereum 1.0. Activation as a validator happens when Ethereum 1.0 deposit receipts are processed by the beacon chain, the activation balance is reached, and a queuing process is completed. Exit is either voluntary or done forcibly as a penalty for misbehavior.

-The primary source of load on the beacon chain is "attestations". Attestations are simultaneously availability votes for a shard block and proof-of-stake votes for a beacon block. A sufficient number of attestations for the same shard block create a "crosslink", confirming the shard segment up to that shard block into the beacon chain. Crosslinks also serve as infrastructure for asynchronous cross-shard communication.
+The primary source of load on the beacon chain is "attestations". Attestations are simultaneously availability votes for a shard block (Phase 1) and proof-of-stake votes for a beacon block (Phase 0).

 ## Notation
@@ -144,12 +136,12 @@ We define the following Python custom types for type hinting and readability:
 | - | - | - |
 | `Slot` | `uint64` | a slot number |
 | `Epoch` | `uint64` | an epoch number |
-| `Shard` | `uint64` | a shard number |
+| `CommitteeIndex` | `uint64` | a committee index at a slot |
 | `ValidatorIndex` | `uint64` | a validator registry index |
 | `Gwei` | `uint64` | an amount in Gwei |
 | `Hash` | `Bytes32` | a hash |
 | `Version` | `Bytes4` | a fork version number |
-| `DomainType` | `Bytes4` | a signature domain type |
+| `DomainType` | `Bytes4` | a domain type |
 | `Domain` | `Bytes8` | a signature domain |
 | `BLSPubkey` | `Bytes48` | a BLS12-381 public key |
 | `BLSSignature` | `Bytes96` | a BLS12-381 signature |
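The byte-string aliases in the table above are fixed-width wrappers over raw bytes. A minimal sketch (hypothetical `make_bytes_type` helper, not spec code) of how their widths line up:

```python
# Sketch: fixed-length byte types from the custom-types table, modeled as
# length-checked constructors.
def make_bytes_type(length: int):
    def ctor(data: bytes) -> bytes:
        assert len(data) == length  # reject wrong-width input
        return data
    return ctor

Bytes4 = make_bytes_type(4)    # Version, DomainType
Bytes8 = make_bytes_type(8)    # Domain
Bytes32 = make_bytes_type(32)  # Hash

version = Bytes4(b'\x00\x00\x00\x00')
```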
@@ -161,7 +153,7 @@ The following values are (non-configurable) constants used throughout the specification:
 | Name | Value |
 | - | - |
 | `FAR_FUTURE_EPOCH` | `Epoch(2**64 - 1)` |
-| `BASE_REWARDS_PER_EPOCH` | `5` |
+| `BASE_REWARDS_PER_EPOCH` | `4` |
 | `DEPOSIT_CONTRACT_TREE_DEPTH` | `2**5` (= 32) |
 | `SECONDS_PER_DAY` | `86400` |
 | `JUSTIFICATION_BITS_LENGTH` | `4` |
@@ -169,22 +161,22 @@ The following values are (non-configurable) constants used throughout the specification:

 ## Configuration

-*Note*: The default mainnet configuration values are included here for spec-design purposes. The different configurations for mainnet, testnets, and YAML-based testing can be found in the [`configs/constant_presets`](../../configs/constant_presets) directory. These configurations are updated for releases and may be out of sync during `dev` changes.
+*Note*: The default mainnet configuration values are included here for spec-design purposes. The different configurations for mainnet, testnets, and YAML-based testing can be found in the [`configs/constant_presets`](../../configs) directory. These configurations are updated for releases and may be out of sync during `dev` changes.

 ### Misc

 | Name | Value |
 | - | - |
-| `SHARD_COUNT` | `2**10` (= 1,024) |
+| `MAX_COMMITTEES_PER_SLOT` | `2**6` (= 64) |
 | `TARGET_COMMITTEE_SIZE` | `2**7` (= 128) |
-| `MAX_VALIDATORS_PER_COMMITTEE` | `2**12` (= 4,096) |
+| `MAX_VALIDATORS_PER_COMMITTEE` | `2**11` (= 2,048) |
 | `MIN_PER_EPOCH_CHURN_LIMIT` | `2**2` (= 4) |
 | `CHURN_LIMIT_QUOTIENT` | `2**16` (= 65,536) |
 | `SHUFFLE_ROUND_COUNT` | `90` |
 | `MIN_GENESIS_ACTIVE_VALIDATOR_COUNT` | `2**16` (= 65,536) |
 | `MIN_GENESIS_TIME` | `1578009600` (Jan 3, 2020) |

-- For the safety of crosslinks, `TARGET_COMMITTEE_SIZE` exceeds [the recommended minimum committee size of 111](https://vitalik.ca/files/Ithaca201807_Sharding.pdf); with sufficient active validators (at least `SLOTS_PER_EPOCH * TARGET_COMMITTEE_SIZE`), the shuffling algorithm ensures committee sizes of at least `TARGET_COMMITTEE_SIZE`. (Unbiasable randomness with a Verifiable Delay Function (VDF) will improve committee robustness and lower the safe minimum committee size.)
+- For the safety of committees, `TARGET_COMMITTEE_SIZE` exceeds [the recommended minimum committee size of 111](https://vitalik.ca/files/Ithaca201807_Sharding.pdf); with sufficient active validators (at least `SLOTS_PER_EPOCH * TARGET_COMMITTEE_SIZE`), the shuffling algorithm ensures committee sizes of at least `TARGET_COMMITTEE_SIZE`. (Unbiasable randomness with a Verifiable Delay Function (VDF) will improve committee robustness and lower the safe minimum committee size.)

 ### Gwei values
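The new `MAX_COMMITTEES_PER_SLOT` replaces the shard-count-derived bound on committees. A standalone sketch of the clamp (mirroring the `get_committee_count_at_slot` formula that appears later in this diff):

```python
# Sketch: how MAX_COMMITTEES_PER_SLOT and TARGET_COMMITTEE_SIZE bound the
# number of committees formed per slot from the active validator count.
MAX_COMMITTEES_PER_SLOT = 2**6  # 64
TARGET_COMMITTEE_SIZE = 2**7    # 128
SLOTS_PER_EPOCH = 2**5          # 32

def committees_per_slot(active_validators: int) -> int:
    # At least one committee per slot; at most 64; otherwise sized so that
    # committees hit the 128-validator target.
    return max(1, min(
        MAX_COMMITTEES_PER_SLOT,
        active_validators // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE,
    ))
```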
@@ -207,16 +199,15 @@ The following values are (non-configurable) constants used throughout the specification:

 | Name | Value | Unit | Duration |
 | - | - | :-: | :-: |
-| `SECONDS_PER_SLOT` | `6` | seconds | 6 seconds |
-| `MIN_ATTESTATION_INCLUSION_DELAY` | `2**0` (= 1) | slots | 6 seconds |
-| `SLOTS_PER_EPOCH` | `2**6` (= 64) | slots | 6.4 minutes |
+| `SECONDS_PER_SLOT` | `12` | seconds | 12 seconds |
+| `MIN_ATTESTATION_INCLUSION_DELAY` | `2**0` (= 1) | slots | 12 seconds |
+| `SLOTS_PER_EPOCH` | `2**5` (= 32) | slots | 6.4 minutes |
 | `MIN_SEED_LOOKAHEAD` | `2**0` (= 1) | epochs | 6.4 minutes |
-| `ACTIVATION_EXIT_DELAY` | `2**2` (= 4) | epochs | 25.6 minutes |
+| `MAX_SEED_LOOKAHEAD` | `2**2` (= 4) | epochs | 25.6 minutes |
 | `SLOTS_PER_ETH1_VOTING_PERIOD` | `2**10` (= 1,024) | slots | ~1.7 hours |
 | `SLOTS_PER_HISTORICAL_ROOT` | `2**13` (= 8,192) | slots | ~13 hours |
 | `MIN_VALIDATOR_WITHDRAWABILITY_DELAY` | `2**8` (= 256) | epochs | ~27 hours |
 | `PERSISTENT_COMMITTEE_PERIOD` | `2**11` (= 2,048) | epochs | 9 days |
-| `MAX_EPOCHS_PER_CROSSLINK` | `2**6` (= 64) | epochs | ~7 hours |
 | `MIN_EPOCHS_TO_INACTIVITY_PENALTY` | `2**2` (= 4) | epochs | 25.6 minutes |

 ### State list lengths
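Doubling the slot time while halving the slots per epoch leaves the epoch duration unchanged, which is why both rows still list 6.4 minutes. A quick arithmetic check:

```python
# The revised time parameters: 32 slots of 12 seconds each still give the
# 6.4-minute epoch the table lists (as did 64 slots of 6 seconds).
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 2**5  # 32

epoch_seconds = SECONDS_PER_SLOT * SLOTS_PER_EPOCH
epoch_minutes = epoch_seconds / 60
```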
@@ -249,20 +240,18 @@ The following values are (non-configurable) constants used throughout the specification:

 | `MAX_ATTESTATIONS` | `2**7` (= 128) |
 | `MAX_DEPOSITS` | `2**4` (= 16) |
 | `MAX_VOLUNTARY_EXITS` | `2**4` (= 16) |
-| `MAX_TRANSFERS` | `0` |

-### Signature domain types
+### Domain types

 The following types are defined, mapping into `DomainType` (little endian):

 | Name | Value |
 | - | - |
 | `DOMAIN_BEACON_PROPOSER` | `0` |
-| `DOMAIN_RANDAO` | `1` |
-| `DOMAIN_ATTESTATION` | `2` |
+| `DOMAIN_BEACON_ATTESTER` | `1` |
+| `DOMAIN_RANDAO` | `2` |
 | `DOMAIN_DEPOSIT` | `3` |
 | `DOMAIN_VOLUNTARY_EXIT` | `4` |
-| `DOMAIN_TRANSFER` | `5` |

 ## Containers
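The table maps names to small integers, which the "(little endian)" note says are serialized into the four bytes of a `DomainType`. A minimal sketch of that encoding:

```python
# Sketch: domain-type integers serialized little-endian into 4-byte
# DomainType values, per the "(little endian)" note above.
def domain_type(value: int) -> bytes:
    return value.to_bytes(4, 'little')

DOMAIN_BEACON_PROPOSER = domain_type(0)
DOMAIN_BEACON_ATTESTER = domain_type(1)
DOMAIN_RANDAO = domain_type(2)
```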
@@ -296,39 +285,27 @@ class Checkpoint(Container):
 ```python
 class Validator(Container):
     pubkey: BLSPubkey
-    withdrawal_credentials: Hash  # Commitment to pubkey for withdrawals and transfers
+    withdrawal_credentials: Hash  # Commitment to pubkey for withdrawals
     effective_balance: Gwei  # Balance at stake
     slashed: boolean
     # Status epochs
     activation_eligibility_epoch: Epoch  # When criteria for activation were met
     activation_epoch: Epoch
     exit_epoch: Epoch
-    withdrawable_epoch: Epoch  # When validator can withdraw or transfer funds
+    withdrawable_epoch: Epoch  # When validator can withdraw funds
 ```

-#### `Crosslink`
-
-```python
-class Crosslink(Container):
-    shard: Shard
-    parent_root: Hash
-    # Crosslinking data
-    start_epoch: Epoch
-    end_epoch: Epoch
-    data_root: Hash
-```
-
 #### `AttestationData`

 ```python
 class AttestationData(Container):
+    slot: Slot
+    index: CommitteeIndex
     # LMD GHOST vote
     beacon_block_root: Hash
     # FFG vote
     source: Checkpoint
     target: Checkpoint
-    # Crosslink vote
-    crosslink: Crosslink
 ```

 #### `AttestationDataAndCustodyBit`
@@ -336,7 +313,7 @@ class AttestationData(Container):
 ```python
 class AttestationDataAndCustodyBit(Container):
     data: AttestationData
-    custody_bit: bit  # Challengeable bit (SSZ-bool, 1 byte) for the custody of crosslink data
+    custody_bit: bit  # Challengeable bit (SSZ-bool, 1 byte) for the custody of shard data
 ```

 #### `IndexedAttestation`
@@ -386,14 +363,6 @@ class DepositData(Container):
     signature: BLSSignature
 ```

-#### `CompactCommittee`
-
-```python
-class CompactCommittee(Container):
-    pubkeys: List[BLSPubkey, MAX_VALIDATORS_PER_COMMITTEE]
-    compact_validators: List[uint64, MAX_VALIDATORS_PER_COMMITTEE]
-```
-
 #### `BeaconBlockHeader`

 ```python
@@ -451,19 +420,6 @@ class VoluntaryExit(Container):
     signature: BLSSignature
 ```

-#### `Transfer`
-
-```python
-class Transfer(Container):
-    sender: ValidatorIndex
-    recipient: ValidatorIndex
-    amount: Gwei
-    fee: Gwei
-    slot: Slot  # Slot at which transfer must be processed
-    pubkey: BLSPubkey  # Withdrawal pubkey
-    signature: BLSSignature  # Signature checked against withdrawal pubkey
-```
-
 ### Beacon blocks

 #### `BeaconBlockBody`
@@ -479,7 +435,6 @@ class BeaconBlockBody(Container):
     attestations: List[Attestation, MAX_ATTESTATIONS]
     deposits: List[Deposit, MAX_DEPOSITS]
     voluntary_exits: List[VoluntaryExit, MAX_VOLUNTARY_EXITS]
-    transfers: List[Transfer, MAX_TRANSFERS]
 ```

 #### `BeaconBlock`
@@ -515,19 +470,13 @@ class BeaconState(Container):
     # Registry
     validators: List[Validator, VALIDATOR_REGISTRY_LIMIT]
     balances: List[Gwei, VALIDATOR_REGISTRY_LIMIT]
-    # Shuffling
-    start_shard: Shard
     # Randomness
     randao_mixes: Vector[Hash, EPOCHS_PER_HISTORICAL_VECTOR]
-    active_index_roots: Vector[Hash, EPOCHS_PER_HISTORICAL_VECTOR]  # Active index digests for light clients
-    compact_committees_roots: Vector[Hash, EPOCHS_PER_HISTORICAL_VECTOR]  # Committee digests for light clients
     # Slashings
     slashings: Vector[Gwei, EPOCHS_PER_SLASHINGS_VECTOR]  # Per-epoch sums of slashed effective balances
     # Attestations
     previous_epoch_attestations: List[PendingAttestation, MAX_ATTESTATIONS * SLOTS_PER_EPOCH]
     current_epoch_attestations: List[PendingAttestation, MAX_ATTESTATIONS * SLOTS_PER_EPOCH]
-    # Crosslinks
-    previous_crosslinks: Vector[Crosslink, SHARD_COUNT]  # Previous epoch snapshot
-    current_crosslinks: Vector[Crosslink, SHARD_COUNT]
     # Finality
     justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH]  # Bit set for every recent justified epoch
     previous_justified_checkpoint: Checkpoint  # Previous epoch snapshot
@@ -571,7 +520,7 @@ def xor(bytes_1: Bytes32, bytes_2: Bytes32) -> Bytes32:
 ```python
 def int_to_bytes(n: uint64, length: uint64) -> bytes:
     """
-    Return the ``length``-byte serialization of ``n``.
+    Return the ``length``-byte serialization of ``n`` in ``ENDIANNESS``-endian.
     """
     return n.to_bytes(length, ENDIANNESS)
 ```
@@ -581,7 +530,7 @@ def int_to_bytes(n: uint64, length: uint64) -> bytes:
 ```python
 def bytes_to_int(data: bytes) -> uint64:
     """
-    Return the integer deserialization of ``data``.
+    Return the integer deserialization of ``data`` interpreted as ``ENDIANNESS``-endian.
    """
     return int.from_bytes(data, ENDIANNESS)
 ```
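The two helpers above are inverses for a fixed `ENDIANNESS`. A standalone round-trip check (assuming the little-endian setting the spec configures):

```python
# Round-trip sketch of int_to_bytes / bytes_to_int with a fixed endianness.
ENDIANNESS = 'little'  # assumed configuration value

def int_to_bytes(n: int, length: int) -> bytes:
    return n.to_bytes(length, ENDIANNESS)

def bytes_to_int(data: bytes) -> int:
    return int.from_bytes(data, ENDIANNESS)

encoded = int_to_bytes(3, length=8)
```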
@@ -660,8 +609,8 @@ def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
     bit_1_indices = indexed_attestation.custody_bit_1_indices

-    # Verify no index has custody bit equal to 1 [to be removed in phase 1]
-    if not len(bit_1_indices) == 0:
-        return False
+    if not len(bit_1_indices) == 0:  # [to be removed in phase 1]
+        return False  # [to be removed in phase 1]
     # Verify max number of indices
     if not len(bit_0_indices) + len(bit_1_indices) <= MAX_VALIDATORS_PER_COMMITTEE:
         return False
@@ -682,7 +631,7 @@ def is_valid_indexed_attestation(state: BeaconState, indexed_attestation: IndexedAttestation) -> bool:
             hash_tree_root(AttestationDataAndCustodyBit(data=indexed_attestation.data, custody_bit=0b1)),
         ],
         signature=indexed_attestation.signature,
-        domain=get_domain(state, DOMAIN_ATTESTATION, indexed_attestation.data.target.epoch),
+        domain=get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch),
     ):
         return False
     return True
@@ -729,6 +678,25 @@ def compute_shuffled_index(index: ValidatorIndex, index_count: uint64, seed: Hash) -> ValidatorIndex:
     return ValidatorIndex(index)
 ```

+#### `compute_proposer_index`
+
+```python
+def compute_proposer_index(state: BeaconState, indices: Sequence[ValidatorIndex], seed: Hash) -> ValidatorIndex:
+    """
+    Return from ``indices`` a random index sampled by effective balance.
+    """
+    assert len(indices) > 0
+    MAX_RANDOM_BYTE = 2**8 - 1
+    i = 0
+    while True:
+        candidate_index = indices[compute_shuffled_index(ValidatorIndex(i % len(indices)), len(indices), seed)]
+        random_byte = hash(seed + int_to_bytes(i // 32, length=8))[i % 32]
+        effective_balance = state.validators[candidate_index].effective_balance
+        if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
+            return ValidatorIndex(candidate_index)
+        i += 1
+```
+
 #### `compute_committee`

 ```python
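The new `compute_proposer_index` accepts a shuffled candidate with probability proportional to its effective balance: a candidate passes when `balance * 255 >= MAX_EFFECTIVE_BALANCE * random_byte`. A standalone sketch of that rejection-sampling loop (sha256 stands in for the spec's `hash`, and the shuffle is simplified to round-robin):

```python
import hashlib

# Sketch of the balance-weighted rejection sampling in compute_proposer_index.
# Assumptions: sha256 replaces the spec hash; candidates are visited in order
# rather than via compute_shuffled_index.
MAX_RANDOM_BYTE = 2**8 - 1
MAX_EFFECTIVE_BALANCE = 32 * 10**9  # Gwei

def sample_proposer(balances, seed: bytes) -> int:
    i = 0
    while True:
        candidate = i % len(balances)  # simplified: spec shuffles here
        # One fresh pseudorandom byte per trial, 32 bytes per hash call.
        random_byte = hashlib.sha256(seed + (i // 32).to_bytes(8, 'little')).digest()[i % 32]
        if balances[candidate] * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
            return candidate  # accepted with probability balance / MAX_EFFECTIVE_BALANCE
        i += 1
```

A validator at `MAX_EFFECTIVE_BALANCE` is accepted on its first trial, so the loop terminates quickly in practice.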
@@ -744,20 +712,20 @@ def compute_committee(indices: Sequence[ValidatorIndex],
     return [indices[compute_shuffled_index(ValidatorIndex(i), len(indices), seed)] for i in range(start, end)]
 ```

-#### `compute_epoch_of_slot`
+#### `compute_epoch_at_slot`

 ```python
-def compute_epoch_of_slot(slot: Slot) -> Epoch:
+def compute_epoch_at_slot(slot: Slot) -> Epoch:
     """
-    Return the epoch number of ``slot``.
+    Return the epoch number at ``slot``.
     """
     return Epoch(slot // SLOTS_PER_EPOCH)
 ```

-#### `compute_start_slot_of_epoch`
+#### `compute_start_slot_at_epoch`

 ```python
-def compute_start_slot_of_epoch(epoch: Epoch) -> Slot:
+def compute_start_slot_at_epoch(epoch: Epoch) -> Slot:
     """
     Return the start slot of ``epoch``.
     """
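The renamed converters are a simple floor-division pair. A self-contained round-trip check with the new 32-slot epochs:

```python
# Round trip of the renamed slot/epoch converters: with 32-slot epochs,
# slot 70 sits in epoch 2, whose start slot is 64.
SLOTS_PER_EPOCH = 2**5  # 32

def compute_epoch_at_slot(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH

def compute_start_slot_at_epoch(epoch: int) -> int:
    return epoch * SLOTS_PER_EPOCH
```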
@@ -771,7 +739,7 @@ def compute_activation_exit_epoch(epoch: Epoch) -> Epoch:
     """
     Return the epoch during which validator activations and exits initiated in ``epoch`` take effect.
     """
-    return Epoch(epoch + 1 + ACTIVATION_EXIT_DELAY)
+    return Epoch(epoch + 1 + MAX_SEED_LOOKAHEAD)
 ```

 #### `compute_domain`
@@ -793,7 +761,7 @@ def get_current_epoch(state: BeaconState) -> Epoch:
     """
     Return the current epoch.
     """
-    return compute_epoch_of_slot(state.slot)
+    return compute_epoch_at_slot(state.slot)
 ```

 #### `get_previous_epoch`
@@ -814,7 +782,7 @@ def get_block_root(state: BeaconState, epoch: Epoch) -> Hash:
     """
     Return the block root at the start of a recent ``epoch``.
     """
-    return get_block_root_at_slot(state, compute_start_slot_of_epoch(epoch))
+    return get_block_root_at_slot(state, compute_start_slot_at_epoch(epoch))
 ```

 #### `get_block_root_at_slot`
@@ -862,70 +830,45 @@ def get_validator_churn_limit(state: BeaconState) -> uint64:
 #### `get_seed`

 ```python
-def get_seed(state: BeaconState, epoch: Epoch) -> Hash:
+def get_seed(state: BeaconState, epoch: Epoch, domain_type: DomainType) -> Hash:
     """
     Return the seed at ``epoch``.
     """
     mix = get_randao_mix(state, Epoch(epoch + EPOCHS_PER_HISTORICAL_VECTOR - MIN_SEED_LOOKAHEAD - 1))  # Avoid underflow
-    active_index_root = state.active_index_roots[epoch % EPOCHS_PER_HISTORICAL_VECTOR]
-    return hash(mix + active_index_root + int_to_bytes(epoch, length=32))
+    return hash(domain_type + int_to_bytes(epoch, length=8) + mix)
 ```

-#### `get_committee_count`
+#### `get_committee_count_at_slot`

 ```python
-def get_committee_count(state: BeaconState, epoch: Epoch) -> uint64:
+def get_committee_count_at_slot(state: BeaconState, slot: Slot) -> uint64:
     """
-    Return the number of committees at ``epoch``.
+    Return the number of committees at ``slot``.
     """
-    committees_per_slot = max(1, min(
-        SHARD_COUNT // SLOTS_PER_EPOCH,
+    epoch = compute_epoch_at_slot(slot)
+    return max(1, min(
+        MAX_COMMITTEES_PER_SLOT,
         len(get_active_validator_indices(state, epoch)) // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE,
     ))
-    return committees_per_slot * SLOTS_PER_EPOCH
 ```

-#### `get_crosslink_committee`
+#### `get_beacon_committee`

 ```python
-def get_crosslink_committee(state: BeaconState, epoch: Epoch, shard: Shard) -> Sequence[ValidatorIndex]:
+def get_beacon_committee(state: BeaconState, slot: Slot, index: CommitteeIndex) -> Sequence[ValidatorIndex]:
     """
-    Return the crosslink committee at ``epoch`` for ``shard``.
+    Return the beacon committee at ``slot`` for ``index``.
     """
+    epoch = compute_epoch_at_slot(slot)
+    committees_per_slot = get_committee_count_at_slot(state, slot)
     return compute_committee(
         indices=get_active_validator_indices(state, epoch),
-        seed=get_seed(state, epoch),
-        index=(shard + SHARD_COUNT - get_start_shard(state, epoch)) % SHARD_COUNT,
-        count=get_committee_count(state, epoch),
+        seed=get_seed(state, epoch, DOMAIN_BEACON_ATTESTER),
+        index=(slot % SLOTS_PER_EPOCH) * committees_per_slot + index,
+        count=committees_per_slot * SLOTS_PER_EPOCH,
     )
 ```

-#### `get_start_shard`
-
-```python
-def get_start_shard(state: BeaconState, epoch: Epoch) -> Shard:
-    """
-    Return the start shard of the 0th committee at ``epoch``.
-    """
-    assert epoch <= get_current_epoch(state) + 1
-    check_epoch = Epoch(get_current_epoch(state) + 1)
-    shard = Shard((state.start_shard + get_shard_delta(state, get_current_epoch(state))) % SHARD_COUNT)
-    while check_epoch > epoch:
-        check_epoch -= Epoch(1)
-        shard = Shard((shard + SHARD_COUNT - get_shard_delta(state, check_epoch)) % SHARD_COUNT)
-    return shard
-```
-
-#### `get_shard_delta`
-
-```python
-def get_shard_delta(state: BeaconState, epoch: Epoch) -> uint64:
-    """
-    Return the number of shards to increment ``state.start_shard`` at ``epoch``.
-    """
-    return min(get_committee_count(state, epoch), SHARD_COUNT - SHARD_COUNT // SLOTS_PER_EPOCH)
-```
-
 #### `get_beacon_proposer_index`

 ```python
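The reworked `get_seed` folds the domain type and epoch into the hash input, so attester and proposer shufflings derive distinct seeds from the same RANDAO mix. A standalone sketch (sha256 stands in for the spec's `hash`):

```python
import hashlib

# Sketch of the new seed derivation: hash(domain_type + epoch_le64 + mix).
# sha256 is an assumed stand-in for the spec's hash function.
def get_seed_sketch(domain_type: bytes, epoch: int, randao_mix: bytes) -> bytes:
    return hashlib.sha256(domain_type + epoch.to_bytes(8, 'little') + randao_mix).digest()

mix = b'\x00' * 32
attester_seed = get_seed_sketch(b'\x01\x00\x00\x00', 5, mix)  # DOMAIN_BEACON_ATTESTER
proposer_seed = get_seed_sketch(b'\x00\x00\x00\x00', 5, mix)  # DOMAIN_BEACON_PROPOSER
```

Because the domain type leads the preimage, the two seeds differ even when epoch and mix are identical.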
@@ -934,53 +877,9 @@ def get_beacon_proposer_index(state: BeaconState) -> ValidatorIndex:
     Return the beacon proposer index at the current slot.
     """
     epoch = get_current_epoch(state)
-    committees_per_slot = get_committee_count(state, epoch) // SLOTS_PER_EPOCH
-    offset = committees_per_slot * (state.slot % SLOTS_PER_EPOCH)
-    shard = Shard((get_start_shard(state, epoch) + offset) % SHARD_COUNT)
-    first_committee = get_crosslink_committee(state, epoch, shard)
-    MAX_RANDOM_BYTE = 2**8 - 1
-    seed = get_seed(state, epoch)
-    i = 0
-    while True:
-        candidate_index = first_committee[(epoch + i) % len(first_committee)]
-        random_byte = hash(seed + int_to_bytes(i // 32, length=8))[i % 32]
-        effective_balance = state.validators[candidate_index].effective_balance
-        if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
-            return ValidatorIndex(candidate_index)
-        i += 1
-```
-
-#### `get_attestation_data_slot`
-
-```python
-def get_attestation_data_slot(state: BeaconState, data: AttestationData) -> Slot:
-    """
-    Return the slot corresponding to the attestation ``data``.
-    """
-    committee_count = get_committee_count(state, data.target.epoch)
-    offset = (data.crosslink.shard + SHARD_COUNT - get_start_shard(state, data.target.epoch)) % SHARD_COUNT
-    return Slot(compute_start_slot_of_epoch(data.target.epoch) + offset // (committee_count // SLOTS_PER_EPOCH))
-```
-
-#### `get_compact_committees_root`
-
-```python
-def get_compact_committees_root(state: BeaconState, epoch: Epoch) -> Hash:
-    """
-    Return the compact committee root at ``epoch``.
-    """
-    committees = [CompactCommittee() for _ in range(SHARD_COUNT)]
-    start_shard = get_start_shard(state, epoch)
-    for committee_number in range(get_committee_count(state, epoch)):
-        shard = Shard((start_shard + committee_number) % SHARD_COUNT)
-        for index in get_crosslink_committee(state, epoch, shard):
-            validator = state.validators[index]
-            committees[shard].pubkeys.append(validator.pubkey)
-            compact_balance = validator.effective_balance // EFFECTIVE_BALANCE_INCREMENT
-            # `index` (top 6 bytes) + `slashed` (16th bit) + `compact_balance` (bottom 15 bits)
-            compact_validator = uint64((index << 16) + (validator.slashed << 15) + compact_balance)
-            committees[shard].compact_validators.append(compact_validator)
-    return hash_tree_root(Vector[CompactCommittee, SHARD_COUNT](committees))
+    seed = hash(get_seed(state, epoch, DOMAIN_BEACON_PROPOSER) + int_to_bytes(state.slot, length=8))
+    indices = get_active_validator_indices(state, epoch)
+    return compute_proposer_index(state, indices, seed)
 ```

 #### `get_total_balance`
@@ -1044,7 +943,7 @@ def get_attesting_indices(state: BeaconState,
     """
     Return the set of attesting indices corresponding to ``data`` and ``bits``.
     """
-    committee = get_crosslink_committee(state, data.target.epoch, data.crosslink.shard)
+    committee = get_beacon_committee(state, data.slot, data.index)
     return set(index for i, index in enumerate(committee) if bits[i])
 ```
@@ -1137,6 +1036,7 @@ def initialize_beacon_state_from_eth1(eth1_block_hash: Hash,
         genesis_time=eth1_timestamp - eth1_timestamp % SECONDS_PER_DAY + 2 * SECONDS_PER_DAY,
         eth1_data=Eth1Data(block_hash=eth1_block_hash, deposit_count=len(deposits)),
         latest_block_header=BeaconBlockHeader(body_root=hash_tree_root(BeaconBlockBody())),
+        randao_mixes=[eth1_block_hash] * EPOCHS_PER_HISTORICAL_VECTOR,  # Seed RANDAO with Eth1 entropy
     )

     # Process deposits
@@ -1154,13 +1054,6 @@ def initialize_beacon_state_from_eth1(eth1_block_hash: Hash,
             validator.activation_eligibility_epoch = GENESIS_EPOCH
             validator.activation_epoch = GENESIS_EPOCH

-    # Populate active_index_roots and compact_committees_roots
-    indices_list = List[ValidatorIndex, VALIDATOR_REGISTRY_LIMIT](get_active_validator_indices(state, GENESIS_EPOCH))
-    active_index_root = hash_tree_root(indices_list)
-    committee_root = get_compact_committees_root(state, GENESIS_EPOCH)
-    for index in range(EPOCHS_PER_HISTORICAL_VECTOR):
-        state.active_index_roots[index] = active_index_root
-        state.compact_committees_roots[index] = committee_root
     return state
 ```
@@ -1231,12 +1124,12 @@ def process_slot(state: BeaconState) -> None:
 ```python
 def process_epoch(state: BeaconState) -> None:
     process_justification_and_finalization(state)
-    process_crosslinks(state)
     process_rewards_and_penalties(state)
     process_registry_updates(state)
     # @process_reveal_deadlines
     # @process_challenge_deadlines
     process_slashings(state)
     # @update_period_committee
     process_final_updates(state)
     # @after_process_final_updates
 ```
@@ -1261,7 +1154,7 @@ def get_matching_target_attestations(state: BeaconState, epoch: Epoch) -> Sequence[PendingAttestation]:
 def get_matching_head_attestations(state: BeaconState, epoch: Epoch) -> Sequence[PendingAttestation]:
     return [
         a for a in get_matching_source_attestations(state, epoch)
-        if a.data.beacon_block_root == get_block_root_at_slot(state, get_attestation_data_slot(state, a.data))
+        if a.data.beacon_block_root == get_block_root_at_slot(state, a.data.slot)
     ]
 ```
@@ -1279,23 +1172,6 @@ def get_attesting_balance(state: BeaconState, attestations: Sequence[PendingAttestation]) -> Gwei:
     return get_total_balance(state, get_unslashed_attesting_indices(state, attestations))
 ```

-```python
-def get_winning_crosslink_and_attesting_indices(state: BeaconState,
-                                                epoch: Epoch,
-                                                shard: Shard) -> Tuple[Crosslink, Set[ValidatorIndex]]:
-    attestations = [a for a in get_matching_source_attestations(state, epoch) if a.data.crosslink.shard == shard]
-    crosslinks = filter(
-        lambda c: hash_tree_root(state.current_crosslinks[shard]) in (c.parent_root, hash_tree_root(c)),
-        [a.data.crosslink for a in attestations]
-    )
-    # Winning crosslink has the crosslink data root with the most balance voting for it (ties broken lexicographically)
-    winning_crosslink = max(crosslinks, key=lambda c: (
-        get_attesting_balance(state, [a for a in attestations if a.data.crosslink == c]), c.data_root
-    ), default=Crosslink())
-    winning_attestations = [a for a in attestations if a.data.crosslink == winning_crosslink]
-    return winning_crosslink, get_unslashed_attesting_indices(state, winning_attestations)
-```
-
 #### Justification and finalization

 ```python
@@ -1339,20 +1215,6 @@ def process_justification_and_finalization(state: BeaconState) -> None:
     state.finalized_checkpoint = old_current_justified_checkpoint
 ```

-#### Crosslinks
-
-```python
-def process_crosslinks(state: BeaconState) -> None:
-    state.previous_crosslinks = [c for c in state.current_crosslinks]
-    for epoch in (get_previous_epoch(state), get_current_epoch(state)):
-        for offset in range(get_committee_count(state, epoch)):
-            shard = Shard((get_start_shard(state, epoch) + offset) % SHARD_COUNT)
-            crosslink_committee = set(get_crosslink_committee(state, epoch, shard))
-            winning_crosslink, attesting_indices = get_winning_crosslink_and_attesting_indices(state, epoch, shard)
-            if 3 * get_total_balance(state, attesting_indices) >= 2 * get_total_balance(state, crosslink_committee):
-                state.current_crosslinks[shard] = winning_crosslink
-```
-
 #### Rewards and penalties

 ```python
@@ -1396,9 +1258,7 @@ def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
         rewards[attestation.proposer_index] += proposer_reward
         max_attester_reward = get_base_reward(state, index) - proposer_reward
         rewards[index] += Gwei(
-            max_attester_reward
-            * (SLOTS_PER_EPOCH + MIN_ATTESTATION_INCLUSION_DELAY - attestation.inclusion_delay)
-            // SLOTS_PER_EPOCH
+            max_attester_reward // attestation.inclusion_delay
         )

     # Inactivity penalty
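The hunk above changes the inclusion reward from a linear decay over the epoch to an inverse-delay decay. A quick comparison of the two formulas (illustrative values, assuming a 32-slot epoch and minimum inclusion delay of 1):

```python
# Old: linear decay over the epoch. New: reward proportional to 1/delay,
# so a two-slot delay halves the maximum attester reward.
SLOTS_PER_EPOCH = 32
MIN_ATTESTATION_INCLUSION_DELAY = 1

def old_inclusion_reward(max_attester_reward: int, inclusion_delay: int) -> int:
    return (max_attester_reward
            * (SLOTS_PER_EPOCH + MIN_ATTESTATION_INCLUSION_DELAY - inclusion_delay)
            // SLOTS_PER_EPOCH)

def new_inclusion_reward(max_attester_reward: int, inclusion_delay: int) -> int:
    return max_attester_reward // inclusion_delay
```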
@ -1415,36 +1275,15 @@ def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence
|
|||
return rewards, penalties
|
||||
```
|
||||
|
||||
```python
def get_crosslink_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
    rewards = [Gwei(0) for _ in range(len(state.validators))]
    penalties = [Gwei(0) for _ in range(len(state.validators))]
    epoch = get_previous_epoch(state)
    for offset in range(get_committee_count(state, epoch)):
        shard = Shard((get_start_shard(state, epoch) + offset) % SHARD_COUNT)
        crosslink_committee = set(get_crosslink_committee(state, epoch, shard))
        winning_crosslink, attesting_indices = get_winning_crosslink_and_attesting_indices(state, epoch, shard)
        attesting_balance = get_total_balance(state, attesting_indices)
        committee_balance = get_total_balance(state, crosslink_committee)
        for index in crosslink_committee:
            base_reward = get_base_reward(state, index)
            if index in attesting_indices:
                rewards[index] += base_reward * attesting_balance // committee_balance
            else:
                penalties[index] += base_reward
    return rewards, penalties
```
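The proportionality rule in `get_crosslink_deltas` — attesters earn `base_reward * attesting_balance // committee_balance`, non-attesters lose a full `base_reward` — can be sanity-checked in isolation. A standalone sketch (the `crosslink_delta` helper and the balances are invented for illustration, not spec code):

```python
def crosslink_delta(base_reward: int, attesting_balance: int,
                    committee_balance: int, attested: bool) -> int:
    # Mirrors the rule above: attesters receive a pro-rata share of the
    # base reward, non-attesters are penalized the full base reward.
    if attested:
        return base_reward * attesting_balance // committee_balance
    return -base_reward

# Toy committee: 96 of 128 units of balance attested (75% participation).
assert crosslink_delta(100, 96, 128, attested=True) == 75
assert crosslink_delta(100, 96, 128, attested=False) == -100
```

Note that the integer division rounds the attester reward down, so participation rewards never exceed the exact pro-rata share.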

```python
def process_rewards_and_penalties(state: BeaconState) -> None:
    if get_current_epoch(state) == GENESIS_EPOCH:
        return

    rewards1, penalties1 = get_attestation_deltas(state)
    rewards2, penalties2 = get_crosslink_deltas(state)
    rewards, penalties = get_attestation_deltas(state)
    for index in range(len(state.validators)):
        increase_balance(state, ValidatorIndex(index), rewards1[index] + rewards2[index])
        decrease_balance(state, ValidatorIndex(index), penalties1[index] + penalties2[index])
        increase_balance(state, ValidatorIndex(index), rewards[index])
        decrease_balance(state, ValidatorIndex(index), penalties[index])
```

#### Registry updates

@@ -1504,14 +1343,6 @@ def process_final_updates(state: BeaconState) -> None:
        HALF_INCREMENT = EFFECTIVE_BALANCE_INCREMENT // 2
        if balance < validator.effective_balance or validator.effective_balance + 3 * HALF_INCREMENT < balance:
            validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
    # Set active index root
    index_epoch = Epoch(next_epoch + ACTIVATION_EXIT_DELAY)
    index_root_position = index_epoch % EPOCHS_PER_HISTORICAL_VECTOR
    indices_list = List[ValidatorIndex, VALIDATOR_REGISTRY_LIMIT](get_active_validator_indices(state, index_epoch))
    state.active_index_roots[index_root_position] = hash_tree_root(indices_list)
    # Set committees root
    committee_root_position = next_epoch % EPOCHS_PER_HISTORICAL_VECTOR
    state.compact_committees_roots[committee_root_position] = get_compact_committees_root(state, next_epoch)
    # Reset slashings
    state.slashings[next_epoch % EPOCHS_PER_SLASHINGS_VECTOR] = Gwei(0)
    # Set randao mix

@@ -1520,8 +1351,6 @@ def process_final_updates(state: BeaconState) -> None:
    if next_epoch % (SLOTS_PER_HISTORICAL_ROOT // SLOTS_PER_EPOCH) == 0:
        historical_batch = HistoricalBatch(block_roots=state.block_roots, state_roots=state.state_roots)
        state.historical_roots.append(hash_tree_root(historical_batch))
    # Update start shard
    state.start_shard = Shard((state.start_shard + get_shard_delta(state, current_epoch)) % SHARD_COUNT)
    # Rotate current/previous epoch attestations
    state.previous_epoch_attestations = state.current_epoch_attestations
    state.current_epoch_attestations = []

@@ -1549,9 +1378,9 @@ def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
    state.latest_block_header = BeaconBlockHeader(
        slot=block.slot,
        parent_root=block.parent_root,
        # state_root: zeroed, overwritten in the next `process_slot` call
        # `state_root` is zeroed and overwritten in the next `process_slot` call
        body_root=hash_tree_root(block.body),
        # signature is always zeroed
        # `signature` is zeroed
    )
    # Verify proposer is not slashed
    proposer = state.validators[get_beacon_proposer_index(state)]

@@ -1588,8 +1417,6 @@ def process_eth1_data(state: BeaconState, body: BeaconBlockBody) -> None:
def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
    # Verify that outstanding deposits are processed up to the maximum number of deposits
    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)
    # Verify that there are no duplicate transfers
    assert len(body.transfers) == len(set(body.transfers))

    for operations, function in (
        (body.proposer_slashings, process_proposer_slashing),

@@ -1597,7 +1424,7 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
        (body.attestations, process_attestation),
        (body.deposits, process_deposit),
        (body.voluntary_exits, process_voluntary_exit),
        (body.transfers, process_transfer),
        # @process_shard_receipt_proofs
    ):
        for operation in operations:
            function(state, operation)

@@ -1608,16 +1435,15 @@ def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:

```python
def process_proposer_slashing(state: BeaconState, proposer_slashing: ProposerSlashing) -> None:
    proposer = state.validators[proposer_slashing.proposer_index]
    # Verify that the epoch is the same
    assert (compute_epoch_of_slot(proposer_slashing.header_1.slot)
            == compute_epoch_of_slot(proposer_slashing.header_2.slot))
    # Verify slots match
    assert proposer_slashing.header_1.slot == proposer_slashing.header_2.slot
    # But the headers are different
    assert proposer_slashing.header_1 != proposer_slashing.header_2
    # Check proposer is slashable
    assert is_slashable_validator(proposer, get_current_epoch(state))
    # Signatures are valid
    for header in (proposer_slashing.header_1, proposer_slashing.header_2):
        domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_of_slot(header.slot))
        domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_at_slot(header.slot))
        assert bls_verify(proposer.pubkey, signing_root(header), header.signature, domain)

    slash_validator(state, proposer_slashing.proposer_index)

@@ -1648,37 +1474,27 @@ def process_attester_slashing(state: BeaconState, attester_slashing: AttesterSla

```python
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
    data = attestation.data
    assert data.crosslink.shard < SHARD_COUNT
    assert data.index < get_committee_count_at_slot(state, data.slot)
    assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
    assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= data.slot + SLOTS_PER_EPOCH

    attestation_slot = get_attestation_data_slot(state, data)
    assert attestation_slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= attestation_slot + SLOTS_PER_EPOCH

    committee = get_crosslink_committee(state, data.target.epoch, data.crosslink.shard)
    committee = get_beacon_committee(state, data.slot, data.index)
    assert len(attestation.aggregation_bits) == len(attestation.custody_bits) == len(committee)

    pending_attestation = PendingAttestation(
        data=data,
        aggregation_bits=attestation.aggregation_bits,
        inclusion_delay=state.slot - attestation_slot,
        inclusion_delay=state.slot - data.slot,
        proposer_index=get_beacon_proposer_index(state),
    )

    if data.target.epoch == get_current_epoch(state):
        assert data.source == state.current_justified_checkpoint
        parent_crosslink = state.current_crosslinks[data.crosslink.shard]
        state.current_epoch_attestations.append(pending_attestation)
    else:
        assert data.source == state.previous_justified_checkpoint
        parent_crosslink = state.previous_crosslinks[data.crosslink.shard]
        state.previous_epoch_attestations.append(pending_attestation)

    # Check crosslink against expected parent crosslink
    assert data.crosslink.parent_root == hash_tree_root(parent_crosslink)
    assert data.crosslink.start_epoch == parent_crosslink.end_epoch
    assert data.crosslink.end_epoch == min(data.target.epoch, parent_crosslink.end_epoch + MAX_EPOCHS_PER_CROSSLINK)
    assert data.crosslink.data_root == Bytes32()  # [to be removed in phase 1]

    # Check signature
    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))
```

@@ -1746,33 +1562,3 @@ def process_voluntary_exit(state: BeaconState, exit: VoluntaryExit) -> None:
    # Initiate exit
    initiate_validator_exit(state, exit.validator_index)
```

##### Transfers

```python
def process_transfer(state: BeaconState, transfer: Transfer) -> None:
    # Verify that the balance covers the amount and the fee (with overflow protection)
    assert state.balances[transfer.sender] >= max(transfer.amount + transfer.fee, transfer.amount, transfer.fee)
    # A transfer is valid in only one slot
    assert state.slot == transfer.slot
    # Sender must satisfy at least one of the following:
    assert (
        # 1) Never have been eligible for activation
        state.validators[transfer.sender].activation_eligibility_epoch == FAR_FUTURE_EPOCH or
        # 2) Be withdrawable
        get_current_epoch(state) >= state.validators[transfer.sender].withdrawable_epoch or
        # 3) Have a balance of at least MAX_EFFECTIVE_BALANCE after the transfer
        state.balances[transfer.sender] >= transfer.amount + transfer.fee + MAX_EFFECTIVE_BALANCE
    )
    # Verify that the pubkey is valid
    assert state.validators[transfer.sender].withdrawal_credentials == BLS_WITHDRAWAL_PREFIX + hash(transfer.pubkey)[1:]
    # Verify that the signature is valid
    assert bls_verify(transfer.pubkey, signing_root(transfer), transfer.signature, get_domain(state, DOMAIN_TRANSFER))
    # Process the transfer
    decrease_balance(state, transfer.sender, transfer.amount + transfer.fee)
    increase_balance(state, transfer.recipient, transfer.amount)
    increase_balance(state, get_beacon_proposer_index(state), transfer.fee)
    # Verify balances are not dust
    assert not (0 < state.balances[transfer.sender] < MIN_DEPOSIT_AMOUNT)
    assert not (0 < state.balances[transfer.recipient] < MIN_DEPOSIT_AMOUNT)
```
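The `max(transfer.amount + transfer.fee, transfer.amount, transfer.fee)` guard above protects against uint64 wraparound: if `amount + fee` overflows, the wrapped sum is smaller than at least one operand, so taking the max forces the balance check against the larger original term. A standalone sketch simulating 64-bit wraparound (Python integers do not overflow, so the wrap is applied manually; `covers` is an invented name for illustration):

```python
UINT64_MAX = 2**64 - 1

def covers(balance: int, amount: int, fee: int) -> bool:
    # Simulate uint64 addition with wraparound, as on a 64-bit VM.
    wrapped_sum = (amount + fee) & UINT64_MAX
    return balance >= max(wrapped_sum, amount, fee)

# Overflowing pair: amount + fee wraps to 9, which a tiny balance would
# appear to "cover" without the max() guard.
amount, fee = UINT64_MAX, 10
assert (amount + fee) & UINT64_MAX == 9
assert covers(100, amount, fee) is False  # guard rejects the wrapped sum
assert covers(50, 30, 20) is True         # ordinary case still passes
```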

@@ -34,11 +34,11 @@ This document represents the specification for the beacon chain deposit contract

## Ethereum 1.0 deposit contract

The initial deployment phases of Ethereum 2.0 are implemented without consensus changes to Ethereum 1.0. A deposit contract at address `DEPOSIT_CONTRACT_ADDRESS` is added to Ethereum 1.0 for deposits of ETH to the beacon chain. Validator balances will be withdrawable to the shards in Phase 2 (i.e. when the EVM 2.0 is deployed and the shards have state).
The initial deployment phases of Ethereum 2.0 are implemented without consensus changes to Ethereum 1.0. A deposit contract at address `DEPOSIT_CONTRACT_ADDRESS` is added to Ethereum 1.0 for deposits of ETH to the beacon chain. Validator balances will be withdrawable to the shards in Phase 2.

### `deposit` function

The deposit contract has a public `deposit` function to make deposits. It takes as arguments `pubkey: bytes[48], withdrawal_credentials: bytes[32], signature: bytes[96]` corresponding to a `DepositData` object.
The deposit contract has a public `deposit` function to make deposits. It takes as arguments `pubkey: bytes[48], withdrawal_credentials: bytes[32], signature: bytes[96], deposit_data_root: bytes32`. The first three arguments populate a [`DepositData`](./0_beacon-chain.md#depositdata) object, and `deposit_data_root` is the expected `DepositData` root as a protection against malformatted calldata.

#### Deposit amount

@@ -118,7 +118,7 @@ def get_latest_attesting_balance(store: Store, root: Hash) -> Gwei:
def get_head(store: Store) -> Hash:
    # Execute the LMD-GHOST fork choice
    head = store.justified_checkpoint.root
    justified_slot = compute_start_slot_of_epoch(store.justified_checkpoint.epoch)
    justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
    while True:
        children = [
            root for root in store.blocks.keys()

@@ -156,9 +156,9 @@ def on_block(store: Store, block: BeaconBlock) -> None:
        store.finalized_checkpoint.root
    )
    # Check that block is later than the finalized epoch slot
    assert block.slot > compute_start_slot_of_epoch(store.finalized_checkpoint.epoch)
    assert block.slot > compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
    # Check the block is valid and compute the post-state
    state = state_transition(pre_state, block)
    state = state_transition(pre_state, block, True)
    # Add new state for this block to the store
    store.block_states[signing_root(block)] = state

@@ -182,18 +182,17 @@ def on_attestation(store: Store, attestation: Attestation) -> None:

    # Attestations cannot be from future epochs. If they are, delay consideration until the epoch arrives
    base_state = store.block_states[target.root].copy()
    assert store.time >= base_state.genesis_time + compute_start_slot_of_epoch(target.epoch) * SECONDS_PER_SLOT
    assert store.time >= base_state.genesis_time + compute_start_slot_at_epoch(target.epoch) * SECONDS_PER_SLOT

    # Store target checkpoint state if not yet seen
    if target not in store.checkpoint_states:
        process_slots(base_state, compute_start_slot_of_epoch(target.epoch))
        process_slots(base_state, compute_start_slot_at_epoch(target.epoch))
        store.checkpoint_states[target] = base_state
    target_state = store.checkpoint_states[target]

    # Attestations can only affect the fork choice of subsequent slots.
    # Delay consideration in the fork choice until their slot is in the past.
    attestation_slot = get_attestation_data_slot(target_state, attestation.data)
    assert store.time >= (attestation_slot + 1) * SECONDS_PER_SLOT
    assert store.time >= (attestation.data.slot + 1) * SECONDS_PER_SLOT

    # Get state at the `target` to validate attestation and calculate the committees
    indexed_attestation = get_indexed_attestation(target_state, attestation)

@@ -0,0 +1,252 @@
# Phase 1 miscellaneous beacon chain changes

## Table of contents

<!-- TOC -->

- [Phase 1 miscellaneous beacon chain changes](#phase-1-miscellaneous-beacon-chain-changes)
    - [Table of contents](#table-of-contents)
    - [Configuration](#configuration)
    - [Containers](#containers)
        - [`CompactCommittee`](#compactcommittee)
        - [`ShardReceiptDelta`](#shardreceiptdelta)
        - [`ShardReceiptProof`](#shardreceiptproof)
    - [Helper functions](#helper-functions)
        - [`pack_compact_validator`](#pack_compact_validator)
        - [`unpack_compact_validator`](#unpack_compact_validator)
        - [`committee_to_compact_committee`](#committee_to_compact_committee)
        - [`verify_merkle_proof`](#verify_merkle_proof)
        - [`compute_historical_state_generalized_index`](#compute_historical_state_generalized_index)
        - [`get_generalized_index_of_crosslink_header`](#get_generalized_index_of_crosslink_header)
        - [`process_shard_receipt_proof`](#process_shard_receipt_proof)
    - [Changes](#changes)
        - [Phase 0 container updates](#phase-0-container-updates)
            - [`BeaconState`](#beaconstate)
            - [`BeaconBlockBody`](#beaconblockbody)
        - [Persistent committees](#persistent-committees)
        - [Shard receipt processing](#shard-receipt-processing)

<!-- /TOC -->

## Configuration

| Name | Value | Unit | Duration |
| - | - | - | - |
| `MAX_SHARD_RECEIPT_PROOFS` | `2**0` (= 1) | - | - |
| `PERIOD_COMMITTEE_ROOT_LENGTH` | `2**8` (= 256) | periods | ~9 months |
| `MINOR_REWARD_QUOTIENT` | `2**8` (= 256) | - | - |
| `REWARD_COEFFICIENT_BASE` | **TBD** | - | - |

## Containers

#### `CompactCommittee`

```python
class CompactCommittee(Container):
    pubkeys: List[BLSPubkey, MAX_VALIDATORS_PER_COMMITTEE]
    compact_validators: List[uint64, MAX_VALIDATORS_PER_COMMITTEE]
```

#### `ShardReceiptDelta`

```python
class ShardReceiptDelta(Container):
    index: ValidatorIndex
    reward_coefficient: uint64
    block_fee: Gwei
```

#### `ShardReceiptProof`

```python
class ShardReceiptProof(Container):
    shard: Shard
    proof: List[Hash, PLACEHOLDER]
    receipt: List[ShardReceiptDelta, PLACEHOLDER]
```

## Helper functions

#### `pack_compact_validator`

```python
def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:
    """
    Creates a compact validator object representing index, slashed status, and compressed balance.
    Takes as input balance-in-increments (// EFFECTIVE_BALANCE_INCREMENT) to preserve symmetry with
    the unpacking function.
    """
    return (index << 16) + (slashed << 15) + balance_in_increments
```

#### `unpack_compact_validator`

```python
def unpack_compact_validator(compact_validator: int) -> Tuple[int, bool, int]:
    """
    Returns validator index, slashed, balance // EFFECTIVE_BALANCE_INCREMENT
    """
    return compact_validator >> 16, bool((compact_validator >> 15) % 2), compact_validator & (2**15 - 1)
```
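The two helpers above are bitwise inverses whenever the balance fits in 15 bits and the slashed flag in one. A quick round-trip check (functions reproduced from the definitions above):

```python
def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:
    # 16 low bits hold (slashed, balance); the validator index sits above them.
    return (index << 16) + (slashed << 15) + balance_in_increments

def unpack_compact_validator(compact_validator: int):
    return (compact_validator >> 16,
            bool((compact_validator >> 15) % 2),
            compact_validator & (2**15 - 1))

packed = pack_compact_validator(12345, True, 32)
assert packed == (12345 << 16) + (1 << 15) + 32
assert unpack_compact_validator(packed) == (12345, True, 32)
```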

#### `committee_to_compact_committee`

```python
def committee_to_compact_committee(state: BeaconState, committee: Sequence[ValidatorIndex]) -> CompactCommittee:
    """
    Given a state and a list of validator indices, outputs the CompactCommittee representing them.
    """
    validators = [state.validators[i] for i in committee]
    compact_validators = [
        pack_compact_validator(i, v.slashed, v.effective_balance // EFFECTIVE_BALANCE_INCREMENT)
        for i, v in zip(committee, validators)
    ]
    pubkeys = [v.pubkey for v in validators]
    return CompactCommittee(pubkeys=pubkeys, compact_validators=compact_validators)
```

#### `verify_merkle_proof`

```python
def verify_merkle_proof(leaf: Hash, proof: Sequence[Hash], index: GeneralizedIndex, root: Hash) -> bool:
    assert len(proof) == get_generalized_index_length(index)
    for i, h in enumerate(proof):
        if get_generalized_index_bit(index, i):
            leaf = hash(h + leaf)
        else:
            leaf = hash(leaf + h)
    return leaf == root
```
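A minimal, self-contained check of the proof-walking logic above, using SHA-256 and inlined stand-ins for the SSZ generalized-index helpers (`get_generalized_index_length` as floor-log2 and `get_generalized_index_bit` as plain bit extraction — assumptions based on the merkle proof helpers, not definitions from this diff):

```python
from hashlib import sha256

def hash(x: bytes) -> bytes:
    return sha256(x).digest()

def verify_merkle_proof(leaf: bytes, proof, index: int, root: bytes) -> bool:
    assert len(proof) == index.bit_length() - 1  # generalized index length
    for i, h in enumerate(proof):
        if (index >> i) & 1:                     # generalized index bit
            leaf = hash(h + leaf)
        else:
            leaf = hash(leaf + h)
    return leaf == root

# Four-leaf tree: leaves occupy generalized indices 4..7.
leaves = [bytes([i]) * 32 for i in range(4)]
node2 = hash(leaves[0] + leaves[1])
node3 = hash(leaves[2] + leaves[3])
root = hash(node2 + node3)

# Prove leaves[2] (generalized index 6): sibling leaf first, then the uncle.
assert verify_merkle_proof(leaves[2], [leaves[3], node2], 6, root)
assert not verify_merkle_proof(leaves[1], [leaves[3], node2], 6, root)
```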

#### `compute_historical_state_generalized_index`

```python
def compute_historical_state_generalized_index(earlier: ShardSlot, later: ShardSlot) -> GeneralizedIndex:
    """
    Computes the generalized index of the state root of slot `earlier` based on the state root of slot `later`.
    Relies on the `history_accumulator` in the `ShardState`, where `history_accumulator[i]` maintains the most
    recent 2**i'th slot state. Works by tracing a `log(later-earlier)` step path from `later` to `earlier`
    through intermediate blocks at the next available multiples of descending powers of two.
    """
    o = GeneralizedIndex(1)
    for i in range(HISTORY_ACCUMULATOR_DEPTH - 1, -1, -1):
        if (later - 1) & 2**i > (earlier - 1) & 2**i:
            later = later - ((later - 1) % 2**i) - 1
            gindex = GeneralizedIndex(get_generalized_index(ShardState, ['history_accumulator', i]))
            o = concat_generalized_indices(o, gindex)
    return o
```

#### `get_generalized_index_of_crosslink_header`

```python
def get_generalized_index_of_crosslink_header(index: int) -> GeneralizedIndex:
    """
    Gets the generalized index for the root of the index'th header in a crosslink.
    """
    MAX_CROSSLINK_SIZE = (
        MAX_SHARD_BLOCK_SIZE * SHARD_SLOTS_PER_EPOCH * MAX_EPOCHS_PER_CROSSLINK
    )
    assert MAX_CROSSLINK_SIZE == get_previous_power_of_two(MAX_CROSSLINK_SIZE)
    return GeneralizedIndex(MAX_CROSSLINK_SIZE // SHARD_HEADER_SIZE + index)
```

#### `process_shard_receipt_proof`

```python
def process_shard_receipt_proof(state: BeaconState, receipt_proof: ShardReceiptProof) -> None:
    """
    Processes a ShardReceipt object.
    """
    receipt_slot = (
        state.next_shard_receipt_period[receipt_proof.shard] *
        SHARD_SLOTS_PER_EPOCH * EPOCHS_PER_SHARD_PERIOD
    )
    first_slot_in_last_crosslink = state.current_crosslinks[receipt_proof.shard].start_epoch * SHARD_SLOTS_PER_EPOCH
    gindex = concat_generalized_indices(
        get_generalized_index_of_crosslink_header(0),
        GeneralizedIndex(get_generalized_index(ShardBlockHeader, 'state_root')),
        compute_historical_state_generalized_index(receipt_slot, first_slot_in_last_crosslink),
        GeneralizedIndex(get_generalized_index(ShardState, 'receipt_root'))
    )
    assert verify_merkle_proof(
        leaf=hash_tree_root(receipt_proof.receipt),
        proof=receipt_proof.proof,
        index=gindex,
        root=state.current_crosslinks[receipt_proof.shard].data_root
    )
    for delta in receipt_proof.receipt:
        if get_current_epoch(state) < state.validators[delta.index].withdrawable_epoch:
            increase_amount = (
                state.validators[delta.index].effective_balance * delta.reward_coefficient // REWARD_COEFFICIENT_BASE
            )
            increase_balance(state, delta.index, increase_amount)
            decrease_balance(state, delta.index, delta.block_fee)
    state.next_shard_receipt_period[receipt_proof.shard] += 1
    proposer_index = get_beacon_proposer_index(state)
    increase_balance(state, proposer_index, Gwei(get_base_reward(state, proposer_index) // MINOR_REWARD_QUOTIENT))
```

## Changes

### Phase 0 container updates

Add the following fields to the end of the specified container objects.

#### `BeaconState`

```python
class BeaconState(Container):
    # Period committees
    period_committee_roots: Vector[Hash, PERIOD_COMMITTEE_ROOT_LENGTH]
    next_shard_receipt_period: Vector[uint64, SHARD_COUNT]
```

`period_committee_roots` values are initialized to `Bytes32()` (empty bytes value).
`next_shard_receipt_period` values are initialized to `compute_epoch_at_slot(PHASE_1_FORK_SLOT) // EPOCHS_PER_SHARD_PERIOD`.

#### `BeaconBlockBody`

```python
class BeaconBlockBody(Container):
    shard_receipt_proofs: List[ShardReceiptProof, MAX_SHARD_RECEIPT_PROOFS]
```

`shard_receipt_proofs` is initialized to `[]`.

### Persistent committees

Run `update_period_committee` immediately before `process_final_updates`:

```python
# begin insert @update_period_committee
    update_period_committee(state)
# end insert @update_period_committee
def update_period_committee(state: BeaconState) -> None:
    """
    Updates period committee roots at boundary blocks.
    """
    if (get_current_epoch(state) + 1) % EPOCHS_PER_SHARD_PERIOD != 0:
        return

    period = (get_current_epoch(state) + 1) // EPOCHS_PER_SHARD_PERIOD
    committees = Vector[CompactCommittee, SHARD_COUNT]([
        committee_to_compact_committee(
            state,
            get_period_committee(state, Shard(shard), Epoch(get_current_epoch(state) + 1)),
        )
        for shard in range(SHARD_COUNT)
    ])
    state.period_committee_roots[period % PERIOD_COMMITTEE_ROOT_LENGTH] = hash_tree_root(committees)
```

### Shard receipt processing

Run `process_shard_receipt_proof` on each `ShardReceiptProof` during block processing.

```python
# begin insert @process_shard_receipt_proofs
        (body.shard_receipt_proofs, process_shard_receipt_proof),
# end insert @process_shard_receipt_proofs
```
@@ -12,6 +12,7 @@
- [Terminology](#terminology)
- [Constants](#constants)
    - [Misc](#misc)
    - [Custody game parameters](#custody-game-parameters)
    - [Time parameters](#time-parameters)
    - [Max operations per block](#max-operations-per-block)
    - [Reward and penalty quotients](#reward-and-penalty-quotients)

@@ -33,12 +34,14 @@
    - [`BeaconBlockBody`](#beaconblockbody)
- [Helpers](#helpers)
    - [`ceillog2`](#ceillog2)
    - [`is_valid_merkle_branch_with_mixin`](#is_valid_merkle_branch_with_mixin)
    - [`get_crosslink_chunk_count`](#get_crosslink_chunk_count)
    - [`get_bit`](#get_bit)
    - [`legendre_bit`](#legendre_bit)
    - [`custody_subchunkify`](#custody_subchunkify)
    - [`get_custody_chunk_bit`](#get_custody_chunk_bit)
    - [`get_chunk_bits_root`](#get_chunk_bits_root)
    - [`get_randao_epoch_for_custody_period`](#get_randao_epoch_for_custody_period)
    - [`get_reveal_period`](#get_reveal_period)
    - [`get_custody_period_for_validator`](#get_custody_period_for_validator)
    - [`replace_empty_or_append`](#replace_empty_or_append)
- [Per-block processing](#per-block-processing)
    - [Operations](#operations)

@@ -74,12 +77,23 @@ This document details the beacon chain additions and changes in Phase 1 of Ether
## Constants

### Misc

| Name | Value |
| - | - |
| `BLS12_381_Q` | `4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787` |
| `MINOR_REWARD_QUOTIENT` | `2**8` (= 256) |
| `MAX_EPOCHS_PER_CROSSLINK` | `2**6` (= 64) | epochs | ~7 hours |

### Custody game parameters

| Name | Value |
| - | - |
| `BYTES_PER_SHARD_BLOCK` | `2**14` (= 16,384) |
| `BYTES_PER_CUSTODY_CHUNK` | `2**9` (= 512) |
| `MINOR_REWARD_QUOTIENT` | `2**8` (= 256) |
| `BYTES_PER_CUSTODY_SUBCHUNK` | `48` |
| `CHUNKS_PER_EPOCH` | `2 * BYTES_PER_SHARD_BLOCK * SLOTS_PER_EPOCH // BYTES_PER_CUSTODY_CHUNK` |
| `MAX_CUSTODY_CHUNKS` | `MAX_EPOCHS_PER_CROSSLINK * CHUNKS_PER_EPOCH` |
| `CUSTODY_DATA_DEPTH` | `ceillog2(MAX_CUSTODY_CHUNKS) + 1` |
| `CUSTODY_CHUNK_BIT_DEPTH` | `ceillog2(MAX_EPOCHS_PER_CROSSLINK * CHUNKS_PER_EPOCH // 256) + 2` |

### Time parameters

@@ -144,7 +158,7 @@ class CustodyBitChallenge(Container):
    attestation: Attestation
    challenger_index: ValidatorIndex
    responder_key: BLSSignature
    chunk_bits: Bytes[PLACEHOLDER]
    chunk_bits: Bitlist[MAX_CUSTODY_CHUNKS]
    signature: BLSSignature
```

@@ -181,10 +195,10 @@ class CustodyBitChallengeRecord(Container):
class CustodyResponse(Container):
    challenge_index: uint64
    chunk_index: uint64
    chunk: Vector[Bytes[PLACEHOLDER], BYTES_PER_CUSTODY_CHUNK]
    data_branch: List[Hash, PLACEHOLDER]
    chunk_bits_branch: List[Hash, PLACEHOLDER]
    chunk_bits_leaf: Hash
    chunk: BytesN[BYTES_PER_CUSTODY_CHUNK]
    data_branch: List[Hash, CUSTODY_DATA_DEPTH]
    chunk_bits_branch: List[Hash, CUSTODY_CHUNK_BIT_DEPTH]
    chunk_bits_leaf: Bitvector[256]
```

### New beacon operations

@@ -225,11 +239,11 @@ Add the following fields to the end of the specified container objects. Fields w

```python
class Validator(Container):
    # next_custody_reveal_period is initialised to the custody period
    # next_custody_secret_to_reveal is initialised to the custody period
    # (of the particular validator) in which the validator is activated
    # = get_reveal_period(...)
    next_custody_reveal_period: uint64
    max_reveal_lateness: uint64
    # = get_custody_period_for_validator(...)
    next_custody_secret_to_reveal: uint64
    max_reveal_lateness: Epoch
```

#### `BeaconState`

@@ -263,7 +277,26 @@ class BeaconBlockBody(Container):

```python
def ceillog2(x: uint64) -> int:
    return x.bit_length()
    return (x - 1).bit_length()
```
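With the `(x - 1).bit_length()` form shown above, `ceillog2` returns the ceiling of log2: exact powers of two map to their exponent and everything in between rounds up (the plain `x.bit_length()` variant over-counts by one exactly at powers of two). A quick check:

```python
def ceillog2(x: int) -> int:
    return (x - 1).bit_length()

assert ceillog2(1) == 0
assert ceillog2(2) == 1
assert ceillog2(3) == 2  # rounds up
assert ceillog2(8) == 3
assert ceillog2(9) == 4
```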

### `is_valid_merkle_branch_with_mixin`

```python
def is_valid_merkle_branch_with_mixin(leaf: Hash,
                                      branch: Sequence[Hash],
                                      depth: uint64,
                                      index: uint64,
                                      root: Hash,
                                      mixin: uint64) -> bool:
    value = leaf
    for i in range(depth):
        if index // (2**i) % 2:
            value = hash(branch[i] + value)
        else:
            value = hash(value + branch[i])
    value = hash(value + mixin.to_bytes(32, "little"))
    return value == root
```
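The helper is the standard Merkle branch walk with the `mixin` (typically a length, as in SSZ list hashing) hashed into the final node. A self-contained sketch with SHA-256 standing in for the spec's `hash`:

```python
from hashlib import sha256

def hash(x: bytes) -> bytes:
    return sha256(x).digest()

def is_valid_merkle_branch_with_mixin(leaf, branch, depth, index, root, mixin) -> bool:
    value = leaf
    for i in range(depth):
        if index // (2**i) % 2:
            value = hash(branch[i] + value)
        else:
            value = hash(value + branch[i])
    # Mix the extra value (e.g. a list length) into the root.
    value = hash(value + mixin.to_bytes(32, "little"))
    return value == root

# Two-leaf tree with a length mix-in of 2, as SSZ does for lists.
a, b = b"\x01" * 32, b"\x02" * 32
root = hash(hash(a + b) + (2).to_bytes(32, "little"))
assert is_valid_merkle_branch_with_mixin(a, [b], 1, 0, root, 2)
assert not is_valid_merkle_branch_with_mixin(a, [b], 1, 0, root, 3)
```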

### `get_crosslink_chunk_count`

@@ -271,37 +304,69 @@ def ceillog2(x: uint64) -> int:

```python
def get_custody_chunk_count(crosslink: Crosslink) -> int:
    crosslink_length = min(MAX_EPOCHS_PER_CROSSLINK, crosslink.end_epoch - crosslink.start_epoch)
    chunks_per_epoch = 2 * BYTES_PER_SHARD_BLOCK * SLOTS_PER_EPOCH // BYTES_PER_CUSTODY_CHUNK
    return crosslink_length * chunks_per_epoch
    return crosslink_length * CHUNKS_PER_EPOCH
```

### `get_bit`
### `legendre_bit`

Returns the Legendre symbol `(a/q)` normalized as a bit (i.e. `((a/q) + 1) // 2`). In a production implementation, a well-optimized library (e.g. GMP) should be used for this.

```python
def get_bit(serialization: bytes, i: uint64) -> int:
    """
    Extract the bit in ``serialization`` at position ``i``.
    """
    return (serialization[i // 8] >> (i % 8)) % 2
def legendre_bit(a: int, q: int) -> int:
    if a >= q:
        return legendre_bit(a % q, q)
    if a == 0:
        return 0
    assert(q > a > 0 and q % 2 == 1)
    t = 1
    n = q
    while a != 0:
        while a % 2 == 0:
            a //= 2
            r = n % 8
            if r == 3 or r == 5:
                t = -t
        a, n = n, a
        if a % 4 == n % 4 == 3:
            t = -t
        a %= n
    if n == 1:
        return (t + 1) // 2
    else:
        return 0
```
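`legendre_bit` runs the standard Jacobi-symbol iteration and maps the symbol values {-1, 0, 1} to bits {0, 0, 1}. For a small odd prime the output can be checked directly — the quadratic residues mod 7 are {1, 2, 4}:

```python
def legendre_bit(a: int, q: int) -> int:
    # Same algorithm as above: Jacobi symbol, normalized to a bit.
    if a >= q:
        return legendre_bit(a % q, q)
    if a == 0:
        return 0
    assert q > a > 0 and q % 2 == 1
    t = 1
    n = q
    while a != 0:
        while a % 2 == 0:
            a //= 2
            r = n % 8
            if r == 3 or r == 5:
                t = -t
        a, n = n, a
        if a % 4 == n % 4 == 3:
            t = -t
        a %= n
    return (t + 1) // 2 if n == 1 else 0

assert [legendre_bit(a, 7) for a in range(7)] == [0, 1, 1, 0, 1, 0, 0]
assert legendre_bit(32, 7) == legendre_bit(4, 7) == 1  # reduces mod q first
```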

### `custody_subchunkify`

Given one proof of custody chunk, returns the proof of custody subchunks of the correct sizes.

```python
def custody_subchunkify(bytez: bytes) -> Sequence[bytes]:
    bytez += b'\x00' * (-len(bytez) % BYTES_PER_CUSTODY_SUBCHUNK)
    return [bytez[i:i + BYTES_PER_CUSTODY_SUBCHUNK]
            for i in range(0, len(bytez), BYTES_PER_CUSTODY_SUBCHUNK)]
```
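The helper zero-pads the input up to a multiple of `BYTES_PER_CUSTODY_SUBCHUNK` (48 bytes) before slicing, so every subchunk has the full width. A quick check:

```python
BYTES_PER_CUSTODY_SUBCHUNK = 48

def custody_subchunkify(bytez: bytes):
    bytez += b'\x00' * (-len(bytez) % BYTES_PER_CUSTODY_SUBCHUNK)
    return [bytez[i:i + BYTES_PER_CUSTODY_SUBCHUNK]
            for i in range(0, len(bytez), BYTES_PER_CUSTODY_SUBCHUNK)]

# 100 input bytes pad up to 144 = 3 * 48; each subchunk is exactly 48 bytes.
subchunks = custody_subchunkify(b'\xaa' * 100)
assert len(subchunks) == 3
assert all(len(s) == 48 for s in subchunks)
assert subchunks[-1].endswith(b'\x00' * 44)  # 144 - 100 = 44 padding bytes
```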
|
||||
|
||||
### `get_custody_chunk_bit`
|
||||
|
||||
```python
|
||||
def get_custody_chunk_bit(key: BLSSignature, chunk: bytes) -> bool:
|
||||
# TODO: Replace with something MPC-friendly, e.g. the Legendre symbol
|
||||
return bool(get_bit(hash(key + chunk), 0))
|
||||
full_G2_element = bls_signature_to_G2(key)
|
||||
s = full_G2_element[0].coeffs
|
||||
bits = [legendre_bit((i + 1) * s[i % 2] + int.from_bytes(subchunk, "little"), BLS12_381_Q)
|
||||
for i, subchunk in enumerate(custody_subchunkify(chunk))]
|
||||
|
||||
return bool(sum(bits) % 2)
|
||||
```
|
||||
|
||||
### `get_chunk_bits_root`

```python
def get_chunk_bits_root(chunk_bits: bytes) -> Hash:
    aggregated_bits = bytearray([0] * 32)
    for i in range(0, len(chunk_bits), 32):
        for j in range(32):
            aggregated_bits[j] ^= chunk_bits[i + j]
    return hash(aggregated_bits)
def get_chunk_bits_root(chunk_bits: Bitlist[MAX_CUSTODY_CHUNKS]) -> bit:
    aggregated_bits = 0
    for i, b in enumerate(chunk_bits):
        aggregated_bits += 2**i * b
    return legendre_bit(aggregated_bits, BLS12_381_Q)
```
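The rewritten `get_chunk_bits_root` packs the chunk-bit list little-endian into a single integer before taking one Legendre bit. The packing step alone looks like this, with a plain Python list standing in for the SSZ `Bitlist`:

```python
def pack_bits_little_endian(chunk_bits) -> int:
    # aggregated_bits = sum(2**i * b): bit i of the list becomes the 2**i term.
    aggregated_bits = 0
    for i, b in enumerate(chunk_bits):
        aggregated_bits += 2**i * b
    return aggregated_bits

assert pack_bits_little_endian([1, 0, 1, 1]) == 0b1101  # == 13
assert pack_bits_little_endian([]) == 0
```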
### `get_randao_epoch_for_custody_period`

@ -312,10 +377,10 @@ def get_randao_epoch_for_custody_period(period: uint64, validator_index: Validat
    return Epoch(next_period_start + CUSTODY_PERIOD_TO_RANDAO_PADDING)
```
### `get_reveal_period`
### `get_custody_period_for_validator`

```python
def get_reveal_period(state: BeaconState, validator_index: ValidatorIndex, epoch: Epoch=None) -> int:
def get_custody_period_for_validator(state: BeaconState, validator_index: ValidatorIndex, epoch: Epoch=None) -> int:
    '''
    Return the reveal period for a given validator.
    '''
@ -354,9 +419,9 @@ def process_custody_key_reveal(state: BeaconState, reveal: CustodyKeyReveal) ->
    Note that this function mutates ``state``.
    """
    revealer = state.validators[reveal.revealer_index]
    epoch_to_sign = get_randao_epoch_for_custody_period(revealer.next_custody_reveal_period, reveal.revealed_index)
    epoch_to_sign = get_randao_epoch_for_custody_period(revealer.next_custody_secret_to_reveal, reveal.revealer_index)

    assert revealer.next_custody_reveal_period < get_reveal_period(state, reveal.revealed_index)
    assert revealer.next_custody_secret_to_reveal < get_custody_period_for_validator(state, reveal.revealer_index)

    # Revealed validator is active or exited, but not withdrawn
    assert is_slashable_validator(revealer, get_current_epoch(state))
@ -374,15 +439,19 @@ def process_custody_key_reveal(state: BeaconState, reveal: CustodyKeyReveal) ->
    )

    # Decrement max reveal lateness if response is timely
    if revealer.next_custody_reveal_period == get_reveal_period(state, reveal.revealer_index) - 2:
        revealer.max_reveal_lateness -= MAX_REVEAL_LATENESS_DECREMENT
    revealer.max_reveal_lateness = max(
        revealer.max_reveal_lateness,
        get_reveal_period(state, reveal.revealed_index) - revealer.next_custody_reveal_period
    )
    if epoch_to_sign + EPOCHS_PER_CUSTODY_PERIOD >= get_current_epoch(state):
        if revealer.max_reveal_lateness >= MAX_REVEAL_LATENESS_DECREMENT:
            revealer.max_reveal_lateness -= MAX_REVEAL_LATENESS_DECREMENT
        else:
            revealer.max_reveal_lateness = 0
    else:
        revealer.max_reveal_lateness = max(
            revealer.max_reveal_lateness,
            get_current_epoch(state) - epoch_to_sign - EPOCHS_PER_CUSTODY_PERIOD
        )

    # Process reveal
    revealer.next_custody_reveal_period += 1
    revealer.next_custody_secret_to_reveal += 1

    # Reward Block Proposer
    proposer_index = get_beacon_proposer_index(state)
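The reworked lateness bookkeeping above can be isolated into a pure function. The constants used here (`EPOCHS_PER_CUSTODY_PERIOD = 8`, `MAX_REVEAL_LATENESS_DECREMENT = 2`) are toy values for illustration, not the spec's configuration.

```python
EPOCHS_PER_CUSTODY_PERIOD = 8       # toy value, not the spec constant
MAX_REVEAL_LATENESS_DECREMENT = 2   # toy value, not the spec constant

def update_max_reveal_lateness(lateness: int, epoch_to_sign: int, current_epoch: int) -> int:
    if epoch_to_sign + EPOCHS_PER_CUSTODY_PERIOD >= current_epoch:
        # Timely reveal: decrement lateness, flooring at zero.
        if lateness >= MAX_REVEAL_LATENESS_DECREMENT:
            return lateness - MAX_REVEAL_LATENESS_DECREMENT
        return 0
    # Late reveal: lateness grows to at least the observed gap.
    return max(lateness, current_epoch - epoch_to_sign - EPOCHS_PER_CUSTODY_PERIOD)

assert update_max_reveal_lateness(5, epoch_to_sign=10, current_epoch=16) == 3   # timely
assert update_max_reveal_lateness(1, epoch_to_sign=10, current_epoch=16) == 0   # floored at zero
assert update_max_reveal_lateness(5, epoch_to_sign=0, current_epoch=20) == 12   # late by 12 epochs
```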
@ -478,7 +547,7 @@ def process_chunk_challenge(state: BeaconState, challenge: CustodyChunkChallenge
    # Verify the attestation
    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, challenge.attestation))
    # Verify it is not too late to challenge
    assert (compute_epoch_of_slot(challenge.attestation.data.slot)
    assert (compute_epoch_at_slot(challenge.attestation.data.slot)
            >= get_current_epoch(state) - MAX_CHUNK_CHALLENGE_DELAY)
    responder = state.validators[challenge.responder_index]
    assert responder.exit_epoch >= get_current_epoch(state) - MAX_CHUNK_CHALLENGE_DELAY
@ -520,7 +589,7 @@ For each `challenge` in `block.body.custody_bit_challenges`, run the following f

```python
def process_bit_challenge(state: BeaconState, challenge: CustodyBitChallenge) -> None:
    attestation = challenge.attestation
    epoch = compute_epoch_of_slot(attestation.data.slot)
    epoch = attestation.data.target.epoch
    shard = attestation.data.crosslink.shard

    # Verify challenge signature
@ -533,7 +602,10 @@ def process_bit_challenge(state: BeaconState, challenge: CustodyBitChallenge) ->
    assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))
    # Verify attestation is eligible for challenging
    responder = state.validators[challenge.responder_index]
    assert epoch + responder.max_reveal_lateness <= get_reveal_period(state, challenge.responder_index)
    assert get_current_epoch(state) <= get_randao_epoch_for_custody_period(
        get_custody_period_for_validator(state, challenge.responder_index, epoch),
        challenge.responder_index
    ) + 2 * EPOCHS_PER_CUSTODY_PERIOD + responder.max_reveal_lateness

    # Verify the responder participated in the attestation
    attesters = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
@ -543,17 +615,18 @@ def process_bit_challenge(state: BeaconState, challenge: CustodyBitChallenge) ->
        assert record.challenger_index != challenge.challenger_index
    # Verify the responder custody key
    epoch_to_sign = get_randao_epoch_for_custody_period(
        get_reveal_period(state, challenge.responder_index, epoch),
        get_custody_period_for_validator(state, challenge.responder_index, epoch),
        challenge.responder_index,
    )
    domain = get_domain(state, DOMAIN_RANDAO, epoch_to_sign)
    assert bls_verify(responder.pubkey, hash_tree_root(epoch_to_sign), challenge.responder_key, domain)
    # Verify the chunk count
    chunk_count = get_custody_chunk_count(attestation.data.crosslink)
    # Verify the first bit of the hash of the chunk bits does not equal the custody bit
    committee = get_crosslink_committee(state, epoch, shard)
    assert chunk_count == len(challenge.chunk_bits)
    # Verify custody bit is incorrect
    committee = get_beacon_committee(state, epoch, shard)
    custody_bit = attestation.custody_bits[committee.index(challenge.responder_index)]
    assert custody_bit != get_bit(get_chunk_bits_root(challenge.chunk_bits), 0)
    assert custody_bit != get_chunk_bits_root(challenge.chunk_bits)
    # Add new bit challenge record
    new_record = CustodyBitChallengeRecord(
        challenge_index=state.custody_challenge_index,
@ -601,7 +674,7 @@ def process_chunk_challenge_response(state: BeaconState,
    # Verify bit challenge data is null
    assert response.chunk_bits_branch == [] and response.chunk_bits_leaf == Hash()
    # Verify minimum delay
    assert get_current_epoch(state) >= challenge.inclusion_epoch + ACTIVATION_EXIT_DELAY
    assert get_current_epoch(state) >= challenge.inclusion_epoch + MAX_SEED_LOOKAHEAD
    # Verify the chunk matches the crosslink data root
    assert is_valid_merkle_branch(
        leaf=hash_tree_root(response.chunk),
@ -636,16 +709,17 @@ def process_bit_challenge_response(state: BeaconState,
        root=challenge.data_root,
    )
    # Verify the chunk bit leaf matches the challenge data
    assert is_valid_merkle_branch(
        leaf=response.chunk_bits_leaf,
    assert is_valid_merkle_branch_with_mixin(
        leaf=hash_tree_root(response.chunk_bits_leaf),
        branch=response.chunk_bits_branch,
        depth=ceillog2(challenge.chunk_count) >> 8,
        depth=ceillog2(MAX_CUSTODY_CHUNKS // 256),
        index=response.chunk_index // 256,
        root=challenge.chunk_bits_merkle_root
        root=challenge.chunk_bits_merkle_root,
        mixin=challenge.chunk_count,
    )
    # Verify the chunk bit does not match the challenge chunk bit
    assert (get_custody_chunk_bit(challenge.responder_key, response.chunk)
            != get_bit(challenge.chunk_bits_leaf, response.chunk_index % 256))
            != response.chunk_bits_leaf[response.chunk_index % 256])
    # Clear the challenge
    records = state.custody_bit_challenge_records
    records[records.index(challenge)] = CustodyBitChallengeRecord()
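The chunk-bits proof above uses a Merkle branch check with a length mix-in. A minimal sketch of both primitives follows, assuming SHA-256 as the pairing hash and the SSZ-style convention of mixing in the element count as a little-endian 32-byte integer; the helper names mirror the spec's but the details here are illustrative, not normative.

```python
from hashlib import sha256
from typing import Sequence

def hash32(data: bytes) -> bytes:
    return sha256(data).digest()

def is_valid_merkle_branch(leaf: bytes, branch: Sequence[bytes],
                           depth: int, index: int, root: bytes) -> bool:
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = hash32(branch[i] + value)   # sibling on the left
        else:
            value = hash32(value + branch[i])   # sibling on the right
    return value == root

def is_valid_merkle_branch_with_mixin(leaf: bytes, branch: Sequence[bytes],
                                      depth: int, index: int, root: bytes,
                                      mixin: int) -> bool:
    # Same climb, but the final root also commits to the element count.
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = hash32(branch[i] + value)
        else:
            value = hash32(value + branch[i])
    value = hash32(value + mixin.to_bytes(32, "little"))
    return value == root

# Two-leaf tree: root = H(a + b), then mix in a count of 2.
a, b = b'\x01' * 32, b'\x02' * 32
plain_root = hash32(a + b)
mixed_root = hash32(plain_root + (2).to_bytes(32, "little"))
assert is_valid_merkle_branch(a, [b], depth=1, index=0, root=plain_root)
assert is_valid_merkle_branch_with_mixin(a, [b], depth=1, index=0, root=mixed_root, mixin=2)
```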

@ -665,8 +739,8 @@ Run `process_reveal_deadlines(state)` immediately after `process_registry_update
    # end insert @process_reveal_deadlines
def process_reveal_deadlines(state: BeaconState) -> None:
    for index, validator in enumerate(state.validators):
        deadline = validator.next_custody_reveal_period + (CUSTODY_RESPONSE_DEADLINE // EPOCHS_PER_CUSTODY_PERIOD)
        if get_reveal_period(state, ValidatorIndex(index)) > deadline:
        deadline = validator.next_custody_secret_to_reveal + (CUSTODY_RESPONSE_DEADLINE // EPOCHS_PER_CUSTODY_PERIOD)
        if get_custody_period_for_validator(state, ValidatorIndex(index)) > deadline:
            slash_validator(state, ValidatorIndex(index))
```
@ -9,36 +9,53 @@

- [Ethereum 2.0 Phase 1 -- Shard Data Chains](#ethereum-20-phase-1----shard-data-chains)
  - [Table of contents](#table-of-contents)
  - [Introduction](#introduction)
  - [Custom types](#custom-types)
  - [Configuration](#configuration)
    - [Misc](#misc)
    - [Initial values](#initial-values)
    - [Time parameters](#time-parameters)
    - [State list lengths](#state-list-lengths)
    - [Rewards and penalties](#rewards-and-penalties)
    - [Signature domain types](#signature-domain-types)
    - [TODO PLACEHOLDER](#todo-placeholder)
  - [Data structures](#data-structures)
    - [`ShardBlockBody`](#shardblockbody)
    - [`ShardAttestation`](#shardattestation)
  - [Containers](#containers)
    - [`Crosslink`](#crosslink)
    - [`ShardBlock`](#shardblock)
    - [`ShardBlockHeader`](#shardblockheader)
    - [`ShardState`](#shardstate)
    - [`ShardAttestationData`](#shardattestationdata)
  - [Helper functions](#helper-functions)
    - [`get_period_committee`](#get_period_committee)
    - [`get_switchover_epoch`](#get_switchover_epoch)
    - [`get_persistent_committee`](#get_persistent_committee)
    - [`get_shard_proposer_index`](#get_shard_proposer_index)
    - [`get_shard_header`](#get_shard_header)
    - [`verify_shard_attestation_signature`](#verify_shard_attestation_signature)
    - [`compute_crosslink_data_root`](#compute_crosslink_data_root)
  - [Object validity](#object-validity)
    - [Shard blocks](#shard-blocks)
    - [Shard attestations](#shard-attestations)
    - [Beacon attestations](#beacon-attestations)
    - [Misc](#misc-1)
      - [`compute_epoch_of_shard_slot`](#compute_epoch_of_shard_slot)
      - [`compute_shard_period_start_epoch`](#compute_shard_period_start_epoch)
    - [Beacon state accessors](#beacon-state-accessors)
      - [`get_period_committee`](#get_period_committee)
      - [`get_shard_committee`](#get_shard_committee)
      - [`get_shard_proposer_index`](#get_shard_proposer_index)
    - [Shard state mutators](#shard-state-mutators)
      - [`process_delta`](#process_delta)
  - [Genesis](#genesis)
    - [`get_genesis_shard_state`](#get_genesis_shard_state)
    - [`get_genesis_shard_block`](#get_genesis_shard_block)
  - [Shard state transition function](#shard-state-transition-function)
    - [Period processing](#period-processing)
    - [Block processing](#block-processing)
      - [Block header](#block-header)
      - [Attestations](#attestations)
      - [Block body](#block-body)
  - [Shard fork choice rule](#shard-fork-choice-rule)

<!-- /TOC -->
## Introduction

This document describes the shard data layer and the shard fork choice rule in Phase 1 of Ethereum 2.0.
This document describes the shard transition function (data layer only) and the shard fork choice rule as part of Phase 1 of Ethereum 2.0.
## Custom types

| Name | SSZ equivalent | Description |
| - | - | - |
| `Shard` | `uint64` | a shard number |
| `ShardSlot` | `uint64` | a shard slot number |

## Configuration

@ -46,69 +63,68 @@ This document describes the shard data layer and the shard fork choice rule in P
| Name | Value |
| - | - |
| `BYTES_PER_SHARD_BLOCK_BODY` | `2**14` (= 16,384) |
| `MAX_SHARD_ATTESTIONS` | `2**4` (= 16) |
| `SHARD_COUNT` | `2**10` (= 1,024) |
| `MIN_BLOCK_BODY_PRICE` | `2**0` (= 1) |
| `MAX_PERIOD_COMMITTEE_SIZE` | `2**7` (= 128) |
| `SHARD_HEADER_SIZE` | `2**10` (= 1,024) |
| `SHARD_BLOCK_SIZE_TARGET` | `2**14` (= 16,384) |
| `MAX_SHARD_BLOCK_SIZE` | `2**16` (= 65,536) |
### Initial values

| Name | Value |
| - | - |
| `PHASE_1_FORK_EPOCH` | **TBD** |
| `PHASE_1_FORK_SLOT` | **TBD** |
| `GENESIS_SHARD_SLOT` | 0 |

| Name | Value | Unit |
| - | - | - |
| `SHARD_GENESIS_EPOCH` | **TBD** | Epoch |
### Time parameters

| Name | Value | Unit | Duration |
| - | - | :-: | :-: |
| `CROSSLINK_LOOKBACK` | `2**0` (= 1) | epochs | 6.4 minutes |
| `SHARD_SLOTS_PER_EPOCH` | `2**7` (= 128) | shard slots | 6.4 minutes |
| `EPOCHS_PER_SHARD_PERIOD` | `2**8` (= 256) | epochs | ~27 hours |
### State list lengths

| Name | Value |
| - | - |
| `HISTORY_ACCUMULATOR_DEPTH` | `2**6` (= 64) |

### Rewards and penalties

| Name | Value |
| - | - |
| `BLOCK_BODY_PRICE_QUOTIENT` | `2**3` (= 8) |
### Signature domain types

The following types are defined, mapping into `DomainType` (little endian):

| Name | Value |
| - | - |
| `DOMAIN_SHARD_PROPOSER` | `128` |
| `DOMAIN_SHARD_ATTESTER` | `129` |

### TODO PLACEHOLDER
## Containers

| Name | Value |
| - | - |
| `PLACEHOLDER` | `2**32` |
## Data structures

### `ShardBlockBody`
### `Crosslink`

```python
class ShardBlockBody(Container):
    data: Vector[Bytes[PLACEHOLDER], BYTES_PER_SHARD_BLOCK_BODY]
```

### `ShardAttestation`

```python
class ShardAttestation(Container):
    class data(Container):
        slot: Slot
        shard: Shard
        shard_block_root: Hash
    aggregation_bits: Bitlist[PLACEHOLDER]
    aggregate_signature: BLSSignature
# Crosslink is a placeholder to appease the build script until phase 1 is reworked
class Crosslink(Container):
    shard: Shard
```
### `ShardBlock`

```python
class ShardBlock(Container):
    slot: Slot
    shard: Shard
    beacon_chain_root: Hash
    slot: ShardSlot
    beacon_block_root: Hash
    parent_root: Hash
    data: ShardBlockBody
    state_root: Hash
    attestations: List[ShardAttestation, PLACEHOLDER]
    body: List[byte, MAX_SHARD_BLOCK_SIZE - SHARD_HEADER_SIZE]
    block_size_sum: uint64
    aggregation_bits: Bitvector[2 * MAX_PERIOD_COMMITTEE_SIZE]
    attestations: BLSSignature
    signature: BLSSignature
```
@ -116,309 +132,310 @@ class ShardBlock(Container):

```python
class ShardBlockHeader(Container):
    slot: Slot
    shard: Shard
    beacon_chain_root: Hash
    slot: ShardSlot
    beacon_block_root: Hash
    parent_root: Hash
    body_root: Hash
    state_root: Hash
    attestations: List[ShardAttestation, PLACEHOLDER]
    body_root: Hash
    block_size_sum: uint64
    aggregation_bits: Bitvector[2 * MAX_PERIOD_COMMITTEE_SIZE]
    attestations: BLSSignature
    signature: BLSSignature
```
### `ShardState`

```python
class ShardState(Container):
    shard: Shard
    slot: ShardSlot
    history_accumulator: Vector[Hash, HISTORY_ACCUMULATOR_DEPTH]
    latest_block_header: ShardBlockHeader
    block_size_sum: uint64
    # Fees and rewards
    block_body_price: Gwei
    older_committee_positive_deltas: Vector[Gwei, MAX_PERIOD_COMMITTEE_SIZE]
    older_committee_negative_deltas: Vector[Gwei, MAX_PERIOD_COMMITTEE_SIZE]
    newer_committee_positive_deltas: Vector[Gwei, MAX_PERIOD_COMMITTEE_SIZE]
    newer_committee_negative_deltas: Vector[Gwei, MAX_PERIOD_COMMITTEE_SIZE]
```

### `ShardAttestationData`

```python
class ShardAttestationData(Container):
    slot: ShardSlot
    parent_root: Hash
```
## Helper functions

### `get_period_committee`
### Misc

#### `compute_epoch_of_shard_slot`

```python
def get_period_committee(state: BeaconState,
                         epoch: Epoch,
                         shard: Shard,
                         index: uint64,
                         count: uint64) -> Sequence[ValidatorIndex]:
    """
    Return committee for a period. Used to construct persistent committees.
    """
    return compute_committee(
        indices=get_active_validator_indices(state, epoch),
        seed=get_seed(state, epoch),
        index=shard * count + index,
        count=SHARD_COUNT * count,
    )
def compute_epoch_of_shard_slot(slot: ShardSlot) -> Epoch:
    return Epoch(slot // SHARD_SLOTS_PER_EPOCH)
```
#### `compute_shard_period_start_epoch`

```python
def compute_shard_period_start_epoch(epoch: Epoch, lookback: uint64) -> Epoch:
    return Epoch(epoch - (epoch % EPOCHS_PER_SHARD_PERIOD) - lookback * EPOCHS_PER_SHARD_PERIOD)
```
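With the configuration values above (`SHARD_SLOTS_PER_EPOCH = 128`, `EPOCHS_PER_SHARD_PERIOD = 256`), the two helpers reduce to simple integer arithmetic; a plain-Python sketch without the spec's custom types:

```python
SHARD_SLOTS_PER_EPOCH = 128     # 2**7, from the configuration table
EPOCHS_PER_SHARD_PERIOD = 256   # 2**8, from the configuration table

def compute_epoch_of_shard_slot(slot: int) -> int:
    return slot // SHARD_SLOTS_PER_EPOCH

def compute_shard_period_start_epoch(epoch: int, lookback: int) -> int:
    # Round down to the start of the current period, then step back `lookback` periods.
    return epoch - (epoch % EPOCHS_PER_SHARD_PERIOD) - lookback * EPOCHS_PER_SHARD_PERIOD

assert compute_epoch_of_shard_slot(300) == 2            # shard slot 300 falls in epoch 2
assert compute_shard_period_start_epoch(700, 1) == 256  # one period back from epoch 700
assert compute_shard_period_start_epoch(700, 2) == 0    # two periods back
```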
### Beacon state accessors

#### `get_period_committee`

```python
def get_period_committee(beacon_state: BeaconState, shard: Shard, epoch: Epoch) -> Sequence[ValidatorIndex]:
    active_validator_indices = get_active_validator_indices(beacon_state, epoch)
    seed = get_seed(beacon_state, epoch, DOMAIN_SHARD_ATTESTER)
    return compute_committee(active_validator_indices, seed, shard, SHARD_COUNT)[:MAX_PERIOD_COMMITTEE_SIZE]
```
#### `get_shard_committee`

```python
def get_shard_committee(beacon_state: BeaconState, shard: Shard, epoch: Epoch) -> Sequence[ValidatorIndex]:
    older_committee = get_period_committee(beacon_state, shard, compute_shard_period_start_epoch(epoch, 2))
    newer_committee = get_period_committee(beacon_state, shard, compute_shard_period_start_epoch(epoch, 1))
    # Every epoch cycle out validators from the older committee and cycle in validators from the newer committee
    older_subcommittee = [i for i in older_committee if i % EPOCHS_PER_SHARD_PERIOD > epoch % EPOCHS_PER_SHARD_PERIOD]
    newer_subcommittee = [i for i in newer_committee if i % EPOCHS_PER_SHARD_PERIOD <= epoch % EPOCHS_PER_SHARD_PERIOD]
    return older_subcommittee + newer_subcommittee
```
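The cycling rule above can be seen with toy committees: a validator index `i` stays in from the older committee while `i % EPOCHS_PER_SHARD_PERIOD` exceeds the epoch's offset into the period, and joins from the newer committee once it does not. The committee contents below are made up purely for illustration.

```python
EPOCHS_PER_SHARD_PERIOD = 256  # 2**8, from the configuration table

def shard_committee(older_committee, newer_committee, epoch):
    offset = epoch % EPOCHS_PER_SHARD_PERIOD
    # Cycle out of the older committee, cycle into the newer one.
    older_subcommittee = [i for i in older_committee if i % EPOCHS_PER_SHARD_PERIOD > offset]
    newer_subcommittee = [i for i in newer_committee if i % EPOCHS_PER_SHARD_PERIOD <= offset]
    return older_subcommittee + newer_subcommittee

# epoch 300 -> offset 44: indices with i % 256 > 44 remain from the older
# committee, indices with i % 256 <= 44 have cycled in from the newer one.
assert shard_committee([10, 50, 100], [7, 80, 200], epoch=300) == [50, 100, 7]
```

At the very start of a period (offset 0) the older committee is still fully seated, so the result is just the older committee.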
#### `get_shard_proposer_index`

```python
def get_shard_proposer_index(beacon_state: BeaconState, shard: Shard, slot: ShardSlot) -> ValidatorIndex:
    epoch = get_current_epoch(beacon_state)
    shard_committee = get_shard_committee(beacon_state, shard, epoch)
    active_indices = [i for i in shard_committee if is_active_validator(beacon_state.validators[i], epoch)]
    assert any(active_indices)

    epoch_seed = get_seed(beacon_state, epoch, DOMAIN_SHARD_PROPOSER)
    seed = hash(epoch_seed + int_to_bytes(slot, length=8) + int_to_bytes(shard, length=8))
    return compute_proposer_index(beacon_state, active_indices, seed)
```
### Shard state mutators

#### `process_delta`

```python
def process_delta(beacon_state: BeaconState,
                  shard_state: ShardState,
                  index: ValidatorIndex,
                  delta: Gwei,
                  positive: bool=True) -> None:
    epoch = compute_epoch_of_shard_slot(shard_state.slot)
    older_committee = get_period_committee(beacon_state, shard_state.shard, compute_shard_period_start_epoch(epoch, 2))
    newer_committee = get_period_committee(beacon_state, shard_state.shard, compute_shard_period_start_epoch(epoch, 1))
    if index in older_committee:
        if positive:
            shard_state.older_committee_positive_deltas[older_committee.index(index)] += delta
        else:
            shard_state.older_committee_negative_deltas[older_committee.index(index)] += delta
    elif index in newer_committee:
        if positive:
            shard_state.newer_committee_positive_deltas[newer_committee.index(index)] += delta
        else:
            shard_state.newer_committee_negative_deltas[newer_committee.index(index)] += delta
```
## Genesis

### `get_genesis_shard_state`

```python
def get_genesis_shard_state(shard: Shard) -> ShardState:
    return ShardState(
        shard=shard,
        slot=ShardSlot(SHARD_GENESIS_EPOCH * SHARD_SLOTS_PER_EPOCH),
        latest_block_header=ShardBlockHeader(
            shard=shard,
            slot=ShardSlot(SHARD_GENESIS_EPOCH * SHARD_SLOTS_PER_EPOCH),
            body_root=hash_tree_root(List[byte, MAX_SHARD_BLOCK_SIZE - SHARD_HEADER_SIZE]()),
        ),
        block_body_price=MIN_BLOCK_BODY_PRICE,
    )
```
### `get_switchover_epoch`
### `get_genesis_shard_block`

```python
def get_switchover_epoch(state: BeaconState, epoch: Epoch, index: ValidatorIndex) -> int:
    earlier_start_epoch = Epoch(epoch - (epoch % PERSISTENT_COMMITTEE_PERIOD) - PERSISTENT_COMMITTEE_PERIOD * 2)
    return (bytes_to_int(hash(get_seed(state, earlier_start_epoch) + int_to_bytes(index, length=3)[0:8]))
            % PERSISTENT_COMMITTEE_PERIOD)
def get_genesis_shard_block(shard: Shard) -> ShardBlock:
    return ShardBlock(
        shard=shard,
        slot=ShardSlot(SHARD_GENESIS_EPOCH * SHARD_SLOTS_PER_EPOCH),
        state_root=hash_tree_root(get_genesis_shard_state(shard)),
    )
```
### `get_persistent_committee`
## Shard state transition function

```python
def get_persistent_committee(state: BeaconState,
                             shard: Shard,
                             slot: Slot) -> Sequence[ValidatorIndex]:
    """
    Return the persistent committee for the given ``shard`` at the given ``slot``.
    """
    epoch = compute_epoch_of_slot(slot)
    earlier_start_epoch = Epoch(epoch - (epoch % PERSISTENT_COMMITTEE_PERIOD) - PERSISTENT_COMMITTEE_PERIOD * 2)
    later_start_epoch = Epoch(epoch - (epoch % PERSISTENT_COMMITTEE_PERIOD) - PERSISTENT_COMMITTEE_PERIOD)

    committee_count = max(
        len(get_active_validator_indices(state, earlier_start_epoch)) //
        (SHARD_COUNT * TARGET_COMMITTEE_SIZE),
        len(get_active_validator_indices(state, later_start_epoch)) //
        (SHARD_COUNT * TARGET_COMMITTEE_SIZE),
    ) + 1

    index = slot % committee_count
    earlier_committee = get_period_committee(state, earlier_start_epoch, shard, index, committee_count)
    later_committee = get_period_committee(state, later_start_epoch, shard, index, committee_count)

    # Take not-yet-cycled-out validators from earlier committee and already-cycled-in validators from
    # later committee; return a sorted list of the union of the two, deduplicated
    return sorted(list(set(
        [i for i in earlier_committee if epoch % PERSISTENT_COMMITTEE_PERIOD < get_switchover_epoch(state, epoch, i)]
        + [i for i in later_committee if epoch % PERSISTENT_COMMITTEE_PERIOD >= get_switchover_epoch(state, epoch, i)]
    )))
def shard_state_transition(beacon_state: BeaconState,
                           shard_state: ShardState,
                           block: ShardBlock,
                           validate_state_root: bool=False) -> ShardState:
    # Process slots (including those with no blocks) since block
    process_shard_slots(shard_state, block.slot)
    # Process block
    process_shard_block(beacon_state, shard_state, block)
    # Validate state root (`validate_state_root == True` in production)
    if validate_state_root:
        assert block.state_root == hash_tree_root(shard_state)
    # Return post-state
    return shard_state
```
### `get_shard_proposer_index`

```python
def get_shard_proposer_index(state: BeaconState,
                             shard: Shard,
                             slot: Slot) -> Optional[ValidatorIndex]:
    # Randomly shift persistent committee
    persistent_committee = list(get_persistent_committee(state, shard, slot))
    seed = hash(state.current_shuffling_seed + int_to_bytes(shard, length=8) + int_to_bytes(slot, length=8))
    random_index = bytes_to_int(seed[0:8]) % len(persistent_committee)
    persistent_committee = persistent_committee[random_index:] + persistent_committee[:random_index]

    # Search for an active proposer
    for index in persistent_committee:
        if is_active_validator(state.validators[index], get_current_epoch(state)):
            return index

    # No block can be proposed if no validator is active
    return None
def process_shard_slots(shard_state: ShardState, slot: ShardSlot) -> None:
    assert shard_state.slot <= slot
    while shard_state.slot < slot:
        process_shard_slot(shard_state)
        # Process shard period on the start slot of the next shard period
        if (shard_state.slot + 1) % (SHARD_SLOTS_PER_EPOCH * EPOCHS_PER_SHARD_PERIOD) == 0:
            process_shard_period(shard_state)
        shard_state.slot += ShardSlot(1)
```
### `get_shard_header`

```python
def process_shard_slot(shard_state: ShardState) -> None:
    # Cache state root
    previous_state_root = hash_tree_root(shard_state)
    if shard_state.latest_block_header.state_root == Bytes32():
        shard_state.latest_block_header.state_root = previous_state_root
    # Cache state root in history accumulator
    depth = 0
    while shard_state.slot % 2**depth == 0 and depth < HISTORY_ACCUMULATOR_DEPTH:
        shard_state.history_accumulator[depth] = previous_state_root
        depth += 1
```
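`process_shard_slot` writes the previous state root into every accumulator level whose period divides the current slot, so slot `s` touches one level per trailing power of two (plus level 0). A small sketch of just that loop:

```python
HISTORY_ACCUMULATOR_DEPTH = 64  # 2**6, from the configuration table

def touched_depths(slot: int) -> list:
    # Depths whose cached state root is overwritten at this slot.
    depths = []
    depth = 0
    while slot % 2**depth == 0 and depth < HISTORY_ACCUMULATOR_DEPTH:
        depths.append(depth)
        depth += 1
    return depths

assert touched_depths(8) == [0, 1, 2, 3]  # 8 is divisible by 1, 2, 4, 8
assert touched_depths(5) == [0]           # odd slots only refresh level 0
```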
### Period processing

```python
def get_shard_header(block: ShardBlock) -> ShardBlockHeader:
    return ShardBlockHeader(
        slot=block.slot,
def process_shard_period(shard_state: ShardState) -> None:
    # Rotate committee deltas
    shard_state.older_committee_positive_deltas = shard_state.newer_committee_positive_deltas
    shard_state.older_committee_negative_deltas = shard_state.newer_committee_negative_deltas
    shard_state.newer_committee_positive_deltas = [Gwei(0) for _ in range(MAX_PERIOD_COMMITTEE_SIZE)]
    shard_state.newer_committee_negative_deltas = [Gwei(0) for _ in range(MAX_PERIOD_COMMITTEE_SIZE)]
```
### Block processing

```python
def process_shard_block(beacon_state: BeaconState, shard_state: ShardState, block: ShardBlock) -> None:
    process_shard_block_header(beacon_state, shard_state, block)
    process_shard_attestations(beacon_state, shard_state, block)
    process_shard_block_body(beacon_state, shard_state, block)
```
#### Block header

```python
def process_shard_block_header(beacon_state: BeaconState, shard_state: ShardState, block: ShardBlock) -> None:
    # Verify the shard number
    assert block.shard == shard_state.shard
    # Verify the slot number
    assert block.slot == shard_state.slot
    # Verify the beacon chain root
    epoch = compute_epoch_of_shard_slot(shard_state.slot)
    assert epoch * SLOTS_PER_EPOCH == beacon_state.slot
    beacon_block_header = BeaconBlockHeader(
        slot=beacon_state.latest_block_header.slot,
        parent_root=beacon_state.latest_block_header.parent_root,
        state_root=beacon_state.latest_block_header.state_root,
        body_root=beacon_state.latest_block_header.body_root,
    )
    if beacon_block_header.state_root == Bytes32():
        beacon_block_header.state_root = hash_tree_root(beacon_state)
    assert block.beacon_block_root == signing_root(beacon_block_header)
    # Verify the parent root
    assert block.parent_root == signing_root(shard_state.latest_block_header)
    # Save current block as the new latest block
    shard_state.latest_block_header = ShardBlockHeader(
        shard=block.shard,
        beacon_chain_root=block.beacon_chain_root,
        slot=block.slot,
        beacon_block_root=block.beacon_block_root,
        parent_root=block.parent_root,
        # `state_root` is zeroed and overwritten in the next `process_shard_slot` call
        body_root=hash_tree_root(block.body),
        state_root=block.state_root,
        block_size_sum=block.block_size_sum,
        aggregation_bits=block.aggregation_bits,
        attestations=block.attestations,
        signature=block.signature,
        # `signature` is zeroed
    )
    # Verify the sum of the block sizes since genesis
    shard_state.block_size_sum += SHARD_HEADER_SIZE + len(block.body)
    assert block.block_size_sum == shard_state.block_size_sum
    # Verify proposer is not slashed
    proposer_index = get_shard_proposer_index(beacon_state, shard_state.shard, block.slot)
    proposer = beacon_state.validators[proposer_index]
    assert not proposer.slashed
    # Verify proposer signature
    domain = get_domain(beacon_state, DOMAIN_SHARD_PROPOSER, compute_epoch_of_shard_slot(block.slot))
    assert bls_verify(proposer.pubkey, signing_root(block), block.signature, domain)
```
### `verify_shard_attestation_signature`
#### Attestations

```python
def verify_shard_attestation_signature(state: BeaconState,
                                       attestation: ShardAttestation) -> None:
    data = attestation.data
    persistent_committee = get_persistent_committee(state, data.shard, data.slot)
def process_shard_attestations(beacon_state: BeaconState, shard_state: ShardState, block: ShardBlock) -> None:
    pubkeys = []
    for i, index in enumerate(persistent_committee):
        if attestation.aggregation_bits[i]:
            validator = state.validators[index]
            assert is_active_validator(validator, get_current_epoch(state))
            pubkeys.append(validator.pubkey)
    assert bls_verify(
        pubkey=bls_aggregate_pubkeys(pubkeys),
        message_hash=data.shard_block_root,
        signature=attestation.aggregate_signature,
        domain=get_domain(state, DOMAIN_SHARD_ATTESTER, compute_epoch_of_slot(data.slot))
    )
    attestation_count = 0
    shard_committee = get_shard_committee(beacon_state, shard_state.shard, block.slot)
    for i, validator_index in enumerate(shard_committee):
        if block.aggregation_bits[i]:
            pubkeys.append(beacon_state.validators[validator_index].pubkey)
            process_delta(beacon_state, shard_state, validator_index, get_base_reward(beacon_state, validator_index))
            attestation_count += 1
    # Verify there are no extraneous bits set beyond the shard committee
    for i in range(len(shard_committee), 2 * MAX_PERIOD_COMMITTEE_SIZE):
        assert block.aggregation_bits[i] == 0b0
    # Verify attester aggregate signature
    domain = get_domain(beacon_state, DOMAIN_SHARD_ATTESTER, compute_epoch_of_shard_slot(block.slot))
    message = hash_tree_root(ShardAttestationData(slot=shard_state.slot, parent_root=block.parent_root))
    assert bls_verify(bls_aggregate_pubkeys(pubkeys), message, block.attestations, domain)
    # Proposer micro-reward
    proposer_index = get_shard_proposer_index(beacon_state, shard_state.shard, block.slot)
    reward = attestation_count * get_base_reward(beacon_state, proposer_index) // PROPOSER_REWARD_QUOTIENT
    process_delta(beacon_state, shard_state, proposer_index, Gwei(reward))
```
#### Block body

```python
def process_shard_block_body(beacon_state: BeaconState, shard_state: ShardState, block: ShardBlock) -> None:
    # Verify block body size is a multiple of the header size
    assert len(block.body) % SHARD_HEADER_SIZE == 0
    # Apply proposer block body fee
    block_body_fee = shard_state.block_body_price * len(block.body) // MAX_SHARD_BLOCK_SIZE
    proposer_index = get_shard_proposer_index(beacon_state, shard_state.shard, block.slot)
    process_delta(beacon_state, shard_state, proposer_index, Gwei(block_body_fee), positive=False)  # Burn
    process_delta(beacon_state, shard_state, proposer_index, Gwei(block_body_fee // PROPOSER_REWARD_QUOTIENT))  # Reward
    # Calculate new block body price
    block_size = SHARD_HEADER_SIZE + len(block.body)
    QUOTIENT = MAX_SHARD_BLOCK_SIZE * BLOCK_BODY_PRICE_QUOTIENT
    if block_size > SHARD_BLOCK_SIZE_TARGET:
        price_delta = Gwei(shard_state.block_body_price * (block_size - SHARD_BLOCK_SIZE_TARGET) // QUOTIENT)
        # The maximum block body price caps the amount burnt on fees within a shard period
        MAX_BLOCK_BODY_PRICE = MAX_EFFECTIVE_BALANCE // EPOCHS_PER_SHARD_PERIOD // SHARD_SLOTS_PER_EPOCH
        shard_state.block_body_price = Gwei(min(MAX_BLOCK_BODY_PRICE, shard_state.block_body_price + price_delta))
    else:
        price_delta = Gwei(shard_state.block_body_price * (SHARD_BLOCK_SIZE_TARGET - block_size) // QUOTIENT)
        shard_state.block_body_price = Gwei(max(MIN_BLOCK_BODY_PRICE, shard_state.block_body_price + price_delta))
```

### `compute_crosslink_data_root`

```python
def compute_crosslink_data_root(blocks: Sequence[ShardBlock]) -> Hash:
    def is_power_of_two(value: uint64) -> bool:
        return (value > 0) and (value & (value - 1) == 0)

    def pad_to_power_of_2(values: MutableSequence[bytes]) -> Sequence[bytes]:
        while not is_power_of_two(len(values)):
            values.append(b'\x00' * BYTES_PER_SHARD_BLOCK_BODY)
        return values

    def hash_tree_root_of_bytes(data: bytes) -> Hash:
        return hash_tree_root([data[i:i + 32] for i in range(0, len(data), 32)])

    def zpad(data: bytes, length: uint64) -> bytes:
        return data + b'\x00' * (length - len(data))

    return hash(
        # TODO untested code.
        # Need to either pass a typed list to hash-tree-root, or merkleize_chunks(values, pad_to=2**x)
        hash_tree_root(pad_to_power_of_2([
            hash_tree_root_of_bytes(
                zpad(serialize(get_shard_header(block)), BYTES_PER_SHARD_BLOCK_BODY)
            ) for block in blocks
        ]))
        + hash_tree_root(pad_to_power_of_2([
            hash_tree_root_of_bytes(block.body) for block in blocks
        ]))
    )
```

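The fee rule above adjusts `block_body_price` like a control loop that steers usage toward `SHARD_BLOCK_SIZE_TARGET`: the price rises when blocks are larger than the target and falls when they are smaller. A self-contained toy model of just that adjustment (the constant values below are illustrative stand-ins, not the spec's actual parameters):

```python
# Toy model of the block body price adjustment. Constants are illustrative
# stand-ins, not the spec values.
MAX_SHARD_BLOCK_SIZE = 65536
SHARD_BLOCK_SIZE_TARGET = 32768
BLOCK_BODY_PRICE_QUOTIENT = 8
MIN_BLOCK_BODY_PRICE = 1
MAX_BLOCK_BODY_PRICE = 10**9

def adjust_price(price: int, block_size: int) -> int:
    quotient = MAX_SHARD_BLOCK_SIZE * BLOCK_BODY_PRICE_QUOTIENT
    if block_size > SHARD_BLOCK_SIZE_TARGET:
        delta = price * (block_size - SHARD_BLOCK_SIZE_TARGET) // quotient
        return min(MAX_BLOCK_BODY_PRICE, price + delta)
    else:
        delta = price * (SHARD_BLOCK_SIZE_TARGET - block_size) // quotient
        return max(MIN_BLOCK_BODY_PRICE, price - delta)

full_price = adjust_price(1000, MAX_SHARD_BLOCK_SIZE)  # full block: price rises
empty_price = adjust_price(1000, 0)                    # empty block: price falls
assert full_price > 1000 > empty_price
```

Because the delta is proportional to the current price, repeated full (or empty) blocks move the price multiplicatively, bounded by the min/max caps.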
## Object validity

### Shard blocks

Let:

- `beacon_blocks` be the `BeaconBlock` list such that `beacon_blocks[slot]` is the canonical `BeaconBlock` at slot `slot`
- `beacon_state` be the canonical `BeaconState` after processing `beacon_blocks[-1]`
- `valid_shard_blocks` be the list of valid `ShardBlock` objects, recursively defined
- `candidate` be a candidate `ShardBlock` for which validity is to be determined by running `is_valid_shard_block`

```python
def is_valid_shard_block(beacon_blocks: Sequence[BeaconBlock],
                         beacon_state: BeaconState,
                         valid_shard_blocks: Sequence[ShardBlock],
                         candidate: ShardBlock) -> bool:
    # Check if block is already determined valid
    for _, block in enumerate(valid_shard_blocks):
        if candidate == block:
            return True

    # Check slot number
    assert candidate.slot >= PHASE_1_FORK_SLOT

    # Check shard number
    assert candidate.shard < SHARD_COUNT

    # Check beacon block
    beacon_block = beacon_blocks[candidate.slot]
    assert candidate.beacon_block_root == signing_root(beacon_block)
    assert beacon_block.slot <= candidate.slot

    # Check state root
    assert candidate.state_root == Hash()  # [to be removed in phase 2]

    # Check parent block
    if candidate.slot == PHASE_1_FORK_SLOT:
        assert candidate.parent_root == Hash()
    else:
        parent_block = next(
            (block for block in valid_shard_blocks if signing_root(block) == candidate.parent_root),
            None
        )
        assert parent_block is not None
        assert parent_block.shard == candidate.shard
        assert parent_block.slot < candidate.slot
        assert signing_root(beacon_blocks[parent_block.slot]) == parent_block.beacon_chain_root

    # Check attestations
    assert len(candidate.attestations) <= MAX_SHARD_ATTESTIONS
    for _, attestation in enumerate(candidate.attestations):
        assert max(GENESIS_SHARD_SLOT, candidate.slot - SLOTS_PER_EPOCH) <= attestation.data.slot
        assert attestation.data.slot <= candidate.slot - MIN_ATTESTATION_INCLUSION_DELAY
        assert attestation.data.crosslink.shard == candidate.shard
        verify_shard_attestation_signature(beacon_state, attestation)

    # Check signature
    proposer_index = get_shard_proposer_index(beacon_state, candidate.shard, candidate.slot)
    assert proposer_index is not None
    assert bls_verify(
        pubkey=beacon_state.validators[proposer_index].pubkey,
        message_hash=signing_root(candidate),
        signature=candidate.signature,
        domain=get_domain(beacon_state, DOMAIN_SHARD_PROPOSER, compute_epoch_of_slot(candidate.slot)),
    )

    return True
```

### Shard attestations

Let:

- `valid_shard_blocks` be the list of valid `ShardBlock` objects
- `beacon_state` be the canonical `BeaconState`
- `candidate` be a candidate `ShardAttestation` for which validity is to be determined by running `is_valid_shard_attestation`

```python
def is_valid_shard_attestation(valid_shard_blocks: Sequence[ShardBlock],
                               beacon_state: BeaconState,
                               candidate: ShardAttestation) -> bool:
    # Check shard block
    shard_block = next(
        (block for block in valid_shard_blocks if signing_root(block) == candidate.data.shard_block_root),
        None,
    )
    assert shard_block is not None
    assert shard_block.slot == candidate.data.slot
    assert shard_block.shard == candidate.data.shard

    # Check signature
    verify_shard_attestation_signature(beacon_state, candidate)

    return True
```

### Beacon attestations

Let:

- `shard` be a valid `Shard`
- `shard_blocks` be the `ShardBlock` list such that `shard_blocks[slot]` is the canonical `ShardBlock` for shard `shard` at slot `slot`
- `beacon_state` be the canonical `BeaconState`
- `valid_attestations` be the set of valid `Attestation` objects, recursively defined
- `candidate` be a candidate `Attestation` which is valid under Phase 0 rules, and for which validity is to be determined under Phase 1 rules by running `is_valid_beacon_attestation`

```python
def is_valid_beacon_attestation(shard: Shard,
                                shard_blocks: Sequence[ShardBlock],
                                beacon_state: BeaconState,
                                valid_attestations: Set[Attestation],
                                candidate: Attestation) -> bool:
    # Check if attestation is already determined valid
    for attestation in valid_attestations:
        if candidate == attestation:
            return True

    # Check previous attestation
    if candidate.data.previous_crosslink.epoch <= PHASE_1_FORK_EPOCH:
        assert candidate.data.previous_crosslink.data_root == Hash()
    else:
        previous_attestation = next(
            (attestation for attestation in valid_attestations
             if attestation.data.crosslink.data_root == candidate.data.previous_crosslink.data_root),
            None,
        )
        assert previous_attestation is not None
        assert candidate.data.previous_crosslink.epoch < compute_epoch_of_slot(candidate.data.slot)

    # Check crosslink data root
    start_epoch = beacon_state.crosslinks[shard].epoch
    end_epoch = min(compute_epoch_of_slot(candidate.data.slot) - CROSSLINK_LOOKBACK,
                    start_epoch + MAX_EPOCHS_PER_CROSSLINK)
    blocks = []
    for slot in range(start_epoch * SLOTS_PER_EPOCH, end_epoch * SLOTS_PER_EPOCH):
        blocks.append(shard_blocks[slot])
    assert candidate.data.crosslink.data_root == compute_crosslink_data_root(blocks)

    return True
```

## Shard fork choice rule

The fork choice rule for any shard is LMD GHOST using the shard attestations of the shard committee and the beacon chain attestations of the crosslink committee currently assigned to that shard, but instead of being rooted in the genesis it is rooted in the block referenced in the most recent accepted crosslink (i.e. `beacon_state.crosslinks[shard].shard_block_root`). Only blocks whose `beacon_block_root` is the block in the main beacon chain at the specified `slot` should be considered. (If the beacon chain skips a slot, then the block at that slot is considered to be the block in the beacon chain at the highest slot lower than that slot.)

<!-- TOC -->

- [Merkle proof formats](#merkle-proof-formats)
    - [Table of contents](#table-of-contents)
    - [Helper functions](#helper-functions)
    - [Generalized Merkle tree index](#generalized-merkle-tree-index)
    - [SSZ object to index](#ssz-object-to-index)
    - [Helpers for generalized indices](#helpers-for-generalized-indices)
        - [`concat_generalized_indices`](#concat_generalized_indices)
        - [`get_generalized_index_length`](#get_generalized_index_length)
        - [`get_generalized_index_bit`](#get_generalized_index_bit)
        - [`generalized_index_sibling`](#generalized_index_sibling)
        - [`generalized_index_child`](#generalized_index_child)
        - [`generalized_index_parent`](#generalized_index_parent)
    - [Merkle multiproofs](#merkle-multiproofs)

<!-- /TOC -->

## Helper functions

```python
def get_next_power_of_two(x: int) -> int:
    """
    Get next power of 2 >= the input.
    """
    if x <= 2:
        return x
    else:
        return 2 * get_next_power_of_two((x + 1) // 2)
```

```python
def get_previous_power_of_two(x: int) -> int:
    """
    Get the previous power of 2 <= the input.
    """
    if x <= 2:
        return x
    else:
        return 2 * get_previous_power_of_two(x // 2)
```
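
As a quick sanity check, here are the two helpers transcribed verbatim, with their values on a few small inputs:

```python
def get_next_power_of_two(x: int) -> int:
    # Smallest power of two >= x (returns x itself for x <= 2)
    if x <= 2:
        return x
    else:
        return 2 * get_next_power_of_two((x + 1) // 2)

def get_previous_power_of_two(x: int) -> int:
    # Largest power of two <= x (returns x itself for x <= 2)
    if x <= 2:
        return x
    else:
        return 2 * get_previous_power_of_two(x // 2)

assert [get_next_power_of_two(x) for x in [2, 3, 5, 8, 9]] == [2, 4, 8, 8, 16]
assert [get_previous_power_of_two(x) for x in [2, 3, 5, 8, 9]] == [2, 2, 4, 8, 8]
```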
## Generalized Merkle tree index

In a binary Merkle tree, we define a "generalized index" of a node as `2**depth + index`, where `depth` is the depth of the node in the tree and `index` is its horizontal position at that depth.

Note that the generalized index has the convenient property that the two children of node `k` are `2k` and `2k+1`, and also that it equals the position of a node in the linear representation of the Merkle tree that's computed by this function:

```python
def merkle_tree(leaves: Sequence[Hash]) -> Sequence[Hash]:
    padded_length = get_next_power_of_two(len(leaves))
    o = [Hash()] * padded_length + list(leaves) + [Hash()] * (padded_length - len(leaves))
    for i in range(padded_length - 1, 0, -1):
        o[i] = hash(o[i * 2] + o[i * 2 + 1])
    return o
```
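
The parent-child property (`o[k]`'s children are `o[2k]` and `o[2k+1]`, and `o[1]` is the root) can be checked with a small stand-alone version of `merkle_tree`, here using SHA-256 as the assumed `hash` function and 32 zero bytes as the assumed empty `Hash()`:

```python
from hashlib import sha256

def hash_fn(data: bytes) -> bytes:
    return sha256(data).digest()

def merkle_tree(leaves):
    # Pad the leaf layer to a power of two, then hash pairs bottom-up.
    padded_length = 1
    while padded_length < len(leaves):
        padded_length *= 2
    o = [b'\x00' * 32] * padded_length + list(leaves) + [b'\x00' * 32] * (padded_length - len(leaves))
    for i in range(padded_length - 1, 0, -1):
        o[i] = hash_fn(o[i * 2] + o[i * 2 + 1])
    return o

leaves = [bytes([i]) * 32 for i in range(4)]
tree = merkle_tree(leaves)
# Generalized index k has children 2k and 2k+1; the leaves sit at indices 4..7.
assert tree[1] == hash_fn(tree[2] + tree[3])
assert tree[2] == hash_fn(tree[4] + tree[5])
assert tree[4:8] == leaves
```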

We define a custom type `GeneralizedIndex` as a Python integer type in this document. It can be represented as a Bitvector/Bitlist object as well.

We will define Merkle proofs in terms of generalized indices.

## SSZ object to index
We can now define a concept of a "path", a way of describing a function that takes as input an SSZ object and outputs some specific (possibly deeply nested) member. For example, `foo -> foo.x` is a path, as are `foo -> len(foo.y)` and `foo -> foo.y[5].w`. We'll describe paths as lists, which can have two representations. In "human-readable form", they are `["x"]`, `["y", "__len__"]` and `["y", 5, "w"]` respectively. In "encoded form", they are lists of `uint64` values, in these cases (assuming the fields of `foo` in order are `x` then `y`, and `w` is the first field of `y[i]`) `[0]`, `[1, 2**64-1]`, `[1, 5, 0]`. We define `SSZVariableName` as the member variable name string, i.e., a path is presented as a sequence of integers and `SSZVariableName`.

```python
def item_length(typ: SSZType) -> int:
    """
    Return the number of bytes in a basic type, or 32 (a full hash) for compound types.
    """
    if issubclass(typ, BasicValue):
        return typ.byte_len
    else:
        return 32
```

```python
def get_elem_type(typ: Union[BaseBytes, BaseList, Container],
                  index_or_variable_name: Union[int, SSZVariableName]) -> SSZType:
    """
    Return the type of the element of an object of the given type with the given index
    or member variable name (eg. `7` for `x[7]`, `"foo"` for `x.foo`)
    """
    return typ.get_fields()[index_or_variable_name] if issubclass(typ, Container) else typ.elem_type
```

```python
def chunk_count(typ: SSZType) -> int:
    """
    Return the number of hashes needed to represent the top-level elements in the given type
    (eg. `x.foo` or `x[7]` but not `x[7].bar` or `x.foo.baz`). In all cases except lists/vectors
    of basic types, this is simply the number of top-level elements, as each element gets one
    hash. For lists/vectors of basic types, it is often fewer because multiple basic elements
    can be packed into one 32-byte chunk.
    """
    # typ.length describes the limit for list types, or the length for vector types.
    if issubclass(typ, BasicValue):
        return 1
    elif issubclass(typ, Bits):
        return (typ.length + 255) // 256
    elif issubclass(typ, Elements):
        return (typ.length * item_length(typ.elem_type) + 31) // 32
    elif issubclass(typ, Container):
        return len(typ.get_fields())
    else:
        raise Exception(f"Type not supported: {typ}")
```
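
To make the packing concrete, here is the chunk-count arithmetic for a hypothetical `List[uint64, 100]` and a hypothetical `Bitlist[300]`, using plain integers as stand-ins for the spec's type objects:

```python
# item_length(uint64) is 8 bytes, so 32 // 8 = 4 items pack into each 32-byte chunk.
uint64_len = 8
list_limit = 100
list_chunks = (list_limit * uint64_len + 31) // 32  # ceil(100 * 8 / 32) = 25

# A Bits type packs 256 bits per 32-byte chunk.
bit_limit = 300
bit_chunks = (bit_limit + 255) // 256               # ceil(300 / 256) = 2
```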

```python
def get_item_position(typ: SSZType, index_or_variable_name: Union[int, SSZVariableName]) -> Tuple[int, int, int]:
    """
    Return three variables:
        (i) the index of the chunk in which the given element of the item is represented;
        (ii) the starting byte position within the chunk;
        (iii) the ending byte position within the chunk.
    For example: for a 6-item list of uint64 values, index=2 will return (0, 16, 24), index=5 will return (1, 8, 16)
    """
    if issubclass(typ, Elements):
        index = int(index_or_variable_name)
        start = index * item_length(typ.elem_type)
        return start // 32, start % 32, start % 32 + item_length(typ.elem_type)
    elif issubclass(typ, Container):
        variable_name = index_or_variable_name
        return typ.get_field_names().index(variable_name), 0, item_length(get_elem_type(typ, variable_name))
    else:
        raise Exception("Only lists/vectors/containers supported")
```
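
The docstring's example can be reproduced with the offset arithmetic alone, using 8 as the assumed byte length of `uint64`:

```python
def item_position(index: int, item_len: int):
    # Returns (chunk index, start byte within chunk, end byte within chunk).
    start = index * item_len
    return start // 32, start % 32, start % 32 + item_len

assert item_position(2, 8) == (0, 16, 24)  # third uint64 sits in chunk 0, bytes 16..24
assert item_position(5, 8) == (1, 8, 16)   # sixth uint64 sits in chunk 1, bytes 8..16
```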

```python
def get_generalized_index(typ: SSZType, path: Sequence[Union[int, SSZVariableName]]) -> GeneralizedIndex:
    """
    Converts a path (eg. `[7, "foo", 3]` for `x[7].foo[3]`, `[12, "bar", "__len__"]` for
    `len(x[12].bar)`) into the generalized index representing its position in the Merkle tree.
    """
    root = GeneralizedIndex(1)
    for p in path:
        assert not issubclass(typ, BasicValue)  # If we descend to a basic type, the path cannot continue further
        if p == '__len__':
            assert issubclass(typ, (List, Bytes))
            typ = uint64
            root = GeneralizedIndex(root * 2 + 1)
        else:
            pos, _, _ = get_item_position(typ, p)
            base_index = (GeneralizedIndex(2) if issubclass(typ, (List, Bytes)) else GeneralizedIndex(1))
            root = GeneralizedIndex(root * base_index * get_next_power_of_two(chunk_count(typ)) + pos)
            typ = get_elem_type(typ, p)
    return root
```

### Helpers for generalized indices

_Usage note: functions outside this section should manipulate generalized indices using only functions inside this section. This is to make it easier for developers to implement generalized indices with underlying representations other than bigints._

#### `concat_generalized_indices`

```python
def concat_generalized_indices(*indices: GeneralizedIndex) -> GeneralizedIndex:
    """
    Given generalized indices i1 for A -> B, i2 for B -> C .... i_n for Y -> Z, returns
    the generalized index for A -> Z.
    """
    o = GeneralizedIndex(1)
    for i in indices:
        o = GeneralizedIndex(o * get_previous_power_of_two(i) + (i - get_previous_power_of_two(i)))
    return o
```
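
Concatenation composes path bits: strip the leading 1-bit from each index and glue the remaining bit strings together. For example, 5 is `0b101` (path bits `01`) and 3 is `0b11` (path bit `1`), so their concatenation is `0b1011 = 11`. A stand-alone transcription:

```python
def get_previous_power_of_two(x: int) -> int:
    return x if x <= 2 else 2 * get_previous_power_of_two(x // 2)

def concat_generalized_indices(*indices: int) -> int:
    o = 1
    for i in indices:
        o = o * get_previous_power_of_two(i) + (i - get_previous_power_of_two(i))
    return o

assert concat_generalized_indices(5, 3) == 0b1011  # == 11
```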

#### `get_generalized_index_length`

```python
def get_generalized_index_length(index: GeneralizedIndex) -> int:
    """
    Return the length of a path represented by a generalized index.
    """
    return int(log2(index))
```

#### `get_generalized_index_bit`

```python
def get_generalized_index_bit(index: GeneralizedIndex, position: int) -> bool:
    """
    Return the given bit of a generalized index.
    """
    return (index & (1 << position)) > 0
```

#### `generalized_index_sibling`

```python
def generalized_index_sibling(index: GeneralizedIndex) -> GeneralizedIndex:
    return GeneralizedIndex(index ^ 1)
```

#### `generalized_index_child`

```python
def generalized_index_child(index: GeneralizedIndex, right_side: bool) -> GeneralizedIndex:
    return GeneralizedIndex(index * 2 + right_side)
```

#### `generalized_index_parent`

```python
def generalized_index_parent(index: GeneralizedIndex) -> GeneralizedIndex:
    return GeneralizedIndex(index // 2)
```

## Merkle multiproofs
. are unused nodes, * are used nodes, x are the values we are trying to prove. Notice how despite being a multiproof for 3 values, it requires only 3 auxiliary nodes, only one node more than would be required to prove a single value. Normally the efficiency gains are not quite that extreme, but the savings relative to individual Merkle proofs are still significant. As a rule of thumb, a multiproof for k nodes at the same level of an n-node tree has size `k * (n/k + log(n/k))`.

First, we provide a method for computing the generalized indices of the auxiliary tree nodes that a proof of a given set of generalized indices will require:

```python
def get_branch_indices(tree_index: GeneralizedIndex) -> Sequence[GeneralizedIndex]:
    """
    Get the generalized indices of the sister chunks along the path from the chunk with the
    given tree index to the root.
    """
    o = [generalized_index_sibling(tree_index)]
    while o[-1] > 1:
        o.append(generalized_index_sibling(generalized_index_parent(o[-1])))
    return o[:-1]
```

```python
def get_path_indices(tree_index: GeneralizedIndex) -> Sequence[GeneralizedIndex]:
    """
    Get the generalized indices of the chunks along the path from the chunk with the
    given tree index to the root.
    """
    o = [tree_index]
    while o[-1] > 1:
        o.append(generalized_index_parent(o[-1]))
    return o[:-1]
```

```python
def get_helper_indices(indices: Sequence[GeneralizedIndex]) -> Sequence[GeneralizedIndex]:
    """
    Get the generalized indices of all "extra" chunks in the tree needed to prove the chunks with the given
    generalized indices. Note that the decreasing order is chosen deliberately to ensure equivalence to the
    order of hashes in a regular single-item Merkle proof in the single-item case.
    """
    all_helper_indices: Set[GeneralizedIndex] = set()
    all_path_indices: Set[GeneralizedIndex] = set()
    for index in indices:
        all_helper_indices = all_helper_indices.union(set(get_branch_indices(index)))
        all_path_indices = all_path_indices.union(set(get_path_indices(index)))

    return sorted(all_helper_indices.difference(all_path_indices), reverse=True)
```
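
For a single leaf this reduces to the classic Merkle branch. For example, proving the chunk at generalized index 9 in an 8-leaf tree needs helpers `[8, 5, 3]` (sibling, uncle, great-uncle), in decreasing order; proving two siblings shares all the upper levels. A stand-alone transcription of the three functions:

```python
def sibling(i: int) -> int: return i ^ 1
def parent(i: int) -> int: return i // 2

def get_branch_indices(tree_index):
    o = [sibling(tree_index)]
    while o[-1] > 1:
        o.append(sibling(parent(o[-1])))
    return o[:-1]

def get_path_indices(tree_index):
    o = [tree_index]
    while o[-1] > 1:
        o.append(parent(o[-1]))
    return o[:-1]

def get_helper_indices(indices):
    all_helper_indices, all_path_indices = set(), set()
    for index in indices:
        all_helper_indices |= set(get_branch_indices(index))
        all_path_indices |= set(get_path_indices(index))
    return sorted(all_helper_indices - all_path_indices, reverse=True)

assert get_helper_indices([9]) == [8, 5, 3]
assert get_helper_indices([8, 9]) == [5, 3]  # siblings share all helpers above them
```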

Now we provide the Merkle proof verification functions. First, for single item proofs:

```python
def calculate_merkle_root(leaf: Hash, proof: Sequence[Hash], index: GeneralizedIndex) -> Hash:
    assert len(proof) == get_generalized_index_length(index)
    for i, h in enumerate(proof):
        if get_generalized_index_bit(index, i):
            leaf = hash(h + leaf)
        else:
            leaf = hash(leaf + h)
    return leaf
```

```python
def verify_merkle_proof(leaf: Hash, proof: Sequence[Hash], index: GeneralizedIndex, root: Hash) -> bool:
    return calculate_merkle_root(leaf, proof, index) == root
```
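
A round trip with four 32-byte leaves, using SHA-256 as the assumed `hash`: the proof for the chunk at generalized index 4 consists of the helper chunks at indices 5 and 3, in that order, matching `get_branch_indices`:

```python
from hashlib import sha256

def hash_fn(x: bytes) -> bytes:
    return sha256(x).digest()

def calculate_merkle_root(leaf, proof, index):
    assert len(proof) == index.bit_length() - 1  # floor(log2(index)) == proof length
    for i, h in enumerate(proof):
        if (index >> i) & 1:
            leaf = hash_fn(h + leaf)
        else:
            leaf = hash_fn(leaf + h)
    return leaf

leaves = [bytes([i]) * 32 for i in range(4)]  # chunks at generalized indices 4..7
node2 = hash_fn(leaves[0] + leaves[1])
node3 = hash_fn(leaves[2] + leaves[3])
root = hash_fn(node2 + node3)

proof = [leaves[1], node3]  # helper chunks at indices 5, 3
assert calculate_merkle_root(leaves[0], proof, 4) == root
```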

Now for multi-item proofs:

```python
def calculate_multi_merkle_root(leaves: Sequence[Hash],
                                proof: Sequence[Hash],
                                indices: Sequence[GeneralizedIndex]) -> Hash:
    assert len(leaves) == len(indices)
    helper_indices = get_helper_indices(indices)
    assert len(proof) == len(helper_indices)
    objects = {
        **{index: node for index, node in zip(indices, leaves)},
        **{index: node for index, node in zip(helper_indices, proof)}
    }
    keys = sorted(objects.keys(), reverse=True)
    pos = 0
    while pos < len(keys):
        k = keys[pos]
        if k in objects and k ^ 1 in objects and k // 2 not in objects:
            objects[GeneralizedIndex(k // 2)] = hash(
                objects[GeneralizedIndex((k | 1) ^ 1)] +
                objects[GeneralizedIndex(k | 1)]
            )
            keys.append(GeneralizedIndex(k // 2))
        pos += 1
    return objects[GeneralizedIndex(1)]
```

```python
def verify_merkle_multiproof(leaves: Sequence[Hash],
                             proof: Sequence[Hash],
                             indices: Sequence[GeneralizedIndex],
                             root: Hash) -> bool:
    return calculate_multi_merkle_root(leaves, proof, indices) == root
```

Note that the single-item proof is a special case of a multi-item proof; a valid single-item proof verifies correctly when put into the multi-item verification function (making the natural trivial changes to input arguments, `index -> [index]` and `leaf -> [leaf]`). Note also that `calculate_merkle_root` and `calculate_multi_merkle_root` can be used independently to compute the new Merkle root of a proof with leaves updated.
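
Putting the pieces together, here is a self-contained round trip (SHA-256 as the assumed `hash`) that proves two chunks of an 8-leaf tree at once and reconstructs the root with the algorithm above:

```python
from hashlib import sha256

def hash_fn(x: bytes) -> bytes:
    return sha256(x).digest()

def get_branch_indices(i):
    o = [i ^ 1]
    while o[-1] > 1:
        o.append((o[-1] // 2) ^ 1)
    return o[:-1]

def get_path_indices(i):
    o = [i]
    while o[-1] > 1:
        o.append(o[-1] // 2)
    return o[:-1]

def get_helper_indices(indices):
    helpers, paths = set(), set()
    for i in indices:
        helpers |= set(get_branch_indices(i))
        paths |= set(get_path_indices(i))
    return sorted(helpers - paths, reverse=True)

def calculate_multi_merkle_root(leaves, proof, indices):
    helper_indices = get_helper_indices(indices)
    assert len(leaves) == len(indices) and len(proof) == len(helper_indices)
    objects = {**dict(zip(indices, leaves)), **dict(zip(helper_indices, proof))}
    keys = sorted(objects.keys(), reverse=True)
    pos = 0
    while pos < len(keys):
        k = keys[pos]
        if k in objects and k ^ 1 in objects and k // 2 not in objects:
            objects[k // 2] = hash_fn(objects[(k | 1) ^ 1] + objects[k | 1])
            keys.append(k // 2)
        pos += 1
    return objects[1]

# Build an 8-leaf tree; leaf chunks sit at generalized indices 8..15.
leaves = [bytes([i]) * 32 for i in range(8)]
tree = [None] * 8 + leaves
for i in range(7, 0, -1):
    tree[i] = hash_fn(tree[2 * i] + tree[2 * i + 1])

indices = [9, 14]
proof = [tree[i] for i in get_helper_indices(indices)]
assert calculate_multi_merkle_root([tree[9], tree[14]], proof, indices) == tree[1]
```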

# Minimal Light Client Design

**Notice**: This document is a work-in-progress for researchers and implementers.

## Table of contents

<!-- TOC -->

- [Minimal Light Client Design](#minimal-light-client-design)
    - [Table of contents](#table-of-contents)
    - [Introduction](#introduction)
    - [Custom types](#custom-types)
    - [Constants](#constants)
    - [Containers](#containers)
        - [`LightClientUpdate`](#lightclientupdate)
    - [Helpers](#helpers)
        - [`LightClientMemory`](#lightclientmemory)
        - [`get_persistent_committee_pubkeys_and_balances`](#get_persistent_committee_pubkeys_and_balances)
    - [Light client state updates](#light-client-state-updates)
    - [Data overhead](#data-overhead)

<!-- /TOC -->

## Introduction

Ethereum 2.0 is designed to be light client friendly. This allows low-resource clients such as mobile phones to access Ethereum 2.0 with reasonable safety and liveness. It also facilitates the development of "bridges" to external blockchains. This document suggests a minimal light client design for the beacon chain.
|
||||
We define an "expansion" of an object as an object where a field in an object that is meant to represent the `hash_tree_root` of another object is replaced by the object. Note that defining expansions is not a consensus-layer-change; it is merely a "re-interpretation" of the object. Particularly, the `hash_tree_root` of an expansion of an object is identical to that of the original object, and we can define expansions where, given a complete history, it is always possible to compute the expansion of any object in the history. The opposite of an expansion is a "summary" (e.g. `BeaconBlockHeader` is a summary of `BeaconBlock`).

## Custom types

We define the following Python custom types for type hinting and readability:

| Name | SSZ equivalent | Description |
| - | - | - |
| `CompactValidator` | `uint64` | compact representation of a validator for light clients |
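
For illustration, one way to realize this compact encoding is the phase 1 `pack_compact_validator`/`unpack_compact_validator` helpers. The bit layout below (validator index in the high bits, slashed flag in bit 15, effective balance in increments in the low 15 bits) is an assumption for this sketch:

```python
def pack_compact_validator(index: int, slashed: bool, balance_in_increments: int) -> int:
    """Pack (index, slashed, balance) into a single uint64-sized integer.

    Assumed layout: index << 16 | slashed << 15 | balance (15 bits).
    """
    assert 0 <= balance_in_increments < 2**15
    return (index << 16) + (int(slashed) << 15) + balance_in_increments


def unpack_compact_validator(compact_validator: int) -> tuple:
    """Inverse of pack_compact_validator."""
    return (
        compact_validator >> 16,               # validator index
        bool((compact_validator >> 15) % 2),   # slashed flag
        compact_validator & (2**15 - 1),       # balance in increments
    )


# Round trip: a validator with index 123, slashed, 32 balance increments
assert unpack_compact_validator(pack_compact_validator(123, True, 32)) == (123, True, 32)
```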

## Constants

| Name | Value |
| - | - |
| `BEACON_CHAIN_ROOT_IN_SHARD_BLOCK_HEADER_DEPTH` | `4` |
| `BEACON_CHAIN_ROOT_IN_SHARD_BLOCK_HEADER_INDEX` | **TBD** |
| `PERIOD_COMMITTEE_ROOT_IN_BEACON_STATE_DEPTH` | `5` |
| `PERIOD_COMMITTEE_ROOT_IN_BEACON_STATE_INDEX` | **TBD** |

## Containers

### `LightClientUpdate`

```python
class LightClientUpdate(Container):
    # Shard block root (and authenticating signature data)
    shard_block_root: Hash
    fork_version: Version
    aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]
    signature: BLSSignature
    # Updated beacon header (and authenticating branch)
    header: BeaconBlockHeader
    header_branch: Vector[Hash, BEACON_CHAIN_ROOT_IN_SHARD_BLOCK_HEADER_DEPTH]
    # Updated period committee (and authenticating branch)
    committee: CompactCommittee
    committee_branch: Vector[Hash, PERIOD_COMMITTEE_ROOT_IN_BEACON_STATE_DEPTH + log_2(SHARD_COUNT)]
```

## Helpers

### `LightClientMemory`

```python
@dataclass
class LightClientMemory(object):
    shard: Shard  # Randomly initialized and retained forever
    header: BeaconBlockHeader  # Beacon header which is not expected to revert
    # Period committees corresponding to the beacon header
    previous_committee: CompactCommittee
    current_committee: CompactCommittee
    next_committee: CompactCommittee
```

### `get_persistent_committee_pubkeys_and_balances`

```python
def get_persistent_committee_pubkeys_and_balances(memory: LightClientMemory,
                                                  epoch: Epoch) -> Tuple[Sequence[BLSPubkey], Sequence[uint64]]:
    """
    Return pubkeys and balances for the persistent committee at ``epoch``.
    """
    current_period = compute_epoch_at_slot(memory.header.slot) // EPOCHS_PER_SHARD_PERIOD
    next_period = epoch // EPOCHS_PER_SHARD_PERIOD
    assert next_period in (current_period, current_period + 1)
    if next_period == current_period:
        earlier_committee, later_committee = memory.previous_committee, memory.current_committee
    else:
        earlier_committee, later_committee = memory.current_committee, memory.next_committee

    pubkeys = []
    balances = []
    for pubkey, compact_validator in zip(earlier_committee.pubkeys, earlier_committee.compact_validators):
        index, slashed, balance = unpack_compact_validator(compact_validator)
        if epoch % EPOCHS_PER_SHARD_PERIOD < index % EPOCHS_PER_SHARD_PERIOD:
            pubkeys.append(pubkey)
            balances.append(balance)
    for pubkey, compact_validator in zip(later_committee.pubkeys, later_committee.compact_validators):
        index, slashed, balance = unpack_compact_validator(compact_validator)
        if epoch % EPOCHS_PER_SHARD_PERIOD >= index % EPOCHS_PER_SHARD_PERIOD:
            pubkeys.append(pubkey)
            balances.append(balance)
    return pubkeys, balances
```
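
The two loops above rotate the committee gradually: validator `i` switches from the earlier to the later committee at epoch offset `i % EPOCHS_PER_SHARD_PERIOD` within the period. A toy simulation of that selection predicate (hypothetical indices and a small period value, purely for illustration):

```python
EPOCHS_PER_SHARD_PERIOD = 4  # toy value; the real constant is much larger

def committee_members_at(epoch: int, earlier: list, later: list) -> list:
    """Apply the same switchover predicate as the spec function above."""
    offset = epoch % EPOCHS_PER_SHARD_PERIOD
    kept = [i for i in earlier if offset < i % EPOCHS_PER_SHARD_PERIOD]    # not yet cycled out
    joined = [i for i in later if offset >= i % EPOCHS_PER_SHARD_PERIOD]  # already cycled in
    return kept + joined

earlier, later = [0, 1, 2, 3], [4, 5, 6, 7]
print(committee_members_at(0, earlier, later))  # -> [1, 2, 3, 4]
print(committee_members_at(3, earlier, later))  # -> [4, 5, 6, 7]
```

At every epoch offset the result is a mix of both committees, so the signing set changes by only a fraction of its members per epoch.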

## Light client state updates

The state of a light client is stored in a `memory` object of type `LightClientMemory`. To advance its state, a light client requests an `update` object of type `LightClientUpdate` from the network by sending a request containing `(memory.shard, memory.header.slot, slot_range_end)` and calls `update_memory(memory, update)`.

```python
def update_memory(memory: LightClientMemory, update: LightClientUpdate) -> None:
    # Verify the update does not skip a period
    current_period = compute_epoch_at_slot(memory.header.slot) // EPOCHS_PER_SHARD_PERIOD
    next_epoch = compute_epoch_of_shard_slot(update.header.slot)
    next_period = next_epoch // EPOCHS_PER_SHARD_PERIOD
    assert next_period in (current_period, current_period + 1)

    # Verify update header against shard block root and header branch
    assert is_valid_merkle_branch(
        leaf=hash_tree_root(update.header),
        branch=update.header_branch,
        depth=BEACON_CHAIN_ROOT_IN_SHARD_BLOCK_HEADER_DEPTH,
        index=BEACON_CHAIN_ROOT_IN_SHARD_BLOCK_HEADER_INDEX,
        root=update.shard_block_root,
    )

    # Verify persistent committee votes pass 2/3 threshold
    pubkeys, balances = get_persistent_committee_pubkeys_and_balances(memory, next_epoch)
    assert 3 * sum(balance for i, balance in enumerate(balances) if update.aggregation_bits[i]) > 2 * sum(balances)

    # Verify shard attestations
    pubkey = bls_aggregate_pubkeys([pubkey for i, pubkey in enumerate(pubkeys) if update.aggregation_bits[i]])
    domain = compute_domain(DOMAIN_SHARD_ATTESTER, update.fork_version)
    assert bls_verify(pubkey, update.shard_block_root, update.signature, domain)

    # Update period committees if entering a new period
    if next_period == current_period + 1:
        assert is_valid_merkle_branch(
            leaf=hash_tree_root(update.committee),
            branch=update.committee_branch,
            depth=PERIOD_COMMITTEE_ROOT_IN_BEACON_STATE_DEPTH + log_2(SHARD_COUNT),
            index=(PERIOD_COMMITTEE_ROOT_IN_BEACON_STATE_INDEX << log_2(SHARD_COUNT)) + memory.shard,
            root=hash_tree_root(update.header),
        )
        memory.previous_committee = memory.current_committee
        memory.current_committee = memory.next_committee
        memory.next_committee = update.committee

    # Update header
    memory.header = update.header
```
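
The `is_valid_merkle_branch` helper used above is defined in the phase 0 spec. A self-contained sketch (using SHA-256 as the hash function) showing how the branch check works:

```python
from hashlib import sha256

def hash_fn(data: bytes) -> bytes:
    return sha256(data).digest()

def is_valid_merkle_branch(leaf: bytes, branch: list, depth: int, index: int, root: bytes) -> bool:
    """Hash `leaf` up through `depth` sibling nodes, ordering each concatenation
    by the corresponding bit of `index`, and compare the result against `root`."""
    value = leaf
    for i in range(depth):
        if (index >> i) % 2:
            value = hash_fn(branch[i] + value)
        else:
            value = hash_fn(value + branch[i])
    return value == root

# Depth-1 example: the root of a two-leaf tree is H(leaf0 || leaf1)
leaf0, leaf1 = b"\x01" * 32, b"\x02" * 32
root = hash_fn(leaf0 + leaf1)
assert is_valid_merkle_branch(leaf0, [leaf1], 1, 0, root)
assert is_valid_merkle_branch(leaf1, [leaf0], 1, 1, root)
```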

## Data overhead

Once every `EPOCHS_PER_SHARD_PERIOD` epochs (~27 hours) a light client downloads a `LightClientUpdate` object:

* `shard_block_root`: 32 bytes
* `fork_version`: 4 bytes
* `aggregation_bits`: 16 bytes
* `signature`: 96 bytes
* `header`: 8 + 32 + 32 + 32 + 96 = 200 bytes
* `header_branch`: 4 * 32 = 128 bytes
* `committee`: 128 * (48 + 8) = 7,168 bytes
* `committee_branch`: (5 + 10) * 32 = 480 bytes

The total overhead is 8,124 bytes, or ~0.083 bytes per second. The Bitcoin SPV equivalent is 80 bytes per ~560 seconds, or ~0.143 bytes per second. Various compression optimisations (similar to [these](https://github.com/RCasatta/compressedheaders)) are possible.

A light client can choose to update the header (without updating the committee) more frequently than once every `EPOCHS_PER_SHARD_PERIOD` epochs at a cost of 32 + 4 + 16 + 96 + 200 + 128 = 476 bytes per update.
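
The byte counts above can be checked directly (a sketch; the ~27 hours per period figure is taken from the text):

```python
update_sizes = {
    "shard_block_root": 32,
    "fork_version": 4,
    "aggregation_bits": 16,
    "signature": 96,
    "header": 8 + 32 + 32 + 32 + 96,
    "header_branch": 4 * 32,
    "committee": 128 * (48 + 8),
    "committee_branch": (5 + 10) * 32,
}
total = sum(update_sizes.values())
print(total)  # -> 8124

# Header-only update: everything except the committee and its branch
header_only = total - update_sizes["committee"] - update_sizes["committee_branch"]
print(header_only)  # -> 476

# Amortized overhead per second over a ~27 hour period
print(round(total / (27 * 60 * 60), 4))  # -> 0.0836
```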

@@ -5,9 +5,9 @@ This document contains the networking specification for Ethereum 2.0 clients.

It consists of four main sections:

1. A specification of the network fundamentals detailing the two network configurations: interoperability test network and mainnet launch.
2. A specification of the three network interaction *domains* of Eth2: (a) the gossip domain, (b) the discovery domain, and (c) the Req/Resp domain.
3. The rationale and further explanation for the design choices made in the previous two sections.
4. An analysis of the maturity/state of the libp2p features required by this spec across the languages in which Eth2 clients are being developed.

## Table of contents

@@ -21,7 +21,7 @@ It consists of four main sections:

- [Encryption and identification](#encryption-and-identification)
- [Protocol negotiation](#protocol-negotiation)
- [Multiplexing](#multiplexing)
- [Eth2 network interaction domains](#eth2-network-interaction-domains)
- [Configuration](#configuration)
- [The gossip domain: gossipsub](#the-gossip-domain-gossipsub)
- [The Req/Resp domain](#the-reqresp-domain)

@@ -83,7 +83,7 @@ The following SecIO parameters MUST be supported by all stacks:

[Noise Framework](http://www.noiseprotocol.org/) handshakes will be used for mainnet. libp2p Noise support [is in the process of being standardized](https://github.com/libp2p/specs/issues/195) in the libp2p project.

Noise support will presumably include IX, IK, and XX handshake patterns, and may rely on Curve25519 keys, ChaCha20 and Poly1305 ciphers, and SHA-256 as a hash function. These aspects are being actively debated in the referenced issue (Eth2 implementers are welcome to comment and contribute to the discussion).

## Protocol Negotiation

@@ -105,7 +105,7 @@ Two multiplexers are commonplace in libp2p implementations: [mplex](https://gith

Clients MUST support [mplex](https://github.com/libp2p/specs/tree/master/mplex) and MAY support [yamux](https://github.com/hashicorp/yamux/blob/master/spec.md). If both are supported by the client, yamux must take precedence during negotiation. See the [Rationale](#design-decision-rationale) section below for tradeoffs.

# Eth2 network interaction domains

## Configuration

@@ -115,9 +115,10 @@ This section outlines constants that are used in this spec.

| Name | Value | Description |
|---|---|---|
| `GOSSIP_MAX_SIZE` | `2**20` (= 1048576, 1 MiB) | The maximum allowed size of uncompressed gossip messages. |
| `MAX_CHUNK_SIZE` | `2**20` (= 1048576, 1 MiB) | The maximum allowed size of uncompressed req/resp chunked responses. |
| `ATTESTATION_SUBNET_COUNT` | `64` | The number of attestation subnets used in the gossipsub protocol. |
| `TTFB_TIMEOUT` | `5s` | The maximum time to wait for the first byte of a request's response (time-to-first-byte). |
| `RESP_TIMEOUT` | `10s` | The maximum time for complete response transfer. |
| `ATTESTATION_PROPAGATION_SLOT_RANGE` | `32` | The maximum number of slots during which an attestation can be propagated. |
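
For illustration, the message size limit might be enforced on both the receive and send paths (a sketch, not normative client code):

```python
GOSSIP_MAX_SIZE = 2**20  # 1 MiB, from the table above

def is_valid_gossip_size(uncompressed_payload: bytes) -> bool:
    """Clients MUST reject messages over GOSSIP_MAX_SIZE and MUST NOT emit them."""
    return len(uncompressed_payload) <= GOSSIP_MAX_SIZE

assert is_valid_gossip_size(b"\x00" * GOSSIP_MAX_SIZE)
assert not is_valid_gossip_size(b"\x00" * (GOSSIP_MAX_SIZE + 1))
```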

## The gossip domain: gossipsub

@@ -140,54 +141,72 @@ The following gossipsub [parameters](https://github.com/libp2p/specs/tree/master

- `gossip_history` (number of heartbeat intervals to retain message IDs): 5
- `heartbeat_interval` (frequency of heartbeat, seconds): 1

### Topics and messages

Topics are plain UTF-8 strings and are encoded on the wire as determined by protobuf (gossipsub messages are enveloped in protobuf messages). Topic strings have form: `/eth2/TopicName/TopicEncoding`. This defines both the type of data being sent on the topic and how the data field of the message is encoded.

Each gossipsub [message](https://github.com/libp2p/go-libp2p-pubsub/blob/master/pb/rpc.proto#L17-L24) has a maximum size of `GOSSIP_MAX_SIZE`. Clients MUST reject (fail validation) messages that are over this size limit. Likewise, clients MUST NOT emit or propagate messages larger than this limit.

The payload is carried in the `data` field of a gossipsub message, and varies depending on the topic:

| Topic                                          | Message Type      |
|------------------------------------------------|-------------------|
| beacon_block                                   | BeaconBlock       |
| beacon_aggregate_and_proof                     | AggregateAndProof |
| beacon_attestation\*                           | Attestation       |
| committee_index{subnet_id}\_beacon_attestation | Attestation       |
| voluntary_exit                                 | VoluntaryExit     |
| proposer_slashing                              | ProposerSlashing  |
| attester_slashing                              | AttesterSlashing  |

Clients MUST reject (fail validation) messages containing an incorrect type, or invalid payload.

When processing incoming gossip, clients MAY descore or disconnect peers who fail to observe these constraints.

\* The `beacon_attestation` topic is only for interop and will be removed prior to mainnet.
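
The `/eth2/TopicName/TopicEncoding` convention can be made concrete with a small helper (the function name is hypothetical):

```python
def gossip_topic(topic_name: str, topic_encoding: str = "ssz_snappy") -> str:
    """Build a gossipsub topic string of the form /eth2/TopicName/TopicEncoding."""
    return f"/eth2/{topic_name}/{topic_encoding}"

print(gossip_topic("beacon_block"))        # -> /eth2/beacon_block/ssz_snappy
print(gossip_topic("beacon_attestation"))  # -> /eth2/beacon_attestation/ssz_snappy
```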

#### Global topics

There are two primary global topics used to propagate beacon blocks and aggregate attestations to all nodes on the network. Their `TopicName`s are:

- `beacon_block` - This topic is used solely for propagating new beacon blocks to all nodes on the network. Blocks are sent in their entirety. Clients MUST validate the block proposer signature before forwarding it across the network.
- `beacon_aggregate_and_proof` - This topic is used to propagate aggregated attestations (as `AggregateAndProof`s) to subscribing nodes (typically validators) to be included in future blocks. The following validations MUST pass before forwarding the `aggregate_and_proof` on the network.
    - The aggregate attestation defined by `hash_tree_root(aggregate_and_proof.aggregate)` has _not_ already been seen (via aggregate gossip, within a block, or through the creation of an equivalent aggregate locally).
    - The block being voted for (`aggregate_and_proof.aggregate.data.beacon_block_root`) passes validation.
    - `aggregate_and_proof.aggregate.data.slot` is within the last `ATTESTATION_PROPAGATION_SLOT_RANGE` slots (`aggregate_and_proof.aggregate.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= aggregate_and_proof.aggregate.data.slot`).
    - The validator index is within the aggregate's committee -- i.e. `aggregate_and_proof.index in get_attesting_indices(state, aggregate_and_proof.aggregate.data, aggregate_and_proof.aggregate.aggregation_bits)`.
    - `aggregate_and_proof.selection_proof` selects the validator as an aggregator for the slot -- i.e. `is_aggregator(state, aggregate_and_proof.aggregate.data.index, aggregate_and_proof.selection_proof)` returns `True`.
    - The `aggregate_and_proof.selection_proof` is a valid signature of the `aggregate_and_proof.aggregate.data.slot` by the validator with index `aggregate_and_proof.index`.
    - The signature of `aggregate_and_proof.aggregate` is valid.

Additional global topics are used to propagate lower frequency validator messages. Their `TopicName`s are:

- `voluntary_exit` - This topic is used solely for propagating voluntary validator exits to proposers on the network. Voluntary exits are sent in their entirety. Clients who receive a voluntary exit on this topic MUST validate the conditions within `process_voluntary_exit` before forwarding it across the network.
- `proposer_slashing` - This topic is used solely for propagating proposer slashings to proposers on the network. Proposer slashings are sent in their entirety. Clients who receive a proposer slashing on this topic MUST validate the conditions within `process_proposer_slashing` before forwarding it across the network.
- `attester_slashing` - This topic is used solely for propagating attester slashings to proposers on the network. Attester slashings are sent in their entirety. Clients who receive an attester slashing on this topic MUST validate the conditions within `process_attester_slashing` before forwarding it across the network.

#### Attestation subnets

Attestation subnets are used to propagate unaggregated attestations to subsections of the network. Their `TopicName`s are:

- `committee_index{subnet_id}_beacon_attestation` - These topics are used to propagate unaggregated attestations to the subnet `subnet_id` (typically beacon and persistent committees) to be aggregated before being gossiped to `beacon_aggregate_and_proof`. The following validations MUST pass before forwarding the `attestation` on the subnet.
    - The attestation's committee index (`attestation.data.index`) is for the correct subnet.
    - The attestation is unaggregated -- that is, it has exactly one participating validator (`len([bit for bit in attestation.aggregation_bits if bit == 0b1]) == 1`).
    - The block being voted for (`attestation.data.beacon_block_root`) passes validation.
    - `attestation.data.slot` is within the last `ATTESTATION_PROPAGATION_SLOT_RANGE` slots (`attestation.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= attestation.data.slot`).
    - The signature of `attestation` is valid.
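
Two of the validations above are simple predicates and can be sketched directly (illustrative helper names, not normative client code):

```python
ATTESTATION_PROPAGATION_SLOT_RANGE = 32

def is_unaggregated(aggregation_bits) -> bool:
    """An unaggregated attestation has exactly one participating validator."""
    return len([bit for bit in aggregation_bits if bit == 0b1]) == 1

def is_within_propagation_range(attestation_slot: int, current_slot: int) -> bool:
    """attestation.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= attestation.data.slot"""
    return attestation_slot + ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= attestation_slot

assert is_unaggregated([0, 1, 0, 0])
assert not is_unaggregated([1, 1, 0, 0])
assert is_within_propagation_range(100, 132)      # exactly 32 slots old: still propagated
assert not is_within_propagation_range(100, 133)  # too old
```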

#### Interop

Unaggregated and aggregated attestations from all shards are sent as `Attestation`s to the `beacon_attestation` topic. Clients are not required to publish aggregate attestations but must be able to process them. All validating clients SHOULD try to perform local attestation aggregation to prepare for block proposing.

#### Mainnet

Attestation broadcasting is grouped into subnets defined by a topic. The number of subnets is defined via `ATTESTATION_SUBNET_COUNT`. For the `committee_index{subnet_id}_beacon_attestation` topics, `subnet_id` is set to `index % ATTESTATION_SUBNET_COUNT`, where `index` is the `CommitteeIndex` of the given committee.

Unaggregated attestations are sent to the subnet topic, `committee_index{attestation.data.index % ATTESTATION_SUBNET_COUNT}_beacon_attestation` as `Attestation`s.
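
The subnet mapping can be sketched as follows (the helper name is hypothetical):

```python
ATTESTATION_SUBNET_COUNT = 64

def attestation_subnet_topic(committee_index: int) -> str:
    """Map a CommitteeIndex to its committee_index{subnet_id}_beacon_attestation topic."""
    subnet_id = committee_index % ATTESTATION_SUBNET_COUNT
    return f"committee_index{subnet_id}_beacon_attestation"

print(attestation_subnet_topic(3))   # -> committee_index3_beacon_attestation
print(attestation_subnet_topic(67))  # -> committee_index3_beacon_attestation (wraps modulo 64)
```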

Aggregated attestations are sent to the `beacon_aggregate_and_proof` topic as `AggregateAndProof`s.

### Encodings

@@ -199,7 +218,7 @@ Topics are post-fixed with an encoding. Encodings define how the payload of a go

#### Mainnet

- `ssz_snappy` - All objects are SSZ-encoded and then compressed with [Snappy](https://github.com/google/snappy). Example: The beacon aggregate attestation topic string is `/eth2/beacon_aggregate_and_proof/ssz_snappy`, and the data field of a gossipsub message is an `AggregateAndProof` that has been SSZ-encoded and then compressed with Snappy.

Implementations MUST use a single encoding. Changing an encoding will require coordination between participating implementations.

@@ -475,9 +494,9 @@ Specifications of these parameters can be found in the [ENR Specification](http:

#### Interop

In the interoperability testnet, all peers will support all capabilities defined in this document (gossip, full Req/Resp suite, discovery protocol), therefore the ENR record does not need to carry Eth2 capability information, as it would be superfluous.

Nonetheless, ENRs MUST carry a generic `eth2` key with nil value, denoting that the peer is indeed an Eth2 peer, in order to eschew connecting to Eth 1.0 peers.

#### Mainnet
|
||||
|
||||
|
@@ -514,7 +533,7 @@ Clients may support other transports such as libp2p QUIC, WebSockets, and WebRTC

The libp2p QUIC transport inherently relies on TLS 1.3 per requirement in section 7 of the [QUIC protocol specification](https://tools.ietf.org/html/draft-ietf-quic-transport-22#section-7) and the accompanying [QUIC-TLS document](https://tools.ietf.org/html/draft-ietf-quic-tls-22).

The usage of one handshake procedure or the other shall be transparent to the Eth 2.0 application layer, once the libp2p Host/Node object has been configured appropriately.

The usage of one handshake procedure or the other shall be transparent to the Eth2 application layer, once the libp2p Host/Node object has been configured appropriately.

### What are the advantages of using TCP/QUIC/Websockets?

@@ -524,7 +543,7 @@ QUIC is a new protocol that’s in the final stages of specification by the IETF

QUIC is being adopted as the underlying protocol for HTTP/3. This has the potential to award us censorship resistance via deep packet inspection for free. Provided that we use the same port numbers and encryption mechanisms as HTTP/3, our traffic may be indistinguishable from standard web traffic, and we may only become subject to standard IP-based firewall filtering—something we can counteract via other mechanisms.

WebSockets and/or WebRTC transports are necessary for interaction with browsers, and will become increasingly important as we incorporate browser-based light clients to the Eth 2.0 network.

WebSockets and/or WebRTC transports are necessary for interaction with browsers, and will become increasingly important as we incorporate browser-based light clients to the Eth2 network.

### Why do we not just support a single transport?
@@ -652,11 +671,23 @@ Topic names have a hierarchical structure. In the future, gossipsub may support

No security or privacy guarantees are lost as a result of choosing plaintext topic names, since the domain is finite anyway, and calculating a digest's preimage would be trivial.

Furthermore, the Eth 2.0 topic names are shorter than their digest equivalents (assuming SHA-256 hash), so hashing topics would bloat messages unnecessarily.

Furthermore, the Eth2 topic names are shorter than their digest equivalents (assuming SHA-256 hash), so hashing topics would bloat messages unnecessarily.

### Why are there `SHARD_SUBNET_COUNT` subnets, and why is this not defined?

### Why are there `ATTESTATION_SUBNET_COUNT` attestation subnets?

Depending on the number of validators, it may be more efficient to group shard subnets and might provide better stability for the gossipsub channel. The exact grouping will be dependent on more involved network tests. This constant allows for more flexibility in setting up the network topology for attestation aggregation (as aggregation should happen on each subnet).

Depending on the number of validators, it may be more efficient to group shard subnets and might provide better stability for the gossipsub channel. The exact grouping will be dependent on more involved network tests. This constant allows for more flexibility in setting up the network topology for attestation aggregation (as aggregation should happen on each subnet). The value is currently set to be equal to `MAX_COMMITTEES_PER_SLOT` until network tests indicate otherwise.
### Why are attestations limited to be broadcast on gossip channels within `SLOTS_PER_EPOCH` slots?

Attestations can only be included on chain within an epoch's worth of slots, so this is the natural cutoff. There is no utility to the chain in broadcasting attestations older than one epoch, and because validators have a chance to make a new attestation each epoch, there is minimal utility to the fork choice in relaying old attestations, as a new latest message can soon be created by each validator.

In addition to this, relaying attestations requires validating the attestation in the context of the `state` during which it was created. Thus, validating arbitrarily old attestations would put additional requirements on which states need to be readily available to the node. This would result in a higher resource burden and could serve as a DoS vector.

### Why are aggregate attestations broadcast to the global topic as `AggregateAndProof`s rather than just as `Attestation`s?

The dominant strategy for an individual validator is to always broadcast an aggregate containing their own attestation to the global channel to ensure that proposers see their attestation for inclusion. Using a private selection criterion and providing this proof of selection alongside the gossiped aggregate ensures that this dominant strategy will not flood the global channel.

Also, an attacker can create any number of honest-looking aggregates and broadcast them to the global pubsub channel. Thus, without some sort of proof of selection as an aggregator, the global channel can trivially be spammed.

### Why are we sending entire objects in the pubsub and not just hashes?
@@ -683,7 +714,7 @@ Requests are segregated by protocol ID to:

3. Enable clients to select the individual requests/versions they support. It would no longer be a strict requirement to support all requests, and clients, in principle, could support a subset of requests and variety of versions.
4. Enable flexibility and agility for clients adopting spec changes that impact the request, by signalling to peers exactly which subset of new/old requests they support.
5. Enable clients to explicitly choose backwards compatibility at the request granularity. Without this, clients would be forced to support entire versions of the coarser request protocol.
6. Parallelise RFCs (or Eth 2.0 EIPs). By decoupling requests from one another, each RFC that affects the request protocol can be deployed/tested/debated independently without relying on a synchronization point to version the general top-level protocol.
6. Parallelise RFCs (or Eth2 EIPs). By decoupling requests from one another, each RFC that affects the request protocol can be deployed/tested/debated independently without relying on a synchronization point to version the general top-level protocol.
    1. This has the benefit that clients can explicitly choose which RFCs to deploy without buying into all other RFCs that may be included in that top-level version.
    2. Affording this level of granularity with a top-level protocol would imply creating as many variants (e.g. /protocol/43-{a,b,c,d,...}) as the cartesian product of RFCs in flight, O(n^2).
7. Allow us to simplify the payload of requests. Request-ids and method-ids no longer need to be sent. The encoding/request type and version can all be handled by the framework.

@@ -795,4 +826,4 @@ For specific ad-hoc testing scenarios, you can use the [plaintext/2.0.0 secure c

# libp2p implementations matrix

This section will soon contain a matrix showing the maturity/state of the libp2p features required by this spec across the languages in which Eth 2.0 clients are being developed.

This section will soon contain a matrix showing the maturity/state of the libp2p features required by this spec across the languages in which Eth2 clients are being developed.
@@ -1,6 +1,6 @@

# SimpleSerialize (SSZ)

**Notice**: This document is a work-in-progress describing typing, serialization, and Merkleization of Eth 2.0 objects.

**Notice**: This document is a work-in-progress describing typing, serialization, and Merkleization of Eth2 objects.

## Table of contents
<!-- TOC -->

@@ -26,6 +26,7 @@

- [Deserialization](#deserialization)
- [Merkleization](#merkleization)
- [Self-signed containers](#self-signed-containers)
- [Summaries and expansions](#summaries-and-expansions)
- [Implementations](#implementations)

<!-- /TOC -->
@@ -138,7 +139,7 @@ return bytes(array)

### `Bitlist[N]`

Note that from the offset coding, the length (in bytes) of the bitlist is known. An additional leading `1` bit is added so that the length in bits will also be known.

Note that from the offset coding, the length (in bytes) of the bitlist is known. An additional `1` bit is added to the end, at index `e` where `e` is the length of the bitlist (not the limit), so that the length in bits will also be known.

```python
array = [0] * ((len(value) // 8) + 1)
```
|
@ -180,7 +181,15 @@ return serialized_type_index + serialized_bytes
|
|||
|
||||
## Deserialization
|
||||
|
||||
Because serialization is an injective function (i.e. two distinct objects of the same type will serialize to different values) any bytestring has at most one object it could deserialize to. Efficient algorithms for computing this object can be found in [the implementations](#implementations).
|
||||
Because serialization is an injective function (i.e. two distinct objects of the same type will serialize to different values) any bytestring has at most one object it could deserialize to.
|
||||
|
||||
Deserialization can be implemented using a recursive algorithm. The deserialization of basic objects is easy, and from there we can find a simple recursive algorithm for all fixed-size objects. For variable-size objects we have to do one of the following depending on what kind of object it is:
|
||||
|
||||
* Vector/list of a variable-size object: The serialized data will start with offsets of all the serialized objects (`BYTES_PER_LENGTH_OFFSET` bytes each).
|
||||
* Using the first offset, we can compute the length of the list (divide by `BYTES_PER_LENGTH_OFFSET`), as it gives us the total number of bytes in the offset data.
|
||||
* The size of each object in the vector/list can be inferred from the difference of two offsets. To get the size of the last object, the total number of bytes has to be known (it is not generally possible to deserialize an SSZ object of unknown length)
|
||||
* Containers follow the same principles as vectors, with the difference that there may be fixed-size objects in a container as well. This means the `fixed_parts` data will contain offsets as well as fixed-size objects.
|
||||
* In the case of bitlists, the length in bits cannot be uniquely inferred from the number of bytes in the object. Because of this, they have a bit at the end that is always set. This bit has to be used to infer the size of the bitlist in bits.
|
||||
|
||||
Note that deserialization requires hardening against invalid inputs. A non-exhaustive list:
|
||||
|
||||
|
@@ -188,6 +197,8 @@ Note that deserialization requires hardening against invalid inputs. A non-exhau

- Scope: Extra unused bytes, not aligned with element size.
- More elements than a list limit allows. Part of enforcing consensus.

Efficient algorithms for computing this object can be found in [the implementations](#implementations).
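As a rough illustration of the offset and bitlist rules described above (a simplified sketch, not the spec's implementation), the following decodes a list of variable-size byte strings from its offset table and infers a bitlist's length in bits from its delimiter bit. `decode_variable_list` and `bitlist_bit_length` are hypothetical names; `BYTES_PER_LENGTH_OFFSET = 4` as in the spec:

```python
BYTES_PER_LENGTH_OFFSET = 4

def decode_variable_list(data: bytes) -> list:
    """Split serialized data into elements using the leading offset table."""
    if len(data) == 0:
        return []
    first_offset = int.from_bytes(data[:BYTES_PER_LENGTH_OFFSET], "little")
    count = first_offset // BYTES_PER_LENGTH_OFFSET  # number of elements
    offsets = [
        int.from_bytes(data[i * BYTES_PER_LENGTH_OFFSET:(i + 1) * BYTES_PER_LENGTH_OFFSET], "little")
        for i in range(count)
    ] + [len(data)]  # the total length bounds the last element
    return [data[offsets[i]:offsets[i + 1]] for i in range(count)]

def bitlist_bit_length(data: bytes) -> int:
    """The highest set bit is the delimiter; bits below it are the payload."""
    assert len(data) > 0 and data[-1] != 0  # delimiter bit must be present
    return (len(data) - 1) * 8 + data[-1].bit_length() - 1

# Round-trip example: two elements, b"ab" and b"cde".
serialized = (8).to_bytes(4, "little") + (10).to_bytes(4, "little") + b"ab" + b"cde"
assert decode_variable_list(serialized) == [b"ab", b"cde"]
assert bitlist_bit_length(bytes([0b00011011])) == 4  # payload bits 1,1,0,1 plus delimiter
```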
## Merkleization

We first define helper functions:

@@ -227,6 +238,12 @@ We now define Merkleization `hash_tree_root(value)` of an object `value` recursi

Let `value` be a self-signed container object. The convention is that the signature (e.g. a `"bytes96"` BLS12-381 signature) be the last field of `value`. Further, the signed message for `value` is `signing_root(value) = hash_tree_root(truncate_last(value))` where `truncate_last` truncates the last element of `value`.
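The helper functions themselves are elided by this hunk. As a rough sketch only (simplified from the spec, assuming 32-byte chunks and SHA-256), Merkleization pads the chunk list to a power of two and hashes pairwise until a single root remains:

```python
from hashlib import sha256

def hash32(data: bytes) -> bytes:
    return sha256(data).digest()

def next_power_of_two(n: int) -> int:
    return 1 if n == 0 else 1 << (n - 1).bit_length()

def merkleize(chunks: list) -> bytes:
    """Simplified sketch: pad chunks with zero-chunks to a power of two,
    then reduce pairwise with SHA-256 until one 32-byte root remains."""
    padded = list(chunks) + [b"\x00" * 32] * (next_power_of_two(len(chunks)) - len(chunks))
    while len(padded) > 1:
        padded = [hash32(padded[i] + padded[i + 1]) for i in range(0, len(padded), 2)]
    return padded[0]

leaf = b"\x01" * 32
assert merkleize([leaf]) == leaf  # a single chunk is its own root
```

Note this omits the spec's chunk-limit padding for lists; it only illustrates the pairwise-hashing shape.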
## Summaries and expansions

Let `A` be an object derived from another object `B` by replacing some of the (possibly nested) values of `B` by their `hash_tree_root`. We say `A` is a "summary" of `B`, and that `B` is an "expansion" of `A`. Notice `hash_tree_root(A) == hash_tree_root(B)`.

We similarly define "summary types" and "expansion types". For example, [`BeaconBlock`](./core/0_beacon-chain.md#beaconblock) is an expansion type of [`BeaconBlockHeader`](./core/0_beacon-chain.md#beaconblockheader). Notice that objects expand to at most one object of a given expansion type. For example, `BeaconBlockHeader` objects uniquely expand to `BeaconBlock` objects.
## Implementations

| Language | Project | Maintainer | Implementation |
| - | - | - | - |

@@ -234,10 +251,11 @@ Let `value` be a self-signed container object. The convention is that the signat

| Python | Ethereum 2.0 | Ethereum Foundation | [https://github.com/ethereum/py-ssz](https://github.com/ethereum/py-ssz) |
| Rust | Lighthouse | Sigma Prime | [https://github.com/sigp/lighthouse/tree/master/eth2/utils/ssz](https://github.com/sigp/lighthouse/tree/master/eth2/utils/ssz) |
| Nim | Nimbus | Status | [https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim](https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim) |
| Rust | Shasper | ParityTech | [https://github.com/paritytech/shasper/tree/master/utils/ssz](https://github.com/paritytech/shasper/tree/master/util/ssz) |
| Rust | Shasper | ParityTech | [https://github.com/paritytech/shasper/tree/master/utils/ssz](https://github.com/paritytech/shasper/tree/master/utils/ssz) |
| TypeScript | Lodestar | ChainSafe Systems | [https://github.com/ChainSafe/ssz-js](https://github.com/ChainSafe/ssz-js) |
| Java | Cava | ConsenSys | [https://www.github.com/ConsenSys/cava/tree/master/ssz](https://www.github.com/ConsenSys/cava/tree/master/ssz) |
| Go | Prysm | Prysmatic Labs | [https://github.com/prysmaticlabs/go-ssz](https://github.com/prysmaticlabs/go-ssz) |
| Swift | Yeeth | Dean Eigenmann | [https://github.com/yeeth/SimpleSerialize.swift](https://github.com/yeeth/SimpleSerialize.swift) |
| C# | | Jordan Andrews | [https://github.com/codingupastorm/csharp-ssz](https://github.com/codingupastorm/csharp-ssz) |
| C# | Cortex | Sly Gryphon | [https://www.nuget.org/packages/Cortex.SimpleSerialize](https://www.nuget.org/packages/Cortex.SimpleSerialize) |
| C++ | | Jiyun Kim | [https://github.com/NAKsir-melody/cpp_ssz](https://github.com/NAKsir-melody/cpp_ssz) |
@@ -1,6 +1,6 @@

# General test format

This document defines the YAML format and structure used for Eth 2.0 testing.

This document defines the YAML format and structure used for Eth2 testing.

## Table of contents
<!-- TOC -->
@@ -38,7 +38,6 @@ The provided pre-state is already transitioned to just before the specific sub-t

Sub-transitions:

- `justification_and_finalization`
- `crosslinks`
- *`rewards_and_penalties` - planned testing extension*
- `registry_updates`
- `slashings`
@@ -6,7 +6,7 @@ Tests the initialization of a genesis state based on Eth1 data.

### `eth1_block_hash.yaml`

A `Bytes32` hex encoded, with prefix 0x. The root of the Eth-1 block.

A `Bytes32` hex encoded, with prefix 0x. The root of the Eth1 block.

Also available as `eth1_block_hash.ssz`.
@@ -43,10 +43,9 @@ Operations:

|-------------------------|----------------------|----------------------|--------------------------------------------------------|
| `attestation`           | `Attestation`        | `attestation`        | `process_attestation(state, attestation)`              |
| `attester_slashing`     | `AttesterSlashing`   | `attester_slashing`  | `process_attester_slashing(state, attester_slashing)`  |
| `block_header`          | `Block`              | **`block`**          | `process_block_header(state, block)`                   |
| `deposit`               | `Deposit`            | `deposit`            | `process_deposit(state, deposit)`                      |
| `proposer_slashing`     | `ProposerSlashing`   | `proposer_slashing`  | `process_proposer_slashing(state, proposer_slashing)`  |
| `transfer`              | `Transfer`           | `transfer`           | `process_transfer(state, transfer)`                    |
| `voluntary_exit`        | `VoluntaryExit`      | `voluntary_exit`     | `process_voluntary_exit(state, voluntary_exit)`        |

Note that `block_header` is not strictly an operation (and is a full `Block`), but processed in the same manner, and hence included here.
@@ -1,7 +1,7 @@

# SSZ, static tests

This set of test-suites provides static testing for SSZ:
to instantiate just the known Eth 2.0 SSZ types from binary data.

This set of test-suites provides static testing for SSZ:
to instantiate just the known Eth2 SSZ types from binary data.

This series of tests is based on the spec-maintained `eth2spec/utils/ssz/ssz_impl.py`, i.e. fully consistent with the SSZ spec.
@@ -37,16 +37,26 @@

- [Attestations](#attestations)
- [Deposits](#deposits)
- [Voluntary exits](#voluntary-exits)
- [Attestations](#attestations-1)
- [Attesting](#attesting)
- [Attestation data](#attestation-data)
- [General](#general)
- [LMD GHOST vote](#lmd-ghost-vote)
- [FFG vote](#ffg-vote)
- [Crosslink vote](#crosslink-vote)
- [Construct attestation](#construct-attestation)
- [Data](#data)
- [Aggregation bits](#aggregation-bits)
- [Custody bits](#custody-bits)
- [Aggregate signature](#aggregate-signature)
- [Broadcast attestation](#broadcast-attestation)
- [Attestation aggregation](#attestation-aggregation)
- [Aggregation selection](#aggregation-selection)
- [Construct aggregate](#construct-aggregate)
- [Data](#data-1)
- [Aggregation bits](#aggregation-bits-1)
- [Custody bits](#custody-bits-1)
- [Aggregate signature](#aggregate-signature-1)
- [Broadcast aggregate](#broadcast-aggregate)
- [`AggregateAndProof`](#aggregateandproof)
- [How to avoid slashing](#how-to-avoid-slashing)
- [Proposer slashing](#proposer-slashing)
- [Attester slashing](#attester-slashing)
@@ -70,6 +80,7 @@ All terminology, constants, functions, and protocol mechanics defined in the [Ph

| Name | Value | Unit | Duration |
| - | - | :-: | :-: |
| `ETH1_FOLLOW_DISTANCE` | `2**10` (= 1,024) | blocks | ~4 hours |
| `TARGET_AGGREGATORS_PER_COMMITTEE` | `2**4` (= 16) | validators | |

## Becoming a validator
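The ~4 hours figure in the table above follows from the constant and an average Eth1 block time; the ~14-second block time used here is an assumption of this sketch, not a constant from the table:

```python
ETH1_FOLLOW_DISTANCE = 2**10       # blocks, from the table above
AVERAGE_ETH1_BLOCK_TIME = 14       # seconds; an assumed average, not a spec constant

follow_seconds = ETH1_FOLLOW_DISTANCE * AVERAGE_ETH1_BLOCK_TIME
print(follow_seconds / 3600)  # roughly 4 hours
```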
@@ -97,16 +108,17 @@ In Phase 0, all incoming validator deposits originate from the Ethereum 1.0 proo

To submit a deposit:

- Pack the validator's [initialization parameters](#initialization) into `deposit_data`, a [`DepositData`](../core/0_beacon-chain.md#depositdata) SSZ object.
- Let `amount` be the amount in Gwei to be deposited by the validator where `MIN_DEPOSIT_AMOUNT <= amount <= MAX_EFFECTIVE_BALANCE`.
- Let `amount` be the amount in Gwei to be deposited by the validator where `amount >= MIN_DEPOSIT_AMOUNT`.
- Set `deposit_data.amount = amount`.
- Let `signature` be the result of `bls_sign` of the `signing_root(deposit_data)` with `domain=compute_domain(DOMAIN_DEPOSIT)`. (Deposits are valid regardless of fork version, `compute_domain` will default to zeroes there).
- Send a transaction on the Ethereum 1.0 chain to `DEPOSIT_CONTRACT_ADDRESS` executing `def deposit(pubkey: bytes[48], withdrawal_credentials: bytes[32], signature: bytes[96])` along with a deposit of `amount` Gwei.
- Let `deposit_data_root` be `hash_tree_root(deposit_data)`.
- Send a transaction on the Ethereum 1.0 chain to `DEPOSIT_CONTRACT_ADDRESS` executing `def deposit(pubkey: bytes[48], withdrawal_credentials: bytes[32], signature: bytes[96], deposit_data_root: bytes32)` along with a deposit of `amount` Gwei.

*Note*: Deposits made for the same `pubkey` are treated as for the same validator. A singular `Validator` will be added to `state.validators` with each additional deposit amount added to the validator's balance. A validator can only be activated when total deposits for the validator pubkey meet or exceed `MAX_EFFECTIVE_BALANCE`.
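A minimal sketch of the amount rule above, using the Phase 0 constant values in Gwei; `is_valid_deposit_amount` is a hypothetical helper for illustration, not a spec function:

```python
GWEI_PER_ETH = 10**9
MIN_DEPOSIT_AMOUNT = 2**0 * GWEI_PER_ETH      # 1 ETH, Phase 0 value
MAX_EFFECTIVE_BALANCE = 2**5 * GWEI_PER_ETH   # 32 ETH, Phase 0 value

def is_valid_deposit_amount(amount: int) -> bool:
    # Per the updated rule, any amount >= MIN_DEPOSIT_AMOUNT may be deposited;
    # the validator only activates once total deposits reach MAX_EFFECTIVE_BALANCE.
    return amount >= MIN_DEPOSIT_AMOUNT

assert is_valid_deposit_amount(32 * GWEI_PER_ETH)
assert not is_valid_deposit_amount(GWEI_PER_ETH // 2)
```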
### Process deposit

Deposits cannot be processed into the beacon chain until the Eth 1.0 block in which they were deposited or any of its descendants is added to the beacon chain `state.eth1_data`. This takes _a minimum_ of `ETH1_FOLLOW_DISTANCE` Eth 1.0 blocks (~4 hours) plus `ETH1_DATA_VOTING_PERIOD` epochs (~1.7 hours). Once the requisite Eth 1.0 data is added, the deposit will normally be added to a beacon chain block and processed into the `state.validators` within an epoch or two. The validator is then in a queue to be activated.

Deposits cannot be processed into the beacon chain until the Eth1 block in which they were deposited or any of its descendants is added to the beacon chain `state.eth1_data`. This takes _a minimum_ of `ETH1_FOLLOW_DISTANCE` Eth1 blocks (~4 hours) plus `ETH1_DATA_VOTING_PERIOD` epochs (~1.7 hours). Once the requisite Eth1 data is added, the deposit will normally be added to a beacon chain block and processed into the `state.validators` within an epoch or two. The validator is then in a queue to be activated.

### Validator index
@@ -114,7 +126,7 @@ Once a validator has been processed and added to the beacon state's `validators`

### Activation

In normal operation, the validator is quickly activated, at which point the validator is added to the shuffling and begins validation after an additional `ACTIVATION_EXIT_DELAY` epochs (25.6 minutes).

In normal operation, the validator is quickly activated, at which point the validator is added to the shuffling and begins validation after an additional `MAX_SEED_LOOKAHEAD` epochs (25.6 minutes).

The function [`is_active_validator`](../core/0_beacon-chain.md#is_active_validator) can be used to check if a validator is active during a given epoch. Usage is as follows:
@@ -135,32 +147,29 @@ A validator can get committee assignments for a given epoch using the following

```python
def get_committee_assignment(state: BeaconState,
                             epoch: Epoch,
                             validator_index: ValidatorIndex
                             ) -> Optional[Tuple[Sequence[ValidatorIndex], CommitteeIndex, Slot]]:
    """
    Return the committee assignment in the ``epoch`` for ``validator_index``.
    ``assignment`` returned is a tuple of the following form:
        * ``assignment[0]`` is the list of validators in the committee
        * ``assignment[1]`` is the index to which the committee is assigned
        * ``assignment[2]`` is the slot at which the committee is assigned
    Return None if no assignment.
    """
    next_epoch = get_current_epoch(state) + 1
    assert epoch <= next_epoch

    start_slot = compute_start_slot_at_epoch(epoch)
    for slot in range(start_slot, start_slot + SLOTS_PER_EPOCH):
        for index in range(get_committee_count_at_slot(state, Slot(slot))):
            committee = get_beacon_committee(state, Slot(slot), CommitteeIndex(index))
            if validator_index in committee:
                return committee, CommitteeIndex(index), Slot(slot)
    return None
```
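The search pattern in the committee-assignment helper can be illustrated with a toy model that stubs out the beacon state; every name here is illustrative, not a spec function:

```python
SLOTS_PER_EPOCH = 4  # toy value for illustration only

# Stub schedule: (slot, committee_index) -> list of validator indices,
# standing in for get_beacon_committee on a real state.
schedule = {
    (0, 0): [1, 5], (0, 1): [2, 6],
    (1, 0): [3, 7], (1, 1): [4, 8],
}

def find_assignment(validator_index: int):
    """Scan every (slot, committee index) committee in order and return the
    first match, mirroring the nested loops of the assignment helper."""
    for (slot, index), committee in sorted(schedule.items()):
        if validator_index in committee:
            return committee, index, slot
    return None

assert find_assignment(4) == ([4, 8], 1, 1)
assert find_assignment(9) is None
```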
A validator can use the following function to see if they are supposed to propose during their assigned committee slot. This function can only be run with a `state` of the slot in question. Proposer selection is only stable within the context of the current epoch.

A validator can use the following function to see if they are supposed to propose during a slot. This function can only be run with a `state` of the slot in question. Proposer selection is only stable within the context of the current epoch.

```python
def is_proposer(state: BeaconState,
```

@@ -170,11 +179,13 @@ def is_proposer(state: BeaconState,

*Note*: To see if a validator is assigned to propose during the slot, the beacon state must be in the epoch in question. At the epoch boundaries, the validator must run an epoch transition into the epoch to successfully check the proposal assignment of the first slot.

*Note*: `BeaconBlock` proposal is distinct from beacon committee assignment, and in a given epoch each responsibility might occur at a different slot.

### Lookahead

The beacon chain shufflings are designed to provide a minimum of 1 epoch lookahead on the validator's upcoming committee assignments for attesting dictated by the shuffling and slot. Note that this lookahead does not apply to proposing, which must be checked during the epoch in question.

`get_committee_assignment` should be called at the start of each epoch to get the assignment for the next epoch (`current_epoch + 1`). A validator should plan for future assignments by noting at which future slot they will have to attest and also which shard they should begin syncing (in Phase 1+).

`get_committee_assignment` should be called at the start of each epoch to get the assignment for the next epoch (`current_epoch + 1`). A validator should plan for future assignments by noting at which future slot they will have to attest.

Specifically, a validator should call `get_committee_assignment(state, next_epoch, validator_index)` when checking for next epoch assignments.
@@ -212,15 +223,15 @@ Set `block.randao_reveal = epoch_signature` where `epoch_signature` is obtained

```python
def get_epoch_signature(state: BeaconState, block: BeaconBlock, privkey: int) -> BLSSignature:
    domain = get_domain(state, DOMAIN_RANDAO, compute_epoch_at_slot(block.slot))
    return bls_sign(privkey, hash_tree_root(compute_epoch_at_slot(block.slot)), domain)
```
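`compute_epoch_at_slot` as used above is simply integer division by `SLOTS_PER_EPOCH`; a minimal sketch, assuming the mainnet value of 32:

```python
SLOTS_PER_EPOCH = 32  # mainnet preset value

def compute_epoch_at_slot(slot: int) -> int:
    """Return the epoch containing ``slot`` (spec: slot // SLOTS_PER_EPOCH)."""
    return slot // SLOTS_PER_EPOCH

assert compute_epoch_at_slot(0) == 0
assert compute_epoch_at_slot(65) == 2
```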
##### Eth1 Data

The `block.eth1_data` field is for block proposers to vote on recent Eth 1.0 data. This recent data contains an Eth 1.0 block hash as well as the associated deposit root (as calculated by the `get_hash_tree_root()` method of the deposit contract) and deposit count after execution of the corresponding Eth 1.0 block. If over half of the block proposers in the current Eth 1.0 voting period vote for the same `eth1_data` then `state.eth1_data` updates at the end of the voting period. Each deposit in `block.body.deposits` must verify against `state.eth1_data.eth1_deposit_root`.

The `block.eth1_data` field is for block proposers to vote on recent Eth1 data. This recent data contains an Eth1 block hash as well as the associated deposit root (as calculated by the `get_deposit_root()` method of the deposit contract) and deposit count after execution of the corresponding Eth1 block. If over half of the block proposers in the current Eth1 voting period vote for the same `eth1_data` then `state.eth1_data` updates at the end of the voting period. Each deposit in `block.body.deposits` must verify against `state.eth1_data.eth1_deposit_root`.

Let `get_eth1_data(distance: uint64) -> Eth1Data` be the (subjective) function that returns the Eth 1.0 data at distance `distance` relative to the Eth 1.0 head at the start of the current Eth 1.0 voting period. Let `previous_eth1_distance` be the distance relative to the Eth 1.0 block corresponding to `state.eth1_data.block_hash` at the start of the current Eth 1.0 voting period. An honest block proposer sets `block.eth1_data = get_eth1_vote(state, previous_eth1_distance)` where:

Let `get_eth1_data(distance: uint64) -> Eth1Data` be the (subjective) function that returns the Eth1 data at distance `distance` relative to the Eth1 head at the start of the current Eth1 voting period. Let `previous_eth1_distance` be the distance relative to the Eth1 block corresponding to `state.eth1_data.block_hash` at the start of the current Eth1 voting period. An honest block proposer sets `block.eth1_data = get_eth1_vote(state, previous_eth1_distance)` where:
```python
def get_eth1_vote(state: BeaconState, previous_eth1_distance: uint64) -> Eth1Data:
```

@@ -246,7 +257,7 @@ Set `header.signature = block_signature` where `block_signature` is obtained fro

```python
def get_block_signature(state: BeaconState, header: BeaconBlockHeader, privkey: int) -> BLSSignature:
    domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_at_slot(header.slot))
    return bls_sign(privkey, signing_root(header), domain)
```
|
@ -266,7 +277,7 @@ Up to `MAX_ATTESTATIONS`, aggregate attestations can be included in the `block`.
|
|||
|
||||
##### Deposits
|
||||
|
||||
If there are any unprocessed deposits for the existing `state.eth1_data` (i.e. `state.eth1_data.deposit_count > state.eth1_deposit_index`), then pending deposits _must_ be added to the block. The expected number of deposits is exactly `min(MAX_DEPOSITS, eth1_data.deposit_count - state.eth1_deposit_index)`. These [`deposits`](../core/0_beacon-chain.md#deposit) are constructed from the `Deposit` logs from the [Eth 1.0 deposit contract](../core/0_deposit-contract) and must be processed in sequential order. The deposits included in the `block` must satisfy the verification conditions found in [deposits processing](../core/0_beacon-chain.md#deposits).
|
||||
If there are any unprocessed deposits for the existing `state.eth1_data` (i.e. `state.eth1_data.deposit_count > state.eth1_deposit_index`), then pending deposits _must_ be added to the block. The expected number of deposits is exactly `min(MAX_DEPOSITS, eth1_data.deposit_count - state.eth1_deposit_index)`. These [`deposits`](../core/0_beacon-chain.md#deposit) are constructed from the `Deposit` logs from the [Eth1 deposit contract](../core/0_deposit-contract.md) and must be processed in sequential order. The deposits included in the `block` must satisfy the verification conditions found in [deposits processing](../core/0_beacon-chain.md#deposits).
|
||||
|
||||
The `proof` for each deposit must be constructed against the deposit root contained in `state.eth1_data` rather than the deposit root at the time the deposit was initially logged from the 1.0 chain. This entails storing a full deposit merkle tree locally and computing updated proofs against the `eth1_data.deposit_root` as needed. See [`minimal_merkle.py`](https://github.com/ethereum/research/blob/master/spec_pythonizer/utils/merkle_minimal.py) for a sample implementation.
@@ -274,11 +285,11 @@ The `proof` for each deposit must be constructed against the deposit root contai

Up to `MAX_VOLUNTARY_EXITS`, [`VoluntaryExit`](../core/0_beacon-chain.md#voluntaryexit) objects can be included in the `block`. The exits must satisfy the verification conditions found in [exits processing](../core/0_beacon-chain.md#voluntary-exits).

### Attesting
A validator is expected to create, sign, and broadcast an attestation during each epoch. The `committee`, assigned `index`, and assigned `slot` for which the validator performs this role during an epoch are defined by `get_committee_assignment(state, epoch, validator_index)`.

A validator should create and broadcast the `attestation` to the associated attestation subnet one-third of the way through the `slot` during which the validator is assigned―that is, `SECONDS_PER_SLOT / 3` seconds after the start of `slot`.
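For concreteness, the broadcast timings used in this document can be computed as below, assuming the mainnet preset `SECONDS_PER_SLOT = 12`; the helper names are illustrative, not spec functions.

```python
# Illustrative slot-timing helpers (not spec code).
SECONDS_PER_SLOT = 12  # mainnet preset, assumed here

def attestation_deadline(slot_start: float) -> float:
    # Attestations go out one-third of the way through the slot.
    return slot_start + SECONDS_PER_SLOT / 3

def aggregate_deadline(slot_start: float) -> float:
    # Aggregates go out two-thirds of the way through the slot.
    return slot_start + SECONDS_PER_SLOT * 2 / 3
```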
#### Attestation data

@@ -287,6 +298,11 @@ First, the validator should construct `attestation_data`, an [`AttestationData`]

- Let `head_block` be the result of running the fork choice during the assigned slot.
- Let `head_state` be the state of `head_block` processed through any empty slots up to the assigned slot using `process_slots(state, slot)`.

##### General

* Set `attestation_data.slot = slot` where `slot` is the assigned slot.
* Set `attestation_data.index = index` where `index` is the index associated with the validator's committee.

##### LMD GHOST vote

Set `attestation_data.beacon_block_root = signing_root(head_block)`.
@@ -298,20 +314,9 @@ Set `attestation_data.beacon_block_root = signing_root(head_block)`.

*Note*: `epoch_boundary_block_root` can be looked up in the state using:

- Let `start_slot = compute_start_slot_at_epoch(get_current_epoch(head_state))`.
- Let `epoch_boundary_block_root = signing_root(head_block) if start_slot == head_state.slot else get_block_root(state, start_slot)`.
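The lookup above can be sketched in plain Python. `SLOTS_PER_EPOCH` mirrors the spec constant; the `block_root_at` callable stands in for the spec's `get_block_root` and is an assumption of this sketch.

```python
# Illustrative sketch of the epoch-boundary-root lookup (not spec code).
SLOTS_PER_EPOCH = 32  # mainnet preset, assumed here

def compute_start_slot_at_epoch(epoch: int) -> int:
    return epoch * SLOTS_PER_EPOCH

def epoch_boundary_block_root(head_state_slot: int, head_block_root: bytes, block_root_at) -> bytes:
    epoch = head_state_slot // SLOTS_PER_EPOCH
    start_slot = compute_start_slot_at_epoch(epoch)
    # If the head block sits exactly on the epoch boundary,
    # it is its own boundary root; otherwise look it up by slot.
    return head_block_root if start_slot == head_state_slot else block_root_at(start_slot)
```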
#### Construct attestation

Next, the validator creates `attestation`, an [`Attestation`](../core/0_beacon-chain.md#attestation) object.

@@ -322,7 +327,7 @@ Set `attestation.data = attestation_data` where `attestation_data` is the `Attes

##### Aggregation bits

- Let `attestation.aggregation_bits` be a `Bitlist[MAX_VALIDATORS_PER_COMMITTEE]` of length `len(committee)`, where the bit at the index of the validator in the `committee` is set to `0b1`.

*Note*: Calling `get_attesting_indices(state, attestation.data, attestation.aggregation_bits)` should return a list of length equal to 1, containing `validator_index`.
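As a minimal illustration of the bitfield rule above, the sketch below uses a plain Python list in place of the SSZ `Bitlist` type; the helper names are hypothetical.

```python
# Illustrative sketch: exactly one bit -- the validator's position in the
# committee -- is set in a single-attester aggregation bitfield (not spec code).
def single_attester_bits(committee: list, validator_index: int) -> list:
    bits = [0] * len(committee)
    bits[committee.index(validator_index)] = 1
    return bits

def attesting_indices(committee: list, bits: list) -> list:
    # Mirrors the intent of get_attesting_indices for this simplified type.
    return [v for v, b in zip(committee, bits) if b]
```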
@@ -343,10 +348,85 @@ def get_signed_attestation_data(state: BeaconState, attestation: IndexedAttestat
        custody_bit=0b0,
    )

    domain = get_domain(state, DOMAIN_BEACON_ATTESTER, attestation.data.target.epoch)
    return bls_sign(privkey, hash_tree_root(attestation_data_and_custody_bit), domain)
```
#### Broadcast attestation

Finally, the validator broadcasts `attestation` to the associated attestation subnet -- the `index{attestation.data.index % ATTESTATION_SUBNET_COUNT}_beacon_attestation` pubsub topic.
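The topic selection described above amounts to a modulo on the committee index. A small sketch, assuming the spec's `ATTESTATION_SUBNET_COUNT` constant (64 in current presets); the function name is illustrative.

```python
# Illustrative sketch of attestation subnet topic selection (not spec code).
ATTESTATION_SUBNET_COUNT = 64  # assumed preset value

def attestation_subnet_topic(committee_index: int) -> str:
    # Committee indices wrap around onto the fixed set of subnets.
    subnet_id = committee_index % ATTESTATION_SUBNET_COUNT
    return f"index{subnet_id}_beacon_attestation"
```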
### Attestation aggregation

Some validators are selected to locally aggregate attestations with a similar `attestation_data` to their constructed `attestation` for the assigned `slot`.

#### Aggregation selection

A validator is selected to aggregate based upon the return value of `is_aggregator()`.
```python
def slot_signature(state: BeaconState, slot: Slot, privkey: int) -> BLSSignature:
    domain = get_domain(state, DOMAIN_BEACON_ATTESTER, compute_epoch_at_slot(slot))
    return bls_sign(privkey, hash_tree_root(slot), domain)
```

```python
def is_aggregator(state: BeaconState, slot: Slot, index: CommitteeIndex, slot_signature: BLSSignature) -> bool:
    committee = get_beacon_committee(state, slot, index)
    modulo = max(1, len(committee) // TARGET_AGGREGATORS_PER_COMMITTEE)
    return bytes_to_int(hash(slot_signature)[0:8]) % modulo == 0
```
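A worked version of the selection check above, using Python's sha256 as a stand-in for the spec's `hash` function and a dummy byte string in place of a real BLS slot signature; `is_aggregator_sketch` is not spec code.

```python
# Illustrative aggregator-selection check (not spec code):
# sha256 stands in for the spec's hash, and the signature is any byte string.
import hashlib

TARGET_AGGREGATORS_PER_COMMITTEE = 16  # mirrors the spec constant

def is_aggregator_sketch(committee_size: int, slot_signature: bytes) -> bool:
    # Roughly TARGET_AGGREGATORS_PER_COMMITTEE members are selected per committee.
    modulo = max(1, committee_size // TARGET_AGGREGATORS_PER_COMMITTEE)
    digest = hashlib.sha256(slot_signature).digest()
    # bytes_to_int in the spec is little-endian over the first 8 bytes.
    return int.from_bytes(digest[0:8], 'little') % modulo == 0
```

For committees of at most `TARGET_AGGREGATORS_PER_COMMITTEE` members the modulo is 1, so every member is an aggregator.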
#### Construct aggregate

If the validator is selected to aggregate (`is_aggregator()`), they construct an aggregate attestation via the following.

Collect `attestations` seen via gossip during the `slot` that have an equivalent `attestation_data` to that constructed by the validator, and create an `aggregate_attestation: Attestation` with the following fields.

##### Data

Set `aggregate_attestation.data = attestation_data` where `attestation_data` is the `AttestationData` object that is the same for each individual attestation being aggregated.

##### Aggregation bits

Let `aggregate_attestation.aggregation_bits` be a `Bitlist[MAX_VALIDATORS_PER_COMMITTEE]` of length `len(committee)`, where each bit set in any of the individual attestations being aggregated is set to `0b1`.
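The merge described above is a bitwise OR across the individual bitfields. A minimal sketch, with plain lists standing in for SSZ `Bitlist`s:

```python
# Illustrative sketch: the aggregate bitfield is the bitwise OR of the
# bitfields of the individual attestations being merged (not spec code).
def merge_aggregation_bits(bitlists: list) -> list:
    length = len(bitlists[0])
    # All attestations over the same data share one committee, hence one length.
    assert all(len(b) == length for b in bitlists)
    return [1 if any(bits[i] for bits in bitlists) else 0 for i in range(length)]
```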
##### Custody bits

- Let `aggregate_attestation.custody_bits` be a `Bitlist[MAX_VALIDATORS_PER_COMMITTEE]` of length `len(committee)`, filled with zeros.

*Note*: This is a stub for Phase 0.

##### Aggregate signature

Set `aggregate_attestation.signature = aggregate_signature` where `aggregate_signature` is obtained from:

```python
def get_aggregate_signature(attestations: Sequence[Attestation]) -> BLSSignature:
    signatures = [attestation.signature for attestation in attestations]
    return bls_aggregate_signatures(signatures)
```
#### Broadcast aggregate

If the validator is selected to aggregate (`is_aggregator`), then they broadcast their best aggregate to the global aggregate channel (`beacon_aggregate_and_proof`) two-thirds of the way through the `slot`―that is, `SECONDS_PER_SLOT * 2 / 3` seconds after the start of `slot`.

Aggregate attestations are broadcast as `AggregateAndProof` objects to prove to the gossip channel that the validator has been selected as an aggregator.

##### `AggregateAndProof`

```python
class AggregateAndProof(Container):
    index: ValidatorIndex
    selection_proof: BLSSignature
    aggregate: Attestation
```

Where:
* `index` is the validator's `validator_index`.
* `selection_proof` is the signature of the slot (`slot_signature()`).
* `aggregate` is the `aggregate_attestation` constructed in the previous section.
## How to avoid slashing

"Slashing" is the burning of some amount of validator funds and immediate ejection from the active validator set. In Phase 0, there are two ways in which funds can be slashed: [proposer slashing](#proposer-slashing) and [attester slashing](#attester-slashing). Although being slashed has serious repercussions, it is simple enough to avoid being slashed altogether by remaining _consistent_ with respect to the messages a validator has previously signed.

@@ -361,7 +441,7 @@ To avoid "proposer slashings", a validator must not sign two conflicting [`Beaco

Specifically, when signing a `BeaconBlock`, a validator should perform the following steps in the following order:

1. Save a record to hard disk that a beacon block has been signed for the `epoch=compute_epoch_at_slot(block.slot)`.
2. Generate and broadcast the block.

If the software crashes at some point within this routine, then when the validator comes back online, the hard disk has the record of the *potentially* signed/broadcast block and can effectively avoid slashing.
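The two-step discipline above can be sketched as a write-ahead record. The file format and helper name here are illustrative, not part of the spec:

```python
# Illustrative "record first, sign second" sketch (not spec code).
import json
import os

def record_then_sign_block(record_path: str, epoch: int, sign_fn):
    signed = set()
    if os.path.exists(record_path):
        signed = set(json.load(open(record_path)))
    if epoch in signed:
        # Refusing to sign twice for the same epoch is what avoids slashing.
        raise RuntimeError(f"already signed a block for epoch {epoch}")
    # Persist the record BEFORE producing the signature, so a crash between
    # the two steps errs on the safe side (skip, rather than double-sign).
    signed.add(epoch)
    with open(record_path, 'w') as f:
        json.dump(sorted(signed), f)
    return sign_fn(epoch)
```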
@@ -1,27 +0,0 @@
# Ethereum 2.0 Phase 0 -- Beacon Node API for Validator

**Notice**: This document is a work-in-progress for researchers and implementers. It accompanies [Ethereum 2.0 Phase 0 -- Honest Validator](0_beacon-chain-validator.md) and describes an API exposed by the beacon node that enables the validator client to participate in the Ethereum 2.0 protocol.

## Outline

This document outlines a minimal application programming interface (API) exposed by a beacon node for use by a validator client implementation that aims to facilitate [Phase 0](../../README.md#phase-0) of Ethereum 2.0.

The API is a REST interface, accessed via HTTP, designed for use as a local communications protocol between binaries. Currently, the only supported return data type is JSON.

## Background

The beacon node maintains the state of the beacon chain by communicating with other beacon nodes in the Ethereum 2.0 network. Conceptually, it does not maintain keypairs that participate in the beacon chain.

The validator client is a conceptually separate entity which utilizes private keys to perform validator-related tasks, called "duties", on the beacon chain. These duties include the production of beacon blocks and the signing of attestations.

Since it is recommended to separate these concerns in client implementations, we must clearly define the communication between them.

The goal of this specification is to promote interoperability between beacon nodes and validator clients derived from different projects, and to encourage innovation in validator client implementations independently of beacon node development. For example, the validator client from [Lighthouse](https://github.com/sigp/lighthouse) could communicate with a running instance of the beacon node from [Prysm](https://github.com/prysmaticlabs/prysm), or a staking pool might create a decentrally managed validator client which utilizes the same API.

This specification is derived from a proposal and discussion on Issues [#1011](https://github.com/ethereum/eth2.0-specs/issues/1011) and [#1012](https://github.com/ethereum/eth2.0-specs/issues/1012).

## Specification

The API specification has been written in [OpenAPI 3.0](https://swagger.io/docs/specification/about/) and is provided in the [beacon_node_oapi.yaml](beacon_node_oapi.yaml) file alongside this document.

For convenience, this specification has been uploaded to SwaggerHub [here](https://app.swaggerhub.com/apis/spble/beacon_node_api_for_validator).
@@ -1,641 +0,0 @@
openapi: "3.0.2"
info:
  title: "Minimal Beacon Node API for Validator"
  description: "A minimal API specification for the beacon node, which enables a validator to connect and perform its obligations on the Ethereum 2.0 phase 0 beacon chain."
  version: "0.2.0"
  license:
    name: "Apache 2.0"
    url: "https://www.apache.org/licenses/LICENSE-2.0.html"
tags:
  - name: MinimalSet
    description: The minimal set of endpoints to enable a working validator implementation.
  - name: OptionalSet
    description: Extra endpoints which are nice-to-haves.
paths:
  /node/version:
    get:
      tags:
        - MinimalSet
      summary: "Get version string of the running beacon node."
      description: "Requests that the beacon node identify information about its implementation in a format similar to a [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) field."
      responses:
        200:
          description: Request successful
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/version'
        500:
          $ref: '#/components/responses/InternalError'
  /node/genesis_time:
    get:
      tags:
        - MinimalSet
      summary: "Get the genesis_time parameter from beacon node configuration."
      description: "Requests the genesis_time parameter from the beacon node, which should be consistent across all beacon nodes that follow the same beacon chain."
      responses:
        200:
          description: Request successful
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/genesis_time'
        500:
          $ref: '#/components/responses/InternalError'

  /node/syncing:
    get:
      tags:
        - MinimalSet
      summary: "Poll to see if the beacon node is syncing."
      description: "Requests the beacon node to describe if it's currently syncing or not, and if it is, what block it is up to. This is modelled after the Eth1.0 JSON-RPC eth_syncing call."
      responses:
        200:
          description: Request successful
          content:
            application/json:
              schema:
                type: object
                properties:
                  is_syncing:
                    type: boolean
                    description: "A boolean of whether the node is currently syncing or not."
                  sync_status:
                    $ref: '#/components/schemas/SyncingStatus'
        500:
          $ref: '#/components/responses/InternalError'
  /node/fork:
    get:
      tags:
        - OptionalSet
      summary: "Get fork information from running beacon node."
      description: "Requests the beacon node to provide which fork version it is currently on."
      responses:
        200:
          description: Request successful
          content:
            application/json:
              schema:
                type: object
                properties:
                  fork:
                    $ref: '#/components/schemas/Fork'
                  chain_id:
                    type: integer
                    format: uint64
                    description: "Sometimes called the network id, this number discerns the active chain for the beacon node. Analogous to Eth1.0 JSON-RPC net_version."
        500:
          $ref: '#/components/responses/InternalError'

  /validator/duties:
    get:
      tags:
        - MinimalSet
      summary: "Get validator duties for the requested validators."
      description: "Requests the beacon node to provide a set of _duties_, which are actions that should be performed by validators, for a particular epoch. Duties should only need to be checked once per epoch, however a chain reorganization (of > MIN_SEED_LOOKAHEAD epochs) could occur, resulting in a change of duties. For full safety, this API call should be polled at every slot to ensure that chain reorganizations are recognized, and to ensure that the beacon node is properly synchronized."
      parameters:
        - name: validator_pubkeys
          in: query
          required: true
          description: "An array of hex-encoded BLS public keys"
          schema:
            type: array
            items:
              $ref: '#/components/schemas/pubkey'
            minItems: 1
        - name: epoch
          in: query
          required: false
          schema:
            type: integer
      responses:
        200:
          description: Success response
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/ValidatorDuty'
        400:
          $ref: '#/components/responses/InvalidRequest'
        406:
          description: "Duties cannot be provided for the requested epoch."
        500:
          $ref: '#/components/responses/InternalError'
        503:
          $ref: '#/components/responses/CurrentlySyncing'

  /validator/block:
    get:
      tags:
        - MinimalSet
      summary: "Produce a new block, without signature."
      description: "Requests a beacon node to produce a valid block, which can then be signed by a validator."
      parameters:
        - name: slot
          in: query
          required: true
          description: "The slot for which the block should be proposed."
          schema:
            type: integer
            format: uint64
        - name: randao_reveal
          in: query
          required: true
          description: "The validator's randao reveal value."
          schema:
            type: string
            format: byte
      responses:
        200:
          description: Success response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/BeaconBlock'
        400:
          $ref: '#/components/responses/InvalidRequest'
        500:
          $ref: '#/components/responses/InternalError'
        503:
          $ref: '#/components/responses/CurrentlySyncing'
    post:
      tags:
        - MinimalSet
      summary: "Publish a signed block."
      description: "Instructs the beacon node to broadcast a newly signed beacon block to the beacon network, to be included in the beacon chain. The beacon node is not required to validate the signed `BeaconBlock`, and a successful response (20X) only indicates that the broadcast has been successful. The beacon node is expected to integrate the new block into its state, and therefore validate the block internally, however blocks which fail the validation are still broadcast but a different status code is returned (202)."
      parameters:
        - name: beacon_block
          in: query
          required: true
          description: "The `BeaconBlock` object, as sent from the beacon node originally, but now with the signature field completed."
          schema:
            $ref: '#/components/schemas/BeaconBlock'
      responses:
        200:
          description: "The block was validated successfully and has been broadcast. It has also been integrated into the beacon node's database."
        202:
          description: "The block failed validation, but was successfully broadcast anyway. It was not integrated into the beacon node's database."
        400:
          $ref: '#/components/responses/InvalidRequest'
        500:
          $ref: '#/components/responses/InternalError'
        503:
          $ref: '#/components/responses/CurrentlySyncing'

  /validator/attestation:
    get:
      tags:
        - MinimalSet
      summary: "Produce an attestation, without signature."
      description: "Requests that the beacon node produce an IndexedAttestation, with a blank signature field, which the validator will then sign."
      parameters:
        - name: validator_pubkey
          in: query
          required: true
          description: "Uniquely identifying which validator this attestation is to be produced for."
          schema:
            $ref: '#/components/schemas/pubkey'
        - name: poc_bit
          in: query
          required: true
          description: "The proof-of-custody bit that is to be reported by the requesting validator. This bit will be inserted into the appropriate location in the returned `IndexedAttestation`."
          schema:
            type: integer
            format: uint32
            minimum: 0
            maximum: 1
        - name: slot
          in: query
          required: true
          description: "The slot for which the attestation should be proposed."
          schema:
            type: integer
        - name: shard
          in: query
          required: true
          description: "The shard number for which the attestation is to be proposed."
          schema:
            type: integer
      responses:
        200:
          description: Success response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/IndexedAttestation'
        400:
          $ref: '#/components/responses/InvalidRequest'
        500:
          $ref: '#/components/responses/InternalError'
        503:
          $ref: '#/components/responses/CurrentlySyncing'
    post:
      tags:
        - MinimalSet
      summary: "Publish a signed attestation."
      description: "Instructs the beacon node to broadcast a newly signed IndexedAttestation object to the intended shard subnet. The beacon node is not required to validate the signed IndexedAttestation, and a successful response (20X) only indicates that the broadcast has been successful. The beacon node is expected to integrate the new attestation into its state, and therefore validate the attestation internally, however attestations which fail the validation are still broadcast but a different status code is returned (202)."
      parameters:
        - name: attestation
          in: query
          required: true
          description: "An `IndexedAttestation` structure, as originally provided by the beacon node, but now with the signature field completed."
          schema:
            $ref: '#/components/schemas/IndexedAttestation'
      responses:
        200:
          description: "The attestation was validated successfully and has been broadcast. It has also been integrated into the beacon node's database."
        202:
          description: "The attestation failed validation, but was successfully broadcast anyway. It was not integrated into the beacon node's database."
        400:
          $ref: '#/components/responses/InvalidRequest'
        500:
          $ref: '#/components/responses/InternalError'
        503:
          $ref: '#/components/responses/CurrentlySyncing'

components:
  schemas:
    pubkey:
      type: string
      format: byte
      pattern: "^0x[a-fA-F0-9]{96}$"
      description: "The validator's BLS public key, uniquely identifying them. _48-bytes, hex encoded with 0x prefix, case insensitive._"
      example: "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc"
    version:
      type: string
      description: "A string which uniquely identifies the client implementation and its version; similar to [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3)."
      example: "Lighthouse / v0.1.5 (Linux x86_64)"
    genesis_time:
      type: integer
      format: uint64
      description: "The genesis_time configured for the beacon node, which is the unix time at which the Eth2.0 chain began."
      example: 1557716289
    ValidatorDuty:
      type: object
      properties:
        validator_pubkey:
          $ref: '#/components/schemas/pubkey'
        attestation_slot:
          type: integer
          format: uint64
          description: "The slot at which the validator must attest."
        attestation_shard:
          type: integer
          format: uint64
          description: "The shard in which the validator must attest."
        block_proposal_slot:
          type: integer
          format: uint64
          nullable: true
          description: "The slot in which a validator must propose a block, or `null` if block production is not required."
    SyncingStatus:
      type: object
      nullable: true
      properties:
        starting_slot:
          type: integer
          format: uint64
          description: "The slot at which syncing started (will only be reset after the sync reached its head)"
        current_slot:
          type: integer
          format: uint64
          description: "The most recent slot sync'd by the beacon node."
        highest_slot:
          type: integer
          format: uint64
          description: "Globally, the estimated most recent slot number, or current target slot number."

    BeaconBlock:
      description: "The [`BeaconBlock`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beaconblock) object from the Eth2.0 spec."
      allOf:
        - $ref: '#/components/schemas/BeaconBlockCommon'
        - type: object
          properties:
            body:
              $ref: '#/components/schemas/BeaconBlockBody'
    BeaconBlockHeader:
      description: "The [`BeaconBlockHeader`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beaconblockheader) object from the Eth2.0 spec."
      allOf:
        - $ref: '#/components/schemas/BeaconBlockCommon'
        - type: object
          properties:
            body_root:
              type: string
              format: bytes
              pattern: "^0x[a-fA-F0-9]{64}$"
              description: "The tree hash merkle root of the `BeaconBlockBody` for the `BeaconBlock`"
    BeaconBlockCommon:
      # An abstract object to collect the common fields between the BeaconBlockHeader and the BeaconBlock objects
      type: object
      properties:
        slot:
          type: integer
          format: uint64
          description: "The slot to which this block corresponds."
        parent_root:
          type: string
          format: bytes
          pattern: "^0x[a-fA-F0-9]{64}$"
          description: "The signing merkle root of the parent `BeaconBlock`."
        state_root:
          type: string
          format: bytes
          pattern: "^0x[a-fA-F0-9]{64}$"
          description: "The tree hash merkle root of the `BeaconState` for the `BeaconBlock`."
        signature:
          type: string
          format: bytes
          pattern: "^0x[a-fA-F0-9]{192}$"
          example: "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
          description: "The BLS signature of the `BeaconBlock` made by the validator of the block."
    BeaconBlockBody:
      type: object
      description: "The [`BeaconBlockBody`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beaconblockbody) object from the Eth2.0 spec."
      properties:
        randao_reveal:
          type: string
          format: byte
          pattern: "^0x[a-fA-F0-9]{192}$"
          description: "The RANDAO reveal value provided by the validator."
        eth1_data:
          title: Eth1Data
          type: object
          description: "The [`Eth1Data`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#eth1data) object from the Eth2.0 spec."
          properties:
            deposit_root:
              type: string
              format: byte
              pattern: "^0x[a-fA-F0-9]{64}$"
              description: "Root of the deposit tree."
            deposit_count:
              type: integer
              format: uint64
              description: "Total number of deposits."
            block_hash:
              type: string
              format: byte
              pattern: "^0x[a-fA-F0-9]{64}$"
              description: "Ethereum 1.x block hash."
        graffiti:
          type: string
          format: byte
          pattern: "^0x[a-fA-F0-9]{64}$"
        proposer_slashings:
          type: array
          items:
            title: ProposerSlashings
            type: object
            description: "The [`ProposerSlashing`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#proposerslashing) object from the Eth2.0 spec."
            properties:
              proposer_index:
                type: integer
                format: uint64
                description: "The index of the proposer to be slashed."
              header_1:
                $ref: '#/components/schemas/BeaconBlockHeader'
              header_2:
                $ref: '#/components/schemas/BeaconBlockHeader'
        attester_slashings:
          type: array
          items:
            title: AttesterSlashings
            type: object
            description: "The [`AttesterSlashing`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attesterslashing) object from the Eth2.0 spec."
            properties:
              attestation_1:
                $ref: '#/components/schemas/IndexedAttestation'
              attestation_2:
                $ref: '#/components/schemas/IndexedAttestation'
        attestations:
          type: array
          items:
            title: Attestation
            type: object
            description: "The [`Attestation`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attestation) object from the Eth2.0 spec."
            properties:
              aggregation_bits:
                type: string
                format: byte
                pattern: "^0x[a-fA-F0-9]+$"
                description: "Attester aggregation bits."
              custody_bits:
                type: string
                format: byte
                pattern: "^0x[a-fA-F0-9]+$"
                description: "Custody bits."
              signature:
                type: string
                format: byte
                pattern: "^0x[a-fA-F0-9]{192}$"
                description: "BLS aggregate signature."
              data:
                $ref: '#/components/schemas/AttestationData'
        deposits:
          type: array
          items:
            title: Deposit
            type: object
            description: "The [`Deposit`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#deposit) object from the Eth2.0 spec."
            properties:
              proof:
                type: array
                description: "Branch in the deposit tree."
                items:
                  type: string
                  format: byte
                  pattern: "^0x[a-fA-F0-9]{64}$"
                minItems: 32
                maxItems: 32
              index:
                type: integer
                format: uint64
                description: "Index in the deposit tree."
              data:
                title: DepositData
                type: object
                description: "The [`DepositData`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#depositdata) object from the Eth2.0 spec."
                properties:
                  pubkey:
                    $ref: '#/components/schemas/pubkey'
                  withdrawal_credentials:
                    type: string
                    format: byte
                    pattern: "^0x[a-fA-F0-9]{64}$"
                    description: "The withdrawal credentials."
                  amount:
                    type: integer
                    format: uint64
                    description: "Amount in Gwei."
                  signature:
                    type: string
                    format: byte
                    pattern: "^0x[a-fA-F0-9]{192}$"
                    description: "Container self-signature."
        voluntary_exits:
          type: array
          items:
            title: VoluntaryExit
            type: object
            description: "The [`VoluntaryExit`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#voluntaryexit) object from the Eth2.0 spec."
            properties:
              epoch:
                type: integer
                format: uint64
                description: "Minimum epoch for processing exit."
              validator_index:
                type: integer
                format: uint64
                description: "Index of the exiting validator."
              signature:
                type: string
                format: byte
                pattern: "^0x[a-fA-F0-9]{192}$"
                description: "Validator signature."
        transfers:
          type: array
          items:
            title: Transfer
            type: object
            description: "The [`Transfer`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#transfer) object from the Eth2.0 spec."
            properties:
              sender:
                type: integer
                format: uint64
                description: "Sender index."
              recipient:
                type: integer
                format: uint64
                description: "Recipient index."
              amount:
                type: integer
                format: uint64
                description: "Amount in Gwei."
              fee:
                type: integer
                format: uint64
                description: "Fee in Gwei for block producer."
              slot:
                type: integer
                format: uint64
                description: "Inclusion slot."
              pubkey:
                type: string
                format: byte
                pattern: "^0x[a-fA-F0-9]{96}$"
                description: "Sender withdrawal public key."
              signature:
                type: string
                format: byte
                pattern: "^0x[a-fA-F0-9]{192}$"
                description: "Sender signature."

    Fork:
      type: object
      description: "The [`Fork`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#Fork) object from the Eth2.0 spec."
      properties:
        previous_version:
          type: string
          format: byte
          pattern: "^0x[a-fA-F0-9]{8}$"
          description: "Previous fork version."
        current_version:
          type: string
          format: byte
          pattern: "^0x[a-fA-F0-9]{8}$"
          description: "Current fork version."
        epoch:
          type: integer
          format: uint64
          description: "Fork epoch number."
    IndexedAttestation:
      type: object
      description: "The [`IndexedAttestation`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#indexedattestation) object from the Eth2.0 spec."
      properties:
        custody_bit_0_indices:
          type: array
          description: "Validator indices for 0 bits."
          items:
            type: integer
            format: uint64
        custody_bit_1_indices:
          type: array
          description: "Validator indices for 1 bits."
          items:
            type: integer
            format: uint64
        signature:
          type: string
          format: bytes
          pattern: "^0x[a-fA-F0-9]{192}$"
          example: "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
description: "The BLS signature of the `IndexedAttestation`, created by the validator of the attestation."
|
||||
data:
|
||||
$ref: '#/components/schemas/AttestationData'
|
||||
AttestationData:
|
||||
type: object
|
||||
description: "The [`AttestationData`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attestationdata) object from the Eth2.0 spec."
|
||||
properties:
|
||||
beacon_block_root:
|
||||
type: string
|
||||
format: byte
|
||||
pattern: "^0x[a-fA-F0-9]{64}$"
|
||||
description: "LMD GHOST vote."
|
||||
source_epoch:
|
||||
type: integer
|
||||
format: uint64
|
||||
description: "Source epoch from FFG vote."
|
||||
source_root:
|
||||
type: string
|
||||
format: byte
|
||||
pattern: "^0x[a-fA-F0-9]{64}$"
|
||||
description: "Source root from FFG vote."
|
||||
target_epoch:
|
||||
type: integer
|
||||
format: uint64
|
||||
description: "Target epoch from FFG vote."
|
||||
target_root:
|
||||
type: string
|
||||
format: byte
|
||||
pattern: "^0x[a-fA-F0-9]{64}$"
|
||||
description: "Target root from FFG vote."
|
||||
crosslink:
|
||||
title: CrossLink
|
||||
type: object
|
||||
description: "The [`Crosslink`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#crosslink) object from the Eth2.0 spec, contains data from epochs [`start_epoch`, `end_epoch`)."
|
||||
properties:
|
||||
shard:
|
||||
type: integer
|
||||
format: uint64
|
||||
description: "The shard number."
|
||||
start_epoch:
|
||||
type: integer
|
||||
format: uint64
|
||||
description: "The first epoch which the crosslinking data references."
|
||||
end_epoch:
|
||||
type: integer
|
||||
format: uint64
|
||||
description: "The 'end' epoch referred to by the crosslinking data; no data in this Crosslink should refer to the `end_epoch` since it is not included in the crosslinking data interval."
|
||||
parent_root:
|
||||
type: string
|
||||
format: byte
|
||||
pattern: "^0x[a-fA-F0-9]{64}$"
|
||||
description: "Root of the previous crosslink."
|
||||
data_root:
|
||||
type: string
|
||||
format: byte
|
||||
pattern: "^0x[a-fA-F0-9]{64}$"
|
||||
description: "Root of the crosslinked shard data since the previous crosslink."
|
||||
|
||||
responses:
|
||||
Success:
|
||||
description: "Request successful."
|
||||
InvalidRequest:
|
||||
description: "Invalid request syntax."
|
||||
InternalError:
|
||||
description: "Beacon node internal error."
|
||||
CurrentlySyncing:
|
||||
description: "Beacon node is currently syncing, try again later."
|
||||
NotFound:
|
||||
description: "The requested API endpoint does not exist."
|
|
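The `pattern` fields above are plain regular expressions, so a client can validate encoded values before attempting to deserialize them. A minimal sketch (the helper names are illustrative; the lengths come from the schema: 4-byte fork versions, 32-byte roots, 48-byte pubkeys, 96-byte signatures):

```python
import re

# Hex-character counts follow the byte sizes in the schema above.
PATTERNS = {
    "fork_version": re.compile(r"^0x[a-fA-F0-9]{8}$"),     # 4 bytes
    "root": re.compile(r"^0x[a-fA-F0-9]{64}$"),            # 32 bytes
    "pubkey": re.compile(r"^0x[a-fA-F0-9]{96}$"),          # 48 bytes
    "signature": re.compile(r"^0x[a-fA-F0-9]{192}$"),      # 96 bytes
}

def matches(kind: str, value: str) -> bool:
    """True if `value` is a 0x-prefixed hex string of the expected length."""
    return PATTERNS[kind].fullmatch(value) is not None
```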
@ -1,6 +1,6 @@
-# Eth 2.0 Test Generators
+# Eth2 test generators

-This directory contains all the generators for tests, consumed by Eth 2.0 client implementations.
+This directory contains all the generators for tests, consumed by Eth2 client implementations.

 Any issues with the generators and/or generated tests should be filed in the repository that hosts the generator outputs,
 here: [ethereum/eth2.0-spec-tests](https://github.com/ethereum/eth2.0-spec-tests).

@ -9,6 +9,24 @@ On releases, test generators are run by the release manager. Test-generation of

 An automated nightly tests release system, with a config filter applied, is being considered as implementation needs mature.

+## Table of contents
+
+<!-- START doctoc generated TOC please keep comment here to allow auto update -->
+<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+- [How to run generators](#how-to-run-generators)
+  - [Cleaning](#cleaning)
+  - [Running all test generators](#running-all-test-generators)
+  - [Running a single generator](#running-a-single-generator)
+- [Developing a generator](#developing-a-generator)
+- [How to add a new test generator](#how-to-add-a-new-test-generator)
+- [How to remove a test generator](#how-to-remove-a-test-generator)
+
+<!-- END doctoc generated TOC please keep comment here to allow auto update -->
+
 ## How to run generators

 Prerequisites:
@ -9,7 +9,7 @@ The base unit is bytes48 of which only 381 bits are used

 ## Resources

-- [Eth2.0 spec](../../specs/bls_signature.md)
+- [Eth2 spec](../../specs/bls_signature.md)
 - [Finite Field Arithmetic](http://www.springeronline.com/sgw/cda/pageitems/document/cda_downloaddocument/0,11996,0-0-45-110359-0,00.pdf)
 - Chapter 2 of [Elliptic Curve Cryptography](http://cacr.uwaterloo.ca/ecc/). Darrel Hankerson, Alfred Menezes, and Scott Vanstone
 - [Zcash BLS parameters](https://github.com/zkcrypto/pairing/tree/master/src/bls12_381)
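The note that only 381 of the 384 bits are used reflects the compressed point encoding from the Zcash parameters linked above: the top three bits of a 48-byte BLS12-381 G1 point are flag bits, and the rest encode the x field element. A sketch of splitting them out (illustrative, not a full deserializer):

```python
def split_compressed_g1(b48: bytes):
    # Top three bits of the first byte are flags (compression set, point at
    # infinity, y-coordinate sign); the remaining 381 bits encode the x
    # field element.
    assert len(b48) == 48
    flags = b48[0] >> 5
    x = int.from_bytes(bytes([b48[0] & 0b00011111]) + b48[1:], "big")
    return flags, x
```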
@ -3,7 +3,6 @@ from typing import Iterable
 from eth2spec.phase0 import spec as spec_phase0
 from eth2spec.phase1 import spec as spec_phase1
 from eth2spec.test.phase_0.epoch_processing import (
-    test_process_crosslinks,
     test_process_final_updates,
     test_process_justification_and_finalization,
     test_process_registry_updates,

@ -36,8 +35,6 @@ def create_provider(handler_name: str, tests_src, config_name: str) -> gen_typin

 if __name__ == "__main__":
     gen_runner.run_generator("epoch_processing", [
-        create_provider('crosslinks', test_process_crosslinks, 'minimal'),
-        create_provider('crosslinks', test_process_crosslinks, 'mainnet'),
         create_provider('final_updates', test_process_final_updates, 'minimal'),
         create_provider('final_updates', test_process_final_updates, 'mainnet'),
         create_provider('justification_and_finalization', test_process_justification_and_finalization, 'minimal'),
@ -6,7 +6,6 @@ from eth2spec.test.phase_0.block_processing import (
     test_process_block_header,
     test_process_deposit,
     test_process_proposer_slashing,
-    test_process_transfer,
     test_process_voluntary_exit,
 )

@ -48,10 +47,6 @@ if __name__ == "__main__":
         create_provider('deposit', test_process_deposit, 'mainnet'),
         create_provider('proposer_slashing', test_process_proposer_slashing, 'minimal'),
         create_provider('proposer_slashing', test_process_proposer_slashing, 'mainnet'),
-        create_provider('transfer', test_process_transfer, 'minimal'),
-        # Disabled, due to the high amount of different transfer tests, this produces a shocking size of tests.
-        # Unnecessarily, as transfer are disabled currently, so not a priority.
-        # create_provider('transfer', test_process_transfer, 'mainnet'),
         create_provider('voluntary_exit', test_process_voluntary_exit, 'minimal'),
         create_provider('voluntary_exit', test_process_voluntary_exit, 'mainnet'),
     ])
@ -1,6 +1,6 @@
 # Shuffling Tests

-Tests for the swap-or-not shuffling in ETH 2.0.
+Tests for the swap-or-not shuffling in Eth2.

 Tips for initial shuffling write:
 - run with `round_count = 1` first, do the same with pyspec.
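For the `round_count = 1` tip above, a single swap-or-not round can be prototyped in a few lines. This is an illustrative sketch with a SHA-256-based pivot and decision bit, not the spec's exact `compute_shuffled_index` hash layout:

```python
import hashlib

def swap_or_not_round(n: int, seed: bytes, round_no: int) -> list:
    """One swap-or-not round over indices 0..n-1 (simplified byte layout)."""
    pivot = int.from_bytes(hashlib.sha256(seed + bytes([round_no])).digest()[:8], "little") % n

    def decision_bit(pos: int) -> int:
        # One bit per position, taken from a hash of (seed, round, pos // 256).
        src = hashlib.sha256(seed + bytes([round_no]) + (pos // 256).to_bytes(4, "little")).digest()
        return (src[(pos % 256) // 8] >> (pos % 8)) & 1

    out = []
    for i in range(n):
        flip = (pivot - i) % n
        pos = max(i, flip)  # same for i and its mirror, so the swap is symmetric
        out.append(flip if decision_bit(pos) else i)
    return out
```

Because `pos` is identical for an index and its mirror, each round is an involution, so the output is always a permutation — the property a `round_count = 1` comparison against pyspec exercises.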
@ -1,6 +1,6 @@
 # SSZ-static

 The purpose of this test-generator is to provide test-vectors for the most important applications of SSZ:
-the serialization and hashing of ETH 2.0 data types.
+the serialization and hashing of Eth2 data types.

 Test-format documentation can be found [here](../../specs/test_formats/ssz_static/README.md).
@ -1,4 +1,4 @@
-# ETH 2.0 config helpers
+# Eth2 config helpers

 `preset_loader`: A util to load constants-presets with.
 See [Constants-presets documentation](../../configs/constants_presets/README.md).
@ -1,4 +1,4 @@
-# ETH 2.0 test generator helpers
+# Eth2 test generator helpers

 ## `gen_base`

@ -1,2 +1,2 @@
-ruamel.yaml==0.15.96
+ruamel.yaml==0.16.5
 eth-utils==1.6.0
@ -4,7 +4,7 @@ setup(
     name='gen_helpers',
     packages=['gen_base', 'gen_from_tests'],
     install_requires=[
-        "ruamel.yaml==0.15.96",
+        "ruamel.yaml==0.16.5",
         "eth-utils==1.6.0"
     ]
 )
@ -1,6 +1,6 @@
-# Eth 2.0 Executable Python Spec (PySpec)
+# Eth2 Executable Python Spec (PySpec)

-The executable Python spec is built from the Eth 2.0 specification,
+The executable Python spec is built from the Eth2 specification,
 complemented with the necessary helper functions for hashing, BLS, and more.

 With this executable spec,
@ -1,5 +1,5 @@
 from eth2spec.phase0 import spec as spec_phase0
-from eth2spec.phase1 import spec as spec_phase1
+# from eth2spec.phase1 import spec as spec_phase1
 from eth2spec.utils import bls

 from .helpers.genesis import create_genesis_state

@ -143,7 +145,9 @@ def bls_switch(fn):
     def entry(*args, **kw):
         old_state = bls.bls_active
         bls.bls_active = kw.pop('bls_active', DEFAULT_BLS_ACTIVE)
-        yield from fn(*args, **kw)
+        res = fn(*args, **kw)
+        if res is not None:
+            yield from res
         bls.bls_active = old_state
     return entry

@ -153,7 +155,7 @@ all_phases = ['phase0', 'phase1']

 def with_all_phases(fn):
     """
-    A decorator for running a test wil every phase
+    A decorator for running a test with every phase
     """
     return with_phases(all_phases)(fn)

@ -189,7 +191,9 @@ def with_phases(phases):
             if 'phase0' in run_phases:
                 ret = run_with_spec_version(spec_phase0, *args, **kw)
             if 'phase1' in run_phases:
-                ret = run_with_spec_version(spec_phase1, *args, **kw)
+                # temporarily disable phase 1 tests
+                return
+                # ret = run_with_spec_version(spec_phase1, *args, **kw)
             return ret
         return wrapper
     return decorator
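The `bls_switch` change above replaces a bare `yield from fn(...)` so that a decorated test may be either a generator or a plain function. The pattern in isolation (function names here are illustrative):

```python
def run_maybe_generator(fn, *args, **kw):
    # The pattern from the bls_switch fix: call fn first, then yield from the
    # result only if there is one. A bare `yield from fn(...)` raises
    # TypeError when fn is a plain function returning None.
    res = fn(*args, **kw)
    if res is not None:
        yield from res

def gen_test():
    yield "part1"
    yield "part2"

def plain_test():
    return None
```

`list(run_maybe_generator(gen_test))` yields both parts, while `list(run_maybe_generator(plain_test))` is simply empty instead of raising.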
@ -1,4 +1,4 @@
-from eth2spec.test.context import with_all_phases, with_state, bls_switch
+from eth2spec.test.context import with_all_phases, spec_state_test
 from eth2spec.test.helpers.attestations import get_valid_attestation
 from eth2spec.test.helpers.block import build_empty_block_for_next_slot
 from eth2spec.test.helpers.state import state_transition_and_sign_block

@ -27,8 +27,7 @@ def add_attestation_to_store(spec, store, attestation):


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_genesis(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -37,8 +36,7 @@ def test_genesis(spec, state):


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_chain_no_attestations(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -59,8 +57,7 @@ def test_chain_no_attestations(spec, state):


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_split_tie_breaker_no_attestations(spec, state):
     genesis_state = state.copy()

@ -88,8 +85,7 @@ def test_split_tie_breaker_no_attestations(spec, state):


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_shorter_chain_but_heavier_weight(spec, state):
     genesis_state = state.copy()

@ -1,8 +1,10 @@
-from eth2spec.test.context import with_all_phases, with_state, bls_switch
+from eth2spec.test.context import with_all_phases, spec_state_test, with_phases
+

 from eth2spec.test.helpers.block import build_empty_block_for_next_slot
 from eth2spec.test.helpers.attestations import get_valid_attestation
+from eth2spec.test.helpers.state import next_slot
 from eth2spec.test.helpers.state import state_transition_and_sign_block


 def run_on_attestation(spec, state, store, attestation, valid=True):

@ -26,27 +28,24 @@ def run_on_attestation(spec, state, store, attestation, valid=True):


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_attestation(spec, state):
     store = spec.get_genesis_store(state)
     time = 100
     spec.on_tick(store, time)

-    block = build_empty_block_for_next_slot(spec, state, signed=True)
+    block = build_empty_block_for_next_slot(spec, state)
+    state_transition_and_sign_block(spec, state, block)

     # store block in store
     spec.on_block(store, block)

+    next_slot(spec, state)
+
     attestation = get_valid_attestation(spec, state, slot=block.slot)
     run_on_attestation(spec, state, store, attestation)


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_attestation_target_not_in_store(spec, state):
     store = spec.get_genesis_store(state)
     time = 100

@ -55,28 +54,27 @@ def test_on_attestation_target_not_in_store(spec, state):
     # move to next epoch to make block new target
     state.slot += spec.SLOTS_PER_EPOCH

-    block = build_empty_block_for_next_slot(spec, state, signed=True)
+    block = build_empty_block_for_next_slot(spec, state)
+    state_transition_and_sign_block(spec, state, block)

     # do not add block to store

+    next_slot(spec, state)
     attestation = get_valid_attestation(spec, state, slot=block.slot)
     run_on_attestation(spec, state, store, attestation, False)


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_attestation_future_epoch(spec, state):
     store = spec.get_genesis_store(state)
     time = 3 * spec.SECONDS_PER_SLOT
     spec.on_tick(store, time)

-    block = build_empty_block_for_next_slot(spec, state, signed=True)
+    block = build_empty_block_for_next_slot(spec, state)
+    state_transition_and_sign_block(spec, state, block)

     # store block in store
     spec.on_block(store, block)
+    next_slot(spec, state)

     # move state forward but not store
     attestation_slot = block.slot + spec.SLOTS_PER_EPOCH

@ -87,36 +85,34 @@ def test_on_attestation_future_epoch(spec, state):


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_attestation_same_slot(spec, state):
     store = spec.get_genesis_store(state)
     time = 1 * spec.SECONDS_PER_SLOT
     spec.on_tick(store, time)

-    block = build_empty_block_for_next_slot(spec, state, signed=True)
+    block = build_empty_block_for_next_slot(spec, state)
+    state_transition_and_sign_block(spec, state, block)

     spec.on_block(store, block)
+    next_slot(spec, state)

     attestation = get_valid_attestation(spec, state, slot=block.slot)
     run_on_attestation(spec, state, store, attestation, False)


-@with_all_phases
-@with_state
-@bls_switch
+@with_phases(['phase0'])
+@spec_state_test
 def test_on_attestation_invalid_attestation(spec, state):
     store = spec.get_genesis_store(state)
     time = 3 * spec.SECONDS_PER_SLOT
     spec.on_tick(store, time)

-    block = build_empty_block_for_next_slot(spec, state, signed=True)
+    block = build_empty_block_for_next_slot(spec, state)
+    state_transition_and_sign_block(spec, state, block)

     spec.on_block(store, block)
+    next_slot(spec, state)

     attestation = get_valid_attestation(spec, state, slot=block.slot)
-    # make attestation invalid
-    attestation.custody_bits[0:8] = [0, 0, 0, 0, 1, 1, 1, 1]
+    # make attestation invalid by setting a phase1-only custody bit
+    attestation.custody_bits[0] = 1
     run_on_attestation(spec, state, store, attestation, False)
@ -1,11 +1,12 @@
 from copy import deepcopy
 from eth2spec.utils.ssz.ssz_impl import signing_root

-from eth2spec.test.context import with_all_phases, with_state, bls_switch
-from eth2spec.test.helpers.block import build_empty_block_for_next_slot
-from eth2spec.test.helpers.state import next_epoch, next_epoch_with_attestations
+from eth2spec.test.context import with_all_phases, spec_state_test
+from eth2spec.test.helpers.block import build_empty_block_for_next_slot, sign_block
+from eth2spec.test.helpers.state import next_epoch, next_epoch_with_attestations, state_transition_and_sign_block


-def run_on_block(spec, state, store, block, valid=True):
+def run_on_block(spec, store, block, valid=True):
     if not valid:
         try:
             spec.on_block(store, block)

@ -19,19 +20,18 @@ def run_on_block(spec, state, store, block, valid=True):


 def apply_next_epoch_with_attestations(spec, state, store):
-    _, new_blocks, state = next_epoch_with_attestations(spec, state, True, False)
+    _, new_blocks, post_state = next_epoch_with_attestations(spec, state, True, False)
     for block in new_blocks:
         block_root = signing_root(block)
         store.blocks[block_root] = block
-        store.block_states[block_root] = state
+        store.block_states[block_root] = post_state
         last_block = block
     spec.on_tick(store, store.time + state.slot * spec.SECONDS_PER_SLOT)
-    return state, store, last_block
+    return post_state, store, last_block


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_basic(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -41,21 +41,22 @@ def test_basic(spec, state):

     # On receiving a block of `GENESIS_SLOT + 1` slot
     block = build_empty_block_for_next_slot(spec, state)
-    run_on_block(spec, state, store, block)
+    state_transition_and_sign_block(spec, state, block)
+    run_on_block(spec, store, block)

     # On receiving a block of next epoch
     store.time = time + spec.SECONDS_PER_SLOT * spec.SLOTS_PER_EPOCH
     block = build_empty_block_for_next_slot(spec, state)
     block.slot += spec.SLOTS_PER_EPOCH
+    state_transition_and_sign_block(spec, state, block)

-    run_on_block(spec, state, store, block)
+    run_on_block(spec, store, block)

     # TODO: add tests for justified_root and finalized_root


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_block_checkpoints(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -70,18 +71,18 @@ def test_on_block_checkpoints(spec, state):
     last_block_root = signing_root(last_block)

     # Mock the finalized_checkpoint
-    store.block_states[last_block_root].finalized_checkpoint = (
+    fin_state = store.block_states[last_block_root]
+    fin_state.finalized_checkpoint = (
         store.block_states[last_block_root].current_justified_checkpoint
     )

     # On receiving a block of `GENESIS_SLOT + 1` slot
-    block = build_empty_block_for_next_slot(spec, state)
-    run_on_block(spec, state, store, block)
+    block = build_empty_block_for_next_slot(spec, fin_state)
+    state_transition_and_sign_block(spec, deepcopy(fin_state), block)
+    run_on_block(spec, store, block)


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_block_future_block(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -90,12 +91,12 @@ def test_on_block_future_block(spec, state):

     # Fail receiving block of `GENESIS_SLOT + 1` slot
     block = build_empty_block_for_next_slot(spec, state)
-    run_on_block(spec, state, store, block, False)
+    state_transition_and_sign_block(spec, state, block)
+    run_on_block(spec, store, block, False)


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_block_bad_parent_root(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -104,13 +105,18 @@ def test_on_block_bad_parent_root(spec, state):

     # Fail receiving block of `GENESIS_SLOT + 1` slot
     block = build_empty_block_for_next_slot(spec, state)
+    spec.state_transition(state, block)
+    block.state_root = state.hash_tree_root()
+
     block.parent_root = b'\x45' * 32
-    run_on_block(spec, state, store, block, False)
+
+    sign_block(spec, state, block)
+
+    run_on_block(spec, store, block, False)


 @with_all_phases
-@with_state
-@bls_switch
+@spec_state_test
 def test_on_block_before_finalized(spec, state):
     # Initialization
     store = spec.get_genesis_store(state)

@ -124,4 +130,5 @@ def test_on_block_before_finalized(spec, state):

     # Fail receiving block of `GENESIS_SLOT + 1` slot
     block = build_empty_block_for_next_slot(spec, state)
-    run_on_block(spec, state, store, block, False)
+    state_transition_and_sign_block(spec, state, block)
+    run_on_block(spec, store, block, False)
@ -3,11 +3,10 @@ from typing import List
 from eth2spec.test.helpers.block import build_empty_block_for_next_slot, sign_block
 from eth2spec.test.helpers.keys import privkeys
 from eth2spec.utils.bls import bls_sign, bls_aggregate_signatures
-from eth2spec.utils.ssz.ssz_impl import hash_tree_root
 from eth2spec.utils.ssz.ssz_typing import Bitlist


-def build_attestation_data(spec, state, slot, shard):
+def build_attestation_data(spec, state, slot, index):
     assert state.slot >= slot

     if slot == state.slot:

@ -15,7 +14,7 @@ def build_attestation_data(spec, state, slot, shard):
     else:
         block_root = spec.get_block_root_at_slot(state, slot)

-    current_epoch_start_slot = spec.compute_start_slot_of_epoch(spec.get_current_epoch(state))
+    current_epoch_start_slot = spec.compute_start_slot_at_epoch(spec.get_current_epoch(state))
     if slot < current_epoch_start_slot:
         epoch_boundary_root = spec.get_block_root(state, spec.get_previous_epoch(state))
     elif slot == current_epoch_start_slot:

@ -30,43 +29,30 @@ def build_attestation_data(spec, state, slot, shard):
     source_epoch = state.current_justified_checkpoint.epoch
     source_root = state.current_justified_checkpoint.root

-    if spec.compute_epoch_of_slot(slot) == spec.get_current_epoch(state):
-        parent_crosslink = state.current_crosslinks[shard]
-    else:
-        parent_crosslink = state.previous_crosslinks[shard]
-
     return spec.AttestationData(
+        slot=slot,
+        index=index,
         beacon_block_root=block_root,
         source=spec.Checkpoint(epoch=source_epoch, root=source_root),
-        target=spec.Checkpoint(epoch=spec.compute_epoch_of_slot(slot), root=epoch_boundary_root),
-        crosslink=spec.Crosslink(
-            shard=shard,
-            start_epoch=parent_crosslink.end_epoch,
-            end_epoch=min(spec.compute_epoch_of_slot(slot), parent_crosslink.end_epoch + spec.MAX_EPOCHS_PER_CROSSLINK),
-            data_root=spec.Hash(),
-            parent_root=hash_tree_root(parent_crosslink),
-        ),
+        target=spec.Checkpoint(epoch=spec.compute_epoch_at_slot(slot), root=epoch_boundary_root),
     )


-def get_valid_attestation(spec, state, slot=None, signed=False):
+def get_valid_attestation(spec, state, slot=None, index=None, signed=False):
     if slot is None:
         slot = state.slot
+    if index is None:
+        index = 0

-    epoch = spec.compute_epoch_of_slot(slot)
-    epoch_start_shard = spec.get_start_shard(state, epoch)
-    committees_per_slot = spec.get_committee_count(state, epoch) // spec.SLOTS_PER_EPOCH
-    shard = (epoch_start_shard + committees_per_slot * (slot % spec.SLOTS_PER_EPOCH)) % spec.SHARD_COUNT
+    attestation_data = build_attestation_data(spec, state, slot, index)

-    attestation_data = build_attestation_data(spec, state, slot, shard)
-
-    crosslink_committee = spec.get_crosslink_committee(
+    beacon_committee = spec.get_beacon_committee(
         state,
-        attestation_data.target.epoch,
-        attestation_data.crosslink.shard,
+        attestation_data.slot,
+        attestation_data.index,
     )

-    committee_size = len(crosslink_committee)
+    committee_size = len(beacon_committee)
     aggregation_bits = Bitlist[spec.MAX_VALIDATORS_PER_COMMITTEE](*([0] * committee_size))
     custody_bits = Bitlist[spec.MAX_VALIDATORS_PER_COMMITTEE](*([0] * committee_size))
     attestation = spec.Attestation(

@ -122,19 +108,20 @@ def get_attestation_signature(spec, state, attestation_data, privkey, custody_bi
         privkey=privkey,
         domain=spec.get_domain(
             state=state,
-            domain_type=spec.DOMAIN_ATTESTATION,
+            domain_type=spec.DOMAIN_BEACON_ATTESTER,
             message_epoch=attestation_data.target.epoch,
         )
     )


 def fill_aggregate_attestation(spec, state, attestation, signed=False):
-    crosslink_committee = spec.get_crosslink_committee(
+    beacon_committee = spec.get_beacon_committee(
         state,
-        attestation.data.target.epoch,
-        attestation.data.crosslink.shard,
+        attestation.data.slot,
+        attestation.data.index,
     )
-    for i in range(len(crosslink_committee)):
+    for i in range(len(beacon_committee)):
         attestation.aggregation_bits[i] = True

     if signed:
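The `aggregation_bits` filled in `fill_aggregate_attestation` above carry one bit per committee member; two attestations over the same `AttestationData` are aggregated by OR-ing these bitfields. A sketch with plain 0/1 lists standing in for the spec's `Bitlist`:

```python
def aggregate_bits(a: list, b: list) -> list:
    # OR together two aggregation bitfields for the same committee and the
    # same AttestationData (plain 0/1 lists stand in for the spec's Bitlist).
    assert len(a) == len(b)
    return [x | y for x, y in zip(a, b)]
```

For example, `aggregate_bits([1, 0, 0, 1], [0, 1, 0, 1])` marks every validator who signed either attestation.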
@ -14,7 +14,7 @@ def sign_block(spec, state, block, proposer_index=None):
     if block.slot == state.slot:
         proposer_index = spec.get_beacon_proposer_index(state)
     else:
-        if spec.compute_epoch_of_slot(state.slot) + 1 > spec.compute_epoch_of_slot(block.slot):
+        if spec.compute_epoch_at_slot(state.slot) + 1 > spec.compute_epoch_at_slot(block.slot):
             print("warning: block slot far away, and no proposer index manually given."
                   " Signing block is slow due to transition for proposer index calculation.")
         # use stub state to get proposer index of future slot

@ -26,10 +26,10 @@ def sign_block(spec, state, block, proposer_index=None):

     block.body.randao_reveal = bls_sign(
         privkey=privkey,
-        message_hash=hash_tree_root(spec.compute_epoch_of_slot(block.slot)),
+        message_hash=hash_tree_root(spec.compute_epoch_at_slot(block.slot)),
         domain=spec.get_domain(
             state,
-            message_epoch=spec.compute_epoch_of_slot(block.slot),
+            message_epoch=spec.compute_epoch_at_slot(block.slot),
             domain_type=spec.DOMAIN_RANDAO,
         )
     )

@ -39,7 +39,7 @@ def sign_block(spec, state, block, proposer_index=None):
         domain=spec.get_domain(
             state,
             spec.DOMAIN_BEACON_PROPOSER,
-            spec.compute_epoch_of_slot(block.slot)))
+            spec.compute_epoch_at_slot(block.slot)))


 def apply_empty_block(spec, state):
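The `compute_epoch_of_slot` → `compute_epoch_at_slot` change above is a pure rename; the helper itself is just integer division by `SLOTS_PER_EPOCH`:

```python
SLOTS_PER_EPOCH = 32  # mainnet preset value

def compute_epoch_at_slot(slot: int) -> int:
    # An epoch is the slot number integer-divided by SLOTS_PER_EPOCH.
    return slot // SLOTS_PER_EPOCH
```

With the mainnet preset, slots 0 through 31 fall in epoch 0, slot 32 starts epoch 1, and so on.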
@ -1,6 +1,11 @@
 from eth2spec.test.helpers.keys import privkeys
 from eth2spec.utils.bls import bls_sign, bls_aggregate_signatures
+from eth2spec.utils.hash_function import hash
+from eth2spec.utils.ssz.ssz_typing import Bitlist, BytesN, Bitvector
+from eth2spec.utils.ssz.ssz_impl import chunkify, pack, hash_tree_root
+from eth2spec.utils.merkle_minimal import get_merkle_tree, get_merkle_proof
+
+BYTES_PER_CHUNK = 32


 def get_valid_early_derived_secret_reveal(spec, state, epoch=None):

@ -13,7 +18,7 @@ def get_valid_early_derived_secret_reveal(spec, state, epoch=None):

     # Generate the secret that is being revealed
     reveal = bls_sign(
-        message_hash=spec.hash_tree_root(spec.Epoch(epoch)),
+        message_hash=hash_tree_root(spec.Epoch(epoch)),
         privkey=privkeys[revealed_index],
         domain=spec.get_domain(
             state=state,

@ -42,3 +47,128 @@ def get_valid_early_derived_secret_reveal(spec, state, epoch=None):
         masker_index=masker_index,
         mask=mask,
     )
+
+
+def get_valid_custody_key_reveal(spec, state, period=None):
+    current_epoch = spec.get_current_epoch(state)
+    revealer_index = spec.get_active_validator_indices(state, current_epoch)[0]
+    revealer = state.validators[revealer_index]
+
+    if period is None:
+        period = revealer.next_custody_secret_to_reveal
+
+    epoch_to_sign = spec.get_randao_epoch_for_custody_period(period, revealer_index)
+
+    # Generate the secret that is being revealed
+    reveal = bls_sign(
+        message_hash=hash_tree_root(spec.Epoch(epoch_to_sign)),
+        privkey=privkeys[revealer_index],
+        domain=spec.get_domain(
+            state=state,
+            domain_type=spec.DOMAIN_RANDAO,
+            message_epoch=epoch_to_sign,
+        ),
+    )
+    return spec.CustodyKeyReveal(
+        revealer_index=revealer_index,
+        reveal=reveal,
+    )
+
+
+def bitlist_from_int(max_len, num_bits, n):
+    return Bitlist[max_len](*[(n >> i) & 0b1 for i in range(num_bits)])
+
+
+def get_valid_bit_challenge(spec, state, attestation, invalid_custody_bit=False):
+    beacon_committee = spec.get_beacon_committee(
+        state,
+        attestation.data.slot,
+        attestation.data.crosslink.shard,
+    )
+    responder_index = beacon_committee[0]
+    challenger_index = beacon_committee[-1]
+
+    epoch = spec.get_randao_epoch_for_custody_period(attestation.data.target.epoch,
+                                                    responder_index)
+
+    # Generate the responder key
+    responder_key = bls_sign(
+        message_hash=hash_tree_root(spec.Epoch(epoch)),
+        privkey=privkeys[responder_index],
+        domain=spec.get_domain(
+            state=state,
+            domain_type=spec.DOMAIN_RANDAO,
+            message_epoch=epoch,
+        ),
+    )
+
+    chunk_count = spec.get_custody_chunk_count(attestation.data.crosslink)
+
+    chunk_bits = bitlist_from_int(spec.MAX_CUSTODY_CHUNKS, chunk_count, 0)
+
+    n = 0
+    while spec.get_chunk_bits_root(chunk_bits) == attestation.custody_bits[0] ^ invalid_custody_bit:
+        chunk_bits = bitlist_from_int(spec.MAX_CUSTODY_CHUNKS, chunk_count, n)
+        n += 1
+
+    return spec.CustodyBitChallenge(
+        responder_index=responder_index,
+        attestation=attestation,
+        challenger_index=challenger_index,
+        responder_key=responder_key,
+        chunk_bits=chunk_bits,
+    )
+
+
+def custody_chunkify(spec, x):
+    chunks = [bytes(x[i:i + spec.BYTES_PER_CUSTODY_CHUNK]) for i in range(0, len(x), spec.BYTES_PER_CUSTODY_CHUNK)]
+    chunks[-1] = chunks[-1].ljust(spec.BYTES_PER_CUSTODY_CHUNK, b"\0")
+    return chunks
+
+
+def get_valid_custody_response(spec, state, bit_challenge, custody_data, challenge_index, invalid_chunk_bit=False):
+    chunks = custody_chunkify(spec, custody_data)
+
+    chunk_index = len(chunks) - 1
+    chunk_bit = spec.get_custody_chunk_bit(bit_challenge.responder_key, chunks[chunk_index])
+
+    while chunk_bit == bit_challenge.chunk_bits[chunk_index] ^ invalid_chunk_bit:
+        chunk_index -= 1
+        chunk_bit = spec.get_custody_chunk_bit(bit_challenge.responder_key, chunks[chunk_index])
|
||||
|
||||
chunks_hash_tree_roots = [hash_tree_root(BytesN[spec.BYTES_PER_CUSTODY_CHUNK](chunk)) for chunk in chunks]
|
||||
chunks_hash_tree_roots += [
|
||||
hash_tree_root(BytesN[spec.BYTES_PER_CUSTODY_CHUNK](b"\0" * spec.BYTES_PER_CUSTODY_CHUNK))
|
||||
for i in range(2 ** spec.ceillog2(len(chunks)) - len(chunks))]
|
||||
data_tree = get_merkle_tree(chunks_hash_tree_roots)
|
||||
|
||||
data_branch = get_merkle_proof(data_tree, chunk_index)
|
||||
|
||||
bitlist_chunk_index = chunk_index // BYTES_PER_CHUNK
|
||||
bitlist_chunks = chunkify(pack(bit_challenge.chunk_bits))
|
||||
bitlist_tree = get_merkle_tree(bitlist_chunks, pad_to=spec.MAX_CUSTODY_CHUNKS // 256)
|
||||
bitlist_chunk_branch = get_merkle_proof(bitlist_tree, chunk_index // 256) + \
|
||||
[len(bit_challenge.chunk_bits).to_bytes(32, "little")]
|
||||
|
||||
bitlist_chunk_index = chunk_index // 256
|
||||
|
||||
chunk_bits_leaf = Bitvector[256](bit_challenge.chunk_bits[bitlist_chunk_index * 256:
|
||||
(bitlist_chunk_index + 1) * 256])
|
||||
|
||||
return spec.CustodyResponse(
|
||||
challenge_index=challenge_index,
|
||||
chunk_index=chunk_index,
|
||||
chunk=BytesN[spec.BYTES_PER_CUSTODY_CHUNK](chunks[chunk_index]),
|
||||
data_branch=data_branch,
|
||||
chunk_bits_branch=bitlist_chunk_branch,
|
||||
chunk_bits_leaf=chunk_bits_leaf,
|
||||
)
|
||||
|
||||
|
||||
def get_custody_test_vector(bytelength):
|
||||
ints = bytelength // 4
|
||||
return b"".join(i.to_bytes(4, "little") for i in range(ints))
|
||||
|
||||
|
||||
def get_custody_merkle_root(data):
|
||||
return get_merkle_tree(chunkify(data))[-1][0]
|
||||
|
|
|
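The custody helpers above split data into fixed-size chunks (zero-padding the last one) and expand an integer into per-chunk custody bits. A standalone sketch of that padding and bit-expansion logic, using plain Python lists in place of the SSZ `Bitlist` and an assumed 32-byte chunk size:

```python
BYTES_PER_CUSTODY_CHUNK = 32  # assumption for illustration; the spec constant may differ


def bits_from_int(num_bits, n):
    # Little-endian bit expansion, mirroring bitlist_from_int above (plain bools, no SSZ Bitlist).
    return [bool((n >> i) & 1) for i in range(num_bits)]


def chunkify_and_pad(data):
    # Split into fixed-size chunks and zero-pad the final chunk, as custody_chunkify does.
    chunks = [bytes(data[i:i + BYTES_PER_CUSTODY_CHUNK])
              for i in range(0, len(data), BYTES_PER_CUSTODY_CHUNK)]
    chunks[-1] = chunks[-1].ljust(BYTES_PER_CUSTODY_CHUNK, b"\0")
    return chunks


chunks = chunkify_and_pad(b"\x01" * 40)
assert len(chunks) == 2 and chunks[1] == b"\x01" * 8 + b"\x00" * 24
assert bits_from_int(4, 0b1010) == [False, True, False, True]
```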
@@ -47,7 +47,7 @@ def build_deposit(spec,
     deposit_data_list.append(deposit_data)
     root = hash_tree_root(List[spec.DepositData, 2**spec.DEPOSIT_CONTRACT_TREE_DEPTH](*deposit_data_list))
     tree = calc_merkle_tree_from_leaves(tuple([d.hash_tree_root() for d in deposit_data_list]))
-    proof = list(get_merkle_proof(tree, item_index=index)) + [(index + 1).to_bytes(32, 'little')]
+    proof = list(get_merkle_proof(tree, item_index=index, tree_len=32)) + [(index + 1).to_bytes(32, 'little')]
     leaf = deposit_data.hash_tree_root()
     assert spec.is_valid_merkle_branch(leaf, proof, spec.DEPOSIT_CONTRACT_TREE_DEPTH + 1, index, root)
     deposit = spec.Deposit(proof=proof, data=deposit_data)

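The `(index + 1)` word appended to the deposit proof above is the deposit-count leaf that `is_valid_merkle_branch` folds in last (hence `DEPOSIT_CONTRACT_TREE_DEPTH + 1`). A minimal sketch of that branch-folding check, with sha256 standing in for the spec hash and a toy two-leaf tree plus a length mix-in:

```python
from hashlib import sha256


def hash_(x):
    return sha256(x).digest()


def is_valid_merkle_branch(leaf, branch, depth, index, root):
    # Fold the branch upward; bit i of `index` says whether we are the right child at level i.
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = hash_(branch[i] + value)
        else:
            value = hash_(value + branch[i])
    return value == root


# Two-leaf tree with the count mixed in at the top, mimicking the deposit tree shape:
a, b = b"\x11" * 32, b"\x22" * 32
count = (2).to_bytes(32, "little")
root = hash_(hash_(a + b) + count)
assert is_valid_merkle_branch(a, [b, count], 2, 0, root)
```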
@@ -1,7 +1,5 @@
-from eth2spec.test.helpers.keys import pubkeys
-from eth2spec.utils.ssz.ssz_impl import hash_tree_root
-from eth2spec.utils.ssz.ssz_typing import List
+import copy
+from eth2spec.test.helpers.keys import pubkeys


 def build_mock_validator(spec, i: int, balance: int):

@@ -22,15 +20,17 @@ def build_mock_validator(spec, i: int, balance: int):
 def create_genesis_state(spec, validator_balances, activation_threshold):
     deposit_root = b'\x42' * 32

+    eth1_block_hash = b'\xda' * 32
     state = spec.BeaconState(
         genesis_time=0,
         eth1_deposit_index=len(validator_balances),
         eth1_data=spec.Eth1Data(
             deposit_root=deposit_root,
             deposit_count=len(validator_balances),
-            block_hash=spec.Hash(),
+            block_hash=eth1_block_hash,
         ),
         latest_block_header=spec.BeaconBlockHeader(body_root=spec.hash_tree_root(spec.BeaconBlockBody())),
+        randao_mixes=[eth1_block_hash] * spec.EPOCHS_PER_HISTORICAL_VECTOR,
     )

     # We "hack" in the initial validators,

@@ -44,12 +44,4 @@ def create_genesis_state(spec, validator_balances, activation_threshold):
         validator.activation_eligibility_epoch = spec.GENESIS_EPOCH
         validator.activation_epoch = spec.GENESIS_EPOCH

-    genesis_active_index_root = hash_tree_root(List[spec.ValidatorIndex, spec.VALIDATOR_REGISTRY_LIMIT](
-        spec.get_active_validator_indices(state, spec.GENESIS_EPOCH)))
-    genesis_compact_committees_root = hash_tree_root(List[spec.ValidatorIndex, spec.VALIDATOR_REGISTRY_LIMIT](
-        spec.get_active_validator_indices(state, spec.GENESIS_EPOCH)))
-    for index in range(spec.EPOCHS_PER_HISTORICAL_VECTOR):
-        state.active_index_roots[index] = genesis_active_index_root
-        state.compact_committees_roots[index] = genesis_compact_committees_root
-
     return state

@@ -0,0 +1,40 @@
from eth2spec.test.helpers.keys import privkeys
from eth2spec.utils.bls import (
    bls_aggregate_signatures,
    bls_sign,
)


def sign_shard_attestation(spec, beacon_state, shard_state, block, participants):
    signatures = []
    message_hash = spec.ShardAttestationData(
        slot=block.slot,
        parent_root=block.parent_root,
    ).hash_tree_root()
    block_epoch = spec.compute_epoch_of_shard_slot(block.slot)
    for validator_index in participants:
        privkey = privkeys[validator_index]
        signatures.append(
            get_attestation_signature(
                spec,
                beacon_state,
                shard_state,
                message_hash,
                block_epoch,
                privkey,
            )
        )

    return bls_aggregate_signatures(signatures)


def get_attestation_signature(spec, beacon_state, shard_state, message_hash, block_epoch, privkey):
    return bls_sign(
        message_hash=message_hash,
        privkey=privkey,
        domain=spec.get_domain(
            state=beacon_state,
            domain_type=spec.DOMAIN_SHARD_ATTESTER,
            message_epoch=block_epoch,
        )
    )

@@ -0,0 +1,82 @@
from copy import deepcopy

from eth2spec.test.helpers.keys import privkeys
from eth2spec.utils.bls import (
    bls_sign,
    only_with_bls,
)
from eth2spec.utils.ssz.ssz_impl import (
    signing_root,
)

from .attestations import (
    sign_shard_attestation,
)


@only_with_bls()
def sign_shard_block(spec, beacon_state, shard_state, block, proposer_index=None):
    if proposer_index is None:
        proposer_index = spec.get_shard_proposer_index(beacon_state, shard_state.shard, block.slot)

    privkey = privkeys[proposer_index]

    block.signature = bls_sign(
        message_hash=signing_root(block),
        privkey=privkey,
        domain=spec.get_domain(
            beacon_state,
            spec.DOMAIN_SHARD_PROPOSER,
            spec.compute_epoch_of_shard_slot(block.slot),
        )
    )


def build_empty_shard_block(spec,
                            beacon_state,
                            shard_state,
                            slot,
                            signed=False,
                            full_attestation=False):
    if slot is None:
        slot = shard_state.slot

    previous_beacon_header = deepcopy(beacon_state.latest_block_header)
    if previous_beacon_header.state_root == spec.Bytes32():
        previous_beacon_header.state_root = beacon_state.hash_tree_root()
    beacon_block_root = spec.signing_root(previous_beacon_header)

    previous_block_header = deepcopy(shard_state.latest_block_header)
    if previous_block_header.state_root == spec.Bytes32():
        previous_block_header.state_root = shard_state.hash_tree_root()
    parent_root = signing_root(previous_block_header)

    block = spec.ShardBlock(
        shard=shard_state.shard,
        slot=slot,
        beacon_block_root=beacon_block_root,
        parent_root=parent_root,
        block_size_sum=shard_state.block_size_sum + spec.SHARD_HEADER_SIZE,
    )

    if full_attestation:
        shard_committee = spec.get_shard_committee(beacon_state, shard_state.shard, block.slot)
        block.aggregation_bits = list(
            (True,) * len(shard_committee) +
            (False,) * (spec.MAX_PERIOD_COMMITTEE_SIZE * 2 - len(shard_committee))
        )
    else:
        shard_committee = []

    block.attestations = sign_shard_attestation(
        spec,
        beacon_state,
        shard_state,
        block,
        participants=shard_committee,
    )

    if signed:
        sign_shard_block(spec, beacon_state, shard_state, block)

    return block

@@ -0,0 +1,18 @@
from eth2spec.test.helpers.phase1.shard_block import sign_shard_block


def configure_shard_state(spec, beacon_state, shard=0):
    beacon_state.slot = spec.Slot(spec.SHARD_GENESIS_EPOCH * spec.SLOTS_PER_EPOCH)
    shard_state = spec.get_genesis_shard_state(spec.Shard(shard))
    shard_state.slot = spec.ShardSlot(spec.SHARD_GENESIS_EPOCH * spec.SHARD_SLOTS_PER_EPOCH)
    return beacon_state, shard_state


def shard_state_transition_and_sign_block(spec, beacon_state, shard_state, block):
    """
    Shard state transition via the provided ``block``
    then package the block with the state root and signature.
    """
    spec.shard_state_transition(beacon_state, shard_state, block)
    block.state_root = shard_state.hash_tree_root()
    sign_shard_block(spec, beacon_state, shard_state, block)

@@ -18,7 +18,6 @@ def get_valid_proposer_slashing(spec, state, signed_1=False, signed_2=False):
     )
     header_2 = deepcopy(header_1)
     header_2.parent_root = b'\x99' * 32
-    header_2.slot = slot + 1

     if signed_1:
         sign_block_header(spec, state, header_1, privkey)

@@ -52,14 +52,18 @@ def next_epoch_with_attestations(spec,
         block = build_empty_block_for_next_slot(spec, post_state)
         if fill_cur_epoch and post_state.slot >= spec.MIN_ATTESTATION_INCLUSION_DELAY:
             slot_to_attest = post_state.slot - spec.MIN_ATTESTATION_INCLUSION_DELAY + 1
-            if slot_to_attest >= spec.compute_start_slot_of_epoch(spec.get_current_epoch(post_state)):
-                cur_attestation = get_valid_attestation(spec, post_state, slot_to_attest)
-                block.body.attestations.append(cur_attestation)
+            committees_per_slot = spec.get_committee_count_at_slot(state, slot_to_attest)
+            if slot_to_attest >= spec.compute_start_slot_at_epoch(spec.get_current_epoch(post_state)):
+                for index in range(committees_per_slot):
+                    cur_attestation = get_valid_attestation(spec, post_state, slot_to_attest, index=index)
+                    block.body.attestations.append(cur_attestation)

         if fill_prev_epoch:
             slot_to_attest = post_state.slot - spec.SLOTS_PER_EPOCH + 1
-            prev_attestation = get_valid_attestation(spec, post_state, slot_to_attest)
-            block.body.attestations.append(prev_attestation)
+            committees_per_slot = spec.get_committee_count_at_slot(state, slot_to_attest)
+            for index in range(committees_per_slot):
+                prev_attestation = get_valid_attestation(spec, post_state, slot_to_attest, index=index)
+                block.body.attestations.append(prev_attestation)

         state_transition_and_sign_block(spec, post_state, block)
         blocks.append(block)

@@ -1,53 +0,0 @@
from eth2spec.test.helpers.keys import pubkeys, privkeys
from eth2spec.test.helpers.state import get_balance
from eth2spec.utils.bls import bls_sign
from eth2spec.utils.ssz.ssz_impl import signing_root


def get_valid_transfer(spec, state, slot=None, sender_index=None,
                       recipient_index=None, amount=None, fee=None, signed=False):
    if slot is None:
        slot = state.slot
    current_epoch = spec.get_current_epoch(state)
    if sender_index is None:
        sender_index = spec.get_active_validator_indices(state, current_epoch)[-1]
    if recipient_index is None:
        recipient_index = spec.get_active_validator_indices(state, current_epoch)[0]
    transfer_pubkey = pubkeys[-1]
    transfer_privkey = privkeys[-1]

    if fee is None:
        fee = get_balance(state, sender_index) // 32
    if amount is None:
        amount = get_balance(state, sender_index) - fee

    transfer = spec.Transfer(
        sender=sender_index,
        recipient=recipient_index,
        amount=amount,
        fee=fee,
        slot=slot,
        pubkey=transfer_pubkey,
    )
    if signed:
        sign_transfer(spec, state, transfer, transfer_privkey)

    # ensure withdrawal_credentials reproducible
    state.validators[transfer.sender].withdrawal_credentials = (
        spec.BLS_WITHDRAWAL_PREFIX + spec.hash(transfer.pubkey)[1:]
    )

    return transfer


def sign_transfer(spec, state, transfer, privkey):
    transfer.signature = bls_sign(
        message_hash=signing_root(transfer),
        privkey=privkey,
        domain=spec.get_domain(
            state=state,
            domain_type=spec.DOMAIN_TRANSFER,
            message_epoch=spec.get_current_epoch(state),
        )
    )
    return transfer

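For reference, the balance accounting these (now-removed) transfer helpers exercised: the sender pays `amount + fee`, the recipient receives `amount`, and the block proposer collects `fee`. A plain-dict sketch of that invariant; the names here are illustrative, not spec functions:

```python
def apply_transfer(balances, sender, recipient, proposer, amount, fee):
    # Mirrors the invariants the transfer-processing tests assert:
    # sender pays amount + fee, recipient receives amount, proposer receives fee.
    assert balances[sender] >= amount + fee
    balances[sender] -= amount + fee
    balances[recipient] += amount
    balances[proposer] += fee
    return balances


balances = apply_transfer({0: 100, 1: 50, 2: 10},
                          sender=0, recipient=1, proposer=2, amount=30, fee=5)
assert balances == {0: 65, 1: 80, 2: 15}
```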
@@ -0,0 +1,152 @@
import re
from eth_utils import (
    to_tuple,
)

from eth2spec.test.context import (
    expect_assertion_error,
    spec_state_test,
    with_all_phases_except,
)
from eth2spec.utils.ssz.ssz_typing import (
    Bytes32,
    Container,
    List,
    uint64,
)


class Foo(Container):
    x: uint64
    y: List[Bytes32, 2]

# Tree
#              root
#             /    \
#            x    y_root
#                 /    \
#       y_data_root   len(y)
#           /  \
#          /\  /\
#
# Generalized indices
#              1
#             / \
#        2 (x)   3 (y_root)
#               /  \
#              6    7
#             / \
#           12   13


@to_tuple
def ssz_object_to_path(start, end):
    is_len = False
    len_findall = re.findall(r"(?<=len\().*(?=\))", end)
    if len_findall:
        is_len = True
        end = len_findall[0]

    route = ''
    if end.startswith(start):
        route = end[len(start):]

    segments = route.split('.')
    for word in segments:
        index_match = re.match(r"(\w+)\[(\d+)]", word)
        if index_match:
            yield from index_match.groups()
        elif len(word):
            yield word
    if is_len:
        yield '__len__'


to_path_test_cases = [
    ('foo', 'foo.x', ('x',)),
    ('foo', 'foo.x[100].y', ('x', '100', 'y')),
    ('foo', 'foo.x[100].y[1].z[2]', ('x', '100', 'y', '1', 'z', '2')),
    ('foo', 'len(foo.x[100].y[1].z[2])', ('x', '100', 'y', '1', 'z', '2', '__len__')),
]


def test_to_path():
    for test_case in to_path_test_cases:
        start, end, expected = test_case
        assert ssz_object_to_path(start, end) == expected


generalized_index_cases = [
    (Foo, ('x',), 2),
    (Foo, ('y',), 3),
    (Foo, ('y', 0), 12),
    (Foo, ('y', 1), 13),
    (Foo, ('y', '__len__'), None),
]


@with_all_phases_except(['phase0'])
@spec_state_test
def test_get_generalized_index(spec, state):
    for typ, path, generalized_index in generalized_index_cases:
        if generalized_index is not None:
            assert spec.get_generalized_index(
                typ=typ,
                path=path,
            ) == generalized_index
        else:
            expect_assertion_error(lambda: spec.get_generalized_index(typ=typ, path=path))

        yield 'typ', typ
        yield 'path', path
        yield 'generalized_index', generalized_index


@with_all_phases_except(['phase0'])
@spec_state_test
def test_verify_merkle_proof(spec, state):
    h = spec.hash
    a = b'\x11' * 32
    b = b'\x22' * 32
    c = b'\x33' * 32
    d = b'\x44' * 32
    root = h(h(a + b) + h(c + d))
    leaf = a
    generalized_index = 4
    proof = [b, h(c + d)]

    is_valid = spec.verify_merkle_proof(
        leaf=leaf,
        proof=proof,
        index=generalized_index,
        root=root,
    )
    assert is_valid

    yield 'proof', proof
    yield 'is_valid', is_valid


@with_all_phases_except(['phase0'])
@spec_state_test
def test_verify_merkle_multiproof(spec, state):
    h = spec.hash
    a = b'\x11' * 32
    b = b'\x22' * 32
    c = b'\x33' * 32
    d = b'\x44' * 32
    root = h(h(a + b) + h(c + d))
    leaves = [a, d]
    generalized_indices = [4, 7]
    proof = [c, b]  # helper_indices = [6, 5]

    is_valid = spec.verify_merkle_multiproof(
        leaves=leaves,
        proof=proof,
        indices=generalized_indices,
        root=root,
    )
    assert is_valid

    yield 'proof', proof
    yield 'is_valid', is_valid

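The fixture above encodes the path to leaf `a` as generalized index 4 (binary `100`: two left turns from the root). A condensed sketch of single-proof verification over generalized indices, with sha256 standing in for `spec.hash`; the authoritative definition lives in `merkle_proofs.md`:

```python
from hashlib import sha256


def hash_(x):
    return sha256(x).digest()


def verify_merkle_proof(leaf, proof, index, root):
    # Proof length must equal the depth of the generalized index.
    assert len(proof) == index.bit_length() - 1
    value = leaf
    for i, sibling in enumerate(proof):
        # Bit i of the generalized index says whether we are a right child at level i.
        if (index >> i) & 1:
            value = hash_(sibling + value)
        else:
            value = hash_(value + sibling)
    return value == root


# Same four-leaf fixture as test_verify_merkle_proof above:
a, b, c, d = (bytes([v]) * 32 for v in (0x11, 0x22, 0x33, 0x44))
root = hash_(hash_(a + b) + hash_(c + d))
assert verify_merkle_proof(a, [b, hash_(c + d)], 4, root)
```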
@@ -1,5 +1,12 @@
-from eth2spec.test.context import spec_state_test, expect_assertion_error, always_bls, \
-    with_all_phases, with_phases, spec_test, low_balances, with_custom_state
+from eth2spec.test.context import (
+    spec_state_test,
+    expect_assertion_error,
+    always_bls, never_bls,
+    with_all_phases, with_phases,
+    spec_test,
+    low_balances,
+    with_custom_state,
+)
 from eth2spec.test.helpers.attestations import (
     get_valid_attestation,
     sign_aggregate_attestation,

@@ -7,7 +14,6 @@ from eth2spec.test.helpers.attestations import (
 )
 from eth2spec.test.helpers.state import (
     next_epoch,
     next_slot,
 )
-from eth2spec.test.helpers.block import apply_empty_block
 from eth2spec.utils.ssz.ssz_typing import Bitlist

@@ -79,54 +85,6 @@ def test_success_previous_epoch(spec, state):
     yield from run_attestation_processing(spec, state, attestation)


-@with_all_phases
-@spec_state_test
-def test_success_since_max_epochs_per_crosslink(spec, state):
-    # Do not run mainnet (64 epochs), that would mean the equivalent of ~7 hours chain simulation.
-    if spec.MAX_EPOCHS_PER_CROSSLINK > 4:
-        return
-    for _ in range(spec.MAX_EPOCHS_PER_CROSSLINK + 2):
-        next_epoch(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation = get_valid_attestation(spec, state, signed=True)
-    data = attestation.data
-    # test logic sanity check: make sure the attestation only includes MAX_EPOCHS_PER_CROSSLINK epochs
-    assert data.crosslink.end_epoch - data.crosslink.start_epoch == spec.MAX_EPOCHS_PER_CROSSLINK
-
-    for _ in range(spec.MIN_ATTESTATION_INCLUSION_DELAY):
-        next_slot(spec, state)
-    apply_empty_block(spec, state)
-
-    yield from run_attestation_processing(spec, state, attestation)
-
-
-@with_all_phases
-@spec_state_test
-def test_wrong_end_epoch_with_max_epochs_per_crosslink(spec, state):
-    # Do not run mainnet (64 epochs), that would mean the equivalent of ~7 hours chain simulation.
-    if spec.MAX_EPOCHS_PER_CROSSLINK > 4:
-        return
-    for _ in range(spec.MAX_EPOCHS_PER_CROSSLINK + 2):
-        next_epoch(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation = get_valid_attestation(spec, state)
-    data = attestation.data
-    # test logic sanity check: make sure the attestation only includes MAX_EPOCHS_PER_CROSSLINK epochs
-    assert data.crosslink.end_epoch - data.crosslink.start_epoch == spec.MAX_EPOCHS_PER_CROSSLINK
-    # Now change it to be different
-    data.crosslink.end_epoch += 1
-
-    sign_attestation(spec, state, attestation)
-
-    for _ in range(spec.MIN_ATTESTATION_INCLUSION_DELAY):
-        next_slot(spec, state)
-    apply_empty_block(spec, state)
-
-    yield from run_attestation_processing(spec, state, attestation, False)
-
-
 @with_all_phases
 @spec_state_test
 @always_bls

@@ -180,27 +138,41 @@ def test_old_source_epoch(spec, state):

 @with_all_phases
 @spec_state_test
-def test_wrong_shard(spec, state):
+@always_bls
+def test_wrong_index_for_committee_signature(spec, state):
     attestation = get_valid_attestation(spec, state)
     state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY

-    attestation.data.crosslink.shard += 1
-
-    sign_attestation(spec, state, attestation)
+    attestation.data.index += 1

     yield from run_attestation_processing(spec, state, attestation, False)


 @with_all_phases
 @spec_state_test
-def test_invalid_shard(spec, state):
+@never_bls
+def test_wrong_index_for_slot(spec, state):
+    committees_per_slot = spec.get_committee_count_at_slot(state, state.slot)
+    assert committees_per_slot < spec.MAX_COMMITTEES_PER_SLOT
+    index = committees_per_slot
+
     attestation = get_valid_attestation(spec, state)
     state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY

+    attestation.data.index = index
+
+    yield from run_attestation_processing(spec, state, attestation, False)
+
+
+@with_all_phases
+@spec_state_test
+@never_bls
+def test_invalid_index(spec, state):
+    attestation = get_valid_attestation(spec, state)
+    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
+
     # off by one (with respect to valid range) on purpose
-    attestation.data.crosslink.shard = spec.SHARD_COUNT
-
-    sign_attestation(spec, state, attestation)
+    attestation.data.index = spec.MAX_COMMITTEES_PER_SLOT

     yield from run_attestation_processing(spec, state, attestation, False)

@@ -302,80 +274,13 @@ def test_bad_source_root(spec, state):
     yield from run_attestation_processing(spec, state, attestation, False)


-@with_phases(['phase0'])
-@spec_state_test
-def test_non_zero_crosslink_data_root(spec, state):
-    attestation = get_valid_attestation(spec, state)
-    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
-
-    attestation.data.crosslink.data_root = b'\x42' * 32
-
-    sign_attestation(spec, state, attestation)
-
-    yield from run_attestation_processing(spec, state, attestation, False)
-
-
-@with_all_phases
-@spec_state_test
-def test_bad_parent_crosslink(spec, state):
-    state.slot = spec.SLOTS_PER_EPOCH - 1
-    next_epoch(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation = get_valid_attestation(spec, state, signed=False)
-    for _ in range(spec.MIN_ATTESTATION_INCLUSION_DELAY):
-        next_slot(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation.data.crosslink.parent_root = b'\x27' * 32
-    sign_attestation(spec, state, attestation)
-
-    yield from run_attestation_processing(spec, state, attestation, False)
-
-
-@with_all_phases
-@spec_state_test
-def test_bad_crosslink_start_epoch(spec, state):
-    state.slot = spec.SLOTS_PER_EPOCH - 1
-    next_epoch(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation = get_valid_attestation(spec, state, signed=False)
-    for _ in range(spec.MIN_ATTESTATION_INCLUSION_DELAY):
-        next_slot(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation.data.crosslink.start_epoch += 1
-    sign_attestation(spec, state, attestation)
-
-    yield from run_attestation_processing(spec, state, attestation, False)
-
-
-@with_all_phases
-@spec_state_test
-def test_bad_crosslink_end_epoch(spec, state):
-    state.slot = spec.SLOTS_PER_EPOCH - 1
-    next_epoch(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation = get_valid_attestation(spec, state, signed=False)
-    for _ in range(spec.MIN_ATTESTATION_INCLUSION_DELAY):
-        next_slot(spec, state)
-    apply_empty_block(spec, state)
-
-    attestation.data.crosslink.end_epoch += 1
-    sign_attestation(spec, state, attestation)
-
-    yield from run_attestation_processing(spec, state, attestation, False)
-
-
 @with_all_phases
 @spec_state_test
 def test_inconsistent_bits(spec, state):
     attestation = get_valid_attestation(spec, state)
     state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY

-    custody_bits = attestation.aggregation_bits[:]
+    custody_bits = attestation.custody_bits[:]
     custody_bits.append(False)

     attestation.custody_bits = custody_bits

@@ -1,368 +0,0 @@
from eth2spec.test.context import spec_state_test, expect_assertion_error, always_bls, with_all_phases
from eth2spec.test.helpers.state import next_epoch
from eth2spec.test.helpers.block import apply_empty_block
from eth2spec.test.helpers.transfers import get_valid_transfer, sign_transfer


def run_transfer_processing(spec, state, transfer, valid=True):
    """
    Run ``process_transfer``, yielding:
      - pre-state ('pre')
      - transfer ('transfer')
      - post-state ('post').
    If ``valid == False``, run expecting ``AssertionError``
    """

    yield 'pre', state
    yield 'transfer', transfer

    if not valid:
        expect_assertion_error(lambda: spec.process_transfer(state, transfer))
        yield 'post', None
        return

    proposer_index = spec.get_beacon_proposer_index(state)
    pre_transfer_sender_balance = state.balances[transfer.sender]
    pre_transfer_recipient_balance = state.balances[transfer.recipient]
    pre_transfer_proposer_balance = state.balances[proposer_index]

    spec.process_transfer(state, transfer)
    yield 'post', state

    sender_balance = state.balances[transfer.sender]
    recipient_balance = state.balances[transfer.recipient]
    assert sender_balance == pre_transfer_sender_balance - transfer.amount - transfer.fee
    assert recipient_balance == pre_transfer_recipient_balance + transfer.amount
    assert state.balances[proposer_index] == pre_transfer_proposer_balance + transfer.fee


@with_all_phases
@spec_state_test
def test_success_non_activated(spec, state):
    transfer = get_valid_transfer(spec, state, signed=True)
    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer)


@with_all_phases
@spec_state_test
def test_success_withdrawable(spec, state):
    next_epoch(spec, state)
    apply_empty_block(spec, state)

    transfer = get_valid_transfer(spec, state, signed=True)

    # withdrawable_epoch in past so can transfer
    state.validators[transfer.sender].withdrawable_epoch = spec.get_current_epoch(state) - 1

    yield from run_transfer_processing(spec, state, transfer)


@with_all_phases
@spec_state_test
def test_success_active_above_max_effective(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    state.balances[sender_index] = spec.MAX_EFFECTIVE_BALANCE + 1
    transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=1, fee=0, signed=True)

    yield from run_transfer_processing(spec, state, transfer)


@with_all_phases
@spec_state_test
def test_success_active_above_max_effective_fee(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    state.balances[sender_index] = spec.MAX_EFFECTIVE_BALANCE + 1
    transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=0, fee=1, signed=True)

    yield from run_transfer_processing(spec, state, transfer)


@with_all_phases
@spec_state_test
@always_bls
def test_invalid_signature(spec, state):
    transfer = get_valid_transfer(spec, state)
    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer, False)


@with_all_phases
@spec_state_test
def test_active_but_transfer_past_effective_balance(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    amount = spec.MAX_EFFECTIVE_BALANCE // 32
    state.balances[sender_index] = spec.MAX_EFFECTIVE_BALANCE
    transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=amount, fee=0, signed=True)

    yield from run_transfer_processing(spec, state, transfer, False)


@with_all_phases
@spec_state_test
def test_incorrect_slot(spec, state):
    transfer = get_valid_transfer(spec, state, slot=state.slot + 1, signed=True)
    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer, False)


@with_all_phases
@spec_state_test
def test_transfer_clean(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT
    transfer = get_valid_transfer(spec, state, sender_index=sender_index,
                                  amount=spec.MIN_DEPOSIT_AMOUNT, fee=0, signed=True)

    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer)


@with_all_phases
@spec_state_test
def test_transfer_clean_split_to_fee(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT
    transfer = get_valid_transfer(spec, state, sender_index=sender_index,
                                  amount=spec.MIN_DEPOSIT_AMOUNT // 2, fee=spec.MIN_DEPOSIT_AMOUNT // 2, signed=True)

    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer)


@with_all_phases
@spec_state_test
def test_insufficient_balance_for_fee(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT
    transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=0, fee=1, signed=True)

    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer, False)


@with_all_phases
@spec_state_test
def test_insufficient_balance_for_fee_result_full(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    transfer = get_valid_transfer(spec, state, sender_index=sender_index,
                                  amount=0, fee=state.balances[sender_index] + 1, signed=True)

    # un-activate so validator can transfer
    state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH

    yield from run_transfer_processing(spec, state, transfer, False)


@with_all_phases
@spec_state_test
def test_insufficient_balance_for_amount_result_dust(spec, state):
    sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
    state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT
    transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=1, fee=0, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_for_amount_result_full(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
amount=state.balances[sender_index] + 1, fee=0, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_for_combined_result_dust(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee without dust, and amount without dust, but not both.
|
||||
state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT + 1
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=1, fee=1, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_for_combined_result_full(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee fully without dust left, and amount fully without dust left, but not both.
|
||||
state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT * 2 + 1
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
amount=spec.MIN_DEPOSIT_AMOUNT + 1,
|
||||
fee=spec.MIN_DEPOSIT_AMOUNT + 1, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_for_combined_big_amount(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee fully without dust left, and amount fully without dust left, but not both.
|
||||
# Try to create a dust balance (off by 1) with combination of fee and amount.
|
||||
state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT * 2 + 1
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
amount=spec.MIN_DEPOSIT_AMOUNT + 1, fee=1, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_for_combined_big_fee(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee fully without dust left, and amount fully without dust left, but not both.
|
||||
# Try to create a dust balance (off by 1) with combination of fee and amount.
|
||||
state.balances[sender_index] = spec.MIN_DEPOSIT_AMOUNT * 2 + 1
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
amount=1, fee=spec.MIN_DEPOSIT_AMOUNT + 1, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_off_by_1_fee(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee fully without dust left, and amount fully without dust left, but not both.
|
||||
# Try to print money by using the full balance as amount, plus 1 for fee.
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
amount=state.balances[sender_index], fee=1, signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_off_by_1_amount(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee fully without dust left, and amount fully without dust left, but not both.
|
||||
# Try to print money by using the full balance as fee, plus 1 for amount.
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=1,
|
||||
fee=state.balances[sender_index], signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_insufficient_balance_duplicate_as_fee_and_amount(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
# Enough to pay fee fully without dust left, and amount fully without dust left, but not both.
|
||||
# Try to print money by using the full balance, twice.
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
amount=state.balances[sender_index],
|
||||
fee=state.balances[sender_index], signed=True)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_no_dust_sender(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
balance = state.balances[sender_index]
|
||||
transfer = get_valid_transfer(
|
||||
spec,
|
||||
state,
|
||||
sender_index=sender_index,
|
||||
amount=balance - spec.MIN_DEPOSIT_AMOUNT + 1,
|
||||
fee=0,
|
||||
signed=True,
|
||||
)
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_no_dust_recipient(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
state.balances[sender_index] = spec.MAX_EFFECTIVE_BALANCE + 1
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=1, fee=0, signed=True)
|
||||
state.balances[transfer.recipient] = 0
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_non_existent_sender(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index, amount=1, fee=0)
|
||||
transfer.sender = len(state.validators)
|
||||
sign_transfer(spec, state, transfer, 42) # mostly valid signature, but sender won't exist, use bogus key.
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_non_existent_recipient(spec, state):
|
||||
sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
|
||||
state.balances[sender_index] = spec.MAX_EFFECTIVE_BALANCE + 1
|
||||
transfer = get_valid_transfer(spec, state, sender_index=sender_index,
|
||||
recipient_index=len(state.validators), amount=1, fee=0, signed=True)
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
||||
|
||||
|
||||
@with_all_phases
|
||||
@spec_state_test
|
||||
def test_invalid_pubkey(spec, state):
|
||||
transfer = get_valid_transfer(spec, state, signed=True)
|
||||
state.validators[transfer.sender].withdrawal_credentials = spec.Hash()
|
||||
|
||||
# un-activate so validator can transfer
|
||||
state.validators[transfer.sender].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
|
||||
|
||||
yield from run_transfer_processing(spec, state, transfer, False)
|
|
@@ -1,7 +1,6 @@
 process_calls = [
     'process_justification_and_finalization',
-    'process_crosslinks',
     'process_rewards_and_penalties',
     'process_registry_updates',
     'process_reveal_deadlines',
@@ -1,166 +0,0 @@
-from copy import deepcopy
-
-from eth2spec.test.context import spec_state_test, with_all_phases
-from eth2spec.test.helpers.state import (
-    next_epoch,
-    next_slot
-)
-from eth2spec.test.helpers.block import apply_empty_block
-from eth2spec.test.helpers.attestations import (
-    add_attestations_to_state,
-    get_valid_attestation,
-    sign_attestation)
-from eth2spec.test.phase_0.epoch_processing.run_epoch_process_base import run_epoch_processing_with
-
-
-def run_process_crosslinks(spec, state):
-    yield from run_epoch_processing_with(spec, state, 'process_crosslinks')
-
-
-@with_all_phases
-@spec_state_test
-def test_no_attestations(spec, state):
-    yield from run_process_crosslinks(spec, state)
-
-    for shard in range(spec.SHARD_COUNT):
-        assert state.previous_crosslinks[shard] == state.current_crosslinks[shard]
-
-
-@with_all_phases
-@spec_state_test
-def test_single_crosslink_update_from_current_epoch(spec, state):
-    next_epoch(spec, state)
-
-    attestation = get_valid_attestation(spec, state, signed=True)
-
-    add_attestations_to_state(spec, state, [attestation], state.slot + spec.MIN_ATTESTATION_INCLUSION_DELAY)
-
-    assert len(state.current_epoch_attestations) == 1
-
-    shard = attestation.data.crosslink.shard
-    pre_crosslink = deepcopy(state.current_crosslinks[shard])
-
-    yield from run_process_crosslinks(spec, state)
-
-    assert state.previous_crosslinks[shard] != state.current_crosslinks[shard]
-    assert pre_crosslink != state.current_crosslinks[shard]
-
-
-@with_all_phases
-@spec_state_test
-def test_single_crosslink_update_from_previous_epoch(spec, state):
-    next_epoch(spec, state)
-
-    attestation = get_valid_attestation(spec, state, signed=True)
-
-    add_attestations_to_state(spec, state, [attestation], state.slot + spec.SLOTS_PER_EPOCH)
-
-    assert len(state.previous_epoch_attestations) == 1
-
-    shard = attestation.data.crosslink.shard
-    pre_crosslink = deepcopy(state.current_crosslinks[shard])
-
-    crosslink_deltas = spec.get_crosslink_deltas(state)
-
-    yield from run_process_crosslinks(spec, state)
-
-    assert state.previous_crosslinks[shard] != state.current_crosslinks[shard]
-    assert pre_crosslink != state.current_crosslinks[shard]
-
-    # ensure rewarded
-    for index in spec.get_crosslink_committee(
-            state,
-            attestation.data.target.epoch,
-            attestation.data.crosslink.shard):
-        assert crosslink_deltas[0][index] > 0
-        assert crosslink_deltas[1][index] == 0
-
-
-@with_all_phases
-@spec_state_test
-def test_double_late_crosslink(spec, state):
-    if spec.get_committee_count(state, spec.get_current_epoch(state)) < spec.SHARD_COUNT:
-        print("warning: ignoring test, test-assumptions are incompatible with configuration")
-        return
-
-    next_epoch(spec, state)
-    state.slot += 4
-
-    attestation_1 = get_valid_attestation(spec, state, signed=True)
-
-    # add attestation_1 to next epoch
-    next_epoch(spec, state)
-    add_attestations_to_state(spec, state, [attestation_1], state.slot + spec.MIN_ATTESTATION_INCLUSION_DELAY)
-
-    for _ in range(spec.SLOTS_PER_EPOCH):
-        attestation_2 = get_valid_attestation(spec, state)
-        if attestation_2.data.crosslink.shard == attestation_1.data.crosslink.shard:
-            sign_attestation(spec, state, attestation_2)
-            break
-        next_slot(spec, state)
-    apply_empty_block(spec, state)
-
-    # add attestation_2 in the next epoch after attestation_1 has
-    # already updated the relevant crosslink
-    next_epoch(spec, state)
-    add_attestations_to_state(spec, state, [attestation_2], state.slot + spec.MIN_ATTESTATION_INCLUSION_DELAY)
-
-    assert len(state.previous_epoch_attestations) == 1
-    assert len(state.current_epoch_attestations) == 0
-
-    crosslink_deltas = spec.get_crosslink_deltas(state)
-
-    yield from run_process_crosslinks(spec, state)
-
-    shard = attestation_2.data.crosslink.shard
-
-    # ensure that the current crosslinks were not updated by the second attestation
-    assert state.previous_crosslinks[shard] == state.current_crosslinks[shard]
-    # ensure no reward, only penalties for the failed crosslink
-    for index in spec.get_crosslink_committee(
-            state,
-            attestation_2.data.target.epoch,
-            attestation_2.data.crosslink.shard):
-        assert crosslink_deltas[0][index] == 0
-        assert crosslink_deltas[1][index] > 0
-
-
-@with_all_phases
-@spec_state_test
-def test_tied_crosslink_between_epochs(spec, state):
-    """
-    Addresses scenario found at Interop described by this test case
-    https://github.com/djrtwo/interop-test-cases/tree/master/tests/night_one_16_crosslinks
-
-    Ensure that ties on crosslinks between epochs are broken by previous epoch.
-    """
-    prev_attestation = get_valid_attestation(spec, state)
-    sign_attestation(spec, state, prev_attestation)
-
-    # add attestation at start of next epoch
-    next_epoch(spec, state)
-    add_attestations_to_state(spec, state, [prev_attestation], state.slot)
-
-    # create attestation from current epoch for same shard
-    for _ in range(spec.SLOTS_PER_EPOCH):
-        cur_attestation = get_valid_attestation(spec, state)
-        if cur_attestation.data.crosslink.shard == prev_attestation.data.crosslink.shard:
-            sign_attestation(spec, state, cur_attestation)
-            break
-        next_slot(spec, state)
-
-    add_attestations_to_state(spec, state, [cur_attestation], state.slot + spec.MIN_ATTESTATION_INCLUSION_DELAY)
-
-    shard = prev_attestation.data.crosslink.shard
-    pre_crosslink = deepcopy(state.current_crosslinks[shard])
-
-    assert prev_attestation.data.crosslink != cur_attestation.data.crosslink
-    assert state.current_crosslinks[shard] == spec.Crosslink()
-    assert len(state.previous_epoch_attestations) == 1
-    assert len(state.current_epoch_attestations) == 1
-
-    yield from run_process_crosslinks(spec, state)
-
-    assert state.previous_crosslinks[shard] != state.current_crosslinks[shard]
-    assert pre_crosslink != state.current_crosslinks[shard]
-    assert state.current_crosslinks[shard] == prev_attestation.data.crosslink
@@ -89,20 +89,3 @@ def test_historical_root_accumulator(spec, state):
     yield from run_process_final_updates(spec, state)
 
     assert len(state.historical_roots) == history_len + 1
-
-
-@with_all_phases
-@spec_state_test
-def test_compact_committees_root(spec, state):
-    assert spec.SLOTS_PER_ETH1_VOTING_PERIOD > spec.SLOTS_PER_EPOCH
-    # skip ahead to the end of the epoch
-    state.slot = spec.SLOTS_PER_EPOCH - 1
-
-    next_epoch = spec.get_current_epoch(state) + 1
-
-    # ensure that order in which items are processed in final_updates
-    # does not alter the expected_root
-    expected_root = spec.get_compact_committees_root(state, next_epoch)
-    yield from run_process_final_updates(spec, state)
-
-    assert state.compact_committees_roots[next_epoch % spec.EPOCHS_PER_HISTORICAL_VECTOR] == expected_root
@@ -8,14 +8,6 @@ def run_process_just_and_fin(spec, state):
     yield from run_epoch_processing_with(spec, state, 'process_justification_and_finalization')
 
 
-def get_shards_for_slot(spec, state, slot):
-    epoch = spec.compute_epoch_of_slot(slot)
-    epoch_start_shard = spec.get_start_shard(state, epoch)
-    committees_per_slot = spec.get_committee_count(state, epoch) // spec.SLOTS_PER_EPOCH
-    shard = (epoch_start_shard + committees_per_slot * (slot % spec.SLOTS_PER_EPOCH)) % spec.SHARD_COUNT
-    return [shard + i for i in range(committees_per_slot)]
-
-
 def add_mock_attestations(spec, state, epoch, source, target, sufficient_support=False, messed_up_target=False):
     # we must be at the end of the epoch
     assert (state.slot + 1) % spec.SLOTS_PER_EPOCH == 0
@@ -33,15 +25,16 @@ def add_mock_attestations(spec, state, epoch, source, target, sufficient_support
     total_balance = spec.get_total_active_balance(state)
     remaining_balance = total_balance * 2 // 3
 
-    start_slot = spec.compute_start_slot_of_epoch(epoch)
+    start_slot = spec.compute_start_slot_at_epoch(epoch)
     for slot in range(start_slot, start_slot + spec.SLOTS_PER_EPOCH):
-        for shard in get_shards_for_slot(spec, state, slot):
+        committees_per_slot = spec.get_committee_count_at_slot(state, slot)
+        for index in range(committees_per_slot):
             # Check if we already have had sufficient balance. (and undone if we don't want it).
             # If so, do not create more attestations. (we do not have empty pending attestations normally anyway)
             if remaining_balance < 0:
                 return
 
-            committee = spec.get_crosslink_committee(state, spec.compute_epoch_of_slot(slot), shard)
+            committee = spec.get_beacon_committee(state, slot, index)
             # Create a bitfield filled with the given count per attestation,
             # exactly on the right-most part of the committee field.
 
@@ -60,10 +53,11 @@ def add_mock_attestations(spec, state, epoch, source, target, sufficient_support
             attestations.append(spec.PendingAttestation(
                 aggregation_bits=aggregation_bits,
                 data=spec.AttestationData(
+                    slot=slot,
                     beacon_block_root=b'\xff' * 32,  # irrelevant to testing
                     source=source,
                     target=target,
-                    crosslink=spec.Crosslink(shard=shard)
+                    index=index,
                 ),
                 inclusion_delay=1,
             ))
@@ -82,7 +76,7 @@ def get_checkpoints(spec, epoch):
 
 def put_checkpoints_in_block_roots(spec, state, checkpoints):
     for c in checkpoints:
-        state.block_roots[spec.compute_start_slot_of_epoch(c.epoch) % spec.SLOTS_PER_HISTORICAL_ROOT] = c.root
+        state.block_roots[spec.compute_start_slot_at_epoch(c.epoch) % spec.SLOTS_PER_HISTORICAL_ROOT] = c.root
 
 
 def finalize_on_234(spec, state, epoch, sufficient_support):
@@ -21,7 +21,7 @@ def test_activation(spec, state):
     index = 0
     mock_deposit(spec, state, index)
 
-    for _ in range(spec.ACTIVATION_EXIT_DELAY + 1):
+    for _ in range(spec.MAX_SEED_LOOKAHEAD + 1):
         next_epoch(spec, state)
 
     yield from run_process_registry_updates(spec, state)
@@ -73,7 +73,7 @@ def test_ejection(spec, state):
     # Mock an ejection
     state.validators[index].effective_balance = spec.EJECTION_BALANCE
 
-    for _ in range(spec.ACTIVATION_EXIT_DELAY + 1):
+    for _ in range(spec.MAX_SEED_LOOKAHEAD + 1):
         next_epoch(spec, state)
 
     yield from run_process_registry_updates(spec, state)
@@ -22,7 +22,7 @@ def run_process_rewards_and_penalties(spec, state):
 def test_genesis_epoch_no_attestations_no_penalties(spec, state):
     pre_state = deepcopy(state)
 
-    assert spec.compute_epoch_of_slot(state.slot) == spec.GENESIS_EPOCH
+    assert spec.compute_epoch_at_slot(state.slot) == spec.GENESIS_EPOCH
 
     yield from run_process_rewards_and_penalties(spec, state)
 
@@ -46,7 +46,7 @@ def test_genesis_epoch_full_attestations_no_rewards(spec, state):
         next_slot(spec, state)
 
     # ensure we have not crossed the epoch boundary
-    assert spec.compute_epoch_of_slot(state.slot) == spec.GENESIS_EPOCH
+    assert spec.compute_epoch_at_slot(state.slot) == spec.GENESIS_EPOCH
 
     pre_state = deepcopy(state)
 
@@ -69,7 +69,7 @@ def prepare_state_with_full_attestations(spec, state):
         add_attestations_to_state(spec, state, [include_att], state.slot)
         next_slot(spec, state)
 
-    assert spec.compute_epoch_of_slot(state.slot) == spec.GENESIS_EPOCH + 1
+    assert spec.compute_epoch_at_slot(state.slot) == spec.GENESIS_EPOCH + 1
     assert len(state.previous_epoch_attestations) == spec.SLOTS_PER_EPOCH
 
     return attestations
@@ -109,7 +109,7 @@ def test_full_attestations_misc_balances(spec, state):
     for index in range(len(pre_state.validators)):
         if index in attesting_indices:
             assert state.balances[index] > pre_state.balances[index]
-        elif spec.is_active_validator(pre_state.validators[index], spec.compute_epoch_of_slot(state.slot)):
+        elif spec.is_active_validator(pre_state.validators[index], spec.compute_epoch_at_slot(state.slot)):
             assert state.balances[index] < pre_state.balances[index]
         else:
             assert state.balances[index] == pre_state.balances[index]
@@ -121,7 +121,7 @@ def test_no_attestations_all_penalties(spec, state):
     next_epoch(spec, state)
     pre_state = deepcopy(state)
 
-    assert spec.compute_epoch_of_slot(state.slot) == spec.GENESIS_EPOCH + 1
+    assert spec.compute_epoch_at_slot(state.slot) == spec.GENESIS_EPOCH + 1
 
    yield from run_process_rewards_and_penalties(spec, state)
 
@@ -189,7 +189,7 @@ def test_attestations_some_slashed(spec, state):
     for i in range(spec.MIN_PER_EPOCH_CHURN_LIMIT):
         spec.slash_validator(state, attesting_indices_before_slashings[i])
 
-    assert spec.compute_epoch_of_slot(state.slot) == spec.GENESIS_EPOCH + 1
+    assert spec.compute_epoch_at_slot(state.slot) == spec.GENESIS_EPOCH + 1
     assert len(state.previous_epoch_attestations) == spec.SLOTS_PER_EPOCH
 
     pre_state = deepcopy(state)
@@ -0,0 +1,350 @@
+from eth2spec.test.helpers.custody import (
+    get_valid_bit_challenge,
+    get_valid_custody_response,
+    get_custody_test_vector,
+    get_custody_merkle_root
+)
+from eth2spec.test.helpers.attestations import (
+    get_valid_attestation,
+)
+from eth2spec.utils.ssz.ssz_impl import hash_tree_root
+from eth2spec.test.helpers.state import next_epoch, get_balance
+from eth2spec.test.helpers.block import apply_empty_block
+from eth2spec.test.context import (
+    with_all_phases_except,
+    spec_state_test,
+    expect_assertion_error,
+)
+from eth2spec.test.phase_0.block_processing.test_process_attestation import run_attestation_processing
+
+
+def run_bit_challenge_processing(spec, state, custody_bit_challenge, valid=True):
+    """
+    Run ``process_bit_challenge``, yielding:
+      - pre-state ('pre')
+      - CustodyBitChallenge ('custody_bit_challenge')
+      - post-state ('post').
+    If ``valid == False``, run expecting ``AssertionError``
+    """
+    yield 'pre', state
+    yield 'custody_bit_challenge', custody_bit_challenge
+
+    if not valid:
+        expect_assertion_error(lambda: spec.process_bit_challenge(state, custody_bit_challenge))
+        yield 'post', None
+        return
+
+    spec.process_bit_challenge(state, custody_bit_challenge)
+
+    assert state.custody_bit_challenge_records[state.custody_challenge_index - 1].chunk_bits_merkle_root == \
+        hash_tree_root(custody_bit_challenge.chunk_bits)
+    assert state.custody_bit_challenge_records[state.custody_challenge_index - 1].challenger_index == \
+        custody_bit_challenge.challenger_index
+    assert state.custody_bit_challenge_records[state.custody_challenge_index - 1].responder_index == \
+        custody_bit_challenge.responder_index
+
+    yield 'post', state
+
+
+def run_custody_response_processing(spec, state, custody_response, valid=True):
+    """
+    Run ``process_bit_challenge_response``, yielding:
+      - pre-state ('pre')
+      - CustodyResponse ('custody_response')
+      - post-state ('post').
+    If ``valid == False``, run expecting ``AssertionError``
+    """
+    yield 'pre', state
+    yield 'custody_response', custody_response
+
+    if not valid:
+        expect_assertion_error(lambda: spec.process_custody_response(state, custody_response))
+        yield 'post', None
+        return
+
+    # TODO: Add capability to also process chunk challenges, not only bit challenges
+    challenge = state.custody_bit_challenge_records[custody_response.challenge_index]
+    pre_slashed_balance = get_balance(state, challenge.challenger_index)
+
+    spec.process_custody_response(state, custody_response)
+
+    slashed_validator = state.validators[challenge.challenger_index]
+
+    assert slashed_validator.slashed
+    assert slashed_validator.exit_epoch < spec.FAR_FUTURE_EPOCH
+    assert slashed_validator.withdrawable_epoch < spec.FAR_FUTURE_EPOCH
+
+    assert get_balance(state, challenge.challenger_index) < pre_slashed_balance
+    yield 'post', state
+
+
+@with_all_phases_except(['phase0'])
+@spec_state_test
+def test_challenge_appended(spec, state):
+    state.slot = spec.SLOTS_PER_EPOCH
+    attestation = get_valid_attestation(spec, state, signed=True)
+
+    test_vector = get_custody_test_vector(
+        spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
+    shard_root = get_custody_merkle_root(test_vector)
+    attestation.data.crosslink.data_root = shard_root
+    attestation.custody_bits[0] = 0
+
+    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
+
+    _, _, _ = run_attestation_processing(spec, state, attestation)
+
+    state.slot += spec.SLOTS_PER_EPOCH * spec.EPOCHS_PER_CUSTODY_PERIOD
+
+    challenge = get_valid_bit_challenge(spec, state, attestation)
+
+    yield from run_bit_challenge_processing(spec, state, challenge)
+
+
+@with_all_phases_except(['phase0'])
+@spec_state_test
+def test_multiple_epochs_custody(spec, state):
+    state.slot = spec.SLOTS_PER_EPOCH * 3
+    attestation = get_valid_attestation(spec, state, signed=True)
+
+    test_vector = get_custody_test_vector(
+        spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
+    shard_root = get_custody_merkle_root(test_vector)
+    attestation.data.crosslink.data_root = shard_root
+    attestation.custody_bits[0] = 0
+
+    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
+
+    _, _, _ = run_attestation_processing(spec, state, attestation)
+
+    state.slot += spec.SLOTS_PER_EPOCH * (spec.EPOCHS_PER_CUSTODY_PERIOD - 1)
+
+    challenge = get_valid_bit_challenge(spec, state, attestation)
+
+    yield from run_bit_challenge_processing(spec, state, challenge)
+
+
+@with_all_phases_except(['phase0'])
+@spec_state_test
+def test_many_epochs_custody(spec, state):
+    state.slot = spec.SLOTS_PER_EPOCH * 100
+    attestation = get_valid_attestation(spec, state, signed=True)
+
+    test_vector = get_custody_test_vector(
+        spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
+    shard_root = get_custody_merkle_root(test_vector)
+    attestation.data.crosslink.data_root = shard_root
+    attestation.custody_bits[0] = 0
+
+    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
+
+    _, _, _ = run_attestation_processing(spec, state, attestation)
+
+    state.slot += spec.SLOTS_PER_EPOCH * (spec.EPOCHS_PER_CUSTODY_PERIOD - 1)
+
+    challenge = get_valid_bit_challenge(spec, state, attestation)
+
+    yield from run_bit_challenge_processing(spec, state, challenge)
+
+
+@with_all_phases_except(['phase0'])
+@spec_state_test
+def test_off_chain_attestation(spec, state):
+    state.slot = spec.SLOTS_PER_EPOCH
+    attestation = get_valid_attestation(spec, state, signed=True)
+
+    test_vector = get_custody_test_vector(
+        spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
+    shard_root = get_custody_merkle_root(test_vector)
+    attestation.data.crosslink.data_root = shard_root
+    attestation.custody_bits[0] = 0
+
+    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
+    state.slot += spec.SLOTS_PER_EPOCH * spec.EPOCHS_PER_CUSTODY_PERIOD
+
+    challenge = get_valid_bit_challenge(spec, state, attestation)
+
+    yield from run_bit_challenge_processing(spec, state, challenge)
+
+
+@with_all_phases_except(['phase0'])
+@spec_state_test
+def test_invalid_custody_bit_challenge(spec, state):
+    state.slot = spec.SLOTS_PER_EPOCH
+    attestation = get_valid_attestation(spec, state, signed=True)
+
+    test_vector = get_custody_test_vector(
+        spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
+    shard_root = get_custody_merkle_root(test_vector)
+    attestation.data.crosslink.data_root = shard_root
+    attestation.custody_bits[0] = 0
+
+    state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
+
+    _, _, _ = run_attestation_processing(spec, state, attestation)
+
+    state.slot += spec.SLOTS_PER_EPOCH * spec.EPOCHS_PER_CUSTODY_PERIOD
+
+    challenge = get_valid_bit_challenge(spec, state, attestation, invalid_custody_bit=True)
+
+    yield from run_bit_challenge_processing(spec, state, challenge, valid=False)
+
+
+@with_all_phases_except(['phase0'])
+@spec_state_test
+def test_max_reveal_lateness_1(spec, state):
+    next_epoch(spec, state)
+    apply_empty_block(spec, state)
+
+    attestation = get_valid_attestation(spec, state, signed=True)
+
+    test_vector = get_custody_test_vector(
+        spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
+    shard_root = get_custody_merkle_root(test_vector)
+    attestation.data.crosslink.data_root = shard_root
+    attestation.custody_bits[0] = 0
+
+    next_epoch(spec, state)
+    apply_empty_block(spec, state)
|
||||
|
||||
_, _, _ = run_attestation_processing(spec, state, attestation)
|
||||
|
||||
challenge = get_valid_bit_challenge(spec, state, attestation)
|
||||
|
||||
responder_index = challenge.responder_index
|
||||
target_epoch = attestation.data.target.epoch
|
||||
|
||||
state.validators[responder_index].max_reveal_lateness = 3
|
||||
|
||||
latest_reveal_epoch = spec.get_randao_epoch_for_custody_period(
|
||||
spec.get_custody_period_for_validator(state, responder_index, target_epoch),
|
||||
responder_index
|
||||
) + 2 * spec.EPOCHS_PER_CUSTODY_PERIOD + state.validators[responder_index].max_reveal_lateness
|
||||
|
||||
while spec.get_current_epoch(state) < latest_reveal_epoch - 2:
|
||||
next_epoch(spec, state)
|
||||
apply_empty_block(spec, state)
|
||||
|
||||
yield from run_bit_challenge_processing(spec, state, challenge)
|
||||
|
||||
|
||||
@with_all_phases_except(['phase0'])
|
||||
@spec_state_test
|
||||
def test_max_reveal_lateness_2(spec, state):
|
||||
next_epoch(spec, state)
|
||||
apply_empty_block(spec, state)
|
||||
|
||||
attestation = get_valid_attestation(spec, state, signed=True)
|
||||
|
||||
test_vector = get_custody_test_vector(
|
||||
spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
|
||||
shard_root = get_custody_merkle_root(test_vector)
|
||||
attestation.data.crosslink.data_root = shard_root
|
||||
attestation.custody_bits[0] = 0
|
||||
|
||||
next_epoch(spec, state)
|
||||
apply_empty_block(spec, state)
|
||||
|
||||
_, _, _ = run_attestation_processing(spec, state, attestation)
|
||||
|
||||
challenge = get_valid_bit_challenge(spec, state, attestation)
|
||||
|
||||
responder_index = challenge.responder_index
|
||||
|
||||
state.validators[responder_index].max_reveal_lateness = 3
|
||||
|
||||
for i in range(spec.get_randao_epoch_for_custody_period(
|
||||
spec.get_custody_period_for_validator(state, responder_index),
|
||||
responder_index
|
||||
) + 2 * spec.EPOCHS_PER_CUSTODY_PERIOD + state.validators[responder_index].max_reveal_lateness - 1):
|
||||
next_epoch(spec, state)
|
||||
apply_empty_block(spec, state)
|
||||
|
||||
yield from run_bit_challenge_processing(spec, state, challenge, False)
|
||||
|
||||
|
||||
@with_all_phases_except(['phase0'])
|
||||
@spec_state_test
|
||||
def test_custody_response(spec, state):
|
||||
state.slot = spec.SLOTS_PER_EPOCH
|
||||
attestation = get_valid_attestation(spec, state, signed=True)
|
||||
|
||||
test_vector = get_custody_test_vector(
|
||||
spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
|
||||
shard_root = get_custody_merkle_root(test_vector)
|
||||
attestation.data.crosslink.data_root = shard_root
|
||||
attestation.custody_bits[0] = 0
|
||||
|
||||
state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
|
||||
|
||||
_, _, _ = run_attestation_processing(spec, state, attestation)
|
||||
|
||||
state.slot += spec.SLOTS_PER_EPOCH * spec.EPOCHS_PER_CUSTODY_PERIOD
|
||||
|
||||
challenge = get_valid_bit_challenge(spec, state, attestation)
|
||||
|
||||
_, _, _ = run_bit_challenge_processing(spec, state, challenge)
|
||||
|
||||
bit_challenge_index = state.custody_challenge_index - 1
|
||||
|
||||
custody_response = get_valid_custody_response(spec, state, challenge, test_vector, bit_challenge_index)
|
||||
|
||||
yield from run_custody_response_processing(spec, state, custody_response)
|
||||
|
||||
|
||||
@with_all_phases_except(['phase0'])
|
||||
@spec_state_test
|
||||
def test_custody_response_multiple_epochs(spec, state):
|
||||
state.slot = spec.SLOTS_PER_EPOCH * 3
|
||||
attestation = get_valid_attestation(spec, state, signed=True)
|
||||
|
||||
test_vector = get_custody_test_vector(
|
||||
spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
|
||||
shard_root = get_custody_merkle_root(test_vector)
|
||||
attestation.data.crosslink.data_root = shard_root
|
||||
attestation.custody_bits[0] = 0
|
||||
|
||||
state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
|
||||
|
||||
_, _, _ = run_attestation_processing(spec, state, attestation)
|
||||
|
||||
state.slot += spec.SLOTS_PER_EPOCH * spec.EPOCHS_PER_CUSTODY_PERIOD
|
||||
|
||||
challenge = get_valid_bit_challenge(spec, state, attestation)
|
||||
|
||||
_, _, _ = run_bit_challenge_processing(spec, state, challenge)
|
||||
|
||||
bit_challenge_index = state.custody_challenge_index - 1
|
||||
|
||||
custody_response = get_valid_custody_response(spec, state, challenge, test_vector, bit_challenge_index)
|
||||
|
||||
yield from run_custody_response_processing(spec, state, custody_response)
|
||||
|
||||
|
||||
@with_all_phases_except(['phase0'])
|
||||
@spec_state_test
|
||||
def test_custody_response_many_epochs(spec, state):
|
||||
state.slot = spec.SLOTS_PER_EPOCH * 100
|
||||
attestation = get_valid_attestation(spec, state, signed=True)
|
||||
|
||||
test_vector = get_custody_test_vector(
|
||||
spec.get_custody_chunk_count(attestation.data.crosslink) * spec.BYTES_PER_CUSTODY_CHUNK)
|
||||
shard_root = get_custody_merkle_root(test_vector)
|
||||
attestation.data.crosslink.data_root = shard_root
|
||||
attestation.custody_bits[0] = 0
|
||||
|
||||
state.slot += spec.MIN_ATTESTATION_INCLUSION_DELAY
|
||||
|
||||
_, _, _ = run_attestation_processing(spec, state, attestation)
|
||||
|
||||
state.slot += spec.SLOTS_PER_EPOCH * spec.EPOCHS_PER_CUSTODY_PERIOD
|
||||
|
||||
challenge = get_valid_bit_challenge(spec, state, attestation)
|
||||
|
||||
_, _, _ = run_bit_challenge_processing(spec, state, challenge)
|
||||
|
||||
bit_challenge_index = state.custody_challenge_index - 1
|
||||
|
||||
custody_response = get_valid_custody_response(spec, state, challenge, test_vector, bit_challenge_index)
|
||||
|
||||
yield from run_custody_response_processing(spec, state, custody_response)
|
|
@ -0,0 +1,118 @@
from eth2spec.test.helpers.custody import get_valid_custody_key_reveal
from eth2spec.test.context import (
    with_all_phases_except,
    spec_state_test,
    expect_assertion_error,
    always_bls,
)


def run_custody_key_reveal_processing(spec, state, custody_key_reveal, valid=True):
    """
    Run ``process_custody_key_reveal``, yielding:
      - pre-state ('pre')
      - custody_key_reveal ('custody_key_reveal')
      - post-state ('post').
    If ``valid == False``, run expecting ``AssertionError``.
    """
    yield 'pre', state
    yield 'custody_key_reveal', custody_key_reveal

    if not valid:
        expect_assertion_error(lambda: spec.process_custody_key_reveal(state, custody_key_reveal))
        yield 'post', None
        return

    revealer_index = custody_key_reveal.revealer_index

    pre_next_custody_secret_to_reveal = \
        state.validators[revealer_index].next_custody_secret_to_reveal
    pre_reveal_lateness = state.validators[revealer_index].max_reveal_lateness

    spec.process_custody_key_reveal(state, custody_key_reveal)

    post_next_custody_secret_to_reveal = \
        state.validators[revealer_index].next_custody_secret_to_reveal
    post_reveal_lateness = state.validators[revealer_index].max_reveal_lateness

    assert post_next_custody_secret_to_reveal == pre_next_custody_secret_to_reveal + 1

    if spec.get_current_epoch(state) > spec.get_randao_epoch_for_custody_period(
        pre_next_custody_secret_to_reveal,
        revealer_index
    ) + spec.EPOCHS_PER_CUSTODY_PERIOD:
        assert post_reveal_lateness > 0
        if pre_reveal_lateness == 0:
            assert post_reveal_lateness == spec.get_current_epoch(state) - spec.get_randao_epoch_for_custody_period(
                pre_next_custody_secret_to_reveal,
                revealer_index
            ) - spec.EPOCHS_PER_CUSTODY_PERIOD
    else:
        if pre_reveal_lateness > 0:
            assert post_reveal_lateness < pre_reveal_lateness

    yield 'post', state


@with_all_phases_except(['phase0'])
@always_bls
@spec_state_test
def test_success(spec, state):
    state.slot += spec.EPOCHS_PER_CUSTODY_PERIOD * spec.SLOTS_PER_EPOCH
    custody_key_reveal = get_valid_custody_key_reveal(spec, state)

    yield from run_custody_key_reveal_processing(spec, state, custody_key_reveal)


@with_all_phases_except(['phase0'])
@always_bls
@spec_state_test
def test_reveal_too_early(spec, state):
    custody_key_reveal = get_valid_custody_key_reveal(spec, state)

    yield from run_custody_key_reveal_processing(spec, state, custody_key_reveal, False)


@with_all_phases_except(['phase0'])
@always_bls
@spec_state_test
def test_wrong_period(spec, state):
    custody_key_reveal = get_valid_custody_key_reveal(spec, state, period=5)

    yield from run_custody_key_reveal_processing(spec, state, custody_key_reveal, False)


@with_all_phases_except(['phase0'])
@always_bls
@spec_state_test
def test_late_reveal(spec, state):
    state.slot += spec.EPOCHS_PER_CUSTODY_PERIOD * spec.SLOTS_PER_EPOCH * 3 + 150
    custody_key_reveal = get_valid_custody_key_reveal(spec, state)

    yield from run_custody_key_reveal_processing(spec, state, custody_key_reveal)


@with_all_phases_except(['phase0'])
@always_bls
@spec_state_test
def test_double_reveal(spec, state):
    state.slot += spec.EPOCHS_PER_CUSTODY_PERIOD * spec.SLOTS_PER_EPOCH * 2
    custody_key_reveal = get_valid_custody_key_reveal(spec, state)

    _, _, _ = run_custody_key_reveal_processing(spec, state, custody_key_reveal)

    yield from run_custody_key_reveal_processing(spec, state, custody_key_reveal, False)


@with_all_phases_except(['phase0'])
@always_bls
@spec_state_test
def test_max_decrement(spec, state):
    state.slot += spec.EPOCHS_PER_CUSTODY_PERIOD * spec.SLOTS_PER_EPOCH * 3 + 150
    custody_key_reveal = get_valid_custody_key_reveal(spec, state)

    _, _, _ = run_custody_key_reveal_processing(spec, state, custody_key_reveal)

    custody_key_reveal2 = get_valid_custody_key_reveal(spec, state)

    yield from run_custody_key_reveal_processing(spec, state, custody_key_reveal2)
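As an aside for readers of this diff: the pre/operation/post generator convention the helpers above follow can be sketched in isolation. All names below (`run_op_processing`, `apply_op`, the dict-based state) are illustrative stand-ins, not code from the spec repo.

```python
def run_op_processing(state, op, valid=True):
    """Yield ('pre', ...), ('op', ...), then ('post', ...) or ('post', None) on expected failure."""
    yield 'pre', dict(state)  # snapshot before mutation
    yield 'op', op
    if not valid:
        try:
            apply_op(state, op)
            raise Exception("operation unexpectedly succeeded")
        except AssertionError:
            pass
        yield 'post', None
        return
    apply_op(state, op)
    yield 'post', dict(state)


def apply_op(state, op):
    # toy operation: op must be positive, and it increments a counter
    assert op > 0
    state['count'] += 1


state = {'count': 0}
out = dict(run_op_processing(state, 5))
assert out['post'] == {'count': 1}
out_invalid = dict(run_op_processing(state, -1, valid=False))
assert out_invalid['post'] is None
```

Collecting the yields with `dict(...)` (or unpacking them as `_, _, _ = ...`) is why the tests above can either forward the vectors with `yield from` or discard them when only the state mutation matters.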
@ -98,25 +98,21 @@ def test_reveal_with_custody_padding_minus_one(spec, state):
@spec_state_test
@never_bls
def test_double_reveal(spec, state):
    epoch = spec.get_current_epoch(state) + spec.RANDAO_PENALTY_EPOCHS
    randao_key_reveal1 = get_valid_early_derived_secret_reveal(
        spec,
        state,
        epoch,
    )
    _, _, _ = run_early_derived_secret_reveal_processing(spec, state, randao_key_reveal1)

    randao_key_reveal2 = get_valid_early_derived_secret_reveal(
        spec,
        state,
        epoch,
    )

    yield from run_early_derived_secret_reveal_processing(spec, state, randao_key_reveal2, False)


@with_all_phases_except(['phase0'])
@ -0,0 +1,177 @@
from copy import deepcopy

from eth2spec.test.helpers.phase1.shard_block import (
    build_empty_shard_block,
    sign_shard_block,
)
from eth2spec.test.helpers.phase1.shard_state import (
    configure_shard_state,
    shard_state_transition_and_sign_block,
)
from eth2spec.test.context import (
    always_bls,
    expect_assertion_error,
    spec_state_test,
    with_all_phases_except,
)


@with_all_phases_except(['phase0'])
@spec_state_test
@always_bls
def test_process_empty_shard_block(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    block = build_empty_shard_block(
        spec,
        beacon_state,
        shard_state,
        slot=shard_state.slot + 1,
        signed=True,
        full_attestation=False,
    )

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state

    shard_state_transition_and_sign_block(spec, beacon_state, shard_state, block)

    yield 'blocks', [block]
    yield 'post', shard_state


@with_all_phases_except(['phase0'])
@spec_state_test
@always_bls
def test_process_full_attestation_shard_block(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    block = build_empty_shard_block(
        spec,
        beacon_state,
        shard_state,
        slot=shard_state.slot + 1,
        signed=True,
        full_attestation=True,
    )

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state

    shard_state_transition_and_sign_block(spec, beacon_state, shard_state, block)

    yield 'blocks', [block]
    yield 'post', shard_state


@with_all_phases_except(['phase0'])
@spec_state_test
def test_prev_slot_block_transition(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    # Go to clean slot
    spec.process_shard_slots(shard_state, shard_state.slot + 1)
    # Make a block for it
    block = build_empty_shard_block(spec, beacon_state, shard_state, slot=shard_state.slot, signed=True)
    # Transition to the next slot; the above block is no longer valid on top of the new state.
    spec.process_shard_slots(shard_state, shard_state.slot + 1)

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state
    expect_assertion_error(
        lambda: spec.shard_state_transition(beacon_state, shard_state, block)
    )
    yield 'blocks', [block]
    yield 'post', None


@with_all_phases_except(['phase0'])
@spec_state_test
def test_same_slot_block_transition(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    # Same slot on top of pre-state, but move out of slot 0 first.
    spec.process_shard_slots(shard_state, shard_state.slot + 1)
    block = build_empty_shard_block(spec, beacon_state, shard_state, slot=shard_state.slot, signed=True)

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state

    shard_state_transition_and_sign_block(spec, beacon_state, shard_state, block)

    yield 'blocks', [block]
    yield 'post', shard_state


@with_all_phases_except(['phase0'])
@spec_state_test
def test_invalid_state_root(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    spec.process_shard_slots(shard_state, shard_state.slot + 1)
    block = build_empty_shard_block(spec, beacon_state, shard_state, slot=shard_state.slot)
    block.state_root = b'\x36' * 32
    sign_shard_block(spec, beacon_state, shard_state, block)

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state
    expect_assertion_error(
        lambda: spec.shard_state_transition(beacon_state, shard_state, block, validate_state_root=True)
    )
    yield 'blocks', [block]
    yield 'post', None


@with_all_phases_except(['phase0'])
@spec_state_test
def test_skipped_slots(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    block = build_empty_shard_block(spec, beacon_state, shard_state, slot=shard_state.slot + 3, signed=True)

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state

    shard_state_transition_and_sign_block(spec, beacon_state, shard_state, block)

    yield 'blocks', [block]
    yield 'post', shard_state

    assert shard_state.slot == block.slot
    latest_block_header = deepcopy(shard_state.latest_block_header)
    latest_block_header.state_root = shard_state.hash_tree_root()
    assert latest_block_header.signing_root() == block.signing_root()


@with_all_phases_except(['phase0'])
@spec_state_test
def test_empty_shard_period_transition(spec, state):
    beacon_state, shard_state = configure_shard_state(spec, state)

    # modify some of the deltas to ensure the period transition works properly
    stub_delta = 10
    shard_state.newer_committee_positive_deltas[0] = stub_delta
    shard_state.newer_committee_negative_deltas[0] = stub_delta

    slot = shard_state.slot + spec.SHARD_SLOTS_PER_EPOCH * spec.EPOCHS_PER_SHARD_PERIOD
    beacon_state.slot = spec.compute_epoch_of_shard_slot(slot) * spec.SLOTS_PER_EPOCH - 4
    spec.process_slots(beacon_state, spec.compute_epoch_of_shard_slot(slot) * spec.SLOTS_PER_EPOCH)

    # all validators get slashed for not revealing keys;
    # undo this to allow for a block proposal
    for index in range(len(beacon_state.validators)):
        beacon_state.validators[index].slashed = False
    block = build_empty_shard_block(spec, beacon_state, shard_state, slot=slot, signed=True)

    yield 'pre', shard_state
    yield 'beacon_state', beacon_state

    shard_state_transition_and_sign_block(spec, beacon_state, shard_state, block)

    yield 'blocks', [block]
    yield 'post', shard_state

    assert shard_state.older_committee_positive_deltas[0] == stub_delta
    assert shard_state.older_committee_negative_deltas[0] == stub_delta
    assert shard_state.newer_committee_positive_deltas[0] == 0
    assert shard_state.newer_committee_negative_deltas[0] == 0
@ -378,40 +378,6 @@ def test_voluntary_exit(spec, state):
    assert state.validators[validator_index].exit_epoch < spec.FAR_FUTURE_EPOCH


# @with_all_phases
# @spec_state_test
# def test_transfer(spec, state):
#     # overwrite default 0 to test
#     spec.MAX_TRANSFERS = 1
#
#     sender_index = spec.get_active_validator_indices(state, spec.get_current_epoch(state))[-1]
#     amount = get_balance(state, sender_index)
#
#     transfer = get_valid_transfer(spec, state, state.slot + 1, sender_index, amount, signed=True)
#     recipient_index = transfer.recipient
#     pre_transfer_recipient_balance = get_balance(state, recipient_index)
#
#     # un-activate so validator can transfer
#     state.validators[sender_index].activation_eligibility_epoch = spec.FAR_FUTURE_EPOCH
#
#     yield 'pre', state
#
#     # Add to state via block transition
#     block = build_empty_block_for_next_slot(spec, state)
#     block.body.transfers.append(transfer)
#     sign_block(spec, state, block)
#
#     state_transition_and_sign_block(spec, state, block)
#
#     yield 'blocks', [block]
#     yield 'post', state
#
#     sender_balance = get_balance(state, sender_index)
#     recipient_balance = get_balance(state, recipient_index)
#     assert sender_balance == 0
#     assert recipient_balance == pre_transfer_recipient_balance + amount


@with_all_phases
@spec_state_test
def test_balance_driven_status_transitions(spec, state):
@ -509,6 +475,8 @@ def test_eth1_data_votes_no_consensus(spec, state):
    if spec.SLOTS_PER_ETH1_VOTING_PERIOD > 16:
        return

    pre_eth1_hash = state.eth1_data.block_hash

    offset_block = build_empty_block(spec, state, slot=spec.SLOTS_PER_ETH1_VOTING_PERIOD - 1)
    sign_block(spec, state, offset_block)
    state_transition_and_sign_block(spec, state, offset_block)


@ -528,7 +496,7 @@ def test_eth1_data_votes_no_consensus(spec, state):
    blocks.append(block)

    assert len(state.eth1_data_votes) == spec.SLOTS_PER_ETH1_VOTING_PERIOD
    assert state.eth1_data.block_hash == pre_eth1_hash

    yield 'blocks', blocks
    yield 'post', state
@ -5,6 +5,7 @@ bls_active = True

STUB_SIGNATURE = b'\x11' * 96
STUB_PUBKEY = b'\x22' * 48
STUB_COORDINATES = bls.api.signature_to_G2(bls.sign(b"", 0, b"\0" * 8))


def only_with_bls(alt_return=None):


@ -47,3 +48,8 @@ def bls_aggregate_signatures(signatures):

def bls_sign(message_hash, privkey, domain):
    return bls.sign(message_hash=message_hash, privkey=privkey,
                    domain=domain)


@only_with_bls(alt_return=STUB_COORDINATES)
def bls_signature_to_G2(signature):
    return bls.api.signature_to_G2(signature)
@ -20,6 +20,13 @@ def calc_merkle_tree_from_leaves(values, layer_count=32):
    return tree


def get_merkle_tree(values, pad_to=None):
    layer_count = (len(values) - 1).bit_length() if pad_to is None else (pad_to - 1).bit_length()
    if len(values) == 0:
        return zerohashes[layer_count]
    return calc_merkle_tree_from_leaves(values, layer_count)


def get_merkle_root(values, pad_to=1):
    if pad_to == 0:
        return zerohashes[0]


@ -29,9 +36,9 @@ def get_merkle_root(values, pad_to=1):
    return calc_merkle_tree_from_leaves(values, layer_count)[-1][0]


def get_merkle_proof(tree, item_index, tree_len=None):
    proof = []
    for i in range(tree_len if tree_len is not None else len(tree)):
        subindex = (item_index // 2**i) ^ 1
        proof.append(tree[i][subindex] if subindex < len(tree[i]) else zerohashes[i])
    return proof
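For readers of this diff, the sibling-selection rule in `get_merkle_proof` (`subindex = (item_index // 2**i) ^ 1`) and how such a proof folds back up to the root can be illustrated with a hand-built four-leaf tree. The `hash_pair` helper and the tree layout below are assumptions for illustration, not the spec's hashing.

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

# Hand-built 4-leaf tree: tree[i] is layer i, leaves at the bottom.
leaves = [bytes([i]) * 32 for i in range(4)]
layer1 = [hash_pair(leaves[0], leaves[1]), hash_pair(leaves[2], leaves[3])]
root = hash_pair(layer1[0], layer1[1])
tree = [leaves, layer1, [root]]

# Same sibling-selection rule as get_merkle_proof above.
item_index = 2
proof = [tree[i][(item_index // 2**i) ^ 1] for i in range(2)]

# Verify: fold the proof back up to the root, branching on the index bit.
value = leaves[item_index]
for i, sibling in enumerate(proof):
    if (item_index >> i) & 1:
        value = hash_pair(sibling, value)
    else:
        value = hash_pair(value, sibling)

assert value == root
```

The `tree_len` parameter the diff adds simply bounds how many layers of siblings are collected, instead of the previous hard-coded 32.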
@ -1,4 +1,4 @@

from typing import Dict, Iterator, Iterable
import copy
from types import GeneratorType


@ -195,6 +195,12 @@ class Container(Series, metaclass=SSZType):
            return {}
        return dict(cls.__annotations__)

    @classmethod
    def get_field_names(cls) -> Iterable[str]:
        if not hasattr(cls, '__annotations__'):  # no container fields
            return ()
        return list(cls.__annotations__.keys())

    @classmethod
    def default(cls):
        return cls(**{f: t.default() for f, t in cls.get_fields().items()})


@ -344,7 +350,7 @@ class BaseList(list, Elements):
        return super().__iter__()

    def last(self):
        # be explicit about getting the last item, for the non-python readers, and negative-index safety
        return self[len(self) - 1]
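The `get_field_names` addition relies on Python's class-level `__annotations__` dict preserving declaration order. A minimal stand-in (the `Example` class below is hypothetical, not an SSZ container from the spec) shows the mechanism:

```python
class Example:
    # class-level annotations, as an SSZ Container subclass would declare fields
    slot: int
    parent_root: bytes

def get_field_names(cls):
    if not hasattr(cls, '__annotations__'):  # no fields declared
        return ()
    return list(cls.__annotations__.keys())

# Annotation order is preserved, so field order is deterministic.
assert get_field_names(Example) == ['slot', 'parent_root']
```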
@ -222,7 +222,7 @@ def test_bytesn_subclass():


def test_uint_math():
    assert uint8(0) + uint8(uint32(16)) == uint8(16)  # allow explicit casting to make invalid addition valid

    expect_value_error(lambda: uint8(0) - uint8(1), "no underflows allowed")
    expect_value_error(lambda: uint8(1) + uint8(255), "no overflows allowed")
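The overflow/underflow behavior asserted by `test_uint_math` can be sketched with a minimal range-checked integer. `Uint8` below is an illustrative stand-in for the spec's `uint8` type, not its implementation.

```python
class Uint8(int):
    """Integer restricted to [0, 256); arithmetic that leaves the range raises ValueError."""

    def __new__(cls, value):
        if not 0 <= int(value) < 2**8:
            raise ValueError("value out of uint8 range")
        return super().__new__(cls, value)

    def __add__(self, other):
        return Uint8(int(self) + int(other))

    def __sub__(self, other):
        return Uint8(int(self) - int(other))


assert Uint8(0) + Uint8(16) == 16  # in-range arithmetic behaves like int

# Overflow and underflow both surface as ValueError rather than wrapping.
for bad_op in (lambda: Uint8(1) + Uint8(255),
               lambda: Uint8(0) - Uint8(1)):
    try:
        bad_op()
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Widening casts (the `uint8(uint32(16))` in the test) are the explicit escape hatch: the caller converts to the narrower type, accepting the range check at that point.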