Merge branch 'master' into vbuterin-patch-1
Commit e5eb3daad1
@@ -38,24 +38,23 @@ The primary source of load on the beacon chain are "attestations". Attestations
| `GWEI_PER_ETH` | 10**9 | Gwei/ETH |
| `DEPOSIT_CONTRACT_ADDRESS` | **TBD** | - |
| `TARGET_COMMITTEE_SIZE` | 2**8 (= 256) | validators |
| `GENESIS_TIME` | **TBD** | seconds |
| `SLOT_DURATION` | 6 | seconds |
| `CYCLE_LENGTH` | 2**6 (= 64) | slots | ~6 minutes |
| `MIN_VALIDATOR_SET_CHANGE_INTERVAL` | 2**8 (= 256) | slots | ~25 minutes |
| `SHARD_PERSISTENT_COMMITTEE_CHANGE_PERIOD` | 2**17 (= 131,072) | slots | ~9 days |
| `MIN_ATTESTATION_INCLUSION_DELAY` | 2**2 (= 4) | slots | ~24 seconds |
-| `RANDAO_SLOTS_PER_LAYER` | 2**12 (= 4096) | slots | ~7 hours |
+| `RANDAO_SLOTS_PER_LAYER` | 2**12 (= 4,096) | slots | ~7 hours |
| `SQRT_E_DROP_TIME` | 2**11 (= 1,024) | cycles | ~9 days |
| `WITHDRAWALS_PER_CYCLE` | 2**2 (=4) | validators | 5.2m ETH in ~6 months |
-| `MIN_WITHDRAWAL_PERIOD` | 2**13 (= 8192) | slots | ~14 hours |
+| `MIN_WITHDRAWAL_PERIOD` | 2**13 (= 8,192) | slots | ~14 hours |
| `DELETION_PERIOD` | 2**22 (= 4,194,304) | slots | ~290 days |
| `COLLECTIVE_PENALTY_CALCULATION_PERIOD` | 2**20 (= 1,048,576) | slots | ~2.4 months |
+| `POW_RECEIPT_ROOT_VOTING_PERIOD` | 2**10 (= 1,024) | slots | ~1.7 hours |
| `SLASHING_WHISTLEBLOWER_REWARD_DENOMINATOR` | 2**9 (= 512) |
| `BASE_REWARD_QUOTIENT` | 2**11 (= 2,048) | — |
| `INCLUDER_REWARD_QUOTIENT` | 2**14 (= 16,384) | — |
| `MAX_VALIDATOR_CHURN_QUOTIENT` | 2**5 (= 32) | — |
-| `POW_HASH_VOTING_PERIOD` | 2**10 (=1024) | - |
-| `POW_CONTRACT_MERKLE_TREE_DEPTH` | 2**5 (=32) | - |
+| `POW_CONTRACT_MERKLE_TREE_DEPTH` | 2**5 (= 32) | - |
| `LOGOUT_MESSAGE` | `"LOGOUT"` | — |
| `INITIAL_FORK_VERSION` | 0 | — |

@@ -93,9 +92,18 @@ The primary source of load on the beacon chain are "attestations". Attestations
| `ENTRY` | `0` |
| `EXIT` | `1` |

+**Domains for BLS signatures**
+
+| Name | Value |
+| - | :-: |
+| `DOMAIN_DEPOSIT` | `0` |
+| `DOMAIN_ATTESTATION` | `1` |
+| `DOMAIN_PROPOSAL` | `2` |
+| `DOMAIN_LOGOUT` | `3` |
+
### PoW chain registration contract

-The initial deployment phases of Ethereum 2.0 are implemented without consensus changes to the PoW chain. A registration contract is added to the PoW chain to deposit ETH. This contract has a `registration` function which takes as arguments `pubkey`, `withdrawal_shard`, `withdrawal_address`, `randao_commitment` as defined in a `ValidatorRecord` below. A BLS `proof_of_possession` of types `bytes` is given as a final argument.
+The initial deployment phases of Ethereum 2.0 are implemented without consensus changes to the PoW chain. A registration contract is added to the PoW chain to deposit ETH. This contract has a `registration` function which takes as arguments `pubkey`, `withdrawal_credentials`, and `randao_commitment` as defined in a `ValidatorRecord` below. A BLS `proof_of_possession` of type `bytes` is given as a final argument.

The registration contract emits a log with the various arguments for consumption by the beacon chain. It does not do validation, pushing the registration logic to the beacon chain. In particular, the proof of possession (based on the BLS12-381 curve) is not verified by the registration contract.
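For illustration, a beacon-chain client consuming these logs might gather the registered entries in emission order along the lines of the following sketch; the `RegistrationLog` container and `fetch_registration_logs` accessor are hypothetical names, not part of the contract interface.

```python
from typing import Callable, Iterable, List, NamedTuple

class RegistrationLog(NamedTuple):
    # Mirrors the arguments of the contract's `registration` function.
    pubkey: int
    withdrawal_credentials: bytes  # hash32
    randao_commitment: bytes       # hash32
    proof_of_possession: bytes     # serialized BLS signature

def collect_initial_validator_entries(
        fetch_registration_logs: Callable[[], Iterable[tuple]]) -> List[RegistrationLog]:
    # No validation is performed here, matching the contract's behaviour of
    # pushing all checks (including the proof of possession) to the beacon chain.
    return [RegistrationLog(*log) for log in fetch_registration_logs()]
```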
@@ -110,7 +118,7 @@ A `BeaconBlock` has the following fields:
    'slot': 'uint64',
    # Proposer RANDAO reveal
    'randao_reveal': 'hash32',
-    # Recent PoW chain reference (receipt root)
+    # Recent PoW receipt root
    'candidate_pow_receipt_root': 'hash32',
    # Skip list of previous beacon block hashes
    # i'th item is the most recent ancestor whose slot is a multiple of 2**i for i = 0, ..., 31

@@ -122,7 +130,7 @@ A `BeaconBlock` has the following fields:
    # Specials (e.g. logouts, penalties)
    'specials': [SpecialRecord],
    # Proposer signature
-    'proposer_signature': ['uint256'],
+    'proposer_signature': ['uint384'],
}
```

@@ -133,7 +141,7 @@ An `AttestationRecord` has the following fields:
    # Slot number
    'slot': 'uint64',
    # Shard number
-    'shard': 'uint16',
+    'shard': 'uint64',
    # Beacon block hashes not part of the current chain, oldest to newest
    'parent_hashes': ['hash32'],
    # Shard block hash being attested to

@@ -149,7 +157,7 @@ An `AttestationRecord` has the following fields:
    # Hash of last justified beacon block
    'justified_block_hash': 'hash32',
    # BLS aggregate signature
-    'aggregate_sig': ['uint256']
+    'aggregate_sig': ['uint384']
}
```

@@ -157,12 +165,10 @@ A `ProposalSignedData` has the following fields:

```python
{
-    # Fork version
-    'fork_version': 'uint64',
    # Slot number
    'slot': 'uint64',
-    # Shard ID (or `2**64 - 1` for beacon chain)
-    'shard_id': 'uint64',
+    # Shard number (or `2**64 - 1` for beacon chain)
+    'shard': 'uint64',
    # Block hash
    'block_hash': 'hash32',
}

@@ -172,12 +178,10 @@ An `AttestationSignedData` has the following fields:

```python
{
-    # Fork version
-    'fork_version': 'uint64',
    # Slot number
    'slot': 'uint64',
    # Shard number
-    'shard': 'uint16',
+    'shard': 'uint64',
    # CYCLE_LENGTH parent hashes
    'parent_hashes': ['hash32'],
    # Shard block hash

@@ -238,10 +242,9 @@ The `BeaconState` has the following fields:
    'current_exit_seq': 'uint64',
    # Genesis time
    'genesis_time': 'uint64',
-    # PoW chain reference
-    'known_pow_receipt_root': 'hash32',
-    'candidate_pow_receipt_root': 'hash32',
-    'candidate_pow_receipt_root_votes': 'uint64',
+    # PoW receipt root
+    'processed_pow_receipt_root': 'hash32',
+    'candidate_pow_receipt_roots': [CandidatePoWReceiptRootRecord],
    # Parameters relevant to hard forks / versioning.
    # Should be updated only by hard forks.
    'pre_fork_version': 'uint64',

@@ -261,11 +264,9 @@ A `ValidatorRecord` has the following fields:
```python
{
    # BLS public key
-    'pubkey': 'uint256',
-    # Withdrawal shard number
-    'withdrawal_shard': 'uint16',
-    # Withdrawal address
-    'withdrawal_address': 'address',
+    'pubkey': 'uint384',
+    # Withdrawal credentials
+    'withdrawal_credentials': 'hash32',
    # RANDAO commitment
    'randao_commitment': 'hash32',
    # Slot the RANDAO commitment was last changed

@@ -274,8 +275,8 @@ A `ValidatorRecord` has the following fields:
    'balance': 'uint64',
    # Status code
    'status': 'uint8',
-    # Slot when validator exited (or 0)
-    'exit_slot': 'uint64'
+    # Slot when validator last changed status (or 0)
+    'last_status_change_slot': 'uint64'
    # Sequence number when validator exited (or 0)
    'exit_seq': 'uint64'
}

@@ -297,7 +298,7 @@ A `ShardAndCommittee` object has the following fields:
```python
{
    # Shard number
-    'shard': 'uint16',
+    'shard': 'uint64',
    # Validator indices
    'committee': ['uint24']
}

@@ -310,12 +311,23 @@ A `ShardReassignmentRecord` object has the following fields:
    # Which validator to reassign
    'validator_index': 'uint24',
    # To which shard
-    'shard': 'uint16',
+    'shard': 'uint64',
    # When
    'slot': 'uint64'
}
```

+A `CandidatePoWReceiptRootRecord` object contains the following fields:
+
+```python
+{
+    # Candidate PoW receipt root
+    'candidate_pow_receipt_root': 'hash32',
+    # Vote count
+    'votes': 'uint64'
+}
+```

## Beacon chain processing

The beacon chain is the "main chain" of the PoS system. The beacon chain's main responsibilities are:

@@ -330,8 +342,8 @@ For a block on the beacon chain to be processed by a node, four conditions have

* The parent pointed to by the `ancestor_hashes[0]` has already been processed and accepted
* An attestation from the _proposer_ of the block (see later for definition) is included along with the block in the network message object
-* The PoW chain block pointed to by the `pow_chain_reference` has already been processed and accepted
-* The node's local clock time is greater than or equal to the minimum timestamp as computed by `GENESIS_TIME + block.slot * SLOT_DURATION`
+* The PoW chain block pointed to by the `processed_pow_receipt_root` has already been processed and accepted
+* The node's local clock time is greater than or equal to the minimum timestamp as computed by `state.genesis_time + block.slot * SLOT_DURATION`

If these conditions are not met, the client should delay processing the beacon block until the conditions are all satisfied.
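As an illustration, a client's readiness check over these four conditions might look like the sketch below; `store`, its accessors, and `local_clock()` are hypothetical stand-ins for client-specific machinery.

```python
def ready_to_process(block, state, store) -> bool:
    # 1. Parent block (ancestor_hashes[0]) already processed and accepted
    if not store.is_processed(block.ancestor_hashes[0]):
        return False
    # 2. Attestation from the block's proposer arrived with the block
    if not store.has_proposer_attestation(block):
        return False
    # 3. The referenced PoW chain block has already been processed and accepted
    if not store.is_pow_block_processed(state.processed_pow_receipt_root):
        return False
    # 4. Local clock has reached the block's minimum timestamp
    return local_clock() >= state.genesis_time + block.slot * SLOT_DURATION
```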
@@ -339,15 +351,34 @@ Beacon block production is significantly different because of the proof of stake

### Beacon chain fork choice rule

-The beacon chain uses the Casper FFG fork choice rule of "favor the chain containing the highest-slot-number justified block". To choose between chains that are all descended from the same justified block, the chain uses "immediate message driven GHOST" (IMD GHOST) to choose the head of the chain.
+The beacon chain fork choice rule is a hybrid that combines justification and finality with Latest Message Driven (LMD) Greediest Heaviest Observed SubTree (GHOST). At any point in time a validator `v` subjectively calculates the beacon chain head as follows.

-For a description see: **https://ethresear.ch/t/beacon-chain-casper-ffg-rpj-mini-spec/2760**
+* Let `store` be the set of attestations and blocks that the validator `v` has observed and verified (in particular, block ancestors must be recursively verified). Attestations not part of any chain are still included in `store`.
+* Let `finalized_head` be the finalized block with the highest slot number. (A block `B` is finalized if there is a descendant of `B` in `store` the processing of which sets `B` as finalized.)
+* Let `justified_head` be the descendant of `finalized_head` with the highest slot number that has been justified for at least `CYCLE_LENGTH` slots. (A block `B` is justified if there is a descendant of `B` in `store` the processing of which sets `B` as justified.) If no such descendant exists, set `justified_head` to `finalized_head`.
+* Let `get_ancestor(store, block, slot)` be the ancestor of `block` with slot number `slot`. The `get_ancestor` function can be defined recursively as `def get_ancestor(store, block, slot): return block if block.slot == slot else get_ancestor(store, store.get_parent(block), slot)`.
+* Let `get_latest_attestation(store, validator)` be the attestation with the highest slot number in `store` from `validator`. If several such attestations exist, use the one the validator `v` observed first.
+* Let `get_latest_attestation_target(store, validator)` be the target block in the attestation `get_latest_attestation(store, validator)`.
+* The head is `lmd_ghost(store, justified_head)` where the function `lmd_ghost` is defined below. Note that the implementation below is suboptimal; there are implementations that compute the head in time logarithmic in slot count.

-For an implementation with a network simulator see: **https://github.com/ethereum/research/blob/master/clock_disparity/ghost_node.py**
+```python
+def lmd_ghost(store, start):
+    validators = start.state.validators
+    active_validators = [validators[i] for i in
+                         get_active_validator_indices(validators, start.slot)]
+    attestation_targets = [get_latest_attestation_target(store, validator)
+                           for validator in active_validators]
+    def get_vote_count(block):
+        return len([target for target in attestation_targets if
+                    get_ancestor(store, target, block.slot) == block])

-Here's an example of its working (green is finalized blocks, yellow is justified, grey is attestations):
-
-![](https://vitalik.ca/files/RPJ.png)
+    head = start
+    while 1:
+        children = get_children(head)
+        if len(children) == 0:
+            return head
+        head = max(children, key=get_vote_count)
+```
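A minimal sketch of the helpers the pseudocode above relies on, following the definitions in the bullet list; it assumes `store` exposes `get_parent(block)` and `attestations_by(validator)` (hypothetical accessor names), and that an attestation's `target` attribute names the attested block.

```python
def get_ancestor(store, block, slot):
    # Walk back through parents until the requested slot is reached.
    return block if block.slot == slot else get_ancestor(store, store.get_parent(block), slot)

def get_latest_attestation(store, validator):
    # Highest-slot attestation observed from this validator; assuming the accessor
    # preserves observation order, max() returns the earliest-observed one on ties.
    observed = list(store.attestations_by(validator))
    return max(observed, key=lambda attestation: attestation.slot) if observed else None

def get_latest_attestation_target(store, validator):
    attestation = get_latest_attestation(store, validator)
    return attestation.target if attestation else None
```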

## Beacon chain state transition function

@@ -517,7 +548,7 @@ The following is a function that determines the proposer of a beacon block:

```python
def get_beacon_proposer(state: BeaconState, slot: int) -> ValidatorRecord:
-    first_committee = get_shards_and_committees_for_slot(state, slot)[0]
+    first_committee = get_shards_and_committees_for_slot(state, slot)[0].committee
    index = first_committee[slot % len(first_committee)]
    return state.validators[index]
```

@@ -579,8 +610,8 @@ def int_sqrt(n: int) -> int:
The beacon chain is initialized when a condition is met inside a contract on the existing PoW chain. This contract's code in Vyper is as follows:

```python
-HashChainValue: event({prev_tip: bytes32, data: bytes[2064], total_deposit_count: int128})
-ChainStart: event({hash_chain_tip: bytes32, time: bytes[8]})
+HashChainValue: event({previous_receipt_root: bytes32, data: bytes[2064], total_deposit_count: int128})
+ChainStart: event({receipt_root: bytes32, time: bytes[8]})

receipt_tree: bytes32[int128]
total_deposit_count: int128

@@ -615,8 +646,7 @@ The contract is at address `DEPOSIT_CONTRACT_ADDRESS`. When a user wishes to bec
{
    'pubkey': 'int256',
    'proof_of_possession': ['int256'],
-    'withdrawal_shard': 'int64',
-    'withdrawal_address`: 'bytes20',
+    'withdrawal_credentials': 'hash32',
    'randao_commitment': 'hash32'
}
```

@@ -625,7 +655,7 @@ If the user wishes to deposit more than `DEPOSIT_SIZE` ETH, they would need to m

* `initial_validator_entries` equal to the list of data records published as HashChainValue logs so far, in the order in which they were published (oldest to newest).
* `genesis_time` equal to the `time` value published in the log
-* `pow_hash_chain_tip` equal to the `hash_chain_tip` value published in the log
+* `processed_pow_receipt_root` equal to the `receipt_root` value published in the log

### On startup

@@ -635,7 +665,7 @@ A valid block with slot `0` (the "genesis block") has the following values. Othe
{
    'slot': 0,
    'randao_reveal': bytes32(0),
-    'candidate_pow_receipt_root': bytes32(0),
+    'candidate_pow_receipt_roots': [],
    'ancestor_hashes': [bytes32(0) for i in range(32)],
    'state_root': STARTUP_STATE_ROOT,
    'attestations': [],

@@ -647,17 +677,16 @@ A valid block with slot `0` (the "genesis block") has the following values. Othe
`STARTUP_STATE_ROOT` is the root of the initial state, computed by running the following code:

```python
-def on_startup(initial_validator_entries: List[Any], genesis_time: uint64, pow_hash_chain_tip: Hash32) -> BeaconState:
+def on_startup(initial_validator_entries: List[Any], genesis_time: uint64, processed_pow_receipt_root: Hash32) -> BeaconState:
    # Induct validators
    validators = []
-    for pubkey, proof_of_possession, withdrawal_shard, withdrawal_address, \
+    for pubkey, proof_of_possession, withdrawal_credentials, \
            randao_commitment in initial_validator_entries:
        add_validator(
            validators=validators,
            pubkey=pubkey,
            proof_of_possession=proof_of_possession,
-            withdrawal_shard=withdrawal_shard,
-            withdrawal_address=withdrawal_address,
+            withdrawal_credentials=withdrawal_credentials,
            randao_commitment=randao_commitment,
            current_slot=0,
            status=ACTIVE,

@@ -684,14 +713,12 @@ def on_startup(initial_validator_entries: List[Any], genesis_time: uint64, pow_h
        persistent_committees=split(shuffle(validators, bytes([0] * 32)), SHARD_COUNT),
        persistent_committee_reassignments=[],
        deposits_penalized_in_period=[],
-        next_shuffling_seed=b'\x00'*32,
+        next_shuffling_seed=bytes([0] * 32),
        validator_set_delta_hash_chain=bytes([0] * 32), # stub
        current_exit_seq=0,
        genesis_time=genesis_time,
-        known_pow_hash_chain_tip=pow_hash_chain_tip,
-        processed_pow_hash_chain_tip=pow_hash_chain_tip,
-        candidate_pow_hash_chain_tip=bytes([0] * 32),
-        candidate_pow_hash_chain_tip_votes=0,
+        processed_pow_receipt_root=processed_pow_receipt_root,
+        candidate_pow_receipt_roots=[],
        pre_fork_version=INITIAL_FORK_VERSION,
        post_fork_version=INITIAL_FORK_VERSION,
        fork_slot_number=0,

@@ -710,62 +737,71 @@ The `add_validator` routine is defined below.

This routine should be run for every validator that is inducted as part of a log created on the PoW chain [TODO: explain where to check for these logs]. The status of the validators added after genesis is `PENDING_ACTIVATION`. These logs should be processed in the order in which they are emitted by the PoW chain.

-First, a helper function:
+First, some helper functions:

```python
def min_empty_validator(validators: List[ValidatorRecord], current_slot: int):
    for i, v in enumerate(validators):
-        if v.status == WITHDRAWN and v.exit_slot <= current_slot - DELETION_PERIOD:
+        if v.status == WITHDRAWN and v.last_status_change_slot + DELETION_PERIOD <= current_slot:
            return i
    return None
```

+```python
+def get_fork_version(state: State, slot: int) -> int:
+    return state.pre_fork_version if slot < state.fork_slot_number else state.post_fork_version
+
+def get_domain(state: State, slot: int, base_domain: int) -> int:
+    return get_fork_version(state, slot) * 2**32 + base_domain
+```
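As a quick illustration of the arithmetic in `get_domain`: with fork version `0` the domain is just the base value, while after a fork to version `1` the same base domain is offset by `2**32`.

```python
# Illustrative values only (DOMAIN_PROPOSAL = 2 per the table above):
assert 0 * 2**32 + 2 == 2            # pre-fork: the domain equals the base domain
assert 1 * 2**32 + 2 == 4294967298   # post-fork: the fork version shifts the domain
```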

Now, to add a validator:

```python
-def add_validator(validators: List[ValidatorRecord],
+def add_validator(state: State,
                  pubkey: int,
                  proof_of_possession: bytes,
-                  withdrawal_shard: int,
-                  withdrawal_address: Address,
+                  withdrawal_credentials: Hash32,
                  randao_commitment: Hash32,
                  status: int,
                  current_slot: int) -> int:
    # if following assert fails, validator induction failed
    # move on to next validator registration log
-    signed_message = bytes32(pubkey) + bytes2(withdrawal_shard) + withdrawal_address + randao_commitment
+    signed_message = bytes32(pubkey) + withdrawal_credentials + randao_commitment
    assert BLSVerify(pub=pubkey,
                     msg=hash(signed_message),
-                    sig=proof_of_possession)
+                    sig=proof_of_possession,
+                    domain=get_domain(state, current_slot, DOMAIN_DEPOSIT))
    # Pubkey uniqueness
-    assert pubkey not in [v.pubkey for v in validators]
+    assert pubkey not in [v.pubkey for v in state.validators]
    rec = ValidatorRecord(
        pubkey=pubkey,
-        withdrawal_shard=withdrawal_shard,
-        withdrawal_address=withdrawal_address,
+        withdrawal_credentials=withdrawal_credentials,
        randao_commitment=randao_commitment,
        randao_last_change=current_slot,
        balance=DEPOSIT_SIZE * GWEI_PER_ETH,
        status=status,
-        exit_slot=0,
+        last_status_change_slot=current_slot,
        exit_seq=0
    )
    # Add the validator
-    index = min_empty_validator(validators)
+    index = min_empty_validator(state.validators, current_slot)
    if index is None:
-        validators.append(rec)
-        index = len(validators) - 1
+        state.validators.append(rec)
+        index = len(state.validators) - 1
    else:
-        validators[index] = rec
+        state.validators[index] = rec
    return index
```

`BLSVerify` is a function for verifying a BLS12-381 signature, defined in the BLS12-381 spec.

### Routine for removing a validator

```python
def exit_validator(index, state, block, penalize, current_slot):
    validator = state.validators[index]
-    validator.exit_slot = current_slot
+    validator.last_status_change_slot = current_slot
    validator.exit_seq = state.current_exit_seq
    state.current_exit_seq += 1
    for committee in state.persistent_committees:

@@ -826,25 +862,28 @@ For each `AttestationRecord` object `obj`:
* Compute `full_parent_hashes` = `[get_block_hash(state, block, slot - CYCLE_LENGTH + i) for i in range(1, CYCLE_LENGTH - len(parent_hashes) + 1)] + parent_hashes` (e.g., if `CYCLE_LENGTH = 4`, `slot = 5`, the actual block hashes starting from slot 0 are `Z A B C D E F G H I J`, and `parent_hashes = [D', E']` then `full_parent_hashes = [B, C, D', E']`). Note that when *creating* an attestation for a block, the hash of that block itself won't yet be in the `state`, so you would need to add it explicitly.
* Let `bit0_attestation_indices, bit1_attestation_indices = get_attestation_participants(state, obj)` (and verify that the method returns successfully)
* Let `bit0_group_public_key = BLSAddPubkeys(bit0_attestation_indices)` and `bit1_group_public_key = BLSAddPubkeys(bit1_attestation_indices)`
-* Let `fork_version = pre_fork_version if slot < fork_slot_number else post_fork_version`.
-* Verify that `aggregate_sig` verifies using the group pubkey generated and the serialized form of `AttestationSignedData(fork_version, slot, shard, parent_hashes, shard_block_hash, last_crosslinked_hash, shard_block_combined_data_root, justified_slot)` as the message.
+* Let `data = AttestationSignedData(slot, shard, parent_hashes, shard_block_hash, last_crosslinked_hash, shard_block_combined_data_root, justified_slot)`.
+* Check `BLSVerify(pubkey=group_public_key, msg=data, sig=aggregate_sig, domain=get_domain(state, slot, DOMAIN_ATTESTATION))`.
* [TO BE REMOVED IN PHASE 1] Verify that `shard_block_hash == bytes([0] * 32)`.

Extend the list of `AttestationRecord` objects in the `state` with those included in the block, ordering the new additions in the same order as they came in the block, and replacing `obj.parent_hashes` with the calculated value of `full_parent_hashes`.
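The `full_parent_hashes` formula from the first bullet above, written out as a small helper (a sketch only; `get_block_hash(state, block, slot)` is the accessor already used in that bullet):

```python
def get_full_parent_hashes(state, block, slot, parent_hashes):
    # Hashes recoverable from the state, followed by the hashes supplied in the
    # attestation itself (which may reference blocks not part of the current chain).
    known = [get_block_hash(state, block, slot - CYCLE_LENGTH + i)
             for i in range(1, CYCLE_LENGTH - len(parent_hashes) + 1)]
    return known + parent_hashes
```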

### Verify proposer signature

-Let `proposal_hash = hash(ProposalSignedData(fork_version, block.slot, 2**64 - 1, block_hash_without_sig))` where `block_hash_without_sig` is the hash of the block except setting `proposer_signature` to `[0, 0]`.
+Let `proposal_hash = hash(ProposalSignedData(block.slot, 2**64 - 1, block_hash_without_sig))` where `block_hash_without_sig` is the hash of the block except setting `proposer_signature` to `[0, 0]`.

-Verify that `BLSVerify(pubkey=get_beacon_proposer(state, block.slot).pubkey, data=proposal_hash, sig=block.proposer_signature)` passes.
+Verify that `BLSVerify(pubkey=get_beacon_proposer(state, block.slot).pubkey, data=proposal_hash, sig=block.proposer_signature, domain=get_domain(state, block.slot, DOMAIN_PROPOSAL))` passes.

### Verify and process RANDAO reveal

* Let `repeat_hash(x, n) = x if n == 0 else repeat_hash(hash(x), n-1)`.
-* Let `proposer = get_beacon_proposer(state, block.slot).
-* Verify that `repeat_hash(block.randao_reveal, (block.slot - proposer.randao_last_change) // RANDAO_SLOTS_PER_LAYER + 1) == proposer.randao_commitment`
-* Set `state.randao_mix = xor(state.randao_mix, block.randao_reveal)`, `proposer.randao_commitment = block.randao_reveal`, `proposer.randao_last_change = block.slot`
+* Let `proposer = get_beacon_proposer(state, block.slot)`.
+* Verify that `repeat_hash(block.randao_reveal, (block.slot - proposer.randao_last_change) // RANDAO_SLOTS_PER_LAYER + 1) == proposer.randao_commitment`.
+* Set `state.randao_mix = xor(state.randao_mix, block.randao_reveal)`, `proposer.randao_commitment = block.randao_reveal`, `proposer.randao_last_change = block.slot`.
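A compact sketch of the reveal check and the state update described in the bullets above (`process_randao_reveal` is an illustrative name; `hash` and `xor` are the helpers already used in this spec):

```python
def repeat_hash(x, n):
    return x if n == 0 else repeat_hash(hash(x), n - 1)

def process_randao_reveal(state, block):
    proposer = get_beacon_proposer(state, block.slot)
    layers = (block.slot - proposer.randao_last_change) // RANDAO_SLOTS_PER_LAYER + 1
    assert repeat_hash(block.randao_reveal, layers) == proposer.randao_commitment
    state.randao_mix = xor(state.randao_mix, block.randao_reveal)
    proposer.randao_commitment = block.randao_reveal
    proposer.randao_last_change = block.slot
```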

-Finally, if `block.candidate_pow_hash_chain_tip = state.candidate_pow_hash_chain_tip`, set `state.candidate_hash_chain_tip_votes += 1`.
+### Process PoW receipt root
+
+If `block.candidate_pow_receipt_root` is `x.candidate_pow_receipt_root` for some `x` in `state.candidate_pow_receipt_roots`, set `x.votes += 1`. Otherwise, append to `state.candidate_pow_receipt_roots` a new `CandidatePoWReceiptRootRecord(candidate_pow_receipt_root=block.candidate_pow_receipt_root, votes=1)`.
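Expressed as code, the per-block vote handling above is roughly the following sketch (the record constructor mirrors the `CandidatePoWReceiptRootRecord` fields defined earlier):

```python
def process_candidate_pow_receipt_root(state, block):
    for x in state.candidate_pow_receipt_roots:
        if x.candidate_pow_receipt_root == block.candidate_pow_receipt_root:
            x.votes += 1
            return
    state.candidate_pow_receipt_roots.append(
        CandidatePoWReceiptRootRecord(
            candidate_pow_receipt_root=block.candidate_pow_receipt_root,
            votes=1,
        )
    )
```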

### Process penalties, logouts and other special objects

@@ -857,13 +896,14 @@ For each `SpecialRecord` `obj` in `block.specials`, verify that its `kind` is on
```python
{
    'validator_index': 'uint64',
-    'signature': '[uint256]'
+    'signature': '[uint384]'
}
```
Perform the following checks:

-* Let `fork_version = pre_fork_version if block.slot < fork_slot_number else post_fork_version`. Verify that `BLSVerify(pubkey=validators[data.validator_index].pubkey, msg=hash(LOGOUT_MESSAGE + bytes8(fork_version)), sig=data.signature)`
+* Verify that `BLSVerify(pubkey=validators[data.validator_index].pubkey, msg=hash(LOGOUT_MESSAGE + bytes8(get_fork_version(state, current_slot))), sig=data.signature, domain=get_domain(state, current_slot, DOMAIN_LOGOUT))`
* Verify that `validators[validator_index].status == ACTIVE`.
+* Verify that `block.slot >= last_status_change_slot + SHARD_PERSISTENT_COMMITTEE_CHANGE_PERIOD`

Run `exit_validator(data.validator_index, state, block, penalize=False, current_slot=block.slot)`.

@@ -873,16 +913,16 @@ Run `exit_validator(data.validator_index, state, block, penalize=False, current_
{
    'vote1_aggregate_sig_indices': '[uint24]',
    'vote1_data': AttestationSignedData,
-    'vote1_aggregate_sig': '[uint256]',
+    'vote1_aggregate_sig': '[uint384]',
    'vote2_aggregate_sig_indices': '[uint24]',
    'vote2_data': AttestationSignedData,
-    'vote2_aggregate_sig': '[uint256]',
+    'vote2_aggregate_sig': '[uint384]',
}
```

Perform the following checks:

-* For each `aggregate_sig`, verify that `BLSVerify(pubkey=aggregate_pubkey([validators[i].pubkey for i in aggregate_sig_indices]), msg=vote_data, sig=aggsig)` passes.
+* For each `vote`, verify that `BLSVerify(pubkey=aggregate_pubkey([validators[i].pubkey for i in vote_aggregate_sig_indices]), msg=vote_data, sig=vote_aggregate_sig, domain=get_domain(state, vote_data.slot, DOMAIN_ATTESTATION))` passes.
* Verify that `vote1_data != vote2_data`.
* Let `intersection = [x for x in vote1_aggregate_sig_indices if x in vote2_aggregate_sig_indices]`. Verify that `len(intersection) >= 1`.
* Verify that `vote1_data.justified_slot < vote2_data.justified_slot < vote2_data.slot <= vote1_data.slot`.
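Taken together, the last three checks amount to the following sketch (`is_slashable_vote_pair` is an illustrative helper name, not part of the spec):

```python
def is_slashable_vote_pair(vote1_data, vote2_data, vote1_indices, vote2_indices) -> bool:
    # The two votes must differ, share at least one participating validator,
    # and satisfy the slot ordering required by the final bullet above.
    if vote1_data == vote2_data:
        return False
    if not any(i in vote2_indices for i in vote1_indices):
        return False
    return (vote1_data.justified_slot < vote2_data.justified_slot
            < vote2_data.slot <= vote1_data.slot)
```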
@@ -895,12 +935,12 @@ For each validator index `v` in `intersection`, if `state.validators[v].status`
{
    'proposer_index': 'uint24',
    'proposal1_data': ProposalSignedData,
-    'proposal1_signature': '[uint256]',
+    'proposal1_signature': '[uint384]',
    'proposal2_data': ProposalSignedData,
-    'proposal1_signature': '[uint256]',
+    'proposal2_signature': '[uint384]',
}
```
-For each `proposal_signature`, verify that `BLSVerify(pubkey=validators[proposer_index].pubkey, msg=hash(proposal_data), sig=proposal_signature)` passes. Verify that `proposal1_data.slot == proposal2_data.slot` but `proposal1 != proposal2`. If `state.validators[proposer_index].status` does not equal `PENALIZED`, then run `exit_validator(proposer_index, state, penalize=True, current_slot=block.slot)`
+For each `proposal_signature`, verify that `BLSVerify(pubkey=validators[proposer_index].pubkey, msg=hash(proposal_data), sig=proposal_signature, domain=get_domain(state, proposal_data.slot, DOMAIN_PROPOSAL))` passes. Verify that `proposal1_data.slot == proposal2_data.slot` but `proposal1 != proposal2`. If `state.validators[proposer_index].status` does not equal `PENALIZED`, then run `exit_validator(proposer_index, state, block, penalize=True, current_slot=block.slot)`.

#### DEPOSIT_PROOF

@@ -918,7 +958,7 @@ For each `proposal_signature`, verify that `BLSVerify(pubkey=validators[proposer

Note that `deposit_data` in serialized form should be the `DepositParams` followed by 8 bytes for the `msg_value` and 8 bytes for the `timestamp`, or exactly the `deposit_data` in the PoW contract of which the hash was placed into the Merkle tree.

-Use the following procedure to verify the `merkle_branch`, setting `leaf=serialized_deposit_data`, `depth=POW_CONTRACT_MERKLE_TREE_DEPTH` and `root=state.known_pow_receipt_root`:
+Use the following procedure to verify the `merkle_branch`, setting `leaf=serialized_deposit_data`, `depth=POW_CONTRACT_MERKLE_TREE_DEPTH` and `root=state.processed_pow_receipt_root`:

```python
def verify_merkle_branch(leaf: Hash32, branch: [Hash32], depth: int, index: int, root: Hash32) -> bool:
@@ -933,7 +973,7 @@ def verify_merkle_branch(leaf: Hash32, branch: [Hash32], depth: int, index: int,

Verify that `deposit_data.msg_value == DEPOSIT_SIZE` and `block.slot - (deposit_data.timestamp - state.genesis_time) // SLOT_DURATION < DELETION_PERIOD`.

-Run `add_validator(validators, deposit_data.deposit_params.pubkey, deposit_data.deposit_params.proof_of_possession, deposit_data.deposit_params.withdrawal_shard, data.deposit_params.withdrawal_address, deposit_data.deposit_params.randao_commitment, PENDING_ACTIVATION, block.slot)`.
+Run `add_validator(state, deposit_data.deposit_params.pubkey, deposit_data.deposit_params.proof_of_possession, deposit_data.deposit_params.withdrawal_credentials, deposit_data.deposit_params.randao_commitment, PENDING_ACTIVATION, block.slot)`.

## Cycle boundary processing

@@ -1011,11 +1051,10 @@ For every `ShardAndCommittee` object `obj` in `shard_and_committee_for_slots[:CY

#### PoW chain related rules

-If `last_state_recalculation_slot % POW_HASH_VOTING_PERIOD == 0`, then:
+If `last_state_recalculation_slot % POW_RECEIPT_ROOT_VOTING_PERIOD == 0`, then:

-* If `state.candidate_hash_chain_tip_votes * 3 >= POW_HASH_VOTING_PERIOD * 2`, set `state.hash_chain_tip = state.candidate_hash_chain_tip`
-* Set `state.candidate_hash_chain_tip = block.candidate_pow_hash_chain_tip`
-* Set `state.candidate_hash_chain_tip_votes = 0`
+* If for any `x` in `state.candidate_pow_receipt_roots`, `x.votes * 2 >= POW_RECEIPT_ROOT_VOTING_PERIOD`, set `state.processed_pow_receipt_root = x.candidate_pow_receipt_root`.
+* Set `state.candidate_pow_receipt_roots = []`.
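In code form, this per-period tally might look like the following sketch (illustrative only; it mirrors the two bullets above):

```python
def update_processed_pow_receipt_root(state, last_state_recalculation_slot):
    if last_state_recalculation_slot % POW_RECEIPT_ROOT_VOTING_PERIOD != 0:
        return
    for x in state.candidate_pow_receipt_roots:
        # Adopt a root once it has votes from at least half of the voting period's slots
        if x.votes * 2 >= POW_RECEIPT_ROOT_VOTING_PERIOD:
            state.processed_pow_receipt_root = x.candidate_pow_receipt_root
    state.candidate_pow_receipt_roots = []
```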

#### Proposer reshuffling

@@ -1080,7 +1119,7 @@ def change_validators(validators: List[ValidatorRecord], current_slot: int) -> N
            )
        if validators[i].status == PENDING_EXIT:
            validators[i].status = PENDING_WITHDRAW
-            validators[i].exit_slot = current_slot
+            validators[i].last_status_change_slot = current_slot
            total_changed += balance_at_stake(validators[i])
            add_validator_set_change_record(
                state=state,

@@ -1102,14 +1141,14 @@ def change_validators(validators: List[ValidatorRecord], current_slot: int) -> N
    # calculate their penalties if they were slashed

    def withdrawable(v):
-        return v.status in (PENDING_WITHDRAW, PENALIZED) and current_slot >= v.exit_slot + MIN_WITHDRAWAL_PERIOD
+        return v.status in (PENDING_WITHDRAW, PENALIZED) and current_slot >= v.last_status_change_slot + MIN_WITHDRAWAL_PERIOD

    withdrawable_validators = sorted(filter(withdrawable, validators), key=lambda v: v.exit_seq)
    for v in withdrawable_validators[:WITHDRAWALS_PER_CYCLE]:
        if v.status == PENALIZED:
            v.balance -= balance_at_stake(v) * min(total_penalties * 3, total_balance) // total_balance
        v.status = WITHDRAWN
-        v.exit_slot = current_slot
+        v.last_status_change_slot = current_slot

        withdraw_amount = v.balance
        # STUB: withdraw to shard chain

@@ -1175,6 +1214,7 @@ Note: This spec is ~65% complete.
* [ ] Add penalties for deposits below 32 ETH (or some other threshold)
+* [ ] Add a `SpecialRecord` to (re)register


# Appendix
## Appendix A - Hash function

@@ -19,7 +19,14 @@ Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md
| Constant | Value | Unit | Approximation |
|------------------------|-----------------|-------|---------------|
| `CHUNK_SIZE` | 2**8 (= 256) | bytes | |
-| `MAX_SHARD_BLOCK_SIZE` | 2**15 (= 32768) | bytes | |
+| `SHARD_BLOCK_SIZE` | 2**14 (= 16384) | bytes | |

+### Flags, domains, etc.
+
+| Constant | Value |
+|------------------------|-----------------|
+| `SHARD_PROPOSER_DOMAIN`| 129 |
+| `SHARD_ATTESTER_DOMAIN`| 130 |
+
## Data Structures

@@ -43,7 +50,9 @@ A `ShardBlock` object has the following fields:
    'data_root': 'hash32'
    # State root (placeholder for now)
    'state_root': 'hash32',
-    # Attestation (including block signature)
+    # Block signature
+    'signature': ['uint256'],
+    # Attestation
    'attester_bitfield': 'bytes',
    'aggregate_sig': ['uint256'],
}

@@ -61,31 +70,27 @@ To validate a block header on shard `shard_id`, compute as follows:
* Verify that `beacon_chain_ref` is the hash of a block in the beacon chain with slot less than or equal to `slot`. Verify that `beacon_chain_ref` is equal to or a descendant of the `beacon_chain_ref` specified in the `ShardBlock` pointed to by `parent_hash`.
* Let `state` be the state of the beacon chain block referred to by `beacon_chain_ref`. Let `validators` be `[validators[i] for i in state.current_persistent_committees[shard_id]]`.
* Assert `len(attester_bitfield) == ceil_div8(len(validators))`
-* Let `curblock_proposer_index = hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators)`. Let `parent_proposer_index` be the same value calculated for the parent block.
-* Make sure that the `parent_proposer_index`'th bit in the `attester_bitfield` is set to 1.
-* Generate the group public key by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify the `aggregate_sig` using this as the pubkey and the `parent_hash` as the message.
+* Let `proposer_index = hash(state.randao_mix + bytes8(shard_id) + bytes8(slot)) % len(validators)`. Let `msg` be the block but with the `block.signature` set to `[0, 0]`. Verify that `BLSVerify(pub=validators[proposer_index].pubkey, msg=hash(msg), sig=block.signature, domain=get_domain(state, slot, SHARD_PROPOSER_DOMAIN))` passes.
+* Generate the `group_public_key` by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify that `BLSVerify(pub=group_public_key, msg=parent_hash, sig=block.aggregate_sig, domain=get_domain(state, slot, SHARD_ATTESTER_DOMAIN))` passes.

+### Block Merklization helper
+
+```python
+def merkle_root(block_body):
+    assert len(block_body) == SHARD_BLOCK_SIZE
+    chunks = SHARD_BLOCK_SIZE // CHUNK_SIZE
+    o = [0] * chunks + [block_body[i * CHUNK_SIZE: (i+1) * CHUNK_SIZE] for i in range(chunks)]
+    for i in range(chunks-1, 0, -1):
+        o[i] = hash(o[i*2] + o[i*2+1])
+    return o[1]
+```
+
### Verifying shard block data

-At network layer, we expect a shard block header to be broadcast along with its `block_body`. First, we define a helper function that takes as input beacon chain state and outputs the max block size in bytes:
+At network layer, we expect a shard block header to be broadcast along with its `block_body`.

-```python
-def shard_block_maxbytes(state):
-    max_grains = MAX_SHARD_BLOCK_SIZE // CHUNK_SIZE
-    validators_at_target_committee_size = SHARD_COUNT * TARGET_COMMITTEE_SIZE
-
-    # number of grains per block is proportional to the number of validators
-    # up until `validators_at_target_committee_size`
-    grains = min(
-        len(get_active_validator_indices(state.validators)) * max_grains // validators_at_target_committee_size,
-        max_grains
-    )
-
-    return CHUNK_SIZE * grains
-```
-
-* Verify that `len(block_body) == shard_block_maxbytes(state)`
-* Define `filler_bytes = next_power_of_2(len(block_body)) - len(block_body)`. Compute a simple binary Merkle tree of `block_body + bytes([0] * filler_bytes)` and verify that the root equals the `data_root` in the header.
+* Verify that `len(block_body) == SHARD_BLOCK_SIZE`
+* Verify that `merkle_root(block_body)` equals the `data_root` in the header.
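Combining the two bullets, a verifier for the broadcast body is simply the following sketch (`verify_shard_block_data` is an illustrative name):

```python
def verify_shard_block_data(block_body: bytes, data_root) -> bool:
    # Bodies are fixed-size: exactly SHARD_BLOCK_SIZE bytes, Merkleized by merkle_root above.
    assert len(block_body) == SHARD_BLOCK_SIZE
    return merkle_root(block_body) == data_root
```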

### Verifying a crosslink

@@ -93,23 +98,13 @@ A node should sign a crosslink only if the following conditions hold. **If a nod

First, the conditions must recursively apply to the crosslink referenced in `last_crosslink_hash` for the same shard (unless `last_crosslink_hash` equals zero, in which case we are at the genesis).

-Second, we verify the `shard_block_combined_data_root`. Let `h` be the slot _immediately after_ the slot of the shard block included by the last crosslink, and `h+n-1` be the slot number of the block directly referenced by the current `shard_block_hash`. Let `B[i]` be the block at slot `h+i` in the shard chain. Let `bodies[0] .... bodies[n-1]` be the bodies of these blocks and `roots[0] ... roots[n-1]` the data roots. If there is a missing slot in the shard chain at position `h+i`, then `bodies[i] == b'\x00' * shard_block_maxbytes(state[i])` and `roots[i]` be the Merkle root of the empty data. Define `compute_merkle_root` be a simple Merkle root calculating function that takes as input a list of objects, where the list's length must be an exact power of two. Let `state[i]` be the beacon chain state at height `h+i` (if the beacon chain is missing a block at some slot, the state is unchanged), and `depths[i]` be equal to `log2(next_power_of_2(shard_block_maxbytes(state[i]) // CHUNK_SIZE))` (ie. the expected depth of the i'th data tree). We define the function for computing the combined data root as follows:
+Second, we verify the `shard_block_combined_data_root`. Let `h` be the slot _immediately after_ the slot of the shard block included by the last crosslink, and `h+n-1` be the slot number of the block directly referenced by the current `shard_block_hash`. Let `B[i]` be the block at slot `h+i` in the shard chain. Let `bodies[0] .... bodies[n-1]` be the bodies of these blocks and `roots[0] ... roots[n-1]` the data roots. If there is a missing slot in the shard chain at position `h+i`, then `bodies[i] == b'\x00' * SHARD_BLOCK_SIZE` and `roots[i]` is the Merkle root of the empty data. Define `compute_merkle_root` to be a simple Merkle root calculating function that takes as input a list of objects, where the list's length must be an exact power of two. We define the function for computing the combined data root as follows:

```python
-def get_zeroroot_at_depth(n):
-    o = b'\x00' * CHUNK_SIZE
-    for i in range(n):
-        o = hash(o + o)
-    return o
+ZERO_ROOT = merkle_root(bytes([0] * SHARD_BLOCK_SIZE))

-def mk_combined_data_root(depths, roots):
-    default_value = get_zeroroot_at_depth(max(depths))
-    data = [default_value for _ in range(next_power_of_2(len(roots)))]
-    for i, (depth, root) in enumerate(zip(depths, roots)):
-        value = root
-        for j in range(depth, max(depths)):
-            value = hash(value, get_zeroroot_at_depth(depth + j))
-        data[i] = value
+def mk_combined_data_root(roots):
+    data = roots + [ZERO_ROOT for _ in range(len(roots), next_power_of_2(len(roots)))]
    return compute_merkle_root(data)
```
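The helper above relies on a `next_power_of_2` utility that is not shown in this hunk; one common definition (an assumption, given for reference) is:

```python
def next_power_of_2(x: int) -> int:
    # Smallest power of two greater than or equal to x (with next_power_of_2(0) == 1).
    return 1 if x == 0 else 2 ** ((x - 1).bit_length())
```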
@@ -117,12 +112,7 @@ This outputs the root of a tree of the data roots, with the data roots all adjus

```python
-def mk_combined_data_root(depths, bodies):
-    default_value = get_zeroroot_at_depth(max(depths))
-    padded_body_length = max([CHUNK_SIZE * 2**d for d in depths])
-    data = b''
-    for body in bodies:
-        padded_body = body + bytes([0] * (padded_body_length - len(body)))
-        data += padded_body
+    data = b''.join(bodies)
+    data += bytes([0] * (next_power_of_2(len(data)) - len(data)))
    return compute_merkle_root([data[pos:pos+CHUNK_SIZE] for pos in range(0, len(data), CHUNK_SIZE)])
```
