Merge branch 'master' into vbuterin-patch-7

Danny Ryan 2018-11-27 13:51:12 -06:00
commit 126a7abfa8
No known key found for this signature in database
GPG Key ID: 2765A792E42CE07A
5 changed files with 477 additions and 237 deletions

@ -13,7 +13,7 @@ Core specifications for eth2.0 client validation can be found in [specs/core](sp
## Design goals

The following are the broad design goals for Ethereum 2.0:

* to minimize complexity, even at the cost of some losses in efficiency
* to remain live through major network partitions and when very large portions of nodes go offline
* to select all components such that they are either quantum secure or can be easily swapped out for quantum secure counterparts when available
* to utilize crypto and design techniques that allow for a large participation of validators in total and per unit time
* to allow for a typical consumer laptop with `O(C)` resources to process/validate `O(1)` shards (including any system level validation such as the beacon chain)

specs/bls_verify.md (new file, 64 lines)

@ -0,0 +1,64 @@
### BLS Verification
**Warning: This document is pending academic review and should not yet be considered secure.**
See https://z.cash/blog/new-snark-curve/ for BLS-12-381 parameters.
We represent coordinates as defined in https://github.com/zkcrypto/pairing/tree/master/src/bls12_381/.
Specifically, we represent a point in G1 as a 384-bit integer `z`, which we decompose into:
* `x = z % 2**381`
* `highflag = z // 2**382`
* `lowflag = (z % 2**382) // 2**381`
If `highflag == 3`, the point is the point at infinity and we require `lowflag = x = 0`. Otherwise, we require `highflag == 2`, in which case the point is `(x, y)` where `y` is the valid coordinate such that `(y * 2) // q == lowflag`.
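As an illustration only (not part of the spec; the helper name and sample values are mine), the G1 decomposition and flag rules above can be sketched with plain integer arithmetic:

```python
# Hypothetical sketch: decompose a packed 384-bit G1 value `z` into
# (x, lowflag) per the flag rules above; return None for the point at infinity.
def decompose_g1(z: int):
    x = z % 2**381
    highflag = z // 2**382
    lowflag = (z % 2**382) // 2**381
    if highflag == 3:
        # point at infinity: all other bits must be zero
        assert lowflag == 0 and x == 0
        return None
    assert highflag == 2
    # y is recovered from the curve equation, disambiguated by lowflag
    return x, lowflag

assert decompose_g1(3 * 2**382) is None  # point at infinity
assert decompose_g1(2 * 2**382 + 2**381 + 5) == (5, 1)
```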
We represent a point in G2 as a pair of 384-bit integers `(z1, z2)` that are each decomposed into `x1`, `highflag1`, `lowflag1`, `x2`, `highflag2`, `lowflag2` as above. We require `lowflag2 == highflag2 == 0`. If `highflag1 == 3`, the point is the point at infinity and we require `lowflag1 == x1 == x2 == 0`. Otherwise, we require `highflag1 == 2`, in which case the point is `(x1 * i + x2, y)` where `y` is the valid coordinate such that the imaginary part of `y` satisfies `(y_im * 2) // q == lowflag1`.
`BLSVerify(pubkey: uint384, msg: bytes32, sig: [uint384], domain: uint64)` is done as follows:
* Verify that `pubkey` is a valid G1 point and `sig` is a valid G2 point.
* Convert `msg` to a G2 point using `hash_to_G2` defined below.
* Do the pairing check: verify `e(pubkey, hash_to_G2(msg, domain)) == e(G1, sig)` (where `e` is the BLS pairing function)
Here is the `hash_to_G2` definition:
```python
G2_cofactor = 305502333931268344200999753193121504214466019254188142667664032982267604182971884026507427359259977847832272839041616661285803823378372096355777062779109
field_modulus = 4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787
def hash_to_G2(m, domain):
    x1 = hash(bytes8(domain) + b'\x01' + m)
    x2 = hash(bytes8(domain) + b'\x02' + m)
    x_coord = FQ2([x1, x2])  # x1 + x2 * i
    while 1:
        x_cubed_plus_b2 = x_coord ** 3 + FQ2([4, 4])
        y_coord = mod_sqrt(x_cubed_plus_b2)
        if y_coord is not None:
            break
        x_coord += FQ2([1, 0])  # Add one until we get a quadratic residue
    assert is_on_curve((x_coord, y_coord))
    return multiply((x_coord, y_coord), G2_cofactor)
```
Here is a sample implementation of `mod_sqrt`:
```python
qmod = field_modulus ** 2 - 1
eighth_roots_of_unity = [FQ2([1,1]) ** ((qmod * k) // 8) for k in range(8)]
def mod_sqrt(val):
    candidate_sqrt = val ** ((qmod + 8) // 16)
    check = candidate_sqrt ** 2 / val
    if check in eighth_roots_of_unity[::2]:
        return candidate_sqrt / eighth_roots_of_unity[eighth_roots_of_unity.index(check) // 2]
    return None
```
`BLSMultiVerify(pubkeys: [uint384], msgs: [bytes32], sig: [uint384], domain: uint64)` is done as follows:
* Verify that each element of `pubkeys` is a valid G1 point and `sig` is a valid G2 point.
* Convert each element of `msgs` to a G2 point using `hash_to_G2` defined above, using the specified `domain`.
* Check that `pubkeys` and `msgs` have the same length; call this length `L`.
* Do the pairing check: verify `e(pubkeys[0], hash_to_G2(msgs[0], domain)) * ... * e(pubkeys[L-1], hash_to_G2(msgs[L-1], domain)) == e(G1, sig)`


@ -45,14 +45,15 @@ The primary source of load on the beacon chain are "attestations". Attestations
| `MIN_VALIDATOR_SET_CHANGE_INTERVAL` | 2**8 (= 256) | slots | ~25 minutes |
| `SHARD_PERSISTENT_COMMITTEE_CHANGE_PERIOD` | 2**17 (= 131,072) | slots | ~9 days |
| `MIN_ATTESTATION_INCLUSION_DELAY` | 2**2 (= 4) | slots | ~24 seconds |
| `SQRT_E_DROP_TIME` | 2**11 (= 1,024) | cycles | ~9 days |
| `WITHDRAWALS_PER_CYCLE` | 2**2 (= 4) | validators | 5.2m ETH in ~6 months |
| `MIN_WITHDRAWAL_PERIOD` | 2**13 (= 8,192) | slots | ~14 hours |
| `DELETION_PERIOD` | 2**22 (= 4,194,304) | slots | ~290 days |
| `COLLECTIVE_PENALTY_CALCULATION_PERIOD` | 2**20 (= 1,048,576) | slots | ~2.4 months |
| `POW_RECEIPT_ROOT_VOTING_PERIOD` | 2**10 (= 1,024) | slots | ~1.7 hours |
| `SLASHING_WHISTLEBLOWER_REWARD_DENOMINATOR` | 2**9 (= 512) | - | - |
| `BASE_REWARD_QUOTIENT` | 2**11 (= 2,048) | - | - |
| `INCLUDER_REWARD_SHARE_QUOTIENT` | 2**3 (= 8) | - | - |
| `MAX_VALIDATOR_CHURN_QUOTIENT` | 2**5 (= 32) | - | - |
| `POW_CONTRACT_MERKLE_TREE_DEPTH` | 2**5 (= 32) | - | - |
| `MAX_ATTESTATION_COUNT` | 2**7 (= 128) | - | - |
@ -63,7 +64,7 @@ The primary source of load on the beacon chain are "attestations". Attestations
* See a recommended min committee size of 111 [here](https://vitalik.ca/files/Ithaca201807_Sharding.pdf); our algorithm will generally ensure the committee size is at least half the target.
* The `SQRT_E_DROP_TIME` constant is the amount of time it takes for the quadratic leak to cut deposits of non-participating validators by ~39.4%.
* The `BASE_REWARD_QUOTIENT` constant dictates the per-cycle interest rate assuming all validators are participating, assuming total deposits of 1 ETH. It corresponds to ~2.57% annual interest assuming 10 million participating ETH.
* At most `1/MAX_VALIDATOR_CHURN_QUOTIENT` of the validators can change during each validator set change.
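The "~39.4%" figure above can be checked directly: a drop by a factor of `sqrt(e)` leaves a fraction `e**(-1/2)` of the deposit, so the fraction lost is `1 - e**(-1/2)`. A minimal sanity check:

```python
import math

# After SQRT_E_DROP_TIME, a non-participating validator's deposit has been
# divided by sqrt(e), so the fraction lost is 1 - e**(-1/2) ~= 39.35%.
fraction_lost = 1 - math.exp(-0.5)
assert abs(fraction_lost - 0.394) < 0.001
```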
**Validator status codes**
@ -137,28 +138,38 @@ A `BeaconBlock` has the following fields:
An `AttestationRecord` has the following fields:

```python
{
    'data': AttestationSignedData,
    # Attester participation bitfield
    'attester_bitfield': 'bytes',
    # Proof of custody bitfield
    'poc_bitfield': 'bytes',
    # BLS aggregate signature
    'aggregate_sig': ['uint384']
}
```

`AttestationSignedData`:

```python
{
    # Slot number
    'slot': 'uint64',
    # Shard number
    'shard': 'uint64',
    # Hash of the block we're signing
    'block_hash': 'hash32',
    # Hash of the ancestor at the cycle boundary
    'cycle_boundary_hash': 'hash32',
    # Shard block hash being attested to
    'shard_block_hash': 'hash32',
    # Last crosslink hash
    'last_crosslink_hash': 'hash32',
    # Root of data between last hash and this one
    'shard_block_combined_data_root': 'hash32',
    # Slot of last justified beacon block
    'justified_slot': 'uint64',
    # Hash of last justified beacon block
    'justified_block_hash': 'hash32'
}
```
@ -175,27 +186,6 @@ A `ProposalSignedData` has the following fields:
}
```
A `SpecialRecord` has the following fields:

```python
@ -223,10 +213,11 @@ The `BeaconState` has the following fields:
    'last_state_recalculation_slot': 'uint64',
    # Last finalized slot
    'last_finalized_slot': 'uint64',
    # Justification source
    'justification_source': 'uint64',
    'prev_cycle_justification_source': 'uint64',
    # Recent justified slot bitmask
    'justified_slot_bitfield': 'uint64',
    # Committee members and their assigned shard, per slot
    'shard_and_committee_for_slots': [[ShardAndCommittee]],
    # Persistent shard committees
@ -247,11 +238,9 @@ The `BeaconState` has the following fields:
    'candidate_pow_receipt_roots': [CandidatePoWReceiptRootRecord],
    # Parameters relevant to hard forks / versioning.
    # Should be updated only by hard forks.
    'fork_data': ForkData,
    # Attestations not yet processed
    'pending_attestations': [ProcessedAttestation],
    # recent beacon block hashes needed to process attestations, older to newer
    'recent_block_hashes': ['hash32'],
    # RANDAO state
@ -328,6 +317,31 @@ A `CandidatePoWReceiptRootRecord` object contains the following fields:
}
```
A `ForkData` object contains the following fields:

```python
{
    # Previous fork version
    'pre_fork_version': 'uint64',
    # Post fork version
    'post_fork_version': 'uint64',
    # Fork slot number
    'fork_slot_number': 'uint64'
}
```
A `ProcessedAttestation` object has the following fields:

```python
{
    # Signed data
    'data': AttestationSignedData,
    # Attester participation bitfield (2 bits per attester)
    'attester_bitfield': 'bytes',
    # Proof of custody bitfield
    'poc_bitfield': 'bytes',
    # Slot in which it was included
    'slot_included': 'uint64'
}
```
## Beacon chain processing

The beacon chain is the "main chain" of the PoS system. The beacon chain's main responsibilities are:
@ -355,7 +369,7 @@ The beacon chain fork choice rule is a hybrid that combines justification and fi
* Let `store` be the set of attestations and blocks that the validator `v` has observed and verified (in particular, block ancestors must be recursively verified). Attestations not part of any chain are still included in `store`.
* Let `finalized_head` be the finalized block with the highest slot number. (A block `B` is finalized if there is a descendant of `B` in `store` the processing of which sets `B` as finalized.)
* Let `justified_head` be the descendant of `finalized_head` with the highest slot number that has been justified for at least `CYCLE_LENGTH` slots. (A block `B` is justified if there is a descendant of `B` in `store` the processing of which sets `B` as justified.) If no such descendant exists set `justified_head` to `finalized_head`.
* Let `get_ancestor(store, block, slot)` be the ancestor of `block` with slot number `slot`. The `get_ancestor` function can be defined recursively as `def get_ancestor(store, block, slot): return block if block.slot == slot else get_ancestor(store, store.get_parent(block), slot)`.
* Let `get_latest_attestation(store, validator)` be the attestation with the highest slot number in `store` from `validator`. If several such attestations exist use the one the validator `v` observed first.
* Let `get_latest_attestation_target(store, validator)` be the target block in the attestation `get_latest_attestation(store, validator)`.
@ -399,7 +413,7 @@ def get_active_validator_indices(validators)
    return [i for i, v in enumerate(validators) if v.status == ACTIVE]
```
The following is a function that shuffles any list; we primarily use it for the validator list:

```python
def shuffle(values: List[Any],
@ -547,24 +561,52 @@ def get_block_hash(state: BeaconState,
The following is a function that determines the proposer of a beacon block:
```python
def get_beacon_proposer_index(state: BeaconState, slot: int) -> int:
    first_committee = get_shards_and_committees_for_slot(state, slot)[0].committee
    index = first_committee[slot % len(first_committee)]
    return index
```
The following is a function that determines the validators that participated in an attestation:

```python
def get_attestation_participants(state: State,
                                 attestation_data: AttestationSignedData,
                                 attester_bitfield: bytes) -> List[int]:
    sncs_for_slot = get_shards_and_committees_for_slot(state, attestation_data.slot)
    snc = [x for x in sncs_for_slot if x.shard == attestation_data.shard][0]
    assert len(attester_bitfield) == ceil_div8(len(snc.committee))
    participants = []
    for i, vindex in enumerate(snc.committee):
        bit = (attester_bitfield[i // 8] >> (7 - (i % 8))) % 2
        if bit == 1:
            participants.append(vindex)
    return participants
```
We define another set of helpers to be used throughout: `bytes1(x): return x.to_bytes(1, 'big')`, `bytes2(x): return x.to_bytes(2, 'big')`, and so on for all integers, particularly 1, 2, 3, 4, 8, 32.
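Spelled out, these helpers are plain big-endian fixed-width encoders:

```python
# Fixed-width big-endian encodings, per the definitions above.
def bytes1(x: int) -> bytes: return x.to_bytes(1, 'big')
def bytes2(x: int) -> bytes: return x.to_bytes(2, 'big')
def bytes3(x: int) -> bytes: return x.to_bytes(3, 'big')
def bytes4(x: int) -> bytes: return x.to_bytes(4, 'big')
def bytes8(x: int) -> bytes: return x.to_bytes(8, 'big')
def bytes32(x: int) -> bytes: return x.to_bytes(32, 'big')

assert bytes2(258) == b'\x01\x02'
assert len(bytes32(0)) == 32
```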
We define a function to determine the balance of a validator used for determining punishments and calculating stake:
```python
def balance_at_stake(validator: ValidatorRecord) -> int:
    return min(validator.balance, DEPOSIT_SIZE)
```
We define a function to "add a link" to the validator hash chain, used when a validator is added or removed:

```python
def get_new_validator_set_delta_hash_chain(current_validator_set_delta_hash_chain: Hash32,
                                           index: int,
                                           pubkey: int,
                                           flag: int) -> Hash32:
    new_validator_set_delta_hash_chain = hash(
        current_validator_set_delta_hash_chain +
        bytes1(flag) +
        bytes3(index) +
        bytes32(pubkey)
    )
    return new_validator_set_delta_hash_chain
```
Finally, we abstractly define `int_sqrt(n)` for use in reward/penalty calculations as the largest integer `k` such that `k**2 <= n`. Clients are free to use any implementation, including standard libraries for [integer square root](https://en.wikipedia.org/wiki/Integer_square_root), provided it meets this specification.
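One possible sketch (integer Newton's method; Python's built-in `math.isqrt` satisfies the same specification on Python 3.8+):

```python
def int_sqrt(n: int) -> int:
    # Newton's method on integers: converges to the largest k with k**2 <= n
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

assert int_sqrt(15) == 3
assert int_sqrt(16) == 4
```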
@ -665,14 +707,24 @@ A valid block with slot `0` (the "genesis block") has the following values. Othe
`STARTUP_STATE_ROOT` is the root of the initial state, computed by running the following code:
```python
def on_startup(current_validators: List[ValidatorRecord],
               pre_fork_version: int,
               initial_validator_entries: List[Any],
               genesis_time: int,
               processed_pow_receipt_root: Hash32) -> BeaconState:
    # Induct validators
    validators = []
    for pubkey, deposit_size, proof_of_possession, withdrawal_credentials, \
            randao_commitment in initial_validator_entries:
        validators, _ = get_new_validators(
            current_validators=validators,
            fork_data=ForkData(
                pre_fork_version=pre_fork_version,
                post_fork_version=pre_fork_version,
                fork_slot_number=2**64 - 1,
            ),
            pubkey=pubkey,
            deposit_size=deposit_size,
            proof_of_possession=proof_of_possession,
            withdrawal_credentials=withdrawal_credentials,
            randao_commitment=randao_commitment,
@ -694,8 +746,9 @@ def on_startup(initial_validator_entries: List[Any], genesis_time: uint64, proce
        crosslinks=crosslinks,
        last_state_recalculation_slot=0,
        last_finalized_slot=0,
        justification_source=0,
        prev_cycle_justification_source=0,
        justified_slot_bitfield=0,
        shard_and_committee_for_slots=x + x,
        persistent_committees=split(shuffle(validators, bytes([0] * 32)), SHARD_COUNT),
        persistent_committee_reassignments=[],
@ -727,25 +780,94 @@ This routine should be run for every validator that is inducted as part of a log
First, some helper functions:
```python
def min_empty_validator_index(validators: List[ValidatorRecord], current_slot: int) -> int:
    for i, v in enumerate(validators):
        if v.status == WITHDRAWN and v.last_status_change_slot + DELETION_PERIOD <= current_slot:
            return i
    return None

def get_fork_version(fork_data: ForkData,
                     slot: int) -> int:
    if slot < fork_data.fork_slot_number:
        return fork_data.pre_fork_version
    else:
        return fork_data.post_fork_version

def get_domain(fork_data: ForkData,
               slot: int,
               base_domain: int) -> int:
    return get_fork_version(
        fork_data,
        slot
    ) * 2**32 + base_domain

def get_new_validators(current_validators: List[ValidatorRecord],
                       fork_data: ForkData,
                       pubkey: int,
                       deposit_size: int,
                       proof_of_possession: bytes,
                       withdrawal_credentials: Hash32,
                       randao_commitment: Hash32,
                       status: int,
                       current_slot: int) -> Tuple[List[ValidatorRecord], int]:
    # if any asserts fail, validator induction/topup failed
    # move on to next validator deposit log
    signed_message = bytes32(pubkey) + withdrawal_credentials + randao_commitment
    assert BLSVerify(
        pub=pubkey,
        msg=hash(signed_message),
        sig=proof_of_possession,
        domain=get_domain(
            fork_data,
            current_slot,
            DOMAIN_DEPOSIT
        )
    )
    new_validators = copy.deepcopy(current_validators)
    validator_pubkeys = [v.pubkey for v in new_validators]
    # add new validator
    if pubkey not in validator_pubkeys:
        assert deposit_size == DEPOSIT_SIZE
        rec = ValidatorRecord(
            pubkey=pubkey,
            withdrawal_credentials=withdrawal_credentials,
            randao_commitment=randao_commitment,
            randao_skips=0,
            balance=DEPOSIT_SIZE * GWEI_PER_ETH,
            status=status,
            last_status_change_slot=current_slot,
            exit_seq=0
        )
        index = min_empty_validator_index(new_validators, current_slot)
        if index is None:
            new_validators.append(rec)
            index = len(new_validators) - 1
        else:
            new_validators[index] = rec
        return new_validators, index
    # topup existing validator
    else:
        index = validator_pubkeys.index(pubkey)
        val = new_validators[index]
        assert deposit_size >= MIN_TOPUP_SIZE
        assert val.status != WITHDRAWN
        assert val.withdrawal_credentials == withdrawal_credentials
        val.balance += deposit_size
        return new_validators, index
```

Now, to add a validator or top up an existing validator's balance:

```python
def add_or_topup_validator(state: BeaconState,
                           pubkey: int,
                           deposit_size: int,
                           proof_of_possession: bytes,
@ -753,43 +875,27 @@ def add_or_topup_validator(state: State,
                           withdrawal_credentials: Hash32,
                           randao_commitment: Hash32,
                           status: int,
                           current_slot: int) -> int:
    """
    Add the validator into the given `state`.
    Note that this function mutates `state`.
    """
    state.validators, index = get_new_validators(
        current_validators=state.validators,
        fork_data=ForkData(
            pre_fork_version=state.pre_fork_version,
            post_fork_version=state.post_fork_version,
            fork_slot_number=state.fork_slot_number,
        ),
        pubkey=pubkey,
        deposit_size=deposit_size,
        proof_of_possession=proof_of_possession,
        withdrawal_credentials=withdrawal_credentials,
        randao_commitment=randao_commitment,
        status=status,
        current_slot=current_slot,
    )
    return index
```
`BLSVerify` is a function for verifying a BLS12-381 signature, defined in the BLS12-381 spec.
@ -797,20 +903,38 @@ def add_or_topup_validator(state: State,
### Routine for removing a validator
```python
def exit_validator(index: int,
                   state: BeaconState,
                   block: BeaconBlock,
                   penalize: bool,
                   current_slot: int) -> None:
    """
    Remove the validator with the given `index` from `state`.
    Note that this function mutates `state`.
    """
    validator = state.validators[index]
    validator.last_status_change_slot = current_slot
    validator.exit_seq = state.current_exit_seq
    state.current_exit_seq += 1
    for committee in state.persistent_committees:
        for i, vindex in enumerate(committee):
            if vindex == index:
                committee.pop(i)
                break
    if penalize:
        state.deposits_penalized_in_period[current_slot // COLLECTIVE_PENALTY_CALCULATION_PERIOD] += balance_at_stake(validator)
        validator.status = PENALIZED
        whistleblower_xfer_amount = validator.deposit // SLASHING_WHISTLEBLOWER_REWARD_DENOMINATOR
        validator.deposit -= whistleblower_xfer_amount
        state.validators[get_beacon_proposer_index(state, block.slot)].deposit += whistleblower_xfer_amount
    else:
        validator.status = PENDING_EXIT
    state.validator_set_delta_hash_chain = get_new_validator_set_delta_hash_chain(
        current_validator_set_delta_hash_chain=state.validator_set_delta_hash_chain,
        index=index,
        pubkey=validator.pubkey,
        flag=EXIT,
    )
```
## Per-block processing
@ -818,7 +942,7 @@ def exit_validator(index, state, block, penalize, current_slot):
This procedure should be carried out every beacon block.

* Let `parent_hash` be the hash of the immediate previous beacon block (ie. equal to `ancestor_hashes[0]`).
* Let `parent` be the beacon block with the hash `parent_hash`.
First, set `recent_block_hashes` to the output of the following: First, set `recent_block_hashes` to the output of the following:
@ -846,27 +970,26 @@ def update_ancestor_hashes(parent_ancestor_hashes: List[Hash32],
### Verify attestations
Verify that there are at most `MAX_ATTESTATION_COUNT` `AttestationRecord` objects.

For each `AttestationRecord` object `obj`:

* Verify that `obj.data.slot <= block.slot - MIN_ATTESTATION_INCLUSION_DELAY` and `obj.data.slot >= max(parent.slot - CYCLE_LENGTH + 1, 0)`.
* Verify that `obj.data.justified_slot` is equal to `justification_source if obj.data.slot >= state.last_state_recalculation_slot else prev_cycle_justification_source`.
* Verify that `obj.data.justified_block_hash` is equal to `get_block_hash(state, block, obj.data.justified_slot)`.
* Verify that either `obj.data.last_crosslink_hash` or `obj.data.shard_block_hash` equals `state.crosslinks[shard].shard_block_hash`.
* `aggregate_sig` verification:
    * Let `participants = get_attestation_participants(state, obj.data, obj.attester_bitfield)`.
    * Let `group_public_key = BLSAddPubkeys([state.validators[v].pubkey for v in participants])`.
    * Check `BLSVerify(pubkey=group_public_key, msg=obj.data, sig=aggregate_sig, domain=get_domain(state.fork_data, slot, DOMAIN_ATTESTATION))`.
* [TO BE REMOVED IN PHASE 1] Verify that `shard_block_hash == bytes([0] * 32)`.
* Append `ProcessedAttestation(data=obj.data, attester_bitfield=obj.attester_bitfield, poc_bitfield=obj.poc_bitfield, slot_included=block.slot)` to `state.pending_attestations`.
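The slot-range condition above can be illustrated with a minimal, self-contained sketch; the constant values here are illustrative stand-ins for the spec's configuration, not normative:

```python
MIN_ATTESTATION_INCLUSION_DELAY = 1  # illustrative stand-ins for the spec's constants
CYCLE_LENGTH = 64

def attestation_slot_is_valid(attestation_slot: int, block_slot: int, parent_slot: int) -> bool:
    # An attestation must be at least MIN_ATTESTATION_INCLUSION_DELAY slots old,
    # and no older than one cycle before the parent block (clamped at slot 0).
    return (attestation_slot <= block_slot - MIN_ATTESTATION_INCLUSION_DELAY
            and attestation_slot >= max(parent_slot - CYCLE_LENGTH + 1, 0))
```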
### Verify proposer signature

Let `proposal_hash = hash(ProposalSignedData(block.slot, 2**64 - 1, block_hash_without_sig))` where `block_hash_without_sig` is the hash of the block except setting `proposer_signature` to `[0, 0]`.

Verify that `BLSVerify(pubkey=state.validators[get_beacon_proposer_index(state, block.slot)].pubkey, data=proposal_hash, sig=block.proposer_signature, domain=get_domain(state.fork_data, block.slot, DOMAIN_PROPOSAL))` passes.
### Verify and process RANDAO reveal
First, run the following state transition to update the `randao_skips` counters for the skipped slots:
```python
for slot in range(parent.slot + 1, block.slot):
    proposer_index = get_beacon_proposer_index(state, slot)
    state.validators[proposer_index].randao_skips += 1
```
Then:

* Let `repeat_hash(x, n) = x if n == 0 else repeat_hash(hash(x), n-1)`.
* Let `proposer = state.validators[get_beacon_proposer_index(state, block.slot)]`.
* Verify that `repeat_hash(block.randao_reveal, proposer.randao_skips + 1) == proposer.randao_commitment`.
* Set `state.randao_mix = xor(state.randao_mix, block.randao_reveal)`, `proposer.randao_commitment = block.randao_reveal`, and `proposer.randao_skips = 0`.
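The reveal check above can be sketched as follows. SHA-256 stands in for the spec's hash function (defined in Appendix A), so this is illustrative only:

```python
import hashlib

def repeat_hash(x: bytes, n: int) -> bytes:
    # Apply the hash function n times.
    return x if n == 0 else repeat_hash(hashlib.sha256(x).digest(), n - 1)

def verify_randao_reveal(reveal: bytes, randao_skips: int, commitment: bytes) -> bool:
    # The proposer peels back one layer of the hash onion per skipped slot,
    # plus one for the current block.
    return repeat_hash(reveal, randao_skips + 1) == commitment
```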
### Process PoW receipt root
For each `SpecialRecord` `obj` in `block.specials`, verify that its `kind` is one of the values listed below, and process `obj.data` according to the rules for that `kind`:
}
```
Perform the following checks:
* Verify that `BLSVerify(pubkey=validators[data.validator_index].pubkey, msg=bytes([0] * 32), sig=data.signature, domain=get_domain(state.fork_data, current_slot, DOMAIN_LOGOUT))` passes.
* Verify that `validators[validator_index].status == ACTIVE`.
* Verify that `block.slot >= last_status_change_slot + SHARD_PERSISTENT_COMMITTEE_CHANGE_PERIOD`.

Run `exit_validator(data.validator_index, state, block, penalize=False, current_slot=block.slot)`.
Perform the following checks:

* For each `vote`, verify that `BLSVerify(pubkey=aggregate_pubkey([validators[i].pubkey for i in vote_aggregate_sig_indices]), msg=vote_data, sig=vote_aggregate_sig, domain=get_domain(state.fork_data, vote_data.slot, DOMAIN_ATTESTATION))` passes.
* Verify that `vote1_data != vote2_data`.
* Let `intersection = [x for x in vote1_aggregate_sig_indices if x in vote2_aggregate_sig_indices]`. Verify that `len(intersection) >= 1`.
* Verify that `vote1_data.justified_slot < vote2_data.justified_slot < vote2_data.slot <= vote1_data.slot`.
For each validator index `v` in `intersection`, if `state.validators[v].status` does not equal `PENALIZED`, then run `exit_validator(v, state, penalize=True, current_slot=block.slot)`.
    'proposal1_signature': '[uint384]',
}
```
For each `proposal_signature`, verify that `BLSVerify(pubkey=validators[proposer_index].pubkey, msg=hash(proposal_data), sig=proposal_signature, domain=get_domain(state.fork_data, proposal_data.slot, DOMAIN_PROPOSAL))` passes. Verify that `proposal1_data.slot == proposal2_data.slot` but `proposal1 != proposal2`. If `state.validators[proposer_index].status` does not equal `PENALIZED`, then run `exit_validator(proposer_index, state, penalize=True, current_slot=block.slot)`.
#### DEPOSIT_PROOF
Verify that `block.slot - (deposit_data.timestamp - state.genesis_time) // SLOT_DURATION < DELETION_PERIOD`.

Run `add_or_topup_validator(state, pubkey=deposit_data.deposit_params.pubkey, deposit_size=deposit_data.msg_value, proof_of_possession=deposit_data.deposit_params.proof_of_possession, withdrawal_credentials=deposit_data.deposit_params.withdrawal_credentials, randao_commitment=deposit_data.deposit_params.randao_commitment, status=PENDING_ACTIVATION, current_slot=block.slot)`.
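The deposit proof is checked against the PoW receipt root with a standard Merkle branch verification. A sketch, with SHA-256 as a stand-in hash and with parameter names that are assumptions rather than the normative definition:

```python
import hashlib

def hash_fn(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_branch(leaf: bytes, branch: list, depth: int, index: int, root: bytes) -> bool:
    # Walk from the leaf up to the root, hashing in one sibling per level;
    # the bits of `index` say whether the sibling sits on the left or right.
    value = leaf
    for i in range(depth):
        if (index >> i) % 2:
            value = hash_fn(branch[i] + value)   # sibling is on the left
        else:
            value = hash_fn(value + branch[i])   # sibling is on the right
    return value == root
```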
## Cycle boundary processing

Repeat the steps in this section while `block.slot - last_state_recalculation_slot >= CYCLE_LENGTH`. For simplicity, we write `s` for `last_state_recalculation_slot`.
_Note: `last_state_recalculation_slot` will always be a multiple of `CYCLE_LENGTH`. In the "happy case", this process will trigger, and loop once, every time `block.slot` passes a new exact multiple of `CYCLE_LENGTH`, but if a chain skips more than an entire cycle then the loop may run multiple times, incrementing `last_state_recalculation_slot` by `CYCLE_LENGTH` with each iteration._
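The looping behaviour described in the note can be sketched as follows; the inner per-cycle steps are elided since they are the subject of the rest of this section:

```python
CYCLE_LENGTH = 64  # illustrative stand-in for the spec's constant

def run_cycle_boundary_processing(block_slot: int, last_state_recalculation_slot: int) -> int:
    # Loop once per crossed cycle boundary; in the happy case this is a single iteration.
    while block_slot - last_state_recalculation_slot >= CYCLE_LENGTH:
        # ... perform the per-cycle steps described in this section ...
        last_state_recalculation_slot += CYCLE_LENGTH
    return last_state_recalculation_slot
```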
#### Precomputation
All validators:
* Let `active_validators = [state.validators[i] for i in get_active_validator_indices(state.validators)]`.
* Let `total_balance = sum([balance_at_stake(v) for v in active_validators])`. Let `total_balance_in_eth = total_balance // GWEI_PER_ETH`.
* Let `reward_quotient = BASE_REWARD_QUOTIENT * int_sqrt(total_balance_in_eth)`. (The per-slot maximum interest rate is `2/reward_quotient`.)
Validators justifying the cycle boundary block at the start of the current cycle:
* Let `this_cycle_attestations = [a for a in state.pending_attestations if s <= a.data.slot < s + CYCLE_LENGTH]`. (note: this is the set of attestations _of slots in the cycle `s...s+CYCLE_LENGTH-1`_, not attestations _that got included in the chain during the cycle `s...s+CYCLE_LENGTH-1`_)
* Let `this_cycle_boundary_attestations = [a for a in this_cycle_attestations if a.data.cycle_boundary_hash == get_block_hash(state, block, s) and a.data.justified_slot == state.justification_source]`.
* Let `this_cycle_boundary_attesters` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.attester_bitfield) for a in this_cycle_boundary_attestations]`.
* Let `this_cycle_boundary_attesting_balance = sum([balance_at_stake(v) for v in this_cycle_boundary_attesters])`.
Validators justifying the cycle boundary block at the start of the previous cycle:
* Let `prev_cycle_attestations = [a for a in state.pending_attestations if s - CYCLE_LENGTH <= a.data.slot < s]`.
* Let `prev_cycle_boundary_attestations = [a for a in this_cycle_attestations + prev_cycle_attestations if a.data.cycle_boundary_hash == get_block_hash(state, block, s - CYCLE_LENGTH) and a.data.justified_slot == state.prev_cycle_justification_source]`.
* Let `prev_cycle_boundary_attesters` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.attester_bitfield) for a in prev_cycle_boundary_attestations]`.
* Let `prev_cycle_boundary_attesting_balance = sum([balance_at_stake(v) for v in prev_cycle_boundary_attesters])`.
For every `ShardAndCommittee` object `obj` in `shard_and_committee_for_slots`, let:
* `attesting_validators(obj, shard_block_hash)` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.attester_bitfield) for a in this_cycle_attestations + prev_cycle_attestations if a.data.shard == obj.shard and a.data.shard_block_hash == shard_block_hash]`
* `attesting_validators(obj)` be equal to `attesting_validators(obj, shard_block_hash)` for the value of `shard_block_hash` such that `sum([balance_at_stake(v) for v in attesting_validators(obj, shard_block_hash)])` is maximized (ties broken by favoring lower `shard_block_hash` values)
* `total_attesting_balance(obj)` be the sum of the balances-at-stake of `attesting_validators(obj)`
* `winning_hash(obj)` be the winning `shard_block_hash` value
* `total_balance(obj) = sum([balance_at_stake(v) for v in obj.committee])`
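The winning-hash selection above (maximal attesting balance, ties broken by the lower hash) can be sketched as a pure function over a map of candidate hashes to attesting balances:

```python
def winning_shard_block_hash(balance_by_hash: dict) -> bytes:
    # Highest attesting balance wins; ties are broken by favoring the lower hash value.
    return min(balance_by_hash, key=lambda h: (-balance_by_hash[h], h))
```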
Let `inclusion_slot(v)` equal `a.slot_included` for the attestation `a` where `v` is in `get_attestation_participants(state, a.data, a.attester_bitfield)`, and `inclusion_distance(v) = a.slot_included - a.data.slot` for the same attestation. We define a function `adjust_for_inclusion_distance(magnitude, dist)` which scales down the reward of an attestation based on how long it took to get included (the longer, the lower the reward); it returns a value between `magnitude // 2` and `magnitude`:
```python
def adjust_for_inclusion_distance(magnitude: int, dist: int) -> int:
return magnitude // 2 + (magnitude // 2) * MIN_ATTESTATION_INCLUSION_DELAY // dist
```
For any validator `v`, let `base_reward(v) = balance_at_stake(v) // reward_quotient`.
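For intuition, with `MIN_ATTESTATION_INCLUSION_DELAY = 1` (an illustrative value) the adjustment pays the full `magnitude` for the fastest possible inclusion and decays toward `magnitude // 2` as inclusion is delayed:

```python
MIN_ATTESTATION_INCLUSION_DELAY = 1  # illustrative

def adjust_for_inclusion_distance(magnitude: int, dist: int) -> int:
    return magnitude // 2 + (magnitude // 2) * MIN_ATTESTATION_INCLUSION_DELAY // dist

# With magnitude=100: dist=1 -> 100, dist=2 -> 75, dist=4 -> 62.
```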
#### Adjust justified slots and crosslink status

* Set `state.justified_slot_bitfield = (state.justified_slot_bitfield * 2) % 2**64`.
* If `3 * prev_cycle_boundary_attesting_balance >= 2 * total_balance`, set `state.justified_slot_bitfield |= 2` (i.e. set the second lowest bit to 1) and `new_justification_source = s - CYCLE_LENGTH`.
* If `3 * this_cycle_boundary_attesting_balance >= 2 * total_balance`, set `state.justified_slot_bitfield |= 1` (i.e. set the lowest bit to 1) and `new_justification_source = s`.
* If `state.justification_source == s - CYCLE_LENGTH` and `state.justified_slot_bitfield % 4 == 3`, set `state.last_finalized_slot = state.justification_source`.
* If `state.justification_source == s - 2 * CYCLE_LENGTH` and `state.justified_slot_bitfield % 8 == 7`, set `state.last_finalized_slot = state.justification_source`.
* If `state.justification_source == s - 3 * CYCLE_LENGTH` and `state.justified_slot_bitfield % 16 in (15, 14)`, set `state.last_finalized_slot = state.justification_source`.
* Set `state.prev_cycle_justification_source = state.justification_source` and if `new_justification_source` has been set, set `state.justification_source = new_justification_source`.
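The bitfield shift and justification update can be sketched as a pure function (illustrative only; `s`, the attesting balances, and the justification source are as defined in the precomputation step):

```python
CYCLE_LENGTH = 64  # illustrative

def update_justification(bitfield: int, justification_source: int, s: int,
                         prev_attesting_balance: int, this_attesting_balance: int,
                         total_balance: int):
    # Shift the bitfield left by one cycle, then set bits for any newly justified cycles.
    bitfield = (bitfield * 2) % 2**64
    new_source = justification_source
    if 3 * prev_attesting_balance >= 2 * total_balance:
        bitfield |= 2
        new_source = s - CYCLE_LENGTH
    if 3 * this_attesting_balance >= 2 * total_balance:
        bitfield |= 1
        new_source = s
    return bitfield, new_source
```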
For every `ShardAndCommittee` object `obj` in `shard_and_committee_for_slots`:

* If `3 * total_attesting_balance(obj) >= 2 * total_balance(obj)`, set `crosslinks[obj.shard] = CrosslinkRecord(slot=last_state_recalculation_slot + CYCLE_LENGTH, hash=winning_hash(obj))`.
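A minimal sketch of the per-committee crosslink update rule (the record type and names are illustrative simplifications):

```python
from typing import NamedTuple

class CrosslinkRecord(NamedTuple):
    slot: int
    hash: bytes

def maybe_update_crosslink(current: CrosslinkRecord, attesting_balance: int,
                           committee_balance: int, winning: bytes,
                           new_slot: int) -> CrosslinkRecord:
    # A crosslink only advances when at least 2/3 of the committee's balance
    # attested to a single shard block hash.
    if 3 * attesting_balance >= 2 * committee_balance:
        return CrosslinkRecord(slot=new_slot, hash=winning)
    return current
```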
#### Balance recalculations related to FFG rewards

Note: When applying penalties in the following balance recalculations, implementers should make sure the `uint64` does not underflow.
* Let `quadratic_penalty_quotient = SQRT_E_DROP_TIME**2`. (The portion lost by offline validators after `D` cycles is about `D*D/2/quadratic_penalty_quotient`.)
* Let `time_since_finality = block.slot - state.last_finalized_slot`.
Case 1: `time_since_finality <= 4 * CYCLE_LENGTH`:

* Any validator `v` in `prev_cycle_boundary_attesters` gains `adjust_for_inclusion_distance(base_reward(v) * prev_cycle_boundary_attesting_balance // total_balance, inclusion_distance(v))`.
* Any active validator `v` not in `prev_cycle_boundary_attesters` loses `base_reward(v)`.
Case 2: `time_since_finality > 4 * CYCLE_LENGTH`:
* Any validator in `prev_cycle_boundary_attesters` sees their balance unchanged.
* Any active validator `v` not in `prev_cycle_boundary_attesters`, and any validator with `status == PENALIZED`, loses `base_reward(v) + balance_at_stake(v) * time_since_finality // quadratic_penalty_quotient`.
For each `v` in `prev_cycle_boundary_attesters`, we determine the proposer `proposer_index = get_beacon_proposer_index(state, inclusion_slot(v))` and set `state.validators[proposer_index].balance += base_reward(v) // INCLUDER_REWARD_SHARE_QUOTIENT`.
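Both cases can be summarized as a per-validator balance delta; this sketch uses illustrative stand-ins for the spec's constants and folds `adjust_for_inclusion_distance` in directly:

```python
CYCLE_LENGTH = 64  # illustrative stand-ins
MIN_ATTESTATION_INCLUSION_DELAY = 1

def ffg_balance_delta(base_reward: int, balance_at_stake: int, attested: bool,
                      attesting_balance: int, total_balance: int, inclusion_distance: int,
                      time_since_finality: int, quadratic_penalty_quotient: int) -> int:
    def adjust(magnitude: int, dist: int) -> int:
        return magnitude // 2 + (magnitude // 2) * MIN_ATTESTATION_INCLUSION_DELAY // dist
    if time_since_finality <= 4 * CYCLE_LENGTH:
        if attested:
            return adjust(base_reward * attesting_balance // total_balance, inclusion_distance)
        return -base_reward
    # After prolonged non-finality, attesters break even and everyone else leaks quadratically.
    if attested:
        return 0
    return -(base_reward + balance_at_stake * time_since_finality // quadratic_penalty_quotient)
```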
#### Balance recalculations related to crosslink rewards

For every `ShardAndCommittee` object `obj` in `shard_and_committee_for_slots[:CYCLE_LENGTH]` (i.e. the objects corresponding to the cycle before the current one), for each `v` in `[state.validators[index] for index in obj.committee]`, adjust balances as follows:

* If `v in attesting_validators(obj)`, `v.balance += adjust_for_inclusion_distance(base_reward(v) * total_attesting_balance(obj) // total_balance(obj), inclusion_distance(v))`.
* If `v not in attesting_validators(obj)`, `v.balance -= base_reward(v)`.
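The per-committee-member delta follows the same shape as the FFG case; a sketch with an illustrative inclusion-delay constant:

```python
MIN_ATTESTATION_INCLUSION_DELAY = 1  # illustrative

def crosslink_balance_delta(base_reward: int, attested: bool, attesting_balance: int,
                            committee_balance: int, inclusion_distance: int) -> int:
    if attested:
        # Reward is proportional to the committee's attesting balance,
        # then scaled down by inclusion distance.
        magnitude = base_reward * attesting_balance // committee_balance
        return magnitude // 2 + (magnitude // 2) * MIN_ATTESTATION_INCLUSION_DELAY // inclusion_distance
    return -base_reward
```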
#### PoW chain related rules

If `last_state_recalculation_slot % POW_RECEIPT_ROOT_VOTING_PERIOD == 0`, then:

* If for any `x` in `state.candidate_pow_receipt_roots`, `x.votes * 2 >= POW_RECEIPT_ROOT_VOTING_PERIOD`, set `state.processed_pow_receipt_root = x.receipt_root`.
* Set `state.candidate_pow_receipt_roots = []`.
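The vote tally can be sketched as follows (names and the period value are illustrative):

```python
POW_RECEIPT_ROOT_VOTING_PERIOD = 1024  # illustrative

def winning_receipt_root(candidates):
    # candidates: list of (receipt_root, votes) pairs; a root wins once it has
    # been voted for by a majority of the slots in the voting period.
    for receipt_root, votes in candidates:
        if votes * 2 >= POW_RECEIPT_ROOT_VOTING_PERIOD:
            return receipt_root
    return None
```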
#### Validator set change

A validator set change can happen if all of the following criteria are satisfied:

* `block.slot - state.validator_set_change_slot >= MIN_VALIDATOR_SET_CHANGE_INTERVAL`
* `last_finalized_slot > state.validator_set_change_slot`
* For every shard number `shard` in `shard_and_committee_for_slots`, `crosslinks[shard].slot > state.validator_set_change_slot`
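The three preconditions combine as a simple predicate (sketch; parameter names are illustrative):

```python
def can_change_validator_set(block_slot: int, validator_set_change_slot: int,
                             last_finalized_slot: int, crosslink_slots: list,
                             min_validator_set_change_interval: int) -> bool:
    # All three conditions must hold: enough time elapsed, a newer finalized slot,
    # and every shard crosslinked since the last validator set change.
    return (block_slot - validator_set_change_slot >= min_validator_set_change_interval
            and last_finalized_slot > validator_set_change_slot
            and all(slot > validator_set_change_slot for slot in crosslink_slots))
```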
A helper function is defined as:
```python
def get_changed_validators(validators: List[ValidatorRecord],
                           deposits_penalized_in_period: List[int],
                           validator_set_delta_hash_chain: int,
                           current_slot: int) -> Tuple[List[ValidatorRecord], List[int], int]:
    """
    Return changed validator set and `deposits_penalized_in_period`, `validator_set_delta_hash_chain`.
    """
    # The active validator set
    active_validators = get_active_validator_indices(validators)
    # The total balance of active validators
    total_balance = sum([balance_at_stake(v) for i, v in enumerate(validators) if i in active_validators])
    # The maximum total wei that can deposit+withdraw
    max_allowable_change = max(
        2 * DEPOSIT_SIZE * GWEI_PER_ETH,
        total_balance // MAX_VALIDATOR_CHURN_QUOTIENT
    )
    # Go through the list start to end, depositing and withdrawing as many as possible
    total_changed = 0
    for i in range(len(validators)):
        if validators[i].status == PENDING_ACTIVATION:
            validators[i].status = ACTIVE
            total_changed += DEPOSIT_SIZE * GWEI_PER_ETH
            validator_set_delta_hash_chain = get_new_validator_set_delta_hash_chain(
                validator_set_delta_hash_chain=validator_set_delta_hash_chain,
                index=i,
                pubkey=validators[i].pubkey,
                flag=ENTRY,
            )
        if validators[i].status == PENDING_EXIT:
            validators[i].status = PENDING_WITHDRAW
            validators[i].last_status_change_slot = current_slot
            total_changed += balance_at_stake(validators[i])
            validator_set_delta_hash_chain = get_new_validator_set_delta_hash_chain(
                validator_set_delta_hash_chain=validator_set_delta_hash_chain,
                index=i,
                pubkey=validators[i].pubkey,
                flag=EXIT,
            )
        if total_changed >= max_allowable_change:
            break

    # Calculate the total ETH that has been penalized in the last ~2-3 withdrawal periods
    period_index = current_slot // COLLECTIVE_PENALTY_CALCULATION_PERIOD
    total_penalties = (
        (deposits_penalized_in_period[period_index]) +
        (deposits_penalized_in_period[period_index - 1] if period_index >= 1 else 0) +
        (deposits_penalized_in_period[period_index - 2] if period_index >= 2 else 0)
    )
    # Separate loop to withdraw validators that have been logged out for long enough, and
    # calculate their penalties if they were slashed
    withdrawable_validators = sorted(filter(withdrawable, validators), key=lambda v: v.exit_seq)
    for v in withdrawable_validators[:WITHDRAWALS_PER_CYCLE]:
        if v.status == PENALIZED:
            v.balance -= balance_at_stake(v) * min(total_penalties * 3, total_balance) // total_balance
        v.status = WITHDRAWN
        v.last_status_change_slot = current_slot

        withdraw_amount = v.balance
        # STUB: withdraw to shard chain
    return validators, deposits_penalized_in_period, validator_set_delta_hash_chain
```
Then, run the following algorithm to update the validator set:

```python
def change_validators(state: BeaconState,
                      current_slot: int) -> None:
    """
    Change validator set.
    Note that this function mutates `state`.
    """
    state.validators, state.deposits_penalized_in_period, state.validator_set_delta_hash_chain = get_changed_validators(
        copy.deepcopy(state.validators),
        copy.deepcopy(state.deposits_penalized_in_period),
        state.validator_set_delta_hash_chain,
        current_slot
    )
```
And perform the following updates to the `state`:
* Set `state.validator_set_change_slot = s + CYCLE_LENGTH`
* Set `state.shard_and_committee_for_slots[:CYCLE_LENGTH] = state.shard_and_committee_for_slots[CYCLE_LENGTH:]`
* Let `next_start_shard = (state.shard_and_committee_for_slots[-1][-1].shard + 1) % SHARD_COUNT`
* Set `state.shard_and_committee_for_slots[CYCLE_LENGTH:] = get_new_shuffling(state.next_shuffling_seed, state.validators, next_start_shard)`
* Set `state.next_shuffling_seed = state.randao_mix`
#### If a validator set change does NOT happen

* Set `state.shard_and_committee_for_slots[:CYCLE_LENGTH] = state.shard_and_committee_for_slots[CYCLE_LENGTH:]`
* Let `time_since_finality = block.slot - state.validator_set_change_slot`
* Let `start_shard = state.shard_and_committee_for_slots[0][0].shard`
* If `time_since_finality * CYCLE_LENGTH <= MIN_VALIDATOR_SET_CHANGE_INTERVAL` or `time_since_finality` is an exact power of 2, set `state.shard_and_committee_for_slots[CYCLE_LENGTH:] = get_new_shuffling(state.next_shuffling_seed, validators, start_shard)` and set `state.next_shuffling_seed = state.randao_mix`. Note that `start_shard` is not changed from the last cycle.
#### Proposer reshuffling

Run the following code to update the shard proposer set:
```python
active_validator_indices = get_active_validator_indices(validators)

for i in range(num_validators_to_reshuffle):
    shard_reassignment_record = ShardReassignmentRecord(
        validator_index=vid,
        shard=new_shard,
        slot=s + SHARD_PERSISTENT_COMMITTEE_CHANGE_PERIOD
    )
    state.persistent_committee_reassignments.append(shard_reassignment_record)

while len(state.persistent_committee_reassignments) > 0 and state.persistent_committee_reassignments[0].slot <= s:
    rec = state.persistent_committee_reassignments.pop(0)
    for committee in state.persistent_committees:
        if rec.validator_index in committee:
            committee.pop(committee.index(rec.validator_index))
    state.persistent_committees[rec.shard].append(rec.validator_index)
```
#### Finally...
* Remove all attestation records older than slot `s`
* For any validator with index `v` with balance less than `MIN_ONLINE_DEPOSIT_SIZE` and status `ACTIVE`, run `exit_validator(v, state, block, penalize=False, current_slot=block.slot)`
* Set `state.recent_block_hashes = state.recent_block_hashes[CYCLE_LENGTH:]`
* Set `state.last_state_recalculation_slot += CYCLE_LENGTH`
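These final per-cycle updates can be sketched as one helper. The constants, dict-shaped state, and the `exit_validator` stub below are simplified placeholders for the spec's definitions:

```python
# Illustrative placeholder constants (not normative values)
CYCLE_LENGTH = 64
MIN_ONLINE_DEPOSIT_SIZE = 16
ACTIVE = 1
EXITED_WITHOUT_PENALTY = 3

def exit_validator(index, state, penalize, current_slot):
    # Simplified stand-in for the spec's exit_validator
    state['validators'][index]['status'] = EXITED_WITHOUT_PENALTY

def finalize_cycle(state, current_slot):
    s = state['last_state_recalculation_slot']
    # 1. Remove all attestation records older than slot s
    state['pending_attestations'] = [a for a in state['pending_attestations'] if a['slot'] >= s]
    # 2. Eject active validators whose balance fell below the minimum, without penalty
    for i, v in enumerate(state['validators']):
        if v['balance'] < MIN_ONLINE_DEPOSIT_SIZE and v['status'] == ACTIVE:
            exit_validator(i, state, penalize=False, current_slot=current_slot)
    # 3. Rotate recent block hashes and advance the recalculation slot
    state['recent_block_hashes'] = state['recent_block_hashes'][CYCLE_LENGTH:]
    state['last_state_recalculation_slot'] += CYCLE_LENGTH
```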
# Appendix

## Appendix A - Hash function
A `ShardBlock` object has the following fields:

```python
{
    # ...
    # State root (placeholder for now)
    'state_root': 'hash32',
    # Block signature
    'signature': ['uint384'],
    # Attestation
    'attester_bitfield': 'bytes',
    'aggregate_sig': ['uint384'],
}
```
# [WIP] SimpleSerialize (SSZ) Spec

This is the **work in progress** document to describe `SimpleSerialize`, the current selected serialization method for Ethereum 2.0 using the Beacon Chain.

This document specifies the general information for serializing and deserializing objects and data types.
* [Constants](#constants)
* [Overview](#overview)
    + [Serialize/Encode](#serializeencode)
        - [uint](#uint)
        - [Bool](#bool)
        - [Address](#address)
        - [Hash](#hash)
        - [Bytes](#bytes)
        - [List/Vectors](#listvectors)
        - [Container](#container)
    + [Deserialize/Decode](#deserializedecode)
        - [uint](#uint-1)
        - [Bool](#bool-1)
        - [Address](#address-1)
        - [Hash](#hash-1)
        - [Bytes](#bytes-1)
        - [List/Vectors](#listvectors-1)
        - [Container](#container-1)
    + [Tree Hash](#tree-hash)
* [Implementations](#implementations)
## About
## Constants

| Constant | Value | Definition |
|:------------------|:-----:|:--------------------------------------------------------------------------------------|
| `LENGTH_BYTES` | 4 | Number of bytes used for the length added before a variable-length serialized object. |
| `SSZ_CHUNK_SIZE` | 128 | Number of bytes for the chunk size of a Merkle tree leaf. |
## Overview

### Serialize/Encode

#### uint
| uint Type | Usage |
|:---------:|:-----------------------------------------------------------|
| `uintN` | An `N`-bit unsigned integer, where ``N % 8 == 0``. |
Convert directly to bytes the size of the int. (e.g. ``uint16 = 2 bytes``)

```python
buffer_size = int_size // 8
return value.to_bytes(buffer_size, 'big')
```
#### Bool

Convert directly to a single 0x00 or 0x01 byte.

```python
return b'\x01' if value is True else b'\x00'
```
#### Address

The `address` should already come as a hash/byte format. Ensure that length is **20**.

| Check to perform | Code |
|:-----------------------|:---------------------|
| Length is correct (20 bytes) | ``len(value) == 20`` |

```python
assert len(value) == 20
return value
```
For general `bytes` type:

1. Get the length/number of bytes; Encode into a `4-byte` integer.
2. Append the value to the length and return: ``[ length_bytes ] + [ value_bytes ]``

| Check to perform | Code |
|:-------------------------------------|:-----------------------|
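The two steps above can be sketched as a small helper, assuming the same big-endian byte order used for `uint` serialization (the function name is ours, for illustration):

```python
LENGTH_BYTES = 4

def serialize_bytes(value):
    # 1. Encode the number of bytes as a 4-byte big-endian integer
    assert len(value) < 2 ** (LENGTH_BYTES * 8)
    length_bytes = len(value).to_bytes(LENGTH_BYTES, 'big')
    # 2. Prepend the length to the value
    return length_bytes + value
```

For example, `serialize_bytes(b'abc')` yields a 4-byte length prefix followed by the payload.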
At each step, the following checks should be made:

| Check to perform | Code |
|:-------------------------|:-----------------------------------------------------------|
| Ensure sufficient length | ``length(rawbytes) >= current_index + deserialize_length`` |
#### uint

Convert directly from bytes into an integer, using the same number of bytes as the integer's size. (e.g. ``uint16 == 2 bytes``)
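A sketch of this rule; the function name and the `(value, new_index)` return shape are assumptions for illustration, consistent with the length check described above:

```python
def deserialize_uint(rawbytes, current_index, int_size):
    # int_size is in bits; consume int_size // 8 big-endian bytes
    byte_length = int_size // 8
    # Ensure sufficient length before reading
    assert len(rawbytes) >= current_index + byte_length
    value = int.from_bytes(rawbytes[current_index:current_index + byte_length], 'big')
    return value, current_index + byte_length
```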
#### Bool

```python
return True if rawbytes == b'\x01' else False
```
#### Address

Return the 20-byte deserialized address.

```python
assert(len(rawbytes) >= current_index + 20)
# ...
```

#### Container

Instantiate a container with the full set of deserialized data, matching each member.

To deserialize:

1. Get the names of the container's fields and sort them.
2. For each name in the sorted list, attempt to deserialize a value for that type. Collect these values, as they will be used to construct an instance of the container.
3. Construct a container instance after successfully consuming the entire subset of the stream for the serialized container.
**Example in Python**

```python
# ...
assert item_index == start + LENGTH_BYTES + length
return typ(**values), item_index
```
### Tree Hash

The below `tree_hash` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For the final output only (i.e. not intermediate outputs), if the output is less than 32 bytes, right-zero-pad it to 32 bytes. The goal is collision resistance *within* each type, not between types.
We define `hash(x)` as `BLAKE2b-512(x)[0:32]`.
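In Python this definition can be sketched with the standard library, since `hashlib.blake2b` defaults to a 512-bit digest (the name `hash_ssz` is ours, chosen to avoid shadowing the built-in `hash`):

```python
from hashlib import blake2b

def hash_ssz(x):
    # BLAKE2b-512, truncated to the first 32 bytes
    return blake2b(x).digest()[:32]
```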
#### `uintN`, `bool`, `address`, `hash32`

Return the serialization of the value.

#### `bytes`, `hashN`

Return the hash of the serialization of the value.

#### List/Vectors

First, we define some helpers and then the Merkle tree function.
```python
# Merkle tree hash of a list of homogenous, non-empty items
def merkle_hash(lst):
    if len(lst) == 0:
        # Handle empty list case
        chunkz = [b'\x00' * SSZ_CHUNK_SIZE]
    elif len(lst[0]) < SSZ_CHUNK_SIZE:
        # See how many items fit in a chunk
        items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])

        # Build a list of chunks based on the number of items in the chunk
        chunkz = [b''.join(lst[i:i+items_per_chunk]) for i in range(0, len(lst), items_per_chunk)]
    # ...

    # Tree-hash
    while len(chunkz) > 1:
        if len(chunkz) % 2 == 1:
            chunkz.append(b'\x00' * SSZ_CHUNK_SIZE)
        chunkz = [hash(chunkz[i] + chunkz[i+1]) for i in range(0, len(chunkz), 2)]

    # Return hash of root and length data
    # ...
```