Merge branch 'dev' into name-changes

Commit 8445d06b8f
@@ -69,10 +69,10 @@ We require:
 G2_cofactor = 305502333931268344200999753193121504214466019254188142667664032982267604182971884026507427359259977847832272839041616661285803823378372096355777062779109
 q = 4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787

-def hash_to_G2(message: bytes32, domain: uint64) -> [uint384]:
+def hash_to_G2(message_hash: Bytes32, domain: uint64) -> [uint384]:
     # Initial candidate x coordinate
-    x_re = int.from_bytes(hash(message + bytes8(domain) + b'\x01'), 'big')
-    x_im = int.from_bytes(hash(message + bytes8(domain) + b'\x02'), 'big')
+    x_re = int.from_bytes(hash(message_hash + bytes8(domain) + b'\x01'), 'big')
+    x_im = int.from_bytes(hash(message_hash + bytes8(domain) + b'\x02'), 'big')
     x_coordinate = Fq2([x_re, x_im])  # x = x_re + i * x_im

     # Test candidate y coordinates until one is found
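Editorial note: the candidate x-coordinate derivation above can be sketched in plain Python. This is an illustration only, not the real spec helper: `hashlib.sha256` stands in for the spec's `hash`, and the byte order of `bytes8(domain)` is an assumption here.

```python
import hashlib

def hash_fn(data: bytes) -> bytes:
    # sha256 as a stand-in for the spec's `hash` function (assumption)
    return hashlib.sha256(data).digest()

def initial_x_coordinate(message_hash: bytes, domain: int) -> tuple:
    # bytes8(domain) is assumed big-endian here purely for illustration
    domain_bytes = domain.to_bytes(8, 'big')
    x_re = int.from_bytes(hash_fn(message_hash + domain_bytes + b'\x01'), 'big')
    x_im = int.from_bytes(hash_fn(message_hash + domain_bytes + b'\x02'), 'big')
    return x_re, x_im

x_re, x_im = initial_x_coordinate(b'\x42' * 32, 3)
```

The two domain-separating suffix bytes (`\x01`, `\x02`) guarantee the real and imaginary parts come from distinct hash inputs.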
@@ -128,17 +128,17 @@ g = Fq2([g_x, g_y])

 ### `bls_verify`

-Let `bls_verify(pubkey: Bytes48, message: Bytes32, signature: Bytes96, domain: uint64) -> bool`:
+Let `bls_verify(pubkey: Bytes48, message_hash: Bytes32, signature: Bytes96, domain: uint64) -> bool`:

 * Verify that `pubkey` is a valid G1 point.
 * Verify that `signature` is a valid G2 point.
-* Verify that `e(pubkey, hash_to_G2(message, domain)) == e(g, signature)`.
+* Verify that `e(pubkey, hash_to_G2(message_hash, domain)) == e(g, signature)`.

 ### `bls_verify_multiple`

-Let `bls_verify_multiple(pubkeys: List[Bytes48], messages: List[Bytes32], signature: Bytes96, domain: uint64) -> bool`:
+Let `bls_verify_multiple(pubkeys: List[Bytes48], message_hashes: List[Bytes32], signature: Bytes96, domain: uint64) -> bool`:

 * Verify that each `pubkey` in `pubkeys` is a valid G1 point.
 * Verify that `signature` is a valid G2 point.
-* Verify that `len(pubkeys)` equals `len(messages)` and denote the length `L`.
-* Verify that `e(pubkeys[0], hash_to_G2(messages[0], domain)) * ... * e(pubkeys[L-1], hash_to_G2(messages[L-1], domain)) == e(g, signature)`.
+* Verify that `len(pubkeys)` equals `len(message_hashes)` and denote the length `L`.
+* Verify that `e(pubkeys[0], hash_to_G2(message_hashes[0], domain)) * ... * e(pubkeys[L-1], hash_to_G2(message_hashes[L-1], domain)) == e(g, signature)`.
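Editorial note: the pairing-product identity that `bls_verify_multiple` checks can be demonstrated with a toy bilinear map — this is emphatically NOT BLS12-381. We model keys and message hashes as integer exponents and define `e(g^a, g^b) = gT^(a*b) mod p`, where `p` and `gT` are arbitrary made-up parameters.

```python
# Toy bilinear "pairing": e(a, b) = gT**(a*b) mod p (hypothetical parameters)
p = 1000003
gT = 5

def e(a: int, b: int) -> int:
    return pow(gT, a * b, p)

# hypothetical secret keys and per-message hash exponents
sks = [7, 11, 13]
hs = [3, 19, 23]

# the aggregate signature is the sum sk_i * h_i in the exponent
sig = sum(sk * h for sk, h in zip(sks, hs))

# product of per-signer pairings ...
lhs = 1
for sk, h in zip(sks, hs):
    lhs = (lhs * e(sk, h)) % p
# ... equals a single pairing against the aggregate signature
rhs = e(1, sig)  # e(g, signature)
assert lhs == rhs
```

The point is bilinearity: the product of pairings collects the exponents `sk_i * h_i` into one sum, which is exactly what the aggregate signature carries.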
@@ -13,6 +13,7 @@
 - [Constants](#constants)
     - [Misc](#misc)
     - [Deposit contract](#deposit-contract)
+    - [Gwei values](#gwei-values)
     - [Initial values](#initial-values)
     - [Time parameters](#time-parameters)
     - [State list lengths](#state-list-lengths)
@@ -54,11 +55,12 @@
     - [`hash`](#hash)
     - [`hash_tree_root`](#hash_tree_root)
     - [`slot_to_epoch`](#slot_to_epoch)
+    - [`get_previous_epoch`](#get_previous_epoch)
     - [`get_current_epoch`](#get_current_epoch)
     - [`get_epoch_start_slot`](#get_epoch_start_slot)
     - [`is_active_validator`](#is_active_validator)
     - [`get_active_validator_indices`](#get_active_validator_indices)
-    - [`shuffle`](#shuffle)
+    - [`get_permuted_index`](#get_permuted_index)
     - [`split`](#split)
     - [`get_epoch_committee_count`](#get_epoch_committee_count)
     - [`get_shuffling`](#get_shuffling)
@@ -75,7 +77,9 @@
     - [`get_attestation_participants`](#get_attestation_participants)
     - [`is_power_of_two`](#is_power_of_two)
     - [`int_to_bytes1`, `int_to_bytes2`, ...](#int_to_bytes1-int_to_bytes2-)
+    - [`bytes_to_int`](#bytes_to_int)
     - [`get_effective_balance`](#get_effective_balance)
+    - [`get_total_balance`](#get_total_balance)
     - [`get_fork_version`](#get_fork_version)
     - [`get_domain`](#get_domain)
     - [`get_bitfield_bit`](#get_bitfield_bit)
@@ -121,7 +125,7 @@
     - [Deposits](#deposits-1)
     - [Exits](#exits-1)
 - [Per-epoch processing](#per-epoch-processing)
-    - [Helpers](#helpers)
+    - [Helper variables](#helper-variables)
     - [Eth1 data](#eth1-data-1)
     - [Justification](#justification)
     - [Crosslinks](#crosslinks)
@@ -150,7 +154,7 @@ The primary source of load on the beacon chain is "attestations". Attestations a

 ## Notation

-Code snippets appearing in `this style` are to be interpreted as Python code. Beacon blocks that trigger unhandled Python exceptions (e.g. out-of-range list accesses) and failed asserts are considered invalid.
+Code snippets appearing in `this style` are to be interpreted as Python code.

 ## Terminology
@@ -163,7 +167,7 @@ Code snippets appearing in `this style` are to be interpreted as Python code.
 * **Shard chain** - one of the chains on which user transactions take place and account data is stored.
 * **Block root** - a 32-byte Merkle root of a beacon chain block or shard chain block. Previously called "block hash".
 * **Crosslink** - a set of signatures from a committee attesting to a block in a shard chain, which can be included into the beacon chain. Crosslinks are the main means by which the beacon chain "learns about" the updated state of shard chains.
-* **Slot** - a period of `SLOT_DURATION` seconds, during which one proposer has the ability to create a beacon chain block and some attesters have the ability to make attestations
+* **Slot** - a period during which one proposer has the ability to create a beacon chain block and some attesters have the ability to make attestations
 * **Epoch** - an aligned span of slots during which all [validators](#dfn-validator) get exactly one chance to make an attestation
 * **Finalized**, **justified** - see Casper FFG finalization [[casper-ffg]](#ref-casper-ffg)
 * **Withdrawal period** - the number of slots between a [validator](#dfn-validator) exit and the [validator](#dfn-validator) balance being withdrawable
@@ -177,29 +181,36 @@ Code snippets appearing in `this style` are to be interpreted as Python code.
 | - | - | :-: |
 | `SHARD_COUNT` | `2**10` (= 1,024) | shards |
 | `TARGET_COMMITTEE_SIZE` | `2**7` (= 128) | [validators](#dfn-validator) |
-| `EJECTION_BALANCE` | `2**4 * 1e9` (= 16,000,000,000) | Gwei |
 | `MAX_BALANCE_CHURN_QUOTIENT` | `2**5` (= 32) | - |
 | `BEACON_CHAIN_SHARD_NUMBER` | `2**64 - 1` | - |
 | `MAX_INDICES_PER_SLASHABLE_VOTE` | `2**12` (= 4,096) | votes |
 | `MAX_WITHDRAWALS_PER_EPOCH` | `2**2` (= 4) | withdrawals |
+| `SHUFFLE_ROUND_COUNT` | 90 | - |

 * For the safety of crosslinks `TARGET_COMMITTEE_SIZE` exceeds [the recommended minimum committee size of 111](https://vitalik.ca/files/Ithaca201807_Sharding.pdf); with sufficient active validators (at least `EPOCH_LENGTH * TARGET_COMMITTEE_SIZE`), the shuffling algorithm ensures committee sizes of at least `TARGET_COMMITTEE_SIZE`. (Unbiasable randomness with a Verifiable Delay Function (VDF) will improve committee robustness and lower the safe minimum committee size.)

 ### Deposit contract

+| Name | Value |
+| - | - |
+| `DEPOSIT_CONTRACT_ADDRESS` | **TBD** |
+| `DEPOSIT_CONTRACT_TREE_DEPTH` | `2**5` (= 32) |
+
+### Gwei values
+
 | Name | Value | Unit |
 | - | - | :-: |
-| `DEPOSIT_CONTRACT_ADDRESS` | **TBD** |
-| `DEPOSIT_CONTRACT_TREE_DEPTH` | `2**5` (= 32) | - |
 | `MIN_DEPOSIT_AMOUNT` | `2**0 * 1e9` (= 1,000,000,000) | Gwei |
 | `MAX_DEPOSIT_AMOUNT` | `2**5 * 1e9` (= 32,000,000,000) | Gwei |
+| `FORK_CHOICE_BALANCE_INCREMENT` | `2**0 * 1e9` (= 1,000,000,000) | Gwei |
+| `EJECTION_BALANCE` | `2**4 * 1e9` (= 16,000,000,000) | Gwei |

 ### Initial values

 | Name | Value |
 | - | - |
 | `GENESIS_FORK_VERSION` | `0` |
-| `GENESIS_SLOT` | `2**19` |
+| `GENESIS_SLOT` | `2**63` |
 | `GENESIS_EPOCH` | `slot_to_epoch(GENESIS_SLOT)` |
 | `GENESIS_START_SHARD` | `0` |
 | `FAR_FUTURE_EPOCH` | `2**64 - 1` |
@@ -353,8 +364,8 @@ The following data structures are defined as [SimpleSerialize (SSZ)](https://git
     'epoch_boundary_root': 'bytes32',
     # Shard block's hash of root
     'shard_block_root': 'bytes32',
-    # Last crosslink's hash of root
-    'latest_crosslink_root': 'bytes32',
+    # Last crosslink
+    'latest_crosslink': Crosslink,
     # Last justified epoch in the beacon state
     'justified_epoch': 'uint64',
     # Hash of the last justified beacon block
@@ -515,6 +526,7 @@ The following data structures are defined as [SimpleSerialize (SSZ)](https://git
     # Ethereum 1.0 chain data
     'latest_eth1_data': Eth1Data,
     'eth1_data_votes': [Eth1DataVote],
+    'deposit_index': 'uint64'
 }
 ```
@@ -639,6 +651,20 @@ def slot_to_epoch(slot: Slot) -> Epoch:
     return slot // EPOCH_LENGTH
 ```

+### `get_previous_epoch`
+
+```python
+def get_previous_epoch(state: BeaconState) -> EpochNumber:
+    """
+    Return the previous epoch of the given ``state``.
+    If the current epoch is ``GENESIS_EPOCH``, return ``GENESIS_EPOCH``.
+    """
+    current_epoch = get_current_epoch(state)
+    if current_epoch == GENESIS_EPOCH:
+        return GENESIS_EPOCH
+    return current_epoch - 1
+```
+
 ### `get_current_epoch`

 ```python
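Editorial note: the clamping behaviour added by `get_previous_epoch` can be sketched with the state stripped away (the real helper takes a `BeaconState`; here the current epoch is passed directly). `EPOCH_LENGTH = 64` is an assumption about this spec revision, not stated in the diff.

```python
EPOCH_LENGTH = 64                              # assumed slots-per-epoch value
GENESIS_SLOT = 2**63                           # from the updated "Initial values" table
GENESIS_EPOCH = GENESIS_SLOT // EPOCH_LENGTH   # i.e. slot_to_epoch(GENESIS_SLOT)

def slot_to_epoch(slot: int) -> int:
    return slot // EPOCH_LENGTH

def get_previous_epoch(current_epoch: int) -> int:
    # Clamp at genesis so callers never see an epoch before GENESIS_EPOCH
    if current_epoch == GENESIS_EPOCH:
        return GENESIS_EPOCH
    return current_epoch - 1
```

The clamp is what lets callers such as `get_crosslink_committees_at_slot` drop their own `current_epoch > GENESIS_EPOCH` special-casing.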
@@ -678,57 +704,27 @@ def get_active_validator_indices(validators: List[Validator], epoch: Epoch) -> L
     return [i for i, v in enumerate(validators) if is_active_validator(v, epoch)]
 ```

-### `shuffle`
+### `get_permuted_index`

 ```python
-def shuffle(values: List[Any], seed: Bytes32) -> List[Any]:
-    """
-    Return the shuffled ``values`` with ``seed`` as entropy.
-    """
-    values_count = len(values)
-
-    # Entropy is consumed from the seed in 3-byte (24 bit) chunks.
-    rand_bytes = 3
-    # The highest possible result of the RNG.
-    rand_max = 2 ** (rand_bytes * 8) - 1
-
-    # The range of the RNG places an upper-bound on the size of the list that
-    # may be shuffled. It is a logic error to supply an oversized list.
-    assert values_count < rand_max
-
-    output = [x for x in values]
-    source = seed
-    index = 0
-    while index < values_count - 1:
-        # Re-hash the `source` to obtain a new pattern of bytes.
-        source = hash(source)
-        # Iterate through the `source` bytes in 3-byte chunks.
-        for position in range(0, 32 - (32 % rand_bytes), rand_bytes):
-            # Determine the number of indices remaining in `values` and exit
-            # once the last index is reached.
-            remaining = values_count - index
-            if remaining == 1:
-                break
-
-            # Read 3-bytes of `source` as a 24-bit big-endian integer.
-            sample_from_source = int.from_bytes(source[position:position + rand_bytes], 'big')
-
-            # Sample values greater than or equal to `sample_max` will cause
-            # modulo bias when mapped into the `remaining` range.
-            sample_max = rand_max - rand_max % remaining
-
-            # Perform a swap if the consumed entropy will not cause modulo bias.
-            if sample_from_source < sample_max:
-                # Select a replacement index for the current index.
-                replacement_position = (sample_from_source % remaining) + index
-                # Swap the current index with the replacement index.
-                output[index], output[replacement_position] = output[replacement_position], output[index]
-                index += 1
-            else:
-                # The sample causes modulo bias. A new sample should be read.
-                pass
-
-    return output
+def get_permuted_index(index: int, list_size: int, seed: Bytes32) -> int:
+    """
+    Return `p(index)` in a pseudorandom permutation `p` of `0...list_size-1` with ``seed`` as entropy.
+
+    Utilizes 'swap or not' shuffling found in
+    https://link.springer.com/content/pdf/10.1007%2F978-3-642-32009-5_1.pdf
+    See the 'generalized domain' algorithm on page 3.
+    """
+    for round in range(SHUFFLE_ROUND_COUNT):
+        pivot = bytes_to_int(hash(seed + int_to_bytes1(round))[0:8]) % list_size
+        flip = (pivot - index) % list_size
+        position = max(index, flip)
+        source = hash(seed + int_to_bytes1(round) + int_to_bytes4(position // 256))
+        byte = source[(position % 256) // 8]
+        bit = (byte >> (position % 8)) % 2
+        index = flip if bit else index
+
+    return index
 ```

 ### `split`
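Editorial note: the new swap-or-not routine is easy to exercise standalone. The sketch below follows the hunk's logic line for line, with `hashlib.sha256` standing in for the spec's `hash` (an assumption) and the little-endian conversions taken from the updated `bytes_to_int` / `int_to_bytes*` helpers. Because each round swaps `index` and `flip` as a symmetric pair (both see the same `position = max(index, flip)`), the map is a bijection for any hash function, which the final assertion checks.

```python
import hashlib

SHUFFLE_ROUND_COUNT = 90  # from the updated Misc constants table

def hash_fn(data: bytes) -> bytes:
    # sha256 as a stand-in for the spec's `hash` function (assumption)
    return hashlib.sha256(data).digest()

def get_permuted_index(index: int, list_size: int, seed: bytes) -> int:
    for round in range(SHUFFLE_ROUND_COUNT):
        # Per-round pivot drawn from the seed (little-endian, per `bytes_to_int`)
        pivot = int.from_bytes(hash_fn(seed + round.to_bytes(1, 'little'))[0:8], 'little') % list_size
        flip = (pivot - index) % list_size
        position = max(index, flip)  # identical for both members of the swap pair
        source = hash_fn(seed + round.to_bytes(1, 'little') + (position // 256).to_bytes(4, 'little'))
        byte = source[(position % 256) // 8]
        bit = (byte >> (position % 8)) % 2
        index = flip if bit else index
    return index

seed = b'\x00' * 32
perm = [get_permuted_index(i, 100, seed) for i in range(100)]
assert sorted(perm) == list(range(100))  # p is a bijection on 0..99
```

Unlike the removed `shuffle`, each output index is computable independently, so light clients can find one validator's committee position without shuffling the whole list.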
@@ -778,8 +774,10 @@ def get_shuffling(seed: Bytes32,
     committees_per_epoch = get_epoch_committee_count(len(active_validator_indices))

     # Shuffle
-    seed = xor(seed, int_to_bytes32(epoch))
-    shuffled_active_validator_indices = shuffle(active_validator_indices, seed)
+    shuffled_active_validator_indices = [
+        active_validator_indices[get_permuted_index(i, len(active_validator_indices), seed)]
+        for i in range(len(active_validator_indices))
+    ]

     # Split the shuffled list into committees_per_epoch pieces
     return split(shuffled_active_validator_indices, committees_per_epoch)
@@ -821,6 +819,9 @@ def get_current_epoch_committee_count(state: BeaconState) -> int:

 ```python
 def get_next_epoch_committee_count(state: BeaconState) -> int:
+    """
+    Return the number of committees in the next epoch of the given ``state``.
+    """
     next_active_validators = get_active_validator_indices(
         state.validator_registry,
         get_current_epoch(state) + 1,
@@ -833,7 +834,7 @@ def get_next_epoch_committee_count(state: BeaconState) -> int:

 ```python
 def get_crosslink_committees_at_slot(state: BeaconState,
                                      slot: Slot,
-                                     registry_change=False: bool) -> List[Tuple[List[ValidatorIndex], Shard]]:
+                                     registry_change: bool=False) -> List[Tuple[List[ValidatorIndex], Shard]]:
     """
     Return the list of ``(committee, shard)`` tuples for the ``slot``.
@@ -842,7 +843,7 @@ def get_crosslink_committees_at_slot(state: BeaconState,
     """
     epoch = slot_to_epoch(slot)
     current_epoch = get_current_epoch(state)
-    previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else current_epoch
+    previous_epoch = get_previous_epoch(state)
     next_epoch = current_epoch + 1

     assert previous_epoch <= epoch <= next_epoch
@@ -942,7 +943,8 @@ def generate_seed(state: BeaconState,
     """
     return hash(
         get_randao_mix(state, epoch - MIN_SEED_LOOKAHEAD) +
-        get_active_index_root(state, epoch)
+        get_active_index_root(state, epoch) +
+        int_to_bytes32(epoch)
     )
 ```
@@ -1012,7 +1014,14 @@ def is_power_of_two(value: int) -> bool:

 ### `int_to_bytes1`, `int_to_bytes2`, ...

-`int_to_bytes1(x): return x.to_bytes(1, 'big')`, `int_to_bytes2(x): return x.to_bytes(2, 'big')`, and so on for all integers, particularly 1, 2, 3, 4, 8, 32, 48, 96.
+`int_to_bytes1(x): return x.to_bytes(1, 'little')`, `int_to_bytes2(x): return x.to_bytes(2, 'little')`, and so on for all integers, particularly 1, 2, 3, 4, 8, 32, 48, 96.
+
+### `bytes_to_int`
+
+```python
+def bytes_to_int(data: bytes) -> int:
+    return int.from_bytes(data, 'little')
+```
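Editorial note: with the switch from big- to little-endian, the two helper families become exact inverses of each other. A minimal round-trip sketch:

```python
# Little-endian serialization helpers, matching the updated definitions
def int_to_bytes4(x: int) -> bytes:
    return x.to_bytes(4, 'little')

def bytes_to_int(data: bytes) -> int:
    return int.from_bytes(data, 'little')

# Least-significant byte comes first in little-endian encoding
assert int_to_bytes4(1) == b'\x01\x00\x00\x00'
# bytes_to_int inverts int_to_bytes4 (0x12345678 == 305419896)
assert bytes_to_int(int_to_bytes4(305419896)) == 305419896
```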
@@ -1024,6 +1033,16 @@ def get_effective_balance(state: State, index: ValidatorIndex) -> Gwei:
     return min(state.validator_balances[index], MAX_DEPOSIT_AMOUNT)
 ```

+### `get_total_balance`
+
+```python
+def get_total_balance(state: BeaconState, validators: List[ValidatorIndex]) -> Gwei:
+    """
+    Return the combined effective balance of an array of validators.
+    """
+    return sum([get_effective_balance(state, i) for i in validators])
+```
+
 ### `get_fork_version`

 ```python
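Editorial note: the interaction between the cap in `get_effective_balance` and the sum in `get_total_balance` is worth a tiny sketch. The `BeaconState` is reduced to a plain balance list; `MAX_DEPOSIT_AMOUNT = 32e9` Gwei comes from the Gwei values table.

```python
MAX_DEPOSIT_AMOUNT = 32 * 10**9  # Gwei, from the Gwei values table

def get_effective_balance(balances, index):
    # Effective balance is the raw balance capped at MAX_DEPOSIT_AMOUNT
    return min(balances[index], MAX_DEPOSIT_AMOUNT)

def get_total_balance(balances, indices):
    return sum(get_effective_balance(balances, i) for i in indices)

# 40 ETH is capped to 32 ETH before summing; 16 ETH passes through unchanged
balances = [40 * 10**9, 16 * 10**9]
total = get_total_balance(balances, [0, 1])
```

The cap means a validator's influence (and rewards) never grows past 32 ETH, no matter how large the raw balance.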
@@ -1058,7 +1077,7 @@ def get_bitfield_bit(bitfield: bytes, i: int) -> int:
     """
     Extract the bit in ``bitfield`` at position ``i``.
     """
-    return (bitfield[i // 8] >> (7 - (i % 8))) % 2
+    return (bitfield[i // 8] >> (i % 8)) % 2
 ```

 ### `verify_bitfield`
|
||||||
if len(bitfield) != (committee_size + 7) // 8:
|
if len(bitfield) != (committee_size + 7) // 8:
|
||||||
return False
|
return False
|
||||||
|
|
||||||
for i in range(committee_size + 1, committee_size - committee_size % 8 + 8):
|
# Check `bitfield` is padded with zero bits only
|
||||||
|
for i in range(committee_size, len(bitfield) * 8):
|
||||||
if get_bitfield_bit(bitfield, i) == 0b1:
|
if get_bitfield_bit(bitfield, i) == 0b1:
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
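Editorial note: the corrected range now covers every padding bit, starting at `committee_size` itself (the old range skipped that first bit). A self-contained sketch of the full check:

```python
def get_bitfield_bit(bitfield: bytes, i: int) -> int:
    return (bitfield[i // 8] >> (i % 8)) % 2

def verify_bitfield(bitfield: bytes, committee_size: int) -> bool:
    # The bitfield must be exactly ceil(committee_size / 8) bytes long
    if len(bitfield) != (committee_size + 7) // 8:
        return False
    # Every bit beyond committee_size must be zero
    for i in range(committee_size, len(bitfield) * 8):
        if get_bitfield_bit(bitfield, i) == 0b1:
            return False
    return True

ok = verify_bitfield(bytes([0b00011111]), 5)   # 5 members set, padding clear
bad = verify_bitfield(bytes([0b00111111]), 5)  # bit 5 set inside the padding
```

Rejecting non-zero padding keeps the SSZ encoding of an attestation canonical: there is exactly one valid byte string per participation set.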
@@ -1109,21 +1129,17 @@ def verify_slashable_attestation(state: BeaconState, slashable_attestation: Slas
         else:
             custody_bit_1_indices.append(validator_index)

-    return bls_verify(
+    return bls_verify_multiple(
         pubkeys=[
             bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_0_indices]),
             bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_1_indices]),
         ],
-        messages=[
+        message_hashes=[
             hash_tree_root(AttestationDataAndCustodyBit(data=slashable_attestation.data, custody_bit=0b0)),
             hash_tree_root(AttestationDataAndCustodyBit(data=slashable_attestation.data, custody_bit=0b1)),
         ],
         signature=slashable_attestation.aggregate_signature,
-        domain=get_domain(
-            state.fork,
-            slot_to_epoch(vote_data.data.slot),
-            DOMAIN_ATTESTATION,
-        ),
+        domain=get_domain(state.fork, slot_to_epoch(slashable_attestation.data.slot), DOMAIN_ATTESTATION),
     )
 ```
@@ -1213,7 +1229,7 @@ def validate_proof_of_possession(state: BeaconState,

     return bls_verify(
         pubkey=pubkey,
-        message=hash_tree_root(proof_of_possession_data),
+        message_hash=hash_tree_root(proof_of_possession_data),
         signature=proof_of_possession,
         domain=get_domain(
             state.fork,
@@ -1238,13 +1254,16 @@ def process_deposit(state: BeaconState,
     Note that this function mutates ``state``.
     """
     # Validate the given `proof_of_possession`
-    assert validate_proof_of_possession(
+    proof_is_valid = validate_proof_of_possession(
         state,
         pubkey,
         proof_of_possession,
         withdrawal_credentials,
     )
+    if not proof_is_valid:
+        return

     validator_pubkeys = [v.pubkey for v in state.validator_registry]

     if pubkey not in validator_pubkeys:
@ -1378,87 +1397,16 @@ When sufficiently many full deposits have been made the deposit contract emits t
|
||||||
|
|
||||||
### Vyper code
|
### Vyper code
|
||||||
|
|
||||||
```python
|
The source for the Vyper contract lives in a [separate repository](https://github.com/ethereum/deposit_contract) at [https://github.com/ethereum/deposit_contract/blob/master/deposit_contract/contracts/validator_registration.v.py](https://github.com/ethereum/deposit_contract/blob/master/deposit_contract/contracts/validator_registration.v.py).
|
||||||
## compiled with v0.1.0-beta.6 ##
|
|
||||||
|
|
||||||
MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei
|
|
||||||
MAX_DEPOSIT_AMOUNT: constant(uint256) = 32000000000 # Gwei
|
|
||||||
GWEI_PER_ETH: constant(uint256) = 1000000000 # 10**9
|
|
||||||
CHAIN_START_FULL_DEPOSIT_THRESHOLD: constant(uint256) = 16384 # 2**14
|
|
||||||
DEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32
|
|
||||||
TWO_TO_POWER_OF_TREE_DEPTH: constant(uint256) = 4294967296 # 2**32
|
|
||||||
SECONDS_PER_DAY: constant(uint256) = 86400
|
|
||||||
|
|
||||||
Deposit: event({deposit_root: bytes32, data: bytes[528], merkle_tree_index: bytes[8], branch: bytes32[32]})
|
|
||||||
Eth2Genesis: event({deposit_root: bytes32, time: bytes[8]})
|
|
||||||
|
|
||||||
zerohashes: bytes32[32]
|
|
||||||
branch: bytes32[32]
|
|
||||||
deposit_count: uint256
|
|
||||||
full_deposit_count: uint256
|
|
||||||
chainStarted: public(bool)
|
|
||||||
|
|
||||||
@public
|
|
||||||
def __init__():
|
|
||||||
for i in range(31):
|
|
||||||
self.zerohashes[i+1] = sha3(concat(self.zerohashes[i], self.zerohashes[i]))
|
|
||||||
self.branch[i+1] = self.zerohashes[i+1]
|
|
||||||
|
|
||||||
@public
|
|
||||||
@constant
|
|
||||||
def get_deposit_root() -> bytes32:
|
|
||||||
root:bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
|
|
||||||
size:uint256 = self.deposit_count
|
|
||||||
for h in range(32):
|
|
||||||
if size % 2 == 1:
|
|
||||||
root = sha3(concat(self.branch[h], root))
|
|
||||||
else:
|
|
||||||
root = sha3(concat(root, self.zerohashes[h]))
|
|
||||||
size /= 2
|
|
||||||
return root
|
|
||||||
|
|
||||||
@payable
|
|
||||||
@public
|
|
||||||
def deposit(deposit_input: bytes[512]):
|
|
||||||
assert msg.value >= as_wei_value(MIN_DEPOSIT_AMOUNT, "gwei")
|
|
||||||
assert msg.value <= as_wei_value(MAX_DEPOSIT_AMOUNT, "gwei")
|
|
||||||
|
|
||||||
index: uint256 = self.deposit_count
|
|
||||||
deposit_amount: bytes[8] = slice(concat("", convert(msg.value / GWEI_PER_ETH, bytes32)), start=24, len=8)
|
|
||||||
deposit_timestamp: bytes[8] = slice(concat("", convert(block.timestamp, bytes32)), start=24, len=8)
|
|
||||||
deposit_data: bytes[528] = concat(deposit_amount, deposit_timestamp, deposit_input)
|
|
||||||
merkle_tree_index: bytes[8] = slice(concat("", convert(index, bytes32)), start=24, len=8)
|
|
||||||
|
|
||||||
# add deposit to merkle tree
|
|
||||||
i: int128 = 0
|
|
||||||
power_of_two: uint256 = 2
|
|
||||||
for _ in range(32):
|
|
||||||
if (index+1) % power_of_two != 0:
|
|
||||||
break
|
|
||||||
i += 1
|
|
||||||
power_of_two *= 2
|
|
||||||
value:bytes32 = sha3(deposit_data)
|
|
||||||
for j in range(32):
|
|
||||||
if j < i:
|
|
||||||
value = sha3(concat(self.branch[j], value))
|
|
||||||
self.branch[i] = value
|
|
||||||
|
|
||||||
self.deposit_count += 1
|
|
||||||
|
|
||||||
new_deposit_root:bytes32 = self.get_deposit_root()
|
|
||||||
log.Deposit(new_deposit_root, deposit_data, merkle_tree_index, self.branch)
|
|
||||||
|
|
||||||
if msg.value == as_wei_value(MAX_DEPOSIT_AMOUNT, "gwei"):
|
|
||||||
self.full_deposit_count += 1
|
|
||||||
if self.full_deposit_count == CHAIN_START_FULL_DEPOSIT_THRESHOLD:
|
|
||||||
timestamp_day_boundary: uint256 = as_unitless_number(block.timestamp) - as_unitless_number(block.timestamp) % SECONDS_PER_DAY + SECONDS_PER_DAY
|
|
||||||
chainstart_time: bytes[8] = slice(concat("", convert(timestamp_day_boundary, bytes32)), start=24, len=8)
|
|
||||||
log.Eth2Genesis(new_deposit_root, chainstart_time)
|
|
||||||
self.chainStarted = True
|
|
||||||
```
|
|
 
 Note: to save ~10x on gas this contract uses a somewhat unintuitive progressive Merkle root calculation algo that requires only O(log(n)) storage. See https://github.com/ethereum/research/blob/master/beacon_chain_impl/progressive_merkle_tree.py for an implementation of the same algo in python tested for correctness.
 
 For convenience, we provide the interface to the contract here:
 
 * `__init__()`: initializes the contract
 * `get_deposit_root() -> bytes32`: returns the current root of the deposit tree
 * `deposit(bytes[512])`: adds a deposit instance to the deposit tree, incorporating the input argument and the value transferred in the given call. Note: the amount of value transferred *must* be within `MIN_DEPOSIT_AMOUNT` and `MAX_DEPOSIT_AMOUNT`, inclusive. Each of these constants is specified in units of Gwei.
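The progressive algorithm the note refers to can be sketched as follows. This is a simplified Python model assuming `sha256` as the hash function and a fixed depth of 32; the contract's exact encoding details may differ.

```python
from hashlib import sha256

def hash_(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

DEPTH = 32  # assumed deposit tree depth

# Precomputed roots of empty subtrees, one per level
ZERO_HASHES = [b'\x00' * 32]
for _ in range(DEPTH - 1):
    ZERO_HASHES.append(hash_(ZERO_HASHES[-1], ZERO_HASHES[-1]))

class ProgressiveMerkleTree:
    """O(log n) storage: one pending `branch` node per level plus a leaf count."""

    def __init__(self):
        self.branch = list(ZERO_HASHES)
        self.count = 0

    def add_leaf(self, leaf: bytes) -> None:
        index = self.count
        self.count += 1
        node = leaf
        for level in range(DEPTH):
            if index % 2 == 0:
                # No right sibling yet: park this node and stop
                self.branch[level] = node
                return
            # Combine with the parked left sibling and keep climbing
            node = hash_(self.branch[level], node)
            index //= 2

    def root(self) -> bytes:
        node = b'\x00' * 32
        size = self.count
        for level in range(DEPTH):
            if size % 2 == 1:
                node = hash_(self.branch[level], node)
            else:
                node = hash_(node, ZERO_HASHES[level])
            size //= 2
        return node
```

Each `add_leaf` touches at most `DEPTH` stored nodes, which is where the ~10x gas saving over recomputing the whole tree comes from.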
 
 ## On startup
 
 A valid block with slot `GENESIS_SLOT` (a "genesis block") has the following values. Other validity rules (e.g. requiring a signature) do not apply.
 
@@ -1534,6 +1482,7 @@ def get_initial_beacon_state(initial_validator_deposits: List[Deposit],
         # Ethereum 1.0 chain data
         latest_eth1_data=latest_eth1_data,
         eth1_data_votes=[],
+        deposit_index=len(initial_validator_deposits)
     )
 
     # Process initial deposits
@@ -1573,7 +1522,7 @@ For a beacon chain block, `block`, to be processed by a node, the following cond
 
 * The parent block with root `block.parent_root` has been processed and accepted.
 * An Ethereum 1.0 block pointed to by the `state.latest_eth1_data.block_hash` has been processed and accepted.
-* The node's local clock time is greater than or equal to `state.genesis_time + block.slot * SLOT_DURATION`.
+* The node's Unix time is greater than or equal to `state.genesis_time + block.slot * SLOT_DURATION`. (Note that leap seconds mean that slots will occasionally last `SLOT_DURATION + 1` or `SLOT_DURATION - 1` seconds, possibly several times a year.)
 
 If these conditions are not met, the client should delay processing the beacon block until the conditions are all satisfied.
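The timing condition above can be sketched numerically (the `SLOT_DURATION` value here is purely illustrative, not the spec constant):

```python
SLOT_DURATION = 6  # seconds; illustrative value only

def min_processing_time(genesis_time: int, slot: int) -> int:
    # Earliest Unix time at which a block with the given slot may be processed
    return genesis_time + slot * SLOT_DURATION

def highest_processable_slot(genesis_time: int, unix_time: int) -> int:
    # Largest slot whose processing condition is already satisfied
    return (unix_time - genesis_time) // SLOT_DURATION
```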
@@ -1601,8 +1550,8 @@ def get_ancestor(store: Store, block: BeaconBlock, slot: Slot) -> BeaconBlock:
         return get_ancestor(store, store.get_parent(block), slot)
 ```
 
-* Let `get_latest_attestation(store: Store, validator: Validator) -> Attestation` be the attestation with the highest slot number in `store` from `validator`. If several such attestations exist, use the one the [validator](#dfn-validator) `v` observed first.
-* Let `get_latest_attestation_target(store: Store, validator: Validator) -> BeaconBlock` be the target block in the attestation `get_latest_attestation(store, validator)`.
+* Let `get_latest_attestation(store: Store, validator_index: ValidatorIndex) -> Attestation` be the attestation with the highest slot number in `store` from the validator with the given `validator_index`. If several such attestations exist, use the one the [validator](#dfn-validator) observed first.
+* Let `get_latest_attestation_target(store: Store, validator_index: ValidatorIndex) -> BeaconBlock` be the target block in the attestation `get_latest_attestation(store, validator_index)`.
 * Let `get_children(store: Store, block: BeaconBlock) -> List[BeaconBlock]` return the child blocks of the given `block`.
 * Let `justified_head_state` be the resulting `BeaconState` object from processing the chain up to the `justified_head`.
 * The `head` is `lmd_ghost(store, justified_head_state, justified_head)` where the function `lmd_ghost` is defined below. Note that the implementation below is suboptimal; there are implementations that compute the head in time logarithmic in slot count.
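The hunk above shows only the final recursive call of `get_ancestor`; the whole helper reads roughly as below, with `Store` and the block type stubbed minimally for illustration (the real spec types carry much more data):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    # Minimal stand-in for BeaconBlock: only slot and parent link are needed here
    slot: int
    parent: Optional["Block"] = None

class Store:
    # Minimal stand-in for the spec's Store: only parent lookup is used
    def get_parent(self, block: Block) -> Optional[Block]:
        return block.parent

def get_ancestor(store: Store, block: Block, slot: int) -> Optional[Block]:
    # Walk back along parent links to the ancestor at `slot`;
    # return None if `slot` was skipped past (no block at that slot on this chain)
    if block.slot == slot:
        return block
    elif block.slot < slot:
        return None
    else:
        return get_ancestor(store, store.get_parent(block), slot)
```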
@@ -1613,21 +1562,18 @@ def lmd_ghost(store: Store, start_state: BeaconState, start_block: BeaconBlock)
     Execute the LMD-GHOST algorithm to find the head ``BeaconBlock``.
     """
     validators = start_state.validator_registry
-    active_validators = [
-        validators[i]
-        for i in get_active_validator_indices(validators, start_state.slot)
-    ]
+    active_validator_indices = get_active_validator_indices(validators, start_state.slot)
     attestation_targets = [
-        get_latest_attestation_target(store, validator)
-        for validator in active_validators
+        (validator_index, get_latest_attestation_target(store, validator_index))
+        for validator_index in active_validator_indices
     ]
 
     def get_vote_count(block: BeaconBlock) -> int:
-        return len([
-            target
-            for target in attestation_targets
+        return sum(
+            get_effective_balance(start_state.validator_balances[validator_index]) // FORK_CHOICE_BALANCE_INCREMENT
+            for validator_index, target in attestation_targets
             if get_ancestor(store, target, block.slot) == block
-        ])
+        )
 
     head = start_block
     while 1:
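The change inside `get_vote_count` replaces head-counting with effective-balance weighting: each attester now contributes `get_effective_balance(...) // FORK_CHOICE_BALANCE_INCREMENT` units instead of 1. A toy illustration (the constant and balances below are invented for the example):

```python
FORK_CHOICE_BALANCE_INCREMENT = 1_000  # illustrative unit, not the spec constant

# Three validators vote for the same target block. Under the old rule the
# target's score would be 3 (one per head); under the new rule it is the
# sum of their balance units.
balances = {0: 32_000, 1: 16_000, 2: 1_999}

def vote_weight(attesting_indices):
    return sum(balances[i] // FORK_CHOICE_BALANCE_INCREMENT for i in attesting_indices)
```

Note how validator 2's sub-unit remainder is truncated, so very small balances contribute proportionally less than a plain head count would suggest.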
@@ -1647,6 +1593,8 @@ We now define the state transition function. At a high level the state transitio
 
 The per-slot transitions focus on the slot counter and block roots records updates; the per-block transitions generally focus on verifying aggregate signatures and saving temporary records relating to the per-block activity in the `BeaconState`; the per-epoch transitions focus on the [validator](#dfn-validator) registry, including adjusting balances and activating and exiting [validators](#dfn-validator), as well as processing crosslinks and managing block justification/finalization.
 
+Beacon blocks that trigger unhandled Python exceptions (e.g. out-of-range list accesses) and failed `assert`s during the state transition are considered invalid.
+
 _Note_: If there are skipped slots between a block and its parent block, run the steps in the [per-slot](#per-slot-processing) and [per-epoch](#per-epoch-processing) sections once for each skipped slot and then once for the slot containing the new block.
 
 ### Per-slot processing
@@ -1659,7 +1607,7 @@ Below are the processing steps that happen at every slot.
 
 #### Block roots
 
-* Let `previous_block_root` be the `tree_hash_root` of the previous beacon block processed in the chain.
+* Let `previous_block_root` be the `hash_tree_root` of the previous beacon block processed in the chain.
 * Set `state.latest_block_roots[(state.slot - 1) % LATEST_BLOCK_ROOTS_LENGTH] = previous_block_root`.
 * If `state.slot % LATEST_BLOCK_ROOTS_LENGTH == 0` append `merkle_root(state.latest_block_roots)` to `state.batched_block_roots`.
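The steps above amount to a ring buffer of recent block roots that is periodically batched. A small Python model, with the buffer length shrunk and `merkle_root` stubbed as a flat hash, both purely for illustration:

```python
from hashlib import sha256

LATEST_BLOCK_ROOTS_LENGTH = 8  # tiny illustrative size; the spec constant is much larger

latest_block_roots = [b'\x00' * 32] * LATEST_BLOCK_ROOTS_LENGTH
batched_block_roots = []

def merkle_root(values):
    # Stand-in for the spec's merkle_root, which hashes pairwise up a tree
    return sha256(b''.join(values)).digest()

def per_slot_block_roots_update(slot: int, previous_block_root: bytes) -> None:
    # The root of the block at slot-1 lands in the ring buffer...
    latest_block_roots[(slot - 1) % LATEST_BLOCK_ROOTS_LENGTH] = previous_block_root
    # ...and each time the buffer wraps, a digest of the whole buffer is archived
    if slot % LATEST_BLOCK_ROOTS_LENGTH == 0:
        batched_block_roots.append(merkle_root(latest_block_roots))
```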
@@ -1675,17 +1623,17 @@ Below are the processing steps that happen at every `block`.
 
 * Let `block_without_signature_root` be the `hash_tree_root` of `block` where `block.signature` is set to `EMPTY_SIGNATURE`.
 * Let `proposal_root = hash_tree_root(ProposalSignedData(state.slot, BEACON_CHAIN_SHARD_NUMBER, block_without_signature_root))`.
-* Verify that `bls_verify(pubkey=state.validator_registry[get_beacon_proposer_index(state, state.slot)].pubkey, message=proposal_root, signature=block.signature, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_PROPOSAL))`.
+* Verify that `bls_verify(pubkey=state.validator_registry[get_beacon_proposer_index(state, state.slot)].pubkey, message_hash=proposal_root, signature=block.signature, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_PROPOSAL))`.
 
 #### RANDAO
 
 * Let `proposer = state.validator_registry[get_beacon_proposer_index(state, state.slot)]`.
-* Verify that `bls_verify(pubkey=proposer.pubkey, message=int_to_bytes32(get_current_epoch(state)), signature=block.randao_reveal, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_RANDAO))`.
+* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=int_to_bytes32(get_current_epoch(state)), signature=block.randao_reveal, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_RANDAO))`.
 * Set `state.latest_randao_mixes[get_current_epoch(state) % LATEST_RANDAO_MIXES_LENGTH] = xor(get_randao_mix(state, get_current_epoch(state)), hash(block.randao_reveal))`.
 
 #### Eth1 data
 
-* If `block.eth1_data` equals `eth1_data_vote.eth1_data` for some `eth1_data_vote` in `state.eth1_data_votes`, set `eth1_data_vote.vote_count += 1`.
+* If there exists an `eth1_data_vote` in `state.eth1_data_votes` for which `eth1_data_vote.eth1_data == block.eth1_data` (there will be at most one), set `eth1_data_vote.vote_count += 1`.
 * Otherwise, append to `state.eth1_data_votes` a new `Eth1DataVote(eth1_data=block.eth1_data, vote_count=1)`.
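The two Eth1 data bullets above can be read as the following sketch (data classes simplified, with `eth1_data` reduced to opaque bytes for illustration):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Eth1DataVote:
    eth1_data: bytes
    vote_count: int

def process_eth1_data(eth1_data_votes: List[Eth1DataVote], block_eth1_data: bytes) -> None:
    # Bump the matching vote if one exists (there will be at most one)...
    for vote in eth1_data_votes:
        if vote.eth1_data == block_eth1_data:
            vote.vote_count += 1
            return
    # ...otherwise open a new vote with count 1
    eth1_data_votes.append(Eth1DataVote(eth1_data=block_eth1_data, vote_count=1))
```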
 
 #### Operations
 
@@ -1701,8 +1649,8 @@ For each `proposer_slashing` in `block.body.proposer_slashings`:
 
 * Verify that `proposer_slashing.proposal_data_1.shard == proposer_slashing.proposal_data_2.shard`.
 * Verify that `proposer_slashing.proposal_data_1.block_root != proposer_slashing.proposal_data_2.block_root`.
 * Verify that `proposer.slashed_epoch > get_current_epoch(state)`.
-* Verify that `bls_verify(pubkey=proposer.pubkey, message=hash_tree_root(proposer_slashing.proposal_data_1), signature=proposer_slashing.proposal_signature_1, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_1.slot), DOMAIN_PROPOSAL))`.
-* Verify that `bls_verify(pubkey=proposer.pubkey, message=hash_tree_root(proposer_slashing.proposal_data_2), signature=proposer_slashing.proposal_signature_2, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_2.slot), DOMAIN_PROPOSAL))`.
+* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=hash_tree_root(proposer_slashing.proposal_data_1), signature=proposer_slashing.proposal_signature_1, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_1.slot), DOMAIN_PROPOSAL))`.
+* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=hash_tree_root(proposer_slashing.proposal_data_2), signature=proposer_slashing.proposal_signature_2, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_2.slot), DOMAIN_PROPOSAL))`.
 * Run `penalize_validator(state, proposer_slashing.proposer_index)`.
 
 ##### Attester slashings
@@ -1730,7 +1678,7 @@ For each `attestation` in `block.body.attestations`:
 
 * Verify that `attestation.data.slot <= state.slot - MIN_ATTESTATION_INCLUSION_DELAY < attestation.data.slot + EPOCH_LENGTH`.
 * Verify that `attestation.data.justified_epoch` is equal to `state.justified_epoch if attestation.data.slot >= get_epoch_start_slot(get_current_epoch(state)) else state.previous_justified_epoch`.
 * Verify that `attestation.data.justified_block_root` is equal to `get_block_root(state, get_epoch_start_slot(attestation.data.justified_epoch))`.
-* Verify that either `attestation.data.latest_crosslink_root` or `attestation.data.shard_block_root` equals `state.latest_crosslinks[shard].shard_block_root`.
+* Verify that either (i) `state.latest_crosslinks[attestation.data.shard] == attestation.data.latest_crosslink` or (ii) `state.latest_crosslinks[attestation.data.shard] == Crosslink(shard_block_root=attestation.data.shard_block_root, epoch=slot_to_epoch(attestation.data.slot))`.
 * Verify bitfields and aggregate signature:
 
 ```python
@@ -1776,6 +1724,7 @@ Verify that `len(block.body.deposits) <= MAX_DEPOSITS`.
 
 For each `deposit` in `block.body.deposits`:
 
 * Let `serialized_deposit_data` be the serialized form of `deposit.deposit_data`. It should be 8 bytes for `deposit_data.amount` followed by 8 bytes for `deposit_data.timestamp` and then the `DepositInput` bytes. That is, it should match `deposit_data` in the [Ethereum 1.0 deposit contract](#ethereum-10-deposit-contract) of which the hash was placed into the Merkle tree.
+* Verify that `deposit.index == state.deposit_index`.
 * Verify that `verify_merkle_branch(hash(serialized_deposit_data), deposit.branch, DEPOSIT_CONTRACT_TREE_DEPTH, deposit.index, state.latest_eth1_data.deposit_root)` is `True`.
 
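A sketch of what `verify_merkle_branch` checks: fold the leaf up the tree, using the bits of `index` to pick the concatenation order at each level (`sha256` assumed as the hash here; the spec's `hash` may be defined elsewhere):

```python
from hashlib import sha256

def hash_(data: bytes) -> bytes:
    return sha256(data).digest()

def verify_merkle_branch(leaf: bytes, branch: list, depth: int, index: int, root: bytes) -> bool:
    value = leaf
    for i in range(depth):
        if (index >> i) % 2:
            value = hash_(branch[i] + value)  # sibling is on the left
        else:
            value = hash_(value + branch[i])  # sibling is on the right
    return value == root
```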
 ```python
@@ -1804,6 +1753,8 @@ process_deposit(
 )
 ```
 
+* Set `state.deposit_index += 1`.
+
 ##### Exits
 
 Verify that `len(block.body.exits) <= MAX_EXITS`.
@@ -1814,46 +1765,46 @@ For each `exit` in `block.body.exits`:
 
 * Verify that `validator.exit_epoch > get_entry_exit_effect_epoch(get_current_epoch(state))`.
 * Verify that `get_current_epoch(state) >= exit.epoch`.
 * Let `exit_message = hash_tree_root(Exit(epoch=exit.epoch, validator_index=exit.validator_index, signature=EMPTY_SIGNATURE))`.
-* Verify that `bls_verify(pubkey=validator.pubkey, message=exit_message, signature=exit.signature, domain=get_domain(state.fork, exit.epoch, DOMAIN_EXIT))`.
+* Verify that `bls_verify(pubkey=validator.pubkey, message_hash=exit_message, signature=exit.signature, domain=get_domain(state.fork, exit.epoch, DOMAIN_EXIT))`.
 * Run `initiate_validator_exit(state, exit.validator_index)`.
 
 ### Per-epoch processing
 
 The steps below happen when `(state.slot + 1) % EPOCH_LENGTH == 0`.
 
-#### Helpers
+#### Helper variables
 
 * Let `current_epoch = get_current_epoch(state)`.
-* Let `previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else current_epoch`.
+* Let `previous_epoch = get_previous_epoch(state)`.
 * Let `next_epoch = current_epoch + 1`.
 
 [Validators](#dfn-Validator) attesting during the current epoch:
 
-* Let `current_total_balance = sum([get_effective_balance(state, i) for i in get_active_validator_indices(state.validator_registry, current_epoch)])`.
+* Let `current_total_balance = get_total_balance(state, get_active_validator_indices(state.validator_registry, current_epoch))`.
 * Let `current_epoch_attestations = [a for a in state.latest_attestations if current_epoch == slot_to_epoch(a.data.slot)]`. (Note: this is the set of attestations of slots in the epoch `current_epoch`, _not_ attestations that got included in the chain during the epoch `current_epoch`.)
 * Validators justifying the epoch boundary block at the start of the current epoch:
     * Let `current_epoch_boundary_attestations = [a for a in current_epoch_attestations if a.data.epoch_boundary_root == get_block_root(state, get_epoch_start_slot(current_epoch)) and a.data.justified_epoch == state.justified_epoch]`.
     * Let `current_epoch_boundary_attester_indices` be the union of the [validator](#dfn-validator) index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in current_epoch_boundary_attestations]`.
-    * Let `current_epoch_boundary_attesting_balance = sum([get_effective_balance(state, i) for i in current_epoch_boundary_attester_indices])`.
+    * Let `current_epoch_boundary_attesting_balance = get_total_balance(state, current_epoch_boundary_attester_indices)`.
 
 [Validators](#dfn-Validator) attesting during the previous epoch:
 
-* Let `previous_total_balance = sum([get_effective_balance(state, i) for i in get_active_validator_indices(state.validator_registry, previous_epoch)])`.
+* Let `previous_total_balance = get_total_balance(state, get_active_validator_indices(state.validator_registry, previous_epoch))`.
 * Validators that made an attestation during the previous epoch:
     * Let `previous_epoch_attestations = [a for a in state.latest_attestations if previous_epoch == slot_to_epoch(a.data.slot)]`.
     * Let `previous_epoch_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_attestations]`.
 * Validators targeting the previous justified slot:
     * Let `previous_epoch_justified_attestations = [a for a in current_epoch_attestations + previous_epoch_attestations if a.data.justified_epoch == state.previous_justified_epoch]`.
     * Let `previous_epoch_justified_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_justified_attestations]`.
-    * Let `previous_epoch_justified_attesting_balance = sum([get_effective_balance(state, i) for i in previous_epoch_justified_attester_indices])`.
+    * Let `previous_epoch_justified_attesting_balance = get_total_balance(state, previous_epoch_justified_attester_indices)`.
 * Validators justifying the epoch boundary block at the start of the previous epoch:
     * Let `previous_epoch_boundary_attestations = [a for a in previous_epoch_justified_attestations if a.data.epoch_boundary_root == get_block_root(state, get_epoch_start_slot(previous_epoch))]`.
     * Let `previous_epoch_boundary_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_boundary_attestations]`.
-    * Let `previous_epoch_boundary_attesting_balance = sum([get_effective_balance(state, i) for i in previous_epoch_boundary_attester_indices])`.
+    * Let `previous_epoch_boundary_attesting_balance = get_total_balance(state, previous_epoch_boundary_attester_indices)`.
 * Validators attesting to the expected beacon chain head during the previous epoch:
     * Let `previous_epoch_head_attestations = [a for a in previous_epoch_attestations if a.data.beacon_block_root == get_block_root(state, a.data.slot)]`.
     * Let `previous_epoch_head_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_head_attestations]`.
-    * Let `previous_epoch_head_attesting_balance = sum([get_effective_balance(state, i) for i in previous_epoch_head_attester_indices])`.
+    * Let `previous_epoch_head_attesting_balance = get_total_balance(state, previous_epoch_head_attester_indices)`.
 
 **Note**: `previous_total_balance` and `previous_epoch_boundary_attesting_balance` might be marginally different from the actual balances during the previous epoch transition. Due to the tight bound on validator churn each epoch and small per-epoch rewards/penalties, the potential balance difference is very low and only marginally affects consensus safety.
 
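Several bullets above replace inline sums with `get_total_balance`; judging from the removed lines, the helper is just the capped-balance sum. A sketch, with `state` modeled as a plain dict purely for illustration:

```python
MAX_DEPOSIT_AMOUNT = 32_000_000_000  # Gwei

def get_effective_balance(state: dict, index: int) -> int:
    # A validator's balance counts at most up to MAX_DEPOSIT_AMOUNT
    return min(state["validator_balances"][index], MAX_DEPOSIT_AMOUNT)

def get_total_balance(state: dict, validator_indices) -> int:
    # Equivalent to the inline `sum([get_effective_balance(state, i) for i in ...])`
    # expressions this helper replaces in the diff above
    return sum(get_effective_balance(state, i) for i in validator_indices)
```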
@@ -1861,10 +1812,9 @@ For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_s
 
 * Let `shard_block_root` be `state.latest_crosslinks[shard].shard_block_root`
 * Let `attesting_validator_indices(crosslink_committee, shard_block_root)` be the union of the [validator](#dfn-validator) index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in current_epoch_attestations + previous_epoch_attestations if a.data.shard == shard and a.data.shard_block_root == shard_block_root]`.
-* Let `winning_root(crosslink_committee)` be equal to the value of `shard_block_root` such that `sum([get_effective_balance(state, i) for i in attesting_validator_indices(crosslink_committee, shard_block_root)])` is maximized (ties broken by favoring lower `shard_block_root` values).
+* Let `winning_root(crosslink_committee)` be equal to the value of `shard_block_root` such that `get_total_balance(state, attesting_validator_indices(crosslink_committee, shard_block_root))` is maximized (ties broken by favoring lower `shard_block_root` values).
 * Let `attesting_validators(crosslink_committee)` be equal to `attesting_validator_indices(crosslink_committee, winning_root(crosslink_committee))` for convenience.
-* Let `total_attesting_balance(crosslink_committee) = sum([get_effective_balance(state, i) for i in attesting_validators(crosslink_committee)])`.
-* Let `total_balance(crosslink_committee) = sum([get_effective_balance(state, i) for i in crosslink_committee])`.
+* Let `total_attesting_balance(crosslink_committee) = get_total_balance(state, attesting_validators(crosslink_committee))`.
 
 Define the following helpers to process attestation inclusion rewards and inclusion distance reward/penalty. For every attestation `a` in `previous_epoch_attestations`:
@@ -1903,7 +1853,7 @@ Finally, update the following:
 
 For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(next_epoch))`, let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`. For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot`, compute:
 
-* Set `state.latest_crosslinks[shard] = Crosslink(epoch=current_epoch, shard_block_root=winning_root(crosslink_committee))` if `3 * total_attesting_balance(crosslink_committee) >= 2 * total_balance(crosslink_committee)`.
+* Set `state.latest_crosslinks[shard] = Crosslink(epoch=slot_to_epoch(slot), shard_block_root=winning_root(crosslink_committee))` if `3 * total_attesting_balance(crosslink_committee) >= 2 * get_total_balance(state, crosslink_committee)`.
 
 #### Rewards and penalties
@@ -1950,8 +1900,8 @@ For each `index` in `previous_epoch_attester_indices`, we determine the proposer
 
 For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(current_epoch))`:
 
 * Let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`.
-* For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot`:
-    * If `index in attesting_validators(crosslink_committee)`, `state.validator_balances[index] += base_reward(state, index) * total_attesting_balance(crosslink_committee) // total_balance(crosslink_committee))`.
+* For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot` and every `index` in `crosslink_committee`:
+    * If `index in attesting_validators(crosslink_committee)`, `state.validator_balances[index] += base_reward(state, index) * total_attesting_balance(crosslink_committee) // get_total_balance(state, crosslink_committee)`.
     * If `index not in attesting_validators(crosslink_committee)`, `state.validator_balances[index] -= base_reward(state, index)`.
 
 #### Ejections
@@ -1994,7 +1944,7 @@ def update_validator_registry(state: BeaconState) -> None:

```python
    # The active validators
    active_validator_indices = get_active_validator_indices(state.validator_registry, current_epoch)
    # The total effective balance of active validators
    total_balance = get_total_balance(state, active_validator_indices)

    # The maximum balance churn in Gwei (for deposits and exits separately)
    max_balance_churn = max(
```
@@ -2080,12 +2030,10 @@ def process_penalties_and_exits(state: BeaconState) -> None:

```python
    eligible_indices = filter(eligible, all_indices)
    # Sort in order of exit epoch, and validators that exit within the same epoch exit in order of validator index
    sorted_indices = sorted(eligible_indices, key=lambda index: state.validator_registry[index].exit_epoch)
    for withdrawn_so_far, index in enumerate(sorted_indices):
        if withdrawn_so_far >= MAX_WITHDRAWALS_PER_EPOCH:
            break
        prepare_validator_for_withdrawal(state, index)
```

#### Final updates
@@ -16,10 +16,12 @@ Ethereum 2.0 consists of a central beacon chain along with `SHARD_COUNT` shard c

Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md#constants) in addition to the following:

| Constant                      | Value            | Unit   | Approximation |
|-------------------------------|------------------|--------|---------------|
| `SHARD_CHUNK_SIZE`            | 2**5 (= 32)      | bytes  |               |
| `SHARD_BLOCK_SIZE`            | 2**14 (= 16,384) | bytes  |               |
| `CROSSLINK_LOOKBACK`          | 2**5 (= 32)      | slots  |               |
| `PERSISTENT_COMMITTEE_PERIOD` | 2**11 (= 2,048)  | epochs | 9 days        |

### Flags, domains, etc.

@@ -28,6 +30,89 @@ Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md
| `SHARD_PROPOSER_DOMAIN`| 129 |
| `SHARD_ATTESTER_DOMAIN`| 130 |

## Helper functions

#### get_split_offset

```python
def get_split_offset(list_size: int, chunks: int, index: int) -> int:
    """
    Returns a value such that for a list L, chunk count k and index i,
    split(L, k)[i] == L[get_split_offset(len(L), k, i): get_split_offset(len(L), k, i+1)]
    """
    return (list_size * index) // chunks
```
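As a sanity check (not part of the spec), the docstring's invariant can be exercised with a plain-Python sketch of `split`:

```python
def get_split_offset(list_size: int, chunks: int, index: int) -> int:
    # Offset of chunk `index` when a list of length `list_size` is cut into `chunks` pieces
    return (list_size * index) // chunks

def split(lst, chunks):
    # Reference split: chunk i spans [offset(i), offset(i + 1))
    return [
        lst[get_split_offset(len(lst), chunks, i):get_split_offset(len(lst), chunks, i + 1)]
        for i in range(chunks)
    ]

pieces = split(list(range(10)), 3)
print(pieces)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]
```

Chunk sizes differ by at most one and the chunks cover the list exactly.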
#### get_shuffled_committee

```python
def get_shuffled_committee(state: BeaconState,
                           shard: ShardNumber,
                           committee_start_epoch: EpochNumber) -> List[ValidatorIndex]:
    """
    Return shuffled committee.
    """
    validator_indices = get_active_validator_indices(state.validators, committee_start_epoch)
    seed = generate_seed(state, committee_start_epoch)
    start_offset = get_split_offset(len(validator_indices), SHARD_COUNT, shard)
    end_offset = get_split_offset(len(validator_indices), SHARD_COUNT, shard + 1)
    return [
        validator_indices[get_permuted_index(i, len(validator_indices), seed)]
        for i in range(start_offset, end_offset)
    ]
```
#### get_persistent_committee

```python
def get_persistent_committee(state: BeaconState,
                             shard: ShardNumber,
                             epoch: EpochNumber) -> List[ValidatorIndex]:
    """
    Return the persistent committee for the given ``shard`` at the given ``epoch``.
    """
    earlier_committee_start_epoch = epoch - (epoch % PERSISTENT_COMMITTEE_PERIOD) - PERSISTENT_COMMITTEE_PERIOD * 2
    earlier_committee = get_shuffled_committee(state, shard, earlier_committee_start_epoch)
    earlier_seed = generate_seed(state, earlier_committee_start_epoch)  # seed used by get_switchover_epoch below

    later_committee_start_epoch = epoch - (epoch % PERSISTENT_COMMITTEE_PERIOD) - PERSISTENT_COMMITTEE_PERIOD
    later_committee = get_shuffled_committee(state, shard, later_committee_start_epoch)

    def get_switchover_epoch(index):
        return (
            bytes_to_int(hash(earlier_seed + bytes3(index))[0:8]) %
            PERSISTENT_COMMITTEE_PERIOD
        )

    # Take not-yet-cycled-out validators from earlier committee and already-cycled-in validators from
    # later committee; return a sorted list of the union of the two, deduplicated
    return sorted(list(set(
        [i for i in earlier_committee if epoch % PERSISTENT_COMMITTEE_PERIOD < get_switchover_epoch(i)] +
        [i for i in later_committee if epoch % PERSISTENT_COMMITTEE_PERIOD >= get_switchover_epoch(i)]
    )))
```
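The gradual-rotation idea can be seen in an illustrative toy (not the spec's helpers: sha256 stands in for `hash`, `bytes3`/`bytes_to_int` are inlined, and the period is tiny):

```python
import hashlib

PERIOD = 8  # toy stand-in for PERSISTENT_COMMITTEE_PERIOD

def switchover_epoch(seed: bytes, index: int) -> int:
    # sha256 stands in for the spec's hash function
    digest = hashlib.sha256(seed + index.to_bytes(3, 'little')).digest()
    return int.from_bytes(digest[0:8], 'little') % PERIOD

def persistent_set(earlier, later, seed, epoch):
    pos = epoch % PERIOD
    return sorted(set(
        [i for i in earlier if pos < switchover_epoch(seed, i)] +
        [i for i in later if pos >= switchover_epoch(seed, i)]
    ))

earlier, later, seed = [0, 1, 2, 3], [4, 5, 6, 7], b'seed'
sets = [persistent_set(earlier, later, seed, e) for e in range(PERIOD)]
# Each member switches from the earlier to the later committee at its own
# pseudorandom epoch, so the committee turns over gradually across the period
```

By the final epoch of the period every member has switched, so the set equals the later committee.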
#### get_shard_proposer_index

```python
def get_shard_proposer_index(state: BeaconState,
                             shard: ShardNumber,
                             slot: SlotNumber) -> ValidatorIndex:
    seed = hash(
        state.current_epoch_seed +
        int_to_bytes8(shard) +
        int_to_bytes8(slot)
    )
    persistent_committee = get_persistent_committee(state, shard, slot_to_epoch(slot))
    # Default proposer
    index = bytes_to_int(seed[0:8]) % len(persistent_committee)
    # If default proposer exits, try the other proposers in order; if all are exited
    # return None (ie. no block can be proposed)
    validators_to_try = persistent_committee[index:] + persistent_committee[:index]
    for index in validators_to_try:
        if is_active_validator(state.validators[index], get_current_epoch(state)):
            return index
    return None
```
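The fallback loop above is a plain rotation; stripped of beacon-chain types, it can be sketched as (hypothetical names, illustration only):

```python
def first_active(committee, is_active, start_index):
    # Rotate so the default proposer comes first, then return the first
    # member that is still active, or None if all have exited
    for index in committee[start_index:] + committee[:start_index]:
        if is_active(index):
            return index
    return None

committee = [10, 11, 12, 13]
exited = {10, 12}
proposer = first_active(committee, lambda i: i not in exited, start_index=2)
print(proposer)  # 13
```

Starting from index 2, the order tried is 12, 13, 10, 11; 12 has exited, so 13 is chosen.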
## Data Structures

### Shard chain blocks

@@ -40,50 +125,43 @@ A `ShardBlock` object has the following fields:
```python
    'slot': 'uint64',
    # What shard is it on
    'shard_id': 'uint64',
    # Parent block's root
    'parent_root': 'bytes32',
    # Beacon chain block
    'beacon_chain_ref': 'bytes32',
    # Merkle root of data
    'data_root': 'bytes32',
    # State root (placeholder for now)
    'state_root': 'bytes32',
    # Block signature
    'signature': 'bytes96',
    # Attestation
    'participation_bitfield': 'bytes',
    'aggregate_signature': 'bytes96',
}
```
## Shard block processing

For a `shard_block` on a shard to be processed by a node, the following conditions must be met:

* The `ShardBlock` pointed to by `shard_block.parent_root` has already been processed and accepted
* The signature for the block from the _proposer_ (see below for definition) of that block is included along with the block in the network message object

To validate a block header on shard `shard_block.shard_id`, compute as follows:
* Verify that `shard_block.beacon_chain_ref` is the hash of a block in the (canonical) beacon chain with slot less than or equal to `shard_block.slot`.
* Verify that `shard_block.beacon_chain_ref` is equal to or a descendant of the `shard_block.beacon_chain_ref` specified in the `ShardBlock` pointed to by `shard_block.parent_root`.
* Let `state` be the state of the beacon chain block referred to by `shard_block.beacon_chain_ref`.
* Let `persistent_committee = get_persistent_committee(state, shard_block.shard_id, slot_to_epoch(shard_block.slot))`.
* Assert `verify_bitfield(shard_block.participation_bitfield, len(persistent_committee))`.
* For every `i in range(len(persistent_committee))` where `is_active_validator(state.validators[persistent_committee[i]], get_current_epoch(state))` returns `False`, verify that `get_bitfield_bit(shard_block.participation_bitfield, i) == 0`.
* Let `proposer_index = get_shard_proposer_index(state, shard_block.shard_id, shard_block.slot)`.
* Verify that `proposer_index` is not `None`.
* Let `msg` be the `shard_block` but with `shard_block.signature` set to `[0, 0]`.
* Verify that `bls_verify(pubkey=state.validators[proposer_index].pubkey, message_hash=hash(msg), signature=shard_block.signature, domain=get_domain(state, slot_to_epoch(shard_block.slot), SHARD_PROPOSER_DOMAIN))` passes.
* Let `group_public_key = bls_aggregate_pubkeys([state.validators[index].pubkey for i, index in enumerate(persistent_committee) if get_bitfield_bit(shard_block.participation_bitfield, i) is True])`.
* Verify that `bls_verify(pubkey=group_public_key, message_hash=shard_block.parent_root, signature=shard_block.aggregate_signature, domain=get_domain(state, slot_to_epoch(shard_block.slot), SHARD_ATTESTER_DOMAIN))` passes.

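The bitfield checks above can be sketched in plain Python. These are hypothetical helpers mirroring the Phase 0 `get_bitfield_bit`/`verify_bitfield` semantics (the bit ordering is an assumption of this sketch):

```python
def get_bitfield_bit(bitfield: bytes, i: int) -> int:
    # Bit i lives in byte i // 8, at bit position i % 8
    return (bitfield[i // 8] >> (i % 8)) % 2

def verify_bitfield(bitfield: bytes, committee_size: int) -> bool:
    # Correct byte length, and no bits set beyond the committee size
    if len(bitfield) != (committee_size + 7) // 8:
        return False
    return all(
        get_bitfield_bit(bitfield, i) == 0
        for i in range(committee_size, len(bitfield) * 8)
    )

assert verify_bitfield(b'\x05', 3)      # bits 0 and 2 set for a committee of 3
assert not verify_bitfield(b'\x10', 3)  # bit 4 set beyond the committee size
```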
### Verifying shard block data

@@ -98,27 +176,40 @@ A node should sign a crosslink only if the following conditions hold. **If a nod
First, the conditions must recursively apply to the crosslink referenced in `last_crosslink_root` for the same shard (unless `last_crosslink_root` equals zero, in which case we are at the genesis).

Second, we verify the `shard_chain_commitment`.
* Let `start_slot = state.latest_crosslinks[shard].epoch * EPOCH_LENGTH + EPOCH_LENGTH - CROSSLINK_LOOKBACK`.
* Let `end_slot = attestation.data.slot - attestation.data.slot % EPOCH_LENGTH - CROSSLINK_LOOKBACK`.
* Let `length = end_slot - start_slot`, `headers[0] .... headers[length-1]` be the serialized block headers in the canonical shard chain from the verifier's point of view (note that this implies that `headers` and `bodies` have been checked for validity).
* Let `bodies[0] ... bodies[length-1]` be the bodies of the blocks.
* Note: If there is a missing slot, then the header and body are the same as that of the block at the most recent slot that has a block.

We define two helpers:
```python
def pad_to_power_of_2(values: List[bytes]) -> List[bytes]:
    while not is_power_of_two(len(values)):
        values = values + [b'\x00' * SHARD_BLOCK_SIZE]
    return values
```
```python
def merkle_root_of_bytes(data: bytes) -> bytes:
    return merkle_root([data[i:i+32] for i in range(0, len(data), 32)])
```
We define the function for computing the commitment as follows:

```python
def compute_commitment(headers: List[ShardBlock], bodies: List[bytes]) -> Bytes32:
    return hash(
        merkle_root(pad_to_power_of_2([merkle_root_of_bytes(zpad(serialize(h), SHARD_BLOCK_SIZE)) for h in headers])) +
        merkle_root(pad_to_power_of_2([merkle_root_of_bytes(h) for h in bodies]))
    )
```

The `shard_chain_commitment` is only valid if it equals `compute_commitment(headers, bodies)`.
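To see the shapes involved, here is a self-contained toy version of the body-side commitment. It is illustrative only: sha256 stands in for the spec's `hash`, `merkle_root` is a minimal stand-in rather than the Phase 0 definition, `SHARD_BLOCK_SIZE` is shrunk, and the header side is omitted:

```python
import hashlib

SHARD_BLOCK_SIZE = 64  # toy value; the spec uses 2**14

def hash32(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def is_power_of_two(value: int) -> bool:
    return value > 0 and value & (value - 1) == 0

def zpad(data: bytes, length: int) -> bytes:
    return data + b'\x00' * (length - len(data))

def merkle_root(chunks):
    # Minimal stand-in: hash each leaf, then pair up to a single root
    nodes = [hash32(c) for c in chunks]
    while len(nodes) > 1:
        nodes = [hash32(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def pad_to_power_of_2(values):
    while not is_power_of_two(len(values)):
        values = values + [b'\x00' * SHARD_BLOCK_SIZE]
    return values

def merkle_root_of_bytes(data: bytes) -> bytes:
    return merkle_root([data[i:i + 32] for i in range(0, len(data), 32)])

bodies = [zpad(b'body%d' % i, SHARD_BLOCK_SIZE) for i in range(3)]
commitment = merkle_root(pad_to_power_of_2([merkle_root_of_bytes(b) for b in bodies]))
```

Three bodies are padded to four leaves, so the tree has a power-of-two width as required.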
### Shard block fork choice rule

The fork choice rule for any shard is LMD GHOST using the shard chain attestations of the persistent committee and the beacon chain attestations of the crosslink committee currently assigned to that shard, but instead of being rooted in the genesis it is rooted in the block referenced in the most recent accepted crosslink (i.e. `state.crosslinks[shard].shard_block_root`). Only blocks whose `beacon_chain_ref` is the block in the main beacon chain at the specified `slot` should be considered (if the beacon chain skips a slot, then the block at that slot is considered to be the block in the beacon chain at the highest slot lower than that slot).

@@ -9,28 +9,24 @@ deserializing objects and data types.
## ToC

* [About](#about)
* [Variables and Functions](#variables-and-functions)
* [Constants](#constants)
* [Overview](#overview)
    + [Serialize/Encode](#serializeencode)
        - [uintN](#uintn)
        - [bool](#bool)
        - [bytesN](#bytesn)
        - [List/Vectors](#listvectors)
        - [Container](#container)
    + [Deserialize/Decode](#deserializedecode)
        - [uintN](#uintn-1)
        - [bool](#bool-1)
        - [bytesN](#bytesn-1)
        - [List/Vectors](#listvectors-1)
        - [Container](#container-1)
    + [Tree Hash](#tree-hash)
        - [`uint8`..`uint256`, `bool`, `bytes1`..`bytes32`](#uint8uint256-bool-bytes1bytes32)
        - [`uint264`..`uintN`, `bytes33`..`bytesN`](#uint264uintn-bytes33bytesn)
        - [List/Vectors](#listvectors-2)
        - [Container](#container-2)
* [Implementations](#implementations)

@@ -68,11 +64,11 @@ overhead.
### Serialize/Encode

#### uintN

| uint Type | Usage |
|:---------:|:------------------------------------------------------|
| `uintN`   | Type of `N`-bit unsigned integer, where `N % 8 == 0`. |

Convert directly to bytes the size of the int. (e.g. ``uint16 = 2 bytes``)
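A concrete round trip (illustrative only; helper names are not from the spec):

```python
def serialize_uint(value: int, n_bits: int) -> bytes:
    # uintN: N-bit unsigned integer, N % 8 == 0, little-endian
    assert n_bits % 8 == 0
    return value.to_bytes(n_bits // 8, 'little')

def deserialize_uint(data: bytes) -> int:
    return int.from_bytes(data, 'little')

encoded = serialize_uint(1025, 16)
print(encoded.hex())  # '0104'
```

Note the little-endian byte order: the low byte (`0x01`) comes first.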
@@ -88,7 +84,7 @@ buffer_size = int_size / 8

```python
return value.to_bytes(buffer_size, 'little')
```

#### bool

Convert directly to a single 0x00 or 0x01 byte.

@@ -101,18 +97,13 @@ assert(value in (True, False))
```python
return b'\x01' if value is True else b'\x00'
```

#### bytesN

A fixed-size byte array.

| Checks to perform                       | Code                |
|:----------------------------------------|:--------------------|
| Length in bytes is correct for `bytesN` | ``len(value) == N`` |

@@ -120,21 +111,6 @@ assert(len(value) == N)

```python
assert(len(value) == N)
return value
```
#### List/Vectors

Lists are a collection of elements of the same homogeneous type.

@@ -146,6 +122,8 @@ Lists are a collection of elements of the same homogeneous type.
1. Serialize all list elements individually and concatenate them.
2. Prefix the concatenation with its length encoded as a `4-byte` **little-endian** unsigned integer.

We define `bytes` to be a synonym of `List[bytes1]`.

**Example in Python**
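The example body is elided in this excerpt; a minimal sketch of the two steps above (assuming fixed-size `uint32` elements and the 4-byte little-endian length prefix) might look like:

```python
LENGTH_BYTES = 4

def serialize_uint32(value: int) -> bytes:
    return value.to_bytes(4, 'little')

def serialize_list(values) -> bytes:
    # Step 1: serialize all elements individually and concatenate
    serialized = b''.join(serialize_uint32(v) for v in values)
    # Step 2: prefix with the byte length as a 4-byte little-endian integer
    return len(serialized).to_bytes(LENGTH_BYTES, 'little') + serialized

encoded = serialize_list([1, 2, 3])
print(encoded.hex())  # '0c000000010000000200000003000000'
```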
@@ -168,8 +146,8 @@ A container represents a heterogeneous, associative collection of key-value pairs

To serialize a container, obtain the list of its fields' names in the specified order. For each field name in this list, obtain the corresponding value and serialize it. Tightly pack the complete set of serialized values in the same order as the field names into a buffer. Calculate the size of this buffer of serialized bytes and encode as a `4-byte` **little endian** `uint32`. Prepend the encoded length to the buffer. The result of this concatenation is the final serialized value of the container.

| Check to perform                              | Code                        |
|:----------------------------------------------|:----------------------------|
| Length of serialized fields fits into 4 bytes | ``len(serialized) < 2**32`` |

To serialize:
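The enumerated steps are elided in this excerpt; the paragraph above can be sketched for a hypothetical two-field container with `uint32` fields (names and field types are assumptions of this sketch):

```python
from collections import OrderedDict

LENGTH_BYTES = 4

def serialize_container(fields: "OrderedDict[str, int]") -> bytes:
    # Serialize each field value in field-name order and tightly pack
    buffer = b''.join(value.to_bytes(4, 'little') for value in fields.values())
    # Prepend the buffer length as a 4-byte little-endian uint32
    assert len(buffer) < 2**32
    return len(buffer).to_bytes(LENGTH_BYTES, 'little') + buffer

encoded = serialize_container(OrderedDict(slot=5, shard=3))
print(encoded.hex())  # '080000000500000003000000'
```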
@@ -231,7 +209,7 @@ At the final step, the following checks should be made:

|:-------------------------|:-------------------------------------|
| Ensure no extra length   | `new_index == len(rawbytes)`         |

#### uintN

Convert directly from bytes into an integer, using the same number of bytes as the integer's size (e.g. ``uint16 == 2 bytes``).
@@ -245,7 +223,7 @@ assert(len(rawbytes) >= new_index)

```python
return int.from_bytes(rawbytes[current_index:current_index+byte_length], 'little'), new_index
```

#### bool

Return True if 0x01, False if 0x00.
@@ -254,9 +232,7 @@ assert rawbytes in (b'\x00', b'\x01')

```python
return True if rawbytes == b'\x01' else False
```

#### bytesN

Return the `N` bytes.
@@ -266,28 +242,6 @@ new_index = current_index + N

```python
return rawbytes[current_index:current_index+N], new_index
```
#### List/Vectors

Deserialize each element in the list.

@@ -295,7 +249,6 @@ Deserialize each element in the list.
2. Loop through deserializing each item in the list until you reach the entire length of the list.

| Check to perform                        | code                                             |
|:-----------------------------------------|:--------------------------------------------------|
| ``rawbytes`` has enough left for length | ``len(rawbytes) > current_index + LENGTH_BYTES`` |
@@ -371,7 +324,12 @@ return typ(**values), item_index
### Tree Hash

The below `hash_tree_root_internal` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For use as a "final output" (e.g. for signing), use `hash_tree_root(x) = zpad(hash_tree_root_internal(x), 32)`, where `zpad` is a helper that extends the given `bytes` value to the desired `length` by adding zero bytes on the right:

```python
def zpad(input: bytes, length: int) -> bytes:
    return input + b'\x00' * (length - len(input))
```

Refer to [the helper function `hash`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#hash) of Phase 0 of the [Eth2.0 specs](https://github.com/ethereum/eth2.0-specs) for a definition of the hash function used below, `hash(x)`.

@@ -379,13 +337,13 @@ Refer to [the helper function `hash`](https://github.com/ethereum/eth2.0-specs/b
Return the serialization of the value.

#### `uint264`..`uintN`, `bytes33`..`bytesN`

Return the hash of the serialization of the value.

#### List/Vectors

First, we define the Merkle tree function.
```python
|
```python
|
||||||
# Merkle tree hash of a list of homogenous, non-empty items
|
# Merkle tree hash of a list of homogenous, non-empty items
|
||||||
|
@ -401,35 +359,41 @@ def merkle_hash(lst):
|
||||||
items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])
|
items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])
|
||||||
|
|
||||||
# Build a list of chunks based on the number of items in the chunk
|
# Build a list of chunks based on the number of items in the chunk
|
||||||
chunkz = [b''.join(lst[i:i+items_per_chunk]) for i in range(0, len(lst), items_per_chunk)]
|
chunkz = [
|
||||||
|
zpad(b''.join(lst[i:i + items_per_chunk]), SSZ_CHUNK_SIZE)
|
||||||
|
for i in range(0, len(lst), items_per_chunk)
|
||||||
|
]
|
||||||
else:
|
else:
|
||||||
# Leave large items alone
|
# Leave large items alone
|
||||||
chunkz = lst
|
chunkz = lst
|
||||||
|
|
||||||
# Tree-hash
|
# Merkleise
|
||||||
|
def next_power_of_2(x):
|
||||||
|
return 1 if x == 0 else 2**(x - 1).bit_length()
|
||||||
|
|
||||||
|
for i in range(len(chunkz), next_power_of_2(len(chunkz))):
|
||||||
|
chunkz.append(b'\x00' * SSZ_CHUNK_SIZE)
|
||||||
while len(chunkz) > 1:
|
while len(chunkz) > 1:
|
||||||
if len(chunkz) % 2 == 1:
|
|
||||||
chunkz.append(b'\x00' * SSZ_CHUNK_SIZE)
|
|
||||||
chunkz = [hash(chunkz[i] + chunkz[i+1]) for i in range(0, len(chunkz), 2)]
|
chunkz = [hash(chunkz[i] + chunkz[i+1]) for i in range(0, len(chunkz), 2)]
|
||||||
|
|
||||||
# Return hash of root and length data
|
# Return hash of root and data length
|
||||||
return hash(chunkz[0] + datalen)
|
return hash(chunkz[0] + datalen)
|
||||||
```
|
```
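To see the chunking, padding, and length-mixing in action, here is a self-contained run of the function, with `sha256` standing in for the spec's `hash` and an assumed `SSZ_CHUNK_SIZE` of 128:

```python
from hashlib import sha256

SSZ_CHUNK_SIZE = 128  # assumed chunk size for this sketch

def hash(x):
    return sha256(x).digest()  # stand-in for the spec's `hash`

def zpad(inp, length):
    return inp + b'\x00' * (length - len(inp))

def merkle_hash(lst):
    # Store length of list (to compensate for non-bijectiveness of padding)
    datalen = len(lst).to_bytes(32, 'big')
    if len(lst) == 0:
        chunkz = [b'\x00' * SSZ_CHUNK_SIZE]
    elif len(lst[0]) < SSZ_CHUNK_SIZE:
        items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])
        chunkz = [
            zpad(b''.join(lst[i:i + items_per_chunk]), SSZ_CHUNK_SIZE)
            for i in range(0, len(lst), items_per_chunk)
        ]
    else:
        chunkz = lst
    # Merkleise, padding odd layers with a zero chunk
    while len(chunkz) > 1:
        if len(chunkz) % 2 == 1:
            chunkz.append(b'\x00' * SSZ_CHUNK_SIZE)
        chunkz = [hash(chunkz[i] + chunkz[i + 1]) for i in range(0, len(chunkz), 2)]
    return hash(chunkz[0] + datalen)

# Five 32-byte items: four fit per 128-byte chunk, so we get two chunks,
# the second zero-padded, then a single round of pairwise hashing.
items = [bytes([i]) * 32 for i in range(5)]
root = merkle_hash(items)
```

Note how dropping the fifth item changes the root even though the padded second chunk would otherwise be all zeros, because `datalen` is mixed into the final hash.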
To `hash_tree_root_internal` a list, we simply do:

```python
return merkle_hash([hash_tree_root_internal(item) for item in value])
```

Where the inner `hash_tree_root_internal` is a recursive application of the tree-hashing function (returning less than 32 bytes for short single values).
#### Container

Recursively tree hash the values in the container in the same order as the fields, and return the hash of the concatenation of the results.

```python
return hash(b''.join([hash_tree_root_internal(getattr(value, field)) for field in value.fields]))
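A runnable sketch of container hashing, with stand-ins throughout: `SimpleContainer` and its fields are hypothetical, `hash` is `sha256`, and `hash_tree_root_internal` is simplified to small integers only (the 8-byte big-endian serialization here is illustrative, not normative):

```python
from hashlib import sha256

def hash(x: bytes) -> bytes:
    return sha256(x).digest()  # stand-in for the spec's `hash`

class SimpleContainer:
    # Hypothetical container with an explicit field order, as in the spec's types
    fields = ['slot', 'shard']

    def __init__(self, slot: int, shard: int):
        self.slot = slot
        self.shard = shard

def hash_tree_root_internal(value) -> bytes:
    # Simplified: handles only small ints and containers for illustration
    if isinstance(value, int):
        return value.to_bytes(8, 'big')
    # Hash the concatenation of the fields' roots, in field order
    return hash(b''.join(
        hash_tree_root_internal(getattr(value, field)) for field in value.fields
    ))

c = SimpleContainer(slot=5, shard=2)
root = hash_tree_root_internal(c)
```

Because the fields are concatenated in declaration order, two containers with swapped field values hash differently.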
## Implementations

@@ -444,6 +408,7 @@ return hash(b''.join([hash_tree_root(getattr(x, field)) for field in value.field

| Java | [ https://www.github.com/ConsenSys/cava/tree/master/ssz ](https://www.github.com/ConsenSys/cava/tree/master/ssz) | SSZ Java library, part of the Cava suite |
| Go | [ https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz ](https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz) | Go implementation of SSZ maintained by Prysmatic Labs |
| Swift | [ https://github.com/yeeth/SimpleSerialize.swift ](https://github.com/yeeth/SimpleSerialize.swift) | Swift implementation of SSZ |
| C# | [ https://github.com/codingupastorm/csharp-ssz ](https://github.com/codingupastorm/csharp-ssz) | C# implementation of SSZ |

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
@@ -42,7 +42,7 @@ __NOTICE__: This document is a work-in-progress for researchers and implementers

- [Beacon block root](#beacon-block-root)
- [Epoch boundary root](#epoch-boundary-root)
- [Shard block root](#shard-block-root)
- [Latest crosslink](#latest-crosslink)
- [Justified epoch](#justified-epoch)
- [Justified block root](#justified-block-root)
- [Construct attestation](#construct-attestation)
@@ -166,7 +166,7 @@ Set `block.randao_reveal = epoch_signature` where `epoch_signature` is defined a

```python
epoch_signature = bls_sign(
    privkey=validator.privkey,  # privkey stored locally, not in state
    message_hash=int_to_bytes32(slot_to_epoch(block.slot)),
    domain=get_domain(
        fork=fork,  # `fork` is the fork object at the slot `block.slot`
        epoch=slot_to_epoch(block.slot),
        domain_type=DOMAIN_RANDAO,
    )
)
```
@@ -205,7 +205,7 @@ proposal_root = hash_tree_root(proposal_data)

```python
signed_proposal_data = bls_sign(
    privkey=validator.privkey,  # privkey stored locally, not in state
    message_hash=proposal_root,
    domain=get_domain(
        fork=fork,  # `fork` is the fork object at the slot `block.slot`
        epoch=slot_to_epoch(block.slot),
        domain_type=DOMAIN_PROPOSAL,
    )
)
```
@@ -270,9 +270,9 @@ Set `attestation_data.shard_block_root = ZERO_HASH`.

_Note:_ This is a stub for phase 0.

##### Latest crosslink

Set `attestation_data.latest_crosslink = state.latest_crosslinks[shard]` where `state` is the beacon state at `head` and `shard` is the validator's assigned shard.

##### Justified epoch
|
@ -321,7 +321,7 @@ attestation_message_to_sign = hash_tree_root(attestation_data_and_custody_bit)
|
||||||
|
|
||||||
signed_attestation_data = bls_sign(
|
signed_attestation_data = bls_sign(
|
||||||
privkey=validator.privkey, # privkey store locally, not in state
|
privkey=validator.privkey, # privkey store locally, not in state
|
||||||
message=attestation_message_to_sign,
|
message_hash=attestation_message_to_sign,
|
||||||
domain=get_domain(
|
domain=get_domain(
|
||||||
fork=fork, # `fork` is the fork object at the slot, `attestation_data.slot`
|
fork=fork, # `fork` is the fork object at the slot, `attestation_data.slot`
|
||||||
epoch=slot_to_epoch(attestation_data.slot),
|
epoch=slot_to_epoch(attestation_data.slot),
|
||||||
|
@@ -341,27 +341,47 @@ There are three possibilities for the shuffling at the next epoch:

Either (2) or (3) occurs if (1) fails. The choice between (2) and (3) is deterministic based upon `epochs_since_last_registry_update`.

`get_crosslink_committees_at_slot` is designed to be able to query slots in the next epoch. When querying slots in the next epoch there are two options -- with and without a `registry_change` -- which is the optional third parameter of the function. The following helper can be used to get the potential crosslink committee assignments in the next epoch for a given `validator_index` and `registry_change`.

```python
def get_next_epoch_committee_assignment(
        state: BeaconState,
        validator_index: ValidatorIndex,
        registry_change: bool) -> Tuple[List[ValidatorIndex], ShardNumber, SlotNumber, bool]:
    """
    Return the committee assignment in the next epoch for ``validator_index`` and ``registry_change``.
    ``assignment`` returned is a tuple of the following form:
        * ``assignment[0]`` is the list of validators in the committee
        * ``assignment[1]`` is the shard to which the committee is assigned
        * ``assignment[2]`` is the slot at which the committee is assigned
        * ``assignment[3]`` is a bool signalling if the validator is expected to propose
          a beacon block at the assigned slot.
    """
    current_epoch = get_current_epoch(state)
    next_epoch = current_epoch + 1
    next_epoch_start_slot = get_epoch_start_slot(next_epoch)
    for slot in range(next_epoch_start_slot, next_epoch_start_slot + EPOCH_LENGTH):
        crosslink_committees = get_crosslink_committees_at_slot(
            state,
            slot,
            registry_change=registry_change,
        )
        selected_committees = [
            committee  # Tuple[List[ValidatorIndex], ShardNumber]
            for committee in crosslink_committees
            if validator_index in committee[0]
        ]
        if len(selected_committees) > 0:
            validators = selected_committees[0][0]
            shard = selected_committees[0][1]
            first_committee_at_slot = crosslink_committees[0][0]  # List[ValidatorIndex]
            is_proposer = first_committee_at_slot[slot % len(first_committee_at_slot)] == validator_index

            assignment = (validators, shard, slot, is_proposer)
            return assignment
```

`get_next_epoch_committee_assignment` should be called at the start of each epoch to get the assignment for the next epoch (slots during `current_epoch + 1`). A validator should always plan for assignments from both values of `registry_change` unless the validator can concretely eliminate one of the options. Planning for future assignments involves noting at which future slot one might have to attest and propose and also which shard one should begin syncing (in phase 1+).
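A toy sketch of the planning loop described above. All beacon chain helpers are stubbed with a fixed round-robin schedule (the constants, committee sizes, and the offset formula are invented for illustration); only the control flow mirrors the spec, not the real shuffling:

```python
EPOCH_LENGTH = 4      # assumed toy value
VALIDATOR_COUNT = 8   # assumed toy value

def get_crosslink_committees_at_slot(state, slot, registry_change=False):
    # Stub: one committee of two validators per slot, rotating round-robin;
    # `registry_change` shifts the rotation by one to model a reshuffle.
    offset = (slot * 2 + (1 if registry_change else 0)) % VALIDATOR_COUNT
    committee = [offset, (offset + 1) % VALIDATOR_COUNT]
    shard = slot % 4
    return [(committee, shard)]

def get_next_epoch_committee_assignment(state, validator_index, registry_change):
    # Pretend state['slot'] is the first slot of the current epoch
    next_epoch_start_slot = state['slot'] + EPOCH_LENGTH
    for slot in range(next_epoch_start_slot, next_epoch_start_slot + EPOCH_LENGTH):
        crosslink_committees = get_crosslink_committees_at_slot(state, slot, registry_change)
        selected = [c for c in crosslink_committees if validator_index in c[0]]
        if selected:
            validators, shard = selected[0]
            first_committee = crosslink_committees[0][0]
            is_proposer = first_committee[slot % len(first_committee)] == validator_index
            return (validators, shard, slot, is_proposer)
    return None

state = {'slot': 0}
# Plan for both registry_change outcomes, as the text recommends.
plans = {rc: get_next_epoch_committee_assignment(state, 3, rc) for rc in (False, True)}
```

With this stub schedule, validator 3 is assigned at slot 5 in both cases, but whether it proposes and who its committee peers are differ between the two `registry_change` outcomes, which is exactly why a validator plans for both.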
## How to avoid slashing