Merge branch 'dev' into big-to-little
Commit 0ad2ffab50
@@ -69,10 +69,10 @@ We require:
G2_cofactor = 305502333931268344200999753193121504214466019254188142667664032982267604182971884026507427359259977847832272839041616661285803823378372096355777062779109
q = 4002409555221667393417789825735904156556882819939007885332058136124031650490837864442687629129015664037894272559787

-def hash_to_G2(message: bytes32, domain: uint64) -> [uint384]:
+def hash_to_G2(message_hash: Bytes32, domain: uint64) -> [uint384]:
    # Initial candidate x coordinate
-    x_re = int.from_bytes(hash(message + bytes8(domain) + b'\x01'), 'big')
-    x_im = int.from_bytes(hash(message + bytes8(domain) + b'\x02'), 'big')
+    x_re = int.from_bytes(hash(message_hash + bytes8(domain) + b'\x01'), 'big')
+    x_im = int.from_bytes(hash(message_hash + bytes8(domain) + b'\x02'), 'big')
    x_coordinate = Fq2([x_re, x_im])  # x = x_re + i * x_im

    # Test candidate y coordinates until a valid one is found
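For illustration, here is a minimal, runnable sketch of how the two candidate-coordinate seeds are derived, with SHA-256 standing in for the spec's `hash` and an assumed 8-byte big-endian encoding for `bytes8(domain)` (both are assumptions, not taken from this diff):

```python
import hashlib

def hash(data: bytes) -> bytes:
    # Stand-in for the spec's hash function (SHA-256 is an assumption here).
    return hashlib.sha256(data).digest()

def bytes8(n: int) -> bytes:
    # Assumed 8-byte big-endian serialization of the domain.
    return n.to_bytes(8, 'big')

message_hash = b'\x11' * 32   # an arbitrary 32-byte message hash
domain = 2                    # an arbitrary domain value

# The b'\x01' / b'\x02' suffixes give two independent seeds for the real and imaginary parts of x.
x_re = int.from_bytes(hash(message_hash + bytes8(domain) + b'\x01'), 'big')
x_im = int.from_bytes(hash(message_hash + bytes8(domain) + b'\x02'), 'big')
print(x_re != x_im)  # True
```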
@@ -128,17 +128,17 @@ g = Fq2([g_x, g_y])

### `bls_verify`

-Let `bls_verify(pubkey: Bytes48, message: Bytes32, signature: Bytes96, domain: uint64) -> bool`:
+Let `bls_verify(pubkey: Bytes48, message_hash: Bytes32, signature: Bytes96, domain: uint64) -> bool`:

* Verify that `pubkey` is a valid G1 point.
* Verify that `signature` is a valid G2 point.
-* Verify that `e(pubkey, hash_to_G2(message, domain)) == e(g, signature)`.
+* Verify that `e(pubkey, hash_to_G2(message_hash, domain)) == e(g, signature)`.

### `bls_verify_multiple`

-Let `bls_verify_multiple(pubkeys: List[Bytes48], messages: List[Bytes32], signature: Bytes96, domain: uint64) -> bool`:
+Let `bls_verify_multiple(pubkeys: List[Bytes48], message_hashes: List[Bytes32], signature: Bytes96, domain: uint64) -> bool`:

* Verify that each `pubkey` in `pubkeys` is a valid G1 point.
* Verify that `signature` is a valid G2 point.
-* Verify that `len(pubkeys)` equals `len(messages)` and denote the length `L`.
+* Verify that `len(pubkeys)` equals `len(message_hashes)` and denote the length `L`.
-* Verify that `e(pubkeys[0], hash_to_G2(messages[0], domain)) * ... * e(pubkeys[L-1], hash_to_G2(messages[L-1], domain)) == e(g, signature)`.
+* Verify that `e(pubkeys[0], hash_to_G2(message_hashes[0], domain)) * ... * e(pubkeys[L-1], hash_to_G2(message_hashes[L-1], domain)) == e(g, signature)`.
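As background, the equation `e(pubkey, hash_to_G2(message_hash, domain)) == e(g, signature)` holds because the pairing is bilinear: e(x·g, H) == e(g, x·H). A small sketch of that identity, assuming the `py_ecc` package (its `bls12_381` module and the `multiply`/`pairing` helpers are assumptions, and a scalar multiple of the G2 generator stands in for a real `hash_to_G2` output):

```python
from py_ecc.bls12_381 import G1, G2, multiply, pairing

privkey = 123456789
pubkey = multiply(G1, privkey)        # pubkey = privkey * g, with g the G1 generator
H = multiply(G2, 987654321)           # stand-in for hash_to_G2(message_hash, domain)
signature = multiply(H, privkey)      # signature = privkey * H

# Bilinearity: e(pubkey, H) == e(g, signature). py_ecc's pairing takes (G2 point, G1 point).
assert pairing(H, pubkey) == pairing(signature, G1)
print("verification equation holds")
```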
@@ -13,6 +13,7 @@
    - [Constants](#constants)
        - [Misc](#misc)
        - [Deposit contract](#deposit-contract)
+        - [Gwei values](#gwei-values)
        - [Initial values](#initial-values)
        - [Time parameters](#time-parameters)
        - [State list lengths](#state-list-lengths)
@@ -54,11 +55,12 @@
    - [`hash`](#hash)
    - [`hash_tree_root`](#hash_tree_root)
    - [`slot_to_epoch`](#slot_to_epoch)
+    - [`get_previous_epoch`](#get_previous_epoch)
    - [`get_current_epoch`](#get_current_epoch)
    - [`get_epoch_start_slot`](#get_epoch_start_slot)
    - [`is_active_validator`](#is_active_validator)
    - [`get_active_validator_indices`](#get_active_validator_indices)
-    - [`shuffle`](#shuffle)
+    - [`get_permuted_index`](#get_permuted_index)
    - [`split`](#split)
    - [`get_epoch_committee_count`](#get_epoch_committee_count)
    - [`get_shuffling`](#get_shuffling)
@@ -75,7 +77,9 @@
    - [`get_attestation_participants`](#get_attestation_participants)
    - [`is_power_of_two`](#is_power_of_two)
    - [`int_to_bytes1`, `int_to_bytes2`, ...](#int_to_bytes1-int_to_bytes2-)
+    - [`bytes_to_int`](#bytes_to_int)
    - [`get_effective_balance`](#get_effective_balance)
+    - [`get_total_balance`](#get_total_balance)
    - [`get_fork_version`](#get_fork_version)
    - [`get_domain`](#get_domain)
    - [`get_bitfield_bit`](#get_bitfield_bit)
@@ -177,29 +181,36 @@ Code snippets appearing in `this style` are to be interpreted as Python code. Be
| - | - | :-: |
| `SHARD_COUNT` | `2**10` (= 1,024) | shards |
| `TARGET_COMMITTEE_SIZE` | `2**7` (= 128) | [validators](#dfn-validator) |
-| `EJECTION_BALANCE` | `2**4 * 1e9` (= 16,000,000,000) | Gwei |
| `MAX_BALANCE_CHURN_QUOTIENT` | `2**5` (= 32) | - |
| `BEACON_CHAIN_SHARD_NUMBER` | `2**64 - 1` | - |
| `MAX_INDICES_PER_SLASHABLE_VOTE` | `2**12` (= 4,096) | votes |
| `MAX_WITHDRAWALS_PER_EPOCH` | `2**2` (= 4) | withdrawals |
+| `SHUFFLE_ROUND_COUNT` | 90 | - |

* For the safety of crosslinks `TARGET_COMMITTEE_SIZE` exceeds [the recommended minimum committee size of 111](https://vitalik.ca/files/Ithaca201807_Sharding.pdf); with sufficient active validators (at least `EPOCH_LENGTH * TARGET_COMMITTEE_SIZE`), the shuffling algorithm ensures committee sizes of at least `TARGET_COMMITTEE_SIZE`. (Unbiasable randomness with a Verifiable Delay Function (VDF) will improve committee robustness and lower the safe minimum committee size.)

### Deposit contract

+| Name | Value |
+| - | - |
+| `DEPOSIT_CONTRACT_ADDRESS` | **TBD** |
+| `DEPOSIT_CONTRACT_TREE_DEPTH` | `2**5` (= 32) |
+
+### Gwei values
+
| Name | Value | Unit |
| - | - | :-: |
-| `DEPOSIT_CONTRACT_ADDRESS` | **TBD** |
-| `DEPOSIT_CONTRACT_TREE_DEPTH` | `2**5` (= 32) | - |
| `MIN_DEPOSIT_AMOUNT` | `2**0 * 1e9` (= 1,000,000,000) | Gwei |
| `MAX_DEPOSIT_AMOUNT` | `2**5 * 1e9` (= 32,000,000,000) | Gwei |
+| `FORK_CHOICE_BALANCE_INCREMENT` | `2**0 * 1e9` (= 1,000,000,000) | Gwei |
+| `EJECTION_BALANCE` | `2**4 * 1e9` (= 16,000,000,000) | Gwei |

### Initial values

| Name | Value |
| - | - |
| `GENESIS_FORK_VERSION` | `0` |
-| `GENESIS_SLOT` | `2**19` |
+| `GENESIS_SLOT` | `2**63` |
| `GENESIS_EPOCH` | `slot_to_epoch(GENESIS_SLOT)` |
| `GENESIS_START_SHARD` | `0` |
| `FAR_FUTURE_EPOCH` | `2**64 - 1` |
@@ -639,6 +650,19 @@ def slot_to_epoch(slot: SlotNumber) -> EpochNumber:
    return slot // EPOCH_LENGTH
```

+### `get_previous_epoch`
+
+```python
+def get_previous_epoch(state: BeaconState) -> EpochNumber:
+    """
+    Return the previous epoch of the given ``state``.
+    If the current epoch is ``GENESIS_EPOCH``, return ``GENESIS_EPOCH``.
+    """
+    if slot_to_epoch(state.slot) > GENESIS_EPOCH:
+        return slot_to_epoch(state.slot) - 1
+    return slot_to_epoch(state.slot)
+```
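A quick self-contained illustration of the genesis clamp in `get_previous_epoch`, with `EPOCH_LENGTH` assumed to be 64 and `BeaconState` reduced to a bare `slot` holder:

```python
from types import SimpleNamespace

EPOCH_LENGTH = 64               # assumed value for this sketch
GENESIS_SLOT = 2**63
GENESIS_EPOCH = GENESIS_SLOT // EPOCH_LENGTH

def slot_to_epoch(slot: int) -> int:
    return slot // EPOCH_LENGTH

def get_previous_epoch(state) -> int:
    if slot_to_epoch(state.slot) > GENESIS_EPOCH:
        return slot_to_epoch(state.slot) - 1
    return slot_to_epoch(state.slot)

# At genesis the previous epoch is clamped to GENESIS_EPOCH instead of underflowing.
print(get_previous_epoch(SimpleNamespace(slot=GENESIS_SLOT)))                 # GENESIS_EPOCH
print(get_previous_epoch(SimpleNamespace(slot=GENESIS_SLOT + EPOCH_LENGTH)))  # also GENESIS_EPOCH
```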

### `get_current_epoch`

```python
@@ -678,57 +702,27 @@ def get_active_validator_indices(validators: List[Validator], epoch: EpochNumber
    return [i for i, v in enumerate(validators) if is_active_validator(v, epoch)]
```

-### `shuffle`
+### `get_permuted_index`

```python
-def shuffle(values: List[Any], seed: Bytes32) -> List[Any]:
+def get_permuted_index(index: int, list_size: int, seed: Bytes32) -> int:
    """
-    Return the shuffled ``values`` with ``seed`` as entropy.
+    Return `p(index)` in a pseudorandom permutation `p` of `0...list_size-1` with ``seed`` as entropy.
+
+    Utilizes 'swap or not' shuffling found in
+    https://link.springer.com/content/pdf/10.1007%2F978-3-642-32009-5_1.pdf
+    See the 'generalized domain' algorithm on page 3.
    """
-    values_count = len(values)
-
-    # Entropy is consumed from the seed in 3-byte (24 bit) chunks.
-    rand_bytes = 3
-    # The highest possible result of the RNG.
-    rand_max = 2 ** (rand_bytes * 8) - 1
-
-    # The range of the RNG places an upper-bound on the size of the list that
-    # may be shuffled. It is a logic error to supply an oversized list.
-    assert values_count < rand_max
-
-    output = [x for x in values]
-    source = seed
-    index = 0
-    while index < values_count - 1:
-        # Re-hash the `source` to obtain a new pattern of bytes.
-        source = hash(source)
-        # Iterate through the `source` bytes in 3-byte chunks.
-        for position in range(0, 32 - (32 % rand_bytes), rand_bytes):
-            # Determine the number of indices remaining in `values` and exit
-            # once the last index is reached.
-            remaining = values_count - index
-            if remaining == 1:
-                break
-
-            # Read 3-bytes of `source` as a 24-bit little-endian integer.
-            sample_from_source = int.from_bytes(source[position:position + rand_bytes], 'little')
-
-            # Sample values greater than or equal to `sample_max` will cause
-            # modulo bias when mapped into the `remaining` range.
-            sample_max = rand_max - rand_max % remaining
-
-            # Perform a swap if the consumed entropy will not cause modulo bias.
-            if sample_from_source < sample_max:
-                # Select a replacement index for the current index.
-                replacement_position = (sample_from_source % remaining) + index
-                # Swap the current index with the replacement index.
-                output[index], output[replacement_position] = output[replacement_position], output[index]
-                index += 1
-            else:
-                # The sample causes modulo bias. A new sample should be read.
-                pass
-
-    return output
+    for round in range(SHUFFLE_ROUND_COUNT):
+        pivot = bytes_to_int(hash(seed + int_to_bytes1(round))[0:8]) % list_size
+        flip = (pivot - index) % list_size
+        position = max(index, flip)
+        source = hash(seed + int_to_bytes1(round) + int_to_bytes4(position // 256))
+        byte = source[(position % 256) // 8]
+        bit = (byte >> (position % 8)) % 2
+        index = flip if bit else index
+
+    return index
```
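A runnable sketch of the swap-or-not permutation defined above, with SHA-256 standing in for the spec's `hash` (an assumption); it checks that `get_permuted_index` really is a permutation of `0..list_size-1`:

```python
import hashlib

SHUFFLE_ROUND_COUNT = 90

def hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for the spec's hash function

def int_to_bytes1(x: int) -> bytes: return x.to_bytes(1, 'little')
def int_to_bytes4(x: int) -> bytes: return x.to_bytes(4, 'little')
def bytes_to_int(data: bytes) -> int: return int.from_bytes(data, 'little')

def get_permuted_index(index: int, list_size: int, seed: bytes) -> int:
    for round in range(SHUFFLE_ROUND_COUNT):
        pivot = bytes_to_int(hash(seed + int_to_bytes1(round))[0:8]) % list_size
        flip = (pivot - index) % list_size
        position = max(index, flip)
        source = hash(seed + int_to_bytes1(round) + int_to_bytes4(position // 256))
        byte = source[(position % 256) // 8]
        bit = (byte >> (position % 8)) % 2
        index = flip if bit else index
    return index

seed = b'\x42' * 32
permuted = [get_permuted_index(i, 10, seed) for i in range(10)]
print(permuted)                               # a reordering of 0..9
print(sorted(permuted) == list(range(10)))    # True: the mapping is a permutation
```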

### `split`
@@ -778,8 +772,10 @@ def get_shuffling(seed: Bytes32,
    committees_per_epoch = get_epoch_committee_count(len(active_validator_indices))

    # Shuffle
-    seed = xor(seed, int_to_bytes32(epoch))
-    shuffled_active_validator_indices = shuffle(active_validator_indices, seed)
+    shuffled_active_validator_indices = [
+        active_validator_indices[get_permuted_index(i, len(active_validator_indices), seed)]
+        for i in active_validator_indices
+    ]

    # Split the shuffled list into committees_per_epoch pieces
    return split(shuffled_active_validator_indices, committees_per_epoch)
@@ -836,7 +832,7 @@ def get_next_epoch_committee_count(state: BeaconState) -> int:
```python
def get_crosslink_committees_at_slot(state: BeaconState,
                                     slot: SlotNumber,
-                                     registry_change=False: bool) -> List[Tuple[List[ValidatorIndex], ShardNumber]]:
+                                     registry_change: bool=False) -> List[Tuple[List[ValidatorIndex], ShardNumber]]:
    """
    Return the list of ``(committee, shard)`` tuples for the ``slot``.
@@ -845,7 +841,7 @@ def get_crosslink_committees_at_slot(state: BeaconState,
    """
    epoch = slot_to_epoch(slot)
    current_epoch = get_current_epoch(state)
-    previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else current_epoch
+    previous_epoch = get_previous_epoch(state)
    next_epoch = current_epoch + 1

    assert previous_epoch <= epoch <= next_epoch
@@ -945,7 +941,8 @@ def generate_seed(state: BeaconState,
    """
    return hash(
        get_randao_mix(state, epoch - SEED_LOOKAHEAD) +
-        get_active_index_root(state, epoch)
+        get_active_index_root(state, epoch) +
+        int_to_bytes32(epoch)
    )
```
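A small sketch of the updated seed derivation, with the state lookups replaced by parameters and SHA-256 standing in for `hash` (assumptions); it shows that mixing in `int_to_bytes32(epoch)` separates seeds even when the randao mix and index root repeat:

```python
import hashlib

def hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()   # stand-in for the spec's hash function

def int_to_bytes32(x: int) -> bytes:
    return x.to_bytes(32, 'little')

def generate_seed(randao_mix: bytes, active_index_root: bytes, epoch: int) -> bytes:
    # Same shape as the spec function, with state lookups replaced by explicit arguments.
    return hash(randao_mix + active_index_root + int_to_bytes32(epoch))

mix, root = b'\x01' * 32, b'\x02' * 32
print(generate_seed(mix, root, 10) != generate_seed(mix, root, 11))  # True
```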
@@ -1017,6 +1014,13 @@ def is_power_of_two(value: int) -> bool:

`int_to_bytes1(x): return x.to_bytes(1, 'little')`, `int_to_bytes2(x): return x.to_bytes(2, 'little')`, and so on for all integers, particularly 1, 2, 3, 4, 8, 32, 48, 96.

+### `bytes_to_int`
+
+```python
+def bytes_to_int(data: bytes) -> int:
+    return int.from_bytes(data, 'little')
+```
+
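A round-trip example of the little-endian helpers (`int_to_bytesN` together with the new `bytes_to_int`):

```python
def int_to_bytes4(x: int) -> bytes:
    return x.to_bytes(4, 'little')

def bytes_to_int(data: bytes) -> int:
    return int.from_bytes(data, 'little')

assert int_to_bytes4(1) == b'\x01\x00\x00\x00'           # little-endian encoding
assert bytes_to_int(int_to_bytes4(3000000)) == 3000000   # round trip
print("ok")
```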
### `get_effective_balance`

```python
@@ -1027,6 +1031,16 @@ def get_effective_balance(state: State, index: ValidatorIndex) -> Gwei:
    return min(state.validator_balances[index], MAX_DEPOSIT_AMOUNT)
```

+### `get_total_balance`
+
+```python
+def get_total_balance(state: BeaconState, validators: List[ValidatorIndex]) -> Gwei:
+    """
+    Return the combined effective balance of an array of validators.
+    """
+    return sum([get_effective_balance(state, i) for i in validators])
+```
+
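A toy illustration of `get_total_balance`, with `BeaconState` reduced to a bare balances list and `MAX_DEPOSIT_AMOUNT` taken as 32 ETH in Gwei; it shows the per-validator cap applied by `get_effective_balance` before summing:

```python
from types import SimpleNamespace

MAX_DEPOSIT_AMOUNT = 32 * 10**9  # Gwei

def get_effective_balance(state, index: int) -> int:
    return min(state.validator_balances[index], MAX_DEPOSIT_AMOUNT)

def get_total_balance(state, validators) -> int:
    return sum([get_effective_balance(state, i) for i in validators])

state = SimpleNamespace(validator_balances=[31 * 10**9, 35 * 10**9, 32 * 10**9])
print(get_total_balance(state, [0, 1, 2]))  # 95000000000: the 35 ETH balance is capped at 32
```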
### `get_fork_version`

```python
@@ -1074,7 +1088,8 @@ def verify_bitfield(bitfield: bytes, committee_size: int) -> bool:
    if len(bitfield) != (committee_size + 7) // 8:
        return False

-    for i in range(committee_size + 1, committee_size - committee_size % 8 + 8):
+    # Check `bitfield` is padded with zero bits only
+    for i in range(committee_size, len(bitfield) * 8):
        if get_bitfield_bit(bitfield, i) == 0b1:
            return False
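A self-contained sketch of the padding check, assuming the spec's least-significant-bit-first `get_bitfield_bit` (that bit ordering is an assumption consistent with the snippet above):

```python
def get_bitfield_bit(bitfield: bytes, i: int) -> int:
    return (bitfield[i // 8] >> (i % 8)) % 2

def verify_bitfield(bitfield: bytes, committee_size: int) -> bool:
    if len(bitfield) != (committee_size + 7) // 8:
        return False
    # Check `bitfield` is padded with zero bits only
    for i in range(committee_size, len(bitfield) * 8):
        if get_bitfield_bit(bitfield, i) == 0b1:
            return False
    return True

# A committee of 10 validators needs 2 bytes; bits 10..15 must stay zero.
print(verify_bitfield(b'\xff\x03', 10))  # True: only bits 0..9 are set
print(verify_bitfield(b'\xff\x04', 10))  # False: padding bit 10 is set
```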
@@ -1112,21 +1127,17 @@ def verify_slashable_attestation(state: BeaconState, slashable_attestation: Slas
    else:
        custody_bit_1_indices.append(validator_index)

-    return bls_verify(
+    return bls_verify_multiple(
        pubkeys=[
            bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_0_indices]),
            bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_1_indices]),
        ],
-        messages=[
+        message_hashes=[
            hash_tree_root(AttestationDataAndCustodyBit(data=slashable_attestation.data, custody_bit=0b0)),
            hash_tree_root(AttestationDataAndCustodyBit(data=slashable_attestation.data, custody_bit=0b1)),
        ],
        signature=slashable_attestation.aggregate_signature,
-        domain=get_domain(
-            state.fork,
-            slot_to_epoch(vote_data.data.slot),
-            DOMAIN_ATTESTATION,
-        ),
+        domain=get_domain(state.fork, slot_to_epoch(vote_data.data.slot), DOMAIN_ATTESTATION),
    )
```
@@ -1216,7 +1227,7 @@ def validate_proof_of_possession(state: BeaconState,

    return bls_verify(
        pubkey=pubkey,
-        message=hash_tree_root(proof_of_possession_data),
+        message_hash=hash_tree_root(proof_of_possession_data),
        signature=proof_of_possession,
        domain=get_domain(
            state.fork,
@@ -1381,87 +1392,16 @@ When sufficiently many full deposits have been made the deposit contract emits t

### Vyper code

-```python
-## compiled with v0.1.0-beta.6 ##
-
-MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei
-MAX_DEPOSIT_AMOUNT: constant(uint256) = 32000000000 # Gwei
-GWEI_PER_ETH: constant(uint256) = 1000000000 # 10**9
-CHAIN_START_FULL_DEPOSIT_THRESHOLD: constant(uint256) = 16384 # 2**14
-DEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32
-TWO_TO_POWER_OF_TREE_DEPTH: constant(uint256) = 4294967296 # 2**32
-SECONDS_PER_DAY: constant(uint256) = 86400
-
-Deposit: event({deposit_root: bytes32, data: bytes[528], merkle_tree_index: bytes[8], branch: bytes32[32]})
-ChainStart: event({deposit_root: bytes32, time: bytes[8]})
-
-zerohashes: bytes32[32]
-branch: bytes32[32]
-deposit_count: uint256
-full_deposit_count: uint256
-chainStarted: public(bool)
-
-@public
-def __init__():
-    for i in range(31):
-        self.zerohashes[i+1] = sha3(concat(self.zerohashes[i], self.zerohashes[i]))
-        self.branch[i+1] = self.zerohashes[i+1]
-
-@public
-@constant
-def get_deposit_root() -> bytes32:
-    root:bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
-    size:uint256 = self.deposit_count
-    for h in range(32):
-        if size % 2 == 1:
-            root = sha3(concat(self.branch[h], root))
-        else:
-            root = sha3(concat(root, self.zerohashes[h]))
-        size /= 2
-    return root
-
-@payable
-@public
-def deposit(deposit_input: bytes[512]):
-    assert msg.value >= as_wei_value(MIN_DEPOSIT_AMOUNT, "gwei")
-    assert msg.value <= as_wei_value(MAX_DEPOSIT_AMOUNT, "gwei")
-
-    index: uint256 = self.deposit_count
-    deposit_amount: bytes[8] = slice(concat("", convert(msg.value / GWEI_PER_ETH, bytes32)), start=24, len=8)
-    deposit_timestamp: bytes[8] = slice(concat("", convert(block.timestamp, bytes32)), start=24, len=8)
-    deposit_data: bytes[528] = concat(deposit_amount, deposit_timestamp, deposit_input)
-    merkle_tree_index: bytes[8] = slice(concat("", convert(index, bytes32)), start=24, len=8)
-
-    # add deposit to merkle tree
-    i: int128 = 0
-    power_of_two: uint256 = 2
-    for _ in range(32):
-        if (index+1) % power_of_two != 0:
-            break
-        i += 1
-        power_of_two *= 2
-    value:bytes32 = sha3(deposit_data)
-    for j in range(32):
-        if j < i:
-            value = sha3(concat(self.branch[j], value))
-    self.branch[i] = value
-
-    self.deposit_count += 1
-
-    new_deposit_root:bytes32 = self.get_deposit_root()
-    log.Deposit(new_deposit_root, deposit_data, merkle_tree_index, self.branch)
-
-    if msg.value == as_wei_value(MAX_DEPOSIT_AMOUNT, "gwei"):
-        self.full_deposit_count += 1
-        if self.full_deposit_count == CHAIN_START_FULL_DEPOSIT_THRESHOLD:
-            timestamp_day_boundary: uint256 = as_unitless_number(block.timestamp) - as_unitless_number(block.timestamp) % SECONDS_PER_DAY + SECONDS_PER_DAY
-            chainstart_time: bytes[8] = slice(concat("", convert(timestamp_day_boundary, bytes32)), start=24, len=8)
-            log.ChainStart(new_deposit_root, chainstart_time)
-            self.chainStarted = True
-```
+The source for the Vyper contract lives in a [separate repository](https://github.com/ethereum/deposit_contract) at [https://github.com/ethereum/deposit_contract/blob/master/deposit_contract/contracts/validator_registration.v.py](https://github.com/ethereum/deposit_contract/blob/master/deposit_contract/contracts/validator_registration.v.py).

Note: to save ~10x on gas this contract uses a somewhat unintuitive progressive Merkle root calculation algo that requires only O(log(n)) storage. See https://github.com/ethereum/research/blob/master/beacon_chain_impl/progressive_merkle_tree.py for an implementation of the same algo in python tested for correctness.

+For convenience, we provide the interface to the contract here:
+
+* `__init__()`: initializes the contract
+* `get_deposit_root() -> bytes32`: returns the current root of the deposit tree
+* `deposit(bytes[512])`: adds a deposit instance to the deposit tree, incorporating the input argument and the value transferred in the given call. Note: the amount of value transferred *must* be within `MIN_DEPOSIT_AMOUNT` and `MAX_DEPOSIT_AMOUNT`, inclusive. Each of these constants is specified in units of Gwei.
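To make the progressive-root note above concrete, here is a minimal runnable sketch of the same O(log n)-storage algorithm in Python, with SHA-256 standing in for the contract's `sha3` (an assumption; the linked `progressive_merkle_tree.py` is the reference version):

```python
import hashlib

DEPTH = 32

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()   # stand-in for the contract's sha3

zerohashes = [b'\x00' * 32]
for i in range(1, DEPTH):
    zerohashes.append(h(zerohashes[i - 1] + zerohashes[i - 1]))

branch = list(zerohashes)   # one stored node per level: O(log n) storage
deposit_count = 0

def add_deposit(leaf: bytes) -> None:
    # Mirror of the contract's insertion loop: fold completed subtrees upward.
    global deposit_count
    index, value, i = deposit_count, leaf, 0
    while (index + 1) % (2 ** (i + 1)) == 0:
        value = h(branch[i] + value)
        i += 1
    branch[i] = value
    deposit_count += 1

def get_deposit_root() -> bytes:
    root, size = b'\x00' * 32, deposit_count
    for level in range(DEPTH):
        if size % 2 == 1:
            root = h(branch[level] + root)
        else:
            root = h(root + zerohashes[level])
        size //= 2
    return root

for n in range(5):
    add_deposit(h(n.to_bytes(8, 'little')))
print(get_deposit_root().hex())
```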

## On startup

A valid block with slot `GENESIS_SLOT` (a "genesis block") has the following values. Other validity rules (e.g. requiring a signature) do not apply.
@@ -1604,8 +1544,8 @@ def get_ancestor(store: Store, block: BeaconBlock, slot: SlotNumber) -> BeaconBl
    return get_ancestor(store, store.get_parent(block), slot)
```

-* Let `get_latest_attestation(store: Store, validator: Validator) -> Attestation` be the attestation with the highest slot number in `store` from `validator`. If several such attestations exist, use the one the [validator](#dfn-validator) `v` observed first.
+* Let `get_latest_attestation(store: Store, validator_index: ValidatorIndex) -> Attestation` be the attestation with the highest slot number in `store` from the validator with the given `validator_index`. If several such attestations exist, use the one the [validator](#dfn-validator) `v` observed first.
-* Let `get_latest_attestation_target(store: Store, validator: Validator) -> BeaconBlock` be the target block in the attestation `get_latest_attestation(store, validator)`.
+* Let `get_latest_attestation_target(store: Store, validator_index: ValidatorIndex) -> BeaconBlock` be the target block in the attestation `get_latest_attestation(store, validator_index)`.
* Let `get_children(store: Store, block: BeaconBlock) -> List[BeaconBlock]` return the child blocks of the given `block`.
* Let `justified_head_state` be the resulting `BeaconState` object from processing the chain up to the `justified_head`.
* The `head` is `lmd_ghost(store, justified_head_state, justified_head)` where the function `lmd_ghost` is defined below. Note that the implementation below is suboptimal; there are implementations that compute the head in time logarithmic in slot count.
@@ -1616,21 +1556,18 @@ def lmd_ghost(store: Store, start_state: BeaconState, start_block: BeaconBlock)
    Execute the LMD-GHOST algorithm to find the head ``BeaconBlock``.
    """
    validators = start_state.validator_registry
-    active_validators = [
-        validators[i]
-        for i in get_active_validator_indices(validators, start_state.slot)
-    ]
+    active_validator_indices = get_active_validator_indices(validators, start_state.slot)
    attestation_targets = [
-        get_latest_attestation_target(store, validator)
-        for validator in active_validators
+        (validator_index, get_latest_attestation_target(store, validator_index))
+        for validator_index in active_validator_indices
    ]

    def get_vote_count(block: BeaconBlock) -> int:
-        return len([
-            target
-            for target in attestation_targets
+        return sum(
+            get_effective_balance(start_state.validator_balances[validator_index]) // FORK_CHOICE_BALANCE_INCREMENT
+            for validator_index, target in attestation_targets
            if get_ancestor(store, target, block.slot) == block
-        ])
+        )

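The change above replaces one-validator-one-vote counting with balance-weighted counting. A toy illustration of just the weighting step, with blocks reduced to strings, a fixed fork-choice ancestor relation, and `FORK_CHOICE_BALANCE_INCREMENT` taken as 1 ETH in Gwei:

```python
FORK_CHOICE_BALANCE_INCREMENT = 10**9  # Gwei

validator_balances = [32 * 10**9, 16 * 10**9, 32 * 10**9]
# (validator_index, target_block) pairs standing in for attestation_targets.
attestation_targets = [(0, "B"), (1, "B"), (2, "C")]
# Toy ancestor relation: which side of the fork each target descends from.
ancestor_of = {"B": "B", "C": "C"}

def get_vote_count(block: str) -> int:
    return sum(
        validator_balances[validator_index] // FORK_CHOICE_BALANCE_INCREMENT
        for validator_index, target in attestation_targets
        if ancestor_of[target] == block
    )

print(get_vote_count("B"), get_vote_count("C"))  # 48 16: weighted by balance, not 2 vs 1
```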
    head = start_block
    while 1:
@@ -1662,7 +1599,7 @@ Below are the processing steps that happen at every slot.

#### Block roots

-* Let `previous_block_root` be the `tree_hash_root` of the previous beacon block processed in the chain.
+* Let `previous_block_root` be the `hash_tree_root` of the previous beacon block processed in the chain.
* Set `state.latest_block_roots[(state.slot - 1) % LATEST_BLOCK_ROOTS_LENGTH] = previous_block_root`.
* If `state.slot % LATEST_BLOCK_ROOTS_LENGTH == 0` append `merkle_root(state.latest_block_roots)` to `state.batched_block_roots`.
@@ -1678,17 +1615,17 @@ Below are the processing steps that happen at every `block`.

* Let `block_without_signature_root` be the `hash_tree_root` of `block` where `block.signature` is set to `EMPTY_SIGNATURE`.
* Let `proposal_root = hash_tree_root(ProposalSignedData(state.slot, BEACON_CHAIN_SHARD_NUMBER, block_without_signature_root))`.
-* Verify that `bls_verify(pubkey=state.validator_registry[get_beacon_proposer_index(state, state.slot)].pubkey, message=proposal_root, signature=block.signature, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_PROPOSAL))`.
+* Verify that `bls_verify(pubkey=state.validator_registry[get_beacon_proposer_index(state, state.slot)].pubkey, message_hash=proposal_root, signature=block.signature, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_PROPOSAL))`.

#### RANDAO

* Let `proposer = state.validator_registry[get_beacon_proposer_index(state, state.slot)]`.
-* Verify that `bls_verify(pubkey=proposer.pubkey, message=int_to_bytes32(get_current_epoch(state)), signature=block.randao_reveal, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_RANDAO))`.
+* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=int_to_bytes32(get_current_epoch(state)), signature=block.randao_reveal, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_RANDAO))`.
* Set `state.latest_randao_mixes[get_current_epoch(state) % LATEST_RANDAO_MIXES_LENGTH] = xor(get_randao_mix(state, get_current_epoch(state)), hash(block.randao_reveal))`.

#### Eth1 data

-* If `block.eth1_data` equals `eth1_data_vote.eth1_data` for some `eth1_data_vote` in `state.eth1_data_votes`, set `eth1_data_vote.vote_count += 1`.
+* If there exists an `eth1_data_vote` in `state.eth1_data_votes` for which `eth1_data_vote.eth1_data == block.eth1_data` (there will be at most one), set `eth1_data_vote.vote_count += 1`.
* Otherwise, append to `state.eth1_data_votes` a new `Eth1DataVote(eth1_data=block.eth1_data, vote_count=1)`.

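A small sketch of the Eth1 data vote bookkeeping described above, with `Eth1Data` reduced to a plain byte string and `Eth1DataVote` to a two-field dataclass (both simplifications):

```python
from dataclasses import dataclass

@dataclass
class Eth1DataVote:
    eth1_data: bytes
    vote_count: int

def process_eth1_data(eth1_data_votes: list, block_eth1_data: bytes) -> None:
    for eth1_data_vote in eth1_data_votes:
        if eth1_data_vote.eth1_data == block_eth1_data:
            eth1_data_vote.vote_count += 1   # there is at most one matching vote
            return
    eth1_data_votes.append(Eth1DataVote(eth1_data=block_eth1_data, vote_count=1))

votes = []
process_eth1_data(votes, b'root-A')
process_eth1_data(votes, b'root-A')
process_eth1_data(votes, b'root-B')
print([(v.eth1_data, v.vote_count) for v in votes])  # [(b'root-A', 2), (b'root-B', 1)]
```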
#### Operations
@@ -1704,8 +1641,8 @@ For each `proposer_slashing` in `block.body.proposer_slashings`:
* Verify that `proposer_slashing.proposal_data_1.shard == proposer_slashing.proposal_data_2.shard`.
* Verify that `proposer_slashing.proposal_data_1.block_root != proposer_slashing.proposal_data_2.block_root`.
* Verify that `proposer.penalized_epoch > get_current_epoch(state)`.
-* Verify that `bls_verify(pubkey=proposer.pubkey, message=hash_tree_root(proposer_slashing.proposal_data_1), signature=proposer_slashing.proposal_signature_1, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_1.slot), DOMAIN_PROPOSAL))`.
+* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=hash_tree_root(proposer_slashing.proposal_data_1), signature=proposer_slashing.proposal_signature_1, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_1.slot), DOMAIN_PROPOSAL))`.
-* Verify that `bls_verify(pubkey=proposer.pubkey, message=hash_tree_root(proposer_slashing.proposal_data_2), signature=proposer_slashing.proposal_signature_2, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_2.slot), DOMAIN_PROPOSAL))`.
+* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=hash_tree_root(proposer_slashing.proposal_data_2), signature=proposer_slashing.proposal_signature_2, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_data_2.slot), DOMAIN_PROPOSAL))`.
* Run `penalize_validator(state, proposer_slashing.proposer_index)`.

##### Attester slashings
@@ -1817,7 +1754,7 @@ For each `exit` in `block.body.exits`:
* Verify that `validator.exit_epoch > get_entry_exit_effect_epoch(get_current_epoch(state))`.
* Verify that `get_current_epoch(state) >= exit.epoch`.
* Let `exit_message = hash_tree_root(Exit(epoch=exit.epoch, validator_index=exit.validator_index, signature=EMPTY_SIGNATURE))`.
-* Verify that `bls_verify(pubkey=validator.pubkey, message=exit_message, signature=exit.signature, domain=get_domain(state.fork, exit.epoch, DOMAIN_EXIT))`.
+* Verify that `bls_verify(pubkey=validator.pubkey, message_hash=exit_message, signature=exit.signature, domain=get_domain(state.fork, exit.epoch, DOMAIN_EXIT))`.
* Run `initiate_validator_exit(state, exit.validator_index)`.

### Per-epoch processing
@@ -1827,36 +1764,36 @@ The steps below happen when `(state.slot + 1) % EPOCH_LENGTH == 0`.
#### Helpers

* Let `current_epoch = get_current_epoch(state)`.
-* Let `previous_epoch = current_epoch - 1 if current_epoch > GENESIS_EPOCH else current_epoch`.
+* Let `previous_epoch = get_previous_epoch(state)`.
* Let `next_epoch = current_epoch + 1`.

[Validators](#dfn-Validator) attesting during the current epoch:

-* Let `current_total_balance = sum([get_effective_balance(state, i) for i in get_active_validator_indices(state.validator_registry, current_epoch)])`.
+* Let `current_total_balance = get_total_balance(state, get_active_validator_indices(state.validator_registry, current_epoch))`.
* Let `current_epoch_attestations = [a for a in state.latest_attestations if current_epoch == slot_to_epoch(a.data.slot)]`. (Note: this is the set of attestations of slots in the epoch `current_epoch`, _not_ attestations that got included in the chain during the epoch `current_epoch`.)
* Validators justifying the epoch boundary block at the start of the current epoch:
    * Let `current_epoch_boundary_attestations = [a for a in current_epoch_attestations if a.data.epoch_boundary_root == get_block_root(state, get_epoch_start_slot(current_epoch)) and a.data.justified_epoch == state.justified_epoch]`.
    * Let `current_epoch_boundary_attester_indices` be the union of the [validator](#dfn-validator) index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in current_epoch_boundary_attestations]`.
-    * Let `current_epoch_boundary_attesting_balance = sum([get_effective_balance(state, i) for i in current_epoch_boundary_attester_indices])`.
+    * Let `current_epoch_boundary_attesting_balance = get_total_balance(state, current_epoch_boundary_attester_indices)`.

[Validators](#dfn-Validator) attesting during the previous epoch:

-* Let `previous_total_balance = sum([get_effective_balance(state, i) for i in get_active_validator_indices(state.validator_registry, previous_epoch)])`.
+* Let `previous_total_balance = get_total_balance(state, get_active_validator_indices(state.validator_registry, previous_epoch))`.
* Validators that made an attestation during the previous epoch:
    * Let `previous_epoch_attestations = [a for a in state.latest_attestations if previous_epoch == slot_to_epoch(a.data.slot)]`.
    * Let `previous_epoch_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_attestations]`.
* Validators targeting the previous justified slot:
    * Let `previous_epoch_justified_attestations = [a for a in current_epoch_attestations + previous_epoch_attestations if a.data.justified_epoch == state.previous_justified_epoch]`.
    * Let `previous_epoch_justified_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_justified_attestations]`.
-    * Let `previous_epoch_justified_attesting_balance = sum([get_effective_balance(state, i) for i in previous_epoch_justified_attester_indices])`.
+    * Let `previous_epoch_justified_attesting_balance = get_total_balance(state, previous_epoch_justified_attester_indices)`.
* Validators justifying the epoch boundary block at the start of the previous epoch:
    * Let `previous_epoch_boundary_attestations = [a for a in previous_epoch_justified_attestations if a.data.epoch_boundary_root == get_block_root(state, get_epoch_start_slot(previous_epoch))]`.
    * Let `previous_epoch_boundary_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_boundary_attestations]`.
-    * Let `previous_epoch_boundary_attesting_balance = sum([get_effective_balance(state, i) for i in previous_epoch_boundary_attester_indices])`.
+    * Let `previous_epoch_boundary_attesting_balance = get_total_balance(state, previous_epoch_boundary_attester_indices)`.
* Validators attesting to the expected beacon chain head during the previous epoch:
    * Let `previous_epoch_head_attestations = [a for a in previous_epoch_attestations if a.data.beacon_block_root == get_block_root(state, a.data.slot)]`.
    * Let `previous_epoch_head_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_head_attestations]`.
-    * Let `previous_epoch_head_attesting_balance = sum([get_effective_balance(state, i) for i in previous_epoch_head_attester_indices])`.
+    * Let `previous_epoch_head_attesting_balance = get_total_balance(state, previous_epoch_head_attester_indices)`.

**Note**: `previous_total_balance` and `previous_epoch_boundary_attesting_balance` might be marginally different than the actual balances during the previous epoch transition. Due to the tight bound on validator churn each epoch and small per-epoch rewards/penalties, the potential balance difference is very low and only marginally affects consensus safety.
@@ -1864,10 +1801,9 @@ For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_s

* Let `shard_block_root` be `state.latest_crosslinks[shard].shard_block_root`
* Let `attesting_validator_indices(crosslink_committee, shard_block_root)` be the union of the [validator](#dfn-validator) index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in current_epoch_attestations + previous_epoch_attestations if a.data.shard == shard and a.data.shard_block_root == shard_block_root]`.
-* Let `winning_root(crosslink_committee)` be equal to the value of `shard_block_root` such that `sum([get_effective_balance(state, i) for i in attesting_validator_indices(crosslink_committee, shard_block_root)])` is maximized (ties broken by favoring lower `shard_block_root` values).
+* Let `winning_root(crosslink_committee)` be equal to the value of `shard_block_root` such that `get_total_balance(state, attesting_validator_indices(crosslink_committee, shard_block_root))` is maximized (ties broken by favoring lower `shard_block_root` values).
* Let `attesting_validators(crosslink_committee)` be equal to `attesting_validator_indices(crosslink_committee, winning_root(crosslink_committee))` for convenience.
-* Let `total_attesting_balance(crosslink_committee) = sum([get_effective_balance(state, i) for i in attesting_validators(crosslink_committee)])`.
+* Let `total_attesting_balance(crosslink_committee) = get_total_balance(state, attesting_validators(crosslink_committee))`.
-* Let `total_balance(crosslink_committee) = sum([get_effective_balance(state, i) for i in crosslink_committee])`.

Define the following helpers to process attestation inclusion rewards and inclusion distance reward/penalty. For every attestation `a` in `previous_epoch_attestations`:
@@ -1906,7 +1842,7 @@ Finally, update the following:

For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(next_epoch))`, let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`. For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot`, compute:

-* Set `state.latest_crosslinks[shard] = Crosslink(epoch=current_epoch, shard_block_root=winning_root(crosslink_committee))` if `3 * total_attesting_balance(crosslink_committee) >= 2 * total_balance(crosslink_committee)`.
+* Set `state.latest_crosslinks[shard] = Crosslink(epoch=current_epoch, shard_block_root=winning_root(crosslink_committee))` if `3 * total_attesting_balance(crosslink_committee) >= 2 * get_total_balance(crosslink_committee)`.

#### Rewards and penalties
@@ -1954,7 +1890,7 @@ For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_s

* Let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`.
* For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot`:
-    * If `index in attesting_validators(crosslink_committee)`, `state.validator_balances[index] += base_reward(state, index) * total_attesting_balance(crosslink_committee) // total_balance(crosslink_committee))`.
+    * If `index in attesting_validators(crosslink_committee)`, `state.validator_balances[index] += base_reward(state, index) * total_attesting_balance(crosslink_committee) // get_total_balance(state, crosslink_committee))`.
    * If `index not in attesting_validators(crosslink_committee)`, `state.validator_balances[index] -= base_reward(state, index)`.

#### Ejections
@@ -1997,7 +1933,7 @@ def update_validator_registry(state: BeaconState) -> None:
    # The active validators
    active_validator_indices = get_active_validator_indices(state.validator_registry, current_epoch)
    # The total effective balance of active validators
-    total_balance = sum([get_effective_balance(state, i) for i in active_validator_indices])
+    total_balance = get_total_balance(state, active_validator_indices)

    # The maximum balance churn in Gwei (for deposits and exits separately)
    max_balance_churn = max(
@@ -371,7 +371,12 @@ return typ(**values), item_index

### Tree Hash

-The below `hash_tree_root` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For the final output only (ie. not intermediate outputs), if the output is less than 32 bytes, right-zero-pad it to 32 bytes. The goal is collision resistance *within* each type, not between types.
+The below `hash_tree_root_internal` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For use as a "final output" (eg. for signing), use `hash_tree_root(x) = zpad(hash_tree_root_internal(x), 32)`, where `zpad` is a helper that extends the given `bytes` value to the desired `length` by adding zero bytes on the right:
+
+```python
+def zpad(input: bytes, length: int) -> bytes:
+    return input + b'\x00' * (length - len(input))
+```

Refer to [the helper function `hash`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#hash) of Phase 0 of the [Eth2.0 specs](https://github.com/ethereum/eth2.0-specs) for a definition of the hash function used below, `hash(x)`.
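For instance, wrapping a short internal output with `zpad` to obtain the 32-byte final form:

```python
def zpad(input: bytes, length: int) -> bytes:
    return input + b'\x00' * (length - len(input))

# A 2-byte internal tree-hash output padded to the 32-byte "final output" form.
padded = zpad(b'\x05\x00', 32)
print(len(padded), padded.hex())  # 32, '0500' followed by 60 zero hex digits
```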
@@ -385,7 +390,7 @@ Return the hash of the serialization of the value.

#### List/Vectors

-First, we define some helpers and then the Merkle tree function.
+First, we define the Merkle tree function.

```python
# Merkle tree hash of a list of homogenous, non-empty items
@@ -401,7 +406,10 @@ def merkle_hash(lst):
        items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])

        # Build a list of chunks based on the number of items in the chunk
-        chunkz = [b''.join(lst[i:i+items_per_chunk]) for i in range(0, len(lst), items_per_chunk)]
+        chunkz = [
+            zpad(b''.join(lst[i:i + items_per_chunk]), SSZ_CHUNK_SIZE)
+            for i in range(0, len(lst), items_per_chunk)
+        ]
    else:
        # Leave large items alone
        chunkz = lst
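A small illustration of the new chunk construction, assuming `SSZ_CHUNK_SIZE = 128` and 48-byte items (so two items fit per chunk and every chunk is zero-padded by `zpad` up to the chunk size):

```python
SSZ_CHUNK_SIZE = 128  # assumed chunk size for this sketch

def zpad(input: bytes, length: int) -> bytes:
    return input + b'\x00' * (length - len(input))

lst = [bytes([i]) * 48 for i in range(3)]           # three 48-byte items
items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0])     # 2 items per 128-byte chunk

chunkz = [
    zpad(b''.join(lst[i:i + items_per_chunk]), SSZ_CHUNK_SIZE)
    for i in range(0, len(lst), items_per_chunk)
]
print([len(c) for c in chunkz])  # [128, 128]: each chunk is padded to SSZ_CHUNK_SIZE
```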
@@ -416,20 +424,20 @@ def merkle_hash(lst):
    return hash(chunkz[0] + datalen)
```

-To `hash_tree_root` a list, we simply do:
+To `hash_tree_root_internal` a list, we simply do:

```python
-return merkle_hash([hash_tree_root(item) for item in value])
+return merkle_hash([hash_tree_root_internal(item) for item in value])
```

-Where the inner `hash_tree_root` is a recursive application of the tree-hashing function (returning less than 32 bytes for short single values).
+Where the inner `hash_tree_root_internal` is a recursive application of the tree-hashing function (returning less than 32 bytes for short single values).

#### Container

Recursively tree hash the values in the container in the same order as the fields, and return the hash of the concatenation of the results.

```python
-return hash(b''.join([hash_tree_root(getattr(x, field)) for field in value.fields]))
+return hash(b''.join([hash_tree_root_internal(getattr(x, field)) for field in value.fields]))
```

## Implementations
@@ -95,7 +95,7 @@ The validator constructs their `withdrawal_credentials` via the following:

### Submit deposit

In phase 0, all incoming validator deposits originate from the Ethereum 1.0 PoW chain. Deposits are made to the [deposit contract](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#ethereum-10-deposit-contract) located at `DEPOSIT_CONTRACT_ADDRESS`.

To submit a deposit:
@@ -166,7 +166,7 @@ Set `block.randao_reveal = epoch_signature` where `epoch_signature` is defined a
```python
epoch_signature = bls_sign(
    privkey=validator.privkey,  # privkey stored locally, not in state
-    message=int_to_bytes32(slot_to_epoch(block.slot)),
+    message_hash=int_to_bytes32(slot_to_epoch(block.slot)),
    domain=get_domain(
        fork=fork,  # `fork` is the fork object at the slot `block.slot`
        epoch=slot_to_epoch(block.slot),
@@ -205,7 +205,7 @@ proposal_root = hash_tree_root(proposal_data)

signed_proposal_data = bls_sign(
    privkey=validator.privkey,  # privkey stored locally, not in state
-    message=proposal_root,
+    message_hash=proposal_root,
    domain=get_domain(
        fork=fork,  # `fork` is the fork object at the slot `block.slot`
        epoch=slot_to_epoch(block.slot),
@@ -321,7 +321,7 @@ attestation_message_to_sign = hash_tree_root(attestation_data_and_custody_bit)

signed_attestation_data = bls_sign(
    privkey=validator.privkey,  # privkey stored locally, not in state
-    message=attestation_message_to_sign,
+    message_hash=attestation_message_to_sign,
    domain=get_domain(
        fork=fork,  # `fork` is the fork object at the slot, `attestation_data.slot`
        epoch=slot_to_epoch(attestation_data.slot),
@@ -344,24 +344,48 @@ Either (2) or (3) occurs if (1) fails. The choice between (2) and (3) is determi
`get_crosslink_committees_at_slot` is designed to be able to query slots in the next epoch. When querying slots in the next epoch there are two options -- with and without a `registry_change` -- which is the optional third parameter of the function. The following helper can be used to get the potential crosslink committees in the next epoch for a given `validator_index`. This function returns a list of 2 shard committee tuples.

```python
-def get_next_epoch_crosslink_committees(state: BeaconState,
-                                        validator_index: ValidatorIndex) -> List[Tuple[ValidatorIndex], ShardNumber]:
+def get_next_epoch_committee_assignments(
+        state: BeaconState,
+        validator_index: ValidatorIndex) -> List[Tuple[List[ValidatorIndex], ShardNumber, SlotNumber, bool]]:
+    """
+    Return a list of the two possible committee assignments for ``validator_index`` at the next epoch.
+    Possible committee ``assignment`` is of the form (List[ValidatorIndex], ShardNumber, SlotNumber, bool).
+        * ``assignment[0]`` is the list of validators in the committee
+        * ``assignment[1]`` is the shard to which the committee is assigned
+        * ``assignment[2]`` is the slot at which the committee is assigned
+        * ``assignment[3]`` is a bool signalling if the validator is expected to propose
+            a beacon block at the assigned slot.
+    """
    current_epoch = get_current_epoch(state)
    next_epoch = current_epoch + 1
    next_epoch_start_slot = get_epoch_start_slot(next_epoch)
-    potential_committees = []
-    for validator_registry in [False, True]:
+    potential_assignments = []
+    for registry_change in [False, True]:
        for slot in range(next_epoch_start_slot, next_epoch_start_slot + EPOCH_LENGTH):
-            shard_committees = get_crosslink_committees_at_slot(state, slot, validator_registry)
-            selected_committees = [committee for committee in shard_committees if validator_index in committee[0]]
+            crosslink_committees = get_crosslink_committees_at_slot(
+                state,
+                slot,
+                registry_change=registry_change,
+            )
+            selected_committees = [
+                committee  # Tuple[List[ValidatorIndex], ShardNumber]
+                for committee in crosslink_committees
+                if validator_index in committee[0]
+            ]
            if len(selected_committees) > 0:
-                potential_assignments.append(selected_committees)
+                assignment = selected_committees[0]
+                assignment += (slot,)
+                first_committee_at_slot = crosslink_committees[0][0]  # List[ValidatorIndex]
+                is_proposer = first_committee_at_slot[slot % len(first_committee_at_slot)] == validator_index
+                assignment += (is_proposer,)
+
+                potential_assignments.append(assignment)
                break

    return potential_assignments
```

-`get_next_epoch_crosslink_committees` should be called at the beginning of each epoch to plan for the next epoch. A validator should always plan for both values of `registry_change` as a possibility unless the validator can concretely eliminate one of the options. Planning for a future shuffling involves noting at which slot one might have to attest and propose and also which shard one should begin syncing (in phase 1+).
+`get_next_epoch_committee_assignments` should be called at the beginning of each epoch to plan for the next epoch. A validator should always plan for both values of `registry_change` as a possibility unless the validator can concretely eliminate one of the options. Planning for a future shuffling involves noting at which slot one might have to attest and propose and also which shard one should begin syncing (in phase 1+).

## How to avoid slashing