diff --git a/LICENSE b/LICENSE new file mode 100644 index 000000000..670154e35 --- /dev/null +++ b/LICENSE @@ -0,0 +1,116 @@ +CC0 1.0 Universal + +Statement of Purpose + +The laws of most jurisdictions throughout the world automatically confer +exclusive Copyright and Related Rights (defined below) upon the creator and +subsequent owner(s) (each and all, an "owner") of an original work of +authorship and/or a database (each, a "Work"). + +Certain owners wish to permanently relinquish those rights to a Work for the +purpose of contributing to a commons of creative, cultural and scientific +works ("Commons") that the public can reliably and without fear of later +claims of infringement build upon, modify, incorporate in other works, reuse +and redistribute as freely as possible in any form whatsoever and for any +purposes, including without limitation commercial purposes. These owners may +contribute to the Commons to promote the ideal of a free culture and the +further production of creative, cultural and scientific works, or to gain +reputation or greater distribution for their Work in part through the use and +efforts of others. + +For these and/or other purposes and motivations, and without any expectation +of additional consideration or compensation, the person associating CC0 with a +Work (the "Affirmer"), to the extent that he or she is an owner of Copyright +and Related Rights in the Work, voluntarily elects to apply CC0 to the Work +and publicly distribute the Work under its terms, with knowledge of his or her +Copyright and Related Rights in the Work and the meaning and intended legal +effect of CC0 on those rights. + +1. Copyright and Related Rights. A Work made available under CC0 may be +protected by copyright and related or neighboring rights ("Copyright and +Related Rights"). Copyright and Related Rights include, but are not limited +to, the following: + + i. the right to reproduce, adapt, distribute, perform, display, communicate, + and translate a Work; + + ii. moral rights retained by the original author(s) and/or performer(s); + + iii. publicity and privacy rights pertaining to a person's image or likeness + depicted in a Work; + + iv. rights protecting against unfair competition in regards to a Work, + subject to the limitations in paragraph 4(a), below; + + v. rights protecting the extraction, dissemination, use and reuse of data in + a Work; + + vi. database rights (such as those arising under Directive 96/9/EC of the + European Parliament and of the Council of 11 March 1996 on the legal + protection of databases, and under any national implementation thereof, + including any amended or successor version of such directive); and + + vii. other similar, equivalent or corresponding rights throughout the world + based on applicable law or treaty, and any national implementations thereof. + +2. Waiver. 
To the greatest extent permitted by, but not in contravention of, +applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and +unconditionally waives, abandons, and surrenders all of Affirmer's Copyright +and Related Rights and associated claims and causes of action, whether now +known or unknown (including existing as well as future claims and causes of +action), in the Work (i) in all territories worldwide, (ii) for the maximum +duration provided by applicable law or treaty (including future time +extensions), (iii) in any current or future medium and for any number of +copies, and (iv) for any purpose whatsoever, including without limitation +commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes +the Waiver for the benefit of each member of the public at large and to the +detriment of Affirmer's heirs and successors, fully intending that such Waiver +shall not be subject to revocation, rescission, cancellation, termination, or +any other legal or equitable action to disrupt the quiet enjoyment of the Work +by the public as contemplated by Affirmer's express Statement of Purpose. + +3. Public License Fallback. Should any part of the Waiver for any reason be +judged legally invalid or ineffective under applicable law, then the Waiver +shall be preserved to the maximum extent permitted taking into account +Affirmer's express Statement of Purpose. In addition, to the extent the Waiver +is so judged Affirmer hereby grants to each affected person a royalty-free, +non transferable, non sublicensable, non exclusive, irrevocable and +unconditional license to exercise Affirmer's Copyright and Related Rights in +the Work (i) in all territories worldwide, (ii) for the maximum duration +provided by applicable law or treaty (including future time extensions), (iii) +in any current or future medium and for any number of copies, and (iv) for any +purpose whatsoever, including without limitation commercial, advertising or +promotional purposes (the "License"). The License shall be deemed effective as +of the date CC0 was applied by Affirmer to the Work. Should any part of the +License for any reason be judged legally invalid or ineffective under +applicable law, such partial invalidity or ineffectiveness shall not +invalidate the remainder of the License, and in such case Affirmer hereby +affirms that he or she will not (i) exercise any of his or her remaining +Copyright and Related Rights in the Work or (ii) assert any associated claims +and causes of action with respect to the Work, in either case contrary to +Affirmer's express Statement of Purpose. + +4. Limitations and Disclaimers. + + a. No trademark or patent rights held by Affirmer are waived, abandoned, + surrendered, licensed or otherwise affected by this document. + + b. Affirmer offers the Work as-is and makes no representations or warranties + of any kind concerning the Work, express, implied, statutory or otherwise, + including without limitation warranties of title, merchantability, fitness + for a particular purpose, non infringement, or the absence of latent or + other defects, accuracy, or the present or absence of errors, whether or not + discoverable, all to the greatest extent permissible under applicable law. + + c. Affirmer disclaims responsibility for clearing rights of other persons + that may apply to the Work or any use thereof, including without limitation + any person's Copyright and Related Rights in the Work. 
Further, Affirmer + disclaims responsibility for obtaining any necessary consents, permissions + or other rights required for any use of the Work. + + d. Affirmer understands and acknowledges that Creative Commons is not a + party to this document and has no duty or obligation with respect to this + CC0 or use of the Work. + +For more information, please see + diff --git a/specs/core/0_beacon-chain.md b/specs/core/0_beacon-chain.md index 6bff0f705..daa1bc108 100644 --- a/specs/core/0_beacon-chain.md +++ b/specs/core/0_beacon-chain.md @@ -21,40 +21,39 @@ - [Max transactions per block](#max-transactions-per-block) - [Signature domains](#signature-domains) - [Data structures](#data-structures) - - [Beacon chain transactions](#beacon-chain-transactions) - - [Proposer slashings](#proposer-slashings) - - [`ProposerSlashing`](#proposerslashing) - - [Attester slashings](#attester-slashings) - - [`AttesterSlashing`](#attesterslashing) - - [`SlashableAttestation`](#slashableattestation) - - [Attestations](#attestations) - - [`Attestation`](#attestation) - - [`AttestationData`](#attestationdata) - - [`AttestationDataAndCustodyBit`](#attestationdataandcustodybit) - - [Deposits](#deposits) - - [`Deposit`](#deposit) - - [`DepositData`](#depositdata) - - [`DepositInput`](#depositinput) - - [Voluntary exits](#voluntary-exits) - - [`VoluntaryExit`](#voluntaryexit) - - [Transfers](#transfers) - - [`Transfer`](#transfer) - - [Beacon chain blocks](#beacon-chain-blocks) - - [`BeaconBlock`](#beaconblock) - - [`BeaconBlockBody`](#beaconblockbody) - - [`Proposal`](#proposal) - - [Beacon chain state](#beacon-chain-state) - - [`BeaconState`](#beaconstate) - - [`Validator`](#validator) - - [`Crosslink`](#crosslink) - - [`PendingAttestation`](#pendingattestation) + - [Misc dependencies](#misc-dependencies) - [`Fork`](#fork) + - [`Crosslink`](#crosslink) - [`Eth1Data`](#eth1data) - [`Eth1DataVote`](#eth1datavote) + - [`AttestationData`](#attestationdata) + - [`AttestationDataAndCustodyBit`](#attestationdataandcustodybit) + - [`SlashableAttestation`](#slashableattestation) + - [`DepositInput`](#depositinput) + - [`DepositData`](#depositdata) + - [`BeaconBlockHeader`](#beaconblockheader) + - [`Validator`](#validator) + - [`PendingAttestation`](#pendingattestation) + - [`HistoricalBatch`](#historicalbatch) + - [Beacon transactions](#beacon-transactions) + - [`ProposerSlashing`](#proposerslashing) + - [`AttesterSlashing`](#attesterslashing) + - [`Attestation`](#attestation) + - [`Deposit`](#deposit) + - [`VoluntaryExit`](#voluntaryexit) + - [`Transfer`](#transfer) + - [Beacon blocks](#beacon-blocks) + - [`BeaconBlockBody`](#beaconblockbody) + - [`BeaconBlock`](#beaconblock) + - [Beacon state](#beacon-state) + - [`BeaconState`](#beaconstate) - [Custom Types](#custom-types) - [Helper functions](#helper-functions) + - [`xor`](#xor) - [`hash`](#hash) - [`hash_tree_root`](#hash_tree_root) + - [`signed_root`](#signed_root) + - [`get_temporary_block_header`](#get_temporary_block_header) - [`slot_to_epoch`](#slot_to_epoch) - [`get_previous_epoch`](#get_previous_epoch) - [`get_current_epoch`](#get_current_epoch) @@ -70,11 +69,12 @@ - [`get_next_epoch_committee_count`](#get_next_epoch_committee_count) - [`get_crosslink_committees_at_slot`](#get_crosslink_committees_at_slot) - [`get_block_root`](#get_block_root) + - [`get_state_root`](#get_state_root) - [`get_randao_mix`](#get_randao_mix) - [`get_active_index_root`](#get_active_index_root) - [`generate_seed`](#generate_seed) - 
[`get_beacon_proposer_index`](#get_beacon_proposer_index) - - [`merkle_root`](#merkle_root) + - [`verify_merkle_branch`](#verify_merkle_branch) - [`get_attestation_participants`](#get_attestation_participants) - [`is_power_of_two`](#is_power_of_two) - [`int_to_bytes1`, `int_to_bytes2`, ...](#int_to_bytes1-int_to_bytes2-) @@ -110,34 +110,33 @@ - [Beacon chain processing](#beacon-chain-processing) - [Beacon chain fork choice rule](#beacon-chain-fork-choice-rule) - [Beacon chain state transition function](#beacon-chain-state-transition-function) + - [State caching](#state-caching) + - [Per-epoch processing](#per-epoch-processing) + - [Helper functions](#helper-functions-1) + - [Justification](#justification) + - [Crosslinks](#crosslinks) + - [Eth1 data](#eth1-data-1) + - [Rewards and penalties](#rewards-and-penalties) + - [Justification and finalization](#justification-and-finalization) + - [Crosslinks](#crosslinks-1) + - [Apply rewards](#apply-rewards) + - [Ejections](#ejections) + - [Validator registry and shuffling seed data](#validator-registry-and-shuffling-seed-data) + - [Slashings and exit queue](#slashings-and-exit-queue) + - [Final updates](#final-updates) - [Per-slot processing](#per-slot-processing) - - [Slot](#slot) - - [Block roots](#block-roots) - [Per-block processing](#per-block-processing) - - [Slot](#slot-1) - - [Block signature](#block-signature) + - [Block header](#block-header) - [RANDAO](#randao) - [Eth1 data](#eth1-data) - [Transactions](#transactions) - - [Proposer slashings](#proposer-slashings-1) - - [Attester slashings](#attester-slashings-1) - - [Attestations](#attestations-1) - - [Deposits](#deposits-1) - - [Voluntary exits](#voluntary-exits-1) - - [Transfers](#transfers-1) - - [Per-epoch processing](#per-epoch-processing) - - [Helper variables](#helper-variables) - - [Eth1 data](#eth1-data-1) - - [Justification](#justification) - - [Crosslinks](#crosslinks) - - [Rewards and penalties](#rewards-and-penalties) - - [Justification and finalization](#justification-and-finalization) - - [Attestation inclusion](#attestation-inclusion) - - [Crosslinks](#crosslinks-1) - - [Ejections](#ejections) - - [Validator registry and shuffling seed data](#validator-registry-and-shuffling-seed-data) - - [Final updates](#final-updates) - - [State root verification](#state-root-verification) + - [Proposer slashings](#proposer-slashings) + - [Attester slashings](#attester-slashings) + - [Attestations](#attestations) + - [Deposits](#deposits) + - [Voluntary exits](#voluntary-exits) + - [Transfers](#transfers) + - [State root verification](#state-root-verification) - [References](#references) - [Normative](#normative) - [Informative](#informative) @@ -183,7 +182,6 @@ Code snippets appearing in `this style` are to be interpreted as Python code. | `SHARD_COUNT` | `2**10` (= 1,024) | | `TARGET_COMMITTEE_SIZE` | `2**7` (= 128) | | `MAX_BALANCE_CHURN_QUOTIENT` | `2**5` (= 32) | -| `BEACON_CHAIN_SHARD_NUMBER` | `2**64 - 1` | | `MAX_INDICES_PER_SLASHABLE_VOTE` | `2**12` (= 4,096) | | `MAX_EXIT_DEQUEUES_PER_EPOCH` | `2**2` (= 4) | | `SHUFFLE_ROUND_COUNT` | 90 | @@ -201,10 +199,10 @@ Code snippets appearing in `this style` are to be interpreted as Python code. 
| Name | Value | Unit | | - | - | :-: | -| `MIN_DEPOSIT_AMOUNT` | `2**0 * 1e9` (= 1,000,000,000) | Gwei | -| `MAX_DEPOSIT_AMOUNT` | `2**5 * 1e9` (= 32,000,000,000) | Gwei | -| `FORK_CHOICE_BALANCE_INCREMENT` | `2**0 * 1e9` (= 1,000,000,000) | Gwei | -| `EJECTION_BALANCE` | `2**4 * 1e9` (= 16,000,000,000) | Gwei | +| `MIN_DEPOSIT_AMOUNT` | `2**0 * 10**9` (= 1,000,000,000) | Gwei | +| `MAX_DEPOSIT_AMOUNT` | `2**5 * 10**9` (= 32,000,000,000) | Gwei | +| `FORK_CHOICE_BALANCE_INCREMENT` | `2**0 * 10**9` (= 1,000,000,000) | Gwei | +| `EJECTION_BALANCE` | `2**4 * 10**9` (= 16,000,000,000) | Gwei | ### Initial values @@ -231,13 +229,14 @@ Code snippets appearing in `this style` are to be interpreted as Python code. | `MIN_SEED_LOOKAHEAD` | `2**0` (= 1) | epochs | 6.4 minutes | | `ACTIVATION_EXIT_DELAY` | `2**2` (= 4) | epochs | 25.6 minutes | | `EPOCHS_PER_ETH1_VOTING_PERIOD` | `2**4` (= 16) | epochs | ~1.7 hours | +| `SLOTS_PER_HISTORICAL_ROOT` | `2**13` (= 8,192) | slots | ~13 hours | | `MIN_VALIDATOR_WITHDRAWABILITY_DELAY` | `2**8` (= 256) | epochs | ~27 hours | +| `PERSISTENT_COMMITTEE_PERIOD` | `2**11` (= 2,048) | epochs | 9 days | ### State list lengths | Name | Value | Unit | Duration | | - | - | :-: | :-: | -| `LATEST_BLOCK_ROOTS_LENGTH` | `2**13` (= 8,192) | slots | ~13 hours | | `LATEST_RANDAO_MIXES_LENGTH` | `2**13` (= 8,192) | epochs | ~36 days | | `LATEST_ACTIVE_INDEX_ROOTS_LENGTH` | `2**13` (= 8,192) | epochs | ~36 days | | `LATEST_SLASHED_EXIT_LENGTH` | `2**13` (= 8,192) | epochs | ~36 days | @@ -271,304 +270,31 @@ Code snippets appearing in `this style` are to be interpreted as Python code. | Name | Value | | - | - | -| `DOMAIN_DEPOSIT` | `0` | -| `DOMAIN_ATTESTATION` | `1` | -| `DOMAIN_PROPOSAL` | `2` | -| `DOMAIN_EXIT` | `3` | -| `DOMAIN_RANDAO` | `4` | +| `DOMAIN_BEACON_BLOCK` | `0` | +| `DOMAIN_RANDAO` | `1` | +| `DOMAIN_ATTESTATION` | `2` | +| `DOMAIN_DEPOSIT` | `3` | +| `DOMAIN_VOLUNTARY_EXIT` | `4` | | `DOMAIN_TRANSFER` | `5` | ## Data structures The following data structures are defined as [SimpleSerialize (SSZ)](https://github.com/ethereum/eth2.0-specs/blob/master/specs/simple-serialize.md) objects. -### Beacon chain transactions +The types are defined topologically to aid in facilitating an executable version of the spec. 
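As an illustration of what "executable" means here, the SSZ object definitions below can be mapped mechanically onto Python classes. The sketch below is not part of the spec: the `Container` base class and keyword-default construction are assumptions made purely for illustration.

```python
# Minimal sketch (assumption, not spec): turning an SSZ object definition such as
# `Fork` into a constructible Python class for an executable spec.
class Container:
    fields = {}

    def __init__(self, **kwargs):
        # Start from per-field defaults, then apply any overrides passed in.
        for name, default in self.fields.items():
            setattr(self, name, kwargs.get(name, default))

class Fork(Container):
    # Field names and types mirror the `Fork` definition below.
    fields = {
        'previous_version': b'\x00' * 4,  # 'bytes4'
        'current_version': b'\x00' * 4,   # 'bytes4'
        'epoch': 0,                       # 'uint64'
    }

# Topological ordering means `Fork` exists before the types that embed it,
# so nested defaults can be built without forward references.
genesis_fork = Fork(epoch=0)
```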
-#### Proposer slashings +### Misc dependencies -##### `ProposerSlashing` +#### `Fork` ```python { - # Proposer index - 'proposer_index': 'uint64', - # First proposal - 'proposal_1': Proposal, - # Second proposal - 'proposal_2': Proposal, -} -``` - -#### Attester slashings - -##### `AttesterSlashing` - -```python -{ - # First slashable attestation - 'slashable_attestation_1': SlashableAttestation, - # Second slashable attestation - 'slashable_attestation_2': SlashableAttestation, -} -``` - -##### `SlashableAttestation` - -```python -{ - # Validator indices - 'validator_indices': ['uint64'], - # Attestation data - 'data': AttestationData, - # Custody bitfield - 'custody_bitfield': 'bytes', - # Aggregate signature - 'aggregate_signature': 'bytes96', -} -``` - -#### Attestations - -##### `Attestation` - -```python -{ - # Attester aggregation bitfield - 'aggregation_bitfield': 'bytes', - # Attestation data - 'data': AttestationData, - # Custody bitfield - 'custody_bitfield': 'bytes', - # BLS aggregate signature - 'aggregate_signature': 'bytes96', -} -``` - -##### `AttestationData` - -```python -{ - # Slot number - 'slot': 'uint64', - # Shard number - 'shard': 'uint64', - # Root of the signed beacon block - 'beacon_block_root': 'bytes32', - # Root of the ancestor at the epoch boundary - 'epoch_boundary_root': 'bytes32', - # Data from the shard since the last attestation - 'crosslink_data_root': 'bytes32', - # Last crosslink - 'latest_crosslink': Crosslink, - # Last justified epoch in the beacon state - 'justified_epoch': 'uint64', - # Hash of the last justified beacon block - 'justified_block_root': 'bytes32', -} -``` - -##### `AttestationDataAndCustodyBit` - -```python -{ - # Attestation data - 'data': AttestationData, - # Custody bit - 'custody_bit': 'bool', -} -``` - -#### Deposits - -##### `Deposit` - -```python -{ - # Branch in the deposit tree - 'branch': ['bytes32'], - # Index in the deposit tree - 'index': 'uint64', - # Data - 'deposit_data': DepositData, -} -``` - -##### `DepositData` - -```python -{ - # Amount in Gwei - 'amount': 'uint64', - # Timestamp from deposit contract - 'timestamp': 'uint64', - # Deposit input - 'deposit_input': DepositInput, -} -``` - -##### `DepositInput` - -```python -{ - # BLS pubkey - 'pubkey': 'bytes48', - # Withdrawal credentials - 'withdrawal_credentials': 'bytes32', - # A BLS signature of this `DepositInput` - 'proof_of_possession': 'bytes96', -} -``` - -#### Voluntary exits - -##### `VoluntaryExit` - -```python -{ - # Minimum epoch for processing exit + # Previous fork version + 'previous_version': 'bytes4', + # Current fork version + 'current_version': 'bytes4', + # Fork epoch number 'epoch': 'uint64', - # Index of the exiting validator - 'validator_index': 'uint64', - # Validator signature - 'signature': 'bytes96', -} -``` - -#### Transfers - -##### `Transfer` - -```python -{ - # Sender index - 'from': 'uint64', - # Recipient index - 'to': 'uint64', - # Amount in Gwei - 'amount': 'uint64', - # Fee in Gwei for block proposer - 'fee': 'uint64', - # Inclusion slot - 'slot': 'uint64', - # Sender withdrawal pubkey - 'pubkey': 'bytes48', - # Sender signature - 'signature': 'bytes96', -} -``` - -### Beacon chain blocks - -#### `BeaconBlock` - -```python -{ - # Header - 'slot': 'uint64', - 'parent_root': 'bytes32', - 'state_root': 'bytes32', - 'randao_reveal': 'bytes96', - 'eth1_data': Eth1Data, - - # Body - 'body': BeaconBlockBody, - # Signature - 'signature': 'bytes96', -} -``` - -#### `BeaconBlockBody` - -```python -{ - 'proposer_slashings': 
[ProposerSlashing], - 'attester_slashings': [AttesterSlashing], - 'attestations': [Attestation], - 'deposits': [Deposit], - 'voluntary_exits': [VoluntaryExit], - 'transfers': [Transfer], -} -``` - -#### `Proposal` - -```python -{ - # Slot number - 'slot': 'uint64', - # Shard number (`BEACON_CHAIN_SHARD_NUMBER` for beacon chain) - 'shard': 'uint64', - # Block root - 'block_root': 'bytes32', - # Signature - 'signature': 'bytes96', -} -``` - -### Beacon chain state - -#### `BeaconState` - -```python -{ - # Misc - 'slot': 'uint64', - 'genesis_time': 'uint64', - 'fork': Fork, # For versioning hard forks - - # Validator registry - 'validator_registry': [Validator], - 'validator_balances': ['uint64'], - 'validator_registry_update_epoch': 'uint64', - - # Randomness and committees - 'latest_randao_mixes': ['bytes32'], - 'previous_shuffling_start_shard': 'uint64', - 'current_shuffling_start_shard': 'uint64', - 'previous_shuffling_epoch': 'uint64', - 'current_shuffling_epoch': 'uint64', - 'previous_shuffling_seed': 'bytes32', - 'current_shuffling_seed': 'bytes32', - - # Finality - 'previous_justified_epoch': 'uint64', - 'justified_epoch': 'uint64', - 'justification_bitfield': 'uint64', - 'finalized_epoch': 'uint64', - - # Recent state - 'latest_crosslinks': [Crosslink], - 'latest_block_roots': ['bytes32'], - 'latest_active_index_roots': ['bytes32'], - 'latest_slashed_balances': ['uint64'], # Balances slashed at every withdrawal period - 'latest_attestations': [PendingAttestation], - 'batched_block_roots': ['bytes32'], - - # Ethereum 1.0 chain data - 'latest_eth1_data': Eth1Data, - 'eth1_data_votes': [Eth1DataVote], - 'deposit_index': 'uint64' -} -``` - -#### `Validator` - -```python -{ - # BLS public key - 'pubkey': 'bytes48', - # Withdrawal credentials - 'withdrawal_credentials': 'bytes32', - # Epoch when validator activated - 'activation_epoch': 'uint64', - # Epoch when validator exited - 'exit_epoch': 'uint64', - # Epoch when validator is eligible to withdraw - 'withdrawable_epoch': 'uint64', - # Did the validator initiate an exit - 'initiated_exit': 'bool', - # Was the validator slashed - 'slashed': 'bool', } ``` @@ -583,34 +309,6 @@ The following data structures are defined as [SimpleSerialize (SSZ)](https://git } ``` -#### `PendingAttestation` - -```python -{ - # Attester aggregation bitfield - 'aggregation_bitfield': 'bytes', - # Attestation data - 'data': AttestationData, - # Custody bitfield - 'custody_bitfield': 'bytes', - # Inclusion slot - 'inclusion_slot': 'uint64', -} -``` - -#### `Fork` - -```python -{ - # Previous fork version - 'previous_version': 'uint64', - # Current fork version - 'current_version': 'uint64', - # Fork epoch number - 'epoch': 'uint64', -} -``` - #### `Eth1Data` ```python @@ -633,6 +331,307 @@ The following data structures are defined as [SimpleSerialize (SSZ)](https://git } ``` +#### `AttestationData` + +```python +{ + # LMD GHOST vote + 'slot': 'uint64', + 'beacon_block_root': 'bytes32', + + # FFG vote + 'source_epoch': 'uint64', + 'source_root': 'bytes32', + 'target_root': 'bytes32', + + # Crosslink vote + 'shard': 'uint64', + 'previous_crosslink': Crosslink, + 'crosslink_data_root': 'bytes32', +} +``` + +#### `AttestationDataAndCustodyBit` + +```python +{ + # Attestation data + 'data': AttestationData, + # Custody bit + 'custody_bit': 'bool', +} +``` + +#### `SlashableAttestation` + +```python +{ + # Validator indices + 'validator_indices': ['uint64'], + # Attestation data + 'data': AttestationData, + # Custody bitfield + 'custody_bitfield': 'bytes', + # 
Aggregate signature + 'aggregate_signature': 'bytes96', +} +``` + +#### `DepositInput` + +```python +{ + # BLS pubkey + 'pubkey': 'bytes48', + # Withdrawal credentials + 'withdrawal_credentials': 'bytes32', + # A BLS signature of this `DepositInput` + 'proof_of_possession': 'bytes96', +} +``` + +#### `DepositData` + +```python +{ + # Amount in Gwei + 'amount': 'uint64', + # Timestamp from deposit contract + 'timestamp': 'uint64', + # Deposit input + 'deposit_input': DepositInput, +} +``` + +#### `BeaconBlockHeader` + +```python +{ + 'slot': 'uint64', + 'previous_block_root': 'bytes32', + 'state_root': 'bytes32', + 'block_body_root': 'bytes32', + 'signature': 'bytes96', +} +``` + +#### `Validator` + +```python +{ + # BLS public key + 'pubkey': 'bytes48', + # Withdrawal credentials + 'withdrawal_credentials': 'bytes32', + # Epoch when validator activated + 'activation_epoch': 'uint64', + # Epoch when validator exited + 'exit_epoch': 'uint64', + # Epoch when validator is eligible to withdraw + 'withdrawable_epoch': 'uint64', + # Did the validator initiate an exit + 'initiated_exit': 'bool', + # Was the validator slashed + 'slashed': 'bool', +} +``` + +#### `PendingAttestation` + +```python +{ + # Attester aggregation bitfield + 'aggregation_bitfield': 'bytes', + # Attestation data + 'data': AttestationData, + # Custody bitfield + 'custody_bitfield': 'bytes', + # Inclusion slot + 'inclusion_slot': 'uint64', +} +``` + +#### `HistoricalBatch` + +```python +{ + # Block roots + 'block_roots': ['bytes32', SLOTS_PER_HISTORICAL_ROOT], + # State roots + 'state_roots': ['bytes32', SLOTS_PER_HISTORICAL_ROOT], +} +``` + +### Beacon transactions + +#### `ProposerSlashing` + +```python +{ + # Proposer index + 'proposer_index': 'uint64', + # First block header + 'header_1': BeaconBlockHeader, + # Second block header + 'header_2': BeaconBlockHeader, +} +``` + +#### `AttesterSlashing` + +```python +{ + # First slashable attestation + 'slashable_attestation_1': SlashableAttestation, + # Second slashable attestation + 'slashable_attestation_2': SlashableAttestation, +} +``` + +#### `Attestation` + +```python +{ + # Attester aggregation bitfield + 'aggregation_bitfield': 'bytes', + # Attestation data + 'data': AttestationData, + # Custody bitfield + 'custody_bitfield': 'bytes', + # BLS aggregate signature + 'aggregate_signature': 'bytes96', +} +``` + +#### `Deposit` + +```python +{ + # Branch in the deposit tree + 'proof': ['bytes32', DEPOSIT_CONTRACT_TREE_DEPTH], + # Index in the deposit tree + 'index': 'uint64', + # Data + 'deposit_data': DepositData, +} +``` + +#### `VoluntaryExit` + +```python +{ + # Minimum epoch for processing exit + 'epoch': 'uint64', + # Index of the exiting validator + 'validator_index': 'uint64', + # Validator signature + 'signature': 'bytes96', +} +``` + +#### `Transfer` + +```python +{ + # Sender index + 'sender': 'uint64', + # Recipient index + 'recipient': 'uint64', + # Amount in Gwei + 'amount': 'uint64', + # Fee in Gwei for block proposer + 'fee': 'uint64', + # Inclusion slot + 'slot': 'uint64', + # Sender withdrawal pubkey + 'pubkey': 'bytes48', + # Sender signature + 'signature': 'bytes96', +} +``` + +### Beacon blocks + +#### `BeaconBlockBody` + +```python +{ + 'randao_reveal': 'bytes96', + 'eth1_data': Eth1Data, + 'proposer_slashings': [ProposerSlashing], + 'attester_slashings': [AttesterSlashing], + 'attestations': [Attestation], + 'deposits': [Deposit], + 'voluntary_exits': [VoluntaryExit], + 'transfers': [Transfer], +} +``` + +#### `BeaconBlock` + +```python +{ + # Header 
+ 'slot': 'uint64', + 'previous_block_root': 'bytes32', + 'state_root': 'bytes32', + 'body': BeaconBlockBody, + 'signature': 'bytes96', +} +``` + +### Beacon state + +#### `BeaconState` + +```python +{ + # Misc + 'slot': 'uint64', + 'genesis_time': 'uint64', + 'fork': Fork, # For versioning hard forks + + # Validator registry + 'validator_registry': [Validator], + 'validator_balances': ['uint64'], + 'validator_registry_update_epoch': 'uint64', + + # Randomness and committees + 'latest_randao_mixes': ['bytes32', LATEST_RANDAO_MIXES_LENGTH], + 'previous_shuffling_start_shard': 'uint64', + 'current_shuffling_start_shard': 'uint64', + 'previous_shuffling_epoch': 'uint64', + 'current_shuffling_epoch': 'uint64', + 'previous_shuffling_seed': 'bytes32', + 'current_shuffling_seed': 'bytes32', + + # Finality + 'previous_epoch_attestations': [PendingAttestation], + 'current_epoch_attestations': [PendingAttestation], + 'previous_justified_epoch': 'uint64', + 'current_justified_epoch': 'uint64', + 'previous_justified_root': 'bytes32', + 'current_justified_root': 'bytes32', + 'justification_bitfield': 'uint64', + 'finalized_epoch': 'uint64', + 'finalized_root': 'bytes32', + + # Recent state + 'latest_crosslinks': [Crosslink, SHARD_COUNT], + 'latest_block_roots': ['bytes32', SLOTS_PER_HISTORICAL_ROOT], + 'latest_state_roots': ['bytes32', SLOTS_PER_HISTORICAL_ROOT], + 'latest_active_index_roots': ['bytes32', LATEST_ACTIVE_INDEX_ROOTS_LENGTH], + 'latest_slashed_balances': ['uint64', LATEST_SLASHED_EXIT_LENGTH], # Balances slashed at every withdrawal period + 'latest_block_header': BeaconBlockHeader, # `latest_block_header.state_root == ZERO_HASH` temporarily + 'historical_roots': ['bytes32'], + + # Ethereum 1.0 chain data + 'latest_eth1_data': Eth1Data, + 'eth1_data_votes': [Eth1DataVote], + 'deposit_index': 'uint64' +} +``` + ## Custom Types We define the following Python custom types for type hinting and readability: @@ -652,6 +651,13 @@ We define the following Python custom types for type hinting and readability: Note: The definitions below are for specification purposes and are not necessarily optimal implementations. +### `xor` + +```python +def xor(bytes1: Bytes32, bytes2: Bytes32) -> Bytes32: + return bytes(a ^ b for a, b in zip(bytes1, bytes2)) +``` + ### `hash` The hash function is denoted by `hash`. In Phase 0 the beacon chain is deployed with the same hash function as Ethereum 1.0, i.e. Keccak-256 (also incorrectly known as SHA3). @@ -666,6 +672,22 @@ Note: We aim to migrate to a S[T/N]ARK-friendly hash function in a future Ethere `def signed_root(object: SSZContainer) -> Bytes32` is a function defined in the [SimpleSerialize spec](https://github.com/ethereum/eth2.0-specs/blob/master/specs/simple-serialize.md#signed-roots) to compute signed messages. +### `get_temporary_block_header` + +```python +def get_temporary_block_header(block: BeaconBlock) -> BeaconBlockHeader: + """ + Return the block header corresponding to a block with ``state_root`` set to ``ZERO_HASH``. + """ + return BeaconBlockHeader( + slot=block.slot, + previous_block_root=block.previous_block_root, + state_root=ZERO_HASH, + block_body_root=hash_tree_root(block.body), + signature=block.signature, + ) +``` + ### `slot_to_epoch` ```python @@ -683,7 +705,7 @@ def get_previous_epoch(state: BeaconState) -> Epoch: """` Return the previous epoch of the given ``state``. 
""" - return max(get_current_epoch(state) - 1, GENESIS_EPOCH) + return get_current_epoch(state) - 1 ``` ### `get_current_epoch` @@ -786,7 +808,7 @@ def get_epoch_committee_count(active_validator_count: int) -> int: ```python def get_shuffling(seed: Bytes32, validators: List[Validator], - epoch: Epoch) -> List[List[ValidatorIndex]] + epoch: Epoch) -> List[List[ValidatorIndex]]: """ Shuffle active validators and split into crosslink committees. Return a list of committees (each a list of validator indices). @@ -876,19 +898,22 @@ def get_crosslink_committees_at_slot(state: BeaconState, shuffling_epoch = state.previous_shuffling_epoch shuffling_start_shard = state.previous_shuffling_start_shard elif epoch == next_epoch: - current_committees_per_epoch = get_current_epoch_committee_count(state) - committees_per_epoch = get_next_epoch_committee_count(state) - shuffling_epoch = next_epoch - epochs_since_last_registry_update = current_epoch - state.validator_registry_update_epoch if registry_change: + committees_per_epoch = get_next_epoch_committee_count(state) seed = generate_seed(state, next_epoch) + shuffling_epoch = next_epoch + current_committees_per_epoch = get_current_epoch_committee_count(state) shuffling_start_shard = (state.current_shuffling_start_shard + current_committees_per_epoch) % SHARD_COUNT elif epochs_since_last_registry_update > 1 and is_power_of_two(epochs_since_last_registry_update): + committees_per_epoch = get_next_epoch_committee_count(state) seed = generate_seed(state, next_epoch) + shuffling_epoch = next_epoch shuffling_start_shard = state.current_shuffling_start_shard else: + committees_per_epoch = get_current_epoch_committee_count(state) seed = state.current_shuffling_seed + shuffling_epoch = state.current_shuffling_epoch shuffling_start_shard = state.current_shuffling_start_shard shuffling = get_shuffling( @@ -917,13 +942,23 @@ def get_block_root(state: BeaconState, """ Return the block root at a recent ``slot``. """ - assert state.slot <= slot + LATEST_BLOCK_ROOTS_LENGTH - assert slot < state.slot - return state.latest_block_roots[slot % LATEST_BLOCK_ROOTS_LENGTH] + assert slot < state.slot <= slot + SLOTS_PER_HISTORICAL_ROOT + return state.latest_block_roots[slot % SLOTS_PER_HISTORICAL_ROOT] ``` `get_block_root(_, s)` should always return `hash_tree_root` of the block in the beacon chain at slot `s`, and `get_crosslink_committees_at_slot(_, s)` should not change unless the [validator](#dfn-validator) registry changes. +### `get_state_root` + +```python +def get_state_root(state: BeaconState, + slot: Slot) -> Bytes32: + """ + Return the state root at a recent ``slot``. + """ + assert slot < state.slot <= slot + SLOTS_PER_HISTORICAL_ROOT + return state.latest_state_roots[slot % SLOTS_PER_HISTORICAL_ROOT] +``` ### `get_randao_mix` ```python @@ -967,26 +1002,37 @@ def generate_seed(state: BeaconState, ```python def get_beacon_proposer_index(state: BeaconState, - slot: Slot) -> ValidatorIndex: + slot: Slot, + registry_change: bool=False) -> ValidatorIndex: """ Return the beacon proposer index for the ``slot``. 
""" - first_committee, _ = get_crosslink_committees_at_slot(state, slot)[0] - return first_committee[slot % len(first_committee)] + epoch = slot_to_epoch(slot) + current_epoch = get_current_epoch(state) + previous_epoch = get_previous_epoch(state) + next_epoch = current_epoch + 1 + + assert previous_epoch <= epoch <= next_epoch + + first_committee, _ = get_crosslink_committees_at_slot(state, slot, registry_change)[0] + return first_committee[epoch % len(first_committee)] ``` -### `merkle_root` +### `verify_merkle_branch` ```python -def merkle_root(values: List[Bytes32]) -> Bytes32: +def verify_merkle_branch(leaf: Bytes32, proof: List[Bytes32], depth: int, index: int, root: Bytes32) -> bool: """ - Merkleize ``values`` (where ``len(values)`` is a power of two) and return the Merkle root. - Note that the leaves are not hashed. + Verify that the given ``leaf`` is on the merkle branch ``proof`` + starting with the given ``root``. """ - o = [0] * len(values) + values - for i in range(len(values) - 1, 0, -1): - o[i] = hash(o[i * 2] + o[i * 2 + 1]) - return o[1] + value = leaf + for i in range(depth): + if index // (2**i) % 2: + value = hash(proof[i] + value) + else: + value = hash(value + proof[i]) + return value == root ``` ### `get_attestation_participants` @@ -1039,7 +1085,7 @@ def bytes_to_int(data: bytes) -> int: ### `get_effective_balance` ```python -def get_effective_balance(state: State, index: ValidatorIndex) -> Gwei: +def get_effective_balance(state: BeaconState, index: ValidatorIndex) -> Gwei: """ Return the effective balance (also known as "balance at stake") for a validator with the given ``index``. """ @@ -1051,7 +1097,7 @@ def get_effective_balance(state: State, index: ValidatorIndex) -> Gwei: ```python def get_total_balance(state: BeaconState, validators: List[ValidatorIndex]) -> Gwei: """ - Return the combined effective balance of an array of validators. + Return the combined effective balance of an array of ``validators``. """ return sum([get_effective_balance(state, i) for i in validators]) ``` @@ -1060,7 +1106,7 @@ def get_total_balance(state: BeaconState, validators: List[ValidatorIndex]) -> G ```python def get_fork_version(fork: Fork, - epoch: Epoch) -> int: + epoch: Epoch) -> bytes: """ Return the fork version of the given ``epoch``. """ @@ -1079,8 +1125,7 @@ def get_domain(fork: Fork, """ Get the domain number that represents the fork meta and signature domain. """ - fork_version = get_fork_version(fork, epoch) - return fork_version * 2**32 + domain_type + return bytes_to_int(get_fork_version(fork, epoch) + int_to_bytes4(domain_type)) ``` ### `get_bitfield_bit` @@ -1177,8 +1222,8 @@ def is_surround_vote(attestation_data_1: AttestationData, """ Check if ``attestation_data_1`` surrounds ``attestation_data_2``. 
""" - source_epoch_1 = attestation_data_1.justified_epoch - source_epoch_2 = attestation_data_2.justified_epoch + source_epoch_1 = attestation_data_1.source_epoch + source_epoch_2 = attestation_data_2.source_epoch target_epoch_1 = slot_to_epoch(attestation_data_1.slot) target_epoch_2 = slot_to_epoch(attestation_data_2.slot) @@ -1235,19 +1280,29 @@ def process_deposit(state: BeaconState, deposit: Deposit) -> None: """ deposit_input = deposit.deposit_data.deposit_input - proof_is_valid = bls_verify( - pubkey=deposit_input.pubkey, - message_hash=signed_root(deposit_input, "proof_of_possession"), - signature=deposit_input.proof_of_possession, - domain=get_domain( - state.fork, - get_current_epoch(state), - DOMAIN_DEPOSIT, - ) + # Should equal 8 bytes for deposit_data.amount + + # 8 bytes for deposit_data.timestamp + + # 176 bytes for deposit_data.deposit_input + # It should match the deposit_data in the eth1.0 deposit contract + serialized_deposit_data = serialize(deposit.deposit_data) + # Deposits must be processed in order + assert deposit.index == state.deposit_index + + # Verify the Merkle branch + merkle_branch_is_valid = verify_merkle_branch( + leaf=hash(serialized_deposit_data), + proof=deposit.proof, + depth=DEPOSIT_CONTRACT_TREE_DEPTH, + index=deposit.index, + root=state.latest_eth1_data.deposit_root, ) - - if not proof_is_valid: - return + assert merkle_branch_is_valid + + # Increment the next deposit index we are expecting. Note that this + # needs to be done here because while the deposit contract will never + # create an invalid Merkle branch, it may admit an invalid deposit + # object, and we need to be able to skip over it + state.deposit_index += 1 validator_pubkeys = [v.pubkey for v in state.validator_registry] pubkey = deposit_input.pubkey @@ -1255,6 +1310,20 @@ def process_deposit(state: BeaconState, deposit: Deposit) -> None: withdrawal_credentials = deposit_input.withdrawal_credentials if pubkey not in validator_pubkeys: + # Verify the proof of possession + proof_is_valid = bls_verify( + pubkey=deposit_input.pubkey, + message_hash=signed_root(deposit_input), + signature=deposit_input.proof_of_possession, + domain=get_domain( + state.fork, + get_current_epoch(state), + DOMAIN_DEPOSIT, + ) + ) + if not proof_is_valid: + return + # Add new validator validator = Validator( pubkey=pubkey, @@ -1271,10 +1340,7 @@ def process_deposit(state: BeaconState, deposit: Deposit) -> None: state.validator_balances.append(amount) else: # Increase balance by deposit amount - index = validator_pubkeys.index(pubkey) - assert state.validator_registry[index].withdrawal_credentials == withdrawal_credentials - - state.validator_balances[index] += amount + state.validator_balances[validator_pubkeys.index(pubkey)] += amount ``` ### Routines for updating validator status @@ -1315,12 +1381,13 @@ def exit_validator(state: BeaconState, index: ValidatorIndex) -> None: Note that this function mutates ``state``. 
""" validator = state.validator_registry[index] + delayed_activation_exit_epoch = get_delayed_activation_exit_epoch(get_current_epoch(state)) # The following updates only occur if not previous exited - if validator.exit_epoch <= get_delayed_activation_exit_epoch(get_current_epoch(state)): + if validator.exit_epoch <= delayed_activation_exit_epoch: return - - validator.exit_epoch = get_delayed_activation_exit_epoch(get_current_epoch(state)) + else: + validator.exit_epoch = delayed_activation_exit_epoch ``` #### `slash_validator` @@ -1401,35 +1468,47 @@ For convenience, we provide the interface to the contract here: ## On genesis -A valid block with slot `GENESIS_SLOT` (a "genesis block") has the following values. Other validity rules (e.g. requiring a signature) do not apply. +When enough full deposits have been made to the deposit contract, an `Eth2Genesis` log is emitted. Construct a corresponding `genesis_state` and `genesis_block` as follows: + +* Let `genesis_validator_deposits` be the list of deposits, ordered chronologically, up to and including the deposit that triggered the `Eth2Genesis` log. +* Let `genesis_time` be the timestamp specified in the `Eth2Genesis` log. +* Let `genesis_eth1_data` be the `Eth1Data` object where: + * `genesis_eth1_data.deposit_root` is the `deposit_root` contained in the `Eth2Genesis` log. + * `genesis_eth1_data.block_hash` is the hash of the Ethereum 1.0 block that emitted the `Eth2Genesis` log. +* Let `genesis_state = get_genesis_beacon_state(genesis_validator_deposits, genesis_time, genesis_eth1_data)`. +* Let `genesis_block = get_empty_block()`. +* Set `genesis_block.state_root = hash_tree_root(genesis_state)`. ```python -{ - slot=GENESIS_SLOT, - parent_root=ZERO_HASH, - state_root=GENESIS_STATE_ROOT, - randao_reveal=EMPTY_SIGNATURE, - eth1_data=Eth1Data( - deposit_root=ZERO_HASH, - block_hash=ZERO_HASH - ), - signature=EMPTY_SIGNATURE, - body=BeaconBlockBody( - proposer_slashings=[], - attester_slashings=[], - attestations=[], - deposits=[], - exits=[], - ), -} +def get_empty_block() -> BeaconBlock: + """ + Get an empty ``BeaconBlock``. + """ + return BeaconBlock( + slot=GENESIS_SLOT, + previous_block_root=ZERO_HASH, + state_root=ZERO_HASH, + body=BeaconBlockBody( + randao_reveal=EMPTY_SIGNATURE, + eth1_data=Eth1Data( + deposit_root=ZERO_HASH, + block_hash=ZERO_HASH, + ), + proposer_slashings=[], + attester_slashings=[], + attestations=[], + deposits=[], + voluntary_exits=[], + transfers=[], + ), + signature=EMPTY_SIGNATURE, + ) ``` -`GENESIS_STATE_ROOT` (in the above "genesis block") is generated from the `get_genesis_beacon_state` function below. When enough full deposits have been made to the deposit contract and the `Eth2Genesis` log has been emitted, `get_genesis_beacon_state` will execute to compute the `hash_tree_root` of `BeaconState`. - ```python def get_genesis_beacon_state(genesis_validator_deposits: List[Deposit], genesis_time: int, - latest_eth1_data: Eth1Data) -> BeaconState: + genesis_eth1_data: Eth1Data) -> BeaconState: """ Get the genesis ``BeaconState``. 
""" @@ -1438,8 +1517,8 @@ def get_genesis_beacon_state(genesis_validator_deposits: List[Deposit], slot=GENESIS_SLOT, genesis_time=genesis_time, fork=Fork( - previous_version=GENESIS_FORK_VERSION, - current_version=GENESIS_FORK_VERSION, + previous_version=int_to_bytes4(GENESIS_FORK_VERSION), + current_version=int_to_bytes4(GENESIS_FORK_VERSION), epoch=GENESIS_EPOCH, ), @@ -1449,7 +1528,7 @@ def get_genesis_beacon_state(genesis_validator_deposits: List[Deposit], validator_registry_update_epoch=GENESIS_EPOCH, # Randomness and committees - latest_randao_mixes=[EMPTY_SIGNATURE for _ in range(LATEST_RANDAO_MIXES_LENGTH)], + latest_randao_mixes=[ZERO_HASH for _ in range(LATEST_RANDAO_MIXES_LENGTH)], previous_shuffling_start_shard=GENESIS_START_SHARD, current_shuffling_start_shard=GENESIS_START_SHARD, previous_shuffling_epoch=GENESIS_EPOCH, @@ -1458,23 +1537,29 @@ def get_genesis_beacon_state(genesis_validator_deposits: List[Deposit], current_shuffling_seed=ZERO_HASH, # Finality + previous_epoch_attestations=[], + current_epoch_attestations=[], previous_justified_epoch=GENESIS_EPOCH, - justified_epoch=GENESIS_EPOCH, + current_justified_epoch=GENESIS_EPOCH, + previous_justified_root=ZERO_HASH, + current_justified_root=ZERO_HASH, justification_bitfield=0, finalized_epoch=GENESIS_EPOCH, + finalized_root=ZERO_HASH, # Recent state latest_crosslinks=[Crosslink(epoch=GENESIS_EPOCH, crosslink_data_root=ZERO_HASH) for _ in range(SHARD_COUNT)], - latest_block_roots=[ZERO_HASH for _ in range(LATEST_BLOCK_ROOTS_LENGTH)], + latest_block_roots=[ZERO_HASH for _ in range(SLOTS_PER_HISTORICAL_ROOT)], + latest_state_roots=[ZERO_HASH for _ in range(SLOTS_PER_HISTORICAL_ROOT)], latest_active_index_roots=[ZERO_HASH for _ in range(LATEST_ACTIVE_INDEX_ROOTS_LENGTH)], latest_slashed_balances=[0 for _ in range(LATEST_SLASHED_EXIT_LENGTH)], - latest_attestations=[], - batched_block_roots=[], + latest_block_header=get_temporary_block_header(get_empty_block()), + historical_roots=[], # Ethereum 1.0 chain data - latest_eth1_data=latest_eth1_data, + latest_eth1_data=genesis_eth1_data, eth1_data_votes=[], - deposit_index=len(genesis_validator_deposits) + deposit_index=0, ) # Process genesis deposits @@ -1506,7 +1591,7 @@ Processing the beacon chain is similar to processing the Ethereum 1.0 chain. Cli For a beacon chain block, `block`, to be processed by a node, the following conditions must be met: -* The parent block with root `block.parent_root` has been processed and accepted. +* The parent block with root `block.previous_block_root` has been processed and accepted. * An Ethereum 1.0 block pointed to by the `state.latest_eth1_data.block_hash` has been processed and accepted. * The node's Unix time is greater than or equal to `state.genesis_time + (block.slot - GENESIS_SLOT) * SECONDS_PER_SLOT`. (Note that leap seconds mean that slots will occasionally last `SECONDS_PER_SLOT + 1` or `SECONDS_PER_SLOT - 1` seconds, possibly several times a year.) @@ -1566,344 +1651,396 @@ def lmd_ghost(store: Store, start_state: BeaconState, start_block: BeaconBlock) children = get_children(store, head) if len(children) == 0: return head - head = max(children, key=get_vote_count) + head = max(children, key=lambda x: (get_vote_count(x), hash_tree_root(x))) ``` ## Beacon chain state transition function -We now define the state transition function. At a high level the state transition is made up of three parts: +We now define the state transition function. At a high level the state transition is made up of four parts: -1. 
The per-slot transitions, which happens at the start of every slot. -2. The per-block transitions, which happens at every block. -3. The per-epoch transitions, which happens at the end of the last slot of every epoch (i.e. `(state.slot + 1) % SLOTS_PER_EPOCH == 0`). +1. State caching, which happens at the start of every slot. +2. The per-epoch transitions, which happen at the start of the first slot of every epoch. +3. The per-slot transitions, which happen at every slot. +4. The per-block transitions, which happen at every block. -The per-slot transitions focus on the slot counter and block roots records updates; the per-block transitions generally focus on verifying aggregate signatures and saving temporary records relating to the per-block activity in the `BeaconState`; the per-epoch transitions focus on the [validator](#dfn-validator) registry, including adjusting balances and activating and exiting [validators](#dfn-validator), as well as processing crosslinks and managing block justification/finalization. +Transition section notes: +* The state caching step caches the state root of the previous slot. +* The per-epoch transitions focus on the [validator](#dfn-validator) registry, including adjusting balances and activating and exiting [validators](#dfn-validator), as well as processing crosslinks and managing block justification/finalization. +* The per-slot transitions focus on updating the slot counter and the block root records. +* The per-block transitions generally focus on verifying aggregate signatures and saving temporary records relating to the per-block activity in the `BeaconState`. Beacon blocks that trigger unhandled Python exceptions (e.g. out-of-range list accesses) and failed `assert`s during the state transition are considered invalid. -_Note_: If there are skipped slots between a block and its parent block, run the steps in the [per-slot](#per-slot-processing) and [per-epoch](#per-epoch-processing) sections once for each skipped slot and then once for the slot containing the new block. +_Note_: If there are skipped slots between a block and its parent block, run the steps in the [state caching](#state-caching), [per-epoch](#per-epoch-processing), and [per-slot](#per-slot-processing) sections once for each skipped slot and then once for the slot containing the new block. -### Per-slot processing +### State caching -Below are the processing steps that happen at every slot. - -#### Slot - -* Set `state.slot += 1`. - -#### Block roots - -* Let `previous_block_root` be the `hash_tree_root` of the previous beacon block processed in the chain. -* Set `state.latest_block_roots[(state.slot - 1) % LATEST_BLOCK_ROOTS_LENGTH] = previous_block_root`. -* If `state.slot % LATEST_BLOCK_ROOTS_LENGTH == 0` append `merkle_root(state.latest_block_roots)` to `state.batched_block_roots`. - -### Per-block processing - -Below are the processing steps that happen at every `block`. - -#### Slot - -* Verify that `block.slot == state.slot`. - -#### Block signature - -* Let `proposer = state.validator_registry[get_beacon_proposer_index(state, state.slot)]`. - -* Let `proposal = Proposal(block.slot, BEACON_CHAIN_SHARD_NUMBER, signed_root(block, "signature"), block.signature)`. -* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=signed_root(proposal, "signature"), signature=proposal.signature, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_PROPOSAL))`. 
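The removed `Slot` and `Block signature` checks above are folded into the new `Block header` step listed in the contents. A minimal sketch of how that step might look, assuming it is built from the `BeaconBlockHeader` container, `get_temporary_block_header`, `signed_root`, and `DOMAIN_BEACON_BLOCK` defined earlier (the exact assertions are an assumption for illustration):

```python
def process_block_header(state: BeaconState, block: BeaconBlock) -> None:
    # Verify that the slots match
    assert block.slot == state.slot
    # Verify that the block builds on the latest block header stored in the state
    assert block.previous_block_root == hash_tree_root(state.latest_block_header)
    # Cache the current block as the new latest block (state_root left as ZERO_HASH for now)
    state.latest_block_header = get_temporary_block_header(block)
    # Verify the proposer's signature over the block's signed root
    proposer = state.validator_registry[get_beacon_proposer_index(state, state.slot)]
    assert bls_verify(
        pubkey=proposer.pubkey,
        message_hash=signed_root(block),
        signature=block.signature,
        domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_BEACON_BLOCK),
    )
```

Note that `cache_state` (defined under [State caching](#state-caching)) fills in the deferred `state_root` of `latest_block_header` at the start of the next slot.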
- -#### RANDAO - -* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=hash_tree_root(get_current_epoch(state)), signature=block.randao_reveal, domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_RANDAO))`. -* Set `state.latest_randao_mixes[get_current_epoch(state) % LATEST_RANDAO_MIXES_LENGTH] = xor(get_randao_mix(state, get_current_epoch(state)), hash(block.randao_reveal))`. - -#### Eth1 data - -* If there exists an `eth1_data_vote` in `state.eth1_data_votes` for which `eth1_data_vote.eth1_data == block.eth1_data` (there will be at most one), set `eth1_data_vote.vote_count += 1`. -* Otherwise, append to `state.eth1_data_votes` a new `Eth1DataVote(eth1_data=block.eth1_data, vote_count=1)`. - -#### Transactions - -##### Proposer slashings - -Verify that `len(block.body.proposer_slashings) <= MAX_PROPOSER_SLASHINGS`. - -For each `proposer_slashing` in `block.body.proposer_slashings`: - -* Let `proposer = state.validator_registry[proposer_slashing.proposer_index]`. -* Verify that `proposer_slashing.proposal_1.slot == proposer_slashing.proposal_2.slot`. -* Verify that `proposer_slashing.proposal_1.shard == proposer_slashing.proposal_2.shard`. -* Verify that `proposer_slashing.proposal_1.block_root != proposer_slashing.proposal_2.block_root`. -* Verify that `proposer.slashed == False`. -* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=signed_root(proposer_slashing.proposal_1, "signature"), signature=proposer_slashing.proposal_1.signature, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_1.slot), DOMAIN_PROPOSAL))`. -* Verify that `bls_verify(pubkey=proposer.pubkey, message_hash=signed_root(proposer_slashing.proposal_2, "signature"), signature=proposer_slashing.proposal_2.signature, domain=get_domain(state.fork, slot_to_epoch(proposer_slashing.proposal_2.slot), DOMAIN_PROPOSAL))`. -* Run `slash_validator(state, proposer_slashing.proposer_index)`. - -##### Attester slashings - -Verify that `len(block.body.attester_slashings) <= MAX_ATTESTER_SLASHINGS`. - -For each `attester_slashing` in `block.body.attester_slashings`: - -* Let `slashable_attestation_1 = attester_slashing.slashable_attestation_1`. -* Let `slashable_attestation_2 = attester_slashing.slashable_attestation_2`. -* Verify that `slashable_attestation_1.data != slashable_attestation_2.data`. -* Verify that `is_double_vote(slashable_attestation_1.data, slashable_attestation_2.data)` or `is_surround_vote(slashable_attestation_1.data, slashable_attestation_2.data)`. -* Verify that `verify_slashable_attestation(state, slashable_attestation_1)`. -* Verify that `verify_slashable_attestation(state, slashable_attestation_2)`. -* Let `slashable_indices = [index for index in slashable_attestation_1.validator_indices if index in slashable_attestation_2.validator_indices and state.validator_registry[index].slashed == False]`. -* Verify that `len(slashable_indices) >= 1`. -* Run `slash_validator(state, index)` for each `index` in `slashable_indices`. - -##### Attestations - -Verify that `len(block.body.attestations) <= MAX_ATTESTATIONS`. - -For each `attestation` in `block.body.attestations`: - -* Verify that `attestation.data.slot >= GENESIS_SLOT`. -* Verify that `attestation.data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot`. -* Verify that `state.slot < attestation.data.slot + SLOTS_PER_EPOCH. 
-* Verify that `attestation.data.justified_epoch` is equal to `state.justified_epoch if slot_to_epoch(attestation.data.slot + 1) >= get_current_epoch(state) else state.previous_justified_epoch`. -* Verify that `attestation.data.justified_block_root` is equal to `get_block_root(state, get_epoch_start_slot(attestation.data.justified_epoch))`. -* Verify that either (i) `state.latest_crosslinks[attestation.data.shard] == attestation.data.latest_crosslink` or (ii) `state.latest_crosslinks[attestation.data.shard] == Crosslink(crosslink_data_root=attestation.data.crosslink_data_root, epoch=slot_to_epoch(attestation.data.slot))`. -* Verify bitfields and aggregate signature: +At every `slot > GENESIS_SLOT` run the following function: ```python - assert attestation.custody_bitfield == b'\x00' * len(attestation.custody_bitfield) # [TO BE REMOVED IN PHASE 1] - assert attestation.aggregation_bitfield != b'\x00' * len(attestation.aggregation_bitfield) +def cache_state(state: BeaconState) -> None: + previous_slot_state_root = hash_tree_root(state) - crosslink_committee = [ - committee for committee, shard in get_crosslink_committees_at_slot(state, attestation.data.slot) - if shard == attestation.data.shard - ][0] - for i in range(len(crosslink_committee)): - if get_bitfield_bit(attestation.aggregation_bitfield, i) == 0b0: - assert get_bitfield_bit(attestation.custody_bitfield, i) == 0b0 + # store the previous slot's post state transition root + state.latest_state_roots[state.slot % SLOTS_PER_HISTORICAL_ROOT] = previous_slot_state_root - participants = get_attestation_participants(state, attestation.data, attestation.aggregation_bitfield) - custody_bit_1_participants = get_attestation_participants(state, attestation.data, attestation.custody_bitfield) - custody_bit_0_participants = [i in participants for i not in custody_bit_1_participants] + # cache state root in stored latest_block_header if empty + if state.latest_block_header.state_root == ZERO_HASH: + state.latest_block_header.state_root = previous_slot_state_root - assert bls_verify_multiple( - pubkeys=[ - bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_0_participants]), - bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_1_participants]), - ], - message_hashes=[ - hash_tree_root(AttestationDataAndCustodyBit(data=attestation.data, custody_bit=0b0)), - hash_tree_root(AttestationDataAndCustodyBit(data=attestation.data, custody_bit=0b1)), - ], - signature=attestation.aggregate_signature, - domain=get_domain(state.fork, slot_to_epoch(attestation.data.slot), DOMAIN_ATTESTATION), - ) + # store latest known block for previous slot + state.latest_block_roots[state.slot % SLOTS_PER_HISTORICAL_ROOT] = hash_tree_root(state.latest_block_header) ``` -* [TO BE REMOVED IN PHASE 1] Verify that `attestation.data.crosslink_data_root == ZERO_HASH`. -* Append `PendingAttestation(data=attestation.data, aggregation_bitfield=attestation.aggregation_bitfield, custody_bitfield=attestation.custody_bitfield, inclusion_slot=state.slot)` to `state.latest_attestations`. - -##### Deposits - -Verify that `len(block.body.deposits) <= MAX_DEPOSITS`. - -[TODO: update the call to `verify_merkle_branch` below if it needs to change after we process deposits in order] - -For each `deposit` in `block.body.deposits`: - -* Let `serialized_deposit_data` be the serialized form of `deposit.deposit_data`. It should be 8 bytes for `deposit_data.amount` followed by 8 bytes for `deposit_data.timestamp` and then the `DepositInput` bytes. 
That is, it should match `deposit_data` in the [Ethereum 1.0 deposit contract](#ethereum-10-deposit-contract) of which the hash was placed into the Merkle tree. -* Verify that `deposit.index == state.deposit_index`. -* Verify that `verify_merkle_branch(hash(serialized_deposit_data), deposit.branch, DEPOSIT_CONTRACT_TREE_DEPTH, deposit.index, state.latest_eth1_data.deposit_root)` is `True`. - -```python -def verify_merkle_branch(leaf: Bytes32, branch: List[Bytes32], depth: int, index: int, root: Bytes32) -> bool: - """ - Verify that the given ``leaf`` is on the merkle branch ``branch``. - """ - value = leaf - for i in range(depth): - if index // (2**i) % 2: - value = hash(branch[i] + value) - else: - value = hash(value + branch[i]) - return value == root -``` - -* Run the following: - -```python -process_deposit(state, deposit) -``` - -* Set `state.deposit_index += 1`. - -##### Voluntary exits - -Verify that `len(block.body.voluntary_exits) <= MAX_VOLUNTARY_EXITS`. - -For each `exit` in `block.body.voluntary_exits`: - -* Let `validator = state.validator_registry[exit.validator_index]`. -* Verify that `validator.exit_epoch > get_delayed_activation_exit_epoch(get_current_epoch(state))`. -* Verify that `get_current_epoch(state) >= exit.epoch`. -* Verify that `bls_verify(pubkey=validator.pubkey, message_hash=signed_root(exit, "signature"), signature=exit.signature, domain=get_domain(state.fork, exit.epoch, DOMAIN_EXIT))`. -* Run `initiate_validator_exit(state, exit.validator_index)`. - -##### Transfers - -Note: Transfers are a temporary functionality for phases 0 and 1, to be removed in phase 2. - -Verify that `len(block.body.transfers) <= MAX_TRANSFERS` and that all transfers are distinct. - -For each `transfer` in `block.body.transfers`: - -* Verify that `state.validator_balances[transfer.from] >= transfer.amount`. -* Verify that `state.validator_balances[transfer.from] >= transfer.fee`. -* Verify that `state.validator_balances[transfer.from] == transfer.amount + transfer.fee` or `state.validator_balances[transfer.from] >= transfer.amount + transfer.fee + MIN_DEPOSIT_AMOUNT`. -* Verify that `state.slot == transfer.slot`. -* Verify that `get_current_epoch(state) >= state.validator_registry[transfer.from].withdrawable_epoch` or `state.validator_registry[transfer.from].activation_epoch == FAR_FUTURE_EPOCH`. -* Verify that `state.validator_registry[transfer.from].withdrawal_credentials == BLS_WITHDRAWAL_PREFIX_BYTE + hash(transfer.pubkey)[1:]`. -* Verify that `bls_verify(pubkey=transfer.pubkey, message_hash=signed_root(transfer, "signature"), signature=transfer.signature, domain=get_domain(state.fork, slot_to_epoch(transfer.slot), DOMAIN_TRANSFER))`. -* Set `state.validator_balances[transfer.from] -= transfer.amount + transfer.fee`. -* Set `state.validator_balances[transfer.to] += transfer.amount`. -* Set `state.validator_balances[get_beacon_proposer_index(state, state.slot)] += transfer.fee`. - ### Per-epoch processing -The steps below happen when `(state.slot + 1) % SLOTS_PER_EPOCH == 0`. +The steps below happen when `state.slot > GENESIS_SLOT and (state.slot + 1) % SLOTS_PER_EPOCH == 0`. -#### Helper variables +#### Helper functions -* Let `current_epoch = get_current_epoch(state)`. -* Let `previous_epoch = get_previous_epoch(state)`. -* Let `next_epoch = current_epoch + 1`. 
+We define some helper functions utilized when processing an epoch transition: -[Validators](#dfn-Validator) attesting during the current epoch: +```python +def get_current_total_balance(state: BeaconState) -> Gwei: + return get_total_balance(state, get_active_validator_indices(state.validator_registry, get_current_epoch(state))) +``` -* Let `current_total_balance = get_total_balance(state, get_active_validator_indices(state.validator_registry, current_epoch))`. -* Let `current_epoch_attestations = [a for a in state.latest_attestations if current_epoch == slot_to_epoch(a.data.slot)]`. (Note: Each of these attestations votes for the current justified epoch/block root because of the [attestation block validity rules](#attestations-1).) -* Validators justifying the epoch boundary block at the start of the current epoch: - * Let `current_epoch_boundary_attestations = [a for a in current_epoch_attestations if a.data.epoch_boundary_root == get_block_root(state, get_epoch_start_slot(current_epoch))]`. - * Let `current_epoch_boundary_attester_indices` be the union of the [validator](#dfn-validator) index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in current_epoch_boundary_attestations]`. - * Let `current_epoch_boundary_attesting_balance = get_total_balance(state, current_epoch_boundary_attester_indices)`. +```python +def get_previous_total_balance(state: BeaconState) -> Gwei: + return get_total_balance(state, get_active_validator_indices(state.validator_registry, get_previous_epoch(state))) +``` -[Validators](#dfn-Validator) attesting during the previous epoch: +```python +def get_attesting_indices(state: BeaconState, attestations: List[PendingAttestation]) -> List[ValidatorIndex]: + output = set() + for a in attestations: + output = output.union(get_attestation_participants(state, a.data, a.aggregation_bitfield)) + return sorted(list(output)) +``` -* Let `previous_total_balance = get_total_balance(state, get_active_validator_indices(state.validator_registry, previous_epoch))`. -* Validators that made an attestation during the previous epoch, targeting the previous justified slot: - * Let `previous_epoch_attestations = [a for a in state.latest_attestations if previous_epoch == slot_to_epoch(a.data.slot)]`. (Note: Each of these attestations votes for the previous justified epoch/block root because of the [attestation block validity rules](#attestations-1).) - * Let `previous_epoch_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_attestations]`. - * Let `previous_epoch_attesting_balance = get_total_balance(state, previous_epoch_attester_indices)`. -* Validators justifying the epoch boundary block at the start of the previous epoch: - * Let `previous_epoch_boundary_attestations = [a for a in previous_epoch_attestations if a.data.epoch_boundary_root == get_block_root(state, get_epoch_start_slot(previous_epoch))]`. - * Let `previous_epoch_boundary_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_boundary_attestations]`. - * Let `previous_epoch_boundary_attesting_balance = get_total_balance(state, previous_epoch_boundary_attester_indices)`. 
-* Validators attesting to the expected beacon chain head during the previous epoch: - * Let `previous_epoch_head_attestations = [a for a in previous_epoch_attestations if a.data.beacon_block_root == get_block_root(state, a.data.slot)]`. - * Let `previous_epoch_head_attester_indices` be the union of the validator index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in previous_epoch_head_attestations]`. - * Let `previous_epoch_head_attesting_balance = get_total_balance(state, previous_epoch_head_attester_indices)`. +```python +def get_attesting_balance(state: BeaconState, attestations: List[PendingAttestation]) -> Gwei: + return get_total_balance(state, get_attesting_indices(state, attestations)) +``` -**Note**: `previous_total_balance` and `previous_epoch_boundary_attesting_balance` balance might be marginally different than the actual balances during previous epoch transition. Due to the tight bound on validator churn each epoch and small per-epoch rewards/penalties, the potential balance difference is very low and only marginally affects consensus safety. +```python +def get_current_epoch_boundary_attestations(state: BeaconState) -> List[PendingAttestation]: + return [ + a for a in state.current_epoch_attestations + if a.data.target_root == get_block_root(state, get_epoch_start_slot(get_current_epoch(state))) + ] +``` -For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(next_epoch))`, let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`. For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot`, compute: +```python +def get_previous_epoch_boundary_attestations(state: BeaconState) -> List[PendingAttestation]: + return [ + a for a in state.previous_epoch_attestations + if a.data.target_root == get_block_root(state, get_epoch_start_slot(get_previous_epoch(state))) + ] +``` -* Let `crosslink_data_root` be `state.latest_crosslinks[shard].crosslink_data_root` -* Let `attesting_validator_indices(crosslink_committee, crosslink_data_root)` be the union of the [validator](#dfn-validator) index sets given by `[get_attestation_participants(state, a.data, a.aggregation_bitfield) for a in current_epoch_attestations + previous_epoch_attestations if a.data.shard == shard and a.data.crosslink_data_root == crosslink_data_root]`. -* Let `winning_root(crosslink_committee)` be equal to the value of `crosslink_data_root` such that `get_total_balance(state, attesting_validator_indices(crosslink_committee, crosslink_data_root))` is maximized (ties broken by favoring lexicographically smallest `crosslink_data_root`). -* Let `attesting_validators(crosslink_committee)` be equal to `attesting_validator_indices(crosslink_committee, winning_root(crosslink_committee))` for convenience. -* Let `total_attesting_balance(crosslink_committee) = get_total_balance(state, attesting_validators(crosslink_committee))`. +```python +def get_previous_epoch_matching_head_attestations(state: BeaconState) -> List[PendingAttestation]: + return [ + a for a in state.previous_epoch_attestations + if a.data.beacon_block_root == get_block_root(state, a.data.slot) + ] +``` -Define the following helpers to process attestation inclusion rewards and inclusion distance reward/penalty. For every attestation `a` in `previous_epoch_attestations`: +**Note**: Total balances computed for the previous epoch might be marginally different than the actual total balances during the previous epoch transition. 
Due to the tight bound on validator churn each epoch and small per-epoch rewards/penalties, the potential balance difference is very low and only marginally affects consensus safety. -* Let `inclusion_slot(state, index) = a.inclusion_slot` for the attestation `a` where `index` is in `get_attestation_participants(state, a.data, a.aggregation_bitfield)`. If multiple attestations are applicable, the attestation with lowest `inclusion_slot` is considered. -* Let `inclusion_distance(state, index) = a.inclusion_slot - a.data.slot` where `a` is the above attestation. +```python +def get_winning_root_and_participants(state: BeaconState, shard: Shard) -> Tuple[Bytes32, List[ValidatorIndex]]: + all_attestations = state.current_epoch_attestations + state.previous_epoch_attestations + valid_attestations = [ + a for a in all_attestations if a.data.previous_crosslink == state.latest_crosslinks[shard] + ] + all_roots = [a.data.crosslink_data_root for a in valid_attestations] -#### Eth1 data + # handle when no attestations for shard available + if len(all_roots) == 0: + return ZERO_HASH, [] -If `next_epoch % EPOCHS_PER_ETH1_VOTING_PERIOD == 0`: + def get_attestations_for(root) -> List[PendingAttestation]: + return [a for a in valid_attestations if a.data.crosslink_data_root == root] -* If `eth1_data_vote.vote_count * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH` for some `eth1_data_vote` in `state.eth1_data_votes` (ie. more than half the votes in this voting period were for that value), set `state.latest_eth1_data = eth1_data_vote.eth1_data`. -* Set `state.eth1_data_votes = []`. + # Winning crosslink root is the root with the most votes for it, ties broken in favor of + # lexicographically higher hash + winning_root = max(all_roots, key=lambda r: (get_attesting_balance(state, get_attestations_for(r)), r)) + + return winning_root, get_attesting_indices(state, get_attestations_for(winning_root)) +``` + +```python +def earliest_attestation(state: BeaconState, validator_index: ValidatorIndex) -> PendingAttestation: + return min([ + a for a in state.previous_epoch_attestations if + validator_index in get_attestation_participants(state, a.data, a.aggregation_bitfield) + ], key=lambda a: a.inclusion_slot) +``` + +```python +def inclusion_slot(state: BeaconState, validator_index: ValidatorIndex) -> Slot: + return earliest_attestation(state, validator_index).inclusion_slot +``` + +```python +def inclusion_distance(state: BeaconState, validator_index: ValidatorIndex) -> int: + attestation = earliest_attestation(state, validator_index) + return attestation.inclusion_slot - attestation.data.slot +``` #### Justification -First, update the justification bitfield: +Run the following function: -* Let `new_justified_epoch = state.justified_epoch`. -* Set `state.justification_bitfield = state.justification_bitfield << 1`. -* Set `state.justification_bitfield |= 2` and `new_justified_epoch = previous_epoch` if `3 * previous_epoch_boundary_attesting_balance >= 2 * previous_total_balance`. -* Set `state.justification_bitfield |= 1` and `new_justified_epoch = current_epoch` if `3 * current_epoch_boundary_attesting_balance >= 2 * current_total_balance`. 
+```python +def update_justification_and_finalization(state: BeaconState) -> None: + new_justified_epoch = state.current_justified_epoch + new_finalized_epoch = state.finalized_epoch -Next, update last finalized epoch if possible: + # Rotate the justification bitfield up one epoch to make room for the current epoch + state.justification_bitfield <<= 1 + # If the previous epoch gets justified, fill the second last bit + previous_boundary_attesting_balance = get_attesting_balance(state, get_previous_epoch_boundary_attestations(state)) + if previous_boundary_attesting_balance * 3 >= get_previous_total_balance(state) * 2: + new_justified_epoch = get_current_epoch(state) - 1 + state.justification_bitfield |= 2 + # If the current epoch gets justified, fill the last bit + current_boundary_attesting_balance = get_attesting_balance(state, get_current_epoch_boundary_attestations(state)) + if current_boundary_attesting_balance * 3 >= get_current_total_balance(state) * 2: + new_justified_epoch = get_current_epoch(state) + state.justification_bitfield |= 1 -* Set `state.finalized_epoch = state.previous_justified_epoch` if `(state.justification_bitfield >> 1) % 8 == 0b111 and state.previous_justified_epoch == previous_epoch - 2`. -* Set `state.finalized_epoch = state.previous_justified_epoch` if `(state.justification_bitfield >> 1) % 4 == 0b11 and state.previous_justified_epoch == previous_epoch - 1`. -* Set `state.finalized_epoch = state.justified_epoch` if `(state.justification_bitfield >> 0) % 8 == 0b111 and state.justified_epoch == previous_epoch - 1`. -* Set `state.finalized_epoch = state.justified_epoch` if `(state.justification_bitfield >> 0) % 4 == 0b11 and state.justified_epoch == previous_epoch`. + # Process finalizations + bitfield = state.justification_bitfield + current_epoch = get_current_epoch(state) + # The 2nd/3rd/4th most recent epochs are all justified, the 2nd using the 4th as source + if (bitfield >> 1) % 8 == 0b111 and state.previous_justified_epoch == current_epoch - 3: + new_finalized_epoch = state.previous_justified_epoch + # The 2nd/3rd most recent epochs are both justified, the 2nd using the 3rd as source + if (bitfield >> 1) % 4 == 0b11 and state.previous_justified_epoch == current_epoch - 2: + new_finalized_epoch = state.previous_justified_epoch + # The 1st/2nd/3rd most recent epochs are all justified, the 1st using the 3rd as source + if (bitfield >> 0) % 8 == 0b111 and state.current_justified_epoch == current_epoch - 2: + new_finalized_epoch = state.current_justified_epoch + # The 1st/2nd most recent epochs are both justified, the 1st using the 2nd as source + if (bitfield >> 0) % 4 == 0b11 and state.current_justified_epoch == current_epoch - 1: + new_finalized_epoch = state.current_justified_epoch -Finally, update the following: - -* Set `state.previous_justified_epoch = state.justified_epoch`. -* Set `state.justified_epoch = new_justified_epoch`. 
+    # Update state justification/finality fields
+    state.previous_justified_epoch = state.current_justified_epoch
+    state.previous_justified_root = state.current_justified_root
+    if new_justified_epoch != state.current_justified_epoch:
+        state.current_justified_epoch = new_justified_epoch
+        state.current_justified_root = get_block_root(state, get_epoch_start_slot(new_justified_epoch))
+    if new_finalized_epoch != state.finalized_epoch:
+        state.finalized_epoch = new_finalized_epoch
+        state.finalized_root = get_block_root(state, get_epoch_start_slot(new_finalized_epoch))
+```

 #### Crosslinks

-For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(next_epoch))`, let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`. For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot`, compute:
+Run the following function:

-* Set `state.latest_crosslinks[shard] = Crosslink(epoch=slot_to_epoch(slot), crosslink_data_root=winning_root(crosslink_committee))` if `3 * total_attesting_balance(crosslink_committee) >= 2 * get_total_balance(crosslink_committee)`.
+```python
+def process_crosslinks(state: BeaconState) -> None:
+    current_epoch = get_current_epoch(state)
+    previous_epoch = current_epoch - 1
+    next_epoch = current_epoch + 1
+    for slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(next_epoch)):
+        for crosslink_committee, shard in get_crosslink_committees_at_slot(state, slot):
+            winning_root, participants = get_winning_root_and_participants(state, shard)
+            participating_balance = get_total_balance(state, participants)
+            total_balance = get_total_balance(state, crosslink_committee)
+            if 3 * participating_balance >= 2 * total_balance:
+                state.latest_crosslinks[shard] = Crosslink(
+                    epoch=slot_to_epoch(slot),
+                    crosslink_data_root=winning_root
+                )
+```
+
+#### Eth1 data
+
+Run the following function:
+
+```python
+def maybe_reset_eth1_period(state: BeaconState) -> None:
+    if (get_current_epoch(state) + 1) % EPOCHS_PER_ETH1_VOTING_PERIOD == 0:
+        for eth1_data_vote in state.eth1_data_votes:
+            # If a majority of all votes were for a particular eth1_data value,
+            # then set that as the new canonical value
+            if eth1_data_vote.vote_count * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH:
+                state.latest_eth1_data = eth1_data_vote.eth1_data
+        state.eth1_data_votes = []
+```

 #### Rewards and penalties

 First, we define some additional helpers:

-* Let `base_reward_quotient = integer_squareroot(previous_total_balance) // BASE_REWARD_QUOTIENT`.
-* Let `base_reward(state, index) = get_effective_balance(state, index) // base_reward_quotient // 5` for any validator with the given `index`.
-* Let `inactivity_penalty(state, index, epochs_since_finality) = base_reward(state, index) + get_effective_balance(state, index) * epochs_since_finality // INACTIVITY_PENALTY_QUOTIENT // 2` for any validator with the given `index`.
+```python +def get_base_reward(state: BeaconState, index: ValidatorIndex) -> Gwei: + if get_previous_total_balance(state) == 0: + return 0 + + adjusted_quotient = integer_squareroot(get_previous_total_balance(state)) // BASE_REWARD_QUOTIENT + return get_effective_balance(state, index) // adjusted_quotient // 5 +``` + +```python +def get_inactivity_penalty(state: BeaconState, index: ValidatorIndex, epochs_since_finality: int) -> Gwei: + return ( + get_base_reward(state, index) + + get_effective_balance(state, index) * epochs_since_finality // INACTIVITY_PENALTY_QUOTIENT // 2 + ) +``` Note: When applying penalties in the following balance recalculations implementers should make sure the `uint64` does not underflow. ##### Justification and finalization -Note: Rewards and penalties are for participation in the previous epoch, so the "active validator" set is drawn from `get_active_validator_indices(state.validator_registry, previous_epoch)`. +```python +def get_justification_and_finalization_deltas(state: BeaconState) -> Tuple[List[Gwei], List[Gwei]]: + epochs_since_finality = get_current_epoch(state) + 1 - state.finalized_epoch + if epochs_since_finality <= 4: + return compute_normal_justification_and_finalization_deltas(state) + else: + return compute_inactivity_leak_deltas(state) +``` -* Let `epochs_since_finality = next_epoch - state.finalized_epoch`. +When blocks are finalizing normally... -Case 1: `epochs_since_finality <= 4`: +```python +def compute_normal_justification_and_finalization_deltas(state: BeaconState) -> Tuple[List[Gwei], List[Gwei]]: + # deltas[0] for rewards + # deltas[1] for penalties + deltas = [ + [0 for index in range(len(state.validator_registry))], + [0 for index in range(len(state.validator_registry))] + ] + # Some helper variables + boundary_attestations = get_previous_epoch_boundary_attestations(state) + boundary_attesting_balance = get_attesting_balance(state, boundary_attestations) + total_balance = get_previous_total_balance(state) + total_attesting_balance = get_attesting_balance(state, state.previous_epoch_attestations) + matching_head_attestations = get_previous_epoch_matching_head_attestations(state) + matching_head_balance = get_attesting_balance(state, matching_head_attestations) + # Process rewards or penalties for all validators + for index in get_active_validator_indices(state.validator_registry, get_previous_epoch(state)): + # Expected FFG source + if index in get_attesting_indices(state, state.previous_epoch_attestations): + deltas[0][index] += get_base_reward(state, index) * total_attesting_balance // total_balance + # Inclusion speed bonus + deltas[0][index] += ( + get_base_reward(state, index) * MIN_ATTESTATION_INCLUSION_DELAY // + inclusion_distance(state, index) + ) + else: + deltas[1][index] += get_base_reward(state, index) + # Expected FFG target + if index in get_attesting_indices(state, boundary_attestations): + deltas[0][index] += get_base_reward(state, index) * boundary_attesting_balance // total_balance + else: + deltas[1][index] += get_base_reward(state, index) + # Expected head + if index in get_attesting_indices(state, matching_head_attestations): + deltas[0][index] += get_base_reward(state, index) * matching_head_balance // total_balance + else: + deltas[1][index] += get_base_reward(state, index) + # Proposer bonus + if index in get_attesting_indices(state, state.previous_epoch_attestations): + proposer_index = get_beacon_proposer_index(state, inclusion_slot(state, index)) + deltas[0][proposer_index] += get_base_reward(state, index) 
// ATTESTATION_INCLUSION_REWARD_QUOTIENT + return deltas +``` -* Expected FFG source: - * Any [validator](#dfn-validator) `index` in `previous_epoch_attester_indices` gains `base_reward(state, index) * previous_epoch_attesting_balance // previous_total_balance`. - * Any [active validator](#dfn-active-validator) `index` not in `previous_epoch_attester_indices` loses `base_reward(state, index)`. -* Expected FFG target: - * Any [validator](#dfn-validator) `index` in `previous_epoch_boundary_attester_indices` gains `base_reward(state, index) * previous_epoch_boundary_attesting_balance // previous_total_balance`. - * Any [active validator](#dfn-active-validator) `index` not in `previous_epoch_boundary_attester_indices` loses `base_reward(state, index)`. -* Expected beacon chain head: - * Any [validator](#dfn-validator) `index` in `previous_epoch_head_attester_indices` gains `base_reward(state, index) * previous_epoch_head_attesting_balance // previous_total_balance)`. - * Any [active validator](#dfn-active-validator) `index` not in `previous_epoch_head_attester_indices` loses `base_reward(state, index)`. -* Inclusion distance: - * Any [validator](#dfn-validator) `index` in `previous_epoch_attester_indices` gains `base_reward(state, index) * MIN_ATTESTATION_INCLUSION_DELAY // inclusion_distance(state, index)` +When blocks are not finalizing normally... -Case 2: `epochs_since_finality > 4`: - -* Any [active validator](#dfn-active-validator) `index` not in `previous_epoch_attester_indices`, loses `inactivity_penalty(state, index, epochs_since_finality)`. -* Any [active validator](#dfn-active-validator) `index` not in `previous_epoch_boundary_attester_indices`, loses `inactivity_penalty(state, index, epochs_since_finality)`. -* Any [active validator](#dfn-active-validator) `index` not in `previous_epoch_head_attester_indices`, loses `base_reward(state, index)`. -* Any [active validator](#dfn-active-validator) `index` with `validator.slashed == True`, loses `2 * inactivity_penalty(state, index, epochs_since_finality) + base_reward(state, index)`. -* Any [validator](#dfn-validator) `index` in `previous_epoch_attester_indices` loses `base_reward(state, index) - base_reward(state, index) * MIN_ATTESTATION_INCLUSION_DELAY // inclusion_distance(state, index)` - -##### Attestation inclusion - -For each `index` in `previous_epoch_attester_indices`, we determine the proposer `proposer_index = get_beacon_proposer_index(state, inclusion_slot(state, index))` and set `state.validator_balances[proposer_index] += base_reward(state, index) // ATTESTATION_INCLUSION_REWARD_QUOTIENT`. 
+```python +def compute_inactivity_leak_deltas(state: BeaconState) -> Tuple[List[Gwei], List[Gwei]]: + # deltas[0] for rewards + # deltas[1] for penalties + deltas = [ + [0 for index in range(len(state.validator_registry))], + [0 for index in range(len(state.validator_registry))] + ] + boundary_attestations = get_previous_epoch_boundary_attestations(state) + matching_head_attestations = get_previous_epoch_matching_head_attestations(state) + active_validator_indices = get_active_validator_indices(state.validator_registry, get_previous_epoch(state)) + epochs_since_finality = get_current_epoch(state) + 1 - state.finalized_epoch + for index in active_validator_indices: + if index not in get_attesting_indices(state, state.previous_epoch_attestations): + deltas[1][index] += get_inactivity_penalty(state, index, epochs_since_finality) + else: + # If a validator did attest, apply a small penalty for getting attestations included late + deltas[0][index] += ( + get_base_reward(state, index) * MIN_ATTESTATION_INCLUSION_DELAY // + inclusion_distance(state, index) + ) + deltas[1][index] += get_base_reward(state, index) + if index not in get_attesting_indices(state, boundary_attestations): + deltas[1][index] += get_inactivity_penalty(state, index, epochs_since_finality) + if index not in get_attesting_indices(state, matching_head_attestations): + deltas[1][index] += get_base_reward(state, index) + # Penalize slashed-but-inactive validators as though they were active but offline + for index in range(len(state.validator_registry)): + eligible = ( + index not in active_validator_indices and + state.validator_registry[index].slashed and + get_current_epoch(state) < state.validator_registry[index].withdrawable_epoch + ) + if eligible: + deltas[1][index] += ( + 2 * get_inactivity_penalty(state, index, epochs_since_finality) + + get_base_reward(state, index) + ) + return deltas +``` ##### Crosslinks -For every `slot in range(get_epoch_start_slot(previous_epoch), get_epoch_start_slot(current_epoch))`: +```python +def get_crosslink_deltas(state: BeaconState) -> Tuple[List[Gwei], List[Gwei]]: + # deltas[0] for rewards + # deltas[1] for penalties + deltas = [ + [0 for index in range(len(state.validator_registry))], + [0 for index in range(len(state.validator_registry))] + ] + previous_epoch_start_slot = get_epoch_start_slot(get_previous_epoch(state)) + current_epoch_start_slot = get_epoch_start_slot(get_current_epoch(state)) + for slot in range(previous_epoch_start_slot, current_epoch_start_slot): + for crosslink_committee, shard in get_crosslink_committees_at_slot(state, slot): + winning_root, participants = get_winning_root_and_participants(state, shard) + participating_balance = get_total_balance(state, participants) + total_balance = get_total_balance(state, crosslink_committee) + for index in crosslink_committee: + if index in participants: + deltas[0][index] += get_base_reward(state, index) * participating_balance // total_balance + else: + deltas[1][index] += get_base_reward(state, index) + return deltas +``` -* Let `crosslink_committees_at_slot = get_crosslink_committees_at_slot(state, slot)`. -* For every `(crosslink_committee, shard)` in `crosslink_committees_at_slot` and every `index` in `crosslink_committee`: - * If `index in attesting_validators(crosslink_committee)`, `state.validator_balances[index] += base_reward(state, index) * total_attesting_balance(crosslink_committee) // get_total_balance(state, crosslink_committee))`. 
- * If `index not in attesting_validators(crosslink_committee)`, `state.validator_balances[index] -= base_reward(state, index)`. +#### Apply rewards + +Run the following: + +```python +def apply_rewards(state: BeaconState) -> None: + deltas1 = get_justification_and_finalization_deltas(state) + deltas2 = get_crosslink_deltas(state) + for i in range(len(state.validator_registry)): + state.validator_balances[i] = max( + 0, + state.validator_balances[i] + deltas1[0][i] + deltas2[0][i] - deltas1[1][i] - deltas2[1][i] + ) +``` #### Ejections -* Run `process_ejections(state)`. +Run `process_ejections(state)`. ```python def process_ejections(state: BeaconState) -> None: @@ -1911,25 +2048,28 @@ def process_ejections(state: BeaconState) -> None: Iterate through the validator registry and eject active validators with balance below ``EJECTION_BALANCE``. """ - for index in get_active_validator_indices(state.validator_registry, current_epoch(state)): + for index in get_active_validator_indices(state.validator_registry, get_current_epoch(state)): if state.validator_balances[index] < EJECTION_BALANCE: exit_validator(state, index) ``` #### Validator registry and shuffling seed data -First, update the following: - -* Set `state.previous_shuffling_epoch = state.current_shuffling_epoch`. -* Set `state.previous_shuffling_start_shard = state.current_shuffling_start_shard`. -* Set `state.previous_shuffling_seed = state.current_shuffling_seed`. - -If the following are satisfied: - -* `state.finalized_epoch > state.validator_registry_update_epoch` -* `state.latest_crosslinks[shard].epoch > state.validator_registry_update_epoch` for every shard number `shard` in `[(state.current_shuffling_start_shard + i) % SHARD_COUNT for i in range(get_current_epoch_committee_count(state))]` (that is, for every shard in the current committees) - -update the validator registry and associated fields by running +```python +def should_update_validator_registry(state: BeaconState) -> bool: + # Must have finalized a new block + if state.finalized_epoch <= state.validator_registry_update_epoch: + return False + # Must have processed new crosslinks on all shards of the current epoch + shards_to_check = [ + (state.current_shuffling_start_shard + i) % SHARD_COUNT + for i in range(get_current_epoch_committee_count(state)) + ] + for shard in shards_to_check: + if state.latest_crosslinks[shard].epoch <= state.validator_registry_update_epoch: + return False + return True +``` ```python def update_validator_registry(state: BeaconState) -> None: @@ -1964,7 +2104,7 @@ def update_validator_registry(state: BeaconState) -> None: # Exit validators within the allowable balance churn balance_churn = 0 for index, validator in enumerate(state.validator_registry): - if validator.activation_epoch == FAR_FUTURE_EPOCH and validator.initiated_exit: + if validator.exit_epoch == FAR_FUTURE_EPOCH and validator.initiated_exit: # Check the balance churn would be within the allowance balance_churn += get_effective_balance(state, index) if balance_churn > max_balance_churn: @@ -1976,23 +2116,40 @@ def update_validator_registry(state: BeaconState) -> None: state.validator_registry_update_epoch = current_epoch ``` -and perform the following updates: +Run the following function: -* Set `state.current_shuffling_epoch = next_epoch` -* Set `state.current_shuffling_start_shard = (state.current_shuffling_start_shard + get_current_epoch_committee_count(state)) % SHARD_COUNT` -* Set `state.current_shuffling_seed = generate_seed(state, state.current_shuffling_epoch)` - -If a 
validator registry update does _not_ happen do the following:
-
-* Let `epochs_since_last_registry_update = current_epoch - state.validator_registry_update_epoch`.
-* If `epochs_since_last_registry_update > 1` and `is_power_of_two(epochs_since_last_registry_update)`:
-  * Set `state.current_shuffling_epoch = next_epoch`.
-  * Set `state.current_shuffling_seed = generate_seed(state, state.current_shuffling_epoch)`
-  * _Note_ that `state.current_shuffling_start_shard` is left unchanged.
+```python
+def update_registry_and_shuffling_data(state: BeaconState) -> None:
+    # First set previous shuffling data to current shuffling data
+    state.previous_shuffling_epoch = state.current_shuffling_epoch
+    state.previous_shuffling_start_shard = state.current_shuffling_start_shard
+    state.previous_shuffling_seed = state.current_shuffling_seed
+    current_epoch = get_current_epoch(state)
+    next_epoch = current_epoch + 1
+    # Check if we should update, and if so, update
+    if should_update_validator_registry(state):
+        update_validator_registry(state)
+        # If we update the registry, update the shuffling data and shards as well
+        state.current_shuffling_epoch = next_epoch
+        state.current_shuffling_start_shard = (
+            state.current_shuffling_start_shard +
+            get_current_epoch_committee_count(state)
+        ) % SHARD_COUNT
+        state.current_shuffling_seed = generate_seed(state, state.current_shuffling_epoch)
+    else:
+        # If processing at least one crosslink keeps failing, then reshuffle every power of two,
+        # but don't update the current_shuffling_start_shard
+        epochs_since_last_registry_update = current_epoch - state.validator_registry_update_epoch
+        if epochs_since_last_registry_update > 1 and is_power_of_two(epochs_since_last_registry_update):
+            state.current_shuffling_epoch = next_epoch
+            state.current_shuffling_seed = generate_seed(state, state.current_shuffling_epoch)
+```

 **Invariant**: the active index root that is hashed into the shuffling seed actually is the `hash_tree_root` of the validator set that is used for that epoch.
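+
+As a non-normative illustration of the fallback branch above (not part of the state transition): while the registry is not being updated, `current_shuffling_epoch` and `current_shuffling_seed` are refreshed only when the number of epochs since the last registry update is a power of two greater than one. A minimal sketch, assuming only the Phase 0 `is_power_of_two` helper; the function name is hypothetical:
+
+```python
+def reshuffle_epochs_without_registry_update(last_update_epoch: Epoch, horizon: int) -> List[Epoch]:
+    # Epochs at which the shuffling epoch/seed would still be refreshed even though
+    # the validator registry itself has not been updated, mirroring the
+    # `epochs_since_last_registry_update > 1 and is_power_of_two(...)` condition above
+    return [
+        last_update_epoch + distance
+        for distance in range(2, horizon)
+        if is_power_of_two(distance)
+    ]
+```
+
+For example, `reshuffle_epochs_without_registry_update(100, 20)` would yield `[102, 104, 108, 116]`.
+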
-Regardless of whether or not a validator set change happens run `process_slashings(state)` and `process_exit_queue(state)`: +#### Slashings and exit queue + +Run `process_slashings(state)` and `process_exit_queue(state)`: ```python def process_slashings(state: BeaconState) -> None: @@ -2002,14 +2159,15 @@ def process_slashings(state: BeaconState) -> None: """ current_epoch = get_current_epoch(state) active_validator_indices = get_active_validator_indices(state.validator_registry, current_epoch) - total_balance = sum(get_effective_balance(state, i) for i in active_validator_indices) + total_balance = get_total_balance(state, active_validator_indices) + + # Compute `total_penalties` + total_at_start = state.latest_slashed_balances[(current_epoch + 1) % LATEST_SLASHED_EXIT_LENGTH] + total_at_end = state.latest_slashed_balances[current_epoch % LATEST_SLASHED_EXIT_LENGTH] + total_penalties = total_at_end - total_at_start for index, validator in enumerate(state.validator_registry): if validator.slashed and current_epoch == validator.withdrawable_epoch - LATEST_SLASHED_EXIT_LENGTH // 2: - epoch_index = current_epoch % LATEST_SLASHED_EXIT_LENGTH - total_at_start = state.latest_slashed_balances[(epoch_index + 1) % LATEST_SLASHED_EXIT_LENGTH] - total_at_end = state.latest_slashed_balances[epoch_index] - total_penalties = total_at_end - total_at_start penalty = max( get_effective_balance(state, index) * min(total_penalties * 3, total_balance) // total_balance, get_effective_balance(state, index) // MIN_PENALTY_QUOTIENT @@ -2043,14 +2201,344 @@ def process_exit_queue(state: BeaconState) -> None: #### Final updates -* Set `state.latest_active_index_roots[(next_epoch + ACTIVATION_EXIT_DELAY) % LATEST_ACTIVE_INDEX_ROOTS_LENGTH] = hash_tree_root(get_active_validator_indices(state.validator_registry, next_epoch + ACTIVATION_EXIT_DELAY))`. -* Set `state.latest_slashed_balances[(next_epoch) % LATEST_SLASHED_EXIT_LENGTH] = state.latest_slashed_balances[current_epoch % LATEST_SLASHED_EXIT_LENGTH]`. -* Set `state.latest_randao_mixes[next_epoch % LATEST_RANDAO_MIXES_LENGTH] = get_randao_mix(state, current_epoch)`. -* Remove any `attestation` in `state.latest_attestations` such that `slot_to_epoch(attestation.data.slot) < current_epoch`. 
+Run the following function: -### State root verification +```python +def finish_epoch_update(state: BeaconState) -> None: + current_epoch = get_current_epoch(state) + next_epoch = current_epoch + 1 + # Set active index root + index_root_position = (next_epoch + ACTIVATION_EXIT_DELAY) % LATEST_ACTIVE_INDEX_ROOTS_LENGTH + state.latest_active_index_roots[index_root_position] = hash_tree_root( + get_active_validator_indices(state.validator_registry, next_epoch + ACTIVATION_EXIT_DELAY) + ) + # Set total slashed balances + state.latest_slashed_balances[next_epoch % LATEST_SLASHED_EXIT_LENGTH] = ( + state.latest_slashed_balances[current_epoch % LATEST_SLASHED_EXIT_LENGTH] + ) + # Set randao mix + state.latest_randao_mixes[next_epoch % LATEST_RANDAO_MIXES_LENGTH] = get_randao_mix(state, current_epoch) + # Set historical root accumulator + if next_epoch % (SLOTS_PER_HISTORICAL_ROOT // SLOTS_PER_EPOCH) == 0: + historical_batch = HistoricalBatch( + block_roots=state.latest_block_roots, + state_roots=state.latest_state_roots, + ) + state.historical_roots.append(hash_tree_root(historical_batch)) + # Rotate current/previous epoch attestations + state.previous_epoch_attestations = state.current_epoch_attestations + state.current_epoch_attestations = [] +``` -Verify `block.state_root == hash_tree_root(state)` if there exists a `block` for the slot being processed. +### Per-slot processing + +At every `slot > GENESIS_SLOT` run the following function: + +```python +def advance_slot(state: BeaconState) -> None: + state.slot += 1 +``` + +### Per-block processing + +For every `block` except the genesis block, run `process_block_header(state, block)`, `process_randao(state, block)` and `process_eth1_data(state, block)`. + +#### Block header + +```python +def process_block_header(state: BeaconState, block: BeaconBlock) -> None: + # Verify that the slots match + assert block.slot == state.slot + # Verify that the parent matches + assert block.previous_block_root == hash_tree_root(state.latest_block_header) + # Save current block as the new latest block + state.latest_block_header = get_temporary_block_header(block) + # Verify proposer signature + proposer = state.validator_registry[get_beacon_proposer_index(state, state.slot)] + assert bls_verify( + pubkey=proposer.pubkey, + message_hash=signed_root(block), + signature=block.signature, + domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_BEACON_BLOCK) + ) +``` + +#### RANDAO + +```python +def process_randao(state: BeaconState, block: BeaconBlock) -> None: + proposer = state.validator_registry[get_beacon_proposer_index(state, state.slot)] + # Verify that the provided randao value is valid + assert bls_verify( + pubkey=proposer.pubkey, + message_hash=hash_tree_root(get_current_epoch(state)), + signature=block.body.randao_reveal, + domain=get_domain(state.fork, get_current_epoch(state), DOMAIN_RANDAO) + ) + # Mix it in + state.latest_randao_mixes[get_current_epoch(state) % LATEST_RANDAO_MIXES_LENGTH] = ( + xor(get_randao_mix(state, get_current_epoch(state)), + hash(block.body.randao_reveal)) + ) +``` + +#### Eth1 data + +```python +def process_eth1_data(state: BeaconState, block: BeaconBlock) -> None: + for eth1_data_vote in state.eth1_data_votes: + # If someone else has already voted for the same hash, add to its counter + if eth1_data_vote.eth1_data == block.body.eth1_data: + eth1_data_vote.vote_count += 1 + return + # If we're seeing this hash for the first time, make a new counter + 
state.eth1_data_votes.append(Eth1DataVote(eth1_data=block.body.eth1_data, vote_count=1)) +``` + +#### Transactions + +##### Proposer slashings + +Verify that `len(block.body.proposer_slashings) <= MAX_PROPOSER_SLASHINGS`. + +For each `proposer_slashing` in `block.body.proposer_slashings`, run the following function: + +```python +def process_proposer_slashing(state: BeaconState, + proposer_slashing: ProposerSlashing) -> None: + """ + Process ``ProposerSlashing`` transaction. + Note that this function mutates ``state``. + """ + proposer = state.validator_registry[proposer_slashing.proposer_index] + # Verify that the epoch is the same + assert slot_to_epoch(proposer_slashing.header_1.slot) == slot_to_epoch(proposer_slashing.header_2.slot) + # But the headers are different + assert proposer_slashing.header_1 != proposer_slashing.header_2 + # Proposer is not yet slashed + assert proposer.slashed is False + # Signatures are valid + for header in (proposer_slashing.header_1, proposer_slashing.header_2): + assert bls_verify( + pubkey=proposer.pubkey, + message_hash=signed_root(header), + signature=header.signature, + domain=get_domain(state.fork, slot_to_epoch(header.slot), DOMAIN_BEACON_BLOCK) + ) + slash_validator(state, proposer_slashing.proposer_index) +``` + +##### Attester slashings + +Verify that `len(block.body.attester_slashings) <= MAX_ATTESTER_SLASHINGS`. + +For each `attester_slashing` in `block.body.attester_slashings`, run the following function: + +```python +def process_attester_slashing(state: BeaconState, + attester_slashing: AttesterSlashing) -> None: + """ + Process ``AttesterSlashing`` transaction. + Note that this function mutates ``state``. + """ + attestation1 = attester_slashing.slashable_attestation_1 + attestation2 = attester_slashing.slashable_attestation_2 + # Check that the attestations are conflicting + assert attestation1.data != attestation2.data + assert ( + is_double_vote(attestation1.data, attestation2.data) or + is_surround_vote(attestation1.data, attestation2.data) + ) + assert verify_slashable_attestation(state, attestation1) + assert verify_slashable_attestation(state, attestation2) + slashable_indices = [ + index for index in attestation1.validator_indices + if ( + index in attestation2.validator_indices and + state.validator_registry[index].slashed is False + ) + ] + assert len(slashable_indices) >= 1 + for index in slashable_indices: + slash_validator(state, index) +``` + +##### Attestations + +Verify that `len(block.body.attestations) <= MAX_ATTESTATIONS`. + +For each `attestation` in `block.body.attestations`, run the following function: + +```python +def process_attestation(state: BeaconState, attestation: Attestation) -> None: + """ + Process ``Attestation`` transaction. + Note that this function mutates ``state``. 
+ """ + # Can't submit attestations that are too far in history (or in prehistory) + assert attestation.data.slot >= GENESIS_SLOT + assert state.slot <= attestation.data.slot + SLOTS_PER_EPOCH + # Can't submit attestations too quickly + assert attestation.data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot + # Verify that the justified epoch and root is correct + if slot_to_epoch(attestation.data.slot) >= get_current_epoch(state): + # Case 1: current epoch attestations + assert attestation.data.source_epoch == state.current_justified_epoch + assert attestation.data.source_root == state.current_justified_root + else: + # Case 2: previous epoch attestations + assert attestation.data.source_epoch == state.previous_justified_epoch + assert attestation.data.source_root == state.previous_justified_root + # Check that the crosslink data is valid + acceptable_crosslink_data = { + # Case 1: Latest crosslink matches the one in the state + attestation.data.previous_crosslink, + # Case 2: State has already been updated, state's latest crosslink matches the crosslink + # the attestation is trying to create + Crosslink( + crosslink_data_root=attestation.data.crosslink_data_root, + epoch=slot_to_epoch(attestation.data.slot) + ) + } + assert state.latest_crosslinks[attestation.data.shard] in acceptable_crosslink_data + # Attestation must be nonempty! + assert attestation.aggregation_bitfield != b'\x00' * len(attestation.aggregation_bitfield) + # Custody must be empty (to be removed in phase 1) + assert attestation.custody_bitfield == b'\x00' * len(attestation.custody_bitfield) + # Get the committee for the specific shard that this attestation is for + crosslink_committee = [ + committee for committee, shard in get_crosslink_committees_at_slot(state, attestation.data.slot) + if shard == attestation.data.shard + ][0] + # Custody bitfield must be a subset of the attestation bitfield + for i in range(len(crosslink_committee)): + if get_bitfield_bit(attestation.aggregation_bitfield, i) == 0b0: + assert get_bitfield_bit(attestation.custody_bitfield, i) == 0b0 + # Verify aggregate signature + participants = get_attestation_participants(state, attestation.data, attestation.aggregation_bitfield) + custody_bit_1_participants = get_attestation_participants(state, attestation.data, attestation.custody_bitfield) + custody_bit_0_participants = [i for i in participants if i not in custody_bit_1_participants] + + assert bls_verify_multiple( + pubkeys=[ + bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_0_participants]), + bls_aggregate_pubkeys([state.validator_registry[i].pubkey for i in custody_bit_1_participants]), + ], + message_hashes=[ + hash_tree_root(AttestationDataAndCustodyBit(data=attestation.data, custody_bit=0b0)), + hash_tree_root(AttestationDataAndCustodyBit(data=attestation.data, custody_bit=0b1)), + ], + signature=attestation.aggregate_signature, + domain=get_domain(state.fork, slot_to_epoch(attestation.data.slot), DOMAIN_ATTESTATION), + ) + # Crosslink data root is zero (to be removed in phase 1) + assert attestation.data.crosslink_data_root == ZERO_HASH + # Apply the attestation + pending_attestation = PendingAttestation( + data=attestation.data, + aggregation_bitfield=attestation.aggregation_bitfield, + custody_bitfield=attestation.custody_bitfield, + inclusion_slot=state.slot + ) + if slot_to_epoch(attestation.data.slot) == get_current_epoch(state): + state.current_epoch_attestations.append(pending_attestation) + elif slot_to_epoch(attestation.data.slot) == 
get_previous_epoch(state): + state.previous_epoch_attestations.append(pending_attestation) +``` + +##### Deposits + +Verify that `len(block.body.deposits) <= MAX_DEPOSITS`. + +For each `deposit` in `block.body.deposits`, run `process_deposit(state, deposit)`. + +##### Voluntary exits + +Verify that `len(block.body.voluntary_exits) <= MAX_VOLUNTARY_EXITS`. + +For each `exit` in `block.body.voluntary_exits`, run the following function: + +```python +def process_voluntary_exit(state: BeaconState, exit: VoluntaryExit) -> None: + """ + Process ``VoluntaryExit`` transaction. + Note that this function mutates ``state``. + """ + validator = state.validator_registry[exit.validator_index] + # Verify the validator has not yet exited + assert validator.exit_epoch == FAR_FUTURE_EPOCH + # Verify the validator has not initiated an exit + assert validator.initiated_exit is False + # Exits must specify an epoch when they become valid; they are not valid before then + assert get_current_epoch(state) >= exit.epoch + # Must have been in the validator set long enough + assert get_current_epoch(state) - validator.activation_epoch >= PERSISTENT_COMMITTEE_PERIOD + # Verify signature + assert bls_verify( + pubkey=validator.pubkey, + message_hash=signed_root(exit), + signature=exit.signature, + domain=get_domain(state.fork, exit.epoch, DOMAIN_VOLUNTARY_EXIT) + ) + # Run the exit + initiate_validator_exit(state, exit.validator_index) +``` + +##### Transfers + +Note: Transfers are a temporary functionality for phases 0 and 1, to be removed in phase 2. + +Verify that `len(block.body.transfers) <= MAX_TRANSFERS` and that all transfers are distinct. + +For each `transfer` in `block.body.transfers`, run the following function: + +```python +def process_transfer(state: BeaconState, transfer: Transfer) -> None: + """ + Process ``Transfer`` transaction. + Note that this function mutates ``state``. 
+ """ + # Verify the amount and fee aren't individually too big (for anti-overflow purposes) + assert state.validator_balances[transfer.sender] >= max(transfer.amount, transfer.fee) + # Verify that we have enough ETH to send, and that after the transfer the balance will be either + # exactly zero or at least MIN_DEPOSIT_AMOUNT + assert ( + state.validator_balances[transfer.sender] == transfer.amount + transfer.fee or + state.validator_balances[transfer.sender] >= transfer.amount + transfer.fee + MIN_DEPOSIT_AMOUNT + ) + # A transfer is valid in only one slot + assert state.slot == transfer.slot + # Only withdrawn or not-yet-deposited accounts can transfer + assert ( + get_current_epoch(state) >= state.validator_registry[transfer.sender].withdrawable_epoch or + state.validator_registry[transfer.sender].activation_epoch == FAR_FUTURE_EPOCH + ) + # Verify that the pubkey is valid + assert ( + state.validator_registry[transfer.sender].withdrawal_credentials == + BLS_WITHDRAWAL_PREFIX_BYTE + hash(transfer.pubkey)[1:] + ) + # Verify that the signature is valid + assert bls_verify( + pubkey=transfer.pubkey, + message_hash=signed_root(transfer), + signature=transfer.signature, + domain=get_domain(state.fork, slot_to_epoch(transfer.slot), DOMAIN_TRANSFER) + ) + # Process the transfer + state.validator_balances[transfer.sender] -= transfer.amount + transfer.fee + state.validator_balances[transfer.recipient] += transfer.amount + state.validator_balances[get_beacon_proposer_index(state, state.slot)] += transfer.fee +``` + +#### State root verification + +Verify the block's `state_root` by running the following function: + +```python +def verify_block_state_root(state: BeaconState, block: BeaconBlock) -> None: + assert block.state_root == hash_tree_root(state) +``` # References diff --git a/specs/core/1_shard-data-chains.md b/specs/core/1_shard-data-chains.md index 48eea7d2b..1713c6cbf 100644 --- a/specs/core/1_shard-data-chains.md +++ b/specs/core/1_shard-data-chains.md @@ -2,6 +2,8 @@ **NOTICE**: This document is a work-in-progress for researchers and implementers. It reflects recent spec changes and takes precedence over the [Python proof-of-concept implementation](https://github.com/ethereum/beacon_chain). +At the current stage, Phase 1, while fundamentally feature-complete, is still subject to change. Development teams with spare resources may consider starting on the "Shard chains and crosslink data" section; at least basic properties, such as the fact that a shard block can get created every slot and is dependent on both a parent block in the same shard and a beacon chain block at or before that same slot, are unlikely to change, though details are likely to undergo similar kinds of changes to what Phase 0 has undergone since the start of the year. 
+ ## Table of contents @@ -15,19 +17,20 @@ - [Time parameters](#time-parameters) - [Max operations per block](#max-operations-per-block) - [Signature domains](#signature-domains) - - [Helper functions](#helper-functions) + - [Shard chains and crosslink data](#shard-chains-and-crosslink-data) + - [Helper functions](#helper-functions) - [`get_split_offset`](#get_split_offset) - [`get_shuffled_committee`](#get_shuffled_committee) - [`get_persistent_committee`](#get_persistent_committee) - [`get_shard_proposer_index`](#get_shard_proposer_index) - - [Data Structures](#data-structures) + - [Data Structures](#data-structures) - [Shard chain blocks](#shard-chain-blocks) - - [Shard block processing](#shard-block-processing) + - [Shard block processing](#shard-block-processing) - [Verifying shard block data](#verifying-shard-block-data) - [Verifying a crosslink](#verifying-a-crosslink) - [Shard block fork choice rule](#shard-block-fork-choice-rule) -- [Updates to the beacon chain](#updates-to-the-beacon-chain) - - [Data structures](#data-structures) + - [Updates to the beacon chain](#updates-to-the-beacon-chain) + - [Data structures](#data-structures) - [`Validator`](#validator) - [`BeaconBlockBody`](#beaconblockbody) - [`BranchChallenge`](#branchchallenge) @@ -35,20 +38,20 @@ - [`BranchChallengeRecord`](#branchchallengerecord) - [`SubkeyReveal`](#subkeyreveal) - [Helpers](#helpers) - - [`get_attestation_merkle_depth`](#get_attestation_merkle_depth) + - [`get_attestation_data_merkle_depth`](#get_attestation_data_merkle_depth) - [`epoch_to_custody_period`](#epoch_to_custody_period) - [`slot_to_custody_period`](#slot_to_custody_period) - [`get_current_custody_period`](#get_current_custody_period) - [`verify_custody_subkey_reveal`](#verify_custody_subkey_reveal) - [`prepare_validator_for_withdrawal`](#prepare_validator_for_withdrawal) - [`penalize_validator`](#penalize_validator) - - [Per-slot processing](#per-slot-processing) + - [Per-slot processing](#per-slot-processing) - [Operations](#operations) - [Branch challenges](#branch-challenges) - [Branch responses](#branch-responses) - [Subkey reveals](#subkey-reveals) - - [Per-epoch processing](#per-epoch-processing) - - [One-time phase 1 initiation transition](#one-time-phase-1-initiation-transition) + - [Per-epoch processing](#per-epoch-processing) + - [One-time phase 1 initiation transition](#one-time-phase-1-initiation-transition) @@ -71,6 +74,9 @@ Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md | `SHARD_CHUNK_SIZE` | 2**5 (= 32) | bytes | | `SHARD_BLOCK_SIZE` | 2**14 (= 16,384) | bytes | | `MINOR_REWARD_QUOTIENT` | 2**8 (= 256) | | +| `MAX_POC_RESPONSE_DEPTH` | 5 | | +| `ZERO_PUBKEY` | int_to_bytes48(0)| | +| `VALIDATOR_NULL` | 2**64 - 1 | | #### Time parameters @@ -84,19 +90,25 @@ Phase 1 depends upon all of the constants defined in [Phase 0](0_beacon-chain.md #### Max operations per block -| Name | Value | -|-------------------------------|---------------| -| `MAX_BRANCH_CHALLENGES` | 2**2 (= 4) | -| `MAX_BRANCH_RESPONSES` | 2**4 (= 16) | -| `MAX_EARLY_SUBKEY_REVEALS` | 2**4 (= 16) | +| Name | Value | +|----------------------------------------------------|---------------| +| `MAX_BRANCH_CHALLENGES` | 2**2 (= 4) | +| `MAX_BRANCH_RESPONSES` | 2**4 (= 16) | +| `MAX_EARLY_SUBKEY_REVEALS` | 2**4 (= 16) | +| `MAX_INTERACTIVE_CUSTODY_CHALLENGE_INITIATIONS` | 2 | +| `MAX_INTERACTIVE_CUSTODY_CHALLENGE_RESPONSES` | 16 | +| `MAX_INTERACTIVE_CUSTODY_CHALLENGE_CONTINUTATIONS` | 16 | #### Signature domains -| Name | Value | 
-|------------------------|-----------------| -| `DOMAIN_SHARD_PROPOSER`| 129 | -| `DOMAIN_SHARD_ATTESTER`| 130 | -| `DOMAIN_CUSTODY_SUBKEY`| 131 | +| Name | Value | +|------------------------------|-----------------| +| `DOMAIN_SHARD_PROPOSER` | 129 | +| `DOMAIN_SHARD_ATTESTER` | 130 | +| `DOMAIN_CUSTODY_SUBKEY` | 131 | +| `DOMAIN_CUSTODY_INTERACTIVE` | 132 | + +# Shard chains and crosslink data ## Helper functions @@ -158,7 +170,6 @@ def get_persistent_committee(state: BeaconState, [i for i in later_committee if epoch % PERSISTENT_COMMITTEE_PERIOD >= get_switchover_epoch(i)] ))) ``` - #### `get_shard_proposer_index` ```python @@ -290,7 +301,7 @@ The `shard_chain_commitment` is only valid if it equals `compute_commitment(head ### Shard block fork choice rule -The fork choice rule for any shard is LMD GHOST using the shard chain attestations of the persistent committee and the beacon chain attestations of the crosslink committee currently assigned to that shard, but instead of being rooted in the genesis it is rooted in the latest block referenced in the most recent accepted crosslink (ie. `state.crosslinks[shard].crosslink_data_root`). Only blocks whose `beacon_chain_ref` is the block in the main beacon chain at the specified `slot` should be considered (if the beacon chain skips a slot, then the block at that slot is considered to be the block in the beacon chain at the highest slot lower than a slot). +The fork choice rule for any shard is LMD GHOST using the shard chain attestations of the persistent committee and the beacon chain attestations of the crosslink committee currently assigned to that shard, but instead of being rooted in the genesis it is rooted in the block referenced in the most recent accepted crosslink (ie. `state.crosslinks[shard].shard_block_root`). Only blocks whose `beacon_chain_ref` is the block in the main beacon chain at the specified `slot` should be considered (if the beacon chain skips a slot, then the block at that slot is considered to be the block in the beacon chain at the highest slot lower than a slot). 
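+
+The following is a non-normative sketch of this rule, not part of the specification. It assumes a hypothetical local `store` object exposing the shard blocks known to the client (`get_block`, `get_children`, `in_subtree`, `has_canonical_beacon_ref`) and each validator's latest attestation target on this shard (`get_latest_attestation_target`); it also assumes the shard block container defined in this document is named `ShardBlock`, and that `validators` is supplied by the caller as the union of the persistent committee and the crosslink committee currently assigned to the shard:
+
+```python
+def get_shard_head(store, state: BeaconState, shard: Shard, validators: List[ValidatorIndex]) -> ShardBlock:
+    # Latest attestation target (a shard block) of each validator, if any
+    latest_targets = {
+        index: store.get_latest_attestation_target(index, shard)
+        for index in validators
+    }
+    # Root the fork choice at the block referenced by the most recent accepted crosslink
+    head = store.get_block(state.crosslinks[shard].shard_block_root)
+    while True:
+        # Only consider children anchored to the canonical beacon chain block for their slot
+        children = [b for b in store.get_children(head) if store.has_canonical_beacon_ref(b)]
+        if len(children) == 0:
+            return head
+        # LMD GHOST: follow the child whose subtree carries the most latest-attestation weight,
+        # breaking ties deterministically by hash
+        head = max(children, key=lambda block: (
+            sum(
+                get_effective_balance(state, index)
+                for index, target in latest_targets.items()
+                if target is not None and store.in_subtree(target, block)
+            ),
+            hash_tree_root(block),
+        ))
+```
+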
# Updates to the beacon chain @@ -301,7 +312,6 @@ The fork choice rule for any shard is LMD GHOST using the shard chain attestatio Add member values to the end of the `Validator` object: ```python - 'open_branch_challenges': [BranchChallengeRecord], 'next_subkey_to_reveal': 'uint64', 'reveal_max_periods_late': 'uint64', ``` @@ -309,7 +319,6 @@ Add member values to the end of the `Validator` object: And the initializers: ```python - 'open_branch_challenges': [], 'next_subkey_to_reveal': get_current_custody_period(state), 'reveal_max_periods_late': 0, ``` @@ -322,6 +331,10 @@ Add member values to the `BeaconBlockBody` structure: 'branch_challenges': [BranchChallenge], 'branch_responses': [BranchResponse], 'subkey_reveals': [SubkeyReveal], + 'interactive_custody_challenge_initiations': [InteractiveCustodyChallengeInitiation], + 'interactive_custody_challenge_responses': [InteractiveCustodyChallengeResponse], + 'interactive_custody_challenge_continuations': [InteractiveCustodyChallengeContinuation], + ``` And initialize to the following: @@ -332,6 +345,17 @@ And initialize to the following: 'subkey_reveals': [], ``` +### `BeaconState` + +Add member values to the `BeaconState` structure: + +```python + 'branch_challenge_records': [BranchChallengeRecord], + 'next_branch_challenge_id': 'uint64', + 'custody_challenge_records': [InteractiveCustodyChallengeRecord], + 'next_custody_challenge_id': 'uint64', +``` + ### `BranchChallenge` Define a `BranchChallenge` as follows: @@ -350,11 +374,10 @@ Define a `BranchResponse` as follows: ```python { - 'responder_index': 'uint64', + 'challenge_id': 'uint64', + 'responding_to_custody_challenge': 'bool', 'data': 'bytes32', 'branch': ['bytes32'], - 'data_index': 'uint64', - 'root': 'bytes32', } ``` @@ -364,14 +387,75 @@ Define a `BranchChallengeRecord` as follows: ```python { + 'challenge_id': 'uint64', 'challenger_index': 'uint64', + 'responder_index': 'uint64', 'root': 'bytes32', 'depth': 'uint64', - 'inclusion_epoch': 'uint64', + 'deadline': 'uint64', 'data_index': 'uint64', } ``` +### `InteractiveCustodyChallengeRecord` + +```python +{ + 'challenge_id': 'uint64', + 'challenger_index': 'uint64', + 'responder_index': 'uint64', + # Initial data root + 'data_root': 'bytes32', + # Initial custody bit + 'custody_bit': 'bool', + # Responder subkey + 'responder_subkey': 'bytes96', + # The hash in the PoC tree in the position that we are currently at + 'current_custody_tree_node': 'bytes32', + # The position in the tree, in terms of depth and position offset + 'depth': 'uint64', + 'offset': 'uint64', + # Max depth of the branch + 'max_depth': 'uint64', + # Deadline to respond (as an epoch) + 'deadline': 'uint64', +} +``` + +### `InteractiveCustodyChallengeInitiation` + +```python +{ + 'attestation': SlashableAttestation, + 'responder_index': 'uint64', + 'challenger_index': 'uint64', + 'responder_subkey': 'bytes96', + 'signature': 'bytes96', +} +``` + +### `InteractiveCustodyChallengeResponse` + +```python +{ + 'challenge_id': 'uint64', + 'hashes': ['bytes32'], + 'signature': 'bytes96', +} +``` + +### `InteractiveCustodyChallengeContinuation` + +```python +{ + 'challenge_id': 'uint64', + 'sub_index': 'uint64', + 'new_custody_tree_node': 'bytes32', + 'proof': ['bytes32'], + 'signature': 'bytes96', +} +``` + ### `SubkeyReveal` Define a `SubkeyReveal` as follows: @@ -388,6 +472,20 @@ Define a `SubkeyReveal` as follows: ## Helpers +### `get_branch_challenge_record_by_id` + +```python +def get_branch_challenge_record_by_id(state: BeaconState, id: int) -> 
BranchChallengeRecord:
+    return [c for c in state.branch_challenge_records if c.challenge_id == id][0]
+```
+
+### `get_custody_challenge_record_by_id`
+
+```python
+def get_custody_challenge_record_by_id(state: BeaconState, id: int) -> InteractiveCustodyChallengeRecord:
+    return [c for c in state.custody_challenge_records if c.challenge_id == id][0]
+```
+
 ### `get_attestation_merkle_depth`

 ```python
@@ -453,6 +551,19 @@ def verify_custody_subkey_reveal(pubkey: bytes48,
     )
 ```

+### `verify_signed_challenge_message`
+
+```python
+def verify_signed_challenge_message(state: BeaconState, message: Any, pubkey: bytes48) -> bool:
+    return bls_verify(
+        message_hash=signed_root(message),
+        pubkey=pubkey,
+        signature=message.signature,
+        domain=get_domain(state, get_current_epoch(state), DOMAIN_CUSTODY_INTERACTIVE)
+    )
+```
+
 ### `penalize_validator`

 Change the definition of `penalize_validator` as follows:

@@ -493,29 +604,88 @@ Add the following operations to the per-slot processing, in the order given below.

 #### Branch challenges

 Verify that `len(block.body.branch_challenges) <= MAX_BRANCH_CHALLENGES`.

-For each `challenge` in `block.body.branch_challenges`:
+For each `challenge` in `block.body.branch_challenges`, run:

-* Verify that `slot_to_epoch(challenge.attestation.data.slot) >= get_current_epoch(state) - MAX_BRANCH_CHALLENGE_DELAY`.
-* Verify that `state.validator_registry[responder_index].exit_epoch >= get_current_epoch(state) - MAX_BRANCH_CHALLENGE_DELAY`.
-* Verify that `verify_slashable_attestation(state, challenge.attestation)` returns `True`.
-* Verify that `challenge.responder_index` is in `challenge.attestation.validator_indices`.
-* Let `depth = get_attestation_merkle_depth(challenge.attestation)`. Verify that `challenge.data_index < 2**depth`.
-* Verify that there does not exist a `BranchChallengeRecord` in `state.validator_registry[challenge.responder_index].open_branch_challenges` with `root == challenge.attestation.data.shard_chain_commitment` and `data_index == data_index`.
-* Append to `state.validator_registry[challenge.responder_index].open_branch_challenges` the object `BranchChallengeRecord(challenger_index=get_beacon_proposer_index(state, state.slot), root=challenge.attestation.data.shard_chain_commitment, depth=depth, inclusion_epoch=get_current_epoch(state), data_index=data_index)`.
-
-**Invariant**: the `open_branch_challenges` array will always stay sorted in order of `inclusion_epoch`.
+```python
+def process_branch_challenge(state: BeaconState,
+                             challenge: BranchChallenge) -> None:
+    # Check that it's not too late to challenge
+    assert slot_to_epoch(challenge.attestation.data.slot) >= get_current_epoch(state) - MAX_BRANCH_CHALLENGE_DELAY
+    assert state.validator_registry[challenge.responder_index].exit_epoch >= get_current_epoch(state) - MAX_BRANCH_CHALLENGE_DELAY
+    # Check the attestation is valid
+    assert verify_slashable_attestation(state, challenge.attestation)
+    # Check that the responder participated
+    assert challenge.responder_index in challenge.attestation.validator_indices
+    # Check the challenge is not a duplicate
+    assert [
+        c for c in state.branch_challenge_records if c.root == challenge.attestation.data.shard_chain_commitment and
+        c.data_index == challenge.data_index
+    ] == []
+    # Check validity of depth
+    depth = get_attestation_merkle_depth(challenge.attestation)
+    assert challenge.data_index < 2**depth
+    # Add new challenge
+    state.branch_challenge_records.append(BranchChallengeRecord(
+        challenge_id=state.next_branch_challenge_id,
+        challenger_index=get_beacon_proposer_index(state, state.slot),
+        root=challenge.attestation.data.shard_chain_commitment,
+        depth=depth,
+        deadline=get_current_epoch(state) + CHALLENGE_RESPONSE_DEADLINE,
+        data_index=challenge.data_index
+    ))
+    state.next_branch_challenge_id += 1
+```

 #### Branch responses

 Verify that `len(block.body.branch_responses) <= MAX_BRANCH_RESPONSES`.

-For each `response` in `block.body.branch_responses`:
+For each `response` in `block.body.branch_responses`, if `response.responding_to_custody_challenge == False`, run:

-* Find the `BranchChallengeRecord` in `state.validator_registry[response.responder_index].open_branch_challenges` whose (`root`, `data_index`) match the (`root`, `data_index`) of the `response`. Verify that one such record exists (it is not possible for there to be more than one), call it `record`.
-* Verify that `verify_merkle_branch(leaf=response.data, branch=response.branch, depth=record.depth, index=record.data_index, root=record.root)` is True.
-* Verify that `get_current_epoch(state) >= record.inclusion_epoch + ENTRY_EXIT_DELAY`.
-* Remove the `record` from `state.validator_registry[response.responder_index].open_branch_challenges`
-* Determine the proposer `proposer_index = get_beacon_proposer_index(state, state.slot)` and set `state.validator_balances[proposer_index] += base_reward(state, index) // MINOR_REWARD_QUOTIENT`.
+```python
+def process_branch_exploration_response(state: BeaconState,
+                                        response: BranchResponse) -> None:
+    challenge = get_branch_challenge_record_by_id(state, response.challenge_id)
+    assert verify_merkle_branch(
+        leaf=response.data,
+        branch=response.branch,
+        depth=challenge.depth,
+        index=challenge.data_index,
+        root=challenge.root
+    )
+    # Must wait at least ENTRY_EXIT_DELAY after the challenge was included before responding
+    # (the challenge deadline equals its inclusion epoch plus CHALLENGE_RESPONSE_DEADLINE)
+    assert get_current_epoch(state) >= challenge.deadline - CHALLENGE_RESPONSE_DEADLINE + ENTRY_EXIT_DELAY
+    state.branch_challenge_records.remove(challenge)
+    # Reward the proposer
+    proposer_index = get_beacon_proposer_index(state, state.slot)
+    state.validator_balances[proposer_index] += base_reward(state, proposer_index) // MINOR_REWARD_QUOTIENT
+```
+
+If `response.responding_to_custody_challenge == True`, run:
+
+```python
+def process_branch_custody_response(state: BeaconState,
+                                    response: BranchResponse) -> None:
+    challenge = get_custody_challenge_record_by_id(state, response.challenge_id)
+    responder = state.validator_registry[challenge.responder_index]
+    # Verify we're not too late
+    assert get_current_epoch(state) < responder.withdrawable_epoch
+    # Verify the Merkle branch *of the data tree*
+    assert verify_merkle_branch(
+        leaf=response.data,
+        branch=response.branch,
+        depth=challenge.max_depth,
+        index=challenge.offset,
+        root=challenge.data_root
+    )
+    # Responder wins
+    if hash(challenge.responder_subkey + response.data) == challenge.current_custody_tree_node:
+        penalize_validator(state, challenge.challenger_index, challenge.responder_index)
+    # Challenger wins
+    else:
+        penalize_validator(state, challenge.responder_index, challenge.challenger_index)
+    # Either way, the game is over
+    state.custody_challenge_records.remove(challenge)
+```

 #### Subkey reveals

@@ -541,6 +711,126 @@ In case (ii):

 * Set `state.validator_registry[reveal.validator_index].next_subkey_to_reveal += 1`
 * Set `state.validator_registry[reveal.validator_index].reveal_max_periods_late = max(state.validator_registry[reveal.validator_index].reveal_max_periods_late, get_current_period(state) - reveal.period)`.

+#### Interactive custody challenge initiations
+
+Verify that `len(block.body.interactive_custody_challenge_initiations) <= MAX_INTERACTIVE_CUSTODY_CHALLENGE_INITIATIONS`.
+
+For each `initiation` in `block.body.interactive_custody_challenge_initiations`, use the following function to process it:
+
+```python
+def process_initiation(state: BeaconState,
+                       initiation: InteractiveCustodyChallengeInitiation) -> None:
+    challenger = state.validator_registry[initiation.challenger_index]
+    responder = state.validator_registry[initiation.responder_index]
+    # Verify the signature
+    assert verify_signed_challenge_message(state, initiation, challenger.pubkey)
+    # Verify the attestation
+    assert verify_slashable_attestation(state, initiation.attestation)
+    # Check that the responder actually participated in the attestation
+    assert initiation.responder_index in initiation.attestation.validator_indices
+    # A validator can be the challenger or responder in at most one challenge at a time
+    for c in state.custody_challenge_records:
+        assert c.challenger_index != initiation.challenger_index
+        assert c.responder_index != initiation.responder_index
+    # Can't challenge if you've been penalized
+    assert challenger.penalized_epoch == FAR_FUTURE_EPOCH
+    # Make sure the revealed subkey is valid
+    assert verify_custody_subkey_reveal(
+        pubkey=responder.pubkey,
+        subkey=initiation.responder_subkey,
+        period=slot_to_custody_period(initiation.attestation.data.slot)
+    )
+    # Verify that the attestation is still eligible for challenging
+    min_challengeable_epoch = responder.exit_epoch - CUSTODY_PERIOD_LENGTH * (1 + responder.reveal_max_periods_late)
+    assert min_challengeable_epoch <= slot_to_epoch(initiation.attestation.data.slot)
+    # Create a new challenge object
+    state.custody_challenge_records.append(InteractiveCustodyChallengeRecord(
+        challenge_id=state.next_custody_challenge_id,
+        challenger_index=initiation.challenger_index,
+        responder_index=initiation.responder_index,
+        data_root=initiation.attestation.custody_commitment,
+        custody_bit=get_bitfield_bit(initiation.attestation.custody_bitfield, initiation.attestation.validator_indices.index(initiation.responder_index)),
+        responder_subkey=initiation.responder_subkey,
+        current_custody_tree_node=ZERO_HASH,
+        depth=0,
+        offset=0,
+        max_depth=get_attestation_data_merkle_depth(initiation.attestation.data),
+        deadline=get_current_epoch(state) + CHALLENGE_RESPONSE_DEADLINE
+    ))
+    state.next_custody_challenge_id += 1
+    # Responder can't withdraw yet!
+    responder.withdrawable_epoch = FAR_FUTURE_EPOCH
+```
+
+#### Interactive custody challenge responses
+
+A response provides up to 32 hashes that sit under the currently known proof-of-custody tree node. Note that at the beginning the known tree node is just one bit of the custody root, so we ask the responder to sign their first response, committing to the top 5 levels of the tree and therefore to the root hash; at all other stages in the game responses are self-verifying.
+
+Verify that `len(block.body.interactive_custody_challenge_responses) <= MAX_INTERACTIVE_CUSTODY_CHALLENGE_RESPONSES`.
+
+For each `response` in `block.body.interactive_custody_challenge_responses`, use the following function to process it:
+
+```python
+def process_response(state: BeaconState,
+                     response: InteractiveCustodyChallengeResponse) -> None:
+    challenge = get_custody_challenge_record_by_id(state, response.challenge_id)
+    responder = state.validator_registry[challenge.responder_index]
+    # Check that the right number of hashes was provided
+    expected_depth = min(challenge.max_depth - challenge.depth, MAX_POC_RESPONSE_DEPTH)
+    assert 2**expected_depth == len(response.hashes)
+    # Must make some progress!
+    assert expected_depth > 0
+    # Check the hashes match the previously provided root
+    root = merkle_root(response.hashes)
+    # If this is the first response, check the bit and the signature and set the root
+    if challenge.depth == 0:
+        assert get_bitfield_bit(root, 0) == challenge.custody_bit
+        assert verify_signed_challenge_message(state, response, responder.pubkey)
+        challenge.current_custody_tree_node = root
+    # Otherwise just check the response against the root
+    else:
+        assert root == challenge.current_custody_tree_node
+    # Update challenge data
+    challenge.deadline = FAR_FUTURE_EPOCH
+    responder.withdrawable_epoch = get_current_epoch(state) + MAX_POC_RESPONSE_DEPTH
+```
+
+#### Interactive custody challenge continuations
+
+Once a response provides its hashes, the challenger has the right to pick any one of them that they believe is constructed incorrectly and continue the game from there. Note that eventually the game reaches the point where the `new_custody_tree_node` is a leaf node.
+
+Verify that `len(block.body.interactive_custody_challenge_continuations) <= MAX_INTERACTIVE_CUSTODY_CHALLENGE_CONTINUATIONS`.
+
+For each `continuation` in `block.body.interactive_custody_challenge_continuations`, use the following function to process it:
+
+```python
+def process_continuation(state: BeaconState,
+                         continuation: InteractiveCustodyChallengeContinuation) -> None:
+    challenge = get_custody_challenge_record_by_id(state, continuation.challenge_id)
+    challenger = state.validator_registry[challenge.challenger_index]
+    responder = state.validator_registry[challenge.responder_index]
+    expected_depth = min(challenge.max_depth - challenge.depth, MAX_POC_RESPONSE_DEPTH)
+    # Verify we're not too late
+    assert get_current_epoch(state) < responder.withdrawable_epoch
+    # Verify the Merkle branch (the previous custody response provided the next level of hashes so the
+    # challenger has the info to make any Merkle branch)
+    assert verify_merkle_branch(
+        leaf=continuation.new_custody_tree_node,
+        branch=continuation.proof,
+        depth=expected_depth,
+        index=continuation.sub_index,
+        root=challenge.current_custody_tree_node
+    )
+    # Verify signature
+    assert verify_signed_challenge_message(state, continuation, challenger.pubkey)
+    # Update the challenge data
+    challenge.current_custody_tree_node = continuation.new_custody_tree_node
+    challenge.depth += expected_depth
+    challenge.deadline = get_current_epoch(state) + MAX_POC_RESPONSE_DEPTH
+    responder.withdrawable_epoch = FAR_FUTURE_EPOCH
+    challenge.offset = challenge.offset * 2**expected_depth + continuation.sub_index
+```
+
 ## Per-epoch processing

 Add the following loop immediately below the `process_ejections` loop:

@@ -548,12 +838,18 @@ Add the following loop immediately below the `process_ejections` loop:

 ```python
 def process_challenge_absences(state: BeaconState) -> None:
     """
-    Iterate through the validator registry
+    Iterate through the challenge lists
     and penalize validators with balance that did not answer challenges.
""" - for index, validator in enumerate(state.validator_registry): - if len(validator.open_branch_challenges) > 0 and get_current_epoch(state) > validator.open_branch_challenges[0].inclusion_epoch + CHALLENGE_RESPONSE_DEADLINE: - penalize_validator(state, index, validator.open_branch_challenges[0].challenger_index) + for c in state.branch_challenge_records: + if get_current_epoch(state) > c.deadline: + penalize_validator(state, c.responder_index, c.challenger_index) + + for c in state.custody_challenge_records: + if get_current_epoch(state) > c.deadline: + penalize_validator(state, c.responder_index, c.challenger_index) + if get_current_epoch(state) > state.validator_registry[c.responder_index].withdrawable_epoch: + penalize_validator(state, c.challenger_index, c.responder_index) ``` In `process_penalties_and_exits`, change the definition of `eligible` to the following (note that it is not a pure function because `state` is declared in the surrounding scope): @@ -562,7 +858,7 @@ In `process_penalties_and_exits`, change the definition of `eligible` to the fol def eligible(index): validator = state.validator_registry[index] # Cannot exit if there are still open branch challenges - if len(validator.open_branch_challenges) > 0: + if [c for c in state.branch_challenge_records if c.responder_index == index] != []: return False # Cannot exit if you have not revealed all of your subkeys elif validator.next_subkey_to_reveal <= epoch_to_custody_period(validator.exit_epoch): @@ -582,7 +878,15 @@ Run the following on the fork block after per-slot processing and before per-blo For all `validator` in `ValidatorRegistry`, update it to the new format and fill the new member values with: ```python - 'open_branch_challenges': [], 'next_subkey_to_reveal': get_current_custody_period(state), 'reveal_max_periods_late': 0, ``` + +Update the `BeaconState` to the new format and fill the new member values with: + +```python + 'branch_challenge_records': [], + 'next_branch_challenge_id': 0, + 'custody_challenge_records': [], + 'next_custody_challenge_id': 0, +``` diff --git a/specs/simple-serialize.md b/specs/simple-serialize.md index 109ee289e..862d13edf 100644 --- a/specs/simple-serialize.md +++ b/specs/simple-serialize.md @@ -1,424 +1,124 @@ -# [WIP] SimpleSerialize (SSZ) Spec +# SimpleSerialiZe (SSZ) -This is the **work in progress** document to describe `SimpleSerialize`, the -current selected serialization method for Ethereum 2.0 using the Beacon Chain. +This is a **work in progress** describing typing, serialization and Merkleization of Ethereum 2.0 objects. -This document specifies the general information for serializing and -deserializing objects and data types. 
+## Table of contents -## ToC - -* [About](#about) -* [Variables and Functions](#variables-and-functions) -* [Constants](#constants) -* [Overview](#overview) - + [Serialize/Encode](#serializeencode) - - [uintN](#uintn) - - [bool](#bool) - - [bytesN](#bytesn) - - [List/Vectors](#listvectors) - - [Container](#container) - + [Deserialize/Decode](#deserializedecode) - - [uintN](#uintn-1) - - [bool](#bool-1) - - [bytesN](#bytesn-1) - - [List/Vectors](#listvectors-1) - - [Container](#container-1) - + [Tree Hash](#tree-hash) - - [`uint8`..`uint256`, `bool`, `bytes1`..`bytes32`](#uint8uint256-bool-bytes1bytes32) - - [`uint264`..`uintN`, `bytes33`..`bytesN`](#uint264uintn-bytes33bytesn) - - [List/Vectors](#listvectors-2) - - [Container](#container-2) - + [Signed Roots](#signed-roots) -* [Implementations](#implementations) - -## About - -`SimpleSerialize` was first proposed by Vitalik Buterin as the serialization -protocol for use in the Ethereum 2.0 Beacon Chain. - -The core feature of `ssz` is the simplicity of the serialization with low -overhead. - -## Variables and Functions - -| Term | Definition | -|:-------------|:-----------------------------------------------------------------------------------------------| -| `little` | Little endian. | -| `byteorder` | Specifies [endianness](https://en.wikipedia.org/wiki/Endianness): big endian or little endian. | -| `len` | Length/number of bytes. | -| `to_bytes` | Convert to bytes. Should take parameters ``size`` and ``byteorder``. | -| `from_bytes` | Convert from bytes to object. Should take ``bytes`` and ``byteorder``. | -| `value` | The value to serialize. | -| `rawbytes` | Raw serialized bytes. | -| `deserialized_object` | The deserialized data in the data structure of your programming language. | -| `new_index` | An index to keep track the latest position where the `rawbytes` have been deserialized. | +- [Constants](#constants) +- [Typing](#typing) + - [Basic types](#basic-types) + - [Composite types](#composite-types) + - [Aliases](#aliases) +- [Serialization](#serialization) + - [`"uintN"`](#uintn) + - [`"bool"`](#bool) + - [Tuples, containers, lists](#tuples-containers-lists) +- [Deserialization](#deserialization) +- [Merkleization](#merkleization) +- [Self-signed containers](#self-signed-containers) +- [Implementations](#implementations) ## Constants -| Constant | Value | Definition | -|:------------------|:-----:|:--------------------------------------------------------------------------------------| -| `LENGTH_BYTES` | 4 | Number of bytes used for the length added before a variable-length serialized object. | -| `SSZ_CHUNK_SIZE` | 128 | Number of bytes for the chunk size of the Merkle tree leaf. | +| Name | Value | Description | +|-|-|-| +| `BYTES_PER_CHUNK` | `32` | Number of bytes per chunk. +| `BYTES_PER_LENGTH_PREFIX` | `4` | Number of bytes per serialized length prefix. | -## Overview +## Typing +### Basic types -### Serialize/Encode +* `"uintN"`: `N`-bit unsigned integer (where `N in [8, 16, 32, 64, 128, 256]`) +* `"bool"`: `True` or `False` -#### uintN +### Composite types -| uint Type | Usage | -|:---------:|:-----------------------------------------------------------| -| `uintN` | Type of `N` bits unsigned integer, where ``N % 8 == 0``. | +* **container**: ordered heterogenous collection of values + * key-pair curly bracket notation `{}`, e.g. `{'foo': "uint64", 'bar': "bool"}` +* **tuple**: ordered fixed-length homogeneous collection of values + * angle bracket notation `[type, N]`, e.g. 
`["uint64", N]` +* **list**: ordered variable-length homogenous collection of values + * angle bracket notation `[type]`, e.g. `["uint64"]` -Convert directly to bytes the size of the int. (e.g. ``uint16 = 2 bytes``) +### Aliases -All integers are serialized as **little endian**. +For convenience we alias: -| Check to perform | Code | -|:-----------------------|:----------------------| -| Size is a byte integer | ``int_size % 8 == 0`` | +* `"byte"` to `"uint8"` (this is a basic type) +* `"bytes"` to `["byte"]` (this is *not* a basic type) +* `"bytesN"` to `["byte", N]` (this is *not* a basic type) + +## Serialization + +We recursively define the `serialize` function which consumes an object `value` (of the type specified) and returns a bytestring of type `"bytes"`. + +*Note*: In the function definitions below (`serialize`, `hash_tree_root`, `signed_root`, etc.) objects implicitly carry their type. + +### `uintN` ```python -assert(int_size % 8 == 0) -buffer_size = int_size / 8 -return value.to_bytes(buffer_size, 'little') +assert N in [8, 16, 32, 64, 128, 256] +return value.to_bytes(N // 8, 'little') ``` -#### bool - -Convert directly to a single 0x00 or 0x01 byte. - -| Check to perform | Code | -|:------------------|:---------------------------| -| Value is boolean | ``value in (True, False)`` | +### `bool` ```python -assert(value in (True, False)) +assert value in (True, False) return b'\x01' if value is True else b'\x00' ``` -#### bytesN +### Tuples, containers, lists -A fixed-size byte array. - -| Checks to perform | Code | -|:---------------------------------------|:---------------------| -| Length in bytes is correct for `bytesN` | ``len(value) == N`` | +If `value` is fixed-length (i.e. does not embed a list): ```python -assert(len(value) == N) - -return value +return ''.join([serialize(element) for element in value]) ``` -#### List/Vectors - -Lists are a collection of elements of the same homogeneous type. - -| Check to perform | Code | -|:--------------------------------------------|:----------------------------| -| Length of serialized list fits into 4 bytes | ``len(serialized) < 2**32`` | - -1. Serialize all list elements individually and concatenate them. -2. Prefix the concatenation with its length encoded as a `4-byte` **little-endian** unsigned integer. - -We define `bytes` to be a synonym of `List[bytes1]`. - -**Example in Python** +If `value` is variable-length (i.e. embeds a list): ```python - -serialized_list_string = b'' - -for item in value: - serialized_list_string += serialize(item) - -assert(len(serialized_list_string) < 2**32) - -serialized_len = (len(serialized_list_string).to_bytes(LENGTH_BYTES, 'little')) - -return serialized_len + serialized_list_string +serialized_bytes = ''.join([serialize(element) for element in value]) +assert len(serialized_bytes) < 2**(8 * BYTES_PER_LENGTH_PREFIX) +serialized_length = len(serialized_bytes).to_bytes(BYTES_PER_LENGTH_PREFIX, 'little') +return serialized_length + serialized_bytes ``` -#### Container +## Deserialization -A container represents a heterogenous, associative collection of key-value pairs. Each pair is referred to as a `field`. To get the value for a given field, you supply the key which is a symbol unique to the container referred to as the field's `name`. The container data type is analogous to the `struct` type found in many languages like C or Go. +Because serialization is an injective function (i.e. 
two distinct objects of the same type will serialize to different values) any bytestring has at most one object it could deserialize to. Efficient algorithms for computing this object can be found in [the implementations](#implementations). -To serialize a container, obtain the list of its field's names in the specified order. For each field name in this list, obtain the corresponding value and serialize it. Tightly pack the complete set of serialized values in the same order as the field names into a buffer. Calculate the size of this buffer of serialized bytes and encode as a `4-byte` **little endian** `uint32`. Prepend the encoded length to the buffer. The result of this concatenation is the final serialized value of the container. +## Merkleization -| Check to perform | Code | -|:----------------------------------------------|:----------------------------| -| Length of serialized fields fits into 4 bytes | ``len(serialized) < 2**32`` | +We first define helper functions: -To serialize: +* `pack`: Given ordered objects of the same basic type, serialize them, pack them into `BYTES_PER_CHUNK`-byte chunks, right-pad the last chunk with zero bytes, and return the chunks. +* `merkleize`: Given ordered `BYTES_PER_CHUNK`-byte chunks, if necessary append zero chunks so that the number of chunks is a power of two, Merkleize the chunks, and return the root. +* `mix_in_length`: Given a Merkle root `root` and a length `length` (`"uint256"` little-endian serialization) return `hash(root + length)`. -1. Get the list of the container's fields. +We now define Merkleization `hash_tree_root(value)` of an object `value` recursively: -2. For each name in the list, obtain the corresponding value from the container and serialize it. Place this serialized value into a buffer. The serialized values should be tightly packed. +* `merkleize(pack(value))` if `value` is a basic object or a tuple of basic objects +* `mix_in_length(merkleize(pack(value)), len(value))` if `value` is a list of basic objects +* `merkleize([hash_tree_root(element) for element in value])` if `value` is a tuple of composite objects or a container +* `mix_in_length(merkleize([hash_tree_root(element) for element in value]), len(value))` if `value` is a list of composite objects -3. Get the number of raw bytes in the serialized buffer. Encode that number as a `4-byte` **little endian** `uint32`. +## Self-signed containers -4. Prepend the length to the serialized buffer. - -**Example in Python** - -```python -def get_field_names(typ): - return typ.fields.keys() - -def get_value_for_field_name(value, field_name): - return getattr(value, field_name) - -def get_type_for_field_name(typ, field_name): - return typ.fields[field_name] - -serialized_buffer = b'' - -typ = type(value) -for field_name in get_field_names(typ): - field_value = get_value_for_field_name(value, field_name) - field_type = get_type_for_field_name(typ, field_name) - serialized_buffer += serialize(field_value, field_type) - -assert(len(serialized_buffer) < 2**32) - -serialized_len = (len(serialized_buffer).to_bytes(LENGTH_BYTES, 'little')) - -return serialized_len + serialized_buffer -``` - -### Deserialize/Decode - -The decoding requires knowledge of the type of the item to be decoded. When -performing decoding on an entire serialized string, it also requires knowledge -of the order in which the objects have been serialized. 
- -Note: Each return will provide: -- `deserialized_object` -- `new_index` - -At each step, the following checks should be made: - -| Check to perform | Check | -|:-------------------------|:-----------------------------------------------------------| -| Ensure sufficient length | ``len(rawbytes) >= current_index + deserialize_length`` | - -At the final step, the following checks should be made: - -| Check to perform | Check | -|:-------------------------|:-------------------------------------| -| Ensure no extra length | `new_index == len(rawbytes)` | - -#### uintN - -Convert directly from bytes into integer utilising the number of bytes the same -size as the integer length. (e.g. ``uint16 == 2 bytes``) - -All integers are interpreted as **little endian**. - -```python -byte_length = int_size / 8 -new_index = current_index + byte_length -assert(len(rawbytes) >= new_index) -return int.from_bytes(rawbytes[current_index:current_index+byte_length], 'little'), new_index -``` - -#### bool - -Return True if 0x01, False if 0x00. - -```python -assert rawbytes in (b'\x00', b'\x01') -return True if rawbytes == b'\x01' else False -``` - -#### bytesN - -Return the `N` bytes. - -```python -assert(len(rawbytes) >= current_index + N) -new_index = current_index + N -return rawbytes[current_index:current_index+N], new_index -``` - -#### List/Vectors - -Deserialize each element in the list. -1. Get the length of the serialized list. -2. Loop through deserializing each item in the list until you reach the -entire length of the list. - -| Check to perform | code | -|:------------------------------------------|:----------------------------------------------------------------| -| ``rawbytes`` has enough left for length | ``len(rawbytes) > current_index + LENGTH_BYTES`` | -| list is not greater than serialized bytes | ``len(rawbytes) > current_index + LENGTH_BYTES + total_length`` | - -```python -assert(len(rawbytes) > current_index + LENGTH_BYTES) -total_length = int.from_bytes(rawbytes[current_index:current_index + LENGTH_BYTES], 'little') -new_index = current_index + LENGTH_BYTES + total_length -assert(len(rawbytes) >= new_index) -item_index = current_index + LENGTH_BYTES -deserialized_list = [] - -while item_index < new_index: - object, item_index = deserialize(rawbytes, item_index, item_type) - deserialized_list.append(object) - -return deserialized_list, new_index -``` - -#### Container - -Refer to the section on container encoding for some definitions. - -To deserialize a container, loop over each field in the container and use the type of that field to know what kind of deserialization to perform. Consume successive elements of the data stream for each successful deserialization. - -Instantiate a container with the full set of deserialized data, matching each member with the corresponding field. - -| Check to perform | code | -|:------------------------------------------|:----------------------------------------------------------------| -| ``rawbytes`` has enough left for length | ``len(rawbytes) > current_index + LENGTH_BYTES`` | -| list is not greater than serialized bytes | ``len(rawbytes) > current_index + LENGTH_BYTES + total_length`` | - -To deserialize: - -1. Get the list of the container's fields. -2. For each name in the list, attempt to deserialize a value for that type. Collect these values as they will be used to construct an instance of the container. -3. Construct a container instance after successfully consuming the entire subset of the stream for the serialized container. 
- -**Example in Python** - -```python -def get_field_names(typ): - return typ.fields.keys() - -def get_value_for_field_name(value, field_name): - return getattr(value, field_name) - -def get_type_for_field_name(typ, field_name): - return typ.fields[field_name] - -class Container: - # this is the container; here we will define an empty class for demonstration - pass - -# get a reference to the type in some way... -container = Container() -typ = type(container) - -assert(len(rawbytes) > current_index + LENGTH_BYTES) -total_length = int.from_bytes(rawbytes[current_index:current_index + LENGTH_BYTES], 'little') -new_index = current_index + LENGTH_BYTES + total_length -assert(len(rawbytes) >= new_index) -item_index = current_index + LENGTH_BYTES - -values = {} -for field_name in get_field_names(typ): - field_name_type = get_type_for_field_name(typ, field_name) - values[field_name], item_index = deserialize(data, item_index, field_name_type) -assert item_index == new_index -return typ(**values), item_index -``` - -### Tree Hash - -The below `hash_tree_root_internal` algorithm is defined recursively in the case of lists and containers, and it outputs a value equal to or less than 32 bytes in size. For use as a "final output" (eg. for signing), use `hash_tree_root(x) = zpad(hash_tree_root_internal(x), 32)`, where `zpad` is a helper that extends the given `bytes` value to the desired `length` by adding zero bytes on the right: - -```python -def zpad(input: bytes, length: int) -> bytes: - return input + b'\x00' * (length - len(input)) -``` - -Refer to [the helper function `hash`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#hash) of Phase 0 of the [Eth2.0 specs](https://github.com/ethereum/eth2.0-specs) for a definition of the hash function used below, `hash(x)`. - -#### `uint8`..`uint256`, `bool`, `bytes1`..`bytes32` - -Return the serialization of the value. - -#### `uint264`..`uintN`, `bytes33`..`bytesN` - -Return the hash of the serialization of the value. - -#### List/Vectors - -First, we define the Merkle tree function. - -```python -# Merkle tree hash of a list of homogenous, non-empty items -def merkle_hash(lst): - # Store length of list (to compensate for non-bijectiveness of padding) - datalen = len(lst).to_bytes(32, 'little') - - if len(lst) == 0: - # Handle empty list case - chunkz = [b'\x00' * SSZ_CHUNK_SIZE] - elif len(lst[0]) < SSZ_CHUNK_SIZE: - # See how many items fit in a chunk - items_per_chunk = SSZ_CHUNK_SIZE // len(lst[0]) - - # Build a list of chunks based on the number of items in the chunk - chunkz = [ - zpad(b''.join(lst[i:i + items_per_chunk]), SSZ_CHUNK_SIZE) - for i in range(0, len(lst), items_per_chunk) - ] - else: - # Leave large items alone - chunkz = lst - - # Merkleise - def next_power_of_2(x): - return 1 if x == 0 else 2**(x - 1).bit_length() - - for i in range(len(chunkz), next_power_of_2(len(chunkz))): - chunkz.append(b'\x00' * SSZ_CHUNK_SIZE) - while len(chunkz) > 1: - chunkz = [hash(chunkz[i] + chunkz[i+1]) for i in range(0, len(chunkz), 2)] - - # Return hash of root and data length - return hash(chunkz[0] + datalen) -``` - -To `hash_tree_root_internal` a list, we simply do: - -```python -return merkle_hash([hash_tree_root_internal(item) for item in value]) -``` - -Where the inner `hash_tree_root_internal` is a recursive application of the tree-hashing function (returning less than 32 bytes for short single values). 
- -#### Container - -Recursively tree hash the values in the container in the same order as the fields, and Merkle hash the results. - -```python -return merkle_hash([hash_tree_root_internal(getattr(x, field)) for field in value.fields]) -``` - -### Signed roots - -Let `field_name` be a field name in an SSZ container `container`. We define `truncate(container, field_name)` to be the `container` with the fields from `field_name` onwards truncated away. That is, `truncate(container, field_name) = [getattr(container, field)) for field in value.fields[:i]]` where `i = value.fields.index(field_name)`. - -When `field_name` maps to a signature (e.g. a BLS12-381 signature of type `Bytes96`) the convention is that the corresponding signed message be `signed_root(container, field_name) = hash_tree_root(truncate(container, field_name))`. For example if `container = {"foo": sub_object_1, "bar": sub_object_2, "signature": bytes96, "baz": sub_object_3}` then `signed_root(container, "signature") = merkle_hash([hash_tree_root(sub_object_1), hash_tree_root(sub_object_2)])`. - -Note that this convention means that fields after the signature are _not_ signed over. If there are multiple signatures in `container` then those are expected to be signing over the fields in the order specified. If multiple signatures of the same value are expected the convention is that the signature field be an array of signatures. +Let `value` be a self-signed container object. The convention is that the signature (e.g. a `"bytes96"` BLS12-381 signature) be the last field of `value`. Further, the signed message for `value` is `signed_root(value) = hash_tree_root(truncate_last(value))` where `truncate_last` truncates the last element of `value`. ## Implementations -| Language | Implementation | Description | -|:--------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------| -| Python | [ https://github.com/ethereum/py-ssz ](https://github.com/ethereum/py-ssz) | Python implementation of SSZ | -| Rust | [ https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz ](https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz) | Lighthouse (Rust Ethereum 2.0 Node) maintained SSZ. | -| Nim | [ https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim ](https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim) | Nim Implementation maintained SSZ. | -| Rust | [ https://github.com/paritytech/shasper/tree/master/util/ssz ](https://github.com/paritytech/shasper/tree/master/util/ssz) | Shasper implementation of SSZ maintained by ParityTech. 
| -| Javascript | [ https://github.com/ChainSafeSystems/ssz-js/blob/master/src/index.js ](https://github.com/ChainSafeSystems/ssz-js/blob/master/src/index.js) | Javascript Implementation maintained SSZ | -| Java | [ https://www.github.com/ConsenSys/cava/tree/master/ssz ](https://www.github.com/ConsenSys/cava/tree/master/ssz) | SSZ Java library part of the Cava suite | -| Go | [ https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz ](https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz) | Go implementation of SSZ mantained by Prysmatic Labs | -| Swift | [ https://github.com/yeeth/SimpleSerialize.swift ](https://github.com/yeeth/SimpleSerialize.swift) | Swift implementation maintained SSZ | -| C# | [ https://github.com/codingupastorm/csharp-ssz ](https://github.com/codingupastorm/csharp-ssz) | C# implementation maintained SSZ | -| C++ | [ https://github.com/NAKsir-melody/cpp_ssz](https://github.com/NAKsir-melody/cpp_ssz) | C++ implementation maintained SSZ | - -## Copyright -Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). +| Language | Project | Maintainer | Implementation | +|-|-|-|-| +| Python | Ethereum 2.0 | Ethereum Foundation | [https://github.com/ethereum/py-ssz](https://github.com/ethereum/py-ssz) | +| Rust | Lighthouse | Sigma Prime | [https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz](https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz) | +| Nim | Nimbus | Status | [https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim](https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/ssz.nim) | +| Rust | Shasper | ParityTech | [https://github.com/paritytech/shasper/tree/master/util/ssz](https://github.com/paritytech/shasper/tree/master/util/ssz) | +| Javascript | Lodestart | Chain Safe Systems | [https://github.com/ChainSafeSystems/ssz-js/blob/master/src/index.js](https://github.com/ChainSafeSystems/ssz-js/blob/master/src/index.js) | +| Java | Cava | ConsenSys | [https://www.github.com/ConsenSys/cava/tree/master/ssz](https://www.github.com/ConsenSys/cava/tree/master/ssz) | +| Go | Prysm | Prysmatic Labs | [https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz](https://github.com/prysmaticlabs/prysm/tree/master/shared/ssz) | +| Swift | Yeeth | Dean Eigenmann | [https://github.com/yeeth/SimpleSerialize.swift](https://github.com/yeeth/SimpleSerialize.swift) | +| C# | | Jordan Andrews | [https://github.com/codingupastorm/csharp-ssz](https://github.com/codingupastorm/csharp-ssz) | +| C++ | | | [https://github.com/NAKsir-melody/cpp_ssz](https://github.com/NAKsir-melody/cpp_ssz) | diff --git a/specs/validator/0_beacon-chain-validator.md b/specs/validator/0_beacon-chain-validator.md index 1fd1c7ac3..be3008227 100644 --- a/specs/validator/0_beacon-chain-validator.md +++ b/specs/validator/0_beacon-chain-validator.md @@ -40,17 +40,17 @@ __NOTICE__: This document is a work-in-progress for researchers and implementers - [Slot](#slot-1) - [Shard](#shard) - [Beacon block root](#beacon-block-root) - - [Epoch boundary root](#epoch-boundary-root) + - [Target root](#target-root) - [Crosslink data root](#crosslink-data-root) - [Latest crosslink](#latest-crosslink) - - [Justified epoch](#justified-epoch) - - [Justified block root](#justified-block-root) + - [Source epoch](#source-epoch) + - [Source root](#source-root) - [Construct attestation](#construct-attestation) - [Data](#data) - [Aggregation bitfield](#aggregation-bitfield) - [Custody 
bitfield](#custody-bitfield) - [Aggregate signature](#aggregate-signature) - - [Validator assigments](#validator-assignments) + - [Validator assignments](#validator-assignments) - [Lookahead](#lookahead) - [How to avoid slashing](#how-to-avoid-slashing) - [Proposer slashing](#proposer-slashing) @@ -101,8 +101,7 @@ In phase 0, all incoming validator deposits originate from the Ethereum 1.0 PoW To submit a deposit: * Pack the validator's [initialization parameters](#initialization) into `deposit_input`, a [`DepositInput`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#depositinput) SSZ object. -* Set `deposit_input.proof_of_possession = EMPTY_SIGNATURE`. -* Let `proof_of_possession` be the result of `bls_sign` of the `hash_tree_root(deposit_input)` with `domain=DOMAIN_DEPOSIT`. +* Let `proof_of_possession` be the result of `bls_sign` of the `signed_root(deposit_input)` with `domain=DOMAIN_DEPOSIT`. * Set `deposit_input.proof_of_possession = proof_of_possession`. * Let `amount` be the amount in Gwei to be deposited by the validator where `MIN_DEPOSIT_AMOUNT <= amount <= MAX_DEPOSIT_AMOUNT`. * Send a transaction on the Ethereum 1.0 chain to `DEPOSIT_CONTRACT_ADDRESS` executing `deposit` along with `serialize(deposit_input)` as the singular `bytes` input along with a deposit `amount` in Gwei. @@ -121,11 +120,12 @@ Once a validator has been processed and added to the beacon state's `validator_r In normal operation, the validator is quickly activated at which point the validator is added to the shuffling and begins validation after an additional `ACTIVATION_EXIT_DELAY` epochs (25.6 minutes). -The function [`is_active_validator`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#is_active_validator) can be used to check if a validator is active during a given epoch. Usage is as follows: +The function [`is_active_validator`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#is_active_validator) can be used to check if a validator is active during a given shuffling epoch. Note that the `BeaconState` contains a field `current_shuffling_epoch` which dictates from which epoch the current active validators are taken. Usage is as follows: ```python +shuffling_epoch = state.current_shuffling_epoch validator = state.validator_registry[validator_index] -is_active = is_active_validator(validator, epoch) +is_active = is_active_validator(validator, shuffling_epoch) ``` Once a validator is activated, the validator is assigned [responsibilities](#beacon-chain-responsibilities) until exited. @@ -138,7 +138,7 @@ A validator has two primary responsibilities to the beacon chain -- [proposing b ### Block proposal -A validator is expected to propose a [`BeaconBlock`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beaconblock) at the beginning of any slot during which `get_beacon_proposer_index(state, slot)` returns the validator's `validator_index`. To propose, the validator selects the `BeaconBlock`, `parent`, that in their view of the fork choice is the head of the chain during `slot`. The validator is to create, sign, and broadcast a `block` that is a child of `parent` and that executes a valid [beacon chain state transition](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beacon-chain-state-transition-function). 
+A validator is expected to propose a [`BeaconBlock`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beaconblock) at the beginning of any slot during which `get_beacon_proposer_index(state, slot)` returns the validator's `validator_index`. To propose, the validator selects the `BeaconBlock`, `parent`, that in their view of the fork choice is the head of the chain during `slot - 1`. The validator is to create, sign, and broadcast a `block` that is a child of `parent` and that executes a valid [beacon chain state transition](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#beacon-chain-state-transition-function). There is one proposer per slot, so if there are N active validators any individual validator will on average be assigned to propose once per N slots (eg. at 312500 validators = 10 million ETH, that's once per ~3 weeks). @@ -152,13 +152,13 @@ _Note:_ there might be "skipped" slots between the `parent` and `block`. These s ##### Parent root -Set `block.parent_root = hash_tree_root(parent)`. +Set `block.previous_block_root = hash_tree_root(parent)`. ##### State root Set `block.state_root = hash_tree_root(state)` of the resulting `state` of the `parent -> block` state transition. -_Note_: To calculate `state_root`, the validator should first run the state transition function on an unsigned `block` containing a stub for the `state_root`. It is useful to be able to run a state transition function that does _not_ validate signatures for this purpose. +_Note_: To calculate `state_root`, the validator should first run the state transition function on an unsigned `block` containing a stub for the `state_root`. It is useful to be able to run a state transition function that does _not_ validate signatures or state root for this purpose. ##### Randao reveal @@ -166,8 +166,8 @@ Set `block.randao_reveal = epoch_signature` where `epoch_signature` is defined a ```python epoch_signature = bls_sign( - privkey=validator.privkey, # privkey store locally, not in state - message_hash=int_to_bytes32(slot_to_epoch(block.slot)), + privkey=validator.privkey, # privkey stored locally, not in state + message_hash=hash_tree_root(slot_to_epoch(block.slot)), domain=get_domain( fork=fork, # `fork` is the fork object at the slot `block.slot` epoch=slot_to_epoch(block.slot), @@ -194,23 +194,16 @@ epoch_signature = bls_sign( ##### Signature -Set `block.signature = signed_proposal_data` where `signed_proposal_data` is defined as: +Set `block.signature = block_signature` where `block_signature` is defined as: ```python -proposal_data = ProposalSignedData( - slot=slot, - shard=BEACON_CHAIN_SHARD_NUMBER, - block_root=hash_tree_root(block), # where `block.sigature == EMPTY_SIGNATURE -) -proposal_root = hash_tree_root(proposal_data) - -signed_proposal_data = bls_sign( +block_signature = bls_sign( privkey=validator.privkey, # privkey store locally, not in state - message_hash=proposal_root, + message_hash=signed_root(block), domain=get_domain( fork=fork, # `fork` is the fork object at the slot `block.slot` epoch=slot_to_epoch(block.slot), - domain_type=DOMAIN_PROPOSAL, + domain_type=DOMAIN_BEACON_BLOCK, ) ) ``` @@ -227,12 +220,14 @@ Up to `MAX_ATTESTER_SLASHINGS` [`AttesterSlashing`](https://github.com/ethereum/ ##### Attestations -Up to `MAX_ATTESTATIONS` aggregate attestations can be included in the `block`. 
The attestations added must satisfy the verification conditions found in [attestation processing](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attestations-1). To maximize profit, the validator should attempt to create aggregate attestations that include singular attestations from the largest number of validators whose signatures from the same epoch have not previously been added on chain. +Up to `MAX_ATTESTATIONS` aggregate attestations can be included in the `block`. The attestations added must satisfy the verification conditions found in [attestation processing](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attestations-1). To maximize profit, the validator should attempt to gather aggregate attestations that include singular attestations from the largest number of validators whose signatures from the same epoch have not previously been added on chain. ##### Deposits Up to `MAX_DEPOSITS` [`Deposit`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#deposit) objects can be included in the `block`. These deposits are constructed from the `Deposit` logs from the [Eth1.0 deposit contract](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#ethereum-10-deposit-contract) and must be processed in sequential order. The deposits included in the `block` must satisfy the verification conditions found in [deposits processing](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#deposits-1). +The `proof` for each deposit must be constructed against the deposit root contained in `state.latest_eth1_data` rather than the deposit root at the time the deposit was initially logged from the 1.0 chain. This entails storing a full deposit merkle tree locally and computing updated proofs against the `latest_eth1_data.deposit_root` as needed. See [`minimal_merkle.py`](https://github.com/ethereum/research/blob/master/spec_pythonizer/utils/merkle_minimal.py) for a sample implementation. + ##### Voluntary exits Up to `MAX_VOLUNTARY_EXITS` [`VoluntaryExit`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#voluntaryexit) objects can be included in the `block`. The exits must satisfy the verification conditions found in [exits processing](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#exits-1). @@ -247,9 +242,12 @@ A validator should create and broadcast the attestation halfway through the `slo First the validator should construct `attestation_data`, an [`AttestationData`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attestationdata) object based upon the state at the assigned slot. +* Let `head_block` be the result of running the fork choice during the assigned slot. +* Let `head_state` be the state of `head_block` processed through any empty slots up to the assigned slot. + ##### Slot -Set `attestation_data.slot = slot` where `slot` is the current slot of which the validator is a member of a committee. +Set `attestation_data.slot = head_state.slot`. ##### Shard @@ -257,15 +255,15 @@ Set `attestation_data.shard = shard` where `shard` is the shard associated with ##### Beacon block root -Set `attestation_data.beacon_block_root = hash_tree_root(head)` where `head` is the validator's view of the `head` block of the beacon chain during `slot`. +Set `attestation_data.beacon_block_root = hash_tree_root(head_block)`. 
-##### Epoch boundary root +##### Target root -Set `attestation_data.epoch_boundary_root = hash_tree_root(epoch_boundary)` where `epoch_boundary` is the block at the most recent epoch boundary in the chain defined by `head` -- i.e. the `BeaconBlock` where `block.slot == get_epoch_start_slot(slot_to_epoch(head.slot))`. +Set `attestation_data.target_root = hash_tree_root(epoch_boundary)` where `epoch_boundary` is the block at the most recent epoch boundary. _Note:_ This can be looked up in the state using: -* Let `epoch_start_slot = get_epoch_start_slot(slot_to_epoch(head.slot))`. -* Set `epoch_boundary_root = hash_tree_root(head) if epoch_start_slot == head.slot else get_block_root(state, epoch_start_slot)`. +* Let `epoch_start_slot = get_epoch_start_slot(get_current_epoch(head_state))`. +* Set `epoch_boundary = head if epoch_start_slot == head_state.slot else get_block_root(state, epoch_start_slot)`. ##### Crosslink data root @@ -275,17 +273,15 @@ _Note:_ This is a stub for phase 0. ##### Latest crosslink -Set `attestation_data.latest_crosslink = state.latest_crosslinks[shard]` where `state` is the beacon state at `head` and `shard` is the validator's assigned shard. +Set `attestation_data.previous_crosslink = head_state.latest_crosslinks[shard]`. -##### Justified epoch +##### Source epoch -Set `attestation_data.justified_epoch = state.justified_epoch` where `state` is the beacon state at `head`. +Set `attestation_data.source_epoch = head_state.justified_epoch`. -##### Justified block root +##### Source root -Set `attestation_data.justified_block_root = hash_tree_root(justified_block)` where `justified_block` is the block at the slot `get_epoch_start_slot(state.justified_epoch)` in the chain defined by `head`. - -_Note:_ This can be looked up in the state using `get_block_root(state, get_epoch_start_slot(state.justified_epoch))`. +Set `attestation_data.source_root = head_state.current_justified_root`. #### Construct attestation @@ -320,11 +316,11 @@ attestation_data_and_custody_bit = AttestationDataAndCustodyBit( data=attestation.data, custody_bit=0b0, ) -attestation_message_to_sign = hash_tree_root(attestation_data_and_custody_bit) +attestation_message = hash_tree_root(attestation_data_and_custody_bit) signed_attestation_data = bls_sign( - privkey=validator.privkey, # privkey store locally, not in state - message_hash=attestation_message_to_sign, + privkey=validator.privkey, # privkey stored locally, not in state + message_hash=attestation_message, domain=get_domain( fork=fork, # `fork` is the fork object at the slot, `attestation_data.slot` epoch=slot_to_epoch(attestation_data.slot), @@ -353,7 +349,7 @@ def get_committee_assignment( a beacon block at the assigned slot. 
""" previous_epoch = get_previous_epoch(state) - next_epoch = get_current_epoch(state) + next_epoch = get_current_epoch(state) + 1 assert previous_epoch <= epoch <= next_epoch epoch_start_slot = get_epoch_start_slot(epoch) @@ -371,8 +367,7 @@ def get_committee_assignment( if len(selected_committees) > 0: validators = selected_committees[0][0] shard = selected_committees[0][1] - first_committee_at_slot = crosslink_committees[0][0] # List[ValidatorIndex] - is_proposer = first_committee_at_slot[slot % len(first_committee_at_slot)] == validator_index + is_proposer = validator_index == get_beacon_proposer_index(state, slot, registry_change=registry_change) assignment = (validators, shard, slot, is_proposer) return assignment @@ -380,7 +375,7 @@ def get_committee_assignment( ### Lookahead -The beacon chain shufflings are designed to provide a minimum of 1 epoch lookahead on the validator's upcoming assignemnts of proposing and attesting dictated by the shuffling and slot. +The beacon chain shufflings are designed to provide a minimum of 1 epoch lookahead on the validator's upcoming assignments of proposing and attesting dictated by the shuffling and slot. There are three possibilities for the shuffling at the next epoch: 1. The shuffling changes due to a "validator registry change". @@ -403,12 +398,12 @@ _Note_: Signed data must be within a sequential `Fork` context to conflict. Mess ### Proposer slashing -To avoid "proposer slashings", a validator must not sign two conflicting [`ProposalSignedData`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#proposalsigneddata) where conflicting is defined as having the same `slot` and `shard` but a different `block_root`. In phase 0, proposals are only made for the beacon chain (`shard == BEACON_CHAIN_SHARD_NUMBER`). +To avoid "proposer slashings", a validator must not sign two conflicting [`BeaconBlock`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#proposalsigneddata) where conflicting is defined as two distinct blocks within the same epoch. -_In phase 0, as long as the validator does not sign two different beacon chain proposals for the same slot, the validator is safe against proposer slashings._ +_In phase 0, as long as the validator does not sign two different beacon blocks for the same epoch, the validator is safe against proposer slashings._ Specifically, when signing an `BeaconBlock`, a validator should perform the following steps in the following order: -1. Save a record to hard disk that an beacon block has been signed for the `slot=slot` and `shard=BEACON_CHAIN_SHARD_NUMBER`. +1. Save a record to hard disk that an beacon block has been signed for the `epoch=slot_to_epoch(block.slot)`. 2. Generate and broadcast the block. If the software crashes at some point within this routine, then when the validator comes back online the hard disk has the record of the _potentially_ signed/broadcast block and can effectively avoid slashing. 
@@ -418,7 +413,7 @@ If the software crashes at some point within this routine, then when the validat To avoid "attester slashings", a validator must not sign two conflicting [`AttestationData`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#attestationdata) objects where conflicting is defined as a set of two attestations that satisfy either [`is_double_vote`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#is_double_vote) or [`is_surround_vote`](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md#is_surround_vote). Specifically, when signing an `Attestation`, a validator should perform the following steps in the following order: -1. Save a record to hard disk that an attestation has been signed for source -- `attestation_data.justified_epoch` -- and target -- `slot_to_epoch(attestation_data.slot)`. +1. Save a record to hard disk that an attestation has been signed for source -- `attestation_data.source_epoch` -- and target -- `slot_to_epoch(attestation_data.slot)`. 2. Generate and broadcast attestation. If the software crashes at some point within this routine, then when the validator comes back online the hard disk has the record of the _potentially_ signed/broadcast attestation and can effectively avoid slashing.
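+
+The same local store can guard attestation signing. The sketch below is non-normative and assumes the hypothetical `slashing_db` from the previous section, extended with a `get_attestation_records()` method returning the `(source_epoch, target_epoch)` pairs previously saved in step 1.
+
+```python
+def is_safe_to_sign_attestation(attestation_data: AttestationData, slashing_db) -> bool:
+    source = attestation_data.source_epoch
+    target = slot_to_epoch(attestation_data.slot)
+    for (prev_source, prev_target) in slashing_db.get_attestation_records():
+        # Double vote: two distinct attestations with the same target epoch
+        if prev_target == target:
+            return False
+        # Surround vote: one attestation's source/target span strictly surrounds the other's
+        if (prev_source < source and target < prev_target) or (source < prev_source and prev_target < target):
+            return False
+    return True
+```
+
+If this check passes, the validator saves the new `(source, target)` pair to disk and only then signs and broadcasts the attestation, mirroring the ordering described above.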