AllTests-mainnet
===
##
```diff
+ BASE_REWARD_FACTOR 64 [Preset: mainnet] OK
+ BLS_WITHDRAWAL_PREFIX "0x00" [Preset: mainnet] OK
+ CHURN_LIMIT_QUOTIENT 65536 [Preset: mainnet] OK
CONFIG_NAME "mainnet" [Preset: mainnet] Skip
DEPOSIT_CHAIN_ID 1 [Preset: mainnet] Skip
DEPOSIT_CONTRACT_ADDRESS "0x00000000219ab540356cBB839Cbe05303d770 Skip
DEPOSIT_NETWORK_ID 1 [Preset: mainnet] Skip
+ DOMAIN_AGGREGATE_AND_PROOF "0x06000000" [Preset: mainnet] OK
+ DOMAIN_BEACON_ATTESTER "0x01000000" [Preset: mainnet] OK
+ DOMAIN_BEACON_PROPOSER "0x00000000" [Preset: mainnet] OK
+ DOMAIN_DEPOSIT "0x03000000" [Preset: mainnet] OK
+ DOMAIN_RANDAO "0x02000000" [Preset: mainnet] OK
+ DOMAIN_SELECTION_PROOF "0x05000000" [Preset: mainnet] OK
+ DOMAIN_VOLUNTARY_EXIT "0x04000000" [Preset: mainnet] OK
+ EFFECTIVE_BALANCE_INCREMENT 1000000000 [Preset: mainnet] OK
+ EJECTION_BALANCE 16000000000 [Preset: mainnet] OK
+ EPOCHS_PER_ETH1_VOTING_PERIOD 64 [Preset: mainnet] OK
+ EPOCHS_PER_HISTORICAL_VECTOR 65536 [Preset: mainnet] OK
+ EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION 256 [Preset: mainnet] OK
+ EPOCHS_PER_SLASHINGS_VECTOR 8192 [Preset: mainnet] OK
ETH1_FOLLOW_DISTANCE 2048 [Preset: mainnet] Skip
GENESIS_DELAY 604800 [Preset: mainnet] Skip
GENESIS_FORK_VERSION "0x00000000" [Preset: mainnet] Skip
+ HISTORICAL_ROOTS_LIMIT 16777216 [Preset: mainnet] OK
+ HYSTERESIS_DOWNWARD_MULTIPLIER 1 [Preset: mainnet] OK
+ HYSTERESIS_QUOTIENT 4 [Preset: mainnet] OK
+ HYSTERESIS_UPWARD_MULTIPLIER 5 [Preset: mainnet] OK
+ INACTIVITY_PENALTY_QUOTIENT 67108864 [Preset: mainnet] OK
+ MAX_ATTESTATIONS 128 [Preset: mainnet] OK
+ MAX_ATTESTER_SLASHINGS 2 [Preset: mainnet] OK
+ MAX_COMMITTEES_PER_SLOT 64 [Preset: mainnet] OK
+ MAX_DEPOSITS 16 [Preset: mainnet] OK
+ MAX_EFFECTIVE_BALANCE 32000000000 [Preset: mainnet] OK
+ MAX_PROPOSER_SLASHINGS 16 [Preset: mainnet] OK
+ MAX_SEED_LOOKAHEAD 4 [Preset: mainnet] OK
+ MAX_VALIDATORS_PER_COMMITTEE 2048 [Preset: mainnet] OK
+ MAX_VOLUNTARY_EXITS 16 [Preset: mainnet] OK
+ MIN_ATTESTATION_INCLUSION_DELAY 1 [Preset: mainnet] OK
+ MIN_DEPOSIT_AMOUNT 1000000000 [Preset: mainnet] OK
+ MIN_EPOCHS_TO_INACTIVITY_PENALTY 4 [Preset: mainnet] OK
MIN_GENESIS_ACTIVE_VALIDATOR_COUNT 16384 [Preset: mainnet] Skip
MIN_GENESIS_TIME 1606824000 [Preset: mainnet] Skip
+ MIN_PER_EPOCH_CHURN_LIMIT 4 [Preset: mainnet] OK
+ MIN_SEED_LOOKAHEAD 1 [Preset: mainnet] OK
+ MIN_SLASHING_PENALTY_QUOTIENT 128 [Preset: mainnet] OK
+ MIN_VALIDATOR_WITHDRAWABILITY_DELAY 256 [Preset: mainnet] OK
+ PROPORTIONAL_SLASHING_MULTIPLIER 1 [Preset: mainnet] OK
+ PROPOSER_REWARD_QUOTIENT 8 [Preset: mainnet] OK
+ RANDOM_SUBNETS_PER_VALIDATOR 1 [Preset: mainnet] OK
+ SAFE_SLOTS_TO_UPDATE_JUSTIFIED 8 [Preset: mainnet] OK
+ SECONDS_PER_ETH1_BLOCK 14 [Preset: mainnet] OK
+ SECONDS_PER_SLOT 12 [Preset: mainnet] OK
+ SHARD_COMMITTEE_PERIOD 256 [Preset: mainnet] OK
+ SHUFFLE_ROUND_COUNT 90 [Preset: mainnet] OK
+ SLOTS_PER_EPOCH 32 [Preset: mainnet] OK
+ SLOTS_PER_HISTORICAL_ROOT 8192 [Preset: mainnet] OK
+ TARGET_AGGREGATORS_PER_COMMITTEE 16 [Preset: mainnet] OK
+ TARGET_COMMITTEE_SIZE 128 [Preset: mainnet] OK
+ VALIDATOR_REGISTRY_LIMIT 1099511627776 [Preset: mainnet] OK
+ WHISTLEBLOWER_REWARD_QUOTIENT 512 [Preset: mainnet] OK
```
OK: 51/60 Fail: 0/60 Skip: 9/60
## Attestation pool processing [Preset: mainnet]
```diff
+ Attestations may arrive in any order [Preset: mainnet] OK
+ Attestations may overlap, bigger first [Preset: mainnet] OK
+ Attestations may overlap, smaller first [Preset: mainnet] OK
+ Attestations should be combined [Preset: mainnet] OK
+ Can add and retrieve simple attestations [Preset: mainnet] OK
+ Everyone voting for something different [Preset: mainnet] OK
+ Fork choice returns block with attestation OK
+ Fork choice returns latest block with no attestations OK
+ Trying to add a block twice tags the second as an error OK
+ Trying to add a duplicate block from an old pruned epoch is tagged as an error OK
+ Working with aggregates [Preset: mainnet] OK
```
OK: 11/11 Fail: 0/11 Skip: 0/11
## Beacon chain DB [Preset: mainnet]
```diff
+ empty database [Preset: mainnet] OK
+ find ancestors [Preset: mainnet] OK
+ sanity check blocks [Preset: mainnet] OK
+ sanity check full states [Preset: mainnet] OK
+ sanity check full states, reusing buffers [Preset: mainnet] OK
+ sanity check genesis roundtrip [Preset: mainnet] OK
+ sanity check state diff roundtrip [Preset: mainnet] OK
+ sanity check states [Preset: mainnet] OK
+ sanity check states, reusing buffers [Preset: mainnet] OK
```
OK: 9/9 Fail: 0/9 Skip: 0/9
## Beacon state [Preset: mainnet]
```diff
+ Smoke test initialize_beacon_state_from_eth1 [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Bit fields
```diff
+ iterating words OK
+ overlaps OK
+ roundtrips BitArray OK
+ roundtrips BitSeq OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## Block pool processing [Preset: mainnet]
```diff
+ Adding the same block twice returns a Duplicate error [Preset: mainnet] OK
+ Reverse order block add & get [Preset: mainnet] OK
+ Simple block add&get [Preset: mainnet] OK
+ getRef returns nil for missing blocks OK
+ loading tail block works [Preset: mainnet] OK
+ updateHead updates head and headState [Preset: mainnet] OK
+ updateStateData sanity [Preset: mainnet] OK
```
OK: 7/7 Fail: 0/7 Skip: 0/7
## Block processing [Preset: mainnet]
```diff
+ Attestation gets processed at epoch [Preset: mainnet] OK
+ Passes from genesis state, empty block [Preset: mainnet] OK
+ Passes from genesis state, no block [Preset: mainnet] OK
+ Passes through epoch update, empty block [Preset: mainnet] OK
+ Passes through epoch update, no block [Preset: mainnet] OK
```
OK: 5/5 Fail: 0/5 Skip: 0/5
## BlockRef and helpers [Preset: mainnet]
```diff
+ epochAncestor sanity [Preset: mainnet] OK
+ get_ancestor sanity [Preset: mainnet] OK
+ isAncestorOf sanity [Preset: mainnet] OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## BlockSlot and helpers [Preset: mainnet]
```diff
+ atSlot sanity [Preset: mainnet] OK
+ parent sanity [Preset: mainnet] OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## Eth1 monitor
```diff
+ Rewrite HTTPS Infura URLs OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Eth2 specific discovery tests
```diff
+ Invalid attnets field OK
+ Subnet query OK
+ Subnet query after ENR update OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## Exit pool testing suite
```diff
+ addExitMessage/getAttesterSlashingMessage OK
+ addExitMessage/getProposerSlashingMessage OK
+ addExitMessage/getVoluntaryExitMessage OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## Fork Choice + Finality [Preset: mainnet]
```diff
+ fork_choice - testing finality #01 OK
+ fork_choice - testing finality #02 OK
+ fork_choice - testing no votes OK
+ fork_choice - testing with votes OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## Gossip validation [Preset: mainnet]
```diff
+ Validation sanity OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Honest validator
```diff
+ General pubsub topics OK
+ Mainnet attestation topics OK
+ is_aggregator OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## Interop
```diff
+ Interop genesis OK
+ Interop signatures OK
+ Mocked start private key OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## PeerPool testing suite
```diff
+ Access peers by key test OK
+ Acquire from empty pool OK
+ Acquire/Sorting and consistency test OK
+ Delete peer on release text OK
+ Iterators test OK
+ Peer lifetime test OK
+ Safe/Clear test OK
+ Score check test OK
+ Space tests OK
+ addPeer() test OK
+ addPeerNoWait() test OK
+ deletePeer() test OK
```
OK: 12/12 Fail: 0/12 Skip: 0/12
## SSZ dynamic navigator
```diff
+ navigating fields OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## SSZ navigator
```diff
+ basictype OK
+ lists with max size OK
+ simple object fields OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## Slashing Protection DB - Interchange [Preset: mainnet]
```diff
+ Smoke test - Complete format - Invalid database is refused [Preset: mainnet] OK
+ Smoke test - Complete format [Preset: mainnet] OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## Slashing Protection DB - v1 and v2 migration [Preset: mainnet]
```diff
+ Minimal format migration [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Slashing Protection DB [Preset: mainnet]
```diff
+ Attestation ordering #1698 OK
+ Don't prune the very last attestation(s) even by mistake OK
+ Don't prune the very last block even by mistake OK
+ Empty database [Preset: mainnet] OK
+ Pruning attestations works OK
+ Pruning blocks works OK
+ SP for block proposal - backtracking append OK
+ SP for block proposal - linear append OK
+ SP for same epoch attestation target - linear append OK
+ SP for surrounded attestations OK
+ SP for surrounding attestations OK
+ Test valid attestation #1699 OK
```
OK: 12/12 Fail: 0/12 Skip: 0/12
## Spec datatypes
```diff
+ Graffiti bytes OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Spec helpers
```diff
+ integer_squareroot OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Sync protocol
```diff
+ Compile OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## SyncManager test suite
```diff
+ [SyncQueue] Async pending and resetWait() test OK
+ [SyncQueue] Async unordered push start from zero OK
+ [SyncQueue] Async unordered push with not full start from non-zero OK
+ [SyncQueue] Full and incomplete success/fail start from non-zero OK
+ [SyncQueue] Full and incomplete success/fail start from zero OK
+ [SyncQueue] One smart and one stupid + debt split + empty OK
+ [SyncQueue] Smart and stupid success/fail OK
+ [SyncQueue] Start and finish slots equal OK
+ [SyncQueue] Two full requests success/fail OK
+ [SyncQueue] checkResponse() test OK
+ [SyncQueue] contains() test OK
+ [SyncQueue] getLastNonEmptySlot() test OK
+ [SyncQueue] hasEndGap() test OK
```
OK: 13/13 Fail: 0/13 Skip: 0/13
## Zero signature sanity checks
```diff
+ SSZ serialization roundtrip of SignedBeaconBlockHeader OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## [Unit - Spec - Block processing] Attestations [Preset: mainnet]
```diff
+ Valid attestation OK
+ Valid attestation from previous epoch OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## [Unit - Spec - Block processing] Deposits [Preset: mainnet]
```diff
+ Deposit at MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Deposit over MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Deposit under MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Invalid deposit at MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Validator top-up OK
```
OK: 5/5 Fail: 0/5 Skip: 0/5
## [Unit - Spec - Epoch processing] Justification and Finalization [Preset: mainnet]
```diff
+ Rule I - 234 finalization with enough support OK
+ Rule I - 234 finalization without support OK
+ Rule II - 23 finalization with enough support OK
+ Rule II - 23 finalization without support OK
+ Rule III - 123 finalization with enough support OK
+ Rule III - 123 finalization without support OK
+ Rule IV - 12 finalization with enough support OK
+ Rule IV - 12 finalization without support OK
```
OK: 8/8 Fail: 0/8 Skip: 0/8
## chain DAG finalization tests [Preset: mainnet]
```diff
+ init with gaps [Preset: mainnet] OK
+ orphaned epoch block [Preset: mainnet] OK
+ prune heads on finalization [Preset: mainnet] OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## hash
```diff
+ HashArray OK
+ HashList OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## state diff tests [Preset: mainnet]
```diff
+ random slot differences [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
---TOTAL---
OK: 180/189 Fail: 0/189 Skip: 9/189