AllTests-mainnet
===
## Attestation pool processing [Preset: mainnet]
```diff
+ Attestations may arrive in any order [Preset: mainnet] OK
+ Attestations may overlap, bigger first [Preset: mainnet] OK
+ Attestations may overlap, smaller first [Preset: mainnet] OK
+ Attestations should be combined [Preset: mainnet] OK
+ Can add and retrieve simple attestations [Preset: mainnet] OK
+ Everyone voting for something different [Preset: mainnet] OK
+ Fork choice returns block with attestation OK
+ Fork choice returns latest block with no attestations OK
+ Trying to add a block twice tags the second as an error OK
+ Trying to add a duplicate block from an old pruned epoch is tagged as an error OK
+ Working with aggregates [Preset: mainnet] OK
```
OK: 11/11 Fail: 0/11 Skip: 0/11
## Backfill
```diff
+ backfill to genesis OK
+ reload backfill position OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## Beacon chain DB [Preset: mainnet]
```diff
+ empty database [Preset: mainnet] OK
+ find ancestors [Preset: mainnet] OK
+ sanity check Altair and cross-fork getState rollback [Preset: mainnet] OK
+ sanity check Altair blocks [Preset: mainnet] OK
+ sanity check Altair states [Preset: mainnet] OK
+ sanity check Altair states, reusing buffers [Preset: mainnet] OK
+ sanity check Bellatrix and cross-fork getState rollback [Preset: mainnet] OK
+ sanity check Bellatrix blocks [Preset: mainnet] OK
+ sanity check Bellatrix states [Preset: mainnet] OK
+ sanity check Bellatrix states, reusing buffers [Preset: mainnet] OK
+ sanity check genesis roundtrip [Preset: mainnet] OK
+ sanity check phase 0 blocks [Preset: mainnet] OK
+ sanity check phase 0 getState rollback [Preset: mainnet] OK
+ sanity check phase 0 states [Preset: mainnet] OK
+ sanity check phase 0 states, reusing buffers [Preset: mainnet] OK
+ sanity check state diff roundtrip [Preset: mainnet] OK
```
OK: 16/16 Fail: 0/16 Skip: 0/16
## Beacon state [Preset: mainnet]
```diff
+ Smoke test initialize_beacon_state_from_eth1 [Preset: mainnet] OK
+ get_beacon_proposer_index OK
+ latest_block_root OK
+ process_slots OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## Beacon time
```diff
+ basics OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Block pool altair processing [Preset: mainnet]
```diff
+ Invalid signatures [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Block pool processing [Preset: mainnet]
```diff
+ Adding the same block twice returns a Duplicate error [Preset: mainnet] OK
+ Simple block add&get [Preset: mainnet] OK
+ getBlockRef returns none for missing blocks OK
+ loading tail block works [Preset: mainnet] OK
+ updateHead updates head and headState [Preset: mainnet] OK
+ updateStateData sanity [Preset: mainnet] OK
```
OK: 6/6 Fail: 0/6 Skip: 0/6
## Block processor [Preset: mainnet]
```diff
+ Reverse order block add & get [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Block quarantine
```diff
+ Unviable smoke test OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## BlockId and helpers
```diff
+ atSlot sanity OK
+ parent sanity OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## BlockRef and helpers
```diff
+ get_ancestor sanity OK
+ isAncestorOf sanity OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## BlockSlot and helpers
```diff
+ atSlot sanity OK
+ parent sanity OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## ChainDAG helpers
```diff
+ epochAncestor sanity [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## DeleteKeys requests [Preset: mainnet]
```diff
+ Deleting not existing key [Preset: mainnet] OK
+ Invalid Authorization Header [Preset: mainnet] OK
+ Invalid Authorization Token [Preset: mainnet] OK
+ Missing Authorization header [Preset: mainnet] OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## DeleteRemoteKeys requests [Preset: mainnet]
```diff
+ Deleting existing local key and remote key [Preset: mainnet] OK
+ Deleting not existing key [Preset: mainnet] OK
+ Invalid Authorization Header [Preset: mainnet] OK
+ Invalid Authorization Token [Preset: mainnet] OK
+ Missing Authorization header [Preset: mainnet] OK
```
OK: 5/5 Fail: 0/5 Skip: 0/5
## Diverging hardforks
```diff
+ Non-tail block in common OK
+ Tail block only in common OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## EF - SSZ generic types
```diff
Testing basic_vector inputs - invalid Skip
+ Testing basic_vector inputs - valid OK
+ Testing bitlist inputs - invalid OK
+ Testing bitlist inputs - valid OK
Testing bitvector inputs - invalid Skip
+ Testing bitvector inputs - valid OK
+ Testing boolean inputs - invalid OK
+ Testing boolean inputs - valid OK
+ Testing containers inputs - invalid - skipping BitsStruct OK
+ Testing containers inputs - valid - skipping BitsStruct OK
+ Testing uints inputs - invalid OK
+ Testing uints inputs - valid OK
```
OK: 10/12 Fail: 0/12 Skip: 2/12
## Eth1 monitor
```diff
+ Rewrite HTTPS Infura URLs OK
+ Roundtrip engine RPC and consensus ExecutionPayload representations OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## Eth2 specific discovery tests
```diff
+ Invalid attnets field OK
+ Subnet query OK
+ Subnet query after ENR update OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## Exit pool testing suite
```diff
+ addExitMessage/getAttesterSlashingMessage OK
+ addExitMessage/getProposerSlashingMessage OK
+ addExitMessage/getVoluntaryExitMessage OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## FinalizedBlocks [Preset: mainnet]
```diff
+ Basic ops [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Fork Choice + Finality [Preset: mainnet]
```diff
+ fork_choice - testing finality #01 OK
+ fork_choice - testing finality #02 OK
+ fork_choice - testing no votes OK
+ fork_choice - testing with votes OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## Fork id compatibility test
```diff
+ Digest check OK
+ Fork check OK
+ Next fork epoch check OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## Forked SSZ readers
```diff
+ load altair block OK
+ load altair state OK
+ load bellatrix block OK
+ load bellatrix state OK
+ load phase0 block OK
+ load phase0 state OK
+ should raise on unknown data OK
```
OK: 7/7 Fail: 0/7 Skip: 0/7
## Gossip fork transition
```diff
+ Gossip fork transition OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Gossip validation [Preset: mainnet]
```diff
+ Empty committee when no committee for slot OK
+ validateAttestation OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## Gossip validation - Extra
```diff
+ validateSyncCommitteeMessage OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Honest validator
```diff
+ General pubsub topics OK
+ Mainnet attestation topics OK
+ isNearSyncCommitteePeriod OK
+ is_aggregator OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## ImportKeystores requests [Preset: mainnet]
```diff
+ ImportKeystores/ListKeystores/DeleteKeystores [Preset: mainnet] OK
+ Invalid Authorization Header [Preset: mainnet] OK
+ Invalid Authorization Token [Preset: mainnet] OK
+ Missing Authorization header [Preset: mainnet] OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## ImportRemoteKeys/ListRemoteKeys/DeleteRemoteKeys [Preset: mainnet]
```diff
+ Importing list of remote keys [Preset: mainnet] OK
+ Invalid Authorization Header [Preset: mainnet] OK
+ Invalid Authorization Token [Preset: mainnet] OK
+ Missing Authorization header [Preset: mainnet] OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## Interop
```diff
+ Interop genesis OK
+ Interop signatures OK
+ Mocked start private key OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## KeyStorage testing suite
```diff
+ Pbkdf2 errors OK
+ [PBKDF2] Keystore decryption OK
+ [PBKDF2] Keystore encryption OK
+ [PBKDF2] Network Keystore decryption OK
+ [PBKDF2] Network Keystore encryption OK
+ [SCRYPT] Keystore decryption OK
+ [SCRYPT] Keystore encryption OK
+ [SCRYPT] Network Keystore decryption OK
+ [SCRYPT] Network Keystore encryption OK
```
OK: 9/9 Fail: 0/9 Skip: 0/9
## Light client [Preset: mainnet]
```diff
+ Init from checkpoint OK
+ Light client sync OK
+ Pre-Altair OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## ListKeys requests [Preset: mainnet]
```diff
+ Correct token provided [Preset: mainnet] OK
+ Invalid Authorization Header [Preset: mainnet] OK
+ Invalid Authorization Token [Preset: mainnet] OK
+ Missing Authorization header [Preset: mainnet] OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## ListRemoteKeys requests [Preset: mainnet]
```diff
+ Correct token provided [Preset: mainnet] OK
+ Invalid Authorization Header [Preset: mainnet] OK
+ Invalid Authorization Token [Preset: mainnet] OK
+ Missing Authorization header [Preset: mainnet] OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## Message signatures
```diff
+ Aggregate and proof signatures OK
+ Attestation signatures OK
+ Deposit signatures OK
+ Slot signatures OK
+ Sync committee message signatures OK
+ Sync committee selection proof signatures OK
+ Sync committee signed contribution and proof signatures OK
+ Voluntary exit signatures OK
```
OK: 8/8 Fail: 0/8 Skip: 0/8
## Old database versions [Preset: mainnet]
```diff
+ pre-1.1.0 OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## PeerPool testing suite
```diff
+ Access peers by key test OK
+ Acquire from empty pool OK
+ Acquire/Sorting and consistency test OK
+ Delete peer on release text OK
+ Iterators test OK
+ Peer lifetime test OK
+ Safe/Clear test OK
+ Score check test OK
+ Space tests OK
+ addPeer() test OK
+ addPeerNoWait() test OK
+ deletePeer() test OK
```
OK: 12/12 Fail: 0/12 Skip: 0/12
## Slashing Interchange tests [Preset: mainnet]
```diff
+ Slashing test: duplicate_pubkey_not_slashable.json OK
+ Slashing test: duplicate_pubkey_slashable_attestation.json OK
+ Slashing test: duplicate_pubkey_slashable_block.json OK
+ Slashing test: multiple_interchanges_multiple_validators_repeat_idem.json OK
+ Slashing test: multiple_interchanges_overlapping_validators_merge_stale.json OK
+ Slashing test: multiple_interchanges_overlapping_validators_repeat_idem.json OK
+ Slashing test: multiple_interchanges_single_validator_fail_iff_imported.json OK
+ Slashing test: multiple_interchanges_single_validator_first_surrounds_second.json OK
+ Slashing test: multiple_interchanges_single_validator_multiple_blocks_out_of_order.json OK
+ Slashing test: multiple_interchanges_single_validator_second_surrounds_first.json OK
+ Slashing test: multiple_interchanges_single_validator_single_att_out_of_order.json OK
+ Slashing test: multiple_interchanges_single_validator_single_block_out_of_order.json OK
+ Slashing test: multiple_interchanges_single_validator_single_message_gap.json OK
+ Slashing test: multiple_validators_multiple_blocks_and_attestations.json OK
+ Slashing test: multiple_validators_same_slot_blocks.json OK
+ Slashing test: single_validator_genesis_attestation.json OK
+ Slashing test: single_validator_import_only.json OK
+ Slashing test: single_validator_multiple_block_attempts.json OK
+ Slashing test: single_validator_multiple_blocks_and_attestations.json OK
+ Slashing test: single_validator_out_of_order_attestations.json OK
+ Slashing test: single_validator_out_of_order_blocks.json OK
Slashing test: single_validator_resign_attestation.json Skip
+ Slashing test: single_validator_resign_block.json OK
+ Slashing test: single_validator_single_attestation.json OK
+ Slashing test: single_validator_single_block.json OK
+ Slashing test: single_validator_single_block_and_attestation.json OK
+ Slashing test: single_validator_single_block_and_attestation_signing_root.json OK
+ Slashing test: single_validator_slashable_attestations_double_vote.json OK
+ Slashing test: single_validator_slashable_attestations_surrounded_by_existing.json OK
+ Slashing test: single_validator_slashable_attestations_surrounds_existing.json OK
+ Slashing test: single_validator_slashable_blocks.json OK
+ Slashing test: single_validator_slashable_blocks_no_root.json OK
+ Slashing test: single_validator_source_greater_than_target.json OK
+ Slashing test: single_validator_source_greater_than_target_sensible_iff_minified.json OK
Slashing test: single_validator_source_greater_than_target_surrounded.json Skip
Slashing test: single_validator_source_greater_than_target_surrounding.json Skip
+ Slashing test: single_validator_two_blocks_no_signing_root.json OK
+ Slashing test: wrong_genesis_validators_root.json OK
```
OK: 35/38 Fail: 0/38 Skip: 3/38
## Slashing Protection DB [Preset: mainnet]
```diff
+ Attestation ordering #1698 OK
+ Don't prune the very last attestation(s) even by mistake OK
+ Don't prune the very last block even by mistake OK
+ Empty database [Preset: mainnet] OK
+ Pruning attestations works OK
+ Pruning blocks works OK
+ SP for block proposal - backtracking append OK
+ SP for block proposal - linear append OK
+ SP for same epoch attestation target - linear append OK
+ SP for surrounded attestations OK
+ SP for surrounding attestations OK
+ Test valid attestation #1699 OK
```
OK: 12/12 Fail: 0/12 Skip: 0/12
## Spec datatypes
```diff
+ Graffiti bytes OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## Spec helpers
```diff
+ build_proof - BeaconState OK
+ get_branch_indices OK
+ get_helper_indices OK
+ get_path_indices OK
+ integer_squareroot OK
+ is_valid_merkle_branch OK
+ verify_merkle_multiproof OK
```
OK: 7/7 Fail: 0/7 Skip: 0/7
## Specific field types
```diff
+ root update OK
+ roundtrip OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## Sync committee pool
```diff
+ Aggregating votes OK
+ An empty pool is safe to prune OK
+ An empty pool is safe to prune 2 OK
+ An empty pool is safe to use OK
```
OK: 4/4 Fail: 0/4 Skip: 0/4
## SyncManager test suite
```diff
+ Process all unviable blocks OK
+ [SyncQueue#Backward] Async unordered push test OK
+ [SyncQueue#Backward] Async unordered push with rewind test OK
+ [SyncQueue#Backward] Pass through established limits test OK
+ [SyncQueue#Backward] Smoke test OK
+ [SyncQueue#Backward] Start and finish slots equal OK
+ [SyncQueue#Backward] Two full requests success/fail OK
+ [SyncQueue#Backward] getRewindPoint() test OK
+ [SyncQueue#Forward] Async unordered push test OK
+ [SyncQueue#Forward] Async unordered push with rewind test OK
+ [SyncQueue#Forward] Pass through established limits test OK
+ [SyncQueue#Forward] Smoke test OK
+ [SyncQueue#Forward] Start and finish slots equal OK
+ [SyncQueue#Forward] Two full requests success/fail OK
+ [SyncQueue#Forward] getRewindPoint() test OK
+ [SyncQueue] checkResponse() test OK
+ [SyncQueue] contains() test OK
+ [SyncQueue] getLastNonEmptySlot() test OK
+ [SyncQueue] hasEndGap() test OK
```
OK: 19/19 Fail: 0/19 Skip: 0/19
## Zero signature sanity checks
```diff
+ SSZ serialization roundtrip of SignedBeaconBlockHeader OK
+ Zero signatures cannot be loaded into a BLS signature object OK
+ default initialization of signatures OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## [Unit - Spec - Block processing] Deposits [Preset: mainnet]
```diff
+ Deposit at MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Deposit over MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Deposit under MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Invalid deposit at MAX_EFFECTIVE_BALANCE balance (32 ETH) OK
+ Validator top-up OK
```
OK: 5/5 Fail: 0/5 Skip: 0/5
## [Unit - Spec - Epoch processing] Justification and Finalization [Preset: mainnet]
```diff
+ Rule I - 234 finalization with enough support OK
+ Rule I - 234 finalization without support OK
+ Rule II - 23 finalization with enough support OK
+ Rule II - 23 finalization without support OK
+ Rule III - 123 finalization with enough support OK
+ Rule III - 123 finalization without support OK
+ Rule IV - 12 finalization with enough support OK
+ Rule IV - 12 finalization without support OK
```
OK: 8/8 Fail: 0/8 Skip: 0/8
## chain DAG finalization tests [Preset: mainnet]
```diff
+ init with gaps [Preset: mainnet] OK
+ orphaned epoch block [Preset: mainnet] OK
+ prune heads on finalization [Preset: mainnet] OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## createValidatorFiles()
```diff
+ Add keystore files [LOCAL] OK
+ Add keystore files [REMOTE] OK
+ Add keystore files twice [LOCAL] OK
+ Add keystore files twice [REMOTE] OK
+ `createValidatorFiles` with `keystoreDir` without permissions OK
+ `createValidatorFiles` with `secretsDir` without permissions OK
+ `createValidatorFiles` with `validatorsDir` without permissions OK
+ `createValidatorFiles` with already existing dirs and any error OK
```
OK: 8/8 Fail: 0/8 Skip: 0/8
## engine API authentication
```diff
+ HS256 JWS iat token signing OK
+ HS256 JWS signing OK
+ getIatToken OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## eth2.0-deposits-cli compatibility
```diff
+ restoring mnemonic with password OK
+ restoring mnemonic without password OK
```
OK: 2/2 Fail: 0/2 Skip: 0/2
## removeValidatorFiles()
```diff
+ Remove nonexistent validator OK
+ Remove validator files OK
+ Remove validator files twice OK
```
OK: 3/3 Fail: 0/3 Skip: 0/3
## removeValidatorFiles() multiple keystore types
```diff
+ Remove [LOCAL] when [LOCAL] is missing OK
+ Remove [LOCAL] when [LOCAL] is present OK
+ Remove [LOCAL] when [REMOTE] is present OK
+ Remove [REMOTE] when [LOCAL] is present OK
+ Remove [REMOTE] when [REMOTE] is missing OK
+ Remove [REMOTE] when [REMOTE] is present OK
```
OK: 6/6 Fail: 0/6 Skip: 0/6
## saveKeystore()
```diff
+ Save [LOCAL] keystore after [LOCAL] keystore with different id OK
+ Save [LOCAL] keystore after [LOCAL] keystore with same id OK
+ Save [LOCAL] keystore after [REMOTE] keystore with different id OK
+ Save [LOCAL] keystore after [REMOTE] keystore with same id OK
+ Save [REMOTE] keystore after [LOCAL] keystore with different id OK
+ Save [REMOTE] keystore after [LOCAL] keystore with same id OK
+ Save [REMOTE] keystore after [REMOTE] keystore with different id OK
+ Save [REMOTE] keystore after [REMOTE] keystore with same id OK
```
OK: 8/8 Fail: 0/8 Skip: 0/8
## state diff tests [Preset: mainnet]
```diff
+ random slot differences [Preset: mainnet] OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
## subnet tracker
```diff
+ should register stability subnets on attester duties OK
```
OK: 1/1 Fail: 0/1 Skip: 0/1
---TOTAL---
OK: 285/290 Fail: 0/290 Skip: 5/290