forks coverage description cleanup

This commit is contained in:
protolambda 2019-04-15 22:39:07 +10:00
parent 0b2a03e276
commit d64a4f248e
No known key found for this signature in database
GPG Key ID: EC89FDBB2B4C7623
10 changed files with 6 additions and 58 deletions

View File

@@ -75,22 +75,11 @@ There are two types of fork-data:
The first is neat to have as a separate form: we prevent duplication, and can run with different presets
(e.g. fork timeline for a minimal local test, for a public testnet, or for mainnet)
The second is still somewhat ambiguous: some tests may want to cover multiple forks, and can do so in different ways:
- run one test, transitioning from one to the other
- run the same test for both
- run a test for every transition from one fork to the other
- more
There is a common factor here, however: the options are exclusive, and give a clear idea of which test suites need to be run to cover testing for a specific fork.
The way this list of forks is interpreted is up to the test-runner:
State-transition test suites may want to just declare forks that are being covered in the test suite,
whereas shuffling test suites may want to declare a list of forks to test the shuffling algorithm for individually.
Test-formats specify the following `forks` interpretation rules:
- `collective`: the test suite applies to all specified forks, and only needs to run once
- `individual`: the test suite should be run against every fork
- more types may be specified with future test types.
The second does not affect the result of the tests; it just states what is covered by the tests,
so that the right suites can be executed to see coverage for a certain fork.
For some types of tests, it may be beneficial to ensure they run exactly the same, with any given fork "active".
Test-formats can be explicit on the need to repeat a test with different forks being "active",
but generally tests run only once.
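The two interpretation rules above can be sketched in runner code. This is an illustrative sketch only: the function and field names are hypothetical, not part of the test format.

```python
def run_suite(suite, run_case, interpretation):
    """Run a test suite according to its forks-interpretation.

    suite: dict with a "forks" list and a "test_cases" list (illustrative layout).
    run_case: callable(case, fork) executing one case, optionally for one fork.
    interpretation: "collective" or "individual".
    """
    if interpretation == "collective":
        # The suite covers all listed forks at once: run each case a single time.
        for case in suite["test_cases"]:
            run_case(case, None)
    elif interpretation == "individual":
        # The suite should be run against every listed fork separately.
        for fork in suite["forks"]:
            for case in suite["test_cases"]:
                run_case(case, fork)
    else:
        raise ValueError("unknown forks-interpretation: %s" % interpretation)
```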
### Test completeness
@@ -107,8 +96,7 @@ The aim is to provide clients with a well-defined scope of work to run a particu
title: <string, short, one line> -- Display name for the test suite
summary: <string, average, 1-3 lines> -- Summarizes the test suite
forks_timeline: <string, reference to a fork definition file, without extension> -- Used to determine the forking timeline
forks: <list of strings> -- Runner decides what to do: run for each fork, or run for all at once, each fork transition, etc.
- ... <string, first the fork name, then the spec version>
forks: <list of strings> -- Defines the coverage. Test-runner code may decide to re-run with the different forks "activated", when applicable.
config: <string, reference to a config file, without extension> -- Used to determine which set of constants to run (possibly compile time) with
runner: <string, no spaces, python-like naming format> *MUST be consistent with folder structure*
handler: <string, no spaces, python-like naming format> *MUST be consistent with folder structure*
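Put together, a suite header following these fields might look like the following (all values are illustrative, not taken from an actual suite):

```yaml
title: Deposit operation tests
summary: Tests deposit processing against pre/post states
forks_timeline: testing
forks: ["phase0"]
config: minimal
runner: operations
handler: deposits
```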

@@ -15,7 +15,3 @@ output: BLS Pubkey -- expected output, single BLS pubkey
## Condition
The `aggregate_pubkeys` handler should aggregate the keys in the `input`, and the result should match the expected `output`.
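This condition follows the same shape as the other BLS handler checks: run the client's implementation on `input` and compare with `output`. A hedged sketch, where `aggregate_fn` is a stand-in for the client's BLS pubkey-aggregation binding (not a real library call):

```python
def check_aggregate_pubkeys(case, aggregate_fn):
    # `case` is one test case: {"input": [pubkey, ...], "output": pubkey}.
    # `aggregate_fn` is the client's pubkey-aggregation implementation;
    # a real runner would bind this to its BLS library.
    result = aggregate_fn(case["input"])
    assert result == case["output"], "aggregated pubkey mismatch"
```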
## Forks
Forks-interpretation: `collective`

@@ -15,7 +15,3 @@ output: BLS Signature -- expected output, single BLS signature
## Condition
The `aggregate_sigs` handler should aggregate the signatures in the `input`, and the result should match the expected `output`.
## Forks
Forks-interpretation: `collective`

@@ -17,7 +17,3 @@ All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `
## Condition
The `msg_hash_g2_compressed` handler should hash the `message`, with the given `domain`, to G2 with compression, and the result should match the expected `output`.
## Forks
Forks-interpretation: `collective`

@@ -17,7 +17,3 @@ All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `
## Condition
The `msg_hash_g2_uncompressed` handler should hash the `message`, with the given `domain`, to G2, without compression, and the result should match the expected `output`.
## Forks
Forks-interpretation: `collective`

@@ -15,7 +15,3 @@ All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `
## Condition
The `priv_to_pub` handler should compute the public key for the given private key `input`, and the result should match the expected `output`.
## Forks
Forks-interpretation: `collective`

@@ -18,7 +18,3 @@ All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `
## Condition
The `sign_msg` handler should sign the given `message`, with `domain`, using the given `privkey`, and the result should match the expected `output`.
## Forks
Forks-interpretation: `collective`

@@ -16,11 +16,3 @@ post: BeaconState -- state after applying the deposit. No value if deposit pr
A `deposits` handler of the `operations` runner should process these cases,
calling the implementation of the `process_deposit(state, deposit)` functionality described in the spec.
The resulting state should match the expected `post` state, or no change if the `post` state is left blank.
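A hedged sketch of this check, assuming `process_deposit` is treated as a pure function returning the new state (a client binding may instead mutate in place), and that a rejected deposit surfaces as either an exception or an unchanged state:

```python
def check_deposit_case(case, process_deposit):
    # case: {"pre": state, "deposit": deposit, "post": state or None}
    pre = case["pre"]
    if case.get("post") is None:
        # Blank post: the deposit should be rejected, leaving the state unchanged.
        try:
            result = process_deposit(pre, case["deposit"])
        except Exception:
            return  # rejection via exception also counts as "no change"
        assert result == pre, "state changed, but post was left blank"
    else:
        result = process_deposit(pre, case["deposit"])
        assert result == case["post"], "post state mismatch"
```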
## Forks
Forks-interpretation: `collective`
Pre and post state contain slot numbers, and are time sensitive.
Additional tests will be added for future forks to cover fork-specific behavior based on input data
(including suites with deposits on fork transition blocks, covering multiple forks).

@@ -30,7 +30,3 @@ Seed is the raw shuffling seed, passed to permute-index (or optimized shuffling
The resulting list should match the expected output `shuffled` after shuffling the implied input, using the given `seed`.
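One plausible reading of this check, assuming a `permute_index(index, list_size, seed)` signature for the client's implementation (the name and signature are illustrative, not prescribed by the format):

```python
def check_shuffling_case(case, permute_index):
    # case: {"seed": bytes, "count": int, "shuffled": [int, ...]}
    count = case["count"]
    # Apply the client's permute-index to every index of the implied input list.
    result = [permute_index(i, count, case["seed"]) for i in range(count)]
    assert result == case["shuffled"], "shuffling mismatch"
```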
## Forks
Forks-interpretation: `collective`

@@ -17,7 +17,3 @@ tags: List[string] -- description of test case, in the form of a list of labels
Two-way testing can be implemented in the test-runner:
- Encoding: After encoding the given input number `value`, the output should match `ssz`
- Decoding: After decoding the given `ssz` bytes, it should match the input number `value`
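Since SSZ serializes `uintN` as N/8 little-endian bytes, the two directions reduce to a fixed-size little-endian round-trip; a minimal sketch (the case layout shown is assumed, not prescribed by the format):

```python
def encode_uint(value: int, byte_length: int) -> bytes:
    # SSZ serializes a uintN as N/8 little-endian bytes.
    return value.to_bytes(byte_length, "little")

def decode_uint(data: bytes) -> int:
    return int.from_bytes(data, "little")

def check_uint_case(case, byte_length):
    # case: {"value": int, "ssz": bytes} -- hypothetical decoded form of a test case.
    assert encode_uint(case["value"], byte_length) == case["ssz"]  # encoding direction
    assert decode_uint(case["ssz"]) == case["value"]               # decoding direction
```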
## Forks
Forks-interpretation: `collective`