* also pack attestations where LMD vote is orphaned
When `attestation.data.beacon_block_root` gets orphaned, attestations
with a good `attestation.data.target.root` may still be valuable.
The LMD GHOST vote is not relevant for attestation rewards.
Switch to using the FFG vote (`attestation.data.target.root`) instead;
gossip validation ensures it is an ancestor of `beacon_block_root`.
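A minimal sketch of the fallback rule, with simplified types and a hypothetical `isOrphaned` callback standing in for the real fork-choice lookup:

```nim
type
  Checkpoint = object
    epoch: uint64
    root: string              # stand-in for the real Eth2Digest
  AttestationData = object
    beacon_block_root: string
    target: Checkpoint

proc packingRoot(data: AttestationData,
                 isOrphaned: proc(root: string): bool): string =
  ## Root under which the attestation is considered for packing.
  if isOrphaned(data.beacon_block_root):
    # Fall back to the FFG vote: gossip validation already ensured that
    # target.root is an ancestor of beacon_block_root.
    data.target.root
  else:
    data.beacon_block_root
```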
* lint
* Clarify addOrphan error/logging
`addOrphan` returned a `bool` to indicate success. Change this to a `Result`
so that different errors can be distinguished.
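A hedged sketch of the new shape, using the `results` package; the error cases and capacity check are illustrative, not the actual quarantine logic:

```nim
import pkg/results

type
  AddOrphanError = enum        # hypothetical error cases
    Duplicate, QuarantineFull

proc addOrphan(quarantine: var seq[string],
               blockRoot: string): Result[void, AddOrphanError] =
  ## Formerly returned `bool`; a Result lets callers log the exact reason.
  if blockRoot in quarantine:
    return err(Duplicate)
  if quarantine.len >= 16:     # illustrative capacity limit
    return err(QuarantineFull)
  quarantine.add blockRoot
  ok()
```

Callers that previously branched on `false` can now distinguish and log each failure case.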
* Update beacon_chain/consensus_object_pools/block_quarantine.nim
Co-authored-by: tersec <tersec@users.noreply.github.com>
* Update beacon_chain/gossip_processing/gossip_validation.nim
---------
Co-authored-by: tersec <tersec@users.noreply.github.com>
When doing sync for blocks older than
`MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS`, we skip the blobs-by-range
request, but we then pass an empty blob sequence to
validation, which then fails.
To fix this: Use an `Option[BlobSidecars]` to allow expressing the
distinction between "empty blob sequence" and "blobs unavailable". Use
the latter for "old" blocks, and don't attempt to run blob validation.
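A minimal sketch of the distinction, with `BlobSidecars` stubbed as a plain seq:

```nim
import std/options

type BlobSidecars = seq[string]  # stand-in for the real sidecar seq

proc validateBlobs(blobs: BlobSidecars) =
  echo "validating ", blobs.len, " blob sidecars"  # zero is legitimate

proc processBlock(blobs: Option[BlobSidecars]) =
  if blobs.isSome:
    # some(@[]) means "block has no blobs"; validation still runs
    validateBlobs(blobs.get)
  else:
    # none() means "blobs unavailable": the block is older than
    # MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, so skip blob validation
    discard
```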
`SyncCommitteeMsgPool` grouped messages by their `beacon_block_root`.
This is problematic around sync committee period boundaries and forks.
Around sync committee period boundaries, members from both the current
and next sync committee may sign the same `beacon_block_root`; mixing
the signatures from both committees together is a mistake. Likewise,
around fork transitions, the `signing_root` changes, so those messages
also need to be segregated.
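One way to express the segregation is a wider grouping key, sketched below with simplified types; the pool's actual layout may differ:

```nim
type
  SyncMsgKey = object
    blockRoot: string           # the signed beacon_block_root
    period: uint64              # sync committee period of the signer
    forkDigest: array[4, byte]  # captures signing_root changes across forks
```

Messages that agree on `blockRoot` but differ in `period` or `forkDigest` then aggregate in separate buckets.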
* don't run flaky CI in Windows
* reduce stack size in `test_light_client`
* regen tests
* re-enable maybe non-problematic test module on Windows
---------
Co-authored-by: Etan Kissling <etan@status.im>
When an uncached `ShufflingRef` is requested, we currently replay state,
which can take several seconds. Acceleration is possible by:
1. Start from any state with locked-in `get_active_validator_indices`.
Any blocks / slots applied to such a state can only affect that
result for future epochs, so the state remains viable for querying the target epoch.
`compute_activation_exit_epoch(state.slot.epoch) > target.epoch`
2. Determine the highest common ancestor of `state` and `target.blck`.
At the ancestor slot, the same rules apply re `get_active_validator_indices`.
`compute_activation_exit_epoch(ancestorSlot.epoch) > target.epoch`
3. We now have a `state` that shares history with `target.blck` up
through a common ancestor slot. Any blocks / slots that the `state`
contains but that are not part of the `target.blck` history only affect
`get_active_validator_indices` at epochs _after_ `target.epoch`.
4. Select the `state.randao_mixes[N]` that is closest to the common ancestor.
Either direction is fine (above / below the ancestor).
5. From that RANDAO mix, mix in / out all RANDAO reveals from blocks
in-between. This is just an XOR operation, so fully reversible;
see the sketch after this list.
`mix = mix xor SHA256(blck.message.body.randao_reveal)`
6. Compute the attester dependent slot from `target.epoch`.
`if target.epoch >= 2: (target.epoch - 1).start_slot - 1 else: GENESIS_SLOT`
7. Trace back from `target.blck` to the attester dependent slot.
We now have the destination for which we want to obtain RANDAO.
8. Mix in all RANDAO reveals from blocks up through the `dependentBlck`.
Same method, no special handling necessary for epoch transitions.
9. Combine `get_active_validator_indices` from `state` at `target.epoch`
with the recovered RANDAO value at `dependentBlck` to obtain the
requested shuffling, and construct the `ShufflingRef` without replay.
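A minimal sketch of the XOR mechanics from steps 5 and 8 and the dependent-slot rule from step 6, assuming nimcrypto for SHA-256; names and constants are illustrative:

```nim
import nimcrypto/[hash, sha2]

const
  GENESIS_SLOT = 0'u64
  SLOTS_PER_EPOCH = 32'u64

proc mixInReveal(mix: var array[32, byte], reveal: openArray[byte]) =
  ## Steps 5 and 8: mixing a reveal in or out is the same XOR,
  ## so the walk from the snapshot mix works in either direction.
  let h = sha256.digest(reveal)
  for i in 0 ..< 32:
    mix[i] = mix[i] xor h.data[i]

proc attesterDependentSlot(targetEpoch: uint64): uint64 =
  ## Step 6: the slot just before epoch `target.epoch - 1` begins.
  if targetEpoch >= 2:
    (targetEpoch - 1) * SLOTS_PER_EPOCH - 1
  else:
    GENESIS_SLOT
```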
* more tests and simplify logic
* test with different number of deposits per branch
* Update beacon_chain/consensus_object_pools/blockchain_dag.nim
Co-authored-by: Jacek Sieka <jacek@status.im>
* `commonAncestor` tests
* lint
---------
Co-authored-by: Jacek Sieka <jacek@status.im>
`attachMerkleProofs` is used by `mockUpdateStateForNewDeposit` to create
a single deposit. The function doesn't work correctly with multiple
deposits, though. Fix this to enable more complex tests,
and also return the `deposit_root` for forming a matching `Eth1Data`.
We already updated the field order in the actual `ExecutionPayload`,
but init code, tests, logs, etc. still used the old order.
Update those occurrences to also match the field order in the struct.
Furthermore, add `excess_data_gas` to the last entry in `test_eth1_monitor`.
Back then, Milagro interop used offset 1000 for mock BLS keys.
Since then, the interop code has been removed and multi-client testnets exist.
EF tests use an offset of 1 for mock BLS keys. This patch aligns our
implementation to also use an offset of 1, potentially making debugging of
state differences a bit easier (but, ultimately, low impact).
Furthermore, simulation files are now emitted into a subdirectory
to reduce clutter in the repo root directory, and to avoid conflicts
where a cached file with offset 1000 runs against tests expecting offset 1.
See https://github.com/status-im/nimbus-eth2/pull/2928/files#r719266863
The consensus-spec-tests already cover the scenarios of our custom test
runner, so the custom tests can be removed. Also cleans up unused config
flags and related unreachable logic.
* Simplify block quarantine blobless
The quarantine blobless table was initially keyed by (Eth2Digest,
ValidatorSig). This was modelled on the orphan table. The presence of
the signature in the key is necessary for orphans, because we can't
verify the signature for an orphan. That is not the case for a
blobless block, where the signature can be verified.
So this PR changes the blobless block table to be keyed by an
`Eth2Digest` only. This simplifies the retrieval and handling of
blobless blocks.
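In table terms, the change looks roughly like this (digest and value types simplified):

```nim
import std/tables

type
  Eth2Digest = array[32, byte]    # stand-in
  ValidatorSig = array[96, byte]  # stand-in

  # Before: orphan-style key; the signature was part of the key because
  # an orphan's signature cannot be verified yet.
  OldBloblessTable = Table[(Eth2Digest, ValidatorSig), string]

  # After: blobless blocks have verified signatures, so the block root
  # alone identifies an entry.
  BloblessTable = Table[Eth2Digest, string]
```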
* review feedback
* fix false-positive getopt failure when multiple getopt matches exist in the searched path
* also get launch_local_testnet
* also make_prometheus_config, called from launch_local_testnet
* Refactor `nimbus_signing_node` to support Unix signals.
* Fix SN being unable to close the REST server properly.
* Fix issues in the `keys`, `deposit`, and `validator_registration` endpoints.
Add `getValidatorExitSignature()` and `getDepositMessageSignature()` to `validator_pool`.
* Add `/reload` endpoint and implementation.
Fix `signData` to not cancel `timer`.
Fix `validator_pool` to clear the `attachedValidators` table.
* Diva protocol enhancement implementation.