Commit Graph

252 Commits

Etan Kissling c70fd8fe97
Merge branch 'unstable' into dev/etan/rd-shufflingacc 2023-05-17 14:06:31 +02:00
tersec 74511f61d1
Use withdrawal credentials as default fee recipient (#4968) 2023-05-17 07:56:37 +03:00
Etan Kissling 40e89937c5
segregate sync committee messages by period / fork (#4953)
`SyncCommitteeMsgPool` grouped messages by their `beacon_block_root`.
This is problematic around sync committee period boundaries and forks.
Around sync committee period boundaries, members from both the current
and next sync committee may sign the same `beacon_block_root`; mixing
the signatures from both committees together is a mistake. Likewise,
around fork transitions, the `signing_root` changes, so those messages
also need to be segregated.
2023-05-17 07:55:55 +03:00
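
As an illustration of the keying change described above, a minimal Python sketch (the actual pool is Nim code in the repository; the constants are the mainnet presets, the field and method names here are hypothetical):

```python
from collections import defaultdict

SLOTS_PER_EPOCH = 32
EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256  # mainnet preset

def sync_committee_period(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH // EPOCHS_PER_SYNC_COMMITTEE_PERIOD

class SyncCommitteeMsgPool:
    """Toy pool: messages that sign the same beacon_block_root but belong to a
    different sync committee period or fork digest land in separate buckets,
    so their signatures are never aggregated together."""

    def __init__(self) -> None:
        self._buckets: dict = defaultdict(list)

    def add(self, fork_digest: bytes, slot: int, beacon_block_root: bytes, msg) -> None:
        self._buckets[(fork_digest, sync_committee_period(slot), beacon_block_root)].append(msg)

    def messages_for(self, fork_digest: bytes, slot: int, beacon_block_root: bytes) -> list:
        return self._buckets.get((fork_digest, sync_committee_period(slot), beacon_block_root), [])
```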
Jacek Sieka 83393cea8d
dependent slot helpers 2023-05-16 11:04:25 +02:00
Etan Kissling dbba003a38
Revert "Revert "accelerate `getShufflingRef` (#4911)" (#4958)"
This reverts commit 748be8b67b.
2023-05-15 17:41:40 +02:00
Etan Kissling 748be8b67b
Revert "accelerate `getShufflingRef` (#4911)" (#4958)
This reverts commit ea97e93e74.
2023-05-15 15:25:51 +00:00
Etan Kissling ea97e93e74
accelerate `getShufflingRef` (#4911)
When an uncached `ShufflingRef` is requested, we currently replay state
which can take several seconds. Acceleration is possible by:

1. Start from any state with locked-in `get_active_validator_indices`.
   Any blocks / slots applied to such a state can only affect that
   result for future epochs, so are viable for querying target epoch.
   `compute_activation_exit_epoch(state.slot.epoch) > target.epoch`

2. Determine highest common ancestor among `state` and `target.blck`.
   At the ancestor slot, the same rule for `get_active_validator_indices` applies.
   `compute_activation_exit_epoch(ancestorSlot.epoch) > target.epoch`

3. We now have a `state` that shares history with `target.blck` up
   through a common ancestor slot. Any blocks / slots that the `state`
   contains, which are not part of the `target.blck` history, affect
   `get_active_validator_indices` at epochs _after_ `target.epoch`.

4. Select `state.randao_mixes[N]` that is closest to common ancestor.
   Either direction is fine (above / below ancestor).

5. From that RANDAO mix, mix in / out all RANDAO reveals from blocks
   in-between. This is just an XOR operation, so fully reversible.
   `mix = mix xor SHA256(blck.message.body.randao_reveal)`

6. Compute the attester dependent slot from `target.epoch`.
   `if epoch >= 2: (target.epoch - 1).start_slot - 1 else: GENESIS_SLOT`

7. Trace back from `target.blck` to the attester dependent slot.
   We now have the destination for which we want to obtain RANDAO.

8. Mix in all RANDAO reveals from blocks up through the `dependentBlck`.
   Same method, no special handling necessary for epoch transitions.

9. Combine `get_active_validator_indices` from `state` at `target.epoch`
   with the recovered RANDAO value at `dependentBlck` to obtain the
   requested shuffling, and construct the `ShufflingRef` without replay.

* more tests and simplify logic

* test with different number of deposits per branch

* Update beacon_chain/consensus_object_pools/blockchain_dag.nim

Co-authored-by: Jacek Sieka <jacek@status.im>

* `commonAncestor` tests

* lint

---------

Co-authored-by: Jacek Sieka <jacek@status.im>
2023-05-12 19:36:59 +02:00
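
The reversible RANDAO mixing (steps 5 and 8) and the dependent-slot formula (step 6) can be sketched as follows; this is illustrative Python, not the repository's Nim implementation:

```python
import hashlib

SLOTS_PER_EPOCH = 32
GENESIS_SLOT = 0

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def mix_randao(mix: bytes, randao_reveal: bytes) -> bytes:
    """Steps 5/8: mixing a reveal in or out is the same XOR, hence reversible."""
    return xor_bytes(mix, hashlib.sha256(randao_reveal).digest())

def attester_dependent_slot(target_epoch: int) -> int:
    """Step 6: the slot whose RANDAO mix determines the shuffling for target_epoch."""
    if target_epoch >= 2:
        return (target_epoch - 1) * SLOTS_PER_EPOCH - 1
    return GENESIS_SLOT

# mixing the same reveal in and out again restores the original value
mix = b"\x00" * 32
reveal = b"randao reveal bytes"
assert mix_randao(mix_randao(mix, reveal), reveal) == mix
```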
Jacek Sieka b3c6320d56
embed genesis states using incbin (#4905) 2023-05-11 11:11:00 +00:00
zah 5bf9284e62
Initial public version of the Verifying Web3Signer functionality (#4912)
* Allow the list of proved properties for web3signer to be configured
* Document the Web3Signer setups (regular, distributed and verified)
2023-05-09 11:16:43 +03:00
Etan Kissling e6e4ba9de6
clean up redundant tests and config (#4836)
The consensus-spec-tests already cover the scenarios of our custom test
runner, so the custom tests can be removed. Also cleans up unused config
flags and related unreachable logic.
2023-04-18 21:26:36 +02:00
tersec cc24429828
remove empty block fallback now that capella's on mainnet (#4821)
* remove empty block fallback now that capella's on mainnet

* build_empty_execution_payload is only testing infrastructure now

* update error message
2023-04-18 09:21:15 +00:00
Eugene Kabanov 0ff86e9538
web3signer refactoring and test suite. (#4775)
* Refactor nimbus_signing_node to support Unix signals.

* Fix SN unable to close REST server properly.

* Fix `keys`, `deposit` and `validator_registration` endpoints issues.
Add getValidatorExitSignature() and getDepositMessageSignature() to validator_pool.

* Add /reload endpoint and implementation.
Fix signData to not cancel `timer`.
Fix validator_pool should clear attachedValidators table.

* Diva protocol enhancement implementation.
2023-04-06 16:16:21 +03:00
tersec ceb24d31d9
test forks.nim capella and deneb block/state ssz serialization (#4772) 2023-03-29 13:22:19 +00:00
tersec fc1f9a2065
builder API liveness failsafe (#4746)
* builder API liveness failsafe

* add test summary change
2023-03-22 18:48:48 +01:00
Zahary Karadjov 4d1b2dd9f5
Merge branch 'stable' into unstable 2023-03-17 17:51:39 +02:00
Zahary Karadjov 46f48269ef
Backwards compatible handling of the web3-url parameter in TOML 2023-03-14 17:50:03 +02:00
Etan Kissling 0f272adebd
rename `EIP4844` > `Deneb` in `test_beacon_chain_db` (#4721) 2023-03-11 02:49:17 +01:00
Etan Kissling 969c6f73ae
misc local `EIP4844` > `Deneb` bumps (#4717)
* misc local `EIP4844` > `Deneb` bumps

* fix
2023-03-11 00:28:19 +00:00
henridf 90640cce05
Update sync to use post-decoupling RPC (#4701)
* Update sync to use post-decoupling RPCs

blob_sidecars_by_range returns a flat list of sidecars, which must
then be grouped per-slot.

* Add test for groupBlobs

* createBlobs: convert proc to func
2023-03-07 20:19:17 +00:00
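
A minimal sketch of the per-slot grouping described above (Python for illustration; `BlobSidecar` here is a stand-in for the real type and the helper name is hypothetical):

```python
from collections import OrderedDict
from dataclasses import dataclass

@dataclass
class BlobSidecar:  # illustrative stand-in for the real sidecar type
    slot: int
    index: int

def group_by_slot(sidecars):
    """Group the flat, slot-ordered response into per-slot lists so each
    block's blobs can be matched against its block during sync."""
    grouped = OrderedDict()
    for sc in sidecars:
        grouped.setdefault(sc.slot, []).append(sc)
    return grouped

flat = [BlobSidecar(7, 0), BlobSidecar(7, 1), BlobSidecar(9, 0)]
assert list(group_by_slot(flat).values()) == [
    [BlobSidecar(7, 0), BlobSidecar(7, 1)], [BlobSidecar(9, 0)]]
```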
Jacek Sieka 83f9745df1
restore doppelganger check on connectivity loss (#4616)
* restore doppelganger check on connectivity loss

https://github.com/status-im/nimbus-eth2/pull/4398 introduced a
regression in functionality where doppelganger detection would not be
rerun during connectivity loss. This PR reintroduces this check and
makes some adjustments to the implementation to simplify the code flow
for both BN and VC.

* track when check was last performed for each validator (to deal with
late-added validators)
* track when we performed a doppel-detectable activity (attesting) so as
to avoid false positives
* remove nodeStart special case (this should be treated the same as
adding a validator dynamically just after startup)

* allow sync committee duties in doppelganger period

* don't trigger doppelganger when registering duties

* fix crash when expected index response is missing

* fix missing slashingSafe propagation
2023-02-20 13:28:56 +02:00
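
A rough sketch of the per-validator bookkeeping described above (Python, with hypothetical names and an illustrative lookback constant; the actual logic lives in `validator_pool` and differs in detail):

```python
from dataclasses import dataclass

DOPPELGANGER_EPOCHS = 2  # illustrative lookback, not necessarily the client's value

@dataclass
class DoppelgangerEntry:
    """Remember when the validator was added and when liveness was last checked,
    so late-added validators get their own quiet period (startup is treated the
    same as adding a validator dynamically)."""
    added_epoch: int
    last_checked_epoch: int = -1   # last epoch with a clean liveness result
    last_signed_epoch: int = -1    # our own attesting activity (not a doppelganger)

    def record_check(self, epoch: int, seen_elsewhere: bool) -> None:
        # our own recent signing also shows up on the network - don't count it
        if seen_elsewhere and epoch > self.last_signed_epoch:
            raise RuntimeError("possible doppelganger - refuse further duties")
        self.last_checked_epoch = epoch

    def may_perform_duties(self, epoch: int) -> bool:
        """Sign only after enough clean epochs since the validator was added."""
        return (epoch >= self.added_epoch + DOPPELGANGER_EPOCHS
                and self.last_checked_epoch >= self.added_epoch)
```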
zah ff464e49cf
Implement the set of gas_limit end-points in the Keymanager API (#4612)
Fixes #3946
2023-02-15 15:10:31 +00:00
tersec 07ccd3fa6e
remove capella and deneb empty execution payload fallbacks (#4613) 2023-02-13 19:15:16 +02:00
tersec bcc9781cc7
rm obsolete interop module (#4570) 2023-02-01 16:29:55 +01:00
tersec 0fb726c420
`BeaconStateFork/BeaconBlockFork` -> `ConsensusFork` (#4560)
* `BeaconStateFork/BeaconBlockFork` -> `ConsensusFork`

* revert unrelated change

* revert unrelated changes

* update test summaries
2023-01-28 19:53:41 +00:00
tersec 819e007689
exit/validatorchange pool includes BLS to execution messages; REST support for new pool (#4519)
* exit/validatorchange pool includes BLS to execution messages; REST
support for new pool

* catch failed individual futures

* increase BLS changes bound and keep BLS seen consistent with subpool

* deque capacities should be powers of 2
2023-01-19 22:00:40 +00:00
Etan Kissling fda03548e3
use `ForkedLightClientStore` internally (#4512)
When running `nimbus_light_client`, we persist the latest header from
`LightClientStore.finalized_header` in a database across restarts.
Because the data format is derived from the latest `LightClientStore`,
this could lead to data being persisted in pre-release formats.

To enable us to test later `LightClientStore` versions on devnets,
transition to a `ForkedLightClientStore` internally that is only
migrated to newer forks on-demand (instead of starting at latest).
2023-01-16 16:53:45 +01:00
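
The on-demand migration idea can be sketched as a tagged union that only upgrades its fork when newer data arrives (illustrative Python; fork names and fields are placeholders):

```python
from dataclasses import dataclass
from enum import IntEnum

class LightClientDataFork(IntEnum):  # illustrative fork enumeration
    NONE = 0
    ALTAIR = 1
    CAPELLA = 2

@dataclass
class ForkedLightClientStore:
    kind: LightClientDataFork
    store: object  # the fork-specific LightClientStore payload

    def migrate_to(self, target: LightClientDataFork) -> None:
        """Upgrade the wrapped store step by step, but only when needed -
        the store stays at its current fork until newer data arrives,
        so nothing is persisted in a later (pre-release) format early."""
        while self.kind < target:
            self.kind = LightClientDataFork(self.kind + 1)
            # a real migration would convert self.store field by field here
```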
Etan Kissling 609227559f
LC data fork cleanup (#4506)
Distinguish between those code locations that need to be updated on each
light client data format change, and those others that should generally
be fine, as long as a valid light client object is processed.

The former are tagged with static assert for `LightClientDataFork.high`.

The latter are changed to `lcDataFork > LightClientDataFork.None` to
indicate that they depend only on presence of any valid object.
Also bundled a few minor cleanups and fixes.

Also add `Forky` type for `LightClientStore` and minor fixes / cleanups.
2023-01-14 22:19:50 +01:00
henridf 309f8690de
Wire up engine_newPayloadV3 (#4482)
* Wire up eip4844's newPayloadV3

* Add eip4844 test

* Update AllTests-mainnet.md and fix typo
2023-01-11 18:21:19 +00:00
Jacek Sieka 0ba9fc4ede
History pruning (fixes #4419) (#4445)
Introduce (optional) pruning of historical data - a pruned node will
continue to answer queries for historical data up to
`MIN_EPOCHS_FOR_BLOCK_REQUESTS` epochs, or roughly 5 months, capping
typical database usage at around 60-70gb.

To enable pruning, add `--history=prune` to the command line - on the
first start, old data will be cleared (which may take a while) - after
that, data is pruned continuously.

When pruning an existing database, the database will not shrink -
instead, the freed space is recycled as the node continues to run - to
free up space, perform a trusted node sync with a fresh database.

When switching on archive mode in a pruned node, history is retained
from that point onwards.

History pruning is scheduled to be enabled by default in a future
release.

In this PR, `minimal` mode from #4419 is not implemented meaning
retention periods for states and blocks are always the same - depending
on user demand, a future PR may implement `minimal` as well.
2023-01-07 10:02:15 +00:00
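
A sketch of the retention rule described above (Python; `MIN_EPOCHS_FOR_BLOCK_REQUESTS` is the mainnet config value, the helper name is hypothetical):

```python
SLOTS_PER_EPOCH = 32
MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024  # mainnet config value, roughly 5 months

def prune_horizon_slot(current_slot: int) -> int:
    """Oldest slot a pruned node must still be able to serve blocks for;
    anything older may be removed from the database."""
    current_epoch = current_slot // SLOTS_PER_EPOCH
    horizon_epoch = max(0, current_epoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS)
    return horizon_epoch * SLOTS_PER_EPOCH
```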
Etan Kissling 2184fd2322
bump `nim-eth`, extend empty block fallback for EIP4844 (#4425)
Implements `emptyPayloadToBlockHeader` for EIP-4844.
https://github.com/status-im/nim-eth/pull/570
2022-12-20 20:00:56 +01:00
Jacek Sieka bd8f08204e
Implement skip_randao_verification for blinded blocks (#4435)
* Implement skip_randao_verification for blinded blocks

* fix redundant randao verification on block replay (5% faster)
* check randao in REST instead of internally
* avoid redundant copies when making blocks
* cleanup leftover randao skipping code

* fix test summary
2022-12-19 15:11:12 +02:00
tersec e7706768c3
add database beaconstate tests for capella and eip4844 (#4429) 2022-12-14 23:12:29 +00:00
tersec bc996623e0
add EIP4844 block database read/write test (#4416) 2022-12-13 00:56:50 +00:00
tersec ad8e682f76
remove redundant justification and finalization tests (#4412) 2022-12-09 22:45:48 +00:00
Jacek Sieka 6e2a02466e
unify bn/vc doppelganger detection (#4398)
* fix REST liveness endpoint responding even when gossip is not enabled
* fix VC exit code on doppelganger hit
* fix activation epoch not being updated correctly on long deposit
queues
* fix activation epoch being set incorrectly when updating validator
* move most implementation logic to `validator_pool`, add tests
* ensure consistent logging between VC and BN
* add docs
2022-12-09 17:05:55 +01:00
tersec 9df19f68fe
remove redundant deposit processing tests (#4408) 2022-12-09 13:00:22 +00:00
zah d30cb8baf1
Support for obtaining deposit snapshots during trustedNodeSync (#4303)
Other changes:

* More optimal search for TTD block.

* Add timeouts to all REST requests during trusted node sync.
  Fixes #4037

* Removed support for storing a deposit snapshot in the network
  metadata.
2022-12-07 12:24:51 +02:00
Etan Kissling c46dc3d7e8
check that tests are up-to-date in CI (#4373)
Enforce that `AllTests-mainnet.md` and other generated files are up-to-date.
2022-11-30 02:01:04 +01:00
tersec ed672113bc
support engine API execution payloads with withdrawals (#4358) 2022-11-29 05:02:16 +00:00
Etan Kissling c941dc801f
bump `nim-eth`, extend empty block fallback for Capella (#4357)
Implements `emptyPayloadToBlockHeader` for Capella.
https://github.com/status-im/nim-eth/pull/562
2022-11-28 14:41:25 +01:00
tersec 1146470f7d
use v1.3.0-alpha.1 consensus spec test vectors (#4338) 2022-11-21 08:44:49 +01:00
Jacek Sieka 09ade6d33d
Make trusted node sync era-aware (#4283)
This PR removes a bunch of code to make TNS aware of era files, avoiding
a duplicated backfill when era files are available.

* reuse chaindag for loading backfill state, replacing the TNS homebrew
* fix era block iteration to skip empty slots
* add tests for `can_advance_slots`
2022-11-10 10:44:47 +00:00
tersec 90eb2ccb20
database and fork choice test runner support for capella (#4309) 2022-11-09 17:32:10 +00:00
tersec 0919b8689e
run capella fork transition tests in CI (#4307) 2022-11-09 12:28:34 +00:00
Etan Kissling 7ad610f6d7
set fee recipient in empty payload fallback (#4291)
When the EL/Builder fails to produce an execution payload, we fall back
to an empty `ExecutionPayload`. Even though it contains no transactions
it should refer to the configured fee recipient. This is useful for
privacy reasons (do not reveal the reason for the empty payload) and for
compliance with additional fee recipient rules by staking pools.
2022-11-08 14:19:56 +00:00
tersec 0a43c89cd2
run capella block sanity tests in CI (#4292) 2022-11-07 18:37:48 +00:00
Jacek Sieka d839b9d07e
State-only checkpoint state startup (#4251)
Currently, we require genesis and a checkpoint block and state to start
from an arbitrary slot - this PR relaxes this requirement so that we can
start with a state alone.

The current trusted-node-sync algorithm works by first downloading
blocks until we find an epoch aligned non-empty slot, then downloads the
state via slot.

However, current
[proposals](https://github.com/ethereum/beacon-APIs/pull/226) for
checkpointing prefer finalized state as
the main reference - this allows more simple access control and caching
on the server side - in particular, this should help checkpoint-syncing
from sources that have a fast `finalized` state download (like infura
and teku) but are slow when accessing state via slot.

Earlier versions of Nimbus will not be able to read databases created
without a checkpoint block and genesis. In most cases, backfilling makes
the database compatible except where genesis is also missing (custom
networks).

* backfill checkpoint block from libp2p instead of checkpoint source,
when doing trusted node sync
* allow starting the client without genesis / checkpoint block
* perform epoch start slot lookahead when loading tail state, so as to
deal with the case where the epoch start slot does not have a block
* replace `--blockId` with `--state-id` in TNS command line
* when replaying, also look at the parent of the last-known-block (even
if we don't have the parent block data, we can still replay from a
"parent" state) - in particular, this clears the way for implementing
state pruning
* deprecate `--finalized-checkpoint-block` option (no longer needed)
2022-11-02 10:02:38 +00:00
Jacek Sieka 819442acc3
Allow chain dag without genesis / block (#4230)
* Allow chain dag without genesis / block

This PR enables the initialization of the dag without access to blocks
or genesis state - it is a prerequisite for implementing a number of
interesting features:

* checkpoint sync without any block download
* pruning of blocks and states

* backfill checkpoint block
2022-10-14 22:40:10 +03:00
Jacek Sieka af9ec577d0
nicer error message for failed backfill (#4188)
* nicer error message for failed backfill

Many checkpoint sources don't support block download

* RestGenericError -> RestErrorMessage

...and other assorted fixes to bring rest types closer to spec

* fix tests
2022-09-29 23:55:18 +03:00
Etan Kissling c1fe81d2ec
sync test report (#4197)
Update `AllTests-mainnet.md` by running `make -j test`.
2022-09-29 05:22:58 +00:00
tersec 57d68d0f72
re-enable randao checks (#4187)
* re-enable randao checks

* use `asSigVerified` consistently

* fix spelling

* document why state_transition.makeBeaconBlock trusting signatures is safe
2022-09-28 01:15:10 +00:00
tersec 0f6d19b4b3
implement v1.2.0 optimistic sync tests (#4174)
* implement v1.2.0 optimistic sync tests

* Update beacon_chain/consensus_object_pools/blockchain_dag.nim

Co-authored-by: Etan Kissling <etan@status.im>

* `lvh` -> `latestValidHash` and only invalidate one specific block

* `getEarliestInvalidRoot` -> `getEarliestInvalidBlockRoot`; `defaultEarliestInvalidRoot` -> `defaultEarliestInvalidBlockRoot`

Co-authored-by: Etan Kissling <etan@status.im>
2022-09-27 15:11:47 +03:00
Jacek Sieka 7f9af78ddb
test randao skippping (complements #3837) (#4179) 2022-09-27 09:22:24 +02:00
Etan Kissling 9180f09641
reduce LC optsync latency (#4002)
The optimistic sync spec was updated since the LC based optsync module
was introduced. It is no longer necessary to wait for the justified
checkpoint to have execution enabled; instead, any block is okay to be
optimistically imported to the EL client, as long as its parent block
has execution enabled. Complex syncing logic has been removed, and the
LC optsync module will now follow gossip directly, reducing the latency
when using this module. Note that because this is now based on gossip
instead of using sync manager / request manager, that individual blocks
may be missed. However, EL clients should recover from this by fetching
missing blocks themselves.
2022-08-25 03:53:59 +00:00
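
The import rule described above, sketched in Python (attribute names are placeholders; the real check follows the optimistic sync spec):

```python
def has_execution_payload(block) -> bool:
    """Post-merge blocks carry a non-empty execution payload."""
    payload = getattr(block.body, "execution_payload", None)
    return payload is not None and payload.block_hash != b"\x00" * 32

def ok_to_optimistically_import(block, parent) -> bool:
    # Updated rule per the commit: no need to wait for the justified
    # checkpoint to have execution enabled - the parent being an
    # execution block is enough to hand this block to the EL.
    return has_execution_payload(parent)
```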
Jacek Sieka 9e9db216c5
Harden block proposal against expired slashings/exits (#4013)
* Harden block proposal against expired slashings/exits

When a message is signed in a phase0 domain, it can no longer be
validated under bellatrix due to the correct fork no longer being
available in the `BeaconState`.

To ensure that all slashing/exits are still valid, in this PR we re-run
the checks in the state that we're proposing for, thus hardening against
both signatures and other changes in the state that might have
invalidated the message.

* fix same message added multiple times

in case of attestation slashing of multiple validators in one go
2022-08-23 18:30:46 +03:00
Etan Kissling f1ddcfff0f
support connecting to peers without bellatrix (#4011)
* support connecting to peers without bellatrix

Make discovery fork ID aware of scheduled Bellatrix fork to enable
connections to peers that don't have Bellatrix scheduled yet.
Without this, the node has peering issues with peers on older software versions.

* expand tests with compatibility checks

* more exhaustive compatibility checks
2022-08-21 19:36:46 +02:00
Etan Kissling bac50610fd
re-generate test report (#4007)
A couple tests have been removed recently; re-ran `make -j test` to sync
the test report.
2022-08-20 14:40:33 +00:00
zah fca20e08d6
Keymanager API for the validator client (#3976)
* Keymanager API for the validator client
* Properly treat the 'description' field as optional when loading Keystores
* Spec-compliant serialization of the slashing data in Keymanager's DeleteKeys response ()

Fixes #3940
Fixes #3964
Closes #3884 by adding test
2022-08-19 13:30:07 +03:00
Etan Kissling c360db8194
avoid materializing potentially long deposits seq (#3947)
When fetching eth1 data and deposits for a new block proposal, the list
of deposits from previous eth1 data to the next one is fully loaded into
a `seq`. This can potentially be a very long list in active periods.
Changing this to an `iterator` saves memory by ensuring that the entire
list is no longer materialized; only the `DepositData` roots are needed.
2022-08-12 16:52:06 +03:00
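
The seq-to-iterator change is essentially the pattern below (Python sketch; `db.deposit_root` is a hypothetical accessor):

```python
from typing import Iterator, List

def deposit_roots_seq(db, first: int, last: int) -> List[bytes]:
    """Materializes every DepositData root between two eth1 deposit indices -
    potentially a very long list during active deposit periods."""
    return [db.deposit_root(i) for i in range(first, last + 1)]

def deposit_roots_iter(db, first: int, last: int) -> Iterator[bytes]:
    """Streams the same roots one at a time; nothing is kept alive beyond the
    current item, which is all the proposal code needs to fold into the tree."""
    for i in range(first, last + 1):
        yield db.deposit_root(i)
```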
zah dc50abbc90
Implement a missing ignore rule for sync committee contributions (#3941) 2022-08-09 12:52:11 +03:00
Etan Kissling 735c1df62f
add strict mode to light client processor (#3894)
The light client sync protocol employs heuristics to ensure it does not
become stuck during non-finality or low sync committee participation.
These can enable use cases that prefer availability of recent data
over security. For our syncing use case, though, security is preferred.
An option is added to light client processor to configure this tradeoff.
2022-07-21 11:16:10 +02:00
zah 806536a040
[Keymanager API] Support for the feerecipient end-points (#3864)
Other changes:

* The Keymanager error responses differ from the Beacon API responses.
  'keymanagerApiError' replaces the former usages of 'jsonError'.

* Return status code 401 and 403 for authorization errors in accordance
  to the spec.

* Eliminate inconsistencies in the REST JSON parsing. Some of the code
  paths allowed missing fields.

* Added logging of serialization failure details at DEBUG level.
2022-07-13 17:45:04 +03:00
tersec ba4d4c14db
fix Nim 1.6 deprecation and unused import warnings (#3834) 2022-07-01 21:52:23 +00:00
Etan Kissling aa1b8e4a17
bump nim-ssz-serialization to `3db6cc0f282708aca6c290914488edd832971d61` (#3119)
This updates `nim-ssz-serialization` to
`3db6cc0f282708aca6c290914488edd832971d61`.

Notable changes:
- Use `uint64` for `GeneralizedIndex`
- Add support for building merkle multiproofs
2022-06-26 19:33:06 +02:00
zah c24c737866
Fix #3650 (participation format in BeaconState result is out of spec) (#3776)
* Fix #3650 (participation format in BeaconState result is out of spec)
* Make EpochParticipationFlags a distinct type
2022-06-20 08:38:56 +03:00
zah 5402dfab6b
Correct URLs for the DeleteKeys request in the Keymanager API (#3727)
Other changes:

* Make it easier to run the REST tests locally through a Makefile target
2022-06-19 20:54:12 +03:00
tersec cbb0b8142f
include capella fork version in fork consistency check (#3772)
* include capella fork version in fork consistency check

* Update tests/test_conf.nim

Co-authored-by: Etan Kissling <etan@status.im>

Co-authored-by: Etan Kissling <etan@status.im>
2022-06-18 10:05:33 +00:00
Etan Kissling 15967c4076
keep track of latest blocks for optimistic sync (#3715)
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
2022-06-10 14:16:37 +00:00
Etan Kissling c808f17a37
update to latest light client libp2p protocol (#3623)
Incorporates the latest changes to the light client sync protocol based
on Devconnect AMS feedback. Note that this breaks compatibility with the
previous prototype, due to changes to data structures and endpoints.
See https://github.com/ethereum/consensus-specs/pull/2802
2022-05-23 14:02:54 +02:00
zah 6d11ad6ce1
Support for distributed keystores with multiple remotes based on threshold signatures (#3616)
Other fixes:

* Fix bit rot in the `make prater-dev-deposit` target.
* Correct content-type in the responses of the Nimbus signing node
* Invalid JSON payload was being sent in the web3signer requests
2022-05-10 03:32:12 +03:00
Etan Kissling 18bd6df1b4
fix light client data collection for checkpoint sync (#3498)
When doing checkpoint sync, collecting light client data of known blocks
and states incorrectly assumes that `finalized_checkpoint` information
is also known. Hardens collection to only collect finalized checkpoint
data after `dag.computeEarliestLightClientSlot`.
2022-03-18 15:47:53 +01:00
Etan Kissling 12dc427535
introduce light client processor (#3509)
Adds `LightClientProcessor` as the pendant to `BlockProcessor` while
operating in light client mode. Note that a similar mechanism based on
async futures is used for interoperability with existing infrastructure,
despite light client object validation being done synchronously.
2022-03-17 23:26:56 +01:00
Jacek Sieka 05ffe7b2bf
Prune `BlockRef` on finalization (#3513)
Up til now, the block dag has been using `BlockRef`, a structure adapted
for a full DAG, to represent all of chain history. This is a correct and
simple design, but does not exploit the linearity of the chain once
parts of it finalize.

By pruning the in-memory `BlockRef` structure at finalization, we save,
at the time of writing, a cool ~250mb (or 25%:ish) chunk of memory
landing us at a steady state of ~750mb normal memory usage for a
validating node.

Above all though, we prevent memory usage from growing proportionally
with the length of the chain, something that would not be sustainable
over time -  instead, the steady state memory usage is roughly
determined by the validator set size which grows much more slowly. With
these changes, the core should remain sustainable memory-wise post-merge
all the way to withdrawals (when the validator set is expected to grow).

In-memory indices are still used for the "hot" unfinalized portion of
the chain - this ensures that consensus performance remains unchanged.

What changes is that for historical access, we use a db-based linear
slot index which is cache-and-disk-friendly, keeping the cost for
accessing historical data at a similar level as before, achieving the
savings at no perceivable cost to functionality or performance.

A nice collateral benefit is the almost-instant startup since we no
longer load any large indices at dag init.

The cost of this functionality instead can be found in the complexity of
having to deal with two ways of traversing the chain - by `BlockRef` and
by slot.

* use `BlockId` instead of `BlockRef` where finalized / historical data
may be required
* simplify clearance pre-advancement
* remove dag.finalizedBlocks (~50:ish mb)
* remove `getBlockAtSlot` - use `getBlockIdAtSlot` instead
* `parent` and `atSlot` for `BlockId` now require a `ChainDAGRef`
instance, unlike `BlockRef` traversal
* prune `BlockRef` parents on finality (~200:ish mb)
* speed up ChainDAG init by not loading finalized history index
* mess up light client server error handling - this needs revisiting :)
2022-03-17 17:42:56 +00:00
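
A toy model of the hot/cold split described above (Python; names are placeholders and the traversal details are heavily simplified):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class BlockId:
    root: bytes
    slot: int

class ChainDagSketch:
    """Hot/cold split: unfinalized blocks live in an in-memory map keyed by
    root; finalized history is answered from a linear, slot-indexed table."""

    def __init__(self) -> None:
        self.fork_blocks: Dict[bytes, BlockId] = {}        # hot: root -> BlockId
        self.finalized_roots: List[Optional[bytes]] = []   # cold: slot -> root (None = empty slot)

    def get_block_id_at_slot(self, slot: int) -> Optional[BlockId]:
        if slot < len(self.finalized_roots):               # cold path: direct table lookup
            root = self.finalized_roots[slot]
            return BlockId(root, slot) if root is not None else None
        for bid in self.fork_blocks.values():              # hot path (simplified walk)
            if bid.slot == slot:
                return bid
        return None

    def finalize(self, canonical: List[BlockId]) -> None:
        """On finalization, index the now-linear canonical chain by slot and
        drop the corresponding in-memory entries - this is where the memory
        savings described in the commit come from."""
        for bid in canonical:
            while len(self.finalized_roots) <= bid.slot:
                self.finalized_roots.append(None)
            self.finalized_roots[bid.slot] = bid.root
        horizon = canonical[-1].slot if canonical else -1
        for root, bid in list(self.fork_blocks.items()):
            if bid.slot <= horizon:
                del self.fork_blocks[root]
```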
Jacek Sieka c64bf045f3
remove StateData (#3507)
One more step on the journey to reduce `BlockRef` usage across the
codebase - this one gets rid of `StateData` whose job was to keep track
of which block was last assigned to a state - these duties have now been
taken over by `latest_block_root`, a fairly recent addition that
computes this block root from state data (at a small cost that should be
insignificant)

99% mechanical change.
2022-03-16 08:20:40 +01:00
Etan Kissling 6d1d31dd01
avoid re-requesting finalized blocks during sync (#3461)
When a `beaconBlocksByRange` response advances the `safeSlot`, but later
has errors, the sync queue keeps repeating that same request until it is
fulfilled without errors. Data up through `safeSlot` is considered to be
immutable, i.e., finalized, so re-requesting that data is not useful.
By advancing the sync progress in that scenario, those redundant query
portions can be avoided. Note, the finalized block _itself_ is always
requested, even in the initial request. This behaviour is kept the same.
2022-03-15 18:56:56 +01:00
Jacek Sieka a3bd01b58d
move dependent root computations to `BeaconState` / `EpochRef` (#3478)
* fewer deps on `BlockRef` traversal in anticipation of pruning
* allows identifying EpochRef:s by their shuffling as a first step of
* tighten error handling around missing blocks

using the zero hash for signalling "missing block" is fragile and easy
to miss - with checkpoint sync now, and pruning in the future, missing
blocks become "normal".
2022-03-15 09:24:55 +01:00
Etan Kissling ae408c279a
add option to collect light client data (#3474)
Light clients require full nodes to serve additional data so that they
can stay in sync with the network. This patch adds a new launch option
`--import-light-client-data` to configure what data to make available.
For now, data is only kept in memory; it is not persisted at this time.
Note that data is only collected locally; a separate patch is needed to
actually make it available over the network. `--serve-light-client-data`
will be used for serving data, but is not functional yet outside tests.
2022-03-11 21:28:10 +01:00
Mamy Ratsimbazafy ef7e8bdbd2
Minify slashing protection before SQLite (#3393) 2022-03-04 16:43:34 +02:00
tersec ef9767eb7a
implement --jwt-secret and HS256 JWT/JWS signing for engine API alpha.7 (#3440) 2022-02-27 16:55:02 +00:00
Ștefan Talpalaru e03a653bbe
update AllTests-mainnet.md (#3363) 2022-02-08 15:39:15 +01:00
Jacek Sieka d583e8e4ac
Store finalized block roots in database (3s startup) (#3320)
* Store finalized block roots in database (3s startup)

When the chain has finalized a checkpoint, the history from that point
onwards becomes linear - this is exploited in `.era` files to allow
constant-time by-slot lookups.

In the database, we can do the same by storing finalized block roots in
a simple sparse table indexed by slot, bringing the two representations
closer to each other in terms of conceptual layout and performance.

Doing so has a number of interesting effects:

* mainnet startup time is improved 3-5x (3s on my laptop)
* the _first_ startup might take slightly longer as the new index is
being built - ~10s on the same laptop
* we no longer rely on the beacon block summaries to load the full dag -
this is a lot faster because we no longer have to look up each block by
parent root
* a collateral benefit is that we no longer need to load the full
summaries table into memory - we get the RSS benefits of #3164 without
the CPU hit.

Other random stuff:

* simplify forky block generics
* fix withManyWrites multiple evaluation
* fix validator key cache not being updated properly in chaindag
read-only mode
* drop pre-altair summaries from `kvstore`
* recreate missing summaries from altair+ blocks as well (in case
database has lost some to an involuntary restart)
* print database startup timings in chaindag load log
* avoid allocating superfluous state at startup
* use a recursive sql query to load the summaries of the unfinalized
blocks
2022-01-30 18:51:04 +02:00
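
The sparse slot-indexed table can be sketched with SQLite as follows (illustrative schema and queries, not the client's actual layout):

```python
import sqlite3
from typing import Optional

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE finalized_block_roots(
        slot INTEGER PRIMARY KEY,  -- sparse: empty slots simply have no row
        root BLOB NOT NULL
    )""")

def put_finalized(slot: int, root: bytes) -> None:
    con.execute("INSERT OR REPLACE INTO finalized_block_roots VALUES (?, ?)", (slot, root))

def root_at_or_before(slot: int) -> Optional[bytes]:
    """By-slot lookup that resolves an empty slot to its closest filled
    ancestor, mirroring how linear finalized history is addressed."""
    row = con.execute(
        "SELECT root FROM finalized_block_roots WHERE slot <= ? "
        "ORDER BY slot DESC LIMIT 1", (slot,)).fetchone()
    return row[0] if row else None

put_finalized(64, b"\x11" * 32)
assert root_at_or_before(65) == b"\x11" * 32
```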
tersec 2b4a960270
rename On{Merge,Bellatrix}BlockAdded and Rollback{Merge,Bellatrix}HashedProc (#3321) 2022-01-26 13:21:29 +01:00
Jacek Sieka f70aceef37
Harden handling of unviable forks (#3312)
* Harden handling of unviable forks

In our current handling of unviable forks, we allow peers to send us
blocks that come from a different fork - this is not necessarily an
error as it can happen naturally, but it does open up the client to a
case where the same unviable fork keeps getting requested - rather than
allowing this to happen, we'll now give these peers a small negative
score - if it keeps happening, we'll disconnect them.

* keep track of unviable forks in quarantine, to avoid filling it with
known junk
* collect peer scores in single module
* descore peers when they send unviable blocks during sync
* don't give score for duplicate blocks
* increase quarantine size to a level that allows finality to happen
under optimal conditions - this helps avoid downloading the same blocks
over and over in case of an unviable fork
* increase initial score for new peers to make room for one more failure
before disconnection
* log and score invalid/unviable blocks in requestmanager too
* avoid ChainDAG dependency in quarantine
* reject gossip blocks with unviable parent
* continue processing unviable sync blocks in order to build unviable
dag

* docs

* Update beacon_chain/consensus_object_pools/block_pools_types.nim

* add unviable queue test
2022-01-26 13:20:08 +01:00
tersec 00a347457a
dynamic sync committee subscriptions (#3308)
* dynamic sync committee subscriptions

* fast-path trivial case rather than rely on RNG with probability 1 outcome

Co-authored-by: zah <zahary@gmail.com>

* use func instead of template; avoid calling async function unnecessarily

* avoid unnecessary sync committee topic computation; use correct epoch lookahead; enforce exception/effect tracking

* don't over-optimistically update ENR syncnets; non-looping version of nearSyncCommitteePeriod

* allow separately setting --allow-all-{sub,att,sync}nets

* remove unnecessary async

Co-authored-by: zah <zahary@gmail.com>
2022-01-24 20:40:59 +00:00
Jacek Sieka 61342c2449
limit by-root requests to non-finalized blocks (#3293)
* limit by-root requests to non-finalized blocks

Presently, we keep a mapping from block root to `BlockRef` in memory -
this has simplified reasoning about the dag, but is not sustainable with
the chain growing.

We can distinguish between two cases where by-root access is useful:

* unfinalized blocks - this is where the beacon chain is operating
generally, by validating incoming data as interesting for future fork
choice decisions - bounded by the length of the unfinalized period
* finalized blocks - historical access in the REST API etc - no bounds,
really

In this PR, we limit the by-root block index to the first use case:
finalized chain data can more efficiently be addressed by slot number.

Future work includes:

* limiting the `BlockRef` horizon in general - each instance is 40
bytes+overhead which adds up - this needs further refactoring to deal
with the tail vs state problem
* persisting the finalized slot-to-hash index - this one also keeps
growing unbounded (albeit slowly)

Anyway, this PR easily shaves ~128mb of memory usage at the time of
writing.

* No longer honor `BeaconBlocksByRoot` requests outside of the
non-finalized period - previously, Nimbus would generously return any
block through this libp2p request - per the spec, finalized blocks
should be fetched via `BeaconBlocksByRange` instead.
* return `Opt[BlockRef]` instead of `nil` when blocks can't be found -
this becomes a lot more common now and thus deserves more attention
* `dag.blocks` -> `dag.forkBlocks` - this index only carries unfinalized
blocks from now - `finalizedBlocks` covers the other `BlockRef`
instances
* in backfill, verify that the last backfilled block leads back to
genesis, or panic
* add backfill timings to log
* fix missing check that `BlockRef` block can be fetched with
`getForkedBlock` reliably
* shortcut doppelganger check when feature is not enabled
* in REST/JSON-RPC, fetch blocks without involving `BlockRef`

* fix dag.blocks ref
2022-01-21 13:33:16 +02:00
Zahary Karadjov 47f1f7ff1a More efficient reward data persistence; Address review comments
The new format is based on compressed CSV files in two channels:

* Detailed per-epoch data
* Aggregated "daily" summaries

The use of append-only CSV files significantly speeds up epoch
processing during data generation. The use of compression
results in smaller storage requirements overall. The use of the
aggregated files has a very minor cost in both CPU and storage,
but leads to near interactive speed for report generation.

Other changes:

- Implemented support for graceful shutdowns to avoid corrupting
  the saved files.

- Fixed a memory leak caused by lacking `StateCache` clean up on each
  iteration.

- Addressed review comments

- Moved the rewards and penalties calculation code in a separate module

Required invasive changes to existing modules:

- The `data` field of the `KeyedBlockRef` type is made public to be used
  by the validator rewards monitor's Chain DAG update procedure.

- The `getForkedBlock` procedure from the `blockchain_dag.nim` module
  is made public to be used by the validator rewards monitor's Chain DAG
  update procedure.
2022-01-18 01:56:56 +02:00
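
A sketch of the two-channel, append-only compressed CSV idea (Python with gzip for illustration; file names and columns are hypothetical):

```python
import csv
import gzip

def append_rows(path: str, rows) -> None:
    """Append reward rows to a compressed CSV file. Appending to a gzip file
    creates a new member, which readers handle transparently, so existing
    data never has to be rewritten."""
    with gzip.open(path, "at", newline="") as f:
        csv.writer(f).writerows(rows)

# detailed channel: one row per validator per epoch
append_rows("epoch_rewards.csv.gz",
            [(200_000, 123, 14_500, -300)])   # (epoch, validator_index, reward, penalty)
# aggregated channel: one row per validator per "day" (~225 epochs)
append_rows("daily_summaries.csv.gz",
            [(888, 123, 3_200_000)])          # (day, validator_index, net_reward)
```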
Jacek Sieka 805e85e1ff
time: spring cleaning (#3262)
Time in the beacon chain is expressed relative to the genesis time -
this PR creates a `beacon_time` module that collects helpers and
utilities for dealing the time units - the new module does not deal with
actual wall time (that's remains in `beacon_clock`).

Collecting the time related stuff in one place makes it easier to find,
avoids some circular imports and allows more easily identifying the code
actually needs wall time to operate.

* move genesis-time-related functionality into `spec/beacon_time`
* avoid using `chronos.Duration` for time differences - it does not
support negative values (such as when something happens earlier than it
should)
* saturate conversions between `FAR_FUTURE_XXX`, so as to avoid
overflows
* fix delay reporting in validator client so it uses the expected
deadline of the slot, not "closest wall slot"
* simplify looping over the slots of an epoch
* `compute_start_slot_at_epoch` -> `start_slot`
* `compute_epoch_at_slot` -> `epoch`

A follow-up PR will (likely) introduce saturating arithmetic for the
time units - this is merely code moves, renames and fixing of small
bugs.
2022-01-11 11:01:54 +01:00
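
The renamed conversions and genesis-relative timing boil down to the following (Python sketch using mainnet constants; the saturating behaviour is shown for the wall-clock conversion):

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

def epoch(slot: int) -> int:
    """compute_epoch_at_slot, under its shorter name."""
    return slot // SLOTS_PER_EPOCH

def start_slot(epoch_number: int) -> int:
    """compute_start_slot_at_epoch, under its shorter name."""
    return epoch_number * SLOTS_PER_EPOCH

def slot_at(genesis_time: int, wall_time: int) -> int:
    """Beacon time is expressed relative to genesis; a negative difference
    (wall clock before genesis) saturates to slot 0 instead of underflowing."""
    if wall_time < genesis_time:
        return 0
    return (wall_time - genesis_time) // SECONDS_PER_SLOT

assert epoch(start_slot(5)) == 5
```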
Jacek Sieka 20e700fae4
Harden CommitteeIndex, SubnetId, SyncSubcommitteeIndex (#3259)
* Harden CommitteeIndex, SubnetId, SyncSubcommitteeIndex

Harden the use of `CommitteeIndex` et al to prevent future issues by
using a distinct type, then validating before use in several cases -
datatypes in spec are kept simple though so that invalid data still can
be read.

* fix invalid epoch used in REST
`/eth/v1/beacon/states/{state_id}/committees` committee length (could
return invalid data)
* normalize some variable names
* normalize committee index loops
* fix `RestAttesterDuty` to use `uint64` for `validator_committee_index`
* validate `CommitteeIndex` on ingress in REST API
* update rest rules with stricter parsing
* better REST serializers
* save lots of memory by not using `zip` ...at least a few bytes!
2022-01-09 01:28:49 +02:00
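
The validate-on-ingress idea behind the distinct types, sketched in Python (the bound is the spec's `MAX_COMMITTEES_PER_SLOT` preset; the class is a stand-in, not the Nim `distinct` type):

```python
MAX_COMMITTEES_PER_SLOT = 64  # spec preset

class CommitteeIndex(int):
    """Construction fails for values that could never be a valid committee
    index, so downstream code can trust any instance it receives."""
    def __new__(cls, value: int, committees_in_slot: int = MAX_COMMITTEES_PER_SLOT):
        if not (0 <= value < committees_in_slot):
            raise ValueError(f"invalid committee index: {value}")
        return super().__new__(cls, value)

CommitteeIndex(3)          # fine
try:
    CommitteeIndex(200)    # rejected before it can be used for lookups
except ValueError:
    pass
```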
tersec 18d83e0ece
rm spec_block_processing/test_process_attestation (#3258) 2022-01-08 10:07:44 +01:00
tersec cd77377375
add Bellatrix fork and transition tests; "Ethereum Foundation" -> EF (#3242) 2022-01-05 09:42:56 +01:00
Zahary Karadjov 54d0d588b1 Implementation of the Keymanager API (BETA)
https://github.com/ethereum/keymanager-APIs
2022-01-04 18:51:45 +02:00
tersec d4680df8d2
convert between engine and consensus ExecutionPayloads (#3228)
* convert between engine and consensus ExecutionPayloads
2022-01-03 13:22:56 +01:00
tersec 1a6a56bdb1
use BeaconTime instead of Slot in fork choice (#3138)
* use v1.1.6 test vectors; use BeaconTime instead of Slot in fork choice

* tick through every slot at least once

* use div INTERVALS_PER_SLOT and use precomputed constants of them

* use correct (even if numerically equal) constant
2021-12-21 18:56:08 +00:00
tersec 0d4e49f946
Merge fork gossip support (#3213)
* Merge fork gossip support

* index directly by BeaconStateFork and remove debugging log statement
2021-12-21 15:24:23 +01:00
Jacek Sieka c270ec21e4
Validator monitoring (#2925)
Validator monitoring based on and mostly compatible with the
implementation in Lighthouse - tracks additional logs and metrics for
specified validators so as to stay on top on performance.

The implementation works more or less the following way:
* Validator pubkeys are singled out for monitoring - these can be
running on the node or not
* For every action that the validator takes, we record steps in the
process such as messages being seen on the network or published in the
API
* When the dust settles at the end of an epoch, we report the
information from one epoch before that, which coincides with the
balances being updated - this is a tradeoff between being correct
(waiting for finalization) and providing relevant information in a
timely manner
2021-12-20 20:20:31 +01:00
Jacek Sieka 03005f48e1
Backfill support for ChainDAG (#3171)
In the ChainDAG, 3 block pointers are kept: genesis, tail and head. This
PR adds one more block pointer: the backfill block which represents the
block that has been backfilled so far.

When doing a checkpoint sync, a random block is given as starting point
- this is the tail block, and we require that the tail block has a
corresponding state.

When backfilling, we end up with blocks without corresponding states,
hence we cannot use `tail` as a backfill pointer - there is no state.

Nonetheless, we need to keep track of where we are in the backfill
process between restarts, such that we can answer GetBeaconBlocksByRange
requests.

This PR adds the basic support for backfill handling - it needs to be
integrated with backfill sync, and the REST API needs to be adjusted to
take advantage of the new backfilled blocks when responding to certain
requests.

Future work will also enable moving the tail in either direction:
* pruning means moving the tail forward in time and removing states
* backwards means recreating past states from genesis, such that
intermediate states are recreated step by step all the way to the tail -
at that point, tail, genesis and backfill will match up.
* backfilling is done when backfill != genesis - later, this will be the
WSS checkpoint instead
2021-12-13 14:36:06 +01:00
Etan Kissling 984dc18dc6
import `is_valid_merkle_branch` test cases from `nim-eth` (#3182)
As of https://github.com/status-im/nim-eth/pull/379 `nim-eth` defines a
couple static test cases for merkle proof verification.
Since the EF has defined a `is_valid_merkle_branch` function in the spec
we are no longer using the custom implementation from `nim-eth`, but the
tests were never ported to target the new implementation. This patch now
follows up on that and integrates those tests from `nim-eth`.
2021-12-10 16:56:26 +01:00
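
For reference, `is_valid_merkle_branch` as defined in the consensus spec, which these imported tests now target (transcribed here; the spec is itself written in Python, so this should match closely):

```python
from hashlib import sha256

def hash_fn(data: bytes) -> bytes:
    return sha256(data).digest()

def is_valid_merkle_branch(leaf: bytes, branch, depth: int, index: int, root: bytes) -> bool:
    """Verify that `leaf` sits at position `index` of a Merkle tree of the
    given `depth`, using the sibling hashes in `branch`."""
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = hash_fn(branch[i] + value)
        else:
            value = hash_fn(value + branch[i])
    return value == root
```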
Jacek Sieka 9f27f0d97c
BlockId reform (#3176)
* BlockId reform

Introduce `BlockId` that helps track a root/slot pair - this prepares
the codebase for backfilling and handling out-of-dag blocks

* move block dag code to separate module
* fix finalised state root in REST event stream
* fix finalised head computation on head update, when starting from
checkpoint
* clean up chaindag init
* revert `epochAncestor` change introduced in #3144 that would return
an epoch ancestor from the canonical history instead of the given
history, causing `EpochRef` keys to point to the wrong block
2021-12-09 19:06:21 +02:00
Eugene Kabanov b05734f610
Backward sync support for SyncManager. (#3131)
* Unbundle SyncQueue from sync_manager.nim.
Unbundle Peer scores constants to peer_scores.nim.
Add Forward/Backward enum.

* Further improvements and tests.

* Adopt getRewindPoint() and fix MissingParent handler.

* Remove unused procedures.
Refactor `result` usage.
Fix resetWait().

* Add all the tests and fix the issue with rewind point.

* Fix get() issue.

* Fix flaky tests.

* test fixes

Co-authored-by: Jacek Sieka <jacek@status.im>
2021-12-08 22:15:29 +01:00
Etan Kissling 3f6b7ec8f1
fix test results totals (#3168)
Updates `AllTests-mainnet.md` output for `make test -j16` execution.
2021-12-07 13:43:06 +00:00