The `eth1_monitor` check that requires the engine API from Bellatrix onward
has issues in setups where the EL and CL are started simultaneously,
because the EL may not be ready to answer requests by the time the check
is performed. This can be observed, e.g., on a Raspberry Pi 4 when using
Besu as the EL client. Now that the merge transition has happened, the
check is also no longer very useful, as users have other ways to know that
their setup is not working correctly (e.g., repeated exchange logs).
When the TTD is reached before Bellatrix, avoid waiting for new blocks and
detect the TTD block as the terminal block even before Bellatrix activates.
Also allow detecting the EL genesis block as the merge transition block.
This fixes the local testnet simulation with Geth so that it actually
merges.
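A minimal sketch of the detection rule, using a simplified hypothetical
`PowBlock` type (the real code works with `UInt256` total difficulties):

```nim
type
  PowBlock = object
    totalDifficulty: uint64        # simplified; the real code uses UInt256
    isGenesis: bool                # true for the EL genesis block
    parentTotalDifficulty: uint64  # unused when isGenesis

# A block is terminal when it is the first to reach TTD. The genesis
# special case is what lets a pre-merged local testnet transition.
func isTerminalBlock(b: PowBlock, ttd: uint64): bool =
  if b.totalDifficulty < ttd:
    return false
  b.isGenesis or b.parentTotalDifficulty < ttd
```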
Per spec, we should not send our detected terminal block to the EL;
the EL configuration exchange should only look at values from
configuration and report mismatches.
`p.dataProvider` may become `nil` between individual attempts to
exchange the transition configuration with the EL. Harden by capturing
the data provider at function start, as sketched below.
Note that other functions are already hardened or are unaffected.
Only `close` transitions `p.dataProvider` to `nil`, and `close` is
only called by the main deposits import sequence. During the deposits
import, `close` is not called, so extra checks are not needed.
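A minimal sketch of the hardening pattern, with hypothetical
`DataProvider` and `Eth1Monitor` stand-ins for the actual types:

```nim
import chronos

type
  DataProvider = ref object   # hypothetical stand-in for the web3 provider
  Eth1Monitor = ref object
    dataProvider: DataProvider

proc exchangeConfig(provider: DataProvider) {.async.} =
  discard  # issue the transition configuration exchange via `provider`

proc exchangeLoop(p: Eth1Monitor) {.async.} =
  # Capture the provider once: `close` may set `p.dataProvider` to `nil`
  # while this proc is suspended at an `await`.
  let provider = p.dataProvider
  if provider.isNil:
    return
  while true:
    await exchangeConfig(provider)  # the local copy cannot become nil
    await sleepAsync(60.seconds)
```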
When issuing an engine API call while the EL is disconnected, a `nil`
pointer was dereferenced. Fixed by correctly initializing futures; see
the sketch after the traceback below.
```
Traceback (most recent call last, using override)
vendor/nim-libp2p/libp2p/protocols/pubsub/pubsub.nim(890) main
beacon_chain/nimbus_beacon_node.nim(2139) main
beacon_chain/nimbus_beacon_node.nim(0) handleStartUpCmd
beacon_chain/nimbus_beacon_node.nim(0) doRunBeaconNode
beacon_chain/nimbus_beacon_node.nim(0) start
beacon_chain/nimbus_beacon_node.nim(1589) run
vendor/nimbus-build-system/vendor/Nim/lib/system/iterators_1.nim(107) poll
vendor/nim-chronos/chronos/asyncfutures2.nim(365) futureContinue
beacon_chain/consensus_object_pools/consensus_manager.nim(297) updateHeadWithExecution
vendor/nim-chronos/chronos/asyncmacro2.nim(213) runProposalForkchoiceUpdated
vendor/nim-chronos/chronos/asyncfutures2.nim(365) futureContinue
beacon_chain/consensus_object_pools/consensus_manager.nim(259) runProposalForkchoiceUpdated
beacon_chain/eth1/eth1_monitor.nim(0) forkchoiceUpdated
vendor/nim-chronos/chronos/asyncfutures2.nim(219) complete
vendor/nim-chronos/chronos/asyncfutures2.nim(149) cancelled
vendor/nimbus-build-system/vendor/Nim/lib/system/excpt.nim(610) signalHandler
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
```
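A hypothetical illustration of this class of bug, not the actual
`eth1_monitor` code: every path must create the future with `newFuture`
before anything can complete, fail, or cancel it.

```nim
import chronos

proc requestPayload(connected: bool): Future[int] =
  # The future is always initialized, even on the disconnected path.
  # Previously, a path like this could leave a nil Future behind, and
  # the caller's completion/cancellation handling dereferenced nil.
  let fut = newFuture[int]("requestPayload")
  if not connected:
    fut.fail(newException(ValueError, "EL disconnected"))
  else:
    fut.complete(42)
  fut
```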
* import EL deposits even when EL is stuck
The `eth1_monitor` only starts importing deposits once the EL reports a
new head block. However, the EL may be stuck at a block, e.g., the TTD
block. Polling the latest EL block once after subscribing to new EL
block events ensures that deposits are still imported in this situation
(see the sketch after this list).
* also poll once on re-connects
* update `eth1_latest_head` metric in poll mode
* add comment about similar polling vs events parts
* replace check with assert
* `isNewLastBlock` helper
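A sketch of the poll-once-after-subscribe pattern; the subscription and
polling helpers are hypothetical stand-ins for the actual web3 plumbing:

```nim
import chronos

type
  Eth1BlockHeader = object
    number: uint64

# Hypothetical stand-ins for the monitor's subscription/polling helpers.
proc subscribeNewBlocks(onBlock: proc (blk: Eth1BlockHeader) {.gcsafe.}) =
  discard

proc pollLatestBlock(): Future[Eth1BlockHeader] {.async.} =
  discard

proc startBlockTracking(
    onBlock: proc (blk: Eth1BlockHeader) {.gcsafe.}) {.async.} =
  subscribeNewBlocks(onBlock)
  # If the EL is stuck (e.g., at the TTD block), no event will ever fire.
  # Polling once right after subscribing guarantees that deposit import
  # still makes progress; the same poll also runs after each re-connect.
  onBlock(await pollLatestBlock())
```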
When fetching eth1 data and deposits for a new block proposal, the list
of deposits from the previous eth1 data to the next one is fully loaded
into a `seq`. This can be a very long list in active periods. Changing
this to an `iterator` saves memory by ensuring that the entire list is
never materialized; only the `DepositData` roots are needed.
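A simplified sketch of the change, with the deposit loading stubbed out
by a hypothetical `loadDeposit`; the point is the per-element memory
behavior:

```nim
type
  DepositData = object
    root: array[32, byte]  # only the root is needed by the proposal logic

proc loadDeposit(idx: int): DepositData =
  discard  # stand-in for fetching deposit `idx` from the deposits chain

# Before: the whole range is materialized at once.
proc depositRootsSeq(firstIdx, lastIdx: int): seq[array[32, byte]] =
  for i in firstIdx ..< lastIdx:
    result.add loadDeposit(i).root

# After: elements are produced one at a time, so only the deposit
# currently being merkleized is alive at any point.
iterator depositRoots(firstIdx, lastIdx: int): array[32, byte] =
  for i in firstIdx ..< lastIdx:
    yield loadDeposit(i).root
```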
When the EL connection is interrupted, deposits are once more requested
in chunks of 5000 blocks. This is a problem when the response takes over
a minute to produce and consistently times out, as follow-up requests
with lower chunk sizes may no longer work after a request was canceled,
e.g., when using Geth with WebSockets. By keeping track of
`blocksPerRequest` across EL reconnections, it is possible to recover
from this by not repeating the initial request with the full 5000 blocks
over and over.
Also cleans up one more "retry of retry" instance; `DataProviderTimeout`
is a `CatchableError` and already handled by the existing retry logic.
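A sketch of the recovery behavior; the `SyncState` wrapper and the
halving policy are illustrative, not the exact `eth1_monitor` logic:

```nim
const maxBlocksPerRequest = 5000'u64

type
  SyncState = ref object
    # Kept across EL reconnections instead of being reset, so a chunk
    # size lowered after timeouts is not bumped back to 5000 blocks.
    blocksPerRequest: uint64

proc newSyncState(): SyncState =
  SyncState(blocksPerRequest: maxBlocksPerRequest)

proc nextChunkSize(s: SyncState, lastRequestTimedOut: bool): uint64 =
  if lastRequestTimedOut:
    # Back off; never drop below a single block per request.
    s.blocksPerRequest = max(1'u64, s.blocksPerRequest div 2)
  s.blocksPerRequest
```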
When the connection to the EL is lost as part of EL deposits importing,
the targeted block range to sync would reset. This is changed to
properly remember import progress across reconnects.
https://github.com/status-im/nimbus-eth2/pull/3944
The use of nested `awaitWithRetries` calls would have resulted in an
unexpected number of retries (3x3). We now use a regular `await` in the
outer layer to avoid the problem, as illustrated below.
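The simplified `withRetries` below is only a stand-in for the project's
`awaitWithRetries` helper and illustrates the multiplication:

```nim
import chronos

var callCount = 0

proc flaky(): Future[int] {.async.} =
  inc callCount
  raise newException(ValueError, "EL unavailable")

# Simplified stand-in for `awaitWithRetries`: retry up to `attempts` times.
proc withRetries[T](fn: proc (): Future[T] {.gcsafe.},
                    attempts = 3): Future[T] {.async.} =
  for i in 1 .. attempts:
    try:
      return await fn()
    except CatchableError:
      if i == attempts:
        raise

proc nested(): Future[int] =
  # Nesting multiplies the attempts: 3 outer * 3 inner = 9 calls to
  # `flaky`. The fix keeps the retry loop in one layer only and uses a
  # plain `await` in the other.
  withRetries(proc (): Future[int] = withRetries(flaky))
```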
https://github.com/status-im/nimbus-eth2/pull/3943
The new code has an invariant that the `headMerkleizer` field in
the `Eth1Chain` is always kept in sync with the blocks stored in
the chain. This invariant is now enforced better by performing the
necessary merkleizer updates in the `Eth1Chain.addBlock`,
`Eth1Chain.init`, and `Eth1Chain.reset` functions.
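A sketch of how the invariant is maintained, with simplified stand-in
types (the real merkleizer incrementally hashes deposit roots into the
deposit tree):

```nim
type
  Eth1BlockSketch = object
    depositRoots: seq[array[32, byte]]

  MerkleizerSketch = object
    chunkCount: int  # stand-in for the incremental deposit-tree state

  Eth1ChainSketch = object
    blocks: seq[Eth1BlockSketch]
    headMerkleizer: MerkleizerSketch

proc advance(m: var MerkleizerSketch, root: array[32, byte]) =
  inc m.chunkCount  # the real code hashes `root` into the tree

# Invariant: `headMerkleizer` reflects every deposit in `blocks`.
# All mutations of `blocks` update the merkleizer in the same place,
# mirroring `Eth1Chain.addBlock` / `init` / `reset`.
proc addBlock(chain: var Eth1ChainSketch, blk: Eth1BlockSketch) =
  chain.blocks.add blk
  for root in blk.depositRoots:
    chain.headMerkleizer.advance(root)

proc reset(chain: var Eth1ChainSketch) =
  chain.blocks.setLen(0)
  chain.headMerkleizer = MerkleizerSketch()
```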
When importing blocks with deposits from the EL, their timestamp is
never initialized. Therefore, only blocks without deposits (for which
the timestamp is obtained) are considered by `is_candidate_block`.
This is fixed by also importing timestamps for blocks with deposits.
* fix obtaining deposits after connection loss
When an error occurs during Eth1 deposits import, the already imported
blocks are kept while the connection to the EL is re-established.
However, the corresponding merkleizer is not persisted, which means any
future deposits are no longer imported properly. This is quite common
when syncing a fresh Nimbus instance against an already-synced Geth EL.
Fixed by persisting the head merkleizer together with the blocks.
Adds the `--web3-url` launch argument to `nimbus_light_client` to enable
driving the EL with the optimistic head obtained from the LC sync
protocol. This keeps issuing `newPayload` / `forkchoiceUpdated` requests
for new blocks, marking them as optimistic. `ZERO_HASH` is reported as
the finalized block for now.
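For example, a hypothetical invocation might look as follows; the flags
other than `--web3-url` are assumed from the usual light client setup,
so consult `--help` for the exact set:

```
./nimbus_light_client \
  --network=mainnet \
  --trusted-block-root=0x1234... \
  --web3-url=http://127.0.0.1:8551
```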
`m.depositsChain.blocks.len` may change during `startEth1Syncing`. If
that happens, an additional check now ensures that `scratchMerkleizer`
was initialized before attempting to use it.