* packaging updates
* one package per binary (nimbus_beacon_node, nimbus_validator_client)
* use `-` in package names (`_` separates the version)
* don't include (un)installation scripts in package
* default metrics port 8108 for vc
* fix several upgrade/install errors in scripts
* add JWT option to service files
* don't attempt to remove user on purge
* reorganise navigation menus
* update light client guide with comparison table
* add suggested fee recipient and JWT secrets to the merge guide
* add some background info to book readme
* add JWT docs
also limit the ToC so that it can be displayed with substeps.
* import EL deposits even when EL is stuck
The `eth1_monitor` only starts importing deposits once the EL reports a
new head block. However, the EL may be stuck at a block, e.g., at the TTD.
Polling the latest EL block once after subscribing to new EL block events
ensures that deposits are still imported in this situation (a sketch
follows the list below).
* also poll once on re-connects
* update `eth1_latest_head` metric in poll mode
* add comment about similar polling vs events parts
* replace check with assert
* `isNewLastBlock` helper
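A minimal Nim sketch of the polling fallback, using hypothetical helper names (`subscribeNewHeads`, `getLatestBlock`) rather than the real `eth1_monitor` API:
```
type Eth1BlockHeader = object
  number: uint64

# Placeholders for the real EL interaction; names are illustrative only.
proc subscribeNewHeads(onHead: proc (h: Eth1BlockHeader)) =
  discard  # register `onHead` with the EL's newHeads subscription

proc getLatestBlock(): Eth1BlockHeader =
  Eth1BlockHeader(number: 0)  # e.g. eth_getBlockByNumber("latest")

proc startDepositImport(onHead: proc (h: Eth1BlockHeader)) =
  subscribeNewHeads(onHead)
  # One-off poll after subscribing (and after each re-connect): if the
  # EL head never advances, e.g. stuck at the TTD, deposits are still
  # imported because `onHead` fires at least once.
  onHead(getLatestBlock())
```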
When fetching eth1 data and deposits for a new block proposal, the list
of deposits from the previous eth1 data to the next is fully loaded into
a `seq`. This can be a very long list in active periods.
Changing this to an `iterator` saves memory by ensuring that the entire
list is no longer materialized; only the `DepositData` roots are needed.
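A hedged sketch of the change with simplified stand-in types (not the actual deposit code), showing a Nim `iterator` replacing a fully materialized `seq`:
```
import std/sequtils

type DepositData = object
  root: array[32, byte]

# Before: every DepositData in the range is copied into a new seq.
proc depositRootsSeq(deposits: seq[DepositData], first, last: int): seq[array[32, byte]] =
  deposits[first .. last].mapIt(it.root)

# After: roots are yielded one at a time; the full list is never
# materialized, which matters when the range covers many deposits.
iterator depositRoots(deposits: seq[DepositData], first, last: int): array[32, byte] =
  for i in first .. last:
    yield deposits[i].root
```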
When the EL connection is interrupted, deposits are once more requested
in chunks of 5000 blocks. This is a problem when the response takes over
a minute to produce and consistently times out, as follow-up requests with
lower chunk sizes may no longer work after a request was canceled, e.g.,
when using Geth with WebSockets. Keeping track of `blocksPerRequest`
across EL reconnections makes it possible to recover, because the initial
request with the full 5000 blocks is no longer repeated continuously.
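A simplified sketch of the idea (made-up field and proc names): the reduced chunk size lives on a long-lived object, so a reconnect does not restart from the full 5000-block request.
```
const fullBlocksPerRequest = 5000

type DepositSyncState = object
  # Kept on a long-lived object instead of a per-connection local, so
  # the reduced chunk size survives an EL reconnection.
  blocksPerRequest: int

proc init(T: type DepositSyncState): T =
  T(blocksPerRequest: fullBlocksPerRequest)

proc onRequestTimeout(s: var DepositSyncState) =
  # Shrink the requested block range after a timeout; the next attempt,
  # even on a fresh connection, starts from the smaller range.
  s.blocksPerRequest = max(s.blocksPerRequest div 2, 1)
```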
Also cleans up one more "retry of retry" instance; `DataProviderTimeout`
is a `CatchableError` and already handled by the existing retry logic.
When the connection to the EL is lost during EL deposit import, the
targeted block range to sync would reset. This is changed to properly
remember import progress across reconnects.
https://github.com/status-im/nimbus-eth2/pull/3944
The use of nested `awaitWithRetries` calls would have
resulted in an unexpected number of retries (3x3).
We now use a regular `await` in the outer layer to avoid the problem
(sketched below).
https://github.com/status-im/nimbus-eth2/pull/3943
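An illustrative Nim snippet (not the real `awaitWithRetries`) showing why nested retry wrappers multiply attempts:
```
var attempts = 0

proc flaky() =
  inc attempts
  raise newException(IOError, "EL unavailable")

proc withRetries(op: proc ()) =
  # Each layer retries its body up to 3 times.
  for i in 1 .. 3:
    try:
      op()
      return
    except CatchableError:
      if i == 3: raise

try:
  # Nesting two layers means the innermost call runs 3 * 3 = 9 times.
  withRetries(proc () = withRetries(flaky))
except CatchableError:
  echo attempts  # 9
```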
The new code has an invariant that the `headMerkleizer` field in
the `Eth1Chain` is always kept in sync with the blocks stored in
the chain.
This invariant is now enforced more reliably by performing the necessary
merkleizer updates in the `Eth1Chain.addBlock`, `Eth1Chain.init` and
`Eth1Chain.reset` functions.
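A hedged sketch of the invariant with simplified stand-in types: the merkleizer is updated in the same functions that mutate the block list, so the two cannot drift apart.
```
type
  Merkleizer = object
    chunks: seq[array[32, byte]]
  Eth1Block = object
    depositRoots: seq[array[32, byte]]
  Eth1Chain = object
    blocks: seq[Eth1Block]
    headMerkleizer: Merkleizer  # invariant: always reflects `blocks`

proc addBlock(chain: var Eth1Chain, blk: Eth1Block) =
  # The merkleizer update happens right where the block is stored.
  for root in blk.depositRoots:
    chain.headMerkleizer.chunks.add root
  chain.blocks.add blk

proc reset(chain: var Eth1Chain) =
  # Clearing the blocks also clears the merkleizer.
  chain.blocks.setLen 0
  chain.headMerkleizer = Merkleizer()
```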
When importing blocks with deposits from the EL, the timestamp is never
initialized for them. Therefore, only blocks without deposits (for which
the timestamp is obtained) are considered for `is_candidate_block`.
This is fixed by also importing timestamps for blocks with deposits.
* fix obtaining deposits after connection loss
When an error occurs during Eth1 deposit import, the already imported
blocks are kept while the connection to the EL is re-established.
However, the corresponding merkleizer is not persisted, leading to any
future deposits no longer being properly imported. This is quite common
when syncing a fresh Nimbus instance against an already-synced Geth EL.
Fixed by persisting the head merkleizer together with the blocks.
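A minimal sketch of the fix's shape (hypothetical snapshot type): import progress is persisted as the blocks together with the matching merkleizer, so a reconnect resumes from a consistent pair.
```
type
  Merkleizer = object
    chunkCount: int
  ImportedBlock = object
    number: uint64
  DepositImportProgress = object
    blocks: seq[ImportedBlock]
    headMerkleizer: Merkleizer  # previously dropped on reconnect

proc snapshot(blocks: seq[ImportedBlock], m: Merkleizer): DepositImportProgress =
  DepositImportProgress(blocks: blocks, headMerkleizer: m)

proc resume(p: DepositImportProgress): (seq[ImportedBlock], Merkleizer) =
  # Both pieces come back together; reusing the blocks with a fresh,
  # empty merkleizer is what broke further deposit import before.
  (p.blocks, p.headMerkleizer)
```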
* MEV validator registration
* add nearby canary to detect new beacon chain forks
* remove special MEV graffiti
* web3signer support
* fix trace logging
* Nim 1.2 needs `raises: [Defect]` annotations
* use template rather than proc in REST JSON parsing
* use --payload-builder-enable and --payload-builder-url
* explicitly default MEV to disabled
* explicitly empty default value for payload builder URL
* revert attestation pool to unstable version
* Use final `v1` version for light client protocols
* Unhide LC data collection options
* Default enable LC data serving
* rm unneeded import
* Connect to EL on startup
* Add docs for LC based EL sync
A fix for a bug triggered by recent `Jenkinsfile` refactoring done in:
https://github.com/status-im/nimbus-eth2/pull/3827
Due to a bug in the Jenkins Throttle plugin, this caused jobs to start
running in parallel on the same host despite a global configuration that
is supposed to block this:
https://issues.jenkins.io/browse/JENKINS-49173
https://github.com/jenkinsci/throttle-concurrent-builds-plugin/pull/68
An attempt to fix this was made in this PR:
https://github.com/status-im/nimbus-eth2/pull/3913
But it was ineffective due to bugs in the Throttle plugin.
As a result semi-random testnet launches would fail with errors like this:
```
./scripts/launch_local_testnet.sh: line 1026: 58977 Killed: 9 ${BEACON_NODE_COMMAND} ...
```
The culprit was the old process cleanup in `scripts/launch_local_testnet.sh`:
```
+ make local-testnet-mainnet
Found old process listening on port 7001, with PID 58977. Killing it.
Found old process listening on port 7002, with PID 59024. Killing it.
Found old process listening on port 7003, with PID 59027. Killing it.
Found old process listening on port 7004, with PID 59030. Killing it.
```
This was triggered by the use of immediate assignment for `EXECUTOR_NUMBER`:
```
EXECUTOR_NUMBER := 0
```
This causes the `EXECUTOR_NUMBER` value set by Jenkins to be ignored: a `:=`
assignment in the `Makefile` overrides any value inherited from the
environment, whereas a conditional assignment (`EXECUTOR_NUMBER ?= 0`) would
only apply when the variable is not already set.
For more details see:
https://www.gnu.org/software/make/manual/html_node/Flavors.html#Flavors
Signed-off-by: Jakub Sokołowski <jakub@status.im>