The optimistic candidate block check that only imports a new block into
the EL client if its parent block also had execution enabled is not
needed anymore, as mainnet has merged and the attack period is over.
Port changes to `nextExchangeTransitionConfiguration` from BN to LC:
- 60 seconds delay before initial exchange
- 45 seconds interval between followup exchanges
- Only exchange post Bellatrix
When the sync queue processes results for a blocks by range request,
and the requested range contained some slots that are already finalized,
`BlockError.MissingParent` currently leads to `PeerScoreBadBlocks` even
when the error occurs on a non-finalized slot in the requested range.
This patch changes the scoring in that case to `PeerScoreMissingBlocks`
for consistency with range requests solely covering non-finalized slots,
and, likewise, rewinds the sync queue to the next `rewindSlot`.
Per spec, we should not be sending our detected terminal block to EL -
the EL configuration exchange should only look at values from
configuration and report mismatches.
`p.dataProvider` may become `nil` between individual attempts to
exchange transition configuration with the EL. Harden by capturing
the data provider on function start.
Note that other functions are already hardened, or are unaffected.
Only `close` transitions `p.dataProvider` to `nil`, and `close` is
only called by the main deposits import sequence. During the deposits
import, `close` is not called, so extra checks are not needed.
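A minimal sketch of the hardening pattern (the `Pool`/`DataProvider` types and the `exchange` call are placeholders, not the actual `eth1_monitor` API):
```nim
import chronos

type
  DataProvider = ref object
  Pool = ref object
    dataProvider: DataProvider

proc exchange(d: DataProvider) {.async.} =
  discard  # stand-in for the actual transition configuration exchange

proc exchangeLoop(p: Pool) {.async.} =
  # Capture the possibly-nil field once, on function start; later awaits
  # keep using this reference even if `close` resets `p.dataProvider`.
  let dataProvider = p.dataProvider
  if dataProvider.isNil:
    return
  while true:
    await dataProvider.exchange()
    await sleepAsync(45.seconds)
```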
When there are a lot of deposits, we decompress the public key into a
crypto cache. To avoid having those caches grow unreasonably big,
make sure to operate on the decompressed pubkey instead.
* more efficient forkchoiceUpdated usage
* await rather than asyncSpawn; ensure head update before dag.updateHead
* use action tracker rather than attached validators to check for next slot proposal; use wall slot + 1 rather than state slot + 1 to correctly check when missing blocks
* re-add two-fcU case for when newPayload not VALID
* check dynamicFeeRecipientsStore for potential proposal
* remove duplicate checks for whether next proposer
`news` has a few open issues that are not present in `nim-websock`:
1. There is a 1 second delay between each MB of sent data.
2. Cancelling an ongoing `send` makes the entire WebSocket unusable.
3. Control packets do not have priority over ongoing message frames.
Using `news`, there are quite a few of these messages in Geth:
```
Previously seen beacon client is offline. Please ensure it is
operational to follow the chain!
```
It may take quite some time to reconnect when this happens.
Using `nim-websock`, this message still occurs because `eth1_monitor`
reconnects the EL connection when no new blocks occurred for 5 minutes,
but reconnecting is quick and the message is rarer.
The sync protocol does not distinguish between:
- All requested slots are empty
- Peer does not have data available about requested range
Therefore, we treat an EOF response to `beacon_blocks_by_range` as a
valid response, as if the entire epoch really contained no single block
for any slot. Once a followup response provides new blocks, we detect
that some blocks were missing and rewind.
During backfill, we also request the known-to-exist `backfill.slot`,
so we can actually detect whether an epoch really does not have blocks
or whether a response is incomplete (`PeerScoreNoBlocks`).
When the BN-embedded LC makes sync progress, pass the corresponding
execution block hash to the EL via `engine_forkchoiceUpdatedV1`.
This allows the EL to sync to wall slot while the chain DAG is behind.
Renamed `--light-client` to `--sync-light-client` for clarity, and
`--light-client-trusted-block-root` to `--trusted-block-root` for
consistency with `nimbus_light_client`.
Note that this does not work well in practice at this time:
- Geth sticks to the optimistic sync:
"Ignoring payload while snap syncing" (when passing the LC head)
"Forkchoice requested unknown head" (when updating to LC head)
- Nethermind syncs to LC head but does not report ancestors as VALID,
so the main forward sync is still stuck in optimistic mode:
"Pre-pivot block, ignored and returned Syncing"
To aid EL client teams in fixing those issues, having this available
as a hidden option is still useful.
When issuing an engine API call while the EL is disconnected, a `nil`
pointer is dereferenced. Fixed by correctly initializing futures.
```
Traceback (most recent call last, using override)
vendor/nim-libp2p/libp2p/protocols/pubsub/pubsub.nim(890) main
beacon_chain/nimbus_beacon_node.nim(2139) main
beacon_chain/nimbus_beacon_node.nim(0) handleStartUpCmd
beacon_chain/nimbus_beacon_node.nim(0) doRunBeaconNode
beacon_chain/nimbus_beacon_node.nim(0) start
beacon_chain/nimbus_beacon_node.nim(1589) run
vendor/nimbus-build-system/vendor/Nim/lib/system/iterators_1.nim(107) poll
vendor/nim-chronos/chronos/asyncfutures2.nim(365) futureContinue
beacon_chain/consensus_object_pools/consensus_manager.nim(297) updateHeadWithExecution
vendor/nim-chronos/chronos/asyncmacro2.nim(213) runProposalForkchoiceUpdated
vendor/nim-chronos/chronos/asyncfutures2.nim(365) futureContinue
beacon_chain/consensus_object_pools/consensus_manager.nim(259) runProposalForkchoiceUpdated
beacon_chain/eth1/eth1_monitor.nim(0) forkchoiceUpdated
vendor/nim-chronos/chronos/asyncfutures2.nim(219) complete
vendor/nim-chronos/chronos/asyncfutures2.nim(149) cancelled
vendor/nimbus-build-system/vendor/Nim/lib/system/excpt.nim(610) signalHandler
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
```
The optimistic sync spec was updated since the LC based optsync module
was introduced. It is no longer necessary to wait for the justified
checkpoint to have execution enabled; instead, any block is okay to be
optimistically imported to the EL client, as long as its parent block
has execution enabled. Complex syncing logic has been removed, and the
LC optsync module will now follow gossip directly, reducing the latency
when using this module. Note that because this is now based on gossip
instead of using the sync manager / request manager, individual blocks
may be missed. However, EL clients should recover from this by fetching
missing blocks themselves.
When the EL fails to respond to `newPayload`, e.g., because the connection
to the EL got interrupted, or due to misconfiguration, optimistic blocks
cannot be imported according to spec. This condition is treated the same
as if the peer returned a block with a missing parent, which gets the block
out of our processing queue, but can have nasty side effects.
For example, when the sync manager asks for validation of a block known to
be in the finalized range and receives a `MissingParent` verdict, the peer
is immediately removed from the peer pool.
```
DBG 2022-08-24 11:45:26.874+02:00 newPayload: inserting block into execution engine parentHash=e4ca7424 blockHash=36cdc198 stateRoot=cf3902c1 receiptsRoot=56e81f17 prevRandao=0b49a172 blockNumber=1518089 gasLimit=30000000 gasUsed=0 timestamp=1657980396 extraDataLen=0 baseFeePerGas=7 numTransactions=0
ERR 2022-08-24 11:45:26.875+02:00 newPayload failed msg="Transport is not initialised (missing a call to connect?)"
DBG 2022-08-24 11:45:26.875+02:00 Block pool rejected peer's response topics="syncman" request=187232:32@1475 peer=16U*MsCJdx direction=forward blocks_map=xxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxx blocks_count=31 ok=false unviable=false missing_parent=true sync_ident=main
ERR 2022-08-24 11:45:26.875+02:00 Unexpected missing parent at finalized epoch slot topics="syncman" request=187232:32@1475 peer=16U*MsCJdx direction=forward rewind_to_slot=187232 blocks_count=31 blocks_map=xxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxx sync_ident=main
DBG 2022-08-24 11:45:26.875+02:00 Peer was removed from PeerPool due to low score topics="beacnde" peer=16U*MsCJdx peer_score=-1000 score_low_limit=0 score_high_limit=1000
DBG 2022-08-24 11:45:26.875+02:00 Lost connection to peer topics="networking" peer=16U*MsCJdx connections=0
```
By delaying the verdict until the EL connection is restored and
`newPayload` has run successfully, the problem should be fixed. This also
applies back pressure to the sync manager by stopping the download of new
blocks (and the repeated re-download of the same block).
* Harden block proposal against expired slashings/exits
When a message is signed in a phase0 domain, it can no longer be
validated under bellatrix due to the correct fork no longer being
available in the `BeaconState`.
To ensure that all slashing/exits are still valid, in this PR we re-run
the checks in the state that we're proposing for, thus hardening against
both signatures and other changes in the state that might have
invalidated the message.
* fix the same message being added multiple times
when an attester slashing covers multiple validators in one go
* support connecting to peers without bellatrix
Make discovery fork ID aware of scheduled Bellatrix fork to enable
connections to peers that don't have Bellatrix scheduled yet.
Without this, nodes have peering issues with peers on older software versions.
* expand tests with compatibility checks
* more exhaustive compatibility checks
Aligns the default retention policy for LC data with the one for blocks.
Minimum spec requirement for both blocks and LC data is ~5 months.
Additional use cases are better supported by retaining data for longer.
* Fixes a segfault during block production when the Keymanager API
is disabled. The Keymanager is now disabled on half of the local
testnet nodes to catch such problems in the future.
* Fixes multiple potential stalls from REST requests being done
without a timeout. From practice, we know that such requests
can hang forever if not cancelled with a timeout. At best,
this would be a resource leak; at worst, it may lead to a
full stall of the client and missed validator duties (see the
timeout sketch after this list).
* Changes some Options usages to Opt (for easier use of valueOr)
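A minimal sketch of bounding a REST call with a timeout using chronos' `wait` (the `getValidatorDuties` call below is a hypothetical stand-in, not the actual VC API):
```nim
import chronos

proc getValidatorDuties(): Future[string] {.async.} =
  await sleepAsync(2.seconds)  # stand-in for a REST round trip
  return "duties"

proc fetchDuties() {.async.} =
  try:
    # `wait` cancels the request future and raises if no response arrives
    # within the deadline, so a hung request cannot stall the client.
    let duties = await getValidatorDuties().wait(1.seconds)
    echo duties
  except AsyncTimeoutError:
    echo "request timed out, will retry next slot"

waitFor fetchDuties()
```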
When the client was started without any validators, the doppelganger
detection structures were never initialized properly. Later, when
validators were added through the Keymanager API, they interacted
with the uninitialized doppelganger detection structures and their
duties were inappropriately skipped.
* Keymanager API for the validator client
* Properly treat the 'description' field as optional when loading Keystores
* Spec-compliant serialization of the slashing data in Keymanager's DeleteKeys response
Fixes #3940, fixes #3964, closes #3884 by adding a test.
In order to avoid full replays when validating attestations hailing from
untaken forks, it's better to keep shufflings separate from `EpochRef`
and perform a lookahead on the shuffling when processing the block that
determines them.
This also helps performance in the case where REST clients are trying to
perform lookahead on attestation duties and decreases memory usage by
sharing shufflings between EpochRef instances of the same dependent
root.
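A minimal sketch of the caching idea (types, field names, and the string key are illustrative, not the actual `ShufflingRef` definition):
```nim
import std/tables

type
  Epoch = uint64
  ShufflingRef = ref object
    epoch: Epoch
    dependentRoot: string        # stand-in for the Eth2Digest dependent root
    shuffledIndices: seq[uint64]

var shufflingCache: Table[string, ShufflingRef]

proc shufflingKey(dependentRoot: string, epoch: Epoch): string =
  dependentRoot & ":" & $epoch

proc getOrComputeShuffling(dependentRoot: string, epoch: Epoch): ShufflingRef =
  # Shufflings depend only on (dependent root, epoch), so EpochRef instances
  # across forks with the same dependent root can share one cache entry.
  let key = shufflingKey(dependentRoot, epoch)
  if key in shufflingCache:
    return shufflingCache[key]
  result = ShufflingRef(epoch: epoch, dependentRoot: dependentRoot)
  # ... derive shuffledIndices from the seed determined by dependentRoot ...
  shufflingCache[key] = result
```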
* packaging updates
* one package per binary (nimbus_beacon_node, nimbus_validator_client)
* use `-` in package name (`_` is separating the version)
* don't include (un)installation scripts in package
* default metrics port 8108 for vc
* fix several upgrade/install errors in scripts
* add JWT option to service files
* don't attempt to remove user on purge
* import EL deposits even when EL is stuck
The `eth1_monitor` only starts importing deposits once the EL reports a
new head block. However, the EL may be stuck at a block, e.g., the TTD.
Polling the latest EL block once after subscribing to new EL block
events ensures that deposits are still imported in this situation.
* also poll once on re-connects
* update `eth1_latest_head` metric in poll mode
* add comment about similar polling vs events parts
* replace check with assert
* `isNewLastBlock` helper
When fetching eth1 data and deposits for a new block proposal, the list
of deposits from previous eth1 data to the next one is fully loaded into
a `seq`. This can potentially be a very long list in active periods.
Changing this to an `iterator` saves memory by ensuring that the entire
list is no longer materialized; only the `DepositData` roots are needed.
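A minimal sketch of the change (hypothetical types; the real code walks the Eth1 chain rather than a seq):
```nim
type
  DepositData = object
    root: array[32, byte]

# Before: building the full range into a seq keeps every element in memory.
proc depositsSeq(db: seq[DepositData], a, b: int): seq[DepositData] =
  for i in a ..< min(b, db.len):
    result.add db[i]

# After: an iterator yields one deposit at a time; only the current
# DepositData is live while the caller accumulates its root.
iterator depositsIter(db: seq[DepositData], a, b: int): DepositData =
  for i in a ..< min(b, db.len):
    yield db[i]

var count = 0
for deposit in depositsIter(@[DepositData(), DepositData(), DepositData()], 0, 2):
  inc count   # stand-in for feeding deposit.root to the merkleizer
echo count    # 2
```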
When the EL connection is interrupted, deposits are once more requested
in chunks of 5000 blocks. This is a problem when the response takes over
a minute to produce and consistently times out as followup requests with
lower chunk sizes may no longer work after a request was canceled, e.g.,
when using Geth with websockets. By keeping track of `blocksPerRequest`
across EL reconnections, it is possible to recover from this without
continuously repeating the initial request with the full 5000 blocks.
Also cleans up one more "retry of retry" instance; `DataProviderTimeout`
is a `CatchableError` and already handled by the existing retry logic.
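A minimal sketch of the idea (the names and the halving step are assumptions, not the exact `eth1_monitor` logic):
```nim
type DepositSyncState = object
  blocksPerRequest: uint64   # kept across EL reconnections

proc onRequestTimedOut(state: var DepositSyncState) =
  # Shrink the window so the next attempt after reconnecting is cheaper,
  # instead of retrying the full 5000-block request over and over.
  state.blocksPerRequest = max(1'u64, state.blocksPerRequest div 2)

var state = DepositSyncState(blocksPerRequest: 5000)
state.onRequestTimedOut()
echo state.blocksPerRequest  # 2500
```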
When connection to the EL is lost as part of EL deposits importing, the
targeted block range to sync would reset. This is changed to properly
remember import progress across reconnects.
https://github.com/status-im/nimbus-eth2/pull/3944
The use of nested `awaitWithRetries` calls would have
resulted in an unexpected number of retries (3x3).
We now use a regular `await` in the outer layer to avoid the problem.
https://github.com/status-im/nimbus-eth2/pull/3943
The new code has an invariant that the `headMerkleizer` field in
the `Eth1Chain` is always kept in sync with the blocks stored in
the chain.
This invariant is now enforced better by doing the necessary merkleizer updates
in the `Eth1Chain.addBlock` function, in the `Eth1Chain.init` function and in the
`Eth1Chain.reset` function.
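A minimal sketch of keeping a derived value in sync inside the mutators (types are placeholders for the real `Eth1Chain`):
```nim
type
  Eth1Block = object
    depositRoots: seq[array[32, byte]]
  Merkleizer = object
    chunksAdded: int
  Eth1Chain = object
    blocks: seq[Eth1Block]
    headMerkleizer: Merkleizer

proc addChunk(m: var Merkleizer, chunk: array[32, byte]) =
  inc m.chunksAdded   # stand-in for the real incremental hashing

proc addBlock(chain: var Eth1Chain, blk: Eth1Block) =
  chain.blocks.add blk
  for root in blk.depositRoots:
    chain.headMerkleizer.addChunk(root)   # invariant maintained here ...

proc reset(chain: var Eth1Chain) =
  chain.blocks.setLen(0)
  chain.headMerkleizer = Merkleizer()     # ... and here
```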
When importing blocks with deposits from the EL, the timestamp is never
initialized for them. Therefore, only blocks without deposits (for which
the timestamp is obtained) are considered for `is_candidate_block`.
This is fixed by also importing timestamps for blocks with deposits.
* fix obtaining deposits after connection loss
When an error occurs during Eth1 deposits import, the already imported
blocks are kept while the connection to the EL is re-established.
However, the corresponding merkleizer is not persisted, leading to any
future deposits no longer being properly imported. This is quite common
when syncing a fresh Nimbus instance against an already-synced Geth EL.
Fixed by persisting the head merkleizer together with the blocks.
* MEV validator registration
* add nearby canary to detect new beacon chain forks
* remove special MEV graffiti
* web3signer support
* fix trace logging
* Nim 1.2 needs raises Defect
* use template rather than proc in REST JSON parsing
* use --payload-builder-enable and --payload-builder-url
* explicitly default MEV to disabled
* explicitly empty default value for payload builder URL
* revert attestation pool to unstable version
* Use final `v1` version for light client protocols
* Unhide LC data collection options
* Default enable LC data serving
* rm unneeded import
* Connect to EL on startup
* Add docs for LC based EL sync
LC cancels concurrent requests if one peer sent a correct response and
waiting for other peers is no longer useful. On the server side this
resulted in a descore (-500) and a likely disconnect. The behaviour is
changed to match the `UnexpectedEOF` / `PotentiallyExpectedEOF` handling,
which returns an error response without disconnecting from the peer.
When there is heavy forking, proposals may get missed due to including
attestations from different forks that later fail verification.
Checking attestation signatures when building blocks should fix this.
The light client sync protocol employs heuristics to ensure it does not
become stuck during non-finality or low sync committee participation.
These can enable use cases that prefer availability of recent data
over security. For our syncing use case, though, security is preferred.
An option is added to the light client processor to configure this tradeoff.
* Re-enabled requireAllFields after a fix in nim-json-serialization
The problem was that `Option[T]` fields were not treated as optional
when requireAllFields is set to true. This is now fixed in NJS.
* Add makefile targets for recreating the Jenkins simulation runs
* Fix a discrepancy with the REST spec
It's not quite clear why this condition was triggered in the local
simulation, but it seems a viable scenario after the Keymanager API
is integrated in the validator client.
The user can temporarily remove all validator keys from a running
client before adding another set of keys.
Adds the `--web3-url` launch argument to `nimbus_light_client` to enable
driving the EL with the optimistic head obtained from LC sync protocol.
This will keep issuing `newPayload` / `forkChoiceUpdated` requests for
new blocks, marking them as optimistic. `ZERO_HASH` is reported as the
finalized block for now.
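A minimal sketch of the forkchoice state reported to the EL (JSON shape per the engine API spec; mirroring the optimistic head into `safeBlockHash` is an assumption of this sketch):
```nim
import std/[json, strutils]

proc lcForkchoiceState(optimisticHeadBlockHash: string): JsonNode =
  let zeroHash = "0x" & "00".repeat(32)
  %*{
    "headBlockHash": optimisticHeadBlockHash,
    "safeBlockHash": optimisticHeadBlockHash,  # mirrors head: sketch assumption
    "finalizedBlockHash": zeroHash             # ZERO_HASH reported for now
  }

echo lcForkchoiceState("0x" & "11".repeat(32)).pretty
```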
#3864 introduced a regression by turning on `requireAllFields` globally
for JSON parsing. Certain endpoints such as `RestSyncInfo` have optional
fields that do not parse correctly without additional changes. This is
reverted for now to restore previous behaviour and unblock CI testing.
Other changes:
* The Keymanager error responses differ from the Beacon API responses.
'keymanagerApiError' replaces the former usages of 'jsonError'.
* Return status codes 401 and 403 for authorization errors in accordance
with the spec.
* Eliminate inconsistencies in the REST JSON parsing. Some of the code
paths allowed missing fields.
* Added logging of serialization failure details at DEBUG level.
* track the SyncCommittee period in slot end logs
* Update beacon_chain/nimbus_beacon_node.nim
Co-authored-by: Etan Kissling <etan@status.im>
Removes a few extra-ambitious templates to make `self` updates explicit,
and moves the `FinalityCheckpoints` type from `base` to `helpers` as it
is an additional Nimbus specific type not defined by spec.
A notice in the log is enough - we don't want the REST API to return an
error in this case because that makes the validator client think
something is seriously wrong (like the BN or the message being broken).
Whether new blocks/attestations/etc are produced internally or received
via REST, their journey through the node is the same - to ensure that
they get the same treatment (logging, metrics, processing), this PR
moves the routing to a dedicated module and fixes several small
differences that existed before.
* `xxxValidator` -> `processMessageName` - the processor also was adding
messages to pools, so we want the name to reflect that action
* add missing "sent" metrics for some messages
* document ignore policy better - already-seen messages are not actually
rebroadcast by libp2p
* skip redundant signature checks for internal validators consistently
The justified and finalized `Checkpoint` are frequently passed around
together. This introduces a new `FinalityCheckpoints` data structure that
combines them into one.
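A minimal sketch of the combined type (field names assumed):
```nim
type
  Epoch = uint64
  Eth2Digest = array[32, byte]
  Checkpoint = object
    epoch: Epoch
    root: Eth2Digest
  FinalityCheckpoints = object
    justified: Checkpoint   # field names assumed for this sketch
    finalized: Checkpoint
```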
Due to the large usage of this structure in fork choice, also took this
opportunity to update fork choice tests to the latest v1.2.0-rc.1 spec.
Many additional tests enabled, some need more work, e.g. EL mock blocks.
Also implemented `discard_equivocations` which was skipped in #3661,
and improved code reuse across fork choice logic while at it.
* merge LC db into main BN db
To treat derived LC data similar to derived state caches, merge it into
the main beacon node DB.
* shorten table names, group with lc prefix
* optimistic sync
* flag that initially loaded blocks from database might need execution block root filled in
* return optimistic status in REST calls
* refactor blockslot pruning
* ensure beacon_blocks_by_{root,range} do not provide optimistic blocks
* handle forkchoice head being pre-merge with block being postmerge
* re-enable blocking head updates on validator duties
* fix is_optimistic_candidate_block per spec; don't crash with nil future
* mark blocks sans execution payloads valid during head update
* persist LC data across restarts
With the Altair spec `LightClientUpdate` structure taking its final form
it is finally possible to persist LC data across restarts without having
to worry about data migration due to spec changes. A separate `lcdataV1`
database is created in the `caches` subdirectory to hold known LC data.
A full database with default settings (129 periods) uses <15 MB disk.
* extend LC data DB rationale
* wording
* add `isSupportedBySQLite` helper and explicit return
* remove redundant `return`
All message processing is done in the validation callbacks, so there's
no need to trigger data handlers for messages we publish - the
self-publish is async, and therefore has an associated cost
Separate LC initialization options from the main ChainDAGRef options to
allow ChainDAGRef to treat them as opaque and reduce risk for conflicts
when extending those options in the future.
* remove web3 url prompt in launcher script
The interactive prompt for web3 has outlived its utility as we now load
URLs from command line params and config files, preventing the prompt
from correctly detecting when it's needed.
Also, after the merge, a JWT secret will (likely) be needed.
* log notice when web3 url is missing
* fix docs to not mention default that doesn't exist
* fix scripts to properly quote arguments
Merkle proofs tend to have long underlying type definitions, e.g.,
`array[log2trunc(NEXT_SYNC_COMMITTEE_INDEX), Eth2Digest]`. For the
ones used in the LC sync protocol, dedicated types are introduced
to improve readability. Furthermore, the `CachedLightClientBootstrap`
wrapper that solely wrapped a merkle branch is eliminated.
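A minimal sketch of the dedicated types (the depth constants below are assumptions standing in for `log2trunc(...)` of the spec generalized indices):
```nim
type
  Eth2Digest = array[32, byte]

const
  # Assumed stand-ins for log2trunc(NEXT_SYNC_COMMITTEE_INDEX) and
  # log2trunc(FINALIZED_ROOT_INDEX) from the Altair spec.
  NEXT_SYNC_COMMITTEE_DEPTH = 5
  FINALIZED_ROOT_DEPTH = 6

type
  NextSyncCommitteeBranch = array[NEXT_SYNC_COMMITTEE_DEPTH, Eth2Digest]
  FinalityBranch = array[FINALIZED_ROOT_DEPTH, Eth2Digest]
```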
Adds a `--light-client-data-max-periods` option to override the number
of sync committee periods to retain light client data.
Raising it above the default enables archive nodes to serve full data.
Lowering below the default speeds up import times (still no persistence)
This updates `nim-ssz-serialization` to
`3db6cc0f282708aca6c290914488edd832971d61`.
Notable changes:
- Use `uint64` for `GeneralizedIndex`
- Add support for building merkle multiproofs
Combines the LC data configuration options (serve / importMode), the
callbacks (finality / optimistic LC update) as well as the cache storing
light client data, into a new `LightClientDataStore` structure.
Also moves the structure into a light client specific file.
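A minimal sketch of the grouping (enum values and field names are assumptions, not the actual Nimbus definitions):
```nim
type
  LightClientDataImportMode = enum
    importNone, importOnlyNew, importFull   # values assumed for this sketch
  OnLightClientUpdate = proc () {.gcsafe.}
  LightClientDataStore = object
    # configuration
    serve: bool
    importMode: LightClientDataImportMode
    maxPeriods: uint64
    # callbacks
    onFinalityUpdate: OnLightClientUpdate
    onOptimisticUpdate: OnLightClientUpdate
    # cache of derived light client data (stand-in representation)
    cache: seq[byte]
```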
* Initial commit
* Make `events` API spec compliant.
* Add `Eth-Consensus-Version` in responses.
* Bump chronos to get redirect with headers working.
* Add `is_optimistic` field and handling to syncing RestSyncInfo.
* adopt LC REST API with v0 suffix (without proofs)
Adopts the light client data REST API used by Lodestar as defined in
https://github.com/ethereum/beacon-APIs/pull/181 with a v0 suffix.
Requests:
- `/eth/v0/beacon/light_client/bootstrap/{block_root}`
- `/eth/v0/beacon/light_client/updates?start_period={start_period}&count={count}`
- `/eth/v0/beacon/light_client/finality_update`
- `/eth/v0/beacon/light_client/optimistic_update`
HTTP Server-Sent Events (SSE):
- `light_client_finality_update_v0`
- `light_client_optimistic_update_v0`
More work is needed to adopt the proofs endpoint; it is not included.
* initialize event queues
* register event topics
`m.depositsChain.blocks.len` may change during `startEth1Syncing`. If
that happened, an additional check now ensures that `scratchMerkleizer`
was initialized before attempting to use it.
If database access errors are encountered while processing LC data,
track the section that was accessed without errors so that the rest
can be re-indexed later.
Ensures that all intermediate blocks are reported if a small gap is
encountered when downloading optimistic blocks. Gaps may occur when
a block is missed and still downloading, or when EL processing is slow.
If the gap exceeds 1 epoch, the optimistic block stream jumps to the latest block.
* check for and log gossip broadcast failure
* switch notices to warns; update LC variables regardless
* don't both return a Result and log sending error
* add metrics counter for failed-due-to-no-peers and removed unnecessary async
* don't report failure of sync committee messages
* remove redundant metric
* document metric being incremented
The initial sync committee period follows a different finality rule than
the other ones: instead of considering the next sync committee finalized
as soon as `finalizedHead.slot >= period.start_slot`, the Altair fork's
start slot has to be used as the lower bound.
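A minimal sketch of the corrected bound (constants are the mainnet values; the function name is assumed):
```nim
const
  SLOTS_PER_EPOCH = 32'u64
  EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256'u64
  ALTAIR_FORK_EPOCH = 74240'u64   # mainnet value

proc periodStartSlot(period: uint64): uint64 =
  period * EPOCHS_PER_SYNC_COMMITTEE_PERIOD * SLOTS_PER_EPOCH

proc isNextSyncCommitteeFinalized(period: uint64, finalizedSlot: uint64): bool =
  # For the initial Altair period, the period start slot may predate the
  # fork, so the Altair fork's start slot is used as the lower bound.
  let lowSlot = max(periodStartSlot(period), ALTAIR_FORK_EPOCH * SLOTS_PER_EPOCH)
  finalizedSlot >= lowSlot
```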
For consistency with other options, use a common prefix for light client
data configuration options.
* `--serve-light-client-data` --> `--light-client-data-serve`
* `--import-light-client-data` --> `--light-client-data-import-mode`
No deprecation of the old identifiers as they were only sparingly used
and all usage can be easily updated without interference.
Corrects an off-by-1 in the reported sync percentage computation.
The new logic is based on `SyncQueue.total` and `SyncQueue.progress`,
using `pivot` instead of `sq.startSlot`.
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
Adds a `LightClient` instance to the beacon node as preparation to
accelerate syncing in the future (optimistic sync).
- `--light-client-enable` turns on the feature
- `--light-client-trusted-block-root` configures block to start from
If no block root is configured, light client tracks DAG `finalizedHead`.
Introduces a new library for syncing using libp2p based light client
sync protocol, and adds a new `nimbus_light_client` executable that uses
this library for syncing. The new executable emits log messages when
new beacon block headers are received, and is integrated into testing.
* SSZ `[]` -> `mitem`
* `[]` -> `item`
Immutable access via a mutable instance cannot rely on template
overloading, and `[]` cannot be a `func` because of the special `seq`
handling in the compiler.
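A minimal sketch of why distinct names are used (the generic `List` type is illustrative, not the SSZ one):
```nim
type List = object
  data: seq[int]

func item(x: List, i: int): int =
  x.data[i]                 # read-only access, usable on immutable values

proc mitem(x: var List, i: int): var int =
  x.data[i]                 # mutable access, requires a mutable instance

var l = List(data: @[1, 2, 3])
echo l.item(1)              # reads without requesting mutable access
l.mitem(1) = 42
echo l.item(1)              # 42
```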
* remove deprecated JSON-RPC server
* keep the command-line options around as no-ops, temporarily
* service -> server; JSON-RPC is still used elsewhere
* document static vs dynamic range checking requirements
* add `vindices` iterator to iterate over valid validator indices in a
state
* clean up spec comments in general
* fixup
Co-authored-by: tersec <tersec@users.noreply.github.com>
Incorporates the latest changes to the light client sync protocol based
on Devconnect AMS feedback. Note that this breaks compatibility with the
previous prototype, due to changes to data structures and endpoints.
See https://github.com/ethereum/consensus-specs/pull/2802
Since we were not verifying BLS signature in blocks that we produce,
we were failing to notice that some deposits need to be ignored (due
to having an invalid signature). Processing these deposits resulted
in a different ending state after the state transition which caused
our blocks to be rejected by the network.
* Some Web3Signer versions insist replying with text/plain messages
* When reading blocks, the Web3Signer uses upper-case fork identifiers
instead of the lower-case identifiers used by the Beacon API.
Follows up on https://github.com/status-im/nimbus-eth2/pull/3461 which
ensured that repeated `beaconBlocksByRange` requests get shrunk to
account for potential out-of-band advancements to `safeSlot`, with
similar logic for the initial request.
Other changes:
* logtrace can now verify sync committee messages and contributions
* Many unnecessary uses of pairs() have been removed for consistency
* Map 40x BN response codes to BeaconNodeStatus.Incompatible in the VC
Other fixes:
* Fix bit rot in the `make prater-dev-deposit` target.
* Correct content-type in the responses of the Nimbus signing node
* Invalid JSON payload was being sent in the web3signer requests
* era file verification
Implement and document era file verification
* era file states now come with block applied for easier verification
* clarify conflicting version handling
* document verification requirements
* remove count from name, use start-era, end-root to discover range
* remove obsolete todo
* abstract out block root loading
Updated outdated presets / configs / REST config to v1.1.10 specs.
- `TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH` and `PROPOSER_SCORE_BOOST` are
not yet used in `eth2-networks`, added configurability as TODOs.
- `MIN_ANCHOR_POW_BLOCK_DIFFICULTY` is no longer needed, put on ignore
list as some Altair devnets still reference it.
* use MAX_CHUNK_SIZE_BELLATRIX for signed Bellatrix blocks
* Update beacon_chain/networking/eth2_network.nim
Co-authored-by: Etan Kissling <etan@status.im>
* localPassC to localPassc
* check against maxChunkSize rather than constant
Co-authored-by: Etan Kissling <etan@status.im>
This PR makes the necessary adjustments to deal with the revamped snappy
API.
In practical terms for nimbus-eth2, there are performance increases to
gossip processing, database reading and writing as well as era file
processing. Exporting `.era` files for example, a snappy-heavy
operation, almost halves in total processing time:
Pre:
```
Average, StdDev, Min, Max, Samples, Test
39.088, 8.735, 23.619, 53.301, 50, tState
237.079, 46.692, 165.620, 355.481, 49, tBlocks
```
Post:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
25.350, 5.303, 15.351, 41.856, 50, tState
141.238, 24.164, 99.990, 199.329, 49, tBlocks
```
* Add `NoMonitor` flag to stop SyncManager from monitoring sync situation.
* Remove `toleranceValue` and `PeerScoreHeadTooNew`.
Co-authored-by: Etan Kissling <etan@status.im>