At `ELECTRA_FORK_EPOCH`, PeerDAS is not yet activated, so the current
mechanism based on `BlobSidecar` remains in use. With EIP-7688, the
generalized indices of `BeaconBlockBody` get reindexed, changing the
length of the inclusion proof within the `BlobSidecar`. Because network
Req/Resp operations allow responses across fork boundaries, this creates
the need for a `ForkedBlobSidecar` at that layer, as was already done
for `ForkedSignedBeaconBlock` for similar reasons.
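A minimal sketch of the wrapper shape, mirroring the existing
`ForkedSignedBeaconBlock` case-object pattern; all type names and
proof depths below are illustrative stand-ins, not final definitions:

```nim
type
  Eth2Digest = array[32, byte]
  ConsensusFork = enum
    Deneb, Electra
  # Stand-ins for the per-fork `BlobSidecar` types; the real ones
  # differ in inclusion-proof length once EIP-7688 reindexes
  # `BeaconBlockBody`.
  DenebBlobSidecar = object
    inclusionProof: array[17, Eth2Digest]  # depth under pre-EIP-7688 gindices
  ElectraBlobSidecar = object
    inclusionProof: seq[Eth2Digest]        # depth changes after reindexing
  # Fork-tagged wrapper for Req/Resp responses across fork boundaries.
  ForkedBlobSidecar = object
    case kind: ConsensusFork
    of Deneb:
      denebData: DenebBlobSidecar
    of Electra:
      electraData: ElectraBlobSidecar
```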
Note: This PR is only needed if PeerDAS is adopted _after_ EIP-7688.
If PeerDAS is adopted _before_ EIP-7688, a similar PR may be needed for
forked columns. Coincidental `Forked` jank can only be fully avoided if
both features activate at the same epoch, actual changes to blobs aside.
Delaying EIP-7688 for the sole purpose of epoch alignment is not worth it.
The fork digest determines the underlying data type on libp2p gossip,
so it's important to use the digest matching the message's fork instead
of deriving one from whatever the wall clock epoch happens to be.
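A hedged sketch of that selection, with a stand-in per-fork digest
table (names are illustrative):

```nim
type
  ConsensusFork = enum
    Deneb, Electra
  ForkDigest = array[4, byte]
  ForkDigests = object
    deneb, electra: ForkDigest

# Select the digest from the message's own fork; using the digest of
# the current wall epoch would mislabel messages from the prior fork.
func forkDigest(digests: ForkDigests, kind: ConsensusFork): ForkDigest =
  case kind
  of Deneb: digests.deneb
  of Electra: digests.electra
```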
In nim-web3, all `std.Option` usages are replaced by `results.Opt`. The same goes for nim-eth, with additional field renames and `GasInt` changed from `int64` to `uint64`.
* Make listen-address default to use dualstack.
* Use correct newProtocol().
* Bump nim-eth.
* Bump nim-eth one more time.
* Use `*` instead of IPv6 address for dualstack sockets.
* Bump chronos and nim-eth.
* Use new constructor.
* Fix: `listenAddress` should be `Opt[T]`, not `Option[T]`.
* Fix options.md.
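A small sketch of the `Option` -> `Opt` migration described above,
using the `results` package API; the `uint64` literal reflects the
`GasInt` change:

```nim
import std/options, results

# Before: std.Option; after: results.Opt.
let old: Option[int64] = some(123'i64)
let new: Opt[uint64] = Opt.some(123'u64)

doAssert old.get() == 123'i64
doAssert new.get() == 123'u64
doAssert Opt.none(uint64).isNone
```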
- add support for setting protocol handlers with `{.raises.}` annotation (see the sketch after this list)
- fix: valueOr and withValue utilities
- fix: remove explicit param from GossipSubParams constructor
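For the first item, a minimal sketch of a handler carrying an explicit
raises list in the chronos v4 style; the handler name and body are
illustrative:

```nim
import chronos

# The compiler now verifies that nothing other than CancelledError
# can escape this handler.
proc handle(proto: string) {.async: (raises: [CancelledError]).} =
  await sleepAsync(10.milliseconds)
  echo "negotiated ", proto
```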
The `<` function to compare peers was not exported, leading to the same
peer being acquired over and over again until it was kicked. `mixin`
doesn't pull it into `peerCmp` without the `*` export, and with the
export, no `mixin` is needed.
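A condensed illustration of the fix (hypothetical module content):

```nim
type Peer* = ref object
  score*: int

# The `*` on `<` is the fix: a `mixin <` in a generic `peerCmp`
# defined in another module only sees exported symbols at
# instantiation time, so without it resolution silently falls back.
proc `<`*(a, b: Peer): bool =
  a.score < b.score
```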
Iterating peers should only yield peers present in the registry;
otherwise `nil` pointers are returned and, depending on the comparison
function, iteration breaks, see #6149.
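A sketch of the guarded iteration, with illustrative types; the real
pool structure differs:

```nim
import std/tables

type
  Peer = ref object
    score: int
  PeerPool = object
    registry: Table[string, Peer]
    order: seq[string]  # iteration order, may lag behind the registry

# Yield only peers still present in the registry; a stale id must not
# surface as a `nil` peer to downstream comparison functions.
iterator peers(pool: PeerPool): Peer =
  for id in pool.order:
    if id in pool.registry:
      yield pool.registry[id]
```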
In a split-view situation, the canonical chain may only be served by a
tiny number of peers, and branches may span long durations. Minority
branches may still carry a large weight from attestations and should
be discovered. To assist with that, add a branch discovery module that
specifically targets peers with unknown histories and downloads from
them, in addition to the sync manager work, which handles popular
branches.
`eth2_network` forgets to descore peers when opening a connection times
out. It only descores when opening the connection succeeds and then
there is a subsequent error. The caller cannot distinguish the cases,
so ensure that the descore is also applied if the request fails during
its initial portion.
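A minimal sketch of the intended behavior, with illustrative names and
penalty value:

```nim
type Peer = ref object
  score: int

const TimeoutPenalty = -500  # illustrative value

# Apply the penalty on any failure, including a timed-out dial; the
# caller cannot tell the phases apart, so the descore must not depend
# on which phase failed.
proc requestWithDescore(peer: Peer, attempt: proc(): bool): bool =
  result = attempt()
  if not result:
    peer.score += TimeoutPenalty
```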
Each individual blob currently uses as much quota from the network limit
as an entire block does, 128 items per second shared across all peers.
Blobs are 128 KB each, whereas blocks can reach several MB, and blobs
are simpler to encode. There can be multiple blobs per block (6
currently), so allow 2000
blobs per second across all peers. That decreases the cost per block
from `3125 + 3125 * blobs.len` quota (= `[3125, 21875]`) to a lower
`3125 + 200 * blobs.len` quota (= `[3125, 4325]`), accounting for the
slight increase in data transfer and encoding time.
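Worked out, with the shared per-second pool of 400,000 quota units
inferred from those figures (3125 × 128 = 200 × 2000 = 400,000; the
pool size is derived from the numbers above, not quoted from the code):

```nim
const
  blockQuota = 3125  # 400_000 div 3125 = 128 blocks per second
  blobQuota  = 200   # 400_000 div 200 = 2000 blobs per second

func responseQuota(blobs: int): int =
  blockQuota + blobQuota * blobs

doAssert responseQuota(0) == 3125
doAssert responseQuota(6) == 4325  # was 3125 + 3125 * 6 = 21_875
```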
Fix the `/eth/v1/beacon/deposit_snapshot` API to produce proper EIP-4881
compatible `DepositTreeSnapshot` responses. The endpoint used to expose
a Nimbus-specific database internal format.
Also fix trusted node sync to consume properly formatted EIP-4881 data
with `--with-deposit-snapshot`, and the `--finalized-deposit-tree-snapshot`
beacon node launch option to accept EIP-4881 data. Further, ensure that
`ncli_testnet` produces EIP-4881 formatted data for interoperability.
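For reference, a sketch of the EIP-4881 snapshot shape; field names
follow the EIP, while the plain Nim types are stand-ins for the SSZ
types used in the client:

```nim
const DEPOSIT_CONTRACT_DEPTH = 32

type
  Eth2Digest = array[32, byte]
  # EIP-4881 `DepositTreeSnapshot` layout.
  DepositTreeSnapshot = object
    finalized: seq[Eth2Digest]  # up to DEPOSIT_CONTRACT_DEPTH finalized nodes
    depositRoot: Eth2Digest
    depositCount: uint64
    executionBlockHash: Eth2Digest
    executionBlockHeight: uint64
```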
The exports in `eth2_discovery` produce deprecation warnings as they
refer to `close`, `closeWait` and so on. It turns out that they are not
necessary at all: the `Eth2DiscoveryProtocol` is already exported two
lines above using the `*` marker.
This PR allows sharing pubkey data between validators by using a
thread-local cache, netting roughly a 400 MB memory usage reduction on
Holesky, since we keep 3 permanent + several ephemeral state copies in
memory at all times and each state copy holds a full validator set.
The PR also introduces a hash cache for the key, which gives a ~14%
speedup for a full state `hash_tree_root`; the key makes up a large
part of the `Validator` htr time.
Finally, the time it takes to copy a state goes down as well, from
~80 ms to ~60 ms, for reasons similar to the htr speedup.
We use a `ptr` even if a `ref` could in theory have been used - there is
not much practical benefit to a `ref` (given it's mutable) while a `ptr`
is cheaper and easier to copy (when copying temporary states).
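A minimal sketch of the thread-local cache idea with illustrative
names (`keyCache`, `shared`); the actual implementation differs in
detail:

```nim
import std/tables

type ValidatorPubKey = object
  blob: array[48, byte]  # compressed BLS public key

# One cache per thread; states on the same thread then share `ptr`
# entries, so copying a state copies 8-byte pointers, not 48-byte keys.
var keyCache {.threadvar.}: Table[array[48, byte], ptr ValidatorPubKey]

proc shared(key: ValidatorPubKey): ptr ValidatorPubKey =
  ## Return the canonical shared instance for `key`, creating it once.
  keyCache.withValue(key.blob, existing):
    return existing[]
  result = create(ValidatorPubKey)
  result[] = key
  keyCache[key.blob] = result
```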
We could go further and cache a cooked pubkey, but it turns out this is
quite intrusive: in all the relevant places, we're already using a
cooked key from the immutable validator data, so there are no immediate
performance gains from doing so, while managing the compressed ->
cooked key mapping would become more difficult. Something for a future
PR, perhaps.
Co-authored-by: Etan Kissling <etan@status.im>