In nim-web3, all `std.Option` uses are replaced by `results.Opt`. The same goes for nim-eth, with additional field name changes, and `GasInt` changed from `int64` to `uint64`.
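A rough sketch of what the migration looks like at call sites, using the public `results` API; the type and field names below are illustrative, not the actual nim-eth definitions:

```nim
import results

# std/options                ->  results.Opt
#   some(123'u64)                  Opt.some(123'u64)
#   none(uint64)                   Opt.none(uint64)
#   x.isSome / x.get               x.isSome / x.get (unchanged)

type TxStub = object
  gasLimit: uint64           # GasInt widened from int64 to uint64
  maxFeePerGas: Opt[uint64]  # previously Option[GasInt]

let tx = TxStub(
  gasLimit: 21_000,
  maxFeePerGas: Opt.some(2_000_000_000'u64))

doAssert tx.maxFeePerGas.isSome and tx.maxFeePerGas.get() == 2_000_000_000'u64
```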
* Beacon node side implementation.
* Validator client side implementation.
* Address review comments and fix the test.
* Only 400 errors can be `IndexedErrorMessage`; 500 errors are always `ErrorMessage`.
* Remove VC shutdown functionality.
* Remove magic constants.
* Make arguments more visible and disable default values.
* Address review comments.
* Electra attestation updates.
In Electra, we have two attestation formats: on-chain and on-network - the former combines all committees of a slot in a single committee bit list (sketched after the commit list below).
This PR makes a number of cleanups to move towards fixing this - attestation packing, however, still needs to be fixed, as it currently creates attestations covering only a single committee, which is very inefficient.
* more attestations in the blocks
* signing and aggregation fixes
* tool fix
* test, import
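For reference, the Electra on-chain attestation container (EIP-7549) has roughly the following shape; the constants are the usual mainnet preset values and the types are simplified placeholders, not the nimbus-eth2 SSZ definitions:

```nim
const
  MAX_COMMITTEES_PER_SLOT = 64
  MAX_VALIDATORS_PER_COMMITTEE = 2048

type
  AttestationData = object        # placeholder; carries slot, vote roots, etc.
  ValidatorSig = array[96, byte]  # placeholder for the BLS signature

  # On-chain format (block body): one attestation can cover every committee of
  # a slot. `committee_bits` marks which committees are included and
  # `aggregation_bits` concatenates their participation bits, so its limit is
  # MAX_VALIDATORS_PER_COMMITTEE * MAX_COMMITTEES_PER_SLOT. The on-network
  # format instead carries a single committee per attestation.
  ElectraAttestation = object
    aggregation_bits: seq[bool]                           # stands in for an SSZ Bitlist
    data: AttestationData
    committee_bits: array[MAX_COMMITTEES_PER_SLOT, bool]  # stands in for an SSZ Bitvector
    signature: ValidatorSig
```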
During sync, sometimes the same block gets encountered and added to the quarantine multiple times. If its parent is already known, the quarantine incorrectly registers it as missing, leading to a re-download. This can be fixed by recursively resolving the parent's deepest missing ancestor and registering that instead.
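A minimal sketch of the recursive resolution, with hypothetical names and simplified data structures standing in for the real quarantine:

```nim
import std/[tables, sets, hashes]

type
  Root = array[32, byte]
  Quarantine = object
    orphans: Table[Root, Root]  # quarantined block root -> its parent root
    missing: HashSet[Root]      # roots scheduled for (re-)download

proc deepestMissingParent(q: Quarantine, parent: Root): Root =
  ## Walk up through ancestors that are already quarantined, so that we
  ## register the oldest unknown ancestor instead of a block we already have,
  ## avoiding a pointless re-download of the known parent.
  result = parent
  while result in q.orphans:
    result = q.orphans[result]

proc addMissing(q: var Quarantine, parent: Root) =
  q.missing.incl q.deepestMissingParent(parent)
```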
Also increase the stickiness of `missing`: we only perform 4 attempts within ~16 seconds before giving up. Very frequently this is not enough, and there is no progress until the sync manager kicks in, even on holesky.
#6087 introduced a subtle change to `nim-web3` resulting in `Gwei` being serialized differently than before. Using a `distinct` type for `Gwei` improves type safety and avoids such problems in the future.
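A minimal sketch of the `distinct` approach (the real `Gwei` lives in nim-web3 and borrows more operations than shown here):

```nim
type Gwei = distinct uint64

# Only explicitly borrowed operations exist, so Gwei values can no longer be
# mixed up with plain uint64 amounts or picked up by the wrong serialization
# overload by accident.
proc `+`(a, b: Gwei): Gwei {.borrow.}
proc `==`(a, b: Gwei): bool {.borrow.}
proc `$`(a: Gwei): string {.borrow.}

let fee = Gwei(1_000) + Gwei(500)
doAssert fee == Gwei(1_500)
doAssert $fee == "1500"
```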
Fix the `/eth/v1/beacon/deposit_snapshot` API to produce proper EIP-4881
compatible `DepositTreeSnapshot` responses. The endpoint used to expose
a Nimbus-specific internal database format.
Also fix trusted node sync to consume properly formatted EIP-4881 data
with `--with-deposit-snapshot`, and the `--finalized-deposit-tree-snapshot`
beacon node launch option to use the EIP-4881 data. Further, ensure that
`ncli_testnet` produces EIP-4881 formatted data for interoperability.
EIP-4881 was never correctly implemented; the existing `DepositTreeSnapshot`
structure has nothing to do with its actual definition. Reflect that
by renaming the type to the Nimbus-specific `DepositContractSnapshot`,
so that an actual EIP-4881 implementation can use the correct names.
- https://eips.ethereum.org/EIPS/eip-4881#specification
Notably, `DepositTreeSnapshot` contains a compressed sequence in
`finalized`, holding only the minimally required intermediate roots.
That also explains the incorrect REST response reported in #5508.
The non-canonical representation was introduced in #4303 and is also
persisted in the database. We'll have to maintain it for a while.
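For reference, the EIP-4881 snapshot has the following shape (transcribed from the EIP linked above, with placeholder Nim types rather than the actual SSZ definitions):

```nim
type
  Eth2Digest = array[32, byte]  # placeholder for the real digest type

  # EIP-4881 `DepositTreeSnapshot`: `finalized` holds only the minimal set of
  # intermediate roots (at most the deposit contract tree depth of them)
  # needed to reconstruct the finalized part of the deposit tree.
  DepositTreeSnapshot = object
    finalized: seq[Eth2Digest]
    deposit_root: Eth2Digest
    deposit_count: uint64
    execution_block_hash: Eth2Digest
    execution_block_height: uint64
```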
Annotate the `research` and `test` files for which no further changes
are needed to compile successfully, so that they do not interfere with
periodic tasks such as spec reference bumps.
`scanf` apparently both returns a `bool` and may raise arbitrary
exceptions, depending on which functions get called by the `macro`.
To make this explicit, catch the `ValueError` from the generated
`parseInt` call, separating `scanf` behaviour from the actual SSZ
test logic. In the end it mostly doesn't matter, as there is a
`doAssert wasMatched` on the next line in most places (not everywhere
though). But it still makes the `scanf` internals explicit, so it is clearer.
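The resulting pattern looks roughly like this (the input and pattern are made up for illustration):

```nim
import std/strscans

let dirName = "case_12"   # e.g. an SSZ test fixture directory name
var caseId: int
let wasMatched =
  try:
    scanf(dirName, "case_$i", caseId)
  except ValueError:
    # raised by `parseInt`-style helpers that the `scanf` macro may call
    raiseAssert "non-integer in test fixture name: " & dirName
doAssert wasMatched
doAssert caseId == 12
```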
Add `{.raises.}` annotations to `tests` files where needed to enable
`{.push raises: [].}`. This avoids interfering with periodic changes such as
spec version bumps, and avoids special-casing folders when editing.
The effort to maintain `{.raises.}` is trivial after the initial round.
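The resulting pattern, sketched on a hypothetical helper rather than an actual test file:

```nim
{.push raises: [].}

import std/strutils

# Anything that can raise must now declare it explicitly...
proc parseSlot(s: string): uint64 {.raises: [ValueError].} =
  parseBiggestUInt(s)

# ...while the rest is compiler-checked to be exception-free.
proc formatSlot(slot: uint64): string =
  "slot " & $slot

{.pop.}

doAssert parseSlot("42") == 42
doAssert formatSlot(42) == "slot 42"
```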
`test_peer_pool` is a bit different from the other tests, as it uses
`closureScope`, which doesn't play well with `{.push raises: [].}`.
Define an overload instead that allows passing `{.raises.}` to the
`template`. This allows using `unittest2`'s exception handler without
having to refactor the test.
`stderr.write` may fail, e.g., if no tty is connected, which may happen
in some CI configurations. Discard such failures and continue quitting
instead of raising the error.
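Roughly the following shape (the helper name is hypothetical):

```nim
proc safeWriteStderr(msg: string) =
  try:
    stderr.write(msg)
  except IOError:
    # No tty attached (happens in some CI setups) - keep shutting down anyway.
    discard
```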
This PR allows sharing pubkey data between validators via a thread-local
pubkey cache, netting about a 400 MB memory usage reduction on holesky,
since we keep 3 permanent + several ephemeral state copies in memory at
all times and each state copy holds its own full list of validators.
The PR also introduces a hash cache for the key, which gives a ~14% speedup
for a full state `hash_tree_root` - the key makes up a large part of
the `Validator` htr time.
Finally, the time it takes to copy a state goes down as well, from ~80 ms
to ~60 ms, for reasons similar to htr.
We use a `ptr` even though a `ref` could in theory have been used - there is
not much practical benefit to a `ref` (given the data is mutable) while a `ptr`
is cheaper and easier to copy (when copying temporary states).
We could go further and cache a cooked pubkey, but it turns out this is
quite intrusive - in all the relevant places we're already using a
cooked key from the immutable validator data, so there are no immediate
performance gains from doing so, while managing the compressed -> cooked
key mapping would become more difficult - something for a future PR,
perhaps.
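A hypothetical sketch of the sharing scheme (the real code deals with the actual `Validator` SSZ type and cooked-key interop; names and types here are simplified):

```nim
import std/[tables, hashes]

type
  RawPubKey = array[48, byte]  # compressed BLS pubkey bytes, placeholder type

# Thread-local, so no locking is needed; every state copy on this thread ends
# up pointing at the same allocation for a given key.
var pubkeyCache {.threadvar.}: Table[RawPubKey, ptr RawPubKey]

proc sharedPubkey(raw: RawPubKey): ptr RawPubKey =
  ## Return the canonical shared allocation for `raw`. A `ptr` is used instead
  ## of a `ref`: it is cheaper to copy when duplicating temporary states, and
  ## the cache keeps the allocation alive for the lifetime of the thread.
  pubkeyCache.withValue(raw, existing):
    return existing[]
  let shared = create(RawPubKey)
  shared[] = raw
  pubkeyCache[raw] = shared
  shared
```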
Co-authored-by: Etan Kissling <etan@status.im>
* compute post-merge randao mix without loading state
* avoid copying state on shuffling computation and compute epochref
* speed up state copy for block production
When using checkpoint sync, only the checkpoint state is available; the
corresponding block is not downloaded and is backfilled later.
`dag.backfill` tracks the latest filled `slot`, and the latest `parent_root`
for which no block has been synced yet.
In checkpoint sync, this assumption is broken, because there the initial
`dag.backfill.slot` is set based on the checkpoint state's slot while the
block itself is not available either.
However, the sync manager in backward mode also requests `dag.backfill.slot`,
and `block_clearance` then backfills the checkpoint block once it is
synced. But there is no guarantee that a peer ever sends us that block:
they could send us all the parent blocks and omit solely the checkpoint
block itself. In that situation, we would accept the parent blocks and
advance `dag.backfill`, and subsequently never request the checkpoint
block again, resulting in a gap inside the blocks DB that is never filled.
To mitigate that, restore the assumption that `dag.backfill.slot` is the
latest filled `slot` and `dag.backfill.parent_root` refers to the next
block that needs to be synced. By setting `slot` to `tail.slot + 1` and
`parent_root` to `tail.root`, we put a fake summary into `dag.backfill`,
so that `block_clearance` only proceeds once the checkpoint block exists.
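A minimal sketch of the restored invariant, with placeholder types standing in for the actual `Slot`, `Eth2Digest` and `BeaconBlockSummary` definitions:

```nim
type
  Slot = uint64                 # placeholder
  Eth2Digest = array[32, byte]  # placeholder
  BeaconBlockSummary = object   # what `dag.backfill` stores
    slot: Slot
    parent_root: Eth2Digest

proc fakeBackfillSummary(tailSlot: Slot, tailRoot: Eth2Digest): BeaconBlockSummary =
  ## Pretend that everything from `tail.slot + 1` onwards is filled and that
  ## the next block we need is the checkpoint (tail) block itself, so that
  ## `block_clearance` only proceeds once that block has actually been stored.
  BeaconBlockSummary(slot: tailSlot + 1, parent_root: tailRoot)
```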