- Client also handles error messages when `id` is null
- Client passes a meaningful error to `newFut` when `processMessage` fails
- Refactoring: extract RPC handler from HTTP and WebSocket servers
- Use pragma push/pop pair to disable warning
- Improve HTTP server exception handling
- Fix CI badge url
- Upgrade GitHub Actions to v4
- Revert "Fix CI badge url"
- `HttpAuthHook`: use async raises
- Move CancelledError handling to outer try/except of RpcWebsocketServer
- Implement RPC batch calls in both servers and clients
- v0.4.0
- Fix compilation when chronicles logging is turned on
- Add framework to support more optional types
- v0.4.2
- test refc in CI in Nim 2.0 and later
- use non-EOL macOS version for GitHub Actions CI
- avoid failing uninitialized `Future`
- Improve batch call example and wrapper comments
- Fix WS and socket client error handling and add test for #212
- Add build test with chronicles JSON output enabled
* track latest duration instead of total in new timing metrics
Change `db_checkpoint_seconds` and `state_replay_seconds` metrics to
record the latest duration instead of the total. `nim-metrics` already
synthesizes a `_total` metric from these implicitly.
* still have to use `inc`; metrics only synthesizes the name, not the sum (see the sketch below)
* prefix with `beacon_dag`
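A minimal sketch of the `_total` note above, using nim-metrics'
`declareCounter`/`inc` (the metric name follows the `beacon_dag` prefix
note; the duration variable is hypothetical):

```nim
import metrics

# Counters are exported under a synthesized `_total` name, but only the
# name is synthesized: the cumulative sum still needs an explicit `inc`.
declareCounter beacon_dag_state_replay_seconds,
  "Time spent replaying states"     # exposed as ..._seconds_total

let replayDurSeconds = 0.12         # hypothetical measured duration
beacon_dag_state_replay_seconds.inc(replayDurSeconds)
```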
- update instructions for tracking upstream MIRACL Core
- bump `bls12-381-tests` to `v0.1.2`
- bump `miracl-core` to `0f67878bee7c4108405deb2b0b5e4e58d1ae30fc`
- test refc in CI in Nim 2.0 and later
- rename `milagro.nims` -> `miracl.nims`
- rename `milagro.nim` -> `miracl.nim`
- rename `milagro(Path|_func)` -> `miracl(Path|_func)`
- rename `milagro` references -> `miracl` in documentation
Validator monitoring gained two new metrics for tracking whether
blocks are included on the head chain.
As with attestations, if the block is produced in epoch N, reporting
will use the state when switching to epoch N+2 (so as to reasonably
stabilise the block inclusion in the face of reorgs).
Database checkpointing can take seconds, e.g., while Geth is syncing.
Add a debug log and metric for it, plus an info log if it takes longer
than 250ms, same as for the existing `State replayed` log. If the log
shows up for a user while the system is not otherwise overloaded, it
may point to slow disk speed or a thermal issue.
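A hedged sketch of that pattern, assuming chronos timing and the
nim-metrics gauge API (`checkpointDatabase` is a hypothetical stand-in
for the real call):

```nim
import chronicles, chronos/timer, metrics

declareGauge beacon_dag_db_checkpoint_seconds,
  "Time spent checkpointing the database"

proc checkpointDatabase() =
  discard  # stand-in for the actual database checkpoint

let startTick = Moment.now()
checkpointDatabase()
let dur = Moment.now() - startTick
# Record the latest duration, per the timing-metrics change above
beacon_dag_db_checkpoint_seconds.set(dur.nanoseconds.float / 1e9)
if dur >= 250.milliseconds:  # same threshold as `State replayed`
  info "Database checkpointed", dur
else:
  debug "Database checkpointed", dur
```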
Make raised exceptions explicit in `ncli_common.nim`, and handle more of
them in `ncli_db.nim` to have better UX when directories cannot be read
or file names do not parse against the expected format.
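For illustration, a minimal sketch of the style (hypothetical helper,
not the actual `ncli_common.nim` code):

```nim
import std/strutils

# Explicit `raises` forces callers (e.g. in ncli_db.nim) to handle the
# failure instead of crashing on an unexpected file name.
func parseEpochRange(name: string): (uint64, uint64) {.
    raises: [ValueError].} =
  let parts = name.split('_')
  if parts.len != 2:
    raise newException(ValueError, "unexpected file name: " & name)
  (parseBiggestUInt(parts[0]).uint64, parseBiggestUInt(parts[1]).uint64)
```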
`simutils.nim` is quite outdated w.r.t. code style. Apply the following:
- Use string concatenation instead of `strformat` for simple cases
- Catch `IOError` and `SerializationError` when loading/saving SSZ files (see the sketch after this list)
- Catch `ValueError` for remaining `strformat` usage
- Consistently use `chronicles` in `loadGenesis`
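A minimal sketch of that error-handling style, assuming
nim-serialization's `loadFile` and chronicles (the helper itself is
hypothetical):

```nim
import chronicles, serialization, ssz_serialization

proc loadSszFile(path: string, T: typedesc): T =
  try:
    result = SSZ.loadFile(path, T)
  except IOError as exc:
    fatal "Failed to read SSZ file", path, err = exc.msg
    quit 1
  except SerializationError as exc:
    fatal "Failed to decode SSZ file", path, err = exc.msg
    quit 1
```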
* compute post-merge randao mix without loading state
* avoid copying state when computing shufflings and epochrefs
* speed up state copy for block production
With checkpoint sync, the checkpoint block is typically unavailable
at the start and only backfilled later. To avoid treating it as having
a zero hash (which in some contexts means that execution is disabled),
wrap the result of `loadExecutionBlockHash` in `Opt` and handle the
block hash being unknown, as sketched below.
---------
Co-authored-by: Jacek Sieka <jacek@status.im>
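A self-contained toy sketch of the `Opt` distinction (types and
parameters are stand-ins, not the actual `loadExecutionBlockHash`
signature):

```nim
import results  # provides Opt

type Hash32 = array[32, byte]  # stand-in for Eth2Digest

# Previously, an unknown hash came back all-zero, which elsewhere also
# means "execution disabled"; `Opt` keeps the two cases apart.
proc loadExecutionBlockHash(backfilled: bool, stored: Hash32): Opt[Hash32] =
  if backfilled: Opt.some(stored)
  else: Opt.none(Hash32)

let res = loadExecutionBlockHash(false, default(Hash32))
if res.isNone:
  echo "checkpoint block not backfilled yet: hash unknown"
elif res.get() == default(Hash32):
  echo "pre-merge block: execution disabled"
```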
When syncing, we log a notice each time someone asks us for a block that
we haven't backfilled yet. This is quite verbose and not unexpected,
because the status message does not allow indicating backfill progress.
When using checkpoint sync, only the checkpoint state is available;
the block is not downloaded initially and is only backfilled later.
`dag.backfill` tracks the latest filled `slot` and the latest
`parent_root` for which no block has been synced yet.
In checkpoint sync, this assumption is broken: there, the starting
`dag.backfill.slot` is set based on the checkpoint state's slot, and
the block itself is also not available.
However, the sync manager in backward mode also requests
`dag.backfill.slot`, and `block_clearance` then backfills the
checkpoint block once it is synced. But there is no guarantee that a
peer ever sends us that block. They could send us all parent blocks
and solely omit the checkpoint block itself. In that situation, we
would accept the parent blocks and advance `dag.backfill`, and
subsequently never request the checkpoint block again, resulting in a
gap inside the blocks DB that is never filled.
To mitigate that, the assumption is restored that `dag.backfill.slot`
is the latest filled `slot` and `dag.backfill.parent_root` is the next
block that needs to be synced. By setting `slot` to `tail.slot + 1`
and `parent_root` to `tail.root`, we put a fake summary into
`dag.backfill` so that `block_clearance` only proceeds once the
checkpoint block exists, as sketched below.
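In code form, the restored invariant could look like this fragment
(`BeaconBlockSummary` and its fields as per the description above):

```nim
# `slot` marks everything past the tail as filled; `parent_root` makes
# the checkpoint block itself the next one to sync, so backfill cannot
# advance past a peer that omits it.
dag.backfill = BeaconBlockSummary(
  slot: dag.tail.slot + 1,
  parent_root: dag.tail.root)
```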
After checkpoint sync, historical block IDs cannot yet be queried.
However, they are needed to compute dependent roots of `ShufflingRef`.
To allow lookup, enable `getBlockIdAtSlot` to answer from compatible
states in memory; as long as they descend from the finalized checkpoint
and the requested slot is sufficiently recent, `block_roots` contains
everything to recover `BlockSlotId` up to `SLOTS_PER_HISTORICAL_ROOT`.
This is similar to how `attester_dependent_root` etc. are computed.
This accelerates the first couple of minutes of checkpoint sync on
Mainnet, especially the time until finality advances past the synced
checkpoint.
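A minimal sketch of that recovery (type and field names assumed),
mirroring the spec's `get_block_root_at_slot`:

```nim
# Valid while slot < state.slot <= slot + SLOTS_PER_HISTORICAL_ROOT;
# empty slots repeat the preceding root, which is why a `BlockSlotId`
# (block id plus slot) can still be recovered from it.
func blockRootAtSlot(state: BeaconState, slot: Slot): Opt[Eth2Digest] =
  if slot < state.slot and state.slot <= slot + SLOTS_PER_HISTORICAL_ROOT:
    Opt.some state.block_roots[int(slot mod SLOTS_PER_HISTORICAL_ROOT)]
  else:
    Opt.none Eth2Digest
```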