# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed and distributed under either of
#   * MIT license (license terms in the root directory or at
#     https://opensource.org/licenses/MIT).
#   * Apache v2 license (license terms in the root directory or at
#     https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

{.push raises:[].}
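
## Database helpers for the beacon sync worker: persist and restore the
## sync-state layout, and stash block headers by block number (in memory
## while a database transaction is open, otherwise in the kvt table.)
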
import
  pkg/[chronicles, chronos],
  pkg/eth/[common, rlp],
  pkg/stew/[interval_set, sorted_set],
  pkg/results,
  "../../.."/[common, core/chain, db/storage_types],
  ../worker_desc,
  ./headers_unproc

const
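  # Database key for the RLP-encoded `SyncStateLayout` record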
  LhcStateKey = 1.beaconStateKey

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

proc fetchSyncStateLayout(ctx: BeaconCtxRef): Opt[SyncStateLayout] =
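  ## Fetch the sync-state layout record from the database and RLP-decode it.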
  let data = ctx.db.ctx.getKvt().get(LhcStateKey.toOpenArray).valueOr:
    return err()
  try:
    return ok(rlp.decode(data, SyncStateLayout))
  except RlpError:
    discard
  err()

proc deleteStaleHeadersAndState(
    ctx: BeaconCtxRef;
    upTo: BlockNumber;
    info: static[string];
      ) =
  ## Delete stale headers and state
  let
    kvt = ctx.db.ctx.getKvt()
    stateNum = ctx.db.getSavedStateBlockNumber() # for persisting

  var bn = upTo
  while 0 < bn and kvt.hasKey(beaconHeaderKey(bn).toOpenArray):
    discard kvt.del(beaconHeaderKey(bn).toOpenArray)
    bn.dec

    # Occasionally persist the deleted headers. This will succeed if
    # this function is called early enough after restart when there is
    # no database transaction pending.
    if (upTo - bn) mod 8192 == 0:
      ctx.db.persistent(stateNum).isOkOr:
        debug info & ": cannot persist deleted sync headers", error=($$error)
        # So be it, stop here.
        return

  # Delete persistent state, there will be no use of it anymore
  discard kvt.del(LhcStateKey.toOpenArray)
  ctx.db.persistent(stateNum).isOkOr:
    debug info & ": cannot persist deleted sync headers", error=($$error)
    return

  if bn < upTo:
    debug info & ": deleted stale sync headers", iv=BnRange.new(bn+1,upTo)

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc dbStoreSyncStateLayout*(ctx: BeaconCtxRef; info: static[string]) =
  ## Save chain layout to persistent db
  let data = rlp.encode(ctx.layout)
  ctx.db.ctx.getKvt().put(LhcStateKey.toOpenArray, data).isOkOr:
    raiseAssert info & " put() failed: " & $$error

  # While executing blocks there are frequent save cycles. Otherwise, an
  # extra save request might help to pick up an interrupted sync session.
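  # Write through only when no transaction is pending and the stash is empty.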
  if ctx.db.level() == 0 and ctx.stash.len == 0:
    let number = ctx.db.getSavedStateBlockNumber()
    ctx.db.persistent(number).isOkOr:
      raiseAssert info & " persistent() failed: " & $$error

proc dbLoadSyncStateLayout*(ctx: BeaconCtxRef; info: static[string]): bool =
  ## Restore chain layout from persistent db. It returns `true` if a previous
  ## state could be loaded, and `false` if a new state was created.
  let
    rc = ctx.fetchSyncStateLayout()
    latest = ctx.chain.latestNumber()

  # If there was a manual import after a previous sync, then the saved state
  # might be outdated.
  if rc.isOk and
     # The base number is the least record of the FCU chains/tree, so the
     # finalised entry must not be smaller.
     ctx.chain.baseNumber() <= rc.value.final and

     # If the latest FCU number is not smaller than the head, there is nothing
     # to do (might also happen after a manual import.)
     latest < rc.value.head and

     # Can only resume a header download. Blocks need to be set up from scratch.
     rc.value.lastState == collectingHeaders:

    # Assign saved sync state
    ctx.sst.layout = rc.value

    # Add interval of unprocessed header range `(C,D)` from `README.md`
    ctx.headersUnprocSet(ctx.layout.coupler+1, ctx.layout.dangling-1)

    trace info & ": restored syncer state", L=latest.bnStr,
      C=ctx.layout.coupler.bnStr, D=ctx.layout.dangling.bnStr,
      H=ctx.layout.head.bnStr

    true

  else:
    ctx.sst.layout = SyncStateLayout() # empty layout

    if rc.isOk:
      # Some stored headers might have become stale, so delete them. Even
      # though it is not critical, stale headers just stay on the database
      # forever occupying space without purpose. Also, delete the state record.
      # After deleting headers, the state record becomes stale as well.
      if rc.value.head <= latest:
        # After manual import, the `latest` state might be ahead of the old
        # `head` which leaves a gap `(rc.value.head,latest)` of missing headers.
        # So the `deleteStaleHeadersAndState()` clean up routine needs to start
        # at the `head` and work backwards.
        ctx.deleteStaleHeadersAndState(rc.value.head, info)
      else:
        # Delete stale headers with block numbers starting at `latest` while
        # working backwards.
        ctx.deleteStaleHeadersAndState(latest, info)

    false

# ------------------

proc dbHeadersClear*(ctx: BeaconCtxRef) =
  ## Clear stashed in-memory headers
  ctx.stash.clear

proc dbHeadersStash*(
    ctx: BeaconCtxRef;
    first: BlockNumber;
    revBlobs: openArray[seq[byte]];
    info: static[string];
      ) =
  ## Temporarily store header chain to persistent db (oblivious of the chain
  ## layout.) The headers should not be stashed if they have already been
  ## interpreted and executed on the database.
  ##
  ## The `revBlobs[]` arguments are passed in reverse order so that block
  ## numbers apply as
  ## ::
  ##    #first     -- revBlobs[^1]
  ##    #(first+1) -- revBlobs[^2]
  ##    ..
  ##
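  ## For example, with `first == 100` and three blobs, `revBlobs[0]` holds
  ## block #102, `revBlobs[1]` block #101, and `revBlobs[2]` block #100.
  ##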
  let
    txLevel = ctx.db.level()
    last = first + revBlobs.len.uint64 - 1
  if 0 < txLevel:
    # Need to cache it because FCU has blocked writing through to disk.
    for n,data in revBlobs:
      ctx.stash[last - n.uint64] = data
  else:
    let kvt = ctx.db.ctx.getKvt()
    for n,data in revBlobs:
      let key = beaconHeaderKey(last - n.uint64)
      kvt.put(key.toOpenArray, data).isOkOr:
        raiseAssert info & ": put() failed: " & $$error

proc dbHeaderPeek*(ctx: BeaconCtxRef; num: BlockNumber): Opt[Header] =
  ## Retrieve some stashed header.
  # Try cache first
  ctx.stash.withValue(num, val):
    try:
      return ok(rlp.decode(val[], Header))
    except RlpError:
      discard
  # Use persistent storage next
  let
    key = beaconHeaderKey(num)
    rc = ctx.db.ctx.getKvt().get(key.toOpenArray)
  if rc.isOk:
    try:
      return ok(rlp.decode(rc.value, Header))
    except RlpError:
      discard
  err()

proc dbHeaderParentHash*(ctx: BeaconCtxRef; num: BlockNumber): Opt[Hash32] =
  ## Retrieve some stashed parent hash.
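  # The `?` operator propagates the `err()` case from `dbHeaderPeek()`.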
  ok (? ctx.dbHeaderPeek num).parentHash

proc dbHeaderUnstash*(ctx: BeaconCtxRef; bn: BlockNumber) =
  ## Remove header from temporary DB list
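  # Prefer the in-memory stash; otherwise delete from the persistent kvt table.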
  ctx.stash.withValue(bn, _):
    ctx.stash.del bn
    return
  discard ctx.db.ctx.getKvt().del(beaconHeaderKey(bn).toOpenArray)

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------