Beacon Sync
===========

According to the merge-first
[glossary](https://notes.status.im/nimbus-merge-first-el?both=#Glossary),
a beacon sync is a "*Sync method that relies on devp2p and eth/6x to fetch
headers and bodies backwards then apply these in the forward direction to the
head state*".

This [glossary](https://notes.status.im/nimbus-merge-first-el?both=#Glossary)
is used as a naming template for relevant entities described here. When
referred to, names from the glossary are printed **bold**.

Syncing blocks is performed in two overlapping phases:

* loading header chains and stashing them into a separate database table,
* removing headers from the stashed headers chain, fetching the block bodies
  the headers refer to, and importing/executing them via `persistBlocks()`
  (a rough sketch of both phases follows this list.)

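For illustration only, the two phases might look as sketched below. This is
not the Nim implementation: all names (`fetch_headers`, `fetch_bodies`,
`persist_blocks`, `stash`) are hypothetical stand-ins, and the real phases
overlap and run concurrently.

    # Illustrative Python sketch only -- the real syncer is Nim code and
    # runs both phases concurrently; every name is a hypothetical stand-in.

    def phase1_stash_headers(stash, fetch_headers, top, bottom):
        """Phase 1: fetch headers backwards, stash them in a separate table."""
        n = top
        while n > bottom:
            for h in fetch_headers(upto=n):   # network request, backwards
                stash[h.number] = h           # separate database table
                n = min(n, h.number)

    def phase2_import_blocks(stash, fetch_bodies, persist_blocks, base, top):
        """Phase 2: pop stashed headers, fetch bodies, import forwards."""
        for n in range(base + 1, top + 1):    # forward direction
            header = stash.pop(n)             # remove from the stash
            body = fetch_bodies([header])[0]  # bodies addressed by header hash
            persist_blocks([(header, body)])  # blocks are validated on import
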
So this beacon syncer slightly differs from the definition in the
[glossary](https://notes.status.im/nimbus-merge-first-el?both=#Glossary) in
that only headers are stashed on the database table and the block bodies are
fetched in the *forward* direction.

The reason for that behavioural change is that, for fetching, the block bodies
are addressed by the hash of their block headers. They cannot be cheaply and
fully verified upon arrival (e.g. by a payload hash); they are only validated
when they are imported/executed. So potentially corrupt blocks will be
discarded, and they will automatically be re-fetched with other missing blocks
in the *forward* direction.

Header chains
-------------

The header chains are the triple of

* a consecutively linked chain of headers starting at Genesis,
* followed by a sequence of missing headers,
* followed by a consecutively linked chain of headers ending up at a
  finalised block header (earlier received from the consensus layer.)

A sequence *@[h(1),h(2),..]* of block headers is called a *linked chain* if

* block numbers join without gaps, i.e. *h(n).number+1 == h(n+1).number*,
* parent hashes match, i.e. *h(n).hash == h(n+1).parentHash*
  (a minimal checker for both clauses is sketched below.)

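A minimal Python sketch of that check, assuming a hypothetical `Header`
record (the syncer itself is Nim code):

    from dataclasses import dataclass

    @dataclass
    class Header:             # hypothetical stand-in for a block header
        number: int           # block number
        hash: bytes           # hash of this header
        parent_hash: bytes    # hash of the parent header

    def is_linked_chain(headers: list[Header]) -> bool:
        """Both clauses above, checked for every adjacent pair."""
        return all(
            h.number + 1 == nxt.number and h.hash == nxt.parent_hash
            for h, nxt in zip(headers, headers[1:])
        )
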
General header linked chains layout diagram

      0                C                     D                E   (1)
      o----------------o---------------------o----------------o--->
      | <-- linked --> | <-- unprocessed --> | <-- linked --> |

Here, the single upper letter symbols *0*, *C*, *D*, *E* denote block numbers.
For convenience, these letters are also identified with their associated block
headers or the full blocks. Saying *"the header 0"* is short for *"the header
with block number 0"*.

Meaning of *0*, *C*, *D*, *E*:

* *0* -- Genesis, block number *0*
* *C* -- coupler, maximal block number of the linked chain starting at *0*
* *D* -- dangling, minimal block number of the linked chain ending at *E*
  with *C <= D*
* *E* -- end, block number of some finalised block (not necessarily the latest
  one)

This definition implies *0 <= C <= D <= E*, and the state of the header linked
chains can be uniquely described by the triple of block numbers *(C,D,E)*.

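As a minimal sketch (all names hypothetical), this state triple and its
invariant could be modelled as:

    from dataclasses import dataclass

    @dataclass
    class ChainsState:     # hypothetical model of the (C,D,E) triple
        c: int             # coupler: top of the chain linked from Genesis
        d: int             # dangling: bottom of the chain linked up to E
        e: int             # end: block number of some finalised block

        def __post_init__(self) -> None:
            assert 0 <= self.c <= self.d <= self.e   # 0 <= C <= D <= E

    pristine = ChainsState(0, 0, 0)   # a freshly initialised system
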
### Storage of header chains:
Some block numbers from the closed interval (including end points) *[0,C]* may
correspond to finalised blocks, e.g. the sub-interval *[0,**base**]* where
**base** is the block number of the ledger state. The headers for
*[0,**base**]* are stored in the persistent state database. The headers for the
half open interval *(**base**,C]* are always stored on the *beaconHeader*
column of the *KVT* database.

The block numbers from the interval *[D,E]* also reside on the *beaconHeader*
column of the *KVT* database table.

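Put together, every block number maps to at most one storage location. The
sketch below merely restates the intervals above in Python (function and
argument names are illustrative; the column name *beaconHeader* comes from
the text):

    def header_location(n: int, base: int, c: int, d: int, e: int) -> str:
        """Restates the storage intervals above; illustrative only."""
        assert 0 <= base <= c <= d <= e
        if n <= base:
            return "persistent state database"       # interval [0,base]
        if n <= c or d <= n <= e:
            return "beaconHeader column of the KVT"  # (base,C] and [D,E]
        if n < d:
            return "not stored (unprocessed gap)"    # open interval (C,D)
        return "beyond E"                            # not tracked yet
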
### Header linked chains initialisation:
Minimal layout on a pristine system

      0                                                          (2)
      C
      D
      E
      o--->

When first initialised, the header linked chains are set to *(0,0,0)*.

### Updating the header linked chains:
A header chain with a non-empty open interval *(C,D)* can be updated only by
increasing *C* or decreasing *D* by appending/prepending headers so that the
linked chain condition is not violated.

Only when the gap open interval *(C,D)* vanishes can the right end *E* be
increased to a larger target block number *T*, say. This block number will
typically be the **consensus head**. Then

* *C==D* because the open interval *(C,D)* is empty
* *C==E* because *C* is maximal (see the definition of *C* above)

and the header chains *(E,E,E)* (depicted in *(3)* below) can be set to
*(C,T,T)* as depicted in *(4)* below.

Layout before updating of *E*

                       C   (3)
                       D
      0                E                     T
      o----------------o---------------------o---->
      | <-- linked --> |

New layout with moving *D* and *E* to *T*

                                             D'   (4)
      0                C                     E'
      o----------------o---------------------o---->
      | <-- linked --> | <-- unprocessed --> |

with *D'=T* and *E'=T*.

Note that diagram *(3)* is a generalisation of *(2)*.

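The update rule can be restated as a tiny state transition. A sketch under
the definitions above (the function name is hypothetical):

    def move_target(c: int, d: int, e: int, t: int) -> tuple[int, int, int]:
        """Advance the right end E to a new target T, per the rule above."""
        if c == d == e and e < t:   # gap (C,D) vanished, chain fully linked
            return (c, t, t)        # (E,E,E) -> (C,T,T), reopening the gap
        return (c, d, e)            # otherwise keep filling the gap first
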
### Completing a header linked chain:
The header chain is *relatively complete* if it satisfies clause *(3)* above
for *0 < C*. It is *fully complete* if *E==T*. It should be obvious that the
latter condition is only temporary on a live system (as *T* is continuously
updated.)

If a *relatively complete* header chain is reached for the first time, the
execution layer can start running an importer in the background,
compiling/executing blocks (starting from block number *#1*.) So the ledger
database state will be updated incrementally.

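The two completeness notions translate into simple predicates, sketched here
with the triple and target from above (names illustrative):

    def relatively_complete(c: int, d: int, e: int) -> bool:
        """Clause (3) with 0 < C: no gap left, linked down to Genesis."""
        return 0 < c and c == d == e

    def fully_complete(e: int, t: int) -> bool:
        """E has caught up with the target T (only ever transient live)."""
        return e == t
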
Block chain import/execution
----------------------------

The following diagram with a partially imported/executed block chain amends
the layout *(1)*:

      0                  B       C                     D                E   (5)
      o------------------o-------o---------------------o----------------o-->
      | <-- imported --> |       |                     |                |
      | <------- linked -------> | <-- unprocessed --> | <-- linked --> |

where *B* is the **base**, i.e. the **base state** block number of the last
imported/executed block. It also refers to the global state block number of
the ledger database.

The headers corresponding to the half open interval *(B,C]* will be completed
by fetching the block bodies and then importing/executing them together with
the already cached headers.

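A rough sketch of that completion step (helper names hypothetical, bodies
assumed fetchable in one batch for brevity):

    def complete_interval(b: int, c: int, stash, fetch_bodies, persist_blocks):
        """Pair cached headers of (B,C] with fetched bodies, then import."""
        headers = [stash[n] for n in range(b + 1, c + 1)]  # cached headers
        bodies = fetch_bodies(headers)        # addressed by the header hashes
        persist_blocks(list(zip(headers, bodies)))  # validated on import
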
Running the sync process for *MainNet*
--------------------------------------

For syncing, a beacon node is needed that regularly informs via *RPC* of a
recently finalised block header.

The beacon node program used here is the *nimbus_beacon_node* binary from the
*nimbus-eth2* project (any other, e.g. the *light client*, will do.)

*Nimbus_beacon_node* is started as

      ./run-mainnet-beacon-node.sh \
         --web3-url=http://127.0.0.1:8551 \
         --jwt-secret=/tmp/jwtsecret

where *http://127.0.0.1:8551* is the URL of the sync process that receives the
finalised block header (here on the same physical machine) and `/tmp/jwtsecret`
is the shared secret file needed for mutual communication authentication.

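If no shared secret exists yet, a 32-byte hex-encoded token is the customary
format for the Engine API JWT secret. A minimal sketch for creating the file
used in the example above:

    import secrets

    # Write a fresh 32-byte hex token to the path used in the example above.
    with open("/tmp/jwtsecret", "w") as f:
        f.write(secrets.token_hex(32))
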
It will take a while for *nimbus_beacon_node* to catch up (see the
[Nimbus Guide](https://nimbus.guide/quick-start.html) for details.)

### Starting `nimbus` for syncing
As the syncing process is quite slow, it makes sense to pre-load the database
from an *Era1* archive (if available) before starting the real sync process.
The command for importing an *Era1* repository would be something like

      ./build/nimbus_execution_client import \
         --era1-dir:/path/to/main-era1/repo \
         ...

which will take its time for the full *MainNet* Era1 repository (but is way
faster than the beacon sync.)

On a system with memory considerably larger than *8GiB*, the *nimbus* binary
is started on the same machine where the beacon node runs, with the command

      ./build/nimbus_execution_client \
         --network=mainnet \
         --engine-api=true \
         --engine-api-port=8551 \
         --engine-api-ws=true \
         --jwt-secret=/tmp/jwtsecret \
         ...

Note that *--engine-api-port=8551* and *--jwt-secret=/tmp/jwtsecret* match
the corresponding options from the *nimbus-eth2* beacon node example above.

### Syncing on a low memory machine
On a system with *8GiB* of memory, the following additional options proved
useful for *nimbus* to reduce the memory footprint.

For the *Era1* pre-load (if any), the following extra options apply to
"*nimbus import*":

      --chunk-size=1024
      --debug-rocksdb-row-cache-size=512000
      --debug-rocksdb-block-cache-size=1500000

To start syncing, the following additional options apply to *nimbus*:

      --debug-beacon-chunk-size=384
      --debug-rocksdb-max-open-files=384
      --debug-rocksdb-write-buffer-size=50331648
      --debug-rocksdb-block-cache-size=1073741824
      --debug-rdb-key-cache-size=67108864
      --debug-rdb-vtx-cache-size=268435456

Also, to reduce the backlog for *nimbus-eth2* stored on disk, the following
changes might be considered. In the file
*nimbus-eth2/vendor/mainnet/metadata/config.yaml* change the following
settings

      MIN_EPOCHS_FOR_BLOCK_REQUESTS: 33024
      MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS: 4096

to

      MIN_EPOCHS_FOR_BLOCK_REQUESTS: 8
      MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS: 8

Caveat: These changes are not useful when running *nimbus_beacon_node* as a
production system.

Metrics
-------

The following metrics are defined in *worker/update/metrics.nim* and will
be available if *nimbus* is compiled with the additional make flags
*NIMFLAGS="-d:metrics \-\-threads:on"*:

| *Variable*                 | *Logic type* | *Short description*                    |
|:---------------------------|:------------:|:---------------------------------------|
|                            |              |                                        |
| beacon_base                | block height | **B**, *increasing*                    |
| beacon_coupler             | block height | **C**, *increasing*                    |
| beacon_dangling            | block height | **D**                                  |
| beacon_end                 | block height | **E**, *increasing*                    |
| beacon_target              | block height | **T**, *increasing*                    |
|                            |              |                                        |
| beacon_header_lists_staged | size         | # of staged header list records        |
| beacon_headers_unprocessed | size         | # of accumulated header block numbers  |
| beacon_block_lists_staged  | size         | # of staged block list records         |
| beacon_blocks_unprocessed  | size         | # of accumulated body block numbers    |
|                            |              |                                        |
| beacon_buddies             | size         | # of peers working concurrently        |

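If the build also serves these counters over HTTP in the usual Prometheus
text format (whether and where it does depends on your build and run-time
options; the endpoint below is a hypothetical example), the *beacon_* gauges
can be watched with a few lines of Python:

    import urllib.request

    # Hypothetical endpoint -- adjust to wherever your build serves metrics.
    URL = "http://127.0.0.1:8008/metrics"

    with urllib.request.urlopen(URL) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("beacon_"):
                print(line)    # e.g. "beacon_base 1234567.0"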