* Update comments and colouring of example metrics display
* Update comment for import/download serialisation flag
details:
When importing starts while peers are actively downloading, the
system tends to lose download peers, most probably due to high
system activity. As a result, there are extra waiting delays for
finding and connecting to new download peers.
For this reason, importing and downloading are serialised:
downloading does not take place while importing.
* Update comment on import start condition
details:
When importing starts while peers are actively downloading, the
system tends to lose download peers, most probably due to high
system activity. As a result, there are extra waiting delays for
finding and connecting to new download peers.
For this reason, importing does not start before the staged blocks
queue is filled up. The ramp-up time for filling the queue is only a
fraction of the potential waiting time incurred when peers are lost.
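Roughly, the gating behaviour described in the two notes above amounts to
something like the following. This is a minimal sketch only; `SyncState`,
`stagedQueueLen`, `stagedQueueHwm` and `importInProgress` are hypothetical
names, not actual syncer identifiers.
```nim
type SyncState = object
  stagedQueueLen: int     # blocks currently staged for import
  stagedQueueHwm: int     # fill level required before importing starts
  importInProgress: bool  # true while the importer is running

proc mayStartImport(s: SyncState): bool =
  ## Importing only starts once the staged blocks queue is filled up.
  s.stagedQueueLen >= s.stagedQueueHwm

proc mayDownload(s: SyncState): bool =
  ## Importing and downloading are serialised: no fetching while importing.
  not s.importInProgress
```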
* Update comment on header or block fetch conversation via eth/XX
* Increase staged blocks queue
why:
Better overall throughput at the cost of slightly increased memory usage
* Reduce header queue limits
why:
In regular circumstances, the header queue has only a few records most
of the time. Longer queues appear with unwieldy peers (bogus data,
timeouts, etc.) when they happen to lock the lowest record, thereby
preventing the queue from being temporarily serialised.
* Remove obsolete header cache
why:
It was a fallback for the case where the DB table was inaccessible
before the `FC` module reorg.
* Add the number of unused connected peers to the metrics
* Update docu, add Grafana example
why:
Provides useful settings, e.g. for memory debugging
* Re-calibrate blocks queue for import
why:
The old queue setup provided a staging area which was much too large,
consuming too much idle memory. Also, the command-line re-calibration
for debugging was much too complicated.
The naming for the old setup was wrong as well: there is no max queue
size. Rather, there is a HWM (high-water mark) at which filling the
queue stops.
The currently tested size allows for 1.5k blocks on the queue.
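For illustration, a hedged sketch of the HWM semantics described above;
`blocksQueueHwm` and `wantMoreBlocks` are hypothetical names, with the
constant taken from the 1.5k figure mentioned here:
```nim
const blocksQueueHwm = 1536  # roughly the 1.5k blocks mentioned above

proc wantMoreBlocks(queueLen: int): bool =
  ## Filling stops once the high-water mark is reached; there is no hard
  ## maximum, so the queue may briefly exceed the HWM with work already
  ## in flight.
  queueLen < blocksQueueHwm
```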
* Rename hidden command-line option for debug/re-calibrating blocks queue
* Simplify txFrame protocol, improve persist performance
To prepare forked-layers for further surgery to avoid the nesting tax,
the commit/rollback style of interaction must first be adjusted, since
it does not provide a point in time where the frame is "done" and goes
from being actively written to, to simply waiting to be persisted or
discarded.
A collateral benefit of this change is that the scheme removes some
complexity from the process by moving the "last saved block number" into
txFrame along with the actual state changes, thus reducing the risk that
they go "out of sync" and removing the "commit" consolidation
responsibility from ForkedChain.
* commit/rollback become checkpoint/dispose - since these are pure
in-memory constructs, there's less error handling and there's no real
"rollback" involved - dispose better conveys that the instance cannot be
used again and that we can more aggressively clear the memory it uses
(see the sketch after this list)
* simplified block number handling: the block number becomes part of
txFrame, just like the data that it references
* avoid the reparenting step by replacing the base instead of keeping a
singleton instance
* persist builds the set of changes from the bottom, which helps avoid
moving changes in the top layers through each ancestor level of the
frame stack
* when using an in-memory database in tests, allow the instance to be
passed around to enable testing persist and reload logic
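A minimal sketch of the checkpoint/dispose lifecycle from the first bullet
above; the `TxFrame` fields and the proc bodies here are illustrative
assumptions, not the actual implementation:
```nim
type TxFrame = ref object
  blockNumber: uint64  # the "last saved block number" now travels with the frame
  changes: seq[(seq[byte], seq[byte])]  # pending key/value changes
  done: bool           # set once the frame has been checkpointed

proc checkpoint(frame: TxFrame, blockNumber: uint64) =
  ## Mark the frame as "done": it is no longer written to and only waits
  ## to be persisted or disposed.
  frame.blockNumber = blockNumber
  frame.done = true

proc dispose(frame: TxFrame) =
  ## The frame will not be used again, so its memory can be cleared
  ## aggressively.
  frame.changes.setLen(0)
  frame.done = true
```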
* Use unittest2 test runner
Since upgrading to unittest2, the test runner prints the command line to
re-run a failed test - this however relies on actually using the
unittest2 command line runner.
Previously, test files were assigned numbers - with the unittest2
runner, tests are run using suite/category names instead, like so:
```
# run the Genesis suite
build/all_tests "Genesis::``
# run all tests with "blsMapG1" in the name
build/all_tests "blsMapG1*"
# run tests verbosely
build/all_tests -v
```
A reasonable follow-up here would be to review the suite names to make
them easier to run :)
* lint
* easier-to-compare test order
* bump unittest2 (also the repo)
* add custom `hash` for `RootedVertexID`
There's no benefit in hashing `root` since `vid` is already unique and the
"default" hash is not free - this trivially brings a small perf boost to
one of the key lookup tables in aristo_layers.
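A sketch of the idea, assuming `RootedVertexID` is a `(root, vid)` pair of
64-bit IDs; the type definitions below are simplified stand-ins for the
actual aristo types:
```nim
import std/hashes

type
  VertexID = distinct uint64
  RootedVertexID = tuple[root, vid: VertexID]

proc hash(rvid: RootedVertexID): Hash =
  ## Hash only `vid`: it is already unique, so mixing in `root` would cost
  ## time without improving the distribution.
  hash(uint64(rvid.vid))
```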
* lint
* Fix fringe case where the final `checkpoint()` must not be applied
why:
If there are no `era` or `era1` files that can be imported from, the
internal state will not be in sync with the DB state. This would lead
to overwriting the saved state block number with a bogus value. As a
consequence, the database becomes unusable for the next process, which
will eventually fail with a state root mismatch.
* Update comment
By introducing the "shared rocksdb instance" concept to the backend, we
can remove the "piggybacking" mode, thus reducing the complexity of
database initialisation and opening the possibility of extending how
write batching works across kvt/aristo.
The change makes explicit the shared state that was previously
hiding in closures and provides the first step towards simplifying the
"commit/persist" interface of coredb, preparing it for optimizations to
reduce the "layering tax" that `forked-layers` introduced.
* Update Nimbus EVM code to use the latest nim-evmc which is now on EVMC v12.1.0
* Fix copyright.
* Fix tests.
* Update to use FkLatest.
* Fix copyright and update test helper.
* renamed nimbus folder to execution_chain
* Renamed "nimbus" references to "execution_chain"
* fixed wrongly changed http reference
* delete snap types file given that it was deleted before this PR merge
* missing 'execution_chain' replacement
---------
Co-authored-by: pmmiranda <pedro.miranda@nimbus.team>