* Nimbus folder environment update
details:
* Integrated `CoreDbRef` for the sources in the `nimbus` sub-folder.
* The `nimbus` program does not compile yet as it needs the updates
in the parallel `stateless` sub-folder.
* Stateless environment update
details:
* Integrated `CoreDbRef` for the sources in the `stateless` sub-folder.
* The `nimbus` program compiles now.
* Premix environment update
details:
* Integrated `CoreDbRef` for the sources in the `premix` sub-folder.
* Fluffy environment update
details:
* Integrated `CoreDbRef` for the sources in the `fluffy` sub-folder.
* Tools environment update
details:
* Integrated `CoreDbRef` for the sources in the `tools` sub-folder.
* Nodocker environment update
details:
* Integrated `CoreDbRef` for the sources in the
`hive_integration/nodocker` sub-folder.
* Tests environment update
details:
* Integrated `CoreDbRef` for the sources in the `tests` sub-folder.
* The unit tests compile and run cleanly now.
* Generalise `CoreDbRef` to any `select_backend` supported database
why:
    The generalisation was previously missed while working around a compiler
    oddity that was tied to rocksdb for testing.
* Suppress compiler warning for `newChainDB()`
why:
    A warning was reported for this function, which must be wrapped so that
    any `CatchableError` is re-raised as a `Defect`.
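  example:
    A minimal sketch of the wrapping pattern, assuming a hypothetical
    signature (the real `newChainDB()` differs): any `CatchableError`
    raised while opening the database is re-raised as a `Defect`.
    ```nim
    type ChainDB = ref object
      path: string

    proc newChainDB(path: string): ChainDB =
      ## Wrapper sketch: callers never see a `CatchableError`.
      try:
        # ... open the backend database here, which may raise ...
        result = ChainDB(path: path)
      except CatchableError as e:
        raise newException(Defect, "newChainDB(): " & e.msg)
    ```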
* Split off persistent `CoreDbRef` constructor into separate file
why:
    This allows compiling a memory-only database version without linking
the backend library.
* Use memory `CoreDbRef` database by default
detail:
    Persistent DB constructor needs to import `db/core_db/persistent`
why:
Most tests use memory DB anyway. This avoids linking `-lrocksdb` or
any other backend by default.
* fix `toLegacyBackend()` availability check
why:
    It got garbled after the memory/persistent split.
* Clarify raw access to MPT for snap sync handler
why:
Logically, `kvt` is not the raw access for the hexary trie (although
this holds for the legacy database)
* Unit tests update, code cosmetics
* Fix segfault with zombie handling
why:
In order to save memory, the data records of zombie entries are removed
and only the key (aka peer node) is kept. Consequently, logging these
zombies can only be done by the key.
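  example:
    A rough sketch of the idea (types and names are made up for
    illustration): a zombie keeps its key but loses its data record, so
    logging falls back to the key.
    ```nim
    import std/[options, tables]

    type
      PeerKey = string          # stand-in for the peer/node identifier
      PeerData = object         # stand-in for the full data record
        score: int

    var peers: Table[PeerKey, Option[PeerData]]

    proc markZombie(key: PeerKey) =
      ## Save memory: drop the data record, keep only the key.
      peers[key] = none(PeerData)

    proc logPeer(key: PeerKey) =
      ## A zombie has no data record left, so it can only be logged by key.
      if key in peers and peers[key].isSome:
        echo "peer ", key, " score=", peers[key].get.score
      else:
        echo "zombie ", key
    ```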
* Allow to accept V2 payload without `shanghaiTime` set while syncing
why:
    Currently, `shanghaiTime` is missing (at least) while snap syncing. So
beacon node headers can be processed regardless. Normal (aka strict)
processing will be automatically restored when leaving snap sync mode.
* Extract RocksDB timing tests from snap unit tests as separate module
why:
Declutter, make space for more snap related unit tests.
* Renamed `undumpNextGroup()` => `undumpBlocks()`
why:
    The source file is called `undump_blocks.nim`, which should be more or
    less in sync with the method name(s).
* Implement snap/1 server method `getByteCodes()`
* Implement snap/1 client method `getByteCodes()`
* Implement faculty for handling contract code fetching via snap/1
* Provide persistent storage for contract code records
* Implement contract code snap sync fetch & store
* Code massage, cosmetics
* Unit tests for verifying snap sync snapshot dump
details:
Use `undump_kvp.dumpAllDb()` to dump any database.
* Update sync scheduler pool mode
why:
    The pool mode allows looping over active peers one after another. This
    is ideal for soft re-starting peers. As this is a two-tier experience
    (start/stop, setup/release), the loop must be run twice. This is
    controlled by a more rigid re-definition of how to use the `poolMode`
    flag.
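  example:
    An illustration only (not the scheduler's actual API): with `poolMode`
    set, the same loop over active peers is run twice, once for the
    stop/release tier and once for the start/setup tier.
    ```nim
    type Buddy = object
      name: string

    proc poolPass(peers: seq[Buddy]; release: bool) =
      ## One pass over all active peers.
      for peer in peers:
        if release:
          echo "stop/release ", peer.name
        else:
          echo "start/setup ", peer.name

    proc runPoolMode(peers: seq[Buddy]) =
      ## Soft re-start: the two-tier experience needs two passes.
      poolPass(peers, release = true)
      poolPass(peers, release = false)
    ```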
* Mitigate RLP serialiser deficiency
why:
    Currently, serialising the `BlockBody` is not convertible and needs to
    be checked in the `eth` module. For now, a local fix for the wire
    protocol applies. The unit tests will stay (even after this local
    solution has been removed.)
* Code cosmetics and massage
details:
Main part is `types.toStr()` as a unified function for logging block
numbers.
* Allow to use a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history will currently only affect the score
values which were derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start of history facility, this allows to proceed
with full syncing once snap has finished.
details:
    Snap db storage was switched from sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
    For debugging on smaller databases only.
* Implement snap -> full sync switch
* Redesign snap1 message GetTrieNodes argument prototypes
why:
    A list of sub-objects `seq[SnapTriePath]` is more intuitive to work with
    than an opaque definition `seq[seq[Blob]]` because the inner
    `SnapTriePath` object has a dedicated structure (for how to
    interpret `seq[Blob]`.)
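  example:
    A hedged sketch of the intent (field names are illustrative, not the
    actual nimbus definitions): one path group addresses an account and,
    optionally, slots of that account's storage trie, which is the structure
    hidden inside each inner `seq[Blob]`.
    ```nim
    type
      Blob = seq[byte]

      SnapTriePath = object
        accPath: Blob          # path into the accounts trie
        slotPaths: seq[Blob]   # paths into that account's storage trie

    proc toWire(p: SnapTriePath): seq[Blob] =
      ## Flatten into the opaque wire form: account path first, then the
      ## storage slot paths.
      @[p.accPath] & p.slotPaths
    ```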
* Collect some public constants into `constants.nim` file
* Reorg `hexary_paths.nim`
why:
+ Collecting nodes following a partial path properly ending at an
extension node failed to collect this last node.
    + Merged the node collecting algorithms for persistent and in-memory
      into a single generic function `hexary_paths.rootPathExtend()`
info:
Extracted common tasks to `hexary_nodes_helper.nim`
* Implement `StorageRanges` message handler for snap/1 protocol
* Add state root to node steps path register `RPath` or `XPath`
why:
Typically, the first node in the path register is the state root. There
are occasions, when the path register is empty (i.e. there are no node
references) which typically applies to a zero node key.
    In order to find the next node key greater than zero, the state root is
    needed, which is now part of the `RPath` or `XPath` data types.
* Extracted hexary tree debugging functions into separate files
* Update empty path fringe case for left/right node neighbour
why:
When starting at zero, the node steps path register would be empty. So
    will any path that is before the first non-zero link of a state root (if
it is a `Branch` node.)
The `hexaryNearbyRight()` or `hexaryNearbyLeft()` function required a
non-zero node steps path register. Now the first node is to be advanced
starting at the first state root link if necessary.
* Simplify/reorg neighbour node finder
why:
    There was too much code repetition for the cases
* persistent or in-memory database
* left or right move
details:
    Most algorithms apply to persistent and in-memory databases alike. Using
    templates/generic functions, most of these algorithms can be stated
    in a unified way.
* Update storage slots snap/1 handler
details:
Minor changes to be more debugging friendly.
* Fix detection of full database for snap sync
* Docu: Snap sync test & debugging scenario
* Clean up some function prototypes
why:
Simplify polymorphic prototype variances for easier maintenance.
* Fix fringe condition crash when importing bogus RLP node
why:
Accessing non-list RLP entry as a list causes `Defect`
* Fix left boundary proof at range extractor
why:
Was insufficient. The main problem was that there was no unit test for
the validity of the generated left boundary.
* Handle incomplete left boundary proofs early
why:
    Attempting to do it later leads to overly complex code in order to
    prevent looping when the same peer repeatedly sends the same incomplete
    proof. By contrast, gaps in the leaf sequence can be handled gracefully
    by registering the gaps.
* Implement a manual pivot setup mechanism for snap sync
why:
For a test scenario it is convenient to set the pivot to something
lower than the beacon header from the consensus layer. This does not
    need to rely on any RPC mechanism.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Fix calculation error
why:
    Prevent calculating the square root of a negative number.
* Fix locked database file annoyance with unit tests on Windows
why:
    Old files from the previous session need to be cleaned up first, as
    files remain locked despite closing the database.
* Fix initialisation order
detail:
Apparently this has no real effect as the ticker is only initialised
here but started later.
    This possible bug has been around for a while and did not show with the
    previous compiler and libraries.
* Better naming of data fields for sync descriptors
details:
* BuddyRef[S,W]: buddy.data -> buddy.only
* CtxRef[S]: ctx.data -> ctx.pool
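  example:
    A rough sketch of the descriptor layout after the rename (most fields
    elided; the actual nimbus definitions carry more):
    ```nim
    type
      CtxRef[S] = ref object
        pool*: S               # shared state for all workers (was `data`)

      BuddyRef[S,W] = ref object
        ctx*: CtxRef[S]        # link back to the shared context
        only*: W               # per-peer worker state (was `data`)
    ```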
* Redefine `seq[Blob]` => `seq[SnapProof]` for `snap/1` protocol
why:
Proof nodes are traded as `Blob` type items rather than Nim objects. So
    the RLP transcoder must not extra-wrap proofs, which are of type
    `seq[Blob]`. Without custom encoding one would produce a
`list(blob(item1), blob(item2) ..)` instead of `list(item1, item2 ..)`.
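  example:
    A self-contained illustration of the encoding issue (plain Nim, no RLP
    library; short-form RLP only): a proof node already is a serialised
    `Blob`, so wrapping it again as a blob adds a second length prefix.
    ```nim
    type Blob = seq[byte]

    proc rlpBlob(data: Blob): Blob =
      ## RLP blob encoding, short form only (payload < 56 bytes; the
      ## single-byte special case is ignored for brevity).
      doAssert data.len < 56
      @[byte(0x80 + data.len)] & data

    proc rlpList(encodedItems: seq[Blob]): Blob =
      ## RLP list encoding from already-encoded items, short form only.
      var payload: Blob
      for item in encodedItems:
        payload.add item
      doAssert payload.len < 56
      @[byte(0xc0 + payload.len)] & payload

    let proofNode: Blob = @[byte 0xc2, 0x01, 0x02]  # already RLP-encoded

    let wanted = rlpList(@[proofNode])           # list(item1, ..)
    let wrong  = rlpList(@[rlpBlob(proofNode)])  # list(blob(item1), ..)
    echo wanted.len, " bytes vs ", wrong.len, " bytes"
    ```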
* Limit leaf extractor by RLP size rather than number of items
why:
To be used serving `snap/1` requests, the result of function
`hexaryRangeLeafsProof()` is limited by the maximal space
    needed to serialise the result, which will be part of the
    `snap/1` response.
* Let the range extractor `hexaryRangeLeafsProof()` return RLP list sizes
why:
    When collecting accounts, the size of the accounts list when encoded
as RLP is continually updated. So the summed up value is available
anyway. For the proof nodes list, there are not many (~ 10) so summing
up is not expensive here.
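  example:
    A simplified sketch of the accumulation idea (hypothetical names, and
    the RLP list header overhead is ignored for brevity): the summed-up
    encoded size is tracked while collecting, so it is available anyway.
    ```nim
    type Blob = seq[byte]

    proc collectUpTo(encodedLeafs: seq[Blob]; replySizeMax: int): seq[Blob] =
      ## Keep adding encoded leafs while the running total fits the limit.
      var total = 0
      for leaf in encodedLeafs:
        if replySizeMax < total + leaf.len:
          break
        total += leaf.len
        result.add leaf
    ```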
* Removed some Windows specific unit test annoyances
details:
+ Short put()/get() cycles on persistent database have a race condition
with vendor rocksdb. On a specific (and slow) qemu/win7 a 50ms `sleep()`
in between will mostly do the job (i.e. unless heavy CPU load.) This
issue was not observed on github/ci.
+ Removed annoyances when qemu/Win7 keeps the rocksdb database files
locked even after closing the db. The problem is solved by strictly
using fresh names for each test. No assumption made to be able to
properly clean up. This issue was not observed on github/ci.
* Silence some compiler gossip -- part 7, misc/non(sync or graphql)
details:
Adding some missing exception annotation
* Unit tests to verify calculations based on hard coded constants
why:
Sizes of RLP encoded objects are available at run time only.
* Changed argument order for `hexaryRangeLeafsProof()` prototype
why:
Better to read as a stand-alone function (arguments were optimised
for functional pipelines)
* Run sub-range proof tests for extracted ranges
* Cosmetics
details:
+ Update doc generator
+ Fix key type representation in `hexary_desc` for debugging
+ Redefine `isImportOk()` as template for better `check()` line reporting
* Fix fringe condition when interpolating Merkle-Patricia tries
details:
Small change with profound effect fixing some pathological condition
    that haunted the unit test set on large data sets. There is still one
condition left which might well be due to an incomplete data set.
* Unit test proof nodes for node range extractor
* Unit tests to run on full extraction set
why:
    Left over from troubleshooting; the range length was only 5.
* Update comments and test noise
* Fix boundary proofs
why:
    They were neither used in production nor unit tested. For production,
    other methods apply to test leaf range integrity directly, based on the
    proof nodes.
* Added `hexary_range()`: interval range + proof extractor
details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)
todo:
Need to verify completeness of proof nodes
* Reduce some nim 1.6 compiler noise
* Stop unit test gossip for ci tests
* Extracted RocksDB timing unit tests into separate file
why:
make space for more in main module :)
* Extracted `inspectionRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted `storagesRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted pivot checkpoint store/retrieval unit tests into separate file
why:
make space for more in main module :)
* Extract helper functions into separate source file
* Extracted account import unit tests into separate file
why:
make space for more in main module :)
* Rename `test_decompose()` => `test_NodeRangeDecompose()`
why:
There will be more functions with `test_NodeRange` prefix.
* Rename and update dismantle => hexaryEnvelopeDecompose()
why:
    + As for naming, a positive connotation is preferred
+ The unit tests were really insufficient
    + The function result was wrong on a few boundary conditions
detail:
+ Extracted the function from `hexary_paths.nim` and re-implemented
it together with other envelope functions => `hexary_envelope.nim`
+ Re-wrote docu for `hexaryEnvelopeDecompose()`
* Relaxed right condition for `hexaryEnvelopeDecompose()` range argument
  why:
Previously, the right point of the argument interval had to be a path
to an allocated leaf node. While this is typically a given for accounts,
it is easier to require an arbitrary range of paths (or keys) with
the requirement of a `boundary proof` for left and right (i.e. enough
nodes in the database to find the end points.)
also:
Bug fixes for related functions (typos, missing conditions etc.)
* Add missing unit tests include file
* Add quick hexary trie inspector, called `dismantle()`
why:
+ Full hexary trie perusal is slow if running down leaf nodes
    + For a known range of leaf nodes, work out the UInt256-complement of
partial sub-trie paths (for existing nodes). The result should cover
no (or only a few) sub-tries with leaf nodes.
* Extract common healing methods => `sub_tries_helper.nim`
details:
Also apply quick hexary trie inspection tool `dismantle()`
Replace `inspectAccountsTrie()` wrapper by `hexaryInspectTrie()`
* Re-arrange task dispatching in main peer worker
* Refactor accounts and storage slots downloaders
* Rename `HexaryDbError` => `HexaryError`
* Stop negotiating pivot if peer repeatedly replies w/ useless answers
why:
There is some fringe condition where a peer replies with legit but
    useless empty headers repeatedly. This goes on until somebody stops.
We stop now.
* Rename `missingNodes` => `sickSubTries`
why:
These (probably missing) nodes represent in reality fully or partially
missing sub-tries. The top nodes may even exist, e.g. as a shallow
sub-trie.
also:
    Keep track of account healing on/off by bool variable `accountsHealing`
controlled in `pivot_helper.execSnapSyncAction()`
* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)
also:
+ Trigger the recovery (or similar) process from inside the global peer
worker initialisation `worker.setup()` and not by the `snap.start()`
function.
    + Have `runPool()` return a `bool` code to indicate an early stop to the
scheduler.
* Can import partial snap sync checkpoint at start
details:
+ Modified what is stored with the checkpoint in `snapdb_pivot.nim`
+ Will be loaded within `runDaemon()` if activated
* Forgot to import total coverage range
why:
Only the top (or latest) pivot needs coverage but the total coverage
is the list of all ranges for all pivots -- simply forgotten.
* Piecemeal trie inspection
details:
Trie inspection will stop after maximum number of nodes visited.
The inspection can be resumed using the returned state from the
last session.
why:
    This feature allows for task switching between `piecemeal` sessions.
* Extract pivot helper code from `worker.nim` => `pivot_helper.nim`
* Accounts import will now return dangling paths from `proof` nodes
why:
With proper bookkeeping, this can be used to start healing without
    analysing the probably full trie.
* Update `unprocessed` account range handling
why:
    More generally, the API of a pair of unprocessed interval sets favours
    the first set; only when that one is exhausted does the second set come
    into play.
    This was implemented in an unfortunate way, which caused the ranges to
    be unnecessarily fragmented. Now the number of range intervals typically
    remains in the lower single digits.
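  example:
    A much simplified sketch of the intended behaviour (plain seqs instead
    of interval sets, hypothetical names): fetch from the first set and
    fall back to the second one only when the first is exhausted.
    ```nim
    type
      NodeRange = tuple[minPt, maxPt: uint64]   # stand-in for node tag ranges
      TodoRanges = array[2, seq[NodeRange]]

    proc fetchNext(todo: var TodoRanges): NodeRange =
      ## Prefer the first set; use the second only when the first is empty.
      for n in 0 .. 1:
        if 0 < todo[n].len:
          return todo[n].pop
      (0'u64, 0'u64)                            # nothing left to do
    ```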
* Save sync state after end of downloading some accounts
details:
restore/resume to be implemented later
* Update log ticker, using time interval rather than ticker count
why:
Counting and logging ticker occurrences is inherently imprecise. So
time intervals are used.
* Use separate storage tables for snap sync data
* Left boundary proof update
why:
Was not properly implemented, yet.
* Capture pivot in peer worker (aka buddy) tasks
why:
The pivot environment is linked to the `buddy` descriptor. While
there is a task switch, the pivot may change. So it is passed on as
function argument `env` rather than retrieved from the buddy at
the start of a sub-function.
* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`
* Remove obsolete account range returned from `GetAccountRange` message
why:
    The handler returned the wrong right endpoint of the range. This range
    was provided for convenience only.
* Prioritise storage slots if the queue becomes large
why:
Currently, accounts processing is prioritised up until all accounts
are downloaded. The new prioritisation has two thresholds for
+ start processing storage slots with a new worker
+ stop account processing and switch to storage processing
also:
Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`
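  example:
    Illustration only (the constants and names are made up): a small helper
    deciding what to work on, based on the storage slots queue length and
    the two thresholds described above.
    ```nim
    const
      storQuSlotsLowWater  = 100   # start a dedicated storage-slots worker
      storQuSlotsHighWater = 500   # stop accounts, drain the storage queue

    type FetchMode = enum
      FetchAccounts, FetchAccountsAndStorage, FetchStorageOnly

    proc chooseMode(storageQueueLen: int): FetchMode =
      if storQuSlotsHighWater <= storageQueueLen:
        FetchStorageOnly
      elif storQuSlotsLowWater <= storageQueueLen:
        FetchAccountsAndStorage
      else:
        FetchAccounts
    ```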
* Generalise left boundary proof for accounts or storage slots.
why:
    A detailed explanation of how this works is documented with
    `snapdb_accounts.importAccounts()`.
    Instead of enforcing a left boundary proof (which is still the default),
    the importer functions return a list of `holes` (aka node paths) found in
    the argument ranges of leaf nodes. This in turn is used by the
    bookkeeping software for data download.
* Forgot to pass on variable in function wrapper
also:
+ Start healing not before 99% accounts covered (previously 95%)
+ Logging updated/prettified
* Re-arrange fetching storage slots in batch module
  why:
    Previously, fetching partial slot ranges first had a chance of
    terminating the worker peer (due to a network error) while there were
many inheritable storage slots on the queue.
Now, inheritance is checked first, then full slot ranges and finally
partial ranges.
* Update logging
* Bundled node information for healing into single object `NodeSpecs`
why:
Previously, partial paths and node keys were kept in separate variables.
This approach was error prone due to copying/reassembling function
argument objects.
As all partial paths, keys, and node data types are more or less handled
as `Blob`s over the network (using Eth/6x, or Snap/1) it makes sense to
hold these `Blob`s as named field in a single object (even if not all
fields are active for the current purpose.)
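  example:
    A hedged sketch of such a bundle (field names are illustrative, not the
    actual nimbus definition): all parts are kept together in one object,
    even if not every field is used for a given task.
    ```nim
    type
      Blob = seq[byte]
      NodeKey = array[32, byte]

      NodeSpecs = object
        partialPath: Blob   # hex-encoded partial path leading to the node
        nodeKey: NodeKey    # hash key of the node
        data: Blob          # RLP-encoded node data, if available
    ```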
* For good housekeeping, using `NodeKey` type only for account keys
why:
previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
state or storage root keys use the `Hash256` type.
* Always accept latest pivot (and not a slightly older one)
  why:
For testing it was tried to use a slightly older pivot state root than
    available. Some anecdotal tests seemed to suggest an advantage in that
more peers are willing to serve on that older pivot. But this could not
be confirmed in subsequent tests (still anecdotal, though.)
As a side note, the distance of the latest pivot to its predecessor is
at least 128 (or whatever the constant `minPivotBlockDistance` is
assigned to.)
* Reshuffle name components for some file and function names
why:
Clarifies purpose:
"storages" becomes: "storage slots"
"store" becomes: "range fetch"
* Stash away currently unused modules in sub-folder named "notused"
* Re-model persistent database access
why:
    Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
key mapping). So get/put and bulk functions now use the definitions
in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
* Fixes: missing return code, typo, redundant imports etc.
* Remove obsolete debugging directives from `worker_desc` module
* Correct failing unit tests for storage slots trie inspection
why:
Some pathological cases for the extended tests do not produce any
hexary trie data. This is rightly detected by the trie inspection
    and the result checks needed to be adjusted.
* For snap sync, publish `EthWireRef` in sync descriptor
why:
currently used for noise control
* Detect and reuse existing storage slots
* Provide healing module for storage slots
* Update statistic ticker (adding range factor for unprocessed storage)
* Complete merge function for work item ranges
  why:
    Merging an interval into an existing partial item was missing.
* Show average storage queue lengths in ticker
  detail:
    The previous attempt showed average completeness, which did not tell
    much.
* Correct the meaning of the storage counter (per pivot)
detail:
    It is the number of accounts that have storage saved.
* Rename `LeafRange` => `NodeTagRange`
* Replacing storage slot partition point by interval
why:
    The partition point only allows describing slot ranges `[point,high(Uint256)]`
for fetching interval slot ranges. This has been generalised for any
interval.
* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`
why:
Generalised healing status for accounts, and later for storage slots.
* Improve accounts healing loop
* Split `snap_db` into accounts and storage modules
why:
It is cleaner to have separate session descriptors for accounts and
storage slots (based on a common base descriptor.)
Also, persistent storage handling might be changed in future which
requires the storage slot implementation disentangled from the accounts
handling.
* Re-model worker queues for storage slots
why:
There is a dynamic list of storage sub-tries, each one has to be
    treated similarly to the accounts database. This applies to slot
    interval downloads as well as to healing.
* Compress some return value report lists for snapdb methods
why:
    No need to report all handling details for work items that are filtered
out and discarded, anyway.
* Remove inner loop frame from healing function
why:
The healing function runs as a loop body already.
* Split fetch accounts into sub-modules
details:
    There will be separate modules for accounts snapshot, storage snapshot,
and healing for either.
* Allow to rebase pivot before negotiated header
why:
    Peers do not seem to have many snapshots available. By setting the
    pivot block header back slightly, the chances might be higher to find
    more peers able to serve this pivot. Experiments on mainnet showed that
    when setting it back too much (tested with 1024), the chances of finding
    matching snapshot peers seem to decrease.
* Add accounts healing
* Update variable/field naming in `worker_desc` for readability
* Handle leaf nodes in accounts healing
why:
There is no need to fetch accounts when they had been added by the
healing process. On the flip side, these accounts must be checked for
storage data and the batch queue updated, accordingly.
* Reorganising accounts hash ranges batch queue
why:
The aim is to formally cover as many accounts as possible for different
pivot state root environments. Formerly, this was tried by starting the
accounts batch queue at a random value for each pivot (and wrapping
around.)
Now, each pivot environment starts with an interval set mutually
disjunct from any interval set retrieved with other pivot state roots.
also:
Stop fishing for more pivots in `worker` if 100% download is reached
* Reorganise/update accounts healing
why:
    Error handling was wrong, and the (mathematical complexity of the)
    whole process could be better managed.
details:
Much of the algorithm is now documented at the top of the file
`heal_accounts.nim`
* Added inspect module
why:
Find dangling references for trie healing support.
details:
+ This patch set provides only the inspect module and some unit tests.
+ There are also extensive unit tests which need bulk data from the
`nimbus-eth1-blob` module.
* Alternative pivot finder
why:
    Attempt to be faster on start up. Also trying to decouple the pivot finder
somehow by providing different mechanisms (this one runs in `single`
mode.)
* Use inspect module for healing
details:
+ After some progress with account and storage data, the inspect facility
is used to find dangling links in the database to be filled nose-wise.
+ This is a crude attempt to cobble together functional elements. The
set up needs to be honed.
* fix scheduler to avoid starting dead peers
why:
Some peers drop out while in `sleepAsync()`. So extra `if` clauses
make sure that this event is detected early.
* Bug fixes causing crashes
details:
+ prettify.toPC():
int/intToStr() numeric range over/underflow
+ hexary_inspect.hexaryInspectPath():
take care of half initialised step with branch but missing index into
branch array
* improve handling of dropped peers in alternative pivot finder
why:
Strange things may happen while querying data from the network.
Additional checks make sure that the state of other peers is updated
immediately.
* Update trace messages
* reorganise snap fetch & store schedule
* Re-implemented `hexaryFollow()` in a more general fashion
details:
+ New name for re-implemented `hexaryFollow()` is `hexaryPath()`
+ Renamed `rTreeFollow()` as `hexaryPath()`
why:
Returning similarly organised structures, the results of the
`hexaryPath()` functions become comparable when running over
the persistent and the in-memory databases.
* Added traversal functionality for persistent ChainDB
* Using `Account` values as re-packed Blob
* Repack samples as compressed data files
* Produce test data
details:
+ Can force pivot state root switch after minimal coverage.
+ For emulating certain network behaviour, downloading accounts stops for
a particular pivot state root if 30% (some static number) coverage is
reached. Following accounts are downloaded for a later pivot state root.
* Bump nim-stew
why:
Need fixed interval set
* Keep track of accumulated account ranges over all state roots
* Added comments and explanations to unit tests
* typo
* Extracted functionality into sub-modules for maintainability
* Setting SST bulk load as default in `accounts_db`
details:
+ currently, the same data are stored via rocksdb if available, and
the same via embedded `storage_type` with (non-standard) prefix 200
for time comparisons
+ fallback to normal `put()` unless rocksdb is accessible
* Provided common scheduler API, applied to `full` sync
* Use hexary trie as storage for proofs_db records
also:
+ Store metadata with account for keeping track of account state
+ add iterator over accounts
* Common scheduler API applied to `snap` sync
* Prepare for accounts bulk import
details:
    + Added some ad-hoc checks for proving accounts data received via the
      snap/1 protocol (will be replaced by a proper database version when
      ready)
+ Added code that dumps some of the received snap/1 data into a file
      (turned off by default, see `worker_desc.nim`)
* Relocated `IntervalSets` to nim-stew repo
* Accumulate accounts on temporary kv-DB
why:
    Explore the data as returned from snap/1. Will be converted to an
    `eth/db` next.
details:
Verify and accumulate per/state-root accounts downloaded via snap.
also:
Some unit tests
* Replace `Table` by `TrieDatabaseRef` for accounts accumulator
* update ticker statistics
details:
mean/variance based counter update
* allow persistent db for proved accounts
* rebase, and globally activate unit test
* fix statistics