* Better error handling
why:
Bail out on errors as early as possible, before any changes are made.
* Implement `fetch()` as opposite of `merge()`
rationale:
In the `Aristo` realm, the actions named `fetch()` and `merge()` indicate
leaf value related operations on the MPT, while the actions `get()` and
`put()` handle vertex or hash key related operations that constitute the MPT.
* Re-factor `merge()` prototypes
why:
The most used variant of `merge()` should have the simplest prototype.
* Persistent DB constructor needs to import `aristo/aristo_init/persistent`
why:
Most applications use memory DB anyway. This avoids linking `-lrocksdb`
or any other back end libraries by default.
* Re-factor transaction module
why:
The paradigm was wrong: the transaction descriptor replaced the database
descriptor but should be handled separately.
* Nimbus folder environment update
details:
* Integrated `CoreDbRef` for the sources in the `nimbus` sub-folder.
* The `nimbus` program does not compile yet as it needs the updates
in the parallel `stateless` sub-folder.
* Stateless environment update
details:
* Integrated `CoreDbRef` for the sources in the `stateless` sub-folder.
* The `nimbus` program compiles now.
* Premix environment update
details:
* Integrated `CoreDbRef` for the sources in the `premix` sub-folder.
* Fluffy environment update
details:
* Integrated `CoreDbRef` for the sources in the `fluffy` sub-folder.
* Tools environment update
details:
* Integrated `CoreDbRef` for the sources in the `tools` sub-folder.
* Nodocker environment update
details:
* Integrated `CoreDbRef` for the sources in the
`hive_integration/nodocker` sub-folder.
* Tests environment update
details:
* Integrated `CoreDbRef` for the sources in the `tests` sub-folder.
* The unit tests compile and run cleanly now.
* Generalise `CoreDbRef` to any `select_backend` supported database
why:
Generalisation was missed while working around a compiler oddity that
was tied to rocksdb for testing.
* Suppress compiler warning for `newChainDB()`
why:
A warning was added to this function, which must be wrapped so that
any `CatchableError` is re-raised as `Defect`.
* Split off persistent `CoreDbRef` constructor into separate file
why:
This allows compiling a memory-only database version without linking
the backend library.
* Use memory `CoreDbRef` database by default
detail:
Persistent DB constructor needs to import `db/core_db/persistent`
why:
Most tests use memory DB anyway. This avoids linking `-lrocksdb` or
any other backend by default.
* fix `toLegacyBackend()` availability check
why:
Got garbled after the memory/persistent split.
* Clarify raw access to MPT for snap sync handler
why:
Logically, `kvt` is not the raw access for the hexary trie (although
this holds for the legacy database)
why:
* Resolves some compiler coughing when it bails out on the persistent
db constructor inside `test()` clauses (it works perfectly outside.)
* API looks cleaner and better to maintain for the price of slightly
more work at the backend
* Remove 32bit os support from `custom_network` unit test
also:
* Fix compilation annoyance #1648
* Fix unit test on Kiln (changed `merge` logic?)
* Hide unused sources that do not compile
why:
* Get them out of the way before major update
* Import and function prototype mismatch -- maybe some changes got out
of scope.
* Re-implemented `db_chain` as `core_db`
why:
Hiding `TrieDatabaseRef` and `HexaryTrie` by default allows replacing
the current db wrapper with some other one, e.g. Aristo
* Support compiler exception warnings for CoreDbRef base methods.
* Allow `pairs()` iterator on all memory based key-value tables
why:
Previously only available for capture recorder.
* Backport `chain_db.nim` changes into its re-implementation `core_apps.nim`
* Fix exception annotation
On Windows, using "localhost" for the RPC test is very slow: both
pyspec_sim and engine_sim need more than one hour, while on Linux and
macOS they take only a few minutes.
* Misc fixes
detail:
* Fix de-serialisation for account leaves
* Update node recovery from unit tests
* Remove `LegacyAccount` from `PayloadRef` object
why:
Legacy accounts use a hash key as storage root, which is detrimental
to the working of the Aristo database, which uses a vertex ID instead.
* Dissolve `hashify_helper` into `aristo_utils` and `aristo_transcode`
why:
Functions are of general interest so they should live in first level
code files.
* Added left/right iterators over leaf nodes
* Some helper/wrapper functions that might be useful
why:
For the main tree with root vertex ID 1, the leaf nodes hold the
account data. These accounts may link to sub-trees whose storage root
node IDs must be registered here. There is no reverse key
lookup on the backend.
note:
These definitions are experimental. Also, there are some tests missing
for validating Payload data conversions.
* Provide transaction based interface for standard operations
* Provide unit tests for new Aristo interface using transactions
details:
These new tests combine and replace several single-purpose tests.
The now unused test sources will be kept for a while to be eventually
removed.
* Slightly tighten some self-check conditions
* Redefined the database descriptor object as reference (to the object)
why:
The upcoming transaction wrapper will work with a database reference
rather than the object itself
* Append state before `save()` to the Aristo descriptor
why:
This state was previously returned by the function. Appending it to
a field of the Aristo descriptor seems easier to handle.
* Fix missing branch checks in transcoder
why:
Symmetry problem. `Blobify()` allowed for encoding degenerate branch
vertices while `Deblobify()` rejected decoding wrongly encoded data.
* Update memory backend so that it rejects storing bogus vertices.
why:
Error behaviour made similar to the rocks DB backend.
* Make sure that leaf vertex IDs are not repurposed
why:
This makes it easier to record leaf node changes
* Update error return code for next()/right() traversal
why:
Returning the offending vertex ID (besides the error code) helps debugging
* Update Merkle hasher for deleted nodes
why:
Not implemented, yet
also:
Provide cache & backend consistency check functions. This was
partly re-implemented from `hashifyCheck()`
* Simplify some unit tests
* Fix delete function
why:
Was conceptually wrong
Previously, withdrawal validation was done in process_block only; the one
in persist block, which is also used by the synchronizer, was not done
properly.
* Added missing deferred cleanup directive to sub-test functions
why:
Rocksdb keeps the files locked for a short while, leading to errors. This
was previously solved by using different db sub-directories.
* Provide vertex deep-copy function globally.
why:
It is just handy.
* Avoid unnecessary vertex caching when merging proof nodes
also:
Run all merge tests on the rocksdb backend
Previously, proof node tests were run without backend
* Fix vertex ID generator state handling for rocksdb backend
why:
* Key error in walk iterator
* Needs to be loaded when opening the database
* Use non-zero sub-table prefixes for rocksdb
why:
Handy for debugging
* Fix error code for missing key on rocksdb backend
why:
Previously returned `VOID_HASH_KEY` rather than `GetKeyNotFound`
* Explicitly copy vertex data between internal table and function/result argument
why:
Function argument or return reference may still refer to the same data
object.
* Updated error symbols
why:
Error symbol names for the hike module now start with the prefix `Hike`.
* Write back modified branch node into local top layer cache
why:
With the backend available, the source of the branch node references
might not be the top layer cache. So any change must be explicitly
recorded.
* Generalised Aristo DB constructor for any type of backend
details:
* Records to be deleted are represented as key-void (rather than
key-value) pairs by the put-function arguments
* Allow direct driver access, iterators as example implementation and
for testing.
* Provide backend storage interface
details:
Stores the top layer onto backend tables
* Implemented Rocks DB backend
details:
Transaction based `put()` functionality
Iterators (based on direct RocksDB access)
* Fix include
why:
Eth67 is not the default yet, so this got missed.
* Rename `LeafKey` => `LeafTie`
why:
Name is a pen picture of what this object is for. Also, it avoids the
ubiquitous term `key`.
* Provided `getOrVoid()` wrapper for `getOrDefault()`
also:
Provide `isValid()` syntactic sugar for `.isNil.not`, `!= 0` etc.
Reorg descriptor source, split into sub-sources
* Bundled `NodeKey` objects with root ID and called it `HashLabel`
why:
`NodeKey` (aka repurposed Hash256) objects are unique only within a
particular sub-trie (e.g. storage slots) which are kept separated
(i.e. non-interleaved) by design. This is not applied to the backend
as the map VertexID->NodeKey labelling the nodes need not be injective.
For the in-memory database (transaction) layers, the injective map
VertexID->(VertexID,NodeKey) is used where the first field of the image
tuple is the root ID of the sub-trie the `NodeKey` object is valid for. So
identical storage tries for different accounts can be represented (see the
type sketch below).
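A minimal Nim sketch of this labelling scheme; the concrete type shapes are
assumed for illustration only, they are not copied from the sources:

```nim
type
  VertexID = distinct uint64     # vertex identifier (assumed shape)
  NodeKey  = array[32, byte]     # repurposed Hash256

  HashLabel = object
    root: VertexID               # root of the sub-trie the key is valid for
    key:  NodeKey                # Merkle key, unique only within that sub-trie

# The in-memory (transaction) layers can then use the injective map
#   VertexID -> HashLabel        # i.e. VertexID -> (VertexID, NodeKey)
# while the backend keeps the plain, possibly non-injective map
#   VertexID -> NodeKey
```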
* Exclude some storage tests
why:
These tests running on external dumps slipped through. The particular
dumps were reported earlier as somehow dodgy.
This was changed in `#1457` but having a second look, the change on
hexary_interpolate.nim(350) might be incorrect.
* Redesign `Aristo DB` descriptor for transaction based layers
why:
Previous descriptor layout made it cumbersome to push/pop
database delta layers.
The new architecture keeps each layer with the full delta set
relative to the database backend.
* Keep root ID as part of the `Patricia Trie` leaf path
why:
That way, forests are supported
* Fix missing Merkle key removal in `merge()`
* Accept optional root hash argument in `hashify()`
why:
For importing a full database, there will be no proof data except the
root key. So this can be used to check and set the root key in the
database descriptor.
also:
Associate vertex ID to `hashify()` error return code
* Added Aristo Trie traversal function
why:
* step along leaf vertices in sorted order
* tree/trie consistency checks when debugging
* Enabled storage slots test data for Aristo DB
* Keep vertex ID generator state with each db-layer
why:
The vertex ID generator state is part of the difference to the below
layer
* Move otherwise unused source to test directory
* Add Merkle hash generator
also:
* Verification facility for debugging
* Empty Merkle key hashes encoded as `EMPTY_ROOT_HASH`
details:
1. Merging a leaf vertex merges a `Patricia Trie` path (while
adding/modifying vertices) and adds a leaf node with payload
2. Merging a Merkle node merges a single vertex to the `Patricia Trie`
and registers Merkle hashes
3. Action 2 can be used before action 1 in order to construct a
Merkle proof as required for handling `snap/1` data.
4. Unit tests show that action 3 is benign for now :)
* Unit tests update, code cosmetics
* Fix segfault with zombie handling
why:
In order to save memory, the data records of zombie entries are removed
and only the key (aka peer node) is kept. Consequently, logging these
zombies can only be done by the key.
* Allow to accept V2 payload without `shanghaiTime` set while syncing
why:
Currently, `shanghaiTime` is missing (at least) while snap syncing. So
beacon node headers can be processed regardless. Normal (aka strict)
processing will be automatically restored when leaving snap sync mode.
* Cosmetics, renamed fields (eVtx, bVtx) -> (eVid, bVid)
* Multilayered delta architecture for Aristo DB
details:
Any VertexID or data retrieval needs to go down the rabbit hole and
fetch/get/manipulate the bottom layer -- even without explicit
backend.
* Direct reference to backend from top-level layer
why:
Some services such as the vid management need to be synchronised among all
layers. So access is optimised.
* Experimental MP-trie
why:
Deleting records is infeasible with the current structure
* Added vertex ID recycling management
Todo:
Provide some unit tests
* DB layout update
why:
Main news is the separation of `Merkle` hashes into an extra table.
details:
The code fragments cover conversion between compact MPT records and
Aristo DB records as well as some rudimentary cache handling for
the `Merkle` hashes (i.e. the extra table entries.)
todo:
Add some simple unit test for the descriptor record (currently used
for vertex ID management, only.)
* Updated vertex ID recycling management
details:
added simple unit tests (mainly testing ABI)
* docu update
* Set maximum time for nodes to be banned.
why:
Useless nodes are marked zombies and banned. They are kept in a table
until flushed out by new connections. This works well if there are many
connections. For the case that there are a few only, a maximum time is
set. When expired, zombies are flushed automatically.
* Suspend full sync while block number at beacon block
details:
Also allows using an external setting from a file (2nd line)
* Resume state at full sync after restart (if any)
* Relocate full sync descriptors from global `worker_desc.nim` to local pass
why:
These settings are needed only for the full sync pass.
* Rename `pivotAccountsCoverage*()` => `accountsCoverage*()`
details:
Extract from `worker_desc.nim` into separate source file.
* Relocate snap sync sub-descriptors
details:
..from the global `worker_desc.nim` to the local pass module `snap_pass_desc.nim`.
* Rename `SnapPivotRef` => `SnapPassPivotRef`
* Mostly removed `SnapPass` prefix from object type names
why:
These objects are solely used on the snap pass.
* Rename `playXXX` => `passXXX`
why:
Better purpose match
* Code massage, log message updates
* Moved `ticker.nim` to the `misc` folder so it can be used by both full and snap sync
why:
Simplifies maintenance
* Move `worker/pivot*` => `worker/pass/pass_snap/*`
why:
better for maintenance
* Moved helper source file => `pass/pass_snap/helper`
* Renamed ComError => GetError, `worker/com/` => `worker/get/`
* Keep ticker enable flag in worker descriptor
why:
This allows passing the flag with the descriptor rather than as an extra
function argument when calling the setup function.
* Extracted setup/release code from `worker.nim` => `pass/pass_init.nim`
* Recreating some of the speculative-execution code.
Not really using it yet. Also there's some new inefficiency in
memory.nim, but it's fixable - just haven't gotten around to it yet.
The big thing introduced here is the idea of "cells" for stack,
memory, and storage values. A cell is basically just a Future (though
there's also the option of making it an Identity - just a simple
distinct wrapper around a value - if you want to turn off the
asynchrony).
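A minimal sketch of the "cell" idea above, using std/asyncdispatch futures;
the type and helper names are made up for illustration and are not the
actual vm2 implementation:

```nim
import std/asyncdispatch

type
  Cell[T] = object
    case isAsync: bool
    of true:
      fut: Future[T]             # value arrives asynchronously
    of false:
      val: T                     # plain value, asynchrony turned off

proc identityCell[T](v: T): Cell[T] =
  ## Wrap an already known value (the "Identity" flavour).
  Cell[T](isAsync: false, val: v)

proc futureCell[T](f: Future[T]): Cell[T] =
  ## Wrap a value that will only become available later.
  Cell[T](isAsync: true, fut: f)

let c = identityCell(42)
doAssert not c.isAsync and c.val == 42
```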
* Bumped nim-eth.
* Cleaned up a few comments.
* Bumped nim-secp256k1.
* Oops.
* Fixing a few compiler errors that show up with EVMC enabled.
* Extract RocksDB timing tests from snap unit tests as separate module
why:
Declutter, make space for more snap related unit tests.
* Renamed `undumpNextGroup()` => `undumpBlocks()`
why:
The source file is called `undump_blocks.nim`, which should be more or
less in sync with the method name(s).
* Implement snap/1 server method `getByteCodes()`
* Implement snap/1 client method `getByteCodes()`
* Implement faculty for handling contract code fetching via snap/1
* Provide persistent storage for contract code records
* Implement contract code snap sync fetch & store
* Code massage, cosmetics
* Unit tests for verifying snap sync snapshot dump
details:
Use `undump_kvp.dumpAllDb()` to dump any database.
* Improve logging and logging options in Fluffy
- Allow selection of log format, including:
- JSON
- automatic selection based on tty
- Allow log levels per topic configured on cli
* Update sync scheduler pool mode
why:
The pool mode allows looping over active peers one after another. This
is ideal for soft re-starting peers. As this is a two tier experience
(start/stop, setup/release) the loop must be run twice. This is
controlled by a more rigid re-definition of how to use the `poolMode`
flag.
* Mitigate RLP serialiser deficiency
why:
Currently, serialising the `BlockBody` is not convertible and needs
to be checked in the `eth` module. Currently a local fix for the
wire protocol applies. Unit tests will stay (after this local solution
will have been removed.)
* Code cosmetics and massage
details:
Main part is `types.toStr()` as a unified function for logging block
numbers.
* Allow to use a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history will currently only affect the score
values which were derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start of history facility, this allows proceeding
with full syncing once snap has finished.
details:
Snap db storage was switched from sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
For debugging on smaller databases, only
* Implement snap -> full sync switch
- Remove failing on withdrawalsRoot to allow Shanghai BlockHeader.
Still need to add real checks in block header / body validation.
- Add withdrawals array to Block object for JSON-RPC API
* Somewhat tighten error handling
why:
Zombie state is invoked when the current peer turns out to be useless
for further communication. While there is a chance to further talk
to a peer about another topic (aka healing) after some protocol failure,
it makes no sense to do so after a network problem.
The latter state is explained by the `peerDegraded` flag that goes
together with the `zombie` state flag. A degraded peer is dropped
immediately.
* Remove `--sync-mode=snapCtx` option, always start snap in recovery mode
why:
No need for a snap sync option without recovery mode, can be achieved
by deleting the database.
* Code cosmetics, typos, prettify logging, debugging helper, etc.
* Split off snap sync sub-mode handler into separate modules
details:
The original `worker.nim` source has become a multiplexer for several
snap sync sub-modes `full` and `snap`. The source modules of the
incarnations of a particular sync sub-mode are placed into the
`worker/play` directory.
* Update ticker for snap and full sync logging
* Fix fringe condition for `GetStorageRanges` message handler
why:
Receiving a proved empty range was not considered at all. This led to
inconsistencies of the return value, which caused subsequent errors.
* Update storage range bulk download
details:
Mainly re-org of storage queue processing in `storage_queue_helper.nim`
* Update logging variables/messages
* Update storage slots healing
details:
Mainly clean up after improved helper functions from the sources
`find_missing_nodes.nim` and `storage_queue_helper.nim`.
* Simplify account fetch
why:
Too much fuss was made tolerating some errors. There will be an overall
strategy implemented where the concert of download and healing functions
is orchestrated.
* Add error resilience to the concert of download and healing.
why:
The idea is that a peer might stop serving snap/1 accounts and storage
slot downloads while still able to support fetching nodes for healing.
* Update nearby/neighbour leaf nodes finder
details:
Update return error codes so that in the case that there is no more
leaf node beyond the search direction, the particular error code
`NearbyBeyondRange` is returned.
* Compile largest interval range containing only this leaf point
why:
Will be needed in snap sync for adding single leaf nodes to the range
of already allocated nodes.
* Reorg `hexary_inspect.nim`
why:
Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_inspect.inspectTrieImpl()`
* Update fetching accounts range failure handling in `rangeFetchAccounts()`
why:
Rejected response leads now to fetching for another account range. Only
repeated failures (or all done) terminate the algorithm.
* Update accounts healing
why:
+ Fixed looping over a bogus node response that could not be inserted into
the database. As a solution, these nodes are locally registered and not
asked for in this download cycle.
+ Fixed sub-optimal handling of the interval range for a healed account leaf
node. Now the maximal range interval containing this node is registered as
processed, which leads to de-fragmentation of the processed (and
unprocessed) range list(s). So *gap* ranges which are known not to
cover any account leaf node are no longer asked for on the network.
+ Sporadically remove empty interval ranges (if any)
* Update logging, better variable names
t8n: fix a silly bug in the contract address generator; it should use the
original tx nonce instead of reading the nonce from the sender address in
the state db. Although inside the EVM a contract address generated by
reading the nonce from the state db is correct, outside the EVM that nonce
value might have been modified, thus generating an incorrect contract address.
accounts cache: when clearing account storage, the originalValue
cache was not cleared, only the storageRoot was set to the empty storage
root. This caused getStorage and getCommitedStorage to return wrong values
if the originalValue cache contained old values (see the sketch below).
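A simplified sketch of the fix; the cache layout is hypothetical, only the
`originalValue` field name and the clearing behaviour come from the note
above:

```nim
import std/tables

type
  AccountCache = object
    storageRoot:   array[32, byte]        # storage root hash of the account
    originalValue: Table[uint64, uint64]  # slot -> cached original value

proc clearStorage(ac: var AccountCache; emptyRoot: array[32, byte]) =
  ac.storageRoot = emptyRoot   # this used to be the only step ...
  ac.originalValue.clear()     # ... the missing part: drop stale cache entries

var ac = AccountCache(originalValue: initTable[uint64, uint64]())
ac.originalValue[1'u64] = 42'u64
ac.clearStorage(default(array[32, byte]))
doAssert ac.originalValue.len == 0
```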
* Redesign snap1 message GetTrieNodes argument prototypes
why:
A list of sub-objects `seq[SnapTriePath]` is more intuitive to work with
than an opaque definition `seq[seq[Blob]]` because the inner `SnapTriePath`
object has a dedicated structure (defining how to interpret `seq[Blob]`;
see the sketch below.)
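A rough Nim sketch of what the dedicated sub-object buys; the field layout
is an assumption for illustration, not the actual wire definition:

```nim
type
  Blob = seq[byte]
  SnapTriePath = object
    accPath:   Blob        # path addressing an account leaf
    slotPaths: seq[Blob]   # paths into that account's storage trie, if any

# A request argument then reads as `seq[SnapTriePath]` rather than an opaque
# `seq[seq[Blob]]` whose inner layout the caller must know by convention.
let probe = SnapTriePath(accPath: @[byte 0x01, 0x2a])
doAssert probe.slotPaths.len == 0
```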
* Collect some public constants into `constants.nim` file
* Reorg `hexary_paths.nim`
why:
+ Collecting nodes following a partial path properly ending at an
extension node failed to collect this last node.
+ Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_paths.rootPathExtend()`
info:
Extracted common tasks to `hexary_nodes_helper.nim`
* Implement `StorageRanges` message handler for snap/1 protocol
Now the macro assembler supports the merge fork, Shanghai, etc. without
using an ugly hack. Also, each assembler test has its own `setup` section
that can access `vmState` and perform various custom setup.
Simplify the EVM and delegate those things to the accounts cache.
Also, no more manual state clearing; the accounts cache will be
responsible for both collecting touched accounts and performing
state clearing.
* Gwei conversion should use u256 because u64 can overflow.
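A back-of-the-envelope check of the overflow concern, assuming the usual
1 Gwei = 10^9 wei conversion (plain uint64 arithmetic, not the actual
conversion code):

```nim
let
  gweiPerEth = 1_000_000_000'u64   # 10^9
  weiPerGwei = 1_000_000_000'u64   # 10^9

# uint64 can hold at most ~18.4 ETH worth of wei:
echo high(uint64) div (gweiPerEth * weiPerGwei)   # => 18

# A 20 ETH withdrawal expressed in Gwei still fits into uint64 ...
let twentyEthInGwei = 20'u64 * gweiPerEth
echo twentyEthInGwei                              # => 20000000000
# ... but multiplying by weiPerGwei would exceed high(uint64) and wrap
# silently, which is why the conversion goes through u256.
```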
* Make withdrawals follow the EIP-158 state-clearing rules.
(i.e. Empty accounts should be deleted.)
* Allow the zero address in normalizeNumber.
(Necessary for one of the new withdrawals-related tests.)
* Another fix with a withdrawals-related test.
* Add state root to node steps path register `RPath` or `XPath`
why:
Typically, the first node in the path register is the state root. There
are occasions, when the path register is empty (i.e. there are no node
references) which typically applies to a zero node key.
In order to find the next node key greater than zero, the state root
is needed, which is now part of the `RPath` or `XPath` data types.
* Extracted hexary tree debugging functions into separate files
* Update empty path fringe case for left/right node neighbour
why:
When starting at zero, the node steps path register would be empty. So
will any path that is before the first non-zero link of a state root (if
it is a `Branch` node.)
The `hexaryNearbyRight()` or `hexaryNearbyLeft()` function required a
non-zero node steps path register. Now the first node is to be advanced
starting at the first state root link if necessary.
* Simplify/reorg neighbour node finder
why:
There was too much code repetition for the cases
* persistent or in-memory database
* left or right move
details:
Most algorithms apply to persistent and in-memory databases alike. Using
templates/generic functions, most of these algorithms can be stated
in a unified way
* Update storage slots snap/1 handler
details:
Minor changes to be more debugging friendly.
* Fix detection of full database for snap sync
* Docu: Snap sync test & debugging scenario
* Handle last/all node(s) proof conditions at leaf node extractor
detail:
Flag whether the maximum extracted node is the last one in database
No proof needed if the full tree was extracted
* Clean up some helpers & definitions
details:
Move entities to more plausible locations, e.g. the `Account` object need
not be dealt with in the range extractor as it applies to any kind of
leaf data.
* Fix next/prev database walk fringe condition
details:
The first check needed might be for a leaf node; this check was done too late.
* Homogenise snap/1 protocol function prototypes
why:
The range arguments `origin` and `limit` data types differed in various
function prototypes (`Hash256` vs. `openArray[byte]`.)
* Implement `GetStorageRange` handler
* Implement server timeout for leaf node retrieval
why:
This feature leaves the server in control of potentially costly actions
invoked by the network
* Implement maximal reply size for snap service
why:
This feature leaves the server in control of potentially costly actions
invoked by the network.
* Part of EIP-4895: add withdrawals processing to block processing.
* Refactoring: extracted the engine API handler bodies into procs.
Intending to implement the V2 versions next. (I need the bodies to be
in separate procs so that multiple versions can use them.)
* Working on Engine API changes for Shanghai.
* Updated nim-web3, resolved ambiguity in Hash256 type.
* Updated nim-eth3 to point to master, now that I've merged that.
* I'm confused about what's going on with engine_client.
But let's try resolving this Hash256 ambiguity.
* Still trying to fix this conflict with the Hash256 types.
* Does this work now that nimbus-eth2 has been updated?
* Corrected blockValue in getPayload responses back to UInt256.
c834f67a37
* Working on getting the withdrawals-related tests to pass.
* Fixing more of those Hash256 ambiguities.
(I'm not sure why the nim-web3 library introduced a conflicting type
named Hash256, but right now I just want to get this code to compile again.)
* Bumped a couple of libraries to fix some error messages.
* Needed to get "make fluffy-tools" to pass, too.
* Getting "make nimbus_verified_proxy" to build.
* Clean up some function prototypes
why:
Simplify polymorphic prototype variances for easier maintenance.
* Fix fringe condition crash when importing bogus RLP node
why:
Accessing a non-list RLP entry as a list causes a `Defect`
* Fix left boundary proof at range extractor
why:
Was insufficient. The main problem was that there was no unit test for
the validity of the generated left boundary.
* Handle incomplete left boundary proofs early
why:
Attempt to do it later leads to overly complex code in order to prevent
looping when the same peer repeats to send the same incomplete proof.
In contrast, gaps in the leaf sequence can be handled gracefully by
registering the gaps
* Implement a manual pivot setup mechanism for snap sync
why:
For a test scenario it is convenient to set the pivot to something
lower than the beacon header from the consensus layer. This does not
need to rely on any RPC mechanism.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Fix calculation error
why:
Prevent calculating the square root of a negative number
why:
The peer manager runs concurrently to the discovery scheme. So the p2p
peer observer will also present `peer` non-static entries. Previously,
this peer manager threw an assert defect when this happened.
* Renaming androgynous sub-object names according to where they belong
why:
These objects are not explicitly dealt with. They give meaning to
some generic wrapper objects. Naming them after their origin may
help troubleshooting.
* Redefine proof nodes list data type for `snap/1` wire protocol
why:
The current specification suffered from the fact that the basic data
type for a proof node is an RLP encoded hexary node. This slightly
confused the encoding/decoding magic.
details:
This is the second attempt, now wrapping the `seq[Blob]` into a
wrapper object of `seq[SnapProof]` for a distinct alias sequence.
In the previous attempt, `SnapProof` was a wrapper object holding the
`Blob` with magic applied to the `seq[]`. This needed the `append`
mixin to strip the outer wrapper that was applied to the `Blob` already
when it was passed as argument.
* Fix some prototype inconsistency
why:
For easy reading, the `getAccountRange()` handler return code should
resemble the `accountRange()` arguments prototype.
* Enable `snap/1` accounts range service
* Allow to change the garbage collector to `boehm` as a Makefile option.
why:
There is still an unsolved memory corruption problem that might be
related to the standard `gc`. It seemingly goes away if the `gc` is
changed to `boehm`.
Specifying another `gc` on the make level simplifies debugging and
development.
* Code cosmetics
details:
* updated exception annotations
* extracted `worker_desc.nim` from `full/worker.nim`
* etc.
* Implement option to state a sync modifier file
why:
This allows specifying extra sync-type specific options which might
change over time. This file is regularly checked for updates.
* Implement a threshold when to suspend full syncing
why:
For a test scenario, a full sync peer may work as a local snap server.
There is no need to download the full block chain.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Fix locked database file annoyance with unit tests on Windows
why:
Need to clean up old files first from previous session as files remain
locked despite closing of database.
* Fix initialisation order
detail:
Apparently this has no real effect as the ticker is only initialised
here but started later.
This possible bug has been there for a while and was running with the
previous compiler and libraries.
* Better naming of data fields for sync descriptors
details:
* BuddyRef[S,W]: buddy.data -> buddy.only
* CtxRef[S]: ctx.data -> ctx.pool
* Refactoring in preparation for time-based forking.
* Timestamp-based hard-fork-transition.
* Workaround SideEffect issue / compiler bug for both failing locations in Portal history code
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Redefine `seq[Blob]` => `seq[SnapProof]` for `snap/1` protocol
why:
Proof nodes are traded as `Blob` type items rather than Nim objects. So
the RLP transcoder must not extra wrap proofs which are of type
seq[Blob]. Without custom encoding one would produce a
`list(blob(item1), blob(item2) ..)` instead of `list(item1, item2 ..)`.
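A small sketch of the distinct-alias idea with simplified types; the real
`snap/1` codec lives in the protocol sources:

```nim
type
  Blob      = seq[byte]      # an RLP encoded hexary node, as sent on the wire
  SnapProof = distinct Blob  # same bytes, distinct type -- no re-wrapping

proc toProofs(raw: seq[Blob]): seq[SnapProof] =
  ## Re-type raw proof blobs; the byte content stays untouched.
  for b in raw:
    result.add SnapProof(b)

# A custom RLP `append`/`read` pair for `seq[SnapProof]` then writes the
# underlying blobs verbatim, yielding list(item1, item2, ..) instead of
# list(blob(item1), blob(item2), ..).
doAssert toProofs(@[@[byte 0xc0]]).len == 1
```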
* Limit leaf extractor by RLP size rather than number of items
why:
To be used serving `snap/1` requests, the result of function
`hexaryRangeLeafsProof()` is limited by the maximal space
needed to serialise the result which will be part of the
`snap/1` response.
* Let the range extractor `hexaryRangeLeafsProof()` return RLP list sizes
why:
When collecting accounts, the size of the accounts list when encoded
as RLP is continually updated. So the summed up value is available
anyway. For the proof nodes list, there are not many (~ 10) so summing
up is not expensive here.
* Removed some Windows specific unit test annoyances
details:
+ Short put()/get() cycles on persistent database have a race condition
with vendor rocksdb. On a specific (and slow) qemu/win7 a 50ms `sleep()`
in between will mostly do the job (i.e. unless heavy CPU load.) This
issue was not observed on github/ci.
+ Removed annoyances when qemu/Win7 keeps the rocksdb database files
locked even after closing the db. The problem is solved by strictly
using fresh names for each test. No assumption made to be able to
properly clean up. This issue was not observed on github/ci.
* Silence some compiler gossip -- part 7, misc/non(sync or graphql)
details:
Adding some missing exception annotation
* Silence some compiler gossip -- part 6, evm
details:
Adding some missing exception annotation
* Update evmc cases
why:
were previously missing
* Increase Windows stack needed to run EVMC unit tests
why:
After annotating functions to trace exceptions some unit tests started
to fail on Windows without clear error report.
EVMC works recursively and now there seems to be a stack problem
reported by the nim compiler. Increasing the NIM stack as suggested by
NIM (using -d:nimCallDepthLimit=###) had some effect but no clear
solution.
Note that this patch set unrolls some NIM compiler settings
* Unit tests to verify calculations based on hard coded constants
why:
Sizes of RLP encoded objects are available at run time only.
* Changed argument order for `hexaryRangeLeafsProof()` prototype
why:
Better to read as a stand-alone function (arguments were optimised
for functional pipelines)
* Run sub-range proof tests for extracted ranges
* Cosmetics
details:
+ Update doc generator
+ Fix key type representation in `hexary_desc` for debugging
+ Redefine `isImportOk()` as template for better `check()` line reporting
* Fix fringe condition when interpolating Merkle-Patricia tries
details:
Small change with profound effect fixing some pathological condition
that haunted the unit test set on large data sets. There is still one
condition left which might well be due to an incomplete data set.
* Unit test proof nodes for node range extractor
* Unit tests to run on full extraction set
why:
Left over from troubleshooting, range length was only 5
* Reduce Nim 1.6 compiler warnings/hints for Fluffy and Nimbus proxy
Mostly raises Defect removals, TaintedString removal and some
unnecessary imports.
Also updating the copyright years alongside.
* Further reduce Nim 1.6 compiler warnings/hints for Nimbus
* Silence some compiler gossip -- part 5, common
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 6, db, rpc, utils
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 7, randomly collected source files
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 8, assorted tests
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
More impossible exceptions (undoes temporary fix from previous PR)
* Silence some compiler gossip -- part 1, tx_pool
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 2, clique
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 3, misc core
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 4, sync
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
Missing exception annotation
* Update comments and test noise
* Fix boundary proofs
why:
Were neither used in production nor unit tested. For production, other
methods apply to test leaf range integrity directly based on the proof
nodes.
* Added `hexary_range()`: interval range + proof extractor
details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)
todo:
Need to verify completeness of proof nodes
* Reduce some nim 1.6 compiler noise
* Stop unit test gossip for ci tests
* Updated to the latest nim-eth, nim-rocksdb, nim-web3
* Bump nimbus-eth2 module and fix related issues
Temporarily disabling Portal beacon light client network as it is
a lot of copy pasted code that did not yet take into account
forks. This will require a bigger rework and was not yet tested
in an actual network anyhow.
* More nimbus fixes after module bumps
---------
Co-authored-by: Adam Spitz <adamspitz@status.im>
Co-authored-by: jangko <jangko128@gmail.com>
Two unresolved items currently:
- Three tests that are temporarily disabled as they fail in the
macro_assembler code, which seems to be due to an ambiguous
identifier Stop (Ops and chronos ServerCommand enum).
- i386 CI disabled as it fails at Nim compilation already. Failed
tests were already ignored for this target.
* Extracted RocksDB timing unit tests into separate file
why:
make space for more in main module :)
* Extracted `inspectionRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted `storagesRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted pivot checkpoint store/retrieval unit tests into separate file
why:
make space for more in main module :)
* Extract helper functions into separate source file
* Extracted account import unit tests into separate file
why:
make space for more in main module :)
* Rename `test_decompose()` => `test_NodeRangeDecompose()`
why:
There will be more functions with `test_NodeRange` prefix.
* Cosmetics, update logger `topics`
* Clean up sync/start methods in nimbus
why:
* The `protocols` list selects served (as opposed to sync) protocols only.
* The `SyncMode.Default` object is allocated with the other possible sync
mode objects.
* Add snap service stub to `nimbus`
* Provide full set of snap response handler stubs
* Bicarb for the latest CI hiccup
why:
Might be a change in the CI engine for MacOS.
* Simplify pivot update
why:
No need to fetch the pivot header from the network when it can be
made available in the pivot cache
also:
Keep `txPool` update disabled while syncing
* Cosmetics, tune down some logging noise
* Support `snap/1` without `eth/6?`
why:
Eth is not needed here.
* Snap is an (optional) extension of `eth`
so:
It must be supported somehow. Nevertheless it will currently remain
unused in the snap syncer.
* Register external beacon stream header
why:
This will be used to sync the peers against.
* Update total coverage book-keeping for 100% roll-over
details:
Provide commonly available/used function
* Replace best pivot by beacon stream tracker
details:
Beacon stream header cache will be updated by external chain monitor via
RPC. This cached header will then be used to sync the pivot.
why:
Some peers reconnect recurrently after dialogue was found useless. The
reconnect loop protection was in place already, albeit insufficient.
also:
Some updates to allow setting previously constant parameters at run
time.
* Simplify accounts healing threshold management
why:
Was over-engineered.
details:
Previously, healing was based on recursive hexary trie perusal.
Due to "cheap" envelope decomposition of a range complement for the
hexary trie, the cost of running extra laps has become time-affordable
again and a simple trigger mechanism for healing will do.
* Control number of dangling result nodes in `hexaryInspectTrie()`
also:
+ Returns number of visited nodes available for logging so the maximum
number of nodes can be tuned accordingly.
+ Some code and docu update
* Update names of constants
why:
Declutter, more systematic naming
* Re-implemented `worker_desc.merge()` for storage slots
why:
Provided as proper queue management in `storage_queue_helper`.
details:
+ Several append modes (replaces `merge()`)
+ Added third queue to record entries currently fetched by a worker. So
another parallel running worker can save the complete set of storage
slots as a checkpoint. This was previously lost.
* Refactor healing
why:
Simplify and remove deep hexary trie perusal for finding completeness.
Due to "cheap" envelope decomposition of a range complement for the
hexary trie, the cost of running extra laps has become time-affordable
again and a simple trigger mechanism for healing will do.
* Docu update
* Run a storage job only once in download loop
why:
Otherwise, a download failure or rejection (i.e. missing data) leads to
repeated fetch requests until the peer disconnects.
* Relocated mothballing (i.e. swap-in preparation) logic
details:
Mothballing was previously tested & started after downloading
account ranges in `range_fetch_accounts`.
Whenever current download or healing stops because of a pivot change,
swap-in preparation is needed (otherwise some storage slots may get
lost when swap-in takes place.)
Also, `execSnapSyncAction()` has been moved back to `pivot_helper`.
* Reorganised source file directories
details:
Grouped pivot focused modules into `pivot` directory
* Renamed `checkNodes`, `sickSubTries` as `nodes.check`, `nodes.missing`
why:
Both lists are typically used together as pair. Renaming `sickSubTries`
reflects moving away from a healing centric view towards a swap-in
attitude.
* Multi times coverage recording
details:
Per-pivot account ranges are accumulated into a coverage range set. This
set will eventually contain a single range of account hashes [0..2^256]
which amounts to 100% capacity.
A counter has been added that is incremented whenever max capacity is
reached. The accumulated range is then reset to empty.
The effect of this setting is that the coverage can be evenly duplicated.
So 200% would not accumulate on a particular region.
* Update range length comparisons (mod 2^256)
why:
A range interval can have size 1..2^256 as it cannot be empty by
definition. A set of range intervals, however, can contain 0..2^256
points. As the scalar range is a residue class modulo 2^256, the
residue class 0 means length 2^256 for a single range interval, but can
mean 0 or 2^256 for the number of points in a set of range intervals
(see the toy example below).
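A toy example of the residue-class length arithmetic, scaled down to an
8-bit key space (mod 2^8 instead of mod 2^256):

```nim
proc ivLen(lo, hi: uint8): uint8 =
  ## Length of the closed interval [lo,hi], reduced mod 2^8; an interval is
  ## never empty, so the residue 0 can only mean the full 256-point range.
  hi - lo + 1                        # unsigned arithmetic wraps around

doAssert ivLen(0'u8, 255'u8) == 0    # full range: residue 0 encodes length 256
doAssert ivLen(3'u8, 3'u8) == 1      # single-point interval
# For a *set* of intervals the total number of points may genuinely be 0
# (empty set) as well as 256 (full cover), so comparisons there must treat
# the residue 0 case explicitly.
```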
* Generalised `hexaryEnvelopeDecompose()`
details:
Compile the complement of the union of some (processed) intervals and
express this complement as a list of envelopes of sub-tries.
This facility is directly applicable to swap-in book-keeping.
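A much simplified toy illustration of the decomposition idea on an 8-bit key
space; only the interval-complement part is shown, ignoring the alignment of
the complement to sub-trie envelopes:

```nim
proc complement(processed: seq[(uint8, uint8)]): seq[(uint8, uint8)] =
  ## Complement of a sorted, disjoint list of closed intervals over 0..255.
  var nxt = 0
  for (lo, hi) in processed:
    if nxt < int(lo):
      result.add((uint8(nxt), lo - 1))
    nxt = int(hi) + 1
  if nxt <= 255:
    result.add((uint8(nxt), 255'u8))

let gaps = complement(@[(16'u8, 31'u8), (64'u8, 127'u8)])
doAssert gaps == @[(0'u8, 15'u8), (32'u8, 63'u8), (128'u8, 255'u8)]
```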
* Re-factor `swapIn()`
why:
Good idea but baloney implementation. The main algorithm is based on
the generalised version of `hexaryEnvelopeDecompose()` which has been
derived from this implementation.
* Refactor `healAccounts()` using `hexaryEnvelopeDecompose()` as main driver
why:
Previously, the hexary trie was searched recursively for dangling nodes
which has a poor worst case performance already when the trie is
reasonably populated.
The function `hexaryEnvelopeDecompose()` is a magnitude faster because
it does not peruse existing sub-tries in order to find missing nodes
although the result is not fully compatible with the previous function.
So recursive search is used in a limited mode only when the decomposer
will not deliver a useful result.
* Logging & maintenance fixes
details:
Preparation for abandoning buddy-global healing variables `node`,
`resumeCtx`, and `lockTriePerusal`. These variables are trie-perusal
centric which will be run on the back burner in favour of
`hexaryEnvelopeDecompose()` which is used for accounts healing already.
* Additional logging for scheduler
* Fix duplicate occurrence of `bestNumber`
why:
Happened when the `block_queue` module was separated out of
the `worker` module. Somehow testing was insufficient or skipped
altogether.
* Update `runPool()` mixin for scheduler
details:
Could be simplified
* Dynamically adapt pivot header negotiation mode
details:
After accepting one peer and after some timeout, do not search for more
peers to start syncing but rather continue in relaxed mode with a
single peer.
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4844 (danksharding). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `excessDataGas` field until proper EIP4844 support exists.
https://github.com/status-im/nim-eth/pull/570
* Provide index to reconstruct missing storage slots
why:
Pivots will not be changed anymore once they are officially archived. The
accounts of the archived pivots are ready to be swapped into the active
pivot. This leaves open how to treat storage slots not fetched yet.
Solution: when mothballing, an `account->storage-root` index is
compiled that can be used when swapping in accounts.
* Implement swap-in from earlier pivots
details:
When most accounts are covered by the current and previous pivot
sessions, swapping in the accounts and storage slots (i.e. registering
account ranges done) from earlier pivots takes place if there is a
common sub-trie.
* Throttle pivot change when healing state has been reached
why:
There is a hope to complete the current pivot, so pivot update can be
throttled. This is achieved by setting another minimum block number
distance for the pivot headers. This feature is still experimental
* Miscellaneous tweaks & fixes
details:
+ Catch `TransportError` exception in `legacy.nim` module
+ Fix self-calling wrapper `hexaryEnvelopeTouchedBy()`
* Update documentation, logging etc.
* Changed `checkNode` batch list `seq[Blob]` => `seq[NodeSpecs]`
why:
The `NodeSpecs` type as used here is a tuple `(partial-path,node-key)`.
When `checkNode` partial paths are collected, also the node key is
available so it should be registered and not repeatedly recovered from
the database.
* Add optional begin/end trace statement in snap scheduler
why:
Allows tracing the invoked entity and scheduler state variables
* Rename and update dismantle => hexaryEnvelopeDecompose()
why:
+ As for naming, a positive connotation is preferred
+ The unit tests were really insufficient
+ The function result was wrong on a few boundary conditions
detail:
+ Extracted the function from `hexary_paths.nim` and re-implemented
it together with other envelope functions => `hexary_envelope.nim`
+ Re-wrote docu for `hexaryEnvelopeDecompose()`
* Relaxed right condition for `hexaryEnvelopeDecompose()` range argument
why:
Previously, the right point of the argument interval had to be a path
to an allocated leaf node. While this is typically a given for accounts,
it is easier to require an arbitrary range of paths (or keys) with
the requirement of a `boundary proof` for left and right (i.e. enough
nodes in the database to find the end points.)
also:
Bug fixes for related functions (typos, missing conditions etc.)
* Add missing unit tests include file
* Add quick hexary trie inspector, called `dismantle()`
why:
+ Full hexary trie perusal is slow if running down leaf nodes
+ For a known range of leaf nodes, work out the UInt256-complement of
partial sub-trie paths (for existing nodes). The result should cover
no (or only a few) sub-tries with leaf nodes.
* Extract common healing methods => `sub_tries_helper.nim`
details:
Also apply quick hexary trie inspection tool `dismantle()`
Replace `inspectAccountsTrie()` wrapper by `hexaryInspectTrie()`
* Re-arrange task dispatching in main peer worker
* Refactor accounts and storage slots downloaders
* Rename `HexaryDbError` => `HexaryError`
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4895 (withdrawals). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `withdrawalsRoot` field until proper withdrawals support exists.
https://github.com/status-im/nim-eth/pull/562
* Stop negotiating pivot if peer repeatedly replies w/useless answers
why:
There is some fringe condition where a peer replies with legit but
useless empty headers repeatedly. This goes on until somebody stops.
We stop now.
* Rename `missingNodes` => `sickSubTries`
why:
These (probably missing) nodes represent in reality fully or partially
missing sub-tries. The top nodes may even exist, e.g. as a shallow
sub-trie.
also:
Keep track of account healing on/off by the bool variable `accountsHealing`
controlled in `pivot_helper.execSnapSyncAction()`
* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)
also:
+ Trigger the recovery (or similar) process from inside the global peer
worker initialisation `worker.setup()` and not by the `snap.start()`
function.
+ Have `runPool()` return a `bool` code to indicate early stop to the
scheduler.
* Can import partial snap sync checkpoint at start
details:
+ Modified what is stored with the checkpoint in `snapdb_pivot.nim`
+ Will be loaded within `runDaemon()` if activated
* Forgot to import total coverage range
why:
Only the top (or latest) pivot needs coverage but the total coverage
is the list of all ranges for all pivots -- simply forgotten.
* Piecemeal trie inspection
details:
Trie inspection will stop after maximum number of nodes visited.
The inspection can be resumed using the returned state from the
last session.
why:
This feature allows for task switch between `piecemeal` sessions.
* Extract pivot helper code from `worker.nim` => `pivot_helper.nim`
* Accounts import will now return dangling paths from `proof` nodes
why:
With proper bookkeeping, this can be used to start healing without
analysing the probably full trie.
* Update `unprocessed` account range handling
why:
More generally, the API of a pair of unprocessed interval sets favours
the first set; only when that one is exhausted does the second set come
into play.
This was unfortunately implemented in a way that caused the ranges to be
unnecessarily fragmented. Now the number of range intervals typically
remains in the lower single digits.
* Save sync state after end of downloading some accounts
details:
restore/resume to be implemented later
* Add `stop()` methods to the shutdown procedure
why:
Nasty behaviour when hitting Ctrl-C, otherwise
* Add background service to sync scheduler
why:
The background service will be used for sync data import and recovery
after restart.
It is controlled by the sync scheduler for an easy turn/on off API.
also:
Simplified snap ticker time calc.
* Fix typo
why:
Single mode here means there is only one such (single mode) instance
activated but multi mode instances for other peers are allowed.
Erroneously, multi mode instances were held back waiting while some
single mode instance was running which reduced the number of parallel
download peers.
* Update log ticker, using time interval rather than ticker count
why:
Counting and logging ticker occurrences is inherently imprecise. So
time intervals are used.
* Use separate storage tables for snap sync data
* Left boundary proof update
why:
Was not properly implemented, yet.
* Capture pivot in peer worker (aka buddy) tasks
why:
The pivot environment is linked to the `buddy` descriptor. While
there is a task switch, the pivot may change. So it is passed on as
function argument `env` rather than retrieved from the buddy at
the start of a sub-function.
* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`
* Remove obsolete account range returned from `GetAccountRange` message
why:
The handler returned the wrong right-end value of the range. This range
was for convenience only.
* Prioritise storage slots if the queue becomes large
why:
Currently, accounts processing is prioritised up until all accounts
are downloaded. The new prioritisation has two thresholds for
+ start processing storage slots with a new worker
+ stop account processing and switch to storage processing
also:
Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`
* Generalise left boundary proof for accounts or storage slots.
why:
A detailed explanation of how this works is documented with
`snapdb_accounts.importAccounts()`.
Instead of enforcing a left boundary proof (which is still the default),
the importer functions return a list of `holes` (aka node paths) found in
the argument ranges of leaf nodes. This in turn is used by the book
keeping software for data download.
* Forgot to pass on variable in function wrapper
also:
+ Start healing not before 99% accounts covered (previously 95%)
+ Logging updated/prettified
* Added basic async capabilities for vm2.
This is a whole new Git branch, not the same one as last time
(https://github.com/status-im/nimbus-eth1/pull/1250) - there wasn't
much worth salvaging. Main differences:
I didn't do the "each opcode has to specify an async handler" junk
that I put in last time. Instead, in oph_memory.nim you can see
sloadOp calling asyncChainTo and passing in an async operation.
That async operation is then run by the execCallOrCreate (or
asyncExecCallOrCreate) code in interpreter_dispatch.nim.
In the test code, the (previously existing) macro called "assembler"
now allows you to add a section called "initialStorage", specifying
fake data to be used by the EVM computation run by that test. (In
the long run we'll obviously want to write tests that for-real use
the JSON-RPC API to asynchronously fetch data; for now, this was
just an expedient way to write a basic unit test that exercises the
async-EVM code pathway.)
There's also a new macro called "concurrentAssemblers" that allows
you to write a test that runs multiple assemblers concurrently (and
then waits for them all to finish). There's one example test using
this, in test_op_memory_lazy.nim, though you can't actually see it
doing so unless you uncomment some echo statements in
async_operations.nim (in which case you can see the two concurrently
running EVM computations each printing out what they're doing, and
you'll see that they interleave).
A question: is it possible to make EVMC work asynchronously? (For
now, this code compiles and "make test" passes even if ENABLE_EVMC
is turned on, but it doesn't actually work asynchronously, it just
falls back on doing the usual synchronous EVMC thing. See
FIXME-asyncAndEvmc.)
* Moved the AsyncOperationFactory to the BaseVMState object.
* Made the AsyncOperationFactory into a table of fn pointers.
Also ditched the plain-data Vm2AsyncOperation type; it wasn't
really serving much purpose. Instead, the pendingAsyncOperation
field directly contains the Future.
* Removed the hasStorage idea.
It's not the right solution to the "how do we know whether we
still need to fetch the storage value or not?" problem. I
haven't implemented the right solution yet, but at least
we're better off not putting in a wrong one.
* Added/modified/removed some comments.
(Based on feedback on the PR.)
* Removed the waitFor from execCallOrCreate.
There was some back-and-forth in the PR regarding whether nested
waitFor calls are acceptable:
https://github.com/status-im/nimbus-eth1/pull/1260#discussion_r998587449
The eventual decision was to just change the waitFor to a doAssert
(since we probably won't want this extra functionality when running
synchronously anyway) to make sure that the Future is already
finished.
* Update docu and logging
* Extracted and updated constants from `worker_desc` into separate file
* Update and re-calibrate communication error handling
* Allow simplified pivot negotiation
why:
This feature allows turning off pivot negotiation so that peers agree
on a pivot header.
For snap sync with fast changing pivots this only throttles the sync
process. The finally downloaded DB snapshot is typically a merged
version of different pivot states augmented by a healing process.
* Re-model worker queues for accounts download & healing
why:
Currently there is only one data fetch per download or healing task.
This task is then repeated by the scheduler after a short time. In
many cases, this short time seems enough for some peers to decide to
terminate connection.
* Update main task batch `runMulti()`
details:
The function `runMulti()` is activated in quasi-parallel mode by the
scheduler. This function calls the download, healing and fast-sync
functions.
While in debug mode, after each set of jobs run by this function the
database is analysed (by the `snapdb_check` module) and the result
printed.
* Update logging
* Fix node hash associated with partial path for missing nodes
why:
Healing uses the partial paths for fetching nodes from the network. The
node hash (or key) is used to verify the node data retrieved.
The trie inspector function returned the parent hash instead of the node hash
with the partial path when a missing node was detected. So all nodes
for healing were rejected.
* Must not modify sequence while looping over it
* Re-arrange fetching storage slots in batch module
why:
Previously, fetching partial slot ranges first had a chance of
terminating the worker peer (due to a network error) while there were
many inheritable storage slots on the queue.
Now, inheritance is checked first, then full slot ranges and finally
partial ranges.
* Update logging
* Bundled node information for healing into single object `NodeSpecs`
why:
Previously, partial paths and node keys were kept in separate variables.
This approach was error prone due to copying/reassembling function
argument objects.
As all partial paths, keys, and node data types are more or less handled
as `Blob`s over the network (using Eth/6x, or Snap/1) it makes sense to
hold these `Blob`s as named fields in a single object (even if not all
fields are active for the current purpose; see the sketch below.)
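A rough Nim sketch of the bundled record; the field names are assumed for
illustration, the actual `NodeSpecs` definition lives in the snap sources:

```nim
type
  Blob = seq[byte]
  NodeSpecs = object
    partialPath: Blob   # partial path into the hexary trie
    nodeKey:     Blob   # hash key used to verify fetched node data
    data:        Blob   # RLP encoded node body, if already available

# Not every field needs to be populated for every purpose, e.g. a healing
# request only needs the partial path and the expected node key.
let want = NodeSpecs(partialPath: @[byte 0x12, 0x34], nodeKey: @[byte 0xab])
doAssert want.data.len == 0
```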
* For good housekeeping, using `NodeKey` type only for account keys
why:
previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
state or storage root keys use the `Hash256` type.
* Always accept latest pivot (and not a slightly older one)
why:
For testing it was tried to use a slightly older pivot state root than
available. Some anecdotal tests seemed to suggest an advantage so that
more peers are willing to serve on that older pivot. But this could not
be confirmed in subsequent tests (still anecdotal, though.)
As a side note, the distance of the latest pivot to its predecessor is
at least 128 (or whatever the constant `minPivotBlockDistance` is
assigned to.)
* Reshuffle name components for some file and function names
why:
Clarifies purpose:
"storages" becomes: "storage slots"
"store" becomes: "range fetch"
* Stash away currently unused modules in sub-folder named "notused"
* Multiple storage batches at a time
why:
Previously only some small portion was processed at a time so the peer
might have gone when the process was resumed at a later time
* Renamed some field of snap/1 protocol response object
why:
Documented as `slots` is in reality a per-account list of slot lists. So
the new name `slotLists` better reflects the nature of the beast.
* Some minor healing re-arrangements for storage slot tries
why:
Resolving all complete inherited slots tries first in sync mode keeps
the worker queues smaller which improves logging.
* Prettify logging, comments update etc.
* Re-model persistent database access
why:
Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
key mapping). So get/put and bulk functions now use the definitions
in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
* Fixes: missing return code, typo, redundant imports etc.
* Remove obsolete debugging directives from `worker_desc` module
* Correct failing unit tests for storage slots trie inspection
why:
Some pathological cases for the extended tests do not produce any
hexary trie data. This is rightly detected by the trie inspection
and the result checks needed to be adjusted.
* For snap sync, publish `EthWireRef` in sync descriptor
why:
currently used for noise control
* Detect and reuse existing storage slots
* Provide healing module for storage slots
* Update statistic ticker (adding range factor for unprocessed storage)
* Complete merge function for work item ranges
why:
Merging interval into existing partial item was missing
* Show av storage queue lengths in ticker
detail:
The previous attempt showed average completeness, which did not tell much.
* Correct the meaning of the storage counter (per pivot)
detail:
It is the number of accounts that have storage saved.
* Rename `LeafRange` => `NodeTagRange`
* Replacing storage slot partition point by interval
why:
The partition point only allows describing slots `[point,high(Uint256)]`
for fetching interval slot ranges. This has been generalised for any
interval.
* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`
why:
Generalised healing status for accounts, and later for storage slots.
* Improve accounts healing loop
* Split `snap_db` into accounts and storage modules
why:
It is cleaner to have separate session descriptors for accounts and
storage slots (based on a common base descriptor.)
Also, persistent storage handling might be changed in future which
requires the storage slot implementation disentangled from the accounts
handling.
* Re-model worker queues for storage slots
why:
There is a dynamic list of storage sub-tries, each one has to be
treated similarly to the accounts database. This applies to slot
interval downloads as well as to healing
* Compress some return value report lists for snapdb methods
why:
No need to report all handling details for work items that are filtered
out and discarded, anyway.
* Remove inner loop frame from healing function
why:
The healing function runs as a loop body already.
* Split fetch accounts into sub-modules
details:
There will be separate modules for accounts snapshot, storage snapshot,
and healing for either.
* Allow to rebase pivot before negotiated header
why:
Peers seem to have not too many snapshots available. By setting back the
pivot block header slightly, the chances might be higher to find more
peers to serve this pivot. Experiment on mainnet showed that setting back
too much (tested with 1024), the chances to find matching snapshot peers
seem to decrease.
* Add accounts healing
* Update variable/field naming in `worker_desc` for readability
* Handle leaf nodes in accounts healing
why:
There is no need to fetch accounts when they had been added by the
healing process. On the flip side, these accounts must be checked for
storage data and the batch queue updated, accordingly.
* Reorganising accounts hash ranges batch queue
why:
The aim is to formally cover as many accounts as possible for different
pivot state root environments. Formerly, this was tried by starting the
accounts batch queue at a random value for each pivot (and wrapping
around.)
Now, each pivot environment starts with an interval set mutually
disjunct from any interval set retrieved with other pivot state roots.
also:
Stop fishing for more pivots in `worker` if 100% download is reached
* Reorganise/update accounts healing
why:
Error handling was wrong and the (math. complexity of) whole process
could be better managed.
details:
Much of the algorithm is now documented at the top of the file
`heal_accounts.nim`
* Miscellaneous updates TBC
* Disentangled pivot2 module from snap
why:
Wrote as template on top of sync so it can be shared by fast and snap
sync.
* Renamed and relocated pivot sources
* Integrated `best_pivot` module into full and snap sync
why:
Full sync used an older version of `best_pivot`
* Isolating download module from full sync
why:
Might be shared with snap sync at a later stage