* Fix include
why:
Eth67 is not the default yet, so that got missed
* Rename `LeafKey` => `LeafTie`
why:
Name is a pen picture of what this object is for. Also, it avoids the
ubiquitous term `key`.
* Provided `getOrVoid()` wrapper for `getOrDefault()`
also:
Provide `isValid()` syntactic sugar for `.isNil.not`, `!= 0` etc.
Reorg descriptor source, split into sub-sources
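
  A minimal sketch of the `getOrVoid()`/`isValid()` sugar mentioned above (the `NodeKey` stand-in and the all-zero "void" convention are illustrative assumptions, not the actual descriptor code):

```nim
import std/tables

type
  Hash32  = array[32, byte]      # stand-in payload type
  NodeKey = distinct Hash32      # stand-in for the real key type

proc isValid*(key: NodeKey): bool =
  ## Sugar for "not the all-zero default value".
  Hash32(key) != default(Hash32)

proc getOrVoid*[K](tab: Table[K, NodeKey]; key: K): NodeKey =
  ## Like `getOrDefault()`, but the "void" value is spelled out once here.
  tab.getOrDefault(key, default(NodeKey))

when isMainModule:
  var tab = initTable[string, NodeKey]()
  doAssert not tab.getOrVoid("missing").isValid
```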
* Bundled `NodeKey` objects with root ID and called it `HashLabel`
why:
`NodeKey` (aka repurposed Hash256) objects are unique only within a
particular sub-trie (e.g. storage slots) which are kept separated
(i.e. non-interleaved) by design. This is not applied to the backend
as the map VertexID->NodeKey labelling the nodes need not be injective.
For the in-memory database (transaction) layers, the injective map
VertexID->(VertexID,NodeKey) is used where the first field of the image
tuple is the root ID of the sub-trie for which the `NodeKey` object is
valid. So identical storage tries for different accounts can be
represented (see the type sketch below.)
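
  A minimal type sketch of that layout (the concrete `VertexID` representation and field names are assumptions for illustration only):

```nim
type
  Hash32    = array[32, byte]            # stand-in payload type
  VertexID  = distinct uint64            # vertex ID as used by the backend
  NodeKey   = distinct Hash32            # repurposed Hash256

  HashLabel = object
    ## A `NodeKey` is unique only within its sub-trie, so it is bundled with
    ## the root ID of that sub-trie. The in-memory layers then keep the map
    ## VertexID -> HashLabel, i.e. VertexID -> (VertexID, NodeKey).
    root: VertexID                       # root vertex of the sub-trie
    key:  NodeKey                        # hash key, valid relative to `root`
```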
* Extract RocksDB timing tests from snap unit tests as separate module
why:
Declutter, make space for more snap related unit tests.
* Renamed `undumpNextGroup()` => `undumpBlocks()`
why:
The source file is called `undump_blocks.nim`, which should be roughly
in sync with the method name(s).
* Implement snap/1 server method `getByteCodes()`
* Implement snap/1 client method `getByteCodes()`
* Implement facility for handling contract code fetching via snap/1
* Provide persistent storage for contract code records
* Implement contract code snap sync fetch & store
* Code massage, cosmetics
* Unit tests for verifying snap sync snapshot dump
details:
Use `undump_kvp.dumpAllDb()` to dump any database.
* Update sync scheduler pool mode
why:
The pool mode allows looping over active peers one after another. This
is ideal for soft re-starting peers. As this is a two-tier process
(start/stop, setup/release), the loop must be run twice (see the sketch
below.) This is controlled by a more rigid re-definition of how to use
the `poolMode` flag.
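
  A hedged sketch of that two-pass control flow (the names `PoolPhase`, `runPoolPass` and the string peers are hypothetical and only illustrate the looping, not the scheduler API):

```nim
type
  PoolPhase = enum
    phaseRelease    # first pass: stop/release the previous state
    phaseSetup      # second pass: set up/start the new state

proc runPoolPass(peers: openArray[string]; phase: PoolPhase) =
  ## Visit all active peers once for the given phase.
  for peer in peers:
    case phase
    of phaseRelease: echo "release ", peer
    of phaseSetup:   echo "setup   ", peer

proc softRestart(peers: openArray[string]) =
  ## Because the restart is two-tier, the pool loop runs twice.
  runPoolPass(peers, phaseRelease)
  runPoolPass(peers, phaseSetup)

when isMainModule:
  softRestart(["peer-1", "peer-2"])
```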
* Mitigate RLP serialiser deficiency
why:
Currently, serialising the `BlockBody` is not convertible and needs
to be checked in the `eth` module. For now, a local fix for the
wire protocol applies. Unit tests will stay (even after this local
solution has been removed.)
* Code cosmetics and massage
details:
Main part is `types.toStr()` as a unified function for logging block
numbers.
* Allow using a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history will currently only affect the score
values which were derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start of history facility, this allows proceeding
with full syncing once snap has finished.
details:
Snap db storage was switched from sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
For debugging on smaller databases only.
* Implement snap -> full sync switch
* Redesign snap1 message GetTrieNodes argument prototypes
why:
A list of sub-objects `seq[SnapTriePath]` is more intuitive to work with
than the opaque definition `seq[seq[Blob]]` because the inner
`SnapTriePath` object has a dedicated inner structure (describing how to
interpret `seq[Blob]`; see the sketch below.)
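
  An illustrative sketch of the idea (the field names are assumptions; the wire format itself stays `seq[seq[Blob]]`):

```nim
type
  Blob = seq[byte]

  SnapTriePath = object
    ## One request group: the first blob addresses an account, any further
    ## blobs address storage slots of that account.
    accPath:   Blob          # partial path into the accounts trie
    slotPaths: seq[Blob]     # partial paths into this account's storage trie

proc toWire(p: SnapTriePath): seq[Blob] =
  ## Flatten back to the opaque on-the-wire representation.
  result = @[p.accPath]
  result.add p.slotPaths
```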
* Collect some public constants into `constants.nim` file
* Reorg `hexary_paths.nim`
why:
+ Collecting nodes following a partial path that properly ends at an
extension node failed to collect this last node.
+ Merged the node collecting algorithms for persistent and in-memory
databases into a single generic function `hexary_paths.rootPathExtend()`
info:
Extracted common tasks to `hexary_nodes_helper.nim`
* Implement `StorageRanges` message handler for snap/1 protocol
* Handle last/all node(s) proof conditions at leaf node extractor
details:
+ Flag whether the maximum extracted node is the last one in the database
+ No proof is needed if the full tree was extracted
* Clean up some helpers & definitions
details:
Move entities to more plausible locations, e.g. `Account` object need
not be dealt with in the range extractor as it applies to any kind of
leaf data.
* Fix next/prev database walk fringe condition
details:
The first check needed might be for a leaf node; this check was done too late.
* Homogenise snap/1 protocol function prototypes
why:
The range arguments `origin` and `limit` data types differed in various
function prototypes (`Hash256` vs. `openArray[byte]`.)
* Implement `GetStorageRange` handler
* Implement server timeout for leaf node retrieval
why:
This feature leaves control with the server over potentially costly
actions invoked by the network.
* Implement maximal reply size for snap service
why:
This feature leaves control with the server over potentially costly
actions invoked by the network.
* Renaming nondescript sub-object names according to where they belong
why:
These objects are not explicitly dealt with. They give meaning to
some generic wrapper objects. Naming them after their origin may
help troubleshooting.
* Redefine proof nodes list data type for `snap/1` wire protocol
why:
The current specification suffered from the fact that the basic data
type for a proof node is an RLP encoded hexary node. This slightly
confused the encoding/decoding magic.
details:
This is the second attempt, now wrapping the `seq[Blob]` into a
wrapper object of `seq[SnapProof]` for a distinct alias sequence.
In the previous attempt, `SnapProof` was a wrapper object holding the
`Blob` with magic applied to the `seq[]`. This needed the `append`
mixin to strip the outer wrapper that was applied to the `Blob` already
when it was passed as argument.
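
  A sketch of this second attempt as described (the wrapper and field names are assumptions for illustration):

```nim
type
  Blob = seq[byte]

  SnapProof = distinct Blob
    ## One proof entry: an already RLP encoded hexary node, traded verbatim.

  SnapProofNodes = object
    ## Wrapper object so custom RLP encode/decode logic can be attached to
    ## the whole sequence instead of to each single `Blob`.
    proofs: seq[SnapProof]
```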
* Fix some prototype inconsistency
why:
For easy reading, the `getAccountRange()` handler return code should
resemble the `accountRange()` arguments prototype.
* Enable `snap/1` accounts range service
* Allow changing the garbage collector to `boehm` as a Makefile option.
why:
There is still an unsolved memory corruption problem that might be
related to the standard `gc`. It seemingly goes away if the `gc` is
changed to `boehm`.
Specifying another `gc` on the make level simplifies debugging and
development.
* Code cosmetics
details:
* updated exception annotations
* extracted `worker_desc.nim` from `full/worker.nim`
* etc.
* Implement option to state a sync modifier file
why:
This allows specifying extra sync-type specific options which might
change over time. This file is regularly checked for updates.
* Implement a threshold when to suspend full syncing
why:
For a test scenario, a full sync peer may work as a local snap server.
There is no need to download the full block chain.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Redefine `seq[Blob]` => `seq[SnapProof]` for `snap/1` protocol
why:
Proof nodes are traded as `Blob` type items rather than Nim objects. So
the RLP transcoder must not additionally wrap proofs which are already of
type `seq[Blob]`. Without custom encoding one would produce a
`list(blob(item1), blob(item2) ..)` instead of `list(item1, item2 ..)`
(see the toy example below.)
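
  To illustrate the difference, here is a toy RLP fragment (short-payload case only, not the `eth/rlp` library) contrasting the unwanted double wrapping with the intended verbatim list:

```nim
import std/sequtils

type Blob = seq[byte]

proc rlpStr(payload: Blob): Blob =
  ## Toy RLP string encoding, short payloads (< 56 bytes) only.
  if payload.len == 1 and payload[0] < 0x80:
    return payload
  @[byte(0x80 + payload.len)] & payload

proc rlpList(items: seq[Blob]): Blob =
  ## Toy RLP list encoding, short payloads (< 56 bytes) only.
  var body: Blob
  for it in items:
    body.add it
  @[byte(0xc0 + body.len)] & body

let proofs = @[@[byte 0xc2, 0x01, 0x02], @[byte 0xc1, 0x03]]  # already RLP items

# Default behaviour would re-encode each proof as a string:
#   list(blob(item1), blob(item2) ..)
let doubleWrapped = rlpList(proofs.mapIt(rlpStr(it)))

# Custom encoding passes the proofs through verbatim:
#   list(item1, item2 ..)
let verbatim = rlpList(proofs)

doAssert doubleWrapped != verbatim
```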
* Limit leaf extractor by RLP size rather than number of items
why:
To be usable for serving `snap/1` requests, the result of function
`hexaryRangeLeafsProof()` is limited by the maximal space
needed to serialise the result, which will be part of the
`snap/1` response.
* Let the range extractor `hexaryRangeLeafsProof()` return RLP list sizes
why:
When collecting accounts, the size of the accounts list when encoded
as RLP is continually updated. So the summed-up value is available
anyway. For the proof nodes list, there are not many (~ 10), so summing
up is not expensive here.
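
  A hedged sketch of the combined idea behind the two items above, with a stand-in size function in place of the real RLP serialisation (all names here are illustrative):

```nim
type
  Leaf = object
    key:  array[32, byte]    # leaf path/key
    data: seq[byte]          # leaf payload (e.g. RLP encoded account)

  RangeResult = object
    leafs:    seq[Leaf]      # collected leaves
    listSize: int            # running encoded size of that list

proc itemSize(leaf: Leaf): int =
  ## Stand-in for the real "encoded RLP size" of one list item.
  leaf.key.len + leaf.data.len

proc collectLimited(leaves: openArray[Leaf]; replySizeMax: int): RangeResult =
  ## Collect leaves while the running encoded size fits into the reply limit
  ## and return that running size as well (it is available anyway.)
  for leaf in leaves:
    let sz = leaf.itemSize
    if result.listSize + sz > replySizeMax:
      break
    result.leafs.add leaf
    result.listSize += sz
```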
* Silence some compiler gossip -- part 1, tx_pool
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 2, clique
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 3, misc core
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 4, sync
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
Missing exception annotation
* Update comments and test noise
* Fix boundary proofs
why:
These were neither used in production nor unit tested. For production,
other methods apply that test leaf range integrity directly based on the
proof nodes.
* Added `hexary_range()`: interval range + proof extractor
details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)
todo:
Need to verify completeness of proof nodes
* Reduce some nim 1.6 compiler noise
* Stop unit test gossip for ci tests
* Cosmetics, update logger `topics`
* Clean up sync/start methods in nimbus
why:
* The `protocols` list selects served (as opposed to sync) protocols only.
* The `SyncMode.Default` object is allocated with the other possible sync
mode objects.
* Add snap service stub to `nimbus`
* Provide full set of snap response handler stubs
* Bicarb for the latest CI hiccup
why:
Might be a change in the CI engine for MacOS.
* Miscellaneous tweaks & fixes
details:
+ Catch `TransportError` exception in `legacy.nim` module
+ Fix self-calling wrapper `hexaryEnvelopeTouchedBy()`
* Update documentation, logging etc.
* Changed `checkNode` batch list `seq[Blob]` => `seq[NodeSpecs]`
why:
The `NodeSpecs` type as used here is a tuple `(partial-path,node-key)`;
see the sketch below. When `checkNode` partial paths are collected, the
node key is also available, so it should be registered and not repeatedly
recovered from the database.
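
  A minimal sketch of such a pair type (the field names are assumptions for illustration):

```nim
type
  Blob    = seq[byte]
  NodeKey = distinct array[32, byte]

  NodeSpecs = object
    ## The (partial-path, node-key) pair mentioned above, so the key does not
    ## have to be recovered from the database again later on.
    partialPath: Blob        # nibble path prefix addressing the node
    nodeKey:     NodeKey     # Merkle key of the node at that path
```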
* Add optional begin/end trace statement in snap scheduler
why:
Allows to trace invoked entity and scheduler state variables
* Multiple storage batches at a time
why:
Previously only a small portion was processed at a time, so the peer
might have gone away by the time the process was resumed later.
* Renamed some field of snap/1 protocol response object
why:
The field documented as `slots` is in reality a per-account list of slot
lists. So the new name `slotLists` better reflects the nature of the beast.
* Some minor healing re-arrangements for storage slot tries
why:
Resolving all complete inherited slot tries first in sync mode keeps
the worker queues smaller, which improves logging.
* Prettify logging, comments update etc.
* Re-model persistent database access
why:
Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
key mapping). So get/put and bulk functions now use the definitions
in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
* Fixes: missing return code, typo, redundant imports etc.
* Remove obsolete debugging directives from `worker_desc` module
* Correct failing unit tests for storage slots trie inspection
why:
Some pathological cases for the extended tests do not produce any
hexary trie data. This is rightly detected by the trie inspection
and the result checks needed to be adjusted.
* For snap sync, publish `EthWireRef` in sync descriptor
why:
currently used for noise control
* Detect and reuse existing storage slots
* Provide healing module for storage slots
* Update statistic ticker (adding range factor for unprocessed storage)
* Complete merge function for work item ranges
why:
Merging an interval into an existing partial item was missing
* Show average storage queue lengths in ticker
details:
The previous attempt showed average completeness, which did not tell much
* Correct the meaning of the storage counter (per pivot)
details:
It is the number of accounts that have storage saved
* Provided common scheduler API, applied to `full` sync
* Use hexary trie as storage for proofs_db records
also:
+ Store metadata with account for keeping track of account state
+ add iterator over accounts
* Common scheduler API applied to `snap` sync
* Prepare for accounts bulk import
details:
+ Added some ad-hoc checks for proving accounts data received via
snap/1 (will be replaced by a proper database version when ready)
+ Added code that dumps some of the received snap/1 data into a file
(turned off by default, see `worker_desc.nim`)
* Relocated `IntervalSets` to nim-stew repo
* Accumulate accounts on temporary kv-DB
why:
Explore the data as returned from snap/1. Will be converted to an
`eth/db` next.
details:
Verify and accumulate per/state-root accounts downloaded via snap.
also:
Some unit tests
* Replace `Table` by `TrieDatabaseRef` for accounts accumulator
* Update ticker statistics
details:
mean/variance based counter update
* Allow persistent db for proved accounts
* Rebase, and globally activate unit test
* Fix statistics
* Using `IntervalSet` type data for `LeafRange`
* Updated log ticker
* Update to `eth67`
details:
Disabled by default, use `ENABLE_LEGACY_ETH66=0` to enable
No support for the `Get/NodeData` dialogue via eth anymore
* Dissolved fetch/common.nim
details:
The log/ticker part becomes `ticker.nim`
The interval range management is merged into `fetch.nim`
* Updated account scheduler
why:
The previous scheduler fetched each account once (for different state
roots.) The updated scheduler re-calibrates after a change of the state
root and potentially (until told otherwise) fetches all possible
accounts.
* Fix `high(P)` fringe cases in `IntervalSet` handling
why:
The `high(P)` value for a point type `P` cannot be represented with
half-open intervals `[a,b)` for points `a`, `b` of `P`. So this single
value needs extra treatment, which was slightly wrong (see the worked
example below.)
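
  A worked example of the fringe case with `P = uint8` (the types here are purely illustrative, not the `IntervalSet` implementation):

```nim
type
  HalfOpen = object
    ## Covers the points `a .. b-1`, so it requires `a < b`.
    a, b: uint8

proc contains(iv: HalfOpen; p: uint8): bool =
  iv.a <= p and p < iv.b

when isMainModule:
  # The widest right bound expressible is high(uint8) itself, hence the
  # point high(uint8) = 255 can never be a member of a half-open interval
  # and needs dedicated handling.
  let iv = HalfOpen(a: 250, b: high(uint8))
  doAssert 254'u8 in iv
  doAssert high(uint8) notin iv
```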
* Updated docu/comments
also:
rebased
* Update scheduler
details:
Change the `pivot` management when creating new accounts lists. It is
strictly increasing (and wrapping around) depending on last updated
accounts list.
* Fix/recover download flag
why:
The fetch indicator used to control the data download somehow got
lost during re-org.
* Updated chronicles/logger topics
* Reorganised run state flags
why:
The original code used a pair of boolean flags `(stopped,stopThisState)`
which was translated to three states: running, stoppedPending, and
stopped. It is currently not clear whether collapsing some states was
correct. So the original logic has been restored, albeit wrapped into
directives like `isStopped()` etc.
also:
Moving some function bodies in `worker.nim`
* Moved `reply_data.nim` and `validate_trienode.nim` to sub-directory `fetch_trie`
why:
Only used in `fetch_trie.nim`.
* Move `fetch_*` file and directory objects to `fetch` subdirectory
why:
Only used in `fetch.nim`
* Added start/stop and/or setup/release methods for all sub-modules
why:
good housekeeping
also:
updated getters/setters for ctrl states
updated trace messages
* Reorg SnapPeerBase descriptor, notably start/stop flags
details:
Instead of using three boolean flags startedFetch, stopped, and
stopThisState a single enum type is used with values SyncRunningOk,
SyncStopRequest, and SyncStopped.
* Restricting snap to eth66 and later
why:
Id-tracked request/response wire protocol can handle overlapped
responses when requests are sent in a row.
* Align function names with source code file names
why:
Easier to reconcile when following the implemented logic.
* Update trace logging (want file locations)
why:
The macros previously used hid the relevant file location (when
`chroniclesLineNumbers` is turned on.) It rather printed the file
location of the template that was wrapping `trace`.
* Use KeyedQueue table instead of sequence
why:
Quick access, easy configuration as LRU or FIFO with max entries
(currently LRU.)
* Dissolve `SnapPeerEx` object extension into `SnapPeer`
why:
It is logically cleaner and more obvious not to inherit from
`SnapPeerBase` but to specify opaque field object references of the
merged `SnapPeer` object. These can then be locally inherited.
* Dissolve `SnapSyncEx` object extension into `SnapSync`
why:
It is logically cleaner and more obvious not to inherit from
`SnapSyncEx` but to specify opaque field object references of
the `SnapPeer` object. These can then be locally inherited.
Also, in the re-factored code here the interface descriptor
`SnapSyncCtx` inherited `SnapSyncEx` which was sub-optimal (OO
inheritance makes it easier to work with callback functions.)
* new: time_helper, types
* new: path_desc
* new: base_desc
* Re-organised objects inheritance
why:
Previous code used macros to instantiate opaque object references. This
has been re-implemented with OO inheritance based logic.
* Normalised trace macros
* Using distinct types for Hash256 aliases
why:
Better control of the meaning of the hashes, all of the same format
caveat:
The protocol handler DSL used by eth66.nim and snap1.nim uses the
underlying type Hash256 and cannot handle the distinct alias in
rlp and chronicles/log macros. So Hash256 is used directly (this does
not change readability as the type is clear from the parameter names;
see the sketch below.)
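
  A sketch of the pattern (the alias names and the `Hash256` stand-in are illustrative, not the actual type definitions):

```nim
type
  Hash256 = array[32, byte]       # stand-in for the real eth Hash256 type

  # Distinct aliases keep the meaning of each hash apparent in signatures,
  # e.g. a trie root cannot silently be passed where a block hash is expected.
  TrieHash  = distinct Hash256    # Merkle hash of a trie node or root
  BlockHash = distinct Hash256    # hash of a block header

proc untie[T: TrieHash | BlockHash](h: T): Hash256 =
  ## Back to the underlying type where the protocol handler DSL, rlp or
  ## chronicles macros need plain Hash256.
  Hash256(h)
```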
* Use type names `eth` and `snap` (rather than `snap1`)
* Prettified snap/eth handler trace messages
* Regrouped sync sources
details:
Snap storage related sources are moved to common directory.
Option --new-sync renamed to --snap-sync
also:
Normalised logging for secondary/non-protocol handlers.
* Merge protocol wrapper files => protocol.nim
details:
Merge wrapper sync/protocol_ethxx.nim and sync/protocol_snapxx.nim
into single file snap/protocol.nim
* Comments cosmetics
* Similar start logic for blockchain_sync.nim and sync/snap.nim
* Renamed p2p/blockchain_sync.nim -> sync/fast.nim
why:
Accidentally wrapped into a `waitFor()` directive when reviving the
`jl/sync` branch.
also:
Decorate eth/66 and snap/1 protocol trace messages with protocol
type and version
* Squashed snap-sync-preview patch
why:
Providing end results makes it easier to have an overview.
Collected patch set comments are available as nimbus/sync/ChangeLog.md
in chronological order, oldest first.
* Removed some cruft and obsolete imports, normalised logging