* Update sync scheduler pool mode
why:
The pool mode allows looping over active peers one after another. This
is ideal for soft re-starting peers. As this is a two-tier process
(start/stop, setup/release), the loop must be run twice. This is
controlled by a more rigid re-definition of how to use the `poolMode`
flag.
* Mitigate RLP serialiser deficiency
why:
Currently, serialising a `BlockBody` is not round-trip convertible, so
it needs to be checked in the `eth` module. For now, a local fix for
the wire protocol applies. Unit tests will stay (even after this local
solution has been removed.)
* Code cosmetics and massage
details:
The main part is `types.toStr()`, a unified function for logging block
numbers.
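sketch:
A minimal illustration of the idea, not the actual nimbus code (the
real helper works on the nimbus `BlockNumber` type; plain `uint64` is
used here):

  proc toStr(bn: uint64): string =
    ## Unified rendering of block numbers in log messages.
    if bn == high(uint64): "high" else: "#" & $bn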
* Allow using a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history currently only affects the score values
which are derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start-of-history facility, this allows proceeding
with full syncing once snap has finished.
details:
Snap db storage was switched from sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
For debugging on smaller databases only.
* Implement snap -> full sync switch
* Somewhat tighten error handling
why:
The zombie state is invoked when the current peer turns out to be
useless for further communication. While there is a chance to talk to
a peer again about another topic (aka healing) after some protocol
failure, it makes no sense to do so after a network problem.
The latter state is indicated by the `peerDegraded` flag that goes
together with the `zombie` state flag. A degraded peer is dropped
immediately.
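sketch:
The intended control flow, roughly (the `zombie` and `peerDegraded`
flags are named in this entry; the surrounding type is illustrative):

  type BuddyCtrl = object
    zombie: bool        # peer is useless for the current task
    peerDegraded: bool  # network level problem, never retry

  proc flagFailure(ctrl: var BuddyCtrl; networkProblem: bool) =
    ctrl.zombie = true          # stop talking about this topic
    if networkProblem:
      ctrl.peerDegraded = true  # drop the peer immediately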
* Remove `--sync-mode=snapCtx` option, always start snap in recovery mode
why:
There is no need for a snap sync option without recovery mode; the
same effect can be achieved by deleting the database.
* Code cosmetics, typos, prettify logging, debugging helper, etc.
* Split off snap sync sub-mode handler into separate modules
details:
The original `worker.nim` source has become a multiplexer for the
snap sync sub-modes `full` and `snap`. The source modules of the
incarnations of a particular sync sub-mode are placed into the
`worker/play` directory.
* Update ticker for snap and full sync logging
* Fix fringe condition for `GetStorageRanges` message handler
why:
Receiving a proved empty range was not considered at all. This led to
inconsistencies of the return value which caused subsequent errors.
* Update storage range bulk download
details:
Mainly re-org of storage queue processing in `storage_queue_helper.nim`
* Update logging variables/messages
* Update storage slots healing
details:
Mainly clean up after improved helper functions from the sources
`find_missing_nodes.nim` and `storage_queue_helper.nim`.
* Simplify account fetch
why:
Too much fuss was made tolerating some errors. An overall strategy
will be implemented to orchestrate the interplay of the download and
healing functions.
* Add error resilience to the interplay of download and healing.
why:
The idea is that a peer might stop serving snap/1 accounts and storage
slot downloads while still able to support fetching nodes for healing.
* Update nearby/neighbour leaf nodes finder
details:
Update return error codes so that in the case that there is no more
leaf node beyond the search direction, the particular error code
`NearbyBeyondRange` is returned.
* Compile largest interval range containing only this leaf point
why:
Will be needed in snap sync for adding single leaf nodes to the range
of already allocated nodes.
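sketch:
A scaled-down analogue with `uint64` keys standing in for 256 bit
hash keys (the helper name is made up): given the sorted keys of the
known leaf nodes, the largest interval containing only `leaf` runs
from just above its predecessor to just below its successor.

  import std/algorithm

  proc soleLeafInterval(keys: seq[uint64]; leaf: uint64): (uint64, uint64) =
    let i = keys.lowerBound(leaf)   # `keys` sorted, `leaf` present
    doAssert i < keys.len and keys[i] == leaf
    let lo = if i == 0: 0'u64 else: keys[i-1] + 1
    let hi = if i == keys.len-1: high(uint64) else: keys[i+1] - 1
    (lo, hi)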
* Reorg `hexary_inspect.nim`
why:
Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_inspect.inspectTrieImpl()`
* Update fetching accounts range failure handling in `rangeFetchAccounts()`
why:
A rejected response now leads to fetching another account range. Only
repeated failures (or all work done) terminate the algorithm.
* Update accounts healing
why:
+ Fixed looping over a bogus node response that could not be inserted
into the database. As a solution, these nodes are locally registered
and not asked for again in this download cycle.
+ Sub-optimal handling of the interval range for a healed account leaf
node. Now the maximal range interval containing this node is
registered as processed, which leads to de-fragmentation of the
processed (and unprocessed) range list(s). So *gap* ranges which are
known not to cover any account leaf node are not asked for on the
network anymore.
+ Sporadically remove empty interval ranges (if any)
* Update logging, better variable names
* Clean up some function prototypes
why:
Simplify polymorphic prototype variances for easier maintenance.
* Fix fringe condition crash when importing bogus RLP node
why:
Accessing a non-list RLP entry as a list causes a `Defect`
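sketch:
The defensive pattern, assuming the nim-eth `rlp` accessors `isList`
and `listLen`: check the item type first so a bogus node flags the
peer instead of raising a `Defect`.

  import eth/rlp

  proc safeListLen(data: openArray[byte]): int =
    let item = rlpFromBytes(data)
    if not item.isList:
      return -1       # bogus node, reject instead of crashing
    item.listLen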
* Fix left boundary proof at range extractor
why:
It was insufficient. The main problem was that there was no unit test
for the validity of the generated left boundary.
* Handle incomplete left boundary proofs early
why:
Attempting to do it later leads to overly complex code in order to
prevent looping when the same peer repeatedly sends the same incomplete
proof. By contrast, gaps in the leaf sequence can be handled gracefully
by registering the gaps.
* Implement a manual pivot setup mechanism for snap sync
why:
For a test scenario it is convenient to set the pivot to something
lower than the beacon header from the consensus layer. This does not
need to rely on any RPC mechanism.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
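sketch:
The polling idea, with a made-up one-number-per-file layout (the real
parser handles more fields): only re-read the file when its
modification time has changed.

  import std/[os, options, strutils, times]

  proc readPivotBlock(path: string; lastMod: var Time): Option[uint64] =
    if not path.fileExists:
      return none(uint64)
    let mtime = path.getLastModificationTime
    if mtime == lastMod:
      return none(uint64)       # unchanged since the last poll
    lastMod = mtime
    try:
      some(path.readFile.strip.parseBiggestUInt.uint64)
    except ValueError, IOError:
      none(uint64)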
* Fix calculation error
why:
Prevent calculating the square root of a negative number
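sketch:
One way to implement such a guard (the actual fix may differ): clamp
the radicand so rounding noise in an intermediate term cannot push it
below zero.

  import std/math

  proc safeSqrt(x: float): float =
    sqrt(max(x, 0.0))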
* Enable `snap/1` accounts range service
* Allow changing the garbage collector to `boehm` as a Makefile option.
why:
There is still an unsolved memory corruption problem that might be
related to the standard `gc`. It seemingly goes away if the `gc` is
changed to `boehm`.
Specifying another `gc` on the make level simplifies debugging and
development.
* Code cosmetics
details:
* updated exception annotations
* extracted `worker_desc.nim` from `full/worker.nim`
* etc.
* Implement option to state a sync modifier file
why:
This allows specifying extra sync-type specific options which might
change over time. This file is regularly checked for updates.
* Implement a threshold when to suspend full syncing
why:
For a test scenario, a full sync peer may work as a local snap server.
There is no need to download the full block chain.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
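sketch:
The check itself is tiny (names are illustrative, not from the
sources): full sync suspends once the imported chain reaches the
configured ceiling.

  proc shouldSuspend(currentBlock, suspendAt: uint64): bool =
    suspendAt != 0 and currentBlock >= suspendAt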
* Fix locked database file annoyance with unit tests on Windows
why:
Need to clean up old files first from previous session as files remain
locked despite closing of database.
* Fix initialisation order
details:
Apparently this has no real effect as the ticker is only initialised
here but started later.
This possible bug has been there all along and was running fine with
the previous compiler and libraries.
* Better naming of data fields for sync descriptors
details:
* BuddyRef[S,W]: buddy.data -> buddy.only
* CtxRef[S]: ctx.data -> ctx.pool
* Silence some compiler gossip -- part 1, tx_pool
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 2, clique
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 3, misc core
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 4, sync
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
Missing exception annotation
* Update comments and test noise
* Fix boundary proofs
why:
They were neither used in production nor unit tested. For production,
other methods apply that test leaf range integrity directly, based on
the proof nodes.
* Added `hexary_range()`: interval range + proof extractor
details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)
todo:
Need to verify completeness of proof nodes
* Reduce some nim 1.6 compiler noise
* Stop unit test gossip for ci tests
Two unresolved items currently:
- Three tests that are temporarily disabled as they fail in the
macro_assembler code, which seems to be due to an ambiguous
identifier Stop (Ops and chronos ServerCommand enum).
- i386 CI disabled as it fails at Nim compilation already. Failed
tests were already ignored for this target.
* Simplify pivot update
why:
No need to fetch the pivot header from the network when it can be
made available in the pivot cache
also:
Keep `txPool` update disabled while syncing
* Cosmetics, tune down some logging noise
* Support `snap/1` without `eth/6?`
why:
Eth is not needed here.
* Snap is an (optional) extension of `eth`
so:
So it must be supported somehow. Nevertheless, it currently remains
unused in the snap syncer.
* Register external beacon stream header
why:
This will be used to sync the peers against.
* Update total coverage book-keeping for 100% roll-over
details:
Provide commonly available/used function
* Replace best pivot by beacon stream tracker
details:
Beacon stream header cache will be updated by external chain monitor via
RPC. This cached header will then be used to sync the pivot.
* Simplify accounts healing threshold management
why:
Was over-engineered.
details:
Previously, healing was based on recursive hexary trie perusal.
Due to the "cheap" envelope decomposition of a range complement for
the hexary trie, the cost of running extra laps has become affordable
again, and a simple trigger mechanism for healing will do.
* Control number of dangling result nodes in `hexaryInspectTrie()`
also:
+ Returns the number of visited nodes for logging, so the maximum
number of nodes can be tuned accordingly.
+ Some code and docu update
* Update names of constants
why:
Declutter, more systematic naming
* Re-implemented `worker_desc.merge()` for storage slots
why:
Provided as proper queue management in `storage_queue_helper`.
details:
+ Several append modes (replaces `merge()`)
+ Added a third queue to record entries currently fetched by a worker,
so another worker running in parallel can save the complete set of
storage slots as a checkpoint. This was previously lost.
* Refactor healing
why:
Simplify, and remove the deep hexary trie perusal for finding
completeness.
Due to the "cheap" envelope decomposition of a range complement for
the hexary trie, the cost of running extra laps has become affordable
again, and a simple trigger mechanism for healing will do.
* Docu update
* Run a storage job only once in download loop
why:
Otherwise, a download failure or rejection (i.e. missing data) leads
to repeated fetch requests until the peer disconnects.
* Relocated mothballing (i.e. swap-in preparation) logic
details:
Mothballing was previously tested & started after downloading
account ranges in `range_fetch_accounts`.
Whenever current download or healing stops because of a pivot change,
swap-in preparation is needed (otherwise some storage slots may get
lost when swap-in takes place.)
Also, `execSnapSyncAction()` has been moved back to `pivot_helper`.
* Reorganised source file directories
details:
Grouped pivot focused modules into `pivot` directory
* Renamed `checkNodes`, `sickSubTries` as `nodes.check`, `nodes.missing`
why:
Both lists are typically used together as a pair. Renaming
`sickSubTries` reflects moving away from a healing-centric view
towards a swap-in attitude.
* Multi times coverage recording
details:
Per-pivot account ranges are accumulated into a coverage range set.
This set will eventually contain a single range of account hashes
[0..2^256] which amounts to 100% capacity.
A counter has been added that is incremented whenever max capacity is
reached. The accumulated range is then reset to empty.
The effect of this setting is that coverage can be evenly duplicated,
so 200% would not accumulate on a particular region.
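sketch:
A scaled-down model of the book-keeping, with plain counters standing
in for range sets over the 2^256 account hash space:

  type Coverage = object
    covered, capacity: uint64   # accumulated points vs. full range
    rollOvers: int              # completed 100% laps

  proc accumulate(c: var Coverage; points: uint64) =
    # assumes capacity > 0; total = rollOvers * 100% + covered/capacity
    c.covered += points
    while c.covered >= c.capacity:
      c.covered -= c.capacity   # reset to empty on max capacity
      inc c.rollOvers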
* Update range length comparisons (mod 2^256)
why:
A range interval can have size 1..2^256 as it cannot be empty by
definition. A set of range intervals can contain 0..2^256 points. As
the scalar range is a residue class modulo 2^256, the residue class 0
means size 2^256 for a single range interval, but can mean either 0 or
2^256 for the number of points in a range interval set.
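sketch:
The same arithmetic scaled down to mod 2^8: the full interval wraps
to size 0, which for a single interval must always be read as 256.

  proc intervalSize(lo, hi: uint8): uint64 =
    # size = hi - lo + 1 (mod 2^8); [0,255] gives 255 - 0 + 1 == 0
    let s = uint64(hi - lo + 1)
    if s == 0: 256'u64 else: s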
* Generalised `hexaryEnvelopeDecompose()`
details:
Compile the complement of the union of some (processed) intervals and
express this complement as a list of envelopes of sub-tries.
This facility is directly applicable to swap-in book-keeping.
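sketch:
A conceptual analogue on plain integer intervals (the real code works
on hexary trie envelopes): list the gaps of [lo,hi] not covered by the
sorted, non-overlapping `processed` intervals.

  import std/algorithm

  proc complement(processed: seq[(uint64, uint64)];
                  lo, hi: uint64): seq[(uint64, uint64)] =
    var cursor = lo
    for (a, b) in processed.sorted:
      if a > hi: break
      if cursor < a:
        result.add (cursor, a - 1)
      if b >= hi: return        # covered up to the end
      cursor = max(cursor, b + 1)
    result.add (cursor, hi)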
* Re-factor `swapIn()`
why:
Good idea but baloney implementation. The main algorithm is based on
the generalised version of `hexaryEnvelopeDecompose()` which has been
derived from this implementation.
* Refactor `healAccounts()` using `hexaryEnvelopeDecompose()` as main driver
why:
Previously, the hexary trie was searched recursively for dangling
nodes, which has poor worst-case performance once the trie is
reasonably populated.
The function `hexaryEnvelopeDecompose()` is a magnitude faster because
it does not peruse existing sub-tries in order to find missing nodes,
although its result is not fully compatible with the previous
function's. So recursive search is used in a limited mode only, when
the decomposer does not deliver a useful result.
* Logging & maintenance fixes
details:
Preparation for abandoning the buddy-global healing variables `node`,
`resumeCtx`, and `lockTriePerusal`. These variables are trie-perusal
centric, an approach put on the back burner in favour of
`hexaryEnvelopeDecompose()` which is already used for accounts
healing.
* Miscellaneous updates TBC
* Disentangled pivot2 module from snap
why:
It was written as a template on top of sync so it can be shared by
fast and snap sync.
* Renamed and relocated pivot sources
* Integrated `best_pivot` module into full and snap sync
why:
Full sync used an older version of `best_pivot`
* Isolate download module from full sync
why:
It might be shared with snap sync at a later stage
* Re-implemented `hexaryFollow()` in a more general fashion
details:
+ New name for re-implemented `hexaryFollow()` is `hexaryPath()`
+ Renamed `rTreeFollow()` as `hexaryPath()`
why:
Returning similarly organised structures, the results of the
`hexaryPath()` functions become comparable when running over
the persistent and the in-memory databases.
* Added traversal functionality for persistent ChainDB
* Using `Account` values as re-packed Blob
* Repack samples as compressed data files
* Produce test data
details:
+ Can force pivot state root switch after minimal coverage.
+ For emulating certain network behaviour, downloading accounts stops
for a particular pivot state root if 30% (some static number) coverage
is reached. Subsequent accounts are downloaded for a later pivot state
root.
* Provided common scheduler API, applied to `full` sync
* Use hexary trie as storage for proofs_db records
also:
+ Store metadata with each account for keeping track of the account state
+ Add an iterator over accounts
* Common scheduler API applied to `snap` sync
* Prepare for accounts bulk import
details:
+ Added some ad-hoc checks for proving accounts data received via the
snap/1 protocol (will be replaced by a proper database version when
ready)
+ Added code that dumps some of the received snap/1 data into a file
(turned off by default, see `worker_desc.nim`)