Commit Graph

13 Commits

Jordan Hrycaj 44a57496d9
Snap sync interval complement method to speed up trie perusal (#1328)
* Add quick hexary trie inspector, called `dismantle()`

why:
+ Full hexary trie perusal is slow when it has to run down all leaf nodes
+ For a known range of leaf nodes, work out the UInt256-complement of
  partial sub-trie paths (for existing nodes). The result should cover
  no (or only a few) sub-tries with leaf nodes.
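
A minimal Python sketch of the complement idea (plain integers stand in
for the UInt256 node tags; the function name is illustrative, not from
the Nim sources):

```python
# Sketch: complement of a set of covered key ranges over a fixed key space.
# The real dismantle() works on hexary trie paths; integers stand in here.

KEY_SPACE_MAX = 2**256 - 1  # high(UInt256)

def complement(covered: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Return the ranges NOT covered by the (inclusive) input intervals."""
    gaps, cursor = [], 0
    for lo, hi in sorted(covered):
        if cursor < lo:
            gaps.append((cursor, lo - 1))
        cursor = max(cursor, hi + 1)
    if cursor <= KEY_SPACE_MAX:
        gaps.append((cursor, KEY_SPACE_MAX))
    return gaps

# Two covered leaf ranges leave three gaps that still need inspection.
print(complement([(10, 20), (40, 50)]))
```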

* Extract common healing methods => `sub_tries_helper.nim`

details:
  Also apply quick hexary trie inspection tool `dismantle()`
  Replace `inspectAccountsTrie()` wrapper by `hexaryInspectTrie()`

* Re-arrange task dispatching in main peer worker

* Refactor accounts and storage slots downloaders

* Rename `HexaryDbError` => `HexaryError`
2022-11-28 09:03:23 +00:00
Jordan Hrycaj 7688148565
Snap sync can start on saved checkpoint (#1327)
* Stop negotiating pivot if peer repeatedly replies w/useless answers

why:
  There is some fringe condition where a peer replies with legit but
  useless empty headers repeatedly. This goes on until somebody stops.
  We stop now.
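
The stop rule, as a hedged Python sketch (the threshold value and all
names are invented for illustration, not taken from the Nim sources):

```python
# Sketch: give up pivot negotiation after a few useless (empty but legit)
# replies in a row. The limit of 3 is a made-up illustration value.

USELESS_REPLY_LIMIT = 3

class PivotNegotiator:
    def __init__(self) -> None:
        self.useless_replies = 0

    def on_reply(self, headers: list) -> bool:
        """Return False when negotiation with this peer should stop."""
        if not headers:
            self.useless_replies += 1
            return self.useless_replies < USELESS_REPLY_LIMIT
        self.useless_replies = 0  # a useful reply resets the counter
        return True
```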

* Rename `missingNodes` => `sickSubTries`

why:
  These (probably missing) nodes represent in reality fully or partially
  missing sub-tries. The top nodes may even exist, e.g. as a shallow
  sub-trie.

also:
  Keep track of account healing on/off via the bool variable `accountsHealing`
  controlled in `pivot_helper.execSnapSyncAction()`

* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)

also:
+ Trigger the recovery (or similar) process from inside the global peer
  worker initialisation `worker.setup()` and not by the `snap.start()`
  function.
+ Have `runPool()` return a `bool` code to indicate an early stop to
  the scheduler.

* Can import partial snap sync checkpoint at start

details:
 + Modified what is stored with the checkpoint in `snapdb_pivot.nim`
 + Will be loaded within `runDaemon()` if activated

* Forgot to import total coverage range

why:
  Only the top (or latest) pivot needs coverage, but the total coverage
  is the list of all ranges for all pivots -- simply forgotten.
2022-11-25 14:56:42 +00:00
Jordan Hrycaj bba1bea4c8
Snap sync state save (#1302)
* Piecemeal trie inspection

details:
  Trie inspection will stop after maximum number of nodes visited.
  The inspection can be resumed using the returned state from the
  last session.

why:
  This feature allows for task switching between `piecemeal` sessions.
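
The resumable pattern, sketched in Python as a breadth-first walk with
a node budget (the real inspector works on hexary tries; all names here
are illustrative):

```python
# Sketch: resumable trie inspection. The returned frontier can be fed
# back in to resume exactly where the previous session stopped.

from collections import deque

def inspect(trie: dict, start: list, budget: int) -> tuple[list, list]:
    """Visit at most `budget` nodes; return (visited, frontier-to-resume)."""
    frontier = deque(start)
    visited = []
    while frontier and len(visited) < budget:
        node = frontier.popleft()
        visited.append(node)
        frontier.extend(trie.get(node, ()))  # enqueue unvisited children
    return visited, list(frontier)

# First piecemeal session ...
trie = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
done, state = inspect(trie, ["root"], budget=2)
# ... a later session resumes from the saved state.
more, state = inspect(trie, state, budget=2)
```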

* Extract pivot helper code from `worker.nim` => `pivot_helper.nim`

* Accounts import will now return dangling paths from `proof` nodes

why:
  With proper bookkeeping, this can be used to start healing without
  analysing the probably full trie.

* Update `unprocessed` account range handling

why:
  More generally, the API for the pair of unprocessed interval sets
  favours the first set; only when that one is exhausted does the second
  set come into play.

  This was unfortunately not what the implementation did, which caused
  the ranges to be unnecessarily fragmented. Now the number of range
  intervals typically remains in the low single digits.
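
A Python sketch of the favoured/fallback pair of interval sets
(illustrative only; the actual Nim API differs in detail):

```python
# Sketch: two unprocessed range sets. The primary set is drained first;
# ranges merged back after failures land in the secondary set only, so
# they cannot fragment the main queue.

class TodoRanges:
    def __init__(self, full_range: tuple[int, int]) -> None:
        self.primary = [full_range]   # untouched work
        self.secondary = []           # ranges merged back after a failure

    def fetch(self) -> tuple[int, int] | None:
        if self.primary:
            return self.primary.pop(0)
        if self.secondary:
            return self.secondary.pop(0)
        return None

    def merge_back(self, rng: tuple[int, int]) -> None:
        self.secondary.append(rng)
```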

* Save sync state after end of downloading some accounts

details:
  restore/resume to be implemented later
2022-11-16 23:51:06 +00:00
jangko 9dd256cae7
eth wire handlers: implement transactions exchange 2022-11-11 09:24:32 +07:00
Jordan Hrycaj e14fd4b96c
Prep for full sync after snap make 6 (#1291)
* Update log ticker, using time interval rather than ticker count

why:
  Counting and logging ticker occurrences is inherently imprecise, so
  time intervals are used instead.

* Use separate storage tables for snap sync data

* Left boundary proof update

why:
  It was not properly implemented yet.

* Capture pivot in peer worker (aka buddy) tasks

why:
  The pivot environment is linked to the `buddy` descriptor. While
  there is a task switch, the pivot may change. So it is passed on as
  function argument `env` rather than retrieved from the buddy at
  the start of a sub-function.

* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`

* Remove obsolete account range returned from `GetAccountRange` message

why:
  The handler returned a wrong right boundary value for the range. The
  range was provided for convenience, only.

* Prioritise storage slots if the queue becomes large

why:
  Currently, accounts processing is prioritised up until all accounts
  are downloaded. The new prioritisation has two thresholds for
  + start processing storage slots with a new worker
  + stop account processing and switch to storage processing

also:
  Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`
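
The two-threshold scheme from the why-note above could look like this
in a Python sketch (both threshold values and all names are invented
for illustration):

```python
# Sketch: decide whether a worker should fetch accounts or storage slots,
# based on the current length of the storage-slots queue.

STORAGE_QUEUE_START = 5_000   # hypothetical: start a storage worker above this
STORAGE_QUEUE_STOP = 20_000   # hypothetical: pause account fetching above this

def pick_task(storage_queue_len: int, accounts_done: bool) -> str:
    if accounts_done or storage_queue_len > STORAGE_QUEUE_STOP:
        return "storage-slots"   # drain the queue before fetching more accounts
    if storage_queue_len > STORAGE_QUEUE_START:
        return "storage-slots"   # an extra worker helps out
    return "accounts"            # default priority
```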

* Generalise left boundary proof for accounts or storage slots.

why:
  Detailed explanation how this works is documented with
  `snapdb_accounts.importAccounts()`.

  Instead of enforcing a left boundary proof (which is still the default),
  the importer functions return a list of `holes` (aka node paths) found in
  the argument ranges of leaf nodes. This in turn is used by the
  bookkeeping software for data download.

* Forgot to pass on variable in function wrapper

also:
  + Start healing not before 99% of accounts are covered (previously 95%)
  + Logging updated/prettified
2022-11-08 18:56:04 +00:00
Jordan Hrycaj a689e9185a
Prep for full sync after snap make 5 (#1286)
* Update docu and logging

* Extracted and updated constants from `worker_desc` into a separate file

* Update and re-calibrate communication error handling

* Allow simplified pivot negotiation

why:
  This feature allows turning off the pivot negotiation by which peers
  agree on a pivot header.

  For snap sync with fast-changing pivots, negotiation only throttles
  the sync process. The finally downloaded DB snapshot is typically a
  merged version of different pivot states, augmented by a healing
  process.

* Re-model worker queues for accounts download & healing

why:
  Currently there is only one data fetch per download or healing task.
  This task is then repeated by the scheduler after a short time. In
  many cases, this short time seems to be enough for some peers to decide
  to terminate the connection.
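
A sketch of the remodelled batching (an illustrative asyncio
pseudo-worker, not the Nim scheduler):

```python
# Sketch: instead of one data fetch per scheduler invocation, a task keeps
# the peer connection busy with several fetches in a row.

import asyncio

async def download_task(queue: asyncio.Queue, fetch, batch_max: int = 10):
    """Run up to `batch_max` fetches before yielding back to the scheduler."""
    for _ in range(batch_max):
        if queue.empty():
            break
        work = queue.get_nowait()
        await fetch(work)  # the peer stays busy between work items
```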

* Update main task batch `runMulti()`

details:
  The function `runMulti()` is activated in quasi-parallel mode by the
  scheduler. This function calls the download, healing and fast-sync
  functions.

  While in debug mode, after each set of jobs run by this function the
  database is analysed (by the `snapdb_check` module) and the result
  printed.
2022-11-01 15:07:44 +00:00
Jordan Hrycaj a8df4c1165
Fix trie inspector for healing (#1284)
* Update logging

* Fix node hash associated with partial path for missing nodes

why:
  Healing uses the partial paths for fetching nodes from the network. The
  node hash (or key) is used to verify the node data retrieved.

  When a missing node was detected, the trie inspector function returned
  the parent hash, rather than the node hash, along with the partial
  path. So all nodes offered for healing were rejected.
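
The bug and fix, reduced to a Python sketch (toy trie structures; only
the commented-out line reflects the old behaviour):

```python
# Sketch: walk a toy trie (hash -> {nibble: child_hash}) and report each
# missing child by its OWN hash, taken from the parent's branch slot.

def find_missing(trie: dict, root: str, path: str = "") -> list[tuple[str, str]]:
    missing = []
    node = trie.get(root)
    if node is None:
        return missing
    for nibble, child_hash in node.items():      # parent's branch slots
        child_path = path + nibble
        if child_hash not in trie:
            # Bug was: missing.append((child_path, root))   # parent hash!
            missing.append((child_path, child_hash))         # node hash
        else:
            missing += find_missing(trie, child_hash, child_path)
    return missing

trie = {"h-root": {"0": "h-a", "1": "h-missing"}, "h-a": {}}
print(find_missing(trie, "h-root"))  # [('1', 'h-missing')]
```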

* Must not modify sequence while looping over it
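
The pitfall in miniature (Python here, but the rule is the same in Nim):

```python
# Sketch: mutating a sequence while iterating over it skips elements.
# Build the result separately, or iterate over a copy, instead.

items = [1, 2, 2, 3]

# Wrong: removing during iteration silently skips the second `2`.
# for x in items:
#     if x == 2:
#         items.remove(x)

# Right: construct the survivors without touching the scanned sequence.
items = [x for x in items if x != 2]
print(items)  # [1, 3]
```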
2022-10-28 08:26:17 +01:00
Jordan Hrycaj 1b4572ed3b
Prep for full sync after snap make 4 (#1282)
* Re-arrange fetching storage slots in batch module

why:
  Previously, fetching partial slot ranges first had a chance of
  terminating the worker peer (due to a network error) while there were
  many inheritable storage slots on the queue.

  Now, inheritance is checked first, then full slot ranges and finally
  partial ranges.

* Update logging

* Bundled node information for healing into single object `NodeSpecs`

why:
  Previously, partial paths and node keys were kept in separate variables.
  This approach was error-prone due to copying/reassembling function
  argument objects.

  As all partial paths, keys, and node data types are more or less handled
  as `Blob`s over the network (using Eth/6x, or Snap/1) it makes sense to
  hold these `Blob`s as named fields in a single object (even if not all
  fields are active for the current purpose.)
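
A sketch of the bundled record (field names follow the commit text's
intent, not necessarily the Nim definition of `NodeSpecs`):

```python
# Sketch: one record holding the blobs that travel together over the
# wire, instead of loose variables that get copied and reassembled.

from dataclasses import dataclass

@dataclass
class NodeSpecs:
    partial_path: bytes   # compact-encoded hexary path prefix
    node_key: bytes       # hash/key of the node (32 bytes when set)
    data: bytes = b""     # node data, if already fetched

spec = NodeSpecs(partial_path=b"\x12\x34", node_key=b"\x00" * 32)
```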

* For good housekeeping, using `NodeKey` type only for account keys

why:
  previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
  state or storage root keys use the `Hash256` type.

* Always accept latest pivot (and not a slightly older one)

why:
  For testing, a slightly older pivot state root than the latest
  available one was tried. Some anecdotal tests seemed to suggest an
  advantage, in that more peers were willing to serve that older pivot.
  But this could not be confirmed in subsequent tests (still anecdotal,
  though.)

  As a side note, the distance of the latest pivot to its predecessor is
  at least 128 (or whatever the constant `minPivotBlockDistance` is
  assigned to.)

* Reshuffle name components for some file and function names

why:
  Clarifies purpose:
  "storages" becomes: "storage slots"
  "store" becomes: "range fetch"

* Stash away currently unused modules in sub-folder named "notused"
2022-10-27 14:49:28 +01:00
Jordan Hrycaj 82ceec313d
Prettify logging for snap sync environment (#1278)
* Multiple storage batches at a time

why:
  Previously, only a small portion was processed at a time, so the peer
  might have gone away by the time the process was resumed later.

* Renamed some field of snap/1 protocol response object

why:
  What is documented as `slots` is in reality a per-account list of slot
  lists. So the new name `slotLists` better reflects the nature of the
  beast.

* Some minor healing re-arrangements for storage slot tries

why:
  Resolving all complete inherited slot tries first in sync mode keeps
  the worker queues smaller, which improves logging.

* Prettify logging, comments update etc.
2022-10-21 20:29:42 +01:00
Jordan Hrycaj c0d580715e
Remodel persistent snapdb access (#1274)
* Re-model persistent database access

why:
  Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
  key mapping). So get/put and bulk functions now use the definitions
  in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
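
The key-mapping idea as a Python sketch (the `(root, path)` prefixing
scheme here is illustrative, not the actual `snapdb_desc` layout):

```python
# Sketch: one shared backend where every sub-trie is addressed by its own
# key mapping, so healing cannot accidentally write into the wrong trie.

class SubTrieDb:
    def __init__(self) -> None:
        self._kv: dict[tuple[bytes, bytes], bytes] = {}

    def put(self, root: bytes, path: bytes, value: bytes) -> None:
        self._kv[(root, path)] = value  # keys of different tries never clash

    def get(self, root: bytes, path: bytes) -> bytes | None:
        return self._kv.get((root, path))
```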

* Fixes: missing return code, typo, redundant imports etc.

* Remove obsolete debugging directives from `worker_desc` module

* Correct failing unit tests for storage slots trie inspection

why:
  Some pathological cases for the extended tests do not produce any
  hexary trie data. This is rightly detected by the trie inspection,
  and the result checks needed to be adjusted.
2022-10-20 17:59:54 +01:00
Jordan Hrycaj 85fdb61699
Prep for full sync after snap make 3 (#1270)
* For snap sync, publish `EthWireRef` in sync descriptor

why:
  currently used for noise control

* Detect and reuse existing storage slots

* Provide healing module for storage slots

* Update statistic ticker (adding range factor for unprocessed storage)

* Complete merge function for work item ranges

why:
  Merging an interval into an existing partial item was missing
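
The missing merge case, as a Python sketch over inclusive integer
intervals (illustrative only):

```python
# Sketch: merge a new interval into an existing batch of work-item
# ranges, coalescing on overlap or adjacency.

def merge(ivs: list[tuple[int, int]], new: tuple[int, int]) -> list[tuple[int, int]]:
    out = []
    lo, hi = new
    for a, b in sorted(ivs):
        if b + 1 < lo or hi + 1 < a:      # disjoint, not even adjacent
            out.append((a, b))
        else:                             # overlap/adjacent: absorb it
            lo, hi = min(lo, a), max(hi, b)
    out.append((lo, hi))
    return sorted(out)

print(merge([(0, 4), (10, 20)], (5, 12)))  # [(0, 20)]
```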

* Show average storage queue lengths in ticker

detail:
  The previous attempt showed average completeness, which did not tell much

* Correct the meaning of the storage counter (per pivot)

detail:
  It is the number of accounts that have storage saved
2022-10-19 11:04:06 +01:00
Jordan Hrycaj 8c7d91512b
Prep for full sync after snap mark2 (#1263)
* Rename `LeafRange` => `NodeTagRange`

* Replacing storage slot partition point by interval

why:
  The partition point only allows describing slot ranges of the form
  `[point,high(Uint256)]` when fetching interval slot ranges. This has
  been generalised to arbitrary intervals.

* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`

why:
  Generalised healing status for accounts, and later for storage slots.

* Improve accounts healing loop

* Split `snap_db` into accounts and storage modules

why:
  It is cleaner to have separate session descriptors for accounts and
  storage slots (based on a common base descriptor.)

  Also, persistent storage handling might be changed in the future, which
  requires the storage slot implementation to be disentangled from the
  accounts handling.

* Re-model worker queues for storage slots

why:
  There is a dynamic list of storage sub-tries, each of which has to be
  treated similarly to the accounts database. This applies to slot
  interval downloads as well as to healing.

* Compress some return value report lists for snapdb methods

why:
  No need to report all handling details for work items that are filtered
  out and discarded, anyway.

* Remove inner loop frame from healing function

why:
  The healing function runs as a loop body already.
2022-10-14 17:40:32 +01:00
Jordan Hrycaj d53eacb854
Prep for full sync after snap (#1253)
* Split fetch accounts into sub-modules

details:
  There will be separated modules for accounts snapshot, storage snapshot,
  and healing for either.

* Allow to rebase pivot before negotiated header

why:
  Peers seem to have only a limited number of snapshots available. By
  setting back the pivot block header slightly, the chances might be
  higher to find more peers to serve this pivot. Experiments on mainnet
  showed that when setting back too far (tested with 1024), the chances
  of finding matching snapshot peers seem to decrease.

* Add accounts healing

* Update variable/field naming in `worker_desc` for readability

* Handle leaf nodes in accounts healing

why:
  There is no need to fetch accounts when they have already been added
  by the healing process. On the flip side, these accounts must be
  checked for storage data and the batch queue updated accordingly.

* Reorganising accounts hash ranges batch queue

why:
  The aim is to formally cover as many accounts as possible for different
  pivot state root environments. Formerly, this was tried by starting the
  accounts batch queue at a random value for each pivot (and wrapping
  around.)

  Now, each pivot environment starts with an interval set mutually
  disjoint from any interval set retrieved with other pivot state roots.

also:
  Stop fishing for more pivots in `worker` if 100% download is reached
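
A Python sketch of the mutually disjoint start-up batches described
above (single global coverage list, inclusive bounds; this reuses the
complement idea sketched under `dismantle()` earlier, and all names are
illustrative):

```python
# Sketch: start each new pivot with ranges disjoint from everything any
# earlier pivot has already fetched, rather than at a random point.

KEY_MAX = 2**256 - 1
total_coverage: list[tuple[int, int]] = []   # fetched under any pivot

def fresh_pivot_ranges() -> list[tuple[int, int]]:
    """Everything not yet covered, as the new pivot's starting batch."""
    gaps, cursor = [], 0
    for lo, hi in sorted(total_coverage):
        if cursor < lo:
            gaps.append((cursor, lo - 1))
        cursor = max(cursor, hi + 1)
    if cursor <= KEY_MAX:
        gaps.append((cursor, KEY_MAX))
    return gaps
```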

* Reorganise/update accounts healing

why:
  Error handling was wrong, and the (mathematical complexity of the)
  whole process could be better managed.

details:
  Much of the algorithm is now documented at the top of the file
  `heal_accounts.nim`
2022-10-08 18:20:50 +01:00