Commit Graph

10 Commits

Author SHA1 Message Date
Jordan Hrycaj 0a3bc102eb
Pre-functional snap to full sync (#1546)
* Update sync scheduler pool mode

why:
  The pool mode allows looping over active peers one after another,
  which is ideal for soft re-starting peers. As this is a two-tier
  process (start/stop, setup/release), the loop must be run twice.
  This is controlled by a more rigid re-definition of how the
  `poolMode` flag is used.
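
A minimal sketch of the two-lap pool loop, in Nim, with made-up names
(the real scheduler types are not shown here):

```nim
type
  PoolTier = enum
    tierStopStart     # first lap: stop, then soft re-start each peer
    tierReleaseSetup  # second lap: release and re-setup peer state

proc runPoolMode(peers: seq[string]) =
  ## Hypothetical sketch: `poolMode` makes the scheduler loop over
  ## all active peers, one lap per tier, hence the loop runs twice.
  for tier in [tierStopStart, tierReleaseSetup]:
    for peer in peers:
      case tier
      of tierStopStart:    echo "stop/start: ", peer
      of tierReleaseSetup: echo "release/setup: ", peer

runPoolMode(@["peerA", "peerB"])
```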

* Mitigate RLP serialiser deficiency

why:
  Currently, the `BlockBody` serialisation is not cleanly convertible
  and needs to be checked in the `eth` module. For now, a local fix
  for the wire protocol applies. The unit tests will stay (even after
  this local solution has been removed.)
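
A hedged sketch of the kind of round-trip check meant here, using
nim-eth's `rlp` module on a simplified stand-in type (not the real
`BlockBody`):

```nim
import eth/rlp  # nim-eth dependency

type
  MiniBody = object   # simplified stand-in for `BlockBody`
    txs: seq[string]
    uncles: seq[string]

let body = MiniBody(txs: @["tx0", "tx1"], uncles: @[])
let wire = rlp.encode(body)           # serialise
let back = rlp.decode(wire, MiniBody) # deserialise again

# the round trip must be loss-free, field by field
doAssert back.txs == body.txs and back.uncles == body.uncles
```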

* Code cosmetics and massage

details:
  The main part is `types.toStr()`, a unified function for logging
  block numbers.
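
For illustration, a tiny sketch of what such a unified formatter might
look like (the shown body is an assumption, not the repo's code):

```nim
type BlockNumber = uint64  # simplified stand-in for the real type

proc toStr(bn: BlockNumber): string =
  ## Hypothetical sketch: one canonical rendering for log messages.
  "#" & $bn

doAssert BlockNumber(12345).toStr == "#12345"
```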

* Allow using a logical genesis replacement (start of history)

why:
  Snap sync will set up an arbitrary pivot at a block number different
  from zero. In fact, the higher the block number, the better.

details:
  A non-genesis start of history currently only affects the score
  values, which are derived from the difficulty.

* Provide function to store the snap pivot block header in chain db

why:
  Together with the start-of-history facility, this allows full
  syncing to proceed once snap sync has finished.

details:
  Snap db storage was switched from sub-tables to the flat chain db.

* Provide database completeness and sanity checker

details:
  For debugging on smaller databases only.

* Implement snap -> full sync switch
2023-04-14 23:28:57 +01:00
Jordan Hrycaj c01045c246
Update snap client account healing (#1521)
* Update nearby/neighbour leaf nodes finder

details:
  Update return error codes so that, when there is no leaf node left
  beyond the search direction, the particular error code
  `NearbyBeyondRange` is returned.

* Compile largest interval range containing only this leaf point

why:
  Will be needed in snap sync for adding single leaf nodes to the range
  of already allocated nodes.
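
A sketch of the idea with 64-bit stand-ins for the 256-bit node tags
(names hypothetical): the widest interval containing only this leaf is
bounded by its nearest neighbour leaves on either side.

```nim
proc soleLeafInterval(leftNbr, leaf, rightNbr: uint64): (uint64, uint64) =
  ## Hypothetical sketch: largest interval around `leaf` that cannot
  ## contain any other leaf, given its nearest neighbours.
  doAssert leftNbr < leaf and leaf < rightNbr
  (leftNbr + 1, rightNbr - 1)

doAssert soleLeafInterval(90, 100, 120) == (91'u64, 119'u64)
```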

* Reorg `hexary_inspect.nim`

why:
 Merged the node collecting algorithms for the persistent and the
 in-memory database into a single generic function
 `hexary_inspect.inspectTrieImpl()`

* Update fetching accounts range failure handling in `rangeFetchAccounts()`

why:
  A rejected response now leads to fetching another account range.
  Only repeated failures (or full completion) terminate the algorithm.
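
The control flow, sketched with a boolean result list standing in for
network replies (everything here is a simplification):

```nim
proc rangeFetchAccounts(fetchOk: seq[bool]): bool =
  ## Hypothetical sketch: a rejected response just moves on to the
  ## next account range; only repeated failures end the loop early.
  const maxFailures = 3
  var failures = 0
  for ok in fetchOk:          # each entry: outcome of one range fetch
    if ok:
      failures = 0            # success resets the failure counter
    else:
      inc failures
      if failures >= maxFailures:
        return false          # give up after repeated failures only
  true                        # all ranges processed

doAssert rangeFetchAccounts(@[true, false, true, false, false])
doAssert not rangeFetchAccounts(@[false, false, false])
```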

* Update accounts healing

why:
+ Fixed looping over a bogus node response that could not be inserted
  into the database. As a solution, these nodes are locally registered
  and not asked for again in this download cycle.
+ Fixed sub-optimal handling of the interval range for a healed
  account leaf node. Now the maximal range interval containing this
  node is registered as processed, which leads to de-fragmentation of
  the processed (and unprocessed) range list(s). So *gap* ranges which
  are known not to cover any account leaf node are no longer requested
  on the network. (A sketch of the de-fragmentation follows below.)
+ Sporadically remove empty interval ranges (if any)
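
A self-contained sketch of such de-fragmentation on simplified 64-bit
intervals (the real code uses an interval set module; this version is
an assumption for illustration):

```nim
import algorithm

type Iv = tuple[lo, hi: uint64]

proc addProcessed(ivs: var seq[Iv]; iv: Iv) =
  ## Sketch: register a processed range, then coalesce adjacent or
  ## overlapping entries so the list stays de-fragmented.
  ivs.add iv
  ivs.sort(proc(a, b: Iv): int = cmp(a.lo, b.lo))
  var merged: seq[Iv]
  for x in ivs:
    if merged.len > 0 and x.lo <= merged[^1].hi + 1:
      merged[^1].hi = max(merged[^1].hi, x.hi)  # absorb into last
    else:
      merged.add x
  ivs = merged

var done = @[(lo: 0'u64, hi: 9'u64), (lo: 20'u64, hi: 29'u64)]
done.addProcessed((lo: 10'u64, hi: 19'u64))
doAssert done == @[(lo: 0'u64, hi: 29'u64)]
```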

* Update logging, better variable names
2023-03-25 10:44:48 +00:00
Jordan Hrycaj 33023aaf39
Update snap server client test scenario (#1518)
* Redesign snap1 message GetTrieNodes argument prototypes

why:
  A list of sub-objects `seq[SnapTriePath]` is more intuitive to work
  with than an opaque definition `seq[seq[Blob]]` because the inner
  `SnapTriePath` object has a dedicated structure (specifying how to
  interpret `seq[Blob]`.)
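
The rough shape of the new argument type, sketched below; the field
names are assumptions rather than verified repo code:

```nim
type
  Blob = seq[byte]

  SnapTriePath = object
    ## Sketch: one path group in a snap/1 `GetTrieNodes` request.
    accPath: Blob         # partial path into the accounts trie
    slotPaths: seq[Blob]  # paths into that account's storage trie

# a request payload is then a plain `seq[SnapTriePath]` rather than
# an opaque `seq[seq[Blob]]`
```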

* Collect some public constants into `constants.nim` file

* Reorg `hexary_paths.nim`

why:
+ Collecting nodes following a partial path that properly ends at an
  extension node failed to collect this last node.
+ Merged the node collecting algorithms for the persistent and the
  in-memory database into a single generic function
  `hexary_paths.rootPathExtend()`

info:
  Extracted common tasks to `hexary_nodes_helper.nim`

* Implement `StorageRanges` message handler for snap/1 protocol
2023-03-22 20:11:49 +00:00
Jordan Hrycaj 89ae9621c4
Silence compiler gossip after nim upgrade (#1454)
* Silence some compiler gossip -- part 1, tx_pool

details:
  Mostly removing redundant imports and the `Defect` tracer after the
  switch to Nim 1.6
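
For context, a sketch of the kind of change (Nim 1.6 no longer tracks
`Defect` in `.raises` lists, so such entries became redundant):

```nim
import strutils

# before: the effect tracer had to mention Defect explicitly
proc parseOld(s: string): int {.raises: [Defect, ValueError].} =
  s.parseInt

# after the Nim 1.6 switch: the Defect entry is simply dropped
proc parseNew(s: string): int {.raises: [ValueError].} =
  s.parseInt
```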

* Silence some compiler gossip -- part 2, clique

details:
  Mostly removing redundant imports and the `Defect` tracer after the
  switch to Nim 1.6

* Silence some compiler gossip -- part 3, misc core

details:
  Mostly removing redundant imports and the `Defect` tracer after the
  switch to Nim 1.6

* Silence some compiler gossip -- part 4, sync

details:
  Mostly removing redundant imports and the `Defect` tracer after the
  switch to Nim 1.6

* Clique update

why:
  Missing exception annotation
2023-01-30 22:10:23 +00:00
Jordan Hrycaj 88b315bb41
Snap sync refactor healing (#1397)
* Simplify accounts healing threshold management

why:
  Was over-engineered.

details:
  Previously, healing was based on recursive hexary trie perusal.

  Due to the "cheap" envelope decomposition of a range complement for
  the hexary trie, the cost of running extra laps has become
  time-affordable again, and a simple trigger mechanism for healing
  will do.
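
A hedged sketch of such a simple trigger: heal only once coverage
passes a threshold, instead of perusing the trie (the constant name
and threshold value are hypothetical):

```nim
const healCoverageTrigger = 0.999  # hypothetical threshold

proc healingWanted(processed, total: float): bool =
  ## Sketch: trigger healing once range coverage is high enough,
  ## instead of perusing the hexary trie recursively.
  total > 0.0 and processed / total >= healCoverageTrigger

doAssert not healingWanted(0.5, 1.0)
doAssert healingWanted(0.9995, 1.0)
```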

* Control number of dangling result nodes in `hexaryInspectTrie()`

also:
+ Returns the number of visited nodes, available for logging, so the
  maximum number of nodes can be tuned accordingly.
+ Some code and docu updates

* Update names of constants

why:
  Declutter, more systematic naming

* Re-implemented `worker_desc.merge()` for storage slots

why:
  Provided as proper queue management in `storage_queue_helper`.

details:
+ Several append modes (replaces `merge()`)
+ Added a third queue to record entries currently fetched by a worker.
  So another worker running in parallel can save the complete set of
  storage slots as a checkpoint. This was previously lost.
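
A sketch of the resulting queue triple (names are made up; the real
module is `storage_queue_helper`):

```nim
import deques, sets

type
  StorageSlotsQueue = object
    ## Hypothetical sketch of the three queues.
    fullRanges: Deque[string]  # accounts still needing a full slot range
    partRanges: Deque[string]  # accounts with partial ranges left over
    inFetch: HashSet[string]   # entries a worker is fetching right now

proc init(T: type StorageSlotsQueue): T =
  T(fullRanges: initDeque[string](),
    partRanges: initDeque[string](),
    inFetch: initHashSet[string]())

# a parallel worker can checkpoint fullRanges + partRanges + inFetch
# as the complete set of outstanding storage slots
```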

* Refactor healing

why:
  Simplify, and remove the deep hexary trie perusal for finding
  completeness.

  Due to the "cheap" envelope decomposition of a range complement for
  the hexary trie, the cost of running extra laps has become
  time-affordable again, and a simple trigger mechanism for healing
  will do.

* Docu update

* Run a storage job only once in download loop

why:
  Otherwise, a download failure or rejection (i.e. missing data) leads
  to repeated fetch requests until the peer disconnects.
2022-12-24 09:54:18 +00:00
Jordan Hrycaj a689e9185a
Prep for full sync after snap make 5 (#1286)
* Update docu and logging

* Extracted and updated constants from `worker_desc` into a separate file

* Update and re-calibrate communication error handling

* Allow simplified pivot negotiation

why:
  This feature allows turning off pivot negotiation, i.e. the process
  where peers agree on a pivot header.

  For snap sync with fast changing pivots, this negotiation only
  throttles the sync process. The finally downloaded DB snapshot is
  typically a merged version of different pivot states, augmented by a
  healing process.

* Re-model worker queues for accounts download & healing

why:
  Currently there is only one data fetch per download or healing task.
  This task is then repeated by the scheduler after a short time. In
  many cases, this short time seems enough for some peers to decide to
  terminate the connection.

* Update main task batch `runMulti()`

details:
  The function `runMulti()` is activated in quasi-parallel mode by the
  scheduler. This function calls the download, healing and fast-sync
  functions.

  In debug mode, after each set of jobs run by this function, the
  database is analysed (by the `snapdb_check` module) and the result
  printed.
2022-11-01 15:07:44 +00:00
Jordan Hrycaj 1b4572ed3b
Prep for full sync after snap make 4 (#1282)
* Re-arrange fetching storage slots in batch module

why:
  Previously, fetching partial slot ranges first had a chance of
  terminating the worker peer (due to a network error) while there
  were many inheritable storage slots on the queue.

  Now, inheritance is checked first, then full slot ranges and finally
  partial ranges.

* Update logging

* Bundled node information for healing into single object `NodeSpecs`

why:
  Previously, partial paths and node keys were kept in separate
  variables. This approach was error-prone due to copying/reassembling
  function argument objects.

  As all partial paths, keys, and node data types are more or less
  handled as `Blob`s over the network (using Eth/6x, or Snap/1), it
  makes sense to hold these `Blob`s as named fields in a single object
  (even if not all fields are active for the current purpose.)
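
The bundled object, sketched; the field names are plausible
assumptions only:

```nim
type
  Blob = seq[byte]
  NodeKey = array[32, byte]

  NodeSpecs = object
    ## Sketch: everything that identifies one trie node, kept together.
    partialPath: Blob  # hex-prefix encoded partial path, as on the wire
    nodeKey: NodeKey   # hash key of the node
    data: Blob         # raw node data, when already fetched
```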

* For good housekeeping, using `NodeKey` type only for account keys

why:
  Previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
  state or storage root keys use the `Hash256` type.

* Always accept latest pivot (and not a slightly older one)

why:
  For testing, a slightly older pivot state root than the latest
  available one was tried. Some anecdotal tests seemed to suggest an
  advantage, namely that more peers would be willing to serve on that
  older pivot. But this could not be confirmed in subsequent tests
  (still anecdotal, though.)

  As a side note, the distance of the latest pivot to its predecessor is
  at least 128 (or whatever the constant `minPivotBlockDistance` is
  assigned to.)

* Reshuffle name components for some file and function names

why:
  Clarifies purpose:
  "storages" becomes: "storage slots"
  "store" becomes: "range fetch"

* Stash away currently unused modules in a sub-folder named "notused"
2022-10-27 14:49:28 +01:00
Jordan Hrycaj c0d580715e
Remodel persistent snapdb access (#1274)
* Re-model persistent database access

why:
  Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
  key mapping). So get/put and bulk functions now use the definitions
  in `snapdb_desc` (earlier there were some shortcuts for `get()`.)

* Fixes: missing return code, typo, redundant imports etc.

* Remove obsolete debugging directives from `worker_desc` module

* Correct failing unit tests for storage slots trie inspection

why:
  Some pathological cases for the extended tests do not produce any
  hexary trie data. This is rightly detected by the trie inspection,
  and the result checks needed to be adjusted.
2022-10-20 17:59:54 +01:00
Jordan Hrycaj d53eacb854
Prep for full sync after snap (#1253)
* Split fetch accounts into sub-modules

details:
  There will be separate modules for the accounts snapshot, the
  storage snapshot, and healing for either.

* Allow rebasing the pivot before the negotiated header

why:
  Peers seem to have not many snapshots available. By setting back the
  pivot block header slightly, the chances might be higher to find
  more peers to serve this pivot. An experiment on mainnet showed that
  setting it back too far (tested with 1024) seems to decrease the
  chances of finding matching snapshot peers.

* Add accounts healing

* Update variable/field naming in `worker_desc` for readability

* Handle leaf nodes in accounts healing

why:
  There is no need to fetch accounts once they have been added by the
  healing process. On the flip side, these accounts must be checked
  for storage data and the batch queue updated accordingly.

* Reorganising accounts hash ranges batch queue

why:
  The aim is to formally cover as many accounts as possible for different
  pivot state root environments. Formerly, this was tried by starting the
  accounts batch queue at a random value for each pivot (and wrapping
  around.)

  Now, each pivot environment starts with an interval set mutually
  disjoint from any interval set retrieved with other pivot state
  roots (a sketch follows after the note below.)

also:
  Stop fishing for more pivots in `worker` if 100% download is reached
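
One simple way to produce mutually disjoint starting slices, sketched
with 64-bit stand-ins for the account hash range (the real code works
on interval sets; everything here is an assumption):

```nim
proc pivotSlice(pivotNo, nPivots: int; top: uint64): (uint64, uint64) =
  ## Hypothetical sketch: give each pivot environment its own slice
  ## of the account hash range, disjoint from all other pivots'.
  let chunk = top div nPivots.uint64
  let lo = pivotNo.uint64 * chunk
  (lo, lo + chunk - 1)

doAssert pivotSlice(0, 4, 100) == (0'u64, 24'u64)
doAssert pivotSlice(1, 4, 100) == (25'u64, 49'u64)  # no overlap
```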

* Reorganise/update accounts healing

why:
  Error handling was wrong, and the (mathematical complexity of the)
  whole process could be better managed.

details:
  Much of the algorithm is now documented at the top of the file
  `heal_accounts.nim`
2022-10-08 18:20:50 +01:00
Jordan Hrycaj 4ff0948fed
Snap sync accounts healing (#1225)
* Added inspect module

why:
  Find dangling references for trie healing support.

details:
 + This patch set provides only the inspect module and some unit tests.
 + There are also extensive unit tests which need bulk data from the
   `nimbus-eth1-blob` module.

* Alternative pivot finder

why:
  Attempt to be faster on start-up. Also trying to decouple the pivot
  finder somehow by providing different mechanisms (this one runs in
  `single` mode.)

* Use inspect module for healing

details:
 + After some progress with account and storage data, the inspect
   facility is used to find dangling links in the database to be
   filled node-wise.
 + This is a crude attempt to cobble together functional elements. The
   setup needs to be honed.

* Fix scheduler to avoid starting dead peers

why:
  Some peers drop out while in `sleepAsync()`. So extra `if` clauses
  make sure that this event is detected early.
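
A hedged, chronos-style sketch of the extra guard (names assumed):

```nim
import chronos

proc workerLoop(isAlive: proc(): bool {.gcsafe.}) {.async.} =
  ## Sketch: re-check peer liveness right after every sleep so a peer
  ## that dropped out inside `sleepAsync()` is detected early and
  ## never started again.
  while isAlive():
    await sleepAsync(50.milliseconds)
    if not isAlive():
      return                 # peer vanished while sleeping
    # ... run one scheduled worker pass here ...
```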

* Bug fixes causing crashes

details:

+ prettify.toPC():
  int/intToStr() numeric range over/underflow (sketched below)

+ hexary_inspect.hexaryInspectPath():
  take care of a half-initialised step with a branch node but a
  missing index into the branch array
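
As an illustration of the `prettify.toPC()` fix, a simplified clamped
formatter (this body is an assumption, not the repo's code):

```nim
import strutils

proc toPC(f: float; digits = 2): string =
  ## Sketch: format a fraction as a percentage, clamped so that
  ## extreme inputs cannot over/underflow the integer conversion.
  let clamped = max(0.0, min(1.0, f))
  formatFloat(clamped * 100.0, ffDecimal, digits) & "%"

doAssert toPC(0.25) == "25.00%"
doAssert toPC(1.5) == "100.00%"   # clamped instead of overflowing
doAssert toPC(-0.1) == "0.00%"    # clamped instead of underflowing
```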

* Improve handling of dropped peers in the alternative pivot finder

why:
  Strange things may happen while querying data from the network.
  Additional checks make sure that the state of other peers is updated
  immediately.

* Update trace messages

* Reorganise snap fetch & store schedule
2022-09-16 08:24:12 +01:00