Commit Graph

11 Commits

Author SHA1 Message Date
Jordan Hrycaj 2f7f2dba2d
Prepare snap server client test scenario cont3 (#1491)
* Handle last/all node(s) proof conditions at leaf node extractor

detail:
  Flag whether the maximum extracted node is the last one in the database
  No proof needed if the full tree was extracted
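
A minimal sketch of the extractor result shape this implies; the names
`RangeLeavesResult`, `lastInDb` and `needsProof` are hypothetical, for
illustration only:

```nim
type
  Blob = seq[byte]

  RangeLeavesResult = object
    leaves: seq[Blob]   # extracted leaf nodes, in ascending path order
    lastInDb: bool      # the highest extracted node is the database maximum

proc needsProof(r: RangeLeavesResult; startsAtZero: bool): bool =
  ## If the reply covers the full tree, boundary proof nodes are redundant.
  not (startsAtZero and r.lastInDb)
```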

* Clean up some helpers & definitions

details:
  Move entities to more plausible locations, e.g. the `Account` object need
  not be dealt with in the range extractor, as the extractor applies to any
  kind of leaf data.

* Fix next/prev database walk fringe condition

details:
  First check needed might be for a leaf node which was done too late.

* Homogenise snap/1 protocol function prototypes

why:
  The range arguments `origin` and `limit` data types differed in various
  function prototypes (`Hash256` vs. `openArray[byte]`.)

* Implement `GetStorageRange` handler

* Implement server timeout for leaf node retrieval

why:
  This feature keeps control on the server side over potentially costly
  actions invoked via the network.

* Implement maximal reply size for snap service

why:
  This feature keeps control on the server side over potentially costly
  actions invoked via the network.
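
Both limits follow the same pattern: stop collecting leaf nodes once either
a time budget or a reply-size budget is exhausted. A minimal sketch of that
pattern, assuming hypothetical names `timeBudget`, `replySizeMax` and
`fetchNextLeaf` (the actual handler API differs):

```nim
import std/[monotimes, times]

type Blob = seq[byte]

const
  timeBudget   = initDuration(seconds = 5)  # hypothetical server cut-off
  replySizeMax = 2 * 1024 * 1024            # hypothetical reply limit, bytes

proc fetchNextLeaf(): Blob =
  ## Stand-in for the database leaf node iterator.
  @[0x00'u8, 0x01, 0x02]

proc collectLeaves(): seq[Blob] =
  let start = getMonoTime()
  var size = 0
  while true:
    let leaf = fetchNextLeaf()
    if leaf.len == 0:
      break                                 # no more entries in range
    size += leaf.len
    result.add leaf
    # Early exit keeps the cost of the network-invoked action bounded
    # on the server side.
    if getMonoTime() - start > timeBudget or replySizeMax < size:
      break
```
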
2023-03-10 17:10:30 +00:00
Jordan Hrycaj 10ad7867e4
Prepare snap server client test scenario cont1 (#1485)
Renaming ambiguous sub-object names according to where they belong

why:
  These objects are not explicitly dealt with. They give meaning to
  some generic wrapper objects. Naming them after their origin may
  help troubleshooting.

* Redefine proof nodes list data type for `snap/1` wire protocol

why:
  The current specification suffered from the fact that the basic data
  type for a proof node is an RLP encoded hexary node. This slightly
  confused the encoding/decoding magic.

details:
  This is the second attempt, now wrapping the `seq[Blob]` into a
  wrapper object of `seq[SnapProof]` for a distinct alias sequence.

  In the previous attempt, `SnapProof` was a wrapper object holding the
  `Blob` with magic applied to the `seq[]`. This needed the `append`
  mixin to strip the outer wrapper that was applied to the `Blob` already
  when it was passed as argument.
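
A condensed sketch of the shape this describes; the field name `proofs`
and the helper `to` are illustrative, not the exact wire-protocol
definitions:

```nim
type
  Blob = seq[byte]

  # A proof node stays an RLP encoded hexary node, but carries its own
  # distinct type so the RLP encoder cannot confuse it with a plain Blob.
  SnapProof = distinct Blob

  # Wrapper object holding the proof node sequence for the wire protocol.
  SnapProofNodes = object
    proofs: seq[SnapProof]

proc to(b: Blob; T: type SnapProof): T = T(b)
proc to(p: SnapProof; T: type Blob): T = T(p)

when isMainModule:
  let raw: Blob = @[0xc0'u8]        # an RLP encoded empty list
  let wrapped = SnapProofNodes(proofs: @[raw.to(SnapProof)])
  doAssert wrapped.proofs[0].to(Blob) == raw
```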

* Fix some prototype inconsistency

why:
  For easy reading, the `getAccountRange()` handler return code should
  resemble the `accountRange()` arguments prototype.
2023-03-03 20:01:59 +00:00
Jordan Hrycaj 89ae9621c4
Silence compiler gossip after nim upgrade (#1454)
* Silence some compiler gossip -- part 1, tx_pool

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 2, clique

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 3, misc core

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 4, sync

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Clique update

why:
  Missing exception annotation
2023-01-30 22:10:23 +00:00
Jordan Hrycaj 88b315bb41
Snap sync refactor healing (#1397)
* Simplify accounts healing threshold management

why:
  Was over-engineered.

details:
  Previously, healing was based on recursive hexary trie perusal.

  Due to "cheap" envelope decomposition of a range complement for the
  hexary trie, the cost of running extra laps has become affordable again,
  and a simple trigger mechanism for healing will do.
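
A minimal sketch of such a trigger, assuming a single hypothetical
coverage threshold (the real constant and its value live in the worker
configuration):

```nim
const healAccountsCoverageTrigger = 0.99   # hypothetical threshold

proc healingWanted(coverage: float): bool =
  ## Simple trigger: start healing only once range downloads cover enough
  ## of the account namespace that the remaining holes are cheap to fix.
  healAccountsCoverageTrigger <= coverage
```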

* Control number of dangling result nodes in `hexaryInspectTrie()`

also:
+ Returns the number of visited nodes, available for logging, so the
  maximum number of nodes can be tuned accordingly.
+ Some code and docu updates

* Update names of constants

why:
  Declutter, more systematic naming

* Re-implemented `worker_desc.merge()` for storage slots

why:
  Provided as proper queue management in `storage_queue_helper`.

details:
+ Several append modes (replaces `merge()`)
+ Added a third queue to record entries currently fetched by a worker, so
  another worker running in parallel can save the complete set of storage
  slots as a checkpoint. This information was previously lost.
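
A toy sketch of the three-queue layout; the type and proc names are made
up for illustration, and the real helper manages slot ranges rather than
whole accounts:

```nim
import std/sets

type
  AccKey = array[32, byte]        # simplified stand-in for an account hash

  StorageQueue = object
    fullRanges: seq[AccKey]       # whole slot tries still to be fetched
    partRanges: seq[AccKey]       # partially fetched slot tries
    inFetch: HashSet[AccKey]      # third queue: items a worker holds now

proc fetchNext(q: var StorageQueue): (bool, AccKey) =
  ## Hand out a work item and remember it, so that a checkpoint taken by
  ## a parallel worker still sees the complete set of storage slots.
  if 0 < q.fullRanges.len:
    result = (true, q.fullRanges.pop())
  elif 0 < q.partRanges.len:
    result = (true, q.partRanges.pop())
  else:
    return
  q.inFetch.incl result[1]

proc done(q: var StorageQueue; it: AccKey) =
  q.inFetch.excl it               # completed, drop from the in-flight set
```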

* Refactor healing

why:
  Simplify and remove deep hexary trie perusal for finding completeness.

  Due to "cheap" envelope decomposition of a range complement for the
  hexary trie, the cost of running extra laps has become affordable again,
  and a simple trigger mechanism for healing will do.

* Docu update

* Run a storage job only once in download loop

why:
  Otherwise, a download failure or rejection (i.e. missing data) leads to
  repeated fetch requests until the peer disconnects.
2022-12-24 09:54:18 +00:00
Jordan Hrycaj e14fd4b96c
Prep for full sync after snap make 6 (#1291)
* Update log ticker, using time interval rather than ticker count

why:
  Counting and logging ticker occurrences is inherently imprecise, so
  time intervals are used instead.
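
A minimal sketch of the interval-based ticker, assuming a hypothetical
`tickerLogInterval` constant:

```nim
import std/[monotimes, times]

const tickerLogInterval = initDuration(seconds = 5)  # hypothetical value

var lastTick = getMonoTime()

proc maybeLogTicker(msg: string) =
  ## Log on elapsed wall-clock time rather than on occurrence counts,
  ## which drift when the ticker fires irregularly.
  let now = getMonoTime()
  if tickerLogInterval <= now - lastTick:
    lastTick = now
    echo msg
```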

* Use separate storage tables for snap sync data

* Left boundary proof update

why:
  Was not properly implemented yet.

* Capture pivot in peer worker (aka buddy) tasks

why:
  The pivot environment is linked to the `buddy` descriptor. Whenever
  there is a task switch, the pivot may change. So it is passed on as a
  function argument `env` rather than retrieved from the buddy at the
  start of a sub-function.

* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`

* Remove obsolete account range returned from `GetAccountRange` message

why:
  The handler returned the wrong right boundary of the range. This range
  was provided for convenience only.

* Prioritise storage slots if the queue becomes large

why:
  Currently, account processing is prioritised until all accounts are
  downloaded. The new prioritisation has two thresholds (sketched below):
  + start processing storage slots with a new worker
  + stop account processing and switch to storage processing

also:
  Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`
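
A toy sketch of the two-threshold decision; the constant names and values
are hypothetical:

```nim
const
  newWorkerStorageThresh = 5_000    # hypothetical: new workers go to storage
  stopAccountsThresh     = 10_000   # hypothetical: everyone goes to storage

type FetchMode = enum
  FetchAccounts, FetchStorageSlots

proc nextFetchMode(isNewWorker: bool; storageQueueLen: int): FetchMode =
  if stopAccountsThresh < storageQueueLen:
    FetchStorageSlots               # stop account processing altogether
  elif isNewWorker and newWorkerStorageThresh < storageQueueLen:
    FetchStorageSlots               # new worker starts on storage slots
  else:
    FetchAccounts
```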

* Generalise left boundary proof for accounts or storage slots.

why:
  Detailed explanation how this works is documented with
  `snapdb_accounts.importAccounts()`.

  Instead of enforcing a left boundary proof (which is still the default),
  the importer functions return a list of `holes` (aka node paths) found in
  the argument ranges of leaf nodes. This in turn is used by the
  bookkeeping software for data download.

* Forgot to pass on variable in function wrapper

also:
  + Start healing not before 99% accounts covered (previously 95%)
  + Logging updated/prettified
2022-11-08 18:56:04 +00:00
Jordan Hrycaj a689e9185a
Prep for full sync after snap make 5 (#1286)
* Update docu and logging

* Extracted and updated constants from `worker_desc` into separate file

* Update and re-calibrate communication error handling

* Allow simplified pivot negotiation

why:
  This feature allows pivot negotiation, where peers agree on a pivot
  header, to be turned off.

  For snap sync with fast-changing pivots, negotiation only throttles the
  sync process. The DB snapshot finally downloaded is typically a merged
  version of different pivot states, augmented by a healing process.

* Re-model worker queues for accounts download & healing

why:
  Currently there is only one data fetch per download or healing task.
  This task is then repeated by the scheduler after a short time. In
  many cases, this short time seems enough for some peers to decide to
  terminate the connection.

* Update main task batch `runMulti()`

details:
  The function `runMulti()` is activated in quasi-parallel mode by the
  scheduler. This function calls the download, healing and fast-sync
  functions.

  While in debug mode, after each set of jobs run by this function the
  database is analysed (by the `snapdb_check` module) and the result
  printed.
2022-11-01 15:07:44 +00:00
Jordan Hrycaj 1b4572ed3b
Prep for full sync after snap make 4 (#1282)
* Re-arrange fetching storage slots in batch module

why:
  Previously, fetching partial slot ranges first had a chance of
  terminating the worker peer (due to a network error) while there were
  many inheritable storage slots on the queue.

  Now, inheritance is checked first, then full slot ranges and finally
  partial ranges.

* Update logging

* Bundled node information for healing into single object `NodeSpecs`

why:
  Previously, partial paths and node keys were kept in separate variables.
  This approach was error prone due to copying/reassembling function
  argument objects.

  As all partial paths, keys, and node data types are more or less handled
  as `Blob`s over the network (using Eth/6x, or Snap/1), it makes sense to
  hold these `Blob`s as named fields in a single object (even if not all
  fields are active for the current purpose.)
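
A condensed sketch of such a bundle object; the field names follow the
description above but are illustrative rather than the exact definitions:

```nim
type
  Blob = seq[byte]
  NodeKey = array[32, byte]

  # All aspects of a node travel together. Fields that are not needed
  # for the purpose at hand are simply left empty.
  NodeSpecs = object
    partialPath: Blob   # hex encoded partial path into the hexary trie
    nodeKey: NodeKey    # hash key of the node
    data: Blob          # RLP encoded node data, if available
```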

* For good housekeeping, using `NodeKey` type only for account keys

why:
  Previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
  state or storage root keys use the `Hash256` type.

* Always accept latest pivot (and not a slightly older one)

why:
  For testing, a slightly older pivot state root than the latest available
  one was tried. Some anecdotal tests seemed to suggest an advantage,
  namely that more peers are willing to serve on that older pivot. But
  this could not be confirmed in subsequent tests (still anecdotal,
  though.)

  As a side note, the distance of the latest pivot to its predecessor is
  at least 128 (or whatever the constant `minPivotBlockDistance` is
  assigned to.)

* Reshuffle name components for some file and function names

why:
  Clarifies purpose:
  "storages" becomes: "storage slots"
  "store" becomes: "range fetch"

* Stash away currently unused modules in sub-folder named "notused"
2022-10-27 14:49:28 +01:00
Jordan Hrycaj c0d580715e
Remodel persistent snapdb access (#1274)
* Re-model persistent database access

why:
  Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
  key mapping). So get/put and bulk functions now use the definitions
  in `snapdb_desc` (earlier there were some shortcuts for `get()`.)

* Fixes: missing return code, typo, redundant imports etc.

* Remove obsolete debugging directives from `worker_desc` module

* Correct failing unit tests for storage slots trie inspection

why:
  Some pathological cases for the extended tests do not produce any
  hexary trie data. This is rightly detected by the trie inspection,
  and the result checks needed to be adjusted.
2022-10-20 17:59:54 +01:00
Jordan Hrycaj 8c7d91512b
Prep for full sync after snap mark2 (#1263)
* Rename `LeafRange` => `NodeTagRange`

* Replacing storage slot partition point by interval

why:
  The partition point only allows describing slot ranges of the form
  `[point,high(Uint256)]` for fetching interval slot ranges. This has
  been generalised to arbitrary intervals.

* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`

why:
  Generalised healing status for accounts, and later for storage slots.

* Improve accounts healing loop

* Split `snap_db` into accounts and storage modules

why:
  It is cleaner to have separate session descriptors for accounts and
  storage slots (based on a common base descriptor.)

  Also, persistent storage handling might be changed in the future, which
  requires the storage slot implementation to be disentangled from the
  accounts handling.

* Re-model worker queues for storage slots

why:
  There is a dynamic list of storage sub-tries, each one having to be
  treated similarly to the accounts database. This applies to slot
  interval downloads as well as to healing.

* Compress some return value report lists for snapdb methods

why:
  No need to report all handling details for work items that are filtered
  out and discarded anyway.

* Remove inner loop frame from healing function

why:
  The healing function runs as a loop body already.
2022-10-14 17:40:32 +01:00
Jordan Hrycaj d53eacb854
Prep for full sync after snap (#1253)
* Split fetch accounts into sub-modules

details:
  There will be separate modules for accounts snapshot, storage snapshot,
  and healing for either.

* Allow to rebase pivot before negotiated header

why:
  Peers seem to have not too many snapshots available. By setting back the
  pivot block header slightly, the chances might be higher to find more
  peers able to serve this pivot. Experiments on mainnet showed that when
  setting back too much (tested with 1024), the chances to find matching
  snapshot peers seem to decrease.

* Add accounts healing

* Update variable/field naming in `worker_desc` for readability

* Handle leaf nodes in accounts healing

why:
  There is no need to fetch accounts when they have already been added by
  the healing process. On the flip side, these accounts must be checked
  for storage data and the batch queue updated accordingly.

* Reorganising accounts hash ranges batch queue

why:
  The aim is to formally cover as many accounts as possible for different
  pivot state root environments. Formerly, this was tried by starting the
  accounts batch queue at a random value for each pivot (and wrapping
  around.)

  Now, each pivot environment starts with an interval set mutually
  disjoint from any interval set retrieved with other pivot state roots
  (sketched below).

also:
  Stop fishing for more pivots in `worker` if 100% download is reached
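
A toy sketch of the disjointness idea over a shrunken namespace; the real
code uses interval sets over the 256-bit account tag space, and the names
here are made up:

```nim
type
  NodeTag = uint64                 # toy stand-in for the 256-bit tag
  Iv = tuple[a, b: NodeTag]        # closed interval [a,b]

proc minus(todo: seq[Iv]; done: Iv): seq[Iv] =
  ## Remove one already-covered interval from a disjoint to-do list.
  for iv in todo:
    if done.b < iv.a or iv.b < done.a:
      result.add iv                          # no overlap, keep as is
    else:
      if iv.a < done.a: result.add((iv.a, done.a - 1))
      if done.b < iv.b: result.add((done.b + 1, iv.b))

proc newPivotTodo(covered: seq[Iv]): seq[Iv] =
  ## A fresh pivot starts with the full namespace minus everything that
  ## was already fetched under other pivot state roots.
  result = @[(NodeTag(0), high(NodeTag))]
  for done in covered:
    result = result.minus(done)
```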

* Reorganise/update accounts healing

why:
  Error handling was wrong, and the (mathematical complexity of the)
  whole process could be better managed.

details:
  Much of the algorithm is now documented at the top of the file
  `heal_accounts.nim`
2022-10-08 18:20:50 +01:00
Jordan Hrycaj 4ff0948fed
Snap sync accounts healing (#1225)
* Added inspect module

why:
  Find dangling references for trie healing support.

details:
 + This patch set provides only the inspect module and some unit tests.
 + There are also extensive unit tests which need bulk data from the
   `nimbus-eth1-blob` module.
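
A toy sketch of what such an inspection does; the database is modelled
here as a plain table from node key to child links, which glosses over
the actual hexary node encoding:

```nim
import std/[tables, sets]

type NodeKey = array[32, byte]

proc inspectDangling(db: Table[NodeKey, seq[NodeKey]];
                     root: NodeKey): seq[NodeKey] =
  ## Breadth-first walk from the root, returning links that point to
  ## nodes missing from the database (the healing candidates).
  var
    seen: HashSet[NodeKey]
    fringe = @[root]
  while 0 < fringe.len:
    var nextFringe: seq[NodeKey]
    for key in fringe:
      if key in seen:
        continue
      seen.incl key
      if key notin db:
        result.add key              # dangling reference, needs fetching
      else:
        for child in db[key]:
          nextFringe.add child
    fringe = nextFringe
```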

* Alternative pivot finder

why:
  Attempt to be faster on start-up. Also trying to decouple the pivot
  finder by providing different mechanisms (this one runs in `single`
  mode.)

* Use inspect module for healing

details:
 + After some progress with account and storage data, the inspect facility
   is used to find dangling links in the database, to be filled in node by
   node.
 + This is a crude attempt to cobble together functional elements. The
   set-up needs to be honed.

* Fix scheduler to avoid starting dead peers

why:
  Some peers drop out while in `sleepAsync()`. So extra `if` clauses
  make sure that this event is detected early.

* Bug fixes causing crashes

details:

+ prettify.toPC():
  int/intToStr() numeric range over/underflow

+ hexary_inspect.hexaryInspectPath():
  take care of half initialised step with branch but missing index into
  branch array

* Improve handling of dropped peers in alternative pivot finder

why:
  Strange things may happen while querying data from the network.
  Additional checks make sure that the state of other peers is updated
  immediately.

* Update trace messages

* Reorganise snap fetch & store schedule
2022-09-16 08:24:12 +01:00