nimbus-eth1/nimbus/sync/snap
Jordan Hrycaj 8c7d91512b
Prep for full sync after snap mark2 (#1263)
* Rename `LeafRange` => `NodeTagRange`

* Replacing storage slot partition point by interval

why:
  The partition point can only describe slot ranges of the form
  `[point,high(Uint256)]` when fetching slot intervals. This has been
  generalised to arbitrary intervals (see the sketch below).
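
  A minimal, self-contained sketch of the difference (the type and helper
  names below are made up for illustration; only the `[point,high(Uint256)]`
  form mentioned above comes from the actual code):

    import std/options

    type
      NodeTag = uint64             # stand-in; the real node tag is a UInt256

      # old style: a point implicitly means the range [point, high(NodeTag)]
      PartitionPoint = object
        point: NodeTag

      # new style: an explicit closed interval [minPt, maxPt]
      NodeTagRangeSketch = object
        minPt, maxPt: NodeTag

    proc toRange(p: PartitionPoint): NodeTagRangeSketch =
      ## The partition point can only express suffix ranges ...
      NodeTagRangeSketch(minPt: p.point, maxPt: high(NodeTag))

    proc mkRange(a, b: NodeTag): Option[NodeTagRangeSketch] =
      ## ... while the interval form describes any sub-range of the slot keys.
      if a <= b: some(NodeTagRangeSketch(minPt: a, maxPt: b))
      else: none(NodeTagRangeSketch)

    when isMainModule:
      let suffix = PartitionPoint(point: 0x1000).toRange
      let middle = mkRange(0x1000, 0x2fff)
      doAssert suffix.maxPt == high(NodeTag)
      doAssert middle.isSome and middle.get.maxPt < high(NodeTag)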

* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`

why:
  Generalised the healing status descriptor, used for accounts now and for
  storage slots later (see the sketch below).
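
  A hedged sketch of the generalisation (field names are assumptions, not
  the actual `SnapTrieRangeBatch` layout):

    import std/sets

    type
      NodeKeySketch = string            # stand-in for a trie node key/hash

      SnapTrieRangeBatchSketch = object
        unprocessed: seq[(uint64, uint64)]   # node tag intervals left to fetch
        missingNodes: HashSet[NodeKeySketch] # dangling links found while healing

      AccountsBatch = SnapTrieRangeBatchSketch  # account trie healing state
      StorageBatch  = SnapTrieRangeBatchSketch  # later: per storage sub-trie

    when isMainModule:
      var accounts = AccountsBatch(
        unprocessed: @[(0'u64, high(uint64))],
        missingNodes: initHashSet[NodeKeySketch]())
      accounts.missingNodes.incl "deadbeef"
      let slots = StorageBatch()        # same shape, reused for storage tries
      doAssert accounts.missingNodes.len == 1 and slots.unprocessed.len == 0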

* Improve accounts healing loop

* Split `snap_db` into accounts and storage modules

why:
  It is cleaner to have separate session descriptors for accounts and
  storage slots, both based on a common base descriptor (see the sketch
  below).

  Also, persistent storage handling might change in the future, which
  requires the storage slot implementation to be disentangled from the
  accounts handling.
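
  A rough sketch of the resulting layout (type names are invented here; the
  real modules use their own descriptor names):

    type
      SnapDbBaseRefSketch = ref object of RootObj
        ## Session state shared by accounts and storage slot handling
        dbLabel: string

      SnapDbAccountsRefSketch = ref object of SnapDbBaseRefSketch
        ## Accounts-only session descriptor
        accountsImported: int

      SnapDbStorageSlotsRefSketch = ref object of SnapDbBaseRefSketch
        ## Storage-slots session descriptor, free to evolve separately
        slotsImported: int

    proc init(T: type SnapDbAccountsRefSketch; label: string): T =
      T(dbLabel: label)

    proc init(T: type SnapDbStorageSlotsRefSketch; label: string): T =
      T(dbLabel: label)

    when isMainModule:
      let acc = SnapDbAccountsRefSketch.init("accounts session")
      let sto = SnapDbStorageSlotsRefSketch.init("storage session")
      doAssert acc.dbLabel != sto.dbLabel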

* Re-model worker queues for storage slots

why:
  There is a dynamic list of storage sub-tries, each of which has to be
  treated similarly to the accounts database. This applies to slot interval
  downloads as well as to healing (see the sketch below).
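
  A sketch of the queue shape under those assumptions (all names invented
  for illustration):

    import std/deques

    type
      StorageSlotsBatchSketch = object
        unprocessed: seq[(uint64, uint64)] # slot tag intervals left to fetch
        needsHealing: bool                 # dangling nodes seen in this trie

      StorageQueueItemSketch = object
        accKey: string                     # account owning the storage trie
        batch: StorageSlotsBatchSketch     # per sub-trie download/healing state

    when isMainModule:
      var queue = initDeque[StorageQueueItemSketch]()
      # one queue entry per storage sub-trie, scheduled like the accounts work
      queue.addLast StorageQueueItemSketch(
        accKey: "acc-1",
        batch: StorageSlotsBatchSketch(unprocessed: @[(0'u64, high(uint64))]))
      queue.addLast StorageQueueItemSketch(accKey: "acc-2")
      doAssert queue.len == 2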

* Compress some return value report lists for snapdb methods

why:
  There is no need to report all handling details for work items that are
  filtered out and discarded anyway (see the sketch below).
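
  A hedged sketch of the compressed reporting style (made-up names and
  logic, not the actual snapdb method signatures):

    type
      ImportErrorSketch = object
        itemInx: int                   # index of the offending work item
        error: string                  # what went wrong with it

    proc importItemsSketch(items: seq[string]): seq[ImportErrorSketch] =
      ## Compressed report: no entry for items handled or silently discarded.
      for n, it in items:
        if it.len == 0:
          continue                     # filtered out, nothing to report
        if it == "broken":
          result.add ImportErrorSketch(itemInx: n, error: "cannot parse item")

    when isMainModule:
      let report = importItemsSketch(@["ok", "", "broken", "ok"])
      doAssert report.len == 1 and report[0].itemInx == 2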

* Remove inner loop frame from healing function

why:
  The healing function runs as a loop body already.
2022-10-14 17:40:32 +01:00
worker Prep for full sync after snap mark2 (#1263) 2022-10-14 17:40:32 +01:00
range_desc.nim Prep for full sync after snap mark2 (#1263) 2022-10-14 17:40:32 +01:00
worker.nim Prep for full sync after snap mark2 (#1263) 2022-10-14 17:40:32 +01:00
worker_desc.nim Prep for full sync after snap mark2 (#1263) 2022-10-14 17:40:32 +01:00