# Nimbus
# Copyright (c) 2021 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

import
  std/hashes,
  eth/[common, p2p],
  stew/[interval_set, keyed_queue, sorted_set],
  ../../db/select_backend,
  ../sync_desc,
  ./worker/com/com_error,
  ./worker/db/[hexary_desc, snapdb_desc, snapdb_pivot],
  ./worker/ticker,
  ./range_desc

{.push raises: [Defect].}

type
  SnapAccountsList* = SortedSet[NodeTag,Hash256]
    ## Sorted pair of `(account,state-root)` entries

  SnapSlotsQueue* = KeyedQueue[Hash256,SnapSlotsQueueItemRef]
    ## Handles the list of storage slots data to fetch, indexed by the
    ## storage root.
    ##
    ## Typically, storage data requests cover the full storage slots trie. If
    ## there is only a partial list of slots to fetch, the queue entry is
    ## stored left-most for easy access.

  SnapSlotsQueuePair* = KeyedQueuePair[Hash256,SnapSlotsQueueItemRef]
    ## Key-value pair returned from the `SnapSlotsQueue` handler

  SnapSlotsQueueItemRef* = ref object
    ## Storage slots request data. This entry is similar to `AccountSlotsHeader`
    ## where the optional `subRange` interval has been replaced by an interval
    ## range + healing support.
    accKey*: NodeKey                   ## Owner account
    slots*: SnapRangeBatchRef          ## Slots to fetch, nil => all slots
    inherit*: bool                     ## Mark this trie as seen already

  SnapTodoRanges* = array[2,NodeTagRangeSet]
    ## Pair of sets of ``unprocessed`` node ranges that need to be fetched and
    ## integrated. The ranges in the first set must be handled with priority.
    ##
    ## This data structure is used for coordinating peers that run
    ## quasi-parallel.

  SnapRangeBatchRef* = ref object
    ## `NodeTag` ranges to fetch, healing support
    unprocessed*: SnapTodoRanges       ## Range of slots to be fetched
    processed*: NodeTagRangeSet        ## Nodes definitely processed
    checkNodes*: seq[NodeSpecs]        ## Nodes with probably dangling child links
    sickSubTries*: seq[NodeSpecs]      ## Top ref for sub-tries to be healed
    resumeCtx*: TrieNodeStatCtxRef     ## State for resuming trie inspection
    lockTriePerusal*: bool             ## Only one process at a time

  SnapPivotRef* = ref object
    ## Per-state root cache for a particular snap data environment
    stateHeader*: BlockHeader          ## Pivot state, containing state root

    # Accounts download
    fetchAccounts*: SnapRangeBatchRef  ## Set of accounts ranges to fetch
    healThresh*: float                 ## Start healing when fill factor reached

    # Storage slots download
    fetchStorageFull*: SnapSlotsQueue  ## Fetch storage trie for these accounts
    fetchStoragePart*: SnapSlotsQueue  ## Partial storage trie to complete
    storageDone*: bool                 ## Done with storage, block sync next

    # Info
    nAccounts*: uint64                 ## Imported # of accounts
    nSlotLists*: uint64                ## Imported # of account storage tries

    # Mothballing, ready to be swapped into newer pivot record
    storageAccounts*: SnapAccountsList ## Accounts with missing storage slots
    archived*: bool                    ## Not the latest pivot anymore

  SnapPivotTable* = KeyedQueue[Hash256,SnapPivotRef]
    ## LRU table, indexed by state root

  SnapRecoveryRef* = ref object
    ## Recovery context
    state*: SnapDbPivotRegistry        ## Saved recovery context state
    level*: int                        ## top level is zero

  BuddyData* = object
    ## Per-worker local descriptor data extension
    errors*: ComErrorStatsRef          ## For error handling
    pivotFinder*: RootRef              ## Opaque object reference for sub-module
    pivotEnv*: SnapPivotRef            ## Environment containing state root

  CtxData* = object
    ## Globally shared data extension
    rng*: ref HmacDrbgContext          ## Random generator
    dbBackend*: ChainDB                ## Low level DB driver access (if any)
    snapDb*: SnapDbRef                 ## Accounts snapshot DB

    # Pivot table
    pivotTable*: SnapPivotTable        ## Per state root environment
    pivotFinderCtx*: RootRef           ## Opaque object reference for sub-module
    coveredAccounts*: NodeTagRangeSet  ## Derived from all available accounts
    recovery*: SnapRecoveryRef         ## Current recovery checkpoint/context
    noRecovery*: bool                  ## Ignore recovery checkpoints

    # Info
    ticker*: TickerRef                 ## Ticker, logger

  SnapBuddyRef* = BuddyRef[CtxData,BuddyData]
    ## Extended worker peer descriptor

  SnapCtxRef* = CtxRef[CtxData]
    ## Extended global descriptor

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc hash*(a: SnapSlotsQueueItemRef): Hash =
  ## Table/KeyedQueue mixin
  cast[pointer](a).hash

proc hash*(a: Hash256): Hash =
  ## Table/KeyedQueue mixin
  a.data.hash
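
# The `hash` mixins above only exist so that `Hash256` and the ref item type
# can serve as `Table`/`KeyedQueue` keys. A minimal illustrative sketch, not
# used by the sync code itself:
when isMainModule:
  block:
    doAssert Hash256().hash == Hash256().hash   # content based for `Hash256`
    let item = SnapSlotsQueueItemRef()
    doAssert item.hash == item.hash             # identity based for ref items
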
# ------------------------------------------------------------------------------
# Public helpers: SnapTodoRanges
# ------------------------------------------------------------------------------

proc init*(q: var SnapTodoRanges) =
  ## Populate the node range sets with the maximal range in the first set.
  ## This pair of interval sets is managed as follows:
  ## * As long as possible, fetch and merge back intervals on the first set.
  ## * If the first set is empty and some intervals are to be fetched, swap
  ##   the first and second interval sets.
  ## That way, intervals from the first set are prioritised while the rest
  ## is considered after the prioritised intervals are exhausted.
  q[0] = NodeTagRangeSet.init()
  q[1] = NodeTagRangeSet.init()
  discard q[0].merge(low(NodeTag),high(NodeTag))


proc merge*(q: var SnapTodoRanges; iv: NodeTagRange) =
  ## Unconditionally merge the node range into the account ranges list.
  discard q[0].merge(iv)
  discard q[1].reduce(iv)

proc merge*(q: var SnapTodoRanges; minPt, maxPt: NodeTag) =
  ## Variant of `merge()`
  q.merge NodeTagRange.new(minPt, maxPt)


proc reduce*(q: var SnapTodoRanges; iv: NodeTagRange) =
  ## Unconditionally remove the node range from the account ranges list
  discard q[0].reduce(iv)
  discard q[1].reduce(iv)

proc reduce*(q: var SnapTodoRanges; minPt, maxPt: NodeTag) =
  ## Variant of `reduce()`
  q.reduce NodeTagRange.new(minPt, maxPt)
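
# Illustrative sketch of the `merge()`/`reduce()` pair above, not used by the
# sync code itself: `reduce()` drops an interval from both sets while
# `merge()` re-queues it into the priority set only.
when isMainModule:
  block:
    var q: SnapTodoRanges
    q.init()
    q.reduce(low(NodeTag), high(NodeTag))   # forget the full range again
    doAssert q[0].isEmpty and q[1].isEmpty
    q.merge(low(NodeTag), high(NodeTag))    # re-queue it with priority
    doAssert not q[0].isEmpty
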
iterator ivItems*(q: var SnapTodoRanges): NodeTagRange =
  ## Iterator over all list entries
  for ivSet in q:
    for iv in ivSet.increasing:
      yield iv


proc fetch*(q: var SnapTodoRanges; maxLen: UInt256): Result[NodeTagRange,void] =
  ## Fetch interval from node ranges with maximal size `maxLen`

  # Swap batch queues if the first one is empty
  if q[0].isEmpty:
    swap(q[0], q[1])

  # Fetch from first range list
  let rc = q[0].ge()
  if rc.isErr:
    return err()

  let
    val = rc.value
    iv = if 0 < val.len and val.len <= maxLen: val # val.len==0 => 2^256
         else: NodeTagRange.new(val.minPt, val.minPt + (maxLen - 1.u256))
  discard q[0].reduce(iv)
  ok(iv)
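
# Illustrative sketch for `fetch()` above, not used by the sync code itself
# and relying only on the `u256` and interval `len` helpers already used in
# this module: the returned interval never exceeds `maxLen` node tags.
when isMainModule:
  block:
    var q: SnapTodoRanges
    q.init()                                # priority set covers everything
    let rc = q.fetch(maxLen = 16.u256)
    doAssert rc.isOk and rc.value.len == 16.u256
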
proc verify*(q: var SnapTodoRanges): bool =
  ## Verify consistency, i.e. that the two sets of ranges have no overlap.
  if q[0].chunks == 0 or q[1].chunks == 0:
    # At least one set is empty
    return true
  if q[0].total == 0 or q[1].total == 0:
    # At least one set is maximal and the other non-empty
    return false
  let (a,b) = if q[0].chunks < q[1].chunks: (0,1) else: (1,0)
  for iv in q[a].increasing:
    if 0 < q[b].covered(iv):
      return false
  true
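
# Illustrative sketch, not used by the sync code itself: the two sets stay
# disjoint over a typical fetch/merge round trip, which is exactly what
# `verify()` checks.
when isMainModule:
  block:
    var q: SnapTodoRanges
    q.init()
    let rc = q.fetch(maxLen = 1000.u256)    # carve off a work interval
    doAssert rc.isOk
    q.merge rc.value                        # hand it back unprocessed
    doAssert q.verify()
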
# ------------------------------------------------------------------------------
# Public helpers: SlotsQueue
# ------------------------------------------------------------------------------

proc merge*(q: var SnapSlotsQueue; kvp: SnapSlotsQueuePair) =
  ## Append/prepend a queue item record into the batch queue.
  let
    reqKey = kvp.key
    rc = q.eq(reqKey)
  if rc.isErr:
    # Append to list
    discard q.append(reqKey, kvp.data)
  else:
    # Entry exists already
    let qData = rc.value
    if not qData.slots.isNil:
      # So this entry is not maximal and can be extended
      if kvp.data.slots.isNil:
        # Remove restriction for this entry and move it to the right end
        qData.slots = nil
        discard q.lruFetch reqKey
      else:
        # Merge argument intervals into target set
        for ivSet in kvp.data.slots.unprocessed:
          for iv in ivSet.increasing:
            qData.slots.unprocessed.reduce iv

proc merge*(q: var SnapSlotsQueue; fetchReq: AccountSlotsHeader) =
  ## Append/prepend a slot header record into the batch queue. If there is
  ## a range merger, the argument range will be sorted so that it is
  ## processed separately with highest priority.
  let
    reqKey = fetchReq.storageRoot
    rc = q.eq(reqKey)
  if rc.isOk:
    # Entry exists already
    let qData = rc.value
    if not qData.slots.isNil:
      # So this entry is not maximal and can be extended
      if fetchReq.subRange.isNone:
        # Remove restriction for this entry and move it to the right end
        qData.slots = nil
        discard q.lruFetch reqKey
      else:
        # Merge argument interval into target separated from the already
        # existing sets (note that this works only for the last set)
        for iv in qData.slots.unprocessed[0].increasing:
          # Move all to second set
          discard qData.slots.unprocessed[1].merge iv
        # Clear first set and add argument range
        qData.slots.unprocessed[0].clear()
        qData.slots.unprocessed.merge fetchReq.subRange.unsafeGet

  elif fetchReq.subRange.isNone:
    # Append full range to the list
    discard q.append(reqKey, SnapSlotsQueueItemRef(
      accKey: fetchReq.accKey))

  else:
    # Partial range, add healing support and interval
    var unprocessed = [NodeTagRangeSet.init(), NodeTagRangeSet.init()]
    discard unprocessed[0].merge(fetchReq.subRange.unsafeGet)
    discard q.append(reqKey, SnapSlotsQueueItemRef(
      accKey: fetchReq.accKey,
      slots: SnapRangeBatchRef(
        unprocessed: unprocessed,
        processed: NodeTagRangeSet.init())))


proc merge*(
    q: var SnapSlotsQueue;
    reqList: openArray[SnapSlotsQueuePair|AccountSlotsHeader]) =
  ## Variant of `merge()` for a list argument
  for w in reqList:
    q.merge w
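
# Illustrative sketch for the `SnapSlotsQueue` handlers above, not used by the
# sync code itself. It assumes the `AccountSlotsHeader` layout from
# `range_desc` (fields `accKey`, `storageRoot` and an optional `subRange`
# defaulting to `none`), a `len` accessor from `stew/keyed_queue`, and that a
# default-initialised queue is usable: queueing the same storage root twice
# keeps a single, maximal entry.
when isMainModule:
  block:
    var sq: SnapSlotsQueue
    let root = Hash256()                    # dummy storage root
    sq.merge(AccountSlotsHeader(storageRoot: root))
    sq.merge(AccountSlotsHeader(storageRoot: root))
    doAssert sq.len == 1
    doAssert sq.eq(root).value.slots.isNil  # nil slots => full range request
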
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------