nimbus-eth1/nimbus/sync/snap/worker_desc.nim

# Nimbus
# Copyright (c) 2021 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

{.push raises: [].}

import
  eth/[common, p2p],
  ../../db/select_backend,
  ../misc/ticker,
  ../sync_desc,
  ./worker/get/get_error,
  ./worker/db/[snapdb_desc]

export
  sync_desc                          # worker desc prototype

type
  SnapBuddyData* = object
    ## Peer-worker local descriptor data extension
    errors*: GetErrorStatsRef        ## For error handling
    full*: RootRef                   ## Peer local full sync descriptor
    # snap*: RootRef                 ## Peer local snap sync descriptor

  SnapSyncPassType* = enum
    ## Current sync mode. After the snapshot has been downloaded, the system
    ## proceeds with full sync.
    SnapSyncMode = 0                 ## Start mode
    FullSyncMode

  SnapSyncPass* = object
    ## Full specs for all sync modes. This table must be held in the main
    ## descriptor and initialised at run time. The table values are opaque
    ## and will be specified in the worker module(s).
    active*: SnapSyncPassType
    tab*: array[SnapSyncPassType,RootRef]

  SnapCtxData* = object
    ## Globally shared data extension
    rng*: ref HmacDrbgContext        ## Random generator
    dbBackend*: ChainDB              ## Low level DB driver access (if any)
    snapDb*: SnapDbRef               ## Accounts snapshot DB

    # Info
    beaconHeader*: BlockHeader       ## Running on beacon chain
    enableTicker*: bool              ## Advisory, extra level of gossip
    ticker*: TickerRef               ## Ticker, logger descriptor

    # Snap/full mode multiplexing
    syncMode*: SnapSyncPass          ## Sync mode methods & data

    # Snap sync parameters, pivot table
    snap*: RootRef                   ## Global snap sync descriptor

    # Full sync continuation parameters
    fullHeader*: Option[BlockHeader] ## Start full sync from here
    full*: RootRef                   ## Global full sync descriptor

  SnapBuddyRef* = BuddyRef[SnapCtxData,SnapBuddyData]
    ## Extended worker peer descriptor

  SnapCtxRef* = CtxRef[SnapCtxData]
    ## Extended global descriptor
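
# ------------------------------------------------------------------------------
# A minimal sketch, not part of the upstream module: it shows how the
# `SnapSyncPass` dispatch table above might be initialised and switched. The
# `SnapPassDataRef`/`FullPassDataRef` types are hypothetical stand-ins for the
# opaque `RootRef` payloads that the worker module(s) would supply.
# ------------------------------------------------------------------------------

when isMainModule:
  type
    SnapPassDataRef = ref object of RootRef
      ## Hypothetical payload for the snap sync pass
    FullPassDataRef = ref object of RootRef
      ## Hypothetical payload for the full sync pass

  proc initPassTable(pass: var SnapSyncPass) =
    ## Populate the opaque per-mode entries and start in snap mode
    pass.tab[SnapSyncMode] = SnapPassDataRef()
    pass.tab[FullSyncMode] = FullPassDataRef()
    pass.active = SnapSyncMode

  proc currentData(pass: SnapSyncPass): RootRef =
    ## Return the payload of the currently active mode
    pass.tab[pass.active]

  var pass: SnapSyncPass
  pass.initPassTable()
  doAssert pass.currentData() of SnapPassDataRef

  # Once the snapshot download is complete, the worker flips to full sync
  pass.active = FullSyncMode
  doAssert pass.currentData() of FullPassDataRef
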
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------