Commit Graph

33 Commits

Author SHA1 Message Date
Jordan Hrycaj a241050c94
Beacon sync update multi exe heads aware (#2861)
* Log/trace cancellation events in scheduler

* Provide `clear()` functions for explicitly flushing data objects

* Renaming header cache functions

why:
  More systematic, all functions start with prefix `dbHeader`

* Remove `danglingParent` from layout

why:
  Already provided by header cache

* Remove `couplerHash` and `headHash` from layout

why:
  No need to cache: `headHash` is unused and `couplerHash` is typically
  used only once.

* Remove `lastLayout` from sync descriptor

why:
  No need to compare changes, saving is always triggered after actively
  changing the sync layout state

* Early reject unsuitable head + finalised header from CL

why:
  The finalised header is only passed by its hash so the header must be
  fetched somewhere, e.g. from a peer via eth/xx.

  Also, finalised headers earlier than the `base` from `FC` cannot be
  handled due to the `Aristo` single state database architecture.

  Luckily, on a full node the complete block history is available, so
  unsuitable finalised headers are already stored there, which is exploited
  here to avoid unnecessary network traffic (see the sketch below).
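
  A hedged sketch of the acceptance rule only (hypothetical names, not the
  actual syncer code):

```nim
proc acceptableFinalised(finalisedNumber, baseNumber: uint64): bool =
  ## Early reject: a finalised block before the `FC` base cannot be
  ## handled by the single state `Aristo` database, so the CL update
  ## is refused before any further network traffic is spent on it.
  finalisedNumber >= baseNumber
```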

* Code cosmetics, remove cruft, prettify logging, remove `final` metrics

detail:
  The `final` layout parameter will be deprecated and later removed

* Update/re-calibrate syncer logic documentation

why:
  The current implementation sucks if the `FC` module changes the
  canonical branch in the middle of completing a header chain (due
  to concurrent updates by the `newPayload()` logic.)

* Implement according to re-calibrated syncer docu

details:
  The implementation employs the notion of named layout states (see
  `SyncLayoutState` in `worker_desc.nim`) which are derived from the
  state parameter triple `(C,D,H)` as described in `README.md`.
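
  A minimal sketch of the idea with made-up state names (the actual
  definitions live in `SyncLayoutState` of `worker_desc.nim`):

```nim
type
  LayoutStateSketch = enum
    idleState          # nothing to do, no sync target set
    collectingHeaders  # header chain gap between C and D still open
    headersComplete    # gap closed, header chain reaches the head H

proc classify(c, d, h: uint64): LayoutStateSketch =
  ## Purely illustrative mapping of the (C,D,H) block numbers to a state.
  if h == 0:
    idleState
  elif c + 1 >= d:
    headersComplete
  else:
    collectingHeaders
```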
2024-11-21 16:32:47 +00:00
Jordan Hrycaj 7fe4023d1f
Beacon sync docu todo and async prototype update v2 (#2832)
* Annotate `async` functions for non-exception tracking at compile time

details:
  This also requires some additional try/except catching in the function
  bodies.
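
  A minimal sketch of the annotation style, assuming chronos-style `raises`
  tracking and made-up function names (not the actual syncer code):

```nim
import chronos

proc getHeaderNumber(): Future[uint64] {.async.} =
  ## Hypothetical stand-in for a network call; may raise any CatchableError.
  return 42'u64

proc fetchNext(): Future[uint64] {.async: (raises: [CancelledError]).} =
  ## With compile-time exception tracking only cancellation may escape,
  ## so other errors from the inner call must be handled right here.
  try:
    return await getHeaderNumber()
  except CancelledError as exc:
    raise exc
  except CatchableError:
    return 0'u64
```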

* Update sync logic docu to what is to be updated

why:
  The understanding of how to accommodate merging sub-chains of blocks
  or headers has changed. Some previous set-ups are outright wrong.
2024-11-05 11:39:45 +00:00
Jordan Hrycaj 430611d3bc
Beacon sync updates tbc (#2818)
* Clear rejected sync target so that it would not be processed again

* Use in-memory table to stash headers after FCU import has started

why:
  After block import has started, there is no way to save/stash block
  headers persistently. The FCU handlers always maintain a positive
  transaction level and in some instances the current transaction is
  flushed and re-opened.

  This patch fixes an exception thrown when a block header has gone
  missing.
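
  The shape of such a stash, as an illustrative sketch only (not the
  actual implementation):

```nim
import std/tables

type
  HeaderStash = object
    ## Headers keyed by block number, kept in memory only because
    ## persistent storage is unavailable while FCU import keeps an
    ## open transaction.
    tab: Table[uint64, seq[byte]]      # block number -> RLP encoded header

proc put(hs: var HeaderStash; num: uint64; rlpHdr: seq[byte]) =
  hs.tab[num] = rlpHdr

proc get(hs: HeaderStash; num: uint64): seq[byte] =
  ## Returns an empty blob if the header was never stashed.
  hs.tab.getOrDefault(num)
```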

* When resuming sync, delete stale headers and state

why:
  Deleting headers saves some persistent space that would get lost
  otherwise. Deleting the state after resuming prevents race
  conditions.

* On clean start, hibernate the sync `daemon` entity before the first update from CL

details:
  Only reduced services are running:
  * accept FCU from CL
  * fetch the finalised header after accepting FCU (which provides the hash only)

* Improve text/meaning of some log messages

* Revisit error handling for useless peers

why:
  A peer is abandoned if the error score is too high. This was not
  properly handled in a fringe case where the error was detected at
  staging time although fetching via eth/xx was ok.

* Clarify `break` meaning by using labelled `break` statements
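
  For illustration only, a named block makes the break target explicit:

```nim
block fetchLoop:                       # named block as the break target
  for batch in 0 .. 3:
    for n in 0 .. 9:
      if n == 7:
        break fetchLoop                # unambiguously leaves both loops
echo "done"
```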

* Fix action how to commit when sync target has been reached

why:
  The sync target block number might precede the latest FCU block number.
  This happens when the engine API squeezes in some request to execute
  and import subsequent blocks.

  This patch fixes an assert thrown when, after reaching the target, the
  latest FCU block number is higher than the expected target block number.

* Update TODO list
2024-11-01 19:18:41 +00:00
Jordan Hrycaj ea268e81ff
Beacon sync activation control update (#2782)
* Clarifying/commenting FCU setup condition & small fixes, comments etc.

* Update some logging

* Reorg metrics updater and activation

* Better `async` responsiveness

why:
  Block import does not allow `async` task activation while
  executing. So allow a potential task switch after each imported
  block (rather than after a group of 32 blocks.)

* Handle resuming after previous sync followed by import

why:
  In this case the ledger state is more recent than the saved
  sync state. So this is considered a pristine sync where any
  previous sync state is forgotten.

  This fixes an assert thrown because of inconsistent internal
  state at some point.

* Provide option for clearing saved beacon sync state before starting syncer

why:
  Otherwise it would resume with the last state, which is sometimes
  undesired.

  Without RPC available, the syncer typically stops and terminates with
  the canonical head larger than the base/finalised head. The latter one
  will be saved as database/ledger state and the canonical head as syncer
  target. Resuming syncing here will repeat itself.

  So clearing the syncer state can prevent the syncer from starting
  unnecessarily, avoiding useless actions.

* Allow workers to request syncer shutdown from within

why:
  In one-trick-pony mode (after resuming without RPC support) the
  syncer can be stopped from within, avoiding unnecessary polling.
  In that case, the syncer can (theoretically) be restarted externally
  with `startSync()`.

* Terminate beacon sync after a single run target is reached

why:
  Stops doing useless polling (typically when there is no RPC available)

* Remove crufty comments

* Tighten state reload condition when resuming

why:
  Some pathological case might apply if the syncer is stopped while the
  distance between finalised block and head is very large and the FCU
  base becomes larger than the locked finalised state.

* Verify that finalised number from CL is at least FCU base number

why:
  The FCU base number is determined by the database, non zero if
  manually imported. The finalised number is passed via RPC by the CL
  node and will increase over time. Unless fully synced, this number
  will be pretty low.

  On the other hand, the FCU call `forkChoice()` will eventually fail
  if the `finalizedHash` argument refers to something outside the
  internal chain starting at the FCU base block.

* Remove support for completing interrupted sync without RPC support

why:
  Simplifies start/stop logic

* Remove unused import
2024-10-28 16:22:04 +00:00
Jordan Hrycaj 9b666860db
Beacon sync: Fix race condition with peer fifo overflow (#2699)
why:
  Must not call `runStop()` twice; rather rely on the worker to honour
  the `zombie` flag (no harm if it does not.)
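
  An illustrative caller-side sketch of the intent (hypothetical names,
  not the scheduler code):

```nim
type
  Worker = object
    zombie: bool        # set once the peer has been given up on

proc runStop(w: var Worker) =
  ## Stand-in for the worker's stop handler.
  discard

proc giveUpPeer(w: var Worker) =
  ## `runStop()` is invoked at most once; afterwards only the `zombie`
  ## flag marks the peer as abandoned.
  if not w.zombie:
    w.zombie = true
    w.runStop()
```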
2024-10-04 20:23:30 +00:00
Jordan Hrycaj c022b29d14
Clean up modules in sync folder (#2670)
* Dissolve legacy `sync/types.nim` into `*/eth/eth_types.nim`

* Flare sync: Simplify scheduler and remove `runSingle()` method

why:
  `runSingle()` is not used anymore (main purpose was for negotiating
  best headers in legacy full sync.)

  Also, `runMulti()` was renamed `runPeer()`

* Flare sync: Move `chain` field from `sync_desc` -> `worker_desc`

* Flare sync: Remove handler descriptor lateral reference

why:
  Not used anymore. It enabled turning eth handler activity on/off with
  regard to the sync state, i.e. from within the sync worker.

* Flare sync: Update `Hash256` and other deprecated `std/eth` symbols

* Protocols: Update `Hash256` and other deprecated `std/eth` symbols

* Eth handler: Update `Hash256` and other deprecated `std/eth` symbols

* Update flare TODO

* Remove redundant `sync/types` import

why:
  The import module `types` has been removed

* Remove duplicate implementation
2024-10-01 09:19:29 +00:00
andri lim 4d9e288340
Wiring ForkedChainRef to other components (#2423)
* Wiring ForkedChainRef to other components

- Disable majority of hive simulators
- Only enable pyspec_sim for the moment
- The pyspec_sim is using a smaller RPC service wired to ForkedChainRef
- The RPC service will gradually grow

* Addressing PR review

* Fix test_beacon/setup_env

* Enable consensus_sim (#2441)

* Enable consensus_sim

* Remove isFile check

* Enable Engine API jwt auth tests and exchange cap tests

* Enable engine api in build_sim.sh

* Wire ForkedChainRef to Engine API newPayload

* Wire Engine API getBodies to ForkedChainRef

* Wire Engine API api_forkchoice to ForkedChainRef

* Wire more RPC methods to ForkedChainRef

* Implement eth_syncing

* Implement eth_call and eth_getlogs

* TxPool: simplify smartHead

* Fix smartHead usage

* Fix txpool headDiff

* Remove hasBlockHeader and use headerExists

* Addressing review
2024-09-04 09:54:54 +00:00
Jordan Hrycaj 42a08cfba9
Coredb and sync maintenance update (#2583)
* bump metrics

* Remove cruft

* Cosmetics, update some logging, noise control

* Renamed `CoreDb` function `hasKey` => `hasKeyRc` and provided `hasKey`

why:
  Currently, `hasKey` returns a `Result[]` rather than a `bool` which
  is what one would expect from a function prototype of this name.

  This was a bit of an annoyance and cost unnecessary attention.
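
  An illustrative sketch of the naming split (stand-in types, not the
  actual `CoreDb` code), using the `results` package:

```nim
import results

type KvStore = object
  keys: seq[seq[byte]]     # stand-in for a database backend

proc hasKeyRc(db: KvStore; key: seq[byte]): Result[bool, string] =
  ## Result returning variant, e.g. for also reporting backend errors.
  ok(key in db.keys)

proc hasKey(db: KvStore; key: seq[byte]): bool =
  ## Plain `bool` convenience wrapper, matching what the name suggests.
  db.hasKeyRc(key).get(false)
```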
2024-08-30 11:18:36 +00:00
tersec fd03038cab
Replace some usage of std/options with results Opt (#2323)
* Replace some usage of std/options with results Opt

* more updates
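
As an illustration of the change (not code from this PR), `Opt` from the
`results` package takes over the role of `Option` from `std/options`:

```nim
import results

# before: Option[uint64] from std/options, e.g. some(42'u64) / none(uint64)
let present = Opt.some(42'u64)
let missing = Opt.none(uint64)

doAssert present.isSome and present.get() == 42'u64
doAssert missing.isNone
```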
2024-06-07 23:39:58 +02:00
andri lim bea558740f
Reduce compiler warnings (#2030)
* Reduce compiler warnings

* Reduce compiler warnings in test code
2024-02-16 16:08:07 +07:00
Jordan Hrycaj 4c865ec884
Snap sync update pivot updating via rpc (#1583)
* Unit tests update, code cosmetics

* Fix segfault with zombie handling

why:
  In order to save memory, the data records of zombie entries are removed
  and only the key (aka peer node) is kept. Consequently, logging these
  zombies can only be done by the key.

* Allow to accept V2 payload without `shanghaiTime` set while syncing

why:
  Currently, `shanghaiTime` is missing (at least) while snap syncing. So
  beacon node headers can be processed regardless. Normal (aka strict)
  processing will be automatically restored when leaving snap sync mode.
2023-05-16 14:52:44 +01:00
Jordan Hrycaj e1369a7c25
Improve full sync part behaviour 4 snap sync suite (#1564)
* Set maximum time for nodes to be banned.

why:
  Useless nodes are marked zombies and banned. They are kept in a table
  until flushed out by new connections. This works well if there are many
  connections. For the case that there are only a few, a maximum time is
  set. When it expires, zombies are flushed automatically.
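
  A sketch of the time-based flushing only (hypothetical names, not the
  actual peer table):

```nim
import std/[tables, times]

type
  ZombieTab = object
    tab: Table[string, Time]    # peer key -> time the peer was zombified
    maxAge: Duration

proc expire(z: var ZombieTab) =
  ## Flush zombies automatically once the maximum ban time has elapsed.
  let now = getTime()
  var stale: seq[string]
  for key, banned in z.tab:
    if now - banned > z.maxAge:
      stale.add key
  for key in stale:
    z.tab.del key

var zombies = ZombieTab(maxAge: initDuration(minutes = 10))
zombies.tab["peer-1"] = getTime()
zombies.expire()
```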

* Suspend full sync while the block number is at the beacon block

details:
  Also allows using an external setting from a file (2nd line)

* Resume state at full sync after restart (if any)
2023-04-26 16:46:42 +01:00
Jordan Hrycaj d6ee672ba5
Fix pivot setup after switch to full sync (#1562)
* Cosmetics, update logging, docu

* Fix pivot hand-over after switch to full sync

why:
  Got garbled after code clean up
2023-04-25 13:24:32 +01:00
Jordan Hrycaj c5e895aaab
Code reorg 4 snap sync suite (#1560)
* Rename `playXXX` => `passXXX`

why:
  Better purpose match

* Code massage, log message updates

* Moved `ticker.nim` to the `misc` folder so it can be used by both full and snap sync

why:
  Simplifies maintenance

* Move `worker/pivot*` => `worker/pass/pass_snap/*`

why:
  better for maintenance

* Moved helper source file => `pass/pass_snap/helper`

* Renamed ComError => GetError, `worker/com/` => `worker/get/`

* Keep ticker enable flag in worker descriptor

why:
  This allows passing the flag with the descriptor rather than as an extra
  function argument when calling the setup function.

* Extracted setup/release code from `worker.nim` => `pass/pass_init.nim`
2023-04-24 21:24:07 +01:00
Jordan Hrycaj 0a3bc102eb
Pre functional snap to full sync (#1546)
* Update sync scheduler pool mode

why:
  The pool mode allows looping over active peers one after another. This
  is ideal for soft re-starting peers. As this is a two-tier exercise
  (start/stop, setup/release), the loop must be run twice. This is
  controlled by a more rigid re-definition of how to use the `poolMode`
  flag (see the sketch below).
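
  A sketch of the two-pass loop shape (made-up names, not the actual
  scheduler API):

```nim
type Peer = object
  id: int

proc runPoolPass(peer: Peer; pass: int) =
  ## Stand-in for the per-peer pool mode handler.
  echo "pool pass ", pass, " for peer ", peer.id

proc runPoolMode(peers: seq[Peer]) =
  ## Soft restart: the loop over active peers runs twice, once for the
  ## stop/release tier and once for the start/setup tier.
  for pass in 0 .. 1:
    for peer in peers:
      runPoolPass(peer, pass)

runPoolMode(@[Peer(id: 1), Peer(id: 2)])
```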

* Mitigate RLP serialiser deficiency

why:
  Currently, serialising the `BlockBody` is not convertible and needs
  to be checked in the `eth` module. For now a local fix for the
  wire protocol applies. Unit tests will stay (even after this local
  solution has been removed.)

* Code cosmetics and massage

details:
  Main part is `types.toStr()` as a unified function for logging block
  numbers.

* Allow to use a logical genesis replacement (start of history)

why:
  Snap sync will set up an arbitrary pivot at a block number different
  from zero. In fact, the higher the block number the better.

details:
  A non-genesis start of history will currently only affect the score
  values which were derived from the difficulty.

* Provide function to store the snap pivot block header in chain db

why:
  Together with the start of history facility, this allows proceeding
  with full syncing once snap has finished.

details:
  Snap db storage was switched from sub-tables to the flat chain db.

* Provide database completeness and sanity checker

details:
  For debugging on smaller databases only.

* Implement snap -> full sync switch
2023-04-14 23:28:57 +01:00
Jordan Hrycaj fe3a6d67c6
Prepare snap server client test scenario cont2 (#1487)
* Clean up some function prototypes

why:
  Simplify polymorphic prototype variances for easier maintenance.

* Fix fringe condition crash when importing bogus RLP node

why:
  Accessing non-list RLP entry as a list causes `Defect`

* Fix left boundary proof at range extractor

why:
  Was insufficient. The main problem was that there was no unit test for
  the validity of the generated left boundary.

* Handle incomplete left boundary proofs early

why:
  Attempting to do it later leads to overly complex code in order to
  prevent looping when the same peer repeatedly sends the same incomplete
  proof.

  By contrast, gaps in the leaf sequence can be handled gracefully by
  registering the gaps.

* Implement a manual pivot setup mechanism for snap sync

why:
  For a test scenario it is convenient to set the pivot to something
  lower than the beacon header from the consensus layer. This does not
  need to rely on any RPC mechanism.

details:
  The file containing the pivot specs is specified by the
  `--sync-ctrl-file` option. It is regularly parsed for updates.

* Fix calculation error

why:
  Prevent calculating the square root of a negative number
2023-03-07 14:23:22 +00:00
Jordan Hrycaj f20f20f962
Prepare snap server client test scenario (#1483)
* Enable `snap/1` accounts range service

* Allow changing the garbage collector to `boehm` as a Makefile option.

why:
  There is still an unsolved memory corruption problem that might be
  related to the standard `gc`. It seemingly goes away if the `gc` is
  changed to `boehm`.

  Specifying another `gc` on the make level simplifies debugging and
  development.

* Code cosmetics

details:
* updated exception annotations
* extracted `worker_desc.nim` from `full/worker.nim`
* etc.

* Implement option to state a sync modifier file

why:
  This allows specifying extra sync-type specific options which might
  change over time. This file is regularly checked for updates.

* Implement a threshold when to suspend full syncing

why:
  For a test scenario, a full sync peer may work as a local snap server.
  There is no need to download the full block chain.

details:
  The file containing the pivot specs is specified by the
  `--sync-ctrl-file` option. It is regularly parsed for updates.
2023-03-02 09:57:58 +00:00
Jordan Hrycaj 89ae9621c4
Silence compiler gossip after nim upgrade (#1454)
* Silence some compiler gossip -- part 1, tx_pool

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 2, clique

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 3, misc core

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 4, sync

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Clique update

why:
  Missing exception annotation
2023-01-30 22:10:23 +00:00
Jordan Hrycaj 707e47ac38
External beacon stream tracker (#1433)
* Register external beacon stream header

why:
  This will be used to sync the peers against.

* Update total coverage book-keeping for 100% roll-over

details:
  Provide commonly available/used function

* Replace best pivot by beacon stream tracker

details:
  Beacon stream header cache will be updated by external chain monitor via
  RPC. This cached header will then be used to sync the pivot.
2023-01-17 09:28:14 +00:00
Jordan Hrycaj 179b4adac3
Snap sync tweaks n fixes (#1359)
* Miscellaneous tweaks & fixes

details:
+ Catch `TransportError` exception in `legacy.nim` module
+ Fix self-calling wrapper `hexaryEnvelopeTouchedBy()`

* Update documentation, logging etc.

* Changed `checkNode` batch list `seq[Blob]` => `seq[NodeSpecs]`

why:
  The `NodeSpecs` type as used here is a tuple `(partial-path,node-key)`.

  When `checkNode` partial paths are collected, the node key is also
  available, so it should be registered rather than repeatedly recovered
  from the database.
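
  Illustrative shape of such a batch entry (stand-in types, not the
  actual snap sync sources):

```nim
type
  NodeKey = array[32, byte]        # stand-in for a trie node hash
  NodeSpecs = tuple
    partialPath: seq[byte]         # partial path leading to the node
    nodeKey: NodeKey               # key already known at collection time

# Registering the key alongside the path avoids re-reading it from the
# database later on.
var checkNodes: seq[NodeSpecs]
checkNodes.add((partialPath: @[0x12'u8, 0x34'u8], nodeKey: default(NodeKey)))
```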

* Add optional begin/end trace statement in snap scheduler

why:
  Allows tracing the invoked entity and scheduler state variables
2022-12-09 13:43:55 +00:00
jangko 94a94c5b65 implement better hardfork management 2022-12-02 13:51:42 +07:00
Jordan Hrycaj 7688148565
Snap sync can start on saved checkpoint (#1327)
* Stop negotiating pivot if peer repeatedly replies w/useless answers

why:
  There is some fringe condition where a peer repeatedly replies with
  legit but useless empty headers. This goes on until somebody stops.
  We stop now.

* Rename `missingNodes` => `sickSubTries`

why:
  These (probably missing) nodes represent in reality fully or partially
  missing sub-tries. The top nodes may even exist, e.g. as a shallow
  sub-trie.

also:
  Keep track of account healing on/off by bool variable `accountsHealing`
  controlled in `pivot_helper.execSnapSyncAction()`

* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)

also:
+ Trigger the recovery (or similar) process from inside the global peer
  worker initialisation `worker.setup()` and not by the `snap.start()`
  function.
+ Have `runPool()` return a `bool` code to indicate an early stop to the
  scheduler.

* Can import partial snap sync checkpoint at start

details:
 + Modified what is stored with the checkpoint in `snapdb_pivot.nim`
 + Will be loaded within `runDaemon()` if activated

* Forgot to import total coverage range

why:
  Only the top (or latest) pivot needs coverage but the total coverage
  is the list of all ranges for all pivots -- simply forgotten.
2022-11-25 14:56:42 +00:00
Jordan Hrycaj 9aa925cf36
Update sync scheduler (#1297)
* Add `stop()` methods to the shutdown procedure

why:
  Nasty behaviour when hitting Ctrl-C, otherwise

* Add background service to sync scheduler

why:
  The background service will be used for sync data import and recovery
  after restart.

  It is controlled by the sync scheduler for an easy turn on/off API.

also:
  Simplified snap ticker time calc.

* Fix typo
2022-11-14 14:13:00 +00:00
Jordan Hrycaj 21837546c3
Fix/clarify single mode for async sync scheduler (#1292)
why:
  Single mode here means there is only one such (single mode) instance
  activated, but multi mode instances for other peers are allowed.

  Erroneously, multi mode instances were held back waiting while some
  single mode instance was running, which reduced the number of parallel
  download peers.
2022-11-09 19:16:25 +00:00
Jordan Hrycaj a689e9185a
Prep for full sync after snap make 5 (#1286)
* Update docu and logging

* Extracted and updated constants from `worker_desc` into separate file

* Update and re-calibrate communication error handling

* Allow simplified pivot negotiation

why:
  This feature allows turning off the pivot negotiation by which peers
  agree on a pivot header.

  For snap sync with fast changing pivots this only throttles the sync
  process. The finally downloaded DB snapshot is typically a merged
  version of different pivot states augmented by a healing process.

* Re-model worker queues for accounts download & healing

why:
  Currently there is only one data fetch per download or healing task.
  This task is then repeated by the scheduler after a short time. In
  many cases, this short time seems enough for some peers to decide to
  terminate the connection.

* Update main task batch `runMulti()`

details:
  The function `runMulti()` is activated in quasi-parallel mode by the
  scheduler. This function calls the download, healing and fast-sync
  functions.

  While in debug mode, after each set of jobs run by this function the
  database is analysed (by the `snapdb_check` module) and the result
  printed.
2022-11-01 15:07:44 +00:00
Jordan Hrycaj 82ceec313d
Prettify logging for snap sync environment (#1278)
* Multiple storage batches at a time

why:
  Previously only some small portion was processed at a time, so the peer
  might have gone away by the time the process was resumed.

* Renamed some field of snap/1 protocol response object

why:
  What is documented as `slots` is in reality a per-account list of slot
  lists. So the new name `slotLists` better reflects the nature of the
  beast.
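
  An illustrative shape only (not the actual protocol definitions):

```nim
type
  SnapStorageSlot = object
    slotHash: array[32, byte]
    slotData: seq[byte]

  StorageRangesReply = object
    ## One inner list of slots per requested account, hence `slotLists`
    ## rather than the misleading name `slots`.
    slotLists: seq[seq[SnapStorageSlot]]
    proof: seq[seq[byte]]
```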

* Some minor healing re-arrangements for storage slot tries

why:
  Resolving all complete inherited slot tries first in sync mode keeps
  the worker queues smaller, which improves logging.

* Prettify logging, comments update etc.
2022-10-21 20:29:42 +01:00
Jordan Hrycaj c0d580715e
Remodel persistent snapdb access (#1274)
* Re-model persistent database access

why:
  Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
  key mapping). So get/put and bulk functions now use the definitions
  in `snapdb_desc` (earlier there were some shortcuts for `get()`.)

* Fixes: missing return code, typo, redundant imports etc.

* Remove obsolete debugging directives from `worker_desc` module

* Correct failing unit tests for storage slots trie inspection

why:
  Some pathological cases for the extended tests do not produce any
  hexary trie data. This is rightly detected by the trie inspection
  and the result checks needed to be adjusted.
2022-10-20 17:59:54 +01:00
Jordan Hrycaj 85fdb61699
Prep for full sync after snap make 3 (#1270)
* For snap sync, publish `EthWireRef` in sync descriptor

why:
  currently used for noise control

* Detect and reuse existing storage slots

* Provide healing module for storage slots

* Update statistic ticker (adding range factor for unprocessed storage)

* Complete merge function for work item ranges

why:
  Merging an interval into an existing partial item was missing

* Show av storage queue lengths in ticker

detail:
  Previous attempt showed average completeness which did not tell much

* Correct the meaning of the storage counter (per pivot)

detail:
  It is the number of accounts that have storage saved
2022-10-19 11:04:06 +01:00
jangko 3fa1b012e6
initial wire protocol transformation
rework of the eth wire protocol handlers.
currently still missing 4 handler implementations,
but the framework is ready for expansion.
2022-10-15 19:48:21 +07:00
Jordan Hrycaj eca5882238
Isolating sync action modules (#1249)
* Miscellaneous updates TBC

* Disentangled pivot2 module from snap

why:
  Wrote as template on top of sync so it can be shared by fast and snap
  sync.

* Renamed and relocated pivot sources

* Integrated `best_pivot` module into full and snap sync

why:
  Full sync used an older version of `best_pivot`

* Isolating download module from full sync

why:
  Might be shared with snap sync at a later stage
2022-09-30 09:22:14 +01:00
Jordan Hrycaj 4ff0948fed
Snap sync accounts healing (#1225)
* Added inspect module

why:
  Find dangling references for trie healing support.

details:
 + This patch set provides only the inspect module and some unit tests.
 + There are also extensive unit tests which need bulk data from the
   `nimbus-eth1-blob` module.

* Alternative pivot finder

why:
  Attempt to be faster on start up. Also trying to decouple the pivot
  finder somehow by providing different mechanisms (this one runs in
  `single` mode.)

* Use inspect module for healing

details:
 + After some progress with account and storage data, the inspect facility
   is used to find dangling links in the database to be filled nose-wise.
 + This is a crude attempt to cobble together functional elements. The
   set up needs to be honed.

* fix scheduler to avoid starting dead peers

why:
  Some peers drop out while in `sleepAsync()`. So extra `if` clauses
  make sure that this event is detected early.

* Bug fixes causing crashes

details:

+ prettify.toPC():
  int/intToStr() numeric range over/underflow

+ hexary_inspect.hexaryInspectPath():
  take care of half initialised step with branch but missing index into
  branch array

* Improve handling of dropped peers in alternative pivot finder

why:
  Strange things may happen while querying data from the network.
  Additional checks make sure that the state of other peers is updated
  immediately.

* Update trace messages

* reorganise snap fetch & store schedule
2022-09-16 08:24:12 +01:00
Jordan Hrycaj de2c13e136
Update snap offline tests (#1199)
* Re-implemented `hexaryFollow()` in a more general fashion

details:
+ New name for re-implemented `hexaryFollow()` is `hexaryPath()`
+ Renamed `rTreeFollow()` as `hexaryPath()`

why:
  Returning similarly organised structures, the results of the
  `hexaryPath()` functions become comparable when running over
  the persistent and the in-memory databases.

* Added traversal functionality for persistent ChainDB

* Using `Account` values as re-packed Blob

* Repack samples as compressed data files

* Produce test data

details:
+ Can force pivot state root switch after minimal coverage.
+ For emulating certain network behaviour, downloading accounts stops for
  a particular pivot state root if 30% (some static number) coverage is
  reached. Following accounts are downloaded for a later pivot state root.
2022-08-24 14:44:18 +01:00
Jordan Hrycaj 5f0e89a41e
Snap accounts bulk import preparer (#1183)
* Provided common scheduler API, applied to `full` sync

* Use hexary trie as storage for proofs_db records

also:
 + Store metadata with account for keeping track of account state
 + add iterator over accounts

* Common scheduler API applied to `snap` sync

* Prepare for accounts bulk import

details:
+ Added some ad-hoc checks for proving accounts data received from the
  snap/1 (will be replaced by proper database version when ready)
+ Added code that dumps some of the received snap/1 data into a file
  (turned off by default, see `worker_desc.nim`)
2022-08-04 09:04:30 +01:00