Commit Graph

173 Commits

Author SHA1 Message Date
Jordan Hrycaj 6d132811ba
Core db update providing additional results code interface (#1776)
* Split `core_db/base.nim` into several sources

* Rename `core_db/legacy.nim` => `core_db/legacy_db.nim`

* Update `CoreDb` API, dual methods returning `Result[]` or plain value

detail:
  Plain value methods implement the legacy API; they raise a `Defect` on error results
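
A minimal sketch of the dual-method pattern (illustrative names, with a plain `Option` standing in for `Result[]`; this is not the actual CoreDb API):

```nim
import std/[options, tables]

type MemDb = ref object
  tab: Table[string, string]

proc getRc(db: MemDb; key: string): Option[string] =
  ## Result-style accessor: `none` signals the error case
  if key in db.tab: some(db.tab[key]) else: none(string)

proc get(db: MemDb; key: string): string =
  ## legacy-style accessor: defects instead of returning an error
  let rc = db.getRc(key)
  if rc.isNone:
    raiseAssert "get(): no such key: " & key   # Defect on error result
  rc.get

let db = MemDb(tab: {"a": "1"}.toTable)
doAssert db.getRc("b").isNone     # Result-style error value
doAssert db.get("a") == "1"       # plain value on success
```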

* Redesign `CoreDB` direct backend access

why:
  Made the `backend` directive an integral part of the API

* Discontinue providing unused or otherwise available functions

details:
+ setTransactionID() removed, not used and not easily replicable in Aristo
+ maybeGet() removed, available via direct backend access
+ newPhk() removed, never used & was experimental anyway

* Update/reorg backend API

why:
+ Added error print function `$$()`
+ General descriptor completion (and optional validation) via `bless()`

* Update `Aristo`/`Kvt` exception handling

why:
  Avoid `CatchableError` exceptions, rather pass them as error code where
  appropriate.

* More `CoreDB` compliant `Aristo` and `Kvt` methods

details:
+ Providing functions like `contains()`, `getVtxRc()` (returns `Result[]`).
+ Additional error code: `NotImplemented`

* Rewrite/reorg of Aristo DB constructor

why:
  Previously used global object `DefaultQidLayoutRef` as default
  initialiser. This object was created at compile time which led to
  non-gc safe functions.

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/aristo/aristo_transcode.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

---------

Co-authored-by: Kim De Mey <kim.demey@gmail.com>
2023-09-26 10:21:13 +01:00
jangko a07247b547
Remove dead code from beacon skeleton 2023-09-25 17:21:59 +07:00
andri lim 7af9e3dc53
Refactor beacon skeleton (#1761)
* Fix unlisted exception due to recent modification in CoreDB interface

* Refactor beacon skeleton

* More unlisted exception fix
2023-09-19 11:52:28 +07:00
Jordan Hrycaj 6bc55d4e6f
Core db aristo and kvt updates preparing for integration (#1760)
* Kvt: Implemented multi-descriptor access on the same backend

why:
  This behaviour mirrors the one of Aristo and can be used for
  simultaneous transactions on Aristo + Kvt

* Kvt: Update database iterators

why:
  Forgot to run on the top layer first

* Kvt: Misc fixes

* Aristo, use `openArray[byte]` rather than `Blob` in prototype

* Aristo, by default hashify right after cloning descriptor

why:
  Typically, a completed descriptor is expected after cloning. Hashing
  can be suppressed by argument flag.

* Aristo provides `replicate()` iterator, similar to legacy `replicate()`

* Aristo API fixes and updates

* CoreDB: Rename `legacy_persistent` => `legacy_rocksdb`

why:
  More systematic, will be in line with Aristo DB which might have
  more than one persistent backend

* CoreDB: Prettify API sources

why:
  Better to read and maintain

details:
  Annotating with custom pragmas which cleans up the prototypes
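
A minimal sketch of the idea (the pragma name `coreApi` is made up here, not taken from the sources): a user-defined pragma bundles the recurring annotations so the prototypes stay short.

```nim
# hypothetical pragma alias bundling the usual annotations
{.pragma: coreApi, gcsafe, raises: [CatchableError].}

proc fetch(key: string): string {.coreApi.} =
  ## one short annotation instead of repeating gcsafe/raises everywhere
  key
```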

* CoreDB: Update MPT/put() prototype allowing `CatchableError`

why:
  Will be needed for Aristo API (legacy is OK with `RlpError`)
2023-09-18 21:20:28 +01:00
andri lim 56215ed83f
Bump stint to v2.0: new array backend (#1747)
* Bump stint to v2.0: new array backend
2023-09-13 09:32:38 +07:00
andri lim 948c94763c
Bump nim-eth: Add closeWait to EthereumNode (#1742) 2023-09-09 13:54:58 +07:00
andri lim cf9553196e
When memory backend selected, no snap sync (#1738) 2023-09-08 15:21:59 +07:00
jangko 5fb0fc65ba
Implement beacon sync stub
- Prepare a test env for beacon sync in engine api simulator.
- Wiring beacon sync to the rest of subsystems.
2023-09-07 08:49:31 +07:00
jangko 92713ef326
Fix rpc.sendRawTransaction and txPool: reject invalid transaction earlier 2023-08-21 09:11:10 +07:00
Jordan Hrycaj 221e6c9e2f
Unified database frontend integration (#1670)
* Nimbus folder environment update

details:
* Integrated `CoreDbRef` for the sources in the `nimbus` sub-folder.
* The `nimbus` program does not compile yet as it needs the updates
  in the parallel `stateless` sub-folder.

* Stateless environment update

details:
* Integrated `CoreDbRef` for the sources in the `stateless` sub-folder.
* The `nimbus` program compiles now.

* Premix environment update

details:
* Integrated `CoreDbRef` for the sources in the `premix` sub-folder.

* Fluffy environment update

details:
* Integrated `CoreDbRef` for the sources in the `fluffy` sub-folder.

* Tools environment update

details:
* Integrated `CoreDbRef` for the sources in the `tools` sub-folder.

* Nodocker environment update

details:
* Integrated `CoreDbRef` for the sources in the
  `hive_integration/nodocker` sub-folder.

* Tests environment update

details:
* Integrated `CoreDbRef` for the sources in the `tests` sub-folder.
* The unit tests compile and run cleanly now.

* Generalise `CoreDbRef` to any `select_backend` supported database

why:
  Generalisation was simply missed while overcoming some compiler oddity
  which was tied to rocksdb for testing.

* Suppress compiler warning for `newChainDB()`

why:
  Warning was added to this function which must be wrapped so that
  any `CatchableError` is re-raised as `Defect`.

* Split off persistent `CoreDbRef` constructor into separate file

why:
  This allows compiling a memory-only database version without linking
  the backend library.

* Use memory `CoreDbRef` database by default

detail:
 Persistent DB constructor needs to import `db/core_db/persistent`

why:
 Most tests use memory DB anyway. This avoids linking `-lrocksdb` or
 any other backend by default.

* fix `toLegacyBackend()` availability check

why:
  got garbled after memory/persistent split.

* Clarify raw access to MPT for snap sync handler

why:
  Logically, `kvt` is not the raw access for the hexary trie (although
  this holds for the legacy database)
2023-08-04 12:10:09 +01:00
Jordan Hrycaj 322f1c2e9e
Unified database frontend (#1661)
* Remove 32bit os support from `custom_network` unit test

also:
* Fix compilation annoyance #1648
* Fix unit test on Kiln (changed `merge` logic?)

* Hide unused sources that do not compile

why:
* Get them out of the way before major update
* Import and function prototype mismatch -- maybe some changes got out
  of scope.

* Re-implemented `db_chain` as `core_db`

why:
  Hiding `TrieDatabaseRef` and `HexaryTrie` by default allows replacing
  the current db wrapper with some other one, e.g. Aristo

* Support compiler exception warnings for CoreDbRef base methods.

* Allow `pairs()` iterator on all memory based key-value tables

why:
  Previously only available for capture recorder.

* Backport `chain_db.nim` changes into its re-implementation `core_apps.nim`

* Fix exception annotation
2023-07-31 14:43:38 +01:00
jangko ff1a45e095
fix shanghai withdrawal validation
Previously, the withdrawal validation was done in process_block only;
the one in persist block, which is also used by the synchronizer,
was not validated properly.
2023-06-26 07:46:09 +07:00
andri lim 26a8759c34
implementation of EIP-4844: Shard Blob Transactions (#1440)
* EIP-4844: add pointEvaluation precompiled contract

* EIP-4844: validate transaction and block header

* EIP-4844: implement DataHash Op Code

* EIP-4844: txPool support excessDataGas calculation

* EIP-4844: make sure tx produce correct txHash

* EIP-4844: node should not automatically broadcast blob tx to its peers

* EIP-4844: add test cases

* EIP-4844: add EIP-4844 support to t8n tool

* EIP-4844: update nim-eth to branch eip-4844

* fix t8n transaction decoding

* add t8n test data

* EIP-4844: fix blobHash opcode

* disable blobHash test when evmc_enable
2023-06-24 20:56:44 +07:00
Jordan Hrycaj 0308dfac4f
Aristo db address sup trie items properly (#1600)
* Fix include

why:
  Eth67 not default yet so that got missed

* Rename `LeafKey` => `LeafTie`

why:
  Name is a pen picture of what this object is for. Also, it avoids the
  ubiquitous term `key`.

* Provided `getOrVoid()` wrapper for `getOrDefault()`

also:
  Provide `isValid()` syntactic sugar for `.isNil.not`, `!= 0` etc.
  Reorg descriptor source, split into sub-sources
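
A minimal sketch of the sugar (simplified, hypothetical types rather than the actual Aristo definitions):

```nim
import std/tables

type
  VertexID = uint64
  NodeRef  = ref object
    data: int

proc isValid(n: NodeRef): bool = not n.isNil    # sugar for `.isNil.not`
proc isValid(id: VertexID): bool = id != 0      # sugar for `!= 0`

proc getOrVoid(t: Table[VertexID, NodeRef]; key: VertexID): NodeRef =
  ## like `getOrDefault()`, the default being the "void" (nil) entry
  t.getOrDefault(key)

var tab: Table[VertexID, NodeRef]
doAssert not tab.getOrVoid(42).isValid
doAssert VertexID(7).isValid
```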

* Bundled `NodeKey` objects with root ID and called it `HashLabel`

why:
  `NodeKey` (aka repurposed Hash256) objects are unique only within a
  particular sub-trie (e.g. storage slots) which are kept separated
  (i.e. non-interleaved) by design. This is not applied to the backend
  as the map VertexID->NodeKey labelling the nodes need not be injective.

  For the in-memory database (transaction) layers, the injective map
  VertexID->(VertexID,NodeKey) is used where the first field of the image
  tuple is the root ID of the sub-trie the `NodeKey` object is valid for. So
  identical storage tries for different accounts can be represented.
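
A rough sketch of the resulting shape (type and field names are illustrative only, not the actual Aristo definitions):

```nim
type
  VertexID  = distinct uint64       # vertex/root identifier
  NodeKey   = array[32, byte]       # repurposed 32-byte hash
  HashLabel = object
    root: VertexID                  # root of the sub-trie the key is valid for
    key:  NodeKey

# the in-memory layers then map VertexID -> HashLabel, which stays injective
# even when identical storage tries appear under different accounts
```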
2023-06-12 14:48:47 +01:00
jangko 8700d8b1e1
reduce compiler warnings 2023-06-12 12:58:53 +07:00
jangko 67aaf92c1d
bump submodules 2023-06-07 18:12:02 +07:00
Jordan Hrycaj 4c865ec884
Snap sync update pivot updating via rpc (#1583)
* Unit tests update, code cosmetics

* Fix segfault with zombie handling

why:
  In order to save memory, the data records of zombie entries are removed
  and only the key (aka peer node) is kept. Consequently, logging these
  zombies can only be done by the key.

* Allow to accept V2 payload without `shanghaiTime` set while syncing

why:
  Currently, `shanghaiTime` is missing (at least) while snap syncing. So
  beacon node headers can be processed regardless. Normal (aka strict)
  processing will be automatically restored when leaving snap sync mode.
2023-05-16 14:52:44 +01:00
Kim De Mey 408394a2bd
Bump nim-eth and remove unneeded Defect raises (#1575) 2023-05-10 18:04:35 +02:00
Jordan Hrycaj e1369a7c25
Improve full sync part behaviour 4 snap sync suite (#1564)
* Set maximum time for nodes to be banned.

why:
  Useless nodes are marked zombies and banned. They are kept in a table
  until flushed out by new connections. This works well if there are many
  connections. For the case that there are a few only, a maximum time is
  set. When expired, zombies are flushed automatically.

* Suspend full sync while block number at beacon block

details:
  Also allows using an external setting from a file (2nd line)

* Resume state at full sync after restart (if any)
2023-04-26 16:46:42 +01:00
Jordan Hrycaj 68b2448ce1
Snap sync cosmetic code update (#1563)
* Relocate full sync descriptors from global `worker_desc.nim` to local pass

why:
  These settings are needed only for the full sync pass.

* Rename `pivotAccountsCoverage*()` => `accountsCoverage*()`

details:
  Extract from `worker_desc.nim` into separate source file.

* Relocate snap sync sub-descriptors

details:
  ..from global `worker_desc.nim` to local pass module `snap_pass_desc.nim`.

* Rename `SnapPivotRef` => `SnapPassPivotRef`

* Mostly removed `SnapPass` prefix from object type names

why:
  These objects are solely used on the snap pass.
2023-04-25 17:34:48 +01:00
Jordan Hrycaj d6ee672ba5
Fix pivot setup after switch to full sync (#1562)
* Cosmetics, update logging, docu

* Fix pivot hand-over after switch to full sync

why:
  Got garbled after code clean up
2023-04-25 13:24:32 +01:00
Jordan Hrycaj c5e895aaab
Code reorg 4 snap sync suite (#1560)
* Rename `playXXX` => `passXXX`

why:
  Better purpose match

* Code massage, log message updates

* Moved `ticker.nim` to `misc` folder so it can be used alike by full and snap sync

why:
  Simplifies maintenance

* Move `worker/pivot*` => `worker/pass/pass_snap/*`

why:
  better for maintenance

* Moved helper source file => `pass/pass_snap/helper`

* Renamed ComError => GetError, `worker/com/` => `worker/get/`

* Keep ticker enable flag in worker descriptor

why:
  This allows passing the flag with the descriptor rather than as an extra
  function argument when calling the setup function.

* Extracted setup/release code from `worker.nim` => `pass/pass_init.nim`
2023-04-24 21:24:07 +01:00
Jordan Hrycaj f40a066cc6
Update snap sync ready to succeed at lab test (#1556)
* Extract RocksDB timing tests from snap unit tests as separate module

why:
  Declutter, make space for more snap related unit tests.

* Renamed `undumpNextGroup()` => `undumpBlocks()`

why:
  The source file is called `undump_blocks.nim` which should be sort
  of in sync with the method name(s).

* Implement snap/1 server method `getByteCodes()`

* Implement snap/1 client method `getByteCodes()`

* Implement faculty for handling contract code fetching via snap/1

* Provide persistent storage for contract code records

* Implement contract code snap sync fetch & store

* Code massage, cosmetics

* Unit tests for verifying snap sync snapshot dump

details:
  Use `undump_kvp.dumpAllDb()` to dump any database.
2023-04-21 22:11:04 +01:00
Jordan Hrycaj 0387afb7b1
Remove local block body rlp fix (#1555)
* Remove local block body rlp fix

why:
  Fix moved to `nim-eth` module

* Update nim-eth bumper
2023-04-21 20:08:18 +01:00
Jordan Hrycaj 0a3bc102eb
Pre functional snap to full sync (#1546)
* Update sync scheduler pool mode

why:
  The pool mode allows looping over active peers one after another. This
  is ideal for soft re-starting peers. As this is a two-tier experience
  (start/stop, setup/release) the loop must be run twice. This is
  controlled by a more rigid re-definition of how to use the `poolMode`
  flag.

* Mitigate RLP serialiser deficiency

why:
  Currently, serialising the `BlockBody` is not convertible and needs
  to be checked in the `eth` module. For now, a local fix for the
  wire protocol applies. Unit tests will stay (even after this local
  solution has been removed.)

* Code cosmetics and massage

details:
  Main part is `types.toStr()` as a unified function for logging block
  numbers.

* Allow to use a logical genesis replacement (start of history)

why:
  Snap sync will set up an arbitrary pivot at a block number different
  from zero. In fact, the higher the block number the better.

details:
  A non-genesis start of history will currently only affect the score
  values which were derived from the difficulty.

* Provide function to store the snap pivot block header in chain db

why:
  Together with the start of history facility, this allows to proceed
  with full syncing once snap has finished.

details:
  Snap db storage was switched from sub-tables to the flat chain db.

* Provide database completeness and sanity checker

details:
  For debugging on smaller databases, only

* Implement snap -> full sync switch
2023-04-14 23:28:57 +01:00
Jordan Hrycaj 9facab91cb
Prepare snap client for continuing with full sync (#1534)
* Somewhat tighten error handling

why:
  Zombie state is invoked when the current peer turns out to be useless
  for further communication. While there is a chance to further talk
  to a peer about another topic (aka healing) after some protocol failure,
  it makes no sense to do so after a network problem.

  The latter state is explained by the `peerDegraded` flag that goes
  together with the `zombie` state flag. A degraded peer is dropped
  immediately.

* Remove `--sync-mode=snapCtx` option, always start snap in recovery mode

why:
  No need for a snap sync option without recovery mode; the same can be
  achieved by deleting the database.

* Code cosmetics, typos, prettify logging, debugging helper, etc.

* Split off snap sync sub-mode handler into separate modules

details:
  The original `worker.nim` source has become a multiplexer for several
  snap sync sub-modes `full` and `snap`. The source modules of the
  incarnations of a particular sync sub-mode are placed into the
  `worker/play` directory.

* Update ticker for snap and full sync logging
2023-04-06 20:42:07 +01:00
Jordan Hrycaj 5e865edec0
Update snap client storage slots download and healing (#1529)
* Fix fringe condition for `GetStorageRanges` message handler

why:
  Receiving a proved empty range was not considered at all. This led to
  inconsistencies in the return value which led to subsequent errors.

* Update storage range bulk download

details:
  Mainly re-org of storage queue processing in `storage_queue_helper.nim`

* Update logging variables/messages

* Update storage slots healing

details:
  Mainly clean up after improved helper functions from the sources
  `find_missing_nodes.nim` and `storage_queue_helper.nim`.

* Simplify account fetch

why:
  Too much fuss was made tolerating some errors. An overall strategy will
  be implemented where the concert of download and healing functions
  is orchestrated.

* Add error resilience to the concert of download and healing.

why:
  The idea is that a peer might stop serving snap/1 accounts and storage
  slot downloads while still able to support fetching nodes for healing.
2023-04-04 14:36:18 +01:00
Jordan Hrycaj c01045c246
Update snap client account healing (#1521)
* Update nearby/neighbour leaf nodes finder

details:
  Update return error codes so that in the case that there is no more
  leaf node beyond the search direction, the particular error code
  `NearbyBeyondRange` is returned.

* Compile largest interval range containing only this leaf point

why:
  Will be needed in snap sync for adding single leaf nodes to the range
  of already allocated nodes.

* Reorg `hexary_inspect.nim`

why:
 Merged the nodes collecting algorithm for persistent and in-memory
 into a single generic function `hexary_inspect.inspectTrieImpl()`

* Update fetching accounts range failure handling in `rangeFetchAccounts()`

why:
  Rejected response leads now to fetching for another account range. Only
  repeated failures (or all done) terminate the algorithm.

* Update accounts healing

why:
+ Fixed looping over a bogus node response that could not be inserted into
  the database. As a solution, these nodes are locally registered and not
  asked for in this download cycle.
+ Sub-optimal handling of interval range for a healed account leaf node.
  Now the maximal range interval containing this node is registered as
  processed which leads to de-fragmentation of the processed (and
  unprocessed) range list(s). So *gap* ranges which are known not to
  cover any account leaf node are no longer asked for on the network.
+ Sporadically remove empty interval ranges (if any)

* Update logging, better variable names
2023-03-25 10:44:48 +00:00
Jordan Hrycaj 33023aaf39
Update snap server client test scenario (#1518)
* Redesign snap1 message GetTrieNodes argument prototypes

why:
  A list of sub-objects `seq[SnapTriePath]` is more intuitive to work with
  than an opaque definition `seq[seq[Blob]]` because the inner
  `SnapTriePath` object has a dedicated inner structure (for how to
  interpret `seq[Blob]`.)
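
A rough sketch of the difference (field names are illustrative, not necessarily the actual protocol definition):

```nim
type
  Blob = seq[byte]

  # opaque: the meaning of the inner sequence positions lives in prose only
  TrieNodePathsRaw = seq[seq[Blob]]

  # self-describing: the first path addresses the account, the remaining
  # ones address slots inside that account's storage trie
  SnapTriePath = object
    accPath:   Blob
    slotPaths: seq[Blob]

  TrieNodePaths = seq[SnapTriePath]
```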

* Collect some public constants into `constants.nim` file

* Reorg `hexary_paths.nim`

why:
+ Collecting nodes following a partial path properly ending at an
  extension node failed to collect this last node.
+ Merged the nodes collecting algorithm for persistent and in-memory
  into a single generic function `hexary_paths.rootPathExtend()`

info:
  Extracted common tasks to `hexary_nodes_helper.nim`

* Implement `StorageRanges` message handler for snap/1 protocol
2023-03-22 20:11:49 +00:00
Jordan Hrycaj 15d0ccb39c
Prepare snap server client test scenario cont4 (#1507)
* Add state root to node steps path register `RPath` or `XPath`

why:
  Typically, the first node in the path register is the state root. There
  are occasions when the path register is empty (i.e. there are no node
  references) which typically applies to a zero node key.

  In order to find the next node key greater than zero, the state root is
  needed, which is now part of the `RPath` or `XPath` data types.

* Extracted hexary tree debugging functions into separate files

* Update empty path fringe case for left/right node neighbour

why:
  When starting at zero, the node steps path register would be empty. So
  will any path that is before the first non-zero link of a state root (if
  it is a `Branch` node.)

  The `hexaryNearbyRight()` or `hexaryNearbyLeft()` function required a
  non-zero node steps path register.  Now the first node is to be advanced
  starting at the first state root link if necessary.

* Simplify/reorg neighbour node finder

why:
  There was too much code repetition for the cases
  * persistent or in-memory database
  * left or right move

details:
  Most algorithms apply for persistent and in-memory alike. Using
  templates/generic functions most of these algorithms can be stated
  in a unified way

* Update storage slots snap/1 handler

details:
  Minor changes to be more debugging friendly.

* Fix detection of full database for snap sync

* Docu: Snap sync test & debugging scenario
2023-03-17 14:46:50 +00:00
Adam Spitz ee50f06a3e
Sketching in "stateless mode". (#1495)
(Doesn't do anything yet.)
2023-03-13 14:18:30 -04:00
Jordan Hrycaj 2f7f2dba2d
Prepare snap server client test scenario cont3 (#1491)
* Handle last/all node(s) proof conditions at leaf node extractor

detail:
  Flag whether the maximum extracted node is the last one in database
  No proof needed if the full tree was extracted

* Clean up some helpers & definitions

details:
  Move entities to more plausible locations, e.g. `Account` object need
  not be dealt with in the range extractor as it applies to any kind of
  leaf data.

* Fix next/prev database walk fringe condition

details:
  The first check needed might be for a leaf node, which was done too late.

* Homogenise snap/1 protocol function prototypes

why:
  The range arguments `origin` and `limit` data types differed in various
  function prototypes (`Hash256` vs. `openArray[byte]`.)

* Implement `GetStorageRange` handler

* Implement server timeout for leaf node retrieval

why:
  This feature leaves control with the server over potentially costly actions
  invoked by the network

* Implement maximal reply size for snap service

why:
  This feature leaves control with the server over potentially costly actions
  invoked by the network.
2023-03-10 17:10:30 +00:00
Jordan Hrycaj fe3a6d67c6
Prepare snap server client test scenario cont2 (#1487)
* Clean up some function prototypes

why:
  Simplify polymorphic prototype variances for easier maintenance.

* Fix fringe condition crash when importing bogus RLP node

why:
  Accessing non-list RLP entry as a list causes `Defect`

* Fix left boundary proof at range extractor

why:
  Was insufficient. The main problem was that there was no unit test for
  the validity of the generated left boundary.

* Handle incomplete left boundary proofs early

why:
  Attempting to do it later leads to overly complex code in order to prevent
  looping when the same peer repeatedly sends the same incomplete proof.

  By contrast, gaps in the leaf sequence can be handled gracefully by
  registering the gaps

* Implement a manual pivot setup mechanism for snap sync

why:
  For a test scenario it is convenient to set the pivot to something
  lower than the beacon header from the consensus layer. This does not
  need to rely on any RPC mechanism.

details:
  The file containing the pivot specs is specified by the
  `--sync-ctrl-file` option. It is regularly parsed for updates.

* Fix calculation error

why:
  Prevent calculating a negative square root
2023-03-07 14:23:22 +00:00
Jordan Hrycaj fe04b50fef
Slightly change the static peer manager lookup behaviour (#1484)
why:
  The peer manager runs concurrently to the discovery scheme. So the p2p
  peer observer will also present non-static `peer` entries. Previously,
  this peer manager threw an assert defect when this happened.
2023-03-06 09:22:07 +00:00
Jordan Hrycaj 10ad7867e4
Prepare snap server client test scenario cont1 (#1485)
* Renaming androgynous sub-object names according to where they belong

why:
  These objects are not explicitly dealt with. They give meaning to
  some generic wrapper objects. Naming them after their origin may
  help troubleshooting.

* Redefine proof nodes list data type for `snap/1` wire protocol

why:
  The current specification suffered from the fact that the basic data
  type for a proof node is an RLP encoded hexary node. This slightly
  confused the encoding/decoding magic.

details:
  This is the second attempt, now wrapping the `seq[Blob]` into a
  wrapper object of `seq[SnapProof]` for a distinct alias sequence.

  In the previous attempt, `SnapProof` was a wrapper object holding the
  `Blob` with magic applied to the `seq[]`. This needed the `append`
  mixin to strip the outer wrapper that was applied to the `Blob` already
  when it was passed as argument.

* Fix some prototype inconsistency

why:
  For easy reading, `getAccountRange()` handler return code should
  resemble the `accountRange()` arguments prototype.
2023-03-03 20:01:59 +00:00
Jordan Hrycaj f20f20f962
Prepare snap server client test scenario (#1483)
* Enable `snap/1` accounts range service

* Allow to change the garbage collector to `boehm` as a Makefile option.

why:
  There is still an unsolved memory corruption problem that might be
  related to the standard `gc`. It seemingly goes away if the `gc` is
  changed to `boehm`.

  Specifying another `gc` on the make level simplifies debugging and
  development.

* Code cosmetics

details:
* updated exception annotations
* extracted `worker_desc.nim` from `full/worker.nim`
* etc.

* Implement option to state a sync modifier file

why:
  This allows specifying extra sync-type specific options which might
  change over time. This file is regularly checked for updates.

* Implement a threshold when to suspend full syncing

why:
  For a test scenario, a full sync peer may work as a local snap server.
  There is no need to download the full block chain.

details:
  The file containing the pivot specs is specified by the
  `--sync-ctrl-file` option. It is regularly parsed for updates.
2023-03-02 09:57:58 +00:00
Jordan Hrycaj bf53226c2c
Minor updates for testing and cosmetics (#1476)
* Fix locked database file annoyance with unit tests on Windows

why:
  Need to clean up old files first from previous session as files remain
  locked despite closing of database.

* Fix initialisation order

detail:
  Apparently this has no real effect as the ticker is only initialised
  here but started later.

  This possible bug has been there for a while and was running with the
  previous compiler and libraries.

* Better naming of data fields for sync descriptors

details:
* BuddyRef[S,W]: buddy.data -> buddy.only
* CtxRef[S]: ctx.data -> ctx.pool
2023-02-23 13:13:02 +00:00
Adam Spitz fad3ed64cf
Time based forking (#1465)
* Refactoring in preparation for time-based forking.

* Timestamp-based hard-fork-transition.

* Workaround SideEffect issue / compiler bug for both failing locations in Portal history code

---------

Co-authored-by: kdeme <kim.demey@gmail.com>
2023-02-16 12:40:07 +01:00
Jordan Hrycaj b793f0de8d
Snap sync extractor and sub range proofs cont1 (#1468)
* Redefine `seq[Blob]` => `seq[SnapProof]` for `snap/1` protocol

why:
  Proof nodes are traded as `Blob` type items rather than Nim objects. So
  the RLP transcoder must not extra-wrap proofs, which are of type
  seq[Blob]. Without custom encoding, one would produce a
  `list(blob(item1), blob(item2) ..)` instead of `list(item1, item2 ..)`.
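
A scaled-down worked example in plain Nim (hand-rolled short-form RLP prefixes, not the nim-eth codec) showing the two encodings:

```nim
let
  p1 = @[0xc2'u8, 0x01, 0x02]   # a proof node, already RLP-encoded
  p2 = @[0xc1'u8, 0x03]         # another one

# desired `list(item1, item2 ..)`: blobs go verbatim into the list payload
let good = @[byte(0xc0 + p1.len + p2.len)] & p1 & p2

# wrong `list(blob(item1), blob(item2) ..)`: each blob is wrapped again
# as an RLP string before entering the list
let w1 = @[byte(0x80 + p1.len)] & p1
let w2 = @[byte(0x80 + p2.len)] & p2
let bad = @[byte(0xc0 + w1.len + w2.len)] & w1 & w2

echo good   # @[197, 194, 1, 2, 193, 3]
echo bad    # @[199, 131, 194, 1, 2, 130, 193, 3]
```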

* Limit leaf extractor by RLP size rather than number of items

why:
  To be used serving `snap/1` requests, the result of function
  `hexaryRangeLeafsProof()` is limited by the maximal space
  needed to serialise the result which will be part of the
  `snap/1` response.

* Let the range extractor `hexaryRangeLeafsProof()` return RLP list sizes

why:
  When collecting accounts, the size of the accounts list when encoded
  as RLP is continually updated. So the summed up value is available
  anyway. For the proof nodes list, there are not many (~ 10) so summing
  up is not expensive here.
2023-02-15 10:14:40 +00:00
Jordan Hrycaj 880313d7a4
Silence some compiler gossip -- part 8, sync (#1467)
details:
  Adding some missing exception annotation
2023-02-14 23:38:33 +00:00
Jordan Hrycaj c2fc46a99a
Snap sync extractor test sub range proofs (#1460)
* Unit tests to verify calculations based on hard coded constants

why:
  Sizes of RLP encoded objects are available at run time only.

* Changed argument order for `hexaryRangeLeafsProof()` prototype

why:
  Better to read as a stand-alone function (arguments were optimised
  for functional pipelines)

* Run sub-range proof tests for extracted ranges
2023-02-02 13:27:09 +00:00
Jordan Hrycaj 6ca6bcd96f
Snap sync fix trie interpolation fringe condition (#1457)
* Cosmetics

details:
+ Update doc generator
+ Fix key type representation in `hexary_desc` for debugging
+ Redefine `isImportOk()` as template for better `check()` line reporting

* Fix fringe condition when interpolating Merkle-Patricia tries

details:
  Small change with profound effect fixing some pathological condition
  that haunted the unit test set on large data sets. There is still one
  condition left which might well be due to an incomplete data set.

* Unit test proof nodes for node range extractor

* Unit tests to run on full extraction set

why:
  Left over from troubleshooting, range length was only 5
2023-02-01 18:56:06 +00:00
Jordan Hrycaj 89ae9621c4
Silence compiler gossip after nim upgrade (#1454)
* Silence some compiler gossip -- part 1, tx_pool

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 2, clique

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 3, misc core

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Silence some compiler gossip -- part 4, sync

details:
  Mostly removing redundant imports and `Defect` tracer after switch
  to nim 1.6

* Clique update

why:
  Missing exception annotation
2023-01-30 22:10:23 +00:00
Jordan Hrycaj 197d2b16dd
Snap sync interval range extractor (#1449)
* Update comments and test noise

* Fix boundary proofs

why:
  These were neither used in production nor unit tested. For production, other
  methods apply to test leaf range integrity directly based on the proof
  nodes.

* Added `hexary_range()`: interval range + proof extractor

details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)

todo:
  Need to verify completeness of proof nodes

* Reduce some nim 1.6 compiler noise

* Stop unit test gossip for ci tests
2023-01-30 17:50:58 +00:00
Kim De Mey a669b51ec5
Bump Nim to 1.6 and resolve the related issues (#1445)
Two unresolved items currently:
- Three tests that are temporarily disabled as they fail in the
macro_assembler code, which seems to be due to an ambiguous
identifier Stop (Ops and chronos ServerCommand enum).
- i386 CI disabled as it fails at Nim compilation already. Failed
tests were already ignored for this target.
2023-01-26 13:37:19 +01:00
Jordan Hrycaj e093fa452d
Declutter snap sync unit tests (#1444)
* Extracted RocksDB timing unit tests into separate file

why:
  make space for more in main module :)

* Extracted `inspectionRunner()` unit tests into separate file

why:
  make space for more in main module :)

* Extracted `storagesRunner()` unit tests into separate file

why:
  make space for more in main module :)

* Extracted pivot checkpoint store/retrieval unit tests into separate file

why:
  make space for more in main module :)

* Extract helper functions into separate source file

* Extracted account import unit tests into separate file

why:
  make space for more in main module :)

* Rename `test_decompose()` => `test_NodeRangeDecompose()`

why:
  There will be more functions with `test_NodeRange` prefix.
2023-01-23 16:09:12 +00:00
Jordan Hrycaj 6fb48517ba
Add snap protocol service stub (#1438)
* Cosmetics, update logger `topics`

* Clean up sync/start methods in nimbus

why:
* The `protocols` list selects served (as opposed to sync) protocols only.
* The `SyncMode.Default` object is allocated with the other possible sync
  mode objects.

* Add snap service stub to `nimbus`

* Provide full set of snap response handler stubs

* Bicarb for the latest CI hiccup

why:
  Might be a change in the CI engine for MacOS.
2023-01-20 15:01:29 +00:00
Jordan Hrycaj fda7971aaf
Reorganise eth handlers (#1436)
* Reorganise eth handlers

why:
  Make space for `snap` handlers in a similar fashion.

* fix typo
2023-01-18 15:00:14 +00:00
Jordan Hrycaj 30135ab1ef
Simplify beacon stream pivot update (#1435)
* Simplify pivot update

why:
  No need to fetch the pivot header from the network when it can be
  made available in the pivot cache

also:
  Keep `txPool` update disabled while syncing

* Cosmetics, tune down some logging noise

* Support `snap/1` without `eth/6?`

why:
  Eth is not needed here.

* Snap is an (optional) extension of `eth`

so:
  It must be supported somehow. Nevertheless it is currently
  unused in the snap syncer.
2023-01-18 08:31:57 +00:00
Jordan Hrycaj 707e47ac38
External beacon stream tracker (#1433)
* Register external beacon stream header

why:
  This will be used to sync the peers against.

* Update total coverage book-keeping for 100% roll-over

details:
  Provide commonly available/used function

* Replace best pivot by beacon stream tracker

details:
  Beacon stream header cache will be updated by external chain monitor via
  RPC. This cached header will then be used to sync the pivot.
2023-01-17 09:28:14 +00:00
Jordan Hrycaj a6f45e341b
Fetch-reject-reconnect loop protection (#1432)
why:
  Some peers reconnect recurrently after dialogue was found useless. The
  reconnect loop protection was in place already, albeit insufficient.

also:
  Some updates to allow setting previously constant parameters at run
  time.
2023-01-16 14:51:32 +00:00
Jordan Hrycaj 8da4002df3
Update eth/6? messages Get/PooledTransactions (#1415)
why:
  There, non-existent txs must be skipped.
2023-01-13 19:55:16 +00:00
jangko 74e76e5237
remove unused 'refundGas' from evm/state_transactions 2022-12-28 01:45:56 +07:00
Jordan Hrycaj 5134bb5e04
Extract finding of missing nodes for healing into separate module (#1398)
why:
  Duplicate implementation of same functionality
2022-12-25 17:56:57 +00:00
Jordan Hrycaj 88b315bb41
Snap sync refactor healing (#1397)
* Simplify accounts healing threshold management

why:
  Was over-engineered.

details:
  Previously, healing was based on recursive hexary trie perusal.

  Due to "cheap" envelope decomposition of a range complement for the
  hexary trie, the cost of running extra laps has become time-affordable
  again and a simple trigger mechanism for healing will do.

* Control number of dangling result nodes in `hexaryInspectTrie()`

also:
+ Returns number of visited nodes available for logging so the maximum
  number of nodes can be tuned accordingly.
+ Some code and docu update

* Update names of constants

why:
  Declutter, more systematic naming

* Re-implemented `worker_desc.merge()` for storage slots

why:
  Provided as proper queue management in `storage_queue_helper`.

details:
+ Several append modes (replaces `merge()`)
+ Added third queue to record entries currently fetched by a worker. So
  another parallel running worker can save the complete set of storage
  slots as a checkpoint. This was previously lost.

* Refactor healing

why:
  Simplify and remove deep hexary trie perusal for finding completeness.

   Due to "cheap" envelope decomposition of a range complement for the
   hexary trie, the cost of running extra laps has become time-affordable
   again and a simple trigger mechanism for healing will do.

* Docu update

* Run a storage job only once in download loop

why:
  Download failure or rejection (i.e. missing data) would otherwise lead to
  repeated fetch requests until the peer disconnects.
2022-12-24 09:54:18 +00:00
Jordan Hrycaj 0f132c1d01
Snap sync fix ticker crash (#1393)
* Fix SEGFAULT showstopper

* Update logging
2022-12-20 15:38:57 +00:00
Jordan Hrycaj bd42ebb193
Snap sync refactor accounts healing (#1392)
* Relocated mothballing (i.e. swap-in preparation) logic

details:
  Mothballing was previously tested & started after downloading
  account ranges in `range_fetch_accounts`.

  Whenever current download or healing stops because of a pivot change,
  swap-in preparation is needed (otherwise some storage slots may get
  lost when swap-in takes place.)

  Also, `execSnapSyncAction()` has been moved back to `pivot_helper`.

* Reorganised source file directories

details:
  Grouped pivot focused modules into `pivot` directory

* Renamed `checkNodes`, `sickSubTries` as `nodes.check`, `nodes.missing`

why:
  Both lists are typically used together as pair. Renaming `sickSubTries`
  reflects moving away from a healing centric view towards a swap-in
  attitude.

* Multi times coverage recording

details:
  Per pivot account ranges are accumulated into coverage range set. This
  set will eventually contain a single range of account hashes [0..2^256]
  which amounts to 100% capacity.

  A counter has been added that is incremented whenever max capacity is
  reached. The accumulated range is then reset to empty.

  The effect of this setting is that the coverage can be evenly duplicated.
  So 200% would not accumulate on a particular region.

* Update range length comparisons (mod 2^256)

why:
  A range interval can have sizes 1..2^256 as it cannot be empty by
  definition. A range intervals set, on the other hand, can contain
  0..2^256 points. As the scalar range is a residue class modulo 2^256,
  the residue class 0 means length 2^256 for a range interval, but can
  be 0 or 2^256 for the number of points in a range intervals set.
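
A scaled-down illustration (modulo 2^8 instead of 2^256, hypothetical helper name):

```nim
# interval length modulo 2^8: the full interval [0,255] wraps to 0, which
# must be read as 2^8 = 256, while a *set* of intervals may really be empty
proc ivLen(a, b: uint8): uint8 =
  b - a + 1                     # unsigned arithmetic wraps around

echo ivLen(10, 10)   # 1 -> a single point; an interval is never empty
echo ivLen(0, 255)   # 0 -> must be interpreted as 2^8 = 256 points
```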

* Generalised `hexaryEnvelopeDecompose()`

details:
  Compile the complement of the union of some (processed) intervals and
  express this complement as a list of envelopes of sub-tries.

  This facility is directly applicable to swap-in book-keeping.

* Re-factor `swapIn()`

why:
  Good idea but baloney implementation. The main algorithm is based on
  the generalised version of `hexaryEnvelopeDecompose()` which has been
  derived from this implementation.

* Refactor `healAccounts()` using `hexaryEnvelopeDecompose()` as main driver

why:
  Previously, the hexary trie was searched recursively for dangling nodes
  which has a poor worst case performance already when the trie  is
  reasonably populated.

  The function `hexaryEnvelopeDecompose()` is a magnitude faster because
  it does not peruse existing sub-tries in order to find missing nodes
  although the result is not fully compatible with the previous function.

  So recursive search is used in a limited mode only when the decomposer
  will not deliver a useful result.

* Logging & maintenance fixes

details:
  Preparation for abandoning buddy-global healing variables `node`,
  `resumeCtx`, and `lockTriePerusal`. These variables are trie-perusal
  centric and will be put on the back burner in favour of
  `hexaryEnvelopeDecompose()` which is used for accounts healing already.
2022-12-19 21:22:09 +00:00
Jordan Hrycaj d55a72ae49
Full sync peer negotiation control (#1390)
* Additional logging for scheduler

* Fix duplicate occurrence of `bestNumber`

why:
  Happened when the `block_queue` module was separated out of
  the `worker` module. Somehow testing was insufficient or skipped
  altogether.

* Update `runPool()` mixin for scheduler

details:
  Could be simplified

* Dynamically adapt pivot header negotiation mode

details:
  After accepting one peer and some timeout, do not search for more
  peers to start syncing but rather continue in relaxed mode with a
  single peer.
2022-12-18 16:06:43 +00:00
jangko eb701fd3d7
fix addKnownToPeer in wire protocol handler 2022-12-16 07:55:38 +07:00
Etan Kissling 22338b7870 bump `nim-eth` for `eip4844` support
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4844 (danksharding). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `excessDataGas` field until proper EIP4844 support exists.
https://github.com/status-im/nim-eth/pull/570
2022-12-14 11:04:13 +02:00
Jordan Hrycaj 52517d598f
Fix typo (#1364)
details:
  Accessing wrong result (out of two) leads to an exception
2022-12-13 11:13:13 +00:00
Jordan Hrycaj cc2c888a63
Snap sync swap in other pivots (#1363)
* Provide index to reconstruct missing storage slots

why:
  Pivots will not be changed anymore once they are officially archived. The
  accounts of the archived pivots are ready to be swapped into the active
  pivot. This leaves open how to treat storage slots not fetched yet.

  Solution: when mothballing, an `account->storage-root` index is
  compiled that can be used when swapping in accounts.

* Implement swap-in from earlier pivots

details:
  When most accounts are covered by the current and previous pivot
  sessions, swapping in the accounts and storage slots (i.e. registering
  account ranges as done) from earlier pivots takes place if there is a
  common sub-trie.

* Throttle pivot change when healing state has been reached

why:
  There is a hope to complete the current pivot, so pivot update can be
  throttled. This is achieved by setting another minimum block number
  distance for the pivot headers. This feature is still experimental
2022-12-12 22:00:24 +00:00
Jordan Hrycaj 179b4adac3
Snap sync tweaks n fixes (#1359)
* Miscellaneous tweaks & fixes

details:
+ Catch `TransportError` exception in `legacy.nim` module
+ Fix self-calling wrapper `hexaryEnvelopeTouchedBy()`

* Update documentation, logging etc.

* Changed `checkNode` batch list `seq[Blob]` => `seq[NodeSpecs]`

why:
  The `NodeSpecs` type as used here is a tuple `(partial-path,node-key)`.

  When `checkNode` partial paths are collected, the node key is also
  available, so it should be registered and not repeatedly recovered from
  the database.

* Add optional begin/end trace statement in snap scheduler

why:
  Allows to trace invoked entity and scheduler state variables
2022-12-09 13:43:55 +00:00
Jordan Hrycaj 3766eddf5a
Some updates to the envelope module (#1353)
details:
+ Add detailed error return codes
+ Remove cruft
+ Some prototype wrappers
2022-12-06 20:13:31 +00:00
Jordan Hrycaj 85de03fd6e
Rename and update dismantle => hexaryEnvelopeDecompose() (#1351)
* Rename and update dismantle => hexaryEnvelopeDecompose()

why:
+ As for naming, a positive connotation is preferred
+ The unit tests were really insufficient
+ The function result was wrong on a few boundary conditions

detail:
+ Extracted the function from `hexary_paths.nim` and re-implemented
  it together with other envelope functions => `hexary_envelope.nim`
+ Re-wrote docu for `hexaryEnvelopeDecompose()`

* Relaxed right condition for `hexaryEnvelopeDecompose()` range argument

why:
  Previously, the right point of the argument interval had to be a path
  to an allocated leaf node. While this is typically a given for accounts,
  it is easier to require an arbitrary range of paths (or keys) with
  the requirement of a `boundary proof` for left and right (i.e. enough
  nodes in the database to find the end points.)

also:
  Bug fixes for related functions (typos, missing conditions etc.)

* Add missing unit tests include file
2022-12-06 17:35:56 +00:00
jangko 4cf2ab661c
connect legacy sync to rpc/eth_syncing and graphql/syncing
fix #1333
2022-12-05 10:25:21 +07:00
jangko 56f169b23e
rename Fast Sync to Legacy Sync
fix #1332
2022-12-05 09:42:25 +07:00
jangko 94a94c5b65 implement better hardfork management 2022-12-02 13:51:42 +07:00
jangko ac2cb82a2b saner source code grouping 2022-12-02 13:51:42 +07:00
Jordan Hrycaj 44a57496d9
Snap sync interval complement method to speed up trie perusal (#1328)
* Add quick hexary trie inspector, called `dismantle()`

why:
+ Full hexary trie perusal is slow if running down leaf nodes
+ For a known range of leaf nodes, work out the UInt256-complement of
  partial sub-trie paths (for existing nodes). The result should cover
  no (or only a few) sub-tries with leaf nodes.

* Extract common healing methods => `sub_tries_helper.nim`

details:
  Also apply quick hexary trie inspection tool `dismantle()`
  Replace `inspectAccountsTrie()` wrapper by `hexaryInspectTrie()`

* Re-arrange task dispatching in main peer worker

* Refactor accounts and storage slots downloaders

* Rename `HexaryDbError` => `HexaryError`
2022-11-28 09:03:23 +00:00
Etan Kissling bc3f164b97
bump `nim-eth` for `withdrawalsRoot` support (#1326)
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4895 (withdrawals). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `withdrawalsRoot` field until proper withdrawals support exists.
https://github.com/status-im/nim-eth/pull/562
2022-11-26 15:59:19 +01:00
Jordan Hrycaj 7688148565
Snap sync can start on saved checkpoint (#1327)
* Stop negotiating pivot if peer repeatedly replies w/ useless answers

why:
  There is some fringe condition where a peer replies with legit but
  useless empty headers repeatedly. This goes on until somebody stops.
  We stop now.

* Rename `missingNodes` => `sickSubTries`

why:
  These (probably missing) nodes represent in reality fully or partially
  missing sub-tries. The top nodes may even exist, e.g. as a shallow
  sub-trie.

also:
  Keep track of account healing on/off by bool variable `accountsHealing`
  controlled in `pivot_helper.execSnapSyncAction()`

* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)

also:
+ Trigger the recovery (or similar) process from inside the global peer
  worker initialisation `worker.setup()` and not by the `snap.start()`
  function.
+ Have `runPool()` return a `bool` code to indicate early stop to
  scheduler.

* Can import partial snap sync checkpoint at start

details:
 + Modified what is stored with the checkpoint in `snapdb_pivot.nim`
 + Will be loaded within `runDaemon()` if activated

* Forgot to import total coverage range

why:
  Only the top (or latest) pivot needs coverage but the total coverage
  is the list of all ranges for all pivots -- simply forgotten.
2022-11-25 14:56:42 +00:00
jangko fffe071f86
eth wire protocol: implement NewBlock and NewBlockHashes handler
It also does invasive changes to fast sync because
they are tightly related.

fix #673
2022-11-18 01:22:31 +07:00
Jordan Hrycaj bba1bea4c8
Snap sync state save (#1302)
* Piecemeal trie inspection

details:
  Trie inspection will stop after maximum number of nodes visited.
  The inspection can be resumed using the returned state from the
  last session.

why:
  This feature allows for task switch between `piecemeal` sessions.

* Extract pivot helper code from `worker.nim` => `pivot_helper.nim`

* Accounts import will now return dangling paths from `proof` nodes

why:
  With proper bookkeeping, this can be used to start healing without
  analysing the probably full trie.

* Update `unprocessed` account range handling

why:
  More generally, the API of a pair of unprocessed interval sets favours
  the first set; only when that one is exhausted does the second set come
  into play.

  This was unfortunately implemented in a way which caused the ranges to be
  unnecessarily fragmented. Now the number of range intervals typically
  remains in the lower single digit numbers.

* Save sync state after end of downloading some accounts

details:
  restore/resume to be implemented later
2022-11-16 23:51:06 +00:00
Jordan Hrycaj 9aa925cf36
Update sync scheduler (#1297)
* Add `stop()` methods to the shutdown procedure

why:
  Nasty behaviour when hitting Ctrl-C, otherwise

* Add background service to sync scheduler

why:
  The background service will be used for sync data import and recovery
  after restart.

   It is controlled by the sync scheduler for an easy turn on/off API.

also:
  Simplified snap ticker time calc.

* Fix typo
2022-11-14 14:13:00 +00:00
jangko 43f4b99a1b
disable NewBlockHashes and NewBlock of eth wire handler after POS transition
fix #1133
2022-11-14 16:17:34 +07:00
jangko c7b3c374f0
add exception handlers to transaction exchange code 2022-11-14 12:43:05 +07:00
jangko 9dd256cae7
eth wire handlers: implement transactions exchange 2022-11-11 09:24:32 +07:00
Jordan Hrycaj 21837546c3
Fix/clarify single mode for async sync scheduler (#1292)
why:
  Single mode here means there is only one such (single mode) instance
  activated but multi mode instances for other peers are allowed.

  Erroneously, multi mode instances were held back waiting while some
  single mode instance was running which reduced the number of parallel
  download peers.
2022-11-09 19:16:25 +00:00
Jordan Hrycaj e14fd4b96c
Prep for full sync after snap make 6 (#1291)
* Update log ticker, using time interval rather than ticker count

why:
  Counting and logging ticker occurrences is inherently imprecise. So
  time intervals are used.

* Use separate storage tables for snap sync data

* Left boundary proof update

why:
  Was not properly implemented, yet.

* Capture pivot in peer worker (aka buddy) tasks

why:
  The pivot environment is linked to the `buddy` descriptor. While
  there is a task switch, the pivot may change. So it is passed on as
  function argument `env` rather than retrieved from the buddy at
  the start of a sub-function.

* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`

* Remove obsolete account range returned from `GetAccountRange` message

why:
  Handler returned the wrong right-hand value of the range. This range was
  for convenience only.

* Prioritise storage slots if the queue becomes large

why:
  Currently, accounts processing is prioritised up until all accounts
  are downloaded. The new prioritisation has two thresholds for
  + start processing storage slots with a new worker
  + stop account processing and switch to storage processing

also:
  Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`

* Generalise left boundary proof for accounts or storage slots.

why:
  Detailed explanation how this works is documented with
  `snapdb_accounts.importAccounts()`.

  Instead of enforcing a left boundary proof (which is still the default),
  the importer functions return a list of `holes` (aka node paths) found in
  the argument ranges of leaf nodes. This in turn is used by the
  book-keeping software for data download.

* Forgot to pass on variable in function wrapper

also:
  + Start healing not before 99% accounts covered (previously 95%)
  + Logging updated/prettified
2022-11-08 18:56:04 +00:00
Jordan Hrycaj a689e9185a
Prep for full sync after snap make 5 (#1286)
* Update docu and logging

* Extracted and updated constants from `worker_desc` into separate file

* Update and re-calibrate communication error handling

* Allow simplified pivot negotiation

why:
  This feature allows turning off pivot negotiation so that peers agree
  on a pivot header.

  For snap sync with fast changing pivots this only throttles the sync
  process. The finally downloaded DB snapshot is typically a merged
  version of different pivot states augmented by a healing process.

* Re-model worker queues for accounts download & healing

why:
  Currently there is only one data fetch per download or healing task.
  This task is then repeated by the scheduler after a short time. In
  many cases, this short time seems enough for some peers to decide to
  terminate the connection.

* Update main task batch `runMulti()`

details:
  The function `runMulti()` is activated in quasi-parallel mode by the
  scheduler. This function calls the download, healing and fast-sync
  functions.

  While in debug mode, after each set of jobs run by this function the
  database is analysed (by the `snapdb_check` module) and the result
  printed.
2022-11-01 15:07:44 +00:00
Jordan Hrycaj a8df4c1165
Fix trie inspector for healing (#1284)
* Update logging

* Fix node hash associated with partial path for missing nodes

why:
  Healing uses the partial paths for fetching nodes from the network. The
  node hash (or key) is used to verify the node data retrieved.

  The trie inspector function returned the parent hash instead of the node hash
  with the partial path when a missing node was detected. So all nodes
  for healing were rejected.

* Must not modify sequence while looping over it
2022-10-28 08:26:17 +01:00
Jordan Hrycaj 1b4572ed3b
Prep for full sync after snap make 4 (#1282)
* Re-arrange fetching storage slots in batch module

why:
  Previously, fetching partial slot ranges first had a chance of
  terminating the worker peer (due to a network error) while there were
  many inheritable storage slots on the queue.

  Now, inheritance is checked first, then full slot ranges and finally
  partial ranges.

* Update logging

* Bundled node information for healing into single object `NodeSpecs`

why:
  Previously, partial paths and node keys were kept in separate variables.
  This approach was error prone due to copying/reassembling function
  argument objects.

  As all partial paths, keys, and node data types are more or less handled
  as `Blob`s over the network (using Eth/6x, or Snap/1) it makes sense to
  hold these `Blob`s as named fields in a single object (even if not all
  fields are active for the current purpose.)
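
A rough sketch of the bundling idea (field names are illustrative, not necessarily the actual `NodeSpecs` definition):

```nim
type
  Blob = seq[byte]
  NodeSpecs = object
    partialPath: Blob   # path used when requesting the node over the network
    nodeKey:     Blob   # hash used to verify the reply
    data:        Blob   # the node itself, once available
```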

* For good housekeeping, using `NodeKey` type only for account keys

why:
  Previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
  state or storage root keys use the `Hash256` type.

* Always accept latest pivot (and not a slightly older one)

why:
  For testing it was tried to use a slightly older pivot state root than
  available. Some anecdotal tests seemed to suggest an advantage so that
  more peers are willing to serve on that older pivot. But this could not
  be confirmed in subsequent tests (still anecdotal, though.)

  As a side note, the distance of the latest pivot to its predecessor is
  at least 128 (or whatever the constant `minPivotBlockDistance` is
  assigned to.)

* Reshuffle name components for some file and function names

why:
  Clarifies purpose:
  "storages" becomes: "storage slots"
  "store" becomes: "range fetch"

* Stash away currently unused modules in sub-folder named "notused"
2022-10-27 14:49:28 +01:00
Jordan Hrycaj 82ceec313d
Prettify logging for snap sync environment (#1278)
* Multiple storage batches at a time

why:
  Previously only some small portion was processed at a time so the peer
  might have gone when the process was resumed at a later time

* Renamed some field of snap/1 protocol response object

why:
  What is documented as `slots` is in reality a per-account list of slot lists. So
  the new name `slotLists` better reflects the nature of the beast.

* Some minor healing re-arrangements for storage slot tries

why:
  Resolving all complete inherited slots tries first in sync mode keeps
  the worker queues smaller which improves logging.

* Prettify logging, comments update etc.
2022-10-21 20:29:42 +01:00
Jordan Hrycaj c0d580715e
Remodel persistent snapdb access (#1274)
* Re-model persistent database access

why:
  Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
  key mapping). So get/put and bulk functions now use the definitions
  in `snapdb_desc` (earlier there were some shortcuts for `get()`.)

* Fixes: missing return code, typo, redundant imports etc.

* Remove obsolete debugging directives from `worker_desc` module

* Correct failing unit tests for storage slots trie inspection

why:
  Some pathological cases for the extended tests do not produce any
  hexary trie data. This is rightly detected by the trie inspection
  and the result checks needed to be adjusted.
2022-10-20 17:59:54 +01:00
Jordan Hrycaj 096d93ab31
Remove direct support for legacy pivot finder (#1272)
why:
  Not used anymore. The current finder is good enough based on the
  reported best header and difficulty.
2022-10-19 15:03:55 +01:00
Jordan Hrycaj 85fdb61699
Prep for full sync after snap make 3 (#1270)
* For snap sync, publish `EthWireRef` in sync descriptor

why:
  currently used for noise control

* Detect and reuse existing storage slots

* Provide healing module for storage slots

* Update statistic ticker (adding range factor for unprocessed storage)

* Complete merge function for work item ranges

why:
  Merging interval into existing partial item was missing

* Show av storage queue lengths in ticker

detail:
  Previous attempt showed average completeness which did not tell much

* Correct the meaning of the storage counter (per pivot)

detail:
  It is the number of accounts that have storage saved
2022-10-19 11:04:06 +01:00
jangko 3fa1b012e6
initial wire protocol transformation
rework of the eth wire protocol handlers.
currently still missing 4 handler implementations,
but the framework is ready for expansion.
2022-10-15 19:48:21 +07:00
Jordan Hrycaj 8c7d91512b
Prep for full sync after snap mark2 (#1263)
* Rename `LeafRange` => `NodeTagRange`

* Replacing storage slot partition point by interval

why:
  The partition point only allows describing slot ranges of the form
  `[point,high(Uint256)]` when fetching slot intervals. This has been
  generalised to arbitrary intervals.
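
The generalisation can be pictured with a small hypothetical sketch (plain `uint64` stands in for the 256-bit node tag; the type and proc names are illustrative only):

```nim
# Hypothetical sketch: a work item described as a proper interval rather
# than the single partition point `[point, high(UInt256)]`. Plain uint64
# stands in for the 256-bit node tag here.
type NodeTagRange = object
  minPt, maxPt: uint64          # inclusive bounds

proc fullRange(): NodeTagRange =
  NodeTagRange(minPt: 0'u64, maxPt: high(uint64))

proc splitOff(r: NodeTagRange; point: uint64): (NodeTagRange, NodeTagRange) =
  ## Split a work interval at `point` (assumes r.minPt < point <= r.maxPt),
  ## which the single-partition-point scheme could not express.
  (NodeTagRange(minPt: r.minPt, maxPt: point - 1),
   NodeTagRange(minPt: point, maxPt: r.maxPt))
```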

* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`

why:
  Generalised healing status for accounts, and later for storage slots.

* Improve accounts healing loop

* Split `snap_db` into accounts and storage modules

why:
  It is cleaner to have separate session descriptors for accounts and
  storage slots (based on a common base descriptor.)

  Also, persistent storage handling might be changed in future, which
  requires the storage slot implementation to be disentangled from the
  accounts handling.

* Re-model worker queues for storage slots

why:
  There is a dynamic list of storage sub-tries, each of which has to be
  treated similarly to the accounts database. This applies to slot
  interval downloads as well as to healing
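
A hypothetical sketch of such a per-account work queue (illustrative names only; the real descriptors differ):

```nim
# Hypothetical sketch (illustrative names only): every account with a
# non-empty storage root gets its own work item, organised like the
# accounts batch, so downloads and healing can be scheduled per sub-trie.
import std/tables

type
  SlotRangeBatch = object
    unprocessed: seq[(uint64, uint64)]  # pending slot hash ranges (sketched)

  StorageSlotsQueue = Table[array[32, byte], SlotRangeBatch]
    ## keyed by account hash; one range batch per storage sub-trie
```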

* Compress some return value report lists for snapdb methods

why:
  No need to report all handling details for work items that are filtered
  out and discarded, anyway.

* Remove inner loop frame from healing function

why:
  The healing function runs as a loop body already.
2022-10-14 17:40:32 +01:00
Jordan Hrycaj d53eacb854
Prep for full sync after snap (#1253)
* Split fetch accounts into sub-modules

details:
  There will be separate modules for accounts snapshot, storage snapshot,
  and healing for either.

* Allow to rebase pivot before negotiated header

why:
  Peers seem to have not too many snapshots available. By setting back the
  pivot block header slightly, the chances might be higher to find more
  peers to serve this pivot. Experiments on mainnet showed that when setting
  it back too far (tested with 1024), the chances of finding matching
  snapshot peers seem to decrease.

* Add accounts healing

* Update variable/field naming in `worker_desc` for readability

* Handle leaf nodes in accounts healing

why:
  There is no need to fetch accounts when they have already been added by
  the healing process. On the flip side, these accounts must be checked for
  storage data and the batch queue updated accordingly.

* Reorganising accounts hash ranges batch queue

why:
  The aim is to formally cover as many accounts as possible for different
  pivot state root environments. Formerly, this was tried by starting the
  accounts batch queue at a random value for each pivot (and wrapping
  around.)

  Now, each pivot environment starts with an interval set mutually
  disjoint from any interval set retrieved with other pivot state roots.

also:
  Stop fishing for more pivots in `worker` if 100% download is reached
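
A hedged sketch of the disjoint-batch idea above (illustrative only, with `uint64` bounds standing in for 256-bit account hashes; not the repo's actual interval-set code):

```nim
# Hedged sketch of the disjoint-batch idea (not the repo code): ranges
# fetched under earlier pivots are accumulated globally, and a new pivot
# environment seeds its batch queue with the complement of that coverage
# instead of starting at a random point.
type Interval = object
  minPt, maxPt: uint64          # inclusive bounds, sorted and disjoint

proc complement(covered: seq[Interval]): seq[Interval] =
  ## Gaps left between already-covered intervals, i.e. the ranges a new
  ## pivot environment still has to fetch.
  var nextFree = 0'u64
  for iv in covered:
    if nextFree < iv.minPt:
      result.add Interval(minPt: nextFree, maxPt: iv.minPt - 1)
    if iv.maxPt == high(uint64):
      return                     # covered all the way up to the top
    nextFree = iv.maxPt + 1
  result.add Interval(minPt: nextFree, maxPt: high(uint64))
```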

* Reorganise/update accounts healing

why:
  Error handling was wrong and the (mathematical complexity of the) whole
  process could be better managed.

details:
  Much of the algorithm is now documented at the top of the file
  `heal_accounts.nim`
2022-10-08 18:20:50 +01:00
Jordan Hrycaj eca5882238
Isolating sync action modules (#1249)
* Miscellaneous updates TBC

* Disentangled pivot2 module from snap

why:
  Written as a template on top of sync so it can be shared by fast and snap
  sync.

* Renamed and relocated pivot sources

* Integrated `best_pivot` module into full and snap sync

why:
  Full sync used an older version of `best_pivot`

* Isolating download module from full sync

why:
  Might be shared with snap sync at a later stage
2022-09-30 09:22:14 +01:00
jangko 86f6d284aa
initial beacon sync skeleton implementation 2022-09-17 09:08:55 +07:00
jangko 513f44d7d4
add local copy of ethereum wire protocol spec 2022-09-17 09:08:55 +07:00
Jordan Hrycaj 4ff0948fed
Snap sync accounts healing (#1225)
* Added inspect module

why:
  Find dangling references for trie healing support.

details:
 + This patch set provides only the inspect module and some unit tests.
 + There are also extensive unit tests which need bulk data from the
   `nimbus-eth1-blob` module.

* Alternative pivot finder

why:
  Attempt to be faster on start up. Also trying to decouple the pivot finder
  somehow by providing different mechanisms (this one runs in `single`
  mode.)

* Use inspect module for healing

details:
 + After some progress with account and storage data, the inspect facility
   is used to find dangling links in the database, to be filled in node by
   node.
 + This is a crude attempt to cobble together functional elements. The
   setup needs to be honed.

* Fix scheduler to avoid starting dead peers

why:
  Some peers drop out while in `sleepAsync()`. So extra `if` clauses
  make sure that this event is detected early.

* Fix bugs causing crashes

details:

+ prettify.toPC():
  int/intToStr() numeric range over/underflow (see the clamping sketch below)

+ hexary_inspect.hexaryInspectPath():
  take care of a half-initialised step with a branch node but a missing
  index into the branch array
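
A hedged sketch of the kind of clamping that avoids such over/underflow when rendering percentages (illustrative only; not the repo's actual `prettify.toPC()` implementation):

```nim
# Hedged sketch (illustrative only): clamp the ratio before scaling so
# the numeric conversion cannot over- or underflow for bad input.
import std/strutils

proc toPC(ratio: float; digits = 2): string =
  ## Render a 0..1 ratio as a percentage, clamping out-of-range input.
  let clamped = max(0.0, min(1.0, ratio))
  formatFloat(clamped * 100.0, ffDecimal, digits) & "%"
```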

* Improve handling of dropped peers in the alternative pivot finder

why:
  Strange things may happen while querying data from the network.
  Additional checks make sure that the state of other peers is updated
  immediately.

* Update trace messages

* Reorganise snap fetch & store schedule
2022-09-16 08:24:12 +01:00
andri lim ad4e25b27e
Fix eth66 and eth67 handshake (#1214)
* bump nim-eth

* fix eth66 and eth67 handshake
2022-09-05 23:37:58 +02:00
Jacek Sieka c2ed731fa5
eth: adapt to smaller eth_types (#1210) 2022-09-03 20:15:35 +02:00
Jordan Hrycaj 72a31593a9
Snap fetch account storage data (#1211)
* Removed database write comparison statistics

* Provide live storage test data

details:
  database dumps on the external repo `nimbus-eth1-blobs`

* Update hexary tree interpolation for storage bulk tests

* fetch storage update
2022-09-02 19:16:09 +01:00
Jordan Hrycaj de2c13e136
Update snap offline tests (#1199)
* Re-implemented `hexaryFollow()` in a more general fashion

details:
+ New name for re-implemented `hexaryFollow()` is `hexaryPath()`
+ Renamed `rTreeFollow()` as `hexaryPath()`

why:
  Returning similarly organised structures, the results of the
  `hexaryPath()` functions become comparable when running over
  the persistent and the in-memory databases.

* Added traversal functionality for persistent ChainDB

* Using `Account` values as re-packed Blob

* Repack samples as compressed data files

* Produce test data

details:
+ Can force pivot state root switch after minimal coverage.
+ For emulating certain network behaviour, downloading accounts stops for
  a particular pivot state root once 30% (some static number) coverage is
  reached. Subsequent accounts are then downloaded for a later pivot state
  root.
2022-08-24 14:44:18 +01:00
Jordan Hrycaj f07945d37b
Misc snap sync updates (#1192)
* Bump nim-stew

why:
  Need fixed interval set

* Keep track of accumulated account ranges over all state roots

* Added comments and explanations to unit tests

* typo
2022-08-17 08:30:11 +01:00
Jordan Hrycaj 7489784ba8
Snap sync accounts db code reorg (#1189)
* Extracted functionality into sub-modules for maintainability

* Setting SST bulk load as default in `accounts_db`

details:
+ currently, the same data are stored via rocksdb if available, and
  also via the embedded `storage_type` with (non-standard) prefix 200
  for timing comparisons
+ falls back to normal `put()` if rocksdb is not accessible
2022-08-15 16:51:50 +01:00