* Keep vertex ID generator state with each db-layer
why:
The vertex ID generator state is part of the delta to the layer below
* Move otherwise unused source to test directory
* Add Merkle hash generator
also:
* Verification facility for debugging
* Empty Merkle key hashes encoded as `EMPTY_ROOT_HASH`
details:
1. Merging a leaf vertex merges a `Patricia Trie` path (while
adding/modifying vertices) and adds a leaf node with payload
2. Merging a Merkle node merges a single vertex to the `Patricia Trie`
and registers Merkle hashes
3. Action 2 can be used before action 1 in order to construct a
Merkle proof as required for handling `snap/1` data (see the example below.)
4. Unit tests show that action 3 is benign for now :)
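example:
The ordering of actions 2 and 1 might be sketched as below (a toy,
self-contained Nim sketch with made-up names, not the actual Aristo
API): proof nodes are registered first so that the leaf paths merged
afterwards can be checked against the recorded Merkle hashes.

```nim
import std/tables

type
  ToyDb = object
    hashes: Table[string, string] # vertex path -> registered Merkle hash
    leafs: Table[string, string]  # leaf path -> payload

proc mergeProofNode(db: var ToyDb; path, hash: string) =
  ## action 2: register a single vertex together with its Merkle hash
  db.hashes[path] = hash

proc mergeLeaf(db: var ToyDb; path, payload: string) =
  ## action 1: merge the trie path and attach the leaf payload
  db.leafs[path] = payload

when isMainModule:
  var db: ToyDb
  db.mergeProofNode("", "rootHash")      # action 2 first ...
  db.mergeLeaf("00aa", "accountPayload") # ... then action 1
```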
* Unit tests update, code cosmetics
* Fix segfault with zombie handling
why:
In order to save memory, the data records of zombie entries are removed
and only the key (aka peer node) is kept. Consequently, logging these
zombies can only be done by the key.
* Allow to accept V2 payload without `shanghaiTime` set while syncing
why:
Currently, `shanghaiTime` is missing (at least) while snap syncing, so
beacon node headers can be processed regardless. Normal (aka strict)
processing will be restored automatically when leaving snap sync mode.
* Cosmetics, renamed fields (eVtx, bVtx) -> (eVid, bVid)
* Multilayered delta architecture for Aristo DB
details:
Any VertexID or data retrieval needs to go down the rabbit hole and
fetch/get/manipulate the bottom layer -- even without explicit
backend.
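example:
For illustration, a fetch falling through the layer stack might look
roughly like this (types and names invented for the sketch, not the
Aristo sources):

```nim
import std/[options, tables]

type
  Layer = ref object
    vtx: Table[uint64, string] # vertex data changed in this layer
    below: Layer               # next layer down, nil at the bottom

proc getVtx(ly: Layer; vid: uint64): Option[string] =
  ## Walk down the delta layers until the vertex is found. A miss at
  ## the bottom is where an attached backend would be queried.
  var cur = ly
  while not cur.isNil:
    if vid in cur.vtx:
      return some(cur.vtx[vid])
    cur = cur.below
  none(string)
```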
* Direct reference to backend from top-level layer
why:
Some services, such as the vid management, need to be synchronised among
all layers, so access is optimised.
* Experimental MP-trie
why:
Deleting records is infeasible with the current structure
* Added vertex ID recycling management
Todo:
Provide some unit tests
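example:
A rough sketch of the recycling idea (invented names, not the actual
sources): freed vertex IDs go onto a recycle list and are re-issued
before the top counter grows.

```nim
type
  VertexID = distinct uint64

  VidGenState = object
    top: uint64           # highest ID issued so far
    recycled: seq[uint64] # IDs returned by delete operations

proc newVid(gen: var VidGenState): VertexID =
  ## Reuse a recycled ID if available, otherwise bump the counter.
  if gen.recycled.len > 0:
    result = VertexID(gen.recycled.pop)
  else:
    inc gen.top
    result = VertexID(gen.top)

proc recycle(gen: var VidGenState; vid: VertexID) =
  ## Return an ID to the free list so it can be handed out again.
  gen.recycled.add uint64(vid)
```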
* DB layout update
why:
The main news is the separation of `Merkle` hashes into an extra table.
details:
The code fragments cover conversion between compact MPT records and
Aristo DB records as well as some rudimentary cache handling for
the `Merkle` hashes (i.e. the extra table entries.)
todo:
Add some simple unit tests for the descriptor record (currently used
for vertex ID management, only.)
* Updated vertex ID recycling management
details:
added simple unit tests (mainly testing ABI)
* docu update
* Set maximum time for nodes to be banned.
why:
Useless nodes are marked zombies and banned. They are kept in a table
until flushed out by new connections. This works well if there are many
connections. For the case where there are only a few, a maximum ban time
is set. When it expires, zombies are flushed automatically.
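example:
A minimal sketch of the time-based flush, assuming some linger constant
(the name and value below are made up, not taken from the sources):

```nim
import std/times

let zombieTimeToLinger = initDuration(minutes = 20) # assumed value

type ZombieEntry = object
  bannedAt: Time # when the peer was marked a zombie

proc expired(z: ZombieEntry; now = getTime()): bool =
  ## True once the maximum ban time has passed, so the entry may be
  ## flushed even without new connections pushing it out of the table.
  now - z.bannedAt > zombieTimeToLinger
```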
* Suspend full sync while block number at beacon block
details:
Also allows using an external setting from a file (2nd line)
* Resume state at full sync after restart (if any)
* Relocate full sync descriptors from global `worker_desc.nim` to local pass
why:
These settings are needed only for the full sync pass.
* Rename `pivotAccountsCoverage*()` => `accountsCoverage*()`
details:
Extract from `worker_desc.nim` into separate source file.
* Relocate snap sync sub-descriptors
details:
..from global `worker_desc.nim` to the local pass module `snap_pass_desc.nim`.
* Rename `SnapPivotRef` => `SnapPassPivotRef`
* Mostly removed `SnapPass` prefix from object type names
why:
These objects are solely used on the snap pass.
* Rename `playXXX` => `passXXX`
why:
Better purpose match
* Code massage, log message updates
* Moved `ticker.nim` to the `misc` folder so it can be used alike by full and snap sync
why:
Simplifies maintenance
* Move `worker/pivot*` => `worker/pass/pass_snap/*`
why:
better for maintenance
* Moved helper source file => `pass/pass_snap/helper`
* Renamed ComError => GetError, `worker/com/` => `worker/get/`
* Keep ticker enable flag in worker descriptor
why:
This allows passing the flag with the descriptor rather than as an extra
function argument when calling the setup function.
* Extracted setup/release code from `worker.nim` => `pass/pass_init.nim`
* Recreating some of the speculative-execution code.
Not really using it yet. Also there's some new inefficiency in
memory.nim, but it's fixable - just haven't gotten around to it yet.
The big thing introduced here is the idea of "cells" for stack,
memory, and storage values. A cell is basically just a Future (though
there's also the option of making it an Identity - just a simple
distinct wrapper around a value - if you want to turn off the
asynchrony).
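A minimal sketch of the cell idea (assuming chronos-style futures; the
names below are illustrative, not the module's actual API):

```nim
import chronos

type
  Identity*[T] = object
    ## synchronous variant: a plain wrapper around the value
    value*: T

  Cell*[T] = Future[T]
    ## asynchronous variant: consumers await the value

proc pureCell*[T](v: T): Cell[T] =
  ## Wrap an already-known value into a completed future.
  result = newFuture[T]("pureCell")
  result.complete(v)
```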
* Bumped nim-eth.
* Cleaned up a few comments.
* Bumped nim-secp256k1.
* Oops.
* Fixing a few compiler errors that show up with EVMC enabled.
* Extract RocksDB timing tests from snap unit tests as separate module
why:
Declutter, make space for more snap related unit tests.
* Renamed `undumpNextGroup()` => `undumpBlocks()`
why:
The source file is called `undump_blocks.nim`, which should be roughly
in sync with the method name(s).
* Implement snap/1 server method `getByteCodes()`
* Implement snap/1 client method `getByteCodes()`
* Implement facility for handling contract code fetching via snap/1
* Provide persistent storage for contract code records
* Implement contract code snap sync fetch & store
* Code massage, cosmetics
* Unit tests for verifying snap sync snapshot dump
details:
Use `undump_kvp.dumpAllDb()` to dump any database.
* Improve logging and logging options in Fluffy
- Allow selection of log format, including:
- JSON
- automatic selection based on tty
- Allow log levels per topic configured on cli
* Update sync scheduler pool mode
why:
The pool mode allows looping over active peers one after another. This
is ideal for softly re-starting peers. As this is a two-tier process
(start/stop, setup/release), the loop must be run twice. This is
controlled by a more rigid re-definition of how to use the `poolMode`
flag.
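example:
Sketched with invented names (not the scheduler API): the pool loop
runs once per tier, so every active peer sees a stop/release pass
before the start/setup pass.

```nim
type
  PoolTier = enum
    releaseTier # first pass: stop/release
    setupTier   # second pass: start/setup

proc runPool(peers: seq[string]) =
  ## Loop over the active peers once per tier.
  for tier in [releaseTier, setupTier]:
    for peer in peers:
      case tier
      of releaseTier: echo "release ", peer
      of setupTier:   echo "setup   ", peer
```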
* Mitigate RLP serialiser deficiency
why:
Currently, serialising the `BlockBody` is not convertible and needs
to be checked in the `eth` module. For now, a local fix for the
wire protocol applies. Unit tests will stay (after this local solution
has been removed.)
* Code cosmetics and massage
details:
Main part is `types.toStr()` as a unified function for logging block
numbers.
* Allow to use a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history will currently only affect the score
values which were derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start of history facility, this allows to proceed
with full syncing once snap has finished.
details:
Snap db storage was switched from sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
For debugging on smaller databases, only
* Implement snap -> full sync switch
- Remove failing on withdrawalsRoot to allow Shanghai BlockHeader.
Still need to add real checks in block header / body validation.
- Add withdrawals array to Block object for JSON-RPC API
* Somewhat tighten error handling
why:
Zombie state is invoked when the current peer turns out to be useless
for further communication. While there is a chance to further talk
to a peer about another topic (aka healing) after some protocol failure,
it makes no sense to do so after a network problem.
The latter state is explained by the `peerDegraded` flag that goes
together with the `zombie` state flag. A degraded peer is dropped
immediately.
* Remove `--sync-mode=snapCtx` option, always start snap in recovery mode
why:
No need for a snap sync option without recovery mode, can be achieved
by deleting the database.
* Code cosmetics, typos, prettify logging, debugging helper, etc.
* Split off snap sync sub-mode handler into separate modules
details:
The original `worker.nim` source has become a multiplexer for the
sync sub-modes `full` and `snap`. The source modules of the
incarnations of a particular sync sub-mode are placed into the
`worker/play` directory.
* Update ticker for snap and full sync logging
* Fix fringe condition for `GetStorageRanges` message handler
why:
Receiving a proved empty range was not considered at all. This led to
inconsistencies in the return value which caused subsequent errors.
* Update storage range bulk download
details:
Mainly re-org of storage queue processing in `storage_queue_helper.nim`
* Update logging variables/messages
* Update storage slots healing
details:
Mainly clean up after improved helper functions from the sources
`find_missing_nodes.nim` and `storage_queue_helper.nim`.
* Simplify account fetch
why:
Too much fuss was made tolerating some errors. An overall strategy will
be implemented where the concert of download and healing functions is
orchestrated.
* Add error resilience to the concert of download and healing.
why:
The idea is that a peer might stop serving snap/1 accounts and storage
slot downloads while still able to support fetching nodes for healing.