* Update log ticker, using time interval rather than ticker count
why:
Counting and logging ticker occurrences is inherently imprecise, so
time intervals are used instead.
* Use separate storage tables for snap sync data
* Left boundary proof update
why:
It was not properly implemented yet.
* Capture pivot in peer worker (aka buddy) tasks
why:
The pivot environment is linked to the `buddy` descriptor. Because the
pivot may change across a task switch, it is passed on as the function
argument `env` rather than retrieved from the buddy at the start of a
sub-function (see the sketch below).
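sketch:
A minimal illustration of the reasoning, using hypothetical `Buddy` and
`PivotEnv` types and std/asyncdispatch rather than the actual sync code:
the pivot captured as `env` stays stable across task switches, while
re-reading it from the buddy could yield a different pivot.

```nim
import std/asyncdispatch

type
  PivotEnv = ref object
    stateRoot: string
  Buddy = ref object
    pivot: PivotEnv                 # may be swapped by the scheduler at any time

proc fetchAccounts(buddy: Buddy; env: PivotEnv) {.async.} =
  ## `env` stays fixed for this task even if `buddy.pivot` changes while
  ## the task is suspended at an `await`.
  let captured = env.stateRoot
  await sleepAsync(10)                        # simulated task switch
  doAssert env.stateRoot == captured          # still the captured pivot
  doAssert buddy.pivot.stateRoot != captured  # the buddy already moved on

when isMainModule:
  let buddy = Buddy(pivot: PivotEnv(stateRoot: "0xaaaa"))
  let task = fetchAccounts(buddy, buddy.pivot)  # capture the pivot up front
  buddy.pivot = PivotEnv(stateRoot: "0xbbbb")   # scheduler switches the pivot
  waitFor task
```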
* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`
* Remove obsolete account range returned from `GetAccountRange` message
why:
The handler returned a wrong right-bound value for the range. This
range was provided for convenience only.
* Prioritise storage slots if the queue becomes large
why:
Currently, accounts processing is prioritised until all accounts are
downloaded. The new prioritisation has two thresholds (sketched below)
for when to
+ start processing storage slots with a new worker
+ stop account processing and switch to storage processing
also:
Provide an API for the `SnapTodoRanges` pair of range sets in `worker_desc.nim`
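sketch:
A minimal illustration of the two-threshold logic; the constant names and
values are assumptions for the example, not the actual configuration.

```nim
const
  storageQueueWorkerThreshold = 5_000    # assumed: spawn an extra storage worker
  storageQueueSwitchThreshold = 20_000   # assumed: pause accounts, drain storage

type WorkerMode = enum
  AccountsOnly, AccountsPlusStorage, StorageOnly

proc pickMode(storageQueueLen: int): WorkerMode =
  ## Decide what a worker should do next, based on storage queue pressure.
  if storageQueueLen >= storageQueueSwitchThreshold:
    StorageOnly              # stop account processing, switch to storage
  elif storageQueueLen >= storageQueueWorkerThreshold:
    AccountsPlusStorage      # start processing storage slots with a new worker
  else:
    AccountsOnly

when isMainModule:
  doAssert pickMode(100) == AccountsOnly
  doAssert pickMode(7_000) == AccountsPlusStorage
  doAssert pickMode(50_000) == StorageOnly
```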
* Generalise left boundary proof for accounts or storage slots.
why:
A detailed explanation of how this works is documented with
`snapdb_accounts.importAccounts()`.
Instead of enforcing a left boundary proof (which is still the default),
the importer functions return a list of `holes` (aka node paths) found in
the argument ranges of leaf nodes. This in turn is used by the
bookkeeping software for data download (see the sketch below).
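sketch:
A simplified model of the changed contract, not the repo's API: uncovered
gaps ("holes") inside a requested range are reported back so the download
bookkeeping can re-queue them. Plain uint64 ranges stand in for trie node
tags.

```nim
type NodeTagRange = tuple[lo, hi: uint64]   # assumed stand-in for node tag ranges

proc findHoles(requested: NodeTagRange;
               covered: seq[NodeTagRange]): seq[NodeTagRange] =
  ## Complement of the (sorted, disjoint) `covered` sub-ranges within `requested`.
  var cursor = requested.lo
  for c in covered:
    if c.lo > cursor:
      result.add((lo: cursor, hi: c.lo - 1))
    cursor = max(cursor, c.hi + 1)
  if cursor <= requested.hi:
    result.add((lo: cursor, hi: requested.hi))

when isMainModule:
  let holes = findHoles((lo: 0'u64, hi: 99'u64),
                        @[(lo: 10'u64, hi: 20'u64), (lo: 40'u64, hi: 60'u64)])
  doAssert holes.len == 3      # 0..9, 21..39 and 61..99 get re-queued
```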
* Forgot to pass on variable in function wrapper
also:
+ Start healing only after 99% of accounts are covered (previously 95%)
+ Logging updated/prettified
* Add headers with proof content type and use it for verification
- Add BlockHeaderWithProof content type & content
- Use BlockHeaderWithProof content to verify if chain data is
part of the canonical chain
- Adjust parser & seeder code to be able to seed these headers
with proof
- Adjust eth_data_exporter to be able to export custom header
ranges for which to build proofs (mostly for testing)
There is currently quite some ugliness & clean-up needed, a big part of
which is due to supporting both BlockHeader and BlockHeaderWithProof on
the network.
* Change accumulator proof to array / SSZ vector type
- Change accumulator proof to SSZ vector instead of SSZ list.
- Add and use general buildProof and buildHeaderWithProof func.
* Make the BlockHeaderWithProof an SSZ Union with None option
* Update portal-spec-tests to master commit
* Added basic async capabilities for vm2.
This is a whole new Git branch, not the same one as last time
(https://github.com/status-im/nimbus-eth1/pull/1250) - there wasn't
much worth salvaging. Main differences:
I didn't do the "each opcode has to specify an async handler" junk
that I put in last time. Instead, in oph_memory.nim you can see
sloadOp calling asyncChainTo and passing in an async operation.
That async operation is then run by the execCallOrCreate (or
asyncExecCallOrCreate) code in interpreter_dispatch.nim.
In the test code, the (previously existing) macro called "assembler"
now allows you to add a section called "initialStorage", specifying
fake data to be used by the EVM computation run by that test. (In
the long run we'll obviously want to write tests that for-real use
the JSON-RPC API to asynchronously fetch data; for now, this was
just an expedient way to write a basic unit test that exercises the
async-EVM code pathway.)
There's also a new macro called "concurrentAssemblers" that allows
you to write a test that runs multiple assemblers concurrently (and
then waits for them all to finish). There's one example test using
this, in test_op_memory_lazy.nim, though you can't actually see it
doing so unless you uncomment some echo statements in
async_operations.nim (in which case you can see the two concurrently
running EVM computations each printing out what they're doing, and
you'll see that they interleave).
A question: is it possible to make EVMC work asynchronously? (For
now, this code compiles and "make test" passes even if ENABLE_EVMC
is turned on, but it doesn't actually work asynchronously, it just
falls back on doing the usual synchronous EVMC thing. See
FIXME-asyncAndEvmc.)
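sketch:
A much simplified model of the chaining idea described above, using
std/asyncdispatch and made-up type/field names rather than the actual vm2
code: an opcode handler parks a Future on the computation, and the dispatch
loop awaits it before execution continues.

```nim
import std/[asyncdispatch, tables]

type Computation = ref object
  storage: Table[int, int]             # stand-in for fetched storage slots
  pendingAsyncOperation: Future[void]  # set by an opcode, awaited by dispatch

proc fetchStorageSlot(c: Computation; slot: int): Future[void] {.async.} =
  await sleepAsync(5)                  # pretend this is a JSON-RPC round trip
  c.storage[slot] = 42

proc asyncChainTo(c: Computation; fut: Future[void]) =
  c.pendingAsyncOperation = fut        # an opcode handler (e.g. SLOAD) calls this

proc execLoop(c: Computation) {.async.} =
  ## Dispatcher: run opcodes; whenever one chained an async operation,
  ## await it before resuming execution.
  c.asyncChainTo(c.fetchStorageSlot(7))
  if c.pendingAsyncOperation != nil:
    await c.pendingAsyncOperation
    c.pendingAsyncOperation = nil
  doAssert c.storage[7] == 42          # data is available when execution resumes

when isMainModule:
  waitFor execLoop(Computation())
```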
* Moved the AsyncOperationFactory to the BaseVMState object.
* Made the AsyncOperationFactory into a table of fn pointers.
Also ditched the plain-data Vm2AsyncOperation type; it wasn't
really serving much purpose. Instead, the pendingAsyncOperation
field directly contains the Future.
* Removed the hasStorage idea.
It's not the right solution to the "how do we know whether we
still need to fetch the storage value or not?" problem. I
haven't implemented the right solution yet, but at least
we're better off not putting in a wrong one.
* Added/modified/removed some comments.
(Based on feedback on the PR.)
* Removed the waitFor from execCallOrCreate.
There was some back-and-forth in the PR regarding whether nested
waitFor calls are acceptable:
https://github.com/status-im/nimbus-eth1/pull/1260#discussion_r998587449
The eventual decision was to just change the waitFor to a doAssert
(since we probably won't want this extra functionality when running
synchronously anyway) to make sure that the Future is already
finished.
* Update docu and logging
* Extracted and updated constants from `worker_desc` into separate file
* Update and re-calibrate communication error handling
* Allow simplified pivot negotiation
why:
This feature allows turning off pivot negotiation (where peers agree on
a pivot header).
For snap sync with fast changing pivots, negotiation only throttles the
sync process. The finally downloaded DB snapshot is typically a merged
version of different pivot states, augmented by a healing process.
* Re-model worker queues for accounts download & healing
why:
Currently there is only one data fetch per download or healing task.
This task is then repeated by the scheduler after a short time. In
many cases, this short time seems enough for some peers to decide to
terminate the connection.
* Update main task batch `runMulti()`
details:
The function `runMulti()` is activated in quasi-parallel mode by the
scheduler. This function calls the download, healing and fast-sync
functions.
In debug mode, after each set of jobs run by this function the
database is analysed (by the `snapdb_check` module) and the result
printed.
* Update logging
* Fix node hash associated with partial path for missing nodes
why:
Healing uses the partial paths for fetching nodes from the network. The
node hash (or key) is used to verify the node data retrieved.
When a missing node was detected, the trie inspector function returned
the parent hash instead of the node hash belonging to the partial path,
so all nodes fetched for healing were rejected.
* Must not modify sequence while looping over it
* Re-arrange fetching storage slots in batch module
why:
Previously, fetching partial slot ranges first had a chance of
terminating the worker peer (due to a network error) while there were
still many inheritable storage slots on the queue.
Now, inheritance is checked first, then full slot ranges and finally
partial ranges (see the sketch below).
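sketch:
A minimal illustration of the new ordering within a batch; the item shape
is made up for the example.

```nim
type StorageQueueItem = object
  inheritable: bool   # an identical sub-trie was already downloaded elsewhere
  partial: bool       # only a sub-range of the slots is still missing

proc processingOrder(batch: seq[StorageQueueItem]): seq[StorageQueueItem] =
  ## 1. inheritance checks (no network traffic),
  ## 2. full slot ranges,
  ## 3. partial slot ranges.
  for it in batch:
    if it.inheritable:
      result.add it
  for it in batch:
    if not it.inheritable and not it.partial:
      result.add it
  for it in batch:
    if not it.inheritable and it.partial:
      result.add it

when isMainModule:
  let batch = @[
    StorageQueueItem(partial: true),
    StorageQueueItem(inheritable: true),
    StorageQueueItem()]
  doAssert processingOrder(batch)[0].inheritable   # inheritance handled first
```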
* Update logging
* Bundled node information for healing into single object `NodeSpecs`
why:
Previously, partial paths and node keys were kept in separate variables.
This approach was error prone due to copying/reassembling function
argument objects.
As all partial paths, keys, and node data types are more or less handled
as `Blob`s over the network (using Eth/6x or Snap/1), it makes sense to
hold these `Blob`s as named fields in a single object, even if not all
fields are active for the current purpose (see the sketch below).
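sketch:
The field names below are assumed from the description rather than copied
from the source; the point is bundling the related `Blob`s in one record
instead of passing them around separately.

```nim
type
  Blob = seq[byte]
  NodeSpecs = object
    partialPath: Blob   # path prefix leading to the node in the hexary trie
    nodeKey: Blob       # hash/key used to verify the fetched node data
    data: Blob          # encoded node body, when already available

proc dangling(partialPath, nodeKey: Blob): NodeSpecs =
  ## Convenience constructor for a node that still has to be fetched.
  NodeSpecs(partialPath: partialPath, nodeKey: nodeKey)

when isMainModule:
  let n = dangling(@[0x12'u8, 0x34'u8], newSeq[byte](32))
  doAssert n.data.len == 0   # not fetched yet
```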
* For good housekeeping, using `NodeKey` type only for account keys
why:
Previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
state or storage root keys use the `Hash256` type.
* Always accept latest pivot (and not a slightly older one)
why:
For testing, a slightly older pivot state root than the latest available
one was tried. Some anecdotal tests seemed to suggest an advantage, in
that more peers were willing to serve that older pivot, but this could
not be confirmed in subsequent tests (still anecdotal, though.)
As a side note, the distance of the latest pivot to its predecessor is
at least 128 (or whatever the constant `minPivotBlockDistance` is
assigned to.)
* Reshuffle name components for some file and function names
why:
Clarifies purpose:
"storages" becomes: "storage slots"
"store" becomes: "range fetch"
* Stash away currently unused modules in sub-folder named "notused"
* Multiple storage batches at a time
why:
Previously, only a small portion was processed at a time, so the peer
might have been gone by the time the process was resumed later.
* Renamed a field of the snap/1 protocol response object
why:
The field documented as `slots` is in reality a per-account list of slot
lists, so the new name `slotLists` better reflects the nature of the beast.
* Some minor healing re-arrangements for storage slot tries
why:
Resolving all complete inherited slot tries first in sync mode keeps
the worker queues smaller, which improves logging.
* Prettify logging, comments update etc.
* Re-model persistent database access
why:
Storage slots healing ran on the wrong sub-trie (i.e. the wrong
key mapping). So get/put and bulk functions now use the definitions
in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
* Fixes: missing return code, typo, redundant imports etc.
* Remove obsolete debugging directives from `worker_desc` module
* Correct failing unit tests for storage slots trie inspection
why:
Some pathological cases for the extended tests do not produce any
hexary trie data. This is rightly detected by the trie inspection,
and the result checks needed to be adjusted.
* For snap sync, publish `EthWireRef` in sync descriptor
why:
Currently used for noise control
* Detect and reuse existing storage slots
* Provide healing module for storage slots
* Update statistic ticker (adding range factor for unprocessed storage)
* Complete merge function for work item ranges
why:
Merging an interval into an existing partial item was missing (see the
sketch below).
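sketch:
A simplified illustration with plain integer intervals rather than the
repo's node tag ranges: the previously missing case is folding a new
interval into an overlapping or adjacent queued item.

```nim
type Iv = tuple[lo, hi: int]

proc mergeInto(items: var seq[Iv]; iv: Iv) =
  ## Merge `iv` into an existing overlapping/adjacent item, or append it.
  for it in items.mitems:
    if iv.lo <= it.hi + 1 and it.lo <= iv.hi + 1:
      it.lo = min(it.lo, iv.lo)
      it.hi = max(it.hi, iv.hi)
      return
  items.add iv

when isMainModule:
  var q = @[(lo: 0, hi: 9), (lo: 50, hi: 59)]
  q.mergeInto((lo: 10, hi: 20))    # extends the first partial item
  doAssert q == @[(lo: 0, hi: 20), (lo: 50, hi: 59)]
```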
* Show average storage queue lengths in ticker
detail:
A previous attempt showed average completeness, which did not tell much
* Correct the meaning of the storage counter (per pivot)
detail:
It is the number of accounts that have storage data saved
- Can write epoch accumulators to files now with eth_data_exporter
- RPC requests to gossip epoch accumulators now uses these files
instead of building on the fly
- Other build accumulator calls are adjusted and only used for
tests and thus moved to testing folder
For accumulator building we now use intermediary header epoch
files, which renders the temporary db previously used for this
obsolete. That code was already no longer in use.
The Portal master accumulator was removed from the network specs as a
content type shared on the network because, since the merge, it is a
finite accumulator (pre-merge only).
So in this PR the accumulator gets removed as a network type and
instead gets baked into the library. Building it is done by
separate tooling (eth_data_exporter).
Because of this a lot of extra code can be removed that was
located in history_network, content_db, portal_protocol, etc.
Also removed the option to build the accumulator at start-up
of fluffy, as this takes several minutes and is thus not viable.
It can still be loaded from a provided file however.
The ssz accumulator file is for now stored in the recently
created portal-spec-tests repository.
* Rename `LeafRange` => `NodeTagRange`
* Replacing storage slot partition point by interval
why:
The partition point only allows describing slot ranges of the form
`[point,high(Uint256)]` for fetching. This has been generalised to
arbitrary intervals (see the sketch below).
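sketch:
A simplified before/after comparison, using uint64 instead of UInt256 and
made-up record names: the old partition point is just the special case
`[point,high]` of the new interval form.

```nim
type
  OldStorageItem = object
    partitionPoint: uint64            # implicitly means [partitionPoint, high]
  NewStorageItem = object
    subRange: tuple[lo, hi: uint64]   # any interval of slot keys

proc asInterval(old: OldStorageItem): NewStorageItem =
  ## The old representation maps onto the special case `[point, high]`.
  NewStorageItem(subRange: (lo: old.partitionPoint, hi: high(uint64)))

when isMainModule:
  doAssert asInterval(OldStorageItem(partitionPoint: 100)).subRange ==
    (lo: 100'u64, hi: high(uint64))
  # The new form can also express e.g. [100, 200], which the old one could not.
  discard NewStorageItem(subRange: (lo: 100'u64, hi: 200'u64))
```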
* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`
why:
Generalised healing status for accounts, and later for storage slots.
* Improve accounts healing loop
* Split `snap_db` into accounts and storage modules
why:
It is cleaner to have separate session descriptors for accounts and
storage slots (based on a common base descriptor.)
Also, persistent storage handling might be changed in the future, which
requires the storage slot implementation to be disentangled from the
accounts handling.
* Re-model worker queues for storage slots
why:
There is a dynamic list of storage sub-tries, each of which has to be
treated similarly to the accounts database. This applies to slot
interval downloads as well as to healing.
* Compress some return value report lists for snapdb methods
why:
No need to report all handling details for work items that are filtered
out and discarded, anyway.
* Remove inner loop frame from healing function
why:
The healing function runs as a loop body already.
- Add exportHeaders to write headers in e2s file per epoch
- Add verifyHeaders to verify decoding the above files
- Adjust exportAccumulatorData to use generated header epoch files
and write accumulator in SSZ (no hex)
- Add printAccumulatorData to print all root hashes
- Let accumulator finish its last pre-merge epoch (hash_tree_root
on incomplete epoch).
- Adjust code to use isPreMerge and remove isCurrentEpoch
- Split up tests to a set that runs with a mainnet merge block
number and a set that runs with a testing value.
* Split fetch accounts into sub-modules
details:
There will be separate modules for the accounts snapshot, the storage
snapshot, and healing for either.
* Allow rebasing the pivot to before the negotiated header
why:
Peers seem to have only a limited number of snapshots available. By
setting back the pivot block header slightly, the chances might be
higher to find more peers able to serve this pivot. Experiments on
mainnet showed that when setting it back too far (tested with 1024),
the chances of finding matching snapshot peers seem to decrease.
* Add accounts healing
* Update variable/field naming in `worker_desc` for readability
* Handle leaf nodes in accounts healing
why:
There is no need to fetch accounts that have already been added by the
healing process. On the flip side, these accounts must be checked for
storage data and the batch queue updated accordingly.
* Reorganising accounts hash ranges batch queue
why:
The aim is to formally cover as many accounts as possible for different
pivot state root environments. Formerly, this was tried by starting the
accounts batch queue at a random value for each pivot (and wrapping
around.)
Now, each pivot environment starts with an interval set mutually
disjoint from any interval set retrieved with other pivot state roots
(see the sketch below).
also:
Stop fishing for more pivots in `worker` if 100% download is reached
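sketch:
A toy illustration using a 0..255 key space and std/sets instead of UInt256
node tag intervals: a fresh pivot's to-do set starts from whatever no
earlier pivot has covered, so downloads under different state roots do not
overlap.

```nim
import std/sets

type Coverage = object
  fetched: HashSet[int]         # keys already downloaded under any pivot

proc newPivotTodo(cov: Coverage): HashSet[int] =
  ## To-do set for a fresh pivot: everything not covered by earlier pivots.
  for key in 0 .. 255:
    if key notin cov.fetched:
      result.incl key

when isMainModule:
  var cov = Coverage()
  cov.fetched.incl 0
  cov.fetched.incl 1
  let todo = newPivotTodo(cov)
  doAssert 0 notin todo and 2 in todo   # disjoint from already fetched keys
```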
* Reorganise/update accounts healing
why:
Error handling was wrong, and the (mathematical complexity of the) whole
process could be managed better.
details:
Much of the algorithm is now documented at the top of the file
`heal_accounts.nim`
* Miscellaneous updates TBC
* Disentangled pivot2 module from snap
why:
It was written as a template on top of sync so it can be shared by fast
and snap sync.
* Renamed and relocated pivot sources
* Integrated `best_pivot` module into full and snap sync
why:
Full sync used an older version of `best_pivot`
* Isolating download module from full sync
why:
It might be shared with snap sync at a later stage.
This also requires splitting header data propagation from block body
and receipts propagation, as the now fixed bug would allow more data
to be gossiped even when the data does not get validated (which
requires the headers).