The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4895 (withdrawals). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `withdrawalsRoot` field until proper withdrawals support exists.
https://github.com/status-im/nim-eth/pull/562
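A minimal sketch of such an ingress check, assuming nim-eth exposes the
new field as `withdrawalsRoot: Option[Hash256]` on `BlockHeader` (the
exact validation entry point in fluffy differs):

  import std/options
  import eth/common, stew/results

  proc validateHeaderIngress(header: BlockHeader): Result[void, string] =
    # Reject headers carrying the EIP-4895 field until proper
    # withdrawals support exists.
    if header.withdrawalsRoot.isSome():
      return err("EIP-4895 headers with withdrawalsRoot not yet supported")
    ok()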
* Add headers with proof content type and use it for verification
- Add BlockHeaderWithProof content type & content
- Use BlockHeaderWithProof content to verify if chain data is
part of the canonical chain
- Adjust parser & seeder code to be able to seed these headers
with proof
- Adjust eth_data_exporter to be able to export custom header
ranges for which to build proofs (mostly for testing)
There is currently quite some ugliness & clean-up needed, a big part
of which is due to supporting both BlockHeader and
BlockHeaderWithProof on the network.
* Change accumulator proof to array / SSZ vector type
- Change accumulator proof to SSZ vector instead of SSZ list.
- Add and use general buildProof and buildHeaderWithProof func.
* Make the BlockHeaderWithProof an SSZ Union with None option
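Sketched as Nim types, the SSZ Union maps onto a case object whose enum
discriminator doubles as the Union selector; the header length limit
here is an assumption:

  import ssz_serialization

  const MAX_HEADER_LENGTH = 2048  # assumed limit for the RLP payload

  type
    AccumulatorProof = array[15, array[32, byte]]  # SSZ vector, not list

    BlockHeaderProofType = enum
      none = 0x00             # Union selector 0: no proof attached
      accumulatorProof = 0x01

    BlockHeaderProof = object
      case proofType: BlockHeaderProofType
      of none:
        discard
      of accumulatorProof:
        proof: AccumulatorProof

    BlockHeaderWithProof = object
      header: List[byte, MAX_HEADER_LENGTH]  # RLP-encoded header
      proof: BlockHeaderProof                # Union[None, AccumulatorProof]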
* Update portal-spec-tests to master commit
The Portal master accumulator was removed from the network specs as a
content type shared on the network, since after the merge it is a
finite, pre-merge-only accumulator.
So in this PR the accumulator gets removed as a network type and
instead gets baked into the library. Building it is done by
separate tooling (eth_data_exporter).
Because of this a lot of extra code can be removed that was
located in history_network, content_db, portal_protocol, etc.
Also removed the option to build the accumulator at start-up of
fluffy, as this takes several minutes, making it not viable.
It can still be loaded from a provided file however.
The ssz accumulator file is for now stored in the recently
created portal-spec-tests repository.
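A sketch of loading it from a provided file; the type name, its field,
and the list limit are illustrative placeholders, and fluffy's actual
error handling differs:

  import ssz_serialization, stew/byteutils

  type Accumulator = object
    # placeholder: the real type holds the list of epoch roots;
    # the limit here is an assumption
    historicalEpochs: List[array[32, byte], 2048]

  proc loadAccumulator(path: string): Accumulator =
    # Raises on a missing file or on invalid SSZ.
    SSZ.decode(readFile(path).toBytes(), Accumulator)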
In preparation of our migration to the new Nimble-based setup
and [nim-workspace][1], we switch to a new model where the vendor
packages are no longer imported through a locally generated Nimble
dir, but rather through an auto-generated `nimbus-build-system.paths`
file that features regular `--path:` statements.
This file will be imported only within the nimbus-build-system
environment in order to avoid any unwanted interference in working
copies based on the new Nimble setup.
[1]: https://github.com/status-im/nim-workspace
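For illustration, the generated `nimbus-build-system.paths` file just
lists one compiler switch per vendored package, roughly:

  --path:"vendor/nim-eth"
  --path:"vendor/nim-stew"
  --path:"vendor/nim-chronicles"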
* Bump nim-stew
why:
Need fixed interval set
* Keep track of accumulated account ranges over all state roots
* Added comments and explanations to unit tests
* typo
* Provided common scheduler API, applied to `full` sync
* Use hexary trie as storage for proofs_db records
also:
+ Store metadata with account for keeping track of account state
+ add iterator over accounts
* Common scheduler API applied to `snap` sync
* Prepare for accounts bulk import
details:
+ Added some ad-hoc checks for proving accounts data received via
  snap/1 (will be replaced by a proper database version when ready)
+ Added code that dumps some of the received snap/1 data into a file
  (turned off by default, see `worker_desc.nim`)
* Error return in `persistBlocks()` on initial `VmState` problem
why:
previously threw an exception
* Updated sync mode option
why:
using an enum rather than a bool => space for more modes
* Added sync mode `full`, re-factored legacy sync
also:
rebased
* Fix typo (crashes `persistBlocks()` otherwise)
also:
rebase to master
* Reduce log ticker noise by suppressing duplicate messages
* Clarify staged queue overflow handling
why:
backtrack/re-org mode in `stageItem()` should be detected by both
the global indicator and the work item it might have moved into.
also:
rebased
* Relocated `IntervalSets` to nim-stew repo
* Accumulate accounts on temporary kv-DB
why:
Explore the data as returned from snap/1. Will be converted to an
`eth/db` next.
details:
Verify and accumulate per-state-root accounts downloaded via snap.
also:
Some unit tests
* Replace `Table` by `TrieDatabaseRef` for accounts accumulator
* update ticker statistics
details:
mean/variance based counter update
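One standard way to maintain such mean/variance counters is Welford's
online algorithm; a minimal sketch, not necessarily the ticker's
actual field names:

  type RunningStat = object
    count: int
    mean, m2: float

  proc update(r: var RunningStat, sample: float) =
    # numerically stable running mean/variance update
    inc r.count
    let delta = sample - r.mean
    r.mean += delta / r.count.float
    r.m2 += delta * (sample - r.mean)

  func variance(r: RunningStat): float =
    if r.count > 1: r.m2 / (r.count - 1).float else: 0.0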
* allow persistent db for proved accounts
* rebase, and globally activate unit test
* fix statistics
* Reorg SnapPeerBase descriptor, notably start/stop flags
details:
Instead of using the three boolean flags startedFetch, stopped, and
stopThisState, a single enum type is used with the values
SyncRunningOk, SyncStopRequest, and SyncStopped.
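Sketched as a Nim type, with the value names from the commit message:

  type SnapPeerState = enum
    SyncRunningOk
    SyncStopRequest
    SyncStopped

  # Three booleans allow 2^3 nominal combinations, most of them
  # invalid; a single enum field is always in a valid state, e.g.:
  # peer.state = SyncStopRequest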
* Restricting snap to eth66 and later
why:
The id-tracked request/response wire protocol can handle overlapped
responses when requests are sent in a row.
* Align function names with source code file names
why:
Easier to reconcile when following the implemented logic.
* Update trace logging (want file locations)
why:
The macros previously used hid the relevant file location (when
`chroniclesLineNumbers` is turned on); they rather printed the file
location of the template wrapping `trace`.
* Use KeyedQueue table instead of sequence
why:
Quick access, easy configuration as LRU or FIFO with max entries
(currently LRU.)
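The LRU behaviour this buys, sketched with a plain OrderedTable (this
shows the idea only, not stew's KeyedQueue API):

  import std/[options, tables]

  type LruCache[K, V] = object
    maxItems: int
    tab: OrderedTable[K, V]

  proc fetch[K, V](c: var LruCache[K, V], key: K): Option[V] =
    # fetch and mark as most recently used
    if key in c.tab:
      let val = c.tab[key]
      c.tab.del(key)      # re-insert to move the entry to the back
      c.tab[key] = val
      return some(val)
    none(V)

  proc add[K, V](c: var LruCache[K, V], key: K, val: V) =
    # append, evicting the least recently used entry when full
    if c.tab.len >= c.maxItems:
      var oldest: K
      for k in c.tab.keys:
        oldest = k        # first key = least recently used
        break
      c.tab.del(oldest)
    c.tab[key] = val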
* Dissolve `SnapPeerEx` object extension into `SnapPeer`
why:
It is logically cleaner and more obvious not to inherit from
`SnapPeerBase` but to specify opaque field object references of the
merged `SnapPeer` object. These can then be locally inherited.
* Dissolve `SnapSyncEx` object extension into `SnapSync`
why:
It is logically cleaner and more obvious not to inherit from
`SnapSyncEx` but to specify opaque field object references of
the `SnapPeer` object. These can then be locally inherited.
Also, in the re-factored code here the interface descriptor
`SnapSyncCtx` inherited `SnapSyncEx`, which was sub-optimal (OO
inheritance makes it easier to work with callback functions.)
* Update exception tracing
* Use lazy JSON parser
why:
The UInt256 type is not directly supported by the JSON serializer
* Json parser update for stringified UInt256 integers
why:
Now available
* update sub-module branch reference
Currently re-using http connections for the json-rpc requests
seems to slow down the requests considerably. Avoid this by
forcing a close after each request in the blockwalk tool.
* Enable JWT authentication for websockets
details:
Currently, this is optional and only enabled when the jwtsecret option
is set.
There is a default mechanism to generate a JWT secret if it is not
explicitly stated. This mechanism is currently unused.
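A minimal sketch of such a secret generation, using Nim's std
facilities (the actual option handling and file output are not shown):

  import std/sysrand, stew/byteutils

  proc genJwtSecret(): array[32, byte] =
    # the JWT secret is a 256-bit random value
    doAssert urandom(result)

  # hex-encoded, as typically written to a jwt-secret file
  let secretHex = genJwtSecret().toHex()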
* Make JWT authentication compulsory for websockets
* Fix unit test entry point + cosmetics
* Update JSON-RPC link
* Improvements as suggested by Mamy
* Enable optional chunked RLPx messages
why:
Legacy feature used by Nethermind
details:
Disable with make flag: ENABLE_CHUNKED_RLPX=0
* Rebase & bump nim-eth
* Fix default behaviour
why:
Got lost somehow. Comments do not match GNU make code directives.
* dist: precompiled binaries and Docker images
The builds are reproducible, the binaries are portable and statically link librocksdb.
This took some patching. Upstream PR: https://github.com/facebook/rocksdb/pull/9752
32-bit ARM is missing as a target because two different GCC versions
fail with an ICE when trying to cross-compile RocksDB. Using Clang
instead is too much trouble for a platform that nobody should be using
anyway.
(Clang doesn't come with its own target headers and libraries, can't be
easily convinced to use the ones from GCC, so it needs an fs image from
a 32-bit ARM distro - at which point I stopped caring).
* CI: disable reproducibility test
* Improve the tests of the local testnet
The local testnet test was rather flaky and would occasionally
fail. It has been made more robust by adding the ENRs directly
to the routing table instead of doing some random lookups.
Additionally, the number of nodes was increased (to 64), an ip limits
configuration was added, and the bits-per-hop value was set to 1
in order to make the lookups more likely to hit the network
instead of only the local routing table.
Failure is obviously still possible when enough packets get lost.
If this turns out to be the case with the current number of nodes,
we might have to revise the testing strategy here.
* Disable lookup test for State network
Disable the lookup test for the State network due to an issue with
the custom distance function causing the lookup to not always
converge towards the target.
Includes a simple test harness for the merge interop M1 milestone
This aims to enable connecting nimbus-eth2 to nimbus-eth1 within
the testing protocol described here:
https://github.com/status-im/nimbus-eth2/blob/amphora-merge-interop/docs/interop_merge.md
To execute the work-in-progress test, please run:
In terminal 1:
tests/amphora/launch-nimbus.sh
In terminal 2:
tests/amphora/check-merge-test-vectors.sh
* crash test scenario
details:
Example code for inspecting nested block chain and accounts cache
database transaction framework. There seems to be a pathological
case where the system crashes after a rollback (as appeared in the
tx-pool packer code.)
* simplified crash scenario
* Workable solution (as suggested by Andri)
details:
Avoiding db.rollback() (db.commit() is OK) while vmState.stateDB is
alive.
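The safe pattern, sketched with illustrative names (the real
transaction and release calls in nimbus differ in detail):

  let tx = chainDB.beginTransaction()
  try:
    processBlock(vmState)  # mutates state via vmState.stateDB
    tx.commit()            # OK while vmState.stateDB is alive
  except CatchableError:
    # do not roll back while vmState.stateDB is still alive:
    # release the vmState (and its accounts cache) first
    releaseVmState(vmState)
    tx.rollback()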
* Rename text_txcrash => test_accounts_cache
why:
Unit test covers part of the accounts_cache handling
* comment update
Currently bootstrap nodes for discv5 and for the Portal nodes
were provided through separate cli arguments. This is however
confusing and cumbersome, as a node under test will typically have
both discv5 and the Portal networks enabled.
We merge them into one argument.
If a node happens not to support a Portal network, it will be
removed after message request failure.
Commit also contains additional clean-up and nim-eth bump.
bugfixes and features:
- Switch to the Chronos HTTP client (adds support for HTTPS)
- Allow dynamic RPC method names in the 'rpc' macro
- Restore the support for using the news package
- Add basic discv5 and portal json-rpc calls and activate them in
fluffy
- Renames in the rpc folder
- Add local testnet script and run this script in CI
- bump nim-eth
* Add SSZ Unions through case objects
* Add connection id content response test and improve other test vectors
* Implement content keys and ids for state network as per spec
A case object is used for the content keys so that they can be
serialized and deserialized as an SSZ Union.
* Let message Union in Portal wire protocol start at 0 as per new spec
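The case-object-as-Union pattern, sketched for a content key (the
field types and limits here are illustrative; the state network spec
defines the real ones):

  import ssz_serialization

  type
    ContentType = enum
      # the enum ordinal doubles as the SSZ Union selector,
      # so the first variant sits at 0
      accountTrieNode = 0x00
      contractStorageTrieNode = 0x01

    AccountTrieNodeKey = object
      path: List[byte, 64]        # illustrative field
      nodeHash: array[32, byte]

    ContentKey = object
      case contentType: ContentType
      of accountTrieNode:
        accountTrieNodeKey: AccountTrieNodeKey
      of contractStorageTrieNode:
        discard                   # further variants elided

  # SSZ.encode(key) emits the selector byte followed by the selected
  # variant's serialization, as required for SSZ Unions.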