Commit Graph

1922 Commits

Author SHA1 Message Date
Jordan Hrycaj 05483d89bd
Rename flare as beacon (#2680)
* Remove `--sync-mode` option from nimbus config

why:
  Currently there is only one sync mode available.

* Rename `flare` -> `beacon`, but not base module folder and nim source

why:
  The name `flare` was used to designate an alternative `beacon` mode.

  Leaving the base folder and source as-is for a moment makes it easier
  to read change diffs.

* Rename `flare` base module folder and nim source: `flare` -> `beacon`
2024-10-02 11:31:33 +00:00
tersec 216604d0d6
add eth_sendRawTransaction to server API (#2678) 2024-10-02 03:56:39 +00:00
Jordan Hrycaj 5b6ccddaa0
Db folder sources and related remove compiler warnings (#2673)
* Aristo: Rename `Hash256` -> `Hash32`

* CoreDb: Rename `Hash256` -> `Hash32`

* Ledger: Rename `Hash256` -> `Hash32`

* StorageTypes: Rename `Hash256` -> `Hash32`

* Aristo: Rename `Blob` -> `seq[byte]`, `keccakHash` -> `keccak256`

* Kvt: Rename `Blob` -> `seq[byte]`

* CoreDb: Rename `Blob` -> `seq[byte]`, `keccakHash` -> `keccak256`

* Ledger: Rename `Blob` -> `seq[byte]`, `keccakHash` -> `keccak256`

* CoreDb: Rename `BlockHeader` -> `Header`, `BlockNonce` -> `Bytes8`

* Misc: Rename `StorageKey` -> `Bytes32`

* Tracer: `Hash256` -> `Hash32`, `BlockHeader` -> `Header`, etc.

* Fix copyright header
2024-10-01 21:03:10 +00:00
Jacek Sieka 219b22b1f5
Versioned hash32 (#2672) 2024-10-01 19:40:37 +02:00
Jordan Hrycaj c022b29d14
Clean up modules in sync folder (#2670)
* Dissolve legacy `sync/types.nim` into `*/eth/eth_types.nim`

* Flare sync: Simplify scheduler and remove `runSingle()` method

why:
  `runSingle()` is not used anymore (main purpose was for negotiating
  best headers in legacy full sync.)

  Also, `runMulti()` was renamed `runPeer()`

* Flare sync: Move `chain` field from `sync_desc` -> `worker_desc`

* Flare sync: Remove handler descriptor lateral reference

why:
  Not used anymore. It allowed turning eth handler activity on and off
  depending on the sync state, i.e. from within the sync worker.

* Flare sync: Update `Hash256` and other deprecated `std/eth` symbols

* Protocols: Update `Hash256` and other deprecated `std/eth` symbols

* Eth handler: Update `Hash256` and other deprecated `std/eth` symbols

* Update flare TODO

* Remove redundant `sync/type` import

why:
  The import module `type` has been removed

* Remove duplicate implementation
2024-10-01 09:19:29 +00:00
Advaita Saha e8542f951f
External syncer (#2574)
* initial external sync structure

* add to makefile

* feat: external syncer with healing process

* fix: suggestions

* network mapping

* minor changes

* forward sync

* jwt auth

* nrpc structure

* nrpc engine-api loader

* fix: suggestions

* fix: suggestions

* remove sync_db

* fix: edge cases and forks

* fix: rebase changes

* revert nimbus config changes
2024-09-29 18:48:11 +02:00
Jacek Sieka c210885b73
eth: bump to new types (#2660)
This is a minimal set of changes to make things work with the new types
in nim-eth - this is the minimal PR that merely resolves
incompatibilities while the full change set would include more cleanup
and migration.
2024-09-29 14:37:09 +02:00
andri lim 5f1b945ebe
Remove beacon sync (#2666) 2024-09-29 02:13:50 +00:00
Jordan Hrycaj debb68b3a7
Flare sync: Remove `debug` modules (#2665)
why:
  Not needed anymore but deletion kept in a separate PR to make it easy
  to refer back and find these modules.
2024-09-27 16:59:16 +00:00
Jordan Hrycaj 0d2a72d2a9
Flare sync (#2627)
* Cosmetics, small fixes, add stashed headers verifier

* Remove direct `Era1` support

why:
  Era1 is indirectly supported by using the import tool before syncing.

* Clarify database persistent save function.

why:
  Function relied on the last saved state block number which was wrong.
  It now relies on the tx-level. If it is 0, then data are saved directly.
  Otherwise the task that owns the tx will do it.

* Extracted configuration constants into separate file

* Enable single peer mode for debugging

* Fix peer losing issue in multi-mode

details:
  Concurrent download peers were previously programmed to each run a
  batch downloading and storing ~8k headers and then leave the `async`
  function to be restarted by a scheduler.

  This was unfortunate because of occasionally long waiting times before
  the restart.

  While the gap until restarting was typically a few milliseconds, there
  were always a few outliers well exceeding several seconds. This seemed
  to cause remote peers to run into timeouts.

* Prefix function names `unprocXxx()` and `stagedYyy()` by `headers`

why:
  There will be other `unproc` and `staged` modules.

* Remove cruft, update logging

* Fix accounting issue

details:
  When staging after fetching headers from the network, there was an
  off-by-one error when the result was one header smaller than requested.
  Also, a whole range was mis-accounted when a peer terminated the
  connection immediately after responding.

* Fix slow/error header accounting when fetching

why:
  Originally meant to detect slow header responses in a row, the counter
  was wrongly extended to cover general errors.

* Temporarily ban peers that continuously respond with too few headers

why:
  Some peers only returned one header at a time. If these peers sit on a
  farm, they might collectively slow down the download process.

* Update RPC beacon header updater

why:
  The old function hook has slightly changed its meaning since it was used
  for snap sync. Also, the old hook is already used by other functions.

* Limit number of peers or set to single peer mode

details:
  Merge several concepts, single peer mode being one of them.

* Some code cleanup and fixes to remove compiler warnings

* De-noise header fetch related sources

why:
  Header download looks relatively stable, so general debugging is not
  needed anymore. This is the equivalent of removing the scaffolding from
  the part of the building where work has been completed.

* More clean up and code prettification for headers stuff

* Implement body fetch and block import

details:
  Available headers are used to stage blocks by combining them with newly
  fetched block bodies. Then these blocks are imported/executed via
  `persistBlocks()` (see the sketch after this commit).

* Logger cosmetics and cleanup

* Remove staged block queue debugging

details:
  Feature still available, just not executed anymore

* Docu, logging update

* Update/simplify `runDaemon()`

* Re-calibrate block body requests and soft config for import blocks batch

why:
* For fetching, larger fetch requests are mostly truncated anyway on
  MainNet.
* For executing, smaller batch sizes reduce the memory needed at the
  price of longer execution times.

* Update metrics counters

* Docu update

* Some fixes, formatting updates, etc.

* Update `borrowed` type: uint -> uint64

also:
  Always convert to `uint64` rather than `uint` where appropriate
2024-09-27 15:07:42 +00:00
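
A minimal Nim sketch of the staging step from the body-fetch bullet above; all type and procedure names here are illustrative stand-ins, not the actual nimbus-eth1 API:

```
# Hypothetical stand-ins: pair already-downloaded headers with freshly
# fetched bodies and hand the resulting blocks to the importer in one batch.
type
  HeaderStub = object
    number: uint64
  BodyStub = object
    txCount: int
  BlockStub = object
    header: HeaderStub
    body: BodyStub

proc fetchBodies(headers: seq[HeaderStub]): seq[BodyStub] =
  # stand-in for the network request, one (empty) body per header
  result.setLen(headers.len)

proc persistBlocks(blocks: seq[BlockStub]) =
  # stand-in for import/execution of the staged batch
  echo "importing ", blocks.len, " blocks"

let headers = @[HeaderStub(number: 100), HeaderStub(number: 101)]
let bodies = fetchBodies(headers)
var blocks: seq[BlockStub]
for i, h in headers:
  blocks.add BlockStub(header: h, body: bodies[i])
persistBlocks(blocks)
```
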
andri lim db8b68a28c
ForkedChainRef.forkchoice: Skip newBase calculation and skip chain finalization if finalizedHash is zero (#2654)
* ForkedChainRef.forkchoice: Skip newBase calculation and skip chain finalization if finalizedHash is zero

* Fix ForkedChainRef.forkChoice: do nothing if headHash is the same as cursorHash

* Fix stupid bug in engine API FCU when calling ForkedChainRef.forkChoice

* Wire RPC server API to nimbus RPC manager

* Add test case

* Use default(Hash256) in ForkedChainRef
2024-09-27 07:53:27 +07:00
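
A hedged sketch of the guard described in the commit above, with the hash type simplified to a stub; the real `ForkedChainRef.forkChoice` signature differs:

```
# A zero finalized hash means "no finalized block yet", so both the
# newBase calculation and chain finalization are skipped.
type HashStub = array[32, byte]

proc forkChoice(headHash, finalizedHash: HashStub) =
  if finalizedHash == default(HashStub):
    echo "finalizedHash is zero: skip newBase calculation and finalization"
    return
  echo "regular forkchoice processing"

var head: HashStub
head[0] = 1
forkChoice(head, default(HashStub))
```
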
Jacek Sieka f3e3c6bbe0
init style for Hash256 (#2661)
* init style for Hash256

https://github.com/status-im/nim-eth/pull/733 updates `Hash256` to
become an array instead of an object - unfortunately, nim does not allow
constructing arrays with `name()`, so this PR changes it to `default`
which works with both.

* lint
2024-09-26 13:24:36 +02:00
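
A small Nim illustration of why the commit above switches to `default`: it works for both the old object form and the new array form of `Hash256`, while `T()` construction only works for objects (the types below are stand-ins):

```
type
  HashAsObject = object          # old nim-eth style
    data: array[32, byte]
  HashAsArray = array[32, byte]  # new nim-eth style

let a = default(HashAsObject)    # works
let b = default(HashAsArray)     # works too
let c = HashAsObject()           # fine for an object ...
# let d = HashAsArray()          # ... but does not compile for an array

echo a.data.len, " ", b.len, " ", c.data.len
```
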
Jacek Sieka 513f11f911
bumps (#2652)
eth/stew/unittest2 in preparation for eth refactoring
2024-09-24 13:19:09 +02:00
andri lim 38d651c9c8
FCU should consider ForkedChainRef when calculating valid ancestor (#2651) 2024-09-24 10:53:18 +00:00
Advaita Saha 379592e711
Fix import stuck with era history behind (#2629)
* fix: nimbus state ahead of era history

* comments

* fix: suggestions

* fix: messages

* fix edge case resume

* check from last file

* formatting

* fix: typo

* fix: unwanted quit before rlp import
2024-09-21 08:38:38 +02:00
Jacek Sieka 7a15aa2a3a
clean up vertex delete (#2644)
avoid allocating and updating the trie twice when the branch is fully
removed
2024-09-20 10:31:29 +02:00
Jacek Sieka b4b4d16729
speed up key computation (#2642)
* batch database key writes during `computeKey` calls
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that directly can be loaded from the
vertex
2024-09-20 07:43:53 +02:00
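
The first bullet above is the classic write-batching pattern; a minimal Nim sketch under invented names (the actual Aristo/rocksdb API is different):

```
import std/tables

type BatchStub = object
  pending: Table[string, seq[byte]]

proc putKey(b: var BatchStub, vid: string, key: seq[byte]) =
  # collect computed keys instead of writing them one by one
  b.pending[vid] = key

proc flush(b: var BatchStub) =
  # one database write for the whole batch
  echo "writing ", b.pending.len, " keys in a single batch"
  b.pending.clear()

var batch: BatchStub
for i in 1 .. 3:
  batch.putKey("vertex" & $i, @[byte(i)])
batch.flush()
```
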
Jacek Sieka 2fe8cc4551
leaf cache fixes (#2637)
* Add missing leaf cache update when a leaf turns to a branch with two
leaves (on merge) and vice versa (on delete) - this could lead to stale
leaves being returned from the cache causing validation failures - it
didn't happen because the leaf caches were not being used efficiently :)
* Replace `seq` with `ArrayBuf` in `Hike` allowing it to become
allocation-free - this PR also works around an inefficiency in nim in
returning large types via a `var` parameter
* Use the leaf cache instead of `getVtxRc` to fetch recent leaves - this
makes the vertex cache more efficient at caching branches because fewer
leaf requests pass through it.
2024-09-19 10:39:06 +02:00
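
A toy fixed-capacity buffer in the spirit of the `ArrayBuf` swap mentioned above (not the actual stew type): a length-tracked array keeps the hot `Hike` path allocation-free:

```
type ArrayBufSketch[N: static int, T] = object
  data: array[N, T]
  len: int                       # number of slots in use

proc add[N: static int, T](b: var ArrayBufSketch[N, T], v: T) =
  assert b.len < N, "capacity exceeded"
  b.data[b.len] = v
  inc b.len

var path: ArrayBufSketch[64, uint8]   # lives entirely on the stack
path.add 1'u8
path.add 2'u8
echo path.len
```
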
tersec 3fb2e080ea
rm exp_ RPC API infrastructure; had no actual RPC endpoints (#2635)
* rm exp_ RPC API infrastructure; had no actual RPC endpoints

* update command-line flag descriptions
2024-09-18 08:53:26 +00:00
tersec 6bf4cd55b9
rm some commented-out/stub/obsolete RPC endpoints (#2630) 2024-09-16 19:33:20 +00:00
Jacek Sieka 5cd0297462
fix missed cache opportunity (#2628)
The storage leaf cache was being circumvented when actually fetching
leaves and was instead only being filled with items :/

Also avoids an expensive copy when fetching account data (broadly,
variant objects are comparatively expensive to copy and fetching
accounts is a hotspot)
2024-09-14 09:47:32 +02:00
Jacek Sieka adb8d64377
simplify VertexRef (#2626)
* move pfx out of variant which avoids pointless field type panic checks
and copies on access
* make `VertexRef` a non-inheritable object which reduces its memory
footprint and simplifies its use - it's also unclear from a semantic
point of view why inheritance makes sense for storing keys
2024-09-13 18:55:17 +02:00
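
A toy Nim illustration of the two points above (declarations invented, not the Aristo ones): a field kept outside the variant part is readable without a discriminator check, and a plain non-inheritable object carries no hidden type field:

```
type
  VKind = enum Leaf, Branch

  VertexSketch = object    # plain object: non-inheritable, smaller footprint
    pfx: seq[byte]         # shared field, outside the case part
    case kind: VKind
    of Leaf:
      payload: seq[byte]
    of Branch:
      used: uint16

let v = VertexSketch(kind: Branch, pfx: @[1'u8, 2], used: 3)
echo v.pfx.len             # no field-check panic possible: pfx exists for every kind
```
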
andri lim 0be6291fba
Bump nim-eth and nim-web3 (#2625) 2024-09-13 15:48:27 +02:00
Jacek Sieka 5c1e2e7d3b
Migrate `keyed_queue` to `minilru` (#2608)
Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
2024-09-13 15:47:50 +02:00
tersec aaefac0795
key eth_syncing off correct indication, not peer count (#2619) 2024-09-12 16:42:38 +00:00
andri lim 178d77ab31
Implement EIP-7002 and EIP-7251 (#2616) 2024-09-12 16:09:46 +07:00
andri lim 6503d51b44
Implement EIP-6110: Execution layer triggered deposits (#2612)
* Implement EIP-6110: Execution layer triggered deposits

* Implement EIP-6110 of t8n tool

* Avoid unnecessary DepositRequestType check

* Avoid using 'result' in t8n helpers

* Fix logs collection and deposits validation
2024-09-12 16:09:46 +07:00
jangko 8e8258e460
Prague types conversion 2024-09-12 16:09:42 +07:00
web3-developer e8a9cfe555
Re-enable eth_getProof implementation (#2599)
* Re-enable eth_getProof implementation.

* Update to use latest Aristo proof changes.

* Refactor and cleanup.
2024-09-12 09:06:31 +08:00
Jordan Hrycaj c6674311eb
Fringe case portal proof for existing account without storage tree (#2613)
detail:
  For practical reasons, if such an account is asked for a slot, an empty
  proof list is returned. It is up to the user to provide an account
  proof that shows that there is no storage tree.
2024-09-11 20:27:42 +00:00
Jordan Hrycaj 75808bc03b
Add portal proof functionality for non-existing keys/paths (#2610) 2024-09-11 09:39:45 +00:00
Jordan Hrycaj 1ced684d8f
Update flare header download mechanism (#2607)
* Reverse order in staged blob lists

why:
  Having the largest block number at the lowest header list index `0`
  makes it easier to grow the list with parent headers, i.e. decreasing
  block numbers.

* Set a header response threshold when to ditch peer

* Refactor extension of staged header chains record

why:
  It was cobbled together as a proof of concept after trying several
  approaches to running the download.

* TODO update

* Make debugging code independent of `release` flag

* Update import from jacek
2024-09-10 11:37:49 +00:00
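
A tiny Nim sketch of the ordering argument in the first bullet (the header type is a stub): with the largest block number at index `0`, extending the staged record towards lower block numbers is a plain append:

```
type HeaderStub = object
  number: uint64

# staged record, largest block number first
var staged = @[HeaderStub(number: 102), HeaderStub(number: 101)]

proc growWithParent(list: var seq[HeaderStub]) =
  # the parent of the last entry has the next lower block number
  list.add HeaderStub(number: list[^1].number - 1)

growWithParent(staged)
echo staged   # (number: 102), (number: 101), (number: 100)
```
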
andri lim 38c58c4feb
Implement EIP-2935: Serve historical block hashes from state (#2606)
* Implement EIP-2935: Serve historical block hashes from state

* Fix EIP-2935 in t8n
2024-09-10 09:52:03 +00:00
andri lim 5464f8e5f1
Fix EIP-2537: Precompile for BLS12-381 curve operations (#2603)
* Fix EIP-2537: Precompile for BLS12-381 curve operations

* Update test vectors
2024-09-10 06:56:08 +00:00
tersec 1b173d420d
small cleanups (#2598)
* small cleanups

* stop hiding ConvFromXtoItselfNotNeeded hints

* lowmem optimization flag is no-op
2024-09-10 05:24:45 +00:00
Jordan Hrycaj 71e466d173
Block header download beacon to era1 (#2601)
* Block header download starting at Beacon down to Era1

details:
  The header download implementation is intended to be extended into a
  full sync facility.

  Downloaded block headers are stored in a `CoreDb` table. Later on they
  should be fetched, complemented by a block body, executed/imported,
  and deleted from the table.

  The Era1 repository may be partial or missing. Era1 headers are neither
  downloaded nor stored on the `CoreDb` table.

  Headers are downloaded top down (largest block number first) by one
  peer using the hash of the block header. Other peers fetch headers
  opportunistically using block numbers.

  Observed download times for 14m `MainNet` headers vary between 30min
  and 1h (Era1 size truncated to 66m blocks), full download 52min
  (anecdotal). The number of peers downloading concurrently is crucial
  here.

* Activate `flare` by command line option

* Fix copyright year
2024-09-09 09:12:56 +00:00
Jacek Sieka 0a8986bc77
avoid copying data when merging save points (#2584)
Saving both memory and processing, we can move entries from one
savepoint to another, especially when the target is empty as it often is
during transaction processing
2024-09-06 22:45:29 +02:00
Jacek Sieka d39c589ec3
lru cache updates (#2590)
* replace rocksdb row cache with larger rdb lru caches - these serve the
same purpose but are more efficient because they skip serialization,
locking and rocksdb layering
* don't append fresh items to cache - this has the effect of evicting
the existing items and replacing them with low-value entries that might
never be read - during write-heavy periods of processing, the
newly-added entries were evicted during the store loop
* allow tuning rdb lru size at runtime
* add (hidden) option to print lru stats at exit (replacing the
compile-time flag)

pre:
```
INF 2024-09-03 15:07:01.136+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.675 tps=1870.397 mgps=176.819 avgBps=10.288
avgTps=1574.889 avgMGps=155.952 elapsed=19m26s458ms
```

post:
```
INF 2024-09-03 13:54:26.730+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.637 tps=1864.384 mgps=176.250 avgBps=11.202
avgTps=1714.920 avgMGps=169.818 elapsed=17m51s211ms
```

9%:ish import perf improvement on similar mem usage :)
2024-09-05 11:18:32 +02:00
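
The second bullet above describes a read-through policy; a minimal Nim sketch using a plain table as a stand-in for both rocksdb and the LRU (no eviction shown):

```
import std/tables

type StoreStub = object
  backing: Table[string, string]   # stand-in for rocksdb
  cache: Table[string, string]     # stand-in for the rdb lru cache

proc put(s: var StoreStub, k, v: string) =
  # writes go to the store only, so they cannot evict read-hot entries
  s.backing[k] = v

proc get(s: var StoreStub, k: string): string =
  if k in s.cache:
    return s.cache[k]              # cache hit
  result = s.backing.getOrDefault(k)
  s.cache[k] = result              # cache is populated on read

var s: StoreStub
s.put("a", "1")
echo s.get("a")   # miss: fills the cache
echo s.get("a")   # hit: served from the cache
```
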
Jordan Hrycaj 3c6400673d
Coredb fix kvt save only fringe condition (#2592)
* Cosmetics, spelling, etc.

* Aristo: make sure that a save cycle always commits even when empty

why:
  If `Kvt` is tied to the `Aristo` DB save cycle, then this save cycle
  must also be committed even when there is no data to save for `Aristo`.

  Otherwise this will lead to excessive core memory use under a fringe
  condition where Eth headers (or blocks) are downloaded while syncing
  and not really stored on disk.

* CoreDb: Correct persistent save mode

why:
  Saving `Kvt` first is seen as a harbinger (or canary) for `Aristo` as
  both run in sync. If `Kvt` succeeds in saving first, then `Aristo` must
  succeed next. Anything else is a defect.
2024-09-04 13:48:38 +00:00
andri lim 4d9e288340
Wiring ForkedChainRef to other components (#2423)
* Wiring ForkedChainRef to other components

- Disable majority of hive simulators
- Only enable pyspec_sim for the moment
- The pyspec_sim is using a smaller RPC service wired to ForkedChainRef
- The RPC service will gradually grow

* Addressing PR review

* Fix test_beacon/setup_env

* Enable consensus_sim (#2441)

* Enable consensus_sim

* Remove isFile check

* Enable Engine API jwt auth tests and exchange cap tests

* Enable engine api in build_sim.sh

* Wire ForkedChainRef to Engine API newPayload

* Wire Engine API getBodies to ForkedChainRef

* Wire Engine API api_forkchoice to ForkedChainRef

* Wire more RPC methods to ForkedChainRef

* Implement eth_syncing

* Implement eth_call and eth_getlogs

* TxPool: simplify smartHead

* Fix smartHead usage

* Fix txpool headDiff

* Remove hasBlockHeader and use headerExists

* Addressing review
2024-09-04 09:54:54 +00:00
Jacek Sieka 35cc78c86d
add metrics for rdb lru cache (#2586)
This is a first step towards measuring the efficiency of the LRU caches
over time - metrics can be collected during import or when running
regularly.

Since `nim-metrics` carries some overhead for its default way of
reporting metrics, this PR implements a custom collector over atomic
counters, given that this is one of the hottest spots in the block
processing pipeline.

Using a compile-time flag, the same metrics can be printed on exit which
is useful when comparing different strategies for caching - here's a
recent run over blocks 16000001-1616384 - this is a good candidate to
expose in a better way in the future, maybe:

```
   state    vtype       miss        hit      total hitrate
 Account     Leaf    4909417    4466215    9375632  47.64%
 Account   Branch   20742574   72015123   92757697  77.64%
   World     Leaf     940483    1140946    2081429  54.82%
   World   Branch    8224151  131496580  139720731  94.11%
     all      all   34816625  209118864  243935489  85.73%
```
2024-09-02 17:34:10 +02:00
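
A hedged sketch of the custom-collector idea from the commit above (not the nim-metrics code): bump plain atomic counters in the hot path and only format them when reporting or at exit:

```
import std/[atomics, strformat]

var hits, misses: Atomic[int]

proc recordLookup(found: bool) =
  # hot path: a single atomic increment, no locking or string work
  if found:
    discard hits.fetchAdd(1)
  else:
    discard misses.fetchAdd(1)

proc report() =
  let h = hits.load()
  let m = misses.load()
  let total = h + m
  let rate = if total > 0: h.float / total.float * 100.0 else: 0.0
  echo &"miss={m} hit={h} total={total} hitrate={rate:.2f}%"

recordLookup(true)
recordLookup(false)
report()
```
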
Jacek Sieka ef1bab0802
avoid some trivial memory allocations (#2587)
* pre-allocate `blobify` data and remove redundant error handling
(cannot fail on correct data)
* use threadvar for temporary storage when decoding rdb, avoiding
closure env
* speed up database walkers by avoiding many temporaries

~5% perf improvement on block import, 100x on database iteration (useful
for building analysis tooling)
2024-09-02 16:03:10 +02:00
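
The second bullet above is a buffer-reuse pattern; a small Nim sketch with invented names (the real rdb decoding code differs):

```
var scratch {.threadvar.}: seq[byte]   # one buffer per thread, reused

proc decodeRecord(payload: openArray[byte]): int =
  scratch.setLen(0)          # keep the previously grown capacity
  scratch.add payload        # stand-in for the actual decoding work
  scratch.len                # no closure environment, no fresh allocation

echo decodeRecord([1'u8, 2, 3])
echo decodeRecord([4'u8, 5])
```
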
Jordan Hrycaj a25ea63dec
Revert lazy implementation (#2585) 2024-09-02 10:34:42 +00:00
Jacek Sieka 84a72c8658
Use zstd compression in bottommost layer (#2582)
Tested up to block ~14m, zstd uses ~12% less space which seems to result
in a small:ish (2-4%) performance improvement on block import speed -
this seems like a better baseline for more extensive testing in the
future.

Pre: 57383308 kb
Post: 50831236 kb
2024-08-30 17:32:13 +02:00
Jordan Hrycaj 42a08cfba9
Coredb and sync maintenance update (#2583)
* bump metrics

* Remove cruft

* Cosmetics, update some logging, noise control

* Renamed `CoreDb` function `hasKey` => `hasKeyRc` and provided `hasKey`

why:
  Currently, `hasKey` returns a `Result[]` rather than a `bool` which
  is what one would expect from a function prototype of this name.

  This was a bit of an annoyance and cost unnecessary attention.
2024-08-30 11:18:36 +00:00
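
A sketch of the naming split described above, using the `results` package the codebase relies on; the `DbStub` type and signatures are invented for illustration:

```
import results

type DbStub = object
  keys: seq[string]

proc hasKeyRc(db: DbStub, key: string): Result[bool, string] =
  # the Rc variant keeps the error information
  if key.len == 0:
    return err("empty key")
  ok(key in db.keys)

proc hasKey(db: DbStub, key: string): bool =
  # the plain bool is what a `hasKey` prototype suggests
  db.hasKeyRc(key).get(false)

let db = DbStub(keys: @["alpha"])
echo db.hasKey("alpha")   # true
echo db.hasKey("")        # false, the error is swallowed by the wrapper
```
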
Jacek Sieka 8857fccb44
create per-fork opcode dispatcher (#2579)
In the current VM opcode dispatcher, a two-level case statement is
generated that first matches the opcode and then uses another nested
case statement to select the actual implementation based on which fork
it is, causing the dispatcher to grow by `O(opcodes) * O(forks)`.

The fork does not change between instructions, causing significant
inefficiency for this approach - not only because it repeats the fork
lookup but also because of code size bloat and missed optimizations.

A second source of inefficiency in dispatching is the tracer code which
in the vast majority of cases is disabled but nevertheless sees multiple
conditionals being evaluated for each instruction only to remain
disabled throughout execution.

This PR rewrites the opcode dispatcher macro to generate a separate
dispatcher for each fork and tracer setting and goes on to pick the
right one at the start of the computation.

This has many advantages:

* much smaller dispatcher
* easier to compile
* better inlining
* fewer pointlessly repeated instructions
* simplified macro (!)
* slow "low-compiler-memory" dispatcher code can be removed

Net block import improvement at about 4-6% depending on the contract -
synthetic EVM benchmarks would most likely show an even better result.
2024-08-28 10:20:36 +02:00
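
A hand-written Nim sketch of the dispatcher-selection idea (the real code generates the per-fork dispatchers with a macro; the opcode set and "implementations" below are toy stand-ins):

```
type
  Fork = enum Frontier, Shanghai
  Op = enum Stop, Add, Push0
  VmStub = object
    stack: seq[int]

# one flat dispatcher per fork: no nested fork lookup per instruction
proc dispatchFrontier(vm: var VmStub, op: Op) =
  case op
  of Stop: discard
  of Add: vm.stack.add 1
  of Push0: raise newException(ValueError, "PUSH0 not available in Frontier")

proc dispatchShanghai(vm: var VmStub, op: Op) =
  case op
  of Stop: discard
  of Add: vm.stack.add 1
  of Push0: vm.stack.add 0

proc selectDispatcher(fork: Fork): proc(vm: var VmStub, op: Op) =
  # chosen once at the start of the computation, not per instruction
  case fork
  of Frontier: result = dispatchFrontier
  of Shanghai: result = dispatchShanghai

var vm: VmStub
let run = selectDispatcher(Shanghai)
for op in [Add, Push0, Stop]:
  run(vm, op)
echo vm.stack   # @[1, 0]
```
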
web3-developer 8bf581e72d
Cleanup unused exp_getProofsByBlockNumber endpoint (#2577)
* Cleanup unused exp_getProofsByBlockNumber endpoint.
2024-08-23 22:39:33 +08:00
Jacek Sieka dbabe7e0a7
import: reduce stack usage (#2575)
Because EthBlock is quite large, the stack usage that results from the
multiple copies (temporary and not) present in the import command is
larger than it should be - this PR moves some of that data to a closure
environment allocated once per EthBlock - a larger restructuring of the
code is due but in the meantime, this simple change speeds up garbage
collection a little bit.
2024-08-22 10:06:45 +02:00
Jacek Sieka d72a73de8b
avoid digest when loading era block (#2572)
Computing the digest is unnecessary but takes a little bit of time -
remove computation and reduce mem usage slightly when loading era blocks
2024-08-20 15:23:14 +02:00
Jordan Hrycaj 4db9c5c2d5
Small updates and fixes for rlpx suite (#2571)
* Remove redundant `eth/68` message and clean up docu

details:
  There is only eth/68 available at the moment

* Allow to turn on chronicles line number logging in `Makefile`

* Accept (and forget) tx hashes announcements

why:
  Does no harm to just ignore it at the moment

* Bump nim-eth (rlp fix)
2024-08-19 14:00:10 +00:00