4409 Commits

andri lim
b4226b66e4
Remove chain_desc.nim and ChainRef usage from hive simulator (#3105) 2025-02-25 15:18:44 +07:00
bhartnett
bd473f95f4
Encode correct custom payload in response to basic radius ping request. (#3106) 2025-02-25 08:22:57 +01:00
tersec
42d25daaeb
rm nim-graphql submodule (#3103) 2025-02-25 09:47:31 +07:00
kdeme
21be015031
Remove SSZ Union usage in BlockHeaderWithProof type (#3019)
* Remove SSZ Union usage in BlockHeaderWithProof type

Remove SSZ Union in BlockHeaderWithProof type by making the proof
an SSZ encoded ByteList. The right type for the proof can be
selected at the decoding step by first looking into the header
for the timestamp and selecting the right type based on the
hardfork the block is in.

* Add content db migration function to fcli_db tool

* Update test vector repo
2025-02-24 18:09:05 +01:00
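The decode-time selection described in this commit can be sketched as follows. This is a Python stand-in, not the actual Fluffy/Nim code: the fork boundaries and proof type names here are hypothetical placeholders, and only the shape of the logic (timestamp lookup replacing an SSZ Union tag) reflects the commit.

```python
# Minimal sketch of decode-time proof selection, assuming hypothetical fork
# timestamps and proof names; the real Fluffy types and boundaries differ.
MERGE_TS = 1_000_000     # hypothetical fork boundary
SHANGHAI_TS = 2_000_000  # hypothetical fork boundary

def decode_accumulator_proof(raw: bytes) -> str:
    return "AccumulatorProof"            # stub decoder

def decode_historical_roots_proof(raw: bytes) -> str:
    return "HistoricalRootsProof"        # stub decoder

def decode_historical_summaries_proof(raw: bytes) -> str:
    return "HistoricalSummariesProof"    # stub decoder

def select_proof(header_timestamp: int, proof_bytes: bytes) -> str:
    # The proof travels as an opaque SSZ-encoded ByteList; the decoder is
    # picked from the header's timestamp instead of an SSZ Union tag.
    if header_timestamp < MERGE_TS:
        return decode_accumulator_proof(proof_bytes)
    if header_timestamp < SHANGHAI_TS:
        return decode_historical_roots_proof(proof_bytes)
    return decode_historical_summaries_proof(proof_bytes)
```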
andri lim
aa4770d337
Fix finalizedBlockHash validation in engine fCU (#3089)
* Fix finalizedBlockHash validation in engine fCU

* restore fcu order
2025-02-20 21:59:16 +00:00
Jordan Hrycaj
38413d25ac
Beacon sync recalibrate blocks queue (#3099)
* Remove obsolete header cache

why:
  Was fall back for the case that the DB table was inaccessible before
  `FC` module reorg.

* Add the number of unused connected peers to metric

* Update docu, add Grafana example

why:
  Provides useful settings, e.g. for memory debugging

* Re-calibrate blocks queue for import

why:
  Old queue setup provided a staging area which was much too large
  consuming too much idle memory. Also the command-line re-calibrating
  for debugging was much too complicated.

  And the naming for the old setup was wrong: there is no max queue
  size. Rather, there is an HWM (high-water mark): filling the queue stops once it is reached.

  The currently tested size allows for 1.5k blocks on the queue.

* Rename hidden command-line option for debug/re-calibrating blocks queue
2025-02-20 18:29:35 +00:00
andri lim
c82fc13836
Merge KvtDbRef/AristoDbRef with their BackendRef (#3088)
* Merge KvtDbRef/AristoDbRef with their BackendRef

* Fix aristo memory_only constructor

* Remove aristo_persist.nim
2025-02-19 10:05:11 +07:00
andri lim
bc1a9dd2c9
Fix data dir id not persisted immediately (#3090) 2025-02-19 10:04:36 +07:00
Jacek Sieka
03010dc558
Reduce ChainRef usage (#3093)
* Reduce ChainRef usage

It's now only used in the hive node, from which it probably also should
be removed

* lint
2025-02-19 10:04:22 +07:00
kdeme
a86df054b2
portal_bridge: default latest false + no ephemeral headers gossip (#3094) 2025-02-18 20:04:08 +01:00
kdeme
fccaafff49
Bump nim-ssz-serialization submodule (#3091) 2025-02-18 14:23:15 +00:00
Advaita Saha
0528c843be
wire in forkedchain to p2p (#3082)
* wire in forkedchain to p2p

* cleanup code not used

* add receipts capability

* suggested changes
2025-02-18 16:07:28 +07:00
andri lim
a97312064e
Merge CoreDbRef with CoreDbCtxRef (#3086) 2025-02-18 13:52:43 +07:00
andri lim
d346759008
Remove Coredb abstraction (#3084)
* Remove Coredb abstraction

* lint
2025-02-18 09:04:18 +07:00
Jacek Sieka
3732b3f95e
fix level reporting (#3085)
Oops, level 0 was always used which needlessly increases mem usage -
comes with an assortment of simplifications
2025-02-18 08:01:44 +07:00
Miran
816ce73a2a
update Ubuntu version: 20.04 is EOL (#3081)
* update Ubuntu version: 20.04 is EOL

* make it consistent

* update the copyright year

* specify Ubuntu 22.04
2025-02-17 14:34:14 +00:00
bhartnett
296b319f9f
Fluffy: Portal subnetwork peer ban list (#3007) 2025-02-17 15:12:24 +01:00
andri lim
a1a9c6b027
Merge AristoTxRef/KvtTxRef with their LayerRef (#3078)
* Merge AristoTxRef/KvtTxRef with their LayerRef

also clean up unused files

* rm unused test_rocksdb

* Fix tests

* lint
2025-02-17 10:30:43 +07:00
Jacek Sieka
caca11b30b
Simplify txFrame protocol, improve persist performance (#3077)
* Simplify txFrame protocol, improve persist performance

To prepare forked-layers for further surgery to avoid the nesting tax,
the commit/rollback style of interacting must first be adjusted, since
it does not provide a point in time where the frame is "done" and goes
from being actively written to, to simply waiting to be persisted or
discarded.

A collateral benefit of this change is that the scheme removes some
complexity from the process by moving the "last saved block number" into
txframe along with the actual state changes thus reducing the risk that
they go "out of sync" and removing the "commit" consolidation
responsibility from ForkedChain.

* commit/rollback become checkpoint/dispose - since these are pure
in-memory constructs, there's less error handling and there's no real
"rollback" involved - dispose better implies that the instance cannot be
used and we can more aggressively clear the memory it uses
* simplified block number handling that moves to become part of txFrame
just like the data that the block number references
* avoid reparenting step by replacing the base instead of keeping a
singleton instance
* persist builds the set of changes from the bottom which helps avoid
moving changes in the top layers through each ancestor level of the
frame stack
* when using an in-memory database in tests, allow the instance to be
passed around to enable testing persist and reload logic
2025-02-17 01:51:56 +00:00
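The checkpoint/dispose lifecycle from this commit can be illustrated with a toy frame stack. This is a dict-based Python sketch under assumed names, not the real txFrame API; it only shows the shape of the protocol: writes land in the top frame, `checkpoint()` marks the frame "done", and `dispose()` aggressively releases its memory.

```python
# Toy sketch of the checkpoint/dispose frame protocol (names illustrative).
class Frame:
    def __init__(self, parent=None):
        self.parent = parent
        self.changes = {}   # this frame's in-memory change layer
        self.done = False

    def put(self, key, value):
        assert not self.done, "frame already checkpointed"
        self.changes[key] = value

    def get(self, key):
        # Walk the parent chain; the newest change wins.
        frame = self
        while frame is not None:
            if key in frame.changes:
                return frame.changes[key]
            frame = frame.parent
        return None

    def checkpoint(self):
        # Pure in-memory construct: no error handling needed, the frame
        # simply transitions from "actively written" to "waiting".
        self.done = True

    def dispose(self):
        # The frame must not be used again, so its memory can be cleared.
        self.changes = None
        self.parent = None
```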
tersec
c8e6247a16
rm graphql (EIP-1767) server (#3080) 2025-02-17 07:48:47 +07:00
kdeme
c0e329d768
Add log message on content query failure in lookup + refactor (#3079)
* Add log message on content query failure in lookup

* Refactor response handling Portal wire
2025-02-16 20:08:28 +01:00
andri lim
c300b41c07
Add engine_getBlobsV1 implementation (#3071)
* Add engine_getBlobsV1 implementation

* Addressing review
2025-02-15 12:18:07 +00:00
Jacek Sieka
45ec6e7050
Use unittest2 test runner (#3073)
* Use unittest2 test runner

Since upgrading to unittest2, the test runner prints the command line to
re-run a failed test - this however relies on actually using the
unittest2 command line runner.

Previously, test files were assigned numbers - with the unittest2
runner, tests are run using suite/category names instead, like so:

```
# run the Genesis suite
build/all_tests "Genesis::"
# run all tests with "blsMapG1" in the name
build/all_tests "blsMapG1*"
# run tests verbosely
build/all_tests -v
```

A reasonable follow-up here would be to review the suite names to make
them easier to run :)

* lint

* easier-to-compare test order

* bump unittest2 (also the repo)
2025-02-15 14:08:50 +07:00
kdeme
40722cdec9
Bump nim-eth submodule (#3075) 2025-02-14 21:37:59 +00:00
kdeme
fc4969d2ee
Bump nimbus-eth2 and required nim-kzg4844 (#3074) 2025-02-14 22:00:16 +01:00
Jacek Sieka
b6584153ff
add custom hash for RootedVertexID (#3070)
* add custom `hash` for `RootedVertexID`

There's no benefit hashing `root` since `vid` is already unique and the
"default" hash is not free - this trivially brings a small perf boost to
one of the key lookup tables in aristo_layers.

* lint
2025-02-14 14:50:46 +00:00
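The trick in this commit translates directly to any language: when one field of a composite key is already unique, hashing the other fields buys no extra distribution and costs cycles on every lookup. A hedged Python stand-in (the real type lives in aristo and is a Nim `(root, vid)` pair):

```python
# Sketch: hash only the unique field of a composite key (illustrative).
class RootedVertexID:
    __slots__ = ("root", "vid")

    def __init__(self, root, vid):
        self.root = root
        self.vid = vid

    def __eq__(self, other):
        return (self.root, self.vid) == (other.root, other.vid)

    def __hash__(self):
        # vid is already unique across the table, so hashing root adds
        # no entropy -- equality still compares both fields.
        return hash(self.vid)
```

Keys with equal `vid` but different `root` collide in the hash table, which is harmless: the table resolves collisions via `__eq__`.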
Jordan Hrycaj
e666eb52b0
Fix fringe case where the final checkpoint() must not be applied (#3072)
* Fix fringe case where the final `checkpoint()` must not be applied

why
  If there are no `era` or `era1` files to import from,
  then the internal state will not be in sync with the DB state. This
  would lead to overwriting the saved state block number with something
  fancy. As a consequence the database becomes unusable for the next
  process which will eventually fail with a state root mismatch.

* Update comment
2025-02-14 12:51:56 +00:00
Jacek Sieka
42bb640443
Simplify shared rocksdb instance / write batch handling (#3063)
By introducing the "shared rocksdb instance" concept to the backend, we
can remove the "piggybacking" mode, thus reducing the complexity of
database initialisation and opening the possibility of extending how
write batching works across kvt/aristo.

The change makes explicit the hidden shared state that was previously
hiding in closures and provides the first step towards simplifying the
"commit/persist" interface of coredb, preparing it for optimizations to
reduce the "layering tax" that `forked-layers` introduced.
2025-02-14 09:40:22 +01:00
andri lim
bd65a019c7
Bump nim-web3 to e9640d65eca5618291438bf6e98f6ea21f4c1d03 (#3069) 2025-02-14 13:59:39 +07:00
andri lim
c08e83a9a5
Add Prague fork timestamp for Holesky and Sepolia (#3068) 2025-02-14 01:47:20 +00:00
tersec
b417dd1986
rm engine_exchangeTransitionConfigurationV1 support (#3067) 2025-02-14 08:10:47 +07:00
bhartnett
f513e1dc53
Update Nimbus EVM code to use the latest nim-evmc which supports EVMC v12.1.0 (#3065)
* Update Nimbus EVM code to use the latest nim-evmc which is now on EVMC v12.1.0

* Fix copyright.

* Fix tests.

* Update to use FkLatest.

* Fix copyright and update test helper.
2025-02-14 08:06:54 +07:00
andri lim
67dbe817a9
Add Sepolia depositContractAddress (#3059) 2025-02-12 12:53:01 +07:00
pmmiranda
411a3cadfa
Renamed 'nimbus' directory and its references to 'execution_chain' (#3052)
* renamed nimbus folder to execution_chain

* Renamed "nimbus" references to "execution_chain"

* fixed wrongly changed http reference

* delete snap types file given that it was deleted before this PR merge

* missing 'execution_chain' replacement

---------

Co-authored-by: pmmiranda <pedro.miranda@nimbus.team>
2025-02-11 22:28:42 +00:00
Advaita Saha
02014b382d
Speedup eth_getLogs (#3062)
* fetch txHash from txRecord + reduce blockHash call

* minor fixes
2025-02-11 13:34:24 +00:00
Bhaskar Metiya
577e355949
Fix getPayloadBodiesByRangeV1 fetching too many payloads (#3057) 2025-02-11 04:17:21 +00:00
bhartnett
f033a40482
Build nimbus evmc shared library and fix issue to enable loading (#3050) 2025-02-10 10:53:31 +08:00
Jordan Hrycaj
67b8dd7fdc
Beacon sync bookkeeping update and bug fix (#3051)
* Update unprocessed blocks bookkeeping avoiding race condition

details:
  Instead of keeping track in `borrowed` of the sum of a set of blocks
  fetched from the `unprocessed` list, the particular ranges details are
  stored. When committing a particular range, the range itself (rather
  than its length) is removed from the `borrowed` data.

why:
  The function `blocksUnprocAmend()` may produce a race condition, so it
  was re-implemented as `blocksUnprocAppend()` considering range overlaps
  with borrowed.

* Clean up/re-org `blocks_unproc` module

why:
  The `borrowed` data structure can be fully maintained with the
  `blocksUnprocXxx()` functions.

* Update/simplify unprocessed headers bookkeeping (similar as for blocks)

* Removing stashed headers right after successfully assembled blocks

why:
  Previously these were deleted only after importing blocks, although they
  are not needed anymore once the blocks have been assembled using bodies
  fetched over the ethXX network.

* Always curb the blocks to the initialised length

why:
  Some error exit directives missed some cleanup, leaving the blocks
  list with empty/stale block bodies.

* Enter reorg-mode right away after block import error

why:
  There is not much one can do. Typically, this type of error is due
  to a switch to a different canonical chain. When this happens, the
  block batch is expected to be relatively short as the cause for a
  chain switch is an RPC instruction. This in turn is effective if some
  of the blocks on the `FC` database are maintained by the `CL`.
2025-02-07 10:43:00 +00:00
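The switch from sum-based to range-based `borrowed` bookkeeping described above can be sketched like this. Names mirror the commit's description, not the actual `blocksUnprocXxx()` API; ranges are half-open `(start, end)` pairs by assumption.

```python
# Sketch: track borrowed block ranges themselves, not just their total length.
class UnprocBlocks:
    def __init__(self):
        self.borrowed = []  # list of (start, end) ranges currently fetched

    def fetch(self, start, end):
        # Record the exact range instead of adding its length to a sum.
        self.borrowed.append((start, end))

    def commit(self, start, end):
        # Removing the range itself (rather than subtracting its length)
        # makes a double commit or an overlap an immediate, visible error
        # instead of a silently wrong counter.
        self.borrowed.remove((start, end))
```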
tersec
daebbfa18d
rm more Snap sync code (#3047)
* rm more Snap sync code

* clean up snap1
2025-02-07 08:45:03 +07:00
Advaita Saha
796c2f7cbf
wire in the syncstate callback to rpc (#3049) 2025-02-06 12:06:13 +00:00
Jacek Sieka
2961905a95
aristo: fork support via layers/txframes (#2960)
* aristo: fork support via layers/txframes

This change reorganises how the database is accessed: instead holding a
"current frame" in the database object, a dag of frames is created based
on the "base frame" held in `AristoDbRef` and all database access
happens through this frame, which can be thought of as a consistent
point-in-time snapshot of the database based on a particular fork of the
chain.

In the code, "frame", "transaction" and "layer" are used to denote more
or less the same thing: a dag of stacked changes backed by the on-disk
database.

Although this is not a requirement, in practice each frame holds the
change set of a single block - as such, the frame and its ancestors
leading up to the on-disk state represents the state of the database
after that block has been applied.

"committing" means merging the changes to its parent frame so that the
difference between them is lost and only the cumulative changes remain -
this facility enables frames to be combined arbitrarily wherever they
are in the dag.

In particular, it becomes possible to consolidate a set of changes near
the base of the dag and commit those to disk without having to re-do the
in-memory frames built on top of them - this is useful for "flattening"
a set of changes during a base update and sending those to storage
without having to perform a block replay on top.

Looking at abstractions, a side effect of this change is that the KVT
and Aristo are brought closer together by considering them to be part of
the "same" atomic transaction set - the way the code gets organised,
applying a block and saving it to the kvt happens in the same "logical"
frame - therefore, discarding the frame discards both the aristo and kvt
changes at the same time - likewise, they are persisted to disk together
- this makes reasoning about the database somewhat easier but has the
downside of increased memory usage, something that perhaps will need
addressing in the future.

Because the code reasons more strictly about frames and the state of the
persisted database, it also makes it more visible where ForkedChain
should be used and where it is still missing - in particular, frames
represent a single branch of history while forkedchain manages multiple
parallel forks - user-facing services such as the RPC should use the
latter, ie until it has been finalized, a getBlock request should
consider all forks and not just the blocks in the canonical head branch.

Another advantage of this approach is that `AristoDbRef` conceptually
becomes more simple - removing its tracking of the "current" transaction
stack simplifies reasoning about what can go wrong since this state now
has to be passed around in the form of `AristoTxRef` - as such, many of
the tests and facilities in the code that were dealing with "stack
inconsistency" are now structurally prevented from happening. The test
suite will need significant refactoring after this change.

Once this change has been merged, there are several follow-ups to do:

* there's no mechanism for keeping frames up to date as they get
committed or rolled back - TODO
* naming is confused - many names for the same thing for legacy reason
* forkedchain support is still missing in lots of code
* clean up redundant logic based on previous designs - in particular the
debug and introspection code no longer makes sense
* the way change sets are stored will probably need revisiting - because
it's a stack of changes where each frame must be interrogated to find an
on-disk value, with a base distance of 128 we'll at minimum have to
perform 128 frame lookups for *every* database interaction - regardless,
the "dag-like" nature will stay
* dispose and commit are poorly defined and perhaps redundant - in
theory, one could simply let the GC collect abandoned frames etc, though
it's likely an explicit mechanism will remain useful, so they stay for
now

More about the changes:

* `AristoDbRef` gains a `txRef` field (todo: rename) that "more or less"
corresponds to the old `balancer` field
* `AristoDbRef.stack` is gone - instead, there's a chain of
`AristoTxRef` objects that hold their respective "layer" which has the
actual changes
* No more reasoning about "top" and "stack" - instead, each
`AristoTxRef` can be a "head" that "more or less" corresponds to the old
single-history `top` notion and its stack
* `level` still represents "distance to base" - it's computed from the
parent chain instead of being stored
* one has to be careful not to use frames where forkedchain was intended
- layers are only for a single branch of history!

* fix layer vtop after rollback

* engine fix

* Fix test_txpool

* Fix test_rpc

* Fix copyright year

* fix simulator

* Fix copyright year

* Fix copyright year

* Fix tracer

* Fix infinite recursion bug

* Remove aristo and kvt empty files

* Fix copyright year

* Fix fc chain_kvt

* ForkedChain refactoring

* Fix merge master conflict

* Fix copyright year

* Reparent txFrame

* Fix test

* Fix txFrame reparent again

* Cleanup and fix test

* UpdateBase bugfix and fix test

* Fix newPayload bug discovered by hive

* Fix engine api fcu

* Clean up call template, chain_kvt, and txguid

* Fix copyright year

* work around base block loading issue

* Add test

* Fix updateHead bug

* Fix updateBase bug

* Change func commitBase to proc commitBase

* Touch up and fix debug mode crash

---------

Co-authored-by: jangko <jangko128@gmail.com>
2025-02-06 14:04:50 +07:00
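The dag-of-frames design described in this commit can be condensed into a toy model: each `AristoTxRef`-like frame holds one layer of changes, `level` (distance to base) is derived by walking the parent chain instead of being stored, and every read may interrogate each ancestor layer before falling back to disk, which is exactly the "128 frame lookups" cost flagged in the follow-ups. This is a hedged Python stand-in with illustrative names, not the aristo API.

```python
# Toy dag-of-frames: parent-linked change layers over an on-disk store.
class TxFrame:
    def __init__(self, parent=None):
        self.parent = parent
        self.layer = {}  # this frame's change set (a single block, typically)

    def level(self):
        # Distance to the base frame, computed rather than cached.
        n, frame = 0, self
        while frame.parent is not None:
            n += 1
            frame = frame.parent
        return n

    def get(self, key, disk):
        # A read walks every ancestor layer before hitting disk -- with a
        # base distance of 128, that is up to 128 lookups per interaction.
        frame = self
        while frame is not None:
            if key in frame.layer:
                return frame.layer[key]
            frame = frame.parent
        return disk.get(key)
```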
Advaita Saha
7ebede9e1e
devnet-6 fix bls_12_381 (#3048)
* fix mathematical misconceptions

* fix lint

* change proc to func
2025-02-05 22:53:28 +00:00
andri lim
db0a971416
devnet-6: Update EIP-7702 outdated implementation (#3046)
f27ddf2b0a/EIPS/eip-7702.md (behavior)
2025-02-04 15:05:35 +00:00
andri lim
825cc4c242
Fix tools helper and allow GST to parse new eest devnet-6 test vectors (#3045)
* Fix tools helper

* Allow GST to parse new eest devnet-6 test vectors
2025-02-04 17:06:34 +07:00
andri lim
d4266dc186
devnet-6: Update EIP-2935, 7002, 7251: Final system contract address (#3044)
- https://github.com/ethereum/EIPs/pull/9287
- https://github.com/ethereum/EIPs/pull/9288
- https://github.com/ethereum/EIPs/pull/9289
2025-02-03 16:33:36 +07:00
Jacek Sieka
838f8649c3
Bump RPC server buffer size (#3042)
* Bump RPC server buffer size

When large blocks arrive via RPC, we need to be able to read them from
the socket in reasonable time - at 4kb, we might need thousands of reads
before the JSON can be parsed - 256kb ensures that most blocks can be
read in a few loop iterations - the size doesn't greatly matter since we
only have one of these (unlike p2p connections)

* copyright
2025-01-30 21:03:00 +00:00
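The arithmetic behind the bump is easy to check. The 4 KiB and 256 KiB buffer sizes are from the commit; the 8 MiB payload is an assumed example of a large block arriving over RPC.

```python
# Reads needed to drain an assumed 8 MiB JSON payload at each buffer size.
payload = 8 * 1024 * 1024
reads_4k = -(-payload // (4 * 1024))      # ceiling division
reads_256k = -(-payload // (256 * 1024))  # ceiling division
```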
Jacek Sieka
5c0310a7b2
Avoid re-compacting historical data (#3041)
* Avoid re-compacting historical data

Per rocksdb docs, it will by default re-compact any data not touched for
30 days - this is obviously wasteful since our historical data rarely
changes and _hopefully_ can stay untouched once written (with a bit of
key sorting luck).

* copyright
2025-01-30 20:49:03 +01:00
Jacek Sieka
8690a03af7
Fix poor eth_getLogs performance (fixes #3033) (#3040)
* Fix poor eth_getLogs performance (fixes #3033)

* don't recompute txhash in inner log loop (!)
* filter logs before computing hashes

* copyright
2025-01-30 19:38:24 +00:00
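Both bullet points of the fix amount to reordering work in the log loop: filter first, then hash once per transaction rather than once per log. A hedged Python sketch under assumed shapes (the real code operates on receipts and RLP transactions, not these stand-ins):

```python
# Sketch of the eth_getLogs fix: filter logs before computing tx hashes,
# and compute each (expensive) tx hash at most once per transaction.
def get_logs(receipts, matches, tx_hash_of):
    out = []
    for tx, logs in receipts:
        hits = [log for log in logs if matches(log)]  # filter first
        if not hits:
            continue                  # no hash computed for filtered-out txs
        h = tx_hash_of(tx)            # hashed once per tx, not per log
        out.extend((h, log) for log in hits)
    return out
```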
bhartnett
e03a9c3172
Update nim-rocksdb to v9.10.0.0 (#3035) 2025-01-30 19:34:27 +01:00
Jordan Hrycaj
bc0620f6ca
Provide global sync progress, supersedes activation indicator (#3039) 2025-01-29 16:20:25 +00:00