* Cleaning up, removing cruft and debugging statements
* Make `aristo_delta` fluffy compatible
why:
A sub-module that uses `chronicles` must import all possible
modules used by a parent module that imports the sub-module.
* update TODO
* Extract sub-tree deletion functions into separate sub-modules
* Move/rename `aristo_desc.accLruSize` => `aristo_constants.ACC_LRU_SIZE`
* Lazily delete sub-trees
why:
This gives some control over the memory used to keep deleted vertices
in the cached layers. For larger sub-trees, keys and vertices might be
on the persistent backend to a large extent, so deleting them eagerly
would pull a lot of extra data from the backend into the cached layer.
For lazy deletion it is enough to remember sub-trees by a small set of
(at most 16) sub-roots to be processed when storing persistent data.
Marking the tree root as deleted immediately allows most of the code
base to work as before.
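A minimal sketch of the idea in plain Nim, with hypothetical names (the
actual aristo types and procs differ):
```nim
import std/sets

type
  VertexID = uint64
  Layer = object
    delRoots: HashSet[VertexID]   # at most 16 sub-roots pending deletion

proc deleteSubTree(layer: var Layer, root: VertexID) =
  # O(1): only the sub-root is recorded; nothing is pulled from the
  # backend, and the tree counts as deleted from here on
  layer.delRoots.incl root

proc persistDeletes(layer: var Layer) =
  # the real per-vertex deletion is deferred to the point where data
  # is stored on the persistent backend anyway
  for root in layer.delRoots:
    discard # walk the sub-tree on the backend and remove its vertices
  layer.delRoots.clear()
```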
* Comments and cosmetics
* No need to import all for `Aristo` here
* Kludge to make `chronicles` usage in sub-modules work with `fluffy`
why:
That `fluffy` would not run with any logging in `core_db` is a problem
I have known about for a while. Up to now, logging was only used for
debugging. With the current `Aristo` PR, there are cases where logging
might be wanted, but this works only if `chronicles` runs without the
`json[dynamic]` sinks.
So this should be re-visited.
* More of a kludge
* Update state network to use addressHash instead of address in contract trie and contract code content keys.
* Fix path calculation bug in getParent when working with extension nodes.
* Bump portal spec tests repo.
* Finish updating tests due to portal test vector changes.
* Update Fluffy state bridge to use addressHash.
* Update Fluffy book with correct commands for running portal hive tests.
* Use RPC batching to send offer requests and filter out duplicate offers.
* Lookup offers after gossip to check if gossip successful.
* Use multiple workers for gossiping offers.
* Update Fluffy state network logging.
* Use single RPC calls instead of batching.
* Update cli parameters.
* Fix bug in contract trie offer building.
* Create block offers queue and collect account preimages.
* Implement iterators to return account and storage proofs and bytecode from updatedCaches.
* Implement building offers from proofs.
* Refactor BlockDataRef type to only include required fields.
* Store block data in database.
* Improve state diff types.
* Implement start state backfill from specific block.
* Record last persisted block number in database.
* Persist account preimages in db.
* Apply state updates for DAO hard fork.
* Implement state gossip of block offers via portal JSON RPC.
- Add basic validation for LC bootstrap gossip, validating either
by the (single) trusted block root when not synced, or by comparing
with the header of the latest finality update when synced; see the
sketch after this list.
- Update portal_bridge beacon to also gossip bootstraps into the
network at the end of each epoch.
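A rough sketch of the two validation modes, with hypothetical names
(not the actual fluffy API):
```nim
type Root = array[32, byte]

proc validateBootstrap(bootstrapRoot: Root, synced: bool,
    trustedBlockRoot, latestFinalizedRoot: Root): bool =
  if not synced:
    # pre-sync: only the single configured trusted block root is accepted
    result = bootstrapRoot == trustedBlockRoot
  else:
    # synced: accept if it matches the header of the latest finality update
    result = bootstrapRoot == latestFinalizedRoot
```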
* Started state bridge.
* Implement call to fetch stateDiffs using trace_replayBlockTransactions.
* Convert JSON responses to stateDiff types.
* State updates working for first few blocks.
* Correctly building state for first 200K blocks.
* Add storage of code and cleanup.
* Start state bridge refactor.
* More cleanup and fixes.
* Use RocksDb as backend for state.
* Implement transactions.
* Build RocksDb dependency when building fluffy tools.
* Move code to world state helper.
* Implement producer and consumer queue.
* Cleanup exceptions.
* Improve logging.
* Add update caches to DatabaseRef backends.
The 3 proofs can be reworked to two proofs as we can use the
BeaconBlock directly instead of BeaconBlockHeader and
BeaconBlockBody. This is possible because the HTR of the
BeaconBlock is the same as that of the BeaconBlockHeader.
This saves 32 bytes, as an intermediate hash can be removed, but
more importantly it makes the structure and code cleaner and more
compact.
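The identity can be illustrated with a toy two-field container; this is
a simplified stand-in, not real SSZ Merkleization (which pads the chunk
list to a power of two):
```nim
import nimcrypto

proc merkleizeToy(fieldRoots: varargs[MDigest[256]]): MDigest[256] =
  # toy stand-in for SSZ Merkleization: combine the field roots
  var ctx: sha256
  ctx.init()
  for r in fieldRoots:
    ctx.update(r.data)
  ctx.finish()

let
  slotRoot = sha256.digest("slot")   # any non-body field root
  bodyRoot = sha256.digest("body")   # hash_tree_root(BeaconBlockBody)
  # BeaconBlock{..., body} Merkleizes body via its hash_tree_root ...
  blockRoot = merkleizeToy(slotRoot, bodyRoot)
  # ... which is exactly the body_root field stored in the header
  headerRoot = merkleizeToy(slotRoot, bodyRoot)

doAssert blockRoot == headerRoot
```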
- EpochAccumulator got renamed to EpochRecord
- MasterAccumulator is now HistoricalHashesAccumulator
- The List size for the accumulator got a different maximum which
also results in a different encoding and HTR
- Use --portal-rpc-url as the URL option to connect to the Portal
JSON-RPC interface, just like --web3-url for the EL JSON-RPC
interface.
- Improve logging to know better which call on which JSON-RPC
interface fails
* Bump nim-eth, nim-web3, nimbus-eth2
- Replace std.Option with results.Opt
- Field name changes
* More fixes
* Fix Portal stream async raises and portal testnet Opt usage
* Bump eth + nimbus-eth2 + more fixes related to eth_types changes
* Fix in utp test app and nimbus-eth2 bump
* Fix test_blockchain_json rebase conflict
* Fix EVMC block_timestamp conversion plus commentary
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Updates to Fluffy book for hive tests.
* Add support to disable state root checks for state content from the command line.
* Update portal_stateStore endpoint to support decoding offer and storing retrieval value.
* Started implementation of state endpoints.
* Add rpc calls and server stubs.
* Initial implementation of getAccountProof and getStorageProof.
* Refactor validation to use toAccount utils functions.
* Add state endpoints tests.
`initTable` is obsolete since nim 0.19 and can introduce significant
memory overhead while providing no benefit (since the table will be
grown to the default initial size on first use anyway).
In particular, aristo layers will not necessarily use all tables they
initialize, for example when many empty accounts are being created.
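For illustration (plain Nim, unrelated to the aristo code): a merely
declared table is already usable and costs nothing until first use:
```nim
import std/tables

var cache: Table[string, int]        # empty, no allocation yet
# var cache = initTable[string, int]()  # obsolete since nim 0.19; no benefit

cache["acc"] = 1                     # table grows on first use anyway
doAssert cache.len == 1
```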
This PR consolidates the split header-body sequences into a single EthBlock
sequence and cleans up the fallout from that which significantly reduces
block processing overhead during import thanks to less garbage collection
and fewer copies of things all around.
Notably, since the number of headers must always match the number of bodies,
we also get rid of a pointless degree of freedom that in the future could
introduce unnecessary bugs.
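Schematically, with stub types (nim-eth's real `EthBlock` carries more
fields):
```nim
type
  BlockHeader = object
    number: uint64
  BlockBody = object
    transactions: seq[string]
  EthBlock = object
    header: BlockHeader
    body: BlockBody

# before: two sequences whose lengths had to be kept in sync by hand
proc importSplit(headers: seq[BlockHeader], bodies: seq[BlockBody]) =
  doAssert headers.len == bodies.len   # the degree of freedom in question

# after: one sequence, the invariant holds by construction
proc importBlocks(blocks: seq[EthBlock]) =
  discard
```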
* only read header and body from era file
* avoid several unnecessary copies along the block processing path
* simplify signatures, cleaning up unused arguments and returns
* use `stew/assign2` in a few strategic places where the generated
nim assignment is slow and add a few `move` to work around poor
analysis in nim 1.6 (will need to be revisited for 2.0); see the
sketch after this list
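For reference, a small standalone example of the `stew/assign2` pattern;
the actual call sites are in the block processing code:
```nim
import stew/assign2

var dst: seq[byte]
let src = @[1'u8, 2, 3]

# `assign` uses fast bulk copies where nim 1.6's generated assignment
# would copy element by element
assign(dst, src)

# `move` transfers ownership instead of copying when the source is no
# longer needed afterwards
var tmp = @[4'u8, 5, 6]
let owned = move(tmp)
```
The block import benchmark below compares the baseline against the
contender build: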
```
stats-20240607_2223-a814aa0b.csv vs stats-20240608_0714-21c1d0a9.csv
block_number          bps_x     bps_y     tps_x     tps_y    bpsd    tpsd    timed
(498305, 713245]   1,540.52  1,809.73  2,361.58  2,775.34  17.63%  17.63%  -14.92%
(713245, 928185]     730.36    865.26  1,715.90  2,028.97  18.01%  18.01%  -15.21%
(928185, 1143126]    663.03    789.10  2,529.26  3,032.49  19.79%  19.79%  -16.28%
(1143126, 1358066]   393.46    508.05  2,152.50  2,777.58  29.13%  29.13%  -22.50%
(1358066, 1573007]   370.88    440.72  2,351.31  2,791.90  18.81%  18.81%  -15.80%
(1573007, 1787947]   283.65    335.11  2,068.93  2,441.37  17.60%  17.60%  -14.91%
(1787947, 2002888]   287.29    342.11  2,078.39  2,474.18  18.99%  18.99%  -15.91%
(2002888, 2217828]   293.38    343.16  2,208.83  2,584.77  17.16%  17.16%  -14.61%
(2217828, 2432769]   140.09    167.86  1,081.87  1,296.34  18.82%  18.82%  -15.80%
blocks: 1934464, baseline: 3h13m1s, contender: 2h43m47s
bpsd (mean): 19.55%
tpsd (mean): 19.55%
Time (total): -29m13s, -15.14%
```