Commit Graph

354 Commits

Jacek Sieka f294d1e086
Clear account cache after each block (#2411)
When processing long ranges of blocks, the account cache grows unbounded,
which causes huge memory spikes.

Here, we move the cache to a second-level cache after each block - the
second-level cache is cleared on the block after that, which creates
a simple LRU effect.

There's a small performance cost of course, though overall the freed-up
memory can now be reassigned to the rocksdb row cache which not only
makes up for the loss but overall leads to a performance increase.

The bump to 2GB of rocksdb row cache here needs more testing but is
slightly less than, and loosely based on, the savings from this PR and the
circular ref fix in #2408 - another way to phrase this is that it's
better to give rocksdb more breathing room than to let the memory sit
unused until circular ref collection happens ;)
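
A minimal sketch of the two-level scheme described above - hypothetical
names, not the actual ledger code: entries still in use are promoted back
to the first level, everything else is dropped one block later, giving a
coarse LRU.

```
import std/tables

type TwoLevelCache[K, V] = object
  young: Table[K, V]   # entries touched during the current block
  old: Table[K, V]     # dropped wholesale at the next block boundary

proc put[K, V](c: var TwoLevelCache[K, V], k: K, v: V) =
  c.young[k] = v

proc get[K, V](c: var TwoLevelCache[K, V], k: K, v: var V): bool =
  if c.young.hasKey(k):
    v = c.young[k]
    return true
  if c.old.hasKey(k):
    c.young[k] = c.old[k]   # promote entries still in use
    v = c.young[k]
    return true
  false

proc endBlock[K, V](c: var TwoLevelCache[K, V]) =
  # everything untouched for a whole block is discarded here
  c.old = move c.young      # `move` leaves `young` empty for the next block
```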
2024-06-25 07:30:32 +02:00
Jacek Sieka 9521582005
avoid closure environment for mpt methods (#2408)
An instance of `CoreDbMptRef` is created for and stored in every account
- when we are processing blocks and have many accounts in memory, this
closure environment takes up hundreds of MB of memory (around block 5M,
it is the 4th largest memory consumer!) - incidentally, this PR also
removes a circular reference in the setup that causes the
`AristoCodeDbMptRef` to linger in memory much longer than it
has to, which is the core reason why it takes up so much.

The real solution here is to remove the methods indirection entirely,
but this PR provides relief until that has been done.

Similar treatment is given to some of the other core api functions to
avoid circulars there too.
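
For illustration only - hypothetical types rather than the actual CoreDb
code - the difference between storing a per-instance closure, which
allocates a hidden environment and can pin its captures, and a plain proc
pointer that takes its context explicitly:

```
type
  Db = ref object
    name: string

  # per-account closure: every instance allocates a hidden environment
  # capturing `db`, which also keeps `db` alive as long as the account lives
  AccountWithClosure = object
    fetch: proc(key: string): string {.closure.}

  # shared plain proc: no per-account environment, context passed explicitly
  AccountPlain = object
    db: Db
    fetch: proc(db: Db, key: string): string {.nimcall.}

proc fetchImpl(db: Db, key: string): string =
  db.name & ":" & key

proc mkWithClosure(db: Db): AccountWithClosure =
  AccountWithClosure(fetch: proc(key: string): string = db.name & ":" & key)

proc mkPlain(db: Db): AccountPlain =
  AccountPlain(db: db, fetch: fetchImpl)
```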
2024-06-24 07:56:41 +02:00
Jacek Sieka 6b68ff92d3
Allocation-free nibbles buffer (#2406)
This buffer eliminates a large part of the allocations during MPT traversal,
reducing overall memory usage and GC pressure.

Ideally, we would use it throughout in the API instead of
`openArray[byte]` since the built-in length limit appropriately exposes
the natural 64-nibble depth constraint that `openArray` fails to
capture.
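
A minimal sketch of the idea - hypothetical layout, not the actual type: a
fixed-size array plus a length field holds up to 64 nibbles on the stack,
and the hard size limit encodes the depth constraint that `openArray[byte]`
cannot express.

```
type NibblesBuf = object
  # an MPT path is at most 32 bytes, i.e. 64 nibbles deep, so a fixed
  # array suffices - no heap allocation, no GC pressure
  limbs: array[64, byte]
  used: int

func len(n: NibblesBuf): int = n.used

func add(n: var NibblesBuf, nibble: byte) =
  doAssert n.used < 64, "an MPT path never exceeds 64 nibbles"
  n.limbs[n.used] = nibble
  inc n.used

func `[]`(n: NibblesBuf, i: int): byte = n.limbs[i]
```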
2024-06-22 22:33:37 +02:00
Jacek Sieka 768307d91d
Cache code and invalid jump destination tables (fixes #2268) (#2404)
It is common for many accounts to share the same code - at the database
level, code is stored by hash, meaning only one copy exists per unique
program, but when loaded in memory a copy is made for each account.

Further, every time we execute the code, it must be scanned for invalid
jump destinations, which slows down EVM execution.

Finally, the extcodesize call causes code to be loaded even if only the
size is needed.

This PR improves on all these points by introducing a shared
CodeBytesRef type whose code section is immutable and that can be shared
between accounts. Further, a dedicated `len` API call is added so that
the EXTCODESIZE opcode can operate without polluting the GC and code
cache, for cases where only the size is requested - rocksdb will in this
case cache the code itself in the row cache meaning that lookup of the
code itself remains fast when length is asked for first.

With 16k code entries, there's a 90% hit rate which goes up to 99%
during the 2.3M attack - the cache significantly lowers memory
consumption and execution time not only during this event but across the
board.
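
A rough sketch of the scheme - hypothetical names, with a string-keyed
table standing in for the real hash-keyed cache: bytecode is cached per
code hash and shared between accounts, while a separate size query leaves
the code cache untouched.

```
import std/tables

type CodeBytesRef = ref object
  bytes: seq[byte]          # immutable once created, safe to share

var codeCache: Table[string, CodeBytesRef]   # keyed by code hash

proc getCode(codeHash: string,
             loadFromDb: proc(h: string): seq[byte]): CodeBytesRef =
  # one shared copy per unique program, no matter how many accounts use it
  codeCache.withValue(codeHash, cached):
    return cached[]
  result = CodeBytesRef(bytes: loadFromDb(codeHash))
  codeCache[codeHash] = result

proc getCodeSize(codeHash: string,
                 loadSizeFromDb: proc(h: string): int): int =
  # EXTCODESIZE path: answer from the cache when the code is already there,
  # but don't pull the bytecode into the cache just to learn its length
  codeCache.withValue(codeHash, cached):
    return cached[].bytes.len
  loadSizeFromDb(codeHash)
```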
2024-06-21 09:44:10 +02:00
Jordan Hrycaj 081cb15493
Coredb maintenance (#2398)
* CoreDb: remove PHK tries

why:
  There is no general use anymore for an MPT with a pre-hashed key. It
  was used to resemble the `SecureHexaryTrie` logic from the legacy DB.

  The only place where this is needed is the `Ledger` which uses a
  distinct MPT version anyway (see `distinct_ledgers.nim`.)

* Rename `CoreDx*` -> `CoreDb*`

why:
  The naming `CoreDx*` was used to differentiate the new CoreDb API from
  the legacy API which had descriptors named `CoreDb*`.
2024-06-19 14:13:12 +00:00
Jordan Hrycaj e7be0d185c
Aristo uses pre classified tree types cont2 (#2397)
* Provide dedicated functions for fetching accounts and storage trees

why:
  Different prototypes for each class `account`, `generic` and
  `storage`.

* Remove `fetchPayload()` and other cruft from API, `aristo_fetch`, etc.

* Fix typos, debugging leftovers, comments
2024-06-19 12:40:00 +00:00
andri lim 0e5fd3ffc9
LedgerRef: stateOrVoid become stateEmptyOrVoid (#2394) 2024-06-19 14:14:36 +02:00
andri lim 5a39fc0d69
Remove unused dbkey (#2396) 2024-06-19 14:11:14 +02:00
Jacek Sieka 41cf81f80b
Fix dboptions init (#2391)
For the block cache to be shared between column families, the options
instance must be shared between the various column families being
created. This also ensures that there is only one source of truth for
configuration options instead of having two different sets depending on
how the tables were initialized.

This PR also removes the re-opening mechanism which can double startup
time - every time the database is opened, the log is replayed - a large
log file will take a long time to open.

Finally, several options are now correctly implemented as column family
options, including one that puts a hash index in the SST files.
2024-06-19 10:55:57 +02:00
Miran ea0d18424a
use Nim 2.0.6 (#2384)
* use Nim 2.0.6

* Fixes for nim 2.0.6

* Workaround nim 2.0 array indexing issue

* Remove excess gcsafe pragma

* Oops, fix recursive template

* Fix imports

* Fluffy nph linting

---------

Co-authored-by: jangko <jangko128@gmail.com>
Co-authored-by: tersec <tersec@users.noreply.github.com>
2024-06-19 01:27:54 +00:00
Jordan Hrycaj 8727307ef4
Aristo uses pre classified tree types cont1 (#2389)
* Provide dedicated functions for deleting accounts and storage trees

why:
  Storage trees are always linked to an account, so there is no need
  for an application to fiddle about (e.g. re-cycling, unlinking)
  storage tree vertex IDs.

* Remove `delete()` and other cruft from API, `aristo_delete`, etc.

* clean up delete functions

details:
  The delete implementations `deleteImpl()` and `delTreeImpl()` do not
  need to be super generic anymore as all the edge cases are covered by
  the specialised `deleteAccountPayload()`, `deleteGenericData()`, etc.

* Avoid unnecessary re-calculations of account keys

why:
  The function `registerAccountForUpdate()` extracted the storage ID
  (if any) and automatically marked the Merkle keys along the account
  path for re-hashing.

  This happened even if it was later detected that the account
  or the storage tree did not need to be updated.

  So the `registerAccountForUpdate()` function was split into a part
  which retrieved the storage ID, and another one which marked the
  Merkle keys for re-calculation to be applied only when needed.
2024-06-18 19:30:01 +00:00
Jordan Hrycaj 51f02090b8
Aristo uses pre classified tree types (#2385)
* Remove unused `merge*()` functions (for production)

details:
  Some functionality moved to test suite

* Make sure that only the `AccountData` leaf type is used on VertexID(1)

* clean up payload type

* Provide dedicated functions for merging accounts and storage trees

why:
  Storage trees are always linked to an account, so there is no need
  for an application to fiddle about (e.g. creating, re-cycling) with
  storage tree vertex IDs.

* CoreDb: Disable tracer functionality

why:
  Must be updated to accommodate new/changed `Aristo` functions.

* CoreDb: Use new `mergeXXX()` functions

why:
  Makes explicit vertex ID management obsolete for creating new
  storage trees.

* Remove `mergePayload()` and other cruft from API, `aristo_merge`, etc.

* clean up merge functions

details:
  The merge implementation `mergePayloadImpl()` does not need to be super
  generic anymore as all the edge cases are covered by the specialised
  functions `mergeAccountPayload()`, `mergeGenericData()`, and
  `mergeStorageData()`.

* No tracer available at the moment, so disable offending tests
2024-06-18 11:14:02 +00:00
Jacek Sieka 8926da02b6
Fix lowest-hanging fruit in VM (#2382)
* replace set with bitseq for code validity test
* remove unused code from CodeStream
* avoid unnecessary byte-by-byte copies
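
A toy version of the first point - not the actual CodeStream code: one bit
per byte of code replaces a set of positions as the valid-jump-destination
table, keeping it compact and cheap to query.

```
type JumpDests = object
  bits: seq[byte]           # one bit per byte of code

proc initJumpDests(codeLen: int): JumpDests =
  JumpDests(bits: newSeq[byte]((codeLen + 7) div 8))

proc mark(j: var JumpDests, pos: int) =
  # record `pos` as a valid JUMPDEST
  j.bits[pos shr 3] = j.bits[pos shr 3] or byte(1 shl (pos and 7))

proc isValidDest(j: JumpDests, pos: int): bool =
  pos shr 3 < j.bits.len and (j.bits[pos shr 3] and byte(1 shl (pos and 7))) != 0
```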
2024-06-18 07:55:35 +07:00
Jacek Sieka 9c6fd46a51
avoid computing state root just to know if storage is empty (#2380)
The state root computation here is one of the major hotspots in block
processing - in the cases the code only needs to know if it's empty or
not, it can be done a lot faster.

Adding a separate function for this looks fragile and should probably be
revisited.
2024-06-17 15:29:07 +02:00
Jacek Sieka 1fb658ff03
Remove hashify calls when forking (#2377)
This appears to no longer be needed and we want to delay hashing as much
as possible.
2024-06-17 14:18:50 +02:00
andri lim 69044dda60
Remove AccountStateDB (#2368)
* Remove AccountStateDB

AccountStateDB should no longer be used.
Its usage has been reduced to read-only operations.
Replace it with LedgerRef to reduce the maintenance burden.

* remove extra spaces

Co-authored-by: tersec <tersec@users.noreply.github.com>

---------

Co-authored-by: tersec <tersec@users.noreply.github.com>
2024-06-16 10:21:02 +07:00
Jacek Sieka af34f90fe4
fix `max_total_wal_size` which should be set on the DB (#2363) 2024-06-16 02:11:30 +00:00
Jacek Sieka 242bbf03fc
Light verification and storage mode for import (#2367)
When performing block import, we can batch state root verifications and
header checks, doing them only once per chunk of blocks, assuming that
the other blocks in the batch are valid by extension.

When we're not generating receipts, we can also skip per-transaction
state root computation pre-byzantium, which is what provides a ~20%
speedup in this PR, at least on those early blocks :)

We also stop storing transactions, receipts and uncles redundantly when
importing from era1 - there is no need to waste database storage on this
when we can load it from the era1 file (eventually).
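
The batching idea in miniature - stub types and names, not the actual
import code: only the last block of a chunk gets the expensive checks, the
earlier ones are assumed valid by extension.

```
type Blk = object
  number: int

proc execute(b: Blk) = discard           # stand-in for transaction processing
proc verifyStateRoot(b: Blk) = discard   # stand-in for the expensive root check
proc verifyHeader(b: Blk) = discard      # stand-in for full header validation

proc importChunk(blocks: seq[Blk], lightMode: bool) =
  for i, b in blocks:
    execute(b)
    # in light mode only the last block of the chunk gets the full check
    if not lightMode or i == blocks.high:
      verifyStateRoot(b)
      verifyHeader(b)
```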
2024-06-15 11:22:37 +02:00
Jacek Sieka 68f462e3e4
avoid state root lookup when computing linear history (#2362)
State lookups potentially trigger expensive re-hashings - this is the
first of several steps to remove the unnecessary ones from the general
flow of block processing

* avoid re-reading parent block header from database when it's already
in memory
2024-06-14 15:56:56 +02:00
Jordan Hrycaj debba5a620
Coredb related clean up and maint fixes (#2360)
* Fix initialiser

why:
  Possible crash (app profiling, tracer etc.)

* Update column family options processing

why:
  Same for kvt as for aristo

* Move `AristoDbDualRocks` backend type to the test suite

why:
  So it is not available for production

* Fix typos in API jump table

why:
  Used for tracing and app profiling only. Needed some update

* Purged CoreDb legacy API

why:
  Not needed anymore, was transitionary and disabled.

* Rename `flush` argument to `eradicate` in a DB close context

why:
  The word `eradicate` leaves no doubt what is meant

* Rename `stoFlush()` -> `stoDelete()`

* Rename `core_apps_newapi` -> `core_apps` (not so new anymore)
2024-06-14 11:19:48 +00:00
andri lim 5a18537450
Bump nim-eth, nim-web3, nimbus-eth2 (#2344)
* Bump nim-eth, nim-web3, nimbus-eth2

- Replace std.Option with results.Opt
- Fields name changes

* More fixes

* Fix Portal stream async raises and portal testnet Opt usage

* Bump eth + nimbus-eth2 + more fixes related to eth_types changes

* Fix in utp test app and nimbus-eth2 bump

* Fix test_blockchain_json rebase conflict

* Fix EVMC block_timestamp conversion plus commentary

---------

Co-authored-by: kdeme <kim.demey@gmail.com>
2024-06-14 14:31:08 +07:00
Jacek Sieka 189a20bbae
Avoid recomputing hashes when persisting data (#2350) 2024-06-14 07:10:00 +02:00
Jordan Hrycaj 5a5cc6295e
Triggered write event for kvt (#2351)
* bump rocksdb

* Rename `KVT` objects related to filters according to `Aristo` naming

details:
  filter* => delta*
  roFilter => balancer

* Compulsory error handling if `persistent()` fails

* Add return code to `reCentre()`

why:
  Might eventually fail if re-centring is blocked. Some logic will be
  added in subsequent patch sets.

* Add column families from earlier session to rocksdb in opening procedure

why:
  All previously used CFs must be declared when re-opening an existing
  database.

* Update `init()` and add rocksdb `reinit()` methods for changing parameters

why:
  Opening a set of column families (with different open options) must span
  at least the ones that are already on disk.

* Provide write-trigger-event interface into `Aristo` backend

why:
  This allows data from a guest application (think `KVT`) to be saved in
  sync with the write cycle, so that the guest and `Aristo` save
  atomically.

* Use `KVT` with new column family interface from `Aristo`

* Remove obsolete guest interface

* Implement `KVT` piggyback on `Aristo` backend

* CoreDb: Add separate `KVT`/`Aristo` backend mode for debugging

* Remove `rocks_db` import from `persist()` function

why:
  Some systems (in particular `fluffy` and friends) use the `Aristo` memory
  backend emulation and do not link against rocksdb when building the
  application. So this should fix that problem.
2024-06-13 18:15:11 +00:00
Jacek Sieka 5f44be1bcd
Remove rehashing in storage ledger init (#2340)
This looks like a debug leftover, but it causes a state root
recomputation which is slow
2024-06-12 21:13:53 +02:00
Jacek Sieka 54f793f946
Apply some basic rocksdb options (#2339)
These options, inspired by Nethermind and general internet wisdom, bring
the database size down to 2/3 without affecting throughput. In theory,
they should also bring down memory usage and/or make more efficient use
of whatever memory is already assigned to rocksdb but this needs
verification in a longer test at synced-mainnet sizes.

In the meantime, they make testing easier by removing some noise that
the profiler flags as bad, such as excessive SkipList access (countered
by bloom filters).
2024-06-12 14:52:27 +02:00
Jacek Sieka eb041abba7
avoid unnecessary memory allocations and lookups (#2334)
* use `withValue` instead of `hasKey` + `[]`
* avoid `@` et al
* parse database data inside `onData` instead of making seq then parsing
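
For the first point, a minimal illustration of why `withValue` is
preferable: it performs a single table lookup where `hasKey` followed by
`[]` performs two.

```
import std/tables

var balances = {"alice": 100, "bob": 50}.toTable

# two hash lookups: one for hasKey, one for []
if balances.hasKey("alice"):
  echo balances["alice"]

# one hash lookup: the value is exposed inside the template body
balances.withValue("alice", bal):
  echo bal[]
```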
2024-06-11 11:38:58 +02:00
Jordan Hrycaj a347291413
Aristo use rocksdb cf instead of key pfx (#2332)
* Use RocksDb column families instead of a prefixed single column

why:
  Better performance

* Use structural objects `VertexRef` and `HashKey` in LRU cache for RocksDb

why:
  Avoids repeated de/serialisation
2024-06-10 12:04:22 +00:00
Jacek Sieka f6be4bd0ec
avoid initTable (#2328)
`initTable` is obsolete since nim 0.19 and can introduce significant
memory overhead while providing no benefit (since the table will be
grown to the default initial size on first use anyway).

In particular, aristo layers will not necessarily use all tables they
initialize, for example when many empty accounts are being created.
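
A small illustration of the point (toy types only): a default-initialized
table is already usable and costs nothing until the first insertion.

```
import std/tables

type Layer = object
  accounts: Table[string, int]   # no initTable call needed since nim 0.19

var layer: Layer                 # zero cost until something is inserted
layer.accounts["0xabc"] = 1      # storage is allocated on first use
echo layer.accounts.len          # 1
```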
2024-06-10 11:05:30 +02:00
Jacek Sieka 359f7ada65
eth: avoid sink (#2331)
* eth: avoid sink

* bump

* fix extra transactions

from #2330
2024-06-10 14:16:22 +07:00
Jacek Sieka 0b32078c4b
Consolidate block type for block processing (#2325)
This PR consolidates the split header-body sequences into a single EthBlock
sequence and cleans up the fallout from that, which significantly reduces
block processing overhead during import thanks to less garbage collection
and fewer copies of things all around.

Notably, since the number of headers must always match the number of bodies,
we also get rid of a pointless degree of freedom that in the future could
introduce unnecessary bugs.

* only read header and body from era file
* avoid several unnecessary copies along the block processing way
* simplify signatures, cleaning up unused arguments and returns
* use `stew/assign2` in a few strategic places where the generated
  nim assignment is slow and add a few `move` to work around poor
  analysis in nim 1.6 (will need to be revisited for 2.0)

```
stats-20240607_2223-a814aa0b.csv vs stats-20240608_0714-21c1d0a9.csv
                       bps_x     bps_y     tps_x        tps_y    bpsd    tpsd    timed
block_number
(498305, 713245]    1,540.52  1,809.73  2,361.58  2775.340189  17.63%  17.63%  -14.92%
(713245, 928185]      730.36    865.26  1,715.90  2028.973852  18.01%  18.01%  -15.21%
(928185, 1143126]     663.03    789.10  2,529.26  3032.490771  19.79%  19.79%  -16.28%
(1143126, 1358066]    393.46    508.05  2,152.50  2777.578119  29.13%  29.13%  -22.50%
(1358066, 1573007]    370.88    440.72  2,351.31  2791.896052  18.81%  18.81%  -15.80%
(1573007, 1787947]    283.65    335.11  2,068.93  2441.373402  17.60%  17.60%  -14.91%
(1787947, 2002888]    287.29    342.11  2,078.39  2474.179448  18.99%  18.99%  -15.91%
(2002888, 2217828]    293.38    343.16  2,208.83   2584.77457  17.16%  17.16%  -14.61%
(2217828, 2432769]    140.09    167.86  1,081.87  1296.336926  18.82%  18.82%  -15.80%

blocks: 1934464, baseline: 3h13m1s, contender: 2h43m47s
bpsd (mean): 19.55%
tpsd (mean): 19.55%
Time (total): -29m13s, -15.14%
```
2024-06-09 16:32:20 +02:00
web3-developer db8c5b90bd
Cleanup stateless and block witness code. (#2295)
* Cleanup unneeded stateless and block witness code. Keeping MultiKeys, which is used in the eth_getProofsByBlockNumber RPC endpoint needed for the Fluffy state network bridge.

* Rename generateWitness flag to collectWitnessData to better describe what the flag does. We only collect the keys of the touched accounts and storage slots; no block witness generation is supported for now.

* Move remaining stateless code into nimbus directory.

* Add vmstate parameter to ChainRef to fix test.

* Exclude *.in from check copyright year

---------

Co-authored-by: jangko <jangko128@gmail.com>
2024-06-08 15:05:00 +07:00
tersec 5008b89185
EIP-2537 BLS12-381 G1 add/mul/exp and G2 add/mul support with tests (#2315) 2024-06-08 07:39:53 +07:00
tersec fd03038cab
Replace some usage of std/options with results Opt (#2323)
* Replace some usage of std/options with results Opt

* more updates
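
In practice the swap looks roughly like this (illustrative snippet; `Opt`
comes from the nim-results package):

```
import results

# std/options style would be Option[int], some(42), none(int)
let price = Opt.some(42)
let missing = Opt.none(int)

if price.isSome:
  echo price.get()    # 42
echo missing.isNone   # true
```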
2024-06-07 23:39:58 +02:00
Jordan Hrycaj 392088e5e9
Coredb fix storage tree issues (#2317)
* Code cosmetics

* Re-org `aristo_merge`, internally split into sub-modules

why:
  Became a burden for maintenance because it hosts two different
  functionalities under the same merge paradigm: account/data merge
  and snap proof merge where the latter produces a partial trie.

* Fix CoreDb tracer

* Ledger: fix potential account vs. storage tree sync problems

* Remove bound on the size of removable whole storage trees

* Activate `test_tracer_json`
2024-06-07 10:56:31 +00:00
Jacek Sieka c5b3081828
eth: bump (#2308)
* eth: bump

Speed up basic operations like hashing and creating RLPs - up to 25%
improvement in certain block ranges!

```
876729c.csv /data/nimbus_stats/stats-20240605_2204-ed4f6221.csv
stats-20240605_2000-c876729c.csv vs stats-20240605_2204-ed4f6221.csv
                       bps_x   bps_y     tps_x        tps_y    bpsd    tpsd    timed
block_number
(500001, 888889]    1,017.72  996.07  1,784.96  1742.438676  -2.72%  -2.72%    3.31%
(888889, 1277778]     528.00  536.30  2,159.79  2198.781046   1.69%   1.69%   -1.44%
(1277778, 1666667]    324.29  317.78  2,064.48  2008.106377  -2.82%  -2.82%    3.33%
(1666667, 2055556]    253.87  258.74  1,840.94  1872.935273   1.67%   1.67%   -1.39%
(2055556, 2444445]    175.79  178.66  1,340.61  1363.248939   0.93%   0.93%   -0.74%
(2444445, 2833334]    137.27  159.74    958.75  1113.323757  14.24%  14.24%  -10.69%
(2833334, 3222223]    170.48  228.63  1,272.70  1704.047195  34.41%  34.41%  -25.17%
(3222223, 3611112]    127.49  125.48  1,572.39  1548.835791  -1.19%  -1.19%    1.47%
(3611112, 4000001]     37.25   40.42  1,100.65  1184.740493   9.58%   9.58%   -7.04%

blocks: 3501696, baseline: 11h59m40s, contender: 11h21m38s
bpsd (mean): 6.18%
tpsd (mean): 6.18%
Time (sum): -38m1s, -4.26%

bpsd = blocks per sec diff (+), tpsd = txs per sec diff, timed = time to process diff (-)
+ = more is better, - = less is better
```

* ignore gitignore
2024-06-06 23:39:09 +00:00
Jordan Hrycaj e9eae4df70
Core db disable legacy api n remove distinct tries (#2299)
* CoreDb: Remove crufty second/off-site KVT

why:
  Was used to allow late `Clique` to store directly to disk

* CoreDb: Remove prune flag related functionality

why:
  Is completely legacy stuff

* CoreDb: Remove dependence on legacy API (tests unsupported yet)

why:
  Does not fully support Aristo

* Re-factoring `state_db` using new API

details:
  Only minimum changes needed to compile `nimbus`

* Update tests and aux modules

* Turn off legacy API and remove `distinct_tries`

comment:
  The legacy API has now cruft status, will be removed soon

* Fix copyright years

* Update rpc for verified proxy

---------

Co-authored-by: Jacek Sieka <jacek@status.im>
2024-06-05 20:52:04 +00:00
Jordan Hrycaj 8985535ab2
Core db+aristo updates n fixes (#2298)
* Fix `blobify()` for `SavedState` object

why:
  Have to treat varying sizes for `HashKey`, in particular an empty key,
  which has zero size.

* Store correct block number in `SavedState` record

why:
  Stored `block-number - 1` for some obscure reason.

* Cosmetics, docu
2024-06-05 18:17:50 +00:00
Jacek Sieka c876729c4d
Add some basic rocksdb options to command line (#2286)
These options are there mainly to drive experiments, and are therefore
hidden.

One thing that this PR brings in is an initial set of caches and buffers for rocksdb - the set that I've been using during various performance tests to get to a viable baseline performance level.
2024-06-05 17:08:29 +02:00
Jacek Sieka 95a4adc1e8
use statically linked rocksdb on linux/mac, dll on windows (#2291)
The `rocksdb` version shipped with distributions is typically old and
therefore often lacks features we use - it also doesn't match the one
assumed by nim-rocksdb leading to ABI mismatch risks.

Instead of depending on the system rocksdb, we'll now use the rocksdb
version assumed by nim-rocksdb and locked in its vendor folder by always
building it together with nimbus.

This avoids the problem of unknown rocksdb versions at a (small) cost to
build time.

CI caching and full windows support for building from source [remains
TODO](https://github.com/status-im/nim-rocksdb/issues/44).
2024-06-04 18:15:33 +02:00
Jordan Hrycaj 69a158864c
Remove vid recycling feature (#2294) 2024-06-04 15:05:13 +00:00
Jordan Hrycaj cc909c99f2
Fix crash in de-serialiser (#2289)
why:
  Late change from `Hash256` to `HashKey` without fully updating
  the serialiser.
2024-06-04 10:38:11 +00:00
Jordan Hrycaj f926222fec
Aristo cull journal related stuff (#2288)
* Remove all journal related stuff

* Refactor function names journal*() => delta*(), filter*() => delta*()

* remove `trg` field from `FilterRef`

why:
  Same as `kMap[$1]`

* Re-type FilterRef.src as `HashKey`

why:
  So it is directly comparable to `kMap[$1]`

* Moved `vGen[]` field from `LayerFinalRef` to `LayerDeltaRef`

why:
  Then a separate `FilterRef` type is not needed anymore

* Rename `roFilter` field in `AristoDbRef` => `balancer`

why:
  New name more appropriate.

* Replace `FilterRef` by `LayerDeltaRef` type

why:
  This avoids copying into the `balancer` (see next patch set)
  most of the time. Typically, only one instance is running on the backend
  and the `balancer` is only used as a stage before saving data.

* Refactor way how to store data persistently

why:
  Avoid a useless copy when staging the `top` layer for persistent saving
  to the backend.

* Fix copyright header?
2024-06-03 20:10:35 +00:00
Jacek Sieka 7f76586214
Speed up account ledger a little (#2279)
`persist` is a hotspot when processing blocks because it is run at least
once per transaction and loops over the entire account cache every time.

Here, we introduce an extra `dirty` map that keeps track of all accounts
that need checking during `persist`, which fixes the immediate
inefficiency, though this could probably benefit from a more thorough
review - we also get rid of the unused clearCache flag - we start with
a fresh cache on every fresh vmState.

* avoid unnecessary code hash comparisons
* avoid unnecessary copies when iterating
* use EMPTY_CODE_HASH throughout for code hash comparison
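
The dirty-map idea in miniature, with toy types standing in for the real
ledger:

```
import std/[tables, sets]

type Ledger = object
  accounts: Table[string, int]   # stand-in for the account cache
  dirty: HashSet[string]         # only these need attention in persist

proc setBalance(l: var Ledger, acc: string, bal: int) =
  l.accounts[acc] = bal
  l.dirty.incl acc               # remember that this account was touched

proc persist(l: var Ledger) =
  # loop over the touched accounts only, not the whole cache
  for acc in l.dirty:
    discard l.accounts[acc]      # the actual write-out would happen here
  l.dirty.clear()
```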
2024-06-02 21:21:29 +02:00
Jacek Sieka ef864ba167
disable vid reuse compaction (#2276)
The current implementation cannot be used practically since it causes
several full reallocations of the whole free list per deletion - it
needs to be reimplemented, or the chain cannot practically progress
beyond ~2.5M blocks where a lot of removals happen.

Co-authored-by: tersec <tersec@users.noreply.github.com>
2024-06-01 17:14:54 +02:00
Jacek Sieka 9f879406f3
append instead of reallocate in blobify (#2277)
...otherwise, we get lots and lots of temporary allocations of seqs
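
The pattern in miniature - illustrative, not the actual blobify code:
serialising into a caller-supplied buffer avoids one temporary seq per
call.

```
# allocates a fresh seq on every call
proc blobifyAlloc(x: uint32): seq[byte] =
  @[byte((x shr 24) and 0xff), byte((x shr 16) and 0xff),
    byte((x shr 8) and 0xff), byte(x and 0xff)]

# appends to a caller-supplied buffer instead
proc blobify(x: uint32, data: var seq[byte]) =
  data.add [byte((x shr 24) and 0xff), byte((x shr 16) and 0xff),
            byte((x shr 8) and 0xff), byte(x and 0xff)]

var buf: seq[byte]
for v in [1'u32, 2, 3]:
  blobify(v, buf)        # one buffer reused across the whole loop
```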
2024-06-01 17:13:24 +02:00
andri lim f9765e617b
Early exit from some of CoreDbRef functions if nothing to do (#2259)
* Early exit from some of CoreDbRef functions if nothing to do

* More exits

* persistReceipts early exit if nothing to do
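
The shape of the change is just a guard clause at the top of the affected
procs (illustrative names):

```
proc persistReceipts(receipts: seq[seq[byte]]) =
  if receipts.len == 0:
    return               # nothing to do: skip trie setup and writes entirely
  for r in receipts:
    discard r            # the actual persisting would happen here
```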
2024-06-01 16:54:02 +02:00
tersec cfbbcda4f7
rm unused Nim modules (#2270) 2024-06-01 17:49:46 +07:00
Jordan Hrycaj bda760f41d
Run coredb without journal (#2266)
* Add persistent last state stamp feature

why:
  This allows running `CoreDb` without a journal

* Start `CoreDb` without journal

* Remove journal related functions from `CoredDb`
2024-05-31 17:32:22 +00:00
Jordan Hrycaj 0f430c70fd
Aristo avoid storage trie update race conditions (#2251)
* Update TDD suite logger output format choices

why:
  The new format is not practical for TDD as it just dumps data across a
  wide range (considerably larger than 80 columns).

  So the new format can be turned on by a function argument.

* Update unit tests samples configuration

why:
  Slightly changed the way to find the `era1` directory

* Remove compiler warnings (fix deprecated expressions and phrases)

* Update `Aristo` debugging tools

* Always update the `storageID` field of account leaf vertices

why:
  Storage tries are weakly linked to an account leaf object in that
  the `storageID` field is updated by the application.

  Previously, `Aristo` verified that leaf objects make sense when passed
  to the database. As a consequence
  * the database was inconsistent for a short while
  * the burden for correctness was all on the application, which led
    to delayed error handling that is hard to debug.

  So `Aristo` will internally update the account leaf objects so that
  there are no race conditions due to the storage trie handling.

* Aristo: Let `stow()`/`persist()` bail out unless there is a `VertexID(1)`

why:
  The journal and filter logic depends on the hash of the `VertexID(1)`
  which is commonly known as the state root. This implies that all
  changes to the database are somehow related to that.

* Make sure that a `Ledger` account does not overwrite the storage trie reference

why:
  Due to the abstraction of a sub-trie (now referred to as a column with a
  hash describing its state) there was a weakness in the `Aristo` handler
  where an account leaf could be overwritten, thereby changing the validity
  of the database. This has been changed and the database will now reject
  such changes.

  This patch fixes the behaviour on the application layer. In particular,
  the column handle returned by the `CoreDb` needs to be updated by
  the `Aristo` database state. This mitigates the problem that a storage
  trie might have vanished or re-appeared with a different vertex ID.

* Fix sub-trie deletion test

why:
  Was originally hinged on `VertexID(1)` which cannot be wholesale
  deleted anymore after the last Aristo update. Also, running with
  `VertexID(2)` needs an artificial `VertexID(1)` for making `stow()`
  or `persist()` work.

* Cosmetics

* Activate `test_generalstate_json`

* Temporarily deactivate `test_tracer_json`

* Fix copyright header

---------

Co-authored-by: jordan <jordan@dry.pudding>
Co-authored-by: Jacek Sieka <jacek@status.im>
2024-05-30 17:48:38 +00:00
Jacek Sieka 919242c98e
results: use canonical import (#2248) 2024-05-30 14:54:03 +02:00