nimbus-eth1/premix
Jordan Hrycaj 221e6c9e2f
Unified database frontend integration (#1670)
* Nimbus folder environment update

details:
* Integrated `CoreDbRef` for the sources in the `nimbus` sub-folder.
* The `nimbus` program does not compile yet as it needs the updates
  in the parallel `stateless` sub-folder.

* Stateless environment update

details:
* Integrated `CoreDbRef` for the sources in the `stateless` sub-folder.
* The `nimbus` program compiles now.

* Premix environment update

details:
* Integrated `CoreDbRef` for the sources in the `premix` sub-folder.

* Fluffy environment update

details:
* Integrated `CoreDbRef` for the sources in the `fluffy` sub-folder.

* Tools environment update

details:
* Integrated `CoreDbRef` for the sources in the `tools` sub-folder.

* Nodocker environment update

details:
* Integrated `CoreDbRef` for the sources in the
  `hive_integration/nodocker` sub-folder.

* Tests environment update

details:
* Integrated `CoreDbRef` for the sources in the `tests` sub-folder.
* The unit tests compile and run cleanly now.

* Generalise `CoreDbRef` to any `select_backend` supported database

why:
  Generalisation was simply missed earlier while working around a compiler
  oddity that was tied to rocksdb for testing.

* Suppress compiler warning for `newChainDB()`

why:
  A warning was raised for this function, which must be wrapped so that
  any `CatchableError` is re-raised as a `Defect`.

* Split off persistent `CoreDbRef` constructor into separate file

why:
  This allows compiling a memory-only database version without linking
  the backend library.

* Use memory `CoreDbRef` database by default

detail:
 Persistent DB constructor needs to import `db/core_db/persistent`

why:
 Most tests use memory DB anyway. This avoids linking `-lrocksdb` or
 any other backend by default.

* Fix `toLegacyBackend()` availability check

why:
  It got garbled after the memory/persistent split.

* Clarify raw access to MPT for snap sync handler

why:
  Logically, `kvt` is not the raw access for the hexary trie (although
  this holds for the legacy database)
2023-08-04 12:10:09 +01:00

Premix

Premix is premium gasoline mixed with lubricant oil and it is used in two-stroke internal combustion engines. It tends to produce a lot of smoke.

This Premix is a block validation debugging tool for the Nimbus Ethereum client. Premix will query transaction execution steps from other Ethereum clients and compare them with those generated by Nimbus. It will then produce a web page to present comparison results that can be inspected by the developer to pinpoint the faulty instruction.

Premix will also produce a test case for the specific problematic transaction, complete with a database snapshot to execute transaction validation in isolation. This test case can then be integrated with the Nimbus project's test suite.

[screenshot: Premix report page]

Requirements

Before you can use the Premix debugging tool, there are several things you need to prepare. The first requirement is a recent version of geth, installed from source or binary; the minimum required version is 1.8.18 (you can confirm the installed version as shown below). Beware that the 1.8.x series contains bugs in the transaction tracer; upgrade to 1.9.x as soon as it is released. Afterwards, you can run geth with this command:

geth --rpc --rpcapi eth,debug --syncmode full --gcmode=archive
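
If you want to confirm that the installed version meets the 1.8.18 minimum, geth prints it with the following command (the exact output format varies between releases):

geth version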

You need to keep geth running until it fully syncs past the problematic block you want to debug (you might need to start from an empty database, because some geth versions will keep doing a fast sync if that's what was done before). After that, you can either:

  • stop it with CTRL-C and rerun it with the additional flag --maxpeers 0 if you want it to stop syncing (see the example just below), or
  • just let it run as is if you want to keep syncing.
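
For reference, the rerun uses the same flags as before with the extra option appended:

geth --rpc --rpcapi eth,debug --syncmode full --gcmode=archive --maxpeers 0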

The next requirement is building Nimbus and Premix:

# in the top-level directory:
make

After that, you can run Nimbus with this command:

./build/nimbus --prune:archive --port:30304

Nimbus will try to sync up to the problematic block, then stop and execute Premix, which will load a report page in your default browser. If that fails, you can view the report page by manually opening premix/index.html.
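
Opening the report manually just means pointing your browser at that file; on Linux, for example, something like the following should work (use your platform's equivalent, such as open on macOS):

xdg-open premix/index.html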

In your browser, you can explore the tracing result and find where the problem is.

Tools

Premix

Premix is the main debugging tool. It produces reports that can be viewed in a browser and serialised debug data that can be consumed by the debug tool. Premix consumes data produced by either nimbus, persist, or dumper.

You can run it manually using this command:

./build/premix debug*.json
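
As a rough end-to-end illustration (the flag value is only an example, and this assumes the debug*.json files are written to the directory you run the tools from), a persist run that stops on a problematic block can be followed directly by premix:

./build/persist --maxBlocks:25000
./build/premix debug*.json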

Persist

Because the Nimbus P2P layer still contains bugs, you may become impatient when trying to sync blocks. In the ./premix directory, you can find a persist tool. It will help you sync comparatively quickly because it bypasses the P2P layer and downloads blocks from geth via the RPC API.

When it encounters a problematic block during syncing, it will stop and produce debugging data just like Nimbus does.

./build/persist [--dataDir:your_database_directory] [--head: blockNumber] [--maxBlocks: number] [--numCommits: number]
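
A concrete invocation might look like this; the data directory and the numbers are purely illustrative, so substitute values that match your setup:

# example invocation with explicit (illustrative) values
./build/persist --dataDir:/path/to/nimbus/db --head:1500000 --maxBlocks:25000 --numCommits:128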

Debug

In the same ./premix directory you'll find the debug tool, which processes previously generated debugging info so that you can work with one block and one transaction at a time instead of juggling many blocks and transactions at once.

./build/debug block*.json

where block*.json, produced by the Premix tool, contains the database snapshot needed to debug a single block.

Dumper

Dumper was designed specifically to produce debugging data, from information already stored in the database, that can be further processed by Premix. It will create tracing information for a single block, provided that block has already been persisted.

If you want to generate debugging data, it's better to use the Persist tool. The data generated by Dumper is usually used to debug Premix features in general and the report page logic in particular.

# usage:
./build/dumper [--datadir:your_path] --head:blockNumber

Hunter

Hunter's purpose is to track down problematic blocks and create debugging info associated with them. It will not access your on-disk database, because it has its own prestate construction code.

Hunter will download everything it needs from geth; just make sure your geth version is at least 1.8.18.

Hunter depends on eth_getProof (EIP-1186). Make sure your installed geth supports this functionality (older versions don't implement it).
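
A quick way to check this, assuming geth's HTTP RPC is reachable at the default 127.0.0.1:8545, is to call the method directly; a geth build without eth_getProof will reply with a "method not found" style error (the address used here is just a placeholder):

curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_getProof","params":["0x0000000000000000000000000000000000000000",[],"latest"]}' \
  http://127.0.0.1:8545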

# usage:
./build/hunter --head:blockNumber --maxBlocks:number

blockNumber is the starting block where the hunt begins.

maxBlocks is the number of problematic blocks you want to capture before stopping the hunt.
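
Putting these together, a hunt that starts at block 4900000 and stops after capturing three problematic blocks (both numbers are illustrative) would be:

./build/hunter --head:4900000 --maxBlocks:3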

Regress

Regress is an offline block validation tool. Unlike the Persist tool, it will not download block information from anywhere; it validates blocks that are already persisted in your database, trying to find any regression introduced by bug fixes or refactoring.

# usage:
./build/regress [--dataDir:your_db_path] --head:blockNumber