* logs summarized
* fix copyright year
* add topics for logs
* fix copyright year
* bring syncer logs to info & debug level
* fix debug dockerfile
* fix: copyright error
* shift txpool logs to debug and introduce logs in rpc
* after header, bring block download to info level
* comments for finalization summary of logs
* change literals to meaningful names
* remove unwanted data from user-facing logs
* include target logs
* remove control
* fix capitalization
* complete txpool
- Minor refactor and cleanup of gossip retry and logging.
- Wait time before verifying the gossip for a block is now proportional to the number of offers per block.
- Don't retry gossiping content after finding it in the network. When retrying gossip of a block, only the offers not yet found in the network will be re-sent (see the sketch below).
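The retry flow roughly amounts to the following; a minimal sketch assuming hypothetical names (`Offer`, `contentFoundInNetwork`, `sendOffer`) and plain `std/asyncdispatch`, not the actual Portal gossip code:

```nim
import std/asyncdispatch

const perOfferWaitMs = 100   # assumed base wait per offer, for illustration

type Offer = object
  contentKey: seq[byte]

proc contentFoundInNetwork(offer: Offer): Future[bool] {.async.} =
  return false   # placeholder for the actual content lookup

proc sendOffer(offer: Offer) {.async.} =
  discard        # placeholder for the actual gossip/offer call

proc verifyAndRetry(offers: seq[Offer]) {.async.} =
  # wait time before verification is proportional to the offer count
  await sleepAsync(perOfferWaitMs * offers.len)
  for offer in offers:
    # only offers whose content is still missing get re-sent
    let found = await contentFoundInNetwork(offer)
    if not found:
      await sendOffer(offer)
```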
This commit adds --csv-output support to the blocks import script.
GitHub renders CSVs as tables, and this addition would be useful in the nimbus-eth1-benchmark repo to make the comparison easy to render.
* Refactor TxPool: leaner and simpler
* Rewrite test_txpool
Reduce the number of tables used from 5 to 2, and reduce the number of files.
If we need to modify the price rule or other filters, it is now far easier because there is only one table to work with (sender/nonce).
The other table is just a map from txHash to TxItemRef (see the sketch below).
Removing transactions from the txPool, either because a new block was produced or because of syncing, became much easier.
Removing expired transactions is also simple.
The explicit Tx Pending, Staged, or Packed status is removed; the status of a transaction can be inferred implicitly.
A developer new to TxPool can easily follow the logic.
Most importantly, we can revive test_txpool without dirty tricks and further remove the usage of getCanonicalHead, preparing for better integration with ForkedChain.
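A rough sketch of the two-table layout, using simplified hypothetical types (the real TxItemRef carries the full transaction and keys on a 32-byte hash):

```nim
import std/tables

type
  TxItemRef = ref object
    txHash: string   # simplified for the sketch
    sender: string
    nonce: uint64

  TxPoolSketch = object
    # table 1: sender -> nonce -> item; price rules and other filters
    # only ever need to walk this table
    bySender: Table[string, Table[uint64, TxItemRef]]
    # table 2: plain lookup map from txHash to the item
    byHash: Table[string, TxItemRef]

proc add(pool: var TxPoolSketch, item: TxItemRef) =
  pool.bySender.mgetOrPut(item.sender,
    initTable[uint64, TxItemRef]())[item.nonce] = item
  pool.byHash[item.txHash] = item

proc removeUpTo(pool: var TxPoolSketch, sender: string, upToNonce: uint64) =
  # after a block is produced or synced, drop everything from `sender`
  # with nonce <= upToNonce from both tables
  if sender notin pool.bySender:
    return
  var stale: seq[uint64]
  for nonce, item in pool.bySender[sender]:
    if nonce <= upToNonce:
      stale.add nonce
      pool.byHash.del item.txHash
  for nonce in stale:
    pool.bySender[sender].del nonce
```

Price and nonce policies only ever touch `bySender`, while lookups by hash go through `byHash`.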
* Revert previous change in PortalStream. Allow zero as a valid connectionId if randomly generated.
* Rename portal_*Gossip JSON-RPC endpoints to portal_*PutContent to be in line with updated portal spec.
Instead of using ancient/dirty code to set up the rpc test, we now use the newest methods from TxPool and ForkedChain.
Also fix some bugs in server_api discovered when using this new setup.
* Fix name after API change
why:
Slipped through (debugging mode)
* Fine-tuning error counters
why:
The previous operating mode was quite blunt and considered some unnecessary
conditions. Error handling was invoked and the peer zombified where one
could have continued working with that peer.
* Provide `kvt` table API bypassing `FC`
details:
Not a full bypass yet
why:
As discussed on Discord:
Ideally, those would pass through fc as well, as thin wrappers around
the db calls, for now - later, we probably see some policy involved
here and at that point, fc will be responsible for arbitrage between
sources (i.e. if an rpc source sends the block the syncer is syncing
while the syncer is working, fc is there to referee).

* Apply `kvt` API from `FC` to beacon sync
* No need to use extra table for persistent header cache state record
why:
Slot zero will do. This allows deleting that table wholesale when needed,
once that feature is available.
* Logger updates
details:
+ Lifting main header/block op logs from `trace` to `debug`
+ Set metrics update before nano-sleep (for task switch)
Thanks to @holiman of goevmlab for his fuzzer.
Similar to the Blake2b precompile regression #2919.
On error, the precompile should not return any output.
When running the import, blocks are currently loaded in batches into a
`seq` and then passed to the importer as such.
In reality, blocks are still processed one by one, so the batching does
not offer any performance advantage. It does however require that the
client wastes memory, up to several GB, on the block sequence while
the blocks are waiting to be processed.
This PR introduces a persister that accepts these potentially large
blocks one by one and at the same time removes a number of redundant /
unnecessary copies, assignments and resets that were slowing down the
import process in general (a small sketch of the idea follows below).
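As a sketch of the difference (hypothetical names, not the actual import code), the importer moves from a batch interface to a push interface:

```nim
type ImportBlock = object
  number: uint64
  # ... header, transactions, etc. omitted

# before: accumulate a whole batch into a seq, then hand it over
proc importBatch(blocks: seq[ImportBlock]) =
  for blk in blocks:
    discard blk.number   # blocks are still processed one by one

# after: a persister that is fed blocks one by one, so no multi-GB
# sequence of decoded blocks sits in memory waiting to be processed
type Persister = object
  nPersisted: uint64

proc put(p: var Persister, blk: ImportBlock) =
  # validate and persist immediately, then let `blk` go out of scope
  inc p.nPersisted
```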
The forking facility has been replaced by ForkedChain - frames and
layers are two other mechanisms that mostly do the same thing at the
aristo level, without quite providing the functionality FC needs - this
cleanup will make that integration easier.
The current getCanonicalHead of core db should not be confused with ForkedChain.latestHeader.
Therefore we need to restrict the use of getCanonicalHead to specific cases only, e.g. initializing ForkedChain.
* Metrics cosmetics
* Better naming for error threshold constants
* Treating header/body processing errors differently from response errors
why:
Error handling does not become active until several consecutive failures
appear. As both types of errors may interleave (e.g. no-response
errors), the counter reset for one type might affect the other.
If done wrong, a peer might repeatedly send a bogus block,
locking the syncer in an endless loop (see the sketch below).
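A minimal sketch of the idea, with hypothetical names and threshold (not the actual syncer code): each peer keeps separate consecutive-failure counters per error class, so resetting one cannot mask the other.

```nim
const errThreshold = 3   # assumed number of consecutive failures

type PeerErrorState = object
  nRespErrors: int   # no/garbled response from the peer
  nProcErrors: int   # header/body received but failed verification

proc onGoodResponse(s: var PeerErrorState) =
  # only the response counter resets; a peer that always answers
  # promptly but keeps serving a bogus block still trips the
  # processing counter below
  s.nRespErrors = 0

proc onResponseError(s: var PeerErrorState): bool =
  inc s.nRespErrors
  return s.nRespErrors >= errThreshold   # true => stop using this peer

proc onProcessError(s: var PeerErrorState): bool =
  inc s.nProcErrors
  return s.nProcErrors >= errThreshold
```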
In block processing, depending on the complexity of a transaction and
hotness of caches etc, signature checking can actually make up the
majority of time needed to process a transaction (60% observed in some
randomly sampled block ranges).
Fortunately, this is a task that trivially can be offloaded to a task
pool similar to how nimbus-eth2 does it.
This PR introduces taskpools in the simplest way possible, by
performing signature checking concurrently with other TX processing,
effectively assigning one taskpool task per TX (a simplified sketch
follows at the end of this entry).
With this little trick, we're in gigagas land 🎉 on my laptop!
```
INF 2024-12-10 21:05:35.170+01:00 Imported blocks
blockNumber=3874817 b... mgps=1222.707 ...
```
Tests don't use the taskpool for now because it needs manual cleanup and
we don't have a good mechanism in place. Future PRs should address this
by creating a common shutdown sequence that also closes and cleans up
other resources like the DB.
Co-authored-by: andri lim <jangko128@gmail.com>
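A simplified sketch of what per-TX signature offloading can look like with nim-taskpools; the transaction type and helpers are placeholders, not the actual block processing code:

```nim
import taskpools

type SketchTx = object
  sigR, sigS: array[32, byte]   # placeholder signature material

proc checkSignature(tx: SketchTx, ok: ptr bool) =
  ok[] = true   # placeholder for the actual ecRecover/verification work

proc processRest(tx: SketchTx) =
  discard   # nonce/balance checks, EVM execution, state updates, ...

proc processBlock(tp: Taskpool, txs: seq[SketchTx]): bool =
  var sigOk = newSeq[bool](txs.len)
  # kick off one signature check per transaction on the task pool ...
  for i, tx in txs:
    tp.spawn checkSignature(tx, addr sigOk[i])
  # ... and overlap it with the remaining per-transaction work
  for tx in txs:
    processRest(tx)
  tp.syncAll()   # wait for all outstanding signature checks
  for ok in sigOk:
    if not ok:
      return false
  return true
```

The pool itself would be created once at startup (`Taskpool.new()`) and torn down together with the other resources, which is exactly the shutdown sequence the tests still lack.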
* Cosmetics, add some metrics updates to smoothen curves
why:
Block download progress was previously just a jump from none to full
* Reclassifying some syncer gossip from TRC to DBG
why:
Might help debugging without full trace logs
Now that branches are small, we can add a branch cache that fits more
vertices in memory by storing only the branch portion (16 bytes) of the
VertexRef (136 bytes).
Where the original vertex cache hovers around a hit rate of ~60%,
this branch cache instead reaches a >90% hit rate around block 20M,
which gives a nice boost to processing (a sketch of the idea follows at
the end of this entry).
A downside of this approach is that a new VertexRef must be allocated
for every cache hit instead of reusing an existing instance - this
causes some GC overhead that needs to be addressed.
Nice 15% improvement nonetheless, can't complain!
```
blocks: 19630784, baseline: 161h18m38s, contender: 136h23m23s
Time (total): -24h55m14s, -15.45%
```
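A sketch of the branch-only cache idea, with simplified hypothetical types (the real Aristo VertexRef carries more than shown here):

```nim
import std/tables

type
  CompactBranch = object
    # only the ~16-byte branch portion is cached: here just a bitmap of
    # which of the 16 children exist (the real layout packs more)
    used: uint16

  VertexRef = ref object
    # stand-in for the full in-memory vertex (~136 bytes in Aristo)
    used: uint16
    # ... expanded child data, leaf/extension fields, etc.

  BranchCache = object
    entries: Table[uint64, CompactBranch]   # keyed by vertex id (simplified)

proc put(cache: var BranchCache, vid: uint64, vtx: VertexRef) =
  # store only the compact branch portion, not the whole VertexRef
  cache.entries[vid] = CompactBranch(used: vtx.used)

proc get(cache: BranchCache, vid: uint64): VertexRef =
  # on a hit, a fresh VertexRef has to be allocated from the compact
  # form (the GC overhead mentioned above); nil on a miss, so the
  # caller falls back to the database
  if vid in cache.entries:
    result = VertexRef(used: cache.entries[vid].used)
```

Because each cached entry is roughly 16 bytes instead of 136, the same memory budget holds far more branches, which is where the higher hit rate comes from.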