* logs summarized
* fix copyright year
* add topics for logs
* fix copyright year
* bring syncer logs to info & debug level
* fix debug dockerfile
* fix: copyright error
* shift txpool logs to debug and introduce logs in rpc
* after headers, bring block download logs to info level
* comments for finalization summary of logs
* change literals to meaningful names
* remove unwanted data from user-facing logs
* include target logs
* remove control
* fix capitalization
* complete txpool
- Minor refactor and cleanup of gossip retry and logging.
- Wait time before verifying the gossip for a block is now proportional to the number of offers per block.
- Don't retry gossiping content after finding it in the network. When retrying gossip of a block, only the offers not yet found in the network will be re-sent.
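A minimal Nim sketch of the retry rule above; the names (`Offer`, `foundInNetwork`, `baseDelayMs`) are illustrative assumptions, not the actual Portal network API:

```nim
import std/sequtils

type Offer = object
  contentKey: seq[byte]

proc gossipRetryDelayMs(offersPerBlock: int, baseDelayMs = 100): int =
  # Wait proportionally to the number of offers per block before
  # verifying that the gossiped content landed in the network.
  offersPerBlock * baseDelayMs

proc offersToRetry(offers: seq[Offer],
                   foundInNetwork: proc(o: Offer): bool): seq[Offer] =
  # Re-send only the offers whose content was not yet found.
  offers.filterIt(not foundInNetwork(it))
```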
This commit adds --csv-output support to the blocks import script.
GitHub renders CSVs as tables, so this addition is useful in the
nimbus-eth1-benchmark repo for making comparisons easy to render.
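As a rough illustration only (the real script defines its own flag handling; the file name and column set below are assumptions), CSV output amounts to a header row plus one comma-separated row per measurement:

```nim
import std/strformat

proc writeCsvRow(f: File; blockNumber: uint64; txs: int;
                 gasUsed: uint64; timeMs: float) =
  f.writeLine(&"{blockNumber},{txs},{gasUsed},{timeMs:.2f}")

let f = open("import_stats.csv", fmWrite)
# Header row; GitHub renders the resulting file as a table.
f.writeLine("block_number,txs,gas_used,time_ms")
writeCsvRow(f, 1000'u64, 150, 12_000_000'u64, 83.5)
f.close()
```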
* Refactor TxPool: leaner and simpler
* Rewrite test_txpool
Reduce the number of tables used from 5 to 2, and reduce the number of files.
If the price rule or other filters need to be modified, it is now far easier because there is only one table to work with (sender/nonce).
The other table is just a map from txHash to TxItemRef.
Removing transactions from the txPool, whether because a new block was produced or because of syncing, became much easier.
Removing expired transactions is also simple.
The explicit Pending, Staged, or Packed transaction status is removed; the status of a transaction can be inferred implicitly.
A developer new to TxPool can easily follow the logic.
Most importantly, we can revive test_txpool without dirty tricks and further remove usage of getCanonicalHead, preparing for better integration with ForkedChain.
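A minimal sketch of the two-table layout described above, using stand-in types (`string` keys instead of the real address and hash types) rather than the actual TxPool code:

```nim
import std/tables

type
  TxItemRef = ref object
    sender: string          # stand-in for the sender address type
    nonce: uint64
    # no explicit Pending/Staged/Packed field: status is inferred

  TxPool = object
    bySender: Table[string, Table[uint64, TxItemRef]]  # sender -> (nonce -> item)
    byHash: Table[string, TxItemRef]                   # txHash -> item

proc addTx(pool: var TxPool; txHash: string; item: TxItemRef) =
  pool.bySender.mgetOrPut(item.sender,
      initTable[uint64, TxItemRef]())[item.nonce] = item
  pool.byHash[txHash] = item

proc removeTx(pool: var TxPool; txHash: string) =
  # Dropping a tx (new block produced, sync, or expiry) only ever
  # touches these two tables.
  if txHash in pool.byHash:
    let item = pool.byHash[txHash]
    pool.bySender[item.sender].del(item.nonce)
    pool.byHash.del(txHash)
```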
* Revert previous change in PortalStream. Allow zero as a valid connectionId if randomly generated.
* Rename portal_*Gossip JSON-RPC endpoints to portal_*PutContent to be in line with the updated portal spec.
Instead of using ancient/dirty code to set up the RPC test, it now uses the newest methods from TxPool and ForkedChain.
Also fix some bugs in server_api discovered when using this new setup.
* Fix name after API change
why:
Slipped through (debugging mode)
* Fine tuning error counters
why:
The previous operating mode was quite blunt and triggered on some
unnecessary conditions. Error handling was invoked and the peer
zombified where one could have continued working with that peer.
* Provide `kvt` table API bypassing `FC`
details:
Not a full bypass yet
why:
As discussed on Discord:
Ideally, those would pass through fc as well, as thin wrappers around
the db calls, for now - later, we probably see some policy involved
here and at that point, fc will be responsible for arbitrage between
sources (i.e. if an rpc source sends the block the syncer is syncing
while the syncer is working, fc is there to referee).
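A rough sketch of the thin-wrapper idea; `kvtGet`/`kvtPut` and the `ForkedChain` shape here are assumptions for illustration, not the actual API:

```nim
import std/tables

type ForkedChain = object
  kvt: Table[seq[byte], seq[byte]]   # stand-in for the kvt backend table

proc kvtGet(fc: ForkedChain; key: seq[byte]): seq[byte] =
  # Thin wrapper: for now it just forwards to the db. Later, FC can
  # apply policy here, e.g. refereeing between an rpc source and the
  # syncer when both touch the same block.
  fc.kvt.getOrDefault(key)

proc kvtPut(fc: var ForkedChain; key, val: seq[byte]) =
  fc.kvt[key] = val
```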

* Apply `kvt` API from `FC` to beacon sync
* No need to use an extra table for the persistent header cache state record
why:
Slot zero can do. This allows deleting that table wholesale when needed
once that feature is available.
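Illustratively (the slot numbering and names are assumptions, not the actual header cache code), the layout looks like this:

```nim
import std/tables

const StateRecordSlot = 0'u64   # assumed reserved slot, illustrative only

proc putStateRecord(tab: var Table[uint64, seq[byte]]; state: seq[byte]) =
  # The state record lives at slot zero of the header cache table
  # itself, so dropping the table wholesale removes headers and state
  # together.
  tab[StateRecordSlot] = state

proc putHeader(tab: var Table[uint64, seq[byte]]; num: uint64; hdr: seq[byte]) =
  doAssert num > 0, "slot zero is reserved for the state record"
  tab[num] = hdr
```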
* Logger updates
details:
+ Lifting main header/block op logs from `trace` to `debug`
+ Set metrics update before nano-sleep (for task switch)
Thanks to @holiman of goevmlab for his fuzzer.
Similar to the Blake2b precompile regression #2919.
On error, the precompile should not return any output.
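A hedged sketch of that rule, using the Blake2b-F input check from EIP-152 as the example; `PrecompileResult` and `blake2bF` are illustrative names, not Nimbus code:

```nim
type
  PrecompileResult = object
    ok: bool
    output: seq[byte]

proc blake2bF(input: seq[byte]): PrecompileResult =
  # EIP-152: the Blake2b F precompile takes exactly 213 bytes of input.
  if input.len != 213:
    # On error, return no output at all rather than stale/partial data.
    return PrecompileResult(ok: false, output: @[])
  # ... run the compression function and fill the 64-byte output here ...
  PrecompileResult(ok: true, output: newSeq[byte](64))
```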
When running the import, blocks are currently loaded in batches into a
`seq` and then passed to the importer as such.
In reality, blocks are still processed one by one, so the batching does
not offer any performance advantage. It does, however, require that the
client waste memory, up to several GB, on the block sequence while the
blocks wait to be processed.
This PR introduces a persister that accepts these potentially large
blocks one by one and at the same time removes a number of redundant /
unnecessary copies, assignments and resets that were slowing down the
import process in general.
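A minimal sketch of the streaming shape, with `Block`, `Persister` and `readBlocks` as stand-ins for the client's own types rather than the actual importer API:

```nim
type
  Block = object
    number: uint64

  Persister = object
    persisted: uint64   # highest block number persisted so far

proc put(p: var Persister; blk: Block) =
  # Accept one block at a time: nothing accumulates in a multi-GB
  # `seq` while waiting to be processed.
  # ... verify and persist `blk` here ...
  p.persisted = blk.number

iterator readBlocks(): Block =
  for n in 1'u64 .. 3'u64:
    yield Block(number: n)

var p: Persister
for blk in readBlocks():
  p.put(blk)
```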
The forking facility has been replaced by ForkedChain: frames and
layers are two other mechanisms that mostly do the same thing at the
Aristo level without quite providing the functionality FC needs. This
cleanup will make that integration easier.
The current getCanonicalHead of the core db should not be confused with ForkedChain.latestHeader.
Therefore getCanonicalHead should be used only in restricted cases, e.g. when initializing ForkedChain.
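To illustrate with stand-in types (none of these mirror the actual CoreDb/ForkedChain signatures):

```nim
type
  Header = object
    number: uint64
  CoreDb = object
    canonicalHead: Header   # stand-in for the on-disk canonical head
  ForkedChain = object
    latestHeader: Header

proc getCanonicalHead(db: CoreDb): Header =
  db.canonicalHead

proc initForkedChain(db: CoreDb): ForkedChain =
  # Restricted use: only to seed the chain base at startup.
  ForkedChain(latestHeader: db.getCanonicalHead())

let db = CoreDb(canonicalHead: Header(number: 1_000_000))
let fc = initForkedChain(db)
# Everywhere else, read the tip from ForkedChain, not from the db.
doAssert fc.latestHeader.number == 1_000_000
```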