* Sync scheduler provides an independent `ticker` loop process
why:
Can be used to update `metrics` and for debug logging. While an event-driven
solution would stall when there are no events (e.g. when the syncer
hibernates), the `ticker` will run regardless; see the sketch after this list.
* Use a `runTicker()`-like loop interface for updating the ticker
why:
No longer event driven, so it will not stall when the syncer
hibernates.
* Re-implement logging ticker by running it within the `runTicker()` driver
why:
Simplifies implementation
* Rename the metrics variable to better fit the current naming scheme
* Fix copyright header
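A minimal, self-contained sketch of such an event-independent ticker loop, using `std/asyncdispatch` purely for illustration (the syncer may use a different async framework); `runTicker` and `updateMetrics` here are stand-ins, not the actual procs:

```nim
import std/asyncdispatch

proc updateMetrics() =
  ## Placeholder for the metrics/debug-logging update done on each tick.
  echo "metrics updated"

proc runTicker(intervalMs: int) {.async.} =
  ## Fires on a fixed interval, independent of sync events, so it keeps
  ## running even while the syncer hibernates.
  while true:
    await sleepAsync(intervalMs)
    updateMetrics()

when isMainModule:
  asyncCheck runTicker(5_000)
  runForever()
```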
Discard the received data on a uTP content stream read timeout.
Previously the data was still added to the queue and processed, and would
normally fail validation. However, since we know that not all of the data
was read, it should not even reach the validation step.
Also, a FIN is now sent after the timeout instead of the delayed socket
clean-up, which does not make much sense in that scenario either. Basically,
either be nice and still send a FIN, or destroy the socket immediately.
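A hedged sketch of that timeout policy; `ContentStream`, `tryRead` and `sendFin` are hypothetical stand-ins, not the actual nim-eth uTP API:

```nim
import std/options

type
  ContentStream = object
    id: int

proc tryRead(s: ContentStream, timeoutMs: int): Option[seq[byte]] =
  ## Stand-in for a read that may time out (returns none on timeout).
  none(seq[byte])

proc sendFin(s: ContentStream) = discard   # be nice and signal end of stream

proc readContent(s: ContentStream): Option[seq[byte]] =
  let data = s.tryRead(timeoutMs = 2_000)
  if data.isNone:
    # Read timed out: the payload is incomplete, so do not queue it for
    # validation. Send a FIN right away instead of the delayed clean-up.
    s.sendFin()
    return none(seq[byte])
  data
```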
* Force metrics update when peers vanish
why:
Afterwards there might be reduced activity, so the next regular metrics
update could otherwise be delayed.
* Update comments (code cosmetics)
* Tidy up the nano-sleep wait directives into an `update.nim` function
* Fix copyright year
* Move EIP-7702 Authorization validation to authority func
If an authorization is invalid, the transaction itself is still valid;
the invalid authorization will simply be skipped.
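A minimal sketch of that skip-on-invalid behaviour, assuming simplified types; the `Authorization` fields and the `authority` recovery step are illustrative, not the actual implementation:

```nim
import std/options

type
  Address = array[20, byte]
  Authorization = object
    chainId: uint64
    nonce: uint64
    # signature fields elided in this sketch

proc authority(auth: Authorization, currentChainId: uint64): Option[Address] =
  ## Returns the recovered authority address, or none if the authorization
  ## is invalid (wrong chain id, bad signature, nonce out of range, ...).
  if auth.chainId != 0 and auth.chainId != currentChainId:
    return none(Address)
  # signature recovery elided; assume success here
  some(default(Address))

proc applyAuthorizations(auths: seq[Authorization], chainId: uint64) =
  for auth in auths:
    let who = authority(auth, chainId)
    if who.isNone:
      continue            # invalid authorization: skip it, the tx stays valid
    discard who.get       # apply the code delegation for this authority (elided)
```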
* Fix copyright year
* logs summarized
* fix copyright year
* add topics for logs
* fix copyright year
* bring syncer logs to info & debug level
* fix debug dockerfile
* fix: copyright error
* shift txpool logs to debug and introduce logs in rpc
* after headers, bring block download to info level
* comments for finalization summary of logs
* change literals to meaningful names
* remove unwanted data from user-facing logs
* include target logs
* remove control
* fix capitalization
* complete txpool
- Minor refactor and cleanup of gossip retry and logging.
- Wait time before verifying the gossip for a block is now proportional to the number of offers per block.
- Don't retry gossiping content after finding it in the network. When retrying gossip of a block, only the offers not yet found in the network will be re-sent.
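A rough sketch of that retry loop under assumed names (`Offer`, `gossip`, `existsInNetwork` and `baseWaitMs` are all hypothetical), using `std/asyncdispatch` for a self-contained example:

```nim
import std/asyncdispatch

type Offer = object
  key: string

proc gossip(o: Offer) {.async.} = discard                        # send the offer
proc existsInNetwork(o: Offer): Future[bool] {.async.} = return false

proc gossipWithRetry(offers: seq[Offer], retries: int, baseWaitMs = 500) {.async.} =
  var pending = offers
  for _ in 0 ..< retries:
    if pending.len == 0:
      return
    for o in pending:
      await gossip(o)
    # Wait time before verifying scales with the number of offers per block.
    await sleepAsync(baseWaitMs * pending.len)
    var stillMissing: seq[Offer]
    for o in pending:
      let found = await existsInNetwork(o)
      if not found:
        stillMissing.add o        # only re-send what was not found
    pending = stillMissing

when isMainModule:
  waitFor gossipWithRetry(@[Offer(key: "example")], retries = 3)
```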
This commit adds --csv-output support to the blocks import script.
GitHub renders CSVs as tables, and this addition would be useful in the nimbus-eth1-benchmark repo to make comparisons easy to render.
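For illustration, a sketch of appending one benchmark row per run as CSV; the column names and the `writeCsvRow` helper are assumptions, not the script's actual schema:

```nim
import std/[os, strformat]

proc writeCsvRow(path: string; blockNumber: uint64; bps, gps: float) =
  ## Appends one result row, writing the header first if the file is new.
  let isNew = not fileExists(path)
  let f = open(path, fmAppend)
  defer: f.close()
  if isNew:
    f.writeLine("block_number,blocks_per_sec,gas_per_sec")
  f.writeLine(&"{blockNumber},{bps:.2f},{gps:.2f}")

when isMainModule:
  writeCsvRow("import-stats.csv", 21_000_000'u64, 512.34, 1.25e9)
```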
* Refactor TxPool: leaner and simpler
* Rewrite test_txpool
Reduce the number of tables used from 5 to 2, and reduce the number of files.
If the price rule or other filters need to be modified, it is now far easier because there is only one table to work with (sender/nonce).
The other table is just a map from txHash to TxItemRef.
Removing transactions from the txPool, whether because a new block was produced or due to syncing, became much easier.
Removing expired transactions is also simple.
The explicit Tx Pending, Staged, or Packed status is removed. The status of a transaction can be inferred implicitly.
Developers new to TxPool can easily follow the logic.
But most importantly, we can revive test_txpool without dirty tricks and further remove the usage of getCanonicalHead, preparing for better integration with ForkedChain.
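A rough sketch of the two-table layout described above; the field names and helper procs are illustrative assumptions, not the real TxItemRef/TxPool definitions:

```nim
import std/tables

type
  Address = array[20, byte]
  Hash32  = array[32, byte]

  TxItemRef = ref object
    sender: Address
    nonce: uint64
    txHash: Hash32

  TxPool = object
    senderTab: Table[Address, Table[uint64, TxItemRef]]  # sender -> nonce -> item
    idTab: Table[Hash32, TxItemRef]                       # txHash -> item

proc insert(pool: var TxPool, item: TxItemRef) =
  pool.senderTab.mgetOrPut(item.sender, initTable[uint64, TxItemRef]())[item.nonce] = item
  pool.idTab[item.txHash] = item

proc removeUpTo(pool: var TxPool, sender: Address, nonce: uint64) =
  ## Drops all txs from `sender` with a nonce <= `nonce`, e.g. after they
  ## were included in a newly produced block or arrived via syncing.
  if sender notin pool.senderTab:
    return
  var gone: seq[uint64]
  for n, item in pool.senderTab[sender]:
    if n <= nonce:
      gone.add n
      pool.idTab.del(item.txHash)
  for n in gone:
    pool.senderTab[sender].del(n)
```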
* Revert previous change in PortalStream. Allow zero as a valid connectionId if randomly generated.
* Rename portal_*Gossip JSON-RPC endpoints to portal_*PutContent to be in line with updated portal spec.
Instead of using ancient/dirty code to set up the RPC test, use the newest methods from TxPool and ForkedChain.
Also fix some bugs in server_api discovered while using this new setup.