* move quarantine outside of chaindag
The quarantine has been part of the ChainDAG for the longest time, but
this design has a few issues:
* the function in which blocks are verified and added to the dag becomes
reentrant and therefore difficult to reason about - we're currently
using a stateful flag to work around it
* quarantined blocks bypass the processing queue, leading to a
processing stampede
* the quarantine flow is unsuitable for orphaned attestations - these
should also be quarantined eventually
Instead of processing the quarantine inside ChainDAG, this PR moves
re-queueing to `block_processor`, which is already responsible for
dealing with follow-up work when a block is added to the dag (sketched
below).
This sets the stage for keeping attestations in the quarantine as well.
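As a rough illustration of the new shape (all names and types below are
stand-ins, not the actual nimbus-eth2 API):

```nim
import std/tables

type
  Root = int
  Blck = object
    root, parent: Root
  Quarantine = object
    orphans: Table[Root, seq[Blck]]  # children waiting for a parent
  BlockProcessor = object
    queue: seq[Blck]                 # stand-in for the processing queue
    dag: seq[Root]                   # stand-in for the ChainDAG

proc addOrphan(q: var Quarantine, b: Blck) =
  q.orphans.mgetOrPut(b.parent, @[]).add b

proc popChildren(q: var Quarantine, parent: Root): seq[Blck] =
  discard q.orphans.pop(parent, result)

proc storeBlock(p: var BlockProcessor, q: var Quarantine, b: Blck) =
  p.dag.add b.root                   # dag.addBlock no longer recurses
  # follow-up work, including re-queueing, now lives here:
  for child in q.popChildren(b.root):
    p.queue.add child                # back through the normal queue
```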
Also:
* make `BlockError` `{.pure.}`
* avoid use of `ValidationResult` in block clearance (that's for gossip)
* Add exponential rewind on MissingParent (see the sketch below).
Try to avoid peers which are useless for syncing.
Fix forward sync restart at the proper point.
Fix getLocalWallSlot() to not return slots from the future.
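A possible shape for the exponential rewind (constants and names are
illustrative, not the actual sync manager code):

```nim
type RewindState = object
  failures: int   # consecutive MissingParent results

# Each consecutive MissingParent doubles the rewind distance, so a long
# run of unresolvable blocks is escaped in a logarithmic number of steps.
proc rewindSlot(r: var RewindState, head: uint64): uint64 =
  inc r.failures
  let distance = 1'u64 shl min(r.failures - 1, 63)  # 1, 2, 4, 8 ... slots
  if distance >= head: 0'u64 else: head - distance
```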
* Fix incorrect logs.
* Fix logging.
Enable logging of peer status messages at DEBUG level.
* Fix watch task to monitor operation progress rather than local head progress.
* Add more logging information.
Remove recurring failures detection mechanism.
* Concentrate all sensitive writeFile/createPath calls in one place.
Fix eth2_network_simulation for Windows.
* Remove artifacts.
* fix import
Co-authored-by: Jacek Sieka <jacek@status.im>
* Fix continuous sync queue rewinds on slow PCs.
Fix recurring disconnects on low peer score.
* Calculate the average syncing speed, not the instantaneous one (see
the sketch below).
Move speed calculation to a separate task.
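A sketch of the averaging (hypothetical `SpeedMeter` type; the real
calculation runs in its own task inside the sync manager):

```nim
type SpeedMeter = object
  samples: int
  average: float     # running average of slots/second

# Cumulative moving average: smooths over the whole sync instead of
# reporting the rate of the most recent tick.
proc update(m: var SpeedMeter, slotsPerSec: float) =
  inc m.samples
  m.average += (slotsPerSec - m.average) / float(m.samples)
```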
* Address review comments.
Calculating rewards/penalties is slow due to how we compute sets of
attesting validators, then use the sets for inclusion checks to see who
attested. The dominant cost during validated block processing / epoch
processing is hash set building and lookup.
This PR inverts the flow by removing the sets and creating a single
large validator status list, then applying all relevant state
attestations, then updating rewards and penalties.
This provides a 10x speedup to epoch processing, which in turn speeds
up both empty-slot and block processing - for example, on startup we
replay all non-finalized blocks to prime fork choice, and the same
happens when validating attestations or replaying states on a reorg.
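A simplified sketch of the inversion (types and names illustrative, not
the actual state-transition code): one status record per validator is
filled in a single pass over the attestations, then deltas are computed
in a second linear pass - no per-attestation hash sets, no repeated
lookups:

```nim
type
  ValidatorStatus = object
    attested: bool
  Attestation = object
    participants: seq[int]  # validator indices

proc computeDeltas(statuses: var seq[ValidatorStatus],
                   attestations: openArray[Attestation],
                   baseReward: int64): seq[int64] =
  for a in attestations:    # one pass over all attestations
    for idx in a.participants:
      statuses[idx].attested = true
  result = newSeq[int64](statuses.len)
  for i, s in statuses:     # one linear pass over all validators
    result[i] = if s.attested: baseReward else: -baseReward
```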
* addPeer() and addPeerNoWait() now return PeerStatus, not bool (see
the sketch below).
Minor refactoring of PeerPool.
Fix tests.
* Refactor PeerPool.
Add lenSpace.
Add tests for lenSpace.
PeerPool.add procedures now return different error codes.
Fix SyncManager break/continue problem.
Fix connectWorker break/continue problem.
Refactor connectWorker and discoveryLoop.
Fix incoming/outgoing blocking problem.
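A sketch of the richer result type (enum values illustrative, modelled
on the description above): callers can now tell "pool full" apart from
"duplicate peer" instead of getting a bare bool:

```nim
type
  PeerStatus = enum
    Success, NoSpaceError, DuplicateError
  PeerPool = object
    peers: seq[string]   # stand-in for real Peer objects
    maxPeers: int

proc lenSpace(p: PeerPool): int = p.maxPeers - p.peers.len

proc addPeerNoWait(p: var PeerPool, peer: string): PeerStatus =
  if p.lenSpace() <= 0: NoSpaceError
  elif peer in p.peers: DuplicateError
  else:
    p.peers.add peer
    Success
```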
* Refactor discovery loop.
Add checkPeer.
* Fix logic and compilation bugs.
* Adjust position of debugging log.
* Fix issue with maximum peers in PeerPool.
Optimize node record decoding.
* fix discoveryLoop.
* Remove aliases and fix tests using aliases.
* Slashing protection + interchange initial commit (the core check is
sketched below)
* Restrict the `when UseSlashingProtection` dance in other modules
* Integrate slashing tests in other all_tests
* Add attestation slashing protection support
* Add a message that mentions whether built with or without slashing protection
* no-op the initialization proc
* test slashing protection in Jenkins (temp)
* where to configure NIMFLAGS in Jenkins ...
* Jenkins -> ensure Built with slashing protection
* Add slashing protection complete import
* use Opt.get(otherwise)
* Don't use negation in proc name
* Turn slashing protection on by default
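The core rule, in miniature (illustrative only - the real database also
tracks attestation double/surround votes and supports the interchange
format):

```nim
import std/tables

type SlashingDB = object
  lastSignedSlot: Table[string, uint64]  # validator pubkey -> last slot

# Refuse to sign a proposal at or below a slot we already signed for:
# two signed blocks at the same slot are a slashable double proposal.
proc mayProposeBlock(db: var SlashingDB, pubkey: string,
                     slot: uint64): bool =
  if db.lastSignedSlot.hasKey(pubkey) and slot <= db.lastSignedSlot[pubkey]:
    return false
  db.lastSignedSlot[pubkey] = slot
  true
```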
* remove some superfluous gcsafes
* remove getTailState (unused)
* don't store old epochrefs in blocks
* document attestation pool a bit
* remove `pcs =` cruft from log
* skeleton of attester slashing pool & validators
* add skeleton for proposer slashings and voluntary exits; rename pool to the more inclusive exit pool to stay consistent with all three; ensure it is initialized by beacon_node so it is safe to merge, even if it doesn't do much yet
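The skeleton, roughly (field types illustrative - the real pool bounds
each queue):

```nim
type
  AttesterSlashing = object
  ProposerSlashing = object
  SignedVoluntaryExit = object

  # One pool for all three exit-related message types, created by
  # beacon_node at startup.
  ExitPool = object
    attesterSlashings: seq[AttesterSlashing]
    proposerSlashings: seq[ProposerSlashing]
    voluntaryExits: seq[SignedVoluntaryExit]

proc init(T: type ExitPool): T = T()
```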
* Syncing workers are no longer bound to peers.
Sync status is now printed in statusbar.
* Add `SyncQueue.outSlot` to statusbar too.
* Add `inRangeEvent` and `rangeAge` parameter.
* Fix `rangeAge` so it does not depend on SyncQueue's latest slot.
Fix syncManager to start from latest local head slot.
* Add notInRange event.
* Remove suspects field.