* reworked some of the das core specs, pr'd to check whether the conflicting type issue is specific to my machine or not
* bumped nim-blscurve to 9c6e80c6109133c0af3025654f5a8820282cff05, same as unstable
* bumped nim-eth2-scenarios and nim-nat-traversal to be on par with unstable, added more patches, made the peerdas devnet branch backward compatible, peerdas passing new ssz tests per alpha3, disabled electra fixture tests as the branch hasn't been rebased for a while
* refactor test fixture files
* rm: serializeDataColumn
* refactor: moved data columns extracted from blobs during block proposal to the heap
* disable blob broadcast in peerdas devnet
* fix addBlock in message router
* fix: data column iterator
* added debug checkpoints to check CI
* refactor if else conditions
* add: updated das core specs to alpha 3; unit tests pass
The fallback taken when the blobless quarantine contains a block that
already has all of its blobs modifies the collection while iterating
over it, potentially triggering an assertion if that path is reachable.
Using a second loop to process this situation resolves that.
When checking for `MissingParent`, it may be that the parent block was
already discovered as part of a prior run. In that case, it can be
loaded from storage and processed without having to rediscover the
entire branch from the network. This is similar to #6112 but for blocks
that are discovered via gossip / sync mgr instead of via request mgr.
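A minimal Nim sketch of that idea, with hypothetical `db`,
`processBlock`, and `pendingFetch` stand-ins (not the actual
nimbus-eth2 API): on `MissingParent`, local storage is consulted
before scheduling a network fetch.

```nim
import std/tables

# Hypothetical types/helpers for illustration only; the real BeaconChainDB,
# quarantine, and request-manager interfaces differ.
type
  Root = string
  SignedBlock = object
    root, parentRoot: Root

var db = initTable[Root, SignedBlock]()   # stand-in for on-disk block storage
var pendingFetch: seq[Root]               # stand-in for the request-manager queue

proc processBlock(blck: SignedBlock) =
  echo "processing ", blck.root

proc onMissingParent(blck: SignedBlock) =
  ## Parent unknown to fork choice: try storage first, so a branch that was
  ## already discovered in a prior run is not re-downloaded from the network.
  if db.hasKey(blck.parentRoot):
    processBlock(db[blck.parentRoot])     # answer locally from the database
  else:
    pendingFetch.add blck.parentRoot      # fall back to a network fetch
```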
When restarting the beacon node, orphaned blocks remain in the database,
but on startup only the canonical chain as selected by fork choice is
loaded. When a new block is discovered that builds on top of an orphaned
block, the orphaned block is re-downloaded via the sync/request manager
despite already being present on disk. Such queries can be answered
locally to speed up discovery of alternate forks.
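A sketch of the lookup order this enables, again with hypothetical
names: after a restart the in-memory DAG only holds the canonical
chain, so the database is checked for orphaned branches before falling
back to sync.

```nim
import std/[options, tables]

# Hypothetical stand-ins; the real DAG / BeaconChainDB types differ.
type
  Root = string
  SignedBlock = object
    root, parentRoot: Root

var dag = initTable[Root, SignedBlock]()  # canonical chain, loaded at startup
var db = initTable[Root, SignedBlock]()   # on-disk store, may still hold orphans

proc resolveBlock(root: Root): Option[SignedBlock] =
  ## Resolve a block locally before asking the network.
  if root in dag:
    some(dag[root])           # part of the canonical chain
  elif root in db:
    some(db[root])            # orphaned block found on disk: no re-download
  else:
    none(SignedBlock)         # genuinely unknown: left to sync/request manager
```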