Prune `BlockRef` on finalization (#3513)
Up until now, the block dag has been using `BlockRef`, a structure adapted for a full DAG, to represent all of chain history. This is a correct and simple design, but it does not exploit the linearity of the chain once parts of it finalize.

By pruning the in-memory `BlockRef` structure at finalization, we save, at the time of writing, a cool ~250mb (or ~25%) chunk of memory, landing us at a steady state of ~750mb normal memory usage for a validating node. Above all though, we prevent memory usage from growing proportionally with the length of the chain, something that would not be sustainable over time - instead, the steady-state memory usage is roughly determined by the validator set size, which grows much more slowly. With these changes, the core should remain sustainable memory-wise post-merge all the way to withdrawals (when the validator set is expected to grow).

In-memory indices are still used for the "hot" unfinalized portion of the chain - this ensures that consensus performance remains unchanged. What changes is that for historical access, we use a db-based linear slot index which is cache- and disk-friendly, keeping the cost of accessing historical data at a similar level as before and achieving the savings at no perceivable cost to functionality or performance.

A nice collateral benefit is the almost-instant startup, since we no longer load any large indices at dag init. The cost of this functionality can instead be found in the complexity of having to deal with two ways of traversing the chain - by `BlockRef` and by slot.
* use `BlockId` instead of `BlockRef` where finalized / historical data may be required
* simplify clearance pre-advancement
* remove dag.finalizedBlocks (~50:ish mb)
* remove `getBlockAtSlot` - use `getBlockIdAtSlot` instead
* `parent` and `atSlot` for `BlockId` now require a `ChainDAGRef` instance, unlike `BlockRef` traversal
* prune `BlockRef` parents on finality (~200:ish mb)
* speed up ChainDAG init by not loading finalized history index
* mess up light client server error handling - this needs revisiting :)
parent 9a2b50d2c6
commit 05ffe7b2bf
@@ -64,12 +64,11 @@ OK: 1/1 Fail: 0/1 Skip: 0/1
 ```diff
 + Adding the same block twice returns a Duplicate error [Preset: mainnet] OK
 + Simple block add&get [Preset: mainnet] OK
-+ getBlockRef returns none for missing blocks OK
-+ loading tail block works [Preset: mainnet] OK
++ basic ops OK
 + updateHead updates head and headState [Preset: mainnet] OK
 + updateState sanity [Preset: mainnet] OK
 ```
-OK: 6/6 Fail: 0/6 Skip: 0/6
+OK: 5/5 Fail: 0/5 Skip: 0/5
 ## Block processor [Preset: mainnet]
 ```diff
 + Reverse order block add & get [Preset: mainnet] OK
@@ -98,11 +97,6 @@ OK: 2/2 Fail: 0/2 Skip: 0/2
 + parent sanity OK
 ```
 OK: 2/2 Fail: 0/2 Skip: 0/2
-## ChainDAG helpers
-```diff
-+ epochAncestor sanity [Preset: mainnet] OK
-```
-OK: 1/1 Fail: 0/1 Skip: 0/1
 ## DeleteKeys requests [Preset: mainnet]
 ```diff
 + Deleting not existing key [Preset: mainnet] OK
@@ -525,4 +519,4 @@ OK: 1/1 Fail: 0/1 Skip: 0/1
 OK: 1/1 Fail: 0/1 Skip: 0/1
 
 ---TOTAL---
-OK: 289/295 Fail: 0/295 Skip: 6/295
+OK: 287/293 Fail: 0/293 Skip: 6/293
@@ -60,10 +60,6 @@ proc addResolvedHeadBlock(
   if not foundHead:
     dag.heads.add(blockRef)
 
-  # Up to here, state.data was referring to the new state after the block had
-  # been applied but the `blck` field was still set to the parent
-  dag.clearanceBlck = blockRef
-
   # Regardless of the chain we're on, the deposits come in the same order so
   # as soon as we import a block, we'll also update the shared public key
   # cache
@@ -71,7 +67,7 @@ proc addResolvedHeadBlock(
 
   # Getting epochRef with the state will potentially create a new EpochRef
   let
-    epochRef = dag.getEpochRef(state, blockRef, cache)
+    epochRef = dag.getEpochRef(state, cache)
     epochRefTick = Moment.now()
 
   debug "Block resolved",
@@ -122,19 +118,20 @@ proc advanceClearanceState*(dag: ChainDAGRef) =
   # epoch transition ahead of time.
   # Notably, we use the clearance state here because that's where the block will
   # first be seen - later, this state will be copied to the head state!
-  if dag.clearanceBlck.slot == getStateField(dag.clearanceState, slot):
-    let next = dag.clearanceBlck.atSlot(dag.clearanceBlck.slot + 1)
+  let advanced = withState(dag.clearanceState):
+    state.data.slot > state.data.latest_block_header.slot
+  if not advanced:
+    let next = getStateField(dag.clearanceState, slot) + 1
 
     let startTick = Moment.now()
-    var cache = StateCache()
-    if not updateState(dag, dag.clearanceState, next, true, cache):
-      # The next head update will likely fail - something is very wrong here
-      error "Cannot advance to next slot, database corrupt?",
-        clearance = shortLog(dag.clearanceBlck),
-        next = shortLog(next)
-    else:
-      debug "Prepared clearance state for next block",
-        next, updateStateDur = Moment.now() - startTick
+    var
+      cache = StateCache()
+      info = ForkedEpochInfo()
+
+    dag.advanceSlots(dag.clearanceState, next, true, cache, info)
+
+    debug "Prepared clearance state for next block",
+      next, updateStateDur = Moment.now() - startTick
 
 proc addHeadBlock*(
     dag: ChainDAGRef, verifier: var BatchVerifier,
@@ -216,17 +213,17 @@ proc addHeadBlock*(
   # by the time a new block reaches this point, the parent block will already
   # have "established" itself in the network to some degree at least.
   var cache = StateCache()
+  let clearanceBlock =
+    parent.atSlot(signedBlock.message.slot).toBlockslotId.expect("not nil")
   if not updateState(
-      dag, dag.clearanceState, parent.atSlot(signedBlock.message.slot), true,
-      cache):
+      dag, dag.clearanceState, clearanceBlock, true, cache):
     # We should never end up here - the parent must be a block no older than and
     # rooted in the finalized checkpoint, hence we should always be able to
     # load its corresponding state
     error "Unable to load clearance state for parent block, database corrupt?",
       parent = shortLog(parent.atSlot(signedBlock.message.slot)),
-      clearanceBlock = shortLog(dag.clearanceBlck)
+      clearanceBlock = shortLog(clearanceBlock)
     return err(BlockError.MissingParent)
-  dag.clearanceBlck = parent
 
   let stateDataTick = Moment.now()
 
@@ -75,10 +75,10 @@ type
     ## a snapshots and applies blocks until the desired point in history is
     ## reached.
     ##
-    ## Several indices are kept in memory to enable fast lookups - their shape
-    ## and contents somewhat depend on how the chain was instantiated: sync
-    ## from genesis or checkpoint, and therefore, what features we can offer in
-    ## terms of historical replay.
+    ## Several indices are kept in memory and database to enable fast lookups -
+    ## their shape and contents somewhat depend on how the chain was
+    ## instantiated: sync from genesis or checkpoint, and therefore, what
+    ## features we can offer in terms of historical replay.
     ##
    ## Beacuse the state transition is forwards-only, checkpoint sync generally
     ## allows replaying states from that point onwards - anything earlier
@@ -94,12 +94,12 @@ type
     ## pointers may overlap and some indices might be empty as a result.
     ##
     ##                                              / heads
-    ##                                   /-------*
+    ##   | archive | history            /-------*
     ##     *--------*---------*---------------*--------------*
     ##     |        |         |               |              |
    ##  genesis  backfill    tail       finalizedHead       head
     ##              |        |                |
-    ##           archive  finalizedBlocks  forkBlocks
+    ##        db.finalizedBlocks        dag.forkBlocks
     ##
     ## The archive is the the part of finalized history for which we no longer
     ## recreate states quickly because we don't have a reasonable state to
@@ -107,9 +107,11 @@ type
     ## case - recreating history requires either replaying from genesis or
     ## providing an earlier checkpoint state.
     ##
-    ## We do not keep an in-memory index for the archive - instead, lookups are
-    ## made via `BeaconChainDB.finalizedBlocks` which covers the full range from
-    ## `backfill` to `finalizedHead`.
+    ## We do not keep an in-memory index for finalized blocks - instead, lookups
+    ## are made via `BeaconChainDB.finalizedBlocks` which covers the full range
+    ## from `backfill` to `finalizedHead`. Finalized blocks are generally not
+    ## needed for day-to-day validation work - rather, they're used for
+    ## auxiliary functionality such as historical state access and replays.
 
     db*: BeaconChainDB
       ## Database of recent chain history as well as the state and metadata
@@ -125,16 +127,10 @@ type
       ## `finalizedHead.slot..head.slot` (inclusive) - dag.heads keeps track
       ## of each potential head block in this table.
 
-    finalizedBlocks*: seq[BlockRef]
-      ## Slot -> BlockRef mapping for the finalized portion of the canonical
-      ## chain - use getBlockAtSlot to access
-      ## Covers the slots `tail.slot..finalizedHead.slot` (including the
-      ## finalized head block). Indices offset by `tail.slot`.
-
-    genesis*: BlockRef
+    genesis*: BlockId
       ## The genesis block of the network
 
-    tail*: BlockRef
+    tail*: BlockId
       ## The earliest finalized block for which we have a corresponding state -
       ## when making a replay of chain history, this is as far back as we can
       ## go - the tail block is unique in that its parent is set to `nil`, even
@@ -158,7 +154,7 @@ type
     # -----------------------------------
     # Pruning metadata
 
-    lastPrunePoint*: BlockSlot
+    lastPrunePoint*: BlockSlotId
       ## The last prune point
       ## We can prune up to finalizedHead
 
@@ -176,8 +172,6 @@ type
     clearanceState*: ForkedHashedBeaconState
       ## Cached state used during block clearance - must only be used in
       ## clearance module
-    clearanceBlck*: BlockRef
-      ## The latest block that was applied to the clearance state
 
     updateFlags*: UpdateFlags
 
@@ -233,7 +227,7 @@ type
     ## the epoch start - we call this block the "epoch ancestor" in other parts
     ## of the code.
     epoch*: Epoch
-    blck*: BlockRef
+    bid*: BlockId
 
   EpochRef* = ref object
     dag*: ChainDAGRef
@@ -304,7 +298,7 @@ template epoch*(e: EpochRef): Epoch = e.key.epoch
 
 func shortLog*(v: EpochKey): string =
   # epoch:root when logging epoch, root:slot when logging slot!
-  $v.epoch & ":" & shortLog(v.blck)
+  $v.epoch & ":" & shortLog(v.bid)
 
 template setFinalizationCb*(dag: ChainDAGRef, cb: OnFinalizedCallback) =
   dag.onFinHappened = cb
(File diff suppressed because it is too large)
@@ -72,8 +72,8 @@ proc currentSyncCommitteeForPeriod(
     syncCommitteeSlot = max(periodStartSlot, earliestSlot)
     # TODO introduce error handling in the case that we don't have historical
     # data for the period
-    bs = dag.getBlockAtSlot(syncCommitteeSlot).expect("TODO")
-  dag.withUpdatedState(tmpState, bs) do:
+    bsi = dag.getBlockIdAtSlot(syncCommitteeSlot).expect("TODO")
+  dag.withUpdatedState(tmpState, bsi) do:
     withState(state):
       when stateFork >= BeaconStateFork.Altair:
         state.data.current_sync_committee
@@ -100,8 +100,8 @@ proc syncCommitteeRootForPeriod(
   let
     periodStartSlot = period.start_slot
     syncCommitteeSlot = max(periodStartSlot, earliestSlot)
-    bs = dag.getBlockAtSlot(syncCommitteeSlot).expect("TODO")
-  dag.withUpdatedState(tmpState, bs) do:
+    bsi = dag.getBlockIdAtSlot(syncCommitteeSlot).expect("TODO")
+  dag.withUpdatedState(tmpState, bsi) do:
     withState(state):
       when stateFork >= BeaconStateFork.Altair:
         state.syncCommitteeRoot
@@ -199,17 +199,13 @@ proc createLightClientUpdates(
     dag: ChainDAGRef,
     state: HashedBeaconStateWithSyncCommittee,
     blck: TrustedSignedBeaconBlockWithSyncAggregate,
-    parent: BlockRef) =
+    parent: BlockId) =
   ## Create `LightClientUpdate` and `OptimisticLightClientUpdate` instances for
   ## a given block and its post-state, and keep track of best / latest ones.
   ## Data about the parent block's post-state and its `finalized_checkpoint`'s
   ## block's post-state needs to be cached (`cacheLightClientData`) before
   ## calling this function.
 
-  # Parent needs to be known to continue
-  if parent == nil:
-    return
-
   # Verify sync committee has sufficient participants
   template sync_aggregate(): auto = blck.message.body.sync_aggregate
   template sync_committee_bits(): auto = sync_aggregate.sync_committee_bits
@@ -402,10 +398,10 @@ proc processNewBlockForLightClient*(
 
   when signedBlock is bellatrix.TrustedSignedBeaconBlock:
     dag.cacheLightClientData(state.bellatrixData, signedBlock)
-    dag.createLightClientUpdates(state.bellatrixData, signedBlock, parent)
+    dag.createLightClientUpdates(state.bellatrixData, signedBlock, parent.bid)
   elif signedBlock is altair.TrustedSignedBeaconBlock:
     dag.cacheLightClientData(state.altairData, signedBlock)
-    dag.createLightClientUpdates(state.altairData, signedBlock, parent)
+    dag.createLightClientUpdates(state.altairData, signedBlock, parent.bid)
   elif signedBlock is phase0.TrustedSignedBeaconBlock:
     discard
   else:
@@ -467,13 +463,13 @@ proc processFinalizationForLightClient*(dag: ChainDAGRef) =
   let lowSlot = max(lastCheckpoint.epoch.start_slot, earliestSlot)
   var boundarySlot = dag.finalizedHead.slot
   while boundarySlot >= lowSlot:
-    let blck = dag.getBlockAtSlot(boundarySlot).expect("historical data").blck
-    if blck.slot >= lowSlot:
-      dag.lightClientCache.bootstrap[blck.slot] =
+    let bid = dag.getBlockIdAtSlot(boundarySlot).expect("historical data").bid
+    if bid.slot >= lowSlot:
+      dag.lightClientCache.bootstrap[bid.slot] =
         CachedLightClientBootstrap(
           current_sync_committee_branch:
-            dag.getLightClientData(blck.bid).current_sync_committee_branch)
-    boundarySlot = blck.slot.nextEpochBoundarySlot
+            dag.getLightClientData(bid).current_sync_committee_branch)
+    boundarySlot = bid.slot.nextEpochBoundarySlot
     if boundarySlot < SLOTS_PER_EPOCH:
       break
     boundarySlot -= SLOTS_PER_EPOCH
@@ -540,17 +536,23 @@ proc initBestLightClientUpdateForPeriod(
     period, update = dag.lightClientCache.bestUpdates.getOrDefault(period),
     computeDur = endTick - startTick
 
-proc maxParticipantsBlock(highBlck: BlockRef, lowSlot: Slot): BlockRef =
+proc maxParticipantsBlock(
+    dag: ChainDAGRef, highBlck: BlockId, lowSlot: Slot): Opt[BlockId] =
   ## Determine the earliest block with most sync committee signatures among
   ## ancestors of `highBlck` with at least `lowSlot` as parent block slot.
   ## Return `nil` if no block with `MIN_SYNC_COMMITTEE_PARTICIPANTS` is found.
   var
     maxParticipants = 0
-    maxBlockRef: BlockRef
+    maxBlockRef: Opt[BlockId]
     blockRef = highBlck
-  while blockRef.parent != nil and blockRef.parent.slot >= lowSlot:
+  while true:
+    let parent = dag.parent(blockRef).valueOr:
+      break
+    if parent.slot < lowSlot:
+      break
+
     let
-      bdata = dag.getForkedBlock(blockRef.bid).get
+      bdata = dag.getForkedBlock(blockRef).get
       numParticipants =
         withBlck(bdata):
           when stateFork >= BeaconStateFork.Altair:
@@ -558,19 +560,19 @@ proc initBestLightClientUpdateForPeriod(
           else: raiseAssert "Unreachable"
     if numParticipants >= maxParticipants:
       maxParticipants = numParticipants
-      maxBlockRef = blockRef
-    blockRef = blockRef.parent
+      maxBlockRef = ok blockRef
+    blockRef = parent
   if maxParticipants < MIN_SYNC_COMMITTEE_PARTICIPANTS:
-    maxBlockRef = nil
+    maxBlockRef.reset()
   maxBlockRef
 
   # Determine the block in the period with highest sync committee participation
   let
     lowSlot = max(periodStartSlot, earliestSlot)
     highSlot = min(periodEndSlot, dag.finalizedHead.blck.slot)
-    highBlck = dag.getBlockAtSlot(highSlot).expect("TODO").blck
-    bestNonFinalizedRef = maxParticipantsBlock(highBlck, lowSlot)
-  if bestNonFinalizedRef == nil:
+    highBlck = dag.getBlockIdAtSlot(highSlot).expect("TODO").bid
+    bestNonFinalizedRef = dag.maxParticipantsBlock(highBlck, lowSlot)
+  if bestNonFinalizedRef.isNone:
     dag.lightClientCache.bestUpdates[period] = default(altair.LightClientUpdate)
     return
 
@@ -581,11 +583,11 @@ proc initBestLightClientUpdateForPeriod(
   var
     tmpState = assignClone(dag.headState)
     bestFinalizedRef = bestNonFinalizedRef
-    finalizedBlck {.noinit.}: BlockRef
-  while bestFinalizedRef != nil:
+    finalizedBlck: Opt[BlockId]
+  while bestFinalizedRef.isSome:
     let
       finalizedEpoch = block:
-        dag.withUpdatedState(tmpState[], bestFinalizedRef.parent.atSlot) do:
+        dag.withUpdatedState(tmpState[], dag.parent(bestFinalizedRef.get()).expect("TODO").atSlot) do:
           withState(state):
             when stateFork >= BeaconStateFork.Altair:
               state.data.finalized_checkpoint.epoch
@@ -593,20 +595,20 @@ proc initBestLightClientUpdateForPeriod(
       do: raiseAssert "Unreachable"
     finalizedEpochStartSlot = finalizedEpoch.start_slot
     if finalizedEpochStartSlot >= lowSlot:
-      finalizedBlck = dag.getBlockAtSlot(finalizedEpochStartSlot).expect(
-        "TODO").blck
-      if finalizedBlck.slot >= lowSlot:
+      finalizedBlck = Opt[BlockId].ok(dag.getBlockIdAtSlot(finalizedEpochStartSlot).expect(
+        "TODO").bid)
+      if finalizedBlck.get.slot >= lowSlot:
         break
-    bestFinalizedRef = maxParticipantsBlock(highBlck, bestFinalizedRef.slot + 1)
+    bestFinalizedRef = dag.maxParticipantsBlock(highBlck, bestFinalizedRef.get().slot + 1)
 
   # If a finalized block has been found within the sync commitee period,
   # create a `LightClientUpdate` for that one. Otherwise, create a non-finalized
   # `LightClientUpdate`
   var update {.noinit.}: LightClientUpdate
-  if bestFinalizedRef != nil:
+  if bestFinalizedRef.isSome:
     # Fill data from attested block
-    dag.withUpdatedState(tmpState[], bestFinalizedRef.parent.atSlot) do:
-      let bdata = dag.getForkedBlock(blck.bid).get
+    dag.withUpdatedState(tmpState[], dag.parent(bestFinalizedRef.get()).expect("TODO").atSlot) do:
+      let bdata = dag.getForkedBlock(bid).get
       withStateAndBlck(state, bdata):
         when stateFork >= BeaconStateFork.Altair:
           update.attested_header =
@@ -617,18 +619,18 @@ proc initBestLightClientUpdateForPeriod(
       do: raiseAssert "Unreachable"
 
     # Fill data from signature block
-    let bdata = dag.getForkedBlock(bestFinalizedRef.bid).get
+    let bdata = dag.getForkedBlock(bestFinalizedRef.get()).get
     withBlck(bdata):
       when stateFork >= BeaconStateFork.Altair:
         update.sync_aggregate =
           isomorphicCast[SyncAggregate](blck.message.body.sync_aggregate)
       else: raiseAssert "Unreachable"
     update.fork_version =
-      dag.cfg.forkAtEpoch(bestFinalizedRef.slot.epoch).current_version
+      dag.cfg.forkAtEpoch(bestFinalizedRef.get().slot.epoch).current_version
 
     # Fill data from finalized block
-    dag.withUpdatedState(tmpState[], finalizedBlck.atSlot) do:
-      let bdata = dag.getForkedBlock(blck.bid).get
+    dag.withUpdatedState(tmpState[], finalizedBlck.get().atSlot) do:
+      let bdata = dag.getForkedBlock(bid).get
       withStateAndBlck(state, bdata):
         when stateFork >= BeaconStateFork.Altair:
           update.next_sync_committee =
@@ -641,8 +643,8 @@ proc initBestLightClientUpdateForPeriod(
       do: raiseAssert "Unreachable"
   else:
     # Fill data from attested block
-    dag.withUpdatedState(tmpState[], bestNonFinalizedRef.parent.atSlot) do:
-      let bdata = dag.getForkedBlock(blck.bid).get
+    dag.withUpdatedState(tmpState[], dag.parent(bestNonFinalizedRef.get()).expect("TODO").atSlot) do:
+      let bdata = dag.getForkedBlock(bid).get
       withStateAndBlck(state, bdata):
         when stateFork >= BeaconStateFork.Altair:
           update.attested_header =
@@ -657,14 +659,14 @@ proc initBestLightClientUpdateForPeriod(
       do: raiseAssert "Unreachable"
 
     # Fill data from signature block
-    let bdata = dag.getForkedBlock(bestNonFinalizedRef.bid).get
+    let bdata = dag.getForkedBlock(bestNonFinalizedRef.get()).get
     withBlck(bdata):
       when stateFork >= BeaconStateFork.Altair:
         update.sync_aggregate =
           isomorphicCast[SyncAggregate](blck.message.body.sync_aggregate)
       else: raiseAssert "Unreachable"
     update.fork_version =
-      dag.cfg.forkAtEpoch(bestNonFinalizedRef.slot.epoch).current_version
+      dag.cfg.forkAtEpoch(bestNonFinalizedRef.get.slot.epoch).current_version
   dag.lightClientCache.bestUpdates[period] = update
 
 proc initLightClientBootstrapForPeriod(
|
@ -699,7 +701,7 @@ proc initLightClientBootstrapForPeriod(
|
||||||
nextBoundarySlot = lowBoundarySlot
|
nextBoundarySlot = lowBoundarySlot
|
||||||
while nextBoundarySlot <= highBoundarySlot:
|
while nextBoundarySlot <= highBoundarySlot:
|
||||||
let
|
let
|
||||||
blck = dag.getBlockAtSlot(nextBoundarySlot).expect("TODO").blck
|
blck = dag.getBlockIdAtSlot(nextBoundarySlot).expect("TODO").bid
|
||||||
boundarySlot = blck.slot.nextEpochBoundarySlot
|
boundarySlot = blck.slot.nextEpochBoundarySlot
|
||||||
if boundarySlot == nextBoundarySlot and
|
if boundarySlot == nextBoundarySlot and
|
||||||
blck.slot >= lowSlot and blck.slot <= highSlot and
|
blck.slot >= lowSlot and blck.slot <= highSlot and
|
||||||
|
@@ -741,11 +743,11 @@ proc initLightClientCache*(dag: ChainDAGRef) =
   # first build a todo list, then process them in ascending order
   let lowSlot = max(finalizedSlot, dag.computeEarliestLightClientSlot)
   var
-    blocksBetween = newSeqOfCap[BlockRef](dag.head.slot - lowSlot + 1)
-    blockRef = dag.head
+    blocksBetween = newSeqOfCap[BlockId](dag.head.slot - lowSlot + 1)
+    blockRef = dag.head.bid
   while blockRef.slot > lowSlot:
     blocksBetween.add blockRef
-    blockRef = blockRef.parent
+    blockRef = dag.parent(blockRef).expect("TODO")
   blocksBetween.add blockRef

   # Process blocks (reuses `dag.headState`, but restores it to the current head)
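The hunk above replaces in-memory `BlockRef.parent` pointers with `dag.parent(...)` lookups on a plain `BlockId`. As a rough illustrative model only (not the actual Nim code; all names here are hypothetical): the hot, unfinalized portion of the chain keeps parent pointers, while finalized history is linear and can be answered from a slot-ordered index:

```python
# Hypothetical model of the BlockRef -> BlockId split described in this PR:
# finalized blocks live in a flat slot-ordered store (the "linear slot index"),
# unfinalized blocks keep in-memory parent pointers (the full DAG part).

class Dag:
    def __init__(self, finalized, unfinalized_parents):
        # finalized: list of block ids ordered by slot
        # unfinalized_parents: dict mapping block id -> parent block id
        self.finalized = finalized
        self.unfinalized_parents = unfinalized_parents

    def parent(self, bid):
        # Hot path: in-memory pointer for unfinalized blocks.
        if bid in self.unfinalized_parents:
            return self.unfinalized_parents[bid]
        # Cold path: finalized history is linear, so the parent is
        # simply the previous entry in the slot-ordered index.
        try:
            i = self.finalized.index(bid)
        except ValueError:
            return None
        return self.finalized[i - 1] if i > 0 else None

dag = Dag(finalized=["g", "a", "b"],
          unfinalized_parents={"d": "c", "c": "b"})

# Walk from an unfinalized head back to genesis, as the rewritten
# traversal loop does with dag.parent(blockRef).
chain = ["d"]
while (p := dag.parent(chain[-1])) is not None:
    chain.append(p)
print(chain)  # ['d', 'c', 'b', 'a', 'g']
```

This is why `parent` now needs the dag instance: once the `BlockRef` graph is pruned at finalization, the parent relationship for old blocks must be recomputed from the index rather than followed through a pointer.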
@@ -759,7 +761,7 @@ proc initLightClientCache*(dag: ChainDAGRef) =
     doAssert dag.updateState(
       dag.headState, blockRef.atSlot(), save = false, cache)
     withStateVars(dag.headState):
-      let bdata = dag.getForkedBlock(blockRef.bid).get
+      let bdata = dag.getForkedBlock(blockRef).get
       withStateAndBlck(state, bdata):
         when stateFork >= BeaconStateFork.Altair:
           # Cache data for `LightClientUpdate` of descendant blocks
@@ -788,13 +790,13 @@ proc initLightClientCache*(dag: ChainDAGRef) =
       # This is because light clients are unable to advance slots.
       if checkpoint.root != dag.finalizedHead.blck.root:
         let cpRef =
-          dag.getBlockAtSlot(checkpoint.epoch.start_slot).expect("TODO").blck
-        if cpRef != nil and cpRef.slot >= earliestSlot:
-          assert cpRef.bid.root == checkpoint.root
+          dag.getBlockIdAtSlot(checkpoint.epoch.start_slot).expect("TODO").bid
+        if cpRef.slot >= earliestSlot:
+          assert cpRef.root == checkpoint.root
           doAssert dag.updateState(
             tmpState[], cpRef.atSlot, save = false, tmpCache)
           withStateVars(tmpState[]):
-            let bdata = dag.getForkedBlock(cpRef.bid).get
+            let bdata = dag.getForkedBlock(cpRef).get
             withStateAndBlck(state, bdata):
               when stateFork >= BeaconStateFork.Altair:
                 dag.cacheLightClientData(state, blck, isNew = false)
@@ -802,7 +804,7 @@ proc initLightClientCache*(dag: ChainDAGRef) =

       # Create `LightClientUpdate` for non-finalized blocks.
       if blockRef.slot > finalizedSlot:
-        dag.createLightClientUpdates(state, blck, blockRef.parent)
+        dag.createLightClientUpdates(state, blck, dag.parent(blockRef).expect("TODO"))
     else: raiseAssert "Unreachable"

   let lightClientEndTick = Moment.now()
@@ -879,7 +881,7 @@ proc getLightClientBootstrap*(
     if cachedBootstrap.current_sync_committee_branch.isZeroMemory:
       if dag.importLightClientData == ImportLightClientData.OnDemand:
         var tmpState = assignClone(dag.headState)
-        dag.withUpdatedState(tmpState[], dag.getBlockAtSlot(slot).expect("TODO")) do:
+        dag.withUpdatedState(tmpState[], dag.getBlockIdAtSlot(slot).expect("TODO")) do:
           withState(state):
             when stateFork >= BeaconStateFork.Altair:
               state.data.build_proof(

@@ -299,11 +299,11 @@ proc validateBeaconBlock*(
     return errIgnore("BeaconBlock: already seen")

   let
-    slotBlock = getBlockAtSlot(dag, signed_beacon_block.message.slot)
+    slotBlock = getBlockIdAtSlot(dag, signed_beacon_block.message.slot)

   if slotBlock.isSome() and slotBlock.get().isProposed() and
-      slotBlock.get().blck.slot == signed_beacon_block.message.slot:
-    let curBlock = dag.getForkedBlock(slotBlock.get().blck.bid)
+      slotBlock.get().bid.slot == signed_beacon_block.message.slot:
+    let curBlock = dag.getForkedBlock(slotBlock.get().bid)
     if curBlock.isOk():
       let data = curBlock.get()
       if getForkedBlockField(data, proposer_index) ==

@@ -119,7 +119,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
       sid = state_id.valueOr:
         return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
                                          $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
         if sid.kind == StateQueryKind.Root:
           # TODO (cheatfate): Its impossible to retrieve state by `state_root`
           # in current version of database.
@@ -127,7 +127,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
           return RestApiResponse.jsonError(Http404, StateNotFoundError,
                                            $error)

-    node.withStateForBlockSlot(bslot):
+    node.withStateForBlockSlotId(bslot):
       return RestApiResponse.jsonResponse((root: stateRoot))

     return RestApiResponse.jsonError(Http404, StateNotFoundError)
@@ -139,7 +139,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
      sid = state_id.valueOr:
        return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
                                         $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
        if sid.kind == StateQueryKind.Root:
          # TODO (cheatfate): Its impossible to retrieve state by `state_root`
          # in current version of database.
@@ -147,7 +147,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
           return RestApiResponse.jsonError(Http404, StateNotFoundError,
                                            $error)

-    node.withStateForBlockSlot(bslot):
+    node.withStateForBlockSlotId(bslot):
       return RestApiResponse.jsonResponse(
         (
           previous_version: getStateField(state, fork).previous_version,
@@ -165,7 +165,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
      sid = state_id.valueOr:
        return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
                                         $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
        if sid.kind == StateQueryKind.Root:
          # TODO (cheatfate): Its impossible to retrieve state by `state_root`
          # in current version of database.
@@ -173,13 +173,11 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
           return RestApiResponse.jsonError(Http404, StateNotFoundError,
                                            $error)

-    node.withStateForBlockSlot(bslot):
+    node.withStateForBlockSlotId(bslot):
       return RestApiResponse.jsonResponse(
         (
-          previous_justified:
-            getStateField(state, previous_justified_checkpoint),
-          current_justified:
-            getStateField(state, current_justified_checkpoint),
+          previous_justified: getStateField(state, previous_justified_checkpoint),
+          current_justified: getStateField(state, current_justified_checkpoint),
           finalized: getStateField(state, finalized_checkpoint)
         )
       )
@@ -193,7 +191,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
      sid = state_id.valueOr:
        return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
                                         $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
        if sid.kind == StateQueryKind.Root:
          # TODO (cheatfate): Its impossible to retrieve state by `state_root`
          # in current version of database.
@@ -223,7 +221,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
                   $res.error())
           res.get()

-    node.withStateForBlockSlot(bslot):
+    node.withStateForBlockSlotId(bslot):
       let
         current_epoch = getStateField(state, slot).epoch()
         validatorsCount = lenu64(getStateField(state, validators))
@@ -320,7 +318,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
      vid = validator_id.valueOr:
        return RestApiResponse.jsonError(Http400, InvalidValidatorIdValueError,
                                         $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
        if sid.kind == StateQueryKind.Root:
          # TODO (cheatfate): Its impossible to retrieve state by `state_root`
          # in current version of database.
@@ -328,7 +326,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
           return RestApiResponse.jsonError(Http404, StateNotFoundError,
                                            $error)

-    node.withStateForBlockSlot(bslot):
+    node.withStateForBlockSlotId(bslot):
       let
         current_epoch = getStateField(state, slot).epoch()
         validatorsCount = lenu64(getStateField(state, validators))
@@ -338,8 +336,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
           let vid = validator_id.get()
           case vid.kind
           of ValidatorQueryKind.Key:
-            let optIndices = keysToIndices(node.restKeysCache, state,
-                                           [vid.key])
+            let optIndices = keysToIndices(node.restKeysCache, state, [vid.key])
             if optIndices[0].isNone():
               return RestApiResponse.jsonError(Http404, ValidatorNotFoundError)
             optIndices[0].get()
@@ -382,7 +379,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
      sid = state_id.valueOr:
        return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
                                         $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
        if sid.kind == StateQueryKind.Root:
          # TODO (cheatfate): Its impossible to retrieve state by `state_root`
          # in current version of database.
@@ -401,7 +398,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
                    MaximumNumberOfValidatorIdsError)
           ires

-    node.withStateForBlockSlot(bslot):
+    node.withStateForBlockSlotId(bslot):
       let validatorsCount = lenu64(getStateField(state, validators))

       let indices =
@@ -450,7 +447,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
       if len(validatorIds) == 0:
         # There is no indices, so we going to return balances of all
         # known validators.
-        for index, balance in getStateField(state, balances).asSeq.pairs():
+        for index, balance in getStateField(state, balances).pairs():
           res.add(RestValidatorBalance.init(ValidatorIndex(index),
                                             balance))
       else:
@@ -471,7 +468,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
      sid = state_id.valueOr:
        return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
                                         $error)
-      bslot = node.getBlockSlot(sid).valueOr:
+      bslot = node.getBlockSlotId(sid).valueOr:
        if sid.kind == StateQueryKind.Root:
          # TODO (cheatfate): Its impossible to retrieve state by `state_root`
          # in current version of database.
|
@ -537,7 +534,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
some(res)
|
some(res)
|
||||||
else:
|
else:
|
||||||
none[Slot]()
|
none[Slot]()
|
||||||
node.withStateForBlockSlot(bslot):
|
node.withStateForBlockSlotId(bslot):
|
||||||
proc getCommittee(slot: Slot,
|
proc getCommittee(slot: Slot,
|
||||||
index: CommitteeIndex): RestBeaconStatesCommittees =
|
index: CommitteeIndex): RestBeaconStatesCommittees =
|
||||||
let validators = get_beacon_committee(state, slot, index, cache)
|
let validators = get_beacon_committee(state, slot, index, cache)
|
||||||
|
@ -583,7 +580,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
sid = state_id.valueOr:
|
sid = state_id.valueOr:
|
||||||
return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
|
return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
|
||||||
$error)
|
$error)
|
||||||
bslot = node.getBlockSlot(sid).valueOr:
|
bslot = node.getBlockSlotId(sid).valueOr:
|
||||||
if sid.kind == StateQueryKind.Root:
|
if sid.kind == StateQueryKind.Root:
|
||||||
# TODO (cheatfate): Its impossible to retrieve state by `state_root`
|
# TODO (cheatfate): Its impossible to retrieve state by `state_root`
|
||||||
# in current version of database.
|
# in current version of database.
|
||||||
|
@ -609,7 +606,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
# the state will be obtained.
|
# the state will be obtained.
|
||||||
bslot.slot.epoch()
|
bslot.slot.epoch()
|
||||||
|
|
||||||
node.withStateForBlockSlot(bslot):
|
node.withStateForBlockSlotId(bslot):
|
||||||
let keys =
|
let keys =
|
||||||
block:
|
block:
|
||||||
let res = syncCommitteeParticipants(state, qepoch)
|
let res = syncCommitteeParticipants(state, qepoch)
|
||||||
|
|
|
@ -25,7 +25,7 @@ proc installDebugApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
if state_id.isErr():
|
if state_id.isErr():
|
||||||
return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
|
return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
|
||||||
$state_id.error())
|
$state_id.error())
|
||||||
let bres = node.getBlockSlot(state_id.get())
|
let bres = node.getBlockSlotId(state_id.get())
|
||||||
if bres.isErr():
|
if bres.isErr():
|
||||||
return RestApiResponse.jsonError(Http404, StateNotFoundError,
|
return RestApiResponse.jsonError(Http404, StateNotFoundError,
|
||||||
$bres.error())
|
$bres.error())
|
||||||
|
@ -37,7 +37,7 @@ proc installDebugApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
if res.isErr():
|
if res.isErr():
|
||||||
return RestApiResponse.jsonError(Http406, ContentNotAcceptableError)
|
return RestApiResponse.jsonError(Http406, ContentNotAcceptableError)
|
||||||
res.get()
|
res.get()
|
||||||
node.withStateForBlockSlot(bslot):
|
node.withStateForBlockSlotId(bslot):
|
||||||
return
|
return
|
||||||
case state.kind
|
case state.kind
|
||||||
of BeaconStateFork.Phase0:
|
of BeaconStateFork.Phase0:
|
||||||
|
@ -60,7 +60,7 @@ proc installDebugApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
if state_id.isErr():
|
if state_id.isErr():
|
||||||
return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
|
return RestApiResponse.jsonError(Http400, InvalidStateIdValueError,
|
||||||
$state_id.error())
|
$state_id.error())
|
||||||
let bres = node.getBlockSlot(state_id.get())
|
let bres = node.getBlockSlotId(state_id.get())
|
||||||
if bres.isErr():
|
if bres.isErr():
|
||||||
return RestApiResponse.jsonError(Http404, StateNotFoundError,
|
return RestApiResponse.jsonError(Http404, StateNotFoundError,
|
||||||
$bres.error())
|
$bres.error())
|
||||||
|
@ -72,7 +72,7 @@ proc installDebugApiHandlers*(router: var RestRouter, node: BeaconNode) =
|
||||||
if res.isErr():
|
if res.isErr():
|
||||||
return RestApiResponse.jsonError(Http406, ContentNotAcceptableError)
|
return RestApiResponse.jsonError(Http406, ContentNotAcceptableError)
|
||||||
res.get()
|
res.get()
|
||||||
node.withStateForBlockSlot(bslot):
|
node.withStateForBlockSlotId(bslot):
|
||||||
return
|
return
|
||||||
if contentType == jsonMediaType:
|
if contentType == jsonMediaType:
|
||||||
RestApiResponse.jsonResponsePlain(state)
|
RestApiResponse.jsonResponsePlain(state)
|
||||||
|
|
|
@@ -230,7 +230,7 @@ proc installNimbusApiHandlers*(router: var RestRouter, node: BeaconNode) =
         return RestApiResponse.jsonError(Http503, BeaconNodeInSyncError)
       res.get()
     let proposalState = assignClone(node.dag.headState)
-    node.dag.withUpdatedState(proposalState[], head.atSlot(wallSlot)) do:
+    node.dag.withUpdatedState(proposalState[], head.atSlot(wallSlot).toBlockSlotId().expect("not nil")) do:
       return RestApiResponse.jsonResponse(
         node.getBlockProposalEth1Data(state))
     do:

@@ -60,32 +60,32 @@ proc getCurrentHead*(node: BeaconNode,
     return err("Requesting epoch for which slot would overflow")
   node.getCurrentHead(epoch.start_slot())

-proc getBlockSlot*(node: BeaconNode,
-                   stateIdent: StateIdent): Result[BlockSlot, cstring] =
+proc getBlockSlotId*(node: BeaconNode,
+                     stateIdent: StateIdent): Result[BlockSlotId, cstring] =
   case stateIdent.kind
   of StateQueryKind.Slot:
-    let bs = node.dag.getBlockAtSlot(? node.getCurrentSlot(stateIdent.slot))
-    if bs.isSome:
-      ok(bs.get())
-    else:
-      err("State for given slot not found, history not available?")
+    let bsi = node.dag.getBlockIdAtSlot(? node.getCurrentSlot(stateIdent.slot)).valueOr:
+      return err("State for given slot not found, history not available?")
+    ok(bsi)
   of StateQueryKind.Root:
     if stateIdent.root == getStateRoot(node.dag.headState):
-      ok(node.dag.head.atSlot())
+      ok(node.dag.head.bid.atSlot())
     else:
       # We don't have a state root -> BlockSlot mapping
       err("State for given root not found")
   of StateQueryKind.Named:
     case stateIdent.value
     of StateIdentType.Head:
-      ok(node.dag.head.atSlot())
+      ok(node.dag.head.bid.atSlot())
     of StateIdentType.Genesis:
       ok(node.dag.genesis.atSlot())
     of StateIdentType.Finalized:
-      ok(node.dag.finalizedHead)
+      ok(node.dag.finalizedHead.toBlockSlotId().expect("not nil"))
     of StateIdentType.Justified:
       ok(node.dag.head.atEpochStart(getStateField(
-        node.dag.headState, current_justified_checkpoint).epoch))
+        node.dag.headState, current_justified_checkpoint).epoch).toBlockSlotId().expect("not nil"))

 proc getBlockId*(node: BeaconNode, id: BlockIdent): Opt[BlockId] =
   case id.kind
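The rewritten `getBlockSlotId` above leans on the `valueOr:` early-return idiom instead of `isSome`/`get`. A hedged sketch of the same control flow in Python (names and the tuple-based result encoding are illustrative, not the real API):

```python
# Hypothetical sketch of the Result/valueOr pattern used by the
# rewritten getBlockSlotId: look up a block id at a slot and return
# an error early when history for that slot is unavailable.

def get_block_id_at_slot(slot_index, slot):
    # slot_index models the db-backed linear slot index; returns None
    # when the slot is outside the data we keep.
    return slot_index.get(slot)

def get_block_slot_id(slot_index, slot):
    bid = get_block_id_at_slot(slot_index, slot)
    if bid is None:  # the `valueOr:` early-return branch
        return ("err", "State for given slot not found, history not available?")
    return ("ok", (bid, slot))

index = {0: "genesis", 5: "a", 6: "b"}
print(get_block_slot_id(index, 6))   # ('ok', ('b', 6))
print(get_block_slot_id(index, 99))  # ('err', 'State for given slot not found, history not available?')
```

The early return keeps the happy path unindented, which is why the Nim diff collapses the five-line `isSome`/`else` block into three lines.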
@@ -94,7 +94,7 @@ proc getBlockId*(node: BeaconNode, id: BlockIdent): Opt[BlockId] =
   of BlockIdentType.Head:
     ok(node.dag.head.bid)
   of BlockIdentType.Genesis:
-    ok(node.dag.genesis.bid)
+    ok(node.dag.genesis)
   of BlockIdentType.Finalized:
     ok(node.dag.finalizedHead.blck.bid)
   of BlockQueryKind.Root:
@@ -131,17 +131,17 @@ proc disallowInterruptionsAux(body: NimNode) =
 macro disallowInterruptions(body: untyped) =
   disallowInterruptionsAux(body)

-template withStateForBlockSlot*(nodeParam: BeaconNode,
-                                blockSlotParam: BlockSlot,
-                                body: untyped): untyped =
+template withStateForBlockSlotId*(nodeParam: BeaconNode,
+                                  blockSlotIdParam: BlockSlotId,
+                                  body: untyped): untyped =
   block:
     let
       node = nodeParam
-      blockSlot = blockSlotParam
+      blockSlotId = blockSlotIdParam

     template isState(state: ForkedHashedBeaconState): bool =
-      state.matches_block_slot(blockSlot.blck.root, blockSlot.slot)
+      state.matches_block_slot(blockSlotId.bid.root, blockSlotId.slot)

     var cache {.inject, used.}: StateCache

@@ -162,11 +162,13 @@ template withStateForBlockSlot*(nodeParam: BeaconNode,
     # TODO view-types
     # Avoid the code bloat produced by the double `body` reference through a lent var
     if isState(node.dag.headState):
-      withStateVars(node.dag.headState):
-        body
+      template state: untyped {.inject, used.} = node.dag.headState
+      template stateRoot: untyped {.inject, used.} =
+        getStateRoot(node.dag.headState)
+      body
     else:
       let cachedState = if node.stateTtlCache != nil:
-        node.stateTtlCache.getClosestState(blockSlot)
+        node.stateTtlCache.getClosestState(node.dag, blockSlotId)
       else:
         nil
@@ -175,13 +177,15 @@ template withStateForBlockSlot*(nodeParam: BeaconNode,
       else:
         assignClone(node.dag.headState)

-      if node.dag.updateState(stateToAdvance[], blockSlot, false, cache):
+      if node.dag.updateState(stateToAdvance[], blockSlotId, false, cache):
         if cachedState == nil and node.stateTtlCache != nil:
           # This was not a cached state, we can cache it now
           node.stateTtlCache.add(stateToAdvance)

-        withStateVars(stateToAdvance[]):
-          body
+        template state: untyped {.inject, used.} = stateToAdvance[]
+        template stateRoot: untyped {.inject, used.} = getStateRoot(stateToAdvance[])
+
+        body

 template strData*(body: ContentBody): string =
   bind fromBytes

@@ -260,10 +260,10 @@ proc installValidatorApiHandlers*(router: var RestRouter, node: BeaconNode) =
       # in order to compute the sync committee for the epoch. See the following
       # discussion for more details:
       # https://github.com/status-im/nimbus-eth2/pull/3133#pullrequestreview-817184693
-      let bs = node.dag.getBlockAtSlot(earliestSlotInQSyncPeriod).valueOr:
+      let bsi = node.dag.getBlockIdAtSlot(earliestSlotInQSyncPeriod).valueOr:
         return RestApiResponse.jsonError(Http404, StateNotFoundError)

-      node.withStateForBlockSlot(bs):
+      node.withStateForBlockSlotId(bsi):
        let res = withState(state):
          when stateFork >= BeaconStateFork.Altair:
            produceResponse(indexList,

@@ -155,7 +155,7 @@ proc getForkedBlockFromBlockId(
     node.dag.getForkedBlock(node.dag.head.bid).valueOr:
       raise newException(CatchableError, "Block not found")
   of "genesis":
-    node.dag.getForkedBlock(node.dag.genesis.bid).valueOr:
+    node.dag.getForkedBlock(node.dag.genesis).valueOr:
       raise newException(CatchableError, "Block not found")
   of "finalized":
     node.dag.getForkedBlock(node.dag.finalizedHead.blck.bid).valueOr:

@@ -108,7 +108,9 @@ proc installNimbusApiHandlers*(rpcServer: RpcServer, node: BeaconNode) {.
     head = node.doChecksAndGetCurrentHead(wallSlot)

   let proposalState = assignClone(node.dag.headState)
-  node.dag.withUpdatedState(proposalState[], head.atSlot(wallSlot)):
+  node.dag.withUpdatedState(
+      proposalState[],
+      head.atSlot(wallSlot).toBlockSlotId().expect("not nil")):
     return node.getBlockProposalEth1Data(state)
   do:
     raise (ref CatchableError)(msg: "Trying to access pruned state")

@@ -24,10 +24,10 @@ template raiseNoAltairSupport*() =

 template withStateForStateId*(stateId: string, body: untyped): untyped =
   let
-    bs = node.stateIdToBlockSlot(stateId)
+    bsi = node.stateIdToBlockSlotId(stateId)

   template isState(state: ForkedHashedBeaconState): bool =
-    state.matches_block_slot(bs.blck.root, bs.slot)
+    state.matches_block_slot(bsi.bid.root, bsi.slot)

   if isState(node.dag.headState):
     withStateVars(node.dag.headState):
@@ -35,7 +35,7 @@ template withStateForStateId*(stateId: string, body: untyped): untyped =
       body
   else:
     let rpcState = assignClone(node.dag.headState)
-    node.dag.withUpdatedState(rpcState[], bs) do:
+    node.dag.withUpdatedState(rpcState[], bsi) do:
       body
     do:
       raise (ref CatchableError)(msg: "Trying to access pruned state")
@@ -69,10 +69,10 @@ proc parseSlot(slot: string): Slot {.raises: [Defect, CatchableError].} =
     raise newException(ValueError, "Not a valid slot number")
   Slot parsed

-proc getBlockSlotFromString*(node: BeaconNode, slot: string): BlockSlot {.raises: [Defect, CatchableError].} =
+proc getBlockSlotIdFromString*(node: BeaconNode, slot: string): BlockSlotId {.raises: [Defect, CatchableError].} =
   let parsed = parseSlot(slot)
   discard node.doChecksAndGetCurrentHead(parsed)
-  node.dag.getBlockAtSlot(parsed).valueOr:
+  node.dag.getBlockIdAtSlot(parsed).valueOr:
     raise newException(ValueError, "Block not found")

 proc getBlockIdFromString*(node: BeaconNode, slot: string): BlockId {.raises: [Defect, CatchableError].} =
@ -84,25 +84,28 @@ proc getBlockIdFromString*(node: BeaconNode, slot: string): BlockId {.raises: [D
|
||||||
else:
|
else:
|
||||||
raise (ref ValueError)(msg: "Block not found")
|
raise (ref ValueError)(msg: "Block not found")
|
||||||
|
|
||||||
proc stateIdToBlockSlot*(node: BeaconNode, stateId: string): BlockSlot {.raises: [Defect, CatchableError].} =
|
proc stateIdToBlockSlotId*(node: BeaconNode, stateId: string): BlockSlotId {.raises: [Defect, CatchableError].} =
|
||||||
case stateId:
|
case stateId:
|
||||||
of "head":
|
of "head":
|
||||||
node.dag.head.atSlot()
|
node.dag.head.bid.atSlot()
|
||||||
of "genesis":
|
of "genesis":
|
||||||
node.dag.genesis.atSlot()
|
node.dag.genesis.atSlot()
|
||||||
of "finalized":
|
of "finalized":
|
||||||
node.dag.finalizedHead
|
node.dag.finalizedHead.toBlockSlotId().expect("not nil")
|
||||||
of "justified":
|
of "justified":
|
||||||
node.dag.head.atEpochStart(
|
node.dag.head.atEpochStart(
|
||||||
getStateField(node.dag.headState, current_justified_checkpoint).epoch)
|
getStateField(
|
||||||
|
node.dag.headState, current_justified_checkpoint).epoch).
|
||||||
|
toBlockSlotId().valueOr:
|
||||||
|
raise (ref ValueError)(msg: "State not found")
|
||||||
else:
|
else:
|
||||||
if stateId.startsWith("0x"):
|
if stateId.startsWith("0x"):
|
||||||
let stateRoot = parseRoot(stateId)
|
let stateRoot = parseRoot(stateId)
|
||||||
if stateRoot == getStateRoot(node.dag.headState):
|
if stateRoot == getStateRoot(node.dag.headState):
|
||||||
node.dag.head.atSlot()
|
node.dag.head.bid.atSlot()
|
||||||
else:
|
else:
|
||||||
# We don't have a state root -> BlockSlot mapping
|
# We don't have a state root -> BlockSlot mapping
|
||||||
raise (ref ValueError)(msg: "State not found")
|
raise (ref ValueError)(msg: "State not found")
|
||||||
|
|
||||||
else: # Parse as slot number
|
else: # Parse as slot number
|
||||||
node.getBlockSlotFromString(stateId)
|
node.getBlockSlotIdFromString(stateId)
|
||||||
|
|
|
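The `stateIdToBlockSlotId` case statement above is essentially a dispatch table over the REST `state_id` string. A simplified Python sketch — names are hypothetical, and the real code resolves each case to a `BlockSlotId` via the dag rather than returning plain values:

```python
def state_id_to_slot(state_id, named, head_state_root, parse_slot):
    """Dispatch a REST state_id string to a lookup target.

    named: {"head": ..., "genesis": ..., "finalized": ..., "justified": ...}
    """
    if state_id in named:
        return named[state_id]
    if state_id.startswith("0x"):
        if state_id == head_state_root:
            return named["head"]
        # there is no state-root -> slot mapping for non-head states
        raise ValueError("State not found")
    return parse_slot(state_id)  # plain slot number
```

As in the diff, only the head state root can be matched by `0x` root; any other root fails because no reverse mapping is kept.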
@@ -9,7 +9,7 @@ import
|
||||||
chronos,
|
chronos,
|
||||||
chronicles,
|
chronicles,
|
||||||
../spec/beaconstate,
|
../spec/beaconstate,
|
||||||
../consensus_object_pools/block_pools_types
|
../consensus_object_pools/blockchain_dag
|
||||||
|
|
||||||
type
|
type
|
||||||
CacheEntry = ref object
|
CacheEntry = ref object
|
||||||
|
@@ -71,7 +71,8 @@ proc add*(cache: StateTtlCache, state: ref ForkedHashedBeaconState) =
|
||||||
cache.scheduleEntryExpiration(index)
|
cache.scheduleEntryExpiration(index)
|
||||||
|
|
||||||
proc getClosestState*(
|
proc getClosestState*(
|
||||||
cache: StateTtlCache, bs: BlockSlot): ref ForkedHashedBeaconState =
|
cache: StateTtlCache, dag: ChainDAGRef,
|
||||||
|
bsi: BlockSlotId): ref ForkedHashedBeaconState =
|
||||||
var
|
var
|
||||||
bestSlotDifference = Slot.high
|
bestSlotDifference = Slot.high
|
||||||
index = -1
|
index = -1
|
||||||
|
@@ -81,20 +82,21 @@ proc getClosestState*(
|
||||||
continue
|
continue
|
||||||
|
|
||||||
let stateSlot = getStateField(cache.entries[i][].state[], slot)
|
let stateSlot = getStateField(cache.entries[i][].state[], slot)
|
||||||
if stateSlot > bs.slot:
|
if stateSlot > bsi.slot:
|
||||||
# We can use only states that can be advanced forward in time.
|
# We can use only states that can be advanced forward in time.
|
||||||
continue
|
continue
|
||||||
|
|
||||||
let slotDifference = bs.slot - stateSlot
|
let slotDifference = bsi.slot - stateSlot
|
||||||
if slotDifference > slotDifferenceForCacheHit:
|
if slotDifference > slotDifferenceForCacheHit:
|
||||||
# The state is too old to be useful as a rewind starting point.
|
# The state is too old to be useful as a rewind starting point.
|
||||||
continue
|
continue
|
||||||
|
|
||||||
var cur = bs
|
var cur = bsi
|
||||||
for j in 0 ..< slotDifference:
|
for j in 0 ..< slotDifference:
|
||||||
cur = cur.parentOrSlot
|
cur = dag.parentOrSlot(cur).valueOr:
|
||||||
|
break
|
||||||
|
|
||||||
if not cache.entries[i].state[].matches_block(cur.blck.root):
|
if not cache.entries[i].state[].matches_block(cur.bid.root):
|
||||||
# The cached state and the requested BlockSlot are at different branches
|
# The cached state and the requested BlockSlot are at different branches
|
||||||
# of history.
|
# of history.
|
||||||
continue
|
continue
|
||||||
|
|
|
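The `getClosestState` selection loop above filters cache entries on three conditions before picking a rewind starting point. A compact Python sketch of the same filtering — `MAX_REWIND` stands in for `slotDifferenceForCacheHit`, and `on_same_branch` for the parent-walk/`matches_block` check, both hypothetical names:

```python
MAX_REWIND = 32  # hypothetical stand-in for slotDifferenceForCacheHit

def closest_cached_state(entries, target_slot, on_same_branch):
    """Pick the cached state closest below target_slot.

    entries: list of (state_slot, block_root) for each cache slot.
    Returns the index of the best entry, or None.
    """
    best, best_diff = None, None
    for i, (state_slot, root) in enumerate(entries):
        if state_slot > target_slot:
            continue  # states can only be advanced forward in time
        diff = target_slot - state_slot
        if diff > MAX_REWIND:
            continue  # too old to be a useful rewind starting point
        if not on_same_branch(root, target_slot):
            continue  # cached state sits on a different branch of history
        if best_diff is None or diff < best_diff:
            best, best_diff = i, diff
    return best
```

The diff's change is only in how the branch check walks back (via `dag.parentOrSlot` on `BlockSlotId` instead of `BlockRef.parentOrSlot`); the selection logic is unchanged.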
@@ -12,7 +12,7 @@ import
|
||||||
eth/p2p/discoveryv5/random2,
|
eth/p2p/discoveryv5/random2,
|
||||||
../spec/datatypes/base,
|
../spec/datatypes/base,
|
||||||
../spec/[helpers, network],
|
../spec/[helpers, network],
|
||||||
../consensus_object_pools/[block_pools_types, spec_cache]
|
../consensus_object_pools/[blockchain_dag, spec_cache]
|
||||||
|
|
||||||
export base, helpers, network, sets, tables
|
export base, helpers, network, sets, tables
|
||||||
|
|
||||||
|
|
|
@@ -442,7 +442,8 @@ proc makeBeaconBlockForHeadAndSlot*(node: BeaconNode,
|
||||||
let
|
let
|
||||||
proposalState = assignClone(node.dag.headState)
|
proposalState = assignClone(node.dag.headState)
|
||||||
|
|
||||||
node.dag.withUpdatedState(proposalState[], head.atSlot(slot - 1)) do:
|
# TODO fails at checkpoint synced head
|
||||||
|
node.dag.withUpdatedState(proposalState[], head.atSlot(slot - 1).toBlockSlotId().expect("not nil")) do:
|
||||||
# Advance to the given slot without calculating state root - we'll only
|
# Advance to the given slot without calculating state root - we'll only
|
||||||
# need a state root _with_ the block applied
|
# need a state root _with_ the block applied
|
||||||
var info: ForkedEpochInfo
|
var info: ForkedEpochInfo
|
||||||
|
|
|
@@ -17,7 +17,7 @@ import
|
||||||
state_transition_epoch,
|
state_transition_epoch,
|
||||||
state_transition_block,
|
state_transition_block,
|
||||||
signatures],
|
signatures],
|
||||||
../beacon_chain/consensus_object_pools/block_pools_types
|
../beacon_chain/consensus_object_pools/blockchain_dag
|
||||||
|
|
||||||
type
|
type
|
||||||
RewardsAndPenalties* = object
|
RewardsAndPenalties* = object
|
||||||
|
@@ -110,18 +110,21 @@ func getFilePathForEpochs*(startEpoch, endEpoch: Epoch, dir: string): string =
|
||||||
epochAsString(endEpoch) & epochFileNameExtension
|
epochAsString(endEpoch) & epochFileNameExtension
|
||||||
dir / fileName
|
dir / fileName
|
||||||
|
|
||||||
func getBlockRange*(dag: ChainDAGRef, start, ends: Slot): seq[BlockRef] =
|
func getBlockRange*(dag: ChainDAGRef, start, ends: Slot): seq[BlockId] =
|
||||||
# Range of blocks in reverse order
|
# Range of blocks in reverse order
|
||||||
doAssert start < ends
|
doAssert start < ends
|
||||||
result = newSeqOfCap[BlockRef](ends - start)
|
result = newSeqOfCap[BlockId](ends - start)
|
||||||
var current = dag.head
|
var current = ends
|
||||||
while current != nil:
|
while current > start:
|
||||||
if current.slot < ends:
|
current -= 1
|
||||||
if current.slot < start or current.slot == 0: # skip genesis
|
let bsid = dag.getBlockIdAtSlot(current).valueOr:
|
||||||
break
|
continue
|
||||||
else:
|
|
||||||
result.add current
|
if bsid.bid.slot < start: # current might be empty
|
||||||
current = current.parent
|
break
|
||||||
|
|
||||||
|
result.add(bsid.bid)
|
||||||
|
current = bsid.bid.slot # skip empty slots
|
||||||
|
|
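The rewritten `getBlockRange` above walks slots backwards through the slot index instead of following `BlockRef.parent` links. A rough Python equivalent — the `lookup` callback stands in for `dag.getBlockIdAtSlot`, returning the `(slot, root)` of the block at or before a slot:

```python
def get_block_range(lookup, start, end):
    """Blocks for slots in [start, end), newest first.

    lookup(slot) -> (slot, root) of the block at or before slot, or None.
    """
    assert start < end
    result = []
    current = end
    while current > start:
        current -= 1
        bsid = lookup(current)
        if bsid is None:
            continue
        slot, root = bsid
        if slot < start:  # slot `current` was empty and its block is too old
            break
        result.append((slot, root))
        current = slot  # jump over runs of empty slots in one step
    return result
```

Setting `current = slot` after each hit is what replaces the old `current = current.parent` traversal: empty slots are skipped without touching per-block objects.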
||||||
func getOutcome(delta: RewardDelta): int64 =
|
func getOutcome(delta: RewardDelta): int64 =
|
||||||
delta.rewards.int64 - delta.penalties.int64
|
delta.rewards.int64 - delta.penalties.int64
|
||||||
|
|
|
@@ -253,7 +253,9 @@ proc cmdBench(conf: DbConf, cfg: RuntimeConfig) =
|
||||||
|
|
||||||
withTimer(timers[tLoadState]):
|
withTimer(timers[tLoadState]):
|
||||||
doAssert dag.updateState(
|
doAssert dag.updateState(
|
||||||
stateData[], blockRefs[^1].atSlot(blockRefs[^1].slot - 1), false, cache)
|
stateData[],
|
||||||
|
dag.atSlot(blockRefs[^1], blockRefs[^1].slot - 1).expect("not nil"),
|
||||||
|
false, cache)
|
||||||
|
|
||||||
template processBlocks(blocks: auto) =
|
template processBlocks(blocks: auto) =
|
||||||
for b in blocks.mitems():
|
for b in blocks.mitems():
|
||||||
|
@@ -409,12 +411,13 @@ proc cmdRewindState(conf: DbConf, cfg: RuntimeConfig) =
|
||||||
validatorMonitor = newClone(ValidatorMonitor.init())
|
validatorMonitor = newClone(ValidatorMonitor.init())
|
||||||
dag = init(ChainDAGRef, cfg, db, validatorMonitor, {})
|
dag = init(ChainDAGRef, cfg, db, validatorMonitor, {})
|
||||||
|
|
||||||
let blckRef = dag.getBlockRef(fromHex(Eth2Digest, conf.blockRoot)).valueOr:
|
let bid = dag.getBlockId(fromHex(Eth2Digest, conf.blockRoot)).valueOr:
|
||||||
echo "Block not found in database"
|
echo "Block not found in database"
|
||||||
return
|
return
|
||||||
|
|
||||||
let tmpState = assignClone(dag.headState)
|
let tmpState = assignClone(dag.headState)
|
||||||
dag.withUpdatedState(tmpState[], blckRef.atSlot(Slot(conf.slot))) do:
|
dag.withUpdatedState(
|
||||||
|
tmpState[], dag.atSlot(bid, Slot(conf.slot)).expect("block found")) do:
|
||||||
echo "Writing state..."
|
echo "Writing state..."
|
||||||
withState(state):
|
withState(state):
|
||||||
dump("./", state)
|
dump("./", state)
|
||||||
|
@@ -480,7 +483,7 @@ proc cmdExportEra(conf: DbConf, cfg: RuntimeConfig) =
|
||||||
group.update(e2, blocks[i].slot, tmp).get()
|
group.update(e2, blocks[i].slot, tmp).get()
|
||||||
|
|
||||||
withTimer(timers[tState]):
|
withTimer(timers[tState]):
|
||||||
dag.withUpdatedState(tmpState[], canonical) do:
|
dag.withUpdatedState(tmpState[], canonical.toBlockSlotId().expect("not nil")) do:
|
||||||
withState(state):
|
withState(state):
|
||||||
group.finish(e2, state.data).get()
|
group.finish(e2, state.data).get()
|
||||||
do: raiseAssert "withUpdatedState failed"
|
do: raiseAssert "withUpdatedState failed"
|
||||||
|
@@ -592,7 +595,9 @@ proc cmdValidatorPerf(conf: DbConf, cfg: RuntimeConfig) =
|
||||||
|
|
||||||
let state = newClone(dag.headState)
|
let state = newClone(dag.headState)
|
||||||
doAssert dag.updateState(
|
doAssert dag.updateState(
|
||||||
state[], blockRefs[^1].atSlot(blockRefs[^1].slot - 1), false, cache)
|
state[],
|
||||||
|
dag.atSlot(blockRefs[^1], blockRefs[^1].slot - 1).expect("block found"),
|
||||||
|
false, cache)
|
||||||
|
|
||||||
proc processEpoch() =
|
proc processEpoch() =
|
||||||
let
|
let
|
||||||
|
@@ -865,9 +870,11 @@ proc cmdValidatorDb(conf: DbConf, cfg: RuntimeConfig) =
|
||||||
var cache = StateCache()
|
var cache = StateCache()
|
||||||
let slot = if startSlot > 0: startSlot - 1 else: 0.Slot
|
let slot = if startSlot > 0: startSlot - 1 else: 0.Slot
|
||||||
if blockRefs.len > 0:
|
if blockRefs.len > 0:
|
||||||
discard dag.updateState(tmpState[], blockRefs[^1].atSlot(slot), false, cache)
|
discard dag.updateState(
|
||||||
|
tmpState[], dag.atSlot(blockRefs[^1], slot).expect("block"), false, cache)
|
||||||
else:
|
else:
|
||||||
discard dag.updateState(tmpState[], dag.head.atSlot(slot), false, cache)
|
discard dag.updateState(
|
||||||
|
tmpState[], dag.getBlockIdAtSlot(slot).expect("block"), false, cache)
|
||||||
|
|
||||||
let savedValidatorsCount = outDb.getDbValidatorsCount
|
let savedValidatorsCount = outDb.getDbValidatorsCount
|
||||||
var validatorsCount = getStateField(tmpState[], validators).len
|
var validatorsCount = getStateField(tmpState[], validators).len
|
||||||
|
@@ -956,7 +963,7 @@ proc cmdValidatorDb(conf: DbConf, cfg: RuntimeConfig) =
|
||||||
clear cache
|
clear cache
|
||||||
|
|
||||||
for bi in 0 ..< blockRefs.len:
|
for bi in 0 ..< blockRefs.len:
|
||||||
let forkedBlock = dag.getForkedBlock(blockRefs[blockRefs.len - bi - 1].bid).get()
|
let forkedBlock = dag.getForkedBlock(blockRefs[blockRefs.len - bi - 1]).get()
|
||||||
withBlck(forkedBlock):
|
withBlck(forkedBlock):
|
||||||
processSlots(blck.message.slot, {skipLastStateRootCalculation})
|
processSlots(blck.message.slot, {skipLastStateRootCalculation})
|
||||||
|
|
||||||
|
|
|
@@ -112,7 +112,7 @@ cli do(slots = SLOTS_PER_EPOCH * 6,
|
||||||
let
|
let
|
||||||
attestationHead = dag.head.atSlot(slot)
|
attestationHead = dag.head.atSlot(slot)
|
||||||
|
|
||||||
dag.withUpdatedState(tmpState[], attestationHead) do:
|
dag.withUpdatedState(tmpState[], attestationHead.toBlockSlotId.expect("not nil")) do:
|
||||||
let committees_per_slot =
|
let committees_per_slot =
|
||||||
get_committee_count_per_slot(state, slot.epoch, cache)
|
get_committee_count_per_slot(state, slot.epoch, cache)
|
||||||
|
|
||||||
|
@@ -124,7 +124,7 @@ cli do(slots = SLOTS_PER_EPOCH * 6,
|
||||||
if rand(r, 1.0) <= attesterRatio:
|
if rand(r, 1.0) <= attesterRatio:
|
||||||
let
|
let
|
||||||
data = makeAttestationData(
|
data = makeAttestationData(
|
||||||
state, slot, committee_index, blck.root)
|
state, slot, committee_index, bid.root)
|
||||||
sig =
|
sig =
|
||||||
get_attestation_signature(getStateField(state, fork),
|
get_attestation_signature(getStateField(state, fork),
|
||||||
getStateField(state, genesis_validators_root),
|
getStateField(state, genesis_validators_root),
|
||||||
|
@@ -303,7 +303,7 @@ cli do(slots = SLOTS_PER_EPOCH * 6,
|
||||||
if rand(r, 1.0) > blockRatio:
|
if rand(r, 1.0) > blockRatio:
|
||||||
return
|
return
|
||||||
|
|
||||||
dag.withUpdatedState(tmpState[], dag.head.atSlot(slot)) do:
|
dag.withUpdatedState(tmpState[], dag.getBlockIdAtSlot(slot).expect("block")) do:
|
||||||
let
|
let
|
||||||
newBlock = getNewBlock[phase0.SignedBeaconBlock](state, slot, cache)
|
newBlock = getNewBlock[phase0.SignedBeaconBlock](state, slot, cache)
|
||||||
added = dag.addHeadBlock(verifier, newBlock) do (
|
added = dag.addHeadBlock(verifier, newBlock) do (
|
||||||
|
@@ -324,7 +324,7 @@ cli do(slots = SLOTS_PER_EPOCH * 6,
|
||||||
if rand(r, 1.0) > blockRatio:
|
if rand(r, 1.0) > blockRatio:
|
||||||
return
|
return
|
||||||
|
|
||||||
dag.withUpdatedState(tmpState[], dag.head.atSlot(slot)) do:
|
dag.withUpdatedState(tmpState[], dag.getBlockIdAtSlot(slot).expect("block")) do:
|
||||||
let
|
let
|
||||||
newBlock = getNewBlock[altair.SignedBeaconBlock](state, slot, cache)
|
newBlock = getNewBlock[altair.SignedBeaconBlock](state, slot, cache)
|
||||||
added = dag.addHeadBlock(verifier, newBlock) do (
|
added = dag.addHeadBlock(verifier, newBlock) do (
|
||||||
|
@@ -345,7 +345,7 @@ cli do(slots = SLOTS_PER_EPOCH * 6,
|
||||||
if rand(r, 1.0) > blockRatio:
|
if rand(r, 1.0) > blockRatio:
|
||||||
return
|
return
|
||||||
|
|
||||||
dag.withUpdatedState(tmpState[], dag.head.atSlot(slot)) do:
|
dag.withUpdatedState(tmpState[], dag.getBlockIdAtSlot(slot).expect("block")) do:
|
||||||
let
|
let
|
||||||
newBlock = getNewBlock[bellatrix.SignedBeaconBlock](state, slot, cache)
|
newBlock = getNewBlock[bellatrix.SignedBeaconBlock](state, slot, cache)
|
||||||
added = dag.addHeadBlock(verifier, newBlock) do (
|
added = dag.addHeadBlock(verifier, newBlock) do (
|
||||||
|
@@ -430,7 +430,8 @@ cli do(slots = SLOTS_PER_EPOCH * 6,
|
||||||
withTimer(timers[tReplay]):
|
withTimer(timers[tReplay]):
|
||||||
var cache = StateCache()
|
var cache = StateCache()
|
||||||
doAssert dag.updateState(
|
doAssert dag.updateState(
|
||||||
replayState[], dag.head.atSlot(Slot(slots)), false, cache)
|
replayState[], dag.getBlockIdAtSlot(Slot(slots)).expect("block"),
|
||||||
|
false, cache)
|
||||||
|
|
||||||
echo "Done!"
|
echo "Done!"
|
||||||
|
|
||||||
|
|
|
@@ -169,7 +169,7 @@ proc stepOnBlock(
|
||||||
# 1. Move state to proper slot.
|
# 1. Move state to proper slot.
|
||||||
doAssert dag.updateState(
|
doAssert dag.updateState(
|
||||||
state,
|
state,
|
||||||
dag.head.atSlot(time.slotOrZero),
|
dag.getBlockIdAtSlot(time.slotOrZero).expect("block exists"),
|
||||||
save = false,
|
save = false,
|
||||||
stateCache
|
stateCache
|
||||||
)
|
)
|
||||||
|
|
|
@@ -29,32 +29,6 @@ proc pruneAtFinalization(dag: ChainDAGRef) =
|
||||||
if dag.needStateCachesAndForkChoicePruning():
|
if dag.needStateCachesAndForkChoicePruning():
|
||||||
dag.pruneStateCachesDAG()
|
dag.pruneStateCachesDAG()
|
||||||
|
|
||||||
suite "ChainDAG helpers":
|
|
||||||
test "epochAncestor sanity" & preset():
|
|
||||||
let
|
|
||||||
s0 = BlockRef(bid: BlockId(slot: Slot(0)))
|
|
||||||
var cur = s0
|
|
||||||
for i in 1..SLOTS_PER_EPOCH * 2:
|
|
||||||
cur = BlockRef(bid: BlockId(slot: Slot(i)), parent: cur)
|
|
||||||
|
|
||||||
let ancestor = cur.epochAncestor(cur.slot.epoch)
|
|
||||||
|
|
||||||
check:
|
|
||||||
ancestor.epoch == cur.slot.epoch
|
|
||||||
ancestor.blck != cur # should have selected a parent
|
|
||||||
|
|
||||||
ancestor.blck.epochAncestor(cur.slot.epoch) == ancestor
|
|
||||||
ancestor.blck.epochAncestor(ancestor.blck.slot.epoch) != ancestor
|
|
||||||
|
|
||||||
let
|
|
||||||
farEpoch = Epoch(42)
|
|
||||||
farTail = BlockRef(
|
|
||||||
bid: BlockId(slot: farEpoch.start_slot() + 5))
|
|
||||||
check:
|
|
||||||
|
|
||||||
not isNil(epochAncestor(farTail, farEpoch).blck)
|
|
||||||
isNil(epochAncestor(farTail, farEpoch - 1).blck)
|
|
||||||
|
|
||||||
suite "Block pool processing" & preset():
|
suite "Block pool processing" & preset():
|
||||||
setup:
|
setup:
|
||||||
var
|
var
|
||||||
|
@@ -70,16 +44,22 @@ suite "Block pool processing" & preset():
|
||||||
b1 = addTestBlock(state[], cache, attestations = att0).phase0Data
|
b1 = addTestBlock(state[], cache, attestations = att0).phase0Data
|
||||||
b2 = addTestBlock(state[], cache).phase0Data
|
b2 = addTestBlock(state[], cache).phase0Data
|
||||||
|
|
||||||
test "getBlockRef returns none for missing blocks":
|
test "basic ops":
|
||||||
check:
|
check:
|
||||||
dag.getBlockRef(default Eth2Digest).isNone()
|
dag.getBlockRef(default Eth2Digest).isNone()
|
||||||
|
|
||||||
test "loading tail block works" & preset():
|
|
||||||
let
|
let
|
||||||
b0 = dag.getForkedBlock(dag.tail.root)
|
b0 = dag.getForkedBlock(dag.tail.root)
|
||||||
|
bh = dag.getForkedBlock(dag.head.root)
|
||||||
|
bh2 = dag.getForkedBlock(dag.head.bid)
|
||||||
check:
|
check:
|
||||||
b0.isSome()
|
b0.isSome()
|
||||||
|
bh.isSome()
|
||||||
|
bh2.isSome()
|
||||||
|
|
||||||
|
dag.getBlockRef(dag.finalizedHead.blck.root).get() ==
|
||||||
|
dag.finalizedHead.blck
|
||||||
|
dag.getBlockRef(dag.head.root).get() == dag.head
|
||||||
|
|
||||||
test "Simple block add&get" & preset():
|
test "Simple block add&get" & preset():
|
||||||
let
|
let
|
||||||
|
@@ -96,7 +76,7 @@ suite "Block pool processing" & preset():
|
||||||
let
|
let
|
||||||
b2Add = dag.addHeadBlock(verifier, b2, nilPhase0Callback)
|
b2Add = dag.addHeadBlock(verifier, b2, nilPhase0Callback)
|
||||||
b2Get = dag.getForkedBlock(b2.root)
|
b2Get = dag.getForkedBlock(b2.root)
|
||||||
er = dag.findEpochRef(b1Add[], b1Add[].slot.epoch)
|
er = dag.findEpochRef(b1Add[].bid, b1Add[].slot.epoch)
|
||||||
validators = getStateField(dag.headState, validators).lenu64()
|
validators = getStateField(dag.headState, validators).lenu64()
|
||||||
|
|
||||||
check:
|
check:
|
||||||
|
@@ -106,12 +86,16 @@
|
||||||
dag.heads.len == 1
|
dag.heads.len == 1
|
||||||
dag.heads[0] == b2Add[]
|
dag.heads[0] == b2Add[]
|
||||||
dag.containsForkBlock(b2.root)
|
dag.containsForkBlock(b2.root)
|
||||||
|
dag.parent(b2Add[].bid).get() == b1Add[].bid
|
||||||
|
# head not updated yet - getBlockIdAtSlot won't give those blocks
|
||||||
|
dag.getBlockIdAtSlot(b2Add[].slot).get() ==
|
||||||
|
BlockSlotId.init(dag.genesis, b2Add[].slot)
|
||||||
|
|
||||||
not er.isErr()
|
not er.isErr()
|
||||||
# Same epoch - same epochRef
|
# Same epoch - same epochRef
|
||||||
er[] == dag.findEpochRef(b2Add[], b2Add[].slot.epoch)[]
|
er[] == dag.findEpochRef(b2Add[].bid, b2Add[].slot.epoch)[]
|
||||||
# Different epoch that was never processed
|
# Different epoch that was never processed
|
||||||
dag.findEpochRef(b1Add[], b1Add[].slot.epoch + 1).isErr()
|
dag.findEpochRef(b1Add[].bid, b1Add[].slot.epoch + 1).isErr()
|
||||||
|
|
||||||
er[].validatorKey(0'u64).isSome()
|
er[].validatorKey(0'u64).isSome()
|
||||||
er[].validatorKey(validators - 1).isSome()
|
er[].validatorKey(validators - 1).isSome()
|
||||||
|
@@ -133,27 +117,35 @@
|
||||||
dag.updateHead(b4Add[], quarantine)
|
dag.updateHead(b4Add[], quarantine)
|
||||||
dag.pruneAtFinalization()
|
dag.pruneAtFinalization()
|
||||||
|
|
||||||
|
check: # getBlockIdAtSlot operates on the head chain!
|
||||||
|
dag.getBlockIdAtSlot(b2Add[].slot).get() ==
|
||||||
|
BlockSlotId.init(b2Add[].bid, b2Add[].slot)
|
||||||
|
dag.parentOrSlot(dag.getBlockIdAtSlot(b2Add[].slot).get()).get() ==
|
||||||
|
BlockSlotId.init(b1Add[].bid, b2Add[].slot)
|
||||||
|
dag.parentOrSlot(dag.getBlockIdAtSlot(b2Add[].slot + 1).get()).get() ==
|
||||||
|
BlockSlotId.init(b2Add[].bid, b2Add[].slot)
|
||||||
|
|
||||||
var blocks: array[3, BlockId]
|
var blocks: array[3, BlockId]
|
||||||
|
|
||||||
check:
|
check:
|
||||||
dag.getBlockRange(Slot(0), 1, blocks.toOpenArray(0, 0)) == 0
|
dag.getBlockRange(Slot(0), 1, blocks.toOpenArray(0, 0)) == 0
|
||||||
blocks[0..<1] == [dag.tail.bid]
|
blocks[0..<1] == [dag.tail]
|
||||||
|
|
||||||
dag.getBlockRange(Slot(0), 1, blocks.toOpenArray(0, 1)) == 0
|
dag.getBlockRange(Slot(0), 1, blocks.toOpenArray(0, 1)) == 0
|
||||||
blocks[0..<2] == [dag.tail.bid, b1Add[].bid]
|
blocks[0..<2] == [dag.tail, b1Add[].bid]
|
||||||
|
|
||||||
dag.getBlockRange(Slot(0), 2, blocks.toOpenArray(0, 1)) == 0
|
dag.getBlockRange(Slot(0), 2, blocks.toOpenArray(0, 1)) == 0
|
||||||
blocks[0..<2] == [dag.tail.bid, b2Add[].bid]
|
blocks[0..<2] == [dag.tail, b2Add[].bid]
|
||||||
|
|
||||||
dag.getBlockRange(Slot(0), 3, blocks.toOpenArray(0, 1)) == 1
|
dag.getBlockRange(Slot(0), 3, blocks.toOpenArray(0, 1)) == 1
|
||||||
blocks[1..<2] == [dag.tail.bid] # block 3 is missing!
|
blocks[1..<2] == [dag.tail] # block 3 is missing!
|
||||||
|
|
||||||
dag.getBlockRange(Slot(2), 2, blocks.toOpenArray(0, 1)) == 0
|
dag.getBlockRange(Slot(2), 2, blocks.toOpenArray(0, 1)) == 0
|
||||||
blocks[0..<2] == [b2Add[].bid, b4Add[].bid] # block 3 is missing!
|
blocks[0..<2] == [b2Add[].bid, b4Add[].bid] # block 3 is missing!
|
||||||
|
|
||||||
# large skip step
|
# large skip step
|
||||||
dag.getBlockRange(Slot(0), uint64.high, blocks.toOpenArray(0, 2)) == 2
|
dag.getBlockRange(Slot(0), uint64.high, blocks.toOpenArray(0, 2)) == 2
|
||||||
blocks[2..2] == [dag.tail.bid]
|
blocks[2..2] == [dag.tail]
|
||||||
|
|
||||||
# large skip step
|
# large skip step
|
||||||
dag.getBlockRange(Slot(2), uint64.high, blocks.toOpenArray(0, 1)) == 1
|
dag.getBlockRange(Slot(2), uint64.high, blocks.toOpenArray(0, 1)) == 1
|
||||||
|
@@ -176,13 +168,16 @@
|
||||||
let
|
let
|
||||||
nextEpoch = dag.head.slot.epoch + 1
|
nextEpoch = dag.head.slot.epoch + 1
|
||||||
nextEpochSlot = nextEpoch.start_slot()
|
nextEpochSlot = nextEpoch.start_slot()
|
||||||
stateCheckpoint = dag.head.parent.atSlot(nextEpochSlot).stateCheckpoint
|
parentBsi = dag.head.parent.atSlot(nextEpochSlot).toBlockSlotId().get()
|
||||||
|
stateCheckpoint = dag.stateCheckpoint(parentBsi)
|
||||||
|
|
||||||
check:
|
check:
|
||||||
|
parentBsi.bid == dag.head.parent.bid
|
||||||
|
parentBsi.slot == nextEpochSlot
|
||||||
dag.getEpochRef(dag.head.parent, nextEpoch, true).isOk()
|
dag.getEpochRef(dag.head.parent, nextEpoch, true).isOk()
|
||||||
|
|
||||||
# Getting an EpochRef should not result in states being stored
|
# Getting an EpochRef should not result in states being stored
|
||||||
db.getStateRoot(stateCheckpoint.blck.root, stateCheckpoint.slot).isErr()
|
db.getStateRoot(stateCheckpoint.bid.root, stateCheckpoint.slot).isErr()
|
||||||
# this is required for the test to work - it's not a "public"
|
# this is required for the test to work - it's not a "public"
|
||||||
# post-condition of getEpochRef
|
# post-condition of getEpochRef
|
||||||
getStateField(dag.epochRefState, slot) == nextEpochSlot
|
getStateField(dag.epochRefState, slot) == nextEpochSlot
|
||||||
|
@@ -194,7 +189,7 @@
|
||||||
|
|
||||||
check:
|
check:
|
||||||
# Getting an EpochRef should not result in states being stored
|
# Getting an EpochRef should not result in states being stored
|
||||||
db.getStateRoot(stateCheckpoint.blck.root, stateCheckpoint.slot).isOk()
|
db.getStateRoot(stateCheckpoint.bid.root, stateCheckpoint.slot).isOk()
|
||||||
|
|
||||||
test "Adding the same block twice returns a Duplicate error" & preset():
|
test "Adding the same block twice returns a Duplicate error" & preset():
|
||||||
let
|
let
|
||||||
|
@@ -220,9 +215,9 @@
|
||||||
let
|
let
|
||||||
b1Add = dag.addHeadBlock(verifier, b1, nilPhase0Callback)
|
b1Add = dag.addHeadBlock(verifier, b1, nilPhase0Callback)
|
||||||
b2Add = dag.addHeadBlock(verifier, b2, nilPhase0Callback)
|
b2Add = dag.addHeadBlock(verifier, b2, nilPhase0Callback)
|
||||||
bs1 = BlockSlot(blck: b1Add[], slot: b1.message.slot)
|
bs1 = BlockSlotId.init(b1Add[].bid, b1.message.slot)
|
||||||
bs1_3 = b1Add[].atSlot(3.Slot)
|
bs1_3 = BlockSlotId.init(b1Add[].bid, 3.Slot)
|
||||||
bs2_3 = b2Add[].atSlot(3.Slot)
|
bs2_3 = BlockSlotId.init(b2Add[].bid, 3.Slot)
|
||||||
|
|
||||||
let tmpState = assignClone(dag.headState)
|
let tmpState = assignClone(dag.headState)
|
||||||
|
|
||||||
|
@@ -241,9 +236,10 @@
|
||||||
|
|
||||||
# Move back slots, but not blocks
|
# Move back slots, but not blocks
|
||||||
check:
|
check:
|
||||||
dag.updateState(tmpState[], bs1_3.parent(), false, cache)
|
dag.updateState(
|
||||||
tmpState[].latest_block_root == b1Add[].root
|
tmpState[], dag.parent(bs1_3.bid).expect("block").atSlot(), false, cache)
|
||||||
getStateField(tmpState[], slot) == bs1_3.parent().slot
|
tmpState[].latest_block_root == b1Add[].parent.root
|
||||||
|
getStateField(tmpState[], slot) == b1Add[].parent.slot
|
||||||
|
|
||||||
# Move to different block and slot
|
# Move to different block and slot
|
||||||
check:
|
check:
|
||||||
|
@@ -259,9 +255,10 @@
|
||||||
|
|
||||||
# Move back to genesis
|
# Move back to genesis
|
||||||
check:
|
check:
|
||||||
dag.updateState(tmpState[], bs1.parent(), false, cache)
|
dag.updateState(
|
||||||
|
tmpState[], dag.parent(bs1.bid).expect("block").atSlot(), false, cache)
|
||||||
tmpState[].latest_block_root == b1Add[].parent.root
|
tmpState[].latest_block_root == b1Add[].parent.root
|
||||||
getStateField(tmpState[], slot) == bs1.parent.slot
|
getStateField(tmpState[], slot) == b1Add[].parent.slot
|
||||||
|
|
||||||
when declared(GC_fullCollect): # i386 test machines seem to run low..
|
when declared(GC_fullCollect): # i386 test machines seem to run low..
|
||||||
GC_fullCollect()
|
GC_fullCollect()
|
||||||
|
@@ -374,7 +371,7 @@ suite "chain DAG finalization tests" & preset():
|
||||||
|
|
||||||
assign(tmpState[], dag.headState)
|
assign(tmpState[], dag.headState)
|
||||||
|
|
||||||
# skip slots so we can test gappy getBlockAtSlot
|
# skip slots so we can test gappy getBlockIdAtSlot
|
||||||
check process_slots(
|
check process_slots(
|
||||||
defaultRuntimeConfig, tmpState[],
|
defaultRuntimeConfig, tmpState[],
|
||||||
getStateField(tmpState[], slot) + 2.uint64,
|
getStateField(tmpState[], slot) + 2.uint64,
|
||||||
|
@@ -397,18 +394,24 @@
|
||||||
|
|
||||||
check:
|
check:
|
||||||
dag.heads.len() == 1
|
dag.heads.len() == 1
|
||||||
dag.getBlockAtSlot(0.Slot).get() == BlockSlot(blck: dag.genesis, slot: 0.Slot)
|
dag.getBlockIdAtSlot(0.Slot).get() == BlockSlotId.init(dag.genesis, 0.Slot)
|
||||||
dag.getBlockAtSlot(2.Slot).get() ==
|
dag.getBlockIdAtSlot(2.Slot).get() ==
|
||||||
BlockSlot(blck: dag.getBlockAtSlot(1.Slot).get().blck, slot: 2.Slot)
|
BlockSlotId.init(dag.getBlockIdAtSlot(1.Slot).get().bid, 2.Slot)
|
||||||
|
|
||||||
dag.getBlockAtSlot(dag.head.slot).get() == BlockSlot(
|
dag.getBlockIdAtSlot(dag.head.slot).get() == BlockSlotId.init(
|
||||||
blck: dag.head, slot: dag.head.slot.Slot)
|
dag.head.bid, dag.head.slot)
|
||||||
dag.getBlockAtSlot(dag.head.slot + 1).get() == BlockSlot(
|
dag.getBlockIdAtSlot(dag.head.slot + 1).get() == BlockSlotId.init(
|
||||||
blck: dag.head, slot: dag.head.slot.Slot + 1)
|
dag.head.bid, dag.head.slot + 1)
|
||||||
|
|
||||||
not dag.containsForkBlock(dag.getBlockAtSlot(5.Slot).get().blck.root)
|
not dag.containsForkBlock(dag.getBlockIdAtSlot(5.Slot).get().bid.root)
|
||||||
dag.containsForkBlock(dag.finalizedHead.blck.root)
|
dag.containsForkBlock(dag.finalizedHead.blck.root)
|
||||||
|
|
||||||
|
dag.getBlockRef(dag.genesis.root).isNone() # Finalized - no BlockRef
|
||||||
|
|
||||||
|
dag.getBlockRef(dag.finalizedHead.blck.root).isSome()
|
||||||
|
|
||||||
|
isNil dag.finalizedHead.blck.parent
|
||||||
|
|
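The new checks above (`getBlockRef` returning none for the genesis root, `finalizedHead.blck.parent` being nil) reflect the core memory optimization: cutting parent links when a block finalizes so the whole ancestor chain becomes garbage-collectable. A toy sketch of the idea — `BlockRef` here is a hypothetical stand-in, not the actual nimbus-eth2 type:

```python
class BlockRef:
    """Minimal stand-in: a node in the in-memory block DAG."""
    def __init__(self, slot, parent=None):
        self.slot = slot
        self.parent = parent

def prune_at_finalization(finalized_head_block):
    # Everything behind the finalized head is linear history; dropping
    # the parent link lets the GC reclaim the whole ancestor chain.
    # Historical queries go through the db slot index instead.
    finalized_head_block.parent = None
```

This is why memory no longer grows with chain length: the retained `BlockRef` graph covers only the unfinalized portion, whose size tracks fork activity rather than history.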
||||||
check:
|
check:
|
||||||
dag.db.immutableValidators.len() == getStateField(dag.headState, validators).len()
|
dag.db.immutableValidators.len() == getStateField(dag.headState, validators).len()
|
||||||
|
|
||||||
|
@@ -433,7 +436,9 @@
|
||||||
# evicted yet
|
# evicted yet
|
||||||
cache = StateCache()
|
cache = StateCache()
|
||||||
check: updateState(
|
check: updateState(
|
||||||
dag, tmpStateData[], dag.head.atSlot(dag.head.slot), false, cache)
|
dag, tmpStateData[],
|
||||||
|
dag.head.atSlot(dag.head.slot).toBlockSlotId().expect("not nil"),
|
||||||
|
false, cache)
|
||||||
|
|
||||||
check:
|
check:
|
||||||
dag.head.slot.epoch in cache.shuffled_active_validator_indices
|
dag.head.slot.epoch in cache.shuffled_active_validator_indices
|
||||||
|
@@ -451,11 +456,11 @@
|
||||||
|
|
||||||
block:
|
block:
|
||||||
let
|
let
|
||||||
finalizedCheckpoint = dag.finalizedHead.stateCheckpoint
|
finalizedCheckpoint = dag.stateCheckpoint(dag.finalizedHead.toBlockSlotId().get())
|
||||||
headCheckpoint = dag.head.atSlot(dag.head.slot).stateCheckpoint
|
headCheckpoint = dag.stateCheckpoint(dag.head.bid.atSlot())
|
||||||
check:
|
check:
|
||||||
db.getStateRoot(headCheckpoint.blck.root, headCheckpoint.slot).isSome
|
db.getStateRoot(headCheckpoint.bid.root, headCheckpoint.slot).isSome
|
||||||
db.getStateRoot(finalizedCheckpoint.blck.root, finalizedCheckpoint.slot).isSome
|
db.getStateRoot(finalizedCheckpoint.bid.root, finalizedCheckpoint.slot).isSome
|
||||||
|
|
||||||
let
|
let
|
||||||
validatorMonitor2 = newClone(ValidatorMonitor.init())
|
validatorMonitor2 = newClone(ValidatorMonitor.init())
|
||||||
|
@@ -537,10 +542,10 @@
|
||||||
var
|
var
|
||||||
cur = dag.head
|
cur = dag.head
|
||||||
tmpStateData = assignClone(dag.headState)
|
tmpStateData = assignClone(dag.headState)
|
||||||
while cur.slot >= dag.finalizedHead.slot:
|
while cur != nil: # Go all the way to dag.finalizedHead
|
||||||
assign(tmpStateData[], dag.headState)
|
assign(tmpStateData[], dag.headState)
|
||||||
check:
|
check:
|
||||||
dag.updateState(tmpStateData[], cur.atSlot(cur.slot), false, cache)
|
dag.updateState(tmpStateData[], cur.bid.atSlot(), false, cache)
|
||||||
dag.getForkedBlock(cur.bid).get().phase0Data.message.state_root ==
|
dag.getForkedBlock(cur.bid).get().phase0Data.message.state_root ==
|
||||||
getStateRoot(tmpStateData[])
|
getStateRoot(tmpStateData[])
|
||||||
getStateRoot(tmpStateData[]) == hash_tree_root(
|
getStateRoot(tmpStateData[]) == hash_tree_root(
|
||||||
|
@@ -709,15 +714,17 @@ suite "Backfill":
|
||||||
dag = init(ChainDAGRef, defaultRuntimeConfig, db, validatorMonitor, {})
|
dag = init(ChainDAGRef, defaultRuntimeConfig, db, validatorMonitor, {})
|
||||||
|
|
||||||
check:
|
check:
|
||||||
dag.getBlockRef(tailBlock.root).get() == dag.tail
|
dag.getBlockRef(tailBlock.root).get().bid == dag.tail
|
||||||
dag.getBlockRef(blocks[^2].root).isNone()
|
dag.getBlockRef(blocks[^2].root).isNone()
|
||||||
|
|
||||||
dag.getBlockAtSlot(dag.tail.slot).get().blck == dag.tail
|
dag.getBlockId(tailBlock.root).get() == dag.tail
|
||||||
dag.getBlockAtSlot(dag.tail.slot - 1).isNone()
|
dag.getBlockId(blocks[^2].root).isNone()
|
||||||
|
|
||||||
dag.getBlockAtSlot(Slot(0)).get().blck == dag.genesis
|
dag.getBlockIdAtSlot(dag.tail.slot).get().bid == dag.tail
|
||||||
dag.getBlockIdAtSlot(Slot(0)).get() == dag.genesis.bid.atSlot()
|
dag.getBlockIdAtSlot(dag.tail.slot - 1).isNone()
|
||||||
dag.getBlockIdAtSlot(Slot(1)).isNone
|
|
||||||
|
dag.getBlockIdAtSlot(Slot(0)).get() == dag.genesis.atSlot()
|
||||||
|
dag.getBlockIdAtSlot(Slot(1)).isNone()
|
||||||
|
|
||||||
# No epochref for pre-tail epochs
|
# No epochref for pre-tail epochs
|
||||||
dag.getEpochRef(dag.tail, dag.tail.slot.epoch - 1, true).isErr()
|
dag.getEpochRef(dag.tail, dag.tail.slot.epoch - 1, true).isErr()
|
||||||
|
@@ -739,12 +746,13 @@ suite "Backfill":
|
||||||
check:
|
check:
|
||||||
dag.addBackfillBlock(blocks[^2].phase0Data).isOk()
|
dag.addBackfillBlock(blocks[^2].phase0Data).isOk()
|
||||||
|
|
||||||
dag.getBlockRef(tailBlock.root).get() == dag.tail
|
dag.getBlockRef(tailBlock.root).get().bid == dag.tail
|
||||||
dag.getBlockRef(blocks[^2].root).isNone()
|
dag.getBlockRef(blocks[^2].root).isNone()
|
||||||
|
|
||||||
dag.getBlockAtSlot(dag.tail.slot).get().blck == dag.tail
|
dag.getBlockId(tailBlock.root).get() == dag.tail
|
||||||
dag.getBlockAtSlot(dag.tail.slot - 1).isNone()
|
dag.getBlockId(blocks[^2].root).get().root == blocks[^2].root
|
||||||
|
|
||||||
|
dag.getBlockIdAtSlot(dag.tail.slot).get().bid == dag.tail
|
||||||
dag.getBlockIdAtSlot(dag.tail.slot - 1).get() ==
|
dag.getBlockIdAtSlot(dag.tail.slot - 1).get() ==
|
||||||
blocks[^2].toBlockId().atSlot()
|
blocks[^2].toBlockId().atSlot()
|
||||||
dag.getBlockIdAtSlot(dag.tail.slot - 2).isNone
|
dag.getBlockIdAtSlot(dag.tail.slot - 2).isNone
|
||||||
|
@@ -795,8 +803,7 @@ suite "Backfill":
|
||||||
dag2.getBlockRef(tailBlock.root).get().root == dag.tail.root
|
dag2.getBlockRef(tailBlock.root).get().root == dag.tail.root
|
||||||
dag2.getBlockRef(blocks[^2].root).isNone()
|
dag2.getBlockRef(blocks[^2].root).isNone()
|
||||||
|
|
||||||
dag2.getBlockAtSlot(dag.tail.slot).get().blck.root == dag.tail.root
|
dag2.getBlockIdAtSlot(dag.tail.slot).get().bid.root == dag.tail.root
|
||||||
dag2.getBlockAtSlot(dag.tail.slot - 1).isNone()
|
|
||||||
|
|
||||||
dag2.getBlockIdAtSlot(dag.tail.slot - 1).get() ==
|
dag2.getBlockIdAtSlot(dag.tail.slot - 1).get() ==
|
||||||
blocks[^2].toBlockId().atSlot()
|
blocks[^2].toBlockId().atSlot()
|
||||||
|
|