Merge branch 'unstable' of github.com:status-im/nim-beacon-chain into unstable
commit e73aafffc3

CHANGELOG.md | 132

@@ -1,39 +1,110 @@
-2021-04-05 v1.1.0
+2021-04-20 v1.2.1
 =================

-This release brings planned reforms to our database schema that provide substantial
-performance improvements and pave the way for an improved doppelganger detection
-ready immediately to propose and attest to blocks (in a future release).
+This is a hotfix release that solves the database migration issue highlighted
+in the previous release -- this problem affected new Nimbus users who used
+v1.1.0 to sync with the network from genesis, essentially resetting their
+state database and causing them to start re-syncing from genesis.

-Please be aware that we will remain committed to maintaining backwards compatibility between
-releases, but **this release does not support downgrading back to any previous 1.0.x release**.
+If you have used an older version of Nimbus prior to upgrading to v1.1.0,
+you should not be affected.

-As a safety precaution, we advise you to **please backup your Nimbus database before upgrading**
-if possible.
+If you were affected, you have a couple of options available to you:

+1) If you have backed-up your database prior to upgrading to v1.2.0, you
+   can restore the database from backup and execute the migration successfully
+   after upgrading to this release.
+
+2) If you haven't backed up your database, you can upgrade to this release at
+   your convenience; rest assured it won't delete your sync history.
+
+Please accept our sincerest apologies for any inconvenience we may have caused.
+We are reviewing our release testing policies to ensure that we cover a greater
+number of possible upgrade paths going forward.

+
+2021-04-19 v1.2.0
+=================
+
+If [`v1.1.0`](https://github.com/status-im/nimbus-eth2/releases/tag/v1.1.0)
+was the big I/O update, `v1.2.0` is all about the CPU - together, these
+updates help secure Nimbus against future network growth, and provide us
+with a higher security margin and substantial [profitability improvements]
+(https://twitter.com/ethnimbus/status/1384071918723092486).
+
+To highlight just one data point, CPU usage has been cut by up to 50% over
+v1.1.0 (🙏 batched attestation processing). This makes it the first release
+we can officially recommend for validating on a Raspberry Pi 4.
+
+> **N.B.** this release contains a **critical stability fix** so please
+**make sure you upgrade!**

 **New features:**

-* More efficient state storage format ==> reduced I/O load and lower storage requirements.
+* Beta support for the official Beacon Node REST API:
+  https://ethereum.github.io/eth2.0-APIs/. Enable it by launching
+  the client with the `--rest:on` command-line flag (see the usage
+  sketch after this diff).
-* More efficient in-memory cache for non-finalized states ==> significant reduction in memory
-  usage.
+* Batched attestation verification and other reforms **->** massive
+  reduction in overall CPU usage.
-* More efficient slashing database schema ==> scales better to a larger number of validators.
+* Improved attestation aggregation logic **->** denser aggregations
+  which in turn improve the overall health of the network and improve
+  block production.
-* The metrics support is now compiled by default thanks to a new and more secure HTTP back-end.
+* More efficient LibP2P connection handling code **->** reduction in
+  overall memory usage.
-* Command-line tools for generating testnet keystores and JSON deposit files suitable for use
-  with the official network launchpads.
+
+**We've fixed:**
+
-* `setGraffiti` JSON-RPC call for modifying the graffiti bytes of the client at run-time.
+* A critical stability issue in attestation processing.
+
+* `scripts/run-*-node.sh` no longer prompts for a web3 provider URL
+  when the `--web3-url` command-line option has already been specified.

+
+2021-04-05 v1.1.0
+=================
+
+This release brings planned reforms to our database schema that provide
+substantial performance improvements and pave the way for an improved
+doppelganger detection ready immediately to propose and attest to blocks
+(in a future release).
+
+Please be aware that we will remain committed to maintaining backwards
+compatibility between releases, but **this release does not support
+downgrading back to any previous 1.0.x release**.
+
+As a safety precaution, we advise you to **please backup your Nimbus
+database before upgrading** if possible.
+
+**New features:**
+
+* More efficient state storage format ==> reduced I/O load and lower
+  storage requirements.
+
+* More efficient in-memory cache for non-finalized states ==> significant
+  reduction in memory usage.
+
+* More efficient slashing database schema ==> scales better to a larger
+  number of validators.
+
+* The metrics support is now compiled by default thanks to a new and
+  more secure HTTP back-end.
+
+* Command-line tools for generating testnet keystores and JSON deposit
+  files suitable for use with the official network launchpads.
+
+* `setGraffiti` JSON-RPC call for modifying the graffiti bytes of the
+  client at run-time.

 * `next_action_wait` metric indicating the time until the next scheduled
   attestation or block proposal.

-* More convenient command-line help messages providing information regarding the default
-  values of all parameters.
+* More convenient command-line help messages providing information
+  regarding the default values of all parameters.

-* `--direct-peer` gives you the ability to specify gossip nodes to automatically connect to.
+* `--direct-peer` gives you the ability to specify gossip nodes
+  to automatically connect to.

 * Official docker images for ARM and ARM64.

@@ -43,15 +114,19 @@ if possible.

 * Long processing delays induced by database pruning.

-* File descriptor leaks (which manifested after failures of the selected web3 provider).
+* File descriptor leaks (which manifested after failures of the selected
+  web3 provider).

-* The validator APIs now return precise actual balances instead of rounded effective balances.
+* The validator APIs now return precise actual balances instead of rounded
+  effective balances.

-* A connection tracking problem which produced failed outgoing connection attempts.
+* A connection tracking problem which produced failed outgoing connection
+  attempts.

 **Breaking changes:**

-* Nimbus-specific JSON-RPCs intended for debug purposes now have the `debug_` prefix:
+* Nimbus-specific JSON-RPCs intended for debug purposes now have
+  the `debug_` prefix:

   - `getGossipSubPeers` is now `debug_getGossipSubPeers`
   - `getChronosFutures` is now `debug_getChronosFutures`

@@ -107,12 +182,13 @@ This release contains important security and performance improvements.

 - Significantly reduces the disk load with a large number of validators (1000+).

-- Makes it possible to enhance our doppelganger detection in the future such that
-  waiting for 2 epochs before attesting is not necessary.
+- Makes it possible to enhance our doppelganger detection in the future
+  such that waiting for 2 epochs before attesting is not necessary.

-To ensure smooth upgrade and emergency rollback between older and future Nimbus
-versions, v1.0.10 will keep track of your attestation in both the old and the
-new format. The extra load should be negligible for home stakers.
+To ensure smooth upgrade and emergency rollback between older and future
+Nimbus versions, v1.0.10 will keep track of your attestation in both the
+old and the new format. The extra load should be negligible for home
+stakers.

 2021-03-09 v1.0.9
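As referenced in the v1.2.0 notes above, here is a minimal sketch of exercising the new REST API from Nim's standard library. It assumes a locally running node started with `--rest:on` and listening on port 5052 -- both the running node and the port are assumptions, not part of this diff; the endpoint path is the one registered in `rest_beacon_api.nim` further down in this commit.

```nim
# Query the beacon node's REST API (sketch; assumes a node started with
# --rest:on and an assumed default REST port of 5052).
import std/httpclient

let client = newHttpClient()
try:
  # Path as registered by installBeaconApiHandlers in this commit.
  echo client.getContent("http://127.0.0.1:5052/api/eth/v1/beacon/genesis")
finally:
  client.close()
```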
Makefile | 4

@@ -283,7 +283,9 @@ ifneq ($(USE_LIBBACKTRACE), 0)
 build/generate_makefile: | libbacktrace
 endif
 build/generate_makefile: tools/generate_makefile.nim | deps-common
-	$(ENV_SCRIPT) nim c -o:$@ $(NIM_PARAMS) tools/generate_makefile.nim
+	echo -e $(BUILD_MSG) "$@" && \
+		$(ENV_SCRIPT) nim c -o:$@ $(NIM_PARAMS) tools/generate_makefile.nim && \
+		echo -e $(BUILD_END_MSG) "$@"

 # GCC's LTO parallelisation is able to detect a GNU Make jobserver and get its
 # maximum number of processes from there, but only if we use the "+" prefix.
@@ -8,7 +8,7 @@
 [![Discord: Nimbus](https://img.shields.io/badge/discord-nimbus-orange.svg)](https://discord.gg/XRxWahP)
 [![Status: #nimbus-general](https://img.shields.io/badge/status-nimbus--general-orange.svg)](https://join.status.im/nimbus-general)

-Nimbus beacon chain is a research implementation of the beacon chain component of the upcoming Ethereum Serenity upgrade, aka Eth2.
+Nimbus-eth2 is a client implementation for Ethereum 2.0 that strives to be as lightweight as possible in terms of resources used. This allows it to perform well on embedded systems, resource-restricted devices -- including Raspberry Pis and mobile devices -- and multi-purpose servers.

 <!-- START doctoc generated TOC please keep comment here to allow auto update -->
 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

@@ -42,6 +42,10 @@ The [Quickstart](https://nimbus.guide/quick-start.html) in particular will help

 You can check where the beacon chain fits in the Ethereum ecosystem in our Two-Point-Oh series: https://our.status.im/tag/two-point-oh/

+## Donations
+
+If you'd like to contribute to Nimbus development, our donation address is [`0x70E47C843E0F6ab0991A3189c28F2957eb6d3842`](https://etherscan.io/address/0x70E47C843E0F6ab0991A3189c28F2957eb6d3842)
+
 ## Branch guide

 * `stable` - latest stable release - **this branch is recommended for most users**
@@ -567,7 +567,7 @@ proc getBlockSummary*(db: BeaconChainDB, key: Eth2Digest): Opt[BeaconBlockSummar
     result.err()

 proc getStateOnlyMutableValidators(
-    db: BeaconChainDB, key: Eth2Digest, output: var BeaconState,
+    db: BeaconChainDB, store: KvStoreRef, key: Eth2Digest, output: var BeaconState,
     rollback: RollbackProc): bool =
   ## Load state into `output` - BeaconState is large so we want to avoid
   ## re-allocating it if possible

@@ -580,7 +580,7 @@ proc getStateOnlyMutableValidators(
   # TODO RVO is inefficient for large objects:
   # https://github.com/nim-lang/Nim/issues/13879

-  case db.stateStore.getEncoded(
+  case store.getEncoded(
     subkey(
       BeaconStateNoImmutableValidators, key),
     isomorphicCast[BeaconStateNoImmutableValidators](output))

@@ -620,7 +620,7 @@ proc getState*(
   # https://github.com/nim-lang/Nim/issues/14126
   # TODO RVO is inefficient for large objects:
   # https://github.com/nim-lang/Nim/issues/13879
-  if getStateOnlyMutableValidators(db, key, output, rollback):
+  if getStateOnlyMutableValidators(db, db.stateStore, key, output, rollback):
     return true

   case db.backend.getEncoded(subkey(BeaconState, key), output)

@@ -682,6 +682,38 @@ proc containsState*(db: BeaconChainDB, key: Eth2Digest): bool =
 proc containsStateDiff*(db: BeaconChainDB, key: Eth2Digest): bool =
   db.backend.contains(subkey(BeaconStateDiff, key)).expect("working database (disk broken/full?)")

+proc repairGenesisState*(db: BeaconChainDB, key: Eth2Digest): KvResult[void] =
+  # Nimbus 1.0 reads and writes genesis BeaconState to `backend`
+  # Nimbus 1.1 writes a genesis BeaconStateNoImmutableValidators to `backend` and
+  # reads both BeaconState and BeaconStateNoImmutableValidators from `backend`
+  # Nimbus 1.2 writes a genesis BeaconStateNoImmutableValidators to `stateStore`
+  # and reads BeaconState from `backend` and BeaconStateNoImmutableValidators
+  # from `stateStore`. This means that 1.2 cannot read a database created with
+  # 1.1, and earlier versions can't read databases created with either of 1.1
+  # and 1.2.
+  # Here, we will try to repair the database so that no matter what, there will
+  # be a `BeaconState` in `backend`:
+
+  if ? db.backend.contains(subkey(BeaconState, key)):
+    # No compatibility issues, life goes on
+    discard
+  elif ? db.backend.contains(subkey(BeaconStateNoImmutableValidators, key)):
+    # 1.1 writes this but not a full state - rewrite a full state
+    var output = new BeaconState
+    if not getStateOnlyMutableValidators(db, db.backend, key, output[], noRollback):
+      return err("Cannot load partial state")
+
+    putStateFull(db, output[])
+  elif ? db.stateStore.contains(subkey(BeaconStateNoImmutableValidators, key)):
+    # 1.2 writes this but not a full state - rewrite a full state
+    var output = new BeaconState
+    if not getStateOnlyMutableValidators(db, db.stateStore, key, output[], noRollback):
+      return err("Cannot load partial state")
+
+    putStateFull(db, output[])
+
+  ok()
+
 iterator getAncestors*(db: BeaconChainDB, root: Eth2Digest):
     TrustedSignedBeaconBlock =
   ## Load a chain of ancestors for blck - returns a list of blocks with the
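The comment inside `repairGenesisState` above encodes a small compatibility matrix; a hedged summary follows, with the call site taken verbatim from the `isInitialized` hunk later in this commit.

```nim
# Where each version stores the genesis state (per the comment above):
#   1.0 -> BeaconState                      in `backend`
#   1.1 -> BeaconStateNoImmutableValidators in `backend`
#   1.2 -> BeaconStateNoImmutableValidators in `stateStore`
# repairGenesisState rewrites the 1.1/1.2 cases into a full BeaconState in
# `backend`, so older and newer versions can all open the database, e.g.:
if db.repairGenesisState(tailBlock.get().message.state_root).isErr():
  notice "Could not repair genesis state"
```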
@@ -81,7 +81,12 @@ func toSlot*(c: BeaconClock, t: Time): tuple[afterGenesis: bool, slot: Slot] =
   c.toBeaconTime(t).toSlot()

 func toBeaconTime*(s: Slot, offset = Duration()): BeaconTime =
-  BeaconTime(seconds(int64(uint64(s) * SECONDS_PER_SLOT)) + offset)
+  # BeaconTime/Duration stores nanoseconds, internally
+  const maxSlot = (not 0'u64 div 2 div SECONDS_PER_SLOT div 1_000_000_000).Slot
+  var slot = s
+  if slot > maxSlot:
+    slot = maxSlot
+  BeaconTime(seconds(int64(uint64(slot) * SECONDS_PER_SLOT)) + offset)

 proc now*(c: BeaconClock): BeaconTime =
   ## Current time, in slots - this may end up being less than GENESIS_SLOT(!)
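The new clamp guards the `int64` nanosecond conversion against overflow. A standalone sketch of the bound, assuming mainnet's `SECONDS_PER_SLOT = 12` (the constant value is an assumption for this sketch; the expression is copied from the diff):

```nim
# BeaconTime stores nanoseconds in an int64, so the largest slot that
# converts without overflow is bounded as below.
const SECONDS_PER_SLOT = 12'u64  # mainnet assumption
const maxSlot = not 0'u64 div 2 div SECONDS_PER_SLOT div 1_000_000_000
echo maxSlot  # 768614336: larger slots are clamped before conversion
```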
@@ -11,18 +11,22 @@ import
   # Standard libraries
   std/[options, tables, sequtils],
   # Status libraries
+  metrics,
   chronicles, stew/byteutils, json_serialization/std/sets as jsonSets,
   # Internal
   ../spec/[beaconstate, datatypes, crypto, digest, validator],
   ../ssz/merkleization,
   "."/[spec_cache, blockchain_dag, block_quarantine],
-  ../beacon_node_types, ../extras,
+  ".."/[beacon_clock, beacon_node_types, extras],
   ../fork_choice/fork_choice

 export beacon_node_types

 logScope: topics = "attpool"

+declareGauge attestation_pool_block_attestation_packing_time,
+  "Time it took to create list of attestations for block"
+
 proc init*(T: type AttestationPool, chainDag: ChainDAGRef, quarantine: QuarantineRef): T =
   ## Initialize an AttestationPool from the chainDag `headState`
   ## The `finalized_root` works around the finalized_checkpoint of the genesis block

@@ -102,12 +106,12 @@ proc addForkChoiceVotes(
   # hopefully the fork choice will heal itself over time.
   error "Couldn't add attestation to fork choice, bug?", err = v.error()

-func candidateIdx(pool: AttestationPool, slot: Slot): Option[uint64] =
+func candidateIdx(pool: AttestationPool, slot: Slot): Option[int] =
   if slot >= pool.startingSlot and
       slot < (pool.startingSlot + pool.candidates.lenu64):
-    some(slot mod pool.candidates.lenu64)
+    some(int(slot mod pool.candidates.lenu64))
   else:
-    none(uint64)
+    none(int)

 proc updateCurrent(pool: var AttestationPool, wallSlot: Slot) =
   if wallSlot + 1 < pool.candidates.lenu64:

@@ -210,6 +214,52 @@ func updateAggregates(entry: var AttestationEntry) =
       inc j
     inc i

+proc addAttestation(entry: var AttestationEntry,
+                    attestation: Attestation,
+                    signature: CookedSig): bool =
+  logScope:
+    attestation = shortLog(attestation)
+
+  let
+    singleIndex = oneIndex(attestation.aggregation_bits)
+
+  if singleIndex.isSome():
+    if singleIndex.get() in entry.singles:
+      trace "Attestation already seen",
+        singles = entry.singles.len(),
+        aggregates = entry.aggregates.len()
+
+      return false
+
+    debug "Attestation resolved",
+      singles = entry.singles.len(),
+      aggregates = entry.aggregates.len()
+
+    entry.singles[singleIndex.get()] = signature
+  else:
+    # More than one vote in this attestation
+    for i in 0..<entry.aggregates.len():
+      if attestation.aggregation_bits.isSubsetOf(entry.aggregates[i].aggregation_bits):
+        trace "Aggregate already seen",
+          singles = entry.singles.len(),
+          aggregates = entry.aggregates.len()
+        return false
+
+    # Since we're adding a new aggregate, we can now remove existing
+    # aggregates that don't add any new votes
+    entry.aggregates.keepItIf(
+      not it.aggregation_bits.isSubsetOf(attestation.aggregation_bits))
+
+    entry.aggregates.add(Validation(
+      aggregation_bits: attestation.aggregation_bits,
+      aggregate_signature: AggregateSignature.init(signature)))
+
+    debug "Aggregate resolved",
+      singles = entry.singles.len(),
+      aggregates = entry.aggregates.len()
+
+  true
+
 proc addAttestation*(pool: var AttestationPool,
                      attestation: Attestation,
                      participants: seq[ValidatorIndex],

@@ -234,50 +284,23 @@ proc addAttestation*(pool: var AttestationPool,
       startingSlot = pool.startingSlot
     return

-  let
-    singleIndex = oneIndex(attestation.aggregation_bits)
-    root = hash_tree_root(attestation.data)
-    # Careful with pointer, candidate table must not be touched after here
-    entry = addr pool.candidates[candidateIdx.get].mGetOrPut(
-      root,
-      AttestationEntry(
-        data: attestation.data,
-        committee_len: attestation.aggregation_bits.len()))
-
-  if singleIndex.isSome():
-    if singleIndex.get() in entry[].singles:
-      trace "Attestation already seen",
-        singles = entry[].singles.len(),
-        aggregates = entry[].aggregates.len()
+  let attestation_data_root = hash_tree_root(attestation.data)
+
+  # TODO withValue is an abomination but hard to use anything else too without
+  #      creating an unnecessary AttestationEntry on the hot path and avoiding
+  #      multiple lookups
+  pool.candidates[candidateIdx.get()].withValue(attestation_data_root, entry) do:
+    if not addAttestation(entry[], attestation, signature):
+      return
+  do:
+    if not addAttestation(
+        pool.candidates[candidateIdx.get()].mGetOrPut(
+          attestation_data_root,
+          AttestationEntry(
+            data: attestation.data,
+            committee_len: attestation.aggregation_bits.len())),
+        attestation, signature):
+      return

-      debug "Attestation resolved",
-        singles = entry[].singles.len(),
-        aggregates = entry[].aggregates.len()
-
-      entry[].singles[singleIndex.get()] = signature
-  else:
-    # More than one vote in this attestation
-    for i in 0..<entry[].aggregates.len():
-      if attestation.aggregation_bits.isSubsetOf(entry[].aggregates[i].aggregation_bits):
-        trace "Aggregate already seen",
-          singles = entry[].singles.len(),
-          aggregates = entry[].aggregates.len()
-        return
-
-    # Since we're adding a new aggregate, we can now remove existing
-    # aggregates that don't add any new votes
-    entry[].aggregates.keepItIf(
-      not it.aggregation_bits.isSubsetOf(attestation.aggregation_bits))
-
-    entry[].aggregates.add(Validation(
-      aggregation_bits: attestation.aggregation_bits,
-      aggregate_signature: AggregateSignature.init(signature)))
-
-    debug "Aggregate resolved",
-      singles = entry[].singles.len(),
-      aggregates = entry[].aggregates.len()

   pool.addForkChoiceVotes(
     attestation.data.slot, participants, attestation.data.beacon_block_root,

@@ -301,8 +324,18 @@ proc addForkChoice*(pool: var AttestationPool,

 iterator attestations*(pool: AttestationPool, slot: Option[Slot],
                        index: Option[CommitteeIndex]): Attestation =
-  template processTable(table: AttestationTable) =
-    for _, entry in table:
+  let candidateIndices =
+    if slot.isSome():
+      let candidateIdx = pool.candidateIdx(slot.get())
+      if candidateIdx.isSome():
+        candidateIdx.get() .. candidateIdx.get()
+      else:
+        1 .. 0
+    else:
+      0 ..< pool.candidates.len()
+
+  for candidateIndex in candidateIndices:
+    for _, entry in pool.candidates[candidateIndex]:
       if index.isNone() or entry.data.index == index.get().uint64:
         var singleAttestation = Attestation(
           aggregation_bits: CommitteeValidatorsBits.init(entry.committee_len),

@@ -317,14 +350,6 @@ iterator attestations*(pool: AttestationPool, slot: Option[Slot],
         for v in entry.aggregates:
           yield entry.toAttestation(v)

-  if slot.isSome():
-    let candidateIdx = pool.candidateIdx(slot.get())
-    if candidateIdx.isSome():
-      processTable(pool.candidates[candidateIdx.get()])
-  else:
-    for i in 0..<pool.candidates.len():
-      processTable(pool.candidates[i])
-
 type
   AttestationCacheKey* = (Slot, uint64)
   AttestationCache = Table[AttestationCacheKey, CommitteeValidatorsBits] ##\

@@ -393,6 +418,7 @@ proc getAttestationsForBlock*(pool: var AttestationPool,
     # Attestations produced in a particular slot are added to the block
     # at the slot where at least MIN_ATTESTATION_INCLUSION_DELAY have passed
     maxAttestationSlot = newBlockSlot - MIN_ATTESTATION_INCLUSION_DELAY
+    startPackingTime = Moment.now()

   var
     candidates: seq[tuple[

@@ -422,9 +448,8 @@ proc getAttestationsForBlock*(pool: var AttestationPool,
       # Attestations are checked based on the state that we're adding the
      # attestation to - there might have been a fork between when we first
      # saw the attestation and the time that we added it
-      # TODO avoid creating a full attestation here and instead do the checks
-      # based on the attestation data and bits
-      if not check_attestation(state, attestation, {skipBlsValidation}, cache).isOk():
+      if not check_attestation(
+          state, attestation, {skipBlsValidation}, cache).isOk():
        continue

      let score = attCache.score(

@@ -453,7 +478,7 @@ proc getAttestationsForBlock*(pool: var AttestationPool,
     state.previous_epoch_attestations.maxLen - state.previous_epoch_attestations.len()

   var res: seq[Attestation]

+  let totalCandidates = candidates.len()
   while candidates.len > 0 and res.lenu64() < MAX_ATTESTATIONS:
     block:
       # Find the candidate with the highest score - slot is used as a

@@ -491,6 +516,14 @@ proc getAttestationsForBlock*(pool: var AttestationPool,
       # Only keep candidates that might add coverage
       it.score > 0

+  let
+    packingTime = Moment.now() - startPackingTime
+
+  debug "Packed attestations for block",
+    newBlockSlot, packingTime, totalCandidates, attestations = res.len()
+  attestation_pool_block_attestation_packing_time.set(
+    packingTime.toFloatSeconds())
+
   res

 func bestValidation(aggregates: openArray[Validation]): (int, int) =
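The subset bookkeeping in the new entry-level `addAttestation` above can be illustrated with a self-contained toy model, using integer sets in place of `CommitteeValidatorsBits` (names and values here are illustrative only, not from the codebase):

```nim
# Toy model of the subset rule: drop an incoming aggregate that is covered
# by a stored one, and evict stored aggregates covered by the newcomer.
import std/[sets, sequtils]

var aggregates = @[toHashSet([1, 2]), toHashSet([3])]
let incoming = toHashSet([1, 2, 3])

if aggregates.anyIt(incoming <= it):
  echo "already seen"  # adds no votes over some stored aggregate
else:
  aggregates.keepItIf(not (it <= incoming))  # newcomer covers these
  aggregates.add incoming
echo aggregates  # a single {1, 2, 3} aggregate remains
```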
@@ -1099,6 +1099,11 @@ proc isInitialized*(T: type ChainDAGRef, db: BeaconChainDB): bool =
   if not (headBlock.isSome() and tailBlock.isSome()):
     return false

+  # 1.1 and 1.2 need a compatibility hack
+  if db.repairGenesisState(tailBlock.get().message.state_root).isErr():
+    notice "Could not repair genesis state"
+    return false
+
   if not db.containsState(tailBlock.get().message.state_root):
     return false

@@ -1120,6 +1125,7 @@ proc preInit*(
       validators = tailState.validators.len()

     db.putState(tailState)
+    db.putStateFull(tailState)
     db.putBlock(tailBlock)
     db.putTailBlock(tailBlock.root)
     db.putHeadBlock(tailBlock.root)

@@ -1130,6 +1136,7 @@ proc preInit*(
   else:
     doAssert genesisState.slot == GENESIS_SLOT
     db.putState(genesisState)
+    db.putStateFull(genesisState)
     let genesisBlock = get_initial_beacon_block(genesisState)
     db.putBlock(genesisBlock)
     db.putStateRoot(genesisBlock.root, GENESIS_SLOT, genesisBlock.message.state_root)
@@ -145,24 +145,29 @@ proc is_valid_indexed_attestation*(
 # https://github.com/ethereum/eth2.0-specs/blob/v1.0.1/specs/phase0/beacon-chain.md#is_valid_indexed_attestation
 proc is_valid_indexed_attestation*(
     fork: Fork, genesis_validators_root: Eth2Digest,
-    epochRef: EpochRef, attesting_indices: auto,
+    epochRef: EpochRef,
     attestation: SomeAttestation, flags: UpdateFlags): Result[void, cstring] =
   # This is a variation on `is_valid_indexed_attestation` that works directly
   # with an attestation instead of first constructing an `IndexedAttestation`
   # and then validating it - for the purpose of validating the signature, the
   # order doesn't matter and we can proceed straight to validating the
   # signature instead
-  if attesting_indices.len == 0:
-    return err("indexed_attestation: no attesting indices")
+  let sigs = attestation.aggregation_bits.countOnes()
+  if sigs == 0:
+    return err("is_valid_indexed_attestation: no attesting indices")

   # Verify aggregate signature
   if not (skipBLSValidation in flags or attestation.signature is TrustedSig):
-    let pubkeys = mapIt(
-      attesting_indices, epochRef.validator_keys[it])
+    var
+      pubkeys = newSeqOfCap[ValidatorPubKey](sigs)
+    for index in get_attesting_indices(
+        epochRef, attestation.data, attestation.aggregation_bits):
+      pubkeys.add(epochRef.validator_keys[index])

     if not verify_attestation_signature(
         fork, genesis_validators_root, attestation.data,
         pubkeys, attestation.signature):
-      return err("indexed attestation: signature verification failure")
+      return err("is_valid_indexed_attestation: signature verification failure")

   ok()
@@ -0,0 +1,24 @@
+# beacon_chain
+# Copyright (c) 2021 Status Research & Development GmbH
+# Licensed and distributed under either of
+#   * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
+#   * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
+# at your option. This file may not be copied, modified, or distributed except according to those terms.
+
+{.push raises: [Defect].}
+
+import
+  ../spec/[datatypes, digest, helpers, presets],
+  ./block_pools_types, ./blockchain_dag
+
+# State-related functions implemented based on StateData instead of BeaconState
+
+# https://github.com/ethereum/eth2.0-specs/blob/v1.0.1/specs/phase0/beacon-chain.md#get_current_epoch
+func get_current_epoch*(stateData: StateData): Epoch =
+  ## Return the current epoch.
+  getStateField(stateData, slot).epoch
+
+template hash_tree_root*(stateData: StateData): Eth2Digest =
+  # Dispatch here based on type/fork of state. Since StateData is a ref object
+  # type, if Nim chooses the wrong overload, it will simply fail to compile.
+  stateData.data.root
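A short sketch of the accessor pattern this new module standardizes across the REST handlers and tests below. The call sites mirror this diff, but the snippet is only a sketch: it assumes the surrounding nimbus-eth2 modules are importable and that `getStateField` is exported by the block-pool modules, as the rest of the commit suggests.

```nim
# Instead of reaching through headState.data.data.<field>, callers go
# through the helpers, so a future multi-fork StateData can dispatch on
# the state's actual type.
import ../spec/datatypes
import ./blockchain_dag, ./statedata_helpers

proc headSlotAndEpoch(dag: ChainDAGRef): (Slot, Epoch) =
  (getStateField(dag.headState, slot), get_current_epoch(dag.headState))
```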
@@ -47,9 +47,6 @@ declareHistogram beacon_aggregate_delay,
 declareHistogram beacon_block_delay,
   "Time(s) between slot start and beacon block reception", buckets = delayBuckets

-declareHistogram beacon_store_block_duration_seconds,
-  "storeBlock() duration", buckets = [0.25, 0.5, 1, 2, 4, 8, Inf]
-
 type
   Eth2Processor* = object
     doppelGangerDetectionEnabled*: bool

@@ -191,11 +188,11 @@ proc checkForPotentialDoppelganger(
 proc attestationValidator*(
     self: ref Eth2Processor,
     attestation: Attestation,
-    committeeIndex: uint64,
+    attestation_subnet: uint64,
     checksExpensive: bool = true): Future[ValidationResult] {.async.} =
   logScope:
     attestation = shortLog(attestation)
-    committeeIndex
+    attestation_subnet

   let
     wallTime = self.getWallTime()

@@ -214,7 +211,7 @@ proc attestationValidator*(
   # Now proceed to validation
   let v = await self.attestationPool.validateAttestation(
     self.batchCrypto,
-    attestation, wallTime, committeeIndex, checksExpensive)
+    attestation, wallTime, attestation_subnet, checksExpensive)
   if v.isErr():
     debug "Dropping attestation", err = v.error()
     return v.error[0]

@@ -134,16 +134,16 @@ func check_aggregation_count(

 func check_attestation_subnet(
     epochRef: EpochRef, attestation: Attestation,
-    topicCommitteeIndex: uint64): Result[void, (ValidationResult, cstring)] =
+    attestation_subnet: uint64): Result[void, (ValidationResult, cstring)] =
   let
     expectedSubnet =
       compute_subnet_for_attestation(
         get_committee_count_per_slot(epochRef),
         attestation.data.slot, attestation.data.index.CommitteeIndex)

-  if expectedSubnet != topicCommitteeIndex:
+  if expectedSubnet != attestation_subnet:
     return err((ValidationResult.Reject, cstring(
-      "Attestation's committee index not for the correct subnet")))
+      "Attestation not on the correct subnet")))

   ok()

@@ -157,7 +157,7 @@ proc validateAttestation*(
     batchCrypto: ref BatchCrypto,
     attestation: Attestation,
     wallTime: BeaconTime,
-    topicCommitteeIndex: uint64, checksExpensive: bool):
+    attestation_subnet: uint64, checksExpensive: bool):
     Future[Result[tuple[attestingIndices: seq[ValidatorIndex], sig: CookedSig],
       (ValidationResult, cstring)]] {.async.} =
   # Some of the checks below have been reordered compared to the spec, to

@@ -227,7 +227,7 @@ proc validateAttestation*(
   # attestation.data.target.epoch), which may be pre-computed along with the
   # committee information for the signature check.
   block:
-    let v = check_attestation_subnet(epochRef, attestation, topicCommitteeIndex) # [REJECT]
+    let v = check_attestation_subnet(epochRef, attestation, attestation_subnet) # [REJECT]
     if v.isErr():
       return err(v.error)

@@ -277,8 +277,7 @@ proc validateAttestation*(
   block:
     # First pass - without cryptography
     let v = is_valid_indexed_attestation(
-      fork, genesis_validators_root, epochRef, attesting_indices,
-      attestation,
+      fork, genesis_validators_root, epochRef, attestation,
       {skipBLSValidation})
     if v.isErr():
      return err((ValidationResult.Reject, v.error))
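For reference, a standalone sketch of the subnet computation that `check_attestation_subnet` compares against -- this follows the phase0 v1.0.1 spec's `compute_subnet_for_attestation`; the constants are mainnet assumptions, not taken from this diff:

```nim
const
  SLOTS_PER_EPOCH = 32'u64          # mainnet assumption
  ATTESTATION_SUBNET_COUNT = 64'u64 # mainnet assumption

func computeSubnetForAttestation(
    committeesPerSlot, slot, committeeIndex: uint64): uint64 =
  # Subnets rotate per slot within the epoch (spec v1.0.1).
  let
    slotsSinceEpochStart = slot mod SLOTS_PER_EPOCH
    committeesSinceEpochStart = committeesPerSlot * slotsSinceEpochStart
  (committeesSinceEpochStart + committeeIndex) mod ATTESTATION_SUBNET_COUNT

echo computeSubnetForAttestation(4, 5, 2)  # -> 22
```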
@@ -1204,12 +1204,12 @@ proc installMessageValidators(node: BeaconNode) =
   # subnets are subscribed to during any given epoch.
   for it in 0'u64 ..< ATTESTATION_SUBNET_COUNT.uint64:
     closureScope:
-      let ci = it
+      let attestation_subnet = it
       node.network.addAsyncValidator(
-        getAttestationTopic(node.forkDigest, ci),
+        getAttestationTopic(node.forkDigest, attestation_subnet),
         # This proc needs to be within closureScope; don't lift out of loop.
         proc(attestation: Attestation): Future[ValidationResult] =
-          node.processor.attestationValidator(attestation, ci))
+          node.processor.attestationValidator(attestation, attestation_subnet))

   node.network.addAsyncValidator(
     getAggregateAndProofsTopic(node.forkDigest),
@@ -1,4 +1,4 @@
-# Copyright (c) 2018-2020 Status Research & Development GmbH
+# Copyright (c) 2018-2021 Status Research & Development GmbH
 # Licensed and distributed under either of
 #   * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
 #   * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).

@@ -9,7 +9,7 @@ import
   chronicles,
   nimcrypto/utils as ncrutils,
   ../beacon_node_common, ../networking/eth2_network,
-  ../consensus_object_pools/[blockchain_dag, exit_pool],
+  ../consensus_object_pools/[blockchain_dag, exit_pool, statedata_helpers],
   ../gossip_processing/gossip_validation,
   ../validators/validator_duties,
   ../spec/[crypto, digest, validator, datatypes, network],

@@ -127,9 +127,9 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
   router.api(MethodGet, "/api/eth/v1/beacon/genesis") do () -> RestApiResponse:
     return RestApiResponse.jsonResponse(
       (
-        genesis_time: node.chainDag.headState.data.data.genesis_time,
+        genesis_time: getStateField(node.chainDag.headState, genesis_time),
         genesis_validators_root:
-          node.chainDag.headState.data.data.genesis_validators_root,
+          getStateField(node.chainDag.headState, genesis_validators_root),
         genesis_fork_version: node.runtimePreset.GENESIS_FORK_VERSION
       )
     )

@@ -268,7 +268,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
         (res1, res2)

     node.withStateForBlockSlot(bslot):
-      let current_epoch = get_current_epoch(node.chainDag.headState.data.data)
+      let current_epoch = get_current_epoch(node.chainDag.headState)
       var res: seq[RestValidatorTuple]
       for index, validator in getStateField(stateData, validators).pairs():
         let includeFlag =

@@ -309,7 +309,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
       return RestApiResponse.jsonError(Http400, InvalidValidatorIdValueError,
                                        $validator_id.error())
     node.withStateForBlockSlot(bslot):
-      let current_epoch = get_current_epoch(node.chainDag.headState.data.data)
+      let current_epoch = get_current_epoch(node.chainDag.headState)
       let vid = validator_id.get()
       case vid.kind
       of ValidatorQueryKind.Key:

@@ -416,7 +416,7 @@ proc installBeaconApiHandlers*(router: var RestRouter, node: BeaconNode) =
             res2.incl(vitem)
         (res1, res2)
     node.withStateForBlockSlot(bslot):
-      let current_epoch = get_current_epoch(node.chainDag.headState.data.data)
+      let current_epoch = get_current_epoch(node.chainDag.headState)
       var res: seq[RestValidatorBalanceTuple]
      for index, validator in getStateField(stateData, validators).pairs():
        let includeFlag =
@@ -1,4 +1,4 @@
-# Copyright (c) 2018-2020 Status Research & Development GmbH
+# Copyright (c) 2018-2021 Status Research & Development GmbH
 # Licensed and distributed under either of
 #   * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
 #   * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).

@@ -28,7 +28,7 @@ proc installConfigApiHandlers*(router: var RestRouter, node: BeaconNode) =
     # TODO: Implementation needs a fix, when forks infrastructure will be
     # established.
     return RestApiResponse.jsonResponse(
-      [node.chainDag.headState.data.data.fork]
+      [getStateField(node.chainDag.headState, fork)]
     )

   router.api(MethodGet,
@@ -1,4 +1,4 @@
-# Copyright (c) 2018-2020 Status Research & Development GmbH
+# Copyright (c) 2018-2021 Status Research & Development GmbH
 # Licensed and distributed under either of
 #   * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
 #   * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).

@@ -96,8 +96,9 @@ proc installNimbusApiHandlers*(router: var RestRouter, node: BeaconNode) =
   router.api(MethodGet, "/api/nimbus/v1/chain/head") do() -> RestApiResponse:
     let
       head = node.chainDag.head
-      finalized = node.chainDag.headState.data.data.finalized_checkpoint
-      justified = node.chainDag.headState.data.data.current_justified_checkpoint
+      finalized = getStateField(node.chainDag.headState, finalized_checkpoint)
+      justified =
+        getStateField(node.chainDag.headState, current_justified_checkpoint)
     return RestApiResponse.jsonResponse(
       (
         head_slot: head.slot,
@@ -509,7 +509,7 @@ proc getBlockSlot*(node: BeaconNode,
     ok(node.chainDag.finalizedHead)
   of StateIdentType.Justified:
     ok(node.chainDag.head.atEpochStart(
-      node.chainDag.headState.data.data.current_justified_checkpoint.epoch))
+      getStateField(node.chainDag.headState, current_justified_checkpoint).epoch))

 proc getBlockDataFromBlockIdent*(node: BeaconNode,
                                  id: BlockIdent): Result[BlockData, cstring] =

@@ -537,7 +537,7 @@ proc getBlockDataFromBlockIdent*(node: BeaconNode,
 template withStateForBlockSlot*(node: BeaconNode,
                                 blockSlot: BlockSlot, body: untyped): untyped =
   template isState(state: StateData): bool =
-    state.blck.atSlot(state.data.data.slot) == blockSlot
+    state.blck.atSlot(getStateField(state, slot)) == blockSlot

   if isState(node.chainDag.headState):
     withStateVars(node.chainDag.headState):
@@ -122,7 +122,7 @@ proc installValidatorApiHandlers*(rpcServer: RpcServer, node: BeaconNode) {.
       validator_pubkey: ValidatorPubKey, slot_signature: ValidatorSig) -> bool:
     debug "post_v1_validator_beacon_committee_subscriptions",
       committee_index, slot
-    if committee_index.uint64 >= ATTESTATION_SUBNET_COUNT.uint64:
+    if committee_index.uint64 >= MAX_COMMITTEES_PER_SLOT.uint64:
       raise newException(CatchableError,
         "Invalid committee index")
@@ -1,4 +1,4 @@
-# Copyright (c) 2018-2020 Status Research & Development GmbH
+# Copyright (c) 2018-2021 Status Research & Development GmbH
 # Licensed and distributed under either of
 #   * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
 #   * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).

@@ -355,17 +355,17 @@ proc installValidatorApiHandlers*(router: var RestRouter, node: BeaconNode) =
       return RestApiResponse.jsonError(Http503, BeaconNodeInSyncError)

     for request in requests:
-      if uint64(request.committee_index) >= uint64(ATTESTATION_SUBNET_COUNT):
+      if uint64(request.committee_index) >= uint64(MAX_COMMITTEES_PER_SLOT):
         return RestApiResponse.jsonError(Http400,
                                          InvalidCommitteeIndexValueError)
       let validator_pubkey =
         block:
           let idx = request.validator_index
           if uint64(idx) >=
-              lenu64(node.chainDag.headState.data.data.validators):
+              lenu64(getStateField(node.chainDag.headState, validators)):
             return RestApiResponse.jsonError(Http400,
                                              InvalidValidatorIndexValueError)
-          node.chainDag.headState.data.data.validators[idx].pubkey
+          getStateField(node.chainDag.headState, validators)[idx].pubkey

       let wallSlot = node.beaconClock.now.slotOrZero
       if wallSlot > request.slot + 1:
@@ -489,53 +489,44 @@ iterator get_attesting_indices*(state: BeaconState,
                                 bits: CommitteeValidatorsBits,
                                 cache: var StateCache): ValidatorIndex =
   ## Return the set of attesting indices corresponding to ``data`` and ``bits``.
-  if bits.lenu64 != get_beacon_committee_len(state, data.slot, data.index.CommitteeIndex, cache):
+  if bits.lenu64 != get_beacon_committee_len(
+      state, data.slot, data.index.CommitteeIndex, cache):
     trace "get_attesting_indices: inconsistent aggregation and committee length"
   else:
     var i = 0
-    for index in get_beacon_committee(state, data.slot, data.index.CommitteeIndex, cache):
+    for index in get_beacon_committee(
+        state, data.slot, data.index.CommitteeIndex, cache):
       if bits[i]:
         yield index
       inc i

-iterator get_sorted_attesting_indices*(state: BeaconState,
-                                       data: AttestationData,
-                                       bits: CommitteeValidatorsBits,
-                                       cache: var StateCache): ValidatorIndex =
-  var heap = initHeapQueue[ValidatorIndex]()
-  for index in get_attesting_indices(state, data, bits, cache):
-    heap.push(index)
+proc is_valid_indexed_attestation*(
+    state: BeaconState, attestation: SomeAttestation, flags: UpdateFlags,
+    cache: var StateCache): Result[void, cstring] =
+  # This is a variation on `is_valid_indexed_attestation` that works directly
+  # with an attestation instead of first constructing an `IndexedAttestation`
+  # and then validating it - for the purpose of validating the signature, the
+  # order doesn't matter and we can proceed straight to validating the
+  # signature instead

-  while heap.len > 0:
-    yield heap.pop()
+  let sigs = attestation.aggregation_bits.countOnes()
+  if sigs == 0:
+    return err("is_valid_indexed_attestation: no attesting indices")

-func get_sorted_attesting_indices_list*(
-    state: BeaconState, data: AttestationData, bits: CommitteeValidatorsBits,
-    cache: var StateCache): List[uint64, Limit MAX_VALIDATORS_PER_COMMITTEE] =
-  for index in get_sorted_attesting_indices(state, data, bits, cache):
-    if not result.add index.uint64:
-      raiseAssert "The `result` list has the same max size as the sorted `bits` input"
+  # Verify aggregate signature
+  if not (skipBLSValidation in flags or attestation.signature is TrustedSig):
+    var
+      pubkeys = newSeqOfCap[ValidatorPubKey](sigs)
+    for index in get_attesting_indices(
+        state, attestation.data, attestation.aggregation_bits, cache):
+      pubkeys.add(state.validators[index].pubkey)

-# https://github.com/ethereum/eth2.0-specs/blob/v1.0.1/specs/phase0/beacon-chain.md#get_indexed_attestation
-func get_indexed_attestation(state: BeaconState, attestation: Attestation,
-                             cache: var StateCache): IndexedAttestation =
-  ## Return the indexed attestation corresponding to ``attestation``.
-  IndexedAttestation(
-    attesting_indices: get_sorted_attesting_indices_list(
-      state, attestation.data, attestation.aggregation_bits, cache),
-    data: attestation.data,
-    signature: attestation.signature
-  )
+    if not verify_attestation_signature(
+        state.fork, state.genesis_validators_root, attestation.data,
+        pubkeys, attestation.signature):
+      return err("indexed attestation: signature verification failure")

-func get_indexed_attestation(state: BeaconState, attestation: TrustedAttestation,
-                             cache: var StateCache): TrustedIndexedAttestation =
-  ## Return the indexed attestation corresponding to ``attestation``.
-  TrustedIndexedAttestation(
-    attesting_indices: get_sorted_attesting_indices_list(
-      state, attestation.data, attestation.aggregation_bits, cache),
-    data: attestation.data,
-    signature: attestation.signature
-  )
+  ok()

 # Attestation validation
 # ------------------------------------------------------------------------------------------

@@ -610,8 +601,7 @@ proc check_attestation*(
   if not (data.source == state.previous_justified_checkpoint):
     return err("FFG data not matching previous justified epoch")

-  ? is_valid_indexed_attestation(
-      state, get_indexed_attestation(state, attestation, cache), flags)
+  ? is_valid_indexed_attestation(state, attestation, flags, cache)

   ok()
@@ -66,16 +66,17 @@ proc findValidator(validators: auto, pubKey: ValidatorPubKey):
   else:
     some(idx.ValidatorIndex)

-proc addLocalValidator*(node: BeaconNode,
-                        state: BeaconState,
-                        privKey: ValidatorPrivKey) =
+proc addLocalValidator(node: BeaconNode,
+                       stateData: StateData,
+                       privKey: ValidatorPrivKey) =
   let pubKey = privKey.toPubKey()
   node.attachedValidators[].addLocalValidator(
-    pubKey, privKey, findValidator(state.validators, pubKey))
+    pubKey, privKey,
+    findValidator(getStateField(stateData, validators), pubKey))

 proc addLocalValidators*(node: BeaconNode) =
   for validatorKey in node.config.validatorKeys:
-    node.addLocalValidator node.chainDag.headState.data.data, validatorKey
+    node.addLocalValidator node.chainDag.headState, validatorKey

 proc addRemoteValidators*(node: BeaconNode) {.raises: [Defect, OSError, IOError].} =
   # load all the validators from the child process - loop until `end`

@@ -218,7 +219,7 @@ proc createAndSendAttestation(node: BeaconNode,
   let deadline = attestationData.slot.toBeaconTime() +
                  seconds(int(SECONDS_PER_SLOT div 3))

-  let (delayStr, delayMillis) =
+  let (delayStr, delaySecs) =
     if wallTime < deadline:
       ("-" & $(deadline - wallTime), -toFloatSeconds(deadline - wallTime))
     else:

@@ -228,7 +229,7 @@ proc createAndSendAttestation(node: BeaconNode,
     validator = shortLog(validator), delay = delayStr,
     indexInCommittee = indexInCommittee

-  beacon_attestation_sent_delay.observe(delayMillis)
+  beacon_attestation_sent_delay.observe(delaySecs)

 proc getBlockProposalEth1Data*(node: BeaconNode,
                                stateData: StateData): BlockProposalEth1Data =
@@ -15,8 +15,8 @@ when not defined(nimscript):

 const
   versionMajor* = 1
-  versionMinor* = 1
-  versionBuild* = 0
+  versionMinor* = 2
+  versionBuild* = 1

   versionBlob* = "stateofus" # Single word - ends up in the default graffiti
@@ -26,7 +26,7 @@ make dist

 ## Significant differences from self-built binaries

-No `-march=native` and no metrics support.
+No `-march=native`.

 ## Running a Pyrmont node
@@ -224,7 +224,7 @@ cli do(slots = SLOTS_PER_EPOCH * 5,

   # TODO if attestation pool was smarter, it would include older attestations
   #      too!
-  verifyConsensus(chainDag.headState.data.data, attesterRatio * blockRatio)
+  verifyConsensus(chainDag.headState, attesterRatio * blockRatio)

   if t == tEpoch:
     echo &". slot: {shortLog(slot)} ",

@@ -241,4 +241,4 @@ cli do(slots = SLOTS_PER_EPOCH * 5,

   echo "Done!"

-  printTimers(chainDag.headState.data.data, attesters, true, timers)
+  printTimers(chainDag.headState, attesters, true, timers)
@@ -1,9 +1,11 @@
 import
   stats, os, strformat, times,
-  ../tests/[testblockutil],
+  ../tests/testblockutil,
   ../beacon_chain/[extras, beacon_chain_db],
   ../beacon_chain/ssz/[merkleization, ssz_serialization],
   ../beacon_chain/spec/[beaconstate, crypto, datatypes, digest, helpers, presets],
+  ../beacon_chain/consensus_object_pools/[
+    blockchain_dag, block_pools_types, statedata_helpers],
   ../beacon_chain/eth1/eth1_monitor

 template withTimer*(stats: var RunningStat, body: untyped) =

@@ -41,6 +43,24 @@ func verifyConsensus*(state: BeaconState, attesterRatio: auto) =
   if current_epoch >= 4:
     doAssert state.finalized_checkpoint.epoch + 2 >= current_epoch

+func verifyConsensus*(state: StateData, attesterRatio: auto) =
+  if attesterRatio < 0.63:
+    doAssert getStateField(state, current_justified_checkpoint).epoch == 0
+    doAssert getStateField(state, finalized_checkpoint).epoch == 0
+
+  # Quorum is 2/3 of validators, and at low numbers, quantization effects
+  # can dominate, so allow for play above/below attesterRatio of 2/3.
+  if attesterRatio < 0.72:
+    return
+
+  let current_epoch = get_current_epoch(state)
+  if current_epoch >= 3:
+    doAssert getStateField(
+      state, current_justified_checkpoint).epoch + 1 >= current_epoch
+  if current_epoch >= 4:
+    doAssert getStateField(
+      state, finalized_checkpoint).epoch + 2 >= current_epoch
+
 proc loadGenesis*(validators: Natural, validate: bool):
     (ref HashedBeaconState, DepositContractSnapshot) =
   let

@@ -122,3 +142,10 @@ proc printTimers*[Timers: enum](
   echo "Validators: ", state.validators.len, ", epoch length: ", SLOTS_PER_EPOCH
   echo "Validators per attestation (mean): ", attesters.mean
   printTimers(validate, timers)
+
+proc printTimers*[Timers: enum](
+    state: StateData, attesters: RunningStat, validate: bool,
+    timers: array[Timers, RunningStat]) =
+  echo "Validators: ", getStateField(state, validators).len, ", epoch length: ", SLOTS_PER_EPOCH
+  echo "Validators per attestation (mean): ", attesters.mean
+  printTimers(validate, timers)
|
|||
--data-dir="${DATA_DIR}" \
|
||||
--deposits-file="${DEPOSITS_FILE}" \
|
||||
--total-validators=${TOTAL_VALIDATORS} \
|
||||
--last-user-validator=${USER_VALIDATORS} \
|
||||
--output-genesis="${NETWORK_DIR}/genesis.ssz" \
|
||||
--output-bootstrap-file="${NETWORK_DIR}/bootstrap_nodes.txt" \
|
||||
--bootstrap-address=${BOOTSTRAP_IP} \
|
||||
|
|
|
@ -93,7 +93,6 @@ make build
|
|||
--data-dir="$DATA_DIR_ABS" \
|
||||
--validators-dir="$VALIDATORS_DIR_ABS" \
|
||||
--total-validators=$TOTAL_VALIDATORS \
|
||||
--last-user-validator=$USER_VALIDATORS \
|
||||
--output-genesis="$NETWORK_DIR_ABS/genesis.ssz" \
|
||||
--output-bootstrap-file="$NETWORK_DIR_ABS/bootstrap_nodes.txt" \
|
||||
--bootstrap-address=$BOOTSTRAP_IP \
|
||||
|
|
|
@ -67,13 +67,13 @@ suiteReport "Attestation pool processing" & preset():
|
|||
cache = StateCache()
|
||||
# Slot 0 is a finalized slot - won't be making attestations for it..
|
||||
check:
|
||||
process_slots(state.data, state.data.data.slot + 1, cache)
|
||||
process_slots(state.data, getStateField(state, slot) + 1, cache)
|
||||
|
||||
timedTest "Can add and retrieve simple attestations" & preset():
|
||||
let
|
||||
# Create an attestation for slot 1!
|
||||
bc0 = get_beacon_committee(
|
||||
state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
attestation = makeAttestation(
|
||||
state.data.data, state.blck.root, bc0[0], cache)
|
||||
|
||||
|
@ -98,7 +98,8 @@ suiteReport "Attestation pool processing" & preset():
|
|||
none(Slot), some(CommitteeIndex(attestation.data.index + 1)))) == []
|
||||
|
||||
process_slots(
|
||||
state.data, state.data.data.slot + MIN_ATTESTATION_INCLUSION_DELAY, cache)
|
||||
state.data,
|
||||
getStateField(state, slot) + MIN_ATTESTATION_INCLUSION_DELAY, cache)
|
||||
|
||||
let attestations = pool[].getAttestationsForBlock(state.data.data, cache)
|
||||
|
||||
|
@ -111,13 +112,14 @@ suiteReport "Attestation pool processing" & preset():
|
|||
state.data, state.blck.root,
|
||||
cache, attestations = attestations, nextSlot = false).root
|
||||
bc1 = get_beacon_committee(
|
||||
state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
att1 = makeAttestation(
|
||||
state.data.data, root1, bc1[0], cache)
|
||||
|
||||
check:
|
||||
process_slots(
|
||||
state.data, state.data.data.slot + MIN_ATTESTATION_INCLUSION_DELAY, cache)
|
||||
state.data,
|
||||
getStateField(state, slot) + MIN_ATTESTATION_INCLUSION_DELAY, cache)
|
||||
|
||||
check:
|
||||
# shouldn't include already-included attestations
|
||||
|
@ -176,7 +178,7 @@ suiteReport "Attestation pool processing" & preset():
|
|||
let
|
||||
# Create an attestation for slot 1!
|
||||
bc0 = get_beacon_committee(
|
||||
state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
|
||||
var
|
||||
att0 = makeAttestation(state.data.data, state.blck.root, bc0[0], cache)
|
||||
|
@ -194,7 +196,8 @@ suiteReport "Attestation pool processing" & preset():
|
|||
|
||||
check:
|
||||
process_slots(
|
||||
state.data, state.data.data.slot + MIN_ATTESTATION_INCLUSION_DELAY, cache)
|
||||
state.data,
|
||||
getStateField(state, slot) + MIN_ATTESTATION_INCLUSION_DELAY, cache)
|
||||
|
||||
check:
|
||||
pool[].getAttestationsForBlock(state.data.data, cache).len() == 2
|
||||
|
@ -230,7 +233,7 @@ suiteReport "Attestation pool processing" & preset():
|
|||
root.data[0..<8] = toBytesBE(i.uint64)
|
||||
let
|
||||
bc0 = get_beacon_committee(
|
||||
state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
|
||||
for j in 0..<bc0.len():
|
||||
root.data[8..<16] = toBytesBE(j.uint64)
|
||||
|
@ -239,7 +242,7 @@ suiteReport "Attestation pool processing" & preset():
|
|||
inc attestations
|
||||
|
||||
check:
|
||||
process_slots(state.data, state.data.data.slot + 1, cache)
|
||||
process_slots(state.data, getStateField(state, slot) + 1, cache)
|
||||
|
||||
doAssert attestations.uint64 > MAX_ATTESTATIONS,
|
||||
"6*SLOTS_PER_EPOCH validators > 128 mainnet MAX_ATTESTATIONS"
|
||||
|
@ -247,23 +250,24 @@ suiteReport "Attestation pool processing" & preset():
|
|||
# Fill block with attestations
|
||||
pool[].getAttestationsForBlock(state.data.data, cache).lenu64() ==
|
||||
MAX_ATTESTATIONS
|
||||
pool[].getAggregatedAttestation(state.data.data.slot - 1, 0.CommitteeIndex).isSome()
|
||||
pool[].getAggregatedAttestation(
|
||||
getStateField(state, slot) - 1, 0.CommitteeIndex).isSome()
|
||||
|
||||
timedTest "Attestations may arrive in any order" & preset():
|
||||
var cache = StateCache()
|
||||
let
|
||||
# Create an attestation for slot 1!
|
||||
bc0 = get_beacon_committee(
|
||||
state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
attestation0 = makeAttestation(
|
||||
state.data.data, state.blck.root, bc0[0], cache)
|
||||
|
||||
check:
|
||||
process_slots(state.data, state.data.data.slot + 1, cache)
|
||||
process_slots(state.data, getStateField(state, slot) + 1, cache)
|
||||
|
||||
let
|
||||
bc1 = get_beacon_committee(state.data.data,
|
||||
state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
attestation1 = makeAttestation(
|
||||
state.data.data, state.blck.root, bc1[0], cache)
|
||||
|
||||
|
@ -286,7 +290,7 @@ suiteReport "Attestation pool processing" & preset():
|
|||
let
|
||||
# Create an attestation for slot 1!
|
||||
bc0 = get_beacon_committee(
|
||||
state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
|
||||
state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
|
||||
attestation0 = makeAttestation(
|
||||
state.data.data, state.blck.root, bc0[0], cache)
|
||||
attestation1 = makeAttestation(
|
||||
|
@@ -311,7 +315,7 @@ suiteReport "Attestation pool processing" & preset():
     var
       # Create an attestation for slot 1!
       bc0 = get_beacon_committee(
-        state.data.data, state.data.data.slot, 0.CommitteeIndex, cache)
+        state.data.data, getStateField(state, slot), 0.CommitteeIndex, cache)
       attestation0 = makeAttestation(
         state.data.data, state.blck.root, bc0[0], cache)
       attestation1 = makeAttestation(
@@ -337,7 +341,7 @@ suiteReport "Attestation pool processing" & preset():
     var
       # Create an attestation for slot 1!
       bc0 = get_beacon_committee(state.data.data,
-        state.data.data.slot, 0.CommitteeIndex, cache)
+        getStateField(state, slot), 0.CommitteeIndex, cache)
       attestation0 = makeAttestation(
         state.data.data, state.blck.root, bc0[0], cache)
       attestation1 = makeAttestation(
@@ -412,7 +416,7 @@ suiteReport "Attestation pool processing" & preset():
       pool[].addForkChoice(epochRef, blckRef, signedBlock.message, blckRef.slot)

       bc1 = get_beacon_committee(
-        state.data.data, state.data.data.slot - 1, 1.CommitteeIndex, cache)
+        state.data.data, getStateField(state, slot) - 1, 1.CommitteeIndex, cache)
       attestation0 = makeAttestation(state.data.data, b10.root, bc1[0], cache)

     pool[].addAttestation(
@@ -521,7 +525,8 @@ suiteReport "Attestation pool processing" & preset():
       attestations.setlen(0)
       for index in 0'u64 ..< committees_per_slot:
         let committee = get_beacon_committee(
-          state.data.data, state.data.data.slot, index.CommitteeIndex, cache)
+          state.data.data, getStateField(state, slot), index.CommitteeIndex,
+          cache)

         # Create a bitfield filled with the given count per attestation,
         # exactly on the right-most part of the committee field.
@@ -532,7 +537,7 @@ suiteReport "Attestation pool processing" & preset():
         attestations.add Attestation(
           aggregation_bits: aggregation_bits,
           data: makeAttestationData(
-            state.data.data, state.data.data.slot,
+            state.data.data, getStateField(state, slot),
             index.CommitteeIndex, blockroot)
           # signature: ValidatorSig()
         )
@@ -569,7 +574,7 @@ suiteReport "Attestation validation " & preset():
       batchCrypto = BatchCrypto.new(keys.newRng())
     # Slot 0 is a finalized slot - won't be making attestations for it..
     check:
-      process_slots(state.data, state.data.data.slot + 1, cache)
+      process_slots(state.data, getStateField(state, slot) + 1, cache)

   timedTest "Validation sanity":
     # TODO: refactor tests to avoid skipping BLS validation

@@ -15,7 +15,8 @@ import
   ../beacon_chain/spec/[datatypes, digest, helpers, state_transition, presets],
   ../beacon_chain/beacon_node_types,
   ../beacon_chain/ssz,
-  ../beacon_chain/consensus_object_pools/[blockchain_dag, block_quarantine, block_clearance]
+  ../beacon_chain/consensus_object_pools/[
+    blockchain_dag, block_quarantine, block_clearance, statedata_helpers]

 when isMainModule:
   import chronicles # or some random compile error happens...
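The `statedata_helpers` module added to this import list is what supplies `getStateField` (and related wrapper-aware helpers) for the rewritten call sites in both test files; besides the call-site rewrite itself, adding this import is the only change most tests need.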
@@ -174,7 +175,7 @@ suiteReport "Block pool processing" & preset():

     # Skip one slot to get a gap
     check:
-      process_slots(stateData.data, stateData.data.data.slot + 1, cache)
+      process_slots(stateData.data, getStateField(stateData, slot) + 1, cache)

     let
       b4 = addTestBlock(stateData.data, b2.root, cache)
@@ -262,7 +263,7 @@ suiteReport "Block pool processing" & preset():
     check:
       # ensure we loaded the correct head state
       dag2.head.root == b2.root
-      hash_tree_root(dag2.headState.data.data) == b2.message.state_root
+      hash_tree_root(dag2.headState) == b2.message.state_root
       dag2.get(b1.root).isSome()
       dag2.get(b2.root).isSome()
       dag2.heads.len == 1
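Here `hash_tree_root` is applied to the `headState` wrapper directly instead of the inner `BeaconState`. Reusing the toy types from the earlier sketch, one plausible shape for this — an assumption, not necessarily how the repo implements it — is a forwarding overload:

    # Stand-in for the real SSZ hash_tree_root(BeaconState) -> Eth2Digest;
    # a uint64 is used here only to keep the toy self-contained.
    func hashTreeRootToy(state: BeaconStateToy): uint64 =
      state.slot

    # The overload these call sites imply: accept the wrapper and forward
    # to the innermost state, letting callers drop the `.data.data` chain.
    func hashTreeRootToy(stateData: StateDataToy): uint64 =
      hashTreeRootToy(stateData.data.data)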
@@ -286,7 +287,7 @@ suiteReport "Block pool processing" & preset():

     check:
       dag.head == b1Add[]
-      dag.headState.data.data.slot == b1Add[].slot
+      getStateField(dag.headState, slot) == b1Add[].slot

   wrappedTimedTest "updateStateData sanity" & preset():
     let
@@ -304,38 +305,38 @@ suiteReport "Block pool processing" & preset():

     check:
       tmpState.blck == b1Add[]
-      tmpState.data.data.slot == bs1.slot
+      getStateField(tmpState, slot) == bs1.slot

     # Skip slots
     dag.updateStateData(tmpState[], bs1_3, false, cache) # skip slots

     check:
       tmpState.blck == b1Add[]
-      tmpState.data.data.slot == bs1_3.slot
+      getStateField(tmpState, slot) == bs1_3.slot

     # Move back slots, but not blocks
     dag.updateStateData(tmpState[], bs1_3.parent(), false, cache)
     check:
       tmpState.blck == b1Add[]
-      tmpState.data.data.slot == bs1_3.parent().slot
+      getStateField(tmpState, slot) == bs1_3.parent().slot

     # Move to different block and slot
     dag.updateStateData(tmpState[], bs2_3, false, cache)
     check:
       tmpState.blck == b2Add[]
-      tmpState.data.data.slot == bs2_3.slot
+      getStateField(tmpState, slot) == bs2_3.slot

     # Move back slot and block
     dag.updateStateData(tmpState[], bs1, false, cache)
     check:
       tmpState.blck == b1Add[]
-      tmpState.data.data.slot == bs1.slot
+      getStateField(tmpState, slot) == bs1.slot

     # Move back to genesis
     dag.updateStateData(tmpState[], bs1.parent(), false, cache)
     check:
       tmpState.blck == b1Add[].parent
-      tmpState.data.data.slot == bs1.parent.slot
+      getStateField(tmpState, slot) == bs1.parent.slot

 suiteReport "chain DAG finalization tests" & preset():
   setup:
@@ -431,8 +432,7 @@ suiteReport "chain DAG finalization tests" & preset():
       dag2.head.root == dag.head.root
       dag2.finalizedHead.blck.root == dag.finalizedHead.blck.root
       dag2.finalizedHead.slot == dag.finalizedHead.slot
-      hash_tree_root(dag2.headState.data.data) ==
-        hash_tree_root(dag.headState.data.data)
+      hash_tree_root(dag2.headState) == hash_tree_root(dag.headState)

   wrappedTimedTest "orphaned epoch block" & preset():
     var prestate = (ref HashedBeaconState)()
@@ -496,7 +496,7 @@ suiteReport "chain DAG finalization tests" & preset():
       dag.headState.data, dag.head.root, cache,
       attestations = makeFullAttestations(
         dag.headState.data.data, dag.head.root,
-        dag.headState.data.data.slot, cache, {}))
+        getStateField(dag.headState, slot), cache, {}))

     let added = dag.addRawBlock(quarantine, blck, nil)
     check: added.isOk()
@@ -512,5 +512,4 @@ suiteReport "chain DAG finalization tests" & preset():
       dag2.head.root == dag.head.root
       dag2.finalizedHead.blck.root == dag.finalizedHead.blck.root
       dag2.finalizedHead.slot == dag.finalizedHead.slot
-      hash_tree_root(dag2.headState.data.data) ==
-        hash_tree_root(dag.headState.data.data)
+      hash_tree_root(dag2.headState) == hash_tree_root(dag.headState)

@@ -1 +1 @@
-Subproject commit 613ad40f00ab3d0ee839f9db9c4d25e5e0248dee
+Subproject commit ef9220280600c0fcc39061008dc3d37758dbd0b7

@@ -1 +1 @@
-Subproject commit 6b930ae7e69242ef3a62d1d3c4aa085973bf1055
+Subproject commit e285d8bbf4034ab73c070473e645d8c75cdada51

@@ -1 +1 @@
-Subproject commit 105af2bfbd4896e8b4086d3dff1e6c187e9d0a41
+Subproject commit 7349a421cff02c27df9f7622801459d6b5d65dcd

@@ -1 +1 @@
-Subproject commit 7d2790fdf493dd0869be5ed1b2ecea768eb008c6
+Subproject commit 7fb220d1e8cbc6cc67b5b12602d4ac0305e693be

@@ -1 +1 @@
-Subproject commit e7694f16ceccd7c98ecb6870263025018b7d37b3
+Subproject commit 2b097ec86aead0119c5e6bfff8502c3948a1ceaf