import
  os, stats, strformat, tables,
  chronicles, confutils, stew/byteutils, eth/db/kvstore_sqlite3,
  ../beacon_chain/networking/network_metadata,
  ../beacon_chain/[beacon_chain_db, extras],
  ../beacon_chain/consensus_object_pools/[blockchain_dag, statedata_helpers],
  ../beacon_chain/spec/[crypto, datatypes, digest, helpers, state_transition,
    state_transition_epoch, presets],
  ../beacon_chain/ssz, ../beacon_chain/ssz/sszdump,
  ../research/simutils, ./e2store

type Timers = enum
  tInit = "Initialize DB"
  tLoadBlock = "Load block from database"
  tLoadState = "Load state from database"
  tAdvanceSlot = "Advance slot, non-epoch"
  tAdvanceEpoch = "Advance slot, epoch"
  tApplyBlock = "Apply block, no slot processing"
  tDbLoad = "Database load"
  tDbStore = "Database store"

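# Added note: each Timers value doubles as the display label for one slot of
# the RunningStat array used by cmdBench below; withTimer(timers[tLoadBlock])
# and friends accumulate wall-clock samples for that category, and printTimers
# prints the per-category summary once the benchmark run finishes.
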
type
  DbCmd* = enum
    bench
    dumpState
    dumpBlock
    pruneDatabase
    rewindState
    exportEra
    validatorPerf
    validatorDb = "Create or update attestation performance database"

  # TODO:
  # This should probably allow specifying a run-time preset
  DbConf = object
    databaseDir* {.
      defaultValue: ""
      desc: "Directory where `nbc.sqlite` is stored"
      name: "db" }: InputDir

    eth2Network* {.
      desc: "The Eth2 network preset to use"
      name: "network" }: Option[string]

    case cmd* {.
      command
      desc: ""
      .}: DbCmd

    of bench:
      benchSlot* {.
        defaultValue: 0
        name: "start-slot"
        desc: "Starting slot, negative = backwards from head".}: int64
      benchSlots* {.
        defaultValue: 50000
        name: "slots"
        desc: "Number of slots to run benchmark for, 0 = all the way to head".}: uint64
      storeBlocks* {.
        defaultValue: false
        desc: "Store each read block back into a separate database".}: bool
      storeStates* {.
        defaultValue: false
        desc: "Store a state each epoch into a separate database".}: bool
      printTimes* {.
        defaultValue: true
        desc: "Print csv of block processing time".}: bool
      resetCache* {.
        defaultValue: false
        desc: "Process each block with a fresh cache".}: bool

    of dumpState:
      stateRoot* {.
        argument
        desc: "State roots to save".}: seq[string]

    of dumpBlock:
      blockRootx* {.
        argument
        desc: "Block roots to save".}: seq[string]

    of pruneDatabase:
      dryRun* {.
        defaultValue: false
        desc: "Don't write to the database copy; only simulate actions; default false".}: bool
      keepOldStates* {.
        defaultValue: true
        desc: "Keep pre-finalization states; default true".}: bool
      verbose* {.
        defaultValue: false
        desc: "Enables verbose output; default false".}: bool

    of rewindState:
      blockRoot* {.
        argument
        desc: "Block root".}: string

      slot* {.
        argument
        desc: "Slot".}: uint64

    of exportEra:
      era* {.
        defaultValue: 0
        desc: "The era number to write".}: uint64
      eraCount* {.
        defaultValue: 1
        desc: "Number of eras to write".}: uint64

    of validatorPerf:
      perfSlot* {.
        defaultValue: -128 * SLOTS_PER_EPOCH.int64
        name: "start-slot"
        desc: "Starting slot, negative = backwards from head".}: int64
      perfSlots* {.
        defaultValue: 0
        name: "slots"
        desc: "Number of slots to run benchmark for, 0 = all the way to head".}: uint64

    of validatorDb:
      outDir* {.
        defaultValue: ""
        name: "out-db"
        desc: "Output database".}: string
      perfect* {.
        defaultValue: false
        name: "perfect"
        desc: "Include perfect records (full rewards)".}: bool

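# Added usage sketch (not part of the original source): DbConf is parsed by
# confutils, so the name/command pragmas above become command-line flags and
# subcommands. Assuming the tool is built as `ncli_db`, an invocation might
# look roughly like:
#
#   ./ncli_db --db:path/to/db --network:mainnet bench --start-slot:-4096 --slots:4096
#
# The exact flag and subcommand spelling is whatever confutils derives from
# the field and DbCmd enum identifiers, so treat the line above as
# illustrative only.
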
proc getSlotRange(dag: ChainDAGRef, startSlot: int64, count: uint64): (Slot, Slot) =
  let
    start =
      if startSlot >= 0: Slot(startSlot)
      elif uint64(-startSlot) >= dag.head.slot: Slot(0)
      else: Slot(dag.head.slot - uint64(-startSlot))
    ends =
      if count == 0: dag.head.slot + 1
      else: start + count
  (start, ends)

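# Added worked example (commentary only): with the head at slot 100_000 and
# the validatorPerf default start-slot of -128 * SLOTS_PER_EPOCH = -4096, the
# range starts at 100_000 - 4096 = 95_904; with count == 0 it ends at
# head.slot + 1 = 100_001, i.e. "all the way to head" as the option help says.
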
proc getBlockRange(dag: ChainDAGRef, start, ends: Slot): seq[BlockRef] =
  # Range of blocks in reverse order
  var
    blockRefs: seq[BlockRef]
    cur = dag.head

  while cur != nil:
    if cur.slot < ends:
      if cur.slot < start or cur.slot == 0: # skip genesis
        break
      else:
        blockRefs.add cur
    cur = cur.parent
  blockRefs

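# Added note: getBlockRange walks parent links from the head, so the result is
# ordered newest-first. cmdBench and cmdValidatorPerf below therefore iterate
# it back-to-front (blockRefs[blockRefs.len - i - 1]) so that blocks are
# replayed oldest-first, advancing empty slots with process_slots and applying
# each block body with state_transition_block.
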
proc cmdBench(conf: DbConf, runtimePreset: RuntimePreset) =
  var timers: array[Timers, RunningStat]

  echo "Opening database..."
  let
    db = BeaconChainDB.new(
      runtimePreset, conf.databaseDir.string,)
    dbBenchmark = BeaconChainDB.new(runtimePreset, "benchmark")
  defer:
    db.close()
    dbBenchmark.close()

  if not ChainDAGRef.isInitialized(db):
    echo "Database not initialized"
    quit 1

  echo "Initializing block pool..."
  let dag = withTimerRet(timers[tInit]):
    ChainDAGRef.init(runtimePreset, db, {})

  var
    (start, ends) = dag.getSlotRange(conf.benchSlot, conf.benchSlots)
    blockRefs = dag.getBlockRange(start, ends)
    blocks: seq[TrustedSignedBeaconBlock]

  echo &"Loaded {dag.blocks.len} blocks, head slot {dag.head.slot}, selected {blockRefs.len} blocks"
  doAssert blockRefs.len() > 0, "Must select at least one block"

  for b in 0..<blockRefs.len:
    withTimer(timers[tLoadBlock]):
      blocks.add db.getBlock(blockRefs[blockRefs.len - b - 1].root).get()

  let state = newClone(dag.headState)

  var
    cache = StateCache()
    rewards = RewardInfo()
    loadedState = new BeaconState

  withTimer(timers[tLoadState]):
    dag.updateStateData(
      state[], blockRefs[^1].atSlot(blockRefs[^1].slot - 1), false, cache)

  for b in blocks.mitems():
    while getStateField(state[], slot) < b.message.slot:
      let isEpoch = (getStateField(state[], slot) + 1).isEpoch()
      withTimer(timers[if isEpoch: tAdvanceEpoch else: tAdvanceSlot]):
        let ok = process_slots(
          state[].data, getStateField(state[], slot) + 1, cache, rewards, {})
        doAssert ok, "Slot processing can't fail with correct inputs"

    var start = Moment.now()
    withTimer(timers[tApplyBlock]):
      if conf.resetCache:
        cache = StateCache()
      if not state_transition_block(
          runtimePreset, state[].data, b, cache, {}, noRollback):
        dump("./", b)
        echo "State transition failed (!)"
        quit 1
    if conf.printTimes:
      echo b.message.slot, ",", toHex(b.root.data), ",", nanoseconds(Moment.now() - start)
    if conf.storeBlocks:
      withTimer(timers[tDbStore]):
        dbBenchmark.putBlock(b)

    if getStateField(state[], slot).isEpoch and conf.storeStates:
      if getStateField(state[], slot).epoch < 2:
        dbBenchmark.putState(state[].data.root, state[].data.data)
        dbBenchmark.checkpoint()
      else:
        withTimer(timers[tDbStore]):
          dbBenchmark.putState(state[].data.root, state[].data.data)
          dbBenchmark.checkpoint()

        withTimer(timers[tDbLoad]):
          doAssert dbBenchmark.getState(state[].data.root, loadedState[], noRollback)

        if getStateField(state[], slot).epoch mod 16 == 0:
          doAssert hash_tree_root(state[]) == hash_tree_root(loadedState[])

  printTimers(false, timers)

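# Added note: with the storeBlocks/storeStates options enabled, the loop above
# writes into the separate BeaconChainDB created under "benchmark", so
# tDbStore/tDbLoad measure database round-trips without touching the source
# database; each stored epoch state is re-loaded right away and, every 16th
# epoch, checked against the in-memory state via hash_tree_root.
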
proc cmdDumpState(conf: DbConf, preset: RuntimePreset) =
  let db = BeaconChainDB.new(preset, conf.databaseDir.string)
  defer: db.close()

  for stateRoot in conf.stateRoot:
    try:
      let root = Eth2Digest(data: hexToByteArray[32](stateRoot))
      var state = (ref HashedBeaconState)(root: root)
      if not db.getState(root, state.data, noRollback):
        echo "Couldn't load ", root
      else:
        dump("./", state[])
    except CatchableError as e:
      echo "Couldn't load ", stateRoot, ": ", e.msg

proc cmdDumpBlock(conf: DbConf, preset: RuntimePreset) =
  let db = BeaconChainDB.new(preset, conf.databaseDir.string)
  defer: db.close()

  for blockRoot in conf.blockRootx:
    try:
      let root = Eth2Digest(data: hexToByteArray[32](blockRoot))
      if (let blck = db.getBlock(root); blck.isSome):
        dump("./", blck.get())
      else:
        echo "Couldn't load ", root
    except CatchableError as e:
      echo "Couldn't load ", blockRoot, ": ", e.msg

proc copyPrunedDatabase(
    db: BeaconChainDB, copyDb: BeaconChainDB,
    dryRun, verbose, keepOldStates: bool) =
  ## Create a pruned copy of the beacon chain database

  let
    headBlock = db.getHeadBlock()
    tailBlock = db.getTailBlock()

  doAssert headBlock.isOk and tailBlock.isOk
  doAssert db.getBlock(headBlock.get).isOk
  doAssert db.getBlock(tailBlock.get).isOk

  var
    beaconState: ref BeaconState
    finalizedEpoch: Epoch  # default value of 0 is conservative/safe
    prevBlockSlot = db.getBlock(db.getHeadBlock().get).get.message.slot

  beaconState = new BeaconState
  let headEpoch = db.getBlock(headBlock.get).get.message.slot.epoch

  # Tail states are specially addressed; no stateroot intermediary
  if not db.getState(
      db.getBlock(tailBlock.get).get.message.state_root, beaconState[],
      noRollback):
    doAssert false, "could not load tail state"
  if not dry_run:
    copyDb.putState(beaconState[])

  for signedBlock in getAncestors(db, headBlock.get):
    if not dry_run:
      copyDb.putBlock(signedBlock)
      copyDb.checkpoint()
    if verbose:
      echo "copied block at slot ", signedBlock.message.slot

    for slot in countdown(prevBlockSlot, signedBlock.message.slot + 1):
      if slot mod SLOTS_PER_EPOCH != 0 or
          ((not keepOldStates) and slot.epoch < finalizedEpoch):
        continue

      # Could also only copy these states, head and finalized, plus tail state
      let stateRequired = slot.epoch in [finalizedEpoch, headEpoch]

      let sr = db.getStateRoot(signedBlock.root, slot)
      if sr.isErr:
        if stateRequired:
          echo "skipping state root required for slot ",
            slot, " with root ", signedBlock.root
        continue

      if not db.getState(sr.get, beaconState[], noRollback):
        # Don't copy dangling stateroot pointers
        if stateRequired:
          doAssert false, "state root and state required"
        continue

      finalizedEpoch = max(
        finalizedEpoch, beaconState.finalized_checkpoint.epoch)

      if not dry_run:
        copyDb.putStateRoot(signedBlock.root, slot, sr.get)
        copyDb.putState(beaconState[])
      if verbose:
        echo "copied state at slot ", slot, " from block at ", shortLog(signedBlock.message.slot)

    prevBlockSlot = signedBlock.message.slot

  if not dry_run:
    copyDb.putHeadBlock(headBlock.get)
    copyDb.putTailBlock(tailBlock.get)

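# Added summary (commentary only): the pruned copy keeps every block from head
# back to tail, but only epoch-boundary states - and, when keepOldStates is
# false, only those at or after the highest finalized epoch seen so far - plus
# the tail state, which is stored without a state-root indirection. States at
# the head and finalized epochs are treated as required.
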
proc cmdPrune(conf: DbConf, preset: RuntimePreset) =
  let
    db = BeaconChainDB.new(preset, conf.databaseDir.string)
    # TODO: add the destination as a CLI parameter
    copyDb = BeaconChainDB.new(preset, "pruned_db")

  defer:
    db.close()
    copyDb.close()

  db.copyPrunedDatabase(copyDb, conf.dryRun, conf.verbose, conf.keepOldStates)

proc cmdRewindState(conf: DbConf, preset: RuntimePreset) =
  echo "Opening database..."
  let db = BeaconChainDB.new(preset, conf.databaseDir.string)
  defer: db.close()

  if not ChainDAGRef.isInitialized(db):
    echo "Database not initialized"
    quit 1

  echo "Initializing block pool..."
  let dag = init(ChainDAGRef, preset, db)

  let blckRef = dag.getRef(fromHex(Eth2Digest, conf.blockRoot))
  if blckRef == nil:
    echo "Block not found in database"
    return

  let tmpState = assignClone(dag.headState)
  dag.withState(tmpState[], blckRef.atSlot(Slot(conf.slot))):
    echo "Writing state..."
    dump("./", hashedState, blck)

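# Added note for the helper below: for a non-zero slot, atCanonicalSlot pairs
# `slot` with the ancestor block at `slot - 1`, so the resulting BlockSlot
# describes the chain as it stood just before `slot`, excluding any block
# proposed at `slot` itself - the view cmdExportEra wants at an era boundary.
# Slot 0 is special-cased because it has no preceding slot.
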
proc atCanonicalSlot(blck: BlockRef, slot: Slot): BlockSlot =
  if slot == 0:
    blck.atSlot(slot)
  else:
    blck.atSlot(slot - 1).blck.atSlot(slot)

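# Added era-layout note (commentary only): with mainnet's
# SLOTS_PER_HISTORICAL_ROOT of 8192, era 0 below covers only the genesis
# state, while era n > 0 covers slots (n - 1) * 8192 up to but not including
# n * 8192. Each era file starts with the state at endSlot followed by that
# era's blocks in slot order, and is named from the era number in hex, e.g.
# ethereum2-mainnet-00000001-00000001 for era 1.
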
proc cmdExportEra(conf: DbConf, preset: RuntimePreset) =
  let db = BeaconChainDB.new(preset, conf.databaseDir.string)
  defer: db.close()

  if not ChainDAGRef.isInitialized(db):
    echo "Database not initialized"
    quit 1

  echo "Initializing block pool..."
  let
    dag = init(ChainDAGRef, preset, db)

  let tmpState = assignClone(dag.headState)

  for era in conf.era..<conf.era + conf.eraCount:
    let
      firstSlot = if era == 0: Slot(0) else: Slot((era - 1) * SLOTS_PER_HISTORICAL_ROOT)
      endSlot = Slot(era * SLOTS_PER_HISTORICAL_ROOT)
      slotCount = endSlot - firstSlot
      name = &"ethereum2-mainnet-{era.int:08x}-{1:08x}"
      canonical = dag.head.atCanonicalSlot(endSlot)

    if endSlot > dag.head.slot:
      echo "Written all complete eras"
      break

    var e2s = E2Store.open(".", name, firstSlot).get()
    defer: e2s.close()

    dag.withState(tmpState[], canonical):
      e2s.appendRecord(stateData.data.data).get()

    var
      ancestors: seq[BlockRef]
      cur = canonical.blck
    if era != 0:
      while cur != nil and cur.slot >= firstSlot:
        ancestors.add(cur)
        cur = cur.parent

      for i in 0..<ancestors.len():
        let
          ancestor = ancestors[ancestors.len - 1 - i]
        e2s.appendRecord(db.getBlock(ancestor.root).get()).get()

type
  # Validator performance metrics tool based on
  # https://github.com/paulhauner/lighthouse/blob/etl/lcli/src/etl/validator_performance.rs
  # Credits to Paul Hauner
  ValidatorPerformance = object
    attestation_hits: uint64
    attestation_misses: uint64
    head_attestation_hits: uint64
    head_attestation_misses: uint64
    target_attestation_hits: uint64
    target_attestation_misses: uint64
    first_slot_head_attester_when_first_slot_empty: uint64
    first_slot_head_attester_when_first_slot_not_empty: uint64
    delays: Table[uint64, uint64]

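# Added note: `delays` is a histogram keyed by attestation inclusion delay in
# slots, mapping each delay to the number of previous-epoch attestations seen
# with it; cmdValidatorPerf later reduces it to a per-validator average,
# sum(delay * count) / sum(count), for the delay_avg column of its CSV report.
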
proc cmdValidatorPerf(conf: DbConf, runtimePreset: RuntimePreset) =
  echo "Opening database..."
  let
    db = BeaconChainDB.new(
      runtimePreset, conf.databaseDir.string,)
  defer:
    db.close()

  if not ChainDAGRef.isInitialized(db):
    echo "Database not initialized"
    quit 1

  echo "# Initializing block pool..."
  let dag = ChainDAGRef.init(runtimePreset, db, {})

  var
    (start, ends) = dag.getSlotRange(conf.perfSlot, conf.perfSlots)
    blockRefs = dag.getBlockRange(start, ends)
    perfs = newSeq[ValidatorPerformance](
      getStateField(dag.headState, validators).len())
    cache = StateCache()
    rewards = RewardInfo()
    blck: TrustedSignedBeaconBlock

  doAssert blockRefs.len() > 0, "Must select at least one block"

  echo "# Analyzing performance for epochs ",
    blockRefs[^1].slot.epoch, " - ", blockRefs[0].slot.epoch

  let state = newClone(dag.headState)
  dag.updateStateData(
    state[], blockRefs[^1].atSlot(blockRefs[^1].slot - 1), false, cache)

  proc processEpoch() =
    let
      prev_epoch_target_slot =
        state[].get_previous_epoch().compute_start_slot_at_epoch()
      penultimate_epoch_end_slot =
        if prev_epoch_target_slot == 0: Slot(0)
        else: prev_epoch_target_slot - 1
      first_slot_empty =
        state[].get_block_root_at_slot(prev_epoch_target_slot) ==
        state[].get_block_root_at_slot(penultimate_epoch_end_slot)

    let first_slot_attesters = block:
      let committee_count = state[].get_committee_count_per_slot(
        prev_epoch_target_slot.epoch, cache)
      var indices = HashSet[ValidatorIndex]()
      for committee_index in 0..<committee_count:
        for validator_index in state[].get_beacon_committee(
            prev_epoch_target_slot, committee_index.CommitteeIndex, cache):
          indices.incl(validator_index)
      indices

    for i, s in rewards.statuses.pairs():
      let perf = addr perfs[i]
      if RewardFlags.isActiveInPreviousEpoch in s.flags:
        if s.is_previous_epoch_attester.isSome():
          perf.attestation_hits += 1;

          if RewardFlags.isPreviousEpochHeadAttester in s.flags:
            perf.head_attestation_hits += 1
          else:
            perf.head_attestation_misses += 1

          if RewardFlags.isPreviousEpochTargetAttester in s.flags:
            perf.target_attestation_hits += 1
          else:
            perf.target_attestation_misses += 1

          if i.ValidatorIndex in first_slot_attesters:
            if first_slot_empty:
              perf.first_slot_head_attester_when_first_slot_empty += 1
            else:
              perf.first_slot_head_attester_when_first_slot_not_empty += 1

          if s.is_previous_epoch_attester.isSome():
            perf.delays.mGetOrPut(
              s.is_previous_epoch_attester.get().delay, 0'u64) += 1

        else:
          perf.attestation_misses += 1;

  for bi in 0..<blockRefs.len:
    blck = db.getBlock(blockRefs[blockRefs.len - bi - 1].root).get()
    while getStateField(state[], slot) < blck.message.slot:
      let ok = process_slots(
        state[].data, getStateField(state[], slot) + 1, cache, rewards, {})
      doAssert ok, "Slot processing can't fail with correct inputs"

      if getStateField(state[], slot).isEpoch():
        processEpoch()

    if not state_transition_block(
        runtimePreset, state[].data, blck, cache, {}, noRollback):
      echo "State transition failed (!)"
      quit 1

  # Capture rewards of empty slots as well
  while getStateField(state[], slot) < ends:
    let ok = process_slots(
      state[].data, getStateField(state[], slot) + 1, cache, rewards, {})
    doAssert ok, "Slot processing can't fail with correct inputs"

    if getStateField(state[], slot).isEpoch():
      processEpoch()

  echo "validator_index,attestation_hits,attestation_misses,head_attestation_hits,head_attestation_misses,target_attestation_hits,target_attestation_misses,delay_avg,first_slot_head_attester_when_first_slot_empty,first_slot_head_attester_when_first_slot_not_empty"

  for (i, perf) in perfs.pairs:
    var
      count = 0'u64
      sum = 0'u64
    for delay, n in perf.delays:
      count += n
      sum += delay * n
    echo i,",",
      perf.attestation_hits,",",
      perf.attestation_misses,",",
      perf.head_attestation_hits,",",
      perf.head_attestation_misses,",",
      perf.target_attestation_hits,",",
      perf.target_attestation_misses,",",
      if count == 0: 0.0
      else: sum.float / count.float,",",
      perf.first_slot_head_attester_when_first_slot_empty,",",
      perf.first_slot_head_attester_when_first_slot_not_empty

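# Added note: cmdValidatorPerf above prints its per-validator report as CSV on
# stdout, while cmdValidatorDb below persists per-epoch, per-validator data
# into a separate SQLite database chosen with the out-db option, counting the
# validators already present so the database can be updated incrementally
# across runs.
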
proc cmdValidatorDb(conf: DbConf, runtimePreset: RuntimePreset) =
|
|
|
|
# Create a database with performance information for every epoch
|
|
|
|
echo "Opening database..."
|
|
|
|
let
|
|
|
|
db = BeaconChainDB.new(
|
|
|
|
runtimePreset, conf.databaseDir.string,)
|
|
|
|
defer:
|
|
|
|
db.close()
|
|
|
|
|
|
|
|
if not ChainDAGRef.isInitialized(db):
|
|
|
|
echo "Database not initialized"
|
|
|
|
quit 1
|
|
|
|
|
|
|
|
echo "Initializing block pool..."
|
|
|
|
let dag = ChainDAGRef.init(runtimePreset, db, {})
|
|
|
|
|
|
|
|
let outDb = SqStoreRef.init(conf.outDir, "validatorDb").expect("DB")
|
|
|
|
defer: outDb.close()
|
|
|
|
|
|
|
|
  outDb.exec("""
    CREATE TABLE IF NOT EXISTS validators_raw(
      validator_index INTEGER PRIMARY KEY,
      pubkey BLOB NOT NULL,
      withdrawal_credentials BLOB NOT NULL
    );
  """).expect("DB")

  # For convenient viewing
  outDb.exec("""
    CREATE VIEW IF NOT EXISTS validators AS
    SELECT
      validator_index,
      '0x' || lower(hex(pubkey)) as pubkey,
      '0x' || lower(hex(withdrawal_credentials)) as with_cred
    FROM validators_raw;
  """).expect("DB")

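  # Example of querying the validators view created above (illustrative;
  # assumes SqStoreRef keeps the data in a "validatorDb.sqlite3" file under
  # conf.outDir):
  #   sqlite3 validatorDb.sqlite3 \
  #     "SELECT pubkey, with_cred FROM validators WHERE validator_index = 1000;"
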
  outDb.exec("""
    CREATE TABLE IF NOT EXISTS epoch_info(
      epoch INTEGER PRIMARY KEY,
      current_epoch_raw INTEGER NOT NULL,
      previous_epoch_raw INTEGER NOT NULL,
      current_epoch_attesters_raw INTEGER NOT NULL,
      current_epoch_target_attesters_raw INTEGER NOT NULL,
      previous_epoch_attesters_raw INTEGER NOT NULL,
      previous_epoch_target_attesters_raw INTEGER NOT NULL,
      previous_epoch_head_attesters_raw INTEGER NOT NULL
    );
  """).expect("DB")
  outDb.exec("""
    CREATE TABLE IF NOT EXISTS validator_epoch_info(
      validator_index INTEGER,
      epoch INTEGER,
      rewards INTEGER NOT NULL,
      penalties INTEGER NOT NULL,
      source_attester INTEGER NOT NULL,
      target_attester INTEGER NOT NULL,
      head_attester INTEGER NOT NULL,
      inclusion_delay INTEGER NULL,
      PRIMARY KEY(validator_index, epoch)
    );
  """).expect("DB")

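  # Example query against the two tables above (illustrative): epochs in which
  # a given validator missed its head vote, next to the epoch-wide aggregate:
  #   SELECT v.epoch, v.inclusion_delay, e.previous_epoch_head_attesters_raw
  #   FROM validator_epoch_info v
  #   JOIN epoch_info e ON e.epoch = v.epoch
  #   WHERE v.validator_index = 1000 AND v.head_attester = 0;
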
  let
    insertValidator = outDb.prepareStmt("""
      INSERT INTO validators_raw(
        validator_index,
        pubkey,
        withdrawal_credentials)
      VALUES(?, ?, ?);""",
      (int64, array[48, byte], array[32, byte]), void).expect("DB")
    insertEpochInfo = outDb.prepareStmt("""
      INSERT INTO epoch_info(
        epoch,
        current_epoch_raw,
        previous_epoch_raw,
        current_epoch_attesters_raw,
        current_epoch_target_attesters_raw,
        previous_epoch_attesters_raw,
        previous_epoch_target_attesters_raw,
        previous_epoch_head_attesters_raw)
      VALUES(?, ?, ?, ?, ?, ?, ?, ?);""",
      (int64, int64, int64, int64, int64, int64, int64, int64), void).expect("DB")
    insertValidatorInfo = outDb.prepareStmt("""
      INSERT INTO validator_epoch_info(
        validator_index,
        epoch,
        rewards,
        penalties,
        source_attester,
        target_attester,
        head_attester,
        inclusion_delay)
      VALUES(?, ?, ?, ?, ?, ?, ?, ?);""",
      (int64, int64, int64, int64, int64, int64, int64, Option[int64]), void).expect("DB")

  var vals: int64
  discard outDb.exec("SELECT count(*) FROM validators", ()) do (res: int64):
    vals = res

  outDb.exec("BEGIN TRANSACTION;").expect("DB")

  for i in vals..<getStateField(dag.headState, validators).len():
    insertValidator.exec((
      i,
      getStateField(dag.headState, validators).data[i].pubkey.toRaw(),
      getStateField(dag.headState, validators).data[i].withdrawal_credentials.data)).expect("DB")

  outDb.exec("COMMIT;").expect("DB")

  var minEpoch: Epoch
  discard outDb.exec("SELECT MAX(epoch) FROM epoch_info", ()) do (res: int64):
    minEpoch = (res + 1).Epoch

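  # Note: the MAX(epoch) query above means processing resumes at the epoch
  # after the last one already stored, so repeated runs only have to cover
  # newly finalized epochs.
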
  var
    cache = StateCache()
    rewards = RewardInfo()
    blck: TrustedSignedBeaconBlock

  let
    start = minEpoch.compute_start_slot_at_epoch()
    ends = dag.finalizedHead.slot # Avoid dealing with changes

  if start > ends:
    echo "No (new) data found, database at ", minEpoch, ", finalized to ", ends.epoch
    quit 1

  let blockRefs = dag.getBlockRange(start, ends)

  echo "Analyzing performance for epochs ",
    start.epoch, " - ", ends.epoch

  let state = newClone(dag.headState)
  dag.updateStateData(
    state[], blockRefs[^1].atSlot(if start > 0: start - 1 else: 0.Slot),
    false, cache)

  proc processEpoch() =
    echo getStateField(state[], slot).epoch
    outDb.exec("BEGIN TRANSACTION;").expect("DB")
    insertEpochInfo.exec(
      (getStateField(state[], slot).epoch.int64,
      rewards.total_balances.current_epoch_raw.int64,
      rewards.total_balances.previous_epoch_raw.int64,
      rewards.total_balances.current_epoch_attesters_raw.int64,
      rewards.total_balances.current_epoch_target_attesters_raw.int64,
      rewards.total_balances.previous_epoch_attesters_raw.int64,
      rewards.total_balances.previous_epoch_target_attesters_raw.int64,
      rewards.total_balances.previous_epoch_head_attesters_raw.int64)
    ).expect("DB")

    for index, status in rewards.statuses.pairs():
      if not is_eligible_validator(status):
        continue
      let
        notSlashed = (RewardFlags.isSlashed notin status.flags)
        source_attester =
          notSlashed and status.is_previous_epoch_attester.isSome()
        target_attester =
          notSlashed and RewardFlags.isPreviousEpochTargetAttester in status.flags
        head_attester =
          notSlashed and RewardFlags.isPreviousEpochHeadAttester in status.flags
        delay =
          if notSlashed and status.is_previous_epoch_attester.isSome():
            some(int64(status.is_previous_epoch_attester.get().delay))
          else:
            none(int64)

      if conf.perfect or not
          (source_attester and target_attester and head_attester and
            delay.isSome() and delay.get() == 1):
        insertValidatorInfo.exec(
          (index.int64,
          getStateField(state[], slot).epoch.int64,
          status.delta.rewards.int64,
          status.delta.penalties.int64,
          int64(source_attester), # Source delta
          int64(target_attester), # Target delta
          int64(head_attester), # Head delta
          delay)).expect("DB")
    outDb.exec("COMMIT;").expect("DB")
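
  # Row-count optimization in processEpoch above: unless conf.perfect is set,
  # per-validator rows are only written for epochs where the validator was not
  # "perfect" (source, target and head hit, inclusion delay of 1); a missing
  # row for an eligible validator therefore implies perfect performance in
  # that epoch.
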
  for bi in 0..<blockRefs.len:
    blck = db.getBlock(blockRefs[blockRefs.len - bi - 1].root).get()
    while getStateField(state[], slot) < blck.message.slot:
      let ok = process_slots(
        state[].data, getStateField(state[], slot) + 1, cache, rewards, {})
      doAssert ok, "Slot processing can't fail with correct inputs"

      if getStateField(state[], slot).isEpoch():
        processEpoch()

    if not state_transition_block(
        runtimePreset, state[].data, blck, cache, {}, noRollback):
      echo "State transition failed (!)"
      quit 1

  # Capture rewards of empty slots as well, including the epoch that got
  # finalized
  while getStateField(state[], slot) <= ends:
    let ok = process_slots(
      state[].data, getStateField(state[], slot) + 1, cache, rewards, {})
    doAssert ok, "Slot processing can't fail with correct inputs"

    if getStateField(state[], slot).isEpoch():
      processEpoch()

when isMainModule:
  var
    conf = DbConf.load()
    runtimePreset = getRuntimePresetForNetwork(conf.eth2Network)

  case conf.cmd
  of bench:
    cmdBench(conf, runtimePreset)
  of dumpState:
    cmdDumpState(conf, runtimePreset)
  of dumpBlock:
    cmdDumpBlock(conf, runtimePreset)
  of pruneDatabase:
    cmdPrune(conf, runtimePreset)
  of rewindState:
    cmdRewindState(conf, runtimePreset)
  of exportEra:
    cmdExportEra(conf, runtimePreset)
  of validatorPerf:
    cmdValidatorPerf(conf, runtimePreset)
  of validatorDb:
    cmdValidatorDb(conf, runtimePreset)
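
# Note: the subcommands above are selected on the tool's command line
# (conf.cmd). validatorPerf prints its per-validator CSV report to stdout,
# while validatorDb writes its results to the SQLite database created in
# conf.outDir.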