ForkedChain implementation (#2405)
* ForkedChain implementation
  - revamp test_blockchain_json using ForkedChain
  - re-enable previously failing test cases
* Remove excess error handling
* Avoid reloading parent header
* Do not force base update
* Write baggage to database
* Add findActiveChain to finalizedSegment
* Create new stagingTx in addBlock
* Check last stateRoot existence in test_blockchain_json
* Resolve rebase conflict
* More precise nomenclature for block import cursor
* Ensure bad block not imported and good block not rejected
* finalizeSegment becomes forkChoice and aligns with engine API forkChoice spec
* Display reason when good block rejected
* Fix comments
* Put BaseDistance into CalculateNewBase equation
* Separate finalizedHash from baseHash
* Add more doAssert constraints
* Add push raises: []
This commit is contained in:
parent 3e001e322c
commit cd21c4fbec

@@ -22,17 +22,17 @@ OK: 15/15 Fail: 0/15 Skip: 0/15
## bcArrowGlacierToParis
```diff
+ difficultyFormula.json OK
  powToPosBlockRejection.json Skip
+ powToPosBlockRejection.json OK
+ powToPosTest.json OK
```
OK: 2/3 Fail: 0/3 Skip: 1/3
OK: 3/3 Fail: 0/3 Skip: 0/3
## bcBerlinToLondon
```diff
+ BerlinToLondonTransition.json OK
  initialVal.json Skip
+ initialVal.json OK
+ londonUncles.json OK
```
OK: 2/3 Fail: 0/3 Skip: 1/3
OK: 3/3 Fail: 0/3 Skip: 0/3
## bcBlockGasLimitTest
```diff
+ BlockGasLimit2p63m1.json OK
@@ -115,35 +115,35 @@ OK: 3/4 Fail: 0/4 Skip: 1/4
## bcForkStressTest
```diff
+ AmIOnEIP150.json OK
  ForkStressTest.json Skip
+ ForkStressTest.json OK
```
OK: 1/2 Fail: 0/2 Skip: 1/2
OK: 2/2 Fail: 0/2 Skip: 0/2
## bcFrontierToHomestead
```diff
+ CallContractThatCreateContractBeforeAndAfterSwitchover.json OK
+ ContractCreationFailsOnHomestead.json OK
  HomesteadOverrideFrontier.json Skip
+ HomesteadOverrideFrontier.json OK
+ UncleFromFrontierInHomestead.json OK
+ UnclePopulation.json OK
  blockChainFrontierWithLargerTDvsHomesteadBlockchain.json Skip
  blockChainFrontierWithLargerTDvsHomesteadBlockchain2.json Skip
+ blockChainFrontierWithLargerTDvsHomesteadBlockchain.json OK
+ blockChainFrontierWithLargerTDvsHomesteadBlockchain2.json OK
```
OK: 4/7 Fail: 0/7 Skip: 3/7
OK: 7/7 Fail: 0/7 Skip: 0/7
## bcGasPricerTest
```diff
  RPC_API_Test.json Skip
+ RPC_API_Test.json OK
+ highGasUsage.json OK
+ notxs.json OK
```
OK: 2/3 Fail: 0/3 Skip: 1/3
OK: 3/3 Fail: 0/3 Skip: 0/3
## bcHomesteadToDao
```diff
  DaoTransactions.json Skip
+ DaoTransactions.json OK
+ DaoTransactions_EmptyTransactionAndForkBlocksAhead.json OK
+ DaoTransactions_UncleExtradata.json OK
+ DaoTransactions_XBlockm1.json OK
```
OK: 3/4 Fail: 0/4 Skip: 1/4
OK: 4/4 Fail: 0/4 Skip: 0/4
## bcHomesteadToEIP150
```diff
+ EIP150Transition.json OK
@@ -182,17 +182,17 @@ OK: 22/22 Fail: 0/22 Skip: 0/22
OK: 1/1 Fail: 0/1 Skip: 0/1
## bcMultiChainTest
```diff
  CallContractFromNotBestBlock.json Skip
  ChainAtoChainB.json Skip
  ChainAtoChainBCallContractFormA.json Skip
  ChainAtoChainB_BlockHash.json Skip
  ChainAtoChainB_difficultyB.json Skip
  ChainAtoChainBtoChainA.json Skip
  ChainAtoChainBtoChainAtoChainB.json Skip
  UncleFromSideChain.json Skip
  lotsOfLeafs.json Skip
+ CallContractFromNotBestBlock.json OK
+ ChainAtoChainB.json OK
+ ChainAtoChainBCallContractFormA.json OK
+ ChainAtoChainB_BlockHash.json OK
+ ChainAtoChainB_difficultyB.json OK
+ ChainAtoChainBtoChainA.json OK
+ ChainAtoChainBtoChainAtoChainB.json OK
+ UncleFromSideChain.json OK
+ lotsOfLeafs.json OK
```
OK: 0/9 Fail: 0/9 Skip: 9/9
OK: 9/9 Fail: 0/9 Skip: 0/9
## bcRandomBlockhashTest
```diff
+ 201503110226PYTHON_DUP6BC.json OK
@@ -408,18 +408,18 @@ OK: 105/105 Fail: 0/105 Skip: 0/105
OK: 99/100 Fail: 0/100 Skip: 1/100
## bcTotalDifficultyTest
```diff
  lotsOfBranchesOverrideAtTheEnd.json Skip
  lotsOfBranchesOverrideAtTheMiddle.json Skip
  newChainFrom4Block.json Skip
  newChainFrom5Block.json Skip
  newChainFrom6Block.json Skip
  sideChainWithMoreTransactions.json Skip
  sideChainWithMoreTransactions2.json Skip
  sideChainWithNewMaxDifficultyStartingFromBlock3AfterBlock4.json Skip
  uncleBlockAtBlock3AfterBlock3.json Skip
  uncleBlockAtBlock3afterBlock4.json Skip
+ lotsOfBranchesOverrideAtTheEnd.json OK
+ lotsOfBranchesOverrideAtTheMiddle.json OK
+ newChainFrom4Block.json OK
+ newChainFrom5Block.json OK
+ newChainFrom6Block.json OK
+ sideChainWithMoreTransactions.json OK
+ sideChainWithMoreTransactions2.json OK
+ sideChainWithNewMaxDifficultyStartingFromBlock3AfterBlock4.json OK
+ uncleBlockAtBlock3AfterBlock3.json OK
+ uncleBlockAtBlock3afterBlock4.json OK
```
OK: 0/10 Fail: 0/10 Skip: 10/10
OK: 10/10 Fail: 0/10 Skip: 0/10
## bcUncleHeaderValidity
```diff
+ correct.json OK

@@ -3726,4 +3726,4 @@ OK: 11/11 Fail: 0/11 Skip: 0/11
OK: 1/1 Fail: 0/1 Skip: 0/1

---TOTAL---
OK: 3140/3272 Fail: 0/3272 Skip: 132/3272
OK: 3167/3272 Fail: 0/3272 Skip: 105/3272
@@ -0,0 +1,425 @@
# Nimbus
# Copyright (c) 2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

{.push raises: [].}

import
  std/tables,
  ../../common,
  ../../db/core_db,
  ../../evm/types,
  ../../evm/state,
  ../validate,
  ../executor/process_block

type
  CursorDesc = object
    forkJunction: BlockNumber
    hash: Hash256

  BlockDesc = object
    blk: EthBlock
    receipts: seq[Receipt]

  BaseDesc = object
    hash: Hash256
    header: BlockHeader

  CanonicalDesc = object
    cursorHash: Hash256
    header: BlockHeader

  ForkedChain* = object
    stagingTx: CoreDbTxRef
    db: CoreDbRef
    com: CommonRef
    blocks: Table[Hash256, BlockDesc]
    baseHash: Hash256
    baseHeader: BlockHeader
    cursorHash: Hash256
    cursorHeader: BlockHeader
    cursorHeads: seq[CursorDesc]

const
  BaseDistance = 128

# ------------------------------------------------------------------------------
# Private
# ------------------------------------------------------------------------------

template shouldNotKeyError(body: untyped) =
  try:
    body
  except KeyError as exc:
    raiseAssert exc.msg

proc processBlock(c: ForkedChain,
                  parent: BlockHeader,
                  blk: EthBlock): Result[seq[Receipt], string] =
  template header(): BlockHeader =
    blk.header

  let vmState = BaseVMState()
  vmState.init(parent, header, c.com)
  c.com.hardForkTransition(header)

  ?c.com.validateHeaderAndKinship(blk, vmState.parent, checkSealOK = false)

  ?vmState.processBlock(
    blk,
    skipValidation = false,
    skipReceipts = false,
    skipUncles = true,
  )

  # We still need to write the header to the database
  # because validateUncles still needs it
  let blockHash = header.blockHash()
  if not c.db.persistHeader(
       blockHash,
       header, c.com.consensus == ConsensusType.POS,
       c.com.startOfHistory):
    return err("Could not persist header")

  ok(move(vmState.receipts))

func updateCursorHeads(c: var ForkedChain,
                       cursorHash: Hash256,
                       header: BlockHeader) =
  # Example of cursorHeads and cursor
  #
  #    -- A1 - A2 - A3      -- D5 - D6
  #   /                    /
  # base - B1 - B2 - B3 - B4
  #   \
  #    --- C3 - C4
  #
  # A3, B4, C4, and D6 are in cursorHeads.
  # Whichever of them has blockHash == cursorHash
  # is the active chain, with the cursor pointing to the
  # latest block of that chain.

  for i in 0..<c.cursorHeads.len:
    if c.cursorHeads[i].hash == header.parentHash:
      c.cursorHeads[i].hash = cursorHash
      return

  c.cursorHeads.add CursorDesc(
    hash: cursorHash,
    forkJunction: header.number,
  )
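The head-tracking rule above is compact: a block whose parent is a current head extends that head in place; any other block starts a new fork head recorded at its own number. A minimal Python model of this rule (Python and all names here are mine, for illustration only; the dict-shaped headers are an assumption, not the Nim types):

```python
from dataclasses import dataclass

@dataclass
class CursorDesc:
    fork_junction: int  # block number where this fork split off
    hash: str

def update_cursor_heads(heads, cursor_hash, header):
    """Extend an existing head in place, or register a new fork head."""
    for head in heads:
        if head.hash == header["parent_hash"]:
            head.hash = cursor_hash  # chain extended; junction unchanged
            return
    heads.append(CursorDesc(fork_junction=header["number"], hash=cursor_hash))

heads = []
update_cursor_heads(heads, "A1", {"parent_hash": "base", "number": 1})
update_cursor_heads(heads, "A2", {"parent_hash": "A1", "number": 2})
# A second child of A1: A1 is no longer a head, so this opens a new fork
update_cursor_heads(heads, "B2", {"parent_hash": "A1", "number": 2})
```

After these three calls there are two heads: `A2` (junction 1) and `B2` (junction 2), matching the diagram's idea that each entry remembers where its branch left the trunk.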

func updateCursor(c: var ForkedChain,
                  blk: EthBlock,
                  receipts: sink seq[Receipt]) =
  template header(): BlockHeader =
    blk.header

  c.cursorHeader = header
  c.cursorHash = header.blockHash
  c.blocks[c.cursorHash] = BlockDesc(
    blk: blk,
    receipts: move(receipts)
  )
  c.updateCursorHeads(c.cursorHash, header)

proc validateBlock(c: var ForkedChain,
                   parent: BlockHeader,
                   blk: EthBlock,
                   updateCursor: bool = true): Result[void, string] =
  let dbTx = c.db.newTransaction()
  defer:
    dbTx.dispose()

  var res = c.processBlock(parent, blk)
  if res.isErr:
    dbTx.rollback()
    return err(res.error)

  dbTx.commit()
  if updateCursor:
    c.updateCursor(blk, move(res.value))

  ok()

proc replaySegment(c: var ForkedChain, target: Hash256) =
  # Replay from base+1 to the target block
  var
    prevHash = target
    chain = newSeq[EthBlock]()

  shouldNotKeyError:
    while prevHash != c.baseHash:
      chain.add c.blocks[prevHash].blk
      prevHash = chain[^1].header.parentHash

  c.stagingTx.rollback()
  c.stagingTx = c.db.newTransaction()
  c.cursorHeader = c.baseHeader
  for i in countdown(chain.high, chain.low):
    c.validateBlock(c.cursorHeader, chain[i],
      updateCursor = false).expect("have been validated before")
    c.cursorHeader = chain[i].header
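`replaySegment` walks parent links backward from the target to the base, then re-executes the collected blocks in forward order. The two-phase shape can be sketched in Python (a model under my own names, not the Nim API; block execution is elided):

```python
def replay_segment(blocks, base_hash, target):
    """Collect the segment from target back to base by parent links,
    then return it in forward (replay) order."""
    chain = []
    prev = target
    while prev != base_hash:
        blk = blocks[prev]   # raises KeyError if the segment is broken
        chain.append(blk)
        prev = blk["parent_hash"]
    return list(reversed(chain))

blocks = {
    "h1": {"number": 1, "parent_hash": "base"},
    "h2": {"number": 2, "parent_hash": "h1"},
    "h3": {"number": 3, "parent_hash": "h2"},
}
segment = replay_segment(blocks, "base", "h3")
```

The returned `segment` is ordered 1, 2, 3, i.e. oldest first, which is why the Nim code iterates `countdown(chain.high, chain.low)` over the backward-collected sequence.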

proc writeBaggage(c: var ForkedChain, target: Hash256) =
  # Write baggage from base+1 to the target block
  shouldNotKeyError:
    var prevHash = target
    while prevHash != c.baseHash:
      let blk = c.blocks[prevHash]
      c.db.persistTransactions(blk.blk.header.number, blk.blk.transactions)
      c.db.persistReceipts(blk.receipts)
      discard c.db.persistUncles(blk.blk.uncles)
      if blk.blk.withdrawals.isSome:
        c.db.persistWithdrawals(blk.blk.withdrawals.get)
      prevHash = blk.blk.header.parentHash

func updateBase(c: var ForkedChain,
                newBaseHash: Hash256,
                newBaseHeader: BlockHeader,
                canonicalCursorHash: Hash256) =
  var cursorHeadsLen = c.cursorHeads.len
  # Remove obsolete chains, example:
  #    -- A1 - A2 - A3      -- D5 - D6
  #   /                    /
  # base - B1 - B2 - [B3] - B4
  #   \
  #    --- C3 - C4
  # If the base moves to B3, both A and C will be removed,
  # but not D

  for i in 0..<cursorHeadsLen:
    if c.cursorHeads[i].forkJunction <= newBaseHeader.number and
       c.cursorHeads[i].hash != canonicalCursorHash:
      var prevHash = c.cursorHeads[i].hash
      while prevHash != c.baseHash:
        c.blocks.withValue(prevHash, val) do:
          let rmHash = prevHash
          prevHash = val.blk.header.parentHash
          c.blocks.del(rmHash)
        do:
          # Older chain segment has already been deleted
          # by a previous head
          break
      c.cursorHeads.del(i)
      # If we used `c.cursorHeads.len` in the for loop,
      # the sequence length would not be updated
      dec cursorHeadsLen

  # Clean up in-memory blocks starting from newBase backward,
  # while blocks from newBase+1 to canonicalCursor are kept,
  # e.g. B4 onward
  var prevHash = newBaseHash
  while prevHash != c.baseHash:
    c.blocks.withValue(prevHash, val) do:
      let rmHash = prevHash
      prevHash = val.blk.header.parentHash
      c.blocks.del(rmHash)
    do:
      # Older chain segment has already been deleted
      # by a previous head
      break

  c.baseHeader = newBaseHeader
  c.baseHash = newBaseHash

func findCanonicalHead(c: ForkedChain,
                       hash: Hash256): Result[CanonicalDesc, string] =
  if hash == c.baseHash:
    # The cursorHash here should not be used for the next step
    # because it does not point to any active chain
    return ok(CanonicalDesc(cursorHash: c.baseHash, header: c.baseHeader))

  shouldNotKeyError:
    # Find which chain the hash belongs to
    for cursor in c.cursorHeads:
      let header = c.blocks[cursor.hash].blk.header
      var prevHash = cursor.hash
      while prevHash != c.baseHash:
        if prevHash == hash:
          return ok(CanonicalDesc(cursorHash: cursor.hash, header: header))
        prevHash = c.blocks[prevHash].blk.header.parentHash

  err("Block hash is not part of any active chain")

func canonicalChain(c: ForkedChain,
                    hash: Hash256,
                    headHash: Hash256): Result[BlockHeader, string] =
  if hash == c.baseHash:
    return ok(c.baseHeader)

  shouldNotKeyError:
    var prevHash = headHash
    while prevHash != c.baseHash:
      var header = c.blocks[prevHash].blk.header
      if prevHash == hash:
        return ok(header)
      prevHash = header.parentHash

  err("Block hash not in canonical chain")

func calculateNewBase(c: ForkedChain,
                      finalizedHeader: BlockHeader,
                      headHash: Hash256,
                      headHeader: BlockHeader): BaseDesc =
  # It's important to have the base at least `BaseDistance` behind the head
  # so we can answer state queries about history that deep.

  let targetNumber = min(finalizedHeader.number,
    max(headHeader.number, BaseDistance) - BaseDistance)

  # The distance is less than `BaseDistance`, don't move the base
  if targetNumber - c.baseHeader.number <= BaseDistance:
    return BaseDesc(hash: c.baseHash, header: c.baseHeader)

  shouldNotKeyError:
    var prevHash = headHash
    while prevHash != c.baseHash:
      var header = c.blocks[prevHash].blk.header
      if header.number == targetNumber:
        return BaseDesc(hash: prevHash, header: move(header))
      prevHash = header.parentHash

  doAssert(false, "Unreachable code")
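The base-selection arithmetic in `calculateNewBase` (the "Put BaseDistance into CalculateNewBase equation" change from the commit message) can be checked in isolation. A minimal Python model of just the number rule, with my own function name:

```python
BASE_DISTANCE = 128

def target_base_number(finalized_number, head_number, current_base_number):
    """New-base rule: stay at least BASE_DISTANCE behind head, never pass
    the finalized block, and only move when the jump exceeds BASE_DISTANCE."""
    target = min(finalized_number,
                 max(head_number, BASE_DISTANCE) - BASE_DISTANCE)
    if target - current_base_number <= BASE_DISTANCE:
        return current_base_number  # don't move the base
    return target

# Long chain: base jumps to head - 128 (finalized permits it)
print(target_base_number(1000, 1100, 0))   # 972
# Small gap: base stays put
print(target_base_number(500, 600, 400))   # 400
# Near genesis: max() clamps the subtraction, base stays at 0
print(target_base_number(100, 100, 0))     # 0
```

The `max(head, BaseDistance)` clamp is what keeps the unsigned subtraction from underflowing near genesis.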

func trimCanonicalChain(c: var ForkedChain, head: CanonicalDesc) =
  # Maybe the current active chain is longer than the canonical chain
  shouldNotKeyError:
    var prevHash = head.cursorHash
    while prevHash != c.baseHash:
      let header = c.blocks[prevHash].blk.header
      if header.number > head.header.number:
        c.blocks.del(prevHash)
      else:
        break
      prevHash = header.parentHash

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc initForkedChain*(com: CommonRef, baseHeader: BlockHeader): ForkedChain =
  result.com = com
  result.db = com.db
  result.baseHeader = baseHeader
  result.cursorHash = baseHeader.blockHash
  result.baseHash = result.cursorHash
  result.cursorHeader = result.baseHeader

proc importBlock*(c: var ForkedChain, blk: EthBlock): Result[void, string] =
  # Try to import the block into the canonical or a side chain.
  # Return an error if the block is invalid.
  if c.stagingTx.isNil:
    c.stagingTx = c.db.newTransaction()

  template header(): BlockHeader =
    blk.header

  if header.parentHash == c.cursorHash:
    return c.validateBlock(c.cursorHeader, blk)

  if header.parentHash == c.baseHash:
    c.stagingTx.rollback()
    c.stagingTx = c.db.newTransaction()
    return c.validateBlock(c.baseHeader, blk)

  if header.parentHash notin c.blocks:
    # If its parent is an invalid block
    # there is no hope the descendant is valid
    return err("Block is not part of valid chain")

  # TODO: If the engine API keeps importing blocks
  # but does not finalize them, e.g. current chain length > StagedBlocksThreshold,
  # we need to persist some of the in-memory stuff
  # to a "staging area" or disk-backed memory, but it must not affect `base`.
  # `base` is the point of no return; we only update it on finality.

  c.replaySegment(header.parentHash)
  c.validateBlock(c.cursorHeader, blk)

proc forkChoice*(c: var ForkedChain,
                 headHash: Hash256,
                 finalizedHash: Hash256): Result[void, string] =

  # If there are multiple heads, find which chain headHash belongs to
  let head = ?c.findCanonicalHead(headHash)

  # The finalized block must be part of the canonical chain
  let finalizedHeader = ?c.canonicalChain(finalizedHash, headHash)

  let newBase = c.calculateNewBase(
    finalizedHeader, headHash, head.header)

  if newBase.hash == c.baseHash:
    # The base is not updated, but the cursor may need an update
    if c.cursorHash != head.cursorHash:
      if not c.stagingTx.isNil:
        c.stagingTx.rollback()
      c.stagingTx = c.db.newTransaction()
      c.replaySegment(headHash)

    c.trimCanonicalChain(head)
    if c.cursorHash != headHash:
      c.cursorHeader = head.header
      c.cursorHash = headHash
    return ok()

  # At this point cursorHeader.number > baseHeader.number
  if newBase.hash == c.cursorHash:
    # Paranoid check, guaranteed by findCanonicalHead
    doAssert(c.cursorHash == head.cursorHash)

    # Current segment is the canonical chain
    c.writeBaggage(newBase.hash)

    # Paranoid check, guaranteed by `newBase.hash == c.cursorHash`
    doAssert(not c.stagingTx.isNil)
    c.stagingTx.commit()
    c.stagingTx = nil

    # Move base to newBase
    c.updateBase(newBase.hash, c.cursorHeader, head.cursorHash)

    # Save and record the block number before the last saved block state.
    c.db.persistent(c.cursorHeader.number).isOkOr:
      return err("Failed to save state: " & $$error)

    return ok()

  # At this point finalizedHeader.number is <= headHeader.number,
  # and we may have switched to another chain beside the one with the cursor
  doAssert(finalizedHeader.number <= head.header.number)
  doAssert(newBase.header.number <= finalizedHeader.number)

  # Write the segment from base+1 to newBase into the database
  c.stagingTx.rollback()
  c.stagingTx = c.db.newTransaction()
  if newBase.header.number > c.baseHeader.number:
    c.replaySegment(newBase.hash)
    c.writeBaggage(newBase.hash)
    c.stagingTx.commit()
    c.stagingTx = nil
    # Move base forward to newBase
    c.updateBase(newBase.hash, newBase.header, head.cursorHash)
    c.db.persistent(newBase.header.number).isOkOr:
      return err("Failed to save state: " & $$error)

  # Move chain state forward to the current head
  if newBase.header.number < head.header.number:
    if c.stagingTx.isNil:
      c.stagingTx = c.db.newTransaction()
    c.replaySegment(headHash)

  # Move cursor to the current head
  c.trimCanonicalChain(head)
  if c.cursorHash != headHash:
    c.cursorHeader = head.header
    c.cursorHash = headHash

  ok()
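`forkChoice` has three top-level outcomes, selected by comparing the computed new base against the current base and cursor. A Python decision-table sketch of just that dispatch (the action names are mine, for illustration; the real procedure also replays, trims, and persists):

```python
def fork_choice_action(base_hash, cursor_hash, new_base_hash):
    """Which persistence path forkChoice takes, by hash comparison."""
    if new_base_hash == base_hash:
        # Base unchanged: at most re-point the cursor (replay/trim only)
        return "update-cursor-only"
    if new_base_hash == cursor_hash:
        # The cursor's own segment is canonical: commit it and move base
        return "commit-current-segment"
    # New base lies on some other (or partial) segment: replay base+1..newBase,
    # write it out, then advance the cursor to the head
    return "replay-and-advance-base"
```

This mirrors the branch order in the code above: the cheap no-base-move case is handled first, the fully-canonical cursor segment second, and the general replay path last.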

@@ -117,7 +117,7 @@ proc procBlkPreamble(
  ok()

proc procBlkEpilogue(
    vmState: BaseVMState, header: BlockHeader, skipValidation: bool
    vmState: BaseVMState, header: BlockHeader, skipValidation: bool, skipReceipts: bool
): Result[void, string] =
  # Reward beneficiary
  vmState.mutateStateDB:
@@ -141,19 +141,20 @@ proc procBlkEpilogue(
      arrivedFrom = vmState.com.db.getCanonicalHead().stateRoot
    return err("stateRoot mismatch")

  let bloom = createBloom(vmState.receipts)

  if header.logsBloom != bloom:
    return err("bloom mismatch")

  let receiptsRoot = calcReceiptsRoot(vmState.receipts)
  if header.receiptsRoot != receiptsRoot:
    # TODO replace logging with better error
    debug "wrong receiptRoot in block",
      blockNumber = header.number,
      actual = receiptsRoot,
      expected = header.receiptsRoot
    return err("receiptRoot mismatch")
  if not skipReceipts:
    let bloom = createBloom(vmState.receipts)

    if header.logsBloom != bloom:
      return err("bloom mismatch")

    let receiptsRoot = calcReceiptsRoot(vmState.receipts)
    if header.receiptsRoot != receiptsRoot:
      # TODO replace logging with better error
      debug "wrong receiptRoot in block",
        blockNumber = header.number,
        actual = receiptsRoot,
        expected = header.receiptsRoot
      return err("receiptRoot mismatch")

  ok()

@@ -175,7 +176,7 @@ proc processBlock*(
  if vmState.com.consensus == ConsensusType.POW:
    vmState.calculateReward(blk.header, blk.uncles)

  ?vmState.procBlkEpilogue(blk.header, skipValidation)
  ?vmState.procBlkEpilogue(blk.header, skipValidation, skipReceipts)

  ok()
@@ -115,47 +115,10 @@ func skipBCTests*(folder: string, name: string): bool =
    "DelegateCallSpam.json",
  ]

# skip failing cases
# TODO: see issue #2260
const
  problematicCases = [
    "powToPosBlockRejection.json",
    "initialVal.json",
    "ForkStressTest.json",
    "HomesteadOverrideFrontier.json",
    "blockChainFrontierWithLargerTDvsHomesteadBlockchain.json",
    "blockChainFrontierWithLargerTDvsHomesteadBlockchain2.json",
    "RPC_API_Test.json",
    "DaoTransactions.json",
    "CallContractFromNotBestBlock.json",
    "ChainAtoChainB.json",
    "ChainAtoChainBCallContractFormA.json",
    "ChainAtoChainB_BlockHash.json",
    "ChainAtoChainB_difficultyB.json",
    "ChainAtoChainBtoChainA.json",
    "ChainAtoChainBtoChainAtoChainB.json",
    "UncleFromSideChain.json",
    "lotsOfLeafs.json",
    "lotsOfBranchesOverrideAtTheEnd.json",
    "lotsOfBranchesOverrideAtTheMiddle.json",
    "newChainFrom4Block.json",
    "newChainFrom5Block.json",
    "newChainFrom6Block.json",
    "sideChainWithMoreTransactions.json",
    "sideChainWithMoreTransactions2.json",
    "sideChainWithNewMaxDifficultyStartingFromBlock3AfterBlock4.json",
    "uncleBlockAtBlock3AfterBlock3.json",
    "uncleBlockAtBlock3afterBlock4.json",
  ]

func skipNewBCTests*(folder: string, name: string): bool =
  if folder in ["vmPerformance"]:
    return true

  # TODO: fix this
  if name in problematicCases:
    return true

  # the new BC tests also contain these slow tests
  # for the Istanbul fork
  if slowGSTTests(folder, name):

@@ -166,7 +129,7 @@ func skipNewBCTests*(folder: string, name: string): bool =
    "randomStatetest94.json",
    "DelegateCallSpam.json",
  ]

func skipPrecompilesTests*(folder: string, name: string): bool =
  # EIP2565: modExp gas cost
  # reason: included in berlin
@@ -9,521 +9,146 @@
# according to those terms.

import
  std/[json, os, tables, strutils, options, streams],
  std/json,
  unittest2,
  eth/rlp, eth/trie/trie_defs, eth/common/eth_types_rlp,
  stew/byteutils,
  ./test_helpers, ./test_allowed_to_fail,
  ../premix/parser, test_config,
  ../nimbus/[evm/state, evm/types, errors, constants],
  ./test_helpers,
  ./test_allowed_to_fail,
  ../nimbus/db/ledger,
  ../nimbus/utils/[utils, debug],
  ../nimbus/evm/tracer/legacy_tracer,
  ../nimbus/evm/tracer/json_tracer,
  ../nimbus/core/[validate, chain, pow/header],
  ../nimbus/core/chain/forked_chain,
  ../tools/common/helpers as chp,
  ../tools/evmstate/helpers,
  ../nimbus/common/common,
  ../nimbus/core/eip4844,
  ../nimbus/rpc/experimental
  ../nimbus/core/eip4844

const
  debugMode = false

type
  SealEngine = enum
    NoProof
    Ethash

  BlockDesc = object
    blk: EthBlock
    badBlock: bool

  TestBlock = object
    goodBlock: bool
    blockRLP : Blob
    header : BlockHeader
    body : BlockBody
    hasException: bool
    withdrawals: Option[seq[Withdrawal]]

  TestCtx = object
    lastBlockHash: Hash256
  TestEnv = object
    blocks: seq[BlockDesc]
    genesisHeader: BlockHeader
    blocks : seq[TestBlock]
    sealEngine : Option[SealEngine]
    debugMode : bool
    trace : bool
    vmState : BaseVMState
    debugData : JsonNode
    network : string
    postStateHash: Hash256
    json : bool
    lastBlockHash: Hash256
    network: string
    pre: JsonNode

proc testFixture(node: JsonNode, testStatusIMPL: var TestStatus, debugMode = false, trace = false)

func normalizeNumber(n: JsonNode): JsonNode =
  let str = n.getStr
  if str == "0x":
    result = newJString("0x0")
  elif str == "0x0":
    result = n
  elif str == "0x00":
    result = newJString("0x0")
  elif str[2] == '0':
    var i = 2
    while str[i] == '0':
      inc i
    result = newJString("0x" & str.substr(i))
  else:
    result = n
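`normalizeNumber` canonicalizes JSON hex quantities by stripping leading zeros so that e.g. `"0x"`, `"0x0"`, and `"0x00"` all compare equal as `"0x0"`. A Python model of the same rule (my own function name; I add a bounds guard for an all-zero string like `"0x000"`, a case the Nim version indexes past and presumably never sees in the fixtures):

```python
def normalize_number(s):
    """Canonicalize a 0x-prefixed hex quantity by stripping leading zeros."""
    if s in ("0x", "0x00"):
        return "0x0"
    if s == "0x0":
        return s
    if s[2] == "0":
        i = 2
        # guard: stop before the last character so "0x000" -> "0x0"
        while i < len(s) - 1 and s[i] == "0":
            i += 1
        return "0x" + s[i:]
    return s

print(normalize_number("0x0ab"))  # 0xab
print(normalize_number("0x"))     # 0x0
```

This kind of normalization is needed because different fixture generators serialize the same quantity with different zero-padding.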
|
||||
|
||||
func normalizeData(n: JsonNode): JsonNode =
|
||||
if n.getStr() == "":
|
||||
result = newJString("0x")
|
||||
else:
|
||||
result = n
|
||||
|
||||
func normalizeBlockHeader(node: JsonNode): JsonNode =
|
||||
for k, v in node:
|
||||
case k
|
||||
of "bloom": node["logsBloom"] = v
|
||||
of "coinbase": node["miner"] = v
|
||||
of "uncleHash": node["sha3Uncles"] = v
|
||||
of "receiptTrie": node["receiptsRoot"] = v
|
||||
of "transactionsTrie": node["transactionsRoot"] = v
|
||||
of "number", "difficulty", "gasUsed",
|
||||
"gasLimit", "timestamp", "baseFeePerGas":
|
||||
node[k] = normalizeNumber(v)
|
||||
of "extraData":
|
||||
node[k] = normalizeData(v)
|
||||
else: discard
|
||||
result = node
|
||||
|
||||
func normalizeWithdrawal(node: JsonNode): JsonNode =
|
||||
for k, v in node:
|
||||
case k
|
||||
of "amount", "index", "validatorIndex":
|
||||
node[k] = normalizeNumber(v)
|
||||
else: discard
|
||||
result = node
|
||||
|
||||
proc parseHeader(blockHeader: JsonNode, testStatusIMPL: var TestStatus): BlockHeader =
|
||||
result = normalizeBlockHeader(blockHeader).parseBlockHeader
|
||||
var blockHash: Hash256
|
||||
blockHeader.fromJson "hash", blockHash
|
||||
check blockHash == rlpHash(result)
|
||||
|
||||
proc parseWithdrawals(withdrawals: JsonNode): Option[seq[Withdrawal]] =
|
||||
case withdrawals.kind
|
||||
of JArray:
|
||||
var ws: seq[Withdrawal]
|
||||
for v in withdrawals:
|
||||
ws.add(parseWithdrawal(normalizeWithdrawal(v)))
|
||||
some(ws)
|
||||
else:
|
||||
none[seq[Withdrawal]]()
|
||||
|
||||
proc parseBlocks(blocks: JsonNode): seq[TestBlock] =
|
||||
for fixture in blocks:
|
||||
var t: TestBlock
|
||||
t.withdrawals = none[seq[Withdrawal]]()
|
||||
for key, value in fixture:
|
||||
case key
|
||||
of "blockHeader":
|
||||
# header is absent in bad block
|
||||
t.goodBlock = true
|
||||
of "rlp":
|
||||
fixture.fromJson "rlp", t.blockRLP
|
||||
of "transactions", "uncleHeaders", "hasBigInt",
|
||||
"blocknumber", "chainname", "chainnetwork":
|
||||
discard
|
||||
of "transactionSequence":
|
||||
var noError = true
|
||||
for tx in value:
|
||||
let valid = tx["valid"].getStr == "true"
|
||||
noError = noError and valid
|
||||
doAssert(noError == false, "NOT A VALID TEST CASE")
|
||||
of "withdrawals":
|
||||
t.withdrawals = parseWithdrawals(value)
|
||||
of "rlp_decoded":
|
||||
# this field is intended for client who
|
||||
# doesn't support rlp encoding(e.g. evmone)
|
||||
discard
|
||||
else:
|
||||
doAssert("expectException" in key, key)
|
||||
t.hasException = true
|
||||
|
||||
result.add t
|
||||
|
||||
proc parseTestCtx(fixture: JsonNode, testStatusIMPL: var TestStatus): TestCtx =
|
||||
result.blocks = parseBlocks(fixture["blocks"])
|
||||
|
||||
fixture.fromJson "lastblockhash", result.lastBlockHash
|
||||
|
||||
if "genesisRLP" in fixture:
|
||||
var genesisRLP: Blob
|
||||
fixture.fromJson "genesisRLP", genesisRLP
|
||||
result.genesisHeader = rlp.decode(genesisRLP, EthBlock).header
|
||||
else:
|
||||
result.genesisHeader = parseHeader(fixture["genesisBlockHeader"], testStatusIMPL)
|
||||
var goodBlock = true
|
||||
for h in result.blocks:
|
||||
goodBlock = goodBlock and h.goodBlock
|
||||
check goodBlock == false
|
||||
|
||||
if "sealEngine" in fixture:
|
||||
result.sealEngine = some(parseEnum[SealEngine](fixture["sealEngine"].getStr))
|
||||
|
||||
if "postStateHash" in fixture:
|
||||
result.postStateHash.data = hexToByteArray[32](fixture["postStateHash"].getStr)
|
||||
|
||||
result.network = fixture["network"].getStr
|
||||
|
||||
proc testGetMultiKeys(chain: ChainRef, parentHeader, currentHeader: BlockHeader) =
|
||||
# check that current state matches current header
|
||||
let currentStateRoot = chain.vmState.stateDB.rootHash
|
||||
if currentStateRoot != currentHeader.stateRoot:
|
||||
raise newException(ValidationError, "Expected currentStateRoot == currentHeader.stateRoot")
|
||||
|
||||
let mkeys = getMultiKeys(chain.com, currentHeader, false)
|
||||
|
||||
# check that the vmstate hasn't changed after call to getMultiKeys
|
||||
if chain.vmState.stateDB.rootHash != currentHeader.stateRoot:
|
||||
raise newException(ValidationError, "Expected chain.vmstate.stateDB.rootHash == currentHeader.stateRoot")
|
||||
|
||||
# use the MultiKeysRef to build the block proofs
|
||||
let
|
||||
ac = LedgerRef.init(chain.com.db, currentHeader.stateRoot)
|
||||
blockProofs = getBlockProofs(ac, mkeys)
|
||||
if blockProofs.len() != 0:
|
||||
raise newException(ValidationError, "Expected blockProofs.len() == 0")
|
||||
|
||||
proc setupTracer(ctx: TestCtx): TracerRef =
|
||||
if ctx.trace:
|
||||
if ctx.json:
|
||||
var tracerFlags = {
|
||||
TracerFlags.DisableMemory,
|
||||
TracerFlags.DisableStorage,
|
||||
TracerFlags.DisableState,
|
||||
TracerFlags.DisableStateDiff,
|
||||
TracerFlags.DisableReturnData
|
||||
}
|
||||
let stream = newFileStream(stdout)
|
||||
newJsonTracer(stream, tracerFlags, false)
|
||||
else:
|
||||
newLegacyTracer({})
|
||||
else:
|
||||
TracerRef()
|
||||
|
||||
proc importBlock(ctx: var TestCtx, com: CommonRef,
                 tb: TestBlock, checkSeal: bool) =
  if ctx.vmState.isNil or ctx.vmState.stateDB.isTopLevelClean.not:
    let
      parentHeader = com.db.getBlockHeader(tb.header.parentHash)
      tracerInst = ctx.setupTracer()
    ctx.vmState = BaseVMState.new(
      parentHeader,
      tb.header,
      com,
      tracerInst,
    )
    ctx.vmState.collectWitnessData = true # Enable saving witness data

  let
    chain = newChain(com, extraValidation = true, ctx.vmState)
    res = chain.persistBlocks([EthBlock.init(tb.header, tb.body)])

  if res.isErr():
    raise newException(ValidationError, res.error())
  # testGetMultiKeys fails with:
  # Unhandled defect: AccountLedger.init(): RootNotFound(Aristo, ctx=ctx/newColFn(), error=GenericError) [AssertionDefect]
  #else:
  #  testGetMultiKeys(chain, chain.vmState.parent, tb.header)

proc applyFixtureBlockToChain(ctx: var TestCtx, tb: var TestBlock,
                              com: CommonRef, checkSeal: bool) =
  decompose(tb.blockRLP, tb.header, tb.body)
  ctx.importBlock(com, tb, checkSeal)

func shouldCheckSeal(ctx: TestCtx): bool =
  if ctx.sealEngine.isSome:
    result = ctx.sealEngine.get() != NoProof

proc collectDebugData(ctx: var TestCtx) =
  if ctx.vmState.isNil:
    return

  let vmState = ctx.vmState
  let tracerInst = LegacyTracer(vmState.tracer)
  let tracingResult = if ctx.trace: tracerInst.getTracingResult() else: %[]
  ctx.debugData.add %{
    "blockNumber": %($vmState.blockNumber),
    "structLogs": tracingResult,
  }

proc runTestCtx(ctx: var TestCtx, com: CommonRef, testStatusIMPL: var TestStatus) =
  doAssert com.db.persistHeader(ctx.genesisHeader,
    com.consensus == ConsensusType.POS)
  check com.db.getCanonicalHead().blockHash == ctx.genesisHeader.blockHash
  let checkSeal = ctx.shouldCheckSeal

  if ctx.debugMode:
    ctx.debugData = newJArray()

  for idx, tb in ctx.blocks:
    if tb.goodBlock:
      try:
        ctx.applyFixtureBlockToChain(
          ctx.blocks[idx], com, checkSeal)
      except CatchableError as ex:
        debugEcho "FATAL ERROR (BUG): ", ex.msg
    else:
      var noError = true
      try:
        ctx.applyFixtureBlockToChain(ctx.blocks[idx],
          com, checkSeal)
      except ValueError, ValidationError, BlockNotFound, RlpError:
        # failure is expected on this bad block
        check (tb.hasException or (not tb.goodBlock))
        noError = false
        if ctx.debugMode:
          ctx.debugData.add %{
            "exception": %($getCurrentException().name),
            "msg": %getCurrentExceptionMsg()
          }

      # Block should have caused a validation error
      check noError == false

  if ctx.debugMode and not ctx.json:
    ctx.collectDebugData()

proc debugDataFromAccountList(ctx: TestCtx): JsonNode =
  let vmState = ctx.vmState
  result = %{"debugData": ctx.debugData}
  if not vmState.isNil:
    result["accounts"] = vmState.dumpAccounts()

proc debugDataFromPostStateHash(ctx: TestCtx): JsonNode =
  let vmState = ctx.vmState
  %{
    "debugData": ctx.debugData,
    "postStateHash": %($vmState.readOnlyStateDB.rootHash),
    "expectedStateHash": %($ctx.postStateHash),
    "accounts": vmState.dumpAccounts()
  }

proc dumpDebugData(ctx: TestCtx, fixtureName: string, fixtureIndex: int, success: bool) =
  let debugData = if ctx.postStateHash != Hash256():
    debugDataFromPostStateHash(ctx)
  else:
    debugDataFromAccountList(ctx)

  let status = if success: "_success" else: "_failed"
  let name = fixtureName.replace('/', '-').replace(':', '-')
  writeFile("debug_" & name & "_" & $fixtureIndex & status & ".json", debugData.pretty())

proc testFixture(node: JsonNode, testStatusIMPL: var TestStatus, debugMode = false, trace = false) =
  # 1 - mine the genesis block
  # 2 - loop over blocks:
  #     - apply transactions
  #     - mine block
  # 3 - diff resulting state with expected state
  # 4 - check that all previous blocks were valid
  let specifyIndex = test_config.getConfiguration().index.get(0)
  var fixtureIndex = 0
  var fixtureTested = false

  for fixtureName, fixture in node:
    inc fixtureIndex
    if specifyIndex > 0 and fixtureIndex != specifyIndex:
      continue

    var ctx = parseTestCtx(fixture, testStatusIMPL)

    let
      memDB = newCoreDbRef DefaultDbMemory
      stateDB = LedgerRef.init(memDB, emptyRlpHash)
      config = getChainConfig(ctx.network)
      com = CommonRef.new(memDB, config)

    setupStateDB(fixture["pre"], stateDB)
    stateDB.persist()

    check stateDB.rootHash == ctx.genesisHeader.stateRoot

    ctx.debugMode = debugMode
    ctx.trace = trace
    ctx.json = test_config.getConfiguration().json

    var success = true
proc parseBlocks(node: JsonNode): seq[BlockDesc] =
  for x in node:
    try:
      ctx.runTestCtx(com, testStatusIMPL)
      let header = com.db.getCanonicalHead()
      let lastBlockHash = header.blockHash
      check lastBlockHash == ctx.lastBlockHash
      success = lastBlockHash == ctx.lastBlockHash
      if ctx.postStateHash != Hash256():
        let rootHash = ctx.vmState.stateDB.rootHash
        if ctx.postStateHash != rootHash:
          raise newException(ValidationError, "incorrect postStateHash, expect=" &
            $ctx.postStateHash & ", get=" &
            $rootHash
          )
      elif lastBlockHash == ctx.lastBlockHash:
        # multiple chain, we are using the last valid canonical
        # state root to test against 'postState'
        let stateDB = LedgerRef.init(memDB, header.stateRoot)
        verifyStateDB(fixture["postState"], ledger.ReadOnlyStateDB(stateDB))
      let blockRLP = hexToSeqByte(x["rlp"].getStr)
      let blk = rlp.decode(blockRLP, EthBlock)
      result.add BlockDesc(
        blk: blk,
        badBlock: "expectException" in x,
      )
    except RlpError:
      # invalid rlp will not participate in block validation
      # e.g. invalid rlp received from network
      discard

      success = lastBlockHash == ctx.lastBlockHash
    except ValidationError as E:
      echo fixtureName, " ERROR: ", E.msg
      success = false

proc parseEnv(node: JsonNode): TestEnv =
  result.blocks = parseBlocks(node["blocks"])
  let genesisRLP = hexToSeqByte(node["genesisRLP"].getStr)
  result.genesisHeader = rlp.decode(genesisRLP, EthBlock).header
  result.lastBlockHash = Hash256(data: hexToByteArray[32](node["lastblockhash"].getStr))
  result.network = node["network"].getStr
  result.pre = node["pre"]

    if ctx.debugMode:
      ctx.dumpDebugData(fixtureName, fixtureIndex, success)

proc rootExists(db: CoreDbRef; root: Hash256): bool =
  let
    ctx = db.ctx
    col = ctx.newColumn(CtAccounts, root).valueOr:
      return false
  ctx.getAcc(col).isOkOr:
    return false
  true

    fixtureTested = true
    check success == true
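parseBlocks above tolerates undecodable RLP: a block whose `rlp` field fails to decode simply does not participate in validation, mirroring how a node discards malformed blocks received from the network. A hedged Python sketch of the same skip-on-decode-failure pattern (hex parsing stands in for RLP decoding; the fixture shape is simplified):

```python
# Skip-on-decode-failure pattern, as in parseBlocks (sketch only; the
# real test decodes Ethereum block RLP, here hex parsing is a stand-in).
def parse_blocks(nodes):
    out = []
    for x in nodes:
        try:
            blk = bytes.fromhex(x["rlp"])  # stand-in for rlp.decode
        except ValueError:
            continue  # invalid rlp does not participate in validation
        out.append({"blk": blk, "bad_block": "expectException" in x})
    return out

blocks = parse_blocks([{"rlp": "c0"}, {"rlp": "zz"},
                       {"rlp": "c1", "expectException": "InvalidBlock"}])
assert len(blocks) == 2          # the "zz" entry is silently skipped
assert blocks[1]["bad_block"]    # expectException marks a bad block
```

Note the asymmetry: undecodable RLP is dropped before validation, while a decodable block carrying `expectException` is kept and *expected* to be rejected later.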
proc executeCase(node: JsonNode): bool =
  let
    env = parseEnv(node)
    memDB = newCoreDbRef DefaultDbMemory
    stateDB = LedgerRef.init(memDB, EMPTY_ROOT_HASH)
    config = getChainConfig(env.network)
    com = CommonRef.new(memDB, config)

  if not fixtureTested:
    echo test_config.getConfiguration().testSubject, " not tested at all, wrong index?"
    if specifyIndex <= 0 or specifyIndex > node.len:
      echo "Maximum subtest available: ", node.len

  setupStateDB(env.pre, stateDB)
  stateDB.persist()

proc blockchainJsonMain*(debugMode = false) =
  if not com.db.persistHeader(env.genesisHeader,
      com.consensus == ConsensusType.POS):
    debugEcho "Failed to put genesis header into database"
    return false

  if com.db.getCanonicalHead().blockHash != env.genesisHeader.blockHash:
    debugEcho "Genesis block hash in database differs from the expected genesis block hash"
    return false

  var c = initForkedChain(com, env.genesisHeader)
  var lastStateRoot = env.genesisHeader.stateRoot
  for blk in env.blocks:
    let res = c.importBlock(blk.blk)
    if res.isOk:
      if env.lastBlockHash == blk.blk.header.blockHash:
        lastStateRoot = blk.blk.header.stateRoot
      if blk.badBlock:
        debugEcho "A bug? bad block imported"
        return false
    else:
      if not blk.badBlock:
        debugEcho "A bug? good block rejected: ", res.error
        return false

  c.forkChoice(env.lastBlockHash, env.lastBlockHash).isOkOr:
    debugEcho error
    return false

  let head = com.db.getCanonicalHead()
  let headHash = head.blockHash
  if headHash != env.lastBlockHash:
    debugEcho "latestBlockHash mismatch, got: ", headHash,
      " expect: ", env.lastBlockHash
    return false

  if not memDB.rootExists(lastStateRoot):
    debugEcho "Last stateRoot does not exist"
    return false

  true
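The import loop in executeCase encodes a simple invariant: a case passes only if every good block is imported and every bad block is rejected, and either violation is reported as a bug in the chain implementation rather than in the fixture. A minimal Python sketch of that accept/reject rule (the `(imported_ok, bad_block)` pairs are a hypothetical stand-in for the Nim `importBlock` result and `BlockDesc.badBlock` flag):

```python
# Sketch of executeCase's accept/reject invariant. Each entry is
# (imported_ok, bad_block); mirrors the loop's early-return checks.
def case_passes(blocks):
    for imported_ok, bad_block in blocks:
        if imported_ok and bad_block:
            return False  # a bug: bad block imported
        if not imported_ok and not bad_block:
            return False  # a bug: good block rejected
    return True

# good block imported, bad block rejected -> case passes
assert case_passes([(True, False), (False, True)])
# bad block imported -> case fails
assert not case_passes([(True, True)])
```

This is why the commit message stresses "Ensure bad block not imported and good block not rejected": both directions of the invariant are checked on every block.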
proc executeFile(node: JsonNode, testStatusIMPL: var TestStatus) =
  for name, bctCase in node:
    when debugMode:
      debugEcho "TEST NAME: ", name
    check executeCase(bctCase)

proc blockchainJsonMain*() =
  const
    legacyFolder = "eth_tests/LegacyTests/Constantinople/BlockchainTests"
    newFolder = "eth_tests/BlockchainTests"
    #newFolder = "eth_tests/EIPTests/BlockchainTests"
    #newFolder = "eth_tests/EIPTests/Pyspecs/cancun"

  let res = loadKzgTrustedSetup()
  if res.isErr:
    echo "FATAL: ", res.error
    quit(QuitFailure)

  let config = test_config.getConfiguration()
  if config.testSubject == "" or not debugMode:
    # run all test fixtures
    if config.legacy:
      suite "block chain json tests":
        jsonTest(legacyFolder, "BlockchainTests", testFixture, skipBCTests)
    else:
      suite "new block chain json tests":
        jsonTest(newFolder, "newBlockchainTests", testFixture, skipNewBCTests)
    if false:
      suite "block chain json tests":
        jsonTest(legacyFolder, "BlockchainTests", executeFile, skipBCTests)
  else:
    # execute single test in debug mode
    if config.testSubject.len == 0:
      echo "missing test subject"
      quit(QuitFailure)

    let folder = if config.legacy: legacyFolder else: newFolder
    let path = "tests/fixtures/" & folder
    let n = json.parseFile(path / config.testSubject)
    var testStatusIMPL: TestStatus
    testFixture(n, testStatusIMPL, debugMode = true, config.trace)
  suite "new block chain json tests":
    jsonTest(newFolder, "newBlockchainTests", executeFile, skipNewBCTests)

when isMainModule:
  import std/times
  var message: string
  when debugMode:
    proc executeFile(name: string) =
      var testStatusIMPL: TestStatus
      let node = json.parseFile(name)
      executeFile(node, testStatusIMPL)

  let start = getTime()

  ## Processing command line arguments
  if test_config.processArguments(message) != test_config.Success:
    echo message
    quit(QuitFailure)
    executeFile("tests/fixtures/eth_tests/BlockchainTests/ValidBlocks/bcTotalDifficultyTest/sideChainWithMoreTransactions.json")
  else:
    if len(message) > 0:
      echo message
      quit(QuitSuccess)

  blockchainJsonMain(true)
  let elpd = getTime() - start
  echo "TIME: ", elpd

# lastBlockHash -> every fixture has it, hash of a block header
# genesisRLP -> NOT every fixture has it, rlp bytes of genesis block header
# _info -> every fixture has it, can be omitted
# pre, postState -> every fixture has it, prestate and post state
# genesisHeader -> every fixture has it
# network -> every fixture has it
# # EIP150 247
# # ConstantinopleFix 286
# # Homestead 256
# # Frontier 396
# # Byzantium 263
# # EIP158ToByzantiumAt5 1
# # EIP158 233
# # HomesteadToDaoAt5 4
# # Constantinople 285
# # HomesteadToEIP150At5 1
# # FrontierToHomesteadAt5 7
# # ByzantiumToConstantinopleFixAt5 1

# sealEngine -> NOT every fixture has it
# # NoProof 1709
# # Ethash 112

# blocks -> every fixture has it, an array of blocks ranging from 1 block to 303 blocks
# # transactions 6230 can be empty
# # # to 6089 -> "" if contractCreation
# # # value 6089
# # # gasLimit 6089 -> "gas"
# # # s 6089
# # # r 6089
# # # gasPrice 6089
# # # v 6089
# # # data 6089 -> "input"
# # # nonce 6089
# # blockHeader 6230 can be not present, e.g. bad rlp
# # uncleHeaders 6230 can be empty

# # rlp 6810 has rlp but no blockheader, usually has exception
# # blocknumber 2733
# # chainname 1821 -> 'A' to 'H', and 'AA' to 'DD'
# # chainnetwork 21 -> all values are "Frontier"
# # expectExceptionALL 420
# # # UncleInChain 55
# # # InvalidTimestamp 42
# # # InvalidGasLimit 42
# # # InvalidNumber 42
# # # InvalidDifficulty 35
# # # InvalidBlockNonce 28
# # # InvalidUncleParentHash 26
# # # ExtraDataTooBig 21
# # # InvalidStateRoot 21
# # # ExtraDataIncorrect 19
# # # UnknownParent 16
# # # TooMuchGasUsed 14
# # # InvalidReceiptsStateRoot 9
# # # InvalidUnclesHash 7
# # # UncleIsBrother 7
# # # UncleTooOld 7
# # # InvalidTransactionsRoot 7
# # # InvalidGasUsed 7
# # # InvalidLogBloom 7
# # # TooManyUncles 7
# # # OutOfGasIntrinsic 1
# # expectExceptionEIP150 17
# # # TooMuchGasUsed 7
# # # InvalidReceiptsStateRoot 7
# # # InvalidStateRoot 3
# # expectExceptionByzantium 17
# # # InvalidStateRoot 10
# # # TooMuchGasUsed 7
# # expectExceptionHomestead 17
# # # InvalidReceiptsStateRoot 7
# # # BlockGasLimitReached 7
# # # InvalidStateRoot 3
# # expectExceptionConstantinople 14
# # # InvalidStateRoot 7
# # # TooMuchGasUsed 7
# # expectExceptionEIP158 14
# # # TooMuchGasUsed 7
# # # InvalidReceiptsStateRoot 7
# # expectExceptionFrontier 14
# # # InvalidReceiptsStateRoot 7
# # # BlockGasLimitReached 7
# # expectExceptionConstantinopleFix 14
# # # InvalidStateRoot 7
# # # TooMuchGasUsed 7

blockchainJsonMain()
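The field notes above imply a fixture shape roughly like the following. This is an illustrative skeleton assembled from the comments, not a real fixture; the placeholder values (and the `"0x..."` strings) are hypothetical:

```python
# Illustrative skeleton of a blockchain-test fixture, per the field notes
# (sketch only; real fixtures carry full hex-encoded headers and state).
fixture = {
    "lastblockhash": "0x...",       # every fixture: hash of a block header
    "genesisRLP": "0x...",          # NOT every fixture: genesis block rlp
    "genesisHeader": {},            # every fixture
    "network": "Frontier",          # every fixture
    "sealEngine": "NoProof",        # NOT every fixture: NoProof or Ethash
    "pre": {}, "postState": {},     # prestate and post state
    "blocks": [                     # 1 to ~303 blocks per fixture
        {"rlp": "0x...",
         "expectException": "UncleInChain"},  # only on bad blocks
    ],
}
assert "blocks" in fixture and "network" in fixture
assert "expectException" in fixture["blocks"][0]
```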