# Nimbus
# Copyright (c) 2019-2025 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

import
  std/[macrocache, strutils],
  eth/common/[keys, transaction_utils],
  unittest2,
  chronicles,
  stew/byteutils,
  stew/shims/macros

import
  ../execution_chain/db/ledger,
  ../execution_chain/evm/types,
  ../execution_chain/evm/interpreter/op_codes,
  ../execution_chain/evm/internals,
  ../execution_chain/transaction/[call_common, call_evm],
  ../execution_chain/evm/state,
  ../execution_chain/core/pow/difficulty

from ../execution_chain/db/aristo
  import EmptyBlob

# Exclude GasPrice and LogLevel to avoid name clashes with the
# definitions already in scope in this module.
import ../execution_chain/transaction except GasPrice
import ../tools/common/helpers except LogLevel

export byteutils

{.experimental: "dynamicBindSym".}

type
  VMWord* = array[32, byte]
  Storage* = tuple[key, val: VMWord]

  Assembler* = object
    title*   : string
    stack*   : seq[VMWord]
    memory*  : seq[VMWord]
    storage* : seq[Storage]
    code*    : seq[byte]
    logs*    : seq[Log]
    success* : bool
    gasUsed* : Opt[GasInt]
    data*    : seq[byte]
    output*  : seq[byte]

  MacroAssembler = object
    setup    : NimNode
    asmBlock : Assembler
    forkStr  : string
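
# For orientation: an assembler test block, as consumed by the machinery in
# this module, looks roughly like this (hypothetical example; the section
# names are the ones handled by parseAssembler below):
#
#   assembler:
#     title: "Add two small values"
#     code:
#       Push1 "0x01"
#       Push1 "0x02"
#       Add
#     stack:
#       "0x03"
#     success: true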

const
  idToOpcode = CacheTable"NimbusMacroAssembler"

static:
  for n in Op:
    idToOpcode[$n] = newLit(ord(n))

  # EIP-4399 renamed DIFFICULTY (0x44) to PREVRANDAO, so accept the new
  # name as an alias for the same opcode.
  idToOpcode["PrevRandao"] = newLit(ord(Difficulty))

proc validateVMWord(val: string, n: NimNode): VMWord =
  if val.len <= 2 or val.len > 66: error("invalid hex string", n)
  if not (val[0] == '0' and val[1] == 'x'): error("invalid hex string", n)
  let zerosLen = 64 - (val.len - 2)
  let value = repeat('0', zerosLen) & val.substr(2)
  hexToByteArray(value, result)
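
# Note that short literals such as "0x03" are zero-padded on the left to a
# full 32-byte word, so they are accepted anywhere a VMWord is expected.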

proc validateVMWord(val: NimNode): VMWord =
  val.expectKind(nnkStrLit)
  validateVMWord(val.strVal, val)

proc parseVMWords(list: NimNode): seq[VMWord] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateVMWord(val)

proc validateStorage(val: NimNode): Storage =
  val.expectKind(nnkCall)
  val[0].expectKind(nnkStrLit)
  val[1].expectKind(nnkStmtList)
  doAssert(val[1].len == 1)
  val[1][0].expectKind(nnkStrLit)
  result = (validateVMWord(val[0]), validateVMWord(val[1][0]))

proc parseStorage(list: NimNode): seq[Storage] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateStorage(val)
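
# A storage section pairs each slot key with its expected value, e.g.:
#
#   storage:
#     "0x00":
#       "0x2a"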

proc parseStringLiteral(node: NimNode): string =
  let strNode = node[0]
  strNode.expectKind(nnkStrLit)
  strNode.strVal

proc parseSuccess(list: NimNode): bool =
  list.expectKind nnkStmtList
  list[0].expectKind(nnkIdent)
  $list[0] == "true"

proc parseData(list: NimNode): seq[byte] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    n.expectKind(nnkStrLit)
    result.add hexToSeqByte(n.strVal)

proc parseLog(node: NimNode): Log =
  node.expectKind({nnkPar, nnkTupleConstr})
  for item in node:
    item.expectKind(nnkExprColonExpr)
    let label = item[0].strVal
    let body = item[1]
    case label.normalize
    of "address":
      body.expectKind(nnkStrLit)
      let value = body.strVal
      if value.len < 20:
        error("bad address format", body)
      hexToByteArray(value, result.address.data)
    of "topics":
      body.expectKind(nnkBracket)
      for x in body:
        result.topics.add Topic validateVMWord(x.strVal, x)
    of "data":
      result.data = hexToSeqByte(body.strVal)
    else: error("unknown log section '" & label & "'", item[0])

proc parseLogs(list: NimNode): seq[Log] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    result.add parseLog(n)

proc validateOpcode(sym: NimNode) =
  if $sym notin idToOpcode:
    error("unknown opcode '" & $sym & "'", sym)

proc addOpCode(code: var seq[byte], node, params: NimNode) =
  let opcode = Op(idToOpcode[$node].intVal)
  case opcode
  of Push1..Push32:
    if params.len != 1:
      error("expected 1 param, but got " & $params.len, node)
    # Push1 = 0x60 = 96, so `opcode.ord - 95` is the number of bytes the
    # instruction pushes; each byte takes two hex digits.
    let paramWidth = (opcode.ord - 95) * 2
    params[0].expectKind nnkStrLit
    var val = params[0].strVal
    if val[0] == '0' and val[1] == 'x':
      val = val.substr(2)
      if val.len != paramWidth:
        error("expected param with " & $paramWidth & " hex digits, got " & $val.len, node)
      code.add byte(opcode)
      code.add hexToSeqByte(val)
    else:
      error("invalid hex format", node)
  else:
    if params.len > 0:
      error("there should be no param for this instruction", node)
    code.add byte(opcode)
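
# Example: `Push2 "0x1234"` assembles to the bytes 0x61, 0x12, 0x34.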

proc parseCode(codes: NimNode): seq[byte] =
  let emptyNode = newEmptyNode()
  codes.expectKind nnkStmtList
  for pc, line in codes:
    line.expectKind({nnkCommand, nnkIdent, nnkStrLit})
    if line.kind == nnkStrLit:
      result.add hexToSeqByte(line.strVal)
    elif line.kind == nnkIdent:
      let sym = bindSym(line)
      validateOpcode(sym)
      result.addOpCode(sym, emptyNode)
    elif line.kind == nnkCommand:
      let sym = bindSym(line[0])
      validateOpcode(sym)
      var params = newNimNode(nnkBracket)
      for i in 1 ..< line.len:
        params.add line[i]
      result.addOpCode(sym, params)
    else:
      error("unknown syntax: " & line.toStrLit.strVal, line)

proc parseFork(fork: NimNode): string =
  fork[0].expectKind({nnkIdent, nnkStrLit})
  fork[0].strVal

proc parseGasUsed(gas: NimNode): Opt[GasInt] =
  gas[0].expectKind(nnkIntLit)
  result = Opt.some(GasInt gas[0].intVal)
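
# e.g. `gasused: 9` asserts that the computation consumed exactly 9 gas.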

proc parseAssembler(list: NimNode): MacroAssembler =
  # Defaults: run on Frontier, expect success, and only check gas when a
  # `gasused` section is present.
  result.forkStr = "Frontier"
  result.asmBlock.success = true
  result.asmBlock.gasUsed = Opt.none(GasInt)

  list.expectKind nnkStmtList
  for callSection in list:
    callSection.expectKind(nnkCall)
    let label = callSection[0].strVal
    let body = callSection[1]
    case label.normalize
    of "title"  : result.asmBlock.title   = parseStringLiteral(body)
    of "code"   : result.asmBlock.code    = parseCode(body)
    of "memory" : result.asmBlock.memory  = parseVMWords(body)
    of "stack"  : result.asmBlock.stack   = parseVMWords(body)
    of "storage": result.asmBlock.storage = parseStorage(body)
    of "logs"   : result.asmBlock.logs    = parseLogs(body)
    of "success": result.asmBlock.success = parseSuccess(body)
    of "data"   : result.asmBlock.data    = parseData(body)
    of "output" : result.asmBlock.output  = parseData(body)
    of "gasused": result.asmBlock.gasUsed = parseGasUsed(body)
    of "fork"   : result.forkStr          = parseFork(body)
    of "setup"  : result.setup            = body
    else: error("unknown section '" & label & "'", callSection[0])

type VMProxy = tuple[sym: NimNode, pr: NimNode]

proc generateVMProxy(masm: MacroAssembler): VMProxy =
  let
    vmProxySym = genSym(nskProc, "vmProxy")
    body = newLitFixed(masm.asmBlock)
    setup = if masm.setup.isNil:
              newEmptyNode()
            else:
              masm.setup
    vmState = ident("vmState")
    fork = masm.forkStr
    vmProxyProc = quote do:
      proc `vmProxySym`(): bool =
        let `vmState` = initVMEnv(`fork`)
        `setup`
        let boa = `body`
        runVM(`vmState`, boa)

  (vmProxySym, vmProxyProc)
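
# The generated proc is roughly equivalent to this sketch (the actual
# symbol is gensym'd):
#
#   proc vmProxy(): bool =
#     let vmState = initVMEnv(<fork>)
#     <setup statements, if any>
#     let boa = <the parsed Assembler literal>
#     runVM(vmState, boa)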

proc generateAssemblerTest(masm: MacroAssembler): NimNode =
  let
    (vmProxySym, vmProxyProc) = generateVMProxy(masm)
    title: string = masm.asmBlock.title

  result = quote do:
    test `title`:
      `vmProxyProc`
      {.gcsafe.}:
        check `vmProxySym`()

  when defined(macro_assembler_debug):
    echo result.toStrLit.strVal
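
# Compile with `-d:macro_assembler_debug` to echo the generated test code.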

const
  codeAddress = address"460121576cc7df020759730751f92bd62fd78dd6"
  coinbase = address"bb7b8287f3f0a933474a79eae42cbca977791171"

proc initVMEnv*(network: string): BaseVMState =
  let
    conf = getChainConfig(network)
    cdb = DefaultDbMemory.newCoreDbRef()
    com = CommonRef.new(
      cdb,
      nil, conf,
      conf.chainId.NetworkId)
    parent = Header(stateRoot: EMPTY_ROOT_HASH)
    parentHash = rlpHash(parent)
    header = Header(
      number: 1'u64,
      stateRoot: EMPTY_ROOT_HASH,
      parentHash: parentHash,
      coinbase: coinbase,
      timestamp: EthTime(0x1234),
      difficulty: 1003.u256,
      gasLimit: 100_000
    )

  BaseVMState.new(parent, header, com, com.db.baseTxFrame())
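
# verifyAsmResult checks every expectation recorded in the Assembler object
# (success flag, gas used, stack, memory, storage and logs) against the
# actual execution result, logging the first mismatch and returning false.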
proc verifyAsmResult(vmState: BaseVMState, boa: Assembler, asmResult: DebugCallResult): bool =
  let com = vmState.com

  if not asmResult.isError:
    if boa.success == false:
      error "different success value", expected=boa.success, actual=true
      return false
  else:
    if boa.success == true:
      error "different success value", expected=boa.success, actual=false
      return false

  if boa.gasUsed.isSome:
    if boa.gasUsed.get != asmResult.gasUsed:
      error "different gasUsed", expected=boa.gasUsed.get, actual=asmResult.gasUsed
      return false

  if boa.stack.len != asmResult.stack.len:
    error "different stack len", expected=boa.stack.len, actual=asmResult.stack.len
    return false

  for i, v in asmResult.stack:
    let actual = v.dumpHex()
    let val = boa.stack[i].toHex()
    if actual != val:
      error "different stack value", idx=i, expected=val, actual=actual
      return false

  # Memory is compared in 32-byte words, mirroring the VMWord entries
  # given in the test's `memory:` section.
  const chunkLen = 32
  let numChunks = asmResult.memory.len div chunkLen

  if numChunks != boa.memory.len:
    error "different memory len", expected=boa.memory.len, actual=numChunks
    return false

  for i in 0 ..< numChunks:
    let actual = asmResult.memory.bytes.toOpenArray(i * chunkLen, (i + 1) * chunkLen - 1).toHex()
    let mem = boa.memory[i].toHex()
    if mem != actual:
      error "different memory value", idx=i, expected=mem, actual=actual
      return false

  # Flush pending ledger changes so the storage reads below see the
  # final post-execution state.
  var ledger = vmState.ledger
  ledger.persist()

  let
    al = com.db.baseTxFrame()
    accPath = keccak256(codeAddress.data)

  for kv in boa.storage:
    let key = kv[0].toHex()
    let val = kv[1].toHex()
    let slotKey = UInt256.fromBytesBE(kv[0]).toBytesBE.keccak256
    let data = al.slotFetch(accPath, slotKey).valueOr: default(UInt256)
    let actual = data.toBytesBE().toHex
    if val != actual:
      error "storage has different value", key=key, expected=val, actual
      return false

  let logs = vmState.getAndClearLogEntries()
  if logs.len != boa.logs.len:
    error "different logs len", expected=boa.logs.len, actual=logs.len
    return false

  for i, log in boa.logs:
    let eAddr = log.address.toHex()
    let aAddr = logs[i].address.toHex()
    if eAddr != aAddr:
      error "different address", expected=eAddr, actual=aAddr, idx=i
      return false

    let eData = log.data.toHex()
    let aData = logs[i].data.toHex()
    if eData != aData:
      error "different data", expected=eData, actual=aData, idx=i
      return false

    if log.topics.len != logs[i].topics.len:
      error "different topics len", expected=log.topics.len, actual=logs[i].topics.len, idx=i
      return false

    for x, t in log.topics:
      let eTopic = t.toHex()
      let aTopic = logs[i].topics[x].toHex()
      if eTopic != aTopic:
        error "different topic in log entry", expected=eTopic, actual=aTopic, logIdx=i, topicIdx=x
        return false

  if boa.output.len > 0:
    let actual = asmResult.output.toHex()
    let expected = boa.output.toHex()
    if expected != actual:
      error "different output detected", expected=expected, actual=actual
      return false

  result = true
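
# The storage loop in `verifyAsmResult` above looks each expected slot up
# under a hashed key: the slot number is widened to its 32-byte big-endian
# form and then hashed. A minimal sketch of that derivation; `UInt256`,
# `toBytesBE` and `keccak256` are the helpers used above, while the
# `Hash32` digest type and the `slotKeyOf` name are assumptions for
# illustration:
proc slotKeyOf(slot: UInt256): Hash32 =
  # slot 1 hashes the 32-byte value 0x00..01, not the single byte 0x01
  keccak256(slot.toBytesBE)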

proc createSignedTx(payload: seq[byte], chainId: ChainId): Transaction =
  let privateKey = PrivateKey.fromHex("7a28b5ba57c53603b0b07b56bba752f7784bf506fa95edc395f5cf6c7514fe9d")[]
  let unsignedTx = Transaction(
    txType: TxEip4844,
    nonce: 0,
    gasPrice: 1.GasInt,
    gasLimit: 500_000_000.GasInt,
    to: Opt.some codeAddress,
    value: 500.u256,
    payload: payload,
    chainId: chainId,
    versionedHashes: @[VersionedHash(EMPTY_UNCLE_HASH), VersionedHash(EMPTY_SHA3)]
  )
  signTransaction(unsignedTx, privateKey, false)
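
# Usage note: `runVM` below feeds the assembler's calldata through this
# helper as `createSignedTx(boa.data, com.chainId)`. The envelope is a
# type-3 (EIP-4844) transaction, which must carry at least one versioned
# hash, hence the two placeholder hashes above; the hard-coded private key
# keeps the recovered sender address deterministic across test runs.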

proc runVM*(vmState: BaseVMState, boa: Assembler): bool =
  let
    com = vmState.com
  vmState.mutateLedger:
    db.setCode(codeAddress, boa.code)
    db.setBalance(codeAddress, 1_000_000.u256)

  let tx = createSignedTx(boa.data, com.chainId)
  let asmResult = testCallEvm(tx, tx.recoverSender().expect("valid signature"), vmState)
  verifyAsmResult(vmState, boa, asmResult)
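
# A hedged usage sketch (construction of `vmState` and `boa` is elided;
# in practice the `assembler` macro below generates both as part of a
# test case):
#
#   doAssert runVM(vmState, boa), "EVM result did not match expectations"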
macro assembler*(list: untyped): untyped =
  result = parseAssembler(list).generateAssemblerTest()
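
# A hedged usage sketch of the `assembler` macro. The section names and
# opcode spelling follow the conventions of the surrounding test suite,
# but are assumptions here rather than something this file guarantees:
#
#   assembler:
#     title: "ADD_1"
#     code:
#       Push1 "0x01"
#       Push1 "0x02"
#       Add
#     stack: "0x03"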

macro evmByteCode*(list: untyped): untyped =
  list.expectKind nnkStmtList
  var code = parseCode(list)
  result = newLitFixed(code)
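
# A hedged usage sketch: `evmByteCode` expands an opcode list into a code
# literal suitable for e.g. `db.setCode` (opcode spelling as above, an
# assumption):
#
#   let code = evmByteCode:
#     Push1 "0x01"
#     Push1 "0x02"
#     Add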