# Nimbus
# Copyright (c) 2019-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

import
  std/[macrocache, strutils],
  eth/keys,
  unittest2,
  chronicles,
  stew/byteutils,
  stew/shims/macros

import
  ../nimbus/db/ledger,
  ../nimbus/evm/types,
  ../nimbus/evm/internals,
  ../nimbus/transaction/[call_common, call_evm],
  ../nimbus/evm/state,
  ../nimbus/core/pow/difficulty

from ../nimbus/db/aristo
  import EmptyBlob

# Ditto, for GasPrice.
import ../nimbus/transaction except GasPrice
import ../tools/common/helpers except LogLevel

export byteutils

{.experimental: "dynamicBindSym".}

type
  VMWord* = array[32, byte]
  Storage* = tuple[key, val: VMWord]

  Assembler* = object
    title*   : string
    stack*   : seq[VMWord]
    memory*  : seq[VMWord]
    storage* : seq[Storage]
    code*    : seq[byte]
    logs*    : seq[Log]
    success* : bool
    gasUsed* : Opt[GasInt]
    data*    : seq[byte]
    output*  : seq[byte]

  MacroAssembler = object
    setup    : NimNode
    asmBlock : Assembler
    forkStr  : string

const
  idToOpcode = CacheTable"NimbusMacroAssembler"

static:
  for n in Op:
    idToOpcode[$n] = newLit(ord(n))

  # EIP-4399 new opcode
  idToOpcode["PrevRandao"] = newLit(ord(Difficulty))

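# Illustrative note: `Push1.ord` is 0x60 (96), so after the loop above
#   idToOpcode["Push1"]   # holds newLit(0x60)
# i.e. opcode identifiers written in the assembler DSL are resolved to
# their byte values at compile time via this cache table.
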
proc validateVMWord(val: string, n: NimNode): VMWord =
  if val.len <= 2 or val.len > 66: error("invalid hex string", n)
  if not (val[0] == '0' and val[1] == 'x'): error("invalid hex string", n)
  let zerosLen = 64 - (val.len - 2)
  let value = repeat('0', zerosLen) & val.substr(2)
  hexToByteArray(value, result)

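# Example (illustrative): validateVMWord("0x2b", n) left-pads the hex
# digits to 64 characters, yielding the 32-byte big-endian word
# 0x00...002b.
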
proc validateVMWord(val: NimNode): VMWord =
  val.expectKind(nnkStrLit)
  validateVMWord(val.strVal, val)

proc parseVMWords(list: NimNode): seq[VMWord] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateVMWord(val)

proc validateStorage(val: NimNode): Storage =
  val.expectKind(nnkCall)
  val[0].expectKind(nnkStrLit)
  val[1].expectKind(nnkStmtList)
  doAssert(val[1].len == 1)
  val[1][0].expectKind(nnkStrLit)
  result = (validateVMWord(val[0]), validateVMWord(val[1][0]))

proc parseStorage(list: NimNode): seq[Storage] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateStorage(val)

proc parseStringLiteral(node: NimNode): string =
  let strNode = node[0]
  strNode.expectKind(nnkStrLit)
  strNode.strVal

proc parseSuccess(list: NimNode): bool =
  list.expectKind nnkStmtList
  list[0].expectKind(nnkIdent)
  $list[0] == "true"

proc parseData(list: NimNode): seq[byte] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    n.expectKind(nnkStrLit)
    result.add hexToSeqByte(n.strVal)

proc parseLog(node: NimNode): Log =
  node.expectKind({nnkPar, nnkTupleConstr})
  for item in node:
    item.expectKind(nnkExprColonExpr)
    let label = item[0].strVal
    let body  = item[1]
    case label.normalize
    of "address":
      body.expectKind(nnkStrLit)
      let value = body.strVal
      if value.len < 20:
        error("bad address format", body)
      hexToByteArray(value, result.address)
    of "topics":
      body.expectKind(nnkBracket)
      for x in body:
        result.topics.add validateVMWord(x.strVal, x)
    of "data":
      result.data = hexToSeqByte(body.strVal)
    else: error("unknown log section '" & label & "'", item[0])

proc parseLogs(list: NimNode): seq[Log] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    result.add parseLog(n)

proc validateOpcode(sym: NimNode) =
  let typ = getTypeInst(sym)
  typ.expectKind(nnkSym)
  if $typ != "Op":
    error("unknown opcode '" & $sym & "'", sym)

proc addOpCode(code: var seq[byte], node, params: NimNode) =
  node.expectKind nnkSym
  let opcode = Op(idToOpcode[node.strVal].intVal)
  case opcode
  of Push1..Push32:
    if params.len != 1:
      error("expect 1 param, but got " & $params.len, node)
    let paramWidth = (opcode.ord - 95) * 2
    params[0].expectKind nnkStrLit
    var val = params[0].strVal
    if val[0] == '0' and val[1] == 'x':
      val = val.substr(2)
      if val.len != paramWidth:
        error("expected param with " & $paramWidth & " hex digits, got " & $val.len, node)
      code.add byte(opcode)
      code.add hexToSeqByte(val)
    else:
      error("invalid hex format", node)
  else:
    if params.len > 0:
      error("there should be no param for this instruction", node)
    code.add byte(opcode)

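# Worked example (illustrative): for `Push2 "0x1234"`, opcode.ord is
# 0x61 (97), so paramWidth = (97 - 95) * 2 = 4 hex digits, and the
# emitted bytes are 0x61, 0x12, 0x34.
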
proc parseCode(codes: NimNode): seq[byte] =
  let emptyNode = newEmptyNode()
  codes.expectKind nnkStmtList
  for pc, line in codes:
    line.expectKind({nnkCommand, nnkIdent, nnkStrLit})
    if line.kind == nnkStrLit:
      result.add hexToSeqByte(line.strVal)
    elif line.kind == nnkIdent:
      let sym = bindSym(line)
      validateOpcode(sym)
      result.addOpCode(sym, emptyNode)
    elif line.kind == nnkCommand:
      let sym = bindSym(line[0])
      validateOpcode(sym)
      var params = newNimNode(nnkBracket)
      for i in 1 ..< line.len:
        params.add line[i]
      result.addOpCode(sym, params)
    else:
      error("unknown syntax: " & line.toStrLit.strVal, line)

proc parseFork(fork: NimNode): string =
  fork[0].expectKind({nnkIdent, nnkStrLit})
  fork[0].strVal

proc parseGasUsed(gas: NimNode): Opt[GasInt] =
  gas[0].expectKind(nnkIntLit)
  result = Opt.some(GasInt gas[0].intVal)

proc parseAssembler(list: NimNode): MacroAssembler =
  result.forkStr = "Frontier"
  result.asmBlock.success = true
  result.asmBlock.gasUsed = Opt.none(GasInt)
  list.expectKind nnkStmtList
  for callSection in list:
    callSection.expectKind(nnkCall)
    let label = callSection[0].strVal
    let body  = callSection[1]
    case label.normalize
    of "title"  : result.asmBlock.title   = parseStringLiteral(body)
    of "code"   : result.asmBlock.code    = parseCode(body)
    of "memory" : result.asmBlock.memory  = parseVMWords(body)
    of "stack"  : result.asmBlock.stack   = parseVMWords(body)
    of "storage": result.asmBlock.storage = parseStorage(body)
    of "logs"   : result.asmBlock.logs    = parseLogs(body)
    of "success": result.asmBlock.success = parseSuccess(body)
    of "data"   : result.asmBlock.data    = parseData(body)
    of "output" : result.asmBlock.output  = parseData(body)
    of "gasused": result.asmBlock.gasUsed = parseGasUsed(body)
    of "fork"   : result.forkStr = parseFork(body)
    of "setup"  : result.setup   = body
    else: error("unknown section '" & label & "'", callSection[0])

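# Sketch of a block this parser accepts (illustrative only; the
# `assembler` macro that feeds it is defined elsewhere in this module):
#
#   assembler:
#     title: "Add two words"
#     code:
#       Push1 "0x01"
#       Push1 "0x02"
#       Add
#     stack:
#       "0x0000000000000000000000000000000000000000000000000000000000000003"
#     success: true
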
type VMProxy = tuple[sym: NimNode, pr: NimNode]
|
|
|
|
|
2023-03-22 11:18:37 +00:00
|
|
|
proc generateVMProxy(masm: MacroAssembler): VMProxy =
|
2019-01-30 12:29:04 +00:00
|
|
|
let
|
2023-03-10 22:16:42 +00:00
|
|
|
vmProxySym = genSym(nskProc, "vmProxy")
|
2023-03-22 11:18:37 +00:00
|
|
|
body = newLitFixed(masm.asmBlock)
|
|
|
|
setup = if masm.setup.isNil:
|
|
|
|
newEmptyNode()
|
|
|
|
else:
|
|
|
|
masm.setup
|
|
|
|
vmState = ident("vmState")
|
|
|
|
fork = masm.forkStr
|
Added basic async capabilities for vm2. (#1260)
* Added basic async capabilities for vm2.
This is a whole new Git branch, not the same one as last time
(https://github.com/status-im/nimbus-eth1/pull/1250) - there wasn't
much worth salvaging. Main differences:
I didn't do the "each opcode has to specify an async handler" junk
that I put in last time. Instead, in oph_memory.nim you can see
sloadOp calling asyncChainTo and passing in an async operation.
That async operation is then run by the execCallOrCreate (or
asyncExecCallOrCreate) code in interpreter_dispatch.nim.
In the test code, the (previously existing) macro called "assembler"
now allows you to add a section called "initialStorage", specifying
fake data to be used by the EVM computation run by that test. (In
the long run we'll obviously want to write tests that for-real use
the JSON-RPC API to asynchronously fetch data; for now, this was
just an expedient way to write a basic unit test that exercises the
async-EVM code pathway.)
There's also a new macro called "concurrentAssemblers" that allows
you to write a test that runs multiple assemblers concurrently (and
then waits for them all to finish). There's one example test using
this, in test_op_memory_lazy.nim, though you can't actually see it
doing so unless you uncomment some echo statements in
async_operations.nim (in which case you can see the two concurrently
running EVM computations each printing out what they're doing, and
you'll see that they interleave).
A question: is it possible to make EVMC work asynchronously? (For
now, this code compiles and "make test" passes even if ENABLE_EVMC
is turned on, but it doesn't actually work asynchronously, it just
falls back on doing the usual synchronous EVMC thing. See
FIXME-asyncAndEvmc.)
* Moved the AsyncOperationFactory to the BaseVMState object.
* Made the AsyncOperationFactory into a table of fn pointers.
Also ditched the plain-data Vm2AsyncOperation type; it wasn't
really serving much purpose. Instead, the pendingAsyncOperation
field directly contains the Future.
* Removed the hasStorage idea.
It's not the right solution to the "how do we know whether we
still need to fetch the storage value or not?" problem. I
haven't implemented the right solution yet, but at least
we're better off not putting in a wrong one.
* Added/modified/removed some comments.
(Based on feedback on the PR.)
* Removed the waitFor from execCallOrCreate.
There was some back-and-forth in the PR regarding whether nested
waitFor calls are acceptable:
https://github.com/status-im/nimbus-eth1/pull/1260#discussion_r998587449
The eventual decision was to just change the waitFor to a doAssert
(since we probably won't want this extra functionality when running
synchronously anyway) to make sure that the Future is already
finished.
2022-11-01 15:35:46 +00:00
|
|
|
vmProxyProc = quote do:
|
2023-03-10 22:16:42 +00:00
|
|
|
proc `vmProxySym`(): bool =
|
2023-03-22 11:18:37 +00:00
|
|
|
let `vmState` = initVMEnv(`fork`)
|
|
|
|
`setup`
|
Added basic async capabilities for vm2. (#1260)
      let boa = `body`
      runVM(`vmState`, boa)

  (vmProxySym, vmProxyProc)

proc generateAssemblerTest(masm: MacroAssembler): NimNode =
  ## Expand one assembler block into a `unittest2` test case that
  ## invokes the generated VM proxy proc.
  let
    (vmProxySym, vmProxyProc) = generateVMProxy(masm)
    title: string = masm.asmBlock.title

  result = quote do:
    test `title`:
      `vmProxyProc`
      {.gcsafe.}:
        check `vmProxySym`()

  when defined(macro_assembler_debug):
    echo result.toStrLit.strVal
const
  codeAddress = hexToByteArray[20]("460121576cc7df020759730751f92bd62fd78dd6")
  coinbase = hexToByteArray[20]("bb7b8287f3f0a933474a79eae42cbca977791171")
proc initVMEnv*(network: string): BaseVMState =
  ## Create a fresh in-memory chain environment and VM state for the
  ## named network configuration.
  let
    conf = getChainConfig(network)
    cdb = DefaultDbMemory.newCoreDbRef()
    com = CommonRef.new(
      cdb,
      conf,
      conf.chainId.NetworkId)
    parent = BlockHeader(stateRoot: EMPTY_ROOT_HASH)
    parentHash = rlpHash(parent)
    header = BlockHeader(
      number: 1'u64,
      stateRoot: EMPTY_ROOT_HASH,
      parentHash: parentHash,
      coinbase: coinbase,
      timestamp: EthTime(0x1234),
      difficulty: 1003.u256,
      gasLimit: 100_000
    )

  com.initializeEmptyDb()
  BaseVMState.new(parent, header, com)
proc verifyAsmResult(vmState: BaseVMState, boa: Assembler, asmResult: CallResult): bool =
  ## Compare the EVM call result against the expectations recorded in
  ## `boa`: success flag, gas used, stack, memory, storage, logs and output.
  let com = vmState.com
  if not asmResult.isError:
    if boa.success == false:
      error "different success value", expected=boa.success, actual=true
      return false
  else:
    if boa.success == true:
      error "different success value", expected=boa.success, actual=false
      return false

  if boa.gasUsed.isSome:
    if boa.gasUsed.get != asmResult.gasUsed:
      error "different gasUsed", expected=boa.gasUsed.get, actual=asmResult.gasUsed
      return false

  if boa.stack.len != asmResult.stack.len:
    error "different stack len", expected=boa.stack.len, actual=asmResult.stack.len
    return false

  for i, v in asmResult.stack:
    let actual = v.dumpHex()
    let val = boa.stack[i].toHex()
    if actual != val:
      error "different stack value", idx=i, expected=val, actual=actual
      return false

  const chunkLen = 32
  let numChunks = asmResult.memory.len div chunkLen

  if numChunks != boa.memory.len:
    error "different memory len", expected=boa.memory.len, actual=numChunks
    return false

  for i in 0 ..< numChunks:
    let actual = asmResult.memory.bytes.toOpenArray(i * chunkLen, (i + 1) * chunkLen - 1).toHex()
    let mem = boa.memory[i].toHex()
    if mem != actual:
      error "different memory value", idx=i, expected=mem, actual=actual
      return false

  var stateDB = vmState.stateDB
  stateDB.persist()

  let
    al = com.db.ctx.getAccounts()
    accPath = codeAddress.keccakHash

  for kv in boa.storage:
    let key = kv[0].toHex()
    let val = kv[1].toHex()
    let slotKey = UInt256.fromBytesBE(kv[0]).toBytesBE.keccakHash.data
    let data = al.slotFetch(accPath, slotKey).valueOr: EmptyBlob
    let actual = data.toHex
    let zerosLen = 64 - (actual.len)
    let value = repeat('0', zerosLen) & actual
    if val != value:
      error "storage has different value", key=key, expected=val, actual=value
      return false

  let logs = vmState.getAndClearLogEntries()
  if logs.len != boa.logs.len:
    error "different logs len", expected=boa.logs.len, actual=logs.len
    return false

  for i, log in boa.logs:
    let eAddr = log.address.toHex()
    let aAddr = logs[i].address.toHex()
    if eAddr != aAddr:
      error "different address", expected=eAddr, actual=aAddr, idx=i
      return false
    let eData = log.data.toHex()
    let aData = logs[i].data.toHex()
    if eData != aData:
      error "different data", expected=eData, actual=aData, idx=i
      return false
    if log.topics.len != logs[i].topics.len:
      error "different topics len", expected=log.topics.len, actual=logs[i].topics.len, idx=i
      return false
    for x, t in log.topics:
      let eTopic = t.toHex()
      let aTopic = logs[i].topics[x].toHex()
      if eTopic != aTopic:
        error "different topic in log entry", expected=eTopic, actual=aTopic, logIdx=i, topicIdx=x
        return false

  if boa.output.len > 0:
    let actual = asmResult.output.toHex()
    let expected = boa.output.toHex()
    if expected != actual:
      error "different output detected", expected=expected, actual=actual
      return false

  result = true
proc createSignedTx(payload: Blob, chainId: ChainId): Transaction =
  ## Wrap `payload` in a signed EIP-4844 transaction addressed to the
  ## test contract, using a fixed, well-known test private key.
  let privateKey = PrivateKey.fromHex("7a28b5ba57c53603b0b07b56bba752f7784bf506fa95edc395f5cf6c7514fe9d")[]
  let unsignedTx = Transaction(
    txType: TxEIP4844,
    nonce: 0,
    gasPrice: 1.GasInt,
    gasLimit: 500_000_000.GasInt,
    to: Opt.some codeAddress,
    value: 500.u256,
    payload: payload,
    versionedHashes: @[EMPTY_UNCLE_HASH, EMPTY_SHA3]
  )
  signTransaction(unsignedTx, privateKey, chainId, false)
proc runVM*(vmState: BaseVMState, boa: Assembler): bool =
  ## Deploy the assembled code at `codeAddress`, execute it via a signed
  ## transaction, and verify the outcome against the expectations in `boa`.
  let
    com = vmState.com
  vmState.mutateStateDB:
    db.setCode(codeAddress, boa.code)
    db.setBalance(codeAddress, 1_000_000.u256)
  let tx = createSignedTx(boa.data, com.chainId)
  let asmResult = testCallEvm(tx, tx.getSender, vmState)
  verifyAsmResult(vmState, boa, asmResult)
macro assembler*(list: untyped): untyped =
  ## Parse an `assembler` test specification and expand it into a
  ## generated unit test.
  result = parseAssembler(list).generateAssemblerTest()

macro evmByteCode*(list: untyped): untyped =
  ## Compile-time helper: turn a statement list of opcodes into a
  ## bytecode literal.
  list.expectKind nnkStmtList
  var code = parseCode(list)
  result = newLitFixed(code)
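A rough sketch of how these macros are typically invoked from the test suite. The section names (`title`, `code`, `stack`) and opcode spellings are illustrative, modeled on the existing assembler tests; consult the real test files for the exact DSL:

```nim
# Illustrative only: section names and opcode spellings are assumptions
# based on the existing assembler tests, not a definitive reference.
assembler:
  title: "ADD: two small operands"
  code:
    Push1 "0x01"
    Push1 "0x02"
    Add
  stack: "0x03"   # expected stack contents after execution

# evmByteCode builds a raw bytecode sequence at compile time.
let code = evmByteCode:
  Push1 "0x01"
  Push1 "0x02"
  Add
```

The `assembler` form expands into a full `unittest2` test (including VM setup via `runVM`), while `evmByteCode` only yields the encoded bytes for use elsewhere.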