# nimbus-eth1/tests/macro_assembler.nim

# Nimbus
# Copyright (c) 2019-2023 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

import
  std/[macrocache, strutils],
  eth/keys,
  unittest2,
  chronicles,
  stew/byteutils,
  stew/shims/macros

import
  ../nimbus/db/[ledger, distinct_tries],
  ../nimbus/evm/types,
  ../nimbus/vm_internals,
  ../nimbus/transaction/[call_common, call_evm],
  ../nimbus/[vm_types, vm_state],
  ../nimbus/core/pow/difficulty
# Ditto, for GasPrice.
import ../nimbus/transaction except GasPrice
import ../tools/common/helpers except LogLevel
export byteutils
{.experimental: "dynamicBindSym".}
# backported from Nim 0.19.9
# remove this when we use newer Nim
#proc newLitFixed*(arg: enum): NimNode {.compileTime.} =
#  result = newCall(
#    arg.type.getTypeInst[1],
#    newLit(int(arg))
#  )

type
  VMWord* = array[32, byte]
  Storage* = tuple[key, val: VMWord]

  Assembler* = object
    title*   : string
    stack*   : seq[VMWord]
    memory*  : seq[VMWord]
    storage* : seq[Storage]
    code*    : seq[byte]
    logs*    : seq[Log]
    success* : bool
    gasLimit*: GasInt
    gasUsed* : GasInt
    data*    : seq[byte]
    output*  : seq[byte]

  MacroAssembler = object
    setup    : NimNode
    asmBlock : Assembler
    forkStr  : string
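
# The `Assembler` object is the parsed form of one `assembler:` test block.
# Illustrative sketch only (the section names are the ones handled by
# `parseAssembler` below; the opcodes, values and fork name are made up):
#
#   assembler:
#     title: "ADD two stack items"
#     code:
#       Push1 "0x01"
#       Push1 "0x02"
#       Add
#     stack:
#       "0x0000000000000000000000000000000000000000000000000000000000000003"
#     fork: "London"
#     success: true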

const
  idToOpcode = CacheTable"NimbusMacroAssembler"

static:
  for n in Op:
    idToOpcode[$n] = newLit(ord(n))
  # EIP-4399 new opcode: PrevRandao shares the DIFFICULTY opcode byte (0x44)
  idToOpcode["PrevRandao"] = newLit(ord(Difficulty))

proc validateVMWord(val: string, n: NimNode): VMWord =
  if val.len <= 2 or val.len > 66: error("invalid hex string", n)
  if not (val[0] == '0' and val[1] == 'x'): error("invalid hex string", n)
  let zerosLen = 64 - (val.len - 2)
  let value = repeat('0', zerosLen) & val.substr(2)
  hexToByteArray(value, result)

proc validateVMWord(val: NimNode): VMWord =
  val.expectKind(nnkStrLit)
  validateVMWord(val.strVal, val)
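
# Illustrative only: word literals are left-padded to 32 bytes, so a section
# value written as "0x2a" is accepted and treated as
# 0x000000000000000000000000000000000000000000000000000000000000002a.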

proc parseVMWords(list: NimNode): seq[VMWord] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateVMWord(val)

proc validateStorage(val: NimNode): Storage =
  val.expectKind(nnkCall)
  val[0].expectKind(nnkStrLit)
  val[1].expectKind(nnkStmtList)
  doAssert(val[1].len == 1)
  val[1][0].expectKind(nnkStrLit)
  result = (validateVMWord(val[0]), validateVMWord(val[1][0]))

proc parseStorage(list: NimNode): seq[Storage] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateStorage(val)
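
# Illustrative storage section (each entry is a `key: value` pair of hex
# literals, both run through validateVMWord; the values here are made up):
#
#   storage:
#     "0x00": "0x2a"
#     "0x01": "0xdeadbeef"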

proc parseStringLiteral(node: NimNode): string =
  let strNode = node[0]
  strNode.expectKind(nnkStrLit)
  strNode.strVal

proc parseSuccess(list: NimNode): bool =
  list.expectKind nnkStmtList
  list[0].expectKind(nnkIdent)
  $list[0] == "true"

proc parseData(list: NimNode): seq[byte] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    n.expectKind(nnkStrLit)
    result.add hexToSeqByte(n.strVal)

proc parseLog(node: NimNode): Log =
  node.expectKind({nnkPar, nnkTupleConstr})
  for item in node:
    item.expectKind(nnkExprColonExpr)
    let label = item[0].strVal
    let body  = item[1]
    case label.normalize
    of "address":
      body.expectKind(nnkStrLit)
      let value = body.strVal
      if value.len < 20:
        error("bad address format", body)
      hexToByteArray(value, result.address)
    of "topics":
      body.expectKind(nnkBracket)
      for x in body:
        result.topics.add validateVMWord(x.strVal, x)
    of "data":
      result.data = hexToSeqByte(body.strVal)
    else: error("unknown log section '" & label & "'", item[0])

proc parseLogs(list: NimNode): seq[Log] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    result.add parseLog(n)
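
# Illustrative logs section (one tuple per expected log entry; the keys match
# the `case` in parseLog, the values are made up):
#
#   logs:
#     (
#       address: "460121576cc7df020759730751f92bd62fd78dd6",
#       topics: ["0xaa"],
#       data: "0x1234"
#     )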

proc validateOpcode(sym: NimNode) =
  let typ = getTypeInst(sym)
  typ.expectKind(nnkSym)
  if $typ != "Op":
    error("unknown opcode '" & $sym & "'", sym)

proc addOpCode(code: var seq[byte], node, params: NimNode) =
  node.expectKind nnkSym
  let opcode = Op(idToOpcode[node.strVal].intVal)
  case opcode
  of Push1..Push32:
    if params.len != 1:
      error("expect 1 param, but got " & $params.len, node)
    let paramWidth = (opcode.ord - 95) * 2
    params[0].expectKind nnkStrLit
    var val = params[0].strVal
    if val[0] == '0' and val[1] == 'x':
      val = val.substr(2)
      if val.len != paramWidth:
        error("expected param with " & $paramWidth & " hex digits, got " & $val.len, node)
      code.add byte(opcode)
      code.add hexToSeqByte(val)
    else:
      error("invalid hex format", node)
  else:
    if params.len > 0:
      error("there should be no param for this instruction", node)
    code.add byte(opcode)
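
# Illustrative encoding: `Push2 "0xabcd"` emits the bytes 0x61 0xab 0xcd.
# Push1..Push32 require exactly (opcode.ord - 95) * 2 hex digits; every other
# opcode takes no immediate at all.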

proc parseCode(codes: NimNode): seq[byte] =
  let emptyNode = newEmptyNode()
  codes.expectKind nnkStmtList
  for pc, line in codes:
    line.expectKind({nnkCommand, nnkIdent, nnkStrLit})
    if line.kind == nnkStrLit:
      result.add hexToSeqByte(line.strVal)
    elif line.kind == nnkIdent:
      let sym = bindSym(line)
      validateOpcode(sym)
      result.addOpCode(sym, emptyNode)
    elif line.kind == nnkCommand:
      let sym = bindSym(line[0])
      validateOpcode(sym)
      var params = newNimNode(nnkBracket)
      for i in 1 ..< line.len:
        params.add line[i]
      result.addOpCode(sym, params)
    else:
      error("unknown syntax: " & line.toStrLit.strVal, line)
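
# Illustrative code section: a line is either a bare opcode ident, an opcode
# followed by its immediate as a hex string, or a raw hex string appended to
# the bytecode verbatim (the bytes here are made up):
#
#   code:
#     Push1 "0x60"
#     Push1 "0x40"
#     Mstore
#     "0x600035"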

proc parseFork(fork: NimNode): string =
  fork[0].expectKind({nnkIdent, nnkStrLit})
  fork[0].strVal

proc parseGasUsed(gas: NimNode): GasInt =
  gas[0].expectKind(nnkIntLit)
  result = gas[0].intVal

proc parseAssembler(list: NimNode): MacroAssembler =
  result.forkStr = "Frontier"
  result.asmBlock.success = true
  result.asmBlock.gasUsed = -1
  list.expectKind nnkStmtList
  for callSection in list:
    callSection.expectKind(nnkCall)
    let label = callSection[0].strVal
    let body  = callSection[1]
    case label.normalize
    of "title"  : result.asmBlock.title   = parseStringLiteral(body)
    of "code"   : result.asmBlock.code    = parseCode(body)
    of "memory" : result.asmBlock.memory  = parseVMWords(body)
    of "stack"  : result.asmBlock.stack   = parseVMWords(body)
    of "storage": result.asmBlock.storage = parseStorage(body)
    of "logs"   : result.asmBlock.logs    = parseLogs(body)
    of "success": result.asmBlock.success = parseSuccess(body)
    of "data"   : result.asmBlock.data    = parseData(body)
    of "output" : result.asmBlock.output  = parseData(body)
    of "gasused": result.asmBlock.gasUsed = parseGasUsed(body)
    of "fork"   : result.forkStr = parseFork(body)
    of "setup"  : result.setup = body
    else: error("unknown section '" & label & "'", callSection[0])
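
# Sections not present in a test block keep their defaults: fork "Frontier",
# success true, and gasUsed -1 (gas is only checked when a `gasused:` section
# supplies a value).
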
type VMProxy = tuple[sym: NimNode, pr: NimNode]

proc generateVMProxy(masm: MacroAssembler): VMProxy =
  let
    vmProxySym = genSym(nskProc, "vmProxy")
    body = newLitFixed(masm.asmBlock)
    setup = if masm.setup.isNil:
              newEmptyNode()
            else:
              masm.setup
    vmState = ident("vmState")
    fork = masm.forkStr
    vmProxyProc = quote do:
      proc `vmProxySym`(): bool =
        let `vmState` = initVMEnv(`fork`)
        `setup`
        let boa = `body`
        runVM(`vmState`, boa)
  (vmProxySym, vmProxyProc)

proc generateAssemblerTest(masm: MacroAssembler): NimNode =
  let
    (vmProxySym, vmProxyProc) = generateVMProxy(masm)
    title: string = masm.asmBlock.title

  result = quote do:
    test `title`:
      `vmProxyProc`
      {.gcsafe.}:
        check `vmProxySym`()

  when defined(macro_assembler_debug):
    echo result.toStrLit.strVal
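
# Compile the test binary with `-d:macro_assembler_debug` to make the
# `when defined` branch above print each generated test body during
# compilation.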

const
  codeAddress = hexToByteArray[20]("460121576cc7df020759730751f92bd62fd78dd6")
  coinbase = hexToByteArray[20]("bb7b8287f3f0a933474a79eae42cbca977791171")

proc initVMEnv*(network: string): BaseVMState =
  let
    conf = getChainConfig(network)
    com = CommonRef.new(
      newCoreDbRef LegacyDbMemory,
      conf,
      false,
      conf.chainId.NetworkId)
    parent = BlockHeader(stateRoot: EMPTY_ROOT_HASH)
    parentHash = rlpHash(parent)
    header = BlockHeader(
      blockNumber: 1.u256,
      stateRoot: EMPTY_ROOT_HASH,
      parentHash: parentHash,
      coinbase: coinbase,
      timestamp: EthTime(0x1234),
      difficulty: 1003.u256,
      gasLimit: 100_000
    )

  com.initializeEmptyDb()
  BaseVMState.new(parent, header, com)
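
# Illustrative use (any network/fork name accepted by getChainConfig works;
# "Frontier" is the assembler macro's default):
#
#   let vmState = initVMEnv("Frontier")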

proc verifyAsmResult(vmState: BaseVMState, boa: Assembler, asmResult: CallResult): bool =
  let com = vmState.com
  if not asmResult.isError:
    if boa.success == false:
      error "different success value", expected=boa.success, actual=true
      return false
  else:
    if boa.success == true:
      error "different success value", expected=boa.success, actual=false
      return false
  if boa.gasUsed != -1:
    if boa.gasUsed != asmResult.gasUsed:
      error "different gasUsed", expected=boa.gasUsed, actual=asmResult.gasUsed
      return false

  if boa.stack.len != asmResult.stack.values.len:
    error "different stack len", expected=boa.stack.len, actual=asmResult.stack.values.len
    return false

  for i, v in asmResult.stack.values:
    let actual = v.dumpHex()
    let val = boa.stack[i].toHex()
    if actual != val:
      error "different stack value", idx=i, expected=val, actual=actual
      return false

  const chunkLen = 32
  let numChunks = asmResult.memory.len div chunkLen

  if numChunks != boa.memory.len:
    error "different memory len", expected=boa.memory.len, actual=numChunks
    return false

  for i in 0 ..< numChunks:
    let actual = asmResult.memory.bytes.toOpenArray(i * chunkLen, (i + 1) * chunkLen - 1).toHex()
    let mem = boa.memory[i].toHex()
    if mem != actual:
      error "different memory value", idx=i, expected=mem, actual=actual
      return false

  var stateDB = vmState.stateDB
  stateDB.persist()

  var
    storageRoot = stateDB.getStorageRoot(codeAddress)
    trie = initStorageTrie(com.db, storageRoot)

  for kv in boa.storage:
    let key = kv[0].toHex()
    let val = kv[1].toHex()
    let keyBytes = (@(kv[0]))
    let actual = trie.getSlotBytes(keyBytes).toHex()
    let zerosLen = 64 - (actual.len)
    let value = repeat('0', zerosLen) & actual
    if val != value:
      error "storage has different value", key=key, expected=val, actual=value
      return false

  let logs = vmState.getAndClearLogEntries()
  if logs.len != boa.logs.len:
    error "different logs len", expected=boa.logs.len, actual=logs.len
    return false

  for i, log in boa.logs:
    let eAddr = log.address.toHex()
    let aAddr = logs[i].address.toHex()
    if eAddr != aAddr:
      error "different address", expected=eAddr, actual=aAddr, idx=i
      return false
    let eData = log.data.toHex()
    let aData = logs[i].data.toHex()
    if eData != aData:
      error "different data", expected=eData, actual=aData, idx=i
      return false
    if log.topics.len != logs[i].topics.len:
      error "different topics len", expected=log.topics.len, actual=logs[i].topics.len, idx=i
      return false
    for x, t in log.topics:
      let eTopic = t.toHex()
      let aTopic = logs[i].topics[x].toHex()
      if eTopic != aTopic:
        error "different topic in log entry", expected=eTopic, actual=aTopic, logIdx=i, topicIdx=x
        return false

  if boa.output.len > 0:
    let actual = asmResult.output.toHex()
    let expected = boa.output.toHex()
    if expected != actual:
      error "different output detected", expected=expected, actual=actual
      return false

  result = true

proc createSignedTx(payload: Blob, chainId: ChainId): Transaction =
  let privateKey = PrivateKey.fromHex("7a28b5ba57c53603b0b07b56bba752f7784bf506fa95edc395f5cf6c7514fe9d")[]
  let unsignedTx = Transaction(
    txType: TxEIP4844,
    nonce: 0,
    gasPrice: 1.GasInt,
    gasLimit: 500_000_000.GasInt,
    to: codeAddress.some,
    value: 500.u256,
    payload: payload,
    versionedHashes: @[EMPTY_UNCLE_HASH, EMPTY_SHA3]
)
signTransaction(unsignedTx, privateKey, chainId, false)
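# Executes the assembled program `boa` against `vmState`: the code is installed
# at `codeAddress` with a small balance, a signed transaction carrying
# `boa.data` is run through `testCallEvm`, and the outcome is checked with
# `verifyAsmResult`.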
proc runVM*(vmState: BaseVMState, boa: Assembler): bool =
let
com = vmState.com
fork = com.toEVMFork()
vmState.mutateStateDB:
db.setCode(codeAddress, boa.code)
db.setBalance(codeAddress, 1_000_000.u256)
let tx = createSignedTx(boa.data, com.chainId)
let asmResult = testCallEvm(tx, tx.getSender, vmState, fork)
verifyAsmResult(vmState, boa, asmResult)
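# Parses the assembler test DSL given in `list` and expands to a unittest test
# case for the described EVM program; the accepted sections and the generated
# code are defined by `parseAssembler` and `generateAssemblerTest`.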
macro assembler*(list: untyped): untyped =
result = parseAssembler(list).generateAssemblerTest()
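# Compiles a statement list of assembly mnemonics into the corresponding
# bytecode literal via `parseCode`/`newLitFixed`.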
macro evmByteCode*(list: untyped): untyped =
list.expectKind nnkStmtList
var code = parseCode(list)
result = newLitFixed(code)
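# Illustrative sketch of how the `assembler` macro is typically invoked from
# the opcode test suites. The section names and opcode spelling below are an
# assumption for illustration only; the authoritative syntax is whatever
# `parseAssembler`/`parseCode` accept:
#
#   assembler:
#     title: "ADD two immediates"
#     code:
#       PUSH1 "0x02"
#       PUSH1 "0x03"
#       ADD
#       STOP
#     stack: "0x05"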