import
  macrocache, strutils, sequtils, unittest2, times,
  stew/byteutils, chronicles, eth/[common, keys],
  stew/shims/macros

import
  options, eth/trie/[db, hexary],
  ../nimbus/db/[db_chain, accounts_cache],
  ../nimbus/vm2/[async_operations, types],
  ../nimbus/vm_internals, ../nimbus/forks,
  ../nimbus/transaction/[call_common, call_evm],
  ../nimbus/[transaction, chain_config, genesis, vm_types, vm_state],
  ../nimbus/utils/difficulty
# Need to exclude ServerCommand because it contains something
# called Stop that interferes with the EVM operation named Stop.
import chronos except ServerCommand

export byteutils
{.experimental: "dynamicBindSym".}

# backported from Nim 0.19.9
# remove this when we use newer Nim
proc newLitFixed*(arg: enum): NimNode {.compileTime.} =
  result = newCall(
    arg.type.getTypeInst[1],
    newLit(int(arg))
  )
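
# Illustrative sketch (not from the original source): unlike `newLit`,
# which would yield a plain int literal here, `newLitFixed` produces a
# conversion call that preserves the enum type at the use site, e.g.
#
#   newLitFixed(Stop)   # builds an AST equivalent to `Op(0)`
#
# so splicing the result into generated code keeps `Op`-typed expressions.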
type
  VMWord* = array[32, byte]
  Storage* = tuple[key, val: VMWord]

  Assembler* = object
    title*: string
    chainDBIdentName*: string
    vmStateIdentName*: string
    stack*: seq[VMWord]
    memory*: seq[VMWord]
    initialStorage*: seq[Storage]
    storage*: seq[Storage]
    code*: seq[byte]
    logs*: seq[Log]
    success*: bool
    gasLimit*: GasInt
    gasUsed*: GasInt
    data*: seq[byte]
    output*: seq[byte]
    fork*: Fork

  ConcurrencyTest* = object
    title*: string
    assemblers*: seq[Assembler]
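
# Illustrative sketch (assumed DSL shape, inferred from the section labels
# handled by `parseAssembler` below; the real test syntax lives in the test
# files, not here):
#
#   assembler:
#     title: "simple SLOAD"
#     initialStorage:
#       "0x01": "0xaa"
#     code:
#       Push1 "0x01"
#       Sload
#     stack: "0xaa"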

const
  idToOpcode = CacheTable"NimbusMacroAssembler"

static:
  for n in Op:
    idToOpcode[$n] = newLit(ord(n))

  # EIP-4399 new opcode
  idToOpcode["PrevRandao"] = newLit(ord(Difficulty))

proc validateVMWord(val: string, n: NimNode): VMWord =
  if val.len <= 2 or val.len > 66: error("invalid hex string", n)
  if not (val[0] == '0' and val[1] == 'x'): error("invalid hex string", n)
  let zerosLen = 64 - (val.len - 2)
  let value = repeat('0', zerosLen) & val.substr(2)
  hexToByteArray(value, result)
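
# Example (illustrative sketch): validateVMWord("0xaa", n) left-pads the
# digits to 64 hex characters, yielding a 32-byte VMWord of 31 zero bytes
# followed by 0xaa.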

proc validateVMWord(val: NimNode): VMWord =
  val.expectKind(nnkStrLit)
  validateVMWord(val.strVal, val)

proc parseVMWords(list: NimNode): seq[VMWord] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateVMWord(val)

proc validateStorage(val: NimNode): Storage =
  val.expectKind(nnkCall)
  val[0].expectKind(nnkStrLit)
  val[1].expectKind(nnkStmtList)
  doAssert(val[1].len == 1)
  val[1][0].expectKind(nnkStrLit)
  result = (validateVMWord(val[0]), validateVMWord(val[1][0]))

proc parseStorage(list: NimNode): seq[Storage] =
  result = @[]
  list.expectKind nnkStmtList
  for val in list:
    result.add validateStorage(val)
proc parseStringLiteral(node: NimNode): string =
  let strNode = node[0]
  strNode.expectKind(nnkStrLit)
  strNode.strVal

proc parseIdent(node: NimNode): string =
  let identNode = node[0]
  identNode.expectKind(nnkIdent)
  identNode.strVal

proc parseSuccess(list: NimNode): bool =
  list.expectKind nnkStmtList
  list[0].expectKind(nnkIdent)
  $list[0] == "true"

proc parseData(list: NimNode): seq[byte] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    n.expectKind(nnkStrLit)
    result.add hexToSeqByte(n.strVal)

proc parseLog(node: NimNode): Log =
  node.expectKind(nnkPar)
  for item in node:
    item.expectKind(nnkExprColonExpr)
    let label = item[0].strVal
    let body = item[1]
    case label.normalize
    of "address":
      body.expectKind(nnkStrLit)
      let value = body.strVal
      if value.len < 20:
        error("bad address format", body)
      hexToByteArray(value, result.address)
    of "topics":
      body.expectKind(nnkBracket)
      for x in body:
        result.topics.add validateVMWord(x.strVal, x)
    of "data":
      result.data = hexToSeqByte(body.strVal)
    else: error("unknown log section '" & label & "'", item[0])

proc parseLogs(list: NimNode): seq[Log] =
  result = @[]
  list.expectKind nnkStmtList
  for n in list:
    result.add parseLog(n)

proc validateOpcode(sym: NimNode) =
  let typ = getTypeInst(sym)
  typ.expectKind(nnkSym)
  if $typ != "Op":
    error("unknown opcode '" & $sym & "'", sym)

proc addOpCode(code: var seq[byte], node, params: NimNode) =
  node.expectKind nnkSym
  let opcode = Op(idToOpcode[node.strVal].intVal)
  case opcode
  of Push1..Push32:
    if params.len != 1:
      error("expect 1 param, but got " & $params.len, node)
    let paramWidth = (opcode.ord - 95) * 2
    params[0].expectKind nnkStrLit
    var val = params[0].strVal
    if val[0] == '0' and val[1] == 'x':
      val = val.substr(2)
      if val.len != paramWidth:
        error("expected param with " & $paramWidth & " hex digits, got " & $val.len, node)
      code.add byte(opcode)
      code.add hexToSeqByte(val)
    else:
      error("invalid hex format", node)
  else:
    if params.len > 0:
      error("there should be no param for this instruction", node)
    code.add byte(opcode)
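
# Example (illustrative sketch): `Push1 "0x01"` emits the two bytes
# 0x60 0x01. Push1 has ordinal 96 (0x60), so the expected immediate is
# paramWidth = (96 - 95) * 2 = 2 hex digits, i.e. exactly one byte.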

proc parseCode(codes: NimNode): seq[byte] =
  let emptyNode = newEmptyNode()
  codes.expectKind nnkStmtList
  for pc, line in codes:
    line.expectKind({nnkCommand, nnkIdent, nnkStrLit})
    if line.kind == nnkStrLit:
      result.add hexToSeqByte(line.strVal)
    elif line.kind == nnkIdent:
      let sym = bindSym(line)
      validateOpcode(sym)
      result.addOpCode(sym, emptyNode)
    elif line.kind == nnkCommand:
      let sym = bindSym(line[0])
      validateOpcode(sym)
      var params = newNimNode(nnkBracket)
      for i in 1 ..< line.len:
        params.add line[i]
      result.addOpCode(sym, params)
    else:
      error("unknown syntax: " & line.toStrLit.strVal, line)

proc parseFork(fork: NimNode): Fork =
  fork[0].expectKind({nnkIdent, nnkStrLit})
  # Normalise whitespace and capitalize each word because `parseEnum` matches
  # enum string values not symbols, and the strings are capitalized in `Fork`.
  parseEnum[Fork](fork[0].strVal.splitWhitespace().map(capitalizeAscii).join(" "))
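
# Example (illustrative sketch, assuming "Tangerine Whistle" is among the
# `Fork` enum strings): a section such as `fork: "tangerine  whistle"` is
# normalised to "Tangerine Whistle" before the `parseEnum[Fork]` lookup.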

proc parseGasUsed(gas: NimNode): GasInt =
  gas[0].expectKind(nnkIntLit)
  result = gas[0].intVal
proc parseAssembler(list: NimNode): Assembler =
  result.success = true
  result.fork = FkFrontier
  result.gasUsed = -1
  list.expectKind nnkStmtList
  for callSection in list:
    callSection.expectKind(nnkCall)
    let label = callSection[0].strVal
    let body = callSection[1]
    case label.normalize
    of "title": result.title = parseStringLiteral(body)
    of "vmstate": result.vmStateIdentName = parseIdent(body)
    of "chaindb": result.chainDBIdentName = parseIdent(body)
    of "code" : result.code = parseCode(body)
    of "memory": result.memory = parseVMWords(body)
    of "stack" : result.stack = parseVMWords(body)
    of "storage": result.storage = parseStorage(body)
    of "initialstorage": result.initialStorage = parseStorage(body)
    of "logs": result.logs = parseLogs(body)
    of "success": result.success = parseSuccess(body)
    of "data": result.data = parseData(body)
    of "output": result.output = parseData(body)
    of "fork": result.fork = parseFork(body)
    of "gasused": result.gasUsed = parseGasUsed(body)
    else: error("unknown section '" & label & "'", callSection[0])
|
|
|
|
|
|
|
|
proc parseAssemblers(list: NimNode): seq[Assembler] =
|
|
|
|
result = @[]
|
|
|
|
list.expectKind nnkStmtList
|
|
|
|
for callSection in list:
|
|
|
|
# Should we do something with the label? Or is the
|
|
|
|
# assembler's "title" section good enough?
|
|
|
|
# let label = callSection[0].strVal
|
|
|
|
let body = callSection[1]
|
|
|
|
result.add parseAssembler(body)
|
|
|
|
|
|
|
|
proc parseConcurrencyTest(list: NimNode): ConcurrencyTest =
  list.expectKind nnkStmtList
  for callSection in list:
    callSection.expectKind(nnkCall)
    let label = callSection[0].strVal
    let body = callSection[1]
    case label.normalize
    of "title": result.title = parseStringLiteral(body)
    of "assemblers": result.assemblers = parseAssemblers(body)
    else: error("unknown section '" & label & "'", callSection[0])

type VMProxy = tuple[sym: NimNode, pr: NimNode]

proc generateVMProxy(boa: Assembler, shouldBeAsync: bool): VMProxy =
  let
    vmProxySym = genSym(nskProc, "asyncVMProxy")
    chainDB = ident(if boa.chainDBIdentName == "": "chainDB" else: boa.chainDBIdentName)
    vmState = ident(if boa.vmStateIdentName == "": "vmState" else: boa.vmStateIdentName)
    body = newLitFixed(boa)
    returnType = if shouldBeAsync:
                   quote do: Future[bool]
                 else:
                   quote do: bool
    runVMProcName = ident(if shouldBeAsync: "asyncRunVM" else: "runVM")
    vmProxyProc = quote do:
      proc `vmProxySym`(): `returnType` =
        let boa = `body`
        let asyncFactory =
          AsyncOperationFactory(
            lazyDataSource:
              if len(boa.initialStorage) == 0:
                noLazyDataSource()
              else:
                fakeLazyDataSource(boa.initialStorage))
        `runVMProcName`(`vmState`, `chainDB`, boa, asyncFactory)
  (vmProxySym, vmProxyProc)
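# Roughly, for a synchronous Assembler the proxy generated by
# generateVMProxy expands to something like the following sketch
# (illustrative only; the proc name is gensym'd in practice and `boa`
# is the serialized Assembler literal):
#
#   proc asyncVMProxy(): bool =
#     let boa = <Assembler literal>
#     let asyncFactory = AsyncOperationFactory(
#       lazyDataSource: noLazyDataSource())  # or fakeLazyDataSource(boa.initialStorage)
#     runVM(vmState, chainDB, boa, asyncFactory)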
proc generateAssemblerTest(boa: Assembler): NimNode =
  let
    (vmProxySym, vmProxyProc) = generateVMProxy(boa, false)
    title: string = boa.title

  result = quote do:
    test `title`:
      `vmProxyProc`
      {.gcsafe.}:
        check `vmProxySym`()

  when defined(macro_assembler_debug):
    echo result.toStrLit.strVal

type
  AsyncVMProxyTestProc* = proc(): Future[bool]

proc generateConcurrencyTest(t: ConcurrencyTest): NimNode =
  let
    vmProxies: seq[VMProxy] = t.assemblers.map(proc(boa: Assembler): VMProxy = generateVMProxy(boa, true))
    vmProxyProcs: seq[NimNode] = vmProxies.map(proc(x: VMProxy): NimNode = x.pr)
    vmProxySyms: seq[NimNode] = vmProxies.map(proc(x: VMProxy): NimNode = x.sym)
    title: string = t.title

  let runVMProxy = quote do:
    {.gcsafe.}:
      let procs: seq[AsyncVMProxyTestProc] = @(`vmProxySyms`)
      let futures: seq[Future[bool]] = procs.map(proc(s: AsyncVMProxyTestProc): Future[bool] = s())
      waitFor(allFutures(futures))

  # Is there a way to use "quote" (or something like it) to splice
  # in a statement list?
  let stmtList = newStmtList(vmProxyProcs)
  stmtList.add(runVMProxy)
  result = newCall("test", newStrLitNode(title), stmtList)

  when defined(macro_assembler_debug):
    echo result.toStrLit.strVal

proc initDatabase*(networkId = MainNet): (BaseVMState, BaseChainDB) =
  let db = newBaseChainDB(newMemoryDB(), false, networkId, networkParams(networkId))
  initializeEmptyDb(db)

  let
    parent = getCanonicalHead(db)
    coinbase = hexToByteArray[20]("bb7b8287f3f0a933474a79eae42cbca977791171")
    timestamp = parent.timestamp + initDuration(seconds = 1)
    header = BlockHeader(
      blockNumber: 1.u256,
      stateRoot: parent.stateRoot,
      parentHash: parent.blockHash,
      coinbase: coinbase,
      timestamp: timestamp,
      difficulty: db.config.calcDifficulty(timestamp, parent),
      gasLimit: 100_000
    )
    vmState = BaseVMState.new(header, db)

  (vmState, db)

const codeAddress = hexToByteArray[20]("460121576cc7df020759730751f92bd62fd78dd6")

proc verifyAsmResult(vmState: BaseVMState, chainDB: BaseChainDB, boa: Assembler, asmResult: CallResult): bool =
  if not asmResult.isError:
    if boa.success == false:
      error "different success value", expected=boa.success, actual=true
      return false
  else:
    if boa.success == true:
      error "different success value", expected=boa.success, actual=false
      return false

  if boa.gasUsed != -1:
    if boa.gasUsed != asmResult.gasUsed:
      error "different gasUsed", expected=boa.gasUsed, actual=asmResult.gasUsed
      return false

  if boa.stack.len != asmResult.stack.values.len:
    error "different stack len", expected=boa.stack.len, actual=asmResult.stack.values.len
    return false

  for i, v in asmResult.stack.values:
    let actual = v.dumpHex()
    let val = boa.stack[i].toHex()
    if actual != val:
      error "different stack value", idx=i, expected=val, actual=actual
      return false

  const chunkLen = 32
  let numChunks = asmResult.memory.len div chunkLen

  if numChunks != boa.memory.len:
    error "different memory len", expected=boa.memory.len, actual=numChunks
    return false

  for i in 0 ..< numChunks:
    let actual = asmResult.memory.bytes.toOpenArray(i * chunkLen, (i + 1) * chunkLen - 1).toHex()
    let mem = boa.memory[i].toHex()
    if mem != actual:
      error "different memory value", idx=i, expected=mem, actual=actual
      return false

  var stateDB = vmState.stateDB
  stateDB.persist()

  var
    storageRoot = stateDB.getStorageRoot(codeAddress)
    trie = initSecureHexaryTrie(chainDB.db, storageRoot)

  for kv in boa.storage:
    let key = kv[0].toHex()
    let val = kv[1].toHex()
    let keyBytes = (@(kv[0]))
    let actual = trie.get(keyBytes).toHex()
    let zerosLen = 64 - (actual.len)
    let value = repeat('0', zerosLen) & actual
    if val != value:
      error "storage has different value", key=key, expected=val, actual=value
      return false

  let logs = vmState.logEntries
  if logs.len != boa.logs.len:
    error "different logs len", expected=boa.logs.len, actual=logs.len
    return false

  for i, log in boa.logs:
    let eAddr = log.address.toHex()
    let aAddr = logs[i].address.toHex()
    if eAddr != aAddr:
      error "different address", expected=eAddr, actual=aAddr, idx=i
      return false
    let eData = log.data.toHex()
    let aData = logs[i].data.toHex()
    if eData != aData:
      error "different data", expected=eData, actual=aData, idx=i
      return false
    if log.topics.len != logs[i].topics.len:
      error "different topics len", expected=log.topics.len, actual=logs[i].topics.len, idx=i
      return false
    for x, t in log.topics:
      let eTopic = t.toHex()
      let aTopic = logs[i].topics[x].toHex()
      if eTopic != aTopic:
        error "different topic in log entry", expected=eTopic, actual=aTopic, logIdx=i, topicIdx=x
        return false

  if boa.output.len > 0:
    let actual = asmResult.output.toHex()
    let expected = boa.output.toHex()
    if expected != actual:
      error "different output detected", expected=expected, actual=actual
      return false

  result = true

proc createSignedTx(boaData: Blob, chainId: ChainId): Transaction =
  let privateKey = PrivateKey.fromHex("7a28b5ba57c53603b0b07b56bba752f7784bf506fa95edc395f5cf6c7514fe9d")[]
  let unsignedTx = Transaction(
    txType: TxLegacy,
    nonce: 0,
    gasPrice: 1.GasInt,
    gasLimit: 500_000_000.GasInt,
    to: codeAddress.some,
    value: 500.u256,
    payload: boaData
  )
  signTransaction(unsignedTx, privateKey, chainId, false)

proc runVM*(vmState: BaseVMState, chainDB: BaseChainDB, boa: Assembler, asyncFactory: AsyncOperationFactory): bool =
  vmState.asyncFactory = asyncFactory
  vmState.mutateStateDB:
    db.setCode(codeAddress, boa.code)
    db.setBalance(codeAddress, 1_000_000.u256)
  let tx = createSignedTx(boa.data, chainDB.config.chainId)
  let asmResult = testCallEvm(tx, tx.getSender, vmState, boa.fork)
  verifyAsmResult(vmState, chainDB, boa, asmResult)

# FIXME-duplicatedForAsync
proc asyncRunVM*(vmState: BaseVMState, chainDB: BaseChainDB, boa: Assembler, asyncFactory: AsyncOperationFactory): Future[bool] {.async.} =
  vmState.asyncFactory = asyncFactory
  vmState.mutateStateDB:
    db.setCode(codeAddress, boa.code)
    db.setBalance(codeAddress, 1_000_000.u256)
  let tx = createSignedTx(boa.data, chainDB.config.chainId)
  let asmResult = await asyncTestCallEvm(tx, tx.getSender, vmState, boa.fork)
  return verifyAsmResult(vmState, chainDB, boa, asmResult)

macro assembler*(list: untyped): untyped =
  result = parseAssembler(list).generateAssemblerTest()
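# Minimal usage sketch of the `assembler` DSL (hypothetical opcodes and
# values, not a test from this suite); each section label corresponds to
# a branch in parseAssembler's case statement:
#
#   assembler:
#     title: "ADD two immediates"
#     code:
#       Push1 "0x02"
#       Push1 "0x03"
#       Add
#       Stop
#     stack: "0x05"
#     success: true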
macro concurrentAssemblers*(list: untyped): untyped =
  result = parseConcurrencyTest(list).generateConcurrencyTest()
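# Hedged sketch of a `concurrentAssemblers` test (assembler bodies
# elided): each sub-section under "assemblers" is parsed by
# parseAssemblers and run as an async VM proxy, and the generated test
# waits for all of their futures together:
#
#   concurrentAssemblers:
#     title: "two interleaved EVM computations"
#     assemblers:
#       asm1:
#         title: "asm1"
#         ...
#       asm2:
#         title: "asm2"
#         ...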
macro evmByteCode*(list: untyped): untyped =
  list.expectKind nnkStmtList
  var code = parseCode(list)
  result = newLitFixed(code)
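# Illustrative only (hypothetical opcodes): `evmByteCode` compiles a
# code section into a byte-code literal without wrapping it in a test:
#
#   let code = evmByteCode:
#     Push1 "0x0a"
#     Stop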