`doCall` used by JSON-RPC is another way to set up and call the EVM.
Move it to `transaction/call_evm`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Start gathering the functions that call the EVM into one place,
`transaction/call_evm.nim`.
This is the first of a series of changes to gather all the ways the EVM is
called into one place. Duplicate, slightly different setup functions have accumulated over
time, each with some knowledge of EVM internals. When they are brought
together, these methods will be changed to use a single entry point to the EVM,
allowing the entry point to be refactored, EVMC to be completed, and async
concurrency to be implemented on top. This also simplifies the callers.
First, a helper function used by RPC and GraphQL to make EVM calls without
permanently modifying the account state: `setupComputation` ->
`rpcSetupComputation`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
nim-graphql v0.2.2 has numerous bugfixes, but the notable ones are:
- only one non-introspection field is allowed in the subscription root
- @skip and @include are not allowed on the subscription root
- more descriptive error messages for the playground ethapi fixes
- fixes the GraphiQL client's complaint that our introspection system is outdated
- graphql http server enhancements: gzip encoding and chunked transfer
`processArguments` now has an overloaded proc, one with an `opt` param and one without.
an OptParser can now be passed via the `opt` param.
this is useful in scenarios where test code needs to simulate command line
arguments without using the real ones.
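a minimal sketch of the idea (the `processArguments` body and the option names
below are made up for illustration; the real nimbus proc differs):

  import std/parseopt

  # stand-in proc for illustration only; not the real nimbus signature
  proc processArguments(opt: var OptParser) =
    for kind, key, val in opt.getopt():
      echo kind, " ", key, " ", val

  # test code builds an OptParser from a string instead of the real command
  # line and hands it to the overload that accepts the `opt` param
  var fakeArgs = initOptParser("--network:testnet --log-level:DEBUG")
  processArguments(fakeArgs)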
rather than initializing them to 0, those block numbers
are initialized to high(BlockNumber). this fixes an
issue when an imported genesis.json doesn't contain all
forks' blockNumber.
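a minimal sketch of the effect (the BlockNumber alias and fork name are
simplified stand-ins; nimbus uses a wider integer type):

  type BlockNumber = uint64            # stand-in for the real, wider type

  # a fork block number missing from genesis.json stays "never reached"
  var berlinBlock = high(BlockNumber)

  proc isBerlinActive(n: BlockNumber): bool =
    n >= berlinBlock

  echo isBerlinActive(0)               # false, instead of true with a 0 default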
- fixes http server response status code
- fixes `__schema.types` and `__schema.directives` implementation
- fixes 'getOperation' in executor.nim
- web UI (GraphiQL) for the http server
The account database code is not supposed to raise exceptions in the EVM, and
the behaviour is not well defined if it does. It isn't compliant with the EVMC
spec either. But that will be dealt with properly when the account state-cache
is reworked, as there is some work to be done on it.
Meanwhile, if it raises in code under `chainTo` and then `(continuation)()`,
the behaviour was changed slightly by the stack-shrink patches.
Before those patches, an exception after the recursion-point was converted to
`c.setError` "Opcode Dispatch Error" in `executeOpcodes`. After them, it would
propagate out instead, which is a different behaviour. (It still correctly
walked the chain of `c.dispose()` calls to clean up.)
It's easy to restore the original behaviour just by moving the continuation
call, so let's do that.
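Roughly, the shape involved (a simplified sketch with toy types, not the
actual nimbus code):

  # toy types for illustration only; not the nimbus definitions
  type Computation = ref object
    continuation: proc ()
    error: string

  proc setError(c: Computation, msg: string) =
    c.error = msg

  proc executeOpcodes(c: Computation) =
    try:
      # ... the opcode dispatch loop runs here and may set c.continuation ...
      if not c.continuation.isNil:
        let cont = c.continuation
        c.continuation = nil
        cont()   # called inside the try: a raise here is converted below,
                 # as it was before the stack-shrink patches
    except CatchableError as e:
      c.setError("Opcode Dispatch Error: " & e.msg)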
Signed-off-by: Jamie Lokier <jamie@shareable.org>
We can't use `ulimit -s` to limit stack size on Windows. Even though Bash
accepts `ulimit -s` and the reported numbers change, it has no effect and is not
passed to child processes.
(See https://public-inbox.org/git/alpine.DEB.2.21.1.1709131448390.4132@virtualbox/)
Instead, set it when building the test executable, following a suggestion from
@stefantalpalaru.
https://github.com/status-im/nimbus-eth1/pull/598#discussion_r621107128
To ensure no conflict with `config.nims`, `-d:windowsNoSetStack` is used. This
proved unnecessary in practice because the command-line option is passed to the
linker after the config file option. But given we don't have an automated test
to verify linker behaviour, it's best not to rely on the option order, on how
the linker treats it, or on whether Nim will always send the options in that order.
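For illustration, the kind of `config.nims` arrangement this implies (the
values and option spelling are examples, not the actual build script):

  # illustrative config.nims fragment: set a default Windows stack size at
  # link time unless the caller opts out with -d:windowsNoSetStack
  when defined(windows) and not defined(windowsNoSetStack):
    switch("passL", "-Wl,--stack,2097152")   # 2 MiB reserve, example value

The test build can then pass its own, smaller `--passL` stack value on the
command line together with `-d:windowsNoSetStack`.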
Testing:
This has been verified by using a smaller limit. At 200k, all `ENABLE_EVMC=0`
OS targets passed as expected, and all `ENABLE_EVMC=1` OS targets failed with
expected kinds of errors due to stack overflow, including Windows.
(400k wasn't small enough; 32-bit x86 Windows passed at that size).
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Make `run-nimbus-sync` look for and use `~/.nimbus/$TESTNET/nimbus/nodekey`
during Ethereum sync tests. This is a private key which identifies the node.
If you have created that file, its contents should be a hex nodekey, same
format as Geth. In fact you can use Geth to generate one. If found,
`run-nimbus-sync` will use it as the nodekey, instead of Nimbus's default,
which is a random nodekey each time it is run.
Using the same nodekey for each run allows us to add the corresponding
`enode:...` URL (public key) as a trusted peer to the dedicated Geth instances,
using Geth's `admin.addTrustedPeer`.
This ensures Geth will almost always accept our connections, which is very
helpful for sync testing, as we don't have to wait a long time for a good peer.
Indeed, without this we might never get a willing good peer, due to reputation
effects while working on new sync methods.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
why:
only two public functions are left: executeOpcodes() and execCallOrCreate(),
where the former was originally in interpreter_dispatch.nim and
the latter calls the former.
improves maintainability
overview:
can be verified by running "make check_vm2 X=0" in the nimbus directory
(be patient when running it.) the X=0 flag is necessary if there is a
native NIM compiler, which may bail out at some vendor imports.
details:
when compiling state_transaction.nim, the nim flag vm2_enabled must
be set in order to avoid implicit import of native VM definitions.
why:
the kludge is not needed anymore for oph_handlers.nim sub-sources and sources
that rely on oph_handlers.nim (but not state_transactions.nim, which
relies on computation.nim).
also:
re-integrated stack_defs.nim back into stack.nim
why:
the v2 prefix of the file names was used as a visual aid when
comparing vm2 sources against vm sources
details:
all renamed v2*.nim sources compile locally with the -d:kludge:1 flag
set or without it (some work with either).
the only source not renamed yet: v2state_transactions.nim
why:
on 32-bit Windows 7, there seems to be a 64k memory ceiling for the gcc
compiler, which was exceeded on some test platform.
details:
compiling VM2 for a low-memory C compiler can be triggered with
"make ENABLE_VM2LOWMEM". this comes with a ~24% longer execution time
of the test suite compared to the old VM and the optimised VM2.
why:
the new implementation lost more than 25% execution time on the test
suite when compared to the original VM. so the handler call and the
surrounding statements have been wrapped in a big case statement, similar
to the original VM implementation. on Linux/x64, the execution time of
the new VM2 seems to be on par with the old VM.
details:
on Linux/x64, computed goto works and is activated with the -d:release
flag. there, the execution time of the new VM2 measured just short of 0.02%
better than the old VM. without computed goto, it is just short of
0.4% slower than the old VM.
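for reference, the general shape nim expects for computed goto, shown as a
toy dispatcher (not the vm2 code):

  type Op = enum opStop, opAdd, opMul     # toy opcode set

  proc run(code: seq[Op]) =
    var pc = 0
    while true:
      {.computedGoto.}                    # becomes a computed goto where the
      let op = code[pc]                   # C compiler supports it
      case op
      of opStop:
        break
      of opAdd:
        discard                           # the Add handler would run here
      of opMul:
        discard                           # the Mul handler would run here
      inc pc

  run(@[opAdd, opMul, opStop])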
why:
using function stubs made it possible to check the syntax of an op
handler source file by compiling that very file. this was previously
impossible due to the cyclic import/include mechanism.
details:
only oph_call.nim, oph_create.nim and, subsequently, op_handlers.nim
still need the -d:kludge:1 flag for syntax-check compiling. this flag
also works with interpreter_dispatch.nim, which imports op_handlers.nim.
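the pattern, roughly (the names below are invented for illustration, not the
real symbols):

  # with -d:kludge:1 this file declares stand-in stubs for the symbols it
  # would otherwise get through the cyclic import, so
  # `nim c -d:kludge:1 <this file>` can syntax-check it on its own
  when defined(kludge):
    type Computation = ref object                 # stand-in type
    proc execSubCall(c: Computation) = discard    # stub, does nothing
  else:
    discard  # a normal build takes the real (cyclic) import path instead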
why:
step towards breaking the circular dependency
details:
some functions from v2computation.nim have been extracted into
compu_helper.nim, which does not explicitly back-import
v2computation.nim. all non-recursive op handlers now import this source
file rather than v2computation.nim.
the recursive call/create op handlers still need to import v2computation.nim.
the executeOpcodes() function from interpreter_dispatch.nim has been
moved to v2computation.nim, which allows the interpreter_dispatch.nim
source to be <import>ed rather than <include>d.
why:
this allows information to be passed back which can eventually be
used for reducing the use of exceptions
caveat:
call/create currently needs to un-capture the call-by-reference
(wrapper) argument using the Computation reference inside
why:
the previous approach was replacing the function-lets in
opcode_impl.nim by the particular table handlers. the test
functions will verify that the handler functions are sort of
correct, but not the assignments in the fork tables.
the handler names of the old and new fork tables are checked here.
caveat:
verifying tables currently takes a while at compile time.