EVMC: Small stacks when using EVMC, closes #575 (segfaults)

This patch reduces the stack space used by the EVM in ENABLE_EVMC=1 mode, from
a 13 MB worst case to 550 kB, a 24x reduction.

This completes fixing the "stack problem" and closes #575 (`EVM: Different
segmentation faults when running the test suite with EVMC`).

It also closes #256 (`recursive EVM call trigger unrecoverable stack overflow`).

After this patch, it is possible to re-enable the CI targets which had to be
disabled due to #575.

This change is also a required precursor for switching over to "nearly EVMC" as
the clean and focused Nimbus-internal API between the EVM and the sync/database
processes, and it is key to using Chronos `async` in those processes when they
call the EVM.

(The motivation is that the internal interface has to change substantially
_anyway_ for the parallel sync and database processes, and EVMC turns out to be
well-designed and well-suited for this.  It provides good separation between
modules and suits our needs better than our other current interface, so we
might as well use a good one designed by someone else.  EVMC support is 98%
done in Nimbus thanks to great work done before by @jangko, and we can use
Nimbus-specific extensions where we need flexibility, including for
performance.  Being aligned with the ecosystem is a useful bonus.)

All tests below were run on Ubuntu 20.04 LTS server, x86-64.  This matches one
of the targets that have been disabled for a while in CI in EVMC mode, due to
stack overflow crashing the tests, so it's a good choice.
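
The `Stack range N depthHigh M` lines in the logs below come from temporary
measurement code that is not part of this patch.  As a rough idea of the kind
of instrumentation involved (a hypothetical sketch, not the code actually
used; all names are illustrative):

    # Hypothetical sketch: record the address of a local at the outermost
    # call, then report how far each deeper frame's locals are from it.
    var stackBase: pointer

    proc reportStack(depthHigh: int) =
      var marker: byte
      if stackBase.isNil:
        stackBase = addr marker
      let used = abs(cast[int](stackBase) - cast[int](addr marker))
      echo "Stack range ", used, " depthHigh ", depthHigh

    proc recurse(depth, maxDepth: int) =
      reportStack(depth)
      if depth < maxDepth:
        recurse(depth + 1, maxDepth)

    recurse(1, 1024)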

Measurements before
===================

Testing commit `e76e0144 2021-04-22 11:29:42 +0700 add submodules: graphql and
toml-serialization`.

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 16384 # Requires larger stack than default to avoid crash.
    $ ./build/all_tests 9 | tee tlog
    [Suite] persist block json tests
    ...
    Stack range 38416 depthHigh 3
    ...
    Stack range 13074720 depthHigh 1024
    [OK] tests/fixtures/PersistBlockTests/block1431916.json

These tests use 13.07 MB of stack to run, and so crash with the default stack
limit on Ubuntu Server 20.04 (8 MB).  That is exactly 12768 bytes per EVM call
stack frame.
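
As a rough check, the per-frame figure follows from how `Stack range` grows
with `depthHigh` in the log above:

    (13074720 - 38416) / (1024 - 3) ≈ 12768 bytes per EVM call stack frame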

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 16384 # Requires larger stack than default.
    $ ./build/all_tests 7 | tee tlog
    [Suite] new generalstate json tests
        ...
    Stack range 14384 depthHigh 2
        ...
    Stack range 3495456 depthHigh 457
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest639.json
    ...
    Stack range 3709600 depthHigh 485
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest458.json
        ...
    Stack range 7831600 depthHigh 1024
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stCreate2/Create2OnDepth1024.json

These tests use 7.83 MB of stack to run, about 7648 bytes per EVM call stack
frame.  They _only just_ avoid crashing with the default Ubuntu Server stack
limit of 8 MB.  However, they still crash on Windows x86-64, which is why the
Windows CI EVMC target is currently disabled.

On Linux, where this passes, the margin is so small that it affects work on and
testing of the complex storage code, because that code is called from the EVM.

Also, this greatly exceeds the default thread stack size.

Measurements after
==================

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 600 # Because we can!  600k stack.
    $ ./build/all_tests 9 | tee tlog
    [Suite] persist block json tests
    ...
    Stack range 1936 depthHigh 3
    ...
        Stack range 556272 depthHigh 1022
        Stack range 556512 depthHigh 1023
        Stack range 556816 depthHigh 1023
        Stack range 557056 depthHigh 1024
        Stack range 557360 depthHigh 1024
        [OK] tests/fixtures/PersistBlockTests/block1431916.json

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 600 # Because we can!  600k stack.
    $ ./build/all_tests 7 | tee tlog
    [Suite] new generalstate json tests
        ...
    Stack range 1392 depthHigh 2
        ...
    Stack range 248912 depthHigh 457
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest639.json
    ...
    Stack range 264144 depthHigh 485
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest458.json
        ...
        Stack range 557360 depthHigh 1024
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stStaticCall/static_CallRecursiveBombPreCall.json

For both tests, a satisfying *544 bytes* per EVM call stack frame, and the EVM
takes less than 600 kB of stack in total.  Even with other overheads, both
tests run within a 600 kB stack at maximum EVM depth.
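
As a quick check, the per-frame figure can be read off consecutive depths in
the log above, for example:

    557056 - 556512 = 544 bytes from depthHigh 1023 to depthHigh 1024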

We must allow some headroom over this for database activity called from the
EVM, and for different compile targets.  But it means the EVM itself is no
longer a stack burden.

This is much smaller than the default thread stack size on Linux (2 MB), with
plenty of margin.  (Just FYI, it isn't smaller than a _small_ thread stack on
Linux from a long time ago (128 kB), or than some small embedded C targets.)

This size is well suited to running EVMs in threads.

Further reduction
=================

This patch solves the stack problem: the Windows and Linux 64-bit EVMC CI
targets can be re-enabled, and stack usage is no longer an issue.

We can reduce further to ~340 bytes per frame and 350 kB total, while still
complying with EVMC.  But as this involves changing how errors are handled to
comply fully with EVMC, and removing `dispose` calls, it's not worth doing now
while there are other EVMC changes in progress that will have the same effect.

A Nimbus-specific extension will let us avoid recursion with EVMC entirely,
bringing the per-frame cost to zero.  We need that extension anyway, to support
Chronos `async`, which parallel transaction processing is built around.

Interop with non-Nimbus over EVMC won't let us avoid recursion, but then we
can't control the stack frame size either.  To prevent stack overflow during
interop, I anticipate using
[this method in Aleth](6e96ce34e3/libethereum/ExtVM.cpp#L61).
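
For illustration only, here is a minimal Nim sketch of that kind of approach
(not Nimbus or Aleth code; all names are invented): when recursion gets deep,
hand the remaining depth to a fresh thread, so no single thread's stack has to
hold the whole call chain.  (Aleth also gives the new thread a generous stack
size, which this sketch leaves at the default.)

    # Hypothetical sketch: hand deep recursion off to a new thread every
    # `framesPerThread` levels, bounding the frames held on any one stack.
    # Compile with `--threads:on`.
    const framesPerThread = 256

    proc deepCall(depth: int) {.gcsafe.}

    proc threadEntry(depth: int) {.thread.} =
      deepCall(depth)

    proc continueOnFreshThread(depth: int) =
      var t: Thread[int]
      createThread(t, threadEntry, depth)
      joinThread(t)

    proc deepCall(depth: int) {.gcsafe.} =
      if depth == 0:
        return
      if depth mod framesPerThread == 0:
        continueOnFreshThread(depth - 1)  # hand off before this stack gets deep
      else:
        deepCall(depth - 1)

    deepCall(1024)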

Smoke test other versions of GCC and Clang/LLVM
===============================================

As all builds, including Windows, use GCC or Apple's Clang/LLVM, this is just
to verify we're in the right ballpark on all targets.  I've only checked
`x86_64` though, not 32-bit, and not ARM.

It's interesting to see that GCC 10 uses less stack.  This is because it
optimises `struct` returns better, sometimes skipping an intermediate copy.
Here that benefits the EVMC API, but I found GCC 10 also improves the larger
stack usage of the rest of `nimbus-eth1`.
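
As a rough illustration of the shape involved (a hedged sketch, not Nimbus code
or the EVMC types themselves): EVMC call results are sizeable structs returned
by value, so how the generated C places them matters per call frame.

    # Illustrative only: a large object returned by value.  In the generated
    # C, older GCC tends to build the result in a temporary on the caller's
    # frame and then copy it into place; GCC 10 can often construct it
    # directly in the destination, dropping that per-frame temporary.
    type BigResult = object
      data: array[256, byte]   # stand-in for a large by-value result type

    proc makeResult(): BigResult =
      result.data[0] = 1

    proc caller(): byte =
      let r = makeResult()     # where temporary-vs-direct placement differs
      r.data[0]

    echo caller()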

Apple clang 12.0.0 (clang-1200.0.26.2) on macOS 10.15:

- 544 bytes per EVM call stack frame

GCC 10.3.0 (Ubuntu 10.3.0-1ubuntu1) on Ubuntu 21.04:

- 464 bytes per EVM call stack frame

GCC 10.2.0 (Ubuntu 10.2.0-5ubuntu1~20.04) on Ubuntu 20.04 LTS:

- 464 bytes per EVM call stack frame

GCC 11.0.1 20210417 (experimental; Ubuntu 11-20210417-1ubuntu1) on Ubuntu 21.04:

- 8 bytes per EVM call stack frame

GCC 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04) on Ubuntu 20.04 LTS:

- 544 bytes per EVM call stack frame

GCC 8.4.0 (Ubuntu 8.4.0-3ubuntu2) on Ubuntu 20.04 LTS:

- 544 bytes per EVM call stack frame

GCC 7.5.0 (Ubuntu 7.5.0-6ubuntu2) on Ubuntu 20.04 LTS:

- 544 bytes per EVM call stack frame

GCC 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2) on Ubuntu 19.10:

- 528 bytes per EVM call stack frame

Signed-off-by: Jamie Lokier <jamie@shareable.org>

@@ -31,7 +31,15 @@ proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") =
proc test(name: string, lang = "c") =
buildBinary name, "tests/", "-d:chronicles_log_level=ERROR"
exec "build/" & name
when not defined(windows):
# Verify stack usage is kept low by setting 1024k stack limit in tests.
exec "ulimit -s 1024 && build/" & name
else:
# Don't enforce stack limit in Windows, as we can't control it with these tools.
# See https://public-inbox.org/git/alpine.DEB.2.21.1.1709131448390.4132@virtualbox/
# When set by `ulimit -s` in Bash, it's ignored. Also, the command passed to
# NimScript `exec` on Windows is not a shell script.
exec "build/" & name
task test, "Run tests":
test "all_tests"

@@ -291,46 +291,61 @@ proc afterExecCreate(c: Computation) =
else:
c.rollback()
proc beforeExec(c: Computation): bool =
proc beforeExec(c: Computation): bool {.noinline.} =
if not c.msg.isCreate:
c.beforeExecCall()
false
else:
c.beforeExecCreate()
proc afterExec(c: Computation) =
proc afterExec(c: Computation) {.noinline.} =
if not c.msg.isCreate:
c.afterExecCall()
else:
c.afterExecCreate()
template chainTo*(c, toChild: Computation, after: untyped) =
template chainTo*(c: Computation, toChild: typeof(c.child), after: untyped) =
c.child = toChild
c.continuation = proc() =
after
proc execCallOrCreate*(cParam: Computation) =
var (c, before) = (cParam, true)
defer:
while not c.isNil:
c.dispose()
c = c.parent
# No actual recursion, but simulate recursion including before/after/dispose.
while true:
while true:
if before and c.beforeExec():
break
when vm_use_recursion:
# Recursion with tiny stack frame per level.
proc execCallOrCreate*(c: Computation) =
defer: c.dispose()
if c.beforeExec():
return
c.executeOpcodes()
while not c.continuation.isNil:
when evmc_enabled:
c.res = c.host.call(c.child[])
else:
execCallOrCreate(c.child)
c.child = nil
c.executeOpcodes()
if c.continuation.isNil:
c.afterExec()
c.afterExec()
else:
# No actual recursion, but simulate recursion including before/after/dispose.
proc execCallOrCreate*(cParam: Computation) =
var (c, before) = (cParam, true)
defer:
while not c.isNil:
c.dispose()
c = c.parent
while true:
while true:
if before and c.beforeExec():
break
c.executeOpcodes()
if c.continuation.isNil:
c.afterExec()
break
(before, c.child, c, c.parent) = (true, nil.Computation, c.child, c)
if c.parent.isNil:
break
(before, c.child, c, c.parent) = (true, nil.Computation, c.child, c)
if c.parent.isNil:
break
c.dispose()
(before, c.parent, c) = (false, nil.Computation, c.parent)
(c.continuation)()
c.dispose()
(before, c.parent, c) = (false, nil.Computation, c.parent)
proc merge*(c, child: Computation) =
c.logEntries.add child.logEntries

@@ -123,7 +123,7 @@ proc hostEmitLogImpl(ctx: Computation, address: EthAddress,
log.address = address
ctx.addLogEntry(log)
template createImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
proc enterCreateImpl(c: Computation, m: nimbus_message): Computation =
# TODO: use evmc_message to avoid copy
let childMsg = Message(
kind: CallKind(m.kind),
@@ -133,10 +133,9 @@ template createImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
value: Uint256.fromEvmc(m.value),
data: @(makeOpenArray(m.inputData, m.inputSize.int))
)
return newComputation(c.vmState, childMsg, Uint256.fromEvmc(m.create2_salt))
let child = newComputation(c.vmState, childMsg, Uint256.fromEvmc(m.create2_salt))
child.execCallOrCreate()
template leaveCreateImpl(c, child: Computation, res: nimbus_result) =
if not child.shouldBurnGas:
res.gas_left = child.gasMeter.gasRemaining
@@ -153,7 +152,7 @@ template createImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
copyMem(res.output_data, child.output[0].addr, child.output.len)
res.release = hostReleaseResultImpl
template callImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
template enterCallImpl(c: Computation, m: nimbus_message): Computation =
let childMsg = Message(
kind: CallKind(m.kind),
depth: m.depth,
@@ -165,10 +164,9 @@ template callImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
data: @(makeOpenArray(m.inputData, m.inputSize.int)),
flags: MsgFlags(m.flags)
)
newComputation(c.vmState, childMsg)
let child = newComputation(c.vmState, childMsg)
child.execCallOrCreate()
template leaveCallImpl(c, child: Computation, res: nimbus_result) =
if not child.shouldBurnGas:
res.gas_left = child.gasMeter.gasRemaining
@@ -185,11 +183,22 @@ template callImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
copyMem(res.output_data, child.output[0].addr, child.output.len)
res.release = hostReleaseResultImpl
proc hostCallImpl(ctx: Computation, msg: var nimbus_message): nimbus_result {.cdecl.} =
proc enterHostCall(c: Computation, msg: var nimbus_message): Computation {.noinline.} =
if msg.kind == EVMC_CREATE or msg.kind == EVMC_CREATE2:
createImpl(ctx, msg, result)
enterCreateImpl(c, msg)
else:
callImpl(ctx, msg, result)
enterCallImpl(c, msg)
proc leaveHostCall(c, child: Computation, kind: evmc_call_kind): nimbus_result {.noinline.} =
if kind == EVMC_CREATE or kind == EVMC_CREATE2:
leaveCreateImpl(c, child, result)
else:
leaveCallImpl(c, child, result)
proc hostCallImpl(ctx: Computation, msg: var nimbus_message): nimbus_result {.cdecl.} =
let child = enterHostCall(ctx, msg)
child.execCallOrCreate()
leaveHostCall(ctx, child, msg.kind)
proc initHostInterface(): evmc_host_interface =
result.account_exists = cast[evmc_account_exists_fn](hostAccountExistsImpl)

@@ -646,7 +646,8 @@ template genCreate(callName: untyped, opCode: Op): untyped =
c.gasMeter.consumeGas(createMsgGas, reason="CREATE")
when evmc_enabled:
let msg = nimbus_message(
let msg = new(nimbus_message)
msg[] = nimbus_message(
kind: callKind.evmc_call_kind,
depth: (c.msg.depth + 1).int32,
gas: createMsgGas,
@@ -657,16 +658,15 @@ template genCreate(callName: untyped, opCode: Op): untyped =
create2_salt: toEvmc(salt)
)
var res = c.host.call(msg)
c.returnData = @(makeOpenArray(res.outputData, res.outputSize.int))
c.gasMeter.returnGas(res.gas_left)
c.chainTo(msg):
c.returnData = @(makeOpenArray(c.res.outputData, c.res.outputSize.int))
c.gasMeter.returnGas(c.res.gas_left)
if res.status_code == EVMC_SUCCESS:
c.stack.top(res.create_address)
if c.res.status_code == EVMC_SUCCESS:
c.stack.top(c.res.create_address)
# TODO: a good candidate for destructor
if not res.release.isNil:
res.release(res)
if not c.res.release.isNil:
c.res.release(c.res)
else:
let childMsg = Message(
kind: callKind,
@@ -829,7 +829,8 @@ template genCall(callName: untyped, opCode: Op): untyped =
return
when evmc_enabled:
let msg = nimbus_message(
let msg = new(nimbus_message)
msg[] = nimbus_message(
kind: callKind.evmc_call_kind,
depth: (c.msg.depth + 1).int32,
gas: childGasLimit,
@@ -841,22 +842,21 @@ template genCall(callName: untyped, opCode: Op): untyped =
flags: flags.uint32
)
var res = c.host.call(msg)
c.returnData = @(makeOpenArray(res.outputData, res.outputSize.int))
c.chainTo(msg):
c.returnData = @(makeOpenArray(c.res.outputData, c.res.outputSize.int))
let actualOutputSize = min(memOutLen, c.returnData.len)
if actualOutputSize > 0:
c.memory.write(memOutPos,
c.returnData.toOpenArray(0, actualOutputSize - 1))
let actualOutputSize = min(memOutLen, c.returnData.len)
if actualOutputSize > 0:
c.memory.write(memOutPos,
c.returnData.toOpenArray(0, actualOutputSize - 1))
c.gasMeter.returnGas(res.gas_left)
c.gasMeter.returnGas(c.res.gas_left)
if res.status_code == EVMC_SUCCESS:
c.stack.top(1)
if c.res.status_code == EVMC_SUCCESS:
c.stack.top(1)
# TODO: a good candidate for destructor
if not res.release.isNil:
res.release(res)
if not c.res.release.isNil:
c.res.release(c.res)
else:
let msg = Message(
kind: callKind,

@@ -387,6 +387,7 @@ proc executeOpcodes(c: Computation) =
block:
if not c.continuation.isNil:
(c.continuation)()
c.continuation = nil
elif c.execPrecompiles(fork):
break

@@ -20,6 +20,11 @@ when defined(evmc_enabled):
import
./evmc_api
# Select between small-stack recursion and no recursion. Both are good, fast,
# low resource using methods. Keep both here because true EVMC API requires
# the small-stack method, but Chronos `async` is better without recursion.
const vm_use_recursion* = defined(evmc_enabled)
type
VMFlag* = enum
ExecutionOK
@@ -85,7 +90,11 @@ type
savePoint*: SavePoint
instr*: Op
opIndex*: int
parent*, child*: Computation
when defined(evmc_enabled):
child*: ref nimbus_message
res*: nimbus_result
else:
parent*, child*: Computation
continuation*: proc() {.gcsafe.}
Error* = ref object