Update eip-615.md

This commit is contained in:
Greg Colvin 2019-02-18 22:49:35 -07:00 committed by GitHub
parent 9ab3f596c2
commit 3c568e2e77
1 changed file with 175 additions and 126 deletions


---
title: Subroutines and Static Jumps for the EVM
status: Draft
type: Standards Track
category: Core
author: Greg Colvin <greg@colvin.org>, Paweł Bylica (@chfast), Christian Reitwiessner (@chriseth), Brooklyn Zelenka (@expede)
created: 2016-12-10
edited: 2019-02-18
---
## Simple Summary
This EIP introduces new EVM opcodes and conditions on EVM code to support subroutines and static jumps, deprecates dynamic jumps, and thereby makes EVM code amenable to linear-time static analysis. This will provide for better compilation, interpretation, transpilation, and formal analysis of EVM code.
## Abstract
This EIP introduces new EVM opcodes and conditions on EVM code to support subroutines and static jumps, disallow dynamic jumps, and thereby make EVM code amenable to linear-time static analysis. This will provide for better compilation, interpretation, transpilation, and formal analysis of EVM code.
EVM code is currently difficult to statically analyze, hobbling a critical tool for preventing the many expensive bugs our blockchain has experienced. Further, none of the current implementations of the Ethereum Virtual Machine—including the compilers—are sufficiently performant to meet the network's long-term demands. This proposal identifies a major reason for these issues, and proposes changes to the EVM specification to address the problem, making further efforts towards a safer and more performant EVM possible.
In particular, it imposes restrictions on current EVM code and proposes new instructions to help provide for
* better-optimized compilation to native code
* faster interpretation
* faster transpilation to eWasm
* better compilation from other languages, including eWasm and Solidity
* better formal analysis tools
These goals are achieved by:
* Deprecating dynamic jumps
* Introducing subroutines—jumps with return support
* Disallowing pathological control flow and uses of the stack
We also propose to validate—in linear time—that EVM contracts correctly use subroutines, avoid misuse of the stack, and meet other safety conditions _before_ placing them on the blockchain. Validated code precludes most runtime exceptions and the need to test for them. And well-behaved control flow and use of the stack makes life easier for interpreters, compilers, formal analysis, and other tools.
## Motivation
Currently the EVM supports dynamic jumps, where the address to jump to is an argument on the stack. These dynamic jumps obscure the structure of the code and thus mostly inhibit control- and data-flow analysis. This puts the quality and speed of optimized compilation fundamentally at odds. Further, since every jump can potentially be to any jump destination in the code, the number of possible paths through the code goes up as the product of the number of jumps and the number of destinations, as does the time complexity of static analysis. Many of these cases are undecidable at validation time, further inhibiting static and formal analyses. But absent dynamic jumps, code can be statically analyzed in linear time.
Static analysis includes validation, and much of optimization, compilation, transpilation, and formal analysis; every part of the tool chain benefits when linear-time analysis is available. In particular, linear-time control-flow analysis means near-linear-time compilation of EVM code, and better data-flow analysis can help the compiler and the interpreter better track the size of the values on the stack and use native 64- and 32-bit operations when possible. Gas metering optimization also becomes much more tractable. Additionally, conditions which are statically checked at validation time don't have to be checked repeatedly at runtime.
Note that analyses of a contract's bytecode before execution—such as optimizations performed before interpretation, compilation, and on-the-fly machine code generation—must be efficient and linear-time. Otherwise, specially crafted contracts can be used as attack vectors against clients that use static analysis of EVM code before the creation or execution of contracts.
## Specification
### Proposal
We propose to deprecate two existing instructions—`JUMP` and `JUMPI`. They take their argument on the stack, which means that unless the value is constant they can jump to any `JUMPDEST`. (In simple cases like `PUSH 0 JUMP` the value on the stack can be known to be constant, but in general it's difficult.) We must nonetheless continue to support them in old code.
Having deprecated `JUMP` and `JUMPI`, we propose new instructions to support their legitimate uses.
#### Preliminaries
These forms
* `INSTRUCTION x`
* `INSTRUCTION x, y`
* `INSTRUCTION n, x ...`
name instructions with one, two, and two or more arguments, respectively. An instruction is represented in the bytecode as a single-byte opcode. Any arguments are laid out as immediate data bytes following the opcode inline, interpreted as fixed-length, MSB-first, two's-complement, two-byte positive integers. (Negative values are reserved for extensions.)
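For illustration, a `JUMPTO` of byte offset 4 might be laid out as below. This is only a sketch; it assumes the two-byte MSB-first immediates described above and the `0xb0` opcode suggested later in this proposal.
```
JUMPTO 4          ; jump to the JUMPDEST at byte offset 4 of the code
0xb0 0x00 0x04    ; one opcode byte, then the offset as a two-byte MSB-first immediate
```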
#### Primitives
The two most important uses of `JUMP` and `JUMPI` are static jumps and return jumps. Conditional and unconditional static jumps are the mainstay of control flow. Return jumps are implemented as a dynamic jump to a return address pushed on the stack. With the combination of a static jump and a dynamic return jump you can—and Solidity does—implement subroutines. The problem is that static analysis cannot tell the one place the return jump is going, so it must analyze every possibility (a heavy analysis).
Static jumps are provided by
* `JUMPTO jump_target`
* `JUMPIF jump_target`
which are the same as `JUMP` and `JUMPI` except that they jump to an immediate `jump_target` rather than an address on the stack.
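For example, a simple counting loop can be written with a static conditional jump. This is only a sketch; the label `loop` is a hypothetical assembler name standing for an immediate byte offset.
```
        PUSH 3          ; loop counter
loop:   JUMPDEST        ; JUMPIF may only target a JUMPDEST
        PUSH 1
        SWAP1
        SUB             ; counter = counter - 1
        DUP1
        JUMPIF loop     ; static conditional jump: repeat while the counter is non-zero
        STOP
```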
To support subroutines, `BEGINSUB`, `JUMPSUB`, and `RETURNSUB` are provided. Brief descriptions follow, and full semantics are given below.
* `BEGINSUB n_args, n_results`
marks the **single** entry to a subroutine. `n_args` items are taken off of the stack at entry to, and `n_results` items are placed on the stack at return from the subroutine. The subroutine ends at the next `BEGINSUB` instruction (or `BEGINDATA`, below) or at the end of the bytecode.
* `JUMPSUB jump_target`
jumps to an immediate subroutine address.
* `RETURNSUB`
returns from the current subroutine to the instruction following the JUMPSUB that entered it.
These five simple instructions form the primitives of the proposal.
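A minimal sketch of a call using these primitives follows; the label `mul` is a hypothetical assembler name for the offset of the `BEGINSUB`.
```
        PUSH 2
        PUSH 3
        JUMPSUB mul     ; call the subroutine with two arguments on the stack
        STOP            ; the single result, 6, is left on the stack

mul:    BEGINSUB 2, 1   ; takes 2 arguments, returns 1 result
        MUL             ; consume the two arguments, leave the product
        RETURNSUB       ; return to the instruction following the JUMPSUB
```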
#### Data
In order to validate subroutines in linear time, EVM bytecode must be sequentially scanned matching jumps to their destinations. Since creation code must contain the runtime code as data, that code might not correctly validate in the creation context and also does not have to be validated prior to the execution of the creation code. There needs to be a way to place data into the bytecode that will be skipped over and not validated. Such data may prove useful for other purposes as well.
* `BEGINDATA`
specifies that all of the following bytes to the end of the bytecode are data, and not reachable code.
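For example, creation code might copy and return runtime code stored after a `BEGINDATA` marker. This is only a sketch; `runtime_offset` and `runtime_size` are hypothetical assembler constants.
```
        PUSH runtime_size       ; length of the runtime code stored after BEGINDATA
        PUSH runtime_offset     ; offset of that code within this bytecode
        PUSH 0                  ; destination in memory
        CODECOPY                ; copy the runtime code into memory
        PUSH runtime_size
        PUSH 0
        RETURN                  ; return the runtime code to be deployed
        BEGINDATA               ; everything from here on is data and is not validated
        ; ... runtime bytecode bytes ...
```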
#### Indirect Jumps
The primitive operations provide for static jumps. Dynamic jumps are also used for `O(1)` indirection: an address to jump to is selected, pushed on the stack, and jumped to. So we also propose two more instructions to provide for constrained indirection. We support these with vectors of `JUMPDEST` or `BEGINSUB` offsets stored inline, which can be selected with an index on the stack. That constrains validation to a specified subset of all possible destinations. The danger of quadratic blow-up is avoided because it takes as much space to store the jump vectors as it does to code the worst-case exploit.
Dynamic jumps to a `JUMPDEST` are used to implement `O(1)` jumptables, which are useful for dense switch statements, and are implemented as instructions similar to this one on most CPUs.
* `JUMPV n, jumpdest ...`
jumps to one of a vector of `n` `JUMPDEST` offsets via a zero-based index on the stack. The vector is stored inline in the bytecode. If the index is greater than or equal to `n - 1` the last (default) offset is used.
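A dense switch might look like the following sketch, where `case0`, `case1`, and `default` are hypothetical assembler names for `JUMPDEST` offsets.
```
        PUSH 1                           ; selector index, already computed
        JUMPV 3, case0, case1, default   ; n = 3; an index >= n - 1 takes the last (default) offset
case0:  JUMPDEST
        PUSH 10
        STOP
case1:  JUMPDEST
        PUSH 11
        STOP
default: JUMPDEST
        STOP
```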
Dynamic jumps to a `BEGINSUB` are used to implement `O(1)` virtual functions and callbacks, which take just two pointer dereferences on most CPUs.
* `JUMPSUBV n, beginsub ...`
jumps to one of a vector of `n` `BEGINSUB` offsets via a zero-based index on the stack. The vector is stored inline in the bytecode, MSB-first. If the index is greater than or equal to `n - 1` the last (default) offset is used.
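A sketch of a two-entry virtual dispatch follows; `double` and `square` are hypothetical assembler names for `BEGINSUB` offsets.
```
        PUSH 5                        ; argument for the selected subroutine
        PUSH 0                        ; zero-based index into the vector
        JUMPSUBV 2, double, square    ; dispatch to one of two subroutine entry points
        STOP                          ; the result is left on the stack

double: BEGINSUB 1, 1
        PUSH 2
        MUL
        RETURNSUB

square: BEGINSUB 1, 1
        DUP1
        MUL
        RETURNSUB
```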
`JUMPV` and `JUMPSUBV` are not strictly necessary. They provide `O(1)` operations that can be replaced by `O(n)` or `O(log n)` EVM code using static jumps, but that code will be slower, larger, and use more gas for things that can and should be fast, small, and cheap, and that are directly supported in WASM with `br_table` and `call_indirect`.
#### Variable Access
These operations provide convenient access to subroutine parameters and other variables at fixed stack offsets within a subroutine.
* `PUTLOCAL n`
Pops the top value on the stack and copies it to local variable `n`[^putlocal_n].
[^putlocal_n]: Here `n` is the number of stack items below the frame pointer at which to put the value.
* `GETLOCAL n`
Pushes the value of local variable `n`[^getlocal_n] on the stack.
[^getlocal_n]: Here `n` is the number of stack items below the frame pointer to get the value from.
Local variable `n` is the nth stack item below the frame pointer—`FP[-n]` as defined below.
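As a minimal sketch, a two-argument subroutine can read its arguments as locals at fixed offsets from the frame pointer; `sum2` is a hypothetical assembler name for the `BEGINSUB` offset.
```
sum2:   BEGINSUB 2, 1
        GETLOCAL 1      ; push a copy of the argument in frame slot 1 (FP[-1])
        GETLOCAL 2      ; push a copy of the argument in frame slot 2 (FP[-2])
        ADD
        PUTLOCAL 1      ; store the sum in frame slot 1
        POP             ; drop the other argument, leaving one frame item: the result
        RETURNSUB       ; frame size is 1, matching n_results
```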
### Semantics
Jumps to and returns from subroutines are described here in terms of
* The EVM data stack (as defined in the [Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf)), usually just called “the stack”.
* A return stack of `JUMPSUB` and `JUMPSUBV` offsets.
* A frame stack of frame pointers.
We will adopt the following conventions to describe the machine state:
* The _program counter_ `PC` is (as usual) the byte offset of the currently executing instruction.
* The _stack pointer_ `SP` corresponds to the [Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf)'s substate `s` of the machine state.
* The _stack pointer_ addresses the current top of the stack of data values, where new items are pushed.
* The stack grows towards lower addresses.
* The _frame pointer_ `FP` is set to `SP + n_args` at entry to the currently executing subroutine.
* The _stack items_ between the frame pointer and the current stack pointer are called the _frame_.
* The current number of items in the frame, `FP - SP`, is the _frame size_.
Defining the frame pointer so as to include the arguments is unconventional, but better fits our stack semantics and simplifies the remainder of the proposal.
The frame pointer and return stacks are internal to the subroutine mechanism, and not directly accessible to the program. This is necessary to prevent the program from modifying its state in ways that could be invalid.
The first instruction of an array of EVM bytecode begins execution of a _main_ routine with no arguments, `SP` and `FP` set to 0, and with one value on the return stack—`code_size - 1`. (Executing the virtual byte of 0 after this offset causes an EVM to stop. Thus executing a `RETURNSUB` with no prior `JUMPSUB` or `JUMPSUBV`—that is, in the _main_ routine—executes a `STOP`.)
Execution of a subroutine begins with `JUMPSUB` or `JUMPSUBV`, which
* pushes `PC` on the return stack,
* pushes `FP` on the frame stack, thus suspending execution of the current subroutine,
* sets `FP` to `SP + n_args`, and
* sets `PC` to the specified `BEGINSUB` address, thus beginning execution of the new subroutine.
(The _main_ routine is not addressable by `JUMPSUB` instructions.)
Execution of a subroutine is suspended during and resumed after execution of nested subroutines, and ends upon encountering a `RETURNSUB`, which
* sets `FP` to the top of the virtual frame stack and pops the stack,
* sets `PC` to the top of the return stack and pops the stack, and
* advances `PC` to the next instruction
thus resuming execution of the enclosing subroutine or _main_ program.
A `STOP` or `RETURN` also ends the execution of a subroutine.
For example, after a `JUMPSUB` to a `BEGINSUB 2, 0` like this
```
PUSH 10
PUSH 11
JUMPSUB _beginsub_
PUSH 12
PUSH 13
```
the stack looks like this
```
10 <- FP
11
12
13
<- SP
```
### Validity
We would like to consider EVM code valid iff no execution of the program can lead to an exceptional halting state, but we must validate code in linear time. So our validation does not consider the code's data and computations, only its control flow and stack use. This means we will reject programs with invalid code paths, even if those paths are not reachable. Most conditions can be validated, and will not need to be checked at runtime; the exceptions are sufficient gas and sufficient stack. As such, static analysis may yield false negatives belonging to well-understood classes of code requiring runtime checks. Aside from those cases, we can validate large classes of code at validation time with linear complexity.
_Execution_ is as defined in the [Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf)—a sequence of changes in the EVM state. The conditions on valid code are preserved by state changes. At runtime, if execution of an instruction would violate a condition the execution is in an exceptional halting state. The Yellow Paper defines five such states.
>**1** Insufficient gas
>**2** More than 1024 stack items
>**3** Insufficient stack items
>**4** Invalid jump destination
>**5** Invalid instruction
We propose to expand and extend the Yellow Paper conditions to handle the new instructions we propose.
To handle the return stack we expand the conditions on stack size:
>**2a** The size of the data stack does not exceed 1024.
>**2b** The size of the return stack does not exceed 1024.
Given our more detailed description of the data stack we restate condition 3—stack underflow—as
>**3** `SP` must be less than or equal to `FP`
Since the various `DUP` and `SWAP` operations—as well as `PUTLOCAL` and `GETLOCAL`—are defined as taking items off the stack and putting them back on, this prevents them from accessing data below the frame pointer, since taking too many items off of the stack would mean that `SP` is less than `FP`.
To handle the new jump instructions and subroutine boundaries, we expand the conditions on jumps and jump destinations.
>**4a** `JUMPTO`, `JUMPIF`, and `JUMPV` address only `JUMPDEST` instructions.
>**4b** `JUMPSUB` and `JUMPSUBV` address only `BEGINSUB` instructions.
>**4c** `JUMP` instructions do not address instructions outside of the subroutine they occur in.
We have two new conditions on execution to ensure consistent use of the stack by subroutines:
>**6** For `JUMPSUB` and `JUMPSUBV` the frame size is at least the `n_args` of the `BEGINSUB`(s) to jump to.
>**7** For `RETURNSUB` the frame size is equal to the `n_results` of the enclosing `BEGINSUB`.
Finally, we have one condition that prevents pathological uses of the stack:
>**8** For every instruction in the code the frame size is constant.
In practice, we must test at runtime for conditions 1 and 2—sufficient gas and sufficient stack. We don't know how much gas there will be, we don't know how deep a recursion may go, and analysis of stack depth even for non-recursive programs is nontrivial.
All of the remaining conditions we validate statically.
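For example, condition 8 rejects sketches like the following, in which the `JUMPDEST` can be reached with two different frame sizes; `merge` is a hypothetical assembler name for its offset.
```
        PUSH 1
        JUMPIF merge    ; one path reaches merge with an empty frame
        PUSH 42         ; the fall-through path pushes an extra item first
merge:  JUMPDEST        ; frame size is 0 on one path and 1 on the other, so validation fails
        STOP
```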
### Validation
Validation comprises two tasks:
* Checking that jump destinations are correct and instructions valid.
* Checking that subroutines satisfy the conditions on control flow and stack use.
We sketch out these two validation functions in pseudo-C below. To simplify the presentation, only the five primitives are handled (`JUMPV` and `JUMPSUBV` would just add more complexity to loop over their vectors); we assume helper functions for extracting instruction arguments from immediate data and managing the stack pointer and program counter, and some optimizations are forgone.
#### Validating Jumps
Validating that jumps are to valid addresses takes two sequential passes over the bytecode—one to build sets of jump destinations and subroutine entry points, another to check that addresses jumped to are in the appropriate sets.
```
bytecode[code_size]   // contains EVM bytecode to validate
is_sub[code_size]     // is there a BEGINSUB at PC?
is_dest[code_size]    // is there a JUMPDEST at PC?
sub_for_pc[code_size] // which BEGINSUB is PC in?
bool validate_jumps(PC)
{
    // build sets of BEGINSUBs and JUMPDESTs
    current_sub = 0
    for (PC = 0; instruction = bytecode[PC]; PC = advance_pc(PC))
    {
        if instruction is invalid
            return false
        if instruction is BEGINDATA
            break;
        if instruction is BEGINSUB
            is_sub[PC] = true
            current_sub = PC
        if instruction is JUMPDEST
            is_dest[PC] = true
        sub_for_pc[PC] = current_sub
    }
    // check that targets are in subroutine
    current_sub = 0
    for (PC = 0; instruction = bytecode[PC]; PC = advance_pc(PC))
    {
        if instruction is BEGINDATA
        // ...
        if sub_for_pc[PC] is not current_sub
            return false
    }
    return true
}
```
Note that code like this is already run by EVMs to check dynamic jumps, including building the jump destination set every time a contract is run, and doing a lookup in the jump destination set before every jump.
#### Subroutine Validation
This function can be seen as a symbolic execution of a subroutine in the EVM code, where only the effect of the instructions on the state being validated is computed. Thus the structure of this function is very similar to an EVM interpreter. This function can also be seen as an acyclic traversal of the directed graph formed by taking instructions as vertices and sequential and branching connections as edges, checking conditions along the way. The traversal is accomplished via recursion, and cycles are broken by returning when a vertex which has already been visited is reached. The time complexity of this traversal is `O(|E|+|V|)`[^subroutine_complexity].
[^subroutine_complexity]: The sum of the number of edges and number of vertices in the graph.
The basic approach is to call `validate_subroutine(i, 0, 0)`, for `i` equal to the first instruction in the EVM code through each `BEGINDATA` offset. `validate_subroutine()` traverses instructions sequentially, recursing when `JUMP` and `JUMPI` instructions are encountered. When a destination is reached that has been visited before it returns, thus breaking cycles. It returns true if the subroutine is valid, false otherwise.
```
bytecode[code_size]   // contains EVM bytecode to validate
// ...
// we validate each subroutine individually, as if at top level
// * PC is the offset in the code to start validating at
// * return_pc is the top PC on return stack that RETURNSUB returns to
// * at top level FP = SP = 0 is both the frame size and the stack size
// * as items are pushed SP gets more negative, so the stack size is -SP
validate_subroutine(PC, return_pc, SP)
{
    // traverse code sequentially, recurse for jumps
    // ...
    {
        // check for constant frame size
        if instruction is JUMPDEST
            if -SP != frame_size[PC]
                return false
            // return to break cycle
            return true
    }
    frame_size[PC] = -SP
    // effect of instruction on stack
    n_removed = removed_items(instruction)
    n_added = added_items(instruction)
    // check for stack underflow
    if -SP < n_removed
        return false
    // net effect of removing and adding stack items
    SP += n_removed
    SP -= n_added
    // check for stack overflow
    if -SP > 1024
        return false
    if instruction is STOP, RETURN, or SUICIDE
        return true
    // violates single entry
    // ...
    {
        // check for enough arguments
        sub_pc = jump_target(PC)
        if -SP < n_args(sub_pc)
            return false
        return true
    }
    // ...
    if instruction is JUMPTO
    {
        PC = jump_target(PC)
        continue
    }
    // recurse to jump to code to validate
    if instruction is JUMPIF
    {
        if not validate_subroutine(jump_target(PC), return_pc, SP)
        // ...
    }
    // check for right number of results
    if -SP != n_results(return_pc)
        return false
    return true
}
```
#### Costs & Codes
All of the instructions are `O(1)` with a small constant, requiring just a few machine operations each, whereas a `JUMP` or `JUMPI` must do an `O(log n)` binary search of an array of `JUMPDEST` offsets before every jump. With the cost of `JUMPI` being _high_ and the cost of `JUMP` being _mid_, we suggest the cost of `JUMPV` and `JUMPSUBV` should be _mid_, `JUMPSUB` and `JUMPIF` should be _low_, and `JUMPTO` should be _verylow_. Measurement will tell.
We suggest the following opcodes:
```
0xb0 JUMPTO
0xb1 JUMPIF
0xb2 JUMPV
0xb3 JUMPSUB
0xb4 JUMPSUBV
0xb5 BEGINSUB
0xb6 BEGINDATA
0xb7 RETURNSUB
0xb8 PUTLOCAL
0xb9 GETLOCAL
```
## Backwards Compatibility
These changes would need to be implemented in phases at decent intervals:
>**1.** If this EIP is accepted, invalid code should be deprecated. Tools should stop generating invalid code, users should stop writing it, and clients should warn about loading it.
>**2.** A later hard fork would require clients to place only valid code on the blockchain. Note that despite the fork old EVM code will still need to be supported indefinitely.
If desired, the period of deprecation can be extended indefinitely by continuing to accept code not versioned as new—but without validation. That is, by delaying phase 2. Since we must continue to run old code this is not technically difficult.
## Rationale
This design was highly constrained by the existing EVM semantics, the requirement for eWasm compatibility, and the security demands of the Ethereum environment. It was also informed by the lead author's previous work implementing Java and Scheme interpreters. As such there was very little room for alternative designs.
As described above, the approach was simply to deprecate the problematic dynamic jumps, then ask what opcodes were necessary to provide for the features they supported. These needed to include those provided by eWasm, which themselves were modeled after typical hardware. The only real innovation was to move the frame pointer and the return pointer to their own stacks, so as to prevent any possibility of overwriting them. (Although Forth also uses a return stack.) This allowed for treating subroutine arguments as local variables, and facilitated the return of multiple values.
## Implementation
Implementation of this proposal need not be difficult. At the least, interpreters can simply be extended with the new opcodes and run unchanged otherwise. The new opcodes require only stacks for the frame pointers and return offsets and the few pushes, pops, and assignments described above. Compiled code can use native call instructions, greatly improving performance. Further optimizations include minimizing runtime checks for exceptions, condensing gas metering, and otherwise taking advantage of validated code wherever possible. A lightly tested reference implementation is available in [Greg Colvin's Aleth fork](https://github.com/gcolvin/aleth/tree/master/libaleth-interpreter).
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).