2226 Commits

Author SHA1 Message Date
Daniel Lubarov
e3f7cc21c1
Merge pull request #624 from mir-protocol/mload_kernel_code_u32
Add a `mload_kernel_code_u32` macro
2022-07-19 12:18:15 -07:00
Daniel Lubarov
3dc79274a8 Add a mload_kernel_code_u32 macro
Intended for loading constants in SHA2, and maybe RIPEMD.

Sample usage
```
// Loads the i'th K256 constant.
%macro k256
  // stack: i
  %mul_const(4)
  // stack: 4*i
  PUSH k256_data
  // stack: k256_data, 4*i
  ADD
  // stack: k256_data + 4*i
  %mload_kernel_code_u32
  // stack: K256[i]
%endmacro

k256_data:
    BYTES 0x42, 0x8a, 0x2f, 0x98
    BYTES 0x71, 0x37, 0x44, 0x91
    ...
```

Untested for now since our interpreter doesn't have the needed memory support quite yet.
2022-07-19 10:36:18 -07:00
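A minimal Rust sketch of the load this macro performs, assuming kernel code bytes are laid out big-endian as in the `BYTES` lines above; the function name is illustrative, not the interpreter's actual API.
```
// Illustrative only: assemble a big-endian u32 from four kernel-code bytes,
// as %mload_kernel_code_u32 would at offset k256_data + 4*i.
fn mload_kernel_code_u32(code: &[u8], offset: usize) -> u32 {
    u32::from_be_bytes([
        code[offset],
        code[offset + 1],
        code[offset + 2],
        code[offset + 3],
    ])
}

fn main() {
    // The first two K256 constants from the sample BYTES above.
    let code = [0x42, 0x8a, 0x2f, 0x98, 0x71, 0x37, 0x44, 0x91];
    assert_eq!(mload_kernel_code_u32(&code, 0), 0x428a2f98);
    assert_eq!(mload_kernel_code_u32(&code, 4), 0x71374491);
}
```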
Daniel Lubarov
71db231c59
Merge pull request #622 from mir-protocol/memcpy
Implement memcpy
2022-07-19 07:21:15 -07:00
Daniel Lubarov
5b1f564039 Feedback 2022-07-19 07:20:57 -07:00
wborgeaud
1a5134e4b9
Merge pull request #620 from mir-protocol/sha3_interpreter_ecrecover
Implement SHA3 in interpreter and use it in ecrecover
2022-07-19 16:04:25 +02:00
wborgeaud
a8ce2a6073 Import fix 2022-07-19 15:27:51 +02:00
wborgeaud
54629a0ef9 Merge branch 'main' into sha3_interpreter_ecrecover
# Conflicts:
#	evm/src/cpu/kernel/interpreter.rs
#	evm/src/cpu/kernel/tests/ecrecover.rs
2022-07-19 15:24:28 +02:00
wborgeaud
e7dbba8d7b s/sha3/keccak256 2022-07-19 15:21:44 +02:00
Daniel Lubarov
539364c87a clippy 2022-07-18 21:53:31 -07:00
Daniel Lubarov
80d32f89b6 fixes 2022-07-18 15:58:12 -07:00
Daniel Lubarov
6610ec4487 Implement memcpy
This can be used, for example, to copy `CALL` data (which is a slice of the caller's main memory) to the callee's `CALLDATA` segment.
2022-07-18 14:55:15 -07:00
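A rough Rust sketch of the copy described in this commit, assuming memory cells are addressed by (context, segment, offset) triples as in the `MLOAD_GENERAL`/`MSTORE_GENERAL` notes further down this log; the `Memory` type and segment numbers are illustrative, not the crate's actual API.
```
use std::collections::HashMap;

// Illustrative stand-in for the interpreter's memory: one byte per
// (context, segment, offset) cell, zero if never written.
type Addr = (usize, usize, usize);
type Memory = HashMap<Addr, u8>;

// Copy `len` bytes from the caller's main memory into the callee's CALLDATA
// segment, one cell at a time.
fn memcpy(mem: &mut Memory, src: Addr, dst: Addr, len: usize) {
    for i in 0..len {
        let byte = *mem.get(&(src.0, src.1, src.2 + i)).unwrap_or(&0);
        mem.insert((dst.0, dst.1, dst.2 + i), byte);
    }
}

fn main() {
    let mut mem = Memory::new();
    // Caller (context 0) has 4 bytes of CALL data in its main memory (segment 0 here).
    for (i, b) in [0xde, 0xad, 0xbe, 0xefu8].iter().enumerate() {
        mem.insert((0, 0, 32 + i), *b);
    }
    // Copy them into the callee's (context 1) CALLDATA segment (segment 1 here).
    memcpy(&mut mem, (0, 0, 32), (1, 1, 0), 4);
    assert_eq!(mem[&(1, 1, 0)], 0xde);
}
```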
Daniel Lubarov
50144a638f Enable assertions, now working 2022-07-18 13:48:51 -07:00
Daniel Lubarov
cbdf2a66a1
Merge pull request #619 from mir-protocol/add_priviledged_opcodes
Add custom opcodes
2022-07-18 10:55:56 -07:00
Daniel Lubarov
799d333a90 fix 2022-07-18 10:40:02 -07:00
Daniel Lubarov
71b9705a0d
Merge pull request #618 from mir-protocol/asm_assertions
More basic ASM macros
2022-07-18 09:31:34 -07:00
Daniel Lubarov
b29de2c46a tweak 2022-07-18 09:29:21 -07:00
Daniel Lubarov
0b7e3eca67 PANIC returns error 2022-07-18 08:58:11 -07:00
Daniel Lubarov
d53804c66f Merge branch 'main' into add_priviledged_opcodes 2022-07-18 08:47:15 -07:00
wborgeaud
ea0d081fa8 Fix comment 2022-07-18 16:53:26 +02:00
wborgeaud
f9ec4e8e7d Modify ecrecover tests 2022-07-18 16:41:17 +02:00
wborgeaud
15ee891778 SHA3 in asm 2022-07-18 16:36:37 +02:00
wborgeaud
14a58439e5 SHA3 in interpreter 2022-07-18 16:24:47 +02:00
wborgeaud
ae7103d560
Merge pull request #611 from mir-protocol/ecrecover_kernel
`ecrecover` kernel function
2022-07-18 14:23:06 +02:00
wborgeaud
a22dbd18ed Merge conflicts 2022-07-18 14:04:40 +02:00
wborgeaud
a268677936 Merge branch 'main' into ecrecover_kernel
# Conflicts:
#	evm/src/cpu/kernel/aggregator.rs
2022-07-18 14:01:10 +02:00
wborgeaud
ba9aa14f51 PR feedback 2022-07-18 14:00:20 +02:00
wborgeaud
fd991a4eef
Merge pull request #614 from mir-protocol/evm_interpreter_memory
Implement memory for the EVM interpreter
2022-07-18 13:52:39 +02:00
Daniel Lubarov
4aaceabd18 Include assertions, disabled for now 2022-07-17 16:08:58 -07:00
Daniel Lubarov
925483ed1e Add custom opcodes
- `GET_STATE_ROOT` and `SET_STATE_ROOT` deal with the root of the state trie, and will be called from storage routines. Similarly `GET_RECEIPT_ROOT` and `SET_RECEIPT_ROOT` deal with the root of the receipt trie.
- `PANIC` enables an unsatisfiable constraint, so no proof can be generated.
- `GET_CONTEXT` and `SET_CONTEXT` are used when calling and returning.
- `CONSUME_GAS` charges the sender gas; useful for cases where gas calculations are nontrivial and best implemented in assembly.
- `EXIT_KERNEL` simply clears the CPU flag indicating that we're in kernel mode; it would be used just before a jump to return to the (userspace) caller.
- `MLOAD_GENERAL` and `MSTORE_GENERAL` are for reading and writing memory, but they're not limited to the main memory segment of the current context; they can access any context and any segment. I added a couple of macros to show how they would typically be used.

There may be more later, but these are the ones I think we need for now. I tried to fill in smaller invalid sections of the decoder's tree, as Jacqui suggested, while keeping related opcodes together. We can fine-tune it when the opcode list is more stable.

These are all intended to be privileged, i.e. they will be treated as invalid if used from userspace, for compatibility as well as (in some cases) security reasons.
2022-07-17 15:43:49 -07:00
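A minimal Rust sketch of the privileged rule described in this commit: the custom opcodes are only accepted in kernel mode and read as invalid instructions from userspace. The enums and helper are placeholders, not the actual decoder.
```
// Illustrative only: gate the custom opcodes on the kernel-mode flag.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug)]
enum CustomOp {
    GetStateRoot,
    SetStateRoot,
    GetReceiptRoot,
    SetReceiptRoot,
    Panic,
    GetContext,
    SetContext,
    ConsumeGas,
    ExitKernel,
    MloadGeneral,
    MstoreGeneral,
}

#[derive(Clone, Copy, Debug)]
enum Op {
    Standard(u8),     // ordinary EVM opcode, identified by its byte
    Custom(CustomOp), // one of the kernel-only opcodes above
}

// Custom opcodes are treated as invalid unless we're in kernel mode.
fn is_allowed(op: Op, kernel_mode: bool) -> bool {
    match op {
        Op::Standard(_) => true,
        Op::Custom(_) => kernel_mode,
    }
}

fn main() {
    assert!(is_allowed(Op::Custom(CustomOp::GetContext), true));
    assert!(!is_allowed(Op::Custom(CustomOp::MstoreGeneral), false));
    assert!(is_allowed(Op::Standard(0x01), false)); // e.g. ADD stays valid in userspace
}
```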
Daniel Lubarov
36f1692ee5 tweaks 2022-07-17 09:23:37 -07:00
Daniel Lubarov
563401b24d More basic ASM utility functions
To be used in upcoming RLP code.
2022-07-17 09:15:24 -07:00
Daniel Lubarov
a9fe08a4a7
Merge pull request #610 from mir-protocol/feedback_591
Address some feedback on #591
2022-07-17 08:23:58 -07:00
Daniel Lubarov
ef842b03c8 Address some feedback on #591 2022-07-17 08:23:40 -07:00
Daniel Lubarov
c18d4844e7
Merge pull request #616 from mir-protocol/memory_u256
Store memory values as `U256`s
2022-07-17 07:59:05 -07:00
Daniel Lubarov
997453237f Store memory values as U256s
Ultimately they're encoded as `[F; 8]`s in the table, but I don't anticipate that we'll have any use cases where we want to store more than 256 bits. Might as well store `U256`s until we actually build the table, since they're more compact.
2022-07-17 07:58:28 -07:00
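A small Rust sketch of the encoding mentioned above: splitting a 256-bit value into eight 32-bit limbs, standing in for the `[F; 8]` table representation. The little-endian limb order and 32-bit limb width are assumptions.
```
// Illustrative only: a U256 represented as four little-endian u64 words,
// split into the eight 32-bit limbs that would back a [F; 8] encoding.
fn to_limbs(value: [u64; 4]) -> [u32; 8] {
    let mut limbs = [0u32; 8];
    for (i, word) in value.iter().enumerate() {
        limbs[2 * i] = *word as u32;             // low 32 bits
        limbs[2 * i + 1] = (*word >> 32) as u32; // high 32 bits
    }
    limbs
}

fn main() {
    // Value 0x0000_0001_0000_0002 (fits in the lowest u64 word).
    let value = [0x0000_0001_0000_0002u64, 0, 0, 0];
    let limbs = to_limbs(value);
    assert_eq!(limbs[0], 2);
    assert_eq!(limbs[1], 1);
}
```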
Daniel Lubarov
934bf757dd
Merge pull request #617 from mir-protocol/segment_enum
Organize segments in an enum
2022-07-17 07:46:24 -07:00
Daniel Lubarov
ab5abc391d Organize segments in an enum
It's a bit more type-safe (can't mix up segment with context or virtual addr), and this way uniqueness of ordinals is enforced, partially addressing a concern raised in #591.

To avoid making `Segment` public (which I don't think would be appropriate), I had to make some other visibility changes, and had to move `generate_random_memory_ops` into the test module.
2022-07-16 10:16:12 -07:00
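An illustrative Rust sketch of the idea: a `Segment` enum whose discriminants act as ordinals, so uniqueness is enforced by the type system rather than by hand-picked constants. The variant names are examples, not the crate's actual list.
```
// Illustrative only: a few example segments; the real enum has more variants.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Segment {
    KernelCode,
    MainMemory,
    Calldata,
}

impl Segment {
    // Each variant has exactly one discriminant, so ordinals can't collide.
    fn ordinal(self) -> usize {
        self as usize
    }
}

fn main() {
    assert_ne!(Segment::MainMemory.ordinal(), Segment::Calldata.ordinal());
    assert_eq!(Segment::KernelCode.ordinal(), 0);
}
```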
Daniel Lubarov
83643aa584
Merge pull request #612 from mir-protocol/bootstrapping_continued
Continue work on bootstrapping
2022-07-15 13:03:16 -07:00
Daniel Lubarov
134c66b37d Missing TODO 2022-07-15 13:02:56 -07:00
wborgeaud
292bb4a024 Implement memory for the interpreter 2022-07-15 11:10:10 +02:00
wborgeaud
48f9b7fdf3 PR feedback 2022-07-15 09:56:52 +02:00
Daniel Lubarov
2e3ad0142e
Merge pull request #613 from mir-protocol/asm_rep
Add `%rep` syntax for repeating a block
2022-07-14 22:47:34 -07:00
Daniel Lubarov
6d69e14a89 Add %rep syntax for repeating a block
Same syntax as NASM.
2022-07-14 14:58:18 -07:00
Daniel Lubarov
0802d6c021 Continue work on bootstrapping
The kernel is hashed using a Keccak-based sponge for now. We could switch to Poseidon later if our kernel grows too large.

Note that we use simple zero-padding (pad0*) instead of the standard pad10* rule. It's simpler, and we don't care that the prover can add extra 0s at the end of the code. The program counter can never reach those bytes, and even if it could, they'd be 0 anyway given the EVM's zero-initialization rule.

In one CPU row, we can do a whole Keccak hash (via the CTL), absorbing 136 bytes. But we can't actually bootstrap that many bytes of kernel code in one row, because we're also limited by memory bandwidth. Currently we can write 4 bytes of the kernel to memory in one row.

So we treat the `keccak_input_limbs` columns as a buffer. We gradually fill up this buffer, 4 bytes (one `u32` word) at a time. Every `136 / 4 = 34` rows, the buffer will be full, so at that point we activate the Keccak CTL to absorb the buffer.
2022-07-14 11:59:01 -07:00
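The arithmetic above, as a small Rust sketch using the constants from the commit text (136-byte Keccak rate, 4 bytes of kernel code written per row, zero-padding per the pad0* rule); the helper name is illustrative.
```
const RATE_BYTES: usize = 136;  // bytes absorbed per Keccak permutation
const BYTES_PER_ROW: usize = 4; // kernel bytes written to memory per CPU row

// Returns (padded length, CPU rows, number of absorptions) for a kernel of
// `code_len` bytes, zero-padded (pad0*) to a multiple of the rate.
fn bootstrap_schedule(code_len: usize) -> (usize, usize, usize) {
    let padded_len = (code_len + RATE_BYTES - 1) / RATE_BYTES * RATE_BYTES;
    let rows = padded_len / BYTES_PER_ROW;
    let absorptions = padded_len / RATE_BYTES; // buffer fills every 136 / 4 = 34 rows
    (padded_len, rows, absorptions)
}

fn main() {
    // e.g. a 1000-byte kernel pads to 1088 bytes: 272 rows, 8 absorptions.
    assert_eq!(bootstrap_schedule(1000), (1088, 272, 8));
}
```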
wborgeaud
62c094615d Add _base suffix 2022-07-14 19:46:02 +02:00
wborgeaud
f4390410a3 Comments 2022-07-14 19:39:07 +02:00
wborgeaud
0ccd5adc7b Redundant x-coord in lifting 2022-07-14 19:23:08 +02:00
wborgeaud
7ee884b84d More tests 2022-07-14 15:26:07 +02:00
wborgeaud
33a5934255 Passing tests 2022-07-14 14:26:01 +02:00
wborgeaud
add2b42e16 Merge branch 'main' into ecrecover_kernel 2022-07-14 13:18:54 +02:00