2212 Commits

Author SHA1 Message Date
Daniel Lubarov
539364c87a clippy 2022-07-18 21:53:31 -07:00
Daniel Lubarov
50144a638f Enable assertions, now working 2022-07-18 13:48:51 -07:00
Daniel Lubarov
cbdf2a66a1
Merge pull request #619 from mir-protocol/add_priviledged_opcodes
Add custom opcodes
2022-07-18 10:55:56 -07:00
Daniel Lubarov
799d333a90 fix 2022-07-18 10:40:02 -07:00
Daniel Lubarov
71b9705a0d
Merge pull request #618 from mir-protocol/asm_assertions
More basic ASM macros
2022-07-18 09:31:34 -07:00
Daniel Lubarov
b29de2c46a tweak 2022-07-18 09:29:21 -07:00
Daniel Lubarov
0b7e3eca67 PANIC returns error 2022-07-18 08:58:11 -07:00
Daniel Lubarov
d53804c66f Merge branch 'main' into add_priviledged_opcodes 2022-07-18 08:47:15 -07:00
wborgeaud
ae7103d560
Merge pull request #611 from mir-protocol/ecrecover_kernel
`ecrecover` kernel function
2022-07-18 14:23:06 +02:00
wborgeaud
a22dbd18ed Merge conflicts 2022-07-18 14:04:40 +02:00
wborgeaud
a268677936 Merge branch 'main' into ecrecover_kernel
# Conflicts:
#	evm/src/cpu/kernel/aggregator.rs
2022-07-18 14:01:10 +02:00
wborgeaud
ba9aa14f51 PR feedback 2022-07-18 14:00:20 +02:00
wborgeaud
fd991a4eef
Merge pull request #614 from mir-protocol/evm_interpreter_memory
Implement memory for the EVM interpreter
2022-07-18 13:52:39 +02:00
Daniel Lubarov
4aaceabd18 Include assertions, disabled for now 2022-07-17 16:08:58 -07:00
Daniel Lubarov
925483ed1e Add custom opcodes
- `GET_STATE_ROOT` and `SET_STATE_ROOT` deal with the root of the state trie, and will be called from storage routines. Similarly `GET_RECEIPT_ROOT` and `SET_RECEIPT_ROOT` deal with the root of the receipt trie.
- `PANIC` enables an unsatisfiable constraint, so no proof can be generated.
- `GET_CONTEXT` and `SET_CONTEXT`, used when calling and returning.
- `CONSUME_GAS` charges the sender gas; useful for cases where gas calculations are nontrivial and best implemented in assembly.
- `EXIT_KERNEL` simply clears the CPU flag indicating that we're in kernel mode; it would be used just before a jump to return to the (userspace) caller.
- `MLOAD_GENERAL` and `MSTORE_GENERAL` are for reading and writing memory, but they're not limited to the main memory segment of the current context; they can access any context and any segment. I added a couple of macros to show how they would typically be used.

There may be more later, but these are the ones I think we need for now. I tried to fill in smaller invalid sections of the decoder's tree, as Jacqui suggested, while keeping related opcodes together. We can fine-tune it when the opcode list is more stable.

These are all intended to be privileged, i.e. they will be treated as invalid if used from userspace, for compatibility as well as (in some cases) security reasons.
2022-07-17 15:43:49 -07:00
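
For illustration, here is a minimal Rust sketch of the kernel-mode gating idea described in the commit above; the enum variants mirror the listed opcodes, but the byte assignments and decoder details are assumptions, not the repo's actual code.

```rust
/// Hypothetical sketch of privileged-opcode gating; names follow the commit
/// message, but everything else (decoding, error handling) is assumed.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum PrivilegedOpcode {
    GetStateRoot,
    SetStateRoot,
    GetReceiptRoot,
    SetReceiptRoot,
    Panic,
    GetContext,
    SetContext,
    ConsumeGas,
    ExitKernel,
    MloadGeneral,
    MstoreGeneral,
}

/// Privileged opcodes are only valid in kernel mode; from userspace they are
/// treated like any other invalid opcode.
fn check_privileged(
    op: PrivilegedOpcode,
    kernel_mode: bool,
) -> Result<PrivilegedOpcode, &'static str> {
    if kernel_mode {
        Ok(op)
    } else {
        Err("invalid opcode: privileged instruction used outside kernel mode")
    }
}

fn main() {
    assert!(check_privileged(PrivilegedOpcode::MloadGeneral, true).is_ok());
    assert!(check_privileged(PrivilegedOpcode::Panic, false).is_err());
}
```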
Daniel Lubarov
36f1692ee5 tweaks 2022-07-17 09:23:37 -07:00
Daniel Lubarov
563401b24d More basic ASM utility functions
To be used in upcoming RLP code.
2022-07-17 09:15:24 -07:00
Daniel Lubarov
a9fe08a4a7
Merge pull request #610 from mir-protocol/feedback_591
Address some feedback on #591
2022-07-17 08:23:58 -07:00
Daniel Lubarov
ef842b03c8 Address some feedback on #591 2022-07-17 08:23:40 -07:00
Daniel Lubarov
c18d4844e7
Merge pull request #616 from mir-protocol/memory_u256
Store memory values as `U256`s
2022-07-17 07:59:05 -07:00
Daniel Lubarov
997453237f Store memory values as U256s
Ultimately they're encoded as `[F; 8]`s in the table, but I don't anticipate that we'll have any use cases where we want to store more than 256 bits. Might as well store `U256`s until we actually build the table, since they're more compact.
2022-07-17 07:58:28 -07:00
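
A rough sketch of the limb decomposition this implies, assuming the `ethereum-types` crate's `U256`; the little-endian limb order and the eventual field encoding are assumptions, not the actual table layout.

```rust
use ethereum_types::U256; // assumed dependency; the repo may use a different U256 type

/// Split a 256-bit memory value into eight 32-bit limbs (little-endian),
/// mirroring the idea that table columns ultimately hold `[F; 8]`.
fn to_u32_limbs(value: U256) -> [u32; 8] {
    let mut limbs = [0u32; 8];
    for (i, limb) in limbs.iter_mut().enumerate() {
        *limb = (value >> (32 * i)).low_u64() as u32;
    }
    limbs
}

fn main() {
    let v = U256::from(0x1_0000_0001u64); // 2^32 + 1
    let limbs = to_u32_limbs(v);
    assert_eq!(limbs[0], 1);
    assert_eq!(limbs[1], 1);
}
```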
Daniel Lubarov
934bf757dd
Merge pull request #617 from mir-protocol/segment_enum
Organize segments in an enum
2022-07-17 07:46:24 -07:00
Daniel Lubarov
ab5abc391d Organize segments in an enum
It's a bit more type-safe (can't mix up segment with context or virtual addr), and this way uniqueness of ordinals is enforced, partially addressing a concern raised in #591.

To avoid making `Segment` public (which I don't think would be appropriate), I had to make some other visibility changes, and had to move `generate_random_memory_ops` into the test module.
2022-07-16 10:16:12 -07:00
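
A minimal sketch of the idea, with placeholder variants; the actual `Segment` enum and memory-address handling in the repo differ.

```rust
/// Illustrative segment enum with enforced-unique ordinals; variant names and
/// discriminants are placeholders, not the repo's actual segment list.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(usize)]
enum Segment {
    Code = 0,
    MainMemory = 1,
    Stack = 2,
    KernelGeneral = 3,
}

/// A memory address carries context, segment, and virtual offset as distinct
/// components, so they can't be mixed up.
#[allow(dead_code)]
struct MemoryAddress {
    context: usize,
    segment: Segment,
    virt: usize,
}

fn main() {
    let addr = MemoryAddress { context: 0, segment: Segment::MainMemory, virt: 42 };
    // The ordinal used in the trace can be derived from the enum discriminant.
    assert_eq!(addr.segment as usize, 1);
}
```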
Daniel Lubarov
83643aa584
Merge pull request #612 from mir-protocol/bootstrapping_continued
Continue work on bootstrapping
2022-07-15 13:03:16 -07:00
Daniel Lubarov
134c66b37d Missing TODO 2022-07-15 13:02:56 -07:00
wborgeaud
292bb4a024 Implement memory for the interpreter 2022-07-15 11:10:10 +02:00
wborgeaud
48f9b7fdf3 PR feedback 2022-07-15 09:56:52 +02:00
Daniel Lubarov
2e3ad0142e
Merge pull request #613 from mir-protocol/asm_rep
Add `%rep` syntax for repeating a block
2022-07-14 22:47:34 -07:00
Daniel Lubarov
6d69e14a89 Add %rep syntax for repeating a block
Same syntax as NASM.
2022-07-14 14:58:18 -07:00
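
The commit only names the syntax, so here is a toy Rust illustration of what block repetition amounts to; the real assembler expands parsed items, not raw strings.

```rust
/// Toy illustration of %rep-style expansion: repeat a block of assembly lines
/// N times. String-level only; not the actual macro expander.
fn expand_rep(count: usize, block: &[&str]) -> Vec<String> {
    let mut out = Vec::with_capacity(count * block.len());
    for _ in 0..count {
        out.extend(block.iter().map(|line| line.to_string()));
    }
    out
}

fn main() {
    // Roughly corresponds to:
    //   %rep 3
    //       PUSH 0
    //       POP
    //   %endrep
    let expanded = expand_rep(3, &["PUSH 0", "POP"]);
    assert_eq!(expanded.len(), 6);
}
```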
Daniel Lubarov
0802d6c021 Continue work on bootstrapping
The kernel is hashed using a Keccak-based sponge for now. We could switch to Poseidon later if our kernel grows too large.

Note that we use simple zero-padding (pad0*) instead of the standard pad10* rule. It's simpler, and we don't care that the prover can add extra 0s at the end of the code. The program counter can never reach those bytes, and even if it could, they'd be 0 anyway given the EVM's zero-initialization rule.

In one CPU row, we can do a whole Keccak hash (via the CTL), absorbing 136 bytes. But we can't actually bootstrap that many bytes of kernel code in one row, because we're also limited by memory bandwidth. Currently we can write 4 bytes of the kernel to memory in one row.

So we treat the `keccak_input_limbs` columns as a buffer. We gradually fill up this buffer, 4 bytes (one `u32` word) at a time. Every `136 / 4 = 34` rows, the buffer will be full, so at that point we activate the Keccak CTL to absorb the buffer.
2022-07-14 11:59:01 -07:00
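
A small sketch of the schedule described above, using the constants from the commit message (136-byte Keccak rate, 4 bytes of code written per row, an absorption every 34 rows); the padding arithmetic and loop structure are assumptions.

```rust
/// Sketch of the bootstrapping schedule: 4 bytes of kernel code are written
/// per CPU row, and a Keccak absorption fires every 136 / 4 = 34 rows.
const KECCAK_RATE_BYTES: usize = 136;
const BYTES_PER_ROW: usize = 4;
const ROWS_PER_ABSORPTION: usize = KECCAK_RATE_BYTES / BYTES_PER_ROW; // 34

/// Returns (bootstrap rows, Keccak absorptions) for a kernel of the given size,
/// zero-padding (pad0*) the code up to a multiple of the rate.
fn bootstrap_schedule(code_len_bytes: usize) -> (usize, usize) {
    let padded = (code_len_bytes + KECCAK_RATE_BYTES - 1) / KECCAK_RATE_BYTES * KECCAK_RATE_BYTES;
    let rows = padded / BYTES_PER_ROW;
    (rows, rows / ROWS_PER_ABSORPTION)
}

fn main() {
    // e.g. the ~27 KB kernel reported by the size logs further down this history.
    let (rows, absorptions) = bootstrap_schedule(27_819);
    println!("{rows} bootstrap rows, {absorptions} Keccak absorptions");
}
```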
wborgeaud
62c094615d Add _base suffix 2022-07-14 19:46:02 +02:00
wborgeaud
f4390410a3 Comments 2022-07-14 19:39:07 +02:00
wborgeaud
0ccd5adc7b Redundant x-coord in lifting 2022-07-14 19:23:08 +02:00
wborgeaud
7ee884b84d More tests 2022-07-14 15:26:07 +02:00
wborgeaud
33a5934255 Passing tests 2022-07-14 14:26:01 +02:00
wborgeaud
add2b42e16 Merge branch 'main' into ecrecover_kernel 2022-07-14 13:18:54 +02:00
wborgeaud
0d62895098
Merge pull request #606 from mir-protocol/jumpdest_push_data
Fix interpreter JUMPDEST check + change stopping behavior
2022-07-14 13:18:15 +02:00
wborgeaud
cb7215436b Merge branch 'main' into ecrecover_kernel
# Conflicts:
#	evm/src/cpu/kernel/aggregator.rs
2022-07-14 13:17:16 +02:00
wborgeaud
ad9e131026 Add test 2022-07-14 13:16:25 +02:00
wborgeaud
905b0243e7 Minor fixes 2022-07-14 13:07:58 +02:00
wborgeaud
522213c933 Ecrecover until hashing 2022-07-14 11:30:47 +02:00
Daniel Lubarov
8751aaec7a
Merge pull request #609 from mir-protocol/row_wise_memory_gen
Generate most of the memory table while it's in row-wise form
2022-07-13 17:09:56 -07:00
Daniel Lubarov
33622c1ec1
Merge pull request #608 from mir-protocol/kernel_size_logs
Have `make_kernel` log the size of each (assembled) file
2022-07-13 13:13:39 -07:00
Daniel Lubarov
bfd924870f Generate most of the memory table while it's in row-wise form
This should improve cache locality: since we generally access several values at a time in a given row, we want them to be close together in memory.

There are a few steps that make more sense column-wise, though, such as generating the `COUNTER` column. I put those after the transpose.
2022-07-13 13:08:41 -07:00
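
A simplified sketch of the row-then-transpose approach; the real code builds rows of field elements and specific memory columns, which this toy version elides.

```rust
/// Toy sketch: build the trace row-wise for cache locality, then transpose to
/// column-wise form for steps that are naturally per-column (e.g. a COUNTER
/// column). Types are simplified placeholders.
fn generate_rows(num_rows: usize, num_cols: usize) -> Vec<Vec<u64>> {
    (0..num_rows)
        .map(|r| (0..num_cols).map(|c| (r * num_cols + c) as u64).collect())
        .collect()
}

fn transpose(rows: &[Vec<u64>]) -> Vec<Vec<u64>> {
    let num_cols = rows.first().map_or(0, Vec::len);
    (0..num_cols)
        .map(|c| rows.iter().map(|row| row[c]).collect())
        .collect()
}

fn main() {
    let rows = generate_rows(4, 3);
    let mut cols = transpose(&rows);
    // Column-wise step performed after the transpose: a simple counter column.
    cols.push((0..rows.len() as u64).collect());
    assert_eq!(cols.len(), 4);
}
```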
wborgeaud
4be5a25a7d
Merge pull request #607 from mir-protocol/duplicate_macros
Avoid duplicate macros
2022-07-13 19:56:09 +02:00
Daniel Lubarov
d36eda20e2
Merge pull request #605 from mir-protocol/memory_misc
More realistic padding rows in memory table
2022-07-13 10:55:04 -07:00
Daniel Lubarov
a8852946b3 Have make_kernel log the size of each (assembled) file
For now it doesn't log filenames, but we can compare against the list of filenames in `combined_kernel`.

Current output:
```
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 0 bytes
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 49 bytes
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 387 bytes
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 27365 bytes
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 0 bytes
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 11 bytes
[DEBUG plonky2_evm::cpu::kernel::assembler] Assembled file size: 7 bytes
[DEBUG plonky2_evm::cpu::kernel::aggregator::tests] Total kernel size: 27819 bytes
```

This shows that most of our kernel code is from `curve_add.asm`, which makes sense since it involves a couple of uses of the large `inverse` macro. Thankfully that will be replaced at some point.
2022-07-13 10:53:26 -07:00
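
A hedged sketch of what this kind of per-file size logging might look like with the `log` crate; the function names are illustrative, and the actual call sites in `assembler.rs` and the test aggregator may differ.

```rust
// Assumes the `log` crate as the logging facade; nothing here is the repo's
// actual API, just an illustration of the reported output.
fn log_assembled_size(code: &[u8]) {
    log::debug!("Assembled file size: {} bytes", code.len());
}

fn log_total_kernel_size(files: &[Vec<u8>]) {
    let total: usize = files.iter().map(Vec::len).sum();
    log::debug!("Total kernel size: {} bytes", total);
}

fn main() {
    // With a logger installed (e.g. env_logger), these produce lines like the
    // DEBUG output quoted above; without one, the calls are no-ops.
    log_assembled_size(&[0u8; 49]);
    log_total_kernel_size(&[vec![0u8; 49], vec![0u8; 387]]);
}
```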
wborgeaud
b4ebbe5a31 Start ecrecover 2022-07-13 19:48:17 +02:00
wborgeaud
7a6c53e921 Working secp mul 2022-07-13 19:25:28 +02:00
wborgeaud
a831fab8f8 Working secp add 2022-07-13 19:22:32 +02:00