Address bundling (#1426)

* Start

* Scale TxnFields

* Speed-up

* Misc fixes

* Other fixes

* Fix

* Fix offset

* One more fix

* And one more fix

* Fix

* Fix

* Fix init

* More interpreter fixes

* Final fixes

* Add helper methods

* Clippy

* Apply suggestions

* Comments

* Update documentation

* Regenerate pdf

* minor

* Rename some macros for consistency

* Add utility method for unscaling segments and scaled metadata

* Address comments
Robin Salen 2024-01-08 11:46:26 +01:00 committed by GitHub
parent 3e61f06a1d
commit 2dacbfe2ff
85 changed files with 2028 additions and 1720 deletions

View File

@ -75,15 +75,14 @@ ecAdd, ecMul and ecPairing precompiles.
\item[0x0F.] \texttt{SUBMOD}. Pops 3 elements from the stack, and pushes the modular difference of the first two elements of the stack by the third one.
It is similar to the SUB instruction, with an extra pop for the custom modulus.
\item[0x21.] \texttt{KECCAK\_GENERAL}. Pops 4 elements (successively the context, segment, and offset portions of a Memory address, followed by a length $\ell$)
and pushes the hash of the memory portion starting at the constructed address and of length $\ell$. It is similar to KECCAK256 (0x20) instruction, but can be applied to
any memory section (i.e. even privileged ones).
\item[0x21.] \texttt{KECCAK\_GENERAL}. Pops 2 elements (a Memory address, followed by a length $\ell$) and pushes the hash of the memory portion starting at the
given address and of length $\ell$. It is similar to the KECCAK256 (0x20) instruction, but can be applied to any memory section (i.e. even privileged ones).
\item[0x49.] \texttt{PROVER\_INPUT}. Pushes a single prover input onto the stack.
\item[0xC0-0xDF.] \texttt{MSTORE\_32BYTES}. Pops 4 elements from the stack (successively the context, segment, and offset portions of a Memory address, and then a value), and pushes
a new offset' onto the stack. The value is being decomposed into bytes and written to memory, starting from the reconstructed address. The new offset being pushed is computed as the
initial address offset + the length of the byte sequence being written to memory. Note that similarly to PUSH (0x60-0x7F) instructions there are 31 MSTORE\_32BYTES instructions, each
\item[0xC0-0xDF.] \texttt{MSTORE\_32BYTES}. Pops 2 elements from the stack (a Memory address, and then a value), and pushes
a new address' onto the stack. The value is decomposed into bytes and written to memory, starting from the fetched address. The new address being pushed is computed as the
initial address + the length of the byte sequence being written to memory. Note that similarly to PUSH (0x60-0x7F) instructions, there are 32 MSTORE\_32BYTES instructions, each
corresponding to a target byte length (length 0 is ignored, for the same reasons as MLOAD\_32BYTES, see below). Writing to memory an integer fitting in $n$ bytes with a length $\ell < n$ will
result in the integer being truncated. On the other hand, specifying a length $\ell$ greater than the byte size of the value being written will result in padding with zeroes. This
process is heavily used when resetting memory sections (by calling MSTORE\_32BYTES\_32 with the value 0).
@ -93,29 +92,49 @@ ecAdd, ecMul and ecPairing precompiles.
\item[0xF7.] \texttt{SET\_CONTEXT}. Pops the top element of the stack and updates the current context to this value. It is usually used when calling another contract or precompile,
to distinguish the caller from the callee.
\item[0xF8.] \texttt{MLOAD\_32BYTES}. Pops 4 elements from the stack (successively the context, segment, and offset portions of a Memory address, and then a length $\ell$), and pushes
a value onto the stack. The pushed value corresponds to the U256 integer read from the big-endian sequence of length $\ell$ from the memory address being reconstructed. Note that an
\item[0xF8.] \texttt{MLOAD\_32BYTES}. Pops 2 elements from the stack (a Memory address, and then a length $\ell$), and pushes
a value onto the stack. The pushed value corresponds to the U256 integer read from the big-endian sequence of length $\ell$ from the memory address being fetched. Note that an
empty length is not valid, nor is a length greater than 32 (as a U256 consists of at most 32 bytes). Failing to meet these conditions will result in an unverifiable proof.
\item[0xF9.] \texttt{EXIT\_KERNEL}. Pops 1 element from the stack. This instruction is used at the end of a syscall, before proceeding to the rest of the execution logic.
The popped element, \textit{kexit\_info}, contains several pieces of information like the current program counter, the current amount of gas used, and whether we are in kernel (i.e. privileged) mode or not.
\item[0xFB.] \texttt{MLOAD\_GENERAL}. Pops 3 elements (successively the context, segment, and offset portions of a Memory address), and pushes the value stored at this memory
\item[0xFB.] \texttt{MLOAD\_GENERAL}. Pops 1 element (a Memory address), and pushes the value stored at this memory
address onto the stack. It can read any memory location, general (similarly to MLOAD (0x51) instruction) or privileged.
\item[0xFC.] \texttt{MSTORE\_GENERAL}. Pops 4 elements (successively a value, then the context, segment, and offset portions of a Memory address), and writes the popped value from
the stack at the reconstructed address. It can write to any memory location, general (similarly to MSTORE (0x52) / MSTORE8 (0x53) instructions) or privileged.
\item[0xFC.] \texttt{MSTORE\_GENERAL}. Pops 2 elements (a value and a Memory address), and writes the popped value
to the fetched address. It can write to any memory location, general (similarly to MSTORE (0x52) / MSTORE8 (0x53) instructions) or privileged.
\end{enumerate}
\subsection{Memory addresses}
\label{memoryaddresses}
Kernel operations deal with memory addresses as single U256 elements.
However, when processing the operations to generate the proof witness, the CPU will decompose these into three components:
\begin{itemize}
\item[context.] The context of the memory address. The Kernel context is special, and has value 0.
\item[segment.] The segment of the memory address, corresponding to a specific section given a context (e.g. MPT data, global metadata, etc.).
\item[virtual.] The offset of the memory address, within a segment given a context.
\end{itemize}
To easily retrieve these components, we scale them so that a memory address can be represented as:
$$ \mathrm{addr} = 2^{64} \cdot \mathrm{context} + 2^{32} \cdot \mathrm{segment} + \mathrm{offset}$$
This makes it easy to retrieve each component individually once a Memory address has been decomposed into 32-bit limbs.
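For illustration, here is a minimal, self-contained Rust sketch of this packing and of the limb-wise retrieval (the helper names are hypothetical and not part of the codebase; each component is assumed to fit in 32 bits, per the limb decomposition above):

// Pack (context, segment, offset) into a single address, following
// addr = 2^64 * context + 2^32 * segment + offset.
fn pack_addr(context: u128, segment: u128, offset: u128) -> u128 {
    (context << 64) + (segment << 32) + offset
}

// Once the address is split into 32-bit limbs (least-significant first),
// each component can be read off from its own limb.
fn unpack_addr(addr: u128) -> (u32, u32, u32) {
    let offset = (addr & 0xFFFF_FFFF) as u32;          // limb 0: virtual
    let segment = ((addr >> 32) & 0xFFFF_FFFF) as u32; // limb 1: segment
    let context = ((addr >> 64) & 0xFFFF_FFFF) as u32; // limb 2: context
    (context, segment, offset)
}

fn main() {
    let addr = pack_addr(3, 7, 42);
    assert_eq!(unpack_addr(addr), (3, 7, 42));
}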
\subsection{Stack handling}
\label{stackhandling}
\subsubsection{Top of the stack}
The majority of memory operations involve the stack. The stack is a segment in memory, and stack operations (popping or pushing) use the memory channels.
Every CPU instruction performs between 0 and 4 pops, and may push at most once. However, for efficiency purposes, we hold the top of the stack in
Every CPU instruction performs between 0 and 3 pops, and may push at most once. However, for efficiency purposes, we hold the top of the stack in
the first memory channel \texttt{current\_row.mem\_channels[0]}, only writing it in memory if necessary.
\paragraph*{Motivation:}

Binary file not shown.

View File

@ -16,8 +16,13 @@ pub(crate) fn eval_packed<P: PackedField>(
// The MSTORE_32BYTES opcodes are differentiated from MLOAD_32BYTES
// by the 5th bit set to 0.
let filter = lv.op.m_op_32bytes * (lv.opcode_bits[5] - P::ONES);
let new_offset = nv.mem_channels[0].value;
let virt = lv.mem_channels[2].value[0];
// The address to write to is stored in the first memory channel.
// It contains virt, segment, ctx in its first 3 limbs, and 0 otherwise.
// The new address is identical, except for its `virtual` limb that is increased by the corresponding `len` offset.
let new_addr = nv.mem_channels[0].value;
let written_addr = lv.mem_channels[0].value;
// Read len from opcode bits and constrain the pushed new offset.
let len_bits: P = lv.opcode_bits[..5]
.iter()
@ -25,8 +30,16 @@ pub(crate) fn eval_packed<P: PackedField>(
.map(|(i, &bit)| bit * P::Scalar::from_canonical_u64(1 << i))
.sum();
let len = len_bits + P::ONES;
yield_constr.constraint(filter * (new_offset[0] - virt - len));
for &limb in &new_offset[1..] {
// Check that `virt` is increased properly.
yield_constr.constraint(filter * (new_addr[0] - written_addr[0] - len));
// Check that `segment` and `ctx` do not change.
yield_constr.constraint(filter * (new_addr[1] - written_addr[1]));
yield_constr.constraint(filter * (new_addr[2] - written_addr[2]));
// Check that the rest of the returned address is null.
for &limb in &new_addr[3..] {
yield_constr.constraint(filter * limb);
}
}
@ -41,8 +54,13 @@ pub(crate) fn eval_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
// by the 5th bit set to 0.
let filter =
builder.mul_sub_extension(lv.op.m_op_32bytes, lv.opcode_bits[5], lv.op.m_op_32bytes);
let new_offset = nv.mem_channels[0].value;
let virt = lv.mem_channels[2].value[0];
// The address to write to is stored in the first memory channel.
// It contains virt, segment, ctx in its first 3 limbs, and 0 otherwise.
// The new address is identical, except for its `virtual` limb that is increased by the corresponding `len` offset.
let new_addr = nv.mem_channels[0].value;
let written_addr = lv.mem_channels[0].value;
// Read len from opcode bits and constrain the pushed new offset.
let len_bits = lv.opcode_bits[..5].iter().enumerate().fold(
builder.zero_extension(),
@ -50,11 +68,26 @@ pub(crate) fn eval_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
builder.mul_const_add_extension(F::from_canonical_u64(1 << i), bit, cumul)
},
);
let diff = builder.sub_extension(new_offset[0], virt);
// Check that `virt` is increased properly.
let diff = builder.sub_extension(new_addr[0], written_addr[0]);
let diff = builder.sub_extension(diff, len_bits);
let constr = builder.mul_sub_extension(filter, diff, filter);
yield_constr.constraint(builder, constr);
for &limb in &new_offset[1..] {
// Check that `segment` and `ctx` do not change.
{
let diff = builder.sub_extension(new_addr[1], written_addr[1]);
let constr = builder.mul_extension(filter, diff);
yield_constr.constraint(builder, constr);
let diff = builder.sub_extension(new_addr[2], written_addr[2]);
let constr = builder.mul_extension(filter, diff);
yield_constr.constraint(builder, constr);
}
// Check that the rest of the returned address is null.
for &limb in &new_addr[3..] {
let constr = builder.mul_extension(filter, limb);
yield_constr.constraint(builder, constr);
}
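To make the intent of these MSTORE_32BYTES constraints concrete, here is a small self-contained check of the expected address transition, with plain u64 limbs standing in for field elements (illustration only, not part of the STARK code):

// The written address occupies limbs [virt, segment, ctx] of the channel value;
// the pushed address only bumps the virt limb by the byte length, and keeps
// the remaining limbs at zero.
fn expected_new_addr(written: [u64; 8], len: u64) -> [u64; 8] {
    let mut out = [0u64; 8];
    out[0] = written[0] + len; // virt grows by len
    out[1] = written[1];       // segment unchanged
    out[2] = written[2];       // context unchanged
    out                        // limbs 3.. stay zero
}

fn main() {
    let written = [100, 5, 2, 0, 0, 0, 0, 0]; // virt = 100, segment = 5, ctx = 2
    assert_eq!(expected_new_addr(written, 32), [132, 5, 2, 0, 0, 0, 0, 0]);
}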

View File

@ -11,7 +11,8 @@ use super::membus::NUM_GP_CHANNELS;
use crate::constraint_consumer::{ConstraintConsumer, RecursiveConstraintConsumer};
use crate::cpu::columns::CpuColumnsView;
use crate::cpu::kernel::constants::context_metadata::ContextMetadata;
use crate::memory::segments::Segment;
use crate::memory::segments::{Segment, SEGMENT_SCALING_FACTOR};
use crate::memory::VALUE_LIMBS;
// If true, the instruction will keep the current context for the next row.
// If false, next row's context is handled manually.
@ -83,8 +84,10 @@ fn eval_packed_get<P: PackedField>(
// If the opcode is GET_CONTEXT, then lv.opcode_bits[0] = 0.
let filter = lv.op.context_op * (P::ONES - lv.opcode_bits[0]);
let new_stack_top = nv.mem_channels[0].value;
yield_constr.constraint(filter * (new_stack_top[0] - lv.context));
for &limb in &new_stack_top[1..] {
// Context is scaled by 2^64, hence stored in the 3rd limb.
yield_constr.constraint(filter * (new_stack_top[2] - lv.context));
for (i, &limb) in new_stack_top.iter().enumerate().filter(|(i, _)| *i != 2) {
yield_constr.constraint(filter * limb);
}
@ -113,12 +116,14 @@ fn eval_ext_circuit_get<F: RichField + Extendable<D>, const D: usize>(
let prod = builder.mul_extension(lv.op.context_op, lv.opcode_bits[0]);
let filter = builder.sub_extension(lv.op.context_op, prod);
let new_stack_top = nv.mem_channels[0].value;
// Context is scaled by 2^64, hence stored in the 3rd limb.
{
let diff = builder.sub_extension(new_stack_top[0], lv.context);
let diff = builder.sub_extension(new_stack_top[2], lv.context);
let constr = builder.mul_extension(filter, diff);
yield_constr.constraint(builder, constr);
}
for &limb in &new_stack_top[1..] {
for (i, &limb) in new_stack_top.iter().enumerate().filter(|(i, _)| *i != 2) {
let constr = builder.mul_extension(filter, limb);
yield_constr.constraint(builder, constr);
}
@ -155,13 +160,14 @@ fn eval_packed_set<P: PackedField>(
let stack_top = lv.mem_channels[0].value;
let write_old_sp_channel = lv.mem_channels[1];
let read_new_sp_channel = lv.mem_channels[2];
let ctx_metadata_segment = P::Scalar::from_canonical_u64(Segment::ContextMetadata as u64);
let stack_size_field = P::Scalar::from_canonical_u64(ContextMetadata::StackSize as u64);
// We need to unscale the context metadata segment and related field.
let ctx_metadata_segment = P::Scalar::from_canonical_usize(Segment::ContextMetadata.unscale());
let stack_size_field = P::Scalar::from_canonical_usize(ContextMetadata::StackSize.unscale());
let local_sp_dec = lv.stack_len - P::ONES;
// The next row's context is read from stack_top.
yield_constr.constraint(filter * (stack_top[0] - nv.context));
for &limb in &stack_top[1..] {
yield_constr.constraint(filter * (stack_top[2] - nv.context));
for (i, &limb) in stack_top.iter().enumerate().filter(|(i, _)| *i != 2) {
yield_constr.constraint(filter * limb);
}
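As background for the `unscale()` calls above: a hedged, self-contained sketch of what such a helper could look like, assuming segment constants are pre-shifted by `SEGMENT_SCALING_FACTOR` bits (32, matching the 2^32 segment factor in the documentation). The actual definitions live in `memory::segments` and are not shown in this diff; the discriminant values below are purely illustrative.

const SEGMENT_SCALING_FACTOR: usize = 32;

// Hypothetical segment enum: each variant's value is already shifted so it can
// be added directly into a packed address; `unscale` recovers the small index.
#[derive(Copy, Clone, Debug, PartialEq)]
#[repr(usize)]
enum Segment {
    Code = 0x0000_0000_0000_0000,
    Stack = 0x0000_0001_0000_0000,           // 1 << SEGMENT_SCALING_FACTOR
    ContextMetadata = 0x0000_0002_0000_0000, // 2 << SEGMENT_SCALING_FACTOR
}

impl Segment {
    const fn unscale(&self) -> usize {
        *self as usize >> SEGMENT_SCALING_FACTOR
    }
}

fn main() {
    assert_eq!(Segment::ContextMetadata.unscale(), 2);
}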
@ -220,22 +226,23 @@ fn eval_ext_circuit_set<F: RichField + Extendable<D>, const D: usize>(
let stack_top = lv.mem_channels[0].value;
let write_old_sp_channel = lv.mem_channels[1];
let read_new_sp_channel = lv.mem_channels[2];
let ctx_metadata_segment = builder.constant_extension(F::Extension::from_canonical_u32(
Segment::ContextMetadata as u32,
// We need to unscale the context metadata segment and related field.
let ctx_metadata_segment = builder.constant_extension(F::Extension::from_canonical_usize(
Segment::ContextMetadata.unscale(),
));
let stack_size_field = builder.constant_extension(F::Extension::from_canonical_u32(
ContextMetadata::StackSize as u32,
let stack_size_field = builder.constant_extension(F::Extension::from_canonical_usize(
ContextMetadata::StackSize.unscale(),
));
let one = builder.one_extension();
let local_sp_dec = builder.sub_extension(lv.stack_len, one);
// The next row's context is read from stack_top.
{
let diff = builder.sub_extension(stack_top[0], nv.context);
let diff = builder.sub_extension(stack_top[2], nv.context);
let constr = builder.mul_extension(filter, diff);
yield_constr.constraint(builder, constr);
}
for &limb in &stack_top[1..] {
for (i, &limb) in stack_top.iter().enumerate().filter(|(i, _)| *i != 2) {
let constr = builder.mul_extension(filter, limb);
yield_constr.constraint(builder, constr);
}
@ -368,7 +375,8 @@ pub(crate) fn eval_packed<P: PackedField>(
yield_constr.constraint(new_filter * (channel.addr_context - nv.context));
// Same segment for both.
yield_constr.constraint(
new_filter * (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
new_filter
* (channel.addr_segment - P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
// The address is one less than stack_len.
let addr_virtual = stack_len - P::ONES;
@ -429,7 +437,7 @@ pub(crate) fn eval_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
{
let diff = builder.add_const_extension(
channel.addr_segment,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
);
let constr = builder.mul_extension(new_filter, diff);
yield_constr.constraint(builder, constr);

View File

@ -21,7 +21,7 @@ use crate::cpu::{
};
use crate::cross_table_lookup::{Column, Filter, TableWithColumns};
use crate::evaluation_frame::{StarkEvaluationFrame, StarkFrame};
use crate::memory::segments::Segment;
use crate::memory::segments::{Segment, SEGMENT_SCALING_FACTOR};
use crate::memory::{NUM_CHANNELS, VALUE_LIMBS};
use crate::stark::Stark;
@ -29,15 +29,14 @@ use crate::stark::Stark;
/// the CPU reads the output of the sponge directly from the `KeccakSpongeStark` table.
pub(crate) fn ctl_data_keccak_sponge<F: Field>() -> Vec<Column<F>> {
// When executing KECCAK_GENERAL, the GP memory channels are used as follows:
// GP channel 0: stack[-1] = context
// GP channel 1: stack[-2] = segment
// GP channel 2: stack[-3] = virt
// GP channel 3: stack[-4] = len
// GP channel 0: stack[-1] = addr (context, segment, virt)
// GP channel 1: stack[-2] = len
// Next GP channel 0: pushed = outputs
let context = Column::single(COL_MAP.mem_channels[0].value[0]);
let segment = Column::single(COL_MAP.mem_channels[1].value[0]);
let virt = Column::single(COL_MAP.mem_channels[2].value[0]);
let len = Column::single(COL_MAP.mem_channels[3].value[0]);
let (context, segment, virt) = get_addr(&COL_MAP, 0);
let context = Column::single(context);
let segment = Column::single(segment);
let virt = Column::single(virt);
let len = Column::single(COL_MAP.mem_channels[1].value[0]);
let num_channels = F::from_canonical_usize(NUM_CHANNELS);
let timestamp = Column::linear_combination([(COL_MAP.clock, num_channels)]);
@ -149,27 +148,30 @@ pub(crate) fn ctl_data_byte_unpacking<F: Field>() -> Vec<Column<F>> {
let is_read = Column::constant(F::ZERO);
// When executing MSTORE_32BYTES, the GP memory channels are used as follows:
// GP channel 0: stack[-1] = context
// GP channel 1: stack[-2] = segment
// GP channel 2: stack[-3] = virt
// GP channel 3: stack[-4] = val
// GP channel 0: stack[-1] = addr (context, segment, virt)
// GP channel 1: stack[-2] = val
// Next GP channel 0: pushed = new_offset (virt + len)
let context = Column::single(COL_MAP.mem_channels[0].value[0]);
let segment = Column::single(COL_MAP.mem_channels[1].value[0]);
let virt = Column::single(COL_MAP.mem_channels[2].value[0]);
let val = Column::singles(COL_MAP.mem_channels[3].value);
let (context, segment, virt) = get_addr(&COL_MAP, 0);
let mut res = vec![
is_read,
Column::single(context),
Column::single(segment),
Column::single(virt),
];
// len can be reconstructed as new_offset - virt.
let len = Column::linear_combination_and_next_row_with_constant(
[(COL_MAP.mem_channels[2].value[0], -F::ONE)],
[(COL_MAP.mem_channels[0].value[0], -F::ONE)],
[(COL_MAP.mem_channels[0].value[0], F::ONE)],
F::ZERO,
);
res.push(len);
let num_channels = F::from_canonical_usize(NUM_CHANNELS);
let timestamp = Column::linear_combination([(COL_MAP.clock, num_channels)]);
res.push(timestamp);
let mut res = vec![is_read, context, segment, virt, len, timestamp];
let val = Column::singles(COL_MAP.mem_channels[1].value);
res.extend(val);
res
@ -224,6 +226,20 @@ pub(crate) const MEM_CODE_CHANNEL_IDX: usize = 0;
/// Index of the first general purpose memory channel.
pub(crate) const MEM_GP_CHANNELS_IDX_START: usize = MEM_CODE_CHANNEL_IDX + 1;
/// Recover the three components of an address, given a CPU row and
/// a provided memory channel index.
/// The components are recovered as follows:
///
/// - `context`, shifted by 2^64 (i.e. at index 2)
/// - `segment`, shifted by 2^32 (i.e. at index 1)
/// - `virtual`, not shifted (i.e. at index 0)
pub(crate) const fn get_addr<T: Copy>(lv: &CpuColumnsView<T>, mem_channel: usize) -> (T, T, T) {
let addr_context = lv.mem_channels[mem_channel].value[2];
let addr_segment = lv.mem_channels[mem_channel].value[1];
let addr_virtual = lv.mem_channels[mem_channel].value[0];
(addr_context, addr_segment, addr_virtual)
}
/// Make the time/channel column for memory lookups.
fn mem_time_and_channel<F: Field>(channel: usize) -> Column<F> {
let scalar = F::from_canonical_usize(NUM_CHANNELS);
@ -234,10 +250,10 @@ fn mem_time_and_channel<F: Field>(channel: usize) -> Column<F> {
/// Creates the vector of `Columns` corresponding to the contents of the code channel when reading code values.
pub(crate) fn ctl_data_code_memory<F: Field>() -> Vec<Column<F>> {
let mut cols = vec![
Column::constant(F::ONE), // is_read
Column::single(COL_MAP.code_context), // addr_context
Column::constant(F::from_canonical_u64(Segment::Code as u64)), // addr_segment
Column::single(COL_MAP.program_counter), // addr_virtual
Column::constant(F::ONE), // is_read
Column::single(COL_MAP.code_context), // addr_context
Column::constant(F::from_canonical_usize(Segment::Code.unscale())), // addr_segment
Column::single(COL_MAP.program_counter), // addr_virtual
];
// Low limb of the value matches the opcode bits

View File

@ -54,7 +54,7 @@ fn constrain_channel_packed<P: PackedField>(
yield_constr.constraint(filter * (channel.is_read - P::Scalar::from_bool(is_read)));
yield_constr.constraint(filter * (channel.addr_context - lv.context));
yield_constr.constraint(
filter * (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
filter * (channel.addr_segment - P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
// Top of the stack is at `addr = lv.stack_len - 1`.
let addr_virtual = lv.stack_len - P::ONES - offset;
@ -94,7 +94,7 @@ fn constrain_channel_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
{
let constr = builder.arithmetic_extension(
F::ONE,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
filter,
channel.addr_segment,
filter,

View File

@ -87,7 +87,8 @@ pub(crate) fn eval_packed_jump_jumpi<P: PackedField>(
yield_constr.constraint_transition(new_filter * (channel.is_read - P::ONES));
yield_constr.constraint_transition(new_filter * (channel.addr_context - nv.context));
yield_constr.constraint_transition(
new_filter * (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
new_filter
* (channel.addr_segment - P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
let addr_virtual = nv.stack_len - P::ONES;
yield_constr.constraint_transition(new_filter * (channel.addr_virtual - addr_virtual));
@ -134,7 +135,7 @@ pub(crate) fn eval_packed_jump_jumpi<P: PackedField>(
yield_constr.constraint(
filter
* (jumpdest_flag_channel.addr_segment
- P::Scalar::from_canonical_u64(Segment::JumpdestBits as u64)),
- P::Scalar::from_canonical_usize(Segment::JumpdestBits.unscale())),
);
yield_constr.constraint(filter * (jumpdest_flag_channel.addr_virtual - dst[0]));
@ -205,7 +206,7 @@ pub(crate) fn eval_ext_circuit_jump_jumpi<F: RichField + Extendable<D>, const D:
{
let constr = builder.arithmetic_extension(
F::ONE,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
new_filter,
channel.addr_segment,
new_filter,
@ -308,7 +309,7 @@ pub(crate) fn eval_ext_circuit_jump_jumpi<F: RichField + Extendable<D>, const D:
{
let constr = builder.arithmetic_extension(
F::ONE,
-F::from_canonical_u64(Segment::JumpdestBits as u64),
-F::from_canonical_usize(Segment::JumpdestBits.unscale()),
filter,
jumpdest_flag_channel.addr_segment,
filter,

View File

@ -86,6 +86,8 @@ global extcodesize:
// Checks that the hash of the loaded code corresponds to the `codehash` in the state trie.
// Pre stack: address, ctx, retdest
// Post stack: code_size
//
// NOTE: The provided `dest` **MUST** have a virtual address of 0.
global load_code:
%stack (address, ctx, retdest) -> (extcodehash, address, load_code_ctd, ctx, retdest)
JUMP
@ -94,8 +96,9 @@ load_code_ctd:
DUP1 ISZERO %jumpi(load_code_non_existent_account)
// Load the code non-deterministically in memory and return the length.
PROVER_INPUT(account_code)
%stack (code_size, codehash, ctx, retdest) -> (ctx, @SEGMENT_CODE, 0, code_size, codehash, retdest, code_size)
%stack (code_size, codehash, ctx, retdest) -> (ctx, code_size, codehash, retdest, code_size)
// Check that the hash of the loaded code equals `codehash`.
// ctx == DST, as SEGMENT_CODE == offset == 0.
KECCAK_GENERAL
// stack: shouldbecodehash, codehash, retdest, code_size
%assert_eq
@ -103,9 +106,9 @@ load_code_ctd:
JUMP
load_code_non_existent_account:
// Write 0 at address 0 for soundness.
// stack: codehash, ctx, retdest
%stack (codehash, ctx, retdest) -> (0, ctx, @SEGMENT_CODE, 0, retdest, 0)
// Write 0 at address 0 for soundness: SEGMENT_CODE == 0, hence ctx == addr.
// stack: codehash, addr, retdest
%stack (codehash, addr, retdest) -> (0, addr, retdest, 0)
MSTORE_GENERAL
// stack: retdest, 0
JUMP
@ -120,10 +123,14 @@ global load_code_padded:
%jump(load_code)
load_code_padded_ctd:
%stack (code_size, ctx, retdest) -> (ctx, @SEGMENT_CODE, code_size, 0, ctx, retdest, code_size)
// SEGMENT_CODE == 0.
// stack: code_size, ctx, retdest
%stack (code_size, ctx, retdest) -> (ctx, code_size, 0, retdest, code_size)
ADD
// stack: addr, 0, retdest, code_size
MSTORE_32BYTES_32
// stack: last_offset, ctx, retdest, code_size
%stack (last_offset, ctx) -> (0, ctx, @SEGMENT_CODE, last_offset)
// stack: addr', retdest, code_size
PUSH 0
MSTORE_GENERAL
// stack: retdest, code_size
JUMP

View File

@ -1,15 +1,21 @@
%macro memcpy_current_general
// stack: dst, src, len
GET_CONTEXT
%stack (context, dst, src, len) -> (context, @SEGMENT_KERNEL_GENERAL, dst, context, @SEGMENT_KERNEL_GENERAL, src, len, %%after)
// DST and SRC are offsets, for the same memory segment
GET_CONTEXT PUSH @SEGMENT_KERNEL_GENERAL %build_address_no_offset
%stack (addr_no_offset, dst, src, len) -> (addr_no_offset, src, addr_no_offset, dst, len, %%after)
ADD
// stack: SRC, addr_no_offset, dst, len, %%after
SWAP2
ADD
// stack: DST, SRC, len, %%after
%jump(memcpy)
%%after:
%endmacro
%macro clear_current_general
// stack: dst, len
GET_CONTEXT
%stack (context, dst, len) -> (context, @SEGMENT_KERNEL_GENERAL, dst, len, %%after)
GET_CONTEXT PUSH @SEGMENT_KERNEL_GENERAL %build_address
%stack (DST, len) -> (DST, len, %%after)
%jump(memset)
%%after:
%endmacro

View File

@ -120,14 +120,20 @@ insert_storage_key:
// stack: i, len, addr, key, value, retdest
DUP4 DUP4 %journal_add_storage_loaded // Add a journal entry for the loaded storage key.
// stack: i, len, addr, key, value, retdest
DUP1 %increment
DUP1 %increment
%stack (i_plus_2, i_plus_1, i, len, addr, key, value) -> (i, addr, i_plus_1, key, i_plus_2, value, i_plus_2, value)
%mstore_kernel(@SEGMENT_ACCESSED_STORAGE_KEYS) // Store new address at the end of the array.
%mstore_kernel(@SEGMENT_ACCESSED_STORAGE_KEYS) // Store new key after that
%mstore_kernel(@SEGMENT_ACCESSED_STORAGE_KEYS) // Store new value after that
// stack: i_plus_2, value, retdest
%increment
DUP1
PUSH @SEGMENT_ACCESSED_STORAGE_KEYS
%build_kernel_address
%stack(dst, i, len, addr, key, value) -> (addr, dst, dst, key, dst, value, i, value)
MSTORE_GENERAL // Store new address at the end of the array.
// stack: dst, key, dst, value, i, value, retdest
%increment SWAP1
MSTORE_GENERAL // Store new key after that
// stack: dst, value, i, value, retdest
%add_const(2) SWAP1
MSTORE_GENERAL // Store new value after that
// stack: i, value, retdest
%add_const(3)
%mstore_global_metadata(@GLOBAL_METADATA_ACCESSED_STORAGE_KEYS_LEN) // Store new length.
%stack (value, retdest) -> (retdest, 1, value) // Return 1 to indicate that the storage key was inserted.
JUMP

View File

@ -1,4 +1,5 @@
// Handlers for call-like operations, namely CALL, CALLCODE, STATICCALL and DELEGATECALL.
// Reminder: All context metadata hardcoded offsets are already scaled by `Segment::ContextMetadata`.
// Creates a new sub context and executes the code of the given account.
global sys_call:
@ -271,7 +272,10 @@ call_too_deep:
// because it will already be 0 by default.
%macro set_static_true
// stack: new_ctx
%stack (new_ctx) -> (1, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_STATIC, new_ctx)
DUP1
%build_address_with_ctx_no_segment(@CTX_METADATA_STATIC)
PUSH 1
// stack: 1, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
@ -279,74 +283,90 @@ call_too_deep:
// Set @CTX_METADATA_STATIC of the next context to the current value.
%macro set_static
// stack: new_ctx
DUP1
%build_address_with_ctx_no_segment(@CTX_METADATA_STATIC)
%mload_context_metadata(@CTX_METADATA_STATIC)
%stack (is_static, new_ctx) -> (is_static, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_STATIC, new_ctx)
// stack: is_static, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_addr
// stack: called_addr, new_ctx
%stack (called_addr, new_ctx)
-> (called_addr, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_ADDRESS, new_ctx)
DUP2
%build_address_with_ctx_no_segment(@CTX_METADATA_ADDRESS)
SWAP1
// stack: called_addr, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_caller
// stack: sender, new_ctx
%stack (sender, new_ctx)
-> (sender, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_CALLER, new_ctx)
DUP2
%build_address_with_ctx_no_segment(@CTX_METADATA_CALLER)
SWAP1
// stack: sender, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_value
// stack: value, new_ctx
%stack (value, new_ctx)
-> (value, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_CALL_VALUE, new_ctx)
DUP2
%build_address_with_ctx_no_segment(@CTX_METADATA_CALL_VALUE)
SWAP1
// stack: value, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_code_size
// stack: code_size, new_ctx
%stack (code_size, new_ctx)
-> (code_size, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_CODE_SIZE, new_ctx)
DUP2
%build_address_with_ctx_no_segment(@CTX_METADATA_CODE_SIZE)
SWAP1
// stack: code_size, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_calldata_size
// stack: calldata_size, new_ctx
%stack (calldata_size, new_ctx)
-> (calldata_size, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_CALLDATA_SIZE, new_ctx)
DUP2
%build_address_with_ctx_no_segment(@CTX_METADATA_CALLDATA_SIZE)
SWAP1
// stack: calldata_size, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_gas_limit
// stack: gas_limit, new_ctx
%stack (gas_limit, new_ctx)
-> (gas_limit, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_GAS_LIMIT, new_ctx)
DUP2
%build_address_with_ctx_no_segment(@CTX_METADATA_GAS_LIMIT)
SWAP1
// stack: gas_limit, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_parent_ctx
// stack: new_ctx
PUSH @CTX_METADATA_PARENT_CONTEXT
PUSH @SEGMENT_CONTEXT_METADATA
DUP3 // new_ctx
DUP1
%build_address_with_ctx_no_segment(@CTX_METADATA_PARENT_CONTEXT)
GET_CONTEXT
// stack: ctx, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
%macro set_new_ctx_parent_pc(label)
// stack: new_ctx
%stack (new_ctx)
-> ($label, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_PARENT_PC, new_ctx)
DUP1
%build_address_with_ctx_no_segment(@CTX_METADATA_PARENT_PC)
PUSH $label
// stack: label, addr, new_ctx
MSTORE_GENERAL
// stack: new_ctx
%endmacro
@ -381,17 +401,18 @@ call_too_deep:
%macro copy_mem_to_calldata
// stack: new_ctx, args_offset, args_size
GET_CONTEXT
%stack (ctx, new_ctx, args_offset, args_size) ->
(
new_ctx, @SEGMENT_CALLDATA, 0, // DST
ctx, @SEGMENT_MAIN_MEMORY, args_offset, // SRC
args_size, %%after, // count, retdest
new_ctx, args_size
)
%stack(ctx, new_ctx, args_offset, args_size) -> (ctx, @SEGMENT_MAIN_MEMORY, args_offset, args_size, %%after, new_ctx, args_size)
%build_address
// stack: SRC, args_size, %%after, new_ctx, args_size
DUP4
%build_address_with_ctx_no_offset(@SEGMENT_CALLDATA)
// stack: DST, SRC, args_size, %%after, new_ctx, args_size
%jump(memcpy_bytes)
%%after:
%stack (new_ctx, args_size) ->
(args_size, new_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_CALLDATA_SIZE)
// stack: new_ctx, args_size
%build_address_with_ctx_no_segment(@CTX_METADATA_CALLDATA_SIZE)
// stack: addr, args_size
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -403,13 +424,12 @@ call_too_deep:
// stack: returndata_size, ret_size, new_ctx, success, ret_offset, kexit_info
%min
GET_CONTEXT
%stack (ctx, n, new_ctx, success, ret_offset, kexit_info) ->
(
ctx, @SEGMENT_MAIN_MEMORY, ret_offset, // DST
ctx, @SEGMENT_RETURNDATA, 0, // SRC
n, %%after, // count, retdest
kexit_info, success
)
%stack (ctx, n, new_ctx, success, ret_offset, kexit_info) -> (ctx, @SEGMENT_RETURNDATA, @SEGMENT_MAIN_MEMORY, ret_offset, ctx, n, %%after, kexit_info, success)
%build_address_no_offset
// stack: SRC, @SEGMENT_MAIN_MEMORY, ret_offset, ctx, n, %%after, kexit_info, success
SWAP3
%build_address
// stack: DST, SRC, n, %%after, kexit_info, success
%jump(memcpy_bytes)
%%after:
%endmacro

View File

@ -9,7 +9,7 @@
// Charge gas for *call opcodes and return the sub-context gas limit.
// Doesn't include memory expansion costs.
global call_charge_gas:
// Compute C_aaccess
// Compute C_access
// stack: is_call_or_callcode, is_call_or_staticcall, cold_access, address, gas, kexit_info, value, retdest
SWAP2
// stack: cold_access, is_call_or_staticcall, is_call_or_callcode, address, gas, kexit_info, value, retdest

View File

@ -57,6 +57,7 @@ global sys_create2:
DUP5 // code_offset
PUSH @SEGMENT_MAIN_MEMORY
GET_CONTEXT
%build_address
KECCAK_GENERAL
// stack: hash, salt, create_common, value, code_offset, code_len, kexit_info
@ -99,11 +100,15 @@ global create_common:
%set_new_ctx_code_size POP
// Copy the code from memory to the new context's code segment.
%stack (src_ctx, new_ctx, address, value, code_offset, code_len)
-> (new_ctx, @SEGMENT_CODE, 0, // DST
src_ctx, @SEGMENT_MAIN_MEMORY, code_offset, // SRC
-> (src_ctx, @SEGMENT_MAIN_MEMORY, code_offset, // SRC
new_ctx, // DST (SEGMENT_CODE == virt == 0)
code_len,
run_constructor,
new_ctx, value, address)
%build_address
// stack: SRC, DST, code_len, run_constructor, new_ctx, value, address
SWAP1
// stack: DST, SRC, code_len, run_constructor, new_ctx, value, address
%jump(memcpy_bytes)
run_constructor:
@ -144,7 +149,11 @@ after_constructor:
POP
// EIP-3541: Reject new contract code starting with the 0xEF byte
PUSH 0 %mload_current(@SEGMENT_RETURNDATA) %eq_const(0xEF) %jumpi(create_first_byte_ef)
PUSH @SEGMENT_RETURNDATA
GET_CONTEXT
%build_address_no_offset
MLOAD_GENERAL
%eq_const(0xEF) %jumpi(create_first_byte_ef)
// Charge gas for the code size.
// stack: leftover_gas, success, address, kexit_info
@ -160,9 +169,9 @@ after_constructor:
%pop_checkpoint
// Store the code hash of the new contract.
GET_CONTEXT
%returndatasize
%stack (size, ctx) -> (ctx, @SEGMENT_RETURNDATA, 0, size) // context, segment, offset, len
PUSH @SEGMENT_RETURNDATA GET_CONTEXT %build_address_no_offset
// stack: addr, len
KECCAK_GENERAL
// stack: codehash, leftover_gas, success, address, kexit_info
%observe_new_contract

View File

@ -14,10 +14,7 @@ global get_create_address:
%encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
%prepend_rlp_list_prefix
// stack: rlp_prefix_start, rlp_len, retdest
PUSH @SEGMENT_RLP_RAW
PUSH 0 // context
// stack: RLP_ADDR: 3, rlp_len, retdest
// stack: RLP_ADDR, rlp_len, retdest
KECCAK_GENERAL
// stack: hash, retdest
%u256_to_addr
@ -41,19 +38,23 @@ global get_create_address:
global get_create2_address:
// stack: sender, code_hash, salt, retdest
PUSH 0xff PUSH 0 %mstore_kernel_general
%stack (sender, code_hash, salt, retdest) -> (0, @SEGMENT_KERNEL_GENERAL, 1, sender, 20, get_create2_address_contd, salt, code_hash, retdest)
%stack (sender, code_hash, salt, retdest) -> (@SEGMENT_KERNEL_GENERAL, 1, sender, 20, get_create2_address_contd, salt, code_hash, retdest)
ADD
%jump(mstore_unpacking)
get_create2_address_contd:
POP
%stack (salt, code_hash, retdest) -> (0, @SEGMENT_KERNEL_GENERAL, 21, salt, 32, get_create2_address_contd2, code_hash, retdest)
%stack (salt, code_hash, retdest) -> (@SEGMENT_KERNEL_GENERAL, 21, salt, 32, get_create2_address_contd2, code_hash, retdest)
ADD
%jump(mstore_unpacking)
get_create2_address_contd2:
POP
%stack (code_hash, retdest) -> (0, @SEGMENT_KERNEL_GENERAL, 53, code_hash, 32, get_create2_address_finish, retdest)
%stack (code_hash, retdest) -> (@SEGMENT_KERNEL_GENERAL, 53, code_hash, 32, get_create2_address_finish, retdest)
ADD
%jump(mstore_unpacking)
get_create2_address_finish:
POP
%stack (retdest) -> (0, @SEGMENT_KERNEL_GENERAL, 0, 85, retdest) // context, segment, offset, len
%stack (retdest) -> (@SEGMENT_KERNEL_GENERAL, 85, retdest) // offset == context == 0
// addr, len, retdest
KECCAK_GENERAL
// stack: hash, retdest
%u256_to_addr

View File

@ -55,8 +55,8 @@ process_receipt_after_bloom:
%get_trie_data_size
// stack: receipt_ptr, payload_len, status, new_cum_gas, txn_nb, new_cum_gas, txn_nb, num_nibbles, retdest
// Write transaction type if necessary. RLP_RAW contains, at index 0, the current transaction type.
PUSH 0
%mload_kernel(@SEGMENT_RLP_RAW)
PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
MLOAD_GENERAL
// stack: first_txn_byte, receipt_ptr, payload_len, status, new_cum_gas, txn_nb, new_cum_gas, txn_nb, num_nibbles, retdest
DUP1 %eq_const(1) %jumpi(receipt_nonzero_type)
DUP1 %eq_const(2) %jumpi(receipt_nonzero_type)
@ -79,8 +79,10 @@ process_receipt_after_type:
// stack: receipt_ptr, txn_nb, new_cum_gas, txn_nb, num_nibbles, retdest
// Write Bloom filter.
PUSH 256 // Bloom length.
PUSH 0 PUSH @SEGMENT_TXN_BLOOM PUSH 0 // Bloom memory address.
%get_trie_data_size PUSH @SEGMENT_TRIE_DATA PUSH 0 // MPT dest address.
PUSH @SEGMENT_TXN_BLOOM // ctx == virt == 0
// stack: bloom_addr, 256, txn_nb, new_cum_gas, txn_nb, num_nibbles, retdest
%get_trie_data_size
PUSH @SEGMENT_TRIE_DATA ADD // MPT dest address.
// stack: DST, SRC, 256, receipt_ptr, txn_nb, new_cum_gas, txn_nb, num_nibbles, retdest
%memcpy_bytes
// stack: receipt_ptr, txn_nb, new_cum_gas, txn_nb, num_nibbles, retdest
@ -204,16 +206,14 @@ process_receipt_after_write:
%mpt_insert_receipt_trie
// stack: new_cum_gas, txn_nb, num_nibbles, retdest
// Now, we set the Bloom filter back to 0. We proceed by chunks of 32 bytes.
PUSH 0
PUSH @SEGMENT_TXN_BLOOM // ctx == offset == 0
%rep 8
// stack: counter, new_cum_gas, txn_nb, num_nibbles, retdest
// stack: addr, new_cum_gas, txn_nb, num_nibbles, retdest
PUSH 0 // we will fill the memory segment with zeroes
DUP2
PUSH @SEGMENT_TXN_BLOOM
DUP3 // kernel context is 0
// stack: ctx, segment, counter, 0, counter, new_cum_gas, txn_nb, num_nibbles, retdes
// stack: addr, 0, addr, new_cum_gas, txn_nb, num_nibbles, retdest
MSTORE_32BYTES_32
// stack: new_counter, counter, new_cum_gas, txn_nb, num_nibbles, retdest
// stack: new_addr, addr, new_cum_gas, txn_nb, num_nibbles, retdest
SWAP1 POP
%endrep
POP

View File

@ -14,7 +14,8 @@ loop:
%jumpi(return)
// stack: i, ctx, code_len, retdest
%stack (i, ctx) -> (ctx, @SEGMENT_CODE, i, i, ctx)
%stack (i, ctx) -> (ctx, i, i, ctx)
ADD // combine context and offset to make an address (SEGMENT_CODE == 0)
MLOAD_GENERAL
// stack: opcode, i, ctx, code_len, retdest
@ -26,7 +27,10 @@ loop:
%jumpi(continue)
// stack: JUMPDEST, i, ctx, code_len, retdest
%stack (JUMPDEST, i, ctx) -> (1, ctx, @SEGMENT_JUMPDEST_BITS, i, JUMPDEST, i, ctx)
%stack (JUMPDEST, i, ctx) -> (ctx, @SEGMENT_JUMPDEST_BITS, i, JUMPDEST, i, ctx)
%build_address
PUSH 1
// stack: 1, addr, JUMPDEST, i, ctx
MSTORE_GENERAL
continue:

View File

@ -29,7 +29,8 @@ global precompile_blake2_f:
// stack: flag_addr, flag_addr, blake2_f_contd, kexit_info
PUSH @SEGMENT_CALLDATA
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, flag_addr, flag_addr, blake2_f_contd, kexit_info
%build_address
// stack: addr, flag_addr, blake2_f_contd, kexit_info
MLOAD_GENERAL
// stack: flag, flag_addr, blake2_f_contd, kexit_info
DUP1
@ -45,6 +46,7 @@ global precompile_blake2_f:
// stack: @SEGMENT_CALLDATA, t1_addr, t1_addr, flag, blake2_f_contd, kexit_info
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, t1_addr, t1_addr, flag, blake2_f_contd, kexit_info
%build_address
%mload_packing_u64_LE
// stack: t_1, t1_addr, flag, blake2_f_contd, kexit_info
SWAP1
@ -56,6 +58,7 @@ global precompile_blake2_f:
// stack: @SEGMENT_CALLDATA, t0_addr, t0_addr, t_1, flag, blake2_f_contd, kexit_info
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, t0_addr, t0_addr, t_1, flag, blake2_f_contd, kexit_info
%build_address
%mload_packing_u64_LE
// stack: t_0, t0_addr, t_1, flag, blake2_f_contd, kexit_info
SWAP1
@ -71,6 +74,7 @@ global precompile_blake2_f:
// stack: @SEGMENT_CALLDATA, m0_addr + 8 * (16 - i - 1), m0_addr + 8 * (16 - i - 1), m_(i+1), ..., m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, m0_addr + 8 * (16 - i - 1), m0_addr + 8 * (16 - i - 1), m_(i+1), ..., m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
%build_address
%mload_packing_u64_LE
// stack: m_i, m0_addr + 8 * (16 - i - 1), m_(i+1), ..., m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
SWAP1
@ -88,6 +92,7 @@ global precompile_blake2_f:
// stack: @SEGMENT_CALLDATA, h0_addr + 8 * (8 - i), h0_addr + 8 * (8 - i), h_(i+1), ..., h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, h0_addr + 8 * (8 - i), h0_addr + 8 * (8 - i), h_(i+1), ..., h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
%build_address
%mload_packing_u64_LE
// stack: h_i, h0_addr + 8 * (8 - i), h_(i+1), ..., h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
SWAP1
@ -96,9 +101,10 @@ global precompile_blake2_f:
// stack: h0_addr + 8 * 8 = 68, h_0, ..., h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
POP
%stack () -> (@SEGMENT_CALLDATA, 0, 4)
%stack () -> (@SEGMENT_CALLDATA, 4)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 0, 4, h_0..h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
// stack: ctx, @SEGMENT_CALLDATA, 4, h_0..h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
%build_address_no_offset
%mload_packing
// stack: rounds, h_0..h_7, m_0..m_15, t_0, t_1, flag, blake2_f_contd, kexit_info
@ -113,20 +119,20 @@ blake2_f_contd:
// Store the result hash to the parent's return data using `mstore_unpacking_u64_LE`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 64)
PUSH 0
// stack: addr_0=0, h_0', h_1', h_2', h_3', h_4', h_5', h_6', h_7', kexit_info
// stack: h_0', h_1', h_2', h_3', h_4', h_5', h_6', h_7', kexit_info
PUSH @SEGMENT_RETURNDATA
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
// stack: parent_ctx, addr_0=0, h_0', h_1', h_2', h_3', h_4', h_5', h_6', h_7', kexit_info
// stack: parent_ctx, segment, h_0', h_1', h_2', h_3', h_4', h_5', h_6', h_7', kexit_info
%build_address_no_offset
// stack: addr0=0, h_0', h_1', h_2', h_3', h_4', h_5', h_6', h_7', kexit_info
%rep 8
// stack: parent_ctx, addr_i, h_i', ..., h_7', kexit_info
%stack (ctx, addr, h_i) -> (ctx, @SEGMENT_RETURNDATA, addr, h_i, addr, ctx)
// stack: addri, h_i', ..., h_7', kexit_info
%stack (addr, h_i) -> (addr, h_i, addr)
%mstore_unpacking_u64_LE
// stack: addr_i, parent_ctx, h_(i+1)', ..., h_7', kexit_info
// stack: addr_i, h_(i+1)', ..., h_7', kexit_info
%add_const(8)
// stack: addr_(i+1), parent_ctx, h_(i+1)', ..., h_7', kexit_info
SWAP1
// stack: parent_ctx, addr_(i+1), h_(i+1)', ..., h_7', kexit_info
// stack: addr_(i+1), h_(i+1)', ..., h_7', kexit_info
%endrep
// stack: kexit_info

View File

@ -20,21 +20,25 @@ global precompile_bn_add:
%stack () -> (@SEGMENT_CALLDATA, 96, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 96, 32, bn_add_return, kexit_info
%build_address
%mload_packing
// stack: y1, bn_add_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 64, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 64, 32, y1, bn_add_return, kexit_info
%build_address
%mload_packing
// stack: x1, y1, bn_add_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 32, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 32, 32, x1, y1, bn_add_return, kexit_info
%build_address
%mload_packing
// stack: y0, x1, y1, bn_add_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 0, 32)
%stack () -> (@SEGMENT_CALLDATA, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 0, 32, y0, x1, y1, bn_add_return, kexit_info
// stack: ctx, @SEGMENT_CALLDATA, 32, y0, x1, y1, bn_add_return, kexit_info
%build_address_no_offset
%mload_packing
// stack: x0, y0, x1, y1, bn_add_return, kexit_info
%jump(bn_add)
@ -49,9 +53,11 @@ bn_add_return:
// Store the result (x, y) to the parent's return data using `mstore_unpacking`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 64)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, x, y) -> (parent_ctx, @SEGMENT_RETURNDATA, 0, x, 32, bn_add_contd6, parent_ctx, y)
%stack (parent_ctx, x, y) -> (parent_ctx, @SEGMENT_RETURNDATA, x, 32, bn_add_contd6, parent_ctx, y)
%build_address_no_offset
%jump(mstore_unpacking)
bn_add_contd6:
POP
%stack (parent_ctx, y) -> (parent_ctx, @SEGMENT_RETURNDATA, 32, y, 32, pop_and_return_success)
%build_address
%jump(mstore_unpacking)

View File

@ -20,16 +20,19 @@ global precompile_bn_mul:
%stack () -> (@SEGMENT_CALLDATA, 64, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 64, 32, bn_mul_return, kexit_info
%build_address
%mload_packing
// stack: n, bn_mul_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 32, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 32, 32, n, bn_mul_return, kexit_info
%build_address
%mload_packing
// stack: y, n, bn_mul_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 0, 32)
%stack () -> (@SEGMENT_CALLDATA, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 0, 32, y, n, bn_mul_return, kexit_info
// stack: ctx, @SEGMENT_CALLDATA, 32, y, n, bn_mul_return, kexit_info
%build_address_no_offset
%mload_packing
// stack: x, y, n, bn_mul_return, kexit_info
%jump(bn_mul)
@ -44,9 +47,11 @@ bn_mul_return:
// Store the result (Px, Py) to the parent's return data using `mstore_unpacking`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 64)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, Px, Py) -> (parent_ctx, @SEGMENT_RETURNDATA, 0, Px, 32, bn_mul_contd6, parent_ctx, Py)
%stack (parent_ctx, Px, Py) -> (parent_ctx, @SEGMENT_RETURNDATA, Px, 32, bn_mul_contd6, parent_ctx, Py)
%build_address_no_offset
%jump(mstore_unpacking)
bn_mul_contd6:
POP
%stack (parent_ctx, Py) -> (parent_ctx, @SEGMENT_RETURNDATA, 32, Py, 32, pop_and_return_success)
%build_address
%jump(mstore_unpacking)

View File

@ -20,21 +20,25 @@ global precompile_ecrec:
%stack () -> (@SEGMENT_CALLDATA, 96, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 96, 32, ecrec_return, kexit_info
%build_address
%mload_packing
// stack: s, ecrec_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 64, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 64, 32, s, ecrec_return, kexit_info
%build_address
%mload_packing
// stack: r, s, ecrec_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 32, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 32, 32, r, s, ecrec_return, kexit_info
%build_address
%mload_packing
// stack: v, r, s, ecrec_return, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 0, 32)
%stack () -> (@SEGMENT_CALLDATA, 32)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 0, 32, v, r, s, ecrec_return, kexit_info
// stack: ctx, @SEGMENT_CALLDATA, 32, v, r, s, ecrec_return, kexit_info
%build_address_no_offset
%mload_packing
// stack: hash, v, r, s, ecrec_return, kexit_info
%jump(ecrecover)
@ -45,7 +49,8 @@ ecrec_return:
// Store the result address to the parent's return data using `mstore_unpacking`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 32)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, address) -> (parent_ctx, @SEGMENT_RETURNDATA, 0, address, 32, pop_and_return_success)
%stack (parent_ctx, address) -> (parent_ctx, @SEGMENT_RETURNDATA, address, 32, pop_and_return_success)
%build_address_no_offset
%jump(mstore_unpacking)
// On bad input, return empty return data but still return success.

View File

@ -11,43 +11,43 @@
// We pass around total_num_limbs and len for convenience, because we can't access them from the stack
// if they're hidden behind the variable number of limbs.
mload_bytes_as_limbs:
// stack: ctx, segment, offset, num_bytes, retdest, total_num_limbs, len, ..limbs
DUP4
// stack: num_bytes, ctx, segment, offset, num_bytes, retdest, total_num_limbs, len, ..limbs
// stack: addr, num_bytes, retdest, total_num_limbs, len, ..limbs
DUP2
// stack: num_bytes, addr, num_bytes, retdest, total_num_limbs, len, ..limbs
%mod_16
// stack: min(16, num_bytes), ctx, segment, offset, num_bytes, retdest, total_num_limbs, len, ..limbs
%stack (len, addr: 3) -> (addr, len, addr)
// stack: ctx, segment, offset, min(16, num_bytes), ctx, segment, offset, num_bytes, retdest, total_num_limbs, len, ..limbs
// stack: min(16, num_bytes), addr, num_bytes, retdest, total_num_limbs, len, ..limbs
DUP2
// stack: addr, min(16, num_bytes), addr, num_bytes, retdest, total_num_limbs, len, ..limbs
%mload_packing
// stack: new_limb, ctx, segment, offset, num_bytes, retdest, total_num_limbs, len, ..limbs
%stack (new, addr: 3, numb, ret, tot, len) -> (numb, addr, ret, tot, len, new)
// stack: num_bytes, ctx, segment, offset, retdest, total_num_limbs, len, new_limb, ..limbs
// stack: new_limb, addr, num_bytes, retdest, total_num_limbs, len, ..limbs
%stack (new, addr, numb, ret, tot, len) -> (numb, addr, ret, tot, len, new)
// stack: num_bytes, addr, retdest, total_num_limbs, len, new_limb, ..limbs
DUP1
%mod_16
// stack: num_bytes%16, num_bytes, ctx, segment, offset, retdest, total_num_limbs, len, new_limb, ..limbs
// stack: num_bytes%16, num_bytes, addr, retdest, total_num_limbs, len, new_limb, ..limbs
DUP1 SWAP2
SUB
// stack:num_bytes_new, num_bytes%16, ctx, segment, offset, retdest, total_num_limbs, len, new_limb, ..limbs
// stack: num_bytes_new, num_bytes%16, addr, retdest, total_num_limbs, len, new_limb, ..limbs
DUP1
ISZERO
%jumpi(mload_bytes_return)
SWAP1
// stack: num_bytes%16, num_bytes_new, ctx, segment, offset, retdest, total_num_limbs, len, new_limb, ..limbs
DUP5 // offset
ADD
// stack: offset_new, num_bytes_new, ctx, segment, offset, retdest, total_num_limbs, len, new_limb, ..limbs
SWAP4 POP
// stack: num_bytes_new, ctx, segment, offset_new, retdest, total_num_limbs, len, new_limb, ..limbs
%stack (num, addr: 3) -> (addr, num)
// stack: num_bytes%16, num_bytes_new, addr, retdest, total_num_limbs, len, new_limb, ..limbs
DUP3 // addr
ADD // increment offset
// stack: addr_new, num_bytes_new, addr, retdest, total_num_limbs, len, new_limb, ..limbs
SWAP2 POP
// stack: num_bytes_new, addr_new, retdest, total_num_limbs, len, new_limb, ..limbs
SWAP1
%jump(mload_bytes_as_limbs)
mload_bytes_return:
// stack: num_bytes_new, num_bytes%16, ctx, segment, offset, retdest, total_num_limbs, len, new_limb, ..limbs
%pop5
// stack: num_bytes_new, num_bytes%16, addr, retdest, total_num_limbs, len, new_limb, ..limbs
%pop3
// stack: retdest, total_num_limbs, len, ..limbs
JUMP
%macro mload_bytes_as_limbs
%stack (ctx, segment, offset, num_bytes, total_num_limbs) -> (ctx, segment, offset, num_bytes, %%after, total_num_limbs)
%stack (addr, num_bytes, total_num_limbs) -> (addr, num_bytes, %%after, total_num_limbs)
%jump(mload_bytes_as_limbs)
%%after:
%endmacro
@ -112,6 +112,7 @@ calculate_l_E_prime:
// stack: 96 + l_B, 32, l_E, l_B, retdest
PUSH @SEGMENT_CALLDATA
GET_CONTEXT
%build_address
%mload_packing
// stack: i[96 + l_B..128 + l_B], l_E, l_B, retdest
%log2_floor
@ -142,6 +143,7 @@ case_le_32:
// stack: 96 + l_B, l_E, retdest
PUSH @SEGMENT_CALLDATA
GET_CONTEXT
%build_address
%mload_packing
// stack: E, retdest
%log2_floor
@ -165,22 +167,25 @@ global precompile_expmod:
// stack: kexit_info
// Load l_B from i[0..32].
%stack () -> (@SEGMENT_CALLDATA, 0, 32)
// stack: @SEGMENT_CALLDATA, 0, 32, kexit_info
%stack () -> (@SEGMENT_CALLDATA, 32)
// stack: @SEGMENT_CALLDATA, 32, kexit_info
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 0, 32, kexit_info
// stack: ctx, @SEGMENT_CALLDATA, 32, kexit_info
%build_address_no_offset
%mload_packing
// stack: l_B, kexit_info
// Load l_E from i[32..64].
%stack () -> (@SEGMENT_CALLDATA, 32, 32)
GET_CONTEXT
%build_address
%mload_packing
// stack: l_E, l_B, kexit_info
// Load l_M from i[64..96].
%stack () -> (@SEGMENT_CALLDATA, 64, 32)
GET_CONTEXT
%build_address
%mload_packing
// stack: l_M, l_E, l_B, kexit_info
DUP3 ISZERO DUP2 ISZERO
@ -247,6 +252,7 @@ l_E_prime_return:
%stack () -> (@SEGMENT_CALLDATA, 96)
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 96, num_bytes, num_limbs, len, len, l_M, l_E, l_B, kexit_info
%build_address
%mload_bytes_as_limbs
// stack: num_limbs, len, limbs[num_limbs-1], .., limbs[0], len, l_M, l_E, l_B, kexit_info
SWAP1
@ -282,6 +288,7 @@ copy_b_end:
PUSH @SEGMENT_CALLDATA
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 96 + l_B, num_bytes, num_limbs, len, len, l_M, l_E, l_B, kexit_info
%build_address
%mload_bytes_as_limbs
// stack: num_limbs, len, limbs[num_limbs-1], .., limbs[0], len, l_M, l_E, l_B, kexit_info
SWAP1
@ -316,6 +323,7 @@ copy_e_end:
PUSH @SEGMENT_CALLDATA
GET_CONTEXT
// stack: ctx, @SEGMENT_CALLDATA, 96 + l_B + l_E, num_bytes, num_limbs, len, len, l_M, l_E, l_B, kexit_info
%build_address
%mload_bytes_as_limbs
// stack: num_limbs, len, limbs[num_limbs-1], .., limbs[0], len, l_M, l_E, l_B, kexit_info
SWAP1
@ -410,33 +418,33 @@ expmod_contd:
DUP2
DUP2
ADD
// stack: cur_address=out+l_M_128-1, end_address=out-1, l_M_128, l_M%16, kexit_info
// stack: cur_offset=out+l_M_128-1, end_offset=out-1, l_M_128, l_M%16, kexit_info
DUP1 %mload_current_general
%stack (cur_limb, cur_address, end_address, l_M_128, l_M_mod16, kexit_info) ->
(@SEGMENT_RETURNDATA, 0, cur_limb, l_M_mod16, cur_address, end_address, l_M_128, kexit_info)
%stack (cur_limb, cur_offset, end_offset, l_M_128, l_M_mod16, kexit_info) ->
(@SEGMENT_RETURNDATA, cur_limb, l_M_mod16, cur_offset, end_offset, l_M_128, kexit_info)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%build_address_no_offset
%mstore_unpacking
// stack: offset, cur_address, end_address, l_M_128, kexit_info
// stack: address, cur_offset, end_offset, l_M_128, kexit_info
SWAP1
%decrement
// stack: cur_address, offset, end_address, l_M_128, kexit_info
// stack: cur_offset, address, end_offset, l_M_128, kexit_info
// Store in big-endian format.
expmod_store_loop:
// stack: cur_address, offset, end_address, l_M_128, kexit_info
// stack: cur_offset, address, end_offset, l_M_128, kexit_info
DUP3 DUP2 EQ %jumpi(expmod_store_end)
// stack: cur_address, offset, end_address, l_M_128, kexit_info
// stack: cur_offset, address, end_offset, l_M_128, kexit_info
DUP1 %mload_current_general
%stack (cur_limb, cur_address, offset, end_address, l_M_128, kexit_info) ->
(offset, cur_limb, cur_address, end_address, l_M_128, kexit_info)
%stack (offset, cur_limb) -> (@SEGMENT_RETURNDATA, offset, cur_limb, 16)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (cur_limb, cur_offset, address, end_offset, l_M_128, kexit_info) ->
(address, cur_limb, cur_offset, end_offset, l_M_128, kexit_info)
%stack (address, cur_limb) -> (address, cur_limb, 16)
%mstore_unpacking
// stack: offset', cur_address, end_address, l_M_128, kexit_info)
// stack: address', cur_offset, end_offset, l_M_128, kexit_info)
SWAP1 %decrement
// stack: cur_address-1, offset', end_address, l_M_128, kexit_info)
// stack: cur_offset-1, address', end_offset, l_M_128, kexit_info)
%jump(expmod_store_loop)
expmod_store_end:
// stack: cur_address, offset, end_address, l_M_128, kexit_info
// stack: cur_offset, address, end_offset, l_M_128, kexit_info
%pop4
the_end:
// stack: kexit_info

View File

@ -24,14 +24,19 @@ global precompile_id:
// Simply copy the call data to the parent's return data.
%calldatasize
DUP1 %mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE)
PUSH id_contd SWAP1
PUSH @SEGMENT_CALLDATA
GET_CONTEXT
%build_address_no_offset
// stack: SRC, size, id_contd
PUSH @SEGMENT_RETURNDATA
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, ctx, size) ->
(
parent_ctx, @SEGMENT_RETURNDATA, 0, // DST
ctx, @SEGMENT_CALLDATA, 0, // SRC
size, id_contd // count, retdest
)
%build_address_no_offset
// stack: DST, SRC, size, id_contd
%jump(memcpy_bytes)
id_contd:

View File

@ -58,7 +58,9 @@ global handle_precompiles_from_eoa:
%mload_txn_field(@TXN_FIELD_DATA_LEN)
%stack (calldata_size, new_ctx) -> (calldata_size, new_ctx, calldata_size)
%set_new_ctx_calldata_size
%stack (new_ctx, calldata_size) -> (new_ctx, @SEGMENT_CALLDATA, 0, 0, @SEGMENT_TXN_DATA, 0, calldata_size, handle_precompiles_from_eoa_finish, new_ctx)
%stack (new_ctx, calldata_size) -> (@SEGMENT_TXN_DATA, @SEGMENT_CALLDATA, new_ctx, calldata_size, handle_precompiles_from_eoa_finish, new_ctx)
SWAP2 %build_address_no_offset // DST
// stack: DST, SRC, calldata_size, handle_precompiles_from_eoa_finish, new_ctx
%jump(memcpy_bytes)
handle_precompiles_from_eoa_finish:

View File

@ -25,27 +25,17 @@ global precompile_rip160:
%calldatasize
GET_CONTEXT
// The next block of code is equivalent to the following %stack macro call
// (unfortunately the macro call takes too long to expand dynamically).
//
// %stack (ctx, size) ->
// (
// ctx, @SEGMENT_KERNEL_GENERAL, 200, // DST
// ctx, @SEGMENT_CALLDATA, 0, // SRC
// size, ripemd, // count, retdest
// 200, size, rip160_contd // ripemd input: virt, num_bytes, retdest
// )
PUSH 200
PUSH ripemd
DUP4
PUSH 0
PUSH @SEGMENT_CALLDATA
PUSH rip160_contd
SWAP7
SWAP6
PUSH 200
PUSH @SEGMENT_KERNEL_GENERAL
DUP3
%stack (ctx, size) ->
(
ctx, @SEGMENT_CALLDATA, // SRC
ctx,
size, ripemd, // count, retdest
200, size, rip160_contd // ripemd input: virt, num_bytes, retdest
)
%build_address_no_offset
%stack(addr, ctx) -> (ctx, @SEGMENT_KERNEL_GENERAL, 200, addr)
%build_address
// stack: DST, SRC, count, retdest, virt, num_bytes, retdest
%jump(memcpy_bytes)
@ -54,5 +44,6 @@ rip160_contd:
// Store the result hash to the parent's return data using `mstore_unpacking`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 32)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, hash) -> (parent_ctx, @SEGMENT_RETURNDATA, 0, hash, 32, pop_and_return_success)
%stack (parent_ctx, hash) -> (parent_ctx, @SEGMENT_RETURNDATA, hash, 32, pop_and_return_success)
%build_address_no_offset
%jump(mstore_unpacking)

View File

@ -24,30 +24,18 @@ global precompile_sha256:
// Copy the call data to the kernel general segment (sha2 expects it there) and call sha2.
%calldatasize
GET_CONTEXT
// stack: ctx, size
// The next block of code is equivalent to the following %stack macro call
// (unfortunately the macro call takes too long to expand dynamically).
//
// %stack (ctx, size) ->
// (
// ctx, @SEGMENT_KERNEL_GENERAL, 1, // DST
// ctx, @SEGMENT_CALLDATA, 0, // SRC
// size, sha2, // count, retdest
// 0, size, sha256_contd // sha2 input: virt, num_bytes, retdest
// )
//
PUSH 0
PUSH sha2
DUP4
PUSH 0
PUSH @SEGMENT_CALLDATA
PUSH sha256_contd
SWAP7
SWAP6
PUSH 1
PUSH @SEGMENT_KERNEL_GENERAL
DUP3
%stack (ctx, size) ->
(
ctx, @SEGMENT_CALLDATA, // SRC
ctx,
size, sha2, // count, retdest
0, size, sha256_contd // sha2 input: virt, num_bytes, retdest
)
%build_address_no_offset
%stack(addr, ctx) -> (ctx, @SEGMENT_KERNEL_GENERAL, 1, addr)
%build_address
// stack: DST, SRC, count, retdest, virt, num_bytes, retdest
%jump(memcpy_bytes)
@ -56,5 +44,6 @@ sha256_contd:
// Store the result hash to the parent's return data using `mstore_unpacking`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 32)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, hash) -> (parent_ctx, @SEGMENT_RETURNDATA, 0, hash, 32, pop_and_return_success)
%stack (parent_ctx, hash) -> (parent_ctx, @SEGMENT_RETURNDATA, hash, 32, pop_and_return_success)
%build_address_no_offset
%jump(mstore_unpacking)

View File

@ -31,18 +31,21 @@ loading_loop:
// stack: px, i, k, kexit_info
GET_CONTEXT
%stack (ctx, px) -> (ctx, @SEGMENT_CALLDATA, px, 32, loading_loop_contd, px)
%build_address
%jump(mload_packing)
loading_loop_contd:
// stack: x, px, i, k, kexit_info
SWAP1 %add_const(32)
GET_CONTEXT
%stack (ctx, py) -> (ctx, @SEGMENT_CALLDATA, py, 32, loading_loop_contd2, py)
%build_address
%jump(mload_packing)
loading_loop_contd2:
// stack: y, py, x, i, k, kexit_info
SWAP1 %add_const(32)
GET_CONTEXT
%stack (ctx, px_im) -> (ctx, @SEGMENT_CALLDATA, px_im, 32, loading_loop_contd3, px_im)
%build_address
%jump(mload_packing)
loading_loop_contd3:
// stack: x_im, px_im, y, x, i, k, kexit_info
@ -50,6 +53,7 @@ loading_loop_contd3:
// stack: px_re, x_im, y, x, i, k, kexit_info
GET_CONTEXT
%stack (ctx, px_re) -> (ctx, @SEGMENT_CALLDATA, px_re, 32, loading_loop_contd4, px_re)
%build_address
%jump(mload_packing)
loading_loop_contd4:
// stack: x_re, px_re, x_im, y, x, i, k, kexit_info
@ -57,6 +61,7 @@ loading_loop_contd4:
// stack: py_im, x_re, x_im, y, x, i, k, kexit_info
GET_CONTEXT
%stack (ctx, py_im) -> (ctx, @SEGMENT_CALLDATA, py_im, 32, loading_loop_contd5, py_im)
%build_address
%jump(mload_packing)
loading_loop_contd5:
// stack: y_im, py_im, x_re, x_im, y, x, i, k, kexit_info
@ -64,6 +69,7 @@ loading_loop_contd5:
// stack: py_re, y_im, x_re, x_im, y, x, i, k, kexit_info
GET_CONTEXT
%stack (ctx, py_re) -> (ctx, @SEGMENT_CALLDATA, py_re, 32, loading_loop_contd6)
%build_address
%jump(mload_packing)
loading_loop_contd6:
// stack: y_re, y_im, x_re, x_im, y, x, i, k, kexit_info
@ -118,5 +124,6 @@ got_result:
// Store the result bool (repr. by a U256) to the parent's return data using `mstore_unpacking`.
%mstore_parent_context_metadata(@CTX_METADATA_RETURNDATA_SIZE, 32)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, address) -> (parent_ctx, @SEGMENT_RETURNDATA, 0, address, 32, pop_and_return_success)
%stack (parent_ctx, address) -> (parent_ctx, @SEGMENT_RETURNDATA, address, 32, pop_and_return_success)
%build_address_no_offset
%jump(mstore_unpacking)

View File

@ -12,11 +12,11 @@ global process_normalized_txn:
// Compute this transaction's intrinsic gas and store it.
%intrinsic_gas
DUP1
%mstore_txn_field(@TXN_FIELD_INTRINSIC_GAS)
// stack: retdest
// stack: intrinsic_gas, retdest
// Assert gas_limit >= intrinsic_gas.
%mload_txn_field(@TXN_FIELD_INTRINSIC_GAS)
%mload_txn_field(@TXN_FIELD_GAS_LIMIT)
%assert_ge(invalid_txn)
@ -146,23 +146,20 @@ global process_contract_creation_txn:
// Store constructor code length
PUSH @CTX_METADATA_CODE_SIZE
PUSH @SEGMENT_CONTEXT_METADATA
// stack: segment, offset, new_ctx, address, retdest
DUP3 // new_ctx
// stack: offset, new_ctx, address, retdest
DUP2 // new_ctx
ADD // CTX_METADATA_CODE_SIZE is already scaled by its segment
// stack: addr, new_ctx, address, retdest
%mload_txn_field(@TXN_FIELD_DATA_LEN)
// stack: data_len, new_ctx, segment, offset, new_ctx, address, retdest
// stack: data_len, addr, new_ctx, address, retdest
MSTORE_GENERAL
// stack: new_ctx, address, retdest
// Copy the code from txdata to the new context's code segment.
PUSH process_contract_creation_txn_after_code_loaded
%mload_txn_field(@TXN_FIELD_DATA_LEN)
PUSH 0 // SRC.offset
PUSH @SEGMENT_TXN_DATA // SRC.segment
PUSH 0 // SRC.context
PUSH 0 // DST.offset
PUSH @SEGMENT_CODE // DST.segment
DUP8 // DST.context = new_ctx
PUSH @SEGMENT_TXN_DATA // SRC (context == offset == 0)
DUP4 // DST (segment == 0 (i.e. CODE), and offset == 0)
%jump(memcpy_bytes)
global process_contract_creation_txn_after_code_loaded:
@ -203,9 +200,11 @@ global process_contract_creation_txn_after_constructor:
// Store the code hash of the new contract.
// stack: leftover_gas, new_ctx, address, retdest, success
GET_CONTEXT
%returndatasize
%stack (size, ctx) -> (ctx, @SEGMENT_RETURNDATA, 0, size) // context, segment, offset, len
PUSH @SEGMENT_RETURNDATA
GET_CONTEXT
%build_address_no_offset
// stack: addr, len
KECCAK_GENERAL
// stack: codehash, leftover_gas, new_ctx, address, retdest, success
%observe_new_contract
@ -292,7 +291,8 @@ global process_message_txn_code_loaded:
%mload_txn_field(@TXN_FIELD_DATA_LEN)
%stack (calldata_size, new_ctx, retdest) -> (calldata_size, new_ctx, calldata_size, retdest)
%set_new_ctx_calldata_size
%stack (new_ctx, calldata_size, retdest) -> (new_ctx, @SEGMENT_CALLDATA, 0, 0, @SEGMENT_TXN_DATA, 0, calldata_size, process_message_txn_code_loaded_finish, new_ctx, retdest)
%stack (new_ctx, calldata_size, retdest) -> (new_ctx, @SEGMENT_CALLDATA, @SEGMENT_TXN_DATA, calldata_size, process_message_txn_code_loaded_finish, new_ctx, retdest)
%build_address_no_offset // DST
%jump(memcpy_bytes)
process_message_txn_code_loaded_finish:

View File

@ -29,18 +29,26 @@ return_after_gas:
// Store the return data size in the parent context's metadata.
%stack (parent_ctx, kexit_info, offset, size) ->
(size, parent_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_RETURNDATA_SIZE, offset, size, parent_ctx, kexit_info)
(parent_ctx, @CTX_METADATA_RETURNDATA_SIZE, size, offset, size, parent_ctx, kexit_info)
ADD // addr (CTX offsets are already scaled by their segment)
SWAP1
// stack: size, addr, offset, size, parent_ctx, kexit_info
MSTORE_GENERAL
// stack: offset, size, parent_ctx, kexit_info
// Store the return data in the parent context's returndata segment.
PUSH @SEGMENT_MAIN_MEMORY
GET_CONTEXT
%stack (ctx, offset, size, parent_ctx, kexit_info) ->
%build_address
%stack (addr, size, parent_ctx, kexit_info) ->
(
parent_ctx, @SEGMENT_RETURNDATA, 0, // DST
ctx, @SEGMENT_MAIN_MEMORY, offset, // SRC
parent_ctx, @SEGMENT_RETURNDATA, // DST
addr, // SRC
size, sys_return_finish, kexit_info // count, retdest, ...
)
%build_address_no_offset
// stack: DST, SRC, size, sys_return_finish, kexit_info
%jump(memcpy_bytes)
sys_return_finish:
@ -129,18 +137,26 @@ revert_after_gas:
// Store the return data size in the parent context's metadata.
%stack (parent_ctx, kexit_info, offset, size) ->
(size, parent_ctx, @SEGMENT_CONTEXT_METADATA, @CTX_METADATA_RETURNDATA_SIZE, offset, size, parent_ctx, kexit_info)
(parent_ctx, @CTX_METADATA_RETURNDATA_SIZE, size, offset, size, parent_ctx, kexit_info)
ADD // addr (CTX offsets are already scaled by their segment)
SWAP1
// stack: size, addr, offset, size, parent_ctx, kexit_info
MSTORE_GENERAL
// stack: offset, size, parent_ctx, kexit_info
// Store the return data in the parent context's returndata segment.
PUSH @SEGMENT_MAIN_MEMORY
GET_CONTEXT
%stack (ctx, offset, size, parent_ctx, kexit_info) ->
%build_address
%stack (addr, size, parent_ctx, kexit_info) ->
(
parent_ctx, @SEGMENT_RETURNDATA, 0, // DST
ctx, @SEGMENT_MAIN_MEMORY, offset, // SRC
parent_ctx, @SEGMENT_RETURNDATA, // DST
addr, // SRC
size, sys_revert_finish, kexit_info // count, retdest, ...
)
%build_address_no_offset
// stack: DST, SRC, size, sys_revert_finish, kexit_info
%jump(memcpy_bytes)
sys_revert_finish:
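A rough sketch of what the RETURN and REVERT paths above do, with the kernel's segmented memory reduced to plain Rust fields; the names main_memory, returndata and returndata_size are illustrative stand-ins for @SEGMENT_MAIN_MEMORY, @SEGMENT_RETURNDATA and @CTX_METADATA_RETURNDATA_SIZE, not kernel identifiers.

// Illustrative model only: each context's memory as flat byte vectors.
struct Ctx {
    main_memory: Vec<u8>,
    returndata: Vec<u8>,
    returndata_size: usize,
}

// Both paths record the size in the parent's metadata, then copy `size` bytes
// starting at `offset` of the child's main memory into the parent's RETURNDATA
// (the memcpy_bytes call above).
fn pass_returndata(child: &Ctx, parent: &mut Ctx, offset: usize, size: usize) {
    parent.returndata_size = size;
    parent.returndata = child.main_memory[offset..offset + size].to_vec();
}

fn main() {
    let child = Ctx { main_memory: vec![1, 2, 3, 4], returndata: vec![], returndata_size: 0 };
    let mut parent = Ctx { main_memory: vec![], returndata: vec![], returndata_size: 0 };
    pass_returndata(&child, &mut parent, 1, 2);
    assert_eq!(parent.returndata, vec![2, 3]);
    assert_eq!(parent.returndata_size, 2);
}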

View File

@ -11,7 +11,7 @@
%macro next_context_id
// stack: (empty)
%mload_global_metadata(@GLOBAL_METADATA_LARGEST_CONTEXT)
%increment
%add_const(0x10000000000000000) // scale each context by 2^64
// stack: new_ctx
DUP1
%mstore_global_metadata(@GLOBAL_METADATA_LARGEST_CONTEXT)
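For reference, a sketch of the bundled-address arithmetic this change relies on. The 2^64 context scaling is taken from the constant above; the 2^32 segment scaling is an assumption made here for illustration (whatever the exact shift, the point is that contexts, segments and offsets occupy disjoint bit ranges, so building an address is a plain sum of pre-scaled components).

const CONTEXT_SHIFT: u32 = 64; // matches the %add_const(0x10000000000000000) scaling above
const SEGMENT_SHIFT: u32 = 32; // assumed here; only the disjoint bit ranges matter

// %build_address: all three components are pre-scaled, so bundling is plain addition.
fn build_address(scaled_ctx: u128, scaled_segment: u128, virt: u128) -> u128 {
    scaled_ctx + scaled_segment + virt
}

// Recover the components, e.g. for an unscaling helper.
fn unbundle(addr: u128) -> (u128, u128, u128) {
    let ctx = addr >> CONTEXT_SHIFT;
    let segment = (addr >> SEGMENT_SHIFT) & ((1u128 << (CONTEXT_SHIFT - SEGMENT_SHIFT)) - 1);
    let virt = addr & ((1u128 << SEGMENT_SHIFT) - 1);
    (ctx, segment, virt)
}

fn main() {
    let ctx = 3u128 << CONTEXT_SHIFT;     // a pre-scaled context, as GET_CONTEXT now returns it
    let segment = 5u128 << SEGMENT_SHIFT; // a pre-scaled @SEGMENT_* constant (index 5 is made up)
    let addr = build_address(ctx, segment, 0x20);
    assert_eq!(unbundle(addr), (3, 5, 0x20));
}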
@ -83,7 +83,6 @@
SET_CONTEXT
// stack: (empty)
// We can now read this stack length from memory.
push @CTX_METADATA_STACK_SIZE
%mload_current(@SEGMENT_CONTEXT_METADATA)
%mload_context_metadata(@CTX_METADATA_STACK_SIZE)
// stack: stack_length
%endmacro

View File

@ -34,8 +34,12 @@ wnaf_loop_contd:
DUP2 SWAP1 SUB
%stack (n, m, segment, o, retdest) -> (129, o, m, o, segment, n, retdest)
SUB
// stack: i, m, o, segment, n, retdest
DUP4
GET_CONTEXT
%stack (ctx, i, m, o, segment, n, retdest) -> (m, ctx, segment, i, o, segment, n, retdest)
%build_address
// stack: addr, m, o, segment, n, retdest
SWAP1
MSTORE_GENERAL
// stack: o, segment, n, retdest
DUP3 ISZERO %jumpi(wnaf_end)

View File

@ -2,9 +2,7 @@ global main:
// First, hash the kernel code
%mload_global_metadata(@GLOBAL_METADATA_KERNEL_LEN)
PUSH 0
PUSH 0
PUSH 0
// stack: context, segment, virt, len
// stack: addr, len
KECCAK_GENERAL
// stack: hash
%mload_global_metadata(@GLOBAL_METADATA_KERNEL_HASH)
@ -13,6 +11,10 @@ global main:
// Initialise the shift table
%shift_table_init
// Initialize the RLP DATA pointer to its initial position (ctx == virt == 0, segment = RLP)
PUSH @SEGMENT_RLP_RAW
%mstore_global_metadata(@GLOBAL_METADATA_RLP_DATA_SIZE)
// Initialize the state, transaction and receipt trie root pointers.
PROVER_INPUT(trie_ptr::state)

View File

@ -1,39 +1,31 @@
// Load a big-endian u32, consisting of 4 bytes (c_3, c_2, c_1, c_0).
%macro mload_u32
// stack: context, segment, offset
%stack (addr: 3) -> (addr, 4, %%after)
// stack: addr
%stack (addr) -> (addr, 4, %%after)
%jump(mload_packing)
%%after:
%endmacro
// Load a little-endian u32, consisting of 4 bytes (c_0, c_1, c_2, c_3).
%macro mload_u32_LE
// stack: context, segment, offset
DUP3
DUP3
DUP3
// stack: addr
DUP1
MLOAD_GENERAL
// stack: c0, context, segment, offset
DUP4
// stack: c0, addr
DUP2
%increment
DUP4
DUP4
MLOAD_GENERAL
%shl_const(8)
ADD
// stack: c0 | (c1 << 8), context, segment, offset
DUP4
// stack: c0 | (c1 << 8), addr
DUP2
%add_const(2)
DUP4
DUP4
MLOAD_GENERAL
%shl_const(16)
ADD
// stack: c0 | (c1 << 8) | (c2 << 16), context, segment, offset
SWAP3
%add_const(3)
SWAP2
// stack: c0 | (c1 << 8) | (c2 << 16), addr
SWAP1
%add_const(3)
MLOAD_GENERAL
%shl_const(24)
ADD // OR
@ -42,16 +34,12 @@
// Load a little-endian u64, consisting of 8 bytes (c_0, ..., c_7).
%macro mload_u64_LE
// stack: context, segment, offset
DUP3
DUP3
DUP3
// stack: addr
DUP1
%mload_u32_LE
// stack: lo, context, segment, offset
SWAP3
%add_const(4)
SWAP2
// stack: lo, addr
SWAP1
%add_const(4)
%mload_u32_LE
// stack: hi, lo
%shl_const(32)
@ -62,16 +50,16 @@
// Load a big-endian u256.
%macro mload_u256
// stack: context, segment, offset
%stack (addr: 3) -> (addr, 32, %%after)
// stack: addr
%stack (addr) -> (addr, 32, %%after)
%jump(mload_packing)
%%after:
%endmacro
// Store a big-endian u32, consisting of 4 bytes (c_3, c_2, c_1, c_0).
%macro mstore_u32
// stack: context, segment, offset, value
%stack (addr: 3, value) -> (addr, value, 4, %%after)
// stack: addr, value
%stack (addr, value) -> (addr, value, 4, %%after)
%jump(mstore_unpacking)
%%after:
// stack: offset
@ -88,6 +76,7 @@
// stack: segment, offset
GET_CONTEXT
// stack: context, segment, offset
%build_address
MLOAD_GENERAL
// stack: value
%endmacro
@ -102,7 +91,8 @@
// stack: segment, offset, value
GET_CONTEXT
// stack: context, segment, offset, value
%stack(context, segment, offset, value) -> (value, context, segment, offset)
%build_address
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -115,7 +105,8 @@
// stack: segment, offset, value
GET_CONTEXT
// stack: context, segment, offset, value
%stack(context, segment, offset, value) -> (value, context, segment, offset)
%build_address
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -123,7 +114,10 @@
// Load a single byte from user code.
%macro mload_current_code
// stack: offset
%mload_current(@SEGMENT_CODE)
// SEGMENT_CODE == 0
GET_CONTEXT ADD
// stack: addr
MLOAD_GENERAL
// stack: value
%endmacro
@ -141,6 +135,7 @@
// stack: segment, offset
GET_CONTEXT
// stack: context, segment, offset
%build_address
%mload_u32
// stack: value
%endmacro
@ -152,6 +147,7 @@
// stack: segment, offset
GET_CONTEXT
// stack: context, segment, offset
%build_address
%mload_u32_LE
// stack: value
%endmacro
@ -163,6 +159,7 @@
// stack: segment, offset
GET_CONTEXT
// stack: context, segment, offset
%build_address
%mload_u64_LE
// stack: value
%endmacro
@ -174,6 +171,7 @@
// stack: segment, offset
GET_CONTEXT
// stack: context, segment, offset
%build_address
%mload_u256
// stack: value
%endmacro
@ -185,7 +183,8 @@
// stack: segment, offset, value
GET_CONTEXT
// stack: context, segment, offset, value
%stack(context, segment, offset, value) -> (value, context, segment, offset)
%build_address
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -205,6 +204,7 @@
// stack: segment, offset, value
GET_CONTEXT
// stack: context, segment, offset, value
%build_address
%mstore_u32
// stack: (empty)
%endmacro
@ -224,8 +224,7 @@
// stack: offset
PUSH $segment
// stack: segment, offset
PUSH 0 // kernel has context 0
// stack: context, segment, offset
%build_kernel_address
MLOAD_GENERAL
// stack: value
%endmacro
@ -235,9 +234,9 @@
// stack: offset, value
PUSH $segment
// stack: segment, offset, value
PUSH 0 // kernel has context 0
// stack: context, segment, offset, value
%stack(context, segment, offset, value) -> (value, context, segment, offset)
%build_kernel_address
// stack: addr, value
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -249,9 +248,9 @@
// stack: offset, value
PUSH $segment
// stack: segment, offset, value
PUSH 0 // kernel has context 0
// stack: context, segment, offset, value
%stack(context, segment, offset, value) -> (value, context, segment, offset)
%build_kernel_address
// stack: addr, value
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -261,8 +260,7 @@
// stack: offset
PUSH $segment
// stack: segment, offset
PUSH 0 // kernel has context 0
// stack: context, segment, offset
%build_kernel_address
%mload_u32
%endmacro
@ -271,8 +269,7 @@
// stack: offset
PUSH $segment
// stack: segment, offset
PUSH 0 // kernel has context 0
// stack: context, segment, offset
%build_kernel_address
%mload_u32_LE
%endmacro
@ -281,8 +278,7 @@
// stack: offset
PUSH $segment
// stack: segment, offset
PUSH 0 // kernel has context 0
// stack: context, segment, offset
%build_kernel_address
%mload_u64_LE
%endmacro
@ -291,8 +287,7 @@
// stack: offset
PUSH $segment
// stack: segment, offset
PUSH 0 // kernel has context 0
// stack: context, segment, offset
%build_kernel_address
%mload_u256
%endmacro
@ -302,15 +297,16 @@
// stack: offset, value
PUSH $segment
// stack: segment, offset, value
PUSH 0 // kernel has context 0
// stack: context, segment, offset, value
%build_kernel_address
// stack: addr, value
%mstore_u32
%endmacro
// Load a single byte from kernel code.
%macro mload_kernel_code
// stack: offset
%mload_kernel(@SEGMENT_CODE)
// ctx == SEGMENT_CODE == 0
MLOAD_GENERAL
// stack: value
%endmacro
@ -327,7 +323,8 @@
// from kernel code.
%macro mload_kernel_code_u32
// stack: offset
%mload_kernel_u32(@SEGMENT_CODE)
// ctx == SEGMENT_CODE == 0
%mload_u32
// stack: value
%endmacro
@ -338,7 +335,8 @@
PUSH $label
ADD
// stack: offset
%mload_kernel_u32(@SEGMENT_CODE)
// ctx == SEGMENT_CODE == 0
%mload_u32
// stack: value
%endmacro
@ -383,7 +381,8 @@
// Load a u256 (big-endian) from kernel code.
%macro mload_kernel_code_u256
// stack: offset
%mload_kernel_u256(@SEGMENT_CODE)
// ctx == SEGMENT_CODE == 0
%mload_u256
// stack: value
%endmacro
@ -397,7 +396,8 @@
// Store a single byte to kernel code.
%macro mstore_kernel_code
// stack: offset, value
%mstore_kernel(@SEGMENT_CODE)
// ctx == SEGMENT_CODE == 0
SWAP1
// stack: value, addr
MSTORE_GENERAL
// stack: (empty)
%endmacro
@ -405,13 +405,15 @@
// to kernel code.
%macro mstore_kernel_code_u32
// stack: offset, value
%mstore_kernel_u32(@SEGMENT_CODE)
// ctx == SEGMENT_CODE == 0
%mstore_u32
%endmacro
// Store a single byte to @SEGMENT_RLP_RAW.
%macro mstore_rlp
// stack: offset, value
%mstore_kernel(@SEGMENT_RLP_RAW)
// stack: addr, value
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro

View File

@ -1,25 +1,16 @@
// Copies `count` values from
// SRC = (src_ctx, src_segment, src_addr)
// to
// DST = (dst_ctx, dst_segment, dst_addr).
// These tuple definitions are used for brevity in the stack comments below.
// Copies `count` values from SRC to DST.
global memcpy:
// stack: DST, SRC, count, retdest
DUP7
DUP3
// stack: count, DST, SRC, count, retdest
ISZERO
// stack: count == 0, DST, SRC, count, retdest
%jumpi(memcpy_finish)
// stack: DST, SRC, count, retdest
DUP3
DUP3
DUP3
DUP1
// Copy the next value
// stack: DST, DST, SRC, count, retdest
DUP9
DUP9
DUP9
// Copy the next value.
DUP3
// stack: SRC, DST, DST, SRC, count, retdest
MLOAD_GENERAL
// stack: value, DST, DST, SRC, count, retdest
@ -27,23 +18,19 @@ global memcpy:
// stack: DST, SRC, count, retdest
// Increment dst_addr.
SWAP2
%increment
SWAP2
// Increment src_addr.
SWAP5
SWAP1
%increment
SWAP5
SWAP1
// Decrement count.
SWAP6
%decrement
SWAP6
PUSH 1 DUP4 SUB SWAP3 POP
// Continue the loop.
%jump(memcpy)
%macro memcpy
%stack (dst: 3, src: 3, count) -> (dst, src, count, %%after)
%stack (dst, src, count) -> (dst, src, count, %%after)
%jump(memcpy)
%%after:
%endmacro
@ -53,7 +40,7 @@ global memcpy_bytes:
// stack: DST, SRC, count, retdest
// Handle small case
DUP7
DUP3
// stack: count, DST, SRC, count, retdest
%lt_const(0x21)
// stack: count <= 32, DST, SRC, count, retdest
@ -61,31 +48,22 @@ global memcpy_bytes:
// We will pack 32 bytes into a U256 from the source, and then unpack it at the destination.
// Copy the next chunk of bytes.
// stack: DST, SRC, count, retdest
PUSH 32
DUP7
DUP7
DUP7
DUP3
// stack: SRC, 32, DST, SRC, count, retdest
MLOAD_32BYTES
// stack: value, DST, SRC, count, retdest
DUP4
DUP4
DUP4
// stack: DST, value, DST, SRC, count, retdest
SWAP1
// stack: DST, value, SRC, count, retdest
MSTORE_32BYTES_32
// stack: new_offset, DST, SRC, count, retdest
// Increment dst_addr by 32.
SWAP3
POP
// stack: DST, SRC, count, retdest
// Increment src_addr by 32.
SWAP5
// stack: DST', SRC, count, retdest
// Increment SRC by 32.
SWAP1
%add_const(0x20)
SWAP5
SWAP1
// Decrement count by 32.
SWAP6
%sub_const(0x20)
SWAP6
PUSH 32 DUP4 SUB SWAP3 POP
// Continue the loop.
%jump(memcpy_bytes)
@ -94,7 +72,7 @@ memcpy_bytes_finish:
// stack: DST, SRC, count, retdest
// Handle empty case
DUP7
DUP3
// stack: count, DST, SRC, count, retdest
ISZERO
// stack: count == 0, DST, SRC, count, retdest
@ -103,17 +81,13 @@ memcpy_bytes_finish:
// stack: DST, SRC, count, retdest
// Copy the last chunk of `count` bytes.
DUP7
DUP3
DUP1
DUP8
DUP8
DUP8
DUP4
// stack: SRC, count, count, DST, SRC, count, retdest
MLOAD_32BYTES
// stack: value, count, DST, SRC, count, retdest
DUP5
DUP5
DUP5
DUP3
// stack: DST, value, count, DST, SRC, count, retdest
%mstore_unpacking
// stack: new_offset, DST, SRC, count, retdest
@ -121,12 +95,12 @@ memcpy_bytes_finish:
memcpy_finish:
// stack: DST, SRC, count, retdest
%pop7
%pop3
// stack: retdest
JUMP
%macro memcpy_bytes
%stack (dst: 3, src: 3, count) -> (dst, src, count, %%after)
%stack (dst, src, count) -> (dst, src, count, %%after)
%jump(memcpy_bytes)
%%after:
%endmacro
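A sketch of the memcpy_bytes strategy above: while more than 32 bytes remain, MLOAD_32BYTES packs a 32-byte chunk from SRC into one word and MSTORE_32BYTES_32 unpacks it at DST; the final chunk (at most 32 bytes) goes through mstore_unpacking with the exact remaining length. In Rust terms, with a flat byte buffer, no segmentation and non-overlapping ranges assumed:

fn memcpy_bytes(mem: &mut Vec<u8>, mut dst: usize, mut src: usize, mut count: usize) {
    // Main loop: move full 32-byte chunks while more than 32 bytes remain.
    while count > 32 {
        let chunk = mem[src..src + 32].to_vec();
        mem[dst..dst + 32].copy_from_slice(&chunk);
        dst += 32;
        src += 32;
        count -= 32;
    }
    // memcpy_bytes_finish: the last chunk of `count` bytes (possibly empty).
    if count > 0 {
        let chunk = mem[src..src + count].to_vec();
        mem[dst..dst + count].copy_from_slice(&chunk);
    }
}

fn main() {
    let mut mem: Vec<u8> = (0u8..80).collect();
    memcpy_bytes(&mut mem, 40, 0, 35); // one full 32-byte chunk, then a 3-byte tail
    assert_eq!(mem[40], 0);
    assert_eq!(mem[74], 34);
}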

View File

@ -1,11 +1,9 @@
// Sets `count` values to 0 at
// DST = (dst_ctx, dst_segment, dst_addr).
// This tuple definition is used for brevity in the stack comments below.
// Sets `count` values to 0 at DST.
global memset:
// stack: DST, count, retdest
// Handle small case
DUP4
DUP2
// stack: count, DST, count, retdest
%lt_const(0x21)
// stack: count <= 32, DST, count, retdest
@ -13,20 +11,12 @@ global memset:
// stack: DST, count, retdest
PUSH 0
DUP4
DUP4
DUP4
// stack: DST, 0, DST, count, retdest
SWAP1
// stack: DST, 0, count, retdest
MSTORE_32BYTES_32
// stack: new_offset, DST, count, retdest
// Update dst_addr.
SWAP3
POP
// stack: DST', count, retdest
// Decrement count.
SWAP3
%sub_const(0x20)
SWAP3
PUSH 32 DUP3 SUB SWAP2 POP
// Continue the loop.
%jump(memset)
@ -35,27 +25,25 @@ memset_finish:
// stack: DST, final_count, retdest
// Handle empty case
DUP4
DUP2
// stack: final_count, DST, final_count, retdest
ISZERO
// stack: final_count == 0, DST, final_count, retdest
%jumpi(memset_bytes_empty)
// stack: DST, final_count, retdest
DUP4
DUP2
PUSH 0
DUP5
DUP5
DUP5
DUP3
// stack: DST, 0, final_count, DST, final_count, retdest
%mstore_unpacking
// stack: new_offset, DST, final_count, retdest
%pop5
// stack: DST, final_count, retdest
%pop3
// stack: retdest
JUMP
memset_bytes_empty:
// stack: DST, 0, retdest
%pop4
%pop2
// stack: retdest
JUMP

View File

@ -1,62 +1,104 @@
// Load the given global metadata field from memory.
%macro mload_global_metadata(field)
// Global metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: (empty)
PUSH $field
// stack: offset
%mload_kernel(@SEGMENT_GLOBAL_METADATA)
MLOAD_GENERAL
// stack: value
%endmacro
// Store the given global metadata field to memory.
%macro mstore_global_metadata(field)
// Global metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: value
PUSH $field
// stack: offset, value
%mstore_kernel(@SEGMENT_GLOBAL_METADATA)
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
// Load the given context metadata field from memory.
%macro mload_context_metadata(field)
// Context metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: (empty)
PUSH $field
// stack: offset
%mload_current(@SEGMENT_CONTEXT_METADATA)
GET_CONTEXT
ADD
// stack: addr
MLOAD_GENERAL
// stack: value
%endmacro
// Store the given context metadata field to memory.
%macro mstore_context_metadata(field)
// Context metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: value
PUSH $field
// stack: offset, value
%mstore_current(@SEGMENT_CONTEXT_METADATA)
GET_CONTEXT
ADD
// stack: addr, value
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
// Store the given context metadata field to memory.
%macro mstore_context_metadata(field, value)
PUSH $value
// Context metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
PUSH $field
// stack: offset, value
%mstore_current(@SEGMENT_CONTEXT_METADATA)
GET_CONTEXT
ADD
// stack: addr
PUSH $value
// stack: value, addr
MSTORE_GENERAL
// stack: (empty)
%endmacro
%macro mstore_parent_context_metadata(field)
// Context metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: value
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx, value) ->
(value, parent_ctx, @SEGMENT_CONTEXT_METADATA, $field)
// stack: parent_ctx, value
PUSH $field ADD
// stack: addr, value
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro
%macro mstore_parent_context_metadata(field, value)
// Context metadata are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: (empty)
%mload_context_metadata(@CTX_METADATA_PARENT_CONTEXT)
%stack (parent_ctx) ->
($value, parent_ctx, @SEGMENT_CONTEXT_METADATA, $field)
// stack: parent_ctx
PUSH $field ADD
// stack: addr
PUSH $value
// stack: value, addr
MSTORE_GENERAL
// stack: (empty)
%endmacro
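A small sketch of the addressing convention these metadata macros now rely on: every metadata constant is assumed to already include its segment's base, so a global-metadata field (kernel context 0) is itself a complete address, and a context-metadata field only needs the current, pre-scaled context added to it. All values below are illustrative, not the real constants.

const SEGMENT_GLOBAL_METADATA: u128 = 6 << 32;  // made-up pre-scaled segment bases
const SEGMENT_CONTEXT_METADATA: u128 = 7 << 32; // for illustration only

// A "field" constant as these macros use it: segment base + slot index, i.e. pre-scaled.
const fn scaled_field(segment_base: u128, slot: u128) -> u128 {
    segment_base + slot
}

fn main() {
    // %mload_global_metadata: the field constant is already a complete kernel address.
    let global_field = scaled_field(SEGMENT_GLOBAL_METADATA, 3);
    assert_eq!(global_field, (6u128 << 32) + 3);

    // %mload_context_metadata: PUSH $field; GET_CONTEXT; ADD.
    let ctx = 2u128 << 64; // a pre-scaled context
    let addr = ctx + scaled_field(SEGMENT_CONTEXT_METADATA, 5);
    assert_eq!(addr, (2u128 << 64) + (7u128 << 32) + 5);
}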

View File

@ -2,11 +2,10 @@
// decoding bytes as integers. All big-endian.
// Given a pointer to some bytes in memory, pack them into a word. Assumes 0 < len <= 32.
// Pre stack: addr: 3, len, retdest
// Pre stack: addr, len, retdest
// Post stack: packed_value
// NOTE: addr: 3 denotes a (context, segment, virtual) tuple
global mload_packing:
// stack: addr: 3, len, retdest
// stack: addr, len, retdest
MLOAD_32BYTES
// stack: packed_value, retdest
SWAP1
@ -14,50 +13,50 @@ global mload_packing:
JUMP
%macro mload_packing
%stack (addr: 3, len) -> (addr, len, %%after)
%stack (addr, len) -> (addr, len, %%after)
%jump(mload_packing)
%%after:
%endmacro
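A sketch of the packing MLOAD_32BYTES performs here, assuming one byte per memory cell: the `len` bytes starting at `addr` are read as a big-endian integer. u128 is used below just to keep the example self-contained; the kernel works with 256-bit words.

fn mload_packing(mem: &[u8], addr: usize, len: usize) -> u128 {
    assert!(len > 0 && len <= 16); // the kernel allows up to 32 bytes
    mem[addr..addr + len]
        .iter()
        .fold(0u128, |acc, &byte| (acc << 8) | byte as u128)
}

fn main() {
    let mem = [0x12, 0x34, 0x56, 0x78];
    assert_eq!(mload_packing(&mem, 1, 3), 0x345678);
}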
global mload_packing_u64_LE:
// stack: context, segment, offset, retdest
DUP3 DUP3 DUP3 MLOAD_GENERAL
DUP4 %add_const(1) DUP4 DUP4 MLOAD_GENERAL %shl_const( 8) ADD
DUP4 %add_const(2) DUP4 DUP4 MLOAD_GENERAL %shl_const(16) ADD
DUP4 %add_const(3) DUP4 DUP4 MLOAD_GENERAL %shl_const(24) ADD
DUP4 %add_const(4) DUP4 DUP4 MLOAD_GENERAL %shl_const(32) ADD
DUP4 %add_const(5) DUP4 DUP4 MLOAD_GENERAL %shl_const(40) ADD
DUP4 %add_const(6) DUP4 DUP4 MLOAD_GENERAL %shl_const(48) ADD
DUP4 %add_const(7) DUP4 DUP4 MLOAD_GENERAL %shl_const(56) ADD
%stack (value, context, segment, offset, retdest) -> (retdest, value)
// stack: addr, retdest
DUP1 MLOAD_GENERAL
DUP2 %add_const(1) MLOAD_GENERAL %shl_const( 8) ADD
DUP2 %add_const(2) MLOAD_GENERAL %shl_const(16) ADD
DUP2 %add_const(3) MLOAD_GENERAL %shl_const(24) ADD
DUP2 %add_const(4) MLOAD_GENERAL %shl_const(32) ADD
DUP2 %add_const(5) MLOAD_GENERAL %shl_const(40) ADD
DUP2 %add_const(6) MLOAD_GENERAL %shl_const(48) ADD
DUP2 %add_const(7) MLOAD_GENERAL %shl_const(56) ADD
%stack (value, addr, retdest) -> (retdest, value)
JUMP
%macro mload_packing_u64_LE
%stack (addr: 3) -> (addr, %%after)
%stack (addr) -> (addr, %%after)
%jump(mload_packing_u64_LE)
%%after:
%endmacro
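The little-endian counterpart above combines eight consecutive bytes, byte i shifted left by 8*i, exactly the chain of MLOAD_GENERAL / %shl_const / ADD. As a sketch:

fn mload_packing_u64_le(mem: &[u8], addr: usize) -> u64 {
    // Byte i contributes mem[addr + i] << (8 * i), as in the ADD chain above.
    (0..8usize).fold(0u64, |acc, i| acc + ((mem[addr + i] as u64) << (8 * i)))
}

fn main() {
    let mem: [u8; 8] = [0x01, 0, 0, 0, 0, 0, 0, 0x02];
    assert_eq!(mload_packing_u64_le(&mem, 0), 0x0200_0000_0000_0001);
}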
// Pre stack: context, segment, offset, value, len, retdest
// Post stack: offset'
// Pre stack: addr, value, len, retdest
// Post stack: addr'
global mstore_unpacking:
// stack: context, segment, offset, value, len, retdest
DUP5 ISZERO
// stack: len == 0, context, segment, offset, value, len, retdest
// stack: addr, value, len, retdest
DUP3 ISZERO
// stack: len == 0, addr, value, len, retdest
%jumpi(mstore_unpacking_empty)
%stack(context, segment, offset, value, len, retdest) -> (len, context, segment, offset, value, retdest)
%stack(addr, value, len, retdest) -> (len, addr, value, retdest)
PUSH 3
// stack: BYTES_PER_JUMP, len, context, segment, offset, value, retdest
// stack: BYTES_PER_JUMP, len, addr, value, retdest
MUL
// stack: jump_offset, context, segment, offset, value, retdest
// stack: jump_offset, addr, value, retdest
PUSH mstore_unpacking_0
// stack: mstore_unpacking_0, jump_offset, context, segment, offset, value, retdest
// stack: mstore_unpacking_0, jump_offset, addr, value, retdest
ADD
// stack: address_unpacking, context, segment, offset, value, retdest
// stack: address_unpacking, addr, value, retdest
JUMP
mstore_unpacking_empty:
%stack(context, segment, offset, value, len, retdest) -> (retdest, offset)
%stack(addr, value, len, retdest) -> (retdest, addr)
JUMP
// This case can never be reached. It's only here to offset the table correctly.
@ -66,274 +65,274 @@ mstore_unpacking_0:
PANIC
%endrep
mstore_unpacking_1:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_1
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_2:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_2
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_3:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_3
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_4:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_4
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_5:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_5
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_6:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_6
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_7:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_7
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_8:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_8
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_9:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_9
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_10:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_10
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_11:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_11
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_12:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_12
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_13:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_13
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_14:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_14
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_15:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_15
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_16:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_16
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_17:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_17
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_18:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_18
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_19:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_19
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_20:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_20
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_21:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_21
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_22:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_22
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_23:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_23
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_24:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_24
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_25:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_25
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_26:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_26
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_27:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_27
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_28:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_28
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_29:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_29
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_30:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_30
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_31:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_31
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
mstore_unpacking_32:
// stack: context, segment, offset, value, retdest
// stack: addr, value, retdest
MSTORE_32BYTES_32
// stack: offset', retdest
// stack: addr', retdest
SWAP1
// stack: retdest, offset'
// stack: retdest, addr'
JUMP
%macro mstore_unpacking
%stack (addr: 3, value, len) -> (addr, value, len, %%after)
%stack (addr, value, len) -> (addr, value, len, %%after)
%jump(mstore_unpacking)
%%after:
%endmacro
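For clarity, a sketch of mstore_unpacking's effect. The dispatch above computes mstore_unpacking_0 + 3*len because every mstore_unpacking_N stub is exactly three instructions (MSTORE_32BYTES_N, SWAP1, JUMP); the net effect is writing `value` as `len` big-endian bytes at `addr` and returning the address just past them. A Rust sketch, with u128 standing in for a 256-bit word:

// Writes the low `len` bytes of `value` big-endian at `addr`, returning addr + len.
// Values shorter than `len` bytes are left-padded with zeroes; longer ones are truncated.
fn mstore_unpacking(mem: &mut [u8], addr: usize, value: u128, len: usize) -> usize {
    for i in 0..len {
        mem[addr + i] = (value >> (8 * (len - 1 - i))) as u8;
    }
    addr + len
}

fn main() {
    let mut mem = [0u8; 8];
    let next = mstore_unpacking(&mut mem, 2, 0xAABBCC, 3);
    assert_eq!(&mem[2..5], &[0xAA, 0xBB, 0xCC]);
    assert_eq!(next, 5);
}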
// Pre stack: context, segment, offset, value, retdest
// Post stack: offset'
// Pre stack: addr, value, retdest
// Post stack: addr'
global mstore_unpacking_u64_LE:
%stack (context, segment, offset, value) -> (0xff, value, context, segment, offset, context, segment, offset, value)
%stack (addr, value) -> (0xff, value, addr, addr, value)
AND
MSTORE_GENERAL // First byte
DUP3 %add_const(1)
%stack (new_offset, context, segment, offset, value) -> (0xff00, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(1)
%stack (new_addr, addr, value) -> (0xff00, value, new_addr, addr, value)
AND %shr_const(8)
MSTORE_GENERAL // Second byte
DUP3 %add_const(2)
%stack (new_offset, context, segment, offset, value) -> (0xff0000, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(2)
%stack (new_addr, addr, value) -> (0xff0000, value, new_addr, addr, value)
AND %shr_const(16)
MSTORE_GENERAL // Third byte
DUP3 %add_const(3)
%stack (new_offset, context, segment, offset, value) -> (0xff000000, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(3)
%stack (new_addr, addr, value) -> (0xff000000, value, new_addr, addr, value)
AND %shr_const(24)
MSTORE_GENERAL // Fourth byte
DUP3 %add_const(4)
%stack (new_offset, context, segment, offset, value) -> (0xff00000000, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(4)
%stack (new_addr, addr, value) -> (0xff00000000, value, new_addr, addr, value)
AND %shr_const(32)
MSTORE_GENERAL // Fifth byte
DUP3 %add_const(5)
%stack (new_offset, context, segment, offset, value) -> (0xff0000000000, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(5)
%stack (new_addr, addr, value) -> (0xff0000000000, value, new_addr, addr, value)
AND %shr_const(40)
MSTORE_GENERAL // Sixth byte
DUP3 %add_const(6)
%stack (new_offset, context, segment, offset, value) -> (0xff000000000000, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(6)
%stack (new_addr, addr, value) -> (0xff000000000000, value, new_addr, addr, value)
AND %shr_const(48)
MSTORE_GENERAL // Seventh byte
DUP3 %add_const(7)
%stack (new_offset, context, segment, offset, value) -> (0xff00000000000000, value, context, segment, new_offset, context, segment, offset, value)
DUP1 %add_const(7)
%stack (new_addr, addr, value) -> (0xff00000000000000, value, new_addr, addr, value)
AND %shr_const(56)
MSTORE_GENERAL // Eighth byte
%pop4 JUMP
%pop2 JUMP
%macro mstore_unpacking_u64_LE
%stack (addr: 3, value) -> (addr, value, %%after)
%stack (addr, value) -> (addr, value, %%after)
%jump(mstore_unpacking_u64_LE)
%%after:
%endmacro
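And the little-endian store used for u64 values, sketched below: byte i of the value (mask 0xff shifted left by 8*i, then shifted back down) goes to addr + i, matching the eight masked MSTORE_GENERAL writes above.

fn mstore_unpacking_u64_le(mem: &mut [u8], addr: usize, value: u64) {
    for i in 0..8usize {
        mem[addr + i] = (value >> (8 * i)) as u8;
    }
}

fn main() {
    let mut mem = [0u8; 8];
    mstore_unpacking_u64_le(&mut mem, 0, 0x0102_0304_0506_0708);
    assert_eq!(mem, [0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01]);
}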

View File

@ -11,7 +11,8 @@ global sys_mload:
%stack(kexit_info, offset) -> (offset, 32, kexit_info)
PUSH @SEGMENT_MAIN_MEMORY
GET_CONTEXT
// stack: addr: 3, len, kexit_info
%build_address
// stack: addr, len, kexit_info
MLOAD_32BYTES
%stack (value, kexit_info) -> (kexit_info, value)
EXIT_KERNEL
@ -29,7 +30,8 @@ global sys_mstore:
%stack(kexit_info, offset, value) -> (offset, value, kexit_info)
PUSH @SEGMENT_MAIN_MEMORY
GET_CONTEXT
// stack: addr: 3, value, kexit_info
%build_address
// stack: addr, value, kexit_info
MSTORE_32BYTES_32
POP
// stack: kexit_info
@ -60,7 +62,8 @@ global sys_calldataload:
LT %jumpi(calldataload_large_offset)
%stack (kexit_info, i) -> (@SEGMENT_CALLDATA, i, 32, sys_calldataload_after_mload_packing, kexit_info)
GET_CONTEXT
// stack: ADDR: 3, 32, sys_calldataload_after_mload_packing, kexit_info
%build_address
// stack: addr, 32, sys_calldataload_after_mload_packing, kexit_info
%jump(mload_packing)
sys_calldataload_after_mload_packing:
// stack: value, kexit_info
@ -113,7 +116,10 @@ wcopy_within_bounds:
// stack: segment, src_ctx, kexit_info, dest_offset, offset, size
GET_CONTEXT
%stack (context, segment, src_ctx, kexit_info, dest_offset, offset, size) ->
(context, @SEGMENT_MAIN_MEMORY, dest_offset, src_ctx, segment, offset, size, wcopy_after, kexit_info)
(src_ctx, segment, offset, @SEGMENT_MAIN_MEMORY, dest_offset, context, size, wcopy_after, kexit_info)
%build_address
SWAP3 %build_address
// stack: DST, SRC, size, wcopy_after, kexit_info
%jump(memcpy_bytes)
wcopy_empty:
@ -132,6 +138,7 @@ wcopy_large_offset:
GET_CONTEXT
%stack (context, kexit_info, dest_offset, offset, size) ->
(context, @SEGMENT_MAIN_MEMORY, dest_offset, size, wcopy_after, kexit_info)
%build_address
%jump(memset)
wcopy_after:
@ -241,6 +248,9 @@ extcodecopy_contd:
GET_CONTEXT
%stack (context, new_dest_offset, copy_size, extra_size, segment, src_ctx, kexit_info, dest_offset, offset, size) ->
(context, @SEGMENT_MAIN_MEMORY, dest_offset, src_ctx, segment, offset, copy_size, wcopy_large_offset, kexit_info, new_dest_offset, offset, extra_size)
(src_ctx, segment, offset, @SEGMENT_MAIN_MEMORY, dest_offset, context, copy_size, wcopy_large_offset, kexit_info, new_dest_offset, offset, extra_size)
%build_address
SWAP3 %build_address
// stack: DST, SRC, copy_size, wcopy_large_offset, kexit_info, new_dest_offset, offset, extra_size
%jump(memcpy_bytes)
%endmacro

View File

@ -1,18 +1,27 @@
// Load the given normalized transaction field from memory.
%macro mload_txn_field(field)
// Transaction fields are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: (empty)
PUSH $field
// stack: offset
%mload_kernel(@SEGMENT_NORMALIZED_TXN)
// stack: addr
MLOAD_GENERAL
// stack: value
%endmacro
// Store the given normalized transaction field to memory.
%macro mstore_txn_field(field)
// Transaction fields are already scaled by their corresponding segment,
// effectively making them the direct memory position to read from /
// write to.
// stack: value
PUSH $field
// stack: offset, value
%mstore_kernel(@SEGMENT_NORMALIZED_TXN)
// stack: addr, value
SWAP1
MSTORE_GENERAL
// stack: (empty)
%endmacro

View File

@ -29,15 +29,13 @@ mpt_hash_hash_if_rlp:
mpt_hash_hash_rlp:
// stack: result, result_len, new_len, retdest
%stack (result, result_len, new_len)
// context, segment, offset, value, len, trie_len, retdest
-> (0, @SEGMENT_RLP_RAW, 0, result, result_len, mpt_hash_hash_rlp_after_unpacking, new_len)
-> (@SEGMENT_RLP_RAW, result, result_len, mpt_hash_hash_rlp_after_unpacking, result_len, new_len)
// stack: addr, result, result_len, mpt_hash_hash_rlp_after_unpacking, result_len, new_len
%jump(mstore_unpacking)
mpt_hash_hash_rlp_after_unpacking:
// stack: result_len, new_len, retdest
PUSH 0 // offset
PUSH @SEGMENT_RLP_RAW // segment
PUSH 0 // context
// stack: result_addr: 3, result_len, new_len, retdest
// stack: result_addr, result_len, new_len, retdest
POP PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
// stack: result_addr, result_len, new_len, retdest
KECCAK_GENERAL
// stack: hash, new_len, retdest
%stack(hash, new_len, retdest) -> (retdest, hash, new_len)
@ -80,23 +78,19 @@ encode_or_hash_concrete_node:
%stack (node_type, node_ptr, encode_value, cur_len) -> (node_type, node_ptr, encode_value, cur_len, maybe_hash_node)
%jump(encode_node)
maybe_hash_node:
// stack: result_ptr, result_len, cur_len, retdest
// stack: result_addr, result_len, cur_len, retdest
DUP2 %lt_const(32)
%jumpi(pack_small_rlp)
// result_len >= 32, so we hash the result.
// stack: result_ptr, result_len, cur_len, retdest
PUSH @SEGMENT_RLP_RAW // segment
PUSH 0 // context
// stack: result_addr: 3, result_len, cur_len, retdest
// stack: result_addr, result_len, cur_len, retdest
KECCAK_GENERAL
%stack (hash, cur_len, retdest) -> (retdest, hash, 32, cur_len)
JUMP
pack_small_rlp:
// stack: result_ptr, result_len, cur_len, retdest
%stack (result_ptr, result_len, cur_len)
-> (0, @SEGMENT_RLP_RAW, result_ptr, result_len,
after_packed_small_rlp, result_len, cur_len)
-> (result_ptr, result_len, after_packed_small_rlp, result_len, cur_len)
%jump(mload_packing)
after_packed_small_rlp:
%stack (result, result_len, cur_len, retdest) -> (retdest, result, result_len, cur_len)
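maybe_hash_node / pack_small_rlp implement the usual Merkle-Patricia rule: a child whose RLP encoding is shorter than 32 bytes is embedded as-is, otherwise its 32-byte Keccak digest is used. A sketch, with keccak256 standing in for KECCAK_GENERAL over @SEGMENT_RLP_RAW:

fn encode_or_hash(rlp: &[u8], keccak256: impl Fn(&[u8]) -> [u8; 32]) -> Vec<u8> {
    if rlp.len() < 32 {
        // pack_small_rlp: return the encoding itself (the mload_packing call above).
        rlp.to_vec()
    } else {
        // result_len >= 32: hash the encoding and return the digest.
        keccak256(rlp).to_vec()
    }
}

fn main() {
    // A fake hash function just to make the sketch runnable.
    let fake_keccak = |data: &[u8]| -> [u8; 32] {
        let mut out = [0u8; 32];
        out[0] = data.iter().fold(0u8, |a, b| a.wrapping_add(*b));
        out
    };
    assert_eq!(encode_or_hash(&[0x80], fake_keccak), vec![0x80]);
    assert_eq!(encode_or_hash(&[1u8; 40], fake_keccak).len(), 32);
}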
@ -130,13 +124,13 @@ global encode_node_empty:
// An empty node is encoded as a single byte, 0x80, which is the RLP encoding of the empty string.
// TODO: Write this byte just once to RLP memory, then we can always return (0, 1).
%alloc_rlp_block
// stack: rlp_pos, cur_len, retdest
// stack: rlp_start, cur_len, retdest
PUSH 0x80
// stack: 0x80, rlp_pos, cur_len, retdest
// stack: 0x80, rlp_start, cur_len, retdest
DUP2
// stack: rlp_pos, 0x80, rlp_pos, cur_len, retdest
// stack: rlp_start, 0x80, rlp_start, cur_len, retdest
%mstore_rlp
%stack (rlp_pos, cur_len, retdest) -> (retdest, rlp_pos, 1, cur_len)
%stack (rlp_start, cur_len, retdest) -> (retdest, rlp_start, 1, cur_len)
JUMP
global encode_node_branch:
@ -244,7 +238,7 @@ encode_node_branch_prepend_prefix:
%stack (result_len, result, rlp_pos, rlp_start, base_offset, node_payload_ptr, encode_value, cur_len, retdest)
-> (rlp_pos, result, result_len, %%after_unpacking,
rlp_start, base_offset, node_payload_ptr, encode_value, cur_len, retdest)
%jump(mstore_unpacking_rlp)
%jump(mstore_unpacking)
%%after_unpacking:
// stack: rlp_pos', rlp_start, base_offset, node_payload_ptr, encode_value, cur_len, retdest
%endmacro
@ -284,7 +278,7 @@ encode_node_extension_after_hex_prefix:
encode_node_extension_unpack:
%stack (rlp_pos, rlp_start, result, result_len, node_payload_ptr, cur_len)
-> (rlp_pos, result, result_len, encode_node_extension_after_unpacking, rlp_start, cur_len)
%jump(mstore_unpacking_rlp)
%jump(mstore_unpacking)
encode_node_extension_after_unpacking:
// stack: rlp_pos, rlp_start, cur_len, retdest
%prepend_rlp_list_prefix

View File

@ -57,7 +57,7 @@ global mpt_hash_receipt_trie:
%endmacro
global encode_account:
// stack: rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_addr, value_ptr, cur_len, retdest
// First, we compute the length of the RLP data we're about to write.
// We also update the length of the trie data segment.
// The nonce and balance fields are variable-length, so we need to load them
@ -69,22 +69,22 @@ global encode_account:
SWAP2 %add_const(4) SWAP2
// Now, we start the encoding.
// stack: rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_addr, value_ptr, cur_len, retdest
DUP2 %mload_trie_data // nonce = value[0]
%rlp_scalar_len
// stack: nonce_rlp_len, rlp_pos, value_ptr, cur_len, retdest
// stack: nonce_rlp_len, rlp_addr, value_ptr, cur_len, retdest
DUP3 %increment %mload_trie_data // balance = value[1]
%rlp_scalar_len
// stack: balance_rlp_len, nonce_rlp_len, rlp_pos, value_ptr, cur_len, retdest
// stack: balance_rlp_len, nonce_rlp_len, rlp_addr, value_ptr, cur_len, retdest
PUSH 66 // storage_root and code_hash fields each take 1 + 32 bytes
ADD ADD
// stack: payload_len, rlp_pos, value_ptr, cur_len, retdest
// stack: payload_len, rlp_addr, value_ptr, cur_len, retdest
SWAP1
// stack: rlp_pos, payload_len, value_ptr, cur_len, retdest
// stack: rlp_addr, payload_len, value_ptr, cur_len, retdest
DUP2 %rlp_list_len
// stack: list_len, rlp_pos, payload_len, value_ptr, cur_len, retdest
// stack: list_len, rlp_addr, payload_len, value_ptr, cur_len, retdest
SWAP1
// stack: rlp_pos, list_len, payload_len, value_ptr, cur_len, retdest
// stack: rlp_addr, list_len, payload_len, value_ptr, cur_len, retdest
%encode_rlp_multi_byte_string_prefix
// stack: rlp_pos_2, payload_len, value_ptr, cur_len, retdest
%encode_rlp_list_prefix
@ -115,232 +115,237 @@ global encode_account:
JUMP
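The constant 66 in the length computation inside encode_account above comes from the two fixed-size fields: storage_root and code_hash are each a 32-byte string whose RLP encoding takes one prefix byte plus 32 bytes. As arithmetic:

// Payload length of an RLP-encoded account body, as computed in encode_account:
// variable-length nonce and balance scalars plus two (1 + 32)-byte hash strings.
fn account_payload_len(nonce_rlp_len: usize, balance_rlp_len: usize) -> usize {
    nonce_rlp_len + balance_rlp_len + 2 * (1 + 32) // the "+ 66" above
}

fn main() {
    // e.g. a zero nonce and a small balance each encode as a single byte.
    assert_eq!(account_payload_len(1, 1), 68);
}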
global encode_txn:
// stack: rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_addr, value_ptr, cur_len, retdest
// Load the txn_rlp_len which is at the beginning of value_ptr
DUP2 %mload_trie_data
// stack: txn_rlp_len, rlp_pos, value_ptr, cur_len, retdest
// stack: txn_rlp_len, rlp_addr, value_ptr, cur_len, retdest
// We need to add 1+txn_rlp_len to the length of the trie data.
SWAP3 DUP4 %increment ADD
// stack: new_len, rlp_pos, value_ptr, txn_rlp_len, retdest
// stack: new_len, rlp_addr, value_ptr, txn_rlp_len, retdest
SWAP3
SWAP2 %increment
// stack: txn_rlp_ptr=value_ptr+1, rlp_pos, txn_rlp_len, new_len, retdest
// stack: txn_rlp_ptr=value_ptr+1, rlp_addr, txn_rlp_len, new_len, retdest
%stack (txn_rlp_ptr, rlp_pos, txn_rlp_len) -> (rlp_pos, txn_rlp_len, txn_rlp_len, txn_rlp_ptr)
%stack (txn_rlp_ptr, rlp_addr, txn_rlp_len) -> (rlp_addr, txn_rlp_len, txn_rlp_len, txn_rlp_ptr)
// Encode the txn rlp prefix
// stack: rlp_pos, txn_rlp_len, txn_rlp_len, txn_rlp_ptr, cur_len, retdest
// stack: rlp_addr, txn_rlp_len, txn_rlp_len, txn_rlp_ptr, cur_len, retdest
%encode_rlp_multi_byte_string_prefix
// copy txn_rlp to the new block
// stack: rlp_pos, txn_rlp_len, txn_rlp_ptr, new_len, retdest
%stack (rlp_pos, txn_rlp_len, txn_rlp_ptr) -> (
0, @SEGMENT_RLP_RAW, rlp_pos, // dest addr
0, @SEGMENT_TRIE_DATA, txn_rlp_ptr, // src addr. Kernel has context 0
// stack: rlp_addr, txn_rlp_len, txn_rlp_ptr, new_len, retdest
%stack (rlp_addr, txn_rlp_len, txn_rlp_ptr) -> (
@SEGMENT_TRIE_DATA, txn_rlp_ptr, // src addr. Kernel has context 0
rlp_addr, // dest addr
txn_rlp_len, // mcpy len
txn_rlp_len, rlp_pos)
txn_rlp_len, rlp_addr)
%build_kernel_address
SWAP1
// stack: DST, SRC, txn_rlp_len, txn_rlp_len, rlp_addr, new_len, retdest
%memcpy_bytes
ADD
// stack new_rlp_pos, new_len, retdest
%stack(new_rlp_pos, new_len, retdest) -> (retdest, new_rlp_pos, new_len)
// stack new_rlp_addr, new_len, retdest
%stack(new_rlp_addr, new_len, retdest) -> (retdest, new_rlp_addr, new_len)
JUMP
// We assume a receipt in memory is stored as:
// [payload_len, status, cum_gas_used, bloom, logs_payload_len, num_logs, [logs]].
// A log is [payload_len, address, num_topics, [topics], data_len, [data]].
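Purely for illustration, the flat layout described above written as Rust structs; the field names are ours, and in the kernel everything is a run of consecutive @SEGMENT_TRIE_DATA cells, with num_topics, data_len and num_logs stored inline before their arrays.

#[allow(dead_code)]
struct StoredLog {
    payload_len: usize,
    address: [u8; 20],
    topics: Vec<[u8; 32]>, // preceded in memory by num_topics
    data: Vec<u8>,         // preceded in memory by data_len
}

#[allow(dead_code)]
struct StoredReceipt {
    payload_len: usize,
    status: u8,
    cum_gas_used: u64,
    bloom: [u8; 256],
    logs_payload_len: usize,
    logs: Vec<StoredLog>,  // preceded in memory by num_logs
}

fn main() {
    let receipt = StoredReceipt {
        payload_len: 0,
        status: 1,
        cum_gas_used: 21000,
        bloom: [0u8; 256],
        logs_payload_len: 0,
        logs: vec![],
    };
    assert_eq!(receipt.logs.len(), 0);
}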
global encode_receipt:
// stack: rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_addr, value_ptr, cur_len, retdest
// First, we add 261 to the trie data length for all values before the logs besides the type.
// These are: the payload length, the status, cum_gas_used, the bloom filter (256 elements),
// the length of the logs payload and the length of the logs.
SWAP2 %add_const(261) SWAP2
// There is a double encoding! What we compute is:
// either RLP(RLP(receipt)) for Legacy transactions or RLP(txn_type||RLP(receipt)) for transactions of type 1 or 2.
// There is a double encoding!
// What we compute is:
// - either RLP(RLP(receipt)) for Legacy transactions
// - or RLP(txn_type||RLP(receipt)) for transactions of type 1 or 2.
// First encode the wrapper prefix.
DUP2 %mload_trie_data
// stack: first_value, rlp_pos, value_ptr, cur_len, retdest
// stack: first_value, rlp_addr, value_ptr, cur_len, retdest
// The first value is either the transaction type or the payload length.
// Since the receipt contains at least the 256-bytes long bloom filter, payload_len > 3.
DUP1 %lt_const(3) %jumpi(encode_nonzero_receipt_type)
// If we are here, then the first byte is the payload length.
%rlp_list_len
// stack: rlp_receipt_len, rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_receipt_len, rlp_addr, value_ptr, cur_len, retdest
SWAP1 %encode_rlp_multi_byte_string_prefix
// stack: rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_addr, value_ptr, cur_len, retdest
encode_receipt_after_type:
// stack: rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, payload_len_ptr, cur_len, retdest
// Then encode the receipt prefix.
// `payload_ptr` is either `value_ptr` or `value_ptr+1`, depending on the transaction type.
DUP2 %mload_trie_data
// stack: payload_len, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: payload_len, rlp_addr, payload_len_ptr, cur_len, retdest
SWAP1 %encode_rlp_list_prefix
// stack: rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, payload_len_ptr, cur_len, retdest
// Encode status.
DUP2 %increment %mload_trie_data
// stack: status, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: status, rlp_addr, payload_len_ptr, cur_len, retdest
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, payload_len_ptr, cur_len, retdest
// Encode cum_gas_used.
DUP2 %add_const(2) %mload_trie_data
// stack: cum_gas_used, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: cum_gas_used, rlp_addr, payload_len_ptr, cur_len, retdest
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, payload_len_ptr, cur_len, retdest
// Encode bloom.
PUSH 256 // Bloom length.
DUP3 %add_const(3) PUSH @SEGMENT_TRIE_DATA PUSH 0 // MPT src address.
DUP5
// stack: rlp_pos, SRC, 256, rlp_pos, payload_len_ptr, cur_len, retdest
DUP3 %add_const(3) PUSH @SEGMENT_TRIE_DATA %build_kernel_address // MPT src address.
DUP3
// stack: rlp_addr, SRC, 256, rlp_addr, payload_len_ptr, cur_len, retdest
%encode_rlp_string
// stack: rlp_pos, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, old_rlp_pos, payload_len_ptr, cur_len, retdest
SWAP1 POP
// stack: rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, payload_len_ptr, cur_len, retdest
// Encode logs prefix.
DUP2 %add_const(259) %mload_trie_data
// stack: logs_payload_len, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: logs_payload_len, rlp_addr, payload_len_ptr, cur_len, retdest
SWAP1 %encode_rlp_list_prefix
// stack: rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, payload_len_ptr, cur_len, retdest
DUP2 %add_const(261)
// stack: logs_ptr, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: logs_ptr, rlp_addr, payload_len_ptr, cur_len, retdest
DUP3 %add_const(260) %mload_trie_data
// stack: num_logs, logs_ptr, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: num_logs, logs_ptr, rlp_addr, payload_len_ptr, cur_len, retdest
PUSH 0
encode_receipt_logs_loop:
// stack: i, num_logs, current_log_ptr, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: i, num_logs, current_log_ptr, rlp_addr, payload_len_ptr, cur_len, retdest
DUP2 DUP2 EQ
// stack: i == num_logs, i, num_logs, current_log_ptr, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: i == num_logs, i, num_logs, current_log_ptr, rlp_addr, payload_len_ptr, cur_len, retdest
%jumpi(encode_receipt_end)
// We add 4 to the trie data length for the fixed size elements in the current log.
SWAP5 %add_const(4) SWAP5
// stack: i, num_logs, current_log_ptr, rlp_pos, payload_len_ptr, cur_len, retdest
// stack: i, num_logs, current_log_ptr, rlp_addr, payload_len_ptr, cur_len, retdest
DUP3 DUP5
// stack: rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// Encode log prefix.
DUP2 %mload_trie_data
// stack: payload_len, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: payload_len, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
SWAP1 %encode_rlp_list_prefix
// stack: rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// Encode address.
DUP2 %increment %mload_trie_data
// stack: address, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: address, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
SWAP1 %encode_rlp_160
// stack: rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
DUP2 %add_const(2) %mload_trie_data
// stack: num_topics, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: num_topics, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// Encode topics prefix.
DUP1 %mul_const(33)
// stack: topics_payload_len, num_topics, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: topics_payload_len, num_topics, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
DUP3 %encode_rlp_list_prefix
// stack: new_rlp_pos, num_topics, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: new_rlp_pos, num_topics, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
SWAP2 POP
// stack: num_topics, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// stack: num_topics, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len, retdest
// Add `num_topics` to the length of the trie data segment.
DUP1 SWAP9
// stack: cur_len, num_topics, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, num_topics, retdest
// stack: cur_len, num_topics, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, num_topics, retdest
ADD SWAP8
// stack: num_topics, rlp_pos, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: num_topics, rlp_addr, current_log_ptr, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
SWAP2 %add_const(3)
// stack: topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
PUSH 0
encode_receipt_topics_loop:
// stack: j, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: j, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
DUP4 DUP2 EQ
// stack: j == num_topics, j, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: j == num_topics, j, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
%jumpi(encode_receipt_topics_end)
// stack: j, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: j, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
DUP2 DUP2 ADD
%mload_trie_data
// stack: current_topic, j, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: current_topic, j, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
DUP4
// stack: rlp_pos, current_topic, j, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: rlp_addr, current_topic, j, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
%encode_rlp_256
// stack: new_rlp_pos, j, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: new_rlp_pos, j, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
SWAP3 POP
// stack: j, topics_ptr, new_rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
%increment
%jump(encode_receipt_topics_loop)
encode_receipt_topics_end:
// stack: num_topics, topics_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: num_topics, topics_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
ADD
// stack: data_len_ptr, rlp_pos, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: data_len_ptr, rlp_addr, num_topics, i, num_logs, current_log_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
SWAP5 POP
// stack: rlp_pos, num_topics, i, num_logs, data_len_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
// stack: rlp_addr, num_topics, i, num_logs, data_len_ptr, old_rlp_pos, payload_len_ptr, cur_len', retdest
SWAP5 POP
// stack: num_topics, i, num_logs, data_len_ptr, rlp_pos, payload_len_ptr, cur_len', retdest
// stack: num_topics, i, num_logs, data_len_ptr, rlp_addr, payload_len_ptr, cur_len', retdest
POP
// stack: i, num_logs, data_len_ptr, rlp_pos, payload_len_ptr, cur_len', retdest
// stack: i, num_logs, data_len_ptr, rlp_addr, payload_len_ptr, cur_len', retdest
// Encode data prefix.
DUP3 %mload_trie_data
// stack: data_len, i, num_logs, data_len_ptr, rlp_pos, payload_len_ptr, cur_len', retdest
// stack: data_len, i, num_logs, data_len_ptr, rlp_addr, payload_len_ptr, cur_len', retdest
// Add `data_len` to the length of the trie data.
DUP1 SWAP7 ADD SWAP6
// stack: data_len, i, num_logs, data_len_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
// stack: data_len, i, num_logs, data_len_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
DUP4 %increment DUP2 ADD
// stack: next_log_ptr, data_len, i, num_logs, data_len_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
// stack: next_log_ptr, data_len, i, num_logs, data_len_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
SWAP4 %increment
// stack: data_ptr, data_len, i, num_logs, next_log_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
PUSH @SEGMENT_TRIE_DATA PUSH 0
// stack: SRC, data_len, i, num_logs, next_log_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
DUP8
// stack: rlp_pos, SRC, data_len, i, num_logs, next_log_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
// stack: data_ptr, data_len, i, num_logs, next_log_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
PUSH @SEGMENT_TRIE_DATA %build_kernel_address
// stack: SRC, data_len, i, num_logs, next_log_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
DUP6
// stack: rlp_addr, SRC, data_len, i, num_logs, next_log_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
%encode_rlp_string
// stack: new_rlp_pos, i, num_logs, next_log_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
// stack: new_rlp_pos, i, num_logs, next_log_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
SWAP4 POP
// stack: i, num_logs, next_log_ptr, new_rlp_pos, payload_len_ptr, cur_len'', retdest
%increment
%jump(encode_receipt_logs_loop)
encode_receipt_end:
// stack: num_logs, num_logs, current_log_ptr, rlp_pos, payload_len_ptr, cur_len'', retdest
// stack: num_logs, num_logs, current_log_ptr, rlp_addr, payload_len_ptr, cur_len'', retdest
%pop3
// stack: rlp_pos, payload_len_ptr, cur_len'', retdest
// stack: rlp_addr, payload_len_ptr, cur_len'', retdest
SWAP1 POP
// stack: rlp_pos, cur_len'', retdest
%stack(rlp_pos, new_len, retdest) -> (retdest, rlp_pos, new_len)
// stack: rlp_addr, cur_len'', retdest
%stack(rlp_addr, new_len, retdest) -> (retdest, rlp_addr, new_len)
JUMP
encode_nonzero_receipt_type:
// stack: txn_type, rlp_pos, value_ptr, cur_len, retdest
// stack: txn_type, rlp_addr, value_ptr, cur_len, retdest
// We have a non-legacy receipt, so the type is also stored in the trie data segment.
SWAP3 %increment SWAP3
// stack: txn_type, rlp_pos, value_ptr, cur_len, retdest
// stack: txn_type, rlp_addr, value_ptr, cur_len, retdest
DUP3 %increment %mload_trie_data
// stack: payload_len, txn_type, rlp_pos, value_ptr, retdest
// stack: payload_len, txn_type, rlp_addr, value_ptr, retdest
// The transaction type is encoded in 1 byte
%increment %rlp_list_len
// stack: rlp_receipt_len, txn_type, rlp_pos, value_ptr, retdest
// stack: rlp_receipt_len, txn_type, rlp_addr, value_ptr, retdest
DUP3 %encode_rlp_multi_byte_string_prefix
// stack: rlp_pos, txn_type, old_rlp_pos, value_ptr, retdest
// stack: rlp_addr, txn_type, old_rlp_addr, value_ptr, retdest
DUP2 DUP2
%mstore_rlp
%increment
// stack: rlp_pos, txn_type, old_rlp_pos, value_ptr, retdest
%stack (rlp_pos, txn_type, old_rlp_pos, value_ptr, retdest) -> (rlp_pos, value_ptr, retdest)
// stack: rlp_addr, txn_type, old_rlp_addr, value_ptr, retdest
%stack (rlp_addr, txn_type, old_rlp_addr, value_ptr, retdest) -> (rlp_addr, value_ptr, retdest)
// We replace `value_ptr` with `payload_len_ptr` so we can encode the rest of the data more easily
SWAP1 %increment SWAP1
// stack: rlp_pos, payload_len_ptr, retdest
// stack: rlp_addr, payload_len_ptr, retdest
%jump(encode_receipt_after_type)
global encode_storage_value:
// stack: rlp_pos, value_ptr, cur_len, retdest
// stack: rlp_addr, value_ptr, cur_len, retdest
SWAP1 %mload_trie_data SWAP1
// A storage value is a scalar, so we only need to add 1 to the trie data length.
SWAP2 %increment SWAP2
// stack: rlp_pos, value, cur_len, retdest
// stack: rlp_addr, value, cur_len, retdest
// The YP says the storage trie is a map "... to the RLP-encoded 256-bit integer values"
// which seems to imply that this should be %encode_rlp_256. But %encode_rlp_scalar
// causes the tests to pass, so it seems storage values should be treated as variable-
// length after all.
%doubly_encode_rlp_scalar
// stack: rlp_pos', cur_len, retdest
%stack (rlp_pos, cur_len, retdest) -> (retdest, rlp_pos, cur_len)
// stack: rlp_addr', cur_len, retdest
%stack (rlp_addr, cur_len, retdest) -> (retdest, rlp_addr, cur_len)
JUMP
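The trade-off described in the comment above can be made concrete with a small Rust sketch (std-only; the helper names and the [u8; 32] interface are ours, not the kernel's), contrasting the variable-length scalar encoding that %doubly_encode_rlp_scalar builds on with the fixed 32-byte form that %encode_rlp_256 would emit.

    // Variable-length RLP encoding of a 256-bit value: drop leading zero bytes, then either
    // emit the single small byte as-is or prefix with 0x80 + len.
    fn rlp_encode_scalar(value: &[u8; 32]) -> Vec<u8> {
        let first_nonzero = value.iter().position(|&b| b != 0).unwrap_or(32);
        let be = &value[first_nonzero..];
        match be {
            [] => vec![0x80],                 // zero encodes as the empty string
            [b] if *b < 0x80 => vec![*b],     // a single small byte is its own encoding
            _ => {
                let mut out = vec![0x80 + be.len() as u8];
                out.extend_from_slice(be);
                out
            }
        }
    }

    // Fixed-width form: always 0xa0 (= 0x80 + 32) followed by all 32 bytes.
    fn rlp_encode_fixed_256(value: &[u8; 32]) -> Vec<u8> {
        let mut out = vec![0x80 + 32];
        out.extend_from_slice(value);
        out
    }

For the storage value 1, the scalar form is the single byte 0x01 while the fixed form is 33 bytes, which is why treating storage values as variable-length matches the reference tries.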

View File

@ -3,8 +3,8 @@
// given position, and returns the updated position, i.e. a pointer to the next
// unused offset.
//
// Pre stack: rlp_start_pos, num_nibbles, packed_nibbles, terminated, retdest
// Post stack: rlp_end_pos
// Pre stack: rlp_start_addr, num_nibbles, packed_nibbles, terminated, retdest
// Post stack: rlp_end_addr
global hex_prefix_rlp:
DUP2 %assert_lt_const(65)
@ -12,7 +12,7 @@ global hex_prefix_rlp:
// Compute the length of the hex-prefix string, in bytes:
// hp_len = num_nibbles / 2 + 1 = i + 1
%increment
// stack: hp_len, rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: hp_len, rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
// Write the RLP header.
DUP1 %gt_const(55) %jumpi(rlp_header_large)
@ -21,113 +21,112 @@ global hex_prefix_rlp:
// The hex-prefix is a single byte. It must be <= 127, since its first
// nibble only has two bits. So this is the "small" RLP string case, where
// the byte is its own RLP encoding.
// stack: hp_len, rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: hp_len, rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
POP
first_byte:
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
// get the first nibble, if num_nibbles is odd, or zero otherwise
SWAP2
// stack: packed_nibbles, num_nibbles, rlp_pos, terminated, retdest
// stack: packed_nibbles, num_nibbles, rlp_addr, terminated, retdest
DUP2 DUP1
%mod_const(2)
// stack: parity, num_nibbles, packed_nibbles, num_nibbles, rlp_pos, terminated, retdest
// stack: parity, num_nibbles, packed_nibbles, num_nibbles, rlp_addr, terminated, retdest
SWAP1 SUB
%mul_const(4)
SHR
// stack: first_nibble_or_zero, num_nibbles, rlp_pos, terminated, retdest
// stack: first_nibble_or_zero, num_nibbles, rlp_addr, terminated, retdest
SWAP2
// stack: rlp_pos, num_nibbles, first_nibble_or_zero, terminated, retdest
// stack: rlp_addr, num_nibbles, first_nibble_or_zero, terminated, retdest
SWAP3
// stack: terminated, num_nibbles, first_nibble_or_zero, rlp_pos, retdest
// stack: terminated, num_nibbles, first_nibble_or_zero, rlp_addr, retdest
%mul_const(2)
// stack: terminated * 2, num_nibbles, first_nibble_or_zero, rlp_pos, retdest
// stack: terminated * 2, num_nibbles, first_nibble_or_zero, rlp_addr, retdest
SWAP1
// stack: num_nibbles, terminated * 2, first_nibble_or_zero, rlp_pos, retdest
// stack: num_nibbles, terminated * 2, first_nibble_or_zero, rlp_addr, retdest
%mod_const(2) // parity
ADD
// stack: parity + terminated * 2, first_nibble_or_zero, rlp_pos, retdest
// stack: parity + terminated * 2, first_nibble_or_zero, rlp_addr, retdest
%mul_const(16)
ADD
// stack: first_byte, rlp_pos, retdest
// stack: first_byte, rlp_addr, retdest
DUP2
%mstore_rlp
%increment
// stack: rlp_pos', retdest
// stack: rlp_addr', retdest
SWAP1
JUMP
remaining_bytes:
// stack: rlp_pos, num_nibbles, packed_nibbles, retdest
// stack: rlp_addr, num_nibbles, packed_nibbles, retdest
SWAP2
PUSH @U256_MAX
// stack: U256_MAX, packed_nibbles, num_nibbles, rlp_pos, ret_dest
// stack: U256_MAX, packed_nibbles, num_nibbles, rlp_addr, ret_dest
SWAP1 SWAP2 DUP1
%mod_const(2)
// stack: parity, num_nibbles, U256_MAX, packed_nibbles, rlp_pos, ret_dest
// stack: parity, num_nibbles, U256_MAX, packed_nibbles, rlp_addr, ret_dest
SWAP1 SUB DUP1
// stack: num_nibbles - parity, num_nibbles - parity, U256_MAX, packed_nibbles, rlp_pos, ret_dest
// stack: num_nibbles - parity, num_nibbles - parity, U256_MAX, packed_nibbles, rlp_addr, ret_dest
%div_const(2)
// stack: rem_bytes, num_nibbles - parity, U256_MAX, packed_nibbles, rlp_pos, ret_dest
// stack: rem_bytes, num_nibbles - parity, U256_MAX, packed_nibbles, rlp_addr, ret_dest
SWAP2 SWAP1
// stack: num_nibbles - parity, U256_MAX, rem_bytes, packed_nibbles, rlp_pos, ret_dest
// stack: num_nibbles - parity, U256_MAX, rem_bytes, packed_nibbles, rlp_addr, ret_dest
%mul_const(4)
// stack: 4*(num_nibbles - parity), U256_MAX, rem_bytes, packed_nibbles, rlp_pos, ret_dest
// stack: 4*(num_nibbles - parity), U256_MAX, rem_bytes, packed_nibbles, rlp_addr, ret_dest
PUSH 256 SUB
// stack: 256 - 4*(num_nibbles - parity), U256_MAX, rem_bytes, packed_nibbles, rlp_pos, ret_dest
// stack: 256 - 4*(num_nibbles - parity), U256_MAX, rem_bytes, packed_nibbles, rlp_addr, ret_dest
SHR
// stack: mask, rem_bytes, packed_nibbles, rlp_pos, ret_dest
// stack: mask, rem_bytes, packed_nibbles, rlp_addr, ret_dest
SWAP1 SWAP2
AND
%stack
(remaining_nibbles, rem_bytes, rlp_pos) ->
(rlp_pos, remaining_nibbles, rem_bytes)
%mstore_unpacking_rlp
%stack(remaining_nibbles, rem_bytes, rlp_addr) -> (rlp_addr, remaining_nibbles, rem_bytes)
%mstore_unpacking
SWAP1
JUMP
rlp_header_medium:
// stack: hp_len, rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: hp_len, rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
%add_const(0x80) // value = 0x80 + hp_len
DUP2 // offset = rlp_pos
DUP2
%mstore_rlp
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// rlp_pos += 1
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
// rlp_addr += 1
%increment
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
SWAP3 DUP3 DUP3
// stack: num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_pos, retdest
// stack: num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_addr, retdest
PUSH remaining_bytes
// stack: remaining_bytes, num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_pos, retdest
// stack: remaining_bytes, num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_addr, retdest
SWAP4 SWAP5 SWAP6
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, remaining_bytes, num_nibbles, packed_nibbles, retdest
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, remaining_bytes, num_nibbles, packed_nibbles, retdest
%jump(first_byte)
rlp_header_large:
// stack: hp_len, rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: hp_len, rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
// In practice hex-prefix length will never exceed 256, so the length of the
// length will always be 1 byte in this case.
PUSH 0xb8 // value = 0xb7 + len_of_len = 0xb8
DUP3 // offset = rlp_pos
DUP3
%mstore_rlp
// stack: rlp_addr, value, hp_len, i, rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
// stack: hp_len, rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
DUP2 %increment
%mstore_rlp
// stack: hp_len, rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
DUP2 %increment // offset = rlp_pos + 1
%mstore_rlp
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// rlp_pos += 2
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
// rlp_addr += 2
%add_const(2)
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, retdest
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, retdest
SWAP3 DUP3 DUP3
// stack: num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_pos, retdest
// stack: num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_addr, retdest
PUSH remaining_bytes
// stack: remaining_bytes, num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_pos, retdest
// stack: remaining_bytes, num_nibbles, packed_nibbles, terminated, num_nibbles, packed_nibbles, rlp_addr, retdest
SWAP4 SWAP5 SWAP6
// stack: rlp_pos, num_nibbles, packed_nibbles, terminated, remaining_bytes, num_nibbles, packed_nibbles, retdest
// stack: rlp_addr, num_nibbles, packed_nibbles, terminated, remaining_bytes, num_nibbles, packed_nibbles, retdest
%jump(first_byte)
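For reference, the hex-prefix encoding computed by hex_prefix_rlp is the one from Appendix C of the Yellow Paper. Below is a Rust sketch of just the byte string; the nibbles are taken as a slice rather than packed into a word, the RLP header cases handled above (single byte, 0x80 + hp_len, or 0xb8 followed by hp_len) are omitted, and the name is illustrative.

    // Hex-prefix encode a nibble sequence (each nibble in 0..16) plus a terminator flag.
    // The first byte packs flags = 2 * terminated + parity into its high nibble, and the first
    // data nibble into its low nibble when the count is odd; the remaining nibbles go in pairs.
    fn hex_prefix(nibbles: &[u8], terminated: bool) -> Vec<u8> {
        let parity = nibbles.len() % 2;
        let flags = 2 * (terminated as u8) + parity as u8;
        let mut out = Vec::with_capacity(nibbles.len() / 2 + 1);
        if parity == 1 {
            out.push(16 * flags + nibbles[0]);
        } else {
            out.push(16 * flags);
        }
        for pair in nibbles[parity..].chunks(2) {
            out.push(16 * pair[0] + pair[1]);
        }
        out
    }

For the nibbles [0xa, 0xb] with terminated set, this yields [0x20, 0xab], and hp_len = num_nibbles / 2 + 1 = 2 as computed above.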

View File

@ -71,18 +71,18 @@ mpt_insert_receipt_trie_save:
global scalar_to_rlp:
// stack: scalar, retdest
%mload_global_metadata(@GLOBAL_METADATA_RLP_DATA_SIZE)
// stack: pos, scalar, retdest
// stack: init_addr, scalar, retdest
SWAP1 DUP2
%encode_rlp_scalar
// stack: pos', init_pos, retdest
// stack: addr', init_addr, retdest
// Now our rlp_encoding is in RlpRaw.
// Set new RlpRaw data size
DUP1 %mstore_global_metadata(@GLOBAL_METADATA_RLP_DATA_SIZE)
DUP2 DUP2 SUB // len of the key
// stack: len, pos', init_pos, retdest
DUP3 PUSH @SEGMENT_RLP_RAW PUSH 0 // address where we get the key from
// stack: len, addr', init_addr, retdest
DUP3
%mload_packing
// stack: packed_key, pos', init_pos, retdest
// stack: packed_key, addr', init_addr, retdest
SWAP2 %pop2
// stack: key, retdest
SWAP1
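A sketch of what scalar_to_rlp returns, with u128 standing in for the 256-bit kernel word and the function name being ours: the scalar is RLP-encoded into RlpRaw, and the resulting bytes are then read back packed big-endian into a single word, which is what the final %mload_packing does.

    fn scalar_to_rlp_key(scalar: u128) -> u128 {
        // RLP-encode the scalar (strip leading zeroes; a small byte is its own encoding).
        let be: Vec<u8> = scalar.to_be_bytes().iter().copied()
            .skip_while(|&b| b == 0).collect();
        let rlp: Vec<u8> = match be.as_slice() {
            [] => vec![0x80],
            [b] if *b < 0x80 => vec![*b],
            _ => {
                let mut v = vec![0x80 + be.len() as u8];
                v.extend_from_slice(&be);
                v
            }
        };
        // Pack the RLP bytes into one word, most significant byte first.
        rlp.iter().fold(0u128, |acc, &b| (acc << 8) | b as u128)
    }

For instance, the scalar 300 has RLP bytes 0x82 0x01 0x2c, so the packed key is 0x82012c.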

View File

@ -7,143 +7,141 @@
// assets.
// Parse the length of a bytestring from RLP memory. The next len bytes after
// pos' will contain the string.
// rlp_addr' will contain the string.
//
// Pre stack: pos, retdest
// Post stack: pos', len
// Pre stack: rlp_addr, retdest
// Post stack: rlp_addr', len
global decode_rlp_string_len:
// stack: pos, retdest
// stack: rlp_addr, retdest
DUP1
%mload_kernel(@SEGMENT_RLP_RAW)
// stack: first_byte, pos, retdest
MLOAD_GENERAL
// stack: first_byte, rlp_addr, retdest
DUP1
%gt_const(0xb7)
// stack: first_byte >= 0xb8, first_byte, pos, retdest
// stack: first_byte >= 0xb8, first_byte, rlp_addr, retdest
%jumpi(decode_rlp_string_len_large)
// stack: first_byte, pos, retdest
// stack: first_byte, rlp_addr, retdest
DUP1
%gt_const(0x7f)
// stack: first_byte >= 0x80, first_byte, pos, retdest
// stack: first_byte >= 0x80, first_byte, rlp_addr, retdest
%jumpi(decode_rlp_string_len_medium)
// String is a single byte in the range [0x00, 0x7f].
%stack (first_byte, pos, retdest) -> (retdest, pos, 1)
%stack (first_byte, rlp_addr, retdest) -> (retdest, rlp_addr, 1)
JUMP
decode_rlp_string_len_medium:
// String is 0-55 bytes long. First byte contains the len.
// stack: first_byte, pos, retdest
// stack: first_byte, rlp_addr, retdest
%sub_const(0x80)
// stack: len, pos, retdest
// stack: len, rlp_addr, retdest
SWAP1
%increment
// stack: pos', len, retdest
%stack (pos, len, retdest) -> (retdest, pos, len)
// stack: rlp_addr', len, retdest
%stack (rlp_addr, len, retdest) -> (retdest, rlp_addr, len)
JUMP
decode_rlp_string_len_large:
// String is >55 bytes long. First byte contains the len of the len.
// stack: first_byte, pos, retdest
// stack: first_byte, rlp_addr, retdest
%sub_const(0xb7)
// stack: len_of_len, pos, retdest
// stack: len_of_len, rlp_addr, retdest
SWAP1
%increment
// stack: pos', len_of_len, retdest
// stack: rlp_addr', len_of_len, retdest
%jump(decode_int_given_len)
// Convenience macro to call decode_rlp_string_len and return where we left off.
%macro decode_rlp_string_len
%stack (pos) -> (pos, %%after)
%stack (rlp_addr) -> (rlp_addr, %%after)
%jump(decode_rlp_string_len)
%%after:
%endmacro
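The three branches above, restated as a Rust sketch over a byte slice (a plain index plays the role of rlp_addr; interface and name are ours):

    fn decode_rlp_string_len(rlp: &[u8], pos: usize) -> (usize, usize) {
        let first = rlp[pos];
        if first >= 0xb8 {
            // Long string: first byte is 0xb7 + length-of-length; the big-endian length follows.
            let len_of_len = (first - 0xb7) as usize;
            let len = rlp[pos + 1..pos + 1 + len_of_len]
                .iter()
                .fold(0usize, |acc, &b| (acc << 8) | b as usize);
            (pos + 1 + len_of_len, len)
        } else if first >= 0x80 {
            // Short string: first byte is 0x80 + length.
            (pos + 1, (first - 0x80) as usize)
        } else {
            // Single byte below 0x80: the byte is the string itself, so the position is unchanged.
            (pos, 1)
        }
    }

decode_rlp_scalar below is just this followed by decode_int_given_len on the returned (rlp_addr', len).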
// Parse a scalar from RLP memory.
// Pre stack: pos, retdest
// Post stack: pos', scalar
// Pre stack: rlp_addr, retdest
// Post stack: rlp_addr', scalar
//
// Scalars are variable-length, but this method assumes a max length of 32
// bytes, so that the result can be returned as a single word on the stack.
// As per the spec, scalars must not have leading zeros.
global decode_rlp_scalar:
// stack: pos, retdest
// stack: rlp_addr, retdest
PUSH decode_int_given_len
// stack: decode_int_given_len, pos, retdest
// stack: decode_int_given_len, rlp_addr, retdest
SWAP1
// stack: pos, decode_int_given_len, retdest
// stack: rlp_addr, decode_int_given_len, retdest
// decode_rlp_string_len will return to decode_int_given_len, at which point
// the stack will contain (pos', len, retdest), which are the proper args
// the stack will contain (rlp_addr', len, retdest), which are the proper args
// to decode_int_given_len.
%jump(decode_rlp_string_len)
// Convenience macro to call decode_rlp_scalar and return where we left off.
%macro decode_rlp_scalar
%stack (pos) -> (pos, %%after)
%stack (rlp_addr) -> (rlp_addr, %%after)
%jump(decode_rlp_scalar)
%%after:
%endmacro
// Parse the length of an RLP list from memory.
// Pre stack: pos, retdest
// Post stack: pos', len
// Pre stack: rlp_addr, retdest
// Post stack: rlp_addr', len
global decode_rlp_list_len:
// stack: pos, retdest
// stack: rlp_addr, retdest
DUP1
%mload_kernel(@SEGMENT_RLP_RAW)
// stack: first_byte, pos, retdest
MLOAD_GENERAL
// stack: first_byte, rlp_addr, retdest
SWAP1
%increment // increment pos
%increment // increment rlp_addr
SWAP1
// stack: first_byte, pos', retdest
// stack: first_byte, rlp_addr', retdest
// If first_byte is >= 0xf8, it's a > 55 byte list, and
// first_byte - 0xf7 is the length of the length.
DUP1
%gt_const(0xf7) // GT is native while GE is not, so compare with 0xf7 instead
// stack: first_byte >= 0xf8, first_byte, pos', retdest
// stack: first_byte >= 0xf8, first_byte, rlp_addr', retdest
%jumpi(decode_rlp_list_len_big)
// This is the "small list" case.
// The list length is first_byte - 0xc0.
// stack: first_byte, pos', retdest
// stack: first_byte, rlp_addr', retdest
%sub_const(0xc0)
// stack: len, pos', retdest
%stack (len, pos, retdest) -> (retdest, pos, len)
// stack: len, rlp_addr', retdest
%stack (len, rlp_addr, retdest) -> (retdest, rlp_addr, len)
JUMP
decode_rlp_list_len_big:
// The length of the length is first_byte - 0xf7.
// stack: first_byte, pos', retdest
// stack: first_byte, rlp_addr', retdest
%sub_const(0xf7)
// stack: len_of_len, pos', retdest
// stack: len_of_len, rlp_addr', retdest
SWAP1
// stack: pos', len_of_len, retdest
// stack: rlp_addr', len_of_len, retdest
%jump(decode_int_given_len)
// Convenience macro to call decode_rlp_list_len and return where we left off.
%macro decode_rlp_list_len
%stack (pos) -> (pos, %%after)
%stack (rlp_addr) -> (rlp_addr, %%after)
%jump(decode_rlp_list_len)
%%after:
%endmacro
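The list counterpart follows the same shape; a sketch under the same simplifications, with the kernel's assumption that the byte really is a list header (0xc0 or above):

    fn decode_rlp_list_len(rlp: &[u8], pos: usize) -> (usize, usize) {
        let first = rlp[pos];
        if first >= 0xf8 {
            // Large list: 0xf7 + length-of-length, then the payload length in big-endian bytes.
            let len_of_len = (first - 0xf7) as usize;
            let len = rlp[pos + 1..pos + 1 + len_of_len]
                .iter()
                .fold(0usize, |acc, &b| (acc << 8) | b as usize);
            (pos + 1 + len_of_len, len)
        } else {
            // Small list: 0xc0 + payload length.
            (pos + 1, (first - 0xc0) as usize)
        }
    }

For example, the two-item list 0xc2 0x01 0x02 decodes to payload length 2, starting one byte past the header.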
// Parse an integer of the given length. It is assumed that the integer will
// fit in a single (256-bit) word on the stack.
// Pre stack: pos, len, retdest
// Post stack: pos', int
// Pre stack: rlp_addr, len, retdest
// Post stack: rlp_addr', int
global decode_int_given_len:
DUP2 ISZERO %jumpi(empty_int)
%stack (pos, len, retdest) -> (pos, len, pos, len, retdest)
%stack (rlp_addr, len, retdest) -> (rlp_addr, len, rlp_addr, len, retdest)
ADD
%stack(pos_two, pos, len, retdest) -> (pos, len, pos_two, retdest)
PUSH @SEGMENT_RLP_RAW
PUSH 0 //context
%stack(rlp_addr_two, rlp_addr, len, retdest) -> (rlp_addr, len, rlp_addr_two, retdest)
MLOAD_32BYTES
// stack: int, pos', retdest
%stack(int, pos, retdest) -> (retdest, pos, int)
// stack: int, rlp_addr', retdest
%stack(int, rlp_addr, retdest) -> (retdest, rlp_addr, int)
JUMP
empty_int:
// stack: pos, len, retdest
%stack(pos, len, retdest) -> (retdest, pos, 0)
// stack: rlp_addr, len, retdest
%stack(rlp_addr, len, retdest) -> (retdest, rlp_addr, 0)
JUMP

View File

@ -1,76 +1,76 @@
// RLP-encode a fixed-length 160 bit (20 byte) string. Assumes string < 2^160.
// Pre stack: pos, string, retdest
// Post stack: pos
// Pre stack: rlp_addr, string, retdest
// Post stack: rlp_addr
global encode_rlp_160:
PUSH 20
%jump(encode_rlp_fixed)
// Convenience macro to call encode_rlp_160 and return where we left off.
%macro encode_rlp_160
%stack (pos, string) -> (pos, string, %%after)
%stack (rlp_addr, string) -> (rlp_addr, string, %%after)
%jump(encode_rlp_160)
%%after:
%endmacro
// RLP-encode a fixed-length 256 bit (32 byte) string.
// Pre stack: pos, string, retdest
// Post stack: pos
// Pre stack: rlp_addr, string, retdest
// Post stack: rlp_addr
global encode_rlp_256:
PUSH 32
%jump(encode_rlp_fixed)
// Convenience macro to call encode_rlp_256 and return where we left off.
%macro encode_rlp_256
%stack (pos, string) -> (pos, string, %%after)
%stack (rlp_addr, string) -> (rlp_addr, string, %%after)
%jump(encode_rlp_256)
%%after:
%endmacro
// RLP-encode a fixed-length string with the given byte length. Assumes string < 2^(8 * len).
global encode_rlp_fixed:
// stack: len, pos, string, retdest
// stack: len, rlp_addr, string, retdest
DUP1
%add_const(0x80)
// stack: first_byte, len, pos, string, retdest
// stack: first_byte, len, rlp_addr, string, retdest
DUP3
// stack: pos, first_byte, len, pos, string, retdest
// stack: rlp_addr, first_byte, len, rlp_addr, string, retdest
%mstore_rlp
// stack: len, pos, string, retdest
// stack: len, rlp_addr, string, retdest
SWAP1
%increment // increment pos
// stack: pos, len, string, retdest
%stack (pos, len, string) -> (pos, string, len, encode_rlp_fixed_finish)
// stack: pos, string, len, encode_rlp_fixed_finish, retdest
%jump(mstore_unpacking_rlp)
%increment // increment rlp_addr
// stack: rlp_addr, len, string, retdest
%stack (rlp_addr, len, string) -> (rlp_addr, string, len, encode_rlp_fixed_finish)
// stack: rlp_addr, string, len, encode_rlp_fixed_finish, retdest
%jump(mstore_unpacking)
encode_rlp_fixed_finish:
// stack: pos', retdest
// stack: rlp_addr', retdest
SWAP1
JUMP
// Doubly-RLP-encode a fixed-length string with the given byte length.
// I.e. writes encode(encode(string)). Assumes string < 2^(8 * len).
global doubly_encode_rlp_fixed:
// stack: len, pos, string, retdest
// stack: len, rlp_addr, string, retdest
DUP1
%add_const(0x81)
// stack: first_byte, len, pos, string, retdest
// stack: first_byte, len, rlp_addr, string, retdest
DUP3
// stack: pos, first_byte, len, pos, string, retdest
// stack: rlp_addr, first_byte, len, rlp_addr, string, retdest
%mstore_rlp
// stack: len, pos, string, retdest
// stack: len, rlp_addr, string, retdest
DUP1
%add_const(0x80)
// stack: second_byte, len, original_pos, string, retdest
// stack: second_byte, len, original_rlp_addr, string, retdest
DUP3 %increment
// stack: pos', second_byte, len, pos, string, retdest
// stack: rlp_addr', second_byte, len, rlp_addr, string, retdest
%mstore_rlp
// stack: len, pos, string, retdest
// stack: len, rlp_addr, string, retdest
SWAP1
%add_const(2) // advance past the two prefix bytes
// stack: pos'', len, string, retdest
%stack (pos, len, string) -> (pos, string, len, encode_rlp_fixed_finish)
// stack: context, segment, pos'', string, len, encode_rlp_fixed_finish, retdest
%jump(mstore_unpacking_rlp)
// stack: rlp_addr'', len, string, retdest
%stack (rlp_addr, len, string) -> (rlp_addr, string, len, encode_rlp_fixed_finish)
// stack: rlp_addr'', string, len, encode_rlp_fixed_finish, retdest
%jump(mstore_unpacking)
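As a worked illustration of the two routines above (Rust sketch; the byte-slice interface is ours, and the input is assumed to be exactly len bytes with len at most 55):

    // Single encoding: 0x80 + len, then the bytes. Used for 20-byte addresses and 32-byte hashes.
    fn encode_rlp_fixed(string: &[u8]) -> Vec<u8> {
        let mut out = vec![0x80 + string.len() as u8];
        out.extend_from_slice(string);
        out
    }

    // Double encoding, i.e. encode(encode(string)): the outer prefix is 0x81 + len because the
    // inner encoding is one byte longer than the string.
    fn doubly_encode_rlp_fixed(string: &[u8]) -> Vec<u8> {
        let mut out = vec![0x81 + string.len() as u8];
        out.extend(encode_rlp_fixed(string));
        out
    }

A 20-byte address thus becomes 0x94 followed by its bytes, and 0x95 0x94 followed by its bytes in the doubly-encoded form.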
// Writes the RLP prefix for a string of the given length. This does not handle
// the trivial encoding of certain single-byte strings, as handling that would
@ -78,156 +78,156 @@ global doubly_encode_rlp_fixed:
// length. This method should generally be used only when we know a string
// contains at least two bytes.
//
// Pre stack: pos, str_len, retdest
// Post stack: pos'
// Pre stack: rlp_addr, str_len, retdest
// Post stack: rlp_addr'
global encode_rlp_multi_byte_string_prefix:
// stack: pos, str_len, retdest
// stack: rlp_addr, str_len, retdest
DUP2 %gt_const(55)
// stack: str_len > 55, pos, str_len, retdest
// stack: str_len > 55, rlp_addr, str_len, retdest
%jumpi(encode_rlp_multi_byte_string_prefix_large)
// Medium case; prefix is 0x80 + str_len.
// stack: pos, str_len, retdest
// stack: rlp_addr, str_len, retdest
SWAP1 %add_const(0x80)
// stack: prefix, pos, retdest
// stack: prefix, rlp_addr, retdest
DUP2
// stack: pos, prefix, pos, retdest
// stack: rlp_addr, prefix, rlp_addr, retdest
%mstore_rlp
// stack: pos, retdest
// stack: rlp_addr, retdest
%increment
// stack: pos', retdest
// stack: rlp_addr', retdest
SWAP1
JUMP
encode_rlp_multi_byte_string_prefix_large:
// Large case; prefix is 0xb7 + len_of_len, followed by str_len.
// stack: pos, str_len, retdest
// stack: rlp_addr, str_len, retdest
DUP2
%num_bytes
// stack: len_of_len, pos, str_len, retdest
// stack: len_of_len, rlp_addr, str_len, retdest
SWAP1
DUP2 // len_of_len
%add_const(0xb7)
// stack: first_byte, pos, len_of_len, str_len, retdest
// stack: first_byte, rlp_addr, len_of_len, str_len, retdest
DUP2
// stack: pos, first_byte, pos, len_of_len, str_len, retdest
// stack: rlp_addr, first_byte, rlp_addr, len_of_len, str_len, retdest
%mstore_rlp
// stack: pos, len_of_len, str_len, retdest
// stack: rlp_addr, len_of_len, str_len, retdest
%increment
// stack: pos', len_of_len, str_len, retdest
%stack (pos, len_of_len, str_len) -> (pos, str_len, len_of_len)
%jump(mstore_unpacking_rlp)
// stack: rlp_addr', len_of_len, str_len, retdest
%stack (rlp_addr, len_of_len, str_len) -> (rlp_addr, str_len, len_of_len)
%jump(mstore_unpacking)
%macro encode_rlp_multi_byte_string_prefix
%stack (pos, str_len) -> (pos, str_len, %%after)
%stack (rlp_addr, str_len) -> (rlp_addr, str_len, %%after)
%jump(encode_rlp_multi_byte_string_prefix)
%%after:
%endmacro
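The same prefix logic over a growing byte buffer, with kernel memory and rlp_addr abstracted away (sketch, names ours):

    fn encode_rlp_multi_byte_string_prefix(out: &mut Vec<u8>, str_len: usize) {
        if str_len > 55 {
            // Large case: 0xb7 + length-of-length, then the length itself in big-endian bytes.
            let len_bytes: Vec<u8> = str_len.to_be_bytes().iter().copied()
                .skip_while(|&b| b == 0).collect();
            out.push(0xb7 + len_bytes.len() as u8);
            out.extend_from_slice(&len_bytes);
        } else {
            // Medium case: a single prefix byte 0x80 + str_len.
            out.push(0x80 + str_len as u8);
        }
    }

For str_len = 300 this emits 0xb9 0x01 0x2c.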
// Writes the RLP prefix for a list with the given payload length.
//
// Pre stack: pos, payload_len, retdest
// Post stack: pos'
// Pre stack: rlp_addr, payload_len, retdest
// Post stack: rlp_addr'
global encode_rlp_list_prefix:
// stack: pos, payload_len, retdest
// stack: rlp_addr, payload_len, retdest
DUP2 %gt_const(55)
%jumpi(encode_rlp_list_prefix_large)
// Small case: prefix is just 0xc0 + length.
// stack: pos, payload_len, retdest
// stack: rlp_addr, payload_len, retdest
SWAP1
%add_const(0xc0)
// stack: prefix, pos, retdest
// stack: prefix, rlp_addr, retdest
DUP2
// stack: pos, prefix, pos, retdest
// stack: rlp_addr, prefix, rlp_addr, retdest
%mstore_rlp
// stack: pos, retdest
// stack: rlp_addr, retdest
%increment
SWAP1
JUMP
encode_rlp_list_prefix_large:
// Write 0xf7 + len_of_len.
// stack: pos, payload_len, retdest
// stack: rlp_addr, payload_len, retdest
DUP2 %num_bytes
// stack: len_of_len, pos, payload_len, retdest
// stack: len_of_len, rlp_addr, payload_len, retdest
DUP1 %add_const(0xf7)
// stack: first_byte, len_of_len, pos, payload_len, retdest
DUP3 // pos
// stack: first_byte, len_of_len, rlp_addr, payload_len, retdest
DUP3 // rlp_addr
%mstore_rlp
// stack: len_of_len, pos, payload_len, retdest
// stack: len_of_len, rlp_addr, payload_len, retdest
SWAP1 %increment
// stack: pos', len_of_len, payload_len, retdest
%stack (pos, len_of_len, payload_len)
-> (pos, payload_len, len_of_len,
// stack: rlp_addr', len_of_len, payload_len, retdest
%stack (rlp_addr, len_of_len, payload_len)
-> (rlp_addr, payload_len, len_of_len,
encode_rlp_list_prefix_large_done_writing_len)
%jump(mstore_unpacking_rlp)
%jump(mstore_unpacking)
encode_rlp_list_prefix_large_done_writing_len:
// stack: pos'', retdest
// stack: rlp_addr'', retdest
SWAP1
JUMP
%macro encode_rlp_list_prefix
%stack (pos, payload_len) -> (pos, payload_len, %%after)
%stack (rlp_addr, payload_len) -> (rlp_addr, payload_len, %%after)
%jump(encode_rlp_list_prefix)
%%after:
%endmacro
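And its list analogue, under the same simplifications:

    fn encode_rlp_list_prefix(out: &mut Vec<u8>, payload_len: usize) {
        if payload_len > 55 {
            // 0xf7 + length-of-length, followed by the payload length in big-endian bytes.
            let len_bytes: Vec<u8> = payload_len.to_be_bytes().iter().copied()
                .skip_while(|&b| b == 0).collect();
            out.push(0xf7 + len_bytes.len() as u8);
            out.extend_from_slice(&len_bytes);
        } else {
            // 0xc0 + payload length.
            out.push(0xc0 + payload_len as u8);
        }
    }

A 100-byte payload gets the prefix 0xf8 0x64.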
// Given an RLP list payload which starts and ends at the given positions,
// prepend the appropriate RLP list prefix. Returns the updated start position,
// Given an RLP list payload which starts and ends at the given RLP addresses,
// prepend the appropriate RLP list prefix. Returns the updated start address,
// as well as the length of the RLP data (including the newly-added prefix).
//
// Pre stack: end_pos, start_pos, retdest
// Post stack: prefix_start_pos, rlp_len
// Pre stack: end_rlp_addr, start_rlp_addr, retdest
// Post stack: prefix_start_rlp_addr, rlp_len
global prepend_rlp_list_prefix:
// stack: end_pos, start_pos, retdest
DUP2 DUP2 SUB // end_pos - start_pos
// stack: payload_len, end_pos, start_pos, retdest
// stack: end_rlp_addr, start_rlp_addr, retdest
DUP2 DUP2 SUB // end_rlp_addr - start_rlp_addr
// stack: payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP1 %gt_const(55)
%jumpi(prepend_rlp_list_prefix_big)
// If we got here, we have a small list, so we prepend 0xc0 + len at position 8.
// stack: payload_len, end_pos, start_pos, retdest
// If we got here, we have a small list, so we prepend 0xc0 + len at rlp_address 8.
// stack: payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP1 %add_const(0xc0)
// stack: prefix_byte, payload_len, end_pos, start_pos, retdest
// stack: prefix_byte, payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP4 %decrement // offset of prefix
%mstore_rlp
// stack: payload_len, end_pos, start_pos, retdest
// stack: payload_len, end_rlp_addr, start_rlp_addr, retdest
%increment
// stack: rlp_len, end_pos, start_pos, retdest
// stack: rlp_len, end_rlp_addr, start_rlp_addr, retdest
SWAP2 %decrement
// stack: prefix_start_pos, end_pos, rlp_len, retdest
%stack (prefix_start_pos, end_pos, rlp_len, retdest) -> (retdest, prefix_start_pos, rlp_len)
// stack: prefix_start_rlp_addr, end_rlp_addr, rlp_len, retdest
%stack (prefix_start_rlp_addr, end_rlp_addr, rlp_len, retdest) -> (retdest, prefix_start_rlp_addr, rlp_len)
JUMP
prepend_rlp_list_prefix_big:
// We have a large list, so we prepend 0xf7 + len_of_len at position
// prefix_start_pos = start_pos - 1 - len_of_len
// We have a large list, so we prepend 0xf7 + len_of_len at rlp_address
// prefix_start_rlp_addr = start_rlp_addr - 1 - len_of_len
// followed by the length itself.
// stack: payload_len, end_pos, start_pos, retdest
// stack: payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP1 %num_bytes
// stack: len_of_len, payload_len, end_pos, start_pos, retdest
// stack: len_of_len, payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP1
DUP5 %decrement // start_pos - 1
DUP5 %decrement // start_rlp_addr - 1
SUB
// stack: prefix_start_pos, len_of_len, payload_len, end_pos, start_pos, retdest
DUP2 %add_const(0xf7) DUP2 %mstore_rlp // rlp[prefix_start_pos] = 0xf7 + len_of_len
// stack: prefix_start_pos, len_of_len, payload_len, end_pos, start_pos, retdest
DUP1 %increment // start_len_pos = prefix_start_pos + 1
%stack (start_len_pos, prefix_start_pos, len_of_len, payload_len, end_pos, start_pos, retdest)
-> (start_len_pos, payload_len, len_of_len,
// stack: prefix_start_rlp_addr, len_of_len, payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP2 %add_const(0xf7) DUP2 %mstore_rlp // rlp[prefix_start_rlp_addr] = 0xf7 + len_of_len
// stack: prefix_start_rlp_addr, len_of_len, payload_len, end_rlp_addr, start_rlp_addr, retdest
DUP1 %increment // start_len_rlp_addr = prefix_start_rlp_addr + 1
%stack (start_len_rlp_addr, prefix_start_rlp_addr, len_of_len, payload_len, end_rlp_addr, start_rlp_addr, retdest)
-> (start_len_rlp_addr, payload_len, len_of_len,
prepend_rlp_list_prefix_big_done_writing_len,
prefix_start_pos, end_pos, retdest)
%jump(mstore_unpacking_rlp)
prefix_start_rlp_addr, end_rlp_addr, retdest)
%jump(mstore_unpacking)
prepend_rlp_list_prefix_big_done_writing_len:
// stack: start_pos, prefix_start_pos, end_pos, retdest
%stack (start_pos, prefix_start_pos, end_pos)
-> (end_pos, prefix_start_pos, prefix_start_pos)
// stack: end_pos, prefix_start_pos, prefix_start_pos, retdest
// stack: start_rlp_addr, prefix_start_rlp_addr, end_rlp_addr, retdest
%stack (start_rlp_addr, prefix_start_rlp_addr, end_rlp_addr)
-> (end_rlp_addr, prefix_start_rlp_addr, prefix_start_rlp_addr)
// stack: end_rlp_addr, prefix_start_rlp_addr, prefix_start_rlp_addr, retdest
SUB
// stack: rlp_len, prefix_start_pos, retdest
%stack (rlp_len, prefix_start_pos, retdest) -> (retdest, prefix_start_pos, rlp_len)
// stack: rlp_len, prefix_start_rlp_addr, retdest
%stack (rlp_len, prefix_start_rlp_addr, retdest) -> (retdest, prefix_start_rlp_addr, rlp_len)
JUMP
// Convenience macro to call prepend_rlp_list_prefix and return where we left off.
%macro prepend_rlp_list_prefix
%stack (end_pos, start_pos) -> (end_pos, start_pos, %%after)
%stack (end_rlp_addr, start_rlp_addr) -> (end_rlp_addr, start_rlp_addr, %%after)
%jump(prepend_rlp_list_prefix)
%%after:
%endmacro
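The address arithmetic of prepend_rlp_list_prefix, as a sketch with plain usizes for the RLP addresses (name ours). It assumes the caller reserved headroom in front of the payload, which is what writing the prefix at start_rlp_addr - 1 - len_of_len relies on; the function returns (prefix_start_rlp_addr, rlp_len).

    fn prepend_rlp_list_prefix(start_addr: usize, end_addr: usize) -> (usize, usize) {
        let payload_len = end_addr - start_addr;
        if payload_len <= 55 {
            // One prefix byte, 0xc0 + payload_len, written at start_addr - 1.
            (start_addr - 1, payload_len + 1)
        } else {
            // 0xf7 + len_of_len at start_addr - 1 - len_of_len, then the length bytes.
            let bits = usize::BITS as usize - payload_len.leading_zeros() as usize;
            let len_of_len = (bits + 7) / 8;
            (start_addr - 1 - len_of_len, payload_len + 1 + len_of_len)
        }
    }

A 200-byte payload starting at address a yields a prefix at a - 2 (bytes 0xf8 0xc8) and a total RLP length of 202.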
@ -274,18 +274,3 @@ prepend_rlp_list_prefix_big_done_writing_len:
ADD
%%finish:
%endmacro
// Like mstore_unpacking, but specifically for the RLP segment.
// Pre stack: offset, value, len, retdest
// Post stack: offset'
global mstore_unpacking_rlp:
// stack: offset, value, len, retdest
PUSH @SEGMENT_RLP_RAW
PUSH 0 // context
%jump(mstore_unpacking)
%macro mstore_unpacking_rlp
%stack (offset, value, len) -> (offset, value, len, %%after)
%jump(mstore_unpacking_rlp)
%%after:
%endmacro

View File

@ -1,8 +1,8 @@
// RLP-encode a scalar, i.e. a variable-length integer.
// Pre stack: pos, scalar, retdest
// Post stack: pos
// Pre stack: rlp_addr, scalar, retdest
// Post stack: rlp_addr
global encode_rlp_scalar:
// stack: pos, scalar, retdest
// stack: rlp_addr, scalar, retdest
// If scalar > 0x7f, this is the "medium" case.
DUP2
%gt_const(0x7f)
@ -12,12 +12,12 @@ global encode_rlp_scalar:
DUP2 %jumpi(encode_rlp_scalar_small)
// scalar = 0, so BE(scalar) is the empty string, which RLP encodes as a single byte 0x80.
// stack: pos, scalar, retdest
%stack (pos, scalar) -> (pos, 0x80, pos)
%mstore_rlp
// stack: pos, retdest
// stack: rlp_addr, scalar, retdest
%stack (rlp_addr, scalar) -> (0x80, rlp_addr, rlp_addr)
MSTORE_GENERAL
// stack: rlp_addr, retdest
%increment
// stack: pos', retdest
// stack: rlp_addr', retdest
SWAP1
JUMP
@ -26,17 +26,17 @@ encode_rlp_scalar_medium:
// (big-endian) scalar bytes. We first compute the minimal number of bytes
// needed to represent this scalar, then treat it as if it was a fixed-
// length string with that length.
// stack: pos, scalar, retdest
// stack: rlp_addr, scalar, retdest
DUP2
%num_bytes
// stack: scalar_bytes, pos, scalar, retdest
// stack: scalar_bytes, rlp_addr, scalar, retdest
%jump(encode_rlp_fixed)
// Doubly-RLP-encode a scalar, i.e. return encode(encode(scalar)).
// Pre stack: pos, scalar, retdest
// Post stack: pos
// Pre stack: rlp_addr, scalar, retdest
// Post stack: rlp_addr
global doubly_encode_rlp_scalar:
// stack: pos, scalar, retdest
// stack: rlp_addr, scalar, retdest
// If scalar > 0x7f, this is the "medium" case.
DUP2
%gt_const(0x7f)
@ -46,15 +46,16 @@ global doubly_encode_rlp_scalar:
DUP2 %jumpi(encode_rlp_scalar_small)
// scalar = 0, so BE(scalar) is the empty string, encode(scalar) = 0x80, and encode(encode(scalar)) = 0x8180.
// stack: pos, scalar, retdest
%stack (pos, scalar) -> (pos, 0x81, pos, 0x80, pos)
%mstore_rlp
// stack: pos, 0x80, pos, retdest
// stack: rlp_addr, scalar, retdest
%stack (rlp_addr, scalar) -> (0x81, rlp_addr, rlp_addr)
MSTORE_GENERAL
// stack: rlp_addr, retdest
%increment
%mstore_rlp
// stack: pos, retdest
%add_const(2)
// stack: pos, retdest
DUP1 PUSH 0x80
MSTORE_GENERAL
// stack: rlp_addr, retdest
%increment
// stack: rlp_addr, retdest
SWAP1
JUMP
@ -65,35 +66,35 @@ doubly_encode_rlp_scalar_medium:
// encode(encode(scalar)) = [0x80 + len + 1] || [0x80 + len] || BE(scalar)
// We first compute the length of the scalar with %num_bytes, then treat the scalar as if it was a
// fixed-length string with that length.
// stack: pos, scalar, retdest
// stack: rlp_addr, scalar, retdest
DUP2
%num_bytes
// stack: scalar_bytes, pos, scalar, retdest
// stack: scalar_bytes, rlp_addr, scalar, retdest
%jump(doubly_encode_rlp_fixed)
// The "small" case of RLP-encoding a scalar, where the value is its own encoding.
// This can be used both for singly encoding and for doubly encoding, since encode(encode(x)) = encode(x) = x.
encode_rlp_scalar_small:
// stack: pos, scalar, retdest
%stack (pos, scalar) -> (pos, scalar, pos)
// stack: pos, scalar, pos, retdest
%mstore_rlp
// stack: pos, retdest
// stack: rlp_addr, scalar, retdest
%stack (rlp_addr, scalar) -> (scalar, rlp_addr, rlp_addr)
// stack: scalar, rlp_addr, rlp_addr, retdest
MSTORE_GENERAL
// stack: rlp_addr, retdest
%increment
// stack: pos', retdest
// stack: rlp_addr', retdest
SWAP1
JUMP
// Convenience macro to call encode_rlp_scalar and return where we left off.
%macro encode_rlp_scalar
%stack (pos, scalar) -> (pos, scalar, %%after)
%stack (rlp_addr, scalar) -> (rlp_addr, scalar, %%after)
%jump(encode_rlp_scalar)
%%after:
%endmacro
// Convenience macro to call doubly_encode_rlp_scalar and return where we left off.
%macro doubly_encode_rlp_scalar
%stack (pos, scalar) -> (pos, scalar, %%after)
%stack (rlp_addr, scalar) -> (rlp_addr, scalar, %%after)
%jump(doubly_encode_rlp_scalar)
%%after:
%endmacro
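Summarizing the scalar paths above (sketch; u128 in place of the 256-bit word):

    fn encode_rlp_scalar(scalar: u128) -> Vec<u8> {
        if scalar == 0 {
            vec![0x80]                  // the empty string
        } else if scalar <= 0x7f {
            vec![scalar as u8]          // a small byte is its own encoding (singly or doubly)
        } else {
            // "Medium" case: 0x80 + num_bytes, then the big-endian bytes.
            let be: Vec<u8> = scalar.to_be_bytes().iter().copied()
                .skip_while(|&b| b == 0).collect();
            let mut out = vec![0x80 + be.len() as u8];
            out.extend_from_slice(&be);
            out
        }
    }

So 0 encodes to 0x80 (doubly: 0x81 0x80), 0x42 to itself, and 1000 to 0x82 0x03 0xe8 (doubly: 0x83 0x82 0x03 0xe8).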

View File

@ -1,80 +1,79 @@
// Encodes an arbitrary string, given a pointer and length.
// Pre stack: pos, ADDR: 3, len, retdest
// Post stack: pos'
// Pre stack: rlp_addr, ADDR, len, retdest
// Post stack: rlp_addr'
global encode_rlp_string:
// stack: pos, ADDR: 3, len, retdest
DUP5 %eq_const(1)
// stack: len == 1, pos, ADDR: 3, len, retdest
DUP5 DUP5 DUP5 // ADDR: 3
// stack: rlp_addr, ADDR, len, retdest
DUP3 %eq_const(1)
// stack: len == 1, rlp_addr, ADDR, len, retdest
DUP3
MLOAD_GENERAL
// stack: first_byte, len == 1, pos, ADDR: 3, len, retdest
// stack: first_byte, len == 1, rlp_addr, ADDR, len, retdest
%lt_const(128)
MUL // cheaper than AND
// stack: single_small_byte, pos, ADDR: 3, len, retdest
// stack: single_small_byte, rlp_addr, ADDR, len, retdest
%jumpi(encode_rlp_string_small_single_byte)
// stack: pos, ADDR: 3, len, retdest
DUP5 %gt_const(55)
// stack: len > 55, pos, ADDR: 3, len, retdest
// stack: rlp_addr, ADDR, len, retdest
DUP3 %gt_const(55)
// stack: len > 55, rlp_addr, ADDR, len, retdest
%jumpi(encode_rlp_string_large)
global encode_rlp_string_small:
// stack: pos, ADDR: 3, len, retdest
DUP5 // len
// stack: rlp_addr, ADDR, len, retdest
DUP1
DUP4 // len
%add_const(0x80)
// stack: first_byte, pos, ADDR: 3, len, retdest
DUP2
// stack: pos, first_byte, pos, ADDR: 3, len, retdest
%mstore_rlp
// stack: pos, ADDR: 3, len, retdest
// stack: first_byte, rlp_addr, rlp_addr, ADDR, len, retdest
MSTORE_GENERAL
// stack: rlp_addr, ADDR, len, retdest
%increment
// stack: pos', ADDR: 3, len, retdest
DUP5 DUP2 ADD // pos'' = pos' + len
// stack: pos'', pos', ADDR: 3, len, retdest
%stack (pos2, pos1, ADDR: 3, len, retdest)
-> (0, @SEGMENT_RLP_RAW, pos1, ADDR, len, retdest, pos2)
// stack: rlp_addr', ADDR, len, retdest
DUP3 DUP2 ADD // rlp_addr'' = rlp_addr' + len
// stack: rlp_addr'', rlp_addr', ADDR, len, retdest
%stack (rlp_addr2, rlp_addr1, ADDR, len, retdest)
-> (rlp_addr1, ADDR, len, retdest, rlp_addr2)
%jump(memcpy_bytes)
global encode_rlp_string_small_single_byte:
// stack: pos, ADDR: 3, len, retdest
%stack (pos, ADDR: 3, len) -> (ADDR, pos)
// stack: rlp_addr, ADDR, len, retdest
%stack (rlp_addr, ADDR, len) -> (ADDR, rlp_addr)
MLOAD_GENERAL
// stack: byte, pos, retdest
DUP2
%mstore_rlp
// stack: pos, retdest
// stack: byte, rlp_addr, retdest
DUP2 SWAP1
MSTORE_GENERAL
// stack: rlp_addr, retdest
%increment
SWAP1
// stack: retdest, pos'
// stack: retdest, rlp_addr'
JUMP
global encode_rlp_string_large:
// stack: pos, ADDR: 3, len, retdest
DUP5 %num_bytes
// stack: len_of_len, pos, ADDR: 3, len, retdest
// stack: rlp_addr, ADDR, len, retdest
DUP3 %num_bytes
// stack: len_of_len, rlp_addr, ADDR, len, retdest
SWAP1
DUP2 // len_of_len
DUP1
// stack: rlp_addr, rlp_addr, len_of_len, ADDR, len, retdest
DUP3 // len_of_len
%add_const(0xb7)
// stack: first_byte, pos, len_of_len, ADDR: 3, len, retdest
DUP2
// stack: pos, first_byte, pos, len_of_len, ADDR: 3, len, retdest
%mstore_rlp
// stack: pos, len_of_len, ADDR: 3, len, retdest
// stack: first_byte, rlp_addr, rlp_addr, len_of_len, ADDR, len, retdest
MSTORE_GENERAL
// stack: rlp_addr, len_of_len, ADDR, len, retdest
%increment
// stack: pos', len_of_len, ADDR: 3, len, retdest
%stack (pos, len_of_len, ADDR: 3, len)
-> (pos, len, len_of_len, encode_rlp_string_large_after_writing_len, ADDR, len)
%jump(mstore_unpacking_rlp)
// stack: rlp_addr', len_of_len, ADDR, len, retdest
%stack (rlp_addr, len_of_len, ADDR, len)
-> (rlp_addr, len, len_of_len, encode_rlp_string_large_after_writing_len, ADDR, len)
%jump(mstore_unpacking)
global encode_rlp_string_large_after_writing_len:
// stack: pos'', ADDR: 3, len, retdest
DUP5 DUP2 ADD // pos''' = pos'' + len
// stack: pos''', pos'', ADDR: 3, len, retdest
%stack (pos3, pos2, ADDR: 3, len, retdest)
-> (0, @SEGMENT_RLP_RAW, pos2, ADDR, len, retdest, pos3)
// stack: rlp_addr'', ADDR, len, retdest
DUP3 DUP2 ADD // rlp_addr''' = rlp_addr'' + len
// stack: rlp_addr''', rlp_addr'', ADDR, len, retdest
%stack (rlp_addr3, rlp_addr2, ADDR, len, retdest)
-> (rlp_addr2, ADDR, len, retdest, rlp_addr3)
%jump(memcpy_bytes)
%macro encode_rlp_string
%stack (pos, ADDR: 3, len) -> (pos, ADDR, len, %%after)
%stack (rlp_addr, ADDR, len) -> (rlp_addr, ADDR, len, %%after)
%jump(encode_rlp_string)
%%after:
%endmacro
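encode_rlp_string only has to choose between three header shapes before copying the payload with memcpy_bytes; a sketch of that selection (names ours, first_byte being the first payload byte):

    fn rlp_string_header(first_byte: u8, len: usize) -> Vec<u8> {
        if len == 1 && first_byte < 0x80 {
            vec![]                      // no header: the single small byte encodes itself
        } else if len <= 55 {
            vec![0x80 + len as u8]
        } else {
            let len_bytes: Vec<u8> = len.to_be_bytes().iter().copied()
                .skip_while(|&b| b == 0).collect();
            let mut out = vec![0xb7 + len_bytes.len() as u8];
            out.extend_from_slice(&len_bytes);
            out
        }
    }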

View File

@ -8,29 +8,34 @@ global read_rlp_to_memory:
// stack: retdest
PROVER_INPUT(rlp) // Read the RLP blob length from the prover tape.
// stack: len, retdest
PUSH 0 // initial position
// stack: pos, len, retdest
PUSH @SEGMENT_RLP_RAW
%build_kernel_address
PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
// stack: addr, final_addr, retdest
read_rlp_to_memory_loop:
// stack: pos, len, retdest
// stack: addr, final_addr, retdest
DUP2
DUP2
EQ
// stack: pos == len, pos, len, retdest
// stack: addr == final_addr, addr, final_addr, retdest
%jumpi(read_rlp_to_memory_finish)
// stack: pos, len, retdest
// stack: addr, final_addr, retdest
DUP1
PROVER_INPUT(rlp)
// stack: byte, pos, len, retdest
DUP2
// stack: pos, byte, pos, len, retdest
%mstore_kernel(@SEGMENT_RLP_RAW)
// stack: pos, len, retdest
// stack: byte, addr, addr, final_addr, retdest
MSTORE_GENERAL
// stack: addr, final_addr, retdest
%increment
// stack: pos', len, retdest
// stack: addr', final_addr, retdest
%jump(read_rlp_to_memory_loop)
read_rlp_to_memory_finish:
// stack: pos, len, retdest
POP
// stack: len, retdest
SWAP1 JUMP
// stack: addr, final_addr, retdest
// we recover the offset here
PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
DUP2 SUB
// stack: pos, addr, final_addr, retdest
%stack(pos, addr, final_addr, retdest) -> (retdest, pos)
JUMP
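The shape of the rewritten loop, sketched in Rust: iteration is over absolute addresses within SEGMENT_RLP_RAW rather than offsets, and the byte count is recovered at the end by subtracting the segment base. Here prover_input stands in for PROVER_INPUT(rlp) and a flat byte array stands in for kernel memory.

    fn read_rlp_to_memory(segment_base: usize, len: usize,
                          mut prover_input: impl FnMut() -> u8,
                          memory: &mut [u8]) -> usize {
        let final_addr = segment_base + len;
        let mut addr = segment_base;
        while addr != final_addr {
            memory[addr] = prover_input(); // one RLP byte per prover-input read
            addr += 1;
        }
        addr - segment_base // the recovered offset, i.e. the RLP blob length
    }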

View File

@ -2,21 +2,17 @@
///
/// Specifically, set SHIFT_TABLE_SEGMENT[i] = 2^i for i = 0..255.
%macro shift_table_init
push 0 // initial offset is zero
push @SEGMENT_SHIFT_TABLE // segment
dup2 // kernel context is 0
push @SEGMENT_SHIFT_TABLE // segment, ctx == virt == 0
push 1 // 2^0
%rep 255
// stack: 2^i, context, segment, ost_i
dup4
// stack: 2^i, addr_i
dup2
%increment
dup4
dup4
// stack: context, segment, ost_(i+1), 2^i, context, segment, ost_i
dup4
// stack: addr_(i+1), 2^i, addr_i
dup2
dup1
add
// stack: 2^(i+1), context, segment, ost_(i+1), 2^i, context, segment, ost_i
// stack: 2^(i+1), addr_(i+1), 2^i, addr_i
%endrep
%rep 256
mstore_general
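What the macro materializes, sketched with a 4-limb little-endian representation in place of the native 256-bit word:

    // SHIFT_TABLE_SEGMENT[i] = 2^i for i in 0..=255; each entry has exactly one bit set.
    fn shift_table() -> Vec<[u64; 4]> {
        (0..256usize)
            .map(|i| {
                let mut limbs = [0u64; 4];
                limbs[i / 64] = 1u64 << (i % 64);
                limbs
            })
            .collect()
    }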

View File

@ -6,207 +6,206 @@
// Decode the chain ID and store it.
%macro decode_and_store_chain_id
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, chain_id) -> (chain_id, pos)
%stack (rlp_addr, chain_id) -> (chain_id, rlp_addr)
%mstore_txn_field(@TXN_FIELD_CHAIN_ID)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the nonce and store it.
%macro decode_and_store_nonce
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, nonce) -> (nonce, pos)
%stack (rlp_addr, nonce) -> (nonce, rlp_addr)
%mstore_txn_field(@TXN_FIELD_NONCE)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the gas price and, since this is for legacy txns, store it as both
// TXN_FIELD_MAX_PRIORITY_FEE_PER_GAS and TXN_FIELD_MAX_FEE_PER_GAS.
%macro decode_and_store_gas_price_legacy
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, gas_price) -> (gas_price, gas_price, pos)
%stack (rlp_addr, gas_price) -> (gas_price, gas_price, rlp_addr)
%mstore_txn_field(@TXN_FIELD_MAX_PRIORITY_FEE_PER_GAS)
%mstore_txn_field(@TXN_FIELD_MAX_FEE_PER_GAS)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the max priority fee and store it.
%macro decode_and_store_max_priority_fee
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, gas_price) -> (gas_price, pos)
%stack (rlp_addr, gas_price) -> (gas_price, rlp_addr)
%mstore_txn_field(@TXN_FIELD_MAX_PRIORITY_FEE_PER_GAS)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the max fee and store it.
%macro decode_and_store_max_fee
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, gas_price) -> (gas_price, pos)
%stack (rlp_addr, gas_price) -> (gas_price, rlp_addr)
%mstore_txn_field(@TXN_FIELD_MAX_FEE_PER_GAS)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the gas limit and store it.
%macro decode_and_store_gas_limit
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, gas_limit) -> (gas_limit, pos)
%stack (rlp_addr, gas_limit) -> (gas_limit, rlp_addr)
%mstore_txn_field(@TXN_FIELD_GAS_LIMIT)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the "to" field and store it.
// This field is either 160-bit or empty in the case of a contract creation txn.
%macro decode_and_store_to
// stack: pos
// stack: rlp_addr
%decode_rlp_string_len
// stack: pos, len
// stack: rlp_addr, len
SWAP1
// stack: len, pos
// stack: len, rlp_addr
DUP1 ISZERO %jumpi(%%contract_creation)
// stack: len, pos
// stack: len, rlp_addr
DUP1 %eq_const(20) ISZERO %jumpi(invalid_txn) // Address is 160-bit
%stack (len, pos) -> (pos, len, %%with_scalar)
%stack (len, rlp_addr) -> (rlp_addr, len, %%with_scalar)
%jump(decode_int_given_len)
%%with_scalar:
// stack: pos, int
// stack: rlp_addr, int
SWAP1
%mstore_txn_field(@TXN_FIELD_TO)
// stack: pos
// stack: rlp_addr
%jump(%%end)
%%contract_creation:
// stack: len, pos
// stack: len, rlp_addr
POP
PUSH 1 %mstore_global_metadata(@GLOBAL_METADATA_CONTRACT_CREATION)
// stack: pos
// stack: rlp_addr
%%end:
%endmacro
// Decode the "value" field and store it.
%macro decode_and_store_value
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, value) -> (value, pos)
%stack (rlp_addr, value) -> (value, rlp_addr)
%mstore_txn_field(@TXN_FIELD_VALUE)
// stack: pos
// stack: rlp_addr
%endmacro
// Decode the calldata field, store its length in @TXN_FIELD_DATA_LEN, and copy it to @SEGMENT_TXN_DATA.
%macro decode_and_store_data
// stack: pos
// Decode the data length, store it, and compute new_pos after any data.
// stack: rlp_addr
// Decode the data length, store it, and compute new_rlp_addr after any data.
%decode_rlp_string_len
%stack (pos, data_len) -> (data_len, pos, data_len, pos, data_len)
%stack (rlp_addr, data_len) -> (data_len, rlp_addr, data_len, rlp_addr, data_len)
%mstore_txn_field(@TXN_FIELD_DATA_LEN)
// stack: pos, data_len, pos, data_len
// stack: rlp_addr, data_len, rlp_addr, data_len
ADD
// stack: new_pos, old_pos, data_len
// stack: new_rlp_addr, old_rlp_addr, data_len
// Memcpy the txn data from @SEGMENT_RLP_RAW to @SEGMENT_TXN_DATA.
%stack (new_pos, old_pos, data_len) -> (old_pos, data_len, %%after, new_pos)
PUSH @SEGMENT_RLP_RAW
GET_CONTEXT
PUSH 0
%stack (new_rlp_addr, old_rlp_addr, data_len) -> (old_rlp_addr, data_len, %%after, new_rlp_addr)
// old_rlp_addr has context 0. We will call GET_CONTEXT and update it.
GET_CONTEXT ADD
PUSH @SEGMENT_TXN_DATA
GET_CONTEXT
// stack: DST, SRC, data_len, %%after, new_pos
GET_CONTEXT ADD
// stack: DST, SRC, data_len, %%after, new_rlp_addr
%jump(memcpy_bytes)
%%after:
// stack: new_pos
// stack: new_rlp_addr
%endmacro
%macro decode_and_store_access_list
// stack: pos
// stack: rlp_addr
DUP1 %mstore_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_START)
%decode_rlp_list_len
%stack (pos, len) -> (len, len, pos, %%after)
%stack (rlp_addr, len) -> (len, len, rlp_addr, %%after)
%jumpi(decode_and_store_access_list)
// stack: len, pos, %%after
// stack: len, rlp_addr, %%after
POP SWAP1 POP
// stack: pos
// stack: rlp_addr
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_START) DUP2 SUB %mstore_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_LEN)
%%after:
%endmacro
%macro decode_and_store_y_parity
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, y_parity) -> (y_parity, pos)
%stack (rlp_addr, y_parity) -> (y_parity, rlp_addr)
%mstore_txn_field(@TXN_FIELD_Y_PARITY)
// stack: pos
// stack: rlp_addr
%endmacro
%macro decode_and_store_r
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, r) -> (r, pos)
%stack (rlp_addr, r) -> (r, rlp_addr)
%mstore_txn_field(@TXN_FIELD_R)
// stack: pos
// stack: rlp_addr
%endmacro
%macro decode_and_store_s
// stack: pos
// stack: rlp_addr
%decode_rlp_scalar
%stack (pos, s) -> (s, pos)
%stack (rlp_addr, s) -> (s, rlp_addr)
%mstore_txn_field(@TXN_FIELD_S)
// stack: pos
// stack: rlp_addr
%endmacro
// The access list is of the form `[[{20 bytes}, [{32 bytes}...]]...]`.
global decode_and_store_access_list:
// stack: len, pos
// stack: len, rlp_addr
DUP2 ADD
// stack: end_pos, pos
// stack: end_rlp_addr, rlp_addr
// Store the RLP length.
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_START) DUP2 SUB %mstore_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_LEN)
SWAP1
decode_and_store_access_list_loop:
// stack: pos, end_pos
// stack: rlp_addr, end_rlp_addr
DUP2 DUP2 EQ %jumpi(decode_and_store_access_list_finish)
// stack: pos, end_pos
// stack: rlp_addr, end_rlp_addr
%decode_rlp_list_len // Should be a list `[{20 bytes}, [{32 bytes}...]]`
// stack: pos, internal_len, end_pos
// stack: rlp_addr, internal_len, end_rlp_addr
SWAP1 POP // We don't need the length of this list.
// stack: pos, end_pos
// stack: rlp_addr, end_rlp_addr
%decode_rlp_scalar // Address // TODO: Should panic when address is not 20 bytes?
// stack: pos, addr, end_pos
// stack: rlp_addr, addr, end_rlp_addr
SWAP1
// stack: addr, pos, end_pos
// stack: addr, rlp_addr, end_rlp_addr
DUP1 %insert_accessed_addresses_no_return
// stack: addr, pos, end_pos
// stack: addr, rlp_addr, end_rlp_addr
%add_address_cost
// stack: addr, pos, end_pos
// stack: addr, rlp_addr, end_rlp_addr
SWAP1
// stack: pos, addr, end_pos
// stack: rlp_addr, addr, end_rlp_addr
%decode_rlp_list_len // Should be a list of storage keys `[{32 bytes}...]`
// stack: pos, sk_len, addr, end_pos
// stack: rlp_addr, sk_len, addr, end_rlp_addr
SWAP1 DUP2 ADD
// stack: sk_end_pos, pos, addr, end_pos
// stack: sk_end_rlp_addr, rlp_addr, addr, end_rlp_addr
SWAP1
// stack: pos, sk_end_pos, addr, end_pos
// stack: rlp_addr, sk_end_rlp_addr, addr, end_rlp_addr
sk_loop:
DUP2 DUP2 EQ %jumpi(end_sk)
// stack: pos, sk_end_pos, addr, end_pos
// stack: rlp_addr, sk_end_rlp_addr, addr, end_rlp_addr
%decode_rlp_scalar // Storage key // TODO: Should panic when key is not 32 bytes?
%stack (pos, key, sk_end_pos, addr, end_pos) ->
(addr, key, sk_loop_contd, pos, sk_end_pos, addr, end_pos)
%stack (rlp_addr, key, sk_end_rlp_addr, addr, end_rlp_addr) ->
(addr, key, sk_loop_contd, rlp_addr, sk_end_rlp_addr, addr, end_rlp_addr)
%jump(insert_accessed_storage_keys_with_original_value)
sk_loop_contd:
// stack: pos, sk_end_pos, addr, end_pos
// stack: rlp_addr, sk_end_rlp_addr, addr, end_rlp_addr
%add_storage_key_cost
%jump(sk_loop)
end_sk:
%stack (pos, sk_end_pos, addr, end_pos) -> (pos, end_pos)
%stack (rlp_addr, sk_end_rlp_addr, addr, end_rlp_addr) -> (rlp_addr, end_rlp_addr)
%jump(decode_and_store_access_list_loop)
decode_and_store_access_list_finish:
%stack (pos, end_pos, retdest) -> (retdest, pos)
%stack (rlp_addr, end_rlp_addr, retdest) -> (retdest, rlp_addr)
JUMP
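The structure being walked here is the EIP-2930 access list; a sketch of its shape and of the warm-up gas it accrues (the struct and function are illustrative, and the constants are the EIP-2930 values that %add_address_cost and %add_storage_key_cost are assumed to charge):

    struct AccessListItem {
        address: [u8; 20],            // inserted into the accessed-addresses list
        storage_keys: Vec<[u8; 32]>,  // each inserted into the accessed-storage-keys list
    }

    const ACCESS_LIST_ADDRESS_COST: u64 = 2400;      // per address, EIP-2930
    const ACCESS_LIST_STORAGE_KEY_COST: u64 = 1900;  // per storage key, EIP-2930

    fn access_list_gas(items: &[AccessListItem]) -> u64 {
        items.iter()
            .map(|item| ACCESS_LIST_ADDRESS_COST
                + ACCESS_LIST_STORAGE_KEY_COST * item.storage_keys.len() as u64)
            .sum()
    }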
%macro add_address_cost

View File

@ -20,15 +20,15 @@ read_txn_from_memory:
// Type 0 (legacy) transactions have no such prefix, but their RLP will have a
// first byte >= 0xc0, so there is no overlap.
PUSH 0
%mload_kernel(@SEGMENT_RLP_RAW)
PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
MLOAD_GENERAL
%eq_const(1)
// stack: first_byte == 1, retdest
%jumpi(process_type_1_txn)
// stack: retdest
PUSH 0
%mload_kernel(@SEGMENT_RLP_RAW)
PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
MLOAD_GENERAL
%eq_const(2)
// stack: first_byte == 2, retdest
%jumpi(process_type_2_txn)
@ -53,10 +53,12 @@ global update_txn_trie:
// and now copy txn_rlp to the new block
%stack (rlp_start, txn_rlp_len, value_ptr, txn_counter, num_nibbles) -> (
0, @SEGMENT_TRIE_DATA, rlp_start, // dest addr
0, @SEGMENT_RLP_RAW, 0, // src addr. Kernel has context 0
@SEGMENT_RLP_RAW, // src addr. ctx == virt == 0
rlp_start, @SEGMENT_TRIE_DATA, // swapped dest addr, ctx == 0
txn_rlp_len, // mcpy len
txn_rlp_len, rlp_start, txn_counter, num_nibbles, value_ptr)
SWAP2 %build_kernel_address
// stack: DST, SRC, txn_rlp_len, txn_rlp_len, rlp_start, txn_counter, num_nibbles, value_ptr
%memcpy_bytes
ADD
%set_trie_data_size

View File

@ -13,68 +13,68 @@
global process_type_0_txn:
// stack: retdest
PUSH 0 // initial pos
// stack: pos, retdest
PUSH @SEGMENT_RLP_RAW // ctx == virt == 0
// stack: rlp_addr, retdest
%decode_rlp_list_len
// We don't actually need the length.
%stack (pos, len) -> (pos)
%stack (rlp_addr, len) -> (rlp_addr)
// stack: pos, retdest
// stack: rlp_addr, retdest
%decode_and_store_nonce
%decode_and_store_gas_price_legacy
%decode_and_store_gas_limit
%decode_and_store_to
%decode_and_store_value
%decode_and_store_data
// stack: pos, retdest
// stack: rlp_addr, retdest
// Parse the "v" field.
// stack: pos, retdest
// stack: rlp_addr, retdest
%decode_rlp_scalar
// stack: pos, v, retdest
// stack: rlp_addr, v, retdest
SWAP1
// stack: v, pos, retdest
// stack: v, rlp_addr, retdest
DUP1
%gt_const(28)
// stack: v > 28, v, pos, retdest
// stack: v > 28, v, rlp_addr, retdest
%jumpi(process_v_new_style)
// We have an old style v, so y_parity = v - 27.
// No chain ID is present, so we can leave TXN_FIELD_CHAIN_ID_PRESENT and
// TXN_FIELD_CHAIN_ID with their default values of zero.
// stack: v, pos, retdest
// stack: v, rlp_addr, retdest
%sub_const(27)
%stack (y_parity, pos) -> (y_parity, pos)
%stack (y_parity, rlp_addr) -> (y_parity, rlp_addr)
%mstore_txn_field(@TXN_FIELD_Y_PARITY)
// stack: pos, retdest
// stack: rlp_addr, retdest
%jump(decode_r_and_s)
process_v_new_style:
// stack: v, pos, retdest
// stack: v, rlp_addr, retdest
// We have a new style v, so chain_id_present = 1,
// chain_id = (v - 35) / 2, and y_parity = (v - 35) % 2.
%stack (v, pos) -> (1, v, pos)
%stack (v, rlp_addr) -> (1, v, rlp_addr)
%mstore_txn_field(@TXN_FIELD_CHAIN_ID_PRESENT)
// stack: v, pos, retdest
// stack: v, rlp_addr, retdest
%sub_const(35)
DUP1
// stack: v - 35, v - 35, pos, retdest
// stack: v - 35, v - 35, rlp_addr, retdest
%div_const(2)
// stack: chain_id, v - 35, pos, retdest
// stack: chain_id, v - 35, rlp_addr, retdest
%mstore_txn_field(@TXN_FIELD_CHAIN_ID)
// stack: v - 35, pos, retdest
// stack: v - 35, rlp_addr, retdest
%mod_const(2)
// stack: y_parity, pos, retdest
// stack: y_parity, rlp_addr, retdest
%mstore_txn_field(@TXN_FIELD_Y_PARITY)
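
// Reference sketch (illustrative Rust, not kernel code) of the "v" handling in the
// two branches above: old-style v in {27, 28} carries no chain ID, new-style
// (EIP-155) v = 35 + 2 * chain_id + y_parity.
fn parse_v(v: u64) -> (Option<u64>, u64) {
    if v <= 28 {
        (None, v - 27)
    } else {
        (Some((v - 35) / 2), (v - 35) % 2)
    }
}

fn main() {
    assert_eq!(parse_v(27), (None, 0));
    assert_eq!(parse_v(38), (Some(1), 1)); // chain_id = 1, odd parity
}
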
decode_r_and_s:
// stack: pos, retdest
// stack: rlp_addr, retdest
%decode_and_store_r
%decode_and_store_s
// stack: pos, retdest
// stack: rlp_addr, retdest
POP
// stack: retdest
@ -85,73 +85,68 @@ type_0_compute_signed_data:
// keccak256(rlp([nonce, gas_price, gas_limit, to, value, data]))
%alloc_rlp_block
// stack: rlp_start, retdest
// stack: rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_NONCE)
// stack: nonce, rlp_start, retdest
// stack: nonce, rlp_addr_start, retdest
DUP2
// stack: rlp_pos, nonce, rlp_start, retdest
// stack: rlp_addr, nonce, rlp_addr_start, retdest
%encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_MAX_FEE_PER_GAS)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_GAS_LIMIT)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_TO)
%mload_global_metadata(@GLOBAL_METADATA_CONTRACT_CREATION) %jumpi(zero_to)
// stack: to, rlp_pos, rlp_start, retdest
// stack: to, rlp_addr, rlp_addr_start, retdest
SWAP1 %encode_rlp_160
%jump(after_to)
zero_to:
// stack: to, rlp_pos, rlp_start, retdest
// stack: to, rlp_addr, rlp_addr_start, retdest
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
after_to:
%mload_txn_field(@TXN_FIELD_VALUE)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
// Encode txn data.
%mload_txn_field(@TXN_FIELD_DATA_LEN)
PUSH 0 // ADDR.virt
PUSH @SEGMENT_TXN_DATA
PUSH 0 // ADDR.context
// stack: ADDR: 3, len, rlp_pos, rlp_start, retdest
// stack: ADDR, len, rlp_addr, rlp_addr_start, retdest
PUSH after_serializing_txn_data
// stack: after_serializing_txn_data, ADDR: 3, len, rlp_pos, rlp_start, retdest
SWAP5
// stack: rlp_pos, ADDR: 3, len, after_serializing_txn_data, rlp_start, retdest
// stack: after_serializing_txn_data, ADDR, len, rlp_addr, rlp_addr_start, retdest
SWAP3
// stack: rlp_addr, ADDR, len, after_serializing_txn_data, rlp_addr_start, retdest
%jump(encode_rlp_string)
after_serializing_txn_data:
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_CHAIN_ID_PRESENT)
ISZERO %jumpi(finish_rlp_list)
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_CHAIN_ID)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
PUSH 0
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
PUSH 0
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
finish_rlp_list:
%prepend_rlp_list_prefix
// stack: prefix_start_pos, rlp_len, retdest
PUSH @SEGMENT_RLP_RAW
PUSH 0 // context
// stack: ADDR: 3, rlp_len, retdest
// stack: ADDR, rlp_len, retdest
KECCAK_GENERAL
// stack: hash, retdest
@ -8,11 +8,14 @@
global process_type_1_txn:
// stack: retdest
PUSH 1 // initial pos, skipping over the 0x01 byte
// stack: pos, retdest
// Initial rlp address offset of 1 (skipping over the 0x01 byte)
PUSH 1
PUSH @SEGMENT_RLP_RAW
%build_kernel_address
// stack: rlp_addr, retdest
%decode_rlp_list_len
// We don't actually need the length.
%stack (pos, len) -> (pos)
%stack (rlp_addr, len) -> (rlp_addr)
%store_chain_id_present_true
%decode_and_store_chain_id
@ -27,7 +30,7 @@ global process_type_1_txn:
%decode_and_store_r
%decode_and_store_s
// stack: pos, retdest
// stack: rlp_addr, retdest
POP
// stack: retdest
@ -36,83 +39,79 @@ global process_type_1_txn:
// over keccak256(0x01 || rlp([chainId, nonce, gasPrice, gasLimit, to, value, data, accessList])).
type_1_compute_signed_data:
%alloc_rlp_block
// stack: rlp_start, retdest
// stack: rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_CHAIN_ID)
// stack: chain_id, rlp_start, retdest
// stack: chain_id, rlp_addr_start, retdest
DUP2
// stack: rlp_pos, chain_id, rlp_start, retdest
// stack: rlp_addr, chain_id, rlp_addr_start, retdest
%encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_NONCE)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_MAX_FEE_PER_GAS)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_GAS_LIMIT)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_TO)
%mload_global_metadata(@GLOBAL_METADATA_CONTRACT_CREATION) %jumpi(zero_to)
// stack: to, rlp_pos, rlp_start, retdest
// stack: to, rlp_addr, rlp_addr_start, retdest
SWAP1 %encode_rlp_160
%jump(after_to)
zero_to:
// stack: to, rlp_pos, rlp_start, retdest
// stack: to, rlp_addr, rlp_addr_start, retdest
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
after_to:
%mload_txn_field(@TXN_FIELD_VALUE)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
// Encode txn data.
%mload_txn_field(@TXN_FIELD_DATA_LEN)
PUSH 0 // ADDR.virt
PUSH @SEGMENT_TXN_DATA
PUSH 0 // ADDR.context
// stack: ADDR: 3, len, rlp_pos, rlp_start, retdest
PUSH @SEGMENT_TXN_DATA // ctx == virt == 0
// stack: ADDR, len, rlp_addr, rlp_addr_start, retdest
PUSH after_serializing_txn_data
// stack: after_serializing_txn_data, ADDR: 3, len, rlp_pos, rlp_start, retdest
SWAP5
// stack: rlp_pos, ADDR: 3, len, after_serializing_txn_data, rlp_start, retdest
// stack: after_serializing_txn_data, ADDR, len, rlp_addr, rlp_addr_start, retdest
SWAP3
// stack: rlp_addr, ADDR, len, after_serializing_txn_data, rlp_addr_start, retdest
%jump(encode_rlp_string)
after_serializing_txn_data:
// Instead of manually encoding the access list, we just copy the raw RLP from the transaction.
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_START)
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_LEN)
%stack (al_len, al_start, rlp_pos, rlp_start, retdest) ->
%stack (al_len, al_start, rlp_addr, rlp_addr_start, retdest) ->
(
0, @SEGMENT_RLP_RAW, rlp_pos,
0, @SEGMENT_RLP_RAW, al_start,
rlp_addr,
al_start,
al_len,
after_serializing_access_list,
rlp_pos, rlp_start, retdest)
rlp_addr, rlp_addr_start, retdest)
%jump(memcpy_bytes)
after_serializing_access_list:
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_LEN) ADD
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_addr_start, retdest
%prepend_rlp_list_prefix
// stack: prefix_start_pos, rlp_len, retdest
// stack: prefix_start_rlp_addr, rlp_len, retdest
// Store a `1` in front of the RLP
%decrement
%stack (pos) -> (1, 0, @SEGMENT_RLP_RAW, pos, pos)
%stack (rlp_addr) -> (1, rlp_addr, rlp_addr)
MSTORE_GENERAL
// stack: pos, rlp_len, retdest
// stack: rlp_addr, rlp_len, retdest
// Hash the RLP + the leading `1`
SWAP1 %increment SWAP1
PUSH @SEGMENT_RLP_RAW
PUSH 0 // context
// stack: ADDR: 3, len, retdest
// stack: ADDR, len, retdest
KECCAK_GENERAL
// stack: hash, retdest
@ -9,13 +9,16 @@
global process_type_2_txn:
// stack: retdest
PUSH 1 // initial pos, skipping over the 0x02 byte
// stack: pos, retdest
// Initial rlp address offset of 1 (skipping over the 0x02 byte)
PUSH 1
PUSH @SEGMENT_RLP_RAW
%build_kernel_address
// stack: rlp_addr, retdest
%decode_rlp_list_len
// We don't actually need the length.
%stack (pos, len) -> (pos)
%stack (rlp_addr, len) -> (rlp_addr)
// stack: pos, retdest
// stack: rlp_addr, retdest
%store_chain_id_present_true
%decode_and_store_chain_id
%decode_and_store_nonce
@ -30,7 +33,7 @@ global process_type_2_txn:
%decode_and_store_r
%decode_and_store_s
// stack: pos, retdest
// stack: rlp_addr, retdest
POP
// stack: retdest
@ -39,87 +42,83 @@ global process_type_2_txn:
// keccak256(0x02 || rlp([chain_id, nonce, max_priority_fee_per_gas, max_fee_per_gas, gas_limit, destination, amount, data, access_list]))
type_2_compute_signed_data:
%alloc_rlp_block
// stack: rlp_start, retdest
// stack: rlp_addr_start, retdest
%mload_txn_field(@TXN_FIELD_CHAIN_ID)
// stack: chain_id, rlp_start, retdest
DUP2
// stack: rlp_pos, chain_id, rlp_start, retdest
// stack: rlp_addr, chain_id, rlp_start, retdest
%encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%mload_txn_field(@TXN_FIELD_NONCE)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%mload_txn_field(@TXN_FIELD_MAX_PRIORITY_FEE_PER_GAS)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%mload_txn_field(@TXN_FIELD_MAX_FEE_PER_GAS)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%mload_txn_field(@TXN_FIELD_GAS_LIMIT)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%mload_txn_field(@TXN_FIELD_TO)
%mload_global_metadata(@GLOBAL_METADATA_CONTRACT_CREATION) %jumpi(zero_to)
// stack: to, rlp_pos, rlp_start, retdest
// stack: to, rlp_addr, rlp_start, retdest
SWAP1 %encode_rlp_160
%jump(after_to)
zero_to:
// stack: to, rlp_pos, rlp_start, retdest
// stack: to, rlp_addr, rlp_start, retdest
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
after_to:
%mload_txn_field(@TXN_FIELD_VALUE)
SWAP1 %encode_rlp_scalar
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
// Encode txn data.
%mload_txn_field(@TXN_FIELD_DATA_LEN)
PUSH 0 // ADDR.virt
PUSH @SEGMENT_TXN_DATA
PUSH 0 // ADDR.context
// stack: ADDR: 3, len, rlp_pos, rlp_start, retdest
PUSH @SEGMENT_TXN_DATA // ctx == virt == 0
// stack: ADDR, len, rlp_addr, rlp_start, retdest
PUSH after_serializing_txn_data
// stack: after_serializing_txn_data, ADDR: 3, len, rlp_pos, rlp_start, retdest
SWAP5
// stack: rlp_pos, ADDR: 3, len, after_serializing_txn_data, rlp_start, retdest
// stack: after_serializing_txn_data, ADDR, len, rlp_addr, rlp_start, retdest
SWAP3
// stack: rlp_addr, ADDR, len, after_serializing_txn_data, rlp_start, retdest
%jump(encode_rlp_string)
after_serializing_txn_data:
// Instead of manually encoding the access list, we just copy the raw RLP from the transaction.
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_START)
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_LEN)
%stack (al_len, al_start, rlp_pos, rlp_start, retdest) ->
%stack (al_len, al_start, rlp_addr, rlp_start, retdest) ->
(
0, @SEGMENT_RLP_RAW, rlp_pos,
0, @SEGMENT_RLP_RAW, al_start,
rlp_addr,
al_start,
al_len,
after_serializing_access_list,
rlp_pos, rlp_start, retdest)
rlp_addr, rlp_start, retdest)
%jump(memcpy_bytes)
after_serializing_access_list:
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%mload_global_metadata(@GLOBAL_METADATA_ACCESS_LIST_RLP_LEN) ADD
// stack: rlp_pos, rlp_start, retdest
// stack: rlp_addr, rlp_start, retdest
%prepend_rlp_list_prefix
// stack: prefix_start_pos, rlp_len, retdest
// Store a `2` in front of the RLP
%decrement
%stack (pos) -> (2, 0, @SEGMENT_RLP_RAW, pos, pos)
%stack (rlp_addr) -> (2, rlp_addr, rlp_addr)
MSTORE_GENERAL
// stack: pos, rlp_len, retdest
// stack: rlp_addr, rlp_len, retdest
// Hash the RLP + the leading `2`
SWAP1 %increment SWAP1
PUSH @SEGMENT_RLP_RAW
PUSH 0 // context
// stack: ADDR: 3, len, retdest
// stack: ADDR, len, retdest
KECCAK_GENERAL
// stack: hash, retdest
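
// The decrement / MSTORE_GENERAL / KECCAK_GENERAL sequence above hashes the type
// byte followed by the RLP payload. An equivalent sketch is below; tiny-keccak is an
// assumption used only to make the example runnable, the kernel has its own Keccak.
use tiny_keccak::{Hasher, Keccak};

fn type_2_signing_hash(rlp_payload: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak::v256();
    hasher.update(&[0x02]); // the leading type byte stored just before the RLP
    hasher.update(rlp_payload);
    let mut digest = [0u8; 32];
    hasher.finalize(&mut digest);
    digest
}
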
@ -410,3 +410,36 @@
ISZERO
// stack: not b
%endmacro
%macro build_address
// stack: ctx, seg, off
ADD
ADD
// stack: addr
%endmacro
%macro build_address_no_offset
// stack: ctx, seg
ADD
// stack: addr
%endmacro
%macro build_kernel_address
// stack: seg, off
ADD
// stack: addr (ctx == 0)
%endmacro
%macro build_address_with_ctx_no_offset(seg)
// stack: ctx
PUSH $seg
ADD
// stack: addr
%endmacro
%macro build_address_with_ctx_no_segment(off)
// stack: ctx
PUSH $off
ADD
// stack: addr
%endmacro
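
// A minimal Rust model of the bundling scheme these macros implement: context,
// segment and offset occupy disjoint bit ranges of one address word, so building
// an address is plain addition once the components are pre-scaled. The shift
// widths below are illustrative assumptions; the actual SEGMENT_SCALING_FACTOR and
// CONTEXT_SCALING_FACTOR constants are defined in memory/segments.rs and
// witness/operation.rs.
const SEGMENT_SHIFT: u32 = 32; // assumed, for illustration
const CONTEXT_SHIFT: u32 = 64; // assumed, for illustration

fn build_address(context: u128, segment_idx: u128, offset: u128) -> u128 {
    (context << CONTEXT_SHIFT) + (segment_idx << SEGMENT_SHIFT) + offset
}

fn unpack_address(addr: u128) -> (u128, u128, u128) {
    let offset = addr & 0xffff_ffff;
    let segment_idx = (addr >> SEGMENT_SHIFT) & 0xffff_ffff;
    let context = addr >> CONTEXT_SHIFT;
    (context, segment_idx, offset)
}

fn main() {
    let addr = build_address(1, 7, 42);
    assert_eq!(unpack_address(addr), (1, 7, 42));
}
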
@ -18,7 +18,8 @@ global sys_keccak256:
%stack (kexit_info, offset, len) -> (offset, len, kexit_info)
PUSH @SEGMENT_MAIN_MEMORY
GET_CONTEXT
// stack: ADDR: 3, len, kexit_info
%build_address
// stack: ADDR, len, kexit_info
KECCAK_GENERAL
// stack: hash, kexit_info
SWAP1
@ -37,11 +38,12 @@ sys_keccak256_empty:
%macro keccak256_word(num_bytes)
// Since KECCAK_GENERAL takes its input from memory, we will first write
// input_word's bytes to @SEGMENT_KERNEL_GENERAL[0..$num_bytes].
%stack (word) -> (0, @SEGMENT_KERNEL_GENERAL, 0, word, $num_bytes, %%after_mstore)
%stack (word) -> (@SEGMENT_KERNEL_GENERAL, word, $num_bytes, %%after_mstore)
%jump(mstore_unpacking)
%%after_mstore:
// stack: offset
%stack (offset) -> (0, @SEGMENT_KERNEL_GENERAL, 0, $num_bytes) // context, segment, offset, len
// stack: addr
%stack(addr) -> (addr, $num_bytes, $num_bytes)
SUB
KECCAK_GENERAL
%endmacro
@ -53,12 +55,13 @@ sys_keccak256_empty:
// Since KECCAK_GENERAL takes its input from memory, we will first write
// a's bytes to @SEGMENT_KERNEL_GENERAL[0..32], then b's bytes to
// @SEGMENT_KERNEL_GENERAL[32..64].
%stack (a) -> (0, @SEGMENT_KERNEL_GENERAL, 0, a, 32, %%after_mstore_a)
%stack (a) -> (@SEGMENT_KERNEL_GENERAL, a, 32, %%after_mstore_a)
%jump(mstore_unpacking)
%%after_mstore_a:
%stack (offset, b) -> (0, @SEGMENT_KERNEL_GENERAL, 32, b, 32, %%after_mstore_b)
%stack (addr, b) -> (addr, b, 32, %%after_mstore_b)
%jump(mstore_unpacking)
%%after_mstore_b:
%stack (offset) -> (0, @SEGMENT_KERNEL_GENERAL, 0, 64) // context, segment, offset, len
%stack (addr) -> (addr, 64, 64) // reset the address offset
SUB
KECCAK_GENERAL
%endmacro
@ -1,39 +1,51 @@
use crate::memory::segments::Segment;
/// These metadata fields contain VM state specific to a particular context.
///
/// Each value is directly scaled by the corresponding `Segment::ContextMetadata` value for faster
/// memory access in the kernel.
#[allow(clippy::enum_clike_unportable_variant)]
#[repr(usize)]
#[derive(Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Debug)]
pub(crate) enum ContextMetadata {
/// The ID of the context which created this one.
ParentContext = 0,
ParentContext = Segment::ContextMetadata as usize,
/// The program counter to return to when we return to the parent context.
ParentProgramCounter = 1,
CalldataSize = 2,
ReturndataSize = 3,
ParentProgramCounter,
CalldataSize,
ReturndataSize,
/// The address of the account associated with this context.
Address = 4,
Address,
/// The size of the code under the account associated with this context.
/// While this information could be obtained from the state trie, it is best to cache it since
/// the `CODESIZE` instruction is very cheap.
CodeSize = 5,
CodeSize,
/// The address of the caller who spawned this context.
Caller = 6,
Caller,
/// The value (in wei) deposited by the caller.
CallValue = 7,
CallValue,
/// Whether this context was created by `STATICCALL`, in which case state changes are
/// prohibited.
Static = 8,
Static,
/// Pointer to the initial version of the state trie, at the creation of this context. Used when
/// we need to revert a context.
StateTrieCheckpointPointer = 9,
StateTrieCheckpointPointer,
/// Size of the active main memory, in (32 byte) words.
MemWords = 10,
StackSize = 11,
MemWords,
StackSize,
/// The gas limit for this call (not the entire transaction).
GasLimit = 12,
ContextCheckpointsLen = 13,
GasLimit,
ContextCheckpointsLen,
}
impl ContextMetadata {
pub(crate) const COUNT: usize = 14;
/// Unscales this virtual offset by its respective `Segment` value.
pub(crate) const fn unscale(&self) -> usize {
*self as usize - Segment::ContextMetadata as usize
}
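// Example: `ContextMetadata::ParentProgramCounter as usize` is now
// `Segment::ContextMetadata as usize + 1`, so `.unscale()` recovers the raw
// offset `1` that was used before scaling.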
pub(crate) const fn all() -> [Self; Self::COUNT] {
[
Self::ParentContext,
@ -1,98 +1,110 @@
use crate::memory::segments::Segment;
/// These metadata fields contain global VM state, stored in the `Segment::Metadata` segment of the
/// kernel's context (which is zero).
///
/// Each value is directly scaled by the corresponding `Segment::GlobalMetadata` value for faster
/// memory access in the kernel.
#[allow(clippy::enum_clike_unportable_variant)]
#[repr(usize)]
#[derive(Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Debug)]
pub(crate) enum GlobalMetadata {
/// The largest context ID that has been used so far in this execution. Tracking this allows us
/// to give each new context a unique ID, so that its memory will be zero-initialized.
LargestContext = 0,
LargestContext = Segment::GlobalMetadata as usize,
/// The size of active memory, in bytes.
MemorySize = 1,
MemorySize,
/// The size of the `TrieData` segment, in bytes. In other words, the next address available for
/// appending additional trie data.
TrieDataSize = 2,
/// The size of the `TrieData` segment, in bytes. In other words, the next address available for
/// appending additional trie data.
RlpDataSize = 3,
TrieDataSize,
/// The size of the `TrieData` segment, in bytes, represented as a whole address.
/// In other words, the next address available for appending additional trie data.
RlpDataSize,
/// A pointer to the root of the state trie within the `TrieData` buffer.
StateTrieRoot = 4,
StateTrieRoot,
/// A pointer to the root of the transaction trie within the `TrieData` buffer.
TransactionTrieRoot = 5,
TransactionTrieRoot,
/// A pointer to the root of the receipt trie within the `TrieData` buffer.
ReceiptTrieRoot = 6,
ReceiptTrieRoot,
// The root digests of each Merkle trie before these transactions.
StateTrieRootDigestBefore = 7,
TransactionTrieRootDigestBefore = 8,
ReceiptTrieRootDigestBefore = 9,
StateTrieRootDigestBefore,
TransactionTrieRootDigestBefore,
ReceiptTrieRootDigestBefore,
// The root digests of each Merkle trie after these transactions.
StateTrieRootDigestAfter = 10,
TransactionTrieRootDigestAfter = 11,
ReceiptTrieRootDigestAfter = 12,
StateTrieRootDigestAfter,
TransactionTrieRootDigestAfter,
ReceiptTrieRootDigestAfter,
/// The sizes of the `TrieEncodedChild` and `TrieEncodedChildLen` buffers. In other words, the
/// next available offset in these buffers.
TrieEncodedChildSize = 13,
TrieEncodedChildSize,
// Block metadata.
BlockBeneficiary = 14,
BlockTimestamp = 15,
BlockNumber = 16,
BlockDifficulty = 17,
BlockRandom = 18,
BlockGasLimit = 19,
BlockChainId = 20,
BlockBaseFee = 21,
BlockGasUsed = 22,
BlockBeneficiary,
BlockTimestamp,
BlockNumber,
BlockDifficulty,
BlockRandom,
BlockGasLimit,
BlockChainId,
BlockBaseFee,
BlockGasUsed,
/// Block values before the current transactions.
BlockGasUsedBefore = 23,
BlockGasUsedBefore,
/// Block values after the current transactions.
BlockGasUsedAfter = 24,
BlockGasUsedAfter,
/// Current block header hash
BlockCurrentHash = 25,
BlockCurrentHash,
/// Gas to refund at the end of the transaction.
RefundCounter = 26,
RefundCounter,
/// Length of the addresses access list.
AccessedAddressesLen = 27,
AccessedAddressesLen,
/// Length of the storage keys access list.
AccessedStorageKeysLen = 28,
AccessedStorageKeysLen,
/// Length of the self-destruct list.
SelfDestructListLen = 29,
SelfDestructListLen,
/// Length of the bloom entry buffer.
BloomEntryLen = 30,
BloomEntryLen,
/// Length of the journal.
JournalLen = 31,
JournalLen,
/// Length of the `JournalData` segment.
JournalDataLen = 32,
JournalDataLen,
/// Current checkpoint.
CurrentCheckpoint = 33,
TouchedAddressesLen = 34,
CurrentCheckpoint,
TouchedAddressesLen,
// Gas cost for the access list in type-1 txns. See EIP-2930.
AccessListDataCost = 35,
AccessListDataCost,
// Start of the access list in the RLP for type-1 txns.
AccessListRlpStart = 36,
AccessListRlpStart,
// Length of the access list in the RLP for type-1 txns.
AccessListRlpLen = 37,
AccessListRlpLen,
// Boolean flag indicating if the txn is a contract creation txn.
ContractCreation = 38,
IsPrecompileFromEoa = 39,
CallStackDepth = 40,
ContractCreation,
IsPrecompileFromEoa,
CallStackDepth,
/// Transaction logs list length
LogsLen = 41,
LogsDataLen = 42,
LogsPayloadLen = 43,
TxnNumberBefore = 44,
TxnNumberAfter = 45,
LogsLen,
LogsDataLen,
LogsPayloadLen,
TxnNumberBefore,
TxnNumberAfter,
KernelHash = 46,
KernelLen = 47,
KernelHash,
KernelLen,
}
impl GlobalMetadata {
pub(crate) const COUNT: usize = 48;
/// Unscales this virtual offset by its respective `Segment` value.
pub(crate) const fn unscale(&self) -> usize {
*self as usize - Segment::GlobalMetadata as usize
}
pub(crate) const fn all() -> [Self; Self::COUNT] {
[
Self::LargestContext,
@ -58,16 +58,19 @@ pub(crate) fn evm_constants() -> HashMap<String, U256> {
c.insert(CALL_STACK_LIMIT.0.into(), U256::from(CALL_STACK_LIMIT.1));
for segment in Segment::all() {
c.insert(segment.var_name().into(), (segment as u32).into());
c.insert(segment.var_name().into(), (segment as usize).into());
}
for txn_field in NormalizedTxnField::all() {
c.insert(txn_field.var_name().into(), (txn_field as u32).into());
// These offsets are already scaled by their respective segment.
c.insert(txn_field.var_name().into(), (txn_field as usize).into());
}
for txn_field in GlobalMetadata::all() {
c.insert(txn_field.var_name().into(), (txn_field as u32).into());
// These offsets are already scaled by their respective segment.
c.insert(txn_field.var_name().into(), (txn_field as usize).into());
}
for txn_field in ContextMetadata::all() {
c.insert(txn_field.var_name().into(), (txn_field as u32).into());
// These offsets are already scaled by their respective segment.
c.insert(txn_field.var_name().into(), (txn_field as usize).into());
}
for trie_type in PartialTrieType::all() {
c.insert(trie_type.var_name().into(), (trie_type as u32).into());
@ -1,33 +1,46 @@
use crate::memory::segments::Segment;
/// These are normalized transaction fields, i.e. not specific to any transaction type.
///
/// Each value is directly scaled by the corresponding `Segment::TxnFields` value for faster
/// memory access in the kernel.
#[allow(dead_code)]
#[allow(clippy::enum_clike_unportable_variant)]
#[repr(usize)]
#[derive(Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Debug)]
pub(crate) enum NormalizedTxnField {
/// Whether a chain ID was present in the txn data. Type 0 transactions with v=27 or v=28 have
/// no chain ID. This affects what fields get signed.
ChainIdPresent = 0,
ChainId = 1,
Nonce = 2,
MaxPriorityFeePerGas = 3,
MaxFeePerGas = 4,
GasLimit = 6,
IntrinsicGas = 7,
To = 8,
Value = 9,
ChainIdPresent = Segment::TxnFields as usize,
ChainId,
Nonce,
MaxPriorityFeePerGas,
MaxFeePerGas,
GasLimit,
IntrinsicGas,
To,
Value,
/// The length of the data field. The data itself is stored in another segment.
DataLen = 10,
YParity = 11,
R = 12,
S = 13,
Origin = 14,
DataLen,
YParity,
R,
S,
Origin,
/// The actual computed gas price for this transaction in the block.
/// This is not technically a transaction field, as it depends on the block's base fee.
ComputedFeePerGas = 15,
ComputedPriorityFeePerGas = 16,
ComputedFeePerGas,
ComputedPriorityFeePerGas,
}
impl NormalizedTxnField {
pub(crate) const COUNT: usize = 16;
/// Unscales this virtual offset by its respective `Segment` value.
pub(crate) const fn unscale(&self) -> usize {
*self as usize - Segment::TxnFields as usize
}
pub(crate) const fn all() -> [Self; Self::COUNT] {
[
Self::ChainIdPresent,
@ -19,12 +19,12 @@ use crate::extension_tower::BN_BASE;
use crate::generation::prover_input::ProverInputFn;
use crate::generation::state::GenerationState;
use crate::generation::GenerationInputs;
use crate::memory::segments::Segment;
use crate::memory::segments::{Segment, SEGMENT_SCALING_FACTOR};
use crate::util::u256_to_usize;
use crate::witness::errors::{ProgramError, ProverInputError};
use crate::witness::gas::gas_to_charge;
use crate::witness::memory::{MemoryAddress, MemoryContextState, MemorySegmentState, MemoryState};
use crate::witness::operation::Operation;
use crate::witness::operation::{Operation, CONTEXT_SCALING_FACTOR};
use crate::witness::state::RegistersState;
use crate::witness::transition::decode;
use crate::witness::util::stack_peek;
@ -199,19 +199,20 @@ impl<'a> Interpreter<'a> {
match op {
InterpreterMemOpKind::Push(context) => {
self.generation_state.memory.contexts[context].segments
[Segment::Stack as usize]
.content
.pop();
[Segment::Stack.unscale()]
.content
.pop();
}
InterpreterMemOpKind::Pop(value, context) => {
self.generation_state.memory.contexts[context].segments
[Segment::Stack as usize]
.content
.push(value)
[Segment::Stack.unscale()]
.content
.push(value)
}
InterpreterMemOpKind::Write(value, context, segment, offset) => {
self.generation_state.memory.contexts[context].segments[segment].content
[offset] = value
self.generation_state.memory.contexts[context].segments
[segment >> SEGMENT_SCALING_FACTOR] // we need to unscale the segment value
.content[offset] = value
}
}
}
@ -267,8 +268,8 @@ impl<'a> Interpreter<'a> {
offset_name,
self.stack(),
self.generation_state.memory.contexts[0].segments
[Segment::KernelGeneral as usize]
.content,
[Segment::KernelGeneral.unscale()]
.content,
);
}
self.rollback(checkpoint);
@ -289,7 +290,7 @@ impl<'a> Interpreter<'a> {
fn code(&self) -> &MemorySegmentState {
// The context is 0 if we are in kernel mode.
&self.generation_state.memory.contexts[(1 - self.is_kernel() as usize) * self.context()]
.segments[Segment::Code as usize]
.segments[Segment::Code.unscale()]
}
fn code_slice(&self, n: usize) -> Vec<u8> {
@ -301,52 +302,76 @@ impl<'a> Interpreter<'a> {
}
pub(crate) fn get_txn_field(&self, field: NormalizedTxnField) -> U256 {
self.generation_state.memory.contexts[0].segments[Segment::TxnFields as usize]
.get(field as usize)
// These fields are already scaled by their respective segment.
self.generation_state.memory.contexts[0].segments[Segment::TxnFields.unscale()]
.get(field.unscale())
}
pub(crate) fn set_txn_field(&mut self, field: NormalizedTxnField, value: U256) {
self.generation_state.memory.contexts[0].segments[Segment::TxnFields as usize]
.set(field as usize, value);
// These fields are already scaled by their respective segment.
self.generation_state.memory.contexts[0].segments[Segment::TxnFields.unscale()]
.set(field.unscale(), value);
}
pub(crate) fn get_txn_data(&self) -> &[U256] {
&self.generation_state.memory.contexts[0].segments[Segment::TxnData as usize].content
&self.generation_state.memory.contexts[0].segments[Segment::TxnData.unscale()].content
}
pub(crate) fn get_context_metadata_field(&self, ctx: usize, field: ContextMetadata) -> U256 {
// These fields are already scaled by their respective segment.
self.generation_state.memory.contexts[ctx].segments[Segment::ContextMetadata.unscale()]
.get(field.unscale())
}
pub(crate) fn set_context_metadata_field(
&mut self,
ctx: usize,
field: ContextMetadata,
value: U256,
) {
// These fields are already scaled by their respective segment.
self.generation_state.memory.contexts[ctx].segments[Segment::ContextMetadata.unscale()]
.set(field.unscale(), value)
}
pub(crate) fn get_global_metadata_field(&self, field: GlobalMetadata) -> U256 {
self.generation_state.memory.contexts[0].segments[Segment::GlobalMetadata as usize]
.get(field as usize)
// These fields are already scaled by their respective segment.
let field = field.unscale();
self.generation_state.memory.contexts[0].segments[Segment::GlobalMetadata.unscale()]
.get(field)
}
pub(crate) fn set_global_metadata_field(&mut self, field: GlobalMetadata, value: U256) {
self.generation_state.memory.contexts[0].segments[Segment::GlobalMetadata as usize]
.set(field as usize, value)
// These fields are already scaled by their respective segment.
let field = field.unscale();
self.generation_state.memory.contexts[0].segments[Segment::GlobalMetadata.unscale()]
.set(field, value)
}
pub(crate) fn set_global_metadata_multi_fields(&mut self, metadata: &[(GlobalMetadata, U256)]) {
for &(field, value) in metadata {
self.generation_state.memory.contexts[0].segments[Segment::GlobalMetadata as usize]
.set(field as usize, value);
let field = field.unscale();
self.generation_state.memory.contexts[0].segments[Segment::GlobalMetadata.unscale()]
.set(field, value);
}
}
pub(crate) fn get_trie_data(&self) -> &[U256] {
&self.generation_state.memory.contexts[0].segments[Segment::TrieData as usize].content
&self.generation_state.memory.contexts[0].segments[Segment::TrieData.unscale()].content
}
pub(crate) fn get_trie_data_mut(&mut self) -> &mut Vec<U256> {
&mut self.generation_state.memory.contexts[0].segments[Segment::TrieData as usize].content
&mut self.generation_state.memory.contexts[0].segments[Segment::TrieData.unscale()].content
}
pub(crate) fn get_memory_segment(&self, segment: Segment) -> Vec<U256> {
self.generation_state.memory.contexts[0].segments[segment as usize]
self.generation_state.memory.contexts[0].segments[segment.unscale()]
.content
.clone()
}
pub(crate) fn get_memory_segment_bytes(&self, segment: Segment) -> Vec<u8> {
self.generation_state.memory.contexts[0].segments[segment as usize]
self.generation_state.memory.contexts[0].segments[segment.unscale()]
.content
.iter()
.map(|x| x.low_u32() as u8)
@ -355,9 +380,9 @@ impl<'a> Interpreter<'a> {
pub(crate) fn get_current_general_memory(&self) -> Vec<U256> {
self.generation_state.memory.contexts[self.context()].segments
[Segment::KernelGeneral as usize]
.content
.clone()
[Segment::KernelGeneral.unscale()]
.content
.clone()
}
pub(crate) fn get_kernel_general_memory(&self) -> Vec<U256> {
@ -370,16 +395,16 @@ impl<'a> Interpreter<'a> {
pub(crate) fn set_current_general_memory(&mut self, memory: Vec<U256>) {
let context = self.context();
self.generation_state.memory.contexts[context].segments[Segment::KernelGeneral as usize]
self.generation_state.memory.contexts[context].segments[Segment::KernelGeneral.unscale()]
.content = memory;
}
pub(crate) fn set_memory_segment(&mut self, segment: Segment, memory: Vec<U256>) {
self.generation_state.memory.contexts[0].segments[segment as usize].content = memory;
self.generation_state.memory.contexts[0].segments[segment.unscale()].content = memory;
}
pub(crate) fn set_memory_segment_bytes(&mut self, segment: Segment, memory: Vec<u8>) {
self.generation_state.memory.contexts[0].segments[segment as usize].content =
self.generation_state.memory.contexts[0].segments[segment.unscale()].content =
memory.into_iter().map(U256::from).collect();
}
@ -395,7 +420,7 @@ impl<'a> Interpreter<'a> {
.contexts
.push(MemoryContextState::default());
}
self.generation_state.memory.contexts[context].segments[Segment::Code as usize].content =
self.generation_state.memory.contexts[context].segments[Segment::Code.unscale()].content =
code.into_iter().map(U256::from).collect();
}
@ -406,7 +431,7 @@ impl<'a> Interpreter<'a> {
}
pub(crate) fn get_jumpdest_bits(&self, context: usize) -> Vec<bool> {
self.generation_state.memory.contexts[context].segments[Segment::JumpdestBits as usize]
self.generation_state.memory.contexts[context].segments[Segment::JumpdestBits.unscale()]
.content
.iter()
.map(|x| x.bit(0))
@ -421,9 +446,9 @@ impl<'a> Interpreter<'a> {
match self.stack_len().cmp(&1) {
Ordering::Greater => {
let mut stack = self.generation_state.memory.contexts[self.context()].segments
[Segment::Stack as usize]
.content
.clone();
[Segment::Stack.unscale()]
.content
.clone();
stack.truncate(self.stack_len() - 1);
stack.push(
self.stack_top()
@ -443,7 +468,7 @@ impl<'a> Interpreter<'a> {
}
fn stack_segment_mut(&mut self) -> &mut Vec<U256> {
let context = self.context();
&mut self.generation_state.memory.contexts[context].segments[Segment::Stack as usize]
&mut self.generation_state.memory.contexts[context].segments[Segment::Stack.unscale()]
.content
}
@ -642,8 +667,8 @@ impl<'a> Interpreter<'a> {
if !self.is_kernel() {
let gas_limit_address = MemoryAddress {
context: self.context(),
segment: Segment::ContextMetadata as usize,
virt: ContextMetadata::GasLimit as usize,
segment: Segment::ContextMetadata.unscale(),
virt: ContextMetadata::GasLimit.unscale(),
};
let gas_limit =
u256_to_usize(self.generation_state.memory.get(gas_limit_address))? as u64;
@ -828,11 +853,11 @@ impl<'a> Interpreter<'a> {
}
fn run_keccak_general(&mut self) -> anyhow::Result<(), ProgramError> {
let context = self.pop()?.as_usize();
let segment = Segment::all()[self.pop()?.as_usize()];
let addr = self.pop()?;
let (context, segment, offset) = unpack_address!(addr);
// Not strictly needed but here to avoid surprises with MSIZE.
assert_ne!(segment, Segment::MainMemory, "Call KECCAK256 instead.");
let offset = self.pop()?.as_usize();
let size = self.pop()?.as_usize();
let bytes = (offset..offset + size)
.map(|i| {
@ -983,7 +1008,7 @@ impl<'a> Interpreter<'a> {
let mem_write_op = InterpreterMemOpKind::Write(
old_value,
self.context(),
Segment::Stack as usize,
Segment::Stack.unscale(),
len - n as usize - 1,
);
self.memops.push(mem_write_op);
@ -992,16 +1017,17 @@ impl<'a> Interpreter<'a> {
}
fn run_get_context(&mut self) -> anyhow::Result<(), ProgramError> {
self.push(self.context().into())
self.push(U256::from(self.context()) << CONTEXT_SCALING_FACTOR)
}
fn run_set_context(&mut self) -> anyhow::Result<(), ProgramError> {
let new_ctx = self.pop()?.as_usize();
let x = self.pop()?;
let new_ctx = (x >> CONTEXT_SCALING_FACTOR).as_usize();
let sp_to_save = self.stack_len().into();
let old_ctx = self.context();
let sp_field = ContextMetadata::StackSize as usize;
let sp_field = ContextMetadata::StackSize.unscale();
let old_sp_addr = MemoryAddress::new(old_ctx, Segment::ContextMetadata, sp_field);
let new_sp_addr = MemoryAddress::new(new_ctx, Segment::ContextMetadata, sp_field);
@ -1011,8 +1037,8 @@ impl<'a> Interpreter<'a> {
if new_sp > 0 {
let new_stack_top = self.generation_state.memory.contexts[new_ctx].segments
[Segment::Stack as usize]
.content[new_sp - 1];
[Segment::Stack.unscale()]
.content[new_sp - 1];
self.generation_state.registers.stack_top = new_stack_top;
}
self.set_context(new_ctx);
@ -1021,9 +1047,8 @@ impl<'a> Interpreter<'a> {
}
fn run_mload_general(&mut self) -> anyhow::Result<(), ProgramError> {
let context = self.pop()?.as_usize();
let segment = Segment::all()[self.pop()?.as_usize()];
let offset = self.pop()?.as_usize();
let addr = self.pop()?;
let (context, segment, offset) = unpack_address!(addr);
let value = self
.generation_state
.memory
@ -1033,9 +1058,8 @@ impl<'a> Interpreter<'a> {
}
fn run_mload_32bytes(&mut self) -> anyhow::Result<(), ProgramError> {
let context = self.pop()?.as_usize();
let segment = Segment::all()[self.pop()?.as_usize()];
let offset = self.pop()?.as_usize();
let addr = self.pop()?;
let (context, segment, offset) = unpack_address!(addr);
let len = self.pop()?.as_usize();
if len > 32 {
return Err(ProgramError::IntegerTooLarge);
@ -1054,9 +1078,8 @@ impl<'a> Interpreter<'a> {
fn run_mstore_general(&mut self) -> anyhow::Result<(), ProgramError> {
let value = self.pop()?;
let context = self.pop()?.as_usize();
let segment = Segment::all()[self.pop()?.as_usize()];
let offset = self.pop()?.as_usize();
let addr = self.pop()?;
let (context, segment, offset) = unpack_address!(addr);
let memop = self
.generation_state
.memory
@ -1066,9 +1089,8 @@ impl<'a> Interpreter<'a> {
}
fn run_mstore_32bytes(&mut self, n: u8) -> anyhow::Result<(), ProgramError> {
let context = self.pop()?.as_usize();
let segment = Segment::all()[self.pop()?.as_usize()];
let offset = self.pop()?.as_usize();
let addr = self.pop()?;
let (context, segment, offset) = unpack_address!(addr);
let value = self.pop()?;
let mut bytes = vec![0; 32];
@ -1086,7 +1108,7 @@ impl<'a> Interpreter<'a> {
self.memops.push(memop);
}
self.push(U256::from(offset + n as usize))
self.push(addr + U256::from(n))
}
fn run_exit_kernel(&mut self) -> anyhow::Result<(), ProgramError> {
@ -1447,14 +1469,28 @@ fn get_mnemonic(opcode: u8) -> &'static str {
}
}
#[macro_use]
macro_rules! unpack_address {
($addr:ident) => {{
let offset = $addr.low_u32() as usize;
let segment = Segment::all()[($addr >> SEGMENT_SCALING_FACTOR).low_u32() as usize];
let context = ($addr >> CONTEXT_SCALING_FACTOR).low_u32() as usize;
(context, segment, offset)
}};
}
pub(crate) use unpack_address;
#[cfg(test)]
mod tests {
use std::collections::HashMap;
use ethereum_types::U256;
use crate::cpu::kernel::constants::context_metadata::ContextMetadata;
use crate::cpu::kernel::interpreter::{run, Interpreter};
use crate::memory::segments::Segment;
use crate::witness::memory::MemoryAddress;
use crate::witness::operation::CONTEXT_SCALING_FACTOR;
#[test]
fn test_run() -> anyhow::Result<()> {
@ -1491,8 +1527,9 @@ mod tests {
interpreter.set_code(1, code.to_vec());
interpreter.generation_state.memory.contexts[1].segments[Segment::ContextMetadata as usize]
.set(ContextMetadata::GasLimit as usize, 100_000.into());
interpreter.generation_state.memory.contexts[1].segments
[Segment::ContextMetadata.unscale()]
.set(ContextMetadata::GasLimit.unscale(), 100_000.into());
// Set context and kernel mode.
interpreter.set_context(1);
interpreter.set_is_kernel(false);
@ -1501,7 +1538,7 @@ mod tests {
MemoryAddress::new(
1,
Segment::ContextMetadata,
ContextMetadata::ParentProgramCounter as usize,
ContextMetadata::ParentProgramCounter.unscale(),
),
0xdeadbeefu32.into(),
);
@ -1509,9 +1546,9 @@ mod tests {
MemoryAddress::new(
1,
Segment::ContextMetadata,
ContextMetadata::ParentContext as usize,
ContextMetadata::ParentContext.unscale(),
),
1.into(),
U256::one() << CONTEXT_SCALING_FACTOR,
);
interpreter.run()?;
@ -1522,12 +1559,12 @@ mod tests {
assert_eq!(interpreter.stack(), &[0xff.into(), 0xff00.into()]);
assert_eq!(
interpreter.generation_state.memory.contexts[1].segments[Segment::MainMemory as usize]
interpreter.generation_state.memory.contexts[1].segments[Segment::MainMemory.unscale()]
.get(0x27),
0x42.into()
);
assert_eq!(
interpreter.generation_state.memory.contexts[1].segments[Segment::MainMemory as usize]
interpreter.generation_state.memory.contexts[1].segments[Segment::MainMemory.unscale()]
.get(0x1f),
0xff.into()
);
@ -17,6 +17,7 @@ use crate::generation::mpt::{load_all_mpts, AccountRlp};
use crate::generation::TrieInputs;
use crate::memory::segments::Segment;
use crate::witness::memory::MemoryAddress;
use crate::witness::operation::CONTEXT_SCALING_FACTOR;
use crate::Node;
pub(crate) fn initialize_mpts(interpreter: &mut Interpreter, trie_inputs: &TrieInputs) {
@ -24,27 +25,14 @@ pub(crate) fn initialize_mpts(interpreter: &mut Interpreter, trie_inputs: &TrieI
let (trie_root_ptrs, trie_data) =
load_all_mpts(trie_inputs).expect("Invalid MPT data for preinitialization");
let state_addr = MemoryAddress::new(
0,
Segment::GlobalMetadata,
GlobalMetadata::StateTrieRoot as usize,
);
let txn_addr = MemoryAddress::new(
0,
Segment::GlobalMetadata,
GlobalMetadata::TransactionTrieRoot as usize,
);
let receipts_addr = MemoryAddress::new(
0,
Segment::GlobalMetadata,
GlobalMetadata::ReceiptTrieRoot as usize,
);
let len_addr = MemoryAddress::new(
0,
Segment::GlobalMetadata,
GlobalMetadata::TrieDataSize as usize,
);
let state_addr =
MemoryAddress::new_bundle((GlobalMetadata::StateTrieRoot as usize).into()).unwrap();
let txn_addr =
MemoryAddress::new_bundle((GlobalMetadata::TransactionTrieRoot as usize).into()).unwrap();
let receipts_addr =
MemoryAddress::new_bundle((GlobalMetadata::ReceiptTrieRoot as usize).into()).unwrap();
let len_addr =
MemoryAddress::new_bundle((GlobalMetadata::TrieDataSize as usize).into()).unwrap();
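// `MemoryAddress::new_bundle` above splits a bundled address word back into its
// (context, segment, virtual offset) components; since these GlobalMetadata values
// are already scaled by Segment::GlobalMetadata, they are complete kernel addresses
// on their own and can be passed in directly.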
let to_set = [
(state_addr, trie_root_ptrs.state_root_ptr.into()),
@ -202,8 +190,8 @@ fn test_extcodecopy() -> Result<()> {
let context = interpreter.context();
interpreter.generation_state.memory.contexts[context].segments
[Segment::ContextMetadata as usize]
.set(GasLimit as usize, U256::from(1000000000000u64));
[Segment::ContextMetadata.unscale()]
.set(GasLimit.unscale(), U256::from(1000000000000u64));
let extcodecopy = KERNEL.global_labels["sys_extcodecopy"];
@ -211,11 +199,11 @@ fn test_extcodecopy() -> Result<()> {
let mut rng = thread_rng();
for i in 0..2000 {
interpreter.generation_state.memory.contexts[context].segments
[Segment::MainMemory as usize]
.set(i, U256::from(rng.gen::<u8>()));
[Segment::MainMemory.unscale()]
.set(i, U256::from(rng.gen::<u8>()));
interpreter.generation_state.memory.contexts[context].segments
[Segment::KernelAccountCode as usize]
.set(i, U256::from(rng.gen::<u8>()));
[Segment::KernelAccountCode.unscale()]
.set(i, U256::from(rng.gen::<u8>()));
}
// Random inputs
@ -251,8 +239,8 @@ fn test_extcodecopy() -> Result<()> {
// Check that the code was correctly copied to memory.
for i in 0..size {
let memory = interpreter.generation_state.memory.contexts[context].segments
[Segment::MainMemory as usize]
.get(dest_offset + i);
[Segment::MainMemory.unscale()]
.get(dest_offset + i);
assert_eq!(
memory,
code.get(offset + i).copied().unwrap_or_default().into()
@ -277,30 +265,23 @@ fn prepare_interpreter_all_accounts(
// Switch context and initialize memory with the data we need for the tests.
interpreter.generation_state.registers.program_counter = 0;
interpreter.set_code(1, code.to_vec());
interpreter.generation_state.memory.contexts[1].segments[Segment::ContextMetadata as usize]
.set(
ContextMetadata::Address as usize,
U256::from_big_endian(&addr),
);
interpreter.generation_state.memory.contexts[1].segments[Segment::ContextMetadata as usize]
.set(ContextMetadata::GasLimit as usize, 100_000.into());
interpreter.set_context_metadata_field(
1,
ContextMetadata::Address,
U256::from_big_endian(&addr),
);
interpreter.set_context_metadata_field(1, ContextMetadata::GasLimit, 100_000.into());
interpreter.set_context(1);
interpreter.set_is_kernel(false);
interpreter.generation_state.memory.set(
MemoryAddress::new(
1,
Segment::ContextMetadata,
ContextMetadata::ParentProgramCounter as usize,
),
interpreter.set_context_metadata_field(
1,
ContextMetadata::ParentProgramCounter,
0xdeadbeefu32.into(),
);
interpreter.generation_state.memory.set(
MemoryAddress::new(
1,
Segment::ContextMetadata,
ContextMetadata::ParentContext as usize,
),
1.into(),
interpreter.set_context_metadata_field(
1,
ContextMetadata::ParentContext,
U256::one() << CONTEXT_SCALING_FACTOR, // ctx = 1
);
Ok(())
@ -16,7 +16,7 @@ use crate::cpu::kernel::tests::account_code::initialize_mpts;
use crate::generation::mpt::{AccountRlp, LegacyReceiptRlp};
use crate::generation::rlp::all_rlp_prover_inputs_reversed;
use crate::generation::TrieInputs;
use crate::memory::segments::Segment;
use crate::memory::segments::{Segment, SEGMENT_SCALING_FACTOR};
use crate::proof::TrieRoots;
use crate::util::h2u;
@ -199,8 +199,7 @@ fn test_add11_yml() {
let route_txn_label = KERNEL.global_labels["hash_initial_tries"];
// Switch context and initialize memory with the data we need for the tests.
interpreter.generation_state.registers.program_counter = route_txn_label;
interpreter.generation_state.memory.contexts[0].segments[Segment::ContextMetadata as usize]
.set(ContextMetadata::GasLimit as usize, 1_000_000.into());
interpreter.set_context_metadata_field(0, ContextMetadata::GasLimit, 1_000_000.into());
interpreter.set_is_kernel(true);
interpreter.run().expect("Proving add11 failed.");
}
@ -331,8 +330,7 @@ fn test_add11_yml_with_exception() {
let route_txn_label = KERNEL.global_labels["hash_initial_tries"];
// Switch context and initialize memory with the data we need for the tests.
interpreter.generation_state.registers.program_counter = route_txn_label;
interpreter.generation_state.memory.contexts[0].segments[Segment::ContextMetadata as usize]
.set(ContextMetadata::GasLimit as usize, 1_000_000.into());
interpreter.set_context_metadata_field(0, ContextMetadata::GasLimit, 1_000_000.into());
interpreter.set_is_kernel(true);
interpreter
.run()
@ -9,7 +9,7 @@ use crate::cpu::kernel::constants::global_metadata::GlobalMetadata::{
AccessedAddressesLen, AccessedStorageKeysLen,
};
use crate::cpu::kernel::interpreter::Interpreter;
use crate::memory::segments::Segment::{AccessedAddresses, AccessedStorageKeys, GlobalMetadata};
use crate::memory::segments::Segment::{AccessedAddresses, AccessedStorageKeys};
use crate::witness::memory::MemoryAddress;
#[test]
@ -42,17 +42,16 @@ fn test_insert_accessed_addresses() -> Result<()> {
.set(MemoryAddress::new(0, AccessedAddresses, i), addr);
}
interpreter.generation_state.memory.set(
MemoryAddress::new(0, GlobalMetadata, AccessedAddressesLen as usize),
MemoryAddress::new_bundle(U256::from(AccessedAddressesLen as usize)).unwrap(),
U256::from(n),
);
interpreter.run()?;
assert_eq!(interpreter.stack(), &[U256::zero()]);
assert_eq!(
interpreter.generation_state.memory.get(MemoryAddress::new(
0,
GlobalMetadata,
AccessedAddressesLen as usize
)),
interpreter
.generation_state
.memory
.get(MemoryAddress::new_bundle(U256::from(AccessedAddressesLen as usize)).unwrap()),
U256::from(n)
);
@ -67,17 +66,16 @@ fn test_insert_accessed_addresses() -> Result<()> {
.set(MemoryAddress::new(0, AccessedAddresses, i), addr);
}
interpreter.generation_state.memory.set(
MemoryAddress::new(0, GlobalMetadata, AccessedAddressesLen as usize),
MemoryAddress::new_bundle(U256::from(AccessedAddressesLen as usize)).unwrap(),
U256::from(n),
);
interpreter.run()?;
assert_eq!(interpreter.stack(), &[U256::one()]);
assert_eq!(
interpreter.generation_state.memory.get(MemoryAddress::new(
0,
GlobalMetadata,
AccessedAddressesLen as usize
)),
interpreter
.generation_state
.memory
.get(MemoryAddress::new_bundle(U256::from(AccessedAddressesLen as usize)).unwrap()),
U256::from(n + 1)
);
assert_eq!(
@ -134,17 +132,16 @@ fn test_insert_accessed_storage_keys() -> Result<()> {
);
}
interpreter.generation_state.memory.set(
MemoryAddress::new(0, GlobalMetadata, AccessedStorageKeysLen as usize),
MemoryAddress::new_bundle(U256::from(AccessedStorageKeysLen as usize)).unwrap(),
U256::from(3 * n),
);
interpreter.run()?;
assert_eq!(interpreter.stack(), &[storage_key_in_list.2, U256::zero()]);
assert_eq!(
interpreter.generation_state.memory.get(MemoryAddress::new(
0,
GlobalMetadata,
AccessedStorageKeysLen as usize
)),
interpreter
.generation_state
.memory
.get(MemoryAddress::new_bundle(U256::from(AccessedStorageKeysLen as usize)).unwrap()),
U256::from(3 * n)
);
@ -172,7 +169,7 @@ fn test_insert_accessed_storage_keys() -> Result<()> {
);
}
interpreter.generation_state.memory.set(
MemoryAddress::new(0, GlobalMetadata, AccessedStorageKeysLen as usize),
MemoryAddress::new_bundle(U256::from(AccessedStorageKeysLen as usize)).unwrap(),
U256::from(3 * n),
);
interpreter.run()?;
@ -181,11 +178,10 @@ fn test_insert_accessed_storage_keys() -> Result<()> {
&[storage_key_not_in_list.2, U256::one()]
);
assert_eq!(
interpreter.generation_state.memory.get(MemoryAddress::new(
0,
GlobalMetadata,
AccessedStorageKeysLen as usize
)),
interpreter
.generation_state
.memory
.get(MemoryAddress::new_bundle(U256::from(AccessedStorageKeysLen as usize)).unwrap()),
U256::from(3 * (n + 1))
);
assert_eq!(
@ -1,8 +1,10 @@
use anyhow::Result;
use ethereum_types::U256;
use crate::cpu::kernel::aggregator::KERNEL;
use crate::cpu::kernel::interpreter::Interpreter;
use crate::cpu::kernel::opcodes::{get_opcode, get_push_opcode};
use crate::witness::operation::CONTEXT_SCALING_FACTOR;
#[test]
fn test_jumpdest_analysis() -> Result<()> {
@ -28,7 +30,11 @@ fn test_jumpdest_analysis() -> Result<()> {
let expected_jumpdest_bits = vec![false, true, false, false, false, true, false, true];
// Contract creation transaction.
let initial_stack = vec![0xDEADBEEFu32.into(), code.len().into(), CONTEXT.into()];
let initial_stack = vec![
0xDEADBEEFu32.into(),
code.len().into(),
U256::from(CONTEXT) << CONTEXT_SCALING_FACTOR,
];
let mut interpreter = Interpreter::new_with_kernel(jumpdest_analysis, initial_stack);
interpreter.set_code(CONTEXT, code);
interpreter.run()?;
@ -1,7 +1,9 @@
use anyhow::Result;
use ethereum_types::U256;
use crate::cpu::kernel::aggregator::KERNEL;
use crate::cpu::kernel::interpreter::Interpreter;
use crate::memory::segments::Segment;
#[test]
fn hex_prefix_even_nonterminated() -> Result<()> {
@ -11,11 +13,11 @@ fn hex_prefix_even_nonterminated() -> Result<()> {
let terminated = 0.into();
let packed_nibbles = 0xABCDEF.into();
let num_nibbles = 6.into();
let rlp_pos = 0.into();
let rlp_pos = U256::from(Segment::RlpRaw as usize);
let initial_stack = vec![retdest, terminated, packed_nibbles, num_nibbles, rlp_pos];
let mut interpreter = Interpreter::new_with_kernel(hex_prefix, initial_stack);
interpreter.run()?;
assert_eq!(interpreter.stack(), vec![5.into()]);
assert_eq!(interpreter.stack(), vec![rlp_pos + U256::from(5)]);
assert_eq!(
interpreter.get_rlp_memory(),
@ -39,11 +41,11 @@ fn hex_prefix_odd_terminated() -> Result<()> {
let terminated = 1.into();
let packed_nibbles = 0xABCDE.into();
let num_nibbles = 5.into();
let rlp_pos = 0.into();
let rlp_pos = U256::from(Segment::RlpRaw as usize);
let initial_stack = vec![retdest, terminated, packed_nibbles, num_nibbles, rlp_pos];
let mut interpreter = Interpreter::new_with_kernel(hex_prefix, initial_stack);
interpreter.run()?;
assert_eq!(interpreter.stack(), vec![4.into()]);
assert_eq!(interpreter.stack(), vec![rlp_pos + U256::from(4)]);
assert_eq!(
interpreter.get_rlp_memory(),
@ -66,11 +68,14 @@ fn hex_prefix_odd_terminated_tiny() -> Result<()> {
let terminated = 1.into();
let packed_nibbles = 0xA.into();
let num_nibbles = 1.into();
let rlp_pos = 2.into();
let rlp_pos = U256::from(Segment::RlpRaw as usize + 2);
let initial_stack = vec![retdest, terminated, packed_nibbles, num_nibbles, rlp_pos];
let mut interpreter = Interpreter::new_with_kernel(hex_prefix, initial_stack);
interpreter.run()?;
assert_eq!(interpreter.stack(), vec![3.into()]);
assert_eq!(
interpreter.stack(),
vec![U256::from(Segment::RlpRaw as usize + 3)]
);
assert_eq!(
interpreter.get_rlp_memory(),
@ -11,10 +11,8 @@ fn test_mload_packing_1_byte() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let len = 1.into();
let offset = 2.into();
let segment = (Segment::RlpRaw as u32).into();
let context = 0.into();
let initial_stack = vec![retdest, len, offset, segment, context];
let addr = (Segment::RlpRaw as u64 + 2).into();
let initial_stack = vec![retdest, len, addr];
let mut interpreter = Interpreter::new_with_kernel(mload_packing, initial_stack);
interpreter.set_rlp_memory(vec![0, 0, 0xAB]);
@ -31,10 +29,8 @@ fn test_mload_packing_3_bytes() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let len = 3.into();
let offset = 2.into();
let segment = (Segment::RlpRaw as u32).into();
let context = 0.into();
let initial_stack = vec![retdest, len, offset, segment, context];
let addr = (Segment::RlpRaw as u64 + 2).into();
let initial_stack = vec![retdest, len, addr];
let mut interpreter = Interpreter::new_with_kernel(mload_packing, initial_stack);
interpreter.set_rlp_memory(vec![0, 0, 0xAB, 0xCD, 0xEF]);
@ -51,10 +47,8 @@ fn test_mload_packing_32_bytes() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let len = 32.into();
let offset = 0.into();
let segment = (Segment::RlpRaw as u32).into();
let context = 0.into();
let initial_stack = vec![retdest, len, offset, segment, context];
let addr = (Segment::RlpRaw as u64).into();
let initial_stack = vec![retdest, len, addr];
let mut interpreter = Interpreter::new_with_kernel(mload_packing, initial_stack);
interpreter.set_rlp_memory(vec![0xFF; 32]);
@ -72,15 +66,13 @@ fn test_mstore_unpacking() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let len = 4.into();
let value = 0xABCD1234u32.into();
let offset = 0.into();
let segment = (Segment::TxnData as u32).into();
let context = 0.into();
let initial_stack = vec![retdest, len, value, offset, segment, context];
let addr = (Segment::TxnData as u64).into();
let initial_stack = vec![retdest, len, value, addr];
let mut interpreter = Interpreter::new_with_kernel(mstore_unpacking, initial_stack);
interpreter.run()?;
assert_eq!(interpreter.stack(), vec![4.into()]);
assert_eq!(interpreter.stack(), vec![addr + U256::from(4)]);
assert_eq!(
&interpreter.get_txn_data(),
&[0xAB.into(), 0xCD.into(), 0x12.into(), 0x34.into()]
@ -1,20 +1,25 @@
use anyhow::Result;
use ethereum_types::U256;
use crate::cpu::kernel::aggregator::KERNEL;
use crate::cpu::kernel::interpreter::Interpreter;
use crate::memory::segments::Segment;
#[test]
fn test_decode_rlp_string_len_short() -> Result<()> {
let decode_rlp_string_len = KERNEL.global_labels["decode_rlp_string_len"];
let initial_stack = vec![0xDEADBEEFu32.into(), 2.into()];
let initial_stack = vec![
0xDEADBEEFu32.into(),
U256::from(Segment::RlpRaw as usize + 2),
];
let mut interpreter = Interpreter::new_with_kernel(decode_rlp_string_len, initial_stack);
// A couple dummy bytes, followed by "0x70" which is its own encoding.
interpreter.set_rlp_memory(vec![123, 234, 0x70]);
interpreter.run()?;
let expected_stack = vec![1.into(), 2.into()]; // len, pos
let expected_stack = vec![1.into(), U256::from(Segment::RlpRaw as usize + 2)]; // len, pos
assert_eq!(interpreter.stack(), expected_stack);
Ok(())
@ -24,14 +29,17 @@ fn test_decode_rlp_string_len_short() -> Result<()> {
fn test_decode_rlp_string_len_medium() -> Result<()> {
let decode_rlp_string_len = KERNEL.global_labels["decode_rlp_string_len"];
let initial_stack = vec![0xDEADBEEFu32.into(), 2.into()];
let initial_stack = vec![
0xDEADBEEFu32.into(),
U256::from(Segment::RlpRaw as usize + 2),
];
let mut interpreter = Interpreter::new_with_kernel(decode_rlp_string_len, initial_stack);
// A couple dummy bytes, followed by the RLP encoding of "1 2 3 4 5".
interpreter.set_rlp_memory(vec![123, 234, 0x85, 1, 2, 3, 4, 5]);
interpreter.run()?;
let expected_stack = vec![5.into(), 3.into()]; // len, pos
let expected_stack = vec![5.into(), U256::from(Segment::RlpRaw as usize + 3)]; // len, pos
assert_eq!(interpreter.stack(), expected_stack);
Ok(())
@ -41,7 +49,10 @@ fn test_decode_rlp_string_len_medium() -> Result<()> {
fn test_decode_rlp_string_len_long() -> Result<()> {
let decode_rlp_string_len = KERNEL.global_labels["decode_rlp_string_len"];
let initial_stack = vec![0xDEADBEEFu32.into(), 2.into()];
let initial_stack = vec![
0xDEADBEEFu32.into(),
U256::from(Segment::RlpRaw as usize + 2),
];
let mut interpreter = Interpreter::new_with_kernel(decode_rlp_string_len, initial_stack);
// The RLP encoding of the string "1 2 3 ... 56".
@ -52,7 +63,7 @@ fn test_decode_rlp_string_len_long() -> Result<()> {
]);
interpreter.run()?;
let expected_stack = vec![56.into(), 4.into()]; // len, pos
let expected_stack = vec![56.into(), U256::from(Segment::RlpRaw as usize + 4)]; // len, pos
assert_eq!(interpreter.stack(), expected_stack);
Ok(())
@ -62,14 +73,14 @@ fn test_decode_rlp_string_len_long() -> Result<()> {
fn test_decode_rlp_list_len_short() -> Result<()> {
let decode_rlp_list_len = KERNEL.global_labels["decode_rlp_list_len"];
let initial_stack = vec![0xDEADBEEFu32.into(), 0.into()];
let initial_stack = vec![0xDEADBEEFu32.into(), U256::from(Segment::RlpRaw as usize)];
let mut interpreter = Interpreter::new_with_kernel(decode_rlp_list_len, initial_stack);
// The RLP encoding of [1, 2, [3, 4]].
interpreter.set_rlp_memory(vec![0xc5, 1, 2, 0xc2, 3, 4]);
interpreter.run()?;
let expected_stack = vec![5.into(), 1.into()]; // len, pos
let expected_stack = vec![5.into(), U256::from(Segment::RlpRaw as usize + 1)]; // len, pos
assert_eq!(interpreter.stack(), expected_stack);
Ok(())
@ -79,7 +90,7 @@ fn test_decode_rlp_list_len_short() -> Result<()> {
fn test_decode_rlp_list_len_long() -> Result<()> {
let decode_rlp_list_len = KERNEL.global_labels["decode_rlp_list_len"];
let initial_stack = vec![0xDEADBEEFu32.into(), 0.into()];
let initial_stack = vec![0xDEADBEEFu32.into(), U256::from(Segment::RlpRaw as usize)];
let mut interpreter = Interpreter::new_with_kernel(decode_rlp_list_len, initial_stack);
// The RLP encoding of [1, ..., 56].
@ -90,7 +101,7 @@ fn test_decode_rlp_list_len_long() -> Result<()> {
]);
interpreter.run()?;
let expected_stack = vec![56.into(), 2.into()]; // len, pos
let expected_stack = vec![56.into(), U256::from(Segment::RlpRaw as usize + 2)]; // len, pos
assert_eq!(interpreter.stack(), expected_stack);
Ok(())
@ -100,14 +111,14 @@ fn test_decode_rlp_list_len_long() -> Result<()> {
fn test_decode_rlp_scalar() -> Result<()> {
let decode_rlp_scalar = KERNEL.global_labels["decode_rlp_scalar"];
let initial_stack = vec![0xDEADBEEFu32.into(), 0.into()];
let initial_stack = vec![0xDEADBEEFu32.into(), U256::from(Segment::RlpRaw as usize)];
let mut interpreter = Interpreter::new_with_kernel(decode_rlp_scalar, initial_stack);
// The RLP encoding of "12 34 56".
interpreter.set_rlp_memory(vec![0x83, 0x12, 0x34, 0x56]);
interpreter.run()?;
let expected_stack = vec![0x123456.into(), 4.into()]; // scalar, pos
let expected_stack = vec![0x123456.into(), U256::from(Segment::RlpRaw as usize + 4)]; // scalar, pos
assert_eq!(interpreter.stack(), expected_stack);
Ok(())
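
The expected (len, pos) pairs in these tests follow the standard RLP header rules: a byte below 0x80 is its own one-byte payload, a prefix in 0x80-0xb7 announces a short string of length prefix - 0x80, and a prefix in 0xb8-0xbf announces prefix - 0xb7 big-endian length bytes (list headers at 0xc0/0xf7 work analogously, which is why the list tests advance `pos` the same way). A small standalone sketch of those rules; `rlp_string_header` is an illustrative helper, not kernel code:

fn rlp_string_header(bytes: &[u8]) -> (usize, usize) {
    // Returns (payload_len, header_len); `pos` advances by `header_len` to reach the payload.
    match bytes[0] {
        0x00..=0x7f => (1, 0), // the byte is its own encoding, e.g. 0x70 above
        b @ 0x80..=0xb7 => ((b - 0x80) as usize, 1), // short string: 0x85 -> len 5, pos += 1
        b => {
            // long string: 0xb8, 56 -> len 56, pos += 2
            let len_of_len = (b - 0xb7) as usize;
            let len = bytes[1..1 + len_of_len]
                .iter()
                .fold(0usize, |acc, &x| (acc << 8) | x as usize);
            (len, 1 + len_of_len)
        }
    }
}

fn main() {
    assert_eq!(rlp_string_header(&[0x70]), (1, 0));
    assert_eq!(rlp_string_header(&[0x85, 1, 2, 3, 4, 5]), (5, 1));
    assert_eq!(rlp_string_header(&[0xb8, 56]), (56, 2));
}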

View File

@ -1,7 +1,9 @@
use anyhow::Result;
use ethereum_types::U256;
use crate::cpu::kernel::aggregator::KERNEL;
use crate::cpu::kernel::interpreter::Interpreter;
use crate::memory::segments::Segment;
#[test]
fn test_encode_rlp_scalar_small() -> Result<()> {
@ -9,12 +11,12 @@ fn test_encode_rlp_scalar_small() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let scalar = 42.into();
let pos = 2.into();
let pos = U256::from(Segment::RlpRaw as usize + 2);
let initial_stack = vec![retdest, scalar, pos];
let mut interpreter = Interpreter::new_with_kernel(encode_rlp_scalar, initial_stack);
interpreter.run()?;
let expected_stack = vec![3.into()]; // pos' = pos + rlp_len = 2 + 1
let expected_stack = vec![pos + U256::from(1)]; // pos' = pos + rlp_len = 2 + 1
let expected_rlp = vec![0, 0, 42];
assert_eq!(interpreter.stack(), expected_stack);
assert_eq!(interpreter.get_rlp_memory(), expected_rlp);
@ -28,12 +30,12 @@ fn test_encode_rlp_scalar_medium() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let scalar = 0x12345.into();
let pos = 2.into();
let pos = U256::from(Segment::RlpRaw as usize + 2);
let initial_stack = vec![retdest, scalar, pos];
let mut interpreter = Interpreter::new_with_kernel(encode_rlp_scalar, initial_stack);
interpreter.run()?;
let expected_stack = vec![6.into()]; // pos' = pos + rlp_len = 2 + 4
let expected_stack = vec![pos + U256::from(4)]; // pos' = pos + rlp_len = 2 + 4
let expected_rlp = vec![0, 0, 0x80 + 3, 0x01, 0x23, 0x45];
assert_eq!(interpreter.stack(), expected_stack);
assert_eq!(interpreter.get_rlp_memory(), expected_rlp);
@ -47,12 +49,12 @@ fn test_encode_rlp_160() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let string = 0x12345.into();
let pos = 0.into();
let pos = U256::from(Segment::RlpRaw as usize);
let initial_stack = vec![retdest, string, pos];
let mut interpreter = Interpreter::new_with_kernel(encode_rlp_160, initial_stack);
interpreter.run()?;
let expected_stack = vec![(1 + 20).into()]; // pos'
let expected_stack = vec![pos + U256::from(1 + 20)]; // pos'
#[rustfmt::skip]
let expected_rlp = vec![0x80 + 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x01, 0x23, 0x45];
assert_eq!(interpreter.stack(), expected_stack);
@ -67,12 +69,12 @@ fn test_encode_rlp_256() -> Result<()> {
let retdest = 0xDEADBEEFu32.into();
let string = 0x12345.into();
let pos = 0.into();
let pos = U256::from(Segment::RlpRaw as usize);
let initial_stack = vec![retdest, string, pos];
let mut interpreter = Interpreter::new_with_kernel(encode_rlp_256, initial_stack);
interpreter.run()?;
let expected_stack = vec![(1 + 32).into()]; // pos'
let expected_stack = vec![pos + U256::from(1 + 32)]; // pos'
#[rustfmt::skip]
let expected_rlp = vec![0x80 + 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x01, 0x23, 0x45];
assert_eq!(interpreter.stack(), expected_stack);
@ -86,8 +88,8 @@ fn test_prepend_rlp_list_prefix_small() -> Result<()> {
let prepend_rlp_list_prefix = KERNEL.global_labels["prepend_rlp_list_prefix"];
let retdest = 0xDEADBEEFu32.into();
let start_pos = 9.into();
let end_pos = (9 + 5).into();
let start_pos = U256::from(Segment::RlpRaw as usize + 9);
let end_pos = U256::from(Segment::RlpRaw as usize + 9 + 5);
let initial_stack = vec![retdest, start_pos, end_pos];
let mut interpreter = Interpreter::new_with_kernel(prepend_rlp_list_prefix, initial_stack);
interpreter.set_rlp_memory(vec![
@ -100,7 +102,7 @@ fn test_prepend_rlp_list_prefix_small() -> Result<()> {
interpreter.run()?;
let expected_rlp_len = 6.into();
let expected_start_pos = 8.into();
let expected_start_pos = U256::from(Segment::RlpRaw as usize + 8);
let expected_stack = vec![expected_rlp_len, expected_start_pos];
let expected_rlp = vec![0, 0, 0, 0, 0, 0, 0, 0, 0xc0 + 5, 1, 2, 3, 4, 5];
@ -115,8 +117,8 @@ fn test_prepend_rlp_list_prefix_large() -> Result<()> {
let prepend_rlp_list_prefix = KERNEL.global_labels["prepend_rlp_list_prefix"];
let retdest = 0xDEADBEEFu32.into();
let start_pos = 9.into();
let end_pos = (9 + 60).into();
let start_pos = U256::from(Segment::RlpRaw as usize + 9);
let end_pos = U256::from(Segment::RlpRaw as usize + 9 + 60);
let initial_stack = vec![retdest, start_pos, end_pos];
let mut interpreter = Interpreter::new_with_kernel(prepend_rlp_list_prefix, initial_stack);
@ -136,7 +138,7 @@ fn test_prepend_rlp_list_prefix_large() -> Result<()> {
interpreter.run()?;
let expected_rlp_len = 62.into();
let expected_start_pos = 7.into();
let expected_start_pos = U256::from(Segment::RlpRaw as usize + 7);
let expected_stack = vec![expected_rlp_len, expected_start_pos];
#[rustfmt::skip]
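
The `pos' = pos + rlp_len` arithmetic in the comments above reduces to the size of an RLP-encoded scalar: a value below 0x80 is one self-encoding byte, while larger values get a 0x80 + byte_len prefix followed by their minimal big-endian bytes. A rough standalone sketch under that assumption; `rlp_scalar_len` is illustrative only and ignores scalars long enough to need a long-form header:

fn rlp_scalar_len(scalar: u128) -> usize {
    if scalar < 0x80 {
        1 // its own encoding, e.g. 42 -> [42], so pos' = pos + 1
    } else {
        // minimal big-endian bytes, preceded by a 0x80 + byte_len prefix
        let byte_len = (128 - scalar.leading_zeros() as usize + 7) / 8;
        1 + byte_len
    }
}

fn main() {
    assert_eq!(rlp_scalar_len(42), 1);
    assert_eq!(rlp_scalar_len(0x12345), 4); // [0x80 + 3, 0x01, 0x23, 0x45], pos' = pos + 4
}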

View File

@ -5,23 +5,17 @@ use plonky2::field::types::Field;
use plonky2::hash::hash_types::RichField;
use plonky2::iop::ext_target::ExtensionTarget;
use super::cpu_stark::get_addr;
use crate::constraint_consumer::{ConstraintConsumer, RecursiveConstraintConsumer};
use crate::cpu::columns::CpuColumnsView;
use crate::cpu::membus::NUM_GP_CHANNELS;
use crate::cpu::stack;
use crate::memory::segments::Segment;
use crate::memory::segments::{Segment, SEGMENT_SCALING_FACTOR};
const fn get_addr_load<T: Copy>(lv: &CpuColumnsView<T>) -> (T, T, T) {
let addr_context = lv.mem_channels[0].value[0];
let addr_segment = lv.mem_channels[1].value[0];
let addr_virtual = lv.mem_channels[2].value[0];
(addr_context, addr_segment, addr_virtual)
get_addr(lv, 0)
}
const fn get_addr_store<T: Copy>(lv: &CpuColumnsView<T>) -> (T, T, T) {
let addr_context = lv.mem_channels[1].value[0];
let addr_segment = lv.mem_channels[2].value[0];
let addr_virtual = lv.mem_channels[3].value[0];
(addr_context, addr_segment, addr_virtual)
get_addr(lv, 1)
}
/// Evaluates constraints for MLOAD_GENERAL.
@ -36,7 +30,7 @@ fn eval_packed_load<P: PackedField>(
let (addr_context, addr_segment, addr_virtual) = get_addr_load(lv);
// Check that we are loading the correct value from the correct address.
let load_channel = lv.mem_channels[3];
let load_channel = lv.mem_channels[1];
yield_constr.constraint(filter * (load_channel.used - P::ONES));
yield_constr.constraint(filter * (load_channel.is_read - P::ONES));
yield_constr.constraint(filter * (load_channel.addr_context - addr_context));
@ -53,7 +47,7 @@ fn eval_packed_load<P: PackedField>(
}
// Disable remaining memory channels, if any.
for &channel in &lv.mem_channels[4..NUM_GP_CHANNELS] {
for &channel in &lv.mem_channels[2..] {
yield_constr.constraint(filter * channel.used);
}
yield_constr.constraint(filter * lv.partial_channel.used);
@ -83,7 +77,7 @@ fn eval_ext_circuit_load<F: RichField + Extendable<D>, const D: usize>(
let (addr_context, addr_segment, addr_virtual) = get_addr_load(lv);
// Check that we are loading the correct value from the correct channel.
let load_channel = lv.mem_channels[3];
let load_channel = lv.mem_channels[1];
{
let constr = builder.mul_sub_extension(filter, load_channel.used, filter);
yield_constr.constraint(builder, constr);
@ -117,7 +111,7 @@ fn eval_ext_circuit_load<F: RichField + Extendable<D>, const D: usize>(
}
// Disable remaining memory channels, if any.
for &channel in &lv.mem_channels[4..] {
for &channel in &lv.mem_channels[2..] {
let constr = builder.mul_extension(filter, channel.used);
yield_constr.constraint(builder, constr);
}
@ -157,13 +151,13 @@ fn eval_packed_store<P: PackedField>(
yield_constr.constraint(filter * (store_channel.addr_virtual - addr_virtual));
// Disable remaining memory channels, if any.
for &channel in &lv.mem_channels[4..] {
for &channel in &lv.mem_channels[2..] {
yield_constr.constraint(filter * channel.used);
}
// Stack constraints.
// Pops.
for i in 1..4 {
for i in 1..2 {
let channel = lv.mem_channels[i];
yield_constr.constraint(filter * (channel.used - P::ONES));
@ -171,19 +165,21 @@ fn eval_packed_store<P: PackedField>(
yield_constr.constraint(filter * (channel.addr_context - lv.context));
yield_constr.constraint(
filter * (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
filter
* (channel.addr_segment
- P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
// Remember that the first read (`i == 1`) is for the second stack element at `stack[stack_len - 1]`.
let addr_virtual = lv.stack_len - P::Scalar::from_canonical_usize(i + 1);
yield_constr.constraint(filter * (channel.addr_virtual - addr_virtual));
}
// Constrain `stack_inv_aux`.
let len_diff = lv.stack_len - P::Scalar::from_canonical_usize(4);
let len_diff = lv.stack_len - P::Scalar::from_canonical_usize(2);
yield_constr.constraint(
lv.op.m_op_general
* (len_diff * lv.general.stack().stack_inv - lv.general.stack().stack_inv_aux),
);
// If stack_len != 4 and MSTORE, read new top of the stack in nv.mem_channels[0].
// If stack_len != 2 and MSTORE, read new top of the stack in nv.mem_channels[0].
let top_read_channel = nv.mem_channels[0];
let is_top_read = lv.general.stack().stack_inv_aux * (P::ONES - lv.opcode_bits[0]);
// Constrain `stack_inv_aux_2`. It contains `stack_inv_aux * opcode_bits[0]`.
@ -196,12 +192,11 @@ fn eval_packed_store<P: PackedField>(
yield_constr.constraint_transition(
new_filter
* (top_read_channel.addr_segment
- P::Scalar::from_canonical_u64(Segment::Stack as u64)),
- P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
let addr_virtual = nv.stack_len - P::ONES;
yield_constr.constraint_transition(new_filter * (top_read_channel.addr_virtual - addr_virtual));
// If stack_len == 4 or MLOAD, disable the channel.
// If stack_len == 2 or MLOAD, disable the channel.
yield_constr.constraint(
lv.op.m_op_general * (lv.general.stack().stack_inv_aux - P::ONES) * top_read_channel.used,
);
@ -245,14 +240,14 @@ fn eval_ext_circuit_store<F: RichField + Extendable<D>, const D: usize>(
}
// Disable remaining memory channels, if any.
for &channel in &lv.mem_channels[4..] {
for &channel in &lv.mem_channels[2..] {
let constr = builder.mul_extension(filter, channel.used);
yield_constr.constraint(builder, constr);
}
// Stack constraints
// Pops.
for i in 1..4 {
for i in 1..2 {
let channel = lv.mem_channels[i];
{
@ -271,7 +266,7 @@ fn eval_ext_circuit_store<F: RichField + Extendable<D>, const D: usize>(
{
let diff = builder.add_const_extension(
channel.addr_segment,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
);
let constr = builder.mul_extension(filter, diff);
yield_constr.constraint(builder, constr);
@ -285,7 +280,7 @@ fn eval_ext_circuit_store<F: RichField + Extendable<D>, const D: usize>(
}
// Constrain `stack_inv_aux`.
{
let len_diff = builder.add_const_extension(lv.stack_len, -F::from_canonical_usize(4));
let len_diff = builder.add_const_extension(lv.stack_len, -F::from_canonical_usize(2));
let diff = builder.mul_sub_extension(
len_diff,
lv.general.stack().stack_inv,
@ -294,7 +289,7 @@ fn eval_ext_circuit_store<F: RichField + Extendable<D>, const D: usize>(
let constr = builder.mul_extension(lv.op.m_op_general, diff);
yield_constr.constraint(builder, constr);
}
// If stack_len != 4 and MSTORE, read new top of the stack in nv.mem_channels[0].
// If stack_len != 2 and MSTORE, read new top of the stack in nv.mem_channels[0].
let top_read_channel = nv.mem_channels[0];
let is_top_read = builder.mul_extension(lv.general.stack().stack_inv_aux, lv.opcode_bits[0]);
let is_top_read = builder.sub_extension(lv.general.stack().stack_inv_aux, is_top_read);
@ -321,7 +316,7 @@ fn eval_ext_circuit_store<F: RichField + Extendable<D>, const D: usize>(
{
let diff = builder.add_const_extension(
top_read_channel.addr_segment,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
);
let constr = builder.mul_extension(new_filter, diff);
yield_constr.constraint_transition(builder, constr);
@ -332,7 +327,7 @@ fn eval_ext_circuit_store<F: RichField + Extendable<D>, const D: usize>(
let constr = builder.mul_extension(new_filter, diff);
yield_constr.constraint_transition(builder, constr);
}
// If stack_len == 4 or MLOAD, disable the channel.
// If stack_len == 2 or MLOAD, disable the channel.
{
let diff = builder.mul_sub_extension(
lv.op.m_op_general,
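
The `stack_inv` / `stack_inv_aux` pair constrained above is a standard non-zero flag: the generation code (further down in this commit) fills `stack_inv` with the inverse of `stack_len - 2` via `try_inverse` whenever that inverse exists, so `stack_inv_aux` is forced to 1 exactly when `stack_len != 2`, and the `(stack_inv_aux - 1) * used` constraint can then only enable the extra top-of-stack read in that case. A minimal sketch of that argument, assuming plonky2's `Field` trait and using `GoldilocksField` purely for illustration:

use plonky2::field::goldilocks_field::GoldilocksField as F;
use plonky2::field::types::Field;

// Witness assignment mirroring `generate_mload_general` / `generate_mstore_general`.
fn witness(stack_len: usize) -> (F, F) {
    let len_diff = F::from_canonical_usize(stack_len) - F::TWO;
    match len_diff.try_inverse() {
        Some(inv) => (inv, F::ONE), // stack_len != 2: len_diff * inv = 1
        None => (F::ZERO, F::ZERO), // stack_len == 2: both columns stay zero
    }
}

fn main() {
    for stack_len in [2usize, 3, 17] {
        let len_diff = F::from_canonical_usize(stack_len) - F::TWO;
        let (stack_inv, stack_inv_aux) = witness(stack_len);
        // First constraint above: len_diff * stack_inv - stack_inv_aux == 0.
        assert_eq!(len_diff * stack_inv, stack_inv_aux);
        // Second constraint: (stack_inv_aux - 1) * used == 0, so the extra read channel
        // can only be marked `used` when stack_len != 2.
        assert_eq!(stack_inv_aux == F::ONE, stack_len != 2);
    }
}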

View File

@ -24,7 +24,7 @@ pub(crate) fn eval_packed<P: PackedField>(
// let val = lv.mem_channels[0];
// let output = lv.mem_channels[NUM_GP_CHANNELS - 1];
let shift_table_segment = P::Scalar::from_canonical_u64(Segment::ShiftTable as u64);
let shift_table_segment = P::Scalar::from_canonical_usize(Segment::ShiftTable.unscale());
// Only lookup the shifting factor when displacement is < 2^32.
// two_exp.used is true (1) if the high limbs of the displacement are
@ -73,7 +73,7 @@ pub(crate) fn eval_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
let displacement = lv.mem_channels[0];
let two_exp = lv.mem_channels[2];
let shift_table_segment = F::from_canonical_u64(Segment::ShiftTable as u64);
let shift_table_segment = F::from_canonical_usize(Segment::ShiftTable.unscale());
// Only lookup the shifting factor when displacement is < 2^32.
// two_exp.used is true (1) if the high limbs of the displacement are

View File

@ -83,13 +83,13 @@ pub(crate) const JUMPI_OP: Option<StackBehavior> = Some(StackBehavior {
});
/// `StackBehavior` for MLOAD_GENERAL.
pub(crate) const MLOAD_GENERAL_OP: Option<StackBehavior> = Some(StackBehavior {
num_pops: 3,
num_pops: 1,
pushes: true,
disable_other_channels: false,
});
pub(crate) const KECCAK_GENERAL_OP: StackBehavior = StackBehavior {
num_pops: 4,
num_pops: 2,
pushes: true,
disable_other_channels: true,
};
@ -132,7 +132,7 @@ pub(crate) const STACK_BEHAVIORS: OpsColumnsView<Option<StackBehavior>> = OpsCol
dup_swap: None,
context_op: None,
m_op_32bytes: Some(StackBehavior {
num_pops: 4,
num_pops: 2,
pushes: true,
disable_other_channels: false,
}),
@ -186,7 +186,8 @@ pub(crate) fn eval_packed_one<P: PackedField>(
yield_constr.constraint(filter * (channel.addr_context - lv.context));
yield_constr.constraint(
filter
* (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
* (channel.addr_segment
- P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
// Remember that the first read (`i == 1`) is for the second stack element at `stack[stack_len - 1]`.
let addr_virtual = lv.stack_len - P::Scalar::from_canonical_usize(i + 1);
@ -212,7 +213,8 @@ pub(crate) fn eval_packed_one<P: PackedField>(
yield_constr.constraint_transition(new_filter * (channel.addr_context - nv.context));
yield_constr.constraint_transition(
new_filter
* (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
* (channel.addr_segment
- P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
let addr_virtual = nv.stack_len - P::ONES;
yield_constr.constraint_transition(new_filter * (channel.addr_virtual - addr_virtual));
@ -238,7 +240,8 @@ pub(crate) fn eval_packed_one<P: PackedField>(
yield_constr.constraint(new_filter * (channel.addr_context - lv.context));
yield_constr.constraint(
new_filter
* (channel.addr_segment - P::Scalar::from_canonical_u64(Segment::Stack as u64)),
* (channel.addr_segment
- P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
let addr_virtual = lv.stack_len - P::ONES;
yield_constr.constraint(new_filter * (channel.addr_virtual - addr_virtual));
@ -343,7 +346,7 @@ pub(crate) fn eval_packed<P: PackedField>(
yield_constr.constraint_transition(
new_filter
* (top_read_channel.addr_segment
- P::Scalar::from_canonical_u64(Segment::Stack as u64)),
- P::Scalar::from_canonical_usize(Segment::Stack.unscale())),
);
let addr_virtual = nv.stack_len - P::ONES;
yield_constr.constraint_transition(new_filter * (top_read_channel.addr_virtual - addr_virtual));
@ -397,7 +400,7 @@ pub(crate) fn eval_ext_circuit_one<F: RichField + Extendable<D>, const D: usize>
{
let constr = builder.arithmetic_extension(
F::ONE,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
filter,
channel.addr_segment,
filter,
@ -454,7 +457,7 @@ pub(crate) fn eval_ext_circuit_one<F: RichField + Extendable<D>, const D: usize>
{
let constr = builder.arithmetic_extension(
F::ONE,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
new_filter,
channel.addr_segment,
new_filter,
@ -507,7 +510,7 @@ pub(crate) fn eval_ext_circuit_one<F: RichField + Extendable<D>, const D: usize>
{
let constr = builder.arithmetic_extension(
F::ONE,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
new_filter,
channel.addr_segment,
new_filter,
@ -674,7 +677,7 @@ pub(crate) fn eval_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
{
let diff = builder.add_const_extension(
top_read_channel.addr_segment,
-F::from_canonical_u64(Segment::Stack as u64),
-F::from_canonical_usize(Segment::Stack.unscale()),
);
let constr = builder.mul_extension(new_filter, diff);
yield_constr.constraint_transition(builder, constr);

View File

@ -45,7 +45,7 @@ pub(crate) fn eval_packed<P: PackedField>(
}
// Look up the handler in memory
let code_segment = P::Scalar::from_canonical_usize(Segment::Code as usize);
let code_segment = P::Scalar::from_canonical_usize(Segment::Code.unscale());
let opcode: P = lv
.opcode_bits
@ -153,7 +153,7 @@ pub(crate) fn eval_ext_circuit<F: RichField + Extendable<D>, const D: usize>(
}
// Look up the handler in memory
let code_segment = F::from_canonical_usize(Segment::Code as usize);
let code_segment = F::from_canonical_usize(Segment::Code.unscale());
let opcode = lv
.opcode_bits

View File

@ -155,7 +155,8 @@ fn apply_metadata_and_tries_memops<F: RichField + Extendable<D>, const D: usize>
.map(|(field, val)| {
mem_write_log(
channel,
MemoryAddress::new(0, Segment::GlobalMetadata, field as usize),
// These fields are already scaled by their segment, and are in context 0 (kernel).
MemoryAddress::new_bundle(U256::from(field as usize)).unwrap(),
state,
val,
)

View File

@ -20,6 +20,7 @@ use crate::util::{biguint_to_mem_vec, mem_vec_to_biguint, u256_to_usize};
use crate::witness::errors::ProgramError;
use crate::witness::errors::ProverInputError::*;
use crate::witness::memory::MemoryAddress;
use crate::witness::operation::CONTEXT_SCALING_FACTOR;
use crate::witness::util::{current_context_peek, stack_peek};
/// Prover input function represented as a scoped function name.
@ -138,7 +139,7 @@ impl<F: Field> GenerationState<F> {
fn run_account_code(&mut self) -> Result<U256, ProgramError> {
// stack: codehash, ctx, ...
let codehash = stack_peek(self, 0)?;
let context = stack_peek(self, 1)?;
let context = stack_peek(self, 1)? >> CONTEXT_SCALING_FACTOR;
let context = u256_to_usize(context)?;
let mut address = MemoryAddress::new(context, Segment::Code, 0);
let code = self
@ -189,11 +190,11 @@ impl<F: Field> GenerationState<F> {
m_start_loc: usize,
) -> (Vec<U256>, Vec<U256>) {
let n = self.memory.contexts.len();
let a = &self.memory.contexts[n - 1].segments[Segment::KernelGeneral as usize].content
let a = &self.memory.contexts[n - 1].segments[Segment::KernelGeneral.unscale()].content
[a_start_loc..a_start_loc + len];
let b = &self.memory.contexts[n - 1].segments[Segment::KernelGeneral as usize].content
let b = &self.memory.contexts[n - 1].segments[Segment::KernelGeneral.unscale()].content
[b_start_loc..b_start_loc + len];
let m = &self.memory.contexts[n - 1].segments[Segment::KernelGeneral as usize].content
let m = &self.memory.contexts[n - 1].segments[Segment::KernelGeneral.unscale()].content
[m_start_loc..m_start_loc + len];
let a_biguint = mem_vec_to_biguint(a);

View File

@ -57,7 +57,7 @@ impl<F: Field> GenerationState<F> {
let (trie_roots_ptrs, trie_data) =
load_all_mpts(trie_inputs).expect("Invalid MPT data for preinitialization");
self.memory.contexts[0].segments[Segment::TrieData as usize].content = trie_data;
self.memory.contexts[0].segments[Segment::TrieData.unscale()].content = trie_data;
trie_roots_ptrs
}
@ -131,13 +131,11 @@ impl<F: Field> GenerationState<F> {
}
let ctx = self.registers.context;
let returndata_size_addr = MemoryAddress::new(
ctx,
Segment::ContextMetadata,
ContextMetadata::ReturndataSize as usize,
);
let returndata_offset = ContextMetadata::ReturndataSize.unscale();
let returndata_size_addr =
MemoryAddress::new(ctx, Segment::ContextMetadata, returndata_offset);
let returndata_size = u256_to_usize(self.memory.get(returndata_size_addr))?;
let code = self.memory.contexts[ctx].segments[Segment::Returndata as usize].content
let code = self.memory.contexts[ctx].segments[Segment::Returndata.unscale()].content
[..returndata_size]
.iter()
.map(|x| x.low_u32() as u8)

View File

@ -58,7 +58,7 @@ pub(crate) fn read_trie_helper<V>(
) -> Result<(), ProgramError> {
let load = |offset| memory.get(MemoryAddress::new(0, Segment::TrieData, offset));
let load_slice_from = |init_offset| {
&memory.contexts[0].segments[Segment::TrieData as usize].content[init_offset..]
&memory.contexts[0].segments[Segment::TrieData.unscale()].content[init_offset..]
};
let trie_type = PartialTrieType::all()[u256_to_usize(load(ptr))?];

View File

@ -859,11 +859,7 @@ mod tests {
let expected_output = keccak(&input);
let op = KeccakSpongeOp {
base_address: MemoryAddress {
context: 0,
segment: Segment::Code as usize,
virt: 0,
},
base_address: MemoryAddress::new(0, Segment::Code, 0),
timestamp: 0,
input,
};

View File

@ -368,7 +368,7 @@ impl<F: RichField + Extendable<D>, const D: usize> Stark<F, D> for MemoryStark<F
// specified ones (segment 0 is already included in initialize_aux).
// There is overlap with the previous constraint, but this is not a problem.
yield_constr.constraint_transition(
(next_addr_segment - P::Scalar::from_canonical_usize(Segment::TrieData as usize))
(next_addr_segment - P::Scalar::from_canonical_usize(Segment::TrieData.unscale()))
* initialize_aux
* next_values_limbs[i],
);
@ -524,7 +524,7 @@ impl<F: RichField + Extendable<D>, const D: usize> Stark<F, D> for MemoryStark<F
// There is overlap with the previous constraint, but this is not a problem.
let segment_trie_data = builder.add_const_extension(
next_addr_segment,
F::NEG_ONE * F::from_canonical_u32(Segment::TrieData as u32),
F::NEG_ONE * F::from_canonical_usize(Segment::TrieData.unscale()),
);
let zero_init_constraint =
builder.mul_extension(segment_trie_data, context_zero_initializing_constraint);

View File

@ -1,75 +1,90 @@
use num::traits::AsPrimitive;
pub(crate) const SEGMENT_SCALING_FACTOR: usize = 32;
/// This contains all the existing memory segments. The values in the enum are shifted by 32 bits
/// to allow for convenient bundling of the address components (context / segment / virtual) in the kernel.
#[allow(dead_code)]
#[allow(clippy::enum_clike_unportable_variant)]
#[derive(Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Debug)]
pub(crate) enum Segment {
/// Contains EVM bytecode.
// The Kernel has optimizations relying on the Code segment being 0.
// This shouldn't be changed!
Code = 0,
/// The program stack.
Stack = 1,
Stack = 1 << SEGMENT_SCALING_FACTOR,
/// Main memory, owned by the contract code.
MainMemory = 2,
MainMemory = 2 << SEGMENT_SCALING_FACTOR,
/// Data passed to the current context by its caller.
Calldata = 3,
Calldata = 3 << SEGMENT_SCALING_FACTOR,
/// Data returned to the current context by its latest callee.
Returndata = 4,
Returndata = 4 << SEGMENT_SCALING_FACTOR,
/// A segment which contains a few fixed-size metadata fields, such as the caller's context, or the
/// size of `CALLDATA` and `RETURNDATA`.
GlobalMetadata = 5,
ContextMetadata = 6,
GlobalMetadata = 5 << SEGMENT_SCALING_FACTOR,
ContextMetadata = 6 << SEGMENT_SCALING_FACTOR,
/// General purpose kernel memory, used by various kernel functions.
/// In general, calling a helper function can result in this memory being clobbered.
KernelGeneral = 7,
KernelGeneral = 7 << SEGMENT_SCALING_FACTOR,
/// Another segment for general purpose kernel use.
KernelGeneral2 = 8,
KernelGeneral2 = 8 << SEGMENT_SCALING_FACTOR,
/// Segment to hold account code for opcodes like `CODESIZE, CODECOPY,...`.
KernelAccountCode = 9,
KernelAccountCode = 9 << SEGMENT_SCALING_FACTOR,
/// Contains normalized transaction fields; see `NormalizedTxnField`.
TxnFields = 10,
TxnFields = 10 << SEGMENT_SCALING_FACTOR,
/// Contains the data field of a transaction.
TxnData = 11,
TxnData = 11 << SEGMENT_SCALING_FACTOR,
/// A buffer used to hold raw RLP data.
RlpRaw = 12,
RlpRaw = 12 << SEGMENT_SCALING_FACTOR,
/// Contains all trie data. It is owned by the kernel, so it only lives on context 0.
TrieData = 13,
TrieData = 13 << SEGMENT_SCALING_FACTOR,
/// A buffer used to store the encodings of a branch node's children.
TrieEncodedChild = 14,
TrieEncodedChild = 14 << SEGMENT_SCALING_FACTOR,
/// A buffer used to store the lengths of the encodings of a branch node's children.
TrieEncodedChildLen = 15,
TrieEncodedChildLen = 15 << SEGMENT_SCALING_FACTOR,
/// A table of values 2^i for i=0..255 for use with shift
/// instructions; initialised by `kernel/asm/shift.asm::init_shift_table()`.
ShiftTable = 16,
JumpdestBits = 17,
EcdsaTable = 18,
BnWnafA = 19,
BnWnafB = 20,
BnTableQ = 21,
BnPairing = 22,
ShiftTable = 16 << SEGMENT_SCALING_FACTOR,
JumpdestBits = 17 << SEGMENT_SCALING_FACTOR,
EcdsaTable = 18 << SEGMENT_SCALING_FACTOR,
BnWnafA = 19 << SEGMENT_SCALING_FACTOR,
BnWnafB = 20 << SEGMENT_SCALING_FACTOR,
BnTableQ = 21 << SEGMENT_SCALING_FACTOR,
BnPairing = 22 << SEGMENT_SCALING_FACTOR,
/// List of addresses that have been accessed in the current transaction.
AccessedAddresses = 23,
AccessedAddresses = 23 << SEGMENT_SCALING_FACTOR,
/// List of storage keys that have been accessed in the current transaction.
AccessedStorageKeys = 24,
AccessedStorageKeys = 24 << SEGMENT_SCALING_FACTOR,
/// List of addresses that have called SELFDESTRUCT in the current transaction.
SelfDestructList = 25,
SelfDestructList = 25 << SEGMENT_SCALING_FACTOR,
/// Contains the bloom filter of a transaction.
TxnBloom = 26,
TxnBloom = 26 << SEGMENT_SCALING_FACTOR,
/// Contains the bloom filter present in the block header.
GlobalBlockBloom = 27,
GlobalBlockBloom = 27 << SEGMENT_SCALING_FACTOR,
/// List of log pointers pointing to the LogsData segment.
Logs = 28,
LogsData = 29,
Logs = 28 << SEGMENT_SCALING_FACTOR,
LogsData = 29 << SEGMENT_SCALING_FACTOR,
/// Journal of state changes. List of pointers to `JournalData`. Length in `GlobalMetadata`.
Journal = 30,
JournalData = 31,
JournalCheckpoints = 32,
Journal = 30 << SEGMENT_SCALING_FACTOR,
JournalData = 31 << SEGMENT_SCALING_FACTOR,
JournalCheckpoints = 32 << SEGMENT_SCALING_FACTOR,
/// List of addresses that have been touched in the current transaction.
TouchedAddresses = 33,
TouchedAddresses = 33 << SEGMENT_SCALING_FACTOR,
/// List of checkpoints for the current context. Length in `ContextMetadata`.
ContextCheckpoints = 34,
ContextCheckpoints = 34 << SEGMENT_SCALING_FACTOR,
/// List of 256 previous block hashes.
BlockHashes = 35,
BlockHashes = 35 << SEGMENT_SCALING_FACTOR,
}
impl Segment {
pub(crate) const COUNT: usize = 36;
/// Unscales this segment by `SEGMENT_SCALING_FACTOR`.
pub(crate) const fn unscale(&self) -> usize {
*self as usize >> SEGMENT_SCALING_FACTOR
}
pub(crate) const fn all() -> [Self; Self::COUNT] {
[
Self::Code,

View File

@ -431,78 +431,94 @@ pub(crate) fn get_memory_extra_looking_sum_circuit<F: RichField + Extendable<D>,
// Add metadata writes.
let block_fields_scalars = [
(
GlobalMetadata::BlockTimestamp as usize,
GlobalMetadata::BlockTimestamp,
public_values.block_metadata.block_timestamp,
),
(
GlobalMetadata::BlockNumber as usize,
GlobalMetadata::BlockNumber,
public_values.block_metadata.block_number,
),
(
GlobalMetadata::BlockDifficulty as usize,
GlobalMetadata::BlockDifficulty,
public_values.block_metadata.block_difficulty,
),
(
GlobalMetadata::BlockGasLimit as usize,
GlobalMetadata::BlockGasLimit,
public_values.block_metadata.block_gaslimit,
),
(
GlobalMetadata::BlockChainId as usize,
GlobalMetadata::BlockChainId,
public_values.block_metadata.block_chain_id,
),
(
GlobalMetadata::BlockGasUsed as usize,
GlobalMetadata::BlockGasUsed,
public_values.block_metadata.block_gas_used,
),
(
GlobalMetadata::BlockGasUsedBefore as usize,
GlobalMetadata::BlockGasUsedBefore,
public_values.extra_block_data.gas_used_before,
),
(
GlobalMetadata::BlockGasUsedAfter as usize,
GlobalMetadata::BlockGasUsedAfter,
public_values.extra_block_data.gas_used_after,
),
(
GlobalMetadata::TxnNumberBefore as usize,
GlobalMetadata::TxnNumberBefore,
public_values.extra_block_data.txn_number_before,
),
(
GlobalMetadata::TxnNumberAfter as usize,
GlobalMetadata::TxnNumberAfter,
public_values.extra_block_data.txn_number_after,
),
];
let beneficiary_random_base_fee_cur_hash_fields: [(usize, &[Target]); 4] = [
let beneficiary_random_base_fee_cur_hash_fields: [(GlobalMetadata, &[Target]); 4] = [
(
GlobalMetadata::BlockBeneficiary as usize,
GlobalMetadata::BlockBeneficiary,
&public_values.block_metadata.block_beneficiary,
),
(
GlobalMetadata::BlockRandom as usize,
GlobalMetadata::BlockRandom,
&public_values.block_metadata.block_random,
),
(
GlobalMetadata::BlockBaseFee as usize,
GlobalMetadata::BlockBaseFee,
&public_values.block_metadata.block_base_fee,
),
(
GlobalMetadata::BlockCurrentHash as usize,
GlobalMetadata::BlockCurrentHash,
&public_values.block_hashes.cur_hash,
),
];
let metadata_segment = builder.constant(F::from_canonical_u32(Segment::GlobalMetadata as u32));
let metadata_segment =
builder.constant(F::from_canonical_usize(Segment::GlobalMetadata.unscale()));
block_fields_scalars.map(|(field, target)| {
// Each of those fields fit in 32 bits, hence in a single Target.
sum = add_data_write(builder, challenge, sum, metadata_segment, field, &[target]);
sum = add_data_write(
builder,
challenge,
sum,
metadata_segment,
field.unscale(),
&[target],
);
});
beneficiary_random_base_fee_cur_hash_fields.map(|(field, targets)| {
sum = add_data_write(builder, challenge, sum, metadata_segment, field, targets);
sum = add_data_write(
builder,
challenge,
sum,
metadata_segment,
field.unscale(),
targets,
);
});
// Add block hashes writes.
let block_hashes_segment = builder.constant(F::from_canonical_u32(Segment::BlockHashes as u32));
let block_hashes_segment =
builder.constant(F::from_canonical_usize(Segment::BlockHashes.unscale()));
for i in 0..256 {
sum = add_data_write(
builder,
@ -515,7 +531,8 @@ pub(crate) fn get_memory_extra_looking_sum_circuit<F: RichField + Extendable<D>,
}
// Add block bloom filters writes.
let bloom_segment = builder.constant(F::from_canonical_u32(Segment::GlobalBlockBloom as u32));
let bloom_segment =
builder.constant(F::from_canonical_usize(Segment::GlobalBlockBloom.unscale()));
for i in 0..8 {
sum = add_data_write(
builder,
@ -530,33 +547,40 @@ pub(crate) fn get_memory_extra_looking_sum_circuit<F: RichField + Extendable<D>,
// Add trie roots writes.
let trie_fields = [
(
GlobalMetadata::StateTrieRootDigestBefore as usize,
GlobalMetadata::StateTrieRootDigestBefore,
public_values.trie_roots_before.state_root,
),
(
GlobalMetadata::TransactionTrieRootDigestBefore as usize,
GlobalMetadata::TransactionTrieRootDigestBefore,
public_values.trie_roots_before.transactions_root,
),
(
GlobalMetadata::ReceiptTrieRootDigestBefore as usize,
GlobalMetadata::ReceiptTrieRootDigestBefore,
public_values.trie_roots_before.receipts_root,
),
(
GlobalMetadata::StateTrieRootDigestAfter as usize,
GlobalMetadata::StateTrieRootDigestAfter,
public_values.trie_roots_after.state_root,
),
(
GlobalMetadata::TransactionTrieRootDigestAfter as usize,
GlobalMetadata::TransactionTrieRootDigestAfter,
public_values.trie_roots_after.transactions_root,
),
(
GlobalMetadata::ReceiptTrieRootDigestAfter as usize,
GlobalMetadata::ReceiptTrieRootDigestAfter,
public_values.trie_roots_after.receipts_root,
),
];
trie_fields.map(|(field, targets)| {
sum = add_data_write(builder, challenge, sum, metadata_segment, field, &targets);
sum = add_data_write(
builder,
challenge,
sum,
metadata_segment,
field.unscale(),
&targets,
);
});
// Add kernel hash and kernel length.
@ -567,7 +591,7 @@ pub(crate) fn get_memory_extra_looking_sum_circuit<F: RichField + Extendable<D>,
challenge,
sum,
metadata_segment,
GlobalMetadata::KernelHash as usize,
GlobalMetadata::KernelHash.unscale(),
&kernel_hash_targets,
);
let kernel_len_target = builder.constant(F::from_canonical_usize(KERNEL.code.len()));
@ -576,7 +600,7 @@ pub(crate) fn get_memory_extra_looking_sum_circuit<F: RichField + Extendable<D>,
challenge,
sum,
metadata_segment,
GlobalMetadata::KernelLen as usize,
GlobalMetadata::KernelLen.unscale(),
&[kernel_len_target],
);

View File

@ -239,19 +239,22 @@ where
(GlobalMetadata::KernelLen, KERNEL.code.len().into()),
];
let segment = F::from_canonical_u32(Segment::GlobalMetadata as u32);
let segment = F::from_canonical_usize(Segment::GlobalMetadata.unscale());
fields.map(|(field, val)| sum = add_data_write(challenge, segment, sum, field as usize, val));
fields.map(|(field, val)| {
// These fields are already scaled by their segment, and are in context 0 (kernel).
sum = add_data_write(challenge, segment, sum, field.unscale(), val)
});
// Add block bloom writes.
let bloom_segment = F::from_canonical_u32(Segment::GlobalBlockBloom as u32);
let bloom_segment = F::from_canonical_usize(Segment::GlobalBlockBloom.unscale());
for index in 0..8 {
let val = public_values.block_metadata.block_bloom[index];
sum = add_data_write(challenge, bloom_segment, sum, index, val);
}
// Add Blockhashes writes.
let block_hashes_segment = F::from_canonical_u32(Segment::BlockHashes as u32);
let block_hashes_segment = F::from_canonical_usize(Segment::BlockHashes.unscale());
for index in 0..256 {
let val = h2u(public_values.block_hashes.prev_hashes[index]);
sum = add_data_write(challenge, block_hashes_segment, sum, index, val);
@ -547,22 +550,22 @@ pub(crate) mod testutils {
(GlobalMetadata::KernelLen, KERNEL.code.len().into()),
];
let segment = F::from_canonical_u32(Segment::GlobalMetadata as u32);
let segment = F::from_canonical_usize(Segment::GlobalMetadata.unscale());
let mut extra_looking_rows = Vec::new();
fields.map(|(field, val)| {
extra_looking_rows.push(add_extra_looking_row(segment, field as usize, val))
extra_looking_rows.push(add_extra_looking_row(segment, field.unscale(), val))
});
// Add block bloom writes.
let bloom_segment = F::from_canonical_u32(Segment::GlobalBlockBloom as u32);
let bloom_segment = F::from_canonical_usize(Segment::GlobalBlockBloom.unscale());
for index in 0..8 {
let val = public_values.block_metadata.block_bloom[index];
extra_looking_rows.push(add_extra_looking_row(bloom_segment, index, val));
}
// Add Blockhashes writes.
let block_hashes_segment = F::from_canonical_u32(Segment::BlockHashes as u32);
let block_hashes_segment = F::from_canonical_usize(Segment::BlockHashes.unscale());
for index in 0..256 {
let val = h2u(public_values.block_hashes.prev_hashes[index]);
extra_looking_rows.push(add_extra_looking_row(block_hashes_segment, index, val));

View File

@ -11,8 +11,9 @@ pub(crate) enum MemoryChannel {
use MemoryChannel::{Code, GeneralPurpose, PartialChannel};
use super::operation::CONTEXT_SCALING_FACTOR;
use crate::cpu::kernel::constants::global_metadata::GlobalMetadata;
use crate::memory::segments::Segment;
use crate::memory::segments::{Segment, SEGMENT_SCALING_FACTOR};
use crate::witness::errors::MemoryError::{ContextTooLarge, SegmentTooLarge, VirtTooLarge};
use crate::witness::errors::ProgramError;
use crate::witness::errors::ProgramError::MemoryError;
@ -41,7 +42,8 @@ impl MemoryAddress {
pub(crate) const fn new(context: usize, segment: Segment, virt: usize) -> Self {
Self {
context,
segment: segment as usize,
// segment is scaled
segment: segment.unscale(),
virt,
}
}
@ -69,6 +71,17 @@ impl MemoryAddress {
})
}
/// Creates a new `MemoryAddress` from a bundled address fitting a `U256`.
/// It will recover the virtual offset as the lowest 32-bit limb, the segment
/// as the next limb, and the context as the next one.
pub(crate) fn new_bundle(addr: U256) -> Result<Self, ProgramError> {
let virt = addr.low_u32().into();
let segment = (addr >> SEGMENT_SCALING_FACTOR).low_u32().into();
let context = (addr >> CONTEXT_SCALING_FACTOR).low_u32().into();
Self::new_u256s(context, segment, virt)
}
pub(crate) fn increment(&mut self) {
self.virt = self.virt.saturating_add(1);
}
@ -153,7 +166,7 @@ impl MemoryState {
pub(crate) fn new(kernel_code: &[u8]) -> Self {
let code_u256s = kernel_code.iter().map(|&x| x.into()).collect();
let mut result = Self::default();
result.contexts[0].segments[Segment::Code as usize].content = code_u256s;
result.contexts[0].segments[Segment::Code.unscale()].content = code_u256s;
result
}
@ -204,12 +217,9 @@ impl MemoryState {
self.contexts[address.context].segments[address.segment].set(address.virt, val);
}
// These fields are already scaled by their respective segment.
pub(crate) fn read_global_metadata(&self, field: GlobalMetadata) -> U256 {
self.get(MemoryAddress::new(
0,
Segment::GlobalMetadata,
field as usize,
))
self.get(MemoryAddress::new_bundle(U256::from(field as usize)).unwrap())
}
}
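
`MemoryAddress::new_bundle` above undoes the bundling done in the kernel: the virtual offset occupies bits 0..32, the scaled segment bits 32..64, and the context sits above bit 64 (`CONTEXT_SCALING_FACTOR`, imported above from `operation`). A standalone illustration of that layout with `ethereum_types::U256`; the `bundle` / `unbundle` helpers are made up for the example and are not crate code:

use ethereum_types::U256;

const SEGMENT_SCALING_FACTOR: usize = 32;
const CONTEXT_SCALING_FACTOR: usize = 64;

fn bundle(context: u64, segment_index: u64, virt: u64) -> U256 {
    (U256::from(context) << CONTEXT_SCALING_FACTOR)
        + (U256::from(segment_index) << SEGMENT_SCALING_FACTOR)
        + U256::from(virt)
}

fn unbundle(addr: U256) -> (u32, u32, u32) {
    // Same decomposition as `new_bundle`: lowest 32 bits, then segment, then context.
    let virt = addr.low_u32();
    let segment_index = (addr >> SEGMENT_SCALING_FACTOR).low_u32();
    let context = (addr >> CONTEXT_SCALING_FACTOR).low_u32();
    (context, segment_index, virt)
}

fn main() {
    // Segment index 11 is `TxnData`, whose scaled value (11 << 32) is what the
    // mstore_unpacking test earlier pushes directly as its address.
    let addr = bundle(3, 11, 4);
    assert_eq!(unbundle(addr), (3, 11, 4));
}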

View File

@ -19,9 +19,8 @@ use crate::extension_tower::BN_BASE;
use crate::generation::state::GenerationState;
use crate::memory::segments::Segment;
use crate::util::u256_to_usize;
use crate::witness::errors::MemoryError::{ContextTooLarge, SegmentTooLarge, VirtTooLarge};
use crate::witness::errors::MemoryError::VirtTooLarge;
use crate::witness::errors::ProgramError;
use crate::witness::errors::ProgramError::MemoryError;
use crate::witness::memory::{MemoryAddress, MemoryChannel, MemoryOp, MemoryOpKind};
use crate::witness::operation::MemoryChannel::GeneralPurpose;
use crate::witness::transition::fill_stack_fields;
@ -59,6 +58,10 @@ pub(crate) enum Operation {
MstoreGeneral,
}
// Contexts in the kernel are shifted by 2^64, so that they can be combined with
// the segment and virtual address components in a single U256 word.
pub(crate) const CONTEXT_SCALING_FACTOR: usize = 64;
/// Adds a CPU row filled with the two inputs and the output of a logic operation.
/// Generates a new logic operation and adds it to the vector of operation in `LogicStark`.
/// Adds three memory read operations to `MemoryStark`: for the two inputs and the output.
@ -129,11 +132,10 @@ pub(crate) fn generate_keccak_general<F: Field>(
state: &mut GenerationState<F>,
mut row: CpuColumnsView<F>,
) -> Result<(), ProgramError> {
let [(context, _), (segment, log_in1), (base_virt, log_in2), (len, log_in3)] =
stack_pop_with_log_and_fill::<4, _>(state, &mut row)?;
let [(addr, _), (len, log_in1)] = stack_pop_with_log_and_fill::<2, _>(state, &mut row)?;
let len = u256_to_usize(len)?;
let base_address = MemoryAddress::new_u256s(context, segment, base_virt)?;
let base_address = MemoryAddress::new_bundle(addr)?;
let input = (0..len)
.map(|i| {
let address = MemoryAddress {
@ -152,8 +154,6 @@ pub(crate) fn generate_keccak_general<F: Field>(
keccak_sponge_log(state, base_address, input);
state.traces.push_memory(log_in1);
state.traces.push_memory(log_in2);
state.traces.push_memory(log_in3);
state.traces.push_cpu(row);
Ok(())
}
@ -191,7 +191,7 @@ pub(crate) fn generate_pop<F: Field>(
) -> Result<(), ProgramError> {
let [(_, _)] = stack_pop_with_log_and_fill::<1, _>(state, &mut row)?;
let diff = row.stack_len - F::from_canonical_usize(1);
let diff = row.stack_len - F::ONE;
if let Some(inv) = diff.try_inverse() {
row.general.stack_mut().stack_inv = inv;
row.general.stack_mut().stack_inv_aux = F::ONE;
@ -352,7 +352,11 @@ pub(crate) fn generate_get_context<F: Field>(
let res = mem_write_gp_log_and_fill(3, address, state, &mut row, state.registers.stack_top);
Some(res)
};
push_no_write(state, state.registers.context.into());
push_no_write(
state,
// The fetched value needs to be scaled before being pushed.
U256::from(state.registers.context) << CONTEXT_SCALING_FACTOR,
);
if let Some(log) = write {
state.traces.push_memory(log);
}
@ -369,9 +373,10 @@ pub(crate) fn generate_set_context<F: Field>(
let sp_to_save = state.registers.stack_len.into();
let old_ctx = state.registers.context;
let new_ctx = u256_to_usize(ctx)?;
// The popped value needs to be scaled down.
let new_ctx = u256_to_usize(ctx >> CONTEXT_SCALING_FACTOR)?;
let sp_field = ContextMetadata::StackSize as usize;
let sp_field = ContextMetadata::StackSize.unscale();
let old_sp_addr = MemoryAddress::new(old_ctx, Segment::ContextMetadata, sp_field);
let new_sp_addr = MemoryAddress::new(new_ctx, Segment::ContextMetadata, sp_field);
@ -390,7 +395,7 @@ pub(crate) fn generate_set_context<F: Field>(
channel.used = F::ONE;
channel.is_read = F::ONE;
channel.addr_context = F::from_canonical_usize(new_ctx);
channel.addr_segment = F::from_canonical_usize(Segment::ContextMetadata as usize);
channel.addr_segment = F::from_canonical_usize(Segment::ContextMetadata.unscale());
channel.addr_virtual = F::from_canonical_usize(new_sp_addr.virt);
let val_limbs: [u64; 4] = sp_to_save.0;
for (i, limb) in val_limbs.into_iter().enumerate() {
@ -433,6 +438,7 @@ pub(crate) fn generate_set_context<F: Field>(
state.traces.push_memory(log_write_old_sp);
state.traces.push_memory(log_read_new_sp);
state.traces.push_cpu(row);
Ok(())
}
@ -575,7 +581,7 @@ pub(crate) fn generate_not<F: Field>(
// This is necessary for the stack constraints for POP,
// since the two flags are combined.
let diff = row.stack_len - F::from_canonical_usize(1);
let diff = row.stack_len - F::ONE;
if let Some(inv) = diff.try_inverse() {
row.general.stack_mut().stack_inv = inv;
row.general.stack_mut().stack_inv_aux = F::ONE;
@ -808,18 +814,16 @@ pub(crate) fn generate_mload_general<F: Field>(
state: &mut GenerationState<F>,
mut row: CpuColumnsView<F>,
) -> Result<(), ProgramError> {
let [(context, _), (segment, log_in1), (virt, log_in2)] =
stack_pop_with_log_and_fill::<3, _>(state, &mut row)?;
let [(addr, _)] = stack_pop_with_log_and_fill::<1, _>(state, &mut row)?;
let (val, log_read) = mem_read_gp_with_log_and_fill(
3,
MemoryAddress::new_u256s(context, segment, virt)?,
state,
&mut row,
);
let (val, log_read) =
mem_read_gp_with_log_and_fill(1, MemoryAddress::new_bundle(addr)?, state, &mut row);
push_no_write(state, val);
let diff = row.stack_len - F::from_canonical_usize(4);
// Because MLOAD_GENERAL performs 1 pop and 1 push, it does not make use of the `stack_inv_aux` general columns.
// We can hence set the diff to 2 (instead of 1) so that the stack constraint for MSTORE_GENERAL applies to both
// operations, which are combined into a single CPU flag.
let diff = row.stack_len - F::TWO;
if let Some(inv) = diff.try_inverse() {
row.general.stack_mut().stack_inv = inv;
row.general.stack_mut().stack_inv_aux = F::ONE;
@ -828,8 +832,6 @@ pub(crate) fn generate_mload_general<F: Field>(
row.general.stack_mut().stack_inv_aux = F::ZERO;
}
state.traces.push_memory(log_in1);
state.traces.push_memory(log_in2);
state.traces.push_memory(log_read);
state.traces.push_cpu(row);
Ok(())
@ -839,15 +841,14 @@ pub(crate) fn generate_mload_32bytes<F: Field>(
state: &mut GenerationState<F>,
mut row: CpuColumnsView<F>,
) -> Result<(), ProgramError> {
let [(context, _), (segment, log_in1), (base_virt, log_in2), (len, log_in3)] =
stack_pop_with_log_and_fill::<4, _>(state, &mut row)?;
let [(addr, _), (len, log_in1)] = stack_pop_with_log_and_fill::<2, _>(state, &mut row)?;
let len = u256_to_usize(len)?;
if len > 32 {
// The call to `U256::from_big_endian()` would panic.
return Err(ProgramError::IntegerTooLarge);
}
let base_address = MemoryAddress::new_u256s(context, segment, base_virt)?;
let base_address = MemoryAddress::new_bundle(addr)?;
if usize::MAX - base_address.virt < len {
return Err(ProgramError::MemoryError(VirtTooLarge {
virt: base_address.virt.into(),
@ -870,8 +871,6 @@ pub(crate) fn generate_mload_32bytes<F: Field>(
byte_packing_log(state, base_address, bytes);
state.traces.push_memory(log_in1);
state.traces.push_memory(log_in2);
state.traces.push_memory(log_in3);
state.traces.push_cpu(row);
Ok(())
}
@ -880,23 +879,12 @@ pub(crate) fn generate_mstore_general<F: Field>(
state: &mut GenerationState<F>,
mut row: CpuColumnsView<F>,
) -> Result<(), ProgramError> {
let [(val, _), (context, log_in1), (segment, log_in2), (virt, log_in3)] =
stack_pop_with_log_and_fill::<4, _>(state, &mut row)?;
let [(val, _), (addr, log_in1)] = stack_pop_with_log_and_fill::<2, _>(state, &mut row)?;
let address = MemoryAddress {
context: context
.try_into()
.map_err(|_| MemoryError(ContextTooLarge { context }))?,
segment: segment
.try_into()
.map_err(|_| MemoryError(SegmentTooLarge { segment }))?,
virt: virt
.try_into()
.map_err(|_| MemoryError(VirtTooLarge { virt }))?,
};
let address = MemoryAddress::new_bundle(addr)?;
let log_write = mem_write_partial_log_and_fill(address, state, &mut row, val);
let diff = row.stack_len - F::from_canonical_usize(4);
let diff = row.stack_len - F::TWO;
if let Some(inv) = diff.try_inverse() {
row.general.stack_mut().stack_inv = inv;
row.general.stack_mut().stack_inv_aux = F::ONE;
@ -908,8 +896,6 @@ pub(crate) fn generate_mstore_general<F: Field>(
}
state.traces.push_memory(log_in1);
state.traces.push_memory(log_in2);
state.traces.push_memory(log_in3);
state.traces.push_memory(log_write);
state.traces.push_cpu(row);
@ -922,19 +908,16 @@ pub(crate) fn generate_mstore_32bytes<F: Field>(
state: &mut GenerationState<F>,
mut row: CpuColumnsView<F>,
) -> Result<(), ProgramError> {
let [(context, _), (segment, log_in1), (base_virt, log_in2), (val, log_in3)] =
stack_pop_with_log_and_fill::<4, _>(state, &mut row)?;
let [(addr, _), (val, log_in1)] = stack_pop_with_log_and_fill::<2, _>(state, &mut row)?;
let base_address = MemoryAddress::new_u256s(context, segment, base_virt)?;
let base_address = MemoryAddress::new_bundle(addr)?;
byte_unpacking_log(state, base_address, val, n as usize);
let new_offset = base_virt + n;
push_no_write(state, new_offset);
let new_addr = addr + n;
push_no_write(state, new_addr);
state.traces.push_memory(log_in1);
state.traces.push_memory(log_in2);
state.traces.push_memory(log_in3);
state.traces.push_cpu(row);
Ok(())
}
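
The scaling comments in `generate_get_context` / `generate_set_context` above boil down to one shift by `CONTEXT_SCALING_FACTOR` in each direction; because the low 64 bits of a scaled context are zero, it composes with a scaled segment and a 32-bit virtual offset without any carries between the components. A quick sanity-check sketch (standalone, not crate code):

use ethereum_types::U256;

const CONTEXT_SCALING_FACTOR: usize = 64;

fn main() {
    let context: usize = 5;
    // What GET_CONTEXT pushes onto the stack:
    let pushed = U256::from(context) << CONTEXT_SCALING_FACTOR;
    // What SET_CONTEXT recovers from the popped value:
    let recovered = (pushed >> CONTEXT_SCALING_FACTOR).as_usize();
    assert_eq!(recovered, context);
    // The low 64 bits stay free for the segment and virtual offset components.
    assert_eq!(pushed.low_u64(), 0);
}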

View File

@ -299,11 +299,11 @@ fn perform_op<F: Field>(
state.registers.gas_used += gas_to_charge(op);
let gas_limit_address = MemoryAddress {
context: state.registers.context,
segment: Segment::ContextMetadata as usize,
virt: ContextMetadata::GasLimit as usize,
};
let gas_limit_address = MemoryAddress::new(
state.registers.context,
Segment::ContextMetadata,
ContextMetadata::GasLimit.unscale(), // `ContextMetadata` offsets are already scaled
);
if !state.registers.is_kernel {
let gas_limit = TryInto::<u64>::try_into(state.memory.get(gas_limit_address));
match gas_limit {
@ -345,14 +345,14 @@ pub(crate) fn fill_stack_fields<F: Field>(
channel.used = F::ONE;
channel.is_read = F::ONE;
channel.addr_context = F::from_canonical_usize(state.registers.context);
channel.addr_segment = F::from_canonical_usize(Segment::Stack as usize);
channel.addr_segment = F::from_canonical_usize(Segment::Stack.unscale());
channel.addr_virtual = F::from_canonical_usize(state.registers.stack_len - 1);
let address = MemoryAddress {
context: state.registers.context,
segment: Segment::Stack as usize,
virt: state.registers.stack_len - 1,
};
let address = MemoryAddress::new(
state.registers.context,
Segment::Stack,
state.registers.stack_len - 1,
);
let mem_op = MemoryOp::new(
GeneralPurpose(0),
@ -494,7 +494,7 @@ pub(crate) fn transition<F: Field>(state: &mut GenerationState<F>) -> anyhow::Re
e,
offset_name,
state.stack(),
state.memory.contexts[0].segments[Segment::KernelGeneral as usize].content,
state.memory.contexts[0].segments[Segment::KernelGeneral.unscale()].content,
);
}
state.rollback(checkpoint);

View File

@ -442,7 +442,7 @@ fn test_log_with_aggreg() -> anyhow::Result<()> {
// Preprocess all circuits.
let all_circuits = AllRecursiveCircuits::<F, C, D>::new(
&all_stark,
&[16..17, 13..16, 15..18, 14..15, 9..10, 12..13, 17..20],
&[16..17, 13..16, 15..18, 14..15, 10..11, 12..13, 17..20],
&config,
);