diff --git a/commentary/FRI.md b/commentary/FRI.md
index b2c4569..f483c6f 100644
--- a/commentary/FRI.md
+++ b/commentary/FRI.md
@@ -3,18 +3,18 @@ FRI protocol

 Plonky2 uses a "wide" FRI commitment (committing to whole rows), and then a batched opening proof for all 4 commitments (namely: constants, witness, running product and quotient polynomial).

-### Initial Merkle commitment
+### Initial Merkle commitment(s)

 To commit to a matrix of size $2^n\times M$, the columns, interpreted as values of polynomials on a multiplicative subgroup, are "low-degree extended", that is, evaluated (via an IFFT-FFT pair) on a (coset of a) larger multiplicative subgroup of size $2^{n+\log_2\mathsf{rate}^{-1}}$. In the standard configuration we have $\mathsf{rate}=1/8$, so we get 8x larger columns, that is, size $2^{n+3}$.

 The coset Plonky2 uses is the one shifted by the multiplicative generator of the field

 $$ g := \mathtt{0xc65c18b67785d900} = 14293326489335486720\in\mathbb{F} $$

-Note: There may be some reordering of the LDE values (bit reversal etc) which I'm unsure about at this point.
-
 When configured for zero-knowledge, each _row_ (Merkle leaf) is "blinded" by the addition of `SALT_SIZE = 4` extra random _columns_ (huh?).

 Finally, each row is hashed (well, if the number of columns is at most 4, they are left as they are, but this should never happen in practice), and a Merkle tree is built on top of these leaf hashes.

+WARNING: before building the Merkle tree, the leaves are _reordered_ by reversing the order of the bits in the index!
+
 So we get a Merkle tree whose leaves correspond to full rows ($2^{n+3}$ leaves).

 Note that in Plonky2 we have in total 4 such matrices, resulting in 4 Merkle caps:
@@ -102,11 +102,18 @@ In practice this is done per batch (see the double sum), because the division is

 Remark: In the actual protocol, $F_i$ will be the columns and $y_i$ will be the openings.
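The "reduced opening" combination mentioned in the remark can be illustrated with a toy sketch: the claimed opening values $y_i$ are combined with powers of the challenge $\alpha$. This is a hypothetical helper over a small toy prime field, not plonky2's Goldilocks field or its actual API:

```rust
// Toy prime modulus, for illustration only (plonky2 works over the
// 64-bit Goldilocks field 2^64 - 2^32 + 1).
const Q: u64 = 97;

// Hypothetical helper: combine the claimed opening values y_i into the
// "reduced opening" Y = sum_i alpha^i * y_i, via Horner evaluation.
fn reduce_openings(ys: &[u64], alpha: u64) -> u64 {
    ys.iter().rev().fold(0, |acc, &y| (acc * alpha + y) % Q)
}

fn main() {
    // Y = 1 + 2*alpha + 3*alpha^2 with alpha = 2  ->  1 + 4 + 12 = 17
    println!("{}", reduce_openings(&[1, 2, 3], 2));
}
```

The same combination, with the same $\alpha$ powers, is what the verifier later re-derives from the opened Merkle leaves.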
 Combining the batches, we end up with 2 terms:

-$$P(X) = P_0(X) + \alpha^M\cdot P_1(X) =
+$$P(X) = P_0(X) + \alpha^{M_0}\cdot P_1(X) =
 \frac{G_0(X)-Y_0}{X-\zeta} + \alpha^{M_0}\cdot \frac{G_1(X)-Y_1}{X-\omega\zeta}$$

 The pair $(Y_0,Y_1)$ are called "precomputed reduced openings" in the code (calculated from the opening set, involving _two rows_), and $X$ will be substituted with $X\mapsto \eta^{\mathsf{query\_index}}$ (calculated from the "initial tree proofs", involving _one row_). Here $\eta$ is the generator of the LDE subgroup, so $\omega = \eta^{1/\rho}$.

+**NOTE1:** while the above would be _the logical version_, here _AGAIN_ Plonky2 does something different:
+
+$$ P(X) := \alpha^{M_1}\cdot P_0(X) + P_1(X)$$
+
+where $M_0,M_1$ denote the number of terms in the sums in $P_0,P_1$, respectively.
+
+**NOTE2:** in all these sums (well, in the zero-indexed ones), the terms are _reordered_ so that the lookups come at the very end. This is rather unfortunate, because the two kinds of inputs are then _in a different order_, and in the case of Merkle tree openings, that order cannot even be changed...

 #### Commit phase

@@ -116,7 +123,9 @@ The prover then repeatedly "folds" these vectors using the challenges $\beta_i$,

 As an example, consider a starting polynomial of degree $2^{13}-1$. With $\rho=1/8$ this gives a codeword of size $2^{16}$. This is committed to (but see the note below!). Then a challenge $\beta_0$ is generated, and we fold this (with an arity of $2^4$), getting a codeword of size $2^{12}$, representing a polynomial of degree $2^9-1$. We commit to this too. Then we generate another challenge $\beta_1$, and fold again with that. Now we get a codeword of size $2^8$; however, this is represented by a polynomial of degree at most $31$, so we just send the 32 coefficients of that instead of a commitment.
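The size bookkeeping of this example can be sketched in a few lines. The function name and signature are made up for illustration, not plonky2's API:

```rust
// Given log2 of the degree bound, log2 of the blowup factor, and the
// per-round folding arities (in bits), return the log2 sizes of the
// successive codewords.
fn fri_round_sizes(degree_bits: u32, rate_bits: u32, arity_bits: &[u32]) -> Vec<u32> {
    let mut codeword_bits = degree_bits + rate_bits;
    let mut sizes = vec![codeword_bits];
    for &a in arity_bits {
        codeword_bits -= a; // folding with arity 2^a shrinks the codeword 2^a-fold
        sizes.push(codeword_bits);
    }
    sizes
}

fn main() {
    // degree 2^13, rate 1/8, two folds of arity 2^4:
    // codeword sizes 2^16 -> 2^12 -> 2^8, matching the example above
    println!("{:?}", fri_round_sizes(13, 3, &[4, 4]));
}
```

Note that the rate is preserved by each fold: the degree bound and the codeword size shrink by the same factor $2^{\mathsf{arity\_bits}}$.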
-Note: as an optimization, when creating these Merkle trees, we always put _cosets_ of size $2^{\mathsf{arity}}$ on the leaves, as we will have to open them all together anyway. Furthermore, we use _Merkle caps_, so the proof lengths are shorter by the corresponding amount (4 by default, because we have 16 mini-roots in a cap). So the Merkle proofs are for a LDE size $2^k$ have length $k-\mathsf{arity\_bits}-\mathsf{cap\_bits}$, typically $k-8$.
+Notes: as an optimization, when creating these Merkle trees, we always put _cosets_ of size $2^{\mathsf{arity}}$ on the leaves, as we will have to open them all together anyway. To achieve this grouping, Plonky2 reorders the elements by reversing _the order of bits in the indices_; this way, the members of a coset (which were far from each other before) become a contiguous range.
+
+Furthermore, we use _Merkle caps_, so the proof lengths are shorter by the corresponding amount (4 by default, because we have 16 mini-roots in a cap). So the Merkle proofs for an LDE of size $2^k$ have length $k-\mathsf{arity\_bits}-\mathsf{cap\_bits}$, typically $k-8$.

 | step | Degree     | LDE size | Tree depth | prf. len | fold with | send & absorb |
 |------|------------|----------|------------|----------|-----------|---------------|
@@ -153,20 +162,89 @@ $$

 Then in each folding step, a whole coset is opened in the "upper layer", one element of which was known from the previous step (or, in the very first step, can be computed from the "initial tree proofs" and the openings themselves), and this is checked to match. Then the folded element of the next layer is computed by a small $2^\mathsf{arity}$-sized FFT, and this is repeated until the final step.

-### FRI verifier cost
+### Folding math details
+
+So the combined polynomial $P(x)$ is a polynomial of degree (one less than) $N=2^{n+\log_2(1/\rho)}$ over $\widetilde{\mathbb{F}}$.
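The coset-grouping effect of the bit-reversal reordering described in the notes above can be checked with a small sketch (hypothetical helper names; plonky2's own utilities differ in detail):

```rust
// Reverse the lowest `bits` bits of the index `i`.
fn reverse_bits(i: usize, bits: u32) -> usize {
    i.reverse_bits() >> (usize::BITS - bits)
}

// Positions, after bit-reversal reordering, of the strided coset
// { k + i*(N/arity) : 0 <= i < arity } inside a vector of length N = 2^n_bits.
fn coset_positions(n_bits: u32, arity_bits: u32, k: usize) -> Vec<usize> {
    let arity = 1usize << arity_bits;
    let stride = 1usize << (n_bits - arity_bits); // N / arity
    (0..arity).map(|i| reverse_bits(i * stride + k, n_bits)).collect()
}

fn main() {
    // N = 16, arity = 4: the strided coset {1, 5, 9, 13} becomes
    // the contiguous block {8, 9, 10, 11} after bit reversal.
    let mut pos = coset_positions(4, 2, 1);
    pos.sort();
    println!("{:?}", pos);
}
```

The reason this works: the low bits of the original index select the coset member, and bit reversal moves them to the top, so all members of one coset share the same high bits and land next to each other.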
+We first commit to the evaluations
+
+$$\big\{P(g\cdot \eta^i)\;:\;0\le i < N\big\}$$
+
+of this polynomial on the coset $gH=\{g\cdot\eta^k\}$, where $\eta$ is a generator of the subgroup $H$ of size $N$, and $g\in\mathbb{F}^\times$ is a fixed generator of the whole multiplicative group of the field.
+
+However, the commitment is done not on individual values, but on the smaller cosets $\mathcal{C}_k:=\{g\cdot \eta^{i(N/\mathsf{arity})+k}\;:\;0\le i < \mathsf{arity}\}$ of size $\mathsf{arity}=16$.
+
+To achieve this, the above vector of evaluations is reordered, so that _the bits of the vector indices_ are reversed.
+
+Then the query index (in each query round) selects a single element of the vector. Note that the bits of the `query_index` are _also reversed_ - however, as the Merkle tree leaves are reordered too, this is not apparent at first! So, mathematically speaking, the locations look like $x_0 = g\cdot \eta^{\mathrm{rev}(\mathsf{idx})}$...
+
+From that, we can derive its coset, which we open with a Merkle proof. We can check the consistency of the selected element (the combined polynomial value) against the original polynomial commitments using the formula for the combined polynomial.
+
+               0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
+              +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
+    C_k       |     |##|     |           |           |        |
+              +-----------+-----------+-----------+-----------+
+               \         ^^                            /
+                \   ____ ______                       /
+                 \  _____ ______                     /
+                  \                                 /
+                   \                               /
+              +--+-      -+--+--+--+-         -+--+
+    C_(k>>4)  |    ...    |##|       ...       |  |
+              +--+-      -+--+--+--+-         -+--+
+               0           ^^                   15
+                        (k mod 16)
+
+Next, we want to "fold this coset", so that we can repeat this operation until we get the final polynomial, where we simply check the final folded value against an evaluation of the final polynomial.
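Concretely, the fold splits $P(x)=\sum_i x^i\, P_i(x^{\mathsf{arity}})$ and recombines the pieces with powers of a challenge $\beta$ into $P'(y)=\sum_i \beta^i\, P_i(y)$. A coefficient-level toy sketch of this (small toy prime field, not the Goldilocks field; all names are made up for illustration):

```rust
const Q: u64 = 97; // toy prime modulus, for illustration only

// Evaluate a polynomial, given by coefficients (low degree first), at x.
fn eval(coeffs: &[u64], x: u64) -> u64 {
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % Q)
}

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % Q; }
        b = b * b % Q;
        e >>= 1;
    }
    acc
}

// Fold: writing P(x) = sum_i x^i P_i(x^arity), return the coefficients of
// P'(y) = sum_i beta^i P_i(y). Coefficient c_j of P, with j = q*arity + i,
// contributes beta^i * c_j to coefficient q of P'.
fn fold_coeffs(coeffs: &[u64], arity: usize, beta: u64) -> Vec<u64> {
    let mut out = vec![0u64; coeffs.len().div_ceil(arity)];
    for (j, &c) in coeffs.iter().enumerate() {
        out[j / arity] = (out[j / arity] + pow(beta, (j % arity) as u64) * c) % Q;
    }
    out
}

fn main() {
    let p = [1, 2, 3, 4, 5]; // P(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4
    // Sanity check: folding with beta = x must satisfy P'(x^arity) = P(x).
    let x = 2;
    let folded = fold_coeffs(&p, 4, x);
    println!("{} == {}", eval(&folded, pow(x, 4)), eval(&p, x));
}
```

The degree drops by a factor of $\mathsf{arity}$, matching the table above; the actual prover does the equivalent operation on evaluation vectors rather than coefficients.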
+
+On the prover side, the folding step looks like this:
+
+$$ P(x) = \sum_{i=0}^{\mathsf{arity}-1} x^i\cdot P_i(x^\mathsf{arity}) \quad\quad\longrightarrow\quad\quad
+ P'(y) := \sum_{i=0}^{\mathsf{arity}-1} \beta^i\cdot P_i(y) $$
+
+The folded polynomial $P'(y)$ will then be evaluated on the (shrunken) coset
+
+$$H' := H^{\mathsf{arity}} = \big\{(g\eta^i)^\mathsf{arity}\;:\;i\big\} = \big\{g^\mathsf{arity}(\eta^{\mathsf{arity}})^{i'} \;:\; 0\le i' < N/\mathsf{arity}\big\}$$
 ();
 data.verify(proof.clone())?;
 ```

-### Example: Conditional Verification
+#### Example: Conditional Verification

 In some cases you may want the recursive circuit to choose between verifying a real inner proof and a dummy one. This is useful in conditional and cyclic recursion. The dummy proof is generated (using the `DummyProofGenerator`) to fill in when no "real" proof is present.

@@ -290,7 +293,7 @@ builder.conditionally_verify_proof_or_dummy::(
 );
 ```

-### Example: Cyclic Recursion with Public Verifier Data
+#### Example: Cyclic Recursion with Public Verifier Data

 In cyclic recursion every inner proof must use the same verification key. For this purpose the verifier data is made public and then checked for consistency. The function `check_cyclic_proof_verifier_data` is used to enforce that the verifier data from the inner proof (from its public inputs) matches the expected verifier data. See the following example of this:

 ```rust
diff --git a/commentary/commentary.pdf b/commentary/commentary.pdf
index dbf3804..46b55e3 100644
Binary files a/commentary/commentary.pdf and b/commentary/commentary.pdf differ