The 2^12 change made this outdated. We no longer need to shrink the degree (since normal recursive proofs are already 2^12), so we can simplify a bit: just boost the rate, then do a size-optimized proof. (Without doing the rate boost first, the final proof would be over 2^12.)
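For intuition on why boosting the rate shrinks the final circuit, here's a back-of-the-envelope sketch (an illustrative model only, not plonky2's actual sizing logic): each FRI query contributes roughly `rate_bits` bits of security, so a higher rate means fewer queries, and hence a smaller recursive verifier.

```rust
/// Rough FRI query count: each query contributes about `rate_bits` bits of
/// security, and proof-of-work grinding contributes `pow_bits` for free.
/// (Back-of-the-envelope model only.)
fn approx_num_queries(security_bits: u32, rate_bits: u32, pow_bits: u32) -> u32 {
    // Ceiling division.
    (security_bits - pow_bits + rate_bits - 1) / rate_bits
}

fn main() {
    // Standard vs. boosted rate, both targeting 93 bits with low PoW.
    println!("rate 2^-3: ~{} queries", approx_num_queries(93, 3, 10));
    println!("rate 2^-6: ~{} queries", approx_num_queries(93, 6, 10));
}
```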
Configured for 93 bits of security for now, but the PoW settings are low, so that will be easy to increase.
~45 KB with current settings.
This results in 8 constant polynomials, which means our Merkle tree containing preprocessed polynomials has leaves of size 80 + 8 = 88. A multiple of 8 is efficient in terms of how many gates it takes to hash a leaf, saving 17 gates.
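To see why a multiple of 8 helps, here's a minimal sketch; the assumption (mine, not stated above) is a sponge hash that absorbs 8 field elements per permutation, so a leaf whose size is a multiple of 8 wastes no absorption capacity:

```rust
/// Permutations needed to absorb a leaf of `len` field elements into a
/// sponge with rate `RATE`, assuming roughly one gate per permutation.
const RATE: usize = 8; // assumed: 8 elements absorbed per permutation

fn permutations_for_leaf(len: usize) -> usize {
    (len + RATE - 1) / RATE // ceiling division
}

fn main() {
    // 88 is a multiple of 8, so every permutation is fully utilized.
    assert_eq!(permutations_for_leaf(88), 11);
    // One extra element would cost a whole extra permutation.
    assert_eq!(permutations_for_leaf(89), 12);
}
```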
This avoids creating arithmetic gates with potentially unique constants. It should be strictly cheaper, though it only seems to save one gate in practice.
* Suppress warnings about use of unstable compiler features.
* Remove unused functions.
* Refactor and remove PolynomialCoeffs::new_padded(); fix degree_padded.
Note that this fixes a minor mistake in the FFT testing code, where the `degree_padded` value was log2 of what it should have been, preventing a testing loop from executing.
* Remove divide_by_z_h() and related test functions.
* Only compile check_{consistency,test_vectors} when testing.
* Move verify() to test module.
* Remove unused functions.
NB: Changed the config in the `gadgets/arithmetic_extension.rs::tests` module, which may change the test's meaning.
* Remove unused import.
* Mark GMiMC option as allowed 'dead code'.
* Fix missing feature.
* Remove unused functions.
* cargo fmt
* Mark variable as unused.
* Revert "Remove unused functions."
This reverts commit 99d2357f1c967fd9fd6cac63e1216d929888be72.
* Make config functions public.
* Mark 'reduce_nonnative()' as dead code for now.
* Revert "Move verify() to test module." Refactor to `verify_compressed`.
This reverts commit b426e810d033c642f54e25ebc4a8114491df5076.
* cargo fmt
* Reinstate `verify()` fn on `CompressedProofWithPublicInputs`.
For now, we can do shrinking recursion with 93 bits of security. It's not quite as high as we want, but it's close, and I think it makes sense to merge this and treat the 2^12 circuit as our main benchmark, as we continue working to improve security.
The previous code used an equality test for each index. This variant uses a "MUX tree" instead. If we imagine the items as being the leaves of a binary tree, we can compute the `i`th item by splitting `i` into bits, then performing a "select" operation for each node. The bit used in each select is based on the height of the associated node.
This uses fewer wires and is cheaper to evaluate, saving 31 wires in the recursion circuit.
A potential disadvantage is that this uses higher-degree constraints (degree 4 with our params), but I don't think this is much of a concern for us since we use a degree-9 constraint system.
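Here's a plain-Rust sketch of the idea over ordinary integers (gadget-free; `select` mirrors a circuit-style `b*x + (1-b)*y`, and the bit driving each layer corresponds to the height of the nodes being merged):

```rust
/// One circuit-style select: b * x + (1 - b) * y, with b in {0, 1}.
fn select(b: u64, x: u64, y: u64) -> u64 {
    b * x + (1 - b) * y
}

/// Compute `items[index]` with a MUX tree: treat `items` as the leaves of a
/// binary tree and perform one select per internal node, where the selects
/// at height `h` are driven by bit `h` of the index (LSB first).
fn mux_tree(items: &[u64], index: usize) -> u64 {
    assert!(items.len().is_power_of_two());
    let mut layer = items.to_vec();
    let mut h = 0;
    while layer.len() > 1 {
        let bit = ((index >> h) & 1) as u64;
        layer = layer
            .chunks(2)
            .map(|pair| select(bit, pair[1], pair[0]))
            .collect();
        h += 1;
    }
    layer[0]
}

fn main() {
    let items = [10, 20, 30, 40, 50, 60, 70, 80];
    for i in 0..items.len() {
        assert_eq!(mux_tree(&items, i), items[i]);
    }
}
```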
The effect on soundness error is negligible for our current field, but this introduces an assertion that could fail if we changed to a field with more elements in the "ambiguous" range.
My previous change introduced a bug: when `num_routed_wires` was a multiple of 8, the partial products "consumed" all `num_routed_wires` terms, whereas we actually want to leave 8 terms for the final product.
This also changes `check_partial_products` to include the final product constraint, and merges `vanishing_v_shift_terms` into `vanishing_partial_products_terms`. I think this is natural since `Z(x)`, partial products, and `Z(g x)` are all part of the product accumulator chain.
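As a toy model of the merged chain (plain integers, chunk size 8 as above; not the actual constraint code), the accumulators `[Z(x), pp_0, ..., pp_{k-1}, Z(g x)]` are linked by one product constraint per chunk, so the final chunk of 8 terms feeds `Z(g x)` directly rather than producing another partial product:

```rust
/// Toy check of the accumulator chain Z(x) -> partial products -> Z(g x).
/// Each step asserts acc_{i+1} == acc_i * (product of the next CHUNK terms);
/// the partial products cover all but the last CHUNK terms, which feed
/// Z(g x) directly.
const CHUNK: usize = 8;

fn check_product_chain(z_x: u64, z_gx: u64, partials: &[u64], terms: &[u64]) -> bool {
    assert_eq!(terms.len() % CHUNK, 0);
    assert_eq!(partials.len(), terms.len() / CHUNK - 1);
    let accs: Vec<u64> = std::iter::once(z_x)
        .chain(partials.iter().copied())
        .chain(std::iter::once(z_gx))
        .collect();
    accs.windows(2)
        .zip(terms.chunks(CHUNK))
        .all(|(acc, chunk)| acc[1] == acc[0] * chunk.iter().product::<u64>())
}

fn main() {
    // 16 terms -> one intermediate partial product, then Z(g x) itself.
    let terms = [1u64; 16]; // trivial terms keep the arithmetic readable
    assert!(check_product_chain(3, 3, &[3], &terms));
}
```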
I believe I was mistaken earlier, and hash-based commitments actually call for `r = 2*security_bits` bits of randomness.
That is, I believe breaking a particular commitment requires `O(2^r)` work (more if the committed value adds entropy, but assume it doesn't), while breaking one of `n` commitments requires less work.
It seems like this should be a well-known thing, but I can't find much in the literature. The IOP paper does mention using `2*security_bits` bits of randomness, though.
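To spell out the rough argument (my reconstruction, not taken from the paper): a single-target preimage search costs about `2^r` hash evaluations, but an attacker who hashes guesses and checks each against all `n` commitments expects to break one of them after about `2^r / n` evaluations. With `r = 2*security_bits`, even `n = 2^security_bits` targets leaves the attack costing `2^(2*security_bits) / 2^security_bits = 2^security_bits` work, preserving `security_bits` bits of security.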