For now, we can do shrinking recursion with 93 bits of security. It's not quite as high as we want, but it's close, and I think it makes sense to merge this and treat the `2^12` circuit as our main benchmark, as we continue working to improve security.
The previous code used an equality test for each index. This variant uses a "MUX tree" instead. If we imagine the items as the leaves of a binary tree, we can compute the `i`th item by splitting `i` into bits, then performing a "select" operation at each node. The bit used in each select depends on the height of the associated node.
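To make the tree concrete, here's a plain-Rust sketch of the selection logic (illustrative only: `mux_tree_select` and its signature are mine, not the gate's API, and field arithmetic is stood in for by `u64`s):

```rust
/// Select `items[i]` by folding a MUX tree over the bits of `i`,
/// least significant bit first. `items.len()` must be a power of two.
fn mux_tree_select(items: &[u64], index_bits: &[bool]) -> u64 {
    assert_eq!(items.len(), 1 << index_bits.len());
    let mut layer = items.to_vec();
    for &bit in index_bits {
        // One "select" per node at this height: keep the right child
        // if the bit is set, otherwise the left. In the circuit each
        // select is a constraint like `out = left + bit * (right - left)`.
        layer = layer
            .chunks(2)
            .map(|pair| if bit { pair[1] } else { pair[0] })
            .collect();
    }
    layer[0]
}
```

Each level halves the candidate list, so selecting from `n` items takes `log2(n)` levels of selects rather than `n` equality tests.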
The MUX tree uses fewer wires and is cheaper to evaluate, saving 31 wires in the recursion circuit.
A potential disadvantage is that this uses higher-degree constraints (degree 4 with our params), but I don't think this is much of a concern for us since we use a degree-9 constraint system.
The effect on soundness error is negligible for our current field, but this introduces an assertion that could fail if we changed to a field with more elements in the "ambiguous" range.
My previous change introduced a bug -- when `num_routed_wires` was a multiple of 8, the partial products "consumed" all `num_routed_wires` terms, whereas we actually want to leave 8 terms for the final product.
This also changes `check_partial_products` to include the final product constraint, and merges `vanishing_v_shift_terms` into `vanishing_partial_products_terms`. I think this is natural since `Z(x)`, partial products, and `Z(g x)` are all part of the product accumulator chain.
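A hedged sketch of the arithmetic as I understand it (names and formulas are illustrative, not the repo's exact code): the accumulator chain checks the last chunk of factors directly against `Z(g x)`, so that chunk must not get a partial product of its own:

```rust
/// Number of intermediate partial products needed to accumulate
/// `num_routed_wires` factors in chunks of `max_degree` (8 for us).
/// The chain is
///     Z(x) * prod(chunk_0)    = pp_0
///     pp_0 * prod(chunk_1)    = pp_1
///     ...
///     pp_k * prod(chunk_last) = Z(g x)
/// so the last chunk is absorbed by the `Z(g x)` constraint and needs
/// no wire of its own.
fn num_partial_products(num_routed_wires: usize, max_degree: usize) -> usize {
    // ceil(num_routed_wires / max_degree) chunks, minus the final one.
    // Note that floor division would give the same count except when
    // `num_routed_wires` is a multiple of `max_degree`, where it would
    // consume every chunk and leave nothing for the final product.
    (num_routed_wires + max_degree - 1) / max_degree - 1
}
```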
I believe I was mistaken earlier, and hash-based commitments actually call for `r = 2*security_bits` bits of randomness.
I.e. I believe breaking a particular commitment requires `O(2^r)` work (more if the committed value adds entropy, but assume it doesn't), but breaking any one of `n` commitments requires only about `2^r / n` work.
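To spell out my back-of-the-envelope reasoning (not taken from a reference): an attacker grinding salts against `n` target commitments gets `n` chances per hash evaluation, so breaking some commitment takes roughly `2^r / n` work. If a system can produce up to `n = 2^s` commitments, where `s = security_bits`, then keeping the attack at `2^s` work requires `2^r / 2^s >= 2^s`, i.e. `r >= 2 * s`.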
It seems like this should be a well-known thing, but I can't find much in the literature. The IOP paper does mention using `2*security_bits` bits of randomness, though.