Right now `secp256k1_ec_pubkey_decompress` takes an in/out pointer to
a public key and replaces the input key with its decompressed variant.
This forces users who store compressed keys in small (<65-byte)
fixed-size buffers (for example, the Rust bindings do this) to
explicitly and wastefully copy their key to a larger buffer.
[API BREAK]
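For illustration, here is the kind of copy the in/out interface forces
on such callers. The prototype is paraphrased rather than quoted, so
treat the commented-out call as an assumption; the buffer handling is
the point.

    #include <string.h>

    /* Hypothetical illustration: a caller that stores keys in 33-byte
     * buffers must copy into a 65-byte scratch area first, because the
     * function decompresses in place. */
    void decompress_stored_key(const unsigned char compressed[33]) {
        unsigned char scratch[65];
        int len = 33;
        memcpy(scratch, compressed, 33);   /* the wasteful copy */
        /* secp256k1_ec_pubkey_decompress(scratch, &len);
         *   -- in-place decompression; len becomes 65 */
        (void)len;
    }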
* Make secp256k1_gej_add_var and secp256k1_gej_double return the
Z ratio to go from a.z to r.z.
* Use these Z ratios to speed up batch conversion of points to affine
coordinates, and to speed up batch conversion of points to a
common Z coordinate (see the sketch after this list).
* Add a point addition function that takes a point with a known
Z inverse.
* Due to secp256k1's endomorphism, all additions in the EC
multiplication code can work on affine coordinates (with an
implicit common Z coordinate), correcting the Z coordinate of
the result afterwards.
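To illustrate the batch conversion to affine coordinates: given that
zr[i] holds the ratio a[i].z / a[i-1].z, all points can be converted
with a single field inversion by walking the array backwards. The
types and function names below (fe, ge, gej, fe_mul, fe_sqr, fe_inv)
are stand-ins for sketching purposes, not the library's internal code.

    #include <stddef.h>

    typedef struct { unsigned long n[5]; } fe;  /* stand-in field element */
    typedef struct { fe x, y; } ge;             /* affine point */
    typedef struct { fe x, y, z; } gej;         /* jacobian point */
    void fe_mul(fe *r, const fe *a, const fe *b);
    void fe_sqr(fe *r, const fe *a);
    void fe_inv(fe *r, const fe *a);

    /* Convert a[0..n-1] (n >= 1) to affine with one inversion, using
     * 1/a[i-1].z = zr[i] * (1/a[i].z) to step backwards. */
    void batch_set_affine(ge *r, const gej *a, const fe *zr, size_t n) {
        fe zinv, zi2, zi3;
        size_t i = n - 1;
        fe_inv(&zinv, &a[i].z);              /* the only inversion */
        while (1) {
            fe_sqr(&zi2, &zinv);             /* 1/z^2 */
            fe_mul(&zi3, &zi2, &zinv);       /* 1/z^3 */
            fe_mul(&r[i].x, &a[i].x, &zi2);  /* x = X / z^2 */
            fe_mul(&r[i].y, &a[i].y, &zi3);  /* y = Y / z^3 */
            if (i == 0) break;
            fe_mul(&zinv, &zinv, &zr[i]);    /* 1/z of the previous point */
            i--;
        }
    }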
Refactoring by Pieter Wuille:
* Move more global-z logic into the group code.
* Separate the code for computing the odd multiples from the code that
brings them into either storage or globalz format.
* Rename functions.
* Make all addition operations return Z ratios, and test them.
* Make the zr table format compatible with future batch chaining
(the first entry in zr becomes the ratio between the input and the
first output).
Original idea and code by Peter Dettman.
This computes (n-b)G + bG with random value b, in place of nG in
ecmult_gen() for signing.
This is intended to reduce exposure to potential power/EMI sidechannels
during signing and pubkey generation by blinding the secret value with
another value which is hopefully unknown to the attacker.
It may not be very helpful if the attacker is able to observe the setup
or if even the scalar addition has an unacceptable leak, but it has low
overhead in any case and the security should be purely additive on top
of the existing defenses against sidechannels.
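A rough sketch of the idea, using stand-in type and function names
rather than the actual ecmult_gen internals:

    typedef struct { unsigned char d[32]; } scalar;  /* stand-in types */
    typedef struct { unsigned char d[96]; } gej;
    void scalar_negate(scalar *r, const scalar *a);
    void scalar_add(scalar *r, const scalar *a, const scalar *b);
    void gej_add(gej *r, const gej *a, const gej *b);
    void ecmult_table(gej *r, const scalar *n);  /* table-driven nG lookup */

    /* Compute nG as (n - b)G + bG so the table lookups never operate on n
     * itself. 'blind' is the random scalar b, 'blind_point' its point bG. */
    void ecmult_gen_blinded(gej *r, const scalar *n,
                            const scalar *blind, const gej *blind_point) {
        scalar t;
        scalar_negate(&t, blind);    /* t = -b */
        scalar_add(&t, &t, n);       /* t = n - b */
        ecmult_table(r, &t);         /* r = (n - b)G */
        gej_add(r, r, blind_point);  /* r = (n - b)G + bG = nG */
    }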
Use a conditional move of the same kind we use for the affine points
in the storage type instead of multiplying with the infinity flag
and adding. This results in fewer constructions to worry about for
sidechannel behavior.
It also might be faster: it doesn't benchmark as slower for me, at
least. I suspect the CMOV is faster than the mul_int + add path but
slower than the set + add path, making it roughly a wash overall.
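For reference, the kind of branch-free conditional move meant here
looks roughly like the following. This is a generic illustration, not
the library's routine; limb_cmov is a made-up name.

    #include <stdint.h>
    #include <stddef.h>

    /* Select between r and a limb-by-limb using a mask derived from
     * 'flag'; the memory accesses and instruction flow are identical
     * whether or not the move happens. */
    static void limb_cmov(uint64_t *r, const uint64_t *a, int flag, size_t limbs) {
        uint64_t mask0 = flag + ~((uint64_t)0);  /* flag ? 0x00..0 : 0xFF..F */
        uint64_t mask1 = ~mask0;                 /* flag ? 0xFF..F : 0x00..0 */
        size_t i;
        for (i = 0; i < limbs; i++) {
            r[i] = (r[i] & mask0) | (a[i] & mask1);
        }
    }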
Unbraced statements spanning multiple lines have been shown in many
projects to contribute to the introduction of bugs and a failure
to catch them in review, especially during maintenance of infrequently
modified code.
Most, but not all, of the existing instances in the codebase were not
cases that I would have expected to eventually result in bugs, but
applying this as a rule makes it easier for other people to
contribute safely.
I'm not aware of any such evidence for the case where the statement is
on a single line, but some people strongly prefer never to do that,
and the opposite rule of "_always_ use a single line for
single-statement blocks" isn't a reasonable rule for formatting
reasons. Might as well brace all of these too, since that's more
universally acceptable.
[In any case, I seem to have introduced the vast majority of the
single-line form (as they're my preference where they fit).]
This also removes a broken test which is no longer needed.
Goto, multiple returns, continue, and/or multiple breaks in a
loop are often used to build complex or non-local control
flow in software.
(They're all basically the same thing, and anyone axiomatically
opposing goto and not the rest is probably cargo-culting from
the title of Dijkstra's essay without thinking hard about it.)
Personally, I think the current use of these constructs in the
code base is fine: nowhere are we using them to create control
flow that couldn't easily be described in plain English, that is
hard to read or reason about, or that looks like a trap for
future developers.
Some, however, prefer a more rules-based approach to software
quality. In particular, MISRA forbids all of these constructs,
and for good experience-based reasons. Rules also have the
benefit of being machine-checkable and of surviving individual
developers.
(To be fair, MISRA also has a process for accommodating code that
breaks the rules for good reason.)
I think that in general we should also try to satisfy the
rules-based measures of software quality, except where there is an
objective reason not to: a measurable performance difference,
logic that turns to spaghetti, etc.
Changing out all the multiple returns in secp256k1.c appears to
be basically neutral: Some parts become slightly less clear,
some parts slightly more.
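As a concrete example of the kind of mechanical rewrite involved
(illustrative only, with made-up names, not an actual function from
secp256k1.c):

    #include <stddef.h>

    /* Before: early returns. */
    static int looks_compressed(const unsigned char *buf) {
        if (buf == NULL) return 0;
        if (buf[0] != 0x02 && buf[0] != 0x03) return 0;
        return 1;
    }

    /* After: a single exit point. */
    static int looks_compressed_single_return(const unsigned char *buf) {
        int ret = 0;
        if (buf != NULL && (buf[0] == 0x02 || buf[0] == 0x03)) {
            ret = 1;
        }
        return ret;
    }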
C doesn't include the terminating null in an array initialized from a
string literal if it doesn't fit; in C++ this is invalid.
The vararray-style prototypes and the init+calc declarations also
changed in this commit are not C89 enough for some tools.
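Two small illustrations of what is meant (hypothetical names, not the
actual lines changed):

    #include <stddef.h>

    /* A string literal exactly filling the array is valid C (the
     * terminating NUL is simply dropped) but ill-formed in C++. */
    static const char tag[4] = "abcd";

    /* A vararray-style (VLA) prototype is fine in C99 but rejected by
     * strict C89 tools. */
    void hash_bytes(size_t n, const unsigned char data[n]);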
34b898d Additional comments for the testing PRNG and a seeding fix. (Gregory Maxwell)
6efd6e7 Some comments explaining some of the constants in the code. (Gregory Maxwell)
fcc48c4 Remove the non-storage cmov (Pieter Wuille)
55422b6 Switch ecmult_gen to use storage types (Pieter Wuille)
41f8455 Use group element storage type in EC multiplications (Pieter Wuille)
e68d720 Add group element storage type (Pieter Wuille)
ff889f7 Field storage type (Pieter Wuille)
This makes the software more portable to embedded systems
and static analysis tools.
Sadly, it can't result in identical binaries because C99 mixed
declarations seem to make GCC emit superfluous stack-pointer
updates. The compiler's output also depends somewhat on the
declaration order.
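The shape of the change, as an illustration rather than the actual
diff (the function names here are invented):

    /* C99 style: declarations mixed with statements. */
    int sum_c99(const int *v, int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += v[i];
        }
        return total;
    }

    /* C89 style: all declarations hoisted to the top of the block. */
    int sum_c89(const int *v, int n) {
        int total = 0;
        int i;
        for (i = 0; i < n; i++) {
            total += v[i];
        }
        return total;
    }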
7688e34 Add magnitude limits to secp256k1_fe_verify to ensure that its own tests function correctly. (Gregory Maxwell)
70ae0d2 Use secp256k1_fe_equal_var in secp256k1_fe_sqrt_var. (Gregory Maxwell)
In theory this should be faster, since secp256k1_fe_equal_var is able to
shortcut the normalization. On x86_64 the improvement appears to be in
the noise for me. At least it makes the code cleaner.
bbd5ba7 Use rfc6979 as default nonce generation function (Pieter Wuille)
b37fbc2 Implement SHA256 / HMAC-SHA256 / RFC6979. (Pieter Wuille)
c6e7f4e [API BREAK] Use a nonce-generation function instead of a nonce (Pieter Wuille)
b2c9681 Make {mul,sqr}_inner use the same argument order as {mul,sqr} (Pieter Wuille)
6793505 Convert YASM code into inline assembly (Pieter Wuille)
f048615 Rewrite field assembly to match the C version (Pieter Wuille)
- In secp256k1_gej_split_exp, two divisions are used. Since the denominator is a constant known at compile-time, each can be replaced by a multiplication followed by a right-shift (and rounding); see the sketch below.
- Add the constants g1, g2 for this purpose and rewrite secp256k1_scalar_split_lambda_var accordingly.
- Remove secp256k1_num_div since it is no longer used
Rebased-by: Pieter Wuille
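A toy version of the division trick (the actual change uses much wider
constants g1/g2 so that the rounded result is exact over the full
range required; the values and names below are only illustrative):

    #include <stdint.h>

    /* Divide by a fixed d = 10 by multiplying with m = round(2^32 / 10)
     * and shifting right by 32; adding 2^31 first rounds the result.
     * Exact for all 16-bit inputs. */
    static uint32_t div10_round(uint16_t n) {
        const uint64_t m = 429496730;  /* round(2^32 / 10) */
        return (uint32_t)(((uint64_t)n * m + (1ULL << 31)) >> 32);
    }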
4d4eeea Make secp256k1_fe_mul_inner use the r != b property (Pieter Wuille)
be82e92 Require that r and b are different for field multiplication. (Pieter Wuille)