
k256: Add non-biased/non-zero constructors to Scalar #432

Closed
wants to merge 1 commit into master from scalar-constructors

Conversation

fjarri
Contributor

@fjarri fjarri commented Sep 14, 2021

This is a "spitballing" PR, intended primarily for illustration of required API and discussion.

There are two things I'm missing from the current Scalar API:

  • an ability to generate random NonZeroScalars (without unwrapping)
  • an ability to generate a Scalar/NonZeroScalar from a digest safely according to hash-to-curve standard

The latter needs the scalar to be reduced from L = ceil((ceil(log2(p)) + k) / 8) bytes (Section 5.3), where p is the order, and k is the security parameter. For k = 256 this gives 64 bytes, for k = 128 it's 48 bytes. I went with 64 for simplicity.
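The length computation above can be sketched as follows (a hypothetical helper for illustration, not part of the PR):

```rust
// Hypothetical helper (not in the PR): computes the expansion length
// L = ceil((ceil(log2(p)) + k) / 8) from hash-to-curve Section 5.3,
// given the bit length of the order p and the security parameter k.
fn expansion_len(order_bits: u32, k: u32) -> u32 {
    (order_bits + k + 7) / 8 // ceiling division by 8
}

fn main() {
    // For secp256k1, ceil(log2(p)) = 256.
    assert_eq!(expansion_len(256, 256), 64); // k = 256 -> 64 bytes
    assert_eq!(expansion_len(256, 128), 48); // k = 128 -> 48 bytes
}
```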

The public methods added to Scalar:

  • pub fn random_nonzero(rng: impl RngCore) -> NonZeroScalar
  • pub fn from_digest_safe<D>(digest: D) -> Self where D: Digest<OutputSize = U64>
  • pub fn from_digest_safe_nonzero<D>(digest: D) -> NonZeroScalar where D: Digest<OutputSize = U64>

Now for the problems:

  • random() is a part of the Field trait. If NonZeroScalar implemented Field, the code from random_nonzero() could go there.
  • FromDigest requires OutputSize = FieldSize, so it can't be changed to 64 bytes output. The ways to deal with it include:
    • adding a SafeExpansionSize (or whatever) type and locking FromDigest to it
    • adding a FromDigestSafe trait
  • from_digest_safe_nonzero() can be an impl of FromDigest for NonZeroScalar (but again there's the size problem)
  • The added internal function Scalar::from_wide_bytes_reduced() assumes that WideScalar::reduce() can reduce any 512-bit value, not just values below order^2. I'm 99.9% certain that's true, but I can't present a formal proof right now. I did test it for 0xfff...fff, and it works.
  • WideScalar::reduce_nonzero() assumes that reduce() works just as well for the modulus decreased by one. See the comment above about being 99.9% sure.
  • WideScalar::reduce_nonzero() contains an unwrap(). Ideally I'd prefer to have something like NonZeroScalar::from_bytes_unchecked() to avoid it.
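The reduce_nonzero() trick (reduce modulo the order minus one, then add one) can be illustrated with machine integers standing in for WideScalar; this is a toy sketch, not the k256 code:

```rust
// Toy sketch of the reduce_nonzero idea: reducing modulo (n - 1) and
// then adding 1 always lands in [1, n - 1], so the result is non-zero
// by construction. u128 stands in for the 512-bit WideScalar.
fn reduce_nonzero_toy(wide: u128, n: u64) -> u64 {
    (wide % (n as u128 - 1)) as u64 + 1
}

fn main() {
    let n: u64 = 101; // toy "order"
    assert_eq!(reduce_nonzero_toy(0, n), 1);   // zero input maps to 1
    assert_eq!(reduce_nonzero_toy(100, n), 1); // wraps at n - 1
    assert!(reduce_nonzero_toy(u128::MAX, n) >= 1);
    assert!(reduce_nonzero_toy(u128::MAX, n) < n);
}
```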

@fjarri fjarri marked this pull request as draft September 14, 2021 06:09
@fjarri fjarri added the k256 secp256k1 crate label Sep 14, 2021
@codecov-commenter

codecov-commenter commented Sep 14, 2021

Codecov Report

Merging #432 (e97b09b) into master (df71290) will decrease coverage by 0.44%.
The diff coverage is 35.00%.


@@            Coverage Diff             @@
##           master     #432      +/-   ##
==========================================
- Coverage   64.70%   64.25%   -0.45%     
==========================================
  Files          28       28              
  Lines        3598     3654      +56     
==========================================
+ Hits         2328     2348      +20     
- Misses       1270     1306      +36     
Impacted Files Coverage Δ
k256/src/arithmetic/scalar/wide32.rs 0.00% <0.00%> (ø)
k256/src/arithmetic/scalar.rs 78.78% <40.00%> (-5.36%) ⬇️
k256/src/arithmetic/scalar/wide64.rs 93.67% <70.58%> (-2.11%) ⬇️


@fjarri fjarri force-pushed the scalar-constructors branch 4 times, most recently from ccaafc1 to 1e0f760 Compare September 14, 2021 06:23
@fjarri fjarri changed the title Add non-biased/non-zero constructors to Scalar k256: Add non-biased/non-zero constructors to Scalar Sep 14, 2021
@tarcieri
Member

Lots to respond to here, but perhaps I can start with this:

FromDigest requires OutputSize = FieldSize, so it can't be changed to 64 bytes output. The ways to deal with it include:

  • adding a SafeExpansionSize (or whatever) type and locking FromDigest to it
  • adding a FromDigestSafe trait

The current FromDigest trait is ecdsa::FromDigest and very much serves ECDSA-related concerns. I specifically moved it to the ecdsa crate to get it out of the way so we can define a new FromDigest trait in elliptic-curve.

It'd be great if we could define a better elliptic_curve::FromDigest trait to replace ecdsa::FromDigest, if we can design it in such a way that it can cover both ECDSA's usage and hash-to-curve.

Perhaps we could make the input size a generic parameter, which would permit impls for multiple sizes.

@tarcieri
Member

I looked at trying to add a new FromDigest trait to the elliptic-curve crate which is generic with respect to the digest size and can be impl'd for multiple sizes, and then it dawned on me that the result really had nothing specifically to do with elliptic curves:

pub trait FromDigest<OutputSize: ArrayLength<u8>> {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = OutputSize>;
}

Given that, perhaps the trait could be added to the digest crate instead /cc @newpavlov

@newpavlov
Member

newpavlov commented Sep 16, 2021

Shouldn't this use case already be covered by the crypto_common::InnerInit trait? You can write a blanket impl for scalars and hashes with matching sizes. The trait name is a bit unfortunate (it was chosen for consistency with other initialization traits), so I am open to suggestions.

@tarcieri
Member

tarcieri commented Sep 16, 2021

In this particular case, we would need to provide several impls on the same type for different digest sizes. For example, a secp256k1 scalar could be initialized from a digest with a 256-bit output using a narrow reduction, or from one with a 512-bit output using a wide reduction, and those two cases take separate codepaths.

@newpavlov
Member

Is this initialization process defined for a range of output sizes or only for those two values (e.g. would it work for SHA-1 or for SHA-384)? Do you track which hash function was used on the type level (i.e. do you get the same type when 256-bit and 512-bit hash functions were used)?

@tarcieri
Member

tarcieri commented Sep 16, 2021

I can't speak specifically to the hash-to-curve-point case, but in the case of scalars it's effectively two values:

  • narrow: bitlength equivalent to the modulus
  • wide: 2 * bitlength equivalent to the modulus

I think if you were to use something "in the middle" for a wide reduction, it would bias the output (it's already minutely biased even with a uniformly random input), but perhaps @fjarri can check me on that.
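The bias claim above can be checked by brute force with toy numbers (a small modulus standing in for the group order; this is an illustration, not the k256 arithmetic):

```rust
// Toy brute-force check of reduction bias: count how often each residue
// mod N appears over all B-bit inputs. With B close to the modulus
// width the counts are visibly skewed; widening the input shrinks the
// skew, but never eliminates it entirely.
fn residue_counts(bits: u32, n: u64) -> Vec<u64> {
    let mut counts = vec![0u64; n as usize];
    for x in 0..(1u64 << bits) {
        counts[(x % n) as usize] += 1;
    }
    counts
}

fn main() {
    let narrow = residue_counts(4, 5); // 16 inputs mod 5
    let wide = residue_counts(8, 5);   // 256 inputs mod 5
    assert_eq!(narrow, vec![4, 3, 3, 3, 3]); // residue 0 over-represented: 4 vs 3
    // Wide reduction: still minutely biased, but the spread is 1 in ~51.
    assert_eq!(wide[0], 52);
    assert_eq!(*wide.iter().max().unwrap() - *wide.iter().min().unwrap(), 1);
}
```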

> Do you track which hash function was used on the type level

The ecdsa crate needs to do this: the default implementation uses the digest function applied to the message to instantiate HMAC for RFC 6979.

@newpavlov
Member

I am not sure why InnerInit cannot be used here:

struct Foo<D>
where D: Digest, // also constrain D::OutputSize % Self::ScalarSize == 0
{ /* ... */ }

impl<D> InnerUser for Foo<D>
where D: Digest, // same bounds
{ type Inner = D; }

impl<D> InnerInit for Foo<D>
where D: Digest, // same bounds
{
    fn inner_init(digest: D) -> Self { /* ... */ }
}

Am I missing something?

@tarcieri
Member

tarcieri commented Sep 16, 2021

Can you write a full translation of something like this?

pub trait FromDigest<OutputSize: ArrayLength<u8>> {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = OutputSize>;
}

pub struct Scalar([u64; 4]);

impl Scalar {
    pub fn from_bytes_mod_order(bytes: &[u8; 32]) -> Self {
        ...
    }

    pub fn from_bytes_mod_order_wide(bytes: &[u8; 64]) -> Self {
        ...
    }
}

impl FromDigest<U32> for Scalar {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = U32>
    {
        Self::from_bytes_mod_order(&digest.finalize().into())
    }
}

impl FromDigest<U64> for Scalar {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = U64>
    {
        Self::from_bytes_mod_order_wide(&digest.finalize().into())
    }
}

The real issue is the overlapping impls, which you can't have without a generic parameter.
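The conflict can be seen with toy marker types standing in for typenum's sizes: without the parameter, the two impls below would be rejected as overlapping; with it, they coexist (a sketch with stand-in types, not the real crate API):

```rust
// Toy marker types standing in for typenum::U32 / typenum::U64.
struct U32;
struct U64;

// Parameterizing the trait over the size makes FromDigest<U32> and
// FromDigest<U64> distinct traits, so both impls are allowed. A plain
// `trait FromDigest` would reject the second impl as overlapping.
trait FromDigest<Size> {
    fn from_digest() -> Self;
}

struct Scalar(u32);

impl FromDigest<U32> for Scalar {
    fn from_digest() -> Self { Scalar(32) } // stand-in for the narrow reduction
}

impl FromDigest<U64> for Scalar {
    fn from_digest() -> Self { Scalar(64) } // stand-in for the wide reduction
}

fn main() {
    assert_eq!(<Scalar as FromDigest<U32>>::from_digest().0, 32);
    assert_eq!(<Scalar as FromDigest<U64>>::from_digest().0, 64);
}
```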

@newpavlov
Member

Here is my take on it: https://play.rust-lang.org/?gist=6547bf4c636d51196326fd26aa4bbdee

The idea is that the trait should be implemented by a type which tracks the hash function used. It doesn't necessarily have to be the scalar itself; it could be a wrapper around it as well. The trait bounds are somewhat annoying, but that should eventually get better with panicking const fns in where bounds.

@tarcieri
Member

tarcieri commented Sep 16, 2021

It might be good to split this off into a separate issue, possibly RustCrypto/traits#481

But I don't get how that works @newpavlov, especially this:

>  D::OutputSize: IsEqual<U32> + IsEqual<U64>,

It looks like that can only be satisfied by a number that is simultaneously equal to U32 and U64? It seems like there are no solutions to that.

But then there's a further problem: in the context of ECDSA, I need to bound on support for a specific digest size:

pub struct SigningKey<C>
where
    C: PrimeCurve + ProjectiveArithmetic,
    Scalar<C>: FromDigest<FieldSize<C>> + Invert<Output = Scalar<C>> + SignPrimitive<C>,
    SignatureSize<C>: ArrayLength<u8>,
{
    inner: NonZeroScalar<C>,
}

ECDSA specifically relies on a narrow reduction, so I only want to use digests with an output size equal to the scalar field's modulus.

@newpavlov
Member

> It might be good to split this off into a separate issue

Yeah, I've answered you here: RustCrypto/traits#481 (comment)

@tarcieri
Member

tarcieri commented Sep 17, 2021

Sidestepping the issue of digests for the moment, I've opened a PR which adds a Reduce trait for these sorts of hashing-to-a-field-element constructors:

RustCrypto/traits#768

It's generic around a crypto_bigint::Integer, allowing overlapping impls for multiple integer sizes, so it can be impl'd twice for a "narrow" and a "wide" reduction.

There are also provided methods for reducing from a big-endian or little-endian byte array.

The bounds for the ECDSA use case would look something like:

Scalar<C>: Reduce<C::Uint>

...to select a narrow reduction.

(Edit: actually, in that PR I bounded ScalarArithmetic::Scalar on Reduce<C::Uint> already, so no additional bounds would be needed)

NonZeroScalar is defined in the elliptic-curve crate. We can add impls for it there, but they would have to be generic over curves, or use an additional "thunk" trait.
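The shape described above can be sketched roughly as follows, with machine integers standing in for crypto-bigint types (the trait and method names are inferred from later comments in this thread; the merged version may differ):

```rust
// Sketch of a `Reduce` trait generic over the integer size, so one
// scalar type can provide both a narrow and a wide reduction without
// overlapping impls. Toy modulus; not the real k256 arithmetic.
trait Reduce<Uint>: Sized {
    fn from_uint_reduced(n: Uint) -> Self;
}

struct ToyScalar(u64);
const ORDER: u64 = 101; // toy "curve order"

impl Reduce<u64> for ToyScalar {
    // "Narrow" reduction: input the same width as the order.
    fn from_uint_reduced(n: u64) -> Self {
        ToyScalar(n % ORDER)
    }
}

impl Reduce<u128> for ToyScalar {
    // "Wide" reduction: input twice the width of the order.
    fn from_uint_reduced(n: u128) -> Self {
        ToyScalar((n % ORDER as u128) as u64)
    }
}

fn main() {
    // The argument type selects the narrow or the wide codepath.
    assert_eq!(ToyScalar::from_uint_reduced(ORDER).0, 0);
    assert!(ToyScalar::from_uint_reduced(u128::MAX).0 < ORDER);
}
```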

@tarcieri
Member

#436 adds "narrow" impls of the Reduce trait to k256 and p256

@tarcieri
Member

tarcieri commented Dec 3, 2021

Circling back on this, I think this is the current state of things:

> an ability to generate random NonZeroScalars (without unwrapping)

  • Use NonZeroScalar::random

> an ability to generate a Scalar/NonZeroScalar from a digest safely according to hash-to-curve standard

A couple of things are still missing though:

  • Integration with digest (possibly via a digest feature on the elliptic-curve crate which adds methods to Reduce)
  • Support for NonZeroScalar

It seems like supporting this might require changes to the Reduce trait, or possibly supplementing it with another trait (e.g. ReduceNonZero/NonZeroReduce), in order to propagate the modulus_minus_one flag passed to the reduction.

If that existed, then it would be possible to impl Reduce for NonZeroScalar where the inner Scalar type impls the Reduce-with-modulus-minus-one flag or ReduceNonZero.
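One hypothetical arrangement matching that description: a ReduceNonZero companion trait, plus a blanket impl that gives a non-zero wrapper Reduce whenever the inner scalar impls ReduceNonZero (toy types and names for illustration, not the merged API):

```rust
// Sketch: a non-zero-guaranteeing companion to `Reduce`, plus a blanket
// bridge so a NonZero wrapper gets `Reduce` for free. Hypothetical
// names; toy arithmetic in place of the real scalar types.
trait Reduce<Uint>: Sized {
    fn from_uint_reduced(n: Uint) -> Self;
}

trait ReduceNonZero<Uint>: Sized {
    // Reduce modulo (order - 1), then add 1: the result is never zero.
    fn from_uint_reduced_nonzero(n: Uint) -> Self;
}

struct NonZero<T>(T);

impl<T, Uint> Reduce<Uint> for NonZero<T>
where
    T: ReduceNonZero<Uint>,
{
    fn from_uint_reduced(n: Uint) -> Self {
        NonZero(T::from_uint_reduced_nonzero(n))
    }
}

struct ToyScalar(u64);
const ORDER: u64 = 101; // toy "curve order"

impl ReduceNonZero<u64> for ToyScalar {
    fn from_uint_reduced_nonzero(n: u64) -> Self {
        ToyScalar(n % (ORDER - 1) + 1)
    }
}

fn main() {
    let s: NonZero<ToyScalar> = Reduce::from_uint_reduced(0u64);
    assert_eq!((s.0).0, 1); // even a zero input yields a non-zero scalar
}
```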

tarcieri added a commit to RustCrypto/traits that referenced this pull request Dec 3, 2021
Adds a trait similar to `Reduce`, but where the output of the reduction
is ensured to be non-zero.

Also impls `Reduce` and `ReduceNonZero` for `NonZeroScalar`. This means
that end users need only concern themselves with `Reduce` as they can
use `NonZeroScalar::<C>::from_uint_reduced` instead of the more
cumbersome `Scalar::<C>::from_uint_reduced_non_zero`.

Related: RustCrypto/elliptic-curves#432
@tarcieri
Member

tarcieri commented Dec 3, 2021

Proposed traits for non-zero reductions here: RustCrypto/traits#827

tarcieri added a commit that referenced this pull request Dec 3, 2021
Provides an impl of the `ReduceNonZero` trait added in
#827, which provides a reduction from a
512-bit (64-byte) input, i.e. a "wide" reduction from an integer twice
the size of the curve's order, to a `Scalar` which is guaranteed to be
non-zero.

Based on @fjarri's work in #432
Co-authored-by: Bogdan Opanchuk <bogdan@opanchuk.net>
@tarcieri
Member

tarcieri commented Dec 3, 2021

Went ahead and addressed what I believe are the remaining concerns, adapting this PR into #474, which is now merged.

@tarcieri tarcieri closed this Dec 3, 2021
@fjarri fjarri deleted the scalar-constructors branch August 3, 2022 04:14