Discussion:
[curves] Finalizing XEdDSA
Trevor Perrin
2016-10-31 21:12:14 UTC
https://whispersystems.org/docs/specifications/xeddsa/


Thanks for feedback everyone,

I plan to make the following tweaks, then freeze the design (at least
for 25519):

(1) Check that all integers in signatures are fully reduced (s<q,
h<q, R.y<p, V.y<p). This prevents "signature malleability", which
could be an issue for badly-designed protocols [1]. (A Python sketch
of these checks follows below.)

(2) Replace hash_i(a || ... || Z) with hash_i(a || Z || pad || ...)
for reasons here [2] - mainly a bit more sidechannel resistance, and
slightly cleaner use of the hash.

(The 448 design would be easier to change later. We can keep adding
explanatory text.)
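For concreteness, here's a minimal Python sketch of the checks in (1),
assuming the 25519 constants and 32-byte little-endian encodings with a
point's sign bit in the top bit (helper names are illustrative, not from
the spec):

    p = 2**255 - 19
    q = 2**252 + 27742317777372353535851937790883648493

    def scalar_is_reduced(b32):          # for s and h
        return int.from_bytes(b32, 'little') < q

    def point_y_is_reduced(b32):         # for R.y and V.y
        y = int.from_bytes(b32, 'little') & ((1 << 255) - 1)  # mask sign bit
        return y < p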


One last tweak to consider is clearing the cofactor in verification.
Currently XEdDSA does "cofactorless verification", i.e. it takes a
signature (R, s) and checks R == sB - hA. We could change it to cR ==
c(sB - hA). VXEdDSA would be unchanged.

This has no effect on valid signatures, but adding the cofactor
multiplication means signers could create signatures with a few
different values of R for the same s (which has no security relevance,
I think, and does not cause "malleability" because the signer's choice
of R is included in the hash).
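To make the difference concrete, here's a toy Python model in the
additive group Z/(8*q), standing in for the curve group (q is small here,
and all the names are illustrative): the order-q subgroup is the
multiples of 8, and adding a small-order component to R fails the
cofactorless check but passes the cofactored one.

    q, c = 11, 8           # toy subgroup order and cofactor
    n = c * q              # toy "curve order"
    B = 8                  # "base point": generates the order-q subgroup
    a = 5                  # secret scalar
    A = (a * B) % n        # "public key"
    r, h = 3, 7            # nonce and (recomputed) challenge
    s = (r + h * a) % q    # signature scalar
    R = (r * B) % n        # honest commitment R = rB
    print(R == (s*B - h*A) % n)                   # True: both checks accept
    R2 = (R + q) % n       # R plus a component of order dividing c
    print(R2 == (s*B - h*A) % n)                  # False: cofactorless rejects
    print((c*R2) % n == (c*(s*B - h*A)) % n)      # True: cofactored accepts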

Advantages to current "cofactorless" approach:
- matches existing code (e.g. ref10, libsodium)
- less code, doesn't need a "point comparison" function (can encode,
then compare)
- less computation (by a tiny amount, 1% or something)

Advantages to changing to "cofactor" approach:
- Allows batch verification of signatures (I'm told), which can give ~2x speedup
- Preferred approach in Ed25519 paper, "EdDSA for more curves" paper,
and CFRG draft

I don't consider batch verification that compelling, given existing
code, but if that's the direction new implementations are going, maybe
this should align.

Other opinions? Any other last-minute tweaks?

Trevor

[1] https://moderncrypto.org/mail-archive/curves/2016/000764.html
[2] https://moderncrypto.org/mail-archive/curves/2016/000773.html
Peter Schwabe
2016-11-01 14:14:28 UTC
Trevor Perrin <***@trevp.net> wrote:

Dear Trevor,

> One last tweak to consider is clearing the cofactor in verification.
> Currently XEdDSA does "cofactorless verification", i.e. it takes a
> signature (R, s) and checks R == sB - hA. We could change it to cR ==
> c(sB - hA). VXEdDSA would be unchanged.
>
> This has no effect on valid signatures, but adding the cofactor
> multiplication means signers could create signatures with a few
> different values of R for the same s (which has no security relevance,
> I think, and does not cause "malleability" because the signer's choice
> of R is included in the hash).
>
> Advantages to current "cofactorless" approach:
> - matches existing code (e.g. ref10, libsodium)
> - less code, doesn't need a "point comparison" function (can encode,
> then compare)
> - less computation (by a tiny amount, 1% or something)
>
> Advantages to changing to "cofactor" approach:
> - Allows batch verification of signatures (I'm told), which can give ~2x speedup
> - Preferred approach in Ed25519 paper, "EdDSA for more curves" paper,
> and CFRG draft

The Ed25519 paper says

"The verifier is /permitted/ to check this stronger equation and
to reject alleged signatures where the stronger equation does not
hold. However, this is not /required/; checking that
8SB=8R+8H(\encode{R},\encode{A},M)A is enough for security."


You could decide to do the same; allowing both for verification in the
specification and leaving the choice to the implementation. If I
understand correctly, this gives you the advantages of both approaches,
right?


Best regards,

Peter
Ron Garret
2016-11-01 16:04:15 UTC
On Nov 1, 2016, at 7:14 AM, Peter Schwabe <***@cryptojedi.org> wrote:

> [...]
>
> The Ed25519 paper says
>
> "The verifier is /permitted/ to check this stronger equation and
> to reject alleged signatures where the stronger equation does not
> hold. However, this is not /required/; checking that
> 8SB=8R+8H(\encode{R},\encode{A},M)A is enough for security."
>
>
> You could decide to do the same; allowing both for verification in the
> specification and leaving the choice to the implementation. If I
> understand correctly, this gives you the advantages of both approaches,
> right?

Possibly naive question: What is “this stronger equation” that the paper refers to? Because the immediately preceding equation is:

SB = rB + H(\encode{R},\encode{A},M)aB = R + H(\encode{R},\encode{A},M)A

which is actually two equations. The first:

SB = rB + H(\encode{R},\encode{A},M)aB

relies on the secret key a, so that does not seem particularly useful. But the second equation:

SB = R + H(\encode{R},\encode{A},M)A

Is (AFAICT) identical to the earlier “weaker” equation:

8SB = 8R+8H(\encode{R},\encode{A},M)A

except for factoring out the 8’s. (Where did those 8’s come from anyway? They seem completely unmotivated.)

What am I missing?

rg
Mike Hamburg
2016-11-01 17:30:36 UTC
The 8s are the cofactor of Ed25519. The idea is that you may check the equation's projection on the order-q subgroup, instead of in the whole group. Depending on implementation, this may make things easier. For example, libdecaf projects everything into the order-q subgroup.

This is not supposed to make a difference because everything is supposed to live in the order-q subgroup. But if someone generates their key or nonce wrong, then it makes a difference.

The problem with giving implementers the option is that it could allow fingerprinting or forking attacks. I'd rather see a hard requirement one way or the other.

Sent from my phone. Please excuse brevity and typos.

> On Nov 1, 2016, at 09:04, Ron Garret <***@flownet.com> wrote:
>
> [...]
>
> Possibly naive question: What is “this stronger equation” that the paper refers to? Because the immediately preceding equation is:
>
> SB = rB + H(\encode{R},\encode{A},M)aB = R + H(\encode{R},\encode{A},M)A
>
> which is actually two equations. The first:
>
> SB = rB + H(\encode{R},\encode{A},M)aB
>
> relies on the secret key a, so that does not seem particularly useful. But the second equation:
>
> SB = R + H(\encode{R},\encode{A},M)A
>
> Is (AFAICT) identical to the earlier “weaker” equation:
>
> 8SB = 8R+8H(\encode{R},\encode{A},M)A
>
> except for factoring out the 8’s. (Where did those 8’s come from anyway? They seem completely unmotivated.)
>
> What am I missing?
>
> rg
Ron Garret
2016-11-01 19:20:16 UTC
OK, that makes sense. Thanks! (And thanks to Trevor for giving me the answer off-list.)

So let me hard-fork this thread and ask a followup meta-question: The fact that 8 was the cofactor of the curve is apparently something most (if not all) people on this list already knew. But how? Neither the Ed25519 paper nor the Curve25519 paper mentions it (AFAICT).

In trying to back-solve the answer to this question myself I was led to RFC7748 Appendix A, which was very enlightening, but contained this (to me) cryptic comment:

"For primes congruent to 1 mod 4, the minimal cofactors of the curve and its twist are either {4, 8} or {8, 4}.”

Where the heck did *that* come from? Digging through the references, I happened to stumble upon this:

http://www.hpl.hp.com/techreports/97/HPL-97-128.pdf

which seems like it’s the answer to that particular question. But even this (apparently) elementary fact seems to be almost deliberately obscured in the literature. Even https://safecurves.cr.yp.to doesn’t mention it, which is another indication that all this is just taken to be common knowledge.

Another data point: I asked a similar question to this once before and was pointed to the Handbook of Elliptic and Hyperelliptic Curve Cryptography by Cohen & Frey et al. The word “cofactor” appears in that 800-page tome on 22 pages. The first appearance of the word is this passage on page 218:

"If the cofactor N/p is sufficiently small, the computations can be done modulo N without any addi- tional cost since multiplications are usually performed at a word level.”

i.e. the word is introduced in an offhand way and defined only in passing, as if the reader is expected to already know what a cofactor is and why it’s important.

BTW: Smart’s result came as a complete shock to me. Every intro to ECC on the web says something isomorphic to “ECC is secure because DLP over elliptic curves is hard.” But if I’m understanding this correctly, that statement is actually not true at all. The correct statement is (AFAICT) something more like, “DLP over elliptic curves with cofactor >1 is hard, otherwise it’s easy.”

Bottom line: there seems to be a huge disconnect between the importance of cofactors on the one hand and the information available about them on the other. On the one hand, cofactors seem to be so important that all this stuff that I just learned about seems to be common knowledge, and on the other hand, even now that I know it I *still* can’t figure out how I could have learned it other than asking this apparently stupid question.

So my meta-question is: *was* this a stupid question? Did everyone really already know this except me? And if so, how?

On Nov 1, 2016, at 10:30 AM, Mike Hamburg <***@shiftleft.org> wrote:

> The 8s are the cofactor of Ed25519. The idea is that you may check the equation's projection on the order-q subgroup, instead of in the whole group. Depending on implementation, this may make things easier. For example, libdecaf projects everything into the order-q subgroup.
>
> This is not supposed to make a difference because everything is supposed to live in the order-q subgroup. But if someone generates their key or nonce wrong, then it makes a difference.
>
> The problem with giving implementers the option is that it could allow fingerprinting or forking attacks. I'd rather see a hard requirement one way or the other.
>
> [...]
Trevor Perrin
2016-11-01 20:35:49 UTC
Hi Ron,

Here's a few references that discuss cofactors in signature verification:

https://ed25519.cr.yp.to/eddsa-20150704.pdf (cofactor = 2^c)
https://cr.yp.to/badbatch/badbatch-20120919.pdf

"Costs of cofactor > 1"
https://moderncrypto.org/mail-archive/curves/2014/

Trevor


On Tue, Nov 1, 2016 at 12:20 PM, Ron Garret <***@flownet.com> wrote:
>
> So let me hard-fork this thread and ask a followup meta-question: The fact that 8 was the cofactor of the curve is apparently something most (if not all) people on this list already knew. But how? Neither the Ed25519 paper nor the Curve25519 paper mentions it (AFAICT).


Ron Garret
2016-11-01 21:09:11 UTC
Thanks Trevor, but I think I may not have made myself clear. I’m not really asking about cofactors per se here. I’m just using them as an illustrative example. What I’m really asking about is whether I’m asking questions that are appropriate for the Curves list, or if I need to go back and do some more homework and if so, what that homework is. The fact that you originally answered me off-list seemed to indicate that the latter might be the case. A lot of the discussion here is still over my head, and I really don’t want to wear out my welcome by asking too many stupid questions.

Notwithstanding the above, thanks for these pointers! They look like very interesting reading.

On Nov 1, 2016, at 1:35 PM, Trevor Perrin <***@trevp.net> wrote:

> Hi Ron,
>
> Here's a few references that discuss cofactors in signature verification:
>
> https://ed25519.cr.yp.to/eddsa-20150704.pdf (cofactor = 2^c)
> https://cr.yp.to/badbatch/badbatch-20120919.pdf
>
> "Costs of cofactor > 1"
> https://moderncrypto.org/mail-archive/curves/2014/
>
> Trevor
>
> [...]
Trevor Perrin
2016-11-01 21:40:07 UTC
Nothing wrong with asking about complex topics. Whether people have
time to answer is another question. With respect to cofactor, it's a
concept that's explained in typical elliptic curve crypto texts and
standards.

It would be great if there were better surveys on modern ECC and
engineering issues. If someone wanted to suggest a reading list /
bibliography that would be a nice contribution (but also a bunch of
work).

Trevor


On Tue, Nov 1, 2016 at 2:09 PM, Ron Garret <***@flownet.com> wrote:
> Thanks Trevor, but I think I may not have made myself clear. I’m not really asking about cofactors per se here. I’m just using them as an illustrative example. What I’m really asking about is whether I’m asking questions that are appropriate for the Curves list, or if I need to go back and do some more homework and if so, what that homework is. The fact that you originally answered me off-list seemed to indicate that the latter might be the case. A lot of the discussion here is still over my head, and I really don’t want to wear out my welcome by asking too many stupid questions.
>
> Notwithstanding the above, thanks for these pointers! They look like very interesting reading.
>
> [...]
Ron Garret
2016-11-03 19:30:49 UTC
On Nov 1, 2016, at 2:40 PM, Trevor Perrin <***@trevp.net> wrote:

> It would be great if there were better surveys on modern ECC and
> engineering issues. If someone wanted to suggest a reading list /
> bibliography that would be a nice contribution (but also a bunch of
> work).

I decided it would be a useful exercise for me to undertake to write such a survey (even if I couldn’t actually finish it), and right away I ran into a snag. I was trying to reconcile all the different forms of elliptic curve formulas commonly found in the literature, and found the following promising-looking lead on mathworld:

http://mathworld.wolfram.com/EllipticCurve.html

Ax^3 + Bx^2y + Cxy^2 + Dy^3 + Ex^2 + Fxy + Gy^2 + Hx + Iy + J = 0

This is consistent (AFAICT) with the definition given in section 4.4.2.a of Cohen and Frey. But then there are Edwards curves, which have a x^2y^2 term in them. How do those fit in?

In fact, as I started thinking about this I realized that Edwards curves are really weird because they’re quartic and not cubic (aren’t they?) and all elliptic curves are supposed to be cubic (aren’t they?) How can a fourth-order polynomial be birationally equivalent to a third-order polynomial?

I tried taking a look at some of the proofs that Edwards curves are birationally equivalent to Montgomery curves but they went way over my head. Is there a more elementary way of understanding this?

Thanks,
rg
Ron Garret
2016-11-03 21:07:42 UTC
[cc’ing the list at Andrew’s suggestion]

Thanks! That is exactly the kind of explanation I was looking for. (Thanks also to Robert Ransom who also responded off-list.)

On Nov 3, 2016, at 1:54 PM, Andrew Egbert <***@gmail.com> wrote:

> Ah- must have unsubscribed or something (feel free to post this to the list). I can try to explain intuitively whats happening, and why the degree of the polynomial decreases.
> Imagine you have a curve of some sort in 2-dimensions, this will be an equation with x, y (two variables). Now imagine you look at the curve in three dimensions.
> If it really is still a one-dimensional object, it will need to have 3 variables (otherwise it will be a surface if ‘z’ is not specified).
>
> Resolving singularities of curves is often (not always) a similar process. Imagine you have a curve with a ‘cusp’ which is sort of like a sharp ‘singular’ point.
> (You can google image search plane curve cusp to get an idea)
> Now imagine that instead of a sharp point, you are actually looking at a place where the curve is going ‘downwards’ in a third dimension (so in fact it is a smooth curve).
> This is sort of what’s happening.
> Best,
> Andrew
>
>> On Nov 3, 2016, at 1:48 PM, Ron Garret <***@flownet.com> wrote:
>>
>> Not sure what “bad response” you’re referring to here because this is the only message I’ve received from you. I took a look at page 1, and I do understand the change of variables that transforms curve25519 into Ed25519 and vice-versa. It’s the more general case that I don’t yet fully understand.
>>
>> I have a working theory though: because the transformation involves a change of variables, the letters X and Y have completely different semantics in the Edwards formula than in the other forms.
>>
>> On Nov 3, 2016, at 1:36 PM, Andrew Egbert <***@gmail.com> wrote:
>>
>>> Sorry that was a bad response, since I missed the last sentence of your post- I’ve written out the transformation on page 1 of my thesis here: https://divisibility.files.wordpress.com/2016/02/thesismarch18.pdf (also available at my github)
>>>> [...]
AA EE
2016-11-04 03:12:28 UTC
(Hopefully I've managed to re-subscribe myself; too many e-mail addresses.) Without equations, maybe it's not quite clear why the degree 'drops' in that intuition (and the typical Edwards curves can be hard to picture in your head). Try the curve y^2 = x^3. It is singular, as you can see by computing the Jacobian (∂f/∂y, ∂f/∂x) at (0,0). Apply the substitution y = x*t (so increasing the number of variables, as we look in a new direction). This will give you x^2*t^2 = x^3, i.e. x^2*(t^2 - x) = 0: the new curve t^2 = x is a smooth parabola (and a degree lower), and the x^2 factor represents the singular point, expanded into a double line, which we ignore. Unfortunately the references I can think of all seem to start from a heavy commutative algebra perspective rather than crypto, and might be sort of off-putting...
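The substitution is easy to check mechanically with sympy, if you want to
see the double line drop out:

    from sympy import symbols, factor

    x, t = symbols('x t')
    # substitute y = x*t into y^2 - x^3 and factor:
    print(factor((x*t)**2 - x**3))   # -> x**2*(t**2 - x)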

Ben Smith
2016-11-07 08:51:36 UTC
Hi Ron, everyone,

Here's a rather longish explanation that might be helpful (I hope).
It's sort of a geometric complement to Mike's reply on curve shapes.
It should really be a link to a blog post, I suppose---but in the
absence of a blog, I'm posting it here.

What I'm aiming to do here is
* Connect the Edwards equation with a Weierstrass equation (actually a
Montgomery curve);
* Show how the usual magic birational map appears in a more natural way;
* Resolve Ron's apparent degree-3-vs-degree-4 incompatibility; and
* Explain how we can ignore the whole resolution-of-singularities
issue by simply never having singularities in the first place.

(If the geometric language goes over your head, don't worry; there
will be variables and equations the whole time to show what I mean.
"Fake it til you make it", etc...)

2016-11-03 20:30 GMT+01:00 Ron Garret <***@flownet.com>:
> I decided it would be a useful exercise for me to undertake to write such a survey (even if I couldn’t actually finish it), and right away I ran into a snag. I was trying to reconcile all the different forms of elliptic curve formulas commonly found in the literature, and found the following promising-looking lead on mathworld:
>
> http://mathworld.wolfram.com/EllipticCurve.html
>
> Ax^3 + Bx^2y + Cxy^2 + Dy^3 + Ex^2 + Fxy + Gy^2 + Hx + Iy + J = 0
>
> This is consistent (AFAICT) with the definition given in section 4.4.2.a of Cohen and Frey. But then there are Edwards curves, which have a x^2y^2 term in them. How do those fit in?
>
> In fact, as I started thinking about this I realized that Edwards curves are really weird because they’re quartic and not cubic (aren’t they?) and all elliptic curves are supposed to be cubic (aren’t they?) How can a fourth-order polynomial be birationally equivalent to a third-order polynomial?
>
> I tried taking a look at some of the proofs that Edwards curves are birationally equivalent to Montgomery curves but they went way over my head. Is there a more elementary way of understanding this?

Degree is a surprisingly subtle concept. The tricky thing is that
degree is not actually well-defined for curves in affine
coordinates---it only makes sense for projective curves [note 0]. So
we have to be careful about saying that x^3 is a degree-3 term and
x^2y^2 is a degree-4 term, because they're affine, and hence their
"degree" isn't well-defined.

Let's look at two elliptic curves defined by affine plane equations.
First, the Montgomery curve
M: B*y^2 = x*(x^2 + A*x + 1) ,
which fits neatly into your cubic equation definition above; and
second, the twisted Edwards curve
E: a*u^2 + v^2 = 1 + d*u^2*v^2 ,
which doesn't. Assuming we didn't make any pathological choices for
the parameters A, B, a, and d, these curves are both nonsingular *in
the affine plane*. This is as good a place as any to note that these
elliptic curves have some distinguished points:
* M has its identity point "at infinity" (hence missing here), and a
special 2-torsion point (x,y) = (0,0);
* E has the identity point O = (0,1), and a special 2-torsion point P = (0,-1).

I want to relate these two curve shapes to each other, while keeping
things as nonsingular (and concrete) as possible.

As you noted, it *appears* that M has degree 3, while E has degree 4.
That's because we've got a habit of implicitly taking the projective
closure in PP^2, by homogenizing with respect to a new variable. If
we take (X:Y:Z) as coordinates on the projective plane PP^2, then we
can put x = X/Z and y = Y/Z and clear denominators to get the
projective plane model
M: B*Y^2*Z = X*(X^2 + A*X*Z + Z^2) [in PP^2]
for M. This equation is homogeneous of total degree 3 in X,Y,Z, so we
can legitimately say that in this projective form, M has degree 3.

But what happens if we do the same thing for E? If we put u = X/Z and
v = Y/Z, then we get the projective plane model
E: a*X^2*Z^2 + Y^2*Z^2 = Z^4 + d*X^2*Y^2 [in PP^2],
which has degree 4. If you apply the usual genus formula (genus =
(deg-1)(deg-2)/2), then M has genus 1 (ok), but E seems to have genus
3, which is all wrong [note 1]. That's because this model of E is
singular: there are two singularities, at (X:Y:Z) = (1:0:0) and
(0:1:0).

We're so used to taking projective closures in PP^2 this way that we
tend to call it "the" projective closure, as if it were unique; but
it's not. And if you start with a smooth affine curve like E,
complete it, and get something singular, then you should immediately
get a feeling that you completed it the wrong way---because you just
tried to squish a projective curve into a space where it wouldn't fit,
and some of its points got mashed together as a result...

...So let's try again. The natural projective closure for E is *not*
in PP^2, but rather in PP^1 x PP^1, the product of two projective
lines [note 2]. Bear with me for a minute: let (U:S) be coordinates
on one projective line, (V:T) coordinates on another; then
((U:S),(V:T)) is a coordinate system on PP^1xPP^1. Curves in
PP^1xPP^1 are defined by single equations in U,S,V,T that are
homogeneous with respect to U,S, and simultaneously (but separately)
homogeneous with respect to V,T. Since the homogeneity for each pair
of coordinates is separate, we have two separate degrees (one with
respect to (U:S), and one with respect to (V:T)), which we package
together as a "bidegree".

Coming back to E, if we put u = U/S, v = V/T, and clear denominators,
then we get a new projective model
E: a*U^2*T^2 + S^2*V^2 = S^2*T^2 + d*U^2*V^2 [in PP^1xPP^1].
The equation is homogeneous of degree 2 with respect to (U:S), and
homogeneous of degree 2 with respect to (V:T); so E has bidegree
(2,2). The identity point O = (0,1) on E has product-projective
coordinates ((0:1),(1:1)), and the special 2-torsion point P = (0,-1)
has product-projective coordinates ((0:1),(-1:1)).
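A quick sympy check of this clearing-of-denominators step, for anyone
following along at home:

    from sympy import symbols, together, fraction, expand

    a, d, U, S, V, T = symbols('a d U S V T')
    E_affine = a*(U/S)**2 + (V/T)**2 - 1 - d*(U/S)**2*(V/T)**2
    num, den = fraction(together(E_affine))
    print(expand(num))   # the bidegree-(2,2) equation, up to term order:
                         #   a*U**2*T**2 + S**2*V**2 - S**2*T**2 - d*U**2*V**2
    print(den)           # -> S**2*T**2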

What's nice about this version of E is that it's nonsingular, so we
know we're on the right track. Still, this has come at a cost: we're
now in a funny product projective space, and bidegree (2,2) in
PP^1xPP^1 is no closer to being degree 3 in PP^2...

So our next step is to move from a product projective space into a
"flat" projective space, without squashing anything. 2-dimensional
space didn't work, so let's go to 3-dimensional space: let
(W_0:W_1:W_2:W_3) be coordinates on PP^3. We can embed PP^1xPP^1 in
PP^3 by mapping ((U:S),(V:T)) to (W_0:W_1:W_2:W_3) =
(U*V:U*T:S*V:S*T). The result is the quadric surface Q in PP^3
defined by W_0*W_3 = W_1*W_2. The image of E under this embedding is
the curve in Q defined by
E: a*W_1^2 + W_2^2 = W_3^2 + d*W_0^2 , W_0*W_3 = W_1*W_2 [in PP^3]
(if you evaluate these two defining equations in (W_0:W_1:W_2:W_3) =
(U*V:U*T:S*V:S*T), then the first will become our product-projective
equation for E, and the second will become 0). This is only a linear
change of variables away from being in Jacobi intersection form (see
Mike's post). The identity point is now O = (0:0:1:1), while the
special 2-torsion point is now P = (0:0:-1:1).
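Again, a short sympy check that evaluating the PP^3 equations at the
Segre substitution recovers the product-projective equation and lands on
the quadric Q:

    from sympy import symbols, expand

    a, d, U, S, V, T = symbols('a d U S V T')
    W0, W1, W2, W3 = U*V, U*T, S*V, S*T
    print(expand(a*W1**2 + W2**2 - W3**2 - d*W0**2))  # the PP^1xPP^1 equation
    print(expand(W0*W3 - W1*W2))                      # -> 0: the image is on Q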

We've simultaneously done two things here: we've recovered the
classical Jacobi model of an elliptic curve as an intersection of two
quadrics in 3-space, and---probably more interesting to this
list---we've recovered Huseyin Hisil's "extended twisted Edwards
coordinates" (from Hisil-Wong-Carter-Dawson's "twisted Edwards curves
revisited" paper). Still, while extended coordinates are fast,
they're not the object of this exercise...

Let's continue. The intersection of two degree-2 surfaces in PP^3 is
a degree-4 space curve, so we've gone from smooth bidegree (2,2) in
PP^1xPP^1 to smooth degree 4 in PP^3. We want to drop a dimension to
end up in PP^2, and the simplest way to do that is a linear mapping:
that is, a projection away from some point C (the "centre" of the
projection). Each line through C in PP^3 projects onto a point in
PP^2, and each plane through C projects to a line. Any plane in PP^3
cuts through our curve in 4 places (degree 4), which means that if we
take C to be any old point of PP^3 then the lines in PP^2 will cut
through the image curve in 4 places: that is, the image curve will
have degree 4 (it will turn out to be a singular quartic: exactly what
we didn't want!). On the other hand, if we take C to be a point on E,
then any plane through C will intersect the curve in 3 *other* points,
and so any line in PP^2 will intersect the image curve in 3 points:
that is, we'll have a degree-3 image, which is what we want.

So let's project away from the special 2-torsion point P = (0:0:-1:1)
[note 3]. Projecting away from P means writing down a mapping using 3
linear combinations of the 3 polynomials W_0, W_1, and W_2+W_3 (since
these generate the space of linear forms that vanish on P). If I
start with (W_0:W_1:W_2:W_3) \mapsto (X:Y:Z) = (W_0:W_1:W_2+W_3) then
I get a smooth plane cubic curve, and it's just a matter of eyeballing
the right linear combinations to make sure that the image is the
Montgomery curve (then I use the magic of hindsight to pretend I
wanted those linear combinations all along).

It turns out that if we take (W_0:W_1:W_2:W_3) \mapsto (X:Y:Z) =
(W_0+W_1:W_2+W_3:W_1-W_0), then our E in PP^3 maps down to the
nonsingular projective plane curve
E: 4/(a-d)*Y^2*Z = X*(X^2 + 2*(a+d)/(a-d)*X*Z + Z^2) [in PP^2]
---which, if we set B = 4/(a-d) and A = 2*(a+d)/(a-d), is just the
Montgomery curve M! Not only that,
* the image of the identity point O is (0:1:0), and
* the special 2-torsion point P turns out to map to (0:0:1), the
distinguished 2-torsion point on M.

Everything in its right place.

So we've finally arrived: composing all these isomorphisms, we see
that to get from
E: a*u^2 + v^2 = 1 + d*u^2*v^2
to
M: B*Y^2*Z = X*(X^2 + A*X*Z + Z^2) [in PP^2],
the thing to do is
(1) take the projective closure of E in PP^1xPP^1 with
(u,v) \mapsto ((U:S),(V:T)) = ((u:1),(v:1)) ;
(2) visualise E \subset PP^1xPP^1 in the more-familiar 3-dimensional
space using the Segre embedding
((U:S),(V:T)) \mapsto (W_0:W_1:W_2:W_3) = (U*V:U*T:S*V:S*T) ;
(3) project smoothly back down to the projective plane PP^2, via
(W_0:W_1:W_2:W_3) \mapsto (X:Y:Z) = (W_0+W_1:W_2+W_3:W_1-W_0) .

Composing the projective maps, we get an isomorphism E \to M defined by
((U:S),(V:T)) \mapsto (X:Y:Z) = (U*(V + T):S*(V + T):U*(T - V)) .

If we put everything in the original affine coordinates, with u = U/S,
v = V/T, x = X/Z, and y = Y/Z again, then the isomorphism above
becomes the birational map
(u,v) \mapsto (x,y) = ( (v+1)/(1-v), (v+1)/(1-v)*1/u )
---which is, reassuringly enough, the one that Bernstein, Birkner,
Joye, Lange, and Peters used to connect twisted Edwards curves with
Montgomery curves (in "Twisted Edwards curves").

But now you can see how the birational map appears: first you complete
the affine Edwards curve in the most natural way to get a nonsingular
projective curve, albeit living in a sort of funky product project
space; then you "visualize" the product projective space by embedding
it in the flatter, more familiar PP^3; then you fix your projective
eye at the 2-torsion point image, and looking around you (via the
projection from that point) you see a Montgomery curve. At the same
time, you've resolved the cognitive dissonance with the apparently
degree-4 Edwards equation: it was never degree 4, it was bidegree
(2,2). (And we never had to bother with the whole "resolution of
singularities" thing, either!)

[note 0] I would try to explain this properly, but that would require
a whole other (completely OT) post---and this one is far too long and
boring already... Sorry.

[note 1] What's happening here is that there are two notions of genus:
the arithmetic genus, which is connected to the degree/multidegree of
a particular curve in a given projective space, and the geometric
genus, which is intrinsic to the curve itself. If you like, the
arithmetic genus measures the equational complexity of a given
presentation of a curve, while the geometric genus measures the true
complexity of the curve itself, independent of its presentation. The
arithmetic genus can be greater than the geometric genus, but never
less. The difference between the two is accounted for by the
singularities that we see in a given presentation. In this case,
we've got arithmetic genus (4-1)(4-2)/2 = 3, then we subtract 2 for
the two plain old nodal singularities that appear at infinity, and
we're left with geometric genus 1, which is what we wanted.

[note 2] This "natural" thing can be made rigorous: PP^1 x PP^1 is the
toric variety connected to the Newton polygon of the affine equation
of E (TMI, I know...) Something that's interesting about PP^1xPP^1 is
that it completes the affine plane with not one, but *two* "lines at
infinity": a "vertical" one and a "horizontal" one. That's kind of
appropriate in the Edwards setting, because on some level it reflects
the natural symmetries we have around the x- and y-axes (vertical and
horizontal reflections, corresponding to negation and translation by
the special 2-torsion point); and by the same token it's inappropriate
in the Weierstrass/Montgomery setting, because you don't have that
horizontal symmetry there.

[note 3] This is the one bit where we apply a bit of sleight of hand.
Why did I use the point P as the centre C of the projection? Well, we
need a point on E, and we only have two obvious choices of points
guaranteed to be on E: the identity point O = (0:0:1:1), and the
2-torsion point P = (0:0:-1:1). The 2-torsion point P is a bit more
special for E, so I went with that---but if you go with O instead,
then you'll end up with the same mappings, and ultimately the same
isomorphism, composed with translation-by-P (which isn't the end of
the world).



Right, that's enough already! Enjoy your Mondays,

ben

--
You know we all became mathematicians for the same reason: we were lazy.
--Max Rosenlicht
Trevor Perrin
2016-11-09 00:00:12 UTC
On Mon, Nov 7, 2016 at 12:51 AM, Ben Smith <***@gmail.com> wrote:
>
> Here's a rather longish explanation that might be helpful (I hope).
> It's sort of a geometric complement to Mike's reply on curve shapes.
> It should really be a link to a blog post, I suppose---but in the
> absence of a blog, I'm posting it here.
>
> What I'm aiming to do here is
> * Connect the Edwards equation with a Weierstrass equation (actually a
> Montgomery curve);
> * Show how the usual magic birational map appears in a more natural way;
> * Resolve Ron's apparent degree-3-vs-degree-4 incompatibility; and
> * Explain how we can ignore the whole resolution-of-singularities
> issue by simply never having singularities in the first place.
>
> (If the geometric language goes over your head, don't worry; there
> will be variables and equations the whole time to show what I mean.


Thanks to you and Mike, that's awesome!

I wonder what the easiest path is to *learn* the geometric language
that you and Mike are using, to the point of following along here?

A lot of crypto-interested people can roughly understand RSA and DH,
and would like to understand ECC, but get lost with terms like
(skimming recent mails):

twist
torsion
homogeneous
isogenies
birational
singularities / nonsingular
affine
projective (plane, closure, line)
genus
embedding

Are there gentle references you'd suggest?

Trevor
Ron Garret
2016-11-09 01:23:06 UTC
On Nov 8, 2016, at 4:00 PM, Trevor Perrin <***@trevp.net> wrote:

> On Mon, Nov 7, 2016 at 12:51 AM, Ben Smith <***@gmail.com> wrote:
>>
>> Here's a rather longish explanation that might be helpful (I hope).
>> It's sort of a geometric complement to Mike's reply on curve shapes.
>> It should really be a link to a blog post, I suppose---but in the
>> absence of a blog, I'm posting it here.
>>
>> What I'm aiming to do here is
>> * Connect the Edwards equation with a Weierstrass equation (actually a
>> Montgomery curve);
>> * Show how the usual magic birational map appears in a more natural way;
>> * Resolve Ron's apparent degree-3-vs-degree-4 incompatibility; and
>> * Explain how we can ignore the whole resolution-of-singularities
>> issue by simply never having singularities in the first place.
>>
>> (If the geometric language goes over your head, don't worry; there
>> will be variables and equations the whole time to show what I mean.
>
>
> Thanks to you and Mike, that's awesome!
>
> I wonder what the easiest path is to *learn* the geometric language
> that you and Mike are using, to the point of following along here?
>
> A lot of crypto-interested people can roughly understand RSA and DH,
> and would like to understand ECC, but get lost with terms like
> (skimming recent mails):
>
> twist
> torsion
> homogeneous
> isogenies
> birational
> singularities / nonsingular
> affine
> projective (plane, closure, line)
> genus
> embedding

order
cofactor
characteristic
trace of frobenius

Another thing that has been driving me nuts for years is Theorem 2.1 in the Curve25519 paper. I understand what it *says* but I still don’t understand what it *means*.

rg
Mike Hamburg
2016-11-09 08:35:41 UTC
> On Nov 8, 2016, at 5:23 PM, Ron Garret <***@flownet.com> wrote:
>
>
> On Nov 8, 2016, at 4:00 PM, Trevor Perrin <***@trevp.net> wrote:
>
>> [...]
>>
>> I wonder what the easiest path is to *learn* the geometric language
>> that you and Mike are using, to the point of following along here?
>>
>> A lot of crypto-interested people can roughly understand RSA and DH,
>> and would like to understand ECC, but get lost with terms like
>> (skimming recent mails):

I honestly don’t know! But, glossary:

>> twist

A twist TC of some curve C looks different over whatever field F you’re working with, but the same over an algebraically closed field. That is, C and TC are equivalent under a change of variables, but that change of variables uses a value like sqrt(-1) that’s not actually in the field (but is in some extension field).

Therefore TC and C (considered over F) have different sizes and different cryptographic properties.

>> torsion

An n-torsion point is a point P where nP = 0. For example, Curve25519 has a 2-torsion point at (0,0).

>> homogeneous

A polynomial or equation is homogeneous if every term has the same degree. So x^2+1 isn’t homogeneous, but x^2+z^2 is.

>> birational

A birational map is a 1:1 change of variables. More specifically it’s a rational function f :: (x,y) -> (x’,y’) where both f and its inverse are (pairs of) ratios of polynomials in x and y.

>> isogenies

Isogenies are substitutions or changes of variables that aren’t 1:1, but are still defined by ratios of polynomials.

For example, if you take any point P on Curve25519, then the x-coordinate of 2P is necessarily square. Call its square root “s”. Then there is a curve (a Jacobi quartic)

J : t^2 = s^4 + As^2 + 1,

where (s,t) -> (x,y) by the map (x=s^2, y=ts). This is almost an isogeny from J to Curve25519, but not quite, because (at least as traditionally defined) J would have its neutral point at (0,1), and that doesn’t map to Curve25519’s neutral point. The actual isogeny is basically the same but with an inversion somewhere.

Every isogeny \phi has a “dual isogeny” \psi, such that \phi(\psi(P)) = n*P for some fixed n. Roughly, that n is the “degree" of the isogeny. The above isogeny is a 2-isogeny, once you fix up the problem with the base point.
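The Jacobi quartic example is easy to check as a polynomial identity
with sympy:

    from sympy import symbols, expand

    A, s, t = symbols('A s t')
    x, y = s**2, t*s                         # the map (s,t) -> (x,y)
    mont = y**2 - (x**3 + A*x**2 + x)        # Montgomery-form relation
    print(expand(mont.subs(t**2, s**4 + A*s**2 + 1)))   # -> 0 on J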

>> singularities / nonsingular

A singularity is a point where the derivative of the curve is not defined. For example, the graph y^2 = x^3 has a “cusp" at (0,0). This is why that curve is not considered elliptic.

>> affine

Not projective.

Affine function: a linear function with an offset, eg x,y -> x+3y+7.

>> projective (plane, closure, line)

Projective line: say you’re talking about the slope of a line. It could be a real number, or it could be infinity (and +infinity and -infinity are the same: they both mean a vertical line). The set of possible slopes is the projective line. More formally, it’s ratios x:z where x and z aren’t both zero, and x:z = x’:z’ if and only if xz’ = x’z. That is, scaling x and z by the same value gives the same ratio.

Projective plane: x:y:z, not all zero, kind of like the projective line.

Projective closure: given a curve, add a z-coordinate to make it homogeneous. Eg y^2 = x^3 + ax + b ==> y^2 z = x^3 + axz^2 + bz^3. This lets you extend the domain of the curve from the affine plane to the projective plane: it’s now the set of ratios x:y:z (not all zero) where y^2 z = x^3 + axz^2 + bz^3. It’s important that you made it homogeneous, so that scaling x,y, and z by the same factor won’t change whether the equation holds (it will scale the LHS and RHS by the cube or whatever of the factor).
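A quick sympy check of that closure step (substitute x -> x/z, y -> y/z
and clear denominators):

    from sympy import symbols, together, fraction, expand

    a, b, x, y, z = symbols('a b x y z')
    affine = y**2 - (x**3 + a*x + b)
    num, den = fraction(together(affine.subs([(x, x/z), (y, y/z)])))
    print(expand(num))  # -> y**2*z - x**3 - a*x*z**2 - b*z**3, up to term order
    print(den)          # -> z**3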

It’s usually more natural to talk about geometric objects like elliptic curves in a projective setting. Conveniently, it’s also faster to implement them this way.

>> genus

Ben can explain this one better than I can. It’s sort of a measure of how complicated a curve is, with genus 0 being a circle or line, genus 1 an elliptic curve, genus 2+ hyperelliptic.

Genus is pretty complicated to explain and compute. For example, the not-quite-elliptic curve y^2 = x^3 has genus 0.

>> embedding

A function from a smaller object into a larger one that preserves the structure of the smaller object. For example, a line can be embedded into a plane by mapping it to some line in that plane.

> order

For groups, and for elliptic curves, the order is the number of elements.

But the order of a group element G is the least positive integer n such that nG = 0. (Written as multiplication, such that G^n = 1.) Blarg.

> cofactor

Usually we want to use a group with large prime order q. If it instead has a composite order h*q, where q is the part we’re using for crypto, then h is the cofactor.

> characteristic

Of a field: the number of times you have to add 1 to itself to get 0. For F(p) where p is prime, this is just p. But there are also eg binary fields GF(2^n), which have characteristic 2, and GF(3^n) etc. (GF stands for Galois Field; sometimes just written F(2^n).)

> trace of frobenius

This is just p+1-#E, where #E is the order of the curve and p is the order of the underlying field. It’s called that because in algebraic geometry it’s the “trace" of a map called the Frobenius map. The reason behind the name isn’t important for most purposes, but it becomes clear when you try to compute #E.
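A numeric illustration with Curve25519's published constants (curve order
8*q, so cofactor 8), including a check of the Hasse bound |t| <= 2*sqrt(p):

    from math import isqrt

    p = 2**255 - 19       # field order
    q = 2**252 + 27742317777372353535851937790883648493  # subgroup order
    order = 8 * q         # #E = cofactor * subgroup order
    t = p + 1 - order     # trace of Frobenius
    assert abs(t) <= 2 * isqrt(p)   # Hasse bound
    print(t)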

> Another thing that has been driving me nuts for years is Theorem 2.1 in the Curve25519 paper. I understand what it *says* but I still don’t understand what it *means*.

It means that Curve25519 is symmetric with a reflection across the x-axis. The scalarmul map P -> nP preserves the symmetry. Also, the trick of representing infinity by 0 works, because any scalar times either of those points is still 0 or infinity. This means that x-only scalar multiplication is well-defined.

> rg

Cheers,
— Mike
Deirdre Connolly
2016-11-10 16:03:46 UTC
Mike, I'm saving this forever, this is gold. Thank you!

On Wed, Nov 9, 2016 at 3:35 AM Mike Hamburg <***@shiftleft.org> wrote:

> [...]
Samuel Neves
2016-11-10 02:21:51 UTC
Permalink
On 09/11/2016 00:00, Trevor Perrin wrote:
> twist
> torsion
> homogenous
> isogenies
> birational
> singularities / nonsingular
> affine
> projective (plane, closure, line)
> genus
> embedding
>
> Are there gentle references you'd suggest?

Poonen's elliptic curve chapter in [1] introduces most of these, and is a reasonably short read. There are several other nice introductions in that book on other topics.

[1] http://library.msri.org/books/Book44/contents.html
Douglas Stebila
2016-11-10 14:58:11 UTC
Permalink
On Nov 8, 2016, at 19:00, Trevor Perrin <***@trevp.net> wrote:
>
> I wonder what the easiest path is to *learn* the geometric language
> that you and Mike are using, to the point of following along here?
>
> Are there gentle references you'd suggest?

Craig Costello has a multi-chapter tutorial introducing elliptic curves through to pairings, accompanied by Magma code to play around with. I don't know if it's gentle, per se, but it's thorough.
http://www.craigcostello.com.au/pairing/

Douglas
Deirdre Connolly
2016-11-10 15:19:33 UTC
Permalink
I've been looking for something exactly like this for pairings, thank you!

On Thu, Nov 10, 2016, 9:58 AM Douglas Stebila <***@gmail.com> wrote:

> [...]
Mike Hamburg
2016-11-01 21:55:13 UTC
Permalink
Hi Ron,

I think your questions are appropriate for this mailing list. It’s not always easy to find information on elliptic curves and other cryptography, sometimes even for people who work on them frequently.

DJB has an unfortunate habit of omitting information he deems irrelevant, such as background and motivation. So it’s not surprising to me that he doesn’t mention cofactor much in his work. It’s disappointing that HEHCC isn’t better.

> On Nov 1, 2016, at 12:20 PM, Ron Garret <***@flownet.com> wrote:
>
> "For primes congruent to 1 mod 4, the minimal cofactors of the curve and its twist are either {4, 8} or {8, 4}.”
>
> Where the heck did *that* come from? Digging through the references, I happened to stumble upon this:
>
> http://www.hpl.hp.com/techreports/97/HPL-97-128.pdf
>
> which seems like it’s the answer to that particular question. But even this (apparently) elementary fact seems to be almost deliberately obscured in the literature. Even https://safecurves.cr.yp.to doesn’t mention it, which is another indication that all this is just taken to be common knowledge.

No, cofactor-1 is fine. It would even be preferable on its own. It’s used by the NIST and Brainpool curves. It’s just that curve shapes with nicer formulas and fewer corner cases (Montgomery, Edwards, Huff, Jacobi quartic, etc) all have cofactors divisible by 4.

For example, an Edwards curve has 4-way rotational symmetry. The center (0,0) of the rotation isn’t on the curve, and in fact all points on the curve are mapped to exactly 3 other, distinct points. This means that the number of points on the curve must be divisible by 4.

The article you’re referring to is about curves with *trace* 1, which is completely different. The trace (aka “trace of Frobenius”) is p+1-#E, where p is the order of the underlying field and #E is the number of points on the curve. So the danger in the Smart article is curves with exactly p points on them, over a field of size p. This is a very special case indeed.

The reason for the {4,8} thing is that trace(E) = -trace(quadratic twist of E). Plugging in the definition, #E + #(twist E) = 2p + 2. When p == 1 mod 4, then 2p+2 == 4 mod 8. This means that #E and #(twist E) can’t both be of the form 4*(large prime), because then their sum would be 4*(odd + odd), which would be divisible by 8. So if E has cofactor 4, then twist-E must have cofactor at least 8 and vice versa.
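
(A brute-force sanity check of that trace relation on a toy pair of twists, with made-up small parameters:)

    # #E + #(twist E) = 2p + 2, checked for y^2 = x^3 + x + 1 over F_13
    # (p = 13 == 1 mod 4, and u = 2 is a nonsquare mod 13)
    p, u = 13, 2

    def count(a, b):    # points on y^2 = x^3 + ax + b over F_p, plus infinity
        return 1 + sum((y*y - (x**3 + a*x + b)) % p == 0
                       for x in range(p) for y in range(p))

    nE = count(1, 1)
    nT = count(u*u % p, u**3 % p)   # quadratic twist by u: (a,b) -> (a*u^2, b*u^3)
    assert nE + nT == 2*p + 2       # 28 == 4 mod 8, so not both 4*(odd)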

For X25519 (the usual DH protocol over Curve25519), the protocol has to be secure on the twist as well, because you don’t check if the point is on the curve. So you want to minimize cofactor on both, and {4,8} or {8,4} is the best you can do. Bernstein chose {8,4} so that security measures on the curve would automatically protect the twist as well.

If you run through the above math with p == 3 mod 4, you get that both E and twist E can have cofactor 4. This is why Ed448-Goldilocks and its twist can both have cofactor 4, but Curve25519 has cofactor 8.

Cheers,
— Mike
Andy Isaacson
2016-11-04 02:01:50 UTC
Permalink
On Tue, Nov 01, 2016 at 12:20:16PM -0700, Ron Garret wrote:
>So let me hard-fork this thread and ask a followup meta-question: The fact
>that 8 was the cofactor of the curve is apparently something most (if
>not all) people on this list already knew. But how? Neither the
>Ed25519 paper nor the Curve25519 paper mentions it (AFAICT).

[snip]

>Bottom line: there seems to be a huge disconnect between the importance
>of cofactors on the one hand and the information available about them
>on the other. On the one hand, cofactors seem to be so important that
>all this stuff that I just learned about seems to be common knowledge,
>and on the other hand, even now that I know it I *still* can’t figure
>out how I could have learned it other than asking this apparently
>stupid question.
>
>So my meta-question is: *was* this a stupid question? Did everyone
>really already know this except me? And if so, how?

I'm deeply unqualified to opine on whether it was a stupid question, but it
didn't seem stupid to me. As far as I can tell there's a quite
remarkable pile of specialized knowledge necessary to be able to
effectively work with elliptic curve cryptography, and this list is
mostly for folks who already have the knowledge to discuss things. I'm
just lurking here to keep tabs on the developments.

I'm somewhat embarrassed to note that I'm not even sure which fields of
study are necessary; I think you're supposed to start by understanding a
basic graduate level textbook such as Silverman's "Arithmetic of
Elliptic Curves"
http://www.springer.com/gp/book/9780387094939
but I bounced off of that book pretty hard when I tried reading it a few
years back.

-andy
Trevor Perrin
2016-11-06 23:51:56 UTC
Permalink
On Thu, Nov 3, 2016 at 7:01 PM, Andy Isaacson <***@hexapodia.org> wrote:
> As far as I can tell there's a quite remarkable
> pile of specialized knowledge necessary to be able to effectively work with
> elliptic curve cryptography, and this list is mostly for folks who already
> have the knowledge to discuss things.


I think it helps a lot to think of layers built on top of each other,
from high-level to low:
- Protocols (Signatures, Diffie-Hellman, MQV, etc.)
- Groups (where discrete log is hard)
- Elliptic Curves (where points form groups)
- Fields (the coordinates of elliptic-curve points are field
elements, e.g. in GF(2^255-19))


Here's a (rambling) tour of a couple layers, I'll try to connect it to
recent discussion:

Groups
-------
At the level of DH or signatures, elliptic curve crypto is mostly just
"discrete log" crypto, i.e. it deals with (cyclic) groups where
calculating discrete logs is hard. Constructs like DH, DSA, etc apply
whether the group elements are points on an elliptic curve or integers
modulo some prime.

In either case you'll have some element (elliptic curve point; or
integer mod prime) that generates a group with large prime order q
(number of elements), which is where you want to do crypto. But this
group is often part of a larger group, with order = cofactor * q.

If cofactor=1 then things are simpler, but cofactor > 1 means there's
other groups co-existing with the "main subgroup", and there can be
weird interactions.

"Small subgroup attacks" on DH with reused keys is the classic case:
Someone gives you a DH public value A, you raise it to your reusable
DH private value b to get a shared key and encrypt something with that
key.

However! Instead of A generating the main subgroup, it was
maliciously chosen to generate some small-order subgroup with j
elements. The attacker can trial-decrypt your encrypted data to
determine which of the j keys was chosen, thus learning your private
key b mod j. By repeating this with different A_i of order j_i the
attacker can calculate b via the Chinese Remainder Theorem.
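
(A toy end-to-end version of that attack in Python, with a made-up
12-bit prime; structure only:)

    # p = 2311 is prime and p-1 = 2*3*5*7*11, so Z_p^* has a subgroup
    # of each of those small prime orders j.
    p, small = 2311, [2, 3, 5, 7, 11]
    g = next(g for g in range(2, p)              # a generator of Z_p^*
             if all(pow(g, (p-1)//j, p) != 1 for j in small))
    b = 1234                                     # victim's reused private key

    residues = []
    for j in small:
        A = pow(g, (p-1)//j, p)    # malicious "public value" of order j
        K = pow(A, b, p)           # victim's shared key
        # brute force stands in for trial-decrypting with each candidate key:
        residues.append(next(t for t in range(j) if pow(A, t, p) == K))

    x, m = 0, 1                    # combine b mod j via the CRT
    for r, j in zip(residues, small):
        while x % j != r:
            x += m
        m *= j
    assert x == b                  # fully recovered, since b < 2310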

With traditional "mod p" or "finite field" Diffie-Hellman, you can
choose a "safe prime" p=2q+1 to get a cofactor of 2 and a main
subgroup order of q. This prevents the attack because the 2-element
subgroup containing (1,-1) is easy to test for, and because leaking a
single bit of your key (mod 2) doesn't matter much.

For traditional Schnorr or DSA signatures you have to send a value
(mod q) as part of the signature, so you want a prime p = cofactor*q +
1, where cofactor is large (instead of cofactor=2). Thus, using
DSA-style primes for DH would introduce a risk of small-subgroup
attacks against re-used keys, requiring an expensive validation check
(exponentiation by q) to ensure received public values are in the
correct subgroup.
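
(That check is just one exponentiation per received value; as a sketch:)

    def in_subgroup(A, p, q):   # q = prime order of the main subgroup, q | p-1
        return 1 < A < p and pow(A, q, p) == 1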

(To make this topical: A recent paper points out that NIST recommends
DSA-style primes for DH in SP800-56A [0,1]. RFC 5114 also recommends
specific DSA-style primes for "IKE, TLS, SSH, and SMIME", without
mentioning the need for validation checks [2]. The paper analyzes the
"fragility" of the implementation landscape that has resulted, though
various complications mostly seem to prevent devastating attacks, in
the implementations looked at.)

So note that group structure and cofactor/subgroup questions are
complicated even in "regular" DH, without getting to EC.
With EC, cofactors are typically small enough (e.g. 1 for NIST
P-curves, 8 for Curve25519) that the above attack isn't that relevant,
though sending invalid points (not on the curve) can lead to a similar
attack.

However, cofactor>1 can still have subtle and unexpected effects, e.g.
see security considerations about "equivalent" public keys in RFC
7748, which is relevant to the cofactor multiplication "cV" in
VXEdDSA, or including DH public keys into "AD" in Signal's (recently
published) X3DH [3].


Signatures
-----------
Discrete-log signatures (El Gamal, Schnorr, DSA, EC-DSA, EdDSA) build
on top of the group structure, so can be considered without too much
EC detail.

Academic intro to crypto books usually cover the basics well, the
typical reference points are:
* Schnorr's identification protocol
* Fiat-Shamir transform
* Security proof via Random Oracle Model and oracle rewinding

From there, DJB has a great writeup on concrete design details [4], as
well as the Ed25519 and "EdDSA for more curves" papers.

It's also worth understanding these signatures as instances of
"zero-knowledge proofs" which can do fancier things. E.g. see
Camenisch-Stadler [5] examples 2 and 3 on "equality of two discrete
logarithms" (relevant to VRF), and "or" proofs (relevant to signature
variants like "designated verifier" signatures).


Curves
-------
This is harder math, and I'm not sure Montgomery / Edwards curves have
made it into good textbooks and overviews yet. I think people lean on
DJB's Curve25519 and Ed25519 papers, "Twisted Edwards Curves" [6], and
their references. The authors have a "gentle introduction" to EC as
well [7].

I'm not the person to do it, but if anyone wants to try an overview of
modern curve equations / coordinate systems / algorithms, I'm sure it
would be widely appreciated (there's about 600 people on this list,
probably most here to learn things like that).

Trevor


[0] https://eprint.iacr.org/2016/995
[1] http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-56Ar2.pdf
[2] https://tools.ietf.org/html/rfc5114
[3] https://whispersystems.org/docs/specifications/x3dh/
[4] https://blog.cr.yp.to/20140323-ecdsa.html
[5] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.1208
[6] https://eprint.iacr.org/2008/013
[7] http://ecchacks.cr.yp.to/
Ron Garret
2016-11-07 01:16:55 UTC
Permalink
Wow, thanks for taking the time to write this up! This has been more helpful than you can possibly imagine.

I started work on an EC survey a few days ago, but as you might imagine, curves are quite the rabbit hole so I don’t expect to have anything ready to show for a few weeks at the earliest. But I thought I would mention it just in case anyone else out there is thinking about diving in to this so we don’t duplicate efforts.

On Nov 6, 2016, at 3:51 PM, Trevor Perrin <***@trevp.net> wrote:

> [...]
Sigurd Hogsbro
2016-11-07 05:36:25 UTC
Permalink
+1!

I am one of those lurkers trying to grasp. Thank you Trevor!

On Mon, Nov 7, 2016 at 5:16 AM, Ron Garret <***@flownet.com> wrote:

> [...]
Mike Hamburg
2016-11-07 07:24:47 UTC
Permalink
Here’s a quick and slightly sleepy try on curve shapes. Please correct any errors you see.

Some of the curve shapes have multiple parameters; usually all but one of the parameters are fixed to a small number like +-1 or -3.

I’ll focus on curves over large-characteristic fields. I will not describe binary curves (F(2^n)) or ternary ones (F(3^n)). While I don’t believe any attacks are known against such curves, they are regarded as slightly sketchier due to progress on the classical discrete log problem in small-characteristic fields.

A formula for addition on an elliptic curve is *complete* if it has no corner cases that result in the wrong answer.  In general, that wrong answer will be 0/0.  Any algebraic addition formula must have a corner case in some extension field, but for complete formulas there aren’t any in the base field.

I won’t discuss pairings here, or other fancy techniques. I won’t make a distinction between multiplication and squaring, even though usually squaring is slightly faster.




Short Weierstrass: y^2 = x^3 + ax + b. Most often a=-3.

Minimum cofactor: 1.

Identity point: (0 : 1 : 0), i.e. infinity in the vertical direction.

This is the classic shape for elliptic curves, used by the likes of NIST and Brainpool. For fields of characteristic >3, any elliptic curve can be put into this shape by a change of variables. Most often used with Jacobian coordinates, where y is scaled by z^3 and x by z^2, i.e. y^2 = x^3 + axz^4 + bz^6. Usually chosen with a=-3, which makes the point doubling formula more efficient.

Short Weierstrass curves are universal, and can have prime order.  Their main downsides are efficiency and completeness.  In particular, complete formulas for short Weierstrass curves are complicated and not very efficient.  The efficiency of point operations is so-so: Jacobian doubles are ~8M, and additions are ~16M.  Projective doubles are ~11M and additions are ~14M, which is usually but not always worse than Jacobian.  However, if the algorithm can be arranged so that the points you’re adding have the same Z-coordinate (“co-Z”), then they become more efficient, on the order of ~6-7M for addition.
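
(To see the incompleteness concretely, here's a toy affine addition with
all of its branches; a Python sketch, not constant-time:)

    def w_add(P, Q, a, p):        # affine addition on y^2 = x^3 + ax + b
        if P is None: return Q    # None encodes the point at infinity
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None           # P + (-P) = infinity; also doubling a 2-torsion point
        if P == Q:
            lam = (3*x1*x1 + a) * pow(2*y1, p - 2, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p    # chord slope
        x3 = (lam*lam - x1 - x2) % p
        return (x3, (lam*(x1 - x3) - y1) % p)

Each branch is a data-dependent decision that constant-time code has to paper over.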

When doing a scalar multiply of a single point, an efficient family of algorithms is the co-Z ladder (https://eprint.iacr.org/2010/309.pdf), which costs 14M/bit if you only want an x-coordinate out at the end, or 16M/bit if you want the y-coordinate too.  These algorithms are very memory-efficient and reasonably side-channel resistant.  So for all DJB likes to badmouth this curve shape, they’re not terrible.  You just have to be really careful to avoid the incompleteness problems.




Montgomery: by^2 = x(x^2+Ax+1). Almost always b=1.

Minimum cofactor: 4.

The point of order 2 has (x,y)=(0,0). A point of order 4 has x = +- 1.

The famous Curve25519 has this form. The main point of Montgomery curves is that for a single scalar-multiply, there is a very simple, fast, memory-efficient and mostly-side-channel-resistant “Montgomery ladder” which computes the scalar multiply in 9M/bit. Usually, this ladder is only used to compute the x coordinate, but there is an efficient way to recover y as well if you really want to, assuming you know y of the input point. The Montgomery ladder is relatively easy to implement securely, at least compared to other algorithms.
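
(For concreteness, a from-scratch sketch of the ladder in Python;
variable-time and without clamping or encoding, so illustration only:)

    p = 2**255 - 19
    A24 = 121665                      # (486662 - 2)/4 for Curve25519

    def ladder(k, u):                 # x-coordinate of k*P, given x(P) = u
        x2, z2, x3, z3, swap = 1, 0, u, 1, 0
        for t in reversed(range(255)):
            bit = (k >> t) & 1
            swap ^= bit
            if swap:                  # real code would use a constant-time cswap
                x2, x3, z2, z3 = x3, x2, z3, z2
            swap = bit
            a, b = (x2 + z2) % p, (x2 - z2) % p
            aa, bb = a * a % p, b * b % p
            e = (aa - bb) % p
            c, d = (x3 + z3) % p, (x3 - z3) % p
            da, cb = d * a % p, c * b % p
            x3, z3 = (da + cb)**2 % p, u * (da - cb)**2 % p
            x2, z2 = aa * bb % p, e * (aa + A24 * e) % p
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        return x2 * pow(z2, p - 2, p) % p   # "infinity" (z2 = 0) comes out as 0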

Usually, Montgomery curves are chosen so that their twist — where b is multiplied by a nonsquare value — is also cryptographically secure. This is because when implementing the Montgomery ladder, if you take a point on the twist (or rather, an x-coordinate corresponding to a point on the twist, since you don’t usually take y as input) you will get the correct output (but still on the twist). So if the twist is secure, implementations can still be secure for most purposes even if they don’t check whether a point is on the curve or on the twist.

The main disadvantages are that these curves can’t have prime order, and operations other than the Montgomery ladder aren’t particularly efficient. Not having prime order is annoying because of standards, and because lots of papers assume that the group has prime order and you have to check whether you need a workaround.

The cofactor is almost always arranged so that there are points of order 4. IIRC when the underlying field has order 1 mod 4, it is possible to instead have a full complement of points of order 2 (i.e. 3 points of order 2, plus the identity) and no points of order 4, but nobody does this for some reason. Let’s call this 2x2 torsion, because it makes the group structure look like 2*2*q instead of 4*q.

Note that a curve with 2*2*q torsion is *not a cyclic group*. This usually doesn’t matter in practice. Maybe it messes with a couple PAKE schemes if you’re not careful. It’s annoying for the same reason that cofactor is annoying: there are so many crypto papers that assume G is cyclic, and you have to check whether it matters.





Twisted Edwards: ax^2 + y^2 = 1 + dx^2y^2. The curve is “untwisted” if a=1.

References: https://eprint.iacr.org/2008/013.pdf
http://iacr.org/archive/asiacrypt2008/53500329/53500329.pdf

Minimum cofactor: 4. As with Montgomery curves, with twisted (but not untwisted) Edwards curves and p = 1 mod 4, it is possible to have 2x2 torsion instead of 1x4. As with Montgomery curves, nobody does this.

Identity point (0,1). For some reason, the identity point is at (0,1) and not (1,0), meaning that the y-coordinate is the more significant one instead of the x-coordinate.

When the terms (d,ad) are both nonsquare in the field, this curve has no points at infinity. Otherwise, there are points of small order (2 or 4 IIRC) at infinity. Furthermore, the curve supports fast addition laws that are complete when they don’t involve points at infinity. This means that if (d,ad) aren’t square, those formulas are complete for any points on the curve. Otherwise, the addition laws are still complete if you’ve doubled enough times to avoid the points of small order. There are also slightly faster addition laws that aren’t complete.
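
(A Python sketch of the affine version of the complete law, instantiated
with Ed25519's constants: a = -1 is square since p == 1 mod 4, and d is
nonsquare, so the denominators never vanish on curve points:)

    p = 2**255 - 19
    a = p - 1                                   # a = -1
    d = -121665 * pow(121666, p - 2, p) % p     # Ed25519's d

    def ed_add(P, Q):             # complete: also handles doubling and identity
        (x1, y1), (x2, y2) = P, Q
        t = d * x1 % p * x2 % p * y1 % p * y2 % p
        x3 = (x1*y2 + y1*x2) * pow(1 + t, p - 2, p) % p
        y3 = (y1*y2 - a*x1*x2) * pow(1 - t, p - 2, p) % p
        return x3, y3

    assert ed_add((0, 1), (0, 1)) == (0, 1)     # (0, 1) is the identity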

Most often Edwards curves are chosen with a = 1 or -1. The fastest coordinates in this case are “extended” (X:Y:Z:T) coordinates, where ZT = XY. This makes the curve equation aX^2 + Y^2 = Z^2 + dT^2, which is nicely homogeneous and symmetrical. When a=-1, the formulas cost ~8M for addition and ~8M for doubling, and they’re super pretty/simple/symmetric. When a=+1, it’s 9M for addition. Usually T isn’t used on doubling, which means that it can be computed only when necessary; this usually effectively saves 1M on doubling. With this optimization, twisted Edwards curves with a=-1 are the fastest elliptic curves for almost every operation, except maybe when they’re beaten by the Montgomery ladder.

When the underlying field has order 3 mod 4, then a=-1 wouldn’t be square. This means that either d or ad must be square, so the formulas are not complete. This problem can be worked around in various ways. The easiest thing to do is to set a=+1 and take the small performance hit, but there are other options.

When running on a non-tiny device, sometimes precomputed points are stored in “Niels” form, which is ((y+x)/2,(y-x)/2,dxy) or similar. This makes the addition formula slightly faster at the cost of a bigger table.

Every Montgomery curve is birationally equivalent to a twisted Edwards curve, and vice versa. This makes Edwards curves a great complement to Montgomery curves for operations other than the Montgomery ladder, such as signatures.
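
(A quick check of the map on the base points: the Montgomery u maps to
Edwards y = (u-1)/(u+1), and Curve25519's u = 9 should land on Ed25519's
y = 4/5:)

    p = 2**255 - 19
    inv = lambda x: pow(x, p - 2, p)
    assert (9 - 1) * inv(9 + 1) % p == 4 * inv(5) % p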

Additionally, there are *isogenies* between Montgomery and Edwards curves, and between Edwards and Twisted Edwards curves.  In particular, there are 4-isogenies between Ed(1,d), Ed(-1,d-1), and Mont(A = 2-4d).  This is used for the Ed448-Goldilocks curve in RFC 7748.  The isogenies are another way to work around the a=-1 problems to make implementations faster.




Jacobi intersections, Jacobi quartics and Huff curves

Jacobi intersections: s^2+c^2=1, a*s^2+d^2=1; neutral point (0,1,1). Here s,c,d are coordinates, and a is a parameter.
https://eprint.iacr.org/2009/597.pdf

Jacobi quartics: y^2 = x^4 + ax^2 + 1, sometimes written 2ax^2 instead of ax^2. Identity point (0,1).
https://eprint.iacr.org/2009/312.pdf

Huff curves: x(dy^2 + 1) = cy(ax^2+1). Here a,c,d are parameters, and x,y are coordinates. Neutral point (0,0).
https://eprint.iacr.org/2010/383.pdf

Minimum cofactor: 4, but (depending on whether you add in twisting parameters) generally with 2x2 torsion. It might be that with some extra twisting parameters you can get the cofactor down to 2, but the formulas get less efficient.

These curve shapes are less common, because arithmetic isn’t quite as fast on them as Edwards, and they otherwise have the same advantages and disadvantages.  At least Huff curves and Jacobi quartics have a fast Montgomery “odd-ladder”.  This is almost exactly like the Montgomery ladder, has almost the same code, has the same speed, but for technical reasons works better or exclusively with an odd scalar.  I’m brewing a post on odd ladders… maybe in a week or two.

IIRC, at least most of these curve shapes or maybe all of them have reasonably efficient complete addition formulas.

IIRC these curves are birationally equivalent to each other, and are isogenous to Edwards curves. For example, if you take a twisted Edwards curve

ax^2 + y^2 = 1 + dx^2y^2

and set (p = xy, r = x/y) then you get

p(ar + 1/r) = 1 + dp^2 <====> p(ar^2+1) = r(1+dp^2)

which is a Huff curve that’s 2-isogenous to the twisted Edwards one. This means that if for some reason you’re using a Huff curve for something, you can convert your points to Edwards and vice-versa for operations that one model doesn’t support as well.




Hessian curves:
x^3 + y^3 + 1 = 3dxy; homogenized to x^3 + y^3 + 1 = 3dxyz.

https://cr.yp.to/newelliptic/hessian-20150804.pdf

Identity point is (1:-1:0), i.e. the point at infinity.

Minimum cofactor: 3.

Supports complete addition laws, I think.

IIRC these are pretty much eclipsed by Edwards curves, but if for some reason you want cofactor 3, then this is the shape for you.



Doche–Icart–Kohel curves: y^2 = x(x^2 + ax + 16a) and y^2=x^3+3*a*(x+1)^2.

I haven’t seen these used before. I’m guessing they’re also eclipsed by Edwards, but maybe there is a niche use.


I hope this is helpful!
— Mike




> On Nov 6, 2016, at 3:51 PM, Trevor Perrin <***@trevp.net> wrote:
> [...]
Antonio Sanso
2017-01-30 20:41:51 UTC
Permalink
Thanks a lot Trevor,

this was a great write up.
One more question

On Nov 7, 2016, at 12:51 AM, Trevor Perrin <***@trevp.net<mailto:***@trevp.net>> wrote:

However, cofactor>1 can still have subtle and unexpected effects, e.g.
see security considerations about "equivalent" public keys in RFC
7748, which is relevant to the cofactor multiplication "cV" in
VXEdDSA, or including DH public keys into "AD" in Signal's (recently
published) X3DH [3].

may you shed some more light about this?
What is the algorithm to find and “equivalent” public key?

regards

antonio
Mike Hamburg
2017-01-30 21:48:00 UTC
Permalink
> On Jan 30, 2017, at 12:41 PM, Antonio Sanso <***@adobe.com> wrote:
>
> Thanks a lot Trevor,
>
> this was a great write up.
> One more question
>
> On Nov 7, 2016, at 12:51 AM, Trevor Perrin <***@trevp.net <mailto:***@trevp.net>> wrote:
>
>> However, cofactor>1 can still have subtle and unexpected effects, e.g.
>> see security considerations about "equivalent" public keys in RFC
>> 7748, which is relevant to the cofactor multiplication "cV" in
>> VXEdDSA, or including DH public keys into "AD" in Signal's (recently
>> published) X3DH [3].
>
> may you shed some more light about this?
> What is the algorithm to find and “equivalent” public key?
>
> regards
>
> antonio

There are two degrees of freedom in making X25519 public keys equivalent.

First, the standard specifies that public keys must be treated the same as if they were reduced modulo p = 2^255-19. That means that if x is a public key computed by the standard algorithm, then x+p is an equivalent public key, and x+2p is too if that fits in 32 bytes. The same is true for p=2^448-2^224-1, except that for this prime, x+p is too large to write down in 56 bytes for almost every x.

Second, two x’s are equivalent if they differ by a c-torsion point. This is because the X25519 Diffie-Hellman key exchange algorithm is computing c*secret*P, which is the same as c*secret*(P+T) for points T such that c*T is the identity. Another way to describe these equivalent keys is that they’re the x-coordinates of points Q such that c*Q = c*P. Such keys can be found in SAGE by computing

[P.xy()[0] for P in (c*E.lift_x(x)).division_points(c)]

where the cofactor c is 8 for X25519 and 4 for X448. One of these points will always be 1/x modulo p. There are surely somewhat-less-nice formulas for the rest of them.

Of course, these two classes can be combined, so that P.xy()[0] + p is also an equivalent public key.

Cheers,
— Mike
Trevor Perrin
2017-01-30 22:02:24 UTC
Permalink
On Mon, Jan 30, 2017 at 1:48 PM, Mike Hamburg <***@shiftleft.org> wrote:
>
> On Jan 30, 2017, at 12:41 PM, Antonio Sanso <***@adobe.com> wrote:
>
> On Nov 7, 2016, at 12:51 AM, Trevor Perrin <***@trevp.net> wrote:
>
> However, cofactor>1 can still have subtle and unexpected effects, e.g.
> see security considerations about "equivalent" public keys in RFC
> 7748, which is relevant to the cofactor multiplication "cV" in
> VXEdDSA, or including DH public keys into "AD" in Signal's (recently
> published) X3DH [3].
>
>
> may you shed some more light about this?
> What is the algorithm to find and “equivalent” public key?
[...]
>
> Second, two x’s are equivalent if they differ by a c-torsion point. This is
> because the X25519 Diffie-Hellman key exchange algorithm is computing
> c*secret*P, which is the same as c*secret*(P+T) for points T such that c*T
> is the identity. Another way to describe these equivalent keys is that
> they’re the x-coordinates of points Q such that c*Q = c*P.

I'll describe the same thing, but maybe this is simpler wording:

For X25519, just add a point of low order (i.e. order=2, 4, or 8) onto
an X25519 public key. Because X25519 private keys are multiples of
the cofactor (8), the added point won't change DH results.

I.e. for public key A, some private key b, and low-order point L:

b(A+L) = bA + bL = bA
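
(Concretely, the clamping that makes b a multiple of 8, per RFC 7748,
sketched in Python:)

    def clamp(key32: bytes) -> int:  # X25519 private-key clamping
        k = bytearray(key32)
        k[0] &= 248                  # clear the low 3 bits: b becomes a multiple of 8
        k[31] &= 127                 # clear the top bit
        k[31] |= 64                  # set bit 254
        return int.from_bytes(k, "little")

Since 8L is the identity for any low-order L, bL = (b/8)(8L) vanishes.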


Trevor
Antonio Sanso
2017-01-31 10:32:43 UTC
Permalink
Thanks a lot guys,

I have tried the sage formula from Mike and it worked like a charm.
I had less luck with the approach from Trevor (but hey, it is for sure my fault).
Of course even if I was able to calculate an equivalent public key there is no chance I can retrieve the associated
private key (of course this would be like breaking DH, right?).

Said that, last silly question on the topic is:

in which situations is it a problem not to check for the “right” public key?
Trevor already mentioned one situation, but without knowledge of the
associated private key, I fail to see where this could be harmful….

Thanks a lot and regards

antonio

On Jan 30, 2017, at 11:02 PM, Trevor Perrin <***@trevp.net> wrote:

> [...]
Trevor Perrin
2017-02-02 21:19:22 UTC
Permalink
On Tue, Jan 31, 2017 at 2:32 AM, Antonio Sanso <***@adobe.com> wrote:
> Of course even if I was able to calculate an equivalent public key there is no chance I can retrieve the associated
> private key (of course this would be like breaking DH, right?).
>
> Said that, last silly question on the topic is:
>
> in which situations is it a problem not to check for the “right” public key?
> Trevor already mentioned one situation, but without knowledge of the
> associated private key, I fail to see where this could be harmful….


A key exchange protocol might want to guarantee that if the protocol
completes successfully, both parties have the same secret key *and*
agree on things like the identities of the two parties.

So if an attacker can change a transmitted public key into a
different-but-equivalent public key, a protocol might complete
successfully despite the parties having a different view of the public
keys.

This type of issue is often called "unknown key share" or "identity
misbinding". Avoiding it motivates things like hashing public keys
(and maybe other identity info) into session keys, or MAC'ing them
with session keys, etc.

As far as cases where equivalent DH public keys are a concrete security problem:

Maybe you meet someone online, they authenticate with identity public
key Y, but then when you meet them in person they claim public key X,
so you don't believe they're the same person.  But they are!  An
attacker just changed X to Y.

With some creativity you could perhaps think of other cases where this
causes minor confusion.

Trevor
Ray Dillinger
2017-02-03 00:00:15 UTC
Permalink
On 02/02/2017 01:19 PM, Trevor Perrin wrote:
> On Tue, Jan 31, 2017 at 2:32 AM, Antonio Sanso <***@adobe.com> wrote:
>
>> in which situations is it a problem not to check for the “right” public key?
>> Trevor already mentioned one situation, but without knowledge of the
>> associated private key, I fail to see where this could be harmful….
>
> A key exchange protocol might want to guarantee that if the protocol
> completes successfully, both parties have the same secret key *and*
> agree on things like the identities of the two parties.
>
> So if an attacker can change a transmitted public key into a
> different-but-equivalent public key, a protocol might complete
> successfully despite the parties having a different view of the public
> keys.

There was a bug related to this with the Bitcoin block chain which was
exposed by an SSL upgrade. The SSL software began failing signature
checks that were signed by anything other than the 'canonical version'
of the key. But the block chain already had (and still has) some
transactions in it that were signed by various different equivalent
keys. When these transactions started failing the signature checks, it
meant that checking the integrity of the block chain failed for anyone
who had done the most recent SSL upgrade.

It was dead-easy to fix it so no new transactions with variant forms of
the key would be made, but it was no longer sufficient to call SSL to
check signatures on the block chain, and the block chain cannot be
rewritten to update the keys.

It got fixed over a year ago, and the current version of Bitcoin has
code specifically to check transactions signed with variant forms of
keys no longer recognized by SSL.

This was mostly a 'mismatch of purpose' bug; the SSL maintainers
consider it a networking library where the form of keys used in earlier
sessions or certificates is completely irrelevant to current sessions;
Bitcoin was using SSL signatures to establish an irrevocable recorded
history and needed those old signatures to continue to be provably valid.

The short version of that story is that if you're using any library for
something that is not what the maintainers think it's for, sooner or
later there's going to be an 'upgrade' that breaks your application.

Bear
Antonio Sanso
2017-06-07 08:28:44 UTC
Permalink
Hi

On Jan 31, 2017, at 11:32 AM, Antonio Sanso <***@adobe.com> wrote:

> Thanks a lot guys,
>
> I have tried the sage formula from Mike and it worked like a charm.
> I had less luck with the approach from Trevor (but hey, that's surely my fault).
> Of course even if I was able to calculate an equivalent public key, there is no chance I can retrieve the associated
> private key (of course this would be like breaking DH, right?).
>
> Said that, last silly question on the topic is:
>
> in which situations can not checking for the “right” public key be a problem?

Well it seems that “time” gave the answer :) https://nickler.ninja/blog/2017/05/23/exploiting-low-order-generators-in-one-time-ring-signatures/

regards

antonio

> Trevor mentioned already one situation, but I fail to see, without knowledge
> of the associated private key, where this could be harmful….
>
> Thanks a lot and regards
>
> antonio
>
> On Jan 30, 2017, at 11:02 PM, Trevor Perrin <***@trevp.net> wrote:
>
>> On Mon, Jan 30, 2017 at 1:48 PM, Mike Hamburg <***@shiftleft.org> wrote:
>>>
>>> On Jan 30, 2017, at 12:41 PM, Antonio Sanso <***@adobe.com> wrote:
>>>
>>> On Nov 7, 2016, at 12:51 AM, Trevor Perrin <***@trevp.net> wrote:
>>>
>>> However, cofactor>1 can still have subtle and unexpected effects, e.g.
>>> see security considerations about "equivalent" public keys in RFC
>>> 7748, which is relevant to the cofactor multiplication "cV" in
>>> VXEdDSA, or including DH public keys into "AD" in Signal's (recently
>>> published) X3DH [3].
>>>
>>>
>>> could you shed some more light on this?
>>> What is the algorithm to find an “equivalent” public key?
>> [...]
>>>
>>> Second, two x’s are equivalent if they differ by a c-torsion point. This is
>>> because the X25519 Diffie-Hellman key exchange algorithm is computing
>>> c*secret*P, which is the same as c*secret*(P+T) for points T such that c*T
>>> is the identity. Another way to describe these equivalent keys is that
>>> they’re the x-coordinates of points Q such that c*Q = c*P.
>>
>> I'll describe the same thing, but maybe this is simpler wording:
>>
>> For X25519, just add a point of low order (i.e. order=2, 4, or 8) onto
>> an X25519 public key. Because X25519 private keys are multiples of
>> the cofactor (8), the added point won't change DH results.
>>
>> I.e. for public key A, some private key b, and low-order point L:
>>
>> b(A+L) = bA + bL = bA
>>
>>
>> Trevor
>
Trevor Perrin
2016-11-01 17:07:16 UTC
Permalink
On Tue, Nov 1, 2016 at 7:14 AM, Peter Schwabe <***@cryptojedi.org> wrote:
> Trevor Perrin <***@trevp.net> wrote:
>> One last tweak to consider is clearing the cofactor in verification.
>> Currently XEdDSA does "cofactorless verification", i.e. it takes a
>> signature (R, s) and checks R == sB - hA. We could change it to cR ==
>> c(sB - hA). VXEdDSA would be unchanged.
[...]
>
> The Ed25519 paper says
>
> "The verifier is /permitted/ to check this stronger equation and
> to reject alleged signatures where the stronger equation does not
> hold. However, this is not /required/; checking that
> 8SB=8R+8H(\encode{R},\encode{A},M)A is enough for security."
>
>
> You could decide to do the same; allowing both for verification in the
> specification and leaving the choice to the implementation.

Hi Peter,

That's an option, but I think we'd rather specify rigid behavior, in
case this gets used in an anonymity context where different
implementation choices leak info.

So while this is an annoying and slightly arbitrary decision, I think
we should just commit to one approach.
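For illustration only, a toy sketch of the difference between the two checks, modeling the group abstractly as integers mod c*q rather than real curve arithmetic (all constants are stand-ins):

q, c = 101, 8          # toy prime subgroup order and cofactor
n = c * q              # full group order; "points" are integers mod n
B = c                  # toy base point of order q
a, r, h = 3, 7, 5      # toy private scalar, nonce, and hash value
A, R = a * B % n, r * B % n
s = (r + h * a) % q
L = q                  # low-order point: c*L == 0 (mod n)

# Cofactorless verification rejects a mauled R + L ...
assert R == (s * B - h * A) % n
assert (R + L) % n != (s * B - h * A) % n
# ... while cofactored verification accepts it.
assert (c * (R + L)) % n == (c * (s * B - h * A)) % n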

Trevor
Brian Smith
2016-11-02 06:52:11 UTC
Permalink
Trevor Perrin <***@trevp.net> wrote:

> I plan to make the following tweaks, then freeze the design (at least
> for 25519):
>
> (1) Check that all integers in signatures are fully reduced (s<q,
> h<q, R.y<p, V.y<p). This prevents "signature malleability", which
> could be an issue for badly-designed protocols [1].
>

This seems good to me.
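For what it's worth, a minimal sketch of what check (1) could look like at decode time (Python; the function name and byte layout are assumptions for illustration, not from the spec):

p = 2**255 - 19                                       # field prime
q = 2**252 + 27742317777372353535851937790883648493   # group order

def decode_canonical(sig):
    # sig = R (32 bytes) || s (32 bytes); reject any non-canonical integer,
    # so (R, s + q) etc. cannot pass as a second, "different" signature.
    R_y = int.from_bytes(sig[:32], "little") & ((1 << 255) - 1)  # drop sign bit
    s = int.from_bytes(sig[32:64], "little")
    if R_y >= p or s >= q:
        raise ValueError("non-canonical signature encoding")
    return R_y, s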


> (2) Replace hash_i(a || ... || Z) with hash_i(a || Z || pad || ...)
> for reasons here [2] - mainly a bit more sidechannel resistance, and
> slightly cleaner use of the hash.
>

This seems good to me. I am not convinced that "pad" is needed, and if
it is needed, then should it be random too?

> Other opinions? Any other last-minute tweaks?

I still haven't had time to read the paper carefully. But I have some more
questions. Note that I intentionally avoided reading the prior discussion
on the list (which happened before I subscribed, I think), so that you can
get a newbie's impression of the paper, which I hope is valuable to you.

The motivation for VXEdDSA is unclear. Is the purpose simply to give the
verifier a proof that the EdDSA keypair really was derived from the
X25519/X448 key, that is indistinguishable from random to a third-party
observer? Is it also serving to provide proof-of-possession of the private
key, or any other purpose?

hash_0(x) is reserved for other specifications. But, if hash_0(x) is never
used, then it is clear that Curve25519 and Curve448 are fully domain
separated, right? (The 32nd input byte to the hash function will always be
0xff for Curve448 and never 0xff for Curve25519, for i > 0). Conversely, if
something does use hash_0(x) then will the domain separation actually work?
It seems like, instead of reserving hash_0(x) for other specifications, it
might be better for it to be forbidden from being used?

It seems like the domain separation provided by the hash_i(x) scheme can
only work for curves with differing values of b (curves of different octet
lengths). It may be useful to mention this.

In this paper, should we also consider the EdDSA public key to be a secret
worth protecting?

In VXEdDSA, will the call to hash_to_point change from hash_to_point(A ||
M) to something similar to the proposed change for the first hashing in
XEdDSA? (I guess this may depend on whether the EdDSA public key is
supposed to be secret.)

In VXEdDSA, every use of a hash function is a hash_i(x). But, in XEdDSA,
the first use of the hash function is hash_1(x) but the second use of the
hash function is just hash(x). Why isn't it hash_2(x) instead for the
second hashing?

In XEdDSA and EdDSA, near the end we calculate:

h = hash(R || A || M).

But in VXEdDSA we calculate:

h = hash_4(A || V || R || Rv || M).

Why doesn't VXEdDSA use this instead?:

h = hash_4(V || Rv || R || A || M)

Note that that is equivalent to:

h = hash(P || R || A || M) where P = 2**b - 1 - i || V || Rv

Note in particular that this would be consistent with EdDSA & XEdDSA with P
= "". Also note that (P || R) fills one block of SHA-512 for Curve25519,
and A || M form the subsequent block(s), and A is the public key. I'm not
sure any of this matters. However, if it is important that it be different
than EdDSA and XEdDSA then the paper should explain why; conversely, if it
isn't important then it would be better to be consistent.
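To make the block-layout claim easy to check, a small sketch (sizes assumed from the 25519 parameters; purely illustrative):

# Layout check for the suggested hash_4(V || Rv || R || A || M) on 25519:
PREFIX_LEN = 32                 # the 2**b - 1 - i domain-separation prefix
POINT_LEN = 32                  # encoded V, Rv, and R
assert PREFIX_LEN + 3 * POINT_LEN == 128   # P || R fills one SHA-512 block,
                                           # so A || M start at a block boundary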


Section 2.4 explains how elliptic curve points are encoded, but it isn't as
precise as the IETF draft. In particular, it isn't clear whether the
coordinates of P must or must not be reduced (mod p) before encoding, but
this is important because the results of point multiplications are hashed.
In the IETF draft for EdDSA, this is clearer because it has a precondition
that 0 <= x, y < p. If the goal is to be aligned with EdDSA then I suggest
defining the encoding rules by reference to the IETF draft for EdDSA.

It would be good to have some test vectors, where RNG outputs are given as
inputs.

Cheers,
Brian
--
https://briansmith.org/
Trevor Perrin
2016-11-02 20:28:20 UTC
Permalink
Thanks for the close read -

On Tue, Nov 1, 2016 at 11:52 PM, Brian Smith <***@briansmith.org> wrote:
> Trevor Perrin <***@trevp.net> wrote:
>
>>
>> (2) Replace hash_i(a || ... || Z) with hash_i(a || Z || pad || ...)
>> for reasons here [2] - mainly a bit more sidechannel resistance, and
>> slightly cleaner use of the hash.
>
>
> This seems good to me. I am not convinced that "pad" is needed, and also if
> it is needed then should it be random too?

64 bytes of randomness ought to be enough for anyone.

With Curve25519, Curve448, and SHA-512, you're right that padding
isn't totally necessary:
* The padding is zero-length for 25519, since hash_i(a || Z) is 128
bytes (one SHA-512 block)
* For 448, hash_i(a || Z) spills into the second SHA-512 block, so
the private key and message will be in separate blocks regardless of
padding.

But it's still a little cleaner to pad the full block in 448, and for
other hash functions or curves things might work out differently, so I
think specifying padding instead of relying on things to line up is
the safest design.
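A quick sketch of the 25519 case (placeholder values; only the lengths matter here):

# Layout check for the proposed hash_1(a || Z || pad || M) on 25519:
b = 256
i = 1
prefix = (2**b - 1 - i).to_bytes(32, "little")   # hash_i domain prefix
a = b"\x00" * 32                                 # private scalar (placeholder)
Z = b"\x11" * 64                                 # 64 random bytes (placeholder)
pad = b""                                        # empty for 25519
assert len(prefix + a + Z + pad) == 128          # exactly one SHA-512 block,
                                                 # so M starts in the second block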


> The motivation for VXEdDSA is unclear. Is the purpose simply to give the
> verifier a proof that the EdDSA keypair really was derived from the
> X25519/X448 key, that is indistinguishable from random to a third-party
> observer? Is it also serving to provide proof-of-possession of the private
> key, or any other purpose?

Other purposes - namely to be a VRF, per the references (Micali, Dodis).

You might think: What are VRFs for? People have proposed various
uses; hopefully I'll have more to suggest later.

> hash_0(x) is reserved for other specifications. But, if hash_0(x) is never
> used, then it is clear that Curve25519 and Curve448 are fully domain
> separated, right?

I was thinking about domain separation with respect to the same key.

With respect to separating hash functions for different types of keys,
I'm not sure that's important, but it's worth mulling over a bit more.

> (The 32nd input byte to the hash function will always be
> 0xff for Curve448 and never 0xff for Curve25519, for i > 0). Conversely, if
> something does use hash_0(x) then will the domain separation actually work?
> It seems like, instead of reserving hash_0(x) for other specifications, it
> might be better for it to be forbidden from being used?

Letting other specs use it means they have an easy way to get domain
separation, if they're also doing something with the same key.

> In this paper, should we also consider the EdDSA public key to be a secret
> worth protecting?

Where does that make a difference? You could do const-time
verification, I suppose?

> In VXEdDSA, will the call to hash_to_point change from hash_to_point(A || M)
> to something similar to the proposed change for the first hashing in XEdDSA?
> (I guess this may depend on whether the EdDSA public key is supposed to be
> secret.)

This can't be randomized; it needs to be deterministic.

There's not a very strong argument for padding, either, since we're
not trying to make this a PRF based on A (hashing A just prevents the
possibility of different keys that are equivalent in their VRF
outputs, which isn't technically a VRF attack, but seems confusing
enough that it's worth preventing).

> In VXEdDSA, every use of a hash function is a hash_i(x). But, in XEdDSA, the
> first use of the hash function is hash_1(x) but the second use of the hash
> function is just hash(x). Why isn't it hash_2(x) instead for the second
> hashing?

Compatibility with EdDSA.

> In XEdDSA and EdDSA, near the end we calculate:
>
> h = hash(R || A || M).
>
> But in VXEdDSA we calculate:
>
> h = hash_4(A || V || R || Rv || M).
>
> Why doesn't VXEdDSA use this instead?:
>
> h = hash_4(V || Rv || R || A || M)
>
> Note that that is equivalent to:
>
> h = hash(P || R || A || M) where P = 2**b - 1 - i || V || Rv

The order doesn't matter. I see your point about aligning with EdDSA,
but the current order somewhat matches the "order of appearance" of
the variables. Since this is arbitrary, I'm inclined to leave it
as-is.

> Section 2.4 explains how elliptic curve points are encoded, but it isn't as
> precise as the IETF draft. In particular, it isn't clear whether the
> coordinates of P must or must not be reduced (mod p) before encoding, but
> this is important because the results of point multiplications are hashed.
> In the IETF draft for EdDSA, this is clearer because it has a precondition
> that 0 <= x, y < p. If the goal is to be aligned with EdDSA then I suggest
> defining the encoding rules by reference to the IETF draft for EdDSA.

Coordinates are defined as being mod p, but I can add a (mod p) in
there to be clear.

> It would be good to have some test vectors, where RNG outputs are given as
> inputs.

For sure - may not get to that in the next round, though.

Trevor
Brian Smith
2016-11-02 22:00:41 UTC
Permalink
Trevor Perrin <***@trevp.net> wrote:

> Brian Smith <***@briansmith.org> wrote:
>
> > hash_0(x) is reserved for other specifications. But, if hash_0(x) is never
> > used, then it is clear that Curve25519 and Curve448 are fully domain
> > separated, right?
>
> I was thinking about domain separation with respect to the same key.
>
> With respect to separating hash functions for different types of keys,
> I'm not sure that's important, but it's worth mulling over a bit more.

I don't think it matters, because for different values of b, signatures
will be different sizes, and for the same value of b, such domain
separation doesn't work. So, forget this idea of mine.

> > In this paper, should we also consider the EdDSA public key to be a
> > secret worth protecting?
>
> Where does that make a difference? You could do const-time
> verification, I suppose?

> > In VXEdDSA, will the call to hash_to_point change from hash_to_point(A || M)
> > to something similar to the proposed change for the first hashing in XEdDSA?
> > (I guess this may depend on whether the EdDSA public key is supposed to
> > be secret.)
>
> This can't be randomized; it needs to be deterministic.
>
> There's not a very strong argument for padding, either, since we're
> not trying to make this a PRF based on A (hashing A just prevents the
> possibility of different keys that are equivalent in their VRF
> outputs, which isn't technically a VRF attack, but seems confusing
> enough that it's worth preventing).

I understand it can't be randomized. My question was this: if A is secret
then when we call hash_to_point(A || M), should we do anything to prevent
any kind of (side-channel) attack on A where the attacker controls M, as
we did by reordering the inputs to hash_1(...)?

> > In VXEdDSA, every use of a hash function is a hash_i(x). But, in XEdDSA,
> > the first use of the hash function is hash_1(x) but the second use of the
> > hash function is just hash(x). Why isn't it hash_2(x) instead for the second
> > hashing?
>
> Compatibility with EdDSA.

It seems worth expanding on exactly what compatibility and incompatibility
are intended. It's unclear to me.

> > In XEdDSA and EdDSA, near the end we calculate:
> >
> > h = hash(R || A || M).
> >
> > But in VXEdDSA we calculate:
> >
> > h = hash_4(A || V || R || Rv || M).
> >
> > Why doesn't VXEdDSA use this instead?:
> >
> > h = hash_4(V || Rv || R || A || M)
> >
> > Note that that is equivalent to:
> >
> > h = hash(P || R || A || M) where P = 2**b - 1 - i || V || Rv
>
> The order doesn't matter. I see your point about aligning with EdDSA,
> but the current order somewhat matches the "order of appearance" of
> the variables. Since this is arbitrary, I'm inclined to leave it
> as-is.

See below where I try to demonstrate how code that implements both XEd25519
and Ed25519 would be factored. I would like to do similar for VXEdDSA,
which is why I think consistency is useful. Also, I think that maximizing
similarity is probably better for maximizing reuse of analysis of EdDSA and
XEdDSA for VXEdDSA. This is also the motivation for my suggestions to use
naming consistent with the EdDSA specification. Note that I'm not
suggesting that you rewrite the paper in this form.

Note in particular how it is more convenient in this factoring to replace
the hash_i(x) hash family with a prefix_i construct, where prefix_i =
2**b - 1 - i.

# Uses naming from the XEdDSA specification.
xed25519_sign(k, M, Z):
pad = ""
A, a = calculate_key_pair(k)
prefix = prefix(1) || a || Z || pad
return ed25519_sign'(A, a, prefix, M)

# Uses naming from draft-irtf-cfrg-eddsa-05.
ed25519_sign(A, seed, M):
a, prefix = hash(seed).split_at(32)
return ed25519_sign'(A, a, prefix, M)

# Uses naming from draft-irtf-cfrg-eddsa-05.
ed25519_sign'(A, a, prefix, M):
r = hash(prefix || M) (mod L)
R = rB
k = hash(R || A || M) (mod L)
S = r + k * a (mod L)
return R || S

Cheers,
Brian
--
https://briansmith.org/
Trevor Perrin
2016-11-02 22:45:48 UTC
Permalink
On Wed, Nov 2, 2016 at 3:00 PM, Brian Smith <***@briansmith.org> wrote:
>> > In VXEdDSA, will the call to hash_to_point change from hash_to_point(A
>> > || M)
>> > to something similar to the proposed change for the first hashing in
>> > XEdDSA?
[...]
>> This can't be randomized, it needs to be deterministic.
[...]
> I understand it can't be randomized. My question was this: if A is secret
> then when we call hash_to_point(A || M), should we do anything to prevent
> any kind of (side-channel) attack on A where the attacker controls M, as
> we did by reordering the inputs to hash_1(...)?

That re-ordering was so that random data could be inserted between the
long-term secret (a) and message (M), to prevent the message from
being directly mixed with the secret.

We can't do that here - even if A is a secret, there's no way to
prevent M from being mixed with it (or a deterministic function of
it).

So I'm not seeing any changes worth making.

>> > In VXEdDSA, every use of a hash function is a hash_i(x). But, in XEdDSA,
>> > the first use of the hash function is hash_1(x) but the second use of the
>> > hash function is just hash(x). Why isn't it hash_2(x) instead for the second
>> > hashing?
>>
>> Compatibility with EdDSA.
>
> It seems worth expanding on exactly what compatibility and incompatibility
> are intended. It's unclear to me.

XEd25519 signatures are intended to be equivalent to Ed25519
signatures, just with the public keys converted. So we need to
calculate "h" the exact same way, i.e. h = HASH(R || A || M).

Agreed this should be explained, somewhere.

(XEd448 may not be identical to the CFRG's version of Ed448, because
they're not using the equivalent curve to X448, but that's a separate
issue).

>> > In XEdDSA and EdDSA, near the end we calculate:
>> >
>> > h = hash(R || A || M).
>> >
>> > But in VXEdDSA we calculate:
>> >
>> > h = hash_4(A || V || R || Rv || M).
>> >
>> > Why doesn't VXEdDSA use this instead?:
>> >
>> > h = hash_4(V || Rv || R || A || M)
[...]
> See below where I try to demonstrate how code that implements both XEd25519
> and Ed25519 would be factored. I would like to do similar for VXEdDSA, which
> is why I think consistency is useful.
[...]
> # Uses naming from the XEdDSA specification.
> xed25519_sign(k, M, Z):
> pad = ""
> A, a = calculate_key_pair(k)
> prefix = prefix(1) || a || Z || pad
> return ed25519_sign'(A, a, prefix, M)
>
> # Uses naming from draft-irtf-cfrg-eddsa-05.
> ed25519_sign(A, seed, M):
> a, prefix = hash(seed).split_at(32)
> return ed25519_sign'(A, a, prefix, M)
>
> # Uses naming from draft-irtf-cfrg-eddsa-05.
> ed25519_sign'(A, a, prefix, M):
> r = hash(prefix || M) (mod L)
> R = rB
> k = hash(R || A || M) (mod L)
> S = r + k * a (mod L)
> return R || S

Nice - that works well; another reason for moving Z earlier.

So you're thinking of similarly factoring out h and s computation into
a helper function, with V and Rv moved to a prefix?

Hadn't occurred to me, but I guess that could tilt the argument in your
direction; let me think more...

Trevor
Brian Smith
2016-11-02 23:53:26 UTC
Permalink
Trevor Perrin <***@trevp.net> wrote:

> On Wed, Nov 2, 2016 at 3:00 PM, Brian Smith <***@briansmith.org> wrote:
>
> > It seems worth expanding on exactly what compatibility and incompatibility
> > are intended. It's unclear to me.
>
> XEd25519 signatures are intended to be equivalent to Ed25519
> signatures, just with the public keys converted. So we need to
> calculate "h" the exact same way, i.e. h = HASH(R || A || M).
>
> Agreed, this should be explained somewhere.
>
> (XEd448 may not be identical to the CFRG's version of Ed448, because
> they're not using the equivalent curve to X448, but that's a separate
> issue).

Yes, I found (and find) it confusing that XEd448 is different from Ed448 in
obvious ways, yet a goal of XEdDSA is to be compatible with EdDSA in
some way. More on the equivalence of Ed25519 and XEd25519 below.

> Nice - that works well; another reason for moving Z earlier.
>
> So you're thinking of similarly factoring out h and s computation into
> a helper function, with V and Rv moved to a prefix?
>

Yes. Basically, I'm simply trying to understand and make plain the
differences between these three functions. Also, I'm suspicious of trivial
but unnecessary differences between the functions in general.

Assuming I didn't make a huge mistake, here's another factoring of the
logic that shows that XEd25519 signing can be used with either XEd25519
keys or Ed25519 keys. In particular, the randomization of the nonce and the
derivation of an Ed25519 key from an X25519 key are orthogonal and XEd25519
signing is equivalent to Ed25519 signing with a nonce that has a fixed ||
random || fixed prefix.

xed25519_precompute(k):
A, a = calculate_key_pair(k)
prefix = domain_separator(1) || a
return (A, a, prefix)

ed25519_precompute(k):
A, seed = k.split_at(32)
a', prefix = hash(seed).split_at(32)
a = a' & 248 & 63 | 64
return (A, a, prefix)

xed25519_sign((A, a, prefix), M, Z):
pad = ""
randomized_prefix = prefix || Z || pad
return ed25519_sign((A, a, prefix), M)

# Uses naming from draft-irtf-cfrg-eddsa-05
ed25519_sign((A, a, prefix), M):
r = hash(prefix || M) (mod L)
R = rB
k = hash(R || A || M) (mod L)
S = r + k * a (mod L)
return R || S

I've not bothered to do VXEd25519 yet.

Cheers,
Brian
--
https://briansmith.org/
Brian Smith
2016-11-02 23:55:01 UTC
Permalink
Brian Smith <***@briansmith.org> wrote:

> xed25519_sign((A, a, prefix), M, Z):
> pad = ""
> randomized_prefix = prefix || Z || pad
> return ed25519_sign((A, a, prefix), M)
>

I meant:

xed25519_sign((A, a, prefix), M, Z):
pad = ""
randomized_prefix = prefix || Z || pad
return ed25519_sign((A, a, randomized_prefix), M)

Cheers,
Brian
Trevor Perrin
2016-11-03 00:23:14 UTC
Permalink
On Wed, Nov 2, 2016 at 4:53 PM, Brian Smith <***@briansmith.org> wrote:
>
> Yes, I found (and find) it confusing that XEd448 is different from Ed448 in
> obvious ways, yet a goal of XEdDSA is to be compatible with EdDSA in some
> way. More on the equivalence of Ed25519 and XEd25519 below.

Yeah, I'd prefer Ed448 to be the equivalent of Ed25519 for the X448
curve (equivalent curve; same input encoding; same hash). But that's
up to the IETF.

> Assuming I didn't make a huge mistake, here's another factoring of the logic
> that shows that XEd25519 signing can be used with either XEd25519 keys or
> Ed25519 keys. In particular, the randomization of the nonce and the
> derivation of an Ed25519 key from an X25519 key are orthogonal

Sure, agreed that handling of the nonce and the public key are orthogonal.

Trevor
Brian Smith
2016-11-04 07:36:22 UTC
Permalink
Trevor Perrin <***@trevp.net> wrote:

> On Wed, Nov 2, 2016 at 4:53 PM, Brian Smith <***@briansmith.org> wrote:
> > Assuming I didn't make a huge mistake, here's another factoring of the
> > logic that shows that XEd25519 signing can be used with either XEd25519
> > keys or Ed25519 keys. In particular, the randomization of the nonce and the
> > derivation of an Ed25519 key from an X25519 key are orthogonal
>
> Sure, agreed that handling of the nonce and the public key are orthogonal.

Just to be clear: hash_1(x) is used whenever randomized nonces are used,
regardless of whether calculate_key_pair is used to derive an EdDSA key from
an X25519/X448 key, right? And conversely, if one derives an EdDSA keypair
from an X25519 keypair, but doesn't use a randomized nonce, then just
regular EdDSA should be used, instead of XEdDSA?

Cheers,
Brian
--
https://briansmith.org/
Trevor Perrin
2016-11-04 08:27:37 UTC
Permalink
On Fri, Nov 4, 2016 at 12:36 AM, Brian Smith <***@briansmith.org> wrote:
> Trevor Perrin <***@trevp.net> wrote:
>> Sure, agreed that handling of the nonce and the public key are orthogonal.
>
>
> Just to be clear: hash_1(x) is used whenever randomized nonces are used,
> regardless of whether calculate_key_pair is used to derive an EdDSA key from
> an X25519/X448 key, right? And conversely, if one derives an EdDSA keypair
> from an X25519 keypair, but doesn't use a randomized nonce, then just
> regular EdDSA should be used, instead of XEdDSA?

Hmm, maybe this isn't as orthogonal as I thought. I think the term
XEdDSA should refer to the whole package of (X->Ed, randomized).

I'm not sure why you'd want (X->Ed, deterministic).

As far as (Ed, randomized) goes, I'm not sure exactly what the best design is there.

Trevor
Brian Smith
2016-11-05 06:43:01 UTC
Permalink
On Wed, Nov 2, 2016 at 1:53 PM, Brian Smith <***@briansmith.org> wrote:

> I've not bothered to do VXEd25519 yet.

I did spend a little bit of time figuring out what a newbie would have
trouble understanding about VXEd25519 in relation to XEd25519. Below I
reference the XEdDSA nonce calculation in the published draft, not the
proposed modified calculation, just to keep things simpler. The same
points apply to the new calculation.

> For reference, the first part of VXEd25519 is:
A, a = calculate_key_pair(k)
Bv = hash_to_point(A || M)
V = aBv
> r = hash3(a || V || Z) (mod q)

1. Does r really *need* to be calculated differently in VXEdDSA? That is,
is there a risk of a security failure or that things otherwise stop working
if it were calculated the same way as XEdDSA, r = hash2(a || M || Z)?

2. If the answer to #1 is "no," is there any material advantage to
calculating it differently?

Consider the following hand-wavy (and therefore probably wrong) argument.
First, note

V = aBv
V = a * hash_to_point(A || M)
V = a * hash_to_point(public_from_private(a) || M)

When we compute r = hash3(a || V || Z) (mod q), we're already hashing a,
therefore also hashing any function f(a) is redundant. Similarly, hashing
the output of a function f(M) can't give us a larger set than M. Therefore,
computing r the same way as in XEdDSA, r = hash3(a || M || Z), is sufficient
and actually at least slightly more secure than calculating hash3(a || V ||
Z), AFAICT. The only obvious advantage to calculating hash3(a || V || Z)
instead of hash3(a || M || Z) is that the length of V will generally be
smaller than the length of M, so less data will be hashed.

Regardless, it would be useful to note the rationale for this design.

3. To what extent does calculating hash3(a || V || Z) reduce security
compared to hash3(a || M || Z) when Z is the output of a broken PRNG? In
other words, how much worse is it to hash V instead of M? Put yet another
way, it would be useful to explain how lossy hash_to_point is in the
Elligator 2 summary.

Cheers,
Brian
--
https://briansmith.org/
Trevor Perrin
2016-11-07 09:50:27 UTC
Permalink
On Fri, Nov 4, 2016 at 11:43 PM, Brian Smith <***@briansmith.org> wrote:
>
> For reference, the first part of VXEd25519 is:
> A, a = calculate_key_pair(k)
> Bv = hash_to_point(A || M)
> V = aBv
> r = hash3(a || V || Z) (mod q)
>
> 1. Does r really *need* to be calculated differently in VXEdDSA? That is, is
> there a risk of a security failure or that things otherwise stop working if
> it were calculated the same way as XEdDSA, r = hash2(a || M || Z)?

No, but that means hashing the input message three times, instead of
twice. Avoiding that seemed worthwhile at the time, but you're right
there's a cost:

> 3. To what extent does calculating hash3(a || V || Z) reduce security
> compared to hash3(a || M || Z) when Z is the output of a broken PRNG? In
> other words, how much worse is it to hash V instead of M?

If an attacker can find messages M1 and M2 which are a colliding pair
for hash2, and the RNG repeats, then the private key will probably
leak since M1 and M2 probably won't collide hash4.
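The arithmetic behind that leak, as a toy sketch (illustrative numbers, not real hashes):

# Two signatures made with the same nonce r but different h values
# reveal the private scalar a.
q = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order
a, r = 1234567, 987654          # toy private scalar and repeated nonce
h1, h2 = 111, 222               # distinct values of h = hash4(...)
s1 = (r + h1 * a) % q
s2 = (r + h2 * a) % q
assert (s1 - s2) * pow(h1 - h2, -1, q) % q == a   # attacker solves for a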

VXEdDSA is broken as a VRF if you can collide hash2, because after
seeing VRF(M1) you can predict VRF(M2). But you still might care
about signature forgery, or the key might be shared with something
else.

So I could see an argument for just hashing the message three times -
it's more consistent with XEdDSA, for typical small messages it's not
a big time difference, and if hashing time does matter the caller
could do their own pre-hashing.

Trevor
Brian Smith
2016-11-07 19:22:35 UTC
Permalink
Trevor Perrin <***@trevp.net> wrote:

> Brian Smith <***@briansmith.org> wrote:
> > 3. To what extent does calculating hash3(a || V || Z) reduce security
> > compared to hash3(a || M || Z) when Z is the output of a broken PRNG? In
> > other words, how much worse is it to hash V instead of M?
>
> If an attacker can find messages M1 and M2 which are a colliding pair
> for hash2, and the RNG repeats, then the private key will probably
> leak since M1 and M2 probably won't collide hash4.
>
> VXEdDSA is broken as a VRF if you can collide hash2, because after
> seeing VRF(M1) you can predict VRF(M2). But you still might care
> about signature forgery, or the key might be shared with something
> else.
>
> So I could see an argument for just hashing the message three times -
> it's more consistent with XEdDSA, for typical small messages it's not
> a big time difference, and if hashing time does matter the caller
> could do their own pre-hashing.
>

Right.

Also, (X)EdDSA has been argued to be secure even if the hash function isn't
fully collision-resistant; hashing V instead of M breaks that as you
describe above. Also, IIUC, hash_to_point is strictly less
collision-resistant than SHA-512 alone, but I'm too lazy to try to figure
out exactly how much less.

I think when you wrote "typical small messages" you meant that the messages
are typically small, in which case I think hashing the message three times
makes the most sense.

Since the VRF functionality fundamentally depends on hash_to_point, it
seems like collision resistance of the hash function and of hash_to_point
are required for the VRF functionality, even if it wouldn't be required for
the security of the signature functionality. I'm not sure what difference
it would make since I don't know much about VRFs.

At the very least, this stuff would be good to mention in the security
considerations.

Cheers,
Brian
--
https://briansmith.org/