
35C3 - The year in post-quantum crypto

  • 0:00 - 0:19
    35C3 preroll music
  • 0:19 - 0:26
    Herald: The next talk is called "The year
    in post-quantum crypto" by Tanja Lange,
  • 0:26 - 0:32
    she's researching in Eindhoven, and Daniel
    Bernstein, researching at the University
  • 0:32 - 0:39
    of Chicago. They both had the first ever
    conference on post-quantum cryptography in
  • 0:39 - 0:47
    2006 and they both contribute to the NaCl
    crypto library. Their talk is going to be
  • 0:47 - 0:53
    a summary of the NIST post-quantum crypto
    competition and they're gonna provide an
  • 0:53 - 0:59
    overview of problems of post-quantum
    cryptography and hopefully show us how to
  • 0:59 - 1:03
    integrate post-quantum crypto in existing
    software. Please welcome them with a huge
  • 1:03 - 1:05
    round of applause.
  • 1:05 - 1:16
    applause
  • 1:16 - 1:20
    Tanja Lange: All right, well, thank you.
    First of all, this title word post-quantum
  • 1:20 - 1:25
    cryptography, what it means, we're talking
    about cryptography where we're designing
  • 1:25 - 1:30
    the systems under the assumption that our
    attacker - not the user, just the attacker
  • 1:30 - 1:35
    - has a large quantum computer. Also to
    remind you from three years ago, when we
  • 1:35 - 1:42
    showed you the attacker, we're talking
    about attackers. All right. So, we're
  • 1:42 - 1:48
    seeing this over the last few years. In 2015,
    say mid-August, there was a big
  • 1:48 - 1:51
    announcement by the NSA saying "Oh yeah,
    actually the world should care about post-
  • 1:51 - 1:56
    quantum cryptography. We need more
    research." They finally woke up to it and
  • 1:56 - 1:59
    actually they had a nice roll-out effect,
    that pretty much every agency - just
  • 1:59 - 2:04
    highlighting three of them here, so U.K.,
    the Netherlands and the NSA themselves -
  • 2:04 - 2:09
    made some statements about the urgency of
    rolling out post-quantum, that normal
  • 2:09 - 2:12
    cryptography that we're currently using,
    so elliptic curves and RSA and Diffie-
  • 2:12 - 2:16
    Hellman and finite fields, will be broken
    as soon as there is a quantum computer, and the
  • 2:16 - 2:19
    NIST, which is the National Institute of
    Standards and Technology, so it's an
  • 2:19 - 2:25
    institute in the U.S., which is working on
    standards said, ok, we're gonna call for a
  • 2:25 - 2:31
    standardization project for post-quantum
    cryptography. So yes, it's a U.S.
  • 2:31 - 2:36
    institution, but it has in cryptography a
    mostly good history. They have been
  • 2:36 - 2:39
    running competitions which gave us AES.
    They've been running competitions which
  • 2:39 - 2:44
    gave us SHA-3. And they also, without a
    competition, gave us Dual_EC.
  • 2:44 - 2:48
    laughter
    TL: So, competitions with NIST, good
  • 2:48 - 2:54
    thing, everything else, well, judge for
    yourself. This sounds like a great story.
  • 2:54 - 2:58
    It's also a little bit disappointing when
    you look at where it started. So back in
  • 2:58 - 3:02
    2003, Dan here coined the term post-
    quantum cryptography and then they've been
  • 3:02 - 3:06
    running around for 10 years going like
    "The end is nigh, please invest in post-
  • 3:06 - 3:11
    quantum cryptography, do something", just
    that 10 years later the NSA said "Oh my
  • 3:11 - 3:15
    God, we have to do something, quantum
    computers are coming". So it's a little
  • 3:15 - 3:18
    bit, yeah, "I told you so". Not a great
    story, but.
  • 3:18 - 3:22
    Daniel J. Bernstein: All right so what
    happened with this competition. Well after
  • 3:22 - 3:27
    NIST said "Hey everybody, send us some
    proposals for post-quantum crypto to
  • 3:27 - 3:32
    standardize", a lot of people did. So,
    they said actually more than 80
  • 3:32 - 3:36
    submissions came in and some of them were
    incomplete, like one of the requirements
  • 3:36 - 3:39
    was to include software, and for whatever
    reasons, they said some of the submissions
  • 3:39 - 3:43
    are not complete, but they posted 69
    complete submissions, the submission teams
  • 3:43 - 3:48
    include 260 people from around the world,
    all sorts of academia, industry, maybe
  • 3:48 - 3:52
    even some government people. There's lots
    and lots of names and we're not gonna go
  • 3:52 - 3:56
    through, like, what all these things are,
    but we are going to say something...
  • 3:56 - 3:59
    TL: Oh we have 60 minutes.
    DJB: Oh that's true. So BIG QUAKE: This is
  • 3:59 - 4:02
    a code based system...
    laughter
  • 4:02 - 4:06
    No, we're gonna give you some idea of,
    like, what the big picture is of how well
  • 4:06 - 4:12
    these things have held up. So, these were
    all posted by NIST on the 21st of December
  • 4:12 - 4:18
    last year, and some of you who saw our
    LatticeHacks talk last year remember that
  • 4:18 - 4:22
    this list, already there was some damage
    to it by the time of our talk on the 28th
  • 4:22 - 4:27
    of December 2017. By the end of the year,
    there were eight submissions that had
  • 4:27 - 4:31
    attacks, at different levels of severity.
    I should explain the color coding here.
  • 4:31 - 4:37
    The stuff that's in brown is less security
    than claimed, which could be for instance
  • 4:37 - 4:41
    they claimed something would take
    2^128 operations to break and
  • 4:41 - 4:47
    somebody says "no, it's 2^100 or
    2^80". Does that really matter? Or
  • 4:47 - 4:49
    maybe a different direction is somebody
    says: "this is a system where you can
  • 4:49 - 4:54
    reuse the key any number of times", which
    we expect for normal crypto systems
  • 4:54 - 4:57
    that you can publish your key and people
    can keep sending you messages or you can
  • 4:57 - 5:01
    sign many times under some keys, but
    sometimes people claimed they had that
  • 5:01 - 5:06
    feature and they turned out to be wrong;
    those were attackable, like HILA5 in this
  • 5:06 - 5:11
    list, which is not necessarily kicking
    them out of the competition, because NIST
  • 5:11 - 5:15
    said, you can have a system with one-time
    keys that can be useful for some
  • 5:15 - 5:19
    applications. The things in red are,
    everything that they proposed, all the
  • 5:19 - 5:24
    concrete parameters are broken. The
    underlines are those where there are attack
  • 5:24 - 5:28
    scripts, Python scripts, Sage scripts,
    whatever it takes to demonstrate that,
  • 5:28 - 5:33
    yeah, look, here's your secret key,
    or decrypting something. So that was the
  • 5:33 - 5:39
    end of 2017. How about now. Well, OK,
    let's extrapolate, three days from now.
  • 5:39 - 5:45
    Probably the situation is at least the
    following. At least by the 28th of
  • 5:45 - 5:52
    December 2018, 22 submissions have
    attacks. So it's about a third of those 69
  • 5:52 - 5:58
    submissions, and you can see more, well,
    most, well, 13 of those, out of the 22,
  • 5:58 - 6:03
    are red, mostly with underlines. I think
    some of this is from us, a lot from other
  • 6:03 - 6:08
    people. And I think we did well early on
    establishing that people should put out
  • 6:08 - 6:11
    scripts to demonstrate that, yeah, you
    really can break these things. So again
  • 6:11 - 6:16
    the underlines are demonstrated attacks;
    some of the submitters have withdrawn the
  • 6:16 - 6:21
    broken submissions; some of them have not.
    TL: All right, so when you look at this,
  • 6:21 - 6:27
    we're not going to go through explaining
    those, but let me categorize these. So
  • 6:27 - 6:30
    when we look at the ones which are not
    completely smashed and broken, we can put
  • 6:30 - 6:34
    them into boxes, we can say "hey, what is
    the underlying mathematical problem",
  • 6:34 - 6:39
    "what do we hope is something the attacker
    has to do in order to break the
  • 6:39 - 6:43
    cryptosystem". And then there is one
    category of using error-correcting codes,
  • 6:43 - 6:48
    which is then used for building an
    encryption system or a signature scheme,
  • 6:48 - 6:52
    hash functions, you can only build
    signature schemes from, isogeny-based
  • 6:52 - 6:55
    crypto, for the competition we're only
    seeing an encryption system, and honestly
  • 6:55 - 7:00
    all the signatures we have seen so far are
    pretty inefficient. Lattices we see for
  • 7:00 - 7:05
    both, that is something we talked about
    last year, so if you want to get a full-
  • 7:05 - 7:09
    blown lattice explanation, go for last
    year's talk. And then finally, there's
  • 7:09 - 7:14
    something using systems in many variables,
    many equations, and we get signatures and
  • 7:14 - 7:19
    encryption from that. So that's one way of
    saying "Well, these are good things for
  • 7:19 - 7:23
    post-quantum". It does not mean that
    everything which isn't in these boxes is
  • 7:23 - 7:28
    secure. You can still design a system,
    which somehow relates to this math
  • 7:28 - 7:33
    problem, but the attacker can find a way
    around it. Some of those systems, RaCoSS I'll
  • 7:33 - 7:38
    get back to later, is a code-based system
    - code-based is on the list, first one up
  • 7:38 - 7:42
    there - so should be totally secure, and
    yet it's one of the red underlined
  • 7:42 - 7:48
    systems. So just being in one of the
    categories does not mean it's secure, but
  • 7:48 - 7:53
    it's at least a hopeful and helpful way
    to box things. There are other ways
  • 7:53 - 7:57
    of describing why this is the situation we
    have now.
  • 7:57 - 8:00
    DJB: As an example of this kind of
    categorization, sometimes people might
  • 8:00 - 8:05
    say: "Oh, lattice-based cryptography is
    the safe thing to do. All of that red was
  • 8:05 - 8:09
    people who were not using lattice-based
    cryptography and everything lattice-based
  • 8:09 - 8:14
    is good, everything else is scary". But if
    you look at the lattice-based systems,
  • 8:14 - 8:19
    it's not all black. There's some red stuff
    here. Compact LWE. That one is broken,
  • 8:19 - 8:22
    with a script, we're quite sure it's
    broken. There's some others with some
  • 8:22 - 8:25
    damage, DRS, HILA5. So it's not that the
    lattice-based submissions have held up
  • 8:25 - 8:30
    perfectly and also it's not just that
    these are isolated mistakes that people
  • 8:30 - 8:35
    have made. There's ongoing research which
    is making better and better lattice
  • 8:35 - 8:39
    attacks. For instance, some papers from
    last month and this month from the three
  • 8:39 - 8:45
    authors listed there are talking about
    lattice-based systems being broken by
  • 8:45 - 8:50
    decryption failures. Now this phenomenon,
    most of the lattice submissions have
  • 8:50 - 8:57
    occasional decryption failures. Once every
    2^64 ciphertexts maybe, you won't
  • 8:57 - 9:02
    be able to decrypt. Which might sound like
    "oh, it's no big deal, it's, maybe
  • 9:02 - 9:05
    occasionally some user has
    that happen and they'll just, the browser
  • 9:05 - 9:11
    will reconnect, whatever it takes, it's
    not a significant failure rate". Except
  • 9:11 - 9:16
    that those failures, if the attacker is
    trying to decrypt a particular cipher text
  • 9:16 - 9:20
    or maybe attack or figure out somebody's
    secret key, they can usually get that
  • 9:20 - 9:25
    information out of watching the pattern of
    decryption failures, if decryption
  • 9:25 - 9:29
    failures happen often enough. And these
    papers are saying, for two different
  • 9:29 - 9:33
    reasons, decryption failures are actually
    happening more often, maybe much more
  • 9:33 - 9:37
    often than people claim. So that's kind of
    scary.
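
    A rough illustration of why the failure rate matters, using the claimed
    2^-64 rate from above and a purely hypothetical higher rate and query
    count standing in for "much more often than people claim":

    ```python
    # Hypothetical numbers, only to show the effect of the failure rate: an
    # attacker who can submit many ciphertexts watches how many of them fail.
    claimed_rate = 2.0 ** -64      # failure rate claimed for the scheme
    higher_rate = 2.0 ** -40       # hypothetical "actual" rate, for illustration
    queries = 2 ** 45              # hypothetical number of attacker ciphertexts

    print(claimed_rate * queries)  # ~2e-6: essentially no failures to observe
    print(higher_rate * queries)   # 32.0: a usable pattern of failures starts
                                   # to leak information about the secret key
    ```
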
  • 9:37 - 9:43
    TL: All right. One explanation, which I of
    course like very much. I've been running a
  • 9:43 - 9:49
    European project, PQCRYPTO, and just use
    everything in our portfolio. It's good
  • 9:49 - 9:53
    right? Actually, it's looking better than
    the arguments saying: "I'll use everything
  • 9:53 - 9:56
    lattice-based. We have one which is
    slightly scratched, but everything else is
  • 9:56 - 10:01
    good. Right. Yay."
    DJB: Except, well, there's another
  • 10:01 - 10:06
    explanation than just, well, whoever these
    PQCRYPTO project people are, obviously the
  • 10:06 - 10:11
    right people, putting together a great
    portfolio. There's another possibility.
  • 10:11 - 10:15
    Not saying this is right, but you could
    imagine the cryptanalysts who are deciding
  • 10:15 - 10:21
    what to do out of that huge pile of 69
    submissions. Maybe they thought the people
  • 10:21 - 10:25
    who were in this project doing this stuff
    for some number of years are maybe not the
  • 10:25 - 10:30
    top targets. Maybe you should look at the
    other 49 submissions. Maybe you should
  • 10:30 - 10:34
    look at the submissions with the
    specification written in Microsoft Word.
  • 10:34 - 10:39
    Probably more likely to be broken. Maybe
    there's other ways that you can decide how
  • 10:39 - 10:42
    to - no offence to Microsoft people -
    TL: It worked.
  • 10:42 - 10:49
    DJB: yeah, that did work coincidentally.
    Yeah. So, the thing to keep in mind is
  • 10:49 - 10:53
    that there's a huge pile of submissions,
    more than a million words in these 69
  • 10:53 - 10:58
    submission documents, and this is like, a
    word in English is usually a kind of
  • 10:58 - 11:02
    imprecise thing; reviewing that is, I
    would say, more effort than reviewing a
  • 11:02 - 11:07
    million lines of code. This is a lot of
    stuff, lot of junk to go through. There's
  • 11:07 - 11:11
    a real flood, there's too much for
    all the people who care about attacking
  • 11:11 - 11:15
    systems to actually go through
    everything. So people are making a
  • 11:15 - 11:18
    selection, and maybe they just haven't
    bothered looking at these PQCRYPTO
  • 11:18 - 11:22
    submissions. So, if you want to actually
    have security review, it's really
  • 11:22 - 11:26
    important to keep this denial of service
    effect in mind, that the flood of
  • 11:26 - 11:31
    submissions has to be small enough that it
    can be handled by the number of people who
  • 11:31 - 11:33
    are looking at these submissions and
    evaluating the security.
  • 11:33 - 11:37
    TL: So, call for help, please join, please
    break. Don't break ours.
  • 11:37 - 11:42
    laughter
    TL: Now, one thing which in this audience
  • 11:42 - 11:46
    is a little funny, but in a normal
    academic conference where you talk to like
  • 11:46 - 11:51
    40 or 100 people: we actually
    have a lot of people now interested
  • 11:51 - 11:54
    in post-quantum cryptography. This was
    this year's conference on this, when Dan
  • 11:54 - 11:59
    and I started this in 2006, we were
    looking at an audience of 40. This is 350
  • 11:59 - 12:04
    people. Well, they would probably fit in
    here, but, well - for academics, this is
  • 12:04 - 12:08
    big. There's a huge interest right now.
    This was the combined conference together
  • 12:08 - 12:15
    with a NIST workshop. And so people are
    looking. So that's the good news. There's
  • 12:15 - 12:18
    more denial of service going on, I
    mentioned RaCoSS already as one which is
  • 12:18 - 12:22
    broken, but not withdrawn. Broken actually
    three different ways, where our last
  • 12:22 - 12:26
    message to them was basically "abandon all
    hope, this is not gonna work", but they
  • 12:26 - 12:31
    keep on hoping. If I zoom in there, they
    have, they are on the way to publishing
  • 12:31 - 12:35
    new parameters.
    laughter
  • 12:35 - 12:40
    TL: When he was reading it out, actually
    at the conference, he was saying "the keys
  • 12:40 - 12:46
    and signature size may be several dozens
    of terabytes". Well, there is some effect
  • 12:46 - 12:49
    that we're most likely not going to break
    it so easily, because when you're trying
  • 12:49 - 12:53
    to download several terabytes of data, it
    might not reconnect.
  • 12:53 - 12:59
    DJB: So, NIST is certainly aware of the
    fact that there's just this denial of
  • 12:59 - 13:04
    service on security analysis. And one of
    the ways that they tried to simplify the
  • 13:04 - 13:09
    picture is, said, after their conference
    they put out this call saying, "all right,
  • 13:09 - 13:14
    if you have similar submissions, see if
    you can merge them". And then hopefully
  • 13:14 - 13:17
    you get something which is, you know,
    going to be a combined submission that's
  • 13:17 - 13:21
    easier for us to analyze than these two
    separate submissions in the first place.
  • 13:21 - 13:25
    And they said this is an interesting
    little guarantee. They said "NIST will
  • 13:25 - 13:30
    accept a merged submission to the second
    round if either of the submissions being
  • 13:30 - 13:34
    merged would have been accepted". So you
    can imagine kind of attacking this if
  • 13:34 - 13:39
    there's a strong submission and you have a
    weak submission, like the strong one, they
  • 13:39 - 13:43
    surely have to accept that, and then you
    merge your weak submission into the strong
  • 13:43 - 13:47
    one if you can somehow convince the other
    team to do that, then your weak submission
  • 13:47 - 13:50
    is also going to get into the second
    round, but NIST said "well, you should
  • 13:50 - 13:55
    only merge submissions that are similar
    and the merged submissions should be like
  • 13:55 - 14:00
    a combination of the two original
    submissions". So that sounds kind of
  • 14:00 - 14:04
    reasonable. As an example, well, the first
    announcement of a merger - I don't think
  • 14:04 - 14:10
    NIST said that you have to publicly
    announce - but HILA5 merged with Round2,
  • 14:10 - 14:15
    and of course this was after the HILA5
    attack and part of the merger was fixing
  • 14:15 - 14:20
    the issue in that attack, and they formed
    Round5 and they said "Round5, this result
  • 14:20 - 14:25
    of the merge, is a leading lattice-based
    candidate in terms of security, bandwidth
  • 14:25 - 14:31
    and CPU performance". So, three weeks
    later, the security turned out to be kind
  • 14:31 - 14:36
    of a problem. Mike Hamburg said that here
    is a very strong reason to believe that
  • 14:36 - 14:43
    decryption failures are much much more
    likely than what they claimed, and as a
  • 14:43 - 14:49
    result of that, and they they accepted the
    argument and said yeah, oops. As a result
  • 14:49 - 14:51
    of that, like I mentioned before,
    decryption failures are something that
  • 14:51 - 14:57
    attackers can use to break security. And
    it's not that a full attack was
  • 14:57 - 15:02
    implemented, but it's pretty clear that
    the attack would work, and this is also an
  • 15:02 - 15:06
    interesting attack because the mergers
    were supposed to just take, like,
  • 15:06 - 15:10
    the best features of your two submissions,
    that you're merging. But this was a
  • 15:10 - 15:14
    mistake. The vulnerability that Hamburg
    exploited was a mistake that was not in
  • 15:14 - 15:19
    either of the submissions that were being
    put together. So there's some process of
  • 15:19 - 15:24
    break and fix and merge, making more
    mistakes, which get broken and then fixed
  • 15:24 - 15:28
    and, well, what was the fix. They said,
    "oh, here's a proposed fix". They're
  • 15:28 - 15:32
    "looking at the security proof
    adjustments", there will be a Round5 real
  • 15:32 - 15:36
    proposal, their actual merge will be in
    the future. I think now they have Round5a,
  • 15:36 - 15:43
    Round5b and Round5c, where A is broken, B
    is questionable, C is still not defined.
  • 15:43 - 15:49
    What does a
    security proof mean, if you previously had a
  • 15:49 - 15:53
    security proof that they were
    adjusting, and the security proof is for
  • 15:53 - 15:58
    something that is not actually secure.
    Very strange. More merger announcements,
  • 15:58 - 16:02
    post-quantum RSA encryption and post-
    quantum RSA signatures merged to form
  • 16:02 - 16:06
    post-quantum RSA, saying that that "is a
    leading candidate in terms of depth of
  • 16:06 - 16:11
    security analysis, amount of network
    traffic, and flexibility". For people not
  • 16:11 - 16:17
    familiar with post-quantum RSA, this means
    using RSA with gigabyte or terabyte keys,
  • 16:17 - 16:20
    which is leading in the amount of network
    traffic.
  • 16:20 - 16:22
    laughter
    applause
  • 16:22 - 16:28
    DJB: We want the internet to have as much
    cryptography as possible.
  • 16:28 - 16:32
    TL: Use more bandwidth.
    DJB: Yeah. Remember, if you're
  • 16:32 - 16:39
    measuring the amount of encrypted data on
    your network, this increases that. More
  • 16:39 - 16:42
    mergers, more mergers, more mergers. Some
    of these are kind of gluing submissions
  • 16:42 - 16:47
    together in a way that does not simplify
    the security analysis. But this last
  • 16:47 - 16:51
    one is a good example, I would say, of a
    merge entry. NTRU-HRSS and NTRUEncrypt,
  • 16:51 - 16:55
    two of the NTRU-based submissions, they
    actually put some thought into what they
  • 16:55 - 16:59
    wanted to keep and what they wanted to
    throw away. And so analyzing the merge is
  • 16:59 - 17:07
    easier than analyzing both of the
    initial submissions. After the November
  • 17:07 - 17:12
    deadline for mergers, NIST said they will
    announce the second round candidates.
  • 17:12 - 17:17
    Maybe it'll be, probably less than 30,
    some hints it'll be 25 or even 20,
  • 17:17 - 17:20
    candidates, and maybe that's starting to
    get down to a small enough flood that we
  • 17:20 - 17:25
    can start seriously analyzing what's left.
    They're going to announce that, I think on
  • 17:25 - 17:29
    exactly January 10th they're scheduled to
    do that, and then a week after this
  • 17:29 - 17:34
    announcement they said, "Well, the
    government might be shutting down, and in
  • 17:34 - 17:38
    that case we're not allowed to do any
    work, so we're going to be completely
  • 17:38 - 17:43
    silent in case of a shutdown". It's
    important in the U.S. government, during
  • 17:43 - 17:48
    shutdown there's a definition of essential
    personnel, like NSA, and non-essential
  • 17:48 - 17:51
    personnel, like people protecting us
    against NSA.
  • 17:51 - 17:54
    laughter
    DJB: - and only the essential personnel
  • 17:54 - 17:59
    are allowed to do work. You know what else
    is not allowed to do work? The backend
  • 17:59 - 18:07
    database for NIST's web pages, which might
    sound a little bit weird, although maybe
  • 18:07 - 18:12
    they're paying Oracle for the backend
    database, and they have to turn off the
  • 18:12 - 18:16
    payments to Oracle. I don't know what's
    going on here, but if you look for the
  • 18:16 - 18:22
    competition information, you can't find it
    on their web pages anymore. We're not
  • 18:22 - 18:25
    quite sure how long this shutdown is going
    to last. Of course, there are some people
  • 18:25 - 18:32
    who say that this is not a problem,
    because we can figure out without this
  • 18:32 - 18:36
    competition how to protect ourselves
    against quantum computers.
  • 18:36 - 18:39
    laughter
    TL: All right, now that we have aliens
  • 18:39 - 18:44
    already, are quantum computers actually
    coming? Big question, we don't know. What
  • 18:44 - 18:49
    we can monitor is progress in quantum
    computing and just middle December, there
  • 18:49 - 18:54
    was a news item from IonQ, which is a
    small startup company, announcing their
  • 18:54 - 19:01
    largest ever quantum computer based on ion
    traps. All the other quantum computers
  • 19:01 - 19:05
    we've seen so far, which are of this size,
    like 40 to 70, they were using
  • 19:05 - 19:10
    superconducting qubits. So, it is again a
    race between different technologies, but
  • 19:10 - 19:14
    both of them are advancing, and there are
    some more which are growing. So, it looks
  • 19:14 - 19:18
    like it's coming. Whenever I see that
    picture like this, I'm reminded of a nice
  • 19:18 - 19:24
    joke from a colleague of mine, Steven
    Galbraith.
  • 19:24 - 19:31
    laughter
    TL: Can you distinguish? Yep. So, with all
  • 19:31 - 19:35
    these news coming up, the National Academy
    of Sciences in the US has been
  • 19:35 - 19:38
    interviewing people for about the last
    year and a half. So they got people in
  • 19:38 - 19:42
    physics and engineering, building quantum
    computers, people doing quantum
  • 19:42 - 19:46
    algorithms, people doing quantum error-
    correcting codes, and putting all of this
  • 19:46 - 19:51
    together into a report, which just came
    out, where they look into, well, what is
  • 19:51 - 19:56
    the progress and what are the prospects.
    So the first part of key findings, the
  • 19:56 - 20:02
    first one is the good news, saying don't
    panic. We do not expect that anything is
  • 20:02 - 20:08
    going to happen in the next 10 years which
    will threaten RSA 2048 or similar, where I
  • 20:08 - 20:13
    assume what they mean, well, elliptic
    curves, discrete logs on finite fields. So
  • 20:13 - 20:17
    that's the good news. But they wouldn't
    have just one key finding, it actually
  • 20:17 - 20:24
    goes on: two, three, ..., ten; by the time they
    reach 10, I think this is panic. So what
  • 20:24 - 20:28
    they say is, well, it takes forever to
    roll out these things. The hazard of such
  • 20:28 - 20:35
    a machine is high enough. And then, "the
    deployment of post-quantum cryptography is
  • 20:35 - 20:39
    critical for minimizing the chances of a
    potential security and privacy disaster".
  • 20:39 - 20:43
    These are strong words from the National
    Academy of Sciences.
  • 20:43 - 20:49
    DJB: So, OK, can we deploy post-quantum
    cryptography? Is it deployable? Well, some
  • 20:49 - 20:55
    people would say we've already deployed
    it, but maybe that doesn't include the
  • 20:55 - 20:59
    NIST submissions, so let's look at the
    deployability of the NIST submissions. The
  • 20:59 - 21:04
    main thing that matters for deployment in
    most applications, the main problem for
  • 21:04 - 21:10
    post quantum cryptography, is the sizes.
    So, here's a picture of the night sky over
  • 21:10 - 21:16
    Leipzig. No, this is a picture where on the
    horizontal axis is the size of your public
  • 21:16 - 21:20
    key for a bunch of signature systems, not
    all of the signature systems, for instance
  • 21:20 - 21:25
    WalnutDSA, which is broken with a script
    in the first five versions -
  • 21:25 - 21:28
    TL: Post-quantum RSA is missing.
    DJB: Yeah, post-quantum RSA is also
  • 21:28 - 21:30
    omitted from this, sorry.
    TL: It's kind of, of
  • 21:30 - 21:36
    laughter
    DJB: Yeah, that would be like, I'm one of
  • 21:36 - 21:38
    the designers of post-quantum RSA by the
    way.
  • 21:38 - 21:41
    TL: I'm not.
    DJB: It's the future of cryptography.
  • 21:41 - 21:47
    laughter
    DJB: That was good. Yeah. So what you can
  • 21:47 - 21:51
    see here is, for example this "gui"
    submission, this has, you can get
  • 21:51 - 21:57
    the vertical axis, the signature size, down to
    just 32 bytes or 35 bytes, something like
  • 21:57 - 22:03
    that, but you need something like 400,000
    bytes in your public key. And then there's
  • 22:03 - 22:07
    three different dots for gui. Those are
    three different security levels, and maybe
  • 22:07 - 22:10
    all the different submissions here are
    maybe not exactly comparable on the
  • 22:10 - 22:14
    security levels. It should be a three
    dimensional graph, if we measure
  • 22:14 - 22:17
    everything properly by exactly how secure
    it is, which of course we're not quite
  • 22:17 - 22:22
    sure about until there's been enough
    security analysis. You can see, various
  • 22:22 - 22:26
    different trade-offs are possible, none of
    which are down where we want to be with
  • 22:26 - 22:30
    things like under 100 bytes for your
    public key and under 100 bytes for your
  • 22:30 - 22:34
    signature, which is what we're used to
    right now. That's what elliptic curve
  • 22:34 - 22:39
    crypto gives us, is signature sizes and
    public key sizes, which are both below 100
  • 22:39 - 22:43
    bytes. And that's something you can fit
    into your applications much more easily
  • 22:43 - 22:50
    than, say, 100,000 byte public keys or
    10,000 byte signatures. There's various
  • 22:50 - 22:52
    different trade-offs, and maybe your
    application can handle some of that, but
  • 22:52 - 22:57
    there's nothing that's just really small,
    which is what we're used to right now.
  • 22:57 - 23:02
    Another more complicated graph. This is
    for encryption and showing more
  • 23:02 - 23:07
    candidates. Well, there are more
    encryption submissions. This is still not
  • 23:07 - 23:11
    all of the encryption submissions, but a
    representative sample. And you can see
  • 23:11 - 23:18
    that, well, there's still no really great
    sizes here. The best in terms of sizes is
  • 23:18 - 23:24
    "sike", supersingular isogeny keyexchange,
    which is something like 400, little less
  • 23:24 - 23:28
    than 400 bytes for the public key and for
    the cipher text, and then it starts
  • 23:28 - 23:32
    getting bigger from there. And you get, on
    this graph you get things up to megabyte
  • 23:32 - 23:38
    or bigger. You can get a little below
    300-400 bytes, you can get down to 100 or
  • 23:38 - 23:43
    so bytes for the cipher text, as long as
    you are willing to accept a public key
  • 23:43 - 23:47
    that's much, much bigger, with some of
    these code-based systems. And then just to
  • 23:47 - 23:51
    zoom in on some of the smaller ones, you
    can start seeing where some different
  • 23:51 - 23:57
    candidates are. This is everything which
    is public key and cipher text below 1280
  • 23:57 - 24:03
    bytes. And again, you see "sike" down
    there, a little below 400 bytes, and then
  • 24:03 - 24:07
    some other possibilities. But well, what
    are the security levels of these things?
  • 24:07 - 24:10
    Could all of these be broken? There's not
    actually that many of them, how many of
  • 24:10 - 24:14
    these have actually been studied? It's
    kind of scary. And again, much bigger
  • 24:14 - 24:20
    sizes than we're used to in cryptography.
    TL: So, yes, size does matter.
  • 24:20 - 24:24
    laughter
    TL: So, Google and Cloudflare this year in
  • 24:24 - 24:28
    April were saying, well, we don't really
    know what the outcome of this competition
  • 24:28 - 24:33
    is going to be, but we have some
    categories of different crypto systems,
  • 24:33 - 24:38
    and let's just send dummy packets of data
    of respective sizes, and see what happens
  • 24:38 - 24:41
    when we do this on the Internet. Now this
    is Google and Cloudflare, so they
  • 24:41 - 24:45
    were doing this on the Chrome browser for
    connections that go through Cloudflare, so
  • 24:45 - 24:50
    they could actually see from where it
    came, where it ended, whether it came
  • 24:50 - 24:54
    back, whether it dropped. Dan mentioned
    "sike", so this is the one category of
  • 24:54 - 24:59
    supersingular isogenies, those are just
    400 bytes. And that was pretty much fine,
  • 24:59 - 25:03
    so when you look at the first column, you
    have a small latency increase. You also
  • 25:03 - 25:13
    see the inaccuracy, that's a -0.2. So, the
    numbers are mostly correct. Then there is,
  • 25:13 - 25:15
    in the lattices, this was the zoom-in that
    Dan was showing, those are mostly
  • 25:15 - 25:19
    something that could work: we have
    structured lattices, those are around the
  • 25:19 - 25:27
    MTU, so 1,100-something bytes. And that
    mostly worked, some small increases,
  • 25:27 - 25:31
    but under 20 percent, so this is also
    something we feel like, yes, you could
  • 25:31 - 25:36
    actually deploy this on the Internet. Then
    a different category still within lattices
  • 25:36 - 25:41
    are unstructured lattices. Those would
    come with 10 kilobytes of data for the
  • 25:41 - 25:46
    public key. And there they just noticed
    that too many pages, including, like, top
  • 25:46 - 25:53
    pages on Alexa 500, such as LinkedIn, were
    just dropping. They tried, funny enough,
  • 25:53 - 25:59
    9999, where fewer pages were dropping, so
    10K was worse than 9999, but even then
  • 25:59 - 26:03
    LinkedIn was still dropping. And so they
    decreased it to a third, as basically a
  • 26:03 - 26:10
    placeholder, measured with these 3300
    bytes, and then scaled up by a factor of
  • 26:10 - 26:18
    three. Now, for those increases in latency
    they said, well, this is not
  • 26:18 - 26:21
    acceptable. So then for the next
    experiments, they were only looking at
  • 26:21 - 26:27
    isogenies and structured lattices.
    Isogenies are also special, not just being
  • 26:27 - 26:32
    the smallest, but also being the slowest.
    Well okay, not absolutely the slowest, but
  • 26:32 - 26:39
    it's the only system where the speed is
    much more an issue than the size. And so
  • 26:39 - 26:41
    despite Google having quite a few
    computers, they were saying we can't
  • 26:41 - 26:46
    actually use isogenies for the speed
    reasons. Size would be awesome, but speed,
  • 26:46 - 26:52
    not so sure. It's also a relatively recent
    system, it's just from 2012. So maybe also
  • 26:52 - 26:56
    it's a security question. So just now in
    December, they announced that they're
  • 26:56 - 27:00
    actually building a new experiment, they
    announced which candidate they have
  • 27:00 - 27:03
    chosen, NTRU-HRSS, which Dan just
    mentioned was also one of the recent
  • 27:03 - 27:07
    mergers. And so this is one of the
    structured lattices systems, designers are
  • 27:07 - 27:14
    Andreas Hülsing, Joost Rijneveld, Peter
    Schwabe and John Schanck. Great score for the
  • 27:14 - 27:21
    Eindhoven team, current professor, former
    student, and then some collaborators. And
  • 27:21 - 27:25
    so they're now building a system which is
    a combined elliptic curve, so that's the
  • 27:25 - 27:29
    combined EC, that's combined elliptic
    curves and post-quantum. This is the
  • 27:29 - 27:34
    second round running, and so this would
    come to some Internet browser near you
  • 27:34 - 27:39
    soon. Another nice result, I mean this is
    not the only thing up there. The IETF this
  • 27:39 - 27:44
    year finished standardizing XMSS, which is
    a hash based signature system. It's been
  • 27:44 - 27:46
    in the making, down there you see the
    timeline, for
  • 27:46 - 27:51
    three years. It's not really fast, but
    it's also the first of its kind. This was
  • 27:51 - 27:56
    the first time that IETF has published a
    post-quantum RFC, request for comments,
  • 27:56 - 27:59
    but that's basically their standards. And
    so there's a lot of boilerplate text,
  • 27:59 - 28:03
    which was developed in this thing, in the
    process of making the standard, which is
  • 28:03 - 28:08
    dealing with, yeah, post-quantum, quantum
    attacks and so on, how you should handle
  • 28:08 - 28:13
    it. XMSS is interesting. It's not one of
    the NIST submissions, because it doesn't
  • 28:13 - 28:17
    satisfy the normal thing that you learn in
    primary school, what a signature should be
  • 28:17 - 28:23
    doing. Signature you expect to have a
    public key. You use it to sign something,
  • 28:23 - 28:29
    and then you have a signature. XMSS, you
    have a public key, you have a state. You
  • 28:29 - 28:34
    get a message, you sign it, and you update
    your state. And if you ever forget to
  • 28:34 - 28:40
    update your state, you will lose security.
    So it's something which is not as cool as
  • 28:40 - 28:43
    a normal signature scheme, but there are
    also many applications where you actually
  • 28:43 - 28:47
    know how many signatures you've made. I
    mean, if you're doing operating system
  • 28:47 - 28:52
    updates, you better know how often you got
    your key out of the drawer and used it. So
  • 28:52 - 28:56
    it is not impossible to use, but it might
    not be exactly what you want for your web
  • 28:56 - 29:00
    applications.
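
    A minimal sketch of the statefulness just described, with the real XMSS
    machinery replaced by a hash placeholder; only the "sign, then update the
    state, never reuse an index" discipline is the point:

    ```python
    import hashlib

    class StatefulSigner:
        def __init__(self, secret_seed, max_signatures):
            self.secret_seed = secret_seed
            self.max_signatures = max_signatures
            self.next_index = 0              # the state: signatures made so far

        def sign(self, message):
            if self.next_index >= self.max_signatures:
                raise RuntimeError("key exhausted")
            index = self.next_index
            # Update the state before releasing the signature. Forgetting this,
            # or restoring the key from an old backup, reuses a one-time leaf
            # and loses security, exactly as described above.
            self.next_index += 1
            # Placeholder standing in for the real hash-based signature.
            return hashlib.sha256(
                self.secret_seed + index.to_bytes(4, "big") + message
            ).digest()

    signer = StatefulSigner(b"secret seed", max_signatures=1024)
    sig = signer.sign(b"operating system update v1")
    ```
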
    DJB: Good thing about XMSS is still, if
  • 29:00 - 29:04
    you can count, then the size is much
    smaller than the signature systems that we
  • 29:04 - 29:10
    were looking at before. Another size
    advantage, size advance is something
  • 29:10 - 29:13
    called Glowstick. I should explain the
    name. This is starting from a lattice
  • 29:13 - 29:19
    submission called Saber, which is one of
    the unbroken ones, and Saber has a big
  • 29:19 - 29:23
    version called FireSaber, high security
    level, scaled up. It also has a small
  • 29:23 - 29:29
    version called LightSaber.
    laughter and groans
  • 29:29 - 29:35
    DJB: And this Glowstick is the even
    smaller version. It's like, let's scale it
  • 29:35 - 29:40
    down as far as we can so that it's not
    quite broken, and there's various
  • 29:40 - 29:45
    technical details and it hasn't been
    broken in the months since it's been
  • 29:45 - 29:50
    proposed, the six months or so, and it is
    interesting. I mean, it is something which
  • 29:50 - 29:54
    is a good challenge. It's nice to have
    these scaled down problems, so we can try
  • 29:54 - 29:58
    different ways of attacking these things
    and, for people who like breaking stuff, it's
  • 29:58 - 30:01
    good to have the simpler systems to
    practice doing attacks, and that gives you
  • 30:01 - 30:04
    some insight into what could work against
    the larger systems.
  • 30:04 - 30:10
    TL: All right. So, since we're coming to
    funny names... Oh, no, since we're coming
  • 30:10 - 30:16
    to sizes, why do we still care about big
    sizes? I mean, people are scaling things
  • 30:16 - 30:20
    down. Google says, oh, we don't like big
    sizes. So why do people say post-quantum
  • 30:20 - 30:25
    systems are bigger and we still care about
    it? So one of the reasons is, well,
  • 30:25 - 30:29
    highlighting McEliece here, which is our
    submission in this competition, these have
  • 30:29 - 30:35
    had a lot of analysis. So classic McEliece
    is based on a system from 1978, basically
  • 30:35 - 30:40
    unchanged except for some changes where we
    can really prove this is the same as that.
  • 30:40 - 30:44
    It has one of the shorter ciphertexts, just
    200 bytes, so that's actually kind of
  • 30:44 - 30:51
    tolerable on the Internet, but a megabyte
    of key. Key generation is also pretty
  • 30:51 - 30:56
    slow, but then, well, it's nowadays
    called encapsulation and decapsulation,
  • 30:56 - 30:59
    because all you want is your AES key, you
    don't want to actually encrypt the
  • 30:59 - 31:05
    message, but it's the basic thing of encryption and
    decryption. So, nice system, very
  • 31:05 - 31:12
    good history in security, pretty fast,
    pretty small, except for the public keys.
  • 31:12 - 31:19
    It's like, "Grandma, why do you have such
    big keys?". Why are these keys so big? So
  • 31:19 - 31:22
    one thing is, it's like a two dimensional
    key. We have this big matrix there, what
  • 31:22 - 31:28
    you see on the left is an identity matrix.
    This thing has about 7000 columns. It's
  • 31:28 - 31:33
    like pretty long. The height is only
    n-k, which is just like one thousand
  • 31:33 - 31:40
    five hundred, so it's really long and
    stretched. And so all of this part on your
  • 31:40 - 31:47
    right-hand side, that part is random and
    you have to send that. You can of course,
  • 31:47 - 31:50
    everybody remembers what an identity
    matrix looks like. You can forget about
  • 31:50 - 31:55
    this one, but this one you have to send,
    because the encryption works by the sender
  • 31:55 - 32:01
    thinking about which of those around 7000
    columns he wants to pick, and then just
  • 32:01 - 32:06
    XORing them up, and for doing that, well,
    you need to have this big matrix.
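
    A toy sketch of the "pick some columns and XOR them up" idea just
    described, using the dimensions mentioned in the talk (height n-k = 1547,
    random part 5413 columns wide); the matrix here is random toy data, not a
    real Classic McEliece key, and details such as the fixed error weight and
    the KEM wrapping are omitted:

    ```python
    import secrets

    ROWS, COLS = 1547, 5413                # height n-k, width of the random part
    print(ROWS * COLS / 8 / 1e6, "MB")     # about 1.05 MB of public key

    # Each column is a 1547-bit value; the whole random part is the big key.
    columns = [secrets.randbits(ROWS) for _ in range(COLS)]

    def encapsulate(selected_positions):
        """XOR together the columns the sender picked (identity part implied)."""
        acc = 0
        for j in selected_positions:
            acc ^= columns[j]
        return acc

    ciphertext_part = encapsulate([3, 17, 4242, 5000])
    ```
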
  • 32:06 - 32:14
    And if you calculate, well, 1547 times 5413 bits,
    that's the right-hand part of the matrix,
  • 32:14 - 32:19
    you're getting to this one megabyte size.
    Now what are the issues with having big keys?
  • 32:19 - 32:25
    It's bandwidth, but honestly, when
    you download a picture, it's also a megabyte.
  • 32:25 - 32:29
    So, it might not be so bad. I mean, if
    you're on the German trains then you will
  • 32:29 - 32:31
    hate it, but -
    laughter
  • 32:31 - 32:37
    TL: - normally, elsewhere in the world or
    on your mobile, it's fine. Google was
  • 32:37 - 32:44
    saying, we actually excluded some systems,
    so they didn't experiment with classic
  • 32:44 - 32:49
    McEliece; the largest they looked at was
    10,000 bytes, and even then some dropped,
  • 32:49 - 32:54
    and they said, well, some are too
    large to be viable within TLS.
  • 32:54 - 33:00
    So they just said, well, we don't do this. But
    then again, you have a secure system.
  • 33:00 - 33:04
    You can just also design a secure protocol to
    go with it. We don't need to stick with TLS.
  • 33:04 - 33:09
    But there is a real problem with
    having a megabyte of key, because if your
  • 33:09 - 33:15
    protocol assumes that the client generates
    this one megabyte and then just throws it
  • 33:15 - 33:20
    at the server, and the server has to
    accept one megabyte from every single
  • 33:20 - 33:24
    client that is throwing a megabyte at it,
    and then has to do something with it,
  • 33:24 - 33:28
    well, this is really an invitation for a
    denial of service attack, because you're
  • 33:28 - 33:33
    allocating memory on the server for doing
    these operations. The operations are pretty fast,
  • 33:33 - 33:40
    it's just XORing 0s and 1s, but you have
    to allocate 1 megabyte for each client.
  • 33:40 - 33:43
    And so this is a problem, no matter what
    protocol we design,
  • 33:43 - 33:48
    we have to deal with the possibility of
    denial of service attacks, or avoid them.
  • 33:48 - 33:53
    So, can servers avoid
    storing these big keys? You want to XOR
  • 33:53 - 33:58
    these columns, so one of the first
    implementations on small devices was saying,
  • 33:58 - 34:05
    well, "I'm a very small device, but I can
    pick those positions and then, outside
  • 34:05 - 34:11
    world, please be nice to me and spoon feed
    me one column at a time". So the small
  • 34:11 - 34:18
    device memorizes 1500 bits, not so big,
    and then gets the next column.
  • 34:18 - 34:23
    If it was selected, it XORs it in, if it wasn't
    selected, well, it keeps the intermediate
  • 34:23 - 34:29
    state. So this works. And at the end, you
    output the normal ciphertext. But what we
  • 34:29 - 34:34
    have here is, we're operating in a
    friendly environment where we do not
  • 34:34 - 34:39
    expect this outside world to do something
    nasty to us, and also we have some memory.
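
    A sketch of the spoon-feeding idea just described, continuing the toy
    `columns` example from above: the small device keeps only its chosen
    positions and an (n-k)-bit running XOR, and is fed one column at a time:

    ```python
    class StreamingEncapsulator:
        def __init__(self, selected_positions):
            self.selected = set(selected_positions)
            self.position = 0          # which column we expect next
            self.acc = 0               # ~1500-bit intermediate state

        def feed_column(self, column_bits):
            if self.position in self.selected:
                self.acc ^= column_bits    # selected: XOR it in
            # not selected: keep the intermediate state unchanged
            self.position += 1

        def result(self):
            return self.acc            # the normal ciphertext part at the end

    dev = StreamingEncapsulator([3, 17, 4242, 5000])
    for col in columns:                # spoon-fed by the friendly outside world
        dev.feed_column(col)
    assert dev.result() == ciphertext_part
    ```
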
  • 34:39 - 34:46
    Now if we put this on the real Internet
    and we don't want to have any state, so we
  • 34:46 - 34:49
    cannot memorize these 1500 bits, because, well,
    we don't know when the next column is
  • 34:49 - 34:56
    going to come, so we would have to output and send this
    back to the client. That's not gonna work.
  • 34:56 - 35:03
    When you tell the client, "Oh, this
    is my current result", then the server
  • 35:03 - 35:08
    gets the next column, maybe XORs it in,
    maybe not, sends it back to the client,
  • 35:08 - 35:12
    anybody who is watching this traffic could
    see whether there was a change or there
  • 35:12 - 35:17
    was no change. So this is not a way of
    dealing with it. So what Dan and I have
  • 35:17 - 35:21
    been busy with, and, well, I put 2018 with
    a question mark, we still have like three
  • 35:21 - 35:27
    days, right? So we're working on this
    system called McTiny, tiny because it's
  • 35:27 - 35:33
    made for tiny web servers, where we assume
    that this web server is not allocating
  • 35:33 - 35:39
    any per-client storage, any per-client state.
    And so, we again work with spoon feeding
  • 35:39 - 35:43
    things, but we're making sure that
    everything that the server gets and sends
  • 35:43 - 35:49
    out is encrypted, is authenticated, there
    is some stuff to avoid replay attacks, so
  • 35:49 - 35:54
    that somebody can't just say, "oh, what if
    I change the column here or there". So all
  • 35:54 - 35:59
    these things are encrypted, and what we do
    is we use properties of doing these sums
  • 35:59 - 36:06
    in pieces by chunking up this big matrix
    into chewable pieces that are small enough
  • 36:06 - 36:11
    to fit in one MTU and still have some
    space for some cookies. So this is similar
  • 36:11 - 36:17
    to normal uses of cookies, this is a
    cookie, encrypted to the server, sent to
  • 36:17 - 36:21
    a client: you handle the storage,
    and then the client sends the next piece and
  • 36:21 - 36:27
    sends the old cookie. Now, the cookie is
    encrypted, but the way that the key is
  • 36:27 - 36:32
    handled is the same for all clients. So
    there is no per-client storage of any
  • 36:32 - 36:36
    keys. It's a symmetric key, it's pretty
    small. So that's the one thing that the
  • 36:36 - 36:40
    server remembers, and then it gets a
    packet, and from this cookie part it recovers
  • 36:40 - 36:45
    all the, like, which columns to pick,
    what's the intermediate result, and then
  • 36:45 - 36:50
    does some computation, sends it back. So,
    the result of this is that we need several
  • 36:50 - 36:55
    round trips, but there's absolutely no
    per-client state on the server.
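
    This is not the actual McTiny protocol, just a minimal sketch of the
    stateless-server idea described above, continuing the toy example: the
    server keeps a single symmetric key, and everything it would otherwise
    have to remember per client (which columns to pick, the running XOR)
    travels back and forth inside an encrypted, authenticated cookie. Fernet
    is only a stand-in for whatever authenticated encryption a real design
    would use:

    ```python
    import pickle
    from cryptography.fernet import Fernet

    SERVER_KEY = Fernet.generate_key()     # the only thing the server stores
    box = Fernet(SERVER_KEY)

    def fresh_cookie(selected_positions):
        state = {"selected": sorted(selected_positions), "pos": 0, "acc": 0}
        return box.encrypt(pickle.dumps(state))

    def handle_packet(cookie, key_chunk):
        """One request: recover state from the cookie, process one chunk of
        the client's big public key, send back an updated cookie."""
        state = pickle.loads(box.decrypt(cookie))   # authenticated, so only
        for column in key_chunk:                    # server-made cookies work
            if state["pos"] in state["selected"]:
                state["acc"] ^= column
            state["pos"] += 1
        return box.encrypt(pickle.dumps(state))     # no per-client state kept

    cookie = fresh_cookie([3, 17, 4242, 5000])
    for chunk in (columns[i:i + 64] for i in range(0, len(columns), 64)):
        cookie = handle_packet(cookie, chunk)       # several round trips
    ```
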
  • 36:55 - 37:01
    DJB: Of course you could say, well, there
    is still all that bandwidth, and what if
  • 37:01 - 37:06
    you do have bandwidth problems, but some
    people say that we're familiar with
  • 37:06 - 37:12
    sending a lot of data around, so that's
    really not a big deal. Something else that
  • 37:12 - 37:16
    could interfere with deployment is
    patents. Now Tanja mentioned before that
  • 37:16 - 37:20
    classic McEliece does not have patents,
    but what if somebody says, "I just don't
  • 37:20 - 37:23
    want to handle the megabyte", or for
    whatever reason people want something
  • 37:23 - 37:29
    smaller or there are signature questions.
    Well, we have a lot of information about
  • 37:29 - 37:33
    some systems which are patented. The 18
    systems shown here, because NIST had as
  • 37:33 - 37:38
    one of the rules for the competition that
    you had to deliver statements to them,
  • 37:38 - 37:43
    signed by every member of the submission
    team, saying either "we do not have
  • 37:43 - 37:47
    patents, patent applications on our
    submission", or, "here's the patents,
  • 37:47 - 37:50
    patent applications, here's their
    numbers". And so, OK, as a result of that,
  • 37:50 - 37:54
    NIST, after checking they had a
    complete pile of statements, put them
  • 37:54 - 37:58
    online, so now we know that these are
    exactly the 18 submissions where the
  • 37:58 - 38:02
    submission teams claim patents on their
    submissions, including for example Compact
  • 38:02 - 38:08
    LWE and DME and WalnutDSA, which are
    rapidly broken by scripts that are online.
  • 38:08 - 38:12
    And RLCE-KEM, that one has half of the
    parameters broken, there's another
  • 38:12 - 38:16
    half which are not broken. It's not that
    the patented submissions are somehow
  • 38:16 - 38:20
    better than the rest of the submissions,
    but well, for some reason people think
  • 38:20 - 38:24
    they can make money off of patents, and
    maybe they're not actually so wrong,
  • 38:24 - 38:28
    because you can't just throw away these 18
    submissions and say "that's the end of it".
  • 38:28 - 38:38
    The problem is that there's some
    patents which cover more submissions. Now,
  • 38:38 - 38:45
    NIST does not require these submitters to
    say,
  • 38:45 - 38:51
    submissions are covered by my patent". The
    submitters are only required to say
  • 38:51 - 38:55
    something about their own submissions, and
    also NIST has no way to say anything about
  • 38:55 - 38:59
    whatever random patent trolls that are out
    there, that have not submitted anything.
  • 38:59 - 39:03
    They can't impose any rules on that. I
    mean, of course you can try doing patent
  • 39:03 - 39:07
    searches, but you won't necessarily find
    things, for instance this patent. Nobody
  • 39:07 - 39:15
    noticed it until it was revealed, well, by
    a member of some submission team, this
  • 39:15 - 39:20
    patent was issued in 2015, at the top
    there, which might make you think, "oh, if
  • 39:20 - 39:23
    something was published before 2015, it
    would be okay", and some submissions were
  • 39:23 - 39:28
    published earlier. But what's important is
    the date down at the bottom here, which is
  • 39:28 - 39:34
    the priority date of February 18th 2010.
    If you look on Google Patents, one good
  • 39:34 - 39:38
    thing is they put the priority date pretty
    far up, where you can easily see it. What
  • 39:38 - 39:43
    this means is that, in order to be prior
    art for this patent, well, you have to
  • 39:43 - 39:47
    check what exactly they filed in 2010.
    They might have made later changes,
  • 39:47 - 39:51
    but the 2010 thing, assuming that has all
    the same stuff as the patent, which it's
  • 39:51 - 39:57
    possible to find out, anything that
    was published after 2010 is not prior art
  • 39:57 - 40:02
    for this. Now what's really scary about
    this patent, and I hope that really soon
  • 40:02 - 40:06
    I'm gonna have analysis online of which
    submissions are covered by which patents
  • 40:06 - 40:10
    of all the patents I've seen. This one is
    by far the scariest, because this one
  • 40:10 - 40:15
    covers a whole bunch of submissions. This
    one covers basically every submission
  • 40:15 - 40:21
    which is using what's called the LPR
    cryptosystem, a ring-LWE lattice-based
  • 40:21 - 40:26
    crypto system. This is a very popular type
    of lattice-based crypto system, which was
  • 40:26 - 40:36
    published by L, P, and R, Lyubashevsky,
    Peikert, and Regev, in May 2010, which is
  • 40:36 - 40:44
    after this patent application was filed.
    Now, there was a talk in April which had
  • 40:44 - 40:49
    the same stuff from LPR, and it seems like
    there might even have been a talk in
  • 40:49 - 40:54
    January from LPR, but they didn't put the
    slides online, and then, well, it starts
  • 40:54 - 40:58
    getting into interesting questions of
    patent law. This looks like a very
  • 40:58 - 41:02
    strong patent, covering a whole lot of
    submissions, and there's more cases,
  • 41:02 - 41:06
    there's a whole company called ISARA that
    is specializing in planting landmines,
  • 41:06 - 41:10
    patent landmines, around things that other
    people are doing, sometimes on things that
  • 41:10 - 41:14
    other people have already published, and
    then you get a court fight about it. This
  • 41:14 - 41:18
    is gonna be a problem; it's something we
    really have to watch out for, is what is
  • 41:18 - 41:24
    patented, and again, I hope to be sometime
    soon done with some patent analysis.
  • 41:24 - 41:29
    Of course, some people would say that we
    don't have to worry about patents as long
  • 41:29 - 41:32
    as we find something that we can deploy,
    where somebody tries deploying it
  • 41:32 - 41:36
    and they don't get sued.
    TL: Not sure that's going to be deployed
  • 41:36 - 41:45
    anytime soon. I mean, 95 out of 3000, okay.
    laughter
    TL: All right. Funny names, I said.
  • 41:45 - 41:55
    So what do you see here? Can anybody read
    phonetics? Yeah? "si-said". Okay, so,
  • 41:55 - 42:02
    seaside. Now, CSIDH is what you really
    always wanted. CSIDH is an efficient post-
  • 42:02 - 42:06
    quantum commutative group action. Did you
    know that you wanted a commutative group
  • 42:06 - 42:14
    action? Actually, you did. So, what all
    people ask me is, like, "I'm using Diffie-
  • 42:14 - 42:19
    Hellman these days. What can you give me
    in the post-quantum world?". And then it
  • 42:19 - 42:25
    depends a lot on how you define Diffie-
    Hellman. Some features that we come to use
  • 42:25 - 42:31
    from Diffie-Hellman are that, well, you
    publish a public key, I publish a public key,
  • 42:31 - 42:37
    other people publish public keys, and
    we can reuse them. Kind of nice. Also, we
  • 42:37 - 42:41
    don't have to talk to each other. We can
    just look up the other public key in the
  • 42:41 - 42:46
    phone book, have a shared key, and start
    using that one. And if I send you
  • 42:46 - 42:51
    something encrypted with our shared public
    keys, like, the combined thing of these, then
  • 42:51 - 42:56
    you will be able to decrypt this. There
    are some nice other features; you can
  • 42:56 - 43:01
    blind things, you can like take your g to
    the a and then multiply, compute in the
  • 43:01 - 43:06
    exponent times r, so put some blinding
    factors there, and there is no difference
  • 43:06 - 43:12
    whether I'm the initiator or I am the
    responder in this. We don't have this
  • 43:12 - 43:15
    anywhere else in post quantum
    cryptography. All the systems that you see
  • 43:15 - 43:20
    for NIST submissions make a difference
    between are you the sender, are you the
  • 43:20 - 43:26
    responder. So this is the first efficient
    post-quantum, well, Diffie-Hellman-like
  • 43:26 - 43:32
    thing which is, well, in fancy math, called a
    commutative group action. So, if you are a
  • 43:32 - 43:35
    user, you don't want to know all the
    details. I'm not going to give an entire
  • 43:35 - 43:41
    talk about this, unless, maybe next year.
    What you see exposed to you is just one
  • 43:41 - 43:44
    single finite field element. So there is
    some fixed prime, that all the people in
  • 43:44 - 43:48
    the system know, and then everybody's
    public key is just one single field
  • 43:48 - 43:55
    element. So Alice computes her field
    element, Bob computes his field element,
  • 43:55 - 43:59
    they post these somewhere, and then
    sometime later, years later, maybe if
  • 43:59 - 44:04
    quantum computers are built, they find
    each other, they compute their shared
  • 44:04 - 44:09
    public key, they combine the
    public keys into their shared secret key,
  • 44:09 - 44:14
    sorry, and then they have the shared
    secret. Now, a little bit of the math
  • 44:14 - 44:18
    behind this, A actually appears in some
    form there, so this is one of the
  • 44:18 - 44:21
    elliptic curves that we've been talking
    about in, gosh,
  • 44:21 - 44:25
    when was this, 2013 or so. No, 14 at least.
    DJB: 14.
  • 44:25 - 44:34
    TL: So there's E_A: y^2 = x^3 + A*x^2 + x,
    with this public key A in it, and the
  • 44:34 - 44:41
    computation to go from one key to
    another key uses an isogeny, the same
  • 44:41 - 44:45
    isogeny kind of thing that you heard in
    SIKE before, it's a math object that just
  • 44:45 - 44:50
    means you move from one elliptic curve to
    another elliptic curve. If somebody tells
  • 44:50 - 44:55
    you to implement this, what you need to
    do is, well, take this prime p,
  • 44:55 - 45:01
    compute additions,
    multiplications and divisions modulo p; out of
  • 45:01 - 45:07
    those you, for instance, build the curve
    operations, and then some more operations
  • 45:07 - 45:11
    which compute an isogeny, but all of
    those are just built from those field operations.
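    As a rough sketch in code of what those building blocks look like (a toy
    in Python with a tiny stand-in prime, not a real CSIDH parameter set or
    implementation; the helper names here are just for illustration):

        # Everything reduces to addition, multiplication and division mod p.
        p = 419  # tiny stand-in prime of CSIDH shape (4*3*5*7 - 1); CSIDH-512 uses a 512-bit prime

        def add(a, b): return (a + b) % p
        def mul(a, b): return (a * b) % p
        def inv(a):    return pow(a, p - 2, p)   # division via Fermat's little theorem
        def div(a, b): return mul(a, inv(b))

        # Curve and isogeny operations are then combinations of these, e.g. the
        # right-hand side of the Montgomery curve y^2 = x^3 + A*x^2 + x:
        def curve_rhs(A, x):
            return add(mul(x, mul(x, add(x, A))), x)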
  • 45:11 - 45:16
    So there's nothing particularly scary
    behind it, except for, well, we came up
  • 45:16 - 45:25
    with this thing in January 2018 at this
    lovely beach. It was great there, but please
  • 45:25 - 45:29
    don't use it yet. Experiment with it all
    you want, but this has not had enough
  • 45:29 - 45:35
    analysis. But another reason why you might
    want this is security, key sizes and so on.
  • 45:35 - 45:42
    So, what are we looking at? First of
    all, how many keys are there? So how big
  • 45:42 - 45:49
    does this p have to be? When you
    have fixed your prime p, say n bits, then
  • 45:49 - 45:56
    there are about square root of p, so
    2^(n/2), many such curves. So this is the
  • 45:56 - 46:01
    number of public keys. And then, similar
    to how the elliptic curve discrete log,
  • 46:01 - 46:06
    kind of meet-in-the-middle attacks, work,
    so basically a smart brute-force search, you
  • 46:06 - 46:11
    get a square root of the number of keys as
    your security. So if you have square root
  • 46:11 - 46:18
    of p many keys, it takes the fourth root
    of p time to find out what Alice's key is.
  • 46:18 - 46:23
    So if you want 128-bit security, you have
    to choose your prime p to have four times as many
  • 46:23 - 46:31
    bits, so a 512-bit prime. But this is a
    talk on post-quantum cryptography.
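    To make that sizing arithmetic concrete, here is a tiny back-of-the-envelope
    check in Python (just the counting argument above, not a security proof;
    the function name is made up):

        def classical_security_bits(prime_bits):
            keys_bits = prime_bits // 2   # roughly sqrt(p) = 2^(n/2) public keys
            return keys_bits // 2         # meet-in-the-middle: sqrt(#keys) = 2^(n/4)

        print(classical_security_bits(512))   # 128, matching the CSIDH-512 choice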
  • 46:31 - 46:36
    So where do we stand there? Elliptic curves
    would be totally broken. Nicely enough, for
  • 46:36 - 46:41
    isogenies, we don't have any complete
    break. There are some sub-exponential
  • 46:41 - 46:46
    attacks, so it doesn't have the full exponential
    security as we maybe would like to have,
  • 46:46 - 46:50
    on the other hand, with RSA and finite
    field Diffie-Hellman, we have been dealing
  • 46:50 - 46:54
    with the growth of keys due to sub-
    exponential attacks, so this is something
  • 46:54 - 46:58
    we're familiar with. It doesn't kill
    things, but if you look at the literature,
  • 46:58 - 47:01
    it's mostly asymptotics, so we and also
    some others have been looking into
  • 47:01 - 47:06
    the details. I think our analysis, which we
    have at quantum.isogeny.org, is the most
  • 47:06 - 47:11
    detailed one, looking into, like, what
    actual security you get against somebody
  • 47:11 - 47:16
    with a really really big quantum computer.
    Now with elliptic curves, you'll hopefully
  • 47:16 - 47:20
    have also learned that you must always
    validate: you get a point,
  • 47:20 - 47:23
    somebody says "oh, this is my public key",
    the first thing you do is check, is this
  • 47:23 - 47:29
    thing on the curve, does it have the right
    order? Same thing for isogeny-based
  • 47:29 - 47:33
    crypto, for CSIDH you have a very quick
    check: you check that this curve has the
  • 47:33 - 47:36
    right number of points, which you know; you
    don't even need to do full point counting,
  • 47:36 - 47:42
    you just take a point, do some scalar
    multiplications, and you check it. This is
  • 47:42 - 47:47
    another thing that we've gotten totally
    used to doing. It is also something that
  • 47:47 - 47:51
    is really really really hard for most
    post-quantum systems. For most post-quantum
  • 47:51 - 47:56
    systems you have to add another proof, so
    typically when you encrypt to somebody's
  • 47:56 - 48:00
    key and you're sending something which
    looks like a key, you reveal all the
  • 48:00 - 48:05
    secrets in there, that's why you can't
    reuse it, or you have to do a big zero-
  • 48:05 - 48:10
    knowledge proof to prove that you actually
    generated this thing properly. With CSIDH,
  • 48:10 - 48:16
    it's just there. All you need to do is
    check if it's a valid curve, and you're done.
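    As a toy illustration of that check (exhaustive point counting over a tiny
    field in Python; real CSIDH instead does one cheap scalar multiplication
    on a full-size prime, so this only shows the p + 1 condition): over F_p
    with p = 3 mod 4, a CSIDH-style supersingular curve y^2 = x^3 + A*x^2 + x
    has exactly p + 1 points, so a public key A is valid if the count comes
    out right.

        p = 419  # toy CSIDH-like prime: 4*3*5*7 - 1

        def is_valid_public_key(A):
            points = 1  # the point at infinity
            for x in range(p):
                rhs = (x * x * x + A * x * x + x) % p
                if rhs == 0:
                    points += 1                          # y = 0: one point
                elif pow(rhs, (p - 1) // 2, p) == 1:
                    points += 2                          # rhs is a square: two points
            return points == p + 1

        print(is_valid_public_key(0))   # True: y^2 = x^3 + x is supersingular for p = 3 mod 4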
  • 48:16 - 48:22
    Size is also pretty neat. So, 32
    bytes for the secret key, 64 bytes for the public key.
  • 48:22 - 48:26
    So just twice as large as normal elliptic
    curves; that is really the bottom-left
  • 48:26 - 48:32
    corner of Dan's graph where there was
    nothing, so CSIDH does fill a big gap, a
  • 48:32 - 48:38
    big gap, but with a small key, with
    something which pre-quantum has at least
  • 48:38 - 48:43
    128-bit security, and post-quantum, so what
    NIST was asking for is comparisons with
  • 48:43 - 48:49
    AES-128 and then you look at, like, how
    big are the sizes, how big are the quantum
  • 48:49 - 48:53
    computers and so on. And we think that
    CSIDH-512, to the best of our knowledge,
  • 48:53 - 48:58
    based on the latest analysis, will be as
    secure as that. There is some code written
  • 48:58 - 49:09
    by Lorenz, somewhere? Ah, Lorenz. Yay.
    This is on Skylake. Your mileage may vary.
  • 49:09 - 49:15
    It's not a super-quick hack,
    but it's not deployment code. So this is
  • 49:15 - 49:17
    not yet constant time, there are some
    others who've been working on constant
  • 49:17 - 49:24
    time, it makes it about three times
    slower. It is similar to SIKE in that it has
  • 49:24 - 49:29
    really nice small keys, but somewhat slow.
    On the other hand, this is still very new,
  • 49:29 - 49:32
    it's just from January. So we're still
    figuring out ways to make it faster,
  • 49:32 - 49:36
    whereas, well, the SIKE team has been doing a lot
    of work getting to the speed that they
  • 49:36 - 49:40
    have now. So there's hope that this will
    get faster, and there's some hope it will
  • 49:40 - 49:45
    remain unbroken until next year, but,
    well, I'm not sure yet where I'd put my
  • 49:45 - 49:49
    money; at this moment I think
    actually CSIDH has a better chance than
  • 49:49 - 49:54
    SIKE of surviving, but who knows. Don't
    use it for anything yet.
  • 49:54 - 49:57
    DJB: Speaking of broke, there's a lot of
    people who are investing
  • 49:57 - 50:01
    in cryptocurrencies -
    laughter
  • 50:01 - 50:06
    DJB: - and I think, I think it's Nick
    Mathewson's fault, this whole quantum-
  • 50:06 - 50:10
    cyber blockchain idea, if you know of
    something earlier than 2016, well anyway,
  • 50:10 - 50:14
    there are variations of it since then, like
    quantum AI blockchain, apparently you can
  • 50:14 - 50:20
    buy the t-shirt. We have about 10 minutes
    left, so I'd like to finish things off
  • 50:20 - 50:26
    with some comments on software. Now this
    is looking back at 40 years of public key
  • 50:26 - 50:32
    cryptography, well RSA was from '77 or so,
    the McEliece cryptosystem from '78, and
  • 50:32 - 50:38
    then, well, here's a rough schematic
    of what the software quality has been in
  • 50:38 - 50:42
    cryptography, on a scale of "good", "bad",
    "terrible", and "horrifying". 1978, I
  • 50:42 - 50:46
    don't actually know, I haven't seen
    software from back then. By 1988, it was
  • 50:46 - 50:51
    clear that the software quality was
    horrifying. By 1998, it had moved up to
  • 50:51 - 50:56
    terrible. By 2008, it moved up to bad. And
    by 2018, it has jumped back down to
  • 50:56 - 51:06
    horrifying.
    laughter and applause
  • 51:06 - 51:10
    DJB: And of course a major contributor to
    this is all of these NIST submissions,
  • 51:10 - 51:15
    which have code written by mathematicians,
    who can barely implement anything and
  • 51:15 - 51:18
    certainly don't have good code quality.
    There are occasional submission teams that
  • 51:18 - 51:23
    have people who can write code, but in
    general, if you, well, for a good time,
  • 51:23 - 51:25
    pick up a random -
    TL: McEliece is fine!
  • 51:25 - 51:28
    DJB: Yeah, the Classic McEliece code is
    fine. There are other submissions where the
  • 51:28 - 51:34
    code is fine, but if you just take a
    random submission and look at the code,
  • 51:34 - 51:43
    it's, well, interesting. If you would like
    to find out where the software is and
  • 51:43 - 51:45
    download it.
    laughter
  • 51:45 - 51:50
    DJB: Yeah, the NIST site doesn't work very
    well. I did look: archive.org has them. Like, you
  • 51:50 - 51:56
    search for "NIST round one" on DuckDuckGo,
    and the top link is to the NIST page, and
  • 51:56 - 52:00
    then you take the URL for that, put it
    into archive.org, and I tried a few of the
  • 52:00 - 52:04
    submissions and the zip files that NIST
    prepared with the specifications and the
  • 52:04 - 52:08
    code, those are available from
    archive.org. I guess they got most or all
  • 52:08 - 52:12
    of them. You can also look for more than
    half the submissions, there are upstream
  • 52:12 - 52:18
    websites with newer code. NIST has not
    updated the code, but lots of the
  • 52:18 - 52:23
    submission teams have. Lots of the
    fastest code, and even some further
  • 52:23 - 52:27
    improved code, is available in our
    SUPERCOP benchmarking framework, so this
  • 52:27 - 52:31
    is the system for unified performance
    evaluation related to cryptographic
  • 52:31 - 52:39
    operations and primitives, bench.cr.yp.to,
    and this one has, well, 170 primitives
  • 52:39 - 52:46
    from 40 of the 69 submissions. I might have
    accidentally left out all of the patented
  • 52:46 - 52:51
    submissions, well, the SUPERCOP policy is
    anybody who sends us code to put in, we'll
  • 52:51 - 52:55
    benchmark it, it doesn't have to be
    unpatented, it doesn't have to be secure,
  • 52:55 - 53:00
    we benchmark MD5, we benchmark RSA-512.
    But anyway, there are 40 submissions where
  • 53:00 - 53:05
    code is in there, from other people or
    from me painfully going through getting
  • 53:05 - 53:11
    code to actually work. The primitives are,
    for instance, RSA-512, and RSA-1024, and
  • 53:11 - 53:15
    RSA-2048. They're all RSA, but they're
    different primitives, or different
  • 53:15 - 53:19
    mathematical functions with different
    security levels and, well, in these
  • 53:19 - 53:23
    submissions, there are typically three
    different security levels, sometimes more
  • 53:23 - 53:27
    choices, sometimes fewer, and then a lot of
    the primitives have multiple
  • 53:27 - 53:31
    implementations, like reference code and
    optimized code for different platforms
  • 53:31 - 53:35
    maybe. So, okay, a lot of those are
    collected in this benchmarking framework,
  • 53:35 - 53:40
    all with the same API. libpqcrypto, I
    think I might have a few minutes to say a
  • 53:40 - 53:45
    little more about this, libpqcrypto is
    focusing on having an API which is
  • 53:45 - 53:50
    suitable for cryptographic deployment in
    the future. If you imagine that the
  • 53:50 - 53:55
    implementation quality of the underlying
    crypto is dramatically improved, at least
  • 53:55 - 53:59
    that interface layer is supposed to be
    something that you'll be able to use. Some
  • 53:59 - 54:04
    more examples of things out there, pqm4 is
    a library optimized for small ARM
  • 54:04 - 54:11
    microcontrollers, the ARM Cortex-M4, or
    pqhw is for FPGAs, and this last one, Open
  • 54:11 - 54:17
    Quantum Safe, that one, they don't have as
    many primitives maybe as libpqcrypto or
  • 54:17 - 54:20
    SUPERCOP, but what's cool about that
    project is, they've got working
  • 54:20 - 54:26
    integrations of all of these into OpenSSL
    and OpenSSH. So if you're in, say the TLS
  • 54:26 - 54:31
    world, then that's clearly the way to use
    these post-quantum proposals, at least
  • 54:31 - 54:38
    quite a few of them, inside TLS. Okay, let
    me look a little bit at libpqcrypto and
  • 54:38 - 54:43
    then we'll finish this off. There's lots
    of cryptographic libraries which give you
  • 54:43 - 54:49
    a nice, simple API for hashing. They'll have
    some simple function like sha256, which
  • 54:49 - 54:53
    takes a message, okay, in C you have to
    say there's a pointer to the beginning of
  • 54:53 - 54:58
    the message plus the length of the
    message, and then it gives you back
  • 54:58 - 55:03
    some 256-bit, 32-byte hash. In a higher
    level language, of course, you say
  • 55:03 - 55:09
    something like "h = sha256(m)", and m
    knows what its length is, but in C it
  • 55:09 - 55:14
    looks like you have h and m and the length
    of m as arguments. Why not do this for all
  • 55:14 - 55:18
    cryptographic functions? Somehow, it's
    really weird, lots of cryptographic
  • 55:18 - 55:22
    libraries just have a nice, simple
    interface for hashing, and then if you
  • 55:22 - 55:26
    want to do something like public key
    signatures, it's, well, okay, first we're
  • 55:26 - 55:31
    going to find the factory, which is
    producing the keys, with the generator
  • 55:31 - 55:35
    method for the key, blah blah blah. Well,
    you could just say it directly, and what
  • 55:35 - 55:40
    libpqcrypto does is, it simply says, you
    sign something, with whichever signature
  • 55:40 - 55:45
    scheme, you have to tell it, well, I'm
    going to put the signed message somewhere,
  • 55:45 - 55:48
    and then the length of the signed message is an
    output, the message you're signing and the
  • 55:48 - 55:52
    length are inputs, and your secret key is
    an input. And then it just takes
  • 55:52 - 55:56
    everything in wire format, produces
    everything in wire format. You don't have
  • 55:56 - 56:00
    to have conversion functions, input/output
    serializations etc. This is actually an
  • 56:00 - 56:05
    API that goes back to, we've been doing
    SUPERCOP for many years, and SUPERCOP, the
  • 56:05 - 56:11
    NaCl ("salt") library, libsodium etc. are
    all using the same API. And this is
  • 56:11 - 56:17
    something which, actually people have
    measured the impact on usability of
  • 56:17 - 56:21
    cryptographic libraries depending on the
    different APIs provided by these libraries,
  • 56:21 - 56:24
    and so we're pretty confident about
    the benefits of having a nice, simple way to
  • 56:24 - 56:31
    use crypto. NIST looked at this and said,
    well, ok, yeah. People should submit
  • 56:31 - 56:37
    something to the post-quantum competition
    using this API, but they didn't have test
  • 56:37 - 56:42
    code that people could use to make sure
    that they were following the
  • 56:42 - 56:46
    rules. They didn't require that everybody
    pass any particular set of tests and they
  • 56:46 - 56:53
    accepted submissions which didn't work,
    for example, in SUPERCOP. So, well, ok,
  • 56:53 - 56:56
    that's why I had to do a bunch of work to
    integrate a bunch of submissions
  • 56:56 - 57:01
    into SUPERCOP. But it's been sufficiently
    close to everybody using this that there
  • 57:01 - 57:05
    has been a lot of code shared between
    these different projects, Open Quantum
  • 57:05 - 57:10
    Safe is also starting from the same API
    and then providing higher level
  • 57:10 - 57:15
    integrations into OpenSSL and OpenSSH. OK.
    There's a bunch of different signature
  • 57:15 - 57:19
    systems and a bunch of different
    encryption systems in libpqcrypto. Here's
  • 57:19 - 57:23
    an example of what the higher level API
    looks like in Python. If you want to use
  • 57:23 - 57:28
    libpqcrypto and sign a message, well
    first somebody has to generate a public
  • 57:28 - 57:33
    key and a secret key using the pqcrypto
    library. Signature system, here's one of
  • 57:33 - 57:38
    the signature systems; SPHINCS is a
    stateless hash-based signature system, you
  • 57:38 - 57:43
    don't have to record anything when you
    sign a message, and then 128 is security
  • 57:43 - 57:47
    level 2^128, using the SHA-256
    hash, and well, you just have to know this
  • 57:47 - 57:52
    name and then you say, give me a key pair,
    sign a message using a secret key, open a
  • 57:52 - 57:57
    message using a public key. Now this is
    not "you get a signature and then you do
  • 57:57 - 58:03
    verify of a message and a signature". This
    is another little API detail, designed to
  • 58:03 - 58:06
    protect people against screwing up.
    There's lots of applications which verify
  • 58:06 - 58:11
    signatures, and then if the verification
    fails, nobody's ever tested it, and the
  • 58:11 - 58:17
    verification failure is ignored. What
    actually works to protect application
  • 58:17 - 58:22
    programmers is to have an interface where you
    have a signed message as one bundle, it
  • 58:22 - 58:27
    goes into an open operation, opening
    a signed message and producing a message,
  • 58:27 - 58:32
    and the cryptographic library does not
    produce a message as output if the
  • 58:32 - 58:37
    signature was invalid. So the signed
    message is handled by the
  • 58:37 - 58:42
    cryptographic library, which produces a message only
    if the signature is valid. Also there's an
  • 58:42 - 58:46
    exception being raised, but even if you
    ignore the exception in Python or if
  • 58:46 - 58:50
    you're using a lower level language
    without exceptions, then you just aren't
  • 58:50 - 58:55
    given back a message. This is the kind of
    little thought that goes into the API.
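    A short sketch of that usage pattern in Python (the module path and names
    below just follow the description in the talk - sphincs, 128, sha256,
    keypair, sign, open - and are assumptions, so check the libpqcrypto
    documentation for the exact identifiers):

        import pqcrypto.sign.sphincs128sha256 as sig   # assumed module name

        pk, sk = sig.keypair()              # generate a key pair
        sm = sig.sign(b"hello 35c3", sk)    # sm is the signed message, one bundle

        try:
            m = sig.open(sm, pk)            # returns the message only if the signature is valid
            print(m)
        except Exception:
            print("invalid signed message: no message is returned")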
  • 58:55 - 59:00
    Maybe a bigger example in Python; this is
    a whole thing of using the library,
  • 59:00 - 59:05
    generating a key, signing some random
    message and opening the message. OK. What's
  • 59:05 - 59:11
    coming up in libpqcrypto
    is, first of all: one of the big problems
  • 59:11 - 59:16
    with code quality is there's lots of
    exposure to timing attacks. I saw a great
  • 59:16 - 59:21
    talk earlier today about Spectre, and
    there's lots and lots of these attacks,
  • 59:21 - 59:25
    and part of fixing these attacks is fixing
    software, along with all the hardware fixes
  • 59:25 - 59:30
    we're going to have to do. There's been
    some work on some implementations to fix
  • 59:30 - 59:35
    this, but much more is required. We need a
    lot more work on correctness. Lots and
  • 59:35 - 59:40
    lots of the code doesn't even pass address
    sanitizer. And this was, I don't want to
  • 59:40 - 59:45
    tell you how much pain it was to get code
    working under address sanitizer, where, I mean
  • 59:45 - 59:49
    anybody writing code professionally is
    going to be using these automatic tests as
  • 59:49 - 59:53
    they're writing the code, and this is
    something that just doesn't happen when
  • 59:53 - 59:58
    you ask a bunch of mathematicians to write
    code. Formal verification: that will do much
  • 59:58 - 60:03
    more than testing, much more than,
    say, address sanitizer does and much more
  • 60:03 - 60:07
    than even an expert auditor will do. That
    formal verification is going to guarantee
  • 60:07 - 60:12
    that your code is doing what it's supposed
    to for every possible input, which I used
  • 60:12 - 60:16
    to be very skeptical about because it
    seemed so painful to do for any realistic
  • 60:16 - 60:20
    code, but I've started getting much more
    enthusiastic about it, because the tools
  • 60:20 - 60:26
    are getting much much better. And one
    example of something I did was a sorting
  • 60:26 - 60:30
    verification, where some really fast
    sorting code is actually completely
  • 60:30 - 60:34
    verified to work correctly, the machine
    code is verified, so you compile it, even
  • 60:34 - 60:38
    if there are compiler bugs, then the machine
    code is what's verified, so the
  • 60:38 - 60:43
    verification isn't going to rely on some
    compiler being correct. This is using the
  • 60:43 - 60:47
    angr toolkit, also, I don't know if
    there's any Trail of Bits people here,
  • 60:47 - 60:52
    Manticore I understand has similar
    features, I used angr, but, well. There are
  • 60:52 - 60:56
    really cool tools out there for doing
    symbolic execution and as part of that
  • 60:56 - 61:01
    formal verification. Speed is important
    and so is getting the code volume down:
  • 61:01 - 61:05
    there's lots of duplication, we need more
    internal libraries to get post-quantum
  • 61:05 - 61:11
    crypto on a smaller, easier to review code
    base. And finally, hopefully, at the end
  • 61:11 - 61:16
    of all this we'll be able to throw away as
    many primitives as possible and focus on a
  • 61:16 - 61:20
    small number of things, where we can say
    "We've really, seriously reviewed these.
  • 61:20 - 61:25
    We've reviewed the designs, we've reviewed
    the implementations, and we're confident
  • 61:25 - 61:29
    that these things are secure". That's it.
    Thank you for your attention.
  • 61:29 - 61:45
    applause
  • 61:45 - 61:49
    Herald: Thank you Tanja & Daniel. If you would
    like to leave at this point, please do
  • 61:49 - 61:55
    that very quietly, we'll have a short run
    of Q&A. Signal angel, your first question.
  • 61:55 - 62:00
    Microphone: [unintelligible] from the code-based
    submissions, are there any
  • 62:00 - 62:06
    other ones that use smaller keys?
    TL: Use smaller keys you said?
  • 62:06 - 62:08
    Signal angel: Yeah. From all the
    submissions -
  • 62:08 - 62:11
    TL: Yeah, so code-based cryptography,
    there are two submissions: Classic McEliece,
  • 62:11 - 62:14
    which I highlighted because it's ours, and
    there's NTS-KEM, which has these
  • 62:14 - 62:19
    gigantic keys. Both of those are using
    Goppa codes, which is what has received
  • 62:19 - 62:27
    the most analysis so far. But in this
    gigantic list - yep, what Dan is
  • 62:27 - 62:32
    showing here, several of those are
    actually code-based. So, BIG QUAKE for
  • 62:32 - 62:37
    instance, down there, is a code-based
    system, then -
  • 62:37 - 62:40
    DJB: Lake, that's, that's -
    TL: BIKE is one, LAKE is one, down there,
  • 62:40 - 62:46
    so LAKE would fit this, in that
    it has very small keys and signatures and
  • 62:46 - 62:53
    ciphertext. The downside is, it is using
    far less well-studied codes. So we need to
  • 62:53 - 62:58
    see how that develops.
    Herald: Thank you. For people in the room,
  • 62:58 - 63:02
    please try to limit your questions to a
    single sentence. Microphone number 3, your
  • 63:02 - 63:06
    question.
    Microphone 3: OK. How exactly do you
  • 63:06 - 63:11
    define post-quantum crypto? I mean, you
    have Shor's algorithm, you have the other
  • 63:11 - 63:17
    algorithms, but do you say, OK, it's just
    secure against factoring, discrete
  • 63:17 - 63:22
    logarithms, or do you also take into
    account optimization problems and stuff
  • 63:22 - 63:26
    like that?
    DJB: Yeah. So, I mean the definition
  • 63:26 - 63:31
    is, we're trying to protect against any
    attacker who has a big quantum computer
  • 63:31 - 63:36
    and we have a rough understanding of what
    quantum computers can do, because they're
  • 63:36 - 63:42
    limited by the laws of quantum physics,
    which tell us that, OK, if you can build
  • 63:42 - 63:46
    a computer that supports what are called
    Toffoli gates and Hadamard gates, then you
  • 63:46 - 63:51
    can, well, it's not completely proven, but
    it's very plausible that you can simulate
  • 63:51 - 63:53
    any unitary matrix at that point, any
    quantum operation.
  • 63:53 - 63:55
    Microphone 3: Yes, but that's the
    universal model.
  • 63:55 - 63:58
    DJB: Yes, you have a universal quantum
    computer at that point. The problem is,
  • 63:58 - 64:03
    how do we know, even if we say, well OK,
    by believing that quantum physics is
  • 64:03 - 64:08
    everything we can do in the universe, that
    tells us we have computations built out
  • 64:08 - 64:12
    of Hadamards and Toffolis. That doesn't
    tell you what kinds of algorithms you can
  • 64:12 - 64:16
    put together. And there's this big problem
    that's always been a problem for
  • 64:16 - 64:21
    cryptography, which is trying to imagine what
    all possible algorithms are, and sometimes
  • 64:21 - 64:25
    people miss something. And so if somebody
    ever tells you, "oh, the system is
  • 64:25 - 64:28
    provably secure, there's, there can't
    possibly be an algorithm which is faster
  • 64:28 - 64:32
    than this to break the system", no there's
    no guarantees, and lots of people have
  • 64:32 - 64:37
    been overconfident and burned, because
    there is actually a faster algorithm.
  • 64:37 - 64:40
    We've had a lot of work on people trying
    to figure out good algorithms using
  • 64:40 - 64:44
    quantum computers, for instance for the
    sub-exponential attacks that Tanja was
  • 64:44 - 64:49
    mentioning against CSIDH, and that's
    something where, there's a long history to
  • 64:49 - 64:53
    those attacks, starting with Kuperberg's
    algorithm, and this is going beyond Shor's
  • 64:53 - 64:57
    algorithm and Grover's algorithm. And it's
    really important to look more at what sort
  • 64:57 - 65:00
    of quantum algorithms could attack
    cryptographic systems. There's been some
  • 65:00 - 65:02
    initial work, but there definitely
    needs to be more.
  • 65:02 - 65:07
    TL: I mean, our attacker is allowed to do
    whatever they want. That's why I'm showing
  • 65:07 - 65:10
    this as attacker. The attacker is not
    playing by the rules. The only thing we
  • 65:10 - 65:12
    know is our attacker has a quantum
    computer.
  • 65:12 - 65:16
    Microphone 3: Okay.
    Herald: Right, signal angel, your next
  • 65:16 - 65:19
    question.
    Signal angel: Question from the Internet.
  • 65:19 - 65:23
    Size does matter, but what about the
    performance of post-quantum cryptography
  • 65:23 - 65:28
    compared to classical algorithms for
    embedded or FPGA devices, for example for
  • 65:28 - 65:33
    firmware signing or communication and
    encryption?
  • 65:33 - 65:41
    TL: OK. So, on the big list, quickly
    firing it up again: pqm4, that's using an
  • 65:41 - 65:46
    ARM Cortex-M4, so that's a rather small device. They
    did not implement all algorithms and for
  • 65:46 - 65:51
    some of them they said, it is very
    cumbersome to do with the big keys. So
  • 65:51 - 65:55
    yes, it's more of an issue. I mean, we're
    spoiled with elliptic curves, just having
  • 65:55 - 66:00
    256 bits there, and all the systems are
    larger than that. CSIDH is the closest you
  • 66:00 - 66:05
    get, but then it has the big computation.
    But there is effort, and these smaller and
  • 66:05 - 66:10
    more fitting systems have been
    implemented. Hopefully we'll get better.
  • 66:10 - 66:15
    Herald: Thanks. Microphone number 4, your
    question.
  • 66:15 - 66:21
    Microphone 4: You said, when Google did
    some tests, they said it's just too slow,
  • 66:21 - 66:27
    they cannot really use it. Would the
    solution be acceleration units, like used
  • 66:27 - 66:32
    for AES in CPUs?
    TL: So, Google was excluding the use of
  • 66:32 - 66:36
    the supersingular isogenies based on
    speed. I assume that's what you mean,
  • 66:36 - 66:42
    rather than the big ones with bandwidth. I
    don't know all the details of it. My
  • 66:42 - 66:46
    assumption is, it was factoring in also
    the security, like how much time people
  • 66:46 - 66:50
    have spent on analyzing it, which made
    them more comfortable with the structured
  • 66:50 - 66:56
    lattices than the supersingular isogenies.
    You can speed things up if you have a big
  • 66:56 - 67:01
    engine, which would be manufactured to do
    finite field arithmetic, but that is much,
  • 67:01 - 67:07
    much bigger than, say, an AES engine.
    DJB: Maybe just an extra comment. I think
  • 67:07 - 67:12
    that the choice they made of NTRU-HRSS is
    really an excellent choice: it's
  • 67:12 - 67:17
    something which is small enough to fit
    into most applications, I mean 1,000 bytes
  • 67:17 - 67:21
    or so, it's much bigger than elliptic
    curve crypto, but compared to all the data
  • 67:21 - 67:25
    we tend to send around, it usually fits,
    unless you've got, like, some really small
  • 67:25 - 67:31
    communication happening, then you usually
    can fit a kilobyte or so, which is the
  • 67:31 - 67:37
    NTRU-HRSS size. And it's something which
    has got some history of study. I would be
  • 67:37 - 67:41
    the last person to say that lattices are
    definitely secure, and we actually, our
  • 67:41 - 67:46
    NTRU Prime submission is worried about
    ways that something like NTRU-HRSS could
  • 67:46 - 67:52
    maybe be broken, but there's no evidence
    of any problems, and NTRU has held up for
  • 67:52 - 67:58
    about 20 years of study without being
    broken. So, it's also reasonably fast, so
  • 67:58 - 68:02
    it's a reasonable compromise between the
    different constraints of trying to have
  • 68:02 - 68:07
    something secure and not ridiculously big,
    and, well, if it gets broken, then
  • 68:07 - 68:14
    we're in trouble. But hopefully it's okay.
    Herald: Thanks. Signal angel. The final
  • 68:14 - 68:17
    question please.
    Signal angel: The final question. Can
  • 68:17 - 68:21
    CSIDH draw on a hardware accelerator made
    for regular elliptic curves, or is
  • 68:21 - 68:24
    the handling of isogenies more
    problematic?
  • 68:24 - 68:29
    TL: All right, so it depends on what your
    hardware accelerator has. If it has
  • 68:29 - 68:33
    fairly generic elliptic curve
    arithmetic, you can probably use it.
  • 68:33 - 68:38
    We're getting some speed from not using
    elliptic curves in Weierstrass form, but
  • 68:38 - 68:42
    Montgomery forms, so you probably would
    want to modify the accelerator they're
  • 68:42 - 68:48
    currently using to fit this better. Also,
    most systems are optimized for 256 bit
  • 68:48 - 68:53
    elliptic curves or 384, with 512 bits
    we're a little bit outside, but most of
  • 68:53 - 68:58
    the operations would be looking just the
    same. Most of the time we spend is on doing a
  • 68:58 - 69:04
    big scalar multiplication, and then we
    have some operations in these isogenies,
  • 69:04 - 69:09
    but they are fairly similar. If you have,
    like, the field arithmetic built up, you
  • 69:09 - 69:16
    can just put these together and have an
    isogeny computation as well. So yes, it
  • 69:16 - 69:19
    can get faster. As I said, this is from
    January, we're still working on the
  • 69:19 - 69:24
    security analysis, so don't build any
    hardware, at this moment, quite yet.
  • 69:24 - 69:27
    laughter
    Herald: Thank you so much. Please give
  • 69:27 - 69:30
    them a huge round of applause for the
    talk.
  • 69:30 - 69:38
    applause
  • 69:38 - 69:40
    postroll music
  • 69:40 - 70:01
    subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!
Title:
35C3 - The year in post-quantum crypto
Description:

Video Language:
English
Duration:
01:10:01
