
Deploying TLS 1.3: the great, the good and the bad (33c3)

  • 0:00 - 0:15
    33C3 preroll music
  • 0:15 - 0:20
    Herald: Basically the upcoming
    talk is about “Deploying TLS 1.3”
  • 0:20 - 0:24
    and is by Filippo Valsorda
    and Nick Sullivan,
  • 0:24 - 0:27
    and they’re both with Cloudflare.
  • 0:27 - 0:32
    So please, a warm welcome
    to Nick and Filippo!
  • 0:32 - 0:39
    applause
  • 0:39 - 0:44
    Filippo: Hello everyone. Alright,
    we are here to talk about TLS 1.3.
  • 0:44 - 0:48
    TLS 1.3 is of course the latest
    version of TLS, which stands for
  • 0:48 - 0:53
    ‘Transport Layer Security’.
    Now, you might know it best
  • 0:53 - 0:58
    as, of course, the green lock in
    the browser, or by its old name SSL,
  • 0:58 - 1:03
    which we are still trying
    to kill. Now. TLS is
  • 1:03 - 1:08
    a transparent security protocol
    that can securely tunnel
  • 1:08 - 1:12
    arbitrary application traffic.
    It’s used by web browsers, of course,
  • 1:12 - 1:17
    it’s used by mail servers to
    communicate with each other
  • 1:17 - 1:22
    to secure SMTP. It’s used by
    Tor nodes to talk to each other.
  • 1:22 - 1:27
    It has evolved over 20 years,
  • 1:27 - 1:31
    but at its core it’s about a client
    and a server that want to communicate
  • 1:31 - 1:36
    securely over the network.
    To communicate securely over the network
  • 1:36 - 1:41
    they need to establish some key material,
    to agree on some key material
  • 1:41 - 1:47
    on the two sides to encrypt
    the rest of the traffic.
  • 1:47 - 1:52
    Now how they agree on this key material
    is [done] in a phase that we call
  • 1:52 - 1:58
    the ‘handshake’. The handshake involves
    some public key cryptography and some data
  • 1:58 - 2:03
    being shovelled from the client to the
    server, from the server to the client.
  • 2:03 - 2:07
    Now this is what the handshake
    looks like in TLS 1.2.
  • 2:07 - 2:13
    So the client starts the dance
    by sending a ‘Client Hello’ over,
  • 2:13 - 2:19
    which specifies which
    parameters it supports.
  • 2:19 - 2:23
    The server receives that and sends
    a message of its own, which is
  • 2:23 - 2:28
    ‘Server Hello’ that says: “Sure!
    Let’s use this cipher suite over here
  • 2:28 - 2:33
    that you say you support, and
    here is my key share to be used
  • 2:33 - 2:39
    in this key agreement algorithm.
    And also here is a certificate
  • 2:39 - 2:45
    which is signed by an authority
    that proves that I am indeed
  • 2:45 - 2:50
    Cloudflare.com. And here is a signature
    from the certificate to prove that
  • 2:50 - 2:55
    this key share is actually the one that
    I want you to use, to establish keys”.
  • 2:55 - 3:01
    The client receives that, and it generates
    its own key share, its own half
  • 3:01 - 3:06
    of the Diffie-Hellman key exchange,
    and sends over the key share,
  • 3:06 - 3:11
    and a message to say: “Alright, this
    is it. This wraps up the handshake”
  • 3:11 - 3:15
    which is called the ‘Finished’ message.
    [The] server receives that, makes
  • 3:15 - 3:21
    a ‘Finished’ message of its own,
    and answers with that. So.
  • 3:21 - 3:26
    Now we can finally send application
    data. So to recap, we went:
  • 3:26 - 3:30
    Client –> Server, Server –> Client;
    Client –> Server, Server –> Client.
  • 3:30 - 3:35
    We had to do 2 round trips between the
    client and the server before we could do
  • 3:35 - 3:41
    anything. We haven’t sent any
    byte on the application layer
  • 3:41 - 3:46
    until now. Now of course
    this, on mobile networks
  • 3:46 - 3:51
    or in certain parts of the
    world, can build up
  • 3:51 - 3:55
    to hundreds of milliseconds of latency.
    And this is what needs to happen
  • 3:55 - 4:01
    every time a new connection is set up.
    Every time the client and the server
  • 4:01 - 4:06
    have to go twice between them
    to establish the keys before
  • 4:06 - 4:13
    the connection can actually
    be used. Now, TLS 1.1
  • 4:13 - 4:18
    and 1.0 were not that different
    from 1.2. So you might ask: well, then
  • 4:18 - 4:24
    why are we having an entire talk on
    TLS 1.3, which is probably just this other
  • 4:24 - 4:31
    iteration over the same concept? Well,
    TLS 1.3 is actually a big re-design.
  • 4:31 - 4:37
    And in particular, the handshake has been
    restructured. And the most visible result
  • 4:37 - 4:43
    of this is that an entire round
    trip has been shaved off.
  • 4:43 - 4:49
    So, here is what a TLS 1.3
    handshake looks like.
  • 4:49 - 4:53
    How does 1.3 remove a round trip?
    How can it do that? Well, it does that
  • 4:53 - 5:00
    by predicting what key agreement algorithm
  • 5:00 - 5:05
    the server will decide to use, and
    pre-emptively sending a key share
  • 5:05 - 5:10
    for that algorithm to the server.
    So with the first flight we had
  • 5:10 - 5:16
    the ‘Client Hello’, the supported
    parameters, and a key share
  • 5:16 - 5:22
    for the one that the client thinks the
    server will like. The server receives that
  • 5:22 - 5:27
    and if everything goes well, it will
    go like “Oh! Sure! I like this key share.
  • 5:27 - 5:33
    Here is my own key share to run
    the same algorithm, and here are
  • 5:33 - 5:38
    the other parameters we should use.”
    It immediately mixes the two key shares
  • 5:38 - 5:42
    to get a shared key, because now
    it has both key shares – the client’s
  • 5:42 - 5:47
    and the server’s – and sends again
    the certificate and a signature
  • 5:47 - 5:51
    from the certificate, and then
    immediately sends a ‘Finished’ message
  • 5:51 - 5:56
    because it doesn’t need anything else
    from the client. The client receives that,
  • 5:56 - 6:02
    takes the key share, mixes the shared key
    and sends its own ‘Finished’ message,
  • 6:02 - 6:07
    and is ready to send whatever application
    layer data it was waiting to send.
  • 6:07 - 6:13
    For example your HTTP
    request. Now we went:
  • 6:13 - 6:16
    Client –> Server, Server –> Client.
  • 6:16 - 6:21
    And we are ready to send data at the
    application layer. So you are trying
  • 6:21 - 6:27
    to set up an HTTPS connection
    and your browser
  • 6:27 - 6:33
    doesn’t need to wait 4x
    the latency, or 4x the ping.
  • 6:33 - 6:39
    It only has to wait 2x. And of course
    this saves hundreds of milliseconds
  • 6:39 - 6:46
    of latency when setting up fresh
    connections. Now, this is the happy path.
  • 6:46 - 6:52
    So this is what happens when the
    prediction is correct and the server likes
  • 6:52 - 6:58
    the client key share. If the server
    doesn’t support the key share
  • 6:58 - 7:05
    that the client sent it will send a polite
    request to use a different algorithm
  • 7:05 - 7:11
    that the client said it can support. We
    call that message ‘Hello Retry Request’.
  • 7:11 - 7:16
    It has a cookie, so that it can be stateless,
    but essentially it falls back
  • 7:16 - 7:22
    to what is effectively a TLS-1.2-like
    handshake. And it’s not that hard
  • 7:22 - 7:27
    to implement because the client follows up
    with a new ‘Client Hello’ which looks
  • 7:27 - 7:34
    essentially exactly like a fresh one. Now.
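
To make the "prediction" concrete: the client just generates a fresh key pair for the group it expects the server to accept and puts the public half into the ClientHello's key_share extension. A minimal Go sketch of that step, assuming X25519 as the predicted group (illustrative code, not from the talk):

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/curve25519"
)

func main() {
	// The client picks a random private scalar for the group it predicts
	// the server will accept (X25519 in this sketch).
	var priv [32]byte
	if _, err := rand.Read(priv[:]); err != nil {
		panic(err)
	}

	// The public half is what goes into the ClientHello's key_share extension.
	var pub [32]byte
	curve25519.ScalarBaseMult(&pub, &priv)

	fmt.Printf("speculative X25519 key share: %x\n", pub)
	// If the server prefers another group, it answers with HelloRetryRequest
	// and the client repeats this step for the group the server asked for.
}
```
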
  • 7:34 - 7:42
    Here I’ve been lying to you.
    TLS 1.2 is not always 2 round trips.
  • 7:42 - 7:48
    Most of the connections we see from the
    Cloudflare edge, for example, are ‘resumptions’.
  • 7:48 - 7:53
    That means that the client has connected
    to that website before in the past.
  • 7:53 - 7:59
    And we can use that, we can exploit
    that to make the handshake faster.
  • 7:59 - 8:06
    That means that the client can remember
    something about the key material
  • 8:06 - 8:11
    to make the next connection
    a round trip even in TLS 1.2.
  • 8:11 - 8:16
    So here is how it looks. Here
    you have your normal TLS 1.2 full
  • 8:16 - 8:22
    2-round trip connection. And over
    here the server sends a new session ticket.
  • 8:22 - 8:30
    A session ticket is nothing more than an
    encrypted, wrapped blob of key material
  • 8:30 - 8:35
    that the client will hold on to. The
    session ticket is encrypted and signed
  • 8:35 - 8:40
    with a key that only the server knows.
    So it’s completely opaque to the client.
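
A rough sketch of what that "encrypted, wrapped blob" amounts to, assuming an AEAD seal under a server-only ticket key (the helper below is illustrative; real TLS stacks use their own ticket formats):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealTicket wraps the connection's resumption key material into an opaque
// blob that only the holder of ticketKey can open again.
func sealTicket(ticketKey [32]byte, keyMaterial []byte) ([]byte, error) {
	block, err := aes.NewCipher(ticketKey[:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// The client stores this blob but cannot read it: it only sees ciphertext.
	return append(nonce, aead.Seal(nil, nonce, keyMaterial, nil)...), nil
}

func main() {
	var key [32]byte
	if _, err := rand.Read(key[:]); err != nil {
		panic(err)
	}
	blob, err := sealTicket(key, []byte("resumption key material"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("opaque ticket blob: %x\n", blob)
}
```

On resumption the server simply runs the mirror-image Open with the same ticket key.
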
  • 8:40 - 8:45
    But the client will keep it together
    with the key material of the connection,
  • 8:45 - 8:49
    so that the next time it makes
    a connection to that same website
  • 8:49 - 8:54
    it will send a ‘Client Hello’,
    and a session ticket.
  • 8:54 - 8:59
    If the server recognises the session
    ticket it will decrypt it, find inside
  • 8:59 - 9:04
    the key material. And now, after only one
    round trip, the server will have some
  • 9:04 - 9:10
    shared key material with the client because
    the client held on to the key material
  • 9:10 - 9:15
    from last time and the server just
    decrypted it from the session ticket.
  • 9:15 - 9:21
    OK? So now the server has some shared
    keys to use already, and it sends
  • 9:21 - 9:26
    a ‘Finished’ message, and the client sends
    its own ‘Finished’ message and the request.
  • 9:26 - 9:32
    So this is TLS 1.2. This is what
    is already happening every day
  • 9:32 - 9:37
    with most modern TLS connections. Now.
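
In Go's crypto/tls this division of labour looks roughly like the sketch below: the client keeps a session cache so it can present tickets again, and the server installs (and should regularly rotate) the ticket keys. The API calls are the real crypto/tls ones; the rotation schedule is left implicit.

```go
package main

import (
	"crypto/rand"
	"crypto/tls"
)

func main() {
	// Client side: keep tickets around so the next handshake can resume.
	clientCfg := &tls.Config{
		ClientSessionCache: tls.NewLRUClientSessionCache(128),
	}
	_ = clientCfg

	// Server side: install session ticket keys explicitly. Rotating them
	// often limits what a leaked key can decrypt after the fact.
	var current, previous [32]byte
	if _, err := rand.Read(current[:]); err != nil {
		panic(err)
	}
	if _, err := rand.Read(previous[:]); err != nil {
		panic(err)
	}
	serverCfg := &tls.Config{}
	serverCfg.SetSessionTicketKeys([][32]byte{current, previous})
	_ = serverCfg
}
```
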
  • 9:37 - 9:44
    TLS 1.3 resumption is not that different.
    It still has the concept of a session ticket.
  • 9:44 - 9:48
    We changed the name of what’s inside
    the session ticket to a ‘PSK’ but that
  • 9:48 - 9:53
    just means ‘Pre-shared Key’ because
    that’s what it is: it’s some key material
  • 9:53 - 9:58
    that was agreed upon in advance.
    And it works the same way:
  • 9:58 - 10:03
    the server receives the session
    ticket, decrypts it and jumps to the
  • 10:03 - 10:07
    ‘Finished’ message. Now,
  • 10:07 - 10:13
    a problem with resumption
    is that if an attacker
  • 10:13 - 10:17
    controls the session ticket key
    – the key that the server uses
  • 10:17 - 10:22
    to encrypt the session ticket that
    has inside the key material –
  • 10:22 - 10:27
    an attacker can, passively or even in the future
    with a recording of the connection,
  • 10:27 - 10:33
    decrypt the session ticket from the
    ‘Client Hello’, find the PSK inside it
  • 10:33 - 10:38
    and use it to decrypt the rest of
    the connection. This is not good.
  • 10:38 - 10:43
    This means that someone can do
    passive decryption by just having
  • 10:43 - 10:48
    the session ticket key. How this is
    addressed usually is that we say
  • 10:48 - 10:53
    that session ticket keys are short-
    lived. But still it would be nice if
  • 10:53 - 10:56
    we didn’t have to rely on that. And there
    are actually nice papers that tell us
  • 10:56 - 11:01
    that implementations don’t
    always do this right. So,
  • 11:01 - 11:07
    instead what TLS 1.3 allows
    us to do is use Diffie-Hellman
  • 11:07 - 11:12
    with resumption. In 1.2 there
    was no way to protect
  • 11:12 - 11:17
    against session ticket key
    compromise. In 1.3 what you can do
  • 11:17 - 11:21
    is send a key share as part
    of the ‘Client Hello’ anyway,
  • 11:21 - 11:25
    and the server will send a key share
    together with the ‘Server Hello’,
  • 11:25 - 11:32
    and they will run Diffie-Hellman.
    Diffie-Hellman is what was used to
  • 11:32 - 11:36
    introduce forward secrecy against
    the compromise of, for example,
  • 11:36 - 11:41
    the certificate private key in 1.2, and
    it’s used here to provide forward secrecy
  • 11:41 - 11:46
    for resumed connections.
    Now, you will say:
  • 11:46 - 11:51
    “Now this looks essentially
    like a normal 1.3 handshake,
  • 11:51 - 11:56
    why have the PSK at all?” Well,
    there is something missing from this one,
  • 11:56 - 12:00
    there is no certificate. Because
    there is no need to re-authenticate
  • 12:00 - 12:05
    with a certificate because the client and
    the server spoke in the past, and so
  • 12:05 - 12:09
    the client knows that it already checked
    the certificate of the server and
  • 12:09 - 12:13
    if the server can decrypt the session
    ticket it means that it’s actually
  • 12:13 - 12:18
    who it says it is. So, the two
    key shares get mixed together.
  • 12:18 - 12:23
    Then mixed with the PSK to make
    a key that encrypts the rest
  • 12:23 - 12:30
    of the connection. Now.
    There is one other feature
  • 12:30 - 12:35
    that is introduced by TLS 1.3
    resumption. And that is the fact
  • 12:35 - 12:41
    that it allows us to make 0-round
    trip handshakes. Again,
  • 12:41 - 12:47
    all handshakes in 1.3
    are mostly 1-round trip.
  • 12:47 - 12:52
    TLS 1.2 resumptions can be
    at a minimum 1-round trip.
  • 12:52 - 12:58
    TLS 1.3 resumptions can be 0-round
    trip. How does a 0-round trip
  • 12:58 - 13:04
    handshake work? Well, if you think about
    it, when you start, you have a PSK,
  • 13:04 - 13:10
    a Pre-Shared Key. The client
    can just use that to encrypt
  • 13:10 - 13:16
    this early data that it wants to
    send to the server. So the client
  • 13:16 - 13:20
    opens a connection, to a server that it
    has already connected to in the past,
  • 13:20 - 13:25
    and sends ‘Client Hello’, session ticket,
  • 13:25 - 13:30
    key share for Diffie-Hellman and
    then early data. Early data is
  • 13:30 - 13:34
    this blob of application data
    – it can be e.g. an HTTP request –
  • 13:34 - 13:39
    encrypted with the PSK.
    The server receives this,
  • 13:39 - 13:45
    decrypts the session ticket, finds
    the PSK, uses the PSK to decrypt the
  • 13:45 - 13:51
    early data and then proceeds as normal:
    mixes the 2 key shares, mixes the PSK in,
  • 13:51 - 13:55
    makes a new key for the rest of the
    connection and continues the connection.
  • 13:55 - 14:00
    So what happened here? We were able to
    send application data immediately upon
  • 14:00 - 14:05
    opening the connection. This means that
    we completely removed the performance
  • 14:05 - 14:11
    overhead of TLS. Now.
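
The key derivation just described can be sketched with HKDF. This is deliberately simplified and is not the real TLS 1.3 key schedule, only its shape: the early-data key depends on the PSK alone, while the keys for the rest of the connection also mix in the fresh (EC)DHE secret.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

// derive is a stand-in for the derivation steps of the key schedule.
func derive(secret, salt []byte, label string) []byte {
	out := make([]byte, 32)
	if _, err := io.ReadFull(hkdf.New(sha256.New, secret, salt, []byte(label)), out); err != nil {
		panic(err)
	}
	return out
}

func keysSketch(psk, ecdheShared []byte) (earlyKey, trafficKey []byte) {
	// 0-RTT: only the PSK is available when the client sends early data,
	// so the early-data key cannot be forward secret against a stolen
	// session ticket key.
	earlyKey = derive(psk, nil, "sketch: early data")

	// After the ServerHello both sides also hold the fresh (EC)DHE secret,
	// so everything from here on gets forward secrecy back.
	trafficKey = derive(ecdheShared, psk, "sketch: application traffic")
	return
}

func main() {
	psk := []byte("PSK recovered from the session ticket")
	dh := []byte("fresh ECDHE shared secret")
	early, traffic := keysSketch(psk, dh)
	fmt.Printf("early-data key: %x\n", early)
	fmt.Printf("traffic key:    %x\n", traffic)
}
```
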
  • 14:11 - 14:16
    0-RTT handshakes, though, have
    2 caveats that are theoretically
  • 14:16 - 14:23
    impossible to remove. One is that
    that nice thing that we introduced
  • 14:23 - 14:28
    with the PSK ECDHE mode, the one where
    we do Diffie-Hellman for resumption
  • 14:28 - 14:33
    in 1.3, does not help with 0-RTT data.
  • 14:33 - 14:39
    We do Diffie-Hellman when we
    reach the green box in the slide.
  • 14:39 - 14:44
    Of course the early data is only encrypted
    with the PSK. So let’s think about
  • 14:44 - 14:49
    the attacker again. The attacker somehow
    stole our session ticket encryption keys.
  • 14:49 - 14:55
    It can look at the ‘Client Hello’, decrypt
    the session ticket, get the PSK out,
  • 14:55 - 15:00
    use the PSK to decrypt the early data.
  • 15:00 - 15:05
    And it can do this even from a recording
    if it gets the session ticket later on.
  • 15:05 - 15:12
    So the early data is not forward secret
    with respect to the session ticket keys.
  • 15:12 - 15:17
    Then of course that attack becomes useless
    once we do Diffie-Hellman with
  • 15:17 - 15:23
    the server’s answer. It’s only useful
    against the first flight sent from the client.
  • 15:23 - 15:28
    So to recap, a lot of things
    going on here: TLS 1.2
  • 15:28 - 15:33
    introduced forward secrecy
    against the compromise of the
  • 15:33 - 15:39
    certificate private keys, a long
    time ago, by using ECDHE modes.
  • 15:39 - 15:45
    So 1.2 connections can
    always be forward secret against
  • 15:45 - 15:50
    certificate compromise.
    TLS 1.3 has that always on as well.
  • 15:50 - 15:55
    There is no mode that is not forward
    secret against compromise of the
  • 15:55 - 16:01
    certificate. But when we think about what
    might happen to the session ticket key:
  • 16:01 - 16:06
    TLS 1.2 never provides forward secrecy.
  • 16:06 - 16:11
    In TLS 1.2 compromising the session
    ticket key always means being able
  • 16:11 - 16:16
    to passively and in the future
    decrypt resumed connections.
  • 16:16 - 16:23
    In 1.3 instead, if we use PSK
    ECDHE only the early data
  • 16:23 - 16:28
    can be decrypted by using
    the session ticket key alone.
  • 16:28 - 16:33
    Now, I said that there were 2 caveats.
  • 16:33 - 16:39
    The second caveat is that
    0-RTT data can be replayed.
  • 16:39 - 16:45
    The scenario is this: you have
    some data in the early data
  • 16:45 - 16:52
    that is somehow authenticated. It might be
    an HTTP request with some cookies on it.
  • 16:52 - 16:58
    And that HTTP request is somehow
    executing a transaction,
  • 16:58 - 17:03
    okay? Moving some money, instructing
    the server to do something. An attacker
  • 17:03 - 17:08
    wants to make that happen multiple
    times. It can’t decrypt it, of course
  • 17:08 - 17:13
    – it’s protected with TLS. So it
    can’t read the cookie, and it can’t
  • 17:13 - 17:18
    modify it because, of course, it’s
    protected with TLS. But it can record
  • 17:18 - 17:23
    the encrypted message
    and it can then replay it
  • 17:23 - 17:28
    against the server. Now if you have
    a single server this is easy to fix.
  • 17:28 - 17:33
    You just take a note of the messages you
    have seen before and you just say like
  • 17:33 - 17:38
    “No, this looks exactly like something I
    got before”. But if, for example like
  • 17:38 - 17:42
    Cloudflare you are running multiple data
    centres around the world, you cannot keep
  • 17:42 - 17:48
    consistent state all the time, in real
    time across all machines. So there would
  • 17:48 - 17:52
    be different machines that, if they
    receive this message, will go like
  • 17:52 - 17:58
    “Sure I have the session ticket key,
    I decrypt the PSK, I use the PSK,
  • 17:58 - 18:02
    I decrypt the early data, I find
    inside something, I execute what
  • 18:02 - 18:08
    it tells me to do.” Now, of
    course, this is not desirable.
  • 18:08 - 18:13
    One countermeasure that TLS offers
    is that the client sends a value
  • 18:13 - 18:19
    in that bundle saying how long
    ago, in milliseconds, it obtained
  • 18:19 - 18:24
    the session ticket. The server
    looks at that value and
  • 18:24 - 18:29
    if it does not match its own view of this
    information it will reject the message.
  • 18:29 - 18:34
    That means that if the attacker records
    the message and then 10 seconds later
  • 18:34 - 18:40
    tries to replay it the times won’t
    match and the server can drop it.
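
A sketch of that freshness check, with hypothetical names and an arbitrary tolerance window (this is not a real TLS stack API, just the shape of the comparison):

```go
package main

import (
	"fmt"
	"time"
)

// maxSkew is an arbitrary illustrative tolerance window.
const maxSkew = 10 * time.Second

// freshEnough returns true when the age the client reports for its ticket
// roughly matches how old the server knows the ticket to be.
func freshEnough(ticketIssuedAt time.Time, reportedAge time.Duration) bool {
	diff := time.Since(ticketIssuedAt) - reportedAge
	if diff < 0 {
		diff = -diff
	}
	return diff <= maxSkew
}

func main() {
	issued := time.Now().Add(-30 * time.Second)

	fmt.Println(freshEnough(issued, 30*time.Second)) // true: the ages agree
	fmt.Println(freshEnough(issued, 15*time.Second)) // false: looks like a later replay
}
```
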
  • 18:40 - 18:45
    But this is not a full solution because
    if the attacker is fast enough
  • 18:45 - 18:50
    it can still replay messages.
    So, everything the server can do
  • 18:50 - 18:56
    is either accept the
    0-RTT data, or reject it.
  • 18:56 - 19:01
    It can’t just take some part of it or
    take a peek and then decide because
  • 19:01 - 19:06
    it’s the ‘Server Hello’ message that
    says whether it’s accepted or rejected.
  • 19:06 - 19:10
    And the client will keep sending early
    data until it gets the ‘Server Hello’.
  • 19:10 - 19:16
    There’s a race here. So the server has to
    go blind and decide “Am I taking 0-RTT data
  • 19:16 - 19:21
    or am I just rejecting it all?” If it’s
    taking it, and then it finds out that it’s
  • 19:21 - 19:27
    something that it can’t process because
    “Oh god, there is an HTTP POST in here
  • 19:27 - 19:32
    that says to move some money, I can’t
    do this unless I know it’s not replayed.”
  • 19:32 - 19:37
    So the server has to get some
    confirmation. The good news is that
  • 19:37 - 19:41
    if the server waits for the ‘Finished’
    message… The server sends
  • 19:41 - 19:45
    the ‘Server Hello’, the ‘Finished’
    and waits for the client’s one.
  • 19:45 - 19:51
    When the client’s one gets there it means
    that the early data, too, was not replayed,
  • 19:51 - 19:55
    because that ‘Finished’ message
    ties the entire handshake together
  • 19:55 - 20:00
    with some random value that
    the server sent. So it’s impossible
  • 20:00 - 20:04
    that it was replayed. So, this is
    what a server can do: it can accept
  • 20:04 - 20:09
    the early data and if it’s something
    that is not idempotent, something
  • 20:09 - 20:15
    that is dangerous if it’s replayed, it
    can just wait for the confirmation.
  • 20:15 - 20:19
    But that means it has to buffer it, and
    there’s a risk for an attack here, where
  • 20:19 - 20:26
    an attacker just sends an HTTP POST with
    a giant body, just to fill your memory.
  • 20:26 - 20:32
    So what we realised is that we could help
    with this if we wrote into the session ticket
  • 20:32 - 20:37
    the maximum amount of
    early data that the client can send.
  • 20:37 - 20:42
    If we see someone sending more than
    that, then it’s an attacker and we
  • 20:42 - 20:47
    close the connection, drop the
    buffer, free up the memory.
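
A sketch of that buffering limit, with hypothetical names and an arbitrary limit (in practice the limit would be the one written into the session ticket):

```go
package main

import (
	"errors"
	"fmt"
)

// maxEarlyData is the limit advertised in the ticket (illustrative value).
const maxEarlyData = 16 * 1024

var errTooMuchEarlyData = errors.New("early data exceeds advertised limit")

type earlyDataBuffer struct {
	buf []byte
}

// add appends a decrypted chunk of early data, refusing to grow past the limit.
func (b *earlyDataBuffer) add(chunk []byte) error {
	if len(b.buf)+len(chunk) > maxEarlyData {
		// Drop the buffer and close the connection: this is an attacker
		// (or a badly broken client), not a legitimate 0-RTT request.
		b.buf = nil
		return errTooMuchEarlyData
	}
	b.buf = append(b.buf, chunk...)
	return nil
}

func main() {
	var b earlyDataBuffer
	fmt.Println(b.add(make([]byte, 8*1024)))  // <nil>
	fmt.Println(b.add(make([]byte, 16*1024))) // early data exceeds advertised limit
}
```
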
  • 20:47 - 20:53
    But. Anyway. Whatever
    countermeasures we deploy,
  • 20:53 - 20:59
    unless we can keep global state across the
    servers, we have to inform the application
  • 20:59 - 21:03
    that “this data might be replayed”.
    The spec knows this.
  • 21:03 - 21:08
    So the TLS 1.3 spec EXPLICITLY says
  • 21:08 - 21:14
    protocols must NOT use
    0-RTT without a profile
  • 21:14 - 21:19
    that defines its use. Which means
    “without knowing what they are doing”.
  • 21:19 - 21:24
    This means that TLS stack
    APIs have to do 1 round trip
  • 21:24 - 21:30
    by default, which is not affected by
    replays, and then allow the server
  • 21:30 - 21:36
    to call some APIs to either reject
    or wait for the confirmation,
  • 21:36 - 21:41
    and to let the client decide what goes
    into this dangerous re-playable
  • 21:41 - 21:46
    piece of data. So this will change
  • 21:46 - 21:50
    based on the protocols but what about
    our favourite protocol? What about
  • 21:50 - 21:55
    HTTP? Now HTTP should
    be easy, the HTTP spec,
  • 21:55 - 22:01
    you go read it and it says “Well,
    GET requests are idempotent,
  • 22:01 - 22:06
    they must not change anything on the
    server”. Solved! We will just allow
  • 22:06 - 22:11
    GET requests in early data because even
    if they are replayed nothing happened!
  • 22:11 - 22:17
    Yay! Nope. sighs You will definitely
    find some server on the internet
  • 22:17 - 22:23
    that has something like
    “send-money.php?to=filippo&amount=this”
  • 22:23 - 22:29
    and it’s a GET request. And if an attacker
    records this, which is early data,
  • 22:29 - 22:34
    and then replays this against a different
    server in the pool, that will get executed
  • 22:34 - 22:39
    twice. And we can’t have that.
  • 22:39 - 22:43
    Now, so what can we do here?
  • 22:43 - 22:47
    We make trade-offs!
  • 22:47 - 22:52
    If you know your application, you can
    make very specific trade-offs. E.g.
  • 22:52 - 22:57
    Google has been running QUIC
    with 0-RTT for the longest time,
  • 22:57 - 23:02
    for 3 years I think? And that means that
    they know their application very well.
  • 23:02 - 23:07
    And they know that they don’t have
    any “send-money.php” endpoints.
  • 23:07 - 23:13
    But if you are like Cloudflare, which
    fronts a large number of applications,
  • 23:13 - 23:18
    you can’t make such sweeping
    assumptions, and you have instead
  • 23:18 - 23:23
    to hope for some middle ground. For
    example, something we might decide to do
  • 23:23 - 23:29
    is to only allow GETs
    to the root. So “GET /”
  • 23:29 - 23:33
    which might give the most benefit because
    maybe most connections start like that,
  • 23:33 - 23:39
    and the least likely to cause trouble.
  • 23:39 - 23:43
    We are still working on how exactly to
    bring this to applications. So if you know
  • 23:43 - 23:48
    of an application that would get hurt
    by something as simple as that
  • 23:48 - 23:54
    do email us, but actually,
    if you have an application
  • 23:54 - 23:59
    that is that vulnerable I have
    bad news. Thai Duong et al.
  • 23:59 - 24:04
    demonstrated that browsers will
    today, without TLS 1.3 or anything,
  • 24:04 - 24:10
    replay HTTP requests
    if network errors happen.
  • 24:10 - 24:16
    And they will replay them silently.
    So it might not actually be worse
  • 24:16 - 24:22
    than the current state. Okay.
    I can actually see everyone
  • 24:22 - 24:28
    getting uneasy in their seats, thinking
    “There the cryptographers are at it again!
  • 24:28 - 24:33
    They are making the security protocol that
    we need more complex than it has to be
  • 24:33 - 24:39
    to get their job security for
    the next 15 years!” Right?
  • 24:39 - 24:44
    No. No. I can actually assure you that
  • 24:44 - 24:50
    one of the big changes, in my opinion
    even bigger than the round trips in 1.3,
  • 24:50 - 24:55
    is that everything is being weighed:
    the benefit against the complexity
  • 24:55 - 24:59
    that it introduces. And
    while 0-RTT made the cut
  • 24:59 - 25:03
    most other things definitely didn’t.
  • 25:03 - 25:08
    Nick: Right. Thanks Filippo.
  • 25:08 - 25:14
    In TLS 1.3 as an iteration of
    TLS we also went back, or,
  • 25:14 - 25:18
    “we” being the people who are
    looking at TLS, went back and
  • 25:18 - 25:23
    revisited the existing TLS 1.2 features
    that sort of seemed reasonable at the time
  • 25:23 - 25:27
    and decided whether or not the complexity
    and the danger added by these features,
  • 25:27 - 25:32
    or these protocols, or these
    primitives involved in TLS were
  • 25:32 - 25:38
    reasonable to keep. And the big one which
    happened early on in the process is
  • 25:38 - 25:44
    ‘Static RSA’ mode. So this is the way that
    TLS has been working back since SSL.
  • 25:44 - 25:48
    Rather than using Diffie-Hellman to
    establish a shared key… How this works is,
  • 25:48 - 25:52
    the client will make its own shared
    key, and encrypt it with the server’s
  • 25:52 - 25:57
    certificate public key which is gonna
    be an RSA key, and then just send it
  • 25:57 - 26:01
    over the wire to the server, inside the still-unencrypted handshake.
    And then the server would use its
  • 26:01 - 26:05
    private key to decrypt that, and then
    establish a shared key. So the client
  • 26:05 - 26:10
    creates all the key material in this case.
    And one thing that is sort of obvious
  • 26:10 - 26:14
    from this is that if the private key
    for the certificate is compromised,
  • 26:14 - 26:18
    even after the fact, even years later,
    someone with the transcript of what happened
  • 26:18 - 26:23
    can go back and decrypt this key material,
    and then see the entire conversation.
  • 26:23 - 26:28
    So this was removed very early in the
    process, somewhere around 2 years ago
  • 26:28 - 26:34
    in TLS 1.3. So, much to our surprise,
    and the surprise of everyone
  • 26:34 - 26:40
    reading the TLS mailing
    list, just very recently,
  • 26:40 - 26:45
    near the end of the standardisation
    process where TLS 1.3 was almost final
  • 26:45 - 26:51
    this e-mail landed on the list. And this
    is from Andrew Kennedy who works at BITS
  • 26:51 - 26:57
    which basically means he works
    at banks. So this is what he said:
  • 26:57 - 27:02
    “Deprecation of the RSA key exchange
    in TLS 1.3 will cause significant problems
  • 27:02 - 27:07
    for financial institutions, almost all of
    whom are running TLS internally and have
  • 27:07 - 27:13
    significant, security-critical investments
    in out-of-band TLS decryption”.
  • 27:13 - 27:18
    “Out-of-band TLS decryption”… mmh…
    laughs - applause
  • 27:18 - 27:23
    That certainly sounds critical…
    critical for someone, right?
  • 27:23 - 27:26
    laughs - applause
    So…
  • 27:26 - 27:32
    laughs
    applause
  • 27:32 - 27:37
    So one of the bright spots was
    Kenny Paterson’s response to this,
  • 27:37 - 27:42
    in which he said: “My view
    concerning your request: no.
  • 27:42 - 27:45
    Rationale: We’re trying to build a MORE
    secure internet.” The emphasis on ‘more’
  • 27:45 - 27:47
    is mine but I’m sure he meant it, yeah.
  • 27:47 - 27:54
    applause
  • 27:54 - 27:59
    So after this the banking folks came
    to the IETF and presented this slide
  • 27:59 - 28:04
    to describe how hard it was to actually
    debug their system. This is a very simple…
  • 28:04 - 28:09
    I guess, with respect to banking. Those
    are the different switches, routers,
  • 28:09 - 28:14
    middleware, web applications; and
    everything talks TLS one to the other.
  • 28:14 - 28:20
    And after this discussion
    we came to a compromise.
  • 28:20 - 28:24
    But instead of actually compromising
    the protocol Matthew Green
  • 28:24 - 28:29
    taught them how to use Diffie-Hellman
    incorrectly. They ended up actually
  • 28:29 - 28:33
    being able to do what they wanted
    to do, without us – or anybody
  • 28:33 - 28:37
    in the academic community, or in the
    TLS community – adding back this
  • 28:37 - 28:42
    insecure piece of TLS.
  • 28:42 - 28:46
    So if you want to read this it shows
    how to do it. But in any case
  • 28:46 - 28:50
    – we didn’t add it back.
    Don’t do this, basically! laughs
  • 28:50 - 28:54
    applause
  • 28:54 - 29:00
    So we killed static RSA, and
    what else did we kill? Well,
  • 29:00 - 29:04
    looking back on the trade-offs there are
    a number of primitives that are in use
  • 29:04 - 29:09
    in TLS 1.2 and earlier that just
    haven’t stood the test of time.
  • 29:09 - 29:12
    So, RC4 stream cipher. Gone!
    applause
  • 29:12 - 29:15
    3DES (Triple DES) block cipher. Gone!
    applause
  • 29:15 - 29:22
    MD5, SHA1… all gone. Yo!
    ongoing applause
  • 29:22 - 29:26
    There are even constructions that took…
    basic block cipher constructions
  • 29:26 - 29:32
    that are gone: AES-CBC.
    Gone. RSA-PKCS1-1.5,
  • 29:32 - 29:37
    this has been known to have been
    problematic since 1998, also gone!
  • 29:37 - 29:42
    TLS 1.3 also removed several features
    like compression, and renegotiation, which
  • 29:42 - 29:47
    was replaced with a very lightweight
    ‘key update’ mechanism. So in TLS 1.3
  • 29:47 - 29:52
    none of these met the balance of
    benefit vs. complexity. And a lot of these
  • 29:52 - 29:58
    vulnerabilities, you might recognize, are
    just impossible in TLS 1.3. So that’s good.
  • 29:58 - 30:04
    applause
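
For servers that still have to speak 1.2, the same balance can be approximated today by configuring only the survivors; a sketch with Go's crypto/tls (the constant names are real crypto/tls identifiers), not a recommendation from the talk itself:

```go
package main

import "crypto/tls"

// hardenedTLS12Config keeps only forward-secret ECDHE key exchange with AEAD
// ciphers: no RC4, no 3DES, no CBC, no static RSA key exchange.
func hardenedTLS12Config() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
		},
		PreferServerCipherSuites: true,
		CurvePreferences: []tls.CurveID{
			tls.X25519,
			tls.CurveP256,
		},
	}
}

func main() {
	_ = hardenedTLS12Config()
}
```
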
  • 30:04 - 30:09
    So the philosophy for TLS 1.3 in a lot of
    places is to simplify and make it as robust
    as possible. There are a number
    as much as possible. There are a number
    of little cases in which we did that.
  • 30:15 - 30:19
    Some of the authors of this paper may be
    in the audience right now. But there is
  • 30:19 - 30:24
    a way in which block ciphers were
    used for the actual record layer
  • 30:24 - 30:28
    that was not as robust as it could be.
    It has been replaced with a much simpler
  • 30:28 - 30:32
    mechanism. TLS 1.2 had this
  • 30:32 - 30:38
    really kind of funny ‘Catch 22’ in it
    where the cipher negotiation
  • 30:38 - 30:42
    is protected by a ‘Finished’ message which
    is a message-authentication code, but
  • 30:42 - 30:47
    the algorithm for that code was determined
    in the cipher negotiation, so,
  • 30:47 - 30:53
    it had this kind of loop-back effect. And
    attacks like FREAK, LogJam and CurveSwap
  • 30:53 - 30:59
    (from last year) managed to exploit these
    to actually downgrade connections.
  • 30:59 - 31:03
    And this was something that was happening
    in the wild. And the reason for this is
  • 31:03 - 31:07
    that these cipher suites in this handshake
    are not actually digitally signed
  • 31:07 - 31:12
    by the private key. And in TLS 1.3
    this was changed. Everything
  • 31:12 - 31:16
    from the signature up is digitally
    signed. So this is great!
  • 31:16 - 31:21
    What else did we change? Well,
    what else did TLS 1.3 change
  • 31:21 - 31:28
    vs. TLS 1.2? And that is: fewer, better
    choices. And in cryptography
  • 31:28 - 31:33
    better choices always means fewer choices.
    So there is now a shortlist of curves and
  • 31:33 - 31:37
    finite field groups that you can use. And
    no arbitrary Diffie-Hellman groups made up
  • 31:37 - 31:42
    by the server, no arbitrary curves
    that can be used. And this sort of
  • 31:42 - 31:48
    shortening of the list of parameters
    really enables 1-RTT to work
  • 31:48 - 31:52
    a lot of the time. So as Filippo
    mentioned, the client has to guess
  • 31:52 - 31:57
    which key establishment
    methods the server supports,
  • 31:57 - 32:01
    and send that key share. If there is
    a short list of only-secure options
  • 32:01 - 32:06
    this happens a larger percentage of
    the time. So when you’re configuring
  • 32:06 - 32:11
    your TLS server it no longer looks
    like a complicated takeout menu,
  • 32:11 - 32:16
    it’s more like a wedding [menu]. Take one
    of each, and it’s a lot more delicious
  • 32:16 - 32:22
    anyways. And you can look on
    Wireshark, it’s also very simple.
  • 32:22 - 32:28
    The cipher suites, the extensions,
    the curves, and you can go from there.
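
With a modern Go crypto/tls, which gained TLS 1.3 support after this talk was given, the "menu" really is that short; a sketch, not a prescription:

```go
package main

import "crypto/tls"

func tls13Config() *tls.Config {
	return &tls.Config{
		// In Go's crypto/tls the TLS 1.3 cipher suites are not configurable
		// at all: there are only a few, and they are all considered good.
		MinVersion: tls.VersionTLS13,
		CurvePreferences: []tls.CurveID{
			tls.X25519,   // the group most clients send a speculative key share for
			tls.CurveP256,
		},
	}
}

func main() {
	_ = tls13Config()
}
```
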
  • 32:28 - 32:33
    Filippo: Now, TLS 1.3 also fixed
    what I think was one of the biggest
  • 32:33 - 32:37
    actual design mistakes of
    TLS 1.2. We talked about
  • 32:37 - 32:43
    how forward secrecy works
    with resumption in 1.2 and 1.3.
  • 32:43 - 32:49
    But TLS 1.2 is even more
    problematic. TLS 1.2 wraps
  • 32:49 - 32:56
    inside the session tickets the actual
    master secret of the old connection.
  • 32:56 - 33:03
    So it takes the actual keys that encrypt
    the traffic of the original connection,
  • 33:03 - 33:08
    encrypts them with the session ticket key,
    and sends that to the client to be sent
  • 33:08 - 33:14
    back the next time. We talked about
    how there’s a risk that an attacker will
  • 33:14 - 33:18
    obtain session ticket keys, and decrypt
    the session tickets, and break
  • 33:18 - 33:24
    the forward secrecy and decrypt
    the resumed connections. Well,
  • 33:24 - 33:30
    in TLS 1.2 it’s even worse. If they
    decrypt the session tickets they could
  • 33:30 - 33:36
    go back and retroactively decrypt the original
  • 33:36 - 33:42
    non-resumed connection. And
    this is completely unnecessary.
  • 33:42 - 33:47
    We have hash functions, we have one-way
    functions where you put an input in
  • 33:47 - 33:53
    and you get something that you can’t
    go back from. So that’s what 1.3 does.
  • 33:53 - 33:59
    1.3 derives new keys, fresh
    keys for the next connection
  • 33:59 - 34:04
    and wraps them inside the session ticket
    to become the PSK. So even if you
  • 34:04 - 34:09
    decrypt a 1.3 session ticket
    you can then attack
  • 34:09 - 34:14
    the subsequent connection, and we’ve
    seen that you might be able to decrypt
  • 34:14 - 34:19
    only the early data, or the whole connection,
    depending on what mode it uses. But
  • 34:19 - 34:26
    you definitely can’t decrypt the
    original non-resumed connection.
  • 34:26 - 34:32
    So, this would be bad enough, but 1.2
    makes another decision that entirely
  • 34:32 - 34:37
    puzzled me. The whole ‘using the master
    secret’ might be just because session
  • 34:37 - 34:42
    tickets were an extension in
    1.2, which they are not in 1.3.
  • 34:42 - 34:48
    But, 1.2 sends the new session
    ticket message at the beginning
  • 34:48 - 34:53
    of the original handshake,
    unencrypted! I mean
  • 34:53 - 34:59
    encrypted with the session ticket keys
    but not with the current session keys.
  • 34:59 - 35:04
    So, any server that just supports
  • 35:04 - 35:10
    session tickets will have, at the
    beginning of every connection,
  • 35:10 - 35:15
    even if resumption never happens,
    a session ticket which is
  • 35:15 - 35:19
    nothing else than the ephemeral
    keys of that connection
  • 35:19 - 35:23
    wrapped with the session
    ticket keys. Now, if you are
  • 35:23 - 35:29
    a global passive adversary
    that somehow wants to do
  • 35:29 - 35:33
    passive dragnet surveillance and
    you wanted to passively decrypt
  • 35:33 - 35:39
    all the connections, and somehow you
    were able to obtain session ticket keys,
  • 35:39 - 35:44
    what you would find at the beginning
    of every TLS 1.2 connection is
  • 35:44 - 35:50
    the session keys encrypted with
    the session ticket keys. Now,
  • 35:50 - 35:56
    1.3 solves this, and in 1.3 this kind
    of attack is completely impossible.
  • 35:56 - 35:59
    The only thing that you can passively
    decrypt, or decrypt after the fact,
  • 35:59 - 36:04
    is the early data, and definitely not non-
    resumed connections, and definitely not
  • 36:04 - 36:11
    anything that comes after 0-RTT.
  • 36:11 - 36:13
    Nick: So it’s safer, basically.
    laughs
  • 36:13 - 36:16
    Filippo: Hope so!
    Nick: …hopefully.
  • 36:16 - 36:21
    And how do we know that it’s safer? Well,
    these security parameters, and these
  • 36:21 - 36:26
    security requirements of TLS have been
    formalized and, as opposed to earlier
  • 36:26 - 36:30
    versions of TLS the folks in the academic
    community who do formal verification were
  • 36:30 - 36:34
    involved earlier. So there have been
    several papers analyzing the state machine
  • 36:34 - 36:40
    and analyzing the different modes of
    TLS 1.3, and these have aided a lot
  • 36:40 - 36:45
    in the development
    of the protocol. So,
  • 36:45 - 36:51
    who actually develops TLS 1.3? Well, it’s
  • 36:51 - 36:55
    an organization called the IETF which is
    the Internet Engineering Task Force. It’s
  • 36:55 - 37:00
    a group of volunteers that meet 3 times
    a year and have mailing lists, and they
  • 37:00 - 37:03
    debate these protocols endlessly. They
    define the protocols that are used
  • 37:03 - 37:08
    on the internet. And originally, the first
    thing that I ever saw about this – this is
  • 37:08 - 37:13
    a tweet of mine from September
    2013 – was a wish list for TLS 1.3.
  • 37:13 - 37:20
    And since then they came out
    with a first draft at the IETF…
  • 37:20 - 37:25
    Documents that define protocols
    are known as RFCs, and
  • 37:25 - 37:29
    the lead-up to something becoming an RFC
    is an ‘Internet Draft’. So you start with
  • 37:29 - 37:34
    the Internet Draft 0, and then you iterate
    on this draft until finally it gets
  • 37:34 - 37:40
    accepted or rejected as an RFC. So
    the first one was almost 3 years ago
  • 37:40 - 37:46
    back in April 2014, and the current
    draft (18), which is considered to be
  • 37:46 - 37:52
    almost final – it’s in what is
    called ‘Last Call’ at the IETF –
  • 37:52 - 37:57
    came out just recently, in October.
    In the security landscape
  • 37:57 - 38:02
    during that time you’ve seen so many
    different types of attacks on TLS. So:
  • 38:02 - 38:08
    Triple Handshake, POODLE, FREAK, Logjam,
    DROWN (there was a talk about that earlier
  • 38:08 - 38:12
    today), Lucky Microseconds, SLOTH.
    All these different types of acronyms
  • 38:12 - 38:16
    – you may or may not have heard of –
    have happened during the development.
  • 38:16 - 38:21
    So TLS 1.3 is a living
    document, and it’s hopefully
  • 38:21 - 38:28
    going to be small. I mean,
    TLS 1.2 was 79 pages.
  • 38:28 - 38:33
    It’s kind of a rough read, but
    give it a shot! If you like. TLS 1.3
  • 38:33 - 38:36
    if you shave off a lot of the excess stuff
    at the end is actually close. And it’s
  • 38:36 - 38:41
    a lot nicer read, it’s a lot more precise,
    even though there are some interesting
  • 38:41 - 38:47
    features like 0-RTT, resumption. So
    practically, how does it get written?
  • 38:47 - 38:53
    Well it’s, uh… Github! And a mailing list!
    So if you want to send a pull request
  • 38:53 - 38:59
    to this TLS working group, there it is.
    This is actually how the draft gets defined.
  • 38:59 - 39:04
    And you probably want to send a message
    to the mailing list to describe what your
  • 39:04 - 39:09
    change is, if you want to. I suggest if
    anybody wants to be involved this is
  • 39:09 - 39:14
    pretty late. I mean it’s in ‘Last Call’…
    But the mailing list is still open. Now
  • 39:14 - 39:18
    I’ve been working on this with a bunch of
    other people, Filippo as well. We were
  • 39:18 - 39:23
    contributors on the draft, been working
    for over a year on this. You can check
  • 39:23 - 39:29
    the Github issues to see how much work
    has gone into it. The draft has changed
  • 39:29 - 39:34
    over the years and months.
  • 39:34 - 39:39
    E.g. Draft 9 had this very
    complicated tree structure
  • 39:39 - 39:44
    for a key schedule, you can see
    htk… all these different things
  • 39:44 - 39:50
    had to do with different keys in the TLS
    handshake. And this was inspired by QUIC,
  • 39:50 - 39:56
    the Google protocol that Filippo mentioned
    earlier as well as a paper called ‘OPTLS’.
  • 39:56 - 40:01
    And it had lots of different modes,
    semi-static Diffie-Hellman, and this
  • 40:01 - 40:05
    tree-based key schedule. And over
    time this was whittled down from this
  • 40:05 - 40:11
    complicated diagram to what we have
    now in TLS 1.3. Which is a very simple
  • 40:11 - 40:16
    derivation algorithm. This took a lot
    of work to get from something big
  • 40:16 - 40:22
    to something small. But it’s happened!
    Other things that happened
  • 40:22 - 40:27
    in TLS 1.3 are sort of less substantial,
    cryptographically, and that involves
  • 40:27 - 40:33
    naming! If anyone has been following
    along, TLS 1.3 is not necessarily
  • 40:33 - 40:38
    the unanimous choice for the name of this
    protocol. As Filippo mentioned, 1.0,
  • 40:38 - 40:44
    1.1, 1.2 are pretty small iterations
    even on SSLv3, whereas
  • 40:44 - 40:49
    TLS 1.3 is quite a big change.
    So there is a lot of options
  • 40:49 - 40:55
    for names! Let’s have
    a show of hands: Who here
  • 40:55 - 41:00
    thinks it should be called 1.3?
    laughs
  • 41:00 - 41:02
    Thanks, Filippo! Filippo laughs
    Yeah, so, pretty good number.
  • 41:02 - 41:08
    How about TLS 2? Anybody?
    Well, that actually looks like more than…
  • 41:08 - 41:13
    Filippo: Remember that SSLv2 is
    a thing! And it’s a terrible thing!
  • 41:13 - 41:18
    Nick: You don’t want to confuse
    that with us! So how about TLS 4?
  • 41:18 - 41:23
    Still a significant number of people…
    How about TLS 2017? Yeah…
  • 41:23 - 41:26
    Alright! TLS 7 anybody? Okay…
  • 41:26 - 41:30
    Filippo: TLS Millennium 2019 X?
  • 41:30 - 41:35
    YES! Sold!
    Nick: Alright! TLS Vista?
  • 41:35 - 41:39
    laughter - Nick and Filippo laugh
    applause
  • 41:39 - 41:45
    Nick: Lots of options! But just as
    a reminder, the rest of the world
  • 41:45 - 41:50
    doesn’t really call it TLS. This is Google
    trends, interest over time, searching for
  • 41:50 - 41:55
    ‘SSL vs. TLS’. SSL is really what most
    of the world calls this protocol. So SSL
  • 41:55 - 42:00
    has version 3 as its highest version,
    and that’s kind of the reason why people
  • 42:00 - 42:05
    thought ‘TLS 4’ was a good idea, because
    “Oh, people are confused: 3 is higher
  • 42:05 - 42:11
    than 1.2, yada-yada-yada”.
  • 42:11 - 42:15
    This poll was not the only poll. There
    were also some informal Twitter polls.
  • 42:15 - 42:20
    “Mmm, Bacon!” was a good one,
    52% of Ryan Hurst’s poll.
  • 42:20 - 42:24
    laughter
  • 42:24 - 42:28
    Versions are a really sticky thing in TLS.
  • 42:28 - 42:33
    E.g. the versions that we have of TLS
    – if you look at them on the wire
  • 42:33 - 42:38
    they actually don’t match up.
    So SSL 3 is 3.0 which does match up.
  • 42:38 - 42:44
    But TLS 1.0 is 3.1; TLS 1.1 is 3.2;
    TLS 1.2 is 3.3; and originally
  • 42:44 - 42:49
    I think up to Draft 16
    of TLS 1.3 it was 3.4.
  • 42:49 - 42:54
    Just sort of bumping the minor
    version of TLS 1.2, very confusing.
  • 42:54 - 42:59
    But after doing some internet
    measurement it was determined that
  • 42:59 - 43:03
    a lot of servers, if you send a ‘Client
    Hello’ with ‘3.4’, just disconnect. So
  • 43:03 - 43:08
    this is actually really bad, it prevents
    browsers from being able to actually
  • 43:08 - 43:13
    safely downgrade. What a server is
    supposed to do if it sees a version
  • 43:13 - 43:19
    higher than 3.3 is just respond with “3.3”
    saying: “Hey, this is the best I have”.
  • 43:19 - 43:25
    But it turns out a lot of these break.
    So 3.3 is in the ‘Client Hello’ now, and
  • 43:25 - 43:31
    3.4 is negotiated as a sub
    protocol. So this is messy.
  • 43:31 - 43:36
    Right? But we do balance the benefits vs.
    complexity, and this is one of the ones
  • 43:36 - 43:40
    where the benefits of not having servers
    fail outweigh the complexity
  • 43:40 - 43:44
    of adding an additional thing. And to
    prevent this from happening in the future
  • 43:44 - 43:49
    David Benjamin proposed something called
    GREASE where in every single piece of
  • 43:49 - 43:54
    TLS negotiation you are supposed to,
    as a client, add some random stuff
  • 43:54 - 43:57
    in there, so that servers will
    get used to seeing things
  • 43:57 - 44:01
    that are not versions they’re used to.
    So, 0x8a8a. It’s all GREASE-d up!
  • 44:01 - 44:06
    Filippo: It’s a real thing!
    It’s a real very useful thing!
  • 44:06 - 44:09
    Nick: This is going to be very useful,
    for the future, for preventing
  • 44:09 - 44:14
    these sorts of things. But it’s really
    unfortunate that that had to happen.
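
For reference, the wire encodings being discussed and a few of the reserved GREASE code points, collected in a small, purely illustrative Go snippet:

```go
package main

import "fmt"

const (
	wireSSL30 = 0x0300
	wireTLS10 = 0x0301
	wireTLS11 = 0x0302
	wireTLS12 = 0x0303 // what still goes in the ClientHello's legacy version field
	wireTLS13 = 0x0304 // negotiated separately, via an extension
)

// GREASE values all follow the 0x?a?a pattern, e.g. the 0x8a8a on the slide.
var greaseValues = []uint16{0x0a0a, 0x2a2a, 0x8a8a, 0xfafa}

func main() {
	fmt.Printf("TLS 1.3 on the wire: %#06x\n", wireTLS13)
	for _, v := range greaseValues {
		fmt.Printf("GREASE code point:   %#06x\n", v)
	}
}
```
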
  • 44:14 - 44:19
    We are running low on time, but
    we decided to actually get involved with
  • 44:19 - 44:23
    getting our hands dirty. And one thing
    the IETF really loves when developing
  • 44:23 - 44:29
    these standards is running code. So we
    started with the IETF 95 Hackathon
  • 44:29 - 44:33
    which was in April, and managed,
    by the end of it, to get Firefox
  • 44:33 - 44:38
    to load a server hosted by Cloudflare
    over TLS 1.3. Which was a big
  • 44:38 - 44:43
    accomplishment at the time. We used NSS
    which is the security library in Firefox
  • 44:43 - 44:49
    and ‘Mint’ which was a new implementation
  • 44:49 - 44:53
    of TLS 1.3, written from scratch in Go.
  • 44:53 - 44:58
    And the result was, it worked! But
    this was just a proof-of-concept.
  • 44:58 - 45:03
    Filippo: To build something that was more
    production ready, we looked at what was
  • 45:03 - 45:08
    the TLS library that we were most
    confident modifying, which unsurprisingly
  • 45:08 - 45:13
    wasn’t OpenSSL! So we opted to
  • 45:13 - 45:18
    build 1.3 on top of the Go
    crypto/tls library, which is
  • 45:18 - 45:24
    in the Go language standard library.
    The result, we call it ‘tls-tris’,
  • 45:24 - 45:28
    and it’s a drop-in replacement for
    crypto/tls, and comes with this
  • 45:28 - 45:34
    wonderful warning that says “Do not use
    this for the sake of everything that’s
  • 45:34 - 45:39
    good and just!” Now, it used to be about
    everything, but now it’s not really
  • 45:39 - 45:45
    about security anymore, we got this
    audited, but it’s still about stability.
  • 45:45 - 45:51
    We are working on upstreaming
    this, which will solidify the API,
  • 45:51 - 45:56
    and you can follow along with the
    upstreaming process. The Google people
  • 45:56 - 46:01
    were kind enough to open a branch for us to do
    the development, and it will definitely not
  • 46:01 - 46:07
    hit the next Go release, Go 1.8, but we
    are looking forward to upstreaming this.
  • 46:07 - 46:12
    Anyway, even if you use Go,
    deploying is hard.
  • 46:12 - 46:18
    The first time we deployed Tris
    the draft version was 13.
  • 46:18 - 46:24
    And to actually support browsers
    going forward from there we had
  • 46:24 - 46:29
    to support multiple draft versions
    at the same time by switching on
  • 46:29 - 46:35
    obscure details sometimes. And sometimes
    had to support things that were definitely
  • 46:35 - 46:40
    not even drafts because
    browsers started to… diverge.
  • 46:40 - 46:45
    Now, anyway, we had
    a test matrix that would run
  • 46:45 - 46:51
    all our commits against all the different
    versions of the client libraries,
  • 46:51 - 46:55
    and that would make sure that we are
    always compatible with the browsers.
  • 46:55 - 47:00
    And these days the clients are actually
    much more stable, and indeed
  • 47:00 - 47:05
    you might be already using it
    without knowing. E.g. Chrome Beta,
  • 47:05 - 47:11
    the beta channel has it enabled for about
    50% as an experiment from the Google side.
  • 47:11 - 47:16
    And this is how our graphs looked
    when we first launched,
  • 47:16 - 47:22
    when Firefox Nightly enabled it by default
    and when Chrome Canary enabled it
  • 47:22 - 47:27
    by default. These days we are stable,
    around 700 requests per second
  • 47:27 - 47:31
    carried over TLS 1.3.
    And on our side we enabled it
  • 47:31 - 47:36
    for millions of our
    websites on Cloudflare.
  • 47:36 - 47:41
    And, anyway, as we said,
    the spec is a living document
  • 47:41 - 47:46
    and it is open. You can see it on
    Github. The Tris implementation is there
  • 47:46 - 47:51
    even if it has this scary warning, and
    the blog here is where we’ll probably
  • 47:51 - 47:56
    publish all the follow-up research and
    results of this. Thank you very much and
  • 47:56 - 48:00
    if you have any questions please come
    forward, I think we have a few minutes.
  • 48:00 - 48:12
    applause
  • 48:12 - 48:16
    Herald: Thank you, we have plenty
    of time for questions. First question
  • 48:16 - 48:20
    goes to the Internet.
  • 48:20 - 48:24
    Signal Angel: The very first
    question is from people asking whether
  • 48:24 - 48:28
    the decision of pushing 0-RTT
    on to the application, handing it
  • 48:28 - 48:32
    off to the application developers,
    is a very wise decision?
  • 48:32 - 48:34
    Filippo: laughs
    applause
  • 48:34 - 48:40
    Filippo: Well… fair. So, as we said, this
    is definitely breaking an abstraction.
  • 48:40 - 48:46
    So it’s NOT broken by default.
    If you just update Go
  • 48:46 - 48:51
    and get TLS 1.3 you won’t
    get any 0-RTT because
  • 48:51 - 48:55
    indeed it requires collaboration by the
    application. So unless an application
  • 48:55 - 49:00
    knows what to do with it, it just cannot
    use it – and it still gets all the security benefits
  • 49:00 - 49:07
    and the one round trip full
    handshake advantages, anyway.
  • 49:07 - 49:10
    Herald: Ok, next question
    is from microphone 1.
  • 49:10 - 49:13
    Question: With your early testing of the
    protocol have you been able to capture
  • 49:13 - 49:18
    any hard numbers on what those
    performance improvements look like?
  • 49:18 - 49:21
    Filippo sighs
  • 49:21 - 49:25
    Nick: One round trip! laughs
    Depends how much a round trip is.
  • 49:25 - 49:28
    Filippo: Yeah, exactly. One round trip
    is… I mean, I can’t tell you a number
  • 49:28 - 49:33
    because of course if you live in
    San Francisco with a fast fiber it’s,
  • 49:33 - 49:39
    I don’t know, 3 milliseconds, 6…?
    If you live in, I don’t know,
  • 49:39 - 49:43
    some country where EDGE is the only type
    of connection you get that’s probably
  • 49:43 - 49:48
    around one second. I think we have an
    average that is around… between 100
  • 49:48 - 49:55
    and 200 milliseconds, but we haven’t
    like formally collected these numbers.
  • 49:55 - 49:58
    Herald: Ok, next question
    from microphone 3.
  • 49:58 - 50:02
    Question: One remark I wanted to make is
    that another improvement that was made
  • 50:02 - 50:07
    in TLS 1.3 is that they added
    encryption to client certificates.
  • 50:07 - 50:11
    So the client certificates are transmitted
    encrypted which is important
  • 50:11 - 50:18
    if you consider that a client will
    move around, and a dragnet surveillance entity
  • 50:18 - 50:23
    could track clients with this. And
    another remark/question which might…
  • 50:23 - 50:27
    Herald: Questions are ended with a question
    mark. So can you keep it please a bit short?
  • 50:27 - 50:32
    Question: Yeah…
    That might be stupid so…
  • 50:32 - 50:36
    Does the fixed Diffie-Hellman
    groups… wasn’t that the problem
  • 50:36 - 50:43
    with the LogJam attack, so… does
    this help with LogJam attacks?
  • 50:43 - 50:47
    Nick: Are you referencing the
    proposal for the banks?
  • 50:47 - 50:50
    Question: No no, just in general,
    that you can pre-compute…
  • 50:50 - 50:54
    Nick: Right, yes, so in Logjam there was
    a problem where there was a DH group
  • 50:54 - 50:58
    that was shared by a lot of different
    servers by default. The Apache one,
  • 50:58 - 51:04
    which was 1024 [bit].
    In TLS 1.3 it was restricted to
  • 51:04 - 51:09
    a set of pre-defined DH groups, of which
    the smallest is over 2000 bits,
  • 51:09 - 51:15
    and even with all the pre-computation in
    the world if you have a 2000 bit DH group
  • 51:15 - 51:20
    it’s not feasible to pre-compute
    enough to do any type of attack.
  • 51:20 - 51:22
    But, yeah, that’s a very good point.
  • 51:22 - 51:25
    Filippo: …and since they are fixed there
    is no way to force the protocol to use
  • 51:25 - 51:29
    anything else that would not be as strong.
    Question: Okay, thanks!
  • 51:29 - 51:33
    Herald: Next question for microphone 4.
  • 51:33 - 51:37
    Question: Thanks for your talk! In the
    abstract you mentioned that another
  • 51:37 - 51:42
    feature that had to be killed was SNI,
  • 51:42 - 51:46
    with the 0-RTT but there are ways to still
    implement that, can you elaborate a bit?
  • 51:46 - 51:50
    Filippo: Yeah. So, we gave this talk
    internally twice, and this question came
  • 51:50 - 51:56
    both of the times. So… laughs
  • 51:56 - 52:02
    So, SNI is a small parameter
    that the client sends to the server
  • 52:02 - 52:06
    to say which website it is trying to
    connect to. E.g. Cloudflare has
  • 52:06 - 52:11
    a lot of websites behind our machines, so
    you have to tell us “Oh I actually want
  • 52:11 - 52:17
    to connect to blog.filippo.io”. Now
    this is of course a privacy concern
  • 52:17 - 52:23
    because someone just looking at the bytes
    on the wire will know what specific website
  • 52:23 - 52:29
    you want to connect to. Now the unfortunate
    thing is that it has the same problem as
  • 52:29 - 52:35
    getting forward secrecy for the early
    data. You send SNI in the ‘Client Hello’,
  • 52:35 - 52:40
    and at that time you haven’t negotiated
    any key yet, so you don’t have anything
  • 52:40 - 52:45
    to encrypt it with. But if you
    don’t send SNI in the first flight
  • 52:45 - 52:49
    then the server doesn’t know what
    certificate to send, so it can’t send
  • 52:49 - 52:53
    the signature in the first flight! So you
    don’t have keys. So you would have to do
  • 52:53 - 52:59
    a 2-round trip, and now we would
    be back at TLS 1.2. So, alas.
  • 52:59 - 53:03
    That doesn’t work with
    1-round trip handshakes.
  • 53:03 - 53:09
    Nick: That said, there are proposals in
    the HTTP2 spec to allow multiplexing,
  • 53:09 - 53:14
    and this is ongoing work. It could be
    possible to establish one connection
  • 53:14 - 53:20
    to a domain and then establish another
    connection within the existing connection.
  • 53:20 - 53:22
    And that could potentially
    protect your SNI.
  • 53:22 - 53:26
    Filippo: So someone looking would think
    that you are going to blog.filippo.io but
  • 53:26 - 53:29
    then, once you open the connection,
    you would be able to ask HTTP2 to also
  • 53:29 - 53:33
    serve you “this other website”. Thanks!
  • 53:33 - 53:38
    Herald: Okay, next
    question, microphone 7,
  • 53:38 - 53:41
    or actually 5, sorry.
  • 53:41 - 53:47
    Question: You mentioned that there
    was formal verification of TLS 1.3.
  • 53:47 - 53:54
    What’s the software that was used
    to do the formal verification?
  • 53:54 - 53:59
    Nick: So there were several software
    implementations and protocols…
  • 53:59 - 54:03
    Let’s see if I can go back… here.
  • 54:03 - 54:07
    So, Tamarin[Prover] is a piece of software
    developed by Cas Cremers and others,
  • 54:07 - 54:12
    at Oxford and Royal Holloway.
    miTLS is in F#, I believe;
  • 54:12 - 54:18
    this is by INRIA.
    And NQSB-TLS is in OCaml.
  • 54:18 - 54:23
    So several different languages were used
    to develop these and I believe the authors
  • 54:23 - 54:27
    of NQSB-TLS are here…
  • 54:27 - 54:31
    Herald: Okay, next question, microphone 8.
  • 54:31 - 54:36
    Question: Hi! Thanks. Thank you for
    your informative presentation.
  • 54:36 - 54:43
    SSL and TLS history is riddled with “what
    could possibly go wrong” ideas and moments
  • 54:43 - 54:49
    that bit us in the ass eventually. And so
    I guess my question is taking into account
  • 54:49 - 54:53
    that there’s a lot of smaller organisations
    or smaller hosting companies etc. that
  • 54:53 - 55:00
    will probably get this 0-RTT thing
    wrong. Your gut feeling? How large
  • 55:00 - 55:04
    a chance is there that this will indeed
    bite us in the ass soon? Thank you.
  • 55:04 - 55:10
    Filippo: Ok, so, as I said I’m
    actually vaguely sceptical
  • 55:10 - 55:16
    about the impact on HTTP, because browsers
    can be made to replay requests already.
  • 55:16 - 55:22
    And we have seen papers
    and blog posts about it. But
  • 55:22 - 55:26
    no one actually went out
    and proved that that broke
  • 55:26 - 55:31
    a huge percent of the internet. But to
    be honest, I actually don’t know how to
  • 55:31 - 55:36
    answer how badly we will be bitten by it.
    But remember that on the other side
  • 55:36 - 55:42
    of the balance is how many people still say
    that they won’t implement TLS
  • 55:42 - 55:46
    because it’s “slow”. Now, no!
  • 55:46 - 55:52
    It’s 0-RTT, TLS is fast! Go
    out and encrypt everything!
  • 55:52 - 55:58
    So those are the 2 concerns that
    you have to balance together.
  • 55:58 - 56:02
    Again, my personal opinion
    is also worth very little.
  • 56:02 - 56:07
    This was a decision that was made by
    the entire community on the mailing list.
  • 56:07 - 56:13
    And I can assure you that everyone has
    been really conservative with everything,
  • 56:13 - 56:19
    thinking even… indeed, about whether the name
    would have misled people. So,
  • 56:19 - 56:24
    I can’t predict the future. I can only
    say that I hope we made the best choice
  • 56:24 - 56:29
    to make as much of the
    web as secure as we can.
  • 56:29 - 56:32
    Herald: Next question is from the internet.
  • 56:32 - 56:35
    Signal Angel, do we have another
    question from the internet?
  • 56:35 - 56:38
    Signal Angel: Yes we do.
  • 56:38 - 56:43
    What are the major implementation
    incompatibilities that were found
  • 56:43 - 56:46
    now that the actual spec is fairly close [to final]?
  • 56:46 - 56:48
    Herald: Can you repeat that question?
  • 56:48 - 56:53
    Signal Angel repeats question
  • 56:53 - 56:59
    Filippo: Okay. As in
    during the drafts period?
  • 56:59 - 57:03
    So, some of the ones that had version
    intolerance were mostly, I think,
  • 57:03 - 57:07
    middleboxes and firewalls.
  • 57:07 - 57:13
    Nick: There were some very large sites.
    I think PayPal was one of them?
  • 57:13 - 57:18
    Filippo: Although during the process we
    had incompatibilities for all kinds of
  • 57:18 - 57:24
    reasons, including one of
    the 2 developers misspelling
  • 57:24 - 57:28
    the variable number.
    laughs
  • 57:28 - 57:32
    During the drafts sometimes compatibility
    broke, but there was a lot of
  • 57:32 - 57:38
    collaboration between client implementations
    and server implementations on our side.
  • 57:38 - 57:44
    So I’m pretty happy to say that the
    actual 1.3 implementations had a lot of
  • 57:44 - 57:51
    interoperability testing, and all the
    issues were killed pretty quickly.
  • 57:51 - 57:54
    Herald: Okay, next question
    is from microphone number 1.
  • 57:54 - 57:59
    Question: I have 2 quick questions
    concerning session resumption.
  • 57:59 - 58:03
    If you store some data on a server
    from a session, wouldn’t that be
  • 58:03 - 58:08
    some kind of supercookie?
    Is that not privacy-dangerous?
  • 58:08 - 58:14
    And the second question would be: what
    about DNS load balancers, or other setups with
  • 58:14 - 58:21
    huge numbers of servers where your request
    goes to a different server every time?
  • 58:21 - 58:28
    Filippo: Ok, so, these are details about
    deploying session tickets effectively.
  • 58:28 - 58:33
    TLS 1.3 does think about the privacy
    concerns of session tickets; and indeed
  • 58:33 - 58:38
    it allows the server to send multiple
    session tickets. So the server will still
  • 58:38 - 58:42
    know which client is sending it a ticket if it
    wants to. But since the tickets
  • 58:42 - 58:47
    are sent encrypted, unlike in 1.2,
    and there can be many of them,
  • 58:47 - 58:53
    anyone looking at the
    connection will not be able to link it
  • 58:53 - 58:58
    back to the original connection. That’s
    the best you can do, because if the server
  • 58:58 - 59:03
    and the client have to reuse some shared
    knowledge the server has to learn about
  • 59:03 - 59:08
    who it was. But session tickets in 1.3
    can’t be tracked by a passive observer,
  • 59:08 - 59:13
    by a third party, actually. And… when you
    do load balancing… there is an interesting
  • 59:13 - 59:19
    paper about deploying session tickets,
    but the gist is that you probably want
  • 59:19 - 59:25
    to figure out how clients roam between
    your servers, and strike a balance between
  • 59:25 - 59:30
    sharing the session ticket
    key, so that resumption is more effective, and
  • 59:30 - 59:36
    not sharing the session ticket key, which
    makes it harder for an attacker to acquire them all.
  • 59:36 - 59:42
    You might want to scope keys geographically,
    or to a single rack…
  • 59:42 - 59:45
    it’s really up to the deployment.
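[Editor's note: a hedged sketch of the knob this trade-off turns on, using Go's crypto/tls. Servers that share (and rotate) the same session ticket keys can resume each other's sessions; servers that don't, can't. fetchTicketKeys and the file paths are hypothetical stand-ins for however a given deployment distributes keys, per region, per rack, or otherwise.]

```go
package main

import (
	"crypto/tls"
	"log"
	"time"
)

// fetchTicketKeys is a hypothetical stand-in for a deployment's key
// distribution (per region, per rack, ...). Newest key first: it encrypts
// new tickets; older keys only decrypt tickets issued before rotation.
func fetchTicketKeys() [][32]byte {
	return [][32]byte{
		{ /* current key material */ },
		{ /* previous key material */ },
	}
}

func main() {
	cert, err := tls.LoadX509KeyPair("server.pem", "server-key.pem") // placeholders
	if err != nil {
		log.Fatal(err)
	}
	cfg := &tls.Config{
		MinVersion:   tls.VersionTLS13,
		Certificates: []tls.Certificate{cert},
	}
	cfg.SetSessionTicketKeys(fetchTicketKeys())

	// Rotate periodically so old tickets age out; the wider the set of
	// servers sharing these keys, the more resumptions succeed, but the
	// bigger the prize if the keys ever leak.
	go func() {
		for range time.Tick(12 * time.Hour) {
			cfg.SetSessionTicketKeys(fetchTicketKeys())
		}
	}()

	ln, err := tls.Listen("tcp", ":443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		conn.Close() // hand off to the application in a real server
	}
}
```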
  • 59:45 - 59:47
    Herald: Okay, final question
    goes to microphone 3.
  • 59:47 - 59:52
    Question: I have a question regarding the
    GREASE mechanism that is implemented
  • 59:52 - 59:57
    on the client side. If I understood
    it correctly you are inserting
  • 59:57 - 60:02
    random version numbers of
    non-existent TLS or SSL versions
  • 60:02 - 60:09
    and that way training
    the servers to
  • 60:09 - 60:14
    conform to the specification. What
    is the result of the real-world tests?
  • 60:14 - 60:18
    How many servers actually
    are broken by this?
  • 60:18 - 60:23
    Filippo: So you would expect none because
    after all they are all implementing 1.3
  • 60:23 - 60:28
    now, so that all the clients they would
    see would already be doing GREASE. Instead
  • 60:28 - 60:33
    just as Google enabled GREASE I think
    it broke… I’m not sure so I won’t say
  • 60:33 - 60:38
    which specific server implementation, but
    one of the minor server implementations
  • 60:38 - 60:42
    was immediately detected
    as… the Haskell one!
  • 60:42 - 60:44
    Nick: Right!
    Filippo: I don’t remember the name,
  • 60:44 - 60:47
    I can’t read Haskell, so I don’t know what
    exactly they were doing, but they were
  • 60:47 - 60:50
    terminating connections because of GREASE.
  • 60:50 - 60:53
    Nick: And just as a note, GREASE is also
    used in cipher negotiation and anything
  • 60:53 - 60:59
    that is a negotiation in TLS 1.3.
    So this actually did break
  • 60:59 - 61:03
    a subset of servers, but
    a small enough subset
  • 61:03 - 61:07
    that people were happy with it.
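[Editor's note: a small sketch of what the GREASE values themselves look like, per the scheme Chrome shipped, later published as RFC 8701; the surrounding handshake code is omitted. A client mixes one of these reserved code points into its offered versions, cipher suites and extensions, and a conforming peer simply ignores values it does not recognize instead of terminating the connection.]

```go
package main

import (
	"crypto/rand"
	"fmt"
	"log"
)

// greaseValue returns one of the 16 reserved GREASE code points
// (0x0A0A, 0x1A1A, ..., 0xFAFA) defined in RFC 8701.
func greaseValue() uint16 {
	var b [1]byte
	if _, err := rand.Read(b[:]); err != nil {
		log.Fatal(err)
	}
	r := uint16(b[0]) & 0x0F             // pick one of the 16 variants
	return r<<12 | 0x0A<<8 | r<<4 | 0x0A // the 0xRARA pattern
}

func main() {
	// A client would splice a value like this into its list of cipher
	// suites, supported groups, or supported versions; servers that
	// abort on unknown values are exactly what GREASE is meant to flush out.
	fmt.Printf("GREASE code point: %#06x\n", greaseValue())
}
```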
  • 61:07 - 61:09
    Question: Thanks!
    Nick: 2% is too high!
  • 61:09 - 61:11
    Herald: Thank you very much.
    Filippo: Thank you!
  • 61:11 - 61:20
    applause
  • 61:20 - 61:39
    33C3 postroll music
  • 61:39 - 61:44
    subtitles created by c3subtitles.de
    in the year 2017. Join, and help us!