-
33C3 preroll music
-
Herald: Basically the upcoming
talk is about “Deploying TLS 1.3”
-
and is by Filippo Valsorda
and Nick Sullivan,
-
and they’re both with Cloudflare.
-
So please, a warm welcome
to Nick and Filippo!
-
applause
-
Filippo: Hello everyone. Alright,
we are here to talk about TLS 1.3.
-
TLS 1.3 is of course the latest
version of TLS, which stands for
-
‘Transport Layer Security’.
Now, you might know it best
-
as, of course, the green lock in
the browser, or by its old name SSL,
-
which we are still trying
to kill. Now. TLS is
-
a transparent security protocol
that can securely tunnel
-
arbitrary application traffic.
It’s used by web browsers, of course,
-
it’s used by mail servers to
communicate with each other
-
to secure SMTP. It’s used by
Tor nodes to talk to each other.
-
It has evolved over 20 years,
-
but at its core it’s about a client
and a server that want to communicate
-
securely over the network.
To communicate securely over the network
-
they need to establish some key material,
to agree on some key material
-
on the two sides to encrypt
the rest of the traffic.
-
Now how they agree on this key material
happens in a phase that we call
-
the ‘handshake’. The handshake involves
some public key cryptography and some data
-
being shovelled from the client to the
server, from the server to the client.
-
Now this is what the handshake
looks like in TLS 1.2.
-
So the client starts the dance
by sending a ‘Client Hello’ over,
-
which specifies which
parameters it supports.
-
The server receives that and sends
a message of its own, which is
-
‘Server Hello’ that says: “Sure!
Let’s use this cipher suite over here
-
that you say you support, and
here is my key share to be used
-
in this key agreement algorithm.
And also here is a certificate
-
which is signed by an authority
that proves that I am indeed
-
Cloudflare.com. And here is a signature
from the certificate to prove that
-
this key share is actually the one that
I want you to use, to establish keys”.
-
The client receives that, and it generates
its own key share, its own half
-
of the Diffie-Hellman key exchange,
and sends over the key share,
-
and a message to say: “Alright, this
is it. This wraps up the handshake”
-
which is called the ‘Finished’ message.
[The] server receives that, makes
-
a ‘Finished’ message of its own,
and answers with that. So.
-
Now we can finally send application
data. So to recap, we went:
-
Client –> Server, Server –> Client;
Client –> Server, Server –> Client.
-
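The key-agreement core of those flights can be sketched as a toy finite-field Diffie-Hellman exchange (the parameters below are deliberately tiny and insecure, purely for illustration; real TLS uses large groups or elliptic curves):

```python
import secrets

# Toy finite-field Diffie-Hellman: both sides derive the same shared
# secret from each other's public key shares. These parameters are far
# too small to be secure; they only illustrate the mechanism.
p = 0xFFFFFFFB  # a small prime (2**32 - 5), illustrative only
g = 5           # generator

def make_key_share():
    """Generate a private exponent and the public key share g^x mod p."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

client_priv, client_share = make_key_share()   # sent in the client's flight
server_priv, server_share = make_key_share()   # sent back with the Server Hello

# Each side mixes its own private key with the peer's public share.
client_secret = pow(server_share, client_priv, p)
server_secret = pow(client_share, server_priv, p)
assert client_secret == server_secret  # both sides now hold the same key material
```

Each exchange of shares costs a network flight, which is why the number of round trips before the shared secret exists matters so much.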
We had to do 2 round trips between the
client and the server before we could do
-
anything. We haven’t sent a single
byte on the application layer
-
until now. Now of course
this, on mobile networks
-
or in certain parts of the
world, can build up
-
to hundreds of milliseconds of latency.
And this is what needs to happen
-
every time a new connection is set up.
Every time the client and the server
-
have to go twice between them
to establish the keys before
-
the connection can actually
be used. Now, TLS 1.1
-
and 1.0 were not that different
from 1.2. So you might ask: well, then
-
why are we having an entire talk on
TLS 1.3, which is probably just this other
-
iteration over the same concept? Well,
TLS 1.3 is actually a big re-design.
-
And in particular, the handshake has been
restructured. And the most visible result
-
of this is that an entire round
trip has been shaved off.
-
So, here is what a TLS 1.3
handshake looks like.
-
How does 1.3 remove a round trip?
How can it do that? Well, it does that
-
by predicting what key agreement algorithm
-
the server will decide to use, and
pre-emptively sending a key share
-
for that algorithm to the server.
So with the first flight we had
-
the ‘Client Hello’, the supported
parameters, and a key share
-
for the one that the client thinks the
server will like. The server receives that
-
and if everything goes well, it will
go like “Oh! Sure! I like this key share.
-
Here is my own key share to run
the same algorithm, and here is
-
the other parameters we should use.”
It immediately mixes the two key shares
-
to get a shared key, because now
it has both key shares – the client’s
-
and the server’s – and sends again
the certificate and a signature
-
from the certificate, and then
immediately sends a ‘Finished’ message
-
because it doesn’t need anything else
from the client. The client receives that,
-
takes the key share, mixes the shared key
and sends its own ‘Finished’ message,
-
and is ready to send whatever application
layer data it was waiting to send.
-
For example your HTTP
request. Now we went:
-
Client –> Server, Server –> Client.
-
And we are ready to send data at the
application layer. So you are trying
-
to set up an HTTPS connection
and your browser
-
doesn’t need to wait 4x
the latency, or 4x the ping.
-
It only has to wait 2x. And of course
this saves hundreds of milliseconds
-
of latency when setting up fresh
connections. Now, this is the happy path.
-
So this is what happens when the
prediction is correct and the server likes
-
the client key share. If the server
doesn’t support the key share
-
that the client sent it will send a polite
request to use a different algorithm
-
that the client said it can support. We
call that message ‘Hello Retry Request’.
-
It has a cookie, so that it can be stateless,
but essentially it makes a fall-back
-
to what is effectively a TLS-1.2-like
handshake. And it’s not that hard
-
to implement because the client follows up
with a new ‘Client Hello’ which looks
-
essentially exactly like a fresh one. Now.
-
Here I’ve been lying to you.
TLS 1.2 is not always 2 round trips.
-
Most of the connections we see from the
Cloudflare edge, for example, are ‘resumptions’.
-
That means that the client has connected
to that website before in the past.
-
And we can use that, we can exploit
that to make the handshake faster.
-
That means that the client can remember
something about the key material
-
to make the next connection
a round trip even in TLS 1.2.
-
So here is what it looks like. Here
you have your normal TLS 1.2 full
-
2-round trip connection. And over
here it sends a new session ticket.
-
A session ticket is nothing more than an
encrypted, wrapped blob of key material
-
that the client will hold on to. The
session ticket is encrypted and signed
-
with a key that only the server knows.
So it’s completely opaque to the client.
-
But the client will keep it together
with the key material of the connection,
-
so that the next time it makes
a connection to that same website
-
it will send a ‘Client Hello’,
and a session ticket.
-
If the server recognises the session
ticket it will decrypt it, find inside
-
the key material. And now, after only one
round trip, the server will have some
-
shared key material with the client because
the client held on to the key material
-
from last time and the server just
decrypted it from the session ticket.
-
OK? So now the server has some shared
keys to use already, and it sends
-
a ‘Finished’ message, and the client sends
its own ‘Finished’ message and the request.
-
So this is TLS 1.2. This is what
is already happening every day
-
with most modern TLS connections. Now.
-
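The “opaque blob” idea can be sketched with an encrypt-then-MAC construction built only from the Python standard library (a toy: real deployments use a proper AEAD such as AES-GCM, and the SHA-256 keystream here is just an illustrative stand-in for a cipher):

```python
import hashlib, hmac, secrets

TICKET_KEY = secrets.token_bytes(32)  # known only to the server (or server fleet)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter-mode keystream from SHA-256 -- illustrative stand-in for a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal_ticket(key_material: bytes) -> bytes:
    """Wrap resumption key material so it is completely opaque to the client."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(TICKET_KEY, nonce, len(key_material))
    ct = bytes(a ^ b for a, b in zip(key_material, stream))
    tag = hmac.new(TICKET_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_ticket(ticket: bytes) -> bytes:
    """Server-side: verify and decrypt a ticket presented by a returning client."""
    nonce, ct, tag = ticket[:16], ticket[16:-32], ticket[-32:]
    expected = hmac.new(TICKET_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad ticket")
    stream = _keystream(TICKET_KEY, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))

psk = secrets.token_bytes(32)
assert open_ticket(seal_ticket(psk)) == psk  # client just round-trips the blob
```

The client never needs to understand the ticket; it only stores it and sends it back, which is what lets the server stay stateless across resumptions.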
TLS 1.3 resumption is not that different.
It still has the concept of a session ticket.
-
We changed the name of what’s inside
the session ticket to a ‘PSK’ but that
-
just means ‘Pre-shared Key’ because
that’s what it is: it’s some key material
-
that was agreed upon in advance.
And it works the same way:
-
the server receives the session
ticket, decrypts it and jumps to the
-
‘Finished’ message. Now,
-
a problem with resumption
is that if an attacker
-
controls the session ticket key
– the key that the server uses
-
to encrypt the session ticket that
has inside the key material –
-
an attacker can passively or in the future
even, with a recording of the connection,
-
decrypt the session ticket from the
‘Client Hello’, find the PSK inside it
-
and use it to decrypt the rest of
the connection. This is not good.
-
This means that someone can do
passive decryption by just having
-
the session ticket key. The way this is
usually addressed is to say
-
that session ticket keys are short-
lived. But still it would be nice if
-
we didn’t have to rely on that. And there
are actually nice papers that tell us
-
that implementations don’t
always do this right. So,
-
instead what TLS 1.3 allows
us to do is use Diffie-Hellman
-
with resumption. In 1.2 there
was no way to protect
-
against session ticket key
compromise. In 1.3 what you can do
-
is send a key share as part
of the ‘Client Hello’ anyway,
-
and the server will send a key share
together with the ‘Server Hello’,
-
and they will run Diffie-Hellman.
Diffie-Hellman is what was used to
-
introduce forward secrecy against
the compromise of, for example,
-
the certificate private key in 1.2, and
it’s used here to provide forward secrecy
-
for resumed connections.
Now, you will say:
-
“Now this looks essentially
like a normal 1.3 handshake,
-
why have the PSK at all?” Well,
there is something missing from this one,
-
there is no certificate. Because
there is no need to re-authenticate
-
with a certificate because the client and
the server spoke in the past, and so
-
the client knows that it already checked
the certificate of the server and
-
if the server can decrypt the session
ticket it means that it’s actually
-
who it says it is. So, the two
key shares get mixed together.
-
Then mixed with the PSK to make
a key that encrypts the rest
-
of the connection.
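That mixing step can be sketched with HKDF-style HMAC extraction (a simplification: the real TLS 1.3 key schedule runs through several labelled stages, but the shape is the same):

```python
import hashlib, hmac, secrets

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): condense input key material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# Inputs to a PSK-ECDHE resumption handshake (illustrative values).
psk = secrets.token_bytes(32)            # recovered from the session ticket
ecdhe_secret = secrets.token_bytes(32)   # output of the fresh Diffie-Hellman run

# Simplified two-stage mix, echoing the shape of the 1.3 key schedule:
early_secret = hkdf_extract(b"\x00" * 32, psk)
handshake_secret = hkdf_extract(early_secret, ecdhe_secret)

# An attacker holding only the PSK (say, via the session ticket key) can
# compute early_secret -- which is why 0-RTT data stays exposed -- but not
# handshake_secret, without also breaking the Diffie-Hellman exchange.
assert handshake_secret != early_secret
```

This is the sense in which Diffie-Hellman restores forward secrecy for resumed connections: the ticket key alone no longer yields the traffic keys.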
There is one other feature
-
that is introduced by TLS 1.3
resumption. And that is the fact
-
that it allows us to make 0-round
trip handshakes. Again,
-
all handshakes in 1.3
are mostly 1-round trip.
-
TLS 1.2 resumptions can be
at a minimum 1-round trip.
-
TLS 1.3 resumptions can be 0-round
trip. How does a 0-round trip
-
handshake work? Well, if you think about
it, when you start, you have a PSK,
-
a Pre-Shared Key. The client
can just use that to encrypt
-
this early data that it wants to
send to the server. So the client
-
opens a connection, to a server that it
has already connected to in the past,
-
and sends ‘Client Hello’, session ticket,
-
key share for Diffie-Hellman and
then early data. Early data is
-
this blob of application data
– it can be, for example, an HTTP request –
-
encrypted with the PSK.
The server receives this,
-
decrypts the session ticket, finds
the PSK, uses the PSK to decrypt the
-
early data and then proceeds as normal:
mixes the 2 key shares, mixes the PSK in,
-
makes a new key for the rest of the
connection and continues the connection.
-
So what happened here? We were able to
send application data immediately upon
-
opening the connection. This means that
we completely removed the performance
-
overhead of TLS. Now.
-
0-RTT handshakes, though, have
2 caveats that are theoretically
-
impossible to remove. One is that
that nice thing that we introduced
-
with the PSK ECDHE mode, the one where
we do Diffie-Hellman for resumption
-
in 1.3, does not help with 0-RTT data.
-
We do Diffie-Hellman when we
reach the green box in the slide.
-
Of course the early data is only encrypted
with the PSK. So let’s think about
-
the attacker again. The attacker somehow
stole our session ticket encryption keys.
-
It can look at the ‘Client Hello’, decrypt
the session ticket, get the PSK out,
-
use the PSK to decrypt the early data.
-
And it can do this even from a recording
if it gets the session ticket later on.
-
So the early data is not forward secret
with respect to the session ticket keys.
-
Then of course the attack becomes useless
once we do Diffie-Hellman with
-
the server’s answer. It only works against
the first flight sent from the client.
-
So to recap, a lot of things
going on here: TLS 1.2
-
introduced forward secrecy
against the compromise of the
-
certificate private keys, a long
time ago, by using ECDHE modes.
-
So 1.2 connections can be
always forward secret against
-
certificate compromise.
TLS 1.3 has that always on as well.
-
There is no mode that is not forward
secret against compromise of the
-
certificate. But when we think about what
might happen to the session ticket key:
-
TLS 1.2 never provides forward secrecy.
-
In TLS 1.2 compromising the session
ticket key always means being able
-
to passively and in the future
decrypt resumed connections.
-
In 1.3 instead, if we use PSK
ECDHE only the early data
-
can be decrypted by using
the session ticket key alone.
-
Now, I said that there were 2 caveats.
-
The second caveat is that
0-RTT data can be replayed.
-
The scenario is this: you have
some data in the early data
-
that is somehow authenticated. It might be
a HTTP request with some cookies on it.
-
And that HTTP request is somehow
executing a transaction,
-
okay? Moving some money, instructing
the server to do something. An attacker
-
wants to make that happen multiple
times. It can’t decrypt it, of course
-
– it’s protected with TLS. So it
can’t read the cookie, and it can’t
-
modify it because, of course, it’s
protected with TLS. But it can record
-
the encrypted message
and it can then replay it
-
against the server. Now if you have
a single server this is easy to fix.
-
You just take a note of the messages you
have seen before and you just say like
-
“No, this looks exactly like something I
got before”. But if, for example, like
-
Cloudflare you are running multiple data
centres around the world, you cannot keep
-
consistent state all the time, in real
time across all machines. So there would
-
be different machines that if they
receive this message will go like
-
“Sure I have the session ticket key,
I decrypt the PSK, I use the PSK,
-
I decrypt the early data, I find
inside something, I execute what
-
it tells me to do.” Now, of
course, this is not desirable.
-
One countermeasure that TLS offers
is that the client sends a value
-
in that bundle saying how long
ago, in milliseconds, it obtained
-
the session ticket. The server
looks at that value and
-
if it does not match its own view of this
information it will reject the message.
-
That means that if the attacker records
the message and then 10 seconds later
-
tries to replay it the times won’t
match and the server can drop it.
-
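A sketch of that freshness check (names and the tolerance window are illustrative; the real mechanism uses an obfuscated ticket age and server-chosen tolerances):

```python
import time

CLOCK_SKEW_WINDOW_MS = 10_000  # illustrative tolerance for jitter and clock skew

def fresh_enough(claimed_age_ms: int, ticket_issued_at: float, now: float) -> bool:
    """Server-side check: does the client's claimed ticket age match ours?

    The client reports how long ago (in ms) it obtained the ticket; the
    server compares that with its own record of when the ticket was issued.
    A replayed recording shows up as a claimed age that is far too old.
    """
    expected_age_ms = (now - ticket_issued_at) * 1000
    return abs(expected_age_ms - claimed_age_ms) <= CLOCK_SKEW_WINDOW_MS

issued = time.time() - 5  # ticket handed out 5 seconds ago
assert fresh_enough(5_000, issued, time.time())           # honest client: ages agree
assert not fresh_enough(5_000, issued - 60, time.time())  # replayed 60 s later: rejected
```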
But this is not a full solution because
if the attacker is fast enough
-
it can still replay messages.
So, all the server can do
-
is either accept the
0-RTT data, or reject it.
-
It can’t just take some part of it or
take a peek and then decide because
-
it’s the ‘Server Hello’ message that
says whether it’s accepted or rejected.
-
And the client will keep sending early
data until it gets the ‘Server Hello’.
-
There’s a race here. So the server has to
go blind and decide “Am I taking 0-RTT data
-
or am I just rejecting it all?” If it’s
taking it, and then it finds out that it’s
-
something that it can’t process because
“Oh god, there is a HTTP POST in here
-
that says to move some money, I can’t
do this unless I know it’s not replayed.”
-
So the server has to get some
confirmation. The good news is that
-
if the server waits for the ‘Finished’
message… The server sends
-
the ‘Server Hello’, the ‘Finished’
and waits for the client’s one.
-
When the client’s one gets there it means
that also the early data was not replayed,
-
because that ‘Finished’ message
ties together the entire handshake
-
together with some random value that
the server sent. So it’s impossible
-
that it was replayed. So, this is
what a server can do: it can accept
-
the early data and if it’s something
that is not idempotent, something
-
that is dangerous, if it’s replayed it
can just wait for the confirmation.
-
But that means it has to buffer it, and
there’s a risk for an attack here, where
-
an attacker just sends a HTTP POST, with
a giant body just to fill your memory.
-
So what we realised is that we could help
with this if we wrote on the session tickets
-
what’s the maximum amount of
early data that the client can send.
-
If we see someone sending more than
that, then it’s an attacker and we
-
close the connection, drop the
buffer, free up the memory.
-
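That buffering-with-a-limit idea can be sketched as follows (a simplified model; in TLS 1.3 the limit travels in the ticket as the max_early_data_size field):

```python
class EarlyDataBuffer:
    """Buffer 0-RTT data until the handshake confirms it wasn't replayed,
    aborting as soon as the ticket's advertised limit is exceeded."""

    def __init__(self, max_early_data: int):
        self.max_early_data = max_early_data  # the value written into the ticket
        self.buffered = bytearray()

    def receive(self, chunk: bytes) -> None:
        if len(self.buffered) + len(chunk) > self.max_early_data:
            # More early data than the ticket allows: treat the peer as an
            # attacker, drop the buffer and free the memory.
            self.buffered = bytearray()
            raise ConnectionError("early data limit exceeded; closing connection")
        self.buffered.extend(chunk)

buf = EarlyDataBuffer(max_early_data=16_384)
buf.receive(b"GET / HTTP/1.1\r\n")    # fits within the limit
try:
    buf.receive(b"x" * 20_000)        # attacker-sized POST body: rejected
except ConnectionError:
    assert not buf.buffered           # buffer dropped, memory freed
```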
But anyway, whatever
countermeasures we deploy,
-
unless we can keep global state across the
servers, we have to inform the application
-
that “this data might be replayed”.
The spec knows this.
-
So the TLS 1.3 spec EXPLICITLY says
-
protocols must NOT use
0-RTT without a profile
-
that defines its use. Which means
“without knowing what they are doing”.
-
This means that TLS stack
APIs have to do 1 round trip
-
by default, which is not affected by
replays, and then allow the server
-
to call some APIs to either reject
or wait for the confirmation,
-
and to let the client decide what goes
into this dangerous replayable
-
piece of data. So this will change
-
based on the protocols but what about
our favourite protocol? What about
-
HTTP? Now HTTP should
be easy, the HTTP spec,
-
you go read it and it says “Well,
GET requests are idempotent,
-
they must not change anything on the
server”. Solved! We will just allow
-
GET requests in early data because even
if they are replayed nothing happens!
-
Yay! Nope. sighs You will definitely
find some server on the internet
-
that has something like
“send-money.php?to=filippo&amount=this”
-
and it’s a GET request. And if an attacker
records this, which is early data,
-
and then replays this against a different
server in the pool, that will get executed
-
twice. And we can’t have that.
-
Now, so what can we do here?
-
We make trade-offs!
-
If you know your application, you can
make very specific trade-offs. E.g.
-
Google has been running QUIC
with 0-RTT for the longest time,
-
for 3 years I think? And that means that
they know their application very well.
-
And they know that they don’t have
any “send-money.php” endpoints.
-
But if you are like Cloudflare that
fronts a wide number of applications
-
you can’t make such sweeping
assumptions, and you have instead
-
to hope for some middle ground. For
example, something we might decide to do
-
is to only allow GETs
to the root. So “GET /”
-
which might bring the most benefit because
maybe most connections start like that,
-
and is the least likely to cause trouble.
-
We are still working on how exactly to
bring this to applications. So if you know
-
of an application that would get hurt
by something as simple as that
-
do email us, but actually,
if you have an application
-
that is that vulnerable I have
bad news. Thai Duong et al.
-
demonstrated that browsers will
today, without TLS 1.3 or anything,
-
replay HTTP requests
if network errors happen.
-
And they will replay them silently.
So it might not be actually worse
-
than the current state. Okay.
I can actually see everyone
-
getting uneasy in their seats, thinking
“There the cryptographers are at it again!
-
They are making the security protocol that
we need more complex than it has to be
-
to get their job security for
the next 15 years!” Right?
-
No. No. I can actually assure you that
-
one of the big changes, in my opinion
even bigger than the round trips in 1.3,
-
is that everything is being weighed
for the benefit against the complexity
-
that it introduces. And
while 0-RTT made the cut
-
most other things definitely didn’t.
-
Nick: Right. Thanks Filippo.
-
In TLS 1.3 as an iteration of
TLS we also went back, or,
-
“we” being the people who are
looking at TLS, went back and
-
revisited the existing TLS 1.2 features
that sort of seemed reasonable at the time
-
and decided whether or not the complexity
and the danger added by these features,
-
or these protocols, or these
primitives involved in TLS were
-
reasonable to keep. And the big one which
happened early on in the process is
-
‘Static RSA’ mode. So this is the way that
TLS has been working back since SSL.
-
Rather than using Diffie-Hellman to
establish a shared key… How this works is,
-
the client will make its own shared
key, and encrypt it with the server’s
-
certificate public key which is gonna
be an RSA key, and then just send it
-
in plain text over the wire to the server.
And then the server would use its
-
private key to decrypt that, and then
establish a shared key. So the client
-
creates all the key material in this case.
And one thing that is sort of obvious
-
from this is that if the private key
for the certificate is compromised,
-
even after the fact, even years later,
someone with the transcript of what happened
-
can go back and decrypt this key material,
and then see the entire conversation.
-
So this was removed very early in the
process, somewhere around 2 years ago
-
in TLS 1.3. So, much to our surprise,
and the surprise of everyone
-
reading the TLS mailing
list, just very recently,
-
near the end of the standardisation
process where TLS 1.3 was almost final
-
this e-mail landed on the list. And this
is from Andrew Kennedy who works at BITS
-
which basically means he works
at banks. So this is what he said:
-
“Deprecation of the RSA key exchange
in TLS 1.3 will cause significant problems
-
for financial institutions, almost all of
whom are running TLS internally and have
-
significant, security-critical investments
in out-of-band TLS decryption”.
-
“Out-of-band TLS decryption”… mmh…
laughs - applause
-
That certainly sounds critical…
critical for someone, right?
-
laughs - applause
So…
-
laughs
applause
-
So one of the bright spots was
Kenny Paterson’s response to this,
-
in which he said: “My view
concerning your request: no.
-
Rationale: We’re trying to build a MORE
secure internet.” The emphasis on ‘more’
-
is mine but I’m sure he meant it, yeah.
-
applause
-
So after this the banking folks came
to the IETF and presented this slide
-
to describe how hard it was to actually
debug their system. This is a very simple…
-
I guess, with respect to banking. Those
are the different switches, routers,
-
middleware, web applications; and
everything talks TLS one to the other.
-
And after this discussion
we came to a compromise.
-
But instead of actually compromising
the protocol Matthew Green
-
taught them how to use Diffie-Hellman
incorrectly. They ended up actually
-
being able to do what they wanted
to do, without us – or anybody
-
in the academic community, or in the
TLS community – adding back this
-
insecure piece of TLS.
-
So if you want to read this it shows
how to do it. But in any case
-
– we didn’t add it back.
Don’t do this, basically! laughs
-
applause
-
So we killed static RSA, and
what else did we kill? Well,
-
looking back on the trade-offs there is
a number of primitives that are in use
-
in TLS 1.2 and earlier that just
haven’t stood the test of time.
-
So, RC4 stream cipher. Gone!
applause
-
3DES (Triple DES) block cipher. Gone!
applause
-
MD5, SHA1… all gone. Yo!
ongoing applause
-
There are even constructions that took…
basic block cipher constructions
-
that are gone: AES-CBC.
Gone. RSA PKCS#1 v1.5,
-
this has been known to have been
problematic since 1998, also gone!
-
They have also removed several features
like Compression and Renegotiation which
-
was replaced with a very lightweight
‘key update’ mechanism. So in TLS 1.3
-
none of these met the balance of
benefit vs. complexity. And a lot of these
-
vulnerabilities, you might recognize, are
just impossible in TLS 1.3. So that’s good.
-
applause
-
So the philosophy for TLS 1.3 in a lot of
places is simplify and make it more robust
-
as much as possible. There are a number
of little cases in which we did that.
-
Some of the authors of this paper may be
in the audience right now. But there is
-
a way in which block ciphers were
used for the actual record layer
-
that was not as robust as it could be.
It has been replaced with a much simpler
-
mechanism. TLS 1.2 had this
-
really kind of funny ‘Catch-22’ in it
where the cipher negotiation
-
is protected by a ‘Finished’ message which
is a message-authentication code, but
-
the algorithm for that code was determined
in the cipher negotiation, so,
-
it had this kind of loop-back effect. And
attacks like FREAK, LogJam and CurveSwap
-
(from last year) managed to exploit these
to actually downgrade connections.
-
And this was something that was happening
in the wild. And the reason for this is
-
that these cipher suites in this handshake
are not actually digitally signed
-
by the private key. And in TLS 1.3
this was changed. Everything
-
from the signature up is digitally
signed. So this is great!
-
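The fix can be sketched with a running transcript hash (simplified to stay self-contained: TLS 1.3 actually signs this hash with the certificate’s private key, e.g. RSA-PSS or ECDSA, which the stdlib cannot do directly):

```python
import hashlib

def transcript_hash(messages: list[bytes]) -> bytes:
    """Running hash over every handshake message so far; in TLS 1.3 the
    server's certificate key signs this value, so the signature covers the
    cipher negotiation itself."""
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.digest()

honest = [b"ClientHello: ciphers=[strong]", b"ServerHello: cipher=strong"]
tampered = [b"ClientHello: ciphers=[weak]", b"ServerHello: cipher=weak"]

# A FREAK/Logjam-style downgrade changes the negotiated parameters, so the
# signed transcript hash no longer matches and the signature check fails.
assert transcript_hash(honest) != transcript_hash(tampered)
```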
What else did we change? Well,
what else did TLS 1.3 change
-
vs. TLS 1.2? And that is: fewer, better
choices. And in cryptography
-
better choices always means fewer choices.
So there is now a shortlist of curves and
-
finite field groups that you can use. And
no arbitrary Diffie-Hellman groups made up
-
by the server, no arbitrary curves
that can be used. And this sort of
-
shortening of the list of parameters
really enables 1-RTT to work
-
a lot of the time. So as Filippo
mentioned, the client has to guess
-
which key establishment
methods the server supports,
-
and send that key share. If there is
a short list of only-secure options
-
this happens a larger percentage of
the time. So when you’re configuring
-
your TLS server it no longer looks
like a complicated takeout menu,
-
it’s more like a wedding [menu]. Take one
of each, and it’s a lot more delicious
-
anyways. And you can look on
Wireshark, it’s also very simple.
-
The cipher suites use extensions,
the curves, and you can go from there.
-
Filippo: Now, TLS 1.3 also fixed
what I think was one of the biggest
-
actual design mistakes of
TLS 1.2. We talked about
-
how forward secrecy works
with resumption in 1.2 and 1.3.
-
But TLS 1.2 is even more
problematic. TLS 1.2 wraps
-
inside the session tickets the actual
master secret of the old connection.
-
So it takes the actual keys that encrypt
the traffic of the original connection,
-
encrypts them with the session ticket key,
and sends that to the client to be sent
-
back the next time. We talked about
how there’s a risk that an attacker will
-
obtain session ticket keys, and decrypt
the session tickets, and break
-
the forward secrecy and decrypt
the resumed connections. Well,
-
in TLS 1.2 it’s even worse. If they
decrypt the session tickets they could
-
go back and backward decrypt the original
-
non-resumed connection. And
this is completely unnecessary.
-
We have hash functions, we have one-way
functions where you put an input in
-
and you get something that you can’t
go back from. So that’s what 1.3 does.
-
1.3 derives new keys, fresh
keys for the next connection
-
and wraps them inside the session ticket
to become the PSK. So even if you
-
decrypt a 1.3 session ticket
you can only attack
-
the subsequent connection, and we’ve
seen that you might be able to decrypt
-
only the early data, or the whole connection
depending on what mode it uses. But
-
you definitely can’t decrypt the
original non-resumed connection.
-
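The one-way derivation can be sketched like this (simplified; TLS 1.3 actually derives a resumption master secret through its key schedule and mixes in a per-ticket nonce, but the one-way property is the point):

```python
import hashlib, hmac, secrets

def derive_resumption_psk(resumption_master_secret: bytes, ticket_nonce: bytes) -> bytes:
    """One-way: the PSK placed in the ticket is an HMAC output, so
    recovering it reveals nothing about the original connection's keys."""
    return hmac.new(resumption_master_secret,
                    b"resumption" + ticket_nonce,
                    hashlib.sha256).digest()

master = secrets.token_bytes(32)  # key material protecting the original connection
psk = derive_resumption_psk(master, ticket_nonce=b"\x00")

# Contrast with TLS 1.2, where the ticket wraps `master` itself: here an
# attacker who decrypts the ticket learns only `psk`, and the hash cannot
# be inverted to walk back to `master`.
assert psk != master
```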
So, this would be bad enough, but 1.2
makes another decision that entirely
-
puzzled me. The whole ‘using the master
secret’ might be just because session
-
tickets were an extension in
1.2, which they are not in 1.3.
-
But, 1.2 sends the new session
ticket message at the beginning
-
of the original handshake,
unencrypted! I mean
-
encrypted with the session ticket keys
but not with the current session keys.
-
So, any server that just supports
-
session tickets will have at the
beginning of all connections,
-
even if resumption never happens, they
will have a session ticket which is
-
nothing else than the ephemeral
keys of that connection
-
wrapped with the session
ticket keys. Now, if you are
-
a global passive adversary
that somehow wants to do
-
passive dragnet surveillance and
you wanted to passively decrypt
-
all the connections, and somehow you
were able to obtain session ticket keys,
-
what you would find at the beginning
of every TLS 1.2 connection is
-
the session keys encrypted with
the session ticket keys. Now,
-
1.3 solves this, and in 1.3 this kind
of attacks are completely impossible.
-
The only thing that you can passively
decrypt, or decrypt after the fact,
-
is the early data, and definitely not non-
resumed connections, and definitely not
-
anything that comes after 0-RTT.
-
Nick: So it’s safer, basically.
laughs
-
Filippo: Hope so!
Nick: …hopefully.
-
And how do we know that it’s safer? Well,
these security parameters, and these
-
security requirements of TLS have been
formalized and, as opposed to earlier
-
versions of TLS the folks in the academic
community who do formal verification were
-
involved earlier. So there have been
several papers analyzing the state machine
-
and analyzing the different modes of
TLS 1.3, and these have aided a lot
-
in the development
of the protocol. So,
-
who actually develops TLS 1.3? Well, it’s
-
an organization called the IETF which is
the Internet Engineering Taskforce. It’s
-
a group of volunteers that meet 3 times
a year and have mailing lists, and they
-
debate these protocols endlessly. They
define the protocols that are used
-
on the internet. And originally, the first
thing that I ever saw about this – this is
-
a tweet of mine from September
2013 – was a wish list for TLS 1.3.
-
And since then they came out
with a first draft at the IETF…
-
Documents that define protocols
are known as RFCs, and
-
the lead-up to something becoming an RFC
is an ‘Internet Draft’. So you start with
-
the Internet Draft 0, and then you iterate
on this draft until finally it gets
-
accepted or rejected as an RFC. So
the first one was almost 3 years ago
-
back in April 2014, and the current
draft (18) which is considered to be
-
almost final, it’s in what is
called ‘Last Call’ at the IETF,
-
was just recently in October.
In the security landscape
-
during that time you’ve seen so many
different types of attacks on TLS. So:
-
Triple Handshake, POODLE, FREAK, Logjam,
DROWN (there was a talk about that earlier
-
today), Lucky Microseconds, SLOTH.
All these different types of acronyms
-
– you may or may not have heard of –
have happened during the development.
-
So TLS 1.3 is a living
document, and it’s hopefully
-
going to be small. I mean,
TLS 1.2 was 79 pages.
-
It’s kind of a rough read, but
give it a shot! If you like. TLS 1.3
-
if you shave off a lot of the excess stuff
at the end is actually close to that. And it’s
-
a lot nicer read, it’s a lot more precise,
even though there are some interesting
-
features like 0-RTT, resumption. So
practically, how does it get written?
-
Well it’s, uh… Github! And a mailing list!
So if you want to send a pull request
-
to this TLS working group, there it is.
This is actually how the draft gets defined.
-
And you probably want to send a message
to the mailing list to describe what your
-
change is, if you want to. I suggest if
anybody wants to be involved this is
-
pretty late. I mean it’s in ‘Last Call’…
But the mailing list is still open. Now
-
I’ve been working on this with a bunch of
other people, Filippo as well. We were
-
contributors on the draft, been working
for over a year on this. You can check
-
the Github issues to see how much work
has gone into it. The draft has changed
-
over the years and months.
-
E.g. Draft 9 had this very
complicated tree structure
-
for a key schedule, you can see
htk… all these different things
-
had to do with different keys in the TLS
handshake. And this was inspired by QUIC,
-
the Google protocol that Filippo mentioned
earlier as well as a paper called ‘OPTLS’.
-
And it had lots of different modes,
semi-static Diffie-Hellman, and this
-
tree-based key schedule. And over the
time this was whittled down from this
-
complicated diagram to what we have
now in TLS 1.3. Which is a very simple
-
derivation algorithm. This took a lot
of work to get from something big
-
to something small. But it’s happened!
Other things that happened
-
in TLS 1.3 are sort of less substantial,
cryptographically, and that involves
-
naming! If anyone has been following
along, TLS 1.3 is not necessarily
-
the unanimous choice for the name of this
protocol. It’s, as Filippo mentioned, 1.0,
-
1.1, 1.2 are pretty small iterations
even on SSLv3, whereas
-
TLS 1.3 is quite a big change.
So there are a lot of options
-
for names! Let’s have
a show of hands: Who here
-
thinks it should be called 1.3?
laughs
-
Thanks, Filippo! Filippo laughs
Yeah, so, pretty good number.
-
How about TLS 2? Anybody?
Well, that actually looks like more than…
-
Filippo: Remember that SSLv2 is
a thing! And it’s a terrible thing!
-
Nick: You don’t want to confuse
that with us! So how about TLS 4?
-
Still a significant number of people…
How about TLS 2017? Yeah…
-
Alright! TLS 7 anybody? Okay…
-
Filippo: TLS Millennium 2019 X?
-
YES! Sold!
Nick: Alright! TLS Vista?
-
laughter - Nick and Filippo laugh
applause
-
Nick: Lots of options! But just as
a reminder, the rest of the world
-
doesn’t really call it TLS. This is Google
trends, interest over time, searching for
-
‘SSL vs. TLS’. SSL is really what most
of the world calls this protocol. So SSL
-
has version 3 as its highest version,
and that’s kind of the reason why people
-
thought ‘TLS 4’ was a good idea, because
“Oh, people are confused: 3 is higher
-
than 1.2, yada-yada-yada”.
-
This poll was not the only poll. There
were also some informal Twitter polls.
-
“Mmm, Bacon!” was a good one,
52% of Ryan Hurst’s poll.
-
laughter
-
Versions are a really sticky thing in TLS.
-
E.g. the versions that we have of TLS
– if you look at them on the wire
-
they actually don’t match up.
So SSL 3 is 3.0 which does match up.
-
But TLS 1.0 is 3.1; TLS 1.1 is 3.2;
TLS 1.2 is 3.3; and originally
-
I think up to Draft 16
of TLS 1.3 it was 3.4.
-
Just sort of bumping the minor
version of TLS 1.2. Very confusing.
-
But after doing some internet
measurement it was determined that
-
a lot of servers, if you send a ‘Client
Hello’ with ‘3.4’, just disconnect. So
-
this is actually really bad, it prevents
browsers from being able to actually
-
safely downgrade. What a server is
supposed to do if it sees a version
-
higher than 3.3 is just respond with “3.3”
saying: “Hey, this is the best I have”.
-
But it turns out a lot of these break.
So 3.3 is in the ‘Client Hello’ now, and
-
3.4 is negotiated in a separate
extension. So this is messy.
-
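The negotiation dance described above can be sketched in a few lines. This is an illustrative Go sketch of the final design, not code from any real library; the function name, constants, and logic are ours:

```go
package main

import "fmt"

// Wire codes for the on-wire numbering described above:
// SSL 3.0 = 0x0300, TLS 1.0 = 0x0301, ... TLS 1.2 = 0x0303, TLS 1.3 = 0x0304.
const (
	VersionTLS12 = 0x0303
	VersionTLS13 = 0x0304
)

// negotiateVersion sketches the server-side logic: the ClientHello's legacy
// version field is capped at 0x0303, and anything newer is advertised in a
// separate extension instead, so old servers never see an unfamiliar
// version number in the legacy field.
func negotiateVersion(legacyVersion uint16, supportedVersions []uint16) uint16 {
	// A TLS 1.3-capable client lists newer versions in the extension.
	best := uint16(0)
	for _, v := range supportedVersions {
		if v > best && v <= VersionTLS13 {
			best = v
		}
	}
	if best != 0 {
		return best
	}
	// No extension: fall back to the legacy field, capped at TLS 1.2.
	if legacyVersion > VersionTLS12 {
		return VersionTLS12
	}
	return legacyVersion
}

func main() {
	// A TLS 1.3 client: legacy field says 1.2, extension says 1.3.
	fmt.Printf("0x%04x\n", negotiateVersion(0x0303, []uint16{VersionTLS13, VersionTLS12}))
	// A legacy TLS 1.2 client: no extension at all.
	fmt.Printf("0x%04x\n", negotiateVersion(0x0303, nil))
}
```

Because the extension list is only scanned for versions the server knows, unknown values in it are simply skipped rather than causing a failure.
-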
Right? But we do balance the benefits vs.
complexity, and this is one of the ones
-
where the benefit of not having servers
fail outweighs the complexity
-
of adding an additional mechanism. And to
prevent this from happening in the future
-
David Benjamin proposed something called
GREASE where in every single piece of
-
TLS negotiation you are supposed to,
as a client, add some random stuff
-
in there, so that servers will
get used to seeing things
-
that are not versions they’re used to.
So, 0x8a8a. It’s all GREASE-d up!
-
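The reserved GREASE code points follow a simple pattern. A sketch, assuming the values from David Benjamin’s proposal (0x0A0A, 0x1A1A, …, 0xFAFA, later standardized in RFC 8701); the helper names are ours:

```go
package main

import "fmt"

// greaseValues returns the 16 reserved GREASE code points:
// 0x0A0A, 0x1A1A, ..., 0xFAFA. Clients sprinkle one of these into every
// negotiated list (versions, cipher suites, extensions, groups) so that
// servers learn to ignore unknown values instead of disconnecting.
func greaseValues() []uint16 {
	vals := make([]uint16, 16)
	for i := range vals {
		vals[i] = 0x0A0A + uint16(i)*0x1010
	}
	return vals
}

// isGrease reports whether v matches the 0x?A?A pattern with both
// high nibbles equal.
func isGrease(v uint16) bool {
	return v&0x0F0F == 0x0A0A && v>>12 == (v>>4)&0x0F
}

func main() {
	fmt.Println(isGrease(0x8A8A)) // the value mentioned in the talk
	fmt.Println(isGrease(0x0304)) // a real TLS 1.3 code point, not GREASE
}
```
-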
Filippo: It’s a real thing!
It’s a real very useful thing!
-
Nick: This is going to be very useful,
for the future, for preventing
-
these sorts of things. But it’s really
unfortunate that that had to happen.
-
We are running low on time, but we
decided to actually get involved and
-
get our hands dirty. And one thing
the IETF really loves when developing
-
these standards is running code. So we
started with the IETF 95 Hackathon
-
which was in April, and managed,
by the end of it, to get Firefox
-
to load a server hosted by Cloudflare
over TLS 1.3. Which was a big
-
accomplishment at the time. We used NSS
which is the security library in Firefox
-
and ‘Mint’, which was a new implementation
-
of TLS 1.3, written from scratch in Go.
-
And the result was, it worked! But
this was just a proof-of-concept.
-
Filippo: To build something that was more
production ready, we looked at what was
-
the TLS library that we were most
confident modifying, which unsurprisingly
-
wasn’t OpenSSL! So we opted to
-
build 1.3 on top of the Go
crypto/tls library, which is
-
in the Go language standard library.
The result, we call it ‘tls-tris’,
-
and it’s a drop-in replacement for
crypto/tls, and comes with this
-
wonderful warning that says “Do not use
this for the sake of everything that’s
-
good and just!” Now, the warning used to
be about everything, but it’s not really
-
about security anymore (we got this
audited); it’s still about stability.
-
We are working on upstreaming
this, which will solidify the API,
-
and you can follow along with the
upstreaming process. The Google people
-
were kind enough to open a branch for us
to do the development, and it will definitely not
-
hit the next Go release, Go 1.8, but we
are looking forward to upstreaming this.
-
Anyway, even if you use Go,
deploying is hard.
-
The first time we deployed Tris,
the draft version was 13.
-
And to actually support browsers
going forward from there we had
-
to support multiple draft versions
at the same time by switching on
-
obscure details. And sometimes we had
to support things that were definitely
-
not even drafts because
browsers started to… diverge.
-
Now, anyway, we had
a test matrix that would run
-
all our commits against all the different
versions of the client libraries,
-
and that would make sure that we are
always compatible with the browsers.
-
And these days the clients are actually
much more stable, and indeed
-
you might already be using it
without knowing. E.g. Chrome Beta,
-
the beta channel has it enabled for about
50% of users, as an experiment on Google’s side.
-
And this is how our graphs looked
when we first launched,
-
when Firefox Nightly enabled it by default
and when Chrome Canary enabled it
-
by default. These days we are stable,
around 700 requests per second
-
carried over TLS 1.3.
And on our side we enabled it
-
for millions of our
websites on Cloudflare.
-
And, anyway, as we said,
the spec is a living document
-
and it is open. You can see it on
GitHub. The Tris implementation is there
-
even if it has this scary warning, and
the blog here is where we’ll probably
-
publish all the follow-up research and
results of this. Thank you very much and
-
if you have any questions please come
forward, I think we have a few minutes.
-
applause
-
Herald: Thank you, we have plenty
of time for questions. First question
-
goes to the Internet.
-
Signal Angel: The very first question
is from people asking whether
-
the decision to push 0-RTT up to
the application, handing it
-
off to the application developers,
is a very wise one?
-
Filippo: laughs
applause
-
Filippo: Well… fair. So, as we said, this
is definitely breaking an abstraction.
-
So it’s NOT broken by default.
If you just update Go
-
and get TLS 1.3 you won’t
get any 0-RTT because
-
indeed it requires collaboration by the
application. So unless an application
-
knows what to do with it, it just cannot
use it, and it still gets all the security benefits
-
and the advantages of the one-round-trip
full handshake anyway.
-
Herald: Ok, next question
is from microphone 1.
-
Question: With your early testing of the
protocol have you been able to capture
-
any hard numbers on what those
performance improvements look like?
-
Filippo sighs
-
Nick: One round trip! laughs
It depends on how long a round trip is.
-
Filippo: Yeah, exactly. One round trip
is… I mean, I can’t tell you a number
-
because of course if you live in
San Francisco with a fast fiber it’s,
-
I don’t know, 3 milliseconds, 6…?
If you live in, I don’t know,
-
some country where EDGE is the only type
of connection you get that’s probably
-
around one second. I think we have an
average that is around… between 100
-
and 200 milliseconds, but we haven’t
like formally collected these numbers.
-
Herald: Ok, next question
from microphone 3.
-
Question: One remark I wanted to make is
that another improvement that was made
-
in TLS 1.3 is that they added
encryption to client certificates.
-
So the client certificates are transmitted
encrypted which is important
-
if you consider that a client will move
around, and a dragnet surveillance entity
-
could track clients with this. And
another remark/question which might…
-
Herald: Questions end with a question mark.
So can you please keep it a bit short?
-
Question: Yeah…
That might be stupid so…
-
Do the fixed Diffie-Hellman
groups… wasn’t that the problem
-
with the Logjam attack? So… does
this help against Logjam attacks?
-
Nick: Are you referencing the
proposal for the banks?
-
Question: No no, just in general,
that you can pre-compute…
-
Nick: Right, yes, so in Logjam there was
a problem where there was a DH group
-
that was shared by a lot of different
servers by default. The Apache one,
-
which was 1024 bit.
In TLS 1.3 this was restricted to
-
a set of pre-defined DH groups, the
smallest of which is over 2000 bits,
-
and even with all the pre-computation in
the world if you have a 2000 bit DH group
-
it’s not feasible to pre-compute
enough to do any type of attack.
-
But, yeah, that’s a very good point.
-
Filippo: …and since they are fixed there
is no way to force the protocol to use
-
anything else that would not be as strong.
Question: Okay, thanks!
-
Herald: Next question from microphone 4.
-
Question: Thanks for your talk! In the
abstract you mentioned that another
-
feature that had to be killed was SNI encryption,
-
along with the 0-RTT, but there are ways to
still implement that. Can you elaborate a bit?
-
Filippo: Yeah. So, we gave this talk
internally twice, and this question came
-
both of the times. So… laughs
-
So, SNI is a small parameter
that the client sends to the server
-
to say which website it is trying to
connect to. E.g. Cloudflare has
-
a lot of websites behind our machines, so
you have to tell us “Oh I actually want
-
to connect to blog.filippo.io”. Now
this is of course a privacy concern
-
because someone just looking at the bytes
on the wire will know what specific website
-
you want to connect to. Now the unfortunate
thing is that it has the same problem as
-
getting forward secrecy for the early
data. You send SNI in the ‘Client Hello’,
-
and at that time you haven’t negotiated
any key yet, so you don’t have anything
-
to encrypt it with. But if you
don’t send SNI in the first flight
-
then the server doesn’t know what
certificate to send, so it can’t send
-
the signature in the first flight! So you
don’t have keys. So you would have to do
-
a 2-round trip, and now we would
be back at TLS 1.2. So, alas.
-
That doesn’t work with
1-round trip handshakes.
-
Nick: That said, there are proposals in
the HTTP/2 spec to allow multiplexing,
-
and this is ongoing work. It could be
possible to establish one connection
-
to a domain and then establish another
connection within the existing connection.
-
And that could potentially
protect your SNI.
-
Filippo: So someone looking would think
that you are going to blog.filippo.io but
-
then, once you open the connection,
you would be able to ask HTTP/2 to also
-
serve you “this other website”. Thanks!
-
Herald: Okay, next
question, microphone 7,
-
or actually 5, sorry.
-
Question: You mentioned that there
was formal verification of TLS 1.3.
-
What’s the software that was used
to do the formal verification?
-
Nick: So there were several software
implementations and protocols…
-
Let’s see if I can go back… here.
-
So, Tamarin[Prover] is a piece of software
developed by Cas Cremers and others,
-
at Oxford and Royal Holloway.
miTLS is in F# I believe,
-
this is by INRIA.
And NQSB-TLS is in OCaml.
-
So several different languages were used
to develop these and I believe the authors
-
of NQSB-TLS are here…
-
Herald: Okay, next question, microphone 8.
-
Question: Hi! Thanks. Thank you for
your informative presentation.
-
SSL and TLS history is riddled with “what
could possibly go wrong” ideas and moments
-
that bit us in the ass eventually. And so
I guess my question is taking into account
-
that there’s a lot of smaller organisations
or smaller hosting companies etc. that
-
will probably get this 0-RTT thing
wrong. Your gut feeling? How large
-
a chance is there that this will indeed
bite us in the ass soon? Thank you.
-
Filippo: Ok, so, as I said I’m
actually vaguely sceptical
-
on the impact on HTTP because browsers
can be made to replay requests already.
-
And we have seen papers
and blog posts about it. But
-
no one actually went out
and proved that that broke
-
a huge percent of the internet. But to
be honest, I actually don’t know how to
-
answer how badly we will be bitten by it.
But remember that on the other side
-
of the balance is how many people still
say that they won’t implement TLS
-
because it’s “slow”. Now, no!
-
It’s 0-RTT, TLS is fast! Go
out and encrypt everything!
-
So those are the 2 concerns that
you have to balance together.
-
Again, my personal opinion
is also worth very little.
-
This was a decision that was made by
the entire community on the mailing list.
-
And I can assure you that everyone has
been really conservative with everything,
-
thinking even about whether the name
would have misled people. So,
-
I can’t predict the future. I can only
say that I hope we made the best choice
-
to make most of the web
as secure as we can.
-
Herald: Next question is from the internet.
-
Signal Angel, do we have another
question from the internet?
-
Signal Angel: Yes we do.
-
What are the major implementation
incompatibilities that were found
-
now that the actual spec is fairly close?
-
Herald: Can you repeat that question?
-
Signal Angel repeats question
-
Filippo: Okay. As in
during the drafts period?
-
So, some of the ones that had version
intolerance were mostly, I think,
-
middleboxes and firewalls.
-
Nick: There were some very large sites.
I think Paypal was one of them?
-
Filippo: Although during the process we
had incompatibilities for all kinds of
-
reasons, including one of
the two developers misspelling
-
a variable name.
laughs
-
During the drafts sometimes compatibility
broke, but there was a lot of
-
collaboration between client implementations
and server implementations on our side.
-
So I’m pretty happy to say that the
actual 1.3 implementations had a lot of
-
interoperability testing, and all the
issues were pretty quick to be killed.
-
Herald: Okay, next question
is from microphone number 1.
-
Question: I have 2 quick questions
concerning session resumption.
-
If you store some data on a server
from a session, wouldn’t that be
-
some kind of supercookie?
Is that not privacy-dangerous?
-
And the second question would be: what
about DNS load balancers or some other
-
huge amounts of servers where your request
is going to different servers every time?
-
Filippo: Ok, so, these are details about
deploying session tickets effectively.
-
TLS 1.3 does think about the privacy
concerns of session tickets; and indeed
-
it allows the server to send multiple
session tickets. So the server will still
-
know which client is sending it, if it
wants to. But the tickets are sent
-
encrypted, unlike in 1.2, and there
can be many of them. So anyone looking at the
-
connection will not be able to link it
-
back to the original connection. That’s
the best you can do, because if the server
-
and the client have to reuse some shared
knowledge the server has to learn about
-
who it was. But session tickets in 1.3
can’t be tracked by a passive observer,
-
by a third party, actually. And… when you
do load balancing… there is an interesting
-
paper about deploying session tickets,
but the gist is that you probably want
-
to figure out how clients roam between
your servers, and strike a balance between
-
having to share the session ticket
key so that it’s more effective, and
-
not sharing the session ticket key which
makes it harder to acquire them all.
-
You might want to share keys per
geographic region, or per rack…
-
it’s really up to the deployment.
-
Herald: Okay, final question
goes to microphone 3.
-
Question: I have a question regarding the
GREASE mechanism that is implemented
-
on the client side. If I understood
it correctly you are inserting
-
random version numbers of
non-existent TLS or SSL versions
-
and that way training
the servers to
-
conform to the specification. What
is the result of the real-world tests?
-
How many servers actually
are broken by this?
-
Filippo: So you would expect none because
after all they are all implementing 1.3
-
now, so that all the clients they would
see would already be doing GREASE. Instead
-
just as Google enabled GREASE I think
it broke… I’m not sure so I won’t say
-
which specific server implementation, but
one of the minor server implementations
-
was immediately detected
as… the Haskell one!
-
Nick: Right!
Filippo: I don’t remember the name,
-
I can’t read Haskell, so I don’t know what
exactly they were doing, but they were
-
terminating connections because of GREASE.
-
Nick: And just as a note, GREASE is also
used in cipher negotiation and anything
-
that is a negotiation in TLS 1.3.
So this actually did break
-
a subset of servers, but
a small enough subset
-
that people were happy with it.
-
Question: Thanks!
Nick: 2% is too high!
-
Herald: Thank you very much.
Filippo: Thank you!
-
applause
-
33C3 postroll music
-
subtitles created by c3subtitles.de
in the year 2017. Join, and help us!