35C3 preroll music
Herald-Angel: All right. Now it's my very
big pleasure to introduce Hanno Böck to
you. He's no stranger to the Chaos crowd.
He's been to several Easterheggs and at
several other Chaos events. Today he's
here to talk about TLS 1.3: what it's all about, how it came to be, and what the future of it is gonna look like. Please
give a huge applause and welcome Hanno.
hanno: Yeah. Okay. So today I want to talk
to you about a new version of TLS. TLS is
this protocol, Transport Layer Security,
which I hope everyone knows what it is.
It's a protocol that you can put on top of
other protocols that gives us an encrypted
and authenticated channel through the
generally insecure internet. We have a new
version since August, TLS 1.3. At first
I'd like to go a bit into the history of why we
have this new version, how we got there
and what design decisions were made for
this version. So the very first version of SSL, as it was called back then, was released in 1995 by Netscape, and it was quickly followed up with version 3, which is still very similar to the TLS 1.2 that we mostly use today. And then in 1999 it was kind of taken over from Netscape by the IETF, which is the Internet standardization organization, and they renamed it to TLS. And so that's kind of
the history. We had SSL and I've marked it
in red because these two versions are
broken by design. You cannot really use
them in a way that is secure these days
because we know vulnerabilities that are
part of the protocol. And then we had, in
1999 it was renamed to TLS, and TLS 1.0 is still kind of OK if you do everything right. But that's really tricky. So it's kind of a dangerous protocol, but maybe not totally broken. Same with TLS 1.1. TLS 1.2 is what we
still mostly use today and TLS 1.3 is the
new one. And what you can see here, for example, is that the biggest gap is between 1.2 and 1.3, so there was a very long time with no new development here. You have probably heard that we had plenty of vulnerabilities in and around TLS, and these days a good vulnerability always has a logo and a nice name. And I want to go into one kind of vulnerability which doesn't have a logo - not even one of the variants. I was very surprised when I realized that. That's the
so-called Padding Oracles. They are in CBC
mode which is the encryption we use for
the actual data encryption, the symmetric
data encryption. The thing is, when we
encrypt data, what we usually use are so-called block ciphers, and they encrypt one block of data of a specific size - usually sixteen bytes. And this CBC mode was the common way to encrypt in past TLS versions. And this is roughly how it looks. So we have some initialization
vector which should be random, but wasn't
always, but that's another story. And then
we encrypt a block of data, and then we XOR that ciphertext into the next plaintext and encrypt it again. Now, one thing here is that because these are blocks of data, and our data may not always come in sixteen-byte blocks - it may just be five bytes or whatever - we need to fill up that space.
So we need some kind of padding. In TLS,
what was done was that first of all we had
some data. Then we added a MAC, which is
something that guarantees the correctness
of the data, the authentication of the
data. And then we pad it up to a block
size and then we encrypt it. And this
order of things turned out to be very
problematic. So this padding is a very simple method: if we have one byte to fill up, we make a 00; if we have two bytes to fill up, we make 01 01; three bytes, 02 02 02; and so on. So that's easy to understand, right? Let's for a moment
assume a situation where an attacker can
manipulate data and can see whether the
server receives a bad padding or whether
it receives bad data, where this MAC check
goes wrong. And here is the decryption
with CBC mode. And what an attacker can do here: the first thing the attacker does is throw one block away at the end - he just blocks the transmission of that block - and then he changes something here. So
what we assume here is the attacker wants
to know this decrypted byte because it may
contain some interesting data. So what he can do is manipulate this byte with a guess - and a byte can only have 256 values, so he can guess enough times - and XOR it with this value. And if you think about it, this gets XORed here with the plaintext. That means if we end up with this zero here, then the padding is valid. If we end up with some garbage value here, then the padding is probably invalid. So by making enough guesses, the attacker can decrypt a byte here, under the condition that he somehow learns whether the padding is valid or not. So he could
decrypt one byte - but can he go on? Let's assume we have learned that one byte, we have decrypted it, and now we go on with the next byte. So we XOR this byte on the right with what we already know it is, and with 01. Then we XOR the next byte with our guess, and also with 01. And if this ends up being 01 01, then again we have a valid padding. So the attacker learns the next byte, and he can do this for the other bytes. This was
originally discovered in 2002 by Serge Vaudenay. But it was kind of only
theoretical. So one thing here is that TLS
has these error messages. There are
different kinds of errors. And if you read
through the TLS 1.0 standard, if the
padding is wrong then you get this
decryption_failed error. And if the MAC is
wrong, so the data has some modification,
then you get this bad_record_mac error. So
you could say this would allow this
Padding Oracle attack because there are
these error messages. But the attacker
cannot see them because they are
encrypted. So this was kind of only a
theoretical attack which didn't really
work on a real TLS connection. But then
there was a later paper which made this
attack practical by measuring the timing
difference from these different kinds of
errors. And this allowed a practical
decryption of TLS traffic. Then in later
versions of TLS this was fixed or kind of
fixed. But there is a warning in the standard - this is straight from the standard text: "This leaves a small timing channel, but it is not believed to be large enough to be exploitable." If you read something like
that, it sounds maybe suspicious, maybe
dangerous. And actually, in 2013 there was this so-called Lucky Thirteen attack,
where a team of researchers actually
managed to exploit that small timing side
channel that the designers of the standard
believed was not large enough to be
exploitable. It is in theory possible to implement TLS in a way that is safe from these timing attacks, but it adds a lot of complexity to the code. If you just look at how Lucky Thirteen was fixed, it made the code much longer and much harder to understand. Then there was
another Padding Oracle which was called
POODLE, which was in the old version
SSLv3. This was kind of by design. So the
protocol was built in a way that you could
not avoid this Padding Oracle. Then it
turned out that there was also a kind of
TLS variation of this POODLE attack. And
the reason here was that the only major
change between SSLv3 and TLS 1.0 was that
the padding was fixed to a specific value,
where in the past it could have any value.
It turned out that there were TLS implementations that were not checking that, enabling this POODLE attack also in TLS. Then there was the so-called Lucky Microseconds attack: one of the people who had found the Lucky Thirteen attack looked at implementations to see if they had fixed Lucky Thirteen properly. They looked at s2n, which is an SSL library from Amazon, and they found: "Okay, they tried to implement countermeasures against this attack, but these countermeasures didn't really work, and there was still a timing attack that could be performed." Then there was a bug
in OpenSSL, which was kind of funny, because when OpenSSL tried to fix this Lucky Thirteen attack, they introduced another Padding Oracle attack which was actually much easier to exploit. So we had plenty of Padding Oracles. But remember what I said about the very first attack: it didn't really work in practice in TLS, because these errors are encrypted. Theoretically, though, you could imagine that someone creates an implementation that sends errors that are not encrypted. For example, you can send a TCP error, or just cut the connection, or have any kind of different behavior - because the whole attack just relies on the fact that you can distinguish these two kinds of errors. And you can find implementations out there doing that. So yeah, Padding Oracles are still an issue.
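To make the padding scheme and the resulting oracle concrete, here is a toy sketch in Python - an illustration only, not real TLS code, and the helper names are made up:

```python
# Toy model of the CBC padding scheme just described: n bytes of padding
# are filled with the value n-1 (one byte -> 00, two bytes -> 01 01, ...).

def pad(data: bytes, block_size: int = 16) -> bytes:
    """Append TLS-style padding so the length is a multiple of block_size."""
    n = block_size - (len(data) % block_size)
    if n == 0:
        n = block_size
    return data + bytes([n - 1] * n)

def padding_is_valid(plaintext: bytes) -> bool:
    """The check a server performs after decrypting: the last byte says how
    many padding bytes precede it, and all of them must have that value."""
    last = plaintext[-1]
    if last + 1 > len(plaintext):
        return False
    return all(b == last for b in plaintext[-(last + 1):])

# The oracle: in CBC, plaintext_byte = decrypt(block)[i] XOR prev_block[i].
# If an attacker XORs a guess g into the matching byte of the previous
# ciphertext block, the server sees (secret_byte XOR g). The padding check
# succeeds (ends in 00) exactly when g equals the secret byte - so one
# observable "valid padding" answer out of at most 256 guesses reveals it.
```

The point of the sketch is the `padding_is_valid` branch: any observable difference between "padding wrong" and "MAC wrong" is enough for the attack.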
Then I want to look at another attack, the so-called Bleichenbacher attack. It targets the RSA encryption, which is kind of the asymmetric encryption that we use at the beginning of a connection to establish a shared key. This attack was found in 1998 by Daniel Bleichenbacher. If you look at the RSA
encryption: before we encrypt something with RSA, we do some preparations. The way this is done, in all TLS versions, is this so-called PKCS #1 1.5 standard. It looks like this: it starts with 00 02. Then we have some random data, which is again just a padding to fill up space. Then we have a zero, which marks the end of the padding. Then we have a version number, 03 03, which stands for TLS 1.2 - totally obvious, right? I'll get to version numbers later. And then we have the secret data. But the relevant thing for this attack is mostly this 00 02 at the beginning. So we know each correct
encrypted block, if we decrypt it, it
starts with 00 02. Now we may wonder, if we implement a TLS server and it decrypts some data from the client and it doesn't start with 00 02: what shall it do? The naive thing would be: of course we just send an error message, because something is obviously wrong here. Now, this turns out to be not such a good idea, because if we do this, we tell the attacker something. We tell him that the decrypted data does not start with 00 02. So the attacker learns something about the interval in which the decrypted data lies: either it starts with 00 02 or it doesn't. And this turned out to be enough - if you send enough queries and modify the ciphertext, you can learn enough information to decrypt data.
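As a rough sketch of the format and of the two server behaviours (the field sizes, total length, and function names here are my own simplifications, not from the talk or the standard):

```python
import os

# Simplified PKCS #1 v1.5 layout as described: 00 02 | random padding |
# 00 | secret. The 48-byte secret and 64-byte total are illustrative.

def pkcs1_v15_pad(secret: bytes, total_len: int = 64) -> bytes:
    pad_len = total_len - 3 - len(secret)
    # Padding bytes must be non-zero; replace any zero with 1 for the sketch.
    padding = bytes((b or 1) for b in os.urandom(pad_len))
    return b"\x00\x02" + padding + b"\x00" + secret

def naive_server(decrypted: bytes) -> str:
    # DON'T do this: the distinguishable error is exactly the oracle
    # ("starts with 00 02 or not") that the Bleichenbacher attack needs.
    return "ok" if decrypted.startswith(b"\x00\x02") else "format_error"

def hardened_server(decrypted: bytes, secret_len: int = 48) -> bytes:
    # The TLS countermeasure: on a bad format, silently substitute a
    # random secret and let the handshake fail later, so every error
    # path looks the same from the outside.
    if not decrypted.startswith(b"\x00\x02"):
        return os.urandom(secret_len)
    return decrypted[-secret_len:]
```

The hardened variant is the idea behind the countermeasure the speaker describes next: never answer the "is the format valid?" question directly.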
The whole algorithm is a bit more
complicated but it's not that complicated.
It's relatively straightforward. It's a
bit of math and I didn't want to put in
any formulas, but yeah. Now, as I said, it was discovered in 1998, so TLS 1.0 introduced some countermeasures. The general idea here is that if you decrypt something and it is wrong, then you're supposed to replace it with a random value, use that random value as your secret, pretend nothing has happened, and continue - and then the handshake will fail later on, because you don't have the same key. This prevents the attacker from learning whether your data is valid or not. In 2003 a research team figured out
that the countermeasures, as they were described in TLS 1.0, were incomplete, and it was also not entirely clear how to implement them - because of this version thing, it was not exactly described how to handle the case where only the version is wrong. So they were able to create an attack that still worked despite these countermeasures. So more countermeasures were proposed, and in 2014 there was a paper showing that Java was still vulnerable to Bleichenbacher attacks in a special way, because it did some kind of decoding and then raised an exception - and the exception took long enough that you could measure the timing difference. And there was also still a small issue in OpenSSL, although that was not practically exploitable. In 2016 there was the so-called
DROWN attack. And the DROWN attack was a Bleichenbacher attack in SSL version 2. Now you may wonder: SSL version 2 is this very, very old version from 1995 - is this a problem? It actually is, because you can take encrypted data from a modern TLS version, TLS 1.2, and decrypt it with a server that still supports SSL version 2. So that was the DROWN attack. And then last year I thought maybe someone should check if there are still servers vulnerable to these Bleichenbacher attacks. So I wrote a small scan tool and started scanning, and scanned the Alexa top one million. The first hit was that Facebook.com was vulnerable, and it turned out that of the top 100 pages roughly a third were vulnerable. And in the end we found like 15 different implementations that were vulnerable - probably more, but these were the ones we know about. Yeah. And
just, I think, a month ago there was another paper showing that you can also use cache side channels - which is mostly interesting if you have cloud infrastructure where multiple servers are running on the same hardware - to perform these Bleichenbacher attacks. Now, what I want to show you here - you cannot read this because it's too small - are the chapters in the TLS standard that describe the countermeasures to these Bleichenbacher attacks. We knew about them since before TLS 1.2, so there was a small chapter on what you should do to prevent these attacks. And then they figured out: okay, that's not enough, we need to have more countermeasures, and even more. So what you can clearly see here is that it's getting more and more complicated to prevent these attacks. With every new TLS version we had more complexity to prevent these Bleichenbacher attacks.
These were just two examples. There were a
lot more attacks on TLS 1.2 and earlier
that were due to poor design choices. I've
named a few here: SLOTH, which attacks weak hashes; FREAK, which attacks issues in the handshake and compatibility with old versions; SWEET32, which attacks some block ciphers that have a small block size; Triple Handshake, which is a very complicated interaction of different handshakes. The general trend
here was that in TLS 1.2 and earlier, if there was a security bug, a vulnerability in the cryptography, what people did was: we need a workaround for the security issue. And then if this workaround doesn't work, if it's not sufficient, we need more workarounds. And also: we create more secure modes, but we still keep the old ones, and then people can choose. We have this algorithm agility - you can choose, there's the secure algorithm, there's the less secure algorithm, take whatever you want. Which in practice very often meant the insecure modes were still used - because for all of these things there were modes available in TLS 1.2 that didn't have these vulnerabilities, but they were optional. But, and I think that is the
major change that came with TLS 1.3, was a
mindset-change that people said, okay if
something has vulnerabilities, if it's
insecure and if we have something better
then we just remove the thing that is
vulnerable, that is problematic. So the
main change in TLS 1.3 was that a lot of things were deprecated. So we no longer have these CBC modes. We no longer have RC4, which is another cipher that was problematic. We no longer have 3DES, which has these small block sizes. We still use GCM, but we no longer use it with an explicit nonce, which also turned out to be problematic. We completely remove RSA encryption - we still use RSA, but only for signatures. We remove hash functions that turned out to be insecure. We remove Diffie-Hellman with custom parameters, which turned out to be very problematic. And we removed elliptic curves that looked not so secure.
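On the deployment side, this "remove the old stuff" mindset can be enforced directly; for example, Python's standard ssl module lets a client refuse everything below TLS 1.3 (a sketch; it assumes a reasonably recent Python and OpenSSL build with TLS 1.3 support):

```python
import ssl

# Build a client context that only accepts TLS 1.3. Handshakes with
# servers that top out at TLS 1.2 or lower will then simply fail.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

`create_default_context()` already disables SSLv2/SSLv3; pinning `minimum_version` additionally drops TLS 1.0 through 1.2.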
But there was also something else: some academics looked at TLS with a more scientific view. They tried to formally understand the security properties of this protocol and to analyze them, to see if they can prove some kind of security properties of the protocol. And many vulnerabilities that I mentioned earlier were found by these researchers trying to formally analyze the protocol. These analyses have also contributed to the design of TLS 1.3, to make it more robust against attacks. So this is, I think, also a big change: there was a much better collaboration between the scientists who were looking at the protocol and the people who were actually writing the protocol. But you may say, all the security is nice, but what we really care about - or maybe some people really care about - is speed, right?
We want our internet to be fast. We want
to open our browser and immediately get
the page loaded. And TLS 1.3 also brings
improved speed. And I am showing here the
handshake. And this is very simplified. I
have kind of only added the things that
matter to make this point. If you look at the left, at a handshake with an old TLS version: it starts with the client sending a client hello and some information - what version it supports, what encryption modes it supports. Then the server sends back which encryption modes it wants to use, and a key exchange. And then the client sends its part of the key exchange and the so-called finished message, then the server sends a finished message, and then the client can start sending data. In TLS
1.3 we have compressed this all a bit. The client sends its client hello and immediately sends a key exchange message. And then the server answers with its key exchange message - and a few more things that I left out for simplicity. But the important thing is that with the second message the client can already send data. And this is the situation for a fresh handshake, where we have not communicated before: I want to make a new connection to a server, it goes one time back and forth, and then I can send data - whereas in the earlier versions it was two times back and forth. So I can send data much faster. So yeah, we remove one round trip
from a fresh handshake. There are also security improvements to this handshake. So this is nice: we have more security and more speed. In particular, we have better security for so-called session resumption, which means reconnecting using a key from a previous session. And we also protect more data, which may avoid some attacks where an attacker fiddles with the handshake. These were more or less theoretical attacks, but they are also prevented in TLS 1.3. Yeah. So TLS 1.3 has a
more secure and a faster handshake. And if
you want to have more details about this
handshake there was a talk two years ago
at this congress, which goes into this in
much more detail. So if this particularly interests you, you should watch that talk.
I've put a link here and I will put the
slides online. There's also something
called the zero-roundtrip-handshake. And
this is even faster. We can send data
right away. Now, how can we do that? This is kind of cheating, because what we need here is a previous connection. We take a key from that previous connection, create a new key from it, and use that to send data right away. So yeah, we need a so-called pre-shared key, which we have from a previous connection, and then we can send data without any round trips. So, even more
mode does not come for free. There is a
problem here with so-called replay-
attacks, which means an attacker could
record the data that we're sending and
then send it again. And the server may think: okay, now this request came twice, so I'm doing twice what this request was supposed to do. So there are some caveats with 0-RTT, and the standard says you should only use it if it's safe - it says something like you should only use it if you have a profile for how to use it safely.
Now what does that mean? Well, let's look at HTTPS, the protocol we're usually using. If you look into the HTTP standard, it says that a GET request has to be idempotent and a POST request does not have to be idempotent.
Now what does that mean? It more or less means that if you send a request twice, it shouldn't do anything different from just sending it once. So in theory we could say: yes, GET requests are idempotent - that means they are safe for zero-round-trip connections. The question is, "do web developers" ... Sorry.
applause
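A server following that rule might gate 0-RTT early data on the request method, roughly like this. This is a hypothetical sketch - the function and the policy set are my own, not a real API; note that the HTTP spec also defines PUT and DELETE as idempotent, not just GET:

```python
# Hypothetical policy check: only let a request that arrived as 0-RTT
# early data proceed if replaying it would be harmless (idempotent).
IDEMPOTENT_METHODS = {"GET", "HEAD", "OPTIONS", "PUT", "DELETE"}

def allow_early_data(method: str) -> bool:
    # A replayed GET fetches the same page twice; a replayed POST might
    # transfer money twice - so POST must wait for the full handshake.
    return method.upper() in IDEMPOTENT_METHODS
```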
You can do a little experiment: If you
meet someone who is a web developer, ask
them if they know what idempotent means,
and when they can use idempotent requests
and when they cannot. So, in an ideal situation where web developers do know that, we can use 0-RTT safely with TLS 1.3. 0-RTT also does not have as strong forward secrecy as a normal handshake. So there's kind of a tradeoff here, because this pre-shared key is encrypted with a key on the server, and if that key gets compromised, that may compromise our connection - even if the key is only leaked later on. So, this looks a bit problematic
and many speculate that at least some of the future attacks we'll see on TLS 1.3 will focus on this 0-RTT mode, because it looks like one of the more fragile parts of the protocol. But it gives us more speed, so people wanted to have it. Maybe the good news is: this is entirely optional; we don't have to use it; and if we think it looks too problematic, we can switch it off. So if it turns out that there are too many attacks involving 0-RTT mode, we could disable it again and use TLS 1.3 without it. It will still be faster, but not as fast as it could be. Okay. Deployment:
Now, if we have this nice new protocol, we
not only have to make sure it's secure and
fast and everything, but we also have to
deploy it. And we have to deploy it on the
internet - on the real internet - like the
one we have out there, not some
theoretical internet where there are no
bugs and everyone knows how to implement
protocols, but the real internet with lots
of IoT devices and enterprise firewalls
and all these kinds of things. And now I
want to get back to this version number.
This may sound like a trivial thing, but
TLS 1.3 has a new version number for the
protocol version. Here's a Wireshark dump
from a TLS 1.3 handshake. And if you're
trying to look for the version number, you
will find multiple version numbers. And in
case you cannot see it, I have made it a
bit larger. So at the top you see
"Version: TLS 1.0", encoded as 0x0301.
Okay. That looks strange. Then, a few lines later, you have "Version: TLS 1.2", 0x0303. But we thought this was TLS 1.3...
I mean it says here at the top but somehow
there are these other versions. And then
if you scroll further down, you will see
"Extension: supported_versions". And then
here it lists TLS 1.3, which is encoded as
0x0304. So, what's going on here? This looks strange. The first thing to realize is why we encode these versions in such a strange way - why are we not using 0x0100 for TLS 1.0? Well, TLS 1.0 came after SSL version 3, which kind of makes it version 3.1, and that's how we encode it. TLS 1.0 is really just SSL version 3.1, TLS 1.1 is SSL version 3.2, and so on - and for TLS 1.3, it's complicated. So, the very first version
you saw earlier in this Wireshark dump was the so-called record layer, and this is kind of a protocol inside the TLS protocol which has its own version number, which is totally meaningless, but it's just there. And it turned out that, for compatibility reasons, it's best to just leave this at the version of TLS 1.0; then we have the least problems. This record layer protocol is kind of the encoding of the TLS packets. Now, if we have a new TLS version, we cannot just tell everyone "tomorrow we will use TLS 1.3" and everyone has to update, because we know many people won't. And so we somehow need to be able to deploy this new version and still be compatible with devices that only speak the old version.
So, let's assume we have a client that
supports TLS 1.2, and we have a server
that only supports TLS 1.0. How does that
work? That's an extremely complicated mechanism here. The client connects and says "Hello, I speak TLS 1.2". The server says "Okay, I don't know TLS 1.2, but what's the highest version I support? It's TLS 1.0", so it sends that back. And then they can speak TLS 1.0 - in case the client still supports that - and we have a connection. This is very simple, I would think. So, to illustrate how you would program something like that, you would say: if client_max_version is smaller than server_max_version, then we use the client_max_version; otherwise, we use the server_max_version. So you would think that there's no way anyone could possibly not get that right, right?
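Written out, the rule is just a minimum over the wire encodings from the Wireshark dump above (a sketch; the function name is mine):

```python
# Wire encodings of the protocol versions (SSL 3.x numbering).
TLS10, TLS11, TLS12, TLS13 = 0x0301, 0x0302, 0x0303, 0x0304

def negotiate(client_max_version: int, server_max_version: int) -> int:
    # "If client_max_version is smaller than server_max_version, use the
    # client's; otherwise use the server's" -- i.e. simply the minimum.
    return min(client_max_version, server_max_version)
```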
I mean, it's very simple. But I was saying
earlier, we were talking about the real
internet. So... And on the real internet,
we have enterprise products. In case you
don't know that, an enterprise product is
something that's very expensive and it's
buggy.
laughter and applause
hanno: So, yeah. We will have web pages
that run with firewalls from Cisco or we
will have people using IBM Domino web
server and all these kinds of things. And
this is the TLS version negotiation in the
enterprise edition. So a client says "Yeah, I want to connect with TLS 1.2" and the server says "Oh, I don't support this very new version. It's from 2008 - I mean, that's 10 years; in enterprise years, that's very long." So the server just sends an error if the client connects with a TLS version that it doesn't know. It doesn't implement this version negotiation correctly. And this is called version intolerance. This has happened every time there was a new TLS version: every time, we had devices that had this problem. If we tried to connect with the new TLS version, they would just fail - they would send an error, or just cut the connection, or have a timeout, or crash. So
browsers needed to handle this somehow, because the problem here is: when a browser introduces a new TLS version and everything breaks, then users will blame the browser, and they will say "Yeah, I will no longer use this browser, I'll switch back to Internet Explorer" or something like that. So browsers needed to handle this somehow. What the browsers did was: "Okay, we try it with the latest TLS version we support. If we get an error, we try again with one version lower, and again one version lower, and eventually we may succeed to connect." So here we have a browser and an enterprise server that supports TLS 1.0, and we will eventually get a connection. Do you
remember POODLE, which I mentioned earlier? There was this padding oracle in SSLv3, which was discovered in 2014. You may wonder: SSLv3, which is from 1996 - that's really old. Who uses that in 2014? It was deprecated for 16 years. I mean, who uses that? Windows Phone 7 used it. And these Nokia phones also never got an update. Normal browsers and servers at least used TLS 1.0 - maybe they didn't use TLS 1.2, but they used TLS 1.0. But we
have these browsers that are trying to reconnect if there's an error. So what an attacker could do: the attacker wants to exploit SSLv3, so he just blocks all connections with the newer TLS versions and thereby forces the client down to SSLv3. And then he can exploit this attack that only works on SSLv3. These downgrades were causing security issues. What do we do now? We could add another workaround.
There was a standard called SCSV, which basically gives the client a way to tell the server that it is doing one of these strange downgrades: on a downgraded retry it includes a special cipher suite value, and a well-behaving server that actually supports a newer version then knows "this downgrade shouldn't be happening" and aborts. So we had a workaround for broken servers, and then we needed another workaround for the security issues caused by those workarounds. But at some point even enterprise servers had mostly fixed these version intolerance issues, and browsers stopped doing these downgrades. Attacks like POODLE no longer worked.
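The whole fallback dance, including that guard, can be modelled in a few lines. This is a toy simulation - the server behaviour and the names are my own assumptions, though the underlying SCSV mechanism is specified in RFC 7507:

```python
TLS_VERSIONS = (0x0304, 0x0303, 0x0302, 0x0301)  # TLS 1.3 ... TLS 1.0

def handshake(server_max: int, offered: int, fallback_scsv: bool) -> int:
    # Model a version-intolerant enterprise server that errors on
    # unknown (newer) versions instead of negotiating down ...
    if offered > server_max:
        raise ConnectionError("handshake_failure")
    # ... but that honours the SCSV rule: seeing the fallback signal
    # while it supports something newer means a downgrade attack (or
    # a bug), so it aborts the connection.
    if fallback_scsv and server_max > offered:
        raise ConnectionError("inappropriate_fallback")
    return offered

def browser_connect(server_max: int) -> int:
    # The browser retry loop: try the highest version, step down on
    # error, marking every retry as a fallback.
    for i, version in enumerate(TLS_VERSIONS):
        try:
            return handshake(server_max, version, fallback_scsv=(i > 0))
        except ConnectionError as err:
            if "inappropriate_fallback" in str(err):
                raise  # a genuine downgrade was blocked
    raise ConnectionError("no common version")
```

With an intolerant TLS 1.0-only server the loop eventually lands on 0x0301; but if an attacker forces a retry against a server that really supports 0x0304, the server aborts instead of letting POODLE-style downgrades through.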
However, I just said they fixed it. No, of course they have not fixed it - I mean, they fixed it for TLS 1.2. But of course they did not fix it for future TLS versions, because those were not around yet. With TLS 1.3, we would get version intolerance again and broken servers, and we would have to introduce downgrades again, and all the nice security would not be very helpful.
The TLS working group realized that and redesigned the handshake. It was redesigned in a way that the old version fields still say that we are connecting with TLS 1.2, and then we introduce an extension, supported_versions, which lists all the TLS versions we can speak and thereby signals support for TLS 1.3 and possibly for future versions. Now, at this point you may wonder whether we'll have version intolerance with this new extension once TLS 1.4 gets out, because a server may be implemented such that it sends an error if it sees an unknown version in this new version extension. David Benjamin from Google thought about this and said "Yeah, we have to do something about that. We have to improve the compatibility for future TLS versions." And he invented this
GREASE mechanism. The idea here is: a server should just ignore unknown versions in this extension. It gets a list of TLS versions, and if there's one in there that it doesn't know about, it should just ignore it and connect with one of the versions it knows about. So we could kind of try to train servers to actually do that. And the idea is, we're just sending random bogus TLS versions - reserved values that will never be used for a real TLS version. We can just randomly add them to this extension in order to make sure that if a server implements this incorrectly, it will hopefully be noticed early, because there will be connection failures with normal browsers. The hope here is that if enterprise vendors implement a broken version negotiation, they will hopefully notice that before they ship the product - after which it can no longer be updated, because that's how the Internet works. So we have this new version negotiation mechanism, we no longer need these downgrades, and we have this GREASE mechanism to make it future-proof. So now we can ship TLS 1.3, right? Then there was
this middlebox issue. Oh sorry, that's a wrong year... no, sorry, I mixed up the years, it's correct: in 2017, TLS 1.3 was almost finished, but it took until 2018 until it was actually finished. The
reason for that was that when browser vendors implemented a draft version of TLS 1.3, they noticed a lot of connection failures. The reason for these connection failures turned out to be devices that were trying to analyze the traffic and trying to be smart. And they thought: "This is a strange TLS packet, it doesn't look like the TLS we're used to, I don't know what to do with this, I'll drop it." These were largely passive middleboxes. So we're not talking about things like man-in-the-middle devices that intercept a TLS connection, but just something like a router, where you would expect it to just forward traffic. But it tries to be smart, it tries to do advanced enterprise security, I don't know. And they were dropping traffic that looked like TLS 1.3.
Then the browser vendors proposed some
changes to TLS 1.3 so it looks more like
TLS 1.2. The main thing was, they
introduced some bogus messages from TLS
1.2 that were supposed to be ignored. So
one such message is the so-called
ChangeCipherSpec message in TLS 1.2, which
originally didn't exist in 1.3 due to this
new handshake design. This message in 1.2,
it signals that everything from now on is
encrypted. The idea was "okay, if we sent
a bogus ChangeCipherSpec message early in
the handshake, then maybe this would
confuse those devices, thinking everything
after that is encrypted and they cannot
analyze it." And it turned out this
worked. A lot of this reduced the
connection failures a lot. There are a few
other things. And then eventually the
failure rates got low enough that browsers
thought "Okay, now we can deploy this."
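The compatibility trick described above can be sketched roughly like this: in TLS 1.3's middlebox compatibility mode, the bogus ChangeCipherSpec record (type 0x14) is sent early in the handshake and the receiving endpoint simply drops it. A minimal, hypothetical Python sketch of the record handling, not a real TLS parser:

```python
# Hypothetical sketch of TLS 1.3 middlebox compatibility: a dummy
# ChangeCipherSpec record is sent and must simply be ignored by the
# receiver. TLS record layout: type (1 byte), legacy version (2 bytes),
# length (2 bytes), payload.

CHANGE_CIPHER_SPEC = 0x14  # record type of the bogus compatibility message

def split_records(data: bytes):
    """Split a raw byte stream into (type, payload) TLS records."""
    records = []
    while data:
        rtype = data[0]
        length = int.from_bytes(data[3:5], "big")
        records.append((rtype, data[5:5 + length]))
        data = data[5 + length:]
    return records

def drop_bogus_ccs(records):
    """A TLS 1.3 endpoint drops ChangeCipherSpec records unprocessed."""
    return [r for r in records if r[0] != CHANGE_CIPHER_SPEC]

# A fake stream: a handshake record (type 0x16), then the dummy CCS body 0x01.
stream = bytes([0x16, 0x03, 0x03, 0x00, 0x02, 0xAA, 0xBB,
                0x14, 0x03, 0x03, 0x00, 0x01, 0x01])
print(drop_bogus_ccs(split_records(stream)))  # only the handshake record remains
```

To a middle box watching the wire, everything after the CCS record looks like an opaque encrypted TLS 1.2 session, so it stops trying to be smart.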
There were a few more issues. This is a
Pixma printer from Canon. These things
have network support and an HTTPS server.
And we have to talk about these
people here. If you remember the Snowden
revelations, one of the things that got
highlighted there was that there's a
random number generator called Dual EC
DRBG. And that has a backdoor and
basically these days everyone believes
this is a backdoor by the NSA and they
have some secret keys so they can predict
what random values this RNG will output.
Also in the Snowden documents was that at
some point the NSA offered 10 million
dollars to RSA Security so that they would
implement this RNG. Then there was a
proposal, a draft, for a TLS extension,
called Extended Random, that adds some
extra random numbers to the TLS handshake.
Why? It wasn't really clear, it was
just "Yeah, we can do this." It was just a
proposal. I mean, everyone can write a
proposal for a new extension, it was never
finalized, but it was out there. And in
2014, a research team looked closer at
this Dual EC RNG and figured out that if
you use this ER extension then it's much
easier to exploit this backdoor in this
RNG. And coincidentally RSA's TLS library,
BSAFE, also contains support for that
extension. But it was switched off. They
didn't find any implementations that
actually used it, so it was thought "okay,
no big deal". But actually, it seems these
Canon printers had enabled this extension:
they use this RSA BSAFE library with the
Extended Random extension switched on. As
Extended Random was only a draft, it
had no official extension number. Every
TLS extension has a number so that the
server knows what kind of extension this
is, and this implementation just used the
next available number. And it turned out
that this number collided with one of the
mandatory extensions that TLS 1.3
introduced. So these Canon
printers could not interpret that new
extension. They thought this was this
Extended Random and it didn't make any
sense, and so you had connection failures.
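The numbering clash can be illustrated like this. As far as is publicly documented, the draft Extended Random implementation used code point 40, the same number the TLS 1.3 drafts had assigned to the mandatory key_share extension, which the final RFC 8446 renumbered to 51. A toy Python illustration; the tables are simplified assumptions, not a real registry:

```python
# Illustrative sketch of the extension code point collision. A device that
# hard-codes a draft number misparses a later extension that reuses it;
# renumbering the new extension restores the "ignore unknown extensions"
# behavior. Numbers follow the talk's story, simplified for illustration.

LEGACY_DEVICE_EXTENSIONS = {40: "extended_random"}   # what the printer expects
TLS13_DRAFT_EXTENSIONS = {40: "key_share"}           # draft code point
TLS13_FINAL_EXTENSIONS = {51: "key_share"}           # final code point (RFC 8446)

def interpret(device_table, ext_number):
    # An unknown extension number is safely ignored; a *known* number with
    # an unexpected payload makes a broken device abort the connection.
    return device_table.get(ext_number, "ignored")

# Draft TLS 1.3: the printer thinks key_share (40) is Extended Random -> failure.
print(interpret(LEGACY_DEVICE_EXTENSIONS, 40))   # "extended_random" (misparsed)
# Final TLS 1.3: key_share moved to 51, which the printer simply ignores.
print(interpret(LEGACY_DEVICE_EXTENSIONS, 51))   # "ignored"
```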
In the TLS protocol they just gave this
extension a new number, and then this no
longer happened. There were many more such
issues and they continue to show up. For
example, recently Java, which is also very
popular in enterprise environments, now
ships with TLS 1.3 support, but it doesn't
really work. So you have
connection failures there. Now with all
these deployment issues, what about future
TLS versions? Will we have all that again?
We have this GREASE mechanism and it
helps a bit: it prevents these version
intolerance issues, but it doesn't prevent
these more complicated middle box issues.
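For reference, the GREASE code points that browsers inject were later standardized in RFC 8701: sixteen reserved two-byte values, sprinkled into handshakes (versions, cipher suites, extensions) so peers learn to ignore unknown values instead of choking on them. A small Python sketch:

```python
# Sketch of the GREASE code points from RFC 8701: the sixteen two-byte
# values 0x0A0A, 0x1A1A, ..., 0xFAFA, i.e. both bytes equal with low
# nibble 0xA. Servers must ignore these; if one chokes, it's intolerant.

def grease_values():
    """Return the 16 reserved two-byte GREASE code points."""
    return [(0x0A + 0x10 * i) << 8 | (0x0A + 0x10 * i) for i in range(16)]

print([hex(v) for v in grease_values()[:3]])  # ['0xa0a', '0x1a1a', '0x2a2a']
```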
There was a proposal from David Benjamin
from Google who said "Yeah, maybe we
should just, every few months, like every
two or three months, ship a new temporary
TLS version which we will use for three
months and then we will deprecate it again
to just constantly change the protocol so
that the Internet gets used to the fact
that new protocols get introduced."
Laughter
My prediction here is that these
deployment issues are going to get worse.
I mean, we know now that they exist and we
kind of have some ideas how to prevent
them, but if you go to enterprise security
conferences, you will know that the latest
trend in enterprise security is this thing
called artificial intelligence: We use
machine learning and fancy algorithms to
detect bad stuff. And that worries me. And
here's a blog post from Cisco, where they
want to use machine learning to detect bad
TLS traffic, because they see all this
traffic is encrypted and we can no longer
analyze it, we don't know if there's
malware in there, so let's use some
machine learning; it will detect bad
traffic. So, what I'm very worried will
happen here is that
the next generation of TLS deployment
issues will be AI-supported TLS
intolerance issues and it may be much
harder to fix and analyze. Speaking of
enterprise environments, one of the very
early changes in TLS 1.3 was that it
removed the RSA encryption handshake. One
reason was that it doesn't have forward
secrecy. The other was these, all these
Bleichenbacher attacks that I talked about
earlier. And then, there came an email to
the TLS working group from the banking
industry and I quote: "I recently learned
of a proposed change that would affect
many of my organization's member
institutions: the deprecation of the RSA
key exchange. Deprecation of the RSA key
exchange in TLS 1.3 will cause significant
problems for financial institutions,
almost all of whom are running TLS
internally and have significant, security-
critical investments in out-of-band TLS
decryption." What it basically means is,
they are using TLS for some connection;
they have some device in the middle that
is decrypting the traffic and analyzing it
somehow which - if they do it internally -
it's okay, but this no longer works with
TLS 1.3, because we always negotiate a new
key for each connection and it's no longer
possible to have the static decryption.
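The point about per-connection keys can be sketched like this: every TLS 1.3 handshake is an ephemeral Diffie-Hellman exchange, so there is no static private key a passive decryption box could hold. A toy finite-field Diffie-Hellman sketch with deliberately small, insecure parameters; real TLS 1.3 uses X25519 or large FFDHE groups:

```python
# Toy sketch of why TLS 1.3 breaks passive out-of-band decryption: each
# connection draws a brand-new ephemeral secret, so there is no long-term
# decryption key to hand to a monitoring device. Insecure toy parameters,
# for illustration only.

import secrets

P = 0xFFFFFFFFFFFFFFC5  # toy 64-bit prime; far too small for real use
G = 2

def new_connection():
    """Each handshake generates a fresh ephemeral secret and key share."""
    secret = secrets.randbelow(P - 2) + 1
    share = pow(G, secret, P)
    return secret, share

_, share_a = new_connection()
_, share_b = new_connection()
print(share_a != share_b)  # almost surely True: nothing static to escrow
```

With the old RSA encryption handshake, by contrast, a copy of the server's single RSA private key was enough to decrypt every recorded connection.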
There was an answer from Kenny Paterson,
he's a professor from London, he said: "My
view concerning your request: no.
Rationale: We're trying to build a more
secure internet."
Applause
"You're a bit late to the party. We're
metaphorically speaking at the stage of
emptying the ash trays and hunting for the
not quite empty beer cans. More exactly,
we are at draft 15 and RSA key transport
disappeared from the spec about a dozen
drafts ago. I know the banking industry is
usually a bit slow off the mark, but this
takes the biscuit." Okay.
Laughter
There were then several proposals to add a
visibility mode to TLS 1.3, which would in
one way or another allow these connections
to be passively observed and decrypted,
but they were all rejected, and the general
opinion in the TLS working group was that
the goal of monitoring traffic content is
just fundamentally not the goal of TLS.
The goal of TLS is to have an encrypted
channel that no one else can read. The
industry eventually went to ETSI, which is
the European technology standardization
organization, and they recently published
something called Enterprise TLS...
Laughter
...which modifies TLS 1.3 in a way that it
would allow these decryptions. The IETF
protested against that, primarily because
they used the name TLS, because it sounds
like this is some addition to TLS or
something, and apparently ETSI had
previously promised them that they would
not use the name TLS, and then they named
it Enterprise TLS.
Okay, but yeah... TLS 1.3 is finished. You
can start using it. You should update your
servers so that they use it. Your browser
probably already supports it. So, in
summary: TLS 1.3 deprecates many insecure
constructions. It's faster and deploying
new things on the internet is a mess. So,
yeah. That's it. And I think we have a few
minutes for questions.
Applause
Herald: Alright, yeah. As Hanno mentioned, we
have 6 minutes or so for questions. We
have 5 microphones in the room. So, if you
want to ask a question, hurry up to one of
the microphones and please make sure to
ask a short, concise question, so that we
can get as many in as we possibly can.
Maybe, you just go ahead over there. Mic
2.
Mic 2: Thank you very much for this
interesting talk. Is there a way to
prevent the uses of this Enterprise TLS?
Hanno: The question is if there is a way
to prevent the use of that Enterprise TLS.
Yes, there is, because the basic idea is
that they will use a static Diffie-Hellman
key exchange, and if you just connect
twice and see that they are using the same
key again, then you may reject that.
Although the problem is, some servers may
also use that for optimization. So, there
are longer discussions on this question...
I cannot fully answer it, but more or
less... there are options.
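The connect-twice heuristic Hanno describes could be sketched like this; `get_server_key_share` is a hypothetical helper standing in for a client that records the server's key share from the handshake:

```python
# Sketch of detecting "Enterprise TLS" style static Diffie-Hellman: a
# proper TLS 1.3 server sends a fresh ephemeral key share per handshake,
# while a server built for out-of-band decryption reuses a static one.
# The server callables below are simulations, not real network clients.

import itertools

def looks_like_static_dh(get_server_key_share, host):
    """Heuristic: identical key shares on two handshakes suggest a static key."""
    first = get_server_key_share(host)
    second = get_server_key_share(host)
    return first == second

counter = itertools.count()
ephemeral_server = lambda host: next(counter)   # new share every handshake
static_server = lambda host: b"same-share"      # reused static share

print(looks_like_static_dh(ephemeral_server, "example.com"))  # False
print(looks_like_static_dh(static_server, "example.com"))     # True
```

As noted in the answer, the heuristic has false positives: some servers cache ephemeral shares briefly as a performance optimization.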
Herald: Alright. Before we go to the next
question, a quick request for all the
people leaving the room: Please do so as
quietly as possible, so we can finish this
Q&A in peace and don't have all this noise
going on. Mic 3, please.
Mic 3: Hi. I was wondering about the
replay attacks. Why didn't they implement
something like sequence numbers into the
TLS protocol?
Hanno: Yeah. There is something like that
in there. The problem is, you sometimes
have a situation where you have multiple
TLS termination points - for example, if
you have a CDN that is
internationally distributed - and you may
not be able to keep state across all of
them.
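The state problem Hanno mentions can be sketched like this: anti-replay measures for 0-RTT early data (single-use tickets, in this simplified sketch) only work per termination point unless the state is shared. Hypothetical, simplified Python:

```python
# Sketch of why 0-RTT anti-replay is hard with distributed TLS termination:
# each node tracks which early-data tickets it has already accepted, but
# without globally shared state a replayed ticket can still be accepted
# by a different node. Simplified illustration, not a real TLS stack.

class TerminationPoint:
    def __init__(self):
        self.seen_tickets = set()  # per-node state, not shared globally

    def accept_early_data(self, ticket: bytes) -> bool:
        if ticket in self.seen_tickets:
            return False           # replay detected locally
        self.seen_tickets.add(ticket)
        return True

node_a, node_b = TerminationPoint(), TerminationPoint()
t = b"ticket-123"
print(node_a.accept_early_data(t))  # True: first use
print(node_a.accept_early_data(t))  # False: replay caught on the same node
print(node_b.accept_early_data(t))  # True: replay slips through elsewhere
```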
Herald: Alright. Then, let's take a
question from our viewers in the internet.
The signal angel, please.
Signal angel: Alright. Binarystrike asks:
"With regards to TLS 1.3 in the
enterprise, shouldn't we move away from
perimeter interception devices to also
putting control on the end point, like we
would have in a zero-trust environment?"
Hanno: So, in my opinion, yes. But, there
are many people in the enterprise security
industry who think that this is not
feasible. But, I mean, discussion about
network design, that would be a whole
other talk. Yeah.
Herald: Alright. Then, let's take a
question from mic 4.
Mic 4: Yeah. It's also related to the
Enterprise TLS. The browser can connect to
an Enterprise TLS server without any
problems?
Hanno: Yeah. So, it's built so that it's
compatible with the existing TLS protocol.
Mic 4: Okay, thanks.
Hanno: And the question of whether you can
avoid that or not, that's really a more
complicated discussion, that would kind of
be a whole sub-talk, so I cannot answer
this in a minute, but come to me later if
you are interested in details.
Herald: Alright. Then, let's take another
question from the interwebs.
Signal Angel: We have one more question
from IRC: "Would you recommend
inserting bogus values into handshakes to
train implementors?"
Hanno: I mean, that's what I said is
done; that's actually what browsers are
doing, and I think this is a good idea. I
just think that this covers only a small
fraction of these deployment issues.
Herald: Okay, we still have plenty of
time, so let's go to mic 2 please.
Mic 2: Yeah, as you said, we have still a
lot of dirty workarounds concerning TLS
1.3 and all the implementations in the
browsers and so on. Is there a way to
make, like, a requirement for TLS 1.3 or
1.4 to meet some compliance with the
standard? So you have, like, a test you can
perform, a self-test or something like
that and if you pass that you are allowed
to use the TLS 1.3 logo or 1.4 logo.
Hanno: You can do that in theory. The
problem is you don't really want to have a
certification regime that people like have
to ask for a logo to be allowed to
implement TLS. And I mean that's kind
of one of the downsides of the open
architecture of the Internet, right? We
allow everyone to put devices on the
Internet, so we kind of have to live with
that. And there's no TLS police, so we
kind of have no way of preventing people
from using broken TLS implementations. And I
mean people won't care if they have a logo
for it or not, right?
Herald: Alright, let's go to mic 5 all the
way in the back there.
Mic 5: Okay. I have a question about
Shor's algorithm and TLS 1.3, because
since quantum computing is getting very
popular lately and there are a lot of
improvements in the industries, so what's
the current situation regarding TLS 1.3
and all those quantum-based algorithms
that break the complexity down to
polynomial time?
Hanno: There's no major change here. So,
with TLS 1.3 you still are using
algorithms that can be broken with quantum
computers if you have a quantum computer.
Which currently you don't, but you may
have in the future. There is work done on
standardizing future algorithms that are
safe from quantum attacks, but that's kind
of in an early stage. And there was an
experiment by Google to introduce a
quantum-safe handshake, but they only ran
it for a few months. But, I think we will
see extensions within the next few years
that will introduce quantum-safe
algorithms, but right now there's no
change from TLS 1.2 to 1.3. Both can be
attacked with quantum computers.
Herald: Okay, so I think we are getting to
our last or second to last question, so
let's go to mic 3, I think you've been
waiting the longest.
Mic 3: Okay. In older versions of TLS
there was a problem for small devices such
as IoT and the industrial devices. Has
there been a change in 1.3 to allow them
to participate?
Hanno: I mean, I'm not entirely sure what
you mean with the problem, I mean...
Mic 3: Of performance, of performance.
Hanno: ...of course TLS needs some... the
performance issues of TLS have usually
been overstated. So even in a relatively
low-power device you can implement the
crypto. I mean the whole protocol is
relatively complex and you need to
implement it somehow, but I don't think
that's such a big issue anymore because
even IoT devices have relatively powerful
processors these days.
Herald: Okay, alright, that concludes our
Q&A, unfortunately we are out of time. So
please give a huge round of applause for
this great talk.
Applause
35C3 postroll music
subtitles created by c3subtitles.de
in the year 2019. Join, and help us!