0:00:00.000,0:00:14.790
36C3 preroll music
0:00:14.790,0:00:25.539
Herald: Our next speaker's way is paved[br]with broken trust zones. He's no stranger
0:00:25.539,0:00:31.599
to breaking ARMs, equipment or crypto[br]wallets, or basically anything he touches.
0:00:31.599,0:00:39.591
It just dissolves in his fingers. He's one[br]of Forbes' 30 under 30 in tech. And
0:00:39.591,0:00:42.840
please give a warm round of applause to[br]Thomas Roth.
0:00:42.840,0:00:48.100
Applause.
0:00:48.100,0:00:54.680
Thomas: Test, okay. Wonderful. Yeah.[br]Welcome to my talk. TrustZone-M: Hardware
0:00:54.680,0:01:00.630
attacks on ARMv8-M security features. My[br]name is Thomas Roth. You can find me on
0:01:00.630,0:01:05.860
Twitter. I'm @stacksmashing and I'm a[br]security researcher, consultant and
0:01:05.860,0:01:11.049
trainer affiliated with a couple of[br]companies. And yeah, before we can start,
0:01:11.049,0:01:15.990
I need to thank some people. So first[br]off, Josh Datko and Dimitri Nedospasov
0:01:15.990,0:01:20.110
who've been super helpful anytime I was[br]stuck somewhere or just wanted some
0:01:20.110,0:01:25.969
feedback. They immediately helped me. And[br]also Colin O'Flynn, who gave me constant
0:01:25.969,0:01:30.729
feedback and helped me with some troubles,[br]gave me tips and so on. And so without
0:01:30.729,0:01:36.909
these people and many more who paved the[br]way towards this research, I wouldn't be
0:01:36.909,0:01:41.520
here. Also, thanks to NXP and Microchip[br]who I had to work with as part of this
0:01:41.520,0:01:47.950
talk. And it was awesome. I had a lot of[br]very bad vendor experiences, but these two
0:01:47.950,0:01:54.799
were really nice to work with. Also some[br]prior work. So Colin O'Flynn and Alex
0:01:54.799,0:01:59.499
Dewar released a paper, I guess last year[br]or this year "On-Device Power Analysis
0:01:59.499,0:02:04.270
Across Hardware Security Domains". And[br]they basically looked at TrustZone from a
0:02:04.270,0:02:10.301
differential power analysis viewpoint and[br]otherwise TrustZone-M is pretty new, but
0:02:10.301,0:02:15.860
lots of work has been done on the big or[br]real TrustZone and also lots and lots of
0:02:15.860,0:02:21.440
work on fault injection, which would be far too[br]much to list here. So just google fault
0:02:21.440,0:02:26.220
injection and you'll see what I mean.[br]Before we start, what is TrustZone-M? So
0:02:26.220,0:02:31.890
TrustZone-M is the small TrustZone. It's[br]basically a simplified version of the big
0:02:31.890,0:02:35.940
TrustZone that you find on Cortex-A[br]processors. So basically if you have an
0:02:35.940,0:02:40.280
Android phone, chances are very high that[br]your phone actually runs TrustZone and
0:02:40.280,0:02:44.950
that, for example, your key store of[br]Android is backed by TrustZone. And
0:02:44.950,0:02:50.620
TrustZone basically splits the CPU into a[br]secure and a non-secure world. And so, for
0:02:50.620,0:02:54.340
example, you can say that a certain[br]peripheral should only be available to the
0:02:54.340,0:02:58.190
secure world. So, for example, if you have[br]a crypto accelerator, you might only want
0:02:58.190,0:03:03.990
to use it in the secure world. It also, if[br]you're wondering what's the difference to
0:03:03.990,0:03:10.870
an MPU - it also comes with two MPUs.[br]Sorry, not MMUs, MPUs. And so last year we
0:03:10.870,0:03:14.650
gave a talk on bitcoin wallets. And so[br]let's take those as an example on a
0:03:14.650,0:03:19.730
bitcoin wallet you often have different[br]apps, for example, for Bitcoin, Dogecoin
0:03:19.730,0:03:24.590
or Monero, and then underneath you have an[br]operating system. The problem is kind of
0:03:24.590,0:03:28.620
this operating system is very complex[br]because it has to handle graphics
0:03:28.620,0:03:33.060
rendering and so on and so forth. And[br]chances are high that it gets compromised.
0:03:33.060,0:03:37.900
And if it gets compromised, all your funds[br]are gone. And so with TrustZone, you could
0:03:37.900,0:03:43.380
basically have a second operating system[br]separated from your normal one that
0:03:43.380,0:03:47.250
handles all the important stuff like[br]firmware update, key store attestation and
0:03:47.250,0:03:52.930
so on and reduces your attack surface. And[br]the reason I actually looked at
0:03:52.930,0:03:57.280
TrustZone-M is we got a lot of requests[br]for consulting on TrustZone-M. So
0:03:57.280,0:04:02.510
basically, after our talk last year, a lot[br]of companies reached out to us and said,
0:04:02.510,0:04:07.100
okay, we want to do this, but more[br]securely. And a lot of them try to use
0:04:07.100,0:04:12.190
TrustZone-M for this. And so far there's[br]been, as far as I know, little public
0:04:12.190,0:04:16.780
research into TrustZone-M and whether it's[br]protected against certain types of
0:04:16.780,0:04:21.250
attacks. And we also have companies that[br]start using them as secure chips. So, for
0:04:21.250,0:04:24.990
example, in the automotive industry, I[br]know somebody who was thinking about
0:04:24.990,0:04:28.810
putting them into car keys. I know about[br]some people in the payment industry
0:04:28.810,0:04:34.820
evaluating this. And, as said, hardware[br]wallets. And one of the terms that comes up
0:04:34.820,0:04:40.469
again and again is "this is a secure chip."[br]But I mean, what is a secure chip
0:04:40.469,0:04:45.130
without a threat model? There's no such[br]thing as a secure chip because there are
0:04:45.130,0:04:49.310
so many attacks and you need to have a[br]threat model to understand what are you
0:04:49.310,0:04:53.280
actually protecting against. So, for[br]example, a chip might have software
0:04:53.280,0:04:59.080
features or hardware features that make[br]the software more secure, such as NX bit
0:04:59.080,0:05:03.229
and so on and so forth. And on the other[br]hand, you have hardware attacks, for
0:05:03.229,0:05:08.460
example, debug port side channel attacks[br]and fault injection. And often the
0:05:08.460,0:05:14.289
description of a chip doesn't really tell[br]you what it's protecting you against. And
0:05:14.289,0:05:19.139
often I would even say it's misleading in[br]some cases. And so you will see, oh, this
0:05:19.139,0:05:22.850
is a secure chip and you ask marketing and[br]they say, yeah, it has the most modern
0:05:22.850,0:05:28.310
security features. But it doesn't really[br]specify whether they are, for example,
0:05:28.310,0:05:31.759
protecting against fault injection attacks[br]or whether they consider this out of
0:05:31.759,0:05:37.530
scope. In this talk, we will exclusively[br]look at hardware attacks and more
0:05:37.530,0:05:42.439
specifically, we will look at fault[br]injection attacks on TrustZone-M. And so
0:05:42.439,0:05:47.180
all of the attacks we're going to see are[br]local to the device only - you need to have
0:05:47.180,0:05:52.470
it in your hands. And there's no chance,[br]normally, of remotely exploiting them.
0:05:52.470,0:05:58.599
Yeah. So this will be our agenda. We will[br]start with a short introduction of
0:05:58.599,0:06:01.990
TrustZone-M, which will have a lot of[br]theory on like memory layouts and so on.
0:06:01.990,0:06:05.610
We will talk a bit about the fault-[br]injection setup and then we will start
0:06:05.610,0:06:13.229
attacking real chips. These 3, as you will[br]see. So on a Cortex-M processor you have a
0:06:13.229,0:06:17.270
flat memory map. You don't have a memory[br]management unit and all your peripherals,
0:06:17.270,0:06:21.719
your flash, your ram, it's all mapped to a[br]certain address in memory and TrustZone-M
0:06:21.719,0:06:27.669
allows you to partition your flash or your[br]ram into secure and non secure parts. And
0:06:27.669,0:06:31.400
so, for example, you could have a tiny[br]secure area because your secret code is
0:06:31.400,0:06:36.909
very small and a big non secure area. The[br]same is true for RAM and also for the
0:06:36.909,0:06:42.569
peripherals. So for example, if you have a[br]display and a crypto engine and so on. You
0:06:42.569,0:06:48.599
can decide whether these peripherals[br]should be secure or non secure. And so
0:06:48.599,0:06:53.419
let's talk about these two security[br]states: secure and non secure. Well, if
0:06:53.419,0:06:57.949
you have code running in secure flash or[br]you have secure code running, it can call
0:06:57.949,0:07:02.729
anywhere into the non secure world. It's[br]basically the highest privilege level you
0:07:02.729,0:07:08.009
can have. And so there's no protection[br]there. However, the opposite, if we tried
0:07:08.009,0:07:12.479
to go from the non secure world into the[br]secure world, it would be insecure because,
0:07:12.479,0:07:15.469
for example, you could jump to the parts[br]of the code that are behind certain
0:07:15.469,0:07:20.330
protections and so on. And so that's why,[br]if you try to jump from non secure
0:07:20.330,0:07:26.749
code into secure code, it will cause an[br]exception. And to handle that, there's a
0:07:26.749,0:07:32.249
third memory state which is called non[br]secure callable. And as the name implies,
0:07:32.249,0:07:37.909
basically your non secure code can call[br]into the non secure callable code. More
0:07:37.909,0:07:43.210
specifically, it can only call to non[br]secure callable code addresses where
0:07:43.210,0:07:49.569
there's an SG instruction which stands for[br]Secure Gateway. And the idea behind the
0:07:49.569,0:07:53.539
secure gateway is that if you have a non[br]secure kernel running, you probably also
0:07:53.539,0:07:57.610
have a secure kernel running. And[br]somehow this secure kernel will expose
0:07:57.610,0:08:02.520
certain system calls, for example. And so[br]we want to somehow call from the non
0:08:02.520,0:08:09.069
secure kernel into these system calls, but[br]as I've just mentioned, we can't do that
0:08:09.069,0:08:15.039
because this will unfortunately cause an[br]exception. And so the way this is handled
0:08:15.039,0:08:19.729
on TrustZone-M is that you create so-[br]called secure gateway veneer functions.
0:08:19.729,0:08:24.689
These are very short functions in the non[br]secure callable area. And so if we want,
0:08:24.689,0:08:29.650
for example, to call the load key system[br]call, we would call the load key veneer
0:08:29.650,0:08:35.370
function, which in turn would call the[br]real load key function. And these veneer
0:08:35.370,0:08:40.199
functions are super short. So if you look[br]at the disassembly of them, it's like two
0:08:40.199,0:08:44.120
instructions. It's the secure gateway[br]instruction and then a branch instruction
0:08:44.120,0:08:51.630
to your real function.
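(As a hedged illustration, not code from the talk: with ARM's CMSE extension - which is what the toolchains mentioned later in the Q&A use - a secure entry point can be declared roughly like this, and the toolchain then places the SG-plus-branch veneer in the non secure callable section. The function name and signature here are made up.)

```c
/* Build with a CMSE-capable toolchain, e.g. arm-none-eabi-gcc -mcmse. */
#include <arm_cmse.h>
#include <stdint.h>

/* Hypothetical secure system call. The cmse_nonsecure_entry attribute makes
   the toolchain emit a veneer (SG; B.W load_key) into the NSC region, which
   is the only address the non secure world is allowed to call. */
int __attribute__((cmse_nonsecure_entry)) load_key(uint32_t slot, uint8_t *out)
{
    (void)slot;
    (void)out;
    /* ... the real secure-world implementation would live here ... */
    return 0;
}
```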
0:08:51.630,0:08:57.199
secure can call into non secure, non[br]secure, can call into NSC and NSC can call
0:08:57.199,0:09:04.190
into your secure world. But how do we[br]manage these memory states? How do we know
0:09:04.190,0:09:09.300
what security state does an address have?[br]And so for this in TrustZone-M, we use
0:09:09.300,0:09:13.940
something called attribution units, and[br]by default there are two
0:09:13.940,0:09:19.089
attribution units available. The first one[br]is the SAU the Security Attribution Unit,
0:09:19.089,0:09:24.430
which is standard across chips. It's[br]basically defined by ARM how you use this.
0:09:24.430,0:09:29.490
And then there's the IDAU. The[br]Implementation Defined Attribution Unit,
0:09:29.490,0:09:34.070
which is basically custom to the silicon[br]vendor, but can also be the same across
0:09:34.070,0:09:41.259
several chips. And to get the security[br]state of an address, the security
0:09:41.259,0:09:47.310
attribution of both the SAU and the IDAU[br]are combined and whichever one has the
0:09:47.310,0:09:53.149
higher privilege level will basically win.[br]And so let's say our SAU says this address
0:09:53.149,0:09:59.209
is secure and our IDAU says this address[br]is non secure, the SAU wins because it's
0:09:59.209,0:10:05.880
the highest privilege level. And basically[br]our address would be considered secure.
0:10:05.880,0:10:12.340
This is a short table. If both the SAU and[br]the IDAU agree, we will be non secure if
0:10:12.340,0:10:17.240
both say, hey, this is secure, it will be[br]secure. However, if they disagree and the
0:10:17.240,0:10:22.640
SAU says, hey, this address is secure the[br]IDAU says it's non secure, it will still
0:10:22.640,0:10:27.459
be secure because secure is the higher[br]privilege level. The opposite is also true. And
0:10:27.459,0:10:33.850
even with non secure callable, secure[br]is more privileged than NSC. And so secure
0:10:33.850,0:10:41.170
will win. But if we mix NS and NSC, we get[br]non secure callable.
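(A minimal sketch of that combination rule, not code from the talk: treat the three states as ordered NS < NSC < S and take the more privileged of the two attributions.)

```c
/* Security states ordered by privilege: NS < NSC < S. */
typedef enum { SEC_NS = 0, SEC_NSC = 1, SEC_S = 2 } sec_state_t;

/* The resulting attribution of an address is whichever of the SAU and the
   IDAU attributions is more privileged, exactly as in the table above. */
static sec_state_t combine(sec_state_t sau, sec_state_t idau)
{
    return (sau > idau) ? sau : idau;
}

/* Examples: combine(SEC_S, SEC_NS) == SEC_S,
             combine(SEC_NS, SEC_NSC) == SEC_NSC. */
```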
0:10:41.170,0:10:45.880
Okay. My initial hypothesis when I read all of this[br]was: if we break or disable the attribution units,
0:10:45.880,0:10:52.220
we probably break the security. And so to[br]break these, we have to understand them.
0:10:52.220,0:10:57.560
And so let's look at the SAU the security[br]attribution unit. It's standardized by
0:10:57.560,0:11:02.430
ARM. It's not available on all chips. And[br]it basically allows you to create memory
0:11:02.430,0:11:08.740
regions with different security states.[br]So, for example, if the SAU is turned off,
0:11:08.740,0:11:13.190
everything will be considered secure. And[br]if we turn it on, but no regions are
0:11:13.190,0:11:16.990
configured, still, everything will be[br]secure. We can then go and add, for
0:11:16.990,0:11:23.850
example, address ranges and make them NSC[br]or non secure and so on. And this is done
0:11:23.850,0:11:28.920
very, very easily. You basically have[br]these five registers. You have the SAU
0:11:28.920,0:11:34.890
control register where you basically can[br]turn it on or off. You have the SAU type,
0:11:34.890,0:11:38.329
which gives you the number of supported[br]regions on your platform because this can
0:11:38.329,0:11:42.779
be different across different chips. And[br]then we have the region number register,
0:11:42.779,0:11:46.149
which you use to select the region you[br]want to configure and then you set the
0:11:46.149,0:11:50.460
base address and the limit address. And[br]that's basically it. So, for example, if
0:11:50.460,0:11:57.380
we want to set region zero, we simply set[br]the RNR register to zero. Then we set the
0:11:57.380,0:12:05.649
base address to 0x1000. We set the limit[br]address to 0x1FE0, which is identical to
0:12:05.649,0:12:08.970
0x1FFF, because there are some other bits[br]behind there that we don't care about
0:12:08.970,0:12:14.910
right now. And then we turn on the[br]security attribution unit and now our
0:12:14.910,0:12:19.420
memory range is marked as non secure. If you[br]want to create a second region, we simply
0:12:19.420,0:12:25.980
change RNR to, for example, 1, again insert[br]some nice addresses, turn on the SAU, and
0:12:25.980,0:12:33.860
we have a second region, this time from[br]0x4000 to 0x5FFF.
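(A minimal C sketch of that register sequence, assuming the architecturally defined ARMv8-M SAU register addresses; which region gets which attribute here is illustrative. Note that SAU regions can only mark memory as non secure or non secure callable - anything not covered by an enabled region stays secure.)

```c
#include <stdint.h>

/* ARMv8-M SAU registers (architecturally defined addresses). */
#define SAU_CTRL (*(volatile uint32_t *)0xE000EDD0u)
#define SAU_RNR  (*(volatile uint32_t *)0xE000EDD8u)
#define SAU_RBAR (*(volatile uint32_t *)0xE000EDCCu + 0x10u) /* see note below */
#define SAU_RBAR_ (*(volatile uint32_t *)0xE000EDDCu)
#define SAU_RLAR (*(volatile uint32_t *)0xE000EDE0u)

void sau_setup(void)
{
    /* Region 0: 0x1000-0x1FFF, non secure
       (RLAR bit 1 = NSC attribute, bit 0 = region enable). */
    SAU_RNR   = 0u;
    SAU_RBAR_ = 0x00001000u;
    SAU_RLAR  = 0x00001FE0u | 1u;

    /* Region 1: 0x4000-0x5FFF, non secure callable. */
    SAU_RNR   = 1u;
    SAU_RBAR_ = 0x00004000u;
    SAU_RLAR  = 0x00005FE0u | (1u << 1) | 1u;

    SAU_CTRL  = 1u;  /* ENABLE: turn the SAU on */
}
```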
0:12:33.860,0:12:40.470
So to summarize: we have three memory security[br]states. We have S - secure, we have NSC - non secure callable,
0:12:40.470,0:12:46.149
and we have NS - non secure. We also have[br]the two attribution units: the SAU,
0:12:46.149,0:12:53.070
standardized by ARM, and the IDAU, which is[br]potentially custom. We will use SAU and
0:12:53.070,0:13:00.120
IDAU a lot. So this was very important.[br]Cool. Let's talk about fault injection. So
0:13:00.120,0:13:06.060
as I've mentioned, we want to use fault[br]injection to compromise TrustZone. And the
0:13:06.060,0:13:10.740
idea behind fault injection or as it's[br]also called glitching is to introduce
0:13:10.740,0:13:14.610
faults into a chip. So, for example, you[br]cut the power for a very short amount of
0:13:14.610,0:13:19.310
time, or you change the period of the[br]clock signal, or you could even go and
0:13:19.310,0:13:23.600
inject electromagnetic shocks in your[br]chip. You can also shoot at it with a
0:13:23.600,0:13:29.170
laser, and so on and so forth. Lots of ways[br]to do this. And the goal of this is
0:13:29.170,0:13:34.399
to cause undefined behavior. And in this[br]talk, we will specifically look at
0:13:34.399,0:13:40.440
something called voltage glitching. And so[br]the idea behind voltage glitching is that
0:13:40.440,0:13:44.930
we cut the power to the chip for very,[br]very short amount of time at a very
0:13:44.930,0:13:49.100
precisely timed moment. And this will[br]cause some interesting behavior. So
0:13:49.100,0:13:56.720
basically, if you looked at this on an[br]oscilloscope, you would basically see a
0:13:56.720,0:14:02.569
stable voltage, stable voltage, stable[br]voltage, and then suddenly it drops and
0:14:02.569,0:14:08.100
immediately returns. And this drop will[br]only be a couple of nanoseconds long. And
0:14:08.100,0:14:12.760
so, for example, you can have glitches[br]that are 10 nanoseconds long or 15
0:14:12.760,0:14:18.829
nanoseconds long and so on. Depends on[br]your chip. And yeah. And this allows you
0:14:18.829,0:14:24.230
to do different things. So, for example, a[br]glitch can allow you to skip instructions.
0:14:24.230,0:14:29.110
It can corrupt flash reads or flash[br]writes. It can corrupt memory or
0:14:29.110,0:14:34.920
register reads and writes. And skipping[br]instructions for me is always the most
0:14:34.920,0:14:40.000
interesting one, because it allows you to[br]directly go from disassembly to
0:14:40.000,0:14:45.079
understanding what you can potentially[br]jump over. So, for example, if we have
0:14:45.079,0:14:50.610
some code, this would be a basic firmware[br]boot up code. We have an initialized
0:14:50.610,0:14:55.439
device function. Then we have a function[br]that basically verifies the firmware
0:14:55.439,0:15:00.339
that's in flash and then we have this[br]boolean check whether our firmware was
0:15:00.339,0:15:05.329
valid. And now if we glitch at just the[br]right time, we might be able to glitch
0:15:05.329,0:15:12.879
over this check and boot our potentially[br]compromised firmware, which is super nice.
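(A hedged sketch of the kind of boot flow meant here - the function names and bodies are made up, this is not code from any of the chips discussed.)

```c
#include <stdbool.h>

/* Made-up stand-ins for the real routines. */
static void init_device(void)      { /* clocks, peripherals, ... */ }
static bool verify_firmware(void)  { return false; /* signature check */ }
static void enter_recovery(void)   { for (;;) { } }
static void boot_firmware(void)    { /* jump to the application */ }

void boot(void)
{
    init_device();
    bool valid = verify_firmware();
    if (!valid) {        /* a well-timed glitch can skip exactly this check */
        enter_recovery();
    }
    boot_firmware();     /* ...and a potentially compromised image boots anyway */
}
```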
0:15:12.879,0:15:19.480
So how does this relate to TrustZone?[br]Well, if we manage to glitch over enable
0:15:19.480,0:15:25.899
TrustZone, we might be able to break[br]TrustZone. So how do you actually do this?
0:15:25.899,0:15:30.810
Well, we need something to wait for a[br]certain delay and generate a pulse at just
0:15:30.810,0:15:36.250
the right time with very high precision.[br]We are talking about nanoseconds here,
0:15:36.250,0:15:40.259
and we also need something to drop the[br]power to the target. And so if you need
0:15:40.259,0:15:46.450
precise timing and so on, what works very[br]well is an FPGA. And so, for example, the
0:15:46.450,0:15:51.649
code that was released as part of this all[br]runs on the Lattice iCEstick, which is
0:15:51.649,0:15:56.610
roughly 30 bucks and you need a cheap[br]MOSFET and so together this is like thirty
0:15:56.610,0:16:02.440
one dollars of equipment. And on a setup[br]side, this looks something like this. You
0:16:02.440,0:16:06.830
would have your FPGA, which has a trigger[br]input. And so, for example, if you want to
0:16:06.830,0:16:10.430
glitch something during the boot up of a[br]chip, you could connect this to the reset
0:16:10.430,0:16:14.769
line of the chip. And then we have an[br]output for the glitch pulse. And then if
0:16:14.769,0:16:20.820
we hook this all up, we basically have our[br]power supply to the chip run over a
0:16:20.820,0:16:26.529
MOSFET. And then if the glitch pulse goes[br]high, we drop the power to ground and the
0:16:26.529,0:16:33.189
chip doesn't get power for a couple of[br]nanoseconds. Let's talk about this power
0:16:33.189,0:16:39.360
supply, because a chip has a lot of[br]different things inside of it. So, for
0:16:39.360,0:16:45.370
example, a microcontroller has a CPU core.[br]We have a Wi-Fi peripheral. We have GPIO.
0:16:45.370,0:16:50.899
We might have Bluetooth and so on. And[br]often these peripherals run at different
0:16:50.899,0:16:56.529
voltages. And so while our microcontroller[br]might just have a 3.3 volt input,
0:16:56.529,0:17:00.079
internally there are a lot of different[br]voltages at play. And the way these
0:17:00.079,0:17:05.410
voltages are generated often is using[br]in-chip regulators. And basically these
0:17:05.410,0:17:11.449
regulators connect to the 3.3 volt input[br]and then generate the different voltages
0:17:11.449,0:17:16.740
for the CPU core and so on. But what's[br]nice is that on a lot of chips there are
0:17:16.740,0:17:21.620
behind the core regulator, so called[br]bypass capacitors, and these external
0:17:21.620,0:17:26.240
capacitors are basically there to[br]stabilize the voltage because regulators
0:17:26.240,0:17:32.120
tend to have a very noisy output and you[br]use the capacitor to make it more smooth.
0:17:32.120,0:17:36.730
But if you look at this, this also gives[br]us direct access to the CPU core power
0:17:36.730,0:17:42.390
supply. And so if we just take a heat gun[br]and remove the capacitor, we actually kind
0:17:42.390,0:17:46.730
of change the pinout of the processor,[br]because now we have a 3.3 volt input, we
0:17:46.730,0:17:52.700
have a point to input the core voltage and[br]we have ground. So we basically gained
0:17:52.700,0:17:59.990
direct access to the internal CPU core[br]voltage rails. The only problem is these
0:17:59.990,0:18:04.630
capacitors are there for a reason. And so if we[br]remove them, the chip might stop
0:18:04.630,0:18:09.770
working. But very easy solution. You just[br]hook up a power supply to it, set it to
0:18:09.770,0:18:14.650
1.2 volts or whatever, and then suddenly[br]it works. And this also allows you to
0:18:14.650,0:18:23.150
glitch very easily. You just glitch on[br]your power rail towards the chip. And so
0:18:23.150,0:18:27.450
this is our current setup. So we have the[br]Lattice iCEstick. We also use a
0:18:27.450,0:18:31.430
multiplexer as an analog switch to cut the[br]power to the entire device. If we want to
0:18:31.430,0:18:36.779
reboot everything, we have the MOSFET and[br]we have a power supply. Now hooking this
0:18:36.779,0:18:42.300
all up on a bread board is fun the first[br]time, it's okay the second time. But the
0:18:42.300,0:18:47.080
third time it begins to really, really[br]suck. And as soon as something breaks with
0:18:47.080,0:18:52.450
like 100 jumper wires on your desk, the[br]only way to debug is to start over. And so
0:18:52.450,0:18:57.320
that's why I decided to design a small[br]hardware platform that combines all of
0:18:57.320,0:19:03.070
these things. So it has an FPGA on it. It[br]has analog input and it has a lot of
0:19:03.070,0:19:07.560
glitch circuitry and it's called the Mark[br]Eleven. If you've read William Gibson, you
0:19:07.560,0:19:13.260
might know where this is from. And it[br]contains a Lattice iCE40, which has a
0:19:13.260,0:19:18.130
fully open source toolchain, thanks to[br]Clifford Wolf and so on. And this allows us
0:19:18.130,0:19:23.230
to very, very quickly develop new[br]triggers, develop new glitcher code and so
0:19:23.230,0:19:27.450
on. And it makes compilation and[br]everything really really fast. It also
0:19:27.450,0:19:31.741
comes with three integrated power[br]supplies. So we have a 1.2 volt power
0:19:31.741,0:19:38.250
supply, 3.3, 5 volts and so on, and you[br]can use it for DPA. And this is based
0:19:38.250,0:19:42.880
around some existing devices. So, for[br]example, the FPGA part is based on the
0:19:42.880,0:19:48.820
1BitSquared iCEBreaker. The analog front[br]end, thanks to Colin O'Flynn, is based on
0:19:48.820,0:19:53.570
the ChipWhisperer Nano. And then the[br]glitch circuit is basically what we've
0:19:53.570,0:19:58.520
been using on bread boards for quite a[br]while. Just combined on a single device.
0:19:58.520,0:20:02.549
And so unfortunately, as always with[br]hardware production takes longer than you
0:20:02.549,0:20:07.440
might assume. But if you drop me a message[br]on Twitter, I'm happy to send you a PCB as
0:20:07.440,0:20:13.440
soon as they work well. And the BOM is[br]around 50 bucks. Cool. So now that we are
0:20:13.440,0:20:19.580
ready to actually attack chips,[br]let's look at an example. So the very
0:20:19.580,0:20:25.390
first chip that I encountered that used[br]TrustZone-M was the Microchip SAM L11. And
0:20:25.390,0:20:32.010
so this chip was released in June 2018.[br]And it's kind of a small, slow chip.
0:20:32.010,0:20:37.929
It runs at 32 megahertz. It has up to 64[br]kilobytes of flash and 16 kilobytes of
0:20:37.929,0:20:44.210
SRAM, but it's super cheap. It's like one[br]dollar eighty at quantity one. And so it's
0:20:44.210,0:20:50.230
really nice, really affordable. And we had[br]people come up to us and suggest, hey, I
0:20:50.230,0:20:54.659
want to build a TPM on top of this or I[br]want to build a hardware wallet on top of
0:20:54.659,0:21:01.120
this. And so on and so forth. And if we[br]look at the website of this chip. It has a
0:21:01.120,0:21:06.530
lot of security in it, so it's the best[br]contribution to IoT security winner of
0:21:06.530,0:21:14.899
2018. And if you just type secure into the[br]word search, you get like 57 hits. So this
0:21:14.899,0:21:23.610
chip is 57 secure. laughter And even on[br]the website itself, you have like chip
0:21:23.610,0:21:28.700
level security. And then if you look at[br]the first of the descriptions, you have a
0:21:28.700,0:21:33.950
robust chip-level security, including chip-level[br]tamper resistance and an active shield that
0:21:33.950,0:21:38.301
protects against physical attacks and[br]resists micro probing attacks. And even in
0:21:38.301,0:21:42.440
the datasheet - where I got really worried,[br]because, as I said, I do a lot with the core
0:21:42.440,0:21:47.649
voltage - it says it has a brown-out detector that[br]has been calibrated in production and must
0:21:47.649,0:21:53.809
not be changed, and so on. Yeah. To be[br]fair, when I talked to Microchip, they
0:21:53.809,0:21:58.490
mentioned that they absolutely want to[br]communicate that this chip is not hardened
0:21:58.490,0:22:03.680
against hardware attacks, but I can see[br]how somebody who looks at this would get
0:22:03.680,0:22:10.550
the wrong impression given all the terms[br]and so on. Anyway, so let's talk about the
0:22:10.550,0:22:16.669
TrustZone in this chip. So the SAM L11[br]does not have a security attribution unit.
0:22:16.669,0:22:21.270
Instead, it only has the implementation[br]defined attribution unit. And the
0:22:21.270,0:22:25.580
configuration for this implementation[br]defined attribution unit is stored in the
0:22:25.580,0:22:29.789
user row, which is basically the[br]configuration flash. It's also called
0:22:29.789,0:22:33.610
fuses in the data sheet sometimes, but[br]it's really I think it's flash based. I
0:22:33.610,0:22:36.750
haven't checked, but I am pretty sure it[br]is because you can read it, write it,
0:22:36.750,0:22:42.190
change it and so on. And then the IDAU,[br]once you've configured it, will be
0:22:42.190,0:22:49.370
configured by the boot ROM during the[br]start of the chip. And the idea behind the
0:22:49.370,0:22:54.100
IDAU is that all your flash is partitioned[br]into two parts of the bootloader part and
0:22:54.100,0:23:00.289
the application part, and both of these[br]can be split into secure, non secure
0:23:00.289,0:23:05.100
callable and non secure. So you can have a[br]bootloader, a secure and a non secure one,
0:23:05.100,0:23:09.510
and you can have an application, a secure[br]and a non secure one. And the size of
0:23:09.510,0:23:14.040
these regions is controlled by these five[br]registers. And for example, if we want to
0:23:14.040,0:23:18.740
change our non secure application to be[br]bigger and make our secure application a
0:23:18.740,0:23:23.649
bit smaller, we just fiddle with these[br]registers and the sizes will adjust and
0:23:23.649,0:23:31.390
the same with the bootloader. So this is[br]pretty simple. How do we attack it? My
0:23:31.390,0:23:36.940
goal initially was I want to somehow read[br]data from the secure world while running
0:23:36.940,0:23:41.559
code in the non secure world. So: jump the[br]security gap. My code in non secure should
0:23:41.559,0:23:47.350
be able to, for example, extract keys from[br]the secure world and my attack path for
0:23:47.350,0:23:52.790
that was: well, I glitch the boot ROM[br]code that loads the IDAU
0:23:52.790,0:23:57.140
configuration. But before we can actually[br]do this, we need to understand, is this
0:23:57.140,0:24:01.549
chip actually glitchable? Is it[br]susceptible to glitches, or do we
0:24:01.549,0:24:07.360
immediately get thrown out? And so I[br]used a very simple setup where I just had a
0:24:07.360,0:24:13.210
firmware and tried to glitch out of the[br]loop and enable an LED. And I had success
0:24:13.210,0:24:19.090
in less than five minutes and super stable[br]glitches almost immediately. Like when I
0:24:19.090,0:24:23.190
saw this, I was 100 percent sure that I[br]messed up my setup or that the compiler
0:24:23.190,0:24:28.710
optimized out my loop or that I did[br]something wrong, because I never glitched a
0:24:28.710,0:24:33.530
chip in five minutes. And so this was[br]pretty awesome, but I also spent another
0:24:33.530,0:24:41.549
two hours verifying my setup. So, OK.[br]Cool, we know that the chip is glitchable, so
0:24:41.549,0:24:47.149
let's glitch it. What do we glitch? Well,[br]if we think about it somewhere during the
0:24:47.149,0:24:53.330
boot ROM, these registers are read from[br]flash and then some hardware is somehow
0:24:53.330,0:24:57.890
configured. We don't know how, because we[br]can't dump the boot ROM; we don't know
0:24:57.890,0:25:01.539
what's going on in the chip. And the[br]datasheet has a lot of pages. And I'm a
0:25:01.539,0:25:09.160
millennial. So, yeah, I read what I have[br]to read and that's it. But my basic idea
0:25:09.160,0:25:14.250
is if we somehow manage to glitch the[br]point where it tries to read the value of
0:25:14.250,0:25:19.100
the AS Register, we might be able to set[br]it to zero because most chip peripherals
0:25:19.100,0:25:25.060
will initialize to zero. And if we glitch[br]the instruction that reads AS, maybe
0:25:25.060,0:25:30.290
we can make our non secure application[br]bigger, so that we can actually
0:25:30.290,0:25:39.220
read the secure application data, because[br]now it's considered non secure. But:
0:25:39.220,0:25:44.409
Problem 1: the boot ROM is not dumpable. So[br]we cannot just disassemble it and figure
0:25:44.409,0:25:50.659
out when it roughly does this. And[br]problem 2 is that we don't know when
0:25:50.659,0:25:55.130
exactly this read occurs, and our glitch[br]needs to be instruction precise. We need
0:25:55.130,0:26:01.159
to hit just the right instruction to make[br]this work. And the solution is brute
0:26:01.159,0:26:08.140
force. But I mean like nobody has time for[br]that. Right? So if the chip boots for 2
0:26:08.140,0:26:12.820
milliseconds. That's a long range we have[br]to search for glitches. And so very easy
0:26:12.820,0:26:17.160
solution: power analysis. And it turns out[br]that, for example, Riscure has done this
0:26:17.160,0:26:23.029
before where basically they tried to[br]figure out where in time a JTAG lock is
0:26:23.029,0:26:30.450
set by comparing the power consumption.[br]And so the idea is, we basically write
0:26:30.450,0:26:35.649
different values to the AS register, then[br]we collect a lot of power traces and then
0:26:35.649,0:26:41.029
we look for the differences. And this is[br]relatively simple to do. If you have a
0:26:41.029,0:26:46.429
ChipWhisperer. So. This was my rough[br]setup. So we just have the ChipWhisperer-
0:26:46.429,0:26:51.740
Lite. We have a breakout with the chip we[br]want to attack and a programmer. And then
0:26:51.740,0:26:56.710
we basically collect a couple of traces.[br]And in my case, even just 20 traces are
0:26:56.710,0:27:01.779
enough, which takes, I don't know, like[br]half a second to run. And if you have 20
0:27:01.779,0:27:07.370
traces in non secure mode, 20 traces in[br]secure mode and you compare them, you can
0:27:07.370,0:27:11.230
see that there are clear differences in[br]the power consumption starting at a
0:27:11.230,0:27:15.470
certain point. And so I wrote a script[br]that does some more statistics on it and so on.
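(The actual script isn't shown in the talk; as a hedged sketch of the kind of comparison it performs - average the traces of each group and report where the averages start to diverge - with made-up trace dimensions:)

```c
#include <math.h>

#define N_TRACES  20
#define N_SAMPLES 5000   /* placeholder trace length */

static double mean_at(const double tr[N_TRACES][N_SAMPLES], int i)
{
    double sum = 0.0;
    for (int t = 0; t < N_TRACES; t++)
        sum += tr[t][i];
    return sum / N_TRACES;
}

/* Returns the first sample index where the averaged secure and non secure
   traces differ by more than `threshold`, or -1 if they never do. */
int find_divergence(const double secure[N_TRACES][N_SAMPLES],
                    const double nonsecure[N_TRACES][N_SAMPLES],
                    double threshold)
{
    for (int i = 0; i < N_SAMPLES; i++)
        if (fabs(mean_at(secure, i) - mean_at(nonsecure, i)) > threshold)
            return i;
    return -1;
}
```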
0:27:15.470,0:27:20.970
And that basically told me the best[br]glitch candidate starts at 2.18
0:27:20.970,0:27:24.720
milliseconds. And this needs to be so[br]precise because, as I said, we're in the milli-
0:27:24.720,0:27:31.220
and nanoseconds range. And so we want[br]to make sure that we are at the right point in
0:27:31.220,0:27:37.519
time. Now, how do you actually configure this?[br]How do you build a setup where you
0:27:37.519,0:27:44.429
basically get a success indication[br]once you've broken it? For this, I needed to
0:27:44.429,0:27:50.039
write a firmware that basically attempts[br]to read secure data. And then if it's
0:27:50.039,0:27:54.139
successful, it enables a GPIO. And if it[br]fails, it does nothing, and I just reset
0:27:54.139,0:27:59.460
and try again. And so I knew my[br]rough delay and I was triggering off the
0:27:59.460,0:28:04.590
reset of the chip, so I just tried any[br]delay after it and tried different glitch
0:28:04.590,0:28:11.169
pulse lengths and so on. And eventually I[br]had a success. And these glitch scripts, as you will
0:28:11.169,0:28:16.029
see, are super easy to write with the glitcher[br]which we released a while back, because
0:28:16.029,0:28:21.940
all you have is like 20 lines of Python.[br]You basically set up a loop over the delay,
0:28:21.940,0:28:28.320
you set up the pulse length, you[br]iterate over a range of pulses. And then
0:28:28.320,0:28:34.250
in this case you just check whether your[br]GPIO is high or low. That's all it takes.
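(The released glitcher itself is a Python library; as a language-agnostic sketch of the same sweep logic - written here in C, with entirely hypothetical helper functions standing in for the FPGA glue; the 2.18 ms start delay comes from the power analysis above, the other numbers are placeholders:)

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical glue code - not the real glitcher API. */
static void reset_target(void)                                { /* toggle reset line */ }
static void fire_glitch(uint32_t delay_ns, uint32_t width_ns) { (void)delay_ns; (void)width_ns; }
static bool success_gpio_high(void)                           { return false; }

int main(void)
{
    /* Sweep the delay after reset and the pulse width, then check the GPIO. */
    for (uint32_t delay = 2180000u; delay < 2190000u; delay += 10u) {
        for (uint32_t width = 10u; width <= 50u; width += 5u) {
            reset_target();
            fire_glitch(delay, width);
            if (success_gpio_high()) {
                printf("hit: delay=%u ns, width=%u ns\n",
                       (unsigned)delay, (unsigned)width);
                return 0;
            }
        }
    }
    return 1;
}
```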
0:28:34.250,0:28:38.309
And then once you have this running in a[br]stable fashion, it's amazing how fast it
0:28:38.309,0:28:43.190
works. So this is now a recorded video of[br]a live glitch, of a real glitch,
0:28:43.190,0:28:49.730
basically. And you can see we have like 20[br]attempts per second. And after a couple of
0:28:49.730,0:28:57.370
seconds, we actually get a success[br]indication - we just broke a chip. Sweet.
0:28:57.370,0:29:02.049
But one thing: I moved to a part of Germany,[br]in the very south, which is called the
0:29:02.049,0:29:09.590
Schwabenland. And I mean, 60 bucks. We are[br]known to be very cheap and 60 bucks
0:29:09.590,0:29:15.440
translates to like six beers at[br]Oktoberfest. Just to convert this to the
0:29:15.440,0:29:24.460
local currency, that's like 60 Club Mate.[br]Unacceptable. We need to go cheaper, much
0:29:24.460,0:29:33.650
cheaper, and so.[br]laughter and applause
0:29:33.650,0:29:40.380
What if we take a chip that is 57 secure[br]and we tried to break it with the smallest
0:29:40.380,0:29:46.730
chip. And so this is an ATtiny, which[br]costs, I don't know, a euro or two.
0:29:46.730,0:29:52.929
We combine it with a MOSFET - to keep the[br]comparison, that's roughly 3 Club Mate - and
0:29:52.929,0:29:57.820
we hook it all up on a jumper board and it[br]turns out: this works. You can have a
0:29:57.820,0:30:02.649
relatively stable glitch - a glitcher with[br]like 120 lines of assembly running on the
0:30:02.649,0:30:07.019
ATtiny - and this will glitch your chip[br]successfully and can break TrustZone on
0:30:07.019,0:30:13.590
the SAM L11. The problem is chips are very[br]complex and it's always very hard to do an
0:30:13.590,0:30:17.830
attack on a chip that you configured[br]yourself because as you will see, chances
0:30:17.830,0:30:21.380
are very high that you messed up the[br]configuration and for example, missed a
0:30:21.380,0:30:26.020
security bit, forgot to set something and[br]so on and so forth. But luckily, in the
0:30:26.020,0:30:32.169
case of the SAM L11, there's a version of[br]this chip which is already configured and
0:30:32.169,0:30:39.590
only ships in non secure mode. And so this[br]is called the SAM L11 KPH. And so it comes
0:30:39.590,0:30:43.990
pre provisioned with a key and it comes[br]pre provisioned with a trusted execution
0:30:43.990,0:30:49.750
environment already loaded into the secure[br]part of the chip, and it ships completely
0:30:49.750,0:30:54.700
secured and the customer can write and[br]debug non secure code only. And also you
0:30:54.700,0:30:59.620
can download the SDK for it and write your[br]own trustlets and so on. But I couldn't
0:30:59.620,0:31:04.289
because it requires you to agree to their[br]terms and conditions so which exclude
0:31:04.289,0:31:08.980
reverse engineering. So no chance,[br]unfortunately. But anyway, this is the
0:31:08.980,0:31:14.601
perfect example to test our attack. You[br]can buy these chips on DigiKey and then
0:31:14.601,0:31:18.990
try to break into the secure world because[br]these chips are hopefully decently secured
0:31:18.990,0:31:24.779
and have everything set up and so on. And[br]yeah. So this was the setup. We designed
0:31:24.779,0:31:29.779
our own breakout board for the SAM L11,[br]which makes it a bit more accessible, has
0:31:29.779,0:31:35.100
JTAG and has no capacitors in the way. So[br]you get access to all the core voltages
0:31:35.100,0:31:42.130
and so on, and you have the FPGA on the top[br]left, the super cheap 20-bucks power supply,
0:31:42.130,0:31:47.220
and the programmer. And then we just[br]implemented a simple function that uses
0:31:47.220,0:31:53.230
OpenOCD to try to read an address that we[br]normally can't read. So basically, we
0:31:53.230,0:31:59.029
glitch. Then we start OpenOCD, which uses[br]the JTAG adapter to try to read secure
0:31:59.029,0:32:10.320
memory. And so I hooked it all up, wrote a[br]nice script and let it rip. And so after a
0:32:10.320,0:32:16.980
while - well, a couple of seconds -[br]I immediately again got a
0:32:16.980,0:32:20.340
successful attack on the chip and more and[br]more. And you can see just how stable you
0:32:20.340,0:32:26.610
can get these glitches and how well you[br]can attack this. Yeah. So sweet hacked. We
0:32:26.610,0:32:31.309
can compromise the root of trust and the[br]trusted execution environment. And this is
0:32:31.309,0:32:36.080
perfect for supply chain attacks. Right.[br]Because if you can compromise a part of
0:32:36.080,0:32:42.139
the chip that the customer will not be[br]able to access, they will never find you.
0:32:42.139,0:32:45.769
But the problem with supply chain attacks[br]is, they're pretty hard to scale and they
0:32:45.769,0:32:51.140
are only for sophisticated actors normally[br]and far too expensive is what most people
0:32:51.140,0:32:58.779
will tell you - except if you hack the[br]distributor. And so, I guess last year
0:32:58.779,0:33:04.341
or this year, I don't know, I actually[br]found a vulnerability in DigiKey, which
0:33:04.341,0:33:09.179
allowed me to access any invoice on[br]DigiKey, including the credentials you
0:33:09.179,0:33:16.779
need to actually change the invoice. And[br]so basically the bug is that, when you
0:33:16.779,0:33:20.770
requested an invoice, they did not check whether you
0:33:20.770,0:33:25.509
actually had permission to access it. And[br]you have the web access ID on top and the
0:33:25.509,0:33:30.370
invoice number. And that's all you need to[br]call DigiKey and change the delivery,
0:33:30.370,0:33:37.169
basically. And so this also is all data[br]that you need to reroute the shipment. I
0:33:37.169,0:33:41.490
disclosed this. It's fixed. It's been[br]fixed again afterwards. And now hopefully
0:33:41.490,0:33:45.990
this should be fine. So I feel good to[br]talk about it. And so let's walk through
0:33:45.990,0:33:52.050
the scenarios. We have Eve and we have[br]DigiKey and Eve builds this new super
0:33:52.050,0:33:58.090
sophisticated IoT toilet and she needs a[br]secure chip. So she goes to DigiKey and
0:33:58.090,0:34:06.610
orders some SAM L11 KPHs. And Mallory -[br]Mallory scans all new invoices on DigiKey.
0:34:06.610,0:34:13.240
And as soon as somebody orders a SAM L11,[br]they talk to DigiKey with the API or via a
0:34:13.240,0:34:17.840
phone call to change the delivery address.[br]And because you know who the chips are
0:34:17.840,0:34:23.409
going to, you can actually target this[br]very, very well. So now the chips get
0:34:23.409,0:34:30.450
delivered to Mallory. Mallory backdoors the[br]chips and then sends the backdoored chips
0:34:30.450,0:34:34.419
to Eve who is none the wiser, because it's[br]the same carrier, it's the same, it looks
0:34:34.419,0:34:38.149
the same. You have to be very, very[br]mindful of these types of attack to
0:34:38.149,0:34:43.310
actually recognize them. And even if they -[br]say they open the package and they try
0:34:43.310,0:34:48.530
the chip, they scan everything they can[br]scan - the backdoor will
0:34:48.530,0:34:53.580
be in the part of the chip that they[br]cannot access. And so we just supply chain
0:34:53.580,0:35:02.329
attacked whoever, using a UPS envelope,[br]basically. So, yeah. Interesting attack
0:35:02.329,0:35:07.119
vector. So I talked to Microchip and it's[br]been great. They've been super nice. It
0:35:07.119,0:35:13.460
was really a pleasure. I also talked to[br]Trustonic, who were very open to this and
0:35:13.460,0:35:19.890
wanted to understand it. And so it was[br]great. And they explicitly state that this
0:35:19.890,0:35:23.760
chip only protects against software[br]attacks. While it has some hardware
0:35:23.760,0:35:29.530
features like tamper-resistant RAM, it is[br]not built to withstand fault injection
0:35:29.530,0:35:34.130
attacks. And if you now compare[br]different revisions of the datasheet, you
0:35:34.130,0:35:38.760
can see that some datasheets - the older[br]ones - mention some fault injection
0:35:38.760,0:35:42.550
resistance and it's now gone from the data[br]sheet. And they are also asking for
0:35:42.550,0:35:46.980
feedback on making it more clear what this[br]chip protects against, which I think is a
0:35:46.980,0:35:52.620
noble goal because we all know marketing[br]versus technicians is always an
0:35:52.620,0:36:00.580
interesting fight, let's say. Cool, first[br]chip broken, time for the next one, right?
0:36:00.580,0:36:07.270
So the next chip I looked at was the[br]Nuvoton NuMicro M2351 - rolls off the
0:36:07.270,0:36:14.150
tongue. It's a Cortex-M23 processor. It[br]has TrustZone-M. And I was super excited
0:36:14.150,0:36:19.690
because this finally has an SAU, a[br]security attribution unit and an IDAU and
0:36:19.690,0:36:23.490
also I talked to the marketing. It[br]explicitly protects against fault
0:36:23.490,0:36:31.790
injection. So that's awesome. I was[br]excited. Let's see how that turns out.
0:36:31.790,0:36:37.010
Let's briefly talk about the TrustZone in[br]the Nuvoton chip. So as I've mentioned
0:36:37.010,0:36:45.329
before, the SAU, if it's turned off or[br]turned on without regions, will default to fully
0:36:45.329,0:36:49.630
secure. And no matter what the IDAU is,[br]the most privileged level always wins. And
0:36:49.630,0:36:55.150
so if our entire security attribution unit[br]is secure, our final security state will
0:36:55.150,0:37:00.880
also be secure. And so if we now add some[br]small regions, the final state will also
0:37:00.880,0:37:08.240
have the small non secure regions. I[br]mean, I saw this and looked at how this
0:37:08.240,0:37:14.980
code works. And you can see that at the[br]very bottom, SAU control is set to 1. Simple,
0:37:14.980,0:37:19.340
right? We glitch over the SAU enabling, and[br]all our code will be secure, and we'll just
0:37:19.340,0:37:26.480
run our code in secure mode, no problem -[br]is what I thought. And so basically the
0:37:26.480,0:37:31.201
secure bootloader starts execution of non[br]secure code. We disable the SAU by
0:37:31.201,0:37:35.859
glitching over the instruction and now[br]everything is secure. So our code runs in
0:37:35.859,0:37:43.700
the secure world. It's easy - except: read the[br]fucking manual. So it turns out these
0:37:43.700,0:37:49.760
thousands of pages of documentation[br]actually contain useful information and
0:37:49.760,0:37:55.060
you need a special instruction to[br]transition from secure to non secure state
0:37:55.060,0:38:02.230
which is called BLXNS, which stands for[br]branch optionally linked and exchange to
0:38:02.230,0:38:08.300
non secure. This is exactly made to[br]prevent this. It prevents accidentally
0:38:08.300,0:38:13.290
jumping into non secure code. It will[br]cause a secure fault if you try to do it.
0:38:13.290,0:38:19.390
And what's interesting is that even if you[br]use this instruction, it will not always
0:38:19.390,0:38:24.530
transition. Whether the state transitions depends[br]on the last bit in the destination address. And
0:38:24.530,0:38:30.060
the way the[br]bootloader actually gets the
0:38:30.060,0:38:34.410
addresses it jumps to is from what's[br]called the reset table, which is basically
0:38:34.410,0:38:38.610
where your reset handlers are, where your[br]stack pointer, your initial stack pointer
0:38:38.610,0:38:43.710
is and so on. And you will notice that the[br]last bit is always set. And if the last
0:38:43.710,0:38:49.600
bit is set, it will jump to secure code.[br]So somehow they have to manage to branch to this
0:38:49.600,0:38:56.790
address and run it as non secure. So how[br]do they do this? They use an explicit bit
0:38:56.790,0:39:02.700
clear instruction.
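(A hedged sketch of that hand-off using ARM's CMSE intrinsics: cmse_nsfptr_create() performs the explicit bit clear, and the call through a cmse_nonsecure_call function pointer is compiled to a BLXNS. Names and the vector-table detail are illustrative, and setting up the non secure stack pointer is omitted.)

```c
#include <arm_cmse.h>
#include <stdint.h>

/* Calls through this pointer type are compiled as BLXNS. */
typedef void __attribute__((cmse_nonsecure_call)) ns_entry_t(void);

void boot_nonsecure(uint32_t ns_vector_table)
{
    /* The reset handler address from the non secure vector table has its
       lowest bit set (Thumb bit). */
    uint32_t entry = *(volatile uint32_t *)(ns_vector_table + 4u);

    /* cmse_nsfptr_create() clears bit 0 - the explicit bit clear mentioned
       in the talk - so the BLXNS actually transitions to non secure state. */
    ns_entry_t *ns_reset = cmse_nsfptr_create((ns_entry_t *)entry);

    ns_reset();  /* glitching over the bit clear keeps the target secure */
}
```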
0:39:02.700,0:39:09.109
What do we know about instructions? We can[br]glitch over them. And so basically, with two glitches, we can glitch over the SAU control enable - now
0:39:09.109,0:39:16.369
our entire memory is secure - and then we[br]glitch over the bit clear instruction, and
0:39:16.369,0:39:23.609
then branch link exchange non secure - again,[br]rolls off the tongue - will run the code as secure.
0:39:23.609,0:39:29.260
And now our normal world code is running[br]in secure mode. The problem is it works,
0:39:29.260,0:39:33.780
but it's very hard to get stable. So, I[br]mean, I somehow got it working,
0:39:33.780,0:39:40.840
but it was not very stable and it was a[br]big pain to actually make use of. So I
0:39:40.840,0:39:45.010
wanted a different vulnerability. And I[br]read up on the implementation defined
0:39:45.010,0:39:52.190
attribution unit of the M2351. And it[br]turns out that each flash, RAM, peripheral
0:39:52.190,0:39:59.780
and so on is mapped twice into memory. And[br]so basically once as secure at the address
0:39:59.780,0:40:08.710
0x2000 and once as non secure at the[br]address 0x3000. And so you have the flash
0:40:08.710,0:40:15.410
twice and you have the RAM twice. This[br]is super important: this is the same
0:40:15.410,0:40:22.220
memory. And so I came up with an attack[br]that I called CrowRBAR, because a
0:40:22.220,0:40:27.820
vulnerability basically doesn't exist if[br]it doesn't have a fancy name. And the
0:40:27.820,0:40:32.079
basic point of this is that the security[br]of the system relies on the region
0:40:32.079,0:40:36.569
configuration of the SAU. What if we[br]glitch this initialization, combined with
0:40:36.569,0:40:43.170
this IDAU layout? Again, the IDAU[br]mirrors the memory: it has it once as secure
0:40:43.170,0:40:48.500
and once as non secure. Now let's say that,[br]at the very bottom of our flash, we
0:40:48.500,0:40:54.520
have a secret which is in the secure area.[br]It will also be in the mirror of this
0:40:54.520,0:41:00.550
memory. But again, because our SAU[br]configuration is fine, it will not be
0:41:00.550,0:41:06.309
accessible from the non secure world.[br]However, the start of this non secure area
0:41:06.309,0:41:14.339
is configured by the RBAR register. And so[br]maybe if we glitch this RBAR being set, we
0:41:14.339,0:41:18.210
can increase the size of the non secure[br]area. And if you check the ARM
0:41:18.210,0:41:22.950
documentation on the RBAR register, the[br]reset value of this register is stated as
0:41:22.950,0:41:28.079
unknown. So unfortunately it doesn't just[br]say zero, but I tried this on all chips I
0:41:28.079,0:41:33.839
had access to and it is zero on all chips[br]I tested. And so now what we can do is we
0:41:33.839,0:41:38.800
glitch over this RBAR being set, and now the non[br]secure part of our final security state will be bigger, and our
0:41:38.800,0:41:43.390
secure code is still running in the bottom[br]half. But then the jump into non secure
0:41:43.390,0:41:50.750
will also give us access to the secret. And[br]it works: we get a fully stable glitch, it
0:41:50.750,0:41:56.650
takes roughly 30 seconds to bypass it. I[br]should mention that this is what I think
0:41:56.650,0:42:00.440
happens. All I know is that I inject a[br]glitch and I can read the secret. I cannot
0:42:00.440,0:42:05.180
tell you exactly what happens, but this is[br]the best interpretation I have so far. So
0:42:05.180,0:42:10.970
woohoo, we have an attack with a cool name.[br]And so I looked at another chip, called the
0:42:10.970,0:42:18.930
NXP LPC55S69, and this one has 2[br]Cortex-M33 cores, one of which has
0:42:18.930,0:42:26.599
TrustZone-M. The IDAU and the overall[br]TrustZone layout seem to be very similar
0:42:26.599,0:42:31.640
to the NuMicro. And I got the dual glitch[br]attack working and also the CrowRBAR
0:42:31.640,0:42:38.730
attack working. And the vendor response[br]was amazing. Like holy crap, they called
0:42:38.730,0:42:42.500
me and wanted to fully understand it. They[br]reproduced that. They got me on the phone
0:42:42.500,0:42:48.250
with an expert and the expert was super[br]nice. But what he said came down to was
0:42:48.250,0:42:55.480
RTFM. But again, this is a long document,[br]but it turns out that the example code did
0:42:55.480,0:43:01.900
not enable a certain security feature. And[br]this security feature is helpfully named
0:43:01.900,0:43:10.820
Miscellaneous Control Register, basically,[br]laughter which stands for Secure Control
0:43:10.820,0:43:21.120
Register, laughter obviously. And this[br]register has a bit. If you set it, it
0:43:21.120,0:43:26.640
enables secure checking. And if I had read[br]just a couple of sentences further,
0:43:26.640,0:43:31.119
when I read about the TrustZone on the[br]chip, I would have actually seen this. But
0:43:31.119,0:43:37.630
Millennial sorry. Yeah. And so what this[br]enables is called the memory protection
0:43:37.630,0:43:41.420
checkers and this is an additional memory[br]security check that gives you finer
0:43:41.420,0:43:46.481
control over the memory layout. And so it[br]basically checks if the attribution unit
0:43:46.481,0:43:51.870
security state is identical with the[br]memory protection checker security state.
0:43:51.870,0:43:57.960
And so, for example, if our attack code[br]tries to access memory, the MPC will check
0:43:57.960,0:44:04.280
whether this was really a valid request,[br]so to say, and stop you - if you are unlucky,
0:44:04.280,0:44:10.250
as I was. But it turns out it's glitchable,[br]but it's much, much harder to glitch and
0:44:10.250,0:44:15.550
you need multiple glitches. And the vendor[br]response was awesome. They also say
0:44:15.550,0:44:22.010
they're working on improving the[br]documentation for this. So yeah, super
0:44:22.010,0:44:26.770
cool. But still like it's not a full[br]protection against glitching, but it gives
0:44:26.770,0:44:33.041
you a certain security. And I think that's[br]pretty awesome. Before we finish: is
0:44:33.041,0:44:38.260
everything broken? No. These chips are not[br]insecure. They are not protected against a
0:44:38.260,0:44:43.930
very specific attack scenario. Align[br]the chips that you want to use with your
0:44:43.930,0:44:47.510
threat model - if fault injection is part[br]of your threat model. So, for example, if
0:44:47.510,0:44:51.700
you're building a car key, maybe you should[br]protect against glitching. If you're
0:44:51.700,0:44:56.340
building a hardware wallet, definitely you[br]should protect against glitching. Thank
0:44:56.340,0:45:00.829
you. Also, by the way, if you want to play[br]with some awesome fault injection
0:45:00.829,0:45:05.579
equipment, I have an EMFI glitcher with me.[br]So just hit me up on Twitter and
0:45:05.579,0:45:09.540
I'm happy to show it to you. So thanks a[br]lot.
0:45:09.540,0:45:17.700
applause
0:45:17.700,0:45:24.780
Herald: Thank you very much, Thomas. We do[br]have an awesome 15 minutes for Q and A. So
0:45:24.780,0:45:30.391
if you line up, we have three microphones.[br]Microphone number 3 actually has an
0:45:30.391,0:45:34.119
induction loop. So if you're hearing[br]impaired and have a suitable device, you
0:45:34.119,0:45:39.130
can go to microphone 3 and actually hear[br]the answer. And we're starting off with
0:45:39.130,0:45:41.980
our signal angel with questions from the[br]Internet.
0:45:41.980,0:45:47.710
Thomas: Hello, Internet.[br]Signal Angel: Hello. Are you aware of the
0:45:47.710,0:45:53.560
ST Cortex-M4 firewall? And can your[br]research be somehow related to it? Or
0:45:53.560,0:45:56.880
maybe do you have plans to explore it in[br]the future?
0:45:56.880,0:46:02.440
Thomas: I. So, yes, I'm very aware of the[br]ST M3 and M4. If you watch our talk last
0:46:02.440,0:46:06.680
year at CCC called Wallet.fail, we[br]actually exploited the sister chip, the
0:46:06.680,0:46:12.950
STM32 F2. The F4 has this strange firewall[br]thing which feels very similar to
0:46:12.950,0:46:18.680
TrustZone-M. However, I cannot yet share[br]any research related to that chip,
0:46:18.680,0:46:22.090
unfortunately. Sorry.[br]Signal Angel: Thank you.
0:46:22.090,0:46:28.720
Herald: Microphone number 1, please.[br]Mic 1: Hello. I'm just wondering, have you
0:46:28.720,0:46:34.280
tried to replicate this attack on[br]multicore CPUs with higher frequency such
0:46:34.280,0:46:38.859
as 2 GHz and others, and how would you go[br]about that?
0:46:38.859,0:46:43.599
Thomas: So I have not, because there[br]are no TrustZone-M chips with this
0:46:43.599,0:46:48.190
frequency. However, people have done it on[br]mobile phones and other equipment. So, for
0:46:48.190,0:46:54.960
example, yeah, there's a lot of materials[br]on glitching higher frequency stuff. But
0:46:54.960,0:46:59.170
yeah, it will get expensive really quickly,[br]because the scope with which you can even
0:46:59.170,0:47:03.819
see a two gigahertz clock - that's an[br]oscilloscope in the price range of a nice car.
0:47:03.819,0:47:09.410
Herald: Microphone number 2, please.[br]Mic 2: Thank you for your talk. Is there
0:47:09.410,0:47:15.750
more functionality to go from the non-secure[br]to the secure area? Are there standard-
0:47:15.750,0:47:19.740
defined functionalities, or proprietary[br]libraries from NXP?
0:47:19.740,0:47:25.130
Thomas: So the veneer stuff is[br]standard and you will find ARM documents
0:47:25.130,0:47:29.299
basically recommending you to do this. But[br]all the tool chains, for example, the one
0:47:29.299,0:47:34.799
for the SAM L11 will generate the veneers[br]for you. And so I have to be honest, I
0:47:34.799,0:47:37.900
have not looked at how exactly they are[br]generated.
0:47:37.900,0:47:42.480
However, I did some Rust stuff to play[br]around with it. And yeah, it's relatively
0:47:42.480,0:47:44.751
simple for the tool chain and it's[br]standard. So
0:47:44.751,0:47:51.720
Herald: the signal angel is signaling.[br]Signal Angel: Yeah. That's not another
0:47:51.720,0:47:56.180
question from the internet but from me and[br]I wanted to know how important is the
0:47:56.180,0:48:00.680
hardware security in comparison to the[br]software security because you cannot hack
0:48:00.680,0:48:06.490
these devices without having physical[br]access to them, except for this supply chain
0:48:06.490,0:48:09.300
attack.[br]Thomas: Exactly. And that depends on your
0:48:09.300,0:48:14.210
threat model. So that's basically if you[br]build a door, if you build a hardware
0:48:14.210,0:48:18.280
wallet, you want to have hardware[br]protection because somebody can steal it
0:48:18.280,0:48:22.200
potentially very easily and then... And if[br]you, for example, look at your phone, you
0:48:22.200,0:48:27.720
probably maybe don't want to have anyone[br]at customs be able to immediately break
0:48:27.720,0:48:31.339
into your phone. And that's another point[br]where hardware security is very important.
0:48:31.339,0:48:36.090
And with a car key, it's the same.[br]If you rent a car, hopefully the car
0:48:36.090,0:48:41.920
rental company doesn't want you to copy[br]the key. And interestingly, the more
0:48:41.920,0:48:45.559
probably one of the most protected things[br]in your home is your printer cartridge,
0:48:45.559,0:48:49.700
because I can tell you that the vendor[br]invests a lot of money into you not being
0:48:49.700,0:48:54.500
able to clone the printer cartridge. And[br]so there are a lot of cases where it's
0:48:54.500,0:48:58.270
maybe not the user who wants to protect[br]against hardware attacks, but the vendor
0:48:58.270,0:49:02.200
who wants to protect against it.[br]Herald: Microphone number 1, please.
0:49:02.200,0:49:04.750
Mic 1: So thank you again for the amazing[br]talk.
0:49:04.750,0:49:07.730
Thomas: Thank you.[br]Mic 1: You mentioned higher order attacks,
0:49:07.730,0:49:12.099
I think twice. And for the second chip,[br]you actually said you broke it with
0:49:12.099,0:49:14.750
two glitches, two exploitable glitches.[br]Thomas: Yes.
0:49:14.750,0:49:19.370
Mic 1: So what did you do to reduce the[br]search space or did you just search over
0:49:19.370,0:49:22.190
the entire space?[br]Thomas: So the nice thing about these
0:49:22.190,0:49:27.900
chips is that, if you have a security[br]attribution unit, you
0:49:27.900,0:49:33.720
can decide when you turn it on. I just[br]had a GPIO go up, and then I
0:49:33.720,0:49:39.609
enabled the SAU. And then my search[br]space was very small, because I knew it would
0:49:39.609,0:49:45.150
be just after I pulled up the GPIO. And so[br]I was able to very precisely time where I
0:49:45.150,0:49:50.280
glitch. And because I basically wrote the[br]code that does it, I could
0:49:50.280,0:49:53.470
almost count on the oscilloscope which[br]instruction I'm hitting.
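[Editor's note: a minimal sketch of the trigger trick described above, assuming the standard CMSIS register names for the ARMv8-M SAU. The GPIO macro, the header name and the omitted region setup are placeholders, not the code used for the talk; the point is only that the glitching setup triggers on the GPIO edge that immediately precedes enabling the SAU.]

```c
#include "device.h"  /* placeholder vendor header providing CMSIS SAU/GPIO definitions */

void enable_sau_with_trigger(void)
{
    /* Raise a pin the glitching setup can trigger on; the glitch offset
     * then only has to cover the handful of instructions that follow. */
    TRIGGER_GPIO_HIGH();                /* placeholder, board-specific */

    /* Enable the Security Attribution Unit (region configuration omitted).
     * SAU->CTRL and SAU_CTRL_ENABLE_Msk are the standard CMSIS names. */
    SAU->CTRL = SAU_CTRL_ENABLE_Msk;
    __DSB();                            /* ensure the write has completed */
    __ISB();                            /* flush the pipeline */
}
```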
0:49:53.470,0:49:56.520
Mic 1: Thank you.[br]Herald: Next question from microphone
0:49:56.520,0:49:59.839
number 2, please.[br]Mic 2: Thank you for the talk. I was just
0:49:59.839,0:50:05.170
wondering, if the vendor were to include the[br]capacitor directly on the die, how fixed
0:50:05.170,0:50:10.520
would you consider it to be?[br]Thomas: So against voltage glitching? It
0:50:10.520,0:50:14.530
might help. It depends. But for example,[br]on a recent chip, we just used the
0:50:14.530,0:50:19.309
negative voltage to suck out the power[br]from the capacitor. And also, you will
0:50:19.309,0:50:23.820
have EMFI glitching as a possibility and[br]EMFI glitching is awesome because you
0:50:23.820,0:50:28.140
don't even have to solder. You just[br]basically put a small coil on top of your
0:50:28.140,0:50:33.070
chip and inject the voltage directly into[br]it behind any of the capacitors. And so
0:50:33.070,0:50:39.570
on. So it helps, but often[br]it's not done for security reasons. Let's
0:50:39.570,0:50:42.650
see.[br]Herald: Next question again from our
0:50:42.650,0:50:46.359
Signal Angel.[br]Signal Angel: Did you get to use your own
0:50:46.359,0:50:55.970
custom hardware to help you?[br]Thomas: Partially. The part that worked
0:50:55.970,0:50:59.310
is the summary.[br]Herald: Microphone number 1, please.
0:50:59.310,0:51:05.010
Mic 1: Hi. Thanks for the interesting[br]talk. All these vendors pretty much said
0:51:05.010,0:51:08.420
this sort of attack is sort of not really[br]in scope for what they're doing.
0:51:08.420,0:51:10.880
Thomas: Yes.[br]Mic 1: Are you aware of anyone like in
0:51:10.880,0:51:15.490
this sort of category of chip actually[br]doing anything against glitching attacks?
0:51:15.490,0:51:20.190
Thomas: Not in this category, but there[br]are secure elements that explicitly
0:51:20.190,0:51:25.891
protect against it. A big problem with[br]researching those is that it's also to a
0:51:25.891,0:51:30.280
large degree security by NDA, at least for[br]me, because I have no idea what's going
0:51:30.280,0:51:35.450
on. I can't buy one to play around with[br]it. And so I can't tell you how good these
0:51:35.450,0:51:39.130
are. But I know from some friends that[br]there are some chips that are very good at
0:51:39.130,0:51:42.930
protecting against glitches. And[br]apparently the term you need to look for
0:51:42.930,0:51:47.420
is "glitch monitor". And if you[br]see that in the data sheet, that tells you
0:51:47.420,0:51:52.230
that they at least thought about it.[br]Herald: Microphone number 2, please.
0:51:52.230,0:51:59.950
Mic 2: So what about brown-out[br]detection? Did Microchip say why it didn't
0:51:59.950,0:52:03.490
catch your glitching attempts?[br]Thomas: It's not meant to, it's too slow
0:52:03.490,0:52:08.170
to catch glitching attacks. Basically, a[br]brownout detector is mainly there to keep
0:52:08.170,0:52:13.580
your chip stable. And so, for example, if[br]your supply voltage drops, you want to
0:52:13.580,0:52:17.210
make sure that you notice and don't[br]accidentally glitch yourself. So, for
0:52:17.210,0:52:21.250
example, if it is running on a battery and[br]your battery goes empty, you want your
0:52:21.250,0:52:25.490
chip to run stable, stable, stable, then off.[br]And that's the idea behind a brownout
0:52:25.490,0:52:30.590
detector, in my understanding. But yeah,[br]they are not made to be fast enough to
0:52:30.590,0:52:36.119
catch glitching attacks.[br]Herald: Do we have any more questions from
0:52:36.119,0:52:39.150
the hall?[br]Thomas: Yes.
0:52:39.150,0:52:45.359
Herald: Yes? Where?[br]Mic ?: Thank you for your amazing talk.
0:52:45.359,0:52:49.320
You have shown that it gets very[br]complicated if you have two consecutive
0:52:49.320,0:52:55.390
glitches. So wouldn't it be an easy[br]protection to just do the stuff twice or
0:52:55.390,0:53:00.809
three times and maybe randomize it? Would[br]you consider this then impossible to be
0:53:00.809,0:53:04.160
glitched?[br]Thomas: So adding randomization to the
0:53:04.160,0:53:08.010
point in time where you enable it helps,[br]but then you can trigger off the power
0:53:08.010,0:53:12.880
consumption and so on. And I should add, I[br]only tried to trigger once and then used
0:53:12.880,0:53:16.880
just a simple delay. But in theory, if you[br]do it twice, you could also glitch on the
0:53:16.880,0:53:21.830
power consumption signature and so on. So[br]it might help. But somebody very motivated
0:53:21.830,0:53:27.910
will still be able to do it. Probably.[br]Herald: OK. We have another question from
0:53:27.910,0:53:31.059
the Internet.[br]Signal Angel: Is there a mitigation for
0:53:31.059,0:53:36.510
such an attack that I can do on the PCB[br]level, or can it be addressed only on the chip level?
0:53:36.510,0:53:40.250
Thomas: Only on the chip level, because if you[br]have a heat gun, you can just pull the chip
0:53:40.250,0:53:45.650
off and do it in a socket or if you do[br]EMFI glitching, you don't even have to
0:53:45.650,0:53:50.240
touch the chip. You just go over it with[br]the coil and inject directly into the
0:53:50.240,0:53:54.800
chip. So the chip needs to be secured[br]against this type of stuff or you can add
0:53:54.800,0:54:00.130
a tamper protection case around your[br]chips. So, yeah.
0:54:00.130,0:54:02.700
Herald: Another question from microphone[br]number 1.
0:54:02.700,0:54:08.270
Mic 1: So I was wondering if you've heard[br]anything or know anything about the STM32
0:54:08.270,0:54:11.260
L5 series?[br]Thomas: I've heard a lot. I've seen
0:54:11.260,0:54:17.020
nothing. So, yes, I've heard about it. But[br]it doesn't ship yet as far as I know. We
0:54:17.020,0:54:20.470
are all eagerly awaiting it.[br]Mic 1: Thank you.
0:54:20.470,0:54:24.440
Herald: Microphone number 2, please.[br]Mic 2: Hey, very good talk. Thank you. Do
0:54:24.440,0:54:29.089
you, will you release all the hardware[br]design of the board and those scripts?
0:54:29.089,0:54:30.799
Thomas: Yes.[br]Mic 2: Is there anything already
0:54:30.799,0:54:33.109
available, even if I understood it's not[br]all finished?
0:54:33.109,0:54:38.349
Thomas: Oh, yes. So, on chip.fail. There[br]are these .fail domains - it's awesome.
0:54:38.349,0:54:44.160
Chip.fail has the source code to our[br]glitcher. I've also ported it to the
0:54:44.160,0:54:48.990
Lattice and I need to push that hopefully[br]in the next few days. But then all the
0:54:48.990,0:54:53.109
hardware would be open sourced also[br]because it's based on open source hardware
0:54:53.109,0:54:59.100
and yeah, I'm not planning to make any[br]money or anything using it. It's just to
0:54:59.100,0:55:02.590
make life easier.[br]Herald: Microphone number 2, please.
0:55:02.590,0:55:07.340
Mic 2: So you said already you don't[br]really know what happens at the exact
0:55:07.340,0:55:14.990
moment of the glitch, and you were lucky[br]that you skipped an instruction
0:55:14.990,0:55:24.339
maybe. Do you have a feeling of what is[br]happening inside the chip at the moment of
0:55:24.339,0:55:28.730
the glitch?[br]Thomas: So I asked this precise question,
0:55:28.730,0:55:36.579
what exactly happens, to multiple people. I[br]got multiple answers. But basically my
0:55:36.579,0:55:41.280
understanding is that you basically pull away[br]the voltage that it needs to set, for
0:55:41.280,0:55:45.770
example, the register. But it's[br]absolutely out of my domain to give an
0:55:45.770,0:55:50.710
educated comment on this. I'm a breaker,[br]unfortunately, not a maker when it comes
0:55:50.710,0:55:54.030
to chips.[br]Herald: Microphone number 2, please.
0:55:54.030,0:56:01.750
Mic 2: OK. Thank you. You talked a lot about[br]chip attacks. Can you tell us something
0:56:01.750,0:56:07.510
about JTAG attacks? So what if I just have a[br]connection to JTAG?
0:56:07.510,0:56:12.280
Thomas: Yeah. So, for example, the attack[br]on the KPH version of the chip was
0:56:12.280,0:56:17.290
basically a JTAG attack. I used JTAG to[br]read out the chip, but I did have JTAG in
0:56:17.290,0:56:23.630
normal world. However, it's possible on[br]a lot of chips to re-enable JTAG
0:56:23.630,0:56:28.690
even if it's locked. And for example,[br]again, referencing last year's talk, we
0:56:28.690,0:56:34.330
were able to re-enable JTAG on the STM32F2,[br]and I would assume something similar
0:56:34.330,0:56:39.440
is possible on this chip as well. But I[br]haven't tried.
0:56:39.440,0:56:47.260
Herald: Are there any more questions? We[br]still have a few minutes. I guess not.
0:56:47.260,0:56:51.600
Well, a big, warm round of applause for[br]Thomas Roth.
0:56:51.600,0:56:55.110
applause
0:56:55.110,0:56:59.210
postroll music
0:56:59.210,0:57:06.250
Subtitles created by c3subtitles.de[br]in the year 2021. Join, and help us!