0:00:00.000,0:00:18.400
36C3 Intro[br]♪ (intro music) ♪
0:00:18.400,0:00:22.590
Herald: Welcome, everybody, to our very[br]first talk on the first day of Congress.
0:00:22.590,0:00:27.009
The talk is "Open Source is Insufficient[br]to Solve Trust Problems in Hardware,"
0:00:27.009,0:00:31.579
and although there is a lot to be said[br]for free and open software, it is
0:00:31.579,0:00:37.580
unfortunately not always inherently more[br]secure than proprietary or closed software,
0:00:37.580,0:00:41.520
and the same goes for hardware as well.[br]And this talk will take us into
0:00:41.520,0:00:46.540
the nitty-gritty bits of how to build[br]trustable hardware and how it has to be
0:00:46.540,0:00:51.496
implemented and brought together[br]with the software in order to be secure.
0:00:51.496,0:00:54.763
We have one speaker here today.[br]It's bunnie.
0:00:54.763,0:00:57.651
He's a hardware and firmware hacker.[br]But actually,
0:00:57.651,0:01:01.439
the talk was worked on by three people,[br]so it's not just bunnie, but also
0:01:01.439,0:01:04.914
Sean "Xobs" Cross and Tom Marble.[br]But the other two are not present today.
0:01:04.914,0:01:07.435
But I would like you to welcome[br]our speaker, bunnie,
0:01:07.435,0:01:10.655
with a big, warm, round of applause,[br]and have a lot of fun.
0:01:10.655,0:01:16.940
(Applause)
0:01:16.940,0:01:20.117
bunnie: Good morning, everybody.[br]Thanks for braving the crowds
0:01:20.117,0:01:23.940
and making it in to the Congress.[br]And thank you again to the Congress
0:01:23.940,0:01:30.514
for giving me the privilege[br]to address the Congress again this year.
0:01:30.514,0:01:34.359
Very exciting being the first talk[br]of the day. Had font problems,
0:01:34.359,0:01:39.439
so I'm running from a .pdf backup.[br]So we'll see how this all goes.
0:01:39.439,0:01:42.914
Good thing I make backups. So the[br]topic of today's talk is
0:01:42.914,0:01:47.226
"Open Source is Insufficient[br]to Solve Trust Problems in Hardware,"
0:01:47.226,0:01:49.249
and sort of some things[br]we can do about this.
0:01:49.249,0:01:53.309
So my background is, I'm[br]a big proponent of open source hardware. I
0:01:53.309,0:01:57.939
love it. And I've built a lot of things in[br]open source, using open source hardware
0:01:57.939,0:02:01.060
principles. But there's been sort of a[br]nagging question in me about like, you
0:02:01.060,0:02:04.299
know, some people would say things like,[br]oh, well, you know, you build open source
0:02:04.299,0:02:07.439
hardware because you can trust it more.[br]And there's been sort of this gap in my
0:02:07.439,0:02:12.380
head and this talk tries to distill out[br]that gap in my head between trust and open
0:02:12.380,0:02:18.610
source and hardware. So I'm sure people[br]have opinions on which browser you would
0:02:18.610,0:02:22.580
think is more secure or trustable than the[br]others. But the question is why might you
0:02:22.580,0:02:26.500
think one is more trustable than the[br]other. You have everything in here, from
0:02:26.500,0:02:31.420
Firefox and Iceweasel down to like the[br]Samsung custom browser or the you know,
0:02:31.420,0:02:35.200
Xiaomi custom browser. Which one would[br]you rather use for your browsing if you
0:02:35.200,0:02:41.300
had to trust something? So I'm sure people[br]have their biases and they might say that
0:02:41.300,0:02:45.480
open is more trustable. But why do we say[br]open is more trustable? Is it because we
0:02:45.480,0:02:49.270
actually read the source thoroughly and[br]check it every single release for this
0:02:49.270,0:02:53.701
browser? Is it because we compile our[br]source, our browsers from source before we
0:02:53.701,0:02:57.280
use them? No, actually we don't have the[br]time to do that. So let's take a closer
0:02:57.280,0:03:02.480
look as to why we like to think that open[br]source software is more secure. So this is
0:03:02.480,0:03:07.720
a kind of a diagram of the lifecycle of,[br]say, a software project. You have a bunch
0:03:07.720,0:03:12.920
of developers on the left. They'll commit[br]code into some source management program
0:03:12.920,0:03:17.890
like git. It goes to a build. And then[br]ideally, some person who carefully managed
0:03:17.890,0:03:22.260
the key signs that build. It goes into an[br]untrusted cloud, then gets downloaded onto
0:03:22.260,0:03:26.080
users' disks, pulled into RAM, and run by the[br]user at the end of the day. Right? So the
0:03:26.080,0:03:31.920
reason why actually we find that we might[br]be able to trust things more is because in
0:03:31.920,0:03:35.790
the case of open source, anyone can pull[br]down that source code like someone doing
0:03:35.790,0:03:40.350
reproducible builds or an audit of some type,[br]build it, confirm that the hashes match
0:03:40.350,0:03:44.330
and that the keys are all set up[br]correctly. And then the users also have
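The audit step described here, hashing a locally rebuilt artifact and comparing against the published digest, can be sketched in a few lines (a minimal illustration; the artifact bytes stand in for a real binary, and the "published" digest is a hypothetical stand-in for one fetched from the project):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for a reproducibly rebuilt artifact; in reality this would be
# the bytes of the binary you built yourself from the public source.
rebuilt = b"pretend these are the bytes of the browser binary"

# Stand-in for the digest the project published alongside the release.
published = sha256_hex(b"pretend these are the bytes of the browser binary")

# The audit: a matching digest means the published binary corresponds
# bit-for-bit to what the public source actually builds into.
assert sha256_hex(rebuilt) == published
```

The point is that anyone, not just the release manager, can perform this comparison, which is what lets the community as a whole backstop the build process.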
0:03:44.330,0:03:48.870
the ability to know developers and sort of[br]enforce community norms and standards upon
0:03:48.870,0:03:52.640
them to make sure that they're acting[br]sort of in the favor of the community. So
0:03:52.640,0:03:55.630
in the case that we have bad actors who[br]want to go ahead and tamper with builds
0:03:55.630,0:03:59.940
and clouds and all the things in the[br]middle, it's much more difficult. So open
0:03:59.940,0:04:05.510
is more trustable because we have tools to[br]transfer trust in software, things like
0:04:05.510,0:04:10.070
hashing, things like public keys, things[br]like Merkle trees. Right? And also in the
0:04:10.070,0:04:14.460
case of open versus closed, we have social[br]networks that we can use to reinforce our
0:04:14.460,0:04:20.419
community standards for trust and[br]security. Now, it's worth looking a little
0:04:20.419,0:04:25.460
bit more into the hashing mechanism[br]because this is a very important part of
0:04:25.460,0:04:29.180
our software trust chain. So I'm sure a[br]lot of people know what hashing is, for
0:04:29.180,0:04:33.530
people who don't know. Basically it takes[br]a big pile of bits and turns them into a
0:04:33.530,0:04:38.340
short sequence of symbols so that a tiny[br]change in the big pile of bits makes a big
0:04:38.340,0:04:42.010
change in the output symbols. And also[br]knowing those symbols doesn't reveal
0:04:42.010,0:04:48.090
anything about the original file. So in[br]this case here, the file on the left is
0:04:48.090,0:04:54.750
hashed to sort of cat, mouse, panda, bear[br]and the file on the right hashes to, you
0:04:54.750,0:05:00.830
know, peach, snake, pizza, cookie. And the[br]thing is, as you may not even have noticed
0:05:00.830,0:05:04.790
necessarily that there was that one bit[br]changed up there, but it's very easy to
0:05:04.790,0:05:07.350
see that the short string of symbols has[br]changed. So you don't actually have to go
0:05:07.350,0:05:11.140
through that whole file and look for that[br]needle in the haystack. You have this hash
0:05:11.140,0:05:15.550
function that tells you something has[br]changed very quickly. Then once you've
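The avalanche property just described, a one-bit change in a big pile of bits flipping the whole digest, can be demonstrated directly (a minimal sketch using SHA-256; the data is arbitrary filler):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = bytearray(b"a big pile of bits " * 1000)
tampered = bytearray(original)
tampered[1234] ^= 0x01  # flip a single bit deep inside the pile

h1 = digest(bytes(original))
h2 = digest(bytes(tampered))

# The needle in the haystack is invisible in the data itself,
# but the short digest strings are obviously different.
assert h1 != h2
print(h1[:16])
print(h2[:16])
```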
0:05:15.550,0:05:19.560
computed the hashes, we have a process[br]called signing, where a secret key is used
0:05:19.560,0:05:24.030
to encrypt the hash, users decrypt that[br]using the public key to compare against a
0:05:24.030,0:05:27.070
locally computed hash. You know, so we're[br]not trusting the server to compute the
0:05:27.070,0:05:31.790
hash. We reproduce it on our side and then[br]we can say that it is now difficult to
0:05:31.790,0:05:36.370
modify that file or the signature without[br]detection. Now the problem is that there
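The signing flow described here can be illustrated with toy textbook RSA (tiny primes, no padding; purely a sketch of the idea of "encrypting" a hash with a secret key and checking it with the public key, never usable in practice):

```python
import hashlib

# Toy RSA key: n = 61 * 53 = 3233, with e * d == 1 (mod lcm(60, 52)).
n, e, d = 3233, 17, 2753

def h(data: bytes) -> int:
    # Hash, reduced mod n so it fits in the toy key.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # "Encrypt" the hash with the secret exponent d.
    return pow(h(data), d, n)

def verify(data: bytes, sig: int) -> bool:
    # "Decrypt" with the public exponent e, compare against
    # a locally computed hash -- we never trust a server's hash.
    return pow(sig, e, n) == h(data)

release = b"browser-1.0 release bytes"
sig = sign(release)
assert verify(release, sig)                # genuine signature checks out
assert not verify(release, (sig + 1) % n)  # any other signature fails
```

Real signatures use large keys and proper padding schemes, but the shape is the same: the verifier recomputes the hash locally and compares.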
0:05:36.370,0:05:41.040
is a time of check, time of use issue with[br]the system, even though we have this
0:05:41.040,0:05:44.980
mechanism, if we decouple the point of[br]check from the point of use, it creates a
0:05:44.980,0:05:49.790
man-in-the-middle opportunity, or a person in[br]the middle if you want. The thing is that,
0:05:49.790,0:05:55.600
you know, it's a class of attacks that[br]allows someone to tamper with data as it
0:05:55.600,0:05:59.620
is in transit. And I'm kind of symbolizing[br]this with the evil guy, I guess, because hackers
0:05:59.620,0:06:05.780
all wear hoodies and, you know, they also[br]keep us warm as well in very cold places.
0:06:05.780,0:06:12.280
So now an example of a time of check, time[br]of use issue is that if, say, a user
0:06:12.280,0:06:15.730
downloads a copy of the program onto their[br]disk and they just check it after the
0:06:15.730,0:06:20.370
download to the disk. And they say, okay,[br]great, that's fine. Later on, an adversary
0:06:20.370,0:06:24.319
can then modify the file on disk just[br]before it's copied to RAM. And now
0:06:24.319,0:06:27.150
actually the user, even though they[br]downloaded the correct version of the file,
0:06:27.150,0:06:31.560
they're getting the wrong version into the[br]RAM. So the key point is the reason why in
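The race just described can be sketched concretely; here a dict stands in for the filesystem, and "running" is just decoding the bytes (all names are hypothetical illustrations):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

disk = {"app.bin": b"good program"}   # stand-in for the filesystem

# Time of check: verify the copy sitting on disk right after download.
checked = sha256_hex(disk["app.bin"])

# Between check and use, an adversary rewrites the file on disk.
disk["app.bin"] = b"evil program"

# Time of use: the loader pulls whatever is on disk into RAM.
ram = disk["app.bin"]
assert sha256_hex(ram) != checked     # the earlier check covers nothing now

# Mitigation: move the point of check next to the point of use --
# re-verify the bytes actually sitting in RAM just before running them.
def run_if_verified(image: bytes, expected: str) -> str:
    if sha256_hex(image) != expected:
        raise RuntimeError("image modified between check and use")
    return "running " + image.decode()
```

Calling `run_if_verified(ram, checked)` here would raise, which is exactly the point: checking the copy closest to execution catches the tampering.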
0:06:31.560,0:06:37.020
software we feel it's more trustworthy is[br]that we have a tool to transfer trust, and ideally,
0:06:37.020,0:06:41.510
we place that point of check as close to[br]the users as possible. So the idea is that we're
0:06:41.510,0:06:45.960
sort of putting keys into the CPU or some[br]secure enclave that, you know, just before
0:06:45.960,0:06:49.550
you run it, you've checked it, that[br]software is perfect and has not been
0:06:49.550,0:06:55.380
modified, right? Now, an important[br]clarification is that it's actually more
0:06:55.380,0:06:58.750
about the place of check versus the place[br]of use. Whether you checked one second
0:06:58.750,0:07:03.010
prior or a minute prior doesn't actually[br]matter. It's more about checking the copy
0:07:03.010,0:07:07.520
that's closest to the thing that's running[br]it, right? We don't call it PoCPoU because
0:07:07.520,0:07:12.700
it just doesn't have quite the same ring[br]to it. But now this is important. The
0:07:12.700,0:07:15.900
reason why I emphasize place of check[br]versus place of use is, this is why
0:07:15.900,0:07:20.620
hardware is not the same as software in[br]terms of trust. The place of check is not
0:07:20.620,0:07:25.320
the place of use or in other words, trust[br]in hardware is a ToCToU problem all the
0:07:25.320,0:07:29.570
way down the supply chain. Right? So the[br]hard problem is how do you trust your
0:07:29.570,0:07:33.290
computers? Right? So we have problems[br]where we have firmware, pervasive hidden
0:07:33.290,0:07:37.680
bits of code that are inside every single[br]part of your system that can break
0:07:37.680,0:07:41.770
abstractions. And there's also the issue[br]of hardware implants. So it's tampering or
0:07:41.770,0:07:45.430
adding components that can bypass security[br]in ways that are not according to the
0:07:45.430,0:07:51.390
specification that you're building[br]around. So from the firmware standpoint,
0:07:51.390,0:07:54.759
it's mostly here to acknowledge it's an issue.[br]The problem is, this is actually a software
0:07:54.759,0:07:58.680
problem. The good news is we have things[br]like openness and runtime verification,
0:07:58.680,0:08:01.970
that can go toward framing these questions. If[br]you're, you know, a big enough player or
0:08:01.970,0:08:05.180
you have enough influence or something,[br]you can coax out all the firmware blobs
0:08:05.180,0:08:10.190
and eventually sort of solve that problem.[br]The bad news is that you're still relying
0:08:10.190,0:08:14.770
on the hardware to obediently run the[br]verification. So if your hardware isn't
0:08:14.770,0:08:17.160
running the verification correctly, it[br]doesn't matter that you have all the
0:08:17.160,0:08:21.600
source code for the firmware. Which brings[br]us to the world of hardware implants. So
0:08:21.600,0:08:25.300
very briefly, it's worth thinking about,[br]you know, how bad can this get? What are
0:08:25.300,0:08:29.830
we worried about? What is the field? If we[br]really want to be worried about trust and
0:08:29.830,0:08:33.870
security, how bad can it be? So I've spent[br]many years trying to deal with supply
0:08:33.870,0:08:37.490
chains. They're not friendly territory.[br]There's a lot of reasons people want to
0:08:37.490,0:08:43.870
screw with the chips in the supply chain.[br]For example, here this is a small ST
0:08:43.870,0:08:47.630
microcontroller that claims to be a secure[br]microcontroller. Someone was like: "Ah,
0:08:47.630,0:08:50.830
this is not secure, you know, it's not[br]behaving correctly." We digested off the top
0:08:50.830,0:08:54.770
of it. On the inside, it's an LCX244[br]buffer. Right. So like, you know, this was
0:08:54.770,0:08:59.130
not done because someone wanted to tamper[br]with the secure microcontroller. It's
0:08:59.130,0:09:02.490
because someone wants to make a quick[br]buck. Right. But the point is that that
0:09:02.490,0:09:05.590
marking on the outside is convincing.[br]Right? It could have been any chip on the
0:09:05.590,0:09:11.050
inside in that situation. Another problem[br]that I've had personally: I was building
0:09:11.050,0:09:15.690
a robot controller board that had an FPGA[br]on the inside. We manufactured a thousand
0:09:15.690,0:09:20.790
of these and about 3% of them weren't[br]passing tests, so we set them aside. Later on, I
0:09:20.790,0:09:23.480
pulled these units that weren't passing[br]tests and looked at them very carefully.
0:09:23.480,0:09:28.330
And I noticed that all of the units, the[br]FPGA units that weren't passing test had
0:09:28.330,0:09:34.510
that white rectangle on them, which is[br]shown in a bit more zoomed-in version. It
0:09:34.510,0:09:38.080
turned out that underneath that white[br]rectangle were the letters ES, for
0:09:38.080,0:09:43.480
engineering sample. So someone had gone in[br]and laser-blasted off the letters which
0:09:43.480,0:09:46.020
say that's an engineering sample, which[br]means they're not qualified for regular
0:09:46.020,0:09:50.240
production, blending them into the supply[br]chain at a 3% rate and managed to
0:09:50.240,0:09:53.420
essentially double their profits at the[br]end of the day. The reason why this works
0:09:53.420,0:09:56.350
is because distributors make a small[br]amount of money. So even a few percent
0:09:56.350,0:09:59.931
actually makes them a lot more profit at[br]the end of the day. But the key takeaway is
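The arithmetic behind that claim is worth making concrete (the unit price and margin below are illustrative assumptions, not figures from the talk):

```python
# Hypothetical distributor economics: thin margins mean a small fraction
# of relabeled, near-zero-cost parts has an outsized effect on profit.
units = 1000
price = 10.00     # assumed selling price per chip
margin = 0.03     # assumed 3% distributor margin on genuine parts
fake_rate = 0.03  # 3% of units are relabeled engineering samples

honest = units * price * margin                            # all-genuine: 300.0
genuine_profit = units * (1 - fake_rate) * price * margin  # 291.0
fake_profit = units * fake_rate * price    # full price, ~zero cost: 300.0
blended = genuine_profit + fake_profit     # 591.0

print(blended / honest)  # ~1.97: profit roughly doubled
```

With these assumed numbers, blending in just 3% counterfeits nearly doubles the distributor's profit, which is why a small contamination rate is economically attractive.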
0:09:59.931,0:10:03.980
that just because 97% of your[br]hardware is okay, it does not mean that
0:10:03.980,0:10:09.860
you're safe. Right? So it doesn't help to[br]take one sample out of your entire set of
0:10:09.860,0:10:12.750
hardware and say, this is good, this is[br]constructed correctly, therefore all
0:10:12.750,0:10:17.760
of them should be good. That's a ToCToU[br]problem, right? 100% hardware verification
0:10:17.760,0:10:23.140
is mandatory, if you're worried about[br]trust and verification. So let's go a bit
0:10:23.140,0:10:27.220
further down the rabbit hole. This is a[br]diagram, sort of an ontology of supply
0:10:27.220,0:10:31.880
chain attacks. And I've kind of divided it[br]into two axes. On the vertical axis is
0:10:31.880,0:10:36.270
how easy is it to detect, or how hard.[br]Right? So at the bottom you might need a
0:10:36.270,0:10:40.060
SEM, a scanning electron microscope to do[br]it, in the middle is an x-ray, a little
0:10:40.060,0:10:43.590
specialized, and at the top is just visual[br]or JTAG, like anyone can do at home.
0:10:43.590,0:10:47.770
Right? And then from left to right is[br]execution difficulty. Right? Things are
0:10:47.770,0:10:51.220
going to take millions of dollars and months,[br]things that are going to take $10 and weeks, or a
0:10:51.220,0:10:56.570
dollar and seconds. Right? There are sort of[br]several broad classes I've kind of
0:10:56.570,0:10:59.750
outlined here. Adding components is very[br]easy. Substituting components is very
0:10:59.750,0:11:03.810
easy. We don't have enough time to really[br]go into those. But instead, we're gona
0:11:03.810,0:11:07.990
talk about kind of the two more scary[br]ones, which are sort of adding a chip
0:11:07.990,0:11:12.390
inside a package and IC modifications. So[br]let's talk about adding a chip in a
0:11:12.390,0:11:16.230
package. This one has sort of grabbed a[br]bunch of headlines; so,
0:11:16.230,0:11:21.250
in the Snowden files, we've found these[br]like NSA implants where they had put chips
0:11:21.250,0:11:27.350
literally inside of connectors and other[br]chips to modify the computer's behavior.
0:11:27.350,0:11:31.840
Now, it turns out that actually adding a[br]chip in a package is quite easy. It
0:11:31.840,0:11:35.490
happens every day. This is a routine[br]thing, right? If you open up any SD
0:11:35.490,0:11:39.180
card, micro-SD card that you have, you're[br]going to find that it has two chips on the
0:11:39.180,0:11:42.060
inside at the very least. One is a[br]controller chip, one is a memory chip. In
0:11:42.060,0:11:47.909
fact, they can stick 16, 17 chips inside[br]of these packages today very handily.
0:11:47.909,0:11:51.940
Right? And so if you want to go ahead and[br]find these chips, is the solution to go
0:11:51.940,0:11:54.760
ahead and X-ray all the things? You just[br]take every single circuit board and throw it
0:11:54.760,0:11:58.240
inside an x-ray machine. Well, this is[br]what a circuit board looks like, in the
0:11:58.240,0:12:02.950
x-ray machine. Some things are very[br]obvious. So on the left, we have our
0:12:02.950,0:12:05.780
Ethernet magnetic jacks and there's a[br]bunch of stuff on the inside. Turns out
0:12:05.780,0:12:08.980
those are all OK right there. Don't worry[br]about those. And on the right, we have our
0:12:08.980,0:12:13.739
chips. And this one here, you may be sort[br]of tempted to look and say, oh, I see this
0:12:13.739,0:12:18.290
big sort of square thing on the bottom[br]there. That must be the chip. Actually,
0:12:18.290,0:12:22.240
turns out that's not the chip at all.[br]That's the solder pad that holds the chip
0:12:22.240,0:12:25.920
in place. You can't actually see the chip[br]as the solder is masking it inside the
0:12:25.920,0:12:30.300
x-ray. So when we're looking at a chip[br]inside of an x-ray, I've kind of given
0:12:30.300,0:12:34.750
you a look right here on the left is what[br]it looks like sort of in 3-D. And the
0:12:34.750,0:12:37.360
right is what it looks like in an x-ray, sort of[br]looking from the top down. You're looking
0:12:37.360,0:12:41.480
at ghostly outlines with very thin spidery[br]wires coming out of it. So if you were to
0:12:41.480,0:12:45.760
look at a chip-on-chip in an x-ray, this[br]is actually an image of a chip. So in the
0:12:45.760,0:12:49.790
cross-section, you can see the several[br]pieces of silicon that are stacked on top
0:12:49.790,0:12:53.440
of each other. And if you could actually[br]do an edge on x-ray of it, this is what
0:12:53.440,0:12:57.000
you would see. Unfortunately, you'd have[br]to take the chip off the board to do the
0:12:57.000,0:13:00.500
edge on x-ray. So what you do is you have[br]to look at it from the top down and we
0:13:00.500,0:13:03.960
look at it from the top down, all you see[br]are basically some straight wires. Like, I
0:13:03.960,0:13:08.510
mean, it's not obvious from that top-down[br]x-ray whether you're looking at multiple
0:13:08.510,0:13:11.700
chips, eight chips, one chip, how many[br]chips are on the inside, because the
0:13:11.700,0:13:16.380
wire bonds all stitch perfectly in[br]overlap over the chip. So, you know, this
0:13:16.380,0:13:20.170
is what the chip-on-chip scenario might[br]look like. You have a chip that's sitting
0:13:20.170,0:13:23.959
on top of a chip and wire bonds just sort[br]of going a little bit further on from the
0:13:23.959,0:13:28.399
edge. And so in the X-ray, the only kind[br]of difference you see is a slightly longer
0:13:28.399,0:13:32.760
wire bond in some cases. So it's actually,[br]it's not... you can find these, but it's
0:13:32.760,0:13:38.450
not like, you know, obvious that you've[br]found an implant or not. So looking for
0:13:38.450,0:13:42.880
silicon is hard. Silicon is relatively[br]transparent to X-rays. A lot of things
0:13:42.880,0:13:48.279
mask it. Copper traces and solder mask the[br]presence of silicon. This is like another
0:13:48.279,0:13:54.180
example of a, you know, a wire bonded chip[br]under an X-ray. There's some mitigations.
0:13:54.180,0:13:57.290
If you have a lot of money, you can do[br]computerized tomography, that'll build a
0:13:57.290,0:14:02.839
3D image of the chip. You can do X-ray[br]diffraction spectroscopy, but it's not a
0:14:02.839,0:14:07.490
foolproof method. And so basically the[br]threat of wirebonded package is actually
0:14:07.490,0:14:11.609
very well understood commodity technology.[br]It's actually quite cheap. I was
0:14:11.609,0:14:15.750
actually doing some wire bonding in China[br]the other day. This is the wirebonding
0:14:15.750,0:14:20.010
machine. I looked up the price, it's 7000[br]dollars for a used one. And you
0:14:20.010,0:14:23.100
basically just walk up to the guy with a[br]picture of where you want the bonds to go. He
0:14:23.100,0:14:27.100
sort of picks them out, programs the[br]machine's motion once, and he just plays
0:14:27.100,0:14:30.030
back over and over again. So if you want[br]to go ahead and modify a chip and add a
0:14:30.030,0:14:34.769
wirebond, it's not as crazy as it sounds.[br]The mitigation is that this is somewhat
0:14:34.769,0:14:38.770
detectable inside X-rays. So let's go down[br]the rabbit hole a little further. So
0:14:38.770,0:14:41.859
there's another concept of threat here,[br]called the Through-Silicon Via. So this
0:14:41.859,0:14:46.570
here is a cross-section of a chip. On the[br]bottom is the base chip and the top is a
0:14:46.570,0:14:51.350
chip that's only 0.1 to 0.2 millimeters[br]thick, almost the width of a human hair.
0:14:51.350,0:14:55.220
And they actually have drilled Vias[br]through the chip. So you have circuits on
0:14:55.220,0:14:59.540
the top and circuits on the bottom. So[br]this is kind of used to sort of, you know,
0:14:59.540,0:15:03.880
put an interposer in between different[br]chips; it's also used to stack DRAM and HBM. So
0:15:03.880,0:15:07.880
this is a commodity process available[br]today. It's not science fiction. And the
0:15:07.880,0:15:10.640
second concept I want to throw at you is a[br]thing called a Wafer Level Chip Scale
0:15:10.640,0:15:15.340
Package, WLCSP. This is actually a very[br]common method for packaging chips today.
0:15:15.340,0:15:19.340
Basically it's solder balls directly on[br]top of chips. They're everywhere. If you
0:15:19.340,0:15:24.129
look inside of like an iPhone, basically[br]almost all the chips are WLCSP package
0:15:24.129,0:15:28.380
types. Now, if I were to take that Wafer[br]Level Chip Scale Package and cross-section it
0:15:28.380,0:15:32.459
and look at it, it looks like a circuit[br]board with some solder-balls and the
0:15:32.459,0:15:36.089
silicon itself with some backside[br]passivation. If you go ahead and combine
0:15:36.089,0:15:40.709
this with a Through-Silicon Via implant, a[br]man in the middle attack using Through-
0:15:40.709,0:15:43.500
Silicon Vias, this is what it looks like[br]at the end of the day, you basically have
0:15:43.500,0:15:47.490
a piece of silicon the size of the original[br]silicon, sitting on the original pads, in
0:15:47.490,0:15:50.350
basically all the right places with the[br]solder-balls masking the presence of that
0:15:50.350,0:15:53.690
chip. So it's actually basically a nearly[br]undetectable implant if you want to
0:15:53.690,0:15:57.580
execute it. And if you go ahead and look at[br]the edge of the chip, chips already have
0:15:57.580,0:16:00.570
seams on the sides. You can't even just[br]look at the side and say, oh, I see a seam
0:16:00.570,0:16:03.851
on my chip, therefore it's a problem. The[br]seam on the edge oftentimes is because of
0:16:03.851,0:16:08.380
a different coating on the back, or[br]passivation, these types of things. So if
0:16:08.380,0:16:12.870
you really wanted to sort of say, OK, how[br]well can we hide an implant, this is probably
0:16:12.870,0:16:16.100
the way I would do it. It's logistically[br]actually easier than a wirebonded
0:16:16.100,0:16:19.769
implant because you don't have to get the[br]chips in wire-bondable format, you
0:16:19.769,0:16:23.250
literally just buy them off the Internet.[br]You can just clean off the solder-balls
0:16:23.250,0:16:27.470
with a hot air gun, and then the hard part[br]is building the silicon template for
0:16:27.470,0:16:32.390
doing the attack, which will take some[br]hundreds of thousands of dollars to do and
0:16:32.390,0:16:36.870
probably a mid-end fab. But if you have[br]almost no budget constraint and you have a
0:16:36.870,0:16:39.950
set of chips that are common and you want[br]to build a template for, this could be a
0:16:39.950,0:16:46.459
pretty good way to hide an implant inside[br]of a system. So that's sort of adding
0:16:46.459,0:16:52.290
chips inside packages. Let's talk a bit[br]about chip modification itself. So how
0:16:52.290,0:16:55.740
hard is it to modify the chip itself?[br]Let's say we've managed to eliminate the
0:16:55.740,0:17:00.380
possibility of someone's added chip, but[br]what about the chip itself? So this sort
0:17:00.380,0:17:03.300
of goes, a lot of people said, hey,[br]bunnie, why don't you spin an open source,
0:17:03.300,0:17:06.459
silicon processor, this will make it[br]trustable, right? This is not a problem.
0:17:06.459,0:17:12.309
Well, let's think about the attack surface[br]of IC fabrication processes. So on the
0:17:12.309,0:17:16.140
left hand side here I've got kind of a[br]flowchart of what IC fabrication looks
0:17:16.140,0:17:22.630
like. You start with a high-level chip[br]design; it's RTL, like Verilog or VHDL
0:17:22.630,0:17:27.430
these days or Python. You go into some[br]backend and then you have a decision to
0:17:27.430,0:17:31.380
make: Do you own your backend tooling or[br]not? And so I will go into this a little
0:17:31.380,0:17:34.500
more. If you don't, you trust the fab to[br]compile it and assemble it. If you do, you
0:17:34.500,0:17:37.760
assemble the chip with some blanks for[br]what's called "hard IP", we'll get into
0:17:37.760,0:17:42.140
this. And then you trust the fab to[br]assemble that, make masks and go to mass
0:17:42.140,0:17:46.910
production. So there's three areas that I[br]think are kind of ripe for tampering now,
0:17:46.910,0:17:49.510
"Netlist tampering", "hard IP tampering"[br]and "mask tampering". We'll go into each
0:17:49.510,0:17:55.140
of those. So "Netlist tampering", a lot of[br]people think that, of course, if you wrote
0:17:55.140,0:17:59.360
the RTL, you're going to make the chip. It[br]turns out that's actually kind of a
0:17:59.360,0:18:02.910
minority case. We hear about that. That's[br]on the right hand side called customer
0:18:02.910,0:18:06.910
owned tooling. That's when the customer[br]does a full flow, down to the mask set.
0:18:06.910,0:18:11.520
The problem is it costs several million[br]dollars and a lot of extra headcount of
0:18:11.520,0:18:15.169
very talented people to produce these and[br]you usually only do it for flagship
0:18:15.169,0:18:20.010
products like CPUs and GPUs or high-end[br]routers, these sorts of things. I would
0:18:20.010,0:18:25.020
say most chips tend to go more towards[br]what's called an ASIC side, "Application
0:18:25.020,0:18:28.830
Specific Integrated Circuit". What happens[br]is that the customer will do some RTL and
0:18:28.830,0:18:33.270
maybe a high level floorplan and then the[br]silicon foundry or service will go ahead
0:18:33.270,0:18:36.500
and do the place/route, the IP[br]integration, the pad ring. This is quite
0:18:36.500,0:18:39.640
popular for cheap support chips, like your[br]baseboard management controller inside
0:18:39.640,0:18:43.820
your server probably went through this[br]flow, disk controllers probably got this
0:18:43.820,0:18:47.860
flow, mid-to-low-end IO controllers. So all[br]those peripheral chips that we don't like
0:18:47.860,0:18:52.210
to think about, that we know that can[br]handle our data probably go through a flow
0:18:52.210,0:18:57.880
like this. And, to give you an idea of how[br]common it is, but how little you've heard
0:18:57.880,0:19:00.900
of it, there's a company called SOCIONEXT.[br]They're a billion-dollar company,
0:19:00.900,0:19:04.280
actually, you've probably never heard of[br]them, and they offer services. You
0:19:04.280,0:19:07.290
basically just throw a spec over the wall[br]and they'll build a chip for you, all the
0:19:07.290,0:19:10.160
way to the point where they've done logic[br]synthesis and physical design, and then
0:19:10.160,0:19:14.590
they'll go ahead and do the manufacturing[br]and test and sample shipment for it. So
0:19:14.590,0:19:18.540
then, OK, fine, now, obviously, if you[br]care about trust, you don't do an ASIC
0:19:18.540,0:19:24.260
flow, you pony up the millions of dollars[br]and you do a COT flow, right? Well, there
0:19:24.260,0:19:29.140
is a weakness in COT flows, and it's[br]called the "Hard IP problem". So this
0:19:29.140,0:19:33.000
here on the right hand side is an amoeba[br]plot of the standard cells alongside a
0:19:33.000,0:19:39.380
piece of SRAM, highlighted here since the[br]image wasn't great for presentation, but
0:19:39.380,0:19:45.010
this region here is the SRAM-block. And[br]all those little colorful blocks are
0:19:45.010,0:19:50.370
standard cells, representing your AND-[br]gates and NAND-gates and that sort of
0:19:50.370,0:19:55.040
stuff. What happens is that the foundry[br]will actually ask you to just leave an open
0:19:55.040,0:20:00.000
spot on your mask-design and they'll go[br]ahead and merge in the RAM into that spot
0:20:00.000,0:20:05.290
just before production. The reason why[br]they do this is because stuff like RAM is
0:20:05.290,0:20:08.140
a carefully guarded trade secret. If you[br]can increase the RAM density of your
0:20:08.140,0:20:12.970
foundry process, you can get a lot more[br]customers. There's a lot of know-how in it,
0:20:12.970,0:20:16.880
and so foundries tend not to want to share[br]the RAM. You can compile your own RAM,
0:20:16.880,0:20:20.110
there are open RAM projects, but their[br]performance or their density is not as
0:20:20.110,0:20:24.539
good as the foundry specific ones. So in[br]terms of Hard IP, what are the blocks that
0:20:24.539,0:20:29.589
tend to be Hard IP? Stuff like RF and[br]analog, phase-locked-loops, ADCs, DACs,
0:20:29.589,0:20:34.230
bandgaps. RAM tends to be Hard IP, ROM[br]tends to be Hard IP, the eFuse that stores
0:20:34.230,0:20:38.370
your keys is going to be given to you as[br]an opaque block, the pad ring around your
0:20:38.370,0:20:41.860
chip, the thing that protects your chip[br]from ESD, that's going to be an opaque
0:20:41.860,0:20:46.480
block. Basically all the points you need[br]to backdoor your RTL are going to be
0:20:46.480,0:20:52.010
entrusted to the foundry in a modern[br]process. So OK, let's say, fine, we're
0:20:52.010,0:20:55.650
going ahead and build all of our own IP[br]blocks as well. We're gonna compile our
0:20:55.650,0:21:00.180
RAMs, do our own IO, everything, right?[br]So we're safe, right? Well, turns out that
0:21:00.180,0:21:04.080
masks can be tampered with post-[br]processing. So if you're going to do
0:21:04.080,0:21:07.820
anything in a modern process, the mask[br]designs change quite dramatically from
0:21:07.820,0:21:11.240
what you drew them to what actually ends[br]up in the line: They get fractured into
0:21:11.240,0:21:14.940
multiple masks, they have resolution[br]correction techniques applied to them and
0:21:14.940,0:21:20.700
then they always go through an editing[br]phase. So masks are not born perfect. Masks
0:21:20.700,0:21:24.260
have defects on the inside. And so you can[br]look up papers about how they go and they
0:21:24.260,0:21:28.220
inspect the mask, every single line on the[br]inside; when they find an error, they'll
0:21:28.220,0:21:32.080
patch over it, they'll go ahead and add[br]bits of metal and then take away bits of
0:21:32.080,0:21:36.350
glass to go ahead and make that mask[br]perfect or, better in some way, if you
0:21:36.350,0:21:40.459
have access to the editing capability. So[br]what can you do with mask-editing? Well,
0:21:40.459,0:21:45.080
there's been a lot of papers written on[br]this. You can look up ones on, for
0:21:45.080,0:21:48.590
example, "Dopant tampering". This one[br]actually has no morphological change. You
0:21:48.590,0:21:52.400
can't look at it under a microscope and[br]detect dopant tampering. You have to
0:21:52.400,0:21:57.020
either do some wet chemistry or some[br]X-ray spectroscopy
0:21:57.020,0:22:03.860
to figure it out. This allows for circuit[br]level change without a gross morphological
0:22:03.860,0:22:07.600
change of the circuit. And so this can[br]allow for tampering with things like RNGs
0:22:07.600,0:22:15.500
or some logic paths. There are oftentimes[br]spare cells inside of your ASIC, since
0:22:15.500,0:22:18.230
everyone makes mistakes, including chip[br]designers, and so you want to patch over
0:22:18.230,0:22:22.070
that. It can be done at the mask level, by[br]signal bypassing, these types of things.
0:22:22.070,0:22:29.320
So some certain attacks can still happen[br]at the mask level. So that's a very quick
0:22:29.320,0:22:33.700
sort of idea of how bad it can get when[br]you talk about the time-of-check, time-of-
0:22:29.320,0:22:33.700
use trust problem inside the supply chain.[br]The short summary of implants is that
0:22:39.720,0:22:43.510
there's a lot of places to hide them. Not[br]all of them are expensive or hard. I
0:22:43.510,0:22:48.070
talked about some of the more expensive or[br]hard ones. But remember, wire bonding is
0:22:48.070,0:22:52.770
actually a pretty easy process. It's not[br]hard to do and it's hard to detect. And
0:22:52.770,0:22:56.350
there's really no essential[br]correlation between detection difficulty
0:22:56.350,0:23:02.059
and difficulty of the attack, if you're[br]very careful in planning the attack. So,
0:23:02.059,0:23:06.240
okay, implants are possible. Let's just[br]agree on that, maybe. So now
0:23:06.240,0:23:08.539
the solution is we should just have[br]trustable factories. Let's go ahead and
0:23:08.539,0:23:12.440
bring the fabs to the EU. Let's have a fab[br]in my backyard or whatever it is,
0:23:12.440,0:23:17.580
these types of things. Let's make sure all[br]the workers are logged and registered,
0:23:17.580,0:23:22.400
that sort of thing. Let's talk about that.[br]So if you think about hardware, there's
0:23:22.400,0:23:26.429
you, right? And then we can talk about[br]evil maids. But let's not actually talk
0:23:26.429,0:23:30.270
about those, because that's actually kind[br]of a minority case to worry about. But
0:23:30.270,0:23:35.650
let's think about how stuff gets to you.[br]There's a distributor, who goes through a
0:23:35.650,0:23:39.330
courier, who gets to you. All right. So[br]we've gone and done all this stuff for the
0:23:39.330,0:23:43.679
trusted factory. But it's actually[br]documented that couriers have been
0:23:43.679,0:23:50.300
intercepted and implants loaded, you know,[br]by, for example, the NSA on Cisco products.
0:23:50.300,0:23:55.030
Now, you don't even have to have access to[br]couriers. Thanks to the way modern
0:23:55.030,0:24:00.730
commerce works, other customers can go[br]ahead and just buy a product, tamper with
0:24:00.730,0:24:04.880
it, seal it back in the box, send it back[br]to your distributor. And then maybe you
0:24:04.880,0:24:07.880
get one, right? That can be good enough,[br]particularly if you know a corporation is
0:24:04.880,0:24:07.880
in a particular area. To target them, you[br]buy a bunch of hard drives in the area,
0:24:10.600,0:24:12.510
seal them up, send them back and[br]eventually one of them ends up in the
0:24:12.510,0:24:16.750
right place and you've got your implant,[br]right? So there was a great talk last year
0:24:16.750,0:24:20.200
at 35C3. I recommend you check it out.[br]That talks a little bit more about the
0:24:20.200,0:24:25.100
scenario, sort of removing tamper stickers[br]and you know, the possibility that some
0:24:25.100,0:24:29.412
crypto wallets were sent back into the[br]supply chain and tampered with. OK,
0:24:29.412,0:24:32.480
and then let's take that back. We[br]have to now worry about the wonderful
0:24:32.480,0:24:36.490
people in customs. We have to worry about[br]the wonderful people in the factory who
0:24:36.490,0:24:40.370
have access to your hardware. And so if[br]you cut to the chase, it's a huge attack
0:24:40.370,0:24:44.480
surface in terms of the supply chain,[br]right? From you to the courier to the
0:24:44.480,0:24:49.120
distributor, customs, box build, the box[br]build factory itself. Oftentimes they'll use
0:24:49.120,0:24:53.300
gray-market resources to make[br]themselves more profitable, right? You
0:24:53.300,0:24:56.980
have distributors who go to them. You[br]don't even know who those guys are. PCB
0:24:56.980,0:25:00.740
assembly, components, boards, chip fab,[br]packaging, the whole thing, right? Every
0:25:00.740,0:25:04.270
single point is a place where someone can[br]go ahead and touch a piece of hardware
0:25:04.270,0:25:08.970
along the chain. So can open source save[br]us in this scenario? Does open hardware
0:25:08.970,0:25:12.140
solve this problem? Right. Let's think[br]about it. Let's go ahead and throw some
0:25:12.140,0:25:16.090
developers with git on the left hand side.[br]How far does it get, right? Well, we can
0:25:16.090,0:25:18.880
have some continuous integration checks[br]that make sure that you know the hardware
0:25:18.880,0:25:23.049
is correct. We can have some open PCB[br]designs. We have some open PDK, but then
0:25:23.049,0:25:27.230
from that point, it goes into a rather[br]opaque machine and then, OK, maybe we can
0:25:27.230,0:25:31.090
put some tests at the very edge before it[br]exits the factory to try and catch some
0:25:31.090,0:25:36.110
potential issues, right? But you can see[br]all the other places where a time
0:25:36.110,0:25:40.750
of check, time of use problem can happen.[br]And this is why, you know, I'm saying that
0:25:40.750,0:25:45.700
open hardware on its own is not sufficient[br]to solve this trust problem. Right? And
0:25:45.700,0:25:49.500
the big problem at the end of the day is[br]that you can't hash hardware. Right? There
0:25:49.500,0:25:53.950
is no hash function for hardware. That's[br]why I want to go through that early today.
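By contrast, software enjoys exactly that convenience. A minimal Python sketch (file name illustrative) of the hash-then-verify step that has no hardware equivalent:

```python
import hashlib
import hmac

def sha256_file(path):
    """Hash a file in fixed-size chunks so large images need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    # Constant-time comparison, so the check itself leaks nothing via timing.
    return hmac.compare_digest(sha256_file(path), expected_hex)
```

Here the time of check can be made the time of use: you hash the exact bytes you are about to run. Hardware offers no analogous one-liner.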
0:25:53.950,0:25:57.480
There's no convenient, easy way to[br]basically confirm the correctness of
0:25:57.480,0:26:00.710
your hardware before you use it. Some[br]people say: well, bunnie, you said once, there
0:26:00.710,0:26:05.320
is always a bigger microscope, right? You[br]know, I do some, security reverse
0:26:05.320,0:26:08.370
engineering stuff. This is true, right? So[br]there's a wonderful technique called
0:26:08.370,0:26:12.481
ptychographic X-ray imaging; there is a[br]great paper in Nature about it, where they
0:26:12.481,0:26:16.880
take like a modern i7 CPU and they get[br]down to the gate level nondestructively
0:26:16.880,0:26:20.600
with it, right? It's great for reverse[br]engineering or for design verification.
0:26:20.600,0:26:24.190
The problem number one is it literally[br]needs a building sized microscope. It was
0:26:24.190,0:26:28.940
done at the Swiss Light Source; that donut[br]shaped thing is the size of the light
0:26:28.940,0:26:33.000
source for doing that type of[br]verification, right? So you're not going
0:26:33.000,0:26:36.591
to have one at your point of use, right?[br]You're going to check it there and then
0:26:36.591,0:26:41.279
probably courier it to yourself again.[br]Time of check is not time of use. Problem
0:26:41.279,0:26:46.190
number two, it's expensive to do so.[br]Verifying one chip only verifies one chip
0:26:46.190,0:26:49.760
and as I said earlier, just because 99.9%[br]of your hardware is OK, doesn't mean
0:26:49.760,0:26:54.070
you're safe. Sometimes all it takes is one[br]server out of a thousand to break some
0:26:54.070,0:26:59.110
fundamental assumptions that you have[br]about your cloud. And random sampling just
0:26:59.110,0:27:02.030
isn't good enough, right? I mean, would[br]you random sample signature checks on
0:27:02.030,0:27:06.240
software that you install or download? No.[br]You insist on a 100% check of everything. If
0:27:06.240,0:27:08.441
you want that same standard of[br]reliability, you have to do that for
0:27:08.441,0:27:12.860
hardware. So then, is there any role for[br]open source in trustable hardware?
0:27:12.860,0:27:16.870
Absolutely, yes. Some of you guys may be[br]familiar with that little guy on the
0:27:16.870,0:27:22.799
right, the Spectre logo. So correctness is[br]very, very hard. Peer review can help fix
0:27:22.799,0:27:27.160
correctness bugs. Microarchitectural[br]transparency can enable fixes in Spectre-
0:27:27.160,0:27:30.250
like situations. So, you know, for[br]example, you would love to be able to say
0:27:30.250,0:27:33.750
we're entering a critical region. Let's[br]turn off all the micro architectural
0:27:33.750,0:27:38.340
optimizations, sacrifice performance and[br]then run the code securely and then go
0:27:38.340,0:27:41.250
back into, who cares what mode, and just[br]get done fast, right? That would be a
0:27:41.250,0:27:44.590
switch I would love to have. But without[br]that sort of transparency or without the
0:27:44.590,0:27:48.500
ability to review it, we can't do that. Also,[br]you know, community-driven features and
0:27:48.500,0:27:51.390
community-owned designs are very empowering[br]and make sure that we're sort of building
0:27:51.390,0:27:56.710
the right hardware for the job and that[br]it's upholding our standards. So there is
0:27:56.710,0:28:01.850
a role. It's necessary, but it's not[br]sufficient for trustable hardware. Now the
0:28:01.850,0:28:06.220
question is, OK, can we solve the point of[br]use hardware verification problem? Is it
0:28:06.220,0:28:09.510
all gloom and doom from here on? Well, I[br]didn't bring us here to tell you it's just
0:28:09.510,0:28:14.720
gloom and doom. I've thought about this[br]and I've kind of boiled it into three
0:28:14.720,0:28:19.429
principles for building verifiable[br]hardware. The three principles are: 1)
0:28:19.429,0:28:23.020
Complexity is the enemy of verification.[br]2) We should verify entire systems, not
0:28:23.020,0:28:26.400
just components. 3) And we need to empower[br]end-users to verify and seal their
0:28:26.400,0:28:31.580
hardware. We'll go into this in the[br]remainder of the talk. The first one is
0:28:31.580,0:28:37.090
that complexity is complicated. Right?[br]Without a hashing function, verification
0:28:37.090,0:28:43.830
rolls back to bit-by-bit or atom-by-atom[br]verification. Modern phones just have so
0:28:43.830,0:28:48.690
many components. Even if I gave you the[br]full source code for the SOC inside of a
0:28:48.690,0:28:51.960
phone down to the mask level, what are you[br]going to do with it? How are you going to
0:28:51.960,0:28:56.560
know that this mask actually matches the[br]chip and those two haven't been modified?
0:28:56.560,0:29:01.400
So more complexity is more difficult. The[br]solution is: Let's go to simplicity,
0:29:01.400,0:29:04.250
right? Let's just build things from[br]discrete transistors. Someone's done this.
0:29:04.250,0:29:08.250
The Monster 6502 is great. I love the[br]project. Very easy to verify. Runs at 50
0:29:08.250,0:29:13.250
kHz. So you're not going to do a lot[br]with that. Well, let's build processors at
0:29:13.250,0:29:16.490
a visually inspectable process node. Go to[br]500 nanometers. You can see that with
0:29:16.490,0:29:21.450
light. Well, you know, 100 megahertz clock[br]rate and a very high power consumption and
0:29:21.450,0:29:25.419
you know, a couple kilobytes of RAM probably[br]is not going to really do it either.
0:29:25.419,0:29:30.100
Right? So the point of use verification is[br]a tradeoff between ease of verification
0:29:30.100,0:29:34.070
and features and usability. Right? So[br]these two products up here largely do the
0:29:34.070,0:29:39.280
same thing. AirPods, right? And[br]headphones on your head. Right? AirPods
0:29:39.280,0:29:43.960
have something on the order of tens of[br]millions of transistors for you to verify.
0:29:43.960,0:29:47.570
The headphone that goes on your head. Like[br]I can actually go to Maxwell's equations
0:29:47.570,0:29:50.630
and actually tell you how the magnets work[br]from very first principles. And there's
0:29:50.630,0:29:54.490
probably one transistor on the inside of[br]the microphone to go ahead and amplify the
0:29:54.490,0:29:59.740
membrane. And that's it. Right? So this[br]one, you do sacrifice some features and
0:29:59.740,0:30:02.910
usability, when you go to a headset. Like,[br]you can't say, hey, Siri, and have it
0:30:02.910,0:30:07.510
listen to you and know what you're doing,[br]but it's very easy to verify and know
0:30:07.510,0:30:13.250
what's going on. So in order to start a[br]dialog on user verification, we have to
0:30:13.250,0:30:17.150
first set a context. So I started a[br]project called 'Betrusted', because the
0:30:17.150,0:30:22.100
right answer depends on the context. I[br]want to establish what might be a minimum
0:30:22.100,0:30:27.119
viable, verifiable product. And it's sort[br]of like meant to be user verifiable by
0:30:27.119,0:30:30.230
design. And we think of it as a[br]hardware-software distro. So it's meant to
0:30:30.230,0:30:34.291
be modified and changed and customized[br]based upon the right context at the end of
0:30:34.291,0:30:39.710
the day. This is a picture of what it looks[br]like. I actually have a little prototype
0:30:39.710,0:30:43.919
here. Very, very, very early product here[br]at the Congress. If you wanna look at it.
0:30:43.919,0:30:48.720
It's a mobile device that is meant for[br]sort of communication, sort of text based
0:30:48.720,0:30:52.990
communication and maybe voice, and[br]authentication. So authenticator tokens,
0:30:48.720,0:30:52.990
or like a crypto wallet if you want. And[br]the people we're thinking about who might
0:30:56.320,0:31:00.990
be users are either high value targets[br]politically or financially. So you don't
0:31:00.990,0:31:04.340
have to have a lot of money to be a high[br]value target. You could also be in a very
0:31:04.340,0:31:08.620
politically risky situation. And[br]also, of course, we're looking at developers and
0:31:08.620,0:31:12.299
enthusiasts and ideally we're thinking[br]about a global demographic, not just
0:31:12.299,0:31:15.890
English speaking users, which is sort of a[br]thing when you think about the complexity
0:31:15.890,0:31:18.880
standpoint, this is where we really have[br]to sort of champ at the bit and figure out
0:31:18.880,0:31:24.250
how to solve a lot of hard problems like[br]getting Unicode and, you know, right to
0:31:24.250,0:31:28.210
left rendering and pictographic fonts to[br]work inside a very small attack surface
0:31:28.210,0:31:34.419
device. So this leads me to the second[br]point. In which we verify entire systems,
0:31:34.419,0:31:37.779
not just components. You might say, well, why[br]not just build a chip? Why not? You know,
0:31:37.779,0:31:41.899
why are you thinking about a whole device?[br]Right. The problem is, that private keys
0:31:41.899,0:31:45.830
are not your only private matters. Screens can[br]be scraped and keyboards can be logged. So
0:31:45.830,0:31:50.059
there's some efforts now to build[br]wonderful security enclaves like Keystone
0:31:50.059,0:31:54.600
and OpenTitan, which will build, you[br]know, wonderful secure chips. The problem
0:31:54.600,0:31:58.500
is, that even if you manage to keep your[br]key secret, you still have to get that
0:31:58.500,0:32:03.309
information through an insecure CPU from[br]the screen to the keyboard and so forth.
0:32:03.309,0:32:06.250
Right? And so, you know, people who have[br]used these, you know, on screen touch
0:32:06.250,0:32:09.309
keyboards have probably seen something of[br]a message like this saying that, by the
0:32:09.309,0:32:11.940
way, this keyboard can see everything[br]you're typing, including your passwords.
0:32:11.940,0:32:14.680
Right? And people probably click and say,[br]oh, yeah, sure, whatever. I trust that.
0:32:14.680,0:32:18.840
Right? OK, well, then this little[br]enclave on the side here isn't really
0:32:14.680,0:32:18.840
doing a lot of good. When you go ahead and[br]you say, sure, I'll run this input
0:32:18.840,0:32:22.410
method, it can go ahead and modify all[br]my data and intercept all my data. So in
0:32:28.890,0:32:32.820
terms of making a device verifiable, let's[br]talk about the flow from principles to practice.
0:32:32.820,0:32:36.480
How do I take these three principles and[br]turn them into something? So this is you
0:32:36.480,0:32:40.320
know, this is the idea of taking these[br]three requirements and turning them into a
0:32:40.320,0:32:44.709
set of five features, a physical keyboard,[br]a black and white LCD, a FPGA-based RISC-V
0:32:44.709,0:32:49.310
SoC, user-sealable keys and so on. It's[br]easy to verify and physically protect. So
0:32:49.310,0:32:53.250
let's talk about these features one by[br]one. First one is a physical keyboard. Why
0:32:53.250,0:32:56.259
am I using a physical keyboard and not a[br]virtual keyboard? People love the virtual
0:32:56.259,0:33:00.220
keyboard. The problem is that captouch[br]screens, which are necessary to do a good
0:33:00.220,0:33:04.610
virtual keyboard, have a firmware block.[br]They have a microcontroller to do the
0:33:04.610,0:33:07.650
touch screens, actually. It's actually[br]really hard to build these things well.
0:33:07.650,0:33:10.630
If you can do a good job with it and build[br]an awesome open source one, that'll be
0:33:10.630,0:33:15.020
great, but that's a project in and of[br]itself. So in order to sort of get an easy
0:33:15.020,0:33:17.599
win here where we can, let's just go with[br]the physical keyboard. So this is what the
0:33:17.599,0:33:21.520
device looks like with this cover off. We[br]have a physical keyboard PCB with a
0:33:17.599,0:33:21.520
little overlay, so we[br]can do multilingual inserts, and you can
0:33:21.520,0:33:24.960
change that out. And it's[br]just a two-layer daughter card. Right.
0:33:24.960,0:33:28.580
Just hold it up to the light: OK,[br]switches, wires. Right? Not a lot of
0:33:32.649,0:33:35.500
places to hide things. So I'll take that[br]as an easy win for an input surface,
0:33:35.500,0:33:39.540
that's verifiable. Right? The output[br]surface is a little more subtle. So we're
0:33:39.540,0:33:44.470
doing a black and white LCD. You might say,[br]OK, why not use something fancier? If you ever
0:33:44.470,0:33:52.279
take apart a liquid crystal display, look[br]for a tiny little thin rectangle sort of
0:33:52.279,0:33:57.130
located near the display area. That's[br]actually a silicon chip that's bonded to
0:33:57.130,0:34:00.630
the glass. That's what it looks like at[br]the end of the day. That contains a frame
0:34:00.630,0:34:05.169
buffer and a command interface. It has[br]millions of transistors on the inside and
0:34:05.169,0:34:08.909
you don't know what it does. So if you're[br]ever assuming your adversary may be
0:34:08.909,0:34:14.240
tampering with your CPU, this is also a[br]viable place you have to worry about. So I
0:34:14.240,0:34:18.991
found a screen. It's called a memory LCD[br]by sharp electronics. It turns out they do
0:34:18.991,0:34:22.980
all the drive electronics on glass. So this[br]is a picture of the driver electronics on
0:34:22.980,0:34:26.779
the screen through like a 50x microscope[br]with a bright light behind it. Right? You
0:34:26.779,0:34:34.369
can actually see the transistors that are[br]used to drive everything on the display
0:34:34.369,0:34:37.980
it's a nondestructive method of[br]verification. But actually more important
0:34:37.980,0:34:41.790
to the point is that there's so few places[br]to hide things, you probably don't need to
0:34:41.790,0:34:45.359
check it, right? If you want[br]to add an implant to this, you would need
0:34:45.359,0:34:50.469
to grow the glass area substantially or[br]add a silicon chip, which is a thing that
0:34:50.469,0:34:55.069
you'll notice, right. So at the end of the[br]day, the less places to hide things is
0:34:55.069,0:34:58.510
less need to check things. And so I can[br]feel like this is a screen where I can
0:34:58.510,0:35:02.749
write data to, and it'll show what I want[br]to show. The good news is that display has
0:35:02.749,0:35:07.119
a 200 ppi pixel density. So,[br]even though it's black and white, it's
0:35:07.119,0:35:12.410
kind of closer to e-paper (EPD) in terms of[br]resolution. So now we come to the hard
0:35:12.410,0:35:16.869
part, right, the CPU. The silicon problem,[br]right? Any chip built in the last two
0:35:16.869,0:35:20.559
decades is not going to be fully[br]inspectable with an optical microscope,
0:35:20.559,0:35:24.469
right? Thorough analysis requires removing[br]layers and layers of metal and dielectric.
0:35:24.469,0:35:29.289
This is sort of a cross section of a[br]modernish chip and you can see the sort of
0:35:29.289,0:35:34.930
the huge stack of things to look at on[br]this. This process is destructive and you
0:35:34.930,0:35:37.569
can think of it as hashing, but it's a[br]little bit too literal, right? We want
0:35:37.569,0:35:40.680
something where we can check the thing[br]that we're going to use and then not
0:35:40.680,0:35:46.720
destroy it. So I've spent quite a bit of[br]time thinking about options for
0:35:46.720,0:35:50.319
nondestructive silicon verification. The[br]best I could come up with maybe was using
0:35:50.319,0:35:54.390
optical fault induction somehow combined[br]with some chip design techniques to go
0:35:54.390,0:35:58.009
ahead and like scan a laser across and[br]look at fault syndromes and figure out,
0:35:58.009,0:36:02.019
you know, does the thing... do the gates[br]that we put down correspond to the thing
0:36:02.019,0:36:07.349
that I built. The problem is, I couldn't[br]think of a strategy to do it that wouldn't
0:36:07.349,0:36:10.459
take years and tens of millions of dollars[br]to develop, which puts it a little bit far
0:36:10.459,0:36:13.549
out there and probably in the realm of[br]like sort of venture funded activities,
0:36:13.549,0:36:18.250
which is not really going to be very[br]empowering of everyday people. So let's
0:36:18.250,0:36:22.380
say I want something a little more short-[br]term than that, than that sort of
0:36:22.380,0:36:27.130
platonic ideal of verifiability.[br]So the compromise
0:36:27.130,0:36:32.300
I sort of arrived at is the FPGA. So field[br]programmable gate arrays, that's what FPGA
0:36:32.300,0:36:37.069
stands for, are large arrays of logic and[br]wires that are user configured to
0:36:37.069,0:36:42.109
implement hardware designs. So this here[br]is an image inside an FPGA design tool. On
0:36:42.109,0:36:47.109
the top right is an example of one sort of[br]logic sub cell. It's got a few flip flops
0:36:47.109,0:36:51.920
and lookup tables in it. It's embedded in[br]this huge mass of wires that allow you to
0:36:51.920,0:36:56.069
wire it up at runtime to figure out what's[br]going on. And one thing that this diagram
0:36:56.069,0:36:59.789
here shows is I'm able to sort of[br]correlate the design. I can see: "Okay. The
0:36:59.789,0:37:04.299
decode_to_execute_INSTRUCTION_reg bit 26[br]corresponds to this net." So now we're
0:37:04.299,0:37:09.260
sort of like bringing that Time Of Check a[br]little bit closer to Time Of Use. And so
0:37:09.260,0:37:13.099
the idea is to narrow that ToCToU gap by[br]compiling your own CPU. We can basically
0:37:13.099,0:37:16.510
give you the CPU from source. You can[br]compile it yourself. You can confirm the
0:37:16.510,0:37:20.599
bit stream. So now we're sort of enabling[br]a bit more of that trust transfer like
0:37:20.599,0:37:24.989
software, right. But there's a subtlety in[br]that the toolchains are not necessarily
0:37:24.989,0:37:30.380
always open. There are some FOSS flows like[br]SymbiFlow. They have a 100% open flow for
0:37:30.380,0:37:35.150
iCE40 and ECP5, and there's the 7-series,[br]which has a coming-soon status, but
0:37:35.150,0:37:41.519
they currently require some closed vendor[br]tools. So picking FPGA is a difficult
0:37:41.519,0:37:45.230
choice. There's a usability versus[br]verification tradeoff here. The big
0:37:45.230,0:37:49.119
usability issue is battery life. If we're[br]going for a mobile device, you want to use
0:37:49.119,0:37:54.190
it all day long, not have it dead by[br]noon. It turns out that the best sort of
0:37:54.190,0:37:57.950
chip in terms of battery life is a[br]Spartan-7. It gives you roughly 3 to
0:37:57.950,0:38:05.329
4x in terms of battery life. But the tool[br]flow is still semi-closed. But, you
0:38:05.329,0:38:09.199
know, I am optimistic that symbiflow will[br]get there and we can also fork and make an
0:38:09.199,0:38:13.260
ECP5 version if that's a problem at the[br]end of day. So let's talk a little bit
0:38:13.260,0:38:18.049
more about sort of FPGA features. So one[br]thing I like to say about FPGA is: they
0:38:18.049,0:38:22.420
offer a sort of ASLR, so address-space[br]layout randomization, but for hardware.
0:38:22.420,0:38:27.269
Essentially, a design has a kind of[br]pseudo-random mapping to the device. This
0:38:27.269,0:38:31.019
is a sort of a screenshot of two[br]compilation runs of the same source code
0:38:31.019,0:38:35.379
with a very small modification to it:[br]basically a version number stored in a
0:38:35.379,0:38:41.710
GPR. And then you can see that[br]the locations of a lot of the
0:38:41.710,0:38:45.609
registers are basically shifted around.[br]The reason why this is important is
0:38:45.609,0:38:50.500
because this hinders a significant class[br]of silicon attacks. All those small mask-
0:38:50.500,0:38:53.849
level changes I talked about, the ones[br]where we just say "Okay, we're just gonna
0:38:53.849,0:38:58.459
go ahead and change a few wires or move a[br]couple logic cells around", those become
0:38:58.459,0:39:02.329
much less likely to capture a critical bit.[br]So if you want to go ahead and backdoor a
0:39:02.329,0:39:05.760
full FPGA, you're going to have to change[br]the die size. You have to make it
0:39:05.760,0:39:09.969
substantially larger to be able to sort of[br]like swap out the function in those cases.
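The effect of this placement randomization can be made concrete with a toy model; both the model and its numbers are illustrative, not from the talk. Suppose a mask-level implant hard-wires taps onto a fixed handful of logic sites, while each compilation scatters the design's critical registers pseudo-randomly across the array:

```python
import random

def p_hit(n_sites, n_tapped, n_targets, trials=10_000):
    """Estimate the chance a fixed implant catches a critical register.

    Toy model: an implant taps `n_tapped` of `n_sites` logic sites; each
    compilation places the design's `n_targets` critical registers on
    pseudo-random sites.  Returns the fraction of simulated compilations
    in which at least one critical register lands on a tapped site."""
    tapped = set(range(n_tapped))  # without loss of generality, the first sites
    hits = 0
    for _ in range(trials):
        placement = random.sample(range(n_sites), n_targets)
        if any(site in tapped for site in placement):
            hits += 1
    return hits / trials
```

With, say, ten tapped sites out of tens of thousands and a few critical registers, the per-compilation hit probability is well under a percent, which is why a viable backdoor has to grow to cover much of the array, and why measuring the die size becomes the check.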
0:39:09.969,0:39:13.480
And so now the verification bar goes from[br]looking for a needle in a haystack to
0:39:13.480,0:39:16.959
measuring the size of the haystack, which[br]is a bit easier to do towards the user
0:39:16.959,0:39:22.140
side of things. And it turns out, at least[br]in Xilinx-land, just the change of a
0:39:22.140,0:39:28.819
random parameter does the trick. So some[br]potential attack vectors against FPGAs are
0:39:28.819,0:39:34.279
like "OK, well, it's closed silicon." What[br]are the backdoors there? Notably inside a
0:39:34.279,0:39:38.589
7-series FPGA they actually document[br]introspection features. You can pull out
0:39:38.589,0:39:42.869
anything inside the chip by instantiating[br]a certain special block. And then we still
0:39:42.869,0:39:46.349
also have to worry about the whole class[br]of, like, man-in-the-middle I/O- and JTAG
0:39:46.349,0:39:49.990
implants that I talked about earlier. So[br]it's easy, really easy, to mitigate the
0:39:49.990,0:39:52.809
known blocks, basically lock them down,[br]tie them down, check them in the bit
0:39:52.809,0:39:58.290
stream, right? In terms of the I/O-man-in-[br]the-middle stuff, this is where we're
0:39:58.290,0:40:02.750
talking about like someone goes ahead and[br]puts a chip in in the path of your FPGA.
0:40:02.750,0:40:06.069
There's a few tricks you can do. We can do[br]sort of bus encryption on the RAM and the
0:40:06.069,0:40:11.690
ROM at the design level that frustrates[br]these. At the implementation, basically,
0:40:11.690,0:40:15.190
we can use the fact that data pins and[br]address pins can be permuted without
0:40:15.190,0:40:19.259
affecting the device's function. So every[br]design can go ahead and permute those data
0:40:19.259,0:40:24.670
and address pin mappings sort of uniquely.[br]So any particular implant that goes in
0:40:24.670,0:40:28.150
will have to be able to compensate for all[br]those combinations, making the implant a
0:40:28.150,0:40:32.339
little more difficult to do. And of[br]course, we can also fall back to sort of
0:40:32.339,0:40:37.959
careful inspection of the device. In terms[br]of the closed source silicon, the thing
0:40:37.959,0:40:42.160
that I'm really optimistic about is this.[br]The thing that we have to worry
0:40:42.160,0:40:46.521
about is that, for example, now that
0:40:46.521,0:40:49.769
Xilinx knows that we're doing these[br]trustable devices using a tool chain, they
0:40:49.769,0:40:54.140
push a patch that compiles back doors into[br]your bit stream. So not even as a silicon
0:40:54.140,0:40:57.999
level implant, but like, you know, maybe[br]the tool chain itself has a backdoor that
0:40:57.999,0:41:04.940
recognizes that we're doing this. So the[br]cool thing is, this is a cool project: So
0:41:04.940,0:41:08.789
there's a project called "Prjxray",[br]Project X-Ray; it's part of the SymbiFlow
0:41:08.789,0:41:12.270
effort, and they're actually documenting[br]the full bit stream of the 7-Series
0:41:12.270,0:41:15.799
device. It turns out that we don't yet[br]know what all the bit functions are, but
0:41:15.799,0:41:19.400
the bit mappings are deterministic. So if[br]someone were to try and activate a
0:41:19.400,0:41:22.970
backdoor in the bit stream through[br]compilation, we can see it in a diff. We'd
0:41:22.970,0:41:26.220
be like: Wow, we've never seen this bit[br]flip before. What is this? We can look
0:41:26.220,0:41:29.949
into it and figure out if it's malicious[br]or not, right? So there's actually sort of
0:41:29.949,0:41:33.799
a hope that essentially at the end of the[br]day we can build sort of a bit stream checker.
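Such a checker might amount to a diff against a database of understood bit positions; a minimal sketch, where the raw byte format and the "documented bits" set are hypothetical stand-ins for what the prjxray database provides:

```python
def diff_bitstreams(known_good, candidate, documented_bits):
    """Bit-by-bit diff of two bitstreams (as bytes).

    Returns the positions of bits that differ AND are not in the set of
    documented (understood) bit positions -- exactly the "we've never
    seen this bit flip before" case worth investigating."""
    assert len(known_good) == len(candidate), "bitstreams must be same length"
    suspicious = []
    for i, (a, b) in enumerate(zip(known_good, candidate)):
        changed = a ^ b  # XOR highlights flipped bits in this byte
        for bit in range(8):
            if changed >> bit & 1:
                pos = i * 8 + bit
                if pos not in documented_bits:
                    suspicious.append(pos)
    return suspicious
```

Any position this returns is a candidate for manual investigation: a flipped bit with no known mapping to the design source.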
0:41:33.799,0:41:37.150
We can build a thing that says: Here's a[br]bit stream that came out, does it
0:41:37.150,0:41:40.789
correlate to the design source, do all the[br]bits check out, do they make sense? And so
0:41:40.789,0:41:44.141
ideally we would come up with like a one-[br]click tool. And now we're at the point
0:41:44.141,0:41:47.469
where the point of check is very close to[br]the point of use. The users are now
0:41:47.469,0:41:50.749
confirming that the CPUs are correctly[br]constructed and mapped to the FPGA
0:41:50.749,0:41:56.359
correctly. So the sort of the summary of[br]FPGA vs. custom silicon is sort of like,
0:41:56.359,0:42:02.210
the pros of custom silicon is that they[br]have great performance. We can do a true
0:42:02.210,0:42:05.479
single chip enclave with hundreds of[br]megahertz speeds and tiny power
0:42:05.479,0:42:09.750
consumption. But the cons of silicon are[br]that it's really hard to verify. So, you
0:42:09.750,0:42:13.529
know, open source doesn't help that[br]verification and Hard IP blocks are tough
0:42:13.529,0:42:17.269
problems we talked about earlier. So FPGAs[br]on the other side, they offer some
0:42:17.269,0:42:20.320
immediate mitigation paths. We don't have[br]to wait until we solve this verification
0:42:20.320,0:42:25.049
problem. We can inspect the bit streams,[br]we can randomize the logic mapping and we
0:42:25.049,0:42:30.029
can do per device unique pin mapping. It's[br]not perfect, but it's better than I think
0:42:30.029,0:42:34.529
any other solution I can offer right now.[br]The cons of it is that FPGAs are just
0:42:34.529,0:42:37.959
barely good enough to do this today. So[br]you need a little bit of external RAM
0:42:37.959,0:42:42.219
which needs to be encrypted, but 100[br]megahertz speed performance and about five
0:42:42.219,0:42:47.529
to 10x the power consumption of a custom[br]silicon solution, which in a mobile device
0:42:47.529,0:42:51.849
is a lot. But, you know, actually part of[br]the reason, the main thing that drives the
0:42:51.849,0:42:55.799
thickness in this is the battery, right?[br]And most of that battery is for the FPGA.
0:42:55.799,0:43:01.490
If we didn't have to go with an FPGA it[br]could be much, much thinner. So now let's
0:43:01.490,0:43:05.019
talk a little about the last two points,[br]user-sealable keys, and verification and
0:43:05.019,0:43:08.369
protection. And this is that third point,[br]"empowering end users to verify and seal
0:43:08.369,0:43:13.349
their hardware". So it's great that we can[br]verify something but can it keep a secret?
0:43:13.349,0:43:15.910
No, transparency is good up to a point,[br]but you want to be able to keep secrets so
0:43:15.910,0:43:19.569
that people won't come up and say: oh,[br]there's your keys, right? So sealing a key
0:43:19.569,0:43:23.969
in the FPGA, ideally we want user[br]generated keys that are hard to extract,
0:43:23.969,0:43:28.479
we don't rely on a central keying[br]authority and that any attack to remove
0:43:28.479,0:43:32.910
those keys should be noticeable. So any[br]high-level attacks, I mean, someone with
0:43:32.910,0:43:37.220
infinite funding basically should take[br]about a day to extract it and the effort
0:43:37.220,0:43:40.499
should be trivially evident. The solution[br]to that is basically self provisioning and
0:43:40.499,0:43:45.009
sealing of the cryptographic keys in the[br]bit stream and a bit of epoxy. So let's
0:43:45.009,0:43:49.719
talk a little bit about provisioning those[br]keys. If we look at the 7-series FPGA
0:43:49.719,0:43:56.131
security, they offer AES-256 encrypted[br]bit streams authenticated with an HMAC-
0:43:56.131,0:44:02.170
SHA-256. There's a paper which discloses a[br]known weakness in it, so the attack takes
0:44:02.170,0:44:06.499
about a day or 1.6 million chosen cipher[br]text traces. The reason why it takes a day
0:44:06.499,0:44:09.650
is because that's how long it takes to[br]load in that many chosen ciphertexts
0:44:09.650,0:44:13.940
through the interfaces. The good news is[br]there's some easy mitigations to this. You
0:44:13.940,0:44:16.910
can just glue shut the JTAG-port or[br]improve your power filtering and that
0:44:16.910,0:44:21.599
should significantly complicate the[br]attack. But the point is that it will take
0:44:21.599,0:44:24.109
a fixed amount of time to do this and you[br]have to have direct access to the
0:44:24.109,0:44:28.750
hardware. It's not the sort of thing that,[br]you know, someone at customs or like an
0:44:28.750,0:44:33.369
"evil maid" could easily pull off. And[br]just to put that in perspective, again,
0:44:33.369,0:44:37.940
even if we improved dramatically the DPA-[br]resistance of the hardware, if we knew a
0:44:37.940,0:44:41.830
region of the chip that we want to[br]inspect, probably with a SEM and a
0:44:41.830,0:44:45.140
skilled technician, we could probably pull[br]it off in a matter of a day or a couple of
0:44:45.140,0:44:49.019
days. Takes only an hour to decap the[br]silicon, you know, an hour for a few
0:44:49.019,0:44:52.642
layers, a few hours in the FIB to delayer[br]a chip, and an afternoon in the SEM
0:44:52.642,0:44:57.780
and you can find out the keys, right? But[br]the key point is that, this is kind of the
0:44:57.780,0:45:03.709
level that we've agreed is OK for a lot of[br]the silicon enclaves, and this is not
0:45:03.709,0:45:07.440
going to happen at a customs checkpoint or[br]by an evil maid. So I think I'm okay with
0:45:07.440,0:45:11.150
that for now. We can do better. But I[br]think it's a good starting point,
0:45:11.150,0:45:14.839
particularly for something that's so cheap[br]and accessible. So then how do we get
0:45:14.839,0:45:17.730
those keys in FPGA and how do we keep them[br]from getting out? So those keys should be
0:45:17.730,0:45:21.100
user generated, never leave device, not be[br]accessible by the CPU after it's
0:45:21.100,0:45:24.299
provisioned, be unique per device. And it[br]should be easy for the user to get it
0:45:24.299,0:45:28.170
right. It should be that you don't have to[br]know all this stuff and type a bunch of
0:45:28.170,0:45:35.339
commands to do it, right. So if you look[br]inside Betrusted there's two rectangles
0:45:35.339,0:45:39.319
there, one of them is the ROM that[br]contains a bit stream, the other one is
0:45:39.319,0:45:43.399
the FPGA. So we're going to draw those in[br]the schematic form. Inside the ROM, you
0:45:43.399,0:45:47.589
start the day with an unencrypted bit[br]stream in ROM, which loads an FPGA. And
0:45:47.589,0:45:50.859
then you have this little crypto engine.[br]There's no keys on the inside. There's none
0:45:50.859,0:45:53.859
anywhere. You can check everything. You[br]can build your own bitstream, and do what
0:45:53.859,0:45:59.309
you want to do. The crypto engine then[br]generates keys from a TRNG that's located
0:45:59.309,0:46:02.829
on chip. Probably with some help of off-[br]chip randomness as well, because I don't
0:46:02.829,0:46:06.779
necessarily trust everything inside the[br]FPGA. Then that crypto engine can go ahead
0:46:06.779,0:46:12.009
and, as it encrypts the external bit[br]stream, inject those keys back into the
0:46:12.009,0:46:15.329
bit stream because we know where that[br]block-RAM is. We can go ahead and inject
0:46:15.329,0:46:19.789
those keys back into that specific RAM[br]block as we encrypt it. So now we have a
0:46:19.789,0:46:26.089
sealed encrypted image on the ROM, which[br]can then load the FPGA if it had the key.
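As a heavily simplified sketch of that provisioning flow (all names and the key's block-RAM location are invented for the example, and a SHA-256-derived keystream stands in for the FPGA's AES engine just to keep the sketch self-contained):

```python
# Toy sketch of the self-provisioning flow: generate a key locally,
# inject it into the known block-RAM slot of the image, then encrypt.
# SHA-256 keystream and KEY_SLOT are illustrative stand-ins only; the
# real design uses the FPGA's own crypto engine and AES encryption.
import hashlib
import secrets

KEY_SLOT = slice(16, 48)  # made-up block-RAM location of the key

def generate_key(offchip_entropy: bytes) -> bytes:
    # Mix on-chip TRNG output (stand-in: secrets) with off-chip
    # randomness, since we don't fully trust either source alone.
    return hashlib.sha256(secrets.token_bytes(32) + offchip_entropy).digest()

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal_bitstream(plain_image: bytearray, key: bytes) -> bytes:
    # Inject the key into the known block-RAM slot, then encrypt, so the
    # sealed image carries its own key material only in encrypted form.
    plain_image[KEY_SLOT] = key
    ks = keystream(key, len(plain_image))
    return bytes(a ^ b for a, b in zip(plain_image, ks))

image = bytearray(64)
key = generate_key(b"dice rolls, user input, ...")
sealed = seal_bitstream(image, key)
# Only a holder of the key can recover the plaintext image again:
recovered = bytes(a ^ b for a, b in zip(sealed, keystream(key, len(sealed))))
```

The point of the flow is that the key never has to leave the device: it is generated locally, injected into the image, and exists outside the FPGA only inside the encrypted bit stream.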
0:46:26.089,0:46:28.809
So after you've gone ahead and provisioned[br]the ROM, hopefully at this point you don't
0:46:28.809,0:46:35.660
lose power, you go ahead and you burn the[br]key into the FPGA's keying engine which
0:46:35.660,0:46:40.609
sets it to only boot from that encrypted[br]bit stream, blow the readback-
0:46:40.609,0:46:45.409
disable bit, and the AES-only-boot bit is[br]blown. So now at this point in time,
0:46:45.409,0:46:48.829
basically there's no way to go ahead and[br]put in a bit stream that says tell me your
0:46:48.829,0:46:52.079
keys, whatever it is. You have to go and[br]do one of these hard techniques to pull
0:46:52.079,0:46:56.930
out the key. You can maybe enable hardware[br]upgrade path if you want by having the
0:46:56.930,0:47:00.959
crypto engine just be able to retain a copy[br]of the master key and re-encrypt it, but
0:47:00.959,0:47:04.529
that becomes a vulnerability because the[br]user can be coerced to go ahead and load
0:47:04.529,0:47:08.240
inside a bit stream that then leaks out[br]the keys. So if you're really paranoid at
0:47:08.240,0:47:13.720
some point in time, you seal this thing[br]and it's done. You know, you have to go
0:47:13.720,0:47:18.109
ahead and do that full key extraction[br]routine to go ahead and pull stuff out if
0:47:18.109,0:47:21.999
you forget your passwords. So that's the[br]sort of user-sealable keys. I think we can
0:47:21.999,0:47:27.729
do that with FPGA. Finally, easy to verify[br]and easy to protect. Just very quickly
0:47:27.729,0:47:31.119
talking about this. So if you want to make[br]an inspectable tamper barrier, a lot of
0:47:31.119,0:47:34.619
people have talked about glitter seals.[br]Those are pretty cool, right? The problem
0:47:34.619,0:47:39.490
is, I find that glitter seals are too hard[br]to verify. Right. Like, I have tried
0:47:39.490,0:47:42.660
glitter-seals before and I stare at the[br]thing and I'm like: Damn, I have no idea
0:47:42.660,0:47:45.489
if this is the seal I put down. And so[br]then I say, ok, we'll take a picture or
0:47:45.489,0:47:50.079
write an app or something. Now I'm relying[br]on this untrusted device to go ahead and
0:47:50.079,0:47:55.700
tell me if the seal is verified or not. So[br]I have a suggestion for a DIY watermark
0:47:55.700,0:47:59.629
that relies not on an app to go and[br]verify, but our very, very well tuned
0:47:59.629,0:48:03.089
neural networks inside our head to go[br]ahead and verify things. So the idea is
0:48:03.089,0:48:08.350
basically, there's this nice epoxy that I[br]found. It comes in these bi-packs, a two-part
0:48:08.350,0:48:12.319
epoxy: you just put it on the edge of a table[br]and you go like this and it goes ahead and
0:48:12.319,0:48:17.249
mixes the epoxy and you're ready to use.[br]It's very easy for users to apply. And
0:48:17.249,0:48:21.039
then you just draw a watermark on a piece[br]of tissue paper. It turns out humans are
0:48:21.039,0:48:25.260
really good at identifying our own[br]handwriting, our own signatures, these
0:48:25.260,0:48:28.359
types of things. Someone can go ahead and[br]try to forge it. There's people who are
0:48:28.359,0:48:32.579
skilled in doing this, but this is way[br]easier than looking at a glitter-seal. You
0:48:32.579,0:48:36.539
go ahead and put that down on your device.[br]You swab on the epoxy and at the end of
0:48:36.539,0:48:41.119
day, you end up with a sort of tissue[br]paper plus a very easily recognizable
0:48:41.119,0:48:44.501
seal. If someone goes ahead and tries to[br]take this off or tamper with it, I can
0:48:44.501,0:48:47.980
look at it easy and say, yes, this is a[br]different thing than what I had yesterday,
0:48:47.980,0:48:50.749
I don't have to open an app, I don't have[br]to look at glitter patterns, I don't have
0:48:50.749,0:48:54.229
to do these sorts of things. And I can go[br]ahead and swab onto all the I/O-ports that
0:48:54.229,0:49:01.869
need to do. So it's a bit of a hack, but I[br]think that it's a little closer towards
0:49:01.869,0:49:09.900
not having to rely on third party apps to[br]verify a tamper evidence seal. So I've
0:49:09.900,0:49:16.249
talked about sort of this implementation[br]and also talked about how it maps to these
0:49:16.249,0:49:20.859
three principles for building trustable[br]hardware. So the idea is to try to build a
0:49:20.859,0:49:25.729
system that is not too complex so that we[br]can verify most of the parts or all of them
0:49:25.729,0:49:29.829
at the end-user point, look at the[br]keyboard, look at the display and we can
0:49:29.829,0:49:35.930
go ahead and compile the FPGA from source.[br]We're focusing on verifying the entire
0:49:35.930,0:49:40.459
system, the keyboard and the display,[br]we're not forgetting the user. The secret
0:49:40.459,0:49:43.199
starts with the user and ends with the[br]user, not with the edge of the silicon.
0:49:43.199,0:49:47.939
And finally, we're empowering end-users to[br]verify and seal their own hardware. You
0:49:47.939,0:49:52.049
don't have to go through a central keying[br]authority to go ahead and make sure
0:49:52.049,0:49:56.730
secrets are inside your hardware. So[br]at the end of the day, the idea behind
0:49:56.730,0:50:01.460
Betrusted is to close that hardware time[br]of check/time of use gap by moving the
0:50:01.460,0:50:07.690
verification point closer to the point of[br]use. So in this huge, complicated
0:50:07.690,0:50:12.329
landscape of problems that we can have,[br]the idea is that we want to, as much as
0:50:12.329,0:50:19.279
possible, teach users to verify their own[br]stuff. So by design, it's meant to be a
0:50:19.279,0:50:23.249
thing that hopefully anyone can be taught[br]to sort of verify and use, and we can
0:50:23.249,0:50:27.520
provide tools that enable them to do that.[br]But if that ends up being too high of a
0:50:27.520,0:50:31.920
bar, I would like it to be within like one[br]or two nodes in your immediate social
0:50:31.920,0:50:35.550
network, so anyone in the world can find[br]someone who can do this. And the reason
0:50:35.550,0:50:41.240
why I kind of set this bar is, I want to[br]sort of define the maximum level of
0:50:41.240,0:50:45.330
technical competence required to do this,[br]because it's really easy, particularly
0:50:45.330,0:50:48.999
when sitting in an audience of these[br]really brilliant technical people to say,
0:50:48.999,0:50:52.210
yeah, of course everyone can just hash[br]things and compile things and look at
0:50:52.210,0:50:55.499
things in microscopes and solder and then[br]you get into life and reality and then be
0:50:55.499,0:51:01.019
like: oh, wait, I had completely forgotten[br]what real people are like. So this tries
0:51:01.019,0:51:06.770
to get me grounded and make sure that I'm[br]not sort of drinking my own Kool-Aid in
0:51:06.770,0:51:11.719
terms of how useful open hardware is as a[br]mechanism to verify anything. Because I
0:51:11.719,0:51:14.459
hand a bunch of people schematics and say,[br]check this and they'll be like: I have no
0:51:14.459,0:51:22.459
idea. So the current development status is[br]that the hardware is at kind of an initial
0:51:22.459,0:51:27.969
EVT stage; prototypes are subject to significant[br]change. Part of the reason
0:51:27.969,0:51:31.589
we're here talking about this is to[br]collect more ideas and feedback on this,
0:51:31.589,0:51:35.869
to make sure we're doing it right. The[br]software is just starting. We're writing
0:51:35.869,0:51:40.809
our own OS called Xous, being done by Sean[br]Cross, and we're exploring the UX and
0:51:40.809,0:51:44.180
applications being done by Tom Marble[br]shown here. And I actually want to give a
0:51:44.180,0:51:48.891
big shout out to NLnet for funding us[br]partially. We have a grant, a couple of
0:51:48.891,0:51:52.559
grants under privacy and trust[br]enhancing technologies. This is really
0:51:52.559,0:51:57.309
significant because now we can actually[br]think about the hard problems, and not
0:51:57.309,0:52:00.260
have to be like, oh, when do we go[br]crowdfunded, when do we go fundraising.
0:52:00.260,0:52:04.030
Like a lot of time, people are like: This[br]looks like a product, can we sell this
0:52:04.030,0:52:10.849
now? It's not ready yet. And I want to be[br]able to take the time to talk about it,
0:52:10.849,0:52:15.780
listen to people, incorporate changes and[br]make sure we're doing the right thing. So
0:52:15.780,0:52:18.900
with that, I'd like to open up the floor[br]for Q&A. Thanks to everyone, for coming to
0:52:18.900,0:52:20.400
my talk.
0:52:20.400,0:52:29.299
Applause
0:52:29.299,0:52:32.239
Herald: Thank you so much, bunnie, for the[br]great talk. We have about five minutes
0:52:32.239,0:52:36.130
left for Q&A. For those who are leaving[br]earlier, you're only supposed to use the
0:52:36.130,0:52:40.480
two doors on the left, not the one, not[br]the tunnel you came in through, but only
0:52:40.480,0:52:44.769
the doors on the left back, the very left[br]door and the door in the middle. Now, Q&A,
0:52:44.769,0:52:49.310
you can pile up at the microphones. Do we[br]have a question from the Internet? No, not
0:52:49.310,0:52:54.170
yet. If someone wants to ask a question[br]but is not present but in the stream, or
0:52:54.170,0:52:57.790
maybe a person in the room who wants to[br]ask a question, you can use the hashtag
0:52:57.790,0:53:01.849
#Clark and Twitter. Mastodon and IRC are[br]being monitored. So let's start with
0:53:01.849,0:53:04.489
microphone number one.[br]Your question, please.
0:53:04.489,0:53:10.169
Q: Hey, bunnie. So you mentioned that with[br]the foundry process that the Hard IP-
0:53:10.169,0:53:16.569
blocks, the prototyped IP-blocks are a[br]place where attacks could be made. Do you
0:53:16.569,0:53:22.469
have the same concern about the Hard IP[br]blocks in the FPGA, either the embedded
0:53:22.469,0:53:27.559
block RAM or any of the other special[br]features that you might be using?
0:53:27.559,0:53:34.059
bunnie: Yeah, I think that we do have to[br]be concerned about implants that have
0:53:34.059,0:53:40.630
existed inside the FPGA prior to this[br]project. And I think there is a risk, for
0:53:40.630,0:53:44.930
example, that there's a JTAG-path that we[br]didn't know about. But I guess the
0:53:44.930,0:53:49.229
compensating side is that the military,[br]U.S. military does use a lot of these in
0:53:49.229,0:53:52.869
their devices. So they have a self-[br]interest in not having backdoors inside of
0:53:52.869,0:54:01.280
these things as well. So we'll see. I[br]think that the answer is it's possible. I
0:54:01.280,0:54:07.549
think the upside is that because the FPGA[br]is actually a very regular structure,
0:54:07.549,0:54:11.099
doing like sort of a SEM-level analysis,[br]of the initial construction of it at
0:54:11.099,0:54:15.220
least, is not insane. We can identify[br]these blocks and look at them and make
0:54:15.220,0:54:18.880
sure they have the right number of bits. That[br]doesn't mean the one you have today is the
0:54:18.880,0:54:22.759
same one. But if they were to go ahead and[br]modify that block to do sort of the
0:54:22.759,0:54:26.920
implant, my argument is that because of[br]the randomness of the wiring and the
0:54:26.920,0:54:29.839
number of factors they have to consider,[br]they would have to actually grow the
0:54:29.839,0:54:34.779
silicon area substantially. And that's a[br]thing that is a proxy for detection of
0:54:34.779,0:54:38.459
these types of problems. So that would be[br]my kind of half answer to that problem.
0:54:38.459,0:54:41.269
It's a good question, though. Thank you.[br]Herald: Thanks for the question. The next
0:54:41.269,0:54:46.069
one from microphone number three, please.[br]Yeah. Move close to the microphone.
0:54:46.069,0:54:50.969
Thanks.[br]Q: Hello. My question is, in your proposed
0:54:50.969,0:54:56.459
solution, how do you get around the fact[br]that the attacker, whether it's an implant
0:54:56.459,0:55:01.609
or something else, will just attack it[br]before the user self-provisions, so
0:55:01.609,0:55:04.769
it'll compromise the self-provisioning[br]process itself?
0:55:04.769,0:55:13.009
bunnie: Right. So the idea of the self[br]provisioning process is that we send the
0:55:13.009,0:55:18.509
device to you, you can look at the circuit[br]boards and devices and then you compile
0:55:18.509,0:55:23.630
your own FPGA bitstream, which includes the[br]self-provisioning code, from source, and you can
0:55:23.630,0:55:26.339
confirm, or if you don't want to compile,[br]you can confirm that the signatures match
0:55:26.339,0:55:30.049
with what's on the Internet. And so[br]someone wanting to go ahead and compromise
0:55:30.049,0:55:34.390
that process and so stash away some keys[br]in some other place, that modification
0:55:34.390,0:55:40.400
would either be evident in the bit stream[br]or that would be evident as a modification
0:55:40.400,0:55:44.230
of the hash of the code that's running on[br]it at that point in time. So someone would
0:55:44.230,0:55:49.710
have to then add a hardware implant, for[br]example, to the ROM, but that doesn't help
0:55:49.710,0:55:52.229
because it's already encrypted by the time[br]it hits the ROM. So it'd really have to be
0:55:52.229,0:55:55.539
an implant that's inside the FPGA and then[br]Trammell's question just sort of talked
0:55:55.539,0:56:01.609
about that situation itself. So I think[br]the attack surface is limited at least for
0:56:01.609,0:56:05.549
that.[br]Q: So you talked about how the courier
0:56:05.549,0:56:11.809
might be a hacker, right? So in this case,[br]you know, the courier would put a
0:56:11.809,0:56:17.719
hardware implant, not in the Hard IP, but[br]just in the piece of hardware inside the
0:56:17.719,0:56:21.519
FPGA that provisions the bit stream.[br]bunnie: Right. So the idea is that you
0:56:21.519,0:56:26.529
would get that FPGA and you would blow[br]your own FPGA bitstream yourself. You
0:56:26.529,0:56:30.339
don't trust my factory to give you a bit[br]stream. You get the device.
0:56:30.339,0:56:34.209
Q: How do you trust that the bitstream is[br]being blown? You just get an indication, your
0:56:34.209,0:56:36.609
computer's saying this[br]bitstream is being blown.
0:56:36.609,0:56:40.109
bunnie: I see, I see, I see. So how do you[br]trust that the ROM actually doesn't have a
0:56:40.109,0:56:43.499
backdoor in itself that's pulling in the[br]secret bit stream that's not related to
0:56:43.499,0:56:52.839
yours. I mean, possible, I guess. I think[br]there are things you can do to defeat
0:56:52.839,0:56:58.640
that. So the way that we do the semi[br]randomness in the compilation is that
0:56:58.640,0:57:02.599
there's a random 64-bit number we[br]compile into the bit stream. So we're
0:57:02.599,0:57:07.099
compiling our own bitstream. You can read[br]out that number and see if it matches. At
0:57:07.099,0:57:13.039
that point, if someone had pre burned a[br]bit stream onto it that is actually loaded
0:57:13.039,0:57:16.499
instead of your own bit stream, it's not[br]going to be able to have that random
0:57:16.499,0:57:21.309
number, for example, on the inside. So I[br]think there's ways to tell if, for
0:57:21.309,0:57:24.539
example, the ROM has been backdoored and[br]it has two copies of the bit stream, the
0:57:24.539,0:57:27.399
evil one and yours, and then[br]they're going to use the evil one during
0:57:27.399,0:57:31.169
provisioning, right? I think that's a[br]thing that can be mitigated.
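A minimal sketch of that compiled-in nonce check (the layout, with the nonce appended at the end of the image, and the readback path are both hypothetical stand-ins for whatever interface the real design exposes):

```python
# Toy sketch of the 64-bit nonce check: compile a fresh random number
# into our own bit stream, then read it back after loading and compare.
# A pre-burned evil bit stream could not contain a nonce we only just
# generated, so a mismatch reveals the swap.
import secrets

def compile_bitstream(design: bytes):
    nonce = int.from_bytes(secrets.token_bytes(8), "big")
    image = design + nonce.to_bytes(8, "big")  # toy layout: nonce at the end
    return image, nonce

def read_back_nonce(loaded_image: bytes) -> int:
    # Stand-in for reading the nonce back out of the running FPGA.
    return int.from_bytes(loaded_image[-8:], "big")

image, nonce = compile_bitstream(b"my design")
assert read_back_nonce(image) == nonce  # matches only if our image loaded
```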
0:57:31.169,0:57:33.779
Herald: All right. Thank you very much. We[br]take the very last question from
0:57:33.779,0:57:39.309
microphone number five.[br]Q: Hi, bunnie. So one of the options you
0:57:39.309,0:57:44.569
sort of touched on in the talk but then[br]didn't pursue was this idea of doing some
0:57:44.569,0:57:49.769
custom silicon in a sort of very low-res[br]process that could be optically inspected
0:57:49.769,0:57:51.769
directly.[br]bunnie: Yes.
0:57:51.769,0:57:55.950
Q: Is that completely out of the question[br]in terms of being a usable route in the
0:57:55.950,0:58:00.099
future or, you know, did you look into[br]that and go to detail at all?
0:58:00.099,0:58:05.069
bunnie: So I thought about that when[br]there's a couple of issues: 1) Is that if
0:58:05.069,0:58:10.109
we rely on optical verification now, users[br]need to do optical verification prior to use.
0:58:10.109,0:58:14.209
So we have to somehow move those optical[br]verification tools to the edge towards
0:58:14.209,0:58:17.559
that time of use. Right. So nice thing[br]about the FPGA is everything I talked
0:58:17.559,0:58:20.869
about building your own bit stream,[br]inspecting the bit stream, checking the
0:58:20.869,0:58:27.279
hashes. Those are things that don't[br]require any particular sort of user equipment.
0:58:27.279,0:58:32.219
But yes, if we were to go ahead and[br]build like an enclave out of 500
0:58:32.219,0:58:36.369
nanometer silicon, it'd probably run[br]around 100 megahertz, you'd have a few
0:58:36.369,0:58:40.960
kilobytes of RAM on the inside. Not a lot.[br]Right. So you have a limitation in how
0:58:40.960,0:58:47.499
much capability you have on it and would[br]consume a lot of power. But then every
0:58:47.499,0:58:52.710
single one of those chips. Right. We put[br]them in a black piece of epoxy. How do you
0:58:52.710,0:58:55.420
like, you know, what keeps someone from[br]swapping that out with another chip?
0:58:55.420,0:58:58.009
Q: Yeah. I mean, I was thinking of[br]like old school, transparent top, like on
0:58:58.009,0:59:00.109
a lark.[br]bunnie: So, yeah, you can go ahead and
0:59:00.109,0:59:03.599
wire bond on the board, put some clear[br]epoxy on and then now people have to take
0:59:03.599,0:59:11.009
a microscope to look at that. That's a[br]possibility. I think that that's the sort
0:59:11.009,0:59:15.049
of thing that I think I am trying to[br]imagine. Like, for example, my mom using
0:59:15.049,0:59:19.640
this and asking her to do this sort of stuff.[br]I just don't envision her knowing anyone
0:59:19.640,0:59:22.719
who would have an optical microscope who[br]could do this for her, except for me. Right.
0:59:22.719,0:59:29.089
And I don't think that's a fair assessment[br]of what is verifiable by the end user. At
0:59:29.089,0:59:33.599
the end of the day. So maybe for some[br]scenarios it's OK. But I think that the
0:59:33.599,0:59:37.599
full optical verification of a chip and[br]making that sort of the only thing between
0:59:37.599,0:59:42.589
you and an implant, worries me. That's the[br]problem with the hard chip:
0:59:42.589,0:59:46.589
basically, even if it's fully clear,[br]someone could just swap out the chip
0:59:46.589,0:59:51.699
with another chip, right. You still need,[br]you
0:59:51.699,0:59:55.699
know, a piece of equipment to check that.[br]Right. Whereas like when I talked about
0:59:55.699,0:59:58.652
the display and the fact that you can look[br]at that, actually the argument for that is
0:59:58.652,1:00:01.660
not that you have to check the display.[br]It's that you don't; it's actually, because
1:00:01.660,1:00:04.700
it's so simple, you don't need to check[br]the display. Right. You don't need the
1:00:04.700,1:00:07.809
microscope to check it, because there is[br]no place to hide anything.
1:00:07.809,1:00:11.169
Herald: All right, folks, we ran out of[br]time. Thank you very much to everyone who
1:00:11.169,1:00:14.319
asked a question. And please give another[br]big round of applause to our great
1:00:14.319,1:00:17.319
speaker, bunnie. Thank you so much for the[br]great talk. Thanks.
1:00:17.319,1:00:18.319
Applause
1:00:18.319,1:00:20.869
bunnie: Thanks everyone!
1:00:20.869,1:00:23.796
Outro
1:00:23.796,1:00:46.000
Subtitles created by c3subtitles.de[br]in the year 2020. Join, and help us!