36C3 Intro
♪ (intro music) ♪
Herald: Welcome, everybody, to our very
first talk on the first day of Congress.
The talk is "Open Source is Insufficient
to Solve Trust Problems in Hardware,"
and although there is a lot to be said
for free and open software, it is
unfortunately not always inherently more
secure than proprietary or closed software,
and the same goes for hardware as well.
And this talk will take us into
the nitty gritty bits of how to build
trustable hardware and how it has to be
implemented and brought together
with the software in order to be secure.
We have one speaker here today.
It's bunnie.
He's a hardware and firmware hacker.
But actually,
the talk was worked on by three people,
so it's not just bunnie, but also
Sean "Xobs" Cross and Tom Marble.
But the other two are not present today.
But I would like you to welcome
our speaker, bunnie,
with a big, warm, round of applause,
and have a lot of fun.
Applause
bunnie: Good morning, everybody.
Thanks for braving the crowds
and making it in to the Congress.
And thank you again to the Congress
for giving me the privilege
to address the Congress again this year.
Very exciting being the first talk
of the day. Had font problems,
so I'm running from a .pdf backup.
So we'll see how this all goes.
Good thing I make backups. So the
topic of today's talk is
"Open Source is Insufficient
to Solve Trust Problems in Hardware,"
and sort of some things
we can do about this.
So my background is, I'm
a big proponent of open source hardware. I
love it. And I've built a lot of things in
open source, using open source hardware
principles. But there's been sort of a
nagging question in me about like, you
know, some people would say things like,
oh, well, you know, you build open source
hardware because you can trust it more.
And there's been sort of this gap in my
head and this talk tries to distill out
that gap in my head between trust and open
source and hardware. So I'm sure people
have opinions on which browser you would think is more secure or trustable than the others. But the question is why you might think one is more trustable than the other. You have everything in here, from, like, Firefox and Iceweasel down to, like, the Samsung custom browser or, you know, the Xiaomi custom browser. Which one would
you rather use for your browsing if you
had to trust something? So I'm sure people
have their biases and they might say that
open is more trustable. But why do we say
open is more trustable? Is it because we
actually read the source thoroughly and
check it every single release for this
browser? Is it because we compile our
source, our browsers from source before we
use them? No, actually we don't have the
time to do that. So let's take a closer
look as to why we like to think that open
source software is more secure. So this is
a kind of a diagram of the lifecycle of,
say, a software project. You have a bunch
of developers on the left. They'll commit
code into some source management program
like git. It goes to a build. And then
ideally, some person who carefully manages the keys signs that build; it goes into an untrusted cloud, then gets downloaded onto users' disks, pulled into RAM, and run by the user at the end of the day. Right? So the
reason why actually we find that we might
be able to trust things more is because in
the case of open source, anyone can pull
down that source code, like someone doing reproducible builds or an audit of some type,
build it, confirm that the hashes match
and that the keys are all set up
correctly. And then the users also have
the ability to know developers and sort of
enforce community norms and standards upon
them to make sure that they're acting sort of in the favor of the community. So
in the case that we have bad actors who
want to go ahead and tamper with builds
and clouds and all the things in the
middle, it's much more difficult. So open
is more trustable because we have tools to
transfer trust in software, things like
hashing, things like public keys, things
like Merkle trees. Right? And also in the
case of open versus closed, we have social
networks that we can use to reinforce our
community standards for trust and
security. Now, it's worth looking a little
bit more into the hashing mechanism
because this is a very important part of
our software trust chain. So I'm sure a
lot of people know what hashing is, but for people who don't know: basically, it takes
a big pile of bits and turns them into a
short sequence of symbols so that a tiny
change in the big pile of bits makes a big
change in the output symbols. And also
knowing those symbols doesn't reveal
anything about the original file. So in
this case here, the file on the left is
hashed to sort of cat, mouse, panda, bear
and the file on the right hashes to, you
know, peach, snake, pizza, cookie. And the
thing is, you may not even have noticed that there was that one bit changed up there, but it's very easy to see that that short string of symbols has changed. So you don't actually have to go
through that whole file and look for that
needle in the haystack. You have this hash
function that tells you something has
changed very quickly. Then once you've
computed the hashes, we have a process
called signing, where a secret key is used
to encrypt the hash, users decrypt that
using the public key to compare against a
locally computed hash. You know, so we're
not trusting the server to compute the
hash. We reproduce it on our site and then
we can say that it is now difficult to
modify that file or the signature without
detection. Now the problem is that there is a time of check, time of use issue with this system: even though we have this mechanism, if we decouple the point of check from the point of use, it creates a man in the middle opportunity, or a person in the middle if you want. The thing is that,
you know, it's a class of attacks that
allows someone to tamper with data as it
is in transit. And I'm kind of symbolizing
this evil guy, I guess, because hackers
all wear hoodies and, you know, they also
keep us warm as well in very cold places.
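To make those trust-transfer tools concrete, here is a minimal sketch of the kind of check being described: hash the downloaded artifact and verify a detached signature against the publisher's public key. It assumes an Ed25519 signature and the Python cryptography package; the function and file handling are illustrative, not from the talk.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_download(path: str, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Point of check: hash the file on disk and verify the publisher's signature."""
    data = open(path, "rb").read()
    # a single flipped bit in `data` produces a completely different digest
    print("sha256:", hashlib.sha256(data).hexdigest())
    try:
        Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(signature, data)
        return True
    except InvalidSignature:
        return False
```

Note that this inspects the copy sitting on disk at one moment in time; whatever is later read into RAM and executed is a separate access, which is exactly the gap the following example exploits.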
So now an example of a time of check, time
of use issue is that if, say, a user
downloads a copy of the program onto their
disk and they just check it after the download to the disk. And they say, okay, great, that's fine. Later on, an adversary can then modify the file on the disk before it's copied to RAM. And now actually the user, even though they downloaded the correct version of the file, they're getting the wrong version into the
RAM. So the key point is the reason why in
software we feel it's more trustworthy, we
have a tool to transfer trust and ideally,
we place that point of check as close to
the users as possible. So the idea is that we're
sort of putting keys into the CPU or some
secure enclave that, you know, just before
you run it, you've checked it, that
software is perfect and has not been
modified, right? Now, an important
clarification is that it's actually more
about the place of check versus the place
of use. Whether you checked one second
prior or a minute prior doesn't actually
matter. It's more about checking the copy
that's closest to the thing that's running
it, right? We don't call it PoCPoU because
it just doesn't have quite the same ring
to it. But now this is important. That
reason why I emphasize place of check
versus place of use is, this is why
hardware is not the same as software in
terms of trust. The place of check is not
the place of use or in other words, trust
in hardware is a ToCToU problem all the
way down the supply chain. Right? So the
hard problem is how do you trust your
computers? Right? So we have problems
where we have firmware, pervasive hidden
bits of code that are inside every single
part of your system that can break
abstractions. And there's also the issue
of hardware implants. So it's tampering or
adding components that can bypass security in ways that are not according to the specification that you're building around. So from the firmware standpoint, it's more here to acknowledge it as an issue. The problem is, this is actually a software problem. The good news is we have things like openness and runtime verification that are going to frame these questions. If
you're, you know, a big enough player or
you have enough influence or something,
you can coax out all the firmware blobs
and eventually sort of solve that problem.
The bad news is that you're still relying
on the hardware to obediently run the
verification. So if your hardware isn't
running the verification correctly, it
doesn't matter that you have all the
source code for the firmware. Which brings
us to the world of hardware implants. So
very briefly, it's worth thinking about,
you know, how bad can this get? What are
we worried about? What is the field? If we
really want to be worried about trust and
security, how bad can it be? So I've spent
many years trying to deal with supply
chains. They're not friendly territory.
There's a lot of reasons people want to
screw with the chips in the supply chain.
For example, here this is a small ST
microcontroller, claims to be a secure
microcontroller. Someone was like: "Ah,
this is not secure, you know, it's not behaving correctly." We dissolved off the top of it. On the inside, it's an LCX244
buffer. Right. So like, you know, this was
not done because someone wanted to tamper
with the secure microcontroller. It's
because someone wants to make a quick
buck. Right. But the point is that that
marking on the outside is convincing.
Right. It could've been any chip on the
inside in that situation. Another problem
that I've had personally: I was building
a robot controller board that had an FPGA
on the inside. We manufactured a thousand
of these and about 3% of them weren't passing tests, so we set them aside. Later on, I
pulled these units that weren't passing
tests and looked at them very carefully.
And I noticed that all of the units, the
FPGA units that weren't passing tests had that white rectangle on them, which is shown in a bigger, more zoomed-in version here. It
turned out that underneath that white
rectangle were the letters ES, for engineering sample. So someone had gone in and laser-blasted off the letters which
say that's an engineering sample, which
means they're not qualified for regular
production, blending them into the supply
chain at a 3% rate and managed to
essentially double their profits at the
end of the day. The reason why this works
is because distributors make a small
amount of money. So even a few percent
actually makes them a lot more profit at
the end of the day. But the key takeaway is that just because 97% of your hardware is okay, it does not mean that
you're safe. Right? So it doesn't help to
take one sample out of your entire set of
hardware and say all this is good. This is
constructed correctly right, therefore all
of them should be good. That's a ToCToU
problem, right? 100% hardware verification is mandatory if you're worried about trust and verification. So let's go a bit
further down the rabbit hole. This is a
diagram, sort of an ontology of supply
chain attacks. And I've kind of divided it
into two axes. On the vertical axis is how easy it is to detect, or how hard.
Right? So in the bottom you might need a
SEM, a scanning electron microscope to do
it, in the middle is an x-ray, a little
specialized and at the top is just visual
or JTAG like anyone can do it at home.
Right? And then from left to right is
execution difficulty. Right? Things that are going to take millions of dollars and months, things that are going to take $10 and weeks, or a dollar and seconds. Right? There's sort of
several broad classes I've kind of
outlined here. Adding components is very
easy. Substituting components is very
easy. We don't have enough time to really
go into those. But instead, we're gonna
talk about kind of the two more scary
ones, which are sort of adding a chip
inside a package and IC modifications. So
let's talk about adding a chip in a
package. This one has sort of grabbed a
bunch of headlines; so, sort of, in the Snowden files, we found these, like, NSA implants where they had put chips
literally inside of connectors and other
chips to modify the computer's behavior.
Now, it turns out that actually adding a
chip in a package is quite easy. It
happens every day. This is a routine
thing, right? If you open up any SD card or micro-SD card that you have, you're
going to find that it has two chips on the
inside at the very least. One is a
controller chip, one is a memory chip. In
fact, they can stick 16, 17 chips inside
of these packages today very handily.
Right? And so if you want to go ahead and
find these chips, is the solution to go ahead and X-ray all the things? You just take every single circuit board and throw it inside an x-ray machine? Well, this is
what a circuit board looks like, in the
x-ray machine. Some things are very
obvious. So on the left, we have our
Ethernet magnetic jacks and there's a
bunch of stuff on the inside. Turns out
those are all OK right there. Don't worry
about those. And on the right, we have our
chips. And this one here, you may be sort
of tempted to look and say, oh, I see this
big sort of square thing on the bottom
there. That must be the chip. Actually,
turns out that's not the chip at all.
That's the solder pad that holds the chip
in place. You can't actually see the chip
as the solder is masking it inside the
x-ray. So when we're looking at a chip
inside of an x-ray, I've kind of given you a look right here: on the left is what it looks like sort of in 3-D, and the right is what it looks like in an x-ray, sort of
looking from the top down. You're looking
at ghostly outlines with very thin spidery
wires coming out of it. So if you were to
look at a chip-on-chip in an x-ray, this
is actually an image of a chip. So in the
cross-section, you can see the several
pieces of silicon that are stacked on top
of each other. And if you could actually
do an edge on x-ray of it, this is what
you would see. Unfortunately, you'd have
to take the chip off the board to do the
edge on x-ray. So what you do is you have
to look at it from the top down, and when we look at it from the top down, all you see are basically some straight wires. Like, it's not obvious from that top-down x-ray whether you're looking at multiple chips, eight chips, one chip, how many chips are on the inside, if the wire bonds are all stitched perfectly in overlap over the chip. So, you know, this
is what the chip-on-chip scenario might
look like. You have a chip that's sitting
on top of a chip and wire bonds just sort
of going a little bit further on from the
edge. And so in the X-ray, the only kind
of difference you see is a slightly longer
wire bond in some cases. So it's actually, it's not that you can't find these, but it's not, like, you know, obvious whether you've found an implant or not. So looking for
silicon is hard. Silicon is relatively
transparent to X-rays. A lot of things mask it: copper traces, solder mask the presence of silicon. This is like another
example of a, you know, a wire bonded chip
under an X-ray. There's some mitigations.
If you have a lot of money, you can do
computerized tomography that'll build a 3D image of the chip. You can do X-ray
diffraction spectroscopy, but it's not a
foolproof method. And so basically the threat of wirebonded packages is actually very well understood, commodity technology.
It's actually quite cheap. I was actually doing some wire bonding in China
the other day. This is the wirebonding
machine. I looked up the price, it's 7000
dollars for a used one. And you basically just walk in to the guy with a picture of where you want the bonds to go. He sort of picks them out, programs the machine's motion once, and it just plays back over and over again. So if you want
to go ahead and modify a chip and add a
wirebond, it's not as crazy as it sounds.
The mitigation is that this is a bit
detectable inside X-rays. So let's go down
the rabbit hole a little further. So there's another concept to be aware of, called the Through-Silicon Via. So this
here is a cross-section of a chip. On the
bottom is the base chip and the top is a
chip that's only 0.1 to 0.2 millimeters
thick, almost the width of a human hair.
And they actually have drilled Vias
through the chip. So you have circuits on
the top and circuits on the bottom. So
this is kind of used for, sort of, you know, putting an interposer in between different
chips, also used to stack DRAM and HBM. So
this is a commodity process available today. It's not science fiction. And the
second concept I want to throw at you is a
thing called a Wafer Level Chip Scale
Package, WLCSP. This is actually a very
common method for packaging chips today.
Basically it's solder balls directly on
top of chips. They're everywhere. If you
look inside of like an iPhone, basically
almost all the chips are WLCSP package
types. Now, if I were to take that Wafer
Level Chip Scale Package and cross-section it and look at it, it looks like a circuit
board with some solder-balls and the
silicon itself with some backside
passivation. If you go ahead and combine
this with a Through-Silicon Via implant, a
man in the middle attack using Through-
Silicon Vias, this is what it looks like
at the end of the day: you basically have a piece of silicon this size, the original silicon, sitting on the original pads, in basically all the right places, with the
solder-balls masking the presence of that
chip. So it's actually basically a nearly undetectable implant, if you want to execute it. If you go ahead and look at the edge of the chip, chips already have seams on the sides, so you can't even just look at the side and say, oh, I see a seam on my chip, therefore it's a problem. The
seam on the edge oftentimes is because of a different coating on the back, or passivation, these types of things. So if
you really wanted to sort of say, OK, how
well can we hide an implant, this is probably
the way I would do it. It's logistically actually easier than the wire-bonded implant because you don't have to get the chips in wire-bondable format, you
literally just buy them off the Internet.
You can just clean off the solder-balls
with a hot air gun and then the hard part
is building it so it can be a template for
doing the attack, which will take some
hundreds of thousands of dollars to do and
probably a mid-end fab. But if you have
almost no budget constraint and you have a
set of chips that are common and you want
to build a template for, this could be a
pretty good way to hide an implant inside
of a system. So that's sort of adding
chips inside packages. Let's talk a bit
about chip modification itself. So how
hard is it to modify the chip itself?
Let's say we've managed to eliminate the
possibility of someone adding a chip, but what about the chip itself? So this sort of goes: a lot of people have said, hey, bunnie, why don't you spin an open source silicon processor, this will make it trustable, right? No problem.
Well, let's think about the attack surface
of IC fabrication processes. So on the
left hand side here I've got kind of a
flowchart of what IC fabrication looks like. You start with a high-level chip design, in RTL, like Verilog, or VHDL
these days or Python. You go into some
backend and then you have a decision to
make: Do you own your backend tooling or
not? And so I will go into this a little
more. If you don't, you trust the fab to
compile it and assemble it. If you do, you
assemble the chip with some blanks for
what's called "hard IP", we'll get into
this. And then you trust the fab to
assemble that, make masks and go to mass
production. So there's three areas that I
think are kind of ripe for tampering now,
"Netlist tampering", "hard IP tampering"
and "mask tampering". We'll go into each
of those. So "Netlist tampering", a lot of
people think that, of course, if you wrote
the RTL, you're going to make the chip. It
turns out that's actually kind of a
minority case. We hear about that. That's
on the right hand side called customer
owned tooling. That's when the customer
does a full flow, down to the mask set.
The problem is it costs several million
dollars and a lot of extra headcount of
very talented people to produce these and
you usually only do it for flagship
products like CPUs, and GPUs or high-end
routers, these sorts of things. I would
say most chips tend to go more towards
what's called an ASIC side, "Application
Specific Integrated Circuit". What happens
is that the customer will do some RTL and
maybe a high level floorplan and then the
silicon foundry or service will go ahead
and do the place/route, the IP
integration, the pad ring. This is quite
popular for cheap support chips, like your
baseboard management controller inside
your server probably went through this
flow, disk controllers probably got this
flow, mid-to-low IO controllers. So all
those peripheral chips that we don't like
to think about, that we know that can
handle our data probably go through a flow
like this. And, to give you an idea of how
common it is, but how little you've heard
of it, there's a company called SOCIONEXT.
They're a billion-dollar company,
actually, you've probably never heard of
them, and they offer services. You
basically just throw a spec over the wall
and they'll build a chip for you, all the way to the point where you've done logic synthesis and physical design, and then
they'll go ahead and do the manufacturing
and test and sample shipment for it. So
then, OK, fine, now, obviously, if you
care about trust, you don't do an ASIC
flow, you pony up the millions of dollars
and you do a COT flow, right? Well, there
is a weakness in COT flows, and it's called the "Hard IP problem". So this
here on the right hand side is an amoeba
plot of the standard cells alongside a piece of SRAM, highlighted here. The image wasn't great for presentation, but this region here is the SRAM block. And
all those little colorful blocks are
standard cells, representing your AND-
gates and NAND-gates and that sort of
stuff. What happens is that the foundry will actually ask you to just leave an open spot on your mask design, and they'll go ahead and merge the RAM into that spot
just before production. The reason why
they do this is because stuff like RAM is
a carefully guarded trade secret. If you
can increase the RAM density of your
foundry process, you can get a lot more
customers. There's a lot of knowhow in it,
and so foundries tend not to want to share
the RAM. You can compile your own RAM,
there are open RAM projects, but their
performance or their density is not as
good as the foundry specific ones. So in
terms of Hard IP, what are the blocks that
tend to be Hard IP? Stuff like RF and
analog, phase-locked-loops, ADCs, DACs,
bandgaps. RAM tends to be Hard IP, ROM
tends to be Hard IP, the eFuse that stores
your keys is going to be given to you as
an opaque block, the pad ring around your
chip, the thing that protects your chip
from ESD, that's going to be an opaque
block. Basically all the points you need
to backdoor your RTL are going to be
trusted in the foundry in a modern
process. So OK, let's say, fine, we're going to go ahead and build all of our own IP blocks as well. We're gonna compile our RAMs, do our own IO, everything, right?
So we're safe, right? Well, turns out that
masks can be tampered with post-
processing. So if you're going to do
anything in a modern process, the mask
designs change quite dramatically from what you drew to what actually ends up in the line: they get fractured into
multiple masks, they have resolution
correction techniques applied to them and
then they always go through an editing
phase. So masks are not born perfect. Masks
have defects on the inside. And so you can look up papers about how they go and inspect the mask, every single line on the inside, and when they find an error, they'll patch over it: they'll go ahead and add bits of metal and then take away bits of glass to go ahead and make that mask perfect, or better in some way, if you have access to the editing capability. So
what can you do with mask-editing? Well,
there's been a lot of papers written on
this. You can look up ones on, for
example, "Dopant tampering". This one
actually has no morphological change. You
can't look at it under a microscope and
detect dopant tampering. You have to do something: either some wet chemistry or some X-ray spectroscopy to figure it out. This allows for circuit
level change without a gross morphological
change of the circuit. And so this can
allow for tampering with things like RNGs
or some logic paths. There are oftentimes
spare cells inside of your ASIC, since
everyone makes mistakes, including chip designers, and so you want to patch over that. It can be done at the mask level, by signal bypassing, these types of things.
So some certain attacks can still happen
at the mask level. So that's a very quick
sort of idea of how bad can it get. When
you talk about the time of check, time of
use trust problem inside the supply chain.
The short summary of implants is that
there's a lot of places to hide them. Not
all of them are expensive or hard. I
talked about some of the more expensive or
hard ones. But remember, wire bonding is
actually a pretty easy process. It's not
hard to do and it's hard to detect. And
there's really no actual essential
correlation between detection difficulty
and difficulty of the attack, if you're
very careful in planning the attack. So,
okay, implants are possible. Let's just agree on that, maybe. So now
the solution is we should just have
trustable factories. Let's go ahead and
bring the fabs to the EU. Let's have a fab
in my backyard or whatever it is, these types of things. Let's make sure all
the workers are logged and registered,
that sort of thing. Let's talk about that.
So if you think about hardware, there's
you, right? And then we can talk about
evil maids. But let's not actually talk
about those, because that's actually kind
of a minority case to worry about. But
let's think about how stuff gets to you.
There's a distributor, who goes through a
courier, who gets to you. All right. So
we've gone and done all this stuff for the trustable factory. But it's actually documented that couriers have been intercepted and implants loaded, you know, by, for example, the NSA on Cisco products.
Now, you don't even have to have access to
couriers, now. Thanks to the way modern
commerce works, other customers can go
ahead and just buy a product, tamper with
it, seal it back in the box, send it back
to your distributor. And then maybe you
get one, right? That can be good enough.
Particularly if you know a corporation is in a particular area: targeting them, you buy a bunch of hard drives in the area,
seal them up, send them back and
eventually one of them ends up in the
right place and you've got your implant,
right? So there was a great talk last year at 35C3, I recommend you check it out, that talks a little bit more about this scenario, sort of removing tamper stickers and, you know, the possibility that some crypto wallets were sent back into the supply chain and tampered with. OK, and then let's take that back. We
have to now worry about the wonderful
people in customs. We have to worry about
the wonderful people in the factory who
have access to your hardware. And so if
you cut to the chase, it's a huge attack
surface in terms of the supply chain,
right? From you to the courier to the
distributor, customs, box build, the box
build factory itself. Oftentimes they'll use gray market resources to help make themselves more profitable, right? You
have distributors who go to them. You
don't even know who those guys are. PCB
assembly, components, boards, chip fab,
packaging, the whole thing, right? Every
single point is a place where someone can
go ahead and touch a piece of hardware
along the chain. So can open source save
us in this scenario? Does open hardware
solve this problem? Right. Let's think
about it. Let's go ahead and throw some
developers with git on the left hand side.
How far does it get, right? Well, we can
have some continuous integration checks
that make sure that you know the hardware
is correct. We can have some open PCB
designs. We have some open PDK, but then
from that point, it goes into a rather
opaque machine and then, OK, maybe we can
put some tests at the very edge before it exits the factory to try and catch some
potential issues, right? But you can see
all the other areas, other places, where a time
of check, time of use problem can happen.
And this is why, you know, I'm saying that
open hardware on its own is not sufficient
to solve this trust problem. Right? And
the big problem at the end of the day is
that you can't hash hardware. Right? There
is no hash function for hardware. That's
why I wanted to go through that earlier today.
There's no convenient, easy way to
basically confirm the correctness of
your hardware before you use it. Some people say, well, bunnie, you said once, there is always a bigger microscope, right? You know, I do some security reverse engineering stuff. This is true, right? So
there's a wonderful technique called
ptychographic X-ray Imaging, there is a
great paper in Nature about it, where they
take like a modern i7 CPU and they get
down to the gate level nondestructively
with it, right? It's great for reverse
engineering or for design verification.
The problem number one is it literally
needs a building sized microscope. It was
done at the Swiss light source, that donut
shaped thing is the size of the light
source for doing that type of
verification, right? So you're not going
to have one at your point of use, right?
You're going to check it there and then
probably courier it to yourself again.
Time of check is not time of use. Problem
number two, it's expensive to do so.
Verifying one chip only verifies one chip
and as I said earlier, just because 99.9%
of your hardware is OK, doesn't mean
you're safe. Sometimes all it takes is one
server out of a thousand, to break some
fundamental assumptions that you have
about your cloud. And random sampling just
isn't good enough, right? I mean, would you randomly sample signature checks on software that you install or download? No. You insist on a 100% check of everything. If
you want that same standard of
reliability, you have to do that for
hardware. So then, is there any role for open source in trustable hardware?
Absolutely, yes. Some of you guys may be
familiar with that little guy on the
right, the SPECTRE logo. So correctness is
very, very hard. Peer review can help fix
correctness bugs. Microarchitectural transparency can enable the fixes in SPECTRE-like situations. So, you know, for
example, you would love to be able to say
we're entering a critical region. Let's
turn off all the micro architectural
optimizations, sacrifice performance and
then run the code securely and then go
back into, who cares what mode, and just
get done fast, right? That would be a
switch I would love to have. But without
that sort of transparency, or without the ability to review it, we can't do that. Also,
you know, community-driven features and community-owned designs are very empowering and make sure that we're sort of building
the right hardware for the job and that
it's upholding our standards. So there is
a role. It's necessary, but it's not
sufficient for trustable hardware. Now the
question is, OK, can we solve the point of
use hardware verification problem? Is it
all gloom and doom from here on? Well, I
didn't bring us here to tell you it's just
gloom and doom. I've thought about this
and I've kind of boiled it into three
principles for building verifiable
hardware. The three principles are: 1)
Complexity is the enemy of verification.
2) We should verify entire systems, not
just components. 3) And we need to empower
end-users to verify and seal their
hardware. We'll go into this in the
remainder of the talk. The first one is
that complexity is complicated. Right?
Without a hashing function, verification
rolls back to bit-by-bit or atom-by-atom
verification. Modern phones just have so
many components. Even if I gave you the
full source code for the SOC inside of a
phone down to the mask level, what are you going to do with it? How are you going to know that this mask actually matches the chip and those two haven't been modified? So more complexity is more difficult. The
solution is: Let's go to simplicity,
right? Let's just build things from
discrete transistors. Someone's done this.
The Monster 6502 is great. I love the
project. Very easy to verify. Runs at 50
kHz. So you're not going to do a lot
with that. Well, let's build processors at
a visually inspectable process node. Go to
500 nanometers. You can see that with
light. Well, you know, 100 megahertz clock
rate and a very high power consumption and
you know, a couple kilobytes of RAM probably
is not going to really do it either.
Right? So the point of use verification is
a tradeoff between ease of verification
and features and usability. Right? So
these two products up here largely do the
same thing. AirPods, right? And headphones on your head. Right? AirPods
have something on the order of tens of
millions of transistors for you to verify.
The headphone that goes on your head. Like
I can actually go to Maxwell's equations
and actually tell you how the magnets work
from very first principles. And there's
probably one transistor on the inside of
the microphone to go ahead and amplify the
membrane. And that's it. Right? So this
one, you do sacrifice some features and
usability, when you go to a headset. Like
you can't say, hey, Siri, and it will listen to you and know what you're doing,
but it's very easy to verify and know
what's going on. So in order to start a
dialog on user verification, we have to sort of set a context. So I started a
project called 'Betrusted' because the
right answer depends on the context. I
want to establish what might be a minimum
viable, verifiable product. And it's sort
of like meant to be user verifiable by
design. And we think of it as a hardware/software distro. So it's meant to
be modified and changed and customized
based upon the right context at the end of
the day. This is a picture of what it looks
like. I actually have a little prototype
here. Very, very, very early product here
at the Congress. If you wanna look at it.
It's a mobile device that is meant for
sort of communication, sort of text based
communication and maybe voice, and authentication, so authenticator tokens or like a crypto wallet if you want. And
the people we're thinking about who might be users are either high-value targets politically or financially. So you don't have to have a lot of money to be a high-value target; it could also be very politically risky for some people. And
also, of course, looking at developers and
enthusiasts and ideally we're thinking
about a global demographic, not just
English speaking users, which is sort of a
thing when you think about the complexity
standpoint, this is where we really have
to sort of champ at the bit and figure out
how to solve a lot of hard problems like
getting Unicode and, you know, right to
left rendering and pictographic fonts to work inside a very small attack surface device. So this leads me to the second
point. In which we verify entire systems,
not just components. You might say, well, why not just build a chip? Why not? You know,
why are you thinking about a whole device?
Right. The problem is, that private keys
are not your private matters. Screens can
be scraped and keyboards can be logged. So
there's some efforts now to build
wonderful security enclaves like Keystone
and Open Titan, which will build, you
know, wonderful secure chips. The problem
is, that even if you manage to keep your
key secret, you still have to get that
information through an insecure CPU from
the screen to the keyboard and so forth.
Right? And so, you know, people who have
used these, you know, on screen touch
keyboards have probably seen something of
a message like this saying that, by the
way, this keyboard can see everything
you're typing, including your passwords. Right? And people probably click and say,
oh, yeah, sure, whatever. I trust that.
Right? OK, well, in this case, this little enclave on the side here isn't really doing a lot of good when you go ahead and you say, sure, I'll run this input method, and they can go ahead and modify all my data and intercept all my data. So in terms of making a device verifiable, let's talk about the concept-to-practice flow.
How do I take these three principles and
turn them into something? So this is, you know, this is the idea of taking these three requirements and turning them into a set of five features: a physical keyboard, a black and white LCD, an FPGA-based RISC-V SoC, user-sealable keys, and something that's easy to verify and physically protect. So
let's talk about these features one by
one. First one is a physical keyboard. Why
am I using a physical keyboard and not a
virtual keyboard? People love the virtual
keyboard. The problem is that captouch screens, which are necessary to do a good virtual keyboard, have a firmware blob. They have a microcontroller to do the touch sensing, actually. It's actually really hard to build these things well.
If you can do a good job with it and build
an awesome open source one, that'll be
great, but that's a project in and of
itself. So in order to sort of get an easy win where we can, let's just go with
the physical keyboard. So this is what the
device looks like with this cover off. We
have a physical keyboard PCB with a little overlay, you know, so we can do multilingual inserts, and you can go and change that out. And it's just a two-layer daughter card, right. Just hold it up to, like, you know, light: OK, switches, wires. Right? Not a lot of
places to hide things. So I'll take that
as an easy win for an input surface,
that's verifiable. Right? The output
surface is a little more subtle. So we're
doing a black and white LCD. You might say, OK, why? Well, as a curiosity, if you ever
take apart a liquid crystal display, look
for a tiny little thin rectangle sort of
located near the display area. That's
actually a silicon chip that's bonded to
the glass. That's what it looks like at
the end of the day. That contains a frame
buffer and a command interface. It has
millions of transistors on the inside and
you don't know what it does. So if you're
ever assuming your adversary may be
tampering with your CPU, this is also a
viable place you have to worry about. So I
found a screen. It's called a memory LCD
by sharp electronics. It turns out they do
all the drive electronics on glass. So this
is a picture of the driver electronics on
the screen through like a 50x microscope
with a bright light behind it. Right? You
can actually see the transistors that are used to drive everything on the display; it's a nondestructive method of
verification. But actually more important
to the point is that there's so few places
to hide things, you probably don't need to
check it, right? There's not - If you want
to add an implant to this, you would need
to grow the glass area substantially or
add a silicon chip, which is a thing that
you'll notice, right. So at the end of the day, the fewer places there are to hide things, the less need there is to check things. And so I can
feel like this is a screen where I can
write data to, and it'll show what I want
to show. The good news is that display has
a 200 ppi pixel density. So it's not -
even though it's black and white - it's
kind of closer to E-Paper. EPD in terms of
resolution. So now we come to the hard
part, right, the CPU. The silicon problem,
right? Any chip built in the last two
decades is not going to be inspectable,
fully inspectable with optical microscope,
right? Thorough analysis requires removing
layers and layers of metal and dielectric.
This is sort of a cross section of a
modernish chip and you can see the sort of
the huge stack of things to look at on
this. This process is destructive and you
can think of it as hashing, but it's a
little bit too literal, right? We want
something where we can check the thing
that we're going to use and then not
destroy it. So I've spent quite a bit of
time thinking about options for
nondestructive silicon verification. The
best I could come up with maybe was using
optical fault induction somehow combined
with some chip design techniques to go
ahead and like scan a laser across and
look at fault syndromes and figure out,
you know, does the thing... do the gates
that we put down correspond to the thing
that I built. The problem is, I couldn't
think of a strategy to do it that wouldn't
take years and tens of millions of dollars
to develop, which puts it a little bit far
out there and probably in the realm of
like sort of venture funded activities,
which is not really going to be very
empowering of everyday people. So let's
say I want something a little more short term than that, than that sort of, you know, platonic
ideal of verifiability. So the compromise
I sort of arrived at is the FPGA. So field
programmable gate arrays, that's what FPGA
stands for, are large arrays of logic and
wires that are user configured to
implement hardware designs. So this here
is an image inside an FPGA design tool. On
the top right is an example of one sort of
logic sub cell. It's got a few flip flops
and lookup tables in it. It's embedded in
this huge mass of wires that allow you to
wire it up at runtime to figure out what's
going on. And one thing that this diagram
here shows is that I'm able to sort of correlate the design. I can see "Okay. The
decode_to_execute_INSTRUCTION_reg bit 26
corresponds to this net." So now we're
sort of like bringing that Time Of Check a little bit closer to Time Of Use. And so
the idea is to narrow that ToCToU gap by
compiling your own CPU. We can basically
give you the CPU from source. You can
compile it yourself. You can confirm the
bit stream. So now we're sort of enabling
a bit more of that trust transfer like
software, right. But there's a subtlety in
that the toolchains are not necessarily
always open. There's some FOSS flows like
symbiflow. They have a 100% open flow for
ICE40 and ECP5, and there's, like, the 7-Series, where they have a coming-soon status, but they currently require some closed vendor tools. So picking an FPGA is a difficult
choice. There's a usability versus
verification tradeoff here. The big
usability issue is battery life. If we're going for a mobile device, you want to use it all day long, you don't want it to be dead by noon. It turns out that the best sort of
chip in terms of battery life is a
Spartan7. It gives you 4x, roughly 3 to
4x, in terms of battery life. But the tool
flow is still semi-closed. But, you know, I am optimistic that Symbiflow will
get there and we can also fork and make an
ECP5 version if that's a problem at the
end of day. So let's talk a little bit
more about sort of FPGA features. So one
thing I like to say about FPGA is: they
offer a sort of ASLR, so address-space
layout randomization, but for hardware.
Essentially, a design has a kind of
pseudo-random mapping to the device. This
is a sort of a screenshot of two compilation runs of the same source code with a very small modification to it, basically a version number stored in a GPR. And then you can see that actually the locations of a lot of the registers are basically shifted around.
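As a rough sketch of how that per-build shuffle could be driven — the file name and parameter name here are invented for illustration, and this is not the actual Betrusted build flow — you can simply regenerate a random constant in the design source on every compile:

```python
import secrets

def write_build_seed(path: str = "build_seed.v") -> int:
    """Emit a fresh random 32-bit constant; referencing it in the RTL perturbs place & route."""
    seed = secrets.randbits(32)
    with open(path, "w") as f:
        f.write(f"localparam BUILD_SEED = 32'h{seed:08x};\n")
    return seed
```

Because the constant ends up in the netlist, every compilation run scatters the logic differently, which is the "ASLR for hardware" effect shown in the screenshot.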
The reason why this is important is
because this hinders a significant class
of silicon attacks. All those small mask-level changes I talked about, the ones where we just say "Okay, we're just gonna go ahead and change a few wires or move a couple of logic cells around", those become much less likely to capture a critical bit. So
if you want to go ahead and backdoor a
full FPGA, you're going to have to change
the die size. You have to make it
substantially larger to be able to sort of
like swap out the function in those cases.
And so now the verification bar goes from
looking for a needle in a haystack to
measuring the size of the haystack, which
is a bit easier to do towards the user
side of things. And it turns out, at least
in Xilinx-land, just changing a random parameter does the trick. So some potential attack vectors against FPGAs are, like: "OK, well, it's closed silicon." What
are the backdoors there? Notably inside a
7-series FPGA they actually document
introspection features. You can pull out
anything inside the chip by instantiating
a certain special block. And then we still
also have to worry about the whole class of, like, man-in-the-middle I/O and JTAG implants that I talked about earlier. So it's easy, really easy, to mitigate the
known blocks, basically lock them down,
tie them down, check them in the bit
stream, right? In terms of the I/O-man-in-
the-middle stuff, this is where we're
talking about like someone goes ahead and
puts a chip in the path of your FPGA.
There's a few tricks you can do. We can do
sort of bus encryption on the RAM and the ROM at the design level that frustrates these. At the implementation level, basically,
we can use the fact that data pins and
address pins can be permuted without
affecting the device's function. So every
design can go ahead and permute those data
and address pin mappings sort of uniquely.
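Here is an illustrative sketch of that idea — not the Betrusted implementation — deriving a per-device permutation of the external RAM's address and data lines from a device-unique secret:

```python
import hashlib
import random

def pin_permutation(device_secret: bytes, bus_name: str, n_bits: int) -> list[int]:
    """Deterministically shuffle logical bit -> physical pin using a device-unique secret."""
    seed = int.from_bytes(hashlib.sha256(device_secret + bus_name.encode()).digest(), "big")
    order = list(range(n_bits))
    random.Random(seed).shuffle(order)
    return order  # order[i] is the physical pin carrying logical bit i

addr_map = pin_permutation(b"example-device-secret", "addr", 24)
data_map = pin_permutation(b"example-device-secret", "data", 16)
```

An interposer implant dropped between the FPGA and the RAM would then have to discover this mapping for every individual unit before it could do anything useful.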
So any particular implant that goes in
will have to be able to compensate for all
those combinations, making the implant a
little more difficult to do. And of
course, we can also fall back to sort of
careful inspection of the device. In terms
of the closed source silicon, the thing that I'm really optimistic about there is — so in terms of the closed source system, the thing that we have to worry about is that, for example, now that Xilinx knows that we're doing these trustable devices using their tool chain, they push a patch that compiles backdoors into your bit stream. So not even as a silicon
level implant, but like, you know, maybe
the tool chain itself has a backdoor that
recognizes that we're doing this. So the
cool thing is, there is a cool project: there's a project called "Prjxray", project x-ray, it's part of the Symbiflow
effort, and they're actually documenting
the full bit stream of the 7-Series
device. It turns out that we don't yet
know what all the bit functions are, but
the bit mappings are deterministic. So if
someone were to try and activate a
backdoor in the bit stream through
compilation, we can see it in a diff. We'd be like: Wow, we've never seen this bit flip before. What is this? Then we can look into it and figure out if it's malicious or not, right? So there's actually sort of a hope that essentially, at the end of the day, we can build sort of a bit stream checker.
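As a very rough sketch of the kind of check such a tool could start from — a real checker would work from the prjxray frame and bit database rather than raw byte offsets, and the file names here are made up — you could diff the set bits of a fresh build against builds you have already reviewed:

```python
def set_bit_positions(path: str) -> set[int]:
    """Return the positions of all set bits in a file, as a crude bit-stream fingerprint."""
    positions = set()
    data = open(path, "rb").read()
    for byte_index, byte in enumerate(data):
        for bit in range(8):
            if byte & (1 << bit):
                positions.add(byte_index * 8 + bit)
    return positions

known_good = set_bit_positions("reviewed_build.bit")
candidate = set_bit_positions("new_build.bit")
print(f"{len(candidate - known_good)} set bits never seen in reviewed builds")
```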
We can build a thing that says: Here's a
bit stream that came out, does it
correlate to the design source, do all the
bits check out, do they make sense? And so
ideally we would come up with like a one
click tool. And now we're at the point
where the point of check is very close to
the point of use. The users are now
confirming that the CPUs are correctly
constructed and mapped to the FPGA
correctly. So the sort of the summary of
FPGA vs. custom silicon is sort of like,
the pros of custom silicon is that they
have great performance. We can do a true
single chip enclave with hundreds of
megahertz speeds and tiny power
consumption. But the cons of silicon is
that it's really hard to verify. So, you
know, open source doesn't help that
verification and Hard IP blocks are tough
problems we talked about earlier. So FPGAs
on the other side, they offer some
immediate mitigation paths. We don't have
to wait until we solve this verification
problem. We can inspect the bit streams,
we can randomize the logic mapping and we
can do per device unique pin mapping. It's
not perfect, but it's better than I think
any other solution I can offer right now.
The cons of it is that FPGAs are just
barely good enough to do this today. So
you need a little bit of external RAM
which needs to be encrypted, but 100
megahertz speed performance and about five
to 10x the power consumption of a custom
silicon solution, which in a mobile device
is a lot. But, you know, actually part of
the reason, the main thing that drives the
thickness in this is the battery, right?
And most of that battery is for the FPGA.
If we didn't have to go with an FPGA it
could be much, much thinner. So now let's
talk a little about the last two points,
user-sealable keys, and verification and
protection. And this is that third point,
"empowering end users to verify and seal
their hardware". So it's great that we can
verify something but can it keep a secret?
No, transparency is good up to a point,
but you want to be able to keep secrets so
that people won't come up and say: oh,
there's your keys, right? So sealing a key
in the FPGA, ideally we want user
generated keys that are hard to extract,
we don't rely on a central keying
authority and that any attack to remove
those keys should be noticeable. So at a high level, I mean, someone with infinite funding basically should take about a day to extract it, and the effort should be trivially evident. The solution
to that is basically self provisioning and
sealing of the cryptographic keys in the
bit stream and a bit of epoxy. So let's
talk a little bit about provisioning those
keys. If we look at the 7-series FPGA
security, they offer sort of AES-256 encrypted bit streams with HMAC-SHA-256 authentication. There's a paper which discloses a
known weakness in it, so the attack takes
about a day or 1.6 million chosen cipher
text traces. The reason why it takes a day
is because that's how long it takes to
load in that many chosen ciphertexts
through the interfaces. The good news is
there's some easy mitigations to this. You
can just glue shut the JTAG-port or
improve your power filtering and that
should significantly complicate the
attack. But the point is that it will take
a fixed amount of time to do this and you
have to have direct access to the
hardware. It's not the sort of thing that,
you know, someone at customs or like an
"evil maid" could easily pull off. And
just to put that in perspective, again,
even if we improved dramatically the DPA-
resistance of the hardware, if we knew a
region of the chip that we want to inspect, probably with a SEM and a skilled technician, we could probably pull it off in a matter of a day or a couple of
days. Takes only an hour to decap the
silicon, you know, an hour for a few
layers, a few hours in the FIB to delayer a chip, and an afternoon in the SEM,
and you can find out the keys, right? But
the key point is that, this is kind of the
level that we've agreed is OK for a lot of
the silicon enclaves, and this is not
going to happen at a customs checkpoint or
by an evil maid. So I think I'm okay with
that for now. We can do better, but I think it's a good starting point,
particularly for something that's so cheap
and accessible. So then how do we get
those keys in FPGA and how do we keep them
from getting out? So those keys should be user generated, never leave the device, not be accessible by the CPU after it's provisioned, and be unique per device. And it should be easy for the user to get it right: you shouldn't have to know all this stuff and type a bunch of commands to do it, right. So if you look
inside Betrusted there's two rectangles
there, one of them is the ROM that
contains a bit stream, the other one is
the FPGA. So we're going to draw those in
the schematic form. Inside the ROM, you
start the day with an unencrypted bit
stream in ROM, which loads an FPGA. And
then you have this little crypto engine. There are no keys on the inside, none anywhere. You can check everything. You
can build your own bitstream, and do what
you want to do. The crypto engine then
generates keys from a TRNG that's located
on chip, probably with the help of some off-chip randomness as well, because I don't
necessarily trust everything inside the
FPGA. Then that crypto engine can go ahead
and, as it encrypts the external bit
stream, inject those keys back into the
bit stream because we know where that
block-RAM is. We can go ahead and inject
those keys back into that specific RAM
block as we encrypt it. So now we have a
sealed encrypted image on the ROM, which
can then load the FPGA if it had the key.
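The sequence just described can be summarized in a short sketch. The helper callables stand in for the real hardware steps and are purely hypothetical; only the ordering and the idea of mixing on-chip and off-chip entropy come from the talk.

```python
import hashlib
import secrets
from typing import Callable

def provision(bitstream: bytes,
              onchip_trng: bytes,
              encrypt_and_inject: Callable[[bytes, bytes], bytes],
              write_rom: Callable[[bytes], None],
              burn_key_and_lock: Callable[[bytes], None]) -> None:
    # mix on-chip TRNG output with off-chip randomness; trust neither source alone
    key = hashlib.sha256(onchip_trng + secrets.token_bytes(32)).digest()
    # encrypt the external bit stream, writing the key into the known block-RAM location
    sealed = encrypt_and_inject(bitstream, key)
    write_rom(sealed)          # the sealed image replaces the cleartext bit stream in ROM
    burn_key_and_lock(key)     # burn the key into the FPGA, set AES-only boot, disable readback
```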
So after you've gone ahead and provisioned
the ROM, hopefully at this point you don't
lose power, you go ahead and you burn the key into the FPGA's keying engine, which sets it to only boot from that encrypted bit stream; you blow out the readback-disable bit, and the AES-only-boot bit is blown. So now at this point in time,
basically there's no way to go ahead and
put in a bit stream that says tell me your
keys, whatever it is. You have to go and
do one of these hard techniques to pull
out the key. You can maybe enable a hardware upgrade path if you want by having the crypto engine just be able to retain a copy of the master key and re-encrypt it, but
that becomes a vulnerability because the
user can be coerced to go ahead and load
inside a bit stream that then leaks out
the keys. So if you're really paranoid at
some point in time, you seal this thing
and it's done. You know, you have to go
ahead and do that full key extraction
routine to go ahead and pull stuff out if
you forget your passwords. So that's the
sort of user-sealable keys. I think we can
do that with FPGA. Finally, easy to verify
and easy to protect. Just very quickly
talking about this. So if you want to make
an inspectable tamper barrier, a lot of
people have talked about glitter seals.
Those are pretty cool, right? The problem
is, I find that glitter seals are too hard
to verify. Right. Like, I have tried
glitter-seals before and I stare at the
thing and I'm like: Damn, I have no idea
if this is the seal I put down. And so
then I say, ok, we'll take a picture or
write an app or something. Now I'm relying
on this untrusted device to go ahead and
tell me if the seal is verified or not. So
I have a suggestion for a DIY watermark
that relies not on an app to go and
verify, but our very, very well tuned
neural networks inside our head to go
ahead and verify things. So the idea is
basically, there's this nice epoxy that I
found. It comes in these bi-packs of 2-part epoxy; you just put it on the edge of a table
and you go like this and it goes ahead and
mixes the epoxy and you're ready to use.
It's very easy for users to apply. And
then you just draw a watermark on a piece
of tissue paper. It turns out humans are
really good at identifying our own
handwriting, our own signatures, these
types of things. Someone can go ahead and
try to forge it. There's people who are
skilled in doing this, but this is way
easier than looking at a glitter-seal. You
go ahead and put that down on your device.
You swab on the epoxy and at the end of
day, you end up with a sort of tissue
paper plus a very easily recognizable
seal. If someone goes ahead and tries to
take this off or tamper with it, I can
look at it easy and say, yes, this is a
different thing than what I had yesterday,
I don't have to open an app, I don't have
to look at glitter patterns, I don't have
to do these sorts of things. And I can go
ahead and swab it onto all the I/O-ports that I need to. So it's a bit of a hack, but I
think that it's a little closer towards
not having to rely on third party apps to
verify a tamper-evident seal. So I've
talked about sort of this implementation
and also talked about how it maps to these
three principles for building trustable
hardware. So the idea is to try to build a
system that is not too complex so that we
can verify most of the parts, or all of them,
at the end-user point, look at the
keyboard, look at the display and we can
go ahead and compile the FPGA from source.
We're focusing on verifying the entire
system, the keyboard and the display,
we're not forgetting the user. The secret starts with the user and ends with the
user, not with the edge of the silicon.
And finally, we're empowering end-users to
verify and seal their own hardware. You
don't have to go through a central keying
authority to go ahead and make sure
secrets are inside your hardware. So
at the end of the day, the idea behind
Betrusted is to close that hardware time
of check/time of use gap by moving the
verification point closer to the point of
use. So in this huge, complicated
landscape of problems that we can have,
the idea is that we want to, as much as
possible, teach users to verify their own
stuff. So by design, it's meant to be a
thing that hopefully anyone can be taught
to sort of verify and use, and we can
provide tools that enable them to do that.
But if that ends up being too high of a
bar, I would like it to be within like one
or two nodes in your immediate social
network, so anyone in the world can find
someone who can do this. And the reason
why I kind of set this bar is, I want to
sort of define the maximum level of
technical competence required to do this,
because it's really easy, particularly
when sitting in an audience of these
really brilliant technical people to say,
yeah, of course everyone can just hash
things and compile things and look at
things in microscopes and solder and then
you get into life and reality and then be
like: oh, wait, I had completely forgotten
what real people are like. So this tries
to get me grounded and make sure that I'm
not sort of drinking my own Kool-Aid in
terms of how useful open hardware is as a
mechanism to verify anything. Because I
hand a bunch of people schematics and say,
check this and they'll be like: I have no
idea. So the current development status is
that: the hardware is at kind of an initial
EVT stage, with prototypes subject to
significant change. In fact, part of the
reason we're here talking about this is to
collect more ideas and feedback on this,
to make sure we're doing it right. The
software is just starting. We're writing
our own OS called Xous, being done by Sean
Cross, and we're exploring the UX and
applications being done by Tom Marble
shown here. And I actually want to give a
big shout out to NLnet for funding us
partially. We have a grant, a couple of
grants under privacy- and trust-
enhancing technologies. This is really
significant because now we can actually
think about the hard problems, and not
have to be like, oh, when do we go
crowdfunding, when do we go fundraising.
Like a lot of time, people are like: This
looks like a product, can we sell this
now? It's not ready yet. And I want to be
able to take the time to talk about it,
listen to people, incorporate changes and
make sure we're doing the right thing. So
with that, I'd like to open up the floor
for Q&A. Thanks to everyone, for coming to
my talk.
Applause
Herald: Thank you so much, bunnie, for the
great talk. We have about five minutes
left for Q&A. For those who are leaving
earlier, you're only supposed to use the
two doors on the left, not the one, not
the tunnel you came in through, but only
the doors on the left back, the very left
door and the door in the middle. Now, Q&A,
you can pile up at the microphones. Do we
have a question from the Internet? No, not
yet. If someone wants to ask a question
but is not present but in the stream, or
maybe a person in the room who wants to
ask a question, you can use the hashtag
#Clarke on Twitter. Mastodon and IRC are
being monitored. So let's start with
microphone number one.
Your question, please.
Q: Hey, bunnie. So you mentioned that with
the foundry process, the hard IP blocks,
the prototyped IP blocks, are a
place where attacks could be made. Do you
have the same concern about the Hard IP
blocks in the FPGA, either the embedded
block RAM or any of the other special
features that you might be using?
bunnie: Yeah, I think that we do have to
be concerned about implants that have
existed inside the FPGA prior to this
project. And I think there is a risk, for
example, that there's a JTAG path that we
didn't know about. But I guess the
compensating side is that the military,
U.S. military does use a lot of these in
their devices. So they have a self-
interest in not having backdoors inside of
these things as well. So we'll see. I
think that the answer is it's possible. I
think the upside is that because the FPGA
is actually a very regular structure,
doing like sort of a SEM-level analysis,
of the initial construction of it at
least, is not insane. We can identify
these blocks and look at them and make
sure they have the right number of bits. That
doesn't mean the one you have today is the
same one. But if they were to go ahead and
modify that block to do sort of the
implant, my argument is that because of
the randomness of the wiring and the
number of factors they have to consider,
they would have to actually grow the
silicon area substantially. And that's a
thing that is a proxy for detection of
these types of problems. So that would be
my kind of half answer to that problem.
It's a good question, though. Thank you.
Herald: Thanks for the question. The next
one from microphone number three, please.
Yeah. Move close to the microphone.
Thanks.
Q: Hello. My question is, in your proposed
solution, how do you get around the fact
that the attacker, whether it's an implant
or something else, will just attack it
before the user does the self-provisioning,
so it'll compromise the self-provisioning
process itself?
bunnie: Right. So the idea of the self
provisioning process is that we send the
device to you, you can look at the circuit
boards and devices and then you compile
your own FPGA bitstream, which includes the
self-provisioning code, from source, and you can
confirm, or if you don't want to compile,
you can confirm that the signatures match
with what's on the Internet. And so
someone wanting to go ahead and compromise
that process and, say, stash away some keys
in some other place, that modification
would either be evident in the bit stream
or that would be evident as a modification
of the hash of the code that's running on
it at that point in time. So someone would
have to then add a hardware implant, for
example, to the ROM, but that doesn't help
because it's already encrypted by the time
it hits the ROM. So it'd really have to be
an implant that's inside the FPGA and then
Trammell's question just sort of talked
about that situation itself. So I think
the attack surface is limited at least for
that.
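A minimal sketch of the kind of hash check being described here, assuming a reference digest is published alongside the signed source release; the file name and the way the reference value is obtained are hypothetical placeholders, not Betrusted's actual tooling:
```rust
// Sketch only: compare a locally built (or downloaded) FPGA bitstream
// against a published reference hash before provisioning.
// The file name and reference value below are hypothetical placeholders.
use sha2::{Digest, Sha256};
use std::fs;

fn main() -> std::io::Result<()> {
    // Bitstream produced by your own from-source build.
    let bitstream = fs::read("betrusted_soc.bin")?;

    // Hash it and compare against the value published with the release.
    let digest = hex::encode(Sha256::digest(&bitstream));
    let published = "<hash from the signed release notes>"; // hypothetical

    if digest == published {
        println!("bitstream matches the published build: OK to provision");
    } else {
        println!("MISMATCH: do not provision with this bitstream");
    }
    Ok(())
}
```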
Q: So you talked about how the courier
might be a hacker, right? So in this case,
you know, the courier would put a
hardware implant, not in the Hard IP, but
just in the piece of hardware inside the
FPGA that provisions the bit stream.
bunnie: Right. So the idea is that you
would get that FPGA and you would blow
your own FPGA bitstream yourself. You
don't trust my factory to give you a bit
stream. You get the device.
Q: How do you trust that the bitstream is
actually being blown? You just get an
indication from your computer saying this
bitstream is being blown.
bunnie: I see, I see, I see. So how do you
trust that the ROM actually doesn't have a
backdoor in itself that's pulling in a
secret bitstream that's not the one you
loaded. I mean, possible, I guess. I think
there are things you can do to defeat
that. So the way that we do the semi
randomness in the compilation is that
there's a random 64-bit number we
compile into the bit stream. So we're
compiling our own bitstream. You can read
out that number and see if it matches. At
that point, if someone had pre-burned a
bit stream onto it that is actually loaded
instead of your own bit stream, it's not
going to be able to have that random
number, for example, on the inside. So I
think there's ways to tell if, for
example, the ROM has been backdoored and
it has two copies of the ROM, one the
evil one and one yours, and then
they're going to use the evil one during
provisioning, right? I think that's a
thing that can be mitigated.
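A rough sketch of the nonce idea described here; the build step and the readback call are hypothetical stand-ins for whatever path the real design exposes, not Betrusted's actual interfaces:
```rust
// Sketch of the 64-bit nonce check: the nonce is freshly generated for
// *your* build, so a pre-burned bitstream cannot contain it in advance.
// `read_back_nonce()` is a hypothetical stand-in for the device's real
// readback path (for example, over JTAG or serial).
use rand::Rng;

fn main() {
    // 1) At build time: generate a fresh 64-bit nonce and compile it into
    //    your own bitstream (for example, as a register or RAM init value).
    let nonce: u64 = rand::thread_rng().gen();
    println!("compiling bitstream with nonce {:016x}", nonce);
    // build_bitstream_with(nonce);   // hypothetical build step

    // 2) After loading the device: read the value back out and compare.
    let reported = read_back_nonce();
    if reported == nonce {
        println!("device is running the bitstream we just built");
    } else {
        println!("nonce mismatch: device may be running a pre-burned bitstream");
    }
}

// Hypothetical placeholder; a real flow would query the running FPGA.
fn read_back_nonce() -> u64 {
    0
}
```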
Herald: All right. Thank you very much. We
take the very last question from
microphone number five.
Q: Hi, bunnie. So one of the options you
sort of touched on in the talk but then
didn't pursue was this idea of doing some
custom silicon in a sort of very low-res
process that could be optically inspected
directly.
bunnie: Yes.
Q: Is that completely out of the question
in terms of being a usable route in the
future or, you know, did you look into
that in any detail at all?
bunnie: So I thought about that; there's a
couple of issues. One is that if we rely on
optical verification, now users need optical
verification equipment in order to do it.
So we have to somehow move those optical
verification tools to the edge towards
that time of use. Right. So nice thing
about the FPGA is everything I talked
about building your own bitstream,
inspecting the bit stream, checking the
hashes. Those are things that don't
require any particular sort of user equipment.
But yes, if we were to go ahead and
build like an enclave out of 500
nanometer silicon, it'd probably run
around 100 megahertz, you'd have a few
kilobytes of RAM on the inside. Not a lot.
Right. So you have a limitation in how
much capability you have on it, and it
would consume a lot of power. But then
every single one of those chips, right, we
put them in a black piece of epoxy. You
know, what keeps someone from
swapping that out with another chip?
Q: Yeah. I mean, I was thinking of
like old school, transparent top, like on
a lark.
bunnie: So, yeah, you can go ahead and
wire bond it on the board, put some clear
epoxy on, and then now people have to take
a microscope to look at that. That's a
possibility. But the sort of thing I am
trying to imagine is, like, for example,
my mom using this, and asking her to do
this sort of stuff. I just don't envision
her knowing anyone who would have an
optical microscope who could do this for
her, except for me. Right. And I don't
think that's a fair assessment of what is
verifiable by the end user at
the end of the day. So maybe for some
scenarios it's OK. But I think that the
full optical verification of a chip and
making that sort of the only thing between
you and an implant, worries me. That's the
problem with the hard chip: basically,
even if it's fully visible, you know, it's
in a clear package, someone could just swap
out the chip with another chip. Right. You
still need, you know, a piece of equipment
to check that.
Right. Whereas like when I talked about
the display and the fact that you can look
at that, actually the argument for that is
not that you have to check the display.
It's that, because it's actually so
simple, you don't need to check
the display. Right. You don't need the
microscope to check it, because there is
no place to hide anything.
Herald: All right, folks, we ran out of
time. Thank you very much to everyone who
asked a question. And please give another
big round of applause to our great
speaker, bunnie. Thank you so much for the
great talk. Thanks.
Applause
bunnie: Thanks everyone!
Outro
Subtitles created by c3subtitles.de
in the year 2020. Join, and help us!