-
33C3 preroll music
-
Herald: Ok, please welcome Pol van Aubel,
PhD student at Radboud University in
-
Nijmegen, and he's going to give a talk
on physically uncloneable functions.
-
A warm round of applause, please.
-
applause
Thank you.
-
Pol: Thank you, thank you for having me,
thank you for having me on prime time,
-
when everybody is finally awake,
but not yet drunk.
-
mild laughter
-
And, thank you for letting me compete with
the space track. So, well, my Herald just
-
explained who I am, but the work in this
talk is actually mostly not from me. It's
-
by many many authors, and there will be
citations on almost every slide. Don't pay
-
attention to those. It was simply too hard
for me to make two different sets of
-
slides. Download the slides afterwards, if
something interests you. The entire intent
-
of this talk is to get you interested in
this material, get you reading the papers,
-
and get you implementing this stuff
yourself. So, without further ado, and
-
without any further egocentric blathering,
let's look at the problem we're trying to
-
solve. In computer security, since the
1980's, we've noticed that we might want
-
unique identification and authentication
of devices. And then, specifically, integrated
-
circuits. So, we want to distinguish
chips, uniquely, from the same
-
manufacturing masks, even, and with high
accuracy, unforgeably. Eh, simple task,
-
right? So, in order to explain how we get
to physically uncloneable functions, I'm
-
first going to explain some history in
anti-counterfeiting. And
-
anti-counterfeiting, you can think of
money, you can think of magstripe cards,
-
you can think of identity documents, and
nuke counters, or as they are commonly
-
called in the literature, "Treaty Limited
Item" identifiers. So let's start with
-
money. Historically, money has been
protected with highly intricate imagery.
-
This is an example from right after the US
Revolution, and I personally really like
-
the, let's see, the "To Counterfeit is
Death". Because, you know, while it was a
-
crime against the state, you were drawn
and quartered when you did it. Then we
-
fast forward a few centuries, and I would
like to know from the audience, who has
-
ever seen this? ... Quite a lot. Can
anybody tell me what it is?
-
audience comment (inaudible)
-
The EURion constellation. It's
intended to prevent photocopiers from
-
copying your money. So basically, when the
photocopier detects this thing, it'll just
-
not... it'll just say, "I don't want to
copy." You can actually use this on your
-
own stuff if you want. But, we see a
common theme across those few
-
centuries, namely, you mark all valid
bills the same, and then you make sure
-
that you can check the marks in order to
identify that it is legitimate. An
-
alternative to this would be to have
different marks for each bill, and sign
-
that marking. But, you get into a whole
bunch of questions, like, how do I then
-
prevent somebody from copying that
bill-specific valid mark a hundred
-
thousand times, and just, you know,
copying the signature as well. It's not as
-
though anybody is checking paper money
online. So, back in 1983, Bader proposed
-
an anti-counterfeiting measure, which
basically meant you sprinkle random-length
-
cuts of optical fibers into your paper,
before it becomes paper: into the pulp.
-
And then, you make the money, and you use
basically a light bar scanner, so whatever
-
a photocopier does as well. And then there
will be a dot pattern that appears around
-
the light bar. And you extract that dot
pattern, you make that into a series of
-
bits, and you sign that dot pattern. And then
you print the signature onto the bill. Now
-
there's several problems with this, which
are all explained in those papers, I don't
-
have time to go into that, but in
principle, this works. Then, next, cards.
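Before moving on: the sign-the-fingerprint idea behind this fiber scheme can be sketched in a few lines. This is a toy illustration, not Bader's actual construction; HMAC stands in for a real public-key signature (so here, unlike in a real scheme, verification needs the mint's secret), and the threshold and sizes are invented:

```python
import hashlib
import hmac
import random

MINT_KEY = b"mint-secret-key"  # stand-in: a real scheme uses public-key
                               # signatures, so verifiers need no secret
THRESHOLD = 32                 # tolerated bit flips (invented value)

def sign(bits):
    # Stand-in for a real signature over the extracted bit pattern.
    return hmac.new(MINT_KEY, bits.encode(), hashlib.sha256).hexdigest()

def mint_bill(fiber_bits):
    # Print the measured reference pattern and its signature on the bill.
    return {"printed_bits": fiber_bits, "signature": sign(fiber_bits)}

def verify_bill(bill, measured_bits):
    # 1) The signature must cover the printed reference pattern.
    if not hmac.compare_digest(bill["signature"], sign(bill["printed_bits"])):
        return False
    # 2) A fresh measurement must be close to the reference (wear tolerated).
    flips = sum(a != b for a, b in zip(bill["printed_bits"], measured_bits))
    return flips <= THRESHOLD

random.seed(1)
reference = "".join(random.choice("01") for _ in range(256))
bill = mint_bill(reference)
# A genuine bill re-measured with ~2% bit noise from wear:
worn = "".join(b if random.random() > 0.02 else "10"[int(b)]
               for b in reference)
print(verify_bill(bill, worn))             # genuine, slightly worn
print(verify_bill(bill, reference[::-1]))  # a different fiber pattern
```

The same structure, measure, compare within a noise threshold, check a signature, recurs in almost everything that follows.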
-
You know magnetic stripes and PIN, the way
we used to use them in Europe, I think you
-
still use them in the US, I'm not sure...
But nobody knows how to copy
-
magstripes, right? So, you add stuff to
the card, so that it becomes detectable
-
when somebody has copied the card onto a
forgery. So, you use holograms. So far as
-
I know, holograms are also copyable, now,
I don't have a literature reference there,
-
but... stuff can be done. Now somebody in
1980 already proposed this: You randomly
-
disperse magnetic fibers in a coating, you
scan those fibers with a, well,
-
electromagnetic sensing device, and turn
them into pulses and AND the pulses with clock
-
etc., turn them into bits again, sign
that pattern etc. Then there's also
-
this nice proposal where you randomly
disperse conductive particles in an
-
insulating material, scan it with microwave,
it's basically the same principle, also
-
from the 1980s. Next, identity documents,
somebody proposed using the translucency
-
of a paper strip in an identity document,
scan that strip, turn the translucency
-
pattern into a bitmask, sign the bitmask,
etc. Now, Simmons already said that
-
this was too easily cloneable, because,
you know, you can just take a photograph
-
of this and reproduce it through
photographic techniques. So translucency
-
isn't really nice. Now you could also
potentially use the exact
-
three-dimensional cotton fiber pattern of
the paper. But that proposal was also in
-
1991, and Simmons also said, this is
infeasible to do. However, in 1999,
-
somebody came up with something similar,
they take the texture hash of a postal
-
envelope, so you just print a square on
the envelope, take a high resolution
-
picture of that, and then hash that with a
certain hashing code that ensures that all
-
these things collapse into the same bit
pattern every time. This works. Then
-
finally, those Treaty Limited Items, the
reflective particle tags, you basically
-
affix such a tag to the surface of a
treaty-limited item, then you cure them
-
with ultraviolet light, so that you turn
it into a glass-like substance, which
-
makes it tamper evident: if I try to take
it off, the glass breaks, and it also
-
preserves the particle orientation, and
then you put a laser onto it, you look at the
-
reflective pattern, and you have your
identifier. So, if you ever have a bunch
-
of nukes to count, that might be
interesting. The common theme here is that
-
we are using an intrinsic aspect of an
item that's infeasible to copy, but easily
-
readable. It's unpredictable, and it
should ideally be unchanging. Which brings
-
us to a proposal in 2001 of physical
one-way-functions. Basically the idea was,
-
you have an epoxy with miniscule glass
spheres, you cure the epoxy, you make it
-
into a ten by ten by two and a half
millimeters, don't know the exact
-
dimensions anymore, ... I say "sphere", I
mean, what's it called, cube... cuboids,
-
something like that... And then you illuminate
it by a laser, and then you get a speckle
-
pattern out of that, because the laser
will disperse in a really unpredictable
-
pattern, and you capture that at 320 by
240 pixels, you turn that into a 2400 bit
-
key with a so-called Gabor transform. I have
no idea how the math behind that works because
-
that's not my field of expertise. And you
get interesting properties like drilling a
-
hole here causes half the bits to flip, so
it’s tamper resistant, and it mirrors the
-
way one-way functions work, like SHA-1 and
SHA-256: ideally, if you flip one bit in
-
your input, half your output bits are
flipped. So this paper is really the first
-
paper that proposed this as a connection
with cryptography. So here, reading the
-
structure is feasible, because, you know,
you have this glass pattern, you can just
-
– I say just, but you can use
microscopic techniques to read it out
-
exactly, but good luck with having this
sub-micron accuracy for all those glass
-
spheres in the epoxy. So, you can, in
theory, if you know the structure, emulate
-
or simulate how a laser passes through
this. But it requires a lot of
-
computational power, and in order to...
you also can't build a database of responses
-
to challenges, because imagine that the
challenge to the structure is a laser at
-
different orientations, like, I can say a
laser under an angle of 5 degrees, or 10
-
degrees or 20 degrees, and at different
locations, and all those responses will be
-
different. So this challenge-response space
is infeasibly huge. So the protocol here
-
would be, first, you read this thing on a
trusted terminal, and you create a random
-
collection of challenge-response pairs.
Your challenges have to be kept secret
-
because, next, you get an authentication
request from an untrusted terminal, and
-
you challenge that terminal. And the idea
would be that it is infeasible to send the
-
correct response key if you don't have the
device containing this PUF, er, this
-
physical one-way function. So you then
receive the response key, and you reject
-
this if the key differs by too many bits.
Because it won't be a perfect match. There
-
might be scratches, there might be slight
micron differences in the orientations, it
-
might be a bad camera, you get some
differences, and the way you then do this
-
is you calculate the lowest acceptance
rate of a counterfeit device
-
that you are willing to tolerate, and that
gives you the number of bits. And then you can get a
-
better match rate if you repeat steps
(4) - (6) a few times, and if you run
-
out of challenge-response pairs you can just go
back to (1). That's the general idea. So
-
this is the first paper that made this
connection with cryptography, it has a
-
defined protocol, but there are several
not-so-nice things, like, you have special
-
equipment required, and we would really
like to have the same possibility in
-
silicon and silicon only. Now in this
paper already, the proposal was that you
-
might be able to have a similar approach,
if you scatter electrons... uhhh... I
-
don't understand what this says, but I
know that it is not what we're going to
-
see next. As an aside, if you do this kind
of thing, then you get to read very old
-
papers. So, wasn't it nice, back when you
could say this: "In the fuel rod placement
-
monitor … high radiation levels in the "hot"
cell provided the general tamper resistance."
-
Or: "The seismic sensors … would detect any
attempt to gain physical access to the
-
package long before the information
security is in jeopardy." Now I wouldn't
-
actually take that one as a bet because
I know you guys, but, uhhm, the first one
-
is pretty good. And you get to see things
like this. This is how RSA was done in
-
1984. This is a – I think – that's an
ISA, maybe pre-ISA bus, I don't know... So
-
this is how that was done. And the text is
really beautiful: they scanned an old
-
paper, basically typed on a typewriter.
This is available online, by the
-
way, if you have university access...
sorry... Then, there are other solutions
-
to this problem, of course. You have
hardware security modules, you have
-
smartcards, you have trusted platform
modules... actually, I found out we only
-
have those since 2006, I felt [like] they
were older?... But you still have the
-
problem of key management, right? Because
the key isn't tied to the platform. If I
-
can extract the key, and put it into
another trusted platform module, or
-
another hardware security module, then
we're still dead in the water. So the idea
-
with these things is that the key never leaves the
device – ideally – but then how does
-
the key enter the device? You can enter
new keys, you can enter key-encrypting
-
keys to decrypt keys that you never see,
that another hardware security module exports,
-
it's all interesting crypto, but you also
get the problem of what can the key do,
-
are you limited to 1024-bit RSA, and is
it possible to emulate all this once you
-
have the key, right? We really want to
have other aspects to our function. Now,
-
this is the first name for PUFs, "silicon
physical random functions", but they
-
already knew that "PRF" might have some
three letter acronym that clashes with
-
"pseudo-random function", so they decided
to go for "physical uncloneable
-
functions". There's an interesting
discussion going on whether it should be
-
"physical" or "physically"... not going
into that. The idea is, tamper resistance
-
in general is expensive, is difficult,
it's just... let's look at a different
-
approach. There is enough process
variation across identical integrated
-
circuits, where... yeah, so, they're not
identical because of those process
-
variations. And already in 2000 somebody
made a... Lofstrom, Daasch and Taylor had
-
a small paper on specific special device
identification circuits. But, if you want
-
to use those for secure device
identification and authentication, then
-
just a single such circuit is not enough.
You need more. So, what do you do? You
-
build this. And I don't think it's
really feasible, ... basically, this is
-
the entire circuit, you have a delay
circuit here, ...this is a ring oscillator
-
PUF. So you have a delay circuit here,
this is a self-oscillating loop,
-
basically, this feeds back into this. And
the challenge here is a bit for each of
-
these blocks. And what the bit says: if
it's one then you pass through, if it's
-
zero you pass over. So if you have a
different challenge, you have a different
-
path through this PUF. So ideally, for
each challenge, it should be unpredictable
-
whether this final arbiter block here...
uhh, somewhere over there... gives a one
-
or a zero, and then you count the pulses,
and you identify your circuits. Now
-
attacks on this were also quite well
studied in this paper... possible attacks.
-
So, you have the duplication attack, which
is basically cloning, which should be
-
impossible. Right, that's the general
idea: Cloning should be impossible. There
-
is emulation from measuring, so, you build
a model from this by measuring the exact
-
distances between logical units inside the
PUF, or the length of the wires inside the
-
PUF, also deemed infeasible because how
are you going to measure this without
-
destroying the PUF. This is back in 2001. Then
there was emulation from modeling, so basically, if
-
you get these challenge-response pairs, if
you get enough of them, you can apply some
-
nice machine-learning algorithms to that,
and then you get prediction of responses.
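To make this modeling attack concrete, here is a toy simulation in the spirit of the machine-learning attacks in the literature: the arbiter PUF is modeled as the sign of a weighted sum of stage-delay differences (the standard additive delay model), and a plain perceptron, a deliberately simple stand-in for the logistic-regression and PAC-learning attacks, is trained on collected challenge-response pairs. All sizes are invented for illustration:

```python
import random

N_STAGES = 16  # toy size; published attacks handle 64+ stages
random.seed(42)

# The "real" PUF: per-stage delay differences drawn at random.
true_w = [random.gauss(0, 1) for _ in range(N_STAGES + 1)]

def features(challenge):
    # Standard parity transform: phi_i is the product of (1 - 2*c_j)
    # over the challenge bits after stage i, plus a constant term.
    phi, prod = [], 1.0
    for c in reversed(challenge):
        phi.append(prod)
        prod *= 1 - 2 * c
    phi.append(prod)
    return phi[::-1]

def puf_response(challenge, w):
    s = sum(wi * xi for wi, xi in zip(w, features(challenge)))
    return 1 if s > 0 else 0

# The attacker collects challenge-response pairs...
crps = []
for _ in range(3000):
    ch = [random.randint(0, 1) for _ in range(N_STAGES)]
    crps.append((ch, puf_response(ch, true_w)))

# ...and fits a model with a perceptron on the parity features.
model_w = [0.0] * (N_STAGES + 1)
for _ in range(20):
    for ch, r in crps[:2500]:
        if puf_response(ch, model_w) != r:
            label = 1 if r == 1 else -1
            model_w = [w + label * x
                       for w, x in zip(model_w, features(ch))]

held_out = crps[2500:]
acc = sum(puf_response(ch, model_w) == r for ch, r in held_out) / len(held_out)
print(f"prediction accuracy on unseen challenges: {acc:.2f}")
```

The point is that the response is linear in the transformed challenge, which is exactly why these PUFs fall to standard learning algorithms.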
-
And, finally, you have the control
algorithm attack, which is
-
attacking the PUF's control algorithm
without ever getting into the PUF. If you
-
can do that, then your PUF is useless. So,
they also proposed controlled physically
-
uncloneable functions, which is the same
but with bells on. So you have an access
-
function for the PUF, which is part of the
PUF. This is to prevent against that final
-
attack. So basically you overlay the logic
of the access function with the PUF, so
-
that to access the logic of the access
function, you have to break the PUF. And
-
if you break the PUF, everything breaks,
no longer works. So this gives additional
-
properties. An uncontrolled PUF can only
be used for device authentication. This
-
can be used to have nice things like proof
of execution on a specific device.
-
Potentially for things that I don't have an
opinion on, like code that only runs on specific
-
devices, but basically whatever you need a
secure cryptographic key for, you should
-
really be using a controlled PUF. That's
the idea. But you can still do device
-
identification. So, how does a controlled
PUF look? You have a random hash here, you
-
have a potential ID here, you have the PUF
here, Challenge, ID, Personality into the
-
random hash, you run that through the PUF,
do some error correction, because PUFs are
-
not ideal, and then the random hash again,
and then the response. This is to prevent
-
all these attacks. If you're interested in
this, read the paper. Then, in 2011, a
-
formal model was proposed, what do we
really need from PUFs? First, we need
-
robustness. Across evaluations, we need
the same response. We need physical
-
uncloneability, it really shouldn't be
possible to clone these things, and we
-
need unpredictability. Now, these two are
potentially a lot, so we'll get into that
-
on the final slide, I think... And since then,
since 2001 there have been a lot of proposals
-
and attacks on PUFs. So, first, there are
the Arbiter PUFs, which are all delay
-
based. So, the general idea here is that
if you run a signal through a chip, it is
-
delayed by a certain amount. But the
amount is unique per chip. But it turns
-
out that you can pretty easily model this.
And even the Bistable Ring PUF, which is
-
fairly recent, I think, you can do some
fancy machine learning... I highly
-
recommend this paper, "PAC learning of
arbiter PUFs". Basically, the idea is, you
-
have 30000 challenge-response pairs, and
that is enough to give you 100% accuracy
-
on a 256-bit challenge PUF. That's not
good. This doesn't really work, if you can
-
model it that way. And you can also use
optical measuring of signals through
-
devices at six-picosecond accuracy. So
these things might not be around for much
-
longer. Then there are memory-based PUFs.
They are based on bistable memory, which
-
basically looks like this, and it's also
delay-based, but here it's unique to this
-
cell. You have a block of these cells,
they are all independent, so you get a
-
pattern out of this. These cells go to one
or zero, and they are fairly stable
-
in doing it. I'll show you a picture later of
what happens if you have a nice PUF of
-
this type and if you don't have a nice PUF
of this type. However, if you have an SRAM
-
PUF, for instance, you have fairly limited
SRAM. So you can just, in principle, read
-
all this out and store all the bits in a
database. And then you can, er, clone the
-
PUF. Because you can use focused ion beams
to trim the SRAM of another chip into the
-
correct orientation. And, well, emulation,
if you have this database, you can just
-
respond from your database. So, this is,
in some literature, termed a "weak PUF",
-
but it's probably still the most useful
one we have right now. This is usually
-
also what's in your devices if it's
claimed to have a physical uncloneable
-
function. But they are of the control
variety most of the time. Then finally,
-
recently somebody proposed, I think that
was, yeah, Schaller, Xiong, and
-
Anagnosto... I cannot pronounce it. But the
decay-based PUFs, the idea is, you have
-
DRAM, take the power off, put the power
back on, look how it decayed. No attacks
-
on that that I have seen yet. So, the
final few minutes of this talk will be
-
about your very own memory PUFs. Which is
trivial. Right? ...No. It's not, actually.
-
And all this time before, you might think,
why would we even bother with this? It
-
seems to be hopeless for PUFs, there is
not enough randomness in the silicon, but
-
I disagree. Because for one, some
protection is better than none, which is
-
what most system-on-chip devices have. And
two, I do not believe in silver bullets.
-
This should be part of a greater security
mechanism. So if nothing else, if all you
-
want from this talk is some interesting
paper to read, just one, read this one.
-
That's on slide 39, it's called
"Lightweight anti-counterfeiting solution
-
for low-end commodity hardware using
inherent PUFs." And, preferably, you also
-
read this related one, "PUF-based software
protection for low-end embedded devices."
-
Don't be fooled by the terms "IP
protection" and "license model". This is a
-
Secure Boot environment. You want it, in
your Raspberry Pi, for instance. I don't
-
know whether Raspberry Pis have it, that's
for you to find out. So what you'll need
-
is a device with a masked ROM to hold the
boot loader, like the first stage of code
-
needs to be under your control. You need
to have that modifiable startup code, you
-
need to be able to modify it, obviously.
And you need on-board SRAM, to build the
-
PUF from. And then you need some
non-volatile memory for encrypted firmware
-
and helper data. So, in the PUFFIN
project, which that earlier picture was a
-
result of... So, there are several results here.
This is an STM32F100B microcontroller,
-
this is a PandaBoard, which is pretty much like a
mobile phone actually, so what you want to
-
see is this. White noise. This part is a
PUF-like memory range, this part is
-
probably spoiled by the bootloader or
something like that or the wrong code, but
-
this you can use. This looks good. So,
once you have such a white-noise area, you
-
start measuring a lot of times, and then
you compute the Hamming distance between
-
lots of measurements from lots of
different devices. And you want it to look
-
like this, you want it be around half.
Because that means that every device will
-
look different. By about 50%. You also measure
the intra-class Hamming distance, which is
-
between measurements of the same PUF, and you
want that to be below 0.1. You don't want
-
that to be too inaccurate, because then
your error correction becomes too complex
-
and starts leaking information, and you
will need error correction, using for
-
example Golay codes. So this first paper I
mentioned, the... this one... the
-
lightweight anti-counterfeiting one, this
is also from that paper. Read it, it also
-
explains how this fuzzy extraction works.
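The two quality metrics just described, inter-class distance around 0.5 and intra-class distance below 0.1, can be sketched with simulated devices. The cell count, device count, and noise rate here are invented for illustration; real measurements come from actual SRAM power-up states:

```python
import random

random.seed(0)
N_CELLS = 1024   # simulated SRAM cells per device
N_DEVICES = 8
N_READS = 5
NOISE = 0.03     # invented per-cell flip probability between reads

# Each simulated device gets its own random power-up fingerprint.
devices = [[random.randint(0, 1) for _ in range(N_CELLS)]
           for _ in range(N_DEVICES)]

def read_puf(fingerprint):
    # A read returns the fingerprint with a little measurement noise.
    return [b ^ (random.random() < NOISE) for b in fingerprint]

def frac_hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Intra-class distance: repeated reads of the same device (want < 0.1).
intra = [frac_hamming(read_puf(d), read_puf(d))
         for d in devices for _ in range(N_READS)]

# Inter-class distance: reads of different devices (want around 0.5).
inter = [frac_hamming(read_puf(devices[i]), read_puf(devices[j]))
         for i in range(N_DEVICES) for j in range(i + 1, N_DEVICES)]

print(f"mean intra-class distance: {sum(intra) / len(intra):.3f}")
print(f"mean inter-class distance: {sum(inter) / len(inter):.3f}")
```

If the intra-class number creeps up, the error correction has to work harder, which is exactly the leakage concern mentioned above.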
If you're interested in this, there's lots
-
of scientific literature out there. And
then finally, you build this fuzzy
-
extractor, and then you enrol your chip.
And you generate some helper data for
-
this error correction, and then once you
challenge the chip you send this
-
error-correcting data with the challenge.
And in the end the idea would be that you
-
get a secret S' from every chip. Now how
can you use this? You have the bootloader
-
in the masked ROM, this is the first-stage
bootloader, it challenges the PUF, and
-
decrypts the second-stage bootloader,
which comes from external memory. And then
-
you boot the embedded operating system.
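A minimal sketch of the enrollment and reconstruction steps just described, using a repetition code as a stand-in for the Golay codes mentioned earlier (a toy repetition code actually leaks through its helper data, see the papers), and a deterministic noise pattern so the example is reproducible. All parameters are invented:

```python
import hashlib
import random

random.seed(7)
REP = 5        # 5x repetition code, stand-in for real Golay codes
K_BITS = 128   # length of the extracted secret

def encode(bits):
    # Repetition encoding: each bit repeated REP times.
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    # Majority vote per REP-sized block.
    return [1 if sum(bits[i:i + REP]) > REP // 2 else 0
            for i in range(0, len(bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Enrollment (trusted environment): pick a random secret, measure the
# PUF once, and store only the helper data, not the secret itself.
puf_ref = [random.randint(0, 1) for _ in range(K_BITS * REP)]
secret = [random.randint(0, 1) for _ in range(K_BITS)]
helper = xor(encode(secret), puf_ref)

# In the field: a noisy re-read of the PUF (here, a deterministic 4%
# flip pattern for reproducibility) plus the helper data reconstruct
# the same secret, which then keys the second-stage decryption.
noisy = [b ^ (i % 25 == 3) for i, b in enumerate(puf_ref)]
recovered = decode(xor(noisy, helper))
key = hashlib.sha256(bytes(recovered)).digest()

print(recovered == secret)  # -> True
```

The derived `key` is what the first-stage bootloader would use to decrypt the second stage; the names here are illustrative, not from the papers.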
So, this should look familiar to a lot of
-
you, because this is basically also how
device attestation on x86 works if you're
-
using trusted platform modules. So, in a
bit more detail, same procedure, query the
-
PUF, decrypt and call, here the key also
ends up, and you decrypt and call the
-
kernel, and then finally, this is how it
really looks in real detail. And even if
-
you don't want to build this, you'll still
have this: So, remember when I showed you
-
the intra-class Hamming distance, the 10% of
differences between measurements? That's
-
caused by the red dots. Those are the
unstable SRAM cells. You can use those as
-
seeds for a random function. And
hopefully, you won't have this. This looks
-
wrong, this is not a PUF, this is too
predictable. Unfortunately, all this won't
-
be possible on x86, because we looked for
the PUFs in the CPUs, but Intel and AMD
-
both explicitly zero everything. Finally,
a word on privacy. I don't have too much
-
time for this, but I really liked that
they mentioned that users feel they can
-
be tracked if you have
a unique identifier. As though it's not a
-
valid concern. Damn the users for being paranoid.
Now, back to the controlled PUF. You can
-
add personality IDs as a user. If they
challenge it, you add a personality, so
-
one application reading the PUF gets a
different ID from another application,
-
which changes the entire output of the
hash function, no paranoia required
-
anymore. Hopefully. Finally, the references.
Google Scholar is your friend. The rest of
-
the slides are... all kinds of
references... Read them! You've already seen
-
all of those, read them,
thank you for your attention.
-
applause
-
Herald: Thank you, Pol. We have
time for maybe two questions.
-
Please come up to the mics... Mic 3!
-
Mic 3: What do you think about MEMS-based
physically uncloneable functions, where
-
they basically use the accelerometer
sensors, and the deviations in these
-
sensors by inducing challenges
as controlled vibration?
-
Pol: Sorry, I missed the
first word of your question.
-
Mic 3: The MEMS-based… basically the
technology that is being used to build
-
accelerometers in silicon. So Bosch has
some PUF chips based on that, where they
-
have arrays of these MEMS-chips, and then
a controlled vibrator to induce the
-
challenge into that.
-
Pol: I think they're probably more secure
than silicon-based PUFs, because they are
-
built for randomness, whereas we're here
trying to extract randomness from an
-
existing circuit. Yeah, they're
interesting. Use them if you can, but most
-
people don't have the option.
-
Mic 3: Thank you.
-
Herald: More questions?
Pol: Up there!
-
Herald: Ok, Mic 7!
-
Mic 7: Hi, thanks for your talk, I'd never
heard of PUFs. I recently went on a
-
quest to find a usable smartcard that met
all the things I wanted to do, like open
-
source, et cetera. Can you expand a bit on
how PUFs could be used with an OpenPGP
-
smartcard or similar?
-
Pol: Short answer: no. I have no idea
whether OpenPGP will ever support anything
-
like this. You have the PKCS protocols,
I know that in theory this is possible.
-
I don't know whether anything has
implemented it. There are PUFs on
-
smartcards, but whether... We haven't looked
into this, I don't know of anyone who has.
-
Mic 7: Thank you.
-
Pol: But that doesn't mean
it doesn't exist.
-
Herald: That would be all.
Please give it up for Pol, one more time.
-
Pol: Thanks!
applause
-
postroll music
-
subtitles created by c3subtitles.de
in the year 2017. Join, and help us!