[rc3 preroll music]
Herald: So for the next talk, I have Jo Van Bulck and Fritz Alder from the University of Leuven in Belgium, and David Oswald, professor for cyber security in Birmingham. They are here to talk about trusted execution environments, which you probably know from Intel and so on, and which you should probably not trust all the way, because there is software involved and it has its flaws. So they are talking about ramming enclave gates, which is always good: a systematic vulnerability assessment of TEE shielding runtimes. Please go on with your talk.
Jo van Bulck: Hi, everyone. Welcome to our talk. So I'm Jo, from the imec-DistriNet research group at KU Leuven, and joining me today are Fritz, also from Leuven, and David from the University of Birmingham. We have this very exciting topic to talk about: ramming enclave gates. But before we dive into that, I think most of you will not know what enclaves are, let alone what these TEEs are. So let me first start with an analogy. Enclaves are essentially a sort of secure fortress in the processor, in the CPU: an encrypted memory region that is exclusively accessible from the inside. And what we know from the long history of fortress attacks and defenses, of course, is that when you cannot take a fortress because the walls are high and strong, you typically aim for the gates, right? That's the weakest point in any fortress defense. And that's exactly the idea of this research, because it turns out to apply to enclaves as well. We have been ramming the enclave gates: we have been attacking the input/output interface of the enclave. A very simple idea, but with very drastic consequences, I dare to say.

So this is the summary of our research: over 40 interface sanitization vulnerabilities that we found in over 8 widely used open source enclave projects. We will go into a bit of detail on that in the rest of the slides. A nice thing to mention here is that this resulted in two academic papers to date, over 7 CVEs, and altogether quite some responsible disclosure and lengthy embargo periods.
David Oswald: OK, so I guess we should talk about why we need such enclave fortresses anyway. If you look at a traditional operating system or computer architecture, you have a very large trusted computing base. For instance, on the laptop that you most likely use to watch this talk, you trust the kernel, you trust maybe a hypervisor if you have one, and the whole hardware underneath the system: the CPU, memory, maybe a hard drive, a trusted platform module, and the like. The problem is that with such a large TCB, trusted computing base, you can have vulnerabilities basically everywhere, and also malware hiding in all these parts. So the idea of enclaved execution, as we find it for instance in Intel SGX, which is built into most recent Intel processors, is that you take most of the software stack between an actual application, here the enclave app, and the actual CPU out of the TCB. Now you only really trust the CPU, and of course you trust your own code, but you don't have to trust the OS anymore. SGX, for instance, promises to protect against an attacker who has achieved root in the operating system, and, depending on who you ask, even against, for instance, a malicious cloud provider. So imagine you run your application in the cloud: you can still run your code in a trusted way with hardware-level isolation, you have attestation and so on, and you no longer really have to trust even the administrator.

The problem is, of course, that attack surface remains. Previous attacks, some of which I think will also be presented at this remote Congress this year, have targeted vulnerabilities in the microarchitecture of the CPU, so you are hacking basically at the hardware level: you had Foreshadow, you had microarchitectural data sampling, Spectre, LVI, and the like. But what less attention has been paid to, and what we'll talk about more in this presentation, is the software level inside the enclave, which I hinted at: there is some software that you trust. So now we'll look in more detail into what actually is in such an enclave, from the software side. Can an attacker exploit any classical software vulnerabilities in the enclave?
Jo: Yes, David, that's quite an interesting approach, right? Let's aim for the software. So we have to understand what the software landscape out there looks like for these SGX enclaves and TEEs in general. That's what we did: we started with an analysis, and you see some screenshots here. This is actually a growing open source ecosystem, with many of these runtimes, library operating systems, and SDKs. Before we dive into the details, I want to pause on the common factor that all of them share: what is the idea behind these enclave development environments? What any TEE, trusted execution environment, gives you is this notion of a secure enclave oasis in a hostile environment. You can do secure computations in the green box while the outside world is burning. But as with any defense mechanism, as I said earlier, the devil is in the details, and typically at the gate, right? So how do you mediate between that untrusted world, where the desert is on fire, and the secure oasis in the enclave? The intuition is that you need some sort of intermediary software layer, what we call a shielding runtime. It essentially forms a secure bridge to go from the untrusted world into the enclave and back. And that's what we are interested in: what kind of security checks do you need to do there? It's quite a beautiful picture you have: on the right the fertile enclave, on the left the hostile desert, and we make this secure bridge in between. What we are interested in is: what if it goes wrong? What if your bridge itself is flawed? To answer that question, we look at that yellow box and we ask what kind of sanitization, what kind of security checks, you need to apply when you go from the outside to the inside, and back from the inside to the outside. One of the key contributions that we have built up in the past two years of this research, I think, is that that yellow box can be subdivided into two smaller, subsequent layers. The first one is the ABI, the application binary interface: very low-level CPU state. The second one is what we call the API, the application programming interface: the kind of state that is already visible at the programming language level. In the remainder of the presentation, we will guide you through some relevant vulnerabilities on both these layers to give you an understanding of what this means. So first, Fritz will guide you through the exciting low-level landscape of the ABI.
Fritz: Yeah, exactly. And Jo, you just said it's the CPU state, it's the application binary interface. But let's take a look at what this actually means. It means, basically, that the attacker controls the CPU register contents, and that on every enclave entry and every enclave exit we need to perform some tasks, so that the enclave and the trusted runtime have a well-initialized CPU state and the compiler can work with the calling conventions that it expects. So these are basically the key parts: we need to initialize the CPU registers when entering the enclave and scrub them when exiting the enclave. We can't just take anything the attacker gives us as a given; we have to initialize it to something proper. We looked at multiple TEE runtimes and multiple TEEs, and we found a lot of vulnerabilities in this ABI layer. One key insight of this analysis is that a lot of these vulnerabilities happen on complex instruction set processors, so on CISC processors, and basically on the Intel SGX TEE. We also looked at some RISC processors, and of course it's not representative, but it's immediately visible that the complex x86 ABI seems to have a far larger attack surface than the simpler RISC designs. So let's take a look at one example of this more complex design.
For example, there are the x86 string instructions, which are controlled by the direction flag. There is a special x86 rep instruction prefix that basically allows you to perform streamed memory operations. If you do a memset on a buffer, this will be compiled to a rep string-operation instruction. The idea is that the buffer is walked from left to right and overwritten by memset. But the direction flag also allows you to go through it from right to left, so backwards. Let's not think about why this was a good idea or why this is needed; but it is definitely possible to just set the direction flag to one and run over this buffer backwards. And what we found out is that the System V ABI actually says that this flag must be clear, so set to forward, on function entry and return, and that compilers expect this to be the case. So let's take a look at what happens when we do this in our enclave. When our trusted application performs this memset on our buffer, then on normal entry, with the normal direction flag, we just walk this buffer from front to back; you can see here it runs correctly from front to back. But now, if the attacker enters the enclave with the direction flag set to 1, so set to run backwards, the memset now runs backwards from the start of our buffer, from where the pointer points right now, as you can see. So that's a problem, and that's definitely something we don't want in our trusted applications, because, as you can imagine, it allows you to overwrite keys that lie in the memory you reach going backwards, and to corrupt data that was never meant to be touched. When we reported this, it actually got a nice CVE assigned with base score High, as you can see here on the next slide.
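To make the direction-flag hazard concrete, here is a minimal user-space sketch (an illustration, not the runtimes' actual code) of a memset-style primitive built on rep stosb; entering it with the direction flag set makes the very same instruction destroy the bytes below the buffer:

```c
#include <stdio.h>
#include <stddef.h>

/* Minimal user-space sketch (not the runtimes' actual code) of the
 * direction-flag hazard: a memset-style loop built on the x86
 * `rep stosb` string instruction. With DF clear the store walks the
 * buffer forward; if the caller first sets DF with `std`, the same
 * instruction walks backwards from the start address and clobbers
 * whatever lies below the buffer. This is why a shielding runtime
 * must execute `cld` on every enclave entry. x86-64, GCC/Clang;
 * stack layout (and thus the exact bytes hit) is compiler-dependent. */
static void rep_memset(void *dst, unsigned char val, size_t len)
{
    __asm__ volatile("rep stosb"
                     : "+D"(dst), "+c"(len)
                     : "a"(val)
                     : "memory");
}

int main(void)
{
    volatile char below[8] = "SECRET!";   /* stand-in for an enclave secret  */
    char buf[8];

    __asm__ volatile("std");              /* attacker-controlled entry: DF=1 */
    rep_memset(buf, 'X', sizeof buf);     /* writes buf[0], buf[-1], ...     */
    __asm__ volatile("cld");              /* the sanitization the ABI assumes */

    printf("below the buffer: %.7s\n", (const char *)below);
    return 0;
}
```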
And while you may say, OK, well, that's one instance, you just have to think of all the flags to sanitize and all the flags to check... but wait, of course, there's always more, right? As we found out, there's also the floating point unit, which comes with a whole lot of other registers and a whole lot of other things to exploit. I will spare you all the details, but for this presentation, just know that there is an older x87 FPU and a newer SSE unit that does vectorized floating point operations; there is the x87 FPU control word, and the MXCSR register for these newer instructions. The x87 FPU is older, but it's still used, for example, for extended precision, like long double variables. So "old" and "new" don't really apply here, because both are still relevant. And that's kind of the thing with x86 and x87: old, archaic things that you could call outdated are still relevant, still used, nowadays. Again, if you look at the System V ABI, we see that these control bits are callee-saved, so they are preserved across function calls. The idea, which to some degree holds merit, is that this is global state that you can set once and that is carried along within one application: one application can set some global state and keep that state across all its usage. But the problem, as you can see here, is that our enclave is basically part of one application, and we don't want our attacker to have control over the global state within our trusted application, right?
So what happens if FPU settings are preserved across calls? Well, for a normal user, let's say we just do some calculation inside the enclave, like 2.1 times 3.4, which nicely comes out as 7.14, a long double. That's nice, right? But what happens if the attacker now enters the enclave with some corrupted precision and rounding modes for the FPU? Well, then we actually get another result: we get distorted results with a lower precision and a different rounding mode. It's actually rounding down here whenever the result exceeds the precision. And this is something we don't want, right? The developer expects a certain precision, long double precision, but the attacker can actually reduce it to a very short precision. We reported this, and we actually found this issue also in Microsoft OpenEnclave; that's why it's marked as not exploitable here. What we found interesting is that the Intel SGX SDK, which was vulnerable, patched this with an xrstor instruction, which completely restores the extended state to a known value, while OpenEnclave only restored the specific register that was affected, with an ldmxcsr instruction.
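To get a feel for the effect, here is a tiny user-space sketch (hypothetical values, not the enclave code itself) that mimics the poisoning through the portable fenv.h interface; on glibc, fesetround() updates both the x87 control word and MXCSR:

```c
#include <fenv.h>
#include <stdio.h>

/* Sketch: the same computation yields different results once the
 * caller poisons the floating point rounding mode. fenv.h only
 * exposes the rounding mode portably; the real attack also flips the
 * x87 precision-control bits. Compile with -lm and without aggressive
 * constant folding (e.g. -O0); only the last digits differ. */
int main(void)
{
    volatile long double x = 2.1L, y = 3.4L;

    fesetround(FE_TONEAREST);               /* clean state             */
    printf("clean:    %.20Lf\n", x * y);

    fesetround(FE_DOWNWARD);                /* attacker-poisoned state */
    printf("poisoned: %.20Lf\n", x * y);

    fesetround(FE_TONEAREST);               /* restore                 */
    return 0;
}
```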
And let me just skip over the next few slides here, because I want to give you the idea that this was not enough. We found out that, even if you restore this specific register, there is still another set of data registers that you can simply mark as in-use before entering the enclave, with which the attacker can make any floating point calculation result in a not-a-number. And this is silent: it is not programming-language specific, it is not developer specific. It is a silent ABI issue where the calculations just come out as not-a-number. So we reported this as well, and now, thankfully, all enclave runtimes use the full xrstor instruction to fully restore this extended state. It took two CVEs, but now, luckily, they all perform this nice full restore.
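Roughly, that full restore amounts to the following (a sketch under assumed layout, not the actual SDK code):

```c
#include <stdint.h>

/* Sketch (assumed layout, not the actual SDK code) of the entry-time
 * sanitization the runtimes converged on. A known-good extended state
 * is captured once with `xsave` in a clean context; on every enclave
 * entry it is restored in full with `xrstor`, wiping whatever x87/SSE
 * state (control word, tag word, MXCSR, data registers) the attacker
 * prepared outside. The area must be 64-byte aligned and initially
 * zeroed; edx:eax hold the requested-feature bitmask. x86-64, GCC. */
static uint8_t clean_xstate[4096] __attribute__((aligned(64)));

void capture_clean_xstate(void)          /* run once, in a clean context */
{
    __asm__ volatile("xsave %0"
                     : "+m"(*clean_xstate)
                     : "a"(0xffffffffu), "d"(0xffffffffu)
                     : "memory");
}

void sanitize_on_enclave_entry(void)     /* run on every enclave entry */
{
    __asm__ volatile("cld");             /* direction flag, see above  */
    __asm__ volatile("xrstor %0"         /* full extended-state restore */
                     :
                     : "m"(*clean_xstate), "a"(0xffffffffu), "d"(0xffffffffu));
}
```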
I don't want to go through the full details of our case studies now, so let me just give you the ideas. We looked at these issues and wanted to find out whether they just feel dangerous or whether they really are. And we found that we can use overflows as a side channel to deduce secrets. For example, the attacker can use the MXCSR register to unmask floating point exceptions that are then triggered inside the enclave by some input-dependent multiplication. We found out that this side channel, if you have an input-dependent multiplication in the enclave, can actually be used to perform a binary search on the input space, and we can retrieve the multiplication secret in a deterministic number of steps. So even though we just flip a single mask bit, we can actually retrieve a secret in a deterministic number of steps.
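As a toy illustration of that binary search (hypothetical values; the real attack observes faults from the unmasked exception rather than reading sticky flags):

```c
#include <fenv.h>
#include <float.h>
#include <stdio.h>

/* Toy illustration of the overflow side channel: the "enclave"
 * multiplies its secret by an attacker-chosen x, and the attacker
 * learns only whether that multiplication overflowed. The overflow
 * boundary sits at x = DBL_MAX / secret, so a binary search over x
 * recovers the secret in a deterministic number of steps. Compile
 * with -lm. */
static int overflows(double secret, double x)
{
    feclearexcept(FE_OVERFLOW);
    volatile double r = secret * x;   /* the enclave's computation */
    (void)r;
    return fetestexcept(FE_OVERFLOW) != 0;
}

int main(void)
{
    const double secret = 3.7;        /* enclave-private value      */
    double lo = 1.0, hi = DBL_MAX;    /* attacker's search interval */

    for (int i = 0; i < 128; i++) {   /* deterministic step count   */
        double mid = lo / 2 + hi / 2; /* avoids overflow of lo + hi */
        if (overflows(secret, mid)) hi = mid;
        else                        lo = mid;
    }
    printf("recovered secret ~= %.15g\n", DBL_MAX / hi);
    return 0;
}
```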
And just so you know, there's more you can do. We can also do machine learning in the enclave. Jo said it nicely: you can run it inside the TEE, inside the cloud, and that's great for machine learning, right? So let's do handwritten digit recognition. In the model we looked at, we have two users: one user pushes a machine learning model, the other user pushes some input, and everything is protected with enclaves, right? Everything is secure. But we actually found that we can poison these FPU registers and degrade the accuracy of this machine learning from all digits predicted correctly down to just eight percent of digits predicted correctly; in fact, all digits were simply predicted as the same number. This basically made the machine learning model useless, right? There's more we did: we can also attack Blender, producing slight image differences between rendered Blender images. But this is just for you to see that it's a small but tricky thing, and that things can go wrong very fast on the ABI level once you start playing around with it. So that's it about the CPU state; now we will talk more about the application programming interface, which I think more of you will be comfortable with.
David: Yeah, thank you, Fritz. We'll take a quite simple example. Let's assume that we actually load a standard Unix binary into such an enclave, and there are frameworks that can do that, such as Graphene. What I want to illustrate with this example is that it's actually very important to check where pointers come from, because the enclave partitions memory into untrusted memory and enclave memory, and they live in a shared address space. The problem is as follows. Let's assume we have an echo binary that just prints an input, and we give it as an argument a string pointer, which normally, when everything is fine, points to some string, let's say "hello world", located in untrusted memory. If everything runs as it should, the enclave will follow the pointer to untrusted memory and just print that string. But the problem is that the enclave also has access to its own trusted memory. So if you don't check this pointer and the attacker passes a pointer to a secret that lives in enclave memory, what will happen? Well, the enclave will fetch it from there and just print it. Suddenly you have turned this into a memory disclosure vulnerability. And we can see that in action here for the framework named Graphene that I mentioned: we have a very simple hello world binary and we run it with a couple of command line arguments. Now, on the untrusted side, we change a memory address to point into enclave memory. As you can see, normally it should print "test" here, but it actually prints a super secret enclave string that lived inside the memory space of the enclave. These kinds of vulnerabilities are quite well known from user-to-kernel research and other settings; they're called confused deputy problems. The deputy kind of has the gun now, can read enclave memory, and suddenly does something it is not supposed to do, because it didn't really check where the memory belongs. So I think this vulnerability seems quite trivial to solve: you simply check, every time, where pointers come from. But as Jo will tell you, it's often not quite that easy.
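For reference, the kind of check David alludes to looks roughly as follows; a minimal sketch with assumed globals, where real SDKs expose an equivalent primitive (e.g. sgx_is_outside_enclave() in the Intel SGX SDK):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the basic sanitization a shielding runtime must apply to
 * every attacker-supplied pointer (assumed enclave layout): the
 * pointed-to range must lie entirely outside
 * [enclave_base, enclave_base + enclave_size), with overflow-safe
 * arithmetic. */
extern uintptr_t enclave_base, enclave_size;  /* hypothetical globals */

int outside_enclave(const void *p, size_t n)
{
    uintptr_t start = (uintptr_t)p;
    if (n == 0 || start + n < start)          /* empty or wrapping range */
        return 0;
    return (start + n <= enclave_base) ||
           (start >= enclave_base + enclave_size);
}
```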
Jo: Yes, David, that's quite insightful: we should check all of the pointers. So that's what we did. We checked all of the pointer checks, and we noticed that the Intel SGX SDK has a very interesting way of checking these things. Of course, the code is high quality; they check all of the pointers. But you have to do something special for strings. We're talking here about the C programming language, so strings are null-terminated: they end with a zero byte, and you can use a function like strlen to compute the length of a string. So let's see how they check whether a string lies completely outside of enclave memory. The first step is to compute the length of the string, here it's ten, and then you check whether the string, from start to end, lies completely outside of the enclave. That sounds legitimate, and only then do you accept the string. So this works beautifully. Let's see, however, how it behaves when we pass something malicious. We are not going to pass a string that, as it should, lies in the world outside of the enclave; instead we pass a pointer to a string "secret one" that lies within the enclave. The first step will be that the enclave starts computing the length of that string that lies within the enclave. That already sounds fishy, but then, luckily, everything seems to come out OK, because the enclave detects that this should never have been done, that this string lies inside the enclave, and it rejects the call into the enclave. So that's fine. But some of you who know side channels see why this is exciting: the enclave did some computation it was never supposed to do, and the length of that computation depends on the non-zero bytes within the enclave. So what we have here is a side channel: the enclave will always return false, but the time it takes to return false depends on where the first zero byte lies inside that secret enclave memory block.
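In code, the flawed order of operations looks roughly like this (a simplified sketch, not the SDK's literal code; outside_enclave() is the range-check primitive sketched earlier):

```c
#include <stddef.h>
#include <string.h>

/* The bug: strlen() dereferences the attacker-supplied pointer
 * *before* the range check, so the scan's duration leaks where the
 * first zero byte lies, even though the call is ultimately rejected. */
extern int outside_enclave(const void *p, size_t n);

int accept_string_flawed(const char *s)
{
    size_t len = strlen(s) + 1;        /* BUG: touches enclave memory first   */
    return outside_enclave(s, len);    /* rejected, but timing already leaked */
}

int accept_string_safer(const char *s, size_t max_len)
{
    if (!outside_enclave(s, max_len))     /* validate the range first...      */
        return 0;
    return strnlen(s, max_len) < max_len; /* ...then scan only checked bytes  */
}
```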
So that's what we found; we were excited and we said, OK, it's a simple timing channel, let's go with that. We did that, and you can see a graph here, and it turns out it's not as easy as it seems. I can tell you that the blue curve is for a string of length one and the other one is for a string of length two, but there is no way you can see that from the graph, because these SGX processors are lightning fast: one single extra increment instruction completely dissolves into the pipeline. You will not see it by measuring execution time.
So we need something different, and there are smart papers in the literature: one of the very common attacks on SGX, also something that Intel describes, is that you can see which memory pages, 4 KB memory blocks, are being accessed while the enclave executes, because you control the operating system and the paging machinery. So that's what we tried; we thought this is a nice channel. And then we were scratching our heads, looking at that code: a very simple for loop that fits entirely within one page, and a very short string that also fits entirely within one page. Just having access to page-level memory accesses is not going to help us here, because both the code and the data fit on a single page. This is essentially what we call the temporal resolution of the side channel, and it is not accurate enough. So we need another trick, and, well, here we have been working on quite an exciting framework. It uses interrupts and it's called SGX-Step; it's a completely open source framework on GitHub. What it allows you to do, essentially, is execute an enclave one step at a time, hence the name. It allows you to interleave the execution of the enclave with attacker code after every single instruction. The way we pull it off is highly technical: we have a Linux kernel driver and a little library operating system in user space, but that's a bit out of scope. What matters is that we can interrupt an enclave after every single instruction. So let's see what we can do with that. What we essentially can do here is execute that strlen loop, with all those extra increment instructions, one step at a time, and after every interrupt we can simply check whether the enclave accessed the page where our target string resides. Another way to think about it is that we take the execution of the enclave, break it up into individual steps, and then just count the steps, and hence we get a deterministic timing.
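Conceptually, the resulting oracle is as simple as this sketch (the two primitives are hypothetical stand-ins for SGX-Step's kernel driver and APIC-timer machinery, not its real API):

```c
#include <stddef.h>

/* Hypothetical primitives, NOT the real libsgxstep API. */
extern void enclave_resume_one_instruction(void);  /* single-step enclave */
extern int  enclave_exited(void);                  /* has the ecall returned? */

/* Count instructions until the (always-rejected) ecall returns. The
 * strlen loop executes a fixed number of instructions per scanned
 * byte, so the step count is an exact, noise-free function of the
 * offset of the first zero byte at the probed address. */
size_t count_steps(void)
{
    size_t steps = 0;
    while (!enclave_exited()) {
        enclave_resume_one_instruction();
        steps++;
    }
    return steps;
}
```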
In other words, we have an oracle that tells you where the zero bytes are in the enclave. I don't know if that's actually useful, David?
David: It turns out that, I mean, some people who are into binary exploitation already know that it's good to know whether a zero is somewhere in memory or not. And we now show one example where we break AES-NI, the hardware acceleration for AES in Intel processors. Now, AES-NI actually operates only on registers, and Jo just said the zero-byte oracle only works on memory; but there's another trick that comes into play here. Whenever the enclave is interrupted, it will store its current register state somewhere in memory, in the so-called SSA frame. So we can interrupt it and thereby force it to write its register state to memory, and then we can run the zero-byte oracle on this SSA memory. What we figure out is where a zero is, or whether there's any zero, in the AES state. I don't want to go into the gory details of AES, but what we basically do is find out whenever there's a zero in the state before the last round of AES. That zero goes through the S-box and is XORed with a key byte, and that gives us a ciphertext byte. But we actually know the ciphertext byte, so we can go backwards: we can compute from the zero forward through the S-box, and from the known ciphertext byte back to the XOR, and that way we can directly compute one key byte. We repeat the whole thing 16 times, until we have found a zero in every byte of the state before the last round, and that way we get the whole final round key. And for those that know AES: if you have one round key, you have the whole key, so you can compute backwards to the original key.
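The per-byte arithmetic of that inversion is tiny (illustrative values; ShiftRows only permutes byte positions, which the sketch glosses over):

```c
#include <stdint.h>
#include <stdio.h>

/* In the last AES round there is no MixColumns, so per byte position:
 * ct = SBOX[state] ^ k. If the zero-byte oracle says the state byte
 * entering the last round is 0x00, the round-key byte is simply
 * k = ct ^ SBOX[0x00] = ct ^ 0x63. Repeating this for all 16 positions
 * yields the final round key, from which the AES key schedule can be
 * inverted to recover the original key. */
#define SBOX_ZERO 0x63  /* AES S-box output for input 0x00 */

uint8_t recover_key_byte(uint8_t ciphertext_byte)
{
    return ciphertext_byte ^ SBOX_ZERO;
}

int main(void)
{
    /* Hypothetical observation: the oracle flagged state byte 0 as
     * zero in an encryption whose ciphertext byte 0 was 0xd4. */
    uint8_t ct0 = 0xd4;
    printf("last-round key byte 0 = 0x%02x\n", recover_key_byte(ct0));
    return 0;
}
```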
-
complicated, but it's actually a very fast
attack when you see it running. So here is
-
a except doing this attack and as you can
see, was in a couple of seconds and maybe
-
five hundred twenty invocations of of
Asir, we get the full KeIso. That's
-
actually quite impressive, especially
because the whole uh. Yeah, one of the
-
points in essence is that you don't put
anything in memory, but this is
-
interaction with SGX, which is kind of
like allows you to put stuff into into
-
memory. So I want to wrap up here. Um, we
So, I want to wrap up here. We have found various other attacks, both in research code and in production code, such as the Intel SDK and the Microsoft SDK. They basically span the whole range of vulnerabilities that we have often seen already in user-to-kernel research, but there are also some interesting new kinds of vulnerabilities due to some of the aspects we explained. There was also a problem with ocall returns, when the enclave calls into untrusted code, which is used when you want to, for instance, emulate system calls and so on. If you return some kind of wrong result there, you can again go out of bounds, and these issues were actually quite widespread. And then, finally, we also found some issues with padding, with leakage in the padding; I don't want to go into the details. I think we have learned a lesson here that we also know from the real world, and that is: it's important to wash your hands. So it's also important to sanitize state, to check pointers, and so on. That is one of the takeaway messages: to build enclaves securely, yes, you need to fix all the hardware issues, but you also need to write safe code. And for enclaves, that means you have to do proper ABI and API sanitization. That's actually quite a difficult task, as we've seen, I think, in this presentation; there's quite a large attack surface due to the attack model, especially of Intel SGX, where you can interrupt after every instruction and so on. And I think, from a research perspective, there's really a need for a more principled approach than just continuing to find individual bugs. Maybe we can learn something from the user-to-kernel analogy, which I invoked a couple of times, so we can learn what an enclave should do from what we know about what a kernel should do. But there are also quite important differences that need to be taken into account. So, as Jo said, all our code is open source; you can find it at the GitHub links below, and you can, of course, also ask questions after you have watched this talk. So thank you very much.
Herald: Hello, so we're back again, here for the questions. Nice to see you live. We have no questions yet, so you can put your questions in the chat below if you have any. In the meantime, let me open this up and ask you some questions myself: how did you come about this topic, and how did you meet?

Jo: Well, that's actually interesting. I think this research has been building up over the years. Some of the vulnerabilities from our initial paper I actually started to see and collect in my master's thesis, and we didn't really see the big picture until I met David and his colleagues from Birmingham at an event in London, a nice conference. Then we started to collaborate on this, and we approached it a bit more systematically. So I started with this whole list of vulnerabilities, and then, together with David, we turned it into a more systematic analysis. And that was sort of a Pandora's box, I dare to say: from that moment on, we saw the same errors being repeated. And then also Fritz, who recently joined our team in Leuven, started working together with us on more of this low-level CPU state, and that's a Pandora's box in itself, I would say. Especially, one of the lessons, as we said, is that this particular x86 architecture is extremely complex, and it turns out that almost all of that complexity, I would say, can potentially be abused by an adversary. So it's more like a fractal, a fractal within a fractal, where you're opening a box and getting more and more questions out of it, in a way. I think it's fair to say this research is not the final answer to this, but it's an attempt to give a systematic way of looking at what is probably a never-ending stream of findings.
Herald: So, there is a question from the Internet: are there any other circumstances where the CPU ends up writing its registers into memory, or is this exclusive to SGX?

Jo: Well, I think the question is whether this attack vector is SGX-specific: it depends, of course, on having a memory disclosure oracle, and we are abusing SGX to forcibly write the register contents into memory. So that is definitely specific. However, I would say one of the lessons from the past five years of research is that these things often generalize beyond SGX, and at least the general concept, let's say the insight that CPU registers end up in memory one way or another, sooner or later, also applies to operating systems: if you somehow can force an operating system to context switch between applications, it also has to save the register state temporarily in memory. So if you had something similar in an operating system kernel, you could potentially mount a similar attack. But maybe David wants to say something about operating systems there as well.

David: No, not really. I think one thing that helps with SGX is that you have very precise control, as you explained, over the interrupts and so on, because you are root outside the enclave, so you can essentially single-step the whole enclave. Interrupting the operating system, or some other process, repeatedly at exactly the point you want tends to be harder just by design. But of course, on a context switch, the CPU has to save its register set somewhere, and then it will end up in memory; in most situations just not as controlled as it is for SGX.
Herald: There is the question: what about CPU architectures other than Intel? Did you test those?

Jo: Maybe I can go into this. Well, Intel SGX is the largest one, with the largest software base and the most runtimes that we could look at, right? But there is of course other work: we have our own TEE that we developed some years ago, called Sancus, and of course for it there are similar issues, right? You always need this software layer to enter the enclave. And I think David, in earlier work, also found issues in other TEE projects. So it's not just Intel and related projects that mess up there, of course. But what we definitely found is that it's easier to think of all the cases for simpler designs, like RISC-V or simpler RISC designs, than for this complex x86 architecture. Right now there are not that many alternatives to Intel SGX, so it has the advantage and disadvantage of being the first widely deployed one, let's say. But I think as soon as others start to grow, and simpler designs start to be more common, we will see that it's easier to fix all the edge cases for simpler designs.
-
reasonable alternative to tea, or is there
any way you want to take that or think,
-
should I say what? Uh, well, we can
probably both give our perspectives. So I
-
think. Well, the question to start
statute, of course, is do we need an
-
alternative or do we need to find more
systematic ways to to to sanitize
-
Australians? That's, I think, one part of
the answer here, that we don't have to
-
necessarily throw away these because we
have problems with them. We can also look
-
at how to solve those problems. But apart
from that, there is some exciting
-
research. OK, maybe David also wants to
say a bit more about, for instance, on
-
capabilities, but that's not in a way not
so different than these necessarily. But
-
but when you have high tech support for
capabilities like like the Cherry
-
Borjesson computer, which essentially
associates metadata to a point of
-
metadata, like commission checks, then you
could at least for some cause of the
-
issues we talked about point to point of
poisoning attacks, you could natively
-
catch those without support. But but it's
a very high level idea. Maybe David wants
-
to say something. Yeah. So so I think,
like alternative to tea is whenever you
-
want to partition your system into into
parts, which is, I think, a good idea. And
-
everybody is now doing that also in there,
how we build online services and stuff so
-
that these are one systems that we have
become quite used to from from mobile
-
phones or from maybe even even from
something like a banking card or so out,
-
which is sort of like a protected
environment for a very simple job. But the
-
problem then starts when you throw a lot
of functionality into the tea. As we saw,
-
the trusted code base becomes more and
more complex and you get traditional box.
-
So I'm saying like, yeah, it's really a
question if you need an alternative or a
-
better way of approaching it. How are you
partition software? And as you mentioned,
-
there are some other things you can do
architecturally so you can change the way
-
we or extends the way we build build
architectures for with capabilities and
-
then start to isolate components. For
instance, in one software project, say it,
-
say in your Web server, you isolate the
stack or something like this. And also,
-
thanks for the people noticing the secret
password here. You so obviously only for
-
decoration purposes to give the people
something to watch. So but it's not
-
Herald: So, but it's not fundamentally broken, is it?

Jo: Yeah, well... I mean, there are so many TEEs; I think you cannot just say "fundamentally broken" for all of them.

Herald: The question I had was specifically about SGX at that point, because Signal uses it, the MobileCoin cryptocurrency uses it, and so on and so forth. Is that fundamentally broken, or what would you rather say?

Jo: So, I guess it depends on what you call fundamental, right? In the past, we have also worked on what I would call quite fundamental breaches of the architecture, but they have been fixed, and it's actually quite a beautiful instance of well-conducted research having short-term industry impact. So you find a vulnerability, then the vendor has to devise a fix; a full fix is often not immediately available, but there are often workarounds for the problem. And later, because we are of course talking about hardware, you need new processors to really get a fundamental fix for the problem, while in the meantime you have temporary workarounds. So I would say, for a company like Signal using it: it does not give you security by default. You need to think about the software, which is what we focused on in this talk, and you also need to think about all of the hardware and microcode patches on the processors, to take care of all the known vulnerabilities. And then, of course, the question always remains whether there are vulnerabilities that we don't know of yet, as with any secure system, I guess. But maybe David also wants to say something about some of his latest work there; that's quite interesting.
David: Yeah, so I think my answer to this question would be: it depends on your threat model, really. Some people use SGX as a way to remove the trust in the cloud provider. You say, as in the case of Signal: I move all this functionality, which is hosted maybe on some cloud provider, into an enclave, and then I don't have to trust the cloud provider anymore, because there's also some form of protection against physical access. But recently we actually published another attack which shows that if you have hardware access to an SGX processor, you can inject faults into the processor by playing with the voltage interface, with hardware. You solder a couple of wires onto the main board, onto the bus to the voltage regulator, and then you can do voltage glitching, as some people might know from other embedded contexts. That way you can flip bits essentially in the enclave and, of course, inject all kinds of evil effects that can then be used further to get keys out, or maybe hijack control flow, or something. So it depends on your threat model. I wouldn't say that SGX is completely pointless; it's, I think, better than not having it at all. But you cannot have complete protection against somebody who has physical access to your server.
Herald: So, I have to close this talk now. It's a bummer, and I would have loved to ask all the questions that flew in. But one very, very fast answer, please: what is it with the password in your background?

David: I explained it; it's, of course, just a joke. So I'll say it again, because some people seem to have taken it seriously: it was such an empty whiteboard, so I put a password there. Unfortunately, it's not fully visible on the screen.

Herald: OK, so I think you should check that out with David Oswald. Thank you for giving that nice talk, and now we make the transition to the news show.
Subtitles created by c3subtitles.de in the year 2021. Join, and help us!