-
36c3 preroll music
-
Herald: So, Samuel is working at Google
Project Zero, especially on vulnerabilities
-
in web browsers and mobile devices. He was
part of the team that discovered some of
-
the vulnerabilities that he will be
presenting in detail in this talk today:
-
a no-user-interaction vulnerability that
can be used to remotely exploit and
-
compromise iPhones through iMessage.
Please give Samuel a
-
warm round of applause.
Applause
-
Samuel: OK. Thanks, everyone. Welcome to
my talk. One note before I start,
-
unfortunately, I only have one hour. So I
had to omit quite a lot of details. But
-
there will be a blog post coming out
hopefully very soon that has a lot more
-
details. But for this talk, I wanted to
fit everything into the hour, so I left out some
-
details. OK. So this talk is about iMessage.
In theory some of it, or actually quite a
-
lot of it, applies to other messengers, but
we'll focus on iMessage. So what is
-
iMessage? Yeah, it's a messaging service
by Apple. We've heard about it in the
-
previous talk a bit. As far as I know, it
is enabled by default. As soon as someone
-
signs into an iPhone with their account,
which I guess most people do, because
-
otherwise you can't download apps.
Interestingly, anyone can send messages to
-
anyone else. So it's like SMS or phone
calling. And then if you do this, then it
-
pops up some notifications, which you can
see here on the right screenshot,
-
which means that there must be some kind
of processing happening. And so, yeah,
-
this is default-enabled, zero-click
attack surface: without the user doing
-
anything, there's processing happening. And
then on the very right screenshot, you can
-
see that you can receive messages from
unknown senders. It just says there:
-
This sender is not in your contact list,
but all the processing still happens. In
-
terms of architecture, this is roughly how
iMessage is structured; nothing
-
too interesting, I guess. You
have Apple cloud servers and then sender
-
and receiver are connected to these
servers. That's pretty much it. Content is
-
end to end encrypted, which is very good.
We heard this before, also. Interestingly,
-
this also means that Apple can hardly
detect or block these exploits though,
-
because, well, they are encrypted, right?
So that's an interesting thing to note. So
-
what does an iMessage exploit look like?
So in terms of prerequisites, really the
-
attacker only needs to know the phone
number or the email address, which is the
-
Apple account. The iPhone has to be in
default configuration so you can disable
-
iMessage. But that's not done by default.
And the iPhone has to be connected to the
-
Internet. And in terms of prerequisites,
that's pretty much all you need for this
-
exploit to work. So that's quite a lot of
iPhones. The outcome is the attacker has
-
full control over the iPhone. After a few
minutes, I think it takes like five to
-
six, seven minutes maybe. And it is also
possible without any visual indicator. So
-
there's no... you can make it so there are
no notifications during this entire
-
exploit. OK. But before we get to
exploiting, of course, we need a
-
vulnerability and for that we need to do
some reverse engineering. So I want to
-
highlight a bit how we started this or how
we approached this. And I guess the first
-
question, you might be interested in, is
what daemon or what service is handling
-
iMessages. And one easy way to figure
this out is you can just make a guess. You
-
look at your process list on your Mac, the
Mac can also receive iMessages. You, like,
-
stop one of these processes and then you
see if iMessages are still delivered. And
-
if not, then probably you found a
process that's somewhat related to
-
iMessages. If you do this, you'll find
"imagent", already sounds kind of related.
-
If you look at it, it also has an iMessage
library that it's loading. Ok, so this
-
seems very relevant. And then you can load
this library in IDA. You see a screenshot
-
top right. And you find a lot of handlers.
So for example, this
-
"MessageServiceSession handler:
incomingMessage:", and then you can set a
-
breakpoint there. And then at that point
you can see these messages as they come
-
in. You can dump them, display them, look
at them, change them. And so this is a
-
good way to get started. Of course, from
there, you want to figure out what these
messages look like. So, yeah, you can dump
-
them in the handler as they come in. On the
right side you see what these iMessages
-
look like, more or less, on
-
the wire. They are encoded as a PList,
which is an Apple proprietary format.
-
Yeah, think of it like JSON or XML. And I
guess some fields are self-explanatory.
-
So, "p", that's the participants; in this
case it's me sending a message to
-
another account I own. You have "T" which
is the text content of the message. So
-
"Hello 36C3!". You have a version, for
some reason you also have an XML or HTML-
-
ish field, which is probably some legacy
stuff. It's being parsed, this XML. But
-
yeah, the whole thing looks kind of
complex already. I mean maybe you would
-
expect a simple string message to just be
a string. In reality, it's sending this
-
dictionary over the wire. So let's do some
more attack surface enumeration. If you
-
then do more reverse engineering, read the
code of the handler, you find two
-
interesting keys that can be present,
which is ATI and BP, and they can contain
-
NSKeyedUnarchiver data, which is another
Apple proprietary serialization format.
-
It's quite complex, it has had quite a few
bugs in the past. On the left side you see
-
an example for such an archive. It's yeah,
it's being encoded in a plist and then
-
it's pretty much one big array that has,
like, every object has an index in this
-
array. And here you can see, for example,
number 7 is some object, is the class
-
NSSharedKeyDictionary. And I think key one
is an instance of that class and so on. So
-
it's quite powerful. But really what this
means is that this serializer is now zero
-
click attack surface because it's being
parsed on this path without any user
-
interaction. So I said it's quite complex.
It even supports things like cyclic
-
references. So you can send an object
graph where A points to B and B points
-
back to A for whatever reason you might
want that. Natalie wrote a great blog post
-
where she describes this in more detail.
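As a rough illustration of the archive layout just described, here is a small Python model; the `$objects`/`$class` key spellings follow the keyed-archive convention but treat the details as approximate. Every object lives at an index in one flat array, which is also how cycles are representable.

```python
# Rough Python model (illustration only, key spellings approximate) of how an
# NSKeyedArchiver archive lays out objects: one flat $objects array, every
# object referenced by its index, which is also how cycles are representable.
archive = {
    "$version": 100000,
    "$top": {"root": 1},                  # entry point: object at index 1
    "$objects": [
        "$null",                          # index 0: null placeholder
        {"$class": 2, "sub": 1},          # index 1: object referencing itself
        {"$classname": "NSSharedKeyDictionary"},  # index 2: class description
    ],
}

def resolve(archive, index):
    """Follow an index reference into the $objects table."""
    return archive["$objects"][index]

obj = resolve(archive, archive["$top"]["root"])
cls = resolve(archive, obj["$class"])["$classname"]
```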
What I have here is just an example for
-
the API, how you use it. This is Objective
C at the bottom. If you're not familiar
-
with Objective C, you can think of these
brackets as just being method calls. So
-
this is doing, in the last line, it's
calling the unarchivedObjectOfClasses
-
method for this NSKeyedUnarchiver. You can
see you can pass a whitelist of classes.
-
So in this case, it will only decode
dictionary, strings, data, etc. So looks
-
quite okay. Interestingly, if you dig
deeper into this, this is not quite true
-
because it also allows all the subclasses
to be decoded. So if you have an NS-
-
something-something dictionary that
inherits from NSDictionary, then that can
-
also be decoded here, which is quite
unintuitive I think. And this really blows
-
up the attack surface because now you have
not only these 7 or so classes, but you
-
have like 50. Okay. So this is what we
focused on when Natalie and I were
-
looking for vulnerabilities. It seemed
like the most complex thing we found. We
-
reported quite a few vulnerabilities here,
you can see it maybe a bit on the left.
-
The one I decided to write an exploit for
is this 1917, reported on July 29th and
-
then the exploit was sent on August 9th. Yeah,
mostly I decided to use this one because
-
it seemed the most convenient. I do think
many of the other ones could be exploited
-
in a similar way, but not quite as nice,
so would maybe take some more heap
-
manipulation, etc. So then Apple first
pushed the mitigation quite quickly, which
-
basically blocks this code from being
reached over iMessage. In particular, what
-
they did is, they no longer allow
subclasses to be decoded in iMessage. So
-
that's quite a good mitigation, it blocks
off maybe 90 percent of the attack surface
-
here. Yeah. So then they fully fixed it in
iOS 13.2. But again, after August 26th
-
this was only local attack surface.
OK, so what is the bug? It's some
-
initialization problem during decoding,
the vulnerable class is
-
SharedKeyDictionary, which again, it's a
subclass of NSDictionary, so it's allowed
-
to be decoded. So let's take a look at
that. So, yeah. SharedKeyDictionary.
-
Here's some pseudocode in Python. It's a
dictionary. So its purpose is to, well,
-
look up keys to values or map keys to
values. The lookup method is really
-
simple. It just looks up an index in a key
set. So every key dictionary has a shared
-
key set and then that index is used to
index into some array. OK, so that's quite
-
simple so most of the magic happens in the
SharedKeySet. And so what that does is
-
something like compute a hash of the key.
Use that hash to index into something
-
called a rankTable, which is an array of
indices. And then if that index is valid,
-
so it's being bounds-checked against the
number of keys. Then it has found the
-
correct index, and if not, it can recurse
to another SharedKeySet. So every
-
SharedKeySet, can have a sub-SharedKeySet,
and then it repeats the same procedure. So
-
it already looks kind of complex. Why does
it have... why does it need this
-
recursion? I'm not quite sure, but it's
there. And so now we look at how this goes
-
wrong. So this is the initWithCoder, which
is the SharedKeySet constructor used
-
during decoding with the keyedUnarchiver.
And it looks pretty solid at first, it's
-
really just taking the values out of the
archive and then storing them as the
-
fields of this SharedKeySet. I'm
gonna go through the code here step by
-
step to highlight where it goes
wrong or what goes wrong here, what's
-
wrong with this code. So we start with
SharedKeySet1 which implies there's gonna
-
be another one. And at the start it's all
zero initialized. It's basically being
-
allocated through calloc. So everything
is zero. Then we execute the first line.
-
Okay. So numKey, you see some interesting
values coming. So far this is all fine.
-
Note that at this
point numKey can be anything because it's
-
only being validated three lines further
down, right? Where it's making sure that
-
numKey matches the real length of this
array. So this is fine, but here it's now
-
recursing and it's decoding another
SharedKeySet. So we start again. We have
-
another SharedKeySet, all filled with
zeros and we start from the top. Again,
-
numKey is one, so this is a
legitimate SharedKeySet, decoding a
-
rankTable. And here we are making a
cycle. So for SharedKeySet2 we pretend
-
that its sub-KeySet is SharedKeySet1. And
this actually works. So the
-
NSKeyedUnarchiver has special handling to
handle this correctly. So it does not
-
create a third object and it makes the
cycle. And we're good to go. Okay. Next it
-
decodes the keys array. So this is fine.
SharedKeySet2 seems legitimate so far. And
-
now it's doing some sanity checking, where
it's making sure that
-
this SharedKeySet can look up every key.
And so it does this for the only key it
-
has, key one. Now, at this point, it's
again, remember, it's hashing the key
-
going into the rankTable, takes out 42, which
is bigger than numKey. So in this case,
-
this look up here has failed. And now it's
recursing to SharedKeySet1. Right? This
-
was the logic. And at this point it's
taking out this 0x41414141 as index,
-
compares it against 0xffffffff and that's
fine, and now it's accessing a null
-
pointer: the keys array is still a
null pointer, plus, well, 0x41414141 times
-
8. So at this point it's crashing. It's
accessing invalid memory, precisely
-
because in this situation the
SharedKeySet1 hasn't been validated yet.
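To tie the walkthrough together, here is a Python reconstruction of the vulnerable logic based on the pseudocode from the slides, with memory simulated and the key hash simplified:

```python
# Python reconstruction of the vulnerable SharedKeySet logic from the talk's
# pseudocode. Memory is simulated: ks1.keys stays None to play the role of
# the null pointer, and indexing into None stands in for the bad dereference.
class SharedKeySet:
    def __init__(self):
        # zero-initialized, as if allocated with calloc
        self.num_keys = 0
        self.rank_table = []
        self.keys = None          # "null pointer"
        self.sub_keyset = None

    def index_for_key(self, key):
        index = self.rank_table[hash(key) % len(self.rank_table)]
        if index < self.num_keys:          # bounds check against num_keys
            return self.keys[index]
        if self.sub_keyset is not None:    # otherwise recurse
            return self.sub_keyset.index_for_key(key)
        return None

# SharedKeySet1: its fields come straight from the attacker-controlled
# archive and have NOT been validated yet when SharedKeySet2 references it.
ks1 = SharedKeySet()
ks1.num_keys = 0xFFFFFFFF       # bogus, only validated later
ks1.rank_table = [0x41414141]   # attacker-chosen index
ks1.keys = None                 # still the null pointer

# SharedKeySet2: looks legitimate, but its sub-keyset cycles back to ks1.
ks2 = SharedKeySet()
ks2.num_keys = 1
ks2.rank_table = [42]           # 42 >= num_keys, so the lookup falls through
ks2.keys = ["key1"]
ks2.sub_keyset = ks1

# The decoder's sanity check: "can this keyset look up its own key?"
# In ks2 the bounds check fails (42 >= 1), so it recurses into ks1, where
# 0x41414141 < 0xFFFFFFFF passes the check and keys[0x41414141] dereferences
# null + 0x41414141 * 8: the crash, and the exploit primitive.
crashed = False
try:
    ks2.index_for_key("key1")
except TypeError:               # NoneType not subscriptable: the "crash"
    crashed = True
```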
-
OK, so that's the bug we're going to
exploit. I have these checkpoints just to
-
show where we are, so we now have a
vulnerability in this NSKeyedUnarchiver API. We
-
can trigger it through iMessage. So what
Exploit Primitive do we have? Let's take a
-
look again at the lookup function, which
we saw before. So here where it's bold,
-
this is where we crash. keys is null
pointer, index is fully controlled. So we
-
can access null pointer plus offset. And
then what happens is the result of this
-
memory access is going to be used as some
Objective-C object. So this is all
-
Objective-C in reality, it's doing some
comparison, which means it does something
-
like calling some method such as
isNSString, for example. And then also
-
eventually it calls dealloc, which is the
destructor. So yeah, whatever value it
-
reads from there, it will treat it
as an Objective-C object and call a
-
method on it. And that's our Exploitation
Primitive. Okay, so here we are. How do we
-
exploit this? So the rough idea for
exploiting such vulnerabilities looks like
-
this. You want to have some fake
Objective-C object somewhere in memory
-
that you're referencing. So again, we
can access an arbitrary absolute
-
address. We want some fake Objective-C
object there. Every Objective-C object has
-
a pointer to its class. And then the class
has something called a method table, which
-
basically has function pointers to these
methods. Right. And so if we fake this
-
entire data structure thing, the fake
object and the fake class, then as soon as
-
the process calls some method on our fake
thing, we get code execution. So we get
-
control over the instruction pointer and
then it's game over. So that's going to be
-
our goal for this exploit. So here we have
two different types of addresses: On the
-
left side we have heap addresses or data,
really. And on the right side, in this
-
NSString-thing we need library addresses
or code addresses, simply because on iOS
-
you can't have writeable code regions. So
we necessarily have to reuse existing
-
code, resort to something like ROP.
So we need to know where libraries are
-
mapped for this. And this is exactly the
problem we are gonna face now because
-
there is something called ASLR, Address
Space Layout Randomization. And what it
-
does is it will randomize this entire
address space. So on the left side, you
-
can see what the
virtual memory of a process looks like
-
before ASLR. And there everything is
always mapped at the same address. So if
-
you start the same app twice, maybe even on
different phones, without ASLR
-
the same library is at the same address,
the heap is always at the same address,
-
the stack too. Everything is the same. And so this
would be really simple to exploit now
-
because, well, everything is the same.
With ASLR everything is shifted and now
-
all the addresses are randomized and we
don't really know where anything is. And
-
so that makes it harder to exploit this.
So we need an ASLR bypass is what this
-
means. We're gonna divide it into two
parts. So the heap addresses, we get them
-
in a different way than the library
addresses. So let's see how we get heap
-
addresses. It's really simple honestly,
what you can do is heap spraying, which is
-
an old technique. I think 15 years old
maybe. And it does still work today. The
-
idea is that you simply allocate lots of
memory. So if you look at this code
-
on the right, which you can use to
test this, what it does is allocate
-
256 megabytes of memory on the heap with
malloc. And then afterwards there's one
-
address, or there's many addresses, but in
this case, I'm using this 0x110000000,
-
where you will find your data. Okay. So
just spraying 256 megabytes lets you put
-
controlled data at a controlled address,
which is enough for this first part of the
-
exploit. The remaining question is how can
you heap spray over iMessage. That's a
-
bit more complicated. But it is possible
because NSKeyedUnarchiver is great and it
-
lets you do all sorts of weird stuff which
you can abuse for heap spraying. So, yeah.
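The spraying idea itself can be sketched in Python. The predictable 0x110000000 landing address is malloc behavior on the target and is not reproduced here; this only shows why a repeating, page-sized payload makes every page-aligned guess inside the spray equivalent:

```python
# Sketch of the spraying idea: fill a big allocation with a repeating,
# page-sized payload, so that ANY page-aligned address inside the sprayed
# region holds the same controlled bytes. The talk additionally relies on
# large malloc chunks landing at a predictable address (0x110000000 on the
# target); that platform behavior is not reproduced here.
PAGE = 0x4000                                  # 16 KiB pages, as on iOS
payload = b"\x41\x41\x41\x41" * (PAGE // 4)    # fake-object bytes, one page
SPRAY_SIZE = 16 * 1024 * 1024                  # 16 MiB here; the talk sprays 256 MiB
spray = payload * (SPRAY_SIZE // PAGE)

def data_at(offset):
    """What lands at any page-aligned offset into the sprayed region."""
    return spray[offset:offset + 4]
```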
-
Blog posts will have more details. Okay.
So we have these, the heap addresses. We
-
have them. We need the library addresses.
Let's go back to the virtual memory space.
-
On iOS and also on macOS the libraries -
so maybe in this case all three libraries,
-
but in reality, it's like hundreds of
system libraries - they are all prelinked
-
into one gigantic binary blob, which is
called a dyld_shared_cache. The idea is
-
that this speeds up like loading times
because all the interdependencies between
-
libraries are resolved pretty much at
compile time. But yeah, so we have this
-
gigantic binary blob and it has everything
we need. So it has all the code, it has
-
all the ROP gadgets and it has all the
Objective-C classes. So we have to know
-
where this dyld_shared_cache is mapped. If
you dig into that a bit or if you look at
-
the documentation or the binaries, you
can find out that it is going to be mapped
-
always between these two addresses. So
between 0x180000000 and 0x280000000, which
-
leaves only a 4 gigabyte region, so it's
only being mapped in these 4 gigabytes.
-
And then the randomization granularity is
also 0x4000 because iOS uses large pages
-
so it can only randomize with page
granularity, and that page granularity is
-
0x4000. But really what's most interesting
is that on the same device, the
-
dyld_shared_cache is only randomized once
per boot. So if you have two different
-
processes on the same device, the shared
cache is at the same virtual address. And
-
if one process crashes
and another one starts, and so on,
-
the shared cache is always going to be at
the same address. And that makes it really
-
interesting. And also, it's one gigabyte
in size. It's gigantic. So it's not too
-
hard to find in this four gigabyte region.
Right. So this is what our task has
-
boiled down to at this point. We have this
address range, we have the shared cache.
-
And all we need to know now is what is
this offset? So let's make a thought
-
experiment. Let's say we had an oracle
to which we could give
-
an address, and it would tell us if this
address is mapped in the remote process.
-
OK, if we have this, it suddenly becomes
really easy to solve this problem, because
-
then all you have to do is go in 1
gigabyte steps (the size of the shared
-
cache) between these two addresses and then
at some point you find a valid address. So
-
maybe here after 3 steps, you find a valid
address, and then from there you just do a
-
binary search. Right. Because you know
that somewhere between the green and the
-
second red arrow, the shared cache starts.
So you can do a binary search and you find
-
the start address in logarithmic time
in a few seconds, minutes, whatever. So
-
obviously the question is: where
would we get this oracle from? This seems
-
kind of weird. So let's look at receipts,
message receipts. So iMessage like many
-
other messengers - I think pretty much all
of them that I know - send receipts for
-
different things. iMessage in particular
has delivery receipts and read receipts.
-
Delivery receipt means the device
received the message, read receipt means
-
the user actually looked - opened the app,
looked at the message. You can turn off
-
read receipts, but as far as I know, you
cannot turn off delivery receipts. And so
-
here on the left you see a screenshot.
Three different messages were sent and
-
they have three different states. The
first message was marked as read, which
-
means it got a delivery receipt and a read
receipt. The second message is marked as
-
delivered. So it only got a delivery
receipt and the third message doesn't have
-
anything. So it hasn't received any
receipt. OK. So why is it useful? Here on
-
the left is some pseudocode of how
imagent handles messages and
-
when it sends these receipts. And so you
can see that it first parses the plist
-
that's coming in and it's then doing this
nsUnarchive at some later time. And this
-
is exactly where our bug would
trigger, during the nsUnarchive. And only then
-
does it send a delivery receipt. Right. So
what that means is if during
-
our nsUnarchive we can trigger the bug
and cause a crash, then we have somewhat
-
of a one-bit side channel. Right. Because
if we cause a crash, then we won't see a
-
delivery receipt. And if we don't cause a
crash, then we see a delivery receipt. So
-
it's one bit of information. And this is
going to be our oracle. All right. So
-
ideally, you have a vulnerability that
gives you this perfect oracle of is an
-
address mapped or not? So crash if it is
not mapped, don't crash if it is mapped. In
-
reality, you probably will not get this
perfect oracle from your bug. On the left
-
side, you see the real oracle function for
this vulnerability, which is, well it has
-
to be mapped. OK. But then it's also using
the value that it's reading. And so it
-
will only not crash if the value is either
0 or if it has the most significant bit
-
set, that is some pointer-tagging
stuff, or if it's a real legitimate pointer
-
to an Objective-C object. So this oracle
function is a bit more complex, but the
-
similar idea still works. So you can still
do something like a binary search, and
-
then infer the shared cache start address
in logarithmic time. Right. And so it only
-
takes maybe five minutes or so to do this.
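As a sketch of that search, assuming an idealized is-this-address-mapped oracle (the real one is noisier, as just described, but the structure is the same):

```python
# Sketch of the two-phase search, assuming an idealized is_mapped oracle.
# Phase 1 probes in 1 GiB steps (the cache is about 1 GiB, so one probe per
# step must land inside it); phase 2 binary-searches for the base address at
# the 0x4000 randomization granularity.
REGION_START = 0x180000000
REGION_END   = 0x280000000
GIB          = 1 << 30
GRANULE      = 0x4000

def find_shared_cache_base(is_mapped):
    # Phase 1: linear scan until some probe hits the mapped cache.
    hit = next(a for a in range(REGION_START, REGION_END, GIB) if is_mapped(a))
    # Phase 2: the base lies in (hit - 1 GiB, hit]; binary search on granules
    # for the lowest mapped one.
    lo, hi = (hit - GIB) // GRANULE + 1, hit // GRANULE
    while lo < hi:
        mid = (lo + hi) // 2
        if is_mapped(mid * GRANULE):
            hi = mid              # mid is mapped: base is at or below it
        else:
            lo = mid + 1          # mid is below the base
    return lo * GRANULE

# Hypothetical ground truth to exercise the search (the base from the demo):
base = 0x18A608000
oracle = lambda addr: base <= addr < base + GIB
```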
But for this part, again, I have
-
to refer to the blog post which will cover
how this works. OK. So this is the summary
-
of the remote ASLR bypass. Two phases,
there's linear scan where it's just
-
scanning, sending these payloads and
checking if it gets the receipt back, and
-
the first time it gets a receipt back, it
knows: OK, this address is valid. I now
-
found an address that is within the shared
cache. And at that point it starts this
-
searching phase, which in logarithmic time
figures out the exact, precise starting
-
address. So there's a few common questions
about this that I want to briefly go into.
-
The first maybe obvious question is, can
you really just crash this agent like 20
-
plus times? And the answer is yes. There's
no indicator or anything that the user
-
would see that this daemon crashes.
The only thing you can do is you can go
-
into like settings, privacy, something
something, crash log something, and then
-
you can see these crash logs. Second
question: I think by default,
-
the iPhone is configured to send crash
logs to the vendor, to Apple. So isn't
-
that a problem? So I think I looked at
this briefly. What I stumbled across was
-
that it seems that iOS collects at most 25
crash logs per service. This is not
-
designed to be like a security feature.
Right. So this makes sense. But what that
-
means is that an attacker can use some
kind of, well, resource exhaustion bug to
-
crash this daemon maybe 25 times first,
and then only start to exploit and then no
-
trace of the exploit will be sent over.
Third question is whether this can be
-
fixed by simply sending the delivery
receipt very early on. I think this is...
-
this was my first suggestion to Apple to
just send this delivery receipt right at
-
the start. Eventually I figured out it
doesn't really work because you can still
-
make some kind of timing side channel,
because when a daemon crashes multiple
-
times, it's subject to some penalty and it
will only restart like a few seconds or
-
even minutes later. So from the timing of
getting a delivery receipt, you can then
-
still basically get this oracle. Right. So
it doesn't really work by just sending it
-
earlier. I'll go into some other ideas
that might work later. Okay. So at this
-
point I'm starting the demo. The demo is
two parts. Let's see where it is. Right.
-
So I have this iPhone here and you can
with QuickTime... the screen is mirrored
-
to the projector. So this iPhone is an
XS, so it's from last year. It's on 12.4,
-
which is the last vulnerable version. So
that's like half a year old at this point.
-
And what else? So there are no existing
chats open. Okay. And let's see. So I hope
-
the Wi-Fi works. What you can see here is
the way the exploit works that it's
-
hooking with Frida into... Do we get
delivery receipt? Uh, do we? Yeah. Okay,
-
cool. It works. So, yeah, it's popping up
these messages. The way the exploit works
-
is that it's hooking the Messages app on
macOS with Frida and then it's sending
-
these specific marker messages like
INJECT_ATI, and then the Frida hook
-
replaces this message with like the
current payload. Right. And now it's
-
testing these addresses. It's not too slow
I guess. Yeah. And it's popping up some
-
nice messages. Okay. It already found one.
Okay. So this is already the end of the
-
first stage. So that was quite fast. It
found a valid address in this like first
-
probing step and now it has 21,000
candidates for the shared cache base. I
-
know it's doing this kind of binary search
thing to halve that in every step. Okay.
-
Now it only has 10,000 left and so it's
quite fast and quite efficient. Okay.
-
While this runs, um, let's continue. So
this is where we are. We can now create
-
fake objects. We have all the addresses we
need. It's like this 0x110000000 is where we can
-
place our stuff and then we will gain
control over the program counter. And from
-
there it's standard stuff, right? It's
what you would do in all of these exploits
-
you pivot maybe to the stack, you do
return-oriented programming and then you
-
can run your code and you've succeeded.
Now, at this point, there is another thing
-
coming in. Pointer authentication is a new
security feature that Apple designed and
-
implemented first in the 10S, so this
device from 2018. And the idea is that you
-
can now - for this you need CPU support -
the idea is that you can now store a
-
cryptographic signature in the top bits of
a pointer. OK, so here on the very left
-
side, you have a raw pointer. So the top
bits are zero because of the way the address
-
space works. Now there's a set of
instructions that sign a pointer and they
-
will maybe take a context into account, and they
use some key that's not in memory - that's
-
in a register, compute a signature of this
pointer and store the signature in the top
-
bits. And that's what you see on the right
side. The green things. That's the
-
signature. And now before using this
pointer, the code will now authenticate by
-
running another instruction. And this
instruction, if the verification fails, it
-
will basically clobber this pointer,
make it invalid. And then the following
-
instructions will just crash. Right. So
here this is the function called the BL,
-
branch and link instruction. This is doing
a function call to a function pointer. But
-
first it's authenticating this pointer.
And if this authentication step fails,
-
then the process will crash right there.
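A toy model of the signing scheme, with HMAC standing in for the hardware MAC, and the key and bit widths made up for illustration (the real keys live in CPU registers, not memory):

```python
# Toy model of PAC (illustration only: the real keys live in CPU registers
# and the MAC is computed in hardware; key and bit widths here are made up).
import hmac, hashlib

KEY = b"key-that-lives-in-a-register"
ADDR_BITS = 40                      # low bits used for addressing (assumed)

def sign(ptr, context=0):
    """PAC-style: store a truncated MAC over (pointer, context) in the top bits."""
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    sig = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:3], "little")
    return (sig << ADDR_BITS) | ptr

def auth(signed_ptr, context=0):
    """AUT-style: verify; on failure clobber the pointer so any use faults."""
    ptr = signed_ptr & ((1 << ADDR_BITS) - 1)
    if sign(ptr, context) != signed_ptr:
        return ptr | (1 << 62)      # invalid address: the next access crashes
    return ptr
```

An attacker who cannot compute sign() cannot forge a usable code pointer, which is exactly why plain ROP breaks.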
What this means for an attacker is that
-
more or less, ROP is dead, because ROP
involves faking a bunch of function
-
point... or like, well, code pointers
really, that point in the middle of
-
existing code. So this is no longer
possible because an attacker cannot
-
generate these signatures. So this is
where our exploit breaks, right, the red
-
thing. Well, we have a fake Objective-C
class with our own function pointer. This
-
no longer works because we cannot
compute these signatures. So what do we
-
do? One thing that's still possible and
it's even stated in the documentation
-
is that this class pointer in the object -
what's also called the ISA pointer -
-
it's not protected by PAC in any way.
Which means we can fake instances of
-
legitimate existing classes. Right. So in
this case here we can have a fake object
-
that points to a real class that has real,
legitimately signed method pointers. So
-
this still works. And with this, we can now
get existing methods called, out of place
-
and kind of manipulate the control flow.
And these existing methods are basically
-
now gadgets. So if you want to think about
it that way. So what can we do with this?
-
One very interesting method we can get
called is dealloc, the destructor. So I
-
think in quite a few, maybe most of the
Objective-C exploitation scenarios, you
-
can probably get a dealloc method called.
Now what you do is you just enumerate all
-
the destructors in the shared cache.
There's tons of them, I think 50,000, and
-
you can get any of those called. And then
one of them or a few of them are really
-
interesting because they call this invoke
method, which is part of the NSInvocation
-
object, or class. And an NSInvocation is
basically a bound function. So it has a
-
target object, the method to be called and
all the arguments. And as soon as you call
-
invoke on this NSInvocation, it does this
method call with fully controlled arguments.
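Conceptually, an NSInvocation is a bound call, and that is all an attacker needs. A Python stand-in (not Apple's API) of why a faked one yields an arbitrary method call:

```python
# Python stand-in (not Apple's API) for why NSInvocation is such a strong
# gadget: it is essentially a serialized bound call, and invoke() performs
# it. If the attacker controls the triple, they control the call.
class FakeInvocation:
    def __init__(self, target, selector, args):
        self.target = target        # the object to call into
        self.selector = selector    # the method name
        self.args = args            # fully controlled arguments

    def invoke(self):
        return getattr(self.target, self.selector)(*self.args)

class Victim:
    def run_command(self, cmd):
        return f"executing {cmd}"

# Reaching a dealloc that calls invoke on a faked invocation amounts to this:
inv = FakeInvocation(Victim(), "run_command", ["open Calculator"])
```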
-
Right. So what that means is with this
destructor, we can now make a fake object
-
with a fake NSInvocation that has any
method call we would like to perform, and
-
then it's going to do that because it's
running this invoke here. Again, you see
-
this shield here, which I put in place for
things that Apple has hardened since we
-
sent them the exploit. So what they did so
far is they hardened NSInvocation and it's
-
now no longer easily possible to abuse it
in this way. But yeah. So for us, we can
-
now run arbitrary Objective-C methods with
controlled arguments. What about
-
sandboxing? If you do some more reverse
engineering and figure out what services
-
play into iMessage, this is what you end
up with. On the right side. So you have a
-
number of services. Most of them are
sandboxed. If it has the red border, it
-
means there's a sandbox. Interestingly,
Springboard also does some NSUnarchiver
-
stuff. So it's decoding the BP key. So it
could also trigger our vulnerability and
-
Springboard is not sandboxed. So it's the
main UI process. It's basically what's
-
handling the home screen.
And so on. And so what that means is,
-
well, we can just target Springboard and
then we get code execution outside of the
-
sandbox so we don't actually need to worry
too much about the sandbox. As of iOS 13,
-
this is fixed and this key is now decoded
in the sandbox. Cool, so we can execute
-
Objective-C methods outside of the
sandbox. We can with that access user
-
data, activate camera, microphone, etc.
This is all possible through Objective-C
-
quite easily. But of course we don't care
about that. What we want is a calculator
-
and this is also quite easy, with one
Objective-C call - UIApplication
-
launchApplication blah blah blah. And so
let's see if this works. Go back to the
-
demo. So where are we at? So the
ASLR bypass ran through. You can nicely
-
see that it roughly halved the candidates
in every round, or with every message
-
it had to send. It ended up with just one
-
candidate at the end. And that is the
shared cache base in this case
-
0x18a608000. Now it's preparing the heap
spray. This is all kind of hacked
-
together. I think if you wanted to do this
properly, for one, you can send the whole
-
heap spray in one message. I'm just lazy.
It's also probably way too big. Another
-
thing is, I think you would probably not
target SpringBoard in reality just because
-
SpringBoard is very sensitive. So if you
crash it, you get this re-spring and the
-
UI restarts. So I think in reality you
would probably target imagent and then
-
chain a sandbox escape, because
this bug would also get you out of the
-
sandbox. So it should be doable. Okay.
So I think the last message arrived. It's
-
freezing here for a couple of seconds. I
don't actually know why, I never bothered to check,
-
but it does work.
Applause
-
Thank you. Yeah. So that was the demo.
It's kind of naturally reliable, this
-
exploit, because there is not much of heap
manipulation involved except this one
-
heap spray, which is controllable. Okay.
Um, so what's left? I think one more thing
-
you can do is you can attack the kernel if
you want that. You have to deal with two
-
problems here. One is code signing. You
cannot execute unsigned code on iOS. And
-
then the standard workaround for that is
you abuse JIT pages in Safari. But we are
-
not in Safari, we are not in web
content, so we don't have JIT pages. What
-
I did here is I basically pivoted into
JavaScriptCore, which is the JS
-
library. You can use it from any app
also. And then I'm just bridging syscalls
-
into JavaScript and then implementing the
kernel exploit in JavaScript. This does
-
not require any more vulnerabilities. So
you do not need a JavaScript core bug to
-
do this. And the idea is very similar to
pwn.js. Maybe some of you know about that.
-
It's a library. I think initially
developed for Edge because they did
-
something similar with JIT page
hardenings. So what I decided to do is take
-
SockPuppet from Ned Williamson, or CVE-2019-8605,
which works on this version, it works on
-
12.4. This is the trigger for it. And I
only ported the trigger. I didn't bother
-
re-implementing the entire exploit. So
yeah, this is the trigger. It will cause a
-
kernel panic. It's quite short. Which is
nice. So if you want to run this from
-
JavaScript, really, there's only three
things you care about, right? So the first
-
one is you need the syscalls. So
highlighted here, there is like four or so
-
different syscalls here. Not a lot. And
you just have to be able to call them from
-
JavaScript. The other thing is you need
constants, right? So I have AF_INET6,
-
SOCK_STREAM. These are all integer
constants. So this is really easy, right?
-
You just need to look up what these values
end up being. And then the last thing is
-
you need some data structures. So in this
case, I need this so_np_extensions thing.
-
It needs some integer value to pass
pointers to and so on. Yeah. And then this
-
is kind of the magic that happens. You
take sock_puppet.c, extract the syscalls
-
etc. There is one Objective-C message you
can call which is very convenient, which
-
gives you a dlsym. What this lets you do
is, it lets you get native C function
-
pointers that are signed, right. Because
so far we can only call Objective C
-
methods, but we need to be able to call
syscalls or at least the C wrapper
-
functions. So with this dlsym method thing
we can get signed pointers to C functions.
-
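As a side note, this bridging idea can be sketched outside iOS. The following is a hedged Python/ctypes analogy, not the talk's actual Objective-C/JavaScriptCore setup: resolve C functions by name at runtime, the way the dlsym method provides signed function pointers, and look up the integer constants a trigger needs.

```python
import ctypes
import socket

# Analogy only: on iOS the exploit used an Objective-C method wrapping
# dlsym to obtain signed C function pointers; ctypes plays that role here.
libc = ctypes.CDLL(None)  # handle to the C library of this process

# A native function resolved by name becomes callable from the
# scripting language, like the bridged syscall wrappers in the talk.
getpid = libc.getpid
getpid.restype = ctypes.c_int

# Integer constants such as AF_INET6 or SOCK_STREAM just need their
# numeric values looked up once on the target platform.
AF_INET6 = socket.AF_INET6
SOCK_STREAM = socket.SOCK_STREAM

print(getpid(), AF_INET6, SOCK_STREAM)
```

The sock_puppet.js port works along the same lines: each C wrapper the trigger calls is resolved once by name, and the constants and structure layouts are baked in as plain numbers.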
Then we need to be able to pivot into
JavaScript code, which is also really easy
-
with one method call, the JSContext
evaluateScript. We need to mess around
-
with memory a bit like corrupt some
objects from outside, corrupt some
-
ArrayBuffers in JavaScript, get read/write.
Kind of standard browser exploitation
-
tricks I guess. But yeah. So if you do
this what you end up with is
-
sock_puppet.js. It looks very similar. You
can see a bit of my JavaScript API that
-
lets you allocate memory buffers, read
and write memory, have some integer
-
constants and yeah, apart from that, it
doesn't really look much different from
-
the initial trigger. And so this can now
be served over, well, staged onto the
-
iMessage exploit, building on top of this
Objective-C method call primitive. And I
-
guess, at least in theory (I didn't fully
implement it), this should be able to just
-
run a kernel exploit and fully compromise
the device without any interaction in
-
probably less than 10 minutes. Okay, so
this was the first part. How does
-
this exploit work. What I have now is a
number of suggestions how to make this
-
harder and how to improve things. So one
of the first things that is really
-
critical for this exploit is the ASLR
bypass, which relies on a couple of
-
things. And I think a lot of this ASLR
bypass also works on other platforms. So
-
Android has a very similar problem with
like mappings being at the same address
-
across processes. And other messengers
have these like receipts and so on. So I
-
think a lot of this applies not just to
Apple but to Android and to other
-
messengers. But okay. What is the first
point? So weak ASLR, this is basically the
-
heap spraying, which is just too easy.
This shouldn't be so easy. In terms of
-
theoretical ASLR, you can see it maybe
sketched here on the right. In theory,
-
ASLR could be much stronger, much more
randomized. In reality, it's just like the
-
small red bar. So it really should just
have much more entropy to make heap
-
spraying not viable anymore. The next
problem with ASLR is per-boot stuff. At
-
the bottom you can see it, right? So you
have three different processes, the shared
-
cache is always at the same address,
similar problems on other platforms, I
-
mentioned that. This is probably hard to
fix because by this point quite
-
a lot relies on this. And it would be a
big performance hit to change this. But
-
maybe some clever engineers can figure out
how to do it better. The third part here
-
is the delivery receipts, which,
interestingly, they can give this side
-
channel, this one bit information side
channel and this can be enough to break
-
ASLR. And as I've mentioned before, I
think a lot of other messengers have this
-
same problem. What might work is to
either, well, remove these receipts. Sure.
-
Or maybe send them from a different
process so you can't do this timing thing
-
or even from the server. I think if you
send them, if the server already sends the
-
delivery receipt, it's a bit of cheating.
But at least this attack doesn't work.
-
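The receipt side channel described here is a one-bit oracle, and a one-bit oracle is enough for a binary search. Here is a hypothetical Python model (the candidate count, spacing, and oracle function are all invented for illustration) of how each probe message halves the remaining candidates for the shared cache base, as in the demo:

```python
# Toy model of the crash oracle: a missing delivery receipt tells the
# attacker "the probed guess crashed the target", one bit per message.
SECRET_BASE = 0x18A608000  # the shared cache base the attacker wants

def receipt_missing(lower_half):
    """Stand-in for one probe message against the lower half of the
    candidates; True means the secret lies in that half."""
    return SECRET_BASE in lower_half

def break_aslr(candidates):
    messages = 0
    while len(candidates) > 1:
        lower = candidates[: len(candidates) // 2]
        candidates = lower if receipt_missing(lower) else candidates[len(lower):]
        messages += 1
    return candidates[0], messages

# Made-up slide granularity: 8192 possible bases, 0x8000 apart.
candidates = [0x180000000 + i * 0x8000 for i in range(8192)]
base, messages = break_aslr(candidates)
print(hex(base), messages)  # 8192 candidates fall to 1 in 13 messages
```

This is why sending receipts from the server, or adding real entropy so the candidate set becomes astronomically large, both break the attack: the former removes the oracle bit, the latter makes the search impractical.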
Sandboxing, another thing, it's probably
obvious, right? So everything that's
-
on the zero click attack surface should be
sandboxed as much as possible. Of course,
-
to, you know, to require the attacker to
do another full exploit after getting code
-
execution. But Sandboxing can also
complicate information leaks. Take for
-
example this other iMessage bug,
CVE-2019-8646, there's a blog post about
-
this one. It basically let you cause
Springboard to
-
send HTTP requests to some server, and
those would contain pictures, data,
-
whatever. If Springboard had been
sandboxed to not allow network activity,
-
this would have been much harder. So
sandboxing is not necessarily just about
-
this second breakout. What I do want to
say about sandboxing is that it shouldn't
-
be relied on. So I think that this remote
attack surface is pretty hard. And it's
-
not unlikely that it's actually harder
than the sandboxing attack surface. And
-
also on top of that, this bug, the
NSKeyedUnarchiver bug, it would also get
-
you out of the sandbox because the same
API is used locally for IPC. So there's
-
that. Yeah. It would be nice if the zero
click attack surface code would be open
-
source. Would have been nice for us. It
would have been easier to audit. Maybe
-
someday. Another feature that I would like
to see, or another theme, is reduced zero
-
click attack surface. Make it at least a
one click attack surface. Right. So before
-
and here you could see that an unknown
sender can send any messages. It would be
-
nice if there would be some pop up that's
like, well, do you actually want to accept
-
messages? Threema lets you block unknown
senders. I think that's a cool feature. So
-
yeah, there's more work to be done here.
Also, this restarting service problem, I
-
think it could get bigger even. So, here
we have pretty much unlimited tries for
-
the ASLR bypass. It's probably going to
become even more relevant with memory
-
tagging, which can also be defeated if
you have many tries. So yeah, I guess if
-
there's some process or some critical
daemon crashes ten times, maybe not restart
-
it. I don't know. It's gonna need some
more thinking, right? You don't want to
-
denial-of-service the user by just not
restarting this daemon that crashed for
-
some unrelated reason. But yeah, this
would be a very good idea to have some
-
kind of limit here. Okay. Conclusion. So
yeah, zero click exploits, they are a thing.
-
They do exist. It is possible to exploit
single memory corruption bugs on this
-
surface, you know, without separate
info leaks. Despite all the mitigations we
-
have. However, I do think by turning the
right knobs, this could be made much
-
harder. So I gave some suggestions here.
And yeah, we need more attack surface
-
reduction, especially on the zero click
surface. But I think there is progress
-
being made. And with that thanks for your
attention. And I think we have time for
-
questions. Thank you.
-
applause
-
Herald: We do have time for questions. And
if you're in the room, you should line up
-
at the microphones and then we might also
have questions from the Internet. One
-
quick reminder is that all fun things here
only work with explicit consent, and that
-
includes photos. So the photo policy of
the CCC is that if you take a photo, you
-
need to have explicit consent by the
people in the frame. So remember, don't do
-
any long shots into the crowd because you
want to have the consent of everybody
-
there. Good. We have the first question
from the Internet.
-
Question: The Internet wants to know. Did
Apple give you some kind of a reward? And
-
was it a new iPhone?
Answer: No, we did not get any kind of
-
reward. But we also didn't ask for it. No,
I didn't get a new iPhone, but I'm still
-
using mine. Which one is it? Yeah. I mean,
this is an XS, right? Current hardware
-
models can be defeated with this, if that
is the question.
-
Herald: Good. We have a question for
microphone number 3.
-
Q: Hello. Uh, just a question. I did not
truly understand how the fix with the
-
server or having another process, uh,
sending the delivery message will fix
-
the problem, because if it does work, if
you hit the right addresses, the thing
-
will just work. Make the server or the
process send the delivery message and if
-
it crashes, it doesn't do anything so...
A: So the idea would be in this case, I'm
-
like sending this one message that would
crash and then either I get a delivery
-
receipt or I don't. If the server already
sends the delivery receipt before it
-
actually gives the message to the client
or to the receiver, then I would always
-
see a delivery receipt and I wouldn't be
able to figure out if my message caused
-
the crash or not. So that's the idea
behind maybe sending it on the server
-
side, if that makes sense.
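To make that answer concrete, here is a tiny hypothetical model (all names invented for illustration) of why moving the receipt to the server destroys the oracle:

```python
# Model of the delivery-receipt oracle: with client-sent receipts,
# "no receipt" means the probe crashed the target. If the server
# acknowledges before handing the message to the device, the attacker
# always sees a receipt and the crash bit disappears.
def attacker_sees_receipt(probe_crashes_target, server_sends_receipt):
    if server_sends_receipt:
        return True  # receipt sent regardless of what the device does
    return not probe_crashes_target  # device only acks if it survived

# Client-sent receipts leak one bit per probe:
leak = [attacker_sees_receipt(c, False) for c in (True, False)]
# Server-sent receipts: both outcomes look identical to the attacker:
no_leak = [attacker_sees_receipt(c, True) for c in (True, False)]
print(leak, no_leak)  # [False, True] [True, True]
```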
Follow-up question: Yeah. But in this
-
case, if legit people send a message and
it doesn't reach the people because...
-
A: Yeah. Yeah. It's a hack. Right. So it's
not perfect. I mean the server could only
-
send this delivery receipt once it has
sent the message out over TCP and maybe got a
-
TCP ACK or whatever happens in the kernel.
But it's a hack in any case. Yeah. Like
-
it's a tradeoff.
Herald: We have a question for microphone
-
number two.
Q: Hello. Okay. Thanks for the talk. Two
-
questions. First: Is OS X also a
potential candidate for this bug? And
-
second: Can you distinguish multiple
devices with your address based
-
randomization detection?
A: Mm hmm. So yes: OS X or macOS is
-
affected just the same. I think this
specific exploit wouldn't directly work
-
because address space looks a bit
different, but I think you could make it
-
work and it's affected. In terms of
multiple devices, so I haven't played
-
around with that. I could imagine that it
is possible to somehow figure out that
-
there are multiple devices or that you
know which device just crashed. But I
-
haven't investigated. That's the answer.
Follow-up: Thanks.
-
Herald: We still have time for more
questions. There was a question from
-
microphone number 1.
Q: Hi. Thanks for the talk. Quick
-
question. You said that exploitation could
be done without showing any notification.
-
How would that be done?
A: Yeah, I briefly looked into how it
-
could work. Well. So for one, you can take
out parts of the message so that it fails
-
parsing later on in the processing and
then it will just be like thrown away
-
because it says, well, this is garbage.
The other thing is, of course, once you
-
get to the, like, very last message where
you get code execution, you can prevent
-
it from showing a message like a
notification, because that happens
-
afterwards.
Follow-up Q: But until you get the code
-
execution, you can't remove it. So you
see the first message?
-
A: But you can do the other
thing, like make the message look
-
bad, bad enough that later parsing
stages will throw it away.
-
Follow-up: Thanks.
Herald: Good. We have a couple of more
-
questions. Remember, if you don't feel
comfortable lining up behind the
-
microphones, you can ask through the
signal angel through the Internet.
-
Microphone number 4, please.
Q: Yes. Hi. Hi Samuel. Um, I was curious
-
you have some suggestions about reducing
the attack surface. Are there any
-
suggestions that you'd make to, say, like
Apple or Google? You know, in terms of
-
what they can see. You mentioned logging a
little bit earlier.
-
A: Yeah. So I sent pretty much this list
with the exploit I sent to Apple. And I
-
think the blog post will have a bit more.
But yeah, I told them the same thing.
-
Yeah, if that's your question, did I get
it right?
-
Follow-up Q: Yes. I mean, maybe I
misunderstood a little bit, but I suppose
-
that some of these reductions in the attack
surface seem to be in terms of like what's
-
happening on the device. Yeah. Whereas I'm
wondering in terms of monitoring. So being
-
able to catch something like this in
progress.
-
A: Right. Right. So this is gonna be
really hard because of end-to-end
-
encryption. So the server just sees like
encrypted garbage and has no way of
-
knowing: is this an image? Is that
text? Is this an exploit? So on the
-
server, I don't think you can do much
there. I think it's gonna have to be on
-
the device.
Herald: We have a question from the
-
Internet.
Q: How do you approach attack surface
-
mapping?
A: Um, well, reverse engineering, playing
-
around, looking at this message format. In
this case, it was somewhat obvious what
the attack surface was. Right. So figure
-
out which keys of this message are
being processed in some way. Make a note.
being processed in some way. Make a note.
Decide which one looks most complex. Go
-
for that first. That's what we did.
Herald: We have a question from microphone
-
number 2, please.
Q: Hi. How long did you and your colleague
-
research to get the exploit running?
A: So the vulnerability finding thing
-
was not only me. I think we spent maybe
three months finding the vulnerabilities.
-
I had a rough idea how I would
approach this exploit. So I think at the
-
end it took me maybe a week to finish it.
But I had thought about doing that
-
while looking for
vulnerabilities in those two to three
-
months.
Herald: We have another question from
-
microphone number three.
Q: Um, is there the, uh, threat that the
-
attacked iPhone would itself turn into an
attacker by the exploit?
-
A: Sure. Yeah, you can do that. I mean,
you have full control, right? So you have
-
access to the contacts list and you can
send out iMessages. The question is if
-
it's necessary. Right. I mean, you can
also send the messages yourself, you don't
-
really need the iPhone to send the
messages. But I think in theory: Yes,
-
that's possible.
Herald: Do we have more questions from the
-
Internet?
Q: Does the phone stay compromised after
-
restart?
A: So there is no persistence exploit
-
here. No. You would need another exploit.
littlelailo did a talk, I think just an
-
hour ago, about persistence. So you would
need to chain this with, for
-
example, the exploit that he showed.
Herald: And if you have questions in the
-
room, please line up behind the
microphones. Do we have more questions
-
from the Internet.
Q: Yes. So you've found the most novel
-
bug ever to be found in iOS. What's
the next big thing you'll be looking at?
-
A: Good question. I don't really know
myself, but I'm going to stay probably
-
around for zero click attack surface reduction
for a bit more.
-
Herald: Looks like we don't have any brave
people asking questions in the room. Does
-
the Internet have more courage?
Q: How long does discovery and
-
exploitation and development take and how
much does the team work to improve the
-
process and development time?
A: Okay, so how long does this
-
exploitation process take? That's the
first question. Yes. Yeah. I mean, this is
-
generally a hard thing to answer. Right.
There's like years of hacking around and
-
learning how to do this stuff, etc. that
you have to take into account. But as I
-
said, I had a rough idea what this exploit
would look like. So then really
-
implementing it was like one or two weeks.
The initial part of reverse engineering
-
iMessage, reverse engineering this
NSKeyedUnarchiver thing. I think this
-
took forever. This took many months and it
was also very necessary for exploit
-
writing. Right. So a lot of the exploit
primitives I use, they also abuse the
-
NSKeyedUnarchiver thing.
Herald: We have time for perhaps two quick
-
questions. Mike number 4, please.
Q: Super. Uh, I'm not super familiar with
-
iOS virtual memory address space, but you
showed two heap regions when you showed the
-
picture of it. And I'm wondering why are
there two heap regions?
-
A: OK, this is only a minor
detail, but I think there is one region
-
initially, like below the shared cache, and
once that is full, it just makes another
-
one above it. So it's really just, if
the one gets used up, it
-
makes another one. And that's going to be
like above the shared cache. I think
-
that's the picture you're referring to.
Follow-up: Yeah, thank you.
-
Herald: And unfortunately, we are out of
time. So the person at microphone
-
number one, please come up to the stage
afterwards and perhaps you can have a talk.
-
So please give a warm round of applause.
applause Thanks.
-
applause
-
Postroll music
-
Subtitles created by c3subtitles.de
in the year 2020. Join, and help us!