36C3 - Messenger Hacking: Remotely Compromising an iPhone through iMessage

  • 0:00 - 0:20
    36c3 preroll music
  • 0:20 - 0:25
    Herald: So, Samuel is working at Google
    Project Zero, especially on vulnerabilities
  • 0:25 - 0:30
    in Web browsers and mobile devices. He was
    part of the team that discovered some of
  • 0:30 - 0:34
    the vulnerabilities that he will be
    presenting in this talk today in detail
  • 0:34 - 0:41
    about the no-user-interaction
    vulnerabilities that can be used to
  • 0:41 - 0:49
    remotely exploit and compromise iPhones
    through iMessage. Please give Samuel a
  • 0:49 - 0:56
    warm round of applause.
    Applause
  • 0:56 - 1:01
    Samuel: OK. Thanks, everyone. Welcome to
    my talk. One note before I start,
  • 1:01 - 1:06
    unfortunately, I only have one hour. So I
    had to omit quite a lot of details. But
  • 1:06 - 1:10
    there will be a blog post coming out
    hopefully very soon that has a lot more
  • 1:10 - 1:15
    details. But for this talk, I wanted to
    get everything in there and leave out some
  • 1:15 - 1:22
    details. OK. So this is about iMessage. In
    theory, some of it applies, or quite a lot
  • 1:22 - 1:26
    actually applies to other messengers, but
    we'll focus on iMessage. So what is
  • 1:26 - 1:31
    iMessage? Yeah, it's a messaging service
    by Apple. We've heard about it in the
  • 1:31 - 1:38
    previous talk a bit. As far as I know, it
    is enabled by default. As soon as someone
  • 1:38 - 1:43
    signs into an iPhone with their account,
    which I guess most people do, because
  • 1:43 - 1:49
    otherwise you can't download apps.
    Interestingly, anyone can send messages to
  • 1:49 - 1:55
    anyone else. So it's like SMS or phone
    calling. And then if you do this, then it
  • 1:55 - 2:01
    pops up some notifications, which you can
    see that here on the right screenshot,
  • 2:01 - 2:07
    which means that there must be some kind
    of processing happening. And so, yeah,
  • 2:07 - 2:11
    this is like default enabled, zero click
    attack surface without the user doing
  • 2:11 - 2:16
    anything, there's stuff happening. And
    then on the very right screenshot, you can
  • 2:16 - 2:24
    see that you can receive messages from
    unknown senders. It just like says there.
  • 2:24 - 2:30
    This sender is not in your contact list,
    but all the processing still happens. In
  • 2:30 - 2:37
    terms of architecture, this is roughly how
    iMessage is structured, not very, yeah,
  • 2:37 - 2:43
    anything too interesting, I guess. You
    have Apple cloud servers and then sender
  • 2:43 - 2:48
    and receiver are connected to these
    servers. That's pretty much it. Content is
  • 2:48 - 2:54
    end to end encrypted, which is very good.
    We heard this before, also. Interestingly,
  • 2:54 - 2:58
    this also means that Apple can hardly
    detect or block these exploits though,
  • 2:58 - 3:06
    because, well, they are encrypted, right?
    So that's an interesting thing to note. So
  • 3:06 - 3:12
    what does an iMessage exploit look like?
    So in terms of prerequisites, really the
  • 3:12 - 3:16
    attacker only needs to know the phone
    number or the email address, which is the
  • 3:16 - 3:22
    Apple account. The iPhone has to be in
    default configuration so you can disable
  • 3:22 - 3:27
    iMessage. But that's not done by default.
    And the iPhone has to be connected to the
  • 3:27 - 3:32
    Internet. And in terms of prerequisites,
    that's pretty much all you need for this
  • 3:32 - 3:38
    exploit to work. So that's quite a lot of
    iPhones. The outcome is the attacker has
  • 3:38 - 3:43
    full control over the iPhone. After a few
    minutes, I think it takes like five to
  • 3:43 - 3:49
    six, seven minutes maybe. And it is also
    possible without any visual indicator. So
  • 3:49 - 3:54
    there's no... you can make it so there are
    no notifications during this entire
  • 3:54 - 4:00
    exploit. OK. But before we get to
    exploiting, of course, we need a
  • 4:00 - 4:04
    vulnerability and for that we need to do
    some reverse engineering. So I want to
  • 4:04 - 4:09
    highlight a bit how we started this or how
    we approached this. And I guess the first
  • 4:09 - 4:14
    question, you might be interested in, is
    what daemon or what service is handling
  • 4:14 - 4:21
    iMessages. And one easy way to figure
    this out is you can just make a guess. You
  • 4:21 - 4:26
    look at your process list on your Mac, the
    Mac can also receive iMessages. You, like,
  • 4:26 - 4:31
    stop one of these processes and then you
    see if iMessages are still delivered. And
  • 4:31 - 4:36
    if not, then probably you found a
    process that's somewhat related to
  • 4:36 - 4:44
    iMessages. If you do this, you'll find
    "imagent", already sounds kind of related.
  • 4:44 - 4:48
    If you look at it, it also has an iMessage
    library that it's loading. Ok, so this
  • 4:48 - 4:55
    seems very relevant. And then you can load
    this library in IDA. You see a screenshot
  • 4:55 - 5:00
    top right. And you find a lot of handlers.
    So for example, this
  • 5:00 - 5:04
    "MessageServiceSession handler:
    incomingMessage:", and then you can set a
  • 5:04 - 5:07
    breakpoint there. And then at that point
    you can see these messages as they come
  • 5:07 - 5:13
    in. You can dump them, display them, look
    at them, change them. And so this is a
  • 5:13 - 5:17
    good way to get started. Of course, from
    there, you want to figure out what these
  • 5:17 - 5:22
    messages look like. So, yeah, you can dump
    them in there when they come in in the
  • 5:22 - 5:30
    handler, on the right side you see what
    these iMessages look like more or less on
  • 5:30 - 5:37
    the wire. They are encoded as a PList,
    which is an Apple proprietary format.
  • 5:37 - 5:45
    Yeah, think of it like JSON or XML. And I
    guess some fields are self-explanatory.
  • 5:45 - 5:50
    So, "p", that's the participants in this
    case this is me sending a message to
  • 5:50 - 5:57
    another account I own. You have "T" which
    is the text content of the message. So
  • 5:57 - 6:05
    "Hello 36C3!". You have a version, for
    some reason you also have an XML or HTML-
  • 6:05 - 6:11
    ish field, which is probably some legacy
    stuff. It's being parsed, this XML. But
  • 6:11 - 6:14
    yeah, the whole thing looks kind of
    complex already. I mean maybe you would
  • 6:14 - 6:20
    expect a simple string message to just be
    a string. In reality, it's sending this
  • 6:20 - 6:29
    dictionary over the wire. So let's do some
    more attack surface enumeration. If you
  • 6:29 - 6:35
    then do more reverse engineering, read the
    code of the handler, you find two
  • 6:35 - 6:41
    interesting keys that can be present,
    which is ATI and BP, and they can contain
  • 6:41 - 6:49
    NSKeyedUnarchiver data, which is another
    Apple proprietary serialization format.
  • 6:49 - 6:55
    It's quite complex, it has had quite a few
    bugs in the past. On the left side you see
  • 6:55 - 7:02
    an example for such an archive. It's yeah,
    it's being encoded in a plist and then
  • 7:02 - 7:08
    it's pretty much one big array that has,
    like, every object has an index in this
  • 7:08 - 7:15
    array. And here you can see, for example,
    number 7 is some object, it's the class
  • 7:15 - 7:24
    NSSharedKeyDictionary. And I think key one
    is an instance of that class and so on. So
  • 7:24 - 7:31
    it's quite powerful. But really what this
    means is that this serializer is now zero
  • 7:31 - 7:35
    click attack surface because it's being
    passed on this path without any user
  • 7:35 - 7:43
    interaction. So I said it's quite complex.
    It even supports things like cyclic
  • 7:43 - 7:48
    references. So you can send an object
    graph where A points to B and B points
  • 7:48 - 7:56
    back to A for whatever reason you might
    want that. Natalie wrote a great blog post
  • 7:56 - 8:00
    where she describes this in more detail.
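To make the flat-array encoding and the cycle support concrete, here is a toy Python model. This is illustrative only, not Apple's real NSKeyedArchiver format; the field names and the `resolve` helper are made up for the sketch:

```python
# Toy model of a keyed-archive: a flat "$objects" list in which object
# fields hold indices into that same list. Cycles like A -> B -> A are
# expressible because references are just indices.
archive = {
    "$objects": [
        "$null",
        {"$class": 3, "sub": 2},   # object 1: references object 2
        {"$class": 3, "sub": 1},   # object 2: references object 1 (cycle)
        {"$classname": "NSSharedKeySet"},
    ],
    "$top": 1,
}

def resolve(archive, index, seen=None):
    # Walk references; tracking already-seen indices is what lets a
    # decoder survive cyclic object graphs instead of recursing forever.
    seen = set() if seen is None else seen
    if index in seen:
        return f"<cycle to object {index}>"
    seen.add(index)
    obj = archive["$objects"][index]
    if isinstance(obj, dict) and "sub" in obj:
        return {"sub": resolve(archive, obj["sub"], seen)}
    return obj
```

Decoding object 1 follows the reference to object 2 and then detects the back-reference rather than looping.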
    What I have here is just an example for
  • 8:00 - 8:06
    the API, how you use it. This is Objective
    C at the bottom. If you're not familiar
  • 8:06 - 8:12
    with Objective C, you can think of these
    brackets as just being method calls. So
  • 8:12 - 8:17
    this is doing, in the last line, it's
    calling the unarchivedObjectOfClasses
  • 8:17 - 8:25
    method for this NSKeyedUnarchiver. You can
    see you can pass a whitelist of classes.
  • 8:25 - 8:32
    So in this case, it will only decode
    dictionary, strings, data, etc. So looks
  • 8:32 - 8:39
    quite okay. Interestingly, if you dig
    deeper into this, this is not quite true
  • 8:39 - 8:44
    because it also allows all the subclasses
    to be decoded. So if you have an NS-
  • 8:44 - 8:49
    something-something dictionary that
    inherits from NSDictionary, then that can
  • 8:49 - 8:55
    also be decoded here, which is quite
    unintuitive I think. And this really blows
  • 8:55 - 9:02
    up the attack surface because now you have
    not only these 7 or so classes, but you
  • 9:02 - 9:10
    have like 50. Okay. So this is what we
    focused on when me and Natalie were
  • 9:10 - 9:17
    looking for vulnerabilities. It seemed
    like the most complex thing we found. We
  • 9:17 - 9:22
    reported quite a few vulnerabilities here,
    you can see it maybe a bit on the left.
  • 9:22 - 9:31
    The one I decided to write an exploit for
    is this 1917, reported on July 29th and
  • 9:31 - 9:38
    then the exploit sent on August 9th. Yeah,
    mostly I decided to use this one because
  • 9:38 - 9:43
    it seemed the most convenient. I do think
    many of the other ones could be exploited
  • 9:43 - 9:48
    in a similar way, but not quite as nice,
    so would maybe take some more heap
  • 9:48 - 9:56
    manipulation, etc. So then Apple first
    pushed the mitigation quite quickly, which
  • 9:56 - 10:01
    basically blocks this code from being
    reached over iMessage. In particular, what
  • 10:01 - 10:08
    they did is exactly this: they no longer allow
    subclasses to be decoded in iMessage. So
  • 10:08 - 10:13
    that's quite a good mitigation, it blocks
    off maybe 90 percent of the attack surface
  • 10:13 - 10:23
    here. Yeah. So then they fully fixed it in
    iOS 13.2. But again, after August 26th
  • 10:23 - 10:33
    this was only local attack surface.
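The subclass behaviour that this mitigation removed can be mimicked in a few lines of Python. This is an analogy, not Apple's code; the class names are stand-ins:

```python
# A whitelist check that, like NSKeyedUnarchiver's
# unarchivedObjectOfClasses:fromData:error:, accepts any subclass of an
# allowed class, not just the exact classes that were listed.
class NSDictionary:
    pass

class NSSharedKeyDictionary(NSDictionary):
    pass  # a subclass, like on iOS

ALLOWED_CLASSES = (NSDictionary, str, bytes)

def may_decode(cls) -> bool:
    # issubclass(cls, cls) is True, so listed classes pass too.
    return issubclass(cls, ALLOWED_CLASSES)
```

So even though only `NSDictionary` is whitelisted, `NSSharedKeyDictionary` slips through the check, which is exactly how the attack surface grows from a handful of classes to dozens.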
    OK, so what is the bug? It's some
  • 10:33 - 10:37
    initialization problem during decoding,
    the vulnerable class is
  • 10:37 - 10:42
    SharedKeyDictionary, which again, it's a
    subclass of NSDictionary, so it's allowed
  • 10:42 - 10:50
    to be decoded. So let's take a look at
    that. So, yeah. SharedKeyDictionary.
  • 10:50 - 10:56
    Here's some pseudocode in Python. It's a
    dictionary. So its purpose is to, well,
  • 10:56 - 11:02
    look up keys to values or map keys to
    values. The lookup method is really
  • 11:02 - 11:08
    simple. It just looks up an index in a key
    set. So every key dictionary has a shared
  • 11:08 - 11:14
    key set and then that index is used to
    index into some array. OK, so that's quite
  • 11:14 - 11:21
    simple so most of the magic happens in the
    SharedKeySet. And so what that does is
  • 11:21 - 11:28
    something like compute a hash of the key.
    Use that hash to index into something
  • 11:28 - 11:35
    called a rankTable, which is an array of
    indices. And then if that index is valid,
  • 11:35 - 11:42
    so it's being bounds-checked against the
    number of keys. Then it has found the
  • 11:42 - 11:47
    correct index and if not, it can recurse
    to another SharedKeySet. So every
  • 11:47 - 11:53
    SharedKeySet, can have a sub-SharedKeySet,
    and then it repeats the same procedure. So
  • 11:53 - 11:58
    it already looks kind of complex. Why does
    it have... why does it need this
  • 11:58 - 12:03
    recursion? I'm not quite sure, but it's
    there. And so now we look at how this goes
  • 12:03 - 12:11
    wrong. So this is the initWithCoder, which
    is the SharedKeySet constructor used
  • 12:11 - 12:18
    during decoding with the keyedUnarchiver.
    And it looks pretty solid at first, it's
  • 12:18 - 12:26
    really just taking the values out of the
    archive and then storing them as the
  • 12:26 - 12:32
    fields of this SharedKeySet. I'm
    gonna go through the code here in, like,
  • 12:32 - 12:37
    single step to highlight where it goes
    wrong or what goes wrong here, what's
  • 12:37 - 12:43
    wrong with this code. So we start with
    SharedKeySet1 which implies there's gonna
  • 12:43 - 12:49
    be another one. And at the start it's all
    zero initialized. It's basically being
  • 12:49 - 12:54
    allocated through calloc. So everything
    is zero. Then we execute the first line.
  • 12:54 - 13:02
    Okay. So numKey, you see some interesting
    values coming. So far this is all fine.
  • 13:02 - 13:07
    Note that at this
    point numKey can be anything because it's
  • 13:07 - 13:13
    only being validated three lines further
    down, right? Where it's making sure that
  • 13:13 - 13:21
    numKey matches the real length of this
    array. So this is fine, but here it's now
  • 13:21 - 13:25
    recursing and it's decoding another
    SharedKeySet. So we start again. We have
  • 13:25 - 13:33
    another SharedKeySet, all filled with
    zeros and we start from the top. Again,
  • 13:33 - 13:40
    numKey is one, so this is a
    legitimate SharedKeySet, decoding a
  • 13:40 - 13:48
    rankTable. And here we are making a
    circle. So for SharedKeySet2 we pretend
  • 13:48 - 13:54
    that its sub-KeySet is SharedKeySet1. And
    this actually works. So the
  • 13:54 - 14:00
    NSKeyedUnarchiver has special handling to
    handle this correctly. So it does not
  • 14:00 - 14:06
    create a third object and it makes the
    cycle. And we're good to go. Okay. Next we
  • 14:06 - 14:14
    decode the keys array. So this is fine.
    SharedKeySet2 seems legitimate so far. And
  • 14:14 - 14:19
    now it's doing some sanity checking. Where
    it's making sure that
  • 14:19 - 14:26
    this SharedKeySet can look up every key.
    And so it does this for the only key it
  • 14:26 - 14:33
    has, key one. Now, at this point, it's
    again, remember, it's hashing the key
  • 14:33 - 14:39
    going into the rankTable, takes out 42, which
    is bigger than numKey. So in this case,
  • 14:39 - 14:45
    this look up here has failed. And now it's
    recursing to SharedKeySet1. Right? This
  • 14:45 - 14:54
    was the logic. And at this point it's
    taking out this 0x41414141 as index,
  • 14:54 - 15:00
    compares it against 0xffffffff and that's
    fine, and now it's accessing a null
  • 15:00 - 15:08
    pointer: the keys array is still a
    null pointer, plus, well, 0x41414141 times
  • 15:08 - 15:14
    8. So at this point it's crashing. It's
    accessing invalid memory, precisely
  • 15:14 - 15:21
    because in this situation the
    SharedKeySet1 hasn't been validated yet.
  • 15:21 - 15:30
    OK, so that's the bug we're going to
    exploit. I have these checkpoints just to
  • 15:30 - 15:35
    think where we are, so we now have a
    vulnerability in this NSUnarchiver API. We
  • 15:35 - 15:42
    can trigger it through iMessage. So what
    exploit primitive do we have? Let's take a
  • 15:42 - 15:50
    look again at the lookup function, which
    we saw before. So here where it's bold,
  • 15:50 - 15:55
    this is where we crash. keys is null
    pointer, index is fully controlled. So we
  • 15:55 - 16:01
    can access null pointer plus offset. And
    then what happens is the result of this
  • 16:01 - 16:06
    memory access is going to be used as some
    Objective-C object. So this is all
  • 16:06 - 16:10
    Objective-C in reality, it's doing some
    comparison, which means it does something
  • 16:10 - 16:17
    like calling some method called
    isNSString, for example. And then also
  • 16:17 - 16:25
    eventually it calls dealloc, which is the
    destructor. So yeah, whatever it
  • 16:25 - 16:29
    reads from there, it will treat it
    as an Objective-C object and call a
  • 16:29 - 16:36
    method on it. And that's our exploitation
    primitive. Okay, so here we are. How do we
  • 16:36 - 16:44
    exploit this? So the rough idea for
    exploiting such vulnerabilities looks like
  • 16:44 - 16:50
    this. You want to have some fake
    Objective-C object somewhere in memory
  • 16:50 - 16:55
    that you're referencing. So again, we
    can access an arbitrary absolute
  • 16:55 - 17:00
    address. We want some fake Objective-C
    object there. Every Objective-C object has
  • 17:00 - 17:07
    a pointer to its class. And then the class
    has something called a method table, which
  • 17:07 - 17:11
    basically has function pointers to these
    methods. Right. And so if we fake this
  • 17:11 - 17:17
    entire data structure thing, the fake
    object and the fake class, then as soon as
  • 17:17 - 17:22
    the process calls some method on our fake
    thing, we get code execution. So we get
  • 17:22 - 17:28
    control over the instruction pointer and
    then it's game over. So that's going to be
  • 17:28 - 17:38
    our goal for this exploit. So here we have
    two different types of addresses: On the
  • 17:38 - 17:43
    left side we have heap addresses or data,
    really. And on the right side, in this
  • 17:43 - 17:50
    NSString-thing we need library addresses
    or code addresses, simply because on iOS
  • 17:50 - 17:57
    you can't have writable code regions. So
    we have to necessarily reuse existing
  • 17:57 - 18:01
    code, so do something like ROP.
    So we need to know where libraries are
  • 18:01 - 18:07
    mapped for this. And this is exactly the
    problem we are gonna face now because
  • 18:07 - 18:12
    there is something called ASLR, Address
    Space Layout Randomization. And what it
  • 18:12 - 18:17
    does is it will randomize this entire
    address space. So on the left side, you
  • 18:17 - 18:23
    can see what a process looks like, what the
    virtual memory of a process looks like
  • 18:23 - 18:29
    before ASLR. And there everything is
    always mapped at the same address. So if
  • 18:29 - 18:33
    you start the same process twice, maybe
    even on different phones, without ASLR
  • 18:33 - 18:37
    the same library is at the same address,
    the heap is always at the same address,
  • 18:37 - 18:42
    the stack. Everything is the same. And so this
    would be really simple to exploit now
  • 18:42 - 18:48
    because, well, everything is the same.
    With ASLR everything is shifted and now
  • 18:48 - 18:53
    all the addresses are randomized and we
    don't really know where anything is. And
  • 18:53 - 19:00
    so that makes it harder to exploit this.
    So we need an ASLR bypass is what this
  • 19:00 - 19:07
    means. We're gonna divide it into two
    parts. So the heap addresses we get them
  • 19:07 - 19:12
    from in a different way than the library
    addresses. So let's see how we get heap
  • 19:12 - 19:18
    addresses. It's really simple honestly,
    what you can do is heap spraying, which is
  • 19:18 - 19:25
    an old technique. I think 15 years old
    maybe. And it does still work today. The
  • 19:25 - 19:30
    idea is that you simply allocate lots of
    memory. So if you look at this code I
  • 19:30 - 19:35
    put on the right, which you can use to
    test that, what it does is that it allocates
  • 19:35 - 19:41
    256 megabytes of memory on the heap with
    malloc. And then afterwards there's one
  • 19:41 - 19:48
    address or there's many addresses. But in
    this case, I'm using this 0x110000000
  • 19:48 - 19:54
    where you will find your data. Okay. So
    just spraying 256 megabytes lets you put
  • 19:54 - 20:00
    controlled data at a controlled address,
    which is enough for this first part of the
  • 20:00 - 20:05
    exploit. The remaining question is how can
    you heap spray over iMessage. That's a
  • 20:05 - 20:11
    bit more complicated. But it is possible
    because NSKeyedUnarchiver is great and it
  • 20:11 - 20:18
    lets you do all sorts of weird stuff which
    you can abuse for heap spraying. So, yeah.
  • 20:18 - 20:24
    Blog posts will have more details. Okay.
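As a concept sketch of heap spraying itself (the iMessage spray goes through NSKeyedUnarchiver tricks; this Python version only models the data layout, not real allocator behaviour, and the target address is the one from the talk):

```python
import struct

PAGE = 0x4000
# The address the exploit will later dereference, per the talk.
GUESSED_ADDR = 0x110000000

def make_spray_chunk(payload: bytes, chunk_size: int = 1 << 20) -> bytes:
    # Tile the payload across the chunk so it appears at every
    # payload-aligned offset; any page-aligned landing spot inside the
    # sprayed region then contains the payload at a known offset.
    assert chunk_size % len(payload) == 0
    assert PAGE % len(payload) == 0
    return payload * (chunk_size // len(payload))

# 16-byte payload, e.g. two fake pointer-sized fields.
payload = struct.pack("<2Q", 0x4141414141414141, 0x4242424242424242)
chunks = [make_spray_chunk(payload) for _ in range(4)]  # real spray: ~256 MB
```

The point is purely statistical: replicate controlled data often enough and a fixed guessed address like 0x110000000 almost certainly points into it.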
    So we have these, the heap addresses. We
  • 20:24 - 20:32
    have them. We need the library addresses.
    Let's go back to the virtual memory space.
  • 20:32 - 20:38
    On iOS and also on macOS the libraries -
    so maybe in this case all three libraries,
  • 20:38 - 20:43
    but in reality, it's like hundreds of
    system libraries - they are all prelinked
  • 20:43 - 20:49
    into one gigantic binary blob, which is
    called a dyld_shared_cache. The idea is
  • 20:49 - 20:54
    that this speeds up like loading times
    because all the interdependencies between
  • 20:54 - 20:59
    libraries are resolved pretty much at
    compile time. But yeah, so we have this
  • 20:59 - 21:05
    gigantic binary blob and it has everything
    we need. So it has all the code, it has
  • 21:05 - 21:11
    all the ROP gadgets and it has all the
    Objective-C classes. So we have to know
  • 21:11 - 21:19
    where this dyld_shared_cache is mapped. If
    you dig into that a bit or if you look at
  • 21:19 - 21:25
    the documentation or the the binaries, you
    can find out that it is going to be mapped
  • 21:25 - 21:32
    always between these two addresses. So
    between 0x180000000 and 0x280000000, which
  • 21:32 - 21:37
    leaves only a 4 gigabyte region, so it's
    only being mapped in these 4 gigabytes.
  • 21:37 - 21:43
    And then the randomization granularity is
    also 0x4000 because iOS uses large pages
  • 21:43 - 21:50
    so it can only randomize with page
    granularity, and that page granularity is
  • 21:50 - 21:58
    0x4000. But really what's most interesting
    is that on the same device, the
  • 21:58 - 22:04
    dyld_shared_cache is only randomized once
    per boot. So if you have two different
  • 22:04 - 22:08
    processes on the same device, the shared
    cache is at the same virtual address. And
  • 22:08 - 22:12
    if you have one process, then it crashes
    and you have another one. And so on, like
  • 22:12 - 22:18
    the shared cache is always going to be at
    the same address. And that makes it really
  • 22:18 - 22:24
    interesting. And also, it's one gigabyte
    in size. It's gigantic. So it's not too
  • 22:24 - 22:32
    hard to find in this four gigabyte region.
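A quick back-of-the-envelope with the numbers just given shows how small the search space really is (a sketch; the exact cache size varies per iOS build, 1 GiB is the round figure from the talk):

```python
import math

REGION_START = 0x180000000
REGION_END   = 0x280000000
PAGE         = 0x4000        # randomization granularity
CACHE_SIZE   = 1 << 30       # shared cache is roughly 1 GiB

# Number of page-aligned base addresses the cache could start at,
# given that it must fit entirely inside the 4 GiB region.
candidates = (REGION_END - REGION_START - CACHE_SIZE) // PAGE + 1

# An ideal binary search over that many candidates needs only this
# many oracle queries.
search_steps = math.ceil(math.log2(candidates))
```

So even before any probing there are only a couple hundred thousand possible bases, and a logarithmic search needs on the order of 18 queries.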
    Right. So this is what our task has
  • 22:32 - 22:37
    boiled down to at this point. We have this
    address range, we have the shared cache.
  • 22:37 - 22:45
    And all we need to know now is what is
    this offset? So let's make a thought
  • 22:45 - 22:51
    experiment. Let's say we had an oracle
    which would tell us... which we could give
  • 22:51 - 22:56
    an address. And it would tell us if this
    address is mapped in the remote process.
  • 22:56 - 23:03
    OK, if we have this, it suddenly becomes
    really easy to solve this problem, because
  • 23:03 - 23:08
    then all you have to do is you go in 1
    gigabyte steps, the size of the shared
  • 23:08 - 23:16
    cache between these two addresses and then
    at some point you find a valid address. So
  • 23:16 - 23:21
    maybe here after 3 steps, you find a valid
    address, and then from there you just do a
  • 23:21 - 23:26
    binary search. Right. Because you know
    that somewhere between the green and the
  • 23:26 - 23:31
    second red arrow, the shared cache starts.
    So you can do a binary search and you find
  • 23:31 - 23:40
    the start address in logarithmic time
    in a few seconds, minutes, whatever. So
  • 23:40 - 23:44
    obviously the question is what? How? Where
    would we get this oracle from? This seems
  • 23:44 - 23:51
    kind of weird. So let's look at receipts,
    message receipts. So iMessage like many
  • 23:51 - 23:58
    other messengers - I think pretty much all
    of them that I know - send receipts for
  • 23:58 - 24:05
    different things. iMessage in particular
    has delivery receipts and read receipts.
  • 24:05 - 24:11
    Delivery receipt means the device
    received the message, read receipt means
  • 24:11 - 24:16
    the user actually looked - opened the app,
    looked at the message. You can turn off
  • 24:16 - 24:21
    read receipts, but as far as I know, you
    cannot turn off delivery receipts. And so
  • 24:21 - 24:28
    here on the left you see a screenshot.
    Three different messages were sent and
  • 24:28 - 24:32
    they have three different states. The
    first message was marked as read, which
  • 24:32 - 24:37
    means it got a delivery receipt and a read
    receipt. The second message is marked as
  • 24:37 - 24:42
    delivered. So it only got a delivery
    receipt and the third message doesn't have
  • 24:42 - 24:51
    anything. So it hasn't received any
    receipt. OK. So why is it useful? Here on
  • 24:51 - 24:58
    the left is some pseudocode of how
    imagent handles messages and
  • 24:58 - 25:04
    when it sends these receipts. And so you
    can see that it first parses the plist
  • 25:04 - 25:10
    that's coming in and it's then doing this
    nsUnarchive at some later time. And this
  • 25:10 - 25:15
    is exactly where our bug would
    trigger during nsUnarchive. And only then
  • 25:15 - 25:23
    does it send a delivery receipt. Right. So
    what that means is if, during
    our nsUnarchive, we can trigger the bug
    our nsUnarchive, if we can trigger the bug
    and cause a crash, then we have somewhat
  • 25:28 - 25:34
    of a one-bit side channel. Right. Because
    if we cause a crash, then we won't see a
  • 25:34 - 25:39
    delivery receipt. And if we don't cause a
    crash, then we see a delivery receipt. So
  • 25:39 - 25:48
    it's one bit of information. And this is
    going to be our oracle. All right. So
  • 25:48 - 25:53
    ideally, you have a vulnerability that
    gives you this perfect oracle of is an
  • 25:53 - 25:59
    address mapped or not? So crash, if it is
    not mapped, don't crash if it is mapped. In
  • 25:59 - 26:04
    reality, you probably will not get this
    perfect oracle from your bug. On the left
  • 26:04 - 26:10
    side, you see the real oracle function for
    this vulnerability, which is, well it has
  • 26:10 - 26:18
    to be mapped. OK. But then it's also using
    the value that it's reading. And so it
  • 26:18 - 26:24
    will only not crash if the value is either
    0 or if it has the most significant bit
  • 26:24 - 26:29
    set, that is some pointer-tagging
    stuff or if it's a real legitimate pointer
  • 26:29 - 26:35
    to an Objective-C object. So this oracle
    function is a bit more complex, but the
  • 26:35 - 26:41
    similar idea still works. So you can still
    do something like a binary search, and
  • 26:41 - 26:48
    then infer the shared cache start address
    in logarithmic time. Right. And so it only
  • 26:48 - 26:54
    takes maybe five minutes or so to do this.
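The two-phase search can be sketched end to end with an idealized oracle. Here `oracle` is a stand-in for the delivery-receipt side channel (true iff an address is mapped), and `CACHE_BASE` is a made-up example value the attacker does not know:

```python
CACHE_BASE   = 0x1a7654000   # hypothetical; unknown to the attacker
CACHE_SIZE   = 1 << 30
REGION_START = 0x180000000
PAGE         = 0x4000

def oracle(addr: int) -> bool:
    # Stand-in for the receipt side channel: did probing `addr` crash?
    return CACHE_BASE <= addr < CACHE_BASE + CACHE_SIZE

def find_cache_base() -> int:
    # Phase 1: linear scan in cache-sized steps. Since the cache is as
    # large as the step, exactly one probe is guaranteed to hit it.
    addr = REGION_START
    while not oracle(addr):
        addr += CACHE_SIZE
    # Phase 2: binary search for the first mapped page.
    # Invariant: oracle(lo) is False, oracle(hi) is True.
    lo, hi = addr - CACHE_SIZE, addr
    while hi - lo > PAGE:
        mid = lo + ((hi - lo) // 2 // PAGE) * PAGE
        if oracle(mid):
            hi = mid
        else:
            lo = mid
    return hi   # first mapped page == shared cache base
```

Each loop iteration of phase 2 costs one message-and-wait-for-receipt round trip, which is where the few minutes of wall-clock time go.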
    But for this part, again, I have
  • 26:54 - 27:02
    to refer to the blog post which will cover
    how this works. OK. So this is the summary
  • 27:02 - 27:08
    of the remote ASLR bypass. Two phases,
    there's linear scan where it's just
  • 27:08 - 27:13
    scanning, sending these payloads and
    checking if it gets the receipt back, and
  • 27:13 - 27:17
    the first time it gets a receipt back, it
    knows. OK. This address is valid. I now
  • 27:17 - 27:22
    found an address that is within the shared
    cache. And at that point it starts this
  • 27:22 - 27:28
    searching phase, which in logarithmic time
    figures out the exact, precise starting
  • 27:28 - 27:37
    address. So there's a few common questions
    about this that I want to briefly go into.
  • 27:37 - 27:42
    The first maybe obvious question is, can
    you really just crash this agent like 20
  • 27:42 - 27:50
    plus times? And the answer is yes. There's
    no indicator or anything that the user
  • 27:50 - 27:56
    would see that this daemon crashes.
    The only thing you can do is you can go
  • 27:56 - 28:00
    into like settings, privacy, something
    something, crash log something, and then
  • 28:00 - 28:07
    you can see these crash logs. Second
    question is, I think by default,
  • 28:07 - 28:12
    the iPhone is configured to send crash
    logs to the vendor, to Apple. So isn't
  • 28:12 - 28:17
    that a problem? So I think I looked at
    this briefly. What I stumbled across was
  • 28:17 - 28:25
    that it seems that iOS collects at most 25
    crash logs per service. This is not
  • 28:25 - 28:31
    designed to be like a security feature.
    Right. So this makes sense. But what that
  • 28:31 - 28:38
    means is that an attacker can use some
    kind of, well, resource exhaustion bug to
  • 28:38 - 28:45
    crash this daemon maybe 25 times first,
    and then only start to exploit and then no
  • 28:45 - 28:52
    trace of the exploit will be sent over.
    Third question is whether this can be
  • 28:52 - 28:57
    fixed by simply sending the delivery
    receipt very early on. I think this is...
  • 28:57 - 29:02
    this was my first suggestion to Apple to
    just send this delivery receipt right at
  • 29:02 - 29:07
    the start. Eventually I figured out it
    doesn't really work because you can still
  • 29:07 - 29:12
    make some kind of timing side channel,
    because when a daemon crashes multiple
  • 29:12 - 29:18
    times, it's subject to some penalty and it
    will only restart like a few seconds or
  • 29:18 - 29:24
    even minutes later. So from the timing of
    getting a delivery receipt, you can then
  • 29:24 - 29:30
    still basically get this oracle. Right. So
    it doesn't really work by just sending it
  • 29:30 - 29:38
    earlier. I'll go into some other ideas
    that might work later. Okay. So at this
  • 29:38 - 29:48
    point I'm starting the demo. The demo is
    two parts. Let's see where it is. Right.
  • 29:48 - 29:54
    So I have this iPhone here and you can
    with QuickTime... the screen is mirrored
  • 29:54 - 30:06
    to the projector. So this iPhone, it's an
    XS, so it's from last year. It's on 12.4,
  • 30:06 - 30:11
    which is the last vulnerable version. So
    that's like half a year old at this point.
  • 30:11 - 30:23
    And what else? So there are no existing
    chats open. Okay. And let's see. So I hope
  • 30:23 - 30:28
    the Wi-Fi works. What you can see here is
    the way the exploit works that it's
  • 30:28 - 30:35
    hooking with Frida into... Do we get
    delivery receipt? Uh, do we? Yeah. Okay,
  • 30:35 - 30:41
    cool. It works. So, yeah, it's popping up
    these messages. The way the exploit works
  • 30:41 - 30:46
    is that it's hooking the Messages app on
    macOS with Frida and then it's sending
  • 30:46 - 30:54
    these specific marker messages like
    INJECT_ATI, and then the Frida hook
  • 30:54 - 30:58
    replaces this message with like the
    current payload. Right. And now it's
  • 30:58 - 31:09
    testing these addresses. It's not too slow
    I guess. Yeah. And it's popping up some
  • 31:09 - 31:13
    nice messages. Okay. It already found one.
    Okay. So this is already the end of the
  • 31:13 - 31:19
    first stage. So that was quite fast. It
    found a valid address in this like first
  • 31:19 - 31:25
    probing step and now it has 21,000
    candidates for the shared cache base. Now
  • 31:25 - 31:31
    it's doing this kind of binary search
    thing to halve that in every step. Okay.
  • 31:31 - 31:39
    Now it only has 10,000 left and so it's
    quite fast and quite efficient. Okay.
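The halving happening on screen is a plain binary search driven by a one-bit oracle. A minimal sketch of the idea (all addresses and the oracle function here are made up for illustration; the real oracle is whether a delivery receipt arrives or the daemon crashes):

```python
# Toy model of the one-bit-oracle ASLR bypass: each "message" answers
# one question, "is the shared cache base at least this address?", so
# the candidate set halves every round. Constants are illustrative.

SECRET_BASE = 0x18A608000  # the value the "attacker" wants to learn

def oracle(guess: int) -> bool:
    """One bit per message: in reality, delivery receipt vs. crash."""
    return SECRET_BASE >= guess

def find_base(lo: int, hi: int, step: int) -> int:
    """Binary search over the step-aligned candidate range."""
    messages = 0
    while hi - lo > step:
        mid = lo + ((hi - lo) // (2 * step)) * step  # stay step-aligned
        if oracle(mid):
            lo = mid
        else:
            hi = mid
        messages += 1
    print(f"found base {hex(lo)} after {messages} messages")
    return lo
```

With a gigabyte-scale candidate range and page alignment this converges in a few dozen probes, which matches the "roughly halved every round" behavior of the demo.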
  • 31:39 - 31:50
    While this runs, um, let's continue. So
    this is where we are. We can now create
  • 31:50 - 31:56
    fake objects. We have all the addresses we
    need. It's like this 1170 is where we can
  • 31:56 - 32:02
    place our stuff and then we will gain
    control over the program counter. And from
  • 32:02 - 32:07
    there it's standard stuff, right? It's
    what you would do in all of these exploits
  • 32:07 - 32:12
    you pivot maybe to the stack, you do
    return oriented programming and then you
  • 32:12 - 32:17
    can run your code and you've succeeded.
    Now, at this point, there is another thing
  • 32:17 - 32:24
    coming in. Pointer authentication is a new
    security feature that Apple designed and
  • 32:24 - 32:32
    implemented first in the XS, so this
    device from 2018. And the idea is that you
  • 32:32 - 32:37
    can now - for this you need CPU support -
    the idea is that you can now store a
  • 32:37 - 32:43
    cryptographic signature in the top bits of
    a pointer. OK, so here on the very left
  • 32:43 - 32:48
    side, you have a raw pointer. So the top
    bits are zero because of the way the address
  • 32:48 - 32:57
    space works. Now there's a set of
    instructions that sign a pointer and they
  • 32:57 - 33:02
    will maybe take a context into account, and they
    use some key that's not in memory - that's
  • 33:02 - 33:08
    in a register, compute a signature of this
    pointer and store the signature in the top
  • 33:08 - 33:12
    bits. And that's what you see on the right
    side. The green things. That's the
  • 33:12 - 33:20
    signature. And now before using this
    pointer, the code will now authenticate by
  • 33:20 - 33:25
    running another instruction. And this
    instruction, if the verification fails, it
  • 33:25 - 33:29
    will basically clobber this pointer,
    make it invalid. And then the following
  • 33:29 - 33:34
    instructions will just crash. Right. So
    here this is a function call, the BL,
  • 33:34 - 33:39
    branch and link instruction. This is doing
    a function call to a function pointer. But
  • 33:39 - 33:43
    first it's authenticating this pointer.
    And if this authentication step fails,
  • 33:43 - 33:50
    then the process will crash right there.
    What this means for an attacker is that
  • 33:50 - 33:56
    more or less, ROP is dead, because ROP
    involves faking a bunch of function
  • 33:56 - 34:00
    point... or like, well, code pointers
    really, that point in the middle of
  • 34:00 - 34:05
    existing code. So this is no longer
    possible because an attacker cannot
  • 34:05 - 34:13
    generate these signatures. So this is
    where our exploit breaks, right, the red
  • 34:13 - 34:20
    thing. Well, we have a fake objective C
    class with our own function pointer. This
  • 34:20 - 34:26
    does no longer work because we cannot
    compute these signatures. So what do we
  • 34:26 - 34:33
    do? One thing that's still possible and
    it's even documented in the documentation
  • 34:33 - 34:38
    is that this class pointer in the object -
    what's also called the ISA pointer -
  • 34:38 - 34:45
    it's not protected by PAC in any way.
    Which means we can fake instances of
  • 34:45 - 34:52
    legitimate existing classes. Right. So in
    this case here we can have a fake object
  • 34:52 - 34:59
    that points to a real class that has real,
    legitimately signed method pointers. So
  • 34:59 - 35:06
    this still works. And with this, we can now
    get existing methods called, out of place
  • 35:06 - 35:10
    and kind of manipulate the control flow.
    And these existing methods are basically
  • 35:10 - 35:20
    now gadgets. So if you want to think about
    it that way. So what can we do with this?
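The sign/authenticate flow described above can be sketched as a toy model. To be clear about the assumptions: the real thing uses dedicated ARMv8.3 instructions (PACIA, AUTIA, BLRAA, ...) with a hardware-held key and a block cipher, not HMAC, and the bit widths below are illustrative:

```python
# Toy model of pointer authentication (PAC): a keyed MAC of the pointer
# (plus an optional context) is stored in the unused top bits, and
# authentication either strips it or poisons the pointer on mismatch.
import hmac, hashlib

KEY = b"key held in a register, not memory"  # illustrative stand-in
PTR_BITS = 40                  # low bits actually used for addressing
SIG_BITS = 64 - PTR_BITS

def sign(ptr: int, context: int = 0) -> int:
    """Store a truncated MAC of (ptr, context) in the top bits."""
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    mac = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:8], "little")
    sig = mac & ((1 << SIG_BITS) - 1)
    return ptr | (sig << PTR_BITS)

def authenticate(signed_ptr: int, context: int = 0) -> int:
    """Return the raw pointer, or a poisoned one if the MAC is wrong."""
    raw = signed_ptr & ((1 << PTR_BITS) - 1)
    if sign(raw, context) != signed_ptr:
        return raw | (1 << 62)  # clobbered: dereferencing this crashes
    return raw
```

This is why faked code pointers die at the authenticate step: without the key, the attacker cannot produce the top-bit signature, while an unsigned data pointer like the isa stays fair game.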
  • 35:20 - 35:25
    One very interesting method we can get
    called is dealloc, the destructor. So I
  • 35:25 - 35:30
    think in quite a few, maybe most of the
    Objective-C exploitation scenarios, you
  • 35:30 - 35:36
    can probably get a dealloc method called.
    Now what you do is you just enumerate all
  • 35:36 - 35:41
    the destructors in the shared cache.
    There's tons of them, I think 50,000, and
  • 35:41 - 35:47
    you can get any of those called. And then
    one of them or a few of them are really
  • 35:47 - 35:52
    interesting because they call this invoke
    method, which is part of the NSInvocation
  • 35:52 - 35:59
    object, or class. And an NSInvocation is
    basically a bound function. So it has a
  • 35:59 - 36:05
    target object, the method to be called and
    all the arguments. And as soon as you call
  • 36:05 - 36:10
    invoke on this NSInvocation, it does this
    method call with fully controlled arguments.
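Conceptually, an NSInvocation is just a stored, bound call, and the dealloc gadget simply fires it. A rough analogue of that shape (class names and structure invented here for illustration, not Apple's API):

```python
# Minimal analogue of the NSInvocation gadget: a bound method call that
# can be stored in an object and fired later via invoke(). The exploit
# builds a *fake* object of this shape out of sprayed data; here we just
# model the legitimate behavior.

class Invocation:
    def __init__(self, target, selector, *args):
        self.target = target      # object to call the method on
        self.selector = selector  # method name
        self.args = args          # fully controlled arguments
    def invoke(self):
        return getattr(self.target, self.selector)(*self.args)

class Wrapper:
    """Stands in for a class whose destructor calls invoke."""
    def __init__(self, invocation):
        self.invocation = invocation
    def dealloc(self):  # the destructor the attacker can get called
        return self.invocation.invoke()
```

The key property is that everything the call needs (target, selector, arguments) lives in attacker-shaped data, so reaching dealloc is enough to perform an arbitrary method call.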
  • 36:10 - 36:15
    Right. So what that means is with this
    destructor, we can now make a fake object
  • 36:15 - 36:21
    with a fake NSInvocation that has any
    method call we would like to perform, and
  • 36:21 - 36:28
    then it's going to do that because it's
    running this invoke here. Again, you see
  • 36:28 - 36:34
    this shield here, which I put in place for
    things that Apple has hardened since we
  • 36:34 - 36:39
    sent them the exploit. So what they did so
    far is they hardened NSInvocation and it's
  • 36:39 - 36:47
    now no longer easily possible to abuse it
    in this way. But yeah. So for us, we can
  • 36:47 - 36:53
    now run arbitrary Objective-C methods with
    controlled arguments. What about
  • 36:53 - 36:59
    sandboxing? If you do some more reverse
    engineering and figure out what services
  • 36:59 - 37:04
    play into iMessage, this is what you end
    up with. On the right side. So you have a
  • 37:04 - 37:09
    number of services. Most of them are
    sandboxed. If it has the red border, it
  • 37:09 - 37:15
    means there's a sandbox. Interestingly,
    Springboard also does some NSUnarchiver
  • 37:15 - 37:23
    stuff. So it's decoding the BP key. So it
    could also trigger our vulnerability and
  • 37:23 - 37:27
    Springboard is not sandboxed. So it's the
    main UI process. It's basically what's
  • 37:27 - 37:35
    handling showing the home screen.
    And so on. And so what that means is,
  • 37:35 - 37:39
    well, we can just target Springboard and
    then we get code execution outside of the
  • 37:39 - 37:44
    sandbox so we don't actually need to worry
    too much about the sandbox. As of iOS 13,
  • 37:44 - 37:51
    this is fixed and this key is now decoded
    in the sandbox. Cool, so we can execute
  • 37:51 - 37:56
    Objective-C methods outside of the
    sandbox. We can with that access user
  • 37:56 - 38:01
    data, activate camera, microphone, etc.
    This is all possible through Objective-C
  • 38:01 - 38:06
    quite easily. But of course we don't care
    about that. What we want is a calculator
  • 38:06 - 38:11
    and this is also quite easy, with one
    Objective-C call - UIApplication
  • 38:11 - 38:17
    launchApplication blah blah blah. And so
    let's see if this works. Go back to the
  • 38:17 - 38:26
    demo. So where are we at? So the, uh, the
    ASLR bypass ran through. You can nicely
  • 38:26 - 38:31
    see that it roughly halved the candidates
    in every round, or with every message
  • 38:31 - 38:36
    it had to send. It ended up with just
    one message. Yeah, well with just one
  • 38:36 - 38:41
    candidate at the end. And that is the
    shared cache base in this case
  • 38:41 - 38:51
    0x18a608000. Now it's preparing the heap
    spray. This is all kind of hacked
  • 38:51 - 38:59
    together. I think if you wanted to do this
    properly, for one, you can send the whole
  • 38:59 - 39:07
    heap spray in one message. I'm just lazy.
    It's also probably way too big. Another
  • 39:07 - 39:12
    thing is, I think you would probably not
    target Springboard in reality just because
  • 39:12 - 39:15
    Springboard is very sensitive. So if you
    crash it, you get this re-spring and the
  • 39:15 - 39:20
    UI restarts. So I think in reality you
    would probably target imagent and then
  • 39:20 - 39:26
    chain the sandbox escape. Because, well,
    this bug would also get you out of the
  • 39:26 - 39:32
    sandbox. So it looks like it should be doable. Okay.
    So I think the last message arrived. It's
  • 39:32 - 39:36
    freezing here for a couple of seconds. I
    don't actually know why; I never bothered,
  • 39:36 - 39:45
    but it does work.
    Applause
  • 39:45 - 39:52
    Thank you. Yeah. So that was the demo.
    It's kind of naturally reliable, this
  • 39:52 - 39:59
    exploit, because there is not much of heap
    manipulation involved except this one
  • 39:59 - 40:09
    heap spray, which is controllable. Okay.
    Um, so what's left? I think one more thing
  • 40:09 - 40:15
    you can do is you can attack the kernel if
    you want that. You have to deal with two
  • 40:15 - 40:20
    problems here. One is code signing. You
    cannot execute unsigned code on iOS. And
  • 40:20 - 40:25
    then the standard workaround for that is
    you abuse JIT pages in Safari. But we are
  • 40:25 - 40:30
    not in Safari, or we are not in web
    content, so we don't have JIT pages. What
  • 40:30 - 40:36
    I did here is I basically pivoted into
    JavaScriptCore, which is the JS
  • 40:36 - 40:42
    library. You can use it from any app
    also. And then I'm just bridging syscalls
  • 40:42 - 40:48
    into JavaScript and then implementing the
    kernel exploit in JavaScript. This does
  • 40:48 - 40:53
    not require any more vulnerabilities. So
    you do not need a JavaScriptCore bug to
  • 40:53 - 40:59
    do this. And the idea is very similar to
    pwn.js. Maybe some of you know about that.
  • 40:59 - 41:03
    It's a library. I think initially
    developed for Edge because they did
  • 41:03 - 41:11
    something similar with JIT page
    hardenings. So what I decided to do is take
  • 41:11 - 41:19
    SockPuppet from Ned Williamson, or CVE-2019-8605,
    which works on this version, it works on
  • 41:19 - 41:27
    12.4. This is the trigger for it. And I
    only ported the trigger. I didn't bother
  • 41:27 - 41:31
    re-implementing the entire exploit. So
    yeah, this is the trigger. It will cause a
  • 41:31 - 41:37
    kernel panic. It's quite short. Which is
    nice. So if you want to run this from
  • 41:37 - 41:42
    JavaScript, really, there's only three
    things you care about, right? So the first
  • 41:42 - 41:48
    one is you need the syscalls. So
    highlighted here, there is like four or so
  • 41:48 - 41:52
    different syscalls here. Not a lot. And
    you just have to be able to call them from
  • 41:52 - 41:58
    JavaScript. The other thing is you need
    constants, right? So I have AF_INET6,
  • 41:58 - 42:02
    SOCK_STREAM. These are all integer
    constants. So this is really easy, right?
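These constants really are trivial to recover, since most runtimes expose them directly. For example, in Python (shown purely as an illustration; note the values are per-OS, AF_INET6 differs between XNU and Linux, so for the real port they have to match the target's headers):

```python
# The integer values behind names like SOCK_STREAM and AF_INET6 are
# trivially recoverable from any libc-adjacent runtime. The values are
# platform-specific (e.g. AF_INET6 is 30 on XNU/macOS, 10 on Linux),
# which is exactly why they get extracted for the target platform.
import socket

constants = {
    "AF_INET6": int(socket.AF_INET6),
    "SOCK_STREAM": int(socket.SOCK_STREAM),
    "IPPROTO_TCP": int(socket.IPPROTO_TCP),
}
print(constants)
```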
  • 42:02 - 42:07
    You just need to look up what these values
    end up being. And then the last thing is
  • 42:07 - 42:14
    you need some data structures. So in this
    case, I need this so_np_extension thing.
  • 42:14 - 42:22
    It needs some integer value to pass
    pointers to and so on. Yeah. And then this
  • 42:22 - 42:28
    is kind of the magic that happens. You
    take sock_puppet.c, extract the syscalls
  • 42:28 - 42:34
    etc. There is one Objective-C message you
    can call which is very convenient, which
  • 42:34 - 42:42
    gives you a dlsym. What this lets you do
    is, it lets you get native C function
  • 42:42 - 42:47
    pointers that are signed, right. Because
    so far we can only call Objective-C
  • 42:47 - 42:51
    methods, but we need to be able to call
    syscalls or at least the C wrapper
  • 42:51 - 43:00
    functions. So with this dlsym method thing
    we can get signed pointers to C functions.
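The same kind of bridge is easy to demonstrate from any scripting runtime. For instance, Python's ctypes resolves symbols dlsym-style and hands back a callable C function pointer; this is an analogue of the idea, not the Objective-C dlsym primitive from the talk:

```python
# Scripting-language bridge to native C functions via dlsym, analogous
# to the primitive described above: resolve a symbol from the already
# loaded libraries and call it with C types.
import ctypes

# CDLL(None) is dlopen(NULL): the process's global symbol scope, where
# libc symbols like strlen are visible.
libc = ctypes.CDLL(None)
strlen = libc.strlen          # effectively dlsym(RTLD_DEFAULT, "strlen")
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"iMessage"))    # calls the native C function
```

On iOS the interesting part is that the pointer handed back this way comes legitimately PAC-signed, which is what makes C calls reachable from the method-call primitive.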
  • 43:00 - 43:03
    Then we need to be able to pivot into
    JavaScript code, which is also really easy
  • 43:03 - 43:09
    with one method call, the JSContext
    evaluateScript. We need to mess around
  • 43:09 - 43:13
    with memory a bit like corrupt some
    objects from outside, corrupt some array
  • 43:13 - 43:19
    buffers in JavaScript, get read/write.
    Kind of standard browser exploitation
  • 43:19 - 43:23
    tricks I guess. But yeah. So if you do
    this what you end up with is
  • 43:23 - 43:31
    sock_puppet.js. It looks very similar. You
    can see a bit of my JavaScript API that
  • 43:31 - 43:37
    lets you allocate memory buffers, read
    and write memory, have some integer
  • 43:37 - 43:42
    constants and yeah, apart from that, it
    doesn't really look much different from
  • 43:42 - 43:49
    the initial trigger. And so this can now
    be served over, well, staged onto the
  • 43:49 - 43:55
    iMessage exploit, building on top of this
    Objective-C method call primitive. And I
  • 43:55 - 44:00
    guess, at least in theory (I didn't fully
    implement it), this should be able to just
  • 44:00 - 44:08
    run a kernel exploit and fully compromise
    the device without any interaction in
  • 44:08 - 44:14
    probably less than 10 minutes. Okay, so
    this was the first part: how does
  • 44:14 - 44:19
    this exploit work? What I have now is a
    number of suggestions how to make this
  • 44:19 - 44:26
    harder and how to improve things. So one
    of the first things that is really
  • 44:26 - 44:30
    critical for this exploit is the ASLR
    bypass, which relies on a couple of
  • 44:30 - 44:37
    things. And I think a lot of this ASLR
    bypass also works on other platforms. So
  • 44:37 - 44:42
    Android has a very similar problem with
    like mappings being at the same address
  • 44:42 - 44:47
    across processes. And other messengers
    have these like receipts and so on. So I
  • 44:47 - 44:52
    think a lot of this applies not just to
    Apple but to Android and to other
  • 44:52 - 44:57
    messengers. But okay. What is the first
    point? So weak ASLR, this is basically the
  • 44:57 - 45:04
    heap spraying, which is just too easy.
    This shouldn't be so easy. In terms of
  • 45:04 - 45:08
    theoretical ASLR, you can see it maybe
    sketched here on the right. In theory,
  • 45:08 - 45:13
    ASLR could be much stronger, much more
    randomized. In reality, it's just like the
  • 45:13 - 45:19
    small red bar. So really it should just
    have much more entropy to make heap
  • 45:19 - 45:30
    spraying not viable anymore. The next
    problem with ASLR is per-boot stuff. At
  • 45:30 - 45:33
    the bottom you can see it, right? So you
    have three different processes, the shared
  • 45:33 - 45:37
    cache is always at the same address,
    similar problems on other platforms, I
  • 45:37 - 45:44
    mentioned that. This is probably hard to
    fix because by this point quite
  • 45:44 - 45:50
    a lot relies on this. And it would be a
    big performance hit to change this. But
  • 45:50 - 45:56
    maybe some clever engineers can figure out
    how to do it better. The third part here
  • 45:56 - 46:01
    is the delivery receipts, which,
    interestingly, they can give this side
  • 46:01 - 46:06
    channel, this one bit information side
    channel and this can be enough to break
  • 46:06 - 46:11
    ASLR. And as I've mentioned before, I
    think a lot of other messengers have this
  • 46:11 - 46:21
    same problem. What might work is to
    either, well, remove these receipts. Sure.
  • 46:21 - 46:25
    Or maybe send them from a different
    process so you can't do this timing thing
  • 46:25 - 46:30
    or even from the server. I think if the
    server already sends the
  • 46:30 - 46:38
    delivery receipt, it's a bit of cheating.
    But at least this attack doesn't work.
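The restart-penalty side channel from earlier can be modeled in a few lines. The penalty and threshold values below are made up for illustration, not Apple's actual throttling policy:

```python
# Toy model of the crash/restart timing oracle: even if the delivery
# receipt always arrives eventually, a crashed daemon only comes back
# after a restart penalty, so the *latency* of the receipt still leaks
# the one crash bit. All numbers here are illustrative.

RESTART_PENALTY = 10.0  # seconds a crashed daemon stays down (toy value)
NORMAL_LATENCY = 0.1    # normal round trip for a delivery receipt

def receipt_latency(message_crashes_daemon: bool) -> float:
    """Latency the sender observes for the delivery receipt."""
    if message_crashes_daemon:
        return RESTART_PENALTY + NORMAL_LATENCY
    return NORMAL_LATENCY

def crash_oracle(latency: float, threshold: float = 1.0) -> bool:
    """Recover the one-bit oracle purely from timing."""
    return latency > threshold
```

This is why just sending the receipt earlier doesn't close the hole: as long as the receipt's timing correlates with the daemon's health, the oracle survives.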
  • 46:38 - 46:43
    Sandboxing, another thing, it's probably
    obvious, right? So everything that's
  • 46:43 - 46:50
    on the zero-click attack surface should be
    sandboxed as much as possible. Of course,
  • 46:50 - 46:56
    to, you know, to require the attacker to
    do another full exploit after getting code
  • 46:56 - 47:02
    execution. But Sandboxing can also
    complicate information leaks. So take
  • 47:02 - 47:08
    this other iMessage bug,
    CVE-2019-8646, there's a blog post about
  • 47:08 - 47:15
    this one. With it, she was
    able to cause Springboard to
    send HTTP requests to some server and
    send HTTP requests to some server and
    those would contain pictures, data,
  • 47:20 - 47:27
    whatever. If Springboard had been
    sandboxed to not allow network activities,
  • 47:27 - 47:31
    this would have been much harder. So
    sandboxing is not necessarily just about
  • 47:31 - 47:37
    this second breakout. What I do want to
    say about sandboxing is that it shouldn't
  • 47:37 - 47:42
    be relied on. So I think that this remote
    attack surface is pretty hard. And it's
  • 47:42 - 47:46
    not unlikely that it's actually harder
    than the sandboxing attack surface. And
  • 47:46 - 47:51
    also on top of that, this bug, the
    NSKeyedUnarchiver bug, it would also get
  • 47:51 - 47:56
    you out of the sandbox because the same
    API is used locally for IPC. So there's
  • 47:56 - 48:03
    that. Yeah. It would be nice if the zero-
    click attack surface code were open
  • 48:03 - 48:09
    source. Would have been nice for us. It
    would have been easier to audit. Maybe
  • 48:09 - 48:17
    someday. Another feature that I would like
    to see, or another theme, is reduced zero-
  • 48:17 - 48:22
    click attack surface. Make it at least a
    one-click attack surface. Right. So
  • 48:22 - 48:28
    here you could see that an unknown
    sender can send any messages. It would be
  • 48:28 - 48:32
    nice if there would be some pop up that's
    like, well, do you actually want to accept
  • 48:32 - 48:37
    messages? Threema lets you block unknown
    senders. I think that's a cool feature. So
  • 48:37 - 48:45
    yeah, there's more work to be done here.
    Also, this restarting service problem, I
  • 48:45 - 48:52
    think it could get even bigger. So, here
    we have pretty much unlimited tries for
  • 48:52 - 48:57
    the ASLR bypass. It's probably going to
    become even more relevant with memory
  • 48:57 - 49:04
    tagging, which can also be defeated if
    you have many tries. So yeah, I guess if
  • 49:04 - 49:08
    there's some process or some critical
    daemon crashes ten times, maybe not restart
  • 49:08 - 49:15
    it. I don't know. It's gonna need some
    more thinking, right? You don't want to
  • 49:15 - 49:21
    denial-of-service the user by just not
    restarting this daemon that crashed for
  • 49:21 - 49:26
    some unrelated reason. But yeah, it
    would be a very good idea to have some
  • 49:26 - 49:33
    kind of limit here. Okay. Conclusion. So
    yeah, zero-click exploits, they are a thing.
  • 49:33 - 49:39
    They do exist. It is possible to exploit
    single memory corruption bugs on this
  • 49:39 - 49:45
    surface with, you know, without separate
    info leaks. Despite all the mitigations we
  • 49:45 - 49:51
    have. However, I do think by turning the
    right knobs, this could be made much
  • 49:51 - 49:57
    harder. So I gave some suggestions here.
    And yeah, we need more attack surface
  • 49:57 - 50:02
    reduction, especially on the zero click
    surface. But I think there is progress
  • 50:02 - 50:06
    being made. And with that thanks for your
    attention. And I think we have time for
  • 50:06 - 50:09
    questions. Thank you.
  • 50:09 - 50:16
    applause
  • 50:16 - 50:21
    Herald: We do have time for questions. And
    if you're in the room, you should line up
  • 50:21 - 50:26
    at the microphones and then we might also
    have questions from the Internet. One
  • 50:26 - 50:33
    quick reminder is that all fun things
    work with explicit consent, and that
  • 50:33 - 50:39
    includes photos. So the photo policy of
    the CCC is that if you take a photo, you
  • 50:39 - 50:43
    need to have explicit consent by the
    people in the frame. So remember, don't do
  • 50:43 - 50:48
    any long shots into the crowd because you
    want to have the consent of everybody
  • 50:48 - 50:53
    there. Good. We have the first question
    from the Internet.
  • 50:53 - 50:57
    Question: The Internet wants to know. Did
    Apple give you some kind of a reward? And
  • 50:57 - 51:02
    was it a new iPhone?
    Answer: No, we did not get any kind of
  • 51:02 - 51:12
    reward. But we also didn't ask for it. No,
    I didn't get a new iPhone, but I'm still
  • 51:12 - 51:21
    using mine. Which is it? Yeah. I mean,
    this is an XS, right? Current hardware
  • 51:21 - 51:29
    models can be defeated with this, if that
    is the question.
  • 51:29 - 51:32
    Herald: Good. We have a question for
    microphone number 3.
  • 51:32 - 51:41
    Q: Hello. Uh, just a question. I did not
    truly understand how the fix with the
  • 51:41 - 51:48
    server or having another process, uh,
    sending the delivery receipt will fix
  • 51:48 - 51:54
    the problem because if it does work, if
    you are in the right addresses, the thing
  • 51:54 - 52:02
    just will work. Make the server or the
    process, send the delivery message and if
  • 52:02 - 52:11
    it crashes, it doesn't do anything so...
    A: So the idea would be in this case, I'm
  • 52:11 - 52:15
    like sending this one method that would
    crash and then either I get a delivery
  • 52:15 - 52:20
    receipt or I don't. If the server already
    sends the delivery receipt before it
  • 52:20 - 52:26
    actually gives the message to the client
    or to the receiver, then I would always
  • 52:26 - 52:30
    see a delivery receipt and I wouldn't be
    able to figure out if my message caused
  • 52:30 - 52:36
    the crash or not. So that's the idea
    behind maybe sending it on the server
  • 52:36 - 52:39
    side, if that makes sense.
    Follow-up question: Yeah. But in this
  • 52:39 - 52:47
    case, if legit people send a message and
    it doesn't reach the people because...
  • 52:47 - 52:53
    A: Yeah. Yeah. It's a hack. Right. So it's
    not perfect. I mean the server could only
  • 52:53 - 52:59
    send this delivery receipt once it,
    like, sent it out over TCP and maybe got a
  • 52:59 - 53:06
    TCP ACK or whatever happens in the kernel.
    But it's a hack in any case. Yeah. Like
  • 53:06 - 53:08
    it's a tradeoff.
    Herald: We have a question for microphone
  • 53:08 - 53:14
    number two.
    Q: Hello. Okay. Thanks for the talk. Two
  • 53:14 - 53:21
    questions. First: Is OS X also a
    potential candidate for this bug? And
  • 53:21 - 53:26
    second: Can you distinguish multiple
    devices with your address based
  • 53:26 - 53:31
    randomization detection?
    A: Mm hmm. So yes: OS X, or macOS, is
  • 53:31 - 53:37
    affected just the same. I think this
    specific exploit wouldn't directly work
  • 53:37 - 53:40
    because address space looks a bit
    different, but I think you could make it
  • 53:40 - 53:45
    work and it's affected. In terms of
    multiple devices, so I haven't played
  • 53:45 - 53:51
    around with that. I could imagine that it
    is possible to somehow figure out that
  • 53:51 - 53:57
    there are multiple devices or that you
    know which device just crashed. But I
  • 53:57 - 54:01
    haven't investigated. That's the answer.
    Follow-up: Thanks.
  • 54:01 - 54:04
    Herald: We still have time for more
    questions. There was a question from
  • 54:04 - 54:08
    microphone number 1.
    Q: Hi. Thanks for the talk. Quick
  • 54:08 - 54:16
    question. You said that exploitation could
    be made without having any notification.
  • 54:16 - 54:21
    How would that be made?
    A: Yeah, I briefly looked into how it
  • 54:21 - 54:29
    could work. Well. So for one, you can take
    out parts of the message so that it fails
  • 54:29 - 54:34
    parsing later on in the processing and
    then it will just be like thrown away
  • 54:34 - 54:39
    because it says, well, this is garbage.
    The other thing is, of course, once you
  • 54:39 - 54:45
    get with the like very last message where
    you get code execution, you cannot prevent
  • 54:45 - 54:49
    it from showing a message like a
    notification, because that happens
  • 54:49 - 54:53
    afterwards.
    Follow-up Q: But until you get the code
  • 54:53 - 54:57
    execution, you can't remove it. So you
    see the first message?
  • 54:57 - 55:01
    A: But you can do the other, the other
    thing, like make the message look
  • 55:01 - 55:06
    bad - bad enough that like later parsing
    stages will throw them away.
  • 55:06 - 55:09
    Follow-up: Thanks.
    Herald: Good. We have a couple of more
  • 55:09 - 55:11
    questions. Remember, if you don't feel
    comfortable lining up behind the
  • 55:11 - 55:15
    microphones, you can ask through the
    signal angel through the Internet.
  • 55:15 - 55:20
    Microphone number 4, please.
    Q: Yes. Hi. Hi Samuel. Um, I was curious
  • 55:20 - 55:24
    you have some suggestions about reducing
    the attack surface. Are there any
  • 55:24 - 55:28
    suggestions that you'd make to save, like
    Apple or Google? You know, in terms of
  • 55:28 - 55:32
    what they can see. You mentioned logging a
    little bit earlier.
  • 55:32 - 55:40
    A: Yeah. So I sent pretty much this list
    with the exploit I sent to Apple. And I
  • 55:40 - 55:47
    think the blog post will have a bit more.
    But yeah, I told them the same thing.
  • 55:47 - 55:50
    Yeah, if that's your question, did I get
    it right?
  • 55:50 - 55:54
    Follow-up Q: Yes. I mean, maybe I
    misunderstood a little bit, but I suppose
  • 55:54 - 55:57
    that some of these reductions in the attack
    surface seem to be in terms of like what's
  • 55:57 - 56:02
    happening on the device. Yeah. Whereas I'm
    wondering in terms of monitoring. So being
  • 56:02 - 56:04
    able to catch something like this in
    progress.
  • 56:04 - 56:08
    A: Right. Right. So this is gonna be
    really hard because of end-to-end
  • 56:08 - 56:13
    encryption. So the server just sees like
    encrypted garbage and has no way of
  • 56:13 - 56:19
    knowing is this an image? Is that the
    text? This is an exploit? So on the
  • 56:19 - 56:26
    server, I don't think you can do much
    there. I think it's gonna have to be on
  • 56:26 - 56:29
    the device.
    Herald: We have a question from the
  • 56:29 - 56:34
    Internet.
    Q: How do you approach attack surface
  • 56:34 - 56:41
    mapping?
    A: Um, well, reverse engineering, playing
  • 56:41 - 56:48
    around, looking at this message format. In
    this case, it was somewhat obvious what
  • 56:48 - 56:52
    the attack surface was. Right. So figure
    out which keys of this message are
  • 56:52 - 56:58
    being processed in some way. Make a note.
    Decide which one looks most complex. Go
  • 56:58 - 57:03
    for that first. That's what we did.
    Herald: We have a question from microphone
  • 57:03 - 57:07
    number 2, please.
    Q: Hi. How long did you and your colleague
  • 57:07 - 57:15
    research to get the exploit running?
    A: So the vulnerability finding thing
  • 57:07 - 57:15
    was not only me. I think we spent maybe three
    months finding the vulnerability. So I had a
  • 57:22 - 57:27
    rough idea of how I would
    approach this exploit. So I think at the
  • 57:27 - 57:34
    end it took me maybe a week to finish it.
    But I had thought about doing that for
  • 57:34 - 57:39
    like, while looking for
    vulnerabilities during those two to three
  • 57:39 - 57:42
    months.
    Herald: We have another question from
  • 57:42 - 57:49
    microphone number three.
    Q: Um, is there the, uh, threat that the
  • 57:49 - 57:54
    attacked iPhone would itself turn into an
    attacker by the exploit?
  • 57:54 - 57:59
    A: Sure. Yeah, you can do that. I mean,
    you have full control, right? So you have
  • 57:59 - 58:05
    access to the contacts list and you can
    send out iMessages. The question is if
  • 58:05 - 58:09
    it's necessary. Right. I mean, you can
    also send messages yourself, you don't really
  • 58:09 - 58:16
    need the iPhone to send the
    messages. But I think in theory: Yes,
  • 58:16 - 58:19
    that's possible.
    Herald: Do we have more questions from the
  • 58:19 - 58:25
    Internet?
    Q: Does the phone stay compromised after
  • 58:25 - 58:29
    restart?
    A: So there is no persistence exploit
  • 58:29 - 58:34
    here. No. You will need another exploit;
    littlelailo did a talk, I think just an
  • 58:34 - 58:40
    hour ago, about persistence. So you would
    need to chain this with, for
  • 58:40 - 58:45
    example, the exploit that he showed.
    Herald: And if you have questions in the
  • 58:45 - 58:48
    room, please line up behind the
    microphones. Do we have more questions
  • 58:48 - 58:55
    from the Internet.
    Q: Yes. So you've achieved the most novel
  • 58:55 - 59:02
    bug ever to be found in iOS. What's
    the next big thing you'll be looking at?
  • 59:02 - 59:06
    A: Good question. I don't really know
    myself, but I'm going to stay probably
  • 59:06 - 59:13
    around for zero click attack surface reduction
    for a bit more.
  • 59:13 - 59:16
    Herald: Looks like we don't have any brave
    people asking questions in the room. Does
  • 59:16 - 59:21
    the Internet have more courage?
    Q: How long does discovery and
  • 59:21 - 59:27
    exploitation and development take and how
    much does the team work to improve the
  • 59:27 - 59:34
    process and development time?
    A: Okay, so how long does this
  • 59:34 - 59:40
    exploitation process take? That's the
    first question. Yes. Yeah. I mean, this is
  • 59:40 - 59:45
    generally a hard thing to answer. Right.
    There's like years of hacking around and
  • 59:45 - 59:51
    learning how to do this stuff, etc. that
    you have to take into account. But as I
  • 59:51 - 59:54
    said, I had a rough idea how this exploit
    would look like. So then really
  • 59:54 - 60:01
    implementing it was like one or two weeks.
    The initial part of reverse engineering
  • 60:01 - 60:05
    iMessage reverse engineering this
    NSUnarchiver thing. I think this
  • 60:05 - 60:11
    took forever. This took many months and it
    was also very necessary for exploit
  • 60:11 - 60:17
    writing. Right. So a lot of the exploit
    primitives I use, they also abuse the
  • 60:17 - 60:22
    NSKeyedUnarchiver thing.
    Herald: We have time for perhaps two quick
  • 60:22 - 60:27
    questions. Mike number 4, please.
    Q: Super. Uh, I'm not super familiar with
  • 60:27 - 60:33
    iOS virtual memory address space, but you
    showed two heap regions in the
  • 60:33 - 60:37
    picture of it. And I'm wondering why are
    there two heap regions?
  • 60:37 - 60:43
    A: OK, this is only a minor
    detail, but I think there is one region
  • 60:43 - 60:49
    initially like below the shared cache and
    once that is full, it just makes another
  • 60:49 - 60:56
    one above it. So it's really just like if
    the one gets used up, it
  • 60:56 - 60:59
    makes another one. And that's going to be
    like above the shared cache. I think
  • 60:59 - 61:03
    that's the picture you're referring to.
    Follow-up: Yeah, thank you.
  • 61:03 - 61:07
    Herald: And unfortunately, we are out of
    time. So the person at microphone
  • 61:07 - 61:11
    number one, please come up to the stage
    afterwards and perhaps you can have a talk.
  • 61:11 - 61:17
    So please give a warm. I can't say this
    exactly. applause Thanks.
  • 61:17 - 61:23
    applause
  • 61:23 - 61:27
    Postroll music
  • 61:27 - 61:50
    Subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!