
35C3 - Jailbreaking iOS

  • 0:04 - 0:17
    35C3 preroll music
  • 0:17 - 0:22
    Herald-Angel: All right, let's start with
    our next talk in the security track of the
  • 0:22 - 0:27
    Chaos Communication Congress. The talk is
    called jailbreaking iOS from past to
  • 0:27 - 0:35
    present. It's by tihmstar. He spoke at the
    32C3 already and has researched several
  • 0:35 - 0:40
    jailbreaks like the Phoenix or the
    jelbrekTime for the Apple Watch and he's
  • 0:40 - 0:45
    gonna talk about the history of
    jailbreaks. He's going to familiarize you
  • 0:45 - 0:52
    with the terminology of jailbreaking and
    about exploit mitigations and how you can
  • 0:52 - 0:56
    circumvent these mitigations. Please
    welcome him with a huge round of applause.
  • 0:56 - 1:03
    Applause
  • 1:03 - 1:09
    tihmstar: Thank you very much. So hello
    room, I'm tihmstar, and as already said I
  • 1:09 - 1:14
    want to talk about jailbreaking iOS from
    past to present and the topics I'm going
  • 1:14 - 1:20
    to cover "what is jailbreaking?". I will
    give an overview in general. I'm going to
  • 1:20 - 1:26
    introduce you to how jailbreaks started,
    how they got into the phone at first and
  • 1:26 - 1:31
    how all of these progressed. I'll
    introduce you to the terminology which is
  • 1:31 - 1:37
    "tethered", "untethered", "semi-tethered",
    "semi-untethered" jailbreaks. Stuff you
  • 1:37 - 1:41
    probably heard but some of you don't know
    what that means. I'm gonna talk a bit
  • 1:41 - 1:46
    about hardware mitigations which were
    introduced by Apple which is KPP, KTRR and
  • 1:46 - 1:53
    a little bit about PAC. I'm going to talk
    about the general goals of... About the
  • 1:53 - 1:58
    technical goals of jailbreaking and the
    kernel patches and what you want to do
  • 1:58 - 2:03
    with those and brief overview how
    jailbreaking could look like in the future.
  • 2:03 - 2:11
    So who am I? I'm tihmstar. I got
    my first iPod touch with iOS 5.1 and since
  • 2:11 - 2:16
    then I pretty much played with jailbreaks
    and then I got really interested into that
  • 2:16 - 2:19
    and started doing my own research. I
    eventually started doing my own
  • 2:19 - 2:24
    jailbreaks. I kinda started with
    downgrading – so I've been here two years
  • 2:24 - 2:29
    ago with my presentation "iOS Downgrading:
    From past to present". I kept hacking
  • 2:29 - 2:34
    since then. So back then I kind of talked
    about the projects I made and related to
  • 2:34 - 2:39
    downgrading which was tsschecker,
    futurerestore, img4tool, you probably have
  • 2:39 - 2:43
    heard of that. And since then I was
    working on several jailbreaking tools
  • 2:43 - 2:50
    ranging from iOS 8.4.1 to 10.3.3, among
    those 32bit jailbreaks, untethered
  • 2:50 - 2:55
    jailbreaks, remote jailbreaks like
    jailbreak.me and the jailbreak for the
  • 2:55 - 3:02
    Apple Watch. So, what is this jailbreaking
    I am talking about? Basically, the goal is
  • 3:02 - 3:09
    to get control over a device you own. You
    want to escape the sandbox which the apps
  • 3:09 - 3:14
    are put in. You want to elevate the
    privileges to root and eventually to
  • 3:14 - 3:20
    kernel, you want to disable code signing
    because all applications on iOS are code-
  • 3:20 - 3:24
    signed and you cannot run unsigned
    binaries. You pretty much want to disable
  • 3:24 - 3:30
    that to run unsigned binaries. And the
    most popular reason people jailbreak is
  • 3:30 - 3:36
    to install tweaks! And also a lot of
    people install a jailbreak or jailbreak
  • 3:36 - 3:39
    their devices for doing security analysis.
    For example if you want to pentest your
  • 3:39 - 3:45
    application and see how an attacker gets a
    foothold – you want to debug that stuff and
  • 3:45 - 3:51
    you want to have a jailbroken phone for
    that. So what are these tweaks? Tweaks are
  • 3:51 - 3:56
    usually modifications of built-in
    userspace programs, for example one of the
  • 3:56 - 4:00
    programs is springboard. Springboard is
    what you see if you turn on your phone.
  • 4:00 - 4:05
    This is where all the icons are at. And
    usually you can install tweaks to, I don't
  • 4:05 - 4:11
    know, modify the look, the behavior or add
    functionality, just this customization,
  • 4:11 - 4:18
    this is how it started with jailbreaking.
    What is usually bundled when you install a
  • 4:18 - 4:24
    jailbreak is Cydia. So you install dpkg
    and apt which is the Debian package
  • 4:24 - 4:32
    manager and you also get Cydia which is a
    user-friendly graphical user interface for
  • 4:32 - 4:38
    the decentralized or centralized package
    installer system. I'm saying centralized
  • 4:38 - 4:42
    because it is pretty much all in one spot,
    you just open the app and you can get all
  • 4:42 - 4:47
    your tweaks and it's also decentralized
    because you can just add up your own repo,
  • 4:47 - 4:54
    you can make your own repo, you can add
    other repos and you're not kinda tied to
  • 4:54 - 4:58
    one spot where you get the tweaks from,
    unlike the App Store, where you can only download
  • 4:58 - 5:02
    from the App Store. But with Cydia you can
    pretty much download from everywhere.
  • 5:02 - 5:09
    You're probably familiar with Debian and
    it's pretty much the same. So this talk is
  • 5:09 - 5:16
    pretty much structured around this tweet:
    the "Ages of jailbreaking". So as you can
  • 5:16 - 5:20
    see we get the Golden Age, the BootROM,
    the Industrial Age and the Post-
  • 5:20 - 5:25
    Apocalyptic age. And I kind of agree with
    that. So this is why I decided to
  • 5:25 - 5:30
    structure my talk around that and walk you
    through the different ages of
  • 5:30 - 5:35
    jailbreaking. So starting with the first
    iPhone OS jailbreak – then it was actually
  • 5:35 - 5:42
    called iPhone OS not iOS – it was not the
    BootROM yet. So the first was a buffer
  • 5:42 - 5:50
    overflow in the iPhone's libtiff library.
    And this is an image parsing library.
  • 5:50 - 5:55
    It was exploited through Safari and used as
    an entry point to get code execution.
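    A toy sketch of that bug class – not the actual libtiff
    vulnerability, just the general shape of an image parser trusting
    an attacker-supplied length:

        #include <stdint.h>
        #include <string.h>

        /* Toy example of the bug class only, NOT the real libtiff bug:
         * the "image" header claims a payload length and the parser
         * trusts it when copying into a fixed-size stack buffer. */
        struct toy_img {
            uint32_t payload_len;   /* attacker-controlled */
            uint8_t  payload[];     /* attacker-controlled bytes */
        };

        static void parse_toy_image(const struct toy_img *img)
        {
            uint8_t buf[64];
            /* BUG: payload_len is never checked against sizeof(buf),
             * so a large claimed length smashes the stack. */
            memcpy(buf, img->payload, img->payload_len);
            (void)buf;
        }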
  • 5:55 - 6:01
    It was the first time that non-Apple software
    was run on an iPhone and people installed
  • 6:01 - 6:07
    applications like Installer or AppTapp
    which were stores similar to Cydia back
  • 6:07 - 6:12
    then and those were used to install apps
    or games because for the first iPhone OS
  • 6:12 - 6:17
    there was no way to install applications
    anyhow, as the App Store got introduced
  • 6:17 - 6:24
    with iOS 2. So then, going to the Golden
    Age, the attention kind of shifted to the
  • 6:24 - 6:30
    BootROM; people started looking at the
    boot process and they found this device
  • 6:30 - 6:39
    firmware upgrade mode which is a part of
    the BootROM. So the most famous BootROM exploit
  • 6:39 - 6:45
    was limera1n by geohot. It was a bug in
    hardware and it was unpatchable with
  • 6:45 - 6:53
    software. So this bug was used to
    jailbreak devices up to the iPhone 4.
  • 6:53 - 6:57
    There were also several other jailbreaks –
    which didn't rely on that one – but this one,
  • 6:57 - 7:00
    once discovered, you can use it over and
    over again and there's no way to patch
  • 7:00 - 7:07
    that. So this was later patched in a new
    hardware revision which is the iPhone 4s.
  • 7:07 - 7:10
    So with that BootROM bug –
  • 7:10 - 7:16
    This is how kind of tethered jailbreaks
    became a thing. So limera1n exploits a bug
  • 7:16 - 7:24
    in DFU mode which allows you to load
    unsigned software through USB. However
  • 7:24 - 7:30
    when you reboot the device a computer was
    required to re-exploit and again load your
  • 7:30 - 7:36
    unsigned code. And then load the
    bootloaders, load the patched kernel and
  • 7:36 - 7:40
    thus the jailbreak was kind of tethered to
    the computer because whenever you shut
  • 7:40 - 7:46
    down you need to be back at a computer to
    boot your phone up. So historically a
  • 7:46 - 7:52
    tethered jailbroken phone does not boot
    without a computer at all. And the reason
  • 7:52 - 7:58
    for that is because the jailbreaks would
    modify the kernel and the bootloaders on
  • 7:58 - 8:04
    the file system for performance reasons,
    so when you do the actual tether boot you
  • 8:04 - 8:09
    would need to upload a very tiny payload
    via USB which then in turn would load
  • 8:09 - 8:14
    everything else from the file system
    itself. But this results in a broken chain
  • 8:14 - 8:19
    of trust. When the normal boot process
    runs and the bootloader checks the
  • 8:19 - 8:23
    signature of the first-stage bootloader
    that would be invalid so the bootloader
  • 8:23 - 8:29
    would refuse to boot that and it would end
    up in DFU mode so basically a phone won't
  • 8:29 - 8:36
    boot. Sometime around then, the idea for
    semi-tethered jailbreak came up and the
  • 8:36 - 8:40
    idea behind that is very simple: just
    don't break the chain of trust for
  • 8:40 - 8:47
    tethered jailbreaks. So, what you would do
    differently is you do not modify the
  • 8:47 - 8:52
    kernel on the file system, don't touch the
    bootloaders at all and then when you
  • 8:52 - 8:56
    would boot tethered, you would need to
    upload all the bootloaders like the first
  • 8:56 - 9:01
    stage bootloader, then the second stage
    bootloader which is iBoot and then the
  • 9:01 - 9:06
    kernel via USB to boot into jailbroken
    mode. However when you reboot you could
  • 9:06 - 9:11
    boot all those components from the file
    system so you could actually boot your
  • 9:11 - 9:17
    phone into non-jailbroken mode. If you
    don't install any tweaks or modifications
  • 9:17 - 9:22
    which modify critical system components
    because if you tamper with, for example,
  • 9:22 - 9:25
    the signature of the mount binary the
    system obviously cannot boot
  • 9:25 - 9:29
    in non-jailbroken mode.
  • 9:29 - 9:36
    So, this is kind of the Golden age.
    So let's continue with the Industrial age.
  • 9:36 - 9:44
    So with the release of the
    iPhone 4s and iOS 5, Apple fixed the
  • 9:44 - 9:50
    BootROM bug and essentially killed
    limera1n. They also introduced APTickets
  • 9:50 - 9:56
    and nonces to bootloaders, which I'm just
    mentioning because it's kind of a
  • 9:56 - 10:01
    drawback for downgrading: before that you
    could take a phone, update it to the
  • 10:01 - 10:06
    latest firmware, and as long as you had saved your
    SHSH blobs you could just downgrade and
  • 10:06 - 10:09
    then jailbreak again which wasn't a big
    deal but with that they also added
  • 10:09 - 10:14
    downgrade protection so jailbreaking
    became harder. If you wanted to know more
  • 10:14 - 10:20
    about how the boot process works, what
    SHSH blobs are, what APTickets are, you
  • 10:20 - 10:24
    should check out my talk from two years
    ago, I go in-depth on how all of that
  • 10:24 - 10:30
    works. So, I'm skipping that for this
    talk. So the binaries the phone boots are
  • 10:30 - 10:36
    encrypted so the bootloaders are encrypted
    and until recently the kernel used to be
  • 10:36 - 10:42
    encrypted as well. And the key encryption
    key is fused into the devices and it is
  • 10:42 - 10:46
    impossible to get through hardware
    attacks. At least there's no public case
  • 10:46 - 10:53
    where somebody actually recovered
    those keys, so it's probably impossible –
  • 10:53 - 11:01
    nobody has done it yet. So all boot files
    are decrypted at boot by the previous
  • 11:01 - 11:10
    bootloader. And before the iPhone 4s you
    could actually just talk to the hardware
  • 11:10 - 11:15
    AES engine as soon as you got kernel-level
    code execution. But with the iPhone 4s
  • 11:15 - 11:19
    they introduced a feature where before the
    kernel would boot they would shut off the
  • 11:19 - 11:26
    AES engine by hardware, so there is no way
    to decrypt bootloader files so easily
  • 11:26 - 11:33
    anymore, unless you get code execution in
    the bootloader itself. So decrypting
  • 11:33 - 11:39
    bootloaders is a struggle from now on. So
    I think kind of because of that the
  • 11:39 - 11:45
    attention shifted to userland and from then
    on the jailbreaks kind of had to be
  • 11:45 - 11:51
    untethered. So untethered here means that
    if you jailbreak your device, you turn it
  • 11:51 - 11:56
    off, you boot it again, then the device is
    still jailbroken, and this is usually
  • 11:56 - 12:01
    achieved through re-exploitation at some
    point in the boot process. So you can't
  • 12:01 - 12:06
    just patch the kernel on file system
    because that would invalidate signatures,
  • 12:06 - 12:10
    so instead you would, I don't know, add
    some configuration files to some daemons
  • 12:10 - 12:16
    which would trigger bugs and then exploit.
    So jailbreaks then chained many bugs
  • 12:16 - 12:21
    together, sometimes six or more bugs to
    get initial code execution, kernel code
  • 12:21 - 12:28
    execution and persistence. This somewhat
    changed when Apple introduced free
  • 12:28 - 12:34
    developer accounts around the time they
    released iOS 9. So these developer
  • 12:34 - 12:39
    accounts allow everybody who has an Apple
    ID to get a valid signing certificate for
  • 12:39 - 12:45
    seven days for free. So you can actually
    create an Xcode project and run your app
  • 12:45 - 12:50
    on your physical device. Before that that
    was not possible, so the only way to run
  • 12:50 - 12:56
    your own code on your device was to buy a
    paid developer account which is $100 per
  • 12:56 - 13:03
    year if you buy a personal developer
    account. But now you can just get that for
  • 13:03 - 13:08
    free. And after seven days the certificate
    expires, but you can just, for free,
  • 13:08 - 13:12
    request another one and keep doing that.
    Which is totally enough if you develop
  • 13:12 - 13:19
    apps. So this kind of led to semi-
    untethered jailbreaks because initial code
  • 13:19 - 13:22
    execution was not an issue anymore.
    Anybody could just get that free
  • 13:22 - 13:29
    certificate, sign apps and run some kind
    of code that was sandboxed. So jailbreak
  • 13:29 - 13:36
    focus shifted to more powerful kernel bugs
    which were reachable from sandbox. So we
  • 13:36 - 13:41
    had jailbreaks using just one single bug
    or maybe just two bugs and the jailbreaks
  • 13:41 - 13:47
    then were distributed as an IPA, which is
    an installable app people would download,
  • 13:47 - 13:53
    sign themselves, put on the phone and just
    run the app. So semi-untethered means
  • 13:53 - 14:00
    you can reboot into non-jailbroken mode,
    however you can get to jailbroken mode
  • 14:00 - 14:06
    easily by just tapping an app. And over
    the years Apple stepped up its game
  • 14:06 - 14:14
    constantly. So with iOS 5 they introduced
    ASLR, address space layout randomisation,
  • 14:14 - 14:21
    with iOS 6 they added kernel ASLR, with
    the introduction of the iPhone 5s they
  • 14:21 - 14:28
    added 64bit CPUs, which isn't really a
    security mitigation, it just changed a bit
  • 14:28 - 14:38
    how you would exploit. So the real deal
    started to come with iOS 9, where they
  • 14:38 - 14:43
    first introduced Kernel Patch Protection,
    an attempt to make the kernel immutable
  • 14:43 - 14:50
    and not patchable. And they stepped up that
    with the iPhone 7 where they introduced
  • 14:50 - 14:58
    Kernel Text Readonly Region, also known as
    KTRR. So with iOS 11 they removed 32bit
  • 14:58 - 15:03
    libraries, which I think has very little
    to no impact on exploitation; it's mainly
  • 15:03 - 15:10
    in the list because up to that point Cydia
    was compiled as a 32bit binary and that
  • 15:10 - 15:18
    stopped working, that's why that had to be
    recompiled for 64bit, which somebody had
  • 15:18 - 15:26
    to do before you could get a working Cydia
    on 64-bit iOS 11. So with the iPhone Xs
  • 15:26 - 15:31
    which came out just recently they
    introduced Pointer Authentication Codes,
  • 15:31 - 15:35
    and I'm gonna go more in detail into these
    hardware mitigations in the next few
  • 15:35 - 15:42
    slides. So let's start with Kernel Patch
    Protection. So when people say KPP, they
  • 15:42 - 15:48
    usually refer to what Apple calls
    watchtower. So watchtower, as the name
  • 15:48 - 15:54
    suggests, watches over the kernel and
    panics when modifications are detected,
  • 15:54 - 15:59
    and it prevents the kernel from being
    patched. At least that's the idea of it.
  • 15:59 - 16:03
    It doesn't really prevent it because it's
    broken but when they engineered it, it
  • 16:03 - 16:09
    should prevent you from patching the
    kernel. So how does it work? Watchtower is
  • 16:09 - 16:13
    a piece of software which runs in EL3
    which is the ARM exception level 3.
  • 16:13 - 16:19
    So exception levels are kind of privilege
    separations where 3 is the highest and 0
  • 16:19 - 16:25
    is the lowest. And you can kind of trigger
    an exception to call handler code in
  • 16:25 - 16:31
    higher levels. So the idea of Watchtower is
    that a recurring event, which is FPU usage,
  • 16:31 - 16:36
    triggers a Watchtower inspection of the
    kernel, and you cannot really turn it off
  • 16:36 - 16:42
    because you do need the FPU. So if you
    picture how it looks, we have the
  • 16:42 - 16:47
    Watchtower to the left (which totally
    looks like a lighthouse) and the
  • 16:47 - 16:51
    applications to the right. So in the
    middle, in EL1, we have the kernel and
  • 16:51 - 17:01
    recent studies revealed that this is
    exactly how the XNU kernel looks like. So
  • 17:01 - 17:06
    how can we be worse? An event occurs from
    time to time which is from using userland
  • 17:06 - 17:11
    application, for example JavaScript makes
    heavy use of floating points, and the
  • 17:11 - 17:16
    event would then go to the kernel and the
    kernel would then trigger Watchtower as it
  • 17:16 - 17:24
    tries to enable the FPU. Watchtower would
    scan the kernel and then if everything is
  • 17:24 - 17:29
    fine it would transition execution back
    into the kernel which then in turn would
  • 17:29 - 17:36
    transition back into userspace which can
    then use the FPU. However with a modified
  • 17:36 - 17:42
    kernel, when Watchtower scans the kernel
    and detects modification, it would just
  • 17:42 - 17:49
    panic. So the idea is that the kernel is
    forced to call Watchtower because the FPU
  • 17:49 - 17:54
    is blocked otherwise. But the problem at
    the same time is that the kernel is in
  • 17:54 - 18:00
    control before it calls watchtower. And
    this thing was fully defeated by qwerty in
  • 18:00 - 18:10
    yalu102. So how qwerty's KPP bypass works:
    The idea is: you copy the kernel in memory
  • 18:10 - 18:16
    and you modify the copied kernel. Then you
    would modify the page tables to use the
  • 18:16 - 18:24
    patched kernel. And whenever the FPU
    triggers a Watchtower inspection, before
  • 18:24 - 18:30
    actually calling Watchtower you would
    switch back to the unmodified kernel and
  • 18:30 - 18:34
    then let it run, let it check the
    unmodified kernel, and when that returns you
  • 18:34 - 18:40
    would go back to the modified kernel. So
    this is how it looks: we copy the kernel
  • 18:40 - 18:47
    in memory, we patch the modified copy, we
    switch the page tables to actually use the
  • 18:47 - 18:54
    modified copy and when we have the FPU
    event it would just switch the page tables
  • 18:54 - 19:00
    back, forward the call to Watchtower, then
    make Watchtower scan the unmodified
  • 19:00 - 19:08
    kernel and after the scan we would just
    return to the patched kernel. So the
  • 19:08 - 19:14
    problem here is: Time of check – Time of
    Use, the classical TOCTOU. And this works
  • 19:14 - 19:20
    on the iPhone 5s, the iPhone 6 and the
    iPhone 6s and it's not really patchable.
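    A userspace toy model of that time-of-check/time-of-use trick –
    none of the real EL1/EL3 details (page tables, FPU traps) are
    modelled here, only the idea of letting the checker see a pristine
    copy while execution uses a patched one:

        #include <stdio.h>
        #include <string.h>

        static char pristine_kernel[] = "original kernel text";
        static char patched_kernel[sizeof pristine_kernel];

        /* Stand-in for the page tables: whatever "active" points to is
         * what both the "kernel" and the checker see. */
        static char *active = pristine_kernel;

        static void watchtower_check(void)
        {
            if (memcmp(active, pristine_kernel, sizeof pristine_kernel) != 0)
                puts("panic: kernel modification detected");
            else
                puts("watchtower: kernel looks fine");
        }

        static void fpu_event(void)
        {
            char *saved = active;
            active = pristine_kernel;   /* swap back before the check ... */
            watchtower_check();
            active = saved;             /* ... and to the patched copy after */
        }

        int main(void)
        {
            memcpy(patched_kernel, pristine_kernel, sizeof pristine_kernel);
            memcpy(patched_kernel, "patched ", 8);  /* apply "kernel patches" */
            active = patched_kernel;                /* run from the patched copy */
            fpu_event();  /* check passes although the running copy is patched */
            return 0;
        }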
  • 19:20 - 19:27
    However, with the iPhone 7, Apple
    introduced KTRR, which kind of fixes that,
  • 19:27 - 19:35
    and they really managed to make an
    unpatchable kernel. So how does KTRR work?
  • 19:35 - 19:41
    So Kernel Text Readonly Region, which I'm going
    to present as described by Siguza in his
  • 19:41 - 19:49
    blog, adds an extra memory controller
    which is the AMCC which traps all writes to
  • 19:49 - 19:57
    the read-only region. And there are extra CPU
    registers which mark an executable range
  • 19:57 - 20:03
    which are the KTRR registers and they
    obviously mark a subsection of the
  • 20:03 - 20:09
    read-only region, so you have hardware
    enforcement at boot time for read-only
  • 20:09 - 20:15
    memory region and hardware enforcement at
    boot-time for an executable memory region.
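    A tiny conceptual model of those two checks – the addresses are
    invented, and the real lockdown happens via the AMCC and the KTRR
    registers at boot, but it shows how the executable range is a
    subsection of the locked read-only region:

        #include <stdbool.h>
        #include <stdint.h>

        /* Invented example values, for illustration only. */
        static const uint64_t ro_begin   = 0xfffffff007004000; /* AMCC read-only region */
        static const uint64_t ro_end     = 0xfffffff009000000;
        static const uint64_t exec_begin = 0xfffffff007004000; /* KTRR executable range */
        static const uint64_t exec_end   = 0xfffffff008000000; /* subsection of the RO region */

        static bool amcc_allows_write(uint64_t addr)
        {
            /* Writes inside the locked region are trapped by the memory controller. */
            return addr < ro_begin || addr >= ro_end;
        }

        static bool cpu_allows_exec(uint64_t addr)
        {
            /* The CPU refuses to execute kernel code outside the marked range. */
            return addr >= exec_begin && addr < exec_end;
        }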
  • 20:16 - 20:21
    So this is the CPU. This is the memory at the
    bottom. You would set the read-only region
  • 20:21 - 20:26
    at boot and since that's enforced by the
    hardware memory controller everything
  • 20:26 - 20:32
    inside that region is not writable and
    everything outside that region is
  • 20:32 - 20:41
    writable. And the CPU got KTRR registers
    which mark begin and end. So the
  • 20:41 - 20:47
    executable region is a subsection of the
    read-only region. Everything outside there
  • 20:47 - 20:52
    cannot be executed by the CPU. Everything
    inside the read-only region cannot be
  • 20:52 - 20:59
    modified. And this has not been truly
    bypassed yet. There's been a bypass but
  • 20:59 - 21:04
    that actually targeted how that thing gets
    set up. But that's fixed now, and it's
  • 21:04 - 21:11
    probably setting everything up, and so far
    it hasn't been bypassed. So jailbreaks are
  • 21:11 - 21:16
    still around. So what are they doing?
    Well, they just work around kernel patches
  • 21:16 - 21:23
    and this is when KPPless jailbreaks evolved.
    Which means, they just don't patch
  • 21:23 - 21:28
    the kernel. But before we dive into that,
    let's take a look what previous jailbreaks
  • 21:28 - 21:35
    actually did patch in the kernel. So the
    general goals are to disable code signing
  • 21:35 - 21:41
    to disable the sandbox, to make the root
    file system writable and to somehow make
  • 21:41 - 21:47
    tweaks work, which involves making Mobile
    Substrate or libsubstitute work, which is
  • 21:47 - 21:54
    the library for hooking. And I was about
    to make a list of kernel patches which you
  • 21:54 - 22:00
    could simply apply, however, the
    techniques and patches vary across
  • 22:00 - 22:04
    individual jailbreaks so much that I
    couldn't even come up with the list of
  • 22:04 - 22:10
    kernel patches among the different
    jailbreaks I worked on. So there's no
  • 22:10 - 22:13
    general set of patches, some prefer to do
    it that way, some prefer to do it that
  • 22:13 - 22:19
    way. So instead of doing a kind of full
    list, I'll just show you what the h3lix
  • 22:19 - 22:24
    jailbreak does patch. So the h3lix
    jailbreak first patches the
  • 22:24 - 22:30
    i_can_has_debugger, which is a boot arg.
    It's a variable in the kernel and if you
  • 22:30 - 22:36
    set that to true that would relax the
    sandbox. So to relax the sandbox or to
  • 22:36 - 22:42
    disable code signing usually involves
    multiple steps. Also since iOS 7 you need
  • 22:42 - 22:48
    to patch mount because it's actually
    hardcoded that the root filesystem cannot
  • 22:48 - 22:55
    be mounted as read-write. Since iOS 10.3,
    it is also hardcoded that you cannot
  • 22:55 - 22:59
    mount the root filesystem without the
    nosuid flag, so you probably want to patch
  • 22:59 - 23:05
    that out as well. And then if you patch
    both these you can remount the root
  • 23:05 - 23:09
    filesystem as read-and-write, however you
    cannot actually write to the files on the
  • 23:09 - 23:14
    root filesystem unless you patch the
    Lightweight Volume Manager, which you also only
  • 23:14 - 23:21
    need to do in iOS 9 up to iOS 10.3. Later
    when they switched to APFS you don't
  • 23:21 - 23:28
    actually need that anymore. Also there's a
    variable called proc_enforce. You set that
  • 23:28 - 23:34
    to 0 to disable code signing which is one
    of the things you need to do to disable
  • 23:34 - 23:43
    code signing. Another flag is
    cs_enforcement_disable, set that to 1 to
  • 23:43 - 23:51
    disable code signing. So AMFI, which is
    Apple Mobile File Integrity, is a kext which
  • 23:51 - 24:00
    handles the code signing checks. In that
    kext it imports the memcmp function.
  • 24:00 - 24:05
    So there's a stub and one of the patches
    is to patch that stub to always return 0
  • 24:05 - 24:10
    by some simple gadget. So what this does
    is, whenever it compares something in a
  • 24:10 - 24:16
    code, it would just always say
    that the compare succeeds and is equal.
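    A sketch of the effect of that patch – the real jailbreak points
    AMFI's import stub at a tiny "return 0" gadget; this C function
    just models the resulting behaviour:

        #include <stddef.h>

        /* Behaviour of the patched comparison stub: every comparison
         * "succeeds", so code-signing hash checks always appear to match
         * (0 means "equal" for memcmp-style comparisons). */
        static int amfi_memcmp_always_equal(const void *a, const void *b, size_t n)
        {
            (void)a; (void)b; (void)n;
            return 0;
        }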
  • 24:16 - 24:22
    I'm not entirely sure exactly what it does –
    this patch dates back to Yalu –
  • 24:22 - 24:24
    but just applying that patch helps
  • 24:24 - 24:30
    kill code signing, so that's why it's
    in there. Another thing h3lix does is, it
  • 24:30 - 24:36
    adds the get-task-allow entitlement to
    every process and this is for allowing
  • 24:36 - 24:41
    read/write/executable mappings and this
    is what you want for Mobile Substrate
  • 24:41 - 24:46
    tweaks. So initially this entitlement
    is used for debugging because
  • 24:46 - 24:53
    there you also need to be able to modify
    code at runtime for setting breakpoints
  • 24:53 - 25:00
    while we use it for getting tweaks to
    work. Since iOS 10.3, h3lix also
  • 25:00 - 25:07
    patches the
    label_update_execve function. So the idea
  • 25:07 - 25:12
    of that patch was to fix the "process-exec
    denied while updating label" error message
  • 25:12 - 25:17
    in Cydia and several other processes. Well
    that seems to completely nuke the sandbox
  • 25:17 - 25:23
    and also break sandbox containers so this
    is also the reason why if you're
  • 25:23 - 25:28
    jailbreaking with h3lix apps would save
    their data in the global directory instead
  • 25:28 - 25:33
    of their sandbox containers. And you also
    kill a bunch of checks in
  • 25:33 - 25:41
    mac_policy_ops to relax
    the sandbox. So if you want to check out
  • 25:41 - 25:45
    how that works yourself, unfortunately
    h3lix itself is not open-source and I've
  • 25:45 - 25:50
    no plans of open-sourcing that. But
    there's two very very closely related
  • 25:50 - 25:56
    projects which are open-source which is
    doubleH3lix – this is pretty much exactly
  • 25:56 - 26:04
    the same but for 64 bit devices which
    does include the KPP bypass, so it also
  • 26:04 - 26:11
    patches the kernel – and jelbrekTime,
    which is the watchOS jailbreak. But h3lix
  • 26:11 - 26:18
    is for iOS 10 and the watchOS jailbreak is
    kind of the iOS 11 equivalent but it
  • 26:18 - 26:22
    shares like most of the code. So most of
    the patch code is the same if you want to
  • 26:22 - 26:29
    check that out. Check these out. So,
    KPPless jailbreaks. So the idea is, don't
  • 26:29 - 26:35
    patch the kernel code but instead patch
    the data. So as an example, let's go for
  • 26:35 - 26:39
    remounting the root file system. We know we
    have hardcoded checks which forbid us to
  • 26:39 - 26:45
    mount the root file system read/write. But
    what we can do is in the kernel there's
  • 26:45 - 26:49
    this structure representing the root file
    system and we can patch that structure
  • 26:49 - 26:54
    removing the flag saying that this
    structure represents the root file system.
  • 26:54 - 27:00
    And we simply remove that and then we can
    call remount on the root file system and
  • 27:00 - 27:06
    then we put the flag back in. So we kind
    of bypass the hardcoded check.
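    A rough sketch of that remount trick, assuming you already have
    kernel read/write primitives – kread64/kwrite64 are hypothetical
    helpers provided by the exploit, and the flag value, offsets, device
    node and filesystem type all vary by iOS version:

        #include <stdint.h>
        #include <sys/mount.h>

        extern uint64_t kread64(uint64_t kaddr);              /* from the exploit */
        extern void     kwrite64(uint64_t kaddr, uint64_t v); /* from the exploit */

        /* Sketch: clear the "this is the root filesystem" flag on the
         * in-kernel mount structure, remount read/write, restore it. */
        static void remount_rootfs_rw(uint64_t rootfs_mnt_flag_addr)
        {
            const uint64_t ROOTFS_FLAG = 0x4000;  /* placeholder for MNT_ROOTFS */

            uint64_t flags = kread64(rootfs_mnt_flag_addr);
            kwrite64(rootfs_mnt_flag_addr, flags & ~ROOTFS_FLAG);

            char *fspec = "/dev/disk0s1s1";       /* example root device node */
            mount("hfs", "/", MNT_UPDATE, &fspec);

            kwrite64(rootfs_mnt_flag_addr, flags);  /* put the flag back in */
        }
    For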
  • 27:06 - 27:13
    disabling code signing and disabling
    sandbox there are several approaches.
  • 27:13 - 27:17
    In the kernel there's a trust cache so
    usually amfi handles the code signing.
  • 27:17 - 27:21
    The daemon in userspace handles the code
    signing requests. But the daemon itself
  • 27:21 - 27:25
    also needs to be code-signed. So you have
    the chicken and egg problem. That's why in
  • 27:25 - 27:31
    the kernel there is a list of hashes of
    binaries which are allowed to execute. And
  • 27:31 - 27:35
    this thing is actually writable because
    when you mount the developer disk image it
  • 27:35 - 27:41
    actually adds some debugging things to it
    so you can simply inject your own hash
  • 27:41 - 27:47
    into the trust cache, making the binary
    trusted.
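    Schematically – with an invented struct layout, since the real
    dynamic trust cache ("trust chain") layout and the kernel symbols
    differ between iOS versions – injecting a hash amounts to linking a
    new entry containing the binary's CDHash into that in-kernel list,
    using the same hypothetical kernel read/write primitives as above:

        #include <stdint.h>
        #include <string.h>

        extern uint64_t kread64(uint64_t kaddr);
        extern void     kwrite64(uint64_t kaddr, uint64_t v);
        extern void     kwrite_buf(uint64_t kaddr, const void *buf, size_t len);
        extern uint64_t kalloc(size_t size);

        /* Invented layout for illustration only. */
        struct fake_trust_entry {
            uint64_t next;      /* next chunk in the chain */
            uint8_t  uuid[16];
            uint32_t count;     /* number of 20-byte CDHashes that follow */
            uint8_t  hash[20];  /* CDHash of the binary we want trusted */
        };

        static void inject_cdhash(uint64_t trust_chain_head, const uint8_t cdhash[20])
        {
            struct fake_trust_entry e;
            memset(&e, 0, sizeof e);
            e.next  = kread64(trust_chain_head);  /* link in front of the chain */
            e.count = 1;
            memcpy(e.hash, cdhash, 20);

            uint64_t kmem = kalloc(sizeof e);
            kwrite_buf(kmem, &e, sizeof e);
            kwrite64(trust_chain_head, kmem);     /* head now points at our entry */
        }
    Another approach taken by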
  • 27:47 - 27:52
    jailbreakd and the latest Electra
    jailbreak is to have a process, in this
  • 27:52 - 27:58
    case jailbreakd, which would patch the
    processes on creation, so when you spawn a
  • 27:58 - 28:04
    process that thing would immediately stop
    the process, go into the kernel, look up
  • 28:04 - 28:11
    the structure and remove the flags
    saying "kill this process when the code
  • 28:11 - 28:16
    signature becomes invalid". And it would
    also add the
  • 28:16 - 28:21
    get-task-allow entitlement. And then after
    it's done that it would resume the process
  • 28:21 - 28:26
    and then the process won't get killed any
    more because it's kind of already trusted.
  • 28:26 - 28:33
    And the third approach taken or demoed by
    bazad was to take over amfid and userspace
  • 28:33 - 28:41
    completely. So if you can get a Mac port
    to launchd or to amfid you can impersonate
  • 28:41 - 28:47
    that and whenever the kernel asks and
    feels that it's trusted you would reply
  • 28:47 - 28:51
    "Okay yeah that's trusted that's fine
    you can run it" so that way you don't need
  • 28:51 - 29:00
    to go for the kernel at all. So future
    jailbreaks. Kernel patches are not really
  • 29:00 - 29:05
    possible anymore and they're not even
    required. Because we can still patch the
  • 29:05 - 29:13
    kernel data or not go for the kernel at
    all. But we're still not done yet, we
  • 29:13 - 29:22
    still didn't go for Post-Apocalyptic
    or short PAC. Well actually
  • 29:22 - 29:25
    PAC stands for pointer authentication
    codes but you get the joke.
  • 29:25 - 29:31
    So pointer authentication codes were
    introduced with the iPhone Xs and if we
  • 29:31 - 29:37
    quote Qualcomm "This is a stronger version
    of stack protection". And pointer
  • 29:37 - 29:42
    authentication codes are similar to
    message authentication codes but for
  • 29:42 - 29:47
    pointers, if you are familiar with that.
    And the idea of that is to protect data in
  • 29:47 - 29:55
    memory in relation to context with a
    secret key. So the data in memory could be
  • 29:55 - 30:01
    the return address and the context could be
    the stack pointer or data in memory could
  • 30:01 - 30:08
    be a function pointer and the context
    could be a vtable. So if we take a look
  • 30:08 - 30:15
    at how PAC is implemented. On the left you
    can see the function entry and the function
  • 30:15 - 30:20
    prologue and function epilogue without PAC
    and with PAC the only thing that would be
  • 30:20 - 30:27
    changed is when you enter a function
    before actually doing anything inside it,
  • 30:27 - 30:32
    you would normally store the return address
    on the stack but when doing that you would
  • 30:32 - 30:39
    first sign the pointer with the
    context, kinda create the
  • 30:39 - 30:44
    signature and store it inside the pointer
    and then put it on the stack. And then
  • 30:44 - 30:50
    when you leave the function you would just
    take back the pointer, again calculate the
  • 30:50 - 30:56
    signature and see if both signatures
    match, and if they do, then just return
  • 30:56 - 31:04
    and if the signature is invalid it would
    just throw a hardware fault.
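    In compiler terms, roughly – a sketch only, since the exact
    instructions, key (A or B) and flags like -mbranch-protection=pac-ret
    or Apple's arm64e ABI vary – a PAC-protected function only differs
    in its prologue and epilogue:

        /* Any ordinary C function that spills its return address: */
        long call_and_add_one(long (*fn)(long), long x)
        {
            return fn(x) + 1;
        }

        /*
         * Without PAC (arm64), prologue/epilogue look roughly like:
         *     stp x29, x30, [sp, #-16]!   ; save frame ptr + return address
         *     ...
         *     ldp x29, x30, [sp], #16     ; restore them
         *     ret                         ; return via x30
         *
         * With PAC, only entry and exit change: the return address in
         * x30 is signed against the current stack pointer before it is
         * spilled, and authenticated before the return:
         *     paciasp                     ; sign x30 with SP as context
         *     stp x29, x30, [sp, #-16]!
         *     ...
         *     ldp x29, x30, [sp], #16
         *     retaa                       ; authenticate x30, then return
         * An invalid signature makes the authentication fault instead
         * of returning to an attacker-controlled address.
         */
    So this is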
  • 31:04 - 31:11
    how it looks for 64-bit pointers. You
    don't really use all of the available bits.
  • 31:11 - 31:17
    So usually you use 48 bits for
    virtual memory which is more than enough.
  • 31:17 - 31:22
    If you use memory tagging
    you have seven bits left for putting in
  • 31:22 - 31:29
    the signature or if you do not use memory
    tagging you can use up to 15 bits for the
  • 31:29 - 31:40
    pointer authentication code. So the basic
    idea of PAC is to kill ROP-like code reuse
  • 31:40 - 31:46
    attacks. You cannot simply smash the stack
    and create a ROP chain because every
  • 31:46 - 31:54
    return would have an instruction verifying
    the signature of the return address and that
  • 31:54 - 32:01
    means you would need to sign everything,
    every single one of these pointers and since
  • 32:01 - 32:07
    you don't know the key you can't do that
    in advance. So you cannot modify a return
  • 32:07 - 32:12
    address and you cannot swap two signed
    values on the stack unless the stack
  • 32:12 - 32:23
    pointer is the same for both. Can we
    bypass it? Maybe. I don't know. But we can
  • 32:23 - 32:29
    take a look at how that thing is
    implemented. So if we take a look at the
  • 32:29 - 32:36
    ARM slides you can see that PAC is
    basically derived from a pointer and a
  • 32:36 - 32:42
    64-bit context value and the key and we
    put all of that in the algorithm P. And
  • 32:42 - 32:48
    that gives us the PAC which we store in
    the unused bits. So the algorithm P can
  • 32:48 - 32:56
    either be QARMA or it can be something
    completely custom. And the instructions,
  • 32:56 - 33:03
    the ARM instructions, kind of hide the
    implementation details. So if you would go
  • 33:03 - 33:11
    for attacking PAC, there are two kinds of
    attack strategies. We can either try and
  • 33:11 - 33:16
    go straight for the cryptographic
    primitive like take a look what cipher it
  • 33:16 - 33:22
    is or how that cipher is implemented.
    Maybe it's weak or we can go and attack
  • 33:22 - 33:29
    the implementation. So if we go and attack
    the implementation we could look for
  • 33:29 - 33:35
    signing primitives, which could be like
    small gadgets we could jump to somehow,
  • 33:35 - 33:42
    somehow execute to sign a value which
    could be either an arbitrary context
  • 33:42 - 33:50
    signing gadget or maybe a fixed context
    signing gadget. We could also look for
  • 33:50 - 33:57
    unauthenticated code, for example I
    imagine the code which sets up PAC itself
  • 33:57 - 34:04
    is probably not protected by PAC because
    you can't sign the pointer if the key is
  • 34:04 - 34:09
    not set up yet. Maybe that code is still
    accessible. We could look for something
  • 34:09 - 34:18
    like that. We could also try to replace
    pointers which share the same context.
  • 34:18 - 34:24
    It's probably not feasible for return
    values on the stack, but maybe it's
  • 34:24 - 34:30
    feasible for swapping pointers in the
    vtable. Or maybe you come up with your own
  • 34:30 - 34:37
    clever idea how to bypass that. These are
    just like some ideas. So I want to make a
  • 34:37 - 34:45
    point here, that in my opinion it doesn't
    make much sense to try to attack the
  • 34:45 - 34:52
    underlying cryptography on PAC, so I think
    that if we go for attacking PAC it makes
  • 34:52 - 34:56
    much more sense to look for implementation
    attacks and not attacking the cryptography
  • 34:56 - 35:04
    and the next few slides are just there to
    explain why I think that. So if we take a
  • 35:04 - 35:10
    look at QARMA which was proposed by ARM as
    being one of the possible ways of
  • 35:10 - 35:17
    implementing PAC. QARMA is a
    tweakable block cipher, so it takes an
  • 35:17 - 35:23
    input, a tweak and gives you an output.
    Which kind of fits perfectly for what we
  • 35:23 - 35:30
    want. And then I started looking at QARMA
    and came up with ideas on how are you
  • 35:30 - 35:36
    could maybe attack that cipher. At some
    point I realized that practical crypto
  • 35:36 - 35:43
    attacks on QARMA – if there are ever any in
    the future – will probably be, that's what I
  • 35:43 - 35:51
    think, completely irrelevant to PAC's
    security. So why is that? If we define –
  • 35:51 - 35:55
    So just so you know, the next few slides
    I'm going to bore you with some math but
  • 35:55 - 36:04
    it's not too complex. So if we define PAC
    as a function which takes a 128 bit input
  • 36:04 - 36:12
    and a 128-bit key and maps it to a 15-bit
    output. Or we can more realistically
  • 36:12 - 36:18
    define it as a function which takes a
    96-bit input with a 128-bit key, because we
  • 36:18 - 36:24
    have a 48-bit pointer because the other
    bits we can't use because that's where we
  • 36:24 - 36:30
    store the signature and we're most likely
    using the stack pointer as a context so
  • 36:30 - 36:39
    that one will also only use 48 bits.
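    Written out in that framing (the 15-bit tag assumes no memory
    tagging):

        \[
          \mathrm{PAC}\colon \{0,1\}^{64+64} \times \{0,1\}^{128}
            \longrightarrow \{0,1\}^{15}
          \qquad \text{(pointer and context, key)}
        \]
        or, more realistically, with only 48 usable bits of pointer and context:
        \[
          \mathrm{PAC}\colon \{0,1\}^{48+48} \times \{0,1\}^{128}
            \longrightarrow \{0,1\}^{15}
        \]
    Then we have PAC as a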
  • 36:39 - 36:44
    construct, and then we define an attacker
    with the following capabilities. The attacker
  • 36:44 - 36:51
    is allowed to observe some pointer and
    signature pairs and I assume that you can
  • 36:51 - 36:55
    get that through some info leaks, for
    example you have some bug in the code
  • 36:55 - 37:01
    which lets you dump a portion of the stack
    with a bunch of signed pointers.
  • 37:01 - 37:07
    This is why you can observe some, not all,
    but you can see some and I would also
  • 37:07 - 37:13
    allow the attacker to be able to
    slightly modify the context and what I
  • 37:13 - 37:20
    mean by that is I imagine a scenario where
    the attacker could maybe shift the stack,
  • 37:20 - 37:25
    maybe through more nested function calls
    before executing the leak which will give
  • 37:25 - 37:32
    you actually two signatures for the same
    pointer but with a different context.
  • 37:32 - 37:39
    Maybe that's somewhat helpful. But still
    we realize that the attacker, the
  • 37:39 - 37:46
    cryptographic attacker, is super weak so
    the only other cryptographic problem there
  • 37:46 - 37:53
    could be is collisions. And those of
    you who have seen my last talk probably
  • 37:53 - 38:02
    know I love collisions. So we have 48-bit
    pointer, 48-bit context and 128-bit key.
  • 38:02 - 38:09
    We sum that up and we divide that by the
    15 bits of output we get from PAC, which
  • 38:09 - 38:17
    gives us 2 to the power of 209 possible
    collisions because we map so many bits to
  • 38:17 - 38:25
    so little bits. But even if we reduce the
    pointers because practically probably less
  • 38:25 - 38:33
    than 34 bits of a pointer are really used,
    we still get 2 to the power 181
  • 38:33 - 38:38
    collisions, which is a lot of collisions.
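    Spelling that counting argument out:

        \[
          \frac{2^{48+48+128}}{2^{15}} = 2^{224-15} = 2^{209},
          \qquad
          \frac{2^{34+34+128}}{2^{15}} = 2^{196-15} = 2^{181}.
        \]
    But the bad thing here is that random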
  • 38:38 - 38:45
    collisions are not very useful to us
    unless we can predict them somehow. So
  • 38:45 - 38:51
    let's take a look at how a cryptographically
    secure MAC is defined. So a MAC is defined
  • 38:51 - 38:55
    as follows: let Π be a MAC with the
    following components and those are
  • 38:55 - 39:01
    basically Gen(), Mac() and Vrfy(). So
    Gen() just somehow generates a key, it's
  • 39:01 - 39:06
    only here for the sake of mathematical
    completeness. Just assume we generate the
  • 39:06 - 39:14
    key by randomly choosing n bits or however
    many bits the key needs. And Mac() is
  • 39:14 - 39:22
    just a function where you put in an n-bit
    message called m and it gives us a
  • 39:22 - 39:26
    signature t. And I'm going to say
    signature but in reality I mean a message
  • 39:26 - 39:31
    authentication code. And the third
    function is Vrfy() and you give it a
  • 39:31 - 39:35
    message and a signature and that just
    returns true if that signature is valid
  • 39:35 - 39:42
    for the message, or false if it's not.
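    In the usual notation, that is a scheme

        \[
          \Pi = (\mathrm{Gen}, \mathrm{Mac}, \mathrm{Vrfy}), \qquad
          k \leftarrow \mathrm{Gen}(1^n), \quad
          t := \mathrm{Mac}_k(m), \quad
          \mathrm{Vrfy}_k(m, t) \in \{0, 1\},
        \]
        with the correctness requirement
        \(\mathrm{Vrfy}_k(m, \mathrm{Mac}_k(m)) = 1\) for every key \(k\) and message \(m\).
    And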
    when cryptographers prove that something
  • 39:42 - 39:47
    is secure they like to play games. So I'm
    gonna show you my favorite game, which
  • 39:47 - 39:52
    is the MAC-forge game. So the game is pretty
    simple: to the left you have the game
  • 39:52 - 39:59
    master, who is playing MAC-forge, and
    to the right the attacker. So the game
  • 39:59 - 40:04
    starts when the Mac-forge game master
    informs the attacker how many bits we are
  • 40:04 - 40:09
    playing with. So this first 1 to the
    power of n basically means: hey, we're
  • 40:09 - 40:14
    playing MAC-forge with, I don't know,
    64-bit messages so the attacker knows the
  • 40:14 - 40:22
    size. Then the game master just generates
    the key and then the attacker can choose up to
  • 40:22 - 40:30
    q messages of n-bit length and send them
    over to the game master and the game
  • 40:30 - 40:36
    master will generate signatures and send
    them back. So then the attacker can
  • 40:36 - 40:42
    observe all the messages he generated and
    all the matching signatures. So what the
  • 40:42 - 40:47
    attacker needs to do then is to choose
    another message which he did not send over
  • 40:47 - 40:56
    yet and somehow come up with a valid
    signature and if he can manage to do that
  • 40:56 - 41:03
    he sends it over and if that's actually a
    valid signature for the message then he
  • 41:03 - 41:09
    wins the game, otherwise he loses the
    game. So we say a MAC is secure if the
  • 41:09 - 41:16
    probability that an attacker can somehow
    win this is negligible. So I'm gonna spare
  • 41:16 - 41:20
    you the mathematical definition of what
    negligible means but like just guessing or
  • 41:20 - 41:28
    trying means that it's still secure if
    that's the best attack. So as you can see,
  • 41:28 - 41:37
    a MAC which is secure needs to withstand
    this. But for our PAC attacker we do not
  • 41:37 - 41:43
    even have this oracle. So our attacker for
    PAC is even weaker than that. So why do we
  • 41:43 - 41:49
    not have this oracle? Well, simple: if we
    allow the attacker to sign arbitrary
  • 41:49 - 41:56
    messages the attacker wouldn't even need
    to try to somehow get the key or forge a
  • 41:56 - 42:01
    message, because then he could just send
    over all the messages – all the pointers he
  • 42:01 - 42:05
    wants to sign – get back signed pointers, and
    he wouldn't need to bother about breaking
  • 42:05 - 42:11
    the crypto at all. So basically the point
    I'm trying to make here is that the PAC
  • 42:11 - 42:19
    attacker is weaker than a MAC attacker. So
    every secure MAC we know is also a secure
  • 42:19 - 42:29
    PAC, but even then an insecure MAC might
    still be sufficiently secure for PAC so
  • 42:29 - 42:34
    secure MACs have been around for a while
    and thus in my opinion, I think if
  • 42:34 - 42:41
    somebody, who knows what he's doing,
    designs a PAC algorithm today it will
  • 42:41 - 42:49
    likely be secure. So instead of going for
    the crypto I think we should rather go for
  • 42:49 - 42:54
    implementation attacks instead because
    those will be around forever. And by that
  • 42:54 - 43:00
    I mean well you can either see how the
    crypto itself is implemented, what I mean
  • 43:00 - 43:07
    especially by that is you could see how
    the PAC is used in the actual code. Maybe
  • 43:07 - 43:12
    you can find signing oracles, maybe you
    can find unauthenticated code. I think
  • 43:12 - 43:20
    this is where we need to go if wanna
    bypass PAC somehow. So just to recap where
  • 43:20 - 43:30
    we're coming from. Future iPhone hacks are
    probably not gonna try to bypass KTRR. I
  • 43:30 - 43:35
    think they will not try to patch kernel
    code because we can achieve pretty much
  • 43:35 - 43:41
    all the things we want to achieve for end
    user jailbreak without having to patch the
  • 43:41 - 43:49
    kernel so far. And I think people are
    going to struggle a bit. At least a bit
  • 43:49 - 43:57
    when exploiting with PAC, because that kind
    of thing will either make some bugs unexploitable
  • 43:57 - 44:05
    or really really hard to exploit. Also
    maybe we're gonna avoid the kernel entirely,
  • 44:05 - 44:10
    as it has been demoed that userland-only
    jailbreaks are possible. Maybe we're going
  • 44:10 - 44:16
    to reevaluate what the low-hanging fruits
    are. Maybe just go back to iBoot or look
  • 44:16 - 44:22
    for what other thing is interesting. So,
    that was about it, thank you very much for
  • 44:22 - 44:34
    your attention.
    Applause
  • 44:34 - 44:39
    Herald-Angel: Thank you, sir. If you would
    like to ask a question please line up
  • 44:39 - 44:49
    on the microphones in the room. We do not
    have a question from the Internet.
  • 44:49 - 44:53
    One question over there, yes please.
    Question: Hi. I would be
  • 44:53 - 44:58
    interested in what your comment is on the
    statement from Zarek that basically
  • 44:58 - 45:02
    jailbreaking is not a thing anymore
    because you're breaking so many security
  • 45:02 - 45:08
    features that it makes the phone basically
    more insecure than the former reasons for
  • 45:08 - 45:16
    doing a jailbreak allow for.
    tihmstar: Well, jailbreaking -- I don't
  • 45:16 - 45:22
    think jailbreaking itself nowadays makes a
    phone really insecure. So of course if you
  • 45:22 - 45:26
    patch the kernel and disable all of the
    security features, that will be less
  • 45:26 - 45:31
    secure. But if you take a look what we
    have here with the unpatchable kernel I
  • 45:31 - 45:36
    think the main downside of being
    jailbroken is the fact that you cannot go
  • 45:36 - 45:42
    to the latest software version because you
    want the bugs to be in there to have the
  • 45:42 - 45:51
    jailbreak. So I don't really think if you
    have like a KTRR device the jailbreak
  • 45:51 - 45:56
    itself makes it less secure. Just the fact
    that you are not on the latest firmware is
  • 45:56 - 46:01
    the insecure part of it.
    Herald: Alright, thank you.
  • 46:01 - 46:06
    Microphone number two, your question.
    Mic #2: Hi good talk. Could you go back to
  • 46:06 - 46:14
    the capabilities of the adversary please?
    Yeah. So you said you can do basically two
  • 46:14 - 46:18
    things right. This one, yes. Yeah you can
    observe some pointers and some signature
  • 46:18 - 46:24
    pairs. But why is this not an oracle?
    tihmstar: Because you cannot choose...
  • 46:24 - 46:26
    Mic #2: Your message yourself.
    tihmstar: ...your message yourself.
  • 46:26 - 46:31
    Mic #2: And you have also an oracle that
    says if the signature is valid. For a
  • 46:31 - 46:34
    chosen message.
    tihmstar: Well yeah but this is if you
  • 46:34 - 46:39
    take a look at the game and this game for
    a secure MAC the attacker can choose up to
  • 46:39 - 46:45
    q messages sending over... like he can do
    whatever he wants with those messages and
  • 46:45 - 46:52
    get a signature, while the PAC attacker
    can only see a very limited amount of
  • 46:52 - 46:59
    messages and their matching signature and
    he has little to no influence on these
  • 46:59 - 47:05
    messages.
    Mic #2: Okay. So it's a bit weaker.
  • 47:05 - 47:08
    tihmstar: So yeah that's the point. Just
    that it's weaker.
  • 47:08 - 47:11
    Mic #2: Thanks.
    Herald-Angel: Do we have a question from
  • 47:11 - 47:24
    the internet? No. OK. Yes please. All
    right then I don't see anyone else being
  • 47:24 - 47:31
    lined up and... please give a lot of
    applause for tihmstar for his awesome
  • 47:31 - 47:36
    talk!
    Applause
  • 47:36 - 47:46
    postroll music
  • 47:46 - 47:58
    subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!