-
35C3 preroll music
-
Herald: All right, let's start with
our next talk in the security track of the
-
Chaos Communication Congress. The talk is
called jailbreaking iOS from past to
-
present, given by tihmstar. He already spoke at
32C3 and has researched several
-
jailbreaks like the Phoenix or the
jelbrekTime for the Apple Watch and he's
-
gonna talk about the history of
jailbreaks. He's going to familiarize you
-
with the terminology of jailbreaking and
about exploit mitigations and how you can
-
circumvent these mitigations. Please
welcome him with a huge round of applause.
-
Applause
-
tihmstar: Thank you very much. So hello
room, I'm tihmstar, and as already said I
-
want to talk about jailbreaking iOS from
past to present and the topics I'm going
-
to cover "what is jailbreaking?". I will
give an overview in general. I'm going to
-
introduce you to how jailbreaks started,
how they got into the phone at first and
-
how all of these progressed. I'll
introduce you to the terminology which is
-
"tethered", "untethered", "semi-tethered",
"semi-untethered" jailbreaks. Stuff you
-
probably heard but some of you don't know
what that means. I'm gonna talk a bit
-
about hardware mitigations which were
introduced by Apple which is KPP, KTRR and
-
a little bit about PAC. I'm going to talk
about the general goals of... About the
-
technical goals of jailbreaking and the
kernel patches and what you want to do
-
with those, and give a brief overview of how
jailbreaking could look in the future.
-
So who am I? I'm tihmstar. I got
my first iPod touch with iOS 5.1 and since
-
then I pretty much played with jailbreaks
and then I got really interested into that
-
and started doing my own research. I
eventually started doing my own
-
jailbreaks. I kinda started with
downgrading – so I've been here two years
-
ago with my presentation "iOS Downgrading:
From past to present". I kept hacking
-
since then. So back then I kind of talked
about the projects I made and related to
-
downgrading which was tsschecker,
futurerestore, img4tool, you probably have
-
heard of that. And since then I was
working on several jailbreaking tools
-
ranging from iOS 8.4.1 to 10.3.3, among
those 32bit jailbreaks, untethered
-
jailbreaks, remote jailbreaks like
jailbreak.me and the jailbreak for the
-
Apple Watch. So, what is this jailbreaking
I am talking about? Basically, the goal is
-
to get control over a device you own. You
want to escape the sandbox which the apps
-
are put in. You want to elevate the
privileges to root and eventually to
-
kernel, you want to disable code signing
because all applications on iOS are code-
-
signed and you cannot run unsigned
binaries. You pretty much want to disable
-
that to run unsigned binaries. And the
most popular reason why people jailbreak is
-
to install tweaks! And also a lot of
people install a jailbreak or jailbreak
-
their devices for doing security analysis.
For example if you want to pentest your
-
application and see how an attack could gain a
foothold – you want to debug that stuff and
-
you want to have a jailbroken phone for
that. So what are these tweaks? Tweaks are
-
usually modifications of built-in
userspace programs, for example one of the
-
programs is springboard. Springboard is
what you see if you turn on your phone.
-
This is where all the icons are at. And
usually you can install tweaks to, I don't
-
know, modify the look, the behavior or add
functionality, just this customization,
-
this is how it started with jailbreaking.
What is usually bundled when you install a
-
jailbreak is Cydia. So you install dpkg
and apt which is the Debian package
-
manager and you also get Cydia which is a
user-friendly graphical user interface for
-
the decentralized or centralized package
installer system. I'm saying centralized
-
because it is pretty much all in one spot,
you just open the app and you can get all
-
your tweaks and it's also decentralized
because you can just add up your own repo,
-
you can make your own repo, you can add
other repos and you're not kinda tied to
-
one spot where you get the tweaks from,
like with the App Store, where you can only download
-
from the App Store. But with Cydia you can
pretty much download from everywhere.
-
You're probably familiar with Debian and
it's pretty much the same. So this talk is
-
pretty much structured around this tweet:
the "Ages of jailbreaking". So as you can
-
see we get the Golden Age, the BootROM age,
the Industrial Age and the Post-
-
Apocalyptic Age. And I kind of agree with
that. So this is why I decided to
-
structure my talk around that and walk you
through the different ages of
-
jailbreaking. So starting with the first
iPhone OS jailbreak – then it was actually
-
called iPhone OS not iOS – it was not the
BootROM yet. So the first one was a buffer
overflow in the iPhone's libtiff library.
-
And this is an image parsing library.
-
It was exploited through Safari and used as
an entry point to get code execution.
-
It was the first time that non-Apple software
was run on an iPhone and people installed
-
applications like Installer or AppTapp
which were stores similar to Cydia back
-
then and those were used to install apps
or games because for the first iPhone OS
-
there was no way to install applications
anyhow, as the App Store got introduced
-
with iOS 2. So then, going to the Golden
Age, the attention kind of shifted to the
-
BootROM; people started looking at the
boot process and they found this device
-
firmware upgrade mode which is a part of
ROM. So the most famous BootROM exploit
-
was limera1n by geohot. It was a bug in
hardware and it was unpatchable with
-
software. So this bug was used to
jailbreak devices up to the iPhone 4.
-
There were also several other jailbreaks
which didn't rely on that one – but this one,
-
once discovered, you can use it over and
over again and there's no way to patch
-
that. So this was later patched in a new
hardware revision which is the iPhone 4s.
-
So with that BootROM bug,
-
this is how tethered jailbreaks kind of
became a thing. So limera1n exploits a bug
-
in DFU mode which allows you to load
unsigned software through USB. However
-
when you reboot the device a computer was
required to re-exploit and again load your
-
unsigned code. And then load the
bootloaders, load the patched kernel and
-
thus the jailbreak was kind of tethered to
the computer because whenever you shut
-
down you need to be back at a computer to
boot your phone up. So historically a
-
tethered jailbroken phone does not boot
without a computer at all. And the reason
-
for that is because the jailbreaks would
modify the kernel and the bootloaders on
-
the file system for performance reasons,
so when you do the actual tether boot you
-
would need to upload a very tiny payload
via USB which then in turn would load
-
everything else from the file system
itself. But this results in a broken chain
-
of trust. When the normal boot process
runs and the bootloader checks the
-
signature of the first-stage bootloader,
that would be invalid, so the bootloader
-
would refuse to boot it, and it would end
up in DFU mode – so basically the phone won't
-
boot. Sometime around then, the idea for
semi-tethered jailbreak came up and the
-
idea behind that is very simple: just
don't break the chain of trust for
-
tethered jailbreaks. So, what you would do
differently is you do not modify the
-
kernel on the file system, don't touch the
bootloaders at all and then when you
-
would boot tethered, you would need to
upload all the bootloaders like the first
-
stage bootloader, then the second stage
bootloader which is iBoot and then the
-
kernel via USB to boot into jailbroken
mode. However when you reboot you could
-
boot all those components from the file
system so you could actually boot your
-
phone into non-jailbroken mode – as long as you
don't install any tweaks or modifications
-
which modify critical system components,
because if you tamper with, for example,
-
the mount binary, its signature becomes invalid
and the system obviously cannot boot
-
in non-jailbroken mode.
-
So, this is kind of the Golden age.
So let's continue with the Industrial age.
-
So with the release of the
iPhone 4s and iOS 5, Apple fixed the
-
BootROM bug and essentially killed
limera1n. They also introduced APTickets
-
and nonces to bootloaders, which I'm just
mentioning because it's kind of a
-
throwback to downgrading: before that, if you
had a phone updated to the latest firmware, as
-
long as you had saved your SHSH blobs you could
just downgrade and then jailbreak again,
-
which wasn't a big deal; but with that they also
added downgrade protection, so jailbreaking
-
became harder. If you want to know more
-
about how the boot process works, what
SHSH blobs are, what APTickets are, you
-
should check out my talk from two years
ago, I go in-depth on how all of that
-
works. So, I'm skipping that for this
talk. So the binaries the phone boots are
-
encrypted so the bootloaders are encrypted
and until recently the kernel used to be
-
encrypted as well. And the key encryption
key is fused into the devices and it is
-
impossible to get, even through hardware
attacks. At least there's no public case
-
where somebody actually recovered that
key, so it's probably impossible –
-
nobody has done it yet. So all boot files
are decrypted at boot by the previous
-
bootloader. And before the iPhone 4s you
could actually just talk to the hardware
-
AES engine as soon as you got kernel-level
code execution. But with the iPhone 4s
-
they introduced a feature where before the
kernel would boot they would shut off the
-
AES engine by hardware, so there is no way
to decrypt bootloader files anymore so
-
easily unless you got code execution in
the bootloader itself. So decrypting
-
bootloaders is a struggle from now on. So
I think kind of because of that the
-
attention shifted to userland and from then on
the jailbreaks kind of had to be
-
untethered. So untethered here means that
if you jailbreak your device, you turn it
-
off, you boot it again, then the device is
still jailbroken, and this is usually
-
achieved through re-exploitation at some
point in the boot process. So you can't
-
just patch the kernel on file system
because that would invalidate signatures,
-
so instead you would, I don't know, add
some configuration files to some daemons
-
which would trigger bugs and then exploit.
So jailbreaks then chained many bugs
-
together, sometimes six or more bugs to
get initial code execution, kernel code
-
execution and persistence. This somewhat
changed when Apple introduced free
-
developer accounts around the time they
released iOS 9. So these developer
-
accounts allow everybody who has an Apple
ID to get a valid signing certificate for
-
seven days for free. So you can actually
create an Xcode project and run your app
-
on your physical device. Before that that
was not possible, so the only way to run
-
your own code on your device was to buy a
paid developer account, which is $100 per
-
year if you buy a personal developer
account. But now you can just get that for
-
free. And after seven days the certificate
expires, but you can just, for free,
-
request another one and keep doing that.
Which is totally enough if you develop
-
apps. So this kind of led to semi-
untethered jailbreaks because initial code
-
execution was not an issue anymore.
Anybody could just get that free
-
certificate, sign apps and run some kind
of code that was sandboxed. So jailbreak
-
focus shifted to more powerful kernel bugs
which were reachable from sandbox. So we
-
had jailbreaks using just one single bug
or maybe just two bugs and the jailbreaks
-
then were distributed as an IPA, which is
an installable app people would download,
-
sign themselves, put on the phone and just
run the app. So semi-untethered means
-
you can reboot into non-jailbroken mode,
however you can get to jailbroken mode
-
easily by just launching an app. And over
the years Apple stepped up its game
-
constantly. So with iOS 5 they introduced
ASLR, address space layout randomization,
-
with iOS 6 they added kernel ASLR, with
the introduction of the iPhone 5s they
-
added 64-bit CPUs, which isn't really a
security mitigation, it just changed a bit
-
how you would exploit. So the real deal
started to come with iOS 9, where they
-
first introduced Kernel Patch Protection,
an attempt to make the kernel immutable
-
and not patchable. And they stepped that up
with the iPhone 7 where they introduced
-
Kernel Text Readonly Region, also known as
KTRR. So with iOS 11 they removed 32bit
-
libraries, which I think has very little
to no impact on exploitation; it's mainly
-
in the list because up to that point Cydia
was compiled as a 32bit binary and that
-
stopped working, that's why that had to be
recompiled for 64-bit, which took some time
-
to do until you could get a working Cydia
on 64-bit iOS 11. So with the iPhone Xs
-
which came out just recently they
introduced Pointer Authentication Codes,
-
and I'm gonna go more in detail into these
hardware mitigations in the next few
-
slides. So let's start with Kernel Patch
Protection. So when people say KPP, they
-
usually refer to what Apple calls
watchtower. So watchtower, as the name
-
suggests, watches over the kernel and
panics when modifications are detected,
-
and it prevents the kernel from being
patched. At least that's the idea of it.
-
It doesn't really prevent it because it's
broken but when they engineered it, it
-
should prevent you from patching the
kernel. So how does it work? Watchtower is
-
a piece of software which runs in EL3
which is the ARM exception level 3.
-
So exception levels are kind of privilege
separations while 3 is the highest and 0
-
is the lowest. And you can kind of trigger
an exception to call handler code in
-
higher levels. So the idea of Watchtower is
that recurring events – namely FPU usage –
-
trigger a Watchtower inspection of the
kernel, and you cannot really turn it off
-
because you do need the FPU. So if you
picture how it looks like, we have the
-
Watchtower to the left (which totally
looks like a lighthouse) and the
-
applications at the right. So in the
middle, in EL1, we have the kernel and
-
recent studies revealed that this is
exactly how the XNU kernel looks like. So
-
how does this work? An event occurs from
time to time caused by a userland
-
application – for example JavaScript makes
heavy use of floating point – and the
-
event would then go to the kernel and the
kernel would then trigger Watchtower as it
-
tries to enable the FPU. Watchtower would
scan the kernel and then if everything is
-
fine it would transition execution back
into the kernel which then in turn would
-
transition back into userspace which can
then use the FPU. However with a modified
-
kernel, when Watchtower scans the kernel
and detects modification, it would just
-
panic. So the idea is that the kernel is
forced to call Watchtower because the FPU
-
is blocked otherwise. But the problem at
the same time is that the kernel is in
-
control before it calls watchtower. And
this thing was fully defeated by qwerty in
-
yalu102. So how qwerty's KPP bypass works:
The idea is: you copy the kernel in memory
-
and you modify the copied kernel. Then you
would modify the page tables to use the
-
patched kernel. And whenever the FPU
triggers a Watchtower inspection, before
-
actually calling Watchtower you would
switch back to the unmodified kernel and
-
then let it run, let it check the
unmodified kernel when that returns you
-
would go back to the modified kernel. So
this is how it looks: we copy the kernel
-
in memory, we patch the copy, we
switch the page tables to actually use the
-
patched copy, and when we have the FPU
event we would just switch the page tables
-
back, forward the call to Watchtower, let
Watchtower scan the unmodified
-
kernel and after the scan we would just
return to the patched kernel. So the
-
problem here is: Time of check – Time of
Use, the classical TOCTOU. And this works
-
on the iPhone 5s, the iPhone 6 and the
iPhone 6s and it's not really patchable.
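Just to make the time-of-check/time-of-use idea concrete, here is a minimal toy model in C of that shadow-copy trick. It is purely illustrative: all names (kernel_pristine, kernel_patched, watchtower_scan, fpu_trap) and the byte-wise "check" are made up, and nothing here touches a real kernel.

```c
/* Toy model of the yalu102-style KPP bypass (shadow kernel copy). */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define KERNEL_SIZE 16

static uint8_t kernel_pristine[KERNEL_SIZE]; /* what Watchtower expects to see */
static uint8_t kernel_patched[KERNEL_SIZE];  /* our modified copy */
static uint8_t *active;                      /* the "page tables": which copy the CPU uses */
static uint8_t snapshot[KERNEL_SIZE];        /* Watchtower's known-good reference */

/* Watchtower in EL3: panics if the kernel it inspects differs from the snapshot. */
static void watchtower_scan(const uint8_t *kernel)
{
    if (memcmp(kernel, snapshot, KERNEL_SIZE) != 0)
        puts("WatchTower: modification detected -> panic!");
    else
        puts("WatchTower: kernel looks fine.");
}

/* FPU trap handler in EL1 (attacker-controlled): swap to the pristine copy
 * before calling up to EL3, swap back afterwards – the classic TOCTOU. */
static void fpu_trap(void)
{
    active = kernel_pristine;   /* time of check: show the unmodified kernel */
    watchtower_scan(active);
    active = kernel_patched;    /* time of use: keep running the patched one */
}

int main(void)
{
    memset(kernel_pristine, 0xAA, KERNEL_SIZE);
    memcpy(snapshot, kernel_pristine, KERNEL_SIZE);

    /* "Copy the kernel in memory and patch the copy." */
    memcpy(kernel_patched, kernel_pristine, KERNEL_SIZE);
    kernel_patched[3] = 0x00;   /* some kernel patch */
    active = kernel_patched;

    fpu_trap();                 /* Watchtower never sees the patch */
    printf("CPU keeps executing the %s kernel.\n",
           active == kernel_patched ? "patched" : "pristine");
    return 0;
}
```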
-
However, with the iPhone 7, Apple
introduced KTRR, which kind of improves on that,
-
and they really managed to make an
unpatchable kernel. So how does KTRR work?
-
So Kernel Text Readonly Region, I'm going
to present as described by Siguza in his
-
blog, adds an extra memory controller
which is the AMCC which traps all writes to
-
the read-only region. And there's extra CPU
registers which mark an executable range
-
which are the KTRR registers and they
obviously mark a subsection of the
-
read-only region, so you have hardware
enforcement at boot time for read-only
-
memory region and hardware enforcement at
boot-time for an executable memory region.
-
So this is the CPU. This is the memory at the
bottom. You would set the read-only region
-
at boot and since that's enforced by the
hardware memory controller everything
-
inside that region is not writable and
everything outside that region is
-
writable. And the CPU got KTRR registers
which mark begin and end. So the
-
executable region is a subsection of the
read-only region. Everything outside there
-
cannot be executed by the CPU. Everything
inside the read-only region cannot be
-
modified.
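As a rough sketch of the two checks just described, here is a toy model in C. It is only an illustration: the addresses, names and the register interface are invented, this is not how the real AMCC/KTRR hardware is programmed.

```c
/* Toy model of KTRR: an AMCC-enforced read-only range locked at boot, plus
 * CPU registers marking an executable subrange of it. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

typedef struct { uint64_t begin, end; bool locked; } region_t;

static region_t amcc_ro;    /* read-only region, enforced by the memory controller */
static region_t ktrr_exec;  /* executable region, enforced by the CPU */

static void lock_regions_at_boot(uint64_t ro_begin, uint64_t ro_end,
                                 uint64_t x_begin,  uint64_t x_end)
{
    amcc_ro   = (region_t){ro_begin, ro_end, true};
    ktrr_exec = (region_t){x_begin,  x_end,  true};   /* subsection of the RO region */
}

static bool write_allowed(uint64_t addr)
{
    /* AMCC traps every write that falls inside the locked read-only region. */
    return !(amcc_ro.locked && addr >= amcc_ro.begin && addr < amcc_ro.end);
}

static bool exec_allowed(uint64_t addr)
{
    /* The CPU only executes from inside the KTRR executable range. */
    return ktrr_exec.locked && addr >= ktrr_exec.begin && addr < ktrr_exec.end;
}

int main(void)
{
    lock_regions_at_boot(0x1000, 0x9000, 0x2000, 0x8000);

    printf("write to kernel text  @0x2100: %s\n", write_allowed(0x2100) ? "ok" : "trapped");
    printf("write outside region  @0xa000: %s\n", write_allowed(0xa000) ? "ok" : "trapped");
    printf("exec from kernel text @0x2100: %s\n", exec_allowed(0x2100) ? "ok" : "refused");
    printf("exec outside region   @0xa000: %s\n", exec_allowed(0xa000) ? "ok" : "refused");
    return 0;
}
```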
And this has not been truly bypassed yet.
There has been a bypass, but that actually
-
targeted how that thing gets set up. But that's
fixed now and it's probably setting
-
up everything correctly, and so far
it hasn't been bypassed again. So jailbreaks are
-
still around. So what are they doing?
Well, they just work around kernel patches
-
and this is when KPPless jailbreaks evolved.
Which means, they just don't patch
-
the kernel. But before we dive into that,
let's take a look what previous jailbreaks
-
actually did patch in the kernel. So the
general goals are to disable code signing,
-
to disable the sandbox, to make the root
file system writable, and to somehow make
-
tweaks work, which involves making Mobile
Substrate or libsubstitute work, which is
-
the library for hooking. And I was about
to make a list of kernel patches which you
-
could simply apply, however, the
techniques and patches vary across
-
individual jailbreaks so much that I
couldn't even come up with the list of
-
kernel patches among the different
jailbreaks I worked on. So there's no
-
general set of patches, some prefer to do
it that way, some prefer to do it that
-
way. So instead of doing a kind of full
list, I'll just show you what the h3lix
-
jailbreak does patch. So the h3lix
jailbreak first patches
-
i_can_has_debugger, which is a boot-arg.
It's a variable in the kernel and if you
-
set that to true that would relax the
sandbox. So to relax the sandbox or to
-
disable code signing usually involves
multiple steps. Also since iOS 7 you need
-
to patch mount because it's actually
hardcoded that the root filesystem cannot
-
be mounted as read-write. Since iOS 10.3,
it is also hardcoded that you cannot
-
mount the root filesystem without the
nosuid flag, so you probably want to patch
-
that out as well. And then if you patch
both these you can remount the root
-
filesystem as read-and-write, however you
cannot actually write to the files on the
-
root filesystem unless you patch Light-
Weight Volume Manager which you also only
-
need to do in iOS 9 up to iOS 10.3. Later
when they switched to APFS you don't
-
actually need that anymore. Also there's a
variable called proc_enforce. You set that
-
to 0 to disable code signing which is one
of the things you need to do to disable
-
code signing. Another flag is
cs_enforcement_disable, set that to 1 to
-
disable code signing. So amfi, which is
Apple Mobile File Integrity, is a kext which
-
handles the code signing checks. That
kext imports the memcmp function.
-
So there's a stub and one of the patches
is to patch that stub to always return 0
-
by some simple gadget. So what this does
is, whenever it compares something in the
-
code, it would just always say
that the compare succeeds and is equal.
-
I'm not entirely sure exactly what it does –
this patch dates back to Yalu –
-
but just applying that patch helps with
-
killing code signing, so that's why it's
in there. Another thing h3lix does is, it
-
adds the get-task-allow entitlement to
every process and this is for allowing
-
read/write/executable mappings and this
is what you want for Mobile Substrate
-
tweaks. So initially this entitlement
is used for debugging because
-
there you also need to be able to modify
code at runtime for setting breakpoints
-
while we use it for getting tweaks to
work. Since iOS 10.3, h3lix also
-
patches the
label_update_execve function. So the idea
-
of that patch was to fix the "process-exec
denied while updating label" error message
-
in Cydia and several other processes. Well
that seems to completely nuke the sandbox
-
and also break sandbox containers so this
is also the reason why if you're
-
jailbreaking with h3lix apps would save
their data in the global directory instead
-
of their sandbox containers. And you also
kill a bunch of checks in
-
mac_policy_ops to relax
the sandbox.
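To give a rough idea of what applying such a set of patches could look like, here is a hedged sketch in C. It assumes you already have a kernel write primitive and have resolved the relevant addresses; ksym(), kwrite32() and all patch locations are placeholders (logging stubs here), not h3lix's actual code.

```c
/* Dry-run sketch: apply a list of kernel patches via a hypothetical write primitive. */
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for a real kernel write primitive and symbol/offset lookup. */
static uint64_t ksym(const char *name)
{
    printf("resolve %s\n", name);
    return 0xfffffff007004000ull;              /* placeholder address */
}
static void kwrite32(uint64_t kaddr, uint32_t val)
{
    printf("kwrite32(0x%llx, 0x%08x)\n", (unsigned long long)kaddr, val);
}

static void apply_kernel_patches(void)
{
    /* Relax the sandbox via the kernel debug variable. */
    kwrite32(ksym("_PE_i_can_has_debugger"), 1);

    /* Two of the variables involved in disabling code signing. */
    kwrite32(ksym("_proc_enforce"), 0);
    kwrite32(ksym("_cs_enforcement_disable"), 1);

    /* Make AMFI's imported memcmp stub always "return 0" so hash comparisons
     * succeed: mov x0, #0 (0xD2800000) followed by ret (0xD65F03C0) on arm64. */
    uint64_t stub = ksym("_amfi_memcmp_stub");
    kwrite32(stub,     0xD2800000);
    kwrite32(stub + 4, 0xD65F03C0);
}

int main(void) { apply_kernel_patches(); return 0; }
```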
-
So if you want to check out how that works yourself: unfortunately
h3lix itself is not open-source and I've
-
no plans of open-sourcing that. But
there are two very closely related
-
projects which are open-source: doubleH3lix –
this is pretty much exactly
-
the same but for 64-bit devices, which
does include the KPP bypass, so it also
-
patches the kernel – and jelbrekTime,
which is the watchOS jailbreak. But h3lix
-
is for iOS 10 and the watchOS jailbreak is
kind of the iOS 11 equivalent but it
-
shares like most of the code. So most of
the patch code is the same if you want to
-
check these out. So,
KPPless jailbreaks. So the idea is, don't
-
patch the kernel code but instead patch
the data. As an example, let's go for
-
remounting the root file system. We know we
have hardcoded checks which forbid us to
-
mount the root file system read/write. But
what we can do is in the kernel there's
-
this structure representing the root file
system and we can patch that structure
-
removing the flag saying that this
structure represents the root file system.
-
And we simply remove that flag, then we can
call remount on the root file system, and
-
then we put the flag back in. So we kind
of bypass the hardcoded check.
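A minimal sketch of that data-only remount trick, assuming a kernel read/write primitive. The address and the kread32/kwrite32/remount helpers are placeholders that only simulate the flow here; just the MNT_ROOTFS flag value is taken from XNU's mount.h.

```c
/* Dry-run sketch: clear the rootfs flag, remount read/write, restore the flag. */
#include <stdint.h>
#include <stdio.h>

#define MNT_ROOTFS 0x00004000   /* XNU's "this is the root file system" mount flag */

static uint32_t fake_mnt_flag = MNT_ROOTFS | 0x1;  /* stands in for kernel memory */

static uint32_t kread32(uint64_t kaddr)              { (void)kaddr; return fake_mnt_flag; }
static void     kwrite32(uint64_t kaddr, uint32_t v) { (void)kaddr; fake_mnt_flag = v; }
static void     remount_rootfs_rw(void)              { puts("mount -uw /"); }

int main(void)
{
    uint64_t mnt_flag_addr = 0xfffffff012345678ull; /* hypothetical &rootvnode->v_mount->mnt_flag */

    uint32_t flags = kread32(mnt_flag_addr);
    kwrite32(mnt_flag_addr, flags & ~MNT_ROOTFS);   /* pretend it's not the rootfs */
    remount_rootfs_rw();                            /* the hardcoded check no longer triggers */
    kwrite32(mnt_flag_addr, flags);                 /* put the flag back */

    printf("mnt_flag restored: 0x%08x\n", fake_mnt_flag);
    return 0;
}
```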
-
For disabling code signing and disabling
the sandbox there are several approaches.
-
In the kernel there's a trust cache. So
usually AMFI handles the code signing:
-
amfid, the daemon in userspace, handles the code
signing requests. But the daemon itself
-
also needs to be code-signed. So you have
a chicken-and-egg problem. That's why in
-
the kernel there is a list of hashes of
binaries which are allowed to execute. And
-
this thing is actually writable because
when you mount the developer disk image it
-
actually adds some debugging things to it
so you can simply inject your own hash
-
into the trust cache, making the binary
trusted.
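Sketched out, the idea looks roughly like this. Note that the real trust-chain structure differs between iOS versions; the layout, cdhash_of() and the final linking step here are simplified placeholders, not a working implementation.

```c
/* Sketch: prepare a trust cache entry containing the CDHash of your own binary. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CDHASH_LEN 20   /* SHA-1 sized code directory hash */

struct trust_chain_entry {
    uint64_t next;                 /* next chained trust cache (kernel pointer) */
    uint8_t  uuid[16];
    uint32_t count;                /* number of hashes that follow */
    uint8_t  hash[1][CDHASH_LEN];  /* array of CDHashes (one entry in this sketch) */
};

/* Placeholder: in reality you would parse the code signature blob of the binary. */
static void cdhash_of(const char *path, uint8_t out[CDHASH_LEN])
{
    memset(out, 0x41, CDHASH_LEN);
    printf("computed cdhash of %s\n", path);
}

int main(void)
{
    static struct trust_chain_entry entry;   /* would be allocated in kernel memory */
    entry.next  = 0;                         /* would point at the existing chain head */
    entry.count = 1;
    cdhash_of("/bin/mybinary", entry.hash[0]);

    /* With a kernel write primitive you would now link `entry` into the trust
     * chain that the developer-disk-image support makes writable. */
    puts("entry ready to be linked into the kernel trust chain");
    return 0;
}
```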
-
Another approach, taken by jailbreakd and
the latest Electra jailbreak, is to have a
process, in this
-
case jailbreakd, which would patch the
processes on creation, so when you spawn a
-
process that thing would immediately stop
the process, go into the kernel, look up
-
the structure and remove the flags
saying "kill this process when the code
-
signature becomes invalid". And it would
also add the
-
get-task-allow entitlement. And then after
it's done that it would resume the process
-
and then the process won't get killed any
more because it's kind of already trusted.
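A hedged sketch of that flow: stop the freshly spawned process, flip its code-signing flags in the kernel, resume it. The kernel helpers, the proc offset and the signalling are stubbed out; only the CS_* flag values come from XNU, everything else is illustrative.

```c
/* Dry-run sketch of a jailbreakd-style "unrestrict" step. */
#include <stdint.h>
#include <signal.h>
#include <stdio.h>

/* Subset of XNU's code-signing flags (cs_blobs.h). */
#define CS_GET_TASK_ALLOW 0x00000004
#define CS_HARD           0x00000100
#define CS_KILL           0x00000200
#define CS_DEBUGGED       0x10000000

/* Stand-ins for the real primitives such a daemon would use. */
static void     send_signal(int pid, int sig)    { printf("signal %d -> pid %d\n", sig, pid); }
static uint64_t proc_find(int pid)               { (void)pid; return 0xfffffff010203040ull; }
static uint32_t kread32(uint64_t a)              { (void)a; return CS_HARD | CS_KILL; }
static void     kwrite32(uint64_t a, uint32_t v) { printf("csflags @0x%llx = 0x%08x\n",
                                                          (unsigned long long)a, v); }

#define PROC_CSFLAGS_OFFSET 0x2a8   /* hypothetical, version dependent */

static void unrestrict(int pid)
{
    send_signal(pid, SIGSTOP);                   /* freeze it before it gets killed */

    uint64_t csflags_addr = proc_find(pid) + PROC_CSFLAGS_OFFSET;
    uint32_t csflags = kread32(csflags_addr);

    csflags &= ~(CS_HARD | CS_KILL);             /* don't kill on an invalid signature */
    csflags |= CS_GET_TASK_ALLOW | CS_DEBUGGED;  /* allow task ports / runtime patching for tweaks */
    kwrite32(csflags_addr, csflags);

    send_signal(pid, SIGCONT);                   /* resume; the process now survives */
}

int main(void) { unrestrict(1234); return 0; }
```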
-
And the third approach taken or demoed by
bazad was to take over amfid in userspace
-
completely. So if you can get a Mach port
to launchd or to amfid you can impersonate
-
it, and whenever the kernel asks whether a
file is trusted you would reply
-
"Okay yeah that's trusted that's fine
you can run it" so that way you don't need
-
to go for the kernel at all. So future
jailbreaks. Kernel patches are not really
-
possible anymore and they're not even
required. Because we can still patch the
-
kernel data or not go for the kernel at
all. But we're still not done yet, we
-
still didn't cover the Post-Apocalyptic age,
or short: PAC. Well actually
-
PAC stands for pointer authentication
codes but you get the joke.
-
So pointer authentication codes were
introduced with the iPhone Xs and if we
-
quote Qualcomm "This is a stronger version
of stack protection". And pointer
-
authentication codes are similar to
message authentication codes but for
-
pointers, if you are familiar with that.
And the idea of that is to protect data in
-
memory in relation to context with a
secret key. So the data in memory could be
-
the return address and the context could be
the stack pointer or data in memory could
-
be a function pointer and the context
could be a vtable. So if we take a look
-
how PAC is implemented. So at the left you
can see function entry and like function
-
prologue and function epilogue without PAC
and with PAC the only thing that would be
-
changed is when you enter a function
before actually doing anything inside it,
-
you would normally store the return address
on the stack, but when doing that you would
-
first sign the pointer with the
context, kinda creating the
-
signature, store it inside the pointer
and then put it on the stack. And then
-
when you leave the function you would just
take back the pointer, again calculate the
-
signature and see if both signatures
match, and if they do, then just return;
-
and if the signature is invalid, it would
just throw a hardware fault. So this is
-
how it looks for 64-bit pointers. You
don't really use all of the available bits.
-
So usually you use 48 bits for
virtual memory which is more than enough.
-
If you use memory tagging,
you have seven bits left for putting in
-
the signature, or if you do not use memory
tagging you can use up to 15 bits for the
-
pointer authentication code.
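To picture that layout, here is a tiny toy model in C: a 48-bit address with a 15-bit truncated MAC stored in the spare top bits. The mixing function is just a stand-in (it is not QARMA or whatever Apple actually uses), and pac_sign/pac_auth only mimic what the PAC*/AUT* instructions conceptually do.

```c
/* Toy model of a signed pointer: 48-bit VA, 15-bit MAC in the top bits. */
#include <stdint.h>
#include <stdio.h>

#define VA_BITS   48
#define VA_MASK   ((1ull << VA_BITS) - 1)
#define PAC_BITS  15
#define PAC_MASK  ((1ull << PAC_BITS) - 1)

static uint64_t secret_key = 0x0123456789abcdefull;  /* per-boot key, unknown to the attacker */

static uint64_t mix(uint64_t x)                      /* placeholder "algorithm P", NOT a real cipher */
{
    x ^= x >> 33; x *= 0xff51afd7ed558ccdull;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ull;
    return x ^ (x >> 33);
}

static uint64_t compute_pac(uint64_t ptr, uint64_t ctx)
{
    return mix((ptr & VA_MASK) ^ mix(ctx ^ secret_key)) & PAC_MASK;
}

static uint64_t pac_sign(uint64_t ptr, uint64_t ctx)  /* like PACIA: store the PAC in the top bits */
{
    return (ptr & VA_MASK) | (compute_pac(ptr, ctx) << VA_BITS);
}

static uint64_t pac_auth(uint64_t signed_ptr, uint64_t ctx) /* like AUTIA: verify and strip */
{
    uint64_t ptr = signed_ptr & VA_MASK;
    if ((signed_ptr >> VA_BITS) != compute_pac(ptr, ctx)) {
        puts("authentication failure -> fault");     /* real hardware faults / poisons the pointer */
        return 0;
    }
    return ptr;
}

int main(void)
{
    uint64_t lr = 0x1800421337ull, sp = 0x16fd0a000ull; /* return address, stack pointer as context */
    uint64_t signed_lr = pac_sign(lr, sp);

    printf("signed:  0x%016llx\n", (unsigned long long)signed_lr);
    printf("auth ok: 0x%016llx\n", (unsigned long long)pac_auth(signed_lr, sp));
    pac_auth(signed_lr ^ (1ull << 50), sp);           /* attacker flipped a PAC bit */
    return 0;
}
```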
So the basic idea of PAC is to kill ROP-like code reuse
-
attacks. You cannot simply smash the stack
and create a ROP chain because every
-
return would have an instruction verifying
the signature of the return address, and that
-
means you would need to sign everything,
every single one of these pointers, and since
-
you don't know the key you can't do that
in advance. So you cannot modify a return
-
address and you cannot swap two signed
values on the stack unless the stack
-
pointer is the same for both. Can we
bypass it? Maybe. I don't know. But we can
-
take a look at how that thing is
implemented. So if we take a look at the
-
ARM slides you can see that PAC is
basically derived from a pointer and a
-
64-bit context value and the key and we
put all of that in the algorithm P. And
-
that gives us the PAC which we store in
the unused bits. So the algorithm P can
-
either be QARMA or it can be something
completely custom. And the instructions,
-
the ARM instructions, kind of hide the
implementation details. So if you would go
-
for attacking PAC, there's two ways of
attack strategies. We can either try and
-
go straight for the cryptographic
primitive like take a look what cipher it
-
is or how that cipher is implemented.
Maybe it's weak or we can go and attack
-
the implementation. So if we go and attack
the implementation we could look for
-
signing primitives, which could be like
small gadgets we could jump to or
-
somehow execute to sign a value which
could be either an arbitrary context
-
signing gadget or maybe a fixed context
signing gadget. We could also look for
-
unauthenticated code, for example I
imagine the code which sets up PAC itself
-
is probably not protected by PAC because
you can't sign the pointer if the key is
-
not set up yet. Maybe that code is still
accessible. We could look for something
-
like that. We could also try to replace
pointers which share the same context.
-
It's probably not feasible for return
values on the stack, but maybe it's
-
feasible for swapping pointers in the
vtable. Or maybe you come up with your own
-
clever idea how to bypass that. These are
just like some ideas. So I want to make a
-
point here, that in my opinion it doesn't
make much sense to try to attack the
-
underlying cryptography on PAC, so I think
that if we go for attacking PAC it makes
-
much more sense to look for implementation
attacks and not attacking the cryptography
-
and the next few slides are just there to
explain why I think that. So if we take a
-
look at QARMA which was proposed by ARM as
being one of the possible ways of
-
implementing PAC. QARMA is a
tweakable block cipher, so it takes an
-
input, a tweak and gives you an output.
Which kind of fits perfectly for what we
-
want. And then I started looking at QARMA
and came up with ideas on how you
-
could maybe attack that cipher. At some
point I realized that practical crypto
-
attacks on QARMA – if there will be any in
the future – will probably be, that's what I
-
think, completely irrelevant to the PAC
security. So why's that? If we define –
-
So just so you know, the next few slides
I'm going to bore you with some math but
-
it's not too complex. So if we define PAC
as a function which takes a 128-bit input
-
and a 128-bit key and maps it to a 15-bit
output. Or we can more realistically
-
define it as a function which takes 96
bits input with a 128-bit key because we
-
have a 48-bit pointer because the other
ones we can't use because that's where we
-
store the signature and we're most likely
using the stack pointer as a context so
-
that one will also only use
48 bits. Then we have PAC as a
-
construct so then we define the attacker
with the following capabilities. The attacker
-
is allowed to observe some pointer and
signature pairs and I assume that you can
-
get that through some info leaks, for
example you have some bug in the code
-
which lets you dump a portion of the stack
with a bunch of signed pointers.
-
This is why you can observe some, not all,
but you can see some and I would also
-
allow the attacker to
slightly modify the context and what I
-
mean by that is I imagine a scenario where
the attacker could maybe shift the stack,
-
maybe through more nested function calls
before executing the leak which will give
-
you actually two signatures for the same
pointer but with a different context.
-
Maybe that's somewhat helpful. But still
we realize that the attacker, the
-
cryptographic attacker, is super weak so
the only other cryptographic problem there
-
could be is collisions. And for those of
you who have seen my last talk, you probably
-
know I love collisions. So we have 48-bit
pointer, 48-bit context and 128-bit key.
-
We sum that up and we divide that by the
15 bits of output we get from PAC which
-
gives us 2 to the power of 209 possible
collisions because we map so many bits to
-
so little bits. But even if we reduce the
pointers because practically probably less
-
than 34 bits of a pointer are really used,
we still get 2 to the power 181
-
collisions, which is a lot of collisions.
But the bad thing here is that random
-
collisions are not very useful to us
unless we can predict them somehow.
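Written out, the definition and the counting argument from the last few slides look roughly like this (a sketch using the figures from above):

```latex
\mathrm{PAC}\colon
\underbrace{\{0,1\}^{48}}_{\text{pointer}} \times
\underbrace{\{0,1\}^{48}}_{\text{context}} \times
\underbrace{\{0,1\}^{128}}_{\text{key}}
\longrightarrow \{0,1\}^{15}

\#\text{collisions} \approx 2^{\,48+48+128-15} = 2^{209},
\qquad
2^{\,34+34+128-15} = 2^{181}
\text{ (if only 34 pointer/context bits are really in use)}
```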
-
So let's take a look at how a cryptographically
secure MAC is defined. So a MAC is defined
-
as follows: Let Π be a MAC with the
following components and those are
-
basically Gen(), Mac() and Vrfy(). So
Gen() just somehow generates a key, it's
-
only here for the sake of mathematical
completeness. Just assume we generate the
-
key by randomly choosing n bits or however
many bits the key needs. And Mac() is
-
just a function where you put in an n-bit
message called m and it gives us a
-
signature t. And I'm going to say
signature but in reality I mean a message
-
authentication code. And the third
function is Vrfy() and you give it a
-
message and a signature and that just
returns true if that signature is valid
-
for the message or false if it's not. And
when cryptographers prove that something
-
is secure they like to play games. So I'm
gonna to show you my favorite game, which
-
is Mac-forge game. So the game is pretty
simple you have to the left the game
-
master which is playing Mac-forge and
to the right the attacker. So the game
-
starts when the Mac-forge game master
informs the attacker how many bits we are
-
playing with. So this is the first 1 to the
power of n, which basically means: hey, we're
-
having MAC-forge with, I don't know,
64-bit messages so the attacker knows the
-
size. Then the game master just generates
the key, and then the attacker can choose up to
-
q messages of n-bit length and send them
over to the game master and the game
-
master will generate signatures and send
them back. So then the attacker can
-
observe all the messages he generated and
all the matching signatures. So what the
-
attacker needs to do then is to choose
another message which he did not send over
-
yet and somehow come up with a valid
signature and if he can manage to do that
-
he sends it over and if that's actually a
valid signature for the message then he
-
wins the game; otherwise he loses the
game.
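For reference, in standard textbook notation this game and the security notion read roughly as follows (the exact formalization on the slide may differ slightly):

```latex
\textbf{Experiment } \mathsf{Mac\text{-}forge}_{\mathcal{A},\Pi}(n):
\begin{enumerate}
  \item $k \leftarrow \mathrm{Gen}(1^n)$.
  \item $\mathcal{A}$ is given $1^n$ and oracle access to $\mathrm{Mac}_k(\cdot)$;
        it outputs $(m, t)$. Let $Q$ be the set of messages $\mathcal{A}$ queried.
  \item The experiment outputs $1$ iff $\mathrm{Vrfy}_k(m, t) = 1$ and $m \notin Q$.
\end{enumerate}

\Pi \text{ is a secure MAC if for every efficient adversary } \mathcal{A}:\quad
\Pr\!\left[\mathsf{Mac\text{-}forge}_{\mathcal{A},\Pi}(n) = 1\right] \le \mathrm{negl}(n).
```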
-
So we say a MAC is secure if the
probability that an attacker can somehow
win this game is negligible. So I'm gonna spare
-
you the mathematical definition of what
negligible means but like just guessing or
-
trying means that it's still secure if
that's the best attack. So as you can see,
-
a MAC which is secure needs to withstand
this. But for our PAC attacker we do not
-
even have this oracle. So our attacker for
PAC is even weaker than that. So why do we
-
not have this oracle? Well simple if we
allow the attacker to sign arbitrary
-
messages the attacker wouldn't even need
to try to somehow get the key or forge a
-
message, because then he could just send
over all the messages – all the pointers he
-
wants to sign – get back signed pointers, and
you wouldn't need to bother about breaking
-
the crypto at all. So basically the point
I'm trying to make here is that the PAC
-
attacker is weaker than a MAC attacker. So
every secure MAC we know is also a secure
-
PAC, but even then an insecure MAC might
still be sufficiently secure for PAC so
-
secure MACs have been around for a while
and thus in my opinion, I think if
-
somebody, who knows what he's doing,
designs a PAC algorithm today it will
-
likely be secure. So instead of going for
the crypto I think we should rather go for
-
implementation attacks instead because
those will be around forever. And by that
-
I mean well you can either see how the
crypto itself is implemented, what I mean
-
especially by that is you could see how
the PAC is used in the actual code. Maybe
-
you can find signing oracles, maybe you
can find unauthenticated code. I think
-
this is where we need to go if we wanna
bypass PAC somehow. So just to recap where
-
we're coming from. Future iPhone hacks
are probably not gonna try to bypass KTRR. I
-
think they will not try to patch Kernel
code because we can achieve pretty much
-
all the things we want to achieve for end
user jailbreak without having to patch the
-
kernel so far. And I think people are
going to struggle a bit. At least a bit
-
when exploiting with PAC, because that will
kind of either make some bugs unexploitable
-
or really really hard to exploit. Also
maybe we're gonna avoid the kernel altogether,
-
as it has been demoed that userland-only
jailbreaks are possible. Maybe we're going
-
to recalculate what the low hanging fruits
are. Maybe just go back to iBoot or look
-
for what other thing is interesting. So,
that was about it, thank you very much for
-
your attention.
Applause
-
Herald: Thank you, sir. If you would
like to ask a question please line up
-
on the microphones in the room. We do not
have a question from the Internet.
-
One question over there, yes please.
Question: Hi. I would be
-
interested in what your comment is on the
statement from Zarek that basically
-
jailbreaking is not a thing anymore,
because you're breaking so many security
-
features that it makes the phone basically
more insecure than the original reasons for
-
doing a jailbreak allow for.
tihmstar: Well, jailbreaking -- I don't
-
think jailbreaking itself nowadays makes a
phone really insecure. So of course if you
-
patch the kernel and disable all of the
security features that will be less
-
secure. But if you take a look what we
have here with the unpatchable kernel I
-
think the main downside of being
jailbroken is the fact that you cannot go
-
to the latest software version because you
want the bugs to be in there to have the
-
jailbreak. So I don't really think if you
have like a KTRR device the jailbreak
-
itself makes it less secure. Just the fact
that you are not on the latest firmware is
-
the insecure part of it.
Herald: Alright, thank you.
-
Microphone number two, your question.
Mic #2: Hi good talk. Could you go back to
-
the capabilities of the adversary please?
Yeah. So you said you can do basically two
-
things right. This one, yes. Yeah you can
observe some pointers and some signature
-
pairs. But why is this not an oracle?
tihmstar: Because you cannot choose...
-
Mic #2: Your message yourself.
tihmstar: ...your message yourself.
-
Mic #2: And you have also an oracle that
says if the signature is valid. For a
-
chosen message.
tihmstar: Well yeah but this is if you
-
take a look at the game and this game for
a secure MAC the attacker can choose up to
-
q messages sending over... like he can do
whatever he wants with those messages and
-
get a signature, while the PAC attacker
can only see a very limited amount of
-
messages and their matching signature and
he has little to no influence on these
-
messages.
Mic #2: Okay. So it's a bit weaker.
-
tihmstar: So yeah that's the point. Just
that it's weaker.
-
Mic #2: Thanks.
Herald: Do we have a question from
-
the internet? No. OK. Yes please. All
right then I don't see anyone else being
-
lined up and... please give a lot of
applause for tihmstar for his awesome
-
talk!
Applause
-
postroll music
-
subtitles created by c3subtitles.de
in the year 2020. Join, and help us!