
35C3 - The Layman's Guide to Zero-Day Engineering

  • 0:18 - 0:24
    Herald-Angel: The Layman's Guide to Zero-
    Day Engineering is our next talk, and
  • 0:24 - 0:31
    my colleagues out in Austin who run the
    Pwn2Own contest, assure me that our next
  • 0:31 - 0:36
    speakers are really very much top of their
    class, and I'm really looking forward to
  • 0:36 - 0:40
    this talk for that. A capture the flag
    contest like that requires having done a
  • 0:40 - 0:46
    lot of your homework upfront so that you
    have the tools at your disposal, at the
  • 0:46 - 0:51
    time, so that you can win. And Marcus and
    Amy are here to tell us something way more
  • 0:51 - 0:56
    valuable than just the actual tools they
    found: how they actually arrived at those
  • 0:56 - 1:01
    tools, and, you know, the process of getting
    to that point. And I think that is going to be a
  • 1:01 - 1:08
    very valuable recipe or lesson for us. So,
    please help me welcome Marcus and Amy to a
  • 1:08 - 1:17
    very much anticipated talk.
    Applause
  • 1:17 - 1:23
    Marcus: All right. Hi everyone. Thank you
    for making it out to our talk this evening.
  • 1:23 - 1:26
    So, I'd like to start by thanking the CCC
    organizers for inviting us out here to
  • 1:26 - 1:30
    give this talk. This was a unique
    opportunity for us to share some of our
  • 1:30 - 1:36
    experience with the community, and we're
    really happy to be here. So yeah, I hope
  • 1:36 - 1:43
    you guys enjoy. OK, so who are we? Well,
    my name is Marcus Gaasedelen. I sometimes
  • 1:43 - 1:48
    go by the handle @gaasedelen which is my
    last name. And I'm joined here by my co-
  • 1:48 - 1:53
    worker Amy, who is also a good friend and
    longtime collaborator. We work for a
  • 1:53 - 1:56
    company called Ret2 Systems. Ret2 is best
    known publicly for its security research
  • 1:56 - 2:00
    and development. Behind the scenes we do
    consulting and have been pushing to
  • 2:00 - 2:04
    improve the availability of security
    education, and specialized security
  • 2:04 - 2:08
    training, as well as raising awareness
    and sharing information like you're going
  • 2:08 - 2:14
    to see today. So this talk has been
    structured roughly to show our approach in
  • 2:14 - 2:18
    breaking some of the world's most hardened
    consumer software. In particular, we're
  • 2:18 - 2:23
    going to talk about one of the Zero-Days
    that we produced at Ret2 in 2018. And over
  • 2:23 - 2:28
    the course of the talk, we hope to break
    some common misconceptions about the
  • 2:28 - 2:31
    process of Zero-Day Engineering. We're
    going to highlight some of the
  • 2:31 - 2:37
    observations that we've gathered and built
    up about this industry and this trade over
  • 2:37 - 2:41
    the course of many years now. And we're
    going to try to offer some advice on how
  • 2:41 - 2:46
    to get started doing this kind of work as
    an individual. So, we're calling this talk
  • 2:46 - 2:52
    a non-technical commentary about the
    process of Zero-Day Engineering. At times,
  • 2:52 - 2:56
    it may seem like we're stating the
    obvious. But the point is to show that
  • 2:56 - 3:01
    there's less magic behind the curtain than
    most of the spectators probably realize.
  • 3:01 - 3:05
    So let's talk about PWN2OWN 2018. For
    those that don't know, PWN2OWN is an
  • 3:05 - 3:09
    industry level security competition,
    organized annually by Trend Micro's Zero-
  • 3:09 - 3:15
    Day Initiative. PWN2OWN invites the top
    security researchers from around the world
  • 3:15 - 3:20
    to showcase Zero-Day exploits against high
    value software targets such as premier web
  • 3:20 - 3:23
    browsers, operating systems and
    virtualization solutions, such as Hyper-V,
  • 3:23 - 3:29
    VMware, VirtualBox, Xen, whatever.
    So at Ret2, we thought it would be fun to
  • 3:29 - 3:33
    play in PWN2OWN this year. Specifically we
    wanted to target the competition's browser
  • 3:33 - 3:38
    category. We chose to attack Apple's
    Safari web browser on MacOS because it was
  • 3:38 - 3:45
    new, it was mysterious. But also to avoid
    any prior conflicts of interest. And so
  • 3:45 - 3:48
    for this competition, we ended up
    developing a type of Zero-Day, known
  • 3:48 - 3:56
    typically as a single-click RCE or a Safari
    remote, to use some industry language.
  • 3:56 - 4:00
    So what this means is that we could gain
    remote, root-level access to your MacBook,
  • 4:00 - 4:05
    should you click a single malicious link
    of ours. That's kind of terrifying. You
  • 4:05 - 4:10
    know, a lot of you might feel like you're
    not prone to clicking malicious
  • 4:10 - 4:15
    links, or getting spearphished. But it's
    so easy. Maybe you're in a coffee shop, maybe
  • 4:15 - 4:20
    I just man-in-the-middle your connection.
    It's pretty, yeah, it's a pretty scary
  • 4:20 - 4:23
    world. So this is actually a picture that
    we took on stage at PWN2OWN 2018,
  • 4:23 - 4:28
    directly following our exploit attempt.
    This is actually Joshua Smith from ZDI
  • 4:28 - 4:33
    holding the competition machine after our
    exploit had landed, unfortunately, a
  • 4:33 - 4:37
    little bit too late. But the payload at
    the end of our exploit would pop Apple's
  • 4:37 - 4:41
    calculator app and a root shell on the
    victim machine. This is usually used to
  • 4:41 - 4:46
    demonstrate code execution. So, for fun we
    also made the payload change the desktop's
  • 4:46 - 4:51
    background to the Ret2 logo. So that's
    what you're seeing there. So, what makes a
  • 4:51 - 4:55
    Zero-Day a fun case study, is that we had
    virtually no prior experience with Safari
  • 4:55 - 4:59
    or MacOS going into this event. We
    literally didn't even have a single
  • 4:59 - 5:03
    Macbook in the office. We actually had to
    go out and buy one. And, so as a result
  • 5:03 - 5:07
    you get to see, how we as expert
    researchers approach new and unknown
  • 5:07 - 5:12
    software targets. So I promised that this
    was a non-technical talk which is mostly
  • 5:12 - 5:17
    true. That's because we actually published
    all the nitty-gritty details for the
  • 5:17 - 5:22
    entire exploit chain as a verbose six part
    blog series on our blog this past summer.
  • 5:22 - 5:27
    It's hard to make highly technical talks
    fun and accessible to all audiences. So
  • 5:27 - 5:31
    we've reserved much of the truly technical
    stuff for you to read at your own leisure.
  • 5:31 - 5:35
    It's not a prerequisite for this talk, so
    don't feel bad if you haven't read those.
  • 5:35 - 5:39
    So with that in mind, we're ready to
    introduce you to the very first step of what
  • 5:39 - 5:45
    we're calling, The Layman's Guide to Zero-
    Day Engineering. So, at the start of this
  • 5:45 - 5:49
    talk, I said we'd be attacking some of the
    most high value and well protected
  • 5:49 - 5:55
    consumer software. This is no joke, right?
    This is a high stakes game. So before any
  • 5:55 - 5:59
    of you even think about looking at code,
    or searching for vulnerabilities in these
  • 5:59 - 6:03
    products, you need to set some
    expectations about what you're going to be
  • 6:03 - 6:08
    up against. So this is a picture of you.
    You might be a security expert, a software
  • 6:08 - 6:12
    engineer, or even just an enthusiast. But
    through some odd twist of self-
  • 6:12 - 6:16
    loathing, you find yourself interested in
    Zero-Days, with the desire to break some
  • 6:16 - 6:22
    high-impact software, like a web browser.
    But it's important to recognize that
  • 6:22 - 6:26
    you're looking to take on some of the
    largest, most successful organizations of
  • 6:26 - 6:30
    our generation. These types of companies have
    every interest in securing their products
  • 6:30 - 6:33
    and building trust with consumers. These
    vendors have steadily been growing their
  • 6:33 - 6:36
    investments in software and device
    security, and that trend will only
  • 6:36 - 6:41
    continue. You see cybersecurity in
    headlines every day - hacking, you know,
  • 6:41 - 6:45
    these systems compromised. It's only
    getting more popular. You know, there's
  • 6:45 - 6:49
    more money than ever in this space. This
    is a beautiful mountain peak that
  • 6:49 - 6:53
    represents your mission of "I want to
    craft a Zero-Day." But your ascent up this
  • 6:53 - 6:58
    mountain is not going to be an easy task.
    As an individual, the odds are not really
  • 6:58 - 7:03
    in your favor. This game is sort of a free
    for all, and everyone is at each other's
  • 7:03 - 7:07
    throats. So, in one corner is the vendor,
    might as well have infinite money and
  • 7:07 - 7:11
    infinite experience. In another corner, is
    the rest of the security research
  • 7:11 - 7:16
    community, fellow enthusiasts, other
    threat actors. So, all of you are going to
  • 7:16 - 7:21
    be fighting over the same terrain, which
    is the code. This is unforgiving terrain
  • 7:21 - 7:26
    in and of itself. But the vendor has home
    field advantage. So these obstacles are
  • 7:26 - 7:30
    not fun, but it's only going to get worse
    for you. Newcomers often don't prepare
  • 7:30 - 7:35
    themselves for understanding what kind of
    time scale they should expect when working
  • 7:35 - 7:40
    on these types of projects. So, for those
    of you who are familiar with the Capture
  • 7:40 - 7:45
    The Flag circuit: these competitions
    are usually time-boxed to 36 or 48
  • 7:45 - 7:49
    hours. Normally, they're over a weekend. We
    came out of that circuit. We love the
  • 7:49 - 7:55
    sport. We still play. But how long does it
    take to develop a Zero-Day? Well, it can
  • 7:55 - 7:59
    vary a lot. Sometimes, you get really
    lucky. I've seen someone produce a
  • 7:59 - 8:05
    Chrome/V8 bug in 2 days. Other times,
    it's taken two weeks. Sometimes, it takes
  • 8:05 - 8:10
    a month. But sometimes, it can actually
    take a lot longer to study and exploit new
  • 8:10 - 8:14
    targets. You need to be thinking, you
    know, you need to be looking at time in
  • 8:14 - 8:20
    these kinds of scales. And so it could take
    3.5 months. It could take maybe even 6
  • 8:20 - 8:23
    months for some targets. The fact of the
    matter is that it's almost impossible to
  • 8:23 - 8:28
    tell how long the process is going to take
    you. And so unlike a CTF challenge,
  • 8:28 - 8:33
    there's no upper bound to this process of
    Zero-Day Engineering. There's no guarantee
  • 8:33 - 8:37
    that the exploitable bugs you need to make
    a Zero-Day, even exist in the software
  • 8:37 - 8:43
    you're targeting. You also don't always
    know what you're looking for, and you're
  • 8:43 - 8:48
    working on projects that are many orders of
    magnitude larger than any sort of
  • 8:48 - 8:51
    educational resource. We're talking
    millions of lines of code where your
  • 8:51 - 8:57
    average CTF challenge might be a couple
    hundred lines of C at most. So I can
  • 8:57 - 9:02
    already see the tears and self-doubt in
    some of your eyes. But I really want to
  • 9:02 - 9:07
    stress that you shouldn't be too hard on
    yourself about this stuff. As a novice,
  • 9:07 - 9:11
    you need to keep these caveats in mind and
    accept that failure is not unlikely in the
  • 9:11 - 9:16
    journey. All right? So please check this
    box before watching the rest of the talk.
  • 9:17 - 9:21
    So having built some psychological
    foundation for the task at hand, the next
  • 9:21 - 9:28
    step in the Layman's Guide is what we call
    reconnaissance. So this is kind of a goofy
  • 9:28 - 9:34
    slide, but yes, even Metasploit reminds
    you to start out doing recon. So with
  • 9:34 - 9:37
    regard to Zero-Day Engineering,
    discovering vulnerabilities against large
  • 9:37 - 9:41
    scale software can be an absolutely
    overwhelming experience. Like that
  • 9:41 - 9:45
    mountain, it's like, where do I start?
    What hill do I go up? Like, where do I
  • 9:45 - 9:49
    even go from there? So to overcome this,
    it's vital to build foundational knowledge
  • 9:49 - 9:53
    about the target. It's also one of the
    least glamorous parts of the Zero-Day
  • 9:53 - 9:58
    development process. And it's often
    skipped by many. You don't see any of the
  • 9:58 - 10:01
    other speakers really talking about this
    so much. You don't see blog posts where
  • 10:01 - 10:05
    people are like, I googled for eight hours
    about Apple Safari before writing a Zero-
  • 10:05 - 10:11
    Day for it. So you want to aggregate and
    review all existing research related to
  • 10:11 - 10:17
    your target. This is super, super
    important. So how do we do our recon? Well
  • 10:17 - 10:22
    the simple answer is Google everything.
    This is literally us Googling something,
  • 10:22 - 10:25
    and what we do is we go through and we
    click, and we download, and we bookmark
  • 10:25 - 10:30
    every single thing for about five pages.
    And you see all those buttons that you
  • 10:30 - 10:34
    never click at the bottom of Google? Those
    are all related searches that you
  • 10:34 - 10:37
    might want to look at. You should
    definitely click all of those. You should
  • 10:37 - 10:41
    also go through at least four or five
    pages and keep downloading and saving
  • 10:41 - 10:48
    everything that looks remotely relevant.
    So you just keep doing this over, and
  • 10:48 - 10:54
    over, and over again. And you just Google,
    and Google, and Google everything that you
  • 10:54 - 10:59
    think could possibly be related. And the
    idea is, you know, you just want to grab
  • 10:59 - 11:02
    all this information, you want to
    understand everything you can about this
  • 11:02 - 11:08
    target. Even if it's not Apple Safari
    specific. I mean, look into V8, look into
  • 11:08 - 11:14
    Chrome, look into Opera, look into Chakra,
    look into whatever you want. So the goal
  • 11:14 - 11:19
    is to build up a library of security
    literature related to your target and its
  • 11:19 - 11:26
    ecosystem. And then, I want you to read
    all of it. But don't
  • 11:26 - 11:29
    force yourself to understand
    everything in your stack of
  • 11:29 - 11:33
    literature. The point of this exercise is
    to build additional context about the
  • 11:33 - 11:37
    software, its architecture and its
    security track record. By the end of the
  • 11:37 - 11:40
    reconnaissance phase, you should aim to be
    able to answer these kinds of questions
  • 11:40 - 11:46
    about your target. What is the purpose of
    the software? How is it architected? Can
  • 11:46 - 11:51
    anyone describe what WebKit's architecture
    is to me? What are its major components?
  • 11:51 - 11:56
    Is there a sandbox around it? How do you
    debug it? How do the developers debug it?
  • 11:56 - 12:01
    Are there any tips and tricks, are there
    special flags? What does its security
  • 12:01 - 12:04
    track record look like? Does it have
    historically vulnerable components? Are
    there existing writeups, exploits, or
    there existing writeups, exploits, or
    research in it? etc. All right, we're
  • 12:11 - 12:16
    through reconnaissance. Step 2 is going to
    be target selection. So, there's actually
  • 12:16 - 12:20
    a few different names that you could maybe
    call this. Technically, we're targeting
  • 12:20 - 12:25
    Apple's Safari, but you want to try and
    narrow your scope. So what we're looking
  • 12:25 - 12:33
    at here is a TreeMap visualization of
    the WebKit source. So Apple's Safari web
  • 12:33 - 12:36
    browser is actually built on top of the
    WebKit framework, which is essentially a
  • 12:36 - 12:42
    browser engine. This is Open Source. So
    yeah, this is a TreeMap visualization of
  • 12:42 - 12:47
    the source directory, where files are
    sorted by size. So each of those boxes
  • 12:47 - 12:53
    is essentially a file, while the big grey
    boxes are directories.
  • 12:53 - 13:02
    All the sub-squares are files, and each
    file is sized based on its lines of code.
  • 13:02 - 13:07
    The blue hues represent the approximate
    maximum cyclomatic complexity detected in
  • 13:07 - 13:11
    each source file. And you might be getting
  • 13:11 - 13:14
    flashbacks back to that picture of that
    mountain peak. How do you even start to
  • 13:14 - 13:18
    hunt for security vulnerabilities in a
    product or codebase of this size?
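To make the whittling-down concrete: a treemap like the one on this slide is easy to approximate yourself. A rough sketch (ours, not Ret2's actual tooling), assuming pandas and plotly are installed; the checkout path is hypothetical.

```python
# Rough sketch of generating a source-tree treemap: size each source
# file by its line count and group by top-level directory.
import os
import pandas as pd
import plotly.express as px

ROOT = "WebKit/Source"  # hypothetical path to a WebKit checkout
rows = []
for dirpath, _, files in os.walk(ROOT):
    for name in files:
        if not name.endswith((".c", ".cpp", ".h")):
            continue
        try:
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                loc = sum(1 for _ in f)  # lines of code, crudely
        except OSError:
            continue
        top = os.path.relpath(dirpath, ROOT).split(os.sep)[0]
        rows.append({"dir": top, "file": name, "loc": loc})

fig = px.treemap(pd.DataFrame(rows), path=["dir", "file"], values="loc")
fig.write_html("webkit_treemap.html")
```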
  • 13:18 - 13:22
    3 million lines of code. You know, I've
    maybe written like, I don't know, like a
  • 13:22 - 13:29
    100,000 lines of C or C++ in my life, let
    alone read or reviewed 3 million. So the
  • 13:29 - 13:34
    short answer to breaking this problem down
    is that you need to reduce your scope of
  • 13:34 - 13:40
    evaluation, and focus on depth over
    breadth. And this is most critical when
  • 13:40 - 13:45
    attacking extremely well-picked-over
    targets. Maybe you're probing an IoT
  • 13:45 - 13:48
    device? You can probably just sneeze at
    that thing and you are going to find
  • 13:48 - 13:52
    vulnerabilities. But you know, you're
    fighting on a very different landscape here.
  • 13:52 - 14:00
    And so you need to be very detailed
    with your review. So reduce your scope.
  • 14:00 - 14:04
    Our reconnaissance and past experience
    with exploiting browsers had led us to
  • 14:04 - 14:09
    focus on WebKit's JavaScript engine,
    highlighted up here in orange. So, bugs in
  • 14:09 - 14:14
    JS engines, when it comes to browsers, are
    generally regarded as extremely powerful
  • 14:14 - 14:18
    bugs. But they're also few and far
    between, and they're kind of becoming more
  • 14:18 - 14:24
    rare, as more of you are looking for bugs.
    More people are colliding, they're dying
  • 14:24 - 14:29
    quicker. And so, anyway, let's try to
    reduce our scope. So we reduce our scope
  • 14:29 - 14:34
    from 3 million down in 350,000 lines of
    code. Here, we'll zoom into that orange.
  • 14:34 - 14:37
    So now we're looking at the JavaScript
    directory, specifically the JavaScriptCore
  • 14:37 - 14:42
    directory. So this is the JavaScript
    engine within WebKit, as used by Safari,
  • 14:42 - 14:48
    on MacOS. And specifically, to further
    reduce our scope, we chose to focus on the
  • 14:48 - 14:53
    highest-level interface of JavaScriptCore,
    which is the runtime folder. So this
  • 14:53 - 14:58
    contains code with almost one-to-one
    mappings to JavaScript objects and methods
  • 14:58 - 15:06
    in the interpreter. So, for example,
    Array.reverse, or concat, or whatever.
  • 15:06 - 15:11
    It's very close to what JavaScript
    authors are familiar with. And so this is
  • 15:11 - 15:17
    what the runtime folder looks like, at
    approximately 70,000 lines of code. When
  • 15:17 - 15:22
    we were spinning up for PWN2OWN, we said,
    okay, we are going to find a bug in this
  • 15:22 - 15:26
    directory in one of these files, and we're
    not going to leave it until we have, you
  • 15:26 - 15:31
    know, walked away with something. So if we
    take a step back now: this is what we
  • 15:31 - 15:35
    started with, and this is what we've done.
    We've reduced our scope. So this helps
  • 15:35 - 15:39
    illustrate this, you know, whittling
    process. It was almost a little bit
  • 15:39 - 15:44
    arbitrary. Previously, there have been
    a lot of bugs in the runtime
  • 15:44 - 15:51
    directory. But it's really been cleaned up
    the past few years. So anyway, this is
  • 15:51 - 15:57
    what we chose for our RCE. So having spent
    a number of years going back and forth
  • 15:57 - 16:01
    between attacking and defending, I've come
    to recognize that bad components do not
  • 16:01 - 16:05
    get good fast. Usually researchers are
    able to hammer away at these components
  • 16:05 - 16:11
    for years before they reach some level of
    acceptable security. So to escape Safari's
  • 16:11 - 16:15
    sandbox, we simply looked at the security
    trends covered during the reconnaissance phase.
  • 16:15 - 16:19
    So, this observation - that historically
    bad components often take years to
  • 16:19 - 16:24
    improve - meant that we chose to look at
    WindowServer. And for those that don't
  • 16:24 - 16:29
    know, WindowServer is a root level system
    service that runs on MacOS. Our research
  • 16:29 - 16:36
    turned up a trail of ugly bugs from
    MacOS, essentially from the WindowServer,
  • 16:36 - 16:43
    which is accessible to the Safari sandbox.
    And in particular, when we were doing our
  • 16:43 - 16:47
    research, we were looking at ZDI's website,
    and you can just search all their
  • 16:47 - 16:53
    advisories that they've disclosed. In
    particular, in 2016, there were over 10
    vulnerabilities reported to ZDI that were
    vulnerabilities report to ZDI that were
    used as sandbox escapes or privilege
  • 16:57 - 17:03
    escalation style issues. And so, these are
    only the vulnerabilities that were reported to
    ZDI. If you look at 2017, there were 4, all,
    ZDI. If you look in 2017, there is 4 all,
    again, used for the same purpose. I think,
  • 17:10 - 17:16
    all of these were actually, probably used
    at PWN2OWN both years. And then in 2018,
  • 17:16 - 17:20
    there is just one. And so, this is 3
    years. Over the span of 3 years where
  • 17:20 - 17:25
    people were hitting the same exact
    component, and Apple or researchers around
  • 17:25 - 17:29
    the world could have been watching, or
    listening and finding bugs, and fighting
  • 17:29 - 17:36
    over this land right here. And so, it's
    pretty interesting. I mean, it gives some
  • 17:36 - 17:42
    perspective. The fact of the matter is
    that it's really hard
  • 17:42 - 17:46
    for bad components to improve quickly.
    Nobody wants to try and sit down and
  • 17:46 - 17:50
    rewrite bad code. And vendors are
    terrified, absolutely terrified of
  • 17:50 - 17:55
    shipping regressions. Most vendors will
    only patch or modify old bad code only
  • 17:55 - 18:02
    when they absolutely must. For example,
    when a vulnerability is reported to them.
  • 18:02 - 18:08
    And so, as listed on this slide, there's a
    number of reasons why a certain module or
  • 18:08 - 18:13
    component has a terrible security track
    record. Just try to keep in mind, that's
  • 18:13 - 18:18
    usually a good place to look for more
    bugs. So if you see a waterfall of bugs
  • 18:18 - 18:23
    this year in some component, like, wasm or
    JIT, maybe you should be looking there,
  • 18:23 - 18:27
    right? Because that might be good for a
    few more years. All right.
  • 18:28 - 18:32
    Step three. So after all this talk, we are finally
    getting to a point where we can start
  • 18:32 - 18:38
    probing and exploring the codebase in
    greater depth. This step is all about bug hunting.
  • 18:38 - 18:45
    So as an individual researcher or
    small organization, the hardest part of
  • 18:45 - 18:49
    the Zero-Day engineering process is
    usually discovering and exploiting a
  • 18:49 - 18:53
    vulnerability. That's just kind of from
    our perspective. This can maybe vary from
  • 18:53 - 18:58
    person to person. But you know, we don't
    have a hundred million dollars to spend on
  • 18:58 - 19:06
    fuzzers, for example. And so we literally
    have one MacBook, right? So it's kind of
  • 19:06 - 19:11
    like looking for a needle in a haystack.
    We're also well versed in the exploitation
  • 19:11 - 19:15
    process itself. And so those end up being
    a little bit more formulaic for ourselves.
  • 19:15 - 19:18
    So there are two core strategies for
    finding exploitable vulnerabilities.
  • 19:18 - 19:22
    There's a lot of pros and cons to both of
    these approaches. But I don't want to
  • 19:22 - 19:25
    spend too much time talking about their
    strengths or weaknesses. So they're all
  • 19:25 - 19:31
    listed here, the short summary is that
    fuzzing is the main go-to strategy for
  • 19:31 - 19:37
    many security enthusiasts. Some of the key
    perks are that it's scalable and almost
  • 19:37 - 19:42
    always yields results. And so, spoiler
    alert for later in the talk:
  • 19:42 - 19:49
    we fuzzed both of our bugs, both the bugs
    that we used for the full chain. And, you know,
  • 19:49 - 19:54
    it's 2018, and these things are still falling
    out with some very trivial means. OK. So,
  • 19:54 - 19:59
    source review is the other main strategy.
    Source review is often much harder for
  • 19:59 - 20:03
    novices, but it can produce some high
    quality bugs when performed diligently.
  • 20:03 - 20:06
    You know if you're looking to just get
    into this stuff, I would say, start real
  • 20:06 - 20:13
    simple, start with fuzzing, and see how
    far you get. So, yeah, for the purpose of
  • 20:13 - 20:16
    this talk, we are mostly going to focus on
    fuzzing. This is a picture from the
  • 20:16 - 20:21
    dashboard of a simple, scalable fuzzing
    harness we built for JavaScriptCore. This
  • 20:21 - 20:25
    is when we were ramping up for PWN2OWN and
    trying to build our chain. It was a
  • 20:25 - 20:30
    grammar-based JavaScript fuzzer, based on
    Mozilla's Dharma. There is nothing fancy
  • 20:30 - 20:35
    about it. This is a snippet of what some
    of its output looked like.
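For a flavor of what a grammar-based generator does, here is a toy, from-scratch sketch - not Ret2's actual Dharma grammars - that recursively expands a hand-written grammar into JavaScript-ish test cases:

```python
# Toy grammar-based test case generator in the spirit of dharma.
# Nonterminals in <angle brackets> are expanded until only JS remains.
import random
import re

GRAMMAR = {
    "stmt": [
        "var v<n> = <expr>;",
        "v<n>.reverse();",
        "v<n>.concat(<expr>);",
        "v<n>[<n>] = <expr>;",
    ],
    "expr": ["[<expr>, <expr>]", "new Array(<n>)", "<n>", '"A".repeat(<n>)'],
    "n": ["0", "1", "7", "256", "65535", "0x41414141"],
}

def expand(symbol, depth=0):
    productions = GRAMMAR[symbol]
    if depth > 6:  # cap recursion so test cases stay finite
        productions = [p for p in productions if "<expr>" not in p]
    chosen = random.choice(productions)
    # recursively expand any <nonterminals> left in the production
    return re.sub(r"<(\w+)>", lambda m: expand(m.group(1), depth + 1), chosen)

print("\n".join(expand("stmt") for _ in range(5)))
```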
  • 20:35 - 20:38
    We had only started building it out when we
    actually found the exploitable
  • 20:38 - 20:43
    vulnerability that we ended up using. So
    we haven't really played with this much
  • 20:43 - 20:48
    since then, but it's, I mean, it shows
    kind of how easy it was to get where we
  • 20:48 - 20:55
    needed to go. So, something we'd like
    to stress heavily to the folks who fuzz,
  • 20:55 - 21:00
    is that it really must be treated as a
    science for these competitive targets.
  • 21:00 - 21:04
    Guys, I know code coverage isn't the best
    metric, but you absolutely must use some
  • 21:04 - 21:09
    form of introspection to quantify the
    progress and reach of your fuzzing. Please
  • 21:09 - 21:14
    don't just fuzz blindly.
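The mechanics of that introspection don't have to be fancy. A from-scratch sketch of one way to do it, assuming a jsc binary built with clang's coverage instrumentation (-fprofile-instr-generate -fcoverage-mapping); every path here is hypothetical:

```python
# Sketch of a coverage-report loop: run a batch of generated cases,
# merge the raw profiles, and emit a browsable HTML report. Grey
# (never-hit) blocks are where to dig next.
import glob
import os
import subprocess

os.makedirs("prof", exist_ok=True)
env = dict(os.environ, LLVM_PROFILE_FILE="prof/%p.profraw")
for case in glob.glob("cases/*.js"):
    try:
        subprocess.run(["./jsc", case], env=env, timeout=10)
    except subprocess.TimeoutExpired:
        pass  # hangs happen; move on

subprocess.run(["llvm-profdata", "merge", "-sparse",
                *glob.glob("prof/*.profraw"), "-o", "merged.profdata"],
               check=True)
subprocess.run(["llvm-cov", "show", "./jsc",
                "-instr-profile=merged.profdata",
                "-format=html", "-output-dir=report"], check=True)
```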
    So our fuzzer would generate web-based code coverage
  • 21:14 - 21:18
    reports of our grammars every 15 minutes,
    or so. This allowed us to quickly iterate
  • 21:18 - 21:23
    upon our fuzzer, helping generate more
    interesting complex test cases. A good
  • 21:23 - 21:26
    target is 60 percent code coverage. So you
    can see that in the upper right hand
  • 21:26 - 21:29
    corner. That's kind of what we were
    shooting for. Again, it really varies from
  • 21:29 - 21:34
    target to target. This was also just us
    focusing on the runtime folder, as you can see
  • 21:34 - 21:39
    in the upper left hand corner. And so,
    something that we have observed, again
  • 21:39 - 21:46
    over many targets, even exotic
    targets, is that bugs almost always fall
  • 21:46 - 21:52
    out of what we call the hard-fought final
    coverage percentages. And so, what this
  • 21:52 - 21:56
    means is, you might work for a while,
    trying to build up your coverage, trying
  • 21:56 - 22:02
    to, you know, build a good set of test
    cases, or grammars for fuzzing, and then
  • 22:02 - 22:06
    you'll hit that 60 percent, and you'll be like,
    okay, what am I missing now? Like everyone
  • 22:06 - 22:10
    gets that 60 percent, let's say. But then,
    once you start inching a little bit
  • 22:10 - 22:15
    further is when you start finding a lot of
    bugs. So, for example, we will pull up
  • 22:15 - 22:19
    code, and we'll be like, why did we not
    hit those blocks up there? Why are those
  • 22:19 - 22:23
    grey boxes? Why did we never hit those in
    our millions of test cases? And we'll go
  • 22:23 - 22:26
    find that that's some weird edge case, or
    some unoptimized condition, or something
  • 22:26 - 22:32
    like that, and we will modify our test
    cases to hit that code. Other times we'll
  • 22:32 - 22:36
    actually sit down, pull it up on our
    projector and talk through some of that
  • 22:36 - 22:39
    and we'll be like: What the hell is going
    on there? This is actually, it's funny,
  • 22:39 - 22:44
    this is actually a live photo that I took
    during our Pwn2Own hunt. You know, as
  • 22:44 - 22:48
    cliche as this picture is of hackers
    standing in front of like a dark screen in
  • 22:48 - 22:52
    a dark room, this was absolutely real. You
    know, we were just reading some
  • 22:52 - 23:02
    code. And so it's good to rubber-duck
    among co-workers and to hash out ideas to
  • 23:02 - 23:11
    help confirm theories or discard them. And
    so yeah this kinda leads us to the next
  • 23:11 - 23:15
    piece of advice, for when you're doing
    source review; this applies to both
  • 23:15 - 23:21
    debugging or assessing those corner cases
    and whatnot. If you're ever unsure about
  • 23:21 - 23:25
    the code that you're reading you
    absolutely should be using debuggers and
  • 23:25 - 23:30
    dynamic analysis. So as painful as it can
    maybe be to set up JavaScriptCore or
  • 23:30 - 23:36
    debug this massive C++ application that's
    dumping these massive call stacks that are
  • 23:36 - 23:40
    100 [frames] deep, you need to learn those
    tools or you are never going to be able to
  • 23:40 - 23:47
    understand the amount of context necessary
    for some of these bugs and complex code.
  • 23:47 - 23:55
    So for example one of our blog posts makes
    extensive use of rr to reverse or to root
  • 23:55 - 23:59
    cause the vulnerability that we ended up
    exploiting. It was a race condition in the
  • 23:59 - 24:03
    garbage collector - totally wild bug.
    There's probably, I said there's probably
  • 24:03 - 24:08
    3 people on earth that could have spotted
    this bug through source review. It
  • 24:08 - 24:13
    required immense knowledge of the codebase, in
    my opinion to be able to recognize this as
  • 24:13 - 24:16
    a vulnerability. We found it through
    fuzzing; we had to root cause it using time-
  • 24:16 - 24:24
    travel debugging with Mozilla's rr, which is an
    amazing project.
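rr runs on Linux, so take this as a hedged sketch of the record-until-crash pattern rather than the exact setup from the blog post; the point is that a crashing run recorded under rr can be replayed deterministically and walked backwards:

```python
# Hedged sketch of a record-until-crash loop with Mozilla's rr
# (Linux-only; not the exact setup from the write-up). A kept trace
# can then be replayed with `rr replay` and stepped backwards with
# reverse-continue and watchpoints.
import glob
import subprocess

for case in sorted(glob.glob("cases/*.js")):
    result = subprocess.run(["rr", "record", "./jsc", case])
    # rr generally propagates the tracee's exit status; treating any
    # nonzero status as "interesting" is a rough heuristic.
    if result.returncode != 0:
        print(f"possible crash on {case}; inspect with: rr replay")
        break
```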
  • 24:24 - 24:28
    So, yeah - absolutely use debuggers. This is an example of a
    call stack again, just using a debugger to
  • 24:28 - 24:32
    dump the callstack from a function that
    you are auditing can give you an insane
  • 24:32 - 24:36
    amount of context as to how that function
    is used, what kind of data it's operating
  • 24:36 - 24:42
    on. Maybe, you know, what kind of areas of
    the codebase it's called from. You're not
  • 24:42 - 24:46
    actually supposed to be able to read the
    slide, but it's a
  • 24:46 - 24:56
    backtrace from GDB that is 40 or 50
    frames deep. All right. So there is this
  • 24:56 - 25:01
    huge misconception by novices that new
    code is inherently more secure and that
  • 25:01 - 25:07
    vulnerabilities are only being removed
    from codebases, not added. This is almost
  • 25:07 - 25:12
    patently false and this is something that
    I've observed over the course of several
  • 25:12 - 25:18
    years, across countless targets, you know, code
    from all sorts of vendors. And there's
  • 25:18 - 25:24
    this really great blog post put out by
    Ivan Fratric from Google Project Zero this past fall, and in his
  • 25:24 - 25:29
    blog post he basically ... so one year ago
    he fuzzed WebKit using his fuzzer
  • 25:29 - 25:33
    called Domato. He found a bunch of
    vulnerabilities, he reported them and then
  • 25:33 - 25:40
    he open sourced the fuzzer. But then this
    year, this fall, he downloaded his fuzzer,
  • 25:40 - 25:44
    ran it again with little to no changes,
    just to get things up and running. And
  • 25:44 - 25:48
    then he found another eight plus
    exploitable use-after-free vulnerabilities.
  • 25:48 - 25:51
    So what's really amazing
    about this, is when you look at these last
  • 25:51 - 25:56
    two columns that I have highlighted in
    red, virtually all the bugs he found had
  • 25:56 - 26:04
    been introduced or regressed in the past
    12 months. So yes, new vulnerabilities get
  • 26:04 - 26:11
    introduced every single day. The biggest
    reason new code is considered harmful is
  • 26:11 - 26:16
    simply that it's not had years to sit in
    market. This means it hasn't had time to
  • 26:16 - 26:21
    mature, it hasn't been tested exhaustively
    like the rest of the code base. As soon as
  • 26:21 - 26:25
    that developer pushes it, whenever it hits
    release, whenever it hits stable, that's
  • 26:25 - 26:29
    when you have a billion users pounding at
    it - let's say on Chrome. I don't know how
  • 26:29 - 26:33
    big that user base is but it's massive and
    that's thousands of users around the world
  • 26:33 - 26:38
    just using the browser who are effectively
    fuzzing it just by browsing the web. And
  • 26:38 - 26:41
    so of course you're going to manifest
    interesting conditions that will cover
  • 26:41 - 26:47
    things that are not in your test cases and
    unit testing. So yeah.
  • 26:47 - 26:50
    The second point down here is that it's
    not uncommon for new code to break
  • 26:50 - 26:55
    assumptions made elsewhere in the code
    base. And this is also actually extremely
  • 26:55 - 27:00
    common. The complexity of these code bases
    can be absolutely insane and it can be
  • 27:00 - 27:04
    extremely hard to tell if let's say some
    new code that Joe Schmoe, the new
  • 27:04 - 27:10
    developer, adds breaks some paradigm held
    by let's say the previous owner of the
  • 27:10 - 27:15
    codebase. He maybe doesn't understand it
    as well - you know, maybe it could be an
  • 27:15 - 27:23
    expert developer who just made a mistake.
    It's super common. Now a piece of advice.
  • 27:23 - 27:27
    This should be a no-brainer for bug
    hunting, but novices often grow impatient
  • 27:27 - 27:30
    and start hopping around between code and
    functions and getting lost or trying to
  • 27:30 - 27:36
    chase use-after-frees or bug classes
    without really truly understanding what
  • 27:36 - 27:42
    they're looking for. So a great starting
    point is to always identify the sources of
  • 27:42 - 27:46
    user input or the way that you can
    interface with the program and then just
  • 27:46 - 27:51
    follow the data, follow it down. You know
    what functions parse it, what manipulates
  • 27:51 - 27:59
    your data, what reads it, what writes to
    it. You know just keep it simple. And so
  • 27:59 - 28:04
    when we were looking for our sandbox escape,
    we were looking at WindowServer, and
  • 28:04 - 28:07
    our research had shown that there are all
    of these functions. We didn't know anything
  • 28:07 - 28:12
    about Mac, but we read this blog post from
    Keen Lab that was like "Oh, there are all these
  • 28:12 - 28:16
    functions that you can send data to in
    window server" and apparently there's
  • 28:16 - 28:21
    about six hundred, and they are all
    functions prefixed with underscore
  • 28:21 - 28:28
    underscore X. And so the 600 endpoints
    will parse and operate upon data that we
  • 28:28 - 28:33
    send to them. And so to draw a rough
    diagram, there is essentially this big red
  • 28:33 - 28:38
    data tube from the safari sandbox to the
    windows server system service. This tube
  • 28:38 - 28:44
    can deliver arbitrary data that we control
    to all those six hundred end points. We
  • 28:44 - 28:48
    immediately thought let's just try to man-
    in-the-middle this data pipe, so that we
  • 28:48 - 28:52
    can see what's going on. And so that's
    exactly what we did. We just hooked up
  • 28:52 - 28:59
    Frida to it, another open source DBI. It's
    on GitHub. It's pretty cool.
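A rough sketch of that man-in-the-middle, assuming frida-python and the privileges to attach; the process name, offsets, and truncation are illustrative rather than the real harness:

```python
# Rough sketch: hook mach_msg() in a WindowServer client and stream
# outbound mach messages back to Python.
import frida

SCRIPT = """
Interceptor.attach(Module.findExportByName(null, "mach_msg"), {
  onEnter(args) {
    if ((args[1].toInt32() & 1) === 0) return;   // MACH_SEND_MSG only
    const msg  = args[0];                        // mach_msg_header_t *
    const size = args[2].toInt32();              // send_size
    send({ id: msg.add(20).readU32() },          // msgh_id at offset 20
          msg.readByteArray(Math.min(size, 64)));
  }
});
"""

def on_message(message, data):
    if message["type"] == "send":
        print("msgh_id", message["payload"]["id"], data.hex() if data else "")

session = frida.attach("Dock")  # any WindowServer client; needs privileges
script = session.create_script(SCRIPT)
script.on("message", on_message)
script.load()
input("streaming mach messages... press enter to stop\n")
```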
  • 28:59 - 29:05
    And we were able to stream all of the messages flowing
    over this pipe so we can see all this data
  • 29:05 - 29:09
    just being sent into the WindowServer
    from all sorts of applications - actually
  • 29:09 - 29:13
    everything on macOS talks to this. The
    WindowServer is responsible for drawing
  • 29:13 - 29:17
    all your windows on the desktop, your
    mouse clicks, your whatever. It's kind of
  • 29:17 - 29:24
    like explorer.exe on Windows. So you know
    we see all this data coming through, we
  • 29:24 - 29:29
    see all these crazy messages, all these
    unique message formats, all these data
  • 29:29 - 29:34
    buffers that it's sending in and this is
    just begging to be fuzzed. And so we said
  • 29:34 - 29:38
    "OK, let's fuzz it" and we're getting all
    hyped and I distinctly remember thinking
  • 29:38 - 29:42
    maybe we can jerry-rig AFL into the
    window server or let's mutate these
  • 29:42 - 29:51
    buffers with Radamsa or why don't we just
    try flipping some bits. So that's what we
  • 29:51 - 29:57
    did. So Halvar Flake actually had a very timely tweet
    just a few weeks back that echoed this
  • 29:57 - 30:03
    exact experience. He said that "Looking at
    my Security / Vulnerability research
  • 30:03 - 30:07
    career, my biggest mistakes were almost
    always trying to be too clever. Success
  • 30:07 - 30:12
    hides behind what is the dumbest thing
    that could possibly work." The takeaway
  • 30:12 - 30:18
    here is that you should always start
    simple and iterate. So this is our Fuzz
  • 30:18 - 30:23
    farm. It's a single 13 inch MacBook Pro. I
    don't know if this is actually going to
  • 30:23 - 30:26
    work, it's not a big deal if it doesn't.
    I'm only gonna play a few seconds of it.
  • 30:26 - 30:32
    This is me literally placing my wallet on
    the enter key and you can see this box
  • 30:32 - 30:35
    popping up and we're fuzzing - our fuzzer
    is running now and flipping bits in the
  • 30:35 - 30:39
    messages. And the screen is changing
    colors. You're going to start seeing the
  • 30:39 - 30:43
    boxes freaking out. It's going all over
    the place. This is because the bits are
  • 30:43 - 30:47
    being flipped, it's corrupting stuff, it's
    changing the messages. Normally, this
  • 30:47 - 30:51
    little box is supposed to show your
    password hint. But the thing is, by holding
  • 30:51 - 30:56
    the enter key on the lock screen, all
    this traffic was being generated to the
  • 30:56 - 31:00
    WindowServer, and every time the
    WindowServer crashed - you know where it brings
  • 31:00 - 31:04
    you? It brings you right back to your lock
    screen. So we had this awesome fuzzing
  • 31:04 - 31:16
    setup by just holding the enter key.
    Applause
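The mutation really was about that dumb. As a from-scratch illustration of "just try flipping some bits":

```python
# The "dumbest thing that could possibly work": random bit flips.
# A from-scratch illustration, not the actual harness code.
import random

def flip_bits(message: bytes, flips: int = 4) -> bytes:
    buf = bytearray(message)
    for _ in range(flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)  # flip one random bit
    return bytes(buf)

# e.g. mutate a captured message before it gets resent
original = bytes.fromhex("1300000020000000")  # stand-in for a real message
print(flip_bits(original).hex())
```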
  • 31:16 - 31:22
    And, you know, we lovingly titled that
    picture "Advanced Persistent Threat" in
  • 31:22 - 31:30
    our blog. So this is a crash that we got
    out of the fuzzer. This occurred very
  • 31:30 - 31:34
    quickly after ... this was probably within
    the first 24 hours. So we found a ton of
  • 31:34 - 31:39
    crashes, we didn't even explore all of
    them. There are probably a few still
  • 31:39 - 31:44
    sitting on our server. But there's lots
    and all the rest ... lots of garbage. But
  • 31:44 - 31:49
    then this one stands out in particular:
    Anytime you see this thing up here that says
  • 31:49 - 31:54
    "EXC_BAD_ACCESS" with a big number up
    there with address equals blah blah blah.
  • 31:54 - 31:58
    That's a really bad place to be. And so
    this is the bug that we ended up using at
  • 31:58 - 32:01
    Pwn2Own to perform our sandbox escape. If
    you want to read about it - again, it's on
  • 32:01 - 32:06
    the blog, we're not going to go too deep
    into it here. So maybe some of you have
  • 32:06 - 32:13
    seen the infosec comic. You know it's all
    about how people try to do these really
  • 32:13 - 32:17
    cool, clever things. People
    can get too caught up trying to inject so
  • 32:17 - 32:22
    much science and technology into these
    problems that they often miss the forest
  • 32:22 - 32:27
    for the trees. And so here we are in the
    second panel. We just wrote this really
  • 32:27 - 32:32
    crappy little fuzzer and we found our bug
    pretty quickly. And this guy's really
  • 32:32 - 32:39
    upset. Which brings us to the
    misconception that only expert researchers
  • 32:39 - 32:43
    with blank tools can find bugs. And so you
    can fill in the blank with whatever you
  • 32:43 - 32:51
    want. It can be cutting edge tools, state
    of the art, state sponsored, magic bullet.
  • 32:51 - 32:59
    This is not true. There are very few
    secrets. So the next observation: you
  • 32:59 - 33:03
    should be very wary of any bugs that you
    find quickly. A good mantra is that an
  • 33:03 - 33:09
    easy-to-find bug is just as easily found
    by others. And so what this means is that
  • 33:09 - 33:13
    soon after our blog post went out ...
    actually at Pwn2Own 2018 we actually knew
  • 33:13 - 33:19
    we had collided with Fluorescence, one of
    the other competitors. We both struggled
  • 33:19 - 33:25
    with exploiting this issue ... it is a
    difficult bug to exploit. And we were ...
  • 33:25 - 33:30
    we had some very creative exploit, it was
    very strange. But there was some
  • 33:30 - 33:33
    discussion after the fact on Twitter,
    started by Ned - he's probably
  • 33:33 - 33:36
    out here, actually speaking tomorrow. You
    guys should go see his talk about the
  • 33:36 - 33:43
    Chrome IPC. That should be really good.
    But there is some discussion on Twitter,
  • 33:43 - 33:47
    that Ned had started, and Nicholas, who is
    also here, said "well, at least three
  • 33:47 - 33:52
    teams found it separately". So at least
    us, Fluorescence and Nicholas had found
  • 33:52 - 33:57
    this bug. And we were all at Pwn2Own, so
    you can think how many people out there
  • 33:57 - 34:01
    might have also found this. There's
    probably at least a few. How many people
  • 34:01 - 34:07
    actually tried to weaponize this thing?
    Maybe not many. It is kind of a difficult
  • 34:07 - 34:15
    bug. And so there are probably at least a
    few other researchers who are aware of
  • 34:15 - 34:21
    this bug. So yeah, that kinda closes this out:
    you know, if you found a bug very quickly
  • 34:21 - 34:25
    especially with fuzzing, you can almost
    guarantee that someone else has found it.
  • 34:25 - 34:31
    So I want to pass over the next section to
    Amy to continue.
  • 34:31 - 34:38
    Amy: So we just talked a bunch about, you
    know, techniques and expectations when
  • 34:38 - 34:42
    you're actually looking for the bug. Let
    me take over here and talk a little bit
  • 34:42 - 34:48
    about what to expect when trying to
    exploit whatever bug you end up finding.
  • 34:48 - 34:54
    Yeah and so we have the exploit
    development as the next step. So OK, you
  • 34:54 - 34:57
    found a bug right, you've done the hard
    part. You were looking at whatever your
  • 34:57 - 35:01
    target is, maybe it's a browser maybe it's
    the window server or the kernel or
  • 35:01 - 35:06
    whatever you're trying to target. But the
    question is how do you actually do the rest?
  • 35:06 - 35:10
    How do you go from the bug to
    actually popping a calculator onto the
  • 35:10 - 35:16
    screen? The systems that you're working
    with have such a high level of complexity
  • 35:16 - 35:20
    that even just, like, understanding, you know,
    enough to know how your bug works might
  • 35:20 - 35:24
    not be enough to actually know how to
    exploit it. So should we try to, like, brute force
    our way to an exploit - is that a good
    our way to an exploit, is that a good
    idea? Well all right before we try to
  • 35:29 - 35:34
    tackle your bug let's take a step back and
    ask a slightly different question. How do
  • 35:34 - 35:39
    we actually write an exploit like this in
    general? Now I feel like a lot of people
  • 35:39 - 35:44
    consider these kinds of exploits to maybe be
    in their own league, at least when you
  • 35:44 - 35:50
    compare them to something like maybe what
    you'd do at a CTF competition or something
  • 35:50 - 35:56
    simpler like that. And if you were for
    example to be given a browser exploit
  • 35:56 - 36:00
    challenge at a CTF competition it may seem
    like an impossibly daunting task has just
  • 36:00 - 36:05
    been laid in front of you if you've never
    done this stuff before. So how can we work
  • 36:05 - 36:09
    to sort of change that view? And you know
    it might be kind of cliche but I actually
  • 36:09 - 36:14
    think the best way to do it is through
    practice. And I know everyone says "oh how
  • 36:14 - 36:20
    do you get good", "oh, practice". But I
    think that this is actually very valuable
  • 36:20 - 36:25
    for this and the way that practicing
    actually comes out is that, well, before we
  • 36:25 - 36:29
    talked a lot about consuming everything
    you could about your targets, like
  • 36:29 - 36:34
    searching for everything you could that's
    public, downloading it, trying to read it
  • 36:34 - 36:37
    even if you don't understand it, because
    you'll hopefully glean something from it;
  • 36:37 - 36:42
    it doesn't hurt but maybe your goal now
    could be actually trying to understand it
  • 36:42 - 36:47
    at least as much as you can. You know
    it's going to be... it's not going to be
  • 36:47 - 36:53
    easy. These are very intricate systems
    that we're attacking here. And so it will
  • 36:53 - 36:57
    be a lot of work to understand this stuff.
    But for every old exploit you can work
  • 36:57 - 37:02
    your way through, the path will become
    clearer for actually exploiting these
  • 37:02 - 37:11
    targets. So, because I focused mostly on
    browser work - I did the browser part
  • 37:11 - 37:17
    of our chain, at least the exploitation
    part - I have done a lot of exploits and
  • 37:17 - 37:21
    read a ton of browser exploits and one
    thing that I have found is that a lot of
  • 37:21 - 37:26
    them have very very similar structure. And
    they'll have similar techniques in them
  • 37:26 - 37:31
    they'll have similar sort of primitives
    that are being used to build up the
  • 37:31 - 37:38
    exploit. And so that's one observation.
    And to actually illustrate that I have an
  • 37:38 - 37:43
    example. So alongside us at
    PWN2OWN this spring we had Samuel Groß
  • 37:43 - 37:49
    of phoenhex. He's probably here right now.
    So he was targeting Safari just like we
  • 37:49 - 37:54
    were. But his bug was in the just-in-time
    compiler, the JIT, which converts
  • 37:54 - 38:00
    JavaScript to machine code. Our bug
    was nowhere near that. It was over in the
  • 38:00 - 38:06
    garbage collector so a completely
    different kind of bug. But the bug here is
  • 38:06 - 38:11
    super reliable. It was very very clean. I
    recommend you go look at it online. It's a
  • 38:11 - 38:19
    very good resource. And then, a few months
    later, at PWN2OWN Mobile, so another pwning
  • 38:19 - 38:24
    event, we had Fluoroacetate, which was an
    amazing team who managed to pretty much
  • 38:24 - 38:28
    pwn everything they could get their hands
    on at that competition, including an
  • 38:28 - 38:33
    iPhone which of course iPhone uses Safari
    so they needed a Safari bug. The Safari
  • 38:33 - 38:38
    bug that they had was very similar in
    structure to the previous bug earlier that
  • 38:38 - 38:43
    year, at least in terms of how the bug
    worked and what you could do with it. So
  • 38:43 - 38:48
    now you could exploit both of these bugs
    with very similar exploit code almost in
  • 38:48 - 38:53
    the same way. There were a few tweaks you
    had to do because Apple added a few things
  • 38:53 - 39:01
    since then. But the path between bug and
    code execution was very similar. Then,
  • 39:01 - 39:07
    even a few months after that, there is a
    CTF called "Realworld CTF" which took
  • 39:07 - 39:11
    place in China and as the title suggests
    they had a lot of realistic challenges
  • 39:11 - 39:18
    including Safari. So of course my team
    RPISEC was there and they woke me up in
  • 39:18 - 39:23
    the middle of the night and tasked me with
    solving it. And so I was like "Okay, okay
  • 39:23 - 39:28
    I'll look at this". And I looked at it and
    it was a JIT bug, and I had never actually
  • 39:28 - 39:35
    before that looked at the Safari JIT. And
    so, you know, I didn't have much previous
  • 39:35 - 39:40
    experience doing that, but because I had
    taken the time to read all the public
  • 39:40 - 39:45
    exploits - I read all the other PWN2OWN
    competitors' exploits and I read all the
  • 39:45 - 39:49
    other things that people were releasing
    for different CVEs. I had seen a bug like
  • 39:49 - 39:55
    this before, very similar, and I knew how to
    exploit it, so I could... I was able to
  • 39:55 - 39:59
    quickly build the path from bug to code
    exec and we actually managed to get first
  • 39:59 - 40:03
    blood on the challenge which was really
    really cool.
  • 40:03 - 40:12
    Applause
    So... So what does this actually mean?
  • 40:12 - 40:19
    Well I think not every bug is going to be
    that easy to swap into an exploit, but I
  • 40:19 - 40:23
    do think that understanding old exploits
    is extremely valuable if you're trying to
  • 40:23 - 40:29
    exploit new bugs. A good place to start if
    you're interested in looking at old bugs
  • 40:29 - 40:34
    is on places like this with the js-vuln-db,
    which is basically a repository of a
  • 40:34 - 40:39
    whole bunch of JavaScript bugs and proof
    of concepts and sometimes even exploits
  • 40:39 - 40:44
    for them. And so if you were to go through
    all of those, I guarantee by the end you'd
  • 40:44 - 40:50
    have a great understanding of the types of
    bugs that are showing up these days and
  • 40:50 - 40:55
    probably how to exploit most of them.
    And... But there aren't that many bugs
  • 40:55 - 41:02
    that get published that are full exploits.
    There's only a couple a year maybe. So
  • 41:02 - 41:05
    what do you do from there once you've read
    all those and you need to learn more?
  • 41:05 - 41:13
    Well maybe start trying to exploit other
    bugs yourself so you can go... For
  • 41:13 - 41:16
    example, I like Chrome because they have a
    very nice list of all their
  • 41:16 - 41:20
    vulnerabilities that they post every time
    they have an update and they even link you
  • 41:20 - 41:25
    to the issue, so you can go and see
    exactly what was wrong and so take some of
  • 41:25 - 41:30
    these for example, at the very top you
    have out-of-bounds write in V8. So we
  • 41:30 - 41:34
    could click on that and go and see what
    the bug was, and then we could try to write an
  • 41:34 - 41:38
    exploit for it. And then by the end we'd all
    have a much better idea of how to
  • 41:38 - 41:44
    exploit an out-of-bounds write in V8 and
    we've now done it ourselves too. So this
  • 41:44 - 41:48
    is a chance to sort of apply what you've
    learned. But you say OK that's a lot of
  • 41:48 - 41:53
    work. You know I have to do all kinds of
    other stuff, I'm still in school or I have
  • 41:53 - 42:00
    a full time job. Can't I just play CTFs?
    Well it's a good question. The question is
  • 42:00 - 42:03
    how much do CTFs actually help you with
    these kinds of exploits. I do think that
  • 42:03 - 42:06
    you can build a very good mindset for this
    because you need a very adversarial
  • 42:06 - 42:13
    mindset to do this sort of work. But a lot
    of the times the challenges don't really
  • 42:13 - 42:17
    represent real-world exploitation.
    There was a good tweet just the other day,
  • 42:17 - 42:24
    like a few days ago, where they were saying
    that, yeah, random libc
  • 42:24 - 42:29
    challenges - Actually I don't think
    it's... Yes. It's libc here. Yeah. - are
  • 42:29 - 42:33
    often very artificial and don't carry much
    value to the real world because they're very
    specific. Some people love these sorts of
    specific. Some people love these sort of
    very specific CTF challenges, but I don't
  • 42:39 - 42:43
    think that there's as much value as there
    could be. However, there have been
  • 42:43 - 42:48
    a couple CTFs recently and historically as
    well that have had pretty realistic
  • 42:48 - 42:57
    challenges in them. In fact, right now a
    CTF, the 35C3 CTF, is running, and they have 3
  • 42:57 - 43:00
    browser exploit challenges, they have a
    full-chain Safari challenge, they have a
  • 43:00 - 43:06
    VirtualBox challenge. It's like, it's
    pretty crazy and it's crazy to see people
  • 43:06 - 43:11
    solve those challenges in such a short
    time span too. But I think it's definitely
  • 43:11 - 43:15
    something that you can look at afterwards
    even if you don't manage to get through
  • 43:15 - 43:20
    one of those challenges today. But
    something to try to work on. And so these
  • 43:20 - 43:25
    sorts of new CTFs are actually
    pretty good for people who want to jump
  • 43:25 - 43:32
    off to this kind of real-world, real
    exploit development work. However it can
  • 43:32 - 43:36
    be kind of scary for newcomers to
    the CTF scene because suddenly you know
  • 43:36 - 43:40
    it's your first CTF and they're asking you
    to exploit Chrome and you're like what...
  • 43:40 - 43:46
    what is going on here. So it is a
    double-edged sword sometimes. All right, so
  • 43:46 - 43:51
    now we found the bug and we have
    experience, so what do we actually do?
  • 43:51 - 43:55
    Well, you kind of have to get lucky
    though because even if you've had a ton of
  • 43:55 - 43:59
    experience that doesn't necessarily mean
    that you can instantly write an exploit
  • 43:59 - 44:03
    for a bug. Our JavaScript exploit was kind
    of like that, it was kind of nice, we knew
  • 44:03 - 44:09
    what to do right away, but the
    brows... or our sandbox exploit did not
  • 44:09 - 44:14
    fit into a nice box of a previous exploit
    that we had seen. So it took a lot of
  • 44:14 - 44:19
    effort. Quickly I'll show... So this was
    the actual bug that we exploited for the
  • 44:19 - 44:25
    sandbox. It's a pretty simple bug. It's an
    integer issue where the index is signed, which
  • 44:25 - 44:30
    means it can be negative. So normally it
    expects a value like 4, but we could give
  • 44:30 - 44:35
    it a value like negative 3, and that would
    make it go out of bounds and corrupt memory.
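To see why a signed index like this is dangerous, a quick illustration with made-up numbers: the attacker's huge unsigned value reads back as a negative signed integer, so the computed address lands before the buffer:

```python
# Illustration (ours) of the signed-index bug class. The attacker sends
# 0xFFFFFFFD; code that treats the 32-bit field as *signed* sees -3,
# and base + index * stride lands before the buffer.
import struct

raw = struct.pack("<I", 0xFFFFFFFD)   # what we send: 4294967293
index = struct.unpack("<i", raw)[0]   # what the code sees: -3
base, stride = 0x7F0000, 0x10         # hypothetical buffer layout
print(index)                          # -3
print(hex(base + index * stride))     # 0x7effd0 - out of bounds, backwards
```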
  • 44:35 - 44:40
    So, a very simple bug, not
    a crazy complex one like some of the
  • 44:40 - 44:44
    other ones we've seen. But does that mean
    that this exploit is going to be really
  • 44:44 - 44:52
    simple? Well let's see... That's a lot of
    code. So our exploit for this bug ended up
  • 44:52 - 44:59
    being about 1300 lines. And so that's
    pretty crazy. And you're probably
  • 44:59 - 45:05
    wondering how it gets there but I do want
    to say just be aware that when you do find
  • 45:05 - 45:09
    a simple-looking bug, it might not be
    that easy to solve or to exploit. And it
  • 45:09 - 45:15
    might take a lot of effort but don't get
    discouraged if it happens to you. It just
  • 45:15 - 45:19
    means it's time to ride the exploit
    development rollercoaster. And basically
  • 45:19 - 45:24
    what that means is there's a lot of ups
    and downs to an exploit and we have to
  • 45:24 - 45:28
    basically ride the rollercoaster until
    hopefully we have it, the exploit,
  • 45:28 - 45:36
    finished and we had to do that for our
    sandbox escape. And so, we found the
  • 45:36 - 45:42
    bug, and we had a bunch of great ideas: we'd
    previously seen a bug exploited like this
  • 45:42 - 45:47
    by Keen Lab, and we read their papers, and we
    had a great idea. But then we were like: OK,
  • 45:47 - 45:51
    OK, it's going to work, we just have to make
    sure this one bit is not set. And it was
  • 45:51 - 45:56
    like in a random-looking value, so we
    assumed it would be fine. But it turns out
  • 45:56 - 46:01
    that bit is always set and we have no idea
    why and no one else knows why, so thank
  • 46:01 - 46:06
    you Apple for that. And so, OK, maybe we can
    work around it, maybe we can figure out a
  • 46:06 - 46:11
    way to unset it, and we're like: oh yes, we
    can delete it! It's going to work again!
  • 46:11 - 46:15
    Everything will be great! Until we realize
    that actually breaks the rest of the
  • 46:15 - 46:21
    exploit. So it's this back and forth, it's
    an up and down. And you know sometimes
  • 46:21 - 46:27
    when you solve one issue you think you've
    got what you need and then another issue
  • 46:27 - 46:31
    shows up.
    So it's all about making incremental
  • 46:31 - 46:36
    progress towards removing all the issues
    that are in your way, getting at least
  • 46:36 - 46:39
    something that works.
    Marcus: And so just as a quick aside, this
  • 46:39 - 46:41
    all happened within like 60 minutes one
    night.
  • 46:41 - 46:45
    Amy: Yeah.
    Marcus: Amy saw me just as I was
  • 46:45 - 46:50
    walking, out of breath, I was like: are you
    kidding me? There were two bugs that tripped
  • 46:50 - 46:54
    us up and made this bug much more
    difficult to exploit. And there was no good
  • 46:54 - 46:58
    reason for why those issues were there, and
    it was a horrible experience.
  • 46:58 - 47:04
    Amy: But it's still one I'd recommend. And
    so this rollercoaster actually applies
  • 47:04 - 47:11
    to the entire process, not just to, you
    know, the exploit development, because
  • 47:11 - 47:16
    you'll have it when you find crashes that
    don't actually lead to vulnerabilities or
  • 47:16 - 47:20
    unexplainable crashes or super unreliable
    exploits. You just have to keep pushing
  • 47:20 - 47:24
    your way through until eventually,
    hopefully, you get to the end of the ride
  • 47:24 - 47:33
    and you've got yourself a nice exploit.
    And so now, OK, we assume we've
  • 47:33 - 47:36
    written an exploit at this point. Maybe
    it's not the most reliable thing, but it
  • 47:36 - 47:42
    works - like, I can get to my code exec every
    now and then. So we have to start talking
  • 47:42 - 47:47
    about the payload. So what is the payload
    exactly? A payload is whatever your
  • 47:47 - 47:51
    exploit is actually trying to do. It could
    be trying to open up a calculator on the
  • 47:51 - 47:57
    screen, it could be trying to launch your
    sandbox escape exploit, it could be trying
  • 47:57 - 48:02
    to clean up your system after your
    exploit. And by that, I mean fix the
  • 48:02 - 48:06
    program that you're actually exploiting.
    So in CTF we don't get a lot of practice
  • 48:06 - 48:11
    with this because we're so used to doing
    'system', you know, 'cat flag', and then
  • 48:11 - 48:15
    it doesn't matter if the entire program is
    crashing down in flames around us because
  • 48:15 - 48:20
    we got the flag. So in this case, yeah, you
    cat the flag and then it crashes right
  • 48:20 - 48:24
    away, because you didn't have anything
    after your action. But in the real world
  • 48:24 - 48:28
    it matters all the more, so here's
    an example of what would happen if your
  • 48:28 - 48:33
    exploit didn't clean up after itself and
    just crashed and went back to the login
  • 48:33 - 48:38
    screen. This doesn't look very good. If
    you're at a conference like Pwn2Own this
  • 48:38 - 48:44
    won't work, I don't think that they would
    let you win if this happened. And so, it's
  • 48:44 - 48:49
    very important to try to go back and fix
    up any damage that you've done to the
  • 48:49 - 48:55
    system, before it crashes, right after you
    finish. And so, actually running
  • 48:55 - 49:01
    your payload: a lot of times, in the
    exploits we see, you'll get to
  • 49:01 - 49:06
    the code exec here, which is just CC bytes,
    meaning INT3, which just tells the
  • 49:06 - 49:11
    program to stop, or trap to a breakpoint. And
    in all the exploits you see, most of the time
  • 49:11 - 49:14
    they just stop here.
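    As a concrete illustration, a proof-of-control stub is often
    nothing more than a few 0xCC bytes that the hijacked instruction
    pointer is aimed at - a hypothetical sketch:

        /* Hypothetical proof-of-code-exec stub: 0xCC is the one-byte
         * x86 INT3 instruction, so jumping here immediately traps to
         * a breakpoint, proving instruction-pointer control and
         * deliberately doing nothing else. */
        static const unsigned char proof_of_exec[] = { 0xCC, 0xCC, 0xCC, 0xCC };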
    They don't tell you any more, and to be fair, you know, they've
  • 49:14 - 49:17
    gotten their code exec, and they're just
    talking about the exploit. But you
  • 49:17 - 49:20
    still have to figure out how to do your
    payload because unless you want to write
  • 49:20 - 49:26
    those 1300 lines of code in handwritten
    assembly and then make it into shellcode,
  • 49:26 - 49:32
    you're not going to have a good time. And
    so, we had to figure out a way to actually
  • 49:32 - 49:38
    take our payload, write it to the file
    system in the only place that the sandbox
  • 49:38 - 49:43
    would let us, and then we could run it
    again as a library, and then it would go
  • 49:43 - 49:50
    and actually do our exploit.
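    Here is a rough sketch of that staging idea, assuming a
    POSIX-style loader and a hypothetical sandbox-writable path (the
    real path and payload are target-specific):

        #include <dlfcn.h>
        #include <stdio.h>

        /* Staging sketch: an earlier stage has already written the
         * full payload library to a (hypothetical) sandbox-writable
         * path; loading it runs its constructor - the real payload -
         * as ordinary compiled code instead of handwritten shellcode. */
        int run_staged_payload(void)
        {
            const char *path = "/tmp/payload.dylib";  /* placeholder path */
            void *handle = dlopen(path, RTLD_NOW);    /* constructors run on load */
            if (!handle) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return -1;
            }
            return 0;
        }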
    And so, now that you've assembled
  • 49:50 - 49:56
    everything, you're almost done here. You
    have your exploit working, you get a
  • 49:56 - 50:00
    calculator to pop up. This was actually our
    sandbox escape running and popping a
  • 50:00 - 50:04
    calculator, proving that we had
    code exec. But we're not completely done
  • 50:04 - 50:10
    yet because we need to do a little bit
    more, which is exploit reliability. We
  • 50:10 - 50:13
    need to make sure that our exploit is
    actually as reliable as we want it to be,
  • 50:13 - 50:17
    because if it only works 1 in 100 times
    that's not going to be very good. For
  • 50:17 - 50:22
    Pwn2Own, we ended up building a harness
    for our Mac which would let us run the
  • 50:22 - 50:26
    exploit multiple times, and then collect
    information about it so we could look
  • 50:26 - 50:30
    here, and we could see very easily how
    often it would fail and how often it would
  • 50:30 - 50:36
    succeed, and then we could go and get more
    information, maybe a log, and other stuff
  • 50:36 - 50:42
    like how long it ran. And this made it
    very easy to iterate over our exploit, and
  • 50:42 - 50:48
    try to correct issues and make it better
    and more reliable.
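    A minimal sketch of what such a harness can look like, assuming a
    hypothetical ./run_exploit.sh driver that exits 0 on success:

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* Reliability-harness sketch: run the exploit N times and
         * tally successes, so every change to the exploit can be
         * measured against an actual success rate. */
        int main(void)
        {
            const int runs = 100;
            int ok = 0;

            for (int i = 0; i < runs; i++) {
                pid_t pid = fork();
                if (pid == 0) {
                    /* hypothetical driver script; exits 0 on success */
                    execl("./run_exploit.sh", "run_exploit.sh", (char *)NULL);
                    _exit(127);  /* exec failed */
                }
                int status = 0;
                waitpid(pid, &status, 0);
                if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                    ok++;
            }
            printf("reliability: %d/%d\n", ok, runs);
            return 0;
        }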
  • 50:48 - 50:53
    We found that most of our failures were coming
    from our heap groom, which is where we try to line up all
  • 50:53 - 50:57
    your memory in certain ways, but there's
    not much that you can do there in our
  • 50:57 - 51:01
    situation, so we tried to make it as good
    as we could and then accepted the
  • 51:01 - 51:07
    reliability that we got.
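    For intuition, a heap groom in its simplest form just allocates
    and selectively frees objects to shape the allocator's layout - a
    hand-wavy sketch, not the actual groom used here:

        #include <stdlib.h>

        #define SPRAY    256
        #define OBJ_SIZE 0x80

        /* Groom sketch: spray same-sized allocations to fill the
         * allocator's holes, then free every other one so the next
         * same-sized allocation - ideally the vulnerable object -
         * lands in a predictable spot next to controlled neighbors. */
        void groom(void *slots[SPRAY])
        {
            for (int i = 0; i < SPRAY; i++)
                slots[i] = malloc(OBJ_SIZE);   /* line memory up */
            for (int i = 0; i < SPRAY; i += 2) {
                free(slots[i]);                /* punch predictable gaps */
                slots[i] = NULL;
            }
            /* ...then trigger allocation of the vulnerable object... */
        }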
    Something else you might want to test on is a bunch of
  • 51:07 - 51:11
    multiple devices. For example, our
    JavaScript exploit was a race condition,
  • 51:11 - 51:15
    so that means the number of CPUs on the
    device and the speed of the CPU actually
  • 51:15 - 51:20
    might matter when you're running your
    exploit. You might want to try different
  • 51:20 - 51:24
    operating systems or different operating
    system versions, because even if they're
  • 51:24 - 51:28
    all vulnerable, they may have different
    quirks or tweaks that you have to handle to
  • 51:28 - 51:33
    actually make your exploit work reliably
    on all of them. We wanted to
  • 51:33 - 51:37
    test on the macOS beta as well as the
    normal macOS at least, so that we could
  • 51:37 - 51:42
    make sure it worked in case Apple pushed
    an update right before the competition. So
  • 51:42 - 51:44
    we did make sure that some parts of our
    code, and our exploit, could be
  • 51:44 - 51:49
    interchanged. So for example, we have
    addresses here that are specific to the
  • 51:49 - 51:52
    operating system version, and we could
    swap those out very easily by changing
  • 51:52 - 51:59
    which part of the code gets used.
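    A sketch of that kind of swappable structure, with made-up build
    strings and offsets standing in for the real per-version
    addresses:

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        /* Hypothetical per-version offset table: keeping OS-specific
         * addresses in one structure makes retargeting for a new
         * build (say, a beta pushed right before the contest) a table
         * edit rather than a rewrite. */
        struct target_offsets {
            const char *build;       /* OS build identifier */
            uint64_t    gadget_off;  /* placeholder gadget offset */
            uint64_t    victim_off;  /* placeholder object offset */
        };

        static const struct target_offsets targets[] = {
            { "BUILD_A", 0x1234, 0x5678 },  /* made-up values */
            { "BUILD_B", 0x1250, 0x5690 },  /* made-up values */
        };

        const struct target_offsets *pick_target(const char *build)
        {
            for (size_t i = 0; i < sizeof(targets) / sizeof(targets[0]); i++)
                if (strcmp(targets[i].build, build) == 0)
                    return &targets[i];
            return NULL;  /* unknown build: refuse to run */
        }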
    And then also, if you're targeting
  • 51:59 - 52:02
    browsers, you might be interested in
    testing them on mobile too, even if you're
  • 52:02 - 52:06
    not targeting the mobile device. Because a
    lot of times the bugs might also work on a
  • 52:06 - 52:10
    phone, or at least the initial bug will.
    And so that's another interesting target
  • 52:10 - 52:17
    you might be interested in if you weren't
    thinking about it originally. So in
  • 52:17 - 52:20
    general what I'm trying to say is try
    throwing your exploit at everything you
  • 52:20 - 52:26
    can, and hopefully you will be able to
    gather some reliability percentages, or
  • 52:26 - 52:31
    figure out things that you overlooked in
    your initial testing. Alright. I'm gonna
  • 52:31 - 52:35
    throw it back over, for the final section.
    Marcus: We're in the final section here.
  • 52:35 - 52:39
    So I didn't get to spend as much time as I
    would have liked on this section, but I
  • 52:39 - 52:43
    think it's an important discussion to have
    here. And so the very last step of our
  • 52:43 - 52:51
    layman's guide is about responsibilities.
    And so this is critical. And so you've
  • 52:51 - 52:55
    listened to our talk. You've seen us
    develop the skeleton keys to computers and
  • 52:55 - 53:02
    systems and devices. We can create doors
    into computers and servers and people's
  • 53:02 - 53:06
    machines, you can invade privacy, you can
    deal damage to people's lives and
  • 53:06 - 53:13
    companies and systems and countries. So
  • 53:06 - 53:13
    there is a lot here - you have to be very
    careful with these things. And so everyone in
    this room, you know if you take any of our
  • 53:18 - 53:23
    advice going into this stuff, you know,
    please acknowledge what you're getting
  • 53:23 - 53:28
    into and what can be done to people. And
    so, there's at least one example that's
  • 53:28 - 53:31
    kind of related, that I pulled
    out quickly, one that quickly came
  • 53:31 - 53:36
    to mind. It was in 2016. I have to say I
    remember this day, actually sitting at
  • 53:36 - 53:43
    work. And there was this massive DDoS that
    plagued the Internet, at least in the
  • 53:43 - 53:48
    U.S., and it took down all your favorite
    sites: Twitter, Amazon, Netflix, Etsy,
  • 53:48 - 53:54
    GitHub, Spotify, Reddit. I remember the
    whole Internet came to a crawl in the U.S.
  • 53:54 - 54:01
    This is the L3 outage map. This was
    absolutely insane. And I remember people
  • 54:01 - 54:05
    were bouncing off the walls like crazy,
    you know, after the fact, and they were all
  • 54:05 - 54:09
    referencing Bruce Schneier's blog, and
    there on Twitter there's all this
  • 54:09 - 54:14
    discussion popping up that "this was
    likely a state actor". This was a new,
  • 54:14 - 54:19
    sophisticated DDoS attack. Bruce suggested
    it was China or Russia, or you know, some
  • 54:19 - 54:23
    nation state, and the blog post was
    specifically titled "Someone Is Learning
  • 54:23 - 54:28
    How to Take Down the Internet". But then a
    few months later we figured out that this
  • 54:28 - 54:32
    was called the Mirai botnet, and it was
    actually just a bunch of kids trying to
  • 54:32 - 54:38
    DDoS each other's Minecraft servers. And so
    it's scary, because, you know, I
  • 54:38 - 54:46
    have a lot of respect for the young people
    and how talented they are. But
  • 54:46 - 54:52
    people need to be very conscious about the
    damage that can be caused by these things.
  • 54:52 - 54:58
    Mirai, they weren't using 0days per se.
    Nowadays they are using 0days,
  • 54:58 - 55:01
    but back then they weren't; they were just
    using an IoT-based botnet, one of the
  • 55:01 - 55:05
    biggest in the world, with the highest
    throughput. But it was incredibly
  • 55:05 - 55:12
    damaging. And you know, it's hard
    to recognize the power of a
  • 55:12 - 55:18
    0day until you are wielding it. And so
    that's why it's not the first step of the
  • 55:18 - 55:21
    layman's guide. Once you finish this
    process you will come to realize the
  • 55:21 - 55:26
    danger that you can cause, but also the
    danger that you might be putting yourself
  • 55:26 - 55:33
    in. And so I kind of want to close on that:
    please be very careful. All right. And so,
  • 55:33 - 55:37
    that's all we have. This is the
    conclusion. The layman's guide, that is
  • 55:37 - 55:42
    the summary. And if you have any questions
    we'll take them now. Otherwise if we run
  • 55:42 - 55:45
    out of time you can catch us after the talk,
    and we'll have some cool stickers too,
  • 55:45 - 55:51
    so...
    Applause
  • 55:51 - 55:59
    Herald-Angel: Wow, great talk. Thank you.
    We have very very little time for
  • 55:59 - 56:03
    questions. If somebody is very quick they
    can come up to one of the microphones in
  • 56:03 - 56:08
    the front, we will handle one but
    otherwise, will you guys be available
  • 56:08 - 56:10
    after the talk?
    Marcus: We will be available after the
  • 56:10 - 56:15
    talk, if you want to come up and chat. We
    might get swarmed but we also have some
  • 56:15 - 56:18
    cool Ret2 stickers, so, come grab them if
    you want them.
  • 56:18 - 56:21
    Herald-Angel: And where can we find you?
    Marcus: We'll be over here. We'll try to
  • 56:21 - 56:23
    head out to the back.
    Herald-Angel: Yeah, yeah, because we have
  • 56:23 - 56:25
    another talk coming up in a moment or so.
    Marcus: OK.
  • 56:25 - 56:30
    Herald-Angel: I don't see any questions.
    So I'm going to wrap it up at this point,
  • 56:30 - 56:34
    but as I said the speakers will be
    available. Let's give this great speech
  • 56:34 - 56:35
    another round of applause.
  • 56:35 - 56:40
    Applause
  • 56:40 - 56:42
    Outro
  • 56:42 - 57:04
    subtitles created by c3subtitles.de
    in the year 2020. Join, and help us!