1
00:00:18,209 --> 00:00:23,874
Herald-Angel: The Layman's Guide to Zero-
Day Engineering is our next talk, and
2
00:00:23,874 --> 00:00:31,050
my colleagues out in Austin who run the
PWN2OWN contest assure me that our next
3
00:00:31,050 --> 00:00:35,700
speakers are really very much top of their
class, and I'm really looking forward to
4
00:00:35,700 --> 00:00:40,430
this talk for that. A capture the flag
contest like that requires having done a
5
00:00:40,430 --> 00:00:46,309
lot of your homework upfront so that you
have the tools at your disposal, at the
6
00:00:46,309 --> 00:00:50,920
time, so that you can win. And Marcus and
Amy are here to tell us something way more
7
00:00:50,920 --> 00:00:56,309
valuable than the actual tools they found:
how they actually arrived at those
8
00:00:56,309 --> 00:01:00,659
tools, and, you know, the process of getting
there. And I think that is going to be a
9
00:01:00,659 --> 00:01:08,250
very valuable recipe or lesson for us. So,
please help me welcome Marcus and Amy to a
10
00:01:08,250 --> 00:01:16,768
very much anticipated talk.
Applause
11
00:01:16,768 --> 00:01:22,600
Marcus: All right. Hi everyone. Thank you
for making it out to our talk this evening.
12
00:01:22,600 --> 00:01:26,409
So, I'd like to start by thanking the CCC
organizers for inviting us out here to
13
00:01:26,409 --> 00:01:30,329
give this talk. This was a unique
opportunity for us to share some of our
14
00:01:30,329 --> 00:01:35,810
experience with the community, and we're
really happy to be here. So yeah, I hope
15
00:01:35,810 --> 00:01:42,729
you guys enjoy. OK, so who are we? Well,
my name is Marcus Gaasedelen. I sometimes
16
00:01:42,729 --> 00:01:48,070
go by the handle @gaasedelen which is my
last name. And I'm joined here by my co-
17
00:01:48,070 --> 00:01:52,950
worker Amy, who is also a good friend and
longtime collaborator. We work for a
18
00:01:52,950 --> 00:01:56,391
company called Ret2 Systems. Ret2 is best
known publicly for its security research
19
00:01:56,391 --> 00:01:59,740
and development. Behind the scenes we do
consulting and have been pushing to
20
00:01:59,740 --> 00:02:04,040
improve the availability of security
education, and specialized security
21
00:02:04,040 --> 00:02:08,100
training, as well as, raising awareness
and sharing information like you're going
22
00:02:08,100 --> 00:02:14,069
to see today. So this talk has been
structured roughly to show our approach in
23
00:02:14,069 --> 00:02:18,370
breaking some of the world's most hardened
consumer software. In particular, we're
24
00:02:18,370 --> 00:02:23,349
going to talk about one of the Zero-Days
that we produced at Ret2 in 2018. And over
25
00:02:23,349 --> 00:02:28,019
the course of the talk, we hope to break
some common misconceptions about the
26
00:02:28,019 --> 00:02:31,260
process of Zero-Day Engineering. We're
going to highlight some of the
27
00:02:31,260 --> 00:02:36,799
observations that we've gathered and built
up about this industry and this trade over
28
00:02:36,799 --> 00:02:41,269
the course of many years now. And we're
going to try to offer some advice on how
29
00:02:41,269 --> 00:02:46,220
to get started doing this kind of work as
an individual. So, we're calling this talk
30
00:02:46,220 --> 00:02:51,580
a non-technical commentary about the
process of Zero-Day Engineering. At times,
31
00:02:51,580 --> 00:02:55,740
it may seem like we're stating the
obvious. But the point is to show that
32
00:02:55,740 --> 00:03:00,879
there's less magic behind the curtain than
most of the spectators probably realize.
33
00:03:00,879 --> 00:03:05,350
So let's talk about PWN2OWN 2018. For
those that don't know, PWN2OWN is an
34
00:03:05,350 --> 00:03:09,099
industry level security competition,
organized annually by Trend Micro's Zero-
35
00:03:09,099 --> 00:03:15,000
Day Initiative. PWN2OWN invites the top
security researchers from around the world
36
00:03:15,000 --> 00:03:19,810
to showcase Zero-Day exploits against high
value software targets such as premier web
37
00:03:19,810 --> 00:03:23,310
browsers, operating systems and
virtualization solutions, such as Hyper-V,
38
00:03:23,310 --> 00:03:28,540
VMware, VirtualBox, Xen, whatever.
So at Ret2, we thought it would be fun to
39
00:03:28,540 --> 00:03:32,560
play in PWN2OWN this year. Specifically, we
wanted to target the competition's browser
40
00:03:32,560 --> 00:03:38,480
category. We chose to attack Apple's
Safari web browser on MacOS because it was
41
00:03:38,480 --> 00:03:44,700
new, it was mysterious. But also to avoid
any prior conflicts of interest. And so
42
00:03:44,700 --> 00:03:48,159
for this competition, we ended up
developing a type of Zero-Day, known
43
00:03:48,159 --> 00:03:56,019
typically as a single-click RCE or Safari
remote, which is kind of industry parlance.
44
00:03:56,019 --> 00:04:00,219
So what this means is that we could gain
remote, root-level access to your MacBook,
45
00:04:00,219 --> 00:04:05,059
should you click a single malicious link
of ours. That's kind of terrifying. You
46
00:04:05,059 --> 00:04:10,390
know, a lot of you might feel like you're
very prone to not clicking malicious
47
00:04:10,390 --> 00:04:15,091
links, or not getting spearphished. But it's
so easy. Maybe you're in a coffee shop, maybe
48
00:04:15,091 --> 00:04:19,970
I just man-in-the-middle your connection.
It's pretty, yeah, it's a pretty scary
49
00:04:19,970 --> 00:04:23,360
world. So this is actually a picture that
we took on stage at PWN2OWN 2018,
50
00:04:23,360 --> 00:04:27,940
directly following our exploit attempt.
This is actually Joshua Smith from ZDI
51
00:04:27,940 --> 00:04:33,020
holding the competition machine after our
exploit had landed, unfortunately, a
52
00:04:33,020 --> 00:04:37,020
little bit too late. But the payload at
the end of our exploit would pop Apple's
53
00:04:37,020 --> 00:04:41,370
calculator app and a root shell on the
victim machine. This is usually used to
54
00:04:41,370 --> 00:04:45,930
demonstrate code execution. So, for fun we
also made the payload change the desktop's
55
00:04:45,930 --> 00:04:50,666
background to the Ret2 logo. So that's
what you're seeing there. So, what makes a
56
00:04:50,666 --> 00:04:54,620
Zero-Day a fun case study, is that we had
virtually no prior experience with Safari
57
00:04:54,620 --> 00:04:58,750
or MacOS going into this event. We
literally didn't even have a single
58
00:04:58,750 --> 00:05:03,220
MacBook in the office. We actually had to
go out and buy one. And so, as a result,
59
00:05:03,220 --> 00:05:07,050
you get to see how we as expert
researchers approach new and unknown
60
00:05:07,050 --> 00:05:12,060
software targets. So I promised that this
was a non-technical talk which is mostly
61
00:05:12,060 --> 00:05:16,710
true. That's because we actually published
all the nitty gritty details for the
62
00:05:16,710 --> 00:05:21,530
entire exploit chain as a verbose six-part
blog series on our blog this past summer.
63
00:05:21,530 --> 00:05:26,534
It's hard to make highly technical talks
fun and accessible to all audiences. So
64
00:05:26,534 --> 00:05:31,210
we've reserved much of the truly technical
stuff for you to read at your own leisure.
65
00:05:31,210 --> 00:05:34,550
It's not a prerequisite for this talk, so
don't feel bad if you haven't read those.
66
00:05:34,550 --> 00:05:39,310
So with that in mind, we're ready to
introduce you to the very first step of what
67
00:05:39,310 --> 00:05:44,870
we're calling, The Layman's Guide to Zero-
Day Engineering. So, at the start of this
68
00:05:44,870 --> 00:05:48,730
talk, I said we'd be attacking some of the
most high value and well protected
69
00:05:48,730 --> 00:05:54,543
consumer software. This is no joke, right?
This is a high stakes game. So before any
70
00:05:54,543 --> 00:05:58,759
of you even think about looking at code,
or searching for vulnerabilities in these
71
00:05:58,759 --> 00:06:02,590
products, you need to set some
expectations about what you're going to be
72
00:06:02,590 --> 00:06:08,190
up against. So this is a picture of you.
You might be a security expert, a software
73
00:06:08,190 --> 00:06:11,757
engineer, or even just an enthusiast. But
through some odd twist of self-
74
00:06:11,757 --> 00:06:16,009
loathing, you find yourself interested in
Zero-Days, and the desire to break some
75
00:06:16,009 --> 00:06:22,380
high impact software, like a web browser.
But it's important to recognize that
76
00:06:22,380 --> 00:06:25,759
you're going up against some of the
largest, most successful organizations of
77
00:06:25,759 --> 00:06:29,720
our generation. These types of companies have
every interest in securing their products
78
00:06:29,720 --> 00:06:33,410
and building trust with consumers. These
vendors have steadily been growing their
79
00:06:33,410 --> 00:06:36,449
investments in software and device
security, and that trend will only
80
00:06:36,449 --> 00:06:41,300
continue. You see cyber security in
headlines every day, hacking, you know,
81
00:06:41,300 --> 00:06:45,220
these systems compromised. It's only
getting more popular. You know, there's
82
00:06:45,220 --> 00:06:49,319
more money than ever in this space. This
is a beautiful mountain peak that
83
00:06:49,319 --> 00:06:53,330
represents your mission of, I want to
craft a Zero-Day. But your ascent up this
84
00:06:53,330 --> 00:06:58,410
mountain is not going to be an easy task.
As an individual, the odds are not really
85
00:06:58,410 --> 00:07:02,940
in your favor. This game is sort of a
free-for-all, and everyone is at each other's
86
00:07:02,940 --> 00:07:06,770
throats. So, in one corner is the vendor,
who might as well have infinite money and
87
00:07:06,770 --> 00:07:10,729
infinite experience. In another corner is
the rest of the security research
88
00:07:10,729 --> 00:07:16,199
community, fellow enthusiasts, other
threat actors. So, all of you are going to
89
00:07:16,199 --> 00:07:21,120
be fighting over the same terrain, which
is the code. This is unforgiving terrain
90
00:07:21,120 --> 00:07:25,669
in and of itself. But the vendor has home
field advantage. So these obstacles are
91
00:07:25,669 --> 00:07:30,419
not fun, but it's only going to get worse
for you. Newcomers often don't prepare
92
00:07:30,419 --> 00:07:34,810
themselves for understanding what kind of
time scale they should expect when working
93
00:07:34,810 --> 00:07:39,840
on these types of projects. So, for those
of you who are familiar with the Capture
94
00:07:39,840 --> 00:07:44,550
The Flag circuit: these competitions are
usually time-boxed to 36 to 48
95
00:07:44,550 --> 00:07:49,191
hours. Normally, they're over a weekend. We
came out of that circuit. We love the
96
00:07:49,191 --> 00:07:54,880
sport. We still play. But how long does it
take to develop a Zero-Day? Well, it can
97
00:07:54,880 --> 00:07:58,780
vary a lot. Sometimes, you get really
lucky. I've seen someone produce a
98
00:07:58,780 --> 00:08:05,340
Chrome/V8 bug in 2 days. Other times,
it's taken two weeks. Sometimes, it takes
99
00:08:05,340 --> 00:08:10,360
a month. But sometimes, it can actually
take a lot longer to study and exploit new
100
00:08:10,360 --> 00:08:13,960
targets. You need to be thinking, you
know, you need to be looking at time in
101
00:08:13,960 --> 00:08:19,509
these kinds of scales. And so it could take
3.5 months. It could take maybe even 6
102
00:08:19,509 --> 00:08:23,021
months for some targets. The fact of the
matter is that it's almost impossible to
103
00:08:23,021 --> 00:08:28,370
tell how long the process is going to take
you. And so unlike a CTF challenge,
104
00:08:28,370 --> 00:08:33,140
there's no upper bound to this process of
Zero-Day Engineering. There's no guarantee
105
00:08:33,140 --> 00:08:37,270
that the exploitable bugs you need to make
a Zero-Day even exist in the software
106
00:08:37,270 --> 00:08:43,400
you're targeting. You also don't always
know what you're looking for, and you're
107
00:08:43,400 --> 00:08:47,540
working on projects that are many orders
of magnitude larger than any sort of
108
00:08:47,540 --> 00:08:51,150
educational resource. We're talking
millions of lines of code where your
109
00:08:51,150 --> 00:08:56,780
average CTF challenge might be a couple
hundred lines of C at most. So I can
110
00:08:56,780 --> 00:09:01,673
already see the tears and self-doubt in
some of your eyes. But I really want to
111
00:09:01,673 --> 00:09:06,640
stress that you shouldn't be too hard on
yourself about this stuff. As a novice,
112
00:09:06,640 --> 00:09:11,470
you need to keep these caveats in mind and
accept that failure is not unlikely in the
113
00:09:11,470 --> 00:09:15,746
journey. All right? So please check this
box before watching the rest of the talk.
114
00:09:17,406 --> 00:09:21,010
So having built some psychological
foundation for the task at hand, the next
115
00:09:21,010 --> 00:09:28,130
step in the Layman's Guide is what we call
reconnaissance. So this is kind of a goofy
116
00:09:28,130 --> 00:09:33,566
slide, but yes, even Metasploit reminds
you to start out doing recon. So with
117
00:09:33,566 --> 00:09:36,530
regard to Zero-Day Engineering,
discovering vulnerabilities against large
118
00:09:36,530 --> 00:09:40,606
scale software can be an absolutely
overwhelming experience. Like that
119
00:09:40,606 --> 00:09:44,790
mountain, it's like, where do I start?
What hill do I go up? Like, where do I
120
00:09:44,790 --> 00:09:49,330
even go from there? So to overcome this,
it's vital to build foundational knowledge
121
00:09:49,330 --> 00:09:53,470
about the target. It's also one of the
least glamorous parts of the Zero-Day
122
00:09:53,470 --> 00:09:57,540
development process. And it's often
skipped by many. You don't see any of the
123
00:09:57,540 --> 00:10:01,000
other speakers really talking about this
so much. You don't see blog posts where
124
00:10:01,000 --> 00:10:05,480
people are like, I googled for eight hours
about Apple Safari before writing a Zero-
125
00:10:05,480 --> 00:10:11,160
Day for it. So you want to aggregate and
review all existing research related to
126
00:10:11,160 --> 00:10:17,266
your target. This is super, super
important. So how do we do our recon? Well
127
00:10:17,266 --> 00:10:21,790
the simple answer is Google everything.
This is literally us Googling something,
128
00:10:21,790 --> 00:10:25,210
and what we do is we go through and we
click, and we download, and we bookmark
129
00:10:25,210 --> 00:10:29,580
every single thing for about five pages.
And you see all those buttons that you
130
00:10:29,580 --> 00:10:33,542
never click at the bottom of Google? All
those things are related searches that you
131
00:10:33,542 --> 00:10:37,370
might want to look at. You should
definitely click all of those. You should
132
00:10:37,370 --> 00:10:40,960
also go through at least four or five
pages and keep downloading and saving
133
00:10:40,960 --> 00:10:48,040
everything that looks remotely relevant.
So you just keep doing this over, and
134
00:10:48,040 --> 00:10:53,621
over, and over again. And you just Google,
and Google, and Google everything that you
135
00:10:53,621 --> 00:10:58,730
think could possibly be related. And the
idea is, you know, you just want to grab
136
00:10:58,730 --> 00:11:02,260
all this information, you want to
understand everything you can about this
137
00:11:02,260 --> 00:11:07,766
target. Even if it's not Apple Safari
specific. I mean, look into V8, look into
138
00:11:07,766 --> 00:11:14,200
Chrome, look into Opera, look into Chakra,
look into whatever you want. So the goal
139
00:11:14,200 --> 00:11:19,370
is to build up a library of security
literature related to your target and its
140
00:11:19,370 --> 00:11:26,010
ecosystem. And then, I want you to read
all of it. But
141
00:11:26,010 --> 00:11:29,120
don't force yourself to understand
everything in your stack of
142
00:11:29,120 --> 00:11:32,720
literature. The point of this exercise is
to build additional context about the
143
00:11:32,720 --> 00:11:36,996
software, its architecture and its
security track record. By the end of the
144
00:11:36,996 --> 00:11:40,120
reconnaissance phase, you should aim to be
able to answer these kinds of questions
145
00:11:40,120 --> 00:11:45,727
about your target. What is the purpose of
the software? How is it architected? Can
146
00:11:45,727 --> 00:11:50,640
anyone describe what WebKit's architecture
is to me? What are its major components?
147
00:11:50,640 --> 00:11:55,880
Is there a sandbox around it? How do you
debug it? How did the developers debug it?
148
00:11:55,880 --> 00:12:00,550
Are there any tips and tricks, are there
special flags? What does its security
149
00:12:00,550 --> 00:12:04,265
track record look like? Does it have
historically vulnerable components? Are
150
00:12:04,265 --> 00:12:10,630
there existing writeups, exploits, or
research on it? Etc. All right, we're
151
00:12:10,630 --> 00:12:16,450
through reconnaissance. Step 2 is going to
be target selection. So, there's actually
152
00:12:16,450 --> 00:12:20,190
a few different names that you could maybe
call this. Technically, we're targeting
153
00:12:20,190 --> 00:12:24,684
Apple's Safari, but you want to try and
narrow your scope. So what we're looking
154
00:12:24,684 --> 00:12:32,520
at here is a TreeMap visualization of the
WebKit source. So Apple's Safari web
155
00:12:32,520 --> 00:12:36,030
browser is actually built on top of the
WebKit framework, which is essentially a
156
00:12:36,030 --> 00:12:42,000
browser engine. This is Open Source. So
yeah, this is a TreeMap visualization of
157
00:12:42,000 --> 00:12:47,000
the source directory, where files are
sorted by size. So each of those boxes
158
00:12:47,000 --> 00:12:53,020
is essentially a file, while the big
gray boxes are directories.
159
00:12:53,020 --> 00:13:02,240
All the sub-squares are files, and each
file is sized based on its lines of code.
160
00:13:02,240 --> 00:13:07,490
The blue hues represent the approximate
maximum cyclomatic complexity detected in
161
00:13:07,490 --> 00:13:10,810
each source file. And, anyway, you might
be getting
162
00:13:10,810 --> 00:13:14,191
flashbacks to that picture of that
mountain peak. How do you even start to
163
00:13:14,191 --> 00:13:17,546
hunt for security vulnerabilities in a
product or codebase of this size?
164
00:13:17,546 --> 00:13:22,070
3 million lines of code. You know, I've
maybe written like, I don't know, like a
165
00:13:22,070 --> 00:13:28,687
100,000 lines of C or C++ in my life, let
alone read or reviewed 3 million. So the
166
00:13:28,687 --> 00:13:33,963
short answer to breaking this problem down
is that you need to reduce your scope of
167
00:13:33,963 --> 00:13:39,950
evaluation, and focus on depth over
breadth. And this is most critical when
168
00:13:39,950 --> 00:13:44,980
attacking extremely well-picked-over
targets. Maybe you're probing an IoT
169
00:13:44,980 --> 00:13:47,600
device? You can probably just sneeze at
that thing and you are going to find
170
00:13:47,600 --> 00:13:52,424
vulnerabilities. But you know, you're
fighting on a very different landscape here.
171
00:13:52,424 --> 00:13:59,820
And so you need to be very detailed
with your review. So reduce your scope.
172
00:13:59,820 --> 00:14:03,950
Our reconnaissance and past experience
with exploiting browsers had led us to
173
00:14:03,950 --> 00:14:09,090
focus on WebKit's JavaScript engine,
highlighted up here in orange. So, bugs in
174
00:14:09,090 --> 00:14:14,120
JS engines, when it comes to browsers, are
generally regarded as extremely powerful
175
00:14:14,120 --> 00:14:18,460
bugs. But they're also few and far
between, and they're kind of becoming more
176
00:14:18,460 --> 00:14:23,550
rare, as more of you are looking for bugs.
More people are colliding, they're dying
177
00:14:23,550 --> 00:14:28,920
quicker. And so, anyway, let's try to
reduce our scope. So we reduce our scope
178
00:14:28,920 --> 00:14:33,820
from 3 million down to 350,000 lines of
code. Here, we'll zoom into that orange.
179
00:14:33,820 --> 00:14:37,200
So now we're looking at the JavaScript
directory, specifically the JavaScriptCore
180
00:14:37,200 --> 00:14:42,490
directory. So this is the JavaScript
engine within WebKit, as used by Safari,
181
00:14:42,490 --> 00:14:47,820
on MacOS. And specifically, to further
reduce our scope, we chose to focus on the
182
00:14:47,820 --> 00:14:52,821
highest-level interface of JavaScriptCore,
which is the runtime folder. So this
183
00:14:52,821 --> 00:14:57,800
contains code that maps almost 1-to-1
to JavaScript objects and methods
184
00:14:57,800 --> 00:15:05,791
in the interpreter. So, for example,
Array.reverse, or concat, or whatever.
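
To give a feel for that mapping, here is a tiny sketch; the C++ names in the comments are our approximations of the kinds of functions living under the runtime directory, not verified symbols.

```typescript
// Each builtin call below is backed almost 1:1 by a C++ function in
// JavaScriptCore's runtime/ directory (names are approximate):
const a: number[] = [1, 2, 3];
a.reverse();       // e.g. arrayProtoFuncReverse in ArrayPrototype.cpp
a.concat([4, 5]);  // e.g. arrayProtoFuncConcat
a.slice(0, 2);     // e.g. arrayProtoFuncSlice
```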
185
00:15:05,791 --> 00:15:10,901
It's very close to what you JavaScript
authors are familiar with. And so this is
186
00:15:10,901 --> 00:15:16,836
what the runtime folder looks like, at
approximately 70,000 lines of code. When
187
00:15:16,836 --> 00:15:21,630
we were spinning up for PWN2OWN, we said,
okay, we are going to find a bug in this
188
00:15:21,630 --> 00:15:25,680
directory in one of these files, and we're
not going to leave it until we have, you
189
00:15:25,680 --> 00:15:30,784
know, walked away with something. So if we
take a step back now. This is what we
190
00:15:30,784 --> 00:15:34,592
started with, and this is what we've done.
We've reduced our scope. So it helps
191
00:15:34,592 --> 00:15:39,490
illustrate this, you know, whittling
process. It was almost a little bit
192
00:15:39,490 --> 00:15:44,310
arbitrary. Previously, there have been
a lot of bugs in the runtime
193
00:15:44,310 --> 00:15:51,380
directory. But it's really been cleaned up
the past few years. So anyway, this is
194
00:15:51,380 --> 00:15:56,870
what we chose for our RCE. So having spent
a number of years going back and forth
195
00:15:56,870 --> 00:16:00,930
between attacking and defending, I've come
to recognize that bad components do not
196
00:16:00,930 --> 00:16:05,200
get good fast. Usually researchers are
able to hammer away at these components
197
00:16:05,200 --> 00:16:10,520
for years before they reach some level of
acceptable security. So to escape Safari's
198
00:16:10,520 --> 00:16:15,084
sandbox, we simply looked at the security
trends covered during the reconnaissance phase.
199
00:16:15,084 --> 00:16:18,790
So, this observation, that historically
bad components often take years to
200
00:16:18,790 --> 00:16:23,542
improve, meant that we chose to look at
WindowServer. And for those that don't
201
00:16:23,542 --> 00:16:29,390
know, WindowServer is a root level system
service that runs on MacOS. Our research
202
00:16:29,390 --> 00:16:35,570
turned up a trail of ugly bugs from
MacOS, essentially from the WindowServer,
203
00:16:35,570 --> 00:16:42,910
which is accessible to the Safari sandbox.
And in particular, when we're doing our
204
00:16:42,910 --> 00:16:47,110
research, we're looking at ZDI's website,
and you can just search all their
205
00:16:47,110 --> 00:16:52,910
advisories that they've disclosed. And in
particular, in 2016, there were over 10
206
00:16:52,910 --> 00:16:57,220
vulnerabilities reported to ZDI that were
used as sandbox escapes or privilege
207
00:16:57,220 --> 00:17:03,406
escalation style issues. And so, these are
only the vulnerabilities that were reported to
208
00:17:03,406 --> 00:17:09,500
ZDI. If you look at 2017, there are 4, all,
again, used for the same purpose. I think,
209
00:17:09,500 --> 00:17:15,600
all of these were actually, probably used
at PWN2OWN both years. And then in 2018,
210
00:17:15,600 --> 00:17:19,720
there is just one. And so, this is 3
years - over the span of 3 years - where
211
00:17:19,720 --> 00:17:25,010
people were hitting the same exact
component, and Apple or researchers around
212
00:17:25,010 --> 00:17:28,810
the world could have been watching, or
listening and finding bugs, and fighting
213
00:17:28,810 --> 00:17:36,230
over this land right here. And so, it's
pretty interesting. I mean, it gives some
214
00:17:36,230 --> 00:17:42,200
perspective. The fact of the matter is
that it's really hard
215
00:17:42,200 --> 00:17:46,250
for bad components to improve quickly.
Nobody wants to try and sit down and
216
00:17:46,250 --> 00:17:50,497
rewrite bad code. And vendors are
terrified, absolutely terrified of
217
00:17:50,497 --> 00:17:55,400
shipping regressions. Most vendors will
only patch or modify old, bad code
218
00:17:55,400 --> 00:18:02,370
when they absolutely must. For example,
when a vulnerability is reported to them.
219
00:18:02,370 --> 00:18:07,930
And so, as listed on this slide, there's a
number of reasons why a certain module or
220
00:18:07,930 --> 00:18:12,780
component has a terrible security track
record. Just try to keep in mind, that's
221
00:18:12,780 --> 00:18:17,740
usually a good place to look for more
bugs. So if you see a waterfall of bugs
222
00:18:17,740 --> 00:18:22,660
this year in some component, like, wasm or
JIT, maybe you should be looking there,
223
00:18:22,660 --> 00:18:27,401
right? Because that might be good for a
few more years. All right.
224
00:18:28,060 --> 00:18:32,240
Step three. So after all this talk, we are finally
getting to a point where we can start
225
00:18:32,240 --> 00:18:38,470
probing and exploring the codebase in
greater depth. This step is all about bug hunting.
226
00:18:38,470 --> 00:18:45,280
So as an individual researcher or
small organization, the hardest part of
227
00:18:45,280 --> 00:18:48,957
the Zero-Day engineering process is
usually discovering and exploiting a
228
00:18:48,957 --> 00:18:53,400
vulnerability. That's just kind of from
our perspective. This can maybe vary from
229
00:18:53,400 --> 00:18:58,040
person to person. But you know, we don't
have a hundred million dollars to spend on
230
00:18:58,040 --> 00:19:06,360
fuzzers, for example. And so we literally
have one MacBook, right? So it's kind of
231
00:19:06,360 --> 00:19:10,920
like looking for a needle in a haystack.
We're also well versed in the exploitation
232
00:19:10,920 --> 00:19:14,938
process itself. And so those end up being
a little bit more formulaic for us.
233
00:19:14,938 --> 00:19:18,500
So there are two core strategies for
finding exploitable vulnerabilities.
234
00:19:18,500 --> 00:19:22,133
There's a lot of pros and cons to both of
these approaches. But I don't want to
235
00:19:22,133 --> 00:19:25,448
spend too much time talking about their
strengths or weaknesses. So they're all
236
00:19:25,448 --> 00:19:30,610
listed here, the short summary is that
fuzzing is the main go-to strategy for
237
00:19:30,610 --> 00:19:36,970
many security enthusiasts. Some of the key
perks are that it's scalable and almost
238
00:19:36,970 --> 00:19:41,841
always yields results. And so, spoiler
alert, but like the rest of the software industry,
239
00:19:41,841 --> 00:19:48,640
we fuzzed both of our bugs. Both the bugs
that we used for our full chain. And, you know,
240
00:19:48,640 --> 00:19:53,806
it's 2018. These things are still falling
out with some very trivial means. OK. So,
241
00:19:53,806 --> 00:19:58,860
source review is the other main strategy.
Source review is often much harder for
242
00:19:58,860 --> 00:20:02,746
novices, but it can produce some high
quality bugs when performed diligently.
243
00:20:02,746 --> 00:20:06,200
You know if you're looking to just get
into this stuff, I would say, start real
244
00:20:06,200 --> 00:20:12,930
simple, start with fuzzing and see how
far you get. So, yeah, for the purpose of
245
00:20:12,930 --> 00:20:16,490
this talk, we are mostly going to focus on
fuzzing. This is a picture from the
246
00:20:16,490 --> 00:20:21,080
dashboard of a simple, scalable fuzzing
harness we built for JavaScriptCore. This
247
00:20:21,080 --> 00:20:25,457
is when we were ramping up for PWN2OWN and
trying to build our chain. It was a
248
00:20:25,457 --> 00:20:30,310
grammar-based JavaScript fuzzer, based on
Mozilla's Dharma. There is nothing fancy
249
00:20:30,310 --> 00:20:34,723
about it. This is a snippet of what
some of its output looked like.
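
In lieu of that snippet, here is a toy sketch of the core idea behind a Dharma-style grammar fuzzer (purely illustrative, not Ret2's actual grammars): nonterminal rules expand recursively until only JavaScript tokens remain.

```typescript
// Toy grammar-based generator in the spirit of Dharma (illustrative).
const rules: Record<string, string[]> = {
  STMT: ["let a = EXPR ;", "a = EXPR ;", "STMT STMT"],
  EXPR: ["[ 1 , 2.5 , {} ]", "a.concat( EXPR )", "a.reverse()"],
};

function expand(sym: string, depth = 0): string {
  const bodies = rules[sym];
  if (!bodies) return sym; // terminal token, emit as-is
  // Past the depth cap, always take the first (least recursive) body.
  const body = depth > 8
    ? bodies[0]
    : bodies[Math.floor(Math.random() * bodies.length)];
  return body.split(" ").map(t => expand(t, depth + 1)).join(" ");
}

console.log(expand("STMT")); // e.g. "let a = a.concat( [ 1 , 2.5 , {} ] ) ;"
```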
250
00:20:34,723 --> 00:20:37,860
We had only started building it out when we
actually found the exploitable
251
00:20:37,860 --> 00:20:42,520
vulnerability that we ended up using. So
we haven't really played with this much
252
00:20:42,520 --> 00:20:48,030
since then, but it's, I mean, it shows
kind of how easy it was to get where we
253
00:20:48,030 --> 00:20:55,221
needed to go. So, something that we'd like
to stress heavily to the folks who fuzz,
254
00:20:55,221 --> 00:20:59,720
is that it really must be treated as a
science for these competitive targets.
255
00:20:59,720 --> 00:21:04,180
Guys, I know code coverage isn't the best
metric, but you absolutely must use some
256
00:21:04,180 --> 00:21:08,552
form of introspection to quantify the
progress and reach of your fuzzing. Please
257
00:21:08,552 --> 00:21:13,730
don't just fuzz blindly. So our fuzzer
would generate web-based code coverage
258
00:21:13,730 --> 00:21:18,160
reports of our grammars every 15 minutes
or so. This allowed us to quickly iterate
259
00:21:18,160 --> 00:21:22,594
upon our fuzzer, helping generate more
interesting, complex test cases. A good
260
00:21:22,594 --> 00:21:26,050
target is 60 percent code coverage. So you
can see that in the upper right hand
261
00:21:26,050 --> 00:21:29,050
corner. That's kind of what we were
shooting for. Again, it really varies from
262
00:21:29,050 --> 00:21:33,849
target to target. This was also just us
focusing on the runtime folder. If you see
263
00:21:33,849 --> 00:21:39,400
in the upper left hand corner. And so,
something that we have observed, again
264
00:21:39,400 --> 00:21:45,910
over many targets and exotic, exotic
targets, is that bugs almost always fall
265
00:21:45,910 --> 00:21:51,624
out of what we call the hard-fought final
coverage percentages. And so, what this
266
00:21:51,624 --> 00:21:55,960
means is, you might work for a while,
trying to build up your coverage, trying
267
00:21:55,960 --> 00:22:02,030
to, you know, build a good set of test
cases, or grammars for fuzzing, and then
268
00:22:02,030 --> 00:22:06,030
you'll hit that 60 percent, and you'll be
like, okay, what am I missing now? Like everyone
269
00:22:06,030 --> 00:22:09,550
gets that 60 percent, let's say. But then,
once you start inching a little bit
270
00:22:09,550 --> 00:22:15,150
further is when you start finding a lot of
bugs. So, for example, we will pull up
271
00:22:15,150 --> 00:22:19,190
code, and we'll be like, why did we not
hit those blocks up there? Why are those
272
00:22:19,190 --> 00:22:22,630
boxes grey? Why did we never hit those in
our millions of test cases? And we'll go
273
00:22:22,630 --> 00:22:26,020
find that that's some weird edge case, or
some unoptimized condition, or something
274
00:22:26,020 --> 00:22:32,160
like that, and we will modify our test
cases to hit that code. Other times we'll
275
00:22:32,160 --> 00:22:36,230
actually sit down, pull it up on our
projector, and talk through some of it,
276
00:22:36,230 --> 00:22:39,420
and we'll be like: What the hell is going
on there? This is actually, it's funny,
277
00:22:39,420 --> 00:22:44,341
this is actually a live photo that I took
during our PWN2OWN hunt. You know, as
278
00:22:44,341 --> 00:22:48,430
cliche as this picture is of hackers
standing in front of like a dark screen in
279
00:22:48,430 --> 00:22:52,090
a dark room, this was absolutely real.
You know, we were just reading some
280
00:22:52,090 --> 00:23:02,190
code. And so it's good to rubber-duck
among co-workers and to hash out ideas to
281
00:23:02,190 --> 00:23:10,520
help confirm theories or discard them. And
so, yeah, this kinda leads us to the next
282
00:23:10,520 --> 00:23:15,220
piece of advice, for when you're doing
source review. This applies to both
283
00:23:15,220 --> 00:23:21,060
debugging and assessing those corner cases
and whatnot. If you're ever unsure about
284
00:23:21,060 --> 00:23:24,800
the code that you're reading, you
absolutely should be using debuggers and
285
00:23:24,800 --> 00:23:30,250
dynamic analysis. So as painful as it can
maybe be to set up JavaScriptCore, or to
286
00:23:30,250 --> 00:23:35,500
debug this massive C++ application that's
dumping these massive call stacks that are
287
00:23:35,500 --> 00:23:39,900
100 frames deep, you need to learn those
tools, or you are never going to be able to
288
00:23:39,900 --> 00:23:46,700
understand the amount of context necessary
for some of these bugs and complex code.
289
00:23:46,700 --> 00:23:55,010
So, for example, one of our blog posts makes
extensive use of rr to reverse, or rather to root-
290
00:23:55,010 --> 00:23:58,831
cause, the vulnerability that we ended up
exploiting. It was a race condition in the
291
00:23:58,831 --> 00:24:03,270
garbage collector - totally wild bug.
I'd say there's probably
292
00:24:03,270 --> 00:24:07,830
3 people on earth that could have spotted
this bug through source review. It
293
00:24:07,830 --> 00:24:12,690
required immense knowledge of the codebase,
in my opinion, to be able to recognize this as
294
00:24:12,690 --> 00:24:16,250
a vulnerability. We found it through
fuzzing; we had to root-cause it using time-
295
00:24:16,250 --> 00:24:23,630
travel debugging with Mozilla's rr, which
is an amazing project.
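
As a hedged illustration of that time-travel workflow (the target, test case, and watch expression here are hypothetical; the rr and GDB commands themselves are real):

```
$ rr record ./jsc poc.js    # record one crashing run (hypothetical target/input)
$ rr replay                 # replay the exact same execution under GDB
(rr) continue               # run forward until the crash
(rr) watch -l cell->structureID   # hypothetical watchpoint on the corrupted data
(rr) reverse-continue       # execute backwards to the write that corrupted it
```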
296
00:24:23,630 --> 00:24:28,100
And so, yeah, absolutely use debuggers. This
is an example of a call stack. Again, just using a debugger to
297
00:24:28,100 --> 00:24:32,043
dump the call stack from a function that
you are auditing can give you an insane
298
00:24:32,043 --> 00:24:36,500
amount of context as to how that function
is used, what kind of data it's operating
299
00:24:36,500 --> 00:24:42,294
on. Maybe, you know, what kind of areas of
the codebase it's called from. You're not
300
00:24:42,294 --> 00:24:46,490
actually supposed to be able to read this
slide, but it's a
301
00:24:46,490 --> 00:24:56,370
backtrace from GDB that is 40 or 50 frames
deep. All right. So there is this
302
00:24:56,370 --> 00:25:01,420
huge misconception by novices that new
code is inherently more secure and that
303
00:25:01,420 --> 00:25:07,054
vulnerabilities are only being removed
from code bases, not added. This is almost
304
00:25:07,054 --> 00:25:11,980
patently false and this is something that
I've observed over the course of several
305
00:25:11,980 --> 00:25:18,377
years. Countless targets you know, code
from all sorts of vendors. And there's
306
00:25:18,377 --> 00:25:24,340
this really great blog post put out by
Ivan from GPZ this past fall and in his
307
00:25:24,340 --> 00:25:29,469
blog post he basically ... so one year ago
he fuzzed WebKit using his fuzzer
308
00:25:29,469 --> 00:25:33,250
called Domato. He found a bunch of
vulnerabilities, he reported them and then
309
00:25:33,250 --> 00:25:39,680
he open sourced the fuzzer. But then this
year, this fall, he downloaded his fuzzer,
310
00:25:39,680 --> 00:25:43,553
ran it again with little to no changes,
just to get things up and running. And
311
00:25:43,553 --> 00:25:47,680
then he found another eight-plus
exploitable use-after-free vulnerabilities.
312
00:25:47,680 --> 00:25:51,400
So what's really amazing
about this, is when you look at these last
313
00:25:51,400 --> 00:25:55,590
two columns that I have highlighted in
red, virtually all the bugs he found had
314
00:25:55,590 --> 00:26:03,950
been introduced or regressed in the past
12 months. So yes, new vulnerabilities get
315
00:26:03,950 --> 00:26:11,110
introduced every single day. The biggest
reason new code is considered harmful, is
316
00:26:11,110 --> 00:26:16,270
simply that it's not had years to sit in
market. This means it hasn't had time to
317
00:26:16,270 --> 00:26:20,843
mature, it hasn't been tested exhaustively
like the rest of the code base. As soon as
318
00:26:20,843 --> 00:26:24,786
that developer pushes it, whenever it hits
release, whenever it hits stable that's
319
00:26:24,786 --> 00:26:29,000
when you have a billion users pounding at
it - let's say on Chrome. I don't know how
320
00:26:29,000 --> 00:26:33,020
big that user base is, but it's massive, and
that's a billion users around the world
321
00:26:33,020 --> 00:26:37,537
just using the browser who are effectively
fuzzing it just by browsing the web. And
322
00:26:37,537 --> 00:26:41,120
so of course you're going to manifest
interesting conditions that will cover
323
00:26:41,120 --> 00:26:47,046
things that are not in your test cases and
unit testing. So yeah, it's not uncommon.
324
00:26:47,046 --> 00:26:50,490
The second point down here is that it's
not uncommon for new code to break
325
00:26:50,490 --> 00:26:55,360
assumptions made elsewhere in the code
base. And this is also actually extremely
326
00:26:55,360 --> 00:26:59,720
common. The complexity of these code bases
can be absolutely insane and it can be
327
00:26:59,720 --> 00:27:04,263
extremely hard to tell if, let's say, some
new code that Joe Schmoe, the new
328
00:27:04,263 --> 00:27:10,420
developer, adds, breaks some paradigm held
by, let's say, the previous owner of the
329
00:27:10,420 --> 00:27:14,870
codebase. He maybe doesn't understand it
as well - you know, maybe it could be an
330
00:27:14,870 --> 00:27:22,996
expert developer who just made a mistake.
It's super common. Now a piece of advice.
331
00:27:22,996 --> 00:27:27,200
This should be a no-brainer for bug
hunting, but novices often grow impatient
332
00:27:27,200 --> 00:27:30,320
and start hopping around between code and
functions and getting lost or trying to
333
00:27:30,320 --> 00:27:35,920
chase use-after-frees or bug classes
without really truly understanding what
334
00:27:35,920 --> 00:27:41,780
they're looking for. So a great starting
point is always to identify the sources of
335
00:27:41,780 --> 00:27:45,700
user input or the way that you can
interface with the program and then just
336
00:27:45,700 --> 00:27:50,890
follow the data, follow it down. You know
what functions parse it, what manipulates
337
00:27:50,890 --> 00:27:58,611
your data, what reads it, what writes to
it. You know just keep it simple. And so
338
00:27:58,611 --> 00:28:03,570
when we were looking for our sandbox escape,
looking at WindowServer - and
339
00:28:03,570 --> 00:28:07,370
our research had showed that there's all
of these functions. We didn't know anything
340
00:28:07,370 --> 00:28:12,000
about Mac, but we read this blog post from
Keen Lab that was like "Oh, there's all these
341
00:28:12,000 --> 00:28:15,700
functions that you can send data to in
window server" and apparently there's
342
00:28:15,700 --> 00:28:21,090
about six hundred and there are all these
functions prefixed with underscore
343
00:28:21,090 --> 00:28:28,250
underscore X (__X). And so the 600 endpoints
will parse and operate upon data that we
344
00:28:28,250 --> 00:28:33,430
send to them. And so to draw a rough
diagram, there is essentially this big red
345
00:28:33,430 --> 00:28:37,760
data tube from the Safari sandbox to the
WindowServer system service. This tube
346
00:28:37,760 --> 00:28:43,651
can deliver arbitrary data that we control
to all those six hundred endpoints. We
347
00:28:43,651 --> 00:28:48,001
immediately thought let's just try to man-
in-the-middle this data pipe, so that we
348
00:28:48,001 --> 00:28:52,260
can see what's going on. And so that's
exactly what we did. We just hooked up
349
00:28:52,260 --> 00:28:58,898
Frida to it, another open source DBI. It's
on GitHub. It's pretty cool. And we're
350
00:28:58,898 --> 00:29:04,590
able to stream all of the messages flowing
over this pipe so we can see all this data
351
00:29:04,590 --> 00:29:08,870
just being sent into the WindowServer
from all sorts of applications - actually
352
00:29:08,870 --> 00:29:12,800
everything on MacOS talks to this. The
WindowServer is responsible for drawing
353
00:29:12,800 --> 00:29:17,490
all your windows on the desktop, your
mouse clicks, your whatever. It's kind of
354
00:29:17,490 --> 00:29:23,820
like explorer.exe on Windows. So you know
we see all this data coming through, we
355
00:29:23,820 --> 00:29:29,050
see all these crazy messages, all these
unique message formats, all these data
356
00:29:29,050 --> 00:29:33,948
buffers that it's sending in, and this is
just begging to be fuzzed.
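
As a rough sketch of that man-in-the-middle idea (our reconstruction of the general technique, not Ret2's actual tooling): with Frida's JavaScript API you can hook mach_msg() in a sender process and dump every outgoing message. The offset below assumes the standard mach_msg_header_t layout.

```typescript
// Hook mach_msg() in a sender and hexdump outgoing messages
// (illustrative sketch of the MITM idea, not Ret2's actual tool).
Interceptor.attach(Module.getExportByName(null, "mach_msg"), {
  onEnter(args) {
    const msg = args[0];                   // mach_msg_header_t *
    const sendSize = args[2].toInt32();    // bytes being sent
    if (sendSize > 0) {
      const msgId = msg.add(20).readU32(); // msgh_id, header offset 20
      console.log(`mach_msg id=${msgId} size=${sendSize}`);
      console.log(hexdump(msg, { length: Math.min(sendSize, 64) }));
    }
  },
});
```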
357
00:29:33,948 --> 00:29:38,080
"OK, let's fuzz it" and we're getting all
hyped and I distinctly remember thinking
358
00:29:38,080 --> 00:29:42,440
maybe we can jerry-rig AFL into the
WindowServer, or let's mutate these
359
00:29:42,440 --> 00:29:51,059
buffers with Radamsa or why don't we just
try flipping some bits. So that's what we
360
00:29:51,059 --> 00:29:57,030
did. So there was actually a very timely tweet
just a few weeks back that echoed this
361
00:29:57,030 --> 00:30:02,590
exact experience. He said that "Looking at
my Security / Vulnerability research
362
00:30:02,590 --> 00:30:06,753
career, my biggest mistakes were almost
always trying to be too clever. Success
363
00:30:06,753 --> 00:30:11,587
hides behind what is the dumbest thing
that could possibly work." The takeaway
364
00:30:11,587 --> 00:30:18,030
here is that you should always start
simple and iterate. So this is our fuzz
365
00:30:18,030 --> 00:30:22,520
farm: a single 13-inch MacBook Pro. I
don't know if this is actually going to
366
00:30:22,520 --> 00:30:26,220
work, it's not a big deal if it doesn't.
I'm only gonna play a few seconds of it.
367
00:30:26,220 --> 00:30:31,953
This is me literally placing my wallet on
the enter key and you can see this box
368
00:30:31,953 --> 00:30:35,490
popping up and we're fuzzing - our fuzzer
is running now and flipping bits in the
369
00:30:35,490 --> 00:30:39,366
messages. And the screen is changing
colors. You're going to start seeing the
370
00:30:39,366 --> 00:30:42,905
boxes freaking out. It's going all over
the place. This is because the bits are
371
00:30:42,905 --> 00:30:46,660
being flipped, it's corrupting stuff, it's
changing the messages. Normally, this
372
00:30:46,660 --> 00:30:51,330
little box is supposed to show your
password hint. But the thing is, by holding
373
00:30:51,330 --> 00:30:56,240
the enter key on the lock screen, all
this traffic was being generated to the
374
00:30:56,240 --> 00:31:00,490
WindowServer, and every time the
WindowServer crashed - you know where it brings
375
00:31:00,490 --> 00:31:03,809
you? It brings you right back to your lock
screen. So we had this awesome fuzzing
376
00:31:03,809 --> 00:31:16,003
setup by just holding the enter key.
Applause
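
And the mutation step really was that dumb - a minimal sketch (illustrative, not Ret2's actual code) of flipping a few random bits in a captured message buffer before replaying it:

```typescript
// "The dumbest thing that could possibly work": flip random bits
// in a captured message buffer before replaying it (sketch).
function flipBits(data: Uint8Array, flips = 8): Uint8Array {
  const out = data.slice();
  for (let i = 0; i < flips; i++) {
    const bit = Math.floor(Math.random() * out.length * 8);
    out[bit >> 3] ^= 1 << (bit & 7); // flip one bit in place
  }
  return out;
}
```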
377
00:31:16,003 --> 00:31:21,810
And, you know, we lovingly titled that
picture "Advanced Persistent Threat" in
378
00:31:21,810 --> 00:31:29,630
our blog. So this is a crash that we got
out of the fuzzer. This occurred very
379
00:31:29,630 --> 00:31:34,299
quickly after ... this was probably within
the first 24 hours. So we found a ton of
380
00:31:34,299 --> 00:31:38,670
crashes, we didn't even explore all of
them. There are probably a few still
381
00:31:38,670 --> 00:31:44,380
sitting on our server. But there's lots...
all the rest is lots of garbage. But
382
00:31:44,380 --> 00:31:48,780
then this one stands out in particular:
Anytime you see this thing up here that says
383
00:31:48,780 --> 00:31:53,679
"EXC_BAD_ACCESS" with a big number up
there with address equals blah blah blah.
384
00:31:53,679 --> 00:31:57,520
That's a really bad place to be. And so
this is the bug that we ended up using at
385
00:31:57,520 --> 00:32:01,250
PWN2OWN to perform our sandbox escape. If
you want to read about it - again, it's on
386
00:32:01,250 --> 00:32:05,710
the blog, we're not going to go too deep
into it here. So maybe some of you have
387
00:32:05,710 --> 00:32:12,590
seen the infosec comic. You know it's all
about how people try to do these really
388
00:32:12,590 --> 00:32:17,320
cool clever things. They get ... People
can get too caught up trying to inject so
389
00:32:17,320 --> 00:32:21,930
much science and technology into these
problems that they often miss the forest
390
00:32:21,930 --> 00:32:26,750
for the trees. And so here we are in the
second panel. We just wrote this really
391
00:32:26,750 --> 00:32:31,832
crappy little fuzzer and we found our bug
pretty quickly. And this guy's really
392
00:32:31,832 --> 00:32:38,670
upset. Which brings us to the
misconception that only expert researchers
393
00:32:38,670 --> 00:32:42,950
with blank tools can find bugs. And so you
can fill in the blank with whatever you
394
00:32:42,950 --> 00:32:51,000
want. It can be cutting edge tools, state
of the art, state sponsored, magic bullet.
395
00:32:51,000 --> 00:32:58,720
This is not true. There are very few
secrets. So the next observation: you
396
00:32:58,720 --> 00:33:03,350
should be very wary of any bugs that you
find quickly. A good mantra is that an
397
00:33:03,350 --> 00:33:08,840
easy-to-find bug is just as easily found
by others. And so what this means is that
398
00:33:08,840 --> 00:33:13,260
soon after our blog post went out ...
actually, at PWN2OWN 2018, we knew
399
00:33:13,260 --> 00:33:18,640
we had collided with fluorescence, one of
the other competitors. We both struggled
400
00:33:18,640 --> 00:33:24,580
with exploiting this issue... it is a
difficult bug to exploit. And we were ...
401
00:33:24,580 --> 00:33:29,884
we had some very creative exploit, it was
very strange. But there was some
402
00:33:29,884 --> 00:33:33,490
discussion after the fact on Twitter -
started by Ned - he's probably
403
00:33:33,490 --> 00:33:36,299
out here, actually speaking tomorrow. You
guys should go see his talk about
404
00:33:36,299 --> 00:33:42,610
Chrome IPC. That should be really good.
But there was some discussion on Twitter,
405
00:33:42,610 --> 00:33:47,170
that Ned had started, and Nicholas, who is
also here, said "well, at least three
406
00:33:47,170 --> 00:33:52,130
teams found it separately". So at least
us, fluorescence and Nicholas had found
407
00:33:52,130 --> 00:33:57,320
this bug. And we were all at PWN2OWN, so
you can think how many people out there
408
00:33:57,320 --> 00:34:01,200
might have also found this. There's
probably at least a few. How many people
409
00:34:01,200 --> 00:34:07,160
actually tried to weaponize this thing?
Maybe not many. It is kind of a difficult
410
00:34:07,160 --> 00:34:14,790
bug. And so there are probably at least a
few other researchers who are aware of
411
00:34:14,790 --> 00:34:21,070
this bug. So yeah, that kind of closes this
out: you know, if you found a bug very quickly,
412
00:34:21,070 --> 00:34:25,360
especially with fuzzing, you can almost
guarantee that someone else has found it.
413
00:34:25,360 --> 00:34:31,040
So I want to pass over the next section to
Amy to continue.
414
00:34:31,040 --> 00:34:38,360
Amy: So we just talked a bunch about, you
know, techniques and expectations when
415
00:34:38,360 --> 00:34:42,070
you're actually looking for the bug. Let
me take over here and talk a little bit
416
00:34:42,070 --> 00:34:48,179
about what to expect when trying to
exploit whatever bug you end up finding.
417
00:34:48,179 --> 00:34:53,600
Yeah and so we have the exploit
development as the next step. So OK, you
418
00:34:53,600 --> 00:34:57,221
found a bug right, you've done the hard
part. You were looking at whatever your
419
00:34:57,221 --> 00:35:01,090
target is, maybe it's a browser maybe it's
the window server or the kernel or
420
00:35:01,090 --> 00:35:05,784
whatever you're trying to target. But the
question is how do you actually do the rest?
421
00:35:05,784 --> 00:35:10,500
How do you go from the bug to
actually popping a calculator onto the
422
00:35:10,500 --> 00:35:15,650
screen? The systems that you're working
with have such a high level of complexity
423
00:35:15,650 --> 00:35:19,880
that even, you know, just understanding
enough to know how your bug works might
424
00:35:19,880 --> 00:35:23,650
not be enough to actually know how to
exploit it. So should we try to, like, brute-force
425
00:35:23,650 --> 00:35:28,560
our way to an exploit? Is that a good
idea? Well, all right, before we try to
426
00:35:28,560 --> 00:35:34,390
tackle your bug, let's take a step back and
ask a slightly different question. How do
427
00:35:34,390 --> 00:35:39,410
we actually write an exploit like this in
general? Now I feel like a lot of people
428
00:35:39,410 --> 00:35:44,461
consider these kinds of exploits to maybe be
in their own league, at least when you
429
00:35:44,461 --> 00:35:49,750
compare them to something like maybe what
you'd do at a CTF competition or something
430
00:35:49,750 --> 00:35:55,859
simpler like that. And if you were for
example to be given a browser exploit
431
00:35:55,859 --> 00:36:00,340
challenge at a CTF competition it may seem
like an impossibly daunting task has just
432
00:36:00,340 --> 00:36:04,620
been laid in front of you if you've never
done this stuff before. So how can we work
433
00:36:04,620 --> 00:36:09,350
to sort of change that view? And you know
it might be kind of cliche but I actually
434
00:36:09,350 --> 00:36:14,090
think the best way to do it is through
practice. And I know everyone says "oh how
435
00:36:14,090 --> 00:36:19,540
do you get good", "oh, practice". But I
think that this is actually very valuable
436
00:36:19,540 --> 00:36:24,930
for this and the way that practicing
actually comes out is that, well, before we
437
00:36:24,930 --> 00:36:29,010
talked a lot about consuming everything
you could about your targets, like
438
00:36:29,010 --> 00:36:33,740
searching for everything you could that's
public, downloading it, trying to read it
439
00:36:33,740 --> 00:36:36,960
even if you don't understand it, because
you'll hopefully glean something from it;
440
00:36:36,960 --> 00:36:41,550
it doesn't hurt but maybe your goal now
could be actually trying to understand it
441
00:36:41,550 --> 00:36:46,810
at least as much as you can. You know,
it's going to be... it's not going to be
442
00:36:46,810 --> 00:36:53,130
easy. These are very intricate systems
that we're attacking here. And so it will
443
00:36:53,130 --> 00:36:57,150
be a lot of work to understand this stuff.
But for every old exploit you can work
444
00:36:57,150 --> 00:37:02,240
your way through, the path will become
clearer for actually exploiting these
445
00:37:02,240 --> 00:37:10,600
targets. So, because I focused mostly on
browser work - and I did the browser part
446
00:37:10,600 --> 00:37:16,540
of our chain, at least the exploitation
part - I have done a lot of exploits and
447
00:37:16,540 --> 00:37:21,080
read a ton of browser exploits and one
thing that I have found is that a lot of
448
00:37:21,080 --> 00:37:26,170
them have very very similar structure. And
they'll have similar techniques in them
449
00:37:26,170 --> 00:37:30,923
they'll have similar sort of primitives
that are being used to build up the
450
00:37:30,923 --> 00:37:37,660
exploit. And so that's one observation.
And to actually illustrate that I have an
451
00:37:37,660 --> 00:37:42,770
example. So alongside us at
PWN2OWN this spring we had Samuel Groß
452
00:37:42,770 --> 00:37:48,920
of phoenhex. He's probably here right now.
So he was targeting Safari just like we
453
00:37:48,920 --> 00:37:53,550
were. But his bug was in the just-in-time
compiler, the JIT, which converts
454
00:37:53,550 --> 00:38:00,061
JavaScript to machine code. Our bug
was nowhere near that. It was over in the
455
00:38:00,061 --> 00:38:06,250
garbage collector, so a completely
different kind of bug. But this bug was
456
00:38:06,250 --> 00:38:11,014
super reliable. It was very, very clean. I
recommend you go look at it online. It's a
457
00:38:11,014 --> 00:38:18,850
very good resource. And then, a few months
later, at PWN2OWN Mobile, so another pwning
458
00:38:18,850 --> 00:38:24,430
event, we have Fluoroacetate, which was an
amazing team who managed to pretty much
459
00:38:24,430 --> 00:38:28,071
pwn everything they could get their hands
on at that competition, including an
460
00:38:28,071 --> 00:38:33,103
iPhone, which of course uses Safari,
so they needed a Safari bug. The Safari
461
00:38:33,103 --> 00:38:37,690
bug that they had was very similar in
structure to the previous bug earlier that
462
00:38:37,690 --> 00:38:43,320
year, at least in terms of how the bug
worked and what you could do with it. So
463
00:38:43,320 --> 00:38:47,760
now you could exploit both of these bugs
with very similar exploit code almost in
464
00:38:47,760 --> 00:38:52,950
the same way. There were a few tweaks you
had to do because Apple added a few things
465
00:38:52,950 --> 00:39:01,100
since then. But the path between bug and
code execution was very similar. Then,
466
00:39:01,100 --> 00:39:07,070
even a few months after that, there was a
CTF called "Real World CTF" which took
467
00:39:07,070 --> 00:39:11,050
place in China and as the title suggests
they had a lot of realistic challenges
468
00:39:11,050 --> 00:39:18,280
including Safari. So of course my team
RPISEC was there and they woke me up in
469
00:39:18,280 --> 00:39:23,000
the middle of the night and tasked me with
solving it. And so I was like "Okay, okay
470
00:39:23,000 --> 00:39:27,760
I'll look at this". And I looked at it and
it was a JIT bug, and I'd never actually
471
00:39:27,760 --> 00:39:34,720
looked at the Safari JIT before that. And
so, you know, I didn't have much previous
472
00:39:34,720 --> 00:39:40,490
experience doing that, but because I had
taken the time to read all the public
473
00:39:40,490 --> 00:39:45,300
exploits. So I read all the other PWN2OWN
competitors' exploits, and I read all the
474
00:39:45,300 --> 00:39:49,470
other things that people were releasing
for different CVEs. I had seen a bug like
475
00:39:49,470 --> 00:39:54,790
this before, very similar, and I knew how to
exploit it, so I could... I was able to
476
00:39:54,790 --> 00:39:59,460
quickly build the path from bug to code
exec and we actually managed to get first
477
00:39:59,460 --> 00:40:03,350
blood on the challenge which was really
really cool.
478
00:40:03,350 --> 00:40:12,020
Applause
So... So what does this actually mean?
479
00:40:12,020 --> 00:40:19,160
Well I think not every bug is going to be
that easy to swap into an exploit, but I
480
00:40:19,160 --> 00:40:23,350
do think that understanding old exploits
is extremely valuable if you're trying to
481
00:40:23,350 --> 00:40:28,840
exploit new bugs. A good place to start if
you're interested in looking at old bugs
482
00:40:28,840 --> 00:40:34,170
is on places like this with the js-vuln-db,
which is basically a repository of a
483
00:40:34,170 --> 00:40:39,200
whole bunch of JavaScript bugs and proof
of concepts and sometimes even exploits
484
00:40:39,200 --> 00:40:43,690
for them. And so if you were to go through
all of those, I guarantee by the end you'd
485
00:40:43,690 --> 00:40:49,560
have a great understanding of the types of
bugs that are showing up these days and
486
00:40:49,560 --> 00:40:55,430
probably how to exploit most of them.
And... But there aren't that many bugs
487
00:40:55,430 --> 00:41:02,430
that get published that are full exploits.
There's only a couple a year maybe. So
488
00:41:02,430 --> 00:41:05,260
what do you do from there once you've read
all of those and you need to learn more?
489
00:41:05,260 --> 00:41:12,540
Well maybe start trying to exploit other
bugs yourself so you can go... For
490
00:41:12,540 --> 00:41:16,260
example, I like Chrome because they have a
very nice list of all their
491
00:41:16,260 --> 00:41:19,510
vulnerabilities that they post every time
they have an update and they even link you
492
00:41:19,510 --> 00:41:24,980
to the issue, so you can go and see
exactly what was wrong and so take some of
493
00:41:24,980 --> 00:41:30,500
these, for example: at the very top you
have an out-of-bounds write in V8. So we
494
00:41:30,500 --> 00:41:34,000
could click on that and go and see what
the bug was, and then we could try to write an
495
00:41:34,000 --> 00:41:38,130
exploit for it. And then by the end we'd
have a much better idea of how to
496
00:41:38,130 --> 00:41:43,970
exploit an out-of-bounds write in V8 and
we've now done it ourselves too. So this
497
00:41:43,970 --> 00:41:47,650
is a chance to sort of apply what you've
learned. But you say OK that's a lot of
498
00:41:47,650 --> 00:41:53,030
work. You know I have to do all kinds of
other stuff, I'm still in school or I have
499
00:41:53,030 --> 00:41:59,690
a full-time job. Can't I just play CTFs?
Well, it's a good question. The question is
500
00:41:59,690 --> 00:42:03,310
how much do CTFs actually help you with
these kinds of exploits? I do think that
501
00:42:03,310 --> 00:42:06,310
you can build a very good mindset for this
because you need a very adversarial
502
00:42:06,310 --> 00:42:13,410
mindset to do this sort of work. But a lot
of the time, the challenges don't really
503
00:42:13,410 --> 00:42:17,270
represent real-world exploitation.
There was a good tweet just the other day,
504
00:42:17,270 --> 00:42:23,890
a few days ago, saying
that random libc
505
00:42:23,890 --> 00:42:28,660
challenges - yes, it's libc here - are
506
00:42:28,660 --> 00:42:33,330
often very artificial and don't carry much
value to the real world, because they're very
507
00:42:33,330 --> 00:42:38,570
specific. Some people love these sort of
very specific CTF challenges, but I don't
508
00:42:38,570 --> 00:42:43,410
think there's as much value there as there
could be. However, there have been
509
00:42:43,410 --> 00:42:48,040
a couple of CTFs, recently and historically
as well, that have had pretty realistic
510
00:42:48,040 --> 00:42:56,580
challenges in them. In fact, right now the
35C3 CTF is running, and they have three
511
00:42:56,580 --> 00:43:00,250
browser exploitation challenges: they have a
full-chain Safari challenge, they have a
512
00:43:00,250 --> 00:43:05,950
VirtualBox challenge... it's
pretty crazy, and it's crazy to see people
513
00:43:05,950 --> 00:43:10,810
solve those challenges in such a short
time span too. But I think it's definitely
514
00:43:10,810 --> 00:43:15,020
something that you can look at afterwards
even if you don't manage to get through
515
00:43:15,020 --> 00:43:19,940
one of those challenges today, it's
something to try to work on. So these
516
00:43:19,940 --> 00:43:24,990
newer CTFs are actually
pretty good for people who want to jump
517
00:43:24,990 --> 00:43:31,690
off into this kind of real
exploit development work. However, it can
518
00:43:31,690 --> 00:43:36,310
be kind of scary for newcomers to
the CTF scene, because suddenly, you know,
519
00:43:36,310 --> 00:43:39,681
it's your first CTF and they're asking you
to exploit Chrome and you're like what...
520
00:43:39,681 --> 00:43:45,730
what is going on here? So it is a
double-edged sword sometimes. All right, so
521
00:43:45,730 --> 00:43:50,590
now we've found the bug and we have
experience, so what do we actually do?
522
00:43:50,590 --> 00:43:54,610
Well, you kind of have to get lucky,
because even if you have a ton of
523
00:43:54,610 --> 00:43:58,820
experience that doesn't necessarily mean
that you can instantly write an exploit
524
00:43:58,820 --> 00:44:03,390
for a bug. Our JavaScript exploit was kind
of like that - it was kind of nice, we knew
525
00:44:03,390 --> 00:44:09,170
what to do right away - but our
sandbox exploit did not
526
00:44:09,170 --> 00:44:14,110
fit into a nice box of a previous exploit
that we had seen. So it took a lot of
527
00:44:14,110 --> 00:44:19,020
effort. I'll show it quickly. So this was
the actual bug that we exploited for the
528
00:44:19,020 --> 00:44:25,340
sandbox. It's a pretty simple bug: an
integer issue where the index is signed, which
529
00:44:25,340 --> 00:44:30,160
means it can be negative. So normally it
expects a value like 4 but we could give
530
00:44:30,160 --> 00:44:34,920
it a value like negative 3 and that would
make it go out of bounds and we could
531
00:44:34,920 --> 00:44:39,830
corrupt memory. So it's a very simple bug, not
a crazy, complex one like some of the others we've seen.
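To make the shape of that bug concrete, here's a minimal C sketch of the same pattern. It's our own illustration, with hypothetical names and bounds, not Apple's actual code:

```c
#include <stdint.h>

#define NUM_SLOTS 16
static uint64_t slots[NUM_SLOTS];

/* The index is signed and only the upper bound is checked, so a
 * negative value like -3 slips through and writes out of bounds,
 * into memory before the array. */
void set_slot(int32_t index, uint64_t value) {
    if (index < NUM_SLOTS) {   /* missing: index < 0 check */
        slots[index] = value;  /* index = -3 corrupts memory */
    }
}
```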
532
00:44:39,830 --> 00:44:44,340
But does that mean
that this exploit is going to be really
533
00:44:44,340 --> 00:44:52,440
simple? Well let's see... That's a lot of
code. So our exploit for this bug ended up
534
00:44:52,440 --> 00:44:59,400
being about 1300 lines. That's
pretty crazy, and you're probably
535
00:44:59,400 --> 00:45:04,750
wondering how it got there. But I do want
to say: just be aware that when you find
536
00:45:04,750 --> 00:45:09,410
a simple-looking bug, it might not be
that easy to solve or exploit. It
537
00:45:09,410 --> 00:45:14,654
might take a lot of effort but don't get
discouraged if it happens to you. It just
538
00:45:14,654 --> 00:45:19,420
means it's time to ride the exploit
development rollercoaster. And basically
539
00:45:19,420 --> 00:45:24,150
what that means is there are a lot of ups
and downs to an exploit, and you have to
540
00:45:24,150 --> 00:45:28,010
basically ride the rollercoaster until,
hopefully, you have the exploit
541
00:45:28,010 --> 00:45:36,020
finished. We had to do that for our
sandbox escape. So, to recap: we found the
542
00:45:36,020 --> 00:45:41,880
bug, and we had a bunch of great ideas: we'd
previously seen a bug like this exploited
543
00:45:41,880 --> 00:45:47,250
by Keen Lab, and we had read their papers. So we
had a great plan, and we were like:
544
00:45:47,250 --> 00:45:51,422
OK, it's going to work, we just have to make
sure this one bit is not set. And it was
545
00:45:51,422 --> 00:45:55,750
in a random-looking value, so we
assumed it would be fine. But it turns out
546
00:45:55,750 --> 00:46:00,690
that bit is always set and we have no idea
why and no one else knows why, so thank
547
00:46:00,690 --> 00:46:06,450
you, Apple, for that. So, OK, maybe we can
work around it, maybe we can figure out a
548
00:46:06,450 --> 00:46:11,190
way to unset it - and we're like: oh yes, we
can delete it! It's going to work again!
549
00:46:11,190 --> 00:46:14,670
Everything will be great! Until we realized
that that actually breaks the rest of the
550
00:46:14,670 --> 00:46:20,900
exploit. So it's this back and forth, it's
an up and down. And you know sometimes
551
00:46:20,900 --> 00:46:26,920
when you solve one issue you think you've
got what you need and then another issue
552
00:46:26,920 --> 00:46:30,930
shows up.
So it's all about making incremental
553
00:46:30,930 --> 00:46:35,870
progress towards removing all the issues
that are in your way, getting at least
554
00:46:35,870 --> 00:46:38,770
something that works.
Marcus: And so just as a quick aside, this
555
00:46:38,770 --> 00:46:41,450
all happened within like 60 minutes one
night.
556
00:46:41,450 --> 00:46:44,920
Amy: Yeah.
Marcus: Amy saw me while I was
557
00:46:44,920 --> 00:46:49,609
walking around, out of breath, like: "Are you
kidding me?" There were two bugs that tripped
558
00:46:49,609 --> 00:46:54,110
us up, that made this bug much more
difficult to exploit. And there was no good
559
00:46:54,110 --> 00:46:58,320
reason why those issues were there, and
it was a horrible experience.
560
00:46:58,320 --> 00:47:04,260
Amy: But it's still one I'd recommend.
This rollercoaster actually applies
561
00:47:04,260 --> 00:47:10,950
to the entire process, not just, you
know, the exploit development, because
562
00:47:10,950 --> 00:47:15,940
you'll have it when you find crashes that
don't actually lead to vulnerabilities or
563
00:47:15,940 --> 00:47:20,340
unexplainable crashes or super unreliable
exploits. You just have to keep pushing
564
00:47:20,340 --> 00:47:24,450
your way through until eventually,
hopefully, you get to the end of the ride
565
00:47:24,450 --> 00:47:32,740
and you've got yourself a nice exploit.
OK. So now let's assume we've
566
00:47:32,740 --> 00:47:36,400
written an exploit at this point. Maybe
it's not the most reliable thing but it
567
00:47:36,400 --> 00:47:42,060
works - like, I can get to my code exec every
now and then. So we have to start talking
568
00:47:42,060 --> 00:47:46,721
about the payload. So what is the payload
exactly? A payload is whatever your
569
00:47:46,721 --> 00:47:51,430
exploit is actually trying to do. It could
be trying to open up a calculator on the
570
00:47:51,430 --> 00:47:56,870
screen, it could be trying to launch your
sandbox escape exploit, it could be trying
571
00:47:56,870 --> 00:48:01,859
to clean up your system after your
exploit. And by that, I mean fix the
572
00:48:01,859 --> 00:48:05,870
program that you're actually exploiting.
So in CTF we don't get a lot of practice
573
00:48:05,870 --> 00:48:11,240
with this because we're so used to doing
'system', you know, 'cat flag', and then
574
00:48:11,240 --> 00:48:14,910
it doesn't matter if the entire program is
crashing down in flames around us because
575
00:48:14,910 --> 00:48:19,670
we got the flag. So in this case yeah you
cat the flag and then it crashes right
576
00:48:19,670 --> 00:48:24,410
away because you didn't have anything
after your action. But in the real world
577
00:48:24,410 --> 00:48:27,760
it matters all the more. So here's
an example of what would happen if your
578
00:48:27,760 --> 00:48:32,640
exploit didn't clean up after itself, and
just crashed and went back to the login
579
00:48:32,640 --> 00:48:37,590
screen. This doesn't look very good. If
you're at a conference like Pwn2Own this
580
00:48:37,590 --> 00:48:44,390
won't fly - I don't think they would
let you win if this happened. So it's
581
00:48:44,390 --> 00:48:48,589
very important to try to go back and fix
up any damage that you've done to the
582
00:48:48,589 --> 00:48:55,339
system, before it crashes, right after you
finish. Right. And so, actually running
583
00:48:55,339 --> 00:49:00,981
your payload: a lot of times, in the
exploits we see, you'll get to
584
00:49:00,981 --> 00:49:05,560
the code exec marker here, which is just
CC bytes; 0xCC is the x86 INT3 opcode.
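As a quick illustration (our own sketch, not from any particular exploit), a buffer of INT3 bytes makes a convenient proof of code execution, because control flow landing anywhere in it traps immediately:

```c
/* Each 0xCC byte is an x86 INT3 instruction: execution landing in
 * this buffer traps straight to the debugger. It proves you have
 * code exec, but it is not a real payload. */
unsigned char poc_payload[] = { 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC };
```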
585
00:49:05,560 --> 00:49:10,700
INT3 tells the program to stop and trap to
a breakpoint. And in most exploits you see,
586
00:49:10,700 --> 00:49:13,580
they just stop there. They don't tell you
any more - and to be fair, you know, they've
587
00:49:13,580 --> 00:49:16,820
gotten their code exec, they're just
talking about the exploit - but you
588
00:49:16,820 --> 00:49:20,349
still have to figure out how to do your
payload because unless you want to write
589
00:49:20,349 --> 00:49:26,349
those 1300 lines of code in handwritten
assembly and then make it into shellcode,
590
00:49:26,349 --> 00:49:32,230
you're not going to have a good time. And
so, we had to figure out a way to actually
591
00:49:32,230 --> 00:49:37,570
take our payload, write it to the file
system in the only place that the sandbox
592
00:49:37,570 --> 00:49:42,560
would let us, and then load it
as a library, and then it would go
593
00:49:42,560 --> 00:49:50,460
and actually do our exploit.
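A minimal sketch of that pattern, assuming a hypothetical sandbox-writable path and made-up names (the real exploit's constraints differ):

```c
#include <dlfcn.h>
#include <stdio.h>

/* Write a pre-compiled payload library to a path the sandbox lets us
 * write to, then load it: dlopen() runs the library's constructor,
 * which is the actual payload. This avoids hand-writing thousands of
 * lines of payload logic as raw shellcode. */
int run_payload(const unsigned char *dylib, size_t len) {
    const char *path = "/tmp/payload.dylib";  /* hypothetical location */
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    fwrite(dylib, 1, len, f);
    fclose(f);
    return dlopen(path, RTLD_NOW) ? 0 : -1;
}
```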
And so now that you've assembled
594
00:49:50,460 --> 00:49:56,430
everything you're almost done here. You
have your exploit working. You get a
595
00:49:56,430 --> 00:50:00,020
calculator popping up. This was actually our
sandbox escape running and popping a
596
00:50:00,020 --> 00:50:04,320
calculator, proving that we had gotten
code exec - but we're not completely done
597
00:50:04,320 --> 00:50:09,630
yet because we need to do a little bit
more, which is exploit reliability. We
598
00:50:09,630 --> 00:50:12,740
need to make sure that our exploit is
actually as reliable as we want it to be,
599
00:50:12,740 --> 00:50:17,270
because if it only works 1 in 100 times
that's not going to be very good. For
600
00:50:17,270 --> 00:50:22,310
Pwn2Own, we ended up building a harness
for our Mac which would let us run the
601
00:50:22,310 --> 00:50:26,160
exploit multiple times, and then collect
information about it so we could look
602
00:50:26,160 --> 00:50:30,500
here and see very easily how
often it would fail and how often it would
603
00:50:30,500 --> 00:50:36,130
succeed, and then we could go and get more
information, maybe a log, and other stuff
604
00:50:36,130 --> 00:50:41,760
like how long it ran. And this made it
very easy to iterate over our exploit, and
605
00:50:41,760 --> 00:50:48,290
try to correct issues and make it better
and more reliable.
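A minimal sketch of such a harness, as a plain C driver; the script name and its exit-code convention are our assumptions:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Run the exploit repeatedly, logging pass/fail and wall time, so
 * reliability changes are easy to measure between iterations. */
int main(void) {
    const int runs = 100;
    int ok = 0;
    for (int i = 0; i < runs; i++) {
        time_t t0 = time(NULL);
        /* hypothetical wrapper script; exit code 0 = payload reached */
        int status = system("./run_exploit.sh >>exploit.log 2>&1");
        double secs = difftime(time(NULL), t0);
        if (status == 0)
            ok++;
        printf("run %3d: %s (%.0fs)\n", i + 1,
               status == 0 ? "ok" : "FAIL", secs);
    }
    printf("reliability: %d/%d (%.0f%%)\n", ok, runs, 100.0 * ok / runs);
    return 0;
}
```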
606
00:50:48,290 --> 00:50:52,589
We found that most of our failures came
from our heap groom, which is where you try to arrange
607
00:50:52,589 --> 00:50:57,100
memory into particular layouts; a sketch of
the general idea is below.
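Here is a hedged, toy illustration of grooming with plain malloc, not the actual groom from our exploit:

```c
#include <stdlib.h>

#define SPRAY 256

/* Allocate many same-sized chunks, then free every other one, so a
 * later allocation of that size is likely to land in a predictable
 * hole adjacent to data we control. Real grooms in browsers and
 * system daemons are far messier than this. */
void groom(size_t victim_size, void *slots[SPRAY]) {
    for (int i = 0; i < SPRAY; i++)
        slots[i] = malloc(victim_size);
    for (int i = 0; i < SPRAY; i += 2) {
        free(slots[i]);
        slots[i] = NULL;
    }
}
```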
608
00:50:57,100 --> 00:51:00,570
There wasn't much we could do about it in our
situation, so we made it as good as we could and then accepted the
609
00:51:00,570 --> 00:51:06,680
reliability that we got. Something
else you might want to test on is
610
00:51:06,680 --> 00:51:11,410
multiple devices. For example, our
JavaScript exploit was a race condition,
611
00:51:11,410 --> 00:51:15,300
so the number of CPUs on the
device and the speed of the CPU actually
612
00:51:15,300 --> 00:51:20,349
might matter when you're running your
exploit. You might want to try different
613
00:51:20,349 --> 00:51:23,990
operating systems or different operating
system versions, because even if they're
614
00:51:23,990 --> 00:51:28,180
all vulnerable, they may have different
quirks, or need tweaks, for you to
615
00:51:28,180 --> 00:51:32,981
actually make your exploit work reliably
on all of them. We wanted to
616
00:51:32,981 --> 00:51:37,340
test on the macOS beta as well as
regular macOS, at least, so that we could
617
00:51:37,340 --> 00:51:42,089
make sure it worked in case Apple pushed
an update right before the competition. So
618
00:51:42,089 --> 00:51:44,359
we did make sure that some parts of our
code, and our exploit, could be
619
00:51:44,359 --> 00:51:49,090
interchanged. So for example, we have
addresses here that are specific to the
620
00:51:49,090 --> 00:51:52,500
operating system version, and we could
swap those out very easily by changing
621
00:51:52,500 --> 00:51:58,619
which part of the code is used.
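One way such swapping might be structured, as a sketch; every build string and address below is made up:

```c
#include <stdint.h>
#include <string.h>

struct target_config {
    const char *build;        /* e.g. from `sw_vers -buildVersion` */
    uint64_t    gadget_addr;  /* hypothetical version-specific address */
    uint64_t    object_off;   /* hypothetical version-specific offset  */
};

static const struct target_config configs[] = {
    { "18B75", 0x7fff5c401234ULL, 0x1a8 },  /* release build (made up) */
    { "18C54", 0x7fff5c40abcdULL, 0x1b0 },  /* beta build (made up)    */
};

/* Pick the constants for the build we're running on; bail out on an
 * unknown build rather than corrupt memory with the wrong addresses. */
const struct target_config *pick_config(const char *build) {
    for (size_t i = 0; i < sizeof configs / sizeof configs[0]; i++)
        if (strcmp(configs[i].build, build) == 0)
            return &configs[i];
    return NULL;
}
```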
And then also if you're targeting some
622
00:51:58,619 --> 00:52:01,610
browsers you might be interested in
testing them on mobile too even if you're
623
00:52:01,610 --> 00:52:06,400
not targeting the mobile device. Because a
lot of times the bugs might also work on a
624
00:52:06,400 --> 00:52:10,250
phone, or at least the initial bug will.
And so that's another interesting target
625
00:52:10,250 --> 00:52:17,030
you might be interested in if you weren't
thinking about it originally. So in
626
00:52:17,030 --> 00:52:20,170
general what I'm trying to say is try
throwing your exploit at everything you
627
00:52:20,170 --> 00:52:25,690
can, and hopefully you'll be able to
pin down some reliability percentages, or
628
00:52:25,690 --> 00:52:30,640
figure out things that you overlooked in
your initial testing. Alright. I'm gonna
629
00:52:30,640 --> 00:52:35,250
throw it back over for the final section.
Marcus: We're in the final section here.
630
00:52:35,250 --> 00:52:39,210
So I didn't get to spend as much time as I
would have liked on this section, but I
631
00:52:39,210 --> 00:52:43,250
think it's an important discussion to have
here. So the very last step of our
632
00:52:43,250 --> 00:52:50,849
layman's guide is about responsibilities.
And this is critical. You've
633
00:52:50,849 --> 00:52:54,940
listened to our talk. You've seen us
develop the skeleton keys to computers and
634
00:52:54,940 --> 00:53:01,710
systems and devices. You can create doors
into computers and servers and people's
635
00:53:01,710 --> 00:53:06,490
machines, you can invade privacy, you can
deal damage to people's lives and
636
00:53:06,490 --> 00:53:12,610
companies and systems and countries - so
you have to be very
637
00:53:12,610 --> 00:53:17,870
careful with these things. So, everyone in
this room: if you take any of our
638
00:53:17,870 --> 00:53:22,869
advice going into this stuff, you know,
please acknowledge what you're getting
639
00:53:22,869 --> 00:53:27,869
into, and what can be done to people.
There's at least one example that's
640
00:53:27,869 --> 00:53:31,090
kind of related that quickly came
641
00:53:31,090 --> 00:53:35,860
to mind. It was in 2016 - I have to say, I
remember this day, actually, sitting at
642
00:53:35,860 --> 00:53:42,820
work. There was this massive DDoS that
plagued the Internet, at least in the
643
00:53:42,820 --> 00:53:48,110
U.S., and it took down all your favorite
sites: Twitter, Amazon, Netflix, Etsy,
644
00:53:48,110 --> 00:53:54,000
GitHub, Spotify, Reddit. I remember the
whole Internet came to a crawl in the U.S.
645
00:53:54,000 --> 00:54:01,310
This is the Level 3 outage map. This was
absolutely insane. And I remember people
646
00:54:01,310 --> 00:54:05,240
were bouncing off the walls like crazy,
you know, after the fact. They were all
647
00:54:05,240 --> 00:54:09,490
referencing Bruce Schneier's blog, and
on Twitter there was all this
648
00:54:09,490 --> 00:54:13,800
discussion popping up that "this was
likely a state actor". This was a newly
649
00:54:13,800 --> 00:54:19,370
sophisticated DDoS attack. Bruce suggested
it was China or Russia or, you know, some
650
00:54:19,370 --> 00:54:23,160
nation state, and the blog post was
specifically titled "Someone Is Learning
651
00:54:23,160 --> 00:54:28,430
How to Take Down the Internet". But then a
few months later, we figured out that this
652
00:54:28,430 --> 00:54:32,260
was the Mirai botnet, and it was
actually just a bunch of kids trying to
653
00:54:32,260 --> 00:54:38,500
DDoS each other's Minecraft servers. And so
it's scary, because, you know, I
654
00:54:38,500 --> 00:54:45,630
have a lot of respect for young people
and how talented they are. But
655
00:54:45,630 --> 00:54:51,690
people need to be very conscious about the
damage that can be caused by these things.
656
00:54:51,690 --> 00:54:57,780
Mirai wasn't using 0days per se.
Nowadays they are using 0days,
657
00:54:57,780 --> 00:55:00,747
but back then they weren't; it was just
an IoT-based botnet, one of the
658
00:55:00,747 --> 00:55:04,977
biggest in the world, with the highest
throughput. But it was incredibly
659
00:55:04,977 --> 00:55:11,710
damaging. And, you know,
it's hard to recognize the power of an
660
00:55:11,710 --> 00:55:17,660
0day until you are wielding it. And so
that's why it's not the first step of the
661
00:55:17,660 --> 00:55:20,930
layman's guide. Once you finish this
process you will come to realize the
662
00:55:20,930 --> 00:55:25,510
danger that you can cause, but also the
danger that you might be putting yourself
663
00:55:25,510 --> 00:55:33,070
in. So I want to close on that:
please be very careful. All right. And so,
664
00:55:33,070 --> 00:55:36,940
that's all we have. This is the
conclusion - the layman's guide, that is
665
00:55:36,940 --> 00:55:41,536
the summary. And if you have any questions
we'll take them now. Otherwise if we run
666
00:55:41,536 --> 00:55:44,890
out of time you can catch us after the talk,
and we'll have some cool stickers too,
667
00:55:44,890 --> 00:55:51,329
so...
Applause
668
00:55:51,329 --> 00:55:59,385
Herald-Angel: Wow, great talk. Thank you.
We have very, very little time for
669
00:55:59,385 --> 00:56:03,480
questions. If somebody is very quick they
can come up to one of the microphones in
670
00:56:03,480 --> 00:56:08,070
the front, we will handle one but
otherwise, will you guys be available
671
00:56:08,070 --> 00:56:10,460
after the talk?
Marcus: We will be available after the
672
00:56:10,460 --> 00:56:14,600
talk, if you want to come up and chat. We
might get swarmed, but we also have some
673
00:56:14,600 --> 00:56:17,680
cool Ret2 stickers, so come grab them if
you want them.
674
00:56:17,680 --> 00:56:21,339
Herald-Angel: And where can we find you?
Marcus: We'll be over here. We'll try to
675
00:56:21,339 --> 00:56:22,720
head out to the back.
Herald-Angel: Yeah, yeah, because we have
676
00:56:22,720 --> 00:56:25,390
another talk coming up in a moment or so.
Marcus: OK.
677
00:56:25,390 --> 00:56:29,859
Herald-Angel: I don't see any questions.
So I'm going to wrap it up at this point,
678
00:56:29,859 --> 00:56:34,210
but, as I said, the speakers will be
available. Let's give this great talk
679
00:56:34,210 --> 00:56:35,360
another round of applause.
680
00:56:35,360 --> 00:56:40,266
Applause
681
00:56:40,266 --> 00:56:42,251
Outro
682
00:56:42,251 --> 00:57:04,000
subtitles created by c3subtitles.de
in the year 2020. Join, and help us!