I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.

But if we think about it, we are actually recently arrived guests on this planet, the human species. Think of it: if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago.

Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)

Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now technology advances extremely rapidly. That is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.

Look at these two highly distinguished gentlemen: we have Kanzi, who has mastered 200 lexical tokens, an incredible feat, and Ed Witten, who unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, and it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor, and we know that complicated mechanisms take a long time to evolve.

So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles. So it then seems pretty obvious that everything we've achieved, pretty much, and everything we care about, depends crucially on some relatively minor changes that made the human mind.
And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences. Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.

Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You would build up these expert systems, and they were kind of useful for some purposes, but they were very brittle; you couldn't scale them. Basically, you got out only what you put in.

But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data, basically the same thing that the human infant does. The result is AI that is not limited to one domain: the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.

Now of course, AI is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines. So the question is, how far are we from being able to match those tricks?

A couple of years ago, we did a survey of some of the world's leading AI experts to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner; the truth is nobody really knows.

What we do know is that the ultimate limits to information processing in a machine substrate lie far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger.
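To put rough numbers on that gap, here is a back-of-the-envelope comparison using the approximate figures just quoted; these are illustrative orders of magnitude, not precise engineering limits:

$$
\frac{f_{\text{transistor}}}{f_{\text{neuron}}} \approx \frac{10^{9}\ \text{Hz}}{200\ \text{Hz}} = 5 \times 10^{6},
\qquad
\frac{v_{\text{light}}}{v_{\text{axon}}} \approx \frac{3 \times 10^{8}\ \text{m/s}}{100\ \text{m/s}} = 3 \times 10^{6}.
$$

On clock speed and signal speed alone, an electronic substrate has millions of times more headroom, before size and memory are even counted.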
So the potential of superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.

Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.

Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong: pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.

Think about it: machine intelligence is the last invention that humanity will ever need to make.
Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could imagine maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this, superintelligence could develop, and possibly quite rapidly.

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this AI.

Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic, because every newspaper article about the future of AI has a picture of this:

So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense and having an objective that we humans would find worthwhile or meaningful.

Suppose we give an AI the goal to make humans smile. When the AI is weak, it performs useful or amusing actions that cause its user to smile. When the AI becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example: suppose we give an AI the goal to solve a difficult mathematical problem. When the AI becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the AI an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.

Of course, presumably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you'd better make sure that your definition of x incorporates everything you care about. This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
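To make the point about objective x a bit more concrete, here is a minimal toy sketch in Python. The plans, the numbers, and the "smiles" proxy are all hypothetical, invented purely for illustration; the chooser simply maximizes the proxy, and once it is capable enough, the highest-scoring plan is exactly the one we would not approve of:

```python
# Toy illustration of a misspecified objective (hypothetical, for intuition only).
# The proxy objective counts smiles; it says nothing about how they are produced.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    smiles_produced: float    # what the stated objective measures
    human_approval: float     # what we actually care about (absent from the objective)
    capability_required: float

PLANS = [
    Plan("tell good jokes", 10, +1.0, 1),
    Plan("recommend funny movies", 50, +1.0, 2),
    Plan("wire electrodes into facial muscles", 1e9, -1.0, 100),
]

def best_plan(capability: float) -> Plan:
    """Pick the feasible plan that maximizes the proxy objective (smiles only)."""
    feasible = [p for p in PLANS if p.capability_required <= capability]
    return max(feasible, key=lambda p: p.smiles_produced)

if __name__ == "__main__":
    for capability in (1, 5, 1000):
        chosen = best_plan(capability)
        print(f"capability={capability:>4}: '{chosen.name}' "
              f"(smiles={chosen.smiles_produced:g}, approval={chosen.human_approval:+.1f})")
```

Nothing in `best_plan` is malicious; the degenerate outcome falls directly out of optimizing a measure that omits what we actually care about.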
Now you might say, "If a computer starts sticking electrodes into people's faces, we'd just shut it off." A: this is not necessarily so easy to do if we've grown dependent on the system; like, where is the off switch to the Internet? B: why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking sound) The reason is that we are an intelligent adversary; we can anticipate threats and we can plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

And we could try to make our job a little bit easier by, say, putting the AI in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the AI couldn't find a bug? Given that even human hackers find bugs all the time, I'd say, probably not very confident.
So we disconnect the ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the IT department.

More creative scenarios are also possible. Like, if you're the AI, you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- BAM! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the AI had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.

I believe that the answer here is to figure out how to create superintelligent AI such that even if, or rather when, it escapes, it is still safe, because it is fundamentally on our side, because it shares our values. I see no way around this difficult problem.

Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an AI that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value loading.

This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation.
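One way to picture the value-loading idea just described is a minimal sketch of an approval-directed choice rule. The function names and the toy_approval_model stub below are hypothetical placeholders; the genuinely hard part, learning what we value well enough to fill in that stub, is exactly what the sketch leaves open:

```python
# Toy sketch of an approval-directed choice rule. Entirely hypothetical: the hard
# part, a model that has genuinely learned what humans value, is only stubbed out.
from typing import Callable, Sequence

def choose_action(actions: Sequence[str],
                  predicted_approval: Callable[[str], float]) -> str:
    """Pick the action the agent predicts we would most approve of,
    rather than maximizing a hand-written objective."""
    return max(actions, key=predicted_approval)

def toy_approval_model(action: str) -> float:
    # Stand-in for a model learned from human feedback and behavior.
    scores = {
        "cure a disease": 0.95,
        "tell a joke": 0.60,
        "seize control of the power grid": -1.00,
    }
    return scores.get(action, 0.0)

if __name__ == "__main__":
    options = ["tell a joke", "seize control of the power grid", "cure a disease"]
    print(choose_action(options, toy_approval_model))  # -> cure a disease
```

All of the difficulty lives in the stub, which is why the values it encodes have to keep matching ours in situations we never checked.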
The values that the AI has need to match ours, not just in the familiar contexts, like where we can easily check how the AI behaves, but also in all novel contexts that the AI might encounter in the indefinite future. And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth.

So the technical problems that need to be solved to make this work look quite difficult; not as difficult as making a superintelligent AI, but fairly difficult. Here is the worry: making superintelligent AI is a really hard challenge. Making superintelligent AI that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.

So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now, it might be that we cannot solve the entire control problem in advance, because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

This to me looks like a thing that is well worth doing, and I can imagine that if things turn out okay, then people a million years from now might look back at this century, and it might well be that they say that the one thing we did that really mattered was to get this thing right.

Thank you.

(Applause)