WEBVTT

00:00:35.245 --> 00:00:38.182
In my lab, we build autonomous aerial robots

00:00:38.183 --> 00:00:40.423
like the one you see flying here.

00:00:42.575 --> 00:00:45.750
Unlike the commercially available drones that you can buy today,

00:00:46.738 --> 00:00:49.704
this robot doesn't have any GPS on board.

00:00:51.160 --> 00:00:52.376
So without GPS,

00:00:52.400 --> 00:00:55.400
it's hard for robots like this to determine their position.

00:00:56.953 --> 00:01:01.311
This robot uses onboard sensors, cameras and laser scanners,

00:01:01.389 --> 00:01:03.521
to scan the environment.

00:01:03.522 --> 00:01:06.522
It detects features in the environment,

00:01:07.199 --> 00:01:09.918
and it determines where it is relative to those features,

00:01:09.951 --> 00:01:12.021
using a method of triangulation.

00:01:13.642 --> 00:01:17.094
And then it can assemble all these features into a map,

00:01:17.189 --> 00:01:19.267
like the one you see behind me.

00:01:19.714 --> 00:01:23.593
And this map then allows the robot to understand where the obstacles are

00:01:23.660 --> 00:01:26.070
and navigate in a collision-free manner.

00:01:27.570 --> 00:01:29.491
What I want to show you next

00:01:30.025 --> 00:01:33.025
is a set of experiments we did inside our laboratory,

00:01:33.050 --> 00:01:36.050
where this robot was able to go for longer distances.

00:01:37.240 --> 00:01:42.378
So here you'll see, on the top right, what the robot sees with the camera.

00:01:42.403 --> 00:01:43.619
And on the main screen...

00:01:43.644 --> 00:01:46.100
And of course this is sped up by a factor of four...

00:01:46.125 --> 00:01:48.792
On the main screen you'll see the map that it's building.

00:01:48.817 --> 00:01:52.654
So this is a high-resolution map of the corridor around our laboratory.

00:01:52.921 --> 00:01:55.257
And in a minute you'll see it enter our lab,

00:01:55.282 --> 00:01:58.138
which is recognizable by the clutter that you see.

00:01:58.186 --> 00:01:59.202
(Laughter)

00:01:59.227 --> 00:02:01.234
But the main point I want to convey to you

00:02:01.259 --> 00:02:04.724
is that these robots are capable of building high-resolution maps

00:02:04.762 --> 00:02:07.258
at five-centimeter resolution,

00:02:07.361 --> 00:02:11.528
allowing somebody who is outside the lab, or outside the building,

00:02:11.553 --> 00:02:14.712
to deploy these robots without actually going inside

00:02:14.809 --> 00:02:17.809
and to infer what is happening inside the building.

00:02:19.637 --> 00:02:21.877
Now, there are a couple of problems with robots like this.

00:02:22.901 --> 00:02:25.101
The first problem is that it's pretty big.

00:02:25.492 --> 00:02:27.172
Because it's big, it's heavy.

00:02:27.917 --> 00:02:30.917
And these robots consume about 100 watts per pound.

00:02:31.756 --> 00:02:34.036
And this makes for a very short mission life.

00:02:35.254 --> 00:02:36.710
The second problem

00:02:36.735 --> 00:02:40.668
is that these robots have onboard sensors that end up being very expensive...

00:02:40.796 --> 00:02:43.796
a laser scanner, a camera and the processors.

00:02:44.525 --> 00:02:47.525
That drives up the cost of this robot.

00:02:48.741 --> 00:02:51.068
So we asked ourselves a question:

00:02:51.422 --> 00:02:54.988
what consumer product can you buy in an electronics store

00:02:55.221 --> 00:03:00.901
that is inexpensive, that's lightweight, and that has sensing and computation on board?

00:03:03.350 --> 00:03:06.006
And we invented the flying phone.

00:03:06.053 --> 00:03:07.989
(Laughter)
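A minimal sketch, in Python, of the localization idea just described: given measurements to features at known map positions, the robot's position falls out of a least-squares fit. This illustrates only the general triangulation principle, not the lab's actual pipeline; the feature coordinates, the use of range-only measurements and the noise level are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Mapped feature positions in meters (made up for illustration).
features = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
true_pos = np.array([2.0, 1.5])  # ground truth, used only to simulate data

# Simulated noisy range measurements from the robot to each feature.
rng = np.random.default_rng(0)
ranges = np.linalg.norm(features - true_pos, axis=1) + rng.normal(0.0, 0.02, len(features))

def residuals(p):
    # Predicted minus measured distance to each feature.
    return np.linalg.norm(features - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([0.0, 0.0])).x
print(estimate)  # close to [2.0, 1.5]
```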
00:03:08.014 --> 00:03:13.881
So this robot uses a Samsung Galaxy smartphone that you can buy off the shelf,

00:03:13.906 --> 00:03:17.837
and all you need is an app that you can download from our app store.

00:03:18.206 --> 00:03:22.303
And you can see this robot reading the letters, "TED" in this case,

00:03:22.390 --> 00:03:25.326
looking at the corners of the "T" and the "E"

00:03:25.351 --> 00:03:28.793
and then triangulating off of that, flying autonomously.

00:03:29.989 --> 00:03:33.354
That joystick is just there to make sure that if the robot goes crazy,

00:03:33.379 --> 00:03:34.795
Giuseppe can kill it.

00:03:34.820 --> 00:03:36.460
(Laughter)

00:03:37.951 --> 00:03:40.951
In addition to building these small robots,

00:03:41.600 --> 00:03:46.180
we also experiment with aggressive behaviors, like you see here.

00:03:46.943 --> 00:03:51.240
So this robot is now traveling at two to three meters per second,

00:03:52.080 --> 00:03:55.080
pitching and rolling aggressively as it changes direction.

00:03:55.529 --> 00:03:59.607
The main point is that we can have smaller robots that can go faster

00:03:59.632 --> 00:04:02.592
and then travel in these very unstructured environments.

00:04:04.223 --> 00:04:06.279
And in this next video,

00:04:06.304 --> 00:04:11.688
just like you see this bird, an eagle, gracefully coordinating its wings,

00:04:12.620 --> 00:04:16.238
its eyes and feet to grab prey out of the water,

00:04:16.940 --> 00:04:18.836
our robot can go fishing, too.

00:04:18.860 --> 00:04:20.356
(Laughter)

00:04:20.380 --> 00:04:24.434
In this case, this is a Philly cheesesteak hoagie that it's grabbing out of thin air.

00:04:24.459 --> 00:04:26.860
(Laughter)

00:04:27.180 --> 00:04:30.510
So you can see this robot going at about three meters per second,

00:04:30.510 --> 00:04:34.987
which is faster than walking speed, coordinating its arms, its claws

00:04:35.660 --> 00:04:39.680
and its flight with split-second timing to achieve this maneuver.

00:04:41.414 --> 00:04:42.630
In another experiment,

00:04:42.655 --> 00:04:45.655
I want to show you how the robot adapts its flight

00:04:46.285 --> 00:04:48.661
to control its suspended payload,

00:04:48.686 --> 00:04:52.372
whose length is actually greater than the width of the window.

00:04:52.949 --> 00:04:54.645
So in order to accomplish this,

00:04:54.670 --> 00:04:57.943
it actually has to pitch and adjust its altitude

00:04:58.390 --> 00:05:00.710
and swing the payload through.

00:05:06.134 --> 00:05:08.430
But of course we want to make these even smaller,

00:05:08.455 --> 00:05:11.455
and we're inspired in particular by honeybees.

00:05:11.480 --> 00:05:14.480
So if you look at honeybees, and this is a slowed-down video,

00:05:14.798 --> 00:05:18.889
they're so small, their inertia is so low...

00:05:19.460 --> 00:05:20.636
(Laughter)

00:05:20.660 --> 00:05:21.810
that they don't care...

00:05:21.810 --> 00:05:24.220
They bounce off my hand, for example.

00:05:24.220 --> 00:05:27.220
This is a little robot that mimics the honeybee behavior.

00:05:27.862 --> 00:05:29.078
And smaller is better,

00:05:29.103 --> 00:05:32.675
because along with the small size you get lower inertia.

00:05:32.701 --> 00:05:34.237
Along with lower inertia...

00:05:34.460 --> 00:05:37.316
(Robot buzzing, laughter)

00:05:37.340 --> 00:05:40.156
along with lower inertia, you're resistant to collisions.

00:05:40.180 --> 00:05:41.900
And that makes you more robust.
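A rough back-of-the-envelope for why lower inertia buys collision resistance, assuming geometrically similar robots of characteristic length \( \ell \): mass scales with volume, so the kinetic energy carried into a collision at speed \( v \) scales as

\[
E = \tfrac{1}{2}\, m v^{2}, \qquad m \propto \ell^{3} \;\Rightarrow\; E \propto \ell^{3} v^{2}.
\]

Under this assumption, a robot one-tenth the size carries roughly one-thousandth of the collision energy at the same speed, which is why these small robots can simply bounce off hands and obstacles.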
00:05:43.300 --> 00:05:45.956
So just like these honeybees, we build small robots.

00:05:45.980 --> 00:05:48.980
And this particular one is only 25 grams in weight.

00:05:49.141 --> 00:05:51.301
It consumes only six watts of power.

00:05:51.701 --> 00:05:54.237
And it can travel up to six meters per second.

00:05:54.262 --> 00:05:56.598
So if I normalize that to its size,

00:05:56.623 --> 00:05:59.623
it's like a Boeing 787 traveling ten times the speed of sound.

00:06:03.229 --> 00:06:05.325
(Laughter)

00:06:05.350 --> 00:06:07.270
And I want to show you an example.

00:06:08.340 --> 00:06:13.095
This is probably the first planned mid-air collision, at one-twentieth normal speed.

00:06:13.318 --> 00:06:16.176
These are going at a relative speed of two meters per second,

00:06:16.201 --> 00:06:18.681
and this illustrates the basic principle.

00:06:19.445 --> 00:06:24.163
The two-gram carbon fiber cage around it prevents the propellers from entangling,

00:06:24.278 --> 00:06:29.289
but essentially the collision is absorbed and the robot recovers.

00:06:30.059 --> 00:06:32.619
And so small also means safe.

00:06:32.900 --> 00:06:34.916
In my lab, as we developed these robots,

00:06:34.940 --> 00:06:36.560
we started off with these big robots

00:06:36.584 --> 00:06:39.396
and now we're down to these small robots.

00:06:39.420 --> 00:06:42.750
And if you plot a histogram of the number of Band-Aids we've ordered

00:06:42.900 --> 00:06:45.476
over the years, it has sort of tailed off now,

00:06:45.500 --> 00:06:47.460
because these robots are really safe.

00:06:48.860 --> 00:06:51.316
The small size has some disadvantages,

00:06:51.340 --> 00:06:55.516
and nature has found a number of ways to compensate for these disadvantages.

00:06:56.060 --> 00:07:00.116
The basic idea is that they aggregate to form large groups, or swarms.

00:07:01.020 --> 00:07:04.884
So, similarly, in our lab, we try to create artificial robot swarms.

00:07:05.020 --> 00:07:06.401
And this is quite challenging

00:07:06.425 --> 00:07:10.035
because now you have to think about networks of robots.

00:07:10.260 --> 00:07:11.556
And within each robot,

00:07:11.580 --> 00:07:16.606
you have to think about the interplay of sensing, communication, computation...

00:07:17.420 --> 00:07:22.022
And this network then becomes quite difficult to control and manage.

00:07:23.260 --> 00:07:26.260
So from nature we take away three organizing principles

00:07:26.580 --> 00:07:29.580
that essentially allow us to develop our algorithms.

00:07:30.740 --> 00:07:35.124
The first idea is that robots need to be aware of their neighbors.

00:07:35.482 --> 00:07:38.723
They need to be able to sense and communicate with their neighbors.

00:07:39.740 --> 00:07:42.396
So this video illustrates the basic idea.

00:07:42.420 --> 00:07:43.716
You have four robots...

00:07:43.741 --> 00:07:47.428
One of the robots has actually been hijacked by a human operator, literally.

00:07:48.917 --> 00:07:51.156
But because the robots interact with each other,

00:07:51.180 --> 00:07:52.836
they sense their neighbors,

00:07:52.860 --> 00:07:54.156
and they essentially follow.

00:07:54.180 --> 00:07:59.309
And here there's a single person able to lead this network of followers.

00:08:01.700 --> 00:08:06.675
So again, it's not because all the robots know where they're supposed to go.

00:08:06.780 --> 00:08:10.717
It's because they're just reacting to the positions of their neighbors.

00:08:13.420 --> 00:08:16.420
(Laughter)
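A toy version of this "react only to your neighbors" behavior, using a standard consensus rule rather than the lab's actual controller; the positions, neighbor graph and gains below are made up for illustration. Each follower steps toward the centroid of its neighbors, so dragging the single hijacked robot pulls the whole network along.

```python
import numpy as np

positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # who senses whom
gain, dt = 1.0, 0.1

for step in range(200):
    positions[0] += np.array([0.02, 0.0])  # robot 0 is dragged by the operator
    for i in range(1, len(positions)):
        centroid = positions[neighbors[i]].mean(axis=0)
        positions[i] += gain * (centroid - positions[i]) * dt  # step toward neighbors

print(positions)  # the followers end up trailing the hijacked robot
```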
00:08:17.980 --> 00:08:22.317
So the next experiment illustrates the second organizing principle.

00:08:24.620 --> 00:08:27.620
And this one has to do with the principle of anonymity.

00:08:29.100 --> 00:08:32.100
Here the key idea is that

00:08:33.419 --> 00:08:37.200
the robots are agnostic to the identities of their neighbors.

00:08:38.140 --> 00:08:40.756
They're asked to form a circular shape,

00:08:40.780 --> 00:08:44.052
and no matter how many robots you introduce into the formation,

00:08:44.100 --> 00:08:46.676
or how many robots you pull out,

00:08:46.700 --> 00:08:49.700
each robot is simply reacting to its neighbor.

00:08:49.859 --> 00:08:54.357
It's aware of the fact that it needs to form the circular shape,

00:08:54.860 --> 00:08:56.636
but by collaborating with its neighbors

00:08:56.660 --> 00:08:59.660
it forms the shape without central coordination.

00:09:02.020 --> 00:09:04.436
Now if you put these ideas together,

00:09:04.461 --> 00:09:08.356
the third idea is that we essentially give these robots

00:09:08.380 --> 00:09:12.086
mathematical descriptions of the shape they need to execute.

00:09:12.700 --> 00:09:15.700
And these shapes can vary as a function of time,

00:09:16.220 --> 00:09:20.524
and you'll see these robots start from a circular formation,

00:09:20.740 --> 00:09:23.995
change into a rectangular formation, stretch into a straight line,

00:09:24.020 --> 00:09:25.395
and back into an ellipse.

00:09:25.419 --> 00:09:29.032
And they do this with the same kind of split-second coordination

00:09:29.060 --> 00:09:32.060
that you see in swarms in nature.

00:09:33.980 --> 00:09:36.116
So why work with swarms?

00:09:36.140 --> 00:09:39.736
Let me tell you about two applications that we are very interested in.

00:09:40.195 --> 00:09:42.901
The first one has to do with agriculture,

00:09:42.926 --> 00:09:46.052
which is probably the biggest problem that we're facing worldwide.

00:09:46.779 --> 00:09:48.035
As you well know,

00:09:48.060 --> 00:09:51.687
one in every seven persons on this earth is malnourished.

00:09:51.725 --> 00:09:55.224
Most of the land that we can cultivate has already been cultivated.

00:09:56.026 --> 00:09:59.605
And while the efficiency of most systems in the world is improving,

00:09:59.630 --> 00:10:02.932
the efficiency of our food production system is actually declining.

00:10:03.114 --> 00:10:06.864
And that's mostly because of water shortages, crop diseases, climate change

00:10:07.473 --> 00:10:08.993
and a couple of other things.

00:10:09.466 --> 00:10:10.946
So what can robots do?

00:10:11.306 --> 00:10:15.895
Well, we adopt an approach that's called precision farming in the community.

00:10:15.954 --> 00:10:20.946
And the basic idea is that we fly aerial robots through orchards,

00:10:21.203 --> 00:10:24.203
and then we build precision models of individual plants.

00:10:24.935 --> 00:10:26.706
So just like personalized medicine,

00:10:26.731 --> 00:10:31.021
where you might imagine wanting to treat every patient individually,

00:10:31.450 --> 00:10:34.450
what we'd like to do is build models of individual plants

00:10:35.202 --> 00:10:38.887
and then tell the farmer what kind of inputs every plant needs...

00:10:39.607 --> 00:10:43.863
the inputs in this case being water, fertilizer and pesticide.
00:10:45.516 --> 00:10:49.328
Here you'll see robots traveling through an apple orchard,

00:10:49.353 --> 00:10:51.609
and in a minute you'll see two of its companions

00:10:51.634 --> 00:10:53.444
doing the same thing on the left side.

00:10:54.100 --> 00:10:57.315
And what they're doing is essentially building a map of the orchard.

00:10:57.780 --> 00:11:02.275
Within that map is a map of every plant in this orchard.

00:11:02.420 --> 00:11:04.076
(Robot buzzing)

00:11:04.100 --> 00:11:05.996
Let's see what those maps look like.

00:11:06.020 --> 00:11:10.715
In the next video, you'll see the cameras that are being used on this robot.

00:11:10.740 --> 00:11:13.740
On the top left is essentially a standard color camera.

00:11:14.838 --> 00:11:17.838
On the center left is an infrared camera.

00:11:18.174 --> 00:11:21.791
And on the bottom left is a thermal camera.

00:11:22.014 --> 00:11:25.447
And on the main panel, you're seeing a three-dimensional reconstruction

00:11:25.447 --> 00:11:30.348
of every tree in the orchard as the sensors fly right past the trees.

00:11:32.989 --> 00:11:36.593
Armed with information like this, we can do several things.

00:11:37.477 --> 00:11:41.812
The first and possibly the most important thing we can do is very simple:

00:11:41.837 --> 00:11:44.277
count the number of fruits on every tree.

00:11:44.726 --> 00:11:49.436
By doing this, you tell the farmer how many fruits she has on every tree

00:11:49.580 --> 00:11:52.580
and allow her to estimate the yield in the orchard,

00:11:53.860 --> 00:11:56.700
optimizing the production chain downstream.

00:11:57.940 --> 00:11:59.556
The second thing we can do

00:11:59.580 --> 00:12:03.645
is take models of plants, construct three-dimensional reconstructions,

00:12:04.100 --> 00:12:06.636
and from that estimate the canopy size,

00:12:06.660 --> 00:12:10.327
and then correlate the canopy size to the amount of leaf area on every plant.

00:12:10.460 --> 00:12:12.636
And this is called the leaf area index.

00:12:12.660 --> 00:12:14.596
So if you know this leaf area index,

00:12:14.620 --> 00:12:19.810
you essentially have a measure of how much photosynthesis is possible in every plant,

00:12:20.100 --> 00:12:22.980
which again tells you how healthy each plant is.

00:12:25.620 --> 00:12:29.795
By combining visual and infrared information,

00:12:30.355 --> 00:12:33.355
we can also compute indices such as NDVI, the normalized difference vegetation index.

00:12:33.780 --> 00:12:36.596
And in this particular case, you can essentially see

00:12:36.620 --> 00:12:39.620
there are some crops that are not doing as well as other crops.

00:12:39.660 --> 00:12:43.363
This is easily discernible from imagery...

00:12:44.340 --> 00:12:46.556
not just visual imagery, but a combination

00:12:46.580 --> 00:12:49.356
of visual imagery and infrared imagery.

00:12:50.180 --> 00:12:51.516
And then lastly,

00:12:51.540 --> 00:12:55.798
one thing we're interested in doing is detecting the early onset of chlorosis...

00:12:55.980 --> 00:12:57.476
and this is an orange tree...

00:12:57.500 --> 00:13:00.060
which essentially shows up as a yellowing of the leaves.

00:13:00.085 --> 00:13:03.688
But robots flying overhead can easily spot this autonomously

00:13:03.935 --> 00:13:06.871
and then report to the farmer that he or she has a problem

00:13:06.896 --> 00:13:08.416
in this section of the orchard.
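The NDVI just mentioned is a simple per-pixel quantity: the normalized difference of near-infrared and red reflectance. Here is a minimal sketch, with toy arrays standing in for registered bands from the robot's cameras; healthy vegetation reflects near-infrared strongly and absorbs red, so it scores high.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher means healthier vegetation."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy reflectance values: top row healthy, bottom row stressed.
nir_band = np.array([[0.50, 0.45], [0.20, 0.15]])
red_band = np.array([[0.08, 0.10], [0.18, 0.16]])
print(ndvi(nir_band, red_band))  # high NDVI in the top row, low in the bottom row
```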
00:13:10.100 --> 00:13:12.796
Systems like this can really help,

00:13:12.820 --> 00:13:18.619
and by using aerial robot swarms we're projecting yield improvements of about ten percent

00:13:18.660 --> 00:13:21.750
and, more importantly, a decrease in inputs such as water

00:13:21.755 --> 00:13:24.755
of about 25 percent.

00:13:25.740 --> 00:13:29.021
A second application area is first response.

00:13:29.038 --> 00:13:31.427
This is a picture of the Penn campus,

00:13:31.427 --> 00:13:33.918
actually south of the Penn campus, the South Bank.

00:13:33.942 --> 00:13:36.116
I want you to imagine a setting

00:13:36.141 --> 00:13:39.130
where there might be an emergency call from a building,

00:13:39.155 --> 00:13:40.377
a 911 call.

00:13:40.528 --> 00:13:44.837
By the time the police get there, five to ten valuable minutes might have passed.

00:13:44.879 --> 00:13:47.012
But imagine now that robots respond.

00:13:47.300 --> 00:13:49.426
And you have a whole swarm of them,

00:13:49.442 --> 00:13:52.974
flying to the scene autonomously, triggered just by a 911 call

00:13:52.990 --> 00:13:54.451
or by a dispatcher.

00:13:55.295 --> 00:13:57.449
By the way, if someone is here from the FAA,

00:13:57.449 --> 00:13:59.557
this was actually shot in South America.

00:13:59.588 --> 00:14:01.434
(Laughter)

00:14:01.474 --> 00:14:03.541
So, robots arrive at the scene,

00:14:04.858 --> 00:14:07.882
and they're all equipped with downward-facing cameras,

00:14:08.232 --> 00:14:10.299
and they can monitor the scene.

00:14:10.343 --> 00:14:12.057
And they do this autonomously,

00:14:12.081 --> 00:14:16.370
so by the time a human first responder or a police officer gets there,

00:14:16.609 --> 00:14:19.466
they have access to all kinds of information.

00:14:19.649 --> 00:14:22.649
So on the top left, you see the central screen

00:14:22.673 --> 00:14:24.473
that a dispatcher might see,

00:14:24.498 --> 00:14:27.236
which is telling her where the robots are flying

00:14:27.244 --> 00:14:29.632
and how they're encircling the building.

00:14:29.657 --> 00:14:33.196
And the robots are autonomously deciding which ingress points

00:14:33.292 --> 00:14:35.492
should be assigned to which robot.

00:14:36.412 --> 00:14:40.205
On the top right, you essentially see images from the robots

00:14:40.220 --> 00:14:42.220
being assembled into a mosaic,

00:14:42.529 --> 00:14:46.085
which again gives the first responder some idea

00:14:46.346 --> 00:14:48.743
of what activity is going on at the scene.

00:14:48.768 --> 00:14:51.860
And on the bottom, you can see a three-dimensional reconstruction

00:14:51.860 --> 00:14:53.800
that we developed on the fly.

00:14:55.103 --> 00:14:57.302
In addition to working outside buildings,

00:14:57.302 --> 00:14:59.847
we're also interested in going inside buildings,

00:14:59.855 --> 00:15:03.172
and I want to show you an experiment we did three years ago

00:15:03.450 --> 00:15:07.188
where our aerial robot - one exactly like this one -

00:15:07.768 --> 00:15:10.234
is collaborating with a ground robot...

00:15:10.268 --> 00:15:13.268
in this case, it's actually hitching a ride on the ground robot,

00:15:13.300 --> 00:15:16.300
because it's programmed to be lazy, to save power.

00:15:16.453 --> 00:15:19.453
So, as it goes, the two travel in tandem,

00:15:19.585 --> 00:15:22.585
and this is a collapsed building after an earthquake...

00:15:22.624 --> 00:15:25.609
this is shortly after the 2011 earthquake in Fukushima.
00:15:26.763 --> 00:15:29.763
When the robots get stuck in front of a collapsed doorway,

00:15:29.784 --> 00:15:33.458
our robot takes off and is able to fly over the obstacles,

00:15:33.950 --> 00:15:35.350
around the obstacles,

00:15:35.498 --> 00:15:38.736
and generate a three-dimensional map, in this case of a bookcase.

00:15:39.072 --> 00:15:41.436
And it's able to see what's on the other side.

00:15:41.458 --> 00:15:45.513
Something fairly simple, but it's hard to do with ground robots,

00:15:45.530 --> 00:15:49.045
and certainly hard to do with humans when there's potential for harm.

00:15:50.151 --> 00:15:54.085
So these two robots are collaboratively building these maps,

00:15:54.617 --> 00:15:57.188
and, again, when the first responders come,

00:15:57.213 --> 00:15:59.480
they can use these maps right away.

00:15:59.630 --> 00:16:02.630
So let me show you what some of these maps look like.

00:16:02.720 --> 00:16:04.545
So this is a map of three storeys of a building...

00:16:04.570 --> 00:16:07.537
the seventh, eighth and what remains of the ninth floor on top...

00:16:07.562 --> 00:16:11.164
and we were able to construct this map using this team of two robots,

00:16:11.188 --> 00:16:13.455
operating in tandem, autonomously.

00:16:14.032 --> 00:16:17.528
However, this experiment took us two and a half hours to complete.

00:16:18.569 --> 00:16:20.751
Now, no first responder is going to give you

00:16:20.776 --> 00:16:23.124
two and a half hours to do this experiment

00:16:23.149 --> 00:16:25.228
before he or she wants to rush in.

00:16:25.252 --> 00:16:27.283
They might give you two and a half minutes;

00:16:27.292 --> 00:16:29.684
you'll be lucky if you get two and a half seconds.

00:16:29.700 --> 00:16:31.874
And that's where robot swarms come in.

00:16:31.883 --> 00:16:34.533
Imagine if you could send in a hundred of these robots,

00:16:34.538 --> 00:16:36.711
like the little ones that we were just flying,

00:16:36.711 --> 00:16:39.625
and imagine they went in and generated maps like this

00:16:39.649 --> 00:16:42.649
well before humans actually arrived on the scene.

00:16:43.189 --> 00:16:46.189
And that's the vision we're working towards.

00:16:47.570 --> 00:16:48.837
So, let me conclude

00:16:49.994 --> 00:16:52.993
with a movie... a Warner Brothers movie...

00:16:53.228 --> 00:16:56.228
coming soon to a theatre near you:

00:16:57.138 --> 00:16:59.082
"The Swarm"! The swarm is coming!

00:16:59.098 --> 00:17:01.934
And I love this poster. Actually, if you've seen the movie,

00:17:01.990 --> 00:17:04.057
you're probably dating yourself.

00:17:04.738 --> 00:17:07.689
If you have not seen the movie, I encourage you not to see it...

00:17:07.689 --> 00:17:08.836
it's a terrible movie.

00:17:08.836 --> 00:17:09.885
(Laughter)

00:17:09.885 --> 00:17:13.347
It's about killer bees attacking and killing people, and so on.

00:17:13.363 --> 00:17:16.490
But everything about this poster is true, which is why I like it.

00:17:16.498 --> 00:17:19.369
"Its size is immeasurable"... I hope I've convinced you

00:17:19.369 --> 00:17:21.237
that "its power is limitless",

00:17:21.307 --> 00:17:22.917
and even this last bit is true:

00:17:22.934 --> 00:17:25.862
"its enemy is man". The technology is here today,

00:17:25.862 --> 00:17:27.905
and it's people like us that are standing

00:17:27.905 --> 00:17:30.605
between this technology and its applications.

00:17:30.625 --> 00:17:33.625
The swarm is coming. This is not science fiction;

00:17:34.109 --> 00:17:36.309
in fact, this is what lies ahead.
00:17:36.734 --> 00:17:42.299
Lastly, I want you to applaud the people who actually create the future,

00:17:42.647 --> 00:17:47.328
Yash Mulgaonkar, Sikang Liu and Giuseppe Loianno,

00:17:47.416 --> 00:17:50.416
who are responsible for the three demonstrations that you saw.

00:17:51.103 --> 00:17:52.279
Thank you.

00:17:52.328 --> 00:17:55.328
(Applause)