The future of flying robots | Vijay Kumar | TEDxPenn

  • 0:35 - 0:38
    In my lab, we build
    autonomous aerial robots
  • 0:38 - 0:40
    like the one you see flying here.
  • 0:43 - 0:46
    Unlike the commercially available drones
    that you can buy today,
  • 0:47 - 0:50
    this robot doesn't have any GPS on board.
  • 0:51 - 0:52
    So without GPS,
  • 0:52 - 0:55
    it's hard for robots like this
    to determine their position.
  • 0:57 - 1:01
    This robot uses onboard sensors,
    cameras and laser scanners,
  • 1:01 - 1:04
    to scan the environment.
  • 1:04 - 1:07
    It detects features from the environment,
  • 1:07 - 1:10
    and it determines where it is
    relative to those features,
  • 1:10 - 1:12
    using a method of triangulation.
  • 1:14 - 1:17
    And then it can assemble
    all these features into a map,
  • 1:17 - 1:19
    like you see behind me.
  • 1:20 - 1:24
    And this map then allows the robot
    to understand where the obstacles are
  • 1:24 - 1:26
    and navigate in a collision-free manner.
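As an aside on the mechanics: checking a planned path against an occupancy-grid map like the one described above can be sketched in a few lines. This is a minimal illustration only; the grid values, names, and waypoints are assumptions, not the lab's actual code.

    import numpy as np

    RESOLUTION = 0.05  # meters per cell: the 5 cm resolution mentioned later in the talk

    def is_path_free(occupancy_grid, waypoints):
        # Map each waypoint (x, y) in meters to a grid cell and test it.
        for x, y in waypoints:
            i, j = int(y / RESOLUTION), int(x / RESOLUTION)
            if occupancy_grid[i, j] > 0.5:  # cell judged likely to contain an obstacle
                return False
        return True

    grid = np.zeros((200, 200))   # a 10 m x 10 m toy map
    grid[60:80, 60:80] = 1.0      # one obstacle block
    print(is_path_free(grid, [(1.0, 1.0), (3.5, 3.5), (9.0, 9.0)]))  # False: path crosses the block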
  • 1:28 - 1:29
    What I want to show you next
  • 1:30 - 1:33
    is a set of experiments
    we did inside our laboratory,
  • 1:33 - 1:36
    where this robot was able
    to go for longer distances.
  • 1:37 - 1:42
    So here you'll see, on the top right,
    what the robot sees with the camera.
  • 1:42 - 1:44
    And on the main screen...
  • 1:44 - 1:46
    And of course this is sped up
    by a factor of four...
  • 1:46 - 1:49
    On the main screen you'll see
    the map that it's building.
  • 1:49 - 1:53
    So this is a high-resolution map
    of the corridor around our laboratory.
  • 1:53 - 1:55
    And in a minute
    you'll see it enter our lab,
  • 1:55 - 1:58
    which is recognizable
    by the clutter that you see.
  • 1:58 - 1:59
    (Laughter)
  • 1:59 - 2:01
    But the main point I want to convey to you
  • 2:01 - 2:05
    is that these robots are capable
    of building high-resolution maps
  • 2:05 - 2:07
    at five centimeters resolution,
  • 2:07 - 2:12
    allowing somebody who is outside the lab,
    or outside the building
  • 2:12 - 2:15
    to deploy these
    without actually going inside,
  • 2:15 - 2:18
    and trying to infer
    what happens inside the building.
  • 2:20 - 2:22
    Now there's one problem
    with robots like this.
  • 2:23 - 2:25
    The first problem is it's pretty big.
  • 2:25 - 2:27
    Because it's big, it's heavy.
  • 2:28 - 2:31
    And these robots consume
    about 100 watts per pound.
  • 2:32 - 2:34
    And this makes for
    a very short mission life.
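Rough arithmetic shows why; the battery and weight figures below are assumed for illustration only, while the 100 W per pound is the talk's own number.

    power_per_pound = 100.0   # W/lb, the figure from the talk
    weight_lb = 4.0           # assumed weight of one of these "big" robots
    battery_wh = 40.0         # assumed energy of a small onboard battery pack
    minutes = battery_wh / (power_per_pound * weight_lb) * 60
    print(f"{minutes:.0f} minutes of flight")  # about 6 minutes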
  • 2:35 - 2:37
    The second problem
  • 2:37 - 2:41
    is that these robots have onboard sensors
    that end up being very expensive...
  • 2:41 - 2:44
    A laser scanner, a camera
    and the processors.
  • 2:45 - 2:48
    That drives up the cost of this robot.
  • 2:49 - 2:51
    So we asked ourselves a question:
  • 2:51 - 2:55
    what consumer product
    can you buy in an electronics store
  • 2:55 - 3:01
    that is inexpensive, that's lightweight,
    that has sensing onboard and computation?
  • 3:03 - 3:06
    And we invented the flying phone.
  • 3:06 - 3:08
    (Laughter)
  • 3:08 - 3:14
    So this robot uses a Samsung Galaxy
    smartphone that you can buy off the shelf,
  • 3:14 - 3:18
    and all you need is an app that you
    can download from our app store.
  • 3:18 - 3:22
    And you can see this robot
    reading the letters, "TED" in this case,
  • 3:22 - 3:25
    looking at the corners
    of the "T" and the "E"
  • 3:25 - 3:29
    and then triangulating off of that,
    flying autonomously.
  • 3:30 - 3:33
    That joystick is just there
    to make sure if the robot goes crazy,
  • 3:33 - 3:35
    Giuseppe can kill it.
  • 3:35 - 3:36
    (Laughter)
  • 3:38 - 3:41
    In addition to building
    these small robots,
  • 3:42 - 3:46
    we also experiment with aggressive
    behaviors, like you see here.
  • 3:47 - 3:51
    So this robot is now traveling
    at two to three meters per second,
  • 3:52 - 3:55
    pitching and rolling aggressively
    as it changes direction.
  • 3:56 - 4:00
    The main point is we can have
    smaller robots that can go faster
  • 4:00 - 4:03
    and then travel in these
    very unstructured environments.
  • 4:04 - 4:06
    And in this next video,
  • 4:06 - 4:12
    just like you see this bird, an eagle,
    gracefully coordinating its wings,
  • 4:13 - 4:16
    its eyes and feet
    to grab prey out of the water,
  • 4:17 - 4:19
    our robot can go fishing, too.
  • 4:19 - 4:20
    (Laughter)
  • 4:20 - 4:24
    In this case, this is a Philly cheesesteak
    hoagie that it's grabbing out of thin air.
  • 4:24 - 4:27
    (Laughter)
  • 4:27 - 4:31
    So you can see this robot
    going at about three meters per second,
  • 4:31 - 4:35
    which is faster than walking speed,
    coordinating its arms, its claws
  • 4:36 - 4:40
    and its flight with split-second timing
    to achieve this maneuver.
  • 4:41 - 4:43
    In another experiment,
  • 4:43 - 4:46
    I want to show you
    how the robot adapts its flight
  • 4:46 - 4:49
    to control its suspended payload,
  • 4:49 - 4:52
    whose length is actually larger
    than the width of the window.
  • 4:53 - 4:55
    So in order to accomplish this,
  • 4:55 - 4:58
    it actually has to pitch
    and adjust the altitude
  • 4:58 - 5:01
    and swing the payload through.
  • 5:06 - 5:08
    But of course we want
    to make these even smaller,
  • 5:08 - 5:11
    and we're inspired
    in particular by honeybees.
  • 5:11 - 5:14
    So if you look at honeybees,
    and this is a slowed down video,
  • 5:15 - 5:19
    they're so small,
    the inertia is so lightweight...
  • 5:19 - 5:21
    (Laughter)
  • 5:21 - 5:22
    that they don't care...
  • 5:22 - 5:24
    They bounce off my hand, for example.
  • 5:24 - 5:27
    This is a little robot
    that mimics the honeybee behavior.
  • 5:28 - 5:29
    And smaller is better,
  • 5:29 - 5:33
    because along with the small size
    you get lower inertia.
  • 5:33 - 5:34
    Along with lower inertia...
  • 5:34 - 5:37
    (Robot buzzing, laughter)
  • 5:37 - 5:40
    along with lower inertia,
    you're resistant to collisions.
  • 5:40 - 5:42
    And that makes you more robust.
  • 5:43 - 5:46
    So just like these honeybees,
    we build small robots.
  • 5:46 - 5:49
    And this particular one
    is only 25 grams in weight.
  • 5:49 - 5:51
    It consumes only six watts of power.
  • 5:52 - 5:54
    And it can travel
    up to six meters per second.
  • 5:54 - 5:57
    So if I normalize that to its size,
  • 5:57 - 6:00
    it's like a Boeing 787 traveling
    ten times the speed of sound.
  • 6:03 - 6:05
    (Laughter)
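The comparison is rough but checks out: a palm-sized robot spanning about 0.1 m (an assumed figure) doing 6 m/s covers 60 body lengths per second; a Boeing 787 is roughly 57 m long, so 60 body lengths per second would be about 3,400 m/s, around ten times the ~340 m/s speed of sound.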
  • 6:05 - 6:07
    And I want to show you an example.
  • 6:08 - 6:13
    This is probably the first planned mid-air
    collision, at one-twentieth normal speed.
  • 6:13 - 6:16
    These are going at a relative speed
    of two meters per second,
  • 6:16 - 6:19
    and this illustrates the basic principle.
  • 6:19 - 6:24
    The two-gram carbon fiber cage around it
    prevents the propellers from entangling,
  • 6:24 - 6:29
    but essentially the collision is absorbed
    and the robot responds to the collisions.
  • 6:30 - 6:33
    And so small also means safe.
  • 6:33 - 6:35
    In my lab, as we developed these robots,
  • 6:35 - 6:37
    we start off with these big robots
  • 6:37 - 6:39
    and then now we're down
    to these small robots.
  • 6:39 - 6:43
    And if you plot a histogram
    of the number of Band-Aids we've ordered
  • 6:43 - 6:45
in the past, that has sort of tailed off now.
  • 6:46 - 6:47
    Because these robots are really safe.
  • 6:49 - 6:51
    The small size has some disadvantages,
  • 6:51 - 6:56
    and nature has found a number of ways
    to compensate for these disadvantages.
  • 6:56 - 7:00
    The basic idea is they aggregate
    to form large groups, or swarms.
  • 7:01 - 7:05
    So, similarly, in our lab,
    we try to create artificial robot swarms.
  • 7:05 - 7:06
    And this is quite challenging
  • 7:06 - 7:10
    because now you have to think
    about networks of robots.
  • 7:10 - 7:12
    And within each robot,
  • 7:12 - 7:17
    you have to think about the interplay
    of sensing, communication, computation...
  • 7:17 - 7:22
    And this network then becomes
    quite difficult to control and manage.
  • 7:23 - 7:26
    So from nature we take away
    three organizing principles
  • 7:27 - 7:30
    that essentially allow us
    to develop our algorithms.
  • 7:31 - 7:35
    The first idea is that robots
    need to be aware of their neighbors.
  • 7:35 - 7:39
    They need to be able to sense
    and communicate with their neighbors.
  • 7:40 - 7:42
    So this video illustrates the basic idea.
  • 7:42 - 7:44
    You have four robots...
  • 7:44 - 7:47
    One of the robots has actually been
    hijacked by a human operator, literally.
  • 7:49 - 7:51
    But because the robots
    interact with each other,
  • 7:51 - 7:53
    they sense their neighbors,
  • 7:53 - 7:54
    they essentially follow.
  • 7:54 - 7:59
    And here there's a single person
    able to lead this network of followers.
  • 8:02 - 8:07
    So again, it's not because all the robots
    know where they're supposed to go.
  • 8:07 - 8:11
    It's because they're just reacting
    to the positions of their neighbors.
  • 8:13 - 8:16
    (Laughter)
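What "sense and follow your neighbors" can look like as a rule is a simple consensus update; the sketch below is generic and illustrative, with the neighbor graph, gain, and motion all assumptions rather than the lab's controller.

    import numpy as np

    def consensus_step(positions, neighbors, gain=0.1):
        # Each listed robot nudges itself toward the mean of its neighbors.
        new_pos = positions.copy()
        for i, nbrs in neighbors.items():
            new_pos[i] += gain * (positions[nbrs].mean(axis=0) - positions[i])
        return new_pos

    pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    nbrs = {1: [0, 2], 2: [1, 3], 3: [2, 0]}  # robot 0 is human-driven, so it has no rule
    for _ in range(50):
        pos[0] += np.array([0.05, 0.0])       # the operator drags the "hijacked" robot
        pos = consensus_step(pos, nbrs)       # the other three simply react and follow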
  • 8:18 - 8:22
    So the next experiment illustrates
    the second organizing principle.
  • 8:25 - 8:28
And this one has to do
with the principle of anonymity.
  • 8:29 - 8:32
    Here the key idea is that
  • 8:33 - 8:37
    the robots are agnostic
    to the identities of their neighbors.
  • 8:38 - 8:41
    They're asked to form a circular shape,
  • 8:41 - 8:44
    and no matter how many robots
    you introduce into the formation,
  • 8:44 - 8:47
    or how many robots you pull out,
  • 8:47 - 8:50
    each robot is simply
    reacting to its neighbor.
  • 8:50 - 8:54
    It's aware of the fact that it needs
    to form the circular shape,
  • 8:55 - 8:57
    but collaborating with its neighbors
  • 8:57 - 9:00
    it forms the shape
    without central coordination.
  • 9:02 - 9:04
    Now if you put these ideas together,
  • 9:04 - 9:08
    the third idea is that we
    essentially give these robots
  • 9:08 - 9:12
    mathematical descriptions
    of the shape they need to execute.
  • 9:13 - 9:16
And these shapes can vary
as a function of time,
  • 9:16 - 9:21
    and you'll see these robots
    start from a circular formation,
  • 9:21 - 9:24
    change into a rectangular formation,
    stretch into a straight line,
  • 9:24 - 9:25
    back into an ellipse.
  • 9:25 - 9:29
    And they do this with the same
    kind of split-second coordination
  • 9:29 - 9:32
    that you see in natural swarms, in nature.
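One generic way to hand robots a "mathematical description of a shape" is a time-varying parametric curve that each robot servos to; this is a sketch under that assumption, not the lab's actual formulation.

    import numpy as np

    def formation_targets(t, n_robots):
        # n points on a closed curve that morphs over time: circle to ellipse and back.
        theta = np.linspace(0, 2 * np.pi, n_robots, endpoint=False)
        a = 1.0 + 0.5 * np.sin(t)  # one semi-axis varies with time (illustrative)
        return np.stack([a * np.cos(theta), np.sin(theta)], axis=1)

    # Combined with the anonymity principle, which robot flies to which
    # target point is negotiated locally, with no central coordinator.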
  • 9:34 - 9:36
    So why work with swarms?
  • 9:36 - 9:40
    Let me tell you about two applications
    that we are very interested in.
  • 9:40 - 9:43
    The first one has to do with agriculture,
  • 9:43 - 9:46
    which is probably the biggest problem
    that we're facing worldwide.
  • 9:47 - 9:48
    As you well know,
  • 9:48 - 9:52
one in every seven people
on this earth is malnourished.
  • 9:52 - 9:55
    Most of the land that we can cultivate
    has already been cultivated.
  • 9:56 - 10:00
    And the efficiency of most systems
    in the world is improving,
  • 10:00 - 10:03
    but our production system
    efficiency is actually declining.
  • 10:03 - 10:07
    And that's mostly because of water
    shortage, crop diseases, climate change
  • 10:07 - 10:09
    and a couple of other things.
  • 10:09 - 10:11
    So what can robots do?
  • 10:11 - 10:16
    Well, we adopt an approach that's
    called Precision Farming in the community.
  • 10:16 - 10:21
    And the basic idea is that we fly
    aerial robots through orchards,
  • 10:21 - 10:24
    and then we build
    precision models of individual plants.
  • 10:25 - 10:27
    So just like personalized medicine,
  • 10:27 - 10:31
where you might imagine wanting
to treat every patient individually,
  • 10:31 - 10:34
    what we'd like to do is build
    models of individual plants
  • 10:35 - 10:39
    and then tell the farmer
    what kind of inputs every plant needs...
  • 10:40 - 10:44
    The inputs in this case being water,
    fertilizer and pesticide.
  • 10:46 - 10:49
Here you'll see a robot
traveling through an apple orchard,
  • 10:49 - 10:52
    and in a minute you'll see
    two of its companions
  • 10:52 - 10:53
    doing the same thing on the left side.
  • 10:54 - 10:57
    And what they're doing is essentially
    building a map of the orchard.
  • 10:58 - 11:02
    Within the map is a map
    of every plant in this orchard.
  • 11:02 - 11:04
    (Robot buzzing)
  • 11:04 - 11:06
    Let's see what those maps look like.
  • 11:06 - 11:11
    In the next video, you'll see the cameras
    that are being used on this robot.
  • 11:11 - 11:14
    On the top-left is essentially
    a standard color camera.
  • 11:15 - 11:18
    On the left-center is an infrared camera.
  • 11:18 - 11:22
    And on the bottom-left
    is a thermal camera.
  • 11:22 - 11:25
    And on the main panel, you're seeing
    a three-dimensional reconstruction
  • 11:25 - 11:30
    of every tree in the orchard
    as the sensors fly right past the trees.
  • 11:33 - 11:37
    Armed with information like this,
    we can do several things.
  • 11:37 - 11:42
    The first and possibly the most important
    thing we can do is very simple:
  • 11:42 - 11:44
    count the number of fruits on every tree.
  • 11:45 - 11:49
    By doing this, you tell the farmer
    how many [fruits] she has in every tree
  • 11:50 - 11:53
    and allow her to estimate
    the yield in the orchard,
  • 11:54 - 11:57
    optimizing the production
    chain downstream.
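A toy version of that yield estimate, with every number assumed: 200 apples counted on a tree at roughly 0.2 kg per apple suggests about 40 kg for that tree, and summing over all the trees gives the orchard-level figure the farmer can plan around.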
  • 11:58 - 12:00
    The second thing we can do
  • 12:00 - 12:04
    is take models of plants, construct
    three-dimensional reconstructions,
  • 12:04 - 12:07
    and from that estimate the canopy size,
  • 12:07 - 12:10
    and then correlate the canopy size
    to the amount of leaf area on every plant.
  • 12:10 - 12:13
    And this is called the leaf area index.
  • 12:13 - 12:15
    So if you know this leaf area index,
  • 12:15 - 12:20
    you essentially have a measure of how much
    photosynthesis is possible in every plant,
  • 12:20 - 12:23
    which again tells you
    how healthy each plant is.
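Concretely, the leaf area index is the total one-sided leaf area divided by the ground area beneath the canopy; a dense, healthy canopy typically scores well above 1, while sparse or stressed plants fall below it.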
  • 12:26 - 12:30
    By combining visual
    and infrared information,
  • 12:30 - 12:33
    we can also compute indices such as NDVI.
  • 12:34 - 12:37
    And in this particular case,
    you can essentially see
  • 12:37 - 12:40
    there are some crops that are
    not doing as well as other crops.
  • 12:40 - 12:43
    This is easily discernible from imagery,
  • 12:44 - 12:47
    not just visual imagery but combining
  • 12:47 - 12:49
    both visual imagery and infrared imagery.
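NDVI, the Normalized Difference Vegetation Index, is a standard per-pixel combination of exactly those two bands; a minimal sketch follows, with the array names as assumptions.

    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        # (NIR - Red) / (NIR + Red): healthy leaves reflect strongly in the
        # near-infrared, so vigorous vegetation scores close to +1.
        nir, red = nir.astype(float), red.astype(float)
        return (nir - red) / (nir + red + eps)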
  • 12:50 - 12:52
    And then lastly,
  • 12:52 - 12:56
    one thing we're interested in doing is
    detecting the early onset of chlorosis...
  • 12:56 - 12:57
    And this is an orange tree...
  • 12:58 - 13:00
which is essentially seen
    by yellowing of leaves.
  • 13:00 - 13:04
    But robots flying overhead
    can easily spot this autonomously
  • 13:04 - 13:07
    and then report to the farmer
    that he or she has a problem
  • 13:07 - 13:08
    in this section of the orchard.
  • 13:10 - 13:13
    Systems like this can really help,
  • 13:13 - 13:19
    and we're projecting yields
    that can improve by about ten percent
  • 13:19 - 13:22
    and, more importantly, decrease
    the amount of inputs such as water
  • 13:22 - 13:25
    by 25 percent by using
    aerial robot swarms.
  • 13:26 - 13:29
    A second application area
    is in first response.
  • 13:29 - 13:31
    This is a picture of the Penn campus,
  • 13:31 - 13:34
    actually south of the Penn campus,
    the South Bank.
  • 13:34 - 13:36
    I want you to imagine a setting
  • 13:36 - 13:39
    where there might be an emergency call
    from a building,
  • 13:39 - 13:40
    a 911 call.
  • 13:41 - 13:45
By the time the police get there,
five to ten valuable minutes may have passed.
  • 13:45 - 13:47
    But imagine now, robots respond.
  • 13:47 - 13:49
    And you have a whole swarm of them,
  • 13:49 - 13:53
    flying to the scene autonomously,
    just triggered by a 911 call
  • 13:53 - 13:54
or by a dispatcher.
  • 13:55 - 13:57
    By the way, if someone is here
    from the FAA,
  • 13:57 - 14:00
    this was actually shot in South America.
  • 14:00 - 14:01
    (Laughter)
  • 14:01 - 14:04
    So, robots arrive at the scene,
  • 14:05 - 14:08
    and they're all equipped
    with downward facing cameras,
  • 14:08 - 14:10
    and they can monitor the scene.
  • 14:10 - 14:12
    And they do this autonomously,
  • 14:12 - 14:16
    so by the time a human first responder
    or a police officer gets there,
  • 14:17 - 14:19
    they have access
    to all kinds of information.
  • 14:20 - 14:23
    So on the top left,
    you see the central screen
  • 14:23 - 14:24
that a dispatcher might see,
  • 14:24 - 14:27
    which is telling her
    where the robots are flying
  • 14:27 - 14:30
    and how they're encircling the building.
  • 14:30 - 14:33
    And the robots are autonomously deciding
which ingress points
  • 14:33 - 14:35
    should be assigned to what robot.
  • 14:36 - 14:40
On the top right, you essentially see
    images from the robots
  • 14:40 - 14:42
being assembled into a mosaic,
  • 14:43 - 14:46
which, again, gives the first responder
    some idea
  • 14:46 - 14:49
    of what activity is going on at the scene.
  • 14:49 - 14:52
    And on the bottom, you can see
    a three-dimensional reconstruction
  • 14:52 - 14:54
    that we developed on the fly.
  • 14:55 - 14:57
In addition to working outside buildings,
  • 14:57 - 15:00
    we're also interested
in going inside buildings,
  • 15:00 - 15:03
    and I want to show you
    an experiment we did three years ago
  • 15:03 - 15:07
    where our aerial robot -
    one exactly like this one -
  • 15:08 - 15:10
    is collaborating with a ground robot,
  • 15:10 - 15:13
    in this case it's actually hitching a ride
    with a ground robot,
  • 15:13 - 15:16
    because it's programmed to be lazy,
    to save power.
  • 15:16 - 15:19
    So, as it goes up,
    the two travel in tandem,
  • 15:20 - 15:23
    and this is a collapsed building
    after an earthquake,
  • 15:23 - 15:26
    this is shortly after the 2011 earthquake
    in Fukushima.
  • 15:27 - 15:30
    When the robots get stuck
    in front of a collapsed doorway,
  • 15:30 - 15:33
our robot takes off
    and is able to fly over the obstacles
  • 15:34 - 15:35
and around the obstacles,
  • 15:35 - 15:39
    and generate a three-dimensional map,
    in this case of a bookcase.
  • 15:39 - 15:41
    And it's able to see
    what's on the other side.
  • 15:41 - 15:46
    Something fairly simple,
    but it's hard to do with ground robots,
  • 15:46 - 15:49
    and certainly hard to do with humans
    when there's potential for harm.
  • 15:50 - 15:54
    So these two robots
    are collaboratively building these maps,
  • 15:55 - 15:57
    and, again,
    when the first responders come,
  • 15:57 - 15:59
they can be equipped with these maps.
  • 16:00 - 16:03
    So let me show you
    what some of these maps look like.
  • 16:03 - 16:05
So this is a three-story building,
  • 16:05 - 16:08
    the seventh, eighth and what remains
    of the ninth floor on top,
  • 16:08 - 16:11
    and we were able to construct this map
    using this team of two robots,
  • 16:11 - 16:13
    operating in tandem, autonomously.
  • 16:14 - 16:18
    However, this experiment took us
    two and a half hours to complete.
  • 16:19 - 16:21
    Now, no first responder
    is going to give you
  • 16:21 - 16:23
    two and a half hours
    to do this experiment,
  • 16:23 - 16:25
    before he or she wants to rush in.
  • 16:25 - 16:27
    They might give you
    two and a half minutes,
  • 16:27 - 16:30
    you'll be lucky if you get
    two and a half seconds.
  • 16:30 - 16:32
But that's where robot swarms
come in.
  • 16:32 - 16:35
    Imagine if you could send in
    a hundred of these robots,
  • 16:35 - 16:37
    like the little ones
    that we were just flying,
  • 16:37 - 16:40
    and imagine they went in
    and generated maps like this
  • 16:40 - 16:43
    well before humans actually arrived
    on the scene.
  • 16:43 - 16:46
    And that's the vision
    we're working towards.
  • 16:48 - 16:49
    So, let me conclude
  • 16:50 - 16:53
with a movie,
a Warner Brothers movie,
  • 16:53 - 16:56
an upcoming attraction
at a theatre near you:
  • 16:57 - 16:59
    The Swarm!
    The Swarm is coming!
  • 16:59 - 17:02
And I love this poster.
Actually, if you've seen the movie,
  • 17:02 - 17:04
you're probably dating yourself.
  • 17:05 - 17:08
If you have not seen the movie,
I encourage you not to see it;
  • 17:08 - 17:09
it's a terrible movie.
  • 17:09 - 17:10
    (Laughter)
  • 17:10 - 17:13
It's about killer bees attacking men
and killing them, and so on.
  • 17:13 - 17:16
    But everything about this poster is true,
    which is why I like it.
  • 17:16 - 17:19
    "Its size is immeasurable" -
    I hope I've convinced you
  • 17:19 - 17:21
    that "its power is limitless",
  • 17:21 - 17:23
and even this last bit is true:
  • 17:23 - 17:26
    "its enemy is man",
    the technology is here today
  • 17:26 - 17:28
    and it's people like us that are standing
  • 17:28 - 17:31
    between this technology
    and its applications.
  • 17:31 - 17:34
The swarm is coming.
This is not science fiction;
  • 17:34 - 17:36
    in fact, this is what lies ahead.
  • 17:37 - 17:42
    Lastly, I want you to applaud
    the people who actually create the future,
  • 17:43 - 17:47
    Yash Mulgaonkar, Sikang Liu
    and Giuseppe Loianno,
  • 17:47 - 17:50
    who are responsible for the three
    demonstrations that you saw.
  • 17:51 - 17:52
    Thank you.
  • 17:52 - 17:55
    (Applause)
Title:
The future of flying robots | Vijay Kumar | TEDxPenn
Description:

At his lab at the University of Pennsylvania, Vijay Kumar and his team have created autonomous aerial robots inspired by honeybees. Their latest breakthrough: Precision Farming, in which swarms of robots map, reconstruct and analyze every plant and piece of fruit in an orchard, providing vital information to farmers that can help improve yields and make water management smarter.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

Video Language:
English
Team:
closed TED
Project:
TEDxTalks
Duration:
18:09