
The incredible inventions of intuitive AI

  • 0:01 - 0:03
    How many of you are creatives?
  • 0:03 - 0:07
    Designers, engineers,
    entrepreneurs, artists,
  • 0:07 - 0:09
    or maybe you just have
    a really big imagination.
  • 0:09 - 0:10
    Show of hands?
  • 0:10 - 0:11
    (Applause)
  • 0:11 - 0:13
    That's most of you.
  • 0:13 - 0:16
    I have some news for us creatives.
  • 0:17 - 0:22
    Over the course of the next 20 years ...
  • 0:22 - 0:26
    more will change around
    the way we do our work
  • 0:26 - 0:28
    than has happened in the last 2,000.
  • 0:29 - 0:33
    In fact, I think we're at the dawn
    of a new age in human history.
  • 0:34 - 0:39
    Now, there have been four major historical
    eras defined by the way we work.
  • 0:40 - 0:43
    The Hunter-Gatherer Age
    lasted several million years.
  • 0:43 - 0:47
    And then the Agricultural Age
    lasted several thousand years.
  • 0:47 - 0:51
    The Industrial Age lasted
    a couple of centuries,
  • 0:51 - 0:55
    and now the Information Age
    has lasted just a few decades.
  • 0:55 - 0:57
    And now today,
  • 0:57 - 1:00
    we're on the cusp of our next
    great era as a species.
  • 1:01 - 1:04
    Welcome to the Augmented Age.
  • 1:04 - 1:05
    In this new era,
  • 1:05 - 1:09
    your natural human capabilities are going
    to be augmented by computational systems
  • 1:09 - 1:11
    that help you think,
  • 1:11 - 1:13
    robotic systems that help you make,
  • 1:13 - 1:15
    and [a] digital nervous system
  • 1:15 - 1:19
    that connects you to the world
    far beyond your natural senses.
  • 1:20 - 1:21
    Let's start with cognitive augmentation.
  • 1:21 - 1:24
    How many of you are augmented cyborgs?
  • 1:24 - 1:25
    (Laughter)
  • 1:27 - 1:30
    I would actually argue that we're
    already augmented.
  • 1:30 - 1:32
    Imagine you're at a party,
  • 1:32 - 1:35
    and somebody asks you a question
    that you don't know the answer to.
  • 1:35 - 1:37
    If you have one of these,
  • 1:37 - 1:38
    in a few seconds,
  • 1:38 - 1:40
    you can know the answer.
  • 1:40 - 1:43
    But this is just a primitive beginning.
  • 1:43 - 1:46
    Even Siri is just a passive tool.
  • 1:47 - 1:50
    In fact, for the last
    three-and-a-half million years,
  • 1:50 - 1:54
    the tools that we've had
    have been completely passive.
  • 1:54 - 1:58
    They do exactly what we tell them
    and nothing more.
  • 1:58 - 2:01
    Our very first tool only cut
    where we struck it.
  • 2:02 - 2:05
    The chisel only carves
    where the artist points it.
  • 2:06 - 2:11
    And even our most advanced tools
    do nothing without our explicit direction.
  • 2:11 - 2:13
    In fact, to date --
  • 2:13 - 2:15
    and this is something
    that frustrates me --
  • 2:15 - 2:16
    we've always been limited
  • 2:16 - 2:19
    by this need to manually
    push our wills into our tools --
  • 2:19 - 2:20
    like manual,
  • 2:20 - 2:22
    like literally using our hands,
  • 2:22 - 2:23
    even with computers.
  • 2:24 - 2:27
    But I'm more like Scotty in "Star Trek."
  • 2:27 - 2:28
    (Laughter)
  • 2:29 - 2:31
    I want to have a conversation
    with a computer.
  • 2:31 - 2:34
    I want to say, "Computer,
    let's design a car,"
  • 2:34 - 2:35
    and the computer shows me a car.
  • 2:35 - 2:38
    And I say, "No, more fast-looking,
    and less German,"
  • 2:38 - 2:40
    and bang, the computer shows me an option.
  • 2:40 - 2:41
    (Laughter)
  • 2:42 - 2:45
    That conversation might be
    a little ways off,
  • 2:45 - 2:47
    it's actually probably less
    than any of us think,
  • 2:47 - 2:49
    but right now,
  • 2:49 - 2:50
    we're working on it.
  • 2:50 - 2:54
    Tools are making this leap from being
    passive to being generative.
  • 2:55 - 2:58
    Generative design tools
    use a computer and algorithms
  • 2:58 - 3:01
    to synthesize geometry
  • 3:01 - 3:04
    to come up with new designs
    all by themselves.
  • 3:04 - 3:07
    All it needs are your goals
    and your constraints.
  • 3:07 - 3:08
    I'll give you an example.
  • 3:08 - 3:11
    In the case of this aerial drone chassis,
  • 3:11 - 3:14
    all you would need to do
    is tell it something like,
  • 3:14 - 3:15
    it has four propellers,
  • 3:15 - 3:17
    you want it to be
    as lightweight as possible,
  • 3:17 - 3:19
    and you need it to be
    aerodynamically efficient.
  • 3:19 - 3:22
    And then what the computer does
  • 3:22 - 3:25
    is it explores the entire solution space:
  • 3:25 - 3:28
    every single possibility that solves
    and meets your criteria --
  • 3:28 - 3:30
    millions of them.
  • 3:30 - 3:32
    It takes big computers to do this,
  • 3:32 - 3:34
    but it comes back to us with designs
  • 3:34 - 3:37
    that we by ourselves
    never could've imagined.
  • 3:37 - 3:40
    And the computer's coming up
    with this stuff all by itself,
  • 3:40 - 3:42
    no one ever drew anything,
  • 3:42 - 3:45
    and it started completely from scratch.
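
As a rough illustration of the goals-and-constraints workflow just described, here is a minimal Python sketch of a generative search: it samples many candidate chassis geometries, discards any that violate the weight and drag constraints, and keeps the lightest survivor. The parameter names and scoring formulas are invented for illustration and are not Autodesk's actual algorithms.

```python
# Toy generative-design search: the designer supplies goals and constraints,
# and the program explores candidate geometries on its own.
# All parameters and formulas below are illustrative assumptions.
import random
from dataclasses import dataclass


@dataclass
class Chassis:
    arm_length_mm: float
    strut_thickness_mm: float

    def weight_g(self) -> float:
        # Crude proxy: weight grows with arm length and strut thickness.
        return 0.2 * self.arm_length_mm * self.strut_thickness_mm

    def drag_coefficient(self) -> float:
        # Crude proxy: thicker struts are less aerodynamic.
        return 0.2 + 0.01 * self.strut_thickness_mm


def explore(n: int, max_weight_g: float, max_drag: float) -> list[Chassis]:
    """Sample the solution space and keep only candidates that meet the constraints."""
    candidates = (
        Chassis(arm_length_mm=random.uniform(80, 200),
                strut_thickness_mm=random.uniform(1.0, 6.0))
        for _ in range(n)
    )
    return [c for c in candidates
            if c.weight_g() <= max_weight_g and c.drag_coefficient() <= max_drag]


if __name__ == "__main__":
    feasible = explore(n=100_000, max_weight_g=60.0, max_drag=0.24)
    best = min(feasible, key=lambda c: c.weight_g())  # goal: as light as possible
    print(f"{len(feasible)} feasible designs; lightest weighs {best.weight_g():.1f} g")
```

A real tool would score aerodynamics with simulation and synthesize free-form geometry rather than two numbers, but the loop has the same shape: generate, evaluate against the constraints, keep what best meets the goals.
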
  • 3:45 - 3:46
    And by the way,
  • 3:46 - 3:48
    it's no accident
  • 3:48 - 3:51
    that the drone body looks just like
    the pelvis of a flying squirrel.
  • 3:51 - 3:53
    (Laughter)
  • 3:54 - 3:58
    It's because the algorithms are designed
    to work the same way that evolution does.
  • 3:59 - 4:03
    What's exciting is we're starting to see
    this technology out in the real world.
  • 4:03 - 4:05
    We've been working with Airbus
    for a couple of years
  • 4:05 - 4:07
    on this concept plane for the future.
  • 4:07 - 4:09
    It's a ways out still,
  • 4:09 - 4:13
    but just recently we used
    a generative design AI
  • 4:13 - 4:15
    to come up with this.
  • 4:16 - 4:21
    This is a 3D printed cabin partition
    that's been designed by a computer.
  • 4:21 - 4:24
    It's stronger than the original
    yet half the weight,
  • 4:24 - 4:27
    and it will be flying
    in the Airbus A320 later this year.
  • 4:27 - 4:29
    So computers can now generate.
  • 4:29 - 4:34
    They can come up with their own solutions
    to our well-defined problems.
  • 4:35 - 4:36
    But they're not intuitive.
  • 4:36 - 4:39
    They still have to start from scratch
    every single time,
  • 4:39 - 4:43
    and that's because they never learn ...
  • 4:43 - 4:44
    unlike Maggie.
  • 4:44 - 4:45
    (Laughter)
  • 4:46 - 4:49
    Maggie's actually smarter than our
    most advanced design tools.
  • 4:50 - 4:51
    What do I mean by that?
  • 4:51 - 4:53
    If her owner picks up that leash,
  • 4:53 - 4:55
    Maggie knows with a fair
    degree of certainty
  • 4:55 - 4:56
    that it's time for a walk.
  • 4:56 - 4:57
    And how did she learn?
  • 4:57 - 5:00
    Well, every time the owner
    picked up the leash,
  • 5:00 - 5:01
    they went for a walk.
  • 5:01 - 5:03
    And Maggie did three things:
  • 5:03 - 5:05
    she had to pay attention,
  • 5:05 - 5:07
    she had to remember what happened
  • 5:07 - 5:11
    and she had to retain and create
    a pattern in her mind.
  • 5:12 - 5:13
    Interestingly,
  • 5:13 - 5:16
    that's exactly what computer scientists
    have been trying to get AIs to do
  • 5:16 - 5:18
    for the last 60 or so years.
  • 5:19 - 5:20
    Back in 1952,
  • 5:20 - 5:24
    they built this computer
    that could play Tic-Tac-Toe.
  • 5:25 - 5:26
    Big deal.
  • 5:27 - 5:28
    Then 45 years later,
  • 5:28 - 5:30
    in 1997,
  • 5:30 - 5:33
    Deep Blue beats Kasparov at Chess.
  • 5:34 - 5:39
    2011, Watson beats these two
    humans at Jeopardy,
  • 5:39 - 5:42
    which is much harder for a computer
    to play than Chess is.
  • 5:42 - 5:46
    In fact, rather than working
    from predefined recipes,
  • 5:46 - 5:49
    Watson had to use reasoning
    to overcome his human opponents.
  • 5:50 - 5:53
    And then a couple of weeks ago,
  • 5:53 - 5:57
    DeepMind's AlphaGo beats
    the world's best human at Go,
  • 5:57 - 5:59
    which is the most difficult
    game that we have.
  • 5:59 - 6:00
    In fact in Go,
  • 6:00 - 6:05
    there are more possible moves
    than there are atoms in the universe.
  • 6:06 - 6:08
    So in order to win,
  • 6:08 - 6:11
    what AlphaGo had to do
    was develop intuition,
  • 6:11 - 6:12
    and in fact,
  • 6:12 - 6:13
    at some points,
  • 6:13 - 6:18
    AlphaGo's programmers didn't understand
    why AlphaGo was doing what it was doing.
  • 6:20 - 6:21
    And things are moving really fast.
  • 6:21 - 6:22
    I mean, consider --
  • 6:22 - 6:25
    in the space of a human lifetime,
  • 6:25 - 6:28
    computers have gone from a child's game
  • 6:28 - 6:32
    to what's recognized as the pinnacle
    of strategic thought.
  • 6:32 - 6:35
    What's basically happening
  • 6:35 - 6:38
    is computers are going
    from being like Spock
  • 6:38 - 6:40
    to being a lot more like Kirk.
  • 6:40 - 6:42
    (Laughter)
  • 6:44 - 6:45
    Right?
  • 6:45 - 6:47
    From pure logic to intuition.
  • 6:48 - 6:50
    Would you cross this bridge?
  • 6:51 - 6:52
    Most of you are saying,
  • 6:52 - 6:53
    "Oh, hell no."
  • 6:53 - 6:54
    (Laughter)
  • 6:54 - 6:57
    And you arrived at that decision
    in a split second.
  • 6:57 - 7:00
    You just sort of knew
    that that bridge was unsafe.
  • 7:00 - 7:02
    And that's exactly the kind of intuition
  • 7:02 - 7:05
    that our deep learning systems
    are starting to develop right now.
  • 7:06 - 7:07
    Very soon,
  • 7:07 - 7:09
    you'll literally be able
    to show something you've made,
  • 7:09 - 7:10
    you've designed,
  • 7:10 - 7:11
    to a computer,
  • 7:11 - 7:13
    and it will look at it and say,
  • 7:13 - 7:15
    "Mm, sorry homey, that will never work,
  • 7:15 - 7:16
    you have to try again."
  • 7:16 - 7:20
    Or you could ask it if people
    are going to like your next song,
  • 7:20 - 7:23
    or your next flavor of ice cream.
  • 7:24 - 7:26
    Or, much more importantly,
  • 7:26 - 7:30
    you could work with a computer to solve
    a problem that we've never faced before.
  • 7:30 - 7:32
    For instance, climate change.
  • 7:32 - 7:34
    We're not doing a very
    good job on our own,
  • 7:34 - 7:36
    we could certainly use
    all the help we can get.
  • 7:36 - 7:37
    That's what I'm talking about:
  • 7:37 - 7:40
    technology amplifying
    our cognitive abilities
  • 7:40 - 7:42
    so we can imagine and design things
  • 7:42 - 7:47
    that were simply out of our reach
    as plain old unaugmented humans.
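
The "that will never work" verdict a few lines back is, at bottom, a classifier trained on past outcomes. Here is a minimal sketch under that assumption, with a small neural network standing in for a deep learning system; the features, data, and labelling rule are all made up for illustration.

```python
# Minimal sketch: learn from labelled past designs, then give a snap verdict
# on a new one. Features, data and the labelling rule are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend feature vectors for past bridge designs:
# [span_m, deck_thickness_m, material_grade]
X = rng.uniform([5.0, 0.1, 1.0], [50.0, 1.0, 10.0], size=(500, 3))
# Toy stand-in for real-world outcomes: long spans with thin, weak decks fail.
y = (X[:, 0] / (X[:, 1] * X[:, 2]) < 20).astype(int)  # 1 = "worked"

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

new_design = np.array([[40.0, 0.15, 2.0]])  # a long, thin, flimsy bridge
verdict = model.predict(new_design)[0]
print("Mm, sorry, that will never work." if verdict == 0 else "Looks plausible.")
```
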
  • 7:48 - 7:51
    So, what about making
    all of this crazy new stuff
  • 7:51 - 7:54
    that we're going to invent and design?
  • 7:54 - 7:58
    I think the era of human augmentation
    is as much about the physical world
  • 7:58 - 8:02
    as it is about the virtual,
    intellectual realm.
  • 8:02 - 8:05
    So how will technology augment us?
  • 8:05 - 8:06
    In the physical world,
  • 8:06 - 8:07
    robotic systems.
  • 8:08 - 8:12
    OK, there's certainly a fear that robots
    are going to take jobs away from humans,
  • 8:12 - 8:14
    and that is true in certain sectors.
  • 8:14 - 8:17
    But I'm much more interested in this idea
  • 8:17 - 8:22
    that humans and robots working together
    are going to augment each other,
  • 8:22 - 8:24
    and start to inhabit a new space.
  • 8:24 - 8:27
    This is our applied
    research lab in San Francisco,
  • 8:27 - 8:30
    where one of our areas of focus
    is advanced robotics,
  • 8:30 - 8:33
    specifically human-robot collaboration.
  • 8:33 - 8:35
    And this is Bishop,
  • 8:35 - 8:36
    one of our robots.
  • 8:36 - 8:37
    As an experiment,
  • 8:37 - 8:41
    we set it up to help a person working
    in construction doing repetitive tasks.
  • 8:42 - 8:46
    Tasks like cutting out holes for outlets
    or light switches in drywall.
  • 8:47 - 8:48
    (Laughter)
  • 8:50 - 8:53
    So, Bishop's human partner can tell
    it what to do in plain English
  • 8:53 - 8:55
    and with simple gestures,
  • 8:55 - 8:56
    kind of like talking to a dog,
  • 8:56 - 8:58
    and then Bishop executes
    on those instructions
  • 8:58 - 9:00
    with perfect precision.
  • 9:00 - 9:03
    We're using the human for what
    the human is good at,
  • 9:03 - 9:04
    right?
  • 9:04 - 9:06
    Awareness, perception and decision making.
  • 9:06 - 9:08
    And we're using the robot
    for what it's good at:
  • 9:08 - 9:10
    precision and repetitiveness.
  • 9:10 - 9:13
    Here's another cool project
    that Bishop worked on.
  • 9:13 - 9:14
    The goal of this project,
  • 9:14 - 9:16
    which we called the HIVE,
  • 9:16 - 9:20
    was to prototype the experience
    of humans, computers and robots
  • 9:20 - 9:23
    all working together to solve
    a highly complex design problem.
  • 9:24 - 9:25
    The humans acted as labor.
  • 9:25 - 9:27
    They cruised around the construction site,
  • 9:27 - 9:29
    they manipulated the bamboo,
  • 9:29 - 9:30
    which by the way,
  • 9:30 - 9:32
    because it's a [nonisomorphic] material,
  • 9:32 - 9:34
    is super hard for robots to deal with.
  • 9:34 - 9:36
    But then the robots
    did this fiber winding,
  • 9:36 - 9:38
    which was almost impossible
    for a human to do.
  • 9:38 - 9:42
    And then we had an AI
    that was controlling everything.
  • 9:42 - 9:43
    It was telling the humans what to do,
  • 9:43 - 9:45
    it was telling the robots what to do,
  • 9:45 - 9:48
    and keeping track of thousands
    of individual components.
  • 9:48 - 9:52
    What's interesting is building
    this pavilion was simply not possible
  • 9:52 - 9:57
    without human, robot and AI
    augmenting each other.
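
A toy sketch of the orchestration role described for the HIVE: one coordinator decides whether a human or a robot handles each task and keeps track of the status of every component. The task types and the assignment rule are assumptions made for illustration, not the project's actual control code.

```python
# Toy coordinator: route each task to a human or a robot and track components.
from dataclasses import dataclass, field


@dataclass
class Task:
    component_id: int
    kind: str  # e.g. "place_bamboo" or "wind_fiber"


@dataclass
class Coordinator:
    component_status: dict[int, str] = field(default_factory=dict)

    def assign(self, task: Task) -> str:
        # Humans handle the irregular, hard-to-grip bamboo;
        # robots handle the precise, repetitive fiber winding.
        worker = "human" if task.kind == "place_bamboo" else "robot"
        self.component_status[task.component_id] = f"{task.kind} -> {worker}"
        return worker


if __name__ == "__main__":
    hive = Coordinator()
    for t in [Task(1, "place_bamboo"), Task(1, "wind_fiber"), Task(2, "place_bamboo")]:
        print(f"component {t.component_id}: {t.kind} assigned to {hive.assign(t)}")
    print(f"tracking {len(hive.component_status)} components")
```
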
  • 9:58 - 9:59
    OK, I'll share one more project.
  • 9:59 - 10:01
    This one's a little bit crazy.
  • 10:01 - 10:06
    We're working with Amsterdam-based artist,
    Joris Laarman and his team at MX3D
  • 10:06 - 10:09
    to generatively design
    and robotically print
  • 10:09 - 10:12
    the world's first autonomously
    manufactured bridge.
  • 10:13 - 10:16
    So, Joris and an AI are designing
    this thing right now, as we speak,
  • 10:16 - 10:17
    in Amsterdam.
  • 10:17 - 10:18
    And when they're done,
  • 10:18 - 10:20
    we're going to hit "go,"
  • 10:20 - 10:23
    and robots will start 3D printing
    in stainless steel,
  • 10:23 - 10:26
    and then they're going to keep printing
    without human intervention
  • 10:26 - 10:28
    until the bridge is finished.
  • 10:29 - 10:32
    So, as computers are going
    to augment our ability
  • 10:32 - 10:34
    to imagine and design new stuff,
  • 10:34 - 10:36
    robotic systems are going to help us
  • 10:36 - 10:40
    build and make things that we've
    never been able to make before.
  • 10:40 - 10:44
    But what about our ability
    to sense and control these things?
  • 10:45 - 10:49
    What about a nervous system
    for the things that we make?
  • 10:49 - 10:50
    Our nervous system --
  • 10:50 - 10:52
    the human nervous system --
  • 10:52 - 10:54
    tells us everything that's
    going on around us.
  • 10:54 - 10:58
    But the nervous system of the things
    we make is rudimentary at best.
  • 10:58 - 10:59
    For instance,
  • 10:59 - 11:02
    a car doesn't tell the city's
    Public Works department
  • 11:02 - 11:05
    that it just hit a pothole at the corner
    of Broadway and Morrison.
  • 11:05 - 11:07
    A building doesn't tell its designers
  • 11:07 - 11:10
    whether or not the people
    inside like being there,
  • 11:10 - 11:15
    and the toy manufacturer doesn't know
    if a toy is actually being played with --
  • 11:15 - 11:17
    how and where and whether
    or not it's any fun.
  • 11:18 - 11:22
    Look, I'm sure that the designers
    imagined this lifestyle for Barbie
  • 11:22 - 11:24
    when they designed her --
  • 11:24 - 11:25
    right?
  • 11:25 - 11:27
    But what if it turns out that Barbie's
    actually really lonely?
  • 11:27 - 11:29
    (Laughter)
  • 11:31 - 11:35
    If the designers had known what was
    really happening in the real world
  • 11:35 - 11:36
    with their designs --
  • 11:36 - 11:37
    the road, the building and Barbie --
  • 11:37 - 11:39
    they could've used that knowledge
  • 11:39 - 11:41
    to create an experience
    that was better for the user.
  • 11:41 - 11:43
    What's missing is a nervous system
  • 11:43 - 11:47
    connecting us to all of the things
    that we design, make and use.
  • 11:48 - 11:52
    What if all of you had that kind
    of information flowing to you
  • 11:52 - 11:54
    from the things you create
    in the real world?
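
As a minimal sketch of what that kind of report could look like, here is a made-up telemetry event for the pothole example above: the car packages what just happened to it so the people responsible for the road (or the design) can act on it. The event fields, identifiers, and coordinates are invented, and a real system would send this to a public-works feed rather than print it.

```python
# Made-up telemetry event for the pothole example; fields and values are
# illustrative only, not a real city or vehicle API.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class PotholeEvent:
    vehicle_id: str
    latitude: float
    longitude: float
    impact_g: float  # vertical shock measured by a suspension sensor
    timestamp: str


def report(event: PotholeEvent) -> str:
    """Serialize the event; a real car would POST this to a public-works feed."""
    return json.dumps(asdict(event))


if __name__ == "__main__":
    hit = PotholeEvent(
        vehicle_id="car-0042",
        latitude=45.519,        # example coordinates for the intersection
        longitude=-122.679,
        impact_g=2.7,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    print(report(hit))
```
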
  • 11:56 - 11:57
    With all of the stuff we make,
  • 11:57 - 11:59
    we spend a tremendous amount
    of money and energy --
  • 11:59 - 12:02
    in fact last year about
    two trillion dollars --
  • 12:02 - 12:04
    convincing people to buy
    the things that we've made.
  • 12:05 - 12:08
    But if you had this connection
    to the things that you design and create
  • 12:08 - 12:10
    after they're out in the real world,
  • 12:10 - 12:14
    after they've been sold,
    or launched or whatever,
  • 12:14 - 12:15
    we could actually change that,
  • 12:15 - 12:18
    and go from making people want our stuff,
  • 12:18 - 12:22
    to just making stuff that people
    want in the first place.
  • 12:22 - 12:25
    The good news is we're working
    on digital nervous systems
  • 12:25 - 12:27
    that connect us to the things we design.
  • 12:29 - 12:33
    We're working on one project
    with a couple of guys down in Los Angeles
  • 12:33 - 12:34
    called the Bandito Brothers,
  • 12:34 - 12:35
    and their team,
  • 12:35 - 12:39
    and one of the things these guys do
    is build insane cars
  • 12:39 - 12:42
    that do absolutely insane things.
  • 12:43 - 12:45
    These guys are crazy.
  • 12:45 - 12:46
    (Laughter)
  • 12:46 - 12:47
    In the best way.
  • 12:49 - 12:51
    And what we're doing with them
  • 12:51 - 12:53
    is taking a traditional racecar chassis
  • 12:53 - 12:55
    and giving it a nervous system.
  • 12:55 - 12:58
    So, we instrumented it
    with dozens of sensors,
  • 12:58 - 13:01
    and then we put a world-class driver
    behind the wheel,
  • 13:01 - 13:02
    took it out to the desert
  • 13:02 - 13:04
    and drove the hell out of it for a week.
  • 13:04 - 13:08
    And the car's nervous system captured
    everything that was happening to the car.
  • 13:08 - 13:11
    We captured four billion data points ...
  • 13:11 - 13:13
    all of the forces
    that it was subjected to.
  • 13:13 - 13:15
    And then we did something crazy.
  • 13:15 - 13:17
    We took all of that data,
  • 13:17 - 13:21
    and plugged it into a generative
    design AI that we call Dreamcatcher.
  • 13:21 - 13:25
    So, what do you get when you give
    a design tool a nervous system,
  • 13:25 - 13:28
    and you ask it to build you
    the ultimate car chassis?
  • 13:29 - 13:31
    You get this.
  • 13:32 - 13:37
    This is something that a human
    could never have designed.
  • 13:37 - 13:39
    Except a human did design this,
  • 13:39 - 13:43
    but it was a human that was augmented
    by a generative design AI,
  • 13:43 - 13:44
    a digital nervous system,
  • 13:44 - 13:47
    and robots that can actually
    fabricate something like this.
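
A hedged sketch of the data-to-design handoff described in this story: raw telemetry is boiled down to peak load cases, which then become constraints a generative design run could consume. This is not Dreamcatcher's actual interface; every name and number below is invented for illustration.

```python
# Reduce recorded forces to design loads, then express them as constraints.
# Names, units and thresholds are illustrative assumptions.
import random
from collections import defaultdict
from statistics import quantiles


def summarize_loads(readings: list[tuple[str, float]]) -> dict[str, float]:
    """Reduce raw (sensor_name, force_newtons) samples to one design load per sensor."""
    by_sensor = defaultdict(list)
    for name, force in readings:
        by_sensor[name].append(force)
    # 99th percentile: ignore one-off spikes, but cover the real racing loads.
    return {name: quantiles(vals, n=100)[98] for name, vals in by_sensor.items()}


def design_constraints(peak_loads: dict[str, float], safety_factor: float = 1.5):
    """Turn measured peaks into constraints for a generative-design objective."""
    return [{"attach_point": name, "min_load_capacity_N": round(load * safety_factor, 1)}
            for name, load in peak_loads.items()]


if __name__ == "__main__":
    random.seed(1)
    # Stand-in for the desert-run telemetry: thousands of force samples per mount.
    telemetry = [(mount, random.gauss(mu, sigma))
                 for mount, mu, sigma in [("front_left", 900, 250),
                                          ("front_right", 900, 250),
                                          ("rear_axle", 1400, 400)]
                 for _ in range(5000)]
    for constraint in design_constraints(summarize_loads(telemetry)):
        print(constraint)
```
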
  • 13:48 - 13:50
    So, if this is the future,
  • 13:50 - 13:51
    the Augmented Age,
  • 13:51 - 13:56
    and we're going to be augmented
    cognitively, physically and perceptually,
  • 13:56 - 13:57
    what will that look like?
  • 13:58 - 14:01
    What is this wonderland going to be like?
  • 14:01 - 14:03
    I think we're going to see a world
  • 14:03 - 14:06
    where we're moving from
    things that are fabricated
  • 14:06 - 14:08
    to things that are farmed.
  • 14:08 - 14:12
    Where we're moving from things
    that are constructed
  • 14:12 - 14:14
    to that which is grown.
  • 14:14 - 14:17
    We're going to move from being isolated
  • 14:17 - 14:19
    to being connected,
  • 14:19 - 14:21
    and we'll move away from extraction
  • 14:21 - 14:24
    to embrace aggregation.
  • 14:24 - 14:28
    I also think we'll shift from craving
    obedience from our things
  • 14:28 - 14:30
    to valuing autonomy.
  • 14:31 - 14:33
    Thanks to our augmented capabilities,
  • 14:33 - 14:35
    our world is going to change dramatically.
  • 14:36 - 14:38
    We're going to have a world
    with more variety,
  • 14:38 - 14:39
    more connectedness,
  • 14:39 - 14:40
    more dynamism,
  • 14:40 - 14:41
    more complexity,
  • 14:41 - 14:42
    more adaptability,
  • 14:42 - 14:44
    and of course,
  • 14:44 - 14:45
    more beauty.
  • 14:45 - 14:47
    The shape of things to come
  • 14:47 - 14:49
    will be unlike anything
    we've ever seen before.
  • 14:49 - 14:50
    Why?
  • 14:50 - 14:52
    Because what will be shaping those things
  • 14:52 - 14:58
    is this new partnership between
    technology, nature and humanity.
  • 14:59 - 15:03
    That to me is a future
    well worth looking forward to.
  • 15:03 - 15:05
    Thank you all so much.
  • 15:05 - 15:06
    (Applause)
Title:
The incredible inventions of intuitive AI
Speaker:
Maurice Conti
Video Language:
English
Duration:
15:23
