
The incredible inventions of intuitive AI

  • 0:01 - 0:03
    How many of you are creatives?
  • 0:03 - 0:07
    Designers, engineers,
    entrepreneurs, artists,
  • 0:07 - 0:09
    or maybe you just have
    a really big imagination.
  • 0:09 - 0:11
    Show of hands? (Cheers)
  • 0:11 - 0:12
    That's most of you.
  • 0:13 - 0:16
    I have some news for us creatives.
  • 0:17 - 0:19
    Over the course of the next 20 years,
  • 0:21 - 0:24
    more will change around
    the way we do our work
  • 0:25 - 0:28
    than has happened in the last 2,000.
  • 0:29 - 0:33
    In fact, I think we're at the dawn
    of a new age in human history.
  • 0:34 - 0:38
    Now, there have been four major historical
    eras defined by the way we work.
  • 0:39 - 0:43
    The Hunter-Gatherer Age
    lasted several million years.
  • 0:43 - 0:47
    And then the Agricultural Age
    lasted several thousand years.
  • 0:47 - 0:51
    The Industrial Age lasted
    a couple of centuries.
  • 0:51 - 0:55
    And now the Information Age
    has lasted just a few decades.
  • 0:55 - 1:00
    And now today, we're on the cusp
    of our next great era as a species.
  • 1:01 - 1:04
    Welcome to the Augmented Age.
  • 1:04 - 1:08
    In this new era, your natural human
    capabilities are going to be augmented
  • 1:08 - 1:11
    by computational systems
    that help you think,
  • 1:11 - 1:13
    robotic systems that help you make,
  • 1:13 - 1:15
    and a digital nervous system
  • 1:15 - 1:18
    that connects you to the world
    far beyond your natural senses.
  • 1:19 - 1:21
    Let's start with cognitive augmentation.
  • 1:21 - 1:24
    How many of you are augmented cyborgs?
  • 1:24 - 1:27
    (Laughter)
  • 1:27 - 1:30
    I would actually argue
    that we're already augmented.
  • 1:30 - 1:32
    Imagine you're at a party,
  • 1:32 - 1:35
    and somebody asks you a question
    that you don't know the answer to.
  • 1:35 - 1:39
    If you have one of these,
    in a few seconds, you can know the answer.
  • 1:40 - 1:42
    But this is just a primitive beginning.
  • 1:43 - 1:46
    Even Siri is just a passive tool.
  • 1:47 - 1:50
    In fact, for the last
    three-and-a-half million years,
  • 1:50 - 1:53
    the tools that we've had
    have been completely passive.
  • 1:54 - 1:58
    They do exactly what we tell them
    and nothing more.
  • 1:58 - 2:01
    Our very first tool only cut
    where we struck it.
  • 2:02 - 2:05
    The chisel only carves
    where the artist points it.
  • 2:05 - 2:11
    And even our most advanced tools
    do nothing without our explicit direction.
  • 2:11 - 2:14
    In fact, to date -- and this
    is something that frustrates me --
  • 2:14 - 2:16
    we've always been limited
  • 2:16 - 2:19
    by this need to manually
    push our wills into our tools --
  • 2:19 - 2:22
    like, manual -- like,
    literally using our hands,
  • 2:22 - 2:23
    even with computers.
  • 2:24 - 2:27
    But I'm more like Scotty in "Star Trek."
  • 2:27 - 2:28
    (Laughter)
  • 2:28 - 2:31
    I want to have a conversation
    with a computer.
  • 2:31 - 2:34
    I want to say, "Computer,
    let's design a car,"
  • 2:34 - 2:35
    and the computer shows me a car.
  • 2:35 - 2:38
    And I say, "No, more fast-looking,
    and less German,"
  • 2:38 - 2:40
    and bang, the computer shows me an option.
  • 2:40 - 2:42
    (Laughter)
  • 2:42 - 2:45
    That conversation might be
    a little ways off --
  • 2:45 - 2:47
    probably less than many of us think --
  • 2:47 - 2:49
    but right now,
  • 2:49 - 2:50
    we're working on it.
  • 2:50 - 2:54
    Tools are making this leap
    from being passive to being generative.
  • 2:55 - 2:58
    Generative design tools
    use a computer and algorithms
  • 2:58 - 3:01
    to synthesize geometry
  • 3:01 - 3:04
    to come up with new designs
    all by themselves.
  • 3:04 - 3:07
    All it needs are your goals
    and your constraints.
  • 3:07 - 3:08
    I'll give you an example.
  • 3:08 - 3:11
    In the case of this aerial drone chassis,
  • 3:11 - 3:14
    all you would need to do
    is tell it something like,
  • 3:14 - 3:15
    it has four propellers,
  • 3:15 - 3:17
    you want it to be
    as lightweight as possible,
  • 3:17 - 3:19
    and you need it to be
    aerodynamically efficient.
  • 3:19 - 3:24
    Then what the computer does
    is it explores the entire solution space:
  • 3:24 - 3:28
    every single possibility that solves
    and meets your criteria --
  • 3:28 - 3:30
    millions of them.
  • 3:30 - 3:32
    It takes big computers to do this.
  • 3:32 - 3:34
    But it comes back to us with designs
  • 3:34 - 3:37
    that we, by ourselves,
    never could've imagined.
  • 3:37 - 3:40
    And the computer's coming up
    with this stuff all by itself --
  • 3:40 - 3:42
    no one ever drew anything,
  • 3:42 - 3:44
    and it started completely from scratch.
  • 3:45 - 3:47
    And by the way, it's no accident
  • 3:47 - 3:51
    that the drone body looks just like
    the pelvis of a flying squirrel.
  • 3:51 - 3:53
    (Laughter)
  • 3:54 - 3:56
    It's because the algorithms
    are designed to work
  • 3:56 - 3:58
    the same way evolution does.
  • 3:59 - 4:01
    What's exciting is we're starting
    to see this technology
  • 4:01 - 4:03
    out in the real world.
  • 4:03 - 4:05
    We've been working with Airbus
    for a couple of years
  • 4:05 - 4:07
    on this concept plane for the future.
  • 4:07 - 4:09
    It's a ways out still.
  • 4:09 - 4:13
    But just recently we used
    a generative-design AI
  • 4:13 - 4:15
    to come up with this.
  • 4:16 - 4:21
    This is a 3D-printed cabin partition
    that's been designed by a computer.
  • 4:21 - 4:24
    It's stronger than the original
    yet half the weight,
  • 4:24 - 4:27
    and it will be flying
    in the Airbus A320 later this year.
  • 4:27 - 4:29
    So computers can now generate;
  • 4:29 - 4:34
    they can come up with their own solutions
    to our well-defined problems.
  • 4:35 - 4:36
    But they're not intuitive.
  • 4:36 - 4:39
    They still have to start from scratch
    every single time,
  • 4:39 - 4:42
    and that's because they never learn.
  • 4:42 - 4:44
    Unlike Maggie.
  • 4:44 - 4:46
    (Laughter)
  • 4:46 - 4:49
    Maggie's actually smarter
    than our most advanced design tools.
  • 4:49 - 4:51
    What do I mean by that?
  • 4:51 - 4:53
    If her owner picks up that leash,
  • 4:53 - 4:55
    Maggie knows with a fair
    degree of certainty
  • 4:55 - 4:56
    it's time to go for a walk.
  • 4:56 - 4:57
    And how did she learn?
  • 4:57 - 5:01
    Well, every time the owner picked up
    the leash, they went for a walk.
  • 5:01 - 5:02
    And Maggie did three things:
  • 5:03 - 5:04
    she had to pay attention,
  • 5:04 - 5:06
    she had to remember what happened
  • 5:07 - 5:11
    and she had to retain and create
    a pattern in her mind.
  • 5:11 - 5:14
    Interestingly, that's exactly what
  • 5:14 - 5:16
    computer scientists
    have been trying to get AIs to do
  • 5:16 - 5:18
    for the last 60 or so years.
  • 5:19 - 5:20
    Back in 1952,
  • 5:20 - 5:24
    they built this computer
    that could play Tic-Tac-Toe.
  • 5:25 - 5:26
    Big deal.
  • 5:27 - 5:30
    Then 45 years later, in 1997,
  • 5:30 - 5:33
    Deep Blue beats Kasparov at chess.
  • 5:34 - 5:39
    2011, Watson beats these two
    humans at Jeopardy,
  • 5:39 - 5:42
    which is much harder for a computer
    to play than chess is.
  • 5:42 - 5:46
    In fact, rather than working
    from predefined recipes,
  • 5:46 - 5:49
    Watson had to use reasoning
    to overcome his human opponents.
  • 5:50 - 5:53
    And then a couple of weeks ago,
  • 5:53 - 5:57
    DeepMind's AlphaGo beats
    the world's best human at Go,
  • 5:57 - 5:59
    which is the most difficult
    game that we have.
  • 5:59 - 6:02
    In fact, in Go, there are more
    possible moves
  • 6:02 - 6:04
    than there are atoms in the universe.
  • 6:06 - 6:08
    So in order to win,
  • 6:08 - 6:11
    what AlphaGo had to do
    was develop intuition.
  • 6:11 - 6:15
    And in fact, at some points,
    AlphaGo's programmers didn't understand
  • 6:15 - 6:18
    why AlphaGo was doing what it was doing.
  • 6:19 - 6:21
    And things are moving really fast.
  • 6:21 - 6:24
    I mean, consider --
    in the space of a human lifetime,
  • 6:24 - 6:27
    computers have gone from a child's game
  • 6:28 - 6:31
    to what's recognized as the pinnacle
    of strategic thought.
  • 6:32 - 6:34
    What's basically happening
  • 6:34 - 6:38
    is computers are going
    from being like Spock
  • 6:38 - 6:40
    to being a lot more like Kirk.
  • 6:40 - 6:43
    (Laughter)
  • 6:43 - 6:47
    Right? From pure logic to intuition.
  • 6:48 - 6:50
    Would you cross this bridge?
  • 6:51 - 6:53
    Most of you are saying, "Oh, hell no!"
  • 6:53 - 6:54
    (Laughter)
  • 6:54 - 6:57
    And you arrived at that decision
    in a split second.
  • 6:57 - 6:59
    You just sort of knew
    that bridge was unsafe.
  • 6:59 - 7:01
    And that's exactly the kind of intuition
  • 7:01 - 7:05
    that our deep-learning systems
    are starting to develop right now.
  • 7:06 - 7:07
    Very soon, you'll literally be able
  • 7:07 - 7:10
    to show something you've made,
    you've designed,
  • 7:10 - 7:11
    to a computer,
  • 7:11 - 7:12
    and it will look at it and say,
  • 7:12 - 7:15
    "Sorry, homey, that'll never work.
    You have to try again."
  • 7:16 - 7:19
    Or you could ask it if people
    are going to like your next song,
  • 7:20 - 7:22
    or your next flavor of ice cream.
  • 7:24 - 7:26
    Or, much more importantly,
  • 7:26 - 7:29
    you could work with a computer
    to solve a problem
  • 7:29 - 7:30
    that we've never faced before.
  • 7:30 - 7:32
    For instance, climate change.
  • 7:32 - 7:34
We're not doing a very
good job on our own;
  • 7:34 - 7:36
    we could certainly use
    all the help we can get.
  • 7:36 - 7:37
    That's what I'm talking about,
  • 7:37 - 7:40
    technology amplifying
    our cognitive abilities
  • 7:40 - 7:44
    so we can imagine and design things
    that were simply out of our reach
  • 7:44 - 7:46
    as plain old un-augmented humans.
  • 7:48 - 7:51
    So what about making
    all of this crazy new stuff
  • 7:51 - 7:53
    that we're going to invent and design?
  • 7:54 - 7:58
    I think the era of human augmentation
    is as much about the physical world
  • 7:58 - 8:01
    as it is about the virtual,
    intellectual realm.
  • 8:02 - 8:04
    How will technology augment us?
  • 8:04 - 8:07
    In the physical world, robotic systems.
  • 8:08 - 8:09
    OK, there's certainly a fear
  • 8:09 - 8:12
    that robots are going to take
    jobs away from humans,
  • 8:12 - 8:14
    and that is true in certain sectors.
  • 8:14 - 8:17
    But I'm much more interested in this idea
  • 8:17 - 8:22
    that humans and robots working together
    are going to augment each other,
  • 8:22 - 8:24
    and start to inhabit a new space.
  • 8:24 - 8:27
    This is our applied research lab
    in San Francisco,
  • 8:27 - 8:30
    where one of our areas of focus
    is advanced robotics,
  • 8:30 - 8:32
    specifically, human-robot collaboration.
  • 8:33 - 8:36
    And this is Bishop, one of our robots.
  • 8:36 - 8:38
    As an experiment, we set it up
  • 8:38 - 8:41
    to help a person working in construction
    doing repetitive tasks --
  • 8:42 - 8:46
    tasks like cutting out holes for outlets
    or light switches in drywall.
  • 8:46 - 8:49
    (Laughter)
  • 8:50 - 8:53
So, Bishop's human partner
can tell it what to do in plain English
  • 8:53 - 8:54
    and with simple gestures,
  • 8:54 - 8:56
    kind of like talking to a dog,
  • 8:56 - 8:58
    and then Bishop executes
    on those instructions
  • 8:58 - 9:00
    with perfect precision.
  • 9:00 - 9:03
    We're using the human
    for what the human is good at:
  • 9:03 - 9:05
    awareness, perception and decision making.
  • 9:05 - 9:08
    And we're using the robot
    for what it's good at:
  • 9:08 - 9:09
    precision and repetitiveness.
  • 9:10 - 9:13
    Here's another cool project
    that Bishop worked on.
  • 9:13 - 9:16
    The goal of this project,
    which we called the HIVE,
  • 9:16 - 9:20
    was to prototype the experience
    of humans, computers and robots
  • 9:20 - 9:23
    all working together to solve
    a highly complex design problem.
  • 9:24 - 9:25
    The humans acted as labor.
  • 9:25 - 9:29
    They cruised around the construction site,
    they manipulated the bamboo --
  • 9:29 - 9:32
    which, by the way,
    because it's a non-isomorphic material,
  • 9:32 - 9:33
    is super hard for robots to deal with.
  • 9:33 - 9:35
    But then the robots
    did this fiber winding,
  • 9:35 - 9:38
    which was almost impossible
    for a human to do.
  • 9:38 - 9:42
    And then we had an AI
    that was controlling everything.
  • 9:42 - 9:45
    It was telling the humans what to do,
    telling the robots what to do
  • 9:45 - 9:48
    and keeping track of thousands
    of individual components.
  • 9:48 - 9:49
    What's interesting is,
  • 9:49 - 9:52
    building this pavilion
    was simply not possible
  • 9:52 - 9:57
    without human, robot and AI
    augmenting each other.
  • 9:58 - 10:01
    OK, I'll share one more project.
    This one's a little bit crazy.
  • 10:01 - 10:06
    We're working with Amsterdam-based artist
    Joris Laarman and his team at MX3D
  • 10:06 - 10:09
    to generatively design
    and robotically print
  • 10:09 - 10:12
    the world's first autonomously
    manufactured bridge.
  • 10:12 - 10:16
    So, Joris and an AI are designing
    this thing right now, as we speak,
  • 10:16 - 10:17
    in Amsterdam.
  • 10:17 - 10:20
    And when they're done,
    we're going to hit "Go,"
  • 10:20 - 10:23
    and robots will start 3D printing
    in stainless steel,
  • 10:23 - 10:26
    and then they're going to keep printing,
    without human intervention,
  • 10:26 - 10:28
    until the bridge is finished.
  • 10:29 - 10:32
    So, as computers are going
    to augment our ability
  • 10:32 - 10:34
    to imagine and design new stuff,
  • 10:34 - 10:37
    robotic systems are going to help us
    build and make things
  • 10:37 - 10:39
    that we've never been able to make before.
  • 10:40 - 10:45
    But what about our ability
    to sense and control these things?
  • 10:45 - 10:49
    What about a nervous system
    for the things that we make?
  • 10:49 - 10:51
    Our nervous system,
    the human nervous system,
  • 10:51 - 10:53
    tells us everything
    that's going on around us.
  • 10:54 - 10:58
    But the nervous system of the things
    we make is rudimentary at best.
  • 10:58 - 11:01
    For instance, a car doesn't tell
    the city's Public Works department
  • 11:01 - 11:05
    that it just hit a pothole at the corner
    of Broadway and Morrison.
  • 11:05 - 11:07
    A building doesn't tell its designers
  • 11:07 - 11:09
    whether or not the people inside
    like being there,
  • 11:09 - 11:12
    and the toy manufacturer doesn't know
  • 11:12 - 11:14
    if a toy is actually being played with --
  • 11:14 - 11:17
    how and where and whether
    or not it's any fun.
  • 11:18 - 11:21
    Look, I'm sure that the designers
    imagined this lifestyle for Barbie
  • 11:21 - 11:23
    when they designed her --
  • 11:23 - 11:24
    (Laughter)
  • 11:24 - 11:27
    But what if it turns out that Barbie's
    actually really lonely?
  • 11:27 - 11:30
    (Laughter)
  • 11:31 - 11:33
    If the designers had known
  • 11:33 - 11:35
    what was really happening
    in the real world
  • 11:35 - 11:37
    with their designs -- the road,
    the building, Barbie --
  • 11:37 - 11:40
    they could've used that knowledge
    to create an experience
  • 11:40 - 11:41
    that was better for the user.
  • 11:41 - 11:43
    What's missing is a nervous system
  • 11:43 - 11:47
    connecting us to all of the things
    that we design, make and use.
  • 11:48 - 11:51
    What if all of you had that kind
    of information flowing to you
  • 11:51 - 11:54
    from the things you create
    in the real world?
  • 11:55 - 11:57
    With all of the stuff we make,
  • 11:57 - 11:59
    we spend a tremendous amount
    of money and energy --
  • 11:59 - 12:02
    in fact, last year,
    about two trillion dollars --
  • 12:02 - 12:05
    convincing people to buy
    the things we've made.
  • 12:05 - 12:08
    But if you had this connection
    to the things that you design and create
  • 12:08 - 12:10
    after they're out in the real world,
  • 12:10 - 12:13
    after they've been sold
    or launched or whatever,
  • 12:13 - 12:15
    we could actually change that,
  • 12:15 - 12:18
    and go from making people want our stuff,
  • 12:18 - 12:22
    to just making stuff that people
    want in the first place.
  • 12:22 - 12:24
    The good news is, we're working
    on digital nervous systems
  • 12:24 - 12:27
    that connect us to the things we design.
  • 12:28 - 12:30
    We're working on one project
  • 12:30 - 12:34
    with a couple of guys down in Los Angeles
    called the Bandito Brothers
  • 12:34 - 12:35
    and their team,
  • 12:35 - 12:39
    and one of the things these guys do
    is build insane cars
  • 12:39 - 12:42
    that do absolutely insane things.
  • 12:43 - 12:44
    These guys are crazy --
  • 12:44 - 12:45
    (Laughter)
  • 12:45 - 12:47
    in the best way.
  • 12:49 - 12:51
    And what we're doing with them
  • 12:51 - 12:53
    is taking a traditional race-car chassis
  • 12:53 - 12:55
    and giving it a nervous system.
  • 12:55 - 12:58
    So, we instrumented it
    with dozens of sensors,
  • 12:58 - 13:01
    put a world-class driver behind the wheel,
  • 13:01 - 13:04
    took it out to the desert
    and drove the hell out of it for a week.
  • 13:04 - 13:06
    And the car's nervous system
    captured everything
  • 13:06 - 13:08
    that was happening to the car.
  • 13:08 - 13:11
    We captured four billion data points;
  • 13:11 - 13:13
    all of the forces
    that it was subjected to.
  • 13:13 - 13:15
    And then we did something crazy.
  • 13:15 - 13:17
    We took all of that data,
  • 13:17 - 13:21
    and plugged it into a generative-design AI
    we call "Dreamcatcher."
  • 13:21 - 13:25
So what do you get when you give
a design tool a nervous system,
  • 13:25 - 13:28
    and you ask it to build you
    the ultimate car chassis?
  • 13:29 - 13:31
    You get this.
  • 13:32 - 13:36
    This is something that a human
    could never have designed.
  • 13:37 - 13:39
    Except a human did design this,
  • 13:39 - 13:43
    but it was a human that was augmented
    by a generative-design AI,
  • 13:43 - 13:44
    a digital nervous system
  • 13:44 - 13:47
    and robots that can actually
    fabricate something like this.
  • 13:48 - 13:51
    So if this is the future,
    the Augmented Age,
  • 13:51 - 13:56
    and we're going to be augmented
    cognitively, physically and perceptually,
  • 13:56 - 13:57
    what will that look like?
  • 13:58 - 14:01
    What is this wonderland going to be like?
  • 14:01 - 14:03
    I think we're going to see a world
  • 14:03 - 14:06
    where we're moving
    from things that are fabricated
  • 14:06 - 14:07
    to things that are farmed.
  • 14:08 - 14:12
    Where we're moving from things
    that are constructed
  • 14:12 - 14:13
    to that which is grown.
  • 14:14 - 14:16
    We're going to move from being isolated
  • 14:16 - 14:18
    to being connected.
  • 14:19 - 14:21
    And we'll move away from extraction
  • 14:21 - 14:23
    to embrace aggregation.
  • 14:24 - 14:28
    I also think we'll shift
    from craving obedience from our things
  • 14:28 - 14:29
    to valuing autonomy.
  • 14:31 - 14:32
    Thanks to our augmented capabilities,
  • 14:32 - 14:35
    our world is going to change dramatically.
  • 14:36 - 14:39
    We're going to have a world
    with more variety, more connectedness,
  • 14:39 - 14:41
    more dynamism, more complexity,
  • 14:41 - 14:43
    more adaptability and, of course,
  • 14:43 - 14:45
    more beauty.
  • 14:45 - 14:47
    The shape of things to come
  • 14:47 - 14:49
    will be unlike anything
    we've ever seen before.
  • 14:49 - 14:50
    Why?
  • 14:50 - 14:54
    Because what will be shaping those things
    is this new partnership
  • 14:54 - 14:58
    between technology, nature and humanity.
  • 14:59 - 15:03
    That, to me, is a future
    well worth looking forward to.
  • 15:03 - 15:04
    Thank you all so much.
  • 15:04 - 15:10
    (Applause)
Title:
The incredible inventions of intuitive AI
Speaker:
Maurice Conti
Video Language:
English
Duration:
15:23
