
Don't fear superintelligent AI

  • 0:01 - 0:05
    When I was a kid, I was
    the quintessential nerd.
  • 0:06 - 0:08
    I think some of you were too.
  • 0:08 - 0:09
    (Laughter)
  • 0:09 - 0:12
    And you, sir, who laughed the loudest,
    you probably still are.
  • 0:12 - 0:14
    (Laughter)
  • 0:14 - 0:16
    I grew up in a small town
  • 0:16 - 0:18
    in the dusty plains of north Texas,
  • 0:18 - 0:21
    the son of a sheriff who was
    the son of a pastor.
  • 0:21 - 0:24
    Getting into trouble was not an option.
  • 0:24 - 0:28
    And so I started reading
    calculus books for fun.
  • 0:28 - 0:29
    (Laughter)
  • 0:29 - 0:31
    You did too.
  • 0:31 - 0:35
    That led me to building a laser
    and a computer and model rockets,
  • 0:35 - 0:37
    and that led me to making rocket fuel
  • 0:37 - 0:38
    in my bedroom.
  • 0:38 - 0:42
    Now, in scientific terms,
  • 0:42 - 0:45
    we call this a very bad idea.
  • 0:45 - 0:46
    (Laughter)
  • 0:46 - 0:48
    Around that same time,
  • 0:48 - 0:51
    Stanley Kubrick's "2001: A Space Odyssey"
    came to the theaters,
  • 0:51 - 0:54
    and my life was forever changed.
  • 0:54 - 0:56
    I loved everything about that movie,
  • 0:56 - 0:59
    especially the HAL 9000.
  • 0:59 - 1:01
    Now, HAL was a sentient computer
  • 1:01 - 1:04
    designed to guide the Discovery spacecraft
  • 1:04 - 1:06
    from the Earth to Jupiter.
  • 1:06 - 1:08
    HAL was also a flawed character,
  • 1:08 - 1:13
    for in the end he chose
    to value the mission over human life.
  • 1:13 - 1:15
    Now, HAL was a fictional character,
  • 1:15 - 1:18
    but nonetheless he speaks to our fears,
  • 1:18 - 1:20
    our fears of being subjugated
  • 1:20 - 1:23
    by some unfeeling, artificial intelligence
  • 1:23 - 1:26
    who is indifferent to our humanity.
  • 1:26 - 1:29
    I believe that such fears are unfounded.
  • 1:29 - 1:32
    Indeed, we stand at a remarkable time
  • 1:32 - 1:33
    in human history,
  • 1:33 - 1:38
    where, driven by a refusal to accept
    the limits of our bodies and our minds,
  • 1:38 - 1:40
    we are building machines
  • 1:40 - 1:43
    of exquisite, beautiful
    complexity and grace
  • 1:43 - 1:45
    that will extend the human experience
  • 1:45 - 1:48
    in ways beyond our imagining.
  • 1:48 - 1:51
    After a career that led me
    from the Air Force Academy
  • 1:51 - 1:52
    to Space Command to now,
  • 1:52 - 1:54
    I became a systems engineer,
  • 1:54 - 1:57
    and recently I was drawn
    into an engineering problem
  • 1:57 - 1:59
    associated with NASA's mission to Mars.
  • 1:59 - 2:02
    Now, in space flights to the Moon,
  • 2:02 - 2:04
    we can rely upon mission control
  • 2:04 - 2:07
    in Houston to watch over
    all aspects of the flight.
  • 2:07 - 2:11
    However, Mars is 200 times further away,
  • 2:11 - 2:14
    and as a result it takes on average
  • 2:14 - 2:17
    13 minutes for a signal to travel
    from the Earth to Mars.
  • 2:17 - 2:21
    If there's trouble,
    there's not enough time.
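
A quick back-of-the-envelope check of that 13-minute figure, as a minimal Python sketch. It assumes a commonly cited average Earth-Mars distance of roughly 225 million kilometers; the true distance swings between about 55 and 400 million kilometers as the planets orbit.

```python
# Minimal sketch: one-way light-time from Earth to Mars.
# ASSUMPTION: ~225 million km is a commonly cited average distance;
# the real figure varies from ~55 to ~400 million km.
SPEED_OF_LIGHT_KM_S = 299_792.458   # speed of light in km/s
AVG_EARTH_MARS_KM = 225_000_000     # illustrative average distance

one_way_delay_min = AVG_EARTH_MARS_KM / SPEED_OF_LIGHT_KM_S / 60
print(f"one-way signal delay: {one_way_delay_min:.1f} minutes")
# -> about 12.5 minutes, consistent with the ~13 minutes in the talk
```

A round-trip exchange doubles that delay, which is why waiting on Houston is not an option in an emergency.
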
  • 2:21 - 2:24
    And so a reasonable engineering solution
  • 2:24 - 2:26
    calls for us to put mission control
  • 2:26 - 2:29
    inside the walls of the Orion spacecraft.
  • 2:29 - 2:31
    Another fascinating idea
  • 2:31 - 2:32
    in the mission profile
  • 2:32 - 2:34
    places humanoid robots
    on the surface of Mars
  • 2:34 - 2:37
    before the humans themselves arrive,
  • 2:37 - 2:39
    first to build facilities
  • 2:39 - 2:43
    and later to serve as collaborative
    members of the science team.
  • 2:43 - 2:46
    Now, as I looked at this
    from an engineering perspective,
  • 2:46 - 2:49
    it became very clear to me
    that what I needed to architect
  • 2:49 - 2:52
    was a smart, collaborative,
  • 2:52 - 2:54
    socially intelligent
    artificial intelligence.
  • 2:54 - 2:58
    In other words, I needed to build
    something very much like a HAL
  • 2:58 - 3:01
    but without the homicidal tendencies.
  • 3:01 - 3:03
    (Laughter)
  • 3:03 - 3:05
    Let's pause for a moment.
  • 3:05 - 3:09
    Is it really possible to build
    an artificial intelligence like that?
  • 3:09 - 3:10
    Actually, it is.
  • 3:10 - 3:11
    In many ways,
  • 3:11 - 3:14
    this is a hard engineering problem
  • 3:14 - 3:15
    with elements of AI,
  • 3:15 - 3:19
    not some wet hairball of an AI problem
    that needs to be engineered.
  • 3:19 - 3:22
    To paraphrase Alan Turing,
  • 3:22 - 3:25
    I'm not interested in a sentient machine.
  • 3:25 - 3:26
    I'm not building a HAL.
  • 3:26 - 3:29
    All I'm after is a simple brain,
  • 3:29 - 3:33
    something that offers
    the illusion of intelligence.
  • 3:33 - 3:36
    The art and the science of computing
  • 3:36 - 3:37
    have come a long way
  • 3:37 - 3:38
    since HAL was onscreen,
  • 3:38 - 3:41
    and I'd imagine if his inventor
    Dr. Chandra were here today,
  • 3:41 - 3:43
    he'd have a whole lot of questions for us.
  • 3:43 - 3:45
    Is it really possible for us
  • 3:45 - 3:49
    to take a system of millions
    upon millions of devices
  • 3:49 - 3:51
    to read in their data streams,
  • 3:51 - 3:53
    to predict their failures
    and act in advance?
  • 3:53 - 3:55
    Yes.
  • 3:55 - 3:58
    Can we build systems that converse
    with humans in natural language?
  • 3:58 - 3:59
    Yes.
  • 3:59 - 4:01
    Can we build systems that recognize
    objects, identify emotions,
  • 4:01 - 4:05
    emote themselves, play games,
    and even read lips?
  • 4:05 - 4:06
    Yes.
  • 4:06 - 4:09
    Can we build a system that sets goals,
  • 4:09 - 4:12
    that carries out plans against those goals
    and learns along the way?
  • 4:12 - 4:14
    Yes.
  • 4:14 - 4:15
    Can we build systems
  • 4:15 - 4:17
    that have a theory of mind?
  • 4:17 - 4:19
    This we are learning to do.
  • 4:19 - 4:23
    Can we build systems that have
    an ethical and moral foundation?
  • 4:23 - 4:25
    This we must learn how to do.
  • 4:25 - 4:28
    So let's accept for a moment that it's
    possible to build such
  • 4:28 - 4:30
    an artificial intelligence
  • 4:30 - 4:32
    for this kind of mission and others.
  • 4:32 - 4:35
    The next question you
    must ask yourself is,
  • 4:35 - 4:36
    should we fear it?
  • 4:36 - 4:38
    Now, every new technology
  • 4:38 - 4:40
    brings with it some
    measure of trepidation.
  • 4:40 - 4:43
    When we first saw cars,
  • 4:43 - 4:47
    people lamented that we would see
    the destruction of the family.
  • 4:47 - 4:50
    When we first saw telephones come in,
  • 4:50 - 4:52
    people worried it would destroy
    all civil conversation.
  • 4:52 - 4:56
    At the point in time we saw
    the written word become pervasive,
  • 4:56 - 4:59
    people thought we would lose
    our ability to memorize.
  • 4:59 - 5:01
    These things are all true to a degree,
  • 5:01 - 5:03
    but it's also true that these technologies
  • 5:03 - 5:06
    brought to us things that extended
    the human experience
  • 5:06 - 5:10
    in some profound ways.
  • 5:10 - 5:13
    So let's take this a little further.
  • 5:13 - 5:18
    I do not fear the creation
    of an AI like this,
  • 5:18 - 5:22
    because it will eventually embody
    some of our values.
  • 5:22 - 5:24
    Consider this: building a cognitive system
  • 5:24 - 5:28
    is fundamentally different
    than building a traditional
  • 5:28 - 5:29
    software-intensive system of the past.
  • 5:29 - 5:31
    We don't program them. We teach them.
  • 5:31 - 5:34
    In order to teach a system
    how to recognize flowers,
  • 5:34 - 5:37
    I show it thousands of flowers
    of the kinds I like.
  • 5:37 - 5:39
    In order to teach a system
    how to play a game --
  • 5:39 - 5:42
    Well, I would. You would too.
  • 5:42 - 5:46
    I like flowers. Come on.
  • 5:46 - 5:49
    To teach a system how
    to play a game like Go,
  • 5:49 - 5:50
    I'd have it play thousands of games of Go,
  • 5:50 - 5:53
    but in the process I also teach it
    how to discern
  • 5:53 - 5:55
    a good game from a bad game.
  • 5:55 - 5:58
    If I want to create an artificially
    intelligent legal assistant,
  • 5:58 - 6:01
    I will teach it some corpus of law
    but at the same time
  • 6:01 - 6:03
    I am fusing with it
  • 6:03 - 6:06
    the sense of mercy and justice
    that is part of that law.
  • 6:06 - 6:10
    In scientific terms, this is what
    we call ground truth,
  • 6:10 - 6:12
    and here's the important point:
  • 6:12 - 6:16
    in producing these machines,
    we are therefore teaching them
  • 6:16 - 6:17
    a sense of our values.
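
A minimal sketch of this teach-by-example idea, assuming Python with scikit-learn; the iris flower dataset and the random-forest model are illustrative choices, not anything from the talk. The labeled examples play the role of the ground truth described above, and "teaching" is fitting a model to those examples rather than hand-writing rules.

```python
# A minimal sketch of "we don't program them, we teach them":
# instead of hand-coding rules for recognizing flowers, we show the
# system labeled examples and let it learn the distinctions.
# Assumes Python with scikit-learn installed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# The labeled examples are the "ground truth" the talk refers to:
# each measured flower comes with the species a human assigned to it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# "Teaching" is fitting the model to examples, not writing rules.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# The system generalizes to flowers it has never seen.
print("accuracy on unseen flowers:", model.score(X_test, y_test))
```

The same pattern scales up: curate the examples (flowers you like, good games of Go, a corpus of law), and the values embedded in that curation become part of what the system learns.
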
  • 6:17 - 6:20
    To that end, I trust
    in artificial intelligence
  • 6:20 - 6:24
    the same as, if not more than,
    a well-trained human.
  • 6:24 - 6:26
    But, you may ask,
  • 6:26 - 6:29
    what about a rogue agent,
  • 6:29 - 6:31
    some well-funded
    non-governmental organization?
  • 6:31 - 6:35
    I do not fear an artificial intelligence
    in the hand of a lone wolf.
  • 6:35 - 6:37
    Clearly, we cannot protect ourselves
  • 6:37 - 6:40
    against all random acts of violence,
  • 6:40 - 6:42
    but the reality is such a system
  • 6:42 - 6:46
    requires substantial training
    and subtle training
  • 6:46 - 6:48
    far beyond the resources of an individual,
  • 6:48 - 6:52
    and furthermore, it's far more
    than just injecting an Internet virus
  • 6:52 - 6:53
    to the world where you push a button,
  • 6:53 - 6:55
    all of a sudden it's in a million places
  • 6:55 - 6:57
    and laptops start blowing up
    all over the place.
  • 6:57 - 7:00
    Now, these kinds of systems
    are much larger
  • 7:00 - 7:03
    and we'll certainly see them coming.
  • 7:03 - 7:05
    Do I fear that such
    an artificial intelligence
  • 7:05 - 7:08
    might threaten all of humanity?
  • 7:08 - 7:10
    If you look at movies
  • 7:10 - 7:13
    such as "The Matrix," "Metropolis,"
  • 7:13 - 7:16
    "The Terminator," shows
    such as "Westworld,"
  • 7:16 - 7:18
    they all speak of this kind of fear.
  • 7:18 - 7:23
    Indeed, in the book "Superintelligence"
    by the philosopher Nick Bostrom,
  • 7:23 - 7:24
    he picks up on this theme
  • 7:24 - 7:28
    and observes that a superintelligence
    might not only be dangerous,
  • 7:28 - 7:32
    it could represent an existential threat
    to all of humanity.
  • 7:32 - 7:34
    Dr. Bostrom's basic argument
  • 7:34 - 7:38
    is that such systems will eventually
  • 7:38 - 7:40
    have such an insatiable
    thirst for information
  • 7:40 - 7:43
    that they will perhaps learn how to learn
  • 7:43 - 7:47
    and eventually discover that they
    may have goals
  • 7:47 - 7:48
    that are contrary to human needs.
  • 7:48 - 7:50
    Dr. Bostrom has a number of followers.
  • 7:50 - 7:53
    He is supported by people
    such as Elon Musk
  • 7:53 - 7:55
    and Stephen Hawking.
  • 7:55 - 7:58
    With all due respect
  • 7:58 - 8:00
    to these brilliant minds,
  • 8:00 - 8:03
    I believe that they
    are fundamentally wrong.
  • 8:03 - 8:07
    Now, there are a lot of pieces
    of Dr. Bostrom's argument to unpack,
  • 8:07 - 8:08
    and I don't have time to unpack them all,
  • 8:08 - 8:11
    but very briefly, consider this:
  • 8:11 - 8:14
    super knowing is very different
    than super doing.
  • 8:14 - 8:17
    HAL was a threat to the Discovery crew
    only insofar as HAL commanded
  • 8:17 - 8:21
    all aspects of the Discovery.
  • 8:21 - 8:23
    So it would have to be
    with a superintelligence.
  • 8:23 - 8:26
    It would have to have dominion
    over all of our world.
  • 8:26 - 8:29
    This is the stuff of Skynet
    from the movie "The Terminator"
  • 8:29 - 8:30
    in which we had a superintelligence
  • 8:30 - 8:32
    that commanded human will,
    that directed every device
  • 8:32 - 8:35
    that was in every corner of the world.
  • 8:35 - 8:37
    Practically speaking,
  • 8:37 - 8:39
    it ain't gonna happen.
  • 8:39 - 8:43
    We are not building AIs
    that control the weather,
  • 8:43 - 8:44
    that direct the tides,
  • 8:44 - 8:47
    that command us capricious,
    chaotic humans.
  • 8:47 - 8:51
    And furthermore, if such
    an artificial intelligence existed,
  • 8:51 - 8:54
    it would have to compete
    with human economies,
  • 8:54 - 8:57
    and thereby compete for resources with us.
  • 8:57 - 8:59
    And in the end --
  • 8:59 - 9:00
    don't tell Siri this --
  • 9:00 - 9:03
    we can always unplug them.
  • 9:03 - 9:05
    (Laughter)
  • 9:05 - 9:08
    We are on an incredible journey
  • 9:08 - 9:10
    of coevolution with our machines.
  • 9:10 - 9:13
    The humans we are today
  • 9:13 - 9:15
    are not the humans we will be then.
  • 9:15 - 9:19
    To worry now about the rise
    of a superintelligence
  • 9:19 - 9:22
    is in many ways a dangerous distraction
  • 9:22 - 9:24
    because the rise of computing itself
  • 9:24 - 9:27
    brings to us a number of human
    and societal issues
  • 9:27 - 9:29
    to which we must now attend.
  • 9:29 - 9:32
    How shall I best organize society
  • 9:32 - 9:35
    when the need for human labor diminishes?
  • 9:35 - 9:37
    How can I bring understanding
  • 9:37 - 9:40
    and education throughout the globe
    and still respect our differences?
  • 9:40 - 9:45
    How might I extend and enhance human life
    through cognitive healthcare?
  • 9:45 - 9:47
    How might I use computing
  • 9:47 - 9:50
    to help take us to the stars?
  • 9:50 - 9:53
    And that's the exciting thing.
  • 9:53 - 9:56
    The opportunities to use computing
  • 9:56 - 9:57
    to advance the human experience
  • 9:57 - 9:58
    are within our reach,
  • 9:58 - 10:00
    here and now,
  • 10:00 - 10:02
    and we are just beginning.
  • 10:02 - 10:04
    Thank you very much.
  • 10:04 - 10:08
    (Applause)
Title:
Don't fear superintelligent AI
Speaker:
Grady Booch
Duration:
10:20