
Don't fear superintelligent AI

  • 0:01 - 0:05
    When I was a kid,
    I was the quintessential nerd.
  • 0:05 - 0:07
    I think some of you were, too.
  • 0:08 - 0:09
    (Laughter)
  • 0:09 - 0:12
    And you, sir, who laughed the loudest,
    you probably still are.
  • 0:12 - 0:14
    (Laughter)
  • 0:14 - 0:18
    I grew up in a small town
    in the dusty plains of north Texas,
  • 0:18 - 0:21
    the son of a sheriff
    who was the son of a pastor.
  • 0:21 - 0:23
    Getting into trouble was not an option.
  • 0:24 - 0:27
    And so I started reading
    calculus books for fun.
  • 0:27 - 0:29
    (Laughter)
  • 0:29 - 0:31
    You did, too.
  • 0:31 - 0:34
    That led me to building a laser
    and a computer and model rockets,
  • 0:34 - 0:37
    and that led me to making
    rocket fuel in my bedroom.
  • 0:38 - 0:42
    Now, in scientific terms,
  • 0:42 - 0:45
    we call this a very bad idea.
  • 0:45 - 0:46
    (Laughter)
  • 0:46 - 0:48
    Around that same time,
  • 0:48 - 0:52
    Stanley Kubrick's "2001: A Space Odyssey"
    came to the theaters,
  • 0:52 - 0:54
    and my life was forever changed.
  • 0:54 - 0:56
    I loved everything about that movie,
  • 0:56 - 0:59
    especially the HAL 9000.
  • 0:59 - 1:01
    Now, HAL was a sentient computer
  • 1:01 - 1:03
    designed to guide the Discovery spacecraft
  • 1:03 - 1:06
    from the Earth to Jupiter.
  • 1:06 - 1:08
    HAL was also a flawed character,
  • 1:08 - 1:12
    for in the end he chose
    to value the mission over human life.
  • 1:13 - 1:15
    Now, HAL was a fictional character,
  • 1:15 - 1:18
    but nonetheless he speaks to our fears,
  • 1:18 - 1:20
    our fears of being subjugated
  • 1:20 - 1:23
    by some unfeeling, artificial intelligence
  • 1:23 - 1:25
    who is indifferent to our humanity.
  • 1:26 - 1:28
    I believe that such fears are unfounded.
  • 1:28 - 1:31
    Indeed, we stand at a remarkable time
  • 1:31 - 1:33
    in human history,
  • 1:33 - 1:38
    where, driven by refusal to accept
    the limits of our bodies and our minds,
  • 1:38 - 1:39
    we are building machines
  • 1:39 - 1:43
    of exquisite, beautiful
    complexity and grace
  • 1:43 - 1:45
    that will extend the human experience
  • 1:45 - 1:47
    in ways beyond our imagining.
  • 1:48 - 1:50
    After a career that led me
    from the Air Force Academy
  • 1:50 - 1:52
    to Space Command to now,
  • 1:52 - 1:54
    I became a systems engineer,
  • 1:54 - 1:57
    and recently I was drawn
    into an engineering problem
  • 1:57 - 1:59
    associated with NASA's mission to Mars.
  • 1:59 - 2:02
    Now, in space flights to the Moon,
  • 2:02 - 2:05
    we can rely upon
    mission control in Houston
  • 2:05 - 2:07
    to watch over all aspects of a flight.
  • 2:07 - 2:11
    However, Mars is 200 times further away,
  • 2:11 - 2:14
    and as a result it takes
    on average 13 minutes
  • 2:14 - 2:17
    for a signal to travel
    from the Earth to Mars.
  • 2:17 - 2:20
    If there's trouble,
    there's not enough time.
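(As a rough check on that 13-minute figure — an illustrative calculation, not part of the talk — the Earth-Mars distance varies with the planets' orbital positions, from roughly 55 million km at closest approach to about 400 million km at maximum separation; dividing by the speed of light gives the one-way signal delay:)

```python
# One-way light delay from Earth to Mars (illustrative distances;
# the actual separation varies continuously with orbital positions).
SPEED_OF_LIGHT_KM_S = 299_792.458  # km per second

def light_delay_minutes(distance_km: float) -> float:
    """One-way signal travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

# Approximate closest, average, and near-maximum separations in km.
for label, d in [("closest", 54.6e6), ("average", 225e6), ("farthest", 401e6)]:
    print(f"{label}: {light_delay_minutes(d):.1f} min")
```

(The average case comes out to about 12.5 minutes, consistent with the "on average 13 minutes" cited in the talk.)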
  • 2:21 - 2:23
    And so a reasonable engineering solution
  • 2:23 - 2:26
    calls for us to put mission control
  • 2:26 - 2:29
    inside the walls of the Orion spacecraft.
  • 2:29 - 2:32
    Another fascinating idea
    in the mission profile
  • 2:32 - 2:35
    places humanoid robots
    on the surface of Mars
  • 2:35 - 2:37
    before the humans themselves arrive,
  • 2:37 - 2:38
    first to build facilities
  • 2:38 - 2:42
    and later to serve as collaborative
    members of the science team.
  • 2:43 - 2:46
    Now, as I looked at this
    from an engineering perspective,
  • 2:46 - 2:49
    it became very clear to me
    that what I needed to architect
  • 2:49 - 2:52
    was a smart, collaborative,
  • 2:52 - 2:54
    socially intelligent
    artificial intelligence.
  • 2:54 - 2:58
    In other words, I needed to build
    something very much like a HAL
  • 2:58 - 3:01
    but without the homicidal tendencies.
  • 3:01 - 3:02
    (Laughter)
  • 3:03 - 3:05
    Let's pause for a moment.
  • 3:05 - 3:09
    Is it really possible to build
    an artificial intelligence like that?
  • 3:09 - 3:10
    Actually, it is.
  • 3:10 - 3:11
    In many ways,
  • 3:11 - 3:13
    this is a hard engineering problem
  • 3:13 - 3:15
    with elements of AI,
  • 3:15 - 3:20
not some wet hairball of an AI problem
    that needs to be engineered.
  • 3:20 - 3:22
    To paraphrase Alan Turing,
  • 3:22 - 3:25
    I'm not interested
    in building a sentient machine.
  • 3:25 - 3:26
    I'm not building a HAL.
  • 3:26 - 3:29
    All I'm after is a simple brain,
  • 3:29 - 3:32
    something that offers
    the illusion of intelligence.
  • 3:33 - 3:36
    The art and the science of computing
    have come a long way
  • 3:36 - 3:38
    since HAL was onscreen,
  • 3:38 - 3:41
    and I'd imagine if his inventor
    Dr. Chandra were here today,
  • 3:41 - 3:43
    he'd have a whole lot of questions for us.
  • 3:43 - 3:45
    Is it really possible for us
  • 3:45 - 3:49
    to take a system of millions
    upon millions of devices,
  • 3:49 - 3:51
    to read in their data streams,
  • 3:51 - 3:53
    to predict their failures
    and act in advance?
  • 3:53 - 3:54
    Yes.
  • 3:54 - 3:58
    Can we build systems that converse
    with humans in natural language?
  • 3:58 - 3:59
    Yes.
  • 3:59 - 4:02
    Can we build systems
    that recognize objects, identify emotions,
  • 4:02 - 4:05
    emote themselves,
    play games and even read lips?
  • 4:05 - 4:06
    Yes.
  • 4:07 - 4:09
    Can we build a system that sets goals,
  • 4:09 - 4:12
    that carries out plans against those goals
    and learns along the way?
  • 4:12 - 4:14
    Yes.
  • 4:14 - 4:17
    Can we build systems
    that have a theory of mind?
  • 4:17 - 4:18
    This we are learning to do.
  • 4:18 - 4:22
    Can we build systems that have
    an ethical and moral foundation?
  • 4:22 - 4:25
    This we must learn how to do.
  • 4:25 - 4:27
    So let's accept for a moment
  • 4:27 - 4:30
    that it's possible to build
    such an artificial intelligence
  • 4:30 - 4:32
    for this kind of mission and others.
  • 4:32 - 4:34
    The next question
    you must ask yourself is,
  • 4:34 - 4:36
    should we fear it?
  • 4:36 - 4:38
    Now, every new technology
  • 4:38 - 4:41
    brings with it
    some measure of trepidation.
  • 4:41 - 4:42
    When we first saw cars,
  • 4:43 - 4:47
    people lamented that we would see
    the destruction of the family.
  • 4:47 - 4:49
    When we first saw telephones come in,
  • 4:49 - 4:52
    people were worried it would destroy
    all civil conversation.
  • 4:52 - 4:56
When we saw
    the written word become pervasive,
  • 4:56 - 4:59
    people thought we would lose
    our ability to memorize.
  • 4:59 - 5:01
    These things are all true to a degree,
  • 5:01 - 5:03
    but it's also the case
    that these technologies
  • 5:03 - 5:07
    brought to us things
    that extended the human experience
  • 5:07 - 5:08
    in some profound ways.
  • 5:10 - 5:12
    So let's take this a little further.
  • 5:13 - 5:18
    I do not fear the creation
    of an AI like this,
  • 5:18 - 5:22
    because it will eventually
    embody some of our values.
  • 5:22 - 5:25
    Consider this: building a cognitive system
    is fundamentally different
  • 5:25 - 5:29
    than building a traditional
    software-intensive system of the past.
  • 5:29 - 5:31
    We don't program them. We teach them.
  • 5:31 - 5:34
    In order to teach a system
    how to recognize flowers,
  • 5:34 - 5:37
    I show it thousands of flowers
    of the kinds I like.
  • 5:37 - 5:39
    In order to teach a system
    how to play a game --
  • 5:39 - 5:41
    Well, I would. You would, too.
  • 5:43 - 5:45
    I like flowers. Come on.
  • 5:45 - 5:48
    To teach a system
    how to play a game like Go,
  • 5:48 - 5:50
    I'd have it play thousands of games of Go,
  • 5:50 - 5:52
    but in the process I also teach it
  • 5:52 - 5:54
    how to discern
    a good game from a bad game.
  • 5:55 - 5:58
    If I want to create an artificially
    intelligent legal assistant,
  • 5:58 - 6:00
    I will teach it some corpus of law
  • 6:00 - 6:03
    but at the same time I am fusing with it
  • 6:03 - 6:06
    the sense of mercy and justice
    that is part of that law.
  • 6:07 - 6:10
    In scientific terms,
    this is what we call ground truth,
  • 6:10 - 6:12
    and here's the important point:
  • 6:12 - 6:13
    in producing these machines,
  • 6:13 - 6:16
    we are therefore teaching them
    a sense of our values.
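(The "teach, don't program" idea above can be sketched in a few lines. This is a toy illustration, not anything from the talk: a one-nearest-neighbor classifier that learns to label flowers purely from examples a teacher supplies. The measurements and labels are invented; the point is that the labeled examples — the ground truth — are where the teacher's judgments enter the system.)

```python
# A minimal sketch of teaching by example: the system is never given rules
# for what a species "is" -- it only absorbs the teacher's labeled examples.
import math

# Ground truth: (petal length cm, petal width cm) -> species, as labeled
# by the teacher. These values are invented for illustration.
training_data = [
    ((1.4, 0.2), "setosa"),
    ((1.5, 0.3), "setosa"),
    ((4.7, 1.4), "versicolor"),
    ((4.5, 1.5), "versicolor"),
]

def classify(sample):
    """Label a new flower by its closest training example (1-nearest-neighbor)."""
    def dist(example):
        point, _label = example
        return math.dist(sample, point)
    _point, label = min(training_data, key=dist)
    return label

print(classify((1.3, 0.2)))  # prints "setosa": nearest to the setosa examples
```

(Whatever biases or values are baked into the chosen examples — "flowers of the kinds I like" — are exactly what the system learns.)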
  • 6:17 - 6:20
    To that end, I trust
    an artificial intelligence
  • 6:20 - 6:23
the same as, if not more than,
    a human who is well-trained.
  • 6:24 - 6:25
    But, you may ask,
  • 6:25 - 6:28
    what about rogue agents,
  • 6:28 - 6:31
    some well-funded
nongovernmental organization?
  • 6:31 - 6:35
    I do not fear an artificial intelligence
    in the hand of a lone wolf.
  • 6:35 - 6:40
    Clearly, we cannot protect ourselves
    against all random acts of violence,
  • 6:40 - 6:42
    but the reality is such a system
  • 6:42 - 6:45
    requires substantial training
    and subtle training
  • 6:45 - 6:47
    far beyond the resources of an individual.
  • 6:47 - 6:49
    And furthermore,
  • 6:49 - 6:52
it's far more than just injecting
    an internet virus into the world,
  • 6:52 - 6:55
    where you push a button,
    all of a sudden it's in a million places
  • 6:55 - 6:57
    and laptops start blowing up
    all over the place.
  • 6:57 - 7:00
    Now, these kinds of substances
    are much larger,
  • 7:00 - 7:02
    and we'll certainly see them coming.
  • 7:03 - 7:06
    Do I fear that such
    an artificial intelligence
  • 7:06 - 7:08
    might threaten all of humanity?
  • 7:08 - 7:13
    If you look at movies
    such as "The Matrix," "Metropolis,"
  • 7:13 - 7:16
    "The Terminator,"
    shows such as "Westworld,"
  • 7:16 - 7:18
    they all speak of this kind of fear.
  • 7:18 - 7:22
    Indeed, in the book "Superintelligence"
    by the philosopher Nick Bostrom,
  • 7:22 - 7:24
    he picks up on this theme
  • 7:24 - 7:28
    and observes that a superintelligence
    might not only be dangerous,
  • 7:28 - 7:32
    it could represent an existential threat
    to all of humanity.
  • 7:32 - 7:34
    Dr. Bostrom's basic argument
  • 7:34 - 7:37
    is that such systems will eventually
  • 7:37 - 7:40
    have such an insatiable
    thirst for information
  • 7:40 - 7:43
    that they will perhaps learn how to learn
  • 7:43 - 7:46
    and eventually discover
    that they may have goals
  • 7:46 - 7:48
    that are contrary to human needs.
  • 7:48 - 7:50
    Dr. Bostrom has a number of followers.
  • 7:50 - 7:54
    He is supported by people
    such as Elon Musk and Stephen Hawking.
  • 7:55 - 7:57
    With all due respect
  • 7:58 - 8:00
    to these brilliant minds,
  • 8:00 - 8:02
    I believe that they
    are fundamentally wrong.
  • 8:02 - 8:06
    Now, there are a lot of pieces
    of Dr. Bostrom's argument to unpack,
  • 8:06 - 8:08
    and I don't have time to unpack them all,
  • 8:08 - 8:11
    but very briefly, consider this:
  • 8:11 - 8:14
    super knowing is very different
    than super doing.
  • 8:14 - 8:16
    HAL was a threat to the Discovery crew
  • 8:16 - 8:21
    only insofar as HAL commanded
    all aspects of the Discovery.
  • 8:21 - 8:23
    So it would have to be
    with a superintelligence.
  • 8:23 - 8:26
    It would have to have dominion
    over all of our world.
  • 8:26 - 8:29
    This is the stuff of Skynet
    from the movie "The Terminator"
  • 8:29 - 8:30
    in which we had a superintelligence
  • 8:30 - 8:32
    that commanded human will,
  • 8:32 - 8:36
    that directed every device
    that was in every corner of the world.
  • 8:36 - 8:37
    Practically speaking,
  • 8:37 - 8:39
    it ain't gonna happen.
  • 8:39 - 8:42
    We are not building AIs
    that control the weather,
  • 8:42 - 8:44
    that direct the tides,
  • 8:44 - 8:47
    that command us
    capricious, chaotic humans.
  • 8:47 - 8:51
    And furthermore, if such
    an artificial intelligence existed,
  • 8:51 - 8:54
    it would have to compete
    with human economies,
  • 8:54 - 8:57
    and thereby compete for resources with us.
  • 8:57 - 8:58
    And in the end --
  • 8:58 - 9:00
    don't tell Siri this --
  • 9:00 - 9:02
    we can always unplug them.
  • 9:02 - 9:04
    (Laughter)
  • 9:05 - 9:08
    We are on an incredible journey
  • 9:08 - 9:10
    of coevolution with our machines.
  • 9:10 - 9:13
    The humans we are today
  • 9:13 - 9:15
    are not the humans we will be then.
  • 9:15 - 9:19
    To worry now about the rise
    of a superintelligence
  • 9:19 - 9:22
    is in many ways a dangerous distraction
  • 9:22 - 9:24
    because the rise of computing itself
  • 9:24 - 9:27
    brings to us a number
    of human and societal issues
  • 9:27 - 9:29
    to which we must now attend.
  • 9:29 - 9:32
    How shall I best organize society
  • 9:32 - 9:35
    when the need for human labor diminishes?
  • 9:35 - 9:38
    How can I bring understanding
    and education throughout the globe
  • 9:38 - 9:40
    and still respect our differences?
  • 9:40 - 9:44
    How might I extend and enhance human life
    through cognitive healthcare?
  • 9:44 - 9:47
    How might I use computing
  • 9:47 - 9:49
    to help take us to the stars?
  • 9:50 - 9:52
    And that's the exciting thing.
  • 9:52 - 9:55
    The opportunities to use computing
  • 9:55 - 9:56
    to advance the human experience
  • 9:56 - 9:58
    are within our reach,
  • 9:58 - 10:00
    here and now,
  • 10:00 - 10:01
    and we are just beginning.
  • 10:02 - 10:03
    Thank you very much.
  • 10:04 - 10:08
    (Applause)
Title:
Don't fear superintelligent AI
Speaker:
Grady Booch
Description:

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don't need to fear an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
10:20
