Can we build AI without losing control over it?

  • 0:01 - 0:03
    I'm going to talk about
    a failure of intuition
  • 0:03 - 0:06
    that many of us suffer from.
  • 0:06 - 0:09
    It's really a failure to detect
    a certain kind of danger.
  • 0:09 - 0:11
    I'm going to describe a scenario
  • 0:11 - 0:14
    that I think is both terrifying
  • 0:14 - 0:17
    and likely to occur,
  • 0:17 - 0:19
    and that's not a good combination,
  • 0:19 - 0:20
    as it turns out.
  • 0:20 - 0:23
    And yet rather than be scared,
    most of you will feel
  • 0:23 - 0:25
    that what I'm talking about
    is kind of cool.
  • 0:25 - 0:28
    I'm going to describe how
    the gains we make
  • 0:28 - 0:30
    in artificial intelligence
  • 0:30 - 0:32
    could ultimately destroy us.
  • 0:32 - 0:35
    And in fact, I think it's very difficult
    to see how they won't destroy us
  • 0:35 - 0:37
    or inspire us to destroy ourselves.
  • 0:37 - 0:39
    And yet if you're anything like me,
  • 0:39 - 0:42
    you'll find that it's fun
    to think about these things.
  • 0:42 - 0:45
    And that response is part of the problem.
  • 0:45 - 0:48
    Okay? That response should worry you.
  • 0:48 - 0:50
    And if I were to convince you in this talk
  • 0:50 - 0:54
    that we were likely to suffer
    a global famine,
  • 0:54 - 0:57
    either because of climate change
    or some other catastrophe,
  • 0:57 - 1:01
    and that your grandchildren,
    or their grandchildren,
  • 1:01 - 1:03
    are very likely to live like this,
  • 1:03 - 1:05
    you wouldn't think,
  • 1:05 - 1:07
    "Interesting.
  • 1:07 - 1:09
    I like this TEDTalk."
  • 1:09 - 1:12
    Famine isn't fun.
  • 1:12 - 1:15
    Death by science fiction,
    on the other hand, is fun,
  • 1:15 - 1:19
    and one of the things that worries me most
    about the development of AI at this point
  • 1:19 - 1:21
    is that we seem unable to marshal
  • 1:21 - 1:24
    an appropriate emotional response
  • 1:24 - 1:25
    to the dangers that lie ahead.
  • 1:25 - 1:28
    I am unable to marshal this response,
    and I'm giving this talk.
  • 1:28 - 1:33
    It's as though we stand before two doors.
  • 1:33 - 1:35
    Behind door number one,
  • 1:35 - 1:38
    we stop making progress
    in building intelligent machines.
  • 1:38 - 1:42
    Our computer hardware and software
    just stops getting better for some reason.
  • 1:42 - 1:44
    Now take a moment to consider
  • 1:44 - 1:45
    why this might happen.
  • 1:45 - 1:49
    I mean, given how valuable
    intelligence and automation are,
  • 1:49 - 1:53
    we will continue to improve our technology
    if we are at all able to.
  • 1:53 - 1:56
    What could stop us from doing this?
  • 1:56 - 1:59
    A full-scale nuclear war?
  • 1:59 - 2:02
    A global pandemic?
  • 2:02 - 2:05
    An asteroid impact?
  • 2:05 - 2:09
    Justin Bieber becoming
    President of the United States?
  • 2:09 - 2:12
    (Laughter)
  • 2:13 - 2:18
    The point is, something would have
    to destroy civilization as we know it.
  • 2:18 - 2:22
    You have to imagine
    how bad it would have to be
  • 2:22 - 2:25
    to prevent us from making
    improvements in our technology
  • 2:25 - 2:27
    permanently,
  • 2:27 - 2:29
    generation after generation.
  • 2:29 - 2:31
    Almost by definition, this is
    the worst thing that's ever happened
  • 2:31 - 2:33
    in human history.
  • 2:33 - 2:34
    So the only alternative,
  • 2:34 - 2:36
    and this is what lies behind
    door number two,
  • 2:36 - 2:40
    is that we continue to improve
    our intelligent machines
  • 2:40 - 2:42
    year after year after year.
  • 2:42 - 2:46
    At a certain point, we will build
    machines that are smarter than we are,
  • 2:46 - 2:48
    and once we have machines
    that are smarter than we are,
  • 2:48 - 2:51
    they will begin to improve themselves.
  • 2:51 - 2:54
    And then we risk what
    the mathematician I.J. Good called
  • 2:54 - 2:56
    an "intelligence explosion,"
  • 2:56 - 2:58
    that the process could get away from us.
  • 2:58 - 3:01
    Now this is often caricatured,
    as I have here,
  • 3:01 - 3:04
    as a fear that armies of malicious robots
  • 3:04 - 3:06
    will attack us.
  • 3:06 - 3:08
    But that isn't the most likely scenario.
  • 3:08 - 3:13
    It's not that our machines
    will become spontaneously malevolent.
  • 3:13 - 3:16
    The concern is really that we will build
    machines that are so much
  • 3:16 - 3:18
    more competent than we are
  • 3:18 - 3:22
    that the slightest divergence
    between their goals and our own
  • 3:22 - 3:23
    could destroy us.
  • 3:23 - 3:27
    Just think about how we relate to ants.
  • 3:27 - 3:29
    We don't hate them.
  • 3:29 - 3:31
    We don't go out of our way to harm them.
  • 3:31 - 3:32
    In fact, sometimes we take pains
    not to harm them.
  • 3:32 - 3:35
    We step over them on the sidewalk.
  • 3:35 - 3:36
    But whenever their presence
  • 3:36 - 3:40
    seriously conflicts
    with one of our goals,
  • 3:40 - 3:42
    let's say when constructing
    a building like this one,
  • 3:42 - 3:45
    we annihilate them without a qualm.
  • 3:45 - 3:48
    The concern is that we
    will one day build machines
  • 3:48 - 3:50
    that, whether they're conscious or not,
  • 3:50 - 3:54
    could treat us with similar disregard.
  • 3:54 - 3:58
    Now, I suspect this seems
    farfetched to many of you.
  • 3:58 - 4:04
    I bet there are those of you who doubt
    that superintelligent AI is possible,
  • 4:04 - 4:06
    much less inevitable.
  • 4:06 - 4:09
    But then you must find something wrong
    with one of the following assumptions.
  • 4:09 - 4:11
    And there are only three of them.
  • 4:11 - 4:18
    Intelligence is a matter of information
    processing in physical systems.
  • 4:18 - 4:21
    Actually, this is a little bit more
    than an assumption.
  • 4:21 - 4:24
    We have already built narrow intelligence
    into our machines,
  • 4:24 - 4:26
    and many of these machines perform
  • 4:26 - 4:29
    at a level of superhuman
    intelligence already.
  • 4:29 - 4:32
    And we know that mere matter
  • 4:32 - 4:34
    can give rise to what is called
    "general intelligence,"
  • 4:34 - 4:37
    an ability to think flexibly
    across multiple domains,
  • 4:37 - 4:41
    because our brains have managed it. Right?
  • 4:41 - 4:45
    There's just atoms in here,
  • 4:45 - 4:47
    and as long as we continue to
  • 4:47 - 4:50
    build systems of atoms
  • 4:50 - 4:52
    that display more and more
    intelligent behavior,
  • 4:52 - 4:54
    we will eventually,
  • 4:54 - 4:58
    unless we are interrupted,
    build general intelligence
  • 4:58 - 5:00
    into our machines.
  • 5:00 - 5:03
    It's crucial to realize that
    the rate of progress doesn't matter,
  • 5:03 - 5:07
    because any progress is enough
    to get us into the end zone.
  • 5:07 - 5:10
    We don't need Moore's Law to continue.
    We don't need exponential progress.
  • 5:10 - 5:14
    We just need to keep going.
  • 5:14 - 5:17
    The second assumption
    is that we will keep going.
  • 5:17 - 5:21
    We will continue to improve
    our intelligent machines.
  • 5:21 - 5:26
    And given the value of intelligence,
  • 5:26 - 5:29
    I mean, intelligence is either
    the source of everything we value
  • 5:29 - 5:32
    or we need it to safeguard
    everything we value.
  • 5:32 - 5:34
    It is our most valuable resource.
  • 5:34 - 5:36
    So we want to do this.
  • 5:36 - 5:39
    We have problems that we
    desperately need to solve.
  • 5:39 - 5:43
    We want to cure diseases
    like Alzheimer's and cancer.
  • 5:43 - 5:47
    We want to understand economic systems.
    We want to improve our climate science.
  • 5:47 - 5:50
    So we will do this, if we can.
  • 5:50 - 5:54
    The train is already out of the station,
    and there's no brake to pull.
  • 5:54 - 6:00
    Finally, we don't stand on a peak
    of intelligence,
  • 6:00 - 6:02
    or anywhere near it, likely.
  • 6:02 - 6:03
    And this really is the crucial insight.
  • 6:03 - 6:06
    This is what makes our situation
    so precarious,
  • 6:06 - 6:11
    and this is what makes our intuitions
    about risk so unreliable.
  • 6:11 - 6:14
    Now, just consider the smartest person
    who has ever lived.
  • 6:14 - 6:18
    On almost everyone's shortlist here
    is John Von Neumann.
  • 6:18 - 6:22
    I mean, the impression that Von Neumann
    made on the people around him,
  • 6:22 - 6:26
    and this included the greatest
    mathematicians and physicists of his time,
  • 6:26 - 6:28
    is fairly well documented.
  • 6:28 - 6:31
    If only half the stories about him
    are half true,
  • 6:31 - 6:35
    there's no question he is one of
    the smartest people who has ever lived.
  • 6:35 - 6:38
    So consider the spectrum of intelligence.
  • 6:38 - 6:41
    We have John Von Neumann.
  • 6:41 - 6:44
    And then we have you and me.
  • 6:44 - 6:46
    And then we have a chicken.
  • 6:46 - 6:47
    (Laughter)
  • 6:47 - 6:50
    Sorry, a chicken.
  • 6:50 - 6:50
    (Laughter)
  • 6:51 - 6:54
    There's no reason for me to make this talk
    more depressing than it needs to be.
  • 6:54 - 6:57
    (Laughter)
  • 6:57 - 7:00
    It seems overwhelmingly likely, however,
    that the spectrum of intelligence
  • 7:00 - 7:04
    extends much further
    than we currently conceive,
  • 7:04 - 7:07
    and if we build machines
    that are more intelligent than we are,
  • 7:07 - 7:10
    they will very likely
    explore this spectrum
  • 7:10 - 7:12
    in ways that we can't imagine,
  • 7:12 - 7:15
    and exceed us in ways
    that we can't imagine.
  • 7:15 - 7:20
    And it's important to recognize that this
    is true by virtue of speed alone.
  • 7:20 - 7:25
    Right? So imagine if we just built
    a super-intelligent AI, right,
  • 7:25 - 7:28
    that was no smarter than
    your average team of researchers
  • 7:28 - 7:30
    at Stanford or at MIT.
  • 7:30 - 7:34
    Well, electronic circuits function
    about a million times faster
  • 7:34 - 7:35
    than biochemical ones,
  • 7:35 - 7:40
    so this machine should think
    about a million times faster
  • 7:40 - 7:41
    than the minds that built it.
  • 7:41 - 7:42
    So you set it running for a week,
  • 7:42 - 7:47
    and it will perform 20,000 years
    of human-level intellectual work,
  • 7:47 - 7:49
    week after week after week.
  • 7:49 - 7:53
    How could we even understand,
    much less constrain,
  • 7:53 - 7:56
    a mind making this sort of progress?
  • 7:56 - 8:00
    The other thing that's worrying, frankly,
  • 8:00 - 8:04
    is that, imagine the best case scenario.
  • 8:04 - 8:08
    So imagine we hit upon a design
    of super-intelligent AI
  • 8:08 - 8:10
    that has no safety concerns.
  • 8:10 - 8:13
    We have the perfect design
    the first time around.
  • 8:13 - 8:15
    It's as though we've been handed an oracle
  • 8:15 - 8:17
    that behaves exactly as intended.
  • 8:17 - 8:22
    Well, this machine would be
    the perfect labor-saving device.
  • 8:22 - 8:24
    It can design the machine
    that can build the machine
  • 8:24 - 8:26
    that can do any physical work,
  • 8:26 - 8:28
    powered by sunlight,
  • 8:28 - 8:30
    more or less for the cost
    of raw materials.
  • 8:30 - 8:34
    So we're talking about
    the end of human drudgery.
  • 8:34 - 8:37
    We're also talking about the end
    of most intellectual work.
  • 8:37 - 8:41
    So what would apes like ourselves
    do in this circumstance?
  • 8:41 - 8:45
    Well, we'd be free to play frisbee
    and give each other massages.
  • 8:45 - 8:49
    Add some LSD and some
    questionable wardrobe choices,
  • 8:49 - 8:51
    and the whole world
    could be like Burning Man.
  • 8:51 - 8:54
    (Laughter)
  • 8:54 - 8:58
    Now, that might sound pretty good,
  • 8:58 - 9:00
    but ask yourself what would happen
  • 9:00 - 9:03
    under our current economic
    and political order?
  • 9:03 - 9:07
    It seems likely that we would witness
    a level of wealth inequality
  • 9:07 - 9:10
    and unemployment
    that we have never seen before.
  • 9:10 - 9:13
    Absent a willingness to immediately
    put this new wealth
  • 9:13 - 9:16
    to the service of all humanity,
  • 9:16 - 9:19
    a few trillionaires could grace
    the covers of our business magazines
  • 9:19 - 9:23
    while the rest of the world
    would be free to starve.
  • 9:23 - 9:25
    And what would the Russians
    or the Chinese do
  • 9:25 - 9:27
    if they heard that some company
    in Silicon Valley
  • 9:27 - 9:30
    was about to deploy
    a super-intelligent AI?
  • 9:30 - 9:33
    This machine would be capable
    of waging war,
  • 9:33 - 9:35
    whether terrestrial or cyber,
  • 9:35 - 9:38
    with unprecedented power.
  • 9:38 - 9:40
    This is a winner-take-all scenario.
  • 9:40 - 9:43
    To be six months ahead
    of the competition here
  • 9:43 - 9:48
    is to be 500,000 years ahead,
    at a minimum.
  • 9:48 - 9:52
    So even mere rumors
    of this kind of breakthrough
  • 9:52 - 9:55
    could cause our species to go berserk.
  • 9:55 - 9:57
    Now, one of the most frightening things,
  • 9:57 - 10:00
    in my view, at this moment,
  • 10:00 - 10:02
    are the kinds of things
  • 10:02 - 10:04
    that AI researchers say
  • 10:04 - 10:07
    when they want to be reassuring.
  • 10:07 - 10:11
    And the most common reason
    we're told not to worry is time.
  • 10:11 - 10:13
    This is all a long way off,
    don't you know.
  • 10:13 - 10:16
    This is probably 50 or 100 years away.
  • 10:16 - 10:17
    One researcher has said,
  • 10:17 - 10:19
    "Worrying about AI safety
  • 10:19 - 10:22
    is like worrying about
    overpopulation on Mars."
  • 10:22 - 10:24
    This is the Silicon Valley version of
  • 10:24 - 10:27
    "don't worry your
    pretty little head about it."
  • 10:27 - 10:28
    (Laughter)
  • 10:28 - 10:30
    No one seems to notice
  • 10:30 - 10:32
    that referencing the time horizon
  • 10:32 - 10:34
    is a total non sequitur.
  • 10:34 - 10:38
    If intelligence is just a matter
    of information processing,
  • 10:38 - 10:41
    and we continue to improve our machines,
  • 10:41 - 10:44
    we will produce some form
    of super-intelligence.
  • 10:44 - 10:46
    And we have no idea
  • 10:46 - 10:48
    how long it will take us
  • 10:48 - 10:51
    to create the conditions
    to do that safely.
  • 10:51 - 10:54
    Let me say that again.
  • 10:54 - 10:57
    And we have no idea
    how long it will take us
  • 10:57 - 11:01
    to create the conditions
    to do that safely.
  • 11:01 - 11:02
    And if you haven't noticed,
  • 11:02 - 11:05
    50 years is not what it used to be.
  • 11:05 - 11:07
    This is 50 years in months.
  • 11:07 - 11:10
    This is how long we've had the iPhone.
  • 11:10 - 11:13
    This is how long "The Simpsons"
    has been on television.
  • 11:13 - 11:15
    Fifty years is not that much time
  • 11:15 - 11:20
    to meet one of the greatest challenges
    our species will ever face.
  • 11:20 - 11:23
    Once again, we seem to be failing
    to have an appropriate emotional response
  • 11:23 - 11:27
    to what we have every reason
    to believe is coming.
  • 11:27 - 11:31
    The computer scientist Stuart Russell
    has a nice analogy here.
  • 11:31 - 11:35
    He said, imagine that we received
    a message from an alien civilization,
  • 11:35 - 11:36
    which read:
  • 11:36 - 11:39
    "People of Earth,
  • 11:39 - 11:42
    we will arrive on your planet in 50 years.
  • 11:42 - 11:44
    Get ready."
  • 11:44 - 11:47
    Would we just count down
    the months until the mothership lands?
  • 11:47 - 11:53
    We would feel a little
    more urgency than we do.
  • 11:53 - 11:55
    Another reason we're told not to worry
  • 11:55 - 11:58
    is that these machines can't help
    but share our values
  • 11:58 - 12:00
    because they will be literally
    extensions of ourselves.
  • 12:00 - 12:02
    They'll be grafted onto our brains,
  • 12:02 - 12:05
    and we'll essentially become
    their limbic systems.
  • 12:05 - 12:09
    Now take a moment to consider that the
    safest and only prudent path forward,
  • 12:09 - 12:11
    recommended,
  • 12:11 - 12:15
    is to implant this technology
    directly into our brains.
  • 12:15 - 12:18
    Now, this may in fact be the safest
    and only prudent path forward,
  • 12:18 - 12:21
    but usually one's safety concerns
    about a technology
  • 12:21 - 12:25
    have to be pretty much worked out
    before you stick it inside your head.
  • 12:25 - 12:27
    (Laughter)
  • 12:27 - 12:29
    The deeper problem is that
  • 12:29 - 12:32
    building super-intelligent AI on its own
  • 12:32 - 12:34
    seems likely to be easier
  • 12:34 - 12:36
    than building super-intelligent AI
  • 12:36 - 12:39
    and having the completed neuroscience
    that allows us to seamlessly
  • 12:39 - 12:41
    integrate our minds with it.
  • 12:41 - 12:44
    And given that the companies
    and governments doing this work
  • 12:44 - 12:47
    are likely to perceive themselves
    as being in a race against all others,
  • 12:47 - 12:51
    given that to win this race
    is to win the world,
  • 12:51 - 12:54
    provided you don't destroy it
    in the next moment,
  • 12:54 - 12:56
    then it seems likely
    that whatever is easier to do
  • 12:56 - 12:59
    will get done first.
  • 12:59 - 13:01
    Now, unfortunately, I don't have
    a solution to this problem,
  • 13:01 - 13:04
    apart from recommending
    that more of us think about it.
  • 13:04 - 13:06
    I think we need something like
    a Manhattan Project
  • 13:06 - 13:09
    on the topic of artificial intelligence.
  • 13:09 - 13:11
    Not to build it, because I think
    we'll inevitably do that,
  • 13:11 - 13:16
    but to understand how to avoid
    an arms race and to build it
  • 13:16 - 13:18
    in a way that is aligned
    with our interests.
  • 13:18 - 13:20
    When you're talking about
    super-intelligent AI
  • 13:20 - 13:22
    that can make changes to itself,
  • 13:22 - 13:28
    it seems that we only have one chance
    to get the initial conditions right,
  • 13:28 - 13:30
    and even then we will need
    to absorb the economic
  • 13:30 - 13:34
    and political consequences
    of getting them right.
  • 13:34 - 13:36
    But the moment we admit
  • 13:36 - 13:41
    that information processing
    is the source of intelligence,
  • 13:41 - 13:46
    that some appropriate computational system
    is the basis of intelligence,
  • 13:46 - 13:52
    and we admit that we will improve
    these systems continuously,
  • 13:52 - 13:56
    and we admit that the horizon
    of cognition very likely far exceeds
  • 13:56 - 13:58
    what we currently know,
  • 13:58 - 14:01
    then we have to admit that we
    are in the process of building
  • 14:01 - 14:04
    some sort of god.
  • 14:04 - 14:05
    Now would be a good time
  • 14:05 - 14:08
    to make sure it's a god we can live with.
  • 14:08 - 14:10
    Thank you very much.
  • 14:10 - 14:15
    (Applause)
Title:
Can we build AI without losing control over it?
Speaker:
Sam Harris
Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
14:27
