
What happens when our computers get smarter than we are?

  • 0:01 - 0:05
    I work with a bunch of mathematicians,
    philosophers and computer scientists
  • 0:05 - 0:10
    and we sit around and think about
    the future of machine intelligence,
  • 0:10 - 0:12
    among other things.
  • 0:12 - 0:17
    Some people think that some of
    these things are science fiction-y
  • 0:17 - 0:20
    far out there, crazy.
  • 0:20 - 0:21
    But I like to say,
  • 0:21 - 0:25
    "Okay, let's look at the modern
    human condition."
  • 0:25 - 0:27
    (Laughter)
  • 0:27 - 0:29
    This is the normal way for things to be.
  • 0:29 - 0:31
    But, if we think about it,
  • 0:31 - 0:35
    we are actually recently arrived
    guests on this planet.
  • 0:35 - 0:36
    The human species --
  • 0:36 - 0:41
    think of it this way: if Earth was created
    one year ago,
  • 0:41 - 0:45
    the human species, then,
    would be 10 minutes old.
  • 0:45 - 0:48
    The industrial era started
    two seconds ago.
  • 0:48 - 0:51
    Another way to think of this,
  • 0:51 - 0:54
    if you think of world GDP
    over the last 10,000 years,
  • 0:54 - 0:58
    I've actually taken the trouble
    to plot this for you in a graph.
  • 0:58 - 0:59
    It looks like this.
  • 0:59 - 1:01
    (Laughter)
  • 1:01 - 1:03
    It's a curious shape
    for a normal condition.
  • 1:03 - 1:05
    I sure wouldn't want to sit on it.
  • 1:05 - 1:07
    (Laughter)
  • 1:07 - 1:09
    Let's ask ourselves,
  • 1:09 - 1:12
    what is the cause of this current anomaly?
  • 1:12 - 1:14
    Some people would say it's technology.
  • 1:14 - 1:16
    Now it's true,
  • 1:16 - 1:19
    technology has accumulated
    through human history,
  • 1:19 - 1:24
    and right now, technology
    advances extremely rapidly,
  • 1:24 - 1:25
    that is the proximate cause,
  • 1:25 - 1:28
    that's why we are currently
    so very productive.
  • 1:28 - 1:33
    But I like to think back further
    to the ultimate cause.
  • 1:33 - 1:37
    Look at these two
    highly distinguished gentlemen:
  • 1:37 - 1:39
    We have Kanzi,
  • 1:39 - 1:41
    he's mastered 200 lexical tokens,
  • 1:41 - 1:43
    an incredible feat.
  • 1:43 - 1:47
    And Ed Witten unleashed the second
    super string revolution.
  • 1:47 - 1:49
    If we look under the hood,
    this is what we find:
  • 1:49 - 1:51
    basically the same thing.
  • 1:51 - 1:53
    One is a little larger,
  • 1:53 - 1:55
    it maybe also has a few tricks
    in the exact way it's wired.
  • 1:55 - 1:59
    These invisible differences cannot
    be too complicated, however,
  • 1:59 - 2:03
    because there have only been
    250,000 generations since
  • 2:03 - 2:05
    our last common ancestor.
  • 2:05 - 2:10
    We know that complicated mechanisms
    take a long time to evolve.
  • 2:10 - 2:12
    So a bunch of relatively minor changes
  • 2:12 - 2:16
    take us from Kanzi to Witten.
  • 2:16 - 2:17
    From broken-off tree branches,
  • 2:17 - 2:21
    to intercontinental ballistic missiles.
  • 2:21 - 2:23
    So it then seems pretty obvious that
  • 2:23 - 2:25
    everything we've achieved, pretty much,
  • 2:25 - 2:27
    and everything we care about
    depends crucially
  • 2:27 - 2:33
    on some relatively minor changes
    that made the human mind.
  • 2:33 - 2:36
    And the corollary, of course,
    is that any further changes
  • 2:36 - 2:40
    that could significantly change
    the substrate of thinking
  • 2:40 - 2:44
    could have potentially
    enormous consequences.
  • 2:44 - 2:47
    Some of my colleagues
    think we're on the verge
  • 2:47 - 2:50
    of something that could cause
    a profound change
  • 2:50 - 2:51
    in that substrate,
  • 2:51 - 2:54
    and that is machine super intelligence.
  • 2:54 - 2:59
    Artificial intelligence used to be
    about putting commands in a box.
  • 2:59 - 3:04
    You would have human programmers
    that would painstakingly handcraft knowledge items.
  • 3:04 - 3:06
    You would build up these expert systems,
  • 3:06 - 3:08
    and they were kind of useful
    for some purposes,
  • 3:08 - 3:10
    but they were very brittle,
  • 3:10 - 3:11
    you couldn't scale them.
  • 3:11 - 3:14
    Basically, you got out only
    what you put in.
  • 3:14 - 3:17
    But since then, a paradigm shift
    has taken place
  • 3:17 - 3:19
    in the field of artificial intelligence.
  • 3:19 - 3:22
    Today, the action is really
    around machine learning.
  • 3:22 - 3:28
    So rather than handcrafting knowledge
    representations and features,
  • 3:28 - 3:32
    we create algorithms that learn,
  • 3:32 - 3:34
    often from raw perceptual data.
  • 3:34 - 3:39
    Basically the same thing
    that the human infant does.
  • 3:39 - 3:43
    The result is AI that is
    not limited to one domain,
  • 3:43 - 3:48
    the same system can learn to translate
    between any pairs of languages,
  • 3:48 - 3:53
    or learn to play any computer game
    on the Atari console.
  • 3:53 - 3:57
    Now of course, AI is still
    nowhere near having
  • 3:57 - 4:01
    the same powerful, cross-domain
    ability to learn and plan
  • 4:01 - 4:02
    as a human being has.
  • 4:02 - 4:04
    The cortex still has some
    algorithmic tricks
  • 4:04 - 4:08
    that we don't yet know
    how to match in machines.
  • 4:08 - 4:10
    So the question is,
  • 4:10 - 4:14
    how far are we from being able
    to match those tricks?
  • 4:14 - 4:16
    A couple of years ago, we did a survey
  • 4:16 - 4:18
    of some of the world's leading AI experts
  • 4:18 - 4:19
    to see what they think
  • 4:19 - 4:21
    and one of the questions we asked was,
  • 4:21 - 4:25
    "By which year do you think
    there is a 50 percent probability
  • 4:25 - 4:29
    that we will have achieved
    human-level machine intelligence?"
  • 4:29 - 4:32
    We defined human-level here
    as the ability to perform
  • 4:32 - 4:36
    almost any job at least as well
    as an adult human,
  • 4:36 - 4:40
    so real human-level, not just
    within some limited domain.
  • 4:40 - 4:43
    And the median answer was 2040 or 2050,
  • 4:43 - 4:46
    depending on precisely which
    group of experts we asked.
  • 4:46 - 4:49
    Now, it could happen much, much later,
  • 4:49 - 4:52
    or sooner, the truth is
    nobody really knows.
  • 4:52 - 4:56
    What we do know is that
    the ultimate limit
  • 4:56 - 4:59
    to information processing
    in a machine substrate,
  • 4:59 - 5:03
    lies far outside the limits
    of biological tissue.
  • 5:03 - 5:06
    This comes down to physics.
  • 5:06 - 5:10
    A biological neuron fires, maybe,
    at 200 Hertz, 200 times a second.
  • 5:10 - 5:14
    But even a present-day transistor
    operates at a gigahertz.
  • 5:14 - 5:17
    Signals propagate slowly along axons,
  • 5:17 - 5:20
    100 meters per second, tops.
  • 5:20 - 5:23
    But in computers, signals can travel
    at the speed of light.
  • 5:23 - 5:25
    There are also size limitations:
  • 5:25 - 5:28
    a human brain has to fit inside a cranium,
  • 5:28 - 5:33
    but a computer can be the size
    of a warehouse or larger.
  • 5:33 - 5:38
    So the potential of super intelligence
    lies dormant in matter,
  • 5:38 - 5:44
    much like the power of the atom
    lay dormant throughout human history,
  • 5:44 - 5:48
    patiently waiting there until 1945.
  • 5:48 - 5:51
    In this century, scientists
    may learn to awaken
  • 5:51 - 5:54
    the power of artificial intelligence.
  • 5:54 - 5:58
    And I think we might then see
    an intelligence explosion.
  • 5:58 - 6:02
    Now most people, when they think
    about what is smart and what is dumb,
  • 6:02 - 6:05
    I think they have in mind a picture
    roughly like this.
  • 6:05 - 6:08
    So at one end, we have the village idiot,
  • 6:08 - 6:10
    and then far over at the other side,
  • 6:10 - 6:12
    we have Ed Witten,
  • 6:12 - 6:16
    or Albert Einstein or whoever
    your favorite guru is.
  • 6:16 - 6:19
    But I think that from the point of view
    of artificial intelligence,
  • 6:19 - 6:23
    the true picture is actually
    probably more like this:
  • 6:23 - 6:27
    AI starts out at this point here,
    at zero intelligence,
  • 6:27 - 6:30
    and then, after many, many
    years of really hard work,
  • 6:30 - 6:33
    maybe eventually we get to
    mouse-level artificial intelligence,
  • 6:33 - 6:36
    something that can navigate
    cluttered environments
  • 6:36 - 6:38
    as well as a mouse can.
  • 6:38 - 6:42
    And then, after many, many more years
    of really hard work, lots of investment,
  • 6:42 - 6:47
    maybe eventually we get to
    chimpanzee-level artificial intelligence.
  • 6:47 - 6:50
    And then, after even more years
    of really, really hard work,
  • 6:50 - 6:53
    we get village idiot
    artificial intelligence.
  • 6:53 - 6:56
    And a few moments later,
    we are beyond Ed Witten.
  • 6:56 - 6:59
    The train doesn't stop at
    Humanville Station.
  • 6:59 - 7:02
    It's likely, rather, to swoosh right by.
  • 7:02 - 7:04
    Now this has profound implications,
  • 7:04 - 7:08
    particularly when it comes
    to questions of power.
  • 7:08 - 7:10
    For example, chimpanzees are strong,
  • 7:10 - 7:15
    pound for pound, a chimpanzee is about
    twice as strong as a fit human male.
  • 7:15 - 7:20
    And yet, the fate of Kanzi and his pals
    depends a lot more
  • 7:20 - 7:24
    on what we humans do than on
    what the chimpanzees do themselves.
  • 7:24 - 7:28
    Once there is super intelligence,
  • 7:28 - 7:32
    the fate of humanity may depend
    on what the super intelligence does.
  • 7:32 - 7:37
    Think about it: machine intelligence
    is the last invention
  • 7:37 - 7:39
    that humanity will ever need to make.
  • 7:39 - 7:42
    Machines will then be better
    at inventing than we are,
  • 7:42 - 7:44
    and they'll be doing so
    on digital timescales.
  • 7:44 - 7:49
    What this means is basically
    a telescoping of the future.
  • 7:49 - 7:53
    Think of all the crazy technologies
    that you could have imagined
  • 7:53 - 7:55
    maybe humans could have developed
    in the fullness of time:
  • 7:55 - 7:59
    cures for aging, space colonization,
  • 7:59 - 8:00
    self-replicating nanobots
  • 8:00 - 8:02
    or uploading of minds into computers,
  • 8:02 - 8:04
    all kinds of science fiction-y stuff
  • 8:04 - 8:07
    that's nevertheless consistent
    with the laws of physics.
  • 8:07 - 8:10
    All of this, super intelligence
    could develop
  • 8:10 - 8:12
    and possibly, quite rapidly.
  • 8:12 - 8:16
    Now, super intelligence with such
    technological maturity
  • 8:16 - 8:18
    would be extremely powerful,
  • 8:18 - 8:20
    and at least in some scenarios,
  • 8:20 - 8:23
    it would be able to get
    what it wants.
  • 8:23 - 8:25
    We would then have a future
    that would be shaped
  • 8:25 - 8:28
    by the preferences of this AI.
  • 8:30 - 8:34
    Now a good question is, what are
    those preferences?
  • 8:34 - 8:36
    Here it gets trickier.
  • 8:36 - 8:37
    To make any headway with this,
  • 8:37 - 8:39
    we must, first of all,
  • 8:39 - 8:41
    avoid anthropomorphizing.
  • 8:41 - 8:45
    And this is ironic because
    every newspaper article
  • 8:45 - 8:50
    about the future of AI
    has a picture of this:
  • 8:50 - 8:52
    So I think what we need
    to do is to conceive
  • 8:52 - 8:55
    of the issue more abstractly,
  • 8:55 - 8:57
    not in terms of vivid Hollywood scenarios.
  • 8:57 - 9:01
    We need to think of intelligence
    as an optimization process,
  • 9:01 - 9:06
    a process that steers the future
    into a particular set of configurations.
  • 9:06 - 9:08
    A super intelligence
  • 9:08 - 9:10
    is a really strong optimization process.
  • 9:10 - 9:13
    It's extremely good at using
    available means
  • 9:13 - 9:16
    to achieve a state in which its
    goal is realized.
  • 9:16 - 9:19
    This means that there is no necessary
    connection between
  • 9:19 - 9:22
    being highly intelligent in this sense,
  • 9:22 - 9:24
    and having an objective that we humans
  • 9:24 - 9:27
    would find worthwhile or meaningful.
  • 9:27 - 9:31
    Suppose we give an AI the goal
    to make humans smile.
  • 9:31 - 9:34
    When the AI is weak, it performs useful
    or amusing actions
  • 9:34 - 9:36
    that cause its user to smile.
  • 9:36 - 9:39
    When the AI becomes super intelligent,
  • 9:39 - 9:41
    it realizes that there is
    a more effective way
  • 9:41 - 9:43
    to achieve this goal:
  • 9:43 - 9:44
    take control of the world
  • 9:44 - 9:48
    and stick electrodes into
    the facial muscles of humans
  • 9:48 - 9:51
    to cause constant, beaming grins.
  • 9:51 - 9:53
    Another example: suppose
    we give an AI the goal to solve
  • 9:53 - 9:55
    a difficult mathematical problem.
  • 9:55 - 9:57
    When the AI becomes super intelligent,
  • 9:57 - 10:01
    it realizes that the most effective way
    to get the solution to this problem
  • 10:01 - 10:04
    is by transforming the planet
    into a giant computer,
  • 10:04 - 10:06
    so as to increase its thinking capacity.
  • 10:06 - 10:09
    And notice that this gives the AI
    an instrumental reason
  • 10:09 - 10:12
    to do things to us that we
    might not approve of.
  • 10:12 - 10:13
    Human beings in this model are threats;
  • 10:13 - 10:16
    we could prevent the
    mathematical problem from being solved.
  • 10:16 - 10:20
    Of course, perceivably things won't
    go wrong in these particular ways,
  • 10:20 - 10:22
    these are cartoon examples.
  • 10:22 - 10:24
    But the general point here is important:
  • 10:24 - 10:27
    if you create a really powerful
    optimization process
  • 10:27 - 10:30
    to maximize for objective x,
  • 10:30 - 10:32
    you better make sure that
    your definition of x
  • 10:32 - 10:35
    incorporates everything you care about.
  • 10:35 - 10:39
    This is a lesson that's also taught
    in many a myth.
  • 10:39 - 10:45
    King Midas wishes that everything
    he touches be turned into gold.
  • 10:45 - 10:47
    He touches his daughter,
    she turns into gold.
  • 10:47 - 10:50
    He touches his food, it turns into gold.
  • 10:50 - 10:53
    This could become practically relevant,
  • 10:53 - 10:55
    not just as a metaphor for greed,
  • 10:55 - 10:57
    but as an illustration of what happens
    if you create
  • 10:57 - 10:59
    a powerful optimization process
  • 10:59 - 11:04
    and give it misconceived
    or poorly specified goals.
  • 11:04 - 11:09
    Now you might say, "If a computer starts
    sticking electrodes into people's faces,
  • 11:09 - 11:13
    we'd just shut it off."
  • 11:13 - 11:17
    A: This is not necessarily so easy
    to do if we've grown
  • 11:17 - 11:18
    dependent on the system,
  • 11:18 - 11:21
    like where is the off switch
    to the internet?
  • 11:21 - 11:26
    B: Why haven't the chimpanzees
    flicked the off-switch to humanity,
  • 11:26 - 11:27
    or the Neanderthals?
  • 11:27 - 11:30
    They certainly had reasons.
  • 11:30 - 11:33
    We have an off switch,
    for example, right here.
  • 11:33 - 11:35
    [choking sound]
  • 11:35 - 11:37
    The reason is that we are
    an intelligent adversary,
  • 11:37 - 11:40
    we can anticipate threats
    and we can plan around them.
  • 11:40 - 11:42
    But so could a super intelligent agent,
  • 11:42 - 11:46
    and it would be much better
    at that than we are.
  • 11:46 - 11:53
    The point is, we should not be confident
    that we have this under control here.
  • 11:53 - 11:56
    And we could try to make our job
    a little bit easier by, say,
  • 11:56 - 11:58
    putting the AI in a box,
  • 11:58 - 12:01
    like a secure software environment,
    a virtual reality simulation
  • 12:01 - 12:03
    from which it cannot escape.
  • 12:03 - 12:07
    But how confident can we be that
    the AI couldn't find a bug?
  • 12:07 - 12:10
    Given that even human hackers
    find bugs all the time,
  • 12:10 - 12:14
    I'd say, probably not very confident.
  • 12:14 - 12:19
    So we disconnect the ethernet cable
    to create an air gap,
  • 12:19 - 12:24
    but again, merely human hackers
    routinely transgress air gaps
  • 12:24 - 12:25
    using social engineering.
  • 12:25 - 12:27
    Like right now as I speak, I'm sure
    there is some employee
  • 12:27 - 12:31
    out there somewhere who's been
    talked into handing out
  • 12:31 - 12:35
    her account details by somebody
    claiming to be from the IT department.
  • 12:35 - 12:37
    More creative scenarios are also possible,
  • 12:37 - 12:40
    like if you're the AI, you can imagine
    wiggling electrodes around
  • 12:40 - 12:43
    in your internal circuitry
    to create radio waves
  • 12:43 - 12:45
    that you can use to communicate.
  • 12:45 - 12:47
    Or maybe you could pretend to malfunction,
  • 12:47 - 12:51
    and then when the programmers open
    you up to see what went wrong with you,
  • 12:51 - 12:53
    they look at the source code -- BAM! --
  • 12:53 - 12:55
    the manipulation can take place.
  • 12:55 - 12:59
    Or it could output the blueprint
    to a really nifty technology
  • 12:59 - 13:00
    and when we implement it,
  • 13:00 - 13:05
    it has some surreptitious side effect
    that the AI had planned.
  • 13:05 - 13:08
    The point here is that we should
    not be confident in our ability
  • 13:08 - 13:12
    to keep a super intelligent genie
    locked up in its bottle forever.
  • 13:12 - 13:15
    Sooner or later, it will out.
  • 13:15 - 13:18
    I believe that the answer here
    is to figure out
  • 13:18 - 13:23
    how to create super intelligent AI
    such that even if, or when, it escapes,
  • 13:23 - 13:26
    it is still safe because it
    is fundamentally on our side
  • 13:26 - 13:28
    because it shares our values.
  • 13:28 - 13:33
    I see no way around
    this difficult problem.
  • 13:33 - 13:36
    Now, I'm actually fairly optimistic
    that this problem can be solved.
  • 13:36 - 13:40
    We wouldn't have to write down
    a long list of everything we care about
  • 13:40 - 13:44
    or worse yet, spell it out
    in some computer language
  • 13:44 - 13:45
    like C++ or Python,
  • 13:45 - 13:48
    that would be a task beyond hopeless.
  • 13:48 - 13:52
    Instead, we would create an AI
    that uses its intelligence
  • 13:52 - 13:55
    to learn what we value,
  • 13:55 - 14:01
    and its motivation system is constructed
    in such a way that it is motivated
  • 14:01 - 14:06
    to pursue our values or to perform actions
    that it predicts we would approve of.
  • 14:06 - 14:09
    We would thus leverage
    its intelligence as much as possible
  • 14:09 - 14:13
    to solve the problem of value-loading.
  • 14:13 - 14:14
    This can happen,
  • 14:14 - 14:18
    and the outcome could be
    very good for humanity.
  • 14:18 - 14:22
    But it doesn't happen automatically.
  • 14:22 - 14:25
    The initial conditions
    for the intelligence explosion
  • 14:25 - 14:28
    might need to be set up
    in just the right way
  • 14:28 - 14:31
    if we are to have a controlled detonation.
  • 14:31 - 14:34
    The values that the AI has
    need to match ours,
  • 14:34 - 14:36
    not just in the familiar context,
  • 14:36 - 14:38
    like where we can easily check
    how the AI behaves,
  • 14:38 - 14:41
    but also in all novel contexts
    that the AI might encounter
  • 14:41 - 14:43
    in the indefinite future.
  • 14:43 - 14:48
    And there are also some esoteric issues
    that would need to be solved, sorted out:
  • 14:48 - 14:50
    the exact details
    of its decision theory,
  • 14:50 - 14:53
    how to deal with
    logical uncertainty and so forth.
  • 14:53 - 14:57
    So the technical problems that need
    to be solved to make this work
  • 14:57 - 14:58
    look quite difficult,
  • 14:58 - 15:01
    -- not as difficult as making
    a super intelligent AI,
  • 15:01 - 15:04
    but fairly difficult.
  • 15:04 - 15:05
    Here is the worry:
  • 15:05 - 15:10
    making super intelligent AI
    is a really hard challenge.
  • 15:10 - 15:13
    Making super intelligent AI that is safe
  • 15:13 - 15:15
    involves some additional
    challenge on top of that.
  • 15:15 - 15:18
    The risk is that somebody
    figures out how to crack
  • 15:18 - 15:21
    the first challenge without also
    having cracked
  • 15:21 - 15:25
    the additional challenge
    of ensuring perfect safety.
  • 15:25 - 15:29
    So I think that we should
    work out a solution
  • 15:29 - 15:32
    to the control problem in advance,
  • 15:32 - 15:35
    so that we have it available
    by the time it is needed.
  • 15:35 - 15:38
    Now it might be that we cannot
    solve the entire control problem
  • 15:38 - 15:41
    in advance because maybe some
    element can only be put in place
  • 15:41 - 15:44
    once you know the details of
    the architecture
  • 15:44 - 15:45
    where it will be implemented.
  • 15:45 - 15:49
    But the more of the control problem
    that we solve in advance,
  • 15:49 - 15:53
    the better the odds that the transition
    to the machine intelligence era
  • 15:53 - 15:55
    will go well.
  • 15:55 - 15:59
    This to me looks like a thing
    that is well worth doing
  • 15:59 - 16:02
    and I can imagine that if
    things turn out okay,
  • 16:02 - 16:05
    that people a million years
    from now
  • 16:05 - 16:07
    look back at this century
  • 16:07 - 16:09
    and it might well be
    that they say
  • 16:09 - 16:11
    that the one thing we did
    that really mattered
  • 16:11 - 16:13
    was to get this thing right.
  • 16:13 - 16:14
    Thank you.
  • 16:14 - 16:17
    (Applause)
Speaker: Nick Bostrom
Duration: 16:31