Can we build AI without losing control over it?
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you.

And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk." Famine isn't fun.

Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

(Laughter)

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.
The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines. It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
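The 20,000-years figure follows directly from the speed ratio Harris assumes; a minimal back-of-the-envelope sketch (the million-fold speed-up is his stated assumption, not a measured constant):

```python
# Back-of-the-envelope check of "a week of runtime = 20,000 years of work",
# assuming (as stated in the talk) a ~1,000,000x speed advantage of
# electronic circuits over biochemical ones.
speedup = 1_000_000            # assumed electronic-vs-biological speed ratio
weeks_per_year = 52

human_years_per_machine_week = speedup / weeks_per_year
print(f"{human_years_per_machine_week:,.0f} human-years per week of runtime")
# ~19,231 human-years, i.e. on the order of 20,000 years
```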
The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work. So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

(Laughter)
Now, that might sound pretty good, but ask yourself: what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
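The 500,000-years figure is the same assumed million-fold speed-up applied to a six-month lead; a quick sketch under that assumption:

```python
# "Six months ahead = 500,000 years ahead," using the same assumed
# 1,000,000x speed-up as above.
speedup = 1_000_000
lead_time_years = 0.5          # six months of calendar time

print(f"{lead_time_years * speedup:,.0f} subjective years of head start")
# 500,000 years
```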
Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, so it is recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Thank you very much.

(Applause)
Title: Can we build AI without losing control over it?
Speaker: Sam Harris
Description: Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 14:27