I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. Okay? That response should worry you.
And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this?
A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming President of the United States?

(Laughter)
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.
The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right?
There's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines. It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's Law to continue. We don't need exponential progress. We just need to keep going.
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence, I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he is one of the smartest people who has ever lived.
So consider the spectrum of intelligence. We have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or at MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
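The arithmetic behind that figure is easy to check; here is a minimal sketch, assuming the talk's round numbers of a million-fold speedup and 52 weeks to the year:

```python
# Back-of-the-envelope check of the "20,000 years per week" claim.
# The million-fold speedup is the talk's assumed round figure, not a measurement.
SPEEDUP = 1_000_000      # electronic vs. biochemical switching speed (assumed)
WEEKS_PER_YEAR = 52

def subjective_years(wall_clock_weeks: float) -> float:
    """Years of human-level intellectual work done in the given wall-clock time."""
    return wall_clock_weeks * SPEEDUP / WEEKS_PER_YEAR

print(round(subjective_years(1)))  # 19231, i.e. roughly 20,000 years per week
```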
How could we even understand, much less constrain, a mind making this sort of progress?
The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.
So what would apes like ourselves do in this circumstance? Well, we'd be free to play frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

(Laughter)
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So even mere rumors of this kind of breakthrough could cause our species to go berserk.
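The 500,000-year figure follows from the same assumed speed ratio as before: half a year of wall-clock lead, multiplied by the million-fold speedup.

```python
# Six months of wall-clock lead at the assumed ~1,000,000x speedup.
print(0.5 * 1_000_000)  # 500000.0 subjective years of head start
```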
Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away.
One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely.
Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands. We would feel a little more urgency than we do.
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems.
Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.
When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.
Thank you very much.

(Applause)