-
Chris Anderson: Nick Bostrom.
-
So, you have already given us
so many crazy ideas out there.
-
I think a couple of decades ago,
-
you made the case that we might
all be living in a simulation,
-
or perhaps that we probably were.
-
More recently,
-
you've painted the most vivid examples
of how artificial general intelligence
-
could go horribly wrong.
-
And now this year,
-
you're about to publish
-
a paper that presents something called
the vulnerable world hypothesis.
-
And our job this evening is to
give the illustrated guide to that.
-
So let's do that.
-
What is that hypothesis?
-
Nick Bostrom: It's trying to think about
-
a sort of structural feature
of the current human condition.
-
You like the urn metaphor,
-
so I'm going to use that to explain it.
-
So picture a big urn filled with balls
-
representing ideas, methods,
possible technologies.
-
You can think of the history
of human creativity
-
as the process of reaching into this urn
and pulling out one ball after another,
-
and the net effect so far
has been hugely beneficial, right?
-
We've extracted a great many white balls,
-
some in various shades of gray,
mixed blessings.
-
We haven't so far
pulled out the black ball --
-
a technology that invariably destroys
the civilization that discovers it.
-
So the paper tries to think
about what could such a black ball be.
-
CA: So you define that ball
-
as one that would inevitably
bring about civilizational destruction.
-
NB: Unless we exit what I call
the semi-anarchic default condition.
-
But sort of, by default.
-
CA: So, you make the case compelling
-
by showing some sort of counterexamples
-
where you believe that so far
we've actually got lucky,
-
that we might have pulled out
that death ball
-
without even knowing it.
-
So there's this quote, what's this quote?
-
NB: Well, I guess
it's just meant to illustrate
-
the difficulty of foreseeing
-
what basic discoveries will lead to.
-
We just don't have that capability.
-
Because we have become quite good
at pulling out balls,
-
but we don't really have the ability
to put the ball back into the urn, right.
-
We can invent, but we can't un-invent.
-
So our strategy, such as it is,
-
is to hope that there is
no black ball in the urn.
-
CA: So once it's out, it's out,
and you can't put it back in,
-
and you think we've been lucky.
-
So talk through a couple
of these examples.
-
You talk about different
types of vulnerability.
-
NB: So the easiest type to understand
-
is a technology
that just makes it very easy
-
to cause massive amounts of destruction.
-
Synthetic biology might be a fecund
source of that kind of black ball,
-
but many other possible things we could --
-
think of geoengineering,
really great, right?
-
We could combat global warming,
-
but you don't want it
to get too easy either,
-
you don't want any random person
and his grandmother
-
to have the ability to radically
alter the earth's climate.
-
Or maybe lethal autonomous drones,
-
mass-produced, mosquito-sized
killer bot swarms.
-
Nanotechnology,
artificial general intelligence.
-
CA: You argue in the paper
-
that it's a matter of luck
that when we discovered
-
that nuclear power could create a bomb,
-
it might have been the case
-
that you could have created a bomb
-
with much easier resources,
accessible to anyone.
-
NB: Yeah, so think back to the 1930s
-
when for the first time we make
some breakthroughs in nuclear physics,
-
some genius figures out that it's possible
to create a nuclear chain reaction
-
and then realizes
that this could lead to the bomb.
-
And we do some more work,
-
it turns out that what you require
to make a nuclear bomb
-
is highly enriched uranium or plutonium,
-
which are very difficult materials to get.
-
You need ultracentrifuges,
-
you need reactors, like,
massive amounts of energy.
-
But suppose it had turned out instead
-
there had been an easy way
to unlock the energy of the atom.
-
That maybe by baking sand
in the microwave oven
-
or something like that
-
you could have created
a nuclear detonation.
-
So we know that that's
physically impossible.
-
But before you did the relevant physics
-
how could you have known
how it would turn out?
-
CA: Although, couldn't you argue
-
that for life to evolve on Earth
-
that implied a sort of stable environment,
-
that if it was possible to create
massive nuclear reactions relatively easily,
-
the Earth would never have been stable,
-
that we wouldn't be here at all.
-
NB: Yeah, unless there were something
that is easy to do on purpose
-
but that wouldn't happen by random chance.
-
So, like things we can easily do,
-
we can stack 10 blocks
on top of one another,
-
but in nature, you're not going to find,
like, a stack of 10 blocks.
-
CA: OK, so this is probably the one
-
that many of us worry about most,
-
and yes, synthetic biology
is perhaps the quickest route
-
that we can foresee
in our near future to get us here.
-
NB: Yeah, and so think
about what that would have meant
-
if, say, anybody by working
in their kitchen for an afternoon
-
could destroy a city.
-
It's hard to see how
modern civilization as we know it
-
could have survived that.
-
Because in any population
of a million people,
-
there will always be some
who would, for whatever reason,
-
choose to use that destructive power.
-
So if that apocalyptic residual
-
would choose to destroy a city, or worse,
-
then cities would get destroyed.
-
CA: So here's another type
of vulnerability.
-
Talk about this.
-
NB: Yeah, so in addition to these
kind of obvious types of black balls
-
that would just make it possible
to blow up a lot of things,
-
other types would act
by creating bad incentives
-
for humans to do things that are harmful.
-
So, Type-2a, as we might call it,
-
involves some technology
that incentivizes great powers
-
to use their massive amounts of force
to create destruction.
-
So, nuclear weapons were actually
very close to this, right?
-
What we did, we spent
over 10 trillion dollars
-
to build 70,000 nuclear warheads
-
and put them on hair-trigger alert.
-
And there were several times
during the Cold War when
-
we almost blew each other up.
-
It's not because a lot of people felt
this would be a great idea,
-
let's all spend 10 trillion dollars
to blow ourselves up,
-
but the incentives were such
that we were finding ourselves --
-
this could have been worse.
-
Imagine if there had been
a safe first strike.
-
Then it might have been very tricky,
-
in a crisis situation,
-
to refrain from launching
all your nuclear missiles.
-
If nothing else, because you would fear
that the other side might do it.
-
CA: Right, mutual assured destruction
-
kept the Cold War relatively stable,
-
without that, we might not be here now.
-
NB: It could have been
more unstable than it was.
-
And there could be
other properties of technology.
-
It could have been harder
to have arms treaties,
-
if instead of nuclear weapons
-
there had been some smaller thing
or something less distinctive.
-
CA: And as well as bad incentives
for powerful actors,
-
you also worry about bad incentives
for all of us, in Type-2b here.
-
NB: Yeah, so, here we might
take the case of global warming.
-
There are a lot of little conveniences
-
that cause each one of us to do things
-
that individually
have no significant effect, right?
-
But if billions of people do it,
-
cumulatively, it has a damaging effect.
-
Now, global warming
could have been a lot worse than it is.
-
So we have the climate
sensitivity parameter, right.
-
It's a parameter that says
how much warmer it gets
-
if you emit a certain amount
of greenhouse gases.
-
But, suppose that it had been the case
-
that with the amount
of greenhouse gases we emitted,
-
instead of the temperature rising by, say,
-
between 3 and 4.5 degrees by 2100,
-
suppose it had been
15 degrees or 20 degrees.
-
Like, then we might have been
in a very bad situation.
-
Or suppose that renewable energy
had just been a lot harder to do.
-
Or that there had been
more fossil fuels in the ground.
-
CA: Couldn't you argue
that if in that case of --
-
if what we are doing today
-
had resulted in 10 degrees difference
in the time period that we could see,
-
actually humanity would have got
off its ass and done something about it.
-
We're stupid, but we're not
maybe that stupid.
-
Or maybe we are.
-
NB: I wouldn't bet on it.
-
(Laughter)
-
You could imagine other features.
-
So, right now, it's a little bit difficult
to switch to renewables and stuff, right,
-
but it can be done.
-
But it might just have been,
with slightly different physics,
-
it could have been much more expensive
to do these things.
-
CA: And what's your view, Nick?
-
Do you think, putting
these possibilities together,
-
that this earth, humanity that we are,
-
we count as a vulnerable world?
-
That there is a death ball in our future?
-
NB: It's hard to say.
-
I mean, I think there might
well be various black balls in the urn,
-
that's what it looks like.
-
There might also be some golden balls
-
that would help us
protect against black balls.
-
And I don't know which order
they will come out.
-
CA: I mean, one possible
philosophical critique of this idea
-
is that it implies a view
that the future is essentially settled.
-
That there either
is that ball there or it's not.
-
And in a way,
-
that's not a view of the future
that I want to believe.
-
I want to believe
that the future is undetermined,
-
that our decisions today will determine
-
what kind of balls
we pull out of that urn.
-
NB: I mean, if we just keep inventing,
-
like, eventually we will
pull out all the balls.
-
I mean, I think there's a kind
of weak form of technological determinism
-
that is quite plausible,
-
like, you're unlikely
to encounter a society
-
that uses flint axes and jet planes.
-
But you can almost think
of a technology as a set of affordances.
-
So technology is the thing
that enables us to do various things
-
and achieve various effects in the world.
-
How we then use it,
of course, depends on human choice.
-
But if we think about these
three types of vulnerability,
-
they make quite weak assumptions
about how we would choose to use them.
-
So a Type-1 vulnerability, again,
this massive, destructive power,
-
it's a fairly weak assumption
-
to think that in a population
of millions of people
-
there would be some that would choose
to use it destructively.
-
CA: For me, the single most
disturbing argument
-
is that we actually might have
some kind of view into the urn
-
that makes it actually
very likely that we're doomed.
-
Namely, if you believe
in accelerating power,
-
that technology inherently accelerates,
-
that we build the tools
that make us more powerful,
-
then at some point you get to a stage
-
where a single individual
can take us all down,
-
and then it looks like we're screwed.
-
Isn't that argument quite alarming?
-
NB: Ah, yeah.
-
(Laughter)
-
I think --
-
Yeah, we get more and more power,
-
and [it's] easier and easier
to use those powers,
-
but we can also invent technologies
that kind of help us control
-
how people use those powers.
-
CA: So let's talk about that,
let's talk about the response.
-
Suppose that thinking
about all the possibilities
-
that are out there now --
-
it's not just synbio,
it's things like cyberwarfare,
-
artificial intelligence, etc., etc. --
-
that there might be
serious doom in our future.
-
What are the possible responses?
-
And you've talked about
four possible responses as well.
-
NB: Restricting technological development
doesn't seem promising,
-
if we are talking about a general halt
to technological progress.
-
I think it's neither feasible,
-
nor would it be desirable
even if we could do it.
-
I think there might be very limited areas
-
where maybe you would want
slower technological progress.
-
You don't, I think, want
faster progress in bioweapons,
-
or in, say, isotope separation,
-
that would make it easier to create nukes.
-
CA: I mean, I used to be
fully on board with that.
-
But I would like to actually
push back on that for a minute.
-
Just because, first of all,
-
if you look at the history
of the last couple of decades,
-
you know, it's always been
push forward at full speed,
-
it's OK, that's our only choice.
-
But if you look at globalization
and the rapid acceleration of that,
-
if you look at the strategy of
"move fast and break things"
-
and what happened with that,
-
and then you look at the potential
for synthetic biology,
-
I don't know that we should
move forward rapidly
-
or without any kind of restriction
-
to a world where you could have
a DNA printer in every home
-
and high school lab.
-
There are some restrictions, right?
-
NB: Possibly, but there is still
the first part, the "not feasible."
-
If you think it would be
desirable to stop it,
-
there's the problem of feasibility.
-
So it doesn't really help
if one nation kind of --
-
CA: No, it doesn't help
if one nation does,
-
but we've had treaties before.
-
That's really how we survived
the nuclear threat,
-
was by going out there
-
and going through
the painful process of negotiating.
-
I just wonder whether the logic isn't
that, as a matter of global priority,
-
we should go out there and try,
-
like, now start negotiating
really strict rules
-
on where synthetic bioresearch is done,
-
that it's not something
that you want to democratize, no?
-
NB: I totally agree with that --
-
that it would be desirable, for example,
-
maybe to have DNA synthesis machines,
-
not as a product where each lab
has their own device,
-
but maybe as a service.
-
Maybe there could be
four or five places in the world
-
where you send in your digital blueprint
and the DNA comes back, right?
-
And then, if one day it really looked
like it was necessary,
-
we would have, like,
a finite set of choke points.
-
So I think you want to look
for kind of special opportunities,
-
where you could have tighter control.
-
CA: Your belief is, fundamentally,
-
we are not going to be successful
in just holding back.
-
Someone, somewhere --
North Korea, you know --
-
someone is going to go there
and discover this knowledge,
-
if it's there to be found.
-
NB: That looks plausible
under current conditions.
-
It's not just synthetic biology, either.
-
I mean, any kind of profound,
new change in the world
-
could turn out to be a black ball.
-
CA: Let's look at
another possible response.
-
NB: This also, I think,
has only limited potential.
-
So, with the Type-1 vulnerability again,
-
I mean, if you could reduce the number
of people who are incentivized
-
to destroy the world,
-
if only they could get
access and the means,
-
that would be good.
-
CA: In this image that you asked us to do
-
you're imagining these drones
flying around the world
-
with facial recognition.
-
When they spot someone
showing signs of sociopathic behavior,
-
they shower them with love, they fix them.
-
NB: I think it's like a hybrid picture.
-
Eliminate can either mean,
like, incarcerate or kill,
-
or it can mean persuade them
to a better view of the world.
-
But the point is that,
-
suppose you were
extremely successful in this,
-
and you reduced the number
of such individuals by half.
-
And if you want to do it by persuasion,
-
you are competing against
all other powerful forces
-
that are trying to persuade people:
-
parties, religion, the education system.
-
But suppose you could reduce it by half,
-
I don't think the risk
would be reduced by half.
-
Maybe by five or 10 percent.
-
CA: You're not recommending that we gamble
humanity's future on response two.
-
NB: I think it's all good
to try to deter and persuade people,
-
but we shouldn't rely on that
as our only safeguard.
-
CA: How about three?
-
NB: I think there are two general methods
-
that we could use to achieve
the ability to stabilize the world
-
against the whole spectrum
of possible vulnerabilities.
-
And we probably would need both.
-
So, one is an extremely effective ability
-
to do preventive policing.
-
Such that you could intercept.
-
If anybody started to do
this dangerous thing,
-
you could intercept them
in real time, and stop them.
-
So this would require
ubiquitous surveillance,
-
everybody would be monitored all the time.
-
CA: This is essentially
a form of "Minority Report."
-
NB: You would have maybe AI algorithms,
-
big freedom centers
that were reviewing this, etc., etc.
-
CA: You know that mass surveillance
is not a very popular term right now?
-
(Laughter)
-
NB: Yeah, so this little device there,
-
imagine that kind of necklace
that you would have to wear at all times
-
with multidirectional cameras.
-
But, to make it go down better,
-
just call it the "freedom tag"
or something like that.
-
(Laughter)
-
CA: OK.
-
I mean, this is the conversation, friends,
-
this is why this is
such a mind-blowing conversation.
-
NB: Actually, there's
a whole big conversation on this
-
on its own, obviously.
-
There are huge problems and risks
with that, right?
-
We may come back to that.
-
So the other, the final
-
general stabilization capability
-
is kind of plugging
another governance gap.
-
So the surveillance would plug a kind of
governance gap at the microlevel,
-
like, preventing anybody
from ever doing something highly illegal.
-
Then, there's a corresponding
governance gap
-
at the macro level, at the global level.
-
You would need the ability, reliably,
-
to prevent the worst kinds
of global coordination failures,
-
to avoid wars between great powers,
-
arms races,
-
cataclysmic commons problems,
-
in order to deal with
the Type-2a vulnerabilities.
-
CA: Global governance is a term
-
that's definitely way out
of fashion right now,
-
but could you make the case
that throughout the history of humanity,
-
at every stage
of increase in technological power,
-
people have reorganized
and sort of centralized power?
-
So, for example,
when a roving band of criminals
-
could take over a society,
-
the response was,
well, you have a nation-state
-
and you centralize force,
a police force or an army,
-
so, "No, you can't do that."
-
The logic, perhaps, of having
a single person or a single group
-
able to take out humanity
-
means at some point
we're going to have to go this route,
-
at least in some form, no?
-
NB: It's certainly true that the scale
of political organization has increased
-
over the course of human history.
-
It used to be the hunter-gatherer band, right,
-
and then chiefdoms, city-states, nations,
-
now there are international organizations
and so on and so forth.
-
Again, I just want to make sure
-
I get the chance to stress
-
that obviously there are huge downsides
-
and indeed, massive risks,
-
both to mass surveillance
and to global governance.
-
I'm just pointing out
that if we are lucky,
-
the world could be such
that these would be the only ways
-
you could survive a black ball.
-
CA: The logic of this theory,
-
it seems to me,
-
is that we've got to recognize
we can't have it all.
-
That the sort of,
-
I would say, naive dream
that many of us had
-
that technology is always
going to be a force for good,
-
keep going, don't stop,
go as fast as you can
-
and not pay attention
to some of the consequences,
-
that's actually just not an option.
-
We can have that,
-
but if we do,
-
we're going to have to accept
-
some of these other
very uncomfortable things with it,
-
and kind of be in this
arms race with ourselves
-
of, you want the power,
you better limit it,
-
you better figure out how to limit it.
-
NB: I think it is an option,
-
a very tempting option,
it's in a sense the easiest option
-
and it might work,
-
but it means we are fundamentally
vulnerable to extracting a black ball.
-
Now, I think with a bit of coordination,
-
like, if you did solve this
macrogovernance problem,
-
and the microgovernance problem,
-
then we could extract
all the balls from the urn
-
and we'd benefit greatly.
-
CA: I mean, if we're living
in a simulation, does it matter?
-
We just reboot.
-
(Laughter)
-
NB: Then ... I ...
-
(Laughter)
-
I didn't see that one coming.
-
CA: So what's your view?
-
Putting all the pieces together,
how likely is it that we're doomed?
-
(Laughter)
-
I love how people laugh
when you ask that question.
-
NB: On an individual level,
-
we seem to kind of be doomed anyway,
just with the time line,
-
we're rotting and aging
and all kinds of things, right?
-
(Laughter)
-
It's actually a little bit tricky.
-
If you want to set it up
so that you can attach a probability,
-
first, who are we?
-
If you're very old,
probably you'll die of natural causes,
-
if you're very young,
you might have a 100-year --
-
the probability might depend
on who you ask.
-
Then the threshold, like, what counts
as civilizational devastation?
-
In the paper I don't require
an existential catastrophe
-
in order for it to count.
-
This is just a definitional matter,
-
I say a billion dead,
-
or a reduction of world GDP by 50 percent,
-
but depending on what
you say the threshold is,
-
you get a different probability estimate.
-
But I guess you could
put me down as a frightened optimist.
-
(Laughter)
-
CA: You're a frightened optimist,
-
and I think you've just created
a large number of other frightened ...
-
people.
-
(Laughter)
-
NB: In the simulation.
-
CA: In a simulation.
-
Nick Bostrom, your mind amazes me,
-
thank you so much for scaring
the living daylights out of us.
-
(Applause)