How civilization could destroy itself -- and 4 ways we could prevent it
Chris Anderson: Nick Bostrom. So, you have already given us so many crazy ideas out there. I think a couple of decades ago, you made the case that we might all be living in a simulation, or perhaps probably were. More recently, you've painted the most vivid examples of how artificial general intelligence could go horribly wrong. And now this year, you're about to publish a paper that presents something called the vulnerable world hypothesis. And our job this evening is to give the illustrated guide to that. So let's do that.

What is that hypothesis?
Nick Bostrom: It's trying to think about a sort of structural feature of the current human condition. You like the urn metaphor, so I'm going to use that to explain it. So picture a big urn filled with balls representing ideas, methods, possible technologies. You can think of the history of human creativity as the process of reaching into this urn and pulling out one ball after another, and the net effect so far has been hugely beneficial, right? We've extracted a great many white balls, some various shades of gray, mixed blessings. We haven't so far pulled out the black ball -- a technology that invariably destroys the civilization that discovers it. So the paper tries to think about what such a black ball could be.
CA: So you define that ball as one that would inevitably bring about civilizational destruction.

NB: Unless we exit what I call the semi-anarchic default condition. But sort of, by default.
CA: So, you make the case compelling by showing some counterexamples where you believe that so far we've actually got lucky, that we might have pulled out that death ball without even knowing it. So there's this quote -- what's this quote?
NB: Well, I guess it's just meant to illustrate the difficulty of foreseeing what basic discoveries will lead to. We just don't have that capability. Because we have become quite good at pulling out balls, but we don't really have the ability to put the ball back into the urn, right. We can invent, but we can't un-invent. So our strategy, such as it is, is to hope that there is no black ball in the urn.

CA: So once it's out, it's out, and you can't put it back in, and you think we've been lucky. So talk through a couple of these examples. You talk about different types of vulnerability.
NB: So the easiest type to understand is a technology that just makes it very easy to cause massive amounts of destruction. Synthetic biology might be a fecund source of that kind of black ball, but there are many other possible things -- think of geoengineering, really great, right? We could combat global warming, but you don't want it to get too easy either, you don't want any random person and his grandmother to have the ability to radically alter the earth's climate. Or maybe lethal autonomous drones: mass-produced, mosquito-sized killer bot swarms. Nanotechnology, artificial general intelligence.
CA: You argue in the paper that it's a matter of luck that when we discovered that nuclear power could create a bomb, it didn't turn out that you could create a bomb with much easier resources, accessible to anyone.
NB: Yeah, so think back to the 1930s, when for the first time we made some breakthroughs in nuclear physics. Some genius figures out that it's possible to create a nuclear chain reaction and then realizes that this could lead to the bomb. And we do some more work, and it turns out that what you require to make a nuclear bomb is highly enriched uranium or plutonium, which are very difficult materials to get. You need ultracentrifuges, you need reactors, like, massive amounts of energy. But suppose it had turned out instead that there had been an easy way to unlock the energy of the atom. That maybe by baking sand in the microwave oven or something like that you could have created a nuclear detonation. So we know that that's physically impossible. But before you did the relevant physics, how could you have known how it would turn out?
CA: Although, couldn't you argue that for life to evolve on Earth implied a sort of stable environment, that if it was possible to create massive nuclear reactions relatively easily, the Earth would never have been stable, and we wouldn't be here at all.

NB: Yeah, unless there were something that is easy to do on purpose but that wouldn't happen by random chance. So, like things we can easily do: we can stack 10 blocks on top of one another, but in nature, you're not going to find, like, a stack of 10 blocks.
CA: OK, so this is probably the one that many of us worry about most, and yes, synthetic biology is perhaps the quickest route that we can foresee in our near future to get us here.

NB: Yeah, and so think about what that would have meant if, say, anybody working in their kitchen for an afternoon could destroy a city. It's hard to see how modern civilization as we know it could have survived that. Because in any population of a million people, there will always be some who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed.
CA: So here's another type of vulnerability. Talk about this.
NB: Yeah, so in addition to these kind of obvious types of black balls that would just make it possible to blow up a lot of things, other types would act by creating bad incentives for humans to do things that are harmful. So, the Type-2a, we might call it that, is to think about some technology that incentivizes great powers to use their massive amounts of force to create destruction. Nuclear weapons were actually very close to this, right? What we did, we spent over 10 trillion dollars to build 70,000 nuclear warheads and put them on hair-trigger alert. And there were several times during the Cold War we almost blew each other up. It's not because a lot of people felt this would be a great idea -- let's all spend 10 trillion dollars to blow ourselves up -- but the incentives were such that we were finding ourselves -- this could have been worse. Imagine if there had been a safe first strike. Then it might have been very tricky, in a crisis situation, to refrain from launching all their nuclear missiles. If nothing else, because you would fear that the other side might do it.
CA: Right, mutual assured destruction kept the Cold War relatively stable; without that, we might not be here now.

NB: It could have been more unstable than it was. And there could be other properties of technology. It could have been harder to have arms treaties if, instead of nuclear weapons, there had been some smaller thing or something less distinctive.
CA: And as well as bad incentives for powerful actors, you also worry about bad incentives for all of us, in Type-2b here.

NB: Yeah, so here we might take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually have no significant effect, right? But if billions of people do it, cumulatively, it has a damaging effect. Now, global warming could have been a lot worse than it is. So we have the climate sensitivity parameter, right. It's a parameter that says how much warmer it gets if you emit a certain amount of greenhouse gases. But suppose that it had been the case that with the amount of greenhouse gases we emitted, instead of the temperature rising by, say, between three and 4.5 degrees by 2100, it had been 15 degrees or 20 degrees. Like, then we might have been in a very bad situation. Or suppose that renewable energy had just been a lot harder to do. Or that there had been more fossil fuels in the ground.
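To make that parameter concrete -- a rough sketch in standard climate-science terms, not something from the talk, with illustrative numbers -- sensitivity is often expressed as the equilibrium warming per doubling of atmospheric CO2:

    % Toy form of the climate sensitivity relation; S and the concentrations are illustrative.
    \Delta T \approx S \cdot \log_2\!\left( \frac{C}{C_0} \right)

Here $C_0$ is the preindustrial CO2 concentration, $C$ the new concentration, and $S$ is roughly 3 degrees Celsius in mainstream estimates, so a doubling of CO2 yields about 3 degrees of warming. The hypothetical world Bostrom sketches, where the same emissions produce 15 or 20 degrees, is one where $S$ is several times larger.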
CA: Couldn't you argue that in that case -- if what we are doing today had resulted in 10 degrees difference in the time period that we could see -- actually humanity would have got off its ass and done something about it. We're stupid, but we're not maybe that stupid. Or maybe we are.

NB: I wouldn't bet on it.

(Laughter)

You could imagine other features. So, right now, it's a little bit difficult to switch to renewables and stuff, right, but it can be done. But it might just have been, with slightly different physics, much more expensive to do these things.
CA: And what's your view, Nick? Do you think, putting these possibilities together, that this earth, humanity that we are, we count as a vulnerable world? That there is a death ball in our future?

NB: It's hard to say. I mean, I think there might well be various black balls in the urn, that's what it looks like. There might also be some golden balls that would help us protect against black balls. And I don't know which order they will come out.
CA: I mean, one possible philosophical critique of this idea is that it implies a view that the future is essentially settled. That there either is that ball there or it's not. And in a way, that's not a view of the future that I want to believe. I want to believe that the future is undetermined, that our decisions today will determine what kind of balls we pull out of that urn.
NB: I mean, if we just keep inventing, like, eventually we will pull out all the balls. I mean, I think there's a kind of weak form of technological determinism that is quite plausible, like, you're unlikely to encounter a society that uses flint axes and jet planes. But you can almost think of a technology as a set of affordances. So technology is the thing that enables us to do various things and achieve various effects in the world. How we'd then use that, of course, depends on human choice. But if we think about these three types of vulnerability, they make quite weak assumptions about how we would choose to use them. So a Type-1 vulnerability, again, this massive, destructive power -- it's a fairly weak assumption to think that in a population of millions of people there would be some who would choose to use it destructively.
CA: For me, the single most disturbing argument is that we actually might have some kind of view into the urn that makes it actually very likely that we're doomed. Namely, if you believe in accelerating power, that technology inherently accelerates, that we build the tools that make us more powerful, then at some point you get to a stage where a single individual can take us all down, and then it looks like we're screwed. Isn't that argument quite alarming?

NB: Ah, yeah.

(Laughter)

I think -- yeah, we get more and more power, and [it's] easier and easier to use those powers, but we can also invent technologies that kind of help us control how people use those powers.
CA: So let's talk about that, let's talk about the response. Suppose that, thinking about all the possibilities that are out there now -- it's not just synbio, it's things like cyberwarfare, artificial intelligence, etc., etc. -- there might be serious doom in our future. What are the possible responses? And you've talked about four possible responses as well.
NB: Restricting technological development doesn't seem promising, if we are talking about a general halt to technological progress. I think it's neither feasible, nor would it be desirable even if we could do it. I think there might be very limited areas where maybe you would want slower technological progress. You don't, I think, want faster progress in bioweapons, or in, say, isotope separation that would make it easier to create nukes.
CA: I mean, I used to be fully on board with that. But I would like to actually push back on that for a minute. Just because, first of all, if you look at the history of the last couple of decades, you know, it's always been push forward at full speed, it's OK, that's our only choice. But if you look at globalization and the rapid acceleration of that, if you look at the strategy of "move fast and break things" and what happened with that, and then you look at the potential for synthetic biology, I don't know that we should move forward rapidly or without any kind of restriction to a world where you could have a DNA printer in every home and high school lab. There are some restrictions, right?
NB: Possibly. There is the first part, the "not feasible." If you think it would be desirable to stop it, there's the problem of feasibility. So it doesn't really help if one nation kind of --

CA: No, it doesn't help if one nation does, but we've had treaties before. That's really how we survived the nuclear threat: by going out there and going through the painful process of negotiating. I just wonder whether the logic isn't that we, as a matter of global priority, should go out there and try, like, now start negotiating really strict rules on where synthetic bioresearch is done, that it's not something that you want to democratize, no?
NB: I totally agree with that -- that it would be desirable, for example, maybe to have DNA synthesis machines, not as a product where each lab has their own device, but maybe as a service. Maybe there could be four or five places in the world where you send in your digital blueprint and the DNA comes back, right? And then, if one day it really looked like it was necessary, we would have, like, a finite set of choke points. So I think you want to look for kind of special opportunities where you could have tighter control.
CA: Your belief is, fundamentally, we are not going to be successful in just holding back. Someone, somewhere -- North Korea, you know -- someone is going to go there and discover this knowledge, if it's there to be found.

NB: That looks plausible under current conditions. It's not just synthetic biology, either. I mean, any kind of profound, new change in the world could turn out to be a black ball.
CA: Let's look at another possible response.

NB: This also, I think, has only limited potential. So, with the Type-1 vulnerability again -- I mean, if you could reduce the number of people who are incentivized to destroy the world, if only they could get access and the means, that would be good.

CA: In this image that you asked us to do, you're imagining these drones flying around the world with facial recognition. When they spot someone showing signs of sociopathic behavior, they shower them with love, they fix them.
NB: I think it's like a hybrid picture. Eliminate can either mean, like, incarcerate or kill, or it can mean persuade them to a better view of the world. But the point is this: suppose you were extremely successful and reduced the number of such individuals by half. And if you want to do it by persuasion, you are competing against all other powerful forces that are trying to persuade people: parties, religion, the education system. But suppose you could reduce it by half -- I don't think the risk would be reduced by half. Maybe by five or 10 percent.
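One toy way to see why halving the pool needn't come close to halving the risk -- my illustration with assumed numbers, not Bostrom's math -- is to let catastrophe arrive at a rate proportional to the number $N$ of motivated individuals over a time horizon $T$:

    % Assumed toy model: attempts arrive independently at rate \lambda per person per unit time.
    P = 1 - e^{-\lambda N T}

If $\lambda N T = 5$, then $P \approx 0.99$; halving $N$ gives $1 - e^{-\lambda N T / 2} \approx 0.92$. The risk barely moves, because almost any sizable residual of motivated people is enough.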
CA: You're not recommending that we gamble humanity's future on response two.

NB: I think it's all good to try to deter and persuade people, but we shouldn't rely on that as our only safeguard.
CA: How about three?

NB: I think there are two general methods that we could use to achieve the ability to stabilize the world against the whole spectrum of possible vulnerabilities. And we probably would need both. So, one is an extremely effective ability to do preventive policing, such that you could intercept: if anybody started to do this dangerous thing, you could intercept them in real time and stop them. So this would require ubiquitous surveillance; everybody would be monitored all the time.

CA: This is, essentially, a form of "Minority Report."

NB: You would have maybe AI algorithms, big freedom centers that were reviewing this, etc., etc.

CA: You know that mass surveillance is not a very popular term right now?

(Laughter)
NB: Yeah, so this little device there -- imagine that kind of necklace that you would have to wear at all times, with multidirectional cameras. But, to make it go down better, just call it the "freedom tag" or something like that.

(Laughter)

CA: OK. I mean, this is the conversation, friends, this is why this is such a mind-blowing conversation.

NB: Actually, there's a whole big conversation on this on its own, obviously. There are huge problems and risks with that, right? We may come back to that.
So the other, the final, general stabilization capability is kind of plugging another governance gap. The surveillance would be kind of plugging a governance gap at the micro level, like, preventing anybody from ever doing something highly illegal. Then there's a corresponding governance gap at the macro level, at the global level. You would need the ability, reliably, to prevent the worst kinds of global coordination failures, to avoid wars between great powers, arms races, cataclysmic commons problems, in order to deal with the Type-2a vulnerabilities.
CA: Global governance is a term that's definitely way out of fashion right now, but could you make the case that throughout history, the history of humanity, at every stage of technological power increase, people have reorganized and sort of centralized the power? So, for example, when a roving band of criminals could take over a society, the response was, well, you have a nation-state and you centralize force, a police force or an army, so, "No, you can't do that." The logic, perhaps, of having a single person or a single group able to take out humanity means at some point we're going to have to go this route, at least in some form, no?
NB: It's certainly true that the scale of political organization has increased over the course of human history. It used to be the hunter-gatherer band, right, and then chiefdoms, city-states, nations; now there are international organizations and so on and so forth. Again, I just want to make sure I get the chance to stress that obviously there are huge downsides, and indeed, massive risks, both to mass surveillance and to global governance. I'm just pointing out that if we are lucky, the world could be such that these would be the only ways you could survive a black ball.
CA: The logic of this theory, it seems to me, is that we've got to recognize we can't have it all. That the sort of, I would say, naive dream that many of us had -- that technology is always going to be a force for good, keep going, don't stop, go as fast as you can and don't pay attention to some of the consequences -- that's actually just not an option. We can have it; but if we have it, we're going to have to accept some of these other very uncomfortable things with it, and kind of be in this arms race with ourselves of, you want the power, you better limit it, you better figure out how to limit it.
NB: I think it is an option, a very tempting option, it's in a sense the easiest option and it might work, but it means we are fundamentally vulnerable to extracting a black ball. Now, I think with a bit of coordination -- like, if you did solve this macrogovernance problem, and the microgovernance problem -- then we could extract all the balls from the urn and we'd benefit greatly.

CA: I mean, if we're living in a simulation, does it matter? We just reboot.

(Laughter)

NB: Then ... I ...

(Laughter)

I didn't see that one coming.
CA: So what's your view? Putting all the pieces together, how likely is it that we're doomed?

(Laughter)

I love how people laugh when you ask that question.

NB: On an individual level, we seem to kind of be doomed anyway, just with the timeline; we're rotting and aging and all kinds of things, right?

(Laughter)

It's actually a little bit tricky. If you want to set it up so that you can attach a probability, first, who are we? If you're very old, probably you'll die of natural causes; if you're very young, you might have a 100-year -- the probability might depend on who you ask.
Then there's the threshold: like, what counts as civilizational devastation? In the paper I don't require an existential catastrophe in order for it to count. This is just a definitional matter; I say a billion dead, or a reduction of world GDP by 50 percent, but depending on what you say the threshold is, you get a different probability estimate. But I guess you could put me down as a frightened optimist.
(Laughter)

CA: You're a frightened optimist, and I think you've just created a large number of other frightened ... people.

(Laughter)

NB: In the simulation.

CA: In a simulation. Nick Bostrom, your mind amazes me. Thank you so much for scaring the living daylights out of us.

(Applause)
Title: How civilization could destroy itself -- and 4 ways we could prevent it
Speaker: Nick Bostrom
Description: Humanity is on its way to creating a "black ball": a technological breakthrough that could destroy us all, says philosopher Nick Bostrom. In this incisive, surprisingly light-hearted conversation with Head of TED Chris Anderson, Bostrom outlines the vulnerabilities we could face if (or when) our inventions spiral beyond our control -- and explores how we can prevent our future demise.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 21:09