
How civilization could destroy itself -- and 4 ways we could prevent it

  • 0:01 - 0:03
    Chris Anderson: Nick Bostrom.
  • 0:03 - 0:07
    So, you have already given us
    so many crazy ideas out there.
  • 0:07 - 0:09
    I think a couple of decades ago,
  • 0:09 - 0:12
    you made the case that we might
    all be living in a simulation,
  • 0:12 - 0:13
    or perhaps probably were.
  • 0:13 - 0:15
    More recently,
  • 0:15 - 0:19
    you've painted the most vivid examples
    of how artificial general intelligence
  • 0:19 - 0:21
    could go horribly wrong.
  • 0:22 - 0:23
    And now this year,
  • 0:23 - 0:25
    you're about to publish
  • 0:25 - 0:29
    a paper that presents something called
    the vulnerable world hypothesis.
  • 0:29 - 0:34
    And our job this evening is to
    give the illustrated guide to that.
  • 0:34 - 0:36
    So let's do that.
  • 0:37 - 0:39
    What is that hypothesis?
  • 0:40 - 0:42
    Nick Bostrom: It's trying to think about
  • 0:42 - 0:46
    a sort of structural feature
    of the current human condition.
  • 0:47 - 0:49
    You like the urn metaphor,
  • 0:50 - 0:51
    so I'm going to use that to explain it.
  • 0:51 - 0:56
    So picture a big urn filled with balls
  • 0:56 - 1:00
    representing ideas, methods,
    possible technologies.
  • 1:01 - 1:05
    You can think of the history
    of human creativity
  • 1:05 - 1:08
    as the process of reaching into this urn
    and pulling out one ball after another,
  • 1:08 - 1:12
    and the net effect so far
    has been hugely beneficial, right?
  • 1:12 - 1:14
    We've extracted a great many white balls,
  • 1:14 - 1:17
    some various shades of gray,
    mixed blessings.
  • 1:18 - 1:21
    We haven't so far
    pulled out the black ball --
  • 1:22 - 1:28
    a technology that invariably destroys
    the civilization that discovers it.
  • 1:28 - 1:31
    So the paper tries to think
    about what such a black ball could be.
  • 1:31 - 1:33
    CA: So you define that ball
  • 1:33 - 1:37
    as one that would inevitably
    bring about civilizational destruction.
  • 1:37 - 1:42
    NB: Unless we exit what I call
    the semi-anarchic default condition.
  • 1:42 - 1:43
    But sort of, by default.
  • 1:44 - 1:48
    CA: So, you make the case compelling
  • 1:48 - 1:50
    by showing some sort of counterexamples
  • 1:50 - 1:53
    where you believe that so far
    we've actually got lucky,
  • 1:53 - 1:56
    that we might have pulled out
    that death ball
  • 1:56 - 1:57
    without even knowing it.
  • 1:57 - 2:00
    So there's this quote, what's this quote?
  • 2:01 - 2:03
    NB: Well, I guess
    it's just meant to illustrate
  • 2:03 - 2:05
    the difficulty of foreseeing
  • 2:05 - 2:08
    what basic discoveries will lead to.
  • 2:08 - 2:11
    We just don't have that capability.
  • 2:11 - 2:15
    Because we have become quite good
    at pulling out balls,
  • 2:15 - 2:18
    but we don't really have the ability
    to put the ball back into the urn, right.
  • 2:18 - 2:21
    We can invent, but we can't un-invent.
  • 2:22 - 2:24
    So our strategy, such as it is,
  • 2:24 - 2:27
    is to hope that there is
    no black ball in the urn.
  • 2:27 - 2:31
    CA: So once it's out, it's out,
    and you can't put it back in,
  • 2:31 - 2:32
    and you think we've been lucky.
  • 2:32 - 2:35
    So talk through a couple
    of these examples.
  • 2:35 - 2:38
    You talk about different
    types of vulnerability.
  • 2:38 - 2:40
    NB: So the easiest type to understand
  • 2:40 - 2:43
    is a technology
    that just makes it very easy
  • 2:43 - 2:46
    to cause massive amounts of destruction.
  • 2:47 - 2:51
    Synthetic biology might be a fecund
    source of that kind of black ball,
  • 2:51 - 2:54
    but many other possible things we could --
  • 2:54 - 2:56
    think of geoengineering,
    really great, right?
  • 2:56 - 2:58
    We could combat global warming,
  • 2:58 - 3:01
    but you don't want it
    to get too easy either,
  • 3:01 - 3:03
    you don't want any random person
    and his grandmother
  • 3:03 - 3:06
    to have the ability to radically
    alter the earth's climate.
  • 3:06 - 3:10
    Or maybe lethal autonomous drones,
  • 3:10 - 3:13
    mass-produced, mosquito-sized
    killer bot swarms.
  • 3:14 - 3:17
    Nanotechnology,
    artificial general intelligence.
  • 3:17 - 3:19
    CA: You argue in the paper
  • 3:19 - 3:21
    that it's a matter of luck
    that when we discovered
  • 3:22 - 3:25
    that nuclear power could create a bomb,
  • 3:25 - 3:26
    it might have been the case
  • 3:26 - 3:28
    that you could have created a bomb
  • 3:28 - 3:32
    with much easier resources,
    accessible to anyone.
  • 3:32 - 3:35
    NB: Yeah, so think back to the 1930s
  • 3:35 - 3:40
    where for the first time we make
    some breakthroughs in nuclear physics,
  • 3:40 - 3:44
    some genius figures out that it's possible
    to create a nuclear chain reaction
  • 3:44 - 3:47
    and then realizes
    that this could lead to the bomb.
  • 3:47 - 3:49
    And we do some more work,
  • 3:49 - 3:52
    it turns out that what you require
    to make a nuclear bomb
  • 3:52 - 3:54
    is highly enriched uranium or plutonium,
  • 3:54 - 3:56
    which are very difficult materials to get.
  • 3:56 - 3:58
    You need ultracentrifuges,
  • 3:58 - 4:02
    you need reactors, like,
    massive amounts of energy.
  • 4:02 - 4:04
    But suppose it had turned out instead
  • 4:04 - 4:08
    there had been an easy way
    to unlock the energy of the atom.
  • 4:08 - 4:11
    That maybe by baking sand
    in the microwave oven
  • 4:11 - 4:12
    or something like that
  • 4:12 - 4:14
    you could have created
    a nuclear detonation.
  • 4:14 - 4:16
    So we know that that's
    physically impossible.
  • 4:16 - 4:18
    But before you did the relevant physics
  • 4:18 - 4:20
    how could you have known
    how it would turn out?
  • 4:21 - 4:22
    CA: Although, couldn't you argue
  • 4:22 - 4:24
    that for life to evolve on Earth
  • 4:24 - 4:27
    that implied a sort of stable environment,
  • 4:27 - 4:32
    that if it were possible to create
    massive nuclear reactions relatively easily,
  • 4:32 - 4:33
    the Earth would never have been stable,
  • 4:33 - 4:35
    that we wouldn't be here at all.
  • 4:35 - 4:38
    NB: Yeah, unless there were something
    that is easy to do on purpose
  • 4:38 - 4:41
    but that wouldn't happen by random chance.
  • 4:41 - 4:43
    So, like things we can easily do,
  • 4:43 - 4:45
    we can stack 10 blocks
    on top of one another,
  • 4:45 - 4:48
    but in nature, you're not going to find,
    like, a stack of 10 blocks.
  • 4:48 - 4:50
    CA: OK, so this is probably the one
  • 4:50 - 4:52
    that many of us worry about most,
  • 4:52 - 4:55
    and yes, synthetic biology
    is perhaps the quickest route
  • 4:55 - 4:58
    that we can foresee
    in our near future to get us here.
  • 4:58 - 5:01
    NB: Yeah, and so think
    about what that would have meant
  • 5:01 - 5:05
    if, say, anybody by working
    in their kitchen for an afternoon
  • 5:05 - 5:07
    could destroy a city.
  • 5:07 - 5:10
    It's hard to see how
    modern civilization as we know it
  • 5:10 - 5:12
    could have survived that.
  • 5:12 - 5:14
    Because in any population
    of a million people,
  • 5:14 - 5:17
    there will always be some
    who would, for whatever reason,
  • 5:17 - 5:19
    choose to use that destructive power.
  • 5:20 - 5:23
    So if that apocalyptic residual
  • 5:23 - 5:25
    would choose to destroy a city, or worse,
  • 5:25 - 5:26
    then cities would get destroyed.
  • 5:26 - 5:29
    CA: So here's another type
    of vulnerability.
  • 5:29 - 5:31
    Talk about this.
  • 5:31 - 5:35
    NB: Yeah, so in addition to these
    kind of obvious types of black balls
  • 5:35 - 5:37
    that would just make it possible
    to blow up a lot of things,
  • 5:37 - 5:42
    other types would act
    by creating bad incentives
  • 5:42 - 5:44
    for humans to do things that are harmful.
  • 5:44 - 5:48
    So, the Type-2a, we might call it that,
  • 5:48 - 5:53
    is to think about some technology
    that incentivizes great powers
  • 5:53 - 5:57
    to use their massive amounts of force
    to create destruction.
  • 5:57 - 6:00
    So, nuclear weapons were actually
    very close to this, right?
  • 6:02 - 6:05
    What we did, we spent
    over 10 trillion dollars
  • 6:05 - 6:08
    to build 70,000 nuclear warheads
  • 6:08 - 6:10
    and put them on hair-trigger alert.
  • 6:10 - 6:12
    And there were several times
    during the Cold War
  • 6:12 - 6:14
    we almost blew each other up.
  • 6:14 - 6:17
    It's not because a lot of people felt
    this would be a great idea,
  • 6:17 - 6:20
    let's all spend 10 trillion dollars
    to blow ourselves up,
  • 6:20 - 6:23
    but the incentives were such
    that we were finding ourselves --
  • 6:23 - 6:24
    this could have been worse.
  • 6:24 - 6:26
    Imagine if there had been
    a safe first strike.
  • 6:26 - 6:29
    Then it might have been very tricky,
  • 6:29 - 6:30
    in a crisis situation,
  • 6:30 - 6:33
    to refrain from launching
    all their nuclear missiles.
  • 6:33 - 6:36
    If nothing else, because you would fear
    that the other side might do it.
  • 6:36 - 6:38
    CA: Right, mutual assured destruction
  • 6:38 - 6:41
    kept the Cold War relatively stable,
  • 6:41 - 6:43
    without that, we might not be here now.
  • 6:43 - 6:45
    NB: It could have been
    more unstable than it was.
  • 6:45 - 6:47
    And there could be
    other properties of technology.
  • 6:47 - 6:50
    It could have been harder
    to have arms treaties,
  • 6:50 - 6:51
    if instead of nuclear weapons
  • 6:51 - 6:54
    there had been some smaller thing
    or something less distinctive.
  • 6:54 - 6:57
    CA: And as well as bad incentives
    for powerful actors,
  • 6:57 - 7:00
    you also worry about bad incentives
    for all of us, in Type-2b here.
  • 7:00 - 7:05
    NB: Yeah, so, here we might
    take the case of global warming.
  • 7:07 - 7:09
    There are a lot of little conveniences
  • 7:09 - 7:11
    that cause each one of us to do things
  • 7:11 - 7:14
    that individually
    have no significant effect, right?
  • 7:14 - 7:16
    But if billions of people do it,
  • 7:16 - 7:18
    cumulatively, it has a damaging effect.
  • 7:18 - 7:21
    Now, global warming
    could have been a lot worse than it is.
  • 7:21 - 7:24
    So we have the climate
    sensitivity parameter, right.
  • 7:24 - 7:28
    It's a parameter that says
    how much warmer does it get
  • 7:28 - 7:30
    if you emit a certain amount
    of greenhouse gases.
  • 7:30 - 7:33
    But, suppose that it had been the case
  • 7:33 - 7:35
    that with the amount
    of greenhouse gases we emitted,
  • 7:35 - 7:37
    instead of the temperature rising by, say,
  • 7:37 - 7:41
    between 3 and 4.5 degrees by 2100,
  • 7:41 - 7:44
    suppose it had been
    15 degrees or 20 degrees.
  • 7:44 - 7:47
    Like, then we might have been
    in a very bad situation.
  • 7:47 - 7:50
    Or suppose that renewable energy
    had just been a lot harder to do.
  • 7:50 - 7:53
    Or that there had been
    more fossil fuels in the ground.
  • 7:53 - 7:55
    CA: Couldn't you argue
    that if in that case of --
  • 7:55 - 7:57
    if what we are doing today
  • 7:57 - 8:02
    had resulted in 10 degrees difference
    in the time period that we could see,
  • 8:02 - 8:05
    actually humanity would have got
    off its ass and done something about it.
  • 8:06 - 8:08
    We're stupid, but we're not
    maybe that stupid.
  • 8:08 - 8:10
    Or maybe we are.
  • 8:10 - 8:11
    NB: I wouldn't bet on it.
  • 8:11 - 8:13
    (Laughter)
  • 8:13 - 8:15
    You could imagine other features.
  • 8:15 - 8:20
    So, right now, it's a little bit difficult
    to switch to renewables and stuff, right,
  • 8:20 - 8:22
    but it can be done.
  • 8:22 - 8:25
    But it might just have been,
    with slightly different physics,
  • 8:25 - 8:27
    it could have been much more expensive
    to do these things.
  • 8:28 - 8:30
    CA: And what's your view, Nick?
  • 8:30 - 8:32
    Do you think, putting
    these possibilities together,
  • 8:32 - 8:37
    that this earth, humanity that we are,
  • 8:37 - 8:38
    we count as a vulnerable world?
  • 8:38 - 8:41
    That there is a death ball in our future?
  • 8:44 - 8:45
    NB: It's hard to say.
  • 8:45 - 8:50
    I mean, I think there might
    well be various black balls in the urn,
  • 8:50 - 8:52
    that's what it looks like.
  • 8:52 - 8:54
    There might also be some golden balls
  • 8:54 - 8:58
    that would help us
    protect against black balls.
  • 8:58 - 9:01
    And I don't know which order
    they will come out.
  • 9:01 - 9:04
    CA: I mean, one possible
    philosophical critique of this idea
  • 9:04 - 9:10
    is that it implies a view
    that the future is essentially settled.
  • 9:10 - 9:13
    That there either
    is that ball there or it's not.
  • 9:13 - 9:16
    And in a way,
  • 9:16 - 9:18
    that's not a view of the future
    that I want to believe.
  • 9:18 - 9:21
    I want to believe
    that the future is undetermined,
  • 9:21 - 9:23
    that our decisions today will determine
  • 9:23 - 9:25
    what kind of balls
    we pull out of that urn.
  • 9:26 - 9:30
    NB: I mean, if we just keep inventing,
  • 9:30 - 9:32
    like, eventually we will
    pull out all the balls.
  • 9:33 - 9:36
    I mean, I think there's a kind
    of weak form of technological determinism
  • 9:36 - 9:38
    that is quite plausible,
  • 9:38 - 9:40
    like, you're unlikely
    to encounter a society
  • 9:40 - 9:43
    that uses flint axes and jet planes.
  • 9:44 - 9:48
    But you can almost think
    of a technology as a set of affordances.
  • 9:48 - 9:51
    So technology is the thing
    that enables us to do various things
  • 9:51 - 9:53
    and achieve various effects in the world.
  • 9:53 - 9:56
    How we'd then use that,
    of course depends on human choice.
  • 9:56 - 9:59
    But if we think about these
    three types of vulnerability,
  • 9:59 - 10:02
    they make quite weak assumptions
    about how we would choose to use them.
  • 10:02 - 10:06
    So a Type-1 vulnerability, again,
    this massive, destructive power,
  • 10:06 - 10:07
    it's a fairly weak assumption
  • 10:07 - 10:10
    to think that in a population
    of millions of people
  • 10:10 - 10:13
    there would be some that would choose
    to use it destructively.
  • 10:13 - 10:15
    CA: For me, the single most
    disturbing argument
  • 10:15 - 10:20
    is that we actually might have
    some kind of view into the urn
  • 10:20 - 10:23
    that makes it actually
    very likely that we're doomed.
  • 10:23 - 10:28
    Namely, if you believe
    in accelerating power,
  • 10:28 - 10:30
    that technology inherently accelerates,
  • 10:30 - 10:33
    that we build the tools
    that make us more powerful,
  • 10:33 - 10:35
    then at some point you get to a stage
  • 10:35 - 10:38
    where a single individual
    can take us all down,
  • 10:38 - 10:41
    and then it looks like we're screwed.
  • 10:41 - 10:44
    Isn't that argument quite alarming?
  • 10:44 - 10:46
    NB: Ah, yeah.
  • 10:47 - 10:48
    (Laughter)
  • 10:48 - 10:49
    I think --
  • 10:51 - 10:52
    Yeah, we get more and more power,
  • 10:52 - 10:56
    and [it's] easier and easier
    to use those powers,
  • 10:56 - 11:00
    but we can also invent technologies
    that kind of help us control
  • 11:00 - 11:02
    how people use those powers.
  • 11:02 - 11:05
    CA: So let's talk about that,
    let's talk about the response.
  • 11:05 - 11:07
    Suppose that thinking
    about all the possibilities
  • 11:07 - 11:09
    that are out there now --
  • 11:09 - 11:13
    it's not just synbio,
    it's things like cyberwarfare,
  • 11:13 - 11:17
    artificial intelligence, etc., etc. --
  • 11:17 - 11:21
    that there might be
    serious doom in our future.
  • 11:21 - 11:23
    What are the possible responses?
  • 11:23 - 11:28
    And you've talked about
    four possible responses as well.
  • 11:28 - 11:31
    NB: Restricting technological development
    doesn't seem promising,
  • 11:31 - 11:35
    if we are talking about a general halt
    to technological progress.
  • 11:35 - 11:36
    I think it's neither feasible,
  • 11:36 - 11:38
    nor would it be desirable
    even if we could do it.
  • 11:38 - 11:41
    I think there might be very limited areas
  • 11:41 - 11:44
    where maybe you would want
    slower technological progress.
  • 11:44 - 11:47
    You don't, I think, want
    faster progress in bioweapons,
  • 11:47 - 11:49
    or in, say, isotope separation,
  • 11:49 - 11:52
    that would make it easier to create nukes.
  • 11:53 - 11:56
    CA: I mean, I used to be
    fully on board with that.
  • 11:56 - 11:59
    But I would like to actually
    push back on that for a minute.
  • 11:59 - 12:01
    Just because, first of all,
  • 12:01 - 12:03
    if you look at the history
    of the last couple of decades,
  • 12:03 - 12:07
    you know, it's always been
    push forward at full speed,
  • 12:07 - 12:09
    it's OK, that's our only choice.
  • 12:09 - 12:13
    But if you look at globalization
    and the rapid acceleration of that,
  • 12:13 - 12:16
    if you look at the strategy of
    "move fast and break things"
  • 12:16 - 12:19
    and what happened with that,
  • 12:19 - 12:21
    and then you look at the potential
    for synthetic biology,
  • 12:21 - 12:26
    I don't know that we should
    move forward rapidly
  • 12:26 - 12:27
    or without any kind of restriction
  • 12:27 - 12:31
    to a world where you could have
    a DNA printer in every home
  • 12:31 - 12:32
    and high school lab.
  • 12:33 - 12:35
    There are some restrictions, right?
  • 12:35 - 12:38
    NB: Possibly, there is
    the first part, the not feasible.
  • 12:38 - 12:40
    If you think it would be
    desirable to stop it,
  • 12:40 - 12:41
    there's the problem of feasibility.
  • 12:42 - 12:44
    So it doesn't really help
    if one nation kind of --
  • 12:44 - 12:46
    CA: No, it doesn't help
    if one nation does,
  • 12:46 - 12:49
    but we've had treaties before.
  • 12:49 - 12:53
    That's really how we survived
    the nuclear threat,
  • 12:53 - 12:54
    was by going out there
  • 12:54 - 12:57
    and going through
    the painful process of negotiating.
  • 12:57 - 13:02
    I just wonder whether the logic isn't
    that, as a matter of global priority,
  • 13:02 - 13:04
    we should go out there and try,
  • 13:04 - 13:06
    like, now start negotiating
    really strict rules
  • 13:06 - 13:09
    on where synthetic bioresearch is done,
  • 13:09 - 13:12
    that it's not something
    that you want to democratize, no?
  • 13:12 - 13:14
    NB: I totally agree with that --
  • 13:14 - 13:18
    that it would be desirable, for example,
  • 13:18 - 13:22
    maybe to have DNA synthesis machines,
  • 13:22 - 13:25
    not as a product where each lab
    has their own device,
  • 13:25 - 13:27
    but maybe as a service.
  • 13:27 - 13:29
    Maybe there could be
    four or five places in the world
  • 13:29 - 13:33
    where you send in your digital blueprint
    and the DNA comes back, right?
  • 13:33 - 13:35
    And then, you would have the ability,
  • 13:35 - 13:37
    if one day it really looked
    like it was necessary,
  • 13:37 - 13:39
    we would have like,
    a finite set of choke points.
  • 13:39 - 13:43
    So I think you want to look
    for kind of special opportunities,
  • 13:43 - 13:45
    where you could have tighter control.
  • 13:45 - 13:47
    CA: Your belief is, fundamentally,
  • 13:47 - 13:50
    we are not going to be successful
    in just holding back.
  • 13:50 - 13:52
    Someone, somewhere --
    North Korea, you know --
  • 13:52 - 13:56
    someone is going to go there
    and discover this knowledge,
  • 13:56 - 13:57
    if it's there to be found.
  • 13:57 - 14:00
    NB: That looks plausible
    under current conditions.
  • 14:00 - 14:02
    It's not just synthetic biology, either.
  • 14:02 - 14:04
    I mean, any kind of profound,
    new change in the world
  • 14:04 - 14:06
    could turn out to be a black ball.
  • 14:06 - 14:08
    CA: Let's look at
    another possible response.
  • 14:08 - 14:10
    NB: This also, I think,
    has only limited potential.
  • 14:10 - 14:14
    So, with the Type-1 vulnerability again,
  • 14:14 - 14:18
    I mean, if you could reduce the number
    of people who are incentivized
  • 14:18 - 14:19
    to destroy the world,
  • 14:20 - 14:22
    if only they could get
    access and the means,
  • 14:22 - 14:23
    that would be good.
  • 14:23 - 14:25
    CA: In this image that you asked us to do
  • 14:25 - 14:27
    you're imagining these drones
    flying around the world
  • 14:27 - 14:29
    with facial recognition.
  • 14:29 - 14:32
    When they spot someone
    showing signs of sociopathic behavior,
  • 14:32 - 14:34
    they shower them with love, they fix them.
  • 14:34 - 14:36
    NB: I think it's like a hybrid picture.
  • 14:36 - 14:40
    Eliminate can either mean,
    like, incarcerate or kill,
  • 14:40 - 14:43
    or it can mean persuade them
    to a better view of the world.
  • 14:43 - 14:45
    But the point is that,
  • 14:45 - 14:47
    suppose you were
    extremely successful in this,
  • 14:47 - 14:50
    and you reduced the number
    of such individuals by half.
  • 14:50 - 14:52
    And if you want to do it by persuasion,
  • 14:52 - 14:54
    you are competing against
    all other powerful forces
  • 14:54 - 14:56
    that are trying to persuade people,
  • 14:56 - 14:58
    parties, religion, education system.
  • 14:58 - 15:00
    But suppose you could reduce it by half,
  • 15:00 - 15:02
    I don't think the risk
    would be reduced by half.
  • 15:02 - 15:04
    Maybe by five or 10 percent.
  • 15:04 - 15:08
    CA: You're not recommending that we gamble
    humanity's future on response two.
  • 15:08 - 15:11
    NB: I think it's all good
    to try to deter and persuade people,
  • 15:11 - 15:14
    but we shouldn't rely on that
    as our only safeguard.
  • 15:14 - 15:15
    CA: How about three?
  • 15:15 - 15:18
    NB: I think there are two general methods
  • 15:18 - 15:22
    that we could use to achieve
    the ability to stabilize the world
  • 15:22 - 15:25
    against the whole spectrum
    of possible vulnerabilities.
  • 15:25 - 15:27
    And we probably would need both.
  • 15:27 - 15:31
    So, one is an extremely effective ability
  • 15:32 - 15:33
    to do preventive policing.
  • 15:33 - 15:35
    Such that you could intercept.
  • 15:35 - 15:38
    If anybody started to do
    this dangerous thing,
  • 15:38 - 15:40
    you could intercept them
    in real time, and stop them.
  • 15:40 - 15:43
    So this would require
    ubiquitous surveillance,
  • 15:43 - 15:45
    everybody would be monitored all the time.
  • 15:46 - 15:49
    CA: This is "Minority Report,"
    essentially, a form of.
  • 15:49 - 15:51
    NB: You would have maybe AI algorithms,
  • 15:51 - 15:55
    big freedom centers
    that were reviewing this, etc., etc.
  • 15:57 - 16:01
    CA: You know that mass surveillance
    is not a very popular term right now?
  • 16:01 - 16:02
    (Laughter)
  • 16:03 - 16:05
    NB: Yeah, so this little device there,
  • 16:05 - 16:09
    imagine that kind of necklace
    that you would have to wear at all times
  • 16:09 - 16:11
    with multidirectional cameras.
  • 16:12 - 16:14
    But, to make it go down better,
  • 16:14 - 16:16
    just call it the "freedom tag"
    or something like that.
  • 16:16 - 16:18
    (Laughter)
  • 16:18 - 16:19
    CA: OK.
  • 16:20 - 16:22
    I mean, this is the conversation, friends,
  • 16:22 - 16:25
    this is why this is
    such a mind-blowing conversation.
  • 16:25 - 16:28
    NB: Actually, there's
    a whole big conversation on this
  • 16:28 - 16:29
    on its own, obviously.
  • 16:29 - 16:32
    There are huge problems and risks
    with that, right?
  • 16:32 - 16:33
    We may come back to that.
  • 16:33 - 16:34
    So the other, the final,
  • 16:34 - 16:37
    the other general stabilization capability
  • 16:37 - 16:39
    is kind of plugging
    another governance gap.
  • 16:39 - 16:43
    So the surveillance would plug a kind of
    governance gap at the micro level,
  • 16:43 - 16:46
    like, preventing anybody
    from ever doing something highly illegal.
  • 16:46 - 16:49
    Then, there's a corresponding
    governance gap
  • 16:49 - 16:51
    at the macro level, at the global level.
  • 16:51 - 16:54
    You would need the ability, reliably,
  • 16:54 - 16:57
    to prevent the worst kinds
    of global coordination failures,
  • 16:57 - 17:01
    to avoid wars between great powers,
  • 17:01 - 17:02
    arms races,
  • 17:04 - 17:06
    cataclysmic commons problems,
  • 17:08 - 17:12
    in order to deal with
    the Type-2a vulnerabilities.
  • 17:12 - 17:14
    CA: Global governance is a term
  • 17:14 - 17:16
    that's definitely way out
    of fashion right now,
  • 17:16 - 17:19
    but could you make the case
    that throughout history,
  • 17:19 - 17:20
    the history of humanity
  • 17:20 - 17:25
    is that at every stage
    of technological power increase,
  • 17:25 - 17:29
    people have reorganized
    and sort of centralized the power.
  • 17:29 - 17:32
    So, for example,
    when a roving band of criminals
  • 17:32 - 17:34
    could take over a society,
  • 17:34 - 17:36
    the response was,
    well, you have a nation-state
  • 17:36 - 17:38
    and you centralize force,
    a police force or an army,
  • 17:39 - 17:40
    so, "No, you can't do that."
  • 17:40 - 17:45
    The logic, perhaps, of having
    a single person or a single group
  • 17:45 - 17:46
    able to take out humanity
  • 17:46 - 17:49
    means at some point
    we're going to have to go this route,
  • 17:49 - 17:51
    at least in some form, no?
  • 17:51 - 17:54
    NB: It's certainly true that the scale
    of political organization has increased
  • 17:54 - 17:56
    over the course of human history.
  • 17:56 - 17:59
    It used to be hunter-gatherer band, right,
  • 17:59 - 18:01
    and then chiefdom, city-states, nations,
  • 18:02 - 18:05
    now there are international organizations
    and so on and so forth.
  • 18:06 - 18:07
    Again, I just want to make sure
  • 18:07 - 18:09
    I get the chance to stress
  • 18:09 - 18:11
    that obviously there are huge downsides
  • 18:11 - 18:12
    and indeed, massive risks,
  • 18:12 - 18:16
    both to mass surveillance
    and to global governance.
  • 18:16 - 18:18
    I'm just pointing out
    that if we are lucky,
  • 18:18 - 18:21
    the world could be such
    that these would be the only ways
  • 18:21 - 18:22
    you could survive a black ball.
  • 18:22 - 18:25
    CA: The logic of this theory,
  • 18:25 - 18:26
    it seems to me,
  • 18:26 - 18:30
    is that we've got to recognize
    we can't have it all.
  • 18:30 - 18:32
    That the sort of,
  • 18:34 - 18:36
    I would say, naive dream
    that many of us had
  • 18:36 - 18:40
    that technology is always
    going to be a force for good,
  • 18:40 - 18:43
    keep going, don't stop,
    go as fast as you can
  • 18:43 - 18:45
    and not pay attention
    to some of the consequences,
  • 18:45 - 18:47
    that's actually just not an option.
  • 18:47 - 18:49
    We can have that.
  • 18:49 - 18:50
    If we have that,
  • 18:50 - 18:52
    we're going to have to accept
  • 18:52 - 18:54
    some of these other
    very uncomfortable things with it,
  • 18:54 - 18:56
    and kind of be in this
    arms race with ourselves
  • 18:56 - 18:59
    of, you want the power,
    you better limit it,
  • 18:59 - 19:01
    you better figure out how to limit it.
  • 19:01 - 19:04
    NB: I think it is an option,
  • 19:04 - 19:07
    a very tempting option,
    it's in a sense the easiest option
  • 19:07 - 19:09
    and it might work,
  • 19:09 - 19:13
    but it means we are fundamentally
    vulnerable to extracting a black ball.
  • 19:13 - 19:16
    Now, I think with a bit of coordination,
  • 19:16 - 19:18
    like, if you did solve this
    macrogovernance problem,
  • 19:18 - 19:20
    and the microgovernance problem,
  • 19:20 - 19:22
    then we could extract
    all the balls from the urn
  • 19:22 - 19:25
    and we'd benefit greatly.
  • 19:25 - 19:28
    CA: I mean, if we're living
    in a simulation, does it matter?
  • 19:28 - 19:29
    We just reboot.
  • 19:29 - 19:31
    (Laughter)
  • 19:31 - 19:32
    NB: Then ... I ...
  • 19:32 - 19:35
    (Laughter)
  • 19:35 - 19:36
    I didn't see that one coming.
  • 19:38 - 19:39
    CA: So what's your view?
  • 19:39 - 19:44
    Putting all the pieces together,
    how likely is it that we're doomed?
  • 19:44 - 19:46
    (Laughter)
  • 19:47 - 19:49
    I love how people laugh
    when you ask that question.
  • 19:49 - 19:51
    NB: On an individual level,
  • 19:51 - 19:55
    we seem to kind of be doomed anyway,
    just with the time line,
  • 19:55 - 19:57
    we're rotting and aging
    and all kinds of things, right?
  • 19:57 - 19:59
    (Laughter)
  • 19:59 - 20:01
    It's actually a little bit tricky.
  • 20:01 - 20:03
    If you want to set up
    so that you can attach a probability,
  • 20:03 - 20:05
    first, who are we?
  • 20:05 - 20:07
    If you're very old,
    probably you'll die of natural causes,
  • 20:08 - 20:10
    if you're very young,
    you might have a 100-year --
  • 20:10 - 20:12
    the probability might depend
    on who you ask.
  • 20:12 - 20:16
    Then the threshold, like, what counts
    as civilizational devastation?
  • 20:16 - 20:22
    In the paper I don't require
    an existential catastrophe
  • 20:22 - 20:23
    in order for it to count.
  • 20:23 - 20:25
    This is just a definitional matter,
  • 20:25 - 20:26
    I say a billion dead,
  • 20:26 - 20:29
    or a reduction of world GDP by 50 percent,
  • 20:29 - 20:31
    but depending on what
    you say the threshold is,
  • 20:31 - 20:33
    you get a different probability estimate.
  • 20:33 - 20:37
    But I guess you could
    put me down as a frightened optimist.
  • 20:37 - 20:38
    (Laughter)
  • 20:38 - 20:40
    CA: You're a frightened optimist,
  • 20:40 - 20:44
    and I think you've just created
    a large number of other frightened ...
  • 20:44 - 20:46
    people.
  • 20:46 - 20:47
    (Laughter)
  • 20:47 - 20:48
    NB: In the simulation.
  • 20:48 - 20:49
    CA: In a simulation.
  • 20:49 - 20:51
    Nick Bostrom, your mind amazes me,
  • 20:51 - 20:54
    thank you so much for scaring
    the living daylights out of us.
  • 20:54 - 20:56
    (Applause)
Title:
How civilization could destroy itself -- and 4 ways we could prevent it
Speaker:
Nick Bostrom
Description:

Humanity is on its way to creating a "black ball": a technological breakthrough that could destroy us all, says philosopher Nick Bostrom. In this incisive, surprisingly light-hearted conversation with Head of TED Chris Anderson, Bostrom outlines the vulnerabilities we could face if (or when) our inventions spiral beyond our control -- and explores how we can prevent our future demise.
