
How civilization could destroy itself -- and 4 ways we could prevent it


  1. Chris Anderson: Nick Bostrom.
  2. So, you have already given us
    so many crazy ideas out there.
  3. I think a couple of decades ago,
  4. you made the case that we might
    all be living in a simulation,
  5. or perhaps probably were.
  6. More recently,
  7. you've painted the most vivid examples
    of how artificial general intelligence
  8. could go horribly wrong.
  9. And now this year,
  10. you're about to publish
  11. a paper that presents something called
    the vulnerable world hypothesis.
  12. And our job this evening is to
    give the illustrated guide to that.
  13. So let's do that.
  14. What is that hypothesis?
  15. Nick Bostrom: It's trying to think about

  16. a sort of structural feature
    of the current human condition.
  17. You like the urn metaphor,
  18. so I'm going to use that to explain it.
  19. So picture a big urn filled with balls
  20. representing ideas, methods,
    possible technologies.
  21. You can think of the history
    of human creativity
  22. as the process of reaching into this urn
    and pulling out one ball after another,
  23. and the net effect so far
    has been hugely beneficial, right?
  24. We've extracted a great many white balls,
  25. some various shades of gray,
    mixed blessings.
  26. We haven't so far
    pulled out the black ball --
  27. a technology that invariably destroys
    the civilization that discovers it.
  28. So the paper tries to think
    about what could such a black ball be.
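To make the urn metaphor concrete, here is a minimal Monte Carlo sketch of the model. The per-invention chance of a black ball and the number of draws are purely hypothetical numbers chosen for illustration; nothing here comes from the paper itself.

```python
import random

def hits_black_ball(n_draws=1_000, p_black=0.001):
    """One simulated history of invention: does a black ball turn up
    within n_draws pulls from the urn? p_black is hypothetical."""
    return any(random.random() < p_black for _ in range(n_draws))

# Fraction of simulated histories in which a black ball comes out.
trials = 10_000
share = sum(hits_black_ball() for _ in range(trials)) / trials
print(f"Share of histories that pull a black ball: {share:.1%}")
# With these illustrative numbers, roughly 1 - (1 - 0.001)**1000, i.e. about 63%.
```

The only point of the sketch is that if the urn contains any black balls at all, continued drawing makes eventually pulling one out close to certain.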
  29. CA: So you define that ball

  30. as one that would inevitably
    bring about civilizational destruction.
  31. NB: Unless we exit what I call
    the semi-anarchic default condition.

  32. But sort of, by default.
  33. CA: So, you make the case compelling

  34. by showing some sort of counterexamples
  35. where you believe that so far
    we've actually got lucky,
  36. that we might have pulled out
    that death ball
  37. without even knowing it.
  38. So there's this quote, what's this quote?
  39. NB: Well, I guess
    it's just meant to illustrate

  40. the difficulty of foreseeing
  41. what basic discoveries will lead to.
  42. We just don't have that capability.
  43. Because we have become quite good
    at pulling out balls,
  44. but we don't really have the ability
    to put the ball back into the urn, right.
  45. We can invent, but we can't un-invent.
  46. So our strategy, such as it is,
  47. is to hope that there is
    no black ball in the urn.
  48. CA: So once it's out, it's out,
    and you can't put it back in,

  49. and you think we've been lucky.
  50. So talk through a couple
    of these examples.
  51. You talk about different
    types of vulnerability.
  52. NB: So the easiest type to understand

  53. is a technology
    that just makes it very easy
  54. to cause massive amounts of destruction.
  55. Synthetic biology might be a fecund
    source of that kind of black ball,
  56. but many other possible things we could --
  57. think of geoengineering,
    really great, right?
  58. We could combat global warming,
  59. but you don't want it
    to get too easy either,
  60. you don't want any random person
    and his grandmother
  61. to have the ability to radically
    alter the earth's climate.
  62. Or maybe lethal autonomous drones,
  63. mass-produced, mosquito-sized
    killer bot swarms.
  64. Nanotechnology,
    artificial general intelligence.
  65. CA: You argue in the paper

  66. that it's a matter of luck
    that when we discovered
  67. that nuclear power could create a bomb,
  68. it might have been the case
  69. that you could have created a bomb
  70. with much easier resources,
    accessible to anyone.
  71. NB: Yeah, so think back to the 1930s

  72. where for the first time we make
    some breakthroughs in nuclear physics,
  73. some genius figures out that it's possible
    to create a nuclear chain reaction
  74. and then realizes
    that this could lead to the bomb.
  75. And we do some more work,
  76. it turns out that what you require
    to make a nuclear bomb
  77. is highly enriched uranium or plutonium,
  78. which are very difficult materials to get.
  79. You need ultracentrifuges,
  80. you need reactors, like,
    massive amounts of energy.
  81. But suppose it had turned out instead
  82. there had been an easy way
    to unlock the energy of the atom.
  83. That maybe by baking sand
    in the microwave oven
  84. or something like that
  85. you could have created
    a nuclear detonation.
  86. So we know that that's
    physically impossible.
  87. But before you did the relevant physics
  88. how could you have known
    how it would turn out?
  89. CA: Although, couldn't you argue

  90. that for life to evolve on Earth
  91. that implied a sort of stable environment,
  92. that if it was possible to create
    massive nuclear reactions relatively easily,
  93. the Earth would never have been stable,
  94. that we wouldn't be here at all.
  95. NB: Yeah, unless there were something
    that is easy to do on purpose

  96. but that wouldn't happen by random chance.
  97. So, like things we can easily do,
  98. we can stack 10 blocks
    on top of one another,
  99. but in nature, you're not going to find,
    like, a stack of 10 blocks.
  100. CA: OK, so this is probably the one

  101. that many of us worry about most,
  102. and yes, synthetic biology
    is perhaps the quickest route
  103. that we can foresee
    in our near future to get us here.
  104. NB: Yeah, and so think
    about what that would have meant

  105. if, say, anybody by working
    in their kitchen for an afternoon
  106. could destroy a city.
  107. It's hard to see how
    modern civilization as we know it
  108. could have survived that.
  109. Because in any population
    of a million people,
  110. there will always be some
    who would, for whatever reason,
  111. choose to use that destructive power.
  112. So if that apocalyptic residual
  113. would choose to destroy a city, or worse,
  114. then cities would get destroyed.
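As a back-of-the-envelope illustration of the "apocalyptic residual" point (the one-in-a-million rate below is hypothetical, not a figure from the talk): even a tiny destructive fraction of a large population is a large absolute number of people.

```python
# Purely illustrative arithmetic; the fraction is hypothetical.
world_population = 8_000_000_000
apocalyptic_fraction = 1 / 1_000_000   # hypothetical: one person in a million
print(int(world_population * apocalyptic_fraction))   # -> 8000 people worldwide
```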
  115. CA: So here's another type
    of vulnerability.

  116. Talk about this.
  117. NB: Yeah, so in addition to these
    kind of obvious types of black balls

  118. that would just make it possible
    to blow up a lot of things,
  119. other types would act
    by creating bad incentives
  120. for humans to do things that are harmful.
  121. So, the Type-2a, we might call it that,
  122. is to think about some technology
    that incentivizes great powers
  123. to use their massive amounts of force
    to create destruction.
  124. So, nuclear weapons were actually
    very close to this, right?
  125. What we did, we spent
    over 10 trillion dollars
  126. to build 70,000 nuclear warheads
  127. and put them on hair-trigger alert.
  128. And there were several times
    during the Cold War
  129. we almost blew each other up.
  130. It's not because a lot of people felt
    this would be a great idea,
  131. let's all spend 10 trillion dollars
    to blow ourselves up,
  132. but the incentives were such
    that we were finding ourselves --
  133. this could have been worse.
  134. Imagine if there had been
    a safe first strike.
  135. Then it might have been very tricky,
  136. in a crisis situation,
  137. to refrain from launching
    all their nuclear missiles.
  138. If nothing else, because you would fear
    that the other side might do it.
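A toy payoff table can make the "safe first strike" worry explicit. All payoffs below are invented solely to illustrate the incentive structure being described; they are not from the paper.

```python
# Hypothetical payoffs for a crisis in which a "safe" first strike exists,
# i.e. striking first largely disarms the other side. Higher is better for me.
payoffs = {
    ("wait",   "wait"):    0,   # the crisis passes
    ("wait",   "strike"): -10,  # absorbing the other side's first strike
    ("strike", "wait"):    -1,  # striking first "safely", with some fallout
    ("strike", "strike"):  -5,  # both launch
}

def best_response(their_action):
    """My payoff-maximizing action given what I expect the other side to do."""
    return max(("wait", "strike"), key=lambda mine: payoffs[(mine, their_action)])

print(best_response("wait"))    # -> wait
print(best_response("strike"))  # -> strike: fear of their launch pushes a launch
```

Under mutual assured destruction the ("strike", "wait") cell would also be catastrophic, which removes that pressure; that is the sense in which the incentives could have been worse than they were.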
  139. CA: Right, mutual assured destruction

  140. kept the Cold War relatively stable,
  141. without that, we might not be here now.
  142. NB: It could have been
    more unstable than it was.

  143. And there could be
    other properties of technology.
  144. It could have been harder
    to have arms treaties,
  145. if instead of nuclear weapons
  146. there had been some smaller thing
    or something less distinctive.
  147. CA: And as well as bad incentives
    for powerful actors,

  148. you also worry about bad incentives
    for all of us, in Type-2b here.
  149. NB: Yeah, so, here we might
    take the case of global warming.

  150. There are a lot of little conveniences
  151. that cause each one of us to do things
  152. that individually
    have no significant effect, right?
  153. But if billions of people do it,
  154. cumulatively, it has a damaging effect.
  155. Now, global warming
    could have been a lot worse than it is.
  156. So we have the climate
    sensitivity parameter, right.
  157. It's a parameter that says
    how much warmer does it get
  158. if you emit a certain amount
    of greenhouse gases.
  159. But, suppose that it had been the case
  160. that with the amount
    of greenhouse gases we emitted,
  161. instead of the temperature rising by, say,
  162. between 3 and 4.5 degrees by 2100,
  163. suppose it had been
    15 degrees or 20 degrees.
  164. Like, then we might have been
    in a very bad situation.
  165. Or suppose that renewable energy
    had just been a lot harder to do.
  166. Or that there had been
    more fossil fuels in the ground.
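For readers who want the "climate sensitivity parameter" pinned down: it is commonly expressed as the equilibrium warming per doubling of atmospheric CO2, which under the standard logarithmic approximation (not spelled out in the talk) gives roughly:

```latex
\Delta T \;\approx\; S \,\log_2\!\left(\frac{C}{C_0}\right)
```

Here S is the climate sensitivity in degrees per doubling of CO2 and C/C0 is the concentration relative to the pre-industrial baseline; the counterfactual in the conversation is simply a world where S is several times larger, so the same emissions produce 15 or 20 degrees of warming instead of a few.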
  167. CA: Couldn't you argue
    that if in that case of --

  168. if what we are doing today
  169. had resulted in 10 degrees difference
    in the time period that we could see,
  170. actually humanity would have got
    off its ass and done something about it.
  171. We're stupid, but we're not
    maybe that stupid.
  172. Or maybe we are.
  173. NB: I wouldn't bet on it.

  174. (Laughter)

  175. You could imagine other features.

  176. So, right now, it's a little bit difficult
    to switch to renewables and stuff, right,
  177. but it can be done.
  178. But it might just have been,
    with slightly different physics,
  179. it could have been much more expensive
    to do these things.
  180. CA: And what's your view, Nick?

  181. Do you think, putting
    these possibilities together,
  182. that this earth, humanity that we are,
  183. we count as a vulnerable world?
  184. That there is a death ball in our future?
  185. NB: It's hard to say.

  186. I mean, I think there might
    well be various black balls in the urn,
  187. that's what it looks like.
  188. There might also be some golden balls
  189. that would help us
    protect against black balls.
  190. And I don't know which order
    they will come out.
  191. CA: I mean, one possible
    philosophical critique of this idea

  192. is that it implies a view
    that the future is essentially settled.
  193. That there either
    is that ball there or it's not.
  194. And in a way,
  195. that's not a view of the future
    that I want to believe.
  196. I want to believe
    that the future is undetermined,
  197. that our decisions today will determine
  198. what kind of balls
    we pull out of that urn.
  199. NB: I mean, if we just keep inventing,

  200. like, eventually we will
    pull out all the balls.
  201. I mean, I think there's a kind
    of weak form of technological determinism
  202. that is quite plausible,
  203. like, you're unlikely
    to encounter a society
  204. that uses flint axes and jet planes.
  205. But you can almost think
    of a technology as a set of affordances.
  206. So technology is the thing
    that enables us to do various things
  207. and achieve various effects in the world.
  208. How we'd then use that,
    of course depends on human choice.
  209. But if we think about these
    three types of vulnerability,
  210. they make quite weak assumptions
    about how we would choose to use them.
  211. So a Type-1 vulnerability, again,
    this massive, destructive power,
  212. it's a fairly weak assumption
  213. to think that in a population
    of millions of people
  214. there would be some that would choose
    to use it destructively.
  215. CA: For me, the single most
    disturbing argument

  216. is that we actually might have
    some kind of view into the urn
  217. that makes it actually
    very likely that we're doomed.
  218. Namely, if you believe
    in accelerating power,
  219. that technology inherently accelerates,
  220. that we build the tools
    that make us more powerful,
  221. then at some point you get to a stage
  222. where a single individual
    can take us all down,
  223. and then it looks like we're screwed.
  224. Isn't that argument quite alarming?
  225. NB: Ah, yeah.

  226. (Laughter)

  227. I think --

  228. Yeah, we get more and more power,
  229. and [it's] easier and easier
    to use those powers,
  230. but we can also invent technologies
    that kind of help us control
  231. how people use those powers.
  232. CA: So let's talk about that,
    let's talk about the response.

  233. Suppose that thinking
    about all the possibilities
  234. that are out there now --
  235. it's not just synbio,
    it's things like cyberwarfare,
  236. artificial intelligence, etc., etc. --
  237. that there might be
    serious doom in our future.
  238. What are the possible responses?
  239. And you've talked about
    four possible responses as well.
  240. NB: Restricting technological development
    doesn't seem promising,

  241. if we are talking about a general halt
    to technological progress.
  242. I think it's neither feasible,
  243. nor would it be desirable
    even if we could do it.
  244. I think there might be very limited areas
  245. where maybe you would want
    slower technological progress.
  246. You don't, I think, want
    faster progress in bioweapons,
  247. or in, say, isotope separation,
  248. that would make it easier to create nukes.
  249. CA: I mean, I used to be
    fully on board with that.

  250. But I would like to actually
    push back on that for a minute.
  251. Just because, first of all,
  252. if you look at the history
    of the last couple of decades,
  253. you know, it's always been
    push forward at full speed,
  254. it's OK, that's our only choice.
  255. But if you look at globalization
    and the rapid acceleration of that,
  256. if you look at the strategy of
    "move fast and break things"
  257. and what happened with that,
  258. and then you look at the potential
    for synthetic biology,
  259. I don't know that we should
    move forward rapidly
  260. or without any kind of restriction
  261. to a world where you could have
    a DNA printer in every home
  262. and high school lab.
  263. There are some restrictions, right?
  264. NB: Possibly, there is
    the first part, the not feasible.

  265. If you think it would be
    desirable to stop it,
  266. there's the problem of feasibility.
  267. So it doesn't really help
    if one nation kind of --
  268. CA: No, it doesn't help
    if one nation does,

  269. but we've had treaties before.
  270. That's really how we survived
    the nuclear threat,
  271. was by going out there
  272. and going through
    the painful process of negotiating.
  273. I just wonder whether the logic isn't
    that we, as a matter of global priority,
  274. we shouldn't go out there and try,
  275. like, now start negotiating
    really strict rules
  276. on where synthetic bioresearch is done,
  277. that it's not something
    that you want to democratize, no?
  278. NB: I totally agree with that --

  279. that it would be desirable, for example,
  280. maybe to have DNA synthesis machines,
  281. not as a product where each lab
    has their own device,
  282. but maybe as a service.
  283. Maybe there could be
    four or five places in the world
  284. where you send in your digital blueprint
    and the DNA comes back, right?
  285. And then, you would have the ability,
  286. if one day it really looked
    like it was necessary,
  287. we would have like,
    a finite set of choke points.
  288. So I think you want to look
    for kind of special opportunities,
  289. where you could have tighter control.
  290. CA: Your belief is, fundamentally,

  291. we are not going to be successful
    in just holding back.
  292. Someone, somewhere --
    North Korea, you know --
  293. someone is going to go there
    and discover this knowledge,
  294. if it's there to be found.
  295. NB: That looks plausible
    under current conditions.

  296. It's not just synthetic biology, either.
  297. I mean, any kind of profound,
    new change in the world
  298. could turn out to be a black ball.
  299. CA: Let's look at
    another possible response.

  300. NB: This also, I think,
    has only limited potential.

  301. So, with the Type-1 vulnerability again,
  302. I mean, if you could reduce the number
    of people who are incentivized
  303. to destroy the world,
  304. if only they could get
    access and the means,
  305. that would be good.
  306. CA: In this image that you asked us to do

  307. you're imagining these drones
    flying around the world
  308. with facial recognition.
  309. When they spot someone
    showing signs of sociopathic behavior,
  310. they shower them with love, they fix them.
  311. NB: I think it's like a hybrid picture.

  312. Eliminate can either mean,
    like, incarcerate or kill,
  313. or it can mean persuade them
    to a better view of the world.
  314. But the point is that,
  315. suppose you were
    extremely successful in this,
  316. and you reduced the number
    of such individuals by half.
  317. And if you want to do it by persuasion,
  318. you are competing against
    all other powerful forces
  319. that are trying to persuade people,
  320. parties, religion, education system.
  321. But suppose you could reduce it by half,
  322. I don't think the risk
    would be reduced by half.
  323. Maybe by five or 10 percent.
  324. CA: You're not recommending that we gamble
    humanity's future on response two.

  325. NB: I think it's all good
    to try to deter and persuade people,

  326. but we shouldn't rely on that
    as our only safeguard.
  327. CA: How about three?

  328. NB: I think there are two general methods

  329. that we could use to achieve
    the ability to stabilize the world
  330. against the whole spectrum
    of possible vulnerabilities.
  331. And we probably would need both.
  332. So, one is an extremely effective ability
  333. to do preventive policing.
  334. Such that you could intercept.
  335. If anybody started to do
    this dangerous thing,
  336. you could intercept them
    in real time, and stop them.
  337. So this would require
    ubiquitous surveillance,
  338. everybody would be monitored all the time.
  339. CA: This is, essentially,
    a form of "Minority Report."

  340. NB: You would have maybe AI algorithms,

  341. big freedom centers
    that were reviewing this, etc., etc.
  342. CA: You know that mass surveillance
    is not a very popular term right now?

  343. (Laughter)

  344. NB: Yeah, so this little device there,

  345. imagine that kind of necklace
    that you would have to wear at all times
  346. with multidirectional cameras.
  347. But, to make it go down better,
  348. just call it the "freedom tag"
    or something like that.
  349. (Laughter)

  350. CA: OK.

  351. I mean, this is the conversation, friends,
  352. this is why this is
    such a mind-blowing conversation.
  353. NB: Actually, there's
    a whole big conversation on this

  354. on its own, obviously.
  355. There are huge problems and risks
    with that, right?
  356. We may come back to that.
  357. So the other, the final,
  358. the other general stabilization capability
  359. is kind of plugging
    another governance gap.
  360. So the surveillance would be kind of
    plugging the governance gap at the microlevel,
  361. like, preventing anybody
    from ever doing something highly illegal.
  362. Then, there's a corresponding
    governance gap
  363. at the macro level, at the global level.
  364. You would need the ability, reliably,
  365. to prevent the worst kinds
    of global coordination failures,
  366. to avoid wars between great powers,
  367. arms races,
  368. cataclysmic commons problems,
  369. in order to deal with
    the Type-2a vulnerabilities.
  370. CA: Global governance is a term

  371. that's definitely way out
    of fashion right now,
  372. but could you make the case
    that throughout history,
  373. the history of humanity
  374. is that at every stage
    of technological power increase,
  375. people have reorganized
    and sort of centralized the power.
  376. So, for example,
    when a roving band of criminals
  377. could take over a society,
  378. the response was,
    well, you have a nation-state
  379. and you centralize force,
    a police force or an army,
  380. so, "No, you can't do that."
  381. The logic, perhaps, of having
    a single person or a single group
  382. able to take out humanity
  383. means at some point
    we're going to have to go this route,
  384. at least in some form, no?
  385. NB: It's certainly true that the scale
    of political organization has increased

  386. over the course of human history.
  387. It used to be hunter-gatherer band, right,
  388. and then chiefdom, city-states, nations,
  389. now there are international organizations
    and so on and so forth.
  390. Again, I just want to make sure
  391. I get the chance to stress
  392. that obviously there are huge downsides
  393. and indeed, massive risks,
  394. both to mass surveillance
    and to global governance.
  395. I'm just pointing out
    that if we are lucky,
  396. the world could be such
    that these would be the only ways
  397. you could survive a black ball.
  398. CA: The logic of this theory,

  399. it seems to me,
  400. is that we've got to recognize
    we can't have it all.
  401. That the sort of,
  402. I would say, naive dream
    that many of us had
  403. that technology is always
    going to be a force for good,
  404. keep going, don't stop,
    go as fast as you can
  405. and not pay attention
    to some of the consequences,
  406. that's actually just not an option.
  407. We can have that,
  408. but if we have that,
  409. we're going to have to accept
  410. some of these other
    very uncomfortable things with it,
  411. and kind of be in this
    arms race with ourselves
  412. of, you want the power,
    you better limit it,
  413. you better figure out how to limit it.
  414. NB: I think it is an option,

  415. a very tempting option,
    it's in a sense the easiest option
  416. and it might work,
  417. but it means we are fundamentally
    vulnerable to extracting a black ball.
  418. Now, I think with a bit of coordination,
  419. like, if you did solve this
    macrogovernance problem,
  420. and the microgovernance problem,
  421. then we could extract
    all the balls from the urn
  422. and we'd benefit greatly.
  423. CA: I mean, if we're living
    in a simulation, does it matter?

  424. We just reboot.
  425. (Laughter)

  426. NB: Then ... I ...

  427. (Laughter)

  428. I didn't see that one coming.
  429. CA: So what's your view?

  430. Putting all the pieces together,
    how likely is it that we're doomed?
  431. (Laughter)

  432. I love how people laugh
    when you ask that question.

  433. NB: On an individual level,

  434. we seem to kind of be doomed anyway,
    just with the time line,
  435. we're rotting and aging
    and all kinds of things, right?
  436. (Laughter)

  437. It's actually a little bit tricky.

  438. If you want to set up
    so that you can attach a probability,
  439. first, who are we?
  440. If you're very old,
    probably you'll die of natural causes,
  441. if you're very young,
    you might have a 100-year --
  442. the probability might depend
    on who you ask.
  443. Then the threshold, like, what counts
    as civilizational devastation?
  444. In the paper I don't require
    an existential catastrophe
  445. in order for it to count.
  446. This is just a definitional matter,
  447. I say a billion dead,
  448. or a reduction of world GDP by 50 percent,
  449. but depending on what
    you say the threshold is,
  450. you get a different probability estimate.
  451. But I guess you could
    put me down as a frightened optimist.
  452. (Laughter)
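To illustrate the definitional point about thresholds, here is a minimal sketch. Every scenario and probability in it is invented purely for the example; none of these numbers are estimates from Bostrom or the paper.

```python
# Hypothetical catastrophe scenarios: (probability this century, deaths in billions).
# All numbers are made up to show how the estimate depends on the threshold.
scenarios = [
    (0.10, 0.1),
    (0.05, 1.0),
    (0.01, 4.0),
]

def p_devastation(threshold_billions):
    """Probability that at least one scenario meeting the threshold occurs,
    treating the hypothetical scenarios as independent."""
    p_none = 1.0
    for p, deaths in scenarios:
        if deaths >= threshold_billions:
            p_none *= 1.0 - p
    return 1.0 - p_none

for t in (0.1, 1.0, 4.0):
    print(f"threshold of {t} billion dead or more: P ≈ {p_devastation(t):.0%}")
# A looser threshold counts more scenarios, so the probability estimate rises.
```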

  453. CA: You're a frightened optimist,

  454. and I think you've just created
    a large number of other frightened ...
  455. people.
  456. (Laughter)

  457. NB: In the simulation.

  458. CA: In a simulation.

  459. Nick Bostrom, your mind amazes me,
  460. thank you so much for scaring
    the living daylights out of us.
  461. (Applause)