34C3 - Dude, you broke the Future!

  1. 34C3 preroll music

  2. Herald: Humans of Congress, it is my
    pleasure to announce the next speaker.
  3. I was supposed to pick out a few awards or
    something, to actually present what he's
  4. done in his life, but I can
    only say: he's one of us!
  5. applause
  6. Charles Stross!
    ongoing applause
  7. Charles Stross: Hi! Is this on?
    Good. Great.
  8. I'm really pleased to be here and I
    want to start by apologizing for my total
  9. lack of German. So this talk is gonna be in
    English. Good morning. I'm Charlie Stross
  10. and it's my job to tell lies for money, or
    rather, I write science fiction, much of
  11. it about the future, which in recent
    years has become ridiculously hard to
  12. predict. In this talk I'm going to talk
    about why. Now our species, Homo sapiens
  13. sapiens, is about 300,000 years old. It
    used to be about 200,000 years old,
  14. but it grew an extra 100,000
    years in the past year because of new
  15. archaeological discoveries, I mean, go
    figure. For all but the last three
  16. centuries or so of that span, however,
    predicting the future was really easy. If
  17. you were an average person - as opposed to
    maybe a king or a pope - natural disasters
  18. aside, everyday life 50 years in the
    future would resemble everyday life 50
  19. years in your past. Let that sink in for a
    bit. For 99.9% of human existence on this
  20. earth, the future was static. Then
    something changed and the future began to
  21. shift increasingly rapidly, until, in the
    present day, things are moving so fast,
  22. it's barely possible to anticipate trends
    from one month to the next. Now as an
  23. eminent computer scientist Edsger Dijkstra
    once remarked, computer science is no more
  24. about computers than astronomy is about
    building big telescopes. The same can be
  25. said of my field of work, writing science
    fiction: sci-fi is rarely about science
  26. and even more rarely about predicting the
    future, but sometimes we dabble in
  27. Futurism and lately, Futurism has gotten
    really, really, weird. Now when I write a
  28. near future work of fiction, one set, say, a
    decade hence, there used to be a recipe I
  29. could follow that worked eerily well. Simply put:
    90% of the next decade's stuff is
  30. already here around us today.
    Buildings are designed to
  31. last many years, automobiles have a design
    life of about a decade, so half the cars on
  32. the road in 2027 are already there now -
    they're new. People? There'll be some new
  33. faces, aged 10 and under, and some older
    people will have died, but most of us
  34. adults will still be around, albeit older
    and grayer, this is the 90% of a near
  35. future that's already here today. After
    the already existing 90%, another 9% of a
  36. near future a decade hence used to be
    easily predictable: you look at trends
  37. dictated by physical limits, such as
    Moore's law and you look at Intel's road
  38. map and you use a bit of creative
    extrapolation and you won't go too far
  39. wrong. If I predict - wearing my futurology
    hat - that in 2027 LTE cellular phones will
  40. be ubiquitous, 5G will be available for
    high bandwidth applications and there will be
  41. fallback to some kind of satellite data
    service at a price, you probably won't
  42. laugh at me.
    I mean, it's not like I'm predicting that
  43. airlines will fly slower and Nazis will
    take over the United States, is it?
  44. laughing
  45. And therein lies the problem. It's the
    remaining 1%, what Donald Rumsfeld
  46. called the "unknown unknowns", that throws off
    all predictions. As it happens, airliners
  47. today are slower than they were in the
    1970s and don't get me started about the Nazis,
  48. I mean, nobody in 2007 was expecting a Nazi
    revival in 2017, were they?
  49. Only this time, Germans get to be the good guys.
    laughing, applause
  50. So. My recipe for fiction set 10 years
    in the future used to be:
  51. "90% is already here,
    9% is not here yet but predictable
  52. and 1% is 'who ordered that?'" But unfortunately
    the ratios have changed, I think we're now
  53. down to maybe 80% already here - climate
    change takes a huge toll on architecture -
  54. then 15% not here yet, but predictable and
    a whopping 5% of utterly unpredictable
  55. deep craziness. Now... before I carry on
    with this talk, I want to spend a minute or
  56. two ranting loudly and ruling out the
    singularity. Some of you might assume that,
  57. as the author of books like "Singularity
    Sky" and "Accelerando",
  58. I expect an impending technological
    singularity,
  59. that we will develop self-improving
    artificial intelligence and mind uploading
  60. and the whole wish list of transhumanist
    aspirations promoted by the likes of
  61. Ray Kurzweil will come to pass. Unfortunately,
    this isn't the case. I think transhumanism
  62. is a warmed-over Christian heresy. While
    its adherents tend to be outspoken atheists,
  63. they can't quite escape from the
    history that gave rise to our current
  64. Western civilization. Many of you are
    familiar with design patterns, an approach
  65. to software engineering that focuses on
    abstraction and simplification, in order
  66. to promote reusable code. When you look at
    the AI singularity as a narrative and
  67. identify the numerous places in their
    story where the phrase "and then a miracle
  68. happens" occur, it becomes apparent pretty
    quickly, that they've reinvented Christiantiy.
  69. applause
  70. Indeed, the wellsprings of
    today's transhumanism draw on a long, rich
  71. history of Russian philosophy, exemplified
    by the Russian Orthodox theologian Nikolai
  72. Fyodorovich Fedorov by way of his disciple
    Konstantin Tsiolkovsky, whose derivation
  73. of the rocket equation makes him
    essentially the father of modern space
  74. flight. Once you start probing the nether
    regions of transhumanist thought and run
  75. into concepts like Roko's Basilisk - by the
    way, any of you who didn't know about the
  76. Basilisk before, are now doomed to an
    eternity in AI hell, terribly sorry - you
  77. realize they've mangled it to match some
    of the nastier aspects of Presbyterian
  78. Protestantism. Now they basically invented
    original sin and Satan in the guise of an
  79. AI that doesn't exist yet, it's... kind of
    peculiar. Anyway, my take on the
  80. singularity is: if something walks
    like a duck and quacks like a duck, it's
  81. probably a duck. And if it looks like a
    religion, it's probably a religion.
  82. I don't see much evidence for human-like,
    self-directed artificial intelligences
  83. coming along any time soon, and a fair bit
    of evidence that nobody, except freaks
  84. in cognitive science departments, even
    wants it. I mean, if we invented an AI
  85. that was like a human mind, it would do the
    AI equivalent of sitting on the sofa,
  86. munching popcorn and
    watching the Super Bowl all day.
  87. It wouldn't be much use to us.
    laughter, applause
  88. What we're getting instead
    is self-optimizing tools that defy
  89. human comprehension, but are not
    in fact any more like our kind
  90. of intelligence than a Boeing 737 is like
    a seagull. Boeing 737s and seagulls both
  91. fly, but Boeing 737s don't lay eggs and shit
    everywhere. So I'm going to wash my hands
  92. of the singularity as a useful explanatory
    model of the future without further ado.
  93. I'm one of those vehement atheists as well
    and I'm gonna try and offer you a better
  94. model for what's happening to us. Now, as
    my fellow Scottish science fiction author
  95. Ken MacLeod likes to say "the secret
    weapon of science fiction is history".
  96. History is, loosely speaking, the written
    record of what and how people did things
  97. in past times. Times that have slipped out
    of our personal memories. We science
  98. fiction writers tend to treat history as a
    giant toy chest to raid, whenever we feel
  99. like telling a story. With a little bit of
    history, it's really easy to whip up an
  100. entertaining yarn about a galactic empire
    that mirrors the development and decline
  101. of the Habsburg Empire or to respin the
    October Revolution as a tale of how Mars
  102. got its independence. But history is
    useful for so much more than that.
  103. It turns out that our personal memories
    don't span very much time at all. I'm 53
  104. and I barely remember the 1960s. I only
    remember the 1970s with the eyes of a 6 to
  105. 16 year old. My father died this year,
    aged 93, and he'd just about remembered the
  106. 1930s. Only those of my father's
    generation directly remember the Great
  107. Depression and can compare it to the
    2007/08 global financial crisis directly.
  108. We Westerners tend to pay little attention
    to cautionary tales told by 90-somethings.
  109. We're modern, we're change obsessed and we
    tend to repeat our biggest social mistakes
  110. just as they slip out of living memory,
    which means they recur on a timescale of
  111. 70 to 100 years.
    So if our personal memories are useless,
  112. we need a better toolkit
    and history provides that toolkit.
  113. History gives us the perspective to see what
    went wrong in the past and to look for
  114. patterns and check to see whether those
    patterns are recurring in the present.
  115. Looking in particular at the history of the past two
    to four hundred years, that age of rapidly
  116. increasing change that I mentioned at the
    beginning, one deviation
  117. from the norm of the preceding
    3,000 centuries is glaringly obvious, and that's
  118. the development of artificial intelligence,
    which happened no earlier than 1553 and no
  119. later than 1844. I'm talking of course
    about the very old, very slow AI's we call
  120. corporations. What lessons from the history
    of the company can we draw that tell us
  121. about the likely behavior of the type of
    artificial intelligence we're interested
  122. in here, today?
    Well. Need a mouthful of water.
  123. Let me crib from Wikipedia for a moment.
  124. Wikipedia: "In the late 18th
    century, Stewart Kyd, the author of the
  125. first treatise on corporate law in English,
    defined a corporation as: 'a collection of
  126. many individuals united into one body,
    under a special denomination, having
  127. perpetual succession under an artificial
    form, and vested, by policy of the law, with
  128. the capacity of acting, in several respects,
    as an individual, enjoying privileges and
  129. immunities in common, and of exercising a
    variety of political rights, more or less
  130. extensive, according to the design of its
    institution, or the powers conferred upon
  131. it, either at the time of its creation, or
    at any subsequent period of its
  132. existence.'"
    This was a late 18th century definition.
  133. Sound like a piece of software to you?
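(To make that rhetorical question concrete: Kyd's definition maps surprisingly well onto a class sketch. This is a playful illustrative toy in Python, not anything from the talk; every name in it is hypothetical.)

```python
# A toy rendering of Kyd's 1790s definition; all names hypothetical.
from dataclasses import dataclass, field

@dataclass
class Corporation:
    denomination: str                             # "a special denomination"
    members: list = field(default_factory=list)   # "many individuals united into one body"
    powers: set = field(default_factory=set)      # "conferred at creation or any subsequent period"

    def replace_member(self, old, new):
        # "perpetual succession": individual humans are swappable,
        # the artificial person persists unchanged
        self.members[self.members.index(old)] = new

    def act(self, action):
        # acting "in several respects as an individual"
        if action not in self.powers:
            raise PermissionError("ultra vires: outside conferred powers")
        return f"{self.denomination} does: {action}"
```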
    In 1844, the British government passed the
  134. "Joint Stock Companies Act" which created
    a register of companies and allowed any
  135. legal person, for a fee, to register a
    company which in turn existed as a
  136. separate legal person. Prior to that point,
    it required a Royal Charter or an act of
  137. Parliament to create a company.
    Subsequently, the law was extended to limit
  138. the liability of individual shareholders
    in the event of business failure and then both
  139. Germany and the United States added their
    own unique twists to what today we see as
  140. the doctrine of corporate personhood.
    Now, plenty of other things that
  141. happened between the 16th and 21st centuries
    did change the shape of the world we live in.
  142. I've skipped the changes in
    agricultural productivity that happened
  143. due to energy economics,
    which finally broke the Malthusian trap
  144. our predecessors lived in.
    This in turn broke the long-term
  145. cap on economic growth of about
    0.1% per year
  146. in the absence of famines, plagues and
    wars and so on.
  147. I've skipped the germ theory of diseases
    and the development of trade empires
  148. in the age of sail and gunpowder,
    that were made possible by advances
  149. in accurate time measurement.
  150. I've skipped the rise, and
    hopefully decline, of the pernicious
  151. theory of scientific racism that
    underpinned Western colonialism and the
  152. slave trade. I've skipped the rise of
    feminism, the ideological position that
  153. women are human beings rather than
    property and the decline of patriarchy.
  154. I've skipped the whole of the
    Enlightenment and the Age of Revolutions,
  155. but this is a technocratic... technocentric
    Congress, so I want to frame this talk in
  156. terms of AI, which we all like to think we
    understand. Here's the thing about these
  157. artificial persons we call corporations.
    Legally, they're people. They have goals,
  158. they operate in pursuit of these goals,
    they have a natural life cycle.
  159. In the 1950s, a typical U.S. corporation on the
    S&P 500 Index had a life span of 60 years.
  160. Today it's down to less than 20 years.
    This is largely due to predation.
  161. Corporations are cannibals, they eat
    one another.
  162. They're also hive super organisms
    like bees or ants.
  163. For the first century and a
    half, they relied entirely on human
  164. employees for their internal operation,
    but today they're automating their
  165. business processes very rapidly. Each
    human is only retained so long as they can
  166. perform their assigned tasks more
    efficiently than a piece of software
  167. and they can all be replaced by another
    human, much as the cells in our own bodies
  168. are functionally interchangeable and a
    group of cells can - in extremis - often be

  169. replaced by a prosthetic device.
    To some extent, corporations can be
  170. trained to serve the personal desires of
    their chief executives, but even CEOs can
  171. be dispensed with, if their activities
    damage the corporation, as Harvey
  172. Weinstein found out a couple of months
    ago.
  173. Finally, our legal environment today has
    been tailored for the convenience of
  174. corporate persons, rather than human
    persons, to the point where our governments
  175. now mimic corporations in many of their
    internal structures.
  176. So, to understand where we're going, we
    need to start by asking "What do our
  177. current actually existing AI overlords
    want?"
  178. Now, Elon Musk, who I believe you've
    all heard of, has an obsessive fear of one
  179. particular hazard of artificial
    intelligence, which he conceives of as
  180. being a piece of software that functions
    like a brain in a box, namely the
  181. Paperclip Optimizer or Maximizer.
    A Paperclip Maximizer is a term of art for
  182. a goal seeking AI that has a single
    priority, e.g., maximizing the
  183. number of paperclips in the universe. The
    Paperclip Maximizer is able to improve
  184. itself in pursuit of its goal, but has no
    ability to vary its goal, so will
  185. ultimately attempt to convert all the
    metallic elements in the solar system into
  186. paperclips, even if this is obviously
    detrimental to the well-being of the
  187. humans who set it this goal.
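(To pin the concept down: a Paperclip Maximizer reduces to a loop with a hard-coded objective and an ability to improve its own efficiency, but never its goal. A toy sketch in Python, every quantity invented for illustration:)

```python
# Toy Paperclip Maximizer; all numbers invented.
metal_in_solar_system = 1e21   # grams of metal, hypothetical
paperclips = 0.0
efficiency = 1.0               # grams converted per step

GOAL = "maximize paperclips"   # hard-coded: self-improvement never touches this line

while metal_in_solar_system > 0:
    efficiency *= 1.01                                  # recursive self-improvement
    converted = min(efficiency, metal_in_solar_system)  # converts whatever metal remains;
    metal_in_solar_system -= converted                  # human well-being is not a
    paperclips += converted                             # variable in this loop at all
```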
    Unfortunately, I don't think Musk
  188. is paying enough attention.
    Consider his own companies.
  189. Tesla isn't a Paperclip Maximizer, it's a
    battery Maximizer.
  190. After all, a battery.. an
    electric car is a battery with wheels and
  191. seats. SpaceX is an orbital payload
    Maximizer, driving down the cost of space
  192. launches in order to encourage more sales
    for the service it provides. SolarCity is
  193. a photovoltaic panel maximizer and so on.
    All three of the.. Musk's very own slow AIs
  194. are based on an architecture, designed to
    maximize return on shareholder
  195. investment, even if by doing so they cook
    the planet the shareholders have to live
  196. on or turn the entire thing into solar
    panels.
  197. But hey, if you're Elon Musk, that's okay,
    you're gonna retire on Mars anyway.
  198. laughing
  199. By the way, I'm ragging on Musk in this
    talk, simply because he's the current
  200. opinionated tech billionaire who thinks
    that disrupting a couple of industries
  201. entitles him to make headlines.
    If this was 2007 and my focus slightly
  202. different, I'd be ragging on
    Steve Jobs, and if this was 1997, my target
  203. would be Bill Gates.
    Don't take it personally, Elon.
  204. laughing
  205. Back to topic. The problem with
    corporations is that, despite their overt
  206. goals, whether they make electric vehicles
    or beer or sell life insurance policies,
  207. they all have a common implicit Paperclip
    Maximizer goal: to generate revenue. If
  208. they don't make money, they're eaten by a
    bigger predator or they go bust. It's as
  209. vital to them as breathing is to us
    mammals. They generally pursue their
  210. implicit goal - maximizing revenue - by
    pursuing their overt goal.
  211. But sometimes they try instead to
    manipulate their environment, to ensure
  212. that money flows to them regardless.
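(The implicit goal can be put in a couple of lines of code: whatever the overt product, the selection function only ever scores money. A minimal sketch, with all figures invented:)

```python
# The implicit goal in miniature; revenue figures are invented.
strategies = {
    "build a better electric car": 1.0,       # the overt goal
    "lobby to weaken emissions rules": 1.4,   # manipulate the environment
    "buy a competitor": 1.2,                  # eat another corporation
}

def next_move(strategies):
    # note: the selection criterion never mentions cars, beer
    # or insurance - only expected revenue
    return max(strategies, key=strategies.get)

print(next_move(strategies))  # -> "lobby to weaken emissions rules"
```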
    Human toolmaking culture has become very
  213. complicated over time. New technologies
    always come with an attached implicit
  214. political agenda that seeks to extend the
    use of the technology. Governments react
  215. to this by legislating to control new
    technologies and sometimes we end up with
  216. industries actually indulging in legal
    duels through the regulatory mechanism of
  217. law to determine who prevails. For
    example, consider the automobile. You
  218. can't have mass automobile transport
    without gas stations and fuel distribution
  219. pipelines.
    These in turn require access to whoever
  220. owns the land the oil is extracted
    from, and before you know it, you end up
  221. with a permanent army in Iraq and a client
    dictatorship in Saudi Arabia. Closer to
  222. home, automobiles imply jaywalking laws and
    drink-driving laws. They affect town
  223. planning regulations and encourage
    suburban sprawl, the construction of human
  224. infrastructure on a scale required by
    automobiles, not pedestrians.
  225. This in turn is bad for competing
    transport technologies, like buses or
  226. trams, which work best in cities with a
    high population density. So to get laws
  227. that favour the automobile in place,
    providing an environment conducive to
  228. doing business, automobile companies spend
    money on political lobbyists and when they
  229. can get away with it, on bribes. Bribery
    needn't be blatant of course. E.g.,
  230. the reforms of the British railway network
    in the 1960s dismembered many branch lines
  231. and coincided with a surge in road
    building and automobile sales. These
  232. reforms were orchestrated by Transport
    Minister Ernest Marples, who was purely a
  233. politician. The fact that he accumulated a
    considerable personal fortune during this
  234. period by buying shares in motorway
    construction corporations, has nothing to
  235. do with it. So, no conflict of interest
    there - now if the automobile industry
  236. can't be considered a pure Paperclip
    Maximizer... sorry, the automobile
  237. industry in isolation can't be considered
    a pure Paperclip Maximizer. You have to
  238. look at it in conjunction with the fossil
    fuel industries, the road construction
  239. business, the accident insurance sector
    and so on. When you do this, you begin to
  240. see the outline of a paperclip-maximizing
    ecosystem that invades far-flung lands and
  241. grinds up and kills around one and a
    quarter million people per year. That's
  242. the global death toll from automobile
    accidents currently, according to the World
  243. Health Organization. It rivals the First
    World War on an ongoing permanent basis
  244. and these are all side effects of its
    drive to sell you a new car. Now,
  245. automobiles aren't of course a total
    liability. Today's cars are regulated
  246. stringently for safety and, in theory, to
    reduce toxic emissions. They're fast,
  247. efficient and comfortable. We can thank
    legally mandated regulations imposed by
  248. governments for this, of course. Go back
    to the 1970s and cars didn't have crumple
  249. zones, go back to the 50s and they didn't
    come with seat belts as standard. In the
  250. 1930s, indicators (turn signals) and brakes
    on all four wheels were optional and your
  251. best hope of surviving a 50 km/h crash was
    to be thrown out of the car and land somewhere
  252. without breaking your neck.
    Regulatory agencies are our current
  253. political system's tool of choice for
    preventing Paperclip Maximizers from
  254. running amok. Unfortunately, regulators
    don't always work. The first failure mode
  255. of regulators that you need to be aware of
    is regulatory capture, where regulatory
  256. bodies are captured by the industries they
    control. Ajit Pai, head of the American Federal
  257. Communications Commission, which just voted
    to eliminate net neutrality rules in the
  258. U.S., has worked as Associate
    General Counsel for Verizon Communications
  259. Inc, the largest current descendant of the
    Bell Telephone system's monopoly. After
  260. the AT&T antitrust lawsuit, the Bell
    network was broken up into the seven baby
  261. bells. They've now pretty much reformed
    and reaggregated and Verizon is the largest current one.
  262. Why should someone with a transparent
    interest in a technology corporation end
  263. up running a regulator that tries to
    control the industry in question? Well, if
  264. you're going to regulate a complex
    technology, you need to recruit regulators
  265. from people who understand it.
    Unfortunately, most of those people are
  266. industry insiders. Ajit Pai is clearly
    very much aware of how Verizon is
  267. regulated, very insightful into its
    operations and wants to do something about
  268. it - just not necessarily in the public
    interest.
  269. applause
    When regulators end up staffed by people
  270. drawn from the industries they're supposed
    to control, they frequently end up working
  271. with their former office mates, to make it
    easier to turn a profit, either by raising
  272. barriers to keep new insurgent companies
    out or by dismantling safeguards that
  273. protect the public. Now a second problem
    is regulatory lag where a technology
  274. advances so rapidly that regulations are
    laughably obsolete by the time they're
  275. issued. Consider the EU directive
    requiring cookie notices on websites to
  276. caution users that their activities are
    tracked and their privacy may be violated.
  277. This would have been a good idea in 1993
    or 1996, but unfortunately it didn't show up
  278. until 2011. Fingerprinting and tracking
    mechanisms have nothing to do with cookies
  279. and were already widespread by then. Tim
    Berners-Lee observed in 1995, that five
  280. years worth of change was happening on the
    web for every 12 months of real-world
  281. time. By that yardstick, the cookie law
    came out nearly a century too late to do
  282. any good. Again, look at Uber. This month,
    the European Court of Justice ruled that
  283. Uber is a taxi service, not a Web App. This
    is arguably correct - the problem is, Uber
  284. has spread globally since it was founded
    eight years ago, subsidizing its drivers to
  285. put competing private hire firms out of
    business. Whether this is a net good for
  286. society is debatable. The problem is, a
    taxi driver can get awfully hungry if she
  287. has to wait eight years for a court ruling
    against a predator intent on disrupting
  288. her business. So, to recap: firstly, we
    already have Paperclip Maximizers and
  289. Musk's AI alarmism is curiously mirror
    blind. Secondly, we have mechanisms for
  290. keeping Paperclip Maximizers in check, but
    they don't work very well against AIs that
  291. deploy the dark arts, especially
    corruption and bribery and they're even
    worse against true AIs that evolve too
  293. fast for human-mediated mechanisms like
  293. the law to keep up with. Finally, unlike
    the naive vision of a Paperclip Maximizer
  294. that maximizes only paperclips, existing
    AIs have multiple agendas, their overt
  295. goal, but also profit seeking, expansion
    into new markets and to accommodate the
  296. desire of whoever is currently in the
    driving seat.
  297. sighs
  298. Now, this brings me to the next major
    heading in this dismaying laundry list:
  299. how it all went wrong. It seems to me that
    our current political upheavals are best
  300. understood as arising from the capture
    of post-1917 democratic institutions by
  301. large-scale AIs. Everywhere you look, you
    see voters protesting angrily against an
  302. entrenched establishment that seems
    determined to ignore the wants and needs
  303. of their human constituents in favor of
    those of the machines. The Brexit upset
  304. was largely the result of a protest vote
    against the British political
  305. establishment, the election of Donald
    Trump likewise, with a side order of racism
  306. on top. Our major political parties are
    led by people who are compatible with the
  307. system as it exists today, a system that
    has been shaped over decades by
  308. corporations distorting our government and
    regulatory environments. We humans live in
  309. a world shaped by the desires and needs of
    AI, forced to live on their terms and we're
  310. taught that we're valuable only to the
    extent we contribute to the rule of the
  311. machines. Now, this is 34C3 and we're
    all more interested in computers and
  312. communications technology than this
    historical crap. But as I said earlier,
  313. history is a secret weapon if you know how
    to use it. What history is good for is
  314. enabling us to spot recurring patterns
    that repeat across timescales outside our
  315. personal experience. And if we look at our
    historical very slow AIs, what do we learn
  316. from them about modern AI and how it's
    going to behave? Well to start with, our
  317. AIs have been warped - the new AIs,
    the electronic ones instantiated in our
  318. machines - by a terrible,
    fundamentally flawed design decision back
  319. in 1995, one that has damaged democratic
    political processes, crippled our ability
  320. to truly understand the world around us
    and led to the angry upheavals and upsets
  321. of our present decade. That mistake was
    the decision to fund the build-out of a
  322. public World Wide Web, as opposed to the
    earlier government-funded corporate and
  323. academic Internet by
    monetizing eyeballs through advertising
  324. revenue. The ad-supported web we're used
    to today wasn't inevitable. If you recall
  325. the web as it was in 1994, there were very
    few ads at all and not much in the way of
  326. commerce. 1995 was the year the World Wide
    Web really came to public attention in the
  327. anglophone world and consumer-facing
    websites began to appear. Nobody really
  328. knew how this thing was going to be paid
    for. The original .com bubble was all
  329. about working out, how to monetize the web
    for the first time and a lot of people
  330. lost their shirts in the process. A naive
    initial assumption was that the
  331. transaction cost of setting up a TCP/IP
    connection over modem was too high to
  332. be supported by per-use micro-
    billing for web pages. So instead of
  333. charging people a fraction of a euro cent
    for every page view, we'd bill customers
  334. indirectly, by shoving advertising banners
    in front of their eyes and hoping they'd
  335. click through and buy something.
    Unfortunately, advertising is an
  336. industry, one of those pre-existing very
    slow AI ecosystems I already alluded to.
  337. Advertising tries to maximize its hold on
    the attention of the minds behind each
  338. human eyeball. The coupling of advertising
    with web search was an inevitable
  339. outgrowth; I mean, how better to attract
    the attention of reluctant subjects than to
  340. find out what they're really interested in
    seeing and sell ads that relate to
  341. those interests? The problem of applying
    the paperclip maximizer approach to
  342. monopolizing eyeballs, however, is that
    eyeballs are a limited, scarce resource.
  343. There are only 168 hours in every week, in
    which I can gaze at banner ads. Moreover,
  344. most ads are irrelevant to my interests and
    it doesn't matter, how often you flash an ad
  345. for dog biscuits at me, I'm never going to
    buy any. I have a cat. To make best
  346. revenue-generating use of our eyeballs,
    it's necessary for the ad industry to
  347. learn who we are and what interests us, and
    to target us increasingly minutely in hope
  348. of hooking us with stuff we're attracted
    to.
  349. In other words: the ad industry is a
    paperclip maximizer, but for its success,
  350. it relies on developing a theory of mind
    that applies to human beings.
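(At its crudest, that "theory of mind" is just a learned interest profile used to rank ads. A minimal Python sketch, every name and topic invented:)

```python
# Crude ad-targeting "theory of mind"; all names and topics invented.
from collections import Counter

profile = Counter()                # the advertiser's model of one user

def observe_click(ad_topics):
    profile.update(ad_topics)      # every click sharpens the model of who you are

def rank_ads(ads):
    # score each ad by overlap with the inferred interests, best match first
    return sorted(ads, key=lambda ad: sum(profile[t] for t in ad), reverse=True)

observe_click({"cats", "space"})
print(rank_ads([{"dog biscuits"}, {"cat toys", "cats"}]))
# -> the cat ad wins; the dog-biscuit ad is wasted on this user
```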
  351. sighs
  352. Do I need to divert into an impassioned
    rant about the hideous corruption
  353. and evil that is Facebook?
    Audience: Yes!
  354. CS: Okay, somebody said yes.
    I'm guessing you've heard it all before,
    but the too-long-didn't-read summary is:
    Facebook is as much a search engine as
  356. Google or Amazon. Facebook searches are
    optimized for faces, that is for human
  357. beings. If you want to find someone you
    fell out of touch with thirty years ago,
  358. Facebook probably knows where they live,
    what their favorite color is, what sized
  359. shoes they wear and what they said about
    you to your friends behind your back all
  360. those years ago, that made you cut them off.
    Even if you don't have a Facebook account,
  361. Facebook has a You account, a hole in their
    social graph with a bunch of connections
  362. pointing into it and your name tagged on
    your friends' photographs. They know a lot
  363. about you and they sell access to their
    social graph to advertisers, who then
  364. target you, even if you don't think you use
    Facebook. Indeed, there is barely any
  365. point in not using Facebook these days, if
    ever. Social media Borg: "Resistance is
  366. futile!" So however, Facebook is trying to
    get eyeballs on ads, so is Twitter and so
  367. are Google. To do this, they fine-tuned the
    content they show you to make it more
  368. attractive to your eyes and by attractive
    I do not mean pleasant. We humans have an
  369. evolved automatic reflex to pay attention
    to threats and horrors as well as
  370. pleasurable stimuli and the algorithms,
    that determine what they show us when we
  371. look at Facebook or Twitter, take this bias
    into account. You might react more
  372. strongly to a public hanging in Iran or an
    outrageous statement by Donald Trump than
  373. to a couple kissing. The algorithm knows
    and will show you whatever makes you pay
  374. attention, not necessarily what you need or
    want to see.
  375. So this brings me to another point about
    computerized AI as opposed to corporate
  376. AI. AI algorithms tend to embody the
    prejudices and beliefs of either the
  377. programmers, or the data set
    the AI was trained on.
  378. A couple of years ago I ran across an
    account of a webcam, developed by mostly
  379. pale-skinned Silicon Valley engineers, that
    had difficulty focusing or achieving correct
  380. color balance, when pointed at dark-skinned
    faces.
  381. That's an example of human-programmer-
    induced bias: they didn't have a wide
  382. enough test set and didn't recognize that
    they were inherently biased towards
  383. expecting people to have pale skin. But
    with today's deep learning, bias can creep
  384. in via the datasets the neural networks are
    trained on, even without the programmers
  385. intending it. Microsoft's first foray into
    a conversational chat bot driven by
  386. machine learning, Tay, was yanked
    offline within days last year, because
  387. 4chan- and reddit-based trolls discovered
    that they could train it towards racism and
  388. sexism for shits and giggles. Just imagine
    you're a poor naive innocent AI who's just
  389. been switched on and you're hoping to pass
    your Turing test and what happens? 4chan
  390. decide to play with your head.
    laughing
  391. I got to feel sorry for Tay.
    Now, humans may be biased,
  392. but at least individually we're
    accountable and if somebody gives you
  393. racist or sexist abuse to your face, you
    can complain or maybe punch them. It's
  394. impossible to punch a corporation and it
    may not even be possible to identify the
  395. source of unfair bias, when you're dealing
    with a machine learning system. AI based
  396. systems that instantiate existing
    prejudices make social change harder.
  397. Traditional advertising works by playing
    on the target customer's insecurity and
  398. fear as much as their aspirations. And fear
  398. of a loss of social status and privileges
    is a powerful stressor. Fear and xenophobia
  399. are useful tools for attracting advertising...
    ah, eyeballs.
    What happens when we get pervasive social
  401. networks that have learned biases against,
    say, feminism or Islam or melanin? Or deep
  402. learning systems, trained on datasets
    contaminated by racist dipshits and their
  403. propaganda? Deep learning systems like the
    ones inside Facebook that determine which
  404. stories to show you to get you to pay as
    much attention as possible to the adverts.
  405. I think you probably have an inkling of
    where this is now going. Now, if you
  406. think, this is sounding a bit bleak and
    unpleasant, you'd be right. I write sci-fi.
  407. You read or watch or play sci-fi. We're
    acculturated to think of science and
  408. technology as good things that make life
    better, but this ain't always so. Plenty of
  409. technologies have historically been
    heavily regulated or even criminalized for
  410. good reason and once you get past any
    reflexive indignation at criticism of
  411. technology and progress, you might agree
    with me that it is reasonable to ban
  412. individuals from owning nuclear weapons or
    nerve gas. Less obviously, they may not be
  413. weapons, but we've banned
    chlorofluorocarbon refrigerants, because
  414. they were building up in the high
    stratosphere and destroying the ozone
  415. layer that protects us from UVB radiation.
    We banned tetraethyl lead in
  416. gasoline, because it poisoned people and
    led to a crime wave. These are not
  417. weaponized technologies, but they have
    horrible side effects. Now, nerve gas and
  418. leaded gasoline were 1930s chemical
    technologies, promoted by 1930s
  419. corporations. Halogenated refrigerants and
    nuclear weapons are totally 1940s. ICBMs
  420. date to the 1950s. You know, I have
    difficulty seeing why people are getting
  421. so worked up over North Korea. North Korea
    reaches 1953-level parity - be terrified
  422. and hide under the bed!
    I submit that the 21st century is throwing
  423. up dangerous new technologies, just as our
    existing strategies for regulating very
  424. slow AIs have proven to be inadequate. And
    I don't have an answer to how we regulate
  425. new technologies, I just want to flag it up
    as a huge social problem that is going to
  426. affect the coming century.
    I'm now going to give you four examples of
  427. new types of AI application that are
    going to warp our societies even more
  428. badly than the old slow AIs have done.
    This isn't an exhaustive list, this is just
    some examples I dreamed up, pulled out of
    my ass. We need to work out a general
  430. strategy for getting on top of this sort
    of thing before they get on top of us and
  431. I think, this is actually a very urgent
    problem. So I'm just going to give you this
  432. list of dangerous new technologies that
    are arriving now, or coming, and send you
  433. away to think about what to do next. I
    mean, we are activists here, we should be
  434. thinking about this and planning what
    to do. Now, the first nasty technology I'd
    like to talk about is political hacking
  436. tools that rely on social-graph-directed
  436. propaganda. This is low-hanging fruit
    after the electoral surprises of 2016.
  437. Cambridge Analytica pioneered the use of
    deep learning by scanning the Facebook and
  438. Twitter social graphs to identify voters'
    political affiliations, simply by looking
  439. at what tweets or Facebook comments they
    liked. They were able to do this to identify,
  440. with a high degree of
    precision, individuals who were vulnerable to
  441. persuasion and who lived in electorally
    sensitive districts. They then canvassed
  442. them with propaganda that targeted their
    personal hot-button issues to change their
  443. electoral intentions. The tools developed
    by web advertisers to sell products have
  444. now been weaponized for political purposes
    and the amount of personal information
  445. about our affiliations that we expose on
    social media makes us vulnerable. As an aside,
  446. the last U.S. Presidential election, as
    mounting evidence suggests, and the British
  447. referendum on leaving the EU were subject
    to foreign cyber war attack via now-
  448. weaponized social media, as was the most
    recent French Presidential election.
  449. In fact, if we remember the leak of emails
    from the Macron campaign, it turns out that
  450. many of those emails were false, because
    the Macron campaign anticipated that they
  451. would be attacked and an email trove would
    be leaked in the last days before the
  452. election. So they deliberately set up
    false emails that would be hacked and then
  453. leaked and then could be discredited. It
    gets twisty fast. Now I'm kind of biting
  454. my tongue and trying not to take sides
    here. I have my own political affiliation
  455. after all, and I'm not terribly mainstream.
    But if social media companies don't work
  456. out how to identify and flag micro-
    targeted propaganda, then democratic
  457. institutions will stop working and elections
    will be replaced by victories for whoever
  458. can buy the most trolls. This won't
    simply be billionaires like the Koch
  459. brothers and Robert Mercer in the U.S.
    throwing elections to whoever will
  460. hand them the biggest tax cuts. Russian
    military cyber war doctrine calls for the
  461. use of social media to confuse and disable
    perceived enemies, in addition to the
  462. increasingly familiar use of zero-day
    exploits for espionage, such as spear
  463. phishing and distributed denial-of-service
    attacks on our infrastructure, which are
  464. also practiced by Western agencies. The problem is,
    once the Russians have demonstrated that
  465. this is an effective tactic, the use of
    propaganda bot armies in cyber war will go
  466. global. And at that point, our social
    discourse will be irreparably poisoned.
  467. Incidentally, I'd like to add - as another
    aside like the Elon Musk thing - I hate
  468. the cyber prefix! It usually indicates
    that whoever's using it has no idea what
  469. they're talking about.
    applause, laughter
  470. Unfortunately, much as the term
    hacker was corrupted from its original
  471. meaning in the 1990s, the term cyber war
    has, it seems, stuck and it's now an
  472. actual thing that we can point to and say:
    "This is what we're talking about". So I'm
  473. afraid, we're stuck with this really
    horrible term. But that's a digression, I
  474. should get back on topic, because I've only
    got 20 minutes to go.
  475. Now, the second threat that we need to
    think about regulating, or controlling, is
  476. an adjunct to deep-learning-targeted
    propaganda: it's the use of neural network
  477. generated false video media. We're used to
    photoshopped images these days, but faking
  478. video and audio takes it to the next
    level. Luckily, faking video and audio is
  479. labor-intensive, isn't it? Well nope, not
    anymore. We're seeing the first generation
  480. of AI assisted video porn, in which the
    faces of film stars are mapped onto those
  481. of other people in a video clip, using
    software rather than a laborious human
  482. process.
    A properly trained neural network
  483. recognizes faces and transforms the face
    of the Hollywood star they want to put
  484. into a porn movie onto
    the face of the porn star in the porn clip
  485. and suddenly you have "Oh dear God, get it
    out of my head" - no, not gonna give you
  486. any examples. Let's just say it's bad
    stuff.
  487. laughs
    Meanwhile we have WaveNet, a system
  488. for generating realistic-sounding speech
    in the voice of any human speaker a neural
  489. network has been trained to mimic.
    We can now put words into
  490. other people's mouths realistically
    without employing a voice actor. This
  491. stuff is still geek intensive. It requires
    relatively expensive GPUs or cloud
  492. computing clusters, but in less than a
    decade it'll be out in the wild, turned
    into something any damn script kiddie can
    use and just about everyone will be able
  494. to fake up a realistic video of someone
    they don't like doing something horrible.
  495. I mean, Donald Trump in the White House. I
    can't help but hope that out there
  496. somewhere there's some geek like Steve
    Bannon with a huge rack of servers who's
  497. faking it all, but no. Now, also we've
    already seen alarm this year over bizarre
  498. YouTube channels that attempt to monetize
    children's TV brands by scraping the video
  499. content of legitimate channels and adding
    their own advertising and keywords on top
  500. before reposting it. This is basically
    your YouTube spam.
  501. Many of these channels are shaped by
    paperclip-maximizing advertising AIs that
  502. are simply trying to maximize their search
    ranking on YouTube and it's entirely
  503. algorithmic: you have a whole list of
    keywords, you permute them, you slap
  504. them on top of existing popular videos and
    re-upload the videos. Once you add neural
  505. network driven tools for inserting
    character A into pirated video B
  506. for click-maximizing bots,
    things are gonna get very weird, though. And
  507. they're gonna get even weirder, when these
    tools are deployed for political gain.
  508. We tend - being primates that evolved 300
    thousand years ago in a smartphone-free
  509. environment - to evaluate the inputs from
    our eyes and ears much less critically
  510. than what random strangers on the Internet
    tell us in text. We're already too
  511. vulnerable to fake news as it is. Soon
    they'll be coming for us, armed with
  512. believable video evidence. The Smart Money
    says that by 2027 you won't be able to
  513. believe anything you see in video, unless
    there are cryptographic signatures on it,
  514. linking it back to the camera that shot
    the raw feed. But you know how good most
  515. people are at using encryption - it's going to
    be chaos!
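(The signature scheme being gestured at could look roughly like this: hash the frames into a chain and sign the head with a key that never leaves the camera. A minimal sketch using Ed25519 from the Python cryptography package; the in-camera key handling is an assumption for illustration, not a real product:)

```python
# Sketch of camera-signed video; the in-camera secure key store is assumed.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # would live in the camera's secure element

def hash_chain(frames):
    digest = b""
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()  # each frame extends the chain
    return digest

def sign_clip(frames):
    return camera_key.sign(hash_chain(frames))   # links the raw feed to this camera

def verify_clip(frames, signature, public_key):
    # raises InvalidSignature if any frame was doctored or reordered
    public_key.verify(signature, hash_chain(frames))
```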
  516. So, paperclip maximizers with focus on
    eyeballs are very 20th century. The new
  517. generation is going to be focusing on our
    nervous system. Advertising as an industry
  518. can only exist because of a quirk of our
    nervous system, which is that we're
  519. susceptible to addiction. Be it
    tobacco, gambling or heroin, we
  520. recognize addictive behavior when we see
    it. Well, do we? It turns out the human
  521. brain's reward feedback loops are
    relatively easy to game. Large
  522. corporations like Zynga - producers of
    FarmVille - exist solely because of it,
  523. free-to-use social media platforms like
    Facebook and Twitter are dominant precisely
  524. because they're structured to reward
    frequent short bursts of interaction and
  525. to generate emotional engagement - not
    necessarily positive emotions, anger and
  526. hatred are just as good when it comes to
    attracting eyeballs for advertisers.
  527. Smartphone addiction is a side effect of
    advertising as a revenue model. Frequent
  528. short bursts of interaction to keep us
    coming back for more. Now a new.. newish
  529. development, thanks to deep learning again -
    I keep coming back to deep learning,
  530. don't I? - use of neural networks in a
    manner that Marvin Minsky never envisaged,
  531. back when he was deciding that the
    Perceptron was where it began and ended
  532. and it couldn't do anything.
    Well, we have neuroscientists now, who've
  533. mechanized the process of making apps more
    addictive. Dopamine Labs is one startup
  534. that provides tools to app developers to
    make any app more addictive, as well as to
  535. reduce the desire to continue
    participating in a behavior if it's
  536. undesirable, if the app developer actually
    wants to help people kick the habit. This
  537. goes way beyond automated A/B testing. A/B
    testing allows developers to plot a binary
  538. tree path between options, moving towards a
    single desired goal. But true deep
  539. learning addictiveness maximizers can
    optimize for multiple attractors in
  540. parallel. The more users you've got on
    your app, the more effectively you can work
  541. out what attracts them, and train it to
    focus on extra-addictive characteristics.
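(To make the contrast concrete: an A/B test settles one binary choice at a time, while a bandit-style addictiveness maximizer tunes several "attractors" in parallel from live engagement. A toy epsilon-greedy sketch, with all knobs and numbers invented:)

```python
# Toy multi-attractor optimizer; attractors and numbers are invented.
import random

attractors = {"notification_rate": [0.1, 0.5, 1.0],
              "outrage_level":     [0.0, 0.5, 1.0]}
value = {(k, v): 0.0 for k, vs in attractors.items() for v in vs}
count = {key: 0 for key in value}

def choose(epsilon=0.1):
    # per attractor, usually exploit the best-scoring setting, sometimes explore
    return {k: (random.choice(vs) if random.random() < epsilon
                else max(vs, key=lambda v: value[(k, v)]))
            for k, vs in attractors.items()}

def update(config, engagement):
    # fold each user's observed engagement back into every chosen setting
    for k, v in config.items():
        count[(k, v)] += 1
        value[(k, v)] += (engagement - value[(k, v)]) / count[(k, v)]
```

Unlike a binary-tree A/B path, every attractor is being optimized simultaneously, and the more users feed the loop, the faster it converges.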
  542. Now, going by their public face, the folks
    at Dopamine Labs seem to have ethical
  543. qualms about the misuse of addiction
    maximizers. But neuroscience isn't a
  544. secret and sooner or later some really
    unscrupulous sociopaths will try to see
  545. how far they can push it. So let me give
    you a specific imaginary scenario: Apple
  546. have put a lot of effort into making real-
    time face recognition work on the iPhone X
  547. and it's going to be everywhere on
    everybody's phone in another couple of
  548. years. You can't fool an iPhone X with a
    photo or even a simple mask. It does depth
  549. mapping to ensure your eyes are in the
    right place and can tell whether they're
  550. open or closed. It recognizes your face
    from underlying bone structure through
  551. makeup and bruises. It's running
    continuously, checking pretty much as often
  552. as every time you'd hit the home button on
    a more traditional smartphone UI and it
  553. can see where your eyeballs are pointing.
    The purpose of a face recognition system
  554. is to provide real-time,
    continuous authentication when you're
  555. using a device - not just entering a PIN or
    typing a password or using a two-factor
  556. authentication pad, but the device knows
    that you are its authorized user on a
  557. continuous basis and if somebody grabs
    your phone and runs away with it, it'll
  558. know that it's been stolen immediately; it
    sees the face of the thief.
  559. However, your phone monitoring your facial
    expressions and correlating against app
  560. usage has other implications. Your phone
    will be aware of precisely what you like
  561. to look at on your screen.. on its screen.
    We may well have sufficient insight on the
  562. part of the phone to identify whether
    you're happy or sad, bored or engaged.
  563. With addiction-seeking deep learning tools
    and neural network generated images, those
  564. synthetic videos I was talking about, it's
    in principle entirely possible to
  565. feed you an endlessly escalating payload
    of arousal-inducing inputs. It might be
  566. Facebook or Twitter messages, optimized to
    produce outrage, or it could be porn
  567. generated by AI to appeal to kinks you
    don't even consciously know you have.
  568. But either way, the app now owns your
    central nervous system and you will be
  569. monetized. And finally, I'd like to raise a
    really hair-raising specter that goes well
  570. beyond the use of deep learning and
    targeted propaganda and cyber war. Back in
  571. 2011, an obscure Russian software house
    launched an iPhone app for pickup artists
  572. called 'Girls Around Me'. Spoiler: Apple
    pulled it like a hot potato as soon as
  573. word got out that it existed. Now, Girls
    Around Me worked out where the user was
  574. using GPS, then it would query Foursquare
    and Facebook for people matching a simple
  575. relational search: for single females (per
    Facebook relationship status) who had
  576. checked in, or been checked in by their
    friends, in your vicinity on Foursquare.
  577. The app then displayed their locations on a
    map along with links to their social media
  578. profiles. If they were doing it today, the
    interface would be gamified, showing strike
  579. rates and a leaderboard and flagging
    targets who succumbed to harassment as
  580. easy lays.
    But these days, the cool kids and single
  581. adults are all using dating apps with a
    missing vowel in the name; only a creeper
  582. would want something like Girls Around Me,
    right? Unfortunately, there are much, much
  583. nastier uses of scraping social media
    to find potential victims for serial
  584. rapists. Does your social media profile
    indicate your political and religious
  585. affiliation? No? Cambridge Analytica can
    work them out with 99.9% precision
  586. anyway, so don't worry about that. We
    already have you pegged. Now add a service
  587. that can identify people's affiliation and
    location and you have the beginning of a
  588. flash mob app, one that will show people
    like us and people like them on a
  589. hyperlocal map.
    Imagine you're a young female and a
  590. supermarket like Target has figured out
    from your purchase patterns, that you're
  591. pregnant, even though you don't know it
    yet. This actually happened in 2011. Now
    imagine that all the anti-abortion
    campaigners in your town have an app
  593. called "Babies Risk" on their phones.
    Someone has paid for the analytics feed
  594. from the supermarket and every time you go
    near a family planning clinic, a group of
  595. unfriendly anti-abortion protesters
    somehow miraculously show up and swarm
  596. you. Or imagine you're male and gay and
    the "God hates fags"-crowd has invented a
  597. 100% reliable gaydar app, based on your
    Grindr profile, and is getting their fellow
  598. travelers to queer-bash gay men - only when
    they're alone or outnumbered ten to
  599. one. That's the special horror of precise
    geolocation: not only do you always know
  600. where you are, the AIs know where you are
    and some of them aren't friendly. Or
  601. imagine you're in Pakistan and Christian-
    Muslim tensions are rising, or you're a
  602. Democrat in rural Alabama; you know, the
    possibilities are endless. Someone out
  603. there is working on this. A geolocation
    aware, social media scraping deep learning
  604. application, that uses a gamified
    competitive interface to reward its
  605. players for joining in acts of mob
    violence against whoever the app developer
  606. hates.
    Probably it has an innocuous-seeming but
  607. highly addictive training mode, to get the
    users accustomed to working in teams and
  608. obeying the apps instructions. Think
    Ingress or Pokemon Go. Then at some pre-
  609. planned zero-hour, it switches mode and
    starts rewarding players for violence,
  610. players who have been primed to think of
    their targets as vermin by a steady drip
  611. feed of micro-targeted dehumanizing
    propaganda inputs, delivered over a period
    of months. And the worst bit of this picture
  612. is that the app developer isn't even a
  613. nation-state trying to disrupt its enemies
    or an extremist political group trying to
  614. murder gays, Jews or Muslims. It's just a
    Paperclip Maximizer doing what it does
  615. and you are the paper. Welcome to the 21st
    century.
  616. applause
    Uhm...
  617. Thank you.
  618. ongoing applause
    We have a little time for questions. Do
  619. you have a microphone for the audience? Do
    we have any questions? ... OK.
  620. Herald: So you are doing a Q&A?
    CS: Hmm?
  621. Herald: So you are doing a Q&A. Well if
    there are any questions, please come
  622. forward to the microphones, numbers 1
    through 4 and ask.
  623. Mic 1: Do you really think it's all
    bleak and dystopian like you described
  624. it, because I also think the future can be
    bright, looking at the internet with open
  625. source and like, it's all growing and going
    faster and faster in a good
  626. direction. So what do you think about
    the balance here?
  627. CS: sighs Basically, I think the
    problem is that about 3% of us
  628. are sociopaths or psychopaths who spoil
    everything for the other 97% of us.
  629. Wouldn't it be great if somebody could
    write an app that would identify all the
  630. psychopaths among us and let the rest of
    us just kill them?
  631. laughing, applause
    Yeah, we have all the
  632. tools to make a utopia, we have it now
    today. A bleak miserable grim meathook
  633. future is not inevitable, but it's up to
    us to use these tools to prevent the bad
  634. stuff happening and to do that, we have to
    anticipate the bad outcomes and work to
  635. try and figure out a way to deal with
    them. That's what this talk is. I'm trying
  636. to do a bit of a wake-up call and get
    people thinking about how much worse
  637. things can get and what we need to do to
    prevent it from happening. What I was
  638. saying earlier about our regulatory
    systems being broken, stands. How do we
  639. regulate the deep learning technologies?
    This is something we need to think about.
  640. H: Okay mic number two.
    Mic 2: Hello? ... When you talk about
  641. corporations as AIs, where do you see that
    analogy you're making? Do you see them as
  642. literally AIs or figuratively?
    CS: Almost literally. If
  643. you're familiar with philosopher
    <Ronaldson>(?) Searle's Chinese room paradox
  644. from the 1970s, by which he attempted to
    prove that artificial intelligence was
  645. impossible, a corporation is very much the
    Chinese room implementation of an AI. It
  646. is a bunch of human beings in a box. You
    put inputs into the box, you get outputs
  647. out of the box. Does it matter whether it's
    all happening in software or whether
  648. there's a human being following rules
    in between to assemble the output? I don't
  649. see there being much of a difference.
    Now you have to look at a company at a
  650. very abstract level to view it as an AI,
    but more and more companies are automating
  651. their internal business processes. You've
    got to view this as an ongoing trend. And
  652. yeah, they have many of the characteristics
    of an AI.
  653. Herald: Okay mic number four.
    Mic 4: Hi, thanks for your talk.
  654. You probably heard of the Time Well
    Spent and Design Ethics movements that
  655. are alerting developers to dark patterns
    in UI design, where
  656. these people design apps to manipulate
    people. I'm curious if you find any
  657. optimism in the possibility of amplifying
    or promoting those movements.
  658. CS: Uhm, you know, I knew about dark
    patterns, I knew about people trying to
  659. optimize them, I wasn't actually aware
    there were movements against this. Okay I'm
  660. 53 years old, I'm out of touch. I haven't
    actually done any serious programming in
  661. 15 years. I'm so rusty, my rust has rust on
    it. But, you know, it is a worrying trend
  662. and actual activism is a good start.
    Raising awareness of hazards and of what
  663. we should be doing about them, is a good
    start. And I would classify this actually
  664. as a moral issue. We need to..
    corporations evaluate everything in terms
  665. of revenue, because it's the
    equivalent of breathing; they have to
  666. breathe. Corporations don't usually have
    any moral framework. We're humans, we need
  667. a moral framework to operate within. Even
    if it's as simple as first "Do no harm!"
  668. or "Do not do unto others that which would
    be repugnant if it was done unto you!",
  669. the Golden Rule. So, yeah, we should be
    trying to spread awareness of this
  670. and working with program developers to
    remind them that they are human
  671. beings and have to be humane in their
    application of technology; that is a necessary
  672. start.
    applause
  673. H: Thank you! Mic 3?
    Mic 3: Hi! Yeah, I think that folks,
  674. especially in this sort of crowd, tend to
    jump to the "just get off of
  675. Facebook"-solution first, for a lot of
    these things that are really, really
  676. scary. But what worries me is how we sort
    of silence ourselves when we do that.
  677. After the election I actually got back on
    Facebook, because the Women's March was
  678. mostly organized through Facebook. But
    yeah, I think we need a lot more
  679. regulation, but we can't just throw it
    out. We're.. because it's..
  680. social media is the only... really good
    platform we have right now
  681. to express ourselves, to
    have our rules, or power.
  682. CS: Absolutely. I have made
    a point of not really using Facebook
  683. for many, many, many years.
    I have a Facebook page simply to
  684. shut up the young marketing people at my
    publisher, who used to pop up every two
  685. years and say: "Why don't you have a
    Facebook. Everybody's got a Facebook."
  686. No, I've had a blog since 1993!
    laughing
  687. But no, I'm gonna have to use Facebook,
    because these days, not using Facebook is
  688. like not using email. You're cutting off
    your nose to spite your face. What we
  689. really do need to be doing, is looking for
    some form of effective oversight of
  690. Facebook and particularly, of how they..
    the algorithms that show you content, are
  691. written. What I was saying earlier about
    how algorithms are not as transparent as
  692. human beings to people, applies hugely to
    them. And both Facebook and Twitter
  693. control the information
    that they display to you.
  694. Herald: Okay, I'm terribly sorry for all the
    people queuing at the mics now, we're out
  695. of time. I also have to apologize, I
    announced that this talk was being held in
  696. English, but it was being held in English.
    the latter pronounced with a hard G
  697. Thank you very much, Charles Stross!
  698. CS: Thank you very much for
    listening to me, it's been a pleasure!
  699. applause
  700. postroll music
  701. subtitles created by c3subtitles.de
    in the year 2018