
34C3 - Dude, you broke the Future!

  • 0:00 - 0:19
    34C3 preroll music
  • 0:19 - 0:25
    Herald: Humans of Congress, it is my
    pleasure to announce the next speaker.
  • 0:25 - 0:32
    I was supposed to pick out a few awards or
    something, to actually present what he's
  • 0:32 - 0:36
    done in his life, but I can
    only say: he's one of us!
  • 0:36 - 0:40
    applause
  • 0:40 - 0:43
    Charles Stross!
    ongoing applause
  • 0:43 - 0:46
    Charles Stross: Hi! Is this on?
    Good. Great.
  • 0:46 - 0:51
    I'm really pleased to be here and I
    want to start by apologizing for my total
  • 0:51 - 0:58
    lack of German. So this talk is gonna be in
    English. Good morning. I'm Charlie Stross
  • 0:58 - 1:04
    and it's my job to tell lies for money, or
    rather, I write science fiction, much of
  • 1:04 - 1:08
    it about the near future, which in recent
    years has become ridiculously hard to
  • 1:08 - 1:16
    predict. In this talk I'm going to talk
    about why. Now our species, Homo sapiens
  • 1:16 - 1:22
    sapiens, is about 300,000 years old. It
    used to be about 200,000 years old,
  • 1:22 - 1:26
    but it grew an extra 100,000
    years in the past year because of new
  • 1:26 - 1:31
    archaeological discoveries, I mean, go
    figure. For all but the last three
  • 1:31 - 1:37
    centuries or so of that span, however,
    predicting the future was really easy. If
  • 1:37 - 1:41
    you were an average person - as opposed to
    maybe a king or a pope - natural disasters
  • 1:41 - 1:47
    aside, everyday life 50 years in the
    future would resemble everyday life 50
  • 1:47 - 1:56
    years in your past. Let that sink in for a
    bit. For 99.9% of human existence on this
  • 1:56 - 2:02
    earth, the future was static. Then
    something changed and the future began to
  • 2:02 - 2:07
    shift increasingly rapidly, until, in the
    present day, things are moving so fast,
  • 2:07 - 2:13
    it's barely possible to anticipate trends
    from one month to the next. Now, as the
  • 2:13 - 2:18
    eminent computer scientist Edsger Dijkstra
    once remarked, computer science is no more
  • 2:18 - 2:24
    about computers than astronomy is about
    building big telescopes. The same can be
  • 2:24 - 2:29
    said of my field of work, writing science
    fiction: sci-fi is rarely about science
  • 2:29 - 2:34
    and even more rarely about predicting the
    future. But sometimes we dabble in
  • 2:34 - 2:42
    Futurism and lately, Futurism has gotten
    really, really, weird. Now when I write a
  • 2:42 - 2:47
    near future work of fiction, one set, say, a
    decade hence, there used to be a recipe I
  • 2:47 - 2:54
    could follow that worked eerily well. Simply put:
    90% of the next decade's stuff is
  • 2:54 - 2:57
    already here around us today.
    Buildings are designed to
  • 2:57 - 3:03
    last many years, automobiles have a design
    life of about a decade, so half the cars on
  • 3:03 - 3:10
    the road in 2027 are already there now -
    they're new. People? There'll be some new
  • 3:10 - 3:16
    faces, aged 10 and under, and some older
    people will have died, but most of us
  • 3:16 - 3:23
    adults will still be around, albeit older
    and grayer, this is the 90% of a near
  • 3:23 - 3:31
    future that's already here today. After
    the already existing 90%, another 9% of a
  • 3:31 - 3:36
    near future a decade hence used to be
    easily predictable: you look at trends
  • 3:36 - 3:40
    dictated by physical limits, such as
    Moore's law and you look at Intel's road
  • 3:40 - 3:44
    map and you use a bit of creative
    extrapolation and you won't go too far
  • 3:44 - 3:53
    wrong. If I predict - wearing my futurology
    hat - that in 2027 LTE cellular phones will
  • 3:53 - 3:58
    be ubiquitous, 5G will be available for
    high bandwidth applications and there will be
  • 3:58 - 4:02
    fallback to some kind of satellite data
    service at a price, you probably won't
  • 4:02 - 4:04
    laugh at me.
    I mean, it's not like I'm predicting that
  • 4:04 - 4:08
    airlines will fly slower and Nazis will
    take over the United States, is it?
  • 4:08 - 4:10
    laughing
  • 4:10 - 4:15
    And therein lies the problem. It's the
    remaining 1%, what Donald Rumsfeld
  • 4:15 - 4:21
    called the "unknown unknowns", what throws off
    all predictions. As it happens, airliners
  • 4:21 - 4:26
    today are slower than they were in the
    1970s and don't get me started about the Nazis,
  • 4:26 - 4:32
    I mean, nobody in 2007 was expecting a Nazi
    revival in 2017, were they?
  • 4:32 - 4:37
    Only this time, Germans get to be the good guys.
    laughing, applause
  • 4:37 - 4:42
    So. My recipe for fiction set 10 years
    in the future used to be:
  • 4:42 - 4:47
    "90% is already here,
    9% is not here yet but predictable
  • 4:47 - 4:54
    and 1% is 'who ordered that?'" But unfortunately
    the ratios have changed, I think we're now
  • 4:54 - 5:00
    down to maybe 80% already here - climate
    change takes a huge toll on architecture -
  • 5:00 - 5:06
    then 15% not here yet, but predictable and
    a whopping 5% of utterly unpredictable
  • 5:06 - 5:13
    deep craziness. Now... before I carry on
    with this talk, I want to spend a minute or
  • 5:13 - 5:19
    two ranting loudly and ruling out the
    singularity. Some of you might assume, that
  • 5:19 - 5:24
    as the author of books like "Singularity
    Sky" and "Accelerando",
  • 5:24 - 5:28
    I expect an impending technological
    singularity,
  • 5:28 - 5:32
    that we will develop self-improving
    artificial intelligence and mind uploading
  • 5:32 - 5:35
    and the whole wish list of transhumanist
    aspirations promoted by the likes of
  • 5:35 - 5:42
    Ray Kurzweil, will come to pass. Unfortunately
    this isn't the case. I think transhumanism
  • 5:42 - 5:49
    is a warmed-over Christian heresy. While
    its adherents tend to be outspoken atheists,
  • 5:49 - 5:52
    they can't quite escape from the
    history that gave rise to our current
  • 5:52 - 5:57
    Western civilization. Many of you are
    familiar with design patterns, an approach
  • 5:57 - 6:02
    to software engineering that focuses on
    abstraction and simplification, in order
  • 6:02 - 6:08
    to promote reusable code. When you look at
    the AI singularity as a narrative and
  • 6:08 - 6:12
    identify the numerous places in their
    story where the phrase "and then a miracle
  • 6:12 - 6:19
    happens" occur, it becomes apparent pretty
    quickly, that they've reinvented Christiantiy.
  • 6:19 - 6:19
    applause
  • 6:19 - 6:25
    Indeed, the wellspring of
    today's transhumanists draws on a long, rich
  • 6:25 - 6:30
    history of Russian philosophy, exemplified
    by the Russian Orthodox theologian Nikolai
  • 6:30 - 6:37
    Fyodorovich Fedorov by way of his disciple
    Konstantin Tsiolkovsky, whose derivation
  • 6:37 - 6:41
    of a rocket equation makes him
    essentially the father of modern space
  • 6:41 - 6:45
    flight. Once you start probing the nether
    regions of transhumanist thought and run
  • 6:45 - 6:50
    into concepts like Roko's Basilisk - by the
    way, any of you who didn't know about the
  • 6:50 - 6:54
    Basilisk before, are now doomed to an
    eternity in AI hell, terribly sorry - you
  • 6:54 - 6:58
    realize, they've mangled it to match some
    of the nastier aspects of Presbyterian
  • 6:58 - 7:03
    Protestantism. Now they basically invented
    original sin and Satan in the guise of an
  • 7:03 - 7:09
    AI that doesn't exist yet. It's... kind of
    peculiar. Anyway, my take on the
  • 7:09 - 7:13
    singularity is: if something walks
    like a duck and quacks like a duck, it's
  • 7:13 - 7:18
    probably a duck. And if it looks like a
    religion, it's probably a religion.
  • 7:18 - 7:23
    I don't see much evidence for human-like,
    self-directed artificial intelligences
  • 7:23 - 7:28
    coming along any time soon, and a fair bit
    of evidence that nobody except a few freaks
  • 7:28 - 7:32
    in cognitive science departments even
    wants it. I mean, if we invented an AI
  • 7:32 - 7:36
    that was like a human mind, it would do the
    AI equivalent of sitting on the sofa,
  • 7:36 - 7:39
    munching popcorn and
    watching the Super Bowl all day.
  • 7:39 - 7:43
    It wouldn't be much use to us.
    laughter, applause
  • 7:43 - 7:47
    What we're getting instead,
    is self-optimizing tools that defy
  • 7:47 - 7:51
    human comprehension, but are not
    in fact any more like our kind
  • 7:51 - 7:58
    of intelligence than a Boeing 737 is like
    a seagull. Boeing 737s and seagulls both
  • 7:58 - 8:04
    fly, Boeing 737s don't lay eggs and shit
    everywhere. So I'm going to wash my hands
  • 8:04 - 8:10
    of a singularity as a useful explanatory
    model of the future without further ado.
  • 8:10 - 8:15
    I'm one of those vehement atheists as well
    and I'm gonna try and offer you a better
  • 8:15 - 8:21
    model for what's happening to us. Now, as
    my fellow Scottish science fictional author
  • 8:21 - 8:27
    Ken MacLeod likes to say "the secret
    weapon of science fiction is history".
  • 8:27 - 8:31
    History is, loosely speaking, the written
    record of what and how people did things
  • 8:31 - 8:37
    in past times. Times that have slipped out
    of our personal memories. We science
  • 8:37 - 8:41
    fiction writers tend to treat history as a
    giant toy chest to raid, whenever we feel
  • 8:41 - 8:45
    like telling a story. With a little bit of
    history, it's really easy to whip up an
  • 8:45 - 8:49
    entertaining yarn about a galactic empire
    that mirrors the development and decline
  • 8:49 - 8:54
    of the Habsburg Empire or to respin the
    October Revolution as a tale of how Mars
  • 8:54 - 9:00
    got its independence. But history is
    useful for so much more than that.
  • 9:00 - 9:05
    It turns out, that our personal memories
    don't span very much time at all. I'm 53
  • 9:05 - 9:11
    and I barely remember the 1960s. I only
    remember the 1970s with the eyes of a 6 to
  • 9:11 - 9:18
    16 year old. My father died this year,
    aged 93, and he'd just about remembered the
  • 9:18 - 9:23
    1930s. Only those of my father's
    generation directly remember the Great
  • 9:23 - 9:29
    Depression and can compare it to the
    2007/08 global financial crisis directly.
  • 9:29 - 9:34
    We Westerners tend to pay little attention
    to cautionary tales told by 90-somethings.
  • 9:34 - 9:39
    We're modern, we're change obsessed and we
    tend to repeat our biggest social mistakes
  • 9:39 - 9:43
    just as they slip out of living memory,
    which means they recur on a timescale of
  • 9:43 - 9:48
    70 to 100 years.
    So if our personal memories are useless,
  • 9:48 - 9:52
    we need a better toolkit
    and history provides that toolkit.
  • 9:52 - 9:57
    History gives us the perspective to see what
    went wrong in the past and to look for
  • 9:57 - 10:02
    patterns and check to see whether those
    patterns are recurring in the present.
  • 10:02 - 10:07
    Looking in particular at the history of the past two
    to four hundred years, that age of rapidly
  • 10:07 - 10:12
    increasing change that I mentioned at the
    beginning. One glaringly obvious deviation
  • 10:12 - 10:17
    from the norm of the preceding
    3,000 centuries stands out, and that's
  • 10:17 - 10:22
    the development of artificial intelligence,
    which happened no earlier than 1553 and no
  • 10:22 - 10:29
    later than 1844. I'm talking of course
    about the very old, very slow AI's we call
  • 10:29 - 10:34
    corporations. What lessons from the history
    of the company can we draw that tell us
  • 10:34 - 10:38
    about the likely behavior of the type of
    artificial intelligence we're interested
  • 10:38 - 10:47
    in here, today?
    Well. Need a mouthful of water.
  • 10:47 - 10:52
    Let me crib from Wikipedia for a moment.
  • 10:52 - 10:56
    Wikipedia: "In the late 18th
    century, Stewart Kyd, the author of the
  • 10:56 - 11:03
    first treatise on corporate law in English,
    defined a corporation as: 'a collection of
  • 11:03 - 11:08
    many individuals united into one body,
    under a special denomination, having
  • 11:08 - 11:14
    perpetual succession under an artificial
    form, and vested, by policy of the law, with
  • 11:14 - 11:20
    the capacity of acting, in several respects,
    as an individual, enjoying privileges and
  • 11:20 - 11:24
    immunities in common, and of exercising a
    variety of political rights, more or less
  • 11:24 - 11:29
    extensive, according to the design of its
    institution, or the powers conferred upon
  • 11:29 - 11:32
    it, either at the time of its creation, or
    at any subsequent period of its
  • 11:32 - 11:37
    existence.'"
    This was a late 18th century definition -
  • 11:37 - 11:43
    sound like a piece of software to you?
    In 1844, the British government passed the
  • 11:43 - 11:46
    "Joint Stock Companies Act" which created
    a register of companies and allowed any
  • 11:46 - 11:51
    legal person, for a fee, to register a
    company which in turn existed as a
  • 11:51 - 11:56
    separate legal person. Prior to that point,
    it required a Royal Charter or an act of
  • 11:56 - 12:00
    Parliament to create a company.
    Subsequently, the law was extended to limit
  • 12:00 - 12:05
    the liability of individual shareholders
    in event of business failure and then both
  • 12:05 - 12:09
    Germany and the United States added their
    own unique twists to what today we see as
  • 12:09 - 12:14
    the doctrine of corporate personhood.
    Now, plenty of other things that
  • 12:14 - 12:19
    happened between the 16th and 21st centuries
    did change the shape of the world we live in.
  • 12:19 - 12:22
    I've skipped the changes in
    agricultural productivity that happened
  • 12:22 - 12:26
    due to energy economics,
    which finally broke the Malthusian trap
  • 12:26 - 12:29
    our predecessors lived in.
    This in turn broke the long-term
  • 12:29 - 12:33
    cap on economic growth of about
    0.1% per year
  • 12:33 - 12:36
    in the absence of famines, plagues and
    wars and so on.
  • 12:36 - 12:39
    I've skipped the germ theory of diseases
    and the development of trade empires
  • 12:39 - 12:43
    in the age of sail and gunpowder,
    that were made possible by advances
  • 12:43 - 12:45
    in accurate time measurement.
  • 12:45 - 12:49
    I've skipped the rise, and
    hopefully decline, of the pernicious
  • 12:49 - 12:52
    theory of scientific racism that
    underpinned Western colonialism and the
  • 12:52 - 12:57
    slave trade. I've skipped the rise of
    feminism, the ideological position that
  • 12:57 - 13:02
    women are human beings rather than
    property and the decline of patriarchy.
  • 13:02 - 13:06
    I've skipped the whole of the
    Enlightenment and the Age of Revolutions,
  • 13:06 - 13:09
    but this is a technocentric
    Congress, so I want to frame this talk in
  • 13:09 - 13:15
    terms of AI, which we all like to think we
    understand. Here's the thing about these
  • 13:15 - 13:21
    artificial persons we call corporations.
    Legally, they're people. They have goals,
  • 13:21 - 13:26
    they operate in pursuit of these goals,
    they have a natural life cycle.
  • 13:26 - 13:33
    In the 1950s, a typical U.S. corporation on the
    S&P 500 Index had a life span of 60 years.
  • 13:33 - 13:38
    Today it's down to less than 20 years.
    This is largely due to predation.
  • 13:38 - 13:42
    Corporations are cannibals, they eat
    one another.
  • 13:42 - 13:46
    They're also hive super organisms
    like bees or ants.
  • 13:46 - 13:49
    For the first century and a
    half, they relied entirely on human
  • 13:49 - 13:52
    employees for their internal operation,
    but today they're automating their
  • 13:52 - 13:57
    business processes very rapidly. Each
    human is only retained so long as they can
  • 13:57 - 14:01
    perform their assigned tasks more
    efficiently than a piece of software
  • 14:01 - 14:05
    and they can all be replaced by another
    human, much as the cells in our own bodies
  • 14:05 - 14:10
    are functionally interchangeable and a
    group of cells can - in extremis - often be
  • 14:10 - 14:15
    replaced by a prosthetic device.
    To some extent, corporations can be
  • 14:15 - 14:19
    trained to serve the personal desires of
    their chief executives, but even CEOs can
  • 14:19 - 14:23
    be dispensed with, if their activities
    damage the corporation, as Harvey
  • 14:23 - 14:26
    Weinstein found out a couple of months
    ago.
  • 14:26 - 14:31
    Finally, our legal environment today has
    been tailored for the convenience of
  • 14:31 - 14:35
    corporate persons, rather than human
    persons, to the point where our governments
  • 14:35 - 14:40
    now mimic corporations in many of their
    internal structures.
  • 14:40 - 14:44
    So, to understand where we're going, we
    need to start by asking "What do our
  • 14:44 - 14:52
    current actually existing AI overlords
    want?"
  • 14:52 - 14:56
    Now, Elon Musk, who I believe you've
    all heard of, has an obsessive fear of one
  • 14:56 - 15:00
    particular hazard of artificial
    intelligence, which he conceives of as
  • 15:00 - 15:04
    being a piece of software that functions
    like a brain in a box, namely the
  • 15:04 - 15:10
    Paperclip Optimizer or Maximizer.
    A Paperclip Maximizer is a term of art for
  • 15:10 - 15:15
    a goal seeking AI that has a single
    priority, e.g., maximizing the
  • 15:15 - 15:20
    number of paperclips in the universe. The
    Paperclip Maximizer is able to improve
  • 15:20 - 15:24
    itself in pursuit of its goal, but has no
    ability to vary its goal, so will
  • 15:24 - 15:28
    ultimately attempt to convert all the
    metallic elements in the solar system into
  • 15:28 - 15:32
    paperclips, even if this is obviously
    detrimental to the well-being of the
  • 15:32 - 15:36
    humans who set it this goal.
    Unfortunately I don't think Musk
  • 15:36 - 15:41
    is paying enough attention,
    consider his own companies.
  • 15:41 - 15:45
    Tesla isn't a Paperclip Maximizer, it's a
    battery Maximizer.
  • 15:45 - 15:48
    After all, an
    electric car is a battery with wheels and
  • 15:48 - 15:54
    seats. SpaceX is an orbital payload
    Maximizer, driving down the cost of space
  • 15:54 - 15:59
    launches in order to encourage more sales
    for the service it provides. SolarCity is
  • 15:59 - 16:06
    a photovoltaic panel maximizer and so on.
    All three of Musk's very own slow AIs
  • 16:06 - 16:09
    are based on an architecture, designed to
    maximize return on shareholder
  • 16:09 - 16:13
    investment, even if by doing so they cook
    the planet the shareholders have to live
  • 16:13 - 16:16
    on or turn the entire thing into solar
    panels.
  • 16:16 - 16:19
    But hey, if you're Elon Musk, that's okay,
    you're gonna retire on Mars anyway.
  • 16:19 - 16:21
    laughing
  • 16:21 - 16:25
    By the way, I'm ragging on Musk in this
    talk, simply because he's the current
  • 16:25 - 16:29
    opinionated tech billionaire who thinks
    that disrupting a couple of industries
  • 16:29 - 16:34
    entitles him to make headlines.
    If this were 2007 and my focus slightly
  • 16:34 - 16:39
    different, I'd be ragging on
    Steve Jobs, and if it were 1997 my target
  • 16:39 - 16:42
    would be Bill Gates.
    Don't take it personally, Elon.
  • 16:42 - 16:44
    laughing
  • 16:44 - 16:49
    Back to topic. The problem with
    corporations is that, despite their overt
  • 16:49 - 16:54
    goals, whether they make electric vehicles
    or beer or sell life insurance policies,
  • 16:54 - 17:00
    they all have a common implicit Paperclip
    Maximizer goal: to generate revenue. If
  • 17:00 - 17:04
    they don't make money, they're eaten by a
    bigger predator or they go bust. It's as
  • 17:04 - 17:08
    vital to them as breathing is to us
    mammals. They generally pursue their
  • 17:08 - 17:12
    implicit goal - maximizing revenue - by
    pursuing their overt goal.
  • 17:12 - 17:17
    But sometimes they try instead to
    manipulate their environment, to ensure
  • 17:17 - 17:23
    that money flows to them regardless.
    Human toolmaking culture has become very
  • 17:23 - 17:28
    complicated over time. New technologies
    always come with an attached implicit
  • 17:28 - 17:33
    political agenda that seeks to extend the
    use of the technology. Governments react
  • 17:33 - 17:37
    to this by legislating to control new
    technologies and sometimes we end up with
  • 17:37 - 17:42
    industries actually indulging in legal
    duels through the regulatory mechanism of
  • 17:42 - 17:50
    law to determine, who prevails. For
    example, consider the automobile. You
  • 17:50 - 17:54
    can't have mass automobile transport
    without gas stations and fuel distribution
  • 17:54 - 17:57
    pipelines.
    These in turn require access to whoever
  • 17:57 - 18:01
    owns the land the oil is extracted from
    under and before you know it, you end up
  • 18:01 - 18:07
    with a permanent army in Iraq and a client
    dictatorship in Saudi Arabia. Closer to
  • 18:07 - 18:12
    home, automobiles imply jaywalking laws and
    drink-driving laws. They affect Town
  • 18:12 - 18:17
    Planning regulations and encourage
    suburban sprawl, the construction of human
  • 18:17 - 18:21
    infrastructure on a scale required by
    automobiles, not pedestrians.
  • 18:21 - 18:25
    This in turn is bad for competing
    transport technologies, like buses or
  • 18:25 - 18:32
    trams, which work best in cities with a
    high population density. So to get laws
  • 18:32 - 18:35
    that favour the automobile in place,
    providing an environment conducive to
  • 18:35 - 18:40
    doing business, automobile companies spend
    money on political lobbyists and when they
  • 18:40 - 18:47
    can get away with it, on bribes. Bribery
    needn't be blatant of course. E.g.,
  • 18:47 - 18:52
    the reforms of the British railway network
    in the 1960s dismembered many branch lines
  • 18:52 - 18:56
    and coincided with a surge in road
    building and automobile sales. These
  • 18:56 - 19:01
    reforms were orchestrated by Transport
    Minister Ernest Marples, who was purely a
  • 19:01 - 19:06
    politician. The fact that he accumulated a
    considerable personal fortune during this
  • 19:06 - 19:10
    period by buying shares in motorway
    construction corporations has nothing to
  • 19:10 - 19:18
    do with it. So, no conflict of interest
    there. Now, the automobile industry
  • 19:18 - 19:23
    in isolation can't be considered a pure
    Paperclip Maximizer.
  • 19:23 - 19:28
    You have to
  • 19:28 - 19:32
    look at it in conjunction with the fossil
    fuel industries, the road construction
  • 19:32 - 19:38
    business, the accident insurance sector
    and so on. When you do this, you begin to
  • 19:38 - 19:43
    see the outline of a paperclip-maximizing
    ecosystem that invades far-flung lands and
  • 19:43 - 19:47
    grinds up and kills around one and a
    quarter million people per year. That's
  • 19:47 - 19:51
    the global death toll from automobile
    accidents currently, according to the World
  • 19:51 - 19:56
    Health Organization. It rivals the First
    World War on an ongoing permanent basis
  • 19:56 - 20:02
    and these are all side effects of its
    drive to sell you a new car. Now,
  • 20:02 - 20:07
    automobiles aren't of course a total
    liability. Today's cars are regulated
  • 20:07 - 20:11
    stringently for safety and, in theory, to
    reduce toxic emissions. They're fast,
  • 20:11 - 20:17
    efficient and comfortable. We can thank
    legally mandated regulations imposed by
  • 20:17 - 20:22
    governments for this, of course. Go back
    to the 1970s and cars didn't have crumple
  • 20:22 - 20:27
    zones, go back to the 50s and they didn't
    come with seat belts as standard. In the
  • 20:27 - 20:34
    1930s, indicators (turn signals) and brakes
    on all four wheels were optional and your
  • 20:34 - 20:39
    best hope of surviving a 50 km/h-crash was
    to be thrown out of a car and land somewhere
  • 20:39 - 20:43
    without breaking your neck.
    Regulatory agencies are our current
  • 20:43 - 20:47
    political system's tool of choice for
    preventing Paperclip Maximizers from
  • 20:47 - 20:55
    running amok. Unfortunately, regulators
    don't always work. The first failure mode
  • 20:55 - 21:00
    of regulators that you need to be aware of
    is regulatory capture, where regulatory
  • 21:00 - 21:05
    bodies are captured by the industries they
    control. Ajit Pai, head of the American Federal
  • 21:05 - 21:09
    Communications Commission, which just voted
    to eliminate net neutrality rules in the
  • 21:09 - 21:14
    U.S., has worked as Associate
    General Counsel for Verizon Communications
  • 21:14 - 21:19
    Inc, the largest current descendant of the
    Bell Telephone system's monopoly. After
  • 21:19 - 21:25
    the AT&T antitrust lawsuit, the Bell
    network was broken up into the seven baby
  • 21:25 - 21:32
    bells. They've now pretty much reformed
    and reaggregated and Verizon is the largest current one.
  • 21:32 - 21:36
    Why should someone with a transparent
    interest in a technology corporation end
  • 21:36 - 21:41
    up running a regulator that tries to
    control the industry in question? Well, if
  • 21:41 - 21:45
    you're going to regulate a complex
    technology, you need to recruit regulators
  • 21:45 - 21:49
    from people who understand it.
    Unfortunately, most of those people are
  • 21:49 - 21:54
    industry insiders. Ajit Pai is clearly
    very much aware of how Verizon is
  • 21:54 - 21:58
    regulated, very insightful into its
    operations and wants to do something about
  • 21:58 - 22:03
    it - just not necessarily in the public
    interest.
  • 22:03 - 22:11
    applause
    When regulators end up staffed by people
  • 22:11 - 22:15
    drawn from the industries they're supposed
    to control, they frequently end up working
  • 22:15 - 22:20
    with their former office mates, to make it
    easier to turn a profit, either by raising
  • 22:20 - 22:24
    barriers to keep new insurgent companies
    out or by dismantling safeguards that
  • 22:24 - 22:32
    protect the public. Now a second problem
    is regulatory lag where a technology
  • 22:32 - 22:35
    advances so rapidly, that regulations are
    laughably obsolete by the time they're
  • 22:35 - 22:40
    issued. Consider the EU directive
    requiring cookie notices on websites to
  • 22:40 - 22:46
    caution users, that their activities are
    tracked and their privacy may be violated.
  • 22:46 - 22:51
    This would have been a good idea in 1993
    or 1996, but unfortunately it didn't show up
  • 22:51 - 22:58
    until 2011. Fingerprinting and tracking
    mechanisms have nothing to do with cookies
  • 22:58 - 23:04
    and were already widespread by then. Tim
    Berners-Lee observed in 1995, that five
  • 23:04 - 23:08
    years worth of change was happening on the
    web for every 12 months of real-world
  • 23:08 - 23:12
    time. By that yardstick, the cookie law
    came out nearly a century too late to do
  • 23:12 - 23:19
    any good. Again, look at Uber. This month,
    the European Court of Justice ruled that
  • 23:19 - 23:25
    Uber is a taxi service, not a Web App. This
    is arguably correct - the problem is, Uber
  • 23:25 - 23:29
    has spread globally since it was founded
    eight years ago, subsidizing its drivers to
  • 23:29 - 23:34
    put competing private hire firms out of
    business. Whether this is a net good for
  • 23:34 - 23:39
    society is debatable. The problem is, a
    taxi driver can get awfully hungry if she
  • 23:39 - 23:42
    has to wait eight years for a court ruling
    against a predator intent on disrupting
  • 23:42 - 23:50
    her business. So, to recap: firstly, we
    already have Paperclip Maximizers and
  • 23:50 - 23:55
    Musk's AI alarmism is curiously mirror
    blind. Secondly, we have mechanisms for
  • 23:55 - 24:00
    keeping Paperclip Maximizers in check, but
    they don't work very well against AIs that
  • 24:00 - 24:03
    deploy the dark arts, especially
    corruption and bribery and they're even
  • 24:03 - 24:08
    worse against true AIs that evolve too
    fast for human-mediated mechanisms like
  • 24:08 - 24:14
    the law to keep up with. Finally, unlike
    the naive vision of a Paperclip Maximizer
  • 24:14 - 24:19
    that maximizes only paperclips, existing
    AIs have multiple agendas, their overt
  • 24:19 - 24:24
    goal, but also profit seeking, expansion
    into new markets and to accommodate the
  • 24:24 - 24:28
    desire of whoever is currently in the
    driving seat.
  • 24:28 - 24:30
    sighs
  • 24:30 - 24:36
    Now, this brings me to the next major
    heading in this dismaying laundry list:
  • 24:36 - 24:43
    how it all went wrong. It seems to me that
    our current political upheavals are best
  • 24:43 - 24:49
    understood as arising from the capture
    of post-1917 democratic institutions by
  • 24:49 - 24:55
    large-scale AIs. Everywhere you look, you
    see voters protesting angrily against an
  • 24:55 - 24:59
    entrenched establishment, that seems
    determined to ignore the wants and needs
  • 24:59 - 25:04
    of their human constituents in favor of
    those of the machines. The Brexit upset
  • 25:04 - 25:07
    was largely the result of a protest vote
    against the British political
  • 25:07 - 25:11
    establishment, the election of Donald
    Trump likewise, with a side order of racism
  • 25:11 - 25:16
    on top. Our major political parties are
    led by people who are compatible with the
  • 25:16 - 25:21
    system as it exists today, a system that
    has been shaped over decades by
  • 25:21 - 25:26
    corporations distorting our government and
    regulatory environments. We humans live in
  • 25:26 - 25:31
    a world shaped by the desires and needs of
    AI, forced to live on their terms and we're
  • 25:31 - 25:34
    taught, that we're valuable only to the
    extent we contribute to the rule of the
  • 25:34 - 25:40
    machines. Now, this is 34C3 and we're
    all more interested in computers and
  • 25:40 - 25:44
    communications technology than this
    historical crap. But as I said earlier,
  • 25:44 - 25:49
    history is a secret weapon, if you know how
    to use it. What history is good for, is
  • 25:49 - 25:53
    enabling us to spot recurring patterns
    that repeat across timescales outside our
  • 25:53 - 25:58
    personal experience. And if we look at our
    historical very slow AIs, what do we learn
  • 25:58 - 26:05
    from them about modern AI and how it's
    going to behave? Well, to start with, the
  • 26:05 - 26:10
    new AIs - the electronic ones instantiated
    in our machines - have been warped
  • 26:10 - 26:15
    by a terrible, fundamentally flawed
    design decision back in 1995, one that has
  • 26:15 - 26:20
    damaged democratic political processes,
    crippled our ability
  • 26:20 - 26:25
    to truly understand the world around us
    and led to the angry upheavals and upsets
  • 26:25 - 26:30
    of our present decade. That mistake was
    the decision to fund the build-out of a
  • 26:30 - 26:34
    public World Wide Web, as opposed to the
    earlier government-funded corporate and
  • 26:34 - 26:38
    academic Internet by
    monetizing eyeballs through advertising
  • 26:38 - 26:45
    revenue. The ad-supported web we're used
    to today wasn't inevitable. If you recall
  • 26:45 - 26:50
    the web as it was in 1994, there were very
    few ads at all and not much in the way of
  • 26:50 - 26:56
    commerce. 1995 was the year the World Wide
    Web really came to public attention in the
  • 26:56 - 27:01
    anglophone world and consumer-facing
    websites began to appear. Nobody really
  • 27:01 - 27:04
    knew, how this thing was going to be paid
    for. The original .com bubble was all
  • 27:04 - 27:08
    about working out, how to monetize the web
    for the first time and a lot of people
  • 27:08 - 27:13
    lost their shirts in the process. A naive
    initial assumption was that the
  • 27:13 - 27:17
    transaction cost of setting up a TCP/IP
    connection over modem was too high to
  • 27:17 - 27:22
    be supported by per-use micro
    billing for web pages. So instead of
  • 27:22 - 27:27
    charging people a fraction of a euro cent
    for every page view, we'd bill customers
  • 27:27 - 27:32
    indirectly, by shoving advertising banners
    in front of their eyes and hoping they'd
  • 27:32 - 27:39
    click through and buy something.
    Unfortunately, advertising is an
  • 27:39 - 27:46
    industry, one of those pre-existing very
    slow AI ecosystems I already alluded to.
  • 27:46 - 27:50
    Advertising tries to maximize its hold on
    the attention of the minds behind each
  • 27:50 - 27:54
    human eyeball. The coupling of advertising
    with web search was an inevitable
  • 27:54 - 27:58
    outgrowth, I mean how better to attract
    the attention of reluctant subjects, than to
  • 27:58 - 28:01
    find out what they're really interested in
    seeing and selling ads that relate to
  • 28:01 - 28:07
    those interests. The problem of applying
    the Paperclip Maximizer approach to
  • 28:07 - 28:13
    monopolizing eyeballs, however, is that
    eyeballs are a limited, scarce resource.
  • 28:13 - 28:18
    There are only 168 hours in every week, in
    which I can gaze at banner ads. Moreover,
  • 28:18 - 28:22
    most ads are irrelevant to my interests and
    it doesn't matter, how often you flash an ad
  • 28:22 - 28:28
    for dog biscuits at me, I'm never going to
    buy any. I have a cat. To make best
  • 28:28 - 28:32
    revenue-generating use of our eyeballs,
    it's necessary for the ad industry to
  • 28:32 - 28:37
    learn, who we are and what interests us and
    to target us increasingly minutely in hope
  • 28:37 - 28:40
    of hooking us with stuff we're attracted
    to.
  • 28:40 - 28:44
    In other words: the ad industry is a
    paperclip maximizer, but for its success,
  • 28:44 - 28:50
    it relies on developing a theory of mind
    that applies to human beings.
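
    To make "learning who we are and what interests us" concrete, here is a
    minimal, purely illustrative Python sketch of ad targeting as interest
    matching; the profile, weights and ad names are all invented:

      # Hypothetical sketch: ad selection as interest matching.
      def score(ad_tags, interests):
          # Relevance = sum of the user's weights for the tags the ad carries.
          return sum(interests.get(tag, 0.0) for tag in ad_tags)

      interests = {"cats": 0.9, "sci-fi": 0.7, "dog_biscuits": 0.0}
      ads = {
          "cat_food":     {"cats"},
          "dog_biscuits": {"dog_biscuits"},
          "space_opera":  {"sci-fi"},
      }
      # Show whichever ad scores highest for this particular pair of eyeballs.
      best = max(ads, key=lambda name: score(ads[name], interests))
      print(best)  # -> cat_food: the dog-biscuit ad never wins here
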
  • 28:50 - 28:53
    sighs
  • 28:53 - 28:56
    Do I need to divert on to the impassioned
    rant about the hideous corruption
  • 28:56 - 29:00
    and evil that is Facebook?
    Audience: Yes!
  • 29:00 - 29:03
    CS: Okay, somebody said yes.
    I'm guessing you've heard it all before,
  • 29:03 - 29:07
    but the too-long-didn't-read summary is:
    Facebook is as much a search engine as
  • 29:07 - 29:12
    Google or Amazon. Facebook searches are
    optimized for faces, that is for human
  • 29:12 - 29:16
    beings. If you want to find someone you
    fell out of touch with thirty years ago,
  • 29:16 - 29:21
    Facebook probably knows where they live,
    what their favorite color is, what sized
  • 29:21 - 29:24
    shoes they wear and what they said about
    you to your friends behind your back all
  • 29:24 - 29:30
    those years ago, that made you cut them off.
    Even if you don't have a Facebook account,
  • 29:30 - 29:34
    Facebook has a You account, a hole in their
    social graph with a bunch of connections
  • 29:34 - 29:39
    pointing into it and your name tagged in
    your friends' photographs. They know a lot
  • 29:39 - 29:43
    about you and they sell access to their
    social graph to advertisers, who then
  • 29:43 - 29:47
    target you, even if you don't think you use
    Facebook. Indeed, there is barely any
  • 29:47 - 29:52
    point in not using Facebook these days, if
    ever. Social media Borg: "Resistance is
  • 29:52 - 30:01
    futile!" So however, Facebook is trying to
    get eyeballs on ads, so is Twitter and so
  • 30:01 - 30:06
    is Google. To do this, they fine-tune the
    content they show you to make it more
  • 30:06 - 30:12
    attractive to your eyes and by attractive
    I do not mean pleasant. We humans have an
  • 30:12 - 30:15
    evolved automatic reflex to pay attention
    to threats and horrors as well as
  • 30:15 - 30:20
    pleasurable stimuli and the algorithms,
    that determine what they show us when we
  • 30:20 - 30:24
    look at Facebook or Twitter, take this bias
    into account. You might react more
  • 30:24 - 30:28
    strongly to a public hanging in Iran or an
    outrageous statement by Donald Trump than
  • 30:28 - 30:32
    to a couple kissing. The algorithm knows
    and will show you whatever makes you pay
  • 30:32 - 30:38
    attention, not necessarily what you need or
    want to see.
  • 30:38 - 30:43
    So this brings me to another point about
    computerized AI as opposed to corporate
  • 30:43 - 30:47
    AI. AI algorithms tend to embody the
    prejudices and beliefs of either the
  • 30:47 - 30:53
    programmers, or the data set
    the AI was trained on.
  • 30:53 - 30:56
    A couple of years ago I ran across an
    account of a webcam, developed by mostly
  • 30:56 - 31:01
    pale-skinned Silicon Valley engineers, that
    had difficulty focusing or achieving correct
  • 31:01 - 31:04
    color balance, when pointed at dark-skinned
    faces.
  • 31:04 - 31:08
    That's an example of human-programmer-
    induced bias: they didn't have a wide
  • 31:08 - 31:13
    enough test set and didn't recognize that
    they were inherently biased towards
  • 31:13 - 31:19
    expecting people to have pale skin. But
    with today's deep learning, bias can creep
  • 31:19 - 31:24
    in via the datasets the neural networks are
    trained on, even without the programmers
  • 31:24 - 31:29
    intending it. Microsoft's first foray into
    a conversational chat bot driven by
  • 31:29 - 31:33
    machine learning, Tay, was yanked
    offline within days last year, because
  • 31:33 - 31:37
    4chan and reddit based trolls discovered,
    that they could train it towards racism and
  • 31:37 - 31:44
    sexism for shits and giggles. Just imagine
    you're a poor naive innocent AI who's just
  • 31:44 - 31:48
    been switched on and you're hoping to pass
    your Turing test and what happens? 4chan
  • 31:48 - 31:53
    decide to play with your head.
    laughing
  • 31:53 - 31:58
    I got to feel sorry for Tay.
    Now, humans may be biased,
  • 31:58 - 32:01
    but at least individually we're
    accountable and if somebody gives you
  • 32:01 - 32:06
    racist or sexist abuse to your face, you
    can complain or maybe punch them. It's
  • 32:06 - 32:11
    impossible to punch a corporation and it
    may not even be possible to identify the
  • 32:11 - 32:16
    source of unfair bias, when you're dealing
    with a machine learning system. AI based
  • 32:16 - 32:22
    systems that instantiate existing
    prejudices make social change harder.
  • 32:22 - 32:25
    Traditional advertising works by playing
    on the target customer's insecurity and
  • 32:25 - 32:31
    fear as much as their aspirations. And fear
    of a loss of social status and privilege
  • 32:31 - 32:36
    is a powerful stressor. Fear and xenophobia
    are useful tools for attracting advertising..
  • 32:36 - 32:40
    ah, eyeballs.
    What happens when we get pervasive social
  • 32:40 - 32:44
    networks, that have learned biases against
    say Feminism or Islam or melanin? Or deep
  • 32:44 - 32:48
    learning systems, trained on datasets
    contaminated by racist dipshits and their
  • 32:48 - 32:53
    propaganda? Deep learning systems like the
    ones inside Facebook, that determine which
  • 32:53 - 32:58
    stories to show you to get you to pay as
    much attention as possible to the adverts.
  • 32:58 - 33:05
    I think, you probably have an inkling of
    how.. where this is now going. Now, if you
  • 33:05 - 33:09
    think, this is sounding a bit bleak and
    unpleasant, you'd be right. I write sci-fi.
  • 33:09 - 33:13
    You read or watch or play sci-fi. We're
    acculturated to think of science and
  • 33:13 - 33:19
    technology as good things that make life
    better, but this ain't always so. Plenty of
  • 33:19 - 33:23
    technologies have historically been
    heavily regulated or even criminalized for
  • 33:23 - 33:28
    good reason and once you get past any
    reflexive indignation, criticism of
  • 33:28 - 33:33
    technology and progress, you might agree
    with me, that it is reasonable to ban
  • 33:33 - 33:39
    individuals from owning nuclear weapons or
    nerve gas. Less obviously, they may not be
  • 33:39 - 33:43
    weapons, but we've banned
    chlorofluorocarbon refrigerants, because
  • 33:43 - 33:46
    they were building up in the high
    stratosphere and destroying the ozone
  • 33:46 - 33:51
    layer that protects us from UVB radiation.
    We banned tetraethyl lead in
  • 33:51 - 33:58
    gasoline, because it poisoned people and
    led to a crime wave. These are not
  • 33:58 - 34:03
    weaponized technologies, but they have
    horrible side effects. Now, nerve gas and
  • 34:03 - 34:09
    leaded gasoline were 1930s chemical
    technologies, promoted by 1930s
  • 34:09 - 34:15
    corporations. Halogenated refrigerants and
    nuclear weapons are totally 1940s. ICBMs
  • 34:15 - 34:19
    date to the 1950s. You know, I have
    difficulty seeing why people are getting
  • 34:19 - 34:26
    so worked up over North Korea. North Korea
    reaches 1953 level parity - be terrified
  • 34:26 - 34:31
    and hide under the bed!
    I submit that the 21st century is throwing
  • 34:31 - 34:35
    up dangerous new technologies, just as our
    existing strategies for regulating very
  • 34:35 - 34:42
    slow AIs have proven to be inadequate. And
    I don't have an answer to how we regulate
  • 34:42 - 34:46
    new technologies, I just want to flag it up
    as a huge social problem that is going to
  • 34:46 - 34:50
    affect the coming century.
    I'm now going to give you four examples of
  • 34:50 - 34:54
    new types of AI application that are
    going to warp our societies even more
  • 34:54 - 35:01
    badly than the old slow AIs have done.
    This isn't an exhaustive list, this is just
  • 35:01 - 35:05
    some examples I dreamed up, pulled out of
    my ass. We need to work out a general
  • 35:05 - 35:08
    strategy for getting on top of this sort
    of thing before they get on top of us and
  • 35:08 - 35:12
    I think, this is actually a very urgent
    problem. So I'm just going to give you this
  • 35:12 - 35:18
    list of dangerous new technologies that
    are arriving now, or coming, and send you
  • 35:18 - 35:22
    away to think about what to do next. I
    mean, we are activists here, we should be
  • 35:22 - 35:28
    thinking about this and planning what
    to do. Now, the first nasty technology I'd
  • 35:28 - 35:32
    like to talk about, is political hacking
    tools that rely on social graph directed
  • 35:32 - 35:40
    propaganda. This is low-hanging fruit
    after the electoral surprises of 2016.
  • 35:40 - 35:43
    Cambridge Analytica pioneered the use of
    deep learning by scanning the Facebook and
  • 35:43 - 35:48
    Twitter social graphs to identify voters'
    political affiliations by simply looking
  • 35:48 - 35:53
    at what tweets or Facebook comments they
    liked. They were able to use this to identify
  • 35:53 - 35:56
    individuals with a high degree of
    precision, who were vulnerable to
  • 35:56 - 36:01
    persuasion and who lived in electorally
    sensitive districts. They then canvassed
  • 36:01 - 36:07
    them with propaganda that targeted their
    personal hot-button issues to change their
  • 36:07 - 36:12
    electoral intentions. The tools developed
    by web advertisers to sell products have
  • 36:12 - 36:16
    now been weaponized for political purposes
    and the amount of personal information
  • 36:16 - 36:21
    about our affiliations that we expose on
    social media makes us vulnerable. As an aside:
  • 36:21 - 36:25
    the last U.S. Presidential election and, as
    mounting evidence suggests, the British
  • 36:25 - 36:29
    referendum on leaving the EU were subject
    to foreign cyber-war attack via now-
  • 36:29 - 36:33
    weaponized social media, as was the most
    recent French Presidential election.
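
    Mechanically, the micro-targeting described above amounts to scoring each
    voter's likes against a learned model and filtering for persuadable people
    in marginal districts. A deliberately crude, dependency-free Python sketch
    (real systems use deep learning on the full social graph; every name,
    number and threshold here is invented):

      # Per-post "lean" signal a model might have learned from training data.
      SIGNAL = {"post_a": +1.0, "post_b": -1.0, "post_c": +0.5}

      def affiliation_score(likes):
          # Positive = leans one way, negative = the other, near zero = undecided.
          return sum(SIGNAL.get(p, 0.0) for p in likes)

      voters = [
          {"id": 1, "likes": ["post_a", "post_c"], "marginal_district": True},
          {"id": 2, "likes": ["post_b"],           "marginal_district": False},
          {"id": 3, "likes": ["post_a", "post_b"], "marginal_district": True},
      ]

      # Target the undecided voters who happen to live in marginal districts.
      targets = [v["id"] for v in voters
                 if v["marginal_district"] and abs(affiliation_score(v["likes"])) < 0.6]
      print(targets)  # -> [3]
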
  • 36:33 - 36:38
    In fact, if we remember the leak of emails
    from the Macron campaign, it turns out that
  • 36:38 - 36:42
    many of those emails were false, because
    the Macron campaign anticipated that they
  • 36:42 - 36:47
    would be attacked and an email trove would
    be leaked in the last days before the
  • 36:47 - 36:51
    election. So they deliberately set up
    false emails that would be hacked and then
  • 36:51 - 37:01
    leaked and then could be discredited. It
    gets twisty fast. Now I'm kind of biting
  • 37:01 - 37:05
    my tongue and trying not to take sides
    here. I have my own political affiliation
  • 37:05 - 37:10
    after all, and I'm not terribly mainstream.
    But if social media companies don't work
  • 37:10 - 37:14
    out how to identify and flag micro-
    targeted propaganda, then democratic
  • 37:14 - 37:18
    institutions will stop working and elections
    will be replaced by victories for whoever
  • 37:18 - 37:23
    can buy the most trolls. This won't
    simply be billionaires like the Koch
  • 37:23 - 37:26
    brothers and Robert Mercer from the U.S.
    throwing elections to whoever will
  • 37:26 - 37:31
    hand them the biggest tax cuts. Russian
    military cyber war doctrine calls for the
  • 37:31 - 37:36
    use of social media to confuse and disable
    perceived enemies, in addition to the
  • 37:36 - 37:40
    increasingly familiar use of zero-day
    exploits for espionage, such as spear
  • 37:40 - 37:43
    phishing and distributed denial-of-service
    attacks, on our infrastructure, which are
  • 37:43 - 37:49
    practiced by Western agencies. Problem is,
    once the Russians have demonstrated that
  • 37:49 - 37:54
    this is an effective tactic, the use of
    propaganda bot armies in cyber war will go
  • 37:54 - 38:00
    global. And at that point, our social
    discourse will be irreparably poisoned.
  • 38:00 - 38:05
    Incidentally, I'd like to add - as another
    aside like the Elon Musk thing - I hate
  • 38:05 - 38:10
    the cyber prefix! It usually indicates,
    that whoever's using it has no idea what
  • 38:10 - 38:16
    they're talking about.
    applause, laughter
  • 38:16 - 38:21
    Unfortunately, much as the term
    hacker was corrupted from its original
  • 38:21 - 38:27
    meaning in the 1990s, the term cyber war
    has, it seems, stuck and it's now an
  • 38:27 - 38:32
    actual thing that we can point to and say:
    "This is what we're talking about". So I'm
  • 38:32 - 38:36
    afraid, we're stuck with this really
    horrible term. But that's a digression, I
  • 38:36 - 38:39
    should get back on topic, because I've only
    got 20 minutes to go.
  • 38:39 - 38:46
    Now, the second threat that we need to
    think about regulating, or controlling, is
  • 38:46 - 38:50
    an adjunct to deep learning targeted
    propaganda: it's the use of neural network
  • 38:50 - 38:57
    generated false video media. We're used to
    photoshopped images these days, but faking
  • 38:57 - 39:03
    video and audio takes it to the next
    level. Luckily, faking video and audio is
  • 39:03 - 39:09
    labor-intensive, isn't it? Well nope, not
    anymore. We're seeing the first generation
  • 39:09 - 39:14
    of AI assisted video porn, in which the
    faces of film stars are mapped onto those
  • 39:14 - 39:17
    of other people in a video clip, using
    software rather than a laborious human
  • 39:17 - 39:22
    process.
    A properly trained neural network
  • 39:22 - 39:29
    recognizes faces and transforms the face
    of the Hollywood star they want to put
  • 39:29 - 39:35
    into a porn movie onto
    the face of the porn star in the porn clip
  • 39:35 - 39:41
    and suddenly you have "Oh dear God, get it
    out of my head" - no, not gonna give you
  • 39:41 - 39:44
    any examples. Let's just say it's bad
    stuff.
  • 39:44 - 39:47
    laughs
    Meanwhile we have WaveNet, a system
  • 39:47 - 39:51
    for generating realistic-sounding speech
    in the voice of any human speaker a neural
  • 39:51 - 39:56
    network has been trained to mimic.
    We can now put words into
  • 39:56 - 40:01
    other people's mouths realistically
    without employing a voice actor. This
  • 40:01 - 40:07
    stuff is still geek intensive. It requires
    relatively expensive GPUs or cloud
  • 40:07 - 40:11
    computing clusters, but in less than a
    decade it'll be out in the wild, turned
  • 40:11 - 40:16
    into something, any damn script kiddie can
    use and just about everyone will be able
  • 40:16 - 40:19
    to fake up a realistic video of someone
    they don't like doing something horrible.
  • 40:19 - 40:27
    I mean, Donald Trump in the White House. I
    can't help but hope that out there
  • 40:27 - 40:31
    somewhere there's some geek like Steve
    Bannon with a huge rack of servers who's
  • 40:31 - 40:41
    faking it all, but no. Now, also we've
    already seen alarm this year over bizarre
  • 40:41 - 40:45
    YouTube channels that attempt to monetize
    children's TV brands by scraping the video
  • 40:45 - 40:50
    content of legitimate channels and adding
    their own advertising and keywords on top
  • 40:50 - 40:54
    before reposting it. This is basically
    your YouTube spam.
  • 40:54 - 40:59
    Many of these channels are shaped by
    paperclip maximizing advertising AIs, but
  • 40:59 - 41:04
    are simply trying to maximise their search
    ranking on YouTube and it's entirely
  • 41:04 - 41:08
    algorithmic: you have a whole list of
    keywords, you permute them, you slap
  • 41:08 - 41:15
    them on top of existing popular videos and
    re-upload the videos. Once you add neural
  • 41:15 - 41:20
    network driven tools for inserting
    character A into pirated video B, to click
  • 41:20 - 41:24
    maximize.. for click maximizing bots,
    things are gonna get very weird, though. And
  • 41:24 - 41:29
    they're gonna get even weirder, when these
    tools are deployed for political gain.
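
    The "entirely algorithmic" keyword recipe mentioned a moment ago is about
    as simple as automation gets. A purely illustrative Python sketch, with
    invented keywords and video IDs and no actual uploading:

      from itertools import permutations

      keywords = ["peppa", "elsa", "surprise", "learn colors"]
      popular_videos = ["video_123", "video_456"]

      # Every 3-word permutation of the keyword list becomes a candidate title.
      titles = [" ".join(combo).title() for combo in permutations(keywords, 3)]

      for video in popular_videos:
          for title in titles[:2]:  # slap a couple of spammy titles on each video
              print(f"re-upload {video} as: {title}")
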
  • 41:29 - 41:35
    We tend - being primates that evolved 300
    thousand years ago in a smartphone-free
  • 41:35 - 41:40
    environment - to evaluate the inputs from
    our eyes and ears much less critically
  • 41:40 - 41:44
    than what random strangers on the Internet
    tell us in text. We're already too
  • 41:44 - 41:49
    vulnerable to fake news as it is. Soon
    they'll be coming for us, armed with
  • 41:49 - 41:55
    believable video evidence. The Smart Money
    says that by 2027 you won't be able to
  • 41:55 - 41:59
    believe anything you see in video unless
    there are cryptographic signatures on it,
  • 41:59 - 42:03
    linking it back to the camera that shot
    the raw feed. But you know how good most
  • 42:03 - 42:08
    people are at using encryption - it's going to
    be chaos!
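
    A minimal sketch of the "cryptographic signature linking video back to the
    camera" idea, assuming the camera holds an Ed25519 key pair; it uses the
    third-party Python 'cryptography' package and a stand-in byte string
    instead of a real raw feed:

      import hashlib
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      camera_key = Ed25519PrivateKey.generate()   # would live inside the camera
      raw_feed = b"...raw video bytes..."         # stand-in for the real footage

      digest = hashlib.sha256(raw_feed).digest()  # hash the footage
      signature = camera_key.sign(digest)         # camera signs the hash

      # Anyone holding the camera's public key can check the clip is untampered.
      try:
          camera_key.public_key().verify(signature, digest)
          print("signature ok: footage matches what this camera shot")
      except InvalidSignature:
          print("tampered or re-encoded footage")
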
  • 42:08 - 42:14
    So, paperclip maximizers with focus on
    eyeballs are very 20th century. The new
  • 42:14 - 42:20
    generation is going to be focusing on our
    nervous system. Advertising as an industry
  • 42:20 - 42:23
    can only exist because of a quirk of our
    nervous system, which is that we're
  • 42:23 - 42:27
    susceptible to addiction. Be it
    tobacco, gambling or heroin, we
  • 42:27 - 42:32
    recognize addictive behavior, when we see
    it. Well, do we? It turns out the human
  • 42:32 - 42:36
    brain's reward feedback loops are
    relatively easy to game. Large
  • 42:36 - 42:41
    corporations like Zynga - producers of
    FarmVille - exist solely because of it,
  • 42:41 - 42:46
    free to use social media platforms like
    Facebook and Twitter, are dominant precisely
  • 42:46 - 42:50
    because they're structured to reward
    frequent short bursts of interaction and
  • 42:50 - 42:55
    to generate emotional engagement - not
    necessarily positive emotions, anger and
  • 42:55 - 43:00
    hatred are just as good when it comes to
    attracting eyeballs for advertisers.
  • 43:00 - 43:05
    Smartphone addiction is a side effect of
    advertising as a revenue model. Frequent
  • 43:05 - 43:10
    short bursts of interaction to keep us
    coming back for more. Now a new.. newish
  • 43:10 - 43:14
    development, thanks to deep learning again -
    I keep coming back to deep learning,
  • 43:14 - 43:19
    don't I? - use of neural networks in a
    manner that Marvin Minsky never envisaged,
  • 43:19 - 43:23
    back when he was deciding that the
    Perceptron was where it began and ended
  • 43:23 - 43:27
    and it couldn't do anything.
    Well, we have neuroscientists now, who've
  • 43:27 - 43:34
    mechanized the process of making apps more
    addictive. Dopamine Labs is one startup
  • 43:34 - 43:38
    that provides tools to app developers to
    make any app more addictive, as well as to
  • 43:38 - 43:41
    reduce the desire to continue
    participating in a behavior if it's
  • 43:41 - 43:47
    undesirable, if the app developer actually
    wants to help people kick the habit. This
  • 43:47 - 43:52
    goes way beyond automated A/B testing. A/B
    testing allows developers to plot a binary
  • 43:52 - 43:59
    tree path between options, moving towards a
    single desired goal. But true deep
  • 43:59 - 44:04
    learning, addictiveness maximizers, can
    optimize for multiple attractors in
  • 44:04 - 44:10
    parallel. The more users you've got on
    your app, the more effectively you can work
  • 44:10 - 44:17
    out, what attracts them and train them and
    focus on extra addictive characteristics.
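
    To illustrate the difference gestured at above, here is a toy stand-in: an
    epsilon-greedy bandit that keeps re-optimizing which notification style to
    show, rather than running one fixed A/B comparison. The engagement numbers
    are invented, and a bandit is only a simplification of what a deep-learning
    addictiveness maximizer would do:

      import random

      variants = ["calm", "nagging", "outrage"]
      true_engagement = {"calm": 0.2, "nagging": 0.5, "outrage": 0.8}  # invented

      counts = {v: 0 for v in variants}
      rewards = {v: 0.0 for v in variants}

      def choose(eps=0.1):
          if random.random() < eps:
              return random.choice(variants)   # keep exploring occasionally
          return max(variants, key=lambda v: rewards[v] / max(counts[v], 1))

      for _ in range(5000):
          v = choose()
          clicked = random.random() < true_engagement[v]   # simulated user
          counts[v] += 1
          rewards[v] += clicked

      # The system drifts towards whatever hooks people hardest.
      print(max(variants, key=lambda v: rewards[v] / max(counts[v], 1)))
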
  • 44:17 - 44:21
    Now, going by their public face, the folks
    at Dopamine Labs seem to have ethical
  • 44:21 - 44:25
    qualms about the misuse of addiction
    maximizers. But neuroscience isn't a
  • 44:25 - 44:29
    secret and sooner or later some really
    unscrupulous sociopaths will try to see
  • 44:29 - 44:36
    how far they can push it. So let me give
    you a specific imaginary scenario: Apple
  • 44:36 - 44:41
    have put a lot of effort into making real-
    time face recognition work on the iPhone X
  • 44:41 - 44:45
    and it's going to be everywhere on
    everybody's phone in another couple of
  • 44:45 - 44:51
    years. You can't fool an iPhone X with a
    photo or even a simple mask. It does depth
  • 44:51 - 44:54
    mapping to ensure, your eyes are in the
    right place and can tell whether they're
  • 44:54 - 44:58
    open or closed. It recognizes your face
    from underlying bone structure through
  • 44:58 - 45:03
    makeup and bruises. It's running
    continuously, checking pretty much as often
  • 45:03 - 45:07
    as every time you'd hit the home button on
    a more traditional smartphone UI and it
  • 45:07 - 45:14
    can see where your eyeballs are pointing.
    The purpose of a face recognition system
  • 45:14 - 45:20
    is to provide real-time,
    continuous authentication when you're
  • 45:20 - 45:24
    using a device - not just when you enter a PIN or
    type a password or use a two-factor
  • 45:24 - 45:28
    authentication pad, but the device knows
    that you are its authorized user on a
  • 45:28 - 45:33
    continuous basis and if somebody grabs
    your phone and runs away with it, it'll
  • 45:33 - 45:38
    know that it's been stolen immediately, it
    sees the face of the thief.
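    (A rough sketch of what "continuous authentication" means in practice.
    The loop below is illustrative pseudocode only; capture_frame(),
    face_match() and lock_device() are made-up stand-ins for a platform's
    real biometric APIs, and the threshold and interval are invented.)

        import time

        MATCH_THRESHOLD = 0.9     # minimum similarity to the enrolled face template
        CHECK_INTERVAL_S = 2.0    # roughly "as often as you'd have hit the home button"

        def continuous_authentication(enrolled_template):
            """Keep the session unlocked only while the enrolled user's face is present."""
            while True:
                frame = capture_frame()                        # hypothetical depth + infrared capture
                score = face_match(frame, enrolled_template)   # hypothetical similarity score, 0.0 .. 1.0
                if score < MATCH_THRESHOLD:
                    lock_device()    # unfamiliar or absent face: treat the device as stolen or idle
                    break
                time.sleep(CHECK_INTERVAL_S)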
  • 45:38 - 45:43
    However, your phone monitoring your facial
    expressions and correlating against app
  • 45:43 - 45:48
    usage has other implications. Your phone
    will be aware of precisely what you like
  • 45:48 - 45:54
    to look at on its screen.
    We may well have sufficient insight on the
  • 45:54 - 46:00
    part of the phone to identify whether
    you're happy or sad, bored or engaged.
  • 46:00 - 46:05
    With addiction-seeking deep learning tools
    and neural-network-generated images, those
  • 46:05 - 46:09
    synthetic videos I was talking about, it's
    in principle entirely possible to
  • 46:09 - 46:15
    feed you an endlessly escalating payload
    of arousal-inducing inputs. It might be
  • 46:15 - 46:19
    Facebook or Twitter messages, optimized to
    produce outrage, or it could be porn
  • 46:19 - 46:25
    generated by AI to appeal to kinks you
    don't even consciously know you have.
  • 46:25 - 46:29
    But either way, the app now owns your
    central nervous system and you will be
  • 46:29 - 46:36
    monetized. And finally, I'd like to raise a
    really hair-raising specter that goes well
  • 46:36 - 46:42
    beyond the use of deep learning and
    targeted propaganda and cyber war. Back in
  • 46:42 - 46:46
    2011, an obscure Russian software house
    launched an iPhone app for pickup artists
  • 46:46 - 46:53
    called 'Girls Around Me'. Spoiler: Apple
    pulled it like a hot potato as soon as
  • 46:53 - 46:59
    word got out that it existed. Now, Girls
    Around Me worked out where the user was
  • 46:59 - 47:04
    using GPS, then it would query Foursquare
    and Facebook for people matching a simple
  • 47:04 - 47:10
    relational search: for single females on
    Facebook, by relationship status, who had
  • 47:10 - 47:15
    checked in, or been checked in by their
    friends, in your vicinity on Foursquare.
  • 47:15 - 47:19
    The app then displayed their locations on a
    map along with links to their social media
  • 47:19 - 47:25
    profiles. If they were doing it today, the
    interface would be gamified, showing strike
  • 47:25 - 47:29
    rates and a leaderboard and flagging
    targets who succumbed to harassment as
  • 47:29 - 47:32
    easy lays.
    But these days, the cool kids and single
  • 47:32 - 47:36
    adults are all using dating apps with a
    missing vowel in the name, only a creeper
  • 47:36 - 47:45
    would want something like Girls Around Me,
    right? Unfortunately, there are much, much
  • 47:45 - 47:49
    nastier uses than scraping social media
    to find potential victims for serial
  • 47:49 - 47:54
    rapists. Does your social media profile
    indicate your political and religious
  • 47:54 - 48:00
    affiliation? No? Cambridge Analytica can
    work them out with 99.9% precision
  • 48:00 - 48:05
    anyway, so don't worry about that. We
    already have you pegged. Now add a service
  • 48:05 - 48:08
    that can identify people's affiliation and
    location and you have the beginning of a
  • 48:08 - 48:14
    flash mob app, one that will show "people
    like us" and "people like them" on a
  • 48:14 - 48:19
    hyperlocal map.
    Imagine you're a young female and a
  • 48:19 - 48:22
    supermarket like Target has figured out
    from your purchase patterns that you're
  • 48:22 - 48:28
    pregnant, even though you don't know it
    yet. This actually happened in 2011. Now
  • 48:28 - 48:31
    imagine that all the anti-abortion
    campaigners in your town have an app
  • 48:31 - 48:36
    called "Babies Risk" on their phones.
    Someone has paid for the analytics feed
  • 48:36 - 48:40
    from the supermarket and every time you go
    near a family planning clinic, a group of
  • 48:40 - 48:44
    unfriendly anti-abortion protesters
    somehow miraculously show up and swarm
  • 48:44 - 48:50
    you. Or imagine you're male and gay and
    the "God hates fags"-crowd has invented a
  • 48:50 - 48:55
    100% reliable gaydar app, based on your
    Grindr profile, and is getting their fellow
  • 48:55 - 49:01
    travelers to queer-bash gay men - only when
    they're alone or outnumbered by ten to
  • 49:01 - 49:06
    one. That's the special horror of precise
    geolocation: not only do you always know
  • 49:06 - 49:13
    where you are, the AIs know where you are,
    and some of them aren't friendly. Or
  • 49:13 - 49:17
    imagine you're in Pakistan and Christian-
    Muslim tensions are rising, or you're in rural
  • 49:17 - 49:24
    Alabama and a Democrat - you know, the
    possibilities are endless. Someone out
  • 49:24 - 49:30
    there is working on this. A geolocation-
    aware, social-media-scraping deep learning
  • 49:30 - 49:34
    application that uses a gamified
    competitive interface to reward its
  • 49:34 - 49:38
    players for joining in acts of mob
    violence against whoever the app developer
  • 49:38 - 49:42
    hates.
    Probably it has an innocuous-seeming but
  • 49:42 - 49:47
    highly addictive training mode, to get the
    users accustomed to working in teams and
  • 49:47 - 49:53
    obeying the app's instructions. Think
    Ingress or Pokemon Go. Then at some pre-
  • 49:53 - 49:58
    planned zero-hour, it switches mode and
    starts rewarding players for violence,
  • 49:58 - 50:02
    players who have been primed to think of
    their targets as vermin by a steady drip
  • 50:02 - 50:06
    feed of micro-targeted dehumanizing
    propaganda inputs, delivered over a period
  • 50:06 - 50:12
    of months. And the worst bit of this picture?
    The app developer isn't even a
  • 50:12 - 50:17
    nation-state trying to disrupt its enemies
    or an extremist political group trying to
  • 50:17 - 50:22
    murder gays, Jews or Muslims. It's just a
    Paperclip Maximizer doing what it does
  • 50:22 - 50:28
    and you are the paper. Welcome to the 21st
    century.
  • 50:28 - 50:41
    applause
    Uhm...
  • 50:41 - 50:42
    Thank you.
  • 50:42 - 50:48
    ongoing applause
    We have a little time for questions. Do
  • 50:48 - 50:54
    you have a microphone for the audience? Do
    we have any questions? ... OK.
  • 50:54 - 50:56
    Herald: So you are doing a Q&A?
    CS: Hmm?
  • 50:56 - 51:02
    Herald: So you are doing a Q&A. Well if
    there are any questions, please come
  • 51:02 - 51:24
    forward to the microphones, numbers 1
    through 4 and ask.
  • 51:24 - 51:29
    Mic 1: Do you really think it's all
    bleak and dystopian like you described
  • 51:29 - 51:35
    it, because I also think the future can be
    bright, looking at the internet with open
  • 51:35 - 51:39
    source and like, it's all growing and going
    faster and faster in a good
  • 51:39 - 51:44
    direction. So what do you think about
    the balance here?
  • 51:44 - 51:48
    CS: sighs Basically, I think the
    problem is that about 3% of us
  • 51:48 - 51:54
    are sociopaths or psychopaths, who spoil
    everything for the other 97% of us.
  • 51:54 - 51:57
    Wouldn't it be great if somebody could
    write an app that would identify all the
  • 51:57 - 52:00
    psychopaths among us and let the rest of
    us just kill them?
  • 52:00 - 52:05
    laughing, applause
    Yeah, we have all the
  • 52:05 - 52:12
    tools to make a utopia, we have them now,
    today. A bleak miserable grim meathook
  • 52:12 - 52:18
    future is not inevitable, but it's up to
    us to use these tools to prevent the bad
  • 52:18 - 52:22
    stuff happening and to do that, we have to
    anticipate the bad outcomes and work to
  • 52:22 - 52:25
    try and figure out a way to deal with
    them. That's what this talk is. I'm trying
  • 52:25 - 52:29
    to do a bit of a wake-up call and get
    people thinking about how much worse
  • 52:29 - 52:34
    things can get and what we need to do to
    prevent it from happening. What I was
  • 52:34 - 52:38
    saying earlier about our regulatory
    systems being broken, stands. How do we
  • 52:38 - 52:48
    regulate the deep learning technologies?
    This is something we need to think about.
  • 52:48 - 52:55
    H: Okay mic number two.
    Mic 2: Hello? ... When you talk about
  • 52:55 - 53:02
    corporations as AIs, where do you see that
    analogy you're making? Do you see them as
  • 53:02 - 53:10
    literally AIs or figuratively?
    CS: Almost literally. If
  • 53:10 - 53:14
    you're familiar with philosopher
    John Searle's Chinese room paradox
  • 53:14 - 53:18
    from the 1970s, by which he attempted to
    prove that artificial intelligence was
  • 53:18 - 53:24
    impossible, a corporation is very much the
    Chinese room implementation of an AI. It
  • 53:24 - 53:29
    is a bunch of human beings in a box. You
    put inputs into the box, you get outputs
  • 53:29 - 53:33
    out of the box. Does it matter whether it's
    all happening in software or whether
  • 53:33 - 53:38
    there's a human being following rules
    in between to assemble the output? I don't
  • 53:38 - 53:41
    see there being much of a difference.
    Now you have to look at a company at a
  • 53:41 - 53:47
    very abstract level to view it as an AI,
    but more and more companies are automating
  • 53:47 - 53:54
    their internal business processes. You've
    got to view this as an ongoing trend. And
  • 53:54 - 54:01
    yeah, they have many of the characteristics
    of an AI.
  • 54:01 - 54:06
    Herald: Okay mic number four.
    Mic 4: Hi, thanks for your talk.
  • 54:06 - 54:13
    You probably heard of the Time Well
    Spent and Design Ethics movements that
  • 54:13 - 54:17
    are alerting developers to dark patterns
    in UI design, where
  • 54:17 - 54:21
    these people design apps to manipulate
    people. I'm curious if you find any
  • 54:21 - 54:27
    optimism in the possibility of amplifying
    or promoting those movements.
  • 54:27 - 54:32
    CS: Uhm, you know, I knew about dark
    patterns, I knew about people trying to
  • 54:32 - 54:36
    optimize them, I wasn't actually aware
    there were movements against this. Okay I'm
  • 54:36 - 54:41
    53 years old, I'm out of touch. I haven't
    actually done any serious programming in
  • 54:41 - 54:47
    15 years. I'm so rusty, my rust has rust on
    it. But, you know, it is a worrying trend
  • 54:47 - 54:54
    and actual activism is a good start.
    Raising awareness of hazards and of what
  • 54:54 - 54:58
    we should be doing about them is a good
    start. And I would classify this actually
  • 54:58 - 55:04
    as a moral issue. We need to..
    corporations evaluate everything in terms
  • 55:04 - 55:08
    of revenue, because it's their
    equivalent of breathing, they have to
  • 55:08 - 55:14
    breathe. Corporations don't usually have
    any moral framework. We're humans, we need
  • 55:14 - 55:18
    a moral framework to operate within. Even
    if it's as simple as first "Do no harm!"
  • 55:18 - 55:22
    or "Do not do unto others that which would
    be repugnant if it was done unto you!",
  • 55:22 - 55:26
    the Golden Rule. So, yeah, we should be
    trying to spread awareness of this,
  • 55:26 - 55:32
    and working with program developers to
    remind them that they are human
  • 55:32 - 55:36
    beings and have to be humane in their
    application of technology. That is a necessary
  • 55:36 - 55:40
    start.
    applause
  • 55:40 - 55:46
    H: Thank you! Mic 3?
    Mic 3: Hi! Yeah, I think that folks,
  • 55:46 - 55:49
    especially in this sort of crowd, tend to
    jump to the "just get off of
  • 55:49 - 55:52
    Facebook"-solution first, for a lot of
    these things that are really, really
  • 55:52 - 55:57
    scary. But what worries me is how we sort
    of silence ourselves when we do that.
  • 55:57 - 56:02
    After the election I actually got back on
    Facebook, because the Women's March was
  • 56:02 - 56:07
    mostly organized through Facebook. But
    yeah, I think we need a lot more
  • 56:07 - 56:13
    regulation, but we can't just throw it
    out. We're.. because it's..
  • 56:13 - 56:17
    social media is the only... really good
    platform we have right now
  • 56:17 - 56:20
    to express ourselves, to
    have our rules, or power.
  • 56:20 - 56:25
    CS: Absolutely. I have made
    a point of not really using Facebook
  • 56:25 - 56:28
    for many, many, many years.
    I have a Facebook page simply to
  • 56:28 - 56:32
    shut up the young marketing people at my
    publisher, who used to pop up every two
  • 56:32 - 56:35
    years and say: "Why don't you have a
    Facebook. Everybody's got a Facebook."
  • 56:35 - 56:40
    No, I've had a blog since 1993!
    laughing
  • 56:40 - 56:44
    But now I'm gonna have to use Facebook,
    because these days, not using Facebook is
  • 56:44 - 56:51
    like not using email. You're cutting off
    your nose to spite your face. What we
  • 56:51 - 56:55
    really do need to be doing is looking for
    some form of effective oversight of
  • 56:55 - 57:01
    Facebook and particularly of how the
    algorithms that show you content are
  • 57:01 - 57:05
    written. What I was saying earlier about
    how algorithms are not as transparent as
  • 57:05 - 57:11
    human beings to people applies hugely to
    them. And both Facebook and Twitter
  • 57:11 - 57:15
    control the information
    that they display to you.
  • 57:15 - 57:19
    Herald: Okay, I'm terribly sorry for all the
    people queuing at the mics now, we're out
  • 57:19 - 57:25
    of time. I also have to apologize, I
    announced that this talk was being held in
  • 57:25 - 57:30
    English, but it was being held in English.
    the latter pronounced with a hard G
  • 57:30 - 57:31
    Thank you very much, Charles Stross!
  • 57:31 - 57:34
    CS: Thank you very much for
    listening to me, it's been a pleasure!
  • 57:34 - 57:36
    applause
  • 57:36 - 57:53
    postroll music
  • 57:53 - 57:58
    subtitles created by c3subtitles.de
    in the year 2018