-
34C3 preroll music
-
Herald: Humans of Congress, it is my
pleasure to announce the next speaker.
-
I was supposed to pick out a few awards or
something, to actually present what he's
-
done in his life, but I can
only say: he's one of us!
-
applause
-
Charles Stross!
ongoing applause
-
Charles Stross: Hi! Is this on?
Good. Great.
-
I'm really pleased to be here and I
want to start by apologizing for my total
-
lack of German. So this talk is gonna be in
English. Good morning. I'm Charlie Stross
-
and it's my job to tell lies for money, or
rather, I write science fiction, much of
-
it about the future, which in recent
years has become ridiculously hard to
-
predict. In this talk I'm going to talk
about why. Now our species, Homo sapiens
-
sapiens, is about 300,000 years old. It
used to be about 200,000 years old,
-
but it grew an extra 100,000
years in the past year because of new
-
archaeological discoveries, I mean, go
figure. For all but the last three
-
centuries or so - of that span, however -
predicting the future was really easy. If
-
you were an average person - as opposed to
maybe a king or a pope - natural disasters
-
aside, everyday life 50 years in the
future would resemble everyday life 50
-
years in your past. Let that sink in for a
bit. For 99.9% of human existence on this
-
earth, the future was static. Then
something changed and the future began to
-
shift increasingly rapidly, until, in the
present day, things are moving so fast,
-
it's barely possible to anticipate trends
from one month to the next. Now, as the
-
eminent computer scientist Edsger Dijkstra
once remarked, computer science is no more
-
about computers than astronomy is about
building big telescopes. The same can be
-
said of my field of work, writing science
fiction: sci-fi is rarely about science
-
and even more rarely about predicting the
future. But sometimes we dabble in
-
futurism, and lately futurism has gotten
really, really weird. Now when I write a
-
near future work of fiction, one set, say, a
decade hence, there used to be a recipe I
-
could follow, that worked eerily well. Simply put:
90% of the next decade's stuff is
-
already here around us today.
Buildings are designed to
-
last many years, automobiles have a design
life of about a decade, so half the cars on
-
the road in 2027 are already there now -
they're new. People? There'll be some new
-
faces, aged 10 and under, and some older
people will have died, but most of us
-
adults will still be around, albeit older
and grayer. This is the 90% of the near
-
future that's already here today. After
the already existing 90%, another 9% of a
-
near future a decade hence used to be
easily predictable: you look at trends
-
dictated by physical limits, such as
Moore's law and you look at Intel's road
-
map and you use a bit of creative
extrapolation and you won't go too far
-
wrong. If I predict - wearing my futurology
hat - that in 2027 LTE cellular phones will
-
be ubiquitous, 5G will be available for
high bandwidth applications and there will be
-
fallback to some kind of satellite data
service at a price, you probably won't
-
laugh at me.
I mean, it's not like I'm predicting that
-
airliners will fly slower and Nazis will
take over the United States, is it ?
-
laughing
-
And therein lies the problem. There is the
remaining 1% of what Donald Rumsfeld
-
called the "unknown unknowns", which throws off
all predictions. As it happens, airliners
-
today are slower than they were in the
1970s and don't get me started about the Nazis,
-
I mean, nobody in 2007 was expecting a Nazi
revival in 2017, were they?
-
Only this time, Germans get to be the good guys.
laughing, applause
-
So. My recipe for fiction set 10 years
in the future used to be:
-
"90% is already here,
9% is not here yet but predictable
-
and 1% is 'who ordered that?'" But unfortunately
the ratios have changed, I think we're now
-
down to maybe 80% already here - climate
change takes a huge toll on architecture -
-
then 15% not here yet, but predictable and
a whopping 5% of utterly unpredictable
-
deep craziness. Now... before I carry on
with this talk, I want to spend a minute or
-
two ranting loudly and ruling out the
singularity. Some of you might assume, that
-
as the author of books like "Singularity
Sky" and "Accelerando",
-
I expect an impending technological
singularity,
-
that we will develop self-improving
artificial intelligence and mind uploading
-
and the whole wish list of transhumanist
aspirations promoted by the likes of
-
Ray Kurzweil, will come to pass. Unfortunately
this isn't the case. I think transhumanism
-
is a warmed-over Christian heresy. While
its adherents tend to be outspoken atheists,
-
they can't quite escape from the
history that gave rise to our current
-
Western civilization. Many of you are
familiar with design patterns, an approach
-
to software engineering that focuses on
abstraction and simplification, in order
-
to promote reusable code. When you look at
the AI singularity as a narrative and
-
identify the numerous places in the
story where the phrase "and then a miracle
-
happens" occur, it becomes apparent pretty
quickly, that they've reinvented Christiantiy.
-
applause
-
Indeed, the wellsprings of
today's transhumanism draw on a long rich
-
history of Russian philosophy, exemplified
by the Russian Orthodox theologian Nikolai
-
Fyodorovich Fedorov by way of his disciple
Konstantin Tsiolkovsky, whose derivation
-
of the rocket equation makes him
essentially the father of modern space
-
flight. Once you start probing the nether
regions of transhumanist thought and run
-
into concepts like Roko's Basilisk - by the
way, any of you who didn't know about the
-
Basilisk before, are now doomed to an
eternity in AI hell, terribly sorry - you
-
realize, they've mangled it to match some
of the nastier aspects of Presbyterian
-
Protestantism. They've basically reinvented
original sin and Satan in the guise of an
-
AI that doesn't exist yet. It's kind of
peculiar. Anyway, my take on the
-
singularity is: if something walks
like a duck and quacks like a duck, it's
-
probably a duck. And if it looks like a
religion, it's probably a religion.
-
I don't see much evidence for human-like,
self-directed artificial intelligences
-
coming along any time soon, and a fair bit
of evidence that nobody except a few freaks
-
in cognitive science departments even
wants one. I mean, if we invented an AI
-
that was like a human mind, it would do the
AI equivalent of sitting on the sofa,
-
munching popcorn and
watching the Super Bowl all day.
-
It wouldn't be much use to us.
laughter, applause
-
What we're getting instead,
is self-optimizing tools that defy
-
human comprehension, but are not
in fact any more like our kind
-
of intelligence than a Boeing 737 is like
a seagull. Boeing 737s and seagulls both
-
fly, but Boeing 737s don't lay eggs and shit
everywhere. So I'm going to wash my hands
-
of the singularity as a useful explanatory
model of the future without further ado.
-
I'm one of those vehement atheists as well
and I'm gonna try and offer you a better
-
model for what's happening to us. Now, as
my fellow Scottish science fiction author
-
Ken MacLeod likes to say "the secret
weapon of science fiction is history".
-
History is, loosely speaking, the written
record of what and how people did things
-
in past times. Times that have slipped out
of our personal memories. We science
-
fiction writers tend to treat history as a
giant toy chest to raid, whenever we feel
-
like telling a story. With a little bit of
history, it's really easy to whip up an
-
entertaining yarn about a galactic empire
that mirrors the development and decline
-
of the Habsburg Empire or to respin the
October Revolution as a tale of how Mars
-
got its independence. But history is
useful for so much more than that.
-
It turns out, that our personal memories
don't span very much time at all. I'm 53
-
and I barely remember the 1960s. I only
remember the 1970s with the eyes of a 6 to
-
16 year old. My father died this year,
aged 93, and he just about remembered the
-
1930s. Only those of my father's
generation directly remember the Great
-
Depression and can compare it to the
2007/08 global financial crisis directly.
-
We Westerners tend to pay little attention
to cautionary tales told by 90-somethings.
-
We're modern, we're change obsessed and we
tend to repeat our biggest social mistakes
-
just as they slip out of living memory,
which means they recur on a timescale of
-
70 to 100 years.
So if our personal memories are useless,
-
we need a better toolkit
and history provides that toolkit.
-
History gives us the perspective to see what
went wrong in the past and to look for
-
patterns and check to see whether those
patterns are recurring in the present.
-
Looking in particular at the history of the past two
to four hundred years, that age of rapidly
-
increasing change that I mentioned at the
beginning. One deviation
-
from the norm of the preceding
3,000 centuries is glaringly obvious, and that's
-
the development of artificial intelligence,
which happened no earlier than 1553 and no
-
later than 1844. I'm talking of course
about the very old, very slow AI's we call
-
corporations. What lessons from the history
of the company can we draw that tell us
-
about the likely behavior of the type of
artificial intelligence we're interested
-
in here, today?
Well. Need a mouthful of water.
-
Let me crib from Wikipedia for a moment.
-
Wikipedia: "In the late 18th
century, Stewart Kyd, the author of the
-
first treatise on corporate law in English,
defined a corporation as: 'a collection of
-
many individuals united into one body,
under a special denomination, having
-
perpetual succession under an artificial
form, and vested, by policy of the law, with
-
the capacity of acting, in several respects,
as an individual, enjoying privileges and
-
immunities in common, and of exercising a
variety of political rights, more or less
-
extensive, according to the design of its
institution, or the powers conferred upon
-
it, either at the time of its creation, or
at any subsequent period of its
-
existence.'"
This was a late 18th century definition.
-
Does that sound like a piece of software to you?
In 1844, the British government passed the
-
"Joint Stock Companies Act" which created
a register of companies and allowed any
-
legal person, for a fee, to register a
company which in turn existed as a
-
separate legal person. Prior to that point,
it required a Royal Charter or an act of
-
Parliament to create a company.
Subsequently, the law was extended to limit
-
the liability of individual shareholders
in event of business failure and then both
-
Germany and the United States added their
own unique twists to what today we see as
-
the doctrine of corporate personhood.
Now, plenty of other things that
-
happened between the 16th and 21st centuries
also changed the shape of the world we live in.
-
I've skipped the changes in
agricultural productivity that happened
-
due to energy economics,
which finally broke the Malthusian trap
-
our predecessors lived in.
This in turn broke the long-term
-
cap on economic growth of about
0.1% per year
-
in the absence of famines, plagues and
wars and so on.
-
I've skipped the germ theory of diseases
and the development of trade empires
-
in the age of sail and gunpowder,
that were made possible by advances
-
in accurate time measurement.
-
I've skipped the rise, and
hopefully decline, of the pernicious
-
theory of scientific racism that
underpinned Western colonialism and the
-
slave trade. I've skipped the rise of
feminism, the ideological position that
-
women are human beings rather than
property and the decline of patriarchy.
-
I've skipped the whole of the
Enlightenment and the Age of Revolutions,
-
but this is a technocentric
Congress, so I want to frame this talk in
-
terms of AI, which we all like to think we
understand. Here's the thing about these
-
artificial persons we call corporations.
Legally, they're people. They have goals,
-
they operate in pursuit of these goals,
they have a natural life cycle.
-
In the 1950s, a typical U.S. corporation on the
S&P 500 Index had a life span of 60 years.
-
Today it's down to less than 20 years.
This is largely due to predation.
-
Corporations are cannibals, they eat
one another.
-
They're also hive superorganisms
like bees or ants.
-
For the first century and a
half, they relied entirely on human
-
employees for their internal operation,
but today they're automating their
-
business processes very rapidly. Each
human is only retained so long as they can
-
perform their assigned tasks more
efficiently than a piece of software
-
and they can all be replaced by another
human, much as the cells in our own bodies
-
are functionally interchangeable and a
group of cells can - in extremis - often be
-
replaced by a prosthetic device.
To some extent, corporations can be
-
trained to serve the personal desires of
their chief executives, but even CEOs can
-
be dispensed with, if their activities
damage the corporation, as Harvey
-
Weinstein found out a couple of months
ago.
-
Finally, our legal environment today has
been tailored for the convenience of
-
corporate persons, rather than human
persons, to the point where our governments
-
now mimic corporations in many of their
internal structures.
-
So, to understand where we're going, we
need to start by asking "What do our
-
current actually existing AI overlords
want?"
-
Now, Elon Musk, who I believe you've
all heard of, has an obsessive fear of one
-
particular hazard of artificial
intelligence, which he conceives of as
-
being a piece of software that functions
like a brain in a box, namely the
-
Paperclip Optimizer or Maximizer.
A Paperclip Maximizer is a term of art for
-
a goal-seeking AI that has a single
priority, e.g., maximizing the
-
number of paperclips in the universe. The
Paperclip Maximizer is able to improve
-
itself in pursuit of its goal, but has no
ability to vary its goal, so will
-
ultimately attempt to convert all the
metallic elements in the solar system into
-
paperclips, even if this is obviously
detrimental to the well-being of the
-
humans who set it this goal.
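To make the concept concrete, here is a minimal toy sketch in Python of a goal-seeking optimizer with a single fixed objective. All names and numbers are hypothetical illustrations, not anyone's real system; the point is what never appears in the objective:

```python
# Toy goal-seeking optimizer with one fixed, unquestionable objective.

def run_paperclip_maximizer(resources: float, steps: int) -> float:
    paperclips = 0.0
    efficiency = 1.0  # paperclips produced per unit of resource
    for _ in range(steps):
        if efficiency < 10.0:
            efficiency *= 1.5          # "self-improvement" serves the same goal
        spend = min(resources, 100.0)  # consume whatever it can reach
        resources -= spend
        paperclips += spend * efficiency
        # No term anywhere for the well-being of whoever set the goal:
        # side effects simply never enter the objective function.
    return paperclips

print(run_paperclip_maximizer(resources=1000.0, steps=20))
```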
Unfortunately I don't think Musk
-
is paying enough attention,
consider his own companies.
-
Tesla isn't a Paperclip Maximizer, it's a
battery Maximizer.
-
After all, an
electric car is a battery with wheels and
-
seats. SpaceX is an orbital payload
Maximizer, driving down the cost of space
-
launches in order to encourage more sales
for the service it provides. SolarCity is
-
a photovoltaic panel maximizer and so on.
All three of Musk's very own slow AIs
-
are based on an architecture, designed to
maximize return on shareholder
-
investment, even if by doing so they cook
the planet the shareholders have to live
-
on or turn the entire thing into solar
panels.
-
But hey, if you're Elon Musk, that's okay,
you're gonna retire on Mars anyway.
-
laughing
-
By the way, I'm ragging on Musk in this
talk simply because he's the current
-
opinionated tech billionaire who thinks
that disrupting a couple of industries
-
entitles him to make headlines.
If this was 2007 and my focus slightly
-
different, I'd be ragging on
Steve Jobs, and if it were 1997 my target
-
would be Bill Gates.
Don't take it personally, Elon.
-
laughing
-
Back to topic. The problem with
corporations is that, despite their overt
-
goals, whether they make electric vehicles
or beer or sell life insurance policies,
-
they all have a common implicit Paperclip
Maximizer goal: to generate revenue. If
-
they don't make money, they're eaten by a
bigger predator or they go bust. It's as
-
vital to them as breathing is to us
mammals. They generally pursue their
-
implicit goal - maximizing revenue - by
pursuing their overt goal.
-
But sometimes they try instead to
manipulate their environment, to ensure
-
that money flows to them regardless.
Human toolmaking culture has become very
-
complicated over time. New technologies
always come with an attached implicit
-
political agenda that seeks to extend the
use of the technology. Governments react
-
to this by legislating to control new
technologies and sometimes we end up with
-
industries actually indulging in legal
duels through the regulatory mechanism of
-
law to determine, who prevails. For
example, consider the automobile. You
-
can't have mass automobile transport
without gas stations and fuel distribution
-
pipelines.
These in turn require access to whoever
-
owns the land the oil is extracted from,
and before you know it, you end up
-
with a permanent army in Iraq and a client
dictatorship in Saudi Arabia. Closer to
-
home, automobiles imply jaywalking laws and
drink-driving laws. They affect Town
-
Planning regulations and encourage
suburban sprawl, the construction of human
-
infrastructure on a scale required by
automobiles, not pedestrians.
-
This in turn is bad for competing
transport technologies, like buses or
-
trams, which work best in cities with a
high population density. So to get laws
-
that favour the automobile in place,
providing an environment conducive to
-
doing business, automobile companies spend
money on political lobbyists and when they
-
can get away with it, on bribes. Bribery
needn't be blatant of course. For example,
-
the reforms of the British railway network
in the 1960s dismembered many branch lines
-
and coincided with a surge in road
building and automobile sales. These
-
reforms were orchestrated by Transport
Minister Ernest Marples, who was purely a
-
politician. The fact that he accumulated a
considerable personal fortune during this
-
period by buying shares in motorway
construction corporations, has nothing to
-
do with it. So, no conflict of interest
there. Now, the automobile
-
industry in isolation can't be considered
a pure Paperclip Maximizer. You have to
-
look at it in conjunction with the fossil
fuel industries, the road construction
-
business, the accident insurance sector
and so on. When you do this, you begin to
-
see the outline of a paperclip-maximizing
ecosystem that invades far-flung lands and
-
grinds up and kills around one and a
quarter million people per year. That's
-
the global death toll from automobile
accidents currently, according to the World
-
Health Organization. It rivals the First
World War on an ongoing permanent basis
-
and these are all side effects of its
drive to sell you a new car. Now,
-
automobiles aren't of course a total
liability. Today's cars are regulated
-
stringently for safety and, in theory, to
reduce toxic emissions. They're fast,
-
efficient and comfortable. We can thank
legally mandated regulations imposed by
-
governments for this, of course. Go back
to the 1970s and cars didn't have crumple
-
zones, go back to the 50s and they didn't
come with seat belts as standard. In the
-
1930s, indicators (turn signals) and brakes
on all four wheels were optional and your
-
best hope of surviving a 50 km/h crash was
to be thrown out of a car and land somewhere
-
without breaking your neck.
Regulatory agencies are our current
-
political system's tool of choice for
preventing Paperclip Maximizers from
-
running amok. Unfortunately, regulators
don't always work. The first failure mode
-
of regulators that you need to be aware of
is regulatory capture, where regulatory
-
bodies are captured by the industries they
control. Ajit Pai, head of the American Federal
-
Communications Commission, which just voted
to eliminate net neutrality rules in the
-
U.S., has worked as Associate
General Counsel for Verizon Communications
-
Inc, the largest current descendant of the
Bell Telephone system's monopoly. After
-
the AT&T antitrust lawsuit, the Bell
network was broken up into the seven baby
-
bells. They've now pretty much reformed
and reaggregated and Verizon is the largest current one.
-
Why should someone with a transparent
interest in a technology corporation end
-
up running a regulator that tries to
control the industry in question? Well, if
-
you're going to regulate a complex
technology, you need to recruit regulators
-
from people who understand it.
Unfortunately, most of those people are
-
industry insiders. Ajit Pai is clearly
very much aware of how Verizon is
-
regulated, very insightful into its
operations and wants to do something about
-
it - just not necessarily in the public
interest.
-
applause
When regulators end up staffed by people
-
drawn from the industries they're supposed
to control, they frequently end up working
-
with their former office mates, to make it
easier to turn a profit, either by raising
-
barriers to keep new insurgent companies
out or by dismantling safeguards that
-
protect the public. Now a second problem
is regulatory lag where a technology
-
advances so rapidly, that regulations are
laughably obsolete by the time they're
-
issued. Consider the EU directive
requiring cookie notices on websites to
-
caution users that their activities are
tracked and their privacy may be violated.
-
This would have been a good idea in 1993
or 1996, but unfortunately it didn't show up
-
until 2011. Fingerprinting and tracking
mechanisms have nothing to do with cookies
-
and were already widespread by then. Tim
Berners-Lee observed in 1995 that five
-
years' worth of change was happening on the
web for every 12 months of real-world
-
time. By that yardstick, the cookie law
came out nearly a century too late to do any good.
-
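As a back-of-the-envelope check of that claim, here is the arithmetic, assuming Berners-Lee's five-to-one ratio:

```python
# Cookie law needed by ~1993, delivered in 2011; five web-years pass
# per calendar year on Berners-Lee's 1995 yardstick.
web_years_late = (2011 - 1993) * 5
print(web_years_late)  # 90 "web-years" - nearly a century
```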
Again, look at Uber. This month,
the European Court of Justice ruled that
-
Uber is a taxi service, not a web app. This
is arguably correct - the problem is, Uber
-
has spread globally since it was founded
eight years ago, subsidizing its drivers to
-
put competing private hire firms out of
business. Whether this is a net good for
-
society is debatable. The problem is, a
taxi driver can get awfully hungry if she
-
has to wait eight years for a court ruling
against a predator intent on disrupting
-
her business. So, to recap: firstly, we
already have Paperclip Maximizers and
-
Musk's AI alarmism is curiously mirror-
blind. Secondly, we have mechanisms for
-
keeping Paperclip Maximizers in check, but
they don't work very well against AIs that
-
deploy the dark arts, especially
corruption and bribery and they're even
-
worse against true AIs that evolve too
fast for human-mediated mechanisms like
-
the law to keep up with. Finally, unlike
the naive vision of a Paperclip Maximizer
-
that maximizes only paperclips, existing
AIs have multiple agendas, their overt
-
goal, but also profit seeking, expansion
into new markets and accommodating the
-
desires of whoever is currently in the
driving seat.
-
sighs
-
Now, this brings me to the next major
heading in this dismaying laundry list:
-
how it all went wrong. It seems to me that
our current political upheavals are best
-
understood as arising from the capture
of post-1917 democratic institutions by
-
large-scale AIs. Everywhere you look, you
see voters protesting angrily against an
-
entrenched establishment, that seems
determined to ignore the wants and needs
-
of their human constituents in favor of
those of the machines. The Brexit upset
-
was largely the result of a protest vote
against the British political
-
establishment, the election of Donald
Trump likewise, with a side order of racism
-
on top. Our major political parties are
led by people who are compatible with the
-
system as it exists today, a system that
has been shaped over decades by
-
corporations distorting our government and
regulatory environments. We humans live in
-
a world shaped by the desires and needs of
AI, forced to live on their terms and we're
-
taught, that we're valuable only to the
extent we contribute to the rule of the
-
machines. Now, this is 34C3 and we're
all more interested in computers and
-
communications technology than this
historical crap. But as I said earlier,
-
history is a secret weapon, if you know how
to use it. What history is good for, is
-
enabling us to spot recurring patterns
that repeat across timescales outside our
-
personal experience. And if we look at our
historical very slow AIs, what do we learn
-
from them about modern AI and how it's
going to behave? Well to start with, our
-
AIs have been warped; the new AIs,
the electronic ones instantiated in our
-
machines, have been warped by a terrible,
fundamentally flawed design decision back
-
in 1995 that has damaged democratic
political processes, crippled our ability
-
to truly understand the world around us
and led to the angry upheavals and upsets
-
of our present decade. That mistake was
the decision to fund the build-out of the
-
public World Wide Web, as opposed to the
earlier government-funded corporate and
-
academic Internet by
monetizing eyeballs through advertising
-
revenue. The ad-supported web we're used
to today wasn't inevitable. If you recall
-
the web as it was in 1994, there were very
few ads at all and not much in the way of
-
commerce. 1995 was the year the World Wide
Web really came to public attention in the
-
anglophone world and consumer-facing
websites began to appear. Nobody really
-
knew, how this thing was going to be paid
for. The original .com bubble was all
-
about working out, how to monetize the web
for the first time and a lot of people
-
lost their shirts in the process. A naive
initial assumption was that the
-
transaction cost of setting up a TCP/IP
connection over modem was too high to
-
be supported by per-use micro-
billing for web pages. So instead of
-
charging people a fraction of a euro cent
for every page view, we'd bill customers
-
indirectly, by shoving advertising banners
in front of their eyes and hoping they'd
-
click through and buy something.
Unfortunately, advertising is an
-
industry, one of those pre-existing very
slow AI ecosystems I already alluded to.
-
Advertising tries to maximize its hold on
the attention of the minds behind each
-
human eyeball. The coupling of advertising
with web search was an inevitable
-
outgrowth, I mean how better to attract
the attention of reluctant subjects, than to
-
find out what they're really interested in
seeing and sell ads that relate to
-
those interests? The problem of applying
the paperclip maximizer approach to
-
monopolizing eyeballs, however, is that
eyeballs are a limited, scarce resource.
-
There are only 168 hours in every week, in
which I can gaze at banner ads. Moreover,
-
most ads are irrelevant to my interests and
it doesn't matter, how often you flash an ad
-
for dog biscuits at me, I'm never going to
buy any. I have a cat. To make best
-
revenue-generating use of our eyeballs,
it's necessary for the ad industry to
-
learn, who we are and what interests us and
to target us increasingly minutely in hope
-
of hooking us with stuff we're attracted
to.
-
In other words: the ad industry is a
paperclip maximizer, but for its success,
-
it relies on developing a theory of mind
that applies to human beings.
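A minimal sketch of what that minute targeting amounts to, with hypothetical names and made-up numbers (this is an illustration, not any real ad network's API):

```python
from collections import Counter

profile = Counter()                # what the ad network infers about us

def observe(viewed_topics):
    profile.update(viewed_topics)  # every page view sharpens the profile

def score_ad(ad_topics):
    # Relevance = overlap between the ad's topics and inferred interests.
    return sum(profile[t] for t in ad_topics)

observe(["cats", "sci-fi", "cats"])
ads = {"dog_biscuits": ["dogs"], "cat_food": ["cats"], "space_opera": ["sci-fi"]}
print(max(ads, key=lambda name: score_ad(ads[name])))  # cat_food, not dog biscuits
```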
-
sighs
-
Do I need to divert onto the impassioned
rant about the hideous corruption
-
and evil that is Facebook?
Audience: Yes!
-
CS: Okay, somebody said yes.
I'm guessing you've heard it all before,
-
but the too-long-didn't-read summary is:
Facebook is as much a search engine as
-
Google or Amazon. Facebook searches are
optimized for faces, that is for human
-
beings. If you want to find someone you
fell out of touch with thirty years ago,
-
Facebook probably knows where they live,
what their favorite color is, what sized
-
shoes they wear and what they said about
you to your friends behind your back all
-
those years ago, that made you cut them off.
Even if you don't have a Facebook account,
-
Facebook has a You account, a hole in their
social graph with a bunch of connections
-
pointing into it and your name tagged in
your friends' photographs. They know a lot
-
about you and they sell access to their
social graph to advertisers, who then
-
target you, even if you don't think you use
Facebook. Indeed, there is barely any
-
point in not using Facebook these days, if
ever. Social media Borg: "Resistance is
-
futile!" So however, Facebook is trying to
get eyeballs on ads, so is Twitter and so
-
are Google. To do this, they fine-tuned the
content they show you to make it more
-
attractive to your eyes and by attractive
I do not mean pleasant. We humans have an
-
evolved automatic reflex to pay attention
to threats and horrors as well as
-
pleasurable stimuli and the algorithms,
that determine what they show us when we
-
look at Facebook or Twitter, take this bias
into account. You might react more
-
strongly to a public hanging in Iran or an
outrageous statement by Donald Trump than
-
to a couple kissing. The algorithm knows
and will show you whatever makes you pay
-
attention, not necessarily what you need or
want to see.
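A minimal sketch of such an attention-optimized ranking, assuming hypothetical model-predicted reaction scores; notice that the objective never distinguishes outrage from delight:

```python
from typing import Dict, List

def rank_feed(posts: List[Dict]) -> List[Dict]:
    def predicted_attention(post: Dict) -> float:
        # Anger holds the eye at least as well as pleasure, so it gets
        # no penalty - it is simply more predicted engagement.
        return (post["p_click"]
                + post["p_outrage_reaction"]
                + post["p_pleasant_reaction"])
    return sorted(posts, key=predicted_attention, reverse=True)

feed = rank_feed([
    {"id": "couple_kissing", "p_click": 0.20,
     "p_outrage_reaction": 0.01, "p_pleasant_reaction": 0.30},
    {"id": "outrageous_statement", "p_click": 0.25,
     "p_outrage_reaction": 0.40, "p_pleasant_reaction": 0.02},
])
print([p["id"] for p in feed])  # the outrage takes the top slot
```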
-
So this brings me to another point about
computerized AI as opposed to corporate
-
AI. AI algorithms tend to embody the
prejudices and beliefs of either the
-
programmers, or the data set
the AI was trained on.
-
A couple of years ago I ran across an
account of a webcam, developed by mostly
-
pale-skinned Silicon Valley engineers, that
had difficulty focusing or achieving correct
-
color balance, when pointed at dark-skinned
faces.
-
That's an example of human-programmer-
induced bias: they didn't have a wide
-
enough test set and didn't recognize that
they were inherently biased towards
-
expecting people to have pale skin. But
with today's deep learning, bias can creep
-
in via the datasets the neural networks are
trained on, even without the programmers
-
intending it. Microsoft's first foray into
a conversational chat bot driven by
-
machine learning, Tay, was yanked
offline within days last year, because
-
4chan and reddit based trolls discovered,
that they could train it towards racism and
-
sexism for shits and giggles. Just imagine
you're a poor naive innocent AI who's just
-
been switched on and you're hoping to pass
your Turing test and what happens? 4chan
-
decide to play with your head.
laughing
-
I got to feel sorry for Tay.
Now, humans may be biased,
-
but at least individually we're
accountable and if somebody gives you
-
racist or sexist abuse to your face, you
can complain or maybe punch them. It's
-
impossible to punch a corporation and it
may not even be possible to identify the
-
source of unfair bias, when you're dealing
with a machine learning system. AI based
-
systems that instantiate existing
prejudices make social change harder.
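A minimal sketch of how that kind of bias arises from data rather than code, using a made-up toy "detector" and invented numbers:

```python
# A toy "face detector" that learns one brightness threshold from its
# training set. Trained almost only on pale-skinned samples, it fails on
# darker faces without any programmer intending that outcome.

def train_threshold(brightness_samples):
    # The model accepts anything at least as bright as the dimmest
    # training face - it is exactly as broad as its training data.
    return min(brightness_samples)

training_set = [0.70, 0.75, 0.80, 0.85]  # mostly pale-skinned test subjects
threshold = train_threshold(training_set)

for face in [0.78, 0.35]:                # a pale face, a dark-skinned face
    print(f"brightness {face:.2f}: detected={face >= threshold}")
# The second face is missed: bias inherited from the data, not the code.
```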
-
Traditional advertising works by playing
on the target customer's insecurity and
-
fear as much as their aspirations. And fear
of a loss of social status and privileges
-
is a powerful stressor. Fear and xenophobia
are useful tools for attracting advertising..
-
ah, eyeballs.
What happens when we get pervasive social
-
networks, that have learned biases against
say Feminism or Islam or melanin? Or deep
-
learning systems, trained on datasets
contaminated by racist dipshits and their
-
propaganda? Deep learning systems like the
ones inside Facebook, that determine which
-
stories to show you to get you to pay as
much attention as possible to the adverts?
-
I think you probably have an inkling of
where this is going. Now, if you
-
think, this is sounding a bit bleak and
unpleasant, you'd be right. I write sci-fi.
-
You read or watch or play sci-fi. We're
acculturated to think of science and
-
technology as good things that make life
better, but this ain't always so. Plenty of
-
technologies have historically been
heavily regulated or even criminalized for
-
good reason and once you get past the
reflexive indignation at any criticism of
-
technology and progress, you might agree
with me, that it is reasonable to ban
-
individuals from owning nuclear weapons or
nerve gas. Less obviously, they may not be
-
weapons, but we've banned
chlorofluorocarbon refrigerants, because
-
they were building up in the high
stratosphere and destroying the ozone
-
layer that protects us from UVB radiation.
We banned tetraethyl lead in
-
gasoline, because it poisoned people and
led to a crime wave. These are not
-
weaponized technologies, but they have
horrible side effects. Now, nerve gas and
-
leaded gasoline were 1930s chemical
technologies, promoted by 1930s
-
corporations. Halogenated refrigerants and
nuclear weapons are totally 1940s. ICBMs
-
date to the 1950s. You know, I have
difficulty seeing why people are getting
-
so worked up over North Korea. North Korea
has reached 1953-level parity - be terrified
-
and hide under the bed!
I submit that the 21st century is throwing
-
up dangerous new technologies, just as our
existing strategies for regulating very
-
slow AIs have proven to be inadequate. And
I don't have an answer to how we regulate
-
new technologies, I just want to flag it up
as a huge social problem that is going to
-
affect the coming century.
I'm now going to give you four examples of
-
new types of AI application that are
going to warp our societies even more
-
badly than the old slow AIs have done.
This isn't an exhaustive list, this is just
-
some examples I pulled out of
my ass. We need to work out a general
-
strategy for getting on top of this sort
of thing before they get on top of us and
-
I think, this is actually a very urgent
problem. So I'm just going to give you this
-
list of dangerous new technologies that
are arriving now, or coming, and send you
-
away to think about what to do next. I
mean, we are activists here, we should be
-
thinking about this and planning what
to do. Now, the first nasty technology I'd
-
like to talk about, is political hacking
tools that rely on social graph-directed
-
propaganda. This is low-hanging fruit
after the electoral surprises of 2016.
-
Cambridge Analytica pioneered the use of
deep learning by scanning the Facebook and
-
Twitter social graphs to identify voters'
political affiliations, simply by looking
-
at what tweets or Facebook comments they
liked. They were able to identify
-
individuals, with a high degree of
precision, who were vulnerable to
-
persuasion and who lived in electorally
sensitive districts. They then canvassed
-
them with propaganda, that targeted their
personal hot-button issues to change their
-
electoral intentions. The tools developed
by web advertisers to sell products have
-
now been weaponized for political purposes
and the amount of personal information
-
about our affiliations that we expose on
social media, makes us vulnerable. There is
-
mounting evidence that the last U.S.
Presidential election, like the British
-
referendum on leaving the EU, was subject
to foreign cyberwar attack via
-
weaponized social media, as was the most
recent French Presidential election.
-
In fact, if we remember the leak of emails
from the Macron campaign, it turns out that
-
many of those emails were false, because
the Macron campaign anticipated that they
-
would be attacked and an email trove would
be leaked in the last days before the
-
election. So they deliberately set up
false emails that would be hacked and then
-
leaked and then could be discredited. It
gets twisty fast. Now I'm kind of biting
-
my tongue and trying, not to take sides
here. I have my own political affiliation
-
after all, and I'm not terribly mainstream.
But if social media companies don't work
-
out how to identify and flag micro-
targeted propaganda, then democratic
-
institutions will stop working and elections
will be replaced by victories for whoever
-
can buy the most trolls. This won't
simply be billionaires like the Koch
-
brothers and Robert Mercer in the U.S.
throwing elections to whoever will
-
hand them the biggest tax cuts. Russian
military cyber war doctrine calls for the
-
use of social media to confuse and disable
perceived enemies, in addition to the
-
increasingly familiar use of zero-day
exploits for espionage, and of spear
-
phishing and distributed denial-of-service
attacks on our infrastructure, which are
-
also practiced by Western agencies. The problem is,
once the Russians have demonstrated that
-
this is an effective tactic, the use of
propaganda bot armies in cyber war will go
-
global. And at that point, our social
discourse will be irreparably poisoned.
-
Incidentally, I'd like to add - as another
aside like the Elon Musk thing - I hate
-
the cyber prefix! It usually indicates,
that whoever's using it has no idea what
-
they're talking about.
applause, laughter
-
Unfortunately, much as the way the term
hacker was corrupted from its original
-
meaning in the 1990s, the term cyber war
has, it seems, stuck and it's now an
-
actual thing that we can point to and say:
"This is what we're talking about". So I'm
-
afraid, we're stuck with this really
horrible term. But that's a digression, I
-
should get back on topic, because I've only
got 20 minutes to go.
-
Now, the second threat that we need to
think about regulating, or controlling, is
-
an adjunct to deep learning targeted
propaganda: it's the use of neural network
-
generated false video media. We're used to
photoshopped images these days, but faking
-
video and audio takes it to the next
level. Luckily, faking video and audio is
-
labor-intensive, isn't it? Well nope, not
anymore. We're seeing the first generation
-
of AI assisted video porn, in which the
faces of film stars are mapped onto those
-
of other people in a video clip, using
software rather than a laborious human
-
process.
A properly trained neural network
-
recognizes faces and maps the face
of the Hollywood star they want to put
-
into a porn movie onto
the face of the porn star in the porn clip
-
and suddenly you have "Oh dear God, get it
out of my head" - no, not gonna give you
-
any examples. Let's just say it's bad
stuff.
-
laughs
Meanwhile we have WaveNet, a system
-
for generating realistic-sounding speech
in the voice of any human speaker a neural
-
network has been trained to mimic.
We can now put words into
-
other people's mouths realistically
without employing a voice actor. This
-
stuff is still geek intensive. It requires
relatively expensive GPUs or cloud
-
computing clusters, but in less than a
decade it'll be out in the wild, turned
-
into something, any damn script kiddie can
use and just about everyone will be able
-
to fake up a realistic video of someone
they don't like doing something horrible.
-
I mean, Donald Trump in the White House. I
can't help but hope that out there
-
somewhere there's some geek like Steve
Bannon with a huge rack of servers who's
-
faking it all, but no. Now, also we've
already seen alarm this year over bizarre
-
YouTube channels that attempt to monetize
children's TV brands by scraping the video
-
content of legitimate channels and adding
their own advertising and keywords on top
-
before reposting it. This is basically
YouTube spam.
-
Many of these channels are shaped by
paperclip-maximizing advertising AIs that
-
are simply trying to maximize their search
ranking on YouTube and it's entirely
-
algorithmic: you have a whole list of
keywords, you permute them, you slap
-
them on top of existing popular videos and
re-upload the videos. Once you add neural
-
network driven tools for inserting
character A into pirated video B
-
for click-maximizing bots,
things are gonna get very weird, though. And
-
they're gonna get even weirder, when these
tools are deployed for political gain.
-
We tend - being primates, that evolved 300
thousand years ago in a smartphone-free
-
environment - to evaluate the inputs from
our eyes and ears much less critically
-
than what random strangers on the Internet
tell us in text. We're already too
-
vulnerable to fake news as it is. Soon
they'll be coming for us, armed with
-
believable video evidence. The Smart Money
says that by 2027 you won't be able to
-
believe anything you see in video, unless
there are cryptographic signatures on it,
-
linking it back to the camera that shot
the raw feed. But you know how good most
-
people are at using encryption - it's going to
be chaos!
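A minimal sketch of the core signing step, assuming a per-device key pair and using the pyca/cryptography library; a real scheme would also need key attestation, secure hardware and per-frame chaining:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()        # stand-in for a device key

def sign_frame(raw_frame: bytes) -> bytes:
    digest = hashlib.sha256(raw_frame).digest()  # hash the raw sensor data
    return camera_key.sign(digest)               # signature travels with the clip

frame = b"raw sensor bytes of one video frame"
signature = sign_frame(frame)

# Anyone holding the camera's public key can check provenance; verify()
# raises InvalidSignature if the frame was altered after capture.
camera_key.public_key().verify(signature, hashlib.sha256(frame).digest())
print("frame verifies against the camera's public key")
```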
-
So, paperclip maximizers with focus on
eyeballs are very 20th century. The new
-
generation is going to be focusing on our
nervous system. Advertising as an industry
-
can only exist because of a quirk of our
nervous system, which is that we're
-
susceptible to addiction. Be it
tobacco, gambling or heroin, we
-
recognize addictive behavior, when we see
it. Well, do we? It turns out the human
-
brain's reward feedback loops are
relatively easy to game. Large
-
corporations like Zynga - producers of
FarmVille - exist solely because of it.
-
Free-to-use social media platforms like
Facebook and Twitter are dominant precisely
-
because they're structured to reward
frequent short bursts of interaction and
-
to generate emotional engagement - not
necessarily positive emotions, anger and
-
hatred are just as good when it comes to
attracting eyeballs for advertisers.
-
Smartphone addiction is a side effect of
advertising as a revenue model. Frequent
-
short bursts of interaction to keep us
coming back for more. Now a newish
-
development, thanks to deep learning again -
I keep coming back to deep learning,
-
don't I? - use of neural networks in a
manner that Marvin Minsky never envisaged,
-
back when he was deciding that the
Perceptron was where it began and ended
-
and it couldn't do anything.
Well, we have neuroscientists now, who've
-
mechanized the process of making apps more
addictive. Dopamine Labs is one startup
-
that provides tools to app developers to
make any app more addictive, as well as to
-
reduce the desire to continue
participating in a behavior if it's
-
undesirable, if the app developer actually
wants to help people kick the habit. This
-
goes way beyond automated A/B testing. A/B
testing allows developers to plot a binary
-
tree path between options, moving towards a
single desired goal. But true deep
-
learning, addictiveness maximizers, can
optimize for multiple attractors in
-
parallel. The more users you've got on
your app, the more effectively you can work
-
out what attracts them, and tune it to
focus on extra addictive characteristics.
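A minimal sketch of the difference, with hypothetical knobs and random stand-in measurements (nobody's actual product):

```python
import random

def ab_test(engagement_a: float, engagement_b: float) -> str:
    # One binary decision toward one goal, then you stop.
    return "A" if engagement_a > engagement_b else "B"

def multi_attractor_step(weights, observed, lr=0.1):
    # Many knobs (notification cadence, streaks, social approval...)
    # nudged in parallel, release after release, toward whatever hooks us.
    return {k: w + lr * observed[k] for k, w in weights.items()}

weights = {"notifications": 1.0, "streaks": 1.0, "social_approval": 1.0}
for _ in range(3):  # each cycle tunes every attractor at once
    observed = {k: random.uniform(-1.0, 1.0) for k in weights}
    weights = multi_attractor_step(weights, observed)
print(ab_test(0.21, 0.24), weights)
```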
-
Now, going by their public face, the folks
at Dopamine Labs seem to have ethical
-
qualms about the misuse of addiction
maximizers. But neuroscience isn't a
-
secret and sooner or later some really
unscrupulous sociopaths will try to see
-
how far they can push it. So let me give
you a specific imaginary scenario: Apple
-
have put a lot of effort into making real-
time face recognition work on the iPhone X
-
and it's going to be everywhere on
everybody's phone in another couple of
-
years. You can't fool an iPhone X with a
photo or even a simple mask. It does depth
-
mapping to ensure, your eyes are in the
right place and can tell whether they're
-
open or closed. It recognizes your face
from underlying bone structure through
-
makeup and bruises. It's running
continuously, checking pretty much as often
-
as every time you'd hit the home button on
a more traditional smartphone UI and it
-
can see where your eyeballs are pointing.
The purpose of a face recognition system
-
is to provide continuous, real-time
authentication when you're
-
using a device - not just entering a PIN or
typing a password or using a two-factor
-
authentication pad, but the device knows
that you are its authorized user on a
-
continuous basis and if somebody grabs
your phone and runs away with it, it'll
-
know immediately that it's been stolen - it
sees the face of the thief.
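A minimal sketch of that continuous-authentication loop; capture_face_embedding(), lock_device() and all the numbers are hypothetical stand-ins, not a real phone API:

```python
import time

ENROLLED = [0.12, 0.80, 0.33]  # stand-in for the owner's face template
THRESHOLD = 0.4

def distance(a, b):            # toy embedding distance
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def continuous_auth(capture_face_embedding, lock_device):
    while True:                # identity is re-checked constantly,
        face = capture_face_embedding()           # not once at unlock
        if distance(face, ENROLLED) > THRESHOLD:
            lock_device()      # a thief's face locks the phone at once
            return
        time.sleep(1.0)

# Demo with a fake camera that is suddenly held by someone else:
frames = iter([[0.13, 0.79, 0.34], [0.90, 0.10, 0.55]])  # owner, then thief
continuous_auth(lambda: next(frames), lambda: print("device locked"))
```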
-
However, your phone monitoring your facial
expressions and correlating against app
-
usage has other implications. Your phone
will be aware of precisely what you like
-
to look at on its screen.
We may well have sufficient insight on the
-
part of the phone to identify whether
you're happy or sad, bored or engaged.
-
With addiction seeking deep learning tools
and neural network generated images, those
-
synthetic videos I was talking about, it's
in principle entirely possible to
-
feed you an endlessly escalating payload
of arousal-inducing inputs. It might be
-
Facebook or Twitter messages, optimized to
produce outrage, or it could be porn
-
generated by AI to appeal to kinks you
don't even consciously know you have.
-
But either way, the app now owns your
central nervous system and you will be
-
monetized. And finally, I'd like to raise a
really hair-raising specter that goes well
-
beyond the use of deep learning and
targeted propaganda and cyber war. Back in
-
2011, an obscure Russian software house
launched an iPhone app for pickup artists
-
called 'Girls Around Me'. Spoiler: Apple
pulled it like a hot potato as soon as
-
word got out that it existed. Now, Girls
Around Me worked out where the user was
-
using GPS, then queried Foursquare
and Facebook for people matching a simple
-
relational search: single females (per
Facebook relationship status) who had
-
checked in, or been checked in by their
friends, in your vicinity on Foursquare.
-
The app then displayed their locations on a
map along with links to their social media
-
profiles. If they were doing it today, the
interface would be gamified, showing strike
-
rates and a leaderboard and flagging
targets who succumbed to harassment as
-
easy lays.
But these days, the cool kids and single
-
adults are all using dating apps with a
missing vowel in the name, only a creeper
-
would want something like Girls Around Me,
right? Unfortunately, there are much, much
-
nastier uses of scraping social media
than finding potential victims for serial
-
rapists. Does your social media profile
indicate your political or religious
-
affiliation? No? Cambridge Analytica can
work them out with 99.9% precision
-
anyway, so don't worry about that. We
already have you pegged. Now add a service
-
that can identify people's affiliation and
location and you have a beginning of a
-
flash mob app, one that will show people
like us and people like them on a
-
hyperlocal map.
Imagine you're a young female and a
-
supermarket like Target has figured out
from your purchase patterns, that you're
-
pregnant, even though you don't know it
yet. This actually happened in 2011. Now
-
imagine, that all the anti-abortion
campaigners in your town have an app
-
called "Babies Risk" on their phones.
Someone has paid for the analytics feed
-
from the supermarket and every time you go
near a family planning clinic, a group of
-
unfriendly anti-abortion protesters
somehow miraculously show up and swarm
-
you. Or imagine you're male and gay and
the "God hates fags"-crowd has invented a
-
100% reliable gaydar app, based on your
Grindr profile, and is getting their fellow
-
travelers to queer-bash gay men - only when
they're alone or outnumbered by ten to
-
one. That's the special horror of precise
geolocation: not only do you always know
-
where you are, the AIs know where you are
and some of them aren't friendly. Or
-
imagine you're in Pakistan and Christian-
Muslim tensions are rising, or you're a
-
Democrat in rural Alabama - you know, the
possibilities are endless. Someone out
-
there is working on this. A geolocation
aware, social media scraping deep learning
-
application, that uses a gamified
competitive interface to reward its
-
players for joining in acts of mob
violence against whoever the app developer
-
hates.
Probably it has an innocuous seeming, but
-
highly addictive training mode, to get the
users accustomed to working in teams and
-
obeying the app's instructions. Think
Ingress or Pokemon Go. Then at some pre-
-
planned zero-hour, it switches mode and
starts rewarding players for violence,
-
players who have been primed to think of
their targets as vermin by a steady drip
-
feed of micro-targeted dehumanizing
propaganda inputs, delivered over a period
-
of months. And the worst bit of this picture?
The app developer isn't even a
-
nation-state trying to disrupt its enemies
or an extremist political group trying to
-
murder gays, Jews or Muslims. It's just a
Paperclip Maximizer doing what it does
-
and you are the paper. Welcome to the 21st
century.
-
applause
Uhm...
-
Thank you.
-
ongoing applause
We have a little time for questions. Do
-
you have a microphone for the audience? Do
we have any questions? ... OK.
-
Herald: So you are doing a Q&A?
CS: Hmm?
-
Herald: So you are doing a Q&A. Well if
there are any questions, please come
-
forward to the microphones, numbers 1
through 4 and ask.
-
Mic 1: Do you really think it's all
bleak and dystopian like you described
-
it, because I also think the future can be
bright, looking at the internet with open
-
source and like, it's all growing and going
faster and faster in a good
-
direction. So what do you think about
the balance here?
-
CS: sighs Basically, I think the
problem is, that about 3% of us
-
are sociopaths or psychopaths, who spoil
everything for the other 97% of us.
-
Wouldn't it be great if somebody could
write an app that would identify all the
-
psychopaths among us and let the rest of
us just kill them?
-
laughing, applause
Yeah, we have all the
-
tools to make a utopia, we have it now
today. A bleak miserable grim meathook
-
future is not inevitable, but it's up to
us to use these tools to prevent the bad
-
stuff happening and to do that, we have to
anticipate the bad outcomes and work to
-
try and figure out a way to deal with
them. That's what this talk is. I'm trying
-
to do a bit of a wake-up call and get
people thinking about how much worse
-
things can get and what we need to do to
prevent it from happening. What I was
-
saying earlier about our regulatory
systems being broken, stands. How do we
-
regulate the deep learning technologies?
This is something we need to think about.
-
H: Okay mic number two.
Mic 2: Hello? ... When you talk about
-
corporations as AIs, where do you see that
analogy you're making? Do you see them as
-
literally AIs or figuratively?
CS: Almost literally. If
-
you're familiar with philosopher
John Searle's Chinese room paradox
-
from the 1970s, by which he attempted to
prove that artificial intelligence was
-
impossible, a corporation is very much the
Chinese room implementation of an AI. It
-
is a bunch of human beings in a box. You
put inputs into the box, you get outputs
-
out of the box. Does it matter whether it's
all happening in software or whether
-
there's a human being following rules
in between to assemble the output? I don't
-
see there being much of a difference.
Now you have to look at a company at a
-
very abstract level to view it as an AI,
but more and more companies are automating
-
their internal business processes. You've
got to view this as an ongoing trend. And
-
yeah, they have many of the characteristics
of an AI.
-
Herald: Okay mic number four.
Mic 4: Hi, thanks for your talk.
-
You probably heard of the Time Well
Spent and Design Ethics movements that
-
are alerting developers to dark patterns
in UI design, where
-
these people design apps to manipulate
people. I'm curious if you find any
-
optimism in the possibility of amplifying
or promoting those movements.
-
CS: Uhm, you know, I knew about dark
patterns, I knew about people trying to
-
optimize them, I wasn't actually aware
there were movements against this. Okay I'm
-
53 years old, I'm out of touch. I haven't
actually done any serious programming in
-
15 years. I'm so rusty, my rust has rust on
it. But, you know, it is a worrying trend
-
and actual activism is a good start.
Raising awareness of hazards and of what
-
we should be doing about them, is a good
start. And I would classify this actually
-
as a moral issue. We need to..
corporations evaluate everything in terms
-
of revenue, because it's
equivalent to breathing, they have to
-
breathe. Corporations don't usually have
any moral framework. We're humans, we need
-
a moral framework to operate within. Even
if it's as simple as first "Do no harm!"
-
or "Do not do unto others that which would
be repugnant if it was done unto you!",
-
the Golden Rule. So, yeah, we should be
trying to spread awareness of this about
-
and working with program developers, to
look to remind them that they are human
-
beings and have to be humane in their
application of technology, is a necessary
-
start.
applause
-
H: Thank you! Mic 3?
Mic 3: Hi! Yeah, I think that folks,
-
especially in this sort of crowd, tend to
jump to the "just get off of
-
Facebook"-solution first, for a lot of
these things that are really, really
-
scary. But what worries me, is how we sort
of silence ourselves when we do that.
-
After the election I actually got back on
Facebook, because the Women's March was
-
mostly organized through Facebook. But
yeah, I think we need a lot more
-
regulation, but we can't just throw it
out. We're.. because it's..
-
social media is the only... really good
platform we have right now
-
to express ourselves, to
have our rules, or power.
-
CS: Absolutely. I have made
a point of not really using Facebook
-
for many, many, many years.
I have a Facebook page simply to
-
shut up the young marketing people at my
publisher, who used to pop up every two
-
years and say: "Why don't you have a
Facebook. Everybody's got a Facebook."
-
No, I've had a blog since 1993!
laughing
-
But no, I'm gonna have to use Facebook,
because these days, not using Facebook is
-
like not using email. You're cutting off
your nose to spite your face. What we
-
really do need to be doing, is looking for
some form of effective oversight of
-
Facebook and particularly, of how they..
the algorithms that show you content, are
-
written. What I was saying earlier about
how algorithms are not as transparent as
-
human beings to people, applies hugely to
them. And both, Facebook and Twitter
-
control the information
that they display to you.
-
Herald: Okay, I'm terribly sorry for all the
people queuing at the mics now, we're out
-
of time. I also have to apologize, I
announced, that this talk was being held in
-
English, but it was being held in English.
the latter pronounced with a hard G
-
Thank you very much, Charles Stross!
-
CS: Thank you very much for
listening to me, it's been a pleasure!
-
applause
-
postroll music
-
subtitles created by c3subtitles.de
in the year 2018