1
00:00:00,000 --> 00:00:18,721
34C3 preroll music
2
00:00:18,721 --> 00:00:24,996
Herald: Humans of Congress, it is my
pleasure to announce the next speaker.
3
00:00:24,996 --> 00:00:31,749
I was supposed to pick out a few awards or
something, to actually present what he's
4
00:00:31,750 --> 00:00:36,419
done in his life, but I can
only say: he's one of us!
5
00:00:36,419 --> 00:00:40,149
applause
6
00:00:40,149 --> 00:00:43,279
Charles Stross!
ongoing applause
7
00:00:43,279 --> 00:00:45,969
Charles Stross: Hi! Is this on?
Good. Great.
8
00:00:45,969 --> 00:00:50,679
I'm really pleased to be here and I
want to start by apologizing for my total
9
00:00:50,679 --> 00:00:57,629
lack of German. So this talk is gonna be in
English. Good morning. I'm Charlie Stross
10
00:00:57,629 --> 00:01:03,559
and it's my job to tell lies for money, or
rather, I write science fiction, much of
11
00:01:03,559 --> 00:01:07,860
it about the future, which in recent
years has become ridiculously hard to
12
00:01:07,860 --> 00:01:16,090
predict. In this talk I'm going to talk
about why. Now our species, Homo sapiens
13
00:01:16,090 --> 00:01:21,990
sapiens, is about 300,000 years old. It
used to be about 200,000 years old,
14
00:01:21,990 --> 00:01:26,189
but it grew an extra 100,000
years in the past year because of new
15
00:01:26,189 --> 00:01:31,329
archaeological discoveries, I mean, go
figure. For all but the last three
16
00:01:31,329 --> 00:01:36,759
centuries or so - of that span, however -
predicting the future was really easy. If
17
00:01:36,759 --> 00:01:41,369
you were an average person - as opposed to
maybe a king or a pope - natural disasters
18
00:01:41,369 --> 00:01:46,690
aside, everyday life 50 years in the
future would resemble everyday life 50
19
00:01:46,690 --> 00:01:56,089
years in your past. Let that sink in for a
bit. For 99.9% of human existence on this
20
00:01:56,089 --> 00:02:01,929
earth, the future was static. Then
something changed and the future began to
21
00:02:01,929 --> 00:02:06,600
shift increasingly rapidly, until, in the
present day, things are moving so fast,
22
00:02:06,600 --> 00:02:13,440
it's barely possible to anticipate trends
from one month to the next. Now as an
23
00:02:13,440 --> 00:02:17,830
eminent computer scientist Edsger Dijkstra
once remarked, computer science is no more
24
00:02:17,830 --> 00:02:23,620
about computers than astronomy is about
building big telescopes. The same can be
25
00:02:23,620 --> 00:02:28,760
said of my field of work, writing science
fiction. Sci-fi is rarely about science
26
00:02:28,760 --> 00:02:33,690
and even more rarely about predicting the
future, but sometimes we dabble in
27
00:02:33,690 --> 00:02:42,140
futurism, and lately futurism has gotten
really, really weird. Now when I write a
28
00:02:42,140 --> 00:02:47,230
near future work of fiction, one set, say, a
decade hence, there used to be a recipe I
29
00:02:47,230 --> 00:02:53,770
could follow, that worked eerily well. Simply put:
90% of the next decade's stuff is
30
00:02:53,770 --> 00:02:57,420
already here around us today.
Buildings are designed to
31
00:02:57,420 --> 00:03:03,040
last many years, automobiles have a design
life of about a decade, so half the cars on
32
00:03:03,040 --> 00:03:10,240
the road in 2027 are already there now -
they're new. People? There'll be some new
33
00:03:10,240 --> 00:03:15,520
faces, aged 10 and under, and some older
people will have died, but most of us
34
00:03:15,520 --> 00:03:22,840
adults will still be around, albeit older
and grayer. This is the 90% of a near
35
00:03:22,840 --> 00:03:30,820
future that's already here today. After
the already existing 90%, another 9% of a
36
00:03:30,820 --> 00:03:35,650
near future a decade hence used to be
easily predictable: you look at trends
37
00:03:35,650 --> 00:03:39,970
dictated by physical limits, such as
Moore's law and you look at Intel's road
38
00:03:39,970 --> 00:03:44,000
map and you use a bit of creative
extrapolation and you won't go too far
39
00:03:44,000 --> 00:03:52,510
wrong. If I predict - wearing my futurology
hat - that in 2027 LTE cellular phones will
40
00:03:52,510 --> 00:03:57,560
be ubiquitous, 5G will be available for
high bandwidth applications and there will be
41
00:03:57,560 --> 00:04:01,790
fallback to some kind of satellite data
service at a price, you probably won't
42
00:04:01,790 --> 00:04:04,330
laugh at me.
I mean, it's not like I'm predicting that
43
00:04:04,330 --> 00:04:08,110
airliners will fly slower and Nazis will
take over the United States, is it?
44
00:04:08,110 --> 00:04:09,900
laughing
45
00:04:09,900 --> 00:04:15,210
And therein lies the problem. There is
the remaining 1% of what Donald Rumsfeld
46
00:04:15,210 --> 00:04:20,940
called the "unknown unknowns", what throws off
all predictions. As it happens, airliners
47
00:04:20,940 --> 00:04:26,060
today are slower than they were in the
1970s and don't get me started about the Nazis,
48
00:04:26,060 --> 00:04:31,860
I mean, nobody in 2007 was expecting a Nazi
revival in 2017, were they?
49
00:04:31,860 --> 00:04:37,320
Only this time, Germans get to be the good guys.
laughing, applause
50
00:04:37,320 --> 00:04:42,260
So. My recipe for fiction set 10 years
in the future used to be:
51
00:04:42,260 --> 00:04:47,360
"90% is already here,
9% is not here yet but predictable
52
00:04:47,360 --> 00:04:53,730
and 1% is 'who ordered that?'" But unfortunately
the ratios have changed, I think we're now
53
00:04:53,730 --> 00:04:59,660
down to maybe 80% already here - climate
change takes a huge toll on architecture -
54
00:04:59,660 --> 00:05:05,760
then 15% not here yet, but predictable and
a whopping 5% of utterly unpredictable
55
00:05:05,760 --> 00:05:12,740
deep craziness. Now... before I carry on
with this talk, I want to spend a minute or
56
00:05:12,740 --> 00:05:18,530
two ranting loudly and ruling out the
singularity. Some of you might assume, that
57
00:05:18,530 --> 00:05:23,590
as the author of books like "Singularity
Sky" and "Accelerando",
58
00:05:23,590 --> 00:05:28,220
I expect an impending technological
singularity,
59
00:05:28,220 --> 00:05:32,090
that we will develop self-improving
artificial intelligence and mind uploading
60
00:05:32,090 --> 00:05:35,410
and the whole wish list of transhumanist
aspirations promoted by the likes of
61
00:05:35,410 --> 00:05:42,300
Ray Kurzweil, will come to pass. Unfortunately
this isn't the case. I think transhumanism
62
00:05:42,300 --> 00:05:49,050
is a warmed-over Christian heresy. While
its adherents tend to be outspoken atheists,
63
00:05:49,050 --> 00:05:51,910
they can't quite escape from the
history that gave rise to our current
64
00:05:51,910 --> 00:05:57,220
Western civilization. Many of you are
familiar with design patterns, an approach
65
00:05:57,220 --> 00:06:01,560
to software engineering that focuses on
abstraction and simplification, in order
66
00:06:01,560 --> 00:06:07,730
to promote reusable code. When you look at
the AI singularity as a narrative and
67
00:06:07,730 --> 00:06:11,790
identify the numerous places in their
story where the phrase "and then a miracle
68
00:06:11,790 --> 00:06:18,790
happens" occur, it becomes apparent pretty
quickly, that they've reinvented Christiantiy.
69
00:06:18,790 --> 00:06:19,460
applause
70
00:06:19,460 --> 00:06:25,330
Indeed, the wellsprings of
today's transhumanism draw on a long, rich
71
00:06:25,330 --> 00:06:29,990
history of Russian philosophy, exemplified
by the Russian Orthodox theologian Nikolai
72
00:06:29,990 --> 00:06:37,169
Fyodorovich Fedorov by way of his disciple
Konstantin Tsiolkovsky, whose derivation
73
00:06:37,169 --> 00:06:40,520
of a rocket equation makes him
essentially the father of modern space
74
00:06:40,520 --> 00:06:45,350
flight. Once you start probing the nether
regions of transhumanist thought and run
75
00:06:45,350 --> 00:06:49,800
into concepts like Roko's Basilisk - by the
way, any of you who didn't know about the
76
00:06:49,800 --> 00:06:54,120
Basilisk before, are now doomed to an
eternity in AI hell, terribly sorry - you
77
00:06:54,120 --> 00:06:57,889
realize they've mangled it to match some
of the nastier aspects of Presbyterian
78
00:06:57,889 --> 00:07:03,189
Protestantism. Now they basically invented
original sin and Satan in the guise of an
79
00:07:03,189 --> 00:07:09,270
AI that doesn't exist yet. It's.. kind of
peculiar. Anyway, my take on the
80
00:07:09,270 --> 00:07:12,949
singularity is: if something walks
like a duck and quacks like a duck, it's
81
00:07:12,949 --> 00:07:18,460
probably a duck. And if it looks like a
religion, it's probably a religion.
82
00:07:18,460 --> 00:07:23,280
I don't see much evidence for human-like,
self-directed artificial intelligences
83
00:07:23,280 --> 00:07:28,070
coming along any time soon, and a fair bit
of evidence that nobody except a few freaks
84
00:07:28,070 --> 00:07:32,150
in cognitive science departments even
wants it. I mean, if we invented an AI
85
00:07:32,150 --> 00:07:35,940
that was like a human mind, it would do the
AI equivalent of sitting on the sofa,
86
00:07:35,940 --> 00:07:39,455
munching popcorn and
watching the Super Bowl all day.
87
00:07:39,455 --> 00:07:43,261
It wouldn't be much use to us.
laughter, applause
88
00:07:43,261 --> 00:07:46,776
What we're getting instead,
is self-optimizing tools that defy
89
00:07:46,776 --> 00:07:51,220
human comprehension, but are not
in fact any more like our kind
90
00:07:51,220 --> 00:07:57,500
of intelligence than a Boeing 737 is like
a seagull. Boeing 737s and seagulls both
91
00:07:57,500 --> 00:08:04,240
fly, Boeing 737s don't lay eggs and shit
everywhere. So I'm going to wash my hands
92
00:08:04,240 --> 00:08:09,590
of the singularity as a useful explanatory
model of the future without further ado.
93
00:08:09,590 --> 00:08:14,670
I'm one of those vehement atheists as well
and I'm gonna try and offer you a better
94
00:08:14,670 --> 00:08:20,960
model for what's happening to us. Now, as
my fellow Scottish science fiction author
95
00:08:20,960 --> 00:08:27,130
Ken MacLeod likes to say "the secret
weapon of science fiction is history".
96
00:08:27,130 --> 00:08:31,229
History is, loosely speaking, the written
record of what and how people did things
97
00:08:31,229 --> 00:08:36,528
in past times. Times that have slipped out
of our personal memories. We science
98
00:08:36,528 --> 00:08:41,099
fiction writers tend to treat history as a
giant toy chest to raid, whenever we feel
99
00:08:41,099 --> 00:08:45,389
like telling a story. With a little bit of
history, it's really easy to whip up an
100
00:08:45,389 --> 00:08:49,019
entertaining yarn about a galactic empire
that mirrors the development and decline
101
00:08:49,019 --> 00:08:53,529
of the Habsburg Empire or to respin the
October Revolution as a tale of how Mars
102
00:08:53,529 --> 00:08:59,599
got its independence. But history is
useful for so much more than that.
103
00:08:59,599 --> 00:09:05,380
It turns out, that our personal memories
don't span very much time at all. I'm 53
104
00:09:05,380 --> 00:09:10,800
and I barely remember the 1960s. I only
remember the 1970s with the eyes of a 6 to
105
00:09:10,800 --> 00:09:17,580
16 year old. My father died this year,
aged 93, and he'd just about remembered the
106
00:09:17,580 --> 00:09:22,770
1930s. Only those of my father's
generation directly remember the Great
107
00:09:22,780 --> 00:09:29,079
Depression and can compare it to the
2007/08 global financial crisis.
108
00:09:29,079 --> 00:09:34,250
We Westerners tend to pay little attention
to cautionary tales told by 90-somethings.
109
00:09:34,250 --> 00:09:38,999
We're modern, we're change obsessed and we
tend to repeat our biggest social mistakes
110
00:09:38,999 --> 00:09:43,259
just as they slip out of living memory,
which means they recur on a timescale of
111
00:09:43,259 --> 00:09:47,680
70 to 100 years.
So if our personal memories are useless,
112
00:09:47,680 --> 00:09:52,330
we need a better toolkit
and history provides that toolkit.
113
00:09:52,330 --> 00:09:57,099
History gives us the perspective to see what
went wrong in the past and to look for
114
00:09:57,099 --> 00:10:01,509
patterns and check to see whether those
patterns are recurring in the present.
115
00:10:01,509 --> 00:10:06,689
Looking in particular at the history of the past two
to four hundred years, that age of rapidly
116
00:10:06,689 --> 00:10:11,800
increasing change that I mentioned at the
beginning. One glaring deviation
117
00:10:11,800 --> 00:10:16,929
from the norm of the preceding
3000 centuries is obvious, and that's
118
00:10:16,929 --> 00:10:22,059
the development of artificial intelligence,
which happened no earlier than 1553 and no
119
00:10:22,059 --> 00:10:29,359
later than 1844. I'm talking of course
about the very old, very slow AI's we call
120
00:10:29,359 --> 00:10:34,240
corporations. What lessons from the history
of a company can we draw that tell us
121
00:10:34,240 --> 00:10:38,490
about the likely behavior of the type of
artificial intelligence we're interested
122
00:10:38,490 --> 00:10:47,258
in here, today?
Well. Need a mouthful of water.
123
00:10:47,258 --> 00:10:51,618
Let me crib from Wikipedia for a moment.
124
00:10:51,618 --> 00:10:56,329
Wikipedia: "In the late 18th
century, Stewart Kyd, the author of the
125
00:10:56,329 --> 00:11:02,800
first treatise on corporate law in English,
defined a corporation as: 'a collection of
126
00:11:02,800 --> 00:11:08,290
many individuals united into one body,
under a special denomination, having
127
00:11:08,290 --> 00:11:13,779
perpetual succession under an artificial
form, and vested, by policy of the law, with
128
00:11:13,779 --> 00:11:20,201
the capacity of acting, in several respects,
as an individual, enjoying privileges and
129
00:11:20,201 --> 00:11:24,151
immunities in common, and of exercising a
variety of political rights, more or less
130
00:11:24,151 --> 00:11:28,510
extensive, according to the design of its
institution, or the powers conferred upon
131
00:11:28,510 --> 00:11:32,269
it, either at the time of its creation, or
at any subsequent period of its
132
00:11:32,269 --> 00:11:36,699
existence.'"
This was a late 18th century definition -
133
00:11:36,699 --> 00:11:42,509
sound like a piece of software to you?
In 1844, the British government passed the
134
00:11:42,509 --> 00:11:46,450
"Joint Stock Companies Act" which created
a register of companies and allowed any
135
00:11:46,450 --> 00:11:51,360
legal person, for a fee, to register a
company which in turn existed as a
136
00:11:51,360 --> 00:11:55,869
separate legal person. Prior to that point,
it required a Royal Charter or an act of
137
00:11:55,869 --> 00:12:00,420
Parliament to create a company.
Subsequently, the law was extended to limit
138
00:12:00,420 --> 00:12:04,680
the liability of individual shareholders
in the event of business failure and then both
139
00:12:04,680 --> 00:12:08,629
Germany and the United States added their
own unique twists to what today we see as
140
00:12:08,629 --> 00:12:14,360
the doctrine of corporate personhood.
Now, plenty of other things that
141
00:12:14,360 --> 00:12:18,740
happened between the 16th and 21st centuries
did change the shape of the world we live in.
142
00:12:18,740 --> 00:12:22,149
I've skipped the changes in
agricultural productivity that happened
143
00:12:22,149 --> 00:12:25,550
due to energy economics,
which finally broke the Malthusian trap
144
00:12:25,550 --> 00:12:29,129
our predecessors lived in.
This in turn broke the long-term
145
00:12:29,129 --> 00:12:32,681
cap on economic growth of about
0.1% per year
146
00:12:32,681 --> 00:12:35,681
in the absence of famines, plagues and
wars and so on.
147
00:12:35,681 --> 00:12:38,790
I've skipped the germ theory of diseases
and the development of trade empires
148
00:12:38,790 --> 00:12:42,760
in the age of sail and gunpowder,
that were made possible by advances
149
00:12:42,760 --> 00:12:45,119
in accurate time measurement.
150
00:12:45,119 --> 00:12:49,079
I've skipped the rise, and
hopefully decline, of the pernicious
151
00:12:49,079 --> 00:12:52,430
theory of scientific racism that
underpinned Western colonialism and the
152
00:12:52,430 --> 00:12:57,220
slave trade. I've skipped the rise of
feminism, the ideological position that
153
00:12:57,220 --> 00:13:02,470
women are human beings rather than
property and the decline of patriarchy.
154
00:13:02,470 --> 00:13:05,999
I've skipped the whole of the
Enlightenment and the Age of Revolutions,
155
00:13:05,999 --> 00:13:09,410
but this is a technocratic.. technocentric
Congress, so I want to frame this talk in
156
00:13:09,410 --> 00:13:14,869
terms of AI, which we all like to think we
understand. Here's the thing about these
157
00:13:14,869 --> 00:13:21,050
artificial persons we call corporations.
Legally, they're people. They have goals,
158
00:13:21,050 --> 00:13:25,957
they operate in pursuit of these goals,
they have a natural life cycle.
159
00:13:25,957 --> 00:13:33,230
In the 1950s, a typical U.S. corporation on the
S&P 500 Index had a life span of 60 years.
160
00:13:33,230 --> 00:13:37,805
Today it's down to less than 20 years.
This is largely due to predation.
161
00:13:37,805 --> 00:13:41,659
Corporations are cannibals, they eat
one another.
162
00:13:41,659 --> 00:13:45,849
They're also hive super organisms
like bees or ants.
163
00:13:45,849 --> 00:13:48,699
For the first century and a
half, they relied entirely on human
164
00:13:48,699 --> 00:13:52,399
employees for their internal operation,
but today they're automating their
165
00:13:52,399 --> 00:13:57,110
business processes very rapidly. Each
human is only retained so long as they can
166
00:13:57,110 --> 00:14:01,339
perform their assigned tasks more
efficiently than a piece of software
167
00:14:01,339 --> 00:14:05,439
and they can all be replaced by another
human, much as the cells in our own bodies
168
00:14:05,439 --> 00:14:09,799
are functionally interchangeable and a
group of cells can - in extremis - often be
169
00:14:09,799 --> 00:14:14,509
replaced by a prosthetic device.
To some extent, corporations can be
170
00:14:14,509 --> 00:14:19,019
trained to serve the personal desires of
their chief executives, but even CEOs can
171
00:14:19,019 --> 00:14:22,749
be dispensed with, if their activities
damage the corporation, as Harvey
172
00:14:22,749 --> 00:14:26,389
Weinstein found out a couple of months
ago.
173
00:14:26,389 --> 00:14:30,939
Finally, our legal environment today has
been tailored for the convenience of
174
00:14:30,939 --> 00:14:34,910
corporate persons, rather than human
persons, to the point where our governments
175
00:14:34,910 --> 00:14:40,379
now mimic corporations in many of their
internal structures.
176
00:14:40,379 --> 00:14:43,940
So, to understand where we're going, we
need to start by asking "What do our
177
00:14:43,940 --> 00:14:51,850
current actually existing AI overlords
want?"
178
00:14:51,850 --> 00:14:56,400
Now, Elon Musk, who I believe you've
all heard of, has an obsessive fear of one
179
00:14:56,400 --> 00:14:59,750
particular hazard of artificial
intelligence, which he conceives of as
180
00:14:59,750 --> 00:15:03,760
being a piece of software that functions
like a brain in a box, namely the
181
00:15:03,760 --> 00:15:09,859
Paperclip Optimizer or Maximizer.
A Paperclip Maximizer is a term of art for
182
00:15:09,859 --> 00:15:14,519
a goal seeking AI that has a single
priority, e.g., maximizing the
183
00:15:14,519 --> 00:15:19,759
number of paperclips in the universe. The
Paperclip Maximizer is able to improve
184
00:15:19,759 --> 00:15:24,110
itself in pursuit of its goal, but has no
ability to vary its goal, so will
185
00:15:24,124 --> 00:15:27,604
ultimately attempt to convert all the
metallic elements in the solar system into
186
00:15:27,604 --> 00:15:31,749
paperclips, even if this is obviously
detrimental to the well-being of the
187
00:15:31,749 --> 00:15:36,359
humans who set it this goal.
Unfortunately I don't think Musk
188
00:15:36,359 --> 00:15:41,169
is paying enough attention,
consider his own companies.
189
00:15:41,169 --> 00:15:45,480
Tesla isn't a Paperclip Maximizer, it's a
battery Maximizer.
190
00:15:45,480 --> 00:15:48,240
After all, a battery.. an
electric car is a battery with wheels and
191
00:15:48,240 --> 00:15:54,199
seats. SpaceX is an orbital payload
Maximizer, driving down the cost of space
192
00:15:54,199 --> 00:15:59,424
launches in order to encourage more sales
for the service it provides. SolarCity is
193
00:15:59,424 --> 00:16:05,700
a photovoltaic panel maximizer and so on.
All three of the.. Musk's very own slow AIs
194
00:16:05,700 --> 00:16:09,259
are based on an architecture, designed to
maximize return on shareholder
195
00:16:09,259 --> 00:16:13,180
investment, even if by doing so they cook
the planet the shareholders have to live
196
00:16:13,180 --> 00:16:15,979
on or turn the entire thing into solar
panels.
197
00:16:15,979 --> 00:16:19,479
But hey, if you're Elon Musk, that's okay,
you're gonna retire on Mars anyway.
198
00:16:19,479 --> 00:16:20,879
laughing
199
00:16:20,879 --> 00:16:25,139
By the way, I'm ragging on Musk in this
talk, simply because he's the current
200
00:16:25,139 --> 00:16:28,600
opinionated tech billionaire, who thinks
that disrupting a couple of industries
201
00:16:28,600 --> 00:16:33,579
entitles him to make headlines.
If this was 2007 and my focus slightly
202
00:16:33,579 --> 00:16:39,041
difference.. different, I'd be ragging on
Steve Jobs, and if this were 1997 my target
203
00:16:39,041 --> 00:16:42,259
would be Bill Gates.
Don't take it personally, Elon.
204
00:16:42,259 --> 00:16:44,109
laughing
205
00:16:44,109 --> 00:16:49,139
Back to topic. The problem with
corporations is that, despite their overt
206
00:16:49,139 --> 00:16:53,959
goals, whether they make electric vehicles
or beer or sell life insurance policies,
207
00:16:53,959 --> 00:16:59,778
they all have a common implicit Paperclip
Maximizer goal: to generate revenue. If
208
00:16:59,778 --> 00:17:03,729
they don't make money, they're eaten by a
bigger predator or they go bust. It's as
209
00:17:03,729 --> 00:17:07,929
vital to them as breathing is to us
mammals. They generally pursue their
210
00:17:07,929 --> 00:17:12,439
implicit goal - maximizing revenue - by
pursuing their overt goal.
211
00:17:12,439 --> 00:17:16,510
But sometimes they try instead to
manipulate their environment, to ensure
212
00:17:16,510 --> 00:17:22,970
that money flows to them regardless.
Human toolmaking culture has become very
213
00:17:22,970 --> 00:17:27,980
complicated over time. New technologies
always come with an attached implicit
214
00:17:27,980 --> 00:17:32,869
political agenda that seeks to extend the
use of the technology. Governments react
215
00:17:32,869 --> 00:17:37,230
to this by legislating to control new
technologies and sometimes we end up with
216
00:17:37,230 --> 00:17:41,630
industries actually indulging in legal
duels through the regulatory mechanism of
217
00:17:41,630 --> 00:17:49,582
law to determine, who prevails. For
example, consider the automobile. You
218
00:17:49,582 --> 00:17:53,910
can't have mass automobile transport
without gas stations and fuel distribution
219
00:17:53,910 --> 00:17:57,160
pipelines.
These in turn require access to whoever
220
00:17:57,160 --> 00:18:01,179
owns the land the oil is extracted from
under and before you know it, you end up
221
00:18:01,179 --> 00:18:06,660
with a permanent army in Iraq and a client
dictatorship in Saudi Arabia. Closer to
222
00:18:06,660 --> 00:18:12,380
home, automobiles imply jaywalking laws and
drink-driving laws. They affect Town
223
00:18:12,380 --> 00:18:16,750
Planning regulations and encourage
suburban sprawl, the construction of human
224
00:18:16,750 --> 00:18:21,429
infrastructure on a scale required by
automobiles, not pedestrians.
225
00:18:21,429 --> 00:18:24,979
This in turn is bad for competing
transport technologies, like buses or
226
00:18:24,979 --> 00:18:31,720
trams, which work best in cities with a
high population density. So to get laws
227
00:18:31,720 --> 00:18:35,399
that favour the automobile in place,
providing an environment conducive to
228
00:18:35,399 --> 00:18:40,409
doing business, automobile companies spend
money on political lobbyists and when they
229
00:18:40,409 --> 00:18:46,620
can get away with it, on bribes. Bribery
needn't be blatant of course. E.g.,
230
00:18:46,620 --> 00:18:51,929
the reforms of the British railway network
in the 1960s dismembered many branch lines
231
00:18:51,929 --> 00:18:56,408
and coincided with a surge in road
building and automobile sales. These
232
00:18:56,408 --> 00:19:01,138
reforms were orchestrated by Transport
Minister Ernest Marples, who was purely a
233
00:19:01,138 --> 00:19:05,950
politician. The fact that he accumulated a
considerable personal fortune during this
234
00:19:05,950 --> 00:19:10,269
period by buying shares in motorway
construction corporations has nothing to
235
00:19:10,269 --> 00:19:18,120
do with it. So, no conflict of interest
there - now if the automobile industry
236
00:19:18,120 --> 00:19:23,230
can't be considered a pure Paperclip
Maximizer... sorry, the automobile
237
00:19:23,230 --> 00:19:27,510
industry in isolation can't be considered
a pure Paperclip Maximizer. You have to
238
00:19:27,510 --> 00:19:31,659
look at it in conjunction with the fossil
fuel industries, the road construction
239
00:19:31,659 --> 00:19:37,809
business, the accident insurance sector
and so on. When you do this, you begin to
240
00:19:37,809 --> 00:19:42,740
see the outline of a paperclip-maximizing
ecosystem that invades far-flung lands and
241
00:19:42,740 --> 00:19:47,167
grinds up and kills around one and a
quarter million people per year. That's
242
00:19:47,167 --> 00:19:50,951
the global death toll from automobile
accidents currently, according to the World
243
00:19:50,951 --> 00:19:56,429
Health Organization. It rivals the First
World War on an ongoing permanent basis
244
00:19:56,429 --> 00:20:01,510
and these are all side effects of its
drive to sell you a new car. Now,
245
00:20:01,510 --> 00:20:06,639
automobiles aren't of course a total
liability. Today's cars are regulated
246
00:20:06,639 --> 00:20:11,302
stringently for safety and, in theory, to
reduce toxic emissions. They're fast,
247
00:20:11,302 --> 00:20:16,990
efficient and comfortable. We can thank
legally mandated regulations imposed by
248
00:20:16,990 --> 00:20:22,029
governments for this, of course. Go back
to the 1970s and cars didn't have crumple
249
00:20:22,029 --> 00:20:27,199
zones, go back to the 50s and they didn't
come with seat belts as standard. In the
250
00:20:27,199 --> 00:20:33,690
1930s, indicators - turn signals - and brakes
on all four wheels were optional and your
251
00:20:33,690 --> 00:20:38,850
best hope of surviving a 50 km/h-crash was
to be thrown out of a car and land somewhere
252
00:20:38,850 --> 00:20:43,059
without breaking your neck.
Regulatory agencies are our current
253
00:20:43,059 --> 00:20:47,380
political system's tool of choice for
preventing Paperclip Maximizers from
254
00:20:47,380 --> 00:20:55,159
running amok. Unfortunately, regulators
don't always work. The first failure mode
255
00:20:55,159 --> 00:20:59,730
of regulators that you need to be aware of
is regulatory capture, where regulatory
256
00:20:59,730 --> 00:21:05,038
bodies are captured by the industries they
control. Ajit Pai, head of the American Federal
257
00:21:05,038 --> 00:21:09,460
Communications Commission, which just voted
to eliminate net neutrality rules in the
258
00:21:09,460 --> 00:21:14,101
U.S., has worked as Associate
General Counsel for Verizon Communications
259
00:21:14,101 --> 00:21:19,291
Inc, the largest current descendant of the
Bell Telephone system's monopoly. After
260
00:21:19,291 --> 00:21:24,690
the AT&T antitrust lawsuit, the Bell
network was broken up into the seven baby
261
00:21:24,690 --> 00:21:32,279
bells. They've now pretty much reformed
and reaggregated and Verizon is the largest current one.
262
00:21:32,279 --> 00:21:36,099
Why should someone with a transparent
interest in a technology corporation end
263
00:21:36,099 --> 00:21:41,129
up running a regulator that tries to
control the industry in question? Well, if
264
00:21:41,129 --> 00:21:44,919
you're going to regulate a complex
technology, you need to recruit regulators
265
00:21:44,919 --> 00:21:48,580
from people who understand it.
Unfortunately, most of those people are
266
00:21:48,580 --> 00:21:53,520
industry insiders. Ajit Pai is clearly
very much aware of how Verizon is
267
00:21:53,520 --> 00:21:58,450
regulated, very insightful into its
operations and wants to do something about
268
00:21:58,450 --> 00:22:02,509
it - just not necessarily in the public
interest.
269
00:22:02,509 --> 00:22:11,029
applause
When regulators end up staffed by people
270
00:22:11,029 --> 00:22:15,370
drawn from the industries they're supposed
to control, they frequently end up working
271
00:22:15,370 --> 00:22:20,019
with their former office mates, to make it
easier to turn a profit, either by raising
272
00:22:20,019 --> 00:22:24,350
barriers to keep new insurgent companies
out or by dismantling safeguards that
273
00:22:24,350 --> 00:22:31,610
protect the public. Now a second problem
is regulatory lag where a technology
274
00:22:31,610 --> 00:22:35,240
advances so rapidly, that regulations are
laughably obsolete by the time they're
275
00:22:35,240 --> 00:22:40,450
issued. Consider the EU directive
requiring cookie notices on websites to
276
00:22:40,450 --> 00:22:45,709
caution users, that their activities are
tracked and their privacy may be violated.
277
00:22:45,709 --> 00:22:51,419
This would have been a good idea in 1993
or 1996, but unfortunately it didn't show up
278
00:22:51,419 --> 00:22:57,690
until 2011. Fingerprinting and tracking
mechanisms have nothing to do with cookies
279
00:22:57,690 --> 00:23:04,149
and were already widespread by then. Tim
Berners-Lee observed in 1995 that five
280
00:23:04,149 --> 00:23:07,780
years worth of change was happening on the
web for every 12 months of real-world
281
00:23:07,780 --> 00:23:12,389
time. By that yardstick, the cookie law
came out nearly a century too late to do
282
00:23:12,389 --> 00:23:19,269
any good. Again, look at Uber. This month,
the European Court of Justice ruled that
283
00:23:19,269 --> 00:23:24,520
Uber is a taxi service, not a Web App. This
is arguably correct - the problem is, Uber
284
00:23:24,520 --> 00:23:28,970
has spread globally since it was founded
eight years ago, subsidizing its drivers to
285
00:23:28,970 --> 00:23:33,580
put competing private hire firms out of
business. Whether this is a net good for
286
00:23:33,580 --> 00:23:38,659
society is debatable. The problem is, a
taxi driver can get awfully hungry if she
287
00:23:38,659 --> 00:23:42,429
has to wait eight years for a court ruling
against a predator intent on disrupting
288
00:23:42,429 --> 00:23:49,549
her business. So, to recap: firstly, we
already have Paperclip Maximizers and
289
00:23:49,549 --> 00:23:54,980
Musk's AI alarmism is curiously mirror
blind. Secondly, we have mechanisms for
290
00:23:54,980 --> 00:23:59,509
keeping Paperclip Maximizers in check, but
they don't work very well against AIs that
291
00:23:59,509 --> 00:24:03,490
deploy the dark arts, especially
corruption and bribery and they're even
292
00:24:03,490 --> 00:24:07,769
worse against true AIs that evolve too
fast for human-mediated mechanisms like
293
00:24:07,769 --> 00:24:13,529
the law to keep up with. Finally, unlike
the naive vision of a Paperclip Maximizer
294
00:24:13,529 --> 00:24:19,190
that maximizes only paperclips, existing
AIs have multiple agendas, their overt
295
00:24:19,190 --> 00:24:24,210
goal, but also profit seeking, expansion
into new markets and to accommodate the
296
00:24:24,210 --> 00:24:27,500
desires of whoever is currently in the
driving seat.
297
00:24:27,500 --> 00:24:30,500
sighs
298
00:24:30,500 --> 00:24:36,190
Now, this brings me to the next major
heading in this dismaying laundry list:
299
00:24:36,190 --> 00:24:42,560
how it all went wrong. It seems to me that
our current political upheavals are best
300
00:24:42,560 --> 00:24:49,379
understood as arising from the capture
of post-1917 democratic institutions by
301
00:24:49,379 --> 00:24:54,811
large-scale AIs. Everywhere you look, you
see voters protesting angrily against an
302
00:24:54,811 --> 00:24:58,779
entrenched establishment, that seems
determined to ignore the wants and needs
303
00:24:58,779 --> 00:25:04,330
of their human constituents in favor of
those of the machines. The Brexit upset
304
00:25:04,330 --> 00:25:07,130
was largely the result of a protest vote
against the British political
305
00:25:07,130 --> 00:25:11,340
establishment, the election of Donald
Trump likewise, with a side order of racism
306
00:25:11,340 --> 00:25:15,940
on top. Our major political parties are
led by people who are compatible with the
307
00:25:15,940 --> 00:25:20,789
system as it exists today, a system that
has been shaped over decades by
308
00:25:20,789 --> 00:25:26,010
corporations distorting our government and
regulatory environments. We humans live in
309
00:25:26,010 --> 00:25:30,700
a world shaped by the desires and needs of
AIs, forced to live on their terms, and we're
310
00:25:30,700 --> 00:25:34,010
taught, that we're valuable only to the
extent we contribute to the rule of the
311
00:25:34,010 --> 00:25:40,259
machines. Now, this is 34C3 and we're
all more interested in computers and
312
00:25:40,259 --> 00:25:44,389
communications technology than this
historical crap. But as I said earlier,
313
00:25:44,389 --> 00:25:49,149
history is a secret weapon, if you know how
to use it. What history is good for, is
314
00:25:49,149 --> 00:25:53,080
enabling us to spot recurring patterns
that repeat across timescales outside our
315
00:25:53,080 --> 00:25:58,129
personal experience. And if we look at our
historical very slow AIs, what do we learn
316
00:25:58,129 --> 00:26:04,880
from them about modern AI and how it's
going to behave? Well to start with, our
317
00:26:04,880 --> 00:26:09,879
AIs have been warped. The new AIs,
the electronic ones instantiated in our
318
00:26:09,879 --> 00:26:14,639
machines, have been warped by a terrible
fundamentally flawed design decision back
319
00:26:14,639 --> 00:26:20,289
in 1995, that has damaged democratic
political processes, crippled our ability
320
00:26:20,289 --> 00:26:24,570
to truly understand the world around us
and led to the angry upheavals and upsets
321
00:26:24,570 --> 00:26:30,440
of our present decade. That mistake was
the decision to fund the build-out of a
322
00:26:30,440 --> 00:26:34,370
public World Wide Web, as opposed to the
earlier government-funded corporate and
323
00:26:34,370 --> 00:26:38,230
academic Internet by
monetizing eyeballs through advertising
324
00:26:38,230 --> 00:26:45,260
revenue. The ad-supported web we're used
to today wasn't inevitable. If you recall
325
00:26:45,260 --> 00:26:49,510
the web as it was in 1994, there were very
few ads at all and not much in the way of
326
00:26:49,510 --> 00:26:55,720
commerce. 1995 was the year the World Wide
Web really came to public attention in the
327
00:26:55,720 --> 00:27:00,570
anglophone world and consumer-facing
websites began to appear. Nobody really
328
00:27:00,570 --> 00:27:04,490
knew, how this thing was going to be paid
for. The original .com bubble was all
329
00:27:04,490 --> 00:27:07,850
about working out how to monetize the web
for the first time and a lot of people
330
00:27:07,850 --> 00:27:12,840
lost their shirts in the process. A naive
initial assumption was that the
331
00:27:12,840 --> 00:27:17,440
transaction cost of setting up a TCP/IP
connection over modem was too high to
332
00:27:17,440 --> 00:27:22,500
support.. to be supported by per-use micro
billing for web pages. So instead of
333
00:27:22,500 --> 00:27:27,059
charging people a fraction of a euro cent
for every page view, we'd bill customers
334
00:27:27,059 --> 00:27:31,570
indirectly, by shoving advertising banners
in front of their eyes and hoping they'd
335
00:27:31,570 --> 00:27:39,029
click through and buy something.
Unfortunately, advertising is an
336
00:27:39,029 --> 00:27:45,509
industry, one of those pre-existing very
slow AI ecosystems I already alluded to.
337
00:27:45,509 --> 00:27:49,700
Advertising tries to maximize its hold on
the attention of the minds behind each
338
00:27:49,700 --> 00:27:53,750
human eyeball. The coupling of advertising
with web search was an inevitable
339
00:27:53,750 --> 00:27:57,991
outgrowth - I mean, how better to attract
the attention of reluctant subjects than to
340
00:27:57,991 --> 00:28:01,450
find out what they're really interested in
seeing and selling ads that relate to
341
00:28:01,450 --> 00:28:07,070
those interests. The problem of applying
the paperclip maximizer approach to
342
00:28:07,070 --> 00:28:12,509
monopolizing eyeballs, however, is that
eyeballs are a limited, scarce resource.
343
00:28:12,509 --> 00:28:18,500
There are only 168 hours in every week, in
which I can gaze at banner ads. Moreover,
344
00:28:18,500 --> 00:28:22,289
most ads are irrelevant to my interests and
it doesn't matter, how often you flash an ad
345
00:28:22,289 --> 00:28:28,129
for dog biscuits at me, I'm never going to
buy any. I have a cat. To make best
346
00:28:28,129 --> 00:28:31,990
revenue-generating use of our eyeballs,
it's necessary for the ad industry to
347
00:28:31,990 --> 00:28:36,789
learn, who we are and what interests us and
to target us increasingly minutely in hope
348
00:28:36,789 --> 00:28:39,549
of hooking us with stuff we're attracted
to.
349
00:28:39,549 --> 00:28:43,879
In other words: the ad industry is a
paperclip maximizer, but for its success,
350
00:28:43,879 --> 00:28:49,990
it relies on developing a theory of mind
that applies to human beings.
351
00:28:49,990 --> 00:28:52,990
sighs
352
00:28:52,990 --> 00:28:55,739
Do I need to divert on to the impassioned
rant about the hideous corruption
353
00:28:55,739 --> 00:28:59,759
and evil that is Facebook?
Audience: Yes!
354
00:28:59,759 --> 00:29:03,289
CS: Okay, somebody said yes.
I'm guessing you've heard it all before,
355
00:29:03,289 --> 00:29:07,429
but the too long, didn't read summary is:
Facebook is as much a search engine as
356
00:29:07,429 --> 00:29:12,029
Google or Amazon. Facebook searches are
optimized for faces, that is for human
357
00:29:12,029 --> 00:29:15,649
beings. If you want to find someone you
fell out of touch with thirty years ago,
358
00:29:15,649 --> 00:29:20,610
Facebook probably knows where they live,
what their favorite color is, what sized
359
00:29:20,610 --> 00:29:24,390
shoes they wear and what they said about
you to your friends behind your back all
360
00:29:24,390 --> 00:29:30,090
those years ago, that made you cut them off.
Even if you don't have a Facebook account,
361
00:29:30,090 --> 00:29:34,230
Facebook has a You account, a hole in their
social graph with a bunch of connections
362
00:29:34,230 --> 00:29:38,669
pointing in to it and your name tagged on
your friends photographs. They know a lot
363
00:29:38,669 --> 00:29:42,529
about you and they sell access to their
social graph to advertisers, who then
364
00:29:42,529 --> 00:29:47,400
target you, even if you don't think you use
Facebook. Indeed, there is barely any
365
00:29:47,400 --> 00:29:52,270
point in not using Facebook these days, if
ever. Social media Borg: "Resistance is
366
00:29:52,270 --> 00:30:00,510
futile!" So however, Facebook is trying to
get eyeballs on ads, so is Twitter and so
367
00:30:00,510 --> 00:30:05,970
is Google. To do this, they fine-tune the
content they show you to make it more
368
00:30:05,970 --> 00:30:11,520
attractive to your eyes and by attractive
I do not mean pleasant. We humans have an
369
00:30:11,520 --> 00:30:15,139
evolved automatic reflex to pay attention
to threats and horrors as well as
370
00:30:15,139 --> 00:30:19,830
pleasurable stimuli, and the algorithms
that determine what they show us when we
371
00:30:19,830 --> 00:30:24,259
look at Facebook or Twitter, take this bias
into account. You might react more
372
00:30:24,259 --> 00:30:28,490
strongly to a public hanging in Iran or an
outrageous statement by Donald Trump than
373
00:30:28,490 --> 00:30:32,100
to a couple kissing. The algorithm knows
and will show you whatever makes you pay
374
00:30:32,100 --> 00:30:38,389
attention, not necessarily what you need or
want to see.
375
00:30:38,389 --> 00:30:42,510
So this brings me to another point about
computerized AI as opposed to corporate
376
00:30:42,510 --> 00:30:47,260
AI. AI algorithms tend to embody the
prejudices and beliefs of either the
377
00:30:47,260 --> 00:30:52,759
programmers, or the data set
the AI was trained on.
378
00:30:52,759 --> 00:30:56,250
A couple of years ago I ran across an
account of a webcam, developed by mostly
379
00:30:56,250 --> 00:31:00,860
pale-skinned Silicon Valley engineers, that
had difficulty focusing or achieving correct
380
00:31:00,860 --> 00:31:04,500
color balance, when pointed at dark-skinned
faces.
381
00:31:04,500 --> 00:31:08,400
That's an example of human programmer
induced bias: they didn't have a wide
382
00:31:08,400 --> 00:31:13,249
enough test set and didn't recognize that
they were inherently biased towards
383
00:31:13,249 --> 00:31:19,290
expecting people to have pale skin. But
with today's deep learning, bias can creep
384
00:31:19,290 --> 00:31:24,009
in via the datasets the neural networks are
trained on, even without the programmers
385
00:31:24,009 --> 00:31:28,659
intending it. Microsoft's first foray into
a conversational chat bot driven by
386
00:31:28,659 --> 00:31:33,330
machine learning, Tay, was yanked
offline within days last year, because
387
00:31:33,330 --> 00:31:37,149
4chan and reddit based trolls discovered,
that they could train it towards racism and
388
00:31:37,149 --> 00:31:43,649
sexism for shits and giggles. Just imagine
you're a poor naive innocent AI who's just
389
00:31:43,649 --> 00:31:48,480
been switched on and you're hoping to pass
your Turing test and what happens? 4chan
390
00:31:48,480 --> 00:31:53,189
decide to play with your head.
laughing
391
00:31:53,189 --> 00:31:57,519
I got to feel sorry for Tay.
Now, humans may be biased,
392
00:31:57,519 --> 00:32:01,059
but at least individually we're
accountable and if somebody gives you
393
00:32:01,059 --> 00:32:06,499
racist or sexist abuse to your face, you
can complain or maybe punch them. It's
394
00:32:06,499 --> 00:32:10,740
impossible to punch a corporation and it
may not even be possible to identify the
395
00:32:10,740 --> 00:32:16,029
source of unfair bias, when you're dealing
with a machine learning system. AI based
396
00:32:16,029 --> 00:32:21,739
systems that instantiate existing
prejudices make social change harder.
397
00:32:21,739 --> 00:32:25,470
Traditional advertising works by playing
on the target customer's insecurity and
398
00:32:25,470 --> 00:32:30,750
fear as much as their aspirations. And fear
of a loss of social status and privileges
399
00:32:30,750 --> 00:32:36,240
is a powerful stressor. Fear and xenophobia
are useful tools for attracting advertising..
400
00:32:36,240 --> 00:32:39,580
ah, eyeballs.
What happens when we get pervasive social
401
00:32:39,580 --> 00:32:44,379
networks that have learned biases against,
say, Feminism or Islam or melanin? Or deep
402
00:32:44,379 --> 00:32:48,230
learning systems, trained on datasets
contaminated by racist dipshits and their
403
00:32:48,230 --> 00:32:53,369
propaganda? Deep learning systems like the
ones inside Facebook, that determine which
404
00:32:53,369 --> 00:32:58,210
stories to show you to get you to pay as
much attention as possible to the adverts?
405
00:32:58,210 --> 00:33:04,841
I think, you probably have an inkling of
how.. where this is now going. Now, if you
406
00:33:04,841 --> 00:33:08,990
think, this is sounding a bit bleak and
unpleasant, you'd be right. I write sci-fi.
407
00:33:08,990 --> 00:33:12,669
You read or watch or play sci-fi. We're
acculturated to think of science and
408
00:33:12,669 --> 00:33:19,420
technology as good things that make life
better, but this ain't always so. Plenty of
409
00:33:19,420 --> 00:33:23,140
technologies have historically been
heavily regulated or even criminalized for
410
00:33:23,140 --> 00:33:27,690
good reason and once you get past any
reflexive indignation at criticism of
411
00:33:27,690 --> 00:33:32,740
technology and progress, you might agree
with me, that it is reasonable to ban
412
00:33:32,740 --> 00:33:39,139
individuals from owning nuclear weapons or
nerve gas. Less obviously, they may not be
413
00:33:39,139 --> 00:33:42,820
weapons, but we've banned
chlorofluorocarbon refrigerants, because
414
00:33:42,820 --> 00:33:45,570
they were building up in the high
stratosphere and destroying the ozone
415
00:33:45,570 --> 00:33:51,360
layer that protects us from UVB radiation.
We banned tetraethyl lead in
416
00:33:51,360 --> 00:33:57,540
gasoline, because it poisoned people and
led to a crime wave. These are not
417
00:33:57,540 --> 00:34:03,240
weaponized technologies, but they have
horrible side effects. Now, nerve gas and
418
00:34:03,240 --> 00:34:08,690
leaded gasoline were 1930s chemical
technologies, promoted by 1930s
419
00:34:08,690 --> 00:34:15,390
corporations. Halogenated refrigerants and
nuclear weapons are totally 1940s. ICBMs
420
00:34:15,390 --> 00:34:19,210
date to the 1950s. You know, I have
difficulty seeing why people are getting
421
00:34:19,210 --> 00:34:26,040
so worked up over North Korea. North Korea
reaches 1953 level parity - be terrified
422
00:34:26,040 --> 00:34:30,760
and hide under the bed!
I submit that the 21st century is throwing
423
00:34:30,760 --> 00:34:35,030
up dangerous new technologies, just as our
existing strategies for regulating very
424
00:34:35,030 --> 00:34:42,060
slow AIs have proven to be inadequate. And
I don't have an answer to how we regulate
425
00:34:42,060 --> 00:34:45,889
new technologies, I just want to flag it up
as a huge social problem that is going to
426
00:34:45,889 --> 00:34:50,380
affect the coming century.
I'm now going to give you four examples of
427
00:34:50,380 --> 00:34:54,290
new types of AI application that are
going to warp our societies even more
428
00:34:54,290 --> 00:35:00,800
badly than the old slow AIs have done.
This isn't an exhaustive list, this is just
429
00:35:00,800 --> 00:35:05,160
some examples, I mean, I pulled out of
my ass. We need to work out a general
430
00:35:05,160 --> 00:35:08,430
strategy for getting on top of this sort
of thing before they get on top of us and
431
00:35:08,430 --> 00:35:12,040
I think, this is actually a very urgent
problem. So I'm just going to give you this
432
00:35:12,040 --> 00:35:18,110
list of dangerous new technologies that
are arriving now, or coming, and send you
433
00:35:18,110 --> 00:35:22,150
away to think about what to do next. I
mean, we are activists here, we should be
434
00:35:22,150 --> 00:35:27,690
thinking about this and planning what
to do. Now, the first nasty technology I'd
435
00:35:27,690 --> 00:35:31,600
like to talk about is political hacking
tools that rely on social graph directed
436
00:35:31,600 --> 00:35:39,500
propaganda. This is low-hanging fruit
after the electoral surprises of 2016.
437
00:35:39,500 --> 00:35:43,280
Cambridge Analytica pioneered the use of
deep learning by scanning the Facebook and
438
00:35:43,280 --> 00:35:47,750
Twitter social graphs to identify voters'
political affiliations by simply looking
439
00:35:47,750 --> 00:35:52,950
at what tweets or Facebook comments they
liked. They were able to do this to identify
440
00:35:52,950 --> 00:35:56,400
individuals with a high degree of
precision, who were vulnerable to
441
00:35:56,400 --> 00:36:01,490
persuasion and who lived in electorally
sensitive districts. They then canvassed
442
00:36:01,490 --> 00:36:06,910
them with propaganda, that targeted their
personal hot-button issues to change their
443
00:36:06,910 --> 00:36:12,060
electoral intentions. The tools developed
by web advertisers to sell products have
444
00:36:12,060 --> 00:36:16,260
now been weaponized for political purposes
and the amount of personal information
445
00:36:16,260 --> 00:36:21,260
about our affiliations that we expose on
social media, makes us vulnerable. Aside: as in
446
00:36:21,260 --> 00:36:24,970
the last U.S. Presidential election, there is
mounting evidence that the British
447
00:36:24,970 --> 00:36:28,510
referendum on leaving the EU was subject
to foreign cyber war attack via
448
00:36:28,510 --> 00:36:33,250
weaponized social media, as was the most
recent French Presidential election.
449
00:36:33,250 --> 00:36:38,080
In fact, if we remember the leak of emails
from the Macron campaign, it turns out that
450
00:36:38,080 --> 00:36:42,020
many of those emails were false, because
the Macron campaign anticipated that they
451
00:36:42,020 --> 00:36:47,120
would be attacked and an email trove would
be leaked in the last days before the
452
00:36:47,120 --> 00:36:51,260
election. So they deliberately set up
false emails that would be hacked and then
453
00:36:51,260 --> 00:37:00,610
leaked and then could be discredited. It
gets twisty fast. Now I'm kind of biting
454
00:37:00,610 --> 00:37:04,650
my tongue and trying not to take sides
here. I have my own political affiliation
455
00:37:04,650 --> 00:37:09,770
after all, and I'm not terribly mainstream.
But if social media companies don't work
456
00:37:09,770 --> 00:37:14,220
out how to identify and flag micro-
targeted propaganda, then democratic
457
00:37:14,220 --> 00:37:18,140
institutions will stop working and elections
will be replaced by victories for whoever
458
00:37:18,140 --> 00:37:23,200
can buy the most trolls. This won't
simply be billionaires but.. like the Koch
459
00:37:23,200 --> 00:37:26,470
brothers and Robert Mercer from the U.S.
throwing elections to whoever will
460
00:37:26,470 --> 00:37:31,120
hand them the biggest tax cuts. Russian
military cyber war doctrine calls for the
461
00:37:31,120 --> 00:37:35,730
use of social media to confuse and disable
perceived enemies, in addition to the
462
00:37:35,730 --> 00:37:39,900
increasingly familiar use of zero-day
exploits for espionage, such as spear
463
00:37:39,900 --> 00:37:43,200
phishing and distributed denial-of-service
attacks, on our infrastructure, which are
464
00:37:43,200 --> 00:37:49,260
practiced by Western agencies. Problem is,
once the Russians have demonstrated that
465
00:37:49,260 --> 00:37:53,990
this is an effective tactic, the use of
propaganda bot armies in cyber war will go
466
00:37:53,990 --> 00:38:00,400
global. And at that point, our social
discourse will be irreparably poisoned.
467
00:38:00,400 --> 00:38:04,990
Incidentally, I'd like to add - as another
aside like the Elon Musk thing - I hate
468
00:38:04,990 --> 00:38:09,680
the cyber prefix! It usually indicates
that whoever's using it has no idea what
469
00:38:09,680 --> 00:38:15,940
they're talking about.
applause, laughter
470
00:38:15,940 --> 00:38:21,120
Unfortunately, much like the way the term
hacker was corrupted from its original
471
00:38:21,120 --> 00:38:27,020
meaning in the 1990s, the term cyber war
has, it seems, stuck and it's now an
472
00:38:27,020 --> 00:38:31,500
actual thing that we can point to and say:
"This is what we're talking about". So I'm
473
00:38:31,500 --> 00:38:35,510
afraid, we're stuck with this really
horrible term. But that's a digression, I
474
00:38:35,510 --> 00:38:38,780
should get back on topic, because I've only
got 20 minutes to go.
475
00:38:38,780 --> 00:38:46,240
Now, the second threat that we need to
think about regulating, or controlling, is
476
00:38:46,240 --> 00:38:50,031
an adjunct to deep learning targeted
propaganda: it's the use of neural network
477
00:38:50,031 --> 00:38:56,940
generated false video media. We're used to
photoshopped images these days, but faking
478
00:38:56,940 --> 00:39:02,510
video and audio takes it to the next
level. Luckily, faking video and audio is
479
00:39:02,510 --> 00:39:08,560
labor-intensive, isn't it? Well nope, not
anymore. We're seeing the first generation
480
00:39:08,560 --> 00:39:13,510
of AI assisted video porn, in which the
faces of film stars are mapped onto those
481
00:39:13,510 --> 00:39:17,370
of other people in a video clip, using
software rather than a laborious human
482
00:39:17,370 --> 00:39:22,490
process.
A properly trained neural network
483
00:39:22,490 --> 00:39:29,430
recognizes faces and transforms the face
of the Hollywood star, they want to put
484
00:39:29,430 --> 00:39:35,190
into a porn movie, into the face of - onto
the face of the porn star in the porn clip
485
00:39:35,190 --> 00:39:40,520
and suddenly you have "Oh dear God, get it
out of my head" - no, not gonna give you
486
00:39:40,520 --> 00:39:43,500
any examples. Let's just say it's bad
stuff.
487
00:39:43,500 --> 00:39:47,260
laughs
Meanwhile we have WaveNet, a system
488
00:39:47,260 --> 00:39:51,120
for generating realistic-sounding speech
in the voice of a human speaker that a neural
489
00:39:51,120 --> 00:39:56,160
network has been trained to mimic.
We can now put words into
490
00:39:56,160 --> 00:40:01,250
other people's mouths realistically
without employing a voice actor. This
491
00:40:01,250 --> 00:40:06,650
stuff is still geek intensive. It requires
relatively expensive GPUs or cloud
492
00:40:06,650 --> 00:40:10,910
computing clusters, but in less than a
decade it'll be out in the wild, turned
493
00:40:10,910 --> 00:40:15,500
into something, any damn script kiddie can
use and just about everyone will be able
494
00:40:15,500 --> 00:40:18,940
to fake up a realistic video of someone
they don't like doing something horrible.
495
00:40:18,940 --> 00:40:26,970
I mean, Donald Trump in the White House. I
can't help but hope that out there
496
00:40:26,970 --> 00:40:30,670
somewhere there's some geek like Steve
Bannon with a huge rack of servers who's
497
00:40:30,670 --> 00:40:40,980
faking it all, but no. Now, we've also
already seen alarm this year over bizarre
498
00:40:40,980 --> 00:40:45,090
YouTube channels that attempt to monetize
children's TV brands by scraping the video
499
00:40:45,090 --> 00:40:50,220
content of legitimate channels and adding
their own advertising and keywords on top
500
00:40:50,220 --> 00:40:54,110
before reposting it. This is basically
your YouTube spam.
501
00:40:54,110 --> 00:40:58,521
Many of these channels are shaped by
paperclip maximizing advertising AIs, but
502
00:40:58,521 --> 00:41:03,670
are simply trying to maximise their search
ranking on YouTube and it's entirely
503
00:41:03,670 --> 00:41:08,470
algorithmic: you have a whole list of
keywords, you perm them, you take them, you slap
504
00:41:08,470 --> 00:41:15,300
them on top of existing popular videos and
re-upload the videos. Once you add neural
505
00:41:15,300 --> 00:41:19,510
network driven tools for inserting
character A into pirated video B, to click
506
00:41:19,510 --> 00:41:24,420
maximize.. for click maximizing bots,
things are gonna get very weird, though. And
507
00:41:24,420 --> 00:41:28,980
they're gonna get even weirder, when these
tools are deployed for political gain.
508
00:41:28,980 --> 00:41:34,830
We tend - being primates, that evolved 300
thousand years ago in a smartphone free
509
00:41:34,830 --> 00:41:39,950
environment - to evaluate the inputs from
our eyes and ears much less critically
510
00:41:39,950 --> 00:41:44,380
than what random strangers on the Internet
tell us in text. We're already too
511
00:41:44,380 --> 00:41:48,930
vulnerable to fake news as it is. Soon
they'll be coming for us, armed with
512
00:41:48,930 --> 00:41:54,830
believable video evidence. The Smart Money
says that by 2027 you won't be able to
513
00:41:54,830 --> 00:41:58,550
believe anything you see in video, unless
there are cryptographic signatures on it,
514
00:41:58,550 --> 00:42:02,530
linking it back to the camera that shot
the raw feed. But you know how good most
515
00:42:02,530 --> 00:42:08,150
people are at using encryption - it's going to
be chaos!
516
00:42:08,150 --> 00:42:14,070
So, paperclip maximizers with focus on
eyeballs are very 20th century. The new
517
00:42:14,070 --> 00:42:19,750
generation is going to be focusing on our
nervous system. Advertising as an industry
518
00:42:19,750 --> 00:42:22,510
can only exist because of a quirk of our
nervous system, which is that we're
519
00:42:22,510 --> 00:42:26,680
susceptible to addiction. Be it
tobacco, gambling or heroin, we
520
00:42:26,680 --> 00:42:31,980
recognize addictive behavior when we see
it. Well, do we? It turns out the human
521
00:42:31,980 --> 00:42:36,250
brain's reward feedback loops are
relatively easy to game. Large
522
00:42:36,250 --> 00:42:41,400
corporations like Zynga - producers of
FarmVille - exist solely because of it,
523
00:42:41,400 --> 00:42:45,580
free to use social media platforms like
Facebook and Twitter, are dominant precisely
524
00:42:45,580 --> 00:42:50,100
because they're structured to reward
frequent short bursts of interaction and
525
00:42:50,100 --> 00:42:54,790
to generate emotional engagement - not
necessarily positive emotions, anger and
526
00:42:54,790 --> 00:42:59,660
hatred are just as good when it comes to
attracting eyeballs for advertisers.
527
00:42:59,660 --> 00:43:04,510
Smartphone addiction is a side effect of
advertising as a revenue model. Frequent
528
00:43:04,510 --> 00:43:09,850
short bursts of interaction to keep us
coming back for more. Now a new.. newish
529
00:43:09,850 --> 00:43:13,720
development, thanks to deep learning again -
I keep coming back to deep learning,
530
00:43:13,720 --> 00:43:19,290
don't I? - use of neural networks in a
manner that Marvin Minsky never envisaged,
531
00:43:19,290 --> 00:43:22,530
back when he was deciding that the
Perceptron was where it began and ended
532
00:43:22,530 --> 00:43:27,360
and it couldn't do anything.
Well, we have neuroscientists now, who've
533
00:43:27,360 --> 00:43:33,680
mechanized the process of making apps more
addictive. Dopamine Labs is one startup
534
00:43:33,680 --> 00:43:38,030
that provides tools to app developers to
make any app more addictive, as well as to
535
00:43:38,030 --> 00:43:41,130
reduce the desire to continue
participating in a behavior if it's
536
00:43:41,130 --> 00:43:46,520
undesirable, if the app developer actually
wants to help people kick the habit. This
537
00:43:46,520 --> 00:43:52,060
goes way beyond automated A/B testing. A/B
testing allows developers to plot a binary
538
00:43:52,060 --> 00:43:58,760
tree path between options, moving towards a
single desired goal. But true deep
539
00:43:58,760 --> 00:44:03,640
learning addictiveness maximizers can
optimize for multiple attractors in
540
00:44:03,640 --> 00:44:10,011
parallel. The more users you've got on
your app, the more effectively you can work
541
00:44:10,011 --> 00:44:16,920
out what attracts them and train them to
focus on extra-addictive characteristics.
542
00:44:16,920 --> 00:44:20,610
Now, going by their public face, the folks
at Dopamine Labs seem to have ethical
543
00:44:20,610 --> 00:44:24,930
qualms about the misuse of addiction
maximizers. But neuroscience isn't a
544
00:44:24,930 --> 00:44:28,730
secret and sooner or later some really
unscrupulous sociopaths will try to see
545
00:44:28,730 --> 00:44:36,440
how far they can push it. So let me give
you a specific imaginary scenario: Apple
546
00:44:36,440 --> 00:44:41,400
have put a lot of effort into making real-
time face recognition work on the iPhone X
547
00:44:41,400 --> 00:44:44,940
and it's going to be everywhere on
everybody's phone in another couple of
548
00:44:44,940 --> 00:44:50,760
years. You can't fool an iPhone X with a
photo or even a simple mask. It does depth
549
00:44:50,760 --> 00:44:54,010
mapping to ensure your eyes are in the
right place and can tell whether they're
550
00:44:54,010 --> 00:44:58,200
open or closed. It recognizes your face
from underlying bone structure through
551
00:44:58,200 --> 00:45:02,760
makeup and bruises. It's running
continuously, checking pretty much as often
552
00:45:02,760 --> 00:45:06,840
as every time you'd hit the home button on
a more traditional smartphone UI and it
553
00:45:06,840 --> 00:45:13,750
can see where your eyeballs are pointing.
The purpose of a face recognition system
554
00:45:13,750 --> 00:45:19,980
is to provide real-time,
continuous authentication when you're
555
00:45:19,980 --> 00:45:23,960
using a device - not just entering a PIN or
a password or using a two-factor
556
00:45:23,960 --> 00:45:28,420
authentication pad, but the device knows
that you are its authorized user on a
557
00:45:28,420 --> 00:45:33,000
continuous basis and if somebody grabs
your phone and runs away with it, it'll
558
00:45:33,000 --> 00:45:38,050
know that it's been stolen immediately - it
sees the face of the thief.
559
00:45:38,050 --> 00:45:42,910
However, your phone monitoring your facial
expressions and correlating against app
560
00:45:42,910 --> 00:45:48,250
usage has other implications. Your phone
will be aware of precisely what you like
561
00:45:48,250 --> 00:45:53,550
to look at on your screen.. on its screen.
We may well have sufficient insight on the
562
00:45:53,550 --> 00:45:59,740
part of the phone to identify whether
you're happy or sad, bored or engaged.
563
00:45:59,740 --> 00:46:04,680
With addiction seeking deep learning tools
and neural network generated images, those
564
00:46:04,680 --> 00:46:09,150
synthetic videos I was talking about, it's
entirely.. in principle entirely possible to
565
00:46:09,150 --> 00:46:15,250
feed you an endlessly escalating payload
of arousal-inducing inputs. It might be
566
00:46:15,250 --> 00:46:19,440
Facebook or Twitter messages, optimized to
produce outrage, or it could be porn
567
00:46:19,440 --> 00:46:24,620
generated by AI to appeal to kinks you
don't even consciously know you have.
568
00:46:24,620 --> 00:46:28,980
But either way, the app now owns your
central nervous system and you will be
569
00:46:28,980 --> 00:46:36,230
monetized. And finally, I'd like to raise a
really hair-raising specter that goes well
570
00:46:36,230 --> 00:46:41,730
beyond the use of deep learning and
targeted propaganda and cyber war. Back in
571
00:46:41,730 --> 00:46:46,340
2011, an obscure Russian software house
launched an iPhone app for pickup artists
572
00:46:46,340 --> 00:46:52,940
called 'Girls Around Me'. Spoiler: Apple
pulled it like a hot potato as soon as
573
00:46:52,940 --> 00:46:59,060
word got out that it existed. Now, Girls
Around Me works out where the user is
574
00:46:59,060 --> 00:47:03,730
using GPS, then queries Foursquare
and Facebook for people matching a simple
575
00:47:03,730 --> 00:47:09,880
relational search: single females, per
their Facebook relationship status, who have
576
00:47:09,880 --> 00:47:15,020
checked in, or been checked in by their
friends, in your vicinity on Foursquare.
577
00:47:15,020 --> 00:47:19,270
The app then displays their locations on a
map along with links to their social media
578
00:47:19,270 --> 00:47:24,560
profiles. If they were doing it today, the
interface would be gamified, showing strike
579
00:47:24,560 --> 00:47:28,980
rates and a leaderboard and flagging
targets who succumbed to harassment as
580
00:47:28,980 --> 00:47:31,960
easy lays.
But these days, the cool kids and single
581
00:47:31,960 --> 00:47:35,980
adults are all using dating apps with a
missing vowel in the name, only a creeper
582
00:47:35,980 --> 00:47:45,420
would want something like Girls Around Me,
right? Unfortunately, there are much, much
583
00:47:45,420 --> 00:47:49,450
nastier uses than scraping social media
to find potential victims for serial
584
00:47:49,450 --> 00:47:54,480
rapists. Does your social media profile
indicate your political or religious
585
00:47:54,480 --> 00:47:59,650
affiliation? No? Cambridge Analytica can
work them out with 99.9% precision
586
00:47:59,650 --> 00:48:04,660
anyway, so don't worry about that. We
already have you pegged. Now add a service
587
00:48:04,660 --> 00:48:08,430
that can identify people's affiliation and
location and you have a beginning of a
588
00:48:08,430 --> 00:48:13,520
flash mob app, one that will show people
like us and people like them on a
589
00:48:13,520 --> 00:48:18,551
hyperlocal map.
Imagine you're a young female and a
590
00:48:18,551 --> 00:48:22,240
supermarket like Target has figured out
from your purchase patterns that you're
591
00:48:22,240 --> 00:48:28,160
pregnant, even though you don't know it
yet. This actually happened in 2011. Now
592
00:48:28,160 --> 00:48:31,250
imagine that all the anti-abortion
campaigners in your town have an app
593
00:48:31,250 --> 00:48:36,110
called "Babies Risk" on their phones.
Someone has paid for the analytics feed
594
00:48:36,110 --> 00:48:40,320
from the supermarket and every time you go
near a family planning clinic, a group of
595
00:48:40,320 --> 00:48:44,270
unfriendly anti-abortion protesters
somehow miraculously show up and swarm
596
00:48:44,270 --> 00:48:49,990
you. Or imagine you're male and gay and
the "God hates fags" crowd has invented a
597
00:48:49,990 --> 00:48:55,300
100% reliable gaydar app, based on your
Grindr profile, and is getting their fellow
598
00:48:55,300 --> 00:49:00,810
travelers to queer-bash gay men - only when
they're alone or outnumbered by ten to
599
00:49:00,810 --> 00:49:06,410
one. That's the special horror of precise
geolocation: not only do you always know
600
00:49:06,410 --> 00:49:13,020
where you are, the AIs know where you are,
and some of them aren't friendly. Or
601
00:49:13,020 --> 00:49:17,330
imagine you're in Pakistan and Christian-
Muslim tensions are rising, or in rural
602
00:49:17,330 --> 00:49:24,250
Alabama and a Democrat - you know, the
possibilities are endless. Someone out
603
00:49:24,250 --> 00:49:30,120
there is working on this. A geolocation
aware, social media scraping deep learning
604
00:49:30,120 --> 00:49:34,230
application, that uses a gamified
competitive interface to reward its
605
00:49:34,230 --> 00:49:38,500
players for joining in acts of mob
violence against whoever the app developer
606
00:49:38,500 --> 00:49:41,640
hates.
Probably it has an innocuous seeming, but
607
00:49:41,640 --> 00:49:46,650
highly addictive training mode, to get the
users accustomed to working in teams and
608
00:49:46,650 --> 00:49:52,640
obeying the app's instructions. Think
Ingress or Pokemon Go. Then at some pre-
609
00:49:52,640 --> 00:49:57,770
planned zero-hour, it switches mode and
starts rewarding players for violence,
610
00:49:57,770 --> 00:50:01,780
players who have been primed to think of
their targets as vermin by a steady drip
611
00:50:01,780 --> 00:50:06,470
feed of micro-targeted dehumanizing
propaganda inputs, delivered over a period
612
00:50:06,470 --> 00:50:12,310
of months. And the worst bit of this picture?
The app developer isn't even a
613
00:50:12,310 --> 00:50:16,540
nation-state trying to disrupt its enemies
or an extremist political group trying to
614
00:50:16,540 --> 00:50:22,000
murder gays, Jews or Muslims. It's just a
Paperclip Maximizer doing what it does
615
00:50:22,000 --> 00:50:27,900
and you are the paper. Welcome to the 21st
century.
616
00:50:27,900 --> 00:50:40,730
applause
Uhm...
617
00:50:40,730 --> 00:50:41,960
Thank you.
618
00:50:41,960 --> 00:50:47,780
ongoing applause
We have a little time for questions. Do
619
00:50:47,780 --> 00:50:53,790
you have a microphone for the audience? Do
we have any questions? ... OK.
620
00:50:53,790 --> 00:50:56,430
Herald: So you are doing a Q&A?
CS: Hmm?
621
00:50:56,430 --> 00:51:02,290
Herald: So you are doing a Q&A. Well if
there are any questions, please come
622
00:51:02,290 --> 00:51:24,320
forward to the microphones, numbers 1
through 4 and ask.
623
00:51:24,320 --> 00:51:29,010
Mic 1: Do you really think it's all
bleak and dystopian like you described
624
00:51:29,010 --> 00:51:34,750
it, because I also think the future can be
bright, looking at the internet with open
625
00:51:34,750 --> 00:51:38,730
source and like, it's all growing and going
faster and faster in a good
626
00:51:38,730 --> 00:51:44,110
direction. So what do you think about
the balance here?
627
00:51:44,110 --> 00:51:48,070
CS: sighs Basically, I think the
problem is that about 3% of us
628
00:51:48,070 --> 00:51:54,330
are sociopaths or psychopaths, who spoil
everything for the other 97% of us.
629
00:51:54,330 --> 00:51:56,980
Wouldn't it be great if somebody could
write an app that would identify all the
630
00:51:56,980 --> 00:51:59,690
psychopaths among us and let the rest of
us just kill them?
631
00:51:59,690 --> 00:52:05,320
laughing, applause
Yeah, we have all the
632
00:52:05,320 --> 00:52:12,300
tools to make a utopia, we have them now,
today. A bleak, miserable, grim meathook
633
00:52:12,300 --> 00:52:17,510
future is not inevitable, but it's up to
us to use these tools to prevent the bad
634
00:52:17,510 --> 00:52:22,480
stuff happening and to do that, we have to
anticipate the bad outcomes and work to
635
00:52:22,480 --> 00:52:25,260
try and figure out a way to deal with
them. That's what this talk is. I'm trying
636
00:52:25,260 --> 00:52:29,300
to do a bit of a wake-up call and get
people thinking about how much worse
637
00:52:29,300 --> 00:52:33,630
things can get and what we need to do to
prevent it from happening. What I was
638
00:52:33,630 --> 00:52:37,960
saying earlier about our regulatory
systems being broken, stands. How do we
639
00:52:37,960 --> 00:52:47,500
regulate the deep learning technologies?
This is something we need to think about.
640
00:52:47,500 --> 00:52:55,230
H: Okay mic number two.
Mic 2: Hello? ... When you talk about
641
00:52:55,230 --> 00:53:02,250
corporations as AIs, where do you see that
analogy you're making? Do you see them as
642
00:53:02,250 --> 00:53:10,010
literally AIs or figuratively?
CS: Almost literally. If
643
00:53:10,010 --> 00:53:14,230
you're familiar with philosopher
John Searle's Chinese room paradox
644
00:53:14,230 --> 00:53:17,620
from the 1970s, by which he attempted to
prove that artificial intelligence was
645
00:53:17,620 --> 00:53:24,380
impossible, a corporation is very much the
Chinese room implementation of an AI. It
646
00:53:24,380 --> 00:53:29,400
is a bunch of human beings in a box. You
put inputs into the box, you get outputs
647
00:53:29,400 --> 00:53:33,000
out of the box. Does it matter whether it's
all happening in software or whether
648
00:53:33,000 --> 00:53:37,520
there's a human being following rules
in between to assemble the output? I don't
649
00:53:37,520 --> 00:53:41,190
see there being much of a difference.
Now you have to look at a company at a
650
00:53:41,190 --> 00:53:46,630
very abstract level to view it as an AI,
but more and more companies are automating
651
00:53:46,630 --> 00:53:54,500
their internal business processes. You've
got to view this as an ongoing trend. And
652
00:53:54,500 --> 00:54:01,490
yeah, they have many of the characteristics
of an AI.
653
00:54:01,490 --> 00:54:06,240
Herald: Okay mic number four.
Mic 4: Hi, thanks for your talk.
654
00:54:06,240 --> 00:54:13,220
You probably heard of the Time Well
Spent and Design Ethics movements that
655
00:54:13,220 --> 00:54:16,850
are alerting developers to dark patterns
in UI design, where
656
00:54:16,850 --> 00:54:20,630
these people design apps to manipulate
people. I'm curious if you find any
657
00:54:20,630 --> 00:54:26,750
optimism in the possibility of amplifying
or promoting those movements.
658
00:54:26,750 --> 00:54:31,570
CS: Uhm, you know, I knew about dark
patterns, I knew about people trying to
659
00:54:31,570 --> 00:54:36,230
optimize them, I wasn't actually aware
there were movements against this. Okay I'm
660
00:54:36,230 --> 00:54:40,760
53 years old, I'm out of touch. I haven't
actually done any serious programming in
661
00:54:40,760 --> 00:54:47,000
15 years. I'm so rusty, my rust has rust on
it. But, you know, it is a worrying trend
662
00:54:47,000 --> 00:54:54,000
and actual activism is a good start.
Raising awareness of hazards and of what
663
00:54:54,000 --> 00:54:58,330
we should be doing about them, is a good
start. And I would classify this actually
664
00:54:58,330 --> 00:55:04,220
as a moral issue. We need to..
corporations evaluate everything in terms
665
00:55:04,220 --> 00:55:07,960
of revenue, because it's the
equivalent of breathing: they have to
666
00:55:07,960 --> 00:55:13,810
breathe. Corporations don't usually have
any moral framework. We're humans, we need
667
00:55:13,810 --> 00:55:18,480
a moral framework to operate within. Even
if it's as simple as first "Do no harm!"
668
00:55:18,480 --> 00:55:22,500
or "Do not do unto others that which would
be repugnant if it was done unto you!",
669
00:55:22,500 --> 00:55:26,450
the Golden Rule. So, yeah, we should be
trying to spread awareness of this,
670
00:55:26,450 --> 00:55:32,170
and working with program developers to
remind them that they are human
671
00:55:32,170 --> 00:55:36,480
beings and have to be humane in their
application of technology is a necessary
672
00:55:36,480 --> 00:55:39,510
start.
applause
673
00:55:39,510 --> 00:55:45,501
H: Thank you! Mic 3?
Mic 3: Hi! Yeah, I think that folks,
674
00:55:45,501 --> 00:55:48,600
especially in this sort of crowd, tend to
jump to the "just get off of
675
00:55:48,600 --> 00:55:52,330
Facebook" solution first, for a lot of
these things that are really, really
676
00:55:52,330 --> 00:55:56,640
scary. But what worries me is how we sort
of silence ourselves when we do that.
677
00:55:56,640 --> 00:56:02,240
After the election I actually got back on
Facebook, because the Women's March was
678
00:56:02,240 --> 00:56:07,250
mostly organized through Facebook. But
yeah, I think we need a lot more
679
00:56:07,250 --> 00:56:12,880
regulation, but we can't just throw it
out. We're.. because it's..
680
00:56:12,880 --> 00:56:16,520
social media is the only... really good
platform we have right now
681
00:56:16,520 --> 00:56:20,410
to express ourselves, to
have our rules, or power.
682
00:56:20,410 --> 00:56:24,660
CS: Absolutely. I have made
a point of not really using Facebook
683
00:56:24,660 --> 00:56:28,480
for many, many, many years.
I have a Facebook page simply to
684
00:56:28,480 --> 00:56:31,530
shut up the young marketing people at my
publisher, who used to pop up every two
685
00:56:31,530 --> 00:56:34,680
years and say: "Why don't you have a
Facebook? Everybody's got a Facebook."
686
00:56:34,680 --> 00:56:39,790
No, I've had a blog since 1993!
laughing
687
00:56:39,790 --> 00:56:44,170
But no, I'm gonna have to use Facebook,
because these days, not using Facebook is
688
00:56:44,170 --> 00:56:50,780
like not using email. You're cutting off
your nose to spite your face. What we
689
00:56:50,780 --> 00:56:54,830
really do need to be doing is looking for
some form of effective oversight of
690
00:56:54,830 --> 00:57:00,830
Facebook and particularly of how they..
the algorithms that show you content are
691
00:57:00,830 --> 00:57:05,340
written. What I was saying earlier about
how algorithms are not as transparent as
692
00:57:05,340 --> 00:57:10,530
human beings to people, applies hugely to
them. And both Facebook and Twitter
693
00:57:10,530 --> 00:57:15,180
control the information
that they display to you.
694
00:57:15,180 --> 00:57:18,690
Herald: Okay, I'm terribly sorry for all the
people queuing at the mics now, we're out
695
00:57:18,690 --> 00:57:24,960
of time. I also have to apologize: I
announced that this talk was being held in
696
00:57:24,960 --> 00:57:29,690
English, but it was being held in English.
the latter pronounced with a hard G
697
00:57:29,690 --> 00:57:31,470
Thank you very much, Charles Stross!
698
00:57:31,470 --> 00:57:33,690
CS: Thank you very much for
listening to me, it's been a pleasure!
699
00:57:33,690 --> 00:57:36,500
applause
700
00:57:36,500 --> 00:57:52,667
postroll music
701
00:57:52,667 --> 00:57:57,781
subtitles created by c3subtitles.de
in the year 2018