WEBVTT
00:00:00.000 --> 00:00:14.990
34c3 intro
00:00:14.990 --> 00:00:23.740
Herald: The next talk is by Marloes de Valk,
she's an artist and writer from the
00:00:23.740 --> 00:00:31.289
Netherlands and she's working with lots of
different materials and media and at the
00:00:31.289 --> 00:00:40.490
moment she's doing an 8-bit game, so the
topic is "why do we anthropomorphize
00:00:40.490 --> 00:00:48.460
computers and dehumanize ourselves in the
process?" and we have a Mumble, which is
00:00:48.460 --> 00:00:54.240
doing the translation, the talk is in
English and we will translate into French
00:00:54.240 --> 00:01:01.930
and German.
Okay, give a big applause for Marloes!
00:01:01.930 --> 00:01:09.670
applause
00:01:09.670 --> 00:01:13.740
Marloes: Thank you and thank you all for
00:01:13.740 --> 00:01:20.860
coming, my name is Marloes de Valk and I'm
going to talk about anthropomorphization
00:01:20.860 --> 00:01:27.450
and I will approach this as a survival
strategy, see how it works and if it is
00:01:27.450 --> 00:01:34.490
effective. And when I'm speaking of big
data, which is an umbrella term, my focus
00:01:34.490 --> 00:01:38.940
will be on the socio-technical aspect of
the phenomenon, the assumptions and
00:01:38.940 --> 00:01:44.090
beliefs surrounding Big Data and on
research using data exhaust or found data
00:01:44.090 --> 00:01:49.770
such as status updates on social media, web
searches and credit card payments.
00:01:55.750 --> 00:02:08.300
Oh and now my slides are frozen. Oh my
gosh.
00:02:09.460 --> 00:02:11.280
Audience: Have you tried turning it off and on again?
00:02:11.280 --> 00:02:15.880
Marloes: laughs
I will in a moment. gosh it's
00:02:15.880 --> 00:02:25.730
completely frozen... I'm very sorry,
technical staff I have to exit, if I can.
00:02:25.730 --> 00:02:40.730
I can't. Help! I have to get rid of
something I think, should we just kill it?
00:02:40.730 --> 00:02:55.280
That's so stupid yeah.
00:02:55.280 --> 00:02:58.280
But they're gonna have a coffee soon and then it's gonna
00:02:58.280 --> 00:03:33.070
Yes, force quit... I think I know
what the problem is. I'm sorry it's, it's
00:03:33.070 --> 00:04:09.940
really not working. All right let's see if
we're back.
00:04:09.940 --> 00:04:18.870
Okay, okay so sorry for the interruption.
00:04:18.870 --> 00:04:24.150
I wanted to start by letting Silicon
Valley itself tell a story about
00:04:24.150 --> 00:04:30.930
technology, really, sorry about the
interruption. So, Silicon Valley propaganda
00:04:30.930 --> 00:04:35.330
during our lifetime, we're about to see
the transformation of the human race, it's
00:04:35.330 --> 00:04:39.400
really something that blows my mind every
time I think about it. People have no idea
00:04:39.400 --> 00:04:42.800
how fast the world is changing and I want
to give you a sense of that because it
00:04:42.800 --> 00:04:47.650
fills me with awe and with an
extraordinary sense of responsibility. I
00:04:47.650 --> 00:04:52.120
want to give you a sense of why now is
different why this decade the next decade
00:04:52.120 --> 00:04:56.370
is not interesting times
but THE most extraordinary times ever in
00:04:56.370 --> 00:05:00.520
human history and they truly are. What
we're talking about here is the notion
00:05:00.520 --> 00:05:04.670
that faster cheaper computing power which
is almost like a force of nature, is
00:05:04.670 --> 00:05:09.350
driving a whole slew of technologies,
technology being the force that takes what
00:05:09.350 --> 00:05:14.090
used to be scarce and makes it abundant.
That is why we're heading towards this
00:05:14.090 --> 00:05:19.630
extraordinary age of abundance. The future
will not take care of itself. As we know,
00:05:19.630 --> 00:05:23.590
the world looks to America for progress
and America looks to California and if you
00:05:23.590 --> 00:05:28.070
ask most Californians where they get their
progress they'll point towards the bay,
00:05:28.070 --> 00:05:33.210
but here at the bay there is no place left
to point, so we have to create solutions
00:05:33.210 --> 00:05:37.840
and my goal is to simplify complexity,
take Internet technology and cross it with
00:05:37.840 --> 00:05:43.020
an old industry and magic and progress and
big things can happen. I really think
00:05:43.020 --> 00:05:46.990
there are two fundamental paths for
humans, one path is we stay on earth
00:05:46.990 --> 00:05:51.650
forever, or some eventual extinction event
wipes us out, I don't have a doomsday
00:05:51.650 --> 00:05:56.009
prophecy but history suggests some
doomsday event will happen. The
00:05:56.009 --> 00:06:00.930
alternative is becoming a spacefaring and
multiplanetary species and it will be like
00:06:00.930 --> 00:06:06.670
really fun to go, you'll have a great
time. We will settle on Mars and we should,
00:06:06.670 --> 00:06:10.630
because it's cool. When it comes to space
I see it as my job to build infrastructure
00:06:10.630 --> 00:06:14.480
the hard way. I'm using my resources to
put in that infrastructure so that the
00:06:14.480 --> 00:06:19.150
next generation of people can have a
dynamic entrepreneurial solar system as
00:06:19.150 --> 00:06:23.000
interesting as we see on the internet
today. We want the population to keep
00:06:23.000 --> 00:06:29.430
growing on this planet, we want you to keep
using more energy per capita. Death
00:06:29.430 --> 00:06:33.889
makes me very angry, probably the most
extreme form of inequality is between
00:06:33.889 --> 00:06:38.180
people who are alive and people who are
dead. I have the idea that aging is
00:06:38.180 --> 00:06:42.050
plastic, that it's encoded and if
something is encoded you can crack the
00:06:42.050 --> 00:06:46.390
code if you can crack the code you can
hack the code and thermodynamically there
00:06:46.390 --> 00:06:51.470
should be no reason we can't defer entropy
indefinitely. We can end aging forever.
00:06:51.470 --> 00:06:54.510
This is not about
Silicon Valley billionaires
00:06:54.510 --> 00:06:57.199
living forever off the blood of young
people.
00:06:57.199 --> 00:07:02.020
It's about a Star Trek future where no one
dies of preventable diseases where life is
00:07:02.020 --> 00:07:05.770
fair. Health technology is becoming an
information technology, where we can read
00:07:05.770 --> 00:07:10.539
and edit our own genomes clearly it is
possible through technology to make death
00:07:10.539 --> 00:07:16.880
optional. Yes, our bodies are information
processing systems. We can enable human
00:07:16.880 --> 00:07:22.350
transformations that would rival Marvel
Comics super muscularity ultra endurance,
00:07:22.350 --> 00:07:26.880
super radiation resistance, you could have
people living on the moons of Jupiter,
00:07:26.880 --> 00:07:30.259
who'd be modified in this way and they
could physically harvest energy from the
00:07:30.259 --> 00:07:35.199
gamma rays they were exposed to. Form a
culture connected with the ideology of the
00:07:35.199 --> 00:07:39.080
future, promoting technical progress
artificial intellects, multi-body
00:07:39.080 --> 00:07:44.419
immortality and cyborgization. We are at
the beginning of the beginning the first
00:07:44.419 --> 00:07:49.250
hour of day one, there have never been
more opportunities the greatest products
00:07:49.250 --> 00:07:54.830
of the next 25 years have not been
invented yet. You are not too late.
00:07:54.830 --> 00:07:59.700
We're going to take over the world, one
robot at a time. It's gonna be an AI that
00:07:59.700 --> 00:08:03.740
is able to source, create, solve and answer
just what is your desire. I mean this is
00:08:03.740 --> 00:08:09.380
an almost godlike view of the future. AI
is gonna be magic. Especially in the
00:08:09.380 --> 00:08:14.310
digital manufacturing world, what is going
to be created will effectively be a god,
00:08:14.310 --> 00:08:17.810
the idea needs to spread before the
technology, the church is how we spread
00:08:17.810 --> 00:08:21.690
the word, the gospel. If you believe in
it, start a conversation with someone else
00:08:21.690 --> 00:08:26.280
and help them understand the same things.
Computers are going to take over from
00:08:26.280 --> 00:08:29.940
humans, no question, but when I got that
thinking in my head about if I'm going to
00:08:29.940 --> 00:08:33.599
be treated in the future as a pet to these
smart machines, well I'm gonna treat my
00:08:33.599 --> 00:08:38.068
own pet dog really nice, but in the end we
may just have created the species that is
00:08:38.068 --> 00:08:42.688
above us. Chaining it isn't gonna be the
solution as it will be stronger than any
00:08:42.688 --> 00:08:48.149
chains we could put on it. The existential risk
that is associated with AI we will not be
00:08:48.149 --> 00:08:51.180
able to beat
AI, so then as the saying goes if you
00:08:51.180 --> 00:08:54.879
can't beat them, join them.
History has shown us we aren't gonna win
00:08:54.879 --> 00:08:58.879
this war by changing human behavior but
maybe we can build systems that are so
00:08:58.879 --> 00:09:03.149
locked down, that humans lose the ability
to make dumb mistakes until we gain the
00:09:03.149 --> 00:09:08.190
ability to upgrade the human brain, it's
the only way. Let's stop pretending we can
00:09:08.190 --> 00:09:11.929
hold back the development of intelligence
when there are clear massive short-term
00:09:11.929 --> 00:09:16.069
economic benefits to those who develop it
and instead understand the future and have
00:09:16.069 --> 00:09:20.611
it treat us like a beloved elder who
created it. As a company, one of our
00:09:20.611 --> 00:09:24.119
greatest cultural strengths is accepting
the fact that if you're gonna invent,
00:09:24.119 --> 00:09:29.019
you're gonna disrupt. Progress is
happening because there is economic
00:09:29.019 --> 00:09:32.629
advantage to having machines work for you
and solve problems for you. People are
00:09:32.629 --> 00:09:38.929
chasing that. AI, the term has become more
of a broad, almost marketing driven term
00:09:38.929 --> 00:09:42.439
and I'm probably okay with that. What
matters is what people think of when they
00:09:42.439 --> 00:09:47.339
hear of this. We are in a deadly race
between politics and technology, the fate
00:09:47.339 --> 00:09:51.420
of our world may depend on the effort of a
single person who builds or propagates the
00:09:51.420 --> 00:09:57.749
machinery of freedom, that makes the world
safe for capitalism.
00:09:57.749 --> 00:10:04.470
These were all quotes. Every single one.
Not only Silicon Valley CEOs speak of
00:10:04.470 --> 00:10:07.889
technology in mysterious ways, let's see
some examples from the media.
00:10:07.889 --> 00:10:12.079
On artificial intelligence regulation,
"let's not regulate mathematics", a headline
00:10:12.079 --> 00:10:16.779
from import.io from May 2016 about the
European general data protection
00:10:16.779 --> 00:10:22.319
regulation and the article concludes
autonomous cars should be regulated as
00:10:22.319 --> 00:10:26.129
cars, they should safely deliver users to
their destinations in the real world and
00:10:26.129 --> 00:10:31.320
overall reduce the number of accidents.
How they achieve this is irrelevant. With
00:10:31.320 --> 00:10:34.299
enough data the numbers speak for
themselves which comes from the super
00:10:34.299 --> 00:10:40.240
famous article "The End of Theory" by
Chris Anderson in Wired magazine, 2008.
00:10:40.240 --> 00:10:45.309
"Google creates an AI that can teach
itself to be better than humans" headline
00:10:45.309 --> 00:10:48.549
from The Independent. The
article continues the company's AI
00:10:48.549 --> 00:10:52.639
division DeepMind has unveiled "AlphaGo
00:10:52.639 --> 00:10:58.160
Zero", an extremely advanced system that
managed to accumulate thousands of years
of human knowledge within days. Microsoft
00:10:58.160 --> 00:11:02.850
apologizing for their teen chatbot gone
Nazi stating it wasn't their fault. "We're
00:11:02.850 --> 00:11:06.199
deeply sorry for the unintended and
hurtful tweets from Tay which do not
00:11:06.199 --> 00:11:13.699
represent who we are or what we stand for
nor how we design Tay". And then the PC
00:11:13.699 --> 00:11:18.829
World article "AI just 3D printed a brand
new Rembrandt and it's shockingly good",
00:11:18.829 --> 00:11:21.550
the subtitle reads
"the next Rembrandt project used data and
00:11:21.550 --> 00:11:26.860
deep learning to produce uncanny results".
Advertising firm J. Walter Thompson
00:11:26.860 --> 00:11:32.049
unveiled a 3d printed painting called "the
next Rembrandt" based on 346 paintings of
00:11:32.049 --> 00:11:36.100
the old master, not just PC world, but
many more articles touted similar titles
00:11:36.100 --> 00:11:41.209
presenting the painting to the public, as
if it were made by a computer, a 3d
00:11:41.209 --> 00:11:45.179
printer, AI and deep learning. It is clear
though, that the computer programmers who
00:11:45.179 --> 00:11:48.765
worked on the project are not computers
and neither are the people who tagged the
00:11:48.765 --> 00:11:53.509
346 Rembrandt paintings by hand. The
painting was made by a team of programmers
00:11:53.509 --> 00:11:58.589
and researchers and it took them 18 months
to do. So what is communicated through
00:11:58.589 --> 00:12:03.130
these messages is that the computer did
it, yet there is no strong AI, as in
00:12:03.130 --> 00:12:07.029
consciousness in machines at this moment,
only very clever automation, meaning it
00:12:07.029 --> 00:12:11.570
was really us. We comprehend the role and
function of non-human actors rationally,
00:12:11.570 --> 00:12:15.680
but still intuitively approach them
differently. We anthropomorphize and
00:12:15.680 --> 00:12:19.160
stories about the intelligent things
machines can do reinforce the belief in
00:12:19.160 --> 00:12:26.620
the human-like agency of machines. So why
do we do it?
00:12:26.620 --> 00:12:28.019
I'd like to think of this as two survival
00:12:28.019 --> 00:12:32.680
strategies that found each other in big
data and AI discourse. George Zarkadakis
00:12:32.680 --> 00:12:37.209
in the book "in our own image" describes
the root of anthropomorphization, during
00:12:37.209 --> 00:12:40.610
the evolution of the modern mind humans
acquired and developed general-purpose
00:12:40.610 --> 00:12:44.259
language, through social language and this
first social language was a way of
00:12:44.259 --> 00:12:48.999
grooming, of creating social cohesion.
We gained theory of mind to believe that
00:12:48.999 --> 00:12:52.959
other people have thoughts, desires,
intentions and feelings of their own -
00:12:52.959 --> 00:12:56.579
Empathy. And this led to describing
the world in social terms, perceiving
00:12:56.579 --> 00:13:01.259
everything around us as agents possessing
mind, including the nonhuman. When hunting,
00:13:01.259 --> 00:13:05.600
anthropomorphizing animals had a great
advantage because you could strategize,
00:13:05.600 --> 00:13:12.190
predict their movements. They show through
multiple experiment- Oh, Reeves and Nass
00:13:12.190 --> 00:13:16.040
were picking up on this
anthropomorphization and they show through
00:13:16.040 --> 00:13:20.220
multiple experiments that we haven't
changed that much, through multiple
00:13:20.220 --> 00:13:24.309
experiments they show how people treat
computers, television and new media like
00:13:24.309 --> 00:13:28.249
real people and places even though the test
subjects were completely unaware of it,
00:13:28.249 --> 00:13:33.189
they responded to computers as they would
to people, being polite, cooperative,
00:13:33.189 --> 00:13:37.009
attributing personality characteristics
such as aggressiveness, humor, expertise
00:13:37.009 --> 00:13:41.339
and even gender. Meaning we haven't
evolved that much, we still do it.
00:13:41.339 --> 00:13:44.799
Microsoft unfortunately misinterpreted
their research and developed the innocent
00:13:44.799 --> 00:13:49.589
yet much hated Clippy the paper clip,
appearing one year later in Office 97.
00:13:49.589 --> 00:13:54.920
This survival strategy found its way into
another one. The Oracle. Survival through
00:13:54.920 --> 00:13:58.480
predicting events.
The second strategy is trying to predict
00:13:58.480 --> 00:14:03.049
the future, to steer events in our favor,
in order to avoid disaster. The fear of
00:14:03.049 --> 00:14:06.989
death has inspired us throughout the ages
to try and predict the future and it has
00:14:06.989 --> 00:14:10.480
led us to consult Oracles and to create
a new one.
00:14:10.480 --> 00:14:13.679
Because we cannot predict the future in
the midst of life's many insecurities, we
00:14:13.679 --> 00:14:18.000
desperately crave the feeling of being in
control over our destiny, we have
00:14:18.000 --> 00:14:23.269
developed ways to calm our anxiety, to
comfort ourselves and what we do is we
00:14:23.269 --> 00:14:27.420
obfuscate the human hand in the generation
of messages that require an objective or
00:14:27.420 --> 00:14:31.660
authoritative feel. Although disputed, it is
commonly believed that the Delphic Oracle
00:14:31.660 --> 00:14:36.079
delivered messages from her god Apollo in
a state of trance induced by intoxicating
00:14:36.079 --> 00:14:40.949
vapors arising from the chasm over which
she was seated, possessed by her god the
00:14:40.949 --> 00:14:45.209
Oracle spoke ecstatically and
spontaneously. Priests of the temple then
00:14:45.209 --> 00:14:50.360
translated her gibberish into the
prophecies the seekers of advice were
00:14:50.360 --> 00:14:56.220
sent home with. And Apollo had spoken.
Nowadays we turn to data for advice. The
00:14:56.220 --> 00:15:00.121
Oracle of Big Data functions in a similar
way to the Oracle of Delphi. Algorithms
00:15:00.121 --> 00:15:04.750
programmed by humans are fed data and
consequently spit out numbers that are
00:15:04.750 --> 00:15:07.949
then translated and interpreted by
researchers into the prophecies the
00:15:07.949 --> 00:15:12.230
seekers of advice are sent home with. The
bigger the data set, the more accurate the
00:15:12.230 --> 00:15:16.290
results. Data has spoken.
We are brought closer to the truth, to
00:15:16.290 --> 00:15:22.060
reality as it is, unmediated by us,
subjective, biased and error-prone humans.
00:15:22.060 --> 00:15:26.529
This Oracle inspires great hope. It's a
utopia and this is best put into words in
00:15:26.529 --> 00:15:29.600
the article "The End of Theory" by
Anderson where he states that with enough
00:15:29.600 --> 00:15:34.859
data the numbers can speak for themselves.
We can forget about taxonomy, ontology,
00:15:34.859 --> 00:15:39.459
psychology, who knows why people do what
they do. The point is they do it and we
00:15:39.459 --> 00:15:43.229
can track and measure it with
unprecedented fidelity, with enough data
00:15:43.229 --> 00:15:48.149
the numbers speak for themselves. This
Oracle is of course embraced with great
00:15:48.149 --> 00:15:52.649
enthusiasm by database and storage
businesses as shown here in an Oracle
00:15:52.649 --> 00:15:58.660
presentation slide. High Five! And getting
it right one out of ten times and using
00:15:58.660 --> 00:16:02.730
the one success story to strengthen the
belief in big data superpowers happens a
00:16:02.730 --> 00:16:06.889
lot in the media, a peculiar example is
the story on Motherboard about how
00:16:06.889 --> 00:16:10.579
"Cambridge Analytica" helped Trump win the
elections by psychologically profiling the
00:16:10.579 --> 00:16:14.809
entire American population and using
targeted Facebook ads to influence the
00:16:14.809 --> 00:16:20.209
results of the election. This story evokes
the idea that they know more about you
00:16:20.209 --> 00:16:26.790
than your own mother. The article reads
"more likes could even surpass what a
00:16:26.790 --> 00:16:31.019
person thought they knew about themselves"
and although this form of manipulation is
00:16:31.019 --> 00:16:36.160
seriously scary and very undemocratic, as
Cathy O'Neil, author of "Weapons
00:16:36.160 --> 00:16:41.879
of Math Destruction" notes, "don't
believe the hype".
00:16:41.879 --> 00:16:45.459
It wasn't just Trump, everyone was doing it.
Hillary was using The Groundwork, a
00:16:45.459 --> 00:16:49.380
startup funded by Google's Eric Schmidt,
Obama used The Groundwork too, but the
00:16:49.380 --> 00:16:53.089
Groundwork somehow comes across as a lot more
cute compared to Cambridge Analytica,
00:16:53.089 --> 00:16:56.279
funded by billionaire Robert Mercer who
is also heavily invested in alt-right
00:16:56.279 --> 00:17:00.759
media outlet Breitbart, which describes
itself as a killing machine waging the war
00:17:00.759 --> 00:17:05.869
for the West. He also donated Cambridge
Analytica's services to the Brexit campaign.
00:17:05.869 --> 00:17:08.949
The Motherboard article and many others
describing the incredibly detailed
00:17:08.949 --> 00:17:13.019
knowledge Cambridge Analytica has on
American citizens were amazing advertising
00:17:13.019 --> 00:17:17.659
for the company, but most of all a warning
sign that applying big data research to
00:17:17.659 --> 00:17:21.539
elections creates a very undemocratic
asymmetry in available information and
00:17:21.539 --> 00:17:27.559
undermines the notion of an informed
citizenry. danah boyd and Kate Crawford
00:17:27.559 --> 00:17:31.309
described the beliefs attached to big data
as a mythology: "the widespread belief
00:17:31.309 --> 00:17:34.880
that large datasets offer a higher form of
intelligence and knowledge that can
00:17:34.880 --> 00:17:39.120
generate insights that were previously
impossible, with the aura of truth,
00:17:39.120 --> 00:17:43.610
objectivity and accuracy".
The deconstruction of this myth was
00:17:43.610 --> 00:17:48.080
attempted as early as 1984 in "A
Spreadsheet Way of Knowledge", where Steven Levy
00:17:48.080 --> 00:17:51.929
describes how the authoritative look of a
spreadsheet and the fact that it was done
00:17:51.929 --> 00:17:55.330
by a computer has a strong persuasive
effect on people, leading to the
00:17:55.330 --> 00:18:01.990
acceptance of the proposed model of
reality as gospel. He says fortunately few
00:18:01.990 --> 00:18:05.390
would argue that all relations between
people can be quantified and manipulated
00:18:05.390 --> 00:18:09.559
by formulas. Of human behavior, no
faultless assumptions and so no perfect
00:18:09.559 --> 00:18:14.679
model can be made. Tim Harford also refers
to faith when he describes four
00:18:14.679 --> 00:18:19.980
assumptions underlying Big Data research,
the first: uncanny accuracy is easy to
00:18:19.980 --> 00:18:24.299
overrate, if we simply ignore false
positives, oh sorry, the claim that
00:18:24.299 --> 00:18:27.320
causation has been knocked off its
pedestal is fine if we are making
00:18:27.320 --> 00:18:31.539
predictions in a stable environment,
but not if the world is changing. If you
00:18:31.539 --> 00:18:35.000
do not understand why things correlate,
you cannot know what might break down this
00:18:35.000 --> 00:18:38.620
correlation either.
The promise that sampling bias does not
00:18:38.620 --> 00:18:43.320
matter in such large data sets is simply
not true, there is lots of bias in data
00:18:43.320 --> 00:18:47.950
sets. As for the idea that with enough
data the numbers speak for themselves,
00:18:47.950 --> 00:18:53.720
that seems hopelessly naive in data sets
where spurious patterns vastly outnumber
00:18:53.720 --> 00:18:58.299
genuine discoveries. This last point is
described by Nassim Nicholas Taleb, who writes
00:18:58.299 --> 00:19:02.700
that big data research has brought cherry-
picking to an industrial level. David Leinweber
00:19:02.700 --> 00:19:08.140
in a 2007 paper demonstrated that data
mining techniques could show a strong, but
00:19:08.140 --> 00:19:13.549
spurious relationship between the changes
in the S&P 500 stock index and butter
00:19:13.549 --> 00:19:18.740
production in Bangladesh. What is strange
about this mythology, that large data sets
00:19:18.740 --> 00:19:23.419
offer some higher form of intelligence,
is that it is paradoxical: it attributes human
00:19:23.419 --> 00:19:26.730
qualities to something, while at the same
time considering it to be more objective
00:19:26.730 --> 00:19:32.070
and more accurate than humans, but these
beliefs can exist side by side. Consulting
00:19:32.070 --> 00:19:36.259
this Oracle and critically has quite far-
reaching implications.
00:19:36.259 --> 00:19:40.490
For one it dehumanizes humans by asserting
that human involvement through hypothesis
00:19:40.490 --> 00:19:46.950
and interpretation is unreliable and only
by removing ourselves from the equation,
00:19:46.950 --> 00:19:51.100
can we finally see the world as it is.
The practical consequence of this dynamic
00:19:51.100 --> 00:19:55.450
is that it is no longer possible to argue
with the outcome of big data analysis
00:19:55.450 --> 00:19:59.770
because first of all it's supposedly bias
free, interpretation free, you can't
00:19:59.770 --> 00:20:04.210
question it, you cannot check if it is
bias free because the algorithms governing
00:20:04.210 --> 00:20:09.160
the analysis are often completely opaque.
This becomes painful when you find
00:20:09.160 --> 00:20:13.159
yourself in the wrong category of a social
sorting algorithm guiding real-world
00:20:13.159 --> 00:20:18.280
decisions on insurance, mortgages, work,
border checks, scholarships and so on.
00:20:18.280 --> 00:20:23.919
Exclusion from certain privileges is only
the most optimistic scenario, so it is not
00:20:23.919 --> 00:20:28.179
as effective as we might hope. It has a
dehumanizing dark side
00:20:28.179 --> 00:20:30.910
so why do we
believe? How did we become so infatuated
00:20:30.910 --> 00:20:35.200
with information? Our idea about
information changed radically in the
00:20:35.200 --> 00:20:39.830
previous century from a small statement of
fact, to the essence of man's inner life
00:20:39.830 --> 00:20:43.870
and this shift started with the advent of
cybernetics and information theory in the
00:20:43.870 --> 00:20:48.200
40s and 50s where information was suddenly
seen as a means to control a system, any
00:20:48.200 --> 00:20:53.429
system, be it mechanical, physical,
biological, cognitive or social. Here you
00:20:53.429 --> 00:20:58.410
see Norbert Wiener's moth, a machine he
built as part of a public relations stunt
00:20:58.410 --> 00:21:03.370
financed by Life magazine. The photos with
him and his moth were unfortunately never
00:21:03.370 --> 00:21:07.470
published, because according to Life's
editors, it didn't illustrate the human
00:21:07.470 --> 00:21:12.540
characteristics of computers very well.
Norbert Wiener in The Human Use of Human
00:21:12.540 --> 00:21:15.090
Beings wrote,
"to live effectively is to live with
00:21:15.090 --> 00:21:19.340
adequate information, thus communication
and control belong to the essence of man's
00:21:19.340 --> 00:21:23.539
inner life, even as they belong to his
life in society" and almost
00:21:23.539 --> 00:21:27.570
simultaneously, Shannon published "A
Mathematical Theory of Communication", a
00:21:27.570 --> 00:21:31.720
theory of signals transmitted over
distance. John Durham Peters in Speaking
00:21:31.720 --> 00:21:36.389
into the Air describes how over time this
information theory got reinterpreted by
00:21:36.389 --> 00:21:41.040
social scientists who mistook signal for
significance.
00:21:41.040 --> 00:21:45.299
Orit Halpern in Beautiful Data describes
how Alan Turing and Bertrand Russell had
00:21:45.299 --> 00:21:49.659
proved conclusively in struggling with the
Entscheidungsproblem that many analytic
00:21:49.659 --> 00:21:54.090
functions could not be logically
represented or mechanically executed and
00:21:54.090 --> 00:21:58.470
therefore machines were not human minds.
She asks the very important question of
00:21:58.470 --> 00:22:03.320
why we have forgotten this history and do
we still regularly equate reason with
00:22:03.320 --> 00:22:08.031
rationality. Having forgotten this, ten
years later in '58, artificial
00:22:08.031 --> 00:22:11.929
intelligence research began comparing
computers and humans. Simon and Newell
00:22:11.929 --> 00:22:15.740
wrote: the programmed computer and human
problem solver are both species belonging
00:22:15.740 --> 00:22:19.220
to the genus 'information processing
system'.
00:22:19.220 --> 00:22:23.390
In the 80s, information was granted an
even more powerful status: that of
00:22:23.390 --> 00:22:27.669
commodity. Like it or not, information has
finally surpassed material goods as our
00:22:27.669 --> 00:22:41.009
basic resource. Bon appetit! How did we
become so infatuated with information?
00:22:41.009 --> 00:22:50.890
Hey sorry sighs yeah, this is an image
of a medieval drawing where the humors,
00:22:50.890 --> 00:22:57.009
the liquids in the body were seen as the
essence of our intelligence and the
00:22:57.009 --> 00:23:02.610
functioning of our system. A metaphor for
our intelligence. By the 1500s automata
00:23:02.610 --> 00:23:06.190
powered by springs and gears had been
devised, inspiring leading thinkers such
00:23:06.190 --> 00:23:11.669
as Rene Descartes to assert that humans
are complex machines. The mind or soul was
00:23:11.669 --> 00:23:15.190
immaterial, completely separated from the
body - only able to interact with the body
00:23:15.190 --> 00:23:18.909
through the pineal gland, which he
considered the seat of the soul.
00:23:18.909 --> 00:23:22.760
And we still do it, the brain is commonly
compared to a computer with the role of
00:23:22.760 --> 00:23:26.809
physical hardware played by the brain, and
our thoughts serving as software. The brain
00:23:26.809 --> 00:23:31.769
is an information processor. It is a metaphor
that is sometimes mistaken for reality.
00:23:31.769 --> 00:23:36.150
Because of this the belief in the Oracle
of big data is not such a great leap.
00:23:36.150 --> 00:23:39.440
Information is the essence of
consciousness in this view. We've come
00:23:39.440 --> 00:23:45.169
full circle, we see machines as human-like
and view ourselves as machines. So does it
00:23:45.169 --> 00:23:48.309
work? We started out with two survival
strategies predicting the behavior of
00:23:48.309 --> 00:23:52.090
others through anthropomorphizing and
trying to predict the future through
00:23:52.090 --> 00:23:55.710
oracles. The first has helped us survive
in the past, allows us to be empathic
00:23:55.710 --> 00:24:00.440
towards others - human and non-human. The
second has comforted us throughout the
00:24:00.440 --> 00:24:03.769
ages, creating the idea of control of
being able to predict and prevent
00:24:03.769 --> 00:24:07.809
disaster. So how are they working for us
today?
00:24:07.809 --> 00:24:11.809
We definitely have reasons to be concerned
with the sword of Damocles hanging over
00:24:11.809 --> 00:24:16.260
our heads: global warming setting in
motion a chain of catastrophes threatening
00:24:16.260 --> 00:24:20.910
our survival, facing the inevitable death
of capitalism's myth of eternal growth as
00:24:20.910 --> 00:24:26.970
Earth's resources run out, we are in a
bit of a pickle. Seeing our consciousness
00:24:26.970 --> 00:24:31.210
as separate from our bodies, like software
and hardware. That offers some comforting
00:24:31.210 --> 00:24:34.190
options.
One option is that since human
00:24:34.190 --> 00:24:37.820
consciousness is so similar to computer
software, it can be transferred to a
00:24:37.820 --> 00:24:42.940
computer. Ray Kurzweil for example
believes that it will soon be possible to
00:24:42.940 --> 00:24:48.160
download human minds to a computer, with
immortality as a result. "Alliance to
00:24:48.160 --> 00:24:52.279
Rescue Civilization" by Burrows and
Shapiro is a project that aims to back up
00:24:52.279 --> 00:24:56.110
human civilization in a lunar facility.
The project artificially separates the
00:24:56.110 --> 00:25:01.000
hardware of the planet, with its oceans and
soils, from the data of human civilization.
00:25:01.000 --> 00:25:04.289
And last but not least, the most explicit
and radical separation as well as the
00:25:04.289 --> 00:25:08.609
least optimistic outlook on our future,
Elon Musk's SpaceX plan to colonize
00:25:08.609 --> 00:25:13.009
Mars, presented in September last year.
The goal of the presentation being to make
00:25:13.009 --> 00:25:18.470
living on Mars seem possible within our
lifetime. Possible - and fun.
00:25:18.470 --> 00:25:22.460
A less extreme version of these attempts
to escape doom is the belief that, with so much
00:25:22.460 --> 00:25:26.409
data at our fingertips and clever
scientists, we will figure out a way to solve
00:25:26.409 --> 00:25:30.419
our problems. Soon we'll laugh at our
panic over global warming safely aboard
00:25:30.419 --> 00:25:34.790
our CO2 vacuum cleaners. With this belief
we don't have to change our lives, our
00:25:34.790 --> 00:25:40.360
economies, our politics. We can carry on
without making radical changes. Is this
00:25:40.360 --> 00:25:46.239
apathy warranted? What is happening while
we are filling up the world's hard disks?
00:25:46.239 --> 00:25:50.009
Well, information is never disembodied, it
always needs a carrier and the minerals
00:25:50.009 --> 00:25:54.019
used in the technology hosting our data
come from conflict zones, resulting in
00:25:54.019 --> 00:25:58.549
slavery and ecocide. As for instance in
the coltan and cassiterite mines in Congo,
00:25:58.549 --> 00:26:02.679
gold mines in Ghana. Minerals used in
technology hosting our data come from
00:26:02.679 --> 00:26:07.399
unregulated zones, leading to extreme
pollution, as here in the black sludge
00:26:07.399 --> 00:26:15.850
lake in Baotou in China. E-waste is
exported to unregulated zones, and server
00:26:15.850 --> 00:26:20.070
farms spit out as much CO2 as
the global aviation industry. Our data
00:26:20.070 --> 00:26:23.860
cannot be separated from the physical, and
its physical side is not so pretty.
00:26:23.860 --> 00:26:27.789
And what is happening is that the earth is
getting warmer and climate research is not
00:26:27.789 --> 00:26:32.039
based on Twitter feeds but on
measurements, yet somehow it has largely been
00:26:32.039 --> 00:26:36.539
ignored for decades. Scientific consensus
was reached in the 80s, and if you compare
00:26:36.539 --> 00:26:40.070
the dangerously slow response to this, to
the response given to the threat of
00:26:40.070 --> 00:26:44.789
terrorism which has rapidly led to new
laws, even new presidents, this shows how
00:26:44.789 --> 00:26:48.799
stories, metaphors, and mythologies in the
world of social beings have more impact
00:26:48.799 --> 00:26:52.740
than scientific facts. And how threats
that require drastic changes to the status
00:26:52.740 --> 00:26:59.730
quo are willfully ignored.
So does this survival strategy work? This
00:26:59.730 --> 00:27:02.990
mythology, this belief in taking ourselves
out of the equation, to bring us closer to
00:27:02.990 --> 00:27:07.260
truth, to reality as it is, separating
ourselves from that which we observe,
00:27:07.260 --> 00:27:12.590
blinds us to the trouble we are in. Our
true nature is an embodied intelligence,
00:27:12.590 --> 00:27:17.620
not a brain in a jar: an organism
completely intertwined with its
00:27:17.620 --> 00:27:21.240
environment, its existence completely
dependent on the survival of the organisms
00:27:21.240 --> 00:27:25.679
it shares this planet with. We can't help
but anthropomorphize, to approach
00:27:25.679 --> 00:27:30.390
everything around us as part of our social
sphere with minds and agencies. And that
00:27:30.390 --> 00:27:35.320
is fine, it makes us human. It allows us
to study the world around us with empathy.
00:27:35.320 --> 00:27:39.090
The most important thing is that the
metaphor is not mistaken for reality. The
00:27:39.090 --> 00:27:45.009
computer creating, thinking, memorizing,
writing, reading, learning, understanding,
00:27:45.009 --> 00:27:50.169
and people being hard-wired, stuck in a
loop, unable to compute, interfacing with,
00:27:50.169 --> 00:27:55.710
and reprogramming ourselves - those
metaphors are so embedded in our culture.
00:27:55.710 --> 00:27:59.159
You can only hope to create awareness
about them. If there is more awareness
00:27:59.159 --> 00:28:02.480
about the misleading descriptions of
machines as human-like and humans as
00:28:02.480 --> 00:28:06.820
machine-like and all of reality as an
information process, it is more likely
00:28:06.820 --> 00:28:10.139
that there will be less blind enchantment
with certain technology, and more
00:28:10.139 --> 00:28:14.249
questions asked about its
purpose and demands.
00:28:14.249 --> 00:28:18.980
There is no strong AI... yet, only very
clever automation. At this moment in
00:28:18.980 --> 00:28:23.419
history machines are proxies for human
agendas and ideologies. There are many
00:28:23.419 --> 00:28:32.679
issues that need addressing. As Kate
Crawford and Meredith Whittaker point out
00:28:32.679 --> 00:28:37.369
in the AI Now report, recent examples of
AI deployments such as during the US
00:28:37.369 --> 00:28:41.190
elections and Brexit, or Facebook
revealing teenagers' emotional states to
00:28:41.190 --> 00:28:45.539
advertisers looking to target depressed
teens, show how the interests of those
00:28:45.539 --> 00:28:49.039
deploying advanced data systems can
overshadow the public interest, acting in
00:28:49.039 --> 00:28:53.080
ways contrary to individual autonomy and
collective welfare, often without this
00:28:53.080 --> 00:28:58.630
being visible at all to those affected.
The report points to many - I highly
00:28:58.630 --> 00:29:04.769
recommend reading it - and here are a few
concerns. Concerns about social safety
00:29:04.769 --> 00:29:08.290
nets and human resource distributions when
the dynamics of labor and employment
00:29:08.290 --> 00:29:12.691
change. Workers most likely to be affected
are women and minorities. Automated
00:29:12.691 --> 00:29:16.899
decision-making systems are often unseen
and there are few established means to
00:29:16.899 --> 00:29:20.460
assess their fairness, to contest and
rectify wrong or harmful decisions or
00:29:20.460 --> 00:29:29.590
impacts.
Those directly impacted.... Sorry,
00:29:29.590 --> 00:29:35.570
automated... No, sorry.... I'm lost...
Those directly impacted by deployment of
00:29:35.570 --> 00:29:49.909
AI systems rarely have a role in designing
them. sighs And to assess their
00:29:49.909 --> 00:29:54.070
fairness, to contest and rectify wrong and
harmful decisions or impacts, lacks....
00:29:54.070 --> 00:29:58.960
lack of methods measuring and assessing
social and economic impacts... nah, let's
00:29:58.960 --> 00:30:09.240
keep scrolling back.... In any case, there
is a great chance of "like-me" bias because
00:30:09.240 --> 00:30:17.299
of the uniform... uniformity of those
developing these systems. Seeing the
00:30:17.299 --> 00:30:20.889
oracle we've constructed for what it is
means to stop comforting
00:30:20.889 --> 00:30:25.269
ourselves, to ask questions. A quote from
"Superintelligence: The Idea That Eats
00:30:25.269 --> 00:30:29.800
Smart People" by Maciej Cegłowski:
the pressing ethical questions in machine
00:30:29.800 --> 00:30:33.430
learning are not about machines becoming
self-aware and taking over the world, but
00:30:33.430 --> 00:30:37.910
about how people can exploit other people,
or through carelessness
00:30:37.910 --> 00:30:43.070
introduce immoral behavior into automated
systems. Instead of waiting for the nerd
00:30:43.070 --> 00:30:46.860
rapture, or for Elon Musk to whisk us off
the planet, it is important to come to
00:30:46.860 --> 00:30:50.750
terms with a more modest perception of
ourselves and our machines. Facing the
00:30:50.750 --> 00:30:56.590
ethical repercussions of the systems we
are putting in place. Having the real
00:30:56.590 --> 00:31:02.320
discussion, not the one we hope for, but
the hard one that requires actual change
00:31:02.320 --> 00:31:06.540
and a new mythology. One that works, not
only for us, but for all those, human and
00:31:06.540 --> 00:31:11.800
non-human, we share the planet with.
Thank you. That's it.
00:31:11.800 --> 00:31:24.170
applause
00:31:24.170 --> 00:31:30.209
Herald Angel: Thank you, Marloes. Are there
any questions? Like, you would have one
00:31:30.209 --> 00:31:36.689
minute. laughs Okay. So, thank you
again. Give her a big applause again,
00:31:36.689 --> 00:31:39.469
thank you.
00:31:39.469 --> 00:31:43.976
applause
00:31:43.976 --> 00:31:49.958
34c3 outro
00:31:49.958 --> 00:32:04.901
subtitles created by c3subtitles.de
in the year 2018. Join, and help us!