34C3 - Why Do We Anthropomorphize Computers and Dehumanize Ourselves in the Process?

  • 0:00 - 0:15
    34c3 intro
  • 0:15 - 0:24
Herald: The next talk is by Marloes de Valk,
    she's an artist and writer from the
  • 0:24 - 0:31
    Netherlands and she's working with lots of
    different materials and media and at the
  • 0:31 - 0:40
    moment she's doing an 8-bit game, so the
    topic is "why do we anthropomorphize
  • 0:40 - 0:48
    computers and dehumanize ourselves in the
    process?" and we have a mumble, which is
  • 0:48 - 0:54
    doing the translation, the talk is in
    English and we will translate into French
  • 0:54 - 1:02
    and German.
    Okay, give a big applause for Marloes!
  • 1:02 - 1:10
    applause
  • 1:10 - 1:14
    Marloes: Thank you and thank you all for
  • 1:14 - 1:21
    coming, my name is Marloes de Valk and I'm
    going to talk about anthropomorphization
  • 1:21 - 1:27
    and I will approach this as a survival
    strategy, see how it works and if it is
  • 1:27 - 1:34
    effective. And when I'm speaking of big
    data, which is an umbrella term, my focus
  • 1:34 - 1:39
    will be on the socio-technical aspect of
    the phenomenon, the assumptions and
  • 1:39 - 1:44
    beliefs surrounding Big Data and on
    research using data exhaust or found data
  • 1:44 - 1:50
such as status updates on social media, web
searches and credit card payments.
  • 1:56 - 2:08
    Oh and now my slides are frozen. Oh my
    gosh.
  • 2:09 - 2:11
Audience: Have you tried turning it off and on again?
  • 2:11 - 2:16
    Marloes: laughs
I will in a moment. Gosh, it's
  • 2:16 - 2:26
    completely frozen... I'm very sorry,
technical staff, I have to exit, if I can.
  • 2:26 - 2:41
    I can't. Help! I have to get rid of
    something I think, should we just kill it?
  • 2:41 - 2:55
    That's so stupid yeah.
  • 2:55 - 2:58
    But they're gonna have a coffee soon and then it's gonna
  • 2:58 - 3:33
    Yes, force quit... I think I know
    what the problem is. I'm sorry it's, it's
  • 3:33 - 4:10
    really not working. All right let's see if
    we're back.
  • 4:10 - 4:19
    Okay, okay so sorry for the interruption.
  • 4:19 - 4:24
    I wanted to start by letting Silicon
    Valley itself tell a story about
  • 4:24 - 4:31
    technology, really, sorry about the
interruption. So, Silicon Valley propaganda:
  • 4:31 - 4:35
During our lifetime, we're about to see
    the transformation of the human race, it's
  • 4:35 - 4:39
    really something that blows my mind every
    time I think about it. People have no idea
  • 4:39 - 4:43
    how fast the world is changing and I want
    to give you a sense of that because it
  • 4:43 - 4:48
    fills me with awe and with an
    extraordinary sense of responsibility. I
  • 4:48 - 4:52
    want to give you a sense of why now is
different, why this decade, the next decade
  • 4:52 - 4:56
    is not interesting times
    but THE most extraordinary times ever in
  • 4:56 - 5:01
    human history and they truly are. What
    we're talking about here is the notion
  • 5:01 - 5:05
that faster, cheaper computing power, which
    is almost like a force of nature, is
  • 5:05 - 5:09
    driving a whole slew of technologies,
    technology being the force that takes what
  • 5:09 - 5:14
used to be scarce and makes it abundant.
    That is why we're heading towards this
  • 5:14 - 5:20
    extraordinary age of abundance. The future
will not take care of itself. As we know,
  • 5:20 - 5:24
    the world looks to America for progress
    and America looks to California and if you
  • 5:24 - 5:28
    ask most Californians where they get their
progress, they'll point towards the Bay,
  • 5:28 - 5:33
but here at the Bay there is no place left
    to point, so we have to create solutions
  • 5:33 - 5:38
    and my goal is to simplify complexity,
    take Internet technology and cross it with
  • 5:38 - 5:43
    an old industry and magic and progress and
    big things can happen. I really think
  • 5:43 - 5:47
    there are two fundamental paths for
    humans, one path is we stay on earth
  • 5:47 - 5:52
    forever, or some eventual extinction event
    wipes us out, I don't have a doomsday
  • 5:52 - 5:56
    prophecy but history suggests some
    doomsday event will happen. The
  • 5:56 - 6:01
    alternative is becoming a spacefaring and
    multiplanetary species and it will be like
  • 6:01 - 6:07
    really fun to go, you'll have a great
time. We will settle on Mars and we should,
  • 6:07 - 6:11
    because it's cool. When it comes to space
    I see it as my job to build infrastructure
  • 6:11 - 6:14
    the hard way. I'm using my resources to
    put in that infrastructure so that the
  • 6:14 - 6:19
    next generation of people can have a
    dynamic entrepreneurial solar system as
  • 6:19 - 6:23
    interesting as we see on the internet
    today. We want the population to keep
  • 6:23 - 6:29
    growing on this planet, we want to keep
using more energy per capita. Death
  • 6:29 - 6:34
    makes me very angry, probably the most
    extreme form of inequality is between
  • 6:34 - 6:38
    people who are alive and people who are
    dead. I have the idea that aging is
  • 6:38 - 6:42
    plastic, that it's encoded and if
    something is encoded you can crack the
  • 6:42 - 6:46
    code if you can crack the code you can
    hack the code and thermodynamically there
  • 6:46 - 6:51
    should be no reason we can't defer entropy
    indefinitely. We can end aging forever.
  • 6:51 - 6:55
    This is not about
    Silicon Valley billionaires
  • 6:55 - 6:57
    living forever off the blood of young
    people.
  • 6:57 - 7:02
    It's about a Star Trek future where no one
    dies of preventable diseases where life is
  • 7:02 - 7:06
    fair. Health technology is becoming an
    information technology, where we can read
  • 7:06 - 7:11
and edit our own genomes. Clearly it is
    possible through technology to make death
  • 7:11 - 7:17
    optional. Yes, our bodies are information
    processing systems. We can enable human
  • 7:17 - 7:22
    transformations that would rival Marvel
Comics: super muscularity, ultra endurance,
  • 7:22 - 7:27
    super radiation resistance, you could have
    people living on the moons of Jupiter,
  • 7:27 - 7:30
    who'd be modified in this way and they
    could physically harvest energy from the
  • 7:30 - 7:35
    gamma rays they were exposed to. Form a
    culture connected with the ideology of the
  • 7:35 - 7:39
future, promoting technical progress,
    artificial intellects, multi-body
  • 7:39 - 7:44
    immortality and cyborgization. We are at
the beginning of the beginning, the first
  • 7:44 - 7:49
    hour of day one, there have never been
more opportunities; the greatest products
  • 7:49 - 7:55
    of the next 25 years have not been
    invented yet. You are not too late.
  • 7:55 - 8:00
    We're going to take over the world, one
    robot at a time. It's gonna be an AI that
  • 8:00 - 8:04
is able to source, create, solve and answer
    just what is your desire. I mean this is
  • 8:04 - 8:09
    an almost godlike view of the future. AI
    is gonna be magic. Especially in the
  • 8:09 - 8:14
    digital manufacturing world, what is going
    to be created will effectively be a god,
  • 8:14 - 8:18
    the idea needs to spread before the
    technology, the church is how we spread
  • 8:18 - 8:22
    the word, the gospel. If you believe in
    it, start a conversation with someone else
  • 8:22 - 8:26
    and help them understand the same things.
    Computers are going to take over from
  • 8:26 - 8:30
    humans, no question, but when I got that
    thinking in my head about if I'm going to
  • 8:30 - 8:34
    be treated in the future as a pet to these
    smart machines, well I'm gonna treat my
  • 8:34 - 8:38
    own pet dog really nice, but in the end we
    may just have created the species that is
  • 8:38 - 8:43
    above us. Chaining it isn't gonna be the
    solution as it will be stronger than any
  • 8:43 - 8:48
chain could put on. The existential risk
that is associated with AI: we will not be
  • 8:48 - 8:51
    able to beat
AI, so then, as the saying goes, if you
  • 8:51 - 8:55
    can't beat them, join them.
    History has shown us we aren't gonna win
  • 8:55 - 8:59
    this war by changing human behavior but
    maybe we can build systems that are so
  • 8:59 - 9:03
    locked down, that humans lose the ability
    to make dumb mistakes until we gain the
  • 9:03 - 9:08
    ability to upgrade the human brain, it's
    the only way. Let's stop pretending we can
  • 9:08 - 9:12
    hold back the development of intelligence
    when there are clear massive short-term
  • 9:12 - 9:16
    economic benefits to those who develop it
    and instead understand the future and have
  • 9:16 - 9:21
    it treat us like a beloved elder who
    created it. As a company, one of our
  • 9:21 - 9:24
    greatest cultural strengths is accepting
    the fact that if you're gonna invent,
  • 9:24 - 9:29
    you're gonna disrupt. Progress is
    happening because there is economic
  • 9:29 - 9:33
    advantage to having machines work for you
    and solve problems for you. People are
  • 9:33 - 9:39
    chasing that. AI, the term has become more
of a broad, almost marketing-driven term
  • 9:39 - 9:42
    and I'm probably okay with that. What
    matters is what people think of when they
  • 9:42 - 9:47
    hear of this. We are in a deadly race
    between politics and technology, the fate
  • 9:47 - 9:51
    of our world may depend on the effort of a
    single person who builds or propagates the
  • 9:51 - 9:58
    machinery of freedom, that makes the world
    safe for capitalism.
  • 9:58 - 10:04
    These were all quotes. Every single one.
Not only Silicon Valley CEOs speak of
  • 10:04 - 10:08
technology in mysterious ways, let's see
    some examples from the media.
  • 10:08 - 10:12
On artificial intelligence regulation:
"Let's not regulate mathematics", a headline
  • 10:12 - 10:17
from import.io from May 2016 about the
European General Data Protection
  • 10:17 - 10:22
Regulation, and the article concludes:
    autonomous cars should be regulated as
  • 10:22 - 10:26
    cars, they should safely deliver users to
    their destinations in the real world and
  • 10:26 - 10:31
    overall reduce the number of accidents.
    How they achieve this is irrelevant. With
  • 10:31 - 10:34
    enough data the numbers speak for
themselves, which comes from the super
  • 10:34 - 10:40
famous article "The End of Theory" by
Chris Anderson in Wired magazine, 2008.
  • 10:40 - 10:45
    "Google creates an AI that can teach
    itself to be better than humans" headline
  • 10:45 - 10:49
    from "The Independent. The
    article continues the company's AI
  • 10:49 - 10:53
division DeepMind has unveiled "AlphaGo
Zero", an extremely advanced system that
  • 10:53 - 10:58
    managed to accumulate thousands of years
    of human knowledge within days. Microsoft
  • 10:58 - 11:03
apologizing for their teen chatbot gone
Nazi, stating it wasn't their fault: "We're
  • 11:03 - 11:06
    deeply sorry for the unintended and
    hurtful tweets from Tay which do not
  • 11:06 - 11:14
    represent who we are or what we stand for
nor how we designed Tay", and then the PC
  • 11:14 - 11:19
World article "AI just 3D printed a brand
    new Rembrandt and it's shockingly good",
  • 11:19 - 11:22
    the subtitle reads
    "the next Rembrandt project used data and
  • 11:22 - 11:27
    deep learning to produce uncanny results".
Advertising firm J. Walter Thompson
  • 11:27 - 11:32
unveiled a 3D printed painting called "The
Next Rembrandt", based on 346 paintings of
  • 11:32 - 11:36
the old master. Not just PC World, but
    many more articles touted similar titles
  • 11:36 - 11:41
    presenting the painting to the public, as
if it were made by a computer, a 3D
  • 11:41 - 11:45
    printer, AI and deep learning. It is clear
    though, that the computer programmers who
  • 11:45 - 11:49
    worked on the project are not computers
    and neither are the people who tagged the
  • 11:49 - 11:54
    346 Rembrandt paintings by hand. The
    painting was made by a team of programmers
  • 11:54 - 11:59
    and researchers and it took them 18 months
    to do. So what is communicated through
  • 11:59 - 12:03
    these messages is that the computer did
    it, yet there is no strong AI, as in
  • 12:03 - 12:07
    consciousness in machines at this moment,
    only very clever automation, meaning it
  • 12:07 - 12:12
    was really us. We comprehend the role and
    function of non-human actors rationally,
  • 12:12 - 12:16
    but still intuitively approach them
differently. We anthropomorphize, and
  • 12:16 - 12:19
    stories about the intelligent things
machines can do reinforce the belief in
  • 12:19 - 12:27
the human-like agency of machines. So why
do we do it?
  • 12:27 - 12:28
    I'd like to think of this as two survival
  • 12:28 - 12:33
    strategies that found each other in big
    data and AI discourse. George Zarkadakis
  • 12:33 - 12:37
in the book "In Our Own Image" describes
the root of anthropomorphization: during
  • 12:37 - 12:41
    the evolution of the modern mind humans
    acquired and developed general-purpose
  • 12:41 - 12:44
language through social language, and this
    first social language was a way of
  • 12:44 - 12:49
grooming, of creating social cohesion.
    We gained theory of mind to believe that
  • 12:49 - 12:53
    other people have thoughts, desires,
    intentions and feelings of their own -
  • 12:53 - 12:57
empathy. And this led to describing
    the world in social terms, perceiving
  • 12:57 - 13:01
    everything around us as agents possessing
mind, including the nonhuman. When hunting,
  • 13:01 - 13:06
    anthropomorphizing animals had a great
    advantage because you could strategize,
  • 13:06 - 13:12
predict their movements. Reeves and Nass
  • 13:12 - 13:16
    were picking up on this
    anthropomorphization and they show through
  • 13:16 - 13:20
    multiple experiments that we haven't
changed that much: through multiple
  • 13:20 - 13:24
    experiments they show how people treat
    computers, television and new media like
  • 13:24 - 13:28
real people and places. Even though the test
    subjects were completely unaware of it,
  • 13:28 - 13:33
    they responded to computers as they would
to people: being polite, cooperative,
  • 13:33 - 13:37
    attributing personality characteristics
    such as aggressiveness, humor, expertise
  • 13:37 - 13:41
    and even gender. Meaning we haven't
    evolved that much, we still do it.
  • 13:41 - 13:45
    Microsoft unfortunately misinterpreted
    their research and developed the innocent
  • 13:45 - 13:50
yet much-hated Clippy the paper clip,
appearing one year later in Office 97.
  • 13:50 - 13:55
    This survival strategy found its way into
    another one. The Oracle. Survival through
  • 13:55 - 13:58
    predicting events.
    The second strategy is trying to predict
  • 13:58 - 14:03
    the future, to steer events in our favor,
    in order to avoid disaster. The fear of
  • 14:03 - 14:07
    death has inspired us throughout the ages
    to try and predict the future and it has
  • 14:07 - 14:10
led us to consult oracles and to create
    a new one.
  • 14:10 - 14:14
    Because we cannot predict the future in
the midst of life's many insecurities, we
  • 14:14 - 14:18
    desperately crave the feeling of being in
control over our destiny. We have
  • 14:18 - 14:23
    developed ways to calm our anxiety, to
comfort ourselves, and what we do is
  • 14:23 - 14:27
obfuscate the human hand in the generation
    of messages that require an objective or
  • 14:27 - 14:32
authoritative feel. Although disputed, it is
    commonly believed that the Delphic Oracle
  • 14:32 - 14:36
    delivered messages from her god Apollo in
    a state of trance induced by intoxicating
  • 14:36 - 14:41
    vapors arising from the chasm over which
she was seated. Possessed by her god, the
  • 14:41 - 14:45
    Oracle spoke ecstatically and
    spontaneously. Priests of the temple then
  • 14:45 - 14:50
    translated her gibberish into the
prophecies the seekers of advice were
  • 14:50 - 14:56
    sent home with. And Apollo had spoken.
    Nowadays we turn to data for advice. The
  • 14:56 - 15:00
    Oracle of Big Data functions in a similar
    way to the Oracle of Delphi. Algorithms
  • 15:00 - 15:05
    programmed by humans are fed data and
    consequently spit out numbers that are
  • 15:05 - 15:08
    then translated and interpreted by
    researchers into the prophecies the
  • 15:08 - 15:12
    seekers of advice are sent home with. The
bigger the data set, the more accurate the
  • 15:12 - 15:16
    results. Data has spoken.
    We are brought closer to the truth, to
  • 15:16 - 15:22
    reality as it is, unmediated by us,
subjective, biased and error-prone humans.
  • 15:22 - 15:27
    This Oracle inspires great hope. It's a
utopia, and this is best put into words in
  • 15:27 - 15:30
    the article "The end of theory" by
    Anderson where he states that with enough
  • 15:30 - 15:35
    data the numbers can speak for themselves.
    We can forget about taxonomy, ontology,
  • 15:35 - 15:39
psychology. Who knows why people do what
they do? The point is they do it, and we
  • 15:39 - 15:43
    can track and measure it with
unprecedented fidelity. With enough data,
  • 15:43 - 15:48
    the numbers speak for themselves. This
    Oracle is of course embraced with great
  • 15:48 - 15:53
    enthusiasm by database and storage
    businesses as shown here in an Oracle
  • 15:53 - 15:59
    presentation slide. High Five! And getting
    it right one out of ten times and using
  • 15:59 - 16:03
    the one success story to strengthen the
    belief in big data superpowers happens a
  • 16:03 - 16:07
lot in the media. A peculiar example is
    the story on Motherboard about how
  • 16:07 - 16:11
    "Cambridge Analytica" helped Trump win the
    elections by psychologically profiling the
  • 16:11 - 16:15
    entire American population and using
    targeted Facebook ads to influence the
  • 16:15 - 16:20
    results of the election. This story evokes
    the idea that they know more about you
  • 16:20 - 16:27
    than your own mother. The article reads
    "more likes could even surpass what a
  • 16:27 - 16:31
    person thought they knew about themselves"
    and although this form of manipulation is
  • 16:31 - 16:36
seriously scary and very undemocratic, as
Cathy O'Neil, author of "Weapons
  • 16:36 - 16:42
of Math Destruction", notes: "don't
    believe the hype".
  • 16:42 - 16:45
It wasn't just Trump, everyone was doing it:
Hillary was using the Groundwork, a
  • 16:45 - 16:49
    startup funded by Google's Eric Schmidt,
Obama used the Groundwork too, but the
  • 16:49 - 16:53
Groundwork somehow comes across a lot more
cute compared to Cambridge Analytica,
  • 16:53 - 16:56
    funded by billionaire Robert Mercer who
is also heavily invested in alt-right
  • 16:56 - 17:01
media outlet Breitbart, which describes
    itself as a killing machine waging the war
  • 17:01 - 17:06
for the West. He also donated Cambridge
Analytica's services to the Brexit campaign.
  • 17:06 - 17:09
    The Motherboard article and many others
    describing the incredibly detailed
  • 17:09 - 17:13
    knowledge Cambridge Analytica has on
    American citizens were amazing advertising
  • 17:13 - 17:18
    for the company, but most of all a warning
    sign that applying big data research to
  • 17:18 - 17:22
    elections creates a very undemocratic
asymmetry in available information and
  • 17:22 - 17:28
    undermines the notion of an informed
citizenry. danah boyd and Kate Crawford
  • 17:28 - 17:31
    described the beliefs attached to big data
as a mythology: "the widespread belief
  • 17:31 - 17:35
    that large datasets offer a higher form of
    intelligence and knowledge that can
  • 17:35 - 17:39
generate insights that were previously
impossible, with the aura of truth,
  • 17:39 - 17:44
    objectivity and accuracy".
    The deconstruction of this myth was
  • 17:44 - 17:48
attempted as early as 1984. In "A
Spreadsheet Way of Knowledge", Steven Levy
  • 17:48 - 17:52
describes how the authoritative look of a
    spreadsheet and the fact that it was done
  • 17:52 - 17:55
    by a computer has a strong persuasive
    effect on people, leading to the
  • 17:55 - 18:02
    acceptance of the proposed model of
reality as gospel. He says: fortunately, few
  • 18:02 - 18:05
    would argue that all relations between
    people can be quantified and manipulated
  • 18:05 - 18:10
by formulas. Of human behavior, no
faultless assumptions, and so no perfect
  • 18:10 - 18:15
model, can be made. Tim Harford also refers
    to faith when he describes four
  • 18:15 - 18:20
assumptions underlying Big Data research.
The first, uncanny accuracy, is easy to
  • 18:20 - 18:24
    overrate, if we simply ignore false
positives.
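That first claim comes down to base-rate arithmetic. A minimal sketch in Python, with invented numbers purely for illustration: a detector that is right 99% of the time still produces mostly false alarms when the thing it hunts for is rare.

```python
# Invented numbers, purely for illustration of the base-rate effect.
population = 1_000_000
prevalence = 1 / 1_000        # 1 in 1000 people are true cases
sensitivity = 0.99            # share of true cases that get flagged
false_positive_rate = 0.01    # share of non-cases flagged anyway

true_cases = population * prevalence           # 1000
non_cases = population - true_cases            # 999000

true_flags = true_cases * sensitivity          # 990
false_flags = non_cases * false_positive_rate  # 9990

precision = true_flags / (true_flags + false_flags)
print(f"Flags that are real cases: {precision:.1%}")  # ~9.0%
```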
  • 18:24 - 18:27
The claim that causation has been knocked off its
    pedestal is fine if we are making
  • 18:27 - 18:32
predictions in a stable environment,
    but not if the world is changing. If you
  • 18:32 - 18:35
    do not understand why things correlate,
you cannot know what might break down this
  • 18:35 - 18:39
    correlation either.
    The promise that sampling bias does not
  • 18:39 - 18:43
    matter in such large data sets is simply
    not true, there is lots of bias in data
  • 18:43 - 18:48
sets. As for the idea that with enough
data the numbers speak for themselves:
  • 18:48 - 18:54
that seems hopelessly naive in data sets
    where spurious patterns vastly outnumber
  • 18:54 - 18:58
    genuine discoveries. This last point is
described by Nassim Nicholas Taleb, who writes
  • 18:58 - 19:03
    that big data research has brought cherry-
picking to an industrial level. David Leinweber
  • 19:03 - 19:08
    in a 2007 paper demonstrated that data
    mining techniques could show a strong, but
  • 19:08 - 19:14
    spurious relationship between the changes
    in the S&P 500 stock index and butter
  • 19:14 - 19:19
production in Bangladesh.
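Leinweber's point is easy to reproduce in miniature. The following is not his analysis, just a minimal sketch of cherry-picking at an industrial level: generate enough meaningless series, search them against one target, and a "strong" correlation always turns up.

```python
# A toy demonstration of spurious correlation through search alone.
import random
import statistics

random.seed(42)

def random_walk(n):
    """Cumulative sum of coin-flip steps: a toy stand-in for a time series."""
    level, walk = 0.0, []
    for _ in range(n):
        level += random.choice((-1.0, 1.0))
        walk.append(level)
    return walk

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

target = random_walk(60)                             # "the S&P 500"
candidates = [random_walk(60) for _ in range(1000)]  # "butter production", etc.

best_r = max(abs(pearson(target, c)) for c in candidates)
print(f"Best |r| among 1000 meaningless series: {best_r:.2f}")
# The winner is usually impressively strong (|r| around 0.9 or higher),
# found by search alone; it says nothing about the world.
```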
What is strange about this mythology, that large data sets
  • 19:19 - 19:23
offer some higher form of intelligence,
is that it is paradoxical: it attributes human
  • 19:23 - 19:27
    qualities to something, while at the same
    time considering it to be more objective
  • 19:27 - 19:32
    and more accurate than humans, but these
    beliefs can exist side by side. Consulting
  • 19:32 - 19:36
this Oracle uncritically has quite far-
    reaching implications.
  • 19:36 - 19:40
For one, it dehumanizes humans by asserting
    that human involvement through hypothesis
  • 19:40 - 19:47
and interpretation is unreliable, and only
    by removing ourselves from the equation,
  • 19:47 - 19:51
    can we finally see the world as it is.
    The practical consequence of this dynamic
  • 19:51 - 19:55
    is that it is no longer possible to argue
    with the outcome of big data analysis
  • 19:55 - 20:00
because, first of all, it's supposedly bias
    free, interpretation free, you can't
  • 20:00 - 20:04
    question it, you cannot check if it is
    bias free because the algorithms governing
  • 20:04 - 20:09
    the analysis are often completely opaque.
    This becomes painful when you find
  • 20:09 - 20:13
    yourself in the wrong category of a social
    sorting algorithm guiding real-world
  • 20:13 - 20:18
decisions on insurance, mortgages, work,
border checks, scholarships and so on.
  • 20:18 - 20:24
    Exclusion from certain privileges is only
    the most optimistic scenario, so it is not
  • 20:24 - 20:28
    as effective as we might hope. It has a
    dehumanizing dark side
  • 20:28 - 20:31
So why do we
believe? How did we become so infatuated
  • 20:31 - 20:35
with information? Our idea about
    information changed radically in the
  • 20:35 - 20:40
previous century, from a small statement of
    fact, to the essence of man's inner life
  • 20:40 - 20:44
    and this shift started with the advent of
    cybernetics and information theory in the
  • 20:44 - 20:48
40s and 50s, where information was suddenly
    seen as a means to control a system, any
  • 20:48 - 20:53
system, be it mechanical, physical,
    biological, cognitive or social. Here you
  • 20:53 - 20:58
see Norbert Wiener's Moth, a machine he
    built as part of a public relations stunt
  • 20:58 - 21:03
    financed by Life magazine. The photos with
    him and his moth were unfortunately never
  • 21:03 - 21:07
    published, because according to Life's
    editors, it didn't illustrate the human
  • 21:07 - 21:13
    characteristics of computers very well.
Norbert Wiener in "The Human Use of Human
  • 21:13 - 21:15
Beings" wrote,
    "to live effectively is to live with
  • 21:15 - 21:19
    adequate information, thus communication
    and control belong to the essence of man's
  • 21:19 - 21:24
    inner life, even as they belong to his
    life in society" and almost
  • 21:24 - 21:28
simultaneously, Shannon published "A
Mathematical Theory of Communication", a
  • 21:28 - 21:32
    theory of signals transmitted over
distance. John Durham Peters, in "Speaking
  • 21:32 - 21:36
into the Air", describes how over time this
    information theory got reinterpreted by
  • 21:36 - 21:41
    social scientists who mistook signal for
    significance.
  • 21:41 - 21:45
Orit Halpern in "Beautiful Data" describes
    how Alan Turing and Bertrand Russell had
  • 21:45 - 21:50
    proved conclusively in struggling with the
    Entscheidungsproblem that many analytic
  • 21:50 - 21:54
    functions could not be logically
    represented or mechanically executed and
  • 21:54 - 21:58
    therefore machines were not human minds.
    She asks the very important question of
  • 21:58 - 22:03
why we have forgotten this history and why
we still regularly equate reason with
  • 22:03 - 22:08
rationality. Having forgotten this, ten
    years later in '58, artificial
  • 22:08 - 22:12
    intelligence research began comparing
    computers and humans. Simon and Newell
  • 22:12 - 22:16
wrote: "the programmed computer and human
    problem solver are both species belonging
  • 22:16 - 22:19
    to the genus 'information processing
system'."
  • 22:19 - 22:23
    In the 80s, information was granted an
    even more powerful status: that of
  • 22:23 - 22:28
    commodity. Like it or not, information has
    finally surpassed material goods as our
  • 22:28 - 22:41
    basic resource. Bon appetit! How did we
    become so infatuated with information?
  • 22:41 - 22:51
    Hey sorry sighs yeah, this is an image
    of a medieval drawing where the humors,
  • 22:51 - 22:57
the liquids in the body, were seen as the
essence of our intelligence and the
  • 22:57 - 23:03
    functioning of our system. A metaphor for
our intelligence. By the 1500s, automata
  • 23:03 - 23:06
powered by springs and gears had been
    devised, inspiring leading thinkers such
  • 23:06 - 23:12
as René Descartes to assert that humans
    are complex machines. The mind or soul was
  • 23:12 - 23:15
    immaterial, completely separated from the
    body - only able to interact with the body
  • 23:15 - 23:19
    through the pineal gland, which he
    considered the seat of the soul.
  • 23:19 - 23:23
And we still do it: the brain is commonly
compared to a computer, with the role of
  • 23:23 - 23:27
    physical hardware played by the brain, and
our thoughts serving as software. The brain
  • 23:27 - 23:32
is an information processor. It is a metaphor
    that is sometimes mistaken for reality.
  • 23:32 - 23:36
Because of this, the belief in the Oracle
    of big data is not such a great leap.
  • 23:36 - 23:39
    Information is the essence of
    consciousness in this view. We've come
  • 23:39 - 23:45
full circle: we see machines as human-like
    and view ourselves as machines. So does it
  • 23:45 - 23:48
work? We started out with two survival
strategies: predicting the behavior of
  • 23:48 - 23:52
    others through anthropomorphizing and
    trying to predict the future through
  • 23:52 - 23:56
    oracles. The first has helped us survive
    in the past, allows us to be empathic
  • 23:56 - 24:00
    towards others - human and non-human. The
    second has comforted us throughout the
  • 24:00 - 24:04
ages, creating the idea of control, of
    being able to predict and prevent
  • 24:04 - 24:08
    disaster. So how are they working for us
    today?
  • 24:08 - 24:12
    We definitely have reasons to be concerned
    with the sword of Damocles hanging over
  • 24:12 - 24:16
    our heads: global warming setting in
    motion a chain of catastrophes threatening
  • 24:16 - 24:21
    our survival, facing the inevitable death
    of capitalism's myth of eternal growth as
  • 24:21 - 24:27
Earth's resources run out. We are in a
    bit of a pickle. Seeing our consciousness
  • 24:27 - 24:31
    as separate from our bodies, like software
    and hardware. That offers some comforting
  • 24:31 - 24:34
    options.
    One option is that since human
  • 24:34 - 24:38
    consciousness is so similar to computer
    software, it can be transferred to a
  • 24:38 - 24:43
    computer. Ray Kurzweil for example
    believes that it will soon be possible to
  • 24:43 - 24:48
    download human minds to a computer, with
    immortality as a result. "Alliance to
  • 24:48 - 24:52
    Rescue Civilization" by Burrows and
    Shapiro is a project that aims to back up
  • 24:52 - 24:56
    human civilization in a lunar facility.
    The project artificially separates the
  • 24:56 - 25:01
hardware of the planet, with its oceans and
soils, from the data of human civilization.
  • 25:01 - 25:04
    And last but not least, the most explicit
    and radical separation as well as the
  • 25:04 - 25:09
    least optimistic outlook on our future,
Elon Musk's SpaceX plan to colonize
  • 25:09 - 25:13
    Mars, presented in September last year.
    The goal of the presentation being to make
  • 25:13 - 25:18
living on Mars seem possible within our
    lifetime. Possible - and fun.
  • 25:18 - 25:22
    A less extreme version of these attempts
to escape doom is the belief that with so much
  • 25:22 - 25:26
    data at our fingertips and clever
scientists, we will figure out a way to solve
  • 25:26 - 25:30
    our problems. Soon we'll laugh at our
    panic over global warming safely aboard
  • 25:30 - 25:35
    our CO2 vacuum cleaners. With this belief
    we don't have to change our lives, our
  • 25:35 - 25:40
    economies, our politics. We can carry on
    without making radical changes. Is this
  • 25:40 - 25:46
    apathy warranted? What is happening while
    we are filling up the world's hard disks?
  • 25:46 - 25:50
    Well, information is never disembodied, it
always needs a carrier, and the minerals
  • 25:50 - 25:54
    used in the technology hosting our data
    come from conflict zones, resulting in
  • 25:54 - 25:59
slavery and ecocide, as for instance in
    the coltan and cassiterite mines in Congo,
  • 25:59 - 26:03
or gold mines in Ghana. Minerals used in
    technology hosting our data come from
  • 26:03 - 26:07
unregulated zones, leading to extreme
    pollution, as here in the black sludge
  • 26:07 - 26:16
    lake in Baotou in China. EU waste is
    exported to unregulated zones, and server
  • 26:16 - 26:20
farms spit out as much CO2 as
    the global aviation industry. Our data
  • 26:20 - 26:24
    cannot be separated from the physical, and
    its physical side is not so pretty.
  • 26:24 - 26:28
    And what is happening is that the earth is
    getting warmer and climate research is not
  • 26:28 - 26:32
based on Twitter feeds, but on
measurements, yet somehow has been largely
  • 26:32 - 26:37
    ignored for decades. Scientific consensus
    was reached in the 80s, and if you compare
  • 26:37 - 26:40
    the dangerously slow response to this, to
    the response given to the threat of
  • 26:40 - 26:45
terrorism, which has rapidly led to new
    laws, even new presidents, this shows how
  • 26:45 - 26:49
    stories, metaphors, and mythologies in the
    world of social beings have more impact
  • 26:49 - 26:53
    than scientific facts. And how threats
    that require drastic changes to the status
  • 26:53 - 27:00
    quo are willfully ignored.
    So does this survival strategy work? This
  • 27:00 - 27:03
    mythology, this belief in taking ourselves
    out of the equation, to bring us closer to
  • 27:03 - 27:07
    truth, to reality as it is, separating
    ourselves from that which we observe,
  • 27:07 - 27:13
    blinds us to the trouble we are in. And
our true nature is an embodied intelligence,
  • 27:13 - 27:18
    not a brain in a jar, an organism
    completely intertwined with its
  • 27:18 - 27:21
    environment, its existence completely
    dependent on the survival of the organisms
  • 27:21 - 27:26
it shares this planet with. We can't help
but anthropomorphize, approaching
  • 27:26 - 27:30
    everything around us as part of our social
sphere, with minds and agencies. And that
  • 27:30 - 27:35
    is fine, it makes us human. It allows us
    to study the world around us with empathy.
  • 27:35 - 27:39
    The most important thing is that the
    metaphor is not mistaken for reality. The
  • 27:39 - 27:45
    computer creating, thinking, memorizing,
    writing, reading, learning, understanding,
  • 27:45 - 27:50
    and people being hard-wired, stuck in a
    loop, unable to compute, interfacing with,
  • 27:50 - 27:56
    and reprogramming ourselves - those
    metaphors are so embedded in our culture.
  • 27:56 - 27:59
    You can only hope to create awareness
    about them. If there is more awareness
  • 27:59 - 28:02
    about the misleading descriptions of
    machines as human-like and humans as
  • 28:02 - 28:07
    machine-like and all of reality as an
    information process, it is more likely
  • 28:07 - 28:10
    that there will be less blind enchantment
    with certain technology, and more
  • 28:10 - 28:14
    questions asked about its
    purpose and demands.
  • 28:14 - 28:19
    There is no strong AI... yet, only very
    clever automation. At this moment in
  • 28:19 - 28:23
history, machines are proxies for human
    agendas and ideologies. There are many
  • 28:23 - 28:33
    issues that need addressing. As Kate
    Crawford and Meredith Whittaker point out
  • 28:33 - 28:37
    in the AI Now report, recent examples of
    AI deployments such as during the US
  • 28:37 - 28:41
    elections and Brexit, or Facebook
revealing teenagers' emotional states to
  • 28:41 - 28:46
    advertisers looking to target depressed
    teens, show how the interests of those
  • 28:46 - 28:49
    deploying advanced data systems can
    overshadow the public interest, acting in
  • 28:49 - 28:53
    ways contrary to individual autonomy and
    collective welfare, often without this
  • 28:53 - 28:59
    being visible at all to those affected.
    The report points to many - I highly
  • 28:59 - 29:05
    recommend reading it - and here are a few
    concerns. Concerns about social safety
  • 29:05 - 29:08
    nets and human resource distributions when
the dynamics of labor and employment
  • 29:08 - 29:13
    change. Workers most likely to be affected
    are women and minorities. Automated
  • 29:13 - 29:17
    decision-making systems are often unseen
    and there are few established means to
  • 29:17 - 29:20
    assess their fairness, to contest and
    rectify wrong or harmful decisions or
  • 29:20 - 29:30
    impacts.
    Those directly impacted.... Sorry,
  • 29:30 - 29:36
    automated... No, sorry.... I'm lost...
    Those directly impacted by deployment of
  • 29:36 - 29:50
    AI systems rarely have a role in designing
    them. sighs And to assess their
  • 29:50 - 29:54
fairness, to contest and rectify wrong and
    harmful decisions or impacts, lacks....
  • 29:54 - 29:59
    lack of methods measuring and assessing
    social and economic impacts... nah, let's
  • 29:59 - 30:09
    keep scrolling back.... In any case, there
is a great chance of "like me" bias because
  • 30:09 - 30:17
    of the uniform... uniformity of those
    developing these systems. Seeing the
  • 30:17 - 30:21
    Oracle we've constructed for what it is
means to stop comforting
  • 30:21 - 30:25
    ourselves, to ask questions. A quote from
"Superintelligence: The Idea That Eats
  • 30:25 - 30:30
Smart People" by Maciej Cegłowski:
    the pressing ethical questions in machine
  • 30:30 - 30:33
    learning are not about machines becoming
    self-aware and taking over the world, but
  • 30:33 - 30:38
about how people can exploit other people,
or through carelessness
  • 30:38 - 30:43
    introduce immoral behavior into automated
    systems. Instead of waiting for the nerd
  • 30:43 - 30:47
    rapture, or for Elon Musk to whisk us off
    the planet, it is important to come to
  • 30:47 - 30:51
    terms with a more modest perception of
    ourselves and our machines. Facing the
  • 30:51 - 30:57
    ethical repercussions of the systems we
    are putting in place. Having the real
  • 30:57 - 31:02
    discussion, not the one we hope for, but
    the hard one that requires actual change
  • 31:02 - 31:07
    and a new mythology. One that works, not
    only for us, but for all those human and
  • 31:07 - 31:12
    non-human, we share the planet with.
    Thank you. That's it.
  • 31:12 - 31:24
    applause
  • 31:24 - 31:30
Herald Angel: Thank you, Marloes. Are there
    any questions? Like, you would have one
  • 31:30 - 31:37
minute. laughs Okay. So, thank you
    again. Give her a big applause again,
  • 31:37 - 31:39
    thank you.
  • 31:39 - 31:44
    applause
  • 31:44 - 31:50
    34c3 outro
  • 31:50 - 32:05
    subtitles created by c3subtitles.de
    in the year 2018. Join, and help us!