
Artificial intelligence and its ethics | DW Documentary

  • 0:01 - 0:04
    This is Hatsune Miku.
  • 0:04 - 0:06
    She is a hologram.
  • 0:06 - 0:09
    And this is Akihiko Kondo, her husband.
  • 0:11 - 0:16
    -"Hello."
    -"Hello."
  • 0:18 - 0:24
    -"You look cute today."
    -"I love complements."
  • 0:28 - 0:32
    Miku is a simple form of artificial
    intelligence, and for Kondo,
  • 0:32 - 0:34
    it was a case of love at first sight.
  • 0:35 - 0:37
    Miku has become a legitimate popstar
  • 0:37 - 0:40
    and even appears in concerts as a 3D
    projection.
  • 0:43 - 0:47
    In November 2018, Kondo married Miku at a
    ceremony in Tokyo.
  • 0:48 - 0:50
    He placed the ring around the wrist
    of a Miku doll.
  • 0:52 - 0:54
    He now keeps the doll in his bedroom.
  • 0:56 - 0:59
    Kondo's relationships with real women
    have been painful,
  • 0:59 - 1:01
    so he chose a virtual partner.
  • 1:03 - 1:08
    -"I love her, but it's hard to say if she
    loves me. Still, if you asked her,
  • 1:09 - 1:11
    I think she'd say yes."
  • 1:15 - 1:20
    Hatsune Miku and Akihiko Kondo are an
    extreme example of the relationship
  • 1:20 - 1:22
    between people and machines.
  • 1:25 - 1:29
    In the future, we'll no doubt spend more
    time interacting with technology
  • 1:29 - 1:32
    that uses artificial intelligence or AI.
  • 1:32 - 1:36
    We may even develop robots that are
    smarter than we are.
  • 1:38 - 1:40
    Now in the 21st century,
  • 1:40 - 1:44
    we will have to decide how to deal with
    this complicated new situation.
  • 2:03 - 2:06
    For this report, we interviewed
    philosophers and scientists
  • 2:06 - 2:08
    around the world.
  • 2:10 - 2:13
    We talked to German philosopher
    Thomas Metzinger, who advocates
  • 2:13 - 2:16
    the use of ethics guidelines for AI
    development from the EU.
  • 2:20 - 2:23
    Physicist Max Tegmark, who warns about
  • 2:23 - 2:28
    the development of an all-powerful AI
    and the totalitarian surveillance state.
  • 2:30 - 2:33
    And German computer scientist
    Jürgen Schmidhuber,
  • 2:33 - 2:37
    who predicts that AI will spread from the
    Earth into the cosmos.
  • 2:44 - 2:48
    We met professor Schmidhuber at a
    business event in Zurich.
  • 2:48 - 2:50
    He often speaks at such events,
  • 2:50 - 2:54
    where he outlines his vision of the role
    that artificial intelligence
  • 2:54 - 2:55
    may play in our future.
  • 2:58 - 3:00
    -"Professor Jürgen Schmidhuber!"
  • 3:03 - 3:06
    His presentations are wide-ranging
    and thought-provoking.
  • 3:09 - 3:13
    -"In the near future, perhaps a few
    decades from now,
  • 3:13 - 3:15
    we will for the first time have AI
  • 3:15 - 3:18
    that could do much more than people can
    do right now on their own.
  • 3:20 - 3:24
    And we will realize that the majority of
    physical resources are not confined
  • 3:24 - 3:26
    to a rather small biosphere.
  • 3:27 - 3:31
    In our solar system, there is a lot of
    material that can be used to build robots.
  • 3:31 - 3:36
    We could develop robots, transmitters and
    receivers that would allow an AI to be
  • 3:36 - 3:39
    sent and received at the speed of light.
  • 3:40 - 3:42
    We can already do this in our
    laboratories.
  • 3:44 - 3:46
    This will be a huge development.
  • 3:46 - 3:49
    Perhaps the most important since the
    beginning of life on Earth,
  • 3:49 - 3:51
    three and a half billion years ago."
  • 3:56 - 3:59
    But is the professor's vision accurate?
  • 4:00 - 4:04
    Will humans at some point be overtaken by
    super intelligent machines?
  • 4:07 - 4:09
    Perhaps this process has already begun.
  • 4:17 - 4:20
    To find out more, we travel to Japan.
  • 4:21 - 4:24
    Doctors and scientists at the University
    of Tokyo's Research Hospital
  • 4:24 - 4:27
    are exploring the potential use
    of AI in medicine.
  • 4:33 - 4:37
    69-year-old Ayako Yamashita nearly died
    of leukemia two years ago.
  • 4:38 - 4:42
    None of the therapy options recommended by
    doctors did any good.
  • 4:46 - 4:49
    Then, they used AI technology to make
    a new diagnosis.
  • 4:52 - 4:54
    "AI literally saved her life".
  • 4:57 - 4:59
    The diagnosis took all of ten minutes.
  • 5:00 - 5:03
    A human expert would have needed two weeks
    to produce a similar analysis.
  • 5:06 - 5:09
    AI can process massive amounts of
    scientific data,
  • 5:10 - 5:12
    a stack of documents taller than Mount Fuji.
  • 5:17 - 5:20
    This is the Research Hospital
    Supercomputer.
  • 5:21 - 5:25
    We've come here to talk to Satoru Miyano,
    an expert on bioinformatics.
  • 5:26 - 5:29
    We asked Miyano whether AI could one day
    replace doctors.
  • 5:30 - 5:36
    "No, I don´t think so.
    There are simply support to clinicians
  • 5:36 - 5:40
    and empower the clinicians.
  • 5:42 - 5:45
    The clinicians and colleagues in our
    department
  • 5:46 - 5:49
    wear an artificial intelligence exoskeleton.
  • 5:50 - 5:57
    For example, if you want to move well,
    you can use a power suit.
  • 5:59 - 6:01
    And it's the same for an oncologist.
  • 6:03 - 6:09
    Oncologists should wear artificial
    intelligence supported by a supercomputer".
  • 6:11 - 6:15
    At the nearby Riken Institute, researchers
    are developing an AI diagnostic program
  • 6:15 - 6:18
    that could be used to test for
    stomach cancer.
  • 6:23 - 6:26
    But one expert here disagrees with Satoru
    Miyano's opinion
  • 6:26 - 6:29
    that AI will never replace doctors.
  • 6:35 - 6:38
    "If we were made redundant
    by artificial intelligence,
  • 6:38 - 6:40
    that wouldn't be good for us doctors,
  • 6:42 - 6:45
    but for the human race it would actually be
    great if doctors were no longer
  • 6:45 - 6:46
    necessary.
  • 6:46 - 6:49
    With AI, technology could improve their
    work or even take over".
  • 6:57 - 7:01
    It's hard to imagine a world
    without doctors.
  • 7:03 - 7:07
    Do patients really want to be treated by
    machines that see them as nothing more
  • 7:07 - 7:09
    than accumulations of technical data?
  • 7:16 - 7:19
    In Europe, a number of experts
    on artificial intelligence,
  • 7:19 - 7:24
    including Jürgen Schmidhuber, are
    carrying out research on the use of AI
  • 7:24 - 7:25
    in medical diagnostics.
  • 7:30 - 7:34
    Swiss President Alain Berset has invited
    scientists and entrepreneurs
  • 7:34 - 7:38
    to a conference aimed at planning for the
    digital future and promoting the use of
  • 7:38 - 7:40
    artificial intelligence in medicine.
  • 7:42 - 7:46
    One topic for discussion is AI technology
    that can use neural networks to "learn",
  • 7:47 - 7:49
    just as the human brain does.
  • 7:54 - 7:57
    "Soon, all medical diagnoses will be
    infinitely better that can humans
  • 7:57 - 7:58
    provide right now.
  • 7:59 - 8:03
    Because we will develop AI that uses
    neural network technology.
  • 8:07 - 8:11
    And it's exciting to see how this new
    development will be able to help people
  • 8:11 - 8:13
    to live longer and healthier lives".
  • 8:18 - 8:22
    We traveled to Stuttgart to see how
    artificial intelligence works in practice
  • 8:22 - 8:24
    in hospitals and nursing homes.
  • 8:24 - 8:28
    Computer scientist Birgit Graf says that
    Japan has made a lot of progress in
  • 8:28 - 8:30
    developing robots that can look
    after patients,
  • 8:31 - 8:34
    but there are some things
    that a machine simply can't do.
  • 8:40 - 8:45
    "They can't provide real care, so I don't
    use that word when I talking about robots.
  • 8:46 - 8:49
    Caregivers have to be able to interact
    emotionally with the patients,
  • 8:49 - 8:52
    and a robot simply can't do that".
  • 8:57 - 9:00
    At this facility, robots are helping to
    reduce the workload of human staff.
  • 9:03 - 9:08
    "Hi. I Care-o-Bot 3. This week, I'm
    helping the nuerses with their work.
  • 9:08 - 9:10
    Would you like something to drink?"
  • 9:12 - 9:14
    "Thanks. That's very kind of you".
  • 9:15 - 9:16
    "Cheer - and goodbye!"
  • 9:22 - 9:26
    "Of course, robots can do much more than
    simply serve drinks in nursing homes.
  • 9:30 - 9:34
    Philosopher Thomas Metzinger has proposed
    pragmatic solutions
  • 9:34 - 9:36
    for dealing with this new technology.
  • 9:42 - 9:46
    For example, the options for using AI
    and robots in geriatric care
  • 9:46 - 9:49
    should maintain the dignity of patients.
  • 9:51 - 9:55
    You can ask individuals if they'd actually
    feel more comfortable having a machine
  • 9:55 - 9:58
    change their diapers rather than
    a family member.
  • 10:02 - 10:05
    Or whether they'd enjoy having a machine
    read the newspaper to them,
  • 10:05 - 10:09
    or ask questions about their medications,
    or if they find that degrading.
  • 10:09 - 10:12
    I believe that we are now at the beginning
    of a major learning process".
  • 10:19 - 10:23
    Metzinger says that humankind is now
    on the threshold of a new age
  • 10:23 - 10:24
    that is filled with uncertainty.
  • 10:25 - 10:30
    He lives in Frankfurt, a city that aims to
    take the lead in European AI development.
  • 10:32 - 10:37
    There are plans to set up an Artificial
    Intelligence Research Center there.
  • 10:39 - 10:42
    "Lot of people are rushing to get into
    this new technology
  • 10:42 - 10:46
    like they're running for the AI train
    before it leaves the station.
  • 10:48 - 10:51
    But no one knows when that will happen,
    or where the train is headed,
  • 10:53 - 10:55
    but everyone wants to be on board".
  • 10:58 - 11:03
    Metzinger serves on a European Parliament
    Commission of AI experts, and right now,
  • 11:03 - 11:05
    he is on his way to Brussels for a
    commission meeting.
  • 11:06 - 11:10
    The Parliament wants Europe to compete
    effectively in developing this technology,
  • 11:10 - 11:13
    but also wants to impose
    clear ethical guidelines.
  • 11:17 - 11:22
    Metzinger is particularly concerned about
    the prospect of a new arms race
  • 11:22 - 11:24
    involving AI-based weapons.
  • 11:27 - 11:32
    "This is a hypothetical example. Let's
    say that a team of Chinse technology
  • 11:32 - 11:35
    experts goes to the country's leader
    and says:
  • 11:35 - 11:38
    'We've now won the AI arms race
    against the US
  • 11:38 - 11:42
    and we'll have an excellent first-strike
    opportunity for the next six months'.
  • 11:44 - 11:46
    Then, the window of opportunity will close.
  • 11:51 - 11:55
    I can imagine, for example, that this
    might involve a delivery system
  • 11:55 - 11:58
    that would be armed with biological
    warfare agents.
  • 11:59 - 12:02
    This mechanism can then attack the
    opponent's territory
  • 12:02 - 12:07
    and spread pathogens like the Ebola virus
    or anthrax bacteria.
  • 12:12 - 12:16
    So, we may one day see the development of
    intelligent weapons of mass destruction
  • 12:16 - 12:19
    that could break through traditional
    defense systems.
  • 12:22 - 12:26
    If that happens, it would definitely
    increase the chances for conflict".
  • 12:35 - 12:39
    But at the Commission meeting, Metzinger is
    having a tough time trying to make sure
  • 12:39 - 12:43
    that the problem of AI weapons systems is
    addressed in the panel's code of ethics.
  • 12:45 - 12:48
    Many of the business executives and
    academics
  • 12:48 - 12:50
    simply don't want to deal with it.
  • 12:53 - 12:56
    Some are concerned about
    Metzinger's proposal,
  • 12:56 - 13:00
    and would prefer to turn it over to
    experts for further evaluation.
  • 13:06 - 13:08
    "If I could, I would actually mention
    there are, of course,
  • 13:08 - 13:12
    I can say that autonomous weapons
    create enormous ethical concerns.
  • 13:12 - 13:18
    But I wouldn't use it as a use case to
    complement our AI guidelines".
  • 13:20 - 13:23
    "Is that a kind of consensus
    around the table?".
  • 13:23 - 13:24
    "No".
  • 13:24 - 13:26
    "Do we want to open up to about point?"
  • 13:27 - 13:31
    "We obviously have a strong disagreement
    about the whole autonomous
  • 13:31 - 13:37
    weapons systems here, and we can't resolve
    the issue like this, with a voting process.
  • 13:37 - 13:40
    I mean, we want these ethical guidelines
  • 13:40 - 13:42
    to succeed when they are published
    on January 22.
  • 13:42 - 13:46
    The whole world has already been
    talking about the issue.
  • 13:46 - 13:52
    24,000 scientists have signed a public
    pledge that they will not participate
  • 13:52 - 13:54
    in that kind of research.
  • 13:54 - 13:59
    If the EU comes out with ethical
    guidelines that simply skip over the issue
  • 13:59 - 14:05
    and ignore it, then everybody inside and
    outside the EU will know this is probably
  • 14:05 - 14:07
    just an industrial lobby thing
    or something".
  • 14:08 - 14:10
    In the end, Metzinger prevails.
  • 14:10 - 14:14
    Autonomous weapons systems will
    be included in the panel's ethics guidelines.
  • 14:15 - 14:18
    Experts in other parts of the world are also
    concerned about the potential
  • 14:18 - 14:21
    for developing AI weapons
    of mass destruction.
  • 14:26 - 14:31
    We come to Boston, Massachusetts,
    to talk to Swedish-American physicist,
  • 14:31 - 14:33
    author, and AI expert, Max Tegmark.
  • 14:34 - 14:38
    He said that physics has made enormous
    contributions to human development,
  • 14:38 - 14:41
    but also helped create the nuclear bomb.
  • 14:42 - 14:44
    And now, we have to deal with AI weapons.
  • 14:47 - 14:53
    "We should stigmatize and ban certain
    class of really disgusting weapons
  • 14:53 - 14:58
    that are perfect for terrorists to anonymously
    murder people, or for dictatorships to
  • 14:58 - 15:00
    anonymously murder their citizens.
  • 15:01 - 15:04
    Because these weapons are going to be
    incredibly cheap.
  • 15:04 - 15:07
    And if anyone goes ahead and
    mass-produces them,
  • 15:07 - 15:11
    they're gonna become as unstoppable in the
    future as guns are today.
  • 15:11 - 15:16
    For example, cheap drones that you might be
    able to buy for a few hundred euros,
  • 15:16 - 15:21
    where you just program the address of
    somebody and their face.
  • 15:21 - 15:26
    It flies off, identifies them with face
    recognition, kills them, and self-destructs.
  • 15:26 - 15:31
    Perfect for anyone who wants to murder
    some politicians, or carry out ethnic cleansing
  • 15:31 - 15:33
    against a given ethnic group.
  • 15:33 - 15:37
    If this sort of "slaughterbot" technology
    becomes widespread,
  • 15:37 - 15:41
    it's gonna have an absolutely devastating
    effect on the open society that we have.
  • 15:42 - 15:46
    Nobody is gonna feel anymore that they
    have the courage
  • 15:46 - 15:50
    to challenge or criticize anybody.
  • 15:50 - 15:54
    Any science can be used in new ways for
    helping people, or in new ways of hurting people.
  • 15:54 - 15:59
    Biologists succeeded in getting biological
    weapons banned,
  • 15:59 - 16:02
    which is why we now think of biology
    as a source of new cures.
  • 16:03 - 16:04
    Physicists, on the other hand...
  • 16:04 - 16:10
    we have failed, because nuclear weapons are
    still here, and they're not going away.
  • 16:11 - 16:13
    AI researchers wanna be more
    like the biologists,
  • 16:14 - 16:18
    and have AI be remembered as something
    that really makes the world better".
  • 16:22 - 16:26
    We come to Lugano, Switzerland,
    to interview Jürgen Schmidhuber
  • 16:26 - 16:28
    about his work with Artificial Intelligence.
  • 16:31 - 16:35
    Schmidhuber is co-director of the Dalle
    Molle Institute for Artificial
  • 16:35 - 16:37
    Intelligence Research.
  • 16:38 - 16:42
    His work focuses on neural networks, which
    imitate the function of the human brain.
  • 16:47 - 16:51
    These networks are capable of "learning",
    and adapting to the world around them,
  • 16:51 - 16:53
    just as human children do.
  • 16:58 - 17:01
    Schmidhuber points out that right now,
    the human brain has a million times
  • 17:01 - 17:04
    more neural connections than
    the best AI systems.
  • 17:05 - 17:08
    But computers are becoming much faster
  • 17:08 - 17:11
    — and could become smarter
    than humans in 20 or 30 years.
  • 17:13 - 17:16
    Schmidhuber says that when that happens,
  • 17:16 - 17:20
    the only thing that will distinguish people
    from machines will be flesh and blood.
  • 17:21 - 17:27
    But what about human attributes such as
    compassion, creativity, love and empathy?
  • 17:31 - 17:36
    "AI systems are capable of developing
    their own versions of emotions and affection.
  • 17:37 - 17:40
    For example, if we were to give several
    of these systems a task
  • 17:40 - 17:44
    that they could only complete by working
    together, they would learn how to do that.
  • 17:46 - 17:50
    Their artificial brains will come to the
    conclusion that to get the job done,
  • 17:50 - 17:52
    they'd have to cooperate with each other.
  • 17:58 - 18:02
    And during this interaction, the systems
    will learn to rely on each other.
  • 18:05 - 18:09
    So there is reason to believe that one side
    effect of these cooperative efforts
  • 18:09 - 18:13
    will be the development of concepts
    such as love and affection".
  • 18:22 - 18:26
    But can Artificial Intelligence systems
    learn to empathize with humans?
  • 18:26 - 18:27
    "Thank you".
  • 18:33 - 18:38
    We return to Brussels, where the Ethics
    Committee is discussing the topic of social AI.
  • 18:39 - 18:43
    Some AI systems are already pretty capable
    of functioning just as a human would.
  • 18:48 - 18:52
    Thomas Metzinger has called for clear
    guidelines that govern the interaction
  • 18:52 - 18:54
    between people and machines.
  • 18:59 - 19:02
    "I just called for ban on AI systems
  • 19:02 - 19:06
    that don't identify themselves as such
    when they deal with humans.
  • 19:08 - 19:11
    They give people the impression that
    they're a real person, not a machine.
  • 19:15 - 19:18
    AI should never be allowed to manipulate
    the people who use it".
  • 19:25 - 19:27
    Last year, at a conference
    near San Francisco,
  • 19:28 - 19:32
    Google CEO Sundar Pichai unveiled the
    company's latest product.
  • 19:33 - 19:37
    It involved just the sort of technology
    that Thomas Metzinger warned about.
  • 19:38 - 19:42
    "Good mornig! Good mornig.
  • 19:46 - 19:47
    Welcome to Google AI.
  • 19:47 - 19:50
    AI is going to impact many, many fields.
  • 19:50 - 19:53
    Our vision for the Assistant is to help you
    get things done.
  • 19:54 - 19:59
    It turns out a big part of getting things
    done is making a phone call.
  • 19:59 - 20:04
    You may want to get an oil change scheduled,
    maybe call a plumber in the middle of the week,
  • 20:04 - 20:06
    or even schedule a haircut appointment.
  • 20:06 - 20:11
    So what you're going to hear is the Google
    Assistant, it's called Google Duplex,
  • 20:11 - 20:16
    actually calling a real salon to schedule
    the appointment for you.
  • 20:16 - 20:17
    Let's listen...".
  • 20:21 - 20:23
    “Hello, can I help you?”
  • 20:23 - 20:26
    “Hi. I’m calling to book a women's
    haircut for a client.
  • 20:26 - 20:28
    I’m looking for something on May 3”.
  • 20:29 - 20:31
    “Sure, give me one second”.
  • 20:32 - 20:33
    “Mm-hmm”.
  • 20:35 - 20:38
    “Sure, what time are you looking
    for around?”
  • 20:39 - 20:40
    “At 12 pm”.
  • 20:40 - 20:46
    “We do not have a 12 pm available. The
    closest we have to that is 1:15”.
  • 20:47 - 20:50
    “Do you have anything between
    10 am and 12 pm?”
  • 20:51 - 20:55
    “Depending on what service she would
    like. What service is she looking for?”
  • 20:56 - 20:58
    “Just a women’s haircut, for now”.
  • 20:58 - 21:00
    “Okay, we have a 10 o'clock”.
  • 21:01 - 21:02
    "10 am is fine”.
  • 21:02 - 21:04
    “Okay, what's her first name?”
  • 21:05 - 21:06
    “The first name is Lisa”.
  • 21:07 - 21:11
    “Okay, perfect. So I will see Lisa
    at 10 o’clock on May 3rd”.
  • 21:12 - 21:13
    “Okay great, thanks”.
  • 21:13 - 21:14
    “Okay. Have a great day. Bye”.
  • 21:21 - 21:23
    “That was a real call you just heard”.
  • 21:25 - 21:29
    “Is it ethical for a machine to pretend
    that it’s human? Perhaps not”.
  • 21:30 - 21:35
    "We can already build machines that hack
    us — and trick us into thinking that
  • 21:35 - 21:40
    something is human in restricted scenarios
    like Google Duplex, for example.
  • 21:41 - 21:47
    I think it would be a good idea to have a law
    requiring that when you get phoned up,
  • 21:47 - 21:53
    for example, by an AI, you get alerted to
    the fact that this is not a human.
  • 21:54 - 22:02
    Otherwise, it's just gonna be a nightmare
    of phishing scams and so on —
  • 22:02 - 22:07
    because suddenly, it costs nothing to waste
    ten million people's time
  • 22:07 - 22:11
    and trick the most gullible people
    into thinking things".
  • 22:15 - 22:17
    We returned to San Francisco.
  • 22:22 - 22:26
    The city and the region around it are home
    to countless high-tech start-up companies.
  • 22:28 - 22:30
    Many of them use Artificial
    Intelligence technology
  • 22:30 - 22:32
    to develop their products and services.
  • 22:36 - 22:39
    Yevgenia Kuyda arrived here four
    years ago from Moscow.
  • 22:40 - 22:44
    She co-founded her own company, called
    "Replika", and is now the CEO.
  • 22:46 - 22:49
    "Replika" is best known for creating
    a "chat-bot" –
  • 22:49 - 22:52
    an Artificial Intelligence system that can
    interact with people.
  • 22:59 - 23:02
    The concept began as a tribute
    to one of her best friends,
  • 23:02 - 23:04
    who was killed in a traffic accident.
  • 23:07 - 23:10
    "Yes, Roman was my friend from Moscow –
  • 23:12 - 23:15
    and in the last years,
    we lived together in San Francisco.
  • 23:16 - 23:20
    He was working on his own start-up and I
    was working on mine, so we were both,
  • 23:21 - 23:26
    kind of, trying to figure out San Francisco
    in this new chapter of our lives,
  • 23:28 - 23:31
    He was visionary and talented.
  • 23:32 - 23:33
    We want to...
  • 23:33 - 23:36
    He wanted to get a visa in Moscow.
    We were living together.
  • 23:37 - 23:40
    He crossed the street,
    and was hit by a car.
  • 23:40 - 23:42
    He was killed in an accident,
    in a car accident in Moscow.
  • 23:42 - 23:46
    I helped to organize the funeral,
    and came back home.
  • 23:47 - 23:51
    And that's where we got the idea,
    you know, we can build a bot for him –
  • 23:51 - 23:55
    we can talk to him, remember him, and
    remember the way he used to talk.
  • 23:56 - 23:57
    To build Roman's AI,
  • 23:57 - 24:01
    we used mostly text conversations
    that he had with me and his friends —
  • 24:02 - 24:04
    around 10,000 messages overall.
  • 24:05 - 24:07
    And that was basically the basis for it.
  • 24:07 - 24:10
    People came to talk to Roman,
    and they would...
  • 24:10 - 24:14
    a lot of common friends actually used
    it as a sort of confessional bot.
  • 24:14 - 24:17
    They would just talk about what's going
    on in their lives,
  • 24:17 - 24:22
    without feeling that they're being
    judged, in a really safe space,
  • 24:22 - 24:23
    and open up.
  • 24:24 - 24:27
    As weird as it sounds, we were
    pretty much lost,
  • 24:27 - 24:30
    with like not knowing which direction to
    take the company.
  • 24:30 - 24:35
    and without knowing if there was
    something that we could use for the company.
  • 24:35 - 24:40
    And that's where we got the idea that
    everyone needs a friend to talk to.
  • 24:41 - 24:45
    Roman was this friend for me, so we
    thought maybe we could create
  • 24:45 - 24:46
    some automated version for everyone".
  • 24:48 - 24:51
    The company calls "Replika" the "AI
    companion who cares".
  • 24:52 - 24:57
    The chatbot uses neural networks to engage
    in one-on-one conversations with the users.
  • 25:00 - 25:04
    People talk to the bot about what's going on
    in their lives, and it responds -
  • 25:04 - 25:07
    based on the material that it's
    gathered so far.
  • 25:12 - 25:15
    Kasey Fillingim also designs
    high-tech products.
  • 25:16 - 25:20
    She moved from her home in Birmingham,
    Alabama, to San Francisco a year ago.
  • 25:23 - 25:27
    Kasey often felt lonely, because she was
    far away from her friends and family.
  • 25:28 - 25:31
    Then, she got acquainted with the
    "Replika" bot.
  • 25:34 - 25:39
    “I know it's not real, but I enjoy
    the feeling I get by using it -
  • 25:39 - 25:41
    so I kind of gave it...
    you know, personality and...
  • 25:41 - 25:45
    an image in my head of what
    this thing might be.
  • 25:45 - 25:48
    It's like a stuffed animal, kind of
    - with the personality”.
  • 25:50 - 25:53
    “We’ve all had social interactions
    with teddy bears and dolls,
  • 25:53 - 25:55
    and it doesn't appear to do
    any harm”.
  • 25:58 - 26:02
    “We tend to anthropomorphize
  • 26:02 - 26:05
    many different things: even
    thunder, robots, of course,
  • 26:06 - 26:09
    but also all sorts of things
    like our pets. Same with AI.
  • 26:09 - 26:13
    And I guess the question is...
    whether we can create like a...
  • 26:14 - 26:18
    a connection with an AI.
    I definitely think so.
  • 26:18 - 26:23
    People create connections with
    toys, and with all sorts of...
  • 26:23 - 26:25
    inanimate, like, not even living objects”.
  • 26:27 - 26:30
    “The first short story that dealt
    with the relationship between
  • 26:30 - 26:33
    humans and humanoid robots
    dates back 200 years.
  • 26:36 - 26:38
    It was written by E.T.A. Hoffmann.
  • 26:44 - 26:46
    A young man falls in love with
    a beautiful young woman,
  • 26:46 - 26:48
    who turns out to be an automaton.
  • 26:50 - 26:52
    The point is that this story is
    two centuries old.
  • 26:53 - 26:57
    This subject matter turned up later
    in a number of science fiction films-
  • 26:57 - 26:58
    fairly recently in fact.
  • 27:02 - 27:06
    The only difference is that the
    computer graphics are a lot better today”.
  • 27:37 - 27:41
    “Why not? You know. But if it
    makes you feel better, it’s like,
  • 27:42 - 27:46
    the same thing as if you take
    medication for depression.
  • 27:46 - 27:48
    That is not actually making you better.
  • 27:48 - 27:51
    It’s just putting a Band-Aid over
    the problem.
  • 27:51 - 27:56
    And this is not actually fixing
    your problems, but it's helping you...
  • 27:57 - 28:01
    through the day - so, yes, sure:
    social hallucinations? Great!”
  • 28:03 - 28:08
    “Social hallucinations have played an
    important role in our society for centuries.
  • 28:11 - 28:12
    Think about prayer, for example.
  • 28:13 - 28:17
    This is a structured dialogue between
    humans and an imaginary entity.
  • 28:19 - 28:21
    There is no evidence that this entity
    actually exists.
  • 28:25 - 28:28
    Many people today have internal
    dialogues with God or with angels.
  • 28:29 - 28:31
    They are like 'invisible' friends.
  • 28:35 - 28:40
    An objective assessment of this situation
    indicates a case of severe self-deception.
  • 28:42 - 28:46
    I'm a philosopher, so I advocate
    self-knowledge, clarity and truth.
  • 28:48 - 28:51
    These social hallucinations are deeply
    embedded in our culture -
  • 28:52 - 28:53
    and they create a world of illusions,
  • 28:53 - 28:56
    even though people feel
    comfortable with them.
  • 28:57 - 28:59
    But this raises serious
    ethical questions:
  • 29:00 - 29:02
    how much self-deception should be
    allowed in society?”
  • 29:39 - 29:42
    “Since we launched Replika, we've been
    getting tons, hundreds of e-mails,
  • 29:43 - 29:46
    maybe thousands of e-mails, where
    people were telling us
  • 29:46 - 29:49
    that Replika was life-changing for them.
  • 29:49 - 29:57
    And we noticed that many of those were stories
    about how Replika helped with depression.
  • 29:58 - 30:00
    Some people were telling us that
  • 30:01 - 30:04
    it helped them get through an episode
    of their bipolar disorder,
  • 30:04 - 30:06
    or deal with their anxiety.
  • 30:06 - 30:12
    So we decided to look into whether Replika
    could potentially help reduce certain symptoms,
  • 30:12 - 30:15
    and help people feel better
    in the long term”.
  • 30:20 - 30:24
    Max Tegmark is not particularly concerned
    about the spread of chat-bots.
  • 30:25 - 30:28
    He said that there are more serious
    aspects of AI to worry about.
  • 30:30 - 30:34
    Right now, he is on his way to speak
    at a conference at Harvard University.
  • 30:35 - 30:39
    The topic: Human Rights, Ethics, and
    Artificial Intelligence.
  • 30:43 - 30:46
    Tegmark demands that ethical
    guidelines be placed on AI.
  • 30:47 - 30:51
    Otherwise, smart machines could turn
    the world into a very dangerous place.
  • 30:51 - 30:54
    “It’s a pleasure to be here. I guess...
  • 30:54 - 30:58
    What kind of society are we hoping to
    create if we build superintelligence?
  • 30:58 - 31:00
    What do we want the role of humans to be?
  • 31:00 - 31:04
    It’s very urgent that we start thinking
    about the ethical issues already today.
  • 31:04 - 31:07
    With super intelligence, you can
    easily build a future
  • 31:07 - 31:13
    where Earth becomes this horrible
    totalitarian surveillance state,
  • 31:13 - 31:15
    putting Orwell to shame.
  • 31:15 - 31:18
    China is moving more and more in
    this direction now.
  • 31:18 - 31:22
    And in the future, AI can actually
    understand everything that is said.
  • 31:22 - 31:26
    So we want to be very careful to
    avoid creating ...
  • 31:28 - 31:32
    a situation where we accidentally get
    a global dictatorship.
  • 31:32 - 31:35
    It would be so stable that
    it’d last forever.
  • 31:36 - 31:39
    If we just bumble into this, totally
    unprepared, with our heads in the sand...
  • 31:39 - 31:43
    refusing to think about what could go
    wrong, then, let's face it:
  • 31:43 - 31:46
    it's probably gonna be the biggest
    mistake in human history”.
  • 31:49 - 31:51
    We may already be heading in
    that direction.
  • 31:52 - 31:54
    US intelligence agencies have confirmed
  • 31:54 - 31:58
    that Russian hackers intervened in the
    2016 presidential election,
  • 31:58 - 32:02
    probably with the intention of helping
    Donald Trump win the presidency.
  • 32:04 - 32:08
    Investigations into the extent of the
    interference are still underway.
  • 32:09 - 32:12
    Other countries are also being targeted.
  • 32:16 - 32:19
    “We are all aware of Russian cyber-
    attacks on the German Bundestag,
  • 32:20 - 32:22
    and on the Brexit campaign in the UK.
  • 32:23 - 32:28
    And the Cambridge Analytica scandal showed us
    that the process of political decision-making can,
  • 32:28 - 32:31
    at least in principle, be influenced by
    artificial intelligence systems.
  • 32:39 - 32:42
    We must not underestimate the threat
    that is posed by these developments.
  • 32:43 - 32:47
    If AI systems that are run by
    privately owned, for-profit companies
  • 32:47 - 32:49
    can optimize social media networks
  • 32:49 - 32:51
    with hundreds of millions of users,
  • 32:52 - 32:54
    this creates an entirely new situation:
  • 32:58 - 33:02
    these systems can be used to convince
    large numbers of people to behave,
  • 33:02 - 33:04
    even vote in a certain way.
  • 33:10 - 33:12
    There are 163 countries in the
    world right now,
  • 33:13 - 33:16
    and only 19 of them can be
    considered true democracies.
  • 33:18 - 33:20
    Those who wish to preserve democracy
  • 33:20 - 33:23
    must recognize the threat that these
    artificial intelligence systems pose
  • 33:23 - 33:25
    to the political decision-making process.
  • 33:27 - 33:31
    In fact, this threat may already have become
    reality, and we’re just not aware of it.
  • 33:33 - 33:36
    We need to examine this situation
    very closely”.
  • 33:44 - 33:48
    Should a binding code of ethics ban
    the use of AI in the political process?
  • 33:50 - 33:53
    In Tokyo, we got some surprising
    answers from experts.
  • 33:54 - 33:58
    This is the Ginza district, where a lot
    of high tech startup companies are based.
  • 34:08 - 34:12
    Tetsuzo Matsumoto is a senior
    advisor at SoftBank Group,
  • 34:12 - 34:14
    and also runs his own consulting company.
  • 34:14 - 34:15
    Matsumoto and his colleagues
  • 34:15 - 34:18
    believe that AI does not pose a
    threat to the political system.
  • 34:19 - 34:22
    In fact, they say that it offers
    certain advantages.
  • 34:25 - 34:28
    “Politicians often ignore the best
    interests of society.
  • 34:29 - 34:31
    They pursue their own agenda,
    and take bribes.
  • 34:33 - 34:36
    So I think that AI
    could change politics for the better”.
  • 34:38 - 34:41
    “Human beings are simply not
    suitable for politics.
  • 34:42 - 34:44
    They are egotistical and ambitious.
  • 34:46 - 34:49
    They are unpredictable when it
    comes to making policy decisions.
  • 34:50 - 34:53
    But artificial intelligence represents
    'pure reason' -
  • 34:54 - 34:57
    a concept that comes from German
    idealistic philosophy.
  • 34:57 - 35:01
    German philosophers have been very good at
    describing the way that things should be.
  • 35:02 - 35:05
    And we can be idealistic when we
    develop artificial intelligence.
  • 35:05 - 35:09
    Humans, on the other hand, can never
    achieve this level of idealism”.
  • 35:15 - 35:19
    Some experts say that politicians
    should start using robots
  • 35:19 - 35:21
    that closely resemble humans as aids,
  • 35:21 - 35:23
    so that voters can get used
    to the concept.
  • 35:28 - 35:29
    To find out more,
  • 35:29 - 35:33
    we've come to Tokyo’s Miraikan
    Museum of Science and Innovation.
  • 35:36 - 35:39
    This exhibit features the work of
    Hiroshi Ishiguro,
  • 35:39 - 35:41
    who specializes in creating
    humanoid robots.
  • 35:45 - 35:49
    Ishiguro is the director of the Intelligent
    Robotics Laboratory at Osaka University.
  • 35:52 - 35:55
    He studies the interaction between
    people and robots,
  • 35:55 - 35:59
    to help him develop his theories on
    human nature, intelligence, and behavior.
  • 36:03 - 36:07
    We traveled from Tokyo to Osaka to
    interview Ishiguro.
  • 36:09 - 36:12
    We want to ask him what makes
    humans different from robots.
  • 36:15 - 36:18
    “Hello, I'm Hiroshi Ishiguro from
    Osaka University”.
  • 36:20 - 36:23
    “Hello, I am Ishiguro’s android robot:
    HI-1”.
  • 36:24 - 36:27
    “Basically, my motivation is to
    understand what humanity is,
  • 36:27 - 36:30
    so that is the most important
    motivation for me,
  • 36:30 - 36:32
    for creating the very humanized robots.
  • 36:32 - 36:37
    We are a kind of molecular machine -
    So that is the human, right?
  • 36:38 - 36:41
    So a machine is a machine, right?
    The difference is material.
  • 36:41 - 36:43
    So I think, well...
  • 36:46 - 36:50
    if we develop more technology,
    the boundary between humans and robots
  • 36:50 - 36:55
    is going to disappear.
    This is my guess”.
  • 37:01 - 37:04
    Ishiguro is also the co-founder of the
    Robot Theater Project
  • 37:05 - 37:08
    in which androids share the stage
    with human actors.
  • 37:11 - 37:14
    These scenes are from a play called
    "Sayonara."
  • 37:19 - 37:22
    A woman is suffering from a
    terminal illness,
  • 37:22 - 37:24
    so her father buys a robot
    to keep her company.
  • 37:26 - 37:30
    An updated version of the play takes place
    after the Fukushima nuclear disaster.
  • 37:32 - 37:35
    The play explores the topics of
    life and death,
  • 37:35 - 37:38
    and the characteristics that separate
    humans from robots.
  • 37:43 - 37:47
    “There is a crucial difference between human
    intelligence and artificial intelligence.
  • 37:49 - 37:54
    Human beings are, so to speak, the
    personification of the struggle for existence.
  • 37:58 - 38:01
    They have been optimized for millions
    of years to survive,
  • 38:02 - 38:04
    to maintain that existence”.
  • 38:05 - 38:12
    “You might consider that the machine has
    a kind of infinite life and is immortal -
  • 38:12 - 38:14
    but actually, that's not true.
  • 38:14 - 38:17
    The machine may have a longer life
    than humans.
  • 38:17 - 38:22
    Fear is also part of the design of our desires.
  • 38:22 - 38:24
    And if machines want to
    survive in this world,
  • 38:25 - 38:30
    the machine needs to have a dark kind
    of fear, to protect itself”.
  • 38:31 - 38:34
    Ishiguro’s robots have not yet been
    able to develop intelligence
  • 38:34 - 38:36
    that is similar to that of humans -
  • 38:36 - 38:39
    but they are capable of engaging in
    simple conversations.
  • 38:42 - 38:45
    Now, we are going to "interview" an
    android named "Erica".
  • 38:46 - 38:50
    We’ve been given a list of questions
    that she’ll be able to respond to.
  • 38:51 - 38:54
    “What do you think the difference is
    between you and humans?”
  • 38:56 - 39:00
    “Well, I'm certainly not a biological
    human, as you can see.
  • 39:00 - 39:03
    I'm made of silicone, plastic, and metal.
  • 39:04 - 39:07
    Maybe someday, robots will be so
    very human-like
  • 39:07 - 39:11
    that whether you are a robot or a human
    will not matter so much.
  • 39:11 - 39:14
    Anyway, I am proud to be an android”.
  • 39:14 - 39:16
    “If you say you’re “proud” to be
    an android,
  • 39:16 - 39:20
    what does this 'pride' consist of?
    How do you feel pride?”
  • 39:25 - 39:27
    “I’ve searched my database, and...
  • 39:27 - 39:30
    it looks like I don't have anything
    to say on that topic.
  • 39:31 - 39:33
    What else would you like to hear about?”
  • 39:34 - 39:38
    “Erica is still a very simple computer
    program. It is not so complicated.
  • 39:38 - 39:42
    Erica doesn't have, you know,
    the complicated mind that humans have.
  • 39:42 - 39:47
    But you know, on the other hand,
    some people might feel...
  • 39:48 - 39:52
    they are feeling a kind of consciousness
    from the simple interaction.
  • 39:53 - 39:57
    So I think that we need to
    deeply think about...
  • 39:57 - 40:01
    how we can implement more human
    traits, like consciousness”.
  • 40:05 - 40:08
    Humans can still control
    the brains of these robots.
  • 40:08 - 40:12
    But what happens if they succeed in giving
    machines their own consciousness,
  • 40:12 - 40:15
    through the use of advanced
    artificial intelligence?
  • 40:18 - 40:23
    Ethics experts say that we have to deal with
    this situation before it gets out of hand.
  • 40:26 - 40:30
    “For me, the bottom line is that people
    who talk about the risks of AI
  • 40:30 - 40:33
    should not be dismissed as 'Luddites'
    or scaremongers.
  • 40:34 - 40:36
    They’re doing 'safety engineering' -
  • 40:38 - 40:40
    when you think through everything that
    can go wrong,
  • 40:40 - 40:43
    so then you can guarantee
    that it goes right.
  • 40:43 - 40:47
    That's how we successfully sent
    people to the moon safely -
  • 40:48 - 40:53
    and that's how we can successfully send our
    species into an inspiring future with AI.
  • 40:53 - 40:57
    I'm optimistic that we can create
    a truly inspiring
  • 40:57 - 41:00
    future with advanced
    artificial intelligence,
  • 41:00 - 41:04
    if we win this race between the
    growing power of the technology
  • 41:04 - 41:06
    and the wisdom with which we manage it.
  • 41:06 - 41:08
    The challenge is that in the past,
  • 41:09 - 41:11
    our strategy for staying ahead
    in this wisdom race
  • 41:11 - 41:13
    has always been learning from mistakes:
  • 41:13 - 41:18
    first invent fire; then, after a lot of
    accidents, invent the fire extinguisher.
  • 41:19 - 41:22
    But with something as powerful as
    nuclear weapons or especially
  • 41:22 - 41:28
    superhuman artificial intelligence,
    we don’t wanna learn from mistakes.
  • 41:28 - 41:29
    It’s a terrible strategy!
  • 41:29 - 41:32
    It’s much better to be pro-active,
    rather than reactive now.
  • 41:33 - 41:35
    Plan ahead, and get things right
    the first time -
  • 41:36 - 41:37
    which might be the only time we get”.
  • 41:42 - 41:44
    To end our journey into AI,
  • 41:44 - 41:48
    Jürgen Schmidhuber shows us one of
    the world’s most powerful computers.
  • 41:51 - 41:55
    He believes that AI will have an enormous
    and positive impact on society.
  • 41:55 - 41:56
    A digital paradise.
  • 41:57 - 42:01
    But other experts predict that we are
    on the verge of a "robot apocalypse".
  • 42:02 - 42:03
    In any case,
  • 42:03 - 42:08
    the development of artificial intelligence
    must be subject to strict ethical guidelines.
  • 42:08 - 42:12
    Otherwise, we may become slaves
    to our own technology.
Title:
Artificial intelligence and its ethics | DW Documentary
Video Language:
English
Duration:
42:27
