The real reason to be afraid of Artificial Intelligence | Peter Haas | TEDxDirigo

  • 0:13 - 0:17
    The rise of the machines!
  • 0:18 - 0:23
    Who here is scared of killer robots?
  • 0:23 - 0:25
    (Laughter)
  • 0:26 - 0:27
    I am!
  • 0:28 - 0:32
    I used to work in UAVs -
    Unmanned Aerial Vehicles -
  • 0:32 - 0:37
    and all I could think seeing
    these things is that someday,
  • 0:37 - 0:41
    somebody is going to strap
    a machine gun to these things,
  • 0:41 - 0:44
    and they're going
    to hunt me down in swarms.
  • 0:45 - 0:50
    I work in robotics at Brown University
    and I'm scared of robots.
  • 0:51 - 0:53
    Actually, I'm kind of terrified,
  • 0:54 - 0:56
    but, can you blame me?
  • 0:56 - 1:00
    Ever since I was a kid,
    all I've seen are movies
  • 1:00 - 1:03
    that portrayed the ascendance
    of Artificial Intelligence
  • 1:03 - 1:06
    and our inevitable conflict with it -
  • 1:06 - 1:11
    2001: A Space Odyssey,
    The Terminator, The Matrix -
  • 1:12 - 1:16
    and the stories they tell are pretty scary:
  • 1:16 - 1:21
    rogue bands of humans running away
    from super intelligent machines.
  • 1:22 - 1:27
    That scares me. From the show of hands,
    it seems like it scares you as well.
  • 1:27 - 1:30
    I know it is scary to Elon Musk.
  • 1:31 - 1:35
    But, you know, we have a little bit
    of time before the robots rise up.
  • 1:35 - 1:39
    Robots like the PR2
    that I have at my initiative,
  • 1:39 - 1:41
    they can't even open the door yet.
  • 1:42 - 1:47
    So in my mind, this discussion
    of super intelligent robots
  • 1:47 - 1:52
    is a little bit of a distraction
    from something far more insidious
  • 1:52 - 1:56
    that is going on with AI systems
    across the country.
  • 1:57 - 2:00
    You see, right now,
    there are people -
  • 2:00 - 2:04
    doctors, judges, accountants -
  • 2:04 - 2:08
    who are getting information
    from an AI system
  • 2:08 - 2:13
    and treating it as if it is information
    from a trusted colleague.
  • 2:14 - 2:17
    It's this trust that bothers me,
  • 2:17 - 2:20
    not because of how often
    AI gets it wrong.
  • 2:20 - 2:24
    AI researchers pride themselves
    on the accuracy of their results.
  • 2:25 - 2:28
    It's how badly it gets it wrong
    when it makes a mistake
  • 2:28 - 2:30
    that has me worried.
  • 2:30 - 2:34
    These systems do not fail gracefully.
  • 2:34 - 2:37
    So, let's take a look
    at what this looks like.
  • 2:37 - 2:43
    This is a dog that has been misidentified
    as a wolf by an AI algorithm.
  • 2:43 - 2:45
    The researchers wanted to know:
  • 2:45 - 2:50
    why did this particular husky
    get misidentified as a wolf?
  • 2:50 - 2:53
    So they rewrote the algorithm
    to explain to them
  • 2:53 - 2:56
    the parts of the picture
    it was paying attention to
  • 2:56 - 2:59
    when the AI algorithm made its decision.
  • 2:59 - 3:03
    In this picture, what do you
    think it paid attention to?
  • 3:03 - 3:05
    What would you pay attention to?
  • 3:05 - 3:10
    Maybe the eyes,
    maybe the ears, the snout ...
  • 3:13 - 3:17
    This is what it paid attention to:
  • 3:17 - 3:20
    mostly the snow
    and the background of the picture.
  • 3:21 - 3:26
    You see, there was bias in the data set
    that was fed to this algorithm.
  • 3:26 - 3:30
    Most of the pictures of wolves were in snow,
  • 3:31 - 3:35
    so the AI algorithm conflated
    the presence or absence of snow
  • 3:35 - 3:38
    with the presence or absence of a wolf.
  • 3:40 - 3:42
    The scary thing about this
  • 3:42 - 3:46
    is the researchers had
    no idea this was happening
  • 3:46 - 3:50
    until they rewrote
    the algorithm to explain itself.
  • 3:51 - 3:55
    And that's the thing with AI algorithms,
    deep learning, machine learning.
  • 3:55 - 3:59
    Even the developers who work on this stuff
  • 3:59 - 4:02
    have no idea what it's doing.
  • 4:03 - 4:08
    So, that might be
    a great example for research,
  • 4:08 - 4:10
    but what does this mean in the real world?
  • 4:11 - 4:16
    The COMPAS criminal sentencing
    algorithm is used in 13 states
  • 4:16 - 4:18
    to determine criminal recidivism
  • 4:18 - 4:22
    or the risk of committing
    a crime again after you're released.
  • 4:23 - 4:27
    ProPublica found
    that if you're African-American,
  • 4:27 - 4:32
    COMPAS was 77% more likely to qualify
    you as a potentially violent offender
  • 4:32 - 4:34
    than if you're Caucasian.
  • 4:35 - 4:39
    This is a real system being used
    in the real world by real judges
  • 4:39 - 4:42
    to make decisions about real people's lives.
  • 4:44 - 4:49
    Why would the judges trust it
    if it seems to exhibit bias?
  • 4:50 - 4:55
    Well, the reason they use COMPAS
    is because it is a model of efficiency.
  • 4:56 - 5:00
    COMPAS lets them go
    through caseloads much faster
  • 5:00 - 5:03
    in a backlogged criminal justice system.
  • 5:05 - 5:07
    Why would they question
    their own software?
  • 5:07 - 5:11
    It's been requisitioned by the State,
    approved by their IT Department.
  • 5:11 - 5:13
    Why would they question it?
  • 5:13 - 5:17
    Well, the people sentenced
    by COMPAS have questioned it,
  • 5:17 - 5:19
    and their lawsuits should chill us all.
  • 5:19 - 5:22
    The Wisconsin Supreme Court ruled
  • 5:22 - 5:26
    that COMPAS did not deny
    a defendant due process
  • 5:26 - 5:28
    provided it was used "properly."
  • 5:29 - 5:31
    In the same set of rulings, they ruled
  • 5:31 - 5:35
    that the defendant could not inspect
    the source code of COMPAS.
  • 5:36 - 5:40
    It has to be used properly
    but you can't inspect the source code?
  • 5:40 - 5:43
    This is a disturbing set of rulings
    when taken together
  • 5:43 - 5:46
    for anyone facing criminal sentencing.
  • 5:47 - 5:51
    You may not care about this because
    you're not facing criminal sentencing,
  • 5:51 - 5:55
    but what if I told you
    that black box AI algorithms like this
  • 5:55 - 5:59
    are being used to decide whether or not
    you can get a loan for your house,
  • 6:00 - 6:03
    whether you get a job interview,
  • 6:03 - 6:06
    whether you get Medicaid,
  • 6:06 - 6:10
    and are even driving cars
    and trucks down the highway.
  • 6:11 - 6:15
    Would you want the public
    to be able to inspect the algorithm
  • 6:15 - 6:17
    that's trying to make a decision
    between a shopping cart
  • 6:17 - 6:21
    and a baby carriage
    in a self-driving truck,
  • 6:21 - 6:24
    in the same way the dog/wolf
    algorithm was trying to decide
  • 6:24 - 6:26
    between a dog and a wolf?
  • 6:26 - 6:31
    Are you potentially a metaphorical dog
    who's been misidentified as a wolf
  • 6:31 - 6:34
    by somebody's AI algorithm?
  • 6:35 - 6:39
    Considering the complexity
    of people, it's possible.
  • 6:39 - 6:42
    Is there anything
    you can do about it now?
  • 6:42 - 6:47
    Probably not, and that's
    what we need to focus on.
  • 6:47 - 6:51
    We need to demand
    standards of accountability,
  • 6:51 - 6:55
    transparency and recourse in AI systems.
  • 6:56 - 7:01
    ISO, the International Organization
    for Standardization, just formed a committee
  • 7:01 - 7:05
    to make decisions about
    what to do for AI standards.
  • 7:05 - 7:09
    They're about five years out
    from coming up with a standard.
  • 7:09 - 7:12
    These systems are being used now,
  • 7:14 - 7:19
    not just in loans, but they're being
    used in vehicles like I was saying.
  • 7:21 - 7:25
    They're being used in things like
    Cooperative Adaptive Cruise Control.
  • 7:25 - 7:28
    It's funny that they call that "cruise control"
  • 7:28 - 7:33
    because the type of controller used
    in cruise control, a PID controller,
  • 7:33 - 7:38
    was used for 30 years in chemical plants
    before it ever made it into a car.
  • 7:39 - 7:41
    The type of controller that's used
  • 7:41 - 7:45
    to drive a self-driving car,
    machine learning,
  • 7:45 - 7:49
    has only been used
    in research since 2007.
  • 7:50 - 7:52
    These are new technologies.
  • 7:52 - 7:56
    We need to demand the standards
    and we need to demand regulation
  • 7:56 - 8:00
    so that we don't get snake oil
    in the marketplace.
  • 8:01 - 8:05
    And we also have to have
    a little bit of skepticism.
  • 8:06 - 8:08
    The experiments on authority
  • 8:08 - 8:11
    done by Stanley Milgram
    after World War II,
  • 8:11 - 8:16
    showed that your average person
    would follow an authority figure's orders
  • 8:16 - 8:20
    even if it meant harming their fellow citizen.
  • 8:20 - 8:23
    In this experiment,
  • 8:23 - 8:27
    everyday Americans would shock an actor
  • 8:28 - 8:31
    past the point of him
    complaining about heart trouble,
  • 8:32 - 8:35
    past the point of him screaming in pain,
  • 8:36 - 8:41
    past the point of him
    going silent in simulated death,
  • 8:42 - 8:44
    all because somebody
  • 8:44 - 8:48
    with no credentials, in a lab coat,
  • 8:48 - 8:51
    was saying some variation of the phrase
  • 8:51 - 8:54
    "The experiment must continue."
  • 8:57 - 9:02
    In AI, we have Milgram's
    ultimate authority figure.
  • 9:04 - 9:08
    We have a dispassionate
    system that can't reflect,
  • 9:09 - 9:13
    that can't make another decision,
  • 9:13 - 9:15
    that there is no recourse to,
  • 9:15 - 9:20
    that will always say "The system
    or "The process must continue."
  • 9:23 - 9:26
    Now, I'm going to tell you a little story.
  • 9:26 - 9:30
    It's about a car trip I took
    driving across the country.
  • 9:31 - 9:35
    I was coming into Salt Lake City
    and it started raining.
  • 9:35 - 9:40
    As I climbed into the mountains,
    that rain turned into snow,
  • 9:40 - 9:43
    and pretty soon that snow became a whiteout.
  • 9:43 - 9:46
    I couldn't see the taillights
    of the car in front of me.
  • 9:46 - 9:48
    I started skidding.
  • 9:48 - 9:51
    I went 360 one way,
    I went 360 the other way.
  • 9:51 - 9:53
    I went off the highway.
  • 9:53 - 9:55
    Mud coated my windows,
    I couldn't see a thing.
  • 9:55 - 9:59
    I was terrified some car was going
    to come crashing into me.
  • 10:00 - 10:04
    Now, I'm telling you this story
    to get you thinking
  • 10:04 - 10:07
    about how something small
    and seemingly mundane
  • 10:07 - 10:10
    like a little bit of precipitation,
  • 10:10 - 10:15
    can easily grow
    into something very dangerous.
  • 10:15 - 10:20
    We are driving in the rain
    with AI right now,
  • 10:20 - 10:23
    and that rain will turn to snow,
  • 10:24 - 10:27
    and that snow could become a blizzard.
  • 10:28 - 10:30
    We need to pause,
  • 10:31 - 10:33
    check the conditions,
  • 10:33 - 10:36
    put in place safety standards,
  • 10:36 - 10:41
    and ask ourselves
    how far we want to go,
  • 10:42 - 10:46
    because the economic incentives
    for AI and automation
  • 10:46 - 10:48
    to replace human labor
  • 10:48 - 10:53
    will be beyond anything we have seen
    since the Industrial Revolution.
  • 10:54 - 10:58
    Human salary demands can't compete
  • 10:58 - 11:02
    with the base cost of electricity.
  • 11:02 - 11:08
    AIs and robots will replace
    fry cooks at fast-food joints
  • 11:08 - 11:10
    and radiologists in hospitals.
  • 11:11 - 11:14
    Someday, the AI will diagnose your cancer,
  • 11:14 - 11:17
    and a robot will perform the surgery.
  • 11:18 - 11:22
    Only a healthy skepticism of these systems
  • 11:22 - 11:26
    is going to help keep people in the loop.
  • 11:26 - 11:31
    And I'm confident, if we can
    keep people in the loop,
  • 11:31 - 11:36
    if we can build transparent
    AI systems like the dog/wolf example
  • 11:36 - 11:40
    where the AI explained
    what it was doing to people,
  • 11:40 - 11:43
    and people were able to spot-check it,
  • 11:43 - 11:47
    we can create new jobs
    for people partnering with AI.
  • 11:49 - 11:51
    If we work together with AI,
  • 11:51 - 11:56
    we will probably be able to solve
    some of our greatest challenges.
  • 11:57 - 12:01
    But to do that, we need
    to lead and not follow.
  • 12:02 - 12:05
    We need to choose to be less like robots,
  • 12:05 - 12:10
    and we need to build the robots
    to be more like people,
  • 12:11 - 12:13
    because ultimately,
  • 12:13 - 12:18
    the only thing we need to fear
    is not killer robots,
  • 12:19 - 12:22
    it's our own intellectual laziness.
  • 12:22 - 12:27
    The only thing we need
    to fear is ourselves.
  • 12:27 - 12:28
    Thank you.
  • 12:28 - 12:30
    (Applause)
Title:
The real reason to be afraid of Artificial Intelligence | Peter Haas | TEDxDirigo
Description:

A robotics researcher, Peter Haas, invites us into his world in order to understand where the threats of robots and artificial intelligence lie. Before we get to Sci-Fi robot death machines, the internet of things or even transhumanism, there's something right in front of us we need to confront. And this is, ourselves.

Peter is the Associate Director of the Brown University Humanity Centered Robotics Initiative. He was the Co-Founder and COO of XactSense, a UAV manufacturer working on LIDAR mapping and autonomous navigation. Prior to XactSense, Peter founded AIDG, a small hardware enterprise accelerator in emerging markets. Peter received both TED and Echoing Green fellowships. He has been a speaker at TED Global, The World Bank, Harvard University and other venues. He holds a Philosophy B.A. from Yale.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Video Language:
English
Team:
closed TED
Project:
TEDxTalks
Duration:
12:38