Should you trust what AI says? | Elisa Celis | TEDxProvidence

  • 0:17 - 0:20
    So, let me ask you a question:
  • 0:20 - 0:24
    how many of you have witnessed
    some kind of racism or sexism
  • 0:24 - 0:27
    just today, in the last 24 hours?
  • 0:27 - 0:30
    Or let me rephrase that:
  • 0:30 - 0:33
    how many of you have used
    the Internet today?
  • 0:33 - 0:34
    (Laughter)
  • 0:34 - 0:38
    Unfortunately, these two things
    are effectively the same.
  • 0:39 - 0:40
    I'm a computer scientist by training,
  • 0:40 - 0:45
    and I work to design AI technology
    to better the world that we are in.
  • 0:46 - 0:48
    But the more I work with it,
    the more I realize
  • 0:48 - 0:52
    that often this technology
    is used under a lie of objectivity.
  • 0:53 - 0:54
    I like objectivity;
  • 0:54 - 0:59
    in part, I studied math and computer
    science because I like that aspect.
  • 1:00 - 1:02
    Sure, there are problems that are hard,
  • 1:02 - 1:04
    but at the end of the day,
    you have an answer,
  • 1:04 - 1:06
    and you know that answer is right.
  • 1:07 - 1:09
    AI is nothing like this.
  • 1:09 - 1:13
    AI is built on data,
    and data is not truth.
  • 1:14 - 1:16
    Data is not reality.
  • 1:16 - 1:20
    And AI and data are far from objective.
  • 1:21 - 1:22
    Let me give you an example.
  • 1:23 - 1:25
    What do you think a CEO looks like?
  • 1:26 - 1:28
    Well, according to Google,
  • 1:29 - 1:30
    it looks like this.
  • 1:30 - 1:34
    So according to Google,
    a CEO looks like this.
  • 1:35 - 1:39
    Now, sure, all these people
    look like CEOs,
  • 1:39 - 1:41
    but there are also a lot of people
  • 1:41 - 1:45
    who do not look like this
    who are CEOs.
  • 1:45 - 1:49
    What you're seeing here
    is not reality; it is a stereotype.
  • 1:51 - 1:53
    A recent study showed
  • 1:54 - 1:59
    that even though
    more than 25% of CEOs are women,
  • 1:59 - 2:03
    what you see on Google Images
    is just 11% women.
  • 2:03 - 2:06
    And this was true of every profession
    that was studied.
  • 2:06 - 2:10
    The images were a gendered
    stereotype of the reality.
  • 2:10 - 2:15
    So, how is this supposedly
    intelligent AI technology
  • 2:16 - 2:18
    making such basic mistakes?
  • 2:19 - 2:23
    The problem really lies
    along every step of the way,
  • 2:23 - 2:27
    from the moment we collect data,
    to the way we design our algorithms,
  • 2:28 - 2:31
    to how we analyze and deploy and use them.
  • 2:32 - 2:35
    Each of these steps
    requires human decisions
  • 2:35 - 2:38
    and is determined by human motivations.
  • 2:38 - 2:43
    And rarely do we stop ourselves
    and ask: Who is making these decisions?
  • 2:44 - 2:46
    Who is benefiting from them?
  • 2:46 - 2:48
    And who is being excluded?
  • 2:50 - 2:53
    This happens all over the Internet.
  • 2:53 - 2:58
    Online ads, for example, have been
    repeatedly shown to discriminate
  • 2:58 - 3:01
    in housing, lending and employment.
  • 3:02 - 3:05
    A recent study showed
    that ads for high-paying jobs
  • 3:05 - 3:09
    were five times more likely
    to be shown to men than to women,
  • 3:10 - 3:13
    and ads for housing
    effectively redline people.
  • 3:13 - 3:18
    They show ads for home buying
    to audiences that are 75% white,
  • 3:19 - 3:24
    whereas more diverse audiences
    are shown ads for rental homes instead.
  • 3:25 - 3:27
    For me, this is personal.
  • 3:28 - 3:31
    I'm a woman, I'm Latina, I'm a mother.
  • 3:32 - 3:35
    This is not the world that I want,
    it's not the world I want for my kids,
  • 3:35 - 3:38
    and it's certainly no world
    that I want to be a part of building.
  • 3:39 - 3:42
    When I realized that,
    I knew I had to do something about it,
  • 3:42 - 3:45
    and that's what I've been working on
    the last several years,
  • 3:45 - 3:49
    along with my colleagues
    and an incredible community of researchers
  • 3:49 - 3:51
    that has been building this
    around the world.
  • 3:52 - 3:55
    We're defining and designing AI technology
  • 3:55 - 3:59
    that does not suffer from these problems
    of discrimination and bias.
  • 4:00 - 4:02
    So, think about the CEO example.
  • 4:02 - 4:04
    That's what we call a selection problem.
  • 4:04 - 4:07
    We have a whole bunch of data,
    all these images,
  • 4:07 - 4:08
    and we have to choose some of them.
  • 4:08 - 4:11
    And in the real world,
    we have similar problems.
  • 4:11 - 4:14
    Say I'm an employer
    and I have to hire some people.
  • 4:14 - 4:16
    Well, I have a whole bunch of candidates,
  • 4:16 - 4:18
    this time with their CVs
    and their interviews,
  • 4:18 - 4:20
    and I have to select a few.
  • 4:20 - 4:22
    But in the real world,
    there are protections.
  • 4:23 - 4:24
    If, for example,
  • 4:24 - 4:27
    I have 100 male candidates
    and 100 female candidates,
  • 4:27 - 4:31
    if I go ahead and I hire
    10 of those male candidates,
  • 4:31 - 4:34
    well, then I better, legally,
    have a very good reason
  • 4:34 - 4:37
    to not have hired at least
    eight of those women as well.
  • 4:38 - 4:42
    So can we ask AI
    to follow these same rules?
  • 4:42 - 4:45
    And increasingly,
    we show that yes, we can.
  • 4:45 - 4:47
    It's just a matter of tweaking the system.
  • 4:47 - 4:52
    We can build AI that is held to the same
    standards that we have for people,
  • 4:52 - 4:54
    that we have for companies.
  • 4:55 - 4:56
    Remember our CEOs?
  • 4:57 - 4:58
    We can go from that
  • 4:59 - 5:00
    to this.
  • 5:00 - 5:04
    We can go from the stereotype
    to the reality.
  • 5:04 - 5:07
    In fact, we can go
    from the reality we have now,
  • 5:07 - 5:10
    to the reality that we want
    our world to be.
  • 5:11 - 5:14
    Now, there are technical solutions
  • 5:15 - 5:19
    for this, for ads, for a myriad
    of other AI problems.
  • 5:21 - 5:23
    But I don't want you
    to think that that is enough.
  • 5:25 - 5:28
    AI is being used right now
    in your communities,
  • 5:29 - 5:33
    in your police departments,
    in your government offices.
  • 5:33 - 5:37
    It is being used to decide
    whether or not you get that loan,
  • 5:37 - 5:40
    to screen you for potential
    health problems,
  • 5:41 - 5:45
    and to decide whether or not
    you get that callback on that interview.
  • 5:46 - 5:49
    AI is touching all of our lives,
  • 5:49 - 5:54
    and it is largely doing that
    in an unchecked and unregulated manner.
  • 5:56 - 5:58
    To give another example,
  • 5:58 - 6:02
    facial recognition technology
    is being used all across the US,
  • 6:02 - 6:04
    everywhere from police departments
    to shopping malls,
  • 6:04 - 6:06
    to help identify criminals.
  • 6:08 - 6:10
    Do any of these faces look familiar?
  • 6:11 - 6:15
    The ACLU showed that all of these people
  • 6:16 - 6:22
    were identified by Amazon's off-the-shelf
    AI technology as arrested criminals.
  • 6:23 - 6:29
    I should say falsely identified,
    because these are all US congresspeople.
  • 6:29 - 6:30
    (Laughter)
  • 6:31 - 6:33
    AI makes mistakes,
  • 6:33 - 6:37
    and these mistakes affect real people,
  • 6:38 - 6:41
    from the people who were told
    that they did not have cancer
  • 6:41 - 6:45
    only to find out too late
    that that was a mistake;
  • 6:45 - 6:48
    to people who are imprisoned
    for extended periods of time
  • 6:48 - 6:53
    based on recommendations
    by AI technology that is flawed.
  • 6:54 - 6:56
    These mistakes have human impact.
  • 6:57 - 6:59
    These mistakes are real.
  • 7:01 - 7:05
    And time and again,
    just as in the previous examples,
  • 7:05 - 7:09
    we show that these mistakes
    exacerbate existing societal biases.
  • 7:11 - 7:13
    Among the congresspeople,
  • 7:15 - 7:18
    even though only 20% of Congress
  • 7:19 - 7:20
    are people of color,
  • 7:21 - 7:25
    they were more than twice as likely
    to be flagged by the system
  • 7:25 - 7:27
    as being an arrested criminal.
  • 7:28 - 7:32
    We need to stop allowing
    this pseudo-objective AI
  • 7:33 - 7:36
    to legitimize oppressive systems.
  • 7:38 - 7:39
    So again, I want to say,
  • 7:40 - 7:43
    yes, there are technical problems,
    and those are hard,
  • 7:43 - 7:45
    but we're working on those;
    we have solutions.
  • 7:45 - 7:47
    I'm making sure of that.
  • 7:47 - 7:51
    But having that technical
    solution is not enough.
  • 7:52 - 7:57
    What we need is to move
    from those technical solutions
  • 7:57 - 7:58
    to systems of justice.
  • 8:00 - 8:02
    We need to be able to hold AI accountable
  • 8:03 - 8:06
    to the same high standards
    that we hold each other to.
  • 8:06 - 8:10
    And increasingly, it is people like you
    who are making that happen.
  • 8:11 - 8:14
    When it comes to governments,
    in the past few months alone,
  • 8:14 - 8:18
    San Francisco, Oakland
    and Somerville in Massachusetts
  • 8:19 - 8:23
    passed laws that prevent the government
    from using facial recognition technology.
  • 8:24 - 8:28
    This came from groundwork,
    from people showing up,
  • 8:28 - 8:31
    going to their town meetings,
    writing letters, asking questions,
  • 8:31 - 8:35
    and not buying the snake oil
    of objective AI.
  • 8:36 - 8:37
    When it comes to companies,
  • 8:37 - 8:41
    we shouldn't underestimate
    the power of collective action.
  • 8:42 - 8:44
    Due to public pressure,
  • 8:44 - 8:48
    large companies have
    rolled back problematic AI.
  • 8:48 - 8:51
    From Watson Health, which was
    misdiagnosing cancer patients,
  • 8:52 - 8:55
    to Amazon's hiring tool, which was
    discriminating against women,
  • 8:55 - 9:00
    large companies have been shown
    to roll back and stop and pause
  • 9:00 - 9:02
    when we have public outcry.
  • 9:03 - 9:08
    Together, we can prevent AI
    from holding us back,
  • 9:08 - 9:10
    or worse, pushing us backwards.
  • 9:10 - 9:12
    If we're careful with it,
  • 9:12 - 9:16
    if we hold it accountable,
    if we use it judiciously,
  • 9:16 - 9:20
    we can have AI show us
    not just the world we're in,
  • 9:21 - 9:23
    but the world that we want to be in.
  • 9:23 - 9:25
    The potential is incredible,
  • 9:25 - 9:28
    and it's up to all of us
    to make sure that happens.
  • 9:29 - 9:29
    Thank you.
  • 9:29 - 9:31
    (Applause) (Cheering)
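
The hiring example at 4:23-4:37 (100 male and 100 female candidates; if 10 of the men are hired, at least 8 of the women should be too) matches the arithmetic of the US "four-fifths rule" for adverse impact: each group's selection rate should be at least 80% of the highest group's rate. A minimal sketch of that check, in Python and using only the talk's hypothetical numbers:

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants


def passes_four_fifths(rate_a: float, rate_b: float) -> bool:
    """Adverse-impact check: the lower selection rate must be at least
    80% of the higher one (the four-fifths rule)."""
    low, high = sorted((rate_a, rate_b))
    return low >= 0.8 * high


# The talk's hypothetical numbers: 100 male and 100 female candidates,
# 10 of the men hired.
men_rate = selection_rate(hired=10, applicants=100)          # 0.10
print(0.8 * men_rate * 100)                                  # 8.0 -> "at least eight" women

print(passes_four_fifths(men_rate, selection_rate(8, 100)))  # True
print(passes_four_fifths(men_rate, selection_rate(7, 100)))  # False

Constraining an automated selection system to satisfy this same check is one way to read the talk's "tweaking the system" at 4:45-4:47; it is an illustration of the standard, not the speaker's own implementation.
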
Description:

Yale Professor Elisa Celis worked to create AI technology to better the world, only to find out that it has a problem. A big one. AI that is designed to serve all of us in fact excludes most of us. Learn why this happens, what can be fixed, and if that is really enough.

Elisa Celis is an Assistant Professor of Statistics and Data Science at Yale University. Elisa’s research focuses on problems that arise at the interface of computation and machine learning and its societal ramifications. Specifically, she studies the manifestation of social and economic biases in our online lives via the algorithms that encode and perpetuate them. Her work spans multiple areas, including social computing and crowdsourcing, data science, and algorithm design with a current emphasis on fairness and diversity in artificial intelligence and machine learning.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
