How to keep human biases out of AI

  • 0:01 - 0:05
    How many decisions
    have been made about you today,
  • 0:05 - 0:07
    or this week or this year,
  • 0:07 - 0:09
    by artificial intelligence?
  • 0:11 - 0:13
    I build AI for a living
  • 0:13 - 0:16
    so, full disclosure, I'm kind of a nerd.
  • 0:16 - 0:18
    And because I'm kind of a nerd,
  • 0:18 - 0:20
    whenever a new news story comes out
  • 0:20 - 0:24
    about artificial intelligence
    stealing all our jobs,
  • 0:24 - 0:28
    or robots getting citizenship
    of an actual country,
  • 0:28 - 0:31
    I'm the person my friends
    and followers message
  • 0:31 - 0:33
    freaking out about the future.
  • 0:34 - 0:36
    We see this everywhere.
  • 0:36 - 0:41
    This media panic that
    our robot overlords are taking over.
  • 0:41 - 0:43
    We could blame Hollywood for that.
  • 0:44 - 0:48
    But in reality, that's not the problem
    we should be focusing on.
  • 0:49 - 0:53
    There is a more pressing danger,
    a bigger risk with AI,
  • 0:53 - 0:54
    that we need to fix first.
  • 0:55 - 0:58
    So we are back to this question:
  • 0:58 - 1:02
    How many decisions
    have been made about you today by AI?
  • 1:04 - 1:06
    And how many of these
  • 1:06 - 1:10
    were based on your gender,
    your race or your background?
  • 1:12 - 1:15
    Algorithms are being used all the time
  • 1:15 - 1:19
    to make decisions about who we are
    and what we want.
  • 1:20 - 1:24
    Some of the women in this room
    will know what I'm talking about
  • 1:24 - 1:28
    if you've been made to sit through
    those pregnancy test adverts on YouTube
  • 1:28 - 1:30
    like 1,000 times.
  • 1:30 - 1:33
    Or you've scrolled past adverts
    of fertility clinics
  • 1:33 - 1:35
    on your Facebook feed.
  • 1:36 - 1:38
    Or in my case, Indian marriage bureaus.
  • 1:38 - 1:39
    (Laughter)
  • 1:39 - 1:42
    But AI isn't just being used
    to make decisions
  • 1:42 - 1:45
    about what products we want to buy
  • 1:45 - 1:47
    or which show we want to binge-watch next.
  • 1:49 - 1:54
    I wonder how you'd feel about someone
    who thought things like this:
  • 1:54 - 1:56
    "A black or Latino person
  • 1:56 - 2:00
    is less likely than a white person
    to pay off their loan on time."
  • 2:02 - 2:04
    "A person called John
    makes a better programmer
  • 2:04 - 2:06
    than a person called Mary."
  • 2:07 - 2:12
    "A black man is more likely to be
    a repeat offender than a white man."
  • 2:15 - 2:16
    You're probably thinking,
  • 2:16 - 2:20
    "Wow, that sounds like a pretty sexist,
    racist person," right?
  • 2:21 - 2:26
    These are some real decisions
    that AI has made very recently,
  • 2:26 - 2:29
    based on the biases
    it has learned from us,
  • 2:29 - 2:30
    from the humans.
  • 2:32 - 2:37
    AI is being used to help decide
    whether or not you get that job interview;
  • 2:37 - 2:39
    how much you pay for your car insurance;
  • 2:39 - 2:41
    how good your credit score is;
  • 2:41 - 2:44
    and even what rating you get
    in your annual performance review.
  • 2:45 - 2:48
    But these decisions
    are all being filtered through
  • 2:48 - 2:54
    its assumptions about our identity,
    our race, our gender, our age.
  • 2:56 - 2:59
    How is that happening?
  • 2:59 - 3:02
    Now, imagine an AI is helping
    a hiring manager
  • 3:02 - 3:05
    find the next tech leader in the company.
  • 3:05 - 3:08
    So far, the manager
    has been hiring mostly men.
  • 3:08 - 3:13
    So the AI learns men are more likely
    to be programmers than women.
  • 3:14 - 3:16
    And it's a very short leap from there to:
  • 3:16 - 3:18
    men make better programmers than women.
  • 3:19 - 3:23
    We have reinforced
    our own bias into the AI.
  • 3:23 - 3:27
    And now, it's screening out
    female candidates.
  • 3:29 - 3:32
    Hang on, if a human
    hiring manager did that,
  • 3:32 - 3:34
    we'd be outraged, we wouldn't allow it.
  • 3:34 - 3:38
    This kind of gender
    discrimination is not OK.
  • 3:38 - 3:42
    And yet somehow,
    AI has become above the law,
  • 3:42 - 3:44
    because a machine made the decision.
  • 3:46 - 3:47
    That's not it.
  • 3:47 - 3:52
    We are also reinforcing our bias
    in how we interact with AI.
  • 3:53 - 3:59
    How often do you use a voice assistant
    like Siri, Alexa or even Cortana?
  • 3:59 - 4:01
    They all have two things in common:
  • 4:02 - 4:05
    one, they can never get my name right,
  • 4:05 - 4:07
    and second, they are all female.
  • 4:08 - 4:11
    They are designed to be
    our obedient servants,
  • 4:11 - 4:14
    turning your lights on and off,
    ordering your shopping.
  • 4:15 - 4:18
    You get male AIs too,
    but they tend to be more high-powered,
  • 4:18 - 4:22
    like IBM Watson,
    making business decisions,
  • 4:22 - 4:25
    Salesforce Einstein
    or ROSS, the robot lawyer.
  • 4:26 - 4:30
    So poor robots, even they suffer
    from sexism in the workplace.
  • 4:30 - 4:31
    (Laughter)
  • 4:33 - 4:35
    Think about how these two things combine
  • 4:35 - 4:41
    and affect a kid growing up
    around AI in today's world.
  • 4:41 - 4:44
    So they're doing some research
    for a school project
  • 4:44 - 4:47
    and they Google images of CEO.
  • 4:47 - 4:50
    The algorithm shows them
    results of mostly men.
  • 4:50 - 4:52
    And now, they Google personal assistant.
  • 4:52 - 4:56
    As you can guess,
    it shows them mostly females.
  • 4:56 - 4:59
    And then they want to put on some music,
    and maybe order some food,
  • 4:59 - 5:06
    and now, they are barking orders
    at an obedient female voice assistant.
  • 5:08 - 5:13
    Some of our brightest minds
    are creating this technology today.
  • 5:13 - 5:17
    Technology that they could have created
    in any way they wanted.
  • 5:17 - 5:23
    And yet, they have chosen to create it
    in the style of a 1950s "Mad Men" secretary.
  • 5:23 - 5:24
    Yay!
  • 5:25 - 5:26
    But OK, don't worry,
  • 5:26 - 5:28
    this is not going to end
    with me telling you
  • 5:28 - 5:32
    that we are all heading towards
    sexist, racist machines running the world.
  • 5:33 - 5:39
    The good news about AI
    is that it is entirely within our control.
  • 5:39 - 5:43
    We get to teach the right values,
    the right ethics to AI.
  • 5:44 - 5:46
    So there are three things we can do.
  • 5:46 - 5:50
    One, we can be aware of our own biases
  • 5:50 - 5:52
    and the bias in machines around us.
  • 5:52 - 5:57
    Two, we can make sure that diverse teams
    are building this technology.
  • 5:57 - 6:02
    And three, we have to give it
    diverse experiences to learn from.
  • 6:03 - 6:06
    I can talk about the first two
    from personal experience.
  • 6:06 - 6:08
    When you work in technology
  • 6:08 - 6:11
    and you don't look like
    a Mark Zuckerberg or Elon Musk,
  • 6:11 - 6:15
    your life is a little bit difficult,
    your ability gets questioned.
  • 6:16 - 6:17
    Here's just one example.
  • 6:17 - 6:21
    Like most developers,
    I often join online tech forums
  • 6:21 - 6:24
    and share my knowledge to help others.
  • 6:24 - 6:26
    And I've found,
  • 6:26 - 6:30
    when I log on as myself,
    with my own photo, my own name,
  • 6:30 - 6:34
    I tend to get questions
    or comments like this:
  • 6:34 - 6:37
    "What makes you think
    you're qualified to talk about AI?"
  • 6:38 - 6:42
    "What makes you think
    you know about machine learning?"
  • 6:42 - 6:45
    So, as you do, I made a new profile,
  • 6:45 - 6:50
    and this time, instead of my own picture,
    I chose a cat with a jet pack on it.
  • 6:50 - 6:53
    And I chose a name
    that did not reveal my gender.
  • 6:54 - 6:57
    You can probably guess
    where this is going, right?
  • 6:57 - 7:03
    So, this time, I didn't get any of those
    patronizing comments about my ability
  • 7:03 - 7:06
    and I was able to actually
    get some work done.
  • 7:08 - 7:09
    And it sucks, guys.
  • 7:09 - 7:12
    I've been building robots since I was 15,
  • 7:12 - 7:14
    I have a few degrees in computer science,
  • 7:14 - 7:17
    and yet, I had to hide my gender
  • 7:17 - 7:19
    in order for my work
    to be taken seriously.
  • 7:20 - 7:22
    So, what's going on here?
  • 7:22 - 7:25
    Are men just better
    at technology than women?
  • 7:26 - 7:27
    Another study found
  • 7:28 - 7:32
    that when women coders on one platform
    hid their gender, like myself,
  • 7:32 - 7:36
    their code was accepted
    four percent more often than men's.
  • 7:37 - 7:39
    So this is not about the talent.
  • 7:40 - 7:43
    This is about an elitism in AI
  • 7:43 - 7:46
    that says a programmer
    needs to look like a certain person.
  • 7:47 - 7:50
    What we really need to do
    to make AI better
  • 7:50 - 7:54
    is bring in people
    from all kinds of backgrounds.
  • 7:55 - 7:57
    We need people who can
    write and tell stories
  • 7:57 - 7:59
    to help us create personalities of AI.
  • 8:00 - 8:02
    We need people who can solve problems.
  • 8:03 - 8:07
    We need people
    who face different challenges
  • 8:07 - 8:12
    and we need people who can tell us
    what the real issues are that need fixing
  • 8:12 - 8:15
    and help us find ways
    that technology can actually fix them.
  • 8:18 - 8:22
    Because, when people
    from diverse backgrounds come together,
  • 8:22 - 8:24
    when we build things in the right way,
  • 8:24 - 8:26
    the possibilities are limitless.
  • 8:27 - 8:30
    And that's what I want to end
    by talking to you about.
  • 8:30 - 8:34
    Less racist robots, less machines
    that are going to take our jobs --
  • 8:34 - 8:37
    and more about what technology
    can actually achieve.
  • 8:38 - 8:42
    So, yes, some of the energy
    in the world of AI,
  • 8:42 - 8:43
    in the world of technology
  • 8:43 - 8:47
    is going to be about
    what ads you see on your stream.
  • 8:47 - 8:53
    But a lot of it is going towards
    making the world so much better.
  • 8:54 - 8:57
    Think about a pregnant woman
    in the Democratic Republic of Congo,
  • 8:57 - 9:01
    who has to walk 17 hours
    to her nearest rural prenatal clinic
  • 9:02 - 9:03
    to get a checkup.
  • 9:03 - 9:06
    What if she could get a diagnosis
    on her phone instead?
  • 9:08 - 9:10
    Or think about what AI could do
  • 9:10 - 9:12
    for those one in three women
    in South Africa
  • 9:12 - 9:14
    who face domestic violence.
  • 9:15 - 9:18
    If it wasn't safe to talk out loud,
  • 9:18 - 9:20
    they could get an AI service
    to raise the alarm,
  • 9:20 - 9:23
    get financial and legal advice.
  • 9:24 - 9:29
    These are all real examples of projects
    that people, including myself,
  • 9:29 - 9:32
    are working on right now, using AI.
  • 9:34 - 9:37
    So, I'm sure in the next couple of days
    there will be yet another news story
  • 9:37 - 9:40
    about the existential risk,
  • 9:40 - 9:42
    robots taking over
    and coming for your jobs.
  • 9:42 - 9:43
    (Laughter)
  • 9:43 - 9:46
    And when something like that happens,
  • 9:46 - 9:49
    I know I'll get the same messages
    worrying about the future.
  • 9:49 - 9:53
    But I feel incredibly positive
    about this technology.
  • 9:55 - 10:01
    This is our chance to remake the world
    into a much more equal place.
  • 10:02 - 10:06
    But to do that, we need to build it
    the right way from the get-go.
  • 10:08 - 10:13
    We need people of different genders,
    races, sexualities and backgrounds.
  • 10:14 - 10:17
    We need women to be the makers
  • 10:17 - 10:20
    and not just the machines
    who do the makers' bidding.
  • 10:22 - 10:26
    We need to think very carefully
    what we teach machines,
  • 10:26 - 10:27
    what data we give them,
  • 10:27 - 10:30
    so they don't just repeat
    our own past mistakes.
  • 10:32 - 10:36
    So I hope I leave you
    thinking about two things.
  • 10:37 - 10:41
    First, I hope you leave
    thinking about bias today.
  • 10:41 - 10:44
    And that the next time
    you scroll past an advert
  • 10:44 - 10:47
    that assumes you are interested
    in fertility clinics
  • 10:47 - 10:50
    or online betting websites,
  • 10:50 - 10:52
    that you think and remember
  • 10:52 - 10:57
    that the same technology is assuming
    that a black man will reoffend.
  • 10:58 - 11:02
    Or that a woman is more likely
    to be a personal assistant than a CEO.
  • 11:03 - 11:07
    And I hope that reminds you
    that we need to do something about it.
  • 11:09 - 11:11
    And second,
  • 11:11 - 11:13
    I hope you think about the fact
  • 11:13 - 11:15
    that you don't need to look a certain way
  • 11:15 - 11:19
    or have a certain background
    in engineering or technology
  • 11:19 - 11:20
    to create AI,
  • 11:20 - 11:23
    which is going to be
    a phenomenal force for our future.
  • 11:24 - 11:26
    You don't need to look
    like a Mark Zuckerberg,
  • 11:26 - 11:28
    you can look like me.
  • 11:29 - 11:32
    And it is up to all of us in this room
  • 11:32 - 11:35
    to convince the governments
    and the corporations
  • 11:35 - 11:38
    to build AI technology for everyone,
  • 11:38 - 11:40
    including the edge cases.
  • 11:40 - 11:42
    And for us all to get education
  • 11:42 - 11:45
    about this phenomenal
    technology in the future.
  • 11:46 - 11:48
    Because if we do that,
  • 11:48 - 11:53
    then we've only just scratched the surface
    of what we can achieve with AI.
  • 11:53 - 11:54
    Thank you.
  • 11:54 - 11:57
    (Applause)
Title:
How to keep human biases out of AI
Speaker:
Kriti Sharma
Description:

AI algorithms make important decisions about you all the time -- like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human biases coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
12:10