Should you trust what AI says? | Elisa Celis | TEDxProvidence
-
0:17 - 0:20So, let me ask you a question:
-
0:20 - 0:24how many of you have witnessed
some kind of racism or sexism -
0:24 - 0:27just today, in the last 24 hours?
-
0:27 - 0:30Or let me rephrase that:
-
0:30 - 0:33how many of you have used
the Internet today? -
0:33 - 0:34(Laughter)
-
0:34 - 0:38Unfortunately, these two things
are effectively the same. -
0:39 - 0:40I'm a computer scientist by training,
-
0:40 - 0:45and I work to design AI technology
to better the world that we are in. -
0:46 - 0:48But the more I work with it,
the more I realize -
0:48 - 0:52that often this technology
is used under a lie of objectivity. -
0:53 - 0:54I like objectivity;
-
0:54 - 0:59in part, I studied math and computer
science because I like that aspect. -
1:00 - 1:02Sure, there are problems that are hard,
-
1:02 - 1:04but at the end of the day,
you have an answer, -
1:04 - 1:06and you know that answer is right.
-
1:07 - 1:09AI is nothing like this.
-
1:09 - 1:13AI is built on data,
and data is not truth. -
1:14 - 1:16Data is not reality.
-
1:16 - 1:20And AI and data are far from objective.
-
1:21 - 1:22Let me give you an example.
-
1:23 - 1:25What do you think a CEO looks like?
-
1:26 - 1:28Well, according to Google,
-
1:29 - 1:30it looks like this.
-
1:30 - 1:34So according to Google,
a CEO looks like this. -
1:35 - 1:39Now, sure, all these people
look like CEOs, -
1:39 - 1:41but there are also a lot of people
-
1:41 - 1:45who do not look like this
who are CEOs. -
1:45 - 1:49What you're seeing here
is not reality; it is a stereotype. -
1:51 - 1:53A recent study showed
-
1:54 - 1:59that even though
more than 25% of CEOs are women, -
1:59 - 2:03what you see on Google Images
is just 11% women. -
2:03 - 2:06And this was true of every profession
that was studied. -
2:06 - 2:10The images were a gendered
stereotype of the reality. -
2:10 - 2:15So, how is this supposedly
intelligent AI technology -
2:16 - 2:18making such basic mistakes?
-
2:19 - 2:23The problem really lies
along every step of the way, -
2:23 - 2:27from the moment we collect data,
to the way we design our algorithms, -
2:28 - 2:31to how we analyze and deploy and use them.
-
2:32 - 2:35Each of these steps
requires human decisions -
2:35 - 2:38and is determined by human motivations.
-
2:38 - 2:43And rarely do we stop ourselves
and ask, Who is making these decisions? -
2:44 - 2:46Who is benefiting from them?
-
2:46 - 2:48And who is being excluded?
-
2:50 - 2:53This happens all over the Internet.
-
2:53 - 2:58Online ads, for example, have been
repeatedly shown to discriminate -
2:58 - 3:01in housing, lending and employment.
-
3:02 - 3:05A recent study showed
that ads for high-paying jobs -
3:05 - 3:09were five times more likely
to be shown to men than to women, -
3:10 - 3:13and ads for housing
effectively redline people. -
3:13 - 3:18They show ads for home buying
to audiences that are 75% white, -
3:19 - 3:24whereas more diverse audiences
are shown ads for rental homes instead. -
3:25 - 3:27For me, this is personal.
-
3:28 - 3:31I'm a woman, I'm Latina, I'm a mother.
-
3:32 - 3:35This is not the world that I want,
it's not the world I want for my kids, -
3:35 - 3:38and it's certainly no world
that I want to be a part of building. -
3:39 - 3:42When I realized that,
I knew I had to do something about it, -
3:42 - 3:45and that's what I've been working on
the last several years, -
3:45 - 3:49along with my colleagues
and an incredible community of researchers -
3:49 - 3:51that has been building this
around the world. -
3:52 - 3:55We're defining and designing AI technology
-
3:55 - 3:59that does not suffer from these problems
of discrimination and bias. -
4:00 - 4:02So, think about the CEO example.
-
4:02 - 4:04That's what we call a selection problem.
-
4:04 - 4:07We have a whole bunch of data,
all these images, -
4:07 - 4:08and we have to choose some of them.
-
4:08 - 4:11And in the real world,
we have similar problems. -
4:11 - 4:14Say I'm an employer
and I have to hire some people. -
4:14 - 4:16Well, I have a whole bunch of candidates,
-
4:16 - 4:18this time with their CVs
and their interviews, -
4:18 - 4:20and I have to select a few.
-
4:20 - 4:22But in the real world,
there are protections. -
4:23 - 4:24If, for example,
-
4:24 - 4:27I have 100 male candidates
and 100 female candidates, -
4:27 - 4:31if I go ahead and I hire
10 of those male candidates, -
4:31 - 4:34well, then I better, legally,
have a very good reason -
4:34 - 4:37to not have hired at least
eight of those women as well. -
4:38 - 4:42So can we ask AI
to follow these same rules? -
4:42 - 4:45And increasingly,
we show that yes, we can. -
4:45 - 4:47It's just a matter of tweaking the system.
-
4:47 - 4:52We can build AI that is held to the same
standards that we have for people, -
4:52 - 4:54that we have for companies.
-
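The hiring protection described above is essentially the "four-fifths rule": the selection rate for one group should be at least 80% of the rate for the most-favored group. An automated selection system can be audited against that rule directly. Below is a minimal sketch in Python using the hypothetical counts from the talk; the function names and numbers are illustrative only, not the speaker's actual system.

```python
# Minimal sketch (illustrative only): auditing a selection against the
# "four-fifths rule" described in the talk. All counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def passes_four_fifths(rate_a: float, rate_b: float) -> bool:
    """True if the lower selection rate is at least 80% of the higher one."""
    lower, higher = sorted([rate_a, rate_b])
    return lower >= 0.8 * higher

# The example from the talk: 100 male and 100 female candidates.
men_rate = selection_rate(selected=10, applicants=100)    # 10%
women_rate = selection_rate(selected=8, applicants=100)   # 8%

print(passes_four_fifths(men_rate, women_rate))                # True: 8% >= 0.8 * 10%

# Hiring only 5 of the 100 women at the same male rate would fail the check.
print(passes_four_fifths(men_rate, selection_rate(5, 100)))    # False
```

The same check can also be imposed as a constraint the selection algorithm must satisfy up front, rather than an after-the-fact audit, which is the kind of "tweaking the system" the talk refers to.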
4:55 - 4:56Remember our CEOs?
-
4:57 - 4:58We can go from that
-
4:59 - 5:00to this.
-
5:00 - 5:04We can go from the stereotype
to the reality. -
5:04 - 5:07In fact, we can go
from the reality we have now, -
5:07 - 5:10to the reality that we want
our world to be. -
5:11 - 5:14Now, there are technical solutions
-
5:15 - 5:19for this, for ads, for a myriad
of other AI problems. -
5:21 - 5:23But I don't want you
to think that that is enough. -
5:25 - 5:28AI is being used right now
in your communities, -
5:29 - 5:33in your police departments,
in your government offices. -
5:33 - 5:37It is being used to decide
whether or not you get that loan, -
5:37 - 5:40to screen you for potential
health problems, -
5:41 - 5:45and to decide whether or not
you get that callback on that interview. -
5:46 - 5:49AI is touching all of our lives,
-
5:49 - 5:54and it is largely doing that
in an unchecked and unregulated manner. -
5:56 - 5:58To give another example,
-
5:58 - 6:02facial recognition technology
is being used all across the US, -
6:02 - 6:04everywhere from police departments
to shopping malls, -
6:04 - 6:06to help identify criminals.
-
6:08 - 6:10Do any of these faces look familiar?
-
6:11 - 6:15The ACLU showed that all of these people
-
6:16 - 6:22were identified by Amazon's off-the-shelf
AI technology as arrested criminals. -
6:23 - 6:29I should say falsely identified,
because these are all US congresspeople. -
6:29 - 6:30(Laughter)
-
6:31 - 6:33AI makes mistakes,
-
6:33 - 6:37and these mistakes affect real people,
-
6:38 - 6:41from the people who were told
that they did not have cancer -
6:41 - 6:45only to find out too late
that that was a mistake; -
6:45 - 6:48to people who are imprisoned
for extended periods of time -
6:48 - 6:53based on recommendations
by AI technology that is flawed. -
6:54 - 6:56These mistakes have human impact.
-
6:57 - 6:59These mistakes are real.
-
7:01 - 7:05And time and again,
just as in the previous examples, -
7:05 - 7:09we show that these mistakes
exacerbate existing societal biases. -
7:11 - 7:13Among the congresspeople,
-
7:15 - 7:18even though only 20% of Congress
-
7:19 - 7:20are people of color,
-
7:21 - 7:25they were more than twice as likely
to be flagged by the system -
7:25 - 7:27as being an arrested criminal.
-
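To see where "more than twice as likely" comes from, the ratio can be reconstructed from the figures widely reported for the ACLU's test (28 false matches in total, 11 of them people of color, across the 535 members of Congress). Treat these counts as approximate; the short calculation below is only meant to show the arithmetic.

```python
# Worked example (approximate, illustrative): how "more than twice as likely"
# follows from the commonly reported figures for the ACLU's Rekognition test.

MEMBERS_OF_CONGRESS = 535
SHARE_PEOPLE_OF_COLOR = 0.20          # ~20% of Congress, as stated in the talk
FALSE_MATCHES_TOTAL = 28              # reported total false matches
FALSE_MATCHES_PEOPLE_OF_COLOR = 11    # ~39% of the false matches

poc_members = MEMBERS_OF_CONGRESS * SHARE_PEOPLE_OF_COLOR   # ~107 members
white_members = MEMBERS_OF_CONGRESS - poc_members           # ~428 members

poc_flag_rate = FALSE_MATCHES_PEOPLE_OF_COLOR / poc_members                      # ~10.3%
white_flag_rate = (FALSE_MATCHES_TOTAL - FALSE_MATCHES_PEOPLE_OF_COLOR) / white_members  # ~4.0%

print(round(poc_flag_rate / white_flag_rate, 1))  # ~2.6x: "more than twice as likely"
```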
7:28 - 7:32We need to stop allowing
this pseudo-objective AI -
7:33 - 7:36to legitimize oppressive systems. -
-
7:38 - 7:39So again, I want to say,
-
7:40 - 7:43yes, there are technical problems,
and those are hard, -
7:43 - 7:45but we're working on those;
we have solutions. -
7:45 - 7:47I'm making sure of that.
-
7:47 - 7:51But having that technical
solution is not enough. -
7:52 - 7:57What we need is to move
from those technical solutions -
7:57 - 7:58to systems of justice.
-
8:00 - 8:02We need to be able to hold AI accountable
-
8:03 - 8:06to the same high standards
that we hold each other to. -
8:06 - 8:10And increasingly, it is people like you
who are making that happen. -
8:11 - 8:14When it comes to governments,
in the past few months alone, -
8:14 - 8:18San Francisco, Oakland
and Somerville in Massachusetts -
8:19 - 8:23passed laws that prevent the government
from using facial recognition technology. -
8:24 - 8:28This came from groundwork,
from people showing up, -
8:28 - 8:31going to their town meetings,
writing letters, asking questions, -
8:31 - 8:35and not buying the snake oil
of objective AI. -
8:36 - 8:37When it comes to companies,
-
we shouldn't underestimate
the power of collective action. -
8:42 - 8:44Due to public pressure,
-
8:44 - 8:48large companies have
rolled back problematic AI. -
8:48 - 8:51From Watson Health, which was
misdiagnosing cancer patients, -
8:52 - 8:55to Amazon's hiring tool, which was
discriminating against women, -
8:55 - 9:00large companies have been shown
to roll back and stop and pause -
9:00 - 9:02when we have public outcry.
-
9:03 - 9:08Together, we can prevent AI
from holding us back, -
9:08 - 9:10or worse, pushing us backwards.
-
9:10 - 9:12If we're careful with it,
-
9:12 - 9:16if we hold it accountable,
if we use it judiciously, -
9:16 - 9:20we can have AI show us
not just the world we're in, -
9:21 - 9:23but the world that we want to be in.
-
9:23 - 9:25The potential is incredible,
-
9:25 - 9:28and it's up to all of us
to make sure that happens. -
9:29 - 9:29Thank you.
-
9:29 - 9:31(Applause) (Cheering)
Title: Should you trust what AI says? | Elisa Celis | TEDxProvidence

Description:
Yale Professor Elisa Celis worked to create AI technology to better the world, only to find out that it has a problem. A big one. AI that is designed to serve all of us in fact excludes most of us. Learn why this happens, what can be fixed, and if that is really enough.

Elisa Celis is an Assistant Professor of Statistics and Data Science at Yale University. Elisa’s research focuses on problems that arise at the interface of computation and machine learning and its societal ramifications. Specifically, she studies the manifestation of social and economic biases in our online lives via the algorithms that encode and perpetuate them. Her work spans multiple areas, including social computing and crowdsourcing, data science, and algorithm design with a current emphasis on fairness and diversity in artificial intelligence and machine learning.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Video Language: English
Team: closed TED
Project: TEDxTalks
Duration: 09:33