So, let me ask you a question:
how many of you have witnessed
some kind of racism or sexism
just today, in the last 24 hours?
Or let me rephrase that:
how many of you have used
the Internet today?
(Laughter)
Unfortunately, these two things
are effectively the same.
I'm a computer scientist by training,
and I work to design AI technology
to better the world that we are in.
But the more I work with it,
the more I realize
that often this technology
is used under a lie of objectivity.
I like objectivity;
in part, I studied math and computer
science because I like that aspect.
Sure, there are problems that are hard,
but at the end of the day,
you have an answer,
and you know that answer is right.
AI is nothing like this.
AI is built on data,
and data is not truth.
Data is not reality.
And AI and data are far from objective.
Let me give you an example.
What do you think a CEO looks like?
Well, according to Google,
it looks like this.
So according to Google,
a CEO looks like this.
Now, sure, all these people
look like CEOs,
but there are also a lot of people
who do not look like this
who are CEOs.
What you're seeing here
is not reality; it is a stereotype.
A recent study showed
that even though
more than 25% of CEOs are women,
what you see on Google Images
is just 11% women.
And this was true of every profession
that was studied.
The images were a gendered
stereotype of the reality.
So, how is this supposedly
intelligent AI technology
making such basic mistakes?
The problem really lies
along every step of the way,
from the moment we collect data,
to the way we design our algorithms,
to how we analyze and deploy and use them.
Each of these steps
requires human decisions
and is determined by human motivations.
And rarely do we stop ourselves
and ask: Who is making these decisions?
Who is benefiting from them?
And who is being excluded?
This happens all over the Internet.
Online ads, for example, have been
repeatedly shown to discriminate
in housing, lending and employment.
A recent study showed
that ads for high-paying jobs
were five times more likely
to be shown to men than to women,
and ads for housing
effectively redline people.
They show ads for home buying
to audiences that are 75% white,
whereas more diverse audiences
are shown ads for rental homes instead.
For me, this is personal.
I'm a woman, I'm Latina, I'm a mother.
This is not the world that I want,
it's not the world I want for my kids,
and it's certainly no world
that I want to be a part of building.
When I realized that,
I knew I had to do something about it,
and that's what I've been working on
the last several years,
along with my colleagues
and an incredible community of researchers
that has been building this
around the world.
We're defining and designing AI technology
that does not suffer from these problems
of discrimination and bias.
So, think about the CEO example.
That's what we call a selection problem.
We have a whole bunch of data,
all these images,
and we have to choose some of them.
And in the real world,
we have similar problems.
Say I'm an employer
and I have to hire some people.
Well, I have a whole bunch of candidates,
this time with their CVs
and their interviews,
and I have to select a few.
But in the real world,
there are protections.
If, for example,
I have 100 male candidates
and 100 female candidates,
if I go ahead and I hire
10 of those male candidates,
well, then I better, legally,
have a very good reason
to not have hired at least
eight of those women as well.
So can we ask AI
to follow these same rules?
And increasingly,
we show that yes, we can.
It's just a matter of tweaking the system.
We can build AI that is held to the same
standards that we have for people,
that we have for companies.
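[Editor's note: to make the arithmetic behind that hiring example concrete, here is a minimal sketch of the four-fifths (80%) selection-rate check it alludes to. The function name, the 0.8 threshold parameter, and the printed example numbers are illustrative assumptions, not something stated in the talk.]

```python
# A minimal sketch (not from the talk) of the "four-fifths rule" check
# the hiring example alludes to: the selection rate for one group should
# be at least 80% of the selection rate for the other group.

def passes_four_fifths_rule(selected_a, total_a, selected_b, total_b,
                            threshold=0.8):
    """Return True if the lower selection rate is at least
    `threshold` (80%) of the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    low, high = sorted((rate_a, rate_b))
    return high == 0 or low / high >= threshold

# The talk's example: 100 male and 100 female candidates.
print(passes_four_fifths_rule(10, 100, 5, 100))  # False: hiring only 5 women fails
print(passes_four_fifths_rule(10, 100, 8, 100))  # True: hiring at least 8 women passes
```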
Remember our CEOs?
We can go from that
to this.
We can go from the stereotype
to the reality.
In fact, we can go
from the reality we have now,
to the reality that we want
our world to be.
Now, there are technical solutions
for this, for ads, for a myriad
of other AI problems.
But I don't want you
to think that that is enough.
AI is being used right now
in your communities,
in your police departments,
in your government offices.
It is being used to decide
whether or not you get that loan,
to screen you for potential
health problems,
and to decide whether or not
you get that callback on that interview.
AI is touching all of our lives,
and it is largely doing that
in an unchecked and unregulated manner.
To give another example,
facial recognition technology
is being used all across the US,
everywhere from police departments
to shopping malls,
to help identify criminals.
Do any of these faces look familiar?
The ACLU showed that all of these people
were identified by Amazon's off-the-shelf
AI technology as arrested criminals.
I should say falsely identified,
because these are all US congresspeople.
(Laughter)
AI makes mistakes,
and these mistakes affect real people,
from the people who were told
that they did not have cancer
only to find out too late
that that was a mistake;
to people who are imprisoned
for extended periods of time
based on recommendations
by AI technology that is flawed.
These mistakes have human impact.
These mistakes are real.
And time and again,
just as in the previous examples,
we show that these mistakes
exacerbate existing societal biases.
Among the congresspeople,
even though only 20% of Congress
are people of color,
they were more than twice as likely
to be flagged by the system
as being an arrested criminal.
We need to stop allowing
this pseudo-objective AI
to legitimize oppressive systems.
So again, I want to say,
yes, there are technical problems,
and those are hard,
but we're working on those;
we have solutions.
I'm making sure of that.
But having that technical
solution is not enough.
What we need is to move
from those technical solutions
to systems of justice.
We need to be able to hold AI accountable
to the same high standards
that we hold each other.
And increasingly, it is people like you
who are making that happen.
When it comes to governments,
in the past few months alone,
San Francisco, Oakland
and Somerville in Massachusetts
passed laws that prevent the government
from using facial recognition technology.
This came from groundwork,
from people showing up,
going to their town meetings,
writing letters, asking questions,
and not buying the snake oil
of objective AI.
When it comes to companies,
we can't underestimate
the power of collective action.
Due to public pressure,
large companies have
rolled back problematic AI.
From Watson Health, which was
misdiagnosing cancer patients,
to Amazon's hiring tool, which was
discriminating against women,
large companies have been shown
to roll back and stop and pause
when we have public outcry.
Together, we can prevent AI
from holding us back,
or worse, pushing us backwards.
If we're careful with it,
if we hold it accountable,
if we use it judiciously,
we can have AI show us
not just the world we're in,
but the world that we want to be in.
The potential is incredible,
and it's up to all of us
to make sure that happens.
Thank you.
(Applause) (Cheering)