The real reason to be afraid of Artificial Intelligence | Peter Haas | TEDxDirigo
-
0:13 - 0:17The rise of the machines!
-
0:18 - 0:23Who here is scared of killer robots?
-
0:23 - 0:25(Laughter)
-
0:26 - 0:27I am!
-
0:28 - 0:32I used to work in UAVs -
Unmanned Aerial Vehicles - -
0:32 - 0:37and all I could think seeing
these things is that someday, -
0:37 - 0:41somebody is going to strap
a machine-gun to these things, -
0:41 - 0:44and they're going
to hunt me down in swarms. -
0:45 - 0:50I work in robotics at Brown University
and I'm scared of robots. -
0:51 - 0:53Actually, I'm kind of terrified,
-
0:54 - 0:56but, can you blame me?
-
0:56 - 1:00Ever since I was a kid,
all I've seen are movies -
1:00 - 1:03that portrayed the ascendance
of Artificial Intelligence -
1:03 - 1:06and our inevitable conflict with it -
-
1:06 - 1:112001: A Space Odyssey,
The Terminator, The Matrix - -
1:12 - 1:16and the stories they tell are pretty scary:
-
1:16 - 1:21rogue bands of humans running away
from super intelligent machines. -
1:22 - 1:27That scares me. From the show of hands,
it seems like it scares you as well. -
1:27 - 1:30I know it is scary to Elon Musk.
-
1:31 - 1:35But, you know, we have a little bit
of time before the robots rise up. -
1:35 - 1:39Robots like the PR2
that I have at my initiative, -
1:39 - 1:41they can't even open the door yet.
-
1:42 - 1:47So in my mind, this discussion
of super intelligent robots -
1:47 - 1:52is a little bit of a distraction
from something far more insidious -
1:52 - 1:56that is going on with AI systems
across the country. -
1:57 - 2:00You see, right now,
there are people - -
2:00 - 2:04doctors, judges, accountants -
-
2:04 - 2:08who are getting information
from an AI system -
2:08 - 2:13and treating it as if it is information
from a trusted colleague. -
2:14 - 2:17It's this trust that bothers me,
-
2:17 - 2:20not because of how often
AI gets it wrong. -
2:20 - 2:24AI researchers pride themselves
on the accuracy of their results. -
2:25 - 2:28It's how badly it gets it wrong
when it makes a mistake -
2:28 - 2:30that has me worried.
-
2:30 - 2:34These systems do not fail gracefully.
-
2:34 - 2:37So, let's take a look
at what this looks like. -
2:37 - 2:43This is a dog that has been misidentified
as a wolf by an AI algorithm. -
2:43 - 2:45The researchers wanted to know:
-
2:45 - 2:50why did this particular husky
get misidentified as a wolf? -
2:50 - 2:53So they rewrote the algorithm
to explain to them -
2:53 - 2:56the parts of the picture
it was paying attention to -
2:56 - 2:59when the AI algorithm made its decision.
-
2:59 - 3:03In this picture, what do you
think it paid attention to? -
3:03 - 3:05What would you pay attention to?
-
3:05 - 3:10Maybe the eyes,
maybe the ears, the snout ... -
3:13 - 3:17This is what it paid attention to:
-
3:17 - 3:20mostly the snow
and the background of the picture. -
3:21 - 3:26You see, there was bias in the data set
that was fed to this algorithm. -
3:26 - 3:30Most of the pictures of wolves were in snow,
-
3:31 - 3:35so the AI algorithm mistook
the presence or absence of snow -
3:35 - 3:38for the presence or absence of a wolf.
-
3:40 - 3:42The scary thing about this
-
3:42 - 3:46is the researchers had
no idea this was happening -
3:46 - 3:50until they rewrote
the algorithm to explain itself. -
3:51 - 3:55And that's the thing with AI algorithms,
deep learning, machine learning. -
3:55 - 3:59Even the developers who work on this stuff
-
3:59 - 4:02have no idea what it's doing.
-
4:03 - 4:08So, that might be
a great example for research, -
4:08 - 4:10but what does this mean in the real world?
-
4:11 - 4:16The Compas criminal sentencing
algorithm is used in 13 states -
4:16 - 4:18to determine criminal recidivism
-
4:18 - 4:22or the risk of committing
a crime again after you're released. -
4:23 - 4:27ProPublica found
that if you're African-American, -
4:27 - 4:32Compas was 77% more likely to qualify
you as a potentially violent offender -
4:32 - 4:34than if you're Caucasian.
-
4:35 - 4:39This is a real system being used
in the real world by real judges -
4:39 - 4:42to make decisions about real people's lives.
-
4:44 - 4:49Why would the judges trust it
if it seems to exhibit bias? -
4:50 - 4:55Well, the reason they use Compas
is because it is a model for efficiency. -
4:56 - 5:00Compas lets them go
through caseloads much faster -
5:00 - 5:03in a backlogged criminal justice system.
-
5:05 - 5:07Why would they question
their own software? -
5:07 - 5:11It's been requisitioned by the State,
approved by their IT Department. -
5:11 - 5:13Why would they question it?
-
5:13 - 5:17Well, the people sentenced
by Compas have questioned it, -
5:17 - 5:19and their lawsuits should chill us all.
-
5:19 - 5:22The Wisconsin Supreme Court ruled
-
5:22 - 5:26that Compas did not deny
a defendant due process -
5:26 - 5:28provided it was used "properly."
-
5:29 - 5:31In the same set of rulings, they ruled
-
5:31 - 5:35that the defendant could not inspect
the source code of Compas. -
5:36 - 5:40It has to be used properly
but you can't inspect the source code? -
5:40 - 5:43This is a disturbing set of rulings
when taken together -
5:43 - 5:46for anyone facing criminal sentencing.
-
5:47 - 5:51You may not care about this because
you're not facing criminal sentencing, -
5:51 - 5:55but what if I told you
that black box AI algorithms like this -
5:55 - 5:59are being used to decide whether or not
you can get a loan for your house, -
6:00 - 6:03whether you get a job interview,
-
6:03 - 6:06whether you get Medicaid,
-
6:06 - 6:10and are even driving cars
and trucks down the highway. -
6:11 - 6:15Would you want the public
to be able to inspect the algorithm -
6:15 - 6:17that's trying to make a decision
between a shopping cart -
6:17 - 6:21and a baby carriage
in a self-driving truck, -
6:21 - 6:24in the same way the dog/wolf
algorithm was trying to decide -
6:24 - 6:26between a dog and a wolf?
-
6:26 - 6:31Are you potentially a metaphorical dog
who's been misidentified as a wolf -
6:31 - 6:34by somebody's AI algorithm?
-
6:35 - 6:39Considering the complexity
of people, it's possible. -
6:39 - 6:42Is there anything
you can do about it now? -
6:42 - 6:47Probably not, and that's
what we need to focus on. -
6:47 - 6:51We need to demand
standards of accountability, -
6:51 - 6:55transparency and recourse in AI systems.
-
6:56 - 7:01ISO, the International Organization
for Standardization, just formed a committee -
7:01 - 7:05to make decisions about
what to do for AI standards. -
7:05 - 7:09They're about five years out
from coming up with a standard. -
7:09 - 7:12These systems are being used now,
-
7:14 - 7:19not just in loans, but they're being
used in vehicles, like I was saying. -
7:21 - 7:25They're being used in things like
Cooperative Adaptive Cruise Control. -
7:25 - 7:28It's funny that they call that "cruise control"
-
7:28 - 7:33because the type of controller used
in cruise control, a PID controller, -
7:33 - 7:38was used for 30 years in chemical plants
before it ever made it into a car. -
7:39 - 7:41The type of controller that's used
-
7:41 - 7:45to drive a self-driving car,
machine learning, -
7:45 - 7:49has only been used
in research since 2007. -
7:50 - 7:52These are new technologies.
-
7:52 - 7:56We need to demand the standards
and we need to demand regulation -
7:56 - 8:00so that we don't get snake oil
in the marketplace. -
8:01 - 8:05And we also have to have
a little bit of skepticism. -
8:06 - 8:08The experiments in authority
-
8:08 - 8:11done by Stanley Milgram
after World War II -
8:11 - 8:16showed that your average person
would follow an authority figure's orders -
8:16 - 8:20even if it meant harming their fellow citizen.
-
8:20 - 8:23In this experiment,
-
8:23 - 8:27everyday Americans would shock an actor
-
8:28 - 8:31past the point of him
complaining about heart trouble, -
8:32 - 8:35past the point of him screaming in pain,
-
8:36 - 8:41past the point of him
going silent in simulated death, -
8:42 - 8:44all because somebody
-
8:44 - 8:48with no credentials, in a lab coat,
-
8:48 - 8:51was saying some variation of the phrase
-
8:51 - 8:54"The experiment must continue."
-
8:57 - 9:02In AI, we have Milgram's
ultimate authority figure. -
9:04 - 9:08We have a dispassionate
system that can't reflect, -
9:09 - 9:13that can't make another decision,
-
9:13 - 9:15that there is no recourse to,
-
9:15 - 9:20that will always say "The system
or "The process must continue." -
9:23 - 9:26Now, I'm going to tell you a little story.
-
9:26 - 9:30It's about a car trip I took
driving across country. -
9:31 - 9:35I was coming into Salt Lake City
and it started raining. -
9:35 - 9:40As I climbed into the mountains,
that rain turned into snow, -
9:40 - 9:43and pretty soon that snow was a whiteout.
-
9:43 - 9:46I couldn't see the taillights
of the car in front of me. -
9:46 - 9:48I started skidding.
-
9:48 - 9:51I went 360 one way,
I went 360 the other way. -
9:51 - 9:53I went off the highway.
-
9:53 - 9:55Mud coated my windows,
I couldn't see a thing. -
9:55 - 9:59I was terrified some car was going
to come crashing into me. -
10:00 - 10:04Now, I'm telling you this story
to get you thinking -
10:04 - 10:07about how something small
and seemingly mundane -
10:07 - 10:10like a little bit of precipitation,
-
10:10 - 10:15can easily grow
into something very dangerous. -
10:15 - 10:20We are driving in the rain
with AI right now, -
10:20 - 10:23and that rain will turn to snow,
-
10:24 - 10:27and that snow could become a blizzard.
-
10:28 - 10:30We need to pause,
-
10:31 - 10:33check the conditions,
-
10:33 - 10:36put in place safety standards,
-
10:36 - 10:41and ask ourselves
how far do we want to go, -
10:42 - 10:46because the economic incentives
for AI and automation -
10:46 - 10:48to replace human labor
-
10:48 - 10:53will be beyond anything we have seen
since the Industrial Revolution. -
10:54 - 10:58Human salary demands can't compete
-
10:58 - 11:02with the base cost of electricity.
-
11:02 - 11:08AIs and robots will replace
fry cooks in fast-food joints -
11:08 - 11:10and radiologists in hospitals.
-
11:11 - 11:14Someday, the AI will diagnose your cancer,
-
11:14 - 11:17and a robot will perform the surgery.
-
11:18 - 11:22Only a healthy skepticism of these systems
-
11:22 - 11:26is going to help keep people in the loop.
-
11:26 - 11:31And I'm confident, if we can
keep people in the loop, -
11:31 - 11:36if we can build transparent
AI systems like the dog/wolf example -
11:36 - 11:40where the AI explained
what it was doing to people, -
11:40 - 11:43and people were able to spot-check it,
-
11:43 - 11:47we can create new jobs
for people partnering with AI. -
11:49 - 11:51If we work together with AI,
-
11:51 - 11:56we will probably be able to solve
some of our greatest challenges. -
11:57 - 12:01But to do that, we need
to lead and not follow. -
12:02 - 12:05We need to choose to be less like robots,
-
12:05 - 12:10and we need to build the robots
to be more like people, -
12:11 - 12:13because ultimately,
-
12:13 - 12:18the only thing we need to fear
is not killer robots, -
12:19 - 12:22it's our own intellectual laziness.
-
12:22 - 12:27The only thing we need
to fear is ourselves. -
12:27 - 12:28Thank you.
-
12:28 - 12:30(Applause)
- Title:
- The real reason to be afraid of Artificial Intelligence | Peter Haas | TEDxDirigo
- Description:
-
A robotics researcher, Peter Haas, invites us into his world in order to understand where the threats of robots and artificial intelligence lie. Before we get to sci-fi robot death machines, the internet of things, or even transhumanism, there's something right in front of us we need to confront. And that is ourselves.
Peter is the Associate Director of the Brown University Humanity Centered Robotics Initiative. He was the Co-Founder and COO of XactSense, a UAV manufacturer working on LIDAR mapping and autonomous navigation. Prior to XactSense, Peter founded AIDG, a small hardware enterprise accelerator in emerging markets. Peter received both TED and Echoing Green fellowships. He has been a speaker at TED Global, The World Bank, Harvard University and other venues. He holds a Philosophy B.A. from Yale.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
- Video Language:
- English
- Team:
- closed TED
- Project:
- TEDxTalks
- Duration:
- 12:38