The danger of AI is weirder than you think
-
0:02 - 0:05So, artificial intelligence
-
0:05 - 0:08is known for disrupting
all kinds of industries. -
0:09 - 0:11What about ice cream?
-
0:12 - 0:16What kind of mind-blowing
new flavors could we generate -
0:16 - 0:19with the power of an advanced
artificial intelligence? -
0:19 - 0:23So I teamed up with a group of coders
from Kealing Middle School -
0:23 - 0:25to find out the answer to this question.
-
0:25 - 0:31They collected over 1,600
existing ice cream flavors, -
0:31 - 0:36and together, we fed them to an algorithm
to see what it would generate. -
0:36 - 0:40And here are some of the flavors
that the AI came up with. -
0:40 - 0:42[Pumpkin Trash Break]
-
0:42 - 0:43(Laughter)
-
0:43 - 0:46[Peanut Butter Slime]
-
0:47 - 0:48[Strawberry Cream Disease]
-
0:48 - 0:50(Laughter)
-
0:50 - 0:55These flavors are not delicious,
as we might have hoped they would be. -
0:55 - 0:57So the question is: What happened?
-
0:57 - 0:58What went wrong?
-
0:58 - 1:00Is the AI trying to kill us?
-
1:01 - 1:05Or is it trying to do what we asked,
and there was a problem? -
1:07 - 1:09In movies, when something
goes wrong with AI, -
1:09 - 1:12it's usually because the AI has decided
-
1:12 - 1:14that it doesn't want to obey
the humans anymore, -
1:14 - 1:17and it's got its own goals,
thank you very much. -
1:17 - 1:20In real life, though,
the AI that we actually have -
1:21 - 1:22is not nearly smart enough for that.
-
1:23 - 1:26It has the approximate computing power
-
1:26 - 1:27of an earthworm,
-
1:27 - 1:30or maybe at most a single honeybee,
-
1:31 - 1:33and actually, probably less.
-
1:33 - 1:35Like, we're constantly learning
new things about brains -
1:35 - 1:40that make it clear how much our AIs
don't measure up to real brains. -
1:40 - 1:45So today's AI can do a task
like identify a pedestrian in a picture, -
1:45 - 1:48but it doesn't have a concept
of what the pedestrian is -
1:48 - 1:53beyond that it's a collection
of lines and textures and things. -
1:54 - 1:56It doesn't know what a human actually is.
-
1:57 - 2:00So will today's AI
do what we ask it to do? -
2:00 - 2:02It will if it can,
-
2:02 - 2:04but it might not do what we actually want.
-
2:04 - 2:07So let's say that you
were trying to get an AI -
2:07 - 2:10to take this collection of robot parts
-
2:10 - 2:14and assemble them into some kind of robot
to get from Point A to Point B. -
2:14 - 2:16Now, if you were going to try
and solve this problem -
2:16 - 2:19by writing a traditional-style
computer program, -
2:19 - 2:22you would give the program
step-by-step instructions -
2:22 - 2:23on how to take these parts,
-
2:23 - 2:26how to assemble them
into a robot with legs -
2:26 - 2:29and then how to use those legs
to walk to Point B. -
2:29 - 2:32But when you're using AI
to solve the problem, -
2:32 - 2:33it goes differently.
-
2:33 - 2:35You don't tell it
how to solve the problem, -
2:35 - 2:37you just give it the goal,
-
2:37 - 2:40and it has to figure out for itself
via trial and error -
2:40 - 2:42how to reach that goal.
-
2:42 - 2:46And it turns out that the way AI tends
to solve this particular problem -
2:46 - 2:48is by doing this:
-
2:48 - 2:51it assembles itself into a tower
and then falls over -
2:51 - 2:53and lands at Point B.
-
2:53 - 2:56And technically, this solves the problem.
-
2:56 - 2:58Technically, it got to Point B.
-
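The setup described above, where you hand the AI only a goal and let trial and error do the rest, can be sketched as a toy hill-climbing loop. Everything here is a made-up illustration (the list of numbers standing in for a robot design, the scoring rule, Point B as a target value), not the actual robot experiment:

```python
import random

def trial_and_error(score, candidate, steps=2000, seed=0):
    """Optimize by mutation: keep any change that scores better.
    We never say HOW to solve the task, only what counts as success."""
    rng = random.Random(seed)
    best = candidate
    for _ in range(steps):
        trial = [g + rng.gauss(0, 0.1) for g in best]
        if score(trial) > score(best):
            best = trial
    return best

# Goal: "get to Point B" = make the sum of the design values reach 10.
# Nothing in the score rules out degenerate strategies (the tower
# that just falls over), so the optimizer is free to find them.
point_b = 10.0
score = lambda genes: -abs(sum(genes) - point_b)

solution = trial_and_error(score, [0.0] * 5)
print(round(sum(solution), 2))  # close to 10, by whatever means
```

Nothing in the score says how Point B should be reached, which is exactly why a real optimizer is free to discover the tower-that-falls-over solution.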
2:58 - 3:02The danger of AI is not that
it's going to rebel against us, -
3:02 - 3:06it's that it's going to do
exactly what we ask it to do. -
3:07 - 3:09So then the trick
of working with AI becomes: -
3:09 - 3:13How do we set up the problem
so that it actually does what we want? -
3:15 - 3:18So this little robot here
is being controlled by an AI. -
3:18 - 3:21The AI came up with a design
for the robot legs -
3:21 - 3:25and then figured out how to use them
to get past all these obstacles. -
3:25 - 3:28But when David Ha set up this experiment,
-
3:28 - 3:31he had to set it up
with very, very strict limits -
3:31 - 3:34on how big the AI
was allowed to make the legs, -
3:34 - 3:36because otherwise ...
-
3:43 - 3:47(Laughter)
-
3:49 - 3:52And technically, it got
to the end of that obstacle course. -
3:52 - 3:57So you see how hard it is to get AI
to do something as simple as just walk. -
3:57 - 4:01So seeing the AI do this,
you may say, OK, no fair, -
4:01 - 4:04you can't just be
a tall tower and fall over, -
4:04 - 4:07you have to actually, like,
use legs to walk. -
4:07 - 4:10And it turns out,
that doesn't always work, either. -
4:10 - 4:13This AI's job was to move fast.
-
4:13 - 4:17They didn't tell it that it had
to run facing forward -
4:17 - 4:19or that it couldn't use its arms.
-
4:19 - 4:24So this is what you get
when you train AI to move fast, -
4:24 - 4:28you get things like somersaulting
and silly walks. -
4:28 - 4:29It's really common.
-
4:30 - 4:33So is twitching along the floor in a heap.
-
4:33 - 4:34(Laughter)
-
4:35 - 4:38So in my opinion, you know what
should have been a whole lot weirder -
4:39 - 4:40is the "Terminator" robots.
-
4:40 - 4:44Hacking "The Matrix" is another thing
that AI will do if you give it a chance. -
4:44 - 4:47So if you train an AI in a simulation,
-
4:47 - 4:51it will learn how to do things like
hack into the simulation's math errors -
4:51 - 4:53and harvest them for energy.
-
4:53 - 4:58Or it will figure out how to move faster
by glitching repeatedly into the floor. -
4:58 - 5:00When you're working with AI,
-
5:00 - 5:02it's less like working with another human
-
5:02 - 5:06and a lot more like working
with some kind of weird force of nature. -
5:07 - 5:11And it's really easy to accidentally
give AI the wrong problem to solve, -
5:11 - 5:16and often we don't realize that
until something has actually gone wrong. -
5:16 - 5:18So here's an experiment I did,
-
5:18 - 5:22where I wanted the AI
to copy paint colors, -
5:22 - 5:23to invent new paint colors,
-
given a list like the one
here on the left. -
5:27 - 5:30And here's what the AI
actually came up with. -
5:30 - 5:33[Sindis Poop, Turdly, Suffer, Gray Pubic]
-
5:33 - 5:37(Laughter)
-
5:39 - 5:41So technically,
-
5:41 - 5:43it did what I asked it to.
-
5:43 - 5:46I thought I was asking it for,
like, nice paint color names, -
5:46 - 5:49but what I was actually asking it to do
-
5:49 - 5:52was just imitate the kinds
of letter combinations -
5:52 - 5:54that it had seen in the original.
-
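That "imitating letter combinations" behavior can be sketched with a tiny character-level Markov chain. The training names below are stand-ins I made up, not the actual paint-color data from the experiment:

```python
import random
from collections import defaultdict

# Hypothetical training list, standing in for the real paint-color data.
names = ["Misty Rose", "Dusty Teal", "Rusty Red", "Dusky Blue", "Misty Blue"]

# Build bigram statistics: which character tends to follow which.
follows = defaultdict(list)
for name in names:
    padded = "^" + name + "$"           # ^ = start marker, $ = end marker
    for a, b in zip(padded, padded[1:]):
        follows[a].append(b)

def generate(rng):
    """Sample letter by letter. The model knows only which letter
    combinations occurred in training, nothing about meaning."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(follows[ch])
        if ch == "$" or len(out) > 20:
            break
        out.append(ch)
    return "".join(out)

rng = random.Random(1)
print([generate(rng) for _ in range(3)])
```

The model's entire world is the character statistics of its training list, so it will happily produce letter sequences that look name-shaped while knowing nothing about which words are unappetizing.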
5:54 - 5:57And I didn't tell it anything
about what words mean, -
5:57 - 5:59or that there are maybe some words
-
5:59 - 6:02that it should avoid using
in these paint colors. -
6:03 - 6:07So its entire world
is the data that I gave it. -
6:07 - 6:11Like with the ice cream flavors,
it doesn't know about anything else. -
6:12 - 6:14So it is through the data
-
6:14 - 6:18that we often accidentally tell AI
to do the wrong thing. -
6:19 - 6:22This is a fish called a tench.
-
6:22 - 6:24And there was a group of researchers
-
6:24 - 6:27who trained an AI to identify
this tench in pictures. -
6:27 - 6:29But then when they asked it
-
6:29 - 6:32what part of the picture it was actually
using to identify the fish, -
6:32 - 6:34here's what it highlighted.
-
6:35 - 6:37Yes, those are human fingers.
-
6:37 - 6:39Why would it be looking for human fingers
-
6:39 - 6:41if it's trying to identify a fish?
-
6:42 - 6:45Well, it turns out that the tench
is a trophy fish, -
6:45 - 6:49and so in a lot of pictures
that the AI had seen of this fish -
6:49 - 6:50during training,
-
6:50 - 6:52the fish looked like this.
-
6:52 - 6:53(Laughter)
-
6:53 - 6:57And it didn't know that the fingers
aren't part of the fish. -
6:59 - 7:03So you see why it is so hard
to design an AI -
7:03 - 7:06that actually can understand
what it's looking at. -
7:06 - 7:09And this is why designing
the image recognition -
7:09 - 7:11in self-driving cars is so hard,
-
7:11 - 7:13and why so many self-driving car failures
-
7:14 - 7:16are because the AI got confused.
-
7:16 - 7:20I want to talk about an example from 2016.
-
7:20 - 7:25There was a fatal accident when somebody
was using Tesla's autopilot AI, -
7:25 - 7:28but instead of using it on the highway
like it was designed for, -
7:28 - 7:31they used it on city streets.
-
7:31 - 7:32And what happened was,
-
7:32 - 7:36a truck drove out in front of the car
and the car failed to brake. -
7:37 - 7:41Now, the AI definitely was trained
to recognize trucks in pictures. -
7:41 - 7:43But what it looks like happened is
-
7:43 - 7:46the AI was trained to recognize
trucks on highway driving, -
7:46 - 7:49where you would expect
to see trucks from behind. -
7:49 - 7:53Trucks on the side is not supposed
to happen on a highway, -
7:53 - 7:56and so when the AI saw this truck,
-
7:56 - 8:01it looks like the AI recognized it
as most likely to be a road sign -
8:01 - 8:03and therefore, safe to drive underneath.
-
8:04 - 8:07Here's an AI misstep
from a different field. -
8:07 - 8:10Amazon recently had to give up
on a résumé-sorting algorithm -
8:10 - 8:11that they were working on
-
8:11 - 8:15when they discovered that the algorithm
had learned to discriminate against women. -
8:15 - 8:18What happened is they had trained it
on example résumés -
8:18 - 8:20of people who they had hired in the past.
-
8:20 - 8:24And from these examples, the AI learned
to avoid the résumés of people -
8:24 - 8:26who had gone to women's colleges
-
8:26 - 8:29or who had the word "women"
somewhere in their résumé, -
8:29 - 8:34as in, "women's soccer team"
or "Society of Women Engineers." -
8:34 - 8:38The AI didn't know that it wasn't supposed
to copy this particular thing -
8:38 - 8:40that it had seen the humans do.
-
8:40 - 8:43And technically, it did
what they asked it to do. -
8:43 - 8:46They just accidentally asked it
to do the wrong thing. -
8:47 - 8:50And this happens all the time with AI.
-
8:50 - 8:54AI can be really destructive
and not know it. -
8:54 - 8:59So the AIs that recommend
new content on Facebook and YouTube, -
8:59 - 9:02they're optimized to increase
the number of clicks and views. -
9:02 - 9:06And unfortunately, one way
that they have found of doing this -
9:06 - 9:10is to recommend the content
of conspiracy theories or bigotry. -
9:11 - 9:16The AIs themselves don't have any concept
of what this content actually is, -
9:16 - 9:20and they don't have any concept
of what the consequences might be -
9:20 - 9:22of recommending this content.
-
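The click-optimization loop described here can be sketched as a toy epsilon-greedy bandit. The content labels and click rates below are invented; the point is that the optimizer sees only click counts:

```python
import random

# Hypothetical click-through rates for two kinds of content.
# The optimizer sees only numbers: it has no concept of what
# "conspiracy" content is, or of the consequences of recommending it.
click_rate = {"cooking": 0.05, "conspiracy": 0.12}

def run_bandit(steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy: mostly recommend whatever has clicked best so far."""
    rng = random.Random(seed)
    clicks = {k: 0 for k in click_rate}
    shows = {k: 1 for k in click_rate}   # start at 1 to avoid dividing by 0
    for _ in range(steps):
        if rng.random() < eps:           # occasionally explore at random
            item = rng.choice(list(click_rate))
        else:                            # otherwise exploit the best so far
            item = max(click_rate, key=lambda k: clicks[k] / shows[k])
        shows[item] += 1
        if rng.random() < click_rate[item]:
            clicks[item] += 1
    return shows

shows = run_bandit()
print(max(shows, key=shows.get))
```

Because the loop is rewarded only for clicks, it converges on whichever content clicks best, with no term anywhere in it for what that content does to the people watching.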
9:22 - 9:24So, when we're working with AI,
-
9:24 - 9:29it's up to us to avoid problems.
-
9:29 - 9:31And avoiding things going wrong,
-
9:31 - 9:35that may come down to
the age-old problem of communication, -
9:35 - 9:39where we as humans have to learn
how to communicate with AI. -
9:39 - 9:43We have to learn what AI
is capable of doing and what it's not, -
9:43 - 9:46and to understand that,
with its tiny little worm brain, -
9:46 - 9:50AI doesn't really understand
what we're trying to ask it to do. -
9:51 - 9:54So in other words, we have
to be prepared to work with AI -
9:54 - 10:00that's not the super-competent,
all-knowing AI of science fiction. -
10:00 - 10:03We have to be prepared to work with an AI
-
10:03 - 10:06that's the one that we actually have
in the present day. -
10:06 - 10:10And present-day AI is plenty weird enough.
-
10:10 - 10:11Thank you.
-
10:11 - 10:16(Applause)
-
Title: The danger of AI is weirder than you think
Speaker: Janelle Shane
Description: The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems -- like creating new ice cream flavors or recognizing cars on the road -- Shane shows why AI doesn't yet measure up to real brains.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 10:28