
The danger of AI is weirder than you think

  • 0:02 - 0:05
    So artificial intelligence
  • 0:05 - 0:08
    is known for disrupting
    all kinds of industries.
  • 0:09 - 0:12
    What about ice cream?
  • 0:12 - 0:15
    What kind of mind-blowing new flavors
  • 0:15 - 0:19
    could we generate with the power
    of an advanced artificial intelligence?
  • 0:19 - 0:23
    So I teamed up with a group
    of coders from Kealing Middle School
  • 0:23 - 0:26
    to find out the answer to this question,
  • 0:26 - 0:31
    and they collected over 1,600
    existing ice cream flavors,
  • 0:31 - 0:36
    and together we fed them to an algorithm
    to see what it would generate.
  • 0:36 - 0:40
    And here are some of the flavors
    that the AI came up with.
  • 0:40 - 0:42
    [Pumpkin Trash Break]
  • 0:42 - 0:44
    (Laughter)
  • 0:44 - 0:46
    [Peanut Butter Slime]
  • 0:46 - 0:51
    [Strawberry Cream Disease]
    (Laughter)
  • 0:51 - 0:55
    These flavors are not delicious,
    as we might have hoped they would be,
  • 0:55 - 0:57
    so the question is, what happened?
  • 0:57 - 0:58
    What went wrong?
  • 0:58 - 1:01
    Is the AI trying to kill us?
  • 1:01 - 1:04
    Or is it trying to do what we asked,
    and there was a problem?
  • 1:04 - 1:09
    So in movies, when something
    goes wrong with AI,
  • 1:09 - 1:12
    it's usually because the AI has decided
  • 1:12 - 1:14
    that it doesn't want to obey
    the humans anymore
  • 1:14 - 1:17
    and it's got its own goals,
    thank you very much.
  • 1:17 - 1:21
    So in real life, though,
    the AI that we actually have
  • 1:21 - 1:23
    is not nearly smart enough for that.
  • 1:23 - 1:25
    It has the approximate computing power
  • 1:25 - 1:28
    of an earthworm,
  • 1:28 - 1:31
    or maybe at most a single honeybee,
  • 1:31 - 1:33
    and actually probably maybe less.
  • 1:33 - 1:36
    Like, we're constantly learning
    new things about brains
  • 1:36 - 1:40
    that make it clear how much our AIs
    don't measure up to real brains.
  • 1:40 - 1:45
    So today's AI can do a task
    like identify a pedestrian in a picture,
  • 1:45 - 1:49
    but it doesn't have a concept
    of what the pedestrian is
  • 1:49 - 1:54
    beyond that it's a collection
    of lines and textures and things.
  • 1:54 - 1:57
    It doesn't know what a human actually is.
  • 1:57 - 2:01
    So will today's AI
    do what we ask it to do?
  • 2:01 - 2:02
    It will if it can,
  • 2:02 - 2:05
    but it might not do what we actually want.
  • 2:05 - 2:07
    So let's say that you
    were trying to get an AI
  • 2:07 - 2:10
    to take this collection of robot parts
  • 2:10 - 2:14
    and assemble them into some kind of robot
    to get from Point A to Point B.
  • 2:14 - 2:16
    Now, if you were going to try
    and solve this problem
  • 2:16 - 2:19
    by writing a traditional-style
    computer program,
  • 2:19 - 2:23
    you would give the program
    step-by-step instructions
  • 2:23 - 2:24
    on how to take these parts,
  • 2:24 - 2:27
    how to assemble them
    into a robot with legs,
  • 2:27 - 2:30
    and then how to use those legs
    to walk to Point B.
  • 2:30 - 2:32
    But when you're using AI
    to solve the problem,
  • 2:32 - 2:33
    it goes differently.
  • 2:33 - 2:36
    You don't tell it
    how to solve the problem,
  • 2:36 - 2:38
    you just give it the goal
  • 2:38 - 2:39
    and it has to figure out for itself
  • 2:39 - 2:42
    via trial and error
    how to reach that goal.
  • 2:42 - 2:47
    And it turns out that the way that AI
    tends to solve this particular problem
  • 2:47 - 2:48
    is by doing this:
  • 2:48 - 2:50
    it assembles itself into a tower
    and then falls over
  • 2:50 - 2:53
    and lands at Point B
  • 2:53 - 2:56
    and technically this solves the problem.
  • 2:56 - 2:58
    Technically it got to Point B.
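
As a rough illustration of the difference, here is a minimal Python sketch of goal-only, trial-and-error optimization. Everything in it is hypothetical (the toy physics, the names `propose_design` and `distance_reached`); it is not David Ha's actual setup, but it shows how an optimizer given nothing except "get to Point B" happily settles on the tall-tower trick.

```python
import random

POINT_B = 10.0  # hypothetical goal: distance the robot must cover

def propose_design():
    # Randomly pick "robot" parameters: a body height and how far it leans.
    return {"height": random.uniform(0.1, 10.0),
            "lean": random.uniform(0.0, 1.0)}

def distance_reached(design):
    # Toy physics: a body that leans over covers roughly height * lean,
    # so a maximally tall, fully leaning tower scores best.
    return design["height"] * design["lean"]

def error(design):
    return abs(POINT_B - distance_reached(design))

# Trial and error: no instructions on HOW to move, only a score to improve.
best = min((propose_design() for _ in range(10_000)), key=error)
print(best)  # tends toward height near 10, lean near 1: a tower that falls over
```

Nothing in the scoring function says "walk," so nothing stops the search from picking the fall-over solution; that is the whole point of the example.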
  • 2:58 - 3:02
    The danger of AI is not that
    it's going to rebel against us,
  • 3:02 - 3:07
    it's that it's going to do
    exactly what we ask it to do.
  • 3:07 - 3:10
    So then the trick
    of working with AI becomes,
  • 3:10 - 3:14
    how do we set up the problem
    so that it actually does what we want?
  • 3:14 - 3:18
    So this little robot here
    is being controlled by an AI.
  • 3:18 - 3:21
    The AI came up with a design
    for the robot legs
  • 3:21 - 3:25
    and then figured out how to use them
    to get past all these obstacles.
  • 3:25 - 3:28
    But when David Ha set up this experiment,
  • 3:28 - 3:31
    he had to set it up
    with very, very strict limits
  • 3:31 - 3:34
    on how big the AI
    was allowed to make the legs,
  • 3:34 - 3:40
    because otherwise...
  • 3:44 - 3:47
    (Laughter)
  • 3:49 - 3:52
    And technically it got
    to the end of that obstacle course.
  • 3:52 - 3:57
    So you see how hard it is to get AI
    to do something as simple as just walk.
  • 3:57 - 4:01
    So seeing the AI do this,
    you may say, no fair,
  • 4:01 - 4:04
    you can't just be
    a tall tower and fall over,
  • 4:04 - 4:08
    you have to actually, like,
    use legs to walk.
  • 4:08 - 4:10
    And it turns out
    that doesn't always work either.
  • 4:10 - 4:13
    So this AI's job was to move fast.
  • 4:13 - 4:17
    They didn't tell it that it had
    to run facing forward
  • 4:17 - 4:19
    or that it couldn't use its arms.
  • 4:19 - 4:24
    So what you get
    when you train an AI to move fast
  • 4:24 - 4:29
    is things like somersaulting
    and silly walks. It's really common.
  • 4:30 - 4:34
    So is twitching along the floor in a heap.
  • 4:34 - 4:36
    (Laughter)
  • 4:36 - 4:39
    So in my opinion, you know what
    should have been a whole lot weirder
  • 4:39 - 4:41
    is the Terminator robots.
  • 4:41 - 4:44
    Hacking the Matrix is another thing
    that AI will do if you give it a chance.
  • 4:44 - 4:47
    So if you train an AI in a simulation,
  • 4:47 - 4:51
    it will learn how to do things like
    hack into the simulation's math errors
  • 4:51 - 4:53
    and harvest them for energy,
  • 4:53 - 4:58
    or it will figure out how to move faster
    by glitching repeatedly into the floor.
  • 4:59 - 5:00
    When you're working with AI,
  • 5:00 - 5:02
    it's less like working with another human
  • 5:02 - 5:06
    and a lot more like working
    with some kind of weird force of nature.
  • 5:06 - 5:11
    And it's really easy to accidentally
    give AI the wrong problem to solve,
  • 5:11 - 5:17
    and often we don't realize that
    until something has actually gone wrong.
  • 5:17 - 5:19
    So here's an experiment I did
  • 5:19 - 5:23
    where I wanted the AI
    to copy paint colors,
  • 5:23 - 5:27
    to invent new paint colors
  • 5:27 - 5:29
    given a list like the ones
    here on the left.
  • 5:29 - 5:32
    And here's what the AI
    actually came up with.
  • 5:32 - 5:35
    (Laughter)
  • 5:39 - 5:41
    So technically,
  • 5:41 - 5:43
    it did what I asked it to.
  • 5:43 - 5:47
    So I thought I was asking it for,
    like, nice paint color names,
  • 5:47 - 5:49
    but what I was actually asking it to do
  • 5:49 - 5:52
    was just imitate the kinds
    of letter combinations
  • 5:52 - 5:53
    that it had seen in the original.
  • 5:53 - 5:57
    And I didn't tell it anything
    about what words mean
  • 5:57 - 5:59
    or that there are maybe some words
  • 5:59 - 6:02
    that it should avoid using
    in these paint colors.
  • 6:03 - 6:07
    So its entire world
    is the data that I gave it.
  • 6:07 - 6:11
    Like with the ice cream flavors,
    it doesn't know about anything else.
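
The actual experiment used a neural network, but the "imitating letter combinations" idea can be sketched with an even simpler stand-in: a character-level Markov chain. The training list below is invented for illustration; the point is that the model only ever sees which letters follow which, never what any word means.

```python
import random
from collections import defaultdict

# A few invented paint names standing in for the training list.
NAMES = ["Sand Dollar", "Sea Salt", "Misty Rose", "Dusty Blue",
         "Warm Taupe", "Soft Fern", "Pale Oak", "Quiet Storm"]

# Which character tends to follow each pair of characters?
model = defaultdict(list)
for name in NAMES:
    padded = "^^" + name.lower() + "$"
    for i in range(len(padded) - 2):
        model[padded[i:i + 2]].append(padded[i + 2])

def generate():
    # Sample letter by letter; the model knows combinations, not meanings.
    out, context = "", "^^"
    while True:
        nxt = random.choice(model[context])
        if nxt == "$":
            return out.title()
        out += nxt
        context = context[1] + nxt

print([generate() for _ in range(5)])
```

A model like this will cheerfully produce any plausible-looking letter string, appetizing or not, because "plausible-looking letters" is literally all it was asked for.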
  • 6:13 - 6:15
    So it is through the data
  • 6:15 - 6:19
    that we often accidentally tell AI
    to do the wrong thing.
  • 6:19 - 6:22
    So this is a fish called a tench,
  • 6:22 - 6:24
    and there was a group of researchers
  • 6:24 - 6:28
    who trained an AI to identify
    this tench in pictures.
  • 6:28 - 6:31
    But then when they asked it what part
    of the picture that it was actually using
  • 6:31 - 6:32
    to identify the fish,
  • 6:32 - 6:34
    here's what it highlighted.
  • 6:35 - 6:38
    Yes, those are human fingers.
  • 6:38 - 6:40
    Why would it be looking for human fingers
  • 6:40 - 6:42
    if it's trying to identify a fish?
  • 6:42 - 6:45
    Well, it turns out that the tench
    is a trophy fish,
  • 6:45 - 6:49
    and so in a lot of pictures
    that the AI had seen
  • 6:49 - 6:52
    of this fish during training,
    the fish looked like this.
  • 6:53 - 6:58
    And it didn't know that the fingers
    aren't part of the fish.
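
A toy version of what probably happened, with made-up features: in this pretend training set, every tench photo also contains fingers, so a bare-bones perceptron puts all of its weight on the fingers and none on the fish.

```python
# Hypothetical training photos reduced to two binary features:
# (has_fish_shape, has_fingers) -> is_tench
TRAIN = [((1, 1), 1), ((1, 1), 1), ((1, 1), 1),  # trophy shots of tench
         ((1, 0), 0), ((1, 0), 0), ((1, 0), 0)]  # other fish, no hands

w, b = [0.0, 0.0], 0.0
for _ in range(20):  # plain perceptron updates
    for (x1, x2), y in TRAIN:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        w[0] += (y - pred) * x1
        w[1] += (y - pred) * x2
        b += y - pred

print(w, b)  # the useful weight lands on has_fingers, not has_fish_shape
# A photo of fingers with no fish at all still classifies as "tench":
print("fingers only ->", 1 if w[1] * 1 + b > 0 else 0)
```

Because has_fish_shape is true in every training photo, it carries no signal at all; the fingers carry all of it, which is exactly the shortcut the researchers uncovered.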
  • 6:59 - 7:03
    So you see why it is so hard
    to design an AI
  • 7:03 - 7:07
    that actually can understand
    what it's looking at.
  • 7:07 - 7:09
    And this is why designing
    the image recognition
  • 7:09 - 7:12
    in self-driving cars is so hard,
  • 7:12 - 7:14
    and why so many self-driving car failures
  • 7:14 - 7:16
    are because the AI got confused.
  • 7:17 - 7:20
    I want to talk about an example from 2016.
  • 7:20 - 7:25
    There was a fatal accident when somebody
    was using Tesla's autopilot AI,
  • 7:25 - 7:28
    but instead of using it on the highway
    like it was designed for,
  • 7:28 - 7:31
    they used it on city streets,
  • 7:31 - 7:35
    and what happened was a truck
    drove out in front of the car
  • 7:35 - 7:37
    and the car failed to brake.
  • 7:37 - 7:42
    Now, the AI definitely was trained
    to recognize trucks in pictures,
  • 7:42 - 7:44
    but what it looks like happened
  • 7:44 - 7:47
    is the AI was trained to recognize
    trucks on highway driving
  • 7:47 - 7:50
    where you would expect
    to see trucks from behind.
  • 7:50 - 7:53
    Trucks seen from the side are not
    supposed to happen on a highway,
  • 7:53 - 7:56
    and so when the AI saw this truck,
  • 7:56 - 8:02
    it looks like the AI recognized it
    as most likely to be a road sign
  • 8:02 - 8:05
    and therefore safe to drive underneath.
  • 8:05 - 8:07
    Here's an AI misstep
    from a different field.
  • 8:07 - 8:10
    Amazon recently had to give up
    on a resume-sorting algorithm
  • 8:10 - 8:12
    that they were working on
  • 8:12 - 8:15
    when they discovered that the algorithm
    had learned to discriminate against women.
  • 8:15 - 8:17
    What happened is they had trained it
  • 8:17 - 8:20
    on example resumes of people
    who they had hired in the past,
  • 8:20 - 8:24
    and from these examples, the AI learned
    to avoid the resumes of people
  • 8:24 - 8:26
    who had gone to women's colleges
  • 8:26 - 8:30
    or who had the word "women"
    somewhere in their resume,
  • 8:30 - 8:34
    as in, "women's soccer team"
    or "society of women engineers."
  • 8:34 - 8:38
    The AI didn't know that it wasn't supposed
    to copy this particular thing
  • 8:38 - 8:40
    that it had seen the humans do.
  • 8:40 - 8:43
    And technically, it did
    what they asked it to do.
  • 8:43 - 8:46
    They just accidentally asked it
    to do the wrong thing.
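
Here is a deliberately tiny sketch of that failure mode, with invented resumes: score each word by how often it appeared among past "hired" versus "rejected" resumes, and the word "women's" comes out negative purely because of who was hired before. This is not Amazon's system, just the shape of the mistake.

```python
from collections import Counter

# Invented past hiring decisions: (words in resume, was_hired)
PAST = [({"software", "engineer", "chess"}, True),
        ({"software", "engineer", "football"}, True),
        ({"software", "engineer", "women's", "chess"}, False),
        ({"software", "engineer", "women's", "soccer"}, False)]

hired, rejected = Counter(), Counter()
for words, was_hired in PAST:
    (hired if was_hired else rejected).update(words)

def word_score(word):
    # Positive if the word showed up more among past hires.
    return hired[word] - rejected[word]

resume = {"software", "engineer", "women's", "soccer"}
print({w: word_score(w) for w in resume})
# "women's" scores -2: the model faithfully copied the humans' past pattern.
```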
  • 8:47 - 8:49
    And this happens all the time with AI.
  • 8:49 - 8:54
    AI can be really destructive
    and not know it.
  • 8:54 - 8:59
    So the AIs that recommend
    new content on Facebook and YouTube,
  • 8:59 - 9:03
    they're optimized to increase
    the number of clicks and views,
  • 9:03 - 9:06
    and unfortunately one way
    that they have found of doing this
  • 9:06 - 9:10
    is to recommend the content
    of conspiracy theories or bigotry.
  • 9:10 - 9:16
    The AIs themselves don't have any concept
    of what this content actually is,
  • 9:16 - 9:20
    and they don't have any concept
    of what the consequences might be
  • 9:20 - 9:23
    of recommending this content.
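
One hedged sketch of how "optimize for clicks" finds this on its own: an epsilon-greedy loop with made-up click rates. The loop never looks at what the content is, only at which option gets clicked, and it drifts toward whichever option clicks best.

```python
import random

# Made-up click-through rates; the recommender never sees these numbers,
# or the content itself. It only observes clicks.
CLICK_RATE = {"cat videos": 0.10, "news": 0.08, "conspiracy": 0.15}

shows = {k: 0 for k in CLICK_RATE}
clicks = {k: 0 for k in CLICK_RATE}

for _ in range(50_000):
    if random.random() < 0.1:  # occasionally explore at random
        pick = random.choice(list(CLICK_RATE))
    else:                      # otherwise exploit the best click rate so far
        pick = max(CLICK_RATE, key=lambda k: clicks[k] / max(shows[k], 1))
    shows[pick] += 1
    clicks[pick] += random.random() < CLICK_RATE[pick]

print(max(shows, key=shows.get))  # usually "conspiracy": most clicks per show
```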
  • 9:23 - 9:25
    So when we are working with AI,
  • 9:25 - 9:26
    it's up to us
  • 9:26 - 9:28
    to avoid problems,
  • 9:28 - 9:32
    and avoiding things going wrong
  • 9:32 - 9:35
    may come down to
    the age-old problem of communication,
  • 9:35 - 9:40
    where we as humans have to learn
    how to communicate with AI.
  • 9:40 - 9:44
    We have to learn what AI
    is capable of doing and what it's not,
  • 9:44 - 9:46
    and to understand that
    with its tiny little worm brain,
  • 9:46 - 9:51
    AI doesn't really understand
    what we're trying to ask it to do.
  • 9:51 - 9:55
    So in other words, we have
    to be prepared to work with AI
  • 9:55 - 10:00
    that's not the super-competent,
    all-knowing AI of science fiction.
  • 10:00 - 10:03
    We have to be prepared to work with an AI
  • 10:03 - 10:07
    that's the one that we actually have
    in the present day,
  • 10:07 - 10:10
    and present-day AI is plenty weird enough.
  • 10:10 - 10:13
    Thank you.
  • 10:13 - 10:16
    (Applause)
Title:
The danger of AI is weirder than you think
Speaker:
Janelle Shane