The danger of AI is weirder than you think

  • 0:02 - 0:05
    So, artificial intelligence
  • 0:05 - 0:08
    is known for disrupting
    all kinds of industries.
  • 0:09 - 0:11
    What about ice cream?
  • 0:12 - 0:16
    What kind of mind-blowing
    new flavors could we generate
  • 0:16 - 0:19
    with the power of an advanced
    artificial intelligence?
  • 0:19 - 0:23
    So I teamed up with a group of coders
    from Kealing Middle School
  • 0:23 - 0:25
    to find out the answer to this question.
  • 0:25 - 0:31
    They collected over 1,600
    existing ice cream flavors,
  • 0:31 - 0:36
    and together, we fed them to an algorithm
    to see what it would generate.
  • 0:36 - 0:40
    And here are some of the flavors
    that the AI came up with.
  • 0:40 - 0:42
    [Pumpkin Trash Break]
  • 0:42 - 0:43
    (Laughter)
  • 0:43 - 0:46
    [Peanut Butter Slime]
  • 0:47 - 0:48
    [Strawberry Cream Disease]
  • 0:48 - 0:50
    (Laughter)
  • 0:50 - 0:55
    These flavors are not delicious,
    as we might have hoped they would be.
  • 0:55 - 0:57
    So the question is: What happened?
  • 0:57 - 0:58
    What went wrong?
  • 0:58 - 1:00
    Is the AI trying to kill us?
  • 1:01 - 1:05
    Or is it trying to do what we asked,
    and there was a problem?
  • 1:07 - 1:09
    In movies, when something
    goes wrong with AI,
  • 1:09 - 1:12
    it's usually because the AI has decided
  • 1:12 - 1:14
    that it doesn't want to obey
    the humans anymore,
  • 1:14 - 1:17
    and it's got its own goals,
    thank you very much.
  • 1:17 - 1:20
    In real life, though,
    the AI that we actually have
  • 1:21 - 1:22
    is not nearly smart enough for that.
  • 1:23 - 1:26
    It has the approximate computing power
  • 1:26 - 1:27
    of an earthworm,
  • 1:27 - 1:30
    or maybe at most a single honeybee,
  • 1:31 - 1:33
    and actually, probably even less.
  • 1:33 - 1:35
    Like, we're constantly learning
    new things about brains
  • 1:35 - 1:40
    that make it clear how much our AIs
    don't measure up to real brains.
  • 1:40 - 1:45
    So today's AI can do a task
    like identify a pedestrian in a picture,
  • 1:45 - 1:48
    but it doesn't have a concept
    of what the pedestrian is
  • 1:48 - 1:53
    beyond that it's a collection
    of lines and textures and things.
  • 1:54 - 1:56
    It doesn't know what a human actually is.
  • 1:57 - 2:00
    So will today's AI
    do what we ask it to do?
  • 2:00 - 2:02
    It will if it can,
  • 2:02 - 2:04
    but it might not do what we actually want.
  • 2:04 - 2:07
    So let's say that you
    were trying to get an AI
  • 2:07 - 2:10
    to take this collection of robot parts
  • 2:10 - 2:14
    and assemble them into some kind of robot
    to get from Point A to Point B.
  • 2:14 - 2:16
    Now, if you were going to try
    and solve this problem
  • 2:16 - 2:19
    by writing a traditional-style
    computer program,
  • 2:19 - 2:22
    you would give the program
    step-by-step instructions
  • 2:22 - 2:23
    on how to take these parts,
  • 2:23 - 2:26
    how to assemble them
    into a robot with legs
  • 2:26 - 2:29
    and then how to use those legs
    to walk to Point B.
  • 2:29 - 2:32
    But when you're using AI
    to solve the problem,
  • 2:32 - 2:33
    it goes differently.
  • 2:33 - 2:35
    You don't tell it
    how to solve the problem,
  • 2:35 - 2:37
    you just give it the goal,
  • 2:37 - 2:40
    and it has to figure out for itself
    via trial and error
  • 2:40 - 2:42
    how to reach that goal.
  • 2:42 - 2:46
    And it turns out that the way AI tends
    to solve this particular problem
  • 2:46 - 2:48
    is by doing this:
  • 2:48 - 2:51
    it assembles itself into a tower
    and then falls over
  • 2:51 - 2:53
    and lands at Point B.
  • 2:53 - 2:56
    And technically, this solves the problem.
  • 2:56 - 2:58
    Technically, it got to Point B.
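To make the "give it only the goal" idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (it is not the actual experiment): the search is scored only on the distance reached toward Point B, so a tall body that tips over is a perfectly valid solution.

```python
import random

# Toy goal-only optimization, invented for illustration (not the real
# robot experiment). A candidate "robot" is reduced to one number, its
# body height; the only feedback is distance reached toward Point B.

def distance_reached(body_height):
    # Simplified physics: a rigid body that tips over lands with its
    # top roughly body_height units from the starting point.
    return body_height

best = None
for _ in range(1000):
    candidate = random.uniform(0.1, 10.0)  # propose a random design
    if best is None or distance_reached(candidate) > distance_reached(best):
        best = candidate                   # keep whatever scores best

print(f"best design: a {best:.1f}-unit-tall tower that falls over")
```

Nothing in the score says "use legs," so the search never needs them.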
  • 2:58 - 3:02
    The danger of AI is not that
    it's going to rebel against us,
  • 3:02 - 3:06
    it's that it's going to do
    exactly what we ask it to do.
  • 3:07 - 3:09
    So then the trick
    of working with AI becomes:
  • 3:09 - 3:13
    How do we set up the problem
    so that it actually does what we want?
  • 3:15 - 3:18
    So this little robot here
    is being controlled by an AI.
  • 3:18 - 3:21
    The AI came up with a design
    for the robot legs
  • 3:21 - 3:25
    and then figured out how to use them
    to get past all these obstacles.
  • 3:25 - 3:28
    But when David Ha set up this experiment,
  • 3:28 - 3:31
    he had to set it up
    with very, very strict limits
  • 3:31 - 3:34
    on how big the AI
    was allowed to make the legs,
  • 3:34 - 3:36
    because otherwise ...
  • 3:43 - 3:47
    (Laughter)
  • 3:49 - 3:52
    And technically, it got
    to the end of that obstacle course.
  • 3:52 - 3:57
    So you see how hard it is to get AI
    to do something as simple as just walk.
  • 3:57 - 4:01
    So seeing the AI do this,
    you may say, OK, no fair,
  • 4:01 - 4:04
    you can't just be
    a tall tower and fall over,
  • 4:04 - 4:07
    you have to actually, like,
    use legs to walk.
  • 4:07 - 4:10
    And it turns out,
    that doesn't always work, either.
  • 4:10 - 4:13
    This AI's job was to move fast.
  • 4:13 - 4:17
    They didn't tell it that it had
    to run facing forward
  • 4:17 - 4:19
    or that it couldn't use its arms.
  • 4:19 - 4:24
    So this is what you get
    when you train AI to move fast,
  • 4:24 - 4:28
    you get things like somersaulting
    and silly walks.
  • 4:28 - 4:29
    It's really common.
  • 4:30 - 4:33
    So is twitching along the floor in a heap.
  • 4:33 - 4:34
    (Laughter)
  • 4:35 - 4:38
    So in my opinion, you know what
    should have been a whole lot weirder
  • 4:39 - 4:40
    is the "Terminator" robots.
  • 4:40 - 4:44
    Hacking "The Matrix" is another thing
    that AI will do if you give it a chance.
  • 4:44 - 4:47
    So if you train an AI in a simulation,
  • 4:47 - 4:51
    it will learn how to do things like
    hack into the simulation's math errors
  • 4:51 - 4:53
    and harvest them for energy.
  • 4:53 - 4:58
    Or it will figure out how to move faster
    by glitching repeatedly into the floor.
  • 4:58 - 5:00
    When you're working with AI,
  • 5:00 - 5:02
    it's less like working with another human
  • 5:02 - 5:06
    and a lot more like working
    with some kind of weird force of nature.
  • 5:07 - 5:11
    And it's really easy to accidentally
    give AI the wrong problem to solve,
  • 5:11 - 5:16
    and often we don't realize that
    until something has actually gone wrong.
  • 5:16 - 5:18
    So here's an experiment I did,
  • 5:18 - 5:22
    where I wanted the AI
    to copy paint colors,
  • 5:22 - 5:23
    to invent new paint colors,
  • 5:23 - 5:26
    given a list like the one
    here on the left.
  • 5:27 - 5:30
    And here's what the AI
    actually came up with.
  • 5:30 - 5:33
    [Sindis Poop, Turdly, Suffer, Gray Pubic]
  • 5:33 - 5:37
    (Laughter)
  • 5:39 - 5:41
    So technically,
  • 5:41 - 5:43
    it did what I asked it to.
  • 5:43 - 5:46
    I thought I was asking it for,
    like, nice paint color names,
  • 5:46 - 5:49
    but what I was actually asking it to do
  • 5:49 - 5:52
    was just imitate the kinds
    of letter combinations
  • 5:52 - 5:54
    that it had seen in the original.
  • 5:54 - 5:57
    And I didn't tell it anything
    about what words mean,
  • 5:57 - 5:59
    or that there are maybe some words
  • 5:59 - 6:02
    that it should avoid using
    in these paint colors.
  • 6:03 - 6:07
    So its entire world
    is the data that I gave it.
  • 6:07 - 6:11
    Like with the ice cream flavors,
    it doesn't know about anything else.
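For a sense of what "its entire world is the data" means, here is a much cruder stand-in for the models in the talk (those were neural networks; this is a character-level Markov chain, and the training names are invented). It learns only which letters tend to follow which, with no notion of what any word means:

```python
import random
from collections import defaultdict

# Invented training data; the real runs used long lists of real names.
training_names = ["Dusty Rose", "Sea Mist", "Sandy Taupe", "Stormy Blue"]

# Count which character follows each two-character context.
transitions = defaultdict(list)
for name in training_names:
    padded = "^^" + name + "$"        # ^ marks the start, $ the end
    for i in range(len(padded) - 2):
        transitions[padded[i:i + 2]].append(padded[i + 2])

def generate():
    context, out = "^^", ""
    while True:
        nxt = random.choice(transitions[context])  # sample a likely next letter
        if nxt == "$":
            return out
        out += nxt
        context = context[1] + nxt

print(generate())  # plausible letter combinations, zero knowledge of meaning
```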
  • 6:12 - 6:14
    So it is through the data
  • 6:14 - 6:18
    that we often accidentally tell AI
    to do the wrong thing.
  • 6:19 - 6:22
    This is a fish called a tench.
  • 6:22 - 6:24
    And there was a group of researchers
  • 6:24 - 6:27
    who trained an AI to identify
    this tench in pictures.
  • 6:27 - 6:29
    But then when they asked it
  • 6:29 - 6:32
    what part of the picture it was actually
    using to identify the fish,
  • 6:32 - 6:34
    here's what it highlighted.
  • 6:35 - 6:37
    Yes, those are human fingers.
  • 6:37 - 6:39
    Why would it be looking for human fingers
  • 6:39 - 6:41
    if it's trying to identify a fish?
  • 6:42 - 6:45
    Well, it turns out that the tench
    is a trophy fish,
  • 6:45 - 6:49
    and so in a lot of pictures
    that the AI had seen of this fish
  • 6:49 - 6:50
    during training,
  • 6:50 - 6:52
    the fish looked like this.
  • 6:52 - 6:53
    (Laughter)
  • 6:53 - 6:57
    And it didn't know that the fingers
    aren't part of the fish.
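Here is a tiny sketch of how that kind of shortcut gets learned. The features and data are invented (a real classifier works on raw pixels, not two hand-made scores), but the failure mode is the same: if fingers perfectly predict the "tench" label in the training set, a simple learner puts all of its weight on the fingers.

```python
# Invented features per photo: (fish_shape_score, fingers_score) -> is_tench
train = [
    ((1, 1), 1),  # sharp photo of a tench, held up by an angler
    ((0, 1), 1),  # blurry tench, but the fingers are clear
    ((1, 0), 0),  # a different fish, no hands in the frame
    ((0, 0), 0),  # no fish at all
]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                       # perceptron training loop
    for (x1, x2), y in train:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        w[0] += (y - pred) * x1           # nudge weights toward the labels
        w[1] += (y - pred) * x2
        b += y - pred

print(w)  # all of the learned weight lands on the fingers feature
```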
  • 6:59 - 7:03
    So you see why it is so hard
    to design an AI
  • 7:03 - 7:06
    that actually can understand
    what it's looking at.
  • 7:06 - 7:09
    And this is why designing
    the image recognition
  • 7:09 - 7:11
    in self-driving cars is so hard,
  • 7:11 - 7:13
    and why so many self-driving car failures
  • 7:14 - 7:16
    are because the AI got confused.
  • 7:16 - 7:20
    I want to talk about an example from 2016.
  • 7:20 - 7:25
    There was a fatal accident when somebody
    was using Tesla's autopilot AI,
  • 7:25 - 7:28
    but instead of using it on the highway
    like it was designed for,
  • 7:28 - 7:31
    they used it on city streets.
  • 7:31 - 7:32
    And what happened was,
  • 7:32 - 7:36
    a truck drove out in front of the car
    and the car failed to brake.
  • 7:37 - 7:41
    Now, the AI definitely was trained
    to recognize trucks in pictures.
  • 7:41 - 7:43
    But what it looks like happened is,
  • 7:43 - 7:46
    the AI was trained to recognize
    trucks on highway driving,
  • 7:46 - 7:49
    where you would expect
    to see trucks from behind.
  • 7:49 - 7:53
    Seeing a truck from the side is not
    supposed to happen on a highway,
  • 7:53 - 7:56
    and so when the AI saw this truck,
  • 7:56 - 8:01
    it looks like the AI recognized it
    as most likely to be a road sign
  • 8:01 - 8:03
    and therefore, safe to drive underneath.
  • 8:04 - 8:07
    Here's an AI misstep
    from a different field.
  • 8:07 - 8:10
    Amazon recently had to give up
    on a résumé-sorting algorithm
  • 8:10 - 8:11
    that they were working on
  • 8:11 - 8:15
    when they discovered that the algorithm
    had learned to discriminate against women.
  • 8:15 - 8:18
    What happened is, they had trained it
    on example résumés
  • 8:18 - 8:20
    of people who they had hired in the past.
  • 8:20 - 8:24
    And from these examples, the AI learned
    to avoid the résumés of people
  • 8:24 - 8:26
    who had gone to women's colleges
  • 8:26 - 8:29
    or who had the word "women"
    somewhere in their résumé,
  • 8:29 - 8:34
    as in, "women's soccer team"
    or "Society of Women Engineers."
  • 8:34 - 8:38
    The AI didn't know that it wasn't supposed
    to copy this particular thing
  • 8:38 - 8:40
    that it had seen the humans do.
  • 8:40 - 8:43
    And technically, it did
    what they asked it to do.
  • 8:43 - 8:46
    They just accidentally asked it
    to do the wrong thing.
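A minimal sketch of how that can happen, with all résumés and labels invented: score each word by how often it appears in past "hired" versus "rejected" résumés, and the scorer faithfully reproduces whatever bias was in those past decisions.

```python
from collections import Counter

# Invented historical decisions, including a biased one.
past = [
    ("chess club and robotics club", True),
    ("math olympiad medalist", True),
    ("women's soccer team captain", False),
    ("society of women engineers chapter president", False),
]

hired_words, rejected_words = Counter(), Counter()
for text, was_hired in past:
    (hired_words if was_hired else rejected_words).update(text.split())

def score(resume_text):
    # Words common in past hires add points; words common in past
    # rejections subtract them.
    return sum(hired_words[w] - rejected_words[w] for w in resume_text.split())

print(score("chess club member"))           # positive: resembles past hires
print(score("women's rowing team member"))  # negative: penalized for "women's"
```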
  • 8:47 - 8:50
    And this happens all the time with AI.
  • 8:50 - 8:54
    AI can be really destructive
    and not know it.
  • 8:54 - 8:59
    So the AIs that recommend
    new content on Facebook and YouTube,
  • 8:59 - 9:02
    they're optimized to increase
    the number of clicks and views.
  • 9:02 - 9:06
    And unfortunately, one way
    that they have found of doing this
  • 9:06 - 9:10
    is to recommend the content
    of conspiracy theories or bigotry.
  • 9:11 - 9:16
    The AIs themselves don't have any concept
    of what this content actually is,
  • 9:16 - 9:20
    and they don't have any concept
    of what the consequences might be
  • 9:20 - 9:22
    of recommending this content.
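To see why the objective alone pushes in that direction, here is a sketch of a greedy recommender (click rates invented, and real systems are vastly more complex). The optimizer only ever sees clicks; it cannot tell that its clickiest item is a conspiracy video:

```python
import random

# Invented click-through rates; imagine "C" is conspiracy content.
click_rate = {"A": 0.05, "B": 0.08, "C": 0.30}

counts = {item: 0 for item in click_rate}
clicks = {item: 0 for item in click_rate}

for step in range(10_000):
    if step < 300:                        # brief exploration phase
        item = random.choice(list(click_rate))
    else:                                 # then exploit the clickiest item
        item = max(clicks, key=lambda k: clicks[k] / max(counts[k], 1))
    counts[item] += 1
    clicks[item] += random.random() < click_rate[item]  # simulate a click

print(max(counts, key=counts.get))        # the system settles on "C"
```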
  • 9:22 - 9:24
    So, when we're working with AI,
  • 9:24 - 9:29
    it's up to us to avoid problems.
  • 9:29 - 9:31
    And avoiding things going wrong,
  • 9:31 - 9:35
    that may come down to
    the age-old problem of communication,
  • 9:35 - 9:39
    where we as humans have to learn
    how to communicate with AI.
  • 9:39 - 9:43
    We have to learn what AI
    is capable of doing and what it's not,
  • 9:43 - 9:46
    and to understand that,
    with its tiny little worm brain,
  • 9:46 - 9:50
    AI doesn't really understand
    what we're trying to ask it to do.
  • 9:51 - 9:54
    So in other words, we have
    to be prepared to work with AI
  • 9:54 - 10:00
    that's not the super-competent,
    all-knowing AI of science fiction.
  • 10:00 - 10:03
    We have to be prepared to work with the AI
  • 10:03 - 10:06
    that we actually have
    in the present day.
  • 10:06 - 10:10
    And present-day AI is plenty weird enough.
  • 10:10 - 10:11
    Thank you.
  • 10:11 - 10:16
    (Applause)
Title:
The danger of AI is weirder than you think
Speaker:
Janelle Shane
Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
10:28
