The danger of AI is weirder than you think

So artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence?

So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]

(Laughter)

[Peanut Butter Slime]

[Strawberry Cream Disease]

These flavors are not delicious, as we might have hoped they would be, so the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?
So in movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much.

In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains.

So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond the fact that it's a collection of lines and textures and things. It doesn't know what a human actually is.

So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want.
So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B.

Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs, and then how to use those legs to walk to Point B.

But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself, via trial and error, how to reach that goal.

And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B.
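The tower-that-falls-over result is a classic case of an optimizer exploiting an underspecified goal. As a toy illustration (hypothetical, not the actual experiment; all names and numbers here are made up), here is a search that scores candidate plans only by how close they land to Point B, so a plan that never walks at all can win:

```python
# Toy sketch of goal-only optimization (not the actual experiment):
# the optimizer only ever sees a score -- final distance to Point B.
# Nothing in the score says "use legs" or "stay upright", so it won't.
TARGET = 10.0  # distance from Point A to Point B

def final_position(plan):
    """Stand-in 'simulator': walkers advance step by step; a tower
    just tips over and lands its full height away from the start."""
    if plan["strategy"] == "walk":
        return plan["steps"] * plan["stride"]
    return plan["height"]  # build tall, fall over, land at Point B

def score(plan):
    # The *only* thing we measure: closeness of the landing spot to B.
    return -abs(TARGET - final_position(plan))

candidates = [
    {"strategy": "walk", "steps": 7, "stride": 1.2},
    {"strategy": "walk", "steps": 5, "stride": 1.5},
    {"strategy": "tower", "height": 10.0},
]

best = max(candidates, key=score)
print(best["strategy"])  # prints "tower" -- technically, it got to Point B
```

Nothing here is malicious; the tower simply maximizes the only thing the score measures.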
The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?

So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.

So seeing the AI do this, you may say: No fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either.
So this AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast: you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)

So in my opinion, you know what should have been a whole lot weirder is the Terminator robots.

Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy, or it will figure out how to move faster by glitching repeatedly into the floor.

When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given a list like the one here on the left. And here's what the AI actually came up with.

(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original.
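That idea, imitating letter combinations with no notion of meaning, can be sketched with a tiny character-level Markov chain (my own minimal stand-in, not the actual model or training data from the experiment):

```python
import random

# Minimal stand-in (not the actual model or dataset): a character-level
# Markov chain that records which letter tends to follow each two-letter
# context in some paint-color names. It sees letters, never meanings.
names = ["navajo white", "midnight blue", "desert sand", "dusty rose"]

ORDER = 2
table = {}
for name in names:
    padded = "^" * ORDER + name + "$"  # ^ = start padding, $ = end marker
    for i in range(len(padded) - ORDER):
        ctx = padded[i:i + ORDER]
        table.setdefault(ctx, []).append(padded[i + ORDER])

def generate(rng, max_len=30):
    """Sample one new 'paint color name', letter by letter."""
    ctx, out = "^" * ORDER, []
    while len(out) < max_len:
        ch = rng.choice(table[ctx])
        if ch == "$":          # the chain chose to stop here
            break
        out.append(ch)
        ctx = ctx[1:] + ch     # slide the two-letter context window
    return "".join(out)

print(generate(random.Random(0)))
```

Run it with different seeds: the outputs look name-shaped, but nothing in the table stops it from stitching together an unfortunate word, because the chain was never told that words exist.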
And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.

So it is through the data that we often accidentally tell AI to do the wrong thing.
So this is a fish called a tench, and there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted.

Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this. And it didn't know that the fingers aren't part of the fish.
So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused.
I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake.

Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is, the AI was trained to recognize trucks in highway driving, where you would expect to see trucks from behind. Trucks on the side are not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign, and therefore safe to drive underneath.
Here's an AI misstep from a different field. Amazon recently had to give up on a resume-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example resumes of people who they had hired in the past, and from these examples, the AI learned to avoid the resumes of people who had gone to women's colleges or who had the word "women" somewhere in their resume, as in, "women's soccer team" or "Society of Women Engineers."

The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing.

And this happens all the time with AI. AI can be really destructive and not know it.
So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.
So when we are working with AI, it's up to us to avoid problems. And avoiding things going wrong may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.

So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with the AI that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)
Title:
The danger of AI is weirder than you think
Speaker:
Janelle Shane
Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
10:28
