
How computers are learning to be creative

  • 0:01 - 0:04
    So, I lead a team at Google
    that works on machine intelligence;
  • 0:04 - 0:09
    in other words, the engineering discipline
    of making computers and devices
  • 0:09 - 0:11
    able to do some of the things
    that brains do.
  • 0:11 - 0:15
    And this makes us
    interested in real brains
  • 0:15 - 0:16
    and neuroscience as well,
  • 0:16 - 0:20
    and especially interested
    in the things that our brains do
  • 0:20 - 0:24
    that are still far superior
    to the performance of computers.
  • 0:25 - 0:29
    Historically, one of those areas
    has been perception,
  • 0:29 - 0:32
    the process by which things
    out there in the world --
  • 0:32 - 0:33
    sounds and images --
  • 0:34 - 0:36
    can turn into concepts in the mind.
  • 0:36 - 0:39
    This is essential for our own brains,
  • 0:39 - 0:41
    and it's also pretty useful on a computer.
  • 0:42 - 0:45
    The machine perception algorithms,
    for example, that our team makes,
  • 0:45 - 0:49
    are what enable your pictures
    on Google Photos to become searchable,
  • 0:49 - 0:50
    based on what's in them.
  • 0:52 - 0:55
    The flip side of perception is creativity:
  • 0:55 - 0:58
    turning a concept into something
    out there in the world.
  • 0:58 - 1:02
    So over the past year,
    our work on machine perception
  • 1:02 - 1:07
    has also unexpectedly connected
    with the world of machine creativity
  • 1:07 - 1:08
    and machine art.
  • 1:09 - 1:12
    I think Michelangelo
    had a penetrating insight
  • 1:12 - 1:16
    into this dual relationship
    between perception and creativity.
  • 1:16 - 1:18
    This is a famous quote of his:
  • 1:18 - 1:21
    "Every block of stone
    has a statue inside of it,
  • 1:22 - 1:25
    and the job of the sculptor
    is to discover it."
  • 1:26 - 1:29
    So I think that what
    Michelangelo was getting at
  • 1:29 - 1:32
    is that we create by perceiving,
  • 1:32 - 1:35
    and that perception itself
    is an act of imagination
  • 1:36 - 1:38
    and is the stuff of creativity.
  • 1:39 - 1:43
    The organ that does all the thinking
    and perceiving and imagining,
  • 1:43 - 1:44
    of course, is the brain.
  • 1:45 - 1:48
    And I'd like to begin
    with a brief bit of history
  • 1:48 - 1:50
    about what we know about brains.
  • 1:50 - 1:53
    Because unlike, say,
    the heart or the intestines,
  • 1:53 - 1:56
    you really can't say very much
    about a brain by just looking at it,
  • 1:56 - 1:58
    at least with the naked eye.
  • 1:58 - 2:00
    The early anatomists who looked at brains
  • 2:00 - 2:04
    gave the superficial structures
    of this thing all kinds of fanciful names,
  • 2:04 - 2:07
    like hippocampus, meaning "little shrimp."
  • 2:07 - 2:09
    But of course that sort of thing
    doesn't tell us very much
  • 2:09 - 2:12
    about what's actually going on inside.
  • 2:13 - 2:16
    The first person who, I think, really
    developed some kind of insight
  • 2:16 - 2:18
    into what was going on in the brain
  • 2:18 - 2:22
    was the great Spanish neuroanatomist,
    Santiago Ramón y Cajal,
  • 2:22 - 2:24
    in the 19th century,
  • 2:24 - 2:28
    who used microscopy and special stains
  • 2:28 - 2:32
    that could selectively fill in
    or render in very high contrast
  • 2:32 - 2:34
    the individual cells in the brain,
  • 2:34 - 2:37
    in order to start to understand
    their morphologies.
  • 2:38 - 2:41
    And these are the kinds of drawings
    that he made of neurons
  • 2:41 - 2:42
    in the 19th century.
  • 2:42 - 2:44
    This is from a bird brain.
  • 2:44 - 2:47
    And you see this incredible variety
    of different sorts of cells --
  • 2:47 - 2:51
    even cell theory itself
    was quite new at this point.
  • 2:51 - 2:52
    And these structures,
  • 2:52 - 2:54
    these cells that have these arborizations,
  • 2:54 - 2:57
    these branches that can go
    very, very long distances --
  • 2:57 - 2:58
    this was very novel at the time.
  • 2:59 - 3:02
    They're reminiscent, of course, of wires.
  • 3:02 - 3:05
    That might have been obvious
    to some people in the 19th century;
  • 3:05 - 3:10
    the revolutions of wiring and electricity
    were just getting underway.
  • 3:10 - 3:11
    But in many ways,
  • 3:11 - 3:14
    these microanatomical drawings
    of Ramón y Cajal's, like this one,
  • 3:15 - 3:17
    they're still in some ways unsurpassed.
  • 3:17 - 3:19
    We're still, more than a century later,
  • 3:19 - 3:22
    trying to finish the job
    that Ramón y Cajal started.
  • 3:22 - 3:25
    These are raw data from our collaborators
  • 3:25 - 3:28
    at the Max Planck Institute
    of Neuroscience.
  • 3:28 - 3:29
    And what our collaborators have done
  • 3:29 - 3:34
    is to image little pieces of brain tissue.
  • 3:34 - 3:38
    The entire sample here
    is about one cubic millimeter in size,
  • 3:38 - 3:40
    and I'm showing you a very,
    very small piece of it here.
  • 3:40 - 3:43
    That bar on the left is about one micron.
  • 3:43 - 3:45
    The structures you see are mitochondria
  • 3:45 - 3:47
    that are the size of bacteria.
  • 3:47 - 3:49
    And these are consecutive slices
  • 3:49 - 3:52
    through this very, very
    tiny block of tissue.
  • 3:52 - 3:55
    Just for comparison's sake,
  • 3:55 - 3:58
    the diameter of an average strand
    of hair is about 100 microns.
  • 3:58 - 4:01
    So we're looking at something
    much, much smaller
  • 4:01 - 4:02
    than a single strand of hair.
  • 4:02 - 4:06
    And from these kinds of serial
    electron microscopy slices,
  • 4:06 - 4:11
    one can start to make reconstructions
    in 3D of neurons that look like these.
  • 4:11 - 4:14
    So these are sort of in the same
    style as Ramón y Cajal.
  • 4:14 - 4:16
    Only a few neurons lit up,
  • 4:16 - 4:19
    because otherwise we wouldn't
    be able to see anything here.
  • 4:19 - 4:20
    It would be so crowded,
  • 4:20 - 4:21
    so full of structure,
  • 4:21 - 4:24
    of wiring all connecting
    one neuron to another.
  • 4:25 - 4:28
    So Ramón y Cajal was a little bit
    ahead of his time,
  • 4:28 - 4:31
    and progress on understanding the brain
  • 4:31 - 4:33
    proceeded slowly
    over the next few decades.
  • 4:33 - 4:36
    But we knew that neurons used electricity,
  • 4:36 - 4:39
    and by World War II, our technology
    was advanced enough
  • 4:39 - 4:42
    to start doing real electrical
    experiments on live neurons
  • 4:42 - 4:44
    to better understand how they worked.
  • 4:45 - 4:49
    This was the very same time
    when computers were being invented,
  • 4:49 - 4:52
    very much based on the idea
    of modeling the brain --
  • 4:52 - 4:55
    of "intelligent machinery,"
    as Alan Turing called it,
  • 4:55 - 4:57
    one of the fathers of computer science.
  • 4:58 - 5:03
    Warren McCulloch and Walter Pitts
    looked at Ramón y Cajal's drawing
  • 5:03 - 5:04
    of visual cortex,
  • 5:04 - 5:05
    which I'm showing here.
  • 5:06 - 5:10
    This is the cortex that processes
    imagery that comes from the eye.
  • 5:10 - 5:14
    And for them, this looked
    like a circuit diagram.
  • 5:14 - 5:18
    So there are a lot of details
    in McCulloch and Pitts's circuit diagram
  • 5:18 - 5:20
    that are not quite right.
  • 5:20 - 5:21
    But this basic idea
  • 5:21 - 5:25
    that visual cortex works like a series
    of computational elements
  • 5:25 - 5:28
    that pass information
    one to the next in a cascade,
  • 5:28 - 5:29
    is essentially correct.
  • 5:29 - 5:32
    Let's talk for a moment
  • 5:32 - 5:36
    about what a model for processing
    visual information would need to do.
  • 5:36 - 5:39
    The basic task of perception
  • 5:39 - 5:43
    is to take an image like this one and say,
  • 5:43 - 5:44
    "That's a bird,"
  • 5:44 - 5:47
    which is a very simple thing
    for us to do with our brains.
  • 5:47 - 5:51
    But you should all understand
    that for a computer,
  • 5:51 - 5:54
    this was pretty much impossible
    just a few years ago.
  • 5:54 - 5:56
    The classical computing paradigm
  • 5:56 - 5:58
    is not one in which
    this task is easy to do.
  • 5:59 - 6:02
    So what's going on between the pixels,
  • 6:02 - 6:06
    between the image of the bird
    and the word "bird,"
  • 6:06 - 6:09
    is essentially a set of neurons
    connected to each other
  • 6:09 - 6:10
    in a neural network,
  • 6:10 - 6:11
    as I'm diagramming here.
  • 6:11 - 6:15
    This neural network could be biological,
    inside our visual cortices,
  • 6:15 - 6:17
    or, nowadays, we start
    to have the capability
  • 6:17 - 6:19
    to model such neural networks
    on the computer.
  • 6:20 - 6:22
    And I'll show you what
    that actually looks like.
  • 6:22 - 6:26
    So the pixels you can think
    about as a first layer of neurons,
  • 6:26 - 6:28
    and that's, in fact,
    how it works in the eye --
  • 6:28 - 6:30
    that's the neurons in the retina.
  • 6:30 - 6:31
    And those feed forward
  • 6:31 - 6:35
    into one layer after another layer,
    after another layer of neurons,
  • 6:35 - 6:38
    all connected by synapses
    of different weights.
  • 6:38 - 6:39
    The behavior of this network
  • 6:39 - 6:42
    is characterized by the strengths
    of all of those synapses.
  • 6:42 - 6:46
    Those characterize the computational
    properties of this network.
  • 6:46 - 6:47
    And at the end of the day,
  • 6:47 - 6:50
    you have a neuron
    or a small group of neurons
  • 6:50 - 6:51
    that light up, saying, "bird."
  • 6:52 - 6:55
    Now I'm going to represent
    those three things --
  • 6:55 - 7:00
    the input pixels and the synapses
    in the neural network,
  • 7:00 - 7:01
    and bird, the output --
  • 7:01 - 7:04
    by three variables: x, w and y.
  • 7:05 - 7:07
    There are maybe a million or so x's --
  • 7:07 - 7:09
    a million pixels in that image.
  • 7:09 - 7:11
    There are billions or trillions of w's,
  • 7:11 - 7:15
    which represent the weights of all
    these synapses in the neural network.
  • 7:15 - 7:16
    And there's a very small number of y's,
  • 7:16 - 7:18
    of outputs that that network has.
  • 7:18 - 7:20
    "Bird" is only four letters, right?
  • 7:21 - 7:25
    So let's pretend that this
    is just a simple formula,
  • 7:25 - 7:27
    x "×" w = y.
  • 7:27 - 7:29
    I'm putting the times in scare quotes
  • 7:29 - 7:31
    because what's really
    going on there, of course,
  • 7:31 - 7:34
    is a very complicated series
    of mathematical operations.
  • 7:35 - 7:36
    That's one equation.
  • 7:36 - 7:38
    There are three variables.
  • 7:38 - 7:41
    And we all know
    that if you have one equation,
  • 7:41 - 7:45
    you can solve for one variable
    by knowing the other two.
  • 7:45 - 7:49
    So the problem of inference,
  • 7:49 - 7:51
    that is, figuring out
    that the picture of a bird is a bird,
  • 7:51 - 7:53
    is this one:
  • 7:53 - 7:56
    it's where y is the unknown
    and w and x are known.
  • 7:56 - 7:59
    You know the neural network,
    you know the pixels.
  • 7:59 - 8:02
    As you can see, that's actually
    a relatively straightforward problem.
  • 8:02 - 8:04
    You multiply two times three
    and you're done.
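That inference direction can be sketched as a plain forward pass: a minimal, hypothetical two-layer network in Python, where x and w are known and y is simply computed. The weights and the toy 3-pixel "image" here are made up for illustration; a real network of this sort has millions of inputs and billions of weights.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w):
    """Inference: x and w are known, y is the unknown we compute.

    The "times" in x "×" w = y stands for a cascade like this one:
    each layer takes weighted sums and squashes them."""
    activations = x
    for layer in w:  # one weight matrix per layer
        activations = [
            sigmoid(sum(a * wij for a, wij in zip(activations, row)))
            for row in layer
        ]
    return activations

# Toy 3-pixel "image" through a 2-layer network (made-up weights):
x = [0.9, 0.1, 0.4]
w = [
    [[0.5, -1.2, 0.3], [1.1, 0.7, -0.2]],  # layer 1: 3 inputs -> 2 units
    [[2.0, -1.5]],                          # layer 2: 2 inputs -> 1 "bird" unit
]
y = forward(x, w)  # a single confidence score between 0 and 1
```

Given fixed x and w, this is just arithmetic flowing one way, which is why inference is the easy direction.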
  • 8:05 - 8:07
    I'll show you an artificial neural network
  • 8:07 - 8:09
    that we've built recently,
    doing exactly that.
  • 8:10 - 8:12
    This is running in real time
    on a mobile phone,
  • 8:13 - 8:16
    and that's, of course,
    amazing in its own right,
  • 8:16 - 8:19
    that mobile phones can do so many
    billions and trillions of operations
  • 8:19 - 8:21
    per second.
  • 8:21 - 8:22
    What you're looking at is a phone
  • 8:22 - 8:26
    looking at one after another
    picture of a bird,
  • 8:26 - 8:29
    and actually not only saying,
    "Yes, it's a bird,"
  • 8:29 - 8:32
    but identifying the species of bird
    with a network of this sort.
  • 8:33 - 8:35
    So in that picture,
  • 8:35 - 8:39
    the x and the w are known,
    and the y is the unknown.
  • 8:39 - 8:41
    I'm glossing over the very
    difficult part, of course,
  • 8:41 - 8:45
    which is how on earth
    do we figure out the w,
  • 8:45 - 8:47
    the brain that can do such a thing?
  • 8:47 - 8:49
    How would we ever learn such a model?
  • 8:49 - 8:53
    So this process of learning,
    of solving for w,
  • 8:53 - 8:55
    if we were doing this
    with the simple equation
  • 8:55 - 8:57
    in which we think about these as numbers,
  • 8:57 - 9:00
    we know exactly how to do that: 6 = 2 × w,
  • 9:00 - 9:03
    well, we divide by two and we're done.
  • 9:04 - 9:06
    The problem is with this operator.
  • 9:07 - 9:08
    So, division --
  • 9:08 - 9:11
    we've used division because
    it's the inverse of multiplication,
  • 9:11 - 9:13
    but as I've just said,
  • 9:13 - 9:15
    the multiplication is a bit of a lie here.
  • 9:15 - 9:18
    This is a very, very complicated,
    very non-linear operation;
  • 9:18 - 9:20
    it has no inverse.
  • 9:20 - 9:23
    So we have to figure out a way
    to solve the equation
  • 9:23 - 9:25
    without a division operator.
  • 9:25 - 9:28
    And the way to do that
    is fairly straightforward.
  • 9:28 - 9:30
    You just say, let's play
    a little algebra trick,
  • 9:30 - 9:33
    and move the six over
    to the right-hand side of the equation.
  • 9:33 - 9:35
    Now, we're still using multiplication.
  • 9:36 - 9:39
    And that zero -- let's think
    about it as an error.
  • 9:39 - 9:42
    In other words, if we've solved
    for w the right way,
  • 9:42 - 9:43
    then the error will be zero.
  • 9:43 - 9:45
    And if we haven't gotten it quite right,
  • 9:45 - 9:47
    the error will be greater than zero.
  • 9:47 - 9:51
    So now we can just take guesses
    to minimize the error,
  • 9:51 - 9:53
    and that's the sort of thing
    computers are very good at.
  • 9:53 - 9:55
    So you take an initial guess:
  • 9:55 - 9:56
    what if w = 0?
  • 9:56 - 9:57
    Well, then the error is 6.
  • 9:57 - 9:59
    What if w = 1? The error is 4.
  • 9:59 - 10:01
    And then the computer can
    sort of play Marco Polo,
  • 10:01 - 10:04
    and drive down the error close to zero.
  • 10:04 - 10:07
    As it does that, it's getting
    successive approximations to w.
  • 10:07 - 10:11
    Typically, it never quite gets there,
    but after about a dozen steps,
  • 10:11 - 10:15
    we're up to w = 2.999,
    which is close enough.
  • 10:16 - 10:18
    And this is the learning process.
  • 10:18 - 10:21
    So remember that what's been going on here
  • 10:21 - 10:25
    is that we've been taking
    a lot of known x's and known y's
  • 10:25 - 10:29
    and solving for the w in the middle
    through an iterative process.
  • 10:29 - 10:32
    It's exactly the same way
    that we do our own learning.
  • 10:32 - 10:35
    We have many, many images as babies
  • 10:35 - 10:37
    and we get told, "This is a bird;
    this is not a bird."
  • 10:38 - 10:40
    And over time, through iteration,
  • 10:40 - 10:43
    we solve for w, we solve
    for those neural connections.
  • 10:43 - 10:48
    So now, we've held
    x and w fixed to solve for y;
  • 10:48 - 10:49
    that's everyday, fast perception.
  • 10:49 - 10:51
    We figured out how to solve for w --
  • 10:51 - 10:53
    that's learning, which is a lot harder,
  • 10:53 - 10:55
    because we need to do error minimization,
  • 10:55 - 10:57
    using a lot of training examples.
  • 10:57 - 11:00
    And about a year ago,
    Alex Mordvintsev, on our team,
  • 11:00 - 11:04
    decided to experiment
    with what happens if we try solving for x,
  • 11:04 - 11:06
    given a known w and a known y.
  • 11:06 - 11:07
    In other words,
  • 11:07 - 11:09
    you know that it's a bird,
  • 11:09 - 11:12
    and you already have your neural network
    that you've trained on birds,
  • 11:12 - 11:14
    but what is the picture of a bird?
  • 11:15 - 11:20
    It turns out that by using exactly
    the same error-minimization procedure,
  • 11:20 - 11:24
    one can do that with the network
    trained to recognize birds,
  • 11:24 - 11:27
    and the result turns out to be ...
  • 11:30 - 11:32
    a picture of birds.
  • 11:33 - 11:37
    So this is a picture of birds
    generated entirely by a neural network
  • 11:37 - 11:38
    that was trained to recognize birds,
  • 11:38 - 11:42
    just by solving for x
    rather than solving for y,
  • 11:42 - 11:43
    and doing that iteratively.
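Solving for x with the same error-minimization procedure can be sketched with a toy already-"trained" network: hold w fixed, and iterate on the input until the output says "bird." The single weight 1.7 and the target 0.9 are made-up stand-ins for a trained network and a known y.

```python
import math

def net(x, w=1.7):
    """A fixed, already-"trained" one-weight network (the weight is made up)."""
    return 1.0 / (1.0 + math.exp(-w * x))

target = 0.9                 # the known y: "you know that it's a bird"
x, rate, h = 0.0, 1.0, 1e-5  # start from a blank "canvas"
for _ in range(200):
    # numeric gradient of the squared error with respect to the *input* x
    e0 = (net(x) - target) ** 2
    e1 = (net(x + h) - target) ** 2
    x -= rate * (e1 - e0) / h
# net(x) is now close to 0.9: the same minimization, but the unknown is x
```

Nothing about the procedure changed; only which variable is held fixed did, which is why a recognition network can be run in this generative direction at all.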
  • 11:44 - 11:46
    Here's another fun example.
  • 11:46 - 11:49
    This was a work made
    by Mike Tyka in our group,
  • 11:49 - 11:51
    which he calls "Animal Parade."
  • 11:51 - 11:54
    It reminds me a little bit
    of William Kentridge's artworks,
  • 11:54 - 11:57
    in which he makes sketches, rubs them out,
  • 11:57 - 11:58
    makes sketches, rubs them out,
  • 11:58 - 12:00
    and creates a movie this way.
  • 12:00 - 12:01
    In this case,
  • 12:01 - 12:04
    what Mike is doing is varying y
    over the space of different animals,
  • 12:04 - 12:07
    in a network designed
    to recognize and distinguish
  • 12:07 - 12:08
    different animals from each other.
  • 12:08 - 12:12
    And you get this strange, Escher-like
    morph from one animal to another.
  • 12:14 - 12:19
    Here he and Alex together
    have tried reducing
  • 12:19 - 12:22
    the y's to a space of only two dimensions,
  • 12:22 - 12:25
    thereby making a map
    out of the space of all things
  • 12:25 - 12:27
    recognized by this network.
  • 12:27 - 12:29
    Doing this kind of synthesis
  • 12:29 - 12:31
    or generation of imagery
    over that entire surface,
  • 12:31 - 12:34
    varying y over the surface,
    you make a kind of map --
  • 12:34 - 12:37
    a visual map of all the things
    the network knows how to recognize.
  • 12:37 - 12:40
    The animals are all here;
    "armadillo" is right in that spot.
  • 12:41 - 12:43
    You can do this with other kinds
    of networks as well.
  • 12:43 - 12:46
    This is a network designed
    to recognize faces,
  • 12:46 - 12:48
    to distinguish one face from another.
  • 12:48 - 12:52
    And here, we're putting
    in a y that says, "me,"
  • 12:52 - 12:53
    my own face parameters.
  • 12:53 - 12:55
    And when this thing solves for x,
  • 12:55 - 12:58
    it generates this rather crazy,
  • 12:58 - 13:02
    kind of cubist, surreal,
    psychedelic picture of me
  • 13:02 - 13:04
    from multiple points of view at once.
  • 13:04 - 13:07
    The reason it looks like
    multiple points of view at once
  • 13:07 - 13:10
    is because that network is designed
    to get rid of the ambiguity
  • 13:10 - 13:13
    of a face being in one pose
    or another pose,
  • 13:13 - 13:16
    being looked at with one kind of lighting,
    another kind of lighting.
  • 13:16 - 13:18
    So when you do
    this sort of reconstruction,
  • 13:18 - 13:21
    if you don't use some sort of guide image
  • 13:21 - 13:22
    or guide statistics,
  • 13:22 - 13:26
    then you'll get a sort of confusion
    of different points of view,
  • 13:26 - 13:27
    because it's ambiguous.
  • 13:28 - 13:32
    This is what happens if Alex uses
    his own face as a guide image
  • 13:32 - 13:35
    during that optimization process
    to reconstruct my own face.
  • 13:36 - 13:39
    So you can see it's not perfect.
  • 13:39 - 13:41
    There's still quite a lot of work to do
  • 13:41 - 13:43
    on how we optimize
    that optimization process.
  • 13:43 - 13:46
    But you start to get something
    more like a coherent face,
  • 13:46 - 13:48
    rendered using my own face as a guide.
  • 13:49 - 13:51
    You don't have to start
    with a blank canvas
  • 13:51 - 13:53
    or with white noise.
  • 13:53 - 13:54
    When you're solving for x,
  • 13:54 - 13:58
    you can begin with an x,
    that is itself already some other image.
  • 13:58 - 14:00
    That's what this little demonstration is.
  • 14:00 - 14:05
    This is a network
    that is designed to categorize
  • 14:05 - 14:08
    all sorts of different objects --
    man-made structures, animals ...
  • 14:08 - 14:10
    Here we're starting
    with just a picture of clouds,
  • 14:10 - 14:12
    and as we optimize,
  • 14:12 - 14:17
    basically, this network is figuring out
    what it sees in the clouds.
  • 14:17 - 14:19
    And the more time
    you spend looking at this,
  • 14:19 - 14:22
    the more things you also
    will see in the clouds.
  • 14:23 - 14:26
    You could also use the face network
    to hallucinate into this,
  • 14:26 - 14:28
    and you get some pretty crazy stuff.
  • 14:28 - 14:29
    (Laughter)
  • 14:30 - 14:33
    Or, Mike has done some other experiments
  • 14:33 - 14:37
    in which he takes that cloud image,
  • 14:37 - 14:41
    hallucinates, zooms, hallucinates,
    zooms, hallucinates, zooms.
  • 14:41 - 14:42
    And in this way,
  • 14:42 - 14:45
    you can get a sort of fugue state
    of the network, I suppose,
  • 14:46 - 14:49
    or a sort of free association,
  • 14:49 - 14:51
    in which the network
    is eating its own tail.
  • 14:51 - 14:55
    So every image is now the basis for,
  • 14:55 - 14:56
    "What do I think I see next?
  • 14:56 - 14:59
    What do I think I see next?
    What do I think I see next?"
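The hallucinate-zoom feedback loop, in which every output becomes the next input, can be sketched with a toy stand-in. Here "amplifying what the network sees" is faked by snapping values toward a few prototype levels; this illustrates only the feedback structure, not a real network.

```python
def hallucinate(x, prototypes, rate=0.3):
    """Toy stand-in for one 'amplify whatever you see' step:
    nudge each value toward the nearest prototype level."""
    return [v + rate * (min(prototypes, key=lambda p: abs(p - v)) - v)
            for v in x]

def zoom(x):
    """Crop the middle half and stretch it back to full size."""
    n = len(x)
    mid = x[n // 4 : n - n // 4]
    return [mid[int(i * len(mid) / n)] for i in range(n)]

# every frame is the basis for "what do I think I see next?"
frame = [0.1, 0.4, 0.45, 0.8, 0.75, 0.2, 0.5, 0.9]
for _ in range(5):
    frame = hallucinate(zoom(frame), prototypes=[0.0, 0.5, 1.0])
```

Because each frame feeds the next, small preferences compound over iterations, which is the network eating its own tail.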
  • 14:59 - 15:02
    I showed this for the first time in public
  • 15:02 - 15:08
    to a group at a lecture in Seattle
    called "Higher Education" --
  • 15:08 - 15:10
    this was right after
    marijuana was legalized.
  • 15:10 - 15:13
    (Laughter)
  • 15:15 - 15:17
    So I'd like to finish up quickly
  • 15:17 - 15:21
    by just noting that this technology
    is not constrained to the visual.
  • 15:21 - 15:25
    I've shown you purely visual examples
    because they're really fun to look at.
  • 15:25 - 15:27
    It's not a purely visual technology.
  • 15:27 - 15:29
    Our artist collaborator, Ross Goodwin,
  • 15:29 - 15:33
    has done experiments involving
    a camera that takes a picture,
  • 15:33 - 15:37
    and then a computer in his backpack
    writes a poem using neural networks,
  • 15:37 - 15:39
    based on the contents of the image.
  • 15:39 - 15:42
    And that poetry neural network
    has been trained
  • 15:42 - 15:44
    on a large corpus of 20th-century poetry.
  • 15:44 - 15:46
    And the poetry is, you know,
  • 15:46 - 15:48
    I think, kind of not bad, actually.
  • 15:48 - 15:49
    (Laughter)
  • 15:49 - 15:50
    In closing,
  • 15:50 - 15:53
    I think that Michelangelo
  • 15:53 - 15:54
    was right;
  • 15:54 - 15:57
    perception and creativity
    are very intimately connected.
  • 15:58 - 16:00
    What we've just seen are neural networks
  • 16:00 - 16:03
    that are entirely trained to discriminate,
  • 16:03 - 16:05
    or to recognize different
    things in the world,
  • 16:05 - 16:08
    able to be run in reverse, to generate.
  • 16:08 - 16:10
    One of the things that suggests to me
  • 16:10 - 16:12
    is not only that
    Michelangelo really did see
  • 16:12 - 16:15
    the sculpture in the blocks of stone,
  • 16:15 - 16:18
    but that any creature,
    any being, any alien
  • 16:18 - 16:22
    that is able to do
    perceptual acts of that sort
  • 16:22 - 16:23
    is also able to create
  • 16:23 - 16:27
    because it's exactly the same
    machinery that's used in both cases.
  • 16:27 - 16:31
    Also, I think that perception
    and creativity are by no means
  • 16:31 - 16:33
    uniquely human.
  • 16:33 - 16:36
    We start to have computer models
    that can do exactly these sorts of things.
  • 16:36 - 16:40
    And that ought to be unsurprising;
    the brain is computational.
  • 16:40 - 16:41
    And finally,
  • 16:41 - 16:46
    computing began as an exercise
    in designing intelligent machinery.
  • 16:46 - 16:48
    It was very much modeled after the idea
  • 16:48 - 16:51
    of how we could make machines intelligent.
  • 16:52 - 16:54
    And we finally are starting to fulfill now
  • 16:54 - 16:56
    some of the promises
    of those early pioneers,
  • 16:56 - 16:58
    of Turing and von Neumann
  • 16:58 - 17:00
    and McCulloch and Pitts.
  • 17:00 - 17:04
    And I think that computing
    is not just about accounting
  • 17:04 - 17:06
    or playing Candy Crush or something.
  • 17:06 - 17:09
    From the beginning,
    we modeled computers after our minds.
  • 17:09 - 17:12
    And they give us both the ability
    to understand our own minds better
  • 17:12 - 17:14
    and to extend them.
  • 17:15 - 17:16
    Thank you very much.
  • 17:16 - 17:22
    (Applause)
Title:
How computers are learning to be creative
Speaker:
Blaise Agüera y Arcas
Video Language:
English
Project:
TEDTalks
Duration:
17:34