
This computer is learning to read your mind

  • 0:00 - 0:03
    Greg Gage: Mind-reading.
    You've seen this in sci-fi movies:
  • 0:03 - 0:05
    machines that can read our thoughts.
  • 0:05 - 0:07
    However, there are devices today
  • 0:07 - 0:09
    that can read the electrical
    activity from our brains.
  • 0:09 - 0:11
    We call this the EEG.
  • 0:12 - 0:15
    Is there information
    contained in these brainwaves?
  • 0:15 - 0:17
    And if so, could we train a computer
    to read our thoughts?
  • 0:17 - 0:20
    My buddy Nathan
    has been working to hack the EEG
  • 0:20 - 0:22
    to build a mind-reading machine.
  • 0:22 - 0:24
    [DIY Neuroscience]
  • 0:25 - 0:26
    So this is how the EEG works.
  • 0:27 - 0:28
    Inside your head is a brain,
  • 0:28 - 0:31
    and that brain is made
    out of billions of neurons.
  • 0:31 - 0:34
    Each of those neurons sends
    an electrical message to each other.
  • 0:34 - 0:37
    These small messages can combine
    to make an electrical wave
  • 0:37 - 0:38
    that we can detect on a monitor.
  • 0:38 - 0:41
    Now traditionally, the EEG
    can tell us large-scale things,
  • 0:41 - 0:44
    for example if you're asleep
    or if you're alert.
  • 0:44 - 0:45
    But can it tell us anything else?
  • 0:45 - 0:47
    Can it actually read our thoughts?
  • 0:47 - 0:48
    We're going to test this,
  • 0:48 - 0:51
    and we're not going to start
    with some complex thoughts.
  • 0:51 - 0:53
    We're going to do something very simple.
  • 0:53 - 0:56
    Can we interpret what someone is seeing
    using only their brainwaves?
  • 0:56 - 0:59
    Nathan's going to begin by placing
    electrodes on Christy's head.
  • 0:59 - 1:01
    Nathan: My life is tangled.
  • 1:01 - 1:02
    (Laughter)
  • 1:02 - 1:05
    GG: And then he's going to show her
    a bunch of pictures
  • 1:05 - 1:06
    from four different categories.
  • 1:06 - 1:09
    Nathan: Face, house, scenery
    and weird pictures.
  • 1:09 - 1:11
    GG: As we show Christy
    hundreds of these images,
  • 1:12 - 1:15
    we are also capturing the electrical waves
    onto Nathan's computer.
  • 1:15 - 1:18
    We want to see if we can detect
    any visual information about the photos
  • 1:18 - 1:20
    contained in the brainwaves,
  • 1:20 - 1:22
    so when we're done,
    we're going to see if the EEG
  • 1:22 - 1:25
    can tell us what kind of picture
    Christy is looking at,
  • 1:25 - 1:28
    and if it does, each category
    should trigger a different brain signal.
  • 1:28 - 1:31
    OK, so we collected all the raw EEG data,
  • 1:31 - 1:32
    and this is what we got.
  • 1:33 - 1:36
    It all looks pretty messy,
    so let's arrange them by picture.
  • 1:37 - 1:39
    Now, still a bit too noisy
    to see any differences,
  • 1:40 - 1:43
    but if we average the EEG
    across all image types
  • 1:43 - 1:45
    by aligning them
    to when the image first appeared,
  • 1:45 - 1:47
    we can remove this noise,
  • 1:47 - 1:49
    and pretty soon, we can see
    some dominant patterns
  • 1:49 - 1:51
    emerge for each category.
  • 1:51 - 1:53
    Now the signals all
    still look pretty similar.
  • 1:53 - 1:54
    Let's take a closer look.
  • 1:54 - 1:57
    About a hundred milliseconds
    after the image comes on,
  • 1:57 - 1:59
    we see a positive bump in all four cases,
  • 1:59 - 2:02
    and we call this the P100,
    and what we think that is
  • 2:02 - 2:05
    is what happens in your brain
    when you recognize an object.
  • 2:05 - 2:07
    But damn, look at
    that signal for the face.
  • 2:07 - 2:09
    It looks different than the others.
  • 2:09 - 2:12
    There's a negative dip
    about 170 milliseconds
  • 2:12 - 2:13
    after the image comes on.
  • 2:13 - 2:15
    What could be going on here?
  • 2:15 - 2:18
    Research shows that our brain
    has a lot of neurons that are dedicated
  • 2:19 - 2:20
    to recognizing human faces,
  • 2:20 - 2:23
    so this N170 spike could be
    all those neurons
  • 2:23 - 2:25
    firing at once in the same location,
  • 2:25 - 2:27
    and we can detect that in the EEG.
  • 2:27 - 2:29
    So there are two takeaways here.
  • 2:29 - 2:32
    One, our eyes can't really detect
    the differences in patterns
  • 2:32 - 2:34
    without averaging out the noise,
  • 2:34 - 2:36
    and two, even after removing the noise,
  • 2:36 - 2:39
    our eyes can only pick up
    the signals associated with faces.
  • 2:39 - 2:41
    So this is where we turn
    to machine learning.
  • 2:41 - 2:45
    Now, our eyes are not very good
    at picking up patterns in noisy data,
  • 2:45 - 2:48
    but machine learning algorithms
    are designed to do just that,
  • 2:48 - 2:51
    so could we take a lot of pictures
    and a lot of data
  • 2:51 - 2:53
    and feed it in and train a computer
  • 2:53 - 2:57
    to be able to interpret
    what Christy is looking at in real time?
  • 2:57 - 3:01
    We're trying to code the information
    that's coming out of her EEG
  • 3:01 - 3:02
    in real time
  • 3:02 - 3:05
    and predict what it is
    that her eyes are looking at.
  • 3:05 - 3:07
    And if it works, what we should see
  • 3:07 - 3:09
    is every time that she gets
    a picture of scenery,
  • 3:09 - 3:11
    it should say scenery,
    scenery, scenery, scenery.
  • 3:11 - 3:13
    A face -- face, face, face, face,
  • 3:13 - 3:17
    but it's not quite working that way,
    is what we're discovering.
  • 3:21 - 3:25
    (Laughter)
  • 3:25 - 3:26
    OK.
  • 3:26 - 3:30
    Director: So what's going on here?
    GG: We need a new career, I think.
  • 3:30 - 3:31
    (Laughter)
  • 3:31 - 3:33
    OK, so that was a massive failure.
  • 3:33 - 3:36
    But we're still curious:
    How far could we push this technology?
  • 3:36 - 3:38
    And we looked back at what we did.
  • 3:38 - 3:41
    We noticed that the data was coming
    into our computer very quickly,
  • 3:41 - 3:43
    without any timing
    of when the images came on,
  • 3:43 - 3:46
    and that's the equivalent
    of reading a very long sentence
  • 3:46 - 3:48
    without spaces between the words.
  • 3:48 - 3:49
    It would be hard to read,
  • 3:49 - 3:53
    but once we add the spaces,
    individual words appear
  • 3:53 - 3:55
    and it becomes a lot more understandable.
  • 3:55 - 3:57
    But what if we cheat a little bit?
  • 3:57 - 4:01
    By using a sensor, we can tell
    the computer when the image first appears.
  • 4:01 - 4:04
    That way, the brainwave stops being
    a continuous stream of information,
  • 4:04 - 4:07
    and instead becomes
    individual packets of meaning.
  • 4:07 - 4:09
    Also, we're going
    to cheat a little bit more,
  • 4:09 - 4:11
    by limiting the categories to two.
  • 4:11 - 4:14
    Let's see if we can do
    some real-time mind-reading.
  • 4:14 - 4:15
    In this new experiment,
  • 4:15 - 4:17
    we're going to constrict it
    a little bit more
  • 4:17 - 4:19
    so that we know the onset of the image
  • 4:19 - 4:23
    and we're going to limit
    the categories to "face" or "scenery."
  • 4:23 - 4:25
    Nathan: Face. Correct.
  • 4:26 - 4:27
    Scenery. Correct.
  • 4:28 - 4:31
    GG: So right now,
    every time the image comes on,
  • 4:31 - 4:33
    we're taking a picture
    of the onset of the image
  • 4:33 - 4:35
    and decoding the EEG.
  • 4:35 - 4:36
    It's getting correct.
  • 4:36 - 4:38
    Nathan: Yes. Face. Correct.
  • 4:38 - 4:40
    GG: So there is information
    in the EEG signal, which is cool.
  • 4:40 - 4:43
    We just had to align it
    to the onset of the image.
  • 4:43 - 4:45
    Nathan: Scenery. Correct.
  • 4:47 - 4:48
    Face. Yeah.
  • 4:49 - 4:51
    GG: This means there is some
    information there,
  • 4:51 - 4:54
    so if we know at what time
    the picture came on,
  • 4:54 - 4:56
    we can tell what type of picture it was,
  • 4:56 - 5:01
    possibly, at least on average,
    by looking at these evoked potentials.
  • 5:01 - 5:02
    Nathan: Exactly.
  • 5:02 - 5:06
    GG: If you had told me at the beginning
    of this project this was possible,
  • 5:06 - 5:07
    I would have said no way.
  • 5:07 - 5:09
    I literally did not think
    we could do this.
  • 5:09 - 5:11
    Did our mind-reading
    experiment really work?
  • 5:11 - 5:13
    Yes, but we had to do a lot of cheating.
  • 5:13 - 5:16
    It turns out you can find
    some interesting things in the EEG,
  • 5:16 - 5:18
    for example if you're
    looking at someone's face,
  • 5:18 - 5:21
    but it does have a lot of limitations.
  • 5:21 - 5:24
    Perhaps advances in machine learning
    will make huge strides,
  • 5:24 - 5:27
    and one day we will be able to decode
    what's going on in our thoughts.
  • 5:27 - 5:31
    But for now, the next time a company says
    that they can harness your brainwaves
  • 5:31 - 5:33
    to be able to control devices,
  • 5:33 - 5:36
    it is your right, it is your duty
    to be skeptical.
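
Note on the averaging step (1:40-1:51) and the onset-alignment "cheat" (3:55-4:07): this is standard evoked-potential analysis. A short window of EEG is cut around every image onset, and the windows within each picture category are averaged so the random background activity cancels while the stimulus-locked response, such as the P100 bump or the face-specific N170 dip, survives. The sketch below is a minimal illustration of that idea; the array names, sampling rate, and window length are assumptions for the example, not details taken from the video.

```python
import numpy as np

# Minimal sketch of onset-aligned epoching and averaging.
# Assumed: a continuous single-channel EEG array and the sample index
# of each image onset (e.g. from the light sensor mentioned at 3:57).

FS = 1000                      # sampling rate in Hz (assumed)
PRE, POST = 100, 500           # window: 100 ms before to 500 ms after onset

def extract_epochs(eeg, onset_samples):
    """Cut one fixed-length window of EEG around every image onset."""
    return np.stack([eeg[s - PRE : s + POST] for s in onset_samples])

def evoked_average(eeg, onsets_by_category):
    """Average the epochs separately for each picture category,
    so noise cancels and the evoked response emerges."""
    return {cat: extract_epochs(eeg, onsets).mean(axis=0)
            for cat, onsets in onsets_by_category.items()}

# Demo with synthetic data standing in for the real recording:
rng = np.random.default_rng(0)
eeg = rng.normal(size=600_000)                    # ~10 minutes of fake EEG
onsets = {"face": rng.integers(PRE, len(eeg) - POST, 100),
          "house": rng.integers(PRE, len(eeg) - POST, 100)}
erps = evoked_average(eeg, onsets)
print({cat: wave.shape for cat, wave in erps.items()})   # each: (600,)
```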
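Note on the simplified two-category test (4:11-4:35): once each epoch is aligned to the image onset, the real-time "mind-reading" amounts to training a classifier to label each epoch "face" or "scenery." The video never says which algorithm Nathan used, so the sketch below stands in with a plain logistic-regression pipeline from scikit-learn on synthetic data; it only illustrates the shape of the problem, with chance accuracy at 50 percent for two classes.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def classify_epochs(epochs, labels):
    """epochs: (n_trials, n_samples) onset-aligned EEG; labels: 0/1 array.
    Returns 5-fold cross-validated accuracy of a simple linear classifier."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, epochs, labels, cv=5).mean()

# Synthetic demo: give "face" trials a small N170-like dip around 170 ms
# so the classifier has something to find (illustrative, not real data).
rng = np.random.default_rng(1)
n_trials, n_samples = 200, 600
X = rng.normal(size=(n_trials, n_samples))
y = rng.integers(0, 2, n_trials)          # 0 = scenery, 1 = face
X[y == 1, 170:200] -= 1.0                 # fake face-specific deflection
print(f"cross-validated accuracy: {classify_epochs(X, y):.2f}")
```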
Title:
This computer is learning to read your mind
Speaker:
DIY Neuroscience
Description:

Modern technology lets neuroscientists peer into the human brain, but can it also read minds? Armed with the device known as an electroencephalogram, or EEG, and some computing wizardry, our intrepid neuroscientists attempt to peer into a subject's thoughts.

Video Language:
English
Team:
closed TED
Project:
TED Series
Duration:
05:51

