How AI could become an extension of your mind

  • 0:02 - 0:04
    Ever since computers were invented,
  • 0:04 - 0:07
    we've been trying to make them
    smarter and more powerful.
  • 0:07 - 0:10
    From the abacus, to room-sized machines,
  • 0:10 - 0:13
    to desktops, to computers in our pockets.
  • 0:13 - 0:16
    And we are now designing
    artificial intelligence to automate tasks
  • 0:16 - 0:18
    that would require human intelligence.
  • 0:20 - 0:21
    If you look at the history of computing,
  • 0:22 - 0:25
    we've always treated computers
    as external devices
  • 0:25 - 0:27
    that compute and act on our behalf.
  • 0:28 - 0:34
    What I want to do is weave
    computing, AI and the internet as part of us.
  • 0:34 - 0:37
    As part of human cognition,
  • 0:37 - 0:39
    freeing us to interact
    with the world around us.
  • 0:40 - 0:42
    Integrate human and machine intelligence
  • 0:42 - 0:47
    right inside our own bodies to augment us,
    instead of diminishing us or replacing us.
  • 0:49 - 0:53
    Could we combine what people do best,
    such as creative and intuitive thinking,
  • 0:53 - 0:54
    with what computers do best,
  • 0:54 - 0:58
    such as processing information
    and perfectly memorizing stuff?
  • 0:58 - 1:01
    Could this whole be better
    than the sum of its parts?
  • 1:02 - 1:05
    We have a device
    that could make that possible.
  • 1:05 - 1:08
    It's called AlterEgo,
    and it's a wearable device
  • 1:08 - 1:11
    that gives you the experience
    of a conversational AI
  • 1:11 - 1:12
    that lives inside your head,
  • 1:12 - 1:16
    that you can talk to, much like
    talking to yourself internally.
  • 1:17 - 1:20
    We have a new prototype
    that we're showing here,
  • 1:20 - 1:23
    for the first time at TED,
    and here's how it works.
  • 1:24 - 1:26
    Normally, when we speak,
  • 1:26 - 1:28
    the brain sends neurosignals
    through the nerves
  • 1:28 - 1:31
    to your internal speech systems,
  • 1:31 - 1:34
    to activate them and your vocal cords
    to produce speech.
  • 1:35 - 1:37
    One of the most complex
    cognitive and motor tasks
  • 1:37 - 1:39
    that we do as human beings.
  • 1:40 - 1:42
    Now, imagine talking to yourself
  • 1:42 - 1:45
    without vocalizing,
    without moving your mouth,
  • 1:45 - 1:46
    without moving your jaw,
  • 1:46 - 1:49
    but by simply articulating
    those words internally.
  • 1:50 - 1:55
    Thereby very subtly engaging
    your internal speech systems,
  • 1:55 - 1:58
    such as your tongue
    and back of your palate.
  • 1:59 - 2:01
    When that happens,
  • 2:01 - 2:05
    the brain sends extremely weak signals
    to these internal speech systems.
  • 2:05 - 2:07
    AlterEgo has sensors,
  • 2:07 - 2:11
    embedded in a thin, flexible
    and transparent plastic device
  • 2:11 - 2:13
    that sits on your neck
    just like a sticker.
  • 2:16 - 2:18
    These sensors pick up
    on these internal signals
  • 2:18 - 2:20
    sourced deep within the mouth cavity,
  • 2:20 - 2:22
    right from the surface of the skin.
  • 2:24 - 2:26
    An AI program running in the background
  • 2:26 - 2:29
    then tries to figure out
    what the user's trying to say.
  • 2:29 - 2:33
    It then feeds back an answer to the user
  • 2:33 - 2:34
    by means of bone conduction,
  • 2:34 - 2:38
    audio conducted through the skull
    into the user's inner ear,
  • 2:38 - 2:39
    that the user hears,
  • 2:39 - 2:42
    overlaid on top of the user's
    natural hearing of the environment,
  • 2:42 - 2:44
    without blocking it.
  • 2:47 - 2:50
    The combination of all these parts,
    the input, the output and the AI,
  • 2:50 - 2:54
    gives a net subjective experience
    of an interface inside your head
  • 2:54 - 2:57
    that you can talk to,
    much like talking to yourself.
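The pipeline described above -- sensors pick up weak signals at the skin, an AI decodes the intended words, and the answer is played back as audio -- can be sketched in Python. Everything here is a hypothetical stand-in: the function names, the toy lookup-table decoder, and the canned answer are illustrations of the input/AI/output structure, not the real AlterEgo system.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SensorFrame:
    """One window of surface signal samples picked up at the skin."""
    samples: List[float]


def decode_silent_speech(frames: List[SensorFrame]) -> str:
    """Toy decoder: map each frame's mean signal energy to a word.

    The real device would run a learned sequence model over the
    sensor stream; a lookup table stands in for that model here.
    """
    vocab = {0: "weather", 1: "vancouver", 2: "today"}
    tokens = []
    for frame in frames:
        energy = sum(abs(s) for s in frame.samples) / len(frame.samples)
        tokens.append(vocab[int(energy) % len(vocab)])
    return " ".join(tokens)


def answer(query: str) -> str:
    """Stand-in for the AI that searches and composes a reply."""
    if "weather" in query:
        return "It's 50 degrees and rainy in Vancouver."
    return "Sorry, I didn't catch that."


def respond_via_bone_conduction(text: str) -> str:
    """Placeholder for the audio output path; returns what would be played."""
    return text


# Pipeline: sense -> decode -> answer -> play back
frames = [SensorFrame([0.1, 0.2]), SensorFrame([1.0, 1.4]), SensorFrame([2.0, 2.2])]
query = decode_silent_speech(frames)
reply = respond_via_bone_conduction(answer(query))
```

The point of the structure is the one the talk makes: the decoder only ever sees signals the user deliberately produced, and the reply travels back out through audio rather than anything written to the brain.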
  • 3:00 - 3:04
    Just to be very clear, the device
    does not record or read your thoughts.
  • 3:04 - 3:07
    It records deliberate information
    that you want to communicate
  • 3:07 - 3:10
    through deliberate engagement
    of your internal speech systems.
  • 3:10 - 3:13
    People don't want to be read;
    they want to write.
  • 3:13 - 3:14
    Which is why we designed the system
  • 3:14 - 3:18
    to deliberately record
    from the peripheral nervous system.
  • 3:20 - 3:23
    Which is why the control
    in all situations resides with the user.
  • 3:25 - 3:28
    I want to stop here for a second
    and show you a live demo.
  • 3:29 - 3:32
    What I'm going to do is,
    I'm going to ask Eric a question.
  • 3:32 - 3:34
    And he's going to search
    for that information
  • 3:34 - 3:38
    without vocalizing, without typing,
    without moving his fingers,
  • 3:38 - 3:39
    without moving his mouth.
  • 3:39 - 3:42
    Simply by internally asking that question.
  • 3:42 - 3:45
    The AI will then figure out the answer
    and feed it back to Eric,
  • 3:45 - 3:47
    through audio, through the device.
  • 3:48 - 3:51
    While you see a laptop
    in front of him, he's not using it.
  • 3:51 - 3:53
    Everything lives on the device.
  • 3:53 - 3:57
    All he needs is that sticker device
    to interface with the AI and the internet.
  • 3:59 - 4:03
    So, Eric, what's the weather
    in Vancouver like, right now?
  • 4:11 - 4:13
    What you see on the screen
  • 4:13 - 4:17
    are the words that Eric
    is speaking to himself right now.
  • 4:17 - 4:19
    This is happening in real time.
  • 4:20 - 4:23
    Eric: It's 50 degrees
    and rainy here in Vancouver.
  • 4:23 - 4:26
    Arnav Kapur: What happened is
    that the AI sent the answer
  • 4:26 - 4:28
    through audio, through
    the device, back to Eric.
  • 4:29 - 4:32
    What could the implications
    of something like this be?
  • 4:33 - 4:36
    Imagine perfectly memorizing things,
  • 4:36 - 4:39
    where you perfectly record information
    that you silently speak,
  • 4:39 - 4:41
    and then hear it later when you want to,
  • 4:41 - 4:43
    internally searching for information,
  • 4:43 - 4:46
    crunching numbers at the speed computers do,
  • 4:46 - 4:48
    silently texting other people.
  • 4:49 - 4:51
    Suddenly becoming multilingual,
  • 4:51 - 4:54
    so that you internally
    speak in one language,
  • 4:54 - 4:56
    and hear the translation
    in your head in another.
  • 4:57 - 4:59
    The potential could be far-reaching.
  • 5:01 - 5:03
    There are millions of people
    around the world
  • 5:03 - 5:05
    who struggle with using natural speech.
  • 5:05 - 5:09
    People with conditions such as ALS,
    or Lou Gehrig's disease,
  • 5:09 - 5:10
    stroke and oral cancer,
  • 5:10 - 5:12
    amongst many other conditions.
  • 5:14 - 5:18
    For them, communicating is
    a painstakingly slow and tiring process.
  • 5:19 - 5:20
    This is Doug.
  • 5:20 - 5:23
    Doug was diagnosed with ALS
    about 12 years ago
  • 5:23 - 5:25
    and has since lost the ability to speak.
  • 5:25 - 5:27
    Today, he uses an on-screen keyboard
  • 5:27 - 5:31
    where he types in individual letters
    using his head movements.
  • 5:31 - 5:34
    And it takes several minutes
    to communicate a single sentence.
  • 5:35 - 5:36
    So we went to Doug and asked him
  • 5:36 - 5:41
    what were the first words
    he'd like to use or say, using our system.
  • 5:42 - 5:45
    Perhaps a greeting, like,
    "Hello, how are you?"
  • 5:45 - 5:47
    Or indicate that he needed
    help with something.
  • 5:49 - 5:52
    What Doug said that he wanted
    to use our system for
  • 5:52 - 5:55
    was to reboot the old system he had,
    because that old system kept on crashing.
  • 5:55 - 5:57
    (Laughter)
  • 5:58 - 6:00
    We never could have predicted that.
  • 6:01 - 6:05
    I'm going to show you a short clip of Doug
    using our system for the first time.
  • 6:10 - 6:12
    (Voice) Reboot computer.
  • 6:14 - 6:16
    AK: What you just saw there
  • 6:16 - 6:19
    was Doug communicating or speaking
    in real time for the first time
  • 6:19 - 6:21
    since he lost the ability to speak.
  • 6:22 - 6:24
    There are millions of people
  • 6:24 - 6:27
    who might be able to communicate
    in real time like Doug,
  • 6:27 - 6:30
    with other people, with their friends
    and with their families.
  • 6:31 - 6:35
    My hope is to be able to help them
    express their thoughts and ideas.
  • 6:37 - 6:39
    I believe computing, AI and the internet
  • 6:39 - 6:43
    would disappear into us
    as extensions of our cognition,
  • 6:43 - 6:46
    instead of being external
    entities or adversaries,
  • 6:46 - 6:49
    amplifying human ingenuity,
  • 6:49 - 6:53
    giving us unimaginable abilities
    and unlocking our true potential.
  • 6:54 - 6:58
    And perhaps even freeing us
    to become better at being human.
  • 6:58 - 7:00
    Thank you so much.
  • 7:00 - 7:06
    (Applause)
  • 7:06 - 7:07
    Shoham Arad: Come over here.
  • 7:11 - 7:12
    OK.
  • 7:14 - 7:17
    I want to ask you a couple of questions,
    they're going to clear the stage.
  • 7:17 - 7:21
    I feel like this is amazing,
    it's innovative,
  • 7:21 - 7:24
    it's creepy, it's terrifying.
  • 7:24 - 7:27
    Can you tell us what I think ...
  • 7:27 - 7:29
    I think there are some
    uncomfortable feelings around this.
  • 7:29 - 7:32
    Tell us, is this reading your thoughts,
  • 7:32 - 7:33
    will it in five years,
  • 7:33 - 7:36
    is there a weaponized version of this,
    what does it look like?
  • 7:37 - 7:40
    AK: So our first design principle,
    before we started working on this,
  • 7:40 - 7:44
    was to not render ethics
    as an afterthought.
  • 7:44 - 7:47
    So we wanted to bake ethics
    right into the design.
  • 7:47 - 7:48
    We flipped the design.
  • 7:48 - 7:50
    Instead of reading
    from the brain directly,
  • 7:50 - 7:53
    we're reading from
    the voluntary nervous system
  • 7:53 - 7:56
    that you deliberately have to engage
    to communicate with the device,
  • 7:56 - 7:59
    while still bringing the benefits
    of a thinking or a thought device.
  • 7:59 - 8:01
    The best of both worlds in a way.
  • 8:01 - 8:04
    SA: OK, I think people are going to have
    a lot more questions for you.
  • 8:04 - 8:06
    Also, you said that it's a sticker.
  • 8:06 - 8:08
    So right now it sits just right here?
  • 8:08 - 8:10
    Is that the final iteration,
  • 8:10 - 8:13
    what do you hope the final design will look like?
  • 8:14 - 8:19
    AK: Our goal is for the technology
    to disappear completely.
  • 8:19 - 8:20
    SA: What does that mean?
  • 8:20 - 8:23
    AK: If you're wearing it,
    I shouldn't be able to see it.
  • 8:23 - 8:26
    You don't want technology on your face,
    you want it in the background,
  • 8:26 - 8:28
    to augment you in the background.
  • 8:28 - 8:31
    So we have a sticker version
    that conforms to the skin,
  • 8:31 - 8:32
    that looks like the skin,
  • 8:32 - 8:34
    but we're trying to make
    an even smaller version
  • 8:34 - 8:35
    that would sit right here.
  • 8:37 - 8:38
    SA: OK.
  • 8:38 - 8:41
    I feel like if anyone has any questions
    they want to ask Arnav,
  • 8:41 - 8:43
    he'll be here all week.
  • 8:43 - 8:44
    OK, thank you so much, Arnav.
  • 8:44 - 8:46
    AK: Thanks, Shoham.
Title:
How AI could become an extension of your mind
Speaker:
Arnav Kapur
Description:

Try talking to yourself without opening your mouth, by simply saying words internally. What if you could search the internet like that -- and get an answer back? In the first live public demo of his new technology, TED Fellow Arnav Kapur introduces AlterEgo: a wearable AI device with the potential to let you silently talk to and get information from a computer system, like a voice inside your head. Learn more about how the device works and the far-reaching implications of this new kind of human-computer interaction.

Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
08:58