The thrilling potential of SixthSense technology

  • 0:00 - 0:02
    We grew up
  • 0:02 - 0:05
    interacting with the physical
    objects around us.
  • 0:05 - 0:08
    There are an enormous number of them
    that we use every day.
  • 0:09 - 0:12
    Unlike most of our computing devices,
  • 0:12 - 0:14
    these objects are much more fun to use.
  • 0:16 - 0:18
    When you talk about objects,
  • 0:18 - 0:21
    one other thing automatically
    comes attached to that thing,
  • 0:21 - 0:23
    and that is gestures:
  • 0:23 - 0:25
    how we manipulate these objects,
  • 0:25 - 0:28
    how we use these objects in everyday life.
  • 0:28 - 0:31
    We use gestures not only to interact
    with these objects,
  • 0:31 - 0:33
    but we also use them
    to interact with each other.
  • 0:33 - 0:37
    A gesture of "Namaste!",
    maybe, to respect someone, or maybe,
  • 0:37 - 0:39
    in India I don't need to teach
    a kid that this means
  • 0:39 - 0:41
    "four runs" in cricket.
  • 0:41 - 0:44
    It comes as a part
    of our everyday learning.
  • 0:44 - 0:48
    So, from the beginning,
    I have been very interested in
  • 0:48 - 0:52
    how our knowledge
    about everyday objects and gestures,
  • 0:52 - 0:54
    and how we use these objects,
  • 0:54 - 0:57
    can be leveraged in our interactions
    with the digital world.
  • 0:57 - 1:00
    Rather than using a keyboard and mouse,
  • 1:00 - 1:03
    why can I not use my computer
  • 1:03 - 1:06
    in the same way that I interact
    in the physical world?
  • 1:06 - 1:09
    So, I started this exploration
    around eight years back,
  • 1:09 - 1:12
    and it literally started
    with a mouse on my desk.
  • 1:12 - 1:18
    Rather than using it for my computer,
    I actually opened it.
  • 1:18 - 1:20
    Most of you might be aware
    that, in those days,
  • 1:20 - 1:22
    the mouse used to come with a ball inside,
  • 1:22 - 1:24
    and there were two rollers
  • 1:24 - 1:27
    that actually tell the computer
    where the ball is moving,
  • 1:27 - 1:29
    and, accordingly,
    where the mouse is moving.
  • 1:29 - 1:32
    So, I was interested in these two rollers,
  • 1:32 - 1:35
    and I actually wanted more, so I borrowed
    another mouse from a friend --
  • 1:35 - 1:37
    never returned it to him --
  • 1:37 - 1:39
    and I now had four rollers.
  • 1:39 - 1:42
    Interestingly, what I did
    with these rollers is,
  • 1:42 - 1:47
    basically, I took them out of these mice
    and then put them in one line.
  • 1:47 - 1:50
    I added some strings
    and pulleys and some springs.
  • 1:50 - 1:53
    What I got is basically
    a gesture-interface device
  • 1:53 - 1:57
    that actually acts
    as a motion-sensing device
  • 1:57 - 1:59
    made for two dollars.
  • 1:59 - 2:02
    So, here, whatever movement
    I do in my physical world
  • 2:02 - 2:05
    is actually replicated
    inside the digital world
  • 2:05 - 2:08
    just using this small device
    that I made, around eight years back,
  • 2:08 - 2:10
    in 2000.
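He doesn't explain on stage how four roller readings became a position, but if each string acts like a string potentiometer anchored at a known point, a hand position can be recovered by trilateration. A minimal sketch under that assumption -- the anchor positions and names here are illustrative, not details from the talk:

```python
# Hypothetical sketch: recover a 3D point from four string lengths,
# each string anchored at a known pulley position. Linearizing
# |p - a_i|^2 = l_i^2 against the first anchor gives a linear system.
import numpy as np

# Illustrative anchor (pulley) positions, in meters -- an assumption.
ANCHORS = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5],
])

def position_from_lengths(lengths, anchors=ANCHORS):
    lengths = np.asarray(lengths, dtype=float)
    a0, l0 = anchors[0], lengths[0]
    # 2 (a_i - a0) . p = l0^2 - l_i^2 + |a_i|^2 - |a0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = (l0**2 - lengths[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Sanity check: distances from a known point reproduce that point.
p_true = np.array([0.2, 0.1, 0.3])
print(position_from_lengths(np.linalg.norm(ANCHORS - p_true, axis=1)))
```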
  • 2:10 - 2:13
    Because I was interested
    in integrating these two worlds,
  • 2:13 - 2:14
    I thought of sticky notes.
  • 2:14 - 2:17
    I thought, "Why can I not connect
  • 2:17 - 2:19
    the normal interface
    of a physical sticky note
  • 2:19 - 2:21
    to the digital world?"
  • 2:21 - 2:23
    A message written
    on a sticky note to my mom,
  • 2:23 - 2:24
    on paper,
  • 2:24 - 2:26
    can come to me as an SMS,
  • 2:26 - 2:28
    or maybe a meeting reminder
  • 2:28 - 2:30
    automatically syncs
    with my digital calendar --
  • 2:30 - 2:33
    a to-do list that automatically
    syncs with you.
  • 2:33 - 2:36
    But you can also search
    in the digital world,
  • 2:36 - 2:38
    or maybe you can write a query, saying,
  • 2:38 - 2:40
    "What is Dr. Smith's address?"
  • 2:40 - 2:42
    and this small system
    actually prints it out --
  • 2:42 - 2:45
    so it actually acts like a paper
    input-output system,
  • 2:45 - 2:48
    just made out of paper.
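The plumbing behind this paper input-output system isn't described in the talk; one way to sketch it is a small dispatcher that takes the handwriting-recognition output and routes it to SMS, calendar, or a search-and-print path. The routing rules and helper names below are hypothetical:

```python
# Hypothetical dispatcher for a paper I/O system: route recognized
# handwriting to the digital action it implies.
import re

def route_note(text):
    text = text.strip()
    if text.endswith("?"):
        return ("print", search_answer(text))      # query -> print a reply
    if re.search(r"\bmeeting\b|\b\d{1,2}:\d{2}\b", text, re.I):
        return ("calendar", text)                  # reminder -> calendar
    return ("sms", text)                           # default: send as SMS

def search_answer(query):
    # Placeholder for a real lookup (contacts, web search, ...).
    return f"[answer to: {query}]"

print(route_note("What is Dr. Smith's address?"))
print(route_note("Meeting with advisor at 3:30"))
```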
  • 2:50 - 2:52
    In another exploration,
  • 2:52 - 2:55
    I thought of making a pen
    that can draw in three dimensions.
  • 2:55 - 2:59
    So, I implemented this pen
    that can help designers and architects
  • 2:59 - 3:01
    not only think in three dimensions,
  • 3:01 - 3:03
    but actually draw in them,
  • 3:03 - 3:05
    so that it's more intuitive
    to use that way.
  • 3:05 - 3:07
    Then I thought,
    "Why not make a Google Map,
  • 3:07 - 3:09
    but in the physical world?"
  • 3:09 - 3:12
    Rather than typing a keyword
    to find something,
  • 3:12 - 3:14
    I put my objects on top of it.
  • 3:14 - 3:17
    If I put a boarding pass, it will show me
    where the flight gate is.
  • 3:17 - 3:20
    A coffee cup will show
    where you can find more coffee,
  • 3:20 - 3:22
    or where you can trash the cup.
  • 3:22 - 3:25
    So, these were some
    of the earlier explorations I did
  • 3:25 - 3:28
    because the goal was to connect
    these two worlds seamlessly.
  • 3:29 - 3:31
    Among all these experiments,
  • 3:31 - 3:33
    there was one thing in common:
  • 3:33 - 3:37
    I was trying to bring
    a part of the physical world
  • 3:37 - 3:38
    to the digital world.
  • 3:38 - 3:40
    I was taking some part of the objects,
  • 3:40 - 3:43
    or any of the intuitiveness of real life,
  • 3:43 - 3:45
    and bringing them to the digital world,
  • 3:45 - 3:49
    because the goal was to make
    our computing interfaces more intuitive.
  • 3:49 - 3:54
    But then I realized that we humans
    are not actually interested in computing.
  • 3:54 - 3:57
    What we are interested in is information.
  • 3:57 - 3:59
    We want to know about things.
  • 3:59 - 4:01
    We want to know about
    dynamic things going on around us.
  • 4:01 - 4:06
    So, around last year --
    at the beginning of last year --
  • 4:06 - 4:09
    I started thinking, "Why can I not take
    this approach in the reverse way?"
  • 4:10 - 4:12
    Maybe, "How about I take my digital world
  • 4:12 - 4:17
    and paint the physical world
    with that digital information?"
  • 4:18 - 4:22
    Because pixels are actually, right now,
    confined in these rectangular devices
  • 4:22 - 4:24
    that fit in our pockets.
  • 4:24 - 4:26
    Why can I not remove this confinement
  • 4:26 - 4:29
    and take that to my everyday
    objects, everyday life
  • 4:29 - 4:31
    so that I don't need
    to learn a new language
  • 4:31 - 4:33
    for interacting with those pixels?
  • 4:34 - 4:37
    So, in order to realize this dream,
  • 4:37 - 4:40
    I actually thought of putting
    a big-size projector on my head.
  • 4:40 - 4:43
    I think that's why this is called
    a head-mounted projector, isn't it?
  • 4:43 - 4:45
    I took it very literally,
  • 4:45 - 4:47
    and took my bike helmet,
  • 4:47 - 4:50
    made a little cut over there so that
    the projector actually fits nicely.
  • 4:50 - 4:52
    So now, what I can do --
  • 4:52 - 4:56
    I can augment the world around me
    with this digital information.
  • 4:57 - 4:58
    But later,
  • 4:58 - 5:00
    I realized that I actually
    wanted to interact
  • 5:00 - 5:02
    with those digital pixels, also.
  • 5:02 - 5:05
    So I put a small camera over there
    that acts as a digital eye.
  • 5:05 - 5:07
    Later, we moved to a much better,
  • 5:07 - 5:09
    consumer-oriented pendant version of that,
  • 5:09 - 5:12
    that many of you now know
    as the SixthSense device.
  • 5:12 - 5:15
    But the most interesting thing
    about this particular technology
  • 5:15 - 5:19
    is that you can carry
    your digital world with you
  • 5:19 - 5:21
    wherever you go.
  • 5:21 - 5:24
    You can start using any surface,
    any wall around you,
  • 5:24 - 5:26
    as an interface.
  • 5:26 - 5:29
    The camera is actually tracking
    all your gestures.
  • 5:29 - 5:31
    Whatever you're doing with your hands,
  • 5:31 - 5:33
    it's understanding that gesture.
  • 5:33 - 5:36
    And, actually, if you see,
    there are some color markers
  • 5:36 - 5:38
    that we were using with it
    in the beginning version.
  • 5:38 - 5:40
    You can start painting on any wall.
  • 5:40 - 5:43
    You stop by a wall,
    and start painting on that wall.
  • 5:43 - 5:45
    But we are not only tracking
    one finger, here.
  • 5:45 - 5:49
    We are giving you the freedom
    of using both of your hands,
  • 5:49 - 5:52
    so you can actually use both of your hands
    to zoom into or zoom out
  • 5:52 - 5:54
    of a map, just by pinching.
  • 5:54 - 5:58
    The camera is actually just
    getting all the images --
  • 5:58 - 6:01
    it's doing the edge recognition
    and also the color recognition,
  • 6:01 - 6:04
    and so many other small algorithms
    are going on inside.
  • 6:04 - 6:06
    So, technically,
    it's a little bit complex,
  • 6:06 - 6:10
    but it gives you an output which is more
    intuitive to use, in some sense.
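He doesn't walk through the pipeline, but the color-marker tracking described here can be approximated in a few lines of OpenCV: threshold one marker color in HSV space and take the centroid of the largest blob as a fingertip. A minimal sketch -- the HSV bounds and kernel size are illustrative assumptions, not SixthSense's actual values:

```python
# Sketch of color-marker fingertip tracking: HSV threshold for one
# marker color, denoise the mask, return the largest blob's centroid.
import cv2
import numpy as np

LOWER = np.array([100, 120, 70])    # illustrative HSV lower bound
UPPER = np.array([130, 255, 255])   # illustrative HSV upper bound

def track_marker(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```

With one such centroid per hand, the two-handed pinch-zoom he mentions reduces to scaling the map by the ratio of successive distances between the two tracked markers.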
  • 6:10 - 6:12
    But I'm more excited that you can
    actually take it outside.
  • 6:12 - 6:15
    Rather than getting your camera
    out of your pocket,
  • 6:15 - 6:18
    you can just do the gesture
    of taking a photo,
  • 6:18 - 6:20
    and it takes a photo for you.
  • 6:20 - 6:24
    (Applause)
  • 6:24 - 6:25
    Thank you.
  • 6:26 - 6:28
    And later I can find a wall, anywhere,
  • 6:28 - 6:30
    and start browsing those photos
  • 6:30 - 6:33
    or maybe, "OK, I want to modify
    this photo a little bit
  • 6:33 - 6:35
    and send it as an email to a friend."
  • 6:35 - 6:37
    So, we are looking toward an era
  • 6:37 - 6:40
    where computing will actually merge
    with the physical world.
  • 6:40 - 6:43
    And, of course,
    if you don't have any surface,
  • 6:43 - 6:46
    you can start using your palm
    for simple operations.
  • 6:46 - 6:48
    Here, I'm dialing a phone number
    just using my hand.
  • 6:52 - 6:55
    The camera is actually not
    only understanding your hand movements,
  • 6:55 - 6:56
    but, interestingly,
  • 6:56 - 6:59
    is also able to understand what objects
    you are holding in your hand.
  • 7:00 - 7:04
    For example, in this case,
  • 7:04 - 7:06
    the book cover is matched
  • 7:06 - 7:09
    with so many thousands,
    or maybe millions of books online,
  • 7:09 - 7:11
    to check which book it is.
  • 7:11 - 7:12
    Once it has that information,
  • 7:12 - 7:14
    it finds out more reviews about that,
  • 7:14 - 7:17
    or maybe the New York Times
    has an audio overview of it,
  • 7:17 - 7:19
    so you can actually hear,
    on a physical book,
  • 7:19 - 7:21
    a review as sound.
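The matching pipeline behind this isn't described in the talk; one plausible sketch scores each known cover by the number of close local-feature matches and picks the best-scoring book. ORB, the brute-force matcher, and the distance cutoff are assumptions for illustration:

```python
# Sketch of cover matching with local features: grayscale images in,
# a match score out; the best-scoring cover in the database "wins."
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(query_gray, cover_gray):
    _, q_desc = orb.detectAndCompute(query_gray, None)
    _, c_desc = orb.detectAndCompute(cover_gray, None)
    if q_desc is None or c_desc is None:
        return 0
    matches = matcher.match(q_desc, c_desc)
    # Count sufficiently close descriptor pairs as the score.
    return sum(1 for m in matches if m.distance < 40)

# best_cover = max(cover_db, key=lambda img: match_score(frame, img))
```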
  • 7:21 - 7:23
    (Video) Famous talk
    at Harvard University --
  • 7:23 - 7:27
    This was Obama's visit last week to MIT.
  • 7:27 - 7:30
    (Video) And particularly I want
    to thank two outstanding MIT --
  • 7:30 - 7:34
    Pranav Mistry: So, I was watching
    the live [video] of his talk,
  • 7:34 - 7:35
    outside, on just a newspaper.
  • 7:36 - 7:39
    Your newspaper will show you
    live weather information
  • 7:39 - 7:42
    rather than having it outdated --
  • 7:42 - 7:44
    you have to check your computer
    in order to update it, right?
  • 7:45 - 7:49
    (Applause)
  • 7:49 - 7:52
    When I'm going back,
    I can just use my boarding pass
  • 7:52 - 7:54
    to check how much my flight
    has been delayed,
  • 7:54 - 7:56
    because at that particular time,
  • 7:56 - 7:58
    I don't feel like opening my iPhone
  • 7:58 - 8:00
    and checking out a particular icon.
  • 8:00 - 8:03
    And I think this technology
    will not only change the way --
  • 8:03 - 8:04
    (Laughter)
  • 8:04 - 8:05
    Yes.
  • 8:05 - 8:08
    It will change the way
    we interact with people, also,
  • 8:08 - 8:09
    not only the physical world.
  • 8:09 - 8:12
    The fun part is, I'm riding
    the Boston metro,
  • 8:12 - 8:17
    and playing a pong game inside the train,
    on the floor, right?
  • 8:17 - 8:18
    (Laughter)
  • 8:18 - 8:20
    And I think the imagination
    is the only limit
  • 8:20 - 8:22
    of what you can think of
  • 8:22 - 8:24
    when this kind of technology
    merges with real life.
  • 8:24 - 8:26
    But many of you argue, actually,
  • 8:26 - 8:29
    that all of our work is not
    only about physical objects.
  • 8:29 - 8:32
    We actually do lots
    of accounting and paper editing
  • 8:32 - 8:34
    and all those kinds of things;
    what about that?
  • 8:34 - 8:38
    And many of you are excited
    about the next-generation tablet computers
  • 8:38 - 8:40
    to come out in the market.
  • 8:40 - 8:42
    So, rather than waiting for that,
  • 8:42 - 8:45
    I actually made my own,
    just using a piece of paper.
  • 8:45 - 8:47
    So, what I did here
    is take a camera --
  • 8:47 - 8:51
    all the webcam cameras have
    a microphone inside the camera.
  • 8:51 - 8:54
    I removed the microphone from that,
  • 8:54 - 8:56
    and then just pinched that --
  • 8:56 - 8:59
    like I just made a clip
    out of the microphone --
  • 8:59 - 9:03
    and clipped that to a piece of paper,
    any paper that you find around.
  • 9:03 - 9:06
    So now the sound of the touch
  • 9:06 - 9:09
    is telling me exactly when
    I'm touching the paper.
  • 9:09 - 9:13
    But the camera is actually tracking
    where my fingers are moving.
  • 9:13 - 9:16
    You can of course watch movies.
  • 9:16 - 9:19
    (Video) "Good afternoon.
    My name is Russell,
  • 9:19 - 9:22
    and I am a Wilderness
    Explorer in Tribe 54."
  • 9:22 - 9:25
    PM: And you can of course play games.
  • 9:25 - 9:28
    (Car engine)
  • 9:28 - 9:31
    Here, the camera is actually understanding
    how you're holding the paper
  • 9:31 - 9:33
    and playing a car-racing game.
  • 9:33 - 9:36
    (Applause)
  • 9:37 - 9:40
    Many of you must already have
    thought, "OK, you can browse."
  • 9:40 - 9:42
    Yeah. Of course you can
    browse to any website,
  • 9:42 - 9:45
    or you can do all sorts
    of computing on a piece of paper
  • 9:45 - 9:46
    wherever you need it.
  • 9:46 - 9:49
    So, more interestingly,
  • 9:49 - 9:52
    I'm thinking about how we can
    take that in a more dynamic way.
  • 9:52 - 9:55
    When I come back to my desk,
    I can just pinch that information
  • 9:55 - 9:57
    back to my desktop
  • 9:57 - 10:00
    so I can use my full-size computer.
  • 10:00 - 10:02
    (Applause)
  • 10:02 - 10:05
    And why only computers?
    We can just play with paper.
  • 10:05 - 10:08
    The paper world is interesting to play with.
  • 10:08 - 10:10
    Here, I'm taking a part of a document,
  • 10:10 - 10:14
    and putting a second part over here,
    from a second place,
  • 10:14 - 10:19
    and I'm actually modifying the information
    that I have over there.
  • 10:19 - 10:24
    Yeah. And I say, "OK, this looks nice;
    let me print that thing out."
  • 10:24 - 10:26
    So I now have a print-out of that thing.
  • 10:26 - 10:29
    So the workflow is more intuitive,
  • 10:29 - 10:32
    the way we used to do it
    maybe 20 years back,
  • 10:32 - 10:35
    rather than switching between
    these two worlds, as we do now.
  • 10:35 - 10:38
    So, as a last thought,
  • 10:38 - 10:42
    I think that integrating
    information into everyday objects
  • 10:42 - 10:46
    will not only help us to get rid
    of the digital divide,
  • 10:46 - 10:48
    the gap between these two worlds,
  • 10:48 - 10:50
    but will also help us, in some way,
  • 10:50 - 10:52
    to stay human,
  • 10:52 - 10:55
    to be more connected
    to our physical world.
  • 10:58 - 11:01
    And it will actually help us
    not end up being machines
  • 11:01 - 11:03
    sitting in front of other machines.
  • 11:04 - 11:06
    That's all. Thank you.
  • 11:06 - 11:20
    (Applause)
  • 11:20 - 11:21
    Thank you.
  • 11:21 - 11:24
    (Applause)
  • 11:24 - 11:28
    Chris Anderson: So, Pranav,
    first of all, you're a genius.
  • 11:28 - 11:31
    This is incredible, really.
  • 11:31 - 11:34
    What are you doing with this?
    Is there a company being planned?
  • 11:34 - 11:36
    Or is this research forever, or what?
  • 11:36 - 11:38
    Pranav Mistry: So, there are
    lots of companies,
  • 11:38 - 11:41
    sponsor companies of the Media Lab,
    interested in taking this ahead
  • 11:41 - 11:43
    in one way or another.
  • 11:43 - 11:45
    Companies like mobile-phone operators
  • 11:45 - 11:47
    want to take this in a different way
    than the NGOs in India,
  • 11:47 - 11:50
    thinking, "Why can we only
    have 'Sixth Sense'?
  • 11:50 - 11:53
    We should have a 'Fifth Sense'
    for people missing a sense, who cannot speak.
  • 11:53 - 11:56
    This technology can be used for them
    to speak out in a different way,
  • 11:56 - 11:58
    maybe with a speaker system."
  • 11:58 - 12:00
    CA: What are your own plans?
    Are you staying at MIT,
  • 12:00 - 12:02
    or are you going to do
    something with this?
  • 12:02 - 12:05
    PM: I'm trying to make this
    more available to people
  • 12:05 - 12:08
    so that anyone can develop
    their own SixthSense device,
  • 12:08 - 12:11
    because the hardware is actually
    not that hard to manufacture
  • 12:11 - 12:13
    or to make on your own.
  • 12:13 - 12:16
    We will provide all the open source
    software for them,
  • 12:16 - 12:17
    maybe starting next month.
  • 12:17 - 12:19
    CA: Open source? Wow.
  • 12:19 - 12:24
    (Applause)
  • 12:24 - 12:27
    CA: Are you going to come back to India
    with some of this, at some point?
  • 12:27 - 12:29
    PM: Yeah. Yes, yes, of course.
  • 12:29 - 12:31
    CA: What are your plans? MIT? India?
  • 12:31 - 12:33
    How are you going to split
    your time going forward?
  • 12:34 - 12:36
    PM: There is a lot of energy here.
    Lots of learning.
  • 12:36 - 12:40
    All of this work that you have seen
    is all about my learning in India.
  • 12:40 - 12:43
    And now, if you see, it's more about
    the cost-effectiveness:
  • 12:43 - 12:45
    this system costs you $300
  • 12:45 - 12:48
    compared to the $20,000 Surface tables,
    or anything like that.
  • 12:48 - 12:54
    Or maybe even the $2 mouse gesture system:
    at that time, such a system was costing around $5,000.
  • 12:54 - 13:00
    I showed that, at a conference,
    to President Abdul Kalam, at that time,
  • 13:00 - 13:04
    and then he said, "OK, we should use this
    in Bhabha Atomic Research Centre
  • 13:04 - 13:05
    for some use there."
  • 13:05 - 13:08
    So I'm excited about how I can bring
    the technology to the masses
  • 13:08 - 13:11
    rather than just keeping that technology
    in the lab environment.
  • 13:11 - 13:15
    (Applause)
  • 13:15 - 13:17
    CA: Based on the people we've seen at TED,
  • 13:17 - 13:19
    I would say you're truly
    one of the two or three
  • 13:20 - 13:21
    best inventors in the world right now.
  • 13:21 - 13:23
    It's an honor to have you at TED.
  • 13:23 - 13:25
    Thank you so much.
  • 13:25 - 13:26
    That's fantastic.
  • 13:26 - 13:30
    (Applause)
Title:
The thrilling potential of SixthSense technology
Speaker:
Pranav Mistry
Description:

At TEDIndia, Pranav Mistry demos several tools that help the physical world interact with the world of data -- including a deep look at his SixthSense device and a new, paradigm-shifting paper "laptop." In an onstage Q&A, Mistry says he'll open-source the software behind SixthSense, to open its possibilities to all.

Video Language:
English
Team:
TED
Project:
TEDTalks
Duration:
13:30
