Shape-shifting tech will change work as we know it

  • 0:01 - 0:04
    We've evolved with tools,
    and tools have evolved with us.
  • 0:04 - 0:09
    Our ancestors created these
    hand axes 1.5 million years ago,
  • 0:09 - 0:12
    shaping them to not only
    fit the task at hand
  • 0:12 - 0:14
    but also their hand.
  • 0:15 - 0:16
    However, over the years,
  • 0:16 - 0:19
    tools have become
    more and more specialized.
  • 0:19 - 0:23
    These sculpting tools
    have evolved through their use,
  • 0:23 - 0:27
    and each one has a different form
    which matches its function.
  • 0:27 - 0:29
    And they leverage
    the dexterity of our hands
  • 0:29 - 0:33
    in order to manipulate things
    with much more precision.
  • 0:33 - 0:36
    But as tools have become
    more and more complex,
  • 0:36 - 0:40
    we need more complex controls
    to operate them.
  • 0:41 - 0:45
    And so designers have become
    very adept at creating interfaces
  • 0:45 - 0:49
    that allow you to manipulate parameters
    while you're attending to other things,
  • 0:49 - 0:52
    such as taking a photograph
    and changing the focus
  • 0:52 - 0:53
    or the aperture.
  • 0:54 - 0:58
    But the computer has fundamentally
    changed the way we think about tools
  • 0:58 - 1:00
    because computation is dynamic.
  • 1:01 - 1:03
    So it can do a million different things
  • 1:03 - 1:05
    and run a million different applications.
  • 1:05 - 1:09
    However, computers have
    the same static physical form
  • 1:09 - 1:11
    for all of these different applications
  • 1:11 - 1:14
    and the same static
    interface elements as well.
  • 1:14 - 1:16
    And I believe that this
    is fundamentally a problem,
  • 1:16 - 1:19
    because it doesn't really allow us
    to interact with our hands
  • 1:19 - 1:23
    and capture the rich dexterity
    that we have in our bodies.
  • 1:24 - 1:29
    And my belief is that
    we need new types of interfaces
  • 1:29 - 1:32
    that can capture these
    rich abilities that we have
  • 1:32 - 1:35
    and that can physically adapt to us
  • 1:35 - 1:37
    and allow us to interact in new ways.
  • 1:37 - 1:40
    And so that's what I've been doing
    at the MIT Media Lab
  • 1:40 - 1:41
    and now at Stanford.
  • 1:42 - 1:46
    So with my colleagues,
    Daniel Leithinger and Hiroshi Ishii,
  • 1:46 - 1:47
    we created inFORM,
  • 1:47 - 1:49
    where the interface can actually
    come off the screen
  • 1:49 - 1:52
    and you can physically manipulate it.
  • 1:52 - 1:55
    Or you can visualize
    3D information physically
  • 1:55 - 1:58
    and touch it and feel it
    to understand it in new ways.
  • 2:04 - 2:08
    Or you can interact through gestures
    and direct deformations
  • 2:08 - 2:10
    to sculpt digital clay.
  • 2:14 - 2:18
    Or interface elements can arise
    out of the surface
  • 2:18 - 2:19
    and change on demand.
  • 2:19 - 2:21
    And the idea is that for each
    individual application,
  • 2:22 - 2:25
    the physical form can be matched
    to the application.
  • 2:25 - 2:27
    And I believe this represents a new way
  • 2:27 - 2:29
    that we can interact with information,
  • 2:29 - 2:31
    by making it physical.
  • 2:31 - 2:33
    So the question is, how can we use this?
  • 2:34 - 2:38
    Traditionally, urban planners
    and architects build physical models
  • 2:38 - 2:40
    of cities and buildings
    to better understand them.
  • 2:40 - 2:45
    So with Tony Tang at the Media Lab,
    we created an interface built on inFORM
  • 2:45 - 2:50
    to allow urban planners
    to design and view entire cities.
  • 2:50 - 2:54
    And now you can walk around it,
    but it's dynamic, it's physical,
  • 2:54 - 2:56
    and you can also interact directly.
  • 2:56 - 2:57
    Or you can look at different views,
  • 2:57 - 3:00
    such as population or traffic information,
  • 3:00 - 3:02
    but it's made physical.
  • 3:03 - 3:07
    We also believe that these dynamic
    shape displays can really change
  • 3:07 - 3:10
    the ways that we remotely
    collaborate with people.
  • 3:10 - 3:12
    So when we're working together in person,
  • 3:12 - 3:14
    I'm not only looking at your face
  • 3:14 - 3:17
    but I'm also gesturing
    and manipulating objects,
  • 3:17 - 3:21
    and that's really hard to do
    when you're using tools like Skype.
  • 3:22 - 3:25
    And so using inFORM,
    you can reach out from the screen
  • 3:25 - 3:27
    and manipulate things at a distance.
  • 3:27 - 3:30
    So we used the pins of the display
    to represent people's hands,
  • 3:30 - 3:35
    allowing them to actually touch
    and manipulate objects at a distance.
  • 3:39 - 3:43
    And you can also manipulate
    and collaborate on 3D data sets as well,
  • 3:43 - 3:46
    so you can gesture around them
    as well as manipulate them.
  • 3:47 - 3:51
    And that allows people to collaborate
    on these new types of 3D information
  • 3:51 - 3:55
    in a richer way than might
    be possible with traditional tools.
  • 3:56 - 3:59
    And so you can also
    bring in existing objects,
  • 3:59 - 4:02
    and those will be captured on one side
    and transmitted to the other.
  • 4:02 - 4:05
    Or you can have an object that's linked
    between two places,
  • 4:05 - 4:07
    so as I move a ball on one side,
  • 4:07 - 4:09
    the ball moves on the other as well.
  • 4:10 - 4:13
    And so we do this by capturing
    the remote user
  • 4:13 - 4:16
    using a depth-sensing camera
    like a Microsoft Kinect.
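
To make that pipeline concrete: a Kinect-style depth camera gives a per-pixel distance image of the remote user, which can be downsampled and inverted into pin heights, so that reaching hands become raised pins. Below is a minimal Python sketch of that mapping, not the actual inFORM code; the grid size matches inFORM's 900 pins, but the depth range, pin travel and helper names (`read_depth_frame`, `send_to_actuators`) are assumptions for illustration.

```python
# Minimal sketch: map a Kinect-style depth frame (mm) onto a pin grid.
# Grid size matches inFORM's 30 x 30 = 900 pins; everything else is assumed.
import numpy as np

PIN_ROWS, PIN_COLS = 30, 30          # inFORM's pin grid (900 pins)
PIN_TRAVEL_MM = 100.0                # assumed maximum pin extension

def depth_to_pin_heights(depth_mm: np.ndarray,
                         near_mm: float = 500.0,
                         far_mm: float = 1200.0) -> np.ndarray:
    """Convert a depth image to pin heights: nearer objects (the user's
    reaching hands) become taller pins; background flattens to zero."""
    # Downsample the depth image to the pin grid by block averaging.
    h, w = depth_mm.shape
    bh, bw = h // PIN_ROWS, w // PIN_COLS
    blocks = depth_mm[:bh * PIN_ROWS, :bw * PIN_COLS]
    blocks = blocks.reshape(PIN_ROWS, bh, PIN_COLS, bw).mean(axis=(1, 3))

    # Invert and normalize: near pixels -> high pins.
    heights = (far_mm - blocks) / (far_mm - near_mm)
    return np.clip(heights, 0.0, 1.0) * PIN_TRAVEL_MM

# Usage sketch, once per frame:
# frame = read_depth_frame()                  # hypothetical Kinect wrapper
# send_to_actuators(depth_to_pin_heights(frame))
```
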
  • 4:17 - 4:20
    Now, you might be wondering
    how this all works,
  • 4:20 - 4:23
    and essentially, it's
    900 linear actuators
  • 4:23 - 4:26
    that are connected to these
    mechanical linkages
  • 4:26 - 4:30
    that allow motion down here
    to be propagated to these pins above.
  • 4:30 - 4:33
    So it's not that complex
    compared to what's going on at CERN,
  • 4:33 - 4:35
    but it did take a long time
    for us to build it.
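
As a toy illustration of the fan-out this implies, here is a Python sketch: one target height per pin, with a simple proportional position loop per actuator. The class names and gain are invented, and the real system runs its control loops on the custom boards described next; this only shows the shape of the problem.

```python
# Toy sketch of per-pin closed-loop position control, fanned out 900 times.
from dataclasses import dataclass

@dataclass
class PinActuator:
    """One linear actuator with simple proportional position control."""
    position: float = 0.0            # current extension, 0..1
    target: float = 0.0              # commanded extension, 0..1
    gain: float = 0.3                # proportional gain (assumed)

    def step(self) -> None:
        # Move a fraction of the remaining error each control tick.
        self.position += self.gain * (self.target - self.position)

class PinDisplay:
    def __init__(self, rows: int = 30, cols: int = 30) -> None:
        self.pins = [[PinActuator() for _ in range(cols)] for _ in range(rows)]

    def set_heights(self, heights) -> None:
        for r, row in enumerate(self.pins):
            for c, pin in enumerate(row):
                pin.target = heights[r][c]

    def tick(self) -> None:
        # In hardware these loops run in parallel on many small controller
        # boards; here we simply iterate over all 900 pins.
        for row in self.pins:
            for pin in row:
                pin.step()
```
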
  • 4:35 - 4:38
    And so we started with a single motor,
  • 4:38 - 4:39
    a single linear actuator,
  • 4:40 - 4:43
    and then we had to design
    a custom circuit board to control them.
  • 4:43 - 4:45
    And then we had to make a lot of them.
  • 4:45 - 4:49
    And so the problem with having
    900 of something
  • 4:49 - 4:52
    is that you have to do
    every step 900 times.
  • 4:52 - 4:54
    And so that meant that we had
    a lot of work to do.
  • 4:54 - 4:58
    So we sort of set up
    a mini-sweatshop in the Media Lab
  • 4:58 - 5:02
    and brought undergrads in and convinced
    them to do "research" --
  • 5:02 - 5:03
    (Laughter)
  • 5:03 - 5:06
    and had late nights
    watching movies, eating pizza
  • 5:06 - 5:08
    and screwing in thousands of screws.
  • 5:08 - 5:09
    You know -- research.
  • 5:09 - 5:10
    (Laughter)
  • 5:10 - 5:14
    But anyway, I think that we were
    really excited by the things
  • 5:14 - 5:15
    that inFORM allowed us to do.
  • 5:16 - 5:20
    Increasingly, we're using mobile devices,
    and we interact on the go.
  • 5:20 - 5:22
    But mobile devices, just like computers,
  • 5:22 - 5:25
    are used for so many
    different applications.
  • 5:25 - 5:27
    So you use them to talk on the phone,
  • 5:27 - 5:30
    to surf the web, to play games,
    to take pictures
  • 5:30 - 5:32
    or to do a million other things.
  • 5:32 - 5:35
    But again, they have the same
    static physical form
  • 5:35 - 5:37
    for each of these applications.
  • 5:37 - 5:40
    And so we wanted to know how we could take
    some of the same interactions
  • 5:40 - 5:42
    that we developed for inFORM
  • 5:42 - 5:44
    and bring them to mobile devices.
  • 5:44 - 5:48
    So at Stanford, we created
    this haptic edge display,
  • 5:48 - 5:51
    which is a mobile device
    with an array of linear actuators
  • 5:51 - 5:53
    that can change shape,
  • 5:53 - 5:57
    so you can feel in your hand
    where you are as you're reading a book.
  • 5:57 - 6:01
    Or you can feel in your pocket
    new types of tactile sensations
  • 6:01 - 6:03
    that are richer than vibration.
  • 6:03 - 6:06
    Or buttons can emerge from the side
    that allow you to interact
  • 6:06 - 6:08
    where you want them to be.
  • 6:09 - 6:13
    Or you can play games
    and have actual buttons.
  • 6:14 - 6:15
    And so we were able to do this
  • 6:15 - 6:20
    by embedding 40 tiny
    linear actuators inside the device,
  • 6:20 - 6:22
    which allow you not only to touch them
  • 6:22 - 6:24
    but also to back-drive them.
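
The interesting part here is the same pin working as both output and input. Below is a rough Python sketch of that dual use, with invented names and thresholds, not the published Haptic Edge Display code: extend pins along the edge to show reading progress, and treat a pin pushed in past its commanded position as a button press, which is exactly what back-drivability allows.

```python
# Rough sketch: 40 edge pins used for output (progress bar) and input
# (detecting back-driven presses). All names and values are illustrative.
NUM_PINS = 40
PUSH_THRESHOLD = 0.1   # assumed: deviation (0..1) that counts as a press

def render_progress(progress: float) -> list[float]:
    """Extend pins along the edge in proportion to reading progress."""
    filled = progress * NUM_PINS
    return [min(max(filled - i, 0.0), 1.0) for i in range(NUM_PINS)]

def detect_presses(commanded: list[float], measured: list[float]) -> list[int]:
    """A pin pushed in farther than commanded is treated as a button press."""
    return [i for i, (cmd, pos) in enumerate(zip(commanded, measured))
            if cmd - pos > PUSH_THRESHOLD]

# Usage sketch (send_targets and read_positions are hypothetical):
# targets = render_progress(0.42)      # 42% through the book
# send_targets(targets)
# pressed = detect_presses(targets, read_positions())
```
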
  • 6:25 - 6:29
    But we've also looked at other ways
    to create more complex shape change.
  • 6:29 - 6:33
    So we've used pneumatic actuation
    to create a morphing device
  • 6:33 - 6:36
    where you can go from something
    that looks a lot like a phone ...
  • 6:36 - 6:39
    to a wristband on the go.
  • 6:40 - 6:43
    And so together with Ken Nakagaki
    at the Media Lab,
  • 6:43 - 6:45
    we created this new
    high-resolution version
  • 6:45 - 6:51
    that uses an array of servomotors
    to change from interactive wristband
  • 6:51 - 6:54
    to a touch-input device
  • 6:54 - 6:56
    to a phone.
  • 6:56 - 6:57
    (Laughter)
  • 6:58 - 7:00
    And we're also interested
    in looking at ways
  • 7:00 - 7:03
    that users can actually
    deform the interfaces
  • 7:03 - 7:06
    to shape them into the devices
    that they want to use.
  • 7:06 - 7:08
    So you can make something
    like a game controller,
  • 7:08 - 7:11
    and then the system will understand
    what shape it's in
  • 7:11 - 7:13
    and change to that mode.
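
One hedged way to picture that mode switching: compare the sensed height field of the deformed device against stored shape templates and switch into the closest mode. In the Python sketch below, the grid size, templates and distance threshold are all invented for illustration; the actual system's recognition method is not described in the talk.

```python
# Hedged sketch: nearest-template classification of a deformed device shape.
import numpy as np

# Tiny 2 x 4 height-field "templates" standing in for learned shapes.
MODE_TEMPLATES = {
    "phone":           np.zeros((2, 4)),
    "wristband":       np.tile(np.array([0.0, 0.5, 0.5, 0.0]), (2, 1)),
    "game_controller": np.array([[1.0, 0.0, 0.0, 1.0],
                                 [1.0, 0.0, 0.0, 1.0]]),
}

def classify_shape(sensed: np.ndarray, max_dist: float = 0.5):
    """Return the mode whose template is nearest to the sensed shape,
    or None if nothing is close enough to switch confidently."""
    best_mode, best_dist = None, float("inf")
    for mode, template in MODE_TEMPLATES.items():
        dist = float(np.linalg.norm(sensed - template))
        if dist < best_dist:
            best_mode, best_dist = mode, dist
    return best_mode if best_dist < max_dist else None

# Usage sketch (read_pin_heights and enter_mode are hypothetical):
# mode = classify_shape(read_pin_heights())
# if mode:
#     enter_mode(mode)
```
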
  • 7:14 - 7:16
    So, where does this lead?
  • 7:16 - 7:18
    How do we move forward from here?
  • 7:18 - 7:20
    I think, really, where we are today
  • 7:20 - 7:23
    is in this new age
    of the Internet of Things,
  • 7:23 - 7:25
    where we have computers everywhere --
  • 7:25 - 7:27
    they're in our pockets,
    they're in our walls,
  • 7:27 - 7:31
    they're in almost every device
    that you'll buy in the next five years.
  • 7:31 - 7:33
    But what if we stopped
    thinking about devices
  • 7:33 - 7:36
    and thought instead about environments?
  • 7:36 - 7:38
    And so how can we have smart furniture
  • 7:38 - 7:42
    or smart rooms or smart environments
  • 7:42 - 7:45
    or cities that can adapt to us physically,
  • 7:45 - 7:49
    and enable new ways
    of collaborating with people
  • 7:49 - 7:51
    and doing new types of tasks?
  • 7:51 - 7:55
    So for the Milan Design Week,
    we created TRANSFORM,
  • 7:55 - 7:59
    which is an interactive table-scale
    version of these shape displays,
  • 7:59 - 8:02
    which can move physical objects
    on the surface; for example,
  • 8:02 - 8:04
    reminding you to take your keys.
  • 8:04 - 8:09
    But it can also transform
    to fit different ways of interacting.
  • 8:09 - 8:10
    So if you want to work,
  • 8:10 - 8:13
    then it can change to sort of
    set up your work system.
  • 8:13 - 8:15
    And so as you bring a device over,
  • 8:15 - 8:18
    it creates all the affordances you need
  • 8:18 - 8:23
    and brings other objects
    to help you accomplish those goals.
  • 8:25 - 8:27
    So, in conclusion,
  • 8:27 - 8:31
    I really think that we need to think
    about a new, fundamentally different way
  • 8:31 - 8:33
    of interacting with computers.
  • 8:34 - 8:37
    We need computers
    that can physically adapt to us
  • 8:37 - 8:39
    and adapt to the ways
    that we want to use them
  • 8:39 - 8:44
    and really harness the rich dexterity
    that we have in our hands,
  • 8:44 - 8:48
    and our ability to think spatially
    about information by making it physical.
  • 8:49 - 8:53
    But looking forward, I think we need
    to go beyond this, beyond devices,
  • 8:53 - 8:56
    to really think about new ways
    that we can bring people together,
  • 8:56 - 8:59
    and bring our information into the world,
  • 8:59 - 9:03
    and think about smart environments
    that can adapt to us physically.
  • 9:03 - 9:05
    So with that, I will leave you.
  • 9:05 - 9:06
    Thank you very much.
  • 9:06 - 9:09
    (Applause)
Title:
Shape-shifting tech will change work as we know it
Speaker:
Sean Follmer
Description:

What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.

Video Language:
English
Team:
TED
Project:
TEDTalks
Duration:
09:22
