Shape-shifting tech will change work as we know it
We've evolved with tools, and tools have evolved with us. Our ancestors created these hand axes 1.5 million years ago, shaping them to fit not only the task at hand but also their hand.

However, over the years, tools have become more and more specialized. These sculpting tools have evolved through their use, and each one has a different form that matches its function. They leverage the dexterity of our hands to manipulate things with much more precision. But as tools have become more and more complex, we need more complex controls to operate them. And so designers have become very adept at creating interfaces that allow you to manipulate parameters while you're attending to other things, such as taking a photograph and changing the focus or the aperture.
But the computer has fundamentally changed the way we think about tools, because computation is dynamic. It can do a million different things and run a million different applications. However, computers have the same static physical form for all of these different applications, and the same static interface elements as well. And I believe that this is fundamentally a problem, because it doesn't really allow us to interact with our hands and capture the rich dexterity that we have in our bodies.

My belief is that we need new types of interfaces that can capture these rich abilities, that can physically adapt to us, and that allow us to interact in new ways. And so that's what I've been doing at the MIT Media Lab and now at Stanford.
So with my colleagues Daniel Leithinger and Hiroshi Ishii, we created inFORM, where the interface can actually come off the screen and you can physically manipulate it. Or you can visualize 3D information physically, and touch it and feel it to understand it in new ways. Or you can interact through gestures and direct deformations to sculpt digital clay. Or interface elements can rise out of the surface and change on demand. And the idea is that for each individual application, the physical form can be matched to the application. I believe this represents a new way that we can interact with information: by making it physical.

So the question is, how can we use this?
Traditionally, urban planners and architects build physical models of cities and buildings to better understand them. So with Tony Tang at the Media Lab, we created an interface built on inFORM to allow urban planners to design and view entire cities. And now you can walk around it, but it's dynamic, it's physical, and you can also interact with it directly. Or you can look at different views, such as population or traffic information, but made physical.
We also believe that these dynamic shape displays can really change the way we collaborate with people remotely. When we're working together in person, I'm not only looking at your face, I'm also gesturing and manipulating objects, and that's really hard to do when you're using tools like Skype. And so using inFORM, you can reach out from the screen and manipulate things at a distance. We used the pins of the display to represent people's hands, allowing them to actually touch and manipulate objects at a distance.

You can also manipulate and collaborate on 3D data sets, gesturing around them as well as manipulating them. And that allows people to collaborate on these new types of 3D information in a richer way than might be possible with traditional tools. You can also bring in existing objects, and those will be captured on one side and transmitted to the other. Or you can have an object that's linked between two places, so as I move a ball on one side, the ball moves on the other as well. We do this by capturing the remote user with a depth-sensing camera like a Microsoft Kinect.
Now, you might be wondering how this all works. Essentially, it's 900 linear actuators connected to mechanical linkages that allow motion down here to be propagated to the pins above.
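To make the pipeline concrete, here is a minimal sketch of how a depth-camera frame could drive a 900-pin display like the one described above. The 30 x 30 grid size follows from 900 pins; the pin travel and the block-averaging scheme are illustrative assumptions, not the actual inFORM implementation.

```python
import numpy as np

GRID = 30              # 30 x 30 grid = 900 pins
MAX_HEIGHT_MM = 100.0  # illustrative pin travel, not the real spec

def depth_to_pin_heights(depth_mm):
    """Downsample a depth frame (in mm) to one height per pin.

    Nearer surfaces (smaller depth values) become taller pins, so a
    hand reaching toward the camera rises out of the remote display.
    """
    h, w = depth_mm.shape
    bh, bw = h // GRID, w // GRID
    # Average the depth over the patch of pixels behind each pin.
    blocks = depth_mm[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    mean_depth = blocks.mean(axis=(1, 3))
    # Invert and normalize: nearest patch -> full height, farthest -> flat.
    near, far = mean_depth.min(), mean_depth.max()
    return (far - mean_depth) / max(far - near, 1e-9) * MAX_HEIGHT_MM
```

In this sketch, a hand held closer to the camera than the background produces a raised region of pins on the remote side, which is the basic idea behind using the pins to represent people's hands.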
So it's not that complex compared to what's going on at CERN, but it did take a long time for us to build. We started with a single motor, a single linear actuator, and then we had to design a custom circuit board to control them. And then we had to make a lot of them. The problem with having 900 of something is that you have to do every step 900 times. And so that meant we had a lot of work to do. So we sort of set up a mini-sweatshop in the Media Lab and brought undergrads in and convinced them to do "research" --

(Laughter)

and had late nights watching movies, eating pizza and screwing in thousands of screws. You know -- research.

(Laughter)

But anyway, I think that we were really excited by the things that inFORM allowed us to do.
Increasingly, we're using mobile devices, and we interact on the go. But mobile devices, just like computers, are used for so many different applications: you use them to talk on the phone, to surf the web, to play games, to take pictures, or a million other things. But again, they have the same static physical form for each of these applications. So we wanted to know how we could take some of the same interactions that we developed for inFORM and bring them to mobile devices.

So at Stanford, we created this haptic edge display, a mobile device with an array of linear actuators that can change shape, so you can feel in your hand where you are as you're reading a book. Or you can feel in your pocket new types of tactile sensations that are richer than vibration. Or buttons can emerge from the side that allow you to interact where you want them to be. Or you can play games and have actual buttons. We were able to do this by embedding 40 tiny linear actuators inside the device, which allow you not only to touch them but also to back-drive them.
But we've also looked at other ways to create more complex shape change. We've used pneumatic actuation to create a morphing device where you can go from something that looks a lot like a phone ... to a wristband on the go. And together with Ken Nakagaki at the Media Lab, we created this new high-resolution version that uses an array of servomotors to change from interactive wristband to touch-input device to phone.

(Laughter)

We're also interested in ways that users can actually deform the interfaces to shape them into the devices they want to use. So you can make something like a game controller, and then the system will understand what shape it's in and change to that mode.
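The talk doesn't describe how the system recognizes the deformed shape; one simple way to illustrate the idea is template matching, where the sensed height map is compared against a few stored shapes and the closest one selects the interaction mode. Everything here (templates, grid size, mode names) is a hypothetical sketch.

```python
import numpy as np

# Toy templates on an 8 x 8 sensed height map (illustrative only).
grip = np.zeros((8, 8))
grip[:, :2] = 1.0   # raised ridge on the left edge...
grip[:, -2:] = 1.0  # ...and on the right, like two controller grips

TEMPLATES = {
    "touchpad": np.zeros((8, 8)),  # flat surface
    "game_controller": grip,
}

def recognize_mode(heights):
    """Return the mode whose template has the lowest mean squared error."""
    errors = {name: float(((heights - t) ** 2).mean())
              for name, t in TEMPLATES.items()}
    return min(errors, key=errors.get)
```

A real system would need something more robust than nearest-template matching, but this captures the loop the talk describes: sense the shape the user has made, then switch the device into the matching mode.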
So, where does this point? How do we move forward from here? I think, really, where we are today is in this new age of the Internet of Things, where we have computers everywhere -- they're in our pockets, they're in our walls, they're in almost every device that you'll buy in the next five years. But what if we stopped thinking about devices and thought instead about environments? How can we have smart furniture, or smart rooms, or smart environments, or cities that can adapt to us physically, and allow us to collaborate with people in new ways and do new types of tasks?

So for Milan Design Week, we created TRANSFORM, an interactive table-scale version of these shape displays, which can move physical objects on the surface -- for example, reminding you to take your keys. But it can also transform to fit different ways of interacting. If you want to work, then it can change to set up your work system. And as you bring a device over, it creates all the affordances you need and brings in other objects to help you accomplish those goals.
So, in conclusion, I really think that we need to think about a new, fundamentally different way of interacting with computers. We need computers that can physically adapt to us and to the ways we want to use them, computers that really harness the rich dexterity of our hands and our ability to think spatially about information by making it physical.

But looking forward, I think we need to go beyond this, beyond devices, to really think about new ways that we can bring people together and bring our information into the world -- and to think about smart environments that can adapt to us physically. So with that, I will leave you. Thank you very much.

(Applause)
Title: Shape-shifting tech will change work as we know it
Speaker: Sean Follmer
Description: What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 09:22