An equation for intelligence: Alex Wissner-Gross at TEDxBeaconStreet
-
0:16 - 0:20Intelligence, what is it?
-
0:20 - 0:25If we take a look back at the history
of how intelligence has been viewed, -
0:25 - 0:31one seminal example has been
Edsger Dijkstra's famous quote -
0:31 - 0:35that the question of
whether a machine can think -
0:35 - 0:38is about as interesting as the question of
-
0:38 - 0:41whether a submarine can swim.
-
0:41 - 0:47Now, Edsger Dijkstra, when he wrote this,
intended it as a criticism -
0:47 - 0:52of early pioneers of computer science
like Alan Turing. -
0:53 - 0:56However, if you take a look back
-
0:56 - 0:59and think about what have been
the most empowering innovations -
0:59 - 1:03that enabled us to build
artificial machines that swim -
1:04 - 1:06and artificial machines that fly,
-
1:06 - 1:11you find that it was only through
understanding the underlying -
1:11 - 1:16physical mechanisms of swimming
and flight that we were able -
1:16 - 1:18to build these machines.
-
1:18 - 1:22And so, several years ago,
I undertook a program -
1:22 - 1:26to try to understand the fundamental
physical mechanisms -
1:26 - 1:29underlying intelligence.
-
1:30 - 1:32Let's take a step back.
-
1:32 - 1:36Let's first begin
with a thought experiment. -
1:36 - 1:38Pretend that you're an alien race
-
1:38 - 1:43that doesn't know anything
about Earth biology or Earth neuroscience -
1:43 - 1:47or Earth intelligence, but you have
amazing telescopes -
1:47 - 1:51and you're able to watch the Earth
and you have amazingly long lives -
1:51 - 1:56so you're able to watch the Earth
over millions, even billions of years. -
1:56 - 2:00And you observe a really strange effect,
-
2:00 - 2:04you observe that over the course
of the millennia, -
2:04 - 2:10Earth is continually bombarded
with asteroids up until a point -
2:10 - 2:13and that at some point,
corresponding roughly -
2:13 - 2:19to our year 2000 AD, asteroids that are
on a collision course with the Earth, -
2:19 - 2:23that otherwise would have collided,
mysteriously get deflected -
2:24 - 2:27or detonate before they can hit the Earth.
-
2:27 - 2:30Now, of course, as Earthlings,
we know the reason would be -
2:30 - 2:35that we're trying to save ourselves,
we're trying to prevent an impact. -
2:35 - 2:38But if you're an alien race
that doesn't know any of this, -
2:38 - 2:41that doesn't have any concept
of Earth intelligence, -
2:41 - 2:43you'd be forced to put together
-
2:43 - 2:47a physical theory that explains how,
up until a certain point in time, -
2:48 - 2:52asteroids that would demolish
the surface of the planet, -
2:52 - 2:55mysteriously stop doing that.
-
2:55 - 3:00So, I claim that this is the same question
-
3:00 - 3:03as understanding the physical
nature of intelligence. -
3:04 - 3:09So, in this program that I undertook
years ago, I've looked at a variety -
3:09 - 3:14of different threads across science
across a variety of disciplines, -
3:14 - 3:19pointing, I think, towards a single
underlying mechanism for intelligence. -
3:20 - 3:22In cosmology, for example,
-
3:22 - 3:25there has been a variety
of different threads of evidence -
3:25 - 3:30that our universe appears to be
finely tuned for the development -
3:30 - 3:33of intelligence, and in particular,
for the development -
3:33 - 3:39of universe states that maximize
the diversity of possible futures. -
3:39 - 3:44In gameplay, for example in Go,
everyone remembers in 1997 -
3:44 - 3:48when IBM's Deep Blue beat
Garry Kasparov at chess. -
3:48 - 3:52Fewer people are aware
3:48 - 3:52that in the past ten years or so, -
3:52 - 3:56the game of Go, arguably a much more
challenging game because it has -
3:56 - 4:01a much higher branching factor,
has also started to succumb to computer -
4:01 - 4:04game players for the same reason.
-
4:04 - 4:07The best techniques, right now,
for computers playing Go, -
4:07 - 4:12are techniques that try to maximize
future options during gameplay. -
4:12 - 4:16Finally, in robotic motion planning,
-
4:16 - 4:18there has been a variety
of recent techniques -
4:18 - 4:23that have tried to take advantage
of abilities of robots to maximize -
4:23 - 4:27future freedom of action in order
to accomplish complex tasks. -
4:27 - 4:31And so, taking all of these different
threads and putting them together, -
4:32 - 4:36I asked, starting several years ago,
is there an underlying mechanism -
4:36 - 4:40for intelligence that we can factor out
of all of these different threads? -
4:41 - 4:45Is there, as it were,
a single equation for intelligence? -
4:47 - 4:50And the answer, I believe, is yes.
-
4:50 - 4:57What you're seeing is probably the closest
equivalent to an E = mc² for intelligence -
4:57 - 5:00that I certainly have ever seen.
-
5:00 - 5:02So, what you're seeing here
-
5:02 - 5:08is a statement of correspondence
that intelligence is a Force (F) -
5:09 - 5:13that acts so as to maximize
future freedom of action; -
5:14 - 5:17It acts to maximize future freedom
of action or keep options open -
5:17 - 5:20with some strength (T),
-
5:20 - 5:25with the amount of the diversity
of possible accessible futures (S), -
5:25 - 5:28up to some future time horizon (τ). -
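[Put together, the symbols just named can be written on one line. This is my transcription of the slide's equation, reconstructed to be consistent with the description above:

```latex
% Intelligence as a force F, acting with strength T, along the gradient
% of the entropy S of accessible futures up to the time horizon \tau:
F = T \, \nabla S_{\tau}
```
]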
-
5:28 - 5:31In short, intelligence doesn't like
-
5:31 - 5:34to get trapped; it tries
to maximize future freedom of action -
5:34 - 5:40and keep options open.
And so, given this one equation -
5:40 - 5:42it's natural to ask:
So, what can you do with this? -
5:42 - 5:46How predictive is it? Does it predict
human-level intelligence? -
5:46 - 5:49Does it predict artificial intelligence?
-
5:49 - 5:54So, I'm going to show you now a video
that will, I think, demonstrate -
5:54 - 5:58some of the amazing applications
of just this single equation. -
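[As a toy illustration of the "keep options open" principle (my own sketch; the talk does not specify ENTROPICA's internals), here is a minimal Python agent on a walled grid that always steps toward the neighbor from which the most cells remain reachable within a short horizon:

```python
from collections import deque

# Hypothetical toy gridworld: '#' is a wall, '.' is open floor.
GRID = [
    "#######",
    "#.....#",
    "#.###.#",
    "#.#...#",
    "#.#.###",
    "#.....#",
    "#######",
]

def neighbors(pos):
    """Open cells adjacent to pos."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if GRID[r + dr][c + dc] == ".":
            yield (r + dr, c + dc)

def reachable_within(pos, horizon):
    """Count distinct cells reachable from pos in at most `horizon` steps (BFS)."""
    seen = {pos}
    frontier = deque([(pos, 0)])
    while frontier:
        p, d = frontier.popleft()
        if d == horizon:
            continue
        for q in neighbors(p):
            if q not in seen:
                seen.add(q)
                frontier.append((q, d + 1))
    return len(seen)

def best_step(pos, horizon=4):
    """Step toward the neighbor that keeps the most future cells accessible."""
    return max(neighbors(pos), key=lambda q: reachable_within(q, horizon))

print(best_step((1, 1)))  # picks the neighbor leaving the most reachable cells
```

Here `reachable_within` is a crude stand-in for the "diversity of accessible futures": the agent is never given a goal, it only avoids moves that trap it.]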
6:00 - 6:03(Video) Recent research in cosmology
has suggested that universes -
6:03 - 6:08that produce more disorder or "entropy"
over their lifetimes should tend -
6:08 - 6:11to have more favorable conditions
for the existence of intelligent beings -
6:12 - 6:13such as ourselves.
-
6:13 - 6:16But what if that tentative
cosmological connection -
6:16 - 6:19between entropy and intelligence
hints at a deeper relationship? -
6:19 - 6:22What if intelligent behavior
doesn't just correlate -
6:22 - 6:26with the production of long-term entropy,
but actually emerges directly from it? -
6:27 - 6:30To find out, we developed
a software engine called ENTROPICA -
6:30 - 6:34designed to maximize the production
of long-term entropy of any system -
6:34 - 6:36that it finds itself in.
-
6:36 - 6:41Amazingly, ENTROPICA was able to pass
multiple animal intelligence tests, -
6:41 - 6:44play human games
and even earn money trading stocks; -
6:44 - 6:46all without being instructed to do so.
-
6:46 - 6:49Here are some examples
of ENTROPICA in action: -
6:49 - 6:52just like a human standing upright
without falling over, here we see -
6:52 - 6:56ENTROPICA automatically
balancing a pole using a cart. -
6:56 - 7:00This behavior is remarkable, in part,
because we never gave ENTROPICA a goal, -
7:00 - 7:04it simply decided on its own
to balance the pole. -
7:04 - 7:07This balancing ability would have
applications for humanoid robotics -
7:07 - 7:09and human assistive technologies.
-
7:10 - 7:13Just as some animals can use
objects in their environments -
7:13 - 7:15as tools to reach into narrow spaces,
-
7:15 - 7:19here we see that ENTROPICA,
again on its own initiative, -
7:19 - 7:22was able to move a large disk,
representing an animal, -
7:22 - 7:25around so as to cause a small disk,
representing a tool, -
7:25 - 7:28to reach into a confined space
holding a third disk -
7:28 - 7:32and release the third disk
from its initially fixed position. -
7:32 - 7:37This tool-use ability would have applications
for smart manufacturing and agriculture. -
7:37 - 7:40In addition, just as some other animals
are able to cooperate -
7:40 - 7:44by pulling opposite ends of a rope
at the same time to release food, -
7:44 - 7:47here we see that ENTROPICA
is able to accomplish -
7:47 - 7:48a model version of that task.
-
7:48 - 7:52This cooperative ability has interesting
implications for economic planning -
7:52 - 7:55and a variety of other fields.
-
7:55 - 7:59ENTROPICA is broadly applicable
to a variety of domains. -
7:59 - 8:04For example, here we see it successfully
playing a game of Pong against itself -
8:04 - 8:06illustrating its potential for gaming.
-
8:08 - 8:10Here, we see ENTROPICA orchestrating
-
8:10 - 8:13new connections on a social network
where friends are constantly -
8:13 - 8:17falling out of touch and successfully
keeping the network well connected. -
8:18 - 8:22This same network orchestration ability
also has applications in health care, -
8:22 - 8:25energy and intelligence.
-
8:25 - 8:29Here we see ENTROPICA directing
the paths of a fleet of ships -
8:29 - 8:33successfully discovering and utilizing
the Panama Canal to globally extend -
8:33 - 8:36its reach from the Atlantic
to the Pacific. -
8:36 - 8:39By the same token, ENTROPICA
is broadly applicable to problems -
8:39 - 8:43in autonomous defense,
logistics and transportation. -
8:45 - 8:49Finally, here we see ENTROPICA
spontaneously discovering and executing -
8:49 - 8:54a buy low, sell high strategy
on a simulated range-traded stock -
8:54 - 8:57successfully growing assets
under management exponentially. -
8:57 - 9:01This risk management ability
would have broad applications -
9:01 - 9:03in finance and insurance.
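[The "buy low, sell high on a range-traded stock" demonstration can be caricatured with a deterministic sketch. This is my own toy, not ENTROPICA's actual method (which the talk does not describe): a synthetic price oscillating in a range, plus a rule that buys near the floor of the range and sells near the ceiling:

```python
import math

def range_traded_prices(n=200, mean=100.0, amp=5.0, period=40):
    """Deterministic synthetic range-bound series: a sine wave around `mean`."""
    return [mean + amp * math.sin(2 * math.pi * t / period) for t in range(n)]

def buy_low_sell_high(prices, low=97.0, high=103.0, cash=1000.0):
    """Buy one share per step below `low`; sell everything above `high`.
    Returns the final portfolio value (cash plus shares at the last price)."""
    shares = 0
    for p in prices:
        if p < low and cash >= p:   # price near the floor: accumulate
            cash -= p
            shares += 1
        elif p > high and shares:   # price near the ceiling: liquidate
            cash += shares * p
            shares = 0
    return cash + shares * prices[-1]

final = buy_low_sell_high(range_traded_prices())
print(final)  # ends above the initial 1000.0 on this range-bound series
```

The thresholds 97/103 are arbitrary illustrative choices; the point is only that a range-bound series rewards repeatedly converting cash into shares and back.]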
-
9:08 - 9:12AWG: So, what you've just seen
is that a variety -
9:12 - 9:16of signature human
intelligent cognitive behaviors -
9:16 - 9:19such as tool use and walking upright
-
9:19 - 9:24and social cooperation, all follow
from a single equation -
9:24 - 9:29which drives a system to maximize
its future freedom of action. -
9:30 - 9:33Now, there's a profound irony here.
-
9:33 - 9:38Going back to the beginning
of the usage of the term robot, -
9:39 - 9:41the play R.U.R.,
-
9:41 - 9:47there was always a concept
that if we develop machine intelligence, -
9:47 - 9:53there would be a cybernetic revolt, -
that machines would rise up against us. -
9:53 - 9:59One major consequence of this work
is that maybe all of these decades -
9:59 - 10:03we've had the whole concept
of cybernetic revolt in reverse. -
10:04 - 10:07It's not that machines
first become intelligent -
10:07 - 10:11and then megalomaniacal,
and try to take over the world. -
10:11 - 10:16It's quite the opposite:
that the urge to take control -
10:16 - 10:20of all possible futures
is a more fundamental principle -
10:20 - 10:24than that of intelligence;
that general intelligence may, in fact, -
10:24 - 10:28emerge directly from this sort
of control grabbing, -
10:28 - 10:31rather than vice versa.
-
10:33 - 10:36Another important consequence
is goal seeking. -
10:37 - 10:42I'm often asked how the ability
to seek goals follows from this framework -
10:43 - 10:44and the answer is:
-
10:44 - 10:48the ability to seek goals, for example
if you're playing the game of chess, -
10:49 - 10:53to try to win that game of chess
in order to accomplish worldly goals -
10:53 - 10:56and accomplishments outside of that game,
-
10:56 - 10:59will follow directly from this
in the following sense: -
11:00 - 11:04Just like you would travel
through a tunnel, a bottleneck, -
11:04 - 11:07in your future path space
in order to achieve many other -
11:07 - 11:11diverse objectives later on
or just like you would invest -
11:11 - 11:15in a financial security, reducing
your short-term liquidity -
11:15 - 11:18in order to increase your wealth
over the long term, -
11:18 - 11:22goal seeking emerges directly
from a long-term drive -
11:22 - 11:26to increase future freedom of action.
-
11:26 - 11:30Finally, the famous physicist
Richard Feynman once wrote -
11:30 - 11:35that if human civilization were destroyed
and you could pass only a single concept -
11:35 - 11:38on to our descendants
to help them rebuild civilization, -
11:39 - 11:42that concept should be
that all matter around us -
11:42 - 11:46is made out of tiny elements
that attract each other -
11:46 - 11:48when they're far apart,
but repel each other -
11:48 - 11:50when they're close together.
-
11:50 - 11:53My equivalent to that statement
to pass on to descendants -
11:53 - 11:56to help them build
artificial intelligence, -
11:56 - 12:00or to help them to understand
human intelligence, is the following: -
12:00 - 12:04Intelligence should be viewed
as a physical process -
12:04 - 12:06that tries to maximize
future freedom of action -
12:06 - 12:10and avoid constraints in its own future.
-
12:10 - 12:11Thank you very much.
-
12:11 - 12:14(Applause)
Title: An equation for intelligence: Alex Wissner-Gross at TEDxBeaconStreet
Description: What is the most intelligent way to behave? Dr. Wissner-Gross explains how the latest research findings in physics, computer science, and animal behavior suggest that the smartest actions -- from the dawn of human tool use all the way up to modern business and financial strategy -- are all driven by the single fundamental principle of keeping future options as open as possible.
Video Language: English
Team: closed TED
Project: TEDxTalks
Duration: 12:16