
An equation for intelligence: Alex Wissner-Gross at TEDxBeaconStreet

  • 0:16 - 0:20
    Intelligence, what is it?
  • 0:20 - 0:25
    If we take a look back at the history
    of how intelligence has been viewed,
  • 0:25 - 0:31
    one seminal example has been
    Edsger Dijkstra's famous quote
  • 0:31 - 0:35
    that the question of
    whether a machine can think
  • 0:35 - 0:38
    is about as interesting as the question of
  • 0:38 - 0:41
    whether a submarine can swim.
  • 0:41 - 0:47
    Now, Edsger Dijkstra, when he wrote this,
    intended it as a criticism
  • 0:47 - 0:52
    of early pioneers of computer science
    like Alan Turing.
  • 0:53 - 0:56
    However, if you take a look back
  • 0:56 - 0:59
    and think about what have been
    the most empowering innovations
  • 0:59 - 1:03
    that enabled us to build
    artificial machines that swim
  • 1:04 - 1:06
    and artificial machines that fly,
  • 1:06 - 1:11
    you find that it was only through
    understanding the underlying
  • 1:11 - 1:16
    physical mechanisms of swimming
    and flight that we were able
  • 1:16 - 1:18
    to build these machines.
  • 1:18 - 1:22
    And so, several years ago,
    I undertook a program
  • 1:22 - 1:26
    to try to understand the fundamental
    physical mechanisms
  • 1:26 - 1:29
    underlying intelligence.
  • 1:30 - 1:32
    Let's take a step back.
  • 1:32 - 1:36
    Let's first begin
    with a thought experiment.
  • 1:36 - 1:38
    Pretend that you're an alien race
  • 1:38 - 1:43
    that doesn't know anything
    about Earth biology or Earth neuroscience
  • 1:43 - 1:47
    or Earth intelligence, but you have
    amazing telescopes
  • 1:47 - 1:51
    and you're able to watch the Earth
    and you have amazingly long lives
  • 1:51 - 1:56
    so you're able to watch the Earth
    over millions, even billions of years.
  • 1:56 - 2:00
    And you observe a really strange effect,
  • 2:00 - 2:04
    you observe that over the course
    of the millennia,
  • 2:04 - 2:10
    Earth is continually bombarded
    with asteroids up until a point
  • 2:10 - 2:13
    and that at some point,
    corresponding roughly
  • 2:13 - 2:19
    to our year 2000 AD, asteroids that are
    on a collision course with the Earth,
  • 2:19 - 2:23
    that otherwise would have collided,
    mysteriously get deflected
  • 2:24 - 2:27
    or detonate before they can hit the Earth.
  • 2:27 - 2:30
    Now, of course, as Earthlings,
    we know the reason would be
  • 2:30 - 2:35
    that we're trying to save ourselves,
    we're trying to prevent an impact.
  • 2:35 - 2:38
    But if you're an alien race
    that doesn't know any of this,
  • 2:38 - 2:41
    that doesn't have any concept
    of Earth intelligence,
  • 2:41 - 2:43
    you'd be forced to put together
  • 2:43 - 2:47
    a physical theory that explains how,
    up until a certain point in time,
  • 2:48 - 2:52
    asteroids that would demolish
    the surface of the planet,
  • 2:52 - 2:55
    mysteriously stop doing that.
  • 2:55 - 3:00
    So, I claim that this is the same question
  • 3:00 - 3:03
    as understanding the physical
    nature of intelligence.
  • 3:04 - 3:09
    So, in this program that I undertook
    years ago, I've looked at a variety
  • 3:09 - 3:14
    of different threads across science,
    across a variety of disciplines,
  • 3:14 - 3:19
    pointing, I think, towards a single
    underlying mechanism for intelligence.
  • 3:20 - 3:22
    In cosmology, for example,
  • 3:22 - 3:25
    there has been a variety
    of different threads of evidence
  • 3:25 - 3:30
    that our universe appears to be
    finely tuned for the development
  • 3:30 - 3:33
    of intelligence, and in particular,
    for the development
  • 3:33 - 3:39
    of universal states that maximize
    the diversity of possible futures.
  • 3:39 - 3:44
    In gameplay, for example in Go,
    everyone remembers in 1997
  • 3:44 - 3:48
    when IBM's Deep Blue beat
    Garry Kasparov at chess.
  • 3:48 - 3:52
    Fewer people are aware
    that in the past ten years or so,
  • 3:52 - 3:56
    the game of Go, arguably a much more
    challenging game because it has
  • 3:56 - 4:01
    a much higher branching factor,
    has also started to succumb to computer
  • 4:01 - 4:04
    game players for the same reason.
  • 4:04 - 4:07
    The best techniques, right now,
    for computers playing Go,
  • 4:07 - 4:12
    are techniques that try to maximize
    future options during gameplay.
  • 4:12 - 4:16
    Finally, in robotic motion planning,
  • 4:16 - 4:18
    there has been a variety
    of recent techniques
  • 4:18 - 4:23
    that have tried to take advantage
    of abilities of robots to maximize
  • 4:23 - 4:27
    future freedom of action in order
    to accomplish complex tasks.
  • 4:27 - 4:31
    And so, taking all of these different
    threads and putting them together,
  • 4:32 - 4:36
    I asked, starting several years ago,
    is there an underlying mechanism
  • 4:36 - 4:40
    for intelligence that we can factor out
    of all of these different threads?
  • 4:41 - 4:45
    Is there, as it were,
    a single equation for intelligence?
  • 4:47 - 4:50
    And the answer, I believe, is yes.
  • 4:50 - 4:57
    What you're seeing is probably the closest
    equivalent to an E = mc² for intelligence
  • 4:57 - 5:00
    that I certainly have ever seen.
  • 5:00 - 5:02
    So, what you're seeing here
  • 5:02 - 5:08
    is a statement of correspondence
    that intelligence is a force (F)
  • 5:09 - 5:13
    that acts so as to maximize
    future freedom of action;
  • 5:14 - 5:17
    It acts to maximize future freedom
    of action or keep options open
  • 5:17 - 5:20
    with some strength (T),
  • 5:20 - 5:25
    with the amount of the diversity
    of possible accessible futures (S),
  • 5:25 - 5:28
    up to some future time horizon (τ).
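    Written in the notation used in the causal-entropic-force literature
    (a reconstruction from the spoken description above, not a verbatim
    copy of the slide), the correspondence reads roughly:

        F = T \, \nabla S_{\tau}

    where F is the intelligent force, T its strength, S the entropy over
    the diversity of accessible futures, and \tau the future time horizon.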
  • 5:28 - 5:31
    In short, intelligence doesn't like
  • 5:31 - 5:34
    to get trapped, intelligence tries
    to maximize future freedom of action
  • 5:34 - 5:40
    and keep options open.
    And so, given this one equation
  • 5:40 - 5:42
    it's natural to ask:
    So, what can you do with this?
  • 5:42 - 5:46
    How predictive is it? Does it predict
    human-level intelligence?
  • 5:46 - 5:49
    Does it predict artificial intelligence?
  • 5:49 - 5:54
    So, I'm going to show you now a video
    that will, I think, demonstrate
  • 5:54 - 5:58
    some of the amazing applications
    of just this single equation.
  • 6:00 - 6:03
    (Video) Recent research in cosmology
    has suggested that universes
  • 6:03 - 6:08
    that produce more disorder or "entropy"
    over their lifetimes should tend
  • 6:08 - 6:11
    to have more favorable conditions
    for the existence of intelligent beings
  • 6:12 - 6:13
    such as ourselves.
  • 6:13 - 6:16
    But what if that tentative
    cosmological connection
  • 6:16 - 6:19
    between entropy and intelligence
    hints at a deeper relationship?
  • 6:19 - 6:22
    What if intelligent behavior
    doesn't just correlate
  • 6:22 - 6:26
    with the production of long-term entropy,
    but actually emerges directly from it?
  • 6:27 - 6:30
    To find out, we developed
    a software engine called ENTROPICA
  • 6:30 - 6:34
    designed to maximize the production
    of long-term entropy of any system
  • 6:34 - 6:36
    that it finds itself in.
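    The engine itself is not shown in the talk. As a purely hypothetical
    illustration of the principle just described (choose whichever action
    leaves the most diverse, highest-entropy set of reachable futures up
    to some horizon), here is a minimal Python sketch; all names and the
    toy world are invented for the example:

        import math
        import random
        from collections import Counter

        def future_state_entropy(step, state, action, actions=(-1, +1),
                                  horizon=8, rollouts=200):
            """Estimate the entropy of the end states reachable after taking
            `action` from `state` and then acting randomly until the horizon."""
            endings = Counter()
            for _ in range(rollouts):
                s = step(state, action)
                for _ in range(horizon - 1):
                    s = step(s, random.choice(actions))
                endings[s] += 1
            total = sum(endings.values())
            return -sum(n / total * math.log(n / total) for n in endings.values())

        def choose_action(step, state, actions=(-1, +1)):
            # Pick the action that keeps the most future options open.
            return max(actions, key=lambda a: future_state_entropy(step, state, a, actions))

        # Toy world: a corridor from 0 to 10. Standing against a wall
        # forecloses futures, so the agent tends to drift toward the middle.
        def step(state, action):
            return min(10, max(0, state + action))

        if __name__ == "__main__":
            state = 1
            for _ in range(15):
                state = step(state, choose_action(step, state))
            print("final position:", state)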
  • 6:36 - 6:41
    Amazingly, ENTROPICA was able to pass
    multiple animal intelligence tests,
  • 6:41 - 6:44
    play human games
    and even earn money trading stocks;
  • 6:44 - 6:46
    all without being instructed to do so.
  • 6:46 - 6:49
    Here are some examples
    of ENTROPICA in action:
  • 6:49 - 6:52
    just like a human standing upright
    without falling over, here we see
  • 6:52 - 6:56
    ENTROPICA automatically
    balancing a pole using a cart.
  • 6:56 - 7:00
    This behavior is remarkable, in part,
    because we never gave ENTROPICA a goal,
  • 7:00 - 7:04
    it simply decided on its own
    to balance the pole.
  • 7:04 - 7:07
    This balancing ability would have
    applications for humanoid robotics
  • 7:07 - 7:09
    and human assistive technologies.
  • 7:10 - 7:13
    Just as some animals can use
    objects in their environments
  • 7:13 - 7:15
    as tools to reach into narrow spaces,
  • 7:15 - 7:19
    here we see that ENTROPICA,
    again on its own initiative,
  • 7:19 - 7:22
    was able to move a large disk,
    representing an animal,
  • 7:22 - 7:25
    around so as to cause a small disk,
    representing a tool,
  • 7:25 - 7:28
    to reach into a confined space
    holding a third disk
  • 7:28 - 7:32
    and release the third disk
    from its initially fixed position.
  • 7:32 - 7:37
    This tool use ability would have applications
    for smart manufacturing and agriculture.
  • 7:37 - 7:40
    In addition, just as some other animals
    are able to cooperate
  • 7:40 - 7:44
    by pulling opposite ends of a rope
    at the same time to release food,
  • 7:44 - 7:47
    here we see that ENTROPICA
    is able to accomplish
  • 7:47 - 7:48
    a model version of that task.
  • 7:48 - 7:52
    This cooperative ability has interesting
    implications for economic planning
  • 7:52 - 7:55
    and a variety of other fields.
  • 7:55 - 7:59
    ENTROPICA is broadly applicable
    to a variety of domains.
  • 7:59 - 8:04
    For example, here we see it successfully
    playing a game of Pong against itself
  • 8:04 - 8:06
    illustrating its potential for gaming.
  • 8:08 - 8:10
    Here, we see ENTROPICA orchestrating
  • 8:10 - 8:13
    new connections on a social network
    where friends are constantly
  • 8:13 - 8:17
    falling out of touch and successfully
    keeping the network well connected.
  • 8:18 - 8:22
    This same network orchestration ability
    also has applications in health care,
  • 8:22 - 8:25
    energy and intelligence.
  • 8:25 - 8:29
    Here we see ENTROPICA directing
    the paths of a fleet of ships
  • 8:29 - 8:33
    successfully discovering and utilizing
    the Panama Canal to globally extend
  • 8:33 - 8:36
    its reach from the Atlantic
    to the Pacific.
  • 8:36 - 8:39
    By the same token, ENTROPICA
    is broadly applicable to problems
  • 8:39 - 8:43
    in autonomous defense,
    logistics and transportation.
  • 8:45 - 8:49
    Finally, here we see ENTROPICA
    spontaneously discovering and executing
  • 8:49 - 8:54
    a buy-low, sell-high strategy
    on a simulated range-traded stock
  • 8:54 - 8:57
    successfully growing assets
    under management exponentially.
  • 8:57 - 9:01
    This risk management ability
    would have broad applications
  • 9:01 - 9:03
    in finance and insurance.
  • 9:08 - 9:12
    AWG: So, what you've just seen
    is that a variety
  • 9:12 - 9:16
    of signature human
    intelligent cognitive behaviors
  • 9:16 - 9:19
    such as tool use and walking upright
  • 9:19 - 9:24
    and social cooperation, all follow
    from a single equation
  • 9:24 - 9:29
    which drives a system to maximize
    its future freedom of action.
  • 9:30 - 9:33
    Now, there's a profound irony here.
  • 9:33 - 9:38
    Going back to the beginning
    of the usage of the term robot,
  • 9:39 - 9:41
    the play R.U.R.,
  • 9:41 - 9:47
    there was always a concept
    that if we develop machine intelligence,
  • 9:47 - 9:53
    there will be a cybernetic revolt,
    that machines would rise up against us.
  • 9:53 - 9:59
    One major consequence of this work
    is that maybe all of these decades
  • 9:59 - 10:03
    we've had the whole concept
    of cybernetic revolt in reverse.
  • 10:04 - 10:07
    It's not that machines
    first become intelligent
  • 10:07 - 10:11
    and then megalomaniacal,
    and try to take over the world.
  • 10:11 - 10:16
    It's quite the opposite:
    that the urge to take control
  • 10:16 - 10:20
    of all possible futures
    is a more fundamental principle
  • 10:20 - 10:24
    than that of intelligence;
    that general intelligence may, in fact,
  • 10:24 - 10:28
    emerge directly from this sort
    of control grabbing,
  • 10:28 - 10:31
    rather than vice versa.
  • 10:33 - 10:36
    Another important consequence
    is goal seeking.
  • 10:37 - 10:42
    I'm often asked how does the ability
    to seek goals follow from this framework
  • 10:43 - 10:44
    and the answer is:
  • 10:44 - 10:48
    the ability to seek goals, for example
    if you're playing the game of chess,
  • 10:49 - 10:53
    to try to win that game of chess
    in order to accomplish worldly goods
  • 10:53 - 10:56
    and accomplishments outside of that game,
  • 10:56 - 10:59
    will follow directly from this
    in the following sense:
  • 11:00 - 11:04
    Just like you would travel
    through a tunnel, a bottleneck,
  • 11:04 - 11:07
    in your future path space
    in order to achieve many other
  • 11:07 - 11:11
    diverse objectives later on
    or just like you would invest
  • 11:11 - 11:15
    in a financial security reducing
    your short term liquidity
  • 11:15 - 11:18
    in order to increase your wealth
    over the long term,
  • 11:18 - 11:22
    goal seeking emerges directly
    from a long term drive
  • 11:22 - 11:26
    to increase future freedom of action.
  • 11:26 - 11:30
    Finally, the famous physicist
    Richard Feynman once wrote
  • 11:30 - 11:35
    that if human civilization were destroyed
    and you could pass only a single concept
  • 11:35 - 11:38
    on to our descendants
    to help them rebuild civilization,
  • 11:39 - 11:42
    that concept should be
    that all matter around us
  • 11:42 - 11:46
    is made out of tiny elements
    that attract each other
  • 11:46 - 11:48
    when they're far apart,
    but repel each other
  • 11:48 - 11:50
    when they're close together.
  • 11:50 - 11:53
    My equivalent to that statement
    to pass on to descendants
  • 11:53 - 11:56
    to help them build
    artificial intelligence,
  • 11:56 - 12:00
    or to help them to understand
    human intelligence, is the following:
  • 12:00 - 12:04
    Intelligence should be viewed
    as a physical process
  • 12:04 - 12:06
    that tries to maximize
    future freedom of action
  • 12:06 - 12:10
    and avoid constraints in its own future.
  • 12:10 - 12:11
    Thank you very much.
  • 12:11 - 12:14
    (Applause)
Title:
An equation for intelligence: Alex Wissner-Gross at TEDxBeaconStreet
Description:

What is the most intelligent way to behave? Dr. Wissner-Gross explains how the latest research findings in physics, computer science, and animal behavior suggest that the smartest actions -- from the dawn of human tool use all the way up to modern business and financial strategy -- are all driven by the single fundamental principle of keeping future options as open as possible.

Video Language:
English
Team:
TED
Project:
TEDxTalks
Duration:
12:16
  • I'm sending back the transcript because it needs further edits:
    1. Please break subtitles over 42 characters in length into two lines. If you cannot fit the line in 42 characters (even with some editing, like removing padding or slips of the tongue), split the subtitle into two separate subtitles.
    2. Please make sure that the reading speed in all subtitles doesn't go over 21 characters per second (you can do this by merging subtitles, editing the timing to let a subtitle run a little over into where the next bit of the talk is spoken, or reducing/compressing text in the subtitle). To learn more about this, please watch this tutorial: https://www.youtube.com/watch?v=yvNQoD32Qqo

    General notes: Please edit the description to follow the title and description standards (it should be short and about the talk, not about the speaker): http://translations.ted.org/wiki/How_to_Tackle_a_Transcript#Title_and_description_standard

  • Thanks for making those extra edits. I split subtitles that contained the end of one sentence and the beginning of another, changed the curly apostrophes to straight ones (see http://translations.ted.org/wiki/How_to_Tackle_a_Transcript#Avoiding_character_display_errors:_simple_quotes.2C_apostrophes_and_dashes) and fixed several typos.
