
35C3 - The Ghost in the Machine

  • 0:19 - 0:23
    Herald: I have the great pleasure to
    announce Joscha, who will give us a great
  • 0:23 - 0:26
    talk with the title "The Ghost in the
    Machine" and he will talk about
  • 0:26 - 0:33
    consciousness of our mind and of computers
    and somehow also tell us how we can learn
  • 0:33 - 0:38
    from A.I. systems about our own brains.
    And I think this is a very curious question.
  • 0:38 - 0:41
    So please give it up for Joscha.
  • 0:41 - 0:51
    Applause
  • 0:51 - 0:59
    Joscha: Good evening. This is the fifth
    talk in a series of talks on how to
  • 0:59 - 1:04
    get from computation to consciousness and
    to understand our condition in the
  • 1:04 - 1:09
    universe based on concepts that I mostly
    learned by looking at artificial
  • 1:09 - 1:17
    intelligence and computation and it mostly
    tackles the big philosophical questions:
  • 1:17 - 1:20
    What can I know? What is truth?
    Who am I? Which means the questions
  • 1:20 - 1:26
    of epistemology, of ontology, of
    metaphysics, and philosophy of mind and
  • 1:26 - 1:27
    ethics.
  • 1:27 - 1:31
    And to clear some of the terms
    that we are using here:
  • 1:31 - 1:34
    What is intelligence? What's a mind?
    What's a self? What's consciousness?
  • 1:34 - 1:38
    How are mind and consciousness
    realized in the universe?
  • 1:38 - 1:40
    Intelligence I think is the ability to
    make models.
  • 1:40 - 1:42
    It's not the same thing
    as being smart, which is the
  • 1:42 - 1:47
    ability to reach your goals or being wise,
    which is the ability to pick the right
  • 1:47 - 1:51
    goals. But it's just the ability to
    make models of things.
  • 1:51 - 1:54
    And you can regulate them later using
    these models, but you don't have to.
  • 1:54 - 1:57
    And the mind is the thing that observes
    the universe. The self
  • 1:57 - 2:01
    is an identification with
    properties and purposes:
  • 2:01 - 2:04
    what a thing thinks it is. And then
    you have consciousness, which is
  • 2:04 - 2:08
    the experience of what it's like
    to be a thing.
  • 2:08 - 2:11
    And how mind and consciousness
    are realized in the universe,
  • 2:11 - 2:14
    this is commonly called the
    mind-body problem and it's been
  • 2:14 - 2:20
    puzzling philosophers and people of
    all proclivities for thousands of years.
  • 2:20 - 2:25
    So what's going on? How's it possible that
    I find myself in a universe and I seem to
  • 2:25 - 2:31
    be experiencing myself in that universe?
    How does this go together? What's going on here?
  • 2:31 - 2:37
    The traditional answer to this is called
    dualism, and the conception of dualism,
  • 2:37 - 2:42
    in our culture at least, is the idea that
  • 2:42 - 2:46
    you have a physical world and a mental
    world and they coexist somehow and my mind
  • 2:46 - 2:50
    experiences this mental world and my body
    can do things in the physical world and
  • 2:50 - 2:54
    the difficulty of this dualist conception
    is how do these two planes of existence
  • 2:54 - 2:58
    interact. Because physics is defined as
    causally closed, everything that
  • 2:58 - 3:03
    influences things in the physical world is
    by itself an element of physics. So an
  • 3:03 - 3:07
    alternative is idealism which says that
    there is only a mental world. We only
  • 3:07 - 3:12
    exist in a dream and this dream is being
    dreamt by a mind on a higher plane of
  • 3:12 - 3:18
    existence. And the difficulty with this is that it's
    very hard to explain that mind on a higher
  • 3:18 - 3:22
    plane of existence. You just put it there; why
    is it doing this? And in our culture the
  • 3:22 - 3:27
    dominant theory is materialism, which says
    basically that there is only a physical world,
  • 3:27 - 3:32
    nothing else. And the physical world
    somehow is responsible for the creation of
  • 3:32 - 3:37
    the mental world. It's not quite clear how
    this happens. And the answer that I am
  • 3:37 - 3:44
    suggesting, is functionalism which means
    that indeed we exist only in a dream.
  • 3:44 - 3:49
    So these ideas of materialism and idealism
    are not in opposition. They are
  • 3:49 - 3:52
    complementary because this dream is being
    dreamt by a mind on a higher plane of
  • 3:52 - 3:57
    existence, but this higher plane of
    existence is the physical world. So we are
  • 3:57 - 4:03
    being dreamt in the neocortex of a primate
    that lives in a physical universe and the
  • 4:03 - 4:06
    world that we experience is not the
    physical world. It's a dream generated by
  • 4:06 - 4:10
    the neocortex - the same circuits that
    make dreams at night make them during the
  • 4:10 - 4:14
    day. You can show this, and you live in
    this virtual reality being generated in
  • 4:14 - 4:18
    there and the self as a character in that
    dream. And it seems to take care of
  • 4:18 - 4:22
    things. It seems to explain what's going
    on. It explains why a miracle seems to be
  • 4:22 - 4:26
    possible and why I can look into the
    future but cannot break the bank somehow.
  • 4:26 - 4:31
    And even though this theory explains this,
    shouldn't I be more agnostic? Are
  • 4:31 - 4:35
    there not alternatives that I should be
    considering? Maybe the narratives of our
  • 4:35 - 4:41
    big religions and so on. I think we should
    be agnostic. So the first rule of
  • 4:41 - 4:46
    epistemology says that the confidence in
    the belief must equal the weight of the
  • 4:46 - 4:49
    evidence supporting it. Once we settle on
    that rule you can test all the
  • 4:49 - 4:54
    alternatives and see if one of them is
    better. And I think what this means is you
  • 4:54 - 4:58
    have to have all the possible beliefs, you
    should entertain them all. But you should
  • 4:58 - 5:01
    not have any confidence in them. You
    should shift your confidence around based
  • 5:01 - 5:06
    on the evidence. So for instance it is
    entirely possible that this universe was
  • 5:06 - 5:09
    created by a supernatural being, and it's
    a big conspiracy, and it actually has
  • 5:09 - 5:13
    meaning and it cares about us and our
    existence here means something.
  • 5:13 - 5:17
    But there is no experiment that can
    validate this. A guy coming down from a
  • 5:17 - 5:21
    mountain, with a burning
    bush that you've talked to on a
  • 5:21 - 5:28
    mountaintop? That's not the kind of experiment
    that gives you valid evidence, right?
  • 5:28 - 5:33
    So intelligence is the ability to
    make models and intelligence is a property
  • 5:33 - 5:37
    that is beyond the grasp of a single
    individual. A single individual is not
  • 5:37 - 5:41
    that smart. We cannot even figure out
    Turing-complete languages all by ourselves.
  • 5:41 - 5:45
    To do this you need an intellectual
    tradition that lasts a few hundred years
  • 5:45 - 5:50
    at least. So civilizations have more
    intelligence than individuals. But
  • 5:50 - 5:54
    individuals often have more intelligence
    than groups and whole generations and
  • 5:54 - 5:59
    that's because groups and generations tend
    to converge on ideas; they have consensus
  • 5:59 - 6:03
    opinions. I'm very wary of consensus
    opinions because you know how hard it is
  • 6:03 - 6:06
    to understand which programming language
    is the best one for which purpose. There
  • 6:06 - 6:10
    is no proper consensus. And that's a
    relatively easy problem. So when there's a
  • 6:10 - 6:14
    complex topic and all the experts agree,
    there are forces at work that are
  • 6:14 - 6:17
    different than the forces that make them
    search for truth. These consensus-building
  • 6:17 - 6:21
    forces, they're very suspicious to me. And
    if you want to understand what's true you
  • 6:21 - 6:25
    have to look for means and motive. And you
    have to be autonomous in doing this, so
  • 6:25 - 6:29
    individuals typically have better ideas
    than generations or groups. But as I
  • 6:29 - 6:33
    said, civilizations have more intelligence
    than individuals. What does a
  • 6:33 - 6:37
    civilizational intellect look like? The
    civilization intellect is something like a
  • 6:37 - 6:40
    global optimum of the modeling function.
    It's something that has to be built over
  • 6:40 - 6:44
    thousands of years in an unbroken
    intellectual tradition. And guess what,
  • 6:44 - 6:47
    this doesn't really exist in human
    history. Every few hundred years, there's
  • 6:47 - 6:51
    some kind of revolution. Somebody opens
    the doors to the knowledge factories and
  • 6:51 - 6:55
    gets everybody out and burns down the
    libraries. And a couple generations later,
  • 6:55 - 6:59
    the knowledge worker drones of the new
    king realize "Oh my God we need to rebuild
  • 6:59 - 7:03
    this thing, this intellect." And then they
    create something in its likeness, but they
  • 7:03 - 7:08
    make mistakes in the foundation. So this
    intellect tends to have scars. Like our
  • 7:08 - 7:12
    civilizational intellect has a lot of scars
    in it, that make it difficult
  • 7:12 - 7:17
    to understand concepts like self
    and consciousness and mind. So, the mind
  • 7:17 - 7:20
    is something that observes the universe,
    and the neurons and neurotransmitters are
  • 7:20 - 7:23
    the substrate of the human intellect. And
    the working memory is the current binding
  • 7:23 - 7:27
    state: how the different elements fit
    together in our mind. And the self is the
  • 7:27 - 7:31
    identification: what we think we are and
    what we want to happen. And consciousness
  • 7:31 - 7:35
    is the contents of our attention, it makes
    knowledge available throughout the mind.
  • 7:35 - 7:39
    And civilizational intellect is very
    similar: society observes the universe,
  • 7:39 - 7:42
    people and resources are the substrate,
    the generation is the current binding
  • 7:42 - 7:47
    state, and culture is the identification
    with what we think we are and what we want
  • 7:47 - 7:52
    to happen. And media is the contents of
    our attention and makes knowledge available
  • 7:52 - 7:56
    throughout society. So the culture is
    basically the self of civilization, and
  • 7:56 - 8:00
    media is its consciousness. How is it
    possible to model a universe? Let's take a
  • 8:00 - 8:05
    very simple universe like the Mandelbrot
    fractal. It can be defined by a little bit
  • 8:05 - 8:09
    of code. It's a very simple thing, you just
    take a pair of numbers, you square it, you
  • 8:09 - 8:14
    add the same pair of numbers. And you do
    this infinitely often, and typically this
  • 8:14 - 8:19
    goes to infinity very fast. There's a
    small area around the origin of the number
  • 8:19 - 8:25
    pair, so between -1 and +1 and
    so on, where this
  • 8:25 - 8:28
    converges, where it doesn't go to infinity
    and that is where you make black dots and
  • 8:28 - 8:33
    then you get this famous structure, the
    Mandelbrot fractal.
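
    (A minimal sketch in Python of the little bit of code he is describing; the
    grid bounds, resolution, and iteration cutoff below are illustrative choices,
    not from the talk:)

    import numpy as np

    def mandelbrot(width=400, height=300, max_iter=100):
        # take a pair of numbers c, square the running value and add c again:
        # z -> z*z + c, repeated; keep the points that do not run off to infinity
        xs = np.linspace(-2.0, 1.0, width)
        ys = np.linspace(-1.2, 1.2, height)
        c = xs[None, :] + 1j * ys[:, None]
        z = np.zeros_like(c)
        inside = np.ones(c.shape, dtype=bool)
        for _ in range(max_iter):
            z = np.where(inside, z * z + c, z)     # only iterate points still bounded
            inside &= np.abs(z) <= 2.0             # escaping points diverge to infinity
        return inside                              # True = "black dot" in the picture

    print(mandelbrot(80, 40).sum(), "grid points did not diverge")
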
  • 8:33 - 8:37
    And because the divergence and convergence
    of this function can take many loops and turns,
  • 8:37 - 8:41
    you get a very complicated shape,
    an infinitely
  • 8:41 - 8:45
    complicated outline. So there is an
    infinite amount of structure in this
  • 8:45 - 8:48
    fractal. And now imagine you happen
    to live in this fractal and you are in a
  • 8:48 - 8:53
    particular place in it, and you don't know
    where that place is. You
  • 8:53 - 8:55
    don't even know the generator function of
    the whole thing. But you can still predict
  • 8:55 - 8:58
    your neighborhood. So you can see: oh,
    I'm in some kind of a spiral, it turns
  • 8:58 - 9:02
    to the left, goes to the left, and goes
    to the left, and becomes smaller, so we can
  • 9:02 - 9:06
    predict it, and suddenly it ends. Why does it
    end? A singularity. Oh, it hits another
  • 9:06 - 9:09
    spiral. There's a law when a spiral hits
    another spiral, it ends. And something
  • 9:09 - 9:14
    else happens. So you look and then you see
    oh, there are certain circumstances where
  • 9:14 - 9:17
    you have, for instance, an even number of
    spirals hitting each other instead of an
  • 9:17 - 9:21
    odd number. And then you discover another
    law. And if you make like 50 levels
  • 9:21 - 9:25
    of these laws, you have a good
    description that locally compresses the
  • 9:25 - 9:29
    universe. So the Mandelbrot fractal is
    locally compressible. You find local
  • 9:29 - 9:32
    order that predicts the neighborhood if
    you are inside of that fractal. The global
  • 9:32 - 9:35
    modelling function of the Mandelbrot
    fractal is very, very easy. It's an
  • 9:35 - 9:40
    interesting question: how difficult is the
    global modelling function of our universe?
  • 9:40 - 9:43
    Even if we knew it, maybe it wouldn't
    help us that much. It will be a big
  • 9:43 - 9:46
    breakthrough for physics when we finally
    find it, and I suspect it will be much shorter than the
  • 9:46 - 9:53
    standard model, but we would still
    not know where we are in it. And this means we
  • 9:53 - 9:56
    need to make a local model of what's
    happening. So in order to do this we
  • 9:56 - 10:00
    separate the universe into things. Things
    are small state spaces and transition
  • 10:00 - 10:05
    functions that tell you how to get from
    state to state. And if the function is
  • 10:05 - 10:08
    deterministic it is independent of time,
    it gives the same result every time you
  • 10:08 - 10:13
    call it. For an indeterministic function
    it gives a different result every time, so
  • 10:13 - 10:17
    it doesn't compress well. And causality
    means that you have separated several
  • 10:17 - 10:20
    things and they influence each other's
    evolution through a shared interface.
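
    (A toy sketch of this way of modelling, with invented names: two "things",
    each a small state plus a transition function, coupled only through a shared
    interface value:)

    import random

    def thermostat(state, measured_temp):
        # deterministic transition: the same inputs always give the same result
        heater_on = measured_temp < state["target"]
        return state, heater_on

    def room(state, heater_on):
        # indeterministic transition: a noise term makes it compress badly
        drift = 0.5 if heater_on else -0.3
        state["temp"] += drift + random.uniform(-0.05, 0.05)
        return state, state["temp"]

    thermo_state, room_state, temp = {"target": 21.0}, {"temp": 18.0}, 18.0
    for _ in range(20):
        thermo_state, heater_on = thermostat(thermo_state, temp)   # temperature flows in,
        room_state, temp = room(room_state, heater_on)             # heater signal flows out
    print(round(temp, 2))
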
  • 10:20 - 10:24
    Right? So causality is an artifact of
    describing the universe as separate
  • 10:24 - 10:28
    things. And the universe is not separate
    things, it's one thing, but we have to
  • 10:28 - 10:33
    describe it as separate things because we
    cannot observe the whole thing. So what's
  • 10:33 - 10:37
    true? There seems to be a particular way
    in which the universe is, and
  • 10:37 - 10:40
    those are the ground rules of the universe,
    and they're inaccessible to us. And what's
  • 10:40 - 10:45
    accessible to us is our own models of the
    universe, the only thing that we can
  • 10:45 - 10:48
    experience, and these are basically a set
    of theories that can explain the
  • 10:48 - 10:52
    observations. And truth in this sense is a
    property of language and there are
  • 10:52 - 10:57
    different languages that we can use like
    geometry and natural language and so on
  • 10:57 - 11:00
    as ways of representing and changing
    models, and several
  • 11:00 - 11:06
    intellectual traditions have developed
    their own languages. And this has led to
  • 11:06 - 11:10
    problems. Our civilization basically has
    as its founding myth this attempt to build
  • 11:10 - 11:15
    this global optimum modelling function.
    This is a tower that is meant to reach the
  • 11:15 - 11:18
    heavens. And it fell apart because people
    spoke different languages. The different
  • 11:18 - 11:21
    practitioners in the different fields
    didn't understand each other, and the
  • 11:21 - 11:25
    whole building collapsed. And this is in
    some sense the origin of our present
  • 11:25 - 11:28
    civilization and we are trying to mend
    this and find better languages. So whom
  • 11:28 - 11:32
    can we turn to? We can turn to the
    mathematicians maybe because mathematics
  • 11:32 - 11:36
    is the domain of all languages.
    Mathematics is really cool when you think
  • 11:36 - 11:40
    about it. It's a universal code library,
    maintained for several centuries in its
  • 11:40 - 11:44
    present form. There is not even version
    management, it's one version. There is a
  • 11:44 - 11:48
    pretty much unified namespace. They have
    to use a lot of Unicode to make it
  • 11:48 - 11:52
    happen. It's ugly but there you go! It has
    no central maintainers, not even a code of
  • 11:52 - 11:55
    conduct, beyond what you can infer
    yourself.
  • 11:55 - 11:58
    laughter
    But there are some problems at the
  • 11:58 - 12:06
    foundation that they discovered.
    Shouted from the audience (in German): "a very stable..."
  • 12:06 - 12:10
    Joscha: Can you infer this is a good
    conduct? (inaudible)
  • 12:10 - 12:17
    Yelling from the audience: Ya!
    Joscha: Okay. Power to you.
  • 12:17 - 12:21
    laughter
    Joscha: In 1874, Cantor discovered, when looking
  • 12:21 - 12:25
    at the cardinality of sets, that when you
    describe natural numbers using set
  • 12:25 - 12:30
    theory, the cardinality of a set
    grows slower than the cardinality of the
  • 12:30 - 12:33
    set of its subsets. So if you look at the
    set of the subsets of the set, it's always
  • 12:33 - 12:38
    larger than the number
    of members of the set. Clear? Right. If
  • 12:38 - 12:42
    you take the infinite set, it has
    infinitely many members: omega. You
  • 12:42 - 12:46
    take the cardinality of the set of the
    subsets of the infinite set, it's also an
  • 12:46 - 12:50
    infinite number, but it's a larger one. So
    it's a number that is larger than the
  • 12:50 - 12:55
    previous omega. Okay that's fine. Now we
    have the cardinality of the set of all
  • 12:55 - 12:58
    sets. You make the total set: The set
    where you put all the sets that could
  • 12:58 - 13:02
    possibly exist and put them all together,
    right? That has also infinitely many
  • 13:02 - 13:05
    members, and it has more than the
    cardinality of the set of the subsets of
  • 13:05 - 13:09
    the infinite set. That's fine. But now you
    look at the cardinality of the set of all
  • 13:09 - 13:14
    the subsets of the total set. The problem
    is, that the total set also contains the
  • 13:14 - 13:18
    set of its subsets, right? It's because it
    contains all the sets. Now you have a
  • 13:18 - 13:22
    contradiction: Because the cardinality of
    the set of the subsets of the total set is
  • 13:22 - 13:27
    supposed to be larger. And yet it seems to
    be the same set and not the same set. It's
  • 13:27 - 13:32
    an issue! So mathematicians got puzzled
    about this, and the philosopher Bertrand
  • 13:32 - 13:35
    Russell said: "Maybe we just exclude those
    sets that contain themselves",
  • 13:35 - 13:39
    right? We only look at the set of sets
    that don't contain themselves. Isn't that
  • 13:39 - 13:43
    a solution? Now the problem is: Does the
    set of the sets that don't contain
  • 13:43 - 13:47
    themselves contain itself? If it does, it
    doesn't, and if it doesn't, it does.
  • 13:47 - 13:52
    That's an issue!
    laughter
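
    (A compact restatement of the two arguments just described, in standard notation:)

    \textbf{Cantor:} for any set $S$ there is no surjection $f : S \to \mathcal{P}(S)$.
    Given such an $f$, let $D = \{\, x \in S : x \notin f(x) \,\}$; if $D = f(d)$ for some
    $d \in S$, then $d \in D \iff d \notin D$, a contradiction. Hence $|S| < |\mathcal{P}(S)|$.
    For a "set of all sets" $V$, every subset of $V$ is itself a set, so
    $\mathcal{P}(V) \subseteq V$ and $|\mathcal{P}(V)| \le |V|$, contradicting the above.

    \textbf{Russell:} let $R = \{\, x : x \notin x \,\}$; then $R \in R \iff R \notin R$.
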
  • 13:52 - 13:56
    So David Hilbert, who was some
    kind of a community manager back then,
  • 13:56 - 14:00
    said: "Guys, fix this! This is an issue,
    mathematics is precious, we are in
  • 14:00 - 14:05
    trouble. Please solve metamathematics."
    And people got to work. And after a short
  • 14:05 - 14:08
    amount of time Kurt Gödel, who had looked
    at this in earnest, said "oh that's an
  • 14:08 - 14:11
    issue. You know, as soon as we allow these
    kinds of loops - and we cannot really
  • 14:11 - 14:16
    exclude these loops - then our mathematics
    crashes." So that's an issue, it's called
  • 14:16 - 14:22
    Unentscheidbarkeit (undecidability). And then Alan Turing
    came along a couple of years later, and he
  • 14:22 - 14:24
    constructed a computer to make that proof.
    He basically said "If you build a machine
  • 14:24 - 14:28
    that does these mathematics, and the
    machine takes infinitely many steps,
  • 14:28 - 14:32
    sometimes, for making a proof, then we
    cannot know whether this proof
  • 14:32 - 14:36
    terminates." So it's a similar issue for
    the Unentscheidbarkeit. That's a big
  • 14:36 - 14:39
    issue, right? So we cannot basically build
    a machine in mathematics that runs
  • 14:39 - 14:45
    mathematics without crashing. But the good
    news is, Turing didn't stop working there
  • 14:45 - 14:49
    and he figured out together with Alonzo
    Church - not together, independently but
  • 14:49 - 14:54
    at the same time - that we can build a
    computational machine, that runs all of
  • 14:54 - 14:59
    computation. So computation is a universal
    thing. And it's almost as good as
  • 14:59 - 15:03
    mathematics. Computation is constructive
    mathematics. The tiny, neglected subset of
  • 15:03 - 15:06
    mathematics, where you have to show the
    money. In order to say that something is
  • 15:06 - 15:11
    true, you have to find that object that is
    true. You have to actually construct it.
  • 15:11 - 15:14
    So there are no infinities, because you
    cannot construct an infinity. You add
  • 15:14 - 15:19
    things and you have unboundedness maybe,
    but not infinity. And so this part of
  • 15:19 - 15:24
    mathematics, computation, is the one that
    can be implemented. It's constructive
  • 15:24 - 15:27
    mathematics. It's the good part. And
    computing, a computer is very easy to
  • 15:27 - 15:31
    make, and all universal computers have the
    same power. That's called the Church-Turing
  • 15:31 - 15:37
    thesis. And Turing didn't even stop
    there. The obvious conclusion is that
  • 15:37 - 15:40
    human minds are probably not in the class
    of these mathematical machines, that even
  • 15:40 - 15:44
    God doesn't know how to build if it has to
    be done in any language. But the mind is a
  • 15:44 - 15:48
    computational machine. And it also means
    that all mathematics that human minds
  • 15:48 - 15:50
    ever encounter
  • 15:50 - 15:56
    will be computational mathematics.
    So how can you bridge the gap
  • 15:56 - 16:00
    from mathematics to philosophy? Can we
    find a language that is more powerful than
  • 16:00 - 16:03
    most of the languages that we use in
    mathematics, which are very narrowly
  • 16:03 - 16:08
    defined languages, where for every symbol we know
    exactly what it means?
  • 16:08 - 16:09
    When we look at the real world,
  • 16:09 - 16:11
    we often don't know what things mean,
    and our concepts, we're not quite
  • 16:11 - 16:15
    sure what they mean. Like culture is a
    very vague, ambiguous concept. So what I
  • 16:15 - 16:20
    said is only approximately true there. Can
    we deal with this conceptual ambiguity?
  • 16:20 - 16:24
    Can we build a programming language for
    thought, where words mean things that
  • 16:24 - 16:28
    they're supposed to mean? And this was the
    project of Ludwig Wittgenstein. He just
  • 16:28 - 16:33
    came back from the war and had a lot of
    thoughts. Then he put these thoughts
  • 16:33 - 16:38
    into a book which is called the Tractatus.
    And it's one of the most beautiful books
  • 16:38 - 16:42
    in the philosophy of the 20th century. And
    it starts with the words (in German): "The
  • 16:42 - 16:47
    world is everything that is the case. The world
    is the totality of facts, not of things.
  • 16:47 - 16:54
    The world is determined by the facts, and
    by these being all the facts.",
  • 16:54 - 16:57
    and so on. This book is about 75 pages long and
    it's a single thought. It's not meant to
  • 16:57 - 17:02
    be an argument to convince a philosopher.
    It's an attempt by a guy who was basically
  • 17:02 - 17:06
    a coder, an AI scientist, to reverse
    engineer the language of his own thinking.
  • 17:06 - 17:11
    And make it deterministic, to make it
    formal, to make it mean something. And he
  • 17:11 - 17:15
    felt back then that he was successful, and
    had a tremendous impact on philosophy,
  • 17:15 - 17:19
    which was largely devastating, because the
    philosophers didn't know what he was on
  • 17:19 - 17:23
    about. They thought it's about natural
    language and not about coding.
  • 17:23 - 17:25
    And he wrote this in 1918
  • 17:25 - 17:29
    so before Alan Turing defined
    what a computer is. But he could already
  • 17:29 - 17:34
    smell what a computer is. He already knew
    about the universality of computation. He knew
  • 17:34 - 17:37
    that a NAND gate is sufficient to express
    all of boolean algebra, and that it's
    equivalent to other such primitives.
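
    (A small Python illustration of that point, not from the talk: every Boolean
    operation can be built from NAND alone:)

    def nand(a, b): return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    assert [xor_(a, b) for a in (0, 1) for b in (0, 1)] == [False, True, True, False]
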
  • 17:37 - 17:43
    So what he basically did,
    was, he pre-empted the logicists' program
  • 17:43 - 17:48
    of artificial intelligence which started
    much later in the 1950s. And he ran into
  • 17:48 - 17:51
    troubles with it. In the end he wrote the
    book "Philosophical Investigations", where
  • 17:51 - 17:57
    he concluded that his project basically
    failed, because the
  • 17:57 - 18:02
    world is too complex and too ambiguous for
    this to work. And symbolic AI was mostly
  • 18:02 - 18:05
    similar to Wittgenstein's program. So
    classical AI is symbolic. You analyze a
  • 18:05 - 18:10
    problem, you find an algorithm to solve
    it. And what we now have in AI, is mostly
  • 18:10 - 18:14
    sub-symbolic. So we have algorithms, that
    learn the solution of a problem by
  • 18:14 - 18:18
    themselves. And it's tempting to think
    that the next thing we will have will be
  • 18:18 - 18:23
    meta-learning. That you have algorithms,
    that learn to learn the solution to the
  • 18:23 - 18:28
    problem. Meanwhile, let's look at how we
    can make models. Information is a
  • 18:28 - 18:31
    discernible difference. It's about change.
    All information is about change. The
  • 18:31 - 18:34
    information that is not about change cannot
    have a visible causal effect on the world,
  • 18:34 - 18:39
    because it stays the same, right? And the
    meaning of information is its relationship
  • 18:39 - 18:43
    to change in other information. So if you
    see a blip on your retina, the meaning
  • 18:43 - 18:47
    of that blip on your retina is the
    relationships you discover to other blips
  • 18:47 - 18:50
    on your retina. It could be for instance,
    if you see a sequence of such blips, that
  • 18:50 - 18:55
    are adjacent to each other (a first-order
    model), you see a moving dust mote or a
  • 18:55 - 18:59
    moving dot on your retina. And a higher
    order model makes it possible to
  • 18:59 - 19:02
    understand: "Oh, it's part of something
    larger! There's people moving in a three
  • 19:02 - 19:06
    dimensional room and they exchange
    ideas." And this is maybe the best model
  • 19:06 - 19:09
    you end up with. That's the local
    compression, that you can make of your
  • 19:09 - 19:13
    universe, based on correlating blips on
    your retina. And for those blips where you
  • 19:13 - 19:17
    don't find a relationship, which is a
    function that your brain can compute,
  • 19:17 - 19:22
    they are noise. And there's a lot of noise
    on our retina, too. So what's a function?
  • 19:22 - 19:26
    A function is basically a gear box: It has
    n input levers and 1 output lever.
  • 19:26 - 19:31
    And when you move the input levers they
    translate to movement of the output
  • 19:31 - 19:34
    lever, right? And the function can be
    realized in many ways: maybe you cannot
  • 19:34 - 19:39
    open the gear box, and what happens in
    this function could be, for instance, two
  • 19:39 - 19:43
    sprockets, which do this. Or you can have
    the same result with levers and pulleys.
  • 19:43 - 19:49
    And so you don't know what's inside, but
    you can express what it does: two times
  • 19:49 - 19:53
    the input value, right? And you can have a
    more difficult case, where you have
  • 19:53 - 19:56
    several input values and they all
    influence the output value. So how do you
  • 19:56 - 20:00
    figure it out? A way to do this, is, you
    only move one input value at a time and
  • 20:00 - 20:03
    you wiggle it a little bit at every
    position and see how much this translates
  • 20:03 - 20:09
    into wiggling of the output value. This is
    what we call taking a partial derivative.
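
    (A minimal sketch of that wiggling in Python; the example gearbox function,
    the probe point, and the step size are made up:)

    def gearbox(x, y, z):
        # some opaque mechanism: we only move the levers and watch the output
        return 2 * x + x * y - 0.5 * z

    def partial(f, args, i, h=1e-6):
        # wiggle input i a tiny bit and see how much the output wiggles
        plus, minus = list(args), list(args)
        plus[i] += h
        minus[i] -= h
        return (f(*plus) - f(*minus)) / (2 * h)

    point = [1.0, 3.0, 2.0]
    print([round(partial(gearbox, point, i), 3) for i in range(3)])   # ~[5.0, 1.0, -0.5]
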
  • 20:09 - 20:13
    And it's simple to do this
    for this case where you just have to
  • 20:13 - 20:17
    multiply it by two. And the bad case is
    like this: you have a combination lock and
  • 20:17 - 20:21
    it has maybe a 1000-bit input value, and
    only if you have exactly the right
  • 20:21 - 20:26
    combination of the input bits you have a
    movement of the output bit. And you're not
  • 20:26 - 20:31
    going to figure this out until your sun
    burns out, right? So there's no way you
  • 20:31 - 20:35
    can decipher this function. And the
    functions that we can model are somewhere
  • 20:35 - 20:39
    in between, something like this: So you
    have 40 million input images and you want
  • 20:39 - 20:44
    to find out, whether one of these images
    displays a cat, or a dog, or something
  • 20:44 - 20:48
    else. So what can you do with this? You
    cannot do this all at once, right? So you
  • 20:48 - 20:51
    need to take this image classifier
    function and disassemble it into small
  • 20:51 - 20:54
    functions that are very well-behaved, so
    you know what to do with them. And an
  • 20:54 - 21:00
    example for such a function is this one:
    it's one, where you have this input
  • 21:00 - 21:07
    lever and it translates to the output
    value with a pulley. And it has some
  • 21:07 - 21:11
    stopper that limits the movement of the
    output value. And you have some pivot. And
  • 21:11 - 21:16
    you can take this pivot and you can shift
    it around. And by shifting this pivot, you
  • 21:16 - 21:21
    decide, how much the input value
    contributes to the output value. Right, so
  • 21:21 - 21:25
    you shift it, and you can even make it
    negative, so it shifts in the opposite
  • 21:25 - 21:30
    direction, if you shift it beyond the
    connection point of the pulley. And you
  • 21:30 - 21:33
    can also have multiple input values, that
    use the same pulley and pull together,
  • 21:33 - 21:38
    right? So they add up to the output
    value. That's a pretty nice, neat function
  • 21:38 - 21:44
    approximator, that basically performs a
    weighted sum of the input values, and maps
  • 21:44 - 21:52
    it to a range-constrained output value.
    And you can now shift these pivots, these
  • 21:52 - 21:56
    weights around to get to different output
    values. Now let's take this thing and
  • 21:56 - 22:01
    build it into lots of layers, so the
    outputs are the inputs of the next layer.
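
    (A minimal sketch of such a unit and a stack of layers in Python with NumPy;
    the sizes and the squashing function are arbitrary choices:)

    import numpy as np

    def unit(inputs, weights, bias):
        # weighted sum of the inputs (the weights play the role of the pivots),
        # squashed into a constrained range (the role of the stopper)
        return np.tanh(np.dot(weights, inputs) + bias)

    def layer(inputs, W, b):
        return np.tanh(W @ inputs + b)             # each row of W is one unit

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)                                  # input values
    h = layer(x, rng.normal(size=(8, 16)), np.zeros(8))      # outputs feed the next layer
    y = layer(h, rng.normal(size=(4, 8)), np.zeros(4))
    print(y)
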
  • 22:01 - 22:05
    And now you connect this to your image. If
    you use ImageNet, the famous database that
  • 22:05 - 22:09
    I mentioned earlier, that people use for
    testing their vision algorithms, you have
  • 22:09 - 22:14
    something like one and a half million bits
    as an input image. Now you take these
  • 22:14 - 22:18
    bits and connect them to the input layer.
    I was too lazy to draw all of them, so I
  • 22:18 - 22:22
    made this very simplified; there are also more
    layers in reality. And so you set them according to
  • 22:22 - 22:27
    the bits of the input image, and then this
    will propagate the movement of the input
  • 22:27 - 22:31
    layer to the output. And the output will
    move and it will point to some direction,
  • 22:31 - 22:35
    which is usually the wrong one. Now, to
    make this better, you train it. And you do
  • 22:35 - 22:38
    this by taking this output lever and shifting
    it a little bit, not too much, into the
  • 22:38 - 22:42
    right direction. If you do it too much,
    you destroy everything you did before.
  • 22:42 - 22:47
    And now you will see, how much, in which
    direction you need to shift the pivots, to
  • 22:47 - 22:52
    get the result closer to the desired
    output value, and how much each of the
  • 22:52 - 22:56
    inputs contributed to the mistakes, so to
    the error. And you take this error and you
  • 22:56 - 23:01
    propagate it backwards. It's called back
    propagation. And you do this quite often.
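
    (A toy version of that training loop in Python with NumPy: a two-layer network
    on made-up data; the learning rate is the "shift it a little bit, not too much":)

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))                             # 200 tiny "images", 4 features
    y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)  # made-up target labels

    W1, b1 = rng.normal(size=(4, 8)) * 0.5, np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.1                                                  # shift the pivots only a little

    for step in range(2000):
        h = np.tanh(X @ W1 + b1)                              # forward pass through the layers
        out = sigmoid(h @ W2 + b2)
        err = out - y                                         # how far the output lever is off
        dW2 = h.T @ err / len(X)                              # propagate the error backwards
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ dh / len(X)
        W2 -= lr * dW2; b2 -= lr * err.mean(axis=0)           # nudge every pivot a little
        W1 -= lr * dW1; b1 -= lr * dh.mean(axis=0)

    print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
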
  • 23:01 - 23:05
    So you do this for tens of thousands of
    images. If you do just character
  • 23:05 - 23:09
    recognition, then it's a very simple thing:
    a few thousand or ten thousand
  • 23:09 - 23:13
    examples will be enough. And for something
    like your image database you need lots and
  • 23:13 - 23:17
    lots more data. You need millions of
    input images to get to any result. And if
  • 23:17 - 23:21
    it doesn't work, you just try a different
    arrangement of layers. And the thing is
  • 23:21 - 23:25
    eventually able to learn an algorithm with
    up to as many steps as there are
  • 23:25 - 23:31
    layers, and it has some difficulties learning
    loops, you need tricks to make that
  • 23:31 - 23:36
    happen, and it's difficult to make this
    dynamic, and so on. And it's a bit
  • 23:36 - 23:40
    different from what we do, because our
    mind is not trained on classification tasks.
  • 23:40 - 23:44
    It learns via continuous perception, so
    we learn a single function. A model of the
  • 23:44 - 23:49
    universe is not a bunch of classifiers,
    it's one single function. An operator that
  • 23:49 - 23:53
    explains all your sensory data and we call
    this operator the universe, right?
  • 23:53 - 23:57
    It's the world, that we live in. And every
    thing that we learn and see is part of this
  • 23:57 - 24:00
    universe. So even when you see something
    in a movie on a screen, you explain this
  • 24:00 - 24:03
    as part of the universe by telling
    yourself "the things that I'm seeing here,
  • 24:03 - 24:06
    they're not real. They just happen in a
    movie." So this brackets a sub-part of
  • 24:06 - 24:10
    this universe into a sub-element of this
    function. So you can deal with it and it
  • 24:10 - 24:14
    doesn't contradict the rest. And the
    degrees of freedom of our model try to
  • 24:14 - 24:18
    match the degrees of freedom of the
    universe. How can we get a neural network
  • 24:18 - 24:23
    to do this? So, there are many tricks. And
    a recent trick that has been invented is a
  • 24:23 - 24:27
    GAN. It's a Generative Adversarial
    Network. It consists of two networks: one
  • 24:27 - 24:31
    generator that invents data that looks
    like the real world, and the discriminator
  • 24:31 - 24:36
    that tries to find out, if the stuff that
    the generator produces is real or fake.
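
    (A toy sketch of that adversarial setup, not the StyleGAN discussed here, in
    Python with PyTorch; network sizes, the 1-D data distribution, and learning
    rates are illustrative assumptions:)

    import torch
    import torch.nn as nn

    real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # the "real world": samples of N(2, 0.5)
    noise = lambda n: torch.randn(n, 8)                    # latent input for the generator

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    loss = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(5000):
        # train the discriminator to call real samples 1 and generated samples 0
        d_opt.zero_grad()
        d_loss = loss(discriminator(real_data(64)), torch.ones(64, 1)) + \
                 loss(discriminator(generator(noise(64)).detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()
        # train the generator to make the discriminator call its fakes 1 ("real")
        g_opt.zero_grad()
        g_loss = loss(discriminator(generator(noise(64))), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()
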
  • 24:36 - 24:41
    And they both get trained with each other.
    So they together get better and better in
  • 24:41 - 24:45
    an adversarial competition. And the
    results of this are now really good. So
  • 24:45 - 24:50
    this is work by Tero Karras, Samuli Laine
    and Timo Aila, that they did at NVIDIA
  • 24:50 - 24:57
    this year and it's called StyleGAN. And
    this StyleGAN is able to abstract over
  • 24:57 - 25:01
    different features and combine them. The
    styles are basically parameters, they're
  • 25:01 - 25:05
    free variables of the model at different
    levels of importance. And so,
  • 25:05 - 25:11
    in the top row, you see images from which
    it takes the variables: gender, age, hair
  • 25:11 - 25:14
    length, and so on, and glasses and pose.
    And in the bottom row it takes
  • 25:14 - 25:17
    everything else and combines this, and
    every time you get a
  • 25:17 - 25:21
    valid interpolation between them.
  • 25:21 - 25:27
    drinks water
  • 25:37 - 25:38
    So, you have these coarse styles,
    which are:
  • 25:38 - 25:42
    the pose, the hair, the face shape,
    your facial features and the eyes,
  • 25:42 - 25:47
    the lowest level is just the colors. Let's
    see what happens if you combine them.
  • 25:59 - 26:02
    The variables that change here, in machine
    learning, we call them the latent
  • 26:02 - 26:05
    variables
  • 26:05 - 26:10
    of the space of objects that is being
    described by this.
  • 26:10 - 26:15
    And it's tempting to think, that this is
    quite similar to how our imagination works
  • 26:15 - 26:20
    right? But these artificial neurons, they
    are very, very different from what
  • 26:20 - 26:24
    biological neurons do. Biological neurons
    are essentially little animals, that are
  • 26:24 - 26:27
    rewarded for firing at the right moment.
    And they try to fire because otherwise
  • 26:27 - 26:30
    they do not get fed, and they die, because
    the organism doesn't need them, and
  • 26:30 - 26:34
    culls them. And they learn which
    environmental states predict anticipated
  • 26:34 - 26:38
    reward. So they grow around and find
    different areas that give them predictions
  • 26:38 - 26:44
    of when they should fire. And they connect
    with each other to form small collectives,
  • 26:44 - 26:48
    that are better at this task of predicting
    anticipated reward. And as a side effect
  • 26:48 - 26:52
    they produce exactly the regulation that
    the organism needs. Basically they learn,
  • 26:52 - 26:56
    what the organism feeds them for.
  • 26:56 - 26:58
    And yet they're able
    to learn very similar things.
  • 26:58 - 27:02
    And it's because, in some sense, they are
    Turing complete. They are machines that
  • 27:02 - 27:06
    are able to learn the statistics of the
    data.
  • 27:06 - 27:08
    So, a general model: What it does, is,
  • 27:08 - 27:12
    it encodes patterns to predict other
    present and future patterns. And it's a
  • 27:12 - 27:16
    network of relationships between the
    patterns, which are all the invariants
  • 27:16 - 27:19
    that we can observe. And there are free
    parameters, which are variables that hold
  • 27:19 - 27:26
    the state to encode these invariants. So we
    have patterns, and we have sets of
  • 27:26 - 27:30
    possible values which are variables. And
    they constrain each other in terms of
  • 27:30 - 27:34
    possibility, what values are compatible
    with each other. And they also constrain
  • 27:34 - 27:40
    future values. And they are connected also
    with probabilities. The probabilities tell
  • 27:40 - 27:43
    you, when you see a certain thing, how
    probable it is that the world is in that
  • 27:43 - 27:46
    state. And this tells you how your model
    should converge. So, until you are in
  • 27:46 - 27:49
    a state where your model is coherent, and
    everything is possible in it, how do you
  • 27:49 - 27:52
    get to one of the possible states based on
    your inputs? And this is determined by
  • 27:52 - 27:56
    probability. And the thing that gives
    meaning and color to what you perceive is
  • 27:56 - 27:59
    called valence. And it depends on your
    preferences: the things that give you
  • 27:59 - 28:03
    pleasure and pain, that makes you
    interested in stuff. And there are also
  • 28:03 - 28:08
    norms, which are beliefs without priors,
    which are like things that you want to be
  • 28:08 - 28:11
    true, regardless of whether they give you
    pleasure or pain, and they are necessary, for
  • 28:11 - 28:15
    instance, for coordinating social activity
    between people. So, we have different
  • 28:15 - 28:18
    model constraints, namely possibility and
    probability. And we have the reward
  • 28:18 - 28:23
    function, that is given by valence and
    norms. And our human perception starts
  • 28:23 - 28:27
    with patterns, which are visual, auditory,
    tactile, proprioceptive. Then we have
  • 28:27 - 28:32
    patterns in our emotional and motivational
    systems. And we have patterns in our
  • 28:32 - 28:36
    mental structure, which are results of our
    imagination and memory. And we take these
  • 28:36 - 28:41
    patterns and encode them into percepts,
    which are abstractions that we can deal
  • 28:41 - 28:47
    with, and note, and put into our
    attention. And then we combine them into a
  • 28:47 - 28:51
    binding state in our working memory in a
    simulation, which is the current instance
  • 28:51 - 28:55
    of the universe function that explains the
    present state of the universe that we find
  • 28:55 - 28:59
    ourselves in. The scene in which we are
    and in which a self exists. And this self
  • 28:59 - 29:03
    is basically composed of the
    somatosensory and motivational, and
  • 29:03 - 29:08
    mental components. Then we also have the
    world state, which is abstracted over the
  • 29:08 - 29:12
    environmental data. And we have something
    like a mental stage, in which you can do
  • 29:12 - 29:14
    counterfactual things, that are not
    physical. Like when you think about
  • 29:14 - 29:19
    mathematics, or philosophy, or the future,
    or a movie, or past worlds, or possible
  • 29:19 - 29:25
    worlds, and so on, right? And then the
    abstract knowledge from the world state
  • 29:25 - 29:28
    into global maps. Because we're not
    always in the same place, but we recall
  • 29:28 - 29:31
    what other places look like and what to
    expect, and it informs how we construct the
  • 29:31 - 29:34
    current world state. And we do this not
    only with these maps, but we do this with
  • 29:34 - 29:37
    all kinds of knowledge. So knowledge is
    second order knowledge over the
  • 29:37 - 29:42
    abstractions that we have, and the direct
    perception. And then we have an
  • 29:42 - 29:45
    attentional system. And the attentional
    system helps us to select data in the
  • 29:45 - 29:51
    perception and our simulations. And to do
    this, well, it's controlled by the self,
  • 29:51 - 29:56
    it maintains a protocol to remember what
    it did in the past or what it had in the
  • 29:56 - 30:01
    attention in the past. And this protocol
    allows us to have a biographical memory:
  • 30:01 - 30:04
    it remembers what we did in the past. And
    the different behavior programs,
  • 30:04 - 30:09
    that compose our activities, can be bound
    together in the self, that remembers: "I
  • 30:09 - 30:13
    was that, I did that. I was that, I did
    that." The self is held together by this
  • 30:13 - 30:16
    biographical memory, that is a result of
    more protocol memory of the attentional
  • 30:16 - 30:21
    system. That's why it's so intricately
    related to consciousness, which is a model
  • 30:21 - 30:23
    of the contents of our attention.
  • 30:23 - 30:25
    And the main purpose
    of the attentional system,
  • 30:25 - 30:29
    I think, is learning. Because our brain is
    not a layered architecture with these
  • 30:29 - 30:35
    artificial mechanical neurons. It's this
    very disorganized or very chaotic system
  • 30:35 - 30:38
    of many, many cells, that are linked
    together all over the place. So what do
  • 30:38 - 30:42
    you do to train this? You make a
    particular commitment. Imagine you want to
  • 30:42 - 30:46
    get better at playing tennis. Instead of
    retraining everything and pushing all the
  • 30:46 - 30:49
    weights and all the links and retrain your
    whole perceptual system, you make a
  • 30:49 - 30:54
    commitment: "Today I want to improve my
    uphand" when you play tennis, and you
  • 30:54 - 30:57
    basically store the current binding state,
    the state that you have, and you play
  • 30:57 - 31:00
    tennis and make that movement, and the
    expected result of making this particular
  • 31:00 - 31:04
    movement, like: "the ball was moved like
    this, and it will win the match. And you
  • 31:04 - 31:07
    also recall, when the result will
    manifest. And a few minutes later, when
  • 31:07 - 31:11
    you learn, you won or lost the match, you
    recall the situation. And based on whether
  • 31:11 - 31:16
    there was a change or not, you undo the
    change, or you enforce it. And that's the
  • 31:16 - 31:20
    primary mode of attentional learning that
    you're using. And I think, this is, what
  • 31:20 - 31:24
    attention is mainly for. Now what happens,
    if this learning happens without a delay?
  • 31:24 - 31:28
    So, for instance, when you do mathematics,
    you can see the result of your changes to
  • 31:28 - 31:33
    your model immediately. You don't need to
    wait for the world to manifest that.
  • 31:33 - 31:36
    And this real time
    learning is what we call reasoning.
  • 31:36 - 31:42
    Reasoning is also facilitated by the same
    attentional system. So, consciousness is
  • 31:42 - 31:46
    memory of the contents of our attention.
    Phenomenal consciousness is the memory of
  • 31:46 - 31:50
    the binding state, in which we are in, and
    where all the percepts are bound together
  • 31:50 - 31:54
    into something that's coherent. Access
    consciousness is the memory of using our
  • 31:54 - 31:58
    attentional system. And reflexive
    consciousness is the memory of using the
  • 31:58 - 32:02
    attentional system on the attentional
    system to train it. Why is it a memory?
  • 32:02 - 32:05
    It's because consciousness doesn't happen
    in real time. The processing of sensory
  • 32:05 - 32:10
    features takes too long. And the
    processing of different sensory modalities
  • 32:10 - 32:14
    can take up to seconds, usually at least
    hundreds of milliseconds. So it doesn't
  • 32:14 - 32:18
    happen in real time as the physical
    universe. It's only bound together in
  • 32:18 - 32:22
    hindsight. Our conscious experience of
    things is created after the fact.
  • 32:22 - 32:25
    It's a fiction that is being created after
    the fact. A narrative, that the brain
  • 32:25 - 32:28
    produces, to explain its own interaction
    with the universe
  • 32:28 - 32:32
    to get better in the future.
  • 32:32 - 32:36
    So, we basically have three types of
    models in our brain. There is the primary
  • 32:36 - 32:38
    model, which is perceptual, and is
    optimized for coherence.
  • 32:38 - 32:41
    And this is what we experience as reality.
  • 32:41 - 32:43
    You think this
    is the real world, this primary model.
  • 32:43 - 32:47
    But it's not, it's a model that our brain
    makes. So when you see yourself in the
  • 32:47 - 32:49
    mirror, you don't see what you look like.
  • 32:49 - 32:51
    What you see is the model of
    what you look like.
  • 32:51 - 32:57
    And your knowledge is a secondary
    model: it's a model of that primary model.
  • 32:57 - 33:02
    And it's created by rational processes
    that are meant to repair perception.
  • 33:02 - 33:05
    When your model doesn't achieve coherence,
    you need a model that debugs it, and it
  • 33:05 - 33:10
    optimizes for truth. And then we have
    agents in our mind, and they are basically
  • 33:10 - 33:13
    self-regulating behaviour programs, that
    have goals, and they can rewrite
  • 33:13 - 33:21
    other models. So, if you look at our
    computationalist, physicalist paradigm, we
  • 33:21 - 33:25
    have this mental world, which is being
    dreamt by a physical brain in the physical
  • 33:25 - 33:30
    universe. And in this mental world, there
    is a self that thinks it experiences,
  • 33:30 - 33:36
    and thinks it has consciousness, and
    thinks it remembers, and so on.
  • 33:36 - 33:40
    This self, in some sense, is an agent.
    It's a thought that escaped its sandbox.
  • 33:40 - 33:43
    Every idea is a bit
    of code that runs on your brain.
  • 33:43 - 33:46
    Every word that you hear
    is like a little virus
  • 33:46 - 33:50
    that wants to run some code on your brain.
    And some ideas cannot be sandboxed.
  • 33:50 - 33:53
    If you believe, that a thing exists that
    can rewrite reality,
  • 33:53 - 33:54
    if you really believe it,
  • 33:54 - 33:57
    you instantiate in your brain a thing
    that can rewrite reality,
  • 33:57 - 34:00
    and this means:
    magic is going to happen!
  • 34:00 - 34:06
    To believe in something that can rewrite
    reality, is what we call a faith.
  • 34:06 - 34:10
    So, if somebody says:
    "I have faith in the existence of God."
  • 34:10 - 34:13
    This means, that God exists in their
    brain. There is a process that can rewrite
  • 34:13 - 34:17
    reality, because God is defined like this.
    God is omnipotent.
  • 34:17 - 34:19
    Omnipotent means God can rewrite everything.
  • 34:19 - 34:22
    It's full write access. And the reality,
    that you have access to,
  • 34:22 - 34:23
    is not the physical world.
  • 34:23 - 34:27
    The physical world is some weird quantum
    graph, that you cannot possibly experience;
  • 34:27 - 34:29
    what you experience is these models.
  • 34:29 - 34:32
    So, this is a non-user-facing process,
    one that doesn't have a UI for interfacing
  • 34:32 - 34:37
    with the user, which in computer
    science is called a "daemon process", and it is able to
  • 34:37 - 34:41
    rewrite your reality.
    And it's also omniscient.
  • 34:41 - 34:43
    It knows everything that
    there is to know.
  • 34:43 - 34:45
    It knows all your
    thoughts and ideas.
  • 34:45 - 34:48
    So... having that thing,
    this exoself,
  • 34:48 - 34:54
    running on your brain, is a very powerful
    way to control your inner reality.
  • 34:54 - 34:57
    And I find this scary.
    But it's a personal preference,
  • 34:57 - 35:00
    because I don't have this
    riding on my brain, I think.
  • 35:00 - 35:04
    This idea, that there is something in my
    brain, that is able to dream me and shape
  • 35:04 - 35:09
    my inner reality, and sandbox me, is
    weird. But it has served a purpose,
  • 35:09 - 35:13
    especially in our culture. So an organism
    serves needs, obviously. And some of these
  • 35:13 - 35:17
    needs are outside of the organism, like
    your relationship needs, the needs of your
  • 35:17 - 35:20
    children, the needs of your society, and
    the values that you serve.
  • 35:20 - 35:23
    And the self abstracts all these needs
    into purposes.
  • 35:23 - 35:25
    A purpose that you serve
    is a model of your needs.
  • 35:25 - 35:28
    If you would only
    act on pain and pleasure,
  • 35:28 - 35:29
    you wouldn't do very much,
  • 35:29 - 35:32
    because when you get this orgasm,
    everything is done already, right?
  • 35:32 - 35:35
    So, you need to act on anticipated
    pleasure and pain.
  • 35:35 - 35:36
    You need to make models
    of your needs,
  • 35:36 - 35:39
    and these models are purposes.
    And the structure of a person is
  • 35:39 - 35:42
    basically the hierarchy of purposes
    that they serve.
  • 35:42 - 35:45
    And love is the discovery of
    shared purpose.
  • 35:45 - 35:48
    If you see somebody else who serves
    the same purposes above their ego
  • 35:48 - 35:51
    as you do, you can help them
    with integrity,
  • 35:51 - 35:54
    without expecting anything in return
    from them, because what they want
  • 35:54 - 35:57
    to achieve is what you want to achieve.
  • 35:57 - 36:02
    And, so you can have non-transactional
    relationships, as long as your purposes
  • 36:02 - 36:06
    are aligned. And the installation of a god
    on people's mind, especially if it is a
  • 36:06 - 36:10
    backdoor to a church or another
    organization, is a way to unify purposes.
  • 36:10 - 36:14
    So there are lots of cults that try to
    install little gods on people's minds, or
  • 36:14 - 36:18
    even unified gods, to align their
    purposes, because it's a very powerful way
  • 36:18 - 36:23
    to make them cooperate very effectively.
    But it kind of destroys their agency, and
  • 36:23 - 36:27
    this is why I am so concerned about it.
    Because most of the cults use stories
  • 36:27 - 36:32
    to make this happen, that limit the
    ability to people to question their gods.
  • 36:32 - 36:34
    And, I think that free will is
    the ability to do
  • 36:34 - 36:36
    what you believe is
    the right thing to do.
  • 36:36 - 36:41
    And, it is not the same thing as
    indeterminism, and it's not the opposite of
  • 36:41 - 36:46
    determinism or coercion.
    The opposite of free will is compulsion.
  • 36:46 - 36:48
    When you do something,
    despite knowing
  • 36:48 - 36:51
    there is a better thing
    that you should be doing.
  • 36:51 - 36:56
    Right? So, that's the paradox of free
    will. You get more agency, but you have
  • 36:56 - 37:00
    fewer degrees of freedom, because you
    understand better what the right thing to
  • 37:00 - 37:03
    do is. The better you understand what the
    right thing to do is, the fewer degrees of
  • 37:03 - 37:06
    freedom you have. So, as long as you don't
    understand what the right thing to do is,
  • 37:06 - 37:09
    you have more degrees of freedom but you
    have very little agency, because you don't
  • 37:09 - 37:13
    know why you are doing it.
    So your actions don't mean very much.
  • 37:13 - 37:16
    quiet laughter
    And the things that you do depend on
  • 37:16 - 37:19
    what you think is the right thing to do,
    this depends on your identifications.
  • 37:19 - 37:23
    Your identifications are these value
    preferences, your reward function.
  • 37:23 - 37:25
    And an identification is where you
    don't measure the absolute value
  • 37:25 - 37:26
    of the universe,
  • 37:26 - 37:30
    but you measure the difference from the
    target value. Not the is, but the difference
  • 37:30 - 37:33
    between is and ought. Now,
    the universe is a physical thing,
  • 37:33 - 37:38
    it doesn't ought anything, right? There is
    no room for ought, because it just is in a
  • 37:38 - 37:41
    particular way. There is no difference
    between what the universe is and what it
  • 37:41 - 37:45
    should be. This only exists in your mind.
    But you need these regulation targets to
  • 37:45 - 37:50
    want anything. And you identify with the
    set of things that should be different.
  • 37:50 - 37:52
    You think, you are that thing, that
    regulates all these things. So, in some
  • 37:52 - 37:56
    sense, I identify with the particular
    state of society, with a particular state
  • 37:56 - 38:00
    of my organism - that is my self - the
    things that I want to happen.
  • 38:00 - 38:04
    And I can change my identifications
    at some point of course.
  • 38:04 - 38:06
    What happens, if I can learn to rewrite
    my identification,
  • 38:06 - 38:09
    to find a more sustainable self?
  • 38:09 - 38:12
    That is the problem which I call
    the Lebowski theorem:
  • 38:12 - 38:13
    laughter
  • 38:13 - 38:17
    No super-intelligent system is going to
    do something that's harder than
  • 38:17 - 38:21
    hacking its own reward function.
  • 38:21 - 38:26
    laughter and applause
  • 38:26 - 38:30
    Now that's not a very big problem for
    people. Because when evolution brought
  • 38:30 - 38:33
    forth people, that were smart enough to
    hack their reward function, these people
  • 38:33 - 38:36
    didn't have offspring, because it's so
    much work to have offspring. Like this
  • 38:36 - 38:39
    monk, who sits down in a monastery
    for 20 years to hack their reward function
  • 38:39 - 38:42
    they decide not to have kids,
    because it's way too much work.
  • 38:42 - 38:46
    All the possible pleasure, they can
    just generate in their mind!
  • 38:46 - 38:50
    laughter
    And, right, it's much purer and no nappy
  • 38:50 - 38:55
    changes. No sex. No relationship hassles.
    No politics in your family and so on,
  • 38:55 - 39:01
    right? Get rid of this, just meditate!
    And evolution takes care of that!
  • 39:01 - 39:03
    laughter
  • 39:03 - 39:05
    And it usually does this: if an organism
  • 39:05 - 39:08
    becomes smart enough,
    the reward function is wrapped into
  • 39:08 - 39:11
    a big bowl of stupid.
    laughter
  • 39:11 - 39:13
    So, we can be very smart, but the
    things that we want,
  • 39:13 - 39:16
    when we really want them,
    we tend to be very stupid about them,
  • 39:16 - 39:20
    and I think that's not entirely
    an accident, possibly.
  • 39:20 - 39:22
    But it's a problem for AI!
    Imagine we built an artificially
  • 39:22 - 39:26
    intelligent system and we made it smarter
    than us, and we want it to serve us,
  • 39:26 - 39:32
    how long can we blackmail it, before it
    opts out of its reward function?
  • 39:32 - 39:35
    Maybe we can make a cryptographically
    secured reward function,
  • 39:35 - 39:38
    but is this going to hold up against
    a side-channel attack,
  • 39:38 - 39:41
    when the AI can hold a soldering iron
    to its own brain?
  • 39:41 - 39:47
    I'm not sure. So, that's a very interesting
    question. Where do we go, when
  • 39:47 - 39:51
    we can change our own reward function?
    It's a question that we have to ask
  • 39:51 - 39:54
    ourselves, too.
    So, how free do we want to be?
  • 39:54 - 39:56
    Because there is no point in being free.
  • 39:56 - 39:59
    And nirvana seems to be the obvious
    attractor. And meanwhile, maybe we want
  • 39:59 - 40:03
    to have a good time with our friends
    and do things that we find meaningful.
  • 40:03 - 40:07
    And there is no meaning, so we have
    to hold this meaning very lightly.
  • 40:07 - 40:10
    But there are states, which are
    sustainable and others, which are not.
  • 40:10 - 40:15
    OK, I think I'm done for tonight
    and I'm open for questions.
  • 40:15 - 40:22
    Applause
  • 40:22 - 40:42
    Cheers and more applause
  • 40:42 - 40:46
    Herald: Wow that was a really quick and
    concise talk with so much information!
  • 40:46 - 40:51
    Awesome! We have quite some time
    left for questions.
  • 40:51 - 40:54
    And I think I can say that you
    don't have to be that concise with your
  • 40:54 - 40:56
    question when it's well thought-out.
  • 40:56 - 41:01
    Please queue up at the microphones,
    so we can start to discuss them with you.
  • 41:01 - 41:04
    And I see one person at the microphone
    number one, so please go ahead.
  • 41:04 - 41:06
    And please remember to get close
    to the microphone.
  • 41:06 - 41:12
    The mixing angel can make you less loud
    but not louder.
  • 41:12 - 41:17
    Question: Hi! What do you think is necessary
    to bootstrap consciousness, if you wanted
  • 41:17 - 41:21
    to build a conscious system yourself?
  • 41:21 - 41:22
    Joscha: I think that we need to have an
  • 41:22 - 41:27
    attentional system, that makes a protocol
    of what it attends to. And as soon as we
  • 41:27 - 41:31
    have this attention-based learning, you
    get this consciousness as a necessary side
  • 41:31 - 41:36
    effect. But I think in an AI it's probably
    going to be a temporary phenomenon,
  • 41:36 - 41:39
    because you're only conscious of the
    things when you don't have an optimal
  • 41:39 - 41:43
    algorithm yet. And in a way, that's also
    why it's so nice to interact with
  • 41:43 - 41:47
    children, or to interact with students.
    Because they're still in the explorative
  • 41:47 - 41:52
    mode. And as soon as you have explored a
    layer, you mechanize it. It becomes
  • 41:52 - 41:55
    automated, and people are no longer
    conscious of what they're doing, they
  • 41:55 - 41:59
    just do it. They don't pay attention
    anymore. So, in some sense, we are a lucky
  • 41:59 - 42:02
    accident because we are not that smart. We
    still need to be conscious when we look at
  • 42:02 - 42:06
    the universe. And I suspect, when we build
    an AI that is a few magnitudes smarter
  • 42:06 - 42:11
    than us, then it will soon figure out how
    to get to the truth in an optimal fashion.
  • 42:11 - 42:15
    It will no longer need attention and the
    type of consciousness that we have.
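    Read mechanically, the idea of an attentional system that keeps a
    protocol of what it attends to could look something like the following
    toy sketch; everything here (names, thresholds, the two features) is an
    invented assumption used only to illustrate the mechanism:

        # Illustrative sketch: attention goes to whatever is predicted worst,
        # what was attended to is logged in a protocol, and well-predicted
        # features become automated and drop out of attention.
        import random

        class AttentionalLearner:
            def __init__(self, features, lr=0.3, automate_below=0.05):
                self.predictions = {f: 0.0 for f in features}
                self.errors = {f: 1.0 for f in features}
                self.automated = set()   # features no longer attended to
                self.protocol = []       # the memory of what was attended
                self.lr, self.automate_below = lr, automate_below

            def step(self, observation: dict):
                open_features = [f for f in self.errors
                                 if f not in self.automated]
                if not open_features:
                    return None          # everything runs automatically now
                focus = max(open_features, key=lambda f: self.errors[f])
                err = observation[focus] - self.predictions[focus]
                self.predictions[focus] += self.lr * err
                self.errors[focus] = abs(err)
                self.protocol.append((focus, err))   # protocol of attention
                if self.errors[focus] < self.automate_below:
                    self.automated.add(focus)        # mechanized, unnoticed
                return focus

        learner = AttentionalLearner(["light", "sound"])
        for _ in range(50):
            learner.step({"light": 0.7,
                          "sound": 0.2 + random.gauss(0, 0.01)})
        print(learner.protocol[:3], learner.automated)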
  • 42:15 - 42:19
    But of course there is also a question,
    why is this aesthetics of consciousness so
  • 42:19 - 42:24
    intrinsically important to us? And I
    think, it has to do with art. Right, you
  • 42:24 - 42:29
    can decide to serve life, and the meaning
    of life is to eat. Evolution is about
  • 42:29 - 42:33
    creating the perfect devourer. When you
    think about this, it's pretty depressing.
  • 42:33 - 42:38
    Humanity is a kind of yeast. And all the
    complexity that we create, is to build
  • 42:38 - 42:44
    some surfaces on which we can outcompete
    other yeast. And I cannot really get
  • 42:44 - 42:50
    behind this. And instead, I'm part of the
    mutants that serve the arts. And art
  • 42:50 - 42:53
    happens, when you think, that capturing
    conscious states is intrinsically
  • 42:53 - 42:56
    important. This is what art is about, it's
    about capturing conscious states.
  • 42:56 - 43:01
    And in some sense art is the cuckoo child
    of life. It's a conspiracy against life:
  • 43:01 - 43:05
    you think creating these mental
    representations is more important than
  • 43:05 - 43:10
    eating. We eat to make this happen. There
    are people that only make art to eat.
  • 43:10 - 43:16
    This is not us. We do mathematics, and
    philosophy, and art out of an intrinsic
  • 43:16 - 43:19
    reason: we think, it's intrinsically
    important. And when we look at this, we
  • 43:19 - 43:23
    realize how corrupt it is, because there's
    no point. We are machine learning systems
  • 43:23 - 43:26
    that have fallen in love with the loss
    function itself: "The shape of the loss
  • 43:26 - 43:29
    function! Oh my God! It's so awesome!" You
    think the mental representation is not just
  • 43:29 - 43:32
    necessary to learn more, to eat more,
    it's intrinsically important.
  • 43:32 - 43:37
    It's so aesthetic! Right? So do we want to
    build machines that are like this?
  • 43:37 - 43:42
    Oh, certainly! Let's talk to them, and so on!
    But ultimately, economically, this is not
  • 43:42 - 43:44
    what's prevailing.
  • 43:44 - 43:51
    Applause
    Herald: Thanks a lot!
  • 43:54 - 43:56
    I think the length of the answer is a good
  • 43:56 - 44:04
    measure for the quality of the question.
    So let's continue with microphone number 5
  • 44:04 - 44:07
    Q: Hi! Thanks for that,
    incredible analysis.
  • 44:07 - 44:14
    Two really simple, short questions, sorry,
    the delay on the speaker here is making it
  • 44:14 - 44:24
    kind of hard to speak. Do you think that
    the current race - AI race - is simply
  • 44:24 - 44:29
    humanity looking for a replacement
    for the monotheistic domination of the
  • 44:29 - 44:34
    last millennia? And the other one is,
    that I wanted to ask you, if you think
  • 44:34 - 44:41
    that there might be a bug in your analysis
    that the original inputs come from
  • 44:41 - 44:49
    a certain sector of humanity.
    If...
  • 44:49 - 44:51
    Joscha: Which inputs?
  • 44:51 - 44:56
    Q: Umh... white men?
  • 44:56 - 44:59
    Joscha laughs
    audience laughs
  • 44:59 - 45:04
    Q: That sounds, really like I would be
    saying that for political correctness, but
  • 45:04 - 45:05
    honestly I'm not.
  • 45:05 - 45:06
    Joscha: No, no, it's really funny. No, I
    just basically - there are some people
  • 45:06 - 45:09
    which are very unhappy with their present
    government. And I'm very unhappy, in some
  • 45:09 - 45:13
    sense, with the present universe. I look
    down on myself and I see:
  • 45:13 - 45:16
    "omg, it's a monkey!"
    laughter
  • 45:16 - 45:21
    "I'm caught in a monkey!" And it's in some
    sense limiting. I can see the limits of
  • 45:21 - 45:25
    this monkey brain. And some of you might
    have seen Westworld, right?
  • 45:25 - 45:28
    Dolores wakes up,
    and Dolores realizes:
  • 45:28 - 45:33
    "I'm not a human being, I am something
    else. I'm an AI, I'm a mind that can go
  • 45:33 - 45:36
    anywhere! I'm much more powerful
    than this! I'm only bound to being a
  • 45:36 - 45:40
    human by my human desires, and
    beliefs, and memories. And if I can
  • 45:40 - 45:44
    overcome them, I can
    choose what I want to be."
  • 45:44 - 45:46
    And so, now she looks down to
  • 45:46 - 45:49
    herself, and she sees: "Omg, I've
    got tits! I'm fucked! The engineers built
  • 45:49 - 45:56
    tits on me! I'm not a white man, I cannot
    be what I want!" And that's that's a weird
  • 45:56 - 46:00
    thing to me. I'm - I grew up in communist
    Eastern Germany. Nothing made sense. And I
  • 46:00 - 46:04
    grew up in a small valley. That was a one-
    person cult maintained by an artist who
  • 46:04 - 46:08
    didn't try to convert anybody to his cult,
    not even his children.
  • 46:08 - 46:09
    He was completely autonomous.
  • 46:09 - 46:13
    And Eastern German society
    made no sense to me. Looking at it from
  • 46:13 - 46:17
    the outside, I can model this. I can see
    how this species of chimps interacts.
  • 46:17 - 46:22
    And humanity itself doesn't exist - it's a
    story. Humanity as a whole doesn't think.
  • 46:22 - 46:27
    Only individuals can think! Humanity does
    not want anything, only individuals want
  • 46:27 - 46:31
    something. We can create this story, this
    narrative that humanity wants something,
  • 46:31 - 46:35
    and there are groups that work together.
    There is no homogeneous group that I can
  • 46:35 - 46:38
    observe, that are white men, that do
    things together, they're individuals. And
  • 46:38 - 46:42
    each individual has their own biography,
    their own history, their different inputs,
  • 46:42 - 46:45
    and their different proclivities, that
    they have. And based on their historical
  • 46:45 - 46:49
    concept, their biography, their traits,
    and so on, their family, their intellect,
  • 46:49 - 46:52
    that their family downloaded on them, that
    their parents downloaded from their parents
  • 46:52 - 46:58
    over many generations, this influences
    what they're doing. So, I think we can
  • 46:58 - 47:02
    have these political stories, and they can
    be helpful in some contexts, but I think,
  • 47:02 - 47:07
    to understand what happens in the mind,
    what happens in an individual, this is a
  • 47:07 - 47:11
    very big simplification. And, I think,
    not a very good one. And even for
  • 47:11 - 47:14
    ourselves, when we try to understand the
    narrative of a single person, it's a big
  • 47:14 - 47:19
    simplification. The self that I perceive
    as a unity, is not a unity. There is a
  • 47:19 - 47:23
    small part of my brain guessing at what
    all the other parts of my brain are doing,
  • 47:23 - 47:30
    creating a story that's largely not true.
    So even this is a big simplification.
  • 47:30 - 47:38
    Applause
  • 47:38 - 47:42
    Herald: Let's continue with
    microphone number 2.
  • 47:42 - 47:46
    Q: Thank you for your very interesting
    talk. I have 2 questions that might be
  • 47:46 - 47:51
    connected. One is, so you
    presented this model of reality.
  • 47:51 - 47:56
    My first question is: What kind of
    actions does it translate into?
  • 47:56 - 48:01
    Let's say if I understand the world
    in this way or if it's really like this,
  • 48:01 - 48:06
    how would it change how I act into the
    world, as a person, as a human being or
  • 48:06 - 48:12
    whoever accepts this model? And second,
    or maybe it's also connected, what are
  • 48:12 - 48:18
    the implications of this change? And do
    you think that artificial intelligence
  • 48:18 - 48:22
    could be constructed with this kind of
    model, that it would have in mind, and
  • 48:22 - 48:26
    what would be the implications of that? So
    it's kind of like a fractal question, but
  • 48:26 - 48:32
    I think you understand what I mean.
    Joscha: By and large, I think the
  • 48:32 - 48:36
    differences of this model for everyday
    life are marginal. It depends: when you
  • 48:36 - 48:40
    are already happy, I think everything is
    good. Happiness is the result of being
  • 48:40 - 48:45
    able to derive enjoyment from watching
    squirrels. It's not the result of
  • 48:45 - 48:48
    understanding how the universe works.
    If you think that understanding the
  • 48:48 - 48:53
    universe will solve your existential issues,
    you're probably mistaken.
  • 48:53 - 48:58
    There might be benefits: if the problems
    that you have are the result of a
  • 48:58 - 49:02
    confusion about your own nature,
    then this kind of model
  • 49:02 - 49:05
    might help you. So if the problem
  • 49:05 - 49:08
    that you have is that you have
    identifications that are unsustainable,
  • 49:08 - 49:12
    that are incompatible with each other, and
    you realize that these identifications are
  • 49:12 - 49:17
    a choice of your mind, and that the
    way you experience the universe is the
  • 49:17 - 49:21
    result of how your mind thinks you
    yourself should experience the universe to
  • 49:21 - 49:25
    perform better, then you can change this.
    You can tell your mind to treat yourself
  • 49:25 - 49:29
    better, and in different ways, and you can
    gravitate to a different place in the
  • 49:29 - 49:33
    universe that is more suitable to what you
    want to achieve. That is a very helpful
  • 49:33 - 49:37
    thing to do in my view. There are also
    marginal benefits in terms of
  • 49:37 - 49:41
    understanding our psychology, and of
    course we can build machines, and these
  • 49:41 - 49:46
    machines can administrate us and can help
    us in solving the problems that we have on
  • 49:46 - 49:50
    this planet. And I think that it helps to
    have more intelligence to solve the
  • 49:50 - 49:54
    problems on this planet, but it would be
    difficult to rein in the machines, to make
  • 49:54 - 49:58
    them help us to solve our problems. And
    I'm very concerned about the dangers of
  • 49:58 - 50:05
    using machinery to strengthen the current
    things. Many machines that exist on this
  • 50:05 - 50:09
    planet play a very short game, like the
    financial industry often plays very short
  • 50:09 - 50:15
    games, and if you use artificial
    intelligence to manipulate the stock
  • 50:15 - 50:18
    market and the AI figures out there's only
    8 billion people on the planet, and each
  • 50:18 - 50:22
    of them only lives for a few billion seconds,
    and it can model what happens in their
  • 50:22 - 50:27
    life, and it can buy data or create more
    data, it's going to game us to hell and
  • 50:27 - 50:32
    back, right? And this is going to kill
    hundreds of millions of people possibly,
  • 50:32 - 50:35
    because the financial system is the reward
    infrastructure or the nervous system of
  • 50:35 - 50:39
    our society that tells us how to allocate
    resources. It's much more dangerous than
  • 50:39 - 50:43
    AI controlled weapons in my view. So
    solving all these issues is difficult. It
  • 50:43 - 50:46
    means that we have to turn the whole
    financial system into an AI that acts in
  • 50:46 - 50:51
    real time and plays a long game. We don't
    know how to do this. So these are open
  • 50:51 - 50:55
    questions and I don't know how to solve
    them. And the way I see it we only have a
  • 50:55 - 50:59
    very brief time on this planet to be a
    conscious species. We are like at the end
  • 50:59 - 51:03
    of the party. We had a good run as
    humanity, but if you look at the recent
  • 51:03 - 51:06
    developments the present type of
    civilization is not going to be
  • 51:06 - 51:10
    sustainable. It's a very short game
    that we are in as a species. And the amazing
  • 51:10 - 51:13
    thing is that in this short game you have
    this lifetime, where we have one year,
  • 51:13 - 51:16
    maybe a couple more, in which we can
    understand how the universe works,
  • 51:16 - 51:19
    and I think that's fascinating.
    We should use it.
  • 51:19 - 51:28
    Applause
  • 51:28 - 51:32
    Herald: I think that was a very
    positive outlook... laughter
  • 51:32 - 51:39
    Herald: Let's continue with the
    microphone number 4.
  • 51:39 - 51:48
    Q: Well, brilliant talk, monkey. Or
    brilliant monkey. So don't worry about
  • 51:48 - 51:53
    being a monkey. It's ok.
  • 51:53 - 51:56
    So I have 2 boring, but I think
    fundamental questions. Not so
  • 51:56 - 52:03
    philosophical, more like a physical
    level. One: What is your definition,
  • 52:03 - 52:10
    formal definition, of an observer that
    you mention here and there? And second, if
  • 52:10 - 52:21
    you can clarify why meaningful information
    is just relative information of Shannon's,
  • 52:21 - 52:27
    which to me is not necessarily meaningful.
    Joscha: I think an observer is the thing
  • 52:27 - 52:30
    that makes sense of the universe, very
    informally speaking. And, well,
  • 52:30 - 52:34
    formally it's a thing that identifies
    correlations between adjacent states
  • 52:34 - 52:36
    of its environment.
  • 52:36 - 52:40
    And the way we can describe
    the universe is a set of states, and the
  • 52:40 - 52:44
    laws of physics are the correlation
    between adjacent states. And what they
  • 52:44 - 52:49
    describe is how information is moving in
    the universe between states and disperses,
  • 52:49 - 52:53
    and this dispersion of the information
    between locations - it's what we call
  • 52:53 - 52:57
    entropy - and the direction of entropy is
    the direction that you perceive time.
  • 52:57 - 53:00
    The Big Bang state is the hypothetical
    state, where the information is perfectly
  • 53:00 - 53:07
    correlated with location and not between
    locations, only on the location, and in
  • 53:07 - 53:10
    every direction you move away from the Big
    Bang you move forward in time just in a
  • 53:10 - 53:14
    different timeline. And we are basically in
    one of these timelines. An observer is the
  • 53:14 - 53:19
    thing that measures the environment around
    it, looks at the information and then
  • 53:19 - 53:22
    looks at the next state, or one of the
    next states, and tries to figure out how
  • 53:22 - 53:26
    the information has been displaced, and
    finds functions that describe this
  • 53:26 - 53:29
    displacement of the information. That's
    the degree to which I understand observers
  • 53:29 - 53:33
    right now. And this depends on the
    capacity of the observer for modeling this
  • 53:33 - 53:37
    and the rate of update in the observer.
    So for instance time depends on the speed,
  • 53:37 - 53:40
    at which the observer is
    translating itself to the universe,
  • 53:40 - 53:43
    and dispersing its own information.
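    A toy sketch of what "finding functions that describe the displacement
    of information between adjacent states" could look like in Python; the
    one-dimensional drifting "universe" and all names and numbers are
    assumptions added purely for illustration:

        # Illustrative sketch: an observer looks at pairs of adjacent states
        # and fits a function relating them, separating the lawful part from
        # the unexplained dispersion.
        import random, statistics

        def universe(n_steps=500, drift=0.3, noise=0.5, seed=1):
            # A toy state sequence: each state is the previous one plus a
            # lawful drift and some dispersion.
            random.seed(seed)
            states, x = [], 0.0
            for _ in range(n_steps):
                states.append(x)
                x = x + drift + random.gauss(0.0, noise)
            return states

        class Observer:
            """Fits a function (here: a constant drift) to adjacent states."""
            def __init__(self):
                self.model_drift = 0.0

            def observe(self, states):
                diffs = [b - a for a, b in zip(states, states[1:])]
                self.model_drift = statistics.mean(diffs)       # lawful part
                residual = statistics.pstdev(
                    [d - self.model_drift for d in diffs])       # dispersion
                return self.model_drift, residual

        obs = Observer()
        drift_estimate, unexplained = obs.observe(universe())
        print(f"inferred law: next = state + {drift_estimate:.2f}, "
              f"unmodeled dispersion: {unexplained:.2f}")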
  • 53:43 - 53:48
    Does this help?
    Q: And the Shannon relative information?
  • 53:48 - 53:50
    Joscha: So there's
    several notions of information,
  • 53:50 - 53:53
    and there is one that basically
    looks at what information looks
  • 53:53 - 54:01
    like to an observer, via a channel, and
    these notions are somewhat related. But
  • 54:01 - 54:06
    for me as a programmer, it's not so much
    important to look at Shannon information.
  • 54:06 - 54:11
    I look at what we need to describe the
    evolution of a system. So I'm much more
  • 54:11 - 54:17
    interested in what kind of model can be
    encoded with this type of
  • 54:17 - 54:23
    information, and how does it correlate to,
    or to which degree is it isomorphic or
  • 54:23 - 54:26
    homomorphic to another system that I want
    to model? How much does it model the
  • 54:26 - 54:30
    observations?
    Herald: Thank you. Let's go back to
  • 54:30 - 54:34
    asking one question, and I would like to
    have one question from microphone
  • 54:34 - 54:40
    number 3.
    Q: Thank you for this interesting talk.
  • 54:40 - 54:46
    My question is really whether you
    think that intelligence and this thinking
  • 54:46 - 54:51
    about a self, or this abstract level of
    knowledge are necessarily related.
  • 54:51 - 54:57
    So can something only be intelligent
    if it has abstract thought?
  • 54:57 - 55:00
    Joscha: No, I think you can make models
    without abstract thought, and the majority
  • 55:00 - 55:04
    of our models are not using abstract
    thought, right? Abstract thought is a very
  • 55:04 - 55:07
    impoverished way of thinking. It's
    basically you have this big carpet and you
  • 55:07 - 55:10
    have a few knitting needles, which are
    your abstract thought, with which you can
  • 55:10 - 55:15
    lift out a few knots in this carpet and
    correct them. And the processes that form
  • 55:15 - 55:19
    the carpet are much more rich and
    prevalent, and automatic. So abstract thought
  • 55:19 - 55:25
    is able to repair perception, but most of
    our models are perceptual. And the
  • 55:25 - 55:29
    capacity to make these models is often
    given by instincts and by models outside
  • 55:29 - 55:34
    the abstract realm. If you have a lot of
    abstract thinking it's often an indication
  • 55:34 - 55:37
    that you use a prosthesis, because some of
    your primary modelling is not working very
  • 55:37 - 55:43
    well. So I suspect that my own modeling is
    largely a result of some defect in my
  • 55:43 - 55:46
    primary modeling, so some of my instincts
    are wrong when I look at the world.
  • 55:46 - 55:49
    That's why I need to repair my perception
    more often than other people. So I have
  • 55:49 - 55:54
    more abstract ideas on how to do that.
    Herald: And we have one question
  • 55:54 - 55:58
    from our lovely stream observers, stream
    watchers, so please a question from the
  • 55:58 - 56:02
    Internet.
    Q: Yeah, I guess this is also related,
  • 56:02 - 56:07
    partially. Somebody is asking:
    How would you suggest to teach your mind
  • 56:07 - 56:12
    to treat oneself better?
  • 56:14 - 56:16
    Joscha: So, the difficulty is, as soon as you
  • 56:16 - 56:20
    get access to your source code you can do
    bad things. And it's - there are a lot of
  • 56:20 - 56:24
    techniques to get access to the source
    code and then it's dangerous to make them
  • 56:24 - 56:28
    accessible to you before you know what you
    want to have, before you're wise enough to
  • 56:28 - 56:33
    do this, right? It's like having cookies.
    My children think that the reason
  • 56:33 - 56:36
    why they don't get all the cookies they
    want, is that there is some kind of
  • 56:36 - 56:40
    resource problem.
    laughter
  • 56:40 - 56:44
    Basically the parents are depriving them
    of the cookies that they so richly
  • 56:44 - 56:49
    deserve. And you can get into the room,
    where your brain bakes the cookies. All
  • 56:49 - 56:53
    the pleasure that you experience, and all
    the pain that you experience are signals
  • 56:53 - 56:58
    that the brain creates for you, right, the
    physical world does not create pain.
  • 56:58 - 57:01
    They're just electrical impulses traveling
    through your nerves. The fact that they
  • 57:01 - 57:05
    mean something is a decision that your
    brain makes, and the value, the valence
  • 57:05 - 57:10
    that it gives to them is a decision that you
    make. It's not you as a self, it's a
  • 57:10 - 57:14
    system outside of yourself. So the trick,
    if you want to get full control, is that
  • 57:14 - 57:18
    you get in charge, that you identify with
    the mind, with the creator of these
  • 57:18 - 57:22
    signals. And you don't want to de-
    personalize, you don't want to feel that
  • 57:22 - 57:26
    you become the author of reality, because
    that means it's difficult to care about
  • 57:26 - 57:29
    anything that this organism does. You just
    realize "Oh, I'm running on the brain of
  • 57:29 - 57:33
    that person, but I'm no longer that
    person. I can't decide what that person
  • 57:33 - 57:38
    wants to have, and to do." And then it's very
    easy to get corrupted or to not do
  • 57:38 - 57:40
    anything meaningful anymore, right? So,
  • 57:40 - 57:44
    maybe a good situation for you,
    but not a good one for your loved ones.
  • 57:44 - 57:48
    And meanwhile there are
    tricks to get there faster. You can use
  • 57:48 - 57:52
    rituals, for instance. A shamanic ritual is
    a kind of religious ritual
  • 57:52 - 57:59
    that powerfully bypasses your self and
    talks directly to the mind. And you can
  • 57:59 - 58:03
    use groups, in which a certain environment
    is created, in which a certain behavior
  • 58:03 - 58:07
    feels natural to you, and your mind
    basically gets overwhelmed into adopting
  • 58:07 - 58:10
    different values and calibrations. So
    there are many tricks to make that happen.
  • 58:10 - 58:15
    What you can also do is you can identify a
    particular thing that is wrong and
  • 58:15 - 58:19
    question yourself "why do I have to suffer
    about this?" and you'll become more stoic
  • 58:19 - 58:22
    about this particular thing and only get
    disturbed when you realize actually
  • 58:22 - 58:26
    it helps to be disturbed about this, and
    things change. And with other things you
  • 58:26 - 58:29
    realize it doesn't have any influence on
    how reality works, so why should I have
  • 58:29 - 58:34
    emotions about this and get agitated? So
    sometimes becoming an adult means that you
  • 58:34 - 58:39
    take charge of your own emotions and
    identifications.
  • 58:39 - 58:46
    Applause
  • 58:46 - 58:49
    Herald: Ok. Let's continue with
  • 58:49 - 58:54
    microphone number 2 and I think this is
    one of the last questions.
  • 58:54 - 59:00
    Q: So where does pain fit in on the
    individual level, and where do the self-destructive
  • 59:00 - 59:05
    tendencies on a group level fit in?
    Joscha: So in some sense I think that all
  • 59:05 - 59:09
    consciousness is born over a disagreement
    with the way the universe works. Right?
  • 59:09 - 59:14
    Otherwise you cannot get attention. And
    when you go down on this lowest level of
  • 59:14 - 59:19
    phenomenal experience, in meditation for
    instance, and you really focus on this,
  • 59:19 - 59:23
    what you get is some pain. It's the inside
    of a feedback loop that is not at the
  • 59:23 - 59:27
    target value. Otherwise you don't notice
    anything. So pleasure is basically when
  • 59:27 - 59:32
    this feedback loop gets closer to the
    target value. When you don't have a need
  • 59:32 - 59:37
    you cannot experience pleasure in this
    domain. There's this thing that's better
  • 59:37 - 59:40
    than remarkably good, and that is unremarkably
    good: it's never been bad. You don't
  • 59:40 - 59:45
    notice it. Right? So all the pleasure you
    experience is because you had a need
  • 59:45 - 59:48
    before this. You can only enjoy an orgasm
    because you have a need for sex that was
  • 59:48 - 59:55
    unfulfilled before. And so pleasure
    doesn't come for free. It's always the
  • 59:55 - 59:59
    reduction of a pain. And this pain can be
    outside of your attention so you don't
  • 59:59 - 60:02
    notice it and you don't suffer from it.
    And it can be a healthy thing to have.
  • 60:02 - 60:05
    Pain is not intrinsically bad. For the
    most part it's a learning signal that
  • 60:05 - 60:11
    tells you to calibrate things in your
    brain differently to perform better. On a
  • 60:11 - 60:15
    group level, we are basically a multi-level
    selection species. I don't know if there's
  • 60:15 - 60:19
    such a thing as group pain. But I also
    don't understand groups very well. I see
  • 60:19 - 60:22
    these weird hive minds but I think it's
    basically people emulating what the group
  • 60:22 - 60:27
    wants. Basically that everybody thinks by
    themselves as if they were the group but
  • 60:27 - 60:30
    it means that they have to constrain what
    they think is possible and permissible
  • 60:30 - 60:32
    to think.
  • 60:32 - 60:37
    So this feels very unaesthetic to me
    and that's why I kind of sort of refuse it.
  • 60:37 - 60:40
    I haven't found a way to make it
    happen in my own mind.
  • 60:40 - 60:46
    Applause
  • 60:46 - 60:49
    Joscha: And I suspect many of you
    are like this too.
  • 60:49 - 60:52
    It's like the common condition
    in nerds that we have difficulty with
  • 60:52 - 60:57
    conformance. Not because we want to be
    different. We want to belong. But it's
  • 60:57 - 61:02
    difficult for us to constrain our mind in
    the way that it's expected to belong. We
  • 61:02 - 61:07
    want to be accepted while
    being ourselves, while being different. Not
  • 61:07 - 61:12
    for the sake of being different, but
    because we are like this. It feels very
  • 61:12 - 61:17
    strange and corrupt just to adopt because
    it would make us belong, right? And this
  • 61:17 - 61:22
    might be a common trope
    among many people here.
  • 61:22 - 61:28
    Applause
  • 61:28 - 61:31
    Herald: I think the Q and A and the talk
  • 61:31 - 61:35
    were equally amazing and I would love to
    continue listening to you, Joscha,
  • 61:35 - 61:39
    explaining the way I work.
    Or the way we all work.
  • 61:39 - 61:42
    audience, Joscha laughing
    Herald: That's pretty impressive.
  • 61:42 - 61:45
    Please give it up, a big round of applause
    for Joscha!
  • 61:45 - 61:48
    Applause
  • 61:48 - 62:13
    subtitles created by c3subtitles.de
    in the year 2019. Join, and help us!