
Joscha: Computational Meta-Psychology

  • 0:00 - 0:09
    preroll music
  • 0:09 - 0:14
    Herald: Our next talk is going to be about AI and
    it's going to be about proper AI.
  • 0:14 - 0:18
    It's not going to be about
    deep learning or buzz word bingo.
  • 0:18 - 0:23
    It's going to be about actual psychology.
    It's going to be about computational metapsychology.
  • 0:23 - 0:26
    And now please welcome Joscha!
  • 0:26 - 0:33
    applause
  • 0:33 - 0:36
    Joscha: Thank you.
  • 0:36 - 0:38
    I'm interested in understanding
    how the mind works,
  • 0:38 - 0:43
    and I believe that the most foolproof perspective
    of looking at minds is to understand
  • 0:43 - 0:47
    that they are systems that, when you throw patterns
    at them, find meaning.
  • 0:47 - 0:52
    And they find meaning in very particular
    ways, and this is what makes us who we are.
  • 0:52 - 0:55
    So the way to study and understand who we
    are, in my understanding, is
  • 0:55 - 1:01
    to build models of information processing
    that constitutes our minds.
  • 1:01 - 1:06
    Last year, around the same time, I answered
    the four big questions of philosophy:
  • 1:06 - 1:09
    "What's the nature of reality?", "What can
    be known?", "Who are we?",
  • 1:09 - 1:15
    "What should we do?"
    So now, how can I top this?
  • 1:15 - 1:19
    applause
  • 1:19 - 1:23
    I'm going to give you the drama
    that divided a planet.
  • 1:23 - 1:26
    One of the very, very big events
    that happened in the course of last year,
  • 1:26 - 1:30
    so I couldn't tell you about it before.
  • 1:30 - 1:38
    What color is the dress?
    laughs applause
  • 1:38 - 1:45
    I mean, if you do not have any
    mental defects you can clearly see it's white
  • 1:45 - 1:47
    and gold. Right?
  • 1:47 - 1:49
    [voices from audience]
  • 1:49 - 1:53
    Turns out most people seem to have
    mental defects and say it is blue and black.
  • 1:53 - 1:58
    I have no idea why. Well Ok, I have an idea,
    why that is the case.
  • 1:58 - 2:01
    I guess you've got it too: it has to
    do with color renormalization,
  • 2:01 - 2:05
    and color renormalization apparently happens
    differently in different people.
  • 2:05 - 2:09
    So we have different wiring to renormalize
    the white balance.
  • 2:09 - 2:13
    And it seems to work in real world
    situations in pretty much the same way,
  • 2:13 - 2:18
    but not necessarily for photographs,
    which have only a very small fringe around them,
  • 2:18 - 2:21
    which gives you a hint about the lighting situation.
  • 2:21 - 2:27
    And that's why you get these huge divergences,
    which is amazing!
  • 2:27 - 2:30
    So what we see is that our minds cannot know
  • 2:30 - 2:33
    objective truths in any way, outside of mathematics.
  • 2:33 - 2:36
    They can generate meaning though.
  • 2:36 - 2:39
    How does this work?
  • 2:39 - 2:42
    I did robotic soccer for a while,
    and there you have the situation,
  • 2:42 - 2:45
    that you have a bunch of robots, that are
    situated on a playing field.
  • 2:45 - 2:48
    And they have a model of what goes on
    in the playing field.
  • 2:48 - 2:52
    Physics generates data for their sensors.
    They read the bits of the sensors.
  • 2:52 - 2:56
    And then they use them to update
    the world model.
  • 2:56 - 2:59
    And sometimes we didn't want
    to take the whole playing field along,
  • 2:59 - 3:03
    and the physical robots, because they are
    expensive and heavy and so on.
  • 3:03 - 3:06
    Instead, if you just want to improve the learning
    and the gameplay of the robots,
  • 3:06 - 3:08
    you can use simulations.
  • 3:08 - 3:11
    So we wrote a computer simulation of the
    playing field and the physics and so on,
  • 3:11 - 3:15
    that generates pretty much the same data,
    and put the robot mind into the simulated
  • 3:15 - 3:17
    robot body, and it works just as well.
  • 3:17 - 3:21
    That is, if you are the robot, because you
    cannot know the difference if you are the robot.
  • 3:21 - 3:24
    You cannot know what's out there. The only
    thing that you get to see is the structure
  • 3:24 - 3:28
    of the data at your systemic interface.
  • 3:28 - 3:30
    And then you can derive a model from this.
  • 3:30 - 3:33
    And this is pretty much the situation
    that we are in.
  • 3:33 - 3:38
    That is, we are minds that are somehow computational,
  • 3:38 - 3:41
    they are able to find regularity in patterns,
  • 3:41 - 3:45
    and we seem to have access to
    something that is full of regularity,
  • 3:45 - 3:47
    so we can make sense out of it.
  • 3:47 - 3:49
    drinks
  • 3:49 - 3:53
    Now, if you discover that you are in the same
    situation as these robots,
  • 3:53 - 3:56
    basically you discover that you are some kind
    of apparently biological robot,
  • 3:56 - 3:59
    that doesn't have direct access
    to the world of concepts.
  • 3:59 - 4:02
    That has never actually seen matter
    and energy and other people.
  • 4:02 - 4:05
    All it got to see was little bits of information,
  • 4:05 - 4:06
    that were transmitted through the nerves,
  • 4:06 - 4:08
    and the brain had to make sense of them,
  • 4:08 - 4:10
    by counting them in elaborate ways.
  • 4:10 - 4:13
    What's the best model of the world
    that you can have with this?
  • 4:13 - 4:17
    What would be the state of affairs,
    what's the system that you are in?
  • 4:17 - 4:21
    And what are the best algorithms that you
    should be using to fix your world model?
  • 4:21 - 4:23
    And this question is pretty old.
  • 4:23 - 4:28
    And I think it has been answered for the
    first time by Ray Solomonoff in the 1960s.
  • 4:28 - 4:31
    He discovered an algorithm
    that you can apply when you discover
  • 4:31 - 4:34
    that you are a robot,
    and all you have is data.
  • 4:34 - 4:35
    What is the world like?
  • 4:35 - 4:41
    And this algorithm is basically
    a combination of induction and Occam's razor.
  • 4:41 - 4:46
    And we can mathematically prove that we
    cannot do better than Solomonoff induction.
  • 4:46 - 4:51
    Unfortunately, Solomonoff induction
    is not quite computable.
  • 4:51 - 4:54
    But everything that we are going to do
    is going to be some approximation
  • 4:54 - 4:56
    of Solomonoff induction.
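
A minimal sketch of that idea, "induction plus Occam's razor": weight every hypothesis by 2 to the minus its description length, keep only those that explain the data seen so far, and let the weighted mixture predict the next bit. The hypotheses and the toy sequence below are made up for illustration; real Solomonoff induction would run over all possible programs, which is exactly the part that is not computable.

```python
import math

# Toy hypotheses: each predicts the next bit of a sequence and carries a
# description length in bits (its "program size").
hypotheses = [
    # (name, description_length_bits, predictor: history -> P(next bit = 1))
    ("always-1",  2, lambda h: 1.0),
    ("always-0",  2, lambda h: 0.0),
    ("alternate", 4, lambda h: 0.0 if (h and h[-1] == 1) else 1.0),
    ("copy-last", 4, lambda h: 1.0 if (h and h[-1] == 1) else 0.0),
]

def predict(history):
    """Occam-weighted mixture: prior weight 2^-length, scaled by how well
    each hypothesis explained the bits seen so far."""
    scores = []
    for name, length, f in hypotheses:
        likelihood = 1.0
        for i in range(len(history)):
            p1 = f(history[:i])
            likelihood *= p1 if history[i] == 1 else (1.0 - p1)
        scores.append((2.0 ** -length) * likelihood)
    total = sum(scores)
    if total == 0:
        return 0.5
    return sum(w * f(history) for (_, _, f), w in zip(hypotheses, scores)) / total

print(predict([1, 0, 1, 0, 1]))  # 0.0: only "alternate" survives, so it predicts 0
```
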
  • 4:56 - 4:59
    So our concepts cannot really refer
    to the facts in the world out there.
  • 4:59 - 5:02
    We do not get the truth by referring
    to stuff out there, in the world.
  • 5:02 - 5:08
    We get meaning by suitably encoding
    the patterns at our systemic interface.
  • 5:08 - 5:12
    And AI has recently made huge progress in
    encoding data at perceptual interfaces.
  • 5:12 - 5:16
    Deep learning is about using a stacked hierarchy
    of feature detectors.
  • 5:16 - 5:21
    That is, we use pattern detectors and we build
    them into networks that are arranged in
  • 5:21 - 5:23
    hundreds of layers.
  • 5:23 - 5:26
    And then we adjust the links
    between these layers.
  • 5:26 - 5:29
    Usually using
    some kind of gradient descent.
  • 5:29 - 5:33
    And we can use this to classify
    for instance images and parts of speech.
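
A minimal numpy sketch of such a stacked network trained by gradient descent. The data here is a toy XOR problem and the layer sizes are arbitrary, purely to illustrate the mechanism; real image classifiers stack far more layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR is not linearly separable, so a single layer cannot solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers of "feature detectors" (a tiny deep network).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: low-level features feed the higher-level decision.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: adjust the links between the layers.
    lr = 1.0
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```
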
  • 5:33 - 5:38
    So we get to features that are more and more
    complex; they start as very, very simple patterns
  • 5:38 - 5:41
    and then get more and more complex,
    until we get to object categories.
  • 5:41 - 5:44
    And now these systems are able,
    in image recognition tasks,
  • 5:44 - 5:47
    to approach performance that is very similar
    to human performance.
  • 5:47 - 5:52
    Also what is nice is that it seems to be somewhat
    similar to what the brain seems to be doing
  • 5:52 - 5:54
    in visual processing.
  • 5:54 - 5:58
    And if you take the activation at different
    levels of these networks and you
  • 5:58 - 6:01
    enhance this activation a little bit, what
  • 6:01 - 6:04
    you get is stuff that looks very psychedelic.
  • 6:04 - 6:10
    Which may be similar to what happens, if you
    put certain illegal substances into people,
  • 6:10 - 6:14
    and enhance the activity on certain layers
    of their visual processing.
  • 6:14 - 6:22
    [BROKEN AUDIO] If you want to classify the
    differences, what we do if we want to quantify
  • 6:22 - 6:33
    this is we filter out all the invariances in
    the data.
  • 6:33 - 6:36
    The pose that she has, the lighting,
    the dress that she has on,
  • 6:36 - 6:38
    her facial expression and so on.
  • 6:38 - 6:43
    And then we look only at the thing that
    is left after we've removed all the nuisance data.
  • 6:43 - 6:47
    But what if we
    want to get at something else,
  • 6:47 - 6:50
    for instance if we want to understand poses?
  • 6:50 - 6:53
    Could be for instance that we have several
    dancers and we want to understand what they
  • 6:53 - 6:54
    have in common.
  • 6:54 - 6:58
    So our best bet is not just to have a single
    classification-based filtering,
  • 6:58 - 7:01
    but instead what we want is to take
    the low-level input
  • 7:01 - 7:05
    and get a whole universe of features
    that is interrelated.
  • 7:05 - 7:07
    So we have different levels of interrelations.
  • 7:07 - 7:09
    At the lowest levels we have percepts.
  • 7:09 - 7:12
    On a slightly higher level we have simulations.
  • 7:12 - 7:17
    And on an even higher level we have the conceptual landscape.
  • 7:17 - 7:19
    How does this representation
    by simulation work?
  • 7:19 - 7:22
    Now imagine you want to understand sound.
  • 7:22 - 7:24
    drinks
  • 7:24 - 7:27
    If you are a brain and you want to understand
    sound you need to model it.
  • 7:27 - 7:31
    Unfortunately we cannot really model sound
    with neurons, because sound goes up to 20 kHz,
  • 7:31 - 7:37
    or if you are old like me maybe to 12 kHz.
    20 kHz is what babies can do.
  • 7:37 - 7:41
    And neurons do not want to do 20 kHz.
    That's way too fast for them.
  • 7:41 - 7:43
    They like something like 20 Hz.
  • 7:43 - 7:46
    So what do you do? You need
    to make a Fourier transform.
  • 7:46 - 7:50
    The Fourier transform measures the amount
    of energy at different frequencies.
  • 7:50 - 7:52
    And because you cannot do it with neurons,
    you need to do it in hardware.
  • 7:52 - 7:54
    And it turns out this is exactly
    what we are doing.
  • 7:54 - 8:00
    We have this cochlea, which is this snail-like
    thing in our ears,
  • 8:00 - 8:07
    and what it does is transform the energy of
    sound in different frequency intervals into
  • 8:07 - 8:08
    energy measurements.
  • 8:08 - 8:10
    And then gives you something
    like what you see here.
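
A rough numpy illustration of what the cochlea does mechanically: turning a fast pressure signal into slowly varying energy measurements per frequency band, something that slow neurons can follow. The tones and the band edges below are arbitrary.

```python
import numpy as np

fs = 44100                       # sample rate in Hz
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
# A toy sound: a 440 Hz tone plus a quieter 2 kHz tone.
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2    # energy per frequency bin
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Sum the energy inside a few coarse bands (the "cochlea output").
bands = [(20, 500), (500, 1000), (1000, 4000), (4000, 20000)]
for lo, hi in bands:
    energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{lo:>5}-{hi:<5} Hz: {energy:.1f}")
```
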
  • 8:10 - 8:13
    And this is something that the brain can model,
  • 8:13 - 8:16
    so we can get a neural simulator that tries
    to recreate these patterns.
  • 8:16 - 8:21
    And if we can predict the next input from the
    cochlea, we then understand the sound.
  • 8:21 - 8:23
    Of course if you want to understand music,
  • 8:23 - 8:25
    we have to go beyond understanding sound.
  • 8:25 - 8:29
    We have to understand the transformations
    that sound can have if you play it at a different pitch.
  • 8:29 - 8:34
    We have to arrange the sounds in sequences
    that give you rhythms and so on.
  • 8:34 - 8:36
    And then we want to identify
    some kind of musical grammar
  • 8:36 - 8:39
    that we can use to again control the sequencer.
  • 8:39 - 8:43
    So we have stacked structures
    that simulate the world.
  • 8:43 - 8:44
    And once you've learned this model of music,
  • 8:44 - 8:47
    once you've learned the musical grammar,
    the sequencer and the sounds,
  • 8:47 - 8:52
    you can get to the structure
    of an individual piece of music.
  • 8:52 - 8:54
    So, if you want to model the world of music,
  • 8:54 - 8:58
    you need to have the lowest level of percepts,
    then the higher level of mental simulations,
  • 8:58 - 9:02
    which give you the sequences of the music
    and the grammars of music.
  • 9:02 - 9:05
    And beyond this you have the conceptual landscape
    that you can use
  • 9:05 - 9:08
    to describe different styles of music.
  • 9:08 - 9:12
    And if you go up in the hierarchy,
    you get to more and more abstract models.
  • 9:12 - 9:14
    More and more conceptual models.
  • 9:14 - 9:16
    And more and more analytic models.
  • 9:16 - 9:18
    And these are causal models at some point.
  • 9:18 - 9:21
    These causal models can be weakly deterministic,
  • 9:21 - 9:23
    basically associative models, which tell you
  • 9:23 - 9:27
    if this state happens, it's quite probable
    that this one comes afterwards.
  • 9:27 - 9:29
    Or you can get to a strongly determined model.
  • 9:29 - 9:33
    A strongly determined model is one which tells
    you: if you are in this state
  • 9:33 - 9:34
    and this condition is met,
  • 9:34 - 9:36
    you are going to go exactly into this state.
  • 9:36 - 9:40
    If this condition is not met, or a different
    condition is met, you are going to this state.
  • 9:40 - 9:41
    And this is what we call an algorithm.
  • 9:41 - 9:47
    Now we are in the domain of computation.
  • 9:47 - 9:49
    Computation is slightly different from mathematics.
  • 9:49 - 9:51
    It's important to understand this.
  • 9:51 - 9:55
    For a long time people have thought that the
    universe is written in mathematics.
  • 9:55 - 9:58
    Or that minds are mathematical,
    or that everything is mathematical.
  • 9:58 - 10:00
    In fact nothing is mathematical.
  • 10:00 - 10:05
    Mathematics is just the domain
    of formal languages. It doesn't exist.
  • 10:05 - 10:07
    Mathematics starts with a void.
  • 10:07 - 10:12
    You throw in a few axioms, and if you've chosen
    nice axioms, then you get infinite complexity.
  • 10:12 - 10:14
    Most of which is not computable.
  • 10:14 - 10:16
    In mathematics you can express arbitrary statements,
  • 10:16 - 10:18
    because it's all about formal languages.
  • 10:18 - 10:20
    Many of these statements will not make sense.
  • 10:20 - 10:22
    Many of these statements will make sense
    in some way,
  • 10:22 - 10:24
    but you cannot test whether they make sense,
  • 10:24 - 10:27
    because they're not computable.
  • 10:27 - 10:30
    Computation is different.
    Computation can exist.
  • 10:30 - 10:32
    It starts with an initial state.
  • 10:32 - 10:35
    And then you have a transition function.
    You do the work.
  • 10:35 - 10:38
    You apply the transition function,
    and you get into the next state.
  • 10:38 - 10:41
    Computation is always finite.
  • 10:41 - 10:44
    Mathematics is the kingdom of specification.
  • 10:44 - 10:47
    And computation is the kingdom of implementation.
  • 10:47 - 10:51
    It's very important to understand this difference.
  • 10:51 - 10:55
    All our access to mathematics of course is
    because we do computation.
  • 10:55 - 10:57
    We can understand mathematics,
  • 10:57 - 11:00
    because our brain can compute
    some parts of mathematics.
  • 11:00 - 11:04
    Very, very little of it, and at
    very constrained complexity.
  • 11:04 - 11:07
    But enough, so we can map
    some of the infinite complexity
  • 11:07 - 11:10
    and noncomputability of mathematics
    into computational patterns,
  • 11:10 - 11:12
    that we can explore.
  • 11:12 - 11:14
    So computation is about doing the work,
  • 11:14 - 11:17
    it's about executing the transition function.
  • 11:20 - 11:23
    Now we've seen that mental representation
    is about percepts,
  • 11:23 - 11:26
    mental simulations, conceptual representations,
  • 11:26 - 11:29
    and these conceptual representations
    give us concept spaces.
  • 11:29 - 11:31
    And the nice thing
    about these concept spaces is
  • 11:31 - 11:33
    that they give us an interface
    to our mental representations,
  • 11:33 - 11:36
    which we can use to address and manipulate them.
  • 11:36 - 11:39
    And we can share them in cultures.
  • 11:39 - 11:41
    And these concepts are compositional.
  • 11:41 - 11:44
    We can put them together, to create new concepts.
  • 11:44 - 11:48
    And they can be described using
    high-dimensional vector spaces.
  • 11:48 - 11:50
    They don't do simulation
    and prediction and so on,
  • 11:50 - 11:53
    but we can capture the regularity
    in our concept usage.
  • 11:53 - 11:55
    With these vector spaces
    you can do amazing things.
  • 11:55 - 11:58
    For instance, the vector from
    "King" to "Queen"
  • 11:58 - 12:01
    is pretty much the same vector
    as the one between "Man" and "Woman".
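
A toy numpy sketch of that vector property, with hand-made 3-dimensional vectors (real systems such as word2vec learn a few hundred dimensions from text): the offset from "king" to "queen" is roughly the offset from "man" to "woman", so king - man + woman lands nearest to queen.

```python
import numpy as np

# Hand-made toy embeddings (dimensions: royalty, gender, person-ness).
vecs = {
    "king":  np.array([0.9,  0.7, 1.0]),
    "queen": np.array([0.9, -0.7, 1.0]),
    "man":   np.array([0.1,  0.7, 1.0]),
    "woman": np.array([0.1, -0.7, 1.0]),
}

def nearest(v):
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vecs, key=lambda w: cos(vecs[w], v))

# "king" - "man" + "woman" should land closest to "queen".
print(nearest(vecs["king"] - vecs["man"] + vecs["woman"]))
```
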
  • 12:01 - 12:04
    And because of these properties, because these
    concept spaces are really high-dimensional manifolds,
  • 12:04 - 12:08
    we can do interesting
    things, like machine translation
  • 12:08 - 12:09
    without understanding what it means.
  • 12:09 - 12:14
    That is without doing any proper mental representation,
    that predicts the world.
  • 12:14 - 12:17
    So this is a type of meta-representation
    that is somewhat incomplete,
  • 12:17 - 12:21
    but it captures the landscape that we share
    in a culture.
  • 12:21 - 12:25
    And then there is another type of meta-representation,
    which is linguistic protocols.
  • 12:25 - 12:28
    That is basically a formal grammar and vocabulary.
  • 12:28 - 12:30
    And we need these linguistic protocols
  • 12:30 - 12:33
    to transfer mental representations
    between people.
  • 12:33 - 12:36
    And we do this by basically
    scanning our mental representations,
  • 12:36 - 12:39
    disassembling them in some way
    or disambiguating them.
  • 12:39 - 12:43
    And then we use a discrete string of symbols
    to get it to somebody else,
  • 12:43 - 12:46
    and they have trained an assembler
    that reverses this process
  • 12:46 - 12:51
    and builds something that is pretty similar
    to what we intended to convey.
  • 12:51 - 12:54
    And if you look at the progression of AI models,
  • 12:54 - 12:56
    it pretty much went in the opposite direction.
  • 12:56 - 13:00
    So AI started with linguistic protocols, which
    were expressed in formal grammars.
  • 13:00 - 13:05
    And then it got to concept spaces, and now
    it's about to address percepts.
  • 13:05 - 13:10
    And at some point in the near future it's going
    to get better at mental simulations.
  • 13:10 - 13:12
    And at some point after that we get to
  • 13:12 - 13:15
    attention-directed and
    motivationally connected systems
  • 13:15 - 13:17
    that make sense of the world,
  • 13:17 - 13:20
    that are in some sense able to address meaning.
  • 13:20 - 13:23
    This is what the hardware that we have can do.
  • 13:23 - 13:26
    What kind of hardware do we have?
  • 13:26 - 13:28
    That's a very interesting question.
  • 13:28 - 13:32
    We could start out with the question:
    how difficult is it to define a brain?
  • 13:32 - 13:35
    We know that the brain must be
    somewhere hidden in the genome.
  • 13:35 - 13:38
    The genome fits on a CD ROM.
    It's not that complicated.
  • 13:38 - 13:40
    It's easier than Microsoft Windows. laughter
  • 13:40 - 13:46
    And we also know, that about 2%
    of the genome is coding for proteins.
  • 13:46 - 13:48
    And maybe about 10% of the genome
    has some kind of stuff
  • 13:48 - 13:51
    that tells you when to switch protein.
  • 13:51 - 13:53
    And the remainder is mostly garbage.
  • 13:53 - 13:57
    It's old viruses that are left over and have
    never been properly deleted and so on.
  • 13:57 - 14:01
    Because there are no real
    code revisions in the genome.
  • 14:01 - 14:08
    So how much of this 10%,
    that is 75 MB, codes for the brain?
  • 14:08 - 14:09
    We don't really know.
  • 14:09 - 14:13
    What we do know is we share
    almost all of this with mice.
  • 14:13 - 14:16
    Genetically speaking, a human
    is a pretty big mouse.
  • 14:16 - 14:21
    With a few bits changed to fix some
    of the genetic expressions.
  • 14:21 - 14:26
    And most of the stuff there is going
    to code for cells and metabolism
  • 14:26 - 14:28
    and what your body looks like and so on.
  • 14:28 - 14:34
    But if you look at how much is expressed
    in the brain and only in the brain,
  • 14:34 - 14:35
    in terms of proteins and so on,
  • 14:35 - 14:46
    we find it's about 5% of the 2%. That is,
    only 5% of the 2% is expressed
  • 14:46 - 14:47
    only in the brain.
  • 14:47 - 14:50
    And another 5% of the 2% is predominantly
    in the brain.
  • 14:50 - 14:52
    That is more in the brain than anywhere else.
  • 14:52 - 14:54
    Which gives you something
    like a lower bound.
  • 14:54 - 14:59
    Which means, to encode a brain genetically,
    based on the hardware that we are using,
  • 14:59 - 15:04
    we need something like
    at least 500 kB of code.
  • 15:04 - 15:07
    Actually, this is a very conservative
    lower bound.
  • 15:07 - 15:09
    It's going to be a little more I guess.
  • 15:09 - 15:11
    But it sounds surprisingly little, right?
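
A back-of-the-envelope check on those numbers, assuming a genome of roughly 3 billion base pairs at 2 bits per base:

```python
genome_mb  = 3e9 * 2 / 8 / 1e6   # ~750 MB raw: "fits on a CD-ROM"
coding_mb  = genome_mb * 0.02    # ~2% codes for proteins: ~15 MB
regulatory = genome_mb * 0.10    # ~10% regulatory: the ~75 MB mentioned above
brain_only = coding_mb * 0.05    # ~5% of the coding part is brain-only
print(genome_mb, coding_mb, regulatory, brain_only)
# 750.0 15.0 75.0 0.75 -> a few hundred kB up to ~1 MB,
# consistent with the "at least 500 kB" lower bound
```
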
  • 15:11 - 15:14
    But in terms of scientific theories
    this is a lot.
  • 15:14 - 15:17
    I mean the universe,
    according to the core theory
  • 15:17 - 15:19
    of quantum mechanics and so on,
    is like this much code.
  • 15:19 - 15:21
    It's like half a page of code.
  • 15:21 - 15:23
    That's it. That's all you need
    to generate the universe.
  • 15:23 - 15:25
    And if you want to understand evolution
    it's like a paragraph.
  • 15:25 - 15:30
    It's a couple of lines you need to understand
    the evolutionary process.
  • 15:30 - 15:32
    And there are lots and lots of details that
    you get afterwards.
  • 15:32 - 15:34
    Because this process itself doesn't define
  • 15:34 - 15:37
    what the animals are going to look like,
    and in a similar way
  • 15:37 - 15:41
    the code of the universe doesn't tell you
    what this planet is going to look like.
  • 15:41 - 15:43
    And what you guys are going to look like.
  • 15:43 - 15:46
    It's just defining the rulebook.
  • 15:46 - 15:49
    And in the same sense the genome defines the rulebook
  • 15:49 - 15:52
    by which our brain is built.
  • 15:52 - 15:56
    The brain boots itself
    in a developmental process,
  • 15:56 - 15:58
    and this booting takes some time.
  • 15:58 - 16:01
    There is subliminal learning in which
    initial connections are forged
  • 16:01 - 16:05
    and basic models of the world are built,
    so we can operate in it.
  • 16:05 - 16:07
    And how long does this booting take?
  • 16:07 - 16:10
    I think it's about 80 megaseconds.
  • 16:10 - 16:14
    That's the time that a child is awake until
    it's 2.5 years old.
  • 16:14 - 16:16
    By this age you understand Star Wars.
  • 16:16 - 16:20
    And I think that everything after
    understanding Star Wars is cosmetics.
  • 16:20 - 16:27
    laughter applause
  • 16:27 - 16:33
    You are going to be online, if you get to
    reach old age, for about 1.5 gigaseconds.
  • 16:33 - 16:38
    And in this time I think you are not going to
    get to more than 5 million concepts.
  • 16:38 - 16:42
    Why? I don't really know.
    But if you look at this child:
  • 16:42 - 16:45
    If a child were able to form a concept,
    let's say, every 5 minutes,
  • 16:45 - 16:49
    then by the time it's about 4 years old,
    it's going to have
  • 16:49 - 16:52
    something like 250 thousand concepts.
  • 16:52 - 16:54
    So, a quarter million.
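
The arithmetic behind that quarter-million figure, assuming roughly 14 waking hours per day (the waking-hours number is an assumption, not stated in the talk):

```python
concepts_per_hour = 60 / 5        # one new concept every 5 minutes
waking_hours      = 4 * 365 * 14  # ~4 years at ~14 hours awake per day
print(int(concepts_per_hour * waking_hours))  # ~245,000 -> "a quarter million"
```
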
  • 16:54 - 16:57
    And if we extrapolate this into our lifetime,
  • 16:57 - 17:00
    at some point it slows down,
    because we have enough concepts,
  • 17:00 - 17:01
    to describe the world.
  • 17:01 - 17:04
    Maybe... I think it's
    less than 5 million.
  • 17:04 - 17:07
    How much storage capacity does the brain have?
  • 17:07 - 17:12
    I think the estimates
    are pretty divergent.
  • 17:12 - 17:15
    The lower bound is something like 100 GB,
  • 17:15 - 17:19
    And the upper bound
    is something like 2.5 PB.
  • 17:19 - 17:22
    There are even
    some higher outliers.
  • 17:22 - 17:26
    If you for instance think that we need all
    those synaptic vesicles to store information,
  • 17:26 - 17:28
    maybe even more fits into this.
  • 17:28 - 17:32
    But the 2.5 PB is usually based
    on what you need
  • 17:32 - 17:35
    to code the information
    that is in all the neurons.
  • 17:35 - 17:37
    But maybe the neurons
    do not really matter so much,
  • 17:37 - 17:40
    because if a neuron dies it's not like the
    world is changing dramatically.
  • 17:40 - 17:44
    The brain is very resilient
    against individual neurons failing.
  • 17:44 - 17:49
    So the 100 GB capacity is much more
    what you actually store in the neurons.
  • 17:49 - 17:51
    If you look at all the redundancy
    that you need.
  • 17:51 - 17:54
    And I think this is much closer to the actual
    ballpark figure.
  • 17:54 - 17:58
    Also, if you want to store
    5 million concepts,
  • 17:58 - 18:02
    and maybe 10 times or 100 times the number
    of percepts, on top of this,
  • 18:02 - 18:05
    this is roughly the ballpark figure
    that you are going to need.
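
A rough consistency check on that 100 GB ballpark; the bytes-per-item figure is an illustrative assumption, not something from the talk:

```python
concepts       = 5e6              # ~5 million concepts over a lifetime
percepts       = 100 * concepts   # maybe 10-100 times as many stored percepts
bytes_per_item = 200              # assumed average encoding size, incl. links
total_gb = (concepts + percepts) * bytes_per_item / 1e9
print(round(total_gb))            # ~100 GB, the lower-bound estimate above
```
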
  • 18:05 - 18:07
    So our brain
  • 18:07 - 18:08
    is a prediction machine.
  • 18:08 - 18:11
    What it does is reduce the entropy
    of the environment,
  • 18:11 - 18:15
    to solve whatever problems you are encountering,
  • 18:15 - 18:18
    if you don't have a feedback loop to fix
    them.
  • 18:18 - 18:20
    So normally if something happens, we have
    some kind of feedback loop,
  • 18:20 - 18:23
    that regulates our temperature or that makes
    problems go away.
  • 18:23 - 18:26
    And only when this is not working
    do we employ cognition.
  • 18:26 - 18:29
    And then we start these arbitrary
    computational processes
  • 18:29 - 18:32
    that are facilitated by the neocortex.
  • 18:32 - 18:35
    And this neocortex can really
    do arbitrary programs.
  • 18:35 - 18:38
    But it can do so
    only with very limited complexity,
  • 18:38 - 18:42
    because, as you just saw,
    it's really not that complex.
  • 18:42 - 18:44
    The modeling of the world is very slow.
  • 18:44 - 18:47
    And it's something
    that we see in our AI models.
  • 18:47 - 18:48
    To learn the basic structure of the world
  • 18:48 - 18:49
    takes a very long time.
  • 18:49 - 18:53
    To learn basically that we are moving in 3D
    and objects are moving,
  • 18:53 - 18:54
    and what they look like.
  • 18:54 - 18:55
    Once we have this basic model,
  • 18:55 - 18:59
    we can get to very, very quick
    understanding within this model.
  • 18:59 - 19:02
    Basically encoding based
    on the structure of the world,
  • 19:02 - 19:04
    that we've learned.
  • 19:04 - 19:07
    And this is some kind of
    data compression, that we are doing.
  • 19:07 - 19:10
    We use this model, this grammar of the world,
  • 19:10 - 19:12
    these simulation structures that we've learned,
  • 19:12 - 19:15
    to encode the world very, very efficiently.
  • 19:15 - 19:18
    How much data compression do we get?
  • 19:18 - 19:20
    Well, if you look at the retina:
  • 19:20 - 19:25
    the retina gets data
    on the order of about 10 Gb/s.
  • 19:25 - 19:28
    And the retina already compresses this data
  • 19:28 - 19:31
    and puts it into the optic nerve
    at a rate of about 1 Mb/s.
  • 19:31 - 19:34
    This is what gets fed into the visual cortex.
  • 19:34 - 19:36
    And the visual cortex
    does some additional compression,
  • 19:36 - 19:42
    and by the time it gets to layer four of the
    first visual area, V1,
  • 19:42 - 19:47
    we are down to something like 1 Kb/s.
  • 19:47 - 19:51
    So if we extrapolate this, and you get to live
    to the age of 80 years,
  • 19:51 - 19:54
    and you are awake for 2/3 of your lifetime,
  • 19:54 - 19:57
    that is, you have your eyes open for 2/3 of
    your lifetime,
  • 19:57 - 19:59
    the stuff that gets into your brain
  • 19:59 - 20:04
    via your visual perception
    is going to be only 2 TB.
  • 20:04 - 20:05
    Only 2 TB of visual data.
  • 20:05 - 20:07
    Throughout all your lifetime.
  • 20:07 - 20:09
    That's all you are ever going to get to see.
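
The arithmetic behind the 2 TB figure; whether the "1 Kb/s" into V1 means kilobits or kilobytes changes the result by a factor of eight, and kilobytes is assumed here:

```python
seconds_alive = 80 * 365.25 * 24 * 3600  # ~2.5 billion seconds in 80 years
seconds_awake = seconds_alive * 2 / 3    # eyes open for about 2/3 of that
bytes_into_v1 = seconds_awake * 1000     # ~1 kB/s after cortical compression
print(bytes_into_v1 / 1e12)              # ~1.7 TB -> "only about 2 TB"
```
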
  • 20:09 - 20:11
    Isn't this depressing?
  • 20:11 - 20:13
    laughter
  • 20:13 - 20:17
    So I would really like
    to tell you:
  • 20:17 - 20:23
    choose wisely what you
    are going to look at. laughter
  • 20:23 - 20:27
    Ok. Let's look at this problem of neural compositionality.
  • 20:27 - 20:29
    Our brains have this amazing property
    that they can put
  • 20:29 - 20:32
    meta-representations together very, very quickly.
  • 20:32 - 20:33
    For instance you read a page of code,
  • 20:33 - 20:35
    you compile it in your mind
    into some kind of program
  • 20:35 - 20:38
    that tells you what this page is going to do.
  • 20:38 - 20:39
    Isn't that amazing?
  • 20:39 - 20:41
    And then you can forget about this,
  • 20:41 - 20:44
    disassemble it all, and use the
    building blocks for something else.
  • 20:44 - 20:45
    It's like legos.
  • 20:45 - 20:48
    How can you do this with neurons?
  • 20:48 - 20:50
    Legos can do this because they have
    a well-defined interface.
  • 20:50 - 20:52
    They have all these slots, you know,
    that fit together
  • 20:52 - 20:54
    in well-defined ways.
  • 20:54 - 20:55
    How can neurons do this?
  • 20:55 - 20:57
    Well, neurons can maybe learn
    the interface of other neurons.
  • 20:57 - 21:00
    But that's difficult, because every neuron
    looks slightly different,
  • 21:00 - 21:05
    after all it's some kind of biologically
    grown natural stuff.
  • 21:05 - 21:07
    laughter
  • 21:07 - 21:11
    So what you want to do is
    encapsulate this
  • 21:11 - 21:13
    diversity of the neurons to make them predictable,
  • 21:13 - 21:15
    to give them a well-defined interface.
  • 21:15 - 21:16
    And I think that nature's solution to this
  • 21:16 - 21:20
    is cortical columns.
  • 21:20 - 21:24
    A cortical column is a circuit of
    between 100 and 400 neurons.
  • 21:24 - 21:27
    And this circuit has some kind of neural network,
  • 21:27 - 21:29
    that can learn stuff.
  • 21:29 - 21:31
    And after it has learned a particular function,
  • 21:31 - 21:35
    and in between, it's able to link up with these
    other cortical columns.
  • 21:35 - 21:37
    And we have about 100 million of those.
  • 21:37 - 21:40
    Depending on how many neurons
    you assume are in there,
  • 21:40 - 21:41
    we guess it's something like
  • 21:41 - 21:46
    at least 20 million and maybe
    something like 100 million.
  • 21:46 - 21:48
    And these cortical columns, what they can do
  • 21:48 - 21:50
    is link up like Lego bricks,
  • 21:50 - 21:54
    and then perform,
    by transmitting information between them,
  • 21:54 - 21:56
    pretty much arbitrary computations.
  • 21:56 - 21:58
    What kind of computation?
  • 21:58 - 22:00
    Well... Solomonoff induction.
  • 22:00 - 22:04
    And they have some short-range links
    to their neighbors,
  • 22:04 - 22:06
    which come almost for free, because,
  • 22:06 - 22:08
    well, they are connected to them,
    they are in the direct neighborhood.
  • 22:08 - 22:10
    And they have some long-range connectivity,
  • 22:10 - 22:13
    so you can combine everything
    in your cortex with everything.
  • 22:13 - 22:15
    So you need some kind of global switchboard.
  • 22:15 - 22:18
    Some grid-like architecture
    of long-range connections.
  • 22:18 - 22:19
    They are going to be more expensive,
  • 22:19 - 22:21
    they are going to be slower,
  • 22:21 - 22:24
    but they are going to be there.
  • 22:24 - 22:26
    So how can we optimize
    what these guys are doing?
  • 22:26 - 22:28
    In some sense it's like an economy.
  • 22:28 - 22:31
    It's not the kind of system
    that we often use in machine learning.
  • 22:31 - 22:33
    It's really an economy.
  • 22:33 - 22:36
    The question is: you have a fixed number of
    elements,
  • 22:36 - 22:38
    how can you do the most valuable stuff with
    them?
  • 22:38 - 22:41
    Fixed resources, most valuable stuff, the
    problem is economy.
  • 22:41 - 22:43
    So you have an economy of information brokers.
  • 22:43 - 22:46
    Every one of these guys,
    these little cortical columns,
  • 22:46 - 22:48
    is a very simplistic information broker.
  • 22:48 - 22:51
    And they trade rewards against negentropy,
  • 22:51 - 22:54
    against reducing entropy
    in the world.
  • 22:54 - 22:56
    And to do this, as we just saw
  • 22:56 - 22:59
    they need some kind of standardized interface.
  • 22:59 - 23:02
    And internally, to use this interface
    they are going to
  • 23:02 - 23:04
    have some kind of state machine.
  • 23:04 - 23:06
    And then they are going to pass messages
  • 23:06 - 23:07
    between each other.
  • 23:07 - 23:09
    And what are these messages?
  • 23:09 - 23:11
    Well, it's going to be hard
    to discover these messages,
  • 23:11 - 23:13
    by looking at brains.
  • 23:13 - 23:15
    Because it's very difficult to see in brains,
  • 23:15 - 23:15
    what they are actually doing.
  • 23:15 - 23:17
    You just see all these neurons.
  • 23:17 - 23:19
    And if we had been waiting for neuroscience
  • 23:19 - 23:21
    to discover anything, we wouldn't even have
  • 23:21 - 23:23
    gradient descent or anything else.
  • 23:23 - 23:24
    We wouldn't have neural learning.
  • 23:24 - 23:25
    We wouldn't have all these advances in AI.
  • 23:25 - 23:28
    Jürgen Schmidhuber said that the biggest,
  • 23:28 - 23:30
    the last contribution of neuroscience to
  • 23:30 - 23:32
    artificial intelligence
    was about 50 years ago.
  • 23:32 - 23:34
    That's depressing, and it might be
  • 23:34 - 23:38
    overemphasizing the unimportance of neuroscience,
  • 23:38 - 23:39
    because neuroscience is very important,
  • 23:39 - 23:41
    once you know what you are looking for.
  • 23:41 - 23:43
    You can actually often find this,
  • 23:43 - 23:44
    and see whether you are on the right track.
  • 23:44 - 23:46
    But it's very difficult to use neuroscience
  • 23:46 - 23:48
    to understand how the brain is working.
  • 23:48 - 23:49
    Because it's really like understanding
  • 23:49 - 23:53
    flight by looking at birds through a microscope.
  • 23:53 - 23:55
    So, what are these messages?
  • 23:55 - 23:58
    You are going to need messages,
    that tell these cortical columns
  • 23:58 - 24:00
    to join themselves into a structure.
  • 24:00 - 24:02
    And to unlink again once they're done.
  • 24:02 - 24:04
    You need ways that they can request each other
  • 24:04 - 24:06
    to perform computations for them.
  • 24:06 - 24:08
    You need ways they can inhibit each other
  • 24:08 - 24:08
    when they are linked up.
  • 24:08 - 24:11
    So they don't do conflicting computations.
  • 24:11 - 24:13
    Then they need to tell you whether the computation,
  • 24:13 - 24:14
    the result of the computation
  • 24:14 - 24:17
    that they are asked to do, is probably false.
  • 24:17 - 24:19
    Or whether it's probably true,
    but you still need to wait for others,
  • 24:19 - 24:22
    to tell you whether the details worked out.
  • 24:22 - 24:24
    Or whether it's confirmed true that the concepts
  • 24:24 - 24:27
    that they stand for are actually the case.
  • 24:27 - 24:28
    And then you want to have learning,
  • 24:28 - 24:30
    to tell you how well this worked.
  • 24:30 - 24:31
    So you will have to announce a bounty,
  • 24:31 - 24:34
    that tells them to link up,
    a kind of reward signal
  • 24:34 - 24:37
    that makes them do the computation in the first place.
  • 24:37 - 24:39
    And then you want to have
    some kind of reward signal
  • 24:39 - 24:41
    once you as an organism have got the result.
  • 24:41 - 24:42
    That is, you reached your goal, you made
  • 24:42 - 24:46
    the disturbance go away
    or, whatever, you consumed the cake.
  • 24:46 - 24:48
    And then you will have
    some kind of reward signal
  • 24:48 - 24:49
    that you give to everybody
  • 24:49 - 24:51
    that was involved in this.
  • 24:51 - 24:53
    And this reward signal facilitates learning,
  • 24:53 - 24:55
    so the difference between the announced reward
  • 24:55 - 24:58
    and the consumed reward is the learning signal
  • 24:58 - 24:59
    for these guys.
  • 24:59 - 25:00
    So they can learn how to play together,
  • 25:00 - 25:03
    and how to do the Solomonoff induction.
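
A highly schematic sketch of that message vocabulary, as one possible reading of the talk rather than an implementation of the brain: columns link up, request computations, answer "probably true, still waiting on the details" or "probably false", and learn from the gap between the announced bounty and the reward that is actually consumed. All names and numbers below are illustrative assumptions.

```python
from enum import Enum, auto

class Msg(Enum):
    LINK = auto(); UNLINK = auto(); REQUEST = auto(); INHIBIT = auto()
    PROBABLY_FALSE = auto(); PROBABLY_TRUE = auto(); CONFIRMED = auto()

class Column:
    """A cortical column as a very simplistic information broker."""
    def __init__(self, name, prior):
        self.name, self.prior, self.partners = name, prior, []

    def receive(self, msg, bounty=0.0):
        if msg is Msg.REQUEST:
            # Answer optimistically if the announced bounty, weighted by the
            # column's prior, looks worth pursuing; the details are delegated
            # to linked partner columns.
            if self.prior * bounty > 0.1:
                for p in self.partners:
                    p.receive(Msg.REQUEST, bounty * self.prior)
                return Msg.PROBABLY_TRUE
            return Msg.PROBABLY_FALSE
        return None

    def learn(self, announced, consumed):
        # The difference between the announced reward and the consumed
        # reward is the learning signal.
        self.prior += 0.1 * (consumed - announced)

edge, contour = Column("edge", 0.6), Column("contour", 0.4)
edge.partners.append(contour)
print(edge.receive(Msg.REQUEST, bounty=0.5))  # Msg.PROBABLY_TRUE
edge.learn(announced=0.5, consumed=0.8)       # positive signal: raise the prior
```
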
  • 25:03 - 25:05
    Now, I've told you that Solomonoff induction
  • 25:05 - 25:05
    is not computable.
  • 25:05 - 25:08
    And it's mostly because of two things.
  • 25:08 - 25:09
    First of all, it needs infinite resources
  • 25:09 - 25:11
    to compare all the possible models.
  • 25:11 - 25:14
    And the other one is that we do not know
  • 25:14 - 25:15
    the prior probability for our Bayesian model.
  • 25:15 - 25:19
    We do not know
    how likely unknown stuff is in the world.
  • 25:19 - 25:23
    So what we do instead is,
    we set some kind of hyperparameter,
  • 25:23 - 25:25
    some kind of default
    prior probability for concepts
  • 25:25 - 25:28
    that are encoded by cortical columns.
  • 25:28 - 25:31
    And if we set this parameter very low,
  • 25:31 - 25:32
    then we are going to end up with inferences
  • 25:32 - 25:35
    that are quite probable
  • 25:35 - 25:36
    for unknown things.
  • 25:36 - 25:38
    And then we can test for those.
  • 25:38 - 25:41
    If we set this parameter higher, we are going
    to be very, very creative.
  • 25:41 - 25:44
    But we end up with many, many theories
  • 25:44 - 25:45
    that are difficult to test.
  • 25:45 - 25:48
    Because maybe there are
    too many theories to test.
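
A toy illustration of that hyperparameter, with entirely made-up numbers: each candidate connection is entertained if its evidence plus the default prior clears a fixed threshold; raise the prior and far more candidates light up, most of them false.

```python
import random

random.seed(1)
# 1000 candidate "connections"; only about 2% of them are actually real.
real     = [random.random() < 0.02 for _ in range(1000)]
evidence = [random.uniform(0.7, 1.0) if r else random.uniform(0.0, 0.5)
            for r in real]

def entertained(prior, threshold=0.75):
    """Indices of the hypotheses the system bothers to pursue."""
    return [i for i, e in enumerate(evidence) if e + prior > threshold]

for prior in (0.05, 0.4):           # low vs. high default prior
    picked = entertained(prior)
    hits = sum(real[i] for i in picked)
    print(f"prior={prior}: entertained {len(picked)}, actually real {hits}")
```
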
  • 25:48 - 25:51
    Basically every one of these cortical columns
    will now tell you,
  • 25:51 - 25:52
    when you ask them if they are true:
  • 25:52 - 25:55
    "Yes I'm probably true,
    but i still need to ask others,
  • 25:55 - 25:57
    to work on the details"
  • 25:57 - 25:59
    So these others are going to get active,
  • 25:59 - 26:01
    and they are being asked by the asking element:
  • 26:01 - 26:02
    "Are you going to be true?",
  • 26:02 - 26:04
    and they say "Yeah, probably yes,
    I just have to work on the details"
  • 26:04 - 26:06
    and they are going to ask even more.
  • 26:06 - 26:08
    So your brain is going to light up like a
    Christmas tree,
  • 26:08 - 26:10
    and do all these amazing computations,
  • 26:10 - 26:12
    and you see connections everywhere,
    most of them are wrong.
  • 26:12 - 26:16
    You are basically in a psychotic state
    if your hyperparameter is too high.
  • 26:16 - 26:21
    Your brain invents more theories
    than it can disprove.
  • 26:21 - 26:25
    Would it actually sometimes be good
    to be in this state?
  • 26:25 - 26:28
    You bet. So I think every night our brain
    goes into this state.
  • 26:28 - 26:32
    We turn up this hyperparameter.
    We dream. We get all kinds of
  • 26:32 - 26:34
    weird connections, and we get to see connections
  • 26:34 - 26:36
    that otherwise we couldn't be seeing,
  • 26:36 - 26:38
    because they are highly improbable.
  • 26:38 - 26:43
    But sometimes they hold, and we see: "Oh
    my God, DNA is organized in a double helix".
  • 26:43 - 26:45
    And this is what we remember in the morning.
  • 26:45 - 26:47
    All the other stuff is deleted.
  • 26:47 - 26:48
    So we usually don't form long-term memories
  • 26:48 - 26:51
    in dreams, if everything goes well.
  • 26:51 - 26:57
    If you accidentally trip up your modulators,
  • 26:57 - 26:59
    for instance by consuming illegal substances,
  • 26:59 - 27:02
    or because you've just gone randomly psychotic,
  • 27:02 - 27:05
    you are basically entering
    a dreaming state, I guess.
  • 27:05 - 27:07
    You get to a state
    where the brain starts inventing more
  • 27:07 - 27:11
    concepts than it can disprove.
  • 27:11 - 27:14
    So you want to have a state
    where this is well balanced.
  • 27:14 - 27:16
    And the difference between
    highly creative people,
  • 27:16 - 27:20
    and very religious people is probably
    a different setting of this hyperparameter.
  • 27:20 - 27:22
    So I suspect that people
    that are geniuses,
  • 27:22 - 27:24
    like Einstein and so on,
  • 27:24 - 27:27
    do not simply have better neurons than others.
  • 27:27 - 27:29
    What they mostly have is a hyperparameter
  • 27:29 - 27:34
    that is very finely tuned, so they can get
    a better balance than other people
  • 27:34 - 27:44
    in finding theories that might be true
    but can still be disproven.
  • 27:44 - 27:49
    So inventiveness could be
    a hyperparameter in the brain.
  • 27:49 - 27:54
    If you want to measure
    the quality of the beliefs that we have,
  • 27:54 - 27:56
    we are going to have to have
    some kind of cost function
  • 27:56 - 27:59
    which is based on the motivational system.
  • 27:59 - 28:02
    And to identify whether a belief
    is good or not we have abstract criteria,
  • 28:02 - 28:06
    for instance how well does it predict the
    world, or how much does it reduce uncertainty
  • 28:06 - 28:08
    in the world,
  • 28:08 - 28:10
    or is it consistent and sparse.
  • 28:10 - 28:14
    And then of course utility: how much does
    it help me to satisfy my needs?
  • 28:14 - 28:19
    And the motivational system is going
    to evaluate all these things by giving a signal.
  • 28:19 - 28:24
    And the first kind of signal
    is the possible reward if we are able to compute
  • 28:24 - 28:25
    the task.
  • 28:25 - 28:27
    And this is probably done by dopamine.
  • 28:27 - 28:30
    So we have a very small area in the brain,
    the substantia nigra,
  • 28:30 - 28:34
    and the ventral tegmental area,
    and they produce dopamine.
  • 28:34 - 28:38
    And this gets fed into the lateral frontal cortex
    and the frontal lobe,
  • 28:38 - 28:42
    which control attention,
    and tell you what things to do.
  • 28:42 - 28:46
    And if we have successfully done
    what we wanted to do,
  • 28:46 - 28:49
    we consume the rewards.
  • 28:49 - 28:52
    And we do this with another signal,
    which is serotonin.
  • 28:52 - 28:53
    It's also produced by the motivational system,
  • 28:53 - 28:56
    in this very small area, the raphe nuclei.
  • 28:56 - 28:59
    And it feeds into all the areas of the brain
    where learning is necessary.
  • 28:59 - 29:02
    A connection is strengthened
    once you get to the result.
  • 29:02 - 29:08
    These two substances are emitted
    by the motivational system.
  • 29:08 - 29:10
    The motivational system is a bunch of needs,
  • 29:10 - 29:12
    essentially regulated below the cortex.
  • 29:12 - 29:14
    They are not part of your mental representations.
  • 29:14 - 29:17
    They are part of something
    that is more primary than this.
  • 29:17 - 29:19
    This is what makes us go,
    this is what makes us human.
  • 29:19 - 29:22
    This is not our rationality, this is what we want.
  • 29:22 - 29:27
    And the needs are physiological,
    they are social, they are cognitive.
  • 29:27 - 29:29
    And you are pretty much born with them.
  • 29:29 - 29:30
    They cannot be totally adaptive,
  • 29:30 - 29:33
    because if they were fully adaptive,
    we wouldn't be doing anything.
  • 29:33 - 29:35
    The needs are resistive.
  • 29:35 - 29:38
    They are pushing us against the world.
  • 29:38 - 29:40
    If you didn't have all these needs,
  • 29:40 - 29:42
    if you didn't have this motivational system,
  • 29:42 - 29:44
    you would just be doing what's best for you,
  • 29:44 - 29:45
    which means collapse on the ground,
  • 29:45 - 29:49
    be a vegetable, rot, give in to gravity.
  • 29:49 - 29:50
    Instead you do all these unpleasant things:
  • 29:50 - 29:53
    you get up in the morning,
    you eat, you have sex,
  • 29:53 - 29:54
    you do all these crazy things.
  • 29:54 - 29:59
    And it's only because the
    motivational system forces you to.
  • 29:59 - 30:01
    The motivational system
    takes this bunch of matter,
  • 30:01 - 30:03
    and makes us do all these strange things,
  • 30:03 - 30:06
    just so genomes get replicated and so on.
  • 30:06 - 30:10
    And... so to do this, we are going to build
    resistance against the world.
  • 30:10 - 30:13
    And the motivational system
    is in a sense forcing us,
  • 30:13 - 30:15
    to do all these things by giving us needs,
  • 30:15 - 30:18
    and the needs have some kind
    of target value and current value.
  • 30:18 - 30:22
    If we have a differential
    between the target value and current value,
  • 30:22 - 30:25
    we perceive some urgency
    to do something about the need.
  • 30:25 - 30:27
    And when the current value
    approaches the target value
  • 30:27 - 30:29
    we get pleasure, which is a learning signal.
  • 30:29 - 30:31
    If it moves away from it
    we get a displeasure signal,
  • 30:31 - 30:32
    which is also a learning signal.
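
A minimal sketch of that mechanism in the spirit of Joscha's MicroPsi-style models; the class name, numbers and update rule are illustrative assumptions: a need has a target value and a current value, the gap yields urgency, and changes of the gap yield pleasure or displeasure signals that can drive learning.

```python
class Need:
    def __init__(self, name, target, current):
        self.name, self.target, self.current = name, target, current

    @property
    def urgency(self):
        # The bigger the gap between target value and current value,
        # the more urgent it feels to do something about this need.
        return abs(self.target - self.current)

    def update(self, new_current):
        # Pleasure if the gap shrinks, displeasure if it grows:
        # both are learning signals, not goals in themselves.
        old_gap = self.urgency
        self.current = new_current
        return old_gap - self.urgency   # >0 pleasure, <0 displeasure

food = Need("food", target=1.0, current=0.3)
print(food.urgency)       # ~0.7 -> strong urge to eat
print(food.update(0.9))   # ~0.6 -> pleasure signal, the gap shrank
```
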
  • 30:32 - 30:35
    And we can use this to structure
    our understanding of the world.
  • 30:35 - 30:37
    To understand what goals are and so on.
  • 30:37 - 30:40
    Goals are learned. Needs are not.
  • 30:40 - 30:43
    To learn we need success
    and failure in the world.
  • 30:43 - 30:46
    But to do things we need anticipated reward.
  • 30:46 - 30:48
    So it's dopamine that makes the brain go round.
  • 30:48 - 30:51
    Dopamine makes you do things.
  • 30:51 - 30:53
    But in order to do this in the right way,
  • 30:53 - 30:55
    you have to make sure
    that the cells cannot
  • 30:55 - 30:56
    produce dopamine themselves.
  • 30:56 - 30:59
    If they could do this, they could start
    to drive others to work for them.
  • 30:59 - 31:02
    You would get something like
    a bureaucracy in your neocortex,
  • 31:02 - 31:06
    where different bosses try
    to set up others to do their own bidding
  • 31:06 - 31:08
    and pit them against other groups in the neocortex.
  • 31:08 - 31:10
    It's going to be horrible.
  • 31:10 - 31:12
    So you want to have some kind of central authority,
  • 31:12 - 31:16
    that makes sure that the cells
    do not produce dopamine themselves.
  • 31:16 - 31:20
    It's only produced in a
    very small area and then given out,
  • 31:20 - 31:21
    and passed through the system.
  • 31:21 - 31:23
    And after you're done with it, it's going to be gone,
  • 31:23 - 31:26
    so there is no hoarding of the dopamine.
  • 31:26 - 31:30
    And in our society the role of dopamine
    is played by money.
  • 31:30 - 31:32
    Money is not reward in itself.
  • 31:32 - 31:36
    It's in some sense a way
    that you can trade against reward.
  • 31:36 - 31:37
    You cannot eat money.
  • 31:37 - 31:40
    You can take it later and get
    an arbitrary reward for it.
  • 31:40 - 31:45
    And in some sense money is the dopamine
    that makes organizations
  • 31:45 - 31:48
    and society, companies
    and many individuals do things.
  • 31:48 - 31:50
    They do stuff because of money.
  • 31:50 - 31:53
    But money, if you compare it to dopamine,
    is pretty broken,
  • 31:53 - 31:55
    because you can hoard it.
  • 31:55 - 31:57
    So you are going to have these
    cortical columns in the real world,
  • 31:57 - 32:00
    which are individual people
    or individual corporations.
  • 32:00 - 32:03
    They are hoarding the dopamine,
    they sit on this very big pile of dopamine.
  • 32:03 - 32:08
    They are starving the rest
    of society of the dopamine.
  • 32:08 - 32:11
    They don't give it away,
    and they can make it do their bidding.
  • 32:11 - 32:14
    So for instance they can pit a
    substantial part of society
  • 32:14 - 32:16
    against the understanding of global warming,
  • 32:16 - 32:20
    because they profit from global warming
    or from technology that leads to global warming,
  • 32:20 - 32:23
    which is very bad for all of us. applause
  • 32:23 - 32:29
    So our society is a nervous system
    that lies to itself.
  • 32:29 - 32:30
    How can we overcome this?
  • 32:30 - 32:32
    Actually, we don't know.
  • 32:32 - 32:35
    To do this we would need
    to have some kind of centralized,
  • 32:35 - 32:37
    top-down reward motivational system.
  • 32:37 - 32:39
    We have this for instance in the military,
  • 32:39 - 32:43
    you have this system of
    military rewards that you get.
  • 32:43 - 32:45
    And these are completely
    controlled from the top.
  • 32:45 - 32:47
    Also within working organizations
    you have this.
  • 32:47 - 32:50
    In corporations you have centralized rewards,
  • 32:50 - 32:52
    it's not like rewards flow bottom-up,
  • 32:52 - 32:55
    they always flow top-down.
  • 32:55 - 32:58
    And there was an attempt
    to model society in such a way.
  • 32:58 - 33:03
    That was in Chile in the early 1970s:
    the Allende government had the idea
  • 33:03 - 33:07
    to redesign society, or the economy
    in society, using cybernetics.
  • 33:07 - 33:13
    So Allende invited a bunch of cyberneticians
    to redesign the Chilean economy.
  • 33:13 - 33:15
    And this was meant to be the control room,
  • 33:15 - 33:17
    where Allende and his chief economists
    would be sitting,
  • 33:17 - 33:20
    to look at what the economy is doing.
  • 33:20 - 33:24
    We don't know how this would have worked out,
    but we know how it ended.
  • 33:24 - 33:27
    In 1973 there was this big putsch in Chile,
  • 33:27 - 33:30
    and this experiment ended among other things.
  • 33:30 - 33:34
    Maybe it would have worked, who knows?
    Nobody tried it.
  • 33:34 - 33:38
    So, there is something else
    going on in people,
  • 33:38 - 33:40
    beyond the motivational system.
  • 33:40 - 33:44
    That is: we have social criteria, for learning.
  • 33:44 - 33:48
    We also check whether our ideas
    are normatively acceptable.
  • 33:48 - 33:51
    And this is actually a good thing,
    because individuals may shortcut
  • 33:51 - 33:53
    the learning through communication.
  • 33:53 - 33:55
    Other people have learned stuff
    that we don't need to learn ourselves.
  • 33:55 - 34:00
    We can build on this, so we can accelerate
    learning by many orders of magnitude,
  • 34:00 - 34:01
    which makes culture possible.
  • 34:01 - 34:04
    And which makes almost anything possible,
    because if you were on your own
  • 34:04 - 34:07
    you would not find out
    very much in your lifetime.
  • 34:09 - 34:11
    You know how they say?
    Everything that you do,
  • 34:11 - 34:14
    you do by standing on the shoulders of giants.
  • 34:14 - 34:18
    Or on a big pile of dwarfs
    it works either way.
  • 34:18 - 34:27
    laughter applause
  • 34:27 - 34:30
    Social learning usually outperforms
    individual learning. You can test this.
  • 34:30 - 34:34
    But in the case of conflict
    between different social truths,
  • 34:34 - 34:37
    you need some way to decide who to believe.
  • 34:37 - 34:39
    So you have some kind of reputation
    estimate for different authorities,
  • 34:39 - 34:42
    and you use this to check whom you believe.
  • 34:42 - 34:46
    And the problem of course is that
    in existing society, in real society,
  • 34:46 - 34:48
    this reputation system is going
    to reflect power structures,
  • 34:48 - 34:52
    which may distort your beliefs systematically.
  • 34:52 - 34:55
    Social learning therefore leads groups
    to synchronize their opinions.
  • 34:55 - 34:57
    And the opinions take on another role.
  • 34:57 - 35:02
    They become an important part
    of signalling which group you belong to.
  • 35:02 - 35:07
    So opinions start to signal
    group loyalty in societies.
  • 35:07 - 35:11
    And people in this, and that's the actual world,
    should optimize not for getting the best possible
  • 35:11 - 35:13
    opinions in terms of truth.
  • 35:13 - 35:17
    They should optimize
    for having the best possible opinion,
  • 35:17 - 35:20
    with respect to agreement with their peers.
  • 35:20 - 35:22
    If you have the same opinion
    as your peers, you can signal them
  • 35:22 - 35:24
    that you are part of their ingroup,
    and they are going to like you.
  • 35:24 - 35:28
    If you don't do this, chances are
    they are not going to like you.
  • 35:28 - 35:34
    There is rarely any benefit in life to being
    in disagreement with your boss. Right?
  • 35:34 - 35:39
    So, if you evolve an opinion-forming system
    under these circumstances,
  • 35:39 - 35:41
    you should end up
    with an opinion-forming system
  • 35:41 - 35:43
    that leaves you with the most useful opinion,
  • 35:43 - 35:45
    which is the opinion in your environment.
  • 35:45 - 35:48
    And it turns out, most people are able
    to do this effortlessly.
  • 35:48 - 35:51
    laughter
  • 35:51 - 35:56
    They have an instinct that makes them adopt
    the dominant opinion in their social environment.
  • 35:56 - 35:57
    It's amazing, right?
  • 35:57 - 36:01
    And if you are a nerd like me,
    you don't get this.
  • 36:01 - 36:09
    laughing applause
  • 36:09 - 36:13
    So in the world out there,
    explanations piggyback on your group allegiance.
  • 36:13 - 36:16
    For instance you will find that there is a
    substantial group of people that believes
  • 36:16 - 36:18
    the minimum wage is good
    for the economy and for you
  • 36:18 - 36:21
    and another one that believes that it's bad.
  • 36:21 - 36:23
    And it's pretty much aligned
    with political parties.
  • 36:23 - 36:26
    It's not aligned with different
    understandings of the economy,
  • 36:26 - 36:31
    because nobody understands
    how the economy works.
  • 36:31 - 36:36
    And if you are a nerd you try to understand
    the world in terms of what is true and false.
  • 36:36 - 36:41
    You try to prove everything by putting it
    in some kind of true and false level
  • 36:41 - 36:44
    and if you are not a nerd
    you try to get to right and wrong
  • 36:44 - 36:46
    you try to understand
    whether you are in alignment
  • 36:46 - 36:50
    with what's objectively right
    in your society, right?
  • 36:50 - 36:56
    So I guess that nerds are people that have
    a defect in their opinion-forming system.
  • 36:56 - 36:57
    laughing
  • 36:57 - 37:01
    And usually that's maladaptive
    and under normal circumstances
  • 37:01 - 37:03
    nerds would mostly be filtered
    from the world,
  • 37:03 - 37:07
    because they don't reproduce so well,
    because people don't like them so much.
  • 37:07 - 37:08
    laughing
  • 37:08 - 37:11
    And then something very strange happened.
    The computer revolution came along and
  • 37:11 - 37:14
    suddenly, if you argue with a computer,
    it doesn't help you if you have the
  • 37:14 - 37:18
    normatively correct opinion; you need to
    be able to understand things in terms of
  • 37:18 - 37:26
    true and false, right? applause
  • 37:26 - 37:30
    So now we have this strange situation that
    the weird people that have these offensive,
  • 37:30 - 37:33
    strange opinions and that really don't
    mix well with the real normal people
  • 37:33 - 37:38
    get all these high-paying jobs,
    and we don't understand how that is happening.
  • 37:38 - 37:43
    And it's because suddenly
    our maladaptation is a benefit.
  • 37:43 - 37:47
    But out there, there is this world of
    social norms, and it's made of paper walls.
  • 37:47 - 37:50
    There are all these things that are true
    and false in a society that make
  • 37:50 - 37:52
    people behave.
  • 37:52 - 37:57
    It's like these Japanese walls there.
    They made palaces out of paper, basically.
  • 37:57 - 38:00
    And these are walls by convention.
  • 38:00 - 38:04
    They exist because people agree
    that this is a wall.
  • 38:04 - 38:07
    And if you are a hypnotist
    like Donald Trump
  • 38:07 - 38:11
    you can see that these are paper walls
    and you can shift them.
  • 38:11 - 38:14
    And if you are a nerd like me,
    you cannot see these paper walls.
  • 38:14 - 38:20
    If you pay close attention, you see that
    people move and then suddenly, midair,
  • 38:20 - 38:23
    they make a turn. Why would they do this?
  • 38:23 - 38:24
    There must be something
    that they see there
  • 38:24 - 38:27
    and this is basically a normative agreement.
  • 38:27 - 38:30
    And you can infer what this is
    and then you can manipulate it and understand it.
  • 38:30 - 38:33
    Of course you can't fully fix this; you can
    debug yourself in this regard,
  • 38:33 - 38:35
    but it's something that is hard
    for nerds to see.
  • 38:35 - 38:38
    So in some sense they have a superpower:
    they can think straight in the presence
  • 38:38 - 38:39
    of others.
  • 38:39 - 38:43
    But often they end up in their living room
    and people are upset.
  • 38:43 - 38:46
    laughter
  • 38:46 - 38:50
    Learning in a complex domain can not
    guarantee that you find the global maximum.
  • 38:50 - 38:54
    We know that we cannot find truth,
    because we cannot recognize whether we live
  • 38:54 - 38:57
    on a playing field or on a
    simulated playing field.
  • 38:57 - 39:01
    But what we can do is, we can try to
    approach a global maximum.
  • 39:01 - 39:02
    But we don't know if that
    is the global maximum.
  • 39:02 - 39:06
    We will always move along
    some kind of belief gradient.
  • 39:06 - 39:09
    We will take certain elements of
    our belief and then give them up
  • 39:09 - 39:13
    for new elements of belief, based on
    thinking that this new element
  • 39:13 - 39:15
    of belief is better than the one
    we give up.
  • 39:15 - 39:17
    So we always move along
    some kind of gradient.
  • 39:17 - 39:20
    And the truth does not matter,
    the gradient matters.
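A minimal sketch of this "moving along a belief gradient" picture, purely as an illustration of the metaphor above: the one-dimensional belief landscape, its shape, and the step size are invented for the example, not anything claimed in the talk. Greedy hill-climbing only accepts a step that looks locally better, so where you end up depends on where you start, and nothing guarantees you reach the global maximum.

```python
import random

def usefulness(belief):
    # Invented belief landscape with two peaks of different height:
    # there is a global maximum, but also a lower local maximum.
    return -(belief ** 4) + 8 * (belief ** 2) + belief

def climb_gradient(belief, steps=2000, step_size=0.05):
    """Greedy hill-climbing: accept a nearby belief only if it looks
    more useful than the current one. Truth never enters the update,
    only the local gradient of perceived usefulness."""
    for _ in range(steps):
        candidate = belief + random.uniform(-step_size, step_size)
        if usefulness(candidate) > usefulness(belief):
            belief = candidate
    return belief

# Different starting beliefs settle on different peaks.
for start in (-3.0, 3.0):
    print(start, "->", round(climb_gradient(start), 2))
```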
  • 39:20 - 39:24
    If you think about teaching for a moment,
    when I started teaching I often thought:
  • 39:24 - 39:27
    Okay, I understand the truth of the
    subject, the students don't, so I have to
  • 39:27 - 39:30
    give this to them
    and at some point I realized:
  • 39:30 - 39:33
    Oh, I changed my mind so many times
    in the past and I'm probably not going to
  • 39:33 - 39:36
    stop changing it in the future.
  • 39:36 - 39:39
    I'm always moving along a gradient
    and I keep moving along a gradient.
  • 39:39 - 39:43
    So I'm not moving to truth,
    I'm moving forward.
  • 39:43 - 39:45
    And when we teach our kids
    we should probably not think about
  • 39:45 - 39:46
    how to give them truth.
  • 39:46 - 39:51
    We should think about how to put them onto
    an interesting gradient, that makes them
  • 39:51 - 39:55
    explore the world,
    the world of possible beliefs.
  • 39:55 - 40:03
    applause
  • 40:03 - 40:05
    And these possible beliefs
    lead us into local minima.
  • 40:05 - 40:08
    This is inevitable. These are like valleys,
    and sometimes these valleys are
  • 40:08 - 40:11
    neighbouring and we don't understand
    what the people in the neighbouring
  • 40:11 - 40:16
    valley are doing, unless we are willing to
    retrace the steps they have taken.
  • 40:16 - 40:20
    And if we want to get from one valley
    into the next, we will have to have some kind
  • 40:20 - 40:22
    of energy that moves us over the hill.
  • 40:22 - 40:28
    We have to have a trajectory where every
    step works by finding a reason to give up
  • 40:28 - 40:30
    a bit of our current belief and adopt a
    new belief, because it's somehow
  • 40:30 - 40:35
    more useful, more relevant,
    more consistent and so on.
  • 40:35 - 40:38
    Now the problem is that this is not
    monotonic: we cannot guarantee that
  • 40:38 - 40:40
    we're always climbing, because
    the problem is that
  • 40:40 - 40:45
    the beliefs themselves can change
    our evaluation of the beliefs.
  • 40:45 - 40:50
    It could be for instance that you start
    believing in a religion and this religion
  • 40:50 - 40:54
    could tell you: If you give up the belief
    in the religion, you're going to face
  • 40:54 - 40:56
    eternal damnation in hell.
  • 40:56 - 40:59
    As long as you believe in the religion,
    it's going to be very expensive for you
  • 40:59 - 41:02
    to give up the religion, right?
    If you truly believe in it.
  • 41:02 - 41:05
    You're now caught
    in some kind of attractor.
  • 41:05 - 41:09
    Before you believe in the religion, it is not
    very dangerous, but once you've gotten
  • 41:09 - 41:13
    into the attractor it's very,
    very hard to get out.
  • 41:13 - 41:16
    So these belief attractors
    are actually quite dangerous.
  • 41:16 - 41:20
    You can not only get into chaotic behaviour,
    where you cannot guarantee that your
  • 41:20 - 41:23
    current belief is better than the last one
    but you can also get into beliefs that are
  • 41:23 - 41:27
    almost impossible to change.
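A toy sketch of such a belief attractor, again only to illustrate the metaphor; the beliefs, numbers, and the "damnation penalty" below are invented for the example. The point is that the evaluation of a candidate belief happens from inside the belief you currently hold, so a switch that looked harmless before adoption becomes prohibitively expensive afterwards.

```python
def perceived_value(candidate, current):
    """Value of a candidate belief as judged from inside the current one.
    Belief "R" (the religion) attaches a huge believed cost to leaving it,
    but that cost is only visible once you already hold "R"."""
    base = {"R": 1.3, "secular": 1.2}[candidate]  # invented base utilities
    if current == "R" and candidate != "R":
        base -= 1000.0  # "eternal damnation", seen only from inside R
    return base

def will_switch(current, candidate):
    return perceived_value(candidate, current) > perceived_value(current, current)

print(will_switch("secular", "R"))  # True: adopting looks like a small improvement
print(will_switch("R", "secular"))  # False: once inside, leaving looks catastrophic
```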
  • 41:27 - 41:34
    And that makes it possible to program
    people to work in societies.
  • 41:34 - 41:38
    Social domains are structured by values.
    Basically a preference is what makes you
  • 41:38 - 41:41
    do things, because you anticipate
    pleasure or displeasure,
  • 41:41 - 41:45
    and values make you do things
    even if you don't anticipate any pleasure.
  • 41:45 - 41:50
    These are virtual rewards.
    They make us do things, because we believe
  • 41:50 - 41:52
    that there is stuff
    that is more important than us.
  • 41:52 - 41:55
    This is what values are about.
  • 41:55 - 42:01
    And these values are the source
    of what we would call true meaning, deeper meaning.
  • 42:01 - 42:05
    There is something that is more important
    than us, something that we can serve.
  • 42:05 - 42:09
    This is what we usually perceive as a
    meaningful life: it is one which
  • 42:09 - 42:13
    is in the service of values that are more
    important than I myself,
  • 42:13 - 42:16
    because after all I'm not that important.
    I'm just this machine that runs around
  • 42:16 - 42:21
    and tries to optimize its pleasure and
    pain, which is kinda boring.
  • 42:21 - 42:26
    So my PI has puzzled me, my principal
    investigator at the Harvard department,
  • 42:26 - 42:29
    where I have my desk, Martin Nowak.
  • 42:29 - 42:34
    He said that meaning cannot exist without
    god; you are either religious,
  • 42:34 - 42:37
    or you are a nihilist.
  • 42:37 - 42:43
    And this guy is the head of the
    department for evolutionary dynamics.
  • 42:43 - 42:46
    Also, he is a Catholic. chuckling
  • 42:46 - 42:50
    So this really puzzled me and I tried
    to understand what he meant by this.
  • 42:50 - 42:53
    Typically if you are a good atheist
    like me,
  • 42:53 - 42:58
    you tend to attack gods that are
    structured like this, religious gods,
  • 42:58 - 43:03
    that are institutional, they are personal,
    they are some kind of person.
  • 43:03 - 43:08
    They do care about you, they prescribe
    norms, for instance: don't masturbate,
  • 43:08 - 43:10
    it's bad for you.
  • 43:10 - 43:15
    Many of these norms are very much aligned
    with societal institutions, for instance:
  • 43:15 - 43:21
    don't question the authorities,
    god wants them to be ruling above you,
  • 43:21 - 43:24
    and be monogamous and so on and so on.
  • 43:24 - 43:29
    So they prescribe norms that do not make
    a lot of sense in terms of a being that
  • 43:29 - 43:31
    creates worlds every now and then,
  • 43:31 - 43:35
    but they make sense in terms of
    what you should be doing to be a
  • 43:35 - 43:37
    functioning member of society.
  • 43:37 - 43:41
    And this god also does things: it
    creates worlds, it likes to manifest as
  • 43:41 - 43:44
    burning shrubbery and so on. There are
    many books that describe the things that
  • 43:44 - 43:46
    these gods have allegedly done.
  • 43:46 - 43:49
    And it's very hard to test for all these
    features, which makes these gods very
  • 43:49 - 43:54
    improbable for us, and makes atheists
    very dissatisfied with these gods.
  • 43:54 - 43:57
    But then there is a different kind of god.
  • 43:57 - 43:59
    This is what we call the spiritual god.
  • 43:59 - 44:02
    This spiritual god is independent of
    institutions, it still does care about you.
  • 44:02 - 44:06
    It's probably conscious. It might not be a
    person. There are not that many stories,
  • 44:06 - 44:11
    that you can consistently tell about it,
    but you might be able to connect to it
  • 44:11 - 44:15
    spiritually.
  • 44:15 - 44:19
    Then there is a god that is even less
    expensive. That is god as a transcendental
  • 44:19 - 44:23
    principle, and this god is simply the reason
    why there is something rather than
  • 44:23 - 44:28
    nothing. This god is the question the
    universe is the answer to, this is the
  • 44:28 - 44:30
    thing that gives meaning.
  • 44:30 - 44:31
    Everything else about it is unknowable.
  • 44:31 - 44:34
    This is the god of Thomas Aquinas.
  • 44:34 - 44:38
    The god that Thomas Aquinas discovered
    is not the god of Abraham; this is not the
  • 44:38 - 44:39
    religious god.
  • 44:39 - 44:44
    It's a god that is basically a principle
    that brings the universe into existence.
  • 44:44 - 44:47
    It's the one that gives
    the universe its purpose.
  • 44:47 - 44:50
    And because every other property
    is unknowable about this,
  • 44:50 - 44:52
    this god is not that expensive.
  • 44:52 - 44:56
    Unfortunately, it doesn't really work.
    I mean, Thomas Aquinas tried to prove
  • 44:56 - 45:00
    god. He tried to prove a necessary god,
    a god that has to exist, and
  • 45:00 - 45:03
    I think we can only prove a possible god.
  • 45:03 - 45:05
    So if you try to prove a necessary god,
    this god can not exist.
  • 45:05 - 45:12
    Which means your proof of god is going to
    fail. You can only prove possible gods.
  • 45:12 - 45:13
    And then there is an even more improper god.
  • 45:13 - 45:16
    And that's the god of Aristotle and he said:
  • 45:16 - 45:20
    "If there is change in the universe,
    something in going to have to change it."
  • 45:20 - 45:24
    There must be something that moves it
    along from one state to the next.
  • 45:24 - 45:26
    So I would say that is the primary
    computational transition function
  • 45:26 - 45:35
    of the universe.
    laughing applause
  • 45:35 - 45:38
    And Aristotle discovered it.
    It's amazing isn't it?
  • 45:38 - 45:42
    We have to have this because we
    can not be conscious in a single state.
  • 45:42 - 45:43
    We need to move between states
    to be conscious.
  • 45:43 - 45:46
    We need to be processes.
  • 45:46 - 45:51
    So we can take our gods and sort them by
    their metaphysical cost.
  • 45:51 - 45:53
    The 1st degree god would be the first mover.
  • 45:53 - 45:56
    The 2nd degree god is the god of purpose and meaning.
  • 45:56 - 45:59
    3rd degree god is the spiritual god.
    And the 4th degree god is this bound to
  • 45:59 - 46:01
    religious institutions, right?
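As a compact restatement of this ordering (the annotations are my paraphrase of the talk, sorted from the cheapest to the most expensive god in terms of metaphysical commitments):

```python
# Degrees of god by increasing metaphysical cost, paraphrasing the talk.
GODS_BY_METAPHYSICAL_COST = [
    (1, "first mover", "the primary computational transition function of the universe"),
    (2, "purpose and meaning", "the reason why there is something rather than nothing"),
    (3, "spiritual god", "independent of institutions, cares about you, probably conscious"),
    (4, "religious god", "institutional, personal, prescribes norms, needs a belief attractor"),
]
```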
  • 46:01 - 46:04
    So if you take this statement
    from Martin Nowak,
  • 46:04 - 46:08
    "You can not have meaning without god!"
    I would say: yes! You need at least
  • 46:08 - 46:15
    a 2nd degree god to have meaning.
    So objective meaning can only exist
  • 46:15 - 46:19
    with a 2nd degree god. chuckling
  • 46:19 - 46:22
    And subjective meaning can exist as a
    function in a cognitive system of course.
  • 46:22 - 46:24
    We don't need objective meaning.
  • 46:24 - 46:27
    So we can subjectively feel that there is
    something that is more important than us,
  • 46:27 - 46:31
    and this makes us work in society and
    makes us perceive that we have values
  • 46:31 - 46:34
    and so on, but we don't need to believe
    that there is something outside of the
  • 46:34 - 46:37
    universe to have this.
  • 46:37 - 46:41
    So the 4th degree god is the one
    that is bound to religious institutions,
  • 46:41 - 46:45
    it requires a belief attractor and it
    enables complex norm prescriptions.
  • 46:45 - 46:48
    If my theory is right, then it should be
    much harder for nerds to believe in
  • 46:48 - 46:52
    a 4th degree god than for normal people.
  • 46:52 - 46:56
    And what this god does is allow you to
    have state-building mind viruses.
  • 46:56 - 47:00
    Basically religion is a mind virus. And
    the amazing thing about these mind viruses
  • 47:00 - 47:02
    is that they structure behaviour
    in large groups.
  • 47:02 - 47:06
    We have evolved to live in small groups
    of a few hundred individuals, maybe something
  • 47:06 - 47:07
    like 150.
  • 47:07 - 47:10
    This is roughly the level
    to which reputation works.
  • 47:10 - 47:15
    We can keep track of about 150 people, and
    after this it gets much, much worse.
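A back-of-envelope illustration of why reputation stops scaling, which is my own addition rather than a claim from the talk: if reputation means keeping rough track of how people treat each other, the number of pairwise relationships grows quadratically with group size, so a group only a few times larger than 150 is already far beyond what any individual can track.

```python
def pairwise_relationships(n):
    # Number of distinct pairs in a group of n people: n choose 2.
    return n * (n - 1) // 2

for n in (150, 1_500, 150_000):
    print(f"{n:>7} people -> {pairwise_relationships(n):,} relationships to keep track of")
```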
  • 47:15 - 47:18
    So in this system where you have
    reputation, people feel responsible
  • 47:18 - 47:21
    for each other and they can
    keep track of their doings
  • 47:21 - 47:23
    and society kind of sort of works.
  • 47:23 - 47:28
    If you want to go beyond this, you have
    to write software that controls people.
  • 47:28 - 47:32
    And religions were the first software,
    that did this on a very large scale.
  • 47:32 - 47:35
    And in order to keep stable they had to be
    designed like operating systems
  • 47:35 - 47:36
    in some sense.
  • 47:36 - 47:40
    They give people different roles
    like insects in a hive.
  • 47:40 - 47:45
    And part of these roles is even
    to update the religion, but it has to be
  • 47:45 - 47:48
    done very carefully and centrally
    because otherwise the religion will split apart
  • 47:48 - 47:52
    and fall together into new religions
    or be overcome by new ones.
  • 47:52 - 47:54
    So there is some kind of
    evolutionary dynamics that goes on
  • 47:54 - 47:56
    with respect to religion.
  • 47:56 - 47:59
    And if you look at the religions,
    there is actually a veritable evolution
  • 47:59 - 48:00
    of religions.
  • 48:00 - 48:05
    So we have this Israelite tradition and
    the Mesopotamian mythology that gave rise
  • 48:05 - 48:13
    to Judaism. applause
  • 48:13 - 48:16
    It's kind of cool, right? laughing
  • 48:16 - 48:36
    Also history totally repeats itself.
    roaring laughter applause
  • 48:36 - 48:42
    Yeah, it totally blew my mind when
    I discovered this. laughter
  • 48:42 - 48:45
    Of course the real tree of programming
    languages is slightly more complicated.
  • 48:45 - 48:49
    And the real tree of religion is slightly
    more complicated.
  • 48:49 - 48:51
    But still, it's neat.
  • 48:51 - 48:54
    So if you want to immunize yourself
    against mind viruses,
  • 48:54 - 48:59
    first of all you want to check yourself
    whether you are infected.
  • 48:59 - 49:03
    You should check: Can I let go of my
    current beliefs without feeling that
  • 49:03 - 49:08
    meaning departs from me, and without feeling very
    terrible when I let go of my beliefs?
  • 49:08 - 49:11
    Also you should check: all the other
    people out there that don't
  • 49:11 - 49:17
    share my belief, are they either stupid,
    or crazy, or evil?
  • 49:17 - 49:20
    If you think this, chances are you are
    infected by some kind of mind virus,
  • 49:20 - 49:24
    because they are just part
    of the out group.
  • 49:24 - 49:28
    And does your god have properties that
    you know about but did not observe?
  • 49:28 - 49:32
    So basically you have a god
    of 2nd or 3rd degree or higher.
  • 49:32 - 49:35
    In this case you probably also have a mind virus.
  • 49:35 - 49:37
    There is nothing wrong
    with having a mind virus,
  • 49:37 - 49:40
    but if you want to immunize yourself
    against this, people have invented
  • 49:40 - 49:44
    rationalism and enlightenment,
    basically to act as immunization against
  • 49:44 - 49:51
    mind viruses.
    loud applause
  • 49:51 - 49:54
    And in some sense it's what the mind does
    by itself, because if you want to
  • 49:54 - 49:57
    understand how you go wrong,
    you need to have a mechanism
  • 49:57 - 49:59
    that discovers who you are.
  • 49:59 - 50:03
    Some kind of auto debugging mechanism,
    that makes the mind aware of itself.
  • 50:03 - 50:05
    And this is actually the self.
  • 50:05 - 50:08
    So according to Robert Kegan,
    "the development of our self is a process
  • 50:08 - 50:13
    in which we learn who we are by making
    things explicit", by making processes that
  • 50:13 - 50:17
    are automatic visible to us and by
    conceptualizing them so we no longer
  • 50:17 - 50:19
    identify with them.
  • 50:19 - 50:22
    And it starts out with understanding
    that there is only pleasure and pain.
  • 50:22 - 50:25
    If you are a baby, you have only
    pleasure and pain, and you identify with this.
  • 50:25 - 50:28
    And then you turn into a toddler and the
    toddler understands that they are not
  • 50:28 - 50:31
    their pleasure and pain
    but they are their impulses.
  • 50:31 - 50:34
    And in the next level if you grow beyond
    the toddler age you actually know that
  • 50:34 - 50:39
    you have goals and that your needs and
    impulses are there to serve goals, but it's
  • 50:39 - 50:40
    very difficult to let go of the goals,
  • 50:40 - 50:43
    if you are a very young child.
  • 50:43 - 50:46
    And at some point you realize: Oh, the
    goals don't really matter, because
  • 50:46 - 50:50
    sometimes you cannot reach them, but
    we have preferences, we have things that we
  • 50:50 - 50:53
    want to happen and things that we do not
    want to happen. And then at some point
  • 50:53 - 50:56
    we realize that other people have
    preferences, too.
  • 50:56 - 50:59
    And then we start to model the world
    as a system where different people have
  • 50:59 - 51:02
    different preferences and we have
    to navigate this landscape.
  • 51:02 - 51:06
    And then we realize that these preferences
    also relate to values and we start
  • 51:06 - 51:10
    to identify with these values as members of
    society.
  • 51:10 - 51:13
    And this is basically the stage that you
    get into if you are an adult.
  • 51:13 - 51:17
    And you can get to a stage beyond that,
    especially if you have people around you who
  • 51:17 - 51:20
    have already done this. And this means
    that you understand that people have
  • 51:20 - 51:24
    different values and what they do
    naturally flows out of them.
  • 51:24 - 51:27
    And these values are not necessarily worse
    than yours; they are just different.
  • 51:27 - 51:29
    And you learn that you can hold different
    sets of values in your mind at
  • 51:29 - 51:33
    the same time, isn't that amazing,
    and understand other people, even if
  • 51:33 - 51:37
    they are not part of your group.
    If you get that, this is really good.
  • 51:37 - 51:39
    But I don't think it stops there.
  • 51:39 - 51:43
    You can also learn that the stuff that
    you perceive is kind of incidental,
  • 51:43 - 51:45
    that you can turn it off and you can
    manipulate it.
  • 51:45 - 51:50
    And at some point you can also realize
    that your self is only incidental, that you
  • 51:50 - 51:53
    can manipulate it or turn it off.
    And that you're basically some kind of
  • 51:53 - 51:57
    consciousness that happens to run on the brain
    of some kind of person that navigates
  • 51:57 - 52:04
    the world to get rewards or avoid
    displeasure and serve values and so on,
  • 52:04 - 52:05
    but it doesn't really matter.
  • 52:05 - 52:08
    There is just this consciousness which
    understands the world.
  • 52:08 - 52:11
    And this is the stage that we typically
    call enlightenment.
  • 52:11 - 52:15
    In this stage you realize that you are not
    your brain, but you are a story that
  • 52:15 - 52:26
    your brain tells itself.
    applause
  • 52:26 - 52:30
    So becoming self-aware is a process of
    reverse-engineering your mind.
  • 52:30 - 52:33
    It's a different set of stages in which
    you realize what goes on.
  • 52:33 - 52:34
    So isn't that amazing:
  • 52:34 - 52:39
    AI is a way to get to more self-awareness?
  • 52:39 - 52:41
    I think that is a good point to stop here.
  • 52:41 - 52:44
    The first talk that I gave in this series
    was 2 years ago. It was about
  • 52:44 - 52:46
    how to build a mind.
  • 52:46 - 52:50
    Last year I talked about how to get from
    basic computation to consciousness.
  • 52:50 - 52:54
    And this year we have talked about
    finding meaning using AI.
  • 52:54 - 52:57
    I wonder where it goes next.
    laughter
  • 52:57 - 53:23
    applause
  • 53:23 - 53:26
    Herald: Thank you for this amazing talk!
    We now have some minutes for Q&A.
  • 53:26 - 53:31
    So please line up at the microphones as
    always. If you are unable to stand up
  • 53:31 - 53:36
    for some reason, please very, very visibly
    raise your hand; we should be able to dispatch
  • 53:36 - 53:40
    an audio angel to your location
    so you can have a question too.
  • 53:40 - 53:44
    And also, if you are locationally
    disabled, meaning you are not actually in the room
  • 53:44 - 53:49
    but on the stream, you can use IRC
    or Twitter to also ask questions.
  • 53:49 - 53:51
    We also have a person for that.
  • 53:51 - 53:54
    We will start at microphone number 2.
  • 53:54 - 54:00
    Q: Wow that's me. Just a guess! What
    would you guess, when can you discuss
  • 54:00 - 54:05
    your talk with a machine,
    in how many years?
  • 54:05 - 54:07
    Joscha: I don't know! As a software
    engineer I know if I don't have the
  • 54:07 - 54:13
    specification all bets are off, until I
    have the implementation. laughter
  • 54:13 - 54:15
    So it can be of any order of magnitude.
  • 54:15 - 54:18
    I have a gut feeling but I also know as a
    software engineer that my gut feeling is
  • 54:18 - 54:23
    usually wrong, laughter
    until I have the specification.
  • 54:23 - 54:28
    So the question is whether there are silver
    bullets. Right now there are some things
  • 54:28 - 54:31
    that are not solved yet and it could be
    that they are easier to solve
  • 54:31 - 54:33
    than we think, but it could be that
    they're harder to solve than we think.
  • 54:33 - 54:37
    Before I stumbled on this cortical
    self-organization thing,
  • 54:37 - 54:41
    I thought it's going to be something like
    maybe 60, 80 years and now I think it's
  • 54:41 - 54:47
    way less, but again this is a very
    subjective perspective. I don't know.
  • 54:47 - 54:49
    Herald: Number 1, please!
  • 54:49 - 54:56
    Q: Yes, I wanted to ask a little bit about
    metacognition. It seems that you kind of
  • 54:56 - 55:01
    end your story saying that it's still
    reflecting on input that you get and
  • 55:01 - 55:05
    kind of working with your social norms
    and this and that, but Kohlberg
  • 55:05 - 55:12
    for instance talks about what he calls a
    postconventional universal morality
  • 55:12 - 55:17
    for instance, which is thinking about
    moral laws without context, basically
  • 55:17 - 55:23
    stating that there is something beyond the
    relative norm that we have to each other,
  • 55:23 - 55:30
    which would only be possible if you can do
    kind of, you know, meta cognition,
  • 55:30 - 55:33
    thinking about your own thinking
    and then modifying that thinking.
  • 55:33 - 55:37
    So kind of feeding back your own ideas
    into your own mind and coming up with
  • 55:37 - 55:44
    stuff that you actually can't get by...
    well, processing external inputs.
  • 55:44 - 55:48
    Joscha: Mhm! I think it's very tricky.
    This project of defining morality without
  • 55:48 - 55:53
    societies has existed for longer than Kant, of
    course. And Kant tried to give these
  • 55:53 - 55:57
    internal rules, and others tried too.
    I find this very difficult.
  • 55:57 - 56:01
    From my perspective we are just moving
    bits of rock. And these bits of rock
  • 56:01 - 56:08
    are on some kind of dust mote in a galaxy
    out of trillions of galaxies, and how can
  • 56:08 - 56:09
    there be meaning?
  • 56:09 - 56:11
    It's very hard for me to say:
  • 56:11 - 56:14
    One chimpanzee species is better than
    another chimpanzee species or
  • 56:14 - 56:17
    a particular monkey
    is better than another monkey.
  • 56:17 - 56:19
    This only happens
    within a certain framework
  • 56:19 - 56:20
    and we have to set this framework.
  • 56:20 - 56:24
    And I don't think that we can define this
    framework outside of a context of
  • 56:24 - 56:26
    social norms, that we have to agree on.
  • 56:26 - 56:30
    So objectively I'm not sure
    if we can get to ethics.
  • 56:30 - 56:34
    I only think that is possible based on
    some kind of framework that people
  • 56:34 - 56:38
    have to agree on implicitly or explicitly.
  • 56:38 - 56:41
    Herald: Microphone number 4, please.
  • 56:41 - 56:47
    Q: Hi, thank you, it was a fascinating talk.
    I have 2 thought that went through my mind.
  • 56:47 - 56:52
    And the first one is that the models
    that you present are so convincing,
  • 56:47 - 56:52
    but it's kind of like you present
    another metaphor for understanding the
  • 56:57 - 57:02
    brain which is still something that we try
    to grasp on different levels of science
  • 57:02 - 57:07
    basically. And the 2nd one is that your
    definition of the nerd who walks
  • 57:07 - 57:11
    and doesn't see the walls is kind of a
    definition... or reminds me of
  • 57:11 - 57:15
    Richard Rorty's definition of the ironist,
    which is a person who knows that their
  • 57:15 - 57:21
    vocabulary is finite and that other people
    also have a finite vocabulary, and
  • 57:21 - 57:25
    then that obviously opens up the whole question
    of meaning making which has been
  • 57:25 - 57:29
    discussed in so many
    other disciplines and fields.
  • 57:29 - 57:33
    And I thought about Derrida's
    deconstruction of ideas and thoughts and
  • 57:33 - 57:36
    Butler and then down the rabbit hole to
    Nietzsche and I was just wondering,
  • 57:36 - 57:39
    if you could maybe
    map out other connections
  • 57:39 - 57:44
    where it's basically not AI helping us to
    understand the mind, but where
  • 57:44 - 57:50
    already existing huge, huge fields of
    science, like cognitive process
  • 57:50 - 57:53
    coming from the other end could help us
    to understand AI.
  • 57:53 - 58:00
    Joscha: Thank you. The tradition that you
    mentioned, Rorty and Butler and so on,
  • 58:00 - 58:03
    is part of a completely different belief
    attractor from my current perspective.
  • 58:03 - 58:06
    That is they are mostly
    social constructionists.
  • 58:06 - 58:11
    They believe that reality, at least in the
    domains of the mind and sociality,
  • 58:11 - 58:15
    is a social construct; it is part
    of social agreement.
  • 58:15 - 58:17
    Personally I don't think that
    this is the case.
  • 58:17 - 58:20
    I think that patterns that we refer to
  • 58:20 - 58:24
    are mostly independent of our minds.
    The norms are part of social constructs,
  • 58:24 - 58:28
    but for instance our motivational
    preferences that make us adopt or
  • 58:28 - 58:33
    reject norms, are something that builds up
    resistance to the environment.
  • 58:33 - 58:36
    So they are probably not part
    of social agreement.
  • 58:36 - 58:42
    And the only thing I can invite you to do is
    to try to retrace both of the different
  • 58:42 - 58:46
    belief attractors, try to retrace the
    different paths on the landscape.
  • 58:46 - 58:49
    All these things that I tell you, all of
    this is of course very speculative.
  • 58:49 - 58:52
    These are things that seem to be logical
    to me at this point in my life.
  • 58:52 - 58:55
    And I try to give you the arguments
    why I think that is plausible, but don't
  • 58:55 - 58:59
    believe in them, question them, challenge
    them, see if they work for you!
  • 58:59 - 59:01
    I'm not giving you any truth.
  • 59:01 - 59:06
    I'm just going to give you suitable encodings
    according to my current perspective.
  • 59:06 - 59:12
    Q: Thank you!
    applause
  • 59:12 - 59:15
    Herald: The internet, please!
  • 59:19 - 59:26
    Signal angel: So, someone is asking,
    in this belief space you're talking about,
  • 59:26 - 59:30
    how is it possible
    to get out of local minima?
  • 59:30 - 59:34
    And a very related question as well:
    should we teach some momentum method
    Should we teach some momentum method
    to our children,
  • 59:39 - 59:42
    so we don't get stuck in local minima?
  • 59:42 - 59:45
    Joscha: I believe at some level it's not
    possible to get out of a local minimum
  • 59:45 - 59:50
    in an absolute sense, because you only get
    into some kind of meta minimum,
  • 59:50 - 59:57
    but what you can do is to retrace the
    path that you took whenever you discover
  • 59:57 - 60:00
    that somebody else has a fundamentally
    different set of beliefs.
  • 60:00 - 60:03
    And if you realize that this person is
    basically a smart person that is not
  • 60:03 - 60:07
    completely insane but has reasons to
    believe in their beliefs and they seem to
  • 60:07 - 60:11
    be internally consistent, it's usually
    worth retracing what they
  • 60:11 - 60:12
    have been thinking and why.
  • 60:12 - 60:16
    And this means you have to understand
    where their starting point was and
  • 60:16 - 60:18
    how they moved from their starting point
    to their current point.
  • 60:18 - 60:22
    You might not be able to do this
    accurately and the important thing is
  • 60:22 - 60:25
    also that after you discover a second
    valley, you still haven't discovered
  • 60:25 - 60:27
    the landscape in between.
  • 60:27 - 60:31
    But the only way that we can get an idea
    of the lay of the land is that we try to
  • 60:31 - 60:33
    retrace as many paths as possible.
  • 60:33 - 60:36
    And if we try to teach our children, what
    I think we should be doing is
  • 60:36 - 60:39
    to tell them how to explore
    this world on their own.
  • 60:39 - 60:44
    It's not that we tell them this is the
    valley, basically it's given, it's
  • 60:44 - 60:48
    the truth, but instead we have to tell
    them: This is the path that we took.
  • 60:48 - 60:51
    And these are the things that we saw
    in between, and it is important not to be
  • 60:51 - 60:54
    completely naive when we go into this
    landscape, but we also have to understand
  • 60:54 - 60:58
    that it's always an exploration that
    never stops and that might change
  • 60:58 - 61:01
    everything that you believe now
    at a later point.
  • 61:01 - 61:06
    So for me it's about teaching my own
    children how to be explorers,
  • 61:06 - 61:11
    how to understand that knowledge is always
    changing and it's always a moving frontier.
  • 61:11 - 61:17
    applause
  • 61:17 - 61:22
    Herald: We are unfortunately out of time.
    So, please once again thank Joscha!
  • 61:22 - 61:24
    applause
    Joscha: Thank you!
  • 61:24 - 61:28
    applause
  • 61:28 - 61:33
    postroll music
  • 61:33 - 61:40
    subtitles created by c3subtitles.de
    Join, and help us!