0:00:00.000,0:00:09.044
preroll music
0:00:09.044,0:00:14.049
Herald: Our next talk is going to be about AI and[br]it's going to be about proper AI.
0:00:14.049,0:00:17.730
It's not going to be about[br]deep learning or buzz word bingo.
0:00:17.730,0:00:22.590
It's going to be about actual psychology.[br]It's going to be about computational metapsychology.
0:00:22.590,0:00:25.750
And now please welcome Joscha!
0:00:25.750,0:00:33.050
applause
0:00:33.050,0:00:35.620
Joscha: Thank you.
0:00:35.620,0:00:37.710
I'm interested in understanding[br]how the mind works,
0:00:37.710,0:00:42.640
and I believe that the most foolproof perspective[br]for looking at minds is to understand
0:00:42.640,0:00:46.600
that they are systems that, when you show patterns[br]to them, find meaning.
0:00:46.600,0:00:51.700
And you find meaning in those in very particular[br]ways and this is what makes us who we are.
0:00:51.700,0:00:55.239
So the way to study and understand who we[br]are, in my understanding, is
0:00:55.239,0:01:01.149
to build models of information processing[br]that constitutes our minds.
0:01:01.149,0:01:05.640
Last year, at about the same time, I answered[br]the four big questions of philosophy:
0:01:05.640,0:01:08.510
"What's the nature of reality?", "What can[br]be known?", "Who are we?",
0:01:08.510,0:01:14.650
"What should we do?"[br]So now, how can I top this?
0:01:14.650,0:01:18.720
applause
0:01:18.720,0:01:22.849
I'm going to give you the drama[br]that divided a planet.
0:01:22.849,0:01:26.470
One of the very, very big events[br]that happened in the course of the last year,
0:01:26.470,0:01:30.080
so I couldn't tell you about it before.
0:01:30.080,0:01:38.489
What color is the dress?[br]laughs, applause
0:01:38.489,0:01:44.720
I mean, if you do not have any[br]mental defects, you can clearly see it's white
0:01:44.720,0:01:46.550
and gold. Right?
0:01:46.550,0:01:48.720
[voices from audience]
0:01:48.720,0:01:53.009
Turns out most people seem to have[br]mental defects and say it is blue and black.
0:01:53.009,0:01:57.500
I have no idea why. Well, OK, I have an idea[br]why that is the case.
0:01:57.500,0:02:01.170
I guess you got it too: it has to[br]do with color renormalization,
0:02:01.170,0:02:04.720
and color renormalization happens differently[br]apparently in different people.
0:02:04.720,0:02:09.000
So we have different wiring to renormalize[br]the white balance.
0:02:09.000,0:02:12.650
And it seems to work in real world[br]situations in pretty much the same way,
0:02:12.650,0:02:18.000
but not necessarily for photographs,[br]which have only a very small fringe around them,
0:02:18.000,0:02:20.600
which gives you a hint about the lighting situation.
0:02:20.600,0:02:27.000
And that's why you get these huge divergences,[br]which is amazing!
0:02:27.000,0:02:29.660
So what we see is that our minds cannot know
0:02:29.660,0:02:33.250
objective truths in any way, outside of mathematics.
0:02:33.250,0:02:36.340
They can generate meaning though.
0:02:36.340,0:02:38.760
How does this work?
0:02:38.760,0:02:42.010
I did robotic soccer for a while,[br]and there you have the situation,
0:02:42.010,0:02:45.150
that you have a bunch of robots, that are[br]situated on a playing field.
0:02:45.150,0:02:48.480
And they have a model of what goes on[br]in the playing field.
0:02:48.480,0:02:52.050
Physics generates data for their sensors.[br]They read the bits of the sensors.
0:02:52.050,0:02:55.900
And then they use them to update[br]the world model.
0:02:55.900,0:02:59.020
And sometimes we didn't want[br]to take the whole playing field along,
0:02:59.020,0:03:03.380
and the physical robots, because they are[br]expensive and heavy and so on.
0:03:03.380,0:03:06.480
Instead, if you just want to improve the learning[br]and the gameplay of the robots,
0:03:06.480,0:03:07.800
you can use a simulation.
0:03:07.800,0:03:11.200
So we wrote a computer simulation of the[br]playing field and the physics, and so on,
0:03:11.200,0:03:15.210
that generates pretty much the same data,[br]and put the robot mind into the simulated
0:03:15.210,0:03:17.040
robot body, and it works just as well.
0:03:17.040,0:03:20.590
That is, if you are the robot, because you[br]cannot know the difference if you are the robot.
0:03:20.590,0:03:24.460
You cannot know what's out there. The only[br]thing that you get to see is the structure
0:03:24.460,0:03:27.530
of the data at your systemic bit interface.
0:03:27.530,0:03:30.090
And then you can derive a model from this.
0:03:30.090,0:03:32.960
And this is pretty much the situation[br]that we are in.
0:03:32.960,0:03:38.180
That is, we are minds that are somehow computational,
0:03:38.180,0:03:40.700
they are able to find regularity in patterns,
0:03:40.700,0:03:44.530
and we seem to have access to[br]something that is full of regularity,
0:03:44.530,0:03:46.630
so we can make sense out of it.
0:03:46.630,0:03:48.930
[gulp, gulp]
0:03:48.930,0:03:52.800
Now, if you discover that you are in the same[br]situation as these robots,
0:03:52.800,0:03:56.180
basically you discover that you are some kind[br]of apparently biological robot,
0:03:56.180,0:03:58.530
that doesn't have direct access[br]to the world of concepts.
0:03:58.530,0:04:02.140
That has never actually seen matter[br]and energy and other people.
0:04:02.140,0:04:04.890
All it got to see was little bits of information,
0:04:04.890,0:04:06.270
that were transmitted through the nerves,
0:04:06.270,0:04:07.870
and the brain had to make sense of them,
0:04:07.870,0:04:10.470
by counting them in elaborate ways.
0:04:10.470,0:04:12.720
What's the best model of the world[br]that you can have with this?
0:04:12.720,0:04:16.530
What is the state of affairs,[br]what's the system that you are in?
0:04:16.530,0:04:20.920
And what are the best algorithms that you[br]should be using to fix your world model?
0:04:20.920,0:04:23.310
And this question is pretty old.
0:04:23.310,0:04:27.750
And I think it has been answered for the[br]first time by Ray Solomonoff in the 1960s.
0:04:27.750,0:04:30.840
He has discovered an algorithm[br]that you can apply when you discover
0:04:30.840,0:04:33.540
that you are a robot,[br]and all you have is data.
0:04:33.540,0:04:34.870
What is the world like?
0:04:34.870,0:04:40.990
And this algorithm is basically[br]a combination of induction and Occam's razor.
0:04:40.990,0:04:45.710
And we can mathematically prove that we[br]cannot do better than Solomonoff induction.
0:04:45.710,0:04:51.380
Unfortunately, Solomonoff induction[br]is not quite computable.
0:04:51.380,0:04:54.450
But everything that we are going to do is[br]going to be some approximation
0:04:54.450,0:04:55.820
of Solomonoff induction.
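The spirit of this can be sketched as a toy in Python. True Solomonoff induction weights ALL programs by two to the minus program length and is incomputable; here a tiny hand-picked hypothesis space (my illustration, not from the talk) stands in for "all programs":

```python
# Toy approximation of Solomonoff induction: keep every "program" that
# reproduces the observed data, weight it by 2^(-description length)
# (Occam's razor), and predict by the weighted vote. The hypothesis
# list is hand-picked for illustration; the real thing sums over all
# programs and is incomputable.

observed = [0, 1, 0, 1, 0, 1]

# Each "program": (description length in bits, function index -> bit).
hypotheses = [
    (2, lambda n: 0),            # "always 0"
    (2, lambda n: 1),            # "always 1"
    (3, lambda n: n % 2),        # "alternate, starting with 0"
    (3, lambda n: (n + 1) % 2),  # "alternate, starting with 1"
]

def predict_next(data):
    """Return P(next symbol is 1) under the 2^-length weighting,
    counting only hypotheses that reproduce all of the data."""
    weights = {0: 0.0, 1: 0.0}
    for length, prog in hypotheses:
        if all(prog(n) == bit for n, bit in enumerate(data)):
            weights[prog(len(data))] += 2.0 ** -length
    return weights[1] / (weights[0] + weights[1])

# Only "alternate, starting with 0" survives; it predicts 0 next.
print(predict_next(observed))  # 0.0
```

The shortest surviving program dominates the prediction, which is exactly the "induction plus Occam's razor" combination described above.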
0:04:55.820,0:04:59.400
So our concepts cannot really refer[br]to facts in the world out there.
0:04:59.400,0:05:02.380
We do not get the truth by referring[br]to stuff out there, in the world.
0:05:02.380,0:05:07.960
We get meaning by suitably encoding[br]the patterns at our systemic interface.
0:05:07.960,0:05:12.270
And AI has recently made huge progress in[br]encoding data at perceptual interfaces.
0:05:12.270,0:05:15.900
Deep learning is about using a stacked hierarchy[br]of feature detectors.
0:05:15.900,0:05:21.280
That is, we use pattern detectors and we build[br]them into networks that are arranged in
0:05:21.280,0:05:23.030
hundreds of layers.
0:05:23.030,0:05:26.500
And then we adjust the links[br]between these layers.
0:05:26.500,0:05:29.380
Usually using some kind of gradient descent.
0:05:29.380,0:05:33.220
And we can use this to classify,[br]for instance, images and parts of speech.
0:05:33.220,0:05:37.950
So we get features that are more and more[br]complex; they start as very, very simple patterns,
0:05:37.950,0:05:41.290
and then get more and more complex,[br]until we get to object categories.
0:05:41.290,0:05:44.199
And now these systems are able,[br]in image recognition tasks,
0:05:44.199,0:05:47.480
to approach performance that is very similar[br]to human performance.
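The stacked hierarchy of feature detectors can be sketched minimally like this. The random weights here stand in for what gradient descent would actually learn, and the layer sizes are arbitrary; real systems use hundreds of layers:

```python
import numpy as np

# Minimal sketch of a stacked hierarchy of feature detectors: each
# layer turns the previous layer's features into slightly more
# abstract ones. Real deep-learning systems learn the weights by
# gradient descent; here they are random, just to show the data flow.

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One feature-detector layer: linear map plus a nonlinearity."""
    w = rng.normal(size=(n_out, x.shape[0]))
    return np.maximum(0.0, w @ x)  # ReLU: a detector fires or stays silent

x = rng.normal(size=64)   # "pixels": the raw input pattern
h1 = layer(x, 32)         # very simple features (edges, blobs, ...)
h2 = layer(h1, 16)        # combinations of simple features
h3 = layer(h2, 8)         # object-category-like features
print(h3.shape)  # (8,)
```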
0:05:47.480,0:05:52.040
Also, what is nice is that it seems to be somewhat[br]similar to what the brain is doing
0:05:52.040,0:05:53.740
in visual processing.
0:05:53.740,0:05:57.570
And if you take the activation in different[br]levels of these networks and you
0:05:57.570,0:06:01.430
enhance this activation a little bit, what
0:06:01.430,0:06:03.500
you get is stuff that looks very psychedelic.
0:06:03.500,0:06:09.620
Which may be similar to what happens if you[br]give certain illegal substances to people,
0:06:09.620,0:06:13.650
and enhance the activity on certain layers[br]of their visual processing.
0:06:13.650,0:06:21.540
[BROKEN AUDIO] If you want to classify, what[br]we do, if we want to quantify
0:06:21.540,0:06:33.030
this, is filter out all the invariances in[br]the data:
0:06:33.030,0:06:36.360
the pose that she has, the lighting,[br]the dress that she has on,
0:06:36.360,0:06:38.020
her facial expression and so on.
0:06:38.020,0:06:42.900
And then we look only at the thing that[br]is left after we've removed all the nuisance data.
0:06:42.900,0:06:47.410
But what if we[br]want to get to something else,
0:06:47.410,0:06:49.850
for instance if we want to understand poses.
0:06:49.850,0:06:53.240
It could be, for instance, that we have several[br]dancers and we want to understand what they
0:06:53.240,0:06:54.400
have in common.
0:06:54.400,0:06:58.330
So our best bet is not just to have a single[br]classification-based filtering,
0:06:58.330,0:07:01.199
but instead what we want is to take[br]the low-level input
0:07:01.199,0:07:05.180
and get a whole universe of features[br]that is interrelated.
0:07:05.180,0:07:07.220
So we have different levels of interrelations.
0:07:07.220,0:07:08.960
At the lowest level we have percepts.
0:07:08.960,0:07:11.580
On a slightly higher level we have simulations.
0:07:11.580,0:07:16.920
And on an even higher level we have a concept landscape.
0:07:16.920,0:07:19.300
How does this representation[br]by simulation work?
0:07:19.300,0:07:22.229
Now imagine you want to understand sound.
0:07:22.229,0:07:23.669
[gulp]
0:07:23.669,0:07:26.710
If you are a brain and you want to understand[br]sound you need to model it.
0:07:26.710,0:07:31.070
Unfortunately, we cannot really model sound[br]with neurons, because sound goes up to 20 kHz,
0:07:31.070,0:07:36.660
or if you are old like me, maybe to 12 kHz.[br]20 kHz is what babies can do.
0:07:36.660,0:07:41.240
And neurons do not want to do 20 kHz.[br]That's way too fast for them.
0:07:41.240,0:07:43.250
They like something like 20 Hz.
0:07:43.250,0:07:45.590
So what do you do? You need[br]to do a Fourier transform.
0:07:45.590,0:07:49.650
The Fourier transform measures the amount[br]of energy at different frequencies.
0:07:49.650,0:07:52.500
And because you cannot do it with neurons,[br]you need to do it in hardware.
0:07:52.500,0:07:54.180
And it turns out this is exactly[br]what we are doing.
0:07:54.180,0:07:59.860
We have this cochlea, which is this snail-like[br]thing in our ears,
0:07:59.860,0:08:06.669
and what it does is transform the energy of[br]sound in different frequency intervals into
0:08:06.669,0:08:08.009
energy measurements.
0:08:08.009,0:08:10.479
And then gives you something[br]like what you see here.
0:08:10.479,0:08:12.550
And this is something that the brain can model,
0:08:12.550,0:08:16.210
so we can get a neurosimulator that tries[br]to recreate these patterns.
0:08:16.210,0:08:21.370
And if we can predict the next input from the[br]cochlea, then we understand the sound.
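The Fourier step described here can be illustrated with numpy (a toy signal, not real cochlear data): a 440 Hz pressure wave, far too fast for neurons, becomes an energy-per-frequency measurement that a slow model could track:

```python
import numpy as np

# Sketch of the cochlea's job as described above: turn a fast pressure
# wave into energy measurements per frequency band. A pure 440 Hz tone
# shows up as a single peak in the spectrum.

rate = 8000                        # samples per second
t = np.arange(rate) / rate         # one second of signal
signal = np.sin(2 * np.pi * 440 * t)

spectrum = np.abs(np.fft.rfft(signal))        # energy at each frequency
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

peak = freqs[np.argmax(spectrum)]
print(peak)  # 440.0
```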
0:08:21.370,0:08:23.410
Of course, if we want to understand music,
0:08:23.410,0:08:25.160
we have to go beyond understanding sound.
0:08:25.160,0:08:29.340
We have to understand the transformations[br]that sound can have if you play it at different pitch.
0:08:29.340,0:08:33.599
We have to arrange the sounds in sequences[br]that give you rhythms and so on.
0:08:33.599,0:08:35.889
And then we want to identify[br]some kind of musical grammar
0:08:35.889,0:08:38.799
that we can use to again control the sequencer.
0:08:38.799,0:08:42.529
So we have stacked structures[br]that simulate the world.
0:08:42.529,0:08:44.319
And once you've learned this model of music,
0:08:44.319,0:08:47.309
once you've learned the musical grammar,[br]the sequencer and the sounds,
0:08:47.309,0:08:51.779
you can get to the structure[br]of an individual piece of music.
0:08:51.779,0:08:54.399
So, if you want to model the world of music,
0:08:54.399,0:08:58.279
you need to have the lowest level of percepts,[br]then the higher level of mental simulations,
0:08:58.279,0:09:01.910
which give you the sequences of the music[br]and the grammars of music.
0:09:01.910,0:09:05.149
And beyond this you have the conceptual landscape[br]that you can use
0:09:05.149,0:09:08.249
to describe different styles of music.
0:09:08.249,0:09:12.130
And if you go up in the hierarchy,[br]you get to more and more abstract models.
0:09:12.130,0:09:13.860
More and more conceptual models.
0:09:13.860,0:09:16.449
And more and more analytic models.
0:09:16.449,0:09:18.160
And these are causal models at some point.
0:09:18.160,0:09:20.999
These causal models can be weakly deterministic,
0:09:20.999,0:09:22.980
basically associative models, which tell you
0:09:22.980,0:09:27.339
if this state happens, it's quite probable[br]that this one comes afterwards.
0:09:27.339,0:09:29.389
Or you can get to a strongly determined model.
0:09:29.389,0:09:32.730
A strongly determined model is one which tells[br]you: if you are in this state
0:09:32.730,0:09:33.879
and this condition is met,
0:09:33.879,0:09:35.589
you are going to go exactly into this state.
0:09:35.589,0:09:40.110
If this condition is not met, or a different[br]condition is met, you are going to that state.
0:09:40.110,0:09:41.449
And this is what we call an algorithm.
0:09:41.449,0:09:46.769
Now we are in the domain of computation.
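The weak/strong distinction can be written down as two kinds of transition rule (the states and probabilities here are invented purely for illustration):

```python
import random

# The two kinds of causal model described above, as transition rules.

# Weakly determined (associative): from a state, the model only gives
# probabilities for what comes next.
weak_model = {
    "clouds": [("rain", 0.7), ("sun", 0.3)],
}

def weak_step(state, rng):
    outcomes, weights = zip(*weak_model[state])
    return rng.choices(outcomes, weights=weights)[0]

# Strongly determined: given the state AND a condition, the next state
# is fixed exactly. This is what we call an algorithm.
def strong_step(state, condition):
    if state == "clouds":
        return "rain" if condition == "low_pressure" else "sun"
    return state

rng = random.Random(0)
print(weak_step("clouds", rng))               # "rain" or "sun", by chance
print(strong_step("clouds", "low_pressure"))  # always "rain"
```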
0:09:46.769,0:09:48.730
Computation is slightly different from mathematics.
0:09:48.730,0:09:51.179
It's important to understand this.
0:09:51.179,0:09:54.699
For a long time people have thought that the[br]universe is written in mathematics.
0:09:54.699,0:09:58.399
Or that minds are mathematical,[br]or everything is mathematical.
0:09:58.399,0:10:00.439
In fact nothing is mathematical.
0:10:00.439,0:10:04.529
Mathematics is just the domain[br]of formal languages. It doesn't exist.
0:10:04.529,0:10:07.300
Mathematics starts with a void.
0:10:07.300,0:10:11.939
You throw in a few axioms, and if you've chosen[br]nice axioms, then you get infinite complexity,
0:10:11.939,0:10:13.679
most of which is not computable.
0:10:13.679,0:10:16.270
In mathematics you can express arbitrary statements,
0:10:16.270,0:10:18.269
because it's all about formal languages.
0:10:18.269,0:10:20.369
Many of these statements will not make sense.
0:10:20.369,0:10:22.469
Many of these statements will make sense[br]in some way,
0:10:22.469,0:10:24.429
but you cannot test whether they make sense,
0:10:24.429,0:10:26.740
because they're not computable.
0:10:26.740,0:10:29.929
Computation is different.[br]Computation can exist.
0:10:29.929,0:10:32.459
It starts with an initial state.
0:10:32.459,0:10:34.739
And then you have a transition function.[br]You do the work.
0:10:34.739,0:10:38.449
You apply the transition function,[br]and you get into the next state.
0:10:38.449,0:10:41.249
Computation is always finite.
0:10:41.249,0:10:43.689
Mathematics is the kingdom of specification.
0:10:43.689,0:10:47.290
And computation is the kingdom of implementation.
0:10:47.290,0:10:50.629
It's very important to understand this difference.
0:10:50.629,0:10:55.329
All our access to mathematics of course is[br]because we do computation.
0:10:55.329,0:10:57.459
We can understand mathematics,
0:10:57.459,0:10:59.939
because our brain can compute[br]some parts of mathematics.
0:10:59.939,0:11:04.439
Very, very little of it, and to[br]very constrained complexity.
0:11:04.439,0:11:06.860
But enough, so we can map[br]some of the infinite complexity
0:11:06.860,0:11:10.410
and noncomputability of mathematics[br]into computational patterns,
0:11:10.410,0:11:12.279
that we can explore.
0:11:12.279,0:11:14.410
So computation is about doing the work,
0:11:14.410,0:11:16.939
it's about executing the transition function.
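A minimal sketch of computation in exactly this sense: an initial state, a transition function, finitely many steps. The Collatz rule below is just an arbitrary example of a transition function:

```python
# Computation as described above: an initial state plus a transition
# function, applied step by step. Each run does finitely many steps.

def run(state, transition, steps):
    for _ in range(steps):
        state = transition(state)
    return state

# Example transition function: the Collatz rule.
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1

print(run(6, collatz, 8))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
```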
0:11:19.730,0:11:22.899
Now we've seen that mental representation[br]is about percepts,
0:11:22.899,0:11:25.670
mental simulations, conceptual representations,
0:11:25.670,0:11:29.110
and these conceptual representations[br]give us concept spaces.
0:11:29.110,0:11:30.970
And the nice thing[br]about these concept spaces is
0:11:30.970,0:11:33.399
that they give us an interface[br]to our mental representations,
0:11:33.399,0:11:36.290
which we can use to address and manipulate them.
0:11:36.290,0:11:39.119
And we can share them in cultures.
0:11:39.119,0:11:40.899
And these concepts are compositional.
0:11:40.899,0:11:43.639
We can put them together to create new concepts.
0:11:43.639,0:11:48.230
And they can be described using[br]high-dimensional vector spaces.
0:11:48.230,0:11:50.319
They don't do simulation[br]and prediction and so on,
0:11:50.319,0:11:53.119
but they can capture regularity[br]in our concept system.
0:11:53.119,0:11:55.220
With these vector spaces[br]you can do amazing things.
0:11:55.220,0:11:57.589
For instance, the vector from[br]"King" to "Queen"
0:11:57.589,0:12:01.009
is pretty much the same vector[br]as between "Man" and "Woman".
0:12:01.009,0:12:04.110
And because of these properties, because these[br]concept spaces are really high-dimensional manifolds,
0:12:04.110,0:12:07.569
we can do interesting[br]things, like machine translation
0:12:07.569,0:12:09.470
without understanding what it means.
0:12:09.470,0:12:13.929
That is, without doing any proper mental representation[br]that predicts the world.
0:12:13.929,0:12:16.989
So this is a type of meta-representation[br]that is somewhat incomplete,
0:12:16.989,0:12:21.199
but it captures the landscape that we share[br]in a culture.
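The King/Queen observation can be shown with a toy. These hand-made 2-D vectors (dimensions roughly "royalty" and "gender") are my illustration; real embeddings such as word2vec are learned from text and have hundreds of dimensions:

```python
import numpy as np

# Toy version of the king/queen observation: in a word-vector space,
# the offset king -> queen is roughly the offset man -> woman.
# The vectors are hand-made for illustration only.

vec = {
    "king":  np.array([0.9, 0.9]),
    "queen": np.array([0.9, 0.1]),
    "man":   np.array([0.1, 0.9]),
    "woman": np.array([0.1, 0.1]),
}

# king - man + woman should land near queen.
result = vec["king"] - vec["man"] + vec["woman"]

nearest = min(vec, key=lambda w: np.linalg.norm(vec[w] - result))
print(nearest)  # queen
```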
0:12:21.199,0:12:25.089
And then there is another type of meta-representation,[br]that is, linguistic protocols,
0:12:25.089,0:12:27.699
which are basically a formal grammar and vocabulary.
0:12:27.699,0:12:29.619
And we need these linguistic protocols
0:12:29.619,0:12:32.869
to transfer mental representations[br]between people.
0:12:32.869,0:12:36.019
And we do this by basically[br]scanning our mental representation,
0:12:36.019,0:12:38.660
disassembling them in some way[br]or disambiguating them.
0:12:38.660,0:12:43.040
And then we use a discrete string of symbols[br]to get it to somebody else,
0:12:43.040,0:12:46.429
and they run an assembler[br]that reverses this process,
0:12:46.429,0:12:51.389
and builds something that is pretty similar[br]to what we intended to convey.
0:12:51.389,0:12:53.569
And if you look at the progression of AI models,
0:12:53.569,0:12:55.600
it pretty much went in the opposite direction.
0:12:55.600,0:13:00.279
So AI started with linguistic protocols, which[br]were expressed in formal grammars.
0:13:00.279,0:13:05.209
And then it got to concept spaces, and now[br]it's about to address percepts.
0:13:05.209,0:13:09.689
And at some point in the near future it's going[br]to get better at mental simulations.
0:13:09.689,0:13:11.730
And at some point after that we get to
0:13:11.730,0:13:14.769
attention-directed and[br]motivationally connected systems,
0:13:14.769,0:13:16.600
that make sense of the world,
0:13:16.600,0:13:20.290
that are in some sense able to address meaning.
0:13:20.290,0:13:23.489
This is what the hardware that we have can do.
0:13:23.489,0:13:25.629
What kind of hardware do we have?
0:13:25.629,0:13:28.480
That's a very interesting question.
0:13:28.480,0:13:32.230
We could start out with the question:[br]How difficult is it to define a brain?
0:13:32.230,0:13:35.439
We know that the brain must be[br]hidden somewhere in the genome.
0:13:35.439,0:13:38.290
The genome fits on a CD-ROM.[br]It's not that complicated.
0:13:38.290,0:13:40.399
It's easier than Microsoft Windows. laughter
0:13:40.399,0:13:45.549
And we also know, that about 2%[br]of the genome is coding for proteins.
0:13:45.549,0:13:48.429
And maybe about 10% of the genome[br]has some kind of stuff
0:13:48.429,0:13:51.239
that tells you when to switch proteins on and off.
0:13:51.239,0:13:52.829
And the remainder is mostly garbage.
0:13:52.829,0:13:57.170
It's old viruses that are left over and have[br]never been properly deleted, and so on,
0:13:57.170,0:14:01.420
Because there are no real[br]code revisions in the genome.
0:14:01.420,0:14:08.119
So how much of this 10%,[br]that is, 75 MB, codes for the brain?
0:14:08.119,0:14:09.469
We don't really know.
0:14:09.469,0:14:13.399
What we do know is that we share[br]almost all of it with mice.
0:14:13.399,0:14:15.769
Genetically speaking, a human[br]is a pretty big mouse,
0:14:15.769,0:14:21.049
with a few bits changed to fix some[br]of the genetic expression.
0:14:21.049,0:14:25.879
And most of the stuff there is going[br]to code for cells and metabolism
0:14:25.879,0:14:27.999
and what your body looks like and so on.
0:14:27.999,0:14:33.679
But if you look at how much is expressed[br]in the brain and only in the brain,
0:14:33.679,0:14:35.170
in terms of proteins and so on.
0:14:35.170,0:14:45.639
we find it's about 5% of the 2%. That is, only 5% of the 2%
0:14:45.639,0:14:46.799
is expressed only in the brain.
0:14:46.799,0:14:50.199
And another 5% of the 2% is predominantly[br]in the brain,
0:14:50.199,0:14:52.069
that is, more in the brain than anywhere else.
0:14:52.069,0:14:54.249
Which gives you something[br]like a lower bound.
0:14:54.249,0:14:59.379
Which means that to encode a brain genetically,[br]based on the hardware that we are using,
0:14:59.379,0:15:03.539
we need something like[br]at least 500 kB of code.
0:15:03.539,0:15:06.670
Actually, this is a very conservative[br]lower bound.
0:15:06.670,0:15:08.720
It's going to be a little more, I guess.
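The arithmetic behind these numbers can be checked roughly, assuming ~3 billion base pairs at 2 bits each (standard rough figures, not stated in the talk):

```python
# Back-of-the-envelope check of the genome numbers in the talk.
# Rough assumptions: ~3 billion base pairs, 2 bits per base.

base_pairs = 3e9
genome_bytes = base_pairs * 2 / 8     # 2 bits per base, 8 bits per byte
print(genome_bytes / 1e6)             # ~750 MB: fits on a CD-ROM

regulatory = 0.10 * genome_bytes      # ~10% regulatory
print(regulatory / 1e6)               # ~75 MB, the figure quoted above

coding = 0.02 * genome_bytes          # ~2% codes for proteins
brain_only = 0.05 * coding            # ~5% of that is brain-only
print(brain_only / 1e3)               # ~750 kB: same ballpark as the
                                      # "at least 500 kB" lower bound
```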
0:15:08.720,0:15:11.449
But that sounds like surprisingly little, right?
0:15:11.449,0:15:13.709
But in terms of scientific theories[br]this is a lot.
0:15:13.709,0:15:16.519
I mean, the universe,[br]according to the core theory
0:15:16.519,0:15:19.420
of quantum mechanics and so on,[br]is like this much code.
0:15:19.420,0:15:20.569
It's like half a page of code.
0:15:20.569,0:15:23.100
That's it. That's all you need[br]to generate the universe.
0:15:23.100,0:15:25.489
And if you want to understand evolution,[br]it's like a paragraph.
0:15:25.489,0:15:29.609
It's a couple of lines you need to understand[br]the evolutionary process.
0:15:29.609,0:15:32.199
And there are lots and lots of details that[br]you get afterwards.
0:15:32.199,0:15:34.220
Because this process itself doesn't define
0:15:34.220,0:15:37.259
what the animals are going to look like,[br]and in a similar way
0:15:37.259,0:15:41.269
the code of the universe doesn't tell you[br]what this planet is going to look like.
0:15:41.269,0:15:43.279
And what you guys are going to look like.
0:15:43.279,0:15:45.949
It's just defining the rulebook.
0:15:45.949,0:15:49.209
And in the same sense, the genome defines the rulebook
0:15:49.209,0:15:51.569
by which our brain is built.
0:15:51.569,0:15:56.399
The brain boots itself[br]in a developmental process,
0:15:56.399,0:15:58.119
and this booting takes some time.
0:15:58.119,0:16:01.069
There is subliminal learning in which[br]initial connections are forged
0:16:01.069,0:16:04.910
and basic models of the world are built,[br]so we can operate in it.
0:16:04.910,0:16:06.999
And how long does this booting take?
0:16:06.999,0:16:09.669
I think it's about 80 megaseconds.
0:16:09.669,0:16:14.319
That's the time that a child is awake until[br]it's 2.5 years old.
0:16:14.319,0:16:16.449
By this age you understand Star Wars.
0:16:16.449,0:16:20.029
And I think that everything after[br]understanding Star Wars is cosmetics.
0:16:20.029,0:16:26.799
laughter, applause
0:16:26.799,0:16:32.820
You are going to be online, if you get to[br]reach old age, for about 1.5 gigaseconds.
0:16:32.820,0:16:37.929
And in this time, I think, you are not going[br]to acquire more than 5 million concepts.
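The rough arithmetic for the megasecond and gigasecond figures; the 2/3-awake fraction over a lifetime is an assumption chosen to land near the talk's numbers:

```python
# Rough time arithmetic from the talk, in seconds.

year = 365.25 * 24 * 3600           # ~31.6 million seconds

boot = 2.5 * year                   # age 2.5: ~79 megaseconds,
print(boot / 1e6)                   # roughly the 80 Ms quoted above

lifetime_awake = 80 * year * 2 / 3  # 80 years, awake 2/3 of the time:
print(lifetime_awake / 1e9)         # ~1.7 gigaseconds, the same order
                                    # as the 1.5 Gs in the talk
```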
0:16:37.929,0:16:41.600
Why? Well, if you look at a child:
0:16:41.600,0:16:45.480
if a child were able to form a concept,[br]let's say, every 5 minutes,
0:16:45.480,0:16:48.529
then by the time it's about 4 years old,[br]it's going to have
0:16:48.529,0:16:51.549
something like 250 thousand concepts.
0:16:51.549,0:16:54.119
So, a quarter million.
0:16:54.119,0:16:56.809
And if we extrapolate this into our lifetime,
0:16:56.809,0:16:59.799
at some point it slows down,[br]because we have enough concepts,
0:16:59.799,0:17:01.230
to describe the world.
0:17:01.230,0:17:04.410
I think it's less than 5 million.
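The quarter-million estimate, spelled out; the 60% awake fraction for a small child is an assumption chosen to match the figure in the talk:

```python
# The "quarter million concepts by age 4" estimate from the talk.

minutes_4_years = 4 * 365.25 * 24 * 60
awake = 0.6 * minutes_4_years      # assumed: small children sleep a lot
concepts = awake / 5               # one new concept every 5 minutes
print(round(concepts))             # ~250,000: about a quarter million
```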
0:17:04.410,0:17:07.140
How much storage capacity does the brain have?
0:17:07.140,0:17:12.319
The estimates[br]are pretty divergent.
0:17:12.319,0:17:14.930
The lower bound is something like 100 GB,
0:17:14.930,0:17:18.569
and the upper bound[br]is something like 2.5 PB.
0:17:18.569,0:17:21.890
There are even some higher outliers:
0:17:21.890,0:17:25.630
if you, for instance, think that we need all[br]those synaptic vesicles to store information,
0:17:25.630,0:17:27.530
maybe even more fits into this.
0:17:27.530,0:17:31.740
But the 2.5 PB is usually based[br]on what you need
0:17:31.740,0:17:34.760
to code the information[br]that is in all the neurons.
0:17:34.760,0:17:36.770
But maybe the neurons[br]do not really matter so much,
0:17:36.770,0:17:39.930
because if a neuron dies, it's not like the[br]world is changing dramatically.
0:17:39.930,0:17:44.270
The brain is very resilient[br]against individual neurons failing.
0:17:44.270,0:17:48.930
So the 100 GB capacity is much more like[br]what you actually store in the neurons,
0:17:48.930,0:17:51.380
if you account for all the redundancy[br]that you need.
0:17:51.380,0:17:54.230
And I think this is much closer to the actual[br]ballpark figure.
0:17:54.230,0:17:58.130
Also, if you want to store 5 million concepts,
0:17:58.130,0:18:02.330
and maybe 10 times or 100 times that number[br]of percepts on top of this,
0:18:02.330,0:18:05.490
this is roughly the ballpark figure[br]that you are going to need.
0:18:05.490,0:18:07.110
So our brain
0:18:07.110,0:18:08.320
is a prediction machine.
0:18:08.320,0:18:11.490
What it does is reduce the entropy[br]of the environment,
0:18:11.490,0:18:14.610
to solve whatever problems you are encountering,
0:18:14.610,0:18:17.790
if you don't have a feedback loop to fix[br]them.
0:18:17.790,0:18:20.240
So normally if something happens, we have[br]some kind of feedback loop,
0:18:20.240,0:18:23.440
that regulates our temperature or that makes[br]problems go away.
0:18:23.440,0:18:26.050
And only when this is not working[br]do we employ cognition.
0:18:26.050,0:18:29.250
And then we start these arbitrary[br]computational processes
0:18:29.250,0:18:31.830
that are facilitated by the neocortex.
0:18:31.830,0:18:34.940
And this neocortex can really do arbitrary programs.
0:18:34.940,0:18:37.870
But it can do so[br]only with very limited complexity,
0:18:37.870,0:18:42.070
because, as you just saw,[br]it's not that complex.
0:18:42.070,0:18:43.900
The modeling of the world is very slow.
0:18:43.900,0:18:46.570
And it's something[br]that we see in our AI models.
0:18:46.570,0:18:48.150
To learn the basic structure of the world
0:18:48.150,0:18:49.330
takes a very long time.
0:18:49.330,0:18:52.650
To learn basically that we are moving in 3D[br]and objects are moving,
0:18:52.650,0:18:54.030
and what they look like.
0:18:54.030,0:18:55.130
Once we have this basic model,
0:18:55.130,0:18:59.300
we can get to very, very quick[br]understanding within this model.
0:18:59.300,0:19:02.110
Basically encoding based[br]on the structure of the world,
0:19:02.110,0:19:03.610
that we've learned.
0:19:03.610,0:19:07.100
And this is a kind of[br]data compression that we are doing.
0:19:07.100,0:19:09.740
We use this model, this grammar of the world,
0:19:09.740,0:19:12.150
this simulation structures that we've learned,
0:19:12.150,0:19:15.190
to encode the world very, very efficiently.
0:19:15.190,0:19:17.740
How much data compression do we get?
0:19:17.740,0:19:19.860
Well... if you look at the retina.
0:19:19.860,0:19:24.610
The retina gets data[br]on the order of about 10 Gb/s.
0:19:24.610,0:19:27.500
And the retina already compresses this data,
0:19:27.500,0:19:31.120
and puts it into the optic nerve[br]at a rate of about 1 Mb/s.
0:19:31.120,0:19:34.030
This is what gets fed into the visual cortex.
0:19:34.030,0:19:36.370
And the visual cortex[br]does some additional compression,
0:19:36.370,0:19:42.110
and by the time it gets to layer 4 of the[br]first visual area, V1,
0:19:42.110,0:19:46.880
we are down to something like 1 Kb/s.
0:19:46.880,0:19:50.720
So if we extrapolate this, and you get to live[br]to the age of 80 years,
0:19:50.720,0:19:54.140
and you are awake for 2/3 of your lifetime.
0:19:54.140,0:19:56.930
That is you have your eyes open for 2/3 of[br]your lifetime.
0:19:56.930,0:19:59.040
the stuff that you get into your brain
0:19:59.040,0:20:03.700
via your visual perception[br]is going to be only 2 TB.
0:20:03.700,0:20:05.370
Only 2 TB of visual data.
0:20:05.370,0:20:06.680
Throughout all your lifetime.
0:20:06.680,0:20:09.430
That's all you are ever going to get to see.
0:20:09.430,0:20:11.160
Isn't this depressing?
0:20:11.160,0:20:12.790
laughter
0:20:12.790,0:20:16.540
So I would really like[br]to tell you:
0:20:16.540,0:20:22.750
choose wisely what you[br]are going to look at. laughter
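The 2 TB figure, spelled out. Reading the V1 rate as roughly one kilobyte per second is an assumption (the unit as spoken is ambiguous); with that reading, the numbers land in the stated ballpark:

```python
# The "2 TB of visual data per lifetime" claim from the talk.
# Assumption: the rate into V1 is ~1 kilobyte per second.

year = 365.25 * 24 * 3600
eyes_open = 80 * year * 2 / 3   # eyes open for 2/3 of 80 years
v1_rate = 1000                  # bytes per second reaching V1 (assumed)

lifetime_bytes = eyes_open * v1_rate
print(lifetime_bytes / 1e12)    # ~1.7 TB: roughly the 2 TB quoted above
```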
0:20:22.750,0:20:26.940
Ok. Let's look at this problem of neural compositionality.
0:20:26.940,0:20:29.250
Our brains have this amazing property that they can put
0:20:29.250,0:20:31.510
mental representations together very, very quickly.
0:20:31.510,0:20:33.150
For instance you read a page of code,
0:20:33.150,0:20:35.190
you compile it in your mind[br]into some kind of program
0:20:35.190,0:20:37.700
that tells you what this page is going to do.
0:20:37.700,0:20:39.110
Isn't that amazing?
0:20:39.110,0:20:40.810
And then you can forget about it,
0:20:40.810,0:20:43.910
disassemble it all, and use the[br]building blocks for something else.
0:20:43.910,0:20:45.230
It's like legos.
0:20:45.230,0:20:48.000
How can you do this with neurons?
0:20:48.000,0:20:50.160
Legos can do this because they have[br]a well-defined interface.
0:20:50.160,0:20:52.180
They have all these slots, you know,[br]that fit together
0:20:52.180,0:20:53.600
in well-defined ways.
0:20:53.600,0:20:54.530
How can neurons do this?
0:20:54.530,0:20:57.280
Well, neurons can maybe learn[br]the interface of other neurons.
0:20:57.280,0:20:59.780
But that's difficult, because every neuron[br]looks slightly different,
0:20:59.780,0:21:04.830
after all, it's some kind of biologically[br]grown, natural stuff.
0:21:04.830,0:21:06.610
laughter
0:21:06.610,0:21:10.620
So what you want to do is,[br]you want to encapsulate this
0:21:10.620,0:21:13.020
diversity of the neurons to make them predictable,
0:21:13.020,0:21:14.820
to give them a well-defined interface.
0:21:14.820,0:21:16.410
And I think that nature's solution to this
0:21:16.410,0:21:19.770
is cortical columns.
0:21:19.770,0:21:24.250
A cortical column is a circuit of[br]between 100 and 400 neurons.
0:21:24.250,0:21:26.860
And this circuit is some kind of neural network[br]that can learn stuff.
0:21:28.650,0:21:31.070
And after it has learned a particular function,
0:21:31.070,0:21:35.320
it's able to link up with these[br]other cortical columns.
0:21:35.320,0:21:37.120
And we have about 100 million of those.
0:21:37.120,0:21:39.770
Depending on how many neurons[br]you assume are in there,
0:21:39.770,0:21:41.490
we guess it's something,
0:21:41.490,0:21:46.500
at least 20 million and maybe[br]something like 100 million.
0:21:46.500,0:21:48.330
And these cortical columns, what they can do,
0:21:48.330,0:21:50.280
is they can link up like lego bricks,
0:21:50.280,0:21:54.130
and then perform,[br]by transmitting information between them,
0:21:54.130,0:21:55.990
pretty much arbitrary computations.
0:21:55.990,0:21:57.540
What kind of computation?
0:21:57.540,0:22:00.130
Well... Solomonoff induction.
0:22:00.130,0:22:03.820
And... they have some short range links,[br]to their neighbors.
0:22:03.820,0:22:05.690
Which comes almost for free, because
0:22:05.690,0:22:08.490
well, they are connected to them,[br]they are in the direct neighborhood.
0:22:08.490,0:22:10.050
And they have some long range connectivity,
0:22:10.050,0:22:13.000
so you can combine everything[br]in your cortex with everything else.
0:22:13.000,0:22:14.900
So you need some kind of global switchboard.
0:22:14.900,0:22:17.630
Some grid like architecture[br]of long range connections.
0:22:17.630,0:22:18.900
They are going to be more expensive,
0:22:18.900,0:22:20.640
they are going to be slower,
0:22:20.640,0:22:23.590
but they are going to be there.
0:22:23.590,0:22:26.070
So how can we optimize[br]what these guys are doing?
0:22:26.070,0:22:28.270
In some sense it's like an economy.
0:22:28.270,0:22:31.460
It's not the kind of engineered system[br]we often use in machine learning.
0:22:31.460,0:22:32.780
It's really an economy.
0:22:32.780,0:22:35.560
The question is, you have a fixed number of[br]elements,
0:22:35.560,0:22:37.970
how can you do the most valuable stuff with[br]them.
0:22:37.970,0:22:41.030
Fixed resources, most valuable stuff: this[br]problem is economics.
0:22:41.030,0:22:43.320
So you have an economy of information brokers.
0:22:43.320,0:22:45.830
Every one of these guys,[br]this little cortical columns,
0:22:45.830,0:22:48.150
is a very simplistic information broker.
0:22:48.150,0:22:50.950
And they trade rewards against negentropy,
0:22:50.950,0:22:54.140
against reducing entropy in the world.
0:22:54.140,0:22:55.790
And to do this, as we just saw
0:22:55.790,0:22:58.890
they need some kind of standardized interface.
0:22:58.890,0:23:02.090
And internally, to use this interface[br]they are going to
0:23:02.090,0:23:03.880
have some kind of state machine.
0:23:03.880,0:23:05.660
And then they are going to pass messages
0:23:05.660,0:23:07.400
between each other.
0:23:07.400,0:23:08.630
And what are these messages?
0:23:08.630,0:23:11.100
Well, it's going to be hard[br]to discover these messages,
0:23:11.100,0:23:12.800
by looking at brains.
0:23:12.800,0:23:14.800
Because it's very difficult to see in brains,
0:23:14.800,0:23:15.450
what they are actually doing.
0:23:15.450,0:23:17.250
You just see all these neurons.
0:23:17.250,0:23:18.790
And if we had been waiting for neuroscience
0:23:18.790,0:23:20.970
to discover anything, we wouldn't even have
0:23:20.970,0:23:22.590
gradient descent or anything else.
0:23:22.590,0:23:23.720
We wouldn't have neural learning.
0:23:23.720,0:23:25.420
We wouldn't have all these advances in AI.
0:23:25.420,0:23:28.230
Jürgen Schmidhuber said that the last big
0:23:28.230,0:23:30.010
contribution of neuroscience to
0:23:30.010,0:23:32.220
artificial intelligence[br]was about 50 years ago.
0:23:32.220,0:23:34.280
That's depressing, and it might be
0:23:34.280,0:23:37.870
overstating the unimportance of neuroscience,
0:23:37.870,0:23:39.490
because neuroscience is very important,
0:23:39.490,0:23:41.090
once you know what you are looking for.
0:23:41.090,0:23:42.510
You can actually often find this,
0:23:42.510,0:23:44.320
and see whether you are on the right track.
0:23:44.320,0:23:45.860
But it's very difficult to use neuroscience
0:23:45.860,0:23:47.940
to understand how the brain works.
0:23:47.940,0:23:49.290
Because it's really like understanding
0:23:49.290,0:23:53.230
flight by looking at birds through a microscope.
0:23:53.230,0:23:55.150
So, what are these messages?
0:23:55.150,0:23:57.850
You are going to need messages,[br]that tell these cortical columns
0:23:57.850,0:24:00.160
to join themselves into a structure.
0:24:00.160,0:24:01.990
And to unlink again once they're done.
0:24:01.990,0:24:03.690
You need ways that they can request each other
0:24:03.690,0:24:06.040
to perform computations for them.
0:24:06.040,0:24:07.510
You need ways they can inhibit each other
0:24:07.510,0:24:08.320
when they are linked up.
0:24:08.320,0:24:10.990
So they don't do conflicting computations.
0:24:10.990,0:24:12.940
Then they need to tell you whether the computation,
0:24:12.940,0:24:14.110
the result of the computation
0:24:14.110,0:24:16.730
that they are asked to do is probably false.
0:24:16.730,0:24:19.340
Or whether it's probably true,[br]but you still need to wait for others,
0:24:19.340,0:24:21.990
to tell you whether the details worked out.
0:24:21.990,0:24:24.240
Or whether it's confirmed true that the concept
0:24:24.240,0:24:26.730
that they stand for is actually the case.
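The message vocabulary described here (link, unlink, request, inhibit, probably false, probably true pending details, confirmed true) can be sketched as a tiny protocol. This is purely an illustrative toy; the `Msg` and `Column` names are my own invention, not established neuroscience:

```python
from enum import Enum, auto

class Msg(Enum):
    # hypothetical message types between cortical columns, as listed in the talk
    LINK = auto()            # join into a shared structure
    UNLINK = auto()          # disassemble once the task is done
    REQUEST = auto()         # ask another column to perform a computation
    INHIBIT = auto()         # suppress conflicting computations while linked
    PROBABLY_FALSE = auto()  # the result is probably not the case
    PROBABLY_TRUE = auto()   # plausible, but details are still pending
    CONFIRMED_TRUE = auto()  # the concept the column stands for holds

class Column:
    """A column exposes only this interface; its internals stay encapsulated."""
    def __init__(self, name):
        self.name = name
        self.links = set()

    def receive(self, sender, msg):
        if msg is Msg.LINK:
            self.links.add(sender)
        elif msg is Msg.UNLINK:
            self.links.discard(sender)
        elif msg is Msg.REQUEST:
            # a real column would run its learned function here
            return Msg.PROBABLY_TRUE

a, b = Column("edge-detector"), Column("shape")
a.receive(b, Msg.LINK)
assert b in a.links
assert a.receive(b, Msg.REQUEST) is Msg.PROBABLY_TRUE
```

The point of the enum is the lego analogy: whatever diversity sits inside a column is hidden behind a small, fixed set of message types.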
0:24:26.730,0:24:28.150
And then you want to have learning,
0:24:28.150,0:24:29.630
to tell you how well this worked.
0:24:29.630,0:24:31.390
So you will have to announce a bounty,
0:24:31.390,0:24:34.380
that tells them to link up, and a kind of reward signal
0:24:34.380,0:24:36.740
that makes them do the computation in the first place.
0:24:36.740,0:24:38.680
And then you want to have[br]some kind of reward signal
0:24:38.680,0:24:40.550
once you as an organism got the result.
0:24:40.550,0:24:42.280
When you reach your goal, because you made
0:24:42.280,0:24:45.810
the disturbance go away,[br]or whatever: you consumed the cake.
0:24:45.810,0:24:47.710
And then you will have[br]some kind of reward signal
0:24:47.710,0:24:49.250
that you give to everybody
0:24:49.250,0:24:50.650
that was involved in this.
0:24:50.650,0:24:52.720
And this reward signal facilitates learning,
0:24:52.720,0:24:55.230
so the difference between the announced reward
0:24:55.230,0:24:57.530
and the consumption reward is the learning signal
0:24:57.530,0:24:58.740
for these guys.
0:24:58.740,0:25:00.210
So they can learn how to play together,
0:25:00.210,0:25:02.700
and how to do Solomonoff induction.
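The learning rule just described, the difference between the announced reward and the consumed reward, resembles a reward-prediction error. A minimal sketch under that assumption (the function names and numbers are invented for illustration):

```python
def learning_signal(announced, consumed):
    # difference between the reward announced as a bounty
    # and the reward actually consumed at the goal
    return consumed - announced

def update_link(weight, announced, consumed, lr=0.1):
    # strengthen coalitions that delivered more than promised, weaken the rest
    return weight + lr * learning_signal(announced, consumed)

w = update_link(weight=0.5, announced=1.0, consumed=0.4)
assert w < 0.5  # the coalition over-promised, so its links weaken
```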
0:25:02.700,0:25:04.660
Now, I've told you that Solomonoff induction
0:25:04.660,0:25:05.280
is not computable.
0:25:05.280,0:25:07.630
And it's mostly because of two things,
0:25:07.630,0:25:09.280
First of all, it needs infinite resources
0:25:09.280,0:25:11.200
to compare all the possible models.
0:25:11.200,0:25:13.530
And the other one is that we do not know
0:25:13.530,0:25:15.440
the prior probability for our Bayesian model.
0:25:15.440,0:25:19.280
We do not know[br]how likely unknown stuff is in the world.
0:25:19.280,0:25:22.520
So what we do instead is,[br]we set some kind of hyperparameter,
0:25:22.520,0:25:25.050
Some kind of default[br]prior probability for concepts,
0:25:25.050,0:25:28.110
that are encoded by cortical columns.
0:25:28.110,0:25:30.580
And if we set these parameters very low,
0:25:30.580,0:25:32.140
then we are going to end up with inferences
0:25:32.140,0:25:35.250
that are quite probable
0:25:35.250,0:25:36.480
for unknown things.
0:25:36.480,0:25:37.690
And then we can test for those.
0:25:37.690,0:25:41.350
If we set this parameter higher, we are going[br]to be very, very creative.
0:25:41.350,0:25:43.670
But we end up with many many theories,
0:25:43.670,0:25:45.140
that are difficult to test.
0:25:45.140,0:25:48.470
Because maybe there are[br]too many theories to test.
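This trade-off can be sketched with a toy Bayesian filter: hypotheses are scored by a default prior times their likelihood, and that default prior is the hyperparameter. The names and numbers are invented for illustration:

```python
def plausible_theories(likelihoods, default_prior, threshold=0.05):
    # keep the theories whose prior-weighted score clears the threshold
    return [h for h, lik in likelihoods.items()
            if default_prior * lik >= threshold]

likelihoods = {"mundane": 0.9, "odd": 0.3, "wild": 0.1}

# low default prior: only well-supported inferences survive, and can be tested
assert plausible_theories(likelihoods, default_prior=0.1) == ["mundane"]

# high default prior: nearly every theory looks worth testing
assert plausible_theories(likelihoods, default_prior=0.6) == [
    "mundane", "odd", "wild"]
```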
0:25:48.470,0:25:50.650
Basically every one of these cortical columns[br]will now tell you,
0:25:50.650,0:25:52.240
when you ask them if they are true:
0:25:52.240,0:25:54.960
"Yes I'm probably true,[br]but i still need to ask others,
0:25:54.960,0:25:56.980
to work on the details"
0:25:56.980,0:25:58.670
So these others are going to get active,
0:25:58.670,0:26:00.640
and they are asked by the requesting element:
0:26:00.640,0:26:01.730
"Are you going to be true?",
0:26:01.730,0:26:04.380
and they say "Yeah, probably yes,[br]I just have to work on the details"
0:26:04.380,0:26:05.930
and they are going to ask even more.
0:26:05.930,0:26:07.980
So your brain is going to light up like a[br]Christmas tree,
0:26:07.980,0:26:10.240
and do all these amazing computations,
0:26:10.240,0:26:12.450
and you see connections everywhere,[br]most of them are wrong.
0:26:12.450,0:26:16.310
You are basically in a psychotic state[br]if your hyperparameter is too high.
0:26:16.310,0:26:20.790
Your brain invents more theories[br]than it can disprove.
0:26:20.790,0:26:24.550
Would it actually sometimes be good[br]to be in this state?
0:26:24.550,0:26:27.850
You bet. So I think every night our brain[br]goes into this state.
0:26:27.850,0:26:31.720
We turn up this hyperparameter.[br]We dream. We get all kinds of
0:26:31.720,0:26:34.100
weird connections, and we get to see connections,
0:26:34.100,0:26:36.140
that otherwise we couldn't be seeing.
0:26:36.140,0:26:38.080
Because they are highly improbable.
0:26:38.080,0:26:42.750
But sometimes they hold, and we see: "Oh[br]my God, DNA is organized in a double helix".
0:26:42.750,0:26:44.640
And this is what we remember in the morning.
0:26:44.640,0:26:46.870
All the other stuff is deleted.
0:26:46.870,0:26:48.440
So we usually don't form long term memories
0:26:48.440,0:26:51.480
in dreams, if everything goes well.
0:26:51.480,0:26:56.670
If you accidentally trip up your modulators,
0:26:56.670,0:26:59.100
for instance by consuming illegal substances,
0:26:59.100,0:27:01.690
or because you have just gone randomly psychotic,
0:27:01.690,0:27:04.600
you are basically entering[br]a dreaming state, I guess.
0:27:04.600,0:27:06.990
You get to a state[br]where the brain starts inventing more
0:27:06.990,0:27:10.860
concepts than it can disprove.
0:27:10.860,0:27:13.600
So you want to have a state[br]where this is well balanced.
0:27:13.600,0:27:16.180
And the difference between[br]highly creative people,
0:27:16.180,0:27:20.070
and very religious people is probably[br]a different setting of this hyperparameter.
0:27:20.070,0:27:21.890
So I suspect that people[br]that are geniuses,
0:27:21.890,0:27:23.880
like people like Einstein and so on,
0:27:23.880,0:27:26.600
do not simply have better neurons than others.
0:27:26.600,0:27:29.130
What they mostly have is a hyperparameter
0:27:29.130,0:27:33.860
that is very finely tuned, so they can get a[br]better balance than other people
0:27:33.860,0:27:43.850
in finding theories that might be true,[br]but can still be disproven.
0:27:43.850,0:27:49.480
So inventiveness could be[br]a hyperparameter in the brain.
0:27:49.480,0:27:54.169
If you want to measure[br]the quality of the beliefs that we have,
0:27:54.169,0:27:56.370
we are going to have to have[br]some kind of cost function,
0:27:56.370,0:27:58.710
which is based on the motivational system.
0:27:58.710,0:28:02.400
And to identify if a belief[br]is good or not, we can use abstract criteria,
0:28:02.400,0:28:06.440
for instance how well does it predict the[br]world, or how well does it reduce uncertainty
0:28:06.440,0:28:07.590
in the world,
0:28:07.590,0:28:10.020
or is it consistent and sparse.
0:28:10.020,0:28:14.080
And then of course utility: how well does[br]it help me to satisfy my needs.
0:28:14.080,0:28:18.920
And the motivational system is going[br]to evaluate all these things by giving a signal.
0:28:18.920,0:28:24.200
And the first kind of signal[br]is the possible reward, if we are able to compute
0:28:24.200,0:28:25.020
the task.
0:28:25.020,0:28:27.430
And this is probably done by dopamine.
0:28:27.430,0:28:30.350
So we have a very small area in the brain,[br]the substantia nigra,
0:28:30.350,0:28:33.610
and the ventral tegmental area,[br]and they produce dopamine.
0:28:33.610,0:28:38.180
And this gets fed into the lateral frontal cortex[br]and the frontal lobe,
0:28:38.180,0:28:41.920
which control attention,[br]and tell you what things to do.
0:28:41.920,0:28:46.020
And if we have successfully done[br]what we wanted to do,
0:28:46.020,0:28:49.300
we consume the rewards.
0:28:49.300,0:28:51.940
And we do this with another signal,[br]which is serotonin.
0:28:51.940,0:28:53.480
It's also produced by the motivational system,
0:28:53.480,0:28:55.870
in this very small area, the Raphe nuclei.
0:28:55.870,0:28:58.690
And it feeds into all the areas of the brain[br]where learning is necessary.
0:28:58.690,0:29:02.160
A connection is strengthened[br]once you get the result.
0:29:02.160,0:29:07.559
These two substances are emitted[br]by the motivational system.
0:29:07.559,0:29:09.710
The motivational system is a bunch of needs,
0:29:09.710,0:29:11.510
essentially regulated below the cortex.
0:29:11.510,0:29:14.490
They are not part of your mental representations.
0:29:14.490,0:29:16.930
They are part of something[br]that is more primary than this.
0:29:16.930,0:29:19.360
This is what makes us go,[br]this is what makes us human.
0:29:19.360,0:29:22.290
This is not our rationality, this is what we want.
0:29:22.290,0:29:27.000
And the needs are physiological,[br]they are social, they are cognitive.
0:29:27.000,0:29:28.960
And you are pretty much born with them.
0:29:28.960,0:29:30.470
They cannot be totally adaptive,
0:29:30.470,0:29:33.340
because if they were,[br]we wouldn't be doing anything.
0:29:33.340,0:29:35.390
The needs are resistive.
0:29:35.390,0:29:38.290
They are pushing us against the world.
0:29:38.290,0:29:40.170
If you wouldn't have all this needs,
0:29:40.170,0:29:41.740
If you wouldn't have this motivational system,
0:29:41.740,0:29:43.630
you would just be doing what's best for you.
0:29:43.630,0:29:45.150
Which means collapse on the ground,
0:29:45.150,0:29:49.010
be a vegetable, rot, give in to gravity.
0:29:49.010,0:29:50.270
Instead you do all these unpleasant things,
0:29:50.270,0:29:52.690
you get up in the morning,[br]you eat, you have sex,
0:29:52.690,0:29:54.120
you do all these crazy things.
0:29:54.120,0:29:58.809
And it's only because the[br]motivational system forces you to.
0:29:58.809,0:30:00.850
The motivational system[br]takes this bunch of matter,
0:30:00.850,0:30:02.890
and makes us do all these strange things,
0:30:02.890,0:30:05.940
just so genomes get replicated and so on.
0:30:05.940,0:30:10.470
So to do this, we are going to build[br]resistance against the world.
0:30:10.470,0:30:13.360
And the motivational system[br]is in a sense forcing us,
0:30:13.360,0:30:15.470
to do all these things by giving us needs,
0:30:15.470,0:30:18.330
and the needs have some kind[br]of target value and current value.
0:30:18.330,0:30:21.850
If we have a differential[br]between the target value and the current value,
0:30:21.850,0:30:24.590
we perceive some urgency[br]to do something about the need.
0:30:24.590,0:30:26.680
And when the target value[br]approaches the current value
0:30:26.680,0:30:28.660
we get pleasure, which is a learning signal.
0:30:28.660,0:30:30.540
If it gets away from it[br]we get a displeasure signal,
0:30:30.540,0:30:31.870
which is also a learning signal.
0:30:31.870,0:30:35.370
And we can use this to structure[br]our understanding of the world.
0:30:35.370,0:30:36.870
To understand what goals are and so on.
0:30:36.870,0:30:40.020
Goals are learned. Needs are not.
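The target-value/current-value scheme can be put into a toy model like this (the `Need` class and its numbers are my own illustration, not a model from the talk):

```python
class Need:
    # a need with a target value and a current value, as described
    def __init__(self, target, current):
        self.target = target
        self.current = current

    @property
    def urgency(self):
        # the bigger the differential, the more urgent the need feels
        return abs(self.target - self.current)

    def update(self, new_current):
        # positive return value: pleasure (the gap shrank);
        # negative: displeasure (the gap grew); both are learning signals
        old_gap = self.urgency
        self.current = new_current
        return old_gap - self.urgency

food = Need(target=1.0, current=0.4)
assert abs(food.urgency - 0.6) < 1e-9
signal = food.update(0.9)  # eating: the current value approaches the target
assert signal > 0          # pleasure, a learning signal
```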
0:30:40.020,0:30:42.780
To learn we need success[br]and failure in the world.
0:30:42.780,0:30:45.940
But to do things we need anticipated reward.
0:30:45.940,0:30:48.120
So it's dopamine that makes the brain go round.
0:30:48.120,0:30:50.560
Dopamine makes you do things.
0:30:50.560,0:30:52.750
But in order to do this in the right way,
0:30:52.750,0:30:54.610
you have to make sure[br]that the cells cannot
0:30:54.610,0:30:55.880
produce dopamine themselves.
0:30:55.880,0:30:59.100
If they do this they can start[br]to drive others to work for them.
0:30:59.100,0:31:01.870
You are going to get something like[br]bureaucracy in your neocortex,
0:31:01.870,0:31:05.650
where different bosses try[br]to set up others to do their own bidding
0:31:05.650,0:31:07.910
and pitch them against other groups in the neocortex.
0:31:07.910,0:31:09.730
It's going to be horrible.
0:31:09.730,0:31:12.210
So you want to have some kind of central authority,
0:31:12.210,0:31:16.290
that makes sure that the cells[br]do not produce dopamine themselves.
0:31:16.290,0:31:19.679
It's only produced in a[br]very small area and then given out,
0:31:19.679,0:31:21.059
and passed through the system.
0:31:21.059,0:31:23.350
And after you're done with it, it's going to be gone,
0:31:23.350,0:31:26.070
so there is no hoarding of the dopamine.
0:31:26.070,0:31:29.770
And in our society the role of dopamine[br]is played by money.
0:31:29.770,0:31:32.150
Money is not reward in itself.
0:31:32.150,0:31:35.570
It's in some sense a way[br]that you can trade against the reward.
0:31:35.570,0:31:36.850
You cannot eat money.
0:31:36.850,0:31:40.500
You can take it later and trade it[br]for an arbitrary reward.
0:31:40.500,0:31:45.400
And in some sense money is the dopamine[br]that makes organizations
0:31:45.400,0:31:48.410
and society, companies[br]and many individuals do things.
0:31:48.410,0:31:50.500
They do stuff because of money.
0:31:50.500,0:31:53.309
But money, if you compare it to dopamine,[br]is pretty broken,
0:31:53.309,0:31:54.850
because you can hoard it.
0:31:54.850,0:31:57.400
So you are going to have these[br]cortical columns in the real world,
0:31:57.400,0:31:59.670
which are individual people[br]or individual corporations.
0:31:59.670,0:32:03.250
They are hoarding the dopamine,[br]they sit on this very big pile of dopamine.
0:32:03.250,0:32:07.890
They are starving the rest[br]of society of dopamine.
0:32:07.890,0:32:10.630
They don't give it away,[br]and they can make it do their bidding.
0:32:10.630,0:32:13.970
So for instance they can pitch a[br]substantial part of society
0:32:13.970,0:32:16.130
against the understanding of global warming,
0:32:16.130,0:32:20.110
because they profit from global warming[br]or from technology that leads to global warming,
0:32:20.110,0:32:22.850
which is very bad for all of us. applause
0:32:22.850,0:32:28.850
So our society is a nervous system[br]that lies to itself.
0:32:28.850,0:32:30.429
How can we overcome this?
0:32:30.429,0:32:32.480
Actually, we don't know.
0:32:32.480,0:32:34.639
To do this we would need[br]to have some kind of centralized,
0:32:34.639,0:32:36.660
top-down reward motivational system.
0:32:36.660,0:32:39.010
We have this for instance in the military,
0:32:39.010,0:32:42.520
you have this system of[br]military rewards that you get.
0:32:42.520,0:32:44.950
And these are completely[br]controlled from the top.
0:32:44.950,0:32:47.260
Also within working organizations[br]you have this.
0:32:47.260,0:32:49.600
In corporations you have centralized rewards,
0:32:49.600,0:32:51.850
it's not like rewards flow bottom-up,
0:32:51.850,0:32:55.120
they always flow top-down.
0:32:55.120,0:32:57.850
And there was an attempt[br]to model society in such a way.
0:32:57.850,0:33:03.380
That was in Chile in the early 1970s,[br]the Allende government had the idea
0:33:03.380,0:33:07.320
to redesign the economy and society[br]using cybernetics.
0:33:07.320,0:33:12.590
So Allende invited a bunch of cyberneticians[br]to redesign the Chilean economy.
0:33:12.590,0:33:14.550
And this was meant to be the control room,
0:33:14.550,0:33:17.460
where Allende and his chief economists[br]would be sitting,
0:33:17.460,0:33:19.709
to look at what the economy is doing.
0:33:19.709,0:33:23.880
We don't know how this would have worked out,[br]because we know how it ended.
0:33:23.880,0:33:27.260
In 1973 there was this big putsch in Chile,
0:33:27.260,0:33:30.290
and this experiment ended, among other things.
0:33:30.290,0:33:34.170
Maybe it would have worked, who knows?[br]Nobody tried it.
0:33:34.170,0:33:38.370
So, there is something else[br]going on in people,
0:33:38.370,0:33:40.030
beyond the motivational system.
0:33:40.030,0:33:43.610
That is: we have social criteria, for learning.
0:33:43.610,0:33:47.670
We also check if our ideas[br]are normatively acceptable.
0:33:47.670,0:33:50.510
And this is actually a good thing,[br]because individuals may shortcut
0:33:50.510,0:33:52.590
learning through communication.
0:33:52.590,0:33:55.260
Other people have learned stuff[br]that we don't need to learn ourselves.
0:33:55.260,0:33:59.800
We can build on this, so we can accelerate[br]learning by many orders of magnitude,
0:33:59.800,0:34:00.970
which makes culture possible.
0:34:00.970,0:34:04.190
And which makes almost anything possible,[br]because if you were on your own
0:34:04.190,0:34:06.860
you would not find out[br]very much in your lifetime.
0:34:08.520,0:34:11.270
You know what they say:[br]everything that you do,
0:34:11.270,0:34:14.250
you do by standing on the shoulders of giants.
0:34:14.250,0:34:17.779
Or on a big pile of dwarfs,[br]it works either way.
0:34:17.779,0:34:27.089
laughterapplause
0:34:27.089,0:34:30.379
Social learning usually outperforms[br]individual learning. You can test this.
0:34:30.379,0:34:33.949
But in the case of conflict[br]between different social truths,
0:34:33.949,0:34:36.659
you need some way to decide who to believe.
0:34:36.659,0:34:39.498
So you have some kind of reputation[br]estimate for different authorities,
0:34:39.498,0:34:42.399
and you use this to check whom you believe.
0:34:42.399,0:34:45.748
And the problem of course is that[br]in existing society, in real society,
0:34:45.748,0:34:48.389
this reputation system is going[br]to reflect the power structure,
0:34:48.389,0:34:51.699
which may distort your beliefs systematically.
0:34:51.699,0:34:54.759
Social learning therefore leads groups[br]to synchronize their opinions.
0:34:54.759,0:34:57.220
And the opinions take on another role.
0:34:57.220,0:35:02.180
They become an important part[br]of signalling which group you belong to.
0:35:02.180,0:35:06.630
So opinions start to signal[br]group loyalty in societies.
0:35:06.630,0:35:11.170
And people in this, and that's the actual world,[br]should optimize not for getting the best possible
0:35:11.170,0:35:12.619
opinions in terms of truth.
0:35:12.619,0:35:17.289
They should optimize[br]for having the best possible opinion,
0:35:17.289,0:35:19.799
with respect to agreement with their peers.
0:35:19.799,0:35:22.029
If you have the same opinion[br]as your peers, you can signal them
0:35:22.029,0:35:24.299
that you are part of their ingroup,[br]and they are going to like you.
0:35:24.299,0:35:28.160
If you don't do this, chances are[br]they are not going to like you.
0:35:28.160,0:35:34.049
There is rarely any benefit in life to be[br]in disagreement with your boss. Right?
0:35:34.049,0:35:39.230
So, if you evolve an opinion forming system[br]in these circumstances,
0:35:39.230,0:35:41.220
you should end up[br]with an opinion forming system,
0:35:41.220,0:35:42.980
that leaves you with the most useful opinion,
0:35:42.980,0:35:45.400
which is the dominant opinion in your environment.
0:35:45.400,0:35:48.400
And it turns out, most people are able[br]to do this effortlessly.
0:35:48.400,0:35:50.969
laughter
0:35:50.969,0:35:55.529
They have an instinct that makes them adopt[br]the dominant opinion in their social environment.
0:35:55.529,0:35:56.599
It's amazing, right?
0:35:56.599,0:36:01.040
And if you are nerd like me,[br]you don't get this.
0:36:01.040,0:36:08.999
laughingapplause
0:36:08.999,0:36:12.999
So in the world out there,[br]explanations piggyback on your group allegiance.
0:36:12.999,0:36:15.900
For instance you will find that there is a[br]substantial group of people that believes
0:36:15.900,0:36:18.380
the minimum wage is good[br]for the economy and for you
0:36:18.380,0:36:20.549
and another one that believes it's bad.
0:36:20.549,0:36:23.470
And it's pretty much aligned[br]with political parties.
0:36:23.470,0:36:25.970
It's not aligned with different[br]understandings of the economy,
0:36:25.970,0:36:30.740
because nobody understands[br]how the economy works.
0:36:30.740,0:36:36.330
And if you are a nerd you try to understand[br]the world in terms of what is true and false.
0:36:36.330,0:36:40.680
You try to prove everything by putting it[br]on some kind of true-and-false level,
0:36:40.680,0:36:43.589
and if you are not a nerd[br]you try to get to right and wrong
0:36:43.589,0:36:45.609
you try to understand[br]whether you are in alignment
0:36:45.609,0:36:49.559
with what's objectively right[br]in your society, right?
0:36:49.559,0:36:55.680
So I guess that nerds are people that have[br]a defect in their opinion forming system.
0:36:55.680,0:36:57.069
laughing
0:36:57.069,0:37:00.609
And usually that's maladaptive[br]and under normal circumstances
0:37:00.609,0:37:03.099
nerds would mostly be filtered[br]from the world,
0:37:03.099,0:37:06.529
because they don't reproduce so well,[br]because people don't like them so much.
0:37:06.529,0:37:07.960
laughing
0:37:07.960,0:37:11.119
And then something very strange happened.[br]The computer revolution came along and
0:37:11.119,0:37:14.170
suddenly if you argue with the computer[br]it doesn't help you if you have the
0:37:14.170,0:37:17.849
normatively correct opinion you need to[br]be able to understand things in terms of
0:37:17.849,0:37:26.029
true and false, right? applause
0:37:26.029,0:37:29.779
So now we have this strange situation that[br]the weird people that have these offensive,
0:37:29.779,0:37:33.410
strange opinions and that really don't[br]mix well with the real normal people
0:37:33.410,0:37:38.119
get all these high-paying jobs[br]and we don't understand how that is happening.
0:37:38.119,0:37:42.599
And it's because suddenly[br]our maladaptation is a benefit.
0:37:42.599,0:37:47.300
But out there, there is this world of[br]social norms, and it's made of paper walls.
0:37:47.300,0:37:50.349
There are all these things that are true[br]and false in a society that make
0:37:50.349,0:37:51.549
people behave.
0:37:51.549,0:37:57.390
It's like these Japanese walls, there.[br]They made palaces out of paper, basically.
0:37:57.390,0:38:00.339
And these are walls by convention.
0:38:00.339,0:38:04.009
They exist because people agree[br]that this is a wall.
0:38:04.009,0:38:06.630
And if you are a hypnotist[br]like Donald Trump
0:38:06.630,0:38:11.109
you can see that these are paper walls[br]and you can shift them.
0:38:11.109,0:38:14.079
And if you are a nerd like me,[br]you cannot see these paper walls.
0:38:14.079,0:38:20.230
If you pay close attention you see that[br]people move and then suddenly, in midair,
0:38:20.230,0:38:22.869
they make a turn. Why would they do this?
0:38:22.869,0:38:24.360
There must be something[br]that they see there
0:38:24.360,0:38:26.549
and this is basically a normative agreement.
0:38:26.549,0:38:29.690
And you can infer what this is[br]and then you can manipulate it and understand it.
0:38:29.690,0:38:32.640
Of course you can fix this, you can[br]debug yourself in this regard,
0:38:32.640,0:38:34.690
but it's something that is hard[br]to see for nerds.
0:38:34.690,0:38:38.109
So in some sense they have a superpower:[br]they can think straight in the presence
0:38:38.109,0:38:39.079
of others.
0:38:39.079,0:38:42.590
But often they end up in their living room[br]and people are upset.
0:38:42.590,0:38:45.810
laughter
0:38:45.810,0:38:49.789
Learning in a complex domain cannot[br]guarantee that you find the global maximum.
0:38:49.789,0:38:53.970
We know that we cannot find truth,[br]because we cannot recognize whether we live
0:38:53.970,0:38:57.059
on a playing field or on a[br]simulated playing field.
0:38:57.059,0:39:00.579
But what we can do is, we can try to[br]approach a maximum.
0:39:00.579,0:39:02.339
But we don't know if that[br]is the global maximum.
0:39:02.339,0:39:05.509
We will always move along[br]some kind of belief gradient.
0:39:05.509,0:39:09.110
We will take certain elements of[br]our belief and then give them up
0:39:09.110,0:39:12.650
for new elements of belief, based on[br]thinking that this new element
0:39:12.650,0:39:15.049
of belief is better than the one[br]we give up.
0:39:15.049,0:39:17.079
So we always move along[br]some kind of gradient.
0:39:17.079,0:39:19.789
and the truth does not matter,[br]the gradient matters.
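Moving along a belief gradient is essentially greedy hill climbing on a local score, with truth never consulted. A toy sketch under that assumption (the landscape and numbers are invented):

```python
def climb(belief, neighbors, score):
    # greedy hill climbing: adopt a neighboring belief whenever it scores
    # better; we only ever compare against the current belief
    while True:
        better = [b for b in neighbors(belief) if score(b) > score(belief)]
        if not better:
            return belief  # a local maximum, not necessarily the global one
        belief = max(better, key=score)

# toy landscape: beliefs are integers, with a local peak and a global peak
score = {0: 1, 1: 3, 2: 2, 3: 1, 4: 5}.get
neighbors = lambda b: [x for x in (b - 1, b + 1) if 0 <= x <= 4]

assert climb(0, neighbors, score) == 1  # stuck on the local peak
assert climb(3, neighbors, score) == 4  # another start reaches the global one
```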
0:39:19.789,0:39:23.650
If you think about teaching for a moment,[br]when I started teaching I often thought:
0:39:23.650,0:39:27.489
Okay, I understand the truth of the[br]subject, the students don't, so I have to
0:39:27.489,0:39:30.069
give this to them[br]and at some point I realized:
0:39:30.069,0:39:33.450
Oh, I changed my mind so many times[br]in the past and I'm probably not going to
0:39:33.450,0:39:35.769
stop changing it in the future.
0:39:35.769,0:39:38.710
I'm always moving along a gradient[br]and I keep moving along a gradient.
0:39:38.710,0:39:43.099
So I'm not moving to truth,[br]I'm moving forward.
0:39:43.099,0:39:45.230
And when we teach our kids[br]we should probably not think about
0:39:45.230,0:39:46.390
how to give them truth.
0:39:46.390,0:39:51.039
We should think about how to put them onto[br]an interesting gradient, that makes them
0:39:51.039,0:39:55.079
explore the world,[br]the world of possible beliefs.
0:39:55.079,0:40:03.150
applause
0:40:03.150,0:40:05.359
And these possible beliefs[br]lead us into local minima.
0:40:05.359,0:40:08.150
This is inevitable. These are like valleys,[br]and sometimes these valleys are
0:40:08.150,0:40:11.210
neighbouring and we don't understand[br]what the people in the neighbouring
0:40:11.210,0:40:15.700
valley are doing unless we are willing to[br]retrace the steps they have taken.
0:40:15.700,0:40:19.569
And if we want to get from one valley[br]into the next, we will have to have some kind
0:40:19.569,0:40:21.789
of energy that moves us over the hill.
0:40:21.789,0:40:27.739
We have to have a trajectory where every[br]step works by finding a reason to give up a
0:40:27.739,0:40:30.380
bit of our current belief and adopt a[br]new belief, because it's somehow
0:40:30.380,0:40:34.739
more useful, more relevant,[br]more consistent and so on.
0:40:34.739,0:40:38.349
Now the problem is that this is not[br]monotonic: we cannot guarantee that
0:40:38.349,0:40:40.499
we're always climbing,[br]because the problem is, that
0:40:40.499,0:40:44.599
the beliefs themselves can change[br]our evaluation of the beliefs.
0:40:44.599,0:40:50.390
It could be for instance that you start[br]believing in a religion and this religion
0:40:50.390,0:40:54.299
could tell you: If you give up the belief[br]in the religion, you're going to face
0:40:54.299,0:40:56.500
eternal damnation in hell.
0:40:56.500,0:40:59.489
As long as you believe in the religion,[br]it's going to be very expensive for you
0:40:59.489,0:41:02.430
to give up the religion, right?[br]If you truly believe in it.
0:41:02.430,0:41:05.109
You're now caught[br]in some kind of attractor.
0:41:05.109,0:41:08.680
Before you believe in the religion, it is not[br]very dangerous, but once you've gotten
0:41:08.680,0:41:13.019
into the attractor it's very,[br]very hard to get out.
0:41:13.019,0:41:16.309
So these belief attractors[br]are actually quite dangerous.
0:41:16.309,0:41:19.920
You can get not only into chaotic behaviour,[br]where you cannot guarantee that your
0:41:19.920,0:41:23.470
current belief is better than the last one[br]but you can also get into beliefs that are
0:41:23.470,0:41:26.849
almost impossible to change.
0:41:26.849,0:41:33.739
And that makes it possible to program[br]people to work in societies.
0:41:33.739,0:41:37.529
Social domains are structured by values.[br]Basically a preference is what makes you
0:41:37.529,0:41:40.769
do things, because you anticipate[br]pleasure or displeasure,
0:41:40.769,0:41:45.339
and values make you do things[br]even if you don't anticipate any pleasure.
0:41:45.339,0:41:49.809
These are virtual rewards.[br]They make us do things, because we believe
0:41:49.809,0:41:51.799
that there is stuff[br]that is more important than us.
0:41:51.799,0:41:55.109
This is what values are about.
0:41:55.109,0:42:00.690
And these values are the source[br]of what we would call true meaning, deeper meaning.
0:42:00.690,0:42:05.220
There is something that is more important[br]than us, something that we can serve.
0:42:05.220,0:42:08.769
This is what we usually perceive as a[br]meaningful life: one which
0:42:08.769,0:42:12.759
is in the service of values that are more[br]important than I myself,
0:42:12.759,0:42:15.749
because after all I'm not that important.[br]I'm just this machine that runs around
0:42:15.749,0:42:20.789
and tries to optimize its pleasure and[br]pain, which is kinda boring.
0:42:20.789,0:42:26.329
So my PI has puzzled me, my principal[br]investigator at the Harvard department,
0:42:26.329,0:42:29.349
where I have my desk, Martin Nowak.
0:42:29.349,0:42:33.970
He said that meaning cannot exist without[br]god; you are either religious,
0:42:33.970,0:42:36.950
or you are a nihilist.
0:42:36.950,0:42:42.789
And this guy is the head of the[br]department for evolutionary dynamics.
0:42:42.789,0:42:45.769
Also he is a Catholic. chuckling
0:42:45.769,0:42:49.729
So this really puzzled me and I tried[br]to understand what he meant by this.
0:42:49.729,0:42:53.200
Typically if you are a good atheist[br]like me,
0:42:53.200,0:42:57.920
you tend to attack gods that are[br]structured like this, religious gods,
0:42:57.920,0:43:02.940
that are institutional, they are personal,[br]they are some kind of person.
0:43:02.940,0:43:08.239
They do care about you, they prescribe[br]norms, for instance: don't masturbate,
0:43:08.239,0:43:10.060
it's bad for you.
0:43:10.060,0:43:14.759
Many of these norms are very much aligned[br]with societal institutions, for instance
0:43:14.759,0:43:20.799
don't question the authorities,[br]god wants them to rule over you,
0:43:20.799,0:43:23.839
and be monogamous and so on and so on.
0:43:23.839,0:43:28.979
So they prescribe norms that do not make[br]a lot of sense in terms of a being that
0:43:28.979,0:43:31.200
creates worlds every now and then,
0:43:31.200,0:43:34.619
but they make sense in terms of[br]what you should be doing to be a
0:43:34.619,0:43:36.730
functioning member of society.
0:43:36.730,0:43:40.799
And this god also does things: it[br]creates worlds, it likes to manifest as
0:43:40.799,0:43:43.660
burning shrubbery and so on. There are[br]many books that describe stories of what
0:43:43.660,0:43:45.700
these gods have allegedly done.
0:43:45.700,0:43:48.819
And it's very hard to test for all these[br]features, which makes these gods very
0:43:48.819,0:43:54.280
improbable to us, and makes atheists[br]very dissatisfied with these gods.
0:43:54.280,0:43:56.569
But then there is a different kind of god.
0:43:56.569,0:43:58.599
This is what we call the spiritual god.
0:43:58.599,0:44:02.410
This spiritual god is independent of[br]institutions, it still does care about you.
0:44:02.410,0:44:06.489
It's probably conscious. It might not be a[br]person. There are not that many stories,
0:44:06.489,0:44:10.579
that you can consistently tell about it,[br]but you might be able to connect to it
0:44:10.579,0:44:15.259
spiritually.
0:44:15.259,0:44:19.470
Then there is a god that is even less[br]expensive. That is god as a transcendental
0:44:19.470,0:44:23.489
principle, and this god is simply the reason[br]why there is something rather than
0:44:23.489,0:44:28.150
nothing. This god is the question the[br]universe is the answer to, this is the
0:44:28.150,0:44:29.600
thing that gives meaning.
0:44:29.600,0:44:31.489
Everything else about it is unknowable.
0:44:31.489,0:44:34.190
This is the god of Thomas Aquinas.
0:44:34.190,0:44:38.089
The god that Thomas Aquinas discovered[br]is not the god of Abraham; this is not the
0:44:38.089,0:44:39.180
religious god.
0:44:39.180,0:44:43.559
It's a god that is basically a principle[br]that brings the universe into existence.
0:44:43.559,0:44:47.140
It's the one that gives[br]the universe its purpose.
0:44:47.140,0:44:50.200
And because every other property[br]is unknowable about this,
0:44:50.200,0:44:52.010
this god is not that expensive.
0:44:52.010,0:44:55.960
Unfortunately it doesn't really work.[br]I mean, Thomas Aquinas tried to prove
0:44:55.960,0:45:00.049
god. He tried to prove a necessary god,[br]a god that has to exist, and
0:45:00.049,0:45:02.779
I think we can only prove a possible god.
0:45:02.779,0:45:05.339
So if you try to prove a necessary god,[br]this god can not exist.
0:45:05.339,0:45:11.650
Which means your god proof is going to[br]fail. You can only prove possible gods.
0:45:11.650,0:45:13.259
And then there is an even more improper god.
0:45:13.259,0:45:15.890
And that's the god of Aristotle and he said:
0:45:15.890,0:45:20.069
"If there is change in the universe,[br]something is going to have to change it."
0:45:20.069,0:45:23.640
There must be something that moves it[br]along from one state to the next.
0:45:23.640,0:45:26.289
So I would say that is the primary[br]computational transition function
0:45:26.289,0:45:35.079
of the universe.[br]laughingapplause
0:45:35.079,0:45:38.439
And Aristotle discovered it.[br]It's amazing isn't it?
0:45:38.439,0:45:41.509
We have to have this because we[br]can not be conscious in a single state.
0:45:41.509,0:45:43.279
We need to move between states[br]to be conscious.
0:45:43.279,0:45:45.979
We need to be processes.
0:45:45.979,0:45:50.859
So we can take our gods and sort them by[br]their metaphysical cost.
0:45:50.859,0:45:53.290
The 1st degree god would be the first mover.
0:45:53.290,0:45:56.069
The 2nd degree god is the god of purpose and meaning.
0:45:56.069,0:45:59.089
The 3rd degree god is the spiritual god.[br]And the 4th degree god is the one bound to
0:45:59.089,0:46:01.229
religious institutions, right?
0:46:01.229,0:46:03.720
So if you take this statement[br]from Martin Nowak,
0:46:03.720,0:46:07.759
"You can not have meaning without god!"[br]I would say: yes! You need at least
0:46:07.759,0:46:14.990
a 2nd degree god to have meaning.[br]So objective meaning can only exist
0:46:14.990,0:46:19.119
with a 2nd degree god. chuckling
0:46:19.119,0:46:22.269
And subjective meaning can exist as a[br]function in a cognitive system of course.
0:46:22.269,0:46:24.180
We don't need objective meaning.
0:46:24.180,0:46:27.410
So we can subjectively feel that there is[br]something more important than us
0:46:27.410,0:46:30.509
and this makes us work in society and[br]makes us perceive that we have values
0:46:30.509,0:46:34.329
and so on, but we don't need to believe[br]that there is something outside of the
0:46:34.329,0:46:36.869
universe to have this.
0:46:36.869,0:46:40.650
So the 4th degree god is the one[br]that is bound to religious institutions,
0:46:40.650,0:46:45.400
it requires a belief attractor and it[br]enables complex norm prescriptions.
0:46:45.400,0:46:48.430
If my theory is right, then it should be[br]much harder for nerds to believe in
0:46:48.430,0:46:52.039
a 4th degree god than for normal people.
0:46:52.039,0:46:56.489
And what this god does is allow you to[br]have state-building mind viruses.
0:46:56.489,0:47:00.269
Basically religion is a mind virus. And[br]the amazing thing about these mind viruses
0:47:00.269,0:47:02.489
is that they structure behaviour[br]in large groups.
0:47:02.489,0:47:06.130
We have evolved to live in small groups[br]of a few hundred individuals, maybe something
0:47:06.130,0:47:07.249
like 150.
0:47:07.249,0:47:10.059
This is roughly the level[br]to which reputation works.
0:47:10.059,0:47:15.369
We can keep track of about 150 people and[br]after this it gets much much worse.
0:47:15.369,0:47:18.290
So in this system where you have[br]reputation people feel responsible
0:47:18.290,0:47:21.349
for each other and they can[br]keep track of their doings
0:47:21.349,0:47:23.049
and society kind of sort of works.
0:47:23.049,0:47:27.789
If you want to go beyond this, you have[br]to write software that controls people.
0:47:27.789,0:47:32.420
And religions were the first software,[br]that did this on a very large scale.
0:47:32.420,0:47:35.319
And in order to keep stable they had to be[br]designed like operating systems
0:47:35.319,0:47:36.039
in some sense.
0:47:36.039,0:47:39.930
They give people different roles[br]like insects in a hive.
0:47:39.930,0:47:44.529
And part of these roles is even[br]to update the religion, but it has to be
0:47:44.529,0:47:48.380
done very carefully and centrally[br]because otherwise the religion will split apart
0:47:48.380,0:47:51.719
and collapse into new religions[br]or be overcome by new ones.
0:47:51.719,0:47:54.259
So there is some kind of[br]evolutionary dynamics that goes on
0:47:54.259,0:47:55.930
with respect to religion.
0:47:55.930,0:47:58.519
And if you look at the religions,[br]there is actually a veritable evolution
0:47:58.519,0:47:59.739
of religions.
0:47:59.739,0:48:04.789
So we have the Israelite tradition and[br]the Mesopotamian mythology that gave rise
0:48:04.789,0:48:13.019
to Judaism. applause
0:48:13.019,0:48:16.299
It's kind of cool, right? laughing
0:48:16.299,0:48:36.289
Also history totally repeats itself.[br]roaring laughterapplause
0:48:36.289,0:48:41.889
Yeah, it totally blew my mind when[br]I discovered this. laughter
0:48:41.889,0:48:45.039
Of course the real tree of programming[br]languages is slightly more complicated.
0:48:45.039,0:48:48.599
And the real tree of religions is slightly[br]more complicated.
0:48:48.599,0:48:51.229
But still it's neat.
0:48:51.229,0:48:54.289
So if you want to immunize yourself[br]against mind viruses,
0:48:54.289,0:48:58.570
first of all you want to check yourself[br]whether you are infected.
0:48:58.570,0:49:02.809
You should check: Can I let go of my[br]current beliefs without feeling that
0:49:02.809,0:49:07.670
meaning departs from me, without feeling very[br]terrible when I let go of my beliefs?
0:49:07.670,0:49:11.279
Also you should check: All the other[br]people out there that don't
0:49:11.279,0:49:17.019
share my belief, are they either stupid,[br]or crazy, or evil?
0:49:17.019,0:49:19.890
If you think this, chances are you are[br]infected by some kind of mind virus,
0:49:19.890,0:49:23.710
because they are just part[br]of the out-group.
0:49:23.710,0:49:28.059
And does your god have properties that[br]you know but did not observe?
0:49:28.059,0:49:32.490
So basically you have a god[br]of 2nd or 3rd degree or higher.
0:49:32.490,0:49:34.589
In this case you probably also have a mind virus.
0:49:34.589,0:49:37.259
There is nothing wrong[br]with having a mind virus,
0:49:37.259,0:49:39.920
but if you want to immunize yourself[br]against this, people have invented
0:49:39.920,0:49:44.059
rationalism and enlightenment,[br]basically to act as an immunization against
0:49:44.059,0:49:50.660
mind viruses.[br]loud applause
0:49:50.660,0:49:53.869
And in some sense it's what the mind does[br]by itself, because if you want to
0:49:53.869,0:49:56.949
understand how you go wrong,[br]you need to have a mechanism
0:49:56.949,0:49:58.839
that discovers who you are.
0:49:58.839,0:50:03.109
Some kind of auto debugging mechanism,[br]that makes the mind aware of itself.
0:50:03.109,0:50:04.779
And this is actually the self.
0:50:04.779,0:50:08.339
So according to Robert Kegan:[br]"The development of the self is a process
0:50:08.339,0:50:13.400
in which we learn who we are by making[br]things explicit", by making processes that
0:50:13.400,0:50:17.249
are automatic visible to us and by[br]conceptualizing them so we no longer
0:50:17.249,0:50:18.859
identify with them.
0:50:18.859,0:50:22.019
And it starts out with understanding[br]that there is only pleasure and pain.
0:50:22.019,0:50:25.180
If you are a baby, you have only[br]pleasure and pain and you identify with them.
0:50:25.180,0:50:27.869
And then you turn into a toddler and the[br]toddler understands that they are not
0:50:27.869,0:50:31.059
their pleasure and pain[br]but they are their impulses.
0:50:31.059,0:50:34.259
And at the next level, if you grow beyond[br]the toddler age, you actually know that
0:50:34.259,0:50:38.880
you have goals and that your needs and[br]impulses are there to serve goals, but it's
0:50:38.880,0:50:40.210
very difficult to let go of the goals,
0:50:40.210,0:50:42.789
if you are a very young child.
0:50:42.789,0:50:46.329
And at some point you realize: Oh, the[br]goals don't really matter, because
0:50:46.329,0:50:49.509
sometimes you cannot reach them, but[br]we have preferences, we have things that we
0:50:49.509,0:50:52.950
want to happen and things that we do not[br]want to happen. And then at some point
0:50:52.950,0:50:55.869
we realize that other people have[br]preferences, too.
0:50:55.869,0:50:58.979
And then we start to model the world[br]as a system where different people have
0:50:58.979,0:51:01.940
different preferences and we have[br]to navigate this landscape.
0:51:01.940,0:51:06.420
And then we realize that these preferences[br]also relate to values and we start
0:51:06.420,0:51:09.700
to identify with these values as members of[br]society.
0:51:09.700,0:51:13.469
And this is basically the stage that you[br]get into as an adult.
0:51:13.469,0:51:16.910
And you can get to a stage beyond that,[br]especially if you have people around who
0:51:16.910,0:51:20.059
have already done this. And this means[br]that you understand that people have
0:51:20.059,0:51:23.660
different values and what they do[br]naturally flows out of them.
0:51:23.660,0:51:26.849
And these values are not necessarily worse[br]than yours, they are just different.
0:51:26.849,0:51:29.450
And you learn that you can hold different[br]sets of values in your mind at
0:51:29.450,0:51:33.019
the same time (isn't that amazing?)[br]and understand other people, even if
0:51:33.019,0:51:36.660
they are not part of your group.[br]If you get that, this is really good.
0:51:36.660,0:51:39.269
But I don't think it stops there.
0:51:39.269,0:51:43.019
You can also learn that the stuff that[br]you perceive is kind of incidental,
0:51:43.019,0:51:45.339
that you can turn it off and you can[br]manipulate it.
0:51:45.339,0:51:49.940
And at some point you can also realize[br]that your self is only incidental, that you
0:51:49.940,0:51:52.559
can manipulate it or turn it off.[br]And that you're basically some kind of
0:51:52.559,0:51:57.420
consciousness that happens to run a brain[br]of some kind of person, that navigates
0:51:57.420,0:52:04.279
the world in order to get rewards, avoid[br]displeasure, serve values and so on,
0:52:04.279,0:52:05.130
but it doesn't really matter.
0:52:05.130,0:52:08.119
There is just this consciousness which[br]understands the world.
0:52:08.119,0:52:11.009
And this is the stage that we typically[br]call enlightenment.
0:52:11.009,0:52:14.549
In this stage you realize that you are not[br]your brain, but you are a story that
0:52:14.549,0:52:25.640
your brain tells itself.[br]applause
0:52:25.640,0:52:29.630
So becoming self aware is a process of[br]reverse engineering your mind.
0:52:29.630,0:52:32.890
It's a different set of stages in which[br]you realize what goes on.
0:52:32.890,0:52:33.799
So isn't that amazing?
0:52:33.799,0:52:38.930
AI is a way to get to more self-awareness.
0:52:38.930,0:52:41.319
I think that is a good point to stop here.
0:52:41.319,0:52:44.499
The first talk that I gave in this series[br]was 2 years ago. It was about
0:52:44.499,0:52:45.979
how to build a mind.
0:52:45.979,0:52:49.670
Last year I talked about how to get from[br]basic computation to consciousness.
0:52:49.670,0:52:53.709
And this year we have talked about[br]finding meaning using AI.
0:52:53.709,0:52:57.470
I wonder where it goes next.[br]laughter
0:52:57.470,0:53:22.769
applause
0:53:22.769,0:53:26.489
Herald: Thank you for this amazing talk![br]We now have some minutes for Q&A.
0:53:26.489,0:53:31.190
So please line up at the microphones as[br]always. If you are unable to stand up
0:53:31.190,0:53:36.430
for some reason, please very very visibly[br]raise your hand, we should be able to dispatch
0:53:36.430,0:53:40.099
an audio angel to your location[br]so you can have a question too.
0:53:40.099,0:53:44.030
And also if you are locationally[br]disabled, you are not actually in the room
0:53:44.030,0:53:49.069
if you are on the stream, you can use IRC[br]or twitter to also ask questions.
0:53:49.069,0:53:50.989
We also have a person for that.
0:53:50.989,0:53:53.779
We will start at microphone number 2.
0:53:53.779,0:53:59.940
Q: Wow that's me. Just a guess! What[br]would you guess, when can you discuss
0:53:59.940,0:54:04.559
your talk with a machine,[br]in how many years?
0:54:04.559,0:54:07.400
Joscha: I don't know! As a software[br]engineer I know if I don't have the
0:54:07.400,0:54:12.619
specification all bets are off, until I[br]have the implementation. laughter
0:54:12.619,0:54:14.509
So it can be of any order of magnitude.
0:54:14.509,0:54:18.249
I have a gut feeling but I also know as a[br]software engineer that my gut feeling is
0:54:18.249,0:54:23.450
usually wrong, laughter[br]until I have the specification.
0:54:23.450,0:54:28.200
So the question is whether there are silver[br]bullets. Right now there are some things
0:54:28.200,0:54:30.569
that are not solved yet and it could be[br]that they are easier to solve
0:54:30.569,0:54:33.469
than we think, but it could be that[br]they're harder to solve than we think.
0:54:33.469,0:54:36.710
Before I stumbled on this cortical[br]self organization thing,
0:54:36.710,0:54:40.719
I thought it's going to be something like[br]maybe 60, 80 years and now I think it's
0:54:40.719,0:54:47.289
way less, but again this is a very[br]subjective perspective. I don't know.
0:54:47.289,0:54:49.240
Herald: Number 1, please!
0:54:49.240,0:54:55.589
Q: Yes, I wanted to ask a little bit about[br]metacognition. It seems that you kind of
0:54:55.589,0:55:01.329
end your story saying that it's still[br]reflecting on input that you get and
0:55:01.329,0:55:04.900
kind of working with your social norms[br]and this and that, but Colberg
0:55:04.900,0:55:11.839
for instance talks about what he calls a[br]postconventional universal morality
0:55:11.839,0:55:17.420
for instance, which is thinking about[br]moral laws without context, basically
0:55:17.420,0:55:23.069
stating that there is something beyond the[br]relative norm that we have to each other,
0:55:23.069,0:55:29.579
which would only be possible if you can do[br]kind of, you know, meta cognition,
0:55:29.579,0:55:32.599
thinking about your own thinking[br]and then modifying that thinking.
0:55:32.599,0:55:37.229
So kind of feeding back your own ideas[br]into your own mind and coming up with
0:55:37.229,0:55:43.779
stuff that you actually can't get by ...[br]well, processing external inputs.
0:55:43.779,0:55:48.469
Joscha: Mhm! I think it's very tricky.[br]This project of defining morality without
0:55:48.469,0:55:53.119
societies has existed longer than Kant, of[br]course. And Kant tried to give it
0:55:53.119,0:55:56.869
internal rules, and others tried too.[br]I find this very difficult.
0:55:56.869,0:56:01.069
From my perspective we are just moving[br]bits of rock. And these bits of rock
0:56:01.069,0:56:07.589
are on some kind of dust mote in a galaxy[br]out of trillions of galaxies, and how can
0:56:07.589,0:56:08.609
there be meaning?
0:56:08.609,0:56:11.180
It's very hard for me to say:
0:56:11.180,0:56:13.969
One chimpanzee species is better than[br]another chimpanzee species or
0:56:13.969,0:56:16.559
a particular monkey[br]is better than another monkey.
0:56:16.559,0:56:18.539
This only happens[br]within a certain framework
0:56:18.539,0:56:20.160
and we have to set this framework.
0:56:20.160,0:56:23.700
And I don't think that we can define this[br]framework outside of a context of
0:56:23.700,0:56:26.420
social norms, that we have to agree on.
0:56:26.420,0:56:29.650
So objectively I'm not sure[br]if we can get to ethics.
0:56:29.650,0:56:33.769
I only think that is possible based on[br]some kind of framework that people
0:56:33.769,0:56:38.339
have to agree on implicitly or explicitly.
0:56:38.339,0:56:40.630
Herald: Microphone number 4, please.
0:56:40.630,0:56:46.559
Q: Hi, thank you, it was a fascinating talk.[br]I have 2 thoughts that went through my mind.
0:56:46.559,0:56:51.589
And the first one is that the models[br]that you present are so convincing,
0:56:51.589,0:56:56.709
but it's kind of like you present[br]another metaphor of understanding the
0:56:56.709,0:57:01.670
brain which is still something that we try[br]to grasp on different levels of science
0:57:01.670,0:57:07.469
basically. And the 2nd one is that your[br]definition of the nerd who walks
0:57:07.469,0:57:10.950
and doesn't see the walls is kind of a[br]definition ... or reminds me of
0:57:10.950,0:57:15.229
Richard Rorty's definition of the ironist,[br]which is a person who knows that their
0:57:15.229,0:57:20.799
vocabulary is finite and that other people[br]also have a finite vocabulary, and
0:57:20.799,0:57:24.599
then that obviously opens up the whole question[br]of meaning making which has been
0:57:24.599,0:57:28.979
discussed in so many[br]other disciplines and fields.
0:57:28.979,0:57:32.930
And I thought about Derrida's[br]deconstruction of ideas and thoughts and
0:57:32.930,0:57:36.300
Butler and then down the rabbit hole to[br]Nietzsche and I was just wondering,
0:57:36.300,0:57:39.009
if you could maybe[br]map out other connections
0:57:39.009,0:57:44.430
where it's basically not AI helping us to[br]understand the mind, but where
0:57:44.430,0:57:49.819
already existing huge, huge fields of[br]science, like cognitive process
0:57:49.819,0:57:53.359
coming from the other end could help us[br]to understand AI.
0:57:53.359,0:57:59.680
Joscha: Thank you, the traditions that you[br]mentioned, Rorty and Butler and so on,
0:57:59.680,0:58:02.989
are part of a completely different belief[br]attractor in my current perspective.
0:58:02.989,0:58:06.209
That is they are mostly[br]social constructionists.
0:58:06.209,0:58:10.880
They believe that reality at least in the[br]domains of the mind and sociality
0:58:10.880,0:58:15.359
are social constructs; they are part[br]of a social agreement.
0:58:15.359,0:58:17.190
Personally I don't think that[br]this is the case.
0:58:17.190,0:58:19.630
I think that patterns that we refer to
0:58:19.630,0:58:23.890
are mostly independent of your mind.[br]The norms are part of social constructs,
0:58:23.890,0:58:28.099
but for instance our motivational[br]preferences that make us adopt or
0:58:28.099,0:58:32.719
reject norms, are something that builds up[br]resistance to the environment.
0:58:32.719,0:58:35.660
So they are probably not part[br]of social agreement.
0:58:35.660,0:58:41.569
And the only thing I can invite you to do is[br]to try to retrace both of the different
0:58:41.569,0:58:45.640
belief attractors, try to retrace the[br]different paths on the landscape.
0:58:45.640,0:58:48.529
All these things that I tell you, all of[br]this is of course very speculative.
0:58:48.529,0:58:52.390
These are that seem to be logical[br]to me at this point in my life.
0:58:52.390,0:58:55.400
And I try to give you the arguments[br]why I think that is plausible, but don't
0:58:55.400,0:58:59.109
believe in them, question them, challenge[br]them, see if they work for you!
0:58:59.109,0:59:00.559
I'm not giving you any truth.
0:59:00.559,0:59:05.720
I'm just going to give you suitable encodings[br]according to my current perspective.
0:59:05.720,0:59:11.739
Q:Thank you![br]applause
0:59:11.739,0:59:15.099
Herald: The internet, please!
0:59:19.179,0:59:26.029
Signal angel: So, someone is asking[br]if in this belief space you're talking about
0:59:26.029,0:59:30.109
how is it possible[br]to get out of local minima?[br]
0:59:30.109,0:59:33.959
And a very related question as well:
0:59:33.959,0:59:38.530
Should we teach some momentum method[br]to our children,
0:59:38.530,0:59:41.599
so we don't get stuck in a local minimum.
0:59:41.599,0:59:44.829
Joscha: I believe at some level it's not[br]possible to get out of a local minimum
0:59:44.829,0:59:50.329
in an absolute sense, because you only[br]get into some kind of meta minimum,
0:59:50.329,0:59:56.769
but what you can do is to retrace the[br]path that you took whenever you discover
0:59:56.769,0:59:59.989
that somebody else has a fundamentally[br]different set of beliefs.
0:59:59.989,1:00:02.769
And if you realize that this person is[br]basically a smart person that is not
1:00:02.769,1:00:07.359
completely insane but has reasons to[br]believe in their beliefs and they seem to
1:00:07.359,1:00:10.579
be internally consistent, it's usually[br]worth retracing what they
1:00:10.579,1:00:12.180
have been thinking and why.
1:00:12.180,1:00:15.930
And this means you have to understand[br]where their starting point was and
1:00:15.930,1:00:18.279
how they moved from their starting point[br]to their current point.
1:00:18.279,1:00:22.219
You might not be able to do this[br]accurately and the important thing is
1:00:22.219,1:00:25.369
also that after you discover a second[br]valley, you haven't discovered
1:00:25.369,1:00:27.059
the landscape in between.
1:00:27.059,1:00:30.839
But the only way that we can get an idea[br]of the lay of the land is that we try to
1:00:30.839,1:00:33.200
retrace as many paths as possible.
1:00:33.200,1:00:36.339
And if we try to teach our children, what[br]I think we should be doing is:
1:00:36.339,1:00:38.650
To tell them how to explore[br]this world on their own.
1:00:38.650,1:00:43.900
It's not that we tell them this is the[br]valley, basically it's given, it's
1:00:43.900,1:00:47.599
the truth, but instead we have to tell[br]them: This is the path that we took.
1:00:47.599,1:00:51.239
And these are the things that we saw[br]in between, and it is important not to be
1:00:51.239,1:00:54.390
completely naive when we go into this[br]landscape, but we also have to understand
1:00:54.390,1:00:58.170
that it's always an exploration that[br]never stops and that might change
1:00:58.170,1:01:01.140
everything that you believe now[br]at a later point.
1:01:01.140,1:01:05.700
So for me it's about teaching my own[br]children how to be explorers,
1:01:05.700,1:01:10.950
how to understand that knowledge is always[br]changing and it's always a moving frontier.
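The "momentum method" the questioner borrows from optimization can be sketched in a few lines. The function, learning rate, and numbers below are purely illustrative, not from the talk; the sketch only shows the mechanism: velocity accumulates past gradients, so step sizes grow on a consistent slope, which is what lets momentum carry a state over small bumps that trap plain gradient descent:

```python
# Gradient descent with momentum: the velocity term remembers past
# gradients, so the effective step size grows on a consistent slope.
def momentum_steps(grad, x0, lr=0.1, beta=0.9, n=3):
    x, v = x0, 0.0
    velocities = []
    for _ in range(n):
        v = beta * v - lr * grad(x)  # accumulate past gradients
        x = x + v                    # take the momentum step
        velocities.append(v)
    return x, velocities

# On a constant downhill slope (gradient = 1 everywhere), plain descent
# moves a fixed -lr = -0.1 per step; momentum's steps keep growing
# toward the limit -lr / (1 - beta) = -1.0, ten times larger.
x, vs = momentum_steps(lambda x: 1.0, x0=0.0)
print(vs)  # roughly [-0.1, -0.19, -0.271]
```

In the talk's metaphor, the accumulated velocity plays the role of enough "energy" to get over the hill between two valleys, which plain step-by-step belief revision never builds up.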
1:01:10.950,1:01:17.230
applause
1:01:17.230,1:01:22.259
Herald: We are unfortunately out of time.[br]So, please once again thank Joscha!
1:01:22.259,1:01:24.069
applause[br]Joscha: Thank you!
1:01:24.069,1:01:28.239
applause
1:01:28.239,1:01:32.719
postroll music
1:01:32.719,1:01:40.000
subtitles created by c3subtitles.de[br]Join, and help us!