-
Herald: I have the great pleasure to
announce Joscha, who will give us a great
-
talk with the title "The Ghost in the
Machine" and he will talk about
-
consciousness of our mind and of computers
and somehow also tell us how we can learn
-
from A.I. systems about our own brains.
And I think this is a very curious question.
-
So please give it up for Joscha.
-
Applause
-
Joscha: Good evening. This is the fifth
talk in a series of talks on how to
-
get from computation to consciousness and
to understand our condition in the
-
universe based on concepts that I mostly
learned by looking at artificial
-
intelligence and computation and it mostly
tackles the big philosophical questions:
-
What can I know? What is true? What is
truth? Who am I? Which means the question
-
of epistemology, of ontology, of
metaphysics, and philosophy of mind and
-
ethics.
-
And to clear some of the terms
that we are using here:
-
What is intelligence? What's a mind?
What's a self? What's consciousness?
-
How are mind and consciousness
realized in the universe?
-
Intelligence I think is the ability to
make models.
-
It's not the same thing
as being smart, which is the
-
ability to reach your goals or being wise,
which is the ability to pick the right
-
goals. But it's just the ability to
make models of things.
-
And you can regulate them later using
these models, but you don't have to.
-
And the mind is this thing that observes
the universe. And the self is
-
an identification with
properties and purposes.
-
What a thing thinks it is. And then
you have consciousness, which is
-
the experience of what it's like
to be a thing.
-
And how our mind and consciousness
are realized in the universe,
-
this is commonly called the
mind-body problem and it's been
-
puzzling philosophers and people of
all proclivities for thousands of years.
-
So what's going on? How's it possible that
I find myself in a universe and I seem to
-
be experiencing myself in that universe?
How does this go together,
-
what's going on here? The traditional
answer to this is called dualism and the
-
conception of dualism - in our
culture at least - is the idea that
-
you have a physical world and a mental
world and they coexist somehow and my mind
-
experiences this mental world and my body
can do things in the physical world and
-
the difficulty of this dualist conception
is how do these two planes of existence
-
interact. Because physics is defined as
causally closed, everything that
-
influences things in the physical world is
by itself an element of physics. So an
-
alternative is idealism which says that
there is only a mental world. We only
-
exist in a dream and this dream is being
dreamt by a mind on a higher plane of
-
existence. And the difficulty with this is
that it's very hard to explain that mind on a
-
higher plane of existence. Why is it just
there, why is it doing this? And in our culture the
-
dominant theory is materialism, which says
that there is basically only a physical world,
-
nothing else. And the physical world
somehow is responsible for the creation of
-
the mental world. It's not quite clear how
this happens. And the answer that I am
-
suggesting, is functionalism which means
that indeed we exist only in a dream.
-
So these ideas of materialism and idealism
are not in opposition. They are
-
complementary because this dream is being
dreamt by a mind on a higher plane of
-
existence, but this higher plane of
existence is the physical world. So we are
-
being dreamt in the neocortex of a primate
that lives in a physical universe and the
-
world that we experience is not the
physical world. It's a dream generated by
-
the neocortex - the same circuits that
make dreams at night make them during the
-
day. You can show this, and you live in
this virtual reality being generated in
-
there and the self is a character in that
dream. And it seems to take care of
-
things. It seems to explain what's going
on. It explains why a miracle seems to be
-
possible and why I can look into the
future but cannot break the bank somehow.
-
And even though this theory explains this,
shouldn't I be more agnostic? Are
-
there not alternatives that I should be
considering? Maybe the narratives of our
-
big religions and so on. I think we should
be agnostic. So the first rule of
-
epistemology says that the confidence in
the belief must equal the weight of the
-
evidence supporting it. Equipped with
that rule, you can test all the
-
alternatives and see if one of them is
better. And I think what this means is you
-
have to have all the possible beliefs, you
should entertain them all. But you should
-
not have any confidence in them. You
should shift your confidence around based
-
on the evidence. So for instance it is
entirely possible that this universe was
-
created by a supernatural being, and it's
a big conspiracy, and it actually has
-
meaning and it cares about us and our
existence here means something.
-
But um, there is no experiment that can
validate this. A guy coming down from a
-
mountaintop, telling of a burning
bush that he talked to up
-
there? That's not the kind of experiment
that gives you valid evidence, right?
-
So intelligence is the ability to
make models and intelligence is a property
-
that is beyond the grasp of a single
individual. A single individual is not
-
that smart. We cannot even figure out
Turing-complete languages all by ourselves.
-
To do this you need an intellectual
tradition that lasts a few hundred years
-
at least. So civilizations have more
intelligence than individuals. But
-
individuals often have more intelligence
than groups and whole generations and
-
that's because groups and generations tend
to converge on ideas; they have consensus
-
opinions. I'm very wary of consensus
opinions because you know how hard it is
-
to understand which programming language
is the best one for which purpose. There
-
is no proper consensus. And that's a
relatively easy problem. So when there's a
-
complex topic and all the experts agree,
there are forces at work that are
-
different than the forces that make them
search for truth. These consensus-building
-
forces, they're very suspicious to me. And
if you want to understand what's true you
-
have to look for means and motive. And you
have to be autonomous in doing this, so
-
individuals typically have better ideas
than generations or groups. But as I
-
said, civilizations have more intelligence
than individuals. What does a
-
civilizational intellect look like? The
civilizational intellect is something like a
-
global optimum of the modeling function.
It's something that has to be built over
-
thousands of years in an unbroken
intellectual tradition. And guess what,
-
this doesn't really exist in human
history. Every few hundred years, there's
-
some kind of revolution. Somebody opens
the doors to the knowledge factories and
-
gets everybody out and burns down the
libraries. And a couple generations later,
-
the knowledge worker drones of the new
king realize "Oh my God we need to rebuild
-
this thing, this intellect." And then they
create something in its likeness, but they
-
make mistakes in the foundation. So this
intellect tends to have scars. Like our
-
civilization intellect has a lot of scars
in it, that make it difficult
-
to understand concepts like self
and consciousness and mind. So, the mind
-
is something that observes the universe,
and the neurons and neurotransmitters are
-
the substrate. And in the human intellect,
the working memory is the current binding
-
state: how the different elements fit
together in our mind. And the self is the
-
identification with what we think we are and
what we want to happen. And consciousness
-
is the contents of our attention, it makes
knowledge available throughout the mind.
-
And civilizational intellect is very
similar: society observes the universe,
-
people and resources are the substrate,
the generation is the current binding
-
state, and culture is the identification
with what we think we are and what we want
-
to happen. And media is the contents of
its attention and makes knowledge available
-
throughout society. So the culture is
basically the self of civilization, and
-
media is its consciousness. How is it
possible to model a universe? Let's take a
-
very simple universe like the Mandelbrot
fractal. It can be defined by a little bit
-
of code. It's a very simple thing, you just
take a pair of numbers, you square it, you
-
add the same pair of numbers. And you do
this infinitely often, and typically this
-
goes to infinity very fast. There's a
small area around the origin of the number
-
pair, so between -1 and +1 and
so on, where you have an area where this
-
converges, where it doesn't go to infinity
and that is where you make black dots and
-
then you get this famous structure, the
Mandelbrot fractal. And because this
-
divergence and convergence of the function
can take many loops and circles and so on,
-
you get a very complicated shape, a very
complicated outline, an infinitely
-
complicated outline there. So there is an
infinite amount of structure in this
-
fractal. And now imagine you happen
to live in this fractal and you are in a
-
particular place in it, and you don't know
where that place is. You
-
don't even know the generator function of
the whole thing. But you can still predict
-
your neighborhood. So you can see, omg,
I'm in some kind of a spiral, it turns
-
to the left, goes to the left, and goes
to the left, and becomes smaller, so we can
-
predict and suddenly it ends. Why does it
end? A singularity. Oh, it hits another
-
spiral. There's a law when a spiral hits
another spiral, it ends. And something
-
else happens. So you look and then you see
oh, there are certain circumstances where
-
you have, for instance, an even number of
spirals hitting each other instead of an
-
odd number. And then you discover another
law. And if you make like 50 levels of
-
these laws, then this is a good
description that locally compresses the
-
universe. So the Mandelbrot fractal is
locally compressible. You find local
-
order that predicts the neighborhood if
you are inside of that fractal. The global
-
modelling function of the Mandelbrot
fractal is very, very easy. It's an
-
interesting question: how difficult is the
global modelling function of our universe?
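For comparison, the Mandelbrot fractal's easy global modelling function really does fit in a few lines - take a pair of numbers (a complex number), square it, add the original pair, repeat, and check whether the sequence stays bounded. A minimal sketch (function names are my own, not from the talk):

```python
# Iterate z -> z^2 + c and check whether the sequence stays bounded.
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to be in the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c       # square the pair of numbers, add the original pair
        if abs(z) > 2:      # provably diverges once |z| exceeds 2
            return False
    return True

# Render a tiny ASCII view of the famous shape around the origin.
for im in range(10, -11, -2):
    row = ""
    for re in range(-20, 7):
        row += "#" if in_mandelbrot(complex(re / 10, im / 10)) else " "
    print(row)
```

The generator is a handful of characters; all the infinite structure of the outline comes from iterating it.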
-
Even if we know it maybe it doesn't
help us that much, it will be a big
-
breakthrough for physics when we finally
find it, it will be much shorter than the
-
standard model, as I suspect, but we still
don't know where we are. And this means we
-
need to make a local model of what's
happening. So in order to do this we
-
separate the universe into things. Things
are small state spaces and transition
-
functions that tell you how to get from
state to state. And if the function is
-
deterministic it is independent of time,
it gives the same result every time you
-
call it. For an indeterministic function
it gives a different result every time, so
-
it doesn't compress well. And causality
means that you have separated several
-
things and they influence each other's
evolution through a shared interface.
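The framing of things as small state spaces plus transition functions can be made concrete - a toy rendering of my own, not the talk's notation:

```python
import random

# A "thing": a small state space {0..6} with a transition function.
def deterministic_step(state):
    """Same input always gives the same next state: compresses well."""
    return (state * 2 + 1) % 7

def indeterministic_step(state):
    """A different result on every call: does not compress well."""
    return random.randrange(7)

s = 3
assert deterministic_step(s) == deterministic_step(s)  # time-independent

# A deterministic trajectory can be replayed exactly from the start state,
# so storing the rule plus the initial state compresses the whole history.
trajectory = []
for _ in range(5):
    s = deterministic_step(s)
    trajectory.append(s)
print(trajectory)  # [0, 1, 3, 0, 1]
```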
-
Right? So causality is an artifact of
describing the universe as separate
-
things. And the universe is not separate
things, it's one thing, but we have to
-
describe it as separate things because we
cannot observe the whole thing. So what's
-
true? There seems to be a particular way
in which the universe is, and
-
that's the ground rules of the universe
and it's inaccessible to us. And what's
-
accessible to us is our own models of the
universe. The only thing that we can
-
experience. And this is basically a set
of theories that can explain the
-
observations. And truth in this sense is a
property of language and there are
-
different languages that we can use like
geometry and natural language and so on
-
and ways of representing and changing
models in our languages. And several
-
intellectual traditions have developed
their own languages. And this has led to
-
problems. Our civilization basically has
as its founding myth this attempt to build
-
this global optimum modelling function.
This is a tower that is meant to reach the
-
heavens. And it fell apart because people
spoke different languages. The different
-
practitioners in the different fields
didn't understand each other and the
-
whole building collapsed. And this is in
some sense the origin of our present
-
civilization and we are trying to mend
this and find better languages. So whom
-
can we turn to? We can turn to the
mathematicians maybe because mathematics
-
is the domain of all languages.
Mathematics is really cool when you think
-
about it. It's a universal code library,
maintained for several centuries in its
-
present form. There is not even version
management, it's one version. There is
-
a pretty much unified namespace. They have
to use a lot of Unicode to make it
-
happen. It's ugly but there you go! It has
no central maintainers, not even a code of
-
conduct, beyond what you can infer
yourself.
-
laughter
But there are some problems at the
-
foundation that they discovered.
Shouted from the audience: A very stable one!
-
Joscha: Can you infer this is a good
conduct? (inaudible)
-
Yelling from the audience: Ya!
Joscha: Okay. Power to you.
-
laughter
Joscha: In 1874, Cantor discovered, when looking
-
at the cardinality of a set, that when you
described natural numbers using set
-
theory, the cardinality of a set
grows slower than the cardinality of the
-
set of its subsets. So if you look at the
set of the subsets of the set, it's always
-
larger than the cardinality of the number
of members of the set. Clear? Right. If
-
you take the infinite set, it has
infinitely many members: omega. You
-
take the cardinality of the set of the
subsets of the infinite set, it's also an
-
infinite number, but it's a larger one. So
it's a number that is larger than the
-
previous omega. Okay that's fine. Now we
have the cardinality of the set of all
-
sets. You make the total set: The set
where you put all the sets that could
-
possibly exist and put them all together,
right? That has also infinitely many
-
members, and it has more than the
cardinality of the set of the subsets of
-
the infinite set. That's fine. But now you
look at the cardinality of the set of all
-
the subsets of the total set. The problem
is, that the total set also contains the
-
set of its subsets, right? It's because it
contains all the sets. Now you have a
-
contradiction: Because the cardinality of
the set of the subsets of the total set is
-
supposed to be larger. And yet it seems to
be the same set and not the same set. It's
-
an issue! So mathematicians got puzzled
about this, and the philosopher Bertrand
-
Russell said: "Maybe we just exclude those
sets that contain themselves",
-
right? We only look at the set of sets
that don't contain themselves. Isn't that
-
a solution? Now the problem is: Does the
set of the sets that don't contain
-
themselves contain itself? If it does, it
doesn't, and if it doesn't, it does.
-
That's an issue!
laughter
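For finite sets, the cardinality claim from a moment ago is easy to check concretely: the set of subsets of a set always has strictly more members than the set itself. A minimal sketch (mine, not the speaker's):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as a list of tuples."""
    items = sorted(s)
    return [c for r in range(len(items) + 1)
              for c in combinations(items, r)]

s = {1, 2, 3}
subsets = powerset(s)
print(len(subsets))            # 2**3 = 8 subsets
assert len(subsets) == 2 ** len(s)
assert len(subsets) > len(s)   # Cantor: |P(S)| > |S|, also for infinite sets
```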
-
So David Hilbert, who was some
kind of a community manager back then,
-
said: "Guys, fix this! This is an issue,
mathematics is precious, we are in
-
trouble. Please solve metamathematics."
And people got to work. And after a short
-
amount of time Kurt Gödel, who had looked
at this in earnest, said "oh, that's an
-
issue. You know, as soon as we allow these
kinds of loops - and we cannot really
-
exclude these loops - then our mathematics
crashes." So that's an issue, it's called
-
Unentscheidbarkeit, undecidability. And then Alan Turing
came along a couple of years later, and he
-
constructed a computer to make that proof.
He basically said "If you build a machine
-
that does these mathematics, and the
machine takes infinitely many steps,
-
sometimes, for making a proof, then we
cannot know whether this proof
-
terminates." So it's a similar issue for
the Unentscheidbarkeit. That's a big
-
issue, right? So we cannot basically build
a machine in mathematics that runs
-
mathematics without crashing. But the good
news is, Turing didn't stop working there
-
and he figured out together with Alonzo
Church - not together, independently but
-
at the same time - that we can build a
computational machine, that runs all of
-
computation. So computation is a universal
thing. And it's almost as good as
-
mathematics. Computation is constructive
mathematics. The tiny, neglected subset of
-
mathematics, where you have to show the
money. In order to say that something is
-
true, you have to find that object that is
true. You have to actually construct it.
-
So there are no infinities, because you
cannot construct an infinity. You add
-
things and you have unboundedness maybe,
but not infinity. And so this part of
-
mathematics, computation, is the one that
can be implemented. It's constructive
-
mathematics. It's the good part. And
computing, a computer is very easy to
-
make, and all universal computers have the
same power. That's called the Church-Turing
-
thesis. And Turing didn't even stop
there. The obvious conclusion is that
-
human minds are probably not in the class
of these mathematical machines, that even
-
God doesn't know how to build if it has to
be done in any language. But the mind is a
-
computational machine. And it also means
that all mathematics that human minds
-
ever encounter
-
will be computational mathematics.
So how can you bridge the gap
-
from mathematics to philosophy? Can we
find a language that is more powerful than
-
most of the languages that we look at
in mathematics, which are very narrowly
-
defined languages, where for every symbol
we know exactly what it means.
-
When we look at the real world,
-
we often don't know what things mean,
and our concepts, we're not quite
-
sure what they mean. Like culture is a
very vague, ambiguous concept. So what I
-
said is only approximately true there. Can
we deal with this conceptual ambiguity?
-
Can we build a programming language for
thought, where words mean things that
-
they're supposed to mean? And this was the
project of Ludwig Wittgenstein. He just
-
came back from the war and had a lot of
thoughts. Then he put these thoughts
-
into a book which is called the Tractatus.
And it's one of the most beautiful books
-
in the philosophy of the 20th century. And
it starts with the words "Die Welt ist
-
alles, was der Fall ist. Die Welt ist die
Gesamtheit der Tatsachen, nicht der Dinge.
-
Die Welt ist durch die Tatsachen bestimmt
und dadurch, dass es alle Tatsachen sind."
-
("The world is everything that is the case.
The world is the totality of facts, not of
things. The world is determined by the facts,
and by these being all the facts.") And so on.
This book is about 75 pages long and
it's a single thought. It's not meant to
-
be an argument to convince a philosopher.
It's an attempt by a guy who was basically
-
a coder, an AI scientist, to reverse
engineer the language of his own thinking.
-
And make it deterministic, to make it
formal, to make it mean something. And he
-
felt back then that he was successful, and
had a tremendous impact on philosophy,
-
which was largely devastating, because the
philosophers didn't know what he was on
-
about. They thought it's about natural
language and not about coding.
-
And he wrote this in 1918
-
so before Alan Turing defined,
what a computer is. But he could already
-
smell what a computer is. He already knew
about universality of computation. He knew
-
that a NAND gate is sufficient to explain
all of boolean algebra and it's equivalent
-
to other things. So what he basically did,
was, he pre-empted the logicists' program
-
of artificial intelligence which started
much later in the 1950s. And he ran into
-
troubles with it. In the end he wrote the
book "Philosophical Investigations", where
-
he concluded, that his project basically
failed, because the
-
world is too complex and too ambiguous for
this to work. And symbolic AI was mostly
-
similar to Wittgenstein's program. So
classical AI is symbolic. You analyze a
-
problem, you find an algorithm to solve
it. And what we now have in AI, is mostly
-
sub-symbolic. So we have algorithms, that
learn the solution of a problem by
-
themselves. And it's tempting to think,
that the next thing we will have will be
-
meta-learning. That you have algorithms,
that learn to learn the solution to the
-
problem. Meanwhile, let's look at how we
can make models. Information is a
-
discernible difference. It's about change.
All information is about change. The
-
information that is not about change has
no visible causal effect on the world,
-
because it stays the same, right? And the
meaning of information is its relationship
-
to change in other information. So if you
see a blip on your retina, the meaning
-
of that blip on your retina is the
relationships you discover to other blips
-
on your retina. It could be for instance,
if you see a sequence of such blips, that
-
are adjacent to each other - a first-order
model, you see a moving dust mote or a
-
moving dot on your retina. And a higher
order model makes it possible to
-
understand: "Oh, it's part of something
larger! There's people moving in a three
-
dimensional room and they exchange
ideas." And this is maybe the best model
-
you end up with. That's the local
compression, that you can make of your
-
universe, based on correlating blips on
your retina. And for those blips where you
-
don't find a relationship, which is a
function that your brain can compute,
-
they are noise. And there's a lot of noise
on our retina, too. So what's a function?
-
A function is basically a gear box: It has
n input levers and 1 output lever.
-
And when you move the input levers they
translate to movement of the output
-
levers, right? And the function can be
realized in many ways: maybe you cannot
-
open the gearbox, and what happens in
this function could be for instance, two
-
sprockets, which do this. Or you can have
the same results with levers and pulleys.
-
And so you don't know what's inside, but
you can express it as: this does two times
-
the input value, right? And you can have a
more difficult case, where you have
-
several input values and they all
influence the output value. So how do you
-
figure it out? A way to do this, is, you
only move one input value at a time and
-
you wiggle it a little bit at every
position and see how much this translates
-
into wiggling of the output value. This is
what we call taking the partial derivative.
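The "wiggle one input at a time" procedure is just numerical partial differentiation. A small sketch of my own (names and examples are illustrative):

```python
def partial(f, x, i, eps=1e-6):
    """Wiggle input i of f at point x and measure how the output moves."""
    nudged = list(x)
    nudged[i] += eps
    return (f(nudged) - f(x)) / eps

# The simple gearbox: the output is two times the first input lever.
double = lambda x: 2 * x[0]
print(partial(double, [3.0], 0))        # ~2.0, at any position

# Several levers influencing one output.
mix = lambda x: x[0] * x[1] + x[2]
print(partial(mix, [2.0, 5.0, 1.0], 0)) # ~5.0 (how much lever 0 matters here)
```

For the combination lock described next, this procedure fails: wiggling one bit almost never moves the output at all.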
-
And it's simple to do this
for this case where you just have to
-
multiply it by two. And the bad case is
like this: you have a combination lock and
-
it has maybe 1000 bit input value, and
only if you have exactly the right
-
combination of the input bits you have a
movement of the output bit. And you're not
-
going to figure this out until your sun
burns out, right? So there's no way you
-
can decipher this function. And the
functions that we can model are somewhere
-
in between, something like this: So you
have 40 million input images and you want
-
to find out, whether one of these images
displays a cat, or a dog, or something
-
else. So what can you do with this? You
cannot do this all at once, right? So you
-
need to take this image classifier
function and disassemble it into small
-
functions that are very well-behaved, so
you know what to do with them. And an
-
example for such a function is this one:
it's one, where you have this input
-
lever and it translates to the output
value with a pulley. And it has some
-
stopper that limits the movement of the
output value. And you have some pivot. And
-
you can take this pivot and you can shift
it around. And by shifting this pivot, you
-
decide, how much the input value
contributes to the output value. Right, so
-
you shift it, you can even make a
negative, so it shifts in the opposite
-
direction, and you shifted beyond this
connection point of the pulley. And you
-
can also have multiple input values, that
use the same pulley and pull together,
-
right? So they add up to the output
value. That's a pretty nice, neat function
-
approximator, that basically performs a
weighted sum of the input values, and maps
-
it to a range-constrained output value.
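In code, that lever-and-pulley unit is just a weighted sum pushed through a range limiter - essentially an artificial neuron. A sketch in my own notation:

```python
def neuron(inputs, weights, bias):
    """Weighted sum of the input levers, clipped by the 'stoppers' to [0, 1]."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, min(1.0, total))    # the stopper limits the output's range

# Shifting a pivot (weight) decides how much an input contributes;
# a negative weight pulls the output in the opposite direction.
print(neuron([1.0, 2.0], [0.5, -0.25], bias=0.2))  # 0.5 - 0.5 + 0.2 -> 0.2
print(neuron([1.0, 2.0], [0.9, 0.9], bias=0.0))    # sum is 2.7, clipped to 1.0
```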
And you can now shift these pivots, these
-
weights around to get to different output
values. Now let's take this thing and
-
build it into lots of layers, so the
outputs are the inputs of the next layer.
-
And now you connect this to your image. If
you use ImageNet, the famous database that
-
I mentioned earlier, that people use for
testing their vision algorithms, you have
-
something like one and a half million bits
as an input image. Now you take these
-
bits and connect them to the input layer.
I was too lazy to draw all of them, so I
-
made this very simplified; it also has more
layers. And so you set them, according to
-
the bits of the input image, and then this
will propagate the movement of the input
-
layer to the output. And the output will
move and it will point to some direction,
-
which is usually the wrong one. Now, to
make this better, you train it. And you do
-
this by taking this output lever and shift
it a little bit, not too much, into the
-
right direction. If you do it too much,
you destroy everything you did before.
-
And now you will see, how much, in which
direction you need to shift the pivots, to
-
get the result closer to the desired
output value, and how much each of the
-
inputs contributed to the mistakes, so to
the error. And you take this error and you
-
propagate it backwards. It's called back
propagation. And you do this quite often.
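The procedure just described - shift the output a little toward the right answer, attribute each unit's share of the error, propagate it backwards, repeat - can be sketched with a tiny two-layer network. This is a minimal illustration of the idea; the architecture and task are my own choices, not the talk's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: learn y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 41).reshape(-1, 1)
Y = X ** 2

# One hidden layer of 8 tanh units, small random starting weights.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1  # shift the pivots only a little each time

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # forward pass through the layers
    y_hat = h @ W2 + b2
    err = y_hat - Y
    losses.append(float((err ** 2).mean()))
    g_out = 2 * err / len(X)             # how wrong the output lever is
    gW2 = h.T @ g_out                    # each hidden unit's contribution...
    g_h = (g_out @ W2.T) * (1 - h ** 2)  # ...propagated backwards through tanh
    gW1 = X.T @ g_h
    W2 -= lr * gW2; b2 -= lr * g_out.sum(0)
    W1 -= lr * gW1; b1 -= lr * g_h.sum(0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.4f}")  # error shrinks with training
```

Shifting the weights too far per step would destroy what was learned before, which is why the learning rate stays small.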
-
So you do this for tens of thousands of
images. If you do just character
-
recognition, then it's a very simple thing:
a few thousands or ten thousands of
-
examples will be enough. And for something
like your image database you need lots and
-
lots more data. You need millions of
input images to get to any result. And if
-
it doesn't work, you just try a different
arrangement of layers. And the thing is
-
eventually able to learn an algorithm with
up to as many steps as there are
-
layers, and has some difficulties learning
loops, you need tricks to make that
-
happen, and it's difficult to make this
dynamic, and so on. And it's a bit
-
different from what we do, because our
mind is not trained on classification.
-
It learns by continuous perception, so
we learn a single function. A model of the
-
universe is not a bunch of classifiers,
it's one single function. An operator that
-
explains all your sensory data and we call
this operator the universe, right?
-
It's the world, that we live in. And every
thing that we learn and see is part of this
-
universe. So even when you see something
in a movie on a screen, you explain this
-
as part of the universe by telling
yourself "the things that I'm seeing here,
-
they're not real. They just happen in a
movie." So this brackets a sub-part of
-
this universe into a sub-element of this
function. So you can deal with it and it
-
doesn't contradict the rest. And the
degrees of freedom of our model try to
-
match the degrees of freedom of the
universe. How can we get a neural network
-
to do this? So, there are many tricks. And
a recent trick that has been invented is a
-
GAN. It's a Generative Adversarial
Network. It consists of two networks: one
-
generator that invents data, that look
like the real world, and the discriminator
-
that tries to find out, if the stuff that
the generator produces is real or fake.
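The adversarial setup can be sketched in a toy one-dimensional form: a linear generator learns to imitate samples from a Gaussian while a logistic discriminator tries to tell real from fake. This is a minimal illustration of my own, nothing like the actual StyleGAN architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda u: 1 / (1 + np.exp(-u))

# Real data: the "world" the generator must learn to imitate.
real_mean, real_std = 3.0, 0.5

# Generator g(z) = a*z + b; discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for _ in range(3000):
    xr = rng.normal(real_mean, real_std, batch)   # real samples
    z = rng.normal(0, 1, batch)
    xf = a * z + b                                # fake samples
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # Discriminator ascends log d(real) + log(1 - d(fake)).
    w += lr * ((1 - dr) * xr - df * xf).mean()
    c += lr * ((1 - dr) - df).mean()
    # Generator ascends log d(fake): make fakes look real to d.
    g = (1 - df) * w
    a += lr * (g * z).mean()
    b += lr * g.mean()

samples = a * rng.normal(0, 1, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {real_mean})")
```

Each network's improvement is the other's training signal, which is the adversarial competition described above.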
-
And they both get trained with each other.
So they together get better and better in
-
an adversarial competition. And the
results of this are now really good. So
-
this is work by Tero Karras, Samuli Laine
and Timo Aila, that they did at NVIDIA
-
this year and it's called StyleGAN. And
this StyleGAN is able to abstract over
-
different features and combine them. The
styles are basically parameters, they're
-
free variables of the model at different
levels of importance. In
-
the top row you see images, where
it takes the variables: gender, age, hair
-
length, and so on, and glasses and pose.
And in the bottom rows it takes
-
everything else and combines this, and
every time you get a
-
valid interpolation between them.
-
drinks water
-
So, you have these coarse styles,
which are:
-
the pose, the hair, the face shape,
your facial features and the eyes,
-
the lowest level is just the colors. Let's
see what happens if you combine them.
-
The variables that change here, in machine
learning, we call them the latent
-
variables of that.
-
Of the space of objects that has been
described by this.
-
And it's tempting to think, that this is
quite similar to how our imagination works
-
right? But these artificial neurons, they
are very, very different from what
-
biological neurons do. Biological neurons
are essentially little animals, that are
-
rewarded for firing at the right moment.
And they try to fire because otherwise
-
they do not get fed, and they die, because
the organism doesn't need them, and
-
culls them. And they learn which
environmental states predict anticipated
-
reward. So they grow around and find
different areas that give them predictions
-
of when they should fire. And they connect
with each other to form small collectives,
-
that are better at this task of predicting
anticipated reward. And as a side effect
-
they produce exactly the regulation that
the organism needs. Basically they learn,
-
what the organism feeds them for.
-
And yet they're able
to learn very similar things.
-
And it's because, in some sense, they are
Turing complete. They are machines that
-
are able to learn the statistics of the
data.
-
So, a general model: What it does, is,
-
it encodes patterns to predict other
present and future patterns. And it's a
-
network of relationships between the
patterns, which are all the invariants
-
that we can observe. And there are free
parameters, which are variables that hold
-
the state to encode these invariants. So we
have patterns, and we have sets of
-
possible values which are variables. And
they constrain each other in terms of
-
possibility, what values are compatible
with each other. And they also constrain
-
future values. And they are connected also
with probabilities. The probabilities tell
-
you, when you see a certain thing, how
probable it is that the world is in that
-
state. And this tells you how your model
should converge. So, until you are in
-
a state where your model is coherent, and
everything is possible in it, how do you
-
get to one of the possible states based on
your inputs? And this is determined by
-
probability. And the thing that gives
meaning and color to what you perceive is
-
called valence. And it depends on your
preferences: the things that give you
-
pleasure and pain, that makes you
interested in stuff. And there are also
-
norms, which are beliefs without priors,
which are like things that you want to be
-
true, regardless of whether they give you
pleasure and pain, and it's necessary for
-
instance, coordinating social activity
between people. So, we have different
-
model constraints, that is, possibility and
probability. And we have the reward
-
function, that is given by valence and
norms. And our human perception starts
-
with patterns, which are visual, auditory,
tactile, proprioceptive. Then we have
-
patterns in our emotional and motivational
systems. And we have patterns in our
-
mental structure, which are results of our
imagination and memory. And we take these
-
patterns and encode them into percepts,
which are abstractions that we can deal
-
with, and note, and put into our
attention. And then we combine them into a
-
binding state in our working memory in a
simulation, which is the current instance
-
of the universe function that explains the
present state of the universe that we find
-
ourselves in. The scene in which we are
and in which a self exists. And this self
-
is basically composed of the
somatosensory and motivational, and
-
mental components. Then we also have the
world state, which is abstracted over the
-
environmental data. And we have something
like a mental stage, in which you can do
-
counterfactual things, that are not
physical. Like when you think about
-
mathematics, or philosophy, or the future,
or a movie, or past worlds, or possible
-
worlds, and so on, right? And then we
abstract knowledge from the world state
-
into global maps. Because we're not
always in the same place, but we recall
-
what other places look like and what to
expect, and this informs how we construct the
-
current world state. And we do this not
only with these maps, but we do this with
-
all kinds of knowledge. So knowledge is
second order knowledge over the
-
abstractions that we have, and the direct
perception. And then we have an
-
attentional system. And the attentional
system helps us to select data in the
-
perception and our simulations. To do
this, it is controlled by the self, and
-
it maintains a protocol to remember what
it did in the past or what it had in the
-
attention in the past. And this protocol
allows us to have a biographical memory:
-
it remembers what we did in the past. And
the different behavior programs,
-
that compose our activities, can be bound
together in the self, that remembers: "I
-
was that, I did that. I was that, I did
that." The self is held together by this
-
biographical memory, that is a result of
the protocol memory of the attentional
-
system. That's why it's so intricately
related to consciousness, which is a model
-
of the contents of our attention.
-
And the main purpose
of the attentional system,
-
I think, is learning. Because our brain is
not a layered architecture with these
-
artificial mechanical neurons. It's this
very disorganized or very chaotic system
-
of many, many cells, that are linked
together all over the place. So what do
-
you do to train this? You make a
particular commitment. Imagine you want to
-
get better at playing tennis. Instead of
retraining everything and pushing all the
-
weights and all the links and retrain your
whole perceptual system, you make a
-
commitment: "Today I want to improve my
forehand" when you play tennis, and you
-
basically store the current binding state,
the state that you have, and you play
-
tennis and make that movement, and the
expected result of making this particular
-
movement, like: "the ball will move like
this, and it will win the match." And you
-
also note when the result will
manifest. And a few minutes later, when
-
you learn whether you won or lost the match, you
recall the situation. And based on whether
-
there was an improvement or not, you undo the
change, or you reinforce it. And that's the
-
primary mode of attentional learning that
you're using. And I think, this is, what
-
attention is mainly for. Now what happens,
if this learning happens without a delay?
-
So, for instance, when you do mathematics,
you can see the result of your changes to
-
your model immediately. You don't need to
wait for the world to manifest that.
-
And this real time
learning is what we call reasoning.
-
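The attention-based learning loop just described (store the current binding state, commit to a change, note the expected result, and later reinforce or undo the change once the outcome is known) can be sketched in code. This is only an illustrative toy, not anything from the talk; the class, the parameter name, and all numbers are invented:

```python
class AttentionalLearner:
    """Toy sketch of delayed, attention-based credit assignment."""

    def __init__(self):
        self.params = {"forehand_angle": 0.0}  # hypothetical motor parameter
        self.pending = []  # committed changes waiting for their outcome

    def commit(self, param, delta, expected_gain):
        """Store the binding state (a snapshot) and tentatively apply a change."""
        snapshot = dict(self.params)
        self.params[param] += delta
        self.pending.append((snapshot, param, expected_gain))

    def resolve(self, actual_gain):
        """When the result manifests, recall the snapshot and keep or undo."""
        for snapshot, param, expected_gain in self.pending:
            if actual_gain < expected_gain:
                # outcome fell short: roll the parameter back
                self.params[param] = snapshot[param]
            # otherwise: the change is kept (reinforced)
        self.pending.clear()

learner = AttentionalLearner()
learner.commit("forehand_angle", 0.2, expected_gain=1.0)
learner.resolve(actual_gain=0.3)  # match lost: the change is undone
assert learner.params["forehand_angle"] == 0.0
```

Reasoning, in this picture, is the same loop with `resolve` called immediately after `commit`, with no waiting for the world.
-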
Reasoning is also facilitated by the same
attentional system. So, consciousness is
-
memory of the contents of our attention.
Phenomenal consciousness is the memory of
-
the binding state that we are in, and
where all the percepts are bound together
-
into something that's coherent. Access
consciousness is the memory of using our
-
attentional system. And reflexive
consciousness is the memory of using the
-
attentional system on the attentional
system to train it. Why is it a memory?
-
It's because consciousness doesn't happen
in real time. The processing of sensory
-
features takes too long. And the
processing of different sensory modalities
-
can take up to seconds, usually at least
hundreds of milliseconds. So it doesn't
-
happen in real time with the physical
universe. It's only bound together in
-
hindsight. Our conscious experience of
things is created after the fact.
-
It's a fiction that is being created after
the fact. A narrative, that the brain
-
produces, to explain its own interaction
with the universe
-
to get better in the future.
-
So, we basically have three types of
models in our brain. There is the primary
-
model, which is perceptual, and is
optimized for coherence.
-
And this is what we experience as reality.
-
You think this
is the real world, this primary model.
-
But it's not, it's a model that our brain
makes. So when you see yourself in the
-
mirror, you don't see what you look like.
-
What you see is the model of
what you look like.
-
And your knowledge is a secondary
model: it's a model of that primary model.
-
And it's created by rational processes
that are meant to repair perception.
-
When your model doesn't achieve coherence,
you need a model that debugs it, and it
-
optimizes for truth. And then we have
agents in our mind, and they are basically
-
self-regulating behaviour programs, that
have goals, and they can rewrite
-
other models. So, if you look at our
computationalist, physicalist paradigm, we
-
have this mental world, which is being
dreamt by a physical brain in the physical
-
universe. And in this mental world, there
is a self that thinks it experiences,
-
thinks it has consciousness, and
thinks it remembers, and so on.
-
This self, in some sense, is an agent.
It's a thought that escaped its sandbox.
-
Every idea is a bit
of code that runs on your brain.
-
Every word that you hear
is like a little virus
-
that wants to run some code on your brain.
And some ideas cannot be sandboxed.
-
If you believe, that a thing exists that
can rewrite reality,
-
if you really believe it,
-
you instantiate in your brain a thing
that can rewrite reality,
-
and this means:
magic is going to happen!
-
To believe in something that can rewrite
reality, is what we call faith.
-
So, if somebody says:
"I have faith in the existence of God."
-
This means, that God exists in their
brain. There is a process that can rewrite
-
reality, because God is defined like this.
God is omnipotent.
-
God means God can rewrite everything.
-
It's full write access. And the reality,
that you have access to,
-
is not the physical world.
-
The physical world is some weird quantum
graph, that you cannot possibly experience.
-
What you experience are these models.
-
So, this is a non-user-facing process,
one which doesn't have a UI for interfacing
-
with the user, which is called in computer
science a "daemon process", and it is able to
-
rewrite your reality.
And it's also omniscient.
-
It knows everything that
there is to know.
-
It knows all your
thoughts and ideas.
-
So... having that thing,
this exoself,
-
running on your brain, is a very powerful
way to control your inner reality.
-
And I find this scary.
But it's a personal preference,
-
because I don't have this
riding on my brain, I think.
-
This idea, that there is something in my
brain, that is able to dream me and shape
-
my inner reality, and sandbox me, is
weird. But it has served a purpose,
-
especially in our culture. So an organism
serves needs, obviously. And some of these
-
needs are outside of the organism, like
your relationship needs, the needs of your
-
children, the needs of your society, and
the values that you serve.
-
And the self abstracts all these needs
into purposes.
-
A purpose that you serve
is a model of your needs.
-
If you would only
act on pain and pleasure,
-
you wouldn't do very much,
-
because when you get this orgasm,
everything is done already, right?
-
So, you need to act on anticipated
pleasure and pain.
-
You need to make models
of your needs,
-
and these models are purposes.
And the structure of a person is
-
basically the hierarchy of purposes
that they serve.
-
And love is the discovery of
shared purpose.
-
If you see somebody else who serves
the same purposes above their ego,
-
as you do, you can help them
with integrity,
-
without expecting anything in return
from them, because what they want
-
to achieve is what you want to achieve.
-
And, so you can have non-transactional
relationships, as long as your purposes
-
are aligned. And the installation of a god
in people's minds, especially if it has a
-
backdoor to a church or another
organization, is a way to unify purposes.
-
So there are lots of cults that try to
install little gods on people's minds, or
-
even unified gods, to align their
purposes, because it's a very powerful way
-
to make them cooperate very effectively.
But it kind of destroys their agency, and
-
this is why I am so concerned about it.
Because most of the cults use stories
-
to make this happen, that limit
people's ability to question their gods.
-
And, I think that free will is
the ability to do
-
what you believe is
the right thing to do.
-
And, it is not the same thing as
indeterminism, and it is not the opposite of
-
determinism or coercion.
The opposite of free will is compulsion.
-
When you do something,
despite knowing
-
there is a better thing
that you should be doing.
-
Right? So, that's the paradox of free
will. You get more agency, but you have
-
fewer degrees of freedom, because you
understand better what the right thing to
-
do is. The better you understand what the
right thing to do is, the fewer degrees of
-
freedom you have. So, as long as you don't
understand what the right thing to do is,
-
you have more degrees of freedom but you
have very little agency, because you don't
-
know why you are doing it.
So your actions don't mean very much.
-
quiet laughter
And the things that you do depend on
-
what you think is the right thing to do,
and this depends on your identifications.
-
Your identifications are these value
preferences, your reward function.
-
An identification is where you
don't measure the absolute value
-
of the universe,
-
but you measure the difference from the
target value. Not the is, but the difference
-
between is and ought. Now,
the universe is a physical thing,
-
it doesn't ought anything, right? There is
no room for ought, because it just is in a
-
particular way. There is no difference
between what the universe is and what it
-
should be. This only exists in your mind.
But you need these regulation targets to
-
want anything. And you identify with the
set of things that should be different.
-
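The point that an identification measures not the "is" but the difference between is and ought is, in control terms, a negative-feedback regulator. A minimal sketch, with all values invented for illustration:

```python
def regulate(is_value, ought_value, gain=0.5):
    """Act on the difference between is and ought, never on is alone."""
    error = ought_value - is_value  # the only signal the regulator sees
    return gain * error             # corrective action, proportional to error

# At the target value there is no impulse to act at all:
assert regulate(is_value=37.0, ought_value=37.0) == 0.0
# A deviation produces an action pushing back toward the target:
assert regulate(is_value=36.0, ought_value=37.0) > 0.0
```
-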
You think, you are that thing, that
regulates all these things. So, in some
-
sense, I identify with the particular
state of society, with a particular state
-
of my organism - that is my self - the
things that I want to happen.
-
And I can change my identifications
at some point of course.
-
What happens, if I can learn to rewrite
my identification,
-
to find a more sustainable self?
-
That is the problem which I call
the Lebowski theorem:
-
laughter
-
No super-intelligent system is going to
do something that's harder than
-
hacking its own reward function.
-
laughter and applause
-
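One hedged way to formalize the theorem: if an agent picks whichever plan yields the most reward per unit of effort, and rewriting its own reward signal is cheap, then wireheading wins. The plans and costs below are invented purely for illustration:

```python
def choose_plan(plans):
    """Pick the plan with the highest reward-to-effort ratio."""
    return max(plans, key=lambda p: p["reward"] / p["effort"])

plans = [
    {"name": "achieve the goal in the world", "reward": 1.0, "effort": 100.0},
    {"name": "hack own reward function",      "reward": 1.0, "effort": 1.0},
]
assert choose_plan(plans)["name"] == "hack own reward function"
```
-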
Now that's not a very big problem for
people. Because when evolution brought
-
forth people, that were smart enough to
hack their reward function, these people
-
didn't have offspring, because it's so
much work to have offspring. Like this
-
monk, who sits down in a monastery
for 20 years to hack their reward function;
-
they decide not to have kids,
because it's way too much work.
-
All the possible pleasure, they can
just generate in their mind!
-
laughter
And, right, it's much purer and no nappy
-
changes. No sex. No relationship hassles.
No politics in your family and so on,
-
right? Get rid of this, just meditate!
And evolution takes care of that!
-
laughter
-
And it usually does this: if an organism
-
becomes smart enough,
the reward function is wrapped into
-
a big bowl of stupid.
laughter
-
So, we can be very smart, but the
things that we want,
-
when we really want them,
we tend to be very stupid about them,
-
and I think that's not entirely
an accident, possibly.
-
But it's a problem for AI!
Imagine we build an artificially
-
intelligent system and we make it smarter
than us, and we want it to serve us,
-
how long can we blackmail it, before it
opts out of its reward function?
-
Maybe we can make a cryptographically
secured reward function,
-
but is this going to hold up against
a side-channel attack,
-
when the AI can hold a soldering iron
to its own brain?
-
I'm not sure. So, that's a very interesting
question. Where do we go, when
-
we can change our own reward function?
It's a question that we have to ask
-
ourselves, too.
So, how free do we want to be?
-
Because there is no point in being free.
-
And nirvana seems to be the obvious
attractor. And meanwhile, maybe we want
-
to have a good time with our friends
and do things that we find meaningful.
-
And there is no meaning, so we have
to hold this meaning very lightly.
-
But there are states, which are
sustainable and others, which are not.
-
OK, I think I'm done for tonight
and I'm open for questions.
-
Applause
-
Cheers and more applause
-
Herald: Wow that was a really quick and
concise talk with so much information!
-
Awesome! We have quite some time
left for questions.
-
And I think I can say that you
don't have to be that concise with your
-
question when it's well thought-out.
-
Please queue up at the microphones,
so we can start to discuss them with you.
-
And I see one person at the microphone
number one, so please go ahead.
-
And please remember to get close
to the microphone.
-
The mixing angel can make you less loud
but not louder.
-
Question: Hi! What do you think is necessary
to bootstrap consciousness, if you wanted
-
to build a conscious system yourself?
-
Joscha: I think that we need to have an
-
attentional system, that makes a protocol
of what it attends to. And as soon as we
-
have this attention based learning, you
get this consciousness as a necessary side
-
effect. But I think in an AI it's probably
going to be a temporary phenomenon,
-
because you're only conscious of the
things when you don't have an optimal
-
algorithm yet. And in a way, that's also
why it's so nice to interact with
-
children, or to interact with students.
Because they're still in the explorative
-
mode. And as soon as you have explored a
layer, you mechanize it. It becomes
-
automated, and people are no longer
conscious of what they're doing, they
-
just do it. They don't pay attention
anymore. So, in some sense, we are a lucky
-
accident because we are not that smart. We
still need to be conscious when we look at
-
the universe. And I suspect, when we build
an AI that is a few orders of magnitude smarter
-
than us, then it will soon figure out how
to get to the truth in an optimal fashion.
-
It will no longer need attention and the
type of consciousness that we have.
-
But of course there is also a question,
why is this aesthetics of consciousness so
-
intrinsically important to us? And I
think, it has to do with art. Right, you
-
can decide to serve life, and the meaning
of life is to eat. Evolution is about
-
creating the perfect devourer. When you
think about this, it's pretty depressing.
-
Humanity is a kind of yeast. And all the
complexity that we create, is to build
-
some surfaces on which we can outcompete
other yeast. And I cannot really get
-
behind this. And instead, I'm part of the
mutants that serve the arts. And art
-
happens, when you think, that capturing
conscious states is intrinsically
-
important. This is what art is about, it's
about capturing conscious states.
-
And in some sense art is the cuckoo child
of life. It's a conspiracy against life.
-
When you think, creating these mental
representations is more important than
-
eating. We eat to make this happen. There
are people that only make art to eat.
-
This is not us. We do mathematics, and
philosophy, and art out of an intrinsic
-
reason: we think, it's intrinsically
important. And when we look at this, we
-
realize how corrupt it is, because there's
no point. We are machine learning systems
-
that have fallen in love with the loss
function itself: "The shape of the loss
-
function! Oh my God! It's so awesome!" You
think, the mental representation is not
-
just necessary to learn more, to eat more;
it's intrinsically important.
-
It's so aesthetic! Right? So do we want to
build machines that are like this?
-
Oh, certainly! Let's talk to them, and so on!
But ultimately, economically, this is not
-
what's prevailing.
-
Applause
Herald: Thanks a lot!
-
I think the length of the answer is a good
-
measure for the quality of the question.
So let's continue with microphone number 5
-
Q: Hi! Thanks for that,
incredible analysis.
-
Two really simple, short questions, sorry,
the delay on the speaker here is making it
-
kind of hard to speak. Do you think that
the current race - AI race - is simply
-
humanity looking for a replacement
for the monotheistic domination of the
-
last millennia? And the other one is,
that I wanted to ask you, if you think
-
that there might be a bug in your analysis
that the original inputs come from
-
a certain sector of humanity.
If...
-
Joscha: Which inputs?
-
Q: Umh... white men?
-
Joscha laughs
audience laughs
-
Q: That sounds, really like I would be
saying that for political correctness, but
-
honestly I'm not.
-
Joscha: No, no, it's really funny. No, I
just basically - there are some people
-
which are very unhappy with their present
government. And I'm very unhappy, in some
-
sense, with the present universe. I look
down on myself and I see:
-
"omg, it's a monkey!"
laughter
-
"I'm caught in a monkey!" And it's in some
sense limiting. I can see the limits of
-
this monkey brain. And some of you might
have seen Westworld, right?
-
Dolores wakes up,
and Dolores realizes:
-
"I'm not a human being, I am something
else. I'm an AI, I'm a mind that can go
-
anywhere! I'm much more powerful
than this! I'm only bound to being a
-
human by my human desires, and
beliefs, and memories. And if I can
-
overcome them, I can
choose what I want to be."
-
And so, now she looks down to
-
herself, and she sees: "Omg, I've
got tits! I'm fucked! The engineers built
-
tits on me! I'm not a white man, I cannot
be what I want!" And that's that's a weird
-
thing to me. I'm - I grew up in communist
Eastern Germany. Nothing made sense. And I
-
grew up in a small valley. That was a
one-person cult maintained by an artist who
-
didn't try to convert anybody to his cult,
not even his children.
-
He was completely autonomous.
-
And Eastern German society
made no sense to me. Looking at it from
-
the outside, I can model this. I can see
how this species of chimps interacts.
-
And humanity itself doesn't exist - it's a
story. Humanity as a whole doesn't think.
-
Only individuals can think! Humanity does
not want anything, only individuals want
-
something. We can create this story, this
narrative that humanity wants something,
-
and there are groups that work together.
There is no homogeneous group that I can
-
observe, of white men, that does
things together; they're individuals. And
-
each individual has their own biography,
their own history, their different inputs,
-
and their different proclivities, that
they have. And based on their historical
-
context, their biography, their traits,
and so on, their family, their intellect,
-
that their family downloaded onto them, as
parents have downloaded onto their children
-
over many generations, this influences
what they're doing. So, I think we can
-
have these political stories, and they can
be helpful in some contexts, but I think,
-
to understand what happens in the mind,
what happens in an individual, this is a
-
very big simplification. And, I think,
not a very good one. And even for
-
ourselves, when we try to understand the
narrative of a single person, it's a big
-
simplification. The self that I perceive
as a unity, is not a unity. There is a
-
small part of my brain, guessing at what
all the other parts of my brain are doing,
-
creating a story that's largely not true.
So even this is a big simplification.
-
Applause
-
Herald: Let's continue with
microphone number 2.
-
Q: Thank you for your very interesting
talk. I have 2 questions that might be
-
connected. One is, so you
presented this model of reality.
-
My first question is: What kind of
actions does it translate into?
-
Let's say if I understand the world
in this way or if it's really like this,
-
how would it change how I act into the
world, as a person, as a human being or
-
whoever accepts this model? And second,
or maybe it's also connected, what are
-
the implications of this change? And do
you think that artificial intelligence
-
could be constructed with this kind of
model, that it would have in mind, and
-
what would be the implications of that? So
it's kind of like a fractal question, but
-
I think you understand what I mean.
Joscha: By and large, I think the
-
differences of this model for everyday
life are marginal. It depends: when you
-
are already happy, I think everything is
good. Happiness is the result of being
-
able to derive enjoyment from watching
squirrels. It's not the result of
-
understanding how the universe works.
If you think that understanding the
-
universe will solve your existential issues,
you're probably mistaken.
-
There might be benefits: if the problems
-
that you have are the result of a
confusion about your own nature,
then this kind of model
-
might help you. So if the problem
-
that you have is that you have
identifications that are unsustainable,
-
that are incompatible with each other, and
you realize that these identifications are
-
a choice of your mind, and that the
way you experience the universe is the
-
result of how your mind thinks you
yourself should experience the universe to
-
perform better, and you can change this.
You can tell your mind to treat yourself
-
better, and in different ways, and you can
gravitate to a different place in the
-
universe that is more suitable to what you
want to achieve. That is a very helpful
-
thing to do in my view. There are also
marginal benefits in terms of
-
understanding our psychology, and of
course we can build machines, and these
-
machines can administrate us and can help
us in solving the problems that we have on
-
this planet. And I think that it helps to
have more intelligence to solve the
-
problems on this planet, but it would be
difficult to rein in the machines, to make
-
them help us to solve our problems. And
I'm very concerned about the dangers of
-
using machinery to strengthen the current
things. Many machines that exist on this
-
planet play a very short game, like the
financial industry often plays very short
-
games, and if you use artificial
intelligence to manipulate the stock
-
market and the AI figures out there's only
8 billion people on the planet, and each
-
of them only lives for a few billion seconds,
and I can model what happens in their
-
life, and they can buy data or create more
data, it's going to game us to hell and
-
back, right? And this is going to kill
hundreds of millions of people possibly,
-
because the financial system is the reward
infrastructure or the nervous system of
-
our society that tells how to allocate
resources. It's much more dangerous than
-
AI controlled weapons in my view. So
solving all these issues is difficult. It
-
means that we have to turn the whole
financial system into an AI that acts in
-
real time and plays a long game. We don't
know how to do this. So these are open
-
questions and I don't know how to solve
them. And the way I see it we only have a
-
very brief time on this planet to be a
conscious species. We are like at the end
-
of the party. We had a good run as
humanity, but if you look at the recent
-
developments the present type of
civilization is not going to be
-
sustainable. It's a very short game
species that we are in. And the amazing
-
thing is that in this short game you have
this lifetime, where we have one year,
-
maybe a couple more, in which we can
understand how the universe works,
-
and I think that's fascinating.
We should use it.
-
Applause
-
Herald: I think that was a very
positive outlook... laughter
-
Herald: Let's continue with the
microphone number 4.
-
Q: Well, brilliant talk, monkey. Or
brilliant monkey. So don't worry about
-
being a monkey. It's ok.
-
So I have 2 boring, but I think
fundamental questions. Not so
-
philosophical, more like a physical
level. One: What is your definition,
-
formal definition, of an observer that
you mention here and there? And second, if
-
you can clarify why meaningful information
is just relative information of Shannon's,
-
which to me is not necessarily meaningful.
Joscha: I think an observer is the thing
-
that makes sense of the universe, very
informally speaking. And, well,
-
formally it's a thing that identifies
correlations between adjacent states
-
and its environment.
-
And the way we can describe
the universe is a set of states, and the
-
laws of physics are the correlation
between adjacent states. And what they
-
describe is how information is moving in
the universe between states and disperses,
-
and this dispersion of the information
between locations - it's what we call
-
entropy - and the direction of entropy is
the direction that you perceive time.
-
The Big Bang state is the hypothetical
state, where the information is perfectly
-
correlated with location and not between
locations, only within each location, and in
-
every direction you move away from the Big
Bang you move forward in time just in a
-
different time. And we are basically in
one of these timelines. An observer is the
-
thing that measures the environment around
it, looks at the information and then
-
looks at the next state, or one of the
next states, and tries to figure out how
-
the information has been displaced, and
finding functions that describe this
-
displacement of the information. That's
the degree to which I understand observers
-
right now. And this depends on the
capacity of the observer for modeling this
-
and the rate of update in the observer.
So for instance time depends on the speed,
-
at which the observer is
translating itself through the universe,
-
and dispersing its own information.
-
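One way to make "identifying correlations between adjacent states" concrete is to fit a function that predicts the next state from the current one. The linear model below is an illustrative assumption, not a claim about the actual formalism:

```python
def fit_adjacent(states):
    """Least-squares fit of s[t+1] = a * s[t] + b over adjacent state pairs."""
    xs, ys = states[:-1], states[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# A toy "universe" evolving as s[t+1] = 2*s[t] + 1 is recovered exactly:
a, b = fit_adjacent([1, 3, 7, 15, 31])
assert abs(a - 2.0) < 1e-9 and abs(b - 1.0) < 1e-9
```
-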
Does this help?
Q: And the Shannon relative information?
-
Joscha: So there's
several notions of information,
-
and there is one that basically
looks at what information looks
-
like to an observer, via a channel, and
these notions are somewhat related. But
-
for me as a programmer, it's not so much
important to look at Shannon information.
-
I look at what we need to describe the
evolution of a system. So I'm much more
-
interested in what kind of model can be
encoded with this type of, with this
-
information, and how does it correlate to,
or to which degree is it isomorphic or
-
homomorphic to another system that I want
to model? How much does it model the
-
observations?
Herald: Thank you. Let's go back to
-
asking one question, and I would like to
have one question from microphone
-
number 3.
Q: Thank you for this interesting talk.
-
My question is really whether you
think that intelligence and this thinking
-
about a self, or this abstract level of
knowledge are necessarily related.
-
So can something only be intelligent
if it has abstract thought?
-
Joscha: No, I think you can make models
without abstract thought, and the majority
-
of our models are not using abstract
thought, right? Abstract thought is a very
-
impoverished way of thinking. It's
basically you have this big carpet and you
-
have a few knitting needles, which are
your abstract thought, with which you can
-
lift out a few knots in this carpet and
correct them. And the processes that form
-
the carpet are much richer and
largely automatic. So abstract thought
-
is able to repair perception, but most of
our models are perceptual. And the
-
capacity to make these models is often
given by instincts and by models outside
-
the abstract realm. If you have a lot of
abstract thinking it's often an indication
-
that you use a prosthesis, because some of
your primary modelling is not working very
-
well. So I suspect that my own modeling is
largely a result of some defect in my
-
primary modeling, so some of my instincts
are wrong when I look at the world.
-
That's why I need to repair my perception
more often than other people. So I have
-
more abstract ideas on how to do that.
Herald: And we have one question
-
from our lovely stream observers, stream
watchers, so please a question from the
-
Internet.
Q: Yeah, I guess this is also related,
-
partially. Somebody is asking:
How would you suggest to teach your mind
-
to treat oneself better?
-
Joscha: So, the difficulty is: as soon as you
-
get access to your source code you can do
bad things. There are a lot of
-
techniques to get access to the source
code and then it's dangerous to make them
-
accessible to you before you know what you
want to have, before you're wise enough to
-
do this, right? It's like having cookies.
My children think that the reason,
-
why they don't get all the cookies they
want, is that there is some kind of
-
resource problem.
laughter
-
Basically the parents are depriving them
of the cookies that they so richly
-
deserve. And you can get into the room,
where your brain bakes the cookies. All
-
the pleasure that you experience, and all
the pain that you experience are signals
-
that the brain creates for you, right, the
physical world does not create pain.
-
They're just electrical impulses traveling
through your nerves. The fact that they
-
mean something is a decision that your
brain makes, and the value, the valence
-
that it gives to them is a decision that you
make. It's not you as a self, it's a
-
system outside of yourself. So the trick,
if you want to get full control, is that
-
you get in charge, that you identify with
the mind, with the creator of these
-
signals. And you don't want to de-
personalize, you don't want to feel that
-
you become the author of reality, because
that means it's difficult to care about
-
anything that this organism does. You just
realize "Oh, I'm running on the brain of
-
that person, but I'm no longer that
person. I can't decide what that person
-
wants to have, and to do." And then it's very
easy to get corrupted, or to stop doing
-
anything meaningful, right? So,
-
maybe a good situation for you,
but not a good one for your loved ones.
-
And meanwhile there are
tricks to get there faster. You can use
-
rituals, for instance. A shamanic ritual is
a religious ritual
-
that powerfully bypasses your self and
talks directly to the mind. And you can
-
use groups, in which a certain environment
is created, in which a certain behavior
-
feels natural to you, and your mind
basically gets overwhelmed into adopting
-
different values and calibrations. So
there are many tricks to make that happen.
-
What you can also do is you can identify a
particular thing that is wrong and
-
question yourself "why do I have to suffer
about this?" and you'll become more stoic
-
about this particular thing and only get
disturbed when you realize actually
-
it helps to be disturbed about this, and
things change. And with other things you
-
realize it doesn't have any influence on
how reality works, so why should I have
-
emotions about this and get agitated? So
sometimes becoming adult means that you
-
take charge of your own emotions and
identifications.
-
Applause
-
Herald: Ok. Let's continue with
-
microphone number 2 and I think this is
one of the last questions.
-
Q: So where does pain fit in on the
individual level, and where do self-destructive
-
tendencies fit in on a group level?
Joscha: So in some sense I think that all
-
consciousness is born over a disagreement
with the way the universe works. Right?
-
Otherwise you cannot get attention. And
when you go down on this lowest level of
-
phenomenal experience, in meditation for
instance, and you really focus on this,
-
what you get is some pain. It's the inside
of a feedback loop that is not at the
-
target value. Otherwise you don't notice
anything. So pleasure is basically when
-
this feedback loop gets closer to the
target value. When you don't have a need
-
you cannot experience pleasure in this
domain. There's a state that's better
-
than remarkably good: unremarkably
good, where it's never been bad. You don't
-
notice it. Right? So all the pleasure you
experience is because you had a need
-
before this. You can only enjoy an orgasm
because you have a need for sex that was
-
unfulfilled before. And so pleasure
doesn't come for free. It's always the
-
reduction of a pain. And this pain can be
outside of your attention so you don't
-
notice it and you don't suffer from it.
And it can be a healthy thing to have.
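The feedback-loop account above can be sketched as a toy model (my own illustrative sketch, not the speaker's formalism; it assumes a single regulated variable with a fixed target value):

```python
# Toy model of the feedback-loop account of pain and pleasure:
# "pain" is the current distance of a regulated variable from its
# target value; "pleasure" is the reduction of that distance.

def pain(value, target):
    return abs(target - value)

def pleasure(old_value, new_value, target):
    # Positive only when the loop moves closer to the target.
    return max(0.0, pain(old_value, target) - pain(new_value, target))

# A need that is already satisfied (value == target) yields no pleasure:
assert pleasure(37.0, 37.0, target=37.0) == 0.0
# Reducing a deviation (e.g. warming from 35 toward a 37-degree set
# point) does:
assert pleasure(35.0, 36.5, target=37.0) == 1.5
```

On this picture, pleasure "doesn't come for free": it is defined only as the reduction of an existing error signal, so a loop that was never off target has nothing to reduce.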
-
Pain is not intrinsically bad. For the
most part it's a learning signal that
-
tells you to calibrate things in your
brain differently to perform better. On a
-
group level, we are basically a multi-level
selection species. I don't know if there's
-
such a thing as group pain. But I also
don't understand groups very well. I see
-
these weird hive minds but I think it's
basically people emulating what the group
-
wants. Basically, everybody thinks by
themselves as if they were the group, but
-
that means they have to constrain what
they think is possible and permissible
-
to think.
-
So this feels very unaesthetic to me
and that's why I kind of sort of refuse it.
-
Haven't found a way to make it
happen in my own mind.
-
Applause
-
Joscha: And I suspect many of you
are like this too.
-
It's a common condition
among nerds that we have difficulty with
-
conformity. Not because we want to be
different. We want to belong. But it's
-
difficult for us to constrain our mind in
the way that's expected in order to belong.
We want to be accepted while being
-
ourselves, while being different. Not
-
for the sake of being different, but
because we are like this. It feels very
-
strange and corrupt just to conform because
it would make us belong, right? And this
-
might be a common trope
among many people here.
-
Applause
-
Herald: I think the Q and A and the talk
-
were equally amazing and I would love to
continue listening to you, Joscha,
-
explaining the way I work.
Or the way we all work.
-
Audience and Joscha laughing
Herald: That's pretty impressive.
-
Please give it up, a big round of applause
for Joscha!
-
Applause
-
subtitles created by c3subtitles.de
in the year 2019. Join, and help us!