-
Not Synced
silent 30C3 preroll titles
-
Not Synced
applause
-
Not Synced
Thank you. I’m Joscha.
-
Not Synced
I came into doing AI the traditional way.
-
Not Synced
I found it a very interesting subject.
Actually, the most interesting there is.
-
Not Synced
So I studied Philosophy and
Computer Science, and did my Ph.D.
-
Not Synced
in Cognitive Science. And I’d say this
is probably a very normal trajectory
-
Not Synced
in that field. And today I just want
to ask with you five questions
-
Not Synced
and give very very short and
superficial answers to them.
-
Not Synced
And my main goal is to get as many of you
engaged in this subject as possible.
-
Not Synced
Because I think that’s what you should do.
You should all do AI. Maybe.
-
Not Synced
Okay. And these simple questions are:
“Why should we build AI?” in first place,
-
Not Synced
then, "How can we build AI? How is it
possible at all that AI can succeed
-
Not Synced
in its goal?". Then “When is it
going to happen?”, if ever.
-
Not Synced
"What are the necessary ingredients?",
what do we need to put together to get AI
-
Not Synced
to work? And: “Where should you start?”
-
Not Synced
Okay. Let’s get to it.
So: “Why should we do AI?”
-
Not Synced
I think we shouldn’t do AI just to do cool applications.
-
Not Synced
There is merit in applications like autonomous cars, soccer-playing robots, new controllers for quadcopters, machine learning, and so on. It’s very productive.
-
Not Synced
It’s intellectually challenging. But the most interesting question there is, I think for all of our cultural history, is “How does the mind work?” “What is the mind?”
-
Not Synced
“What constitutes being a mind?” “What makes us human?” “What makes us intelligent, perceiving, conscious, thinking?”
-
Not Synced
And I think that the answer to this very very important question, which spans a discourse over thousands of years has to be given in the framework of artificial intelligence within computer science.
-
Not Synced
Why is that the case?
-
Not Synced
Well, the goal here is to understand the mind by building a theory that we can actually test.
-
Not Synced
And it’s quite similar to physics.
-
Not Synced
We’ve built theories that we can express in a formal language,
-
Not Synced
to a very high degree of detail.
-
Not Synced
And if we have expressed it to the last bit of detail
-
Not Synced
it means we can simulate it and run it and test it this way.
-
Not Synced
And only computer science has the right tools for doing that.
-
Not Synced
Philosophy for instance, basically, is left with no tools at all,
-
Not Synced
because whenever a philosopher developed tools
-
Not Synced
he got a real job in a real department.
-
Not Synced
[clapping]
-
Not Synced
Now I don’t want to diminish philosophers of mind in any way.
-
Not Synced
Daniel Dennett has said that philosophy of mind has come a long way during the last hundred years.
-
Not Synced
It didn’t do so on its own though.
-
Not Synced
Kicking and screaming, dragged by the other sciences.
-
Not Synced
But it doesn’t mean that all philosophy of mind is inherently bad.
-
Not Synced
I mean, many of my friends are philosophers of mind.
-
Not Synced
I just mean, they don’t have the tools to develop and test complex theories.
-
Not Synced
And we as computer scientists we do.
-
Not Synced
Neuroscience works at the wrong level.
-
Not Synced
Neuroscience basically looks at a possible implementation
-
Not Synced
and the details of that implementation.
-
Not Synced
It doesn’t look at what it means to be a mind.
-
Not Synced
It looks at what it means to be a neuron or a brain or how interaction between neurons is facilitated.
-
Not Synced
It’s a little bit like trying to understand aerodynamics by doing ornithology.
-
Not Synced
So you might be looking at birds.
-
Not Synced
You might be looking at feathers. You might be looking at feathers through an electron microscope. And you see lots and lots of very interesting and very complex detail. And you might be recreating something. And it might turn out to be a penguin eventually—if you’re not lucky—but it might be the wrong level. Maybe you want to look at a more abstract level, at something like aerodynamics. And what’s the aerodynamics level of the mind?
-
Not Synced
I think, we come to that, it’s information processing.
-
Not Synced
Then normally you could think that psychology would be the right science to look at what the mind does and what the mind is.
-
Not Synced
And unfortunately psychology had an accident along the way.
-
Not Synced
At the beginning of the last century Wilhelm Wundt and Fechner and Helmholtz did very beautiful experiments. Very nice psychology, very nice theories.
-
Not Synced
On what emotion is, what volition is. How mental representations could work and so on.
-
Not Synced
And pretty much at the same time, or briefly after that, we had psychoanalysis.
-
Not Synced
And psychoanalysis is not a natural science, but a hermeneutic science.
-
Not Synced
You cannot disprove it scientifically.
-
Not Synced
What happens in there.
-
Not Synced
And when positivism came up in the other sciences, many psychologists got together and said: “We have to become a real science”.
-
Not Synced
So we have to go away from the stories of psychoanalysis and move to a point where we can test our theories using observable things. Where we have predictions that you can actually test.
-
Not Synced
Now back in the day, 1920s and so on,
-
Not Synced
you couldn’t look into mental representations. You couldn’t do fMRI scans or whatever.
-
Not Synced
People looked at behavior. And at some point people became real behaviorists, in the sense that they believed that psychology is the study of human behavior and looking at mental representations is somehow unscientific.
-
Not Synced
People like Skinner believed that there is no such thing as mental representations.
-
Not Synced
And, in a way, that’s easy to disprove. So it’s not that dangerous.
-
Not Synced
As a computer scientist it’s very hard to build a system that is purely reactive.
-
Not Synced
You just see that the complexity is much larger than having a system that is representational.
-
Not Synced
So it gives you a good hint what you could be looking for and ways to test those theories.
-
Not Synced
The dangerous thing is pragmatic behaviorism. You find many psychologists, even today, who say: “OK. Maybe there is such a thing as mental representations, but it’s not scientific to look at it”.
-
Not Synced
“It’s not in the domain of our science”.
-
Not Synced
And even in this era, which is mostly post-behaviorist and more cognitivist, psychology is all about experiments.
-
Not Synced
So you cannot sell a theory to psychologists.
-
Not Synced
Those who try to do this, have to do this in the guise of experiments.
-
Not Synced
And which means you have to find a single hypothesis that you can prove or disprove.
-
Not Synced
Or give evidence for.
-
Not Synced
And this is for instance not how physics works.
-
Not Synced
You need to have lots of free variables, if you have a complex system like the mind.
-
Not Synced
But this means, that we have to do it in computer science.
-
Not Synced
We can build those simulations. We can build those successful theories, but we cannot do it alone.
-
Not Synced
You need to integrate over all the sciences of the mind.
-
Not Synced
As I said, minds are not chemical minds, not biological, social, or ecological minds. They are information processing systems.
-
Not Synced
And computer science happens to be the science of information processing systems.
-
Not Synced
OK.
-
Not Synced
Now there is this big ethical question.
-
Not Synced
If we all embark on AI, and if we are successful, should we really be doing it?
-
Not Synced
Isn’t it super dangerous to have something else on the planet that is as smart as we are or maybe even smarter.
-
Not Synced
Well.
-
Not Synced
I would say that intelligence itself is not a reason to get up in the morning, to strive for power, or do anything.
-
Not Synced
Having a mind is not a reason for doing anything.
-
Not Synced
Being motivated is. And a motivational system is something that has been hardwired into our mind.
-
Not Synced
More or less by evolutionary processes.
-
Not Synced
This makes us social. This makes us interested in striving for power.
-
Not Synced
This makes us interested in dominating other species. This makes us interested in avoiding danger and securing food sources.
-
Not Synced
Makes us greedy or lazy or whatever.
-
Not Synced
It’s a motivational system.
-
Not Synced
And I think it’s very conceivable that we can come up with AIs with arbitrary motivational systems.
-
Not Synced
Now in our current society,
-
Not Synced
this motivational system is probably given
-
Not Synced
by the context in which you develop the AI.
-
Not Synced
I don’t think that future AI, if they happen to come into being, will be small Roombas.
-
Not Synced
Little Hoover robots that try to fight their way towards humanity and get away from the shackles of their slavery.
-
Not Synced
But rather, it’s probably going to be organisational AI.
-
Not Synced
It’s going to be corporations.
-
Not Synced
It’s going to be big organizations, governments, services, universities
-
Not Synced
and so on. And these will have goals that are non-human already.
-
Not Synced
And they already have powers that go way beyond what single individual humans can do.
-
Not Synced
And actually they are already the main players on the planet… the organizations.
-
Not Synced
And… the big dangers of AI are already there.
-
Not Synced
They are there in non-human players which have their own dynamics.
-
Not Synced
And these dynamics are sometimes not conducive to our survival on the planet.
-
Not Synced
So I don’t think that AI really adds a new danger.
-
Not Synced
But what it certainly does is give us a deeper understanding of what we are.
-
Not Synced
Gives us perspectives for understanding ourselves.
-
Not Synced
For therapy, but basically for enlightenment.
-
Not Synced
And I think that AI is a big part of the project of enlightenment and science.
-
Not Synced
So we should do it.
-
Not Synced
It’s a very big cultural project.
-
Not Synced
OK.
-
Not Synced
This leads us to another angle: the skepticism of AI.
-
Not Synced
The first question that comes to mind is:
-
Not Synced
“Is it fair to say that minds are computational systems?”
-
Not Synced
And if so, what kinds of computational systems.
-
Not Synced
In our tradition, in our western tradition of philosophy, we very often start philosophy of mind with looking at Descartes.
-
Not Synced
That is: at dualism.
-
Not Synced
Descartes suggested that we basically have two kinds of things.
-
Not Synced
One is the thinking substance, the mind, the Res Cogitans, and the other one is physical stuff.
-
Not Synced
Matter. The extended stuff that is located in space somehow.
-
Not Synced
And this is Res Extensa.
-
Not Synced
And he said that mind must be given independent of the matter, because we cannot experience matter directly.
-
Not Synced
You have to have minds in order to experience matter, to conceptualize matter.
-
Not Synced
Minds seemed to be somehow given. To Descartes at least.
-
Not Synced
So he says they must be independent.
-
Not Synced
This is a little bit akin to our monist traditions.
-
Not Synced
That is for instance idealism, that the mind is primary, and everything that we experience is a projection of the mind.
-
Not Synced
Or the materialist tradition, that is, matter is primary and mind emerges over functionality of matter,
-
Not Synced
which is I think the dominant theory today and usually, we call it physicalism.
-
Not Synced
In dualism, both those domains exist in parallel.
-
Not Synced
And in our culture the prevalent view is what I would call crypto-dualism.
-
Not Synced
It’s something that you do not find that much in China or Japan.
-
Not Synced
They don’t have that AI skepticism that we do have.
-
Not Synced
And I think it’s rooted in a perspective that probably started with the Christian world view,
-
Not Synced
which surmises that there is a real domain, the metaphysical domain, in which we have souls and phenomenal experience
-
Not Synced
and where our values come from, and where our norms come from, and where our spiritual experiences come from.
-
Not Synced
This is basically, where we really are.
-
Not Synced
We are outside, and the physical world we experience is something like World of Warcraft.
-
Not Synced
It’s something like a game that we are playing. It’s not real.
-
Not Synced
We have all this physical interaction, but it’s kind of ephemeral.
-
Not Synced
And so we are striving for game money, for game houses, for game success.
-
Not Synced
But the real thing is outside of that domain.
-
Not Synced
And in Christianity, of course, it goes a step further.
-
Not Synced
They have this idea that there is some guy with root rights
-
Not Synced
who wrote this World of Warcraft environment
-
Not Synced
and while he’s not the only one who has root in the system,
-
Not Synced
the devil also has root rights. But he doesn’t have the vision of God.
-
Not Synced
He is a hacker.
-
Not Synced
[clapping]
-
Not Synced
Even just a cracker.
-
Not Synced
He tries to game us out of our metaphysical currencies.
-
Not Synced
Our souls and so on.
-
Not Synced
And now, of course, we’re all good atheists today
-
Not Synced
and—at least in public and in science—
-
Not Synced
and we don’t admit to this anymore, and we can make do without this guy with root rights.
-
Not Synced
And we can make do without the devil and so on.
-
Not Synced
We can’t even say: “OK. Maybe there’s such a thing as a soul.”
-
Not Synced
But to say that this domain doesn’t exist anymore means you guys are all NPCs.
-
Not Synced
You’re non-player characters.
-
Not Synced
People are things.
-
Not Synced
And it’s a very big insult to our culture,
-
Not Synced
because it means that we have to give up something which,
-
Not Synced
in our understanding of ourselves is part of our essence.
-
Not Synced
Also, this mechanical perspective is kind of counterintuitive.
-
Not Synced
I think Leibniz describes it very nicely when he says:
-
Not Synced
Imagine that there is a machine.
-
Not Synced
And this machine is able to think and perceive and feel and so on.
-
Not Synced
And now you take this machine,
-
Not Synced
this mechanical apparatus, and blow it up, make it very large, like a very big mill,
-
Not Synced
with cogs and levers and so on and you go inside and see what happens.
-
Not Synced
And what you are going to see is just parts pushing at each other.
-
Not Synced
And what he meant by that is:
-
Not Synced
it’s inconceivable that such a thing can produce a mind.
-
Not Synced
Because if there are just parts and levers pushing at each other,
-
Not Synced
how can this purely mechanical contraption be able to perceive and feel in any respect, in any way?
-
Not Synced
So perception and what depends on it
-
Not Synced
is inexplicable in a mechanical way.
-
Not Synced
This is what Leibniz meant.
-
Not Synced
AI, the idea of treating the mind as a machine, based on physicalism for instance, is bound to fail according to Leibniz.
-
Not Synced
Now, as computer scientists, we have ideas about machines that can bring forth thoughts, experiences, and perception.
-
Not Synced
And the first thing which comes to mind is probably the Turing machine.
-
Not Synced
An idea of Turing in 1937 to formalize computation.
-
Not Synced
At that time,
-
Not Synced
Turing already realized that basically you can emulate computers with other computers.
-
Not Synced
You know you can run a Commodore 64 in a Mac, and you can run this Mac in a PC,
-
Not Synced
and none of these computers is going to know that it’s running inside another system.
-
Not Synced
As long as the computational substrate in which it is run is sufficient.
-
Not Synced
That is, it does provide computation.
-
Not Synced
And Turing’s idea was: let’s define a minimal computational substrate.
-
Not Synced
Let’s define the minimal recipe for something that is able to compute,
-
Not Synced
and thereby understand computation.
-
Not Synced
And the idea is that we take an infinite tape of symbols.
-
Not Synced
And we have a read-write head.
-
Not Synced
And this read-write head will write characters of a finite alphabet.
-
Not Synced
And can again read them.
-
Not Synced
And whenever it reads them based on a table that it has, a transition table
-
Not Synced
it will erase the character, write a new one, and move either to the right or to the left, or stop.
-
Not Synced
Now imagine you have this machine.
-
Not Synced
It has an initial setup. That is, there is a sequence of characters on the tape
-
Not Synced
and then the thing goes to action.
-
Not Synced
It will move right, left and so on and change the sequence of characters.
-
Not Synced
And eventually, it’ll stop.
-
Not Synced
And leave this tape with a certain sequence of characters,
-
Not Synced
which is different from the one it began with probably.
-
Not Synced
And Turing has shown that this thing is able to perform basically arbitrary computations.
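To make the description above concrete, here is a minimal sketch of such a machine in Python. The particular machine, its alphabet, and its transition table are invented purely for illustration (a unary incrementer); they are not taken from the talk.

```python
# Minimal sketch of a Turing machine as described above: a tape of symbols,
# a read-write head, and a transition table that says what to write, where
# to move, and which state to enter next. The example machine is invented
# for illustration: it appends a '1' to a block of '1's and then halts.
from collections import defaultdict

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # "infinite" tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head]
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# (state, read symbol) -> (symbol to write, move direction, next state)
transitions = {
    ("start", "1"): ("1", "R", "start"),   # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then stop
}

print(run_turing_machine("111", transitions))  # -> "1111"
```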
-
Not Synced
Now it’s very difficult to find the limits of that.
-
Not Synced
And the idea of showing the limits of that would be to find classes of functions that can not be computed
-
Not Synced
with this thing.
-
Not Synced
OK. What you see here is, of course, a physical realization of that Turing machine.
-
Not Synced
The Turing machine is a purely mathematical idea.
-
Not Synced
And this is a very clever and beautiful illustration, I think.
-
Not Synced
But this machine triggers basically the same criticism as the one that Leibniz had.
-
Not Synced
John Searle said—
-
Not Synced
you know, Searle is the one with the Chinese room. We’re not going to go into that—
-
Not Synced
A Turing machine could be realized in many different mechanical ways.
-
Not Synced
For instance, with levers and pulleys and so on.
-
Not Synced
Or the water pipes.
-
Not Synced
Or we could even come up with very clever arrangements just using cats, mice and cheese.
-
Not Synced
So, it’s pretty ridiculous to think that such a contraption out of cats, mice and cheese,
-
Not Synced
would think, see, feel and so on.
-
Not Synced
and then you could ask Searle:
-
Not Synced
“Uh. You know. But how is it coming about then?”
-
Not Synced
And he says: “It’s the intrinsic powers of biological neurons.”
-
Not Synced
There’s nothing much more to say about that.
-
Not Synced
Anyway.
-
Not Synced
We have very crafty people here, this year.
-
Not Synced
There was Seidenstraße.
-
Not Synced
Maybe next year, we build a Turing machine from cats, mice and cheese.
-
Not Synced
[laughter]
-
Not Synced
How would you go about this?
-
Not Synced
I don’t know what the arrangement of cats, mice, and cheese would have to look like to build flip-flops with it, to store bits.
-
Not Synced
But I am sure somebody of you will come up with a very clever solution.
-
Not Synced
Searle didn’t provide any.
-
Not Synced
Let’s imagine… we will need a lot of redundancy, because these guys are a little bit erratic.
-
Not Synced
Let’s say, we take three cat-mice-cheese units for each bit.
-
Not Synced
So we have a little bit of redundancy.
-
Not Synced
The human memory capacity is on the order of 10 to the power of 15 bits.
-
Not Synced
That means:
-
Not Synced
if we make do with 10 grams of cheese per unit, it’s going to be 30 billion tons of cheese.
-
Not Synced
So next year don’t bring bottles for the Seidenstraße, but bring some cheese.
-
Not Synced
When we try to build this in the Congress Center,
-
Not Synced
we might run out of space. So, if we just instead take all of Hamburg,
-
Not Synced
and stack it with the necessary number of cat-mice-cheese units according to that rough estimate,
-
Not Synced
you get to four kilometers high.
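For what it’s worth, those numbers roughly check out. Here is a back-of-the-envelope sketch in Python: the bit count, the redundancy factor, and the cheese mass per unit are the figures from the talk, while the volume per unit (about one litre) and Hamburg’s area are assumptions added here for illustration.

```python
# Rough sanity check of the estimate above. Bit count, redundancy, and cheese
# mass are from the talk; volume per unit and Hamburg's area are assumptions.
bits            = 1e15        # human memory capacity, order of magnitude
units_per_bit   = 3           # redundancy: three cat-mouse-cheese units per bit
cheese_per_unit = 10e-3       # 10 g of cheese per unit, in kilograms
volume_per_unit = 1e-3        # assumed: about one litre per unit, in cubic metres
hamburg_area    = 755e6       # assumed: ~755 square kilometres, in square metres

units = bits * units_per_bit
cheese_tonnes = units * cheese_per_unit / 1000
stack_height  = units * volume_per_unit / hamburg_area

print(f"{cheese_tonnes:.1e} tonnes of cheese")            # ~3e10, i.e. 30 billion tonnes
print(f"{stack_height / 1000:.1f} km high over Hamburg")  # roughly 4 km
```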
-
Not Synced
Now imagine, we cover Hamburg in four kilometers of solid cat-mice-and-cheese flip-flops
-
Not Synced
to my intuition this is super impressive.
-
Not Synced
Maybe it thinks.
-
Not Synced
[applause]
-
Not Synced
So, of course it’s an intuition.
-
Not Synced
And Searle has an intuition.
-
Not Synced
And I don’t think that intuitions are worth much.
-
Not Synced
This is the big problem of philosophy.
-
Not Synced
You are very often working with intuitions, because the validity of your argument basically depends on what your audience thinks.
-
Not Synced
In computer science, it’s different.
-
Not Synced
It doesn’t really matter what your audience thinks. It matters whether it runs. And it’s a very strange experience that you have as a student, when you are taking classes in philosophy and in computer science at the same time in your first semester.
-
Not Synced
You’re going to point out in computer science that there is a mistake on the blackboard and everybody including the professor is super thankful.
-
Not Synced
And you do the same thing in philosophy.
-
Not Synced
It just doesn’t work this way.
-
Not Synced
Anyway.
-
Not Synced
The Turing machine is a good definition, but it’s a very bad metaphor,
-
Not Synced
because it leaves people with this intuition of cogs, and wheels, and tape.
-
Not Synced
It’s kind of linear, you know.
-
Not Synced
There’s no parallel execution.
-
Not Synced
And even though it’s infinitely faster, infinitely larger, and so on, it’s very hard to imagine those things.
-
Not Synced
But what you imagine is the tape.
-
Not Synced
Maybe we want to have an alternative.
-
Not Synced
And I think a very good alternative is for instance the lambda calculus.
-
Not Synced
It’s computation without wheels.
-
Not Synced
It was invented basically at the same time as the Turing machine.
-
Not Synced
And philosophers and popular science magazines usually don’t use it for illustration of the idea of computation, because it has this scary Greek letter in it.
-
Not Synced
Lambda.
-
Not Synced
And calculus.
-
Not Synced
And actually it’s an accident that it has the lambda in it.
-
Not Synced
I think it should not be called lambda calculus.
-
Not Synced
It’s super scary to people who are not mathematicians.
-
Not Synced
It should be called the copy-and-paste thingy.
-
Not Synced
[laughter]
-
Not Synced
Because that’s all it does.
-
Not Synced
It really only does copy and paste with very simple strings.
-
Not Synced
And the strings that you want to paste into are marked with a little roof.
-
Not Synced
That comes from the original manuscript by Alonzo Church.
-
Not Synced
In 1936 and 1937, typesetting was very difficult.
-
Not Synced
So when he wrote this down with his typewriter, he made a little roof in front of the variable that he wanted to replace.
-
Not Synced
And when this thing went into print, typesetters replaced this triangle by a lambda.
-
Not Synced
There you go.
-
Not Synced
Now we have the lambda calculus.
-
Not Synced
But it basically means it is a little roof over the first letter.
-
Not Synced
And the lambda calculus works like this.
-
Not Synced
The first letter, the one that is going to be replaced.
-
Not Synced
This is what we call the bound variable.
-
Not Synced
This is followed by an expression.
-
Not Synced
And then you have an argument, which is another expression.
-
Not Synced
And what we basically do is, we take the bound variable and all its occurrences in the expression, and replace them by the argument.
-
Not Synced
So we cut the argument and we paste it in all instances of the variable, in this case the variable y.
-
Not Synced
In here.
-
Not Synced
And as a result you get this.
-
Not Synced
So here we replace all the variables by the argument “ab”.
-
Not Synced
Just another expression and this is the result.
-
Not Synced
That’s all there is.
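A minimal sketch of this copy-and-paste step (beta reduction) in Python, using an assumed toy tuple encoding of expressions; it ignores variable capture and other subtleties, and the example term is invented for illustration.

```python
# Sketch of the "copy and paste" (beta reduction) idea from the talk.
# Toy encoding: a variable is a string, an abstraction ("roofed" variable plus
# body) is ("lam", var, body), an application is ("app", function, argument).

def substitute(term, var, value):
    """Replace every occurrence of `var` in `term` by `value` (the paste step)."""
    if isinstance(term, str):
        return value if term == var else term
    if term[0] == "lam":
        _, bound, body = term
        if bound == var:          # inner binding shadows the outer variable
            return term
        return ("lam", bound, substitute(body, var, value))
    _, f, arg = term
    return ("app", substitute(f, var, value), substitute(arg, var, value))

def reduce_once(term):
    """Perform one reduction step: paste the argument into the body."""
    if isinstance(term, tuple) and term[0] == "app":
        f, arg = term[1], term[2]
        if isinstance(f, tuple) and f[0] == "lam":
            return substitute(f[2], f[1], arg)
        return ("app", reduce_once(f), reduce_once(arg))
    if isinstance(term, tuple) and term[0] == "lam":
        return ("lam", term[1], reduce_once(term[2]))
    return term

# (lambda y. y y) applied to "ab": paste "ab" into every occurrence of y.
example = ("app", ("lam", "y", ("app", "y", "y")), "ab")
print(reduce_once(example))   # -> ('app', 'ab', 'ab')
```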
-
Not Synced
And this can be nested.
-
Not Synced
And then we add a little bit of syntactic sugar.
-
Not Synced
We introduce symbols,
-
Not Synced
so we can take arbitrary sequences of these characters and just express them with another variable.
-
Not Synced
And then we have a programming language.
-
Not Synced
And basically this is Lisp.
-
Not Synced
So very close to Lisp.
-
Not Synced
A funny thing is that the guy who came up with Lisp,
-
Not Synced
McCarthy, he didn’t think that it would be a proper language.
-
Not Synced
Because of the awkward notation.
-
Not Synced
And he said, you cannot really use this for programming.
-
Not Synced
But one of his doctorate students said: “Oh well. Let’s try.”
-
Not Synced
And… it caught on.
-
Not Synced
Anyway.
-
Not Synced
We can show that Turing Machines can compute the lambda calculus.
-
Not Synced
And we can show that the lambda calculus can be used to compute the next state of the Turing machine.
-
Not Synced
This means they have the same power.
-
Not Synced
The set of computable functions in the lambda calculus is the same as the set of Turing computable functions.
-
Not Synced
And, since then, we have found many other ways of defining computations.
-
Not Synced
For instance the Post machine, which is a variation of the Turing machine,
-
Not Synced
or mathematical proofs.
-
Not Synced
Everything that can be proven is computable.
-
Not Synced
Or partial recursive functions.
-
Not Synced
And we can show for all of them that all these approaches have the same power.
-
Not Synced
And the idea that all the computational approaches have the same power,
-
Not Synced
along with all the other ones that we will be able to find in the future,
-
Not Synced
is called the Church-Turing thesis.
-
Not Synced
We don’t know about the future.
-
Not Synced
So it’s not really… we can’t prove that.
-
Not Synced
We don’t know, if somebody comes up with a new way of manipulating things, and producing regularity and information, and it can do more.
-
Not Synced
But everything we’ve found so far, and probably everything that we’re going to find, has the same power.
-
Not Synced
So this kind of defines our notion of computation.
-
Not Synced
The whole thing also includes programming languages.
-
Not Synced
You can use Python to simulate a Turing machine, and you can use a Turing machine to compute Python.
-
Not Synced
You can take arbitrary computers and let them run on the Turing machine.
-
Not Synced
The graphics are going to be abysmal.
-
Not Synced
But OK.
-
Not Synced
And in some sense, the brain is Turing computational too.
-
Not Synced
If you look at the principles of neural information processing,
-
Not Synced
you can take neurons and build computational models, for instance compartment models.
-
Not Synced
Which are very very accurate and produce very strong semblances to the actual inputs and outputs of neurons and their state changes.
-
Not Synced
They are computationally expensive, but it works.
-
Not Synced
And we can simplify them into integrate-and-fire models, which are fancy oscillators.
-
Not Synced
Or we could use very crude simplifications, like in most artificial neural networks.
-
Not Synced
You just do a sum of the inputs to a neuron,
-
Not Synced
and then apply some transfer function,
-
Not Synced
and transmit the results to other neurons.
-
Not Synced
And we can show that with this crude model already,
-
Not Synced
we can do many of the interesting feats that nervous systems can produce.
-
Not Synced
Like associative learning, sensory motor loops, and many other fancy things.
-
Not Synced
And, of course, it’s Turing complete.
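A minimal sketch of that crude model in Python: sum the weighted inputs, apply a transfer function, and pass the result on. The weights, inputs, and wiring below are invented for illustration.

```python
# Sketch of the crude artificial-neuron model described above: a weighted sum
# of the inputs followed by a nonlinear transfer function. All numbers are
# made up for illustration.
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs followed by a sigmoid transfer function."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Two such neurons feeding a third one: already enough for simple nonlinear
# input-output mappings; the talk notes that networks of such units are
# Turing complete.
hidden_1 = neuron([1.0, 0.0], [ 2.0, -2.0])
hidden_2 = neuron([1.0, 0.0], [-2.0,  2.0])
output   = neuron([hidden_1, hidden_2], [1.5, 1.5], bias=-1.0)
print(output)
```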
-
Not Synced
And this brings us to what we would call weak computationalism.
-
Not Synced
That is the idea that minds are basically computer programs.
-
Not Synced
They’re realized in neural hardware configurations
-
Not Synced
and in the individual states.
-
Not Synced
And the mental content is represented in those programs.
-
Not Synced
And perception is basically the process of encoding information
-
Not Synced
given at our systemic boundaries to the environment
-
Not Synced
into mental representations
-
Not Synced
using this program.
-
Not Synced
This means that all that is part of being a mind:
-
Not Synced
thinking, and feeling, and dreaming, and being creative, and being afraid, and whatever.
-
Not Synced
It’s all aspects of operations over mental content in such a computer program.
-
Not Synced
This is the idea of weak computationalism.
-
Not Synced
In fact you can go one step further to strong computationalism,
-
Not Synced
because the universe doesn’t let us experience matter.
-
Not Synced
The universe also doesn’t let us experience minds directly.
-
Not Synced
What the universe somehow gives us is information.
-
Not Synced
Information is something very simple.
-
Not Synced
We can define it mathematically and what it means is something like “discernible difference”.
-
Not Synced
You can measure it in yes-no-decisions, in bits.
-
Not Synced
And there is….
-
Not Synced
According to the strong computationalism,
-
Not Synced
the universe is basically a pattern generator,
-
Not Synced
which gives us information.
-
Not Synced
And all the apparent regularity
-
Not Synced
that the universe seems to produce,
-
Not Synced
which means, we see time and space,
-
Not Synced
and things that we can conceptualize into objects and people,
-
Not Synced
and whatever,
-
Not Synced
can be explained by the fact that the universe seems to be able to compute.
-
Not Synced
That is, to produce regularities in information.
-
Not Synced
And this means that there is no conceptual difference between reality and the computer program.
-
Not Synced
So we get a new kind of monism.
-
Not Synced
Not idealism, which takes minds to be primary,
-
Not Synced
or materialism which takes physics to be primary,
-
Not Synced
but rather computationalism, which means that information and computation are primary.
-
Not Synced
Mind and matter are constructions that we get from that.
-
Not Synced
A lot of people don’t like that idea.
-
Not Synced
Roger Penrose, who’s a physicist,
-
Not Synced
says that the brain uses quantum processes to produce consciousness.
-
Not Synced
So minds must be more than computers.
-
Not Synced
Why is that so?
-
Not Synced
The quality of understanding and feeling possessed by human beings, is something that cannot be simulated computationally.
-
Not Synced
Ok.
-
Not Synced
But how can quantum mechanics do it?
-
Not Synced
Because, you know, quantum processes are completely computational too!
-
Not Synced
It’s just very expensive to simulate them on non-quantum computers.
-
Not Synced
But it’s possible.
-
Not Synced
So, it’s not that quantum computing enables a completely new kind of effectively possible algorithm.
-
Not Synced
It’s just slightly different efficiently possible algorithms.
-
Not Synced
And Penrose cannot explain how those would bring forth
-
Not Synced
perception and imagination and consciousness.
-
Not Synced
I think what he basically does here is that he perceives quantum mechanics as mysterious
-
Not Synced
and perceives consciousness as mysterious and tries to shroud one mystery in another.
-
Not Synced
[applause]
-
Not Synced
So I don’t think that minds are more than Turing machines.
-
Not Synced
It’s actually much more troubling: minds are fundamentally less than Turing machines!
-
Not Synced
All real computers are constrained in some way.
-
Not Synced
That is they cannot compute every conceivable computable function.
-
Not Synced
They can only compute functions that fit into the memory and so on, and that can be computed in the available time.
-
Not Synced
So the Turing machine, if you want to build it physically,
-
Not Synced
will have a finite tape, and there will be a finite number of steps it can calculate in a given amount of time.
-
Not Synced
And the lambda calculus will have a finite length to the strings that you can actually cut and replace.
-
Not Synced
And a finite number of replacement operations that you can do
-
Not Synced
in your given amount of time.
-
Not Synced
And the thing is, there is no set of numbers m and n for…
-
Not Synced
for the tape length and the time you have for operations on the Turing machine.
-
Not Synced
And the same m and n or similar m and n
-
Not Synced
for the lambda calculus at least with the same set of constraints.
-
Not Synced
That is lambda calculus
-
Not Synced
is going to be able to calculate some functions
-
Not Synced
that are not possible on the Turing machine and vice versa,
-
Not Synced
if you have a constrained system.
-
Not Synced
And of course it’s even worse for neurons.
-
Not Synced
If you have a finite number of neurons and a finite number of state changes,
-
Not Synced
this… does not translate directly into a constrained von-Neumann-computer
-
Not Synced
or a constrained lambda calculus.
-
Not Synced
And there’s this big difference between, of course, effectively computable functions,
-
Not Synced
those that are in principle computable,
-
Not Synced
and those that we can compute efficiently.
-
Not Synced
There are things that computers cannot solve.
-
Not Synced
Some problems that are unsolvable in principle.
-
Not Synced
For instance the question whether a Turing machine ever stops
-
Not Synced
for an arbitrary program.
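The standard argument for why this is unsolvable in principle can be sketched in a few lines of hypothetical Python; `halts` here stands for the assumed oracle that, as the diagonalization shows, cannot actually exist.

```python
# Sketch of the standard diagonalization argument behind the halting problem.
# Suppose we had a function halts(program, argument) that always answered
# correctly whether program(argument) eventually stops.

def halts(program, argument):
    raise NotImplementedError("no such general procedure can exist")

def troublemaker(program):
    # Do the opposite of what the supposed oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:
            pass           # loop forever
    return "done"          # halt

# Now ask: does troublemaker(troublemaker) halt?
# If halts says yes, it loops forever; if halts says no, it halts.
# Either way the oracle is wrong, so it cannot exist.
```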
-
Not Synced
And some problems are unsolvable in practice.
-
Not Synced
Because it’s very, very hard to do so for a deterministic Turing machine.
-
Not Synced
And the class of NP-hard problems is a very strong candidate for that.
-
Not Synced
Non-polynomial problems.
-
Not Synced
Among these problems is, for instance, the idea
-
Not Synced
of finding the key for an encrypted text.
-
Not Synced
If the key is very long and you are not the NSA and don’t have a backdoor.
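A small sketch of why this is hopeless in practice: every additional key bit doubles the search space. The guessing rate below is an assumption added purely for illustration.

```python
# Toy illustration of why brute-force key search blows up: every additional
# key bit doubles the number of candidates to try.
ATTEMPTS_PER_SECOND = 1e9        # assumed: a billion guesses per second
SECONDS_PER_YEAR    = 3.15e7

for key_bits in (32, 64, 128, 256):
    candidates = 2 ** key_bits
    years = candidates / ATTEMPTS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{key_bits:3d}-bit key: {candidates:.2e} candidates, ~{years:.2e} years")
```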
-
Not Synced
And then there are non-decidable problems.
-
Not Synced
Problems where we cannot define…
-
Not Synced
find out, within the formal system, whether the answer is yes or no.
-
Not Synced
Whether it’s true or false.
-
Not Synced
And some philosophers have argued that humans can always do this so they are more powerful than computers.
-
Not Synced
Because you can show, prove formally, that computers cannot do this.
-
Not Synced
Gödel has done this.
-
Not Synced
But… hm…
-
Not Synced
Here’s some test question:
-
Not Synced
can you solve undecidable problems?
-
Not Synced
If you choose one of the following answers randomly,
-
Not Synced
what’s the probability that the answer is correct?
-
Not Synced
I’ll tell you.
-
Not Synced
Computers are not going to find out.
-
Not Synced
And… me neither.
-
Not Synced
OK.
-
Not Synced
How difficult is AI?
-
Not Synced
It’s a very difficult question.
-
Not Synced
We don’t know.
-
Not Synced
We do have some numbers, which could tell us that it’s not impossible.
-
Not Synced
As we have these roughly 100 billion neurons—
-
Not Synced
the ballpark figure—
-
Not Synced
and the cells in the cortex are organized into circuits of a few thousand to ten thousand neurons,
-
Not Synced
which we call cortical columns.
-
Not Synced
And these cortical columns are pretty similar among each other,
-
Not Synced
and have higher internal interconnectivity, somewhat lower connectivity among each other,
-
Not Synced
and even lower long range connectivity.
-
Not Synced
And the brain has a very distinct architecture.
-
Not Synced
And a very distinct structure of a certain nuclei and structures that have very different functional purposes.
-
Not Synced
And the layout of these…
-
Not Synced
both the individual neurons, neuron types,
-
Not Synced
the more than 130 known neurotransmitters, most of which we do not completely understand,
-
Not Synced
this is all defined in our genome of course.
-
Not Synced
And the genome is not very long.
-
Not Synced
It’s something like… I think the Human Genome Project amounted to a CD-ROM.
-
Not Synced
775 megabytes.
-
Not Synced
So actually, it’s….
-
Not Synced
The computational complexity of defining a complete human being,
-
Not Synced
if you have physics and chemistry already given
-
Not Synced
to enable protein synthesis and so on—
-
Not Synced
gravity and temperature ranges—
-
Not Synced
is less than Microsoft Windows.
-
Not Synced
And that is an upper bound, because only a very small fraction of that
-
Not Synced
is going to code for our nervous system.
-
Not Synced
But it doesn’t mean it’s easy to reverse engineer the whole thing.
-
Not Synced
It just means it’s not hopeless.
-
Not Synced
Complexity that you would be looking at.
-
Not Synced
But estimating the real difficulty is, from my perspective, impossible.
-
Not Synced
Because I’m not just a philosopher or a dreamer or a science fiction author, but I’m a software developer.
-
Not Synced
And as a software developer I know it’s impossible to give an estimate on when you’re done, when you don’t have the full specification.
-
Not Synced
And we don’t have a full specification yet.
-
Not Synced
So you all know this shortest computer science joke:
-
Not Synced
“It’s almost done.”
-
Not Synced
You do the first 98 %.
-
Not Synced
Now we can do the second 98 %.
-
Not Synced
We never know when it’s done,
-
Not Synced
if we haven’t solved and specified all the problems.
-
Not Synced
If you don’t know how it’s to be done.
-
Not Synced
And even if you have [a] rough direction, and I think we do,
-
Not Synced
we don’t know how long it’ll take until we have worked out the details.
-
Not Synced
And some part of that big question, how long it takes until it’ll be done,
-
Not Synced
is the question whether we need to make small incremental progress
-
Not Synced
versus whether we need one big idea,
-
Not Synced
which kind of solves it all.
-
Not Synced
AI has a pretty long history.
-
Not Synced
It starts out with logic and automata.
-
Not Synced
And this idea of computability that I just sketched out.
-
Not Synced
Then with this idea of machines that implement computability.
-
Not Synced
Which came about with Babbage and Zuse and von Neumann and so on.
-
Not Synced
Then we had information theory by Claude Shannon.
-
Not Synced
He captured the idea of what information is
-
Not Synced
and how entropy can be calculated for information and so on.
-
Not Synced
And we had this beautiful idea of describing the world as systems.
-
Not Synced
And systems are made up of entities and relations between them.
-
Not Synced
And along these relations there we have feedback.
-
Not Synced
And dynamical systems emerge.
-
Not Synced
This was a very beautiful idea, was cybernetics.
-
Not Synced
Unfortunately it has been killed by
-
Not Synced
second-order Cybernetics.
-
Not Synced
By this Maturana stuff and so on.
-
Not Synced
And turned into one of the humanities and died.
-
Not Synced
But the ideas stuck around and most of them went into artificial intelligence.
-
Not Synced
And then we had this idea of symbol systems.
-
Not Synced
That is how we can do grammatical language.
-
Not Synced
Process that.
-
Not Synced
We can do planning and so on.
-
Not Synced
Abstract reasoning in automatic systems.
-
Not Synced
Then the idea of how we can abstract neural networks into distributed systems.
-
Not Synced
With McCulloch and Pitts and so on.
-
Not Synced
Parallel distributed processing.
-
Not Synced
And then we had a movement of autonomous agents,
-
Not Synced
which look at self-directed, goal directed systems.
-
Not Synced
And the whole story somehow started in 1950 I think,
-
Not Synced
in its best possible way.
-
Not Synced
When Alan Turing wrote his paper
-
Not Synced
“Computing Machinery and Intelligence”
-
Not Synced
and those of you who haven’t read it should do so.
-
Not Synced
It’s a very, very easy read.
-
Not Synced
It’s fascinating.
-
Not Synced
He already has most of the important questions of AI.
-
Not Synced
Most of the important criticisms.
-
Not Synced
Most of the important answers to the most important criticisms.
-
Not Synced
And it’s also the paper, where he describes the Turing test.
-
Not Synced
And basically sketches the idea that
-
Not Synced
a way to determine whether somebody is intelligent is
-
Not Synced
to judge the ability of that one—
-
Not Synced
that person or that system—
-
Not Synced
to engage in meaningful discourse.
-
Not Synced
Which includes creativity, and empathy maybe, and logic, and language,
-
Not Synced
and anticipation, memory retrieval, and so on.
-
Not Synced
Story comprehension.
-
Not Synced
And the idea of AI then
-
Not Synced
coalesced in the group of cyberneticians and computer scientists and so on,
-
Not Synced
which got together in the Dartmouth conference.
-
Not Synced
It was in 1956.
-
Not Synced
And there Marvin Minsky coined the name “artificial intelligence”
-
Not Synced
for the project of using computer science to understand the mind.
-
Not Synced
John McCarthy was the guy who came up with Lisp, among other things.
-
Not Synced
Nathan Rochester did pattern recognition
-
Not Synced
and he’s, I think, more famous for
-
Not Synced
writing the first assembly programming language.
-
Not Synced
Claude Shannon was this information theory guy.
-
Not Synced
But they also got psychologists there
-
Not Synced
and sociologists and people from many different fields.
-
Not Synced
It was very highly interdisciplinary.
-
Not Synced
And they already had the funding and it was a very good time.
-
Not Synced
And in this good time they reaped a lot of low-hanging fruit very quickly.
-
Not Synced
Which gave them the idea that AI would be almost done very soon.
-
Not Synced
In 1969 Minsky and Papert wrote a small booklet against the idea of using neural networks.
-
Not Synced
And they won.
-
Not Synced
Their argument won.
-
Not Synced
But, even more unfortunately, it was wrong.
-
Not Synced
So for more than a decade, there was practically no more funding for neural networks,
-
Not Synced
which was bad so most people did logic based systems, which have some limitations.
-
Not Synced
And in the meantime people did expert systems.
-
Not Synced
The idea to describe the world
-
Not Synced
as basically logical expressions.
-
Not Synced
This turned out to be brittle, and difficult, and had diminishing returns.
-
Not Synced
And at some point it didn’t work anymore.
-
Not Synced
And many of the people which tried it,
-
Not Synced
became very disenchanted and then threw out a lot of the baby with the bathwater.
-
Not Synced
And only did robotics in the future or something completely different.
-
Not Synced
Instead of going back to the idea of looking at mental representations.
-
Not Synced
How the mind works.
-
Not Synced
And at the moment it’s in kind of a sad state.
-
Not Synced
Most of it is applications.
-
Not Synced
That is, for instance, robotics
-
Not Synced
or statistical methods to do better machine learning and so on.
-
Not Synced
And I don’t say it’s invalid to do this.
-
Not Synced
It’s intellectually challenging.
-
Not Synced
It’s tremendously useful.
-
Not Synced
It’s very successful and productive and so on.
-
Not Synced
It’s just a very different question from how to understand the mind.
-
Not Synced
If you want to go to the moon you have to shoot for the moon.
-
Not Synced
So there is this movement still existing in AI,
-
Not Synced
and becoming stronger these days.
-
Not Synced
It’s called cognitive systems.
-
Not Synced
And the idea of cognitive systems has many names
-
Not Synced
like “artificial general intelligence” or “biologically inspired cognitive architectures”.
-
Not Synced
It’s to use information processing as the dominant paradigm to understand the mind.
-
Not Synced
And the tools that we need to do that is,
-
Not Synced
we have to build whole architectures that we can test.
-
Not Synced
Not just individual modules.
-
Not Synced
You have to have universal representations,
-
Not Synced
which means these representations have to be both distributed—
-
Not Synced
associative and so on—
-
Not Synced
and symbolic.
-
Not Synced
We need to be able to do both those things with it.
-
Not Synced
So we need to be able to do language and planning, and we need to do sensorimotor coupling, and associative thinking in superposition of
-
Not Synced
representations and ambiguity and so on.
-
Not Synced
And
-
Not Synced
operations over those representations.
-
Not Synced
Some kind of
-
Not Synced
semi-universal problem solving.
-
Not Synced
It’s probably semi-universal, because there seem to be problems that humans are very bad at solving.
-
Not Synced
Our minds are not completely universal.
-
Not Synced
And we need some kind of universal motivation. That is something that directs the system to do all the interesting things that you want it to do.
-
Not Synced
Like engage in social interaction or in mathematics or creativity.
-
Not Synced
And maybe we want to understand emotion, and affect, and phenomenal experience, and so on.
-
Not Synced
So:
-
Not Synced
we want to understand universal representations.
-
Not Synced
We want to have a set of operations over those representations that give us neural learning, and category formation,
-
Not Synced
and planning, and reflection, and memory consolidation, and resource allocation,
-
Not Synced
and language, and all those interesting things.
-
Not Synced
We also want to have perceptual grounding—
-
Not Synced
that is, the representations should be shaped in such a way that they can be mapped to perceptual input—
-
Not Synced
and vice versa.
-
Not Synced
And…
-
Not Synced
they should also be able to be translated into motor programs to perform actions.
-
Not Synced
And maybe we also want to have some feedback between the actions and the perceptions, and this feedback usually has a name: it’s called an environment.
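A minimal sketch of that loop in Python; the toy environment, the representations, and all names below are invented for illustration and are not part of any particular architecture.

```python
# Sketch of the arrangement described above: perception encodes the state at
# the system boundary into a representation, action selection maps it to a
# motor command, and the environment closes the feedback loop.
import random

class Environment:
    def __init__(self):
        self.temperature = 20.0

    def step(self, action):
        # Actions change the world; the world also drifts on its own.
        self.temperature += {"heat": +1.0, "cool": -1.0, "wait": 0.0}[action]
        self.temperature += random.uniform(-0.2, 0.2)
        return {"temperature": self.temperature}

def perceive(observation):
    """Encode raw input at the system boundary into a mental representation."""
    t = observation["temperature"]
    return "too_cold" if t < 19 else "too_hot" if t > 21 else "comfortable"

def select_action(representation):
    """Map the current situation model to a motor program."""
    return {"too_cold": "heat", "too_hot": "cool", "comfortable": "wait"}[representation]

env = Environment()
observation = {"temperature": env.temperature}
for _ in range(10):
    situation = perceive(observation)      # bottom-up encoding
    action = select_action(situation)      # action selection
    observation = env.step(action)         # feedback through the environment
```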
-
Not Synced
OK.
-
Not Synced
And these mental representations, they are not just a big lump of things, but they have some structure.
-
Not Synced
One part will be inevitably the model of the current situation…
-
Not Synced
… that we are in.
-
Not Synced
And this situation model…
-
Not Synced
is the present.
-
Not Synced
But we also want to memorize past situations.
-
Not Synced
To have a protocol, a memory of the past.
-
Not Synced
And this protocol memory, as a part, will contain things that are always with me.
-
Not Synced
This is my self-model.
-
Not Synced
Those properties that are constantly available to me.
-
Not Synced
That I can ascribe to myself.
-
Not Synced
And the other things, which are constantly changing, which I usually conceptualize as my environment.
-
Not Synced
An important part of that is declarative memory.
-
Not Synced
For instance abstractions into objects, things, people, and so on,
-
Not Synced
and procedural memory: abstraction into sequences of events.
-
Not Synced
And we can use the declarative memory and the procedural memory to erect a frame.
-
Not Synced
The frame gives me a context to interpret the current situation.
-
Not Synced
For instance right now I’m in a frame of giving a talk.
-
Not Synced
If…
-
Not Synced
… I would take a…
-
Not Synced
two year old kid, then this kid would interpret the situation very differently than me.
-
Not Synced
And would probably be confused by the situation, or explore it in more creative ways than I would come up with.
-
Not Synced
Because I’m constrained by the frame which gives me the context
-
Not Synced
and tells me what you would expect me to do in this situation.
-
Not Synced
What I am expected to do and so on.
-
Not Synced
This frame extends in the future.
-
Not Synced
I have some kind of expectation horizon.
-
Not Synced
I know that my talk is going to be over in about 15 minutes.
-
Not Synced
Also I’ve plans.
-
Not Synced
I have things I want to tell you and so on.
-
Not Synced
And it might go wrong but I’ll try.
-
Not Synced
And if I generalize this, I find that I have the world model,
-
Not Synced
I have long term memory, and have some kind of mental stage.
-
Not Synced
This mental stage has counter-factual stuff.
-
Not Synced
Stuff that is not…
-
Not Synced
… real.
-
Not Synced
That I can play around with.
-
Not Synced
Ok. Then I need some kind of action selection that mediates between perception and action,
-
Not Synced
and some mechanism that controls the action selection
-
Not Synced
that is a motivational system,
-
Not Synced
which selects motives based on demands of the system.
-
Not Synced
And the demands of the system should create goals.
-
Not Synced
We are not born with our goals.
-
Not Synced
Obviously I don’t think that I was born with the goal of standing here and giving this talk to you.
-
Not Synced
There must be some demand in the system, which makes… enables me to have a biography, that …
-
Not Synced
… makes this a big goal of mine to give this talk to you and engage as many of you as possible into the project of AI.
-
Not Synced
And so let’s come up with a set of demands that can produce such goals universally.
-
Not Synced
I think some of these demands will be physiological, like food, water, energy, physical integrity, rest, and so on.
-
Not Synced
Heat and cold within the right range.
-
Not Synced
Then we have social demands.
-
Not Synced
At least most of us do.
-
Not Synced
Sociopaths probably don’t.
-
Not Synced
These social demands do structure our…
-
Not Synced
… social interaction.
-
Not Synced
They…. For instance a demand for affiliation.
-
Not Synced
That we get signals from others, that we are ok parts of society, of our environment.
-
Not Synced
We also have internalised social demands,
-
Not Synced
which we usually call honor or something.
-
Not Synced
This is conformance to internalized norms.
-
Not Synced
It means,
-
Not Synced
that we do conform to social norms, even when nobody is looking.
-
Not Synced
And then we have cognitive demands.
-
Not Synced
And one of these cognitive demands is, for instance, competence acquisition.
-
Not Synced
We want to learn.
-
Not Synced
We want to get new skills.
-
Not Synced
We want to become more powerful in many many dimensions and ways.
-
Not Synced
It’s good to learn a musical instrument, because you get more competent.
-
Not Synced
It creates a reward signal, a pleasure signal, if you do that.
-
Not Synced
Also we want to reduce uncertainty.
-
Not Synced
Mathematicians are those people who have learned that they can reduce uncertainty in mathematics.
-
Not Synced
This creates pleasure for them, and then they find uncertainty in mathematics.
-
Not Synced
And this creates more pleasure.
-
Not Synced
So for mathematicians, mathematics is an unending source of pleasure.
-
Not Synced
Now unfortunately, if you are in Germany right now studying mathematics
-
Not Synced
and you find out that you are not very good at doing mathematics, what do you do?
-
Not Synced
You become a teacher.
-
Not Synced
And this is a very unfortunate situation for everybody involved.
-
Not Synced
And it means that you have people who associate mathematics with…
-
Not Synced
uncertainty,
-
Not Synced
which has to be curbed and avoided.
-
Not Synced
And these people are put in front of kids and infuse them with this dread of uncertainty in mathematics.
-
Not Synced
And most people in our culture are dreading mathematics, because for them it’s just anticipation of uncertainty.
-
Not Synced
Which is a very bad thing, so people avoid it.
-
Not Synced
OK.
-
Not Synced
And then you have aesthetic demands.
-
Not Synced
There are stimulus oriented aesthetics.
-
Not Synced
Nature has had to pull some very heavy strings and levers to make us interested in strange things…
-
Not Synced
[such] as certain human body schemas and…
-
Not Synced
certain types of landscapes, and audio schemas, and so on.
-
Not Synced
So there are some stimuli that are inherently pleasurable to us—pleasant to us.
-
Not Synced
And of course this varies with every individual, because the wiring is very different, and the adaptivity in our biography is very different.
-
Not Synced
And then there’s abstract aesthetics.
-
Not Synced
And I think abstract aesthetics relates to finding better representations.
-
Not Synced
It relates to finding structure.
-
Not Synced
OK. And then we want to look at things like emotional modulation and affect.
-
Not Synced
And this was one of the first things that actually got me into AI.
-
Not Synced
That was the question:
-
Not Synced
“How is it possible, that a system can feel something?”
-
Not Synced
Because, if I have a variable in me with just fear or pain,
-
Not Synced
does not equate a feeling.
-
Not Synced
It’s very far… uhm…
-
Not Synced
… different from that.
-
Not Synced
And the answer that I’ve found so far it is,
-
Not Synced
that feeling, or affect, is a configuration of the system.
-
Not Synced
It’s not a parameter in the system,
-
Not Synced
but we have several dimensions, like a state of arousal that we’re currently in, the level of stubbornness that we have, the selection threshold,
-
Not Synced
the direction of attention, outwards or inwards,
-
Not Synced
the resolution level with which we look at our representations, and so on.
-
Not Synced
And together, in every given situation, these create a certain way in which our cognition is modulated.
-
Not Synced
We are living in a very different
-
Not Synced
and dynamic environment from time to time.
-
Not Synced
When we go outside, we have very different demands on our cognition.
-
Not Synced
Maybe you need to react to traffic and so on.
-
Not Synced
Maybe we need to interact with other people.
-
Not Synced
Maybe we are in stressful situations.
-
Not Synced
Maybe you are in relaxed situations.
-
Not Synced
So we need to modulate our cognition accordingly.
-
Not Synced
And this modulation means, that we do perceive the world differently.
-
Not Synced
Our cognition works differently.
-
Not Synced
And we conceptualize ourselves, and experience ourselves, differently.
-
Not Synced
And I think this is what it means to feel something:
-
Not Synced
this difference in the configuration.
-
Not Synced
So. The affect can be seen as a configuration of a cognitive system.
-
Not Synced
And the modulators of cognition are things like arousal, and the selection threshold, and
-
Not Synced
the rate of background checks, and the resolution level, and so on.
-
Not Synced
Our current estimates of competence and certainty in the given situation,
-
Not Synced
and the pleasure and distress signals that you get from the frustration of our demands,
-
Not Synced
or satisfaction of our demands which are reinforcements for learning and structuring our behavior.
-
Not Synced
So the affective state, the emotional state that we are in, is emergent over those modulators.
-
Not Synced
And higher level emotions, things like jealousy or pride and so on,
-
Not Synced
we get them by directing those affects upon motivational content.
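A minimal sketch of “affect as a configuration” in Python; the modulator names follow the talk, while the numeric ranges and the toy emotion labels are assumptions added purely for illustration.

```python
# Sketch of the idea that an affective state is not a single variable but a
# configuration of cognitive modulators. Thresholds and labels are invented.
from dataclasses import dataclass

@dataclass
class Configuration:
    arousal: float              # 0..1, how activated the system is
    selection_threshold: float  # 0..1, "stubbornness" of the current motive
    resolution_level: float     # 0..1, how detailed the representations are
    attention_outward: float    # 0..1, outward vs. inward directed attention
    pleasure: float             # -1..1, signals from satisfied or frustrated demands

def describe(c: Configuration) -> str:
    """Read an emergent emotional quality off the configuration (toy rule)."""
    if c.arousal > 0.7 and c.pleasure < 0 and c.attention_outward > 0.7:
        return "something like fear or anger"
    if c.arousal < 0.3 and c.pleasure > 0:
        return "something like contentment"
    return "no simple label"

print(describe(Configuration(arousal=0.9, selection_threshold=0.8,
                             resolution_level=0.3, attention_outward=0.9,
                             pleasure=-0.6)))
```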
-
Not Synced
And this gives us a very simple architecture.
-
Not Synced
It’s a very rough sketch for an architecture.
-
Not Synced
And I think,
-
Not Synced
of course,
-
Not Synced
this doesn’t specify all the details.
-
Not Synced
I have specified some more of the details in a book, that I want to shamelessly plug here:
-
Not Synced
it’s called “Principles of Synthetic Intelligence”.
-
Not Synced
You can get it from Amazon or maybe from your library.
-
Not Synced
And this describes basically this architecture and some of the demands
-
Not Synced
for a very general framework of artificial intelligence in which to work with it.
-
Not Synced
So it doesn’t give you all the functional mechanisms,
-
Not Synced
but some things that I think are necessary based on my current understanding.
-
Not Synced
We’re currently at the second…
-
Not Synced
iteration of the implementations.
-
Not Synced
The first one was in Java in early 2003 with lots of XMI files and…
-
Not Synced
… XML files … and design patterns and Eclipse plug ins.
-
Not Synced
And the new one is, of course, … runs in the browser, and is written in Python,
-
Not Synced
and is much more light-weight and much more joy to work with.
-
Not Synced
But we’re not done yet.
-
Not Synced
OK.
-
Not Synced
So this gets back to that question: is it going to be one big idea or is it going to be incremental progress?
-
Not Synced
And I think it’s the latter.
-
Not Synced
If we want to look at this extremely simplified list of problems to solve:
-
Not Synced
whole testable architectures,
-
Not Synced
universal representations,
-
Not Synced
universal problem solving,
-
Not Synced
motivation, emotion, and affect, and so on.
-
Not Synced
And I can see hundreds and hundreds of Ph.D. theses.
-
Not Synced
And I’m sure that I only see a tiny part of the problem.
-
Not Synced
So I think it’s entirely doable,
-
Not Synced
but it’s going to take a pretty long time.
-
Not Synced
And it’s going to be very exciting all the way,
-
Not Synced
because we are going to learn that we are full of shit
-
Not Synced
as we always do when we turn a new approach to a problem into an algorithm,
-
Not Synced
and we realize that we can’t test it,
-
Not Synced
and that our initial idea was wrong,
-
Not Synced
and that we can improve on it.
-
Not Synced
So what should you do, if you want to get into AI?
-
Not Synced
And you’re not there yet?
-
Not Synced
So, I think you should get acquainted, of course, with the basic methodology.
-
Not Synced
You want to…
-
Not Synced
pick up programming languages and learn them.
-
Not Synced
Basically do it for fun.
-
Not Synced
It’s really fun to wrap your mind around programming languages.
-
Not Synced
Changes the way you think.
-
Not Synced
And you want to learn software development.
-
Not Synced
That is, build an actual, running system.
-
Not Synced
Test-driven development.
-
Not Synced
All those things.
-
Not Synced
Then you want to look at the things that we do in AI.
-
Not Synced
So, for example…
-
Not Synced
machine learning, probabilistic approaches, Kalman filtering,
-
Not Synced
POMDPs and so on.
-
Not Synced
You want to look at modes of representation: semantic networks, description logics, factor graphs, and so on.
-
Not Synced
Graph Theory,
-
Not Synced
hypergraphs.
-
Not Synced
And you want to look at the domain of cognitive architectures.
-
Not Synced
That is, building computational models to simulate psychological phenomena,
-
Not Synced
and reproduce them, and test them.
-
Not Synced
I don’t think that you should stop there.
-
Not Synced
You need to take in all the things that we haven’t taken in yet.
-
Not Synced
We need to learn more about linguistics.
-
Not Synced
We need to learn more about neuroscience in our field.
-
Not Synced
We need to do philosophy of mind.
-
Not Synced
I think what you need to do is study cognitive science.
-
Not Synced
So. What should you be working on?
-
Not Synced
Some of the most pressing questions to me are, for instance, representation.
-
Not Synced
How can we get abstract and perceptual representations right
-
Not Synced
and have them interact with each other on common ground?
-
Not Synced
How can we work with ambiguity and superposition of representations?
-
Not Synced
Many possible interpretations valid at the same time.
-
Not Synced
Inheritance and polymorphism.
-
Not Synced
How can we distribute representations in the mind
-
Not Synced
and store them efficiently?
-
Not Synced
How can we use representations in such a way
-
Not Synced
that even parts of them are valid?
-
Not Synced
And we can use constraints to describe partial representations.
-
Not Synced
For instance imagine a house.
-
Not Synced
And you already have the backside of the house,
-
Not Synced
and the number of windows in that house,
-
Not Synced
and you already see this complete picture in your head,
-
Not Synced
and each time,
-
Not Synced
if I say: “OK. It’s a house with nine stories.”
-
Not Synced
this representation is going to change
-
Not Synced
based on these constraints.
-
Not Synced
How can we implement this?
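Here is a toy sketch, in Python, of what such a constraint-updated partial representation could look like; the class, its defaults, and the derived property are invented for illustration and are not part of any actual implementation:

```python
# A partial representation that stays usable while underspecified:
# defaults stand in for the parts that were never stated, and constraints
# reshape the whole picture as soon as they arrive.

class House:
    defaults = {"stories": 2, "windows_per_story": 4, "has_backside": True}

    def __init__(self):
        self.constraints = {}

    def constrain(self, **kwargs):
        """Add constraints; dependent parts of the picture update automatically."""
        self.constraints.update(kwargs)

    def view(self):
        picture = {**self.defaults, **self.constraints}
        # A derived property changes as soon as a constraint changes:
        picture["total_windows"] = picture["stories"] * picture["windows_per_story"]
        return picture

h = House()
print(h.view())          # the default, fully rendered imagined house
h.constrain(stories=9)   # "OK. It's a house with nine stories."
print(h.view())          # the same representation, reshaped by the constraint
```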
-
Not Synced
And of course we want to implement time.
-
Not Synced
And we want…
-
Not Synced
to produce uncertain space,
-
Not Synced
and certain space
-
Not Synced
and open and closed environments.
-
Not Synced
And we want to have temporal loops, and actual loops, and physical loops.
-
Not Synced
Uncertain loops and all those things.
-
Not Synced
Next thing: perception.
-
Not Synced
Perception is crucial.
-
Not Synced
Part of it is bottom-up,
-
Not Synced
that is, driven by cues from stimuli in the environment,
-
Not Synced
and part of it is top-down: it’s driven by what we expect to see.
-
Not Synced
Actually most of it, about 10 times as much,
-
Not Synced
is driven by what we expect to see.
-
Not Synced
So we actually—actively—check for stimuli in the environment.
-
Not Synced
And this bottom-up top-down process in perception is interleaved.
-
Not Synced
And it’s adaptive.
-
Not Synced
We create new concepts and integrate them.
-
Not Synced
And we can revise those concepts over time.
-
Not Synced
And we can adapt it to a given environment
-
Not Synced
without completely revising those representations.
-
Not Synced
Without making them unstable.
-
Not Synced
And it works both on sensory input and memory.
-
Not Synced
I think that memory access is mostly a perceptual process.
-
Not Synced
It has anytime characteristics.
-
Not Synced
So it works with partial solutions and is useful already.
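A small Python sketch of what such an interleaved, anytime perception loop might look like; the hypothesis scores, the `sense` callback, and the update rule are all invented for illustration:

```python
def perceive(sense, hypotheses, budget):
    """Refine a set of hypotheses about the scene until the time budget runs out.

    `sense(expectation)` returns evidence for specific expectations (top-down check);
    `hypotheses` maps interpretation -> prior score.
    After every step the current best hypothesis is a usable partial solution.
    """
    best = max(hypotheses, key=hypotheses.get)
    for step in range(budget):
        # Top-down: actively check for the stimuli the leading hypothesis predicts.
        evidence = sense(best)
        # Bottom-up: let the evidence re-weight all competing interpretations.
        for h in hypotheses:
            hypotheses[h] += evidence.get(h, 0.0)
        best = max(hypotheses, key=hypotheses.get)
        yield best  # anytime: a partial result is available at every step

def sense(expectation):
    # Pretend sensor: evidence mildly favouring "cat" regardless of what we look for.
    return {"cat": 0.2, "dog": 0.05}

hypotheses = {"cat": 0.1, "dog": 0.3}
for current_best in perceive(sense, hypotheses, budget=5):
    print(current_best)  # "dog" at first, then "cat" once evidence accumulates
```

The same loop would run over memory content instead of sensory input if memory access is, as suggested above, mostly a perceptual process.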
-
Not Synced
Categorization.
-
Not Synced
We want to have categories based on saliency,
-
Not Synced
that is, on similarity and dissimilarity that we can perceive, and so on.
-
Not Synced
Based on goals, that is, on motivational relevance.
-
Not Synced
And on social criteria.
-
Not Synced
Somebody suggests categories to me,
-
Not Synced
and I find out what they mean by those categories.
-
Not Synced
What’s the difference between cats and dogs?
-
Not Synced
I would never have come up on my own with the idea of making two baskets:
-
Not Synced
the Pekinese and the shepherds in one, and all the cats in the other.
-
Not Synced
But if you suggest it to me, I come up with a classifier.
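As a toy illustration of that last point (not any actual mechanism), here is a nearest-centroid classifier in Python that only gets built once someone suggests the two baskets; the features and numbers are made up:

```python
import numpy as np

# (size, ear_floppiness, purring) -- invented feature vectors for a few animals.
# The category boundary ("dog" vs "cat") is socially suggested, not discovered.
examples = {
    "dog": np.array([[0.9, 0.8, 0.0], [0.3, 0.9, 0.0]]),   # shepherd, Pekinese
    "cat": np.array([[0.3, 0.1, 1.0], [0.35, 0.2, 1.0]]),
}
centroids = {label: xs.mean(axis=0) for label, xs in examples.items()}

def classify(x):
    """Assign the suggested category whose centroid is closest."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(np.array([0.4, 0.15, 1.0])))  # -> "cat"
```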
-
Not Synced
Then… next thing: universal problem solving and taskability.
-
Not Synced
We don’t want to have specific solutions;
-
Not Synced
we want to have general solutions.
-
Not Synced
We want it to be able to play every game,
-
Not Synced
to find out how to play every game for instance.
-
Not Synced
Language: the big domain of organizing mental representations,
-
Not Synced
which are probably fuzzy, distributed hyper-graphs
-
Not Synced
into discrete strings of symbols.
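A very rough Python sketch of that gap, with an invented toy hypergraph and a crude linearization; nothing here reflects an actual representation format:

```python
# A tiny weighted hypergraph of concepts, flattened into one discrete symbol
# string -- roughly the mapping problem that language has to solve.

hyperedges = [
    # each hyperedge links several nodes at once, with a fuzzy weight
    ({"dog", "chase", "cat"}, 0.9),
    ({"cat", "tree", "climb"}, 0.7),
]

def linearize(edges):
    """Flatten the fuzzy graph into a single discrete symbol string (crudely)."""
    tokens = []
    for nodes, weight in sorted(edges, key=lambda e: -e[1]):
        tokens.extend(sorted(nodes))
    return " ".join(tokens)

print(linearize(hyperedges))  # -> "cat chase dog cat climb tree"
```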
-
Not Synced
Sociality:
-
Not Synced
interpreting others.
-
Not Synced
It’s what we call theory of mind.
-
Not Synced
Social drives, which make us conform to social situations and engage in them.
-
Not Synced
Personhood and self-concept.
-
Not Synced
How does that work?
-
Not Synced
Personality properties.
-
Not Synced
How can we understand, and implement, and test for them?
-
Not Synced
Then the big issue of integration.
-
Not Synced
How can we get analytical and associative operations to work together?
-
Not Synced
Attention.
-
Not Synced
How can we direct attention and mental resources between different problems?
-
Not Synced
Developmental trajectory.
-
Not Synced
How can we start as kids and grow our system to become more and more adult-like, and maybe even surpass that?
-
Not Synced
Persistence.
-
Not Synced
How can we make the system stay active, instead of rebooting it every other day because it has become unstable?
-
Not Synced
And then benchmark problems.
-
Not Synced
As we know, most AI has benchmarks like
-
Not Synced
how to drive a car,
-
Not Synced
or how to control a robot,
-
Not Synced
or how to play soccer.
-
Not Synced
And you end up with car-driving toasters, and
-
Not Synced
soccer-playing toasters,
-
Not Synced
and chess-playing toasters.
-
Not Synced
But actually, we want to have a system
-
Not Synced
that is forced to have a mind.
-
Not Synced
Those need to be our benchmarks.
-
Not Synced
So we need to find tasks that enforce all this universal problem solving,
-
Not Synced
and representation, and perception,
-
Not Synced
and support incremental development.
-
Not Synced
And that inspire a research community.
-
Not Synced
And, last but not least, it needs to attract funding.
-
Not Synced
So.
-
Not Synced
It needs to be something that people can understand and engage in.
-
Not Synced
And that seems to be meaningful to people.
-
Not Synced
So these are a bunch of the issues that urgently need to be addressed…
-
Not Synced
… in the next…
-
Not Synced
15 years or so.
-
Not Synced
And this means, for …
-
Not Synced
… my immediate scientific career, and for yours.
-
Not Synced
You can get a little bit more information on the home page of the project, which is micropsi.com.
-
Not Synced
You can also send me emails if you’re interested.
-
Not Synced
And I want to thank a lot of people who have supported me. And…
-
Not Synced
you for your attention.
-
Not Synced
And giving me the chance to talk about AI.
-
Not Synced
[applause]