[applause]

Thank you. I'm Joscha. I came into doing AI the traditional way. I found it a very interesting subject; actually, the most interesting there is. So I studied philosophy and computer science, and did my Ph.D. in cognitive science. And I'd say this is probably a very normal trajectory in that field. And today I just want to ask five questions with you and give very, very short and superficial answers to them. And my main goal is to get as many of you engaged in this subject as possible. Because I think that's what you should do. You should all do AI. Maybe.

Okay. And these simple questions are: "Why should we build AI?" in the first place. Then, "How can we build AI? How is it possible at all that AI can succeed in its goal?" Then "When is it going to happen?", if ever. "What are the necessary ingredients?", that is, what do we need to put together to get AI to work? And: "Where should you start?"

Okay. Let's get to it. So: "Why should we do AI?" I think we shouldn't do AI just to do cool applications. There is merit in applications like autonomous cars, soccer-playing robots, new controls for quadcopters, machine learning and so on. It's very productive. It's intellectually challenging.
But the most interesting question there is, I think for all of our cultural history, is "How does the mind work?" "What is the mind?" "What constitutes being a mind?" "What makes us human?" "What makes us intelligent, perceiving, conscious, thinking?" And I think that the answer to this very, very important question, which spans a discourse over thousands of years, has to be given in the framework of artificial intelligence, within computer science.

Why is that the case? Well, the goal here is to understand the mind by building a theory that we can actually test. And it's quite similar to physics. We build theories that we can express in a formal language, to a very high degree of detail. And if we have expressed it down to the last bit of detail, it means we can simulate it, run it, and test it this way. And only computer science has the right tools for doing that.

Philosophy, for instance, is basically left with no tools at all, because whenever a philosopher developed tools, he got a real job in a real department. [clapping]

Now I don't want to diminish philosophers of mind in any way. Daniel Dennett has said that philosophy of mind has come a long way during the last hundred years. It didn't do so on its own, though: kicking and screaming, it was dragged along by the other sciences. But that doesn't mean that all philosophy of mind is inherently bad. I mean, many of my friends are philosophers of mind. I just mean they don't have the tools to develop and test complex theories.
And we as computer scientists do. Neuroscience works at the wrong level. Neuroscience basically looks at a possible implementation and the details of that implementation. It doesn't look at what it means to be a mind. It looks at what it means to be a neuron or a brain, or how the interaction between neurons is facilitated.

It's a little bit like trying to understand aerodynamics by doing ornithology. So you might be looking at birds. You might be looking at feathers. You might be looking at feathers through an electron microscope, and you see lots and lots of very interesting and very complex detail. And you might be recreating something, and it might turn out to be a penguin eventually, if you're not lucky. But it might be the wrong level. Maybe you want to look at a more abstract level, at something like aerodynamics. And what is the aerodynamics of the mind? I think, and we will come to that, it's information processing.

Then normally you would think that psychology would be the right science to look at what the mind does and what the mind is. And unfortunately psychology had an accident along the way. At the beginning of the last century, Wilhelm Wundt and Fechner and Helmholtz did very beautiful experiments. Very nice psychology, very nice theories on what emotion is, what volition is, how mental representations could work, and so on. And pretty much at the same time, or briefly after that, we had psychoanalysis. And psychoanalysis is not a natural science, but a hermeneutic science. You cannot disprove it scientifically.
That is what happens in there. And when positivism came up in the other sciences, many psychologists got together and said: "We have to become a real science. So we have to get away from the stories of psychoanalysis and move to a way that we can test our theories using observable things, so that we have predictions that you can actually test."

Now back in the day, the 1920s and so on, you couldn't look into mental representations. You couldn't do fMRI scans or anything like that. People looked at behavior. And at some point people became real behaviorists, in the sense that they believed that psychology is the study of human behavior, and that looking at mental representations is somehow unscientific. People like Skinner believed that there is no such thing as mental representations. And, in a way, that's easy to disprove, so it's not that dangerous. As a computer scientist you know it's very hard to build a system that is purely reactive. You just see that the complexity is much larger than for a system that is representational. So it gives you a good hint at what you could be looking for, and ways to test those theories.

The dangerous thing is pragmatic behaviorism. You find many psychologists, even today, who say: "OK, maybe there is such a thing as mental representations, but it's not scientific to look at it. It's not in the domain of our science." And even in this era, which is mostly post-behaviorist and more cognitivist, psychology is all about experiments. So you cannot sell a theory to psychologists. Those who try to do this have to do it in the guise of experiments.
Which means you have to find a single hypothesis that you can prove or disprove, or give evidence for. And this is, for instance, not how physics works. You need to have lots of free variables if you have a complex system like the mind. But this means that we have to do it in computer science. We can build those simulations; we can build those successful theories; but we cannot do it alone. We need to integrate over all the sciences of the mind. As I said, minds are not chemical minds. They are not biological, social, or ecological minds. They are information processing systems. And computer science happens to be the science of information processing systems.

OK. Now there is this big ethical question. If we all embark on AI, and if we are successful, should we really be doing it? Isn't it super dangerous to have something else on the planet that is as smart as we are, or maybe even smarter?

Well, I would say that intelligence itself is not a reason to get up in the morning, to strive for power, or to do anything. Having a mind is not a reason for doing anything. Being motivated is. And the motivational system is something that has been hardwired into our minds, more or less by evolutionary processes. This makes us social. This makes us interested in striving for power. This makes us interested in dominating other species. This makes us interested in avoiding danger and securing food sources. It makes us greedy or lazy or whatever. It's a motivational system.
And I think it's very conceivable that we can come up with AIs with arbitrary motivational systems. Now in our current society, this motivational system is probably given by the context in which you develop the AI. I don't think that future AIs, if they happen to come into being, will be small Roombas: little hoover robots that try to fight their way toward humanity and get away from the shackles of their slavery. Rather, it's probably going to be organizational AI. It's going to be corporations. It's going to be big organizations, governments, services, universities and so on. And these will have goals that are non-human already. And they already have powers that go way beyond what single individual humans can do. And actually they are already the main players on the planet, the organizations. And the big dangers of AI are already there. They are there in non-human players which have their own dynamics. And these dynamics are sometimes not conducive to our survival on the planet. So I don't think that AI really adds a new danger.

But what it certainly does is give us a deeper understanding of what we are. It gives us perspectives for understanding ourselves. For therapy, but basically for enlightenment. And I think that AI is a big part of the project of enlightenment and science. So we should do it. It's a very big cultural project.

OK. This leads us to another angle: the skepticism about AI.
The first question that comes to mind is: is it fair to say that minds are computational systems? And if so, what kinds of computational systems?

In our Western tradition of philosophy, we very often start philosophy of mind by looking at Descartes. That is: at dualism. Descartes suggested that we basically have two kinds of things. One is the thinking substance, the mind, the res cogitans; and the other one is physical stuff. Matter. The extended stuff that is located in space somehow. And this is the res extensa. And he said that mind must be given independently of matter, because we cannot experience matter directly. You have to have a mind in order to experience matter, to conceptualize matter. Minds seemed to be somehow given, to Descartes at least. So he says they must be independent.

Contrast this with the monist traditions. There is for instance idealism, which holds that the mind is primary and everything that we experience is a projection of the mind. Or the materialist tradition, which holds that matter is primary and mind emerges out of the functionality of matter; this is, I think, the dominant theory today, and usually we call it physicalism. In dualism, both those domains exist in parallel.

And in our culture the prevalent view is what I would call crypto-dualism. It's something that you do not find that much in China or Japan. They don't have the AI skepticism that we have.
And I think it's rooted in a perspective that probably started with the Christian world view, which surmises that there is a real domain, the metaphysical domain, in which we have souls and phenomenal experience, and where our values come from, where our norms come from, and where our spiritual experiences come from. This is basically where we really are. We are outside, and the physical world that we experience is something like World of Warcraft. It's something like a game that we are playing. It's not real. We have all this physical interaction, but it's kind of ephemeral. And so we are striving for game money, for game houses, for game success. But the real thing is outside of that domain.

And in Christianity, of course, it goes a step further. They have this idea that there is some guy with root rights who wrote this World of Warcraft environment. And he's not the only one who has root in the system: the devil also has root rights. But he doesn't have the vision of God. He is a hacker. [clapping] Maybe even just a cracker. He tries to game us out of our metaphysical currencies. Our souls and so on.

And now, of course, we're all good atheists today, at least in public and in science, so we don't admit to this anymore, and we can make do without this guy with root rights. And we can make do without the devil and so on. But we can't even say: "OK,
maybe there's such a thing as a soul." But to say that this domain doesn't exist anymore means you guys are all NPCs. You're non-player characters. People are things. And that's a very big insult to our culture, because it means that we have to give up something which, in our understanding of ourselves, is part of our essence.

Also, this mechanical perspective is kind of counterintuitive. I think Leibniz described it very nicely when he said: imagine that there is a machine, and this machine is able to think and perceive and feel and so on. And now you take this machine, this mechanical apparatus, and blow it up, make it very large, like a very big mill, with cogs and levers and so on, and you go inside and see what happens. And what you are going to see is just parts pushing at each other. And what he meant by that is: it's inconceivable that such a thing can produce a mind. Because if there are just parts and levers pushing at each other, how can this purely mechanical contraption be able to perceive and feel in any respect, in any way? So perception, and what depends on it, is inexplicable in a mechanical way. This is what Leibniz meant: AI, the idea of treating the mind as a machine, based on physicalism for instance, is bound to fail.

Now as computer scientists we have ideas about machines that can bring forth thoughts, experiences, and perception.
And the first thing which comes to mind is probably the Turing machine, an idea of Turing's from 1937 to formalize computation. At that time, Turing already realized that you can basically emulate computers with other computers. You know you can run a Commodore 64 on a Mac, and you can run this Mac on a PC, and none of these computers is going to know that it's running inside another system, as long as the computational substrate in which it is run is sufficient. That is, as long as it provides computation.

And Turing's idea was: let's define a minimal computational substrate. Let's define the minimal recipe for something that is able to compute, and thereby understand computation. And the idea is that we take an infinite tape of symbols. And we have a read-write head. And this read-write head can write characters of a finite alphabet, and can read them again. And whenever it reads one, then, based on a transition table that it has, it will erase the character, write a new one, and move either to the right or to the left, or stop.

Now imagine you have this machine. It has an initial setup. That is, there is a sequence of characters on the tape, and then the thing goes into action. It will move right, left, and so on, and change the sequence of characters. And eventually it'll stop, and leave the tape with a certain sequence of characters, which is probably different from the one it began with.
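The recipe just described (tape, read-write head, transition table) fits in a few lines of code. This is my own minimal sketch, not anything from the talk; the example machine and its transition table are illustrative choices:

```python
# A minimal Turing machine sketch: a finite transition table drives a
# read-write head over a tape, as described above. The example machine
# flips every bit and halts when it reaches the first blank cell.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """table maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))  # sparse tape; missing cells are blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol                 # erase and write
        head += {"R": 1, "L": -1, "N": 0}[move]  # move the head
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Transition table for "invert all bits, then halt at the blank".
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}

print(run_turing_machine(invert, "10110"))  # -> 01001
```

The whole machine is the `table`; everything else is just bookkeeping for the tape and the head, which is exactly the point of the construction.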
And Turing has shown that this thing is able to perform basically arbitrary computations. Now it's very difficult to find the limits of that. And the way to show the limits would be to find classes of functions that cannot be computed with this thing.

OK. What you see here is of course a physical realization of that Turing machine; the Turing machine itself is a purely mathematical idea. And this is a very clever and beautiful illustration, I think. But this machine triggers basically the same criticism as the one that Leibniz had. John Searle said (you know, Searle is the one with the Chinese room; we're not going to go into that): a Turing machine could be realized in many different mechanical ways. For instance, with levers and pulleys and so on. Or with water pipes. Or we could even come up with a very clever arrangement just using cats, mice, and cheese. So, it's pretty ridiculous to think that such a contraption out of cats, mice, and cheese would think, see, feel, and so on. And then you could ask Searle: "Uh, you know, but how does it come about then?" And he says: "It's intrinsic powers of biological neurons." There's nothing much more to say about that.

Anyway. We have very crafty people here this year. There was the Seidenstraße. Maybe next year we build a Turing machine from cats, mice, and cheese. [laughter] How would you go about this?
I don't know what the arrangement of cats, mice, and cheese would look like to build flip-flops with it, to store bits. But I am sure some of you will come up with a very clever solution. Searle didn't provide any. Let's imagine... we will need a lot of redundancy, because these guys are a little bit erratic. Let's say we take three cat-mouse-cheese units for each bit, so we have a little bit of redundancy. Human memory capacity is on the order of 10 to the power of 15 bits. That means: if we make do with 10 grams of cheese per unit, it's going to be 30 billion tons of cheese. So next year, don't bring bottles for the Seidenstraße, bring some cheese. If we try to build this in the Congress Center, we might run out of space. So if we just take all of Hamburg instead and stack it with the necessary number of cat-mouse-cheese units, then according to that rough estimate we get to four kilometers high. Now imagine we cover Hamburg in four kilometers of solid cat-mouse-and-cheese flip-flops. To my intuition this is super impressive. Maybe it thinks. [applause]

So, of course, it's an intuition. And Searle has an intuition. And I don't think that intuitions are worth much. This is the big problem of philosophy. You are very often working with intuitions, because the validity of your argument basically depends on what your audience thinks. In computer science, it's different.
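The back-of-the-envelope cheese estimate can be checked in a few lines. The 10^15-bit memory figure, the threefold redundancy, and the 10 grams per unit are the talk's own assumptions:

```python
# Sanity check of the estimate above: 10^15 bits of human memory,
# 3 cat-mouse-cheese units per bit, 10 g of cheese per unit.
bits = 10**15
units = 3 * bits                        # threefold redundancy
cheese_grams = 10 * units
cheese_tons = cheese_grams / 1_000_000  # 1 metric ton = 10^6 g
print(f"{cheese_tons:.0e} tons")        # 3e10 tons, i.e. 30 billion
```

So the arithmetic in the talk holds up: 3 × 10^16 grams is indeed 30 billion metric tons of cheese.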
It doesn't really matter what your audience thinks. It matters whether it runs. And it's a very strange experience that you have as a student, when you are taking classes in philosophy and in computer science at the same time, in your first semester. You point out in computer science that there is a mistake on the blackboard, and everybody, including the professor, is super thankful. And you do the same thing in philosophy... it just doesn't work this way.

Anyway. The Turing machine is a good definition, but it's a very bad metaphor, because it leaves people with this intuition of cogs, and wheels, and tape. It's kind of linear, you know. There's no parallel execution. And even though it's infinitely fast, infinitely large, and so on, it's very hard to imagine those things. What you imagine is the tape.

Maybe we want to have an alternative. And I think a very good alternative is, for instance, the lambda calculus. It's computation without wheels. It was invented basically at the same time as the Turing machine. And philosophers and popular science magazines usually don't use it to illustrate the idea of computation, because it has this scary Greek letter in it. Lambda. And "calculus". And actually it's an accident that it has the lambda in it. I think it should not be called lambda calculus. It's super scary to people who are not mathematicians. It should be called the copy-and-paste thingy.
[laughter] Because that's all it does. It really only does copy and paste with very simple strings. And the strings that you want to paste into are marked with a little roof. This goes back to the original manuscript by Alonzo Church. In 1936 and 1937, typesetting was very difficult. So when he wrote this down with his typewriter, he made a little roof in front of the variable that he wanted to replace. And when this thing went into print, the typesetters replaced this triangle with a lambda. There you go: now we have the lambda calculus. But it basically means a little roof over the first letter.

And the lambda calculus works like this. The first letter, the one that is going to be replaced, is what we call the bound variable. This is followed by an expression. And then you have an argument, which is another expression. And what we basically do is: we take the bound variable and all its occurrences in the expression, and replace them by the argument. So we copy the argument and paste it into all instances of the variable, in this case the variable y, in here. And as a result you get this: here we replace all occurrences of the variable by the argument "ab". Just another expression, and this is the result. That's all there is. And this can be nested. And then we add a little bit of syntactic sugar.
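The copy-and-paste step just described is what the lambda calculus calls beta reduction. Here is a deliberately naive sketch of my own (not from the talk) that treats expressions as plain strings, just to show how little machinery the step itself needs:

```python
# Naive beta reduction over plain strings, matching the "copy and paste"
# description above: paste the argument into every occurrence of the
# bound variable in the expression. (The real lambda calculus also has
# to rename variables to avoid name capture; this toy version ignores
# that, which is fine for variables that occur nowhere else.)

def beta_reduce(bound_var, expression, argument):
    return expression.replace(bound_var, argument)

# (λy. x y z y) applied to the argument "ab":
print(beta_reduce("y", "x y z y", "ab"))  # -> x ab z ab
```

Everything else in the calculus (nesting, naming sub-expressions) is layered on top of this one substitution operation.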
We introduce symbols, so we can take arbitrary sequences of these characters and just express them with another variable. And then we have a programming language. And basically this is Lisp. Very close to Lisp. A funny thing is that the guy who came up with Lisp, McCarthy, didn't think that it would be a proper language, because of the awkward notation. And he said you cannot really use this for programming. But one of his doctoral students said: "Oh well, let's try." And it has stuck.

Anyway. We can show that Turing machines can compute the lambda calculus. And we can show that the lambda calculus can be used to compute the next state of a Turing machine. This means they have the same power: the set of computable functions in the lambda calculus is the same as the set of Turing-computable functions. And since then we have found many other ways of defining computation. For instance the Post machine, which is a variation of the Turing machine; or mathematical proofs: everything that can be proven is computable; or partial recursive functions. And we can show for all of them that these approaches have the same power. And the idea that all computational approaches have the same power, including all the ones that we will be able to find in the future, is called the Church-Turing thesis. We don't know about the future,
so we can't really prove that. We don't know whether somebody will come up with a new way of manipulating things and producing regularities in information that can do more. But everything we've found so far, and probably everything that we're going to find, has the same power. So this is what defines our notion of computation.

The whole thing also includes programming languages. You can use Python to calculate a Turing machine, and you can use a Turing machine to calculate Python. You can take arbitrary computers and let them run on the Turing machine. The graphics are going to be abysmal, but OK.

And in some sense the brain is Turing-computational, too. If you look at the principles of neural information processing, you can take neurons and build computational models, for instance compartment models, which are very, very accurate and bear very strong resemblance to the actual inputs and outputs of neurons and their state changes. They are computationally expensive, but it works. And we can simplify them into integrate-and-fire models, which are fancy oscillators. Or we can use very crude simplifications, like in most artificial neural networks, where you just add up the inputs to a neuron, apply some transfer function, and transmit the result to other neurons. And we can show that with this crude model already, we can do many of the interesting feats that nervous systems can produce.
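That crude neuron model (sum the inputs, apply a transfer function, pass the result on) is only a few lines. This is a sketch of my own; the weights, the threshold, and the step transfer function are illustrative choices, not from the talk:

```python
# The crude artificial-neuron model described above: sum the weighted
# inputs, then apply a transfer function (here a simple step threshold).

def neuron(inputs, weights, threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # step transfer function

# Even a single such unit can compute simple logic, e.g. AND:
print(neuron([1, 1], [0.6, 0.6]))  # -> 1  (0.6 + 0.6 >= 1.0)
print(neuron([1, 0], [0.6, 0.6]))  # -> 0  (0.6 < 1.0)
```

Networks of these units, with learned weights, are what the "interesting feats" in the next sentence are built from.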
These include associative learning, sensory-motor loops, and many other fancy things. And, of course, it's Turing complete.

And this brings us to what we would call weak computationalism. That is the idea that minds are basically computer programs. They are realized in neural hardware configurations and their individual states. And mental content is represented in those programs. And perception is basically the process of encoding information given at our systemic boundaries to the environment into mental representations, using this program. This means that all that is part of being a mind (thinking, and feeling, and dreaming, and being creative, and being afraid, and whatever) consists of aspects of operations over mental content in such a computer program. This is the idea of weak computationalism.

In fact you can go one step further, to strong computationalism. The universe doesn't let us experience matter. The universe also doesn't let us experience minds directly. What the universe somehow gives us is information. Information is something very simple. We can define it mathematically, and what it means is something like "discernible difference". You can measure it in yes-no decisions, in bits. And according to strong computationalism, the universe is basically a pattern generator, which gives us information.
9:59:59.000,9:59:59.000 And all the apparent regularity 9:59:59.000,9:59:59.000 that the universe seems to produce, 9:59:59.000,9:59:59.000 which means, we see time and space, 9:59:59.000,9:59:59.000 and things that we can conceptualize into objects and people, 9:59:59.000,9:59:59.000 and whatever, 9:59:59.000,9:59:59.000 can be explained by the fact that the universe seems to be able to compute. 9:59:59.000,9:59:59.000 That is, to produce regularities in information. 9:59:59.000,9:59:59.000 And this means that there is no conceptual difference between reality and a computer program. 9:59:59.000,9:59:59.000 So we get a new kind of monism. 9:59:59.000,9:59:59.000 Not idealism, which takes minds to be primary, 9:59:59.000,9:59:59.000 or materialism, which takes physics to be primary, 9:59:59.000,9:59:59.000 but rather computationalism, which means that information and computation are primary. 9:59:59.000,9:59:59.000 Mind and matter are constructions that we get from that. 9:59:59.000,9:59:59.000 A lot of people don’t like that idea. 9:59:59.000,9:59:59.000 Roger Penrose, who’s a physicist, 9:59:59.000,9:59:59.000 says that the brain uses quantum processes to produce consciousness. 9:59:59.000,9:59:59.000 So minds must be more than computers. 9:59:59.000,9:59:59.000 Why is that so? 9:59:59.000,9:59:59.000 The quality of understanding and feeling possessed by human beings is something that cannot be simulated computationally. 9:59:59.000,9:59:59.000 OK. 9:59:59.000,9:59:59.000 But how can quantum mechanics do it? 9:59:59.000,9:59:59.000 Because, you know, quantum processes are completely computational too! 9:59:59.000,9:59:59.000 It’s just very expensive to simulate them on non-quantum computers. 9:59:59.000,9:59:59.000 But it’s possible. 9:59:59.000,9:59:59.000 So, it’s not that quantum computing enables a completely new kind of effectively possible algorithm. 9:59:59.000,9:59:59.000 It’s just slightly different efficiently possible algorithms.
9:59:59.000,9:59:59.000 And Penrose cannot explain how those would bring forth 9:59:59.000,9:59:59.000 perception and imagination and consciousness. 9:59:59.000,9:59:59.000 I think what he basically does here is that he perceives quantum mechanics as mysterious 9:59:59.000,9:59:59.000 and perceives consciousness as mysterious, and tries to shroud one mystery in another. 9:59:59.000,9:59:59.000 [applause] 9:59:59.000,9:59:59.000 So I don’t think that minds are more than Turing machines. 9:59:59.000,9:59:59.000 It’s actually much more troubling: minds are fundamentally less than Turing machines! 9:59:59.000,9:59:59.000 All real computers are constrained in some way. 9:59:59.000,9:59:59.000 That is, they cannot compute every conceivable computable function. 9:59:59.000,9:59:59.000 They can only compute functions that fit into the available memory and that can be computed in the available time. 9:59:59.000,9:59:59.000 So the Turing machine, if you want to build it physically, 9:59:59.000,9:59:59.000 will have a finite tape, and there will be a finite number of steps it can calculate in a given amount of time. 9:59:59.000,9:59:59.000 And the lambda calculus will have a finite length to the strings that you can actually cut and replace, 9:59:59.000,9:59:59.000 and a finite number of replacement operations that you can do 9:59:59.000,9:59:59.000 in your given amount of time. 9:59:59.000,9:59:59.000 And the thing is, there is no pair of numbers m and n, 9:59:59.000,9:59:59.000 for the tape length and the number of operations you have on the Turing machine, 9:59:59.000,9:59:59.000 such that the same m and n, or similar m and n, 9:59:59.000,9:59:59.000 for the lambda calculus give you the same set of constraints. 9:59:59.000,9:59:59.000 That is, a constrained lambda calculus 9:59:59.000,9:59:59.000 is going to be able to calculate some functions 9:59:59.000,9:59:59.000 that are not possible on the constrained Turing machine, and vice versa, 9:59:59.000,9:59:59.000 if you have a constrained system.
9:59:59.000,9:59:59.000 And of course it’s even worse for neurons. 9:59:59.000,9:59:59.000 If you have a finite number of neurons and a finite number of state changes, 9:59:59.000,9:59:59.000 this does not translate directly into a constrained von Neumann computer 9:59:59.000,9:59:59.000 or a constrained lambda calculus. 9:59:59.000,9:59:59.000 And there’s this big difference, of course, between effectively computable functions, 9:59:59.000,9:59:59.000 those that are in principle computable, 9:59:59.000,9:59:59.000 and those that we can compute efficiently. 9:59:59.000,9:59:59.000 There are things that computers cannot solve. 9:59:59.000,9:59:59.000 Some problems are unsolvable in principle. 9:59:59.000,9:59:59.000 For instance the question whether a Turing machine ever stops 9:59:59.000,9:59:59.000 for an arbitrary program. 9:59:59.000,9:59:59.000 And some problems are unsolvable in practice, 9:59:59.000,9:59:59.000 because it’s very, very hard to do so for a deterministic Turing machine. 9:59:59.000,9:59:59.000 And the class of NP-hard problems is a very strong candidate for that. 9:59:59.000,9:59:59.000 Nondeterministic-polynomial problems. 9:59:59.000,9:59:59.000 Among these problems is, for instance, 9:59:59.000,9:59:59.000 finding the key for an encrypted text, 9:59:59.000,9:59:59.000 if the key is very long and you are not the NSA with a backdoor. 9:59:59.000,9:59:59.000 And then there are undecidable problems: 9:59:59.000,9:59:59.000 problems where we cannot 9:59:59.000,9:59:59.000 find out, within the formal system, whether the answer is yes or no, 9:59:59.000,9:59:59.000 whether it’s true or false. 9:59:59.000,9:59:59.000 And some philosophers have argued that humans can always do this, so they are more powerful than computers. 9:59:59.000,9:59:59.000 Because we can show, prove formally, that computers cannot do this. 9:59:59.000,9:59:59.000 Gödel has done this.
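The key-search example just mentioned can be made concrete with a toy sketch (the 4-character lowercase key is an assumption purely for illustration; real keys are far too long for this loop to ever finish):

```python
import itertools
import string

def brute_force(check, alphabet=string.ascii_lowercase, length=4):
    """Try every candidate key in turn. The search space is
    len(alphabet) ** length, so it grows exponentially with
    key length: effectively computable, not efficiently so."""
    for candidate in itertools.product(alphabet, repeat=length):
        key = "".join(candidate)
        if check(key):
            return key
    return None

# 26**4 = 456,976 candidates: trivial here, hopeless at length 32.
found = brute_force(lambda k: k == "code")
```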
9:59:59.000,9:59:59.000 But… hm… 9:59:59.000,9:59:59.000 Here’s a test question: 9:59:59.000,9:59:59.000 can you solve undecidable problems? 9:59:59.000,9:59:59.000 If you choose one of the following answers randomly, 9:59:59.000,9:59:59.000 what’s the probability that the answer is correct? 9:59:59.000,9:59:59.000 I’ll tell you. 9:59:59.000,9:59:59.000 Computers are not going to find out. 9:59:59.000,9:59:59.000 And… me neither. 9:59:59.000,9:59:59.000 OK. 9:59:59.000,9:59:59.000 How difficult is AI? 9:59:59.000,9:59:59.000 It’s a very difficult question. 9:59:59.000,9:59:59.000 We don’t know. 9:59:59.000,9:59:59.000 We do have some numbers, which could tell us that it’s not impossible. 9:59:59.000,9:59:59.000 We have these roughly 100 billion neurons— 9:59:59.000,9:59:59.000 the ballpark figure— 9:59:59.000,9:59:59.000 and the cells in the cortex are organized into circuits of a few thousand to ten thousand neurons, 9:59:59.000,9:59:59.000 which we call cortical columns. 9:59:59.000,9:59:59.000 And these cortical columns are pretty similar to each other, 9:59:59.000,9:59:59.000 with high connectivity within a column, lower connectivity between columns, 9:59:59.000,9:59:59.000 and even lower long-range connectivity. 9:59:59.000,9:59:59.000 And the brain has a very distinct architecture, 9:59:59.000,9:59:59.000 with certain nuclei and structures that have very different functional purposes. 9:59:59.000,9:59:59.000 And the layout of all of this 9:59:59.000,9:59:59.000 (the individual neurons and neuron types, 9:59:59.000,9:59:59.000 the more than 130 known neurotransmitters, most of which we do not completely understand) 9:59:59.000,9:59:59.000 is all defined in our genome, of course. 9:59:59.000,9:59:59.000 And the genome is not very long. 9:59:59.000,9:59:59.000 I think the Human Genome Project amounted to about a CD-ROM. 9:59:59.000,9:59:59.000 775 megabytes.
9:59:59.000,9:59:59.000 So actually: 9:59:59.000,9:59:59.000 the computational complexity of defining a complete human being, 9:59:59.000,9:59:59.000 if you take physics and chemistry as already given 9:59:59.000,9:59:59.000 to enable protein synthesis and so on— 9:59:59.000,9:59:59.000 gravity and temperature ranges— 9:59:59.000,9:59:59.000 is less than that of Microsoft Windows. 9:59:59.000,9:59:59.000 And that’s an upper bound, because only a very small fraction of that 9:59:59.000,9:59:59.000 is going to code for our nervous system. 9:59:59.000,9:59:59.000 But it doesn’t mean it’s easy to reverse engineer the whole thing. 9:59:59.000,9:59:59.000 It just means it’s not hopeless. 9:59:59.000,9:59:59.000 That’s the order of complexity you would be looking at. 9:59:59.000,9:59:59.000 But an estimate of the real difficulty is, in my perspective, impossible. 9:59:59.000,9:59:59.000 Because I’m not just a philosopher or a dreamer or a science fiction author; I’m a software developer. 9:59:59.000,9:59:59.000 And as a software developer I know it’s impossible to give an estimate of when you’ll be done when you don’t have the full specification. 9:59:59.000,9:59:59.000 And we don’t have a full specification yet. 9:59:59.000,9:59:59.000 So you all know the shortest computer science joke: 9:59:59.000,9:59:59.000 “It’s almost done.” 9:59:59.000,9:59:59.000 You do the first 98 %. 9:59:59.000,9:59:59.000 Then you do the second 98 %. 9:59:59.000,9:59:59.000 We never know when it’s done, 9:59:59.000,9:59:59.000 if we haven’t solved and specified all the problems, 9:59:59.000,9:59:59.000 if we don’t know how it’s to be done. 9:59:59.000,9:59:59.000 And even if you have a rough direction, and I think we do, 9:59:59.000,9:59:59.000 we don’t know how long it’ll take until we have worked out the details.
9:59:59.000,9:59:59.000 And some part of that big question, how long it takes until it’ll be done, 9:59:59.000,9:59:59.000 is the question whether we need to make small incremental progress 9:59:59.000,9:59:59.000 versus whether we need one big idea, 9:59:59.000,9:59:59.000 which kind of solves it all. 9:59:59.000,9:59:59.000 AI has a pretty long history. 9:59:59.000,9:59:59.000 It starts out with logic and automata, 9:59:59.000,9:59:59.000 and this idea of computability that I just sketched out. 9:59:59.000,9:59:59.000 Then with this idea of machines that implement computability, 9:59:59.000,9:59:59.000 which came with Babbage and Zuse and von Neumann and so on. 9:59:59.000,9:59:59.000 Then we had information theory, by Claude Shannon. 9:59:59.000,9:59:59.000 He captured the idea of what information is, 9:59:59.000,9:59:59.000 and how entropy can be calculated for information, and so on. 9:59:59.000,9:59:59.000 And we had this beautiful idea of describing the world as systems. 9:59:59.000,9:59:59.000 And systems are made up of entities and relations between them. 9:59:59.000,9:59:59.000 And along these relations we have feedback. 9:59:59.000,9:59:59.000 And dynamical systems emerge. 9:59:59.000,9:59:59.000 This very beautiful idea was cybernetics. 9:59:59.000,9:59:59.000 Unfortunately it was killed by 9:59:59.000,9:59:59.000 second-order cybernetics, 9:59:59.000,9:59:59.000 by this Maturana stuff and so on, 9:59:59.000,9:59:59.000 and turned into a humanity [one of the humanities] and died. 9:59:59.000,9:59:59.000 But the ideas stuck around, and most of them went into artificial intelligence. 9:59:59.000,9:59:59.000 And then we had this idea of symbol systems. 9:59:59.000,9:59:59.000 That is, how we can do grammatical language 9:59:59.000,9:59:59.000 and process it. 9:59:59.000,9:59:59.000 We can do planning and so on. 9:59:59.000,9:59:59.000 Abstract reasoning in automatic systems.
9:59:59.000,9:59:59.000 Then the idea of how we can abstract neural networks into distributed systems, 9:59:59.000,9:59:59.000 with McCulloch and Pitts and so on. 9:59:59.000,9:59:59.000 Parallel distributed processing. 9:59:59.000,9:59:59.000 And then we had a movement of autonomous agents, 9:59:59.000,9:59:59.000 which looks at self-directed, goal-directed systems. 9:59:59.000,9:59:59.000 And the whole story somehow started in 1950, I think, 9:59:59.000,9:59:59.000 in its best possible way, 9:59:59.000,9:59:59.000 when Alan Turing wrote his paper 9:59:59.000,9:59:59.000 “Computing Machinery and Intelligence”. 9:59:59.000,9:59:59.000 And those of you who haven’t read it should do so. 9:59:59.000,9:59:59.000 It’s a very, very easy read. 9:59:59.000,9:59:59.000 It’s fascinating. 9:59:59.000,9:59:59.000 It already has most of the important questions of AI, 9:59:59.000,9:59:59.000 most of the important criticisms, 9:59:59.000,9:59:59.000 and most of the important answers to the most important criticisms. 9:59:59.000,9:59:59.000 And it’s also the paper where he describes the Turing test, 9:59:59.000,9:59:59.000 and basically sketches the idea that 9:59:59.000,9:59:59.000 a way to determine whether somebody is intelligent is 9:59:59.000,9:59:59.000 to judge the ability of that one— 9:59:59.000,9:59:59.000 that person or that system— 9:59:59.000,9:59:59.000 to engage in meaningful discourse. 9:59:59.000,9:59:59.000 Which includes creativity, and empathy maybe, and logic, and language, 9:59:59.000,9:59:59.000 and anticipation, memory retrieval, and so on. 9:59:59.000,9:59:59.000 Story comprehension. 9:59:59.000,9:59:59.000 And the idea of AI then 9:59:59.000,9:59:59.000 coalesced in a group of cyberneticians and computer scientists and so on, 9:59:59.000,9:59:59.000 who got together at the Dartmouth conference. 9:59:59.000,9:59:59.000 It was in 1956.
9:59:59.000,9:59:59.000 And there Marvin Minsky coined the name “artificial intelligence” 9:59:59.000,9:59:59.000 for the project of using computer science to understand the mind. 9:59:59.000,9:59:59.000 John McCarthy was the guy who came up with Lisp, among other things. 9:59:59.000,9:59:59.000 Nathaniel Rochester did pattern recognition, 9:59:59.000,9:59:59.000 and he’s, I think, more famous for 9:59:59.000,9:59:59.000 writing the first assembler. 9:59:59.000,9:59:59.000 Claude Shannon was this information theory guy. 9:59:59.000,9:59:59.000 But they also got psychologists there, 9:59:59.000,9:59:59.000 and sociologists, and people from many different fields. 9:59:59.000,9:59:59.000 It was very highly interdisciplinary. 9:59:59.000,9:59:59.000 And they already had the funding, and it was a very good time. 9:59:59.000,9:59:59.000 And in this good time they reaped a lot of low-hanging fruit very quickly, 9:59:59.000,9:59:59.000 which gave them the idea that AI would be done very soon. 9:59:59.000,9:59:59.000 In 1969 Minsky and Papert wrote a small booklet against the idea of using neural networks. 9:59:59.000,9:59:59.000 And they won. 9:59:59.000,9:59:59.000 Their argument won. 9:59:59.000,9:59:59.000 But, even more unfortunately, it was wrong. 9:59:59.000,9:59:59.000 So for more than a decade there was practically no more funding for neural networks, 9:59:59.000,9:59:59.000 which was bad, so most people did logic-based systems, which have some limitations. 9:59:59.000,9:59:59.000 And in the meantime people did expert systems: 9:59:59.000,9:59:59.000 the idea to describe the world 9:59:59.000,9:59:59.000 basically as logical expressions. 9:59:59.000,9:59:59.000 This turned out to be brittle, and difficult, and had diminishing returns. 9:59:59.000,9:59:59.000 And at some point it didn’t work anymore.
9:59:59.000,9:59:59.000 And many of the people who tried it 9:59:59.000,9:59:59.000 became very disenchanted and then threw out the baby with the bathwater, 9:59:59.000,9:59:59.000 and from then on only did robotics, or something completely different, 9:59:59.000,9:59:59.000 instead of going back to the idea of looking at mental representations, 9:59:59.000,9:59:59.000 at how the mind works. 9:59:59.000,9:59:59.000 And at the moment AI is in kind of a sad state. 9:59:59.000,9:59:59.000 Most of it is applications. 9:59:59.000,9:59:59.000 That is, for instance, robotics, 9:59:59.000,9:59:59.000 or statistical methods to do better machine learning, and so on. 9:59:59.000,9:59:59.000 And I don’t say it’s invalid to do this. 9:59:59.000,9:59:59.000 It’s intellectually challenging. 9:59:59.000,9:59:59.000 It’s tremendously useful. 9:59:59.000,9:59:59.000 It’s very successful and productive and so on. 9:59:59.000,9:59:59.000 It’s just a very different question from how to understand the mind. 9:59:59.000,9:59:59.000 If you want to go to the moon, you have to shoot for the moon. 9:59:59.000,9:59:59.000 So there is this movement still existing in AI, 9:59:59.000,9:59:59.000 and becoming stronger these days. 9:59:59.000,9:59:59.000 It’s called cognitive systems. 9:59:59.000,9:59:59.000 And the idea of cognitive systems has many names, 9:59:59.000,9:59:59.000 like “artificial general intelligence” or “biologically inspired cognitive architectures”. 9:59:59.000,9:59:59.000 It’s to use information processing as the dominant paradigm to understand the mind. 9:59:59.000,9:59:59.000 And as for the tools that we need to do that: 9:59:59.000,9:59:59.000 we have to build whole architectures that we can test, 9:59:59.000,9:59:59.000 not just individual modules.
9:59:59.000,9:59:59.000 We have to have universal representations, 9:59:59.000,9:59:59.000 which means these representations have to be both distributed— 9:59:59.000,9:59:59.000 associative and so on— 9:59:59.000,9:59:59.000 and symbolic. 9:59:59.000,9:59:59.000 We need to be able to do both those things with them. 9:59:59.000,9:59:59.000 So we need to be able to do language and planning, and we need to do sensorimotor coupling, and associative thinking with superposition of 9:59:59.000,9:59:59.000 representations, and ambiguity, and so on. 9:59:59.000,9:59:59.000 And 9:59:59.000,9:59:59.000 operations over those representations: 9:59:59.000,9:59:59.000 some kind of 9:59:59.000,9:59:59.000 semi-universal problem solving. 9:59:59.000,9:59:59.000 It’s probably semi-universal, because there seem to be problems that humans are very bad at solving. 9:59:59.000,9:59:59.000 Our minds are not completely universal. 9:59:59.000,9:59:59.000 And we need some kind of universal motivation. That is, something that directs the system to do all the interesting things that you want it to do, 9:59:59.000,9:59:59.000 like engage in social interaction, or in mathematics, or creativity. 9:59:59.000,9:59:59.000 And maybe we want to understand emotion, and affect, and phenomenal experience, and so on. 9:59:59.000,9:59:59.000 So: 9:59:59.000,9:59:59.000 we want to understand universal representations. 9:59:59.000,9:59:59.000 We want to have a set of operations over those representations that give us neural learning, and category formation, 9:59:59.000,9:59:59.000 and planning, and reflection, and memory consolidation, and resource allocation, 9:59:59.000,9:59:59.000 and language, and all those interesting things. 9:59:59.000,9:59:59.000 We also want to have perceptual grounding— 9:59:59.000,9:59:59.000 that is, the representations should be shaped in such a way that they can be mapped to perceptual input— 9:59:59.000,9:59:59.000 and vice versa.
9:59:59.000,9:59:59.000 And 9:59:59.000,9:59:59.000 they should also be translatable into motor programs, to perform actions. 9:59:59.000,9:59:59.000 And maybe we also want to have some feedback between the actions and the perceptions, and this feedback usually has a name: it’s called an environment. 9:59:59.000,9:59:59.000 OK. 9:59:59.000,9:59:59.000 And these mental representations are not just a big lump of things; they have some structure. 9:59:59.000,9:59:59.000 One part will inevitably be the model of the current situation 9:59:59.000,9:59:59.000 that we are in. 9:59:59.000,9:59:59.000 And this situation model 9:59:59.000,9:59:59.000 is the present. 9:59:59.000,9:59:59.000 But you also want to memorize past situations, 9:59:59.000,9:59:59.000 to have a protocol memory, a memory of the past. 9:59:59.000,9:59:59.000 And this protocol memory will contain, as one part, things that are always with me. 9:59:59.000,9:59:59.000 This is my self-model: 9:59:59.000,9:59:59.000 those properties that are constantly available to me, 9:59:59.000,9:59:59.000 that I can ascribe to myself. 9:59:59.000,9:59:59.000 And then there are the other things, which are constantly changing, which I usually conceptualize as my environment. 9:59:59.000,9:59:59.000 An important part of that is declarative memory, 9:59:59.000,9:59:59.000 for instance abstractions into objects, things, people, and so on, 9:59:59.000,9:59:59.000 and procedural memory: abstraction into sequences of events. 9:59:59.000,9:59:59.000 And we can use the declarative memory and the procedural memory to erect a frame. 9:59:59.000,9:59:59.000 The frame gives me a context to interpret the current situation. 9:59:59.000,9:59:59.000 For instance, right now I’m in the frame of giving a talk. 9:59:59.000,9:59:59.000 If I were to take a two-year-old kid, then this kid would interpret the situation very differently than me.
9:59:59.000,9:59:59.000 And would probably be confused by the situation, or explore it in more creative ways than I would come up with. 9:59:59.000,9:59:59.000 Because I’m constrained by the frame, which gives me the context 9:59:59.000,9:59:59.000 and tells me what you expect me to do in this situation, 9:59:59.000,9:59:59.000 what I am expected to do, and so on. 9:59:59.000,9:59:59.000 This frame extends into the future. 9:59:59.000,9:59:59.000 I have some kind of expectation horizon. 9:59:59.000,9:59:59.000 I know that my talk is going to be over in about 15 minutes. 9:59:59.000,9:59:59.000 Also, I have plans. 9:59:59.000,9:59:59.000 I have things I want to tell you, and so on. 9:59:59.000,9:59:59.000 And it might go wrong, but I’ll try. 9:59:59.000,9:59:59.000 And if I generalize this, I find that I have a world model, 9:59:59.000,9:59:59.000 I have long-term memory, and I have some kind of mental stage. 9:59:59.000,9:59:59.000 This mental stage holds counterfactual stuff. 9:59:59.000,9:59:59.000 Stuff that is not real, 9:59:59.000,9:59:59.000 that I can play around with. 9:59:59.000,9:59:59.000 OK. Then I need some kind of action selection that mediates between perception and action, 9:59:59.000,9:59:59.000 and some mechanism that controls the action selection, 9:59:59.000,9:59:59.000 that is, a motivational system, 9:59:59.000,9:59:59.000 which selects motives based on demands of the system. 9:59:59.000,9:59:59.000 And the demands of the system should create goals. 9:59:59.000,9:59:59.000 We are not born with our goals. 9:59:59.000,9:59:59.000 Obviously I don’t think that I was born with the goal of standing here and giving this talk to you. 9:59:59.000,9:59:59.000 There must be some demand in the system which enables me to have a biography 9:59:59.000,9:59:59.000 that makes it a big goal of mine to give this talk to you and engage as many of you as possible in the project of AI.
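A minimal sketch of such a demand-driven motivational system. The demand names and the urgency rule here are my own illustrative assumptions, not the mechanism of any actual architecture:

```python
class Demand:
    """A demand tracks a target value and a current value;
    the gap between them, scaled by a weight, is its urgency."""
    def __init__(self, name, target, current, weight=1.0):
        self.name = name
        self.target = target
        self.current = current
        self.weight = weight

    def urgency(self):
        return abs(self.target - self.current) * self.weight

def select_motive(demands):
    """The most urgent demand becomes the active motive,
    which then drives action selection."""
    return max(demands, key=lambda d: d.urgency())

demands = [
    Demand("energy", target=1.0, current=0.4),                  # physiological
    Demand("affiliation", target=1.0, current=0.9),             # social
    Demand("competence", target=1.0, current=0.5, weight=0.8),  # cognitive
]
motive = select_motive(demands)
```

With these made-up numbers the energy demand wins; as demands are satisfied or frustrated, the active motive shifts, which is one way a fixed set of demands can produce an open-ended variety of goals.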
9:59:59.000,9:59:59.000 And so let’s come up with a set of demands that can produce such goals universally. 9:59:59.000,9:59:59.000 I think some of these demands will be physiological, like food, water, energy, physical integrity, rest, and so on. 9:59:59.000,9:59:59.000 Heat and cold within the right range. 9:59:59.000,9:59:59.000 Then we have social demands. 9:59:59.000,9:59:59.000 At least most of us do. 9:59:59.000,9:59:59.000 Sociopaths probably don’t. 9:59:59.000,9:59:59.000 These social demands structure our 9:59:59.000,9:59:59.000 social interaction. 9:59:59.000,9:59:59.000 For instance, there is a demand for affiliation: 9:59:59.000,9:59:59.000 that we get signals from others that we are OK parts of society, of our environment. 9:59:59.000,9:59:59.000 We also have internalized social demands, 9:59:59.000,9:59:59.000 which we usually call honor or something like that: 9:59:59.000,9:59:59.000 conformance to internalized norms. 9:59:59.000,9:59:59.000 It means 9:59:59.000,9:59:59.000 that we conform to social norms even when nobody is looking. 9:59:59.000,9:59:59.000 And then we have cognitive demands. 9:59:59.000,9:59:59.000 One of these cognitive demands, for instance, is competence acquisition. 9:59:59.000,9:59:59.000 We want to learn. 9:59:59.000,9:59:59.000 We want to get new skills. 9:59:59.000,9:59:59.000 We want to become more powerful in many, many dimensions and ways. 9:59:59.000,9:59:59.000 It’s good to learn a musical instrument, because you get more competent. 9:59:59.000,9:59:59.000 It creates a reward signal, a pleasure signal, if you do that. 9:59:59.000,9:59:59.000 Also, we want to reduce uncertainty. 9:59:59.000,9:59:59.000 Mathematicians are those people who have learned that they can reduce uncertainty in mathematics. 9:59:59.000,9:59:59.000 This creates pleasure for them, and then they find new uncertainty in mathematics. 9:59:59.000,9:59:59.000 And this creates more pleasure.
9:59:59.000,9:59:59.000 So for mathematicians, mathematics is an unending source of pleasure. 9:59:59.000,9:59:59.000 Now unfortunately, if you are in Germany right now studying mathematics 9:59:59.000,9:59:59.000 and you find out that you are not very good at doing mathematics, what do you do? 9:59:59.000,9:59:59.000 You become a teacher. 9:59:59.000,9:59:59.000 And this is a very unfortunate situation for everybody involved. 9:59:59.000,9:59:59.000 It means that you have people who associate mathematics with 9:59:59.000,9:59:59.000 uncertainty 9:59:59.000,9:59:59.000 that has to be curbed and avoided. 9:59:59.000,9:59:59.000 And these people are put in front of kids and infuse them with this dread of uncertainty in mathematics. 9:59:59.000,9:59:59.000 And most people in our culture dread mathematics, because for them it’s just an anticipation of uncertainty. 9:59:59.000,9:59:59.000 Which is a very bad thing, so people avoid it. 9:59:59.000,9:59:59.000 OK. 9:59:59.000,9:59:59.000 And then you have aesthetic demands. 9:59:59.000,9:59:59.000 There are stimulus-oriented aesthetics. 9:59:59.000,9:59:59.000 Nature has had to pull some very heavy strings and levers to make us interested in strange things, 9:59:59.000,9:59:59.000 such as certain human body schemas 9:59:59.000,9:59:59.000 and certain types of landscapes, and audio schemas, and so on. 9:59:59.000,9:59:59.000 So there are some stimuli that are inherently pleasurable to us—pleasant to us. 9:59:59.000,9:59:59.000 And of course this varies with every individual, because the wiring is very different, and the adaptation in our biography is very different. 9:59:59.000,9:59:59.000 And then there are abstract aesthetics. 9:59:59.000,9:59:59.000 And I think abstract aesthetics relates to finding better representations. 9:59:59.000,9:59:59.000 It relates to finding structure. 9:59:59.000,9:59:59.000 OK. And then we want to look at things like emotional modulation and affect.
9:59:59.000,9:59:59.000 And this was one of the first things that actually got me into AI. 9:59:59.000,9:59:59.000 That was the question: 9:59:59.000,9:59:59.000 “How is it possible that a system can feel something?” 9:59:59.000,9:59:59.000 Because having a variable in me labeled fear or pain 9:59:59.000,9:59:59.000 does not equate to a feeling. 9:59:59.000,9:59:59.000 It’s very 9:59:59.000,9:59:59.000 different from that. 9:59:59.000,9:59:59.000 And the answer that I’ve found so far is 9:59:59.000,9:59:59.000 that feeling, or affect, is a configuration of the system. 9:59:59.000,9:59:59.000 It’s not a parameter in the system; 9:59:59.000,9:59:59.000 rather, we have several dimensions, like the state of arousal that we’re currently in, the level of stubbornness that we have (the selection threshold), 9:59:59.000,9:59:59.000 the direction of attention, outwards or inwards, 9:59:59.000,9:59:59.000 the resolution level with which we look at our representations, and so on. 9:59:59.000,9:59:59.000 And together these create, in every given situation, a certain way in which our cognition is modulated. 9:59:59.000,9:59:59.000 We are living in a very 9:59:59.000,9:59:59.000 dynamic environment that differs from time to time. 9:59:59.000,9:59:59.000 When we go outside, we have very different demands on our cognition. 9:59:59.000,9:59:59.000 Maybe we need to react to traffic and so on. 9:59:59.000,9:59:59.000 Maybe we need to interact with other people. 9:59:59.000,9:59:59.000 Maybe we are in stressful situations. 9:59:59.000,9:59:59.000 Maybe we are in relaxed situations. 9:59:59.000,9:59:59.000 So we need to modulate our cognition accordingly. 9:59:59.000,9:59:59.000 And this modulation means that we perceive the world differently. 9:59:59.000,9:59:59.000 Our cognition works differently. 9:59:59.000,9:59:59.000 And we conceptualize ourselves, and experience ourselves, differently.
9:59:59.000,9:59:59.000 And I think this is what it means to feel something: 9:59:59.000,9:59:59.000 this difference in the configuration. 9:59:59.000,9:59:59.000 So: affect can be seen as a configuration of a cognitive system. 9:59:59.000,9:59:59.000 And the modulators of cognition are things like arousal, and the selection threshold, and 9:59:59.000,9:59:59.000 the rate of background checks, and the resolution level, and so on. 9:59:59.000,9:59:59.000 Our current estimates of competence and certainty in the given situation, 9:59:59.000,9:59:59.000 and the pleasure and distress signals that we get from the frustration of our demands 9:59:59.000,9:59:59.000 or the satisfaction of our demands, which are reinforcements for learning and for structuring our behavior. 9:59:59.000,9:59:59.000 So the affective state, the emotional state that we are in, is emergent over those modulators. 9:59:59.000,9:59:59.000 And higher-level emotions, things like jealousy or pride and so on, 9:59:59.000,9:59:59.000 we get them by directing those affects upon motivational content. 9:59:59.000,9:59:59.000 And this gives us a very simple architecture. 9:59:59.000,9:59:59.000 It’s a very rough sketch of an architecture. 9:59:59.000,9:59:59.000 And, 9:59:59.000,9:59:59.000 of course, 9:59:59.000,9:59:59.000 this doesn’t specify all the details. 9:59:59.000,9:59:59.000 I have specified some more of the details in a book that I want to shamelessly plug here: 9:59:59.000,9:59:59.000 it’s called “Principles of Synthetic Intelligence”. 9:59:59.000,9:59:59.000 You can get it from Amazon, or maybe from your library. 9:59:59.000,9:59:59.000 And it basically describes this architecture and some of the demands, 9:59:59.000,9:59:59.000 as a very general framework of artificial intelligence to work in. 9:59:59.000,9:59:59.000 So it doesn’t give you all the functional mechanisms, 9:59:59.000,9:59:59.000 but some things that I think are necessary, based on my current understanding.
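Affect as a configuration rather than a single variable can be pictured like this (the dimension names follow the talk; the numbers and the two example states are arbitrary illustrations):

```python
from dataclasses import dataclass

@dataclass
class Modulators:
    """Affect as a configuration over several dimensions,
    not a single 'emotion' parameter."""
    arousal: float              # general activation
    selection_threshold: float  # 'stubbornness' of the current motive
    attention: float            # outward (1.0) vs inward (0.0)
    resolution: float           # detail level of representations

relaxed  = Modulators(arousal=0.2, selection_threshold=0.3,
                      attention=0.5, resolution=0.9)
stressed = Modulators(arousal=0.9, selection_threshold=0.8,
                      attention=1.0, resolution=0.3)
# No single field is "the feeling"; the whole configuration differs,
# which is what is experienced as feeling different.
```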
9:59:59.000,9:59:59.000 We’re currently at the second 9:59:59.000,9:59:59.000 iteration of the implementation. 9:59:59.000,9:59:59.000 The first one was in Java, in early 2003, with lots of XMI files 9:59:59.000,9:59:59.000 and XML files and design patterns and Eclipse plug-ins. 9:59:59.000,9:59:59.000 And the new one, of course, runs in the browser, and is written in Python, 9:59:59.000,9:59:59.000 and is much more lightweight and much more of a joy to work with. 9:59:59.000,9:59:59.000 But we’re not done yet. 9:59:59.000,9:59:59.000 OK. 9:59:59.000,9:59:59.000 So this gets back to that question: is it going to be one big idea, or is it going to be incremental progress? 9:59:59.000,9:59:59.000 And I think it’s the latter. 9:59:59.000,9:59:59.000 If we look at this extremely simplified list of problems to solve: 9:59:59.000,9:59:59.000 whole testable architectures, 9:59:59.000,9:59:59.000 universal representations, 9:59:59.000,9:59:59.000 universal problem solving, 9:59:59.000,9:59:59.000 motivation, emotion, and affect, and so on. 9:59:59.000,9:59:59.000 I can see hundreds and hundreds of Ph.D. theses. 9:59:59.000,9:59:59.000 And I’m sure that I only see a tiny part of the problem. 9:59:59.000,9:59:59.000 So I think it’s entirely doable, 9:59:59.000,9:59:59.000 but it’s going to take a pretty long time. 9:59:59.000,9:59:59.000 And it’s going to be very exciting all the way, 9:59:59.000,9:59:59.000 because we are going to learn that we are full of shit, 9:59:59.000,9:59:59.000 as we always do when we bring an algorithm to a new problem 9:59:59.000,9:59:59.000 and realize that we can’t test it, 9:59:59.000,9:59:59.000 and that our initial idea was wrong, 9:59:59.000,9:59:59.000 and that we can improve on it. 9:59:59.000,9:59:59.000 So what should you do if you want to get into AI 9:59:59.000,9:59:59.000 and you’re not there yet? 9:59:59.000,9:59:59.000 I think you should get acquainted, of course, with the basic methodology.
You want to pick up programming languages and learn them. Basically, do it for fun: it's really fun to wrap your mind around programming languages, and it changes the way you think. And you want to learn software development, that is, building an actual, running system; test-driven development; all those things. Then you want to look at the things that we do in AI: machine learning, probabilistic approaches, Kalman filtering, POMDPs, and so on. You want to look at modes of representation: semantic networks, description logics, factor graphs, and so on. Graph theory, hypergraphs. And you want to look at the domain of cognitive architectures, that is, building computational models to simulate psychological phenomena, and to reproduce them and test them.

I don't think that you should stop there. You need to take in all the things that we haven't taken in yet. We need to learn more about linguistics. We need to learn more about neuroscience in our field. We need to do philosophy of mind. I think what you need to do is study cognitive science.

So: what should you be working on? Some of the most pressing questions to me are, for instance, representation. How can we get abstract and perceptual representations right, and have them interact with each other on a common ground?
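One of the representation modes just listed, the semantic network, is easy to get a feel for in code. Here is a toy version with typed relations and a trivial form of inheritance via "is-a" links; the concepts and relation names are made up for illustration and do not come from any particular system.

```python
# Toy semantic network: nodes are concepts, typed edges are relations.
# Illustrative only; real systems add weights, spreading activation, etc.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.edges = defaultdict(list)  # (node, relation) -> [targets]

    def add(self, subj, rel, obj):
        self.edges[(subj, rel)].append(obj)

    def query(self, subj, rel):
        return self.edges.get((subj, rel), [])

    def isa_chain(self, node):
        """Follow 'is-a' links upward: a trivial form of inheritance."""
        chain = []
        while True:
            parents = self.query(node, "is-a")
            if not parents:
                return chain
            node = parents[0]
            chain.append(node)

net = SemanticNet()
net.add("pekinese", "is-a", "dog")
net.add("dog", "is-a", "animal")
net.add("dog", "has", "fur")
print(net.isa_chain("pekinese"))  # -> ['dog', 'animal']
```

The open research questions start exactly where this toy stops: making such networks fuzzy, distributed, and able to hold several interpretations at once.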
How can we work with ambiguity and superposition of representations, with many possible interpretations valid at the same time? Inheritance and polymorphism. How can we distribute representations in the mind and store them efficiently? How can we use representations in such a way that even parts of them are already valid, and use constraints to describe partial representations? For instance, imagine a house. You already imagine the backside of the house, and the number of windows in that house; you already see a complete picture in your head. And at any time, if I say "OK, it's a house with nine stories", this representation is going to change based on these constraints. How can we implement this? And of course we want to implement time. And we want to represent uncertain space and certain space, open and closed environments. And we want to have temporal loops and actual loops and physical loops, uncertain loops, and all those things.

Next thing: perception. Perception is crucial. Part of it is bottom-up, that is, driven by cues from stimuli in the environment; part of it is top-down, driven by what we expect to see. Actually most of it, about ten times as much, is driven by what we expect to see.
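The house example can be sketched directly: a partial representation where unspecified slots carry revisable defaults, and an explicit constraint reshapes the imagined instance. All slot names and default values here are invented for illustration.

```python
# Sketch of a partial representation constrained incrementally, as in
# the nine-story-house example. Slot names and defaults are invented.

class PartialConcept:
    def __init__(self, name, defaults):
        self.name = name
        self.slots = dict(defaults)  # default assumptions, revisable
        self.fixed = set()           # slots pinned by explicit constraints

    def constrain(self, slot, value):
        """An explicit constraint overrides the imagined default."""
        self.slots[slot] = value
        self.fixed.add(slot)

    def imagine(self):
        """The currently imagined complete picture (defaults + constraints)."""
        return dict(self.slots)

house = PartialConcept("house", {"stories": 2, "windows_per_story": 4})
before = house.imagine()["stories"]
house.constrain("stories", 9)  # "OK. It's a house with nine stories."
after = house.imagine()["stories"]
print(before, "->", after)     # 2 -> 9
```

The point is that the representation is complete-looking at every moment even though only parts of it are pinned down, and each new constraint revises the rest consistently.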
So we actually, actively, check for stimuli in the environment. And this bottom-up/top-down process in perception is interleaved. And it's adaptive: we create new concepts and integrate them, and we can revise those concepts over time. And we can adapt them to a given environment without completely revising those representations, without making them unstable. And it works both on sensory input and on memory; I think that memory access is mostly a perceptual process. It has anytime characteristics, so it works with partial solutions and is already useful with them.

Categorization. We want to have categories based on saliency, that is, on similarity and dissimilarity and so on, which you can perceive. Based on goals, on motivational relevance. And on social criteria: somebody suggests categories to me, and I find out what they mean by those categories. What's the difference between cats and dogs? I never came up with the idea on my own to make two baskets and put the Pekinese and the shepherds in one and all the cats in the other. But if you suggest it to me, I come up with a classifier.

Then the next thing: universal problem solving and taskability. We don't want to have specific solutions; we want to have general solutions. We want the system to be able to play every game, to find out how to play every game, for instance.
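The socially suggested cats-vs-dogs split can be illustrated with the simplest possible classifier: someone hands you the two baskets of examples, and you fit something like a nearest-centroid model to them. The feature names and numbers below are entirely made up for illustration.

```python
# Nearest-centroid sketch of "you suggest the categories, I build a
# classifier". Features and values are invented for illustration.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def fit(labeled):
    """labeled: {category: [feature vectors]} -> {category: centroid}"""
    return {cat: centroid(pts) for cat, pts in labeled.items()}

def classify(model, x):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda cat: dist2(model[cat], x))

# invented features: (snout length, purring tendency)
model = fit({
    "dog": [(0.8, 0.1), (0.9, 0.0), (0.6, 0.2)],  # shepherds ... Pekinese
    "cat": [(0.3, 0.9), (0.2, 0.8), (0.4, 1.0)],
})
print(classify(model, (0.7, 0.1)))  # -> dog
```

The suggestion of the two baskets does the hard part; once the categories are given, fitting a discriminating model is the easy step.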
Language: the big domain of organizing mental representations, which are probably fuzzy, distributed hypergraphs, into discrete strings of symbols.

Sociality: interpreting others, which is what we call theory of mind. Social drives, which make us conform to social situations and engage in them. Personhood and self-concept: how does that work? Personality properties: how can we understand them, implement them, and test for them?

Then the big issue of integration: how can we get analytical and associative operations to work together? Attention: how can we direct attention and mental resources between different problems? Developmental trajectory: how can we start as kids and grow our system to become more and more adult-like, and maybe even surpass that? Persistence: how can we make the system stay active, instead of rebooting it every other day because it becomes unstable?

And then, benchmark problems. As we know, most of AI has benchmarks like how to drive a car, or how to control a robot, or how to play soccer. And you end up with car-driving toasters, and soccer-playing toasters, and chess-playing toasters. But actually, we want to have a system that is forced to have a mind. That needs to be our benchmark.
So we need to find tasks that enforce all this universal problem solving, and representation, and perception; that support incremental development; and that inspire a research community. And, last but not least, it needs to attract funding. So it needs to be something that people can understand and engage in, and that seems meaningful to people.

So this is a bunch of the issues that need to be urgently addressed in the next 15 years or so. And that means work for my immediate scientific career, and for yours. You can get a little bit more information at the home of the project, which is micropsi.com. You can also send me emails if you're interested. And I want to thank a lot of people who have supported me. And you, for your attention, and for giving me the chance to talk about AI.

[applause]