My name is Leslie Lamport and I am a computer scientist, which is something that didn't really exist when I started being one, and it took me a while to figure out that I was one. My relationship with computers began as a programmer. It never quite occurred to me that I was doing anything scientific until after I had published enough papers that it finally occurred to me.

My education was as a mathematician, so it was just natural for me to think about computers as a mathematician. When you write an algorithm, you need to have a proof that it's correct. An algorithm without a proof is a conjecture; it's not a theorem. And if you're proving things, well, that means mathematics. Computer scientists tend to think in terms of programming languages. One of the epiphanies of my career was the realization that, as a computer scientist, I was not writing programs; I was designing algorithms. I came to realize that if I'm not writing a program, I shouldn't use a programming language.

People confuse programming with coding. Coding is to programming as typing is to writing. Writing is something that involves mental effort: you're thinking about what you're going to say. The words have some importance, but in some sense even they are secondary to the ideas. In the same way, programs are built on ideas. They have to do something, and what they're supposed to do is like what writing is supposed to convey. If people are trying to learn programming by being taught to code, well, they're being taught writing by being taught how to type. And that doesn't make much sense. The best way I have for teaching about programming, as distinct from coding, is to think about what the program is supposed to do mathematically. There's a very big practical problem with this: mathematical education in this country is pretty terrible. Most people wind up being afraid of mathematics. That's true even of senior programmers.
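As one way to picture "what the program is supposed to do mathematically" as something separate from any code: a sort, for instance, is specified by two mathematical properties, the output is ordered and it is a permutation of the input. The example below is mine, not Lamport's, and the function name `meets_sort_spec` is invented for illustration.

```python
# A mathematical specification of sorting, separate from any implementation:
# the output must be (1) ordered and (2) a permutation of the input.
from collections import Counter

def meets_sort_spec(inp, out):
    ordered = all(out[k] <= out[k + 1] for k in range(len(out) - 1))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

xs = [3, 1, 2, 1]
assert meets_sort_spec(xs, sorted(xs))     # a correct implementation satisfies the spec
assert not meets_sort_spec(xs, [1, 1, 2])  # dropping an element violates it
```

Any sorting program, however it is coded, either satisfies this specification or it doesn't; the spec is the idea, the code is the typing.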
I've developed a language called TLA+ for writing down the ideas that go into the program before you do any coding. It's a pretty hard thing for engineers to get into, but when they do, it develops their ability to think mathematically.

A distributed system is one in which your computer can be rendered useless by the failure of a computer that you didn't even know existed. Non-distributed computing is when different processes communicate by using the same memory; distributed computing means that they're communicating with one another by sending messages. Now, my interest in distributed systems came about by serendipity. I received a preprint of a paper by Robert Thomas and Paul Johnson, who had an algorithm for implementing distributed databases. These are databases where you could have multiple copies of the data sitting at different computers, so that programs on each computer could have rapid access to the data. But the copies had to be synchronized so that processes on all the computers got consistent views of what the data was.

I happened to have become quite familiar with special relativity. One of the things that special relativity teaches you is that two different observers have different notions of what "at the same time" means. But there's one notion that is invariant: a certain notion of one event happening before another event, meaning that it's possible for information to be transmitted from one event to the other, given that information cannot travel faster than the speed of light. I realized that that notion of causality was violated by the algorithm of Thomas and Johnson; it's completely analogous to the relation in special relativity. So what I did is I wrote a paper that explained this notion of causality. One could solve any distributed-system problem by building what I called a state machine. Think of it as an abstract computer that does one thing at a time.
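The happened-before relation described above can be made concrete with logical clocks, the mechanism from Lamport's "Time, Clocks, and the Ordering of Events" paper: each process keeps a counter, stamps its messages with it, and advances past any timestamp it receives. This is a minimal sketch, with invented names like `Process`, `send`, and `receive`, not anything quoted from the interview.

```python
# Minimal sketch of Lamport logical clocks: each process keeps a counter,
# increments it on every local event, stamps outgoing messages with it,
# and on receipt advances its clock past the message's timestamp.
# If event a happened-before event b, then clock(a) < clock(b).

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        # sending is itself an event: bump the clock, attach it to the message
        self.clock += 1
        return self.clock  # the message's timestamp

    def receive(self, msg_timestamp):
        # advance past both our own clock and the sender's timestamp
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process("p"), Process("q")
a = p.local_event()   # event a on p: p's clock becomes 1
t = p.send()          # p sends a message: clock 2, carried on the message
b = q.receive(t)      # q receives it: clock becomes max(0, 2) + 1 = 3
assert a < b          # the causal order shows up in the timestamps
```

The guarantee runs one way only: causally related events get ordered timestamps, but two events with ordered timestamps need not be causally related.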
You make sure that all the computers in the distributed system cooperate to implement a single state machine, and that idea has become fundamental in the way people think about building distributed systems. I had never even thought about a distributed system before I wrote that paper.

As I progressed in my career, I came to appreciate the idea of working in industry. That's where most of the interesting problems I found came from: from engineers having a problem to solve. It reminds me of something Auguste Renoir once said when someone asked him why he painted outdoors rather than in his studio. What he said is, "If I were painting in the studio and I wanted to paint a leaf, I would be able to think of only a half dozen or so different kinds of leaves that I could paint. But when I was painting outdoors, there were just millions of different kinds of leaves there that I could paint from." I found my research worked the same way: if I just sat down and contemplated my navel and thought about problems, there's only a small number of problems I could think of. But there were just scores of them sitting out in industry, waiting to be solved.

My favorite of my algorithms is the bakery algorithm. It solves the mutual exclusion problem, that is, keeping two processes from using the printer at the same time. Processes choose a number, based on the numbers that have been chosen by other processes, and use an algorithm so that the one with the lowest number is allowed to use the printer. But what is amazing about it is that it does not make an assumption that almost every other algorithm makes, the assumption being that if I say I'm changing my number from 47 to 100 and you read that number, you'll either get 47 or 100. The bakery algorithm works even if, instead of getting 47 or 100, you maybe got 4700 or maybe you got 9999. The algorithm still works.
I didn't intend it to; I mean, I didn't intend that. I just discovered, when I wrote the proof, that I never needed to make the assumption. That is just so beautiful! And, you know, I'm really proud that I stumbled on it.
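The bakery algorithm described above can be sketched in a few lines: each thread takes a ticket one greater than the largest it sees, then waits for every thread with a smaller ticket (ties broken by thread id). This is a minimal illustration in Python, with invented names like `lock`, `unlock`, and `worker`; real shared-memory hardware would need memory-ordering care that Python's interpreter hides.

```python
import threading

# Minimal sketch of Lamport's bakery algorithm for N threads.
# Each thread takes a "ticket" one greater than the maximum it sees,
# then waits until every thread with a smaller ticket (ties broken by
# thread id) has left the critical section. No atomic read-modify-write
# instruction is required.

N = 2
choosing = [False] * N   # choosing[i]: thread i is picking its number
number = [0] * N         # 0 means "not interested"

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)          # take a ticket
    choosing[i] = False
    for j in range(N):
        while choosing[j]:               # wait while j is mid-choice
            pass
        # wait while j holds a smaller ticket (ties go to the lower id)
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0

counter = 0

def worker(i):
    global counter
    for _ in range(1000):
        lock(i)
        counter += 1                     # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

With mutual exclusion holding, both threads' 1000 increments survive and the final count is 2000. The remarkable property Lamport describes is that the reads inside `max(number)` and the waiting loops don't need to be atomic: a read that overlaps a write may return garbage, and the algorithm is still correct.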