Lecture 1A | MIT 6.001 Structure and Interpretation of Computer Programs, 1986

Overview and Introduction to Lisp

Despite the copyright notice on the screen, this course is now offered under a Creative Commons license: BY-NC-SA. Details at http://ocw.mit.edu/terms

  1. [MUSIC PLAYING]
  2. PROFESSOR: I'd like to
    welcome you to this
  3. course on computer science.
  4. Actually, that's a terrible
    way to start.
  5. Computer science is a terrible
    name for this business.
  6. First of all, it's
    not a science.
  7. It might be engineering or it
    might be art, but we'll
  8. actually see that computer
    so-called science actually has
  9. a lot in common with magic,
    and we'll see
  10. that in this course.
  11. So it's not a science.
  12. It's also not really very
    much about computers.
  13. And it's not about computers in
    the same sense that physics
  14. is not really about particle
    accelerators, and biology is
  15. not really about microscopes
    and petri dishes.
  16. And it's not about computers
    in the same sense that
  17. geometry is not really about
    using surveying instruments.
  18. In fact, there's a lot of
    commonality between computer
  19. science and geometry.
  20. Geometry, first of all,
    is another subject
  21. with a lousy name.
  22. The name comes from Gaia,
    meaning the Earth, and metron,
  23. meaning to measure.
  24. Geometry originally
    meant measuring
  25. the Earth or surveying.
  26. And the reason for that was
    that, thousands of years ago,
  27. the Egyptian priesthood
    developed the rudiments of
  28. geometry in order to figure
    out how to restore the
  29. boundaries of fields that were
    destroyed in the annual
  30. flooding of the Nile.
  31. And to the Egyptians who did
    that, geometry really was the
  32. use of surveying instruments.
  33. Now, the reason that we think
    computer science is about
  34. computers is pretty much the
    same reason that the Egyptians
  35. thought geometry was about
    surveying instruments.
  36. And that is, when some field
    is just getting started and
  37. you don't really understand it
    very well, it's very easy to
  38. confuse the essence of what
    you're doing with the tools
  39. that you use.
  40. And indeed, on some absolute
    scale of things, we probably
  41. know less about the essence of
    computer science than the
  42. ancient Egyptians really
    knew about geometry.
  43. Well, what do I mean by the
    essence of computer science?
  44. What do I mean by the
    essence of geometry?
  45. See, it's certainly true that
    these Egyptians went off and
  46. used surveying instruments, but
    when we look back on them
  47. after a couple of thousand
    years, we say, gee, what they
  48. were doing, the important stuff
    they were doing, was to
  49. begin to formalize notions about
    space and time, to start
  50. a way of talking about
    mathematical truths formally.
  51. That led to the axiomatic
    method.
  52. That led to sort of all of
    modern mathematics, figuring
  53. out a way to talk precisely
    about so-called declarative
  54. knowledge, what is true.
  55. Well, similarly, I think in the
    future people will look
  56. back and say, yes, those
    primitives in the 20th century
  57. were fiddling around with
    these gadgets called
  58. computers, but really what they
    were doing is starting to
  59. learn how to formalize
    intuitions about process, how
  60. to do things, starting to
    develop a way to talk
  61. precisely about how-to
    knowledge, as opposed to
  62. geometry that talks about
    what is true.
  63. Let me give you an
    example of that.
  64. Let's take a look.
  65. Here is a piece of mathematics
    that says what
  66. a square root is.
  67. The square root of X is the
    number Y, such that Y squared
  68. is equal to X and Y
    is greater than 0.
  69. Now, that's a fine piece of
    mathematics, but just telling
  70. you what a square root is
    doesn't really say anything
  71. about how you might go
    out and find one.
  72. So let's contrast that with a
    piece of imperative knowledge,
  73. how you might go out and
    find a square root.
  74. This, in fact, also comes
    from Egypt, not
  75. ancient, ancient Egypt.
  76. This is an algorithm due to
    Heron of Alexandria, called
  77. how to find a square root
    by successive averaging.
  78. And what it says is that, in
    order to find a square root,
  79. you make a guess, you
    improve that guess--
  80. and the way you improve the
    guess is to average the guess
  81. and X over the guess, and we'll
    talk a little bit later
  82. about why that's a reasonable
    thing--
  83. and you keep improving the guess
    until it's good enough.
  84. That's a method.
  85. That's how to do something
    as opposed to declarative
  86. knowledge that says what
    you're looking for.
  87. That's a process.
  88. Well, what's a process
    in general?
  89. It's kind of hard to say.
  90. You can think of it as like a
    magical spirit that sort of
  91. lives in the computer
    and does something.
  92. And the thing that directs a
    process is a pattern of rules
  93. called a procedure.
  94. So procedures are the spells,
    if you like, that control
  95. these magical spirits that
    are the processes.
  96. I guess you know everyone needs
    a magical language, and
  97. sorcerers, real sorcerers, use
    ancient Akkadian or Sumerian
  98. or Babylonian or whatever.
  99. We're going to conjure our
    spirits in a magical language
  100. called Lisp, which is a language
    designed for talking
  101. about, for casting the spells
    that are procedures to direct
  102. the processes.
  103. Now, it's very easy
    to learn Lisp.
  104. In fact, in a few minutes,
    I'm going to teach you,
  105. essentially, all of Lisp.
  106. I'm going to teach you,
    essentially, all of the rules.
  107. And you shouldn't find that
    particularly surprising.
  108. That's sort of like saying it's
    very easy to learn the
  109. rules of chess.
  110. And indeed, in a few minutes,
    you can tell somebody the
  111. rules of chess.
  112. But of course, that's very
    different from saying you
  113. understand the implications of
    those rules and how to use
  114. those rules to become a
    masterful chess player.
  115. Well, Lisp is the same way.
  116. We're going to state the rules
    in a few minutes, and it'll be
  117. very easy to see.
  118. But what's really hard is going
    to be the implications
  119. of those rules, how you exploit
    those rules to be a
  120. master programmer.
  121. And the implications of those
    rules are going to take us
  122. the, well, the whole rest of
    the subject and, of course,
  123. way beyond.
  124. OK, so in computer science,
    we're in the business of
  125. formalizing this sort of how-to
    imperative knowledge,
  126. how to do stuff.
  127. And the real issues of computer
    science are, of
  128. course, not telling people
    how to do square roots.
  129. Because if that was
    all it was, there
  130. wouldn't be any big deal.
  131. The real problems come when we
    try to build very, very large
  132. systems, computer programs that
    are thousands of pages
  133. long, so long that nobody can
    really hold them in their
  134. heads all at once.
  135. And the only reason that that's
    possible is because
  136. there are techniques for
    controlling the complexity of
  137. these large systems. And these
    techniques for
  138. controlling complexity
    are what this
  139. course is really about.
  140. And in some sense, that's
    really what
  141. computer science is about.
  142. Now, that may seem like a very
    strange thing to say.
  143. Because after all, a lot of
    people besides computer
  144. scientists deal with controlling
    complexity.
  145. A large airliner is an extremely
    complex system, and
  146. the aeronautical engineers who
    design that are dealing with
  147. immense complexity.
  148. But there's a difference
    between that kind of
  149. complexity and what we deal
    with in computer science.
  150. And that is that computer
    science, in some
  151. sense, isn't real.
  152. You see, when an engineer is
    designing a physical system,
  153. that's made out of real parts.
  154. The engineers who worry about
    that have to address problems
  155. of tolerance and approximation
    and noise in the system.
  156. So for example, as an electrical
    engineer, I can go
  157. off and easily build a one-stage
    amplifier or a
  158. two-stage amplifier, and I can
    imagine cascading a lot of
  159. them to build a million-stage
    amplifier.
  160. But it's ridiculous to build
    such a thing, because long
  161. before the millionth stage,
    the thermal noise in those
  162. components way at the beginning
    is going to get
  163. amplified and make the whole
    thing meaningless.
  164. Computer science deals with
    idealized components.
  165. We know as much as we want about
    these little program and
  166. data pieces that we're fitting
    together.
  167. We don't have to worry
    about tolerance.
  168. And that means that, in building
    a large program,
  169. there's not all that much
    difference between what I can
  170. build and what I can imagine,
    because the parts are these
  171. abstract entities that I can
    know as much about as I want.
  172. I know about them as precisely
    as I'd like.
  173. So as opposed to other kinds
    of engineering, where the
  174. constraints on what you can
    build are the constraints of
  175. physical systems, the
    constraints of physics and
  176. noise and approximation, the
    constraints imposed in
  177. building large software systems
    are the limitations of
  178. our own minds.
  179. So in that sense, computer
    science is like an abstract
  180. form of engineering.
  181. It's the kind of engineering
    where you ignore the
  182. constraints that are
    imposed by reality.
  183. Well, what are some of
    these techniques?
  184. They're not special to
    computer science.
  185. First technique, which is used
    in all of engineering, is a
  186. kind of abstraction called
    black-box abstraction.
  187. Take something and build
    a box around it.
  188. Let's see, for example, if we
    looked at that square root
  189. method, I might want to take
    that and build a box.
  190. That sort of says, to find the
    square root of X. And that
  191. might be a whole complicated
    set of rules.
  192. And that might end up being a
    kind of thing where I can put
  193. in, say, 36 and say, what's
    the square root of 36?
  194. And out comes 6.
  195. And the important thing is that
    I'd like to design that
  196. so that if George comes along
    and would like to compute,
  197. say, the square root of A plus
    the square root of B, he can
  198. take this thing and use it as
    a module without having to
  199. look inside and build something
    that looks like
  200. this, like an A and a B and a
    square root box and another
  201. square root box and then
    something that adds that would
  202. put out the answer.
  203. And you can see, just from the
    fact that I want to do that,
  204. that from George's point of view,
    the internals of what's
  205. in here should not
    be important.
  206. So for instance, it shouldn't
    matter that, when I wrote
  207. this, I said I want to find the
    square root of X. I could
  208. have said the square root of Y,
    or the square root of A, or
  209. anything at all.
  210. That's the fundamental notion of
    putting something in a box
  211. using black-box abstraction
    to suppress detail.
  212. And the reason for that is you
    want to go off and build
  213. bigger boxes.
  214. Now, there's another reason
    for doing black-box
  215. abstraction other than you want
    to suppress detail for
  216. building bigger boxes.
  217. Sometimes you want to say that
    your way of doing something,
  218. your how-to method, is an
    instance of a more general
  219. thing, and you'd like your
    language to be able to express
  220. that generality.
  221. Let me show you another example
  222. sticking with square roots.
  223. Let's go back and take another
    look at that slide with the
  224. square root algorithm on it.
  225. Remember what that says.
  226. That says, in order to do
    something, I make a guess, and
  227. I improve that guess,
    and I sort of keep
  228. improving that guess.
  229. So there's the general strategy
    of, I'm looking for
  230. something, and the way
    I find it is that I
  231. keep improving it.
  232. Now, that's a particular case
    of another kind of strategy
  233. for finding a fixed point
    of something.
  234. So you have a fixed point
    of a function.
  235. A fixed point of a function
    is a value.
  236. A fixed point of a function F is
    a value Y, such that F of Y
  237. equals Y. And the way I might do
    that is start with a guess.
  238. And then if I want something
    that doesn't change when I
  239. keep applying F, I'll keep
    applying F over and over until
  240. that result doesn't
    change very much.
  241. So there's a general strategy.
  242. And then, for example, to
    compute the square root of X,
  243. I can try and find a fixed point
    of the function which
  244. takes Y to the average of Y
    and X/Y. And the idea there is that if I
  245. really had Y equal to the square
    root of X, then Y and
  246. X/Y would be the same value.
  247. They'd both be the square root
    of X, because X over the
  248. square root of X is the
    square root of X.
  249. And so, if Y were equal
    to the square root of X, then
  250. the average wouldn't change.
  251. So the square root of X
    is a fixed point of
  252. that particular function.
  253. Now, what I'd like to have,
    I'd like to express the
  254. general strategy for finding
    fixed points.
  255. So what I might imagine doing
    is to be able to
  256. use my language to define a box
    that says "fixed point,"
  257. just like I could make a box
    that says "square root." And
  258. I'd like to be able to express
    this in my language.
  259. So I'd like to express not only
    the imperative how-to
  260. knowledge of a particular thing
    like square root, but
  261. I'd like to be able to express
    the imperative knowledge of
  262. how to do a general thing like
    how to find a fixed point.
  263. And in fact, let's go back and
    look at that slide again.
  264. See, not only is this a piece
    of imperative knowledge, how
  265. to find a fixed point, but
    over here on the bottom,
  266. there's another piece of
    imperative knowledge which
  267. says, one way to compute square
    root is to apply this
  268. general fixed point method.
  269. So I'd like to also
    be able to express
  270. that imperative knowledge.
  271. What would that look like?
  272. That would say, this fixed point
    box is such that if I
  273. input to it the function that
    takes Y to the average of Y
  274. and X/Y, then what should come
    out of that fixed point box is
  275. a method for finding
    square roots.
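
[A sketch of this idea in Scheme, the Lisp dialect used in the course. The names fixed-point and close-enough? and the tolerance are illustrative assumptions; the course develops its own version in later lectures.

    ;; Find a fixed point of f by applying f to a starting guess
    ;; over and over, until the result doesn't change very much.
    (define (fixed-point f start)
      (define tolerance 0.00001)     ; assumed meaning of "very much"
      (define (close-enough? u v)
        (< (abs (- u v)) tolerance))
      (define (iter old new)
        (if (close-enough? old new)
            new
            (iter new (f new))))
      (iter start (f start)))

    ;; Square root as the fixed point of the function that
    ;; takes Y to the average of Y and X/Y.
    (define (average a b) (/ (+ a b) 2))
    (define (sqrt x)
      (fixed-point (lambda (y) (average y (/ x y)))
                   1))
]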
  276. So in these boxes we're
    building, we're not only
  277. building boxes that you input
    numbers and output numbers,
  278. we're going to be building
    boxes that, in effect, compute
  279. methods like finding
    square root.
  280. And what they take as their inputs
    are functions, like Y goes to the
  281. average of Y and X/Y. The reason
    we want to do that, the
  282. reason this is a procedure, will
    end up being a procedure,
  283. as we'll see, whose value is
    another procedure, the reason
  284. we want to do that is because
    procedures are going to be our
  285. ways of talking about imperative
    knowledge.
  286. And the way to make that very
    powerful is to be able to talk
  287. about other kinds
    of knowledge.
  288. So here is a procedure that, in
    effect, talks about another
  289. procedure, a general strategy
    that itself talks about
  290. general strategies.
  291. Well, our first topic in this
    course-- there'll be three
  292. major topics-- will be black-box
    abstraction.
  293. Let's look at that in a little
    bit more detail.
  294. What we're going to do is we
    will start out talking about
  295. how Lisp is built up out
    of primitive objects.
  296. What does the language
    supply us with?
  297. And we'll see that there are
    primitive procedures and
  298. primitive data.
  299. Then we're going to see, how do
    you take those primitives
  300. and combine them to make more
    complicated things, means of
  301. combination?
  302. And what we'll see is that
    there are ways of putting
  303. things together, putting
    primitive procedures together
  304. to make more complicated
    procedures.
  305. And we'll see how to put
    primitive data together to
  306. make compound data.
  307. Then we'll say, well, having
    made those compound things,
  308. how do you abstract them?
  309. How do you put those black boxes
    around them so you can
  310. use them as components in
    more complex things?
  311. And we'll see that's done by
    defining procedures and a
  312. technique for dealing with
    compound data called data
  313. abstraction.
  314. And then, what's maybe the most
    important thing, is going
  315. from just the rules to how
    does an expert work?
  316. How do you express common
    patterns of doing things, like
  317. saying, well, there's a general
    method of fixed point
  318. and square root is a particular
    case of that?
  319. And we're going to use--
  320. I've already hinted at it--
    something called higher-order
  321. procedures, namely procedures
    whose inputs and outputs are
  322. themselves procedures.
  323. And then we'll also see
    something very interesting.
  324. We'll see, as we go further and
    further on and become more
  325. abstract, there'll be very--
  326. well, the line between what we
    consider to be data and what
  327. we consider to be procedures
    is going to blur at an
  328. incredible rate.
  329. Well, that's our first
    subject, black-box
  330. abstraction.
  331. Let's look at the
    second topic.
  332. I can introduce it like this.
  333. See, suppose I want to
    express the idea--
  334. remember, we're talking
    about ideas--
  335. suppose I want to express the
    idea that I can take something
  336. and multiply it by the sum
    of two other things.
  337. So for example, I might say, if
    I had 1 and 3 and multiply
  338. that by 2, I get 8.
  339. But I'm talking about the
    general idea of what's called
  340. linear combination, that you
    can add two things and
  341. multiply them by
    something else.
  342. It's very easy when I think
    about it for numbers, but
  343. suppose I also want to use that
    same idea to think about,
  344. I could add two vectors, a1 and
    a2, and then scale them by
  345. some factor x and get
    another vector.
  346. Or I might say, I want to think
    about a1 and a2 as being
  347. polynomials, and I might want
    to add those two polynomials
  348. and then multiply them by 2 to
    get a more complicated one.
  349. Or a1 and a2 might be electrical
    signals, and I
  350. might want to think about
    summing those two electrical
  351. signals and then putting the
    whole thing through an
  352. amplifier, multiplying it by
    some factor of 2 or something.
  353. The idea is I want to
    think about the
  354. general notion of that.
  355. Now, if our language is going
    to be good language for
  356. expressing those kind of general
    ideas, if I really,
  357. really can do that, I'd like to
    be able to say I'm going to
  358. multiply by x the sum of a1 and
    a2, and I'd like that to
  359. express the general idea of all
    different kinds of things
  360. that a1 and a2 could be.
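
[The general idea being described, written as a single Lisp expression. Whatever kinds of objects a1 and a2 are--numbers, vectors, polynomials, signals--one would like the same expression to apply:

    ;; multiply x by the sum of a1 and a2
    (* x (+ a1 a2))
]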
  361. Now, if you think about that,
    there's a problem, because
  362. after all, the actual primitive
    operations that go
  363. on in the machine are obviously
    going to be
  364. different if I'm adding two
    numbers than if I'm adding two
  365. polynomials, or if I'm adding
    the representation of two
  366. electrical signals
    or wave forms.
  367. Somewhere, there has to be the
    knowledge of the kinds of
  368. various things that you
    can add and the
  369. ways of adding them.
  370. Now, to construct such a system,
    the question is, where
  371. do I put that knowledge?
  372. How do I think about
    the different kinds
  373. of choices I have?
  374. And if tomorrow George comes up
    with a new kind of object
  375. that might be added and
    multiplied, how do I add
  376. George's new object to the
    system without screwing up
  377. everything that was
    already there?
  378. Well, that's going to be the
    second big topic, the way of
  379. controlling that kind
    of complexity.
  380. And the way you do that is by
    establishing conventional
  381. interfaces, agreed upon ways of
    plugging things together.
  382. Just like in electrical
    engineering, people have
  383. standard impedances for
    connectors, and then you know
  384. if you build something with
    one of those standard
  385. impedances, you can plug it
    together with something else.
  386. So that's going to be our
    second large topic,
  387. conventional interfaces.
  388. What we're going to see is,
    first, we're going to talk
  389. about the problem of generic
    operations, which is the one I
  390. alluded to, things like "plus"
    that have to work with all
  391. different kinds of data.
  392. So we talk about generic
    operations.
  393. Then we're going to talk about
    really large-scale structures.
  394. How do you put together very
    large programs that model the
  395. kinds of complex systems
    in the real world that
  396. you'd like to model?
  397. And what we're going to see
    is that there are two very
  398. important metaphors for putting
    together such systems.
  399. One is called object-oriented
    programming, where you sort of
  400. think of your system as a kind
    of society full of little
  401. things that interact by sending
  402. information between them.
  403. And then the second one is
    operations on aggregates,
  404. called streams, where you think
    of a large system put
  405. together kind of like a signal
    processing engineer puts
  406. together a large electrical
    system.
  407. That's going to be
    our second topic.
  408. Now, the third thing we're going
    to come to, the third
  409. basic technique for controlling
    complexity, is
  410. making new languages.
  411. Because sometimes, when you're
    sort of overwhelmed by the
  412. complexity of a design, the
    way that you control that
  413. complexity is to pick a
    new design language.
  414. And the purpose of the new
    design language will be to
  415. highlight different aspects
    of the system.
  416. It will suppress some kinds of
    details and emphasize other
  417. kinds of details.
  418. This is going to be the most
    magical part of the course.
  419. We're going to start out by
    actually looking at the
  420. technology for building new
    computer languages.
  421. The first thing we're going to
    do is actually build in Lisp.
  422. We're going to express in Lisp
    the process of interpreting
  423. Lisp itself.
  424. And that's going to be a very
    sort of self-circular thing.
  425. There's a little mystical
    symbol that
  426. has to do with that.
  427. The process of interpreting Lisp
    is sort of a giant wheel
  428. of two processes, apply and
    eval, which sort of constantly
  429. reduce expressions
    to each other.
  430. Then we're going to see all
    sorts of other magical things.
  431. Here's another magical symbol.
  432. This is sort of the Y operator,
    which is, in some
  433. sense, the expression
    of infinity inside
  434. our procedural language.
  435. We'll take a look at that.
  436. In any case, this section
    of the course is called
  437. Metalinguistic Abstraction,
    abstracting by talking about
  438. how you construct
    new languages.
  439. As I said, we're going to start
    out by looking at the
  440. process of interpretation.
  441. We're going to look
    at this apply-eval
  442. loop, and build Lisp.
  443. Then, just to show you that this
    is very general, we're
  444. going to use exactly the same
    technology to build a very
  445. different kind of language, a
    so-called logic programming
  446. language, where you don't really
    talk about procedures
  447. at all that have inputs
    and outputs.
  448. What you do is talk about
    relations between things.
  449. And then finally, we're going
    to talk about how you
  450. implement these things very
    concretely on the very
  451. simplest kind of machines.
  452. We'll see something like this.
  453. This is a picture of a chip,
    which is the Lisp interpreter
  454. that we will be talking about
    then in hardware.
  455. Well, there's an outline of the
    course, three big topics.
  456. Black-box abstraction,
    conventional interfaces,
  457. metalinguistic abstraction.
  458. Now, let's take a break now and
    then we'll get started.
  459. [MUSIC PLAYING]
  460. Let's actually start in
    learning Lisp now.
  461. Actually, we'll start out by
    learning something much more
  462. important, maybe the very most
    important thing in this
  463. course, which is not Lisp, in
    particular, of course, but
  464. rather a general framework for
    thinking about languages that
  465. I already alluded to.
  466. When somebody tells you they're
    going to show you a
  467. language, what you should say
    is, what I'd like you to tell
  468. me is what are the primitive
    elements?
  469. What does the language
    come with?
  470. Then, what are the ways you
    put those together?
  471. What are the means
    of combination?
  472. What are the things that allow
    you to take these primitive
  473. elements and build bigger
    things out of them?
  474. What are the ways of putting
    things together?
  475. And then, what are the
    means of abstraction?
  476. How do we take those complicated
    things and draw
  477. those boxes around them?
  478. How do we name them so that we
    can now use them as if they
  479. were primitive elements
    in making still
  480. more complex things?
  481. And so on, and so
    on, and so on.
  482. So when someone says to you,
    gee, I have a great new
  483. computer language, you don't
    say, how many characters does
  484. it take to invert a matrix?
  485. It's irrelevant.
  486. What you say is, if the language
    did not come with
  487. matrices built in or with
    something else built in, how
  488. could I then build that thing?
  489. What are the means of
    combination which would allow
  490. me to do that?
  491. And then, what are the means of
    abstraction which allow me
  492. then to use those as elements
    in making more complicated
  493. things yet?
  494. Well, we're going to see that
    Lisp has some primitive data
  495. and some primitive procedures.
  496. In fact, let's really start.
  497. And here's a piece of
    primitive data in
  498. Lisp, number 3.
  499. Actually, if I'm being
    very pedantic, that's
  500. not the number 3.
  501. That's some symbol that
    represents Plato's concept of
  502. the number 3.
  503. And here's another.
  504. Here's some more primitive
    data in Lisp, 17.4.
  505. Or actually, some representation
    of 17.4.
  506. And here's another one, 5.
  507. Here's another primitive
    object that's
  508. built in Lisp, addition.
  509. Actually, to use the same kind
    of pedantic-- this is a name
  510. for the primitive method
    of adding things.
  511. Just like this is a name for
    Plato's number 3, this is a
  512. name for Plato's concept
    of how you add things.
  513. So those are some primitive
    elements.
  514. I can put them together.
  515. I can say, gee, what's the
    sum of 3 and 17.4 and 5?
  516. And the way I do that is to
    say, let's apply the sum
  517. operator to these
    three numbers.
  518. And I should get, what?
  519. 8, 17.
  520. 25.4.
  521. So I should be able to ask Lisp
    what the value of this
  522. is, and it will return 25.4.
  523. Let's introduce some names.
  524. This thing that I typed is
    called a combination.
  525. And a combination consists,
    in general,
  526. of applying an operator--
  527. so this is an operator--
  528. to some operands.
  529. These are the operands.
  530. And of course, I can make
    more complex things.
  531. The reason I can get complexity
    out of this is
  532. because the operands themselves,
    in general, can be
  533. combinations.
  534. So for instance, I could say,
    what is the sum of 3 and the
  535. product of 5 and
    6 and 8 and 2?
  536. And I should get-- let's see--
  537. 30, 40, 43.
  538. So Lisp should tell
    me that that's 43.
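
[The two combinations just described, written as they would be typed at the Lisp interpreter:

    (+ 3 17.4 5)          ; the sum of 3, 17.4, and 5           => 25.4
    (+ 3 (* 5 6) 8 2)     ; 3, plus the product of 5 and 6,
                          ; plus 8, plus 2                      => 43
]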
  539. Forming combinations is the
    basic means of combination
  540. that we'll be looking at.
  541. And then, well, you see
    some syntax here.
  542. Lisp uses what's called prefix
    notation, which means that the
  543. operator is written to the
    left of the operands.
  544. It's just a convention.
  545. And notice, it's fully
    parenthesized.
  546. And the parentheses make it
    completely unambiguous.
  547. So by looking at this, I can see
    that there's the operator,
  548. and there are 1, 2,
    3, 4 operands.
  549. And I can see that the second
    operand here is itself some
  550. combination that has one
    operator and two operands.
  551. Parentheses in Lisp are a little
    bit, or are very unlike
  552. parentheses in conventional
    mathematics.
  553. In mathematics, we sort of use
    them to mean grouping, and it
  554. sort of doesn't hurt if
    sometimes you leave out
  555. parentheses if people
    understand
  556. that that's a group.
  557. And in general, it doesn't
    hurt if you put in extra
  558. parentheses, because that
    maybe makes the
  559. grouping more distinct.
  560. Lisp is not like that.
  561. In Lisp, you cannot leave out
    parentheses, and you cannot
  562. put in extra parentheses,
    because putting in parentheses
  563. always means, exactly and
    precisely, this is a
  564. combination which has
    meaning, applying
  565. operators to operands.
  566. And if I left this out, if I
    left those parentheses out, it
  567. would mean something else.
  568. In fact, the way to think about
    this, is really what I'm
  569. doing when I write something
    like this is writing a tree.
  570. So this combination is a tree
    that has a plus and then a 3
  571. and then a something else
    and an 8 and a 2.
  572. And then this something else
    here is itself a little
  573. subtree that has a star
    and a 5 and a 6.
  574. And the way to think of that
    is, really, what's going on
  575. are we're writing these trees,
    and parentheses are just a way
  576. to write this two-dimensional
    structure as a linear
  577. character string.
  578. Because at least when Lisp first
    started and people had
  579. teletypes or punch cards or
    whatever, this was more
  580. convenient.
  581. Maybe if Lisp started today,
    the syntax of Lisp
  582. would look like that.
  583. Well, let's look at
    what that actually
  584. looks like on the computer.
  585. Here I have a Lisp interaction
    set up.
  586. There's an editor.
  587. And on the top, I'm going to
    type some values and ask Lisp
  588. what they are.
  589. So for instance, I can say
    to Lisp, what's the
  590. value of that symbol?
  591. That's 3.
  592. And I ask Lisp to evaluate it.
  593. And there you see Lisp has
    returned on the bottom, and
  594. said, oh yeah, that's 3.
  595. Or I can say, what's the
    sum of 3 and 4 and 8?
  596. What's that combination?
  597. And ask Lisp to evaluate it.
  598. That's 15.
  599. Or I can type in something
    more complicated.
  600. I can say, what's the sum of the
    product of 3 and the sum
  601. of 7 and 19.5?
  602. And you'll notice here that Lisp
    has something built in
  603. that helps me keep track of
    all these parentheses.
  604. Watch as I type the next close
    parenthesis, which is
  605. going to close the combination
    starting with the star.
  606. The opening one will flash.
  607. Here, I'll rub those out
    and do it again.
  608. Type close, and you see
    that closes the plus.
  609. Close again, that
    closes the star.
  610. Now I'm back to the sum,
    and maybe I'm going to
  611. add that all to 4.
  612. That closes the plus.
  613. Now I have a complete
    combination, and I can ask
  614. Lisp for the value of that.
  615. That kind of paren balancing is
    something that's built into
  616. a lot of Lisp systems to help
    you keep track, because it is
  617. kind of hard just by hand doing
    all these parentheses.
  618. There's another kind of
    convention for keeping track
  619. of parentheses.
  620. Let me write another complicated
    combination.
  621. Let's take the sum of the
    product of 3 and 5 and add
  622. that to something.
  623. And now what I'm going to do is
    I'm going to indent so that
  624. the operands are written
    vertically.
  625. Which is the sum of that and
    the product of 47 and--
  626. let's say the product
    of 47 with a
  627. difference of 20 and 6.8.
  628. That means subtract
    6.8 from 20.
  629. And then you see the
    parentheses close.
  630. Close the minus.
  631. Close the star.
  632. And now let's get another
    operator.
  633. You see the Lisp editor here
    is indenting to the right
  634. position automatically to
    help me keep track.
  635. I'll do that again.
  636. I'll close that last
    parentheses again.
  637. You see it balances the plus.
  638. Now I can say, what's
    the value of that?
  639. So those two things, indenting
    to the right level, which is
  640. called pretty printing, and
    flashing parentheses, are two
  641. things that a lot of Lisp
    systems have built in to help
  642. you keep track.
  643. And you should learn
    how to use them.
  644. Well, those are the
    primitives.
  645. There's a means of
    combination.
  646. Now let's go up to the
    means of abstraction.
  647. I'd like to be able to take
    the idea that I do some
  648. combination like this, and
    abstract it and give it a
  649. simple name, so I can use
    that as an element.
  650. And I do that in Lisp with
    "define." So I can say, for
  651. example, define A to be the
    product of 5 and 5.
  652. And now I could say, for
    example, to Lisp, what is the
  653. product of A and A?
  654. And this should be 25, and
    this should be 625.
  655. And then, crucial thing,
    I can now use A--
  656. here I've used it in
    a combination--
  657. but I could use that in other
    more complicated things that I
  658. name in turn.
  659. So I could say, define B to be
    the sum of, we'll say, A and
  660. the product of 5 and A. And
    then close the plus.
  661. Let's take a look at that
    on the computer and
  662. see how that looks.
  663. So I'll just type what
    I wrote on the board.
  664. I could say, define A to be
    the product of 5 and 5.
  665. And I'll tell that to Lisp.
  666. And notice what Lisp responded
    there with
  667. was an A in the bottom.
  668. In general, when you type in
    a definition in Lisp, it
  669. responds with the symbol
    being defined.
  670. Now I could say to Lisp, what
    is the product of A and A?
  671. And it says that's 625.
  672. I can define B to be the sum of
    A and the product of 5 and
  673. A. Close a paren closes
    the star.
  674. Close the plus.
  675. Close the "define." Lisp says,
    OK, B, there on the bottom.
  676. And now I can say to Lisp,
    what's the value of B?
  677. And I can say something more
    complicated, like what's the
  678. sum of A and the quotient
    of B and 5?
  679. That slash is divide, another
    primitive operator.
  680. I've divided B by 5,
    added it to A. Lisp
  681. says, OK, that's 55.
  682. So there's what it looks like.
  683. There's the basic means
    of defining something.
  684. It's the simplest kind of
    naming, but it's not really
  685. very powerful.
  686. See, what I'd really
    like to name--
  687. remember, we're talking about
    general methods--
  688. I'd like to name, oh, the
    general idea that, for
  689. example, I could multiply 5 by
    5, or 6 by 6, or 1,001 by
  690. 1,001, or 1,001.7 by 1,001.7.
  691. I'd like to be able to name
    the general idea of
  692. multiplying something
    by itself.
  693. Well, you know what that is.
  694. That's called squaring.
  695. And the way I can do that in
    Lisp is I can say, define to
  696. square something x, multiply
    x by itself.
  697. And then having done that,
    I could say to Lisp, for
  698. example, what's the
    square of 10?
  699. And Lisp will say 100.
  700. So now let's actually look at
    that a little more closely.
  701. Right, there's the definition
    of square.
  702. To square something, multiply
    it by itself.
  703. You see this x here.
  704. That x is kind of a pronoun,
    which is the something that
  705. I'm going to square.
  706. And what I do with it
    is I multiply x, I
  707. multiply it by itself.
  708. OK.
  709. So there's the notation for
    defining a procedure.
  710. Actually, this is a little bit
    confusing, because this is
  711. sort of how I might
    use square.
  712. And I say square of x or
    square of 10, but it's
  713. not making it very clear that
    I'm actually naming something.
  714. So let me write this definition
    in another way that
  715. makes it a little
    bit more clear
  716. that I'm naming something.
  717. I'll say, "define" square to
    be lambda of x, times x x.
  718. Here, I'm naming something
    square, just like over here,
  719. I'm naming something A. The
    thing that I'm naming square--
  720. here, the thing I named A was
    the value of this combination.
  721. Here, the thing that I'm naming
    square is this thing
  722. that begins with lambda, and
    lambda is Lisp's way of saying
  723. make a procedure.
  724. Let's look at that more
    closely on the slide.
  725. The way I read that definition
    is to say, I define square to
  726. be make a procedure--
  727. that's what the lambda is--
  728. make a procedure with
    an argument named x.
  729. And what it does is return
    the results of
  730. multiplying x by itself.
  731. Now, in general, we're going to
    be using this top form of
  732. defining, just because it's a
    little bit more convenient.
  733. But don't lose sight of the fact
    that it's really this.
  734. In fact, as far as the Lisp
    interpreter's concerned,
  735. there's no difference between
    typing this to it and typing
  736. this to it.
  737. And there's a word for that,
    sort of syntactic sugar.
  738. What syntactic sugar means,
    it's having somewhat more
  739. convenient surface forms
    for typing something.
  740. So this is just really syntactic
    sugar for this
  741. underlying Greek thing
    with the lambda.
  742. And the reason you should
    remember that is don't forget
  743. that, when I write something
    like this, I'm
  744. really naming something.
  745. I'm naming something square,
    and the something that I'm
  746. naming square is a procedure
    that's getting constructed.
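
[The two equivalent definitions on the board; the first is the syntactic sugar, the second the underlying lambda form:

    ;; "To square something, multiply it by itself."
    (define (square x) (* x x))

    ;; The same definition, making the naming explicit: name SQUARE
    ;; to be a procedure of one argument x.
    (define square (lambda (x) (* x x)))
]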
  747. Well, let's look at that
    on the computer, too.
  748. So I'll come and I'll say,
    define square of
    x to be times x x.
  750. Now I'll tell Lisp that.
  751. It says "square." See, I've
    named something "square." Now,
  752. having done that, I can
    ask Lisp for, what's
  753. the square of 1,001?
  754. Or in general, I could say,
    what's the square of the sum
  755. of 5 and 7?
  756. The square of 12's 144.
  757. Or I can use square itself
    as an element in some
  758. combination.
  759. I can say, what's the sum
    of the square of 3 and
  760. the square of 4?
  761. 9 and 16 is 25.
  762. Or I can use square as an
    element in some much more
  763. complicated thing.
  764. I can say, what's the square
    of, the square of,
  765. the square of 1,001?
  766. And there's the square of the
    square of the square of 1,001.
  767. Or I can say to Lisp, what
    is square itself?
  768. What's the value of that?
  769. And Lisp returns some
    conventional way of telling me
  770. that that's a procedure.
  771. It says, "compound procedure
    square." Remember, the value
  772. of square is this procedure, and
    the thing with the stars
  773. and the brackets are just Lisp's
    conventional way of
  774. describing that.
  775. Let's look at two more
    examples of defining.
  776. Here are two more procedures.
  777. I can define the average of x
    and y to be the sum of x and y
  778. divided by 2.
  779. Or having had average and
  780. square, I can use that to talk
    about the mean square of
  781. something, which is the average
    of the square of x and
  782. the square of y.
  783. So for example, having done
    that, I could say, what's the
  784. mean square of 2 and 3?
  785. And I should get the average
    of 4 and 9, which is 6.5.
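
[The two procedures from the slide, as one would type them:

    (define (average x y)
      (/ (+ x y) 2))

    ;; mean-square uses square as if it were primitive.
    (define (mean-square x y)
      (average (square x) (square y)))

    (mean-square 2 3)   ; the average of 4 and 9  => 6.5
]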
  786. The key thing here is that,
    having defined square, I can
  787. use it as if it were
    primitive.
  788. So if we look here on the
    slide, if I look at mean
  789. square, the person defining mean
    square doesn't have to
  790. know, at this point, whether
    square was something built
  791. into the language or
    whether it was a
  792. procedure that was defined.
  793. And that's a key thing in Lisp,
    that you do not make
  794. arbitrary distinctions between
    things that happen to be
  795. primitive in the language
    and things that
  796. happen to be built in.
  797. A person using that shouldn't
    even have to know.
  798. So the things you construct get
    used with all the power
  799. and flexibility as if they
    were primitives.
  800. In fact, you can drive that
    home by looking on the
  801. computer one more time.
  802. We talked about plus.
  803. And in fact, if I come here on
    the computer screen and say,
  804. what is the value of plus?
  805. Notice what Lisp types out.
  806. On the bottom there, it typed
    out, "compound procedure
  807. plus." Because, in this system,
    it turns out that the
  808. addition operator is itself
    a compound procedure.
  809. And if I didn't just type that
    in, you'd never know that, and
  810. it wouldn't make any
    difference anyway.
  811. We don't care.
  812. It's below the level of
    the abstraction that
  813. we're dealing with.
  814. So the key thing is you cannot
    tell, should not be able to
  815. tell, in general, the difference
    between things that
  816. are built in and things
    that are compound.
  817. Why is that?
  818. Because the things that are
    compound have an abstraction
  819. wrapper wrapped around them.
  820. We've seen almost all the
    elements of Lisp now.
  821. There's only one more we have to
    look at, and that is how to
  822. make a case analysis.
  823. Let me show you what I mean.
  824. We might want to think about the
    mathematical definition of
  825. the absolute value functions.
  826. I might say the absolute value
    of x is the function which has
  827. the property that it's
    negative of x for x less than 0.
  828. It's 0 for x equal to 0.
  829. And it's x for x
    greater than 0.
  830. And Lisp has a way of making
    case analyses.
  831. Let me define for you
    absolute value.
  832. Say define the absolute value
    of x is conditional.
  833. This means case analysis,
    COND.
  834. If x is less than 0, the
    answer is negate x.
  835. What I've written here
    is a clause.
  836. This whole thing is a
    conditional clause,
  837. and it has two parts.
  838. This part here is a predicate
    or a condition.
  839. That's a condition.
  840. And the condition is expressed
    by something called a
  841. predicate, and a predicate in
    Lisp is some sort of thing
  842. that returns either
    true or false.
  843. And you see Lisp has a
    primitive procedure,
  844. less-than, that tests whether
    something is true or false.
  845. And the other part of a clause
    is an action or a thing to do,
  846. in the case where that's true.
  847. And here, what I'm doing
    is negating x.
  848. The negation operator, the
    minus sign in Lisp is
  849. a little bit funny.
  850. If there are two arguments,
  851. it subtracts the
    second one from the first, and
  852. we saw that.
  853. And if there's one argument,
    it negates it.
  854. So this corresponds to that.
  855. And then there's another
    COND clause.
  856. It says, in the case where
    x is equal to 0,
  857. the answer is 0.
  858. And in the case where
    x is greater than 0,
  859. the answer is x.
  860. Close that clause.
  861. Close the COND.
  862. Close the definition.
  863. And there's the definition
    of absolute value.
  864. And you see it's the case
    analysis that looks very much
  865. like the case analysis you
    use in mathematics.
  866. There's a somewhat different
    way of writing a restricted
  867. case analysis.
  868. Often, you have a case analysis
    where you only have
  869. one case, where you test
    something, and then depending
  870. on whether it's true or false,
    you do something.
  871. And here's another definition of
    absolute value which looks
  872. almost the same, which says,
    if x is less than 0, the
  873. result is negate x.
  874. Otherwise, the answer is x.
  875. And we'll be using "if" a lot.
  876. But again, the thing to remember
    is that this form of
  877. absolute value that you're
    looking at here, and then this
  878. one over here that I wrote
    on the board, are
  879. essentially the same.
  880. And "if" and COND are--
  881. well, whichever way
    you like it.
  882. You can think of COND as
    syntactic sugar for "if," or
  883. you can think of "if" as
    syntactic sugar for COND, and
  884. it doesn't make any
    difference.
  885. The person implementing a Lisp
    system will pick one and
  886. implement the other
    in terms of that.
  887. And it doesn't matter
    which one you pick.
  888. Why don't we break now, and
    then take some questions.
  889. How come sometimes when I write
    define, I put an open
  890. paren here and say, define open
    paren something or other,
  891. and sometimes when
    I write this, I
  892. don't put an open paren?
  893. The answer is, this particular
    form of "define," where you
  894. say define some expression, is
    this very special thing for
  895. defining procedures.
  896. But again, what it really means
    is I'm defining this
  897. symbol, square, to be that.
  898. So the way you should think
    about it is what "define" does
  899. is you write "define," and the
    second thing you write is the
  900. symbol here-- no open paren--
  901. the symbol you're defining and
    what you're defining it to be.
  902. That's like here
    and like here.
  903. That's sort of the basic way
    you use "define." And then,
  904. there's this special syntactic
    trick which allows you to
  905. define procedures that
    look like this.
  906. So the difference is, it's
    whether or not you're defining
  907. a procedure.
  908. [MUSIC PLAYING]
  909. Well, believe it or not, you
    actually now know enough Lisp
  910. to write essentially any
    numerical procedure that you'd
  911. write in a language like FORTRAN
    or Basic or whatever,
  912. or, essentially, any
    other language.
  913. And you're probably saying,
    that's not believable, because
  914. you know that these languages
    have things like "for
  915. statements," and "do until
    while" or something.
  916. But we don't really
    need any of that.
  917. In fact, we're not going
    to use any of
  918. that in this course.
  919. Let me show you.
  920. Again, looking back at square
    root, let's go back to this
  921. square root algorithm of
    Heron of Alexandria.
  922. Remember what that said.
  923. It said, to find an
    approximation to the square
  924. root of X, you make a guess,
    you improve that guess by
  925. averaging the guess and
    X over the guess.
  926. You keep improving that until
    the guess is good enough.
  927. I already alluded to the idea.
  928. The idea is that, if the initial
    guess that you took
  929. was actually equal to the square
    root of X, then G here
  930. would be equal to X/G.
  931. So if you hit the square
    root, averaging them
  932. wouldn't change it.
  933. If the G that you picked was
    larger than the square root of
  934. X, then X/G will be smaller than
    the square root of X, so
  935. that when you average
    G and X/G, you get
  936. something in between.
  937. So if you pick a G that's
    too small, your
  938. answer will be too large.
  939. If you pick a G that's too
    large, that is, if your G is larger
  940. than the square root of X, then
    X/G will be smaller than the
  941. square root of X.
  942. So averaging always gives you
    something in between.
  943. And then, it's not quite
    trivial, but it's possible to
  944. show that, in fact, if G misses
    the square root of X by
  945. a little bit, the average of G
    and X/G will actually keep
  946. getting closer to the square
    root of X. So if you keep
  947. doing this enough, you'll
    eventually get as
  948. close as you want.
  949. And then there's another fact,
    that you can always start out
  950. this process by using 1
    as an initial guess.
  951. And it'll always converge to
    the square root of X. So
  952. that's this method of successive
    averaging due to
  953. Heron of Alexandria.
  954. Let's write it in Lisp.
  955. Well, the central idea is, what
    does it mean to try a
  956. guess for the square
    root of X?
  957. Let's write that.
  958. So we'll say, define to try a
    guess for the square root of
  959. X, what do we do?
  960. We'll say, if the guess is good
    enough to be a guess for
  961. the square root of X,
    then, as an answer,
  962. we'll take the guess.
  963. Otherwise, we will try
    the improved guess.
  964. We'll improve that guess for
    the square root of X, and
  965. we'll try that as a guess for
    the square root of X. Close
  966. the "try." Close the "if." Close
    the "define." So that's
  967. how we try a guess.
  968. And then, the next part of the
    process said, in order to
  969. compute square roots, we'll
    say, define to compute the
  970. square root of X, we will try
    1 as a guess for the square
  971. root of X. Well, we have to
    define a couple more things.
  972. We have to say, how is
    a guess good enough?
  973. And how do we improve a guess?
  974. So let's look at that.
  975. The algorithm to improve a guess
    for the square root of
  976. X, we average--
  977. that was the algorithm--
  978. we average the guess with
    the quotient of
  979. dividing X by the guess.
  980. That's how we improve a guess.
  981. And to tell whether a guess is
    good enough, well, we have to
  982. decide something.
  983. This is supposed to be a guess
    for the square root of X, so
  984. one possible thing you can do
    is say, when you take that
  985. guess and square it, do you get
    something very close to X?
  986. So one way to say that is to
    say, I square the guess,
  987. subtract X from that, and see if
    the absolute value of that
  988. whole thing is less than some
    small number, which depends on
  989. my purposes.
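
[A sketch of the procedure as developed on the board, using the square and average procedures defined earlier; the tolerance 0.001 stands in for "some small number, which depends on my purposes":

    (define (try guess x)
      (if (good-enough? guess x)
          guess
          (try (improve guess x) x)))

    (define (sqrt x) (try 1 x))

    (define (improve guess x)
      (average guess (/ x guess)))

    (define (good-enough? guess x)
      (< (abs (- (square guess) x))
         0.001))
]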
  990. So there's a complete procedure
    for how to compute
  991. the square root of X. Let's look
    at the structure of that
  992. a little bit.
  993. I have the whole thing.
  994. I have the notion of how to
    compute a square root.
  995. That's some kind of module.
  996. That's some kind of black box.
  997. It's defined in terms of how to
    try a guess for the square
  998. root of X.
  999. "Try" is defined in terms of,
    well, telling whether
  1000. something is good enough
    and telling
  1001. how to improve something.
  1002. So good enough.
  1003. "Try" is defined in terms of
    "good enough" and "improve."
  1004. And let's see what
    else I fill in.
  1005. Well, I'll go down this tree.
  1006. "Good enough" was defined
    in terms of
  1007. absolute value, and square.
  1008. And improve was defined in
    terms of something called
  1009. averaging and then some other
    primitive operator.
  1010. Square root's defined in terms
    of "try." "Try" is defined in
  1011. terms of "good enough"
    and "improve,"
  1012. but also "try" itself.
  1013. So "try" is also defined in
    terms of how to try itself.
  1014. Well, that may give you some
    problems. Your high school
  1015. geometry teacher probably told
    you that it's naughty to try
  1016. and define things in terms of
    themselves, because it doesn't
  1017. make sense.
  1018. But that's false.
  1019. Sometimes it makes perfect
    sense to define things in
  1020. terms of themselves.
  1021. And this is the case.
  1022. And we can look at that.
  1023. We could write down what this
    means, and say, suppose I
  1024. asked Lisp what the square
    root of 2 is.
  1025. What's the square
    root of 2 mean?
  1026. Well, that means I try
    1 as a guess for the
  1027. square root of 2.
  1028. Now I look.
  1029. I say, gee, is 1 a good
    enough guess for the
  1030. square root of 2?
  1031. And that depends on the test
    that "good enough" does.
  1032. And in this case, "good enough"
    will say, no, 1 is not
  1033. a good enough guess for
    the square root of 2.
  1034. So that will reduce to saying,
    I have to try an improved--
  1035. improve 1 as a guess for the
    square root of 2, and try that
  1036. as a guess for the
    square root of 2.
  1037. Improving 1 as a guess for the
    square root of 2 means I
  1038. average 1 and 2 divided by 1.
  1039. So this is going
    to be average.
  1040. This piece here will be the
    average of 1 and the
  1041. quotient of 2 by 1.
  1042. That's this piece here.
  1043. And this is 1.5.
  1044. So this square root of 2 reduces
    to trying 1 for the
  1045. square root of 2, which reduces
    to trying 1.5 as a
  1046. guess for the square
    root of 2.
  1047. So that makes sense.
  1048. Let's look at the rest
    of the process.
  1049. If I try 1.5, that reduces.
  1050. 1.5 turns out to be not good
    enough as a guess for the
  1051. square root of 2.
  1052. So that reduces to trying the
    average of 1.5 and 2 divided
  1053. by 1.5 as a guess for the
    square root of 2.
  1054. That average turns
    out to be 1.4167.
  1055. So this whole thing reduces to
    trying 1.4167 as a guess for
  1056. the square root of 2.
  1057. And then so on.
  1058. That reduces to another call
    to "good enough," 1.4
  1059. something or other.
  1060. And then it keeps going until
    the process finally stops with
  1061. something that "good enough"
    thinks is good enough, which,
  1062. in this case, is 1.4142
    something or other.
  1063. So the process makes
    perfect sense.
  1064. This, by the way, is called
    a recursive definition.
  1065. And the ability to make
    recursive definitions is a
  1066. source of incredible power.
  1067. And as I've already hinted,
    it's the thing
  1068. that effectively allows you to
    do these infinite computations
  1069. that go on until something is
    true, without having any other
  1070. constructs other than the
    ability to call a procedure.
  1071. Well, let's see, there's
    one more thing.
  1072. Let me show you a variant of
    this definition of square root
  1073. here on the slide.
  1074. Here's sort of the same thing.
  1075. What I've done here is packaged
    the definitions of
  1076. "improve" and "good enough"
    and "try" inside "square
  1077. root." So, in effect, what
    I've done is I've built a
  1078. square root box.
  1079. So I've built a box that's the
    square root procedure that
  1080. someone can use.
  1081. They might put in 36
    and get out 6.
  1082. And then, packaged inside this
    box are the definitions of
  1083. "try" and "good enough"
    and "improve."
  1084. So they're hidden
    inside this box.
  1085. And the reason for doing that
    is that, if someone's using
  1086. this square root, if George is
    using this square root, George
  1087. probably doesn't care very much
    that, when I implemented
  1088. square root, I had things inside
    there called "try" and
  1089. "good enough" and "improve." And
    in fact, Harry might have
  1090. a cube root procedure that has
    "try" and "good enough" and
  1091. "improve." And in order to not
    get the whole system confused,
  1092. it'd be good for Harry to
    package his internal
  1093. procedures inside his
    cube root procedure.
  1094. Well, this is called block
    structure, this particular way
  1095. of packaging internals inside
    of a definition.
  1096. And let's go back and look
    at the slide again.
  1097. The way to read this kind of
    procedure is to say, to define
  1098. "square root," well, inside that
    definition, I'll have the
  1099. definition of an "improve" and
    the definition of "good
  1100. enough" and the definition of
    "try." And then, subject to
  1101. those definitions, the way I
    do square root is to try 1.
  1102. And notice here, I don't have
    to say 1 as a guess for the
  1103. square root of X, because since
    it's all inside the
  1104. square root, it sort of
    has this X known.
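
[A sketch of the block-structured variant on the slide: "good enough," "improve," and "try" are packaged inside square root, so they can refer to x directly and it need not be passed around:

    (define (sqrt x)
      (define (good-enough? guess)
        (< (abs (- (square guess) x)) 0.001))
      (define (improve guess)
        (average guess (/ x guess)))
      (define (try guess)
        (if (good-enough? guess)
            guess
            (try (improve guess))))
      (try 1))
]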
  1105. Let me summarize.
  1106. We started out with the idea
    that what we're going to be
  1107. doing is expressing imperative
    knowledge.
  1108. And in fact, here's a slide
    that summarizes the way we
  1109. looked at Lisp.
  1110. We started out by looking at
    some primitive elements in
  1111. addition and multiplication,
    some predicates for testing
  1112. whether something is less-than
    or something's equal.
  1113. And in fact, we saw really
    sneakily in the system we're
  1114. actually using, these aren't
    actually primitives, but it
  1115. doesn't matter.
  1116. What matters is we're going
    to use them as if they're
  1117. primitives.
  1118. We're not going to
    look inside.
  1119. We also have some primitive
    data and some numbers.
  1120. We saw some means of
    composition, means of
  1121. combination, the basic one being
    composing functions and
  1122. building combinations with
    operators and operands.
  1123. And there were some other
    things, like COND and "if" and
  1124. "define." But the main thing
    about "define," in particular,
  1125. was that it was the means
    of abstraction.
  1126. It was the way that
    we name things.
  1127. You can also see from this slide
    not only where we've
  1128. been, but holes we
    have to fill in.
  1129. At some point, we'll have to
    talk about how you combine
  1130. primitive data to get compound
    data, and how you abstract
  1131. data so you can use large
    globs of data as
  1132. if they were primitive.
  1133. So that's where we're going.
  1134. But before we do that, for the
    next couple of lectures we're
  1135. going to be talking about, first
    of all, how it is that
  1136. you make a link between these
    procedures we write and the
  1137. processes that happen
    in the machine.
  1138. And then, how it is that you
    start using the power of Lisp
  1139. to talk not only about these
    individual little
  1140. computations, but about general
    conventional methods
  1141. of doing things.
  1142. OK, are there any questions?
  1143. AUDIENCE: Yes.
  1144. If we defined A using
    parentheses instead of as we
  1145. did, what would be
    the difference?
  1146. PROFESSOR: If I wrote this, if
    I wrote that, what I would be
  1147. doing is defining a procedure
    named A. In this case, a
  1148. procedure of no arguments,
    which, when I ran it, would
  1149. give me back 5 times 5.
  1150. AUDIENCE: Right.
  1151. I mean, you come up with the
    same thing, except for you
  1152. really got a different--
  1153. PROFESSOR: Right.
  1154. And the difference would
    be, in the old one--
  1155. Let me be a little
    bit clearer here.
  1156. Let's call this A, like here.
  1157. And pretend here, just for
    contrast, I wrote, define D to
  1158. be the product of 5 and 5.
  1159. And the difference between
    those, let's think about
  1160. interactions with the
    Lisp interpreter.
  1161. I could type in A and Lisp
    would return 25.
  1162. I could type in D, if I just
    typed in D, Lisp would return
  1163. compound procedure D, because
    that's what it is.
  1164. It's a procedure.
  1165. I could run D. I could say,
    what's the value of running D?
  1166. Here is a combination
    with no operands.
  1167. I see there are no operands.
  1168. I didn't put any after D. And
    it would say, oh, that's 25.
  1169. Or I could say, just for
    completeness, if I typed in,
  1170. what's the value of running A?
  1171. I get an error.
  1172. The error would be the same
    one as over there.
  1173. The error would say,
    sorry, 25, which is the value
  1174. of A, is not an operator that
    I can apply to something.
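
[The contrast being discussed, as a sketch of the interaction:

    (define a (* 5 5))     ; defines A to be the value 25
    (define (d) (* 5 5))   ; defines D to be a procedure of no arguments

    a        ; => 25
    d        ; => [COMPOUND-PROCEDURE D]
    (d)      ; running D, a combination with no operands  => 25
    (a)      ; error: 25 is not an operator that can be applied
]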