SAM LIVINGSTON-GRAY: Hello. Welcome to the very last session of RailsConf. When I leave this stage, they are gonna burn it down.

AUDIENCE: Yeah!

S.L.: I have a couple of items of business before I launch into my presentation proper. The first of which is turning on my clicker. I work for LivingSocial. We are hiring. If this fact is intriguing to you, please feel free to come and talk to me afterwards. Also, our recruiters brought a ton of little squishy stress balls that are shaped like little brains. As far as I know, this was a coincidence, but I love the tie-in, so I brought the whole bag. I had them leave it for me. So if you would like an extra brain, please come talk to me after the show.

A quick note about accessibility. If you have any trouble seeing my slides, hearing my voice, or following my weird trains of thought, or maybe you just like spoilers, you can get a PDF with both my slides and my script at this URL. It's tinyurl dot com, cog dash shorts dash railsconf.
I also have it up here on a thumb drive, so if the conference wi-fi does what it usually does, please go see Evan Light up in the second row.

I'm gonna leave this up for a couple more minutes. And I also want to give a shoutout to the Opportunity Scholarship program here. To quote the RailsConf site, this program is for people who wouldn't usually take part in our community or who might just want a friendly face during their first time at RailsConf. I'm a huge fan of this program. I think it's a great way to welcome new people and new voices into our community. This is the second year that I've volunteered as a guide, and this is the second year that I've met somebody with a fascinating story to tell. If you're a seasoned conference veteran, I strongly encourage you to apply next year.

OK. Programming is hard. It's not quantum physics. But neither is it falling off a log. And if I had to pick just one word to explain why programming is hard, that word would be "abstract." I asked Google to define "abstract," and here's what it said:

Existing in thought or as an idea, but not having a physical or concrete existence.
I usually prefer defining things in terms of what they are, but in this case I find the negative definition extremely telling. Abstract things are hard for us to think about precisely because they don't have a physical or a concrete existence, and that's what our brains are wired for.

Now, I normally prefer the kind of talk where the speaker just launches right in and forces me to keep up, but this is a complex idea, and it's the last talk of the last day, and I'm sure you're all as fried as I am. So, here's a little background. I got the idea for this talk when I was listening to the Ruby Rogues podcast episode with Glenn Vanderburg. This is lightly edited for length, but in that episode, Glenn said: The best programmers I know all have some good techniques for conceptualizing or modeling the programs that they work with. And it tends to be sort of a spatial/visual model, but not always. And he says: What's going on is our brains are geared towards the physical world and dealing with our senses and integrating that sensory input. But the work we do as programmers is all abstract.
And it makes perfect sense that you would want to find techniques to rope the physical, sensory parts of your brain into this task of dealing with abstractions. And this is the part that really got my attention. He says: But we don't ever teach anybody how to do that, or even that they should do that.

When I heard this, I started thinking about the times that I've stumbled across some technique for doing something like this, and I've been really excited to find a way of translating a programming problem into some form that my brain could really get a handle on. And I was like, yeah, yeah, brains are awesome. And we should be teaching people that this is a thing they can do.

And I thought about it, and some time later, I was like, wait a minute. No. Brains are horrible. And teaching people these tricks would be totally irresponsible if we also didn't warn them about cognitive bias. I'll get to that in a little bit.

This is a talk in three parts. Part one: brains are awesome. And as Glenn said, you can rope the physical and sensory parts of your brain, as well as a few others I'll talk about, into helping you deal with abstractions.
Part two: brains are horrible, and they lie to us all the time. But if you're on the lookout for the kinds of lies that your brain will tell you, in part three I have an example of the kind of amazing hack that you just might be able to come up with.

Our brains are extremely well-adapted for dealing with the physical world. Our hindbrains, which regulate respiration, temperature, and balance, have been around for half a billion years or so. But when I write software, I am leaning hard on parts of the brain that are relatively new in evolutionary terms, and I'm using some relatively expensive resources.

So over the years I have built up a small collection of techniques and shortcuts that engage specialized structures of my brain to help me reason about programming problems. Here's the list.

I'm gonna start with a category of visual tools that let us leverage our spatial understanding of the world and our spatial reasoning skills to discover relationships between different parts of a model, or just to stay oriented when we're trying to reason through a complex problem.
I'm just gonna list out a few examples of this category quickly, because I think most developers are likely to encounter these, either in school or on the job. And they all have the same basic shape: they're boxes and arrows.

There's Entity-Relationship Diagrams, which help us understand how our data is modeled. We use diagrams to describe data structures like binary trees, linked lists, and so on. And for state machines of any complexity, diagrams are often the only way to make any sense of them. I could go on, but like I said, most of us are probably used to using these, at least occasionally.

There are three things that I like about these tools. First, they lend themselves really well to standing up in front of a whiteboard, possibly with a co-worker, and just standing up and moving around a little bit will help get the blood flowing and get your brain perked up.

Second, diagrams help us offload some of the work of keeping track of different concepts by attaching those concepts to objects in a two-dimensional space. And our brains have a lot of hardware support for keeping track of where things are in space.
And third, our brains are really good at pattern recognition, so visualizing our designs can give us a chance to spot certain kinds of problems, just by looking at their shapes, before we ever start typing code in an editor, and I think that's pretty cool.

Here's another technique that makes use of our spatial perception skills, and if you saw Sandi's talk yesterday, you'll know this one. It's the squint test. It's very straightforward. You open up some code and you either squint your eyes at it or you decrease the font size. The point is to look past the words and check out the shape of the code.

This is a pathological example that I used in a refactoring talk last year. You can use this technique as an aid to navigation, as a way of zeroing in on high-risk areas of code, or just plain to get oriented in a new code base. There are a few specific patterns that you can look for, and you'll find others as you do more of it.

Is the left margin ragged, as it is here? Are there any ridiculously long lines? There's one towards the bottom. What does your syntax highlighting tell you?
Are there groups of colors, or are colors sort of spread out? There's a lot of information you can glean from this. Incidentally, I have only ever met one blind programmer, and we didn't really talk about this stuff. If any of you have found that a physical or a cognitive disability gives you an interesting way of looking at code, or understanding code, I suppose, please come talk to me, because I'd love to hear your story.

Next up, I have a couple of techniques that involve a clever use of language. The first one is deceptively simple, but it does require a prop. It doesn't have to be that big; you can totally get away with using the souvenir edition. This is my daughter's duck-cow bath toy. What you do is you keep a rubber duck on your desk. When you get stuck, you put the rubber duck on top of your keyboard, and you explain your problem out loud to the duck.

[laughter]

Really. I mean, it sounds absurd, right?
But there's a good chance that in the process of putting your problem into words, you'll discover that there's an incorrect assumption that you've been making, or you'll think of some other possible solution.

I've also heard of people using teddy bears or other stuffed animals. And one of my co-workers told me that she learned this as the pet-rock technique. This was a thing in the seventies. And also that she finds it useful to compose an email describing the problem. So for those of you who, like me, think better when you're typing or writing than when you're speaking, that can be a nice alternative.

The other linguistic hack I got from Sandi Metz, and in her book, Practical Object-Oriented Design in Ruby, POODR for short, she describes a technique that she uses to figure out which object a method should belong to. I tried paraphrasing this, but honestly Sandi did a much better job describing it than I would, so I'm just gonna read it verbatim.

She says: How can you determine if a Gear class contains behavior that belongs somewhere else? One way is to pretend that it's sentient and to interrogate it.
If you rephrase every one of its methods as a question, asking the question ought to make sense.

For example, "Please, Mr. Gear, what is your ratio?" seems perfectly reasonable, while "Please, Mr. Gear, what are your gear_inches?" is on shaky ground, and "Please, Mr. Gear, what is your tire(size)?" is just downright ridiculous.

This is a great way to evaluate objects in light of the single responsibility principle. Now, I'll come back to that thought in just a minute, but first: I described the rubber duck and "Please, Mr. Gear?" as techniques to engage linguistic reasoning, but that doesn't quite feel right. Both of these techniques force us to put our questions into words, but words themselves are tools. We use words to communicate our ideas to other people.

As primates, we've evolved a set of social skills and behaviors for getting our needs met as part of a community. So, while these techniques do involve using the language centers of your brain, I think they reach beyond those centers to tap into our social reasoning.
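To make the interrogation concrete, here's a rough Ruby sketch in the spirit of Sandi's Gear example. This is simplified and reconstructed from memory, not her exact code from the book:

```ruby
# A simplified sketch in the spirit of POODR's Gear example (not Sandi's exact code).
class Wheel
  def initialize(rim, tire)
    @rim  = rim
    @tire = tire
  end

  # "Please, Mr. Wheel, what is your diameter?" makes perfect sense here.
  def diameter
    @rim + (@tire * 2)
  end
end

class Gear
  def initialize(chainring, cog, wheel)
    @chainring = chainring
    @cog       = cog
    @wheel     = wheel
  end

  # "Please, Mr. Gear, what is your ratio?" seems perfectly reasonable.
  def ratio
    @chainring / @cog.to_f
  end

  # "Please, Mr. Gear, what are your gear_inches?" is on shaky ground:
  # answering means reaching into the wheel's business.
  def gear_inches
    ratio * @wheel.diameter
  end
end
```

Calling `Gear.new(52, 11, Wheel.new(26, 1.5)).gear_inches` works, but the awkwardness of the question hints that wheel-related knowledge wants to live closer to Wheel.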
The rubber duck technique works because putting your problem into words forces you to organize your understanding of a problem in such a way that you can verbally lead another mind through it. And "Please, Mr. Gear?" lets us anthropomorphize an object and talk to it to discover whether it conforms to the single responsibility principle.

To me, the tell-tale phrase in Sandi's description of this technique is "asking the question ought to make sense." Most of us have an intuitive understanding that it might not be appropriate to ask Alice about something that is Bob's responsibility. Interrogating an object as though it were a person helps us use that social knowledge, and it gives us an opportunity to notice that a particular question doesn't make sense to ask any of our existing objects, which might prompt us to ask if we should create a new object to fill that role instead.

Now, personally, I would have considered POODR to have been a worthwhile purchase if "Please, Mr. Gear" was the only thing I got from it. But in this book, Sandi also made what I thought was a very compelling case for UML Sequence Diagrams.

Where "Please, Mr. Gear" is a good tool for discovering which objects should be responsible for a particular method, a Sequence Diagram can help you analyze the runtime interaction between several different objects. At first glance, this looks kind of like something in the boxes-and-arrows category of visual and spatial tools, but again, this feels more like it's tapping into that social understanding that we have. This can be a good way to get a sense for when an object is bossy, or when performing a task involves a complex sequence of several interactions, or when there are just plain too many different things to keep track of.

Rather than turn this into a lecture on UML, I'm just gonna tell you to go buy Sandi's book, and if, for whatever reason, you cannot afford it, come talk to me later and we'll work something out.

Now for the really hand-wavy stuff. Metaphors can be a really useful tool in software. The turtle graphics system in Logo is a great metaphor. Has anybody used Logo at any point in their life? About half the people. That's really cool.
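For everyone in the other half, here's the flavor of it. This is a minimal, hypothetical Turtle class of my own, a sketch of the idea rather than Logo itself:

```ruby
# A minimal, hypothetical Turtle class (a sketch of the idea, not Logo itself).
# The imaginary pen has a position and a heading; forward and turn update them.
class Turtle
  attr_reader :points

  def initialize
    @x = 0.0
    @y = 0.0
    @heading = 0.0                  # in degrees; 0 points along the x axis
    @points  = [[@x, @y]]
  end

  # Walk in whatever direction we're currently facing.
  def forward(distance)
    radians = @heading * Math::PI / 180
    @x += distance * Math.cos(radians)
    @y += distance * Math.sin(radians)
    @points << [@x, @y]
  end

  def turn(degrees)
    @heading = (@heading + degrees) % 360
  end
end

# Drawing a five-pointed star is just "walk, then turn 144 degrees," five times.
star = Turtle.new
5.times do
  star.forward(100)
  star.turn(144)
end
```

Five repetitions trace the star and bring the turtle back to its starting point, and nobody had to work out a single x, y coordinate by hand.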
We've probably all played with drawing something on the screen at some point, but most of the rendering systems that I've used are based on a Cartesian coordinate system: a grid. The turtle metaphor, by contrast, encourages the programmer to imagine themselves as the turtle, and to use that understanding to figure out, when they get stuck, what they should be doing next.

One of the original creators of Logo called this body-syntonic reasoning, and specifically developed it to help children solve problems. But the turtle metaphor works for everybody, not just for kids.

Cartesian grids are great for drawing boxes. Mostly great. But it can take some very careful thinking to figure out how to use x, y coordinate pairs to draw a spiral or a star or a snowflake or a tree. Choosing a different metaphor can make different kinds of solutions easy, where before they seemed like too much trouble to be worth bothering with.

James Ladd, in 2008, wrote a couple of interesting blog posts about what he called East-oriented code. Imagine a compass overlaid on top of your screen.
In this model, messages that an object sends to itself go South, and any data returned from those calls goes North. Communication between objects is the same thing, rotated ninety degrees: messages sent to other objects go East, and the return values from those messages flow West.

What James Ladd suggests is that, in general, code that sends messages to other objects, code where information mostly flows East, is easier to extend and maintain than code that looks at data and then decides what to do with it, which is code where information flows West.

Really, this is just the design principle "tell, don't ask." But the metaphor of the compass recasts it in a way that helps us use our background spatial awareness to keep this principle in mind at all times. In fact, there are plenty of ways we can use our background-level awareness to analyze our code.

Isn't this adorable? I love this picture.

Code smells are an entire category of metaphors that we use to talk about our work.
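Before we get into smells, the compass maps directly onto code. Here's a toy Ruby contrast of my own, not code from James Ladd's posts:

```ruby
# A toy contrast between westward ("ask") and eastward ("tell") style.
# My own illustration, not code from James Ladd's East-oriented posts.
class Order
  def initialize(paid)
    @paid    = paid
    @shipped = false
  end

  def paid?
    @paid
  end

  def shipped?
    @shipped
  end

  # Eastward: the caller just tells the order to ship; the decision about
  # whether shipping is allowed lives inside Order.
  def ship
    @shipped = true if @paid
  end
end

# Westward: data flows back to the caller, which then makes the decision.
def ship_asking(order)
  order.ship if order.paid?
end

# Eastward: only the message flows out; nothing needs to flow back.
def ship_telling(order)
  order.ship
end
```

Both functions behave the same today, but only ship_asking has to change if the shipping rules ever do.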
In fact, the name "code smell" itself is a metaphor for anything about your code that hints at a design problem, which I suppose makes it a meta-metaphor.

Some code smells have names that are extremely literal: duplicated code, long method, and so on. But some of these are delightfully suggestive: feature envy, refused bequest, primitive obsession. To me, the names on the right have a lot in common with "Please, Mr. Gear." They're chosen to hook into something in our social awareness to give a name to a pattern of dysfunction, and by naming the problem it suggests a possible solution.

So, these are most of the shortcuts that I've accumulated over the years, and I hope that this can be the start of a similar collection for some of you.

Now, the part where I try to put the fear into you. Evolution has designed our brains to lie to us. Brains are expensive. The human brain accounts for just two percent of body mass, but twenty percent of our caloric intake. That's a huge energy requirement that has to be justified.

Evolution, as a designer, does one thing and one thing only. It selects for traits that allow an organism to stay alive long enough to reproduce. It doesn't care about getting the best solution, only one that's good enough to compete in the current landscape. Evolution will tolerate any hack as long as it meets that one goal.

As an example, I want to take a minute to talk about how we see the world around us. The human eye has two different kinds of photoreceptors. There are about a hundred and twenty million rod cells in each eye. These play little or no role in color vision, and they're mostly used for nighttime and peripheral vision.

There are also about six or seven million cone cells in each eye, and these give us color vision, but they require a lot more light to work. And the vast majority of cone cells are packed together in a tight little cluster near the center of the retina. This area is what we use to focus on individual details, and it's smaller than you might think. It's only fifteen degrees wide.

As a result, our vision is extremely directional. We have a very small area of high detail and high color, and the rest of our field of vision is more or less monochrome. So when we look at this, our eyes see something like this.

In order to turn the image on the left into the image on the right, our brains are doing a lot of work that we're mostly unaware of.

We compensate for having such highly directional vision by moving our eyes around a lot. Our brains combine the details from these individual points of interest to construct a persistent mental model of whatever we're looking at. These fast point-to-point movements are called saccades, and they're actually the fastest movements that the human body can make. The shorter saccades that you might make when you're reading last for twenty to forty milliseconds. Longer ones that travel through a wider arc might take two hundred milliseconds, or about a fifth of a second.

What I find so fascinating about this is that we don't perceive saccades. During a saccade, the eye is still sending data to the brain, but what it's sending is a smeary blur. So the brain just edits that part out. This process is called saccadic masking. You can see this effect for yourself.
Next time you're in front of a mirror, lean in close and look back and forth from the reflection of one eye to the other. You won't see your eyes move. As far as we can tell, our gaze just jumps instantaneously from one reference point to the next. And here's where I have to wait for a moment while everybody stops doing this.

When I was preparing for this talk, I found an absolutely wonderful sentence in the Wikipedia entry on saccades. It said: Due to saccadic masking, the eye/brain system not only hides the eye movements from the individual, but also hides the evidence that anything has been hidden.

Hides. The evidence. That anything has been hidden. Our brains lie to us. And they lie to us about having lied to us. And this happens to you multiple times a second, every waking hour, every day of your life. Of course, there's a reason for this.

Imagine if, every time you shifted your gaze around, you got distracted by all the pretty colors. You would be eaten by lions.

But, in selecting for this design, evolution made a trade-off.
The trade off is that we are 424 00:19:42,680 --> 00:19:45,570 effectively blind every time we move our eyes around. 425 00:19:45,570 --> 00:19:48,330 Sometimes for up to a fifth of a second. 426 00:19:48,330 --> 00:19:51,050 And I wanted to talk about this, partly because 427 00:19:51,050 --> 00:19:53,430 it's a really fun subject, but also to show 428 00:19:53,430 --> 00:19:55,760 just one of the ways that our brains 429 00:19:55,760 --> 00:19:58,500 are doing a massive amount of work to process 430 00:19:58,500 --> 00:20:01,250 information from our environment and present us with an 431 00:20:01,250 --> 00:20:03,430 abstraction. 432 00:20:03,430 --> 00:20:05,980 And as programmers, if we know anything about abstractions, 433 00:20:05,980 --> 00:20:09,240 it's that they're hard to get right. Which leads 434 00:20:09,240 --> 00:20:11,860 me to an interesting question. Does it make sense 435 00:20:11,860 --> 00:20:13,440 to use any of the techniques that I talked 436 00:20:13,440 --> 00:20:15,900 about in part one, to try to corral different 437 00:20:15,900 --> 00:20:17,690 parts of our brains into doing our work for 438 00:20:17,690 --> 00:20:19,690 us, if we don't know what kinds of shortcuts 439 00:20:19,690 --> 00:20:26,650 they're gonna take? 440 00:20:26,650 --> 00:20:30,510 According to the Oxford English Dictionary, the word bias 441 00:20:30,510 --> 00:20:32,580 seems to have entered the English language around the 442 00:20:32,580 --> 00:20:35,770 1520s. It was used as a technical term 443 00:20:35,770 --> 00:20:38,600 in the game of lawn bowling, and it referred 444 00:20:38,600 --> 00:20:40,230 to a ball that was constructed in such a 445 00:20:40,230 --> 00:20:42,650 way that it would roll in 446 00:20:42,650 --> 00:20:44,640 a curved path instead of in a straight line.
447 00:20:44,640 --> 00:20:47,150 And since then, it's picked up a few additional 448 00:20:47,150 --> 00:20:49,580 meanings, but they all have that same connotation 449 00:20:49,580 --> 00:20:54,440 of something that's skewed or off a little bit. 450 00:20:54,440 --> 00:20:57,260 Cognitive bias is a term for systematic errors in 451 00:20:57,260 --> 00:20:59,490 thinking. These are patterns of thought that diverge in 452 00:20:59,490 --> 00:21:02,860 measurable and predictable ways from the answers that 453 00:21:02,860 --> 00:21:07,070 pure rationality might give. If you have some free 454 00:21:07,070 --> 00:21:09,100 time, I suggest that you go have a look 455 00:21:09,100 --> 00:21:11,810 at the Wikipedia page called List of cognitive biases. 456 00:21:11,810 --> 00:21:13,890 There are over a hundred and fifty of them 457 00:21:13,890 --> 00:21:16,610 and they are fascinating reading. 458 00:21:16,610 --> 00:21:18,950 And this list of cognitive biases has a lot 459 00:21:18,950 --> 00:21:20,670 in common with the list of code smells that 460 00:21:20,670 --> 00:21:22,740 I showed earlier. A lot of these names are 461 00:21:22,740 --> 00:21:25,160 very literal. But there are a few that stand 462 00:21:25,160 --> 00:21:28,640 out, like the curse of knowledge, or the Google effect or, 463 00:21:28,640 --> 00:21:32,680 and I kid you not, the IKEA effect. But 464 00:21:32,680 --> 00:21:34,490 the parallel goes deeper than that. 465 00:21:34,490 --> 00:21:37,850 This list gives 466 00:21:37,850 --> 00:21:40,640 names to patterns of dysfunction, and once you have 467 00:21:40,640 --> 00:21:41,950 a name for a thing, it's a lot easier 468 00:21:41,950 --> 00:21:43,510 to recognize it and figure out what to do 469 00:21:43,510 --> 00:21:46,330 about it. I do want to call your attention 470 00:21:46,330 --> 00:21:48,280 to one particular item on this list. It's called 471 00:21:48,280 --> 00:21:51,920 the bias blind spot.
This is the tendency to 472 00:21:51,920 --> 00:21:54,990 see oneself as less biased than other people, or 473 00:21:54,990 --> 00:21:57,150 to be able to identify more cognitive biases in 474 00:21:57,150 --> 00:22:04,150 others than in oneself. Sound like anybody you know? 475 00:22:10,820 --> 00:22:17,820 Just let that sink in for a minute. Seriously, 476 00:22:24,350 --> 00:22:25,730 though. 477 00:22:25,730 --> 00:22:27,630 In our field, we like to think of ourselves 478 00:22:27,630 --> 00:22:30,240 as more rational than the average person, and it 479 00:22:30,240 --> 00:22:33,940 just isn't true. Yes, as programmers, we have a 480 00:22:33,940 --> 00:22:37,250 valuable, marketable skill that depends on our ability to 481 00:22:37,250 --> 00:22:40,130 reason mathematically. But we do ourselves and others a 482 00:22:40,130 --> 00:22:42,280 disservice if we allow ourselves to believe that being 483 00:22:42,280 --> 00:22:45,260 good at programming means anything other than, we're good 484 00:22:45,260 --> 00:22:48,570 at programming. Because as humans we are all biased. 485 00:22:48,570 --> 00:22:51,220 It's built into us, in our DNA. And pretending 486 00:22:51,220 --> 00:22:53,630 that we aren't biased only allows our biases to 487 00:22:53,630 --> 00:22:54,780 run free. 488 00:22:54,780 --> 00:22:57,500 I don't have a lot of general advice for 489 00:22:57,500 --> 00:22:59,640 how to look for bias, but I think an 490 00:22:59,640 --> 00:23:02,830 obvious and necessary first step is just to ask 491 00:23:02,830 --> 00:23:07,510 the question, how is this biased? Beyond that, I 492 00:23:07,510 --> 00:23:09,520 suggest that you learn about as many specific cognitive 493 00:23:09,520 --> 00:23:11,230 biases as you can so that your brain can 494 00:23:11,230 --> 00:23:13,010 do what it does, which is to look for 495 00:23:13,010 --> 00:23:17,090 patterns and make associations and classify things. 
496 00:23:17,090 --> 00:23:19,190 I think everybody should understand their own biases, because 497 00:23:19,190 --> 00:23:21,610 only by knowing how you're biased can you then 498 00:23:21,610 --> 00:23:23,910 decide how to correct for that 499 00:23:23,910 --> 00:23:26,610 bias in the decisions that you make. If you're 500 00:23:26,610 --> 00:23:28,630 not checking your work for bias, you can look 501 00:23:28,630 --> 00:23:30,630 right past a great solution and you'll never know 502 00:23:30,630 --> 00:23:32,200 it was there. 503 00:23:32,200 --> 00:23:35,340 So for part three of my talk, I have 504 00:23:35,340 --> 00:23:37,860 an example of a solution that is simple, elegant, 505 00:23:37,860 --> 00:23:41,460 and just about the last thing I ever would have 506 00:23:41,460 --> 00:23:44,690 thought of. 507 00:23:44,690 --> 00:23:45,850 For the benefit of those of you who have 508 00:23:45,850 --> 00:23:48,810 yet to find your first gray hair, Pac-Man was 509 00:23:48,810 --> 00:23:52,420 a video game released in 1980 that let people 510 00:23:52,420 --> 00:23:54,990 maneuver around a maze eating dots while trying to 511 00:23:54,990 --> 00:23:58,300 avoid four ghosts. Now, playing games is fun, but 512 00:23:58,300 --> 00:23:59,960 we're programmers. We want to know how things work. 513 00:23:59,960 --> 00:24:02,720 So let's talk about programming Pac-Man. 514 00:24:02,720 --> 00:24:04,450 For the purposes of this discussion, we'll focus on 515 00:24:04,450 --> 00:24:10,640 just three things. The Pac-Man, the ghosts, and the 516 00:24:10,640 --> 00:24:11,930 maze. The Pac-Man is controlled by the player. So 517 00:24:11,930 --> 00:24:14,720 that code is basically just responding to hardware events. 518 00:24:14,720 --> 00:24:17,550 Boring. The maze is there so that the player 519 00:24:17,550 --> 00:24:19,780 has some chance at avoiding the ghosts.
But the 520 00:24:19,780 --> 00:24:22,650 ghost AI, that's what's gonna make the game interesting 521 00:24:22,650 --> 00:24:25,880 enough that people keep dropping quarters into a slot, 522 00:24:25,880 --> 00:24:27,220 and by the way, video games used to cost 523 00:24:27,220 --> 00:24:28,400 a quarter. 524 00:24:28,400 --> 00:24:31,800 When I was your age. 525 00:24:31,800 --> 00:24:34,470 So to keep things simple, we'll start with one 526 00:24:34,470 --> 00:24:37,940 ghost. How do we program its movement? We could 527 00:24:37,940 --> 00:24:40,410 choose a random direction and move that way until 528 00:24:40,410 --> 00:24:42,090 we hit a wall and then choose another random 529 00:24:42,090 --> 00:24:44,550 direction. This is very easy to implement, but not 530 00:24:44,550 --> 00:24:46,220 much of a challenge for the player. 531 00:24:46,220 --> 00:24:49,580 OK, so, we could compute the distance to the 532 00:24:49,580 --> 00:24:52,180 Pac-Man in x and y and pick a direction 533 00:24:52,180 --> 00:24:54,920 that makes one of those smaller. But then the 534 00:24:54,920 --> 00:24:56,890 ghost is gonna get stuck in corners or behind 535 00:24:56,890 --> 00:24:58,630 walls because it won't go around to catch the 536 00:24:58,630 --> 00:25:00,500 Pac-Man. And, again, it's gonna be too easy for 537 00:25:00,500 --> 00:25:01,210 the player. 538 00:25:01,210 --> 00:25:04,370 So how about instead of minimizing linear distance, we 539 00:25:04,370 --> 00:25:09,110 focus on topological distance? We can compute all possible 540 00:25:09,110 --> 00:25:12,760 paths through the maze, pick the shortest one that 541 00:25:12,760 --> 00:25:14,600 gets us to the Pac-Man and then step down 542 00:25:14,600 --> 00:25:16,520 it. And when we get to the next place, 543 00:25:16,520 --> 00:25:18,990 we'll do it all again. 544 00:25:18,990 --> 00:25:21,170 This works fine for one ghost.
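That shortest-path idea can be sketched as a breadth-first search over the maze tiles. This is my own minimal illustration, not code from the talk: the maze layout, the character encoding ('#' for walls, 'G' for the ghost, 'P' for the Pac-Man), and every name here are assumptions.

```ruby
# Breadth-first search over maze tiles: find the first move of a
# shortest route from the ghost to the Pac-Man, re-planned each step.
MAZE = [
  "#######",
  "#G..#.#",
  "#.#...#",
  "#.#.#P#",
  "#######",
].freeze

def find(ch)
  MAZE.each_with_index do |row, y|
    x = row.index(ch)
    return [x, y] if x
  end
end

def first_step(from, to)
  queue   = [from]
  came_by = { from => nil } # tile => the opening move that reaches it
  until queue.empty?
    x, y = queue.shift
    return came_by[[x, y]] if [x, y] == to
    [[1, 0], [-1, 0], [0, 1], [0, -1]].each do |dx, dy|
      nxt = [x + dx, y + dy]
      next if MAZE[nxt[1]][nxt[0]] == "#" || came_by.key?(nxt)
      came_by[nxt] = came_by[[x, y]] || [dx, dy] # remember the opening move
      queue << nxt
    end
  end
end

puts first_step(find("G"), find("P")).inspect # => [1, 0], i.e. move right
```

Running this on each tick, and re-running it whenever the ghost reaches the next tile, is the "compute all paths, pick the shortest, step down it" loop described above.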
But if all 545 00:25:21,170 --> 00:25:23,640 four ghosts use this algorithm, then they're gonna wind 546 00:25:23,640 --> 00:25:25,210 up chasing after the player in a tight little 547 00:25:25,210 --> 00:25:29,610 bunch instead of fanning out. OK. So each ghost 548 00:25:29,610 --> 00:25:31,990 computes all possible paths to the Pac-Man and rejects 549 00:25:31,990 --> 00:25:34,670 any path that goes through another ghost. That shouldn't 550 00:25:34,670 --> 00:25:37,760 be too hard, right? 551 00:25:37,760 --> 00:25:41,530 I don't have a statistically valid sample, but my 552 00:25:41,530 --> 00:25:43,820 guess is that when asked to design an AI 553 00:25:43,820 --> 00:25:45,730 for the ghosts, most programmers would go through a 554 00:25:45,730 --> 00:25:47,310 thought process more or less like what I just 555 00:25:47,310 --> 00:25:51,590 walked through. So, how is this solution biased? 556 00:25:51,590 --> 00:25:54,420 I don't have a good name for 557 00:25:54,420 --> 00:25:56,390 how this is biased, so the best way I 558 00:25:56,390 --> 00:25:58,670 have to communicate this idea is to walk you 559 00:25:58,670 --> 00:26:00,840 through a very different solution. 560 00:26:00,840 --> 00:26:04,560 In 2006, I attended OOPSLA, a conference 561 00:26:04,560 --> 00:26:07,840 put on by the ACM, as a student volunteer, 562 00:26:07,840 --> 00:26:09,620 and I happened to sit in on a presentation 563 00:26:09,620 --> 00:26:14,270 by Alexander Repenning from the University of Colorado. And 564 00:26:14,270 --> 00:26:18,060 in his presentation, he walked through the Pac-Man problem, 565 00:26:18,060 --> 00:26:19,980 more or less the way I just did, and 566 00:26:19,980 --> 00:26:21,140 then he presented this idea. 567 00:26:21,140 --> 00:26:23,870 What you do is you give the Pac-Man a 568 00:26:23,870 --> 00:26:25,970 smell, and then you model the diffusion of that 569 00:26:25,970 --> 00:26:31,340 smell throughout the environment.
In the real world, smells 570 00:26:31,340 --> 00:26:33,640 travel through the air. We certainly don't need to 571 00:26:33,640 --> 00:26:36,440 model each individual air molecule. What we can do, 572 00:26:36,440 --> 00:26:39,180 instead, is just divide the environment up into reasonably 573 00:26:39,180 --> 00:26:42,260 sized logical chunks, and we model those. 574 00:26:42,260 --> 00:26:44,760 Coincidentally, we already have an object that does 575 00:26:44,760 --> 00:26:47,120 exactly that for us. It's the tiles of the 576 00:26:47,120 --> 00:26:49,470 maze itself. They're not really doing anything else, so 577 00:26:49,470 --> 00:26:51,620 we can borrow those as a convenient container for 578 00:26:51,620 --> 00:26:56,360 this computation. We program the game as follows. 579 00:26:56,360 --> 00:26:58,770 We say that the Pac-Man gives whatever floor tile 580 00:26:58,770 --> 00:27:02,030 it's standing on a Pac-Man smell value, say a 581 00:27:02,030 --> 00:27:05,620 thousand. The number doesn't really matter. And that 582 00:27:05,620 --> 00:27:08,030 tile then passes a smaller value off to each 583 00:27:08,030 --> 00:27:10,270 of its neighbors, and they pass a smaller value 584 00:27:10,270 --> 00:27:12,520 off to each of their neighbors and so on. 585 00:27:12,520 --> 00:27:13,750 Iterate this a few times and you get a 586 00:27:13,750 --> 00:27:16,830 diffusion contour that we can visualize as a hill 587 00:27:16,830 --> 00:27:18,600 with its peak centered on the Pac-Man. 588 00:27:18,600 --> 00:27:21,380 It's a little hard to see here. The Pac-Man 589 00:27:21,380 --> 00:27:23,110 is at the bottom of that big yellow bar 590 00:27:23,110 --> 00:27:30,110 on the left. So we've got the Pac-Man. We've 591 00:27:34,140 --> 00:27:37,080 got the floor tiles. But in order to make 592 00:27:37,080 --> 00:27:38,550 it a maze, we also have to have some 593 00:27:38,550 --> 00:27:41,110 walls.
What we do is we give the walls 594 00:27:41,110 --> 00:27:43,770 a Pac-Man smell value of zero. That chops the 595 00:27:43,770 --> 00:27:50,770 hill up a bit. 596 00:27:56,050 --> 00:27:58,210 And now all our ghost has to do is 597 00:27:58,210 --> 00:28:03,559 climb the hill. We program the first ghost to 598 00:28:03,559 --> 00:28:06,650 sample each of the floor tiles next to it, 599 00:28:06,650 --> 00:28:08,800 pick the one with the biggest number, go that 600 00:28:08,800 --> 00:28:12,580 way. It barely seems worthy of being called an 601 00:28:12,580 --> 00:28:16,590 AI, does it? But check this out. When we 602 00:28:16,590 --> 00:28:18,280 add more ghosts to the maze, we only have 603 00:28:18,280 --> 00:28:20,750 to make one change to get them to cooperate. 604 00:28:20,750 --> 00:28:23,360 And interestingly, we don't change the ghosts' movement behaviors 605 00:28:23,360 --> 00:28:26,380 at all. Instead, we have the ghosts tell the 606 00:28:26,380 --> 00:28:29,390 floor tile that they're, I guess, floating above, that 607 00:28:29,390 --> 00:28:33,130 its Pac-Man smell value is zero. This changes the 608 00:28:33,130 --> 00:28:35,680 shape of that diffusion contour. Instead of a smooth 609 00:28:35,680 --> 00:28:38,320 hill that always slopes down away from the Pac-Man, 610 00:28:38,320 --> 00:28:40,030 there are now cliffs where the hill drops immediately 611 00:28:40,030 --> 00:28:40,780 to zero. 612 00:28:40,780 --> 00:28:45,030 In effect, we turn the ghosts into movable walls, 613 00:28:45,030 --> 00:28:46,650 so that when one ghost cuts off another one, 614 00:28:46,650 --> 00:28:49,559 the second one will automatically choose a different route. 615 00:28:49,559 --> 00:28:52,990 This lets the ghosts cooperate without needing to be 616 00:28:52,990 --> 00:28:56,750 aware of each other. And halfway through this conference 617 00:28:56,750 --> 00:28:58,270 session that I was sitting in where I saw 618 00:28:58,270 --> 00:29:00,820 this, I was like.
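The whole diffusion scheme just described fits in a few dozen lines. This is a hedged sketch of my own, not code from the talk or from Repenning's paper: the maze layout, the halving decay, the pass count, and all the names are assumptions for illustration.

```ruby
GRID = [
  "#######",
  "#G..#.#",
  "#.#...#",
  "#.#.#P#",
  "#######",
].freeze

def locate(grid, ch)
  grid.each_with_index do |row, y|
    x = row.index(ch)
    return [x, y] if x
  end
end

# Spread the smell: each pass, every floor tile takes half of its
# smelliest neighbor's value. Walls and ghost-occupied tiles stay at
# zero, which is what carves cliffs into the hill.
def diffuse(grid, pacman, ghosts, passes: 50)
  h, w  = grid.size, grid.first.size
  smell = Array.new(h) { Array.new(w, 0.0) }
  passes.times do
    fresh = Array.new(h) { Array.new(w, 0.0) }
    (0...h).each do |y|
      (0...w).each do |x|
        next if grid[y][x] == "#" || ghosts.include?([x, y])
        best = [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
               .map { |nx, ny| smell[ny][nx] }.max
        fresh[y][x] = best / 2.0
      end
    end
    fresh[pacman[1]][pacman[0]] = 1000.0 # the Pac-Man's own tile
    smell = fresh
  end
  smell
end

# The entire ghost "AI": sample the neighboring floor tiles, pick the
# one with the biggest number, go that way.
def climb(grid, ghost, smell)
  x, y = ghost
  [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
    .reject { |nx, ny| grid[ny][nx] == "#" }
    .max_by { |nx, ny| smell[ny][nx] }
end

ghost  = locate(GRID, "G")
pacman = locate(GRID, "P")
field  = diffuse(GRID, pacman, [ghost])
puts climb(GRID, ghost, field).inspect # => [2, 1]: the tile to the ghost's right
```

Notice that `climb` knows nothing about other ghosts or paths; adding a second ghost just means adding another coordinate to the `ghosts` list passed to `diffuse`, and the cliffs in the smell field do the coordination.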
619 00:29:00,820 --> 00:29:04,650 What just happened? 620 00:29:04,650 --> 00:29:07,220 At first, my level of surprise was 621 00:29:07,220 --> 00:29:09,970 just, what an interesting approach. But then I got 622 00:29:09,970 --> 00:29:13,330 completely stunned when I thought about how surprising 623 00:29:13,330 --> 00:29:16,760 that solution was. And I hope that looking at 624 00:29:16,760 --> 00:29:19,520 the second solution helps you understand the bias in 625 00:29:19,520 --> 00:29:21,809 the first solution. 626 00:29:21,809 --> 00:29:24,940 In his paper, Professor Repenning wrote, The challenge to 627 00:29:24,940 --> 00:29:27,520 find this solution is a psychological, not a technical 628 00:29:27,520 --> 00:29:30,930 one. Our first instinct, when we're presented with this 629 00:29:30,930 --> 00:29:33,860 problem, is to imagine ourselves as the ghost. This 630 00:29:33,860 --> 00:29:36,920 is the body syntonic reasoning that's built into Logo, 631 00:29:36,920 --> 00:29:39,080 and in this case it's a trap. 632 00:29:39,080 --> 00:29:40,780 Because it leads us to solve the pursuit problem 633 00:29:40,780 --> 00:29:44,730 by making the pursuer smarter. Once we start down 634 00:29:44,730 --> 00:29:48,210 that road, it's very unlikely that we're going to 635 00:29:48,210 --> 00:29:51,600 consider a radically different approach, even, or perhaps especially, 636 00:29:51,600 --> 00:29:55,740 if it's a much simpler one. In other 637 00:29:55,740 --> 00:29:59,020 words, body syntonicity biases us towards modeling objects in 638 00:29:59,020 --> 00:30:03,830 the foreground, rather than objects in the background. 639 00:30:03,830 --> 00:30:06,970 Oops, sorry. 640 00:30:06,970 --> 00:30:10,600 OK. Does this mean that you shouldn't use body 641 00:30:10,600 --> 00:30:12,780 syntonic reasoning? Of course not. It's a tool. It's 642 00:30:12,780 --> 00:30:15,430 right for some jobs. It's not right for others.
643 00:30:15,430 --> 00:30:16,910 I want to take a look at one more 644 00:30:16,910 --> 00:30:21,110 technique from part one. What's the bias in Please 645 00:30:21,110 --> 00:30:23,590 Mr. Gear, what is your ratio? Aside from the 646 00:30:23,590 --> 00:30:27,770 gendered language, which is trivially easy to address, this 647 00:30:27,770 --> 00:30:31,110 technique is explicitly designed to give you an opportunity 648 00:30:31,110 --> 00:30:34,120 to discover new objects in your model. But it 649 00:30:34,120 --> 00:30:35,650 only works after you've given at least one of 650 00:30:35,650 --> 00:30:38,050 those objects a name. 651 00:30:38,050 --> 00:30:42,180 Names have gravity. Metaphors can be tar pits. It's 652 00:30:42,180 --> 00:30:44,620 very likely that the new objects that you discover 653 00:30:44,620 --> 00:30:46,860 are going to be fairly closely related to the 654 00:30:46,860 --> 00:30:51,390 ones that you already have. Another way to help 655 00:30:51,390 --> 00:30:53,420 see this is to think about how many steps 656 00:30:53,420 --> 00:30:56,610 it takes to get from Please, Ms. Pac-Man, what 657 00:30:56,610 --> 00:30:59,730 is your current position in the maze? To, please 658 00:30:59,730 --> 00:31:01,610 Ms. Floor Tile, how much do you smell like 659 00:31:01,610 --> 00:31:04,540 Pac-Man? 660 00:31:04,540 --> 00:31:05,880 For a lot of people, the answer to that 661 00:31:05,880 --> 00:31:09,660 question is probably infinity. It certainly was for me. 662 00:31:09,660 --> 00:31:11,390 My guess is that you don't come up with 663 00:31:11,390 --> 00:31:14,400 this technique unless you've already done some work modeling 664 00:31:14,400 --> 00:31:18,950 diffusion in another context. Which, incidentally, is why I 665 00:31:18,950 --> 00:31:21,840 like to work on diverse teams.
The more different 666 00:31:21,840 --> 00:31:24,730 backgrounds and perspectives we have access to, the more 667 00:31:24,730 --> 00:31:27,520 chances we have to find a novel application of 668 00:31:27,520 --> 00:31:31,720 some seemingly unrelated technique, because somebody's worked with it 669 00:31:31,720 --> 00:31:33,370 before. 670 00:31:33,370 --> 00:31:37,600 It can be exhilarating and very empowering to find 671 00:31:37,600 --> 00:31:39,900 these techniques that let us take shortcuts in our 672 00:31:39,900 --> 00:31:43,929 work by leveraging these specialized structures in our brains. 673 00:31:43,929 --> 00:31:46,890 But those structures themselves take shortcuts, and if you're 674 00:31:46,890 --> 00:31:48,890 not careful, they can lead you down a primrose 675 00:31:48,890 --> 00:31:49,290 path. 676 00:31:49,290 --> 00:31:51,550 I want to go back to that quote that 677 00:31:51,550 --> 00:31:53,309 got me thinking about all this in the first 678 00:31:53,309 --> 00:31:56,080 place. About how we don't ever teach anybody how 679 00:31:56,080 --> 00:31:58,510 to do that or even that they should. 680 00:31:58,510 --> 00:32:00,920 Ultimately, I think we should use techniques like this, 681 00:32:00,920 --> 00:32:03,090 despite the biases in them. I think we 682 00:32:03,090 --> 00:32:06,650 should share them. And I think, to paraphrase Glenn, 683 00:32:06,650 --> 00:32:08,059 we should teach people that this is a thing 684 00:32:08,059 --> 00:32:11,980 that you can and should do. And, I think 685 00:32:11,980 --> 00:32:14,240 that we should teach people that looking critically at 686 00:32:14,240 --> 00:32:16,960 the answers that these techniques give you is also 687 00:32:16,960 --> 00:32:19,490 a thing that you can and should do.
688 00:32:19,490 --> 00:32:20,640 We might not always be able to come up 689 00:32:20,640 --> 00:32:23,960 with a radically simpler or different approach, but the 690 00:32:23,960 --> 00:32:27,059 least we can do is give ourselves the opportunity 691 00:32:27,059 --> 00:32:33,050 to do so, by asking how is this biased? 692 00:32:33,050 --> 00:32:34,690 I want to say thank you, real quickly, to 693 00:32:34,690 --> 00:32:36,490 everybody who helped me with this talk, or the 694 00:32:36,490 --> 00:32:39,120 ideas in it. And also thank you to LivingSocial 695 00:32:39,120 --> 00:32:41,820 for paying for my trip. And also for bringing 696 00:32:41,820 --> 00:32:45,790 these wonderful brains. So, they're gonna start tearing this 697 00:32:45,790 --> 00:32:48,630 stage down in a few minutes. Rather than take 698 00:32:48,630 --> 00:32:50,050 Q and A up here, I'm gonna pack up 699 00:32:50,050 --> 00:32:52,380 all my stuff and then I'm gonna migrate over 700 00:32:52,380 --> 00:32:54,570 there, and you can come and bug me, pick 701 00:32:54,570 --> 00:32:55,929 up a brain, whatever. 702 00:32:55,929 --> 00:32:56,790 Thank you.