renorm 7 Conclusion Pt I (English subtitles)
One of the major themes of this series of lectures has been going, on the data side, from one level of description to another: from data to data prime with some kind of coarse-graining prescription, and then asking the question, okay, if this was your model for the data at this scale, what is the corresponding model prime for the data at that scale? That relationship was the one we understood as the renormalization relationship. And it goes all the way from how a Markov chain coarse-grains and flows down onto a submanifold of the higher-dimensional space that the Markov chains originally lived in.
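A minimal sketch of that flow in Python (the 3-state transition matrix here is made up for illustration, not taken from the lectures): coarse-graining a Markov chain in time by a factor of two replaces its transition matrix T with T @ T, and iterating that map drives the chain onto the low-dimensional family of rank-one stochastic matrices.

    import numpy as np

    # Hypothetical 3-state chain; not a matrix from the lectures.
    T = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])

    Tk = T.copy()
    for k in range(1, 6):
        Tk = Tk @ Tk  # observing only every 2^k-th time step
        print(f"T^(2^{k}) =\n{Tk.round(3)}")

    # The iterates approach a rank-one matrix whose identical rows are the
    # stationary distribution: the renormalization flow lands on a
    # low-dimensional submanifold of the space of stochastic matrices.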
The same relationship applies to how electrodynamics changes as you go from a finer-grained scale, where, let's say, you can observe electrons at a scale of 1 millimeter, up to a scale of, say, a meter; and that renormalization, as I indicated, could be understood as changing not the laws of electrodynamics but just one of its parameters, the electron charge, as you moved to different distances.
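A hedged sketch of that running charge. This is the standard one-loop, electron-only, leading-log formula from textbook QED, not a formula quoted in the lecture; the running is only appreciable at subatomic scales, so at the lecture's millimeter-to-meter example the change is negligible, but the mechanism is the same.

    import math

    ALPHA_0 = 1 / 137.035999  # fine-structure constant at the electron scale
    M_E = 0.000511            # electron mass in GeV

    def alpha_eff(q_gev):
        """One-loop QED running with a single electron loop (valid q >> m_e)."""
        return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi))
                          * math.log(q_gev**2 / M_E**2))

    for q in (0.01, 1.0, 91.19):  # 10 MeV, 1 GeV, the Z mass
        print(f"1/alpha at {q} GeV ~ {1 / alpha_eff(q):.1f}")

    # The effective charge grows at short distances (screening by virtual
    # electron-positron pairs). With only the electron loop this gives
    # roughly 1/134 at the Z mass; the measured value is nearer 1/128
    # because heavier charged fermions also contribute.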
This projection operation we've left somewhat ambiguous. In each of the talks I told you the coarse-graining operation we were going to use. We did the Markov chains and said, "okay, you have some finite time resolution." When we came to study the Ising model I said, "okay, look, here's how we're going to decimate: we have our grid, and what we're going to do is take every other particle as you go along the grid in these directions; we're going to average over every other particle like that, or rather, trace over every other particle like that."
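The lecture's decimation is on the two-dimensional grid, where tracing out spins generates new couplings (including the four-spin term that comes up later). In one dimension the same move closes exactly, so here is a minimal sketch of decimation for the 1D Ising chain, where summing out every other spin renormalizes the coupling as K' = (1/2) ln cosh(2K):

    import math

    def decimate(K):
        """Trace over every other spin of the 1D Ising chain.
        The partial sum gives exactly K' = 0.5 * ln(cosh(2K)),
        equivalently tanh(K') = tanh(K)**2."""
        return 0.5 * math.log(math.cosh(2 * K))

    K = 1.0  # illustrative starting coupling
    for step in range(6):
        print(f"step {step}: K = {K:.6f}")
        K = decimate(K)

    # K flows to zero under repeated decimation: the 1D chain looks more
    # and more disordered at larger scales and has no finite-temperature
    # phase transition, unlike the 2D grid studied in the lecture.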
The one time we really started to ask which coarse graining we want to use was when we came to do the cellular automata. We looked at Israeli and Goldenfeld's work, where what we found is that they were simultaneously solving for the model, the g, that came from the f, but also solving for the projection function that took the supercells and mapped them into single cells. So I'll draw an example of how Israeli and Goldenfeld's projection might work in some case: it takes blank spaces to a blank cell, but if there's one filled-in cell, it always takes the supercell to a filled-in cell at the coarse-grained level of description. What Israeli and Goldenfeld were doing was simultaneously solving for these two objects.
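That projection, as described, is just an OR over the cells of each supercell. A minimal sketch, assuming supercells of width two (the block size in the actual construction varies):

    def project(state):
        """Map each 2-cell supercell to one coarse cell: an all-blank
        supercell goes to blank (0); any filled cell makes it filled (1)."""
        assert len(state) % 2 == 0
        return tuple(a | b for a, b in zip(state[::2], state[1::2]))

    print(project((0, 0, 1, 0, 1, 1)))  # -> (0, 1, 1)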
And when they did that, one of the things we talked quite a bit about was that they found Rule 110 could indeed be efficiently coarse-grained. And that's kind of remarkable, right? It's sort of like saying, "yeah, your clock speed is 5 GHz and you have 16 GB of memory, but I can do what you think you want to do in half the memory and half the time."
Now, when we actually came to look at what the coarse graining was doing for Rule 110, we were much less impressed. For example, one of the kinds of coarse grainings that Israeli and Goldenfeld discovered was the Garden of Eden coarse graining, which turned out to be incredibly trivial. What it did was take a certain subset of supercells that could never be produced by Rule 110; not, in fact, blocks of two, they had to go to a longer set of blocks to find them. But they found these Garden of Eden supercells and then projected the whole world into Garden of Eden versus not Garden of Eden, you know, post-Fall, right? And by projecting into those two spaces, they could actually map Rule 110 onto Rule 0.
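You can hunt for those Garden of Eden blocks directly (a minimal sketch, not Israeli and Goldenfeld's code): a block of length k is an orphan if none of the 2^(k+2) input windows of length k+2 maps onto it in one step. Consistent with the lecture, no length-2 blocks qualify for Rule 110, so the block length has to be increased until orphans appear.

    import itertools

    def is_orphan(block, rule):
        """True if no configuration produces `block` in one step of the
        elementary CA `rule`. A length-k output window is determined by
        a length-(k+2) input window, so the check below is exhaustive."""
        k = len(block)
        for pre in itertools.product((0, 1), repeat=k + 2):
            image = tuple((rule >> (4 * pre[i] + 2 * pre[i + 1] + pre[i + 2])) & 1
                          for i in range(k))
            if image == tuple(block):
                return False
        return True

    for n in (2, 3, 4, 5, 6):
        orphans = [b for b in itertools.product((0, 1), repeat=n)
                   if is_orphan(b, 110)]
        print(f"length {n}: {len(orphans)} orphan blocks")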
And yet they satisfied what they wanted, which seems like a natural thing to satisfy: the commutation of the diagram. The diagram they wanted to commute was: evolve on the fine-grained scale and then project. If you use the f operation twice and then project, it's the same as using the projection and then the g operation once. So these commuted, and yet the answer was somewhat unsatisfying.
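That condition, project-after-two-fine-steps equals one-coarse-step-after-project, is easy to check by brute force on a small periodic lattice. A minimal sketch, reusing the OR projection from above; note that for most choices of projection and fine rule f, no coarse rule g passes the test, which is why Israeli and Goldenfeld had to search over projections as well:

    import itertools

    def step(state, rule):
        """One synchronous update of an elementary CA on a cyclic lattice."""
        n = len(state)
        return tuple(
            (rule >> (4 * state[i - 1] + 2 * state[i] + state[(i + 1) % n])) & 1
            for i in range(n))

    def project(state):
        """OR projection over 2-cell supercells, as sketched above."""
        return tuple(a | b for a, b in zip(state[::2], state[1::2]))

    def commutes(f_rule, g_rule, n=8):
        """Check P(f(f(x))) == g(P(x)) for every fine state of n cells.
        (A finite-lattice check: passing here is necessary, not
        sufficient, for commutation on the infinite lattice.)"""
        return all(
            project(step(step(x, f_rule), f_rule)) == step(project(x), g_rule)
            for x in itertools.product((0, 1), repeat=n))

    f_rule = 110
    print([g for g in range(256) if commutes(f_rule, g)])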
In the case of the Ising model, we had a goal: our goal was secretly to figure out what was going on with phase transitions in the two-dimensional grid, where all of a sudden, at some critical point, you found that the whole system coordinated. And so in the end they said, "you know, look, this was not the world's greatest coarse graining, because you couldn't quite get a solution, but it was good enough." What's always happening in each of these stories, the Markov chain, the cellular automata, the Ising model, the Krohn-Rhodes theorem, is that secretly we have some idea of what we want the data to do for us, and therefore we have some idea of what we want this projection operator to be. And in a subset of the cases we also had an idea about g. If you think about the Ising model case, we really didn't like that term that was the quartet, sigma 1, sigma 2, sigma 3 ... We actually just neglected it. And we didn't like it because it made calculations hard.
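For the record, here is the shape of that neglected term (schematic, and in notation I'm assuming rather than quoting from the lecture): decimating the 2D grid generates, beyond the nearest-neighbor coupling, next-nearest-neighbor and four-spin pieces,

    H'(\mu) = K_1 \sum_{\langle ij \rangle} \mu_i \mu_j
            + K_2 \sum_{\langle\langle ij \rangle\rangle} \mu_i \mu_j
            + K_3 \sum_{\square} \mu_i \mu_j \mu_k \mu_l + \cdots

and the move in the lecture is to drop the K_3 (four-spin) term, precisely because keeping it makes the recursion intractable.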
So secretly we also have a little bit of a constraint on g, but in general what we were doing was picking a projection P that we hoped did what we wanted. And that goes all the way back to the Alice in Wonderland story that we began with: here's an image, here's the coarse graining, do you like it?