Cellular Automata: Networks of Renormalization I (Renorm 3.3)

For Israeli and Goldenfeld, it's all about taking one of two possible paths and getting to the same place. One path says: okay, look, I have a fine-grained description of the system; I'm going to evolve it forward with my fine-grained rule, and at the end I'm going to simplify the answer. I'm going to say: yes, I kept track of all these details, but in fact, you know what? I only need to know some of the output, only some of the final state of the system. You can think of that as walking along like this and then projecting up.
Another way you can do it, though, is to say: look, you've given me this fine-grained description, but I know I don't really care so much about the detail, so I'm going to project it down. Here's my fine-grained initial condition; I'm going to project it down to a simpler description, and then I'm going to use a new rule that allows me to evolve that simplified description forward.
So there are two paths, and if you've done it right, they'll get you to the same place: evolve the fine-grained system forward and then project, or project first and evolve the coarse-grained description forward. A mathematician would say that these two operations, the operation of evolving forward and the operation of projecting, commute: you can do one or the other in either order and you'll get the same answer. A then B, projecting then evolving, is the same as B then A, evolving then projecting.
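As a schematic, that commutation condition is easy to write down in code. This is just a minimal sketch of the idea, not anything from the paper; evolve_fine, project, and evolve_coarse are hypothetical stand-ins for the fine rule, the projection, and the coarse rule discussed below.

```python
def paths_agree(initial, evolve_fine, project, evolve_coarse, steps=2):
    """Compare the two paths around the diagram for one initial condition.

    Path 1: evolve the fine-grained state forward, then project.
    Path 2: project first, then take one step of the coarse rule
    (one coarse step corresponds to `steps` fine steps).
    """
    fine = initial
    for _ in range(steps):
        fine = evolve_fine(fine)
    path_1 = project(fine)                      # evolve, then project
    path_2 = evolve_coarse(project(initial))    # project, then evolve
    return path_1 == path_2
```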
So let's see how this plays out with a particular example; again, I'll just take one from their paper. This is rule 105. Rule 105 is quite similar to rule 150, which you've seen before: it takes the XOR of the three pixels above the pixel in question and then inverts it. That final inversion is the only difference between rule 105 and rule 150. Equivalently, the output is black when there's an even number of black cells above.
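In code, the two rules are one-liners. Here's a minimal sketch (my own, not from the paper), with cells written as 0 for white and 1 for black:

```python
def rule150(left, center, right):
    # Rule 150: the XOR (parity) of the three cells above.
    return left ^ center ^ right

def rule105(left, center, right):
    # Rule 105: the XOR of the three cells above, then inverted.
    # Equivalently, the output is black (1) exactly when an even
    # number of the three cells above are black.
    return 1 - (left ^ center ^ right)
```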
So now you know what we have to do, right? First, we're going to consider not the final state of one pixel but the final state of two pixels, and we're going to ask what happens not when you take one time step but when you take two time steps. That means those two final pixels will depend on a group of six pixels two time steps previously. And now we'll consider those pairs of pixels to be the supercells. So you have a big supercell here, which is two pixels and takes four possible states, and you have three supercells up here. So that's our f-hat.
What we have to do now is find a projection p that takes that supercell and summarizes it, simplifies it: it maps each of the four possible states down to one of two possible states. Given the projection p, we want to find an evolution operator g that allows us to evolve those projected-down superstates forward. So p is what takes you from the fine-grained description up to the coarse-grained description, and g is what takes you between two coarse-grained descriptions at different times. You can either go f, f, f, f, p or p, g, g: for every two times you iterate f, you only iterate g once. In this simple example we'll just do the case where you skip one step, so you have supercells of size two.
Fortunately, it turns out to be possible to find a p and a g that enable that diagram to commute, and here it is in the case of rule 105. In this case the projection rule says: look, if the supercell has one cell that's black and one cell that's white, make the coarse cell white; if both cells are white or both cells are black, make it black. It's sort of a little rule in itself, in fact; actually, it looks a little bit like an edge detector: if there's a difference within the supercell, mark it one way, but if the supercell is homogeneous, mark it the other way.
If you use that projection operator, then it turns out that your g, which of course now takes binary values because you've projected the four possible states of the supercell down to a cell with only two possible states, is actually rule 150.
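You can check this claim by brute force. Each pair of output cells two time steps down depends on exactly six input cells, that is, on three supercells, so testing all 64 six-cell inputs is exhaustive. A sketch of that check, under my reading of the projection just described:

```python
from itertools import product

def rule105(l, c, r):
    return 1 - (l ^ c ^ r)   # XOR of the three cells above, then invert

def rule150(l, c, r):
    return l ^ c ^ r         # plain XOR

def p(a, b):
    # The projection described above: a mixed supercell (one black, one
    # white) projects to white (0); a homogeneous one projects to black (1).
    return 1 if a == b else 0

# Two output cells two steps down depend on exactly six input cells
# (three supercells), so checking all 2**6 = 64 cases is exhaustive.
for x in product([0, 1], repeat=6):
    z = [rule105(x[i - 1], x[i], x[i + 1]) for i in range(1, 5)]   # first step
    y2 = rule105(z[0], z[1], z[2])                                 # second step
    y3 = rule105(z[1], z[2], z[3])
    evolve_then_project = p(y2, y3)
    project_then_evolve = rule150(p(x[0], x[1]), p(x[2], x[3]), p(x[4], x[5]))
    assert evolve_then_project == project_then_evolve

print("The diagram commutes: rule 105 coarse-grains into rule 150.")
```

The assertion never fires: projecting after two steps of rule 105 always agrees with one step of rule 150 on the projected supercells.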
So what we've shown is that it's possible to find a non-trivial coarse-graining, an interesting one, and an evolution operator that's still within the space of cellular automata, an evolution operator that enables that diagram to commute. And so, just as we were able to talk about different kinds of Markov chains coarse-graining into each other, we're now able to talk about how rules coarse-grain into each other. In fact, under a non-trivial projection operator, rule 105 coarse-grains into rule 150.
Here's what it looks like. At the top you can see the fine-grained-level description, and at the bottom the coarse-grained-level description. At the top we have the smaller pixels, which are smaller both in the x direction, along this axis here, and in the time direction. The coarse-grained case, of course, maps pairs of states into one, and the jumps are now larger: two time steps instead of one.
By looking at the comparison between the two, you can sort of see what's going on. First of all, the coarse-grained description is capturing something interesting about the fine-grained description. We still have the idea that these little perturbations we begin with lead to these expanding waves; we still get that kind of wave-like texture, these propagating spaces with a kind of internal structure.
But you can also see that we're missing things, too. If you look at those two triangles at the fine-grained level, one of them is sort of darker than the other, but when we coarse-grain, the difference between those two triangles goes away. So somehow rule 150, as we evolve it forward on our coarse-grained descriptions, has thrown out some interesting features of rule 105.
Another obvious feature that distinguishes rule 105 from rule 150 is that we lose that kind of zebra-stripe pattern, and that's of course because if a pair of squares is both white or both black, the projection operator maps either case to the same black square. So we've lost some of the structure, both in the places where those propagating triangle waves have reached and within the triangles themselves.
This gives you a little bit of a better sense of it; as we'll see, it's not always the case that the picture you get when you coarse-grain looks similar, in important respects, to the fine-grained description. Here it's a particularly elegant example of how we're able to capture something about the rule, but of course not everything. We can't capture everything, because that projection operator is a lossy compression: it throws out information. For rule 105 it really matters whether everything is white or everything is black, but when we do the projection to rule 150, both of those cases map to the same state.
So Israeli and Goldenfeld hacked, and they hacked and hacked and hacked: they looked at all 256 rules and tried to figure out the extent to which one rule could coarse-grain into another. The arrows here show where it's possible to find a projection operator and an evolution operator that allow one rule set to map to another under coarse-graining. In fact, they considered not only supercells of size 2 but also of size 3 and size 4, and the computation starts getting really hard, because there are so many different kinds of projection operators you can use, and so many different possible evolutions you can pick, that you start to run out of time: it gets exponentially hard to find a good projection operator, exponentially hard to search the space.
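For supercells of size 2, that search is still small enough to sketch in full. The idea (my reconstruction, not Israeli and Goldenfeld's actual code): for a fine rule f, try each of the 16 possible projections of the four supercell states onto {0, 1}; the fine-grained preimages then force what the coarse output on each coarse neighborhood must be, and a coarse rule g exists exactly when those forced outputs are consistent.

```python
from itertools import product

def two_step_pair(f, x):
    """The middle pair of cells two steps below the six cells x[0..5],
    under elementary CA rule number f (standard Wolfram numbering)."""
    bit = lambda l, c, r: (f >> (l * 4 + c * 2 + r)) & 1
    z = [bit(x[i - 1], x[i], x[i + 1]) for i in range(1, 5)]   # first step
    return bit(z[0], z[1], z[2]), bit(z[1], z[2], z[3])        # second step

def coarse_rules(f):
    """All (projection, coarse rule) pairs with supercell size 2 that make
    the diagram commute for fine rule f. A projection maps the 4 supercell
    states to {0, 1} and is encoded as a 4-bit number."""
    found = []
    for pnum in range(16):
        p = lambda a, b, n=pnum: (n >> (a * 2 + b)) & 1
        table = {}   # coarse neighborhood -> forced coarse output
        consistent = True
        for x in product([0, 1], repeat=6):
            key = (p(x[0], x[1]), p(x[2], x[3]), p(x[4], x[5]))
            out = p(*two_step_pair(f, x))
            if table.setdefault(key, out) != out:
                consistent = False
                break
        if consistent and len(table) == 8:   # non-trivial: all 8 neighborhoods occur
            g = sum(v << (a * 4 + b * 2 + c) for (a, b, c), v in table.items())
            found.append((pnum, g))
    return found

print(coarse_rules(105))   # the coarse rules found should include 150
```

For larger supercells the same idea applies, but a supercell of size N has 2**N states and therefore 2**(2**N) candidate projections, which is where the exponential wall comes from.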
There's only a partial map, but what you can see, for example, at the bottom is the result we just talked through a little laboriously: the fact that it's possible to find a projection operator that takes you from evolution rule 105 to evolution rule 150.
By the way, one of the things you can notice from this graph is that Israeli and Goldenfeld clearly haven't found every possible coarse-graining relationship, because there's a feature this network should have that doesn't fully appear. Namely, if A coarse-grains into B, if A renormalizes into B (meaning it's possible to find a projection and an evolution operator that take A to B), and B renormalizes into C, then it should also be possible to renormalize A into C. Of course, you're now coarse-graining twice, and those relationships are harder for Israeli and Goldenfeld to find. But if you look at this chart, what you should see is, for example, that not only does rule 23 coarse-grain to rule 128, and not only does rule 128 coarse-grain to rule 0, but it should also be possible for rule 23 to coarse-grain all the way down to rule 0, just by doing two projections and zooming out even further.
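That composition step can be checked mechanically. The chart doesn't hand us the projections behind the 23 to 128 to 0 chain, but we do know one chain from this lecture: rule 105 coarse-grains into rule 150, and (as we'll see in a moment) rule 150 coarse-grains into itself. Assuming the same homogeneity projection works at both stages, composing the two size-2 coarse-grainings gives a size-4 coarse-graining of rule 105 into rule 150, which this sketch verifies:

```python
from itertools import product

def rule105(l, c, r):
    return 1 - (l ^ c ^ r)

def rule150(l, c, r):
    return l ^ c ^ r

def p(a, b):
    # Size-2 homogeneity projection used at both stages (an assumption).
    return 1 if a == b else 0

def P(a, b, c, d):
    # Composed projection on a size-4 supercell: project each half,
    # then project the pair of results.
    return p(p(a, b), p(c, d))

def step(rule, row):
    # One synchronous update of the interior cells (the row shrinks by 2).
    return [rule(row[i - 1], row[i], row[i + 1]) for i in range(1, len(row) - 1)]

# Four steps of rule 105 map 12 cells down to the middle 4. Check that
# projecting those 4 agrees with one step of rule 150 applied to the
# three projected size-4 supercells, for all 2**12 inputs.
for x in product([0, 1], repeat=12):
    row = list(x)
    for _ in range(4):
        row = step(rule105, row)
    assert P(*row) == rule150(P(*x[0:4]), P(*x[4:8]), P(*x[8:12]))

print("Two projections compose: rule 105 coarse-grains into rule 150 at size 4.")
```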
That said, Israeli and Goldenfeld did a pretty good job, looking at an enormous number of possible relationships between all of these rule sets, and I find these diagrams quite compelling. They tell you something really complicated, really interesting, about how deterministic rules and deterministic projections map into each other.
One of the things you'll see from that network is that not only does rule 105 coarse-grain to rule 150, but in fact rule 150 coarse-grains into itself. The pretentious way to say this: rule 150 is a fixed point of the renormalization group. With that projection operator, you actually take rule 150 into a zoomed-out version of itself: you sort of skip a step, you project down the supercells, and you recover the same rule.
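We can run the same brute-force commutation check with rule 150 on both levels. One caveat: the lecture doesn't spell out which projection does the job here, so using the same homogeneity projection as before is an assumption on my part (it does make the diagram commute):

```python
from itertools import product

def rule150(l, c, r):
    return l ^ c ^ r

def p(a, b):
    # Same homogeneity projection as before (an assumption here).
    return 1 if a == b else 0

# Check all 64 six-cell inputs: two steps of rule 150, projected,
# must match one step of rule 150 on the projected supercells.
for x in product([0, 1], repeat=6):
    z = [rule150(x[i - 1], x[i], x[i + 1]) for i in range(1, 5)]
    y2, y3 = rule150(z[0], z[1], z[2]), rule150(z[1], z[2], z[3])
    assert p(y2, y3) == rule150(p(x[0], x[1]), p(x[2], x[3]), p(x[4], x[5]))

print("Rule 150 coarse-grains into itself: a renormalization fixed point.")
```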
Now, it's important to notice there's a subtlety here. This doesn't mean that the image itself is self-similar; it doesn't necessarily mean that rule 150 is kind of fractal in some interesting way, because the coarse-graining may not be the kind of coarse-graining that simply zooms out. Consider, for example, the projection we had going from rule 105 to rule 150. It wasn't a simple decimation of the kind we did on the Alice picture at the beginning of this renormalization module.
In that case, when we renormalized Alice, we took her picture, looked at little blocks of cells, and just took one of the values to define the value of the larger grid cell, the supercell in the Alice case. But if you remember, the rule 105 to rule 150 projection that worked in that case was actually an edge detector: if the supercell was all white or all black, it got mapped to something all black.
  223. So it doesn't necessarily mean
    that if you kind of fuzz
  224. rule 150 it still looks like rule 150.
  225. It really depends upon the details
    of that projection operator.
  226. That said in fact you might
    think of it another way.
  227. Ihe rule 150 is a fixed point
    of renormalization group
  228. with potentially a much more interesting projection
  229. than a simple decimation.