Nonlinear 9.2 Lyapunov Exponents, Kantz, Etc

Like forward Euler, and like backward Euler, the Wolf algorithm is a useful thing to talk about in a class because it helps people understand the basic ideas, but no one really uses it anymore. Although you will see an occasional exception to that statement, usually in the literature of a different field. The primary drawback of the Wolf algorithm is its single-point nature - the fact that you are picking a starting point and looking for the nearest neighbor at every step. As you can well imagine, any noise in the data might really trash that.

The Kantz algorithm and the family of related methods address that by picking a whole bunch of points, tracking them in parallel, averaging their distance to some central point, and watching how that quantity grows.

As in the Wolf algorithm, you pick a starting point in the trajectory, then you draw an epsilon ball around it, find all the points in the trajectory that are inside that epsilon ball, and measure their distances to that central point. Finally, you average all those distances. Now remember, these are all points in a single trajectory. For each one of them, you know where it goes next. If the center point, for example, is the 17th point in the trajectory, you know where the 18th point is. So what you do is figure out where each of those points goes next... and I'm going to redo this drawing so it's a little less confusing... and in this drawing, I'm going to draw in green where each point goes next. This is where the middle one goes next. This one goes here next. This one will go here next. And this one here next, and so on and so forth.

This drawing doesn't exactly match the one on the left, by the way; it's just a schematic. Then what I do is compute the distances between the forward image of that center point - which is where the first one went - and the forward images of all those other points. The ratio of the average of all the red distances in this second plot to the average of all the red distances in the first plot is a measure of how much the dynamics is stretching out state space around that central point. It's called the stretching factor, and that stretching factor is exactly what the Lyapunov exponent is trying to capture. The Kantz algorithm repeats this calculation for a range of different time lengths and a bunch of different initial points.

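To make that procedure concrete, here is a minimal Python sketch of the stretching-factor calculation, in the spirit of the Kantz method. This is not the lecture's (or TISEAN's) implementation: the function name, the arguments, and the brute-force neighbor search are my own choices, and a serious version would also exclude temporally adjacent points from the epsilon ball and operate on properly delay-embedded data.

```python
import numpy as np

def kantz_stretching(traj, eps, max_dt):
    """Average log of the mean neighbor distance, dt steps forward.

    traj   : (N, d) array of points along a single trajectory
    eps    : radius of the ball drawn around each reference point
    max_dt : how many steps forward to follow the neighbors

    Returns S, the stretching curve; the slope of S versus dt in
    its scaling region estimates lambda_1.
    """
    n = len(traj)
    S = np.zeros(max_dt + 1)
    counts = np.zeros(max_dt + 1)
    for i in range(n - max_dt):
        # find every trajectory point inside the epsilon ball around traj[i]
        dists = np.linalg.norm(traj - traj[i], axis=1)
        nbrs = np.where((0 < dists) & (dists < eps))[0]
        nbrs = nbrs[nbrs < n - max_dt]  # keep only neighbors with forward images
        if len(nbrs) == 0:
            continue
        for dt in range(max_dt + 1):
            # distances between the forward image of the center point
            # and the forward images of all the neighbors
            fwd = np.linalg.norm(traj[nbrs + dt] - traj[i + dt], axis=1)
            S[dt] += np.log(np.mean(fwd))
            counts[dt] += 1
    return S / np.maximum(counts, 1)
```

Plotting S against dt and looking for a diagonal stretch is exactly the log-of-delta-s versus time picture described next.
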
If you plot the results on a log of delta s versus time curve, you'll see a curve like this. Now what's going on here? When time is small, that means you're not giving things much time to stretch out, so they won't stretch; the ratio will be 1 and its log will be 0. If you let them stretch for a really, really long time, however, the points in the epsilon ball will spread out all over the attractor. That's as far apart as they can get, and that's as far as the volume can grow. That's this upper flat area. In between those asymptotes, there's a scaling region...

I apologize for the wiggly writing, by the way; I downloaded a new copy of the sketchpad software that I use on the tablet and it's making my writing look awful... Let's think about what a diagonal line on a log-linear plot means. What that means is that the natural log of delta s is proportional to time, and the constant of proportionality is a. If we take e to both sides of this, we'll get delta s = e^(at), where a is this slope. And state space stretches as e to the lambda t, so the slope of the scaling region, a, is the Lyapunov exponent.

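Spelling out the algebra that was written on the tablet (with delta s as the average separation and a as the slope of the scaling region):

```latex
\ln \Delta s = a\,t
\;\Longrightarrow\;
\Delta s = e^{a t},
\qquad\text{and since}\qquad
\Delta s \propto e^{\lambda_1 t},
\qquad
\lambda_1 = a .
```
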
You saw a similar thing in the previous segment about calculating the fractal dimension, by the way, although the axes were different. There, we were after the relationship between the log of the number of boxes of size epsilon needed to cover something and the log of one over the size of those boxes, so we plotted log of one over epsilon on the horizontal axis, looked for a scaling region on that curve, and took its slope as the value of the capacity dimension.

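For reference, that box-counting recipe corresponds to the standard capacity-dimension formula (not written out in this segment):

```latex
d_{\mathrm{cap}} = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)},
```

where N(epsilon) is the number of boxes of side epsilon needed to cover the object; in practice you read off the slope of log N(epsilon) versus log(1/epsilon) in the scaling region.
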
In any calculation like these two that depends on the existence of a scaling region, it's critically important not to impute the existence of one - that is, not to see one there just because you want it there when it's really not. And that's a really subtle problem, because scaling regions are very hard to define. You could, for example, fit a line to a chunk of your curve and insist that the r-squared value of that fit be above 0.9 or something, but that's an arbitrary threshold.

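Here is a small Python sketch of that line-fitting check, to make the arbitrariness visible. The sliding window width and the 0.9 cutoff are knobs I chose for illustration, and nothing in the code can tell you whether the window it picks is a genuine scaling region.

```python
import numpy as np

def best_scaling_slope(t, log_ds, width, r2_min=0.9):
    """Fit a line to each sliding window of a log(delta s)-vs-time curve;
    return the slope of the window with the best R^2 above the threshold,
    or None if no window qualifies.  t and log_ds are 1-D arrays."""
    best_slope, best_r2 = None, r2_min
    for start in range(len(t) - width + 1):
        w = slice(start, start + width)
        slope, intercept = np.polyfit(t[w], log_ds[w], 1)
        resid = log_ds[w] - (slope * t[w] + intercept)
        ss_tot = np.sum((log_ds[w] - np.mean(log_ds[w])) ** 2)
        if ss_tot == 0:
            continue  # flat window: R^2 is undefined there
        r2 = 1 - np.sum(resid ** 2) / ss_tot
        if r2 >= best_r2:
            best_slope, best_r2 = slope, r2
    return best_slope
```

Even with a filter like this in place, a person still has to look at the curve and judge whether the region it finds is real.
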
All of this brings up an interesting and important point about nonlinear time series analysis, one that's come up before. This is a very, very powerful tool, but it has to be used wisely. And what I mean by wisely is with a person in the loop. Someone needs to look at those plots and see if they have the right shape before deciding to pick off a fractal dimension or a Lyapunov exponent. If it doesn't have the right shape from your perspective - not mine - or if that shape doesn't persist as you turn the knobs on the algorithm, like the size of that epsilon ball in the Kantz algorithm, you shouldn't use that curve to pick off those values. And that means, among other things, that these kinds of calculations are all but impossible to automate.

By that I mean that if you have 1000 data sets and you want to compute the Lyapunov exponent of all 1000 of them, it's not going to work to simply throw them in the hopper of either of these algorithms and go off and have a beer. The answers won't be right. You need to be involved in the process, because the problem is hard and subtle - and you now know how to do that.

The reason for this is not that these tools are lousy, but rather that nonlinear time series analysis is a very hard problem. Linear tools are easy to use and they always give you an answer, but if you apply them to a nonlinear system, that answer may well be very wrong. Remember the lamp post.

Knowing how these algorithms work should also help you understand and respect the requirements on the kind of data that is needed for them to work properly. The data set needs to be long enough to cover the behavior that you are trying to sample. It needs to be sampled quickly enough to catch all the details of that behavior, and it needs to not have so much noise as to obscure those details. And those are tough requirements sometimes.

The TISEAN toolset, by the way, includes the Kantz algorithm for computing Lyapunov exponents from data.

All of that was about calculating lambda 1, the largest positive exponent. There are n-1 other Lyapunov exponents in an n-dimensional system. How do you calculate them? That's much harder. Instead of following the spreading of a 2-D volume, as sampled by the data, you have to follow higher-dimensional volumes. The algorithms, and the numerics, and the data requirements were hard enough for following the 2-D volumes. The 3-D version occasionally works, giving you lambda 2, but I've never seen results beyond that that I'm really happy with.