English subtitles

04-34 Left Turn Policy Solution

Showing Revision 1 created 06/29/2012 by Amara Bot.

  1. Here is my solution, I have the value function initialized. It has lots of 999s.
  2. The policy is a similar function in 3D.
  3. Then I have a function called policy2d, which is the one I'm later going to print.
  4. That's the same in 2D.
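The setup described so far might look like the following sketch. The grid contents and the names `grid`, `value`, `policy`, and `policy2D` are assumptions for illustration; 999 is the large placeholder value mentioned above.

```python
# Sketch of the initialization described in the transcript (names are
# assumptions; 999 is the large placeholder cost mentioned above).
grid = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]  # 0 = navigable, 1 = obstacle

# 3D value function: one 2D layer per orientation (4 orientations).
value = [[[999 for _ in row] for row in grid] for _ in range(4)]

# 3D policy, same shape, initialized with a blank symbol.
policy = [[[' ' for _ in row] for row in grid] for _ in range(4)]

# 2D policy: the table that gets printed at the end.
policy2D = [[' ' for _ in row] for row in grid]
```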
  5. Scrolling down, my update function is exactly the same as before for dynamic programming:
  6. while change exists, go through all [x, y] cells and all orientations,
  7. of which there are 4, so it's now a deeper loop.
  8. If you found the goal location, then update the value,
  9. and if there's an actual update, set "change" to True
  10. and also mark it as the goal location.
  11. Otherwise, if our grid cell is navigable at all,
  12. let's go through the 3 different actions, and here's the tricky part
  13. of how to make the actions work, but it works beautifully.
  14. We go through the 3 different actions.
  15. When we take the ith action,
  16. we add the corresponding orientation change to our orientation, modulo 4.
  17. It's a cyclic buffer, so this might subtract 1,
  18. keep it the same, or add 1 to the orientation.
  19. Then we apply the corresponding new motion model to x and y to obtain x2 and y2.
  20. Then over here is our model of a car that steers first and then moves.
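The steer-first, then-move step can be sketched as a small helper. The particular forward vectors and the action list `[-1, 0, 1]` are assumptions consistent with the model described here, where an action changes orientation modulo 4 before the motion is applied.

```python
forward = [[-1,  0],   # go up
           [ 0, -1],   # go left
           [ 1,  0],   # go down
           [ 0,  1]]   # go right

action = [-1, 0, 1]    # orientation change: right turn, straight, left turn

def steer_then_move(x, y, orientation, i):
    # Add the orientation change modulo 4 (cyclic buffer), then
    # move one cell in the NEW heading: steer first, then move.
    o2 = (orientation + action[i]) % 4
    x2 = x + forward[o2][0]
    y2 = y + forward[o2][1]
    return x2, y2, o2
```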
  21. Scrolling down further, if we arrived at a valid grid cell in that it's still inside the grid
  22. and it's not an obstacle, then like before we compute a candidate value:
  23. the value of this new grid cell plus the cost of the corresponding action.
  24. This is non-uniform, depending on what action we pick now.
  25. If this improves over the existing value,
  26. we set this to be the new value, and we mark change as True.
  27. We also memorize the action name as before.
  28. This is all effectively the same code as we had before
  29. when we did dynamic programming in a 2-dimensional world.
  30. It gets us the value function, and it gets us the policy action.
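Putting the pieces together, the sweep described above might look like the sketch below. The function name, the forward vectors, and the per-action symbols are assumptions; `cost` is passed in as one cost per action, which is what makes the update non-uniform.

```python
def compute_values_and_policy(grid, goal, cost):
    # Sketch of the 3D dynamic-programming sweep described in the transcript.
    # `cost` holds one cost per action (right turn, straight, left turn),
    # so the update is non-uniform depending on the action taken.
    forward = [[-1, 0], [0, -1], [1, 0], [0, 1]]  # up, left, down, right
    action = [-1, 0, 1]                           # orientation change per action
    action_name = ['R', '#', 'L']

    value = [[[999 for _ in row] for row in grid] for _ in range(4)]
    policy = [[[' ' for _ in row] for row in grid] for _ in range(4)]

    change = True
    while change:                                 # repeat until nothing improves
        change = False
        for x in range(len(grid)):
            for y in range(len(grid[0])):
                for orientation in range(4):      # the extra, deeper loop
                    if [x, y] == goal:
                        if value[orientation][x][y] > 0:
                            value[orientation][x][y] = 0
                            policy[orientation][x][y] = '*'   # mark the goal
                            change = True
                    elif grid[x][y] == 0:         # navigable cell
                        for i in range(3):        # the 3 actions
                            o2 = (orientation + action[i]) % 4  # steer first,
                            x2 = x + forward[o2][0]             # then move
                            y2 = y + forward[o2][1]
                            if (0 <= x2 < len(grid)
                                    and 0 <= y2 < len(grid[0])
                                    and grid[x2][y2] == 0):
                                v2 = value[o2][x2][y2] + cost[i]
                                if v2 < value[orientation][x][y]:
                                    value[orientation][x][y] = v2
                                    policy[orientation][x][y] = action_name[i]
                                    change = True
    return value, policy
```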
  31. However, I printed out a 2-dimensional table, not a 3-dimensional table.
  32. To get to the 2-dimensional table, I now need to be sensitive to my initial state.
  33. Otherwise, it actually turns out to be undefined.
  34. Let me set the initial state to be x, y, and orientation.
  35. All I do now is run the policy.
  36. With the very first state, I copy over the policy from the 3-dimensional table
  37. into the 2-dimensional one, which will be this hash mark over here.
  38. While I haven't reached the goal state quite yet, as indicated
  39. by checking for the star in my policy table:
  40. now, my policy table has a hash mark, R, and L,
  41. but otherwise is the same as before.
  42. If it's a hash mark, we just keep our orientation the way it is.
  43. If it's R, I turn to the right; if it's L, I turn to the left.
  44. I apply my forward motion,
  45. and I then update my new x and y coordinates
  46. to be the corresponding values after the motion,
  47. and I update my orientation to be o2.
  48. Finally, I copy the 3-dimensional symbol for my policy straight into the 2-dimensional array.
  49. This is the array that I finally print.
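The rollout from the 3D policy down to the printable 2D table might be sketched as follows, given a `policy` table in the format above. The function name is an assumption; `init = [x, y, orientation]` is the initial state the transcript says we must be sensitive to, and the code assumes the policy entries along the path are well-formed ('#', 'R', 'L', or '*').

```python
def run_policy(policy, init, grid):
    # Walk the 3D policy from the initial state, copying each symbol
    # into a 2D table until the goal star is reached.
    forward = [[-1, 0], [0, -1], [1, 0], [0, 1]]  # up, left, down, right
    policy2D = [[' ' for _ in row] for row in grid]

    x, y, orientation = init
    # Copy the very first symbol from the 3D table into the 2D one.
    policy2D[x][y] = policy[orientation][x][y]
    while policy[orientation][x][y] != '*':       # not at the goal yet
        if policy[orientation][x][y] == '#':      # go straight: same heading
            o2 = orientation
        elif policy[orientation][x][y] == 'R':    # turn right
            o2 = (orientation - 1) % 4
        elif policy[orientation][x][y] == 'L':    # turn left
            o2 = (orientation + 1) % 4
        # Apply the forward motion in the new heading.
        x = x + forward[o2][0]
        y = y + forward[o2][1]
        orientation = o2
        policy2D[x][y] = policy[orientation][x][y]
    return policy2D
```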
  50. The key insight here is that to go from the full 3-dimensional policy
  51. to a 2-dimensional array, I had to run the policy.
  52. That's something you would have done to get back this table over here.
  53. That's somewhat nontrivial. I didn't tell you this, but I hope you figured it out.
  54. But everything else is the same dynamic programming loop that you've seen before.