## 13-36 Conclusion


Unit 13 36 Conclusion.mp4


Showing Revision 1 created 11/28/2012 by Amara Bot.

Let's summarize what we've done so far. We've built up a valuation function that tells us the value of any state, and therefore lets us choose the best action in a state. We started off with just terminal states and max-value states. That's good for one-player, deterministic games, and we realized that this is the same thing as the searches we've seen before, such as A* search or depth-first search.
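The one-player case can be sketched in a few lines. The state representation below is invented for illustration (the lecture doesn't specify one): a terminal state is a plain number standing for its utility, and a non-terminal state is a list of successor states.

```python
# Valuing a one-player, deterministic game tree: with no opponent,
# the value of a state is just the best value among its successors.

def value(state):
    """Value of a state: its utility if terminal, else the best successor."""
    if isinstance(state, (int, float)):   # terminal state
        return state
    return max(value(s) for s in state)   # single player, so always maximize

def best_action(state):
    """Index of the successor with the highest value."""
    return max(range(len(state)), key=lambda i: value(state[i]))

# A tiny tree: two available moves, each leading to terminal utilities.
tree = [[3, 5], [2, 9]]
```

Here `value(tree)` is 9, and `best_action(tree)` picks the second move, exactly as a depth-first search over the same tree would.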
Then we added an opponent for two-player or multiplayer games — a player who is trying to minimize rather than maximize — and we saw how to handle that.
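Adding the minimizing opponent gives the classic minimax recursion. This is a sketch over the same hypothetical nested-list encoding used above: numbers are terminal utilities, and each level of the tree belongs to the other player.

```python
# Two-player minimax: Max picks the largest child value,
# Min picks the smallest, alternating by level.

def minimax(state, maximizing):
    if isinstance(state, (int, float)):
        return state                      # terminal utility
    values = (minimax(s, not maximizing) for s in state)
    return max(values) if maximizing else min(values)

tree = [[3, 5], [2, 9]]
```

With Max to move at the root, `minimax(tree, True)` is 3: the second branch looks tempting (it contains a 9), but Min would steer it to 2.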
Then we optimized: at some point we may not be able to search the whole tree, so we introduced a cutoff depth and an evaluation function. We recognized that this means we're no longer evaluating the tree perfectly; we now have an estimate.
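The cutoff idea can be sketched by threading a depth counter through minimax and falling back to an evaluation function at depth zero. Both `depth_limited` and the average-of-leaves heuristic below are hypothetical stand-ins; a real evaluation function would score game-specific features of the position.

```python
# Minimax with a cutoff depth: exact values at true terminals,
# an estimate from `evaluate` once the depth budget runs out.

def depth_limited(state, depth, maximizing, evaluate):
    if isinstance(state, (int, float)):
        return state                      # exact value at a true terminal
    if depth == 0:
        return evaluate(state)            # estimate at the cutoff
    values = (depth_limited(s, depth - 1, not maximizing, evaluate)
              for s in state)
    return max(values) if maximizing else min(values)

def average_of_leaves(state):
    """Crude illustrative heuristic: mean utility of all leaves below."""
    leaves, stack = [], [state]
    while stack:
        s = stack.pop()
        if isinstance(s, (int, float)):
            leaves.append(s)
        else:
            stack.extend(s)
    return sum(leaves) / len(leaves)

tree = [[3, 5], [2, 9]]
```

Searching `tree` to full depth gives the exact minimax value 3, but cutting off at depth 1 returns the estimate 5.5 — which is precisely the loss of perfection the lecture describes.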
We also tried to be more computationally efficient by adding the alpha and beta parameters, which keep track of the best value so far for Max and Min, and which let us prune branches of the tree that fall outside that range — branches that are provably not part of the answer for the best value. We kept track of those through these bookkeeping parameters.
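A sketch of those bookkeeping parameters, again over the hypothetical nested-list trees used above. The `visited` list is only there to make the pruning observable; it is not part of the algorithm.

```python
# Alpha-beta pruning: alpha is the best value Max can guarantee so far,
# beta the best Min can guarantee. Once alpha >= beta, the remaining
# siblings cannot affect the answer and are skipped.

visited = []   # leaves actually examined, for demonstration only

def alphabeta(state, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(state, (int, float)):
        visited.append(state)
        return state
    if maximizing:
        v = float('-inf')
        for s in state:
            v = max(v, alphabeta(s, False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                    # Min would never let us get here
        return v
    v = float('inf')
    for s in state:
        v = min(v, alphabeta(s, True, alpha, beta))
        beta = min(beta, v)
        if alpha >= beta:
            break                        # Max already has something better
    return v

tree = [[3, 5], [2, 9]]
```

On this tree `alphabeta(tree, True)` returns the same value as plain minimax, 3, but the leaf 9 is never examined: after seeing the 2, Min's bound drops below Max's guaranteed 3, so the rest of that branch is pruned.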
Then finally we introduced stochastic games, in which there is an element of chance or luck, such as rolling dice. We realized that in order to value those nodes, we have to take the expected value rather than the minimum or the maximum value.
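Chance nodes slot into the same recursion as a third node type. The `('chance', outcomes)` tuple encoding here is invented for the example; each outcome pairs a probability with a resulting state.

```python
# Expectiminimax sketch: player nodes take max or min as before,
# but a chance node takes the probability-weighted average of its outcomes.

def expectiminimax(state, maximizing):
    if isinstance(state, (int, float)):
        return state
    if isinstance(state, tuple) and state[0] == 'chance':
        # Expected value over outcomes, not min or max.
        return sum(p * expectiminimax(s, maximizing) for p, s in state[1])
    values = (expectiminimax(s, not maximizing) for s in state)
    return max(values) if maximizing else min(values)

# A fair coin flip between utilities 10 and 0, versus a sure 4.
flip = ('chance', [(0.5, 10), (0.5, 0)])
tree = [flip, 4]
```

Max prefers the coin flip here: its expected value is 5.0, which beats the certain 4 — even though half the time the flip pays nothing.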
Now we have a way to deal with all the popular types of games. The remaining details lie in figuring out when to cut off the search and what the right evaluation function is. That's a complex area: a lot of AI research is being done on it, but it's being done for specific games rather than for the theory in general.