## 06-23 EM as Generalization of k-Means


Unit 6, 5i: EM as Generalization of k-Means


Showing Revision 1 created 11/28/2012 by Amara Bot.

1. It is now really easy to explain expectation maximization
2. as a generalization of K-means.
3. Again, we have a couple of data points here
4. and 2 randomly chosen cluster centers.
5. But in the correspondence step instead of making a hard correspondence
6. we make a soft correspondence.
7. Each data point is attracted to a cluster center
8. in proportion to the posterior probability,
9. which we will define in a minute.
10. In the adjustment step or the maximization step
11. the cluster centers are being optimized just like before
12. but now the correspondence is a soft variable,
13. and the centers correspond to all data points with different strengths,
14. not just the nearest ones.
15. As a result, in EM the cluster centers
16. tend not to move as far as in K-means.
17. Their movement is smoothed out.
18. A new correspondence over here gives us different strengths,
19. as indicated by the different coloring of the links
20. and another relaxation step gives us better cluster centers.
21. And as you can see over time, gradually
22. the EM will then converge to about the same solution as K-means.
23. However, all the correspondences are still alive,
24. which means there is no hard 0/1 correspondence.
25. There is a soft correspondence
26. which relates to a posterior probability, which I will explain next.