Scaling - Web Development


Showing Revision 10 created 05/24/2016 by Udacity Robot.

  1. So we had these app servers that were all running their own caches,
  2. and then we had a couple databases that were all replicas of each other.
  3. At this point we added a load balancer, and this load balancer
  4. was probably a program running on one of these app machines,
  5. and these guys were still keeping their caches in sync by interacting with the databases directly.
  6. And we had a limit to how many app servers we could have because we had this complicated cache-spreading scheme.
  7. The next thing we added was the memcache layer.
  8. So instead of these app servers containing their own in-memory cache,
  9. they would communicate via memcache.
  10. So instead of having to keep their caches in sync, we just had one cache that was shared among all of our app servers.
  11. I'm sad it took us so long to figure this out because memcache existed when we started reddit,
  12. and we should have been using it from the beginning.
  13. This is what allowed us to get all of that state, all of that cache, out of the apps and into memcached
  14. and allowed us to add apps arbitrarily.
  15. Once we had that going, that allowed us to scale our apps
  16. and they stayed in sync, so we could add an app or lose an app and not have to worry about it.
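The move described above — pulling cache state out of the individual app servers and into one shared memcached layer — can be sketched as follows. This is an illustrative sketch only: the `FakeMemcache` dict stands in for a real memcached server (a production client such as pymemcache would replace it), and the keys and data are made up.

```python
# Sketch of the shared-cache pattern: app servers keep no local cache
# state, so any number of them stay consistent automatically. A plain
# dict stands in for the memcached server to keep the example
# self-contained; in production this would be a memcached client.

class FakeMemcache:
    """Stands in for the single shared memcached instance."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

class AppServer:
    """An app server with no in-process cache: all cached reads and
    writes go through the shared cache, so servers never drift."""
    def __init__(self, cache, db):
        self.cache = cache
        self.db = db

    def get_link(self, link_id):
        cached = self.cache.get(link_id)
        if cached is not None:
            return cached
        value = self.db[link_id]        # cache miss: hit the database
        self.cache.set(link_id, value)  # populate the shared cache
        return value

db = {"link:1": "Hello, reddit"}
shared = FakeMemcache()
app_a = AppServer(shared, db)
app_b = AppServer(shared, db)  # a second server can be added arbitrarily

app_a.get_link("link:1")       # warms the shared cache
db.clear()                     # app_b now sees app_a's cached value
assert app_b.get_link("link:1") == "Hello, reddit"
```

Because no server holds private state, adding or losing an app server needs no cache re-synchronization, which is exactly the property being described.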
  17. The next thing we had to start
    dealing with was the database load.
  18. So we were already replicating, for durability and for performance reasons,
  19. so we could spread our reads across multiple machines. Then we started segmenting on type.
  20. So we'd have a database for just links; then we could separate comments out into its own database.
  21. And so these would still replicate to each other, but if you're only submitting a link,
  22. you only have to touch the links database, and if you're only reading a comment,
  23. for example, you only have to touch the comments database.
  24. And this is actually still basically the general setup reddit has today in terms of how the database is scaled.
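The segmenting-on-type idea above can be sketched as a small routing layer: the app picks a database connection by the kind of object it is touching. This is a hypothetical illustration — the host names and the dict-backed "connections" are invented stand-ins for real database connections.

```python
# Sketch of segmenting the database by type: links live on one
# database, comments on another, and the app routes each query by the
# object's kind. Dicts stand in for real database connections.

DATABASES = {
    "link":    {"host": "db-links",    "data": {}},
    "comment": {"host": "db-comments", "data": {}},
}

def db_for(kind):
    # Submitting a link only ever touches the links database;
    # reading a comment only ever touches the comments database.
    return DATABASES[kind]

def save(kind, obj_id, obj):
    db_for(kind)["data"][obj_id] = obj

def load(kind, obj_id):
    return db_for(kind)["data"][obj_id]

save("link", 1, {"url": "http://example.com"})
save("comment", 7, {"body": "nice"})
assert load("link", 1)["url"] == "http://example.com"
assert db_for("comment")["host"] == "db-comments"
```

The payoff is that load on one object type (say, a comment storm) never lands on the machines serving another type.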
  25. And we never wrote sharding in the beginning, and I really regret that.
  26. When I rewrote the ThingDB, the second version of it, I had in the back of my head,
  27. you know, I should add sharding, because we're going to need that someday.
  28. And then I just wanted to get the damn thing into production so I stopped.
  29. The big lesson I've learned is when you're writing a big system like that,
  30. if you don't do the hard parts up front, you may never get another opportunity to do them,
  31. because now the database is so big that if we wanted to bolt on sharding, it would be a huge project.
  32. It's easier right now to just add bigger machines and more caching.
  33. It's not going to work forever, and somebody's going to have to bite the bullet and do that.
  34. And it would have been a lot easier to do it at the time.
  35. Since we stopped using joins in all of our queries when we switched to ThingDB,
  36. sharding's actually fairly straightforward if you kind of do it right from the beginning.
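The reason join-free queries make sharding straightforward is that every object is fetched by key, so a stable hash of the key can pick the shard. A minimal sketch of that idea, with dicts standing in for the shard databases (the shard count and key format are made up for illustration):

```python
# Sketch of key-based sharding: with no joins, every query names its
# key, so a stable hash of the key routes it to one shard. Dicts stand
# in for the shard databases.

import hashlib

SHARDS = [dict() for _ in range(4)]  # four stand-in shard databases

def shard_for(key):
    # hashlib rather than hash() so the mapping is stable across runs;
    # the same key always lands on the same shard
    digest = hashlib.md5(key.encode()).digest()
    return SHARDS[digest[0] % len(SHARDS)]

def put(key, value):
    shard_for(key)[key] = value

def get(key):
    return shard_for(key).get(key)

put("link:42", {"title": "sharded"})
assert get("link:42") == {"title": "sharded"}
```

One caveat that echoes the "do the hard parts up front" point: simple modulo sharding like this makes adding shards later painful, since most keys remap; schemes like consistent hashing exist to soften that, which is another reason it pays to decide early.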
  37. Over time some of the software on these app servers changed, though we've always been using Python.
  38. I don't remember what app server we used originally.
  39. We switched from whatever we used initially to web.py, which is a framework that we wrote at reddit.
  40. Aaron was basically the main author of that, and it's still out there on the Internet somewhere.
  41. And this is where the first time I recall seeing a framework that had kind of the notion
  42. of a handler class and then functions for get and post,
  43. and I've become kind of addicted to thinking of web applications that way.
  44. Actually the Google app engine, the webapp2 framework,
  45. inherited a lot of that design from web.py, which is nice.
  46. Nice for me, at least, is that design decision has kind of stuck around for a little while
  47. so I think that means it was a good one.
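The handler-class design being praised here — one class per URL, with a method per HTTP verb — can be sketched without either framework. This is not web.py's or webapp2's actual API (web.py uses uppercase `GET`/`POST` method names on its handler classes); the dispatcher and handler below are invented to show the shape of the idea.

```python
# Minimal sketch of the handler-class pattern: one class per resource,
# one method per HTTP verb, and a dispatcher that routes a request to
# the matching method. Illustrative only; not the web.py API itself.

class LinkHandler:
    def get(self, link_id):
        return f"rendering link {link_id}"

    def post(self, link_id):
        return f"voting on link {link_id}"

def dispatch(handler, method, *args):
    # route the request to the handler method matching the HTTP verb
    fn = getattr(handler, method.lower(), None)
    if fn is None:
        return "405 Method Not Allowed"
    return fn(*args)

handler = LinkHandler()
assert dispatch(handler, "GET", 42) == "rendering link 42"
assert dispatch(handler, "POST", 42) == "voting on link 42"
assert dispatch(handler, "DELETE", 42) == "405 Method Not Allowed"
```

The appeal of the design is that the URL space maps onto classes and the HTTP verbs map onto methods, so each handler reads as a self-contained description of one resource.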
  48. Now reddit uses a web framework called Pylons, but it uses a very old version of Pylons.
  49. And basically when we switched to Pylons, Aaron had stopped maintaining web.py.
  50. I didn't want to maintain it, so we switched to something else maintained by somebody else.
  51. And then we basically shredded most of it and made it function just like web.py.
  52. In hindsight we probably should have just written our own because that's effectively what we did,
  53. but we did it on top of Pylons.
  54. So if you want to use the reddit version of Pylons, it's open source. It's online.
  55. But it doesn't resemble anything like the actual Pylons web framework at this point.
  56. And to my knowledge, that's still what they use today, this hacked up version of Pylons.