← NVIDIA and Titan - Intro to Parallel Programming


Showing Revision 4 created 05/24/2016 by Udacity Robot.

  1. One of the cool recent announcements we've seen is the announcement
  2. of the Titan Supercomputer, which is now the fastest supercomputer in the world,
  3. and this has NVIDIA processors in its core.
  4. Can you talk a little bit about that process and how NVIDIA got to be involved
  5. and why that's such an exciting thing for GPU computing?
  6. Well, first of all, Titan is an awesome machine.
  7. It's 18,688 Kepler K20 GPUs, and it is the fastest computer in the world at running high-performance LINPACK.
  8. There's an interesting story there.
  9. The story of Titan actually starts with a meeting that I had with Steve Scott,
  10. who at the time was CTO of Cray at the Salishan Conference up on the Oregon Coast in 2009.
  11. I was talking to Steve and trying to see how we could work together.
  12. We really should get NVIDIA GPUs into Cray supercomputers
  13. because we have the best compute per dollar, compute per watt,
  14. which are the two things that matter in high performance computing,
  15. of anybody in the world, and they were actually going through a problem
  16. because they had bet on a different vendor who cancelled a project on them,
  17. and it left a hole in what they wanted to bid for this solicitation
  18. that was out from Oak Ridge to build what they call their leadership-class computing facility--
  19. ultimately what turned into Titan.
  20. It was just a nice sort of juxtaposition in time that I was having this conversation
  21. with him right at the point in time where there was a hole to be filled.
  22. It turns out Kepler filled that hole wonderfully.
  23. There were a lot of challenges along the way that, I think, really had to do with getting the people
  24. in the National Labs to embrace the model of parallelism that CUDA presents.
  25. I think that once they embraced it, they found it was actually easier
  26. to write their programs that way, and they actually ran better across the board
  27. once they were reorganized into that style of parallelism of launching CTAs and organizing things in that style.
  28. But they had a very large chunk of legacy code mostly in Fortran.
  29. It was coded as, sort of, Fortran running on a single node, with MPI used to communicate between the nodes,
  30. and it was a nontrivial exercise to really bring that software over
  31. and get it to run well on a GPU accelerated system.
  32. And I think beyond the LINPACK number, which is a relatively easy number to get because it's one program,
  33. what really is the success of Titan is the very large number
  34. of basic energy science and defense codes that have been ported over very successfully
  35. and get just tremendous performance on the K20s.