English subtitles

← Overview - Intro to Parallel Programming


Showing Revision 4 created 05/25/2016 by Udacity Robot.

So today we're going to cover the following broad topics. We'll start by returning to the topic of optimizing GPU programs. Now in unit 5, we gave some pretty specific, detailed advice, and unit 6 explored some examples of how to think parallel. Here we're going to back off and talk more generally about strategies that parallel programmers use to optimize their programs.
Now the best kind of programming, as we mentioned briefly in unit 5, is the kind of programming that you don't do, because somebody else has already programmed it for you and packaged it into a library you can use. So we'll talk about some important and useful CUDA libraries that are out there. Some of these libraries are less about packaging up code to solve a particular problem and more about providing what I call programming power tools to help you code up your own solutions. Examples familiar to C++ programmers would be the Standard Template Library, or STL, or the Boost library. We'll discuss a few such power tools for CUDA C++ programmers.
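One such power tool is Thrust, NVIDIA's CUDA C++ template library modeled directly on the STL. As a hedged sketch of the style being described, here is a small program that fills a vector on the GPU and reduces it, much as one would with `std::vector` and `std::accumulate` on the CPU (assumes the CUDA toolkit and a `nvcc` build):

```cuda
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    // A device_vector lives in GPU memory but behaves like std::vector.
    thrust::device_vector<int> d(1000);

    // Fill with 0, 1, 2, ..., 999 on the device.
    thrust::sequence(d.begin(), d.end());

    // Parallel reduction on the GPU, analogous to std::accumulate.
    int sum = thrust::reduce(d.begin(), d.end(), 0);

    printf("sum = %d\n", sum);  // 0 + 1 + ... + 999 = 499500
    return 0;
}
```

The point of the STL-like interface is that the algorithm calls (`sequence`, `reduce`) hide the kernel launches and memory management entirely.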
Now in this class we've focused on the CUDA C++ language, but there are many other platforms for parallel programming on GPUs. We'll talk about CUDA platforms that support other languages, from Fortran to Python to MATLAB. And we'll also discuss cross-platform accelerator solutions like OpenCL, OpenACC, and OpenGL Compute.
  21. Now GPU computing is a young field and part of what
  22. makes it exciting is that the hardware and software are improving each year,
  23. not just getting incrementally faster but also adding fundamentally new capabilities.
  24. So we're going to finish the unit and the course with a fantastic
  25. guest lecture, inviting Dr. Stephen Jones from NVIDIA to come and teach us about
  26. the latest advance in CUDA called Dynamic Parllelism.