Configuring the Kernel Launch Parameters 2 - Intro to Parallel Programming

  1. The most general kernel launch we can do looks like this: 3 parameters inside the triple angle brackets.
  2. The first is the dimensionality of the grid of blocks
  3. that has bx × by × bz blocks.
  4. Each one of those blocks is specified by this parameter: the block of threads that has tx × ty × tz threads in it,
  5. and recall that this has a maximum size.
  6. Finally, there's a third argument that defaults to zero if you don't use it,
  7. and we're not going to cover it specifically today.
  8. It's the amount of shared memory in bytes allocated per thread block.
  9. With this one kernel call, you can launch an enormous number of threads.
  10. And let's all remember, with great power comes great responsibility, so launch your kernels wisely.
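
To make the launch form concrete, here is a minimal sketch; the kernel name, body, and the particular grid and block sizes are placeholder assumptions, not from the video:

    #include <cuda_runtime.h>

    // Hypothetical kernel: thread (0,0,0) of each block records that the
    // block ran, using a linearized block index.
    __global__ void my_kernel(float *out)
    {
        if (threadIdx.x == 0 && threadIdx.y == 0 && threadIdx.z == 0) {
            int b = blockIdx.x
                  + blockIdx.y * gridDim.x
                  + blockIdx.z * gridDim.x * gridDim.y;
            out[b] = 1.0f;
        }
    }

    int main()
    {
        float *d_out;
        cudaMalloc(&d_out, 64 * sizeof(float));

        dim3 grid(4, 4, 4);       // bx * by * bz = 64 blocks in the grid
        dim3 block(8, 8, 8);      // tx * ty * tz = 512 threads per block
        size_t shared_bytes = 0;  // 3rd argument: shared memory per block, in bytes

        // The general form: kernel<<<grid, block, shared_bytes>>>(args...)
        my_kernel<<<grid, block, shared_bytes>>>(d_out);
        cudaDeviceSynchronize();

        cudaFree(d_out);
        return 0;
    }

Even this modest configuration starts 64 × 512 = 32,768 threads from a single call.
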
  11. One more important thing about blocks and threads.
  12. Recall from our square kernel that each thread knows its thread ID within a block.
  13. It actually knows many things.
  14. First is threadIdx, as we've seen: which thread it is within the block.
  15. Here we have a block.
  16. Each thread, say this thread here, knows its index in each of the x, y, and z dimensions,
  17. and we can access those as threadIdx.x, threadIdx.y, and threadIdx.z.
  18. We also know blockDim, the size of a block.
  19. How many threads are there in this block
  20. along the x dimension, the y dimension, and potentially the z dimension?
  21. So we know those two things for a block.
  22. We know the analogous things for a grid.
  23. blockIdx, for instance, is which block am I in within the grid, again with .x, .y, and .z components.
  24. And gridDim will tell us the size of the grid, how many blocks there are
  25. in the x dimension, the y dimension, and the z dimension.
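
As a sketch of those four built-in variables in one place (the kernel name and launch sizes here are illustrative assumptions):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread combines blockIdx, blockDim, and threadIdx into a unique
    // global 2D position; gridDim describes the overall grid of blocks.
    __global__ void where_am_i()
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;  // global x index
        int y = blockIdx.y * blockDim.y + threadIdx.y;  // global y index

        // Let a single thread report the launch configuration.
        if (x == 0 && y == 0)
            printf("grid: %d x %d blocks, block: %d x %d threads\n",
                   gridDim.x, gridDim.y, blockDim.x, blockDim.y);
    }

    int main()
    {
        where_am_i<<<dim3(2, 2), dim3(4, 4)>>>();
        cudaDeviceSynchronize();
        return 0;
    }
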
  26. What I want you to take home from this little discussion is only the following.
  27. It's convenient to have multi-dimensional grids and blocks when your problem has multiple dimensions.
  28. CUDA implements this natively and efficiently.
  29. When you access threadIdx.x or blockDim.y, that's a very efficient thing within CUDA.
  30. Since we're doing image processing in this course,
  31. you can count on seeing a lot of two-dimensional grids and blocks.
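
A common 2D pattern for image work looks like the sketch below; the kernel and the helper function are hypothetical, and the ceiling division is there so the grid covers the image even when its dimensions aren't multiples of the block size:

    #include <cuda_runtime.h>

    // Hypothetical image kernel: one thread per pixel, with a bounds check
    // for threads that fall past the image edge.
    __global__ void copy_image(unsigned char *out, const unsigned char *in,
                               int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            out[y * width + x] = in[y * width + x];
    }

    void launch_copy(unsigned char *d_out, const unsigned char *d_in,
                     int width, int height)
    {
        dim3 block(16, 16);  // 16 x 16 = 256 threads per block
        // Ceiling division: enough blocks to cover every pixel.
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);
        copy_image<<<grid, block>>>(d_out, d_in, width, height);
    }
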
  32. So, let's wrap up with a little quiz.
  33. Let's say I launch the following kernel.
  34. I call the kernel with 2 launch parameters: dim3(8, 4, 2) and dim3(16, 16).
  35. How many blocks will this call launch,
  36. how many threads per block, and how many total threads?
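
For reference, assuming the launch reads kernel<<<dim3(8, 4, 2), dim3(16, 16)>>>(...), the arithmetic follows directly from the definitions above: the grid has 8 × 4 × 2 = 64 blocks, each block has 16 × 16 × 1 = 256 threads, so the call launches 64 × 256 = 16,384 threads in total.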