Map and Gather - Intro to Parallel Programming

  • 0:00 - 0:02
    Let's talk about the different kinds of communication,
  • 0:02 - 0:05
    the different patterns of communication you'll see in parallel computing.
  • 0:05 - 0:10
    And as you'll see, this is really all about how to map tasks and memory together--
  • 0:10 - 0:12
    how to map tasks, which are threads in CUDA,
  • 0:12 - 0:15
    and the memory that they're communicating through.
  • 0:15 - 0:18
    The communication pattern you've already seen is called map.
  • 0:18 - 0:20
    Now with map, you've got many data elements
  • 0:20 - 0:25
    such as elements of an array, or entries in a matrix, or pixels in an image,
  • 0:25 - 0:30
    and you're going to do the same function or computation on each piece of data.
  • 0:30 - 0:34
    This means each task is going to read from and write to a specific place in memory.
  • 0:34 - 0:38
    There's a 1 to 1 correspondence between input and output.
  • 0:38 - 0:41
    So map is very efficient on GPUs,
  • 0:41 - 0:43
    and it's easily expressed in an efficient way in CUDA
  • 0:43 - 0:48
    by simply having 1 thread do each task, but this isn't a very flexible framework.
  • 0:48 - 0:50
    There's many things you can't do with a simple map operation.
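A minimal sketch of the map pattern in CUDA — the kernel name and the squaring function are illustrative choices, not part of the lecture; any per-element function would do:

```cuda
// Map: one thread per element, 1-to-1 correspondence between input and output.
__global__ void square_kernel(float *out, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // guard: the grid may be larger than n
        out[i] = in[i] * in[i];       // each thread reads and writes exactly one location
}
```

Each thread reads from and writes to the one memory location given by its own index, which is what makes map so efficient on GPUs.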
  • 0:50 - 0:54
    Now suppose that you want each thread to compute and store the average
  • 0:54 - 0:56
    across a range of data elements.
  • 0:56 - 1:01
    Say maybe we want to average each set of 3 elements together.
  • 1:01 - 1:04
    In this case, each thread is going to read the values from 3 locations in memory
  • 1:04 - 1:08
    and write the average into a single place, and so on.
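The averaging example might be sketched like this — the kernel name and the assumption that the input length is a multiple of 3 are mine, for illustration:

```cuda
// Gather: each thread reads 3 input locations and writes 1 output location.
// Assumes the input array holds 3 * n_out elements.
__global__ void average3_kernel(float *out, const float *in, int n_out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_out)
        out[i] = (in[3 * i] + in[3 * i + 1] + in[3 * i + 2]) / 3.0f;
}
```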
  • 1:08 - 1:10
    Or suppose you want to blur an image
  • 1:10 - 1:14
    by setting each pixel to the average of its neighboring pixels,
  • 1:14 - 1:19
    so that this pixel would average together the values of all 5 of these pixels,
  • 1:19 - 1:22
    and then let's take this pixel next to it,
  • 1:22 - 1:26
    would average together the values of all these pixels and so on.
  • 1:26 - 1:29
    We'll do exactly this kind of blurring operation in the homework assignment
  • 1:29 - 1:31
    that's coming up at the end of this lecture.
  • 1:31 - 1:36
    This operation is called a gather, because each calculation gathers input data elements together
  • 1:36 - 1:40
    from different places to compute an output result.
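The blur described above, averaging a pixel with its 4 immediate neighbors (5 pixels total), could be sketched as a gather like this. The kernel name is illustrative, and for simplicity this sketch skips the image border; a real blur (like the one in the homework) would clamp or otherwise handle edge pixels:

```cuda
// Gather-style blur: each thread reads 5 pixels (itself plus 4 neighbors)
// and writes 1 output pixel. Image is w x h, stored row-major.
__global__ void blur5_kernel(float *out, const float *in, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x > 0 && x < w - 1 && y > 0 && y < h - 1) {   // skip border pixels
        out[y * w + x] = (in[y * w + x]
                        + in[y * w + x - 1]           // left neighbor
                        + in[y * w + x + 1]           // right neighbor
                        + in[(y - 1) * w + x]         // neighbor above
                        + in[(y + 1) * w + x])        // neighbor below
                        / 5.0f;
    }
}
```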
Title:
Map and Gather - Intro to Parallel Programming
Description:

03-03 Map and Gather

Video Language:
English
Team:
Udacity
Project:
CS344 - Intro to Parallel Programming
Duration:
01:39