Map and Gather - Intro to Parallel Programming

  • 0:00 - 0:04
    So let's talk about the different kinds of communication, the different patterns
  • 0:04 - 0:08
    of communication you'll see in parallel computing. And as you'll see, this is
  • 0:08 - 0:12
    really all about how to map tasks and memory together. How to map tasks, which
  • 0:12 - 0:16
    are threads in CUDA and the memory that they're communicating through. So the
  • 0:16 - 0:20
    communication pattern you've already seen is called Map. Now with Map, you've
  • 0:20 - 0:24
    got many data elements. Such as elements of an array, or entries in a matrix, or
  • 0:24 - 0:28
    pixels in an image. And you're going to do the same function, or computational
  • 0:28 - 0:32
    task, on each piece of data. This means each task is going to read from and
  • 0:32 - 0:36
write to a specific place in memory. There's a one-to-one correspondence between
  • 0:36 - 0:40
    input and output. So, map is very efficient on GPUs. And it's easily expressed
  • 0:40 - 0:44
in an efficient way in CUDA by simply having one thread do each task. But this
  • 0:44 - 0:48
isn't a very flexible framework. There are many things you can't do with a simple
  • 0:48 - 0:53
    math operation. Now suppose that you want each thread to compute and store the
  • 0:53 - 0:58
    average across a range of data elements. Say maybe we want to average each set
  • 0:58 - 1:03
    of 3 elements together. In this case each thread is going to read the values
  • 1:03 - 1:08
    from 3 locations in memory and write them into a single place and so on. Or
  • 1:08 - 1:12
suppose you want to blur an image by setting each pixel to the average of its
  • 1:12 - 1:17
    neighboring pixels. So that this pixel would average together the values of all
  • 1:17 - 1:22
five of these pixels. And then this pixel next to it would average
  • 1:22 - 1:27
    together the values of all these pixels and so on. We'll do exactly this kind of
  • 1:27 - 1:31
    blurring operation in the homework assignment that's coming up at the end of
  • 1:31 - 1:35
    this lecture. This operation is called a gather because each calculation gathers
  • 1:35 - 1:39
    input data elements together from different places to compute an output result.
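The map pattern described in the transcript can be sketched as a CUDA kernel with one thread per data element. The kernel name, array names, and the squaring operation below are illustrative placeholders, not code from the course:

```cuda
// Minimal map sketch: each thread reads one input element and writes
// one output element -- a one-to-one correspondence of tasks to memory.
__global__ void square_kernel(const float *d_in, float *d_out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                     // guard threads past the end of the array
        d_out[i] = d_in[i] * d_in[i];
}

// Example launch covering all n elements with 256-thread blocks:
// square_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
```

Because each thread touches only its own input and output locations, no thread communication is needed, which is why map runs so efficiently on GPUs.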
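The three-element average is a simple gather: each thread reads from three memory locations and writes one result. One plausible reading of the lecture's example, averaging each element with its two neighbors, might look like this (names and the clamped-boundary handling are assumptions, not from the course):

```cuda
// Gather sketch: each thread reads 3 input locations and writes 1 output.
__global__ void average3_kernel(const float *d_in, float *d_out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Clamp neighbor indices so threads at the edges stay in bounds.
    int left  = max(i - 1, 0);
    int right = min(i + 1, n - 1);
    d_out[i] = (d_in[left] + d_in[i] + d_in[right]) / 3.0f;
}
```

Note the reads overlap between threads, but each output location is still written by exactly one thread.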
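The blur described at the end, where each pixel becomes the average of itself and its four direct neighbors (the five pixels mentioned), is the 2D version of the same gather pattern. A hedged sketch, assuming a row-major, single-channel w-by-h image and skipping edge pixels for brevity:

```cuda
// 2D gather sketch: each thread averages a pixel with its four direct
// neighbors (a 5-pixel "plus" stencil). Edge pixels are left untouched
// here to keep the example short.
__global__ void blur_kernel(const float *d_in, float *d_out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || x >= w - 1 || y <= 0 || y >= h - 1) return;
    int i = y * w + x;             // row-major index of this pixel
    d_out[i] = (d_in[i] + d_in[i - 1] + d_in[i + 1]
              + d_in[i - w] + d_in[i + w]) / 5.0f;
}
```

Each thread gathers five inputs from different places to compute one output, which is exactly what makes this a gather rather than a map.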
Title:
Map and Gather - Intro to Parallel Programming
Description:

Video Language:
English
Team:
Udacity
Project:
CS344 - Intro to Parallel Programming
Duration:
01:39
