Texturing and Postprocessing - Interactive 3D Graphics

Showing Revision 2 created 05/24/2016 by Udacity Robot.

  1. I haven't talked about texture access in fragment shaders. The short answer is,
  2. you can access textures in fragment shaders. The theory of sampling and
  3. filtering is the same as you've already learned in previous lessons. Most of
  4. the rest is just the syntax of setting up the texture access. The key bit is
  5. that in the shader you use the texture2D function. This gives back the color
  6. from the texture which you then use as you wish. Search the Three.js code base
  7. for this keyword and you'll find more than 40 shaders that use texture access.
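As a minimal sketch of such an access (the uniform and varying names here are illustrative, not taken from the course code):

```glsl
// Fragment shader: sample a texture and output its color.
// "map" and "vUv" are illustrative names; any sampler2D uniform
// bound by the application works the same way.
uniform sampler2D map;   // texture bound by the application
varying vec2 vUv;        // UV coordinates interpolated from the vertex shader

void main() {
    vec4 texelColor = texture2D( map, vUv );  // filtered texture lookup
    gl_FragColor = texelColor;
}
```

For a cube map the sampler type is samplerCube and the lookup is textureCube( envMap, direction ), with a 3D direction instead of a 2D UV.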
  8. The main file for material shading is WebGLShaders.js, and it's worth your time
  9. to give it a look if you want to become better with shaders. The other function
  10. to look for is textureCube for cube maps, which takes a 3D direction as an
  11. input. Many of the rest of the shaders perform image processing, where the
  12. pixels of the image are used to form another image. Let's take a concrete
  13. example. You want to convert a color image to grayscale. The formula is
  14. luminance equals this much red, this much green, and this much blue. This
  15. formula is for linearized colors, and luminance is intensity, the grayscale.
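The exact weights are shown on screen; assuming the commonly used Rec. 709 coefficients for linear RGB, the luminance computation is a single dot product in GLSL:

```glsl
// Luminance from linear RGB, assuming the Rec. 709 weights:
// 0.2126 * R + 0.7152 * G + 0.0722 * B.
float luminance( vec3 linearColor ) {
    return dot( linearColor, vec3( 0.2126, 0.7152, 0.0722 ) );
}
```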
  16. There's also another formula, for luma, which applies when the inputs are
  17. gamma corrected rather than linear. It's surprising to me how important green
  18. is in both formulas and how little blue matters. Whichever formula you use, you might think you
  19. could simply add it as a final step in a fragment shader. However, transparency
  20. again is a problem. You want to apply this formula as a post-process. That is,
  21. you want to convert to grayscale after the whole scene is rendered. But by then
  22. it's too late. The image has already been sent to the screen. Well, in fact,
  23. you can send the image off-screen and have it become a texture. This is called,
  24. simply, an offscreen texture, pixel buffer, or render target. This texture is
  25. typically not a power-of-two texture; it's not normally 512 by 512, or
  26. anything that you'd use to make a mipmap chain. Our goal is to perform image
  27. processing on this so-called texture. We want to read a pixel and convert it to
  28. gray. You'd think there'd be some mode that could just take one texel and spit
  29. it out to a new pixel, but the GPU is optimized for drawing triangles and
  30. accessing textures. The way we use this texture is by drawing a single
  31. rectangle with UV coordinates that exactly fill the screen. You'll hear this
  32. called a screen-filling quad. The UV values are then used in the fragment
  33. shader to precisely grab the single texel that corresponds to our final output
  34. pixel on the screen, one for one. This sounds inefficient, and in a certain
  35. sense it is. But this process is fast enough that often a considerable number
  36. of post-processing passes can be done in each frame. In Three.js, the Effect
  37. Composer class lets you create and chain different passes together with just a
  38. few lines of code. For our grayscale post-process, the vertex shader is along
  39. these lines. This is almost as simple as a vertex shader can get. It copies
  40. over the UV value and projects the screen-filling quadrilateral to clip
  41. coordinates. The whole point of rendering the rectangle is to force the
  42. fragment shader to be evaluated at every pixel. The fragment shader code is
  43. also quite simple. The texture is accessed by the screen location, essentially.
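A minimal sketch of such a vertex/fragment shader pair (uniform names are illustrative, though tDiffuse follows the Three.js convention for the previous pass's render target; the luma weights shown are the common Rec. 601 values):

```glsl
// Vertex shader: copy the UVs through and project the
// screen-filling quad to clip coordinates.
varying vec2 vUv;
void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
```

```glsl
// Fragment shader: fetch the texel corresponding to this output
// pixel and convert it to gray with the Rec. 601 luma weights.
uniform sampler2D tDiffuse;  // render target from the previous pass
varying vec2 vUv;
void main() {
    vec4 texel = texture2D( tDiffuse, vUv );
    float gray = dot( texel.rgb, vec3( 0.299, 0.587, 0.114 ) );
    gl_FragColor = vec4( vec3( gray ), texel.a );
}
```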
  44. So each texel will be associated with its corresponding output pixel. We use
  45. the color of this texel to get the grayscale color, in this case using the
  46. luma equation. This color is then saved to the fragment's output color and
  47. we're done. But we don't have to stop there. Multiple post-process passes can
  48. be done, and it's often necessary or even more efficient. For example, a blur
  49. can be done in two passes, a horizontal blur and then a vertical blur. Doing so
  50. results in fewer texture look-ups than a single pass. Multiple passes can be
  51. quite memory efficient, as the output texture from one pass can become the
  52. input texture for the next, and vice versa. This process is called ping-ponging, as
  53. the flow of data goes back and forth. Here the horizontal blur uses the left
  54. image as input, the right for output. The vertical blur uses the right as
  55. input, left as output. If we now want to convert to grayscale, we'd add
  56. another pass and reverse the direction again. Here's an example of grayscale in
  57. action. The critical idea here is that we can do all sorts of things in this
  58. fragment shader. We can sample nearby texels and blend them together to blur
  59. the image, detect edges, or 100 other operations. We can chain together each
  60. post-process to feed the next. This demo by Felix Turner shows some of the many
  61. effects you can achieve. Three.js, in fact, has lots of post-processing
  62. operations in its library, and it's easy to add your own. I highly recommend
  63. looking at the additional course materials for links to Felix's tutorial and
  64. other worthwhile resources.
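As one example of a pass that can be ping-ponged, here is a sketch of the horizontal half of the two-pass blur described above; the uniform names and the 5-tap binomial kernel are my own choices, not from the course. A matching vertical pass is identical except that the offsets step along v instead of u.

```glsl
// Horizontal blur pass: weighted average of 5 texels along u,
// using binomial weights 1 4 6 4 1 (sum 16). Run this pass into
// one render target, then the vertical pass back into the other.
uniform sampler2D tDiffuse;  // output of the previous pass
uniform float uTexelWidth;   // 1.0 / render target width, in UV units

varying vec2 vUv;

void main() {
    vec4 sum = vec4( 0.0 );
    sum += texture2D( tDiffuse, vUv + vec2( -2.0 * uTexelWidth, 0.0 ) ) * ( 1.0 / 16.0 );
    sum += texture2D( tDiffuse, vUv + vec2( -1.0 * uTexelWidth, 0.0 ) ) * ( 4.0 / 16.0 );
    sum += texture2D( tDiffuse, vUv )                                   * ( 6.0 / 16.0 );
    sum += texture2D( tDiffuse, vUv + vec2(  1.0 * uTexelWidth, 0.0 ) ) * ( 4.0 / 16.0 );
    sum += texture2D( tDiffuse, vUv + vec2(  2.0 * uTexelWidth, 0.0 ) ) * ( 1.0 / 16.0 );
    gl_FragColor = sum;
}
```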