Deferred Rendering - Interactive 3D Graphics


Showing Revision 2 created 05/25/2016 by Udacity Robot.

A problem with adding more and more lights to a scene is the expense. Every
light you add means yet another light that must be evaluated for the surface.
One way around this is deferred rendering. Look at this demo. There are 50
lights in the scene, and it runs just fine. Normally, you render a surface, and
the fragment color for each pixel is stored if it's the closest visible
object. This is often called forward rendering. In a deferred rendering
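That per-surface, per-light cost can be sketched in plain Python. This is a toy model, not the course's shader code; the fragment and light fields here are made up for illustration:

```python
# Toy forward-shading sketch: every light is evaluated for every shaded
# fragment, so the total work grows as fragments * lights.

def shade_forward(fragments, lights):
    """Shade each fragment by looping over all lights; also count evaluations."""
    colors = []
    evaluations = 0
    for frag in fragments:
        color = [0.0, 0.0, 0.0]
        for light in lights:
            evaluations += 1
            # Simple Lambertian diffuse term: max(N . L, 0).
            ndotl = max(0.0, sum(n * l for n, l in zip(frag["normal"],
                                                       light["direction"])))
            for c in range(3):
                color[c] += frag["albedo"][c] * light["color"][c] * ndotl
        colors.append(tuple(color))
    return colors, evaluations
```

With, say, a million covered fragments and 50 lights, the inner lighting computation runs 50 million times, which is why each added light costs so much.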
algorithm, you instead store data of some sort in each pixel. There are many
variations, with names such as deferred shading versus deferred lighting.
Here's just one: you could store the position, normal, and material color and
shininess of the closest surface at each pixel. You also draw into the Z-buffer
as usual. I'm not going to get into the details of how you store these various
pieces of data; the point is that you can do so. It's just image data in
another format. With deferred rendering, every point light in the scene has an
upper limit as to how far its light goes. This distance forms a sphere. So a
sphere is drawn in a special way for each light. Another way of saying this is
that each light can affect a volume in space. Whatever surfaces we find inside
the sphere are affected by the light. Each light affects a fairly small number
of pixels on the screen, namely whatever area the light's sphere covers. This
means that a huge number of lights with a limited radius can be evaluated in
this way. By drawing a sphere, we're telling the GPU which pixels on the screen
are covered by the light and so should be evaluated. There are variants on
what shapes you draw: a circle, a screen-aligned rectangle. Whatever is drawn,
the idea is that the geometry's purpose is to test only the limited set of
pixels potentially in range of the light. This is as opposed to standard
lights, where every light is evaluated for every surface at every pixel. I hope
this gives you a flavor of how deferred rendering works. I'm really jumping the
gun here; you need to know about shader programming to implement these
techniques. But the idea is to treat lights as objects that are rendered into
the scene after all object surface values are recorded. There are some problem
cases for deferred rendering techniques, such as transparency, but it offers a
way to have an incredible number of lights in a scene.
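The two passes described above can be sketched in plain Python. This is a minimal illustration under my own assumptions, not the course's implementation: the "screen" is tiny, the G-buffer is a dictionary of position/normal/albedo per pixel, and a bounding square stands in for the sphere a real renderer would rasterize for each light.

```python
import math

# Deferred-rendering sketch: a geometry pass records surface data per pixel,
# then a lighting pass applies each light only to the pixels its volume covers.

def geometry_pass(width, height):
    """Store position, normal, and albedo per pixel (a flat plane at z=0 here)."""
    return {
        (x, y): {
            "position": (float(x), float(y), 0.0),
            "normal": (0.0, 0.0, 1.0),
            "albedo": (1.0, 1.0, 1.0),
        }
        for y in range(height)
        for x in range(width)
    }

def lighting_pass(gbuffer, lights, width, height):
    """Accumulate each light's contribution over only the pixels in its range."""
    lit = {p: [0.0, 0.0, 0.0] for p in gbuffer}
    evaluations = 0  # how many pixel/light pairs we actually test
    for light in lights:
        lx, ly, lz = light["position"]
        radius = light["radius"]
        # A real renderer draws a sphere (or screen-space shape) to select
        # these pixels; a bounding square of the footprint stands in for it.
        for y in range(max(0, int(ly - radius)), min(height, int(ly + radius) + 1)):
            for x in range(max(0, int(lx - radius)), min(width, int(lx + radius) + 1)):
                evaluations += 1
                px, py, pz = gbuffer[(x, y)]["position"]
                dist = math.dist((px, py, pz), (lx, ly, lz))
                if dist > radius or dist == 0.0:
                    continue  # inside the square but outside the light's sphere
                falloff = 1.0 - dist / radius  # reaches zero at the radius
                nx, ny, nz = gbuffer[(x, y)]["normal"]
                to_light = ((lx - px) / dist, (ly - py) / dist, (lz - pz) / dist)
                ndotl = max(0.0, nx * to_light[0] + ny * to_light[1] + nz * to_light[2])
                for c in range(3):
                    lit[(x, y)][c] += (gbuffer[(x, y)]["albedo"][c] *
                                       light["color"][c] * ndotl * falloff)
    return lit, evaluations
```

On an 8x8 screen, a single light of radius 2 centered over the middle touches only a 5x5 block of pixels, 25 evaluations instead of 64, and that saving is what lets the lighting pass scale to dozens of lights.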