Vertex Shader Programming - Interactive 3D Graphics

I find myself usually modifying the fragment shader in a program, since that's where all the per-pixel processing is happening. Let's back up a bit and show how various parameters were produced by the vertex shader. The position and normal of the vertex are passed in with these names, position and normal. A few built-in matrices are used for transformation, namely projectionMatrix, modelViewMatrix, and normalMatrix. In Three.js, these are always available to the shader if desired; in WebGL itself you need to do a little more work. There's currently no modelViewProjectionMatrix, that is, these two matrices multiplied together. Maybe you'll be the person to add it to Three.js, as it's commonly used and would be more efficient here.
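
As a sketch of what the shader sees, these are the relevant declarations. With Three.js's ShaderMaterial they are injected automatically, so you don't write them yourself; with a raw WebGL program you would have to declare and supply them all by hand:

    // Supplied automatically by Three.js (shown here for reference only):
    uniform mat4 projectionMatrix;   // view space -> clip coordinates
    uniform mat4 modelViewMatrix;    // model space -> view space
    uniform mat3 normalMatrix;       // transforms normals into view space
    attribute vec3 position;         // vertex position, model space
    attribute vec3 normal;           // vertex normal, model space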
What we get out of this shader are a few vectors. First, gl_Position is set, which is the location in clip coordinates. This vector must always be set by the vertex shader, at a minimum. One of the features of the vertex shader is that you can change the shape of an object; you can't really change it in the fragment shader.
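
For reference, the clip-coordinate line itself is the standard Three.js pattern. The commented-out lines sketch a hypothetical shape change, displacing each vertex along its normal; bulgeAmount is a made-up uniform, not something built in:

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );

    // Hypothetical shape change: push each vertex outward along its normal.
    // uniform float bulgeAmount;  // assumed uniform you'd pass in yourself
    // vec3 displaced = position + normal * bulgeAmount;
    // gl_Position = projectionMatrix * modelViewMatrix * vec4( displaced, 1.0 );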
The normal in modelView space is computed here, using the normal transform matrix.
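
In code this is a single line; vNormal is a varying name I've chosen for illustration, which the rasterizer then interpolates for the fragment shader:

    varying vec3 vNormal;             // placeholder varying name

    vNormal = normalMatrix * normal;  // normal in modelView (view) space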
Finally, a vector from the location in modelView space toward the viewer is computed. First the position in modelView space is computed, and then negating this vector gives the direction toward the viewer from the surface, instead of from the viewer to the object. Remember that the camera's at the origin in view space. We don't really need the temporary mvPosition vector; we could have combined these last two lines of code. This example is here to show how to compute it.
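
A sketch of that computation, with vViewDir again a placeholder varying name:

    varying vec3 vViewDir;            // placeholder varying name

    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vViewDir = -mvPosition.xyz;       // surface toward viewer; camera sits at the view-space origin

    // The temporary could be folded away into one line:
    // vViewDir = -( modelViewMatrix * vec4( position, 1.0 ) ).xyz;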
To sum up, the vertex shader took as inputs the model-space position and normal. It transformed them to create a point in clip coordinates for the rasterizer. It also transformed the normal and position into view space. The resulting transformed vertex values are then interpolated across the triangle during rasterization, and sent to the fragment shader for each fragment produced.