I find myself usually modifying the fragment shader in a program, since that's
where all the per-pixel processing happens. Let's back up a bit and show
how the various parameters are produced by the vertex shader. The position and
normal of the vertex are passed in under the names position and normal. A few
built-in matrices are used for transformation, namely projectionMatrix,
modelViewMatrix, and normalMatrix. In Three.js, these are always available to
the shader if desired. In WebGL itself you need to do a little more work.
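As a sketch of what Three.js does behind the scenes, every vertex shader it compiles gets declarations along these lines prepended automatically (the comments note where each value comes from):

```glsl
// Prepended by Three.js automatically; a sketch, not the exact generated source.
uniform mat4 modelViewMatrix;   // camera.matrixWorldInverse * object.matrixWorld
uniform mat4 projectionMatrix;  // camera.projectionMatrix
uniform mat3 normalMatrix;      // inverse transpose of the upper 3x3 of modelViewMatrix
attribute vec3 position;        // vertex position in model space
attribute vec3 normal;          // vertex normal in model space
```

This is why a shader you write for a ShaderMaterial can use these names without declaring them yourself.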
There's currently no modelViewProjectionMatrix, the product of these two
matrices. Maybe you'll be the person to add it to Three.js, as it's commonly
used and would be more efficient here. What we get out of this shader are a
few vectors. First, gl_Position is set, which is the location in clip
coordinates. This vector must always be set by the vertex shader, at a
minimum. One of the features of the vertex shader is that you can change the
shape of an object; you can't really change it in the fragment shader. The
normal in modelView space is computed here, using the normal transform matrix.
Finally, a vector from the location in modelView space toward the viewer is
computed. First the position in modelView space is computed, and then negating
this vector gives the direction from the surface toward the viewer, instead of
from the viewer to the object. Remember that the camera is at the origin in
view space. We don't really need the temporary mvPosition vector. We could have
combined these last two lines of code. This example is here to show how to
compute it. To sum up, the vertex shader took as inputs the model-space
position and normal. It transformed them to create a point in clip coordinates
for the rasterizer. It also transformed the normal and position into view
space. These transformed vertex values are then interpolated across the
triangle during rasterization and sent to the fragment shader for each
fragment produced.
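Putting the steps above together, a minimal vertex shader along these lines might look as follows; vNormal and vViewDir are assumed varying names, not Three.js built-ins:

```glsl
varying vec3 vNormal;   // normal in modelView (view) space
varying vec3 vViewDir;  // direction from the surface toward the viewer

void main() {
    // Position in view space; the temporary discussed above.
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );

    // Required output: position in clip coordinates. Writing
    // projectionMatrix * modelViewMatrix * vec4( position, 1.0 ) instead
    // would do a full mat4 * mat4 multiply per vertex, which is why a
    // precomputed modelViewProjectionMatrix uniform would be more efficient.
    gl_Position = projectionMatrix * mvPosition;

    // Normal transformed into view space.
    vNormal = normalMatrix * normal;

    // The camera sits at the origin in view space, so negating the position
    // gives the vector from the surface toward the viewer.
    vViewDir = -mvPosition.xyz;
}
```

In this arrangement the mvPosition temporary is reused for gl_Position as well; folding it into the last line would mean recomputing the same product.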