CS 488: Lecture 14 – Extra Texturing
Dear students:
Last time we introduced texturing as a way of adding surface details without adding extra geometry. Textures are effectively our way of defining attributes at the fragment level. Today we extend our discussion of textures with a hodgepodge of topics surrounding their use.
Interpolation
Suppose I have 3 cats at the beginning of the year, and 13 by the end. How many do you expect I’d have on July 1? In the absence of any other information, 8 is a reasonable answer. July 1 is 50% of the way through the year, and 8 is 50% of the way between 3 and 13. The strategy we have applied here is called linear interpolation. We are guessing that a phenomenon follows linear growth or decay between two known observations and predicting an unknown value using the line between the known endpoints.
Here’s another problem. Suppose I am 150 centimeters tall at age 12. Suppose I am 175 centimeters tall at age 17. How tall am I at age 13? 13 years is 20% of the way between 12 and 17 years. 20% of the height difference is:

0.2 × (175 − 150) = 0.2 × 25 = 5 centimeters
Assuming I grow linearly, I am 155 centimeters tall at 13 years.
One more. At startTick, we have startValue. At endTick, we have endValue. At tick, what value do we expect to see? We write our answer to this generalization as a function named lerp, which is a contraction of “linear interpolation”:
float lerp(float startTick, float startValue, float endTick, float endValue, float tick) {
  // How far along tick is between startTick and endTick, as a proportion.
  float t = (tick - startTick) / (endTick - startTick);
  // Walk that same proportion of the way from startValue to endValue.
  float value = startValue + t * (endValue - startValue);
  return value;
}
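As a quick check, the cat and height examples from above might be computed like this, though in practice we’d rarely call lerp with constants:

// 3 cats at the start of the year, 13 at the end; July 1 is halfway through.
float cats = lerp(0.0, 3.0, 1.0, 13.0, 0.5);          // 8.0
// 150 centimeters at age 12, 175 at age 17; how tall at age 13?
float height = lerp(12.0, 150.0, 17.0, 175.0, 13.0);  // 155.0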
Sometimes we’ll see the right-hand side of value expressed differently. We distribute, regroup, and factor to derive this equivalent form:

value = (1 - t) * startValue + t * endValue

Performance fiends can express this equation using two vector multiply-add instructions, which are very fast on GPUs.
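For instance, one arrangement that maps to two multiply-adds might look like the sketch below. It assumes a GLSL version that exposes fma, such as desktop GLSL 4.00; WebGL’s GLSL ES 3.00 does not, though its compiler is free to fuse the plain arithmetic on its own.

// (1 - t) * startValue + t * endValue, phrased as two multiply-adds:
//   inner fma: -t * startValue + startValue
//   outer fma:  t * endValue + (result of the inner fma)
float value = fma(t, endValue, fma(-t, startValue, startValue));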
Texture Filtering
Texture coordinates are essentially continuous as they are interpolated across the surface of a triangle. But the underlying image is not continuous. It’s made of discrete pixels. If we want to look up a texture color using texture coordinates, we need a scheme for turning the coordinates into row and column indices, which are integers. First we need to apply the coordinates, which are proportions, to the actual image resolution:
vec2 floatIndices = ftexcoords * vec2(textureWidth, textureHeight);
The applied texture coordinates will likely have fractional components. We need integer indices for our discrete texture. There are two options for going from those floats to ints. We could round, perhaps by adding 0.5 and truncating:
ivec2 intIndices = ivec2(int(floatIndices.x + 0.5), int(floatIndices.y + 0.5));
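To actually read a texel with those integer indices, we might call texelFetch, which takes integer indices and a mip level rather than proportions. This sketch reuses the checkerboard sampler that appears later in this lecture and ignores clamping at the texture’s edges:

// Read one texel directly by its integer indices from mip level 0.
fragmentColor = texelFetch(checkerboard, intIndices, 0);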
This scheme is called nearest neighbor interpolation. We don’t actually need to write any of this code to perform this interpolation. WebGL will automatically turn proportions into the nearest row and column indices if we set these texture parameters:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
When we render with nearest neighbor interpolation, we see harsh, blocky edges between texels when we view the textured surface.
Our other option is to use the fractions to perform a weighted average of the four surrounding texture colors. Suppose our fractional coordinates are (50.3, 70.6). Our truncated coordinates are (50, 70), and our weights are (0.3, 0.6). We’d first look up the known colors at (50, 70) and (51, 70) and mix them to get color below. We want 70% of the (50, 70) color and 30% of the (51, 70) color. Similarly, we’d look up the known colors at (50, 71) and (51, 71) and mix them according to the same weights to get the color above. Then we’d mix 40% of below with 60% of above.
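Here’s a sketch of that weighted average written by hand with texelFetch. The bilinear helper below is hypothetical, and it glosses over details that real hardware handles, like texel centers and clamping at the edges:

vec4 bilinear(sampler2D image, vec2 floatIndices) {
  ivec2 below = ivec2(floatIndices);   // truncated indices, like (50, 70)
  vec2 weights = fract(floatIndices);  // fractional parts, like (0.3, 0.6)

  // The four surrounding texels.
  vec4 bottomLeft = texelFetch(image, below, 0);
  vec4 bottomRight = texelFetch(image, below + ivec2(1, 0), 0);
  vec4 topLeft = texelFetch(image, below + ivec2(0, 1), 0);
  vec4 topRight = texelFetch(image, below + ivec2(1, 1), 0);

  // Mix horizontally along the bottom and top rows, then mix vertically.
  vec4 colorBelow = mix(bottomLeft, bottomRight, weights.x);
  vec4 colorAbove = mix(topLeft, topRight, weights.x);
  return mix(colorBelow, colorAbove, weights.y);
}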
This scheme is called bilinear interpolation because we are applying linear interpolation across a two-dimensional domain. WebGL will perform bilinear interpolation automatically if we set these texture parameters:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
The minification filter is applied when the texture is zoomed out and we are squeezing more than one texel inside a single pixel. The magnification filter is applied when the texture is zoomed in and a single texel covers more than one pixel.
Adding Alpha
We can produce irregular shapes with very simple geometry by adding an alpha channel to our texture. Then in the fragment shader, we can perform an alpha test, discarding any fragment whose opacity is less than 1:
fragmentColor = texture(checkerboard, ftexcoords);
if (fragmentColor.a < 1) {
  discard;
}
This creates a very jagged transition. We can also choose to blend the fragments with the pixel’s existing color by enabling gl.BLEND:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
We’ll talk more about blending another day.
TODO
Here’s your TODO list:
- Complete your programming assignments. Be sure they are in a Git repository somewhere that you have shared with me. We are starting week 9. Week 15 is your last week to turn in a programming assignment, and you may only turn in one assignment per week.
See you next time.
P.S. It’s time for a haiku!
I got rid of him
From all my photos even
Alpha 0 male