Dear students:

Today we visit just a few more miscellaneous topics related to texturing. We’ll examine texturing a cube, dealing with constraints on a texture’s dimensions, and applying textures to produce more discrete shading.

Texturing a quadrilateral is like biking downhill. You don’t have to exert much effort to paste a rectangular texture on a quadrilateral. We must try to texture something more difficult, like a cube. Suppose we want to put a different image on each face. Our first impulse might be to use six textures, but it’s not clear how we would apply different textures to different faces within a single draw call. We either need the faces to be separated or we need to use only a single texture.

When we load multiple images into a single texture, we have a *texture atlas*. For a cube, we can use an atlas that is a 4×2 grid of square cells, filling 6 of the cells with the cube’s images. The texture coordinates must be carefully assigned so that each face maps to the right “page” of the atlas.
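For example, here’s one way to compute where a face’s texture coordinates should land in such an atlas. The helper name, grid layout, and corner ordering are assumptions for illustration, not a fixed convention:

```javascript
// Hypothetical helper: compute the four corner texture coordinates of one
// cell in an atlas laid out as a grid. Cell (0, 0) is the bottom-left cell.
function atlasCellCorners(column, row, columns = 4, rows = 2) {
  const cellWidth = 1 / columns;
  const cellHeight = 1 / rows;
  const left = column * cellWidth;
  const bottom = row * cellHeight;
  return [
    [left, bottom],                           // bottom-left
    [left + cellWidth, bottom],               // bottom-right
    [left + cellWidth, bottom + cellHeight],  // top-right
    [left, bottom + cellHeight],              // top-left
  ];
}
```

Each face of the cube would call this with a different column-row pair and assign the four corners to its four vertices.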

Some graphics libraries expect textures to have dimensions that are powers of 2, like 1024×256. We reap several advantages by constraining textures in this way. First, we are guaranteed to have a clean mipmap set since the dimensions will divide by two with no remainder. Second, texture lookups may be faster. Let’s explore a few reasons why that might be.

What is `5 << 1`? In binary, that’s `101 << 1`, which is $1010_2$ or $10_{10}$. Extending this, `5 << 2` is $10100_2$ or $20_{10}$. Shifting binary digits left by $n$ is the same as multiplying by $2^n$. Hang on to that truth for a second.

If you’ve got integer texture coordinates `s` and `t` and width `w`, then you can turn a 2D texel coordinate into a 1D index into a row-major buffer using this equation:

```
i = w * t + s
```

However, if `w` is 2 to the power of `n`, then the multiplication can be replaced by shifting:

```
i = (t << n) + s
```

Shifting in hardware is generally faster to perform than multiplication.
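The equivalence is easy to check in JavaScript. This is a throwaway sketch with arbitrary values, not production code:

```javascript
// With w = 2^n, multiplying the row by w and shifting it left by n give the
// same index. Note the parentheses: in JavaScript, + binds tighter than <<.
const w = 256;  // texture width, a power of 2
const n = 8;    // w === 2 ** n
const s = 37;   // column
const t = 12;   // row
const byMultiply = w * t + s;
const byShift = (t << n) + s;
```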

WebGL allows non-power-of-2 (NPOT) textures, but a call to `gl.generateMipmap` will fail on such textures. The minification filter must not be set to any interpolation based on mipmap levels. Only `gl.LINEAR` and `gl.NEAREST` are supported.

Additionally, one cannot use `gl.REPEAT` or `gl.MIRRORED_REPEAT` for wrapping coordinates on NPOT textures. Only `gl.CLAMP_TO_EDGE` is supported. Let’s explore why this might be. Suppose we have a texture that is 64 texels wide. We have these binary representations of texture coordinate 50 and its successive counterparts in the repetitions that will reduce to 50:

```
50          =   110010
50 + 64 * 1 =  1110010
50 + 64 * 2 = 10110010
50 + 64 * 3 = 11110010
```

We see from these examples that we can reduce the out-of-range texture coordinates down to an in-range texture coordinate with a masking operation. In this case, we want to mask with bits 111111, which is 63 in decimal. In general, we calculate our in-range coordinate like this:

```
texcoord = texcoord & (width - 1)
```

Computing the mask only works when the size is a power of 2, since one less than a power of 2 has all its bits on.
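A quick sketch in JavaScript confirms that the mask behaves like the modulo operator for power-of-2 widths:

```javascript
// 178 is 50 + 64 * 2. Masking with 63 (binary 111111) keeps only the low
// six bits, wrapping the out-of-range coordinate back into [0, 64).
const width = 64;
const texcoord = 178;
const wrapped = texcoord & (width - 1);
```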

If you have an NPOT texture, you have several options:

- Live with it. The graphics API may accept NPOT textures.
- Pad the image to the nearest higher powers of 2 in your graphics editor. The padding wastes disk space and increases download times.
- Pad the image to the nearest higher powers of 2 programmatically at runtime. JavaScript’s `Image` class doesn’t give us much control over pixels. However, the `canvas` element does. We can write this code to draw the NPOT image into a POT canvas:

  ```
  function padToPot(image) {
    const canvas = document.createElement('canvas');
    const context = canvas.getContext('2d');
    canvas.width = powerOfTwoCeiling(image.width);
    canvas.height = powerOfTwoCeiling(image.height);
    context.drawImage(image, 0, 0);
    return context.getImageData(0, 0, canvas.width, canvas.height);
  }
  ```

- Allocate a texture on the GPU whose dimensions are the nearest higher powers of 2 and upload the texels as a sub-image within the texture:

  ```
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
                powerOfTwoCeiling(image.width), powerOfTwoCeiling(image.height),
                0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0,
                   image.width, image.height,
                   gl.RGBA, gl.UNSIGNED_BYTE, image);
  ```

The code in these solutions depends on a `powerOfTwoCeiling` function, which computes the power of 2 that’s greater than or equal to a number by taking the base-2 logarithm, rounding up, and raising 2 to that power. The function can be written as a gauntlet of math functions:

```
function powerOfTwoCeiling(x) {
  return Math.pow(2, Math.ceil(Math.log2(x)));
}
```

Alternatively, if you prefer to overengineer things, there are bit twiddling hacks for finding the leftmost 1-bit, which gets you pretty close to the power of 2.
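For instance, JavaScript’s built-in `Math.clz32` counts a number’s leading zero bits, which lets us round up to a power of 2 without any floating-point math. This variant (the name is my own) handles 32-bit values:

```javascript
function powerOfTwoCeilingBits(x) {
  // Subtracting 1 first makes exact powers of 2 map to themselves. clz32
  // then locates the highest set bit, and the shift rebuilds the power of 2.
  return x <= 1 ? 1 : 1 << (32 - Math.clz32(x - 1));
}
```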

With the padding options, you must adapt the texture coordinates to the new resolution.
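For example, if a 640×480 image is padded to 1024×512, a texture coordinate of 1 must shrink to the fraction of the texture the image actually occupies. A sketch, with an assumed helper name:

```javascript
// Scale each coordinate by the ratio of the image's size to the padded size,
// so that the coordinates span only the occupied corner of the texture.
function adaptTexcoord(s, t, imageWidth, imageHeight, paddedWidth, paddedHeight) {
  return [s * imageWidth / paddedWidth, t * imageHeight / paddedHeight];
}
```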

Someday these limitations on NPOT textures may go away. The practical concerns that led to these limitations might not even be relevant on modern hardware, yet we continue to abide by them for backward compatibility.

The Blinn-Phong lighting model that we’ve implemented produces smooth shading. In some games and movies, we see a different kind of lighting that uses a small number of discrete bands of illumination. This effect is sometimes called *toon shading* or *cel shading* because it mimics the practice of some cartoon animators who painted the animated foreground on sheets of celluloid. Celluloid is transparent and can be overlaid on static and painterly backgrounds. Many cels were needed to orchestrate the animation, and animators saved time by using few colors and no gradients.

In standard diffuse shading, we modulate the surface’s albedo according to the degree of alignment between the normal and the light vector. The dot product—the cosine—that we use to compute this alignment produces a continuous dropoff. To get a discrete dropoff, we can use the dot product as a texture coordinate into a 1D lookup table that gives a small set of “litness” values.

Unlike the full OpenGL, WebGL doesn’t allow 1D textures. But we can create a 2D texture that has a height of 1. We could make our lookup table in an image editor, but it’s also possible to synthesize it programmatically:

```
function loadTable() {
  // Make an array of unsigned bytes with a handful of illumination levels.
  const table = new Uint8Array(128);
  for (let i = 0; i < table.length; i += 1) {
    if (i < 20) {
      table[i] = 0;
    } else if (i < 30) {
      table[i] = 50;
    } else if (i < 70) {
      table[i] = 128;
    } else if (i < 120) {
      table[i] = 200;
    } else {
      table[i] = 255;
    }
  }

  // Upload the texture.
  gl.activeTexture(gl.TEXTURE0);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, table.length, 1, 0, gl.RED, gl.UNSIGNED_BYTE, table);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.generateMipmap(gl.TEXTURE_2D);

  return texture;
}
```

We change our fragment shader to index into this table to determine the litness level:

```
uniform sampler2D table;

const vec3 lightDirection = normalize(vec3(1.0, 1.0, 1.0));
const vec3 albedo = vec3(1.0, 1.0, 1.0);

in vec3 fnormal;
out vec4 fragmentColor;

void main() {
  vec3 normal = normalize(fnormal);
  float litness = max(0.0, dot(normal, lightDirection));
  float level = texture(table, vec2(litness, 0.0)).r;
  fragmentColor = vec4(albedo * level, 1.0);
}
```

Here’s your TODO list:

- Complete your programming assignments. Be sure they are in a Git repository somewhere that you have shared with me. We are starting week 9. Week 15 is your last week to turn in a programming assignment, and you may only turn in one assignment per week.
- For lab on Friday, have a renderer ready that renders both a sphere and a 4-vertex quadrilateral and allows the user to move around with a `Camera`.

See you next time.

Sincerely,

P.S. It’s time for a haiku!

Computers on Mars

They’ll use binary for sure

Because it’s base 2
