# teaching machines

## CS 488: Lecture 24 – Shadow Mapping

April 21, 2021. Filed under graphics-3d, lectures, spring-2021.

Dear students:

Today is the culmination of many topics we’ve discussed earlier: perspective projections, projective texturing, and rendering to textures. It’s also a day I dread. We are talking about implementing shadows. Shadows are important to realism. They communicate size and depth, and our visual system is accustomed to having them to reinforce or challenge our perception. But they don’t come naturally in WebGL, and getting them right takes finessing.

Psychology researchers have investigated how shadows influence our perception, and the field of computer graphics has benefited from some of their findings. Researcher Daniel Kersten offers some videos of experiments designed to explore how we perceive depth. If you don’t have time to watch the whole 12-minute video, the snippets at 1:28 and 9:20 are good representatives.

### Shadow Mapping at a Glance

Our renderings do not currently include shadows because when we calculate lighting, we consider only two things: the fragment, whose normal hints at the surrounding surface’s orientation, and the light source. Nowhere do we ask any questions about objects that might occlude the fragment and put it in shadow. How do we start asking such questions?

My intuition tells me to cast a ray from the fragment to the light source and see if it hits any other object. That doesn’t really fit the WebGL model, in which each object is treated independently. It also requires ray-mesh intersection tests that will be expensive to run per fragment.

Our computer graphics forebears, namely Lance Williams (the same individual who invented mipmapping), devised a faster algorithm that requires two passes through the scene. The shadow mapping algorithm looks something like this in WebGL terms:

1. Create an FBO with a depth attachment only.
2. Whenever the light source moves, render the scene to the FBO from the light’s perspective.
3. Render the scene to the default framebuffer.
4. Project each fragment into texture space. All the fragments on the line of sight from the light to this fragment will project to the same texel. The texture lookup gives the depth of the fragment that is closest to the light.
5. Compare the fragment’s depth with the closest depth. If the fragment’s depth is greater, then it lies behind the closest fragment and is therefore in shadow. We render the fragment darker.

In an ideal world, we’d follow this algorithm and find perfect shadows. That won’t be the case for the world we actually have. We’ll walk through the algorithm and then fix a handful of issues.
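The heart of step 5 is a single comparison. Here’s a CPU sketch in plain JavaScript with made-up depth values; the `isShadowed` helper is hypothetical, purely for illustration:

```javascript
// closestDepth is what the depth map recorded along this line of sight;
// fragmentDepth is the current fragment's depth as seen from the light.
function isShadowed(fragmentDepth, closestDepth) {
  // A fragment farther from the light than the recorded surface is occluded.
  return fragmentDepth > closestDepth;
}

console.log(isShadowed(0.7, 0.4)); // true: fragment lies behind the recorded surface
console.log(isShadowed(0.4, 0.4)); // false: this is the recorded surface itself
```

As we’ll see below, the “sometimes equal, sometimes not” case at the boundary is exactly where the trouble starts.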

### Create an FBO

The first step is to create a depth texture, which we call a depth map. WebGL doesn’t currently support linear interpolation of depth textures, which is unfortunate.

    function reserveDepthTexture(width, height, unit = gl.TEXTURE0) {
      gl.activeTexture(unit);
      const texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, height, 0, gl.DEPTH_COMPONENT, gl.FLOAT, null);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return texture;
    }


This depth map will be the sole attachment of a framebuffer object, which we create with this function:

    function initializeFbo(depthTexture) {
      const framebuffer = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      return framebuffer;
    }


We have no need of a color attachment. We only care about recording the depth of the first fragment that the light source sees, not its color. Both of these functions get called in our one-time setup:

    const depthTexture = reserveDepthTexture(mapSize, mapSize, gl.TEXTURE0);
    framebuffer = initializeFbo(depthTexture);


### Render to Depth Map

The next step is to render the scene from the point of view of the light source. We use the light’s position and direction to construct our camera. We omit any calls relating to color. Our render function might look something like this:

    function renderMap(width, height, framebuffer) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

      gl.viewport(0, 0, width, height);
      gl.clear(gl.DEPTH_BUFFER_BIT);

      const lightCamera = Camera.lookAt(
        light.position,
        new Vector3(0, 0, 0),
        new Vector3(0, 1, 0)
      );
      const clipFromWorld = light.lightFromEye.multiplyMatrix(lightCamera.matrix);

      mapProgram.bind();
      for (let shape of shapes) {
        mapProgram.setUniformMatrix4('clipFromModel', clipFromWorld.multiplyMatrix(shape.matrix));
        shape.mapArray.bind();
        shape.mapArray.drawIndexed(gl.TRIANGLES);
        shape.mapArray.unbind();
      }
      mapProgram.unbind();

      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    }


The mapProgram has a simpler job than a normal rendering shader. All it has to do is transform the vertices. We still have to write the fragment color, but the color will be thrown away.

    function loadMapShader() {
      // The in/out qualifiers below are GLSL ES 3.00, so each source leads
      // with a #version directive.
      const vertexSource = `#version 300 es
      uniform mat4 clipFromModel;
      in vec4 position;

      void main() {
        gl_Position = clipFromModel * position;
      }
      `;

      const fragmentSource = `#version 300 es
      precision mediump float;

      out vec4 fragmentColor;

      void main() {
        fragmentColor = vec4(1.0);
      }
      `;

      // compile and link the two sources into mapProgram
      // ...
    }


We render to our texture once during initialization so that the texture is in place for the rendering to the default framebuffer.

    const depthTexture = reserveDepthTexture(mapSize, mapSize, gl.TEXTURE0);
    framebuffer = initializeFbo(depthTexture);
    renderMap(mapSize, mapSize, framebuffer);


### Project Fragments into Texture Space

Now that the depth map is recorded, we are ready to draw the scene to the default framebuffer. We need a matrix that will take each fragment into the depth map’s texture space. This matrix is the same one we used to achieve projective texturing.

    const lightCamera = Camera.lookAt(
      light.position,
      new Vector3(0, 0, 0),
      new Vector3(0, 1, 0)
    );

    const textureFromWorld =
      Matrix4.translate(0.5, 0.5, 0.5).multiplyMatrix(
        Matrix4.scale(0.5, 0.5, 0.5).multiplyMatrix(
          light.lightFromEye.multiplyMatrix(
            lightCamera.matrix
          )
        )
      );

    renderProgram.bind();
    renderProgram.setUniform1i('depthMap', 0);
    // set up uniforms
    for (let shape of shapes) {
      renderProgram.setUniformMatrix4('textureFromModel', textureFromWorld.multiplyMatrix(shape.matrix));
      // draw each shape
    }
    renderProgram.unbind();
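The translate-and-scale pair at the front of `textureFromWorld` is just the affine map from normalized device coordinates in [-1, 1] to texture space in [0, 1]. A scalar sketch (the `ndcToTexture` helper is hypothetical, only for illustration):

```javascript
// Scale by 0.5, then translate by 0.5: [-1, 1] → [0, 1].
// The real matrix applies this to x, y, and z at once.
function ndcToTexture(ndc) {
  return ndc * 0.5 + 0.5;
}

console.log(ndcToTexture(-1)); // 0:   the left/bottom/near edge of the frustum
console.log(ndcToTexture(0));  // 0.5: the center
console.log(ndcToTexture(1));  // 1:   the right/top/far edge
```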


Our shader programs project each fragment into texture space. Our vertex shader does the initial transformation:

    uniform mat4 textureFromModel;
    out vec4 ftexcoords;
    // ...

    void main() {
      // ...
      ftexcoords = textureFromModel * position;
    }


The fragment shader receives the interpolated coordinates and manually performs the perspective divide to land us in the [0, 1] texture space:

    // ...
    in vec4 ftexcoords;

    void main() {
      vec3 texturePosition = ftexcoords.xyz / ftexcoords.w;
      float depth = texturePosition.z;
      // ...
    }
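The perspective divide itself is simple enough to check on the CPU. This sketch (the `perspectiveDivide` helper is made up for illustration) divides a 4-vector by its w component, which is what drops us from the projected homogeneous coordinates into the depth map’s [0, 1] space:

```javascript
// Divide a homogeneous coordinate [x, y, z, w] by w.
function perspectiveDivide([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

const texturePosition = perspectiveDivide([1, 2, 3, 4]);
console.log(texturePosition); // [0.25, 0.5, 0.75]
```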


As a debugging check, we color each fragment according to its texture coordinates:

    fragmentColor = vec4(texturePosition.xy, 0.0, 1.0);
    return;


We should see black, green, red, and yellow corners. To restrict ourselves to just the frustum that the light sees, we can add some inequality checks:

    bool isInFrustum = texturePosition.x >= 0.0 &&
                       texturePosition.x <= 1.0 &&
                       texturePosition.y >= 0.0 &&
                       texturePosition.y <= 1.0;
    fragmentColor = isInFrustum ? vec4(texturePosition.xy, 0.0, 1.0) : vec4(1.0);
    return;


The projected texture coordinates are used to look up the depth of the fragment closest to the light source. If that stored value is smaller than the current fragment’s depth, the current fragment is in shadow. We compute a shadowedness factor that determines how much the lighting should be muted when the fragment is not the closest thing to the light source:

    uniform sampler2D depthMap;
    // ...

    void main() {
      // ...
      float closestDepth = texture(depthMap, texturePosition.xy).r;
      float shadowedness = (isInFrustum && closestDepth <= depth) ? 0.5 : 1.0;
      // ...
      fragmentColor = vec4(ambient + diffuse * shadowedness, 1.0);
    }


### Increasing Resolution

We may have shadows, but the scene looks terrible. The depth map is a finite raster. A single texel in this raster is likely to span many pixels in the framebuffer, so we see blocky artifacts. Our first fix is to bump up the texture resolution.

### Bias

Increasing the resolution smooths out the shadow’s edges, but we still see a lot of what is called shadow acne. Where does the shadow acne appear? Not in the shadows, but on the fragments that are closest to the light source. The number of bits in the texture is finite, which leads to information loss. When we compare a nearest fragment’s depth to the texture’s depth, sometimes it wins and sometimes it loses. One hack is to apply a slight bias to the computed depth:

    float depth = texturePosition.z - 0.005;


This will pull each fragment a little bit closer to the light than it really is.
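Here’s the effect in plain JavaScript, with hypothetical numbers: a lit fragment whose stored and computed depths differ only by quantization noise flickers between lit and shadowed; the bias (the same 0.005 constant as above) pulls the fragment toward the light before the comparison:

```javascript
const bias = 0.005;
const storedDepth = 0.5;      // what the depth map recorded for this surface
const computedDepth = 0.5001; // the same surface, off by precision error

// Without the bias, the fragment falsely loses to itself (acne).
const shadowedWithoutBias = storedDepth <= computedDepth;
// With the bias, the fragment comfortably wins.
const shadowedWithBias = storedDepth <= computedDepth - bias;

console.log(shadowedWithoutBias); // true: false self-shadowing
console.log(shadowedWithBias);    // false: correctly lit
```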

### Culling Front Faces

Another strategy for avoiding these precision fights is to render only the backfaces of the models into the depth map:

    function renderMap(width, height, framebuffer) {
      gl.cullFace(gl.FRONT);
      // ...
      gl.cullFace(gl.BACK);
    }


By rendering only the backfaces, the recorded depth will generally be greater and there will be fewer imprecise toss-ups.
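A numeric sketch of why this helps, with hypothetical depths for a closed object: suppose the light sees the object’s front surface at depth 0.40 and its back surface at 0.45. Lit fragments live on the front surface near 0.40:

```javascript
const frontDepth = 0.40;
const backDepth = 0.45;
const jitter = 0.002; // precision noise in the fragment's computed depth

// Comparing against a front-face record is a toss-up: noise decides the outcome.
console.log(frontDepth + jitter > frontDepth); // true → false self-shadowing

// Comparing against a back-face record leaves plenty of margin.
console.log(frontDepth + jitter > backDepth); // false → correctly lit
```

The imprecision hasn’t gone away; it has just been pushed to the back surfaces, which are in shadow anyway.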

### Percentage Closer Filtering

The shadows’ edges may still appear pixelated, and we may not be able to increase the resolution of our depth map further. Even if they aren’t pixelated, the edges may appear harsh. Real shadows have two regions: an umbra that is completely occluded and a penumbra that “sees” some of the light source but not all of it. The penumbra appears as a gradient from the dark umbra to the unoccluded surrounding surfaces.

We can achieve a penumbra by sampling the depth map in multiple places around each fragment, testing our depth against each one and counting what percentage of the samples are shadowed. Here we take 9 samples in a 3×3 neighborhood:

    float percentage = 0.0;
    for (int y = -1; y <= 1; y += 1) {
      for (int x = -1; x <= 1; x += 1) {
        float closestDepth = texture(depthMap, texturePosition.xy + vec2(x, y) / 1024.0).r;
        percentage += (isInFrustum && closestDepth <= depth) ? 0.5 : 1.0;
      }
    }
    float shadowedness = percentage / 9.0;


We assume the depth map is 1024×1024 when determining how far to reach to sample the neighboring texels. We’ve used textureOffset to sample neighbors in the past. This function only works if the displacement is known at compile time. If we really want to use textureOffset, we could unroll this loop so that the offsets are compile-time constants.
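The averaging itself is easy to verify on the CPU. This JavaScript sketch (the `pcfShadowedness` helper is made up for illustration) mirrors the loop above: each sample contributes 0.5 if it shadows the fragment and 1.0 if not, matching the single-sample shadowedness factor:

```javascript
function pcfShadowedness(fragmentDepth, samples) {
  let sum = 0.0;
  for (const closestDepth of samples) {
    sum += closestDepth <= fragmentDepth ? 0.5 : 1.0;
  }
  return sum / samples.length;
}

// A fragment on a shadow edge: 3 of its 9 neighboring texels occlude it.
const samples = [0.3, 0.3, 0.3, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9];
console.log(pcfShadowedness(0.5, samples)); // (3 * 0.5 + 6 * 1.0) / 9 ≈ 0.833
```

Fragments deep in the umbra average to 0.5, fully lit fragments to 1.0, and edge fragments land somewhere in between, producing the gradient of the penumbra.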

### Conclusion

Whew. Shadow mapping requires a fair bit of parameter twiddling to get right. The orientation of the light source relative to the surface, the texture size, and the precision of the texture must all unite in perfect harmony for the shadows to not be distracting. This often doesn’t happen in games. Shadow artifacts are common.

See you next time.

Sincerely,

P.S. It’s time for a haiku!

I finished my game
Peter Pan vs. Himself