
CS 488: Lecture 22 – Spotlights and Projective Texturing

April 12, 2021. Filed under graphics-3d, lectures, spring-2021.

Dear students:

Today we examine two new lighting effects. First, we focus light along a particular direction to produce a spotlight effect. Second, we broadcast an image from a light source, just as our projectors broadcast input from a DVD player or our computer. Such textures are not bound to a particular object, as our previous textures were. They cover the entire scene, span across objects, and move as the light source moves.

Not every graphics application will have need of an in-game projector. However, the projection mechanism we explore today is the very same one we’ll use soon to implement shadows.

Spotlight

The essential properties of a spotlight are its position, its direction, and its spread factor. To give a spotlight effect, we must decide in the fragment shader how much the spotlight illuminates each fragment.

We did most of our earlier shading calculations in eye space. If we are wearing the spotlight, perhaps in the form of a flashlight, then eye space is a good choice. If the spotlight has a fixed position in the world, then world space is a better choice. Let’s use world space.

A fragment is fully illuminated by the spotlight if it lies in the spotlight’s line of sight. As it moves away from the line of sight, its illumination diminishes. To capture the degree of alignment, we’ll call upon the dot product to measure the angle between the light direction vector and the vector from the fragment to the light source. With the light direction stored pointing from the scene toward the light, perfect alignment yields 1:

vec3 positionToLight = normalize(lightPositionWorld - positionWorld); 
float spotAlignment = dot(positionToLight, lightDirectionWorld);

The dot product gives us back a value in [-1, 1]. We clamp negative values to 0 to prevent the spotlight from sucking color out of our scene:

float spottedness = max(0.0, spotAlignment);

We can use spottedness to modulate our normal surface color or to selectively contribute extra light. Let’s just modulate for the time being with something like this:

fragmentColor = vec4(rgb * spottedness, 1.0);

To achieve a spot that doesn’t fade toward its edges, we use the step function instead of max:

float spottedness = step(0.7, spotAlignment);

To achieve a spot that fades in a way that we can control, we can raise the alignment to a power or use the smoothstep function:

float spottedness = pow(max(0.0, spotAlignment), attenuationFactor);
float spottedness = smoothstep(0.3, 0.7, spotAlignment);
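
Putting these pieces together, a minimal spotlight fragment shader might look like the sketch below. The names lightPositionWorld, lightDirectionWorld, positionWorld, and rgb come from the snippets above; albedo is a stand-in for whatever base surface color the rest of your shading computes.

#version 300 es
precision mediump float;

uniform vec3 lightPositionWorld;   // spotlight position in world space
uniform vec3 lightDirectionWorld;  // unit vector pointing from the scene toward the light
uniform vec3 albedo;               // stand-in for the base surface color computed elsewhere

in vec3 positionWorld;             // fragment position in world space
out vec4 fragmentColor;

void main() {
  // How well does this fragment line up with the spotlight's line of sight?
  vec3 positionToLight = normalize(lightPositionWorld - positionWorld);
  float spotAlignment = dot(positionToLight, lightDirectionWorld);

  // Fade from darkness to full illumination across a band of alignments.
  float spottedness = smoothstep(0.3, 0.7, spotAlignment);

  // Modulate the surface color by the spotlight's contribution.
  vec3 rgb = albedo;
  fragmentColor = vec4(rgb * spottedness, 1.0);
}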

A spotlight just broadcasts a circle of photons. Let’s next look at having it broadcast an image.

Light as Eye

Imagine a bat signal has been turned on. Light pours out of a spotlight and is blocked by a filter that is shaped like a bat. The remaining light spills out on nearby buildings, clouds, trees, and so on. How do we figure out which fragments should pick up the broadcast signal? The key idea is to determine where the fragment appears in relation to the projected image. We’ve done something like this already. We calculated where a fragment projects on the camera’s image plane and then plotted the fragment’s color in that position. With projective texturing, we calculate where a fragment projects on the texture and then add the texture’s color to that fragment.

On the CPU

We calculated the projected position on the framebuffer with this gauntlet of matrix transformations:

clipPosition = clipFromEye * eyeFromWorld * worldFromModel * position;

After the perspective divide, we landed in the normalized device coordinate space that WebGL expects. We employ a similar gauntlet for projective texturing, but instead of using the eye’s position and focal direction to determine the eyeFromWorld matrix, we use the light source’s position and direction. Instead of converting eye space coordinates into normalized device coordinates, we want to project into texture coordinate space, which spans [0, 1].

Our gauntlet should look something like this:

texcoords = textureFromLight * lightFromWorld * worldFromModel * position;

We don’t have a matrix routine that maps eye/light space to texture space. Instead of building a new one, we can first land in the traditional [-1, 1] space and then scale by (0.5, 0.5, 1) and translate by (0.5, 0.5, 0) to arrive in the [0, 1] space. The corner (-1, -1), for example, scales to (-0.5, -0.5) and then translates to (0, 0), while (1, 1) stays at (1, 1). Our revised gauntlet looks like this:

texcoords =
  Matrix4.translate(0.5, 0.5, 0) *
  Matrix4.scale(0.5, 0.5, 1) *
  clipFromLight *
  lightFromWorld *
  worldFromModel *
  position;

This looks a little messy with so many stages. Since this matrix is only needed for projective texturing, we can pre-multiply it once for each object and upload it to the GPU as a single mat4 uniform. For the regular camera, we often keep the clipFromEye and eyeFromModel matrices separate. That’s because we want to stop and do some work in eye space. There’s nothing that we need to do in light space. Our code might look something like this:

const lightCamera = Camera.lookAt(lightPosition, lightTarget, new Vector3(0, 1, 0));
const matrices = [
  Matrix4.translate(0.5, 0.5, 0),
  Matrix4.scale(0.5, 0.5, 1),
  Matrix4.fovPerspective(45, 1, 0.1, 1000),
  lightCamera.matrix,
  worldFromModel,
];
const textureFromModel = matrices.reduce((accum, transform) => accum.multiplyMatrix(transform));

shaderProgram.bind();
shaderProgram.setUniform('textureFromModel', textureFromModel);

This is all that needs to happen on the CPU, besides the usual tasks of uploading the model and texture.

On the GPU

The rest of the work happens in the shaders. In the vertex shader, we transform the model space coordinates and put them in texture space. Since the matrix transformation involves perspective, we need to make sure that we retain all four components of the vector that we send along to the fragment shader. We use a vec4, like this:

uniform mat4 textureFromModel;
out vec4 signalCoords;

void main() {
  // ...
  signalCoords = textureFromModel * position;
}
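
For context, here’s one way the complete vertex shader might read once the projective coordinates are folded in. This is only a sketch; clipFromEye and eyeFromModel follow the earlier gauntlet and may be named or combined differently in your renderer.

#version 300 es

uniform mat4 clipFromEye;       // camera projection, as in the earlier gauntlet
uniform mat4 eyeFromModel;      // camera view and model transform, pre-multiplied
uniform mat4 textureFromModel;  // maps model space into the light's texture space

in vec4 position;               // model space vertex position

out vec4 signalCoords;          // projective texture coordinates, still homogeneous

void main() {
  // The usual route to the screen through the camera's matrices.
  gl_Position = clipFromEye * eyeFromModel * position;

  // The route into the light's texture. The perspective divide waits for the fragment shader.
  signalCoords = textureFromModel * position;
}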

In the fragment shader, we receive the interpolated signalCoords, perform the perspective divide to fully land us in texture space, and then use these coordinates to reach into the texture and pull out the color from the bat signal. To add that light onto the existing surface color, we might write something like this:

uniform sampler2D signal;
in vec4 signalCoords;

void main() {
  // ...
  vec3 signalColor = texture(signal, signalCoords.xy / signalCoords.w).rgb;
  fragmentColor = vec4(rgb + signalColor, 1.0);
}

GLSL provides an alternative texture lookup function that will perform the perspective divide on our behalf:

vec3 signalColor = textureProj(signal, signalCoords).rgb;

One consequence of our implementation is that we also get a projection of the texture on the back side of the light source. To suppress that, we shut off any contribution when the w-component goes negative:

vec3 signalColor = signalCoords.w > 0.0 ? textureProj(signal, signalCoords).rgb : vec3(0.0);
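
Assembled, the fragment shader side of projective texturing might read like the sketch below, with the back-projection guard in place. The rgb value is a placeholder for whatever ordinary shading the rest of the shader performs.

#version 300 es
precision mediump float;

uniform sampler2D signal;  // the image broadcast by the light
in vec4 signalCoords;      // homogeneous texture coordinates from the vertex shader
out vec4 fragmentColor;

void main() {
  // Placeholder for the surface's ordinary shading.
  vec3 rgb = vec3(0.5);

  // Look up the broadcast color, letting textureProj do the perspective divide.
  // Fragments behind the light have a negative w-component and receive no signal.
  vec3 signalColor = signalCoords.w > 0.0
    ? textureProj(signal, signalCoords).rgb
    : vec3(0.0);

  // Add the light's broadcast onto the existing surface color.
  fragmentColor = vec4(rgb + signalColor, 1.0);
}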

Conclusion

Surfaces get a lot of attention in computer graphics, but it’s light that we see, not the surfaces. Often the surfaces bounce light into our eyes, but movies, mirages, and reflections show us that light can make things exist that don’t have physical surfaces. Projective texturing is the light source putting its own story into the world.

See you next time.

Sincerely,

P.S. It’s time for a haiku!

That bunny’s fake too
Else why would it disappear
When the light turns off