CS 455 Lecture 25 – Projective Texturing
Agenda
- what ?s
- projective texturing
TODO
As a lab exercise:
- Sync with the class repository to get the `batsignal` starter project.
- Run it and explore the scene. Notice the cone in the center of the cube. That’s our light source. It will project a bat signal onto the walls of the surrounding cube.
- Let’s first make our signal orientable. In a sense, a projective light source is like a camera, but instead of sucking up light on a rectangle in front of it, it emits light from a rectangle into the scene. We can continue to use the camera mechanic to model the light’s orientation. Add a second “light camera” instance of the `Camera` class. Situate it at the origin, have it look down the negative z-axis, and have it point up.
- Map cursor key events to change the light camera’s yaw and pitch. LEFT should yaw by a positive value and UP should pitch by a positive value.
- Now we can use the light camera’s view matrix to orient the cone. How do we do that? Well, let’s try just multiplying the matrices we have. In `OnDraw`, alter the cone’s `modelview` matrix to be the regular camera’s view matrix multiplied by the light camera’s view matrix. Run your code. What happens when you hit the cursor keys?
- The light camera’s view matrix seems to act opposite to our intention. But don’t alter your key mapping! To align our cone with its camera, we need to multiply by the transpose of the light camera’s view matrix. The `Matrix4` class has a `GetTranspose` method that yields a matrix’s transpose. Verify that this produces the expected behavior.
- Now we’re ready to add the “film” to be projected from our light camera! In `OnInitialize`, load in as an `Image` the `batsignal.ppm` from the `MODELS_DIR`.
- Create a bat signal texture and upload the image to it. It’ll need three color channels:

  ```
  bat_signal_texture->Channels(Texture::RGB);
  bat_signal_texture->Upload(image->GetWidth(), image->GetHeight(), image->GetPixels());
  ```
- Add a uniform for this texture to the box’s fragment shader, and upload it in `OnDraw`.
- Now, what about texture coordinates? We need to discover where a vertex lands in the light camera’s “viewport.” If we’re on the bottom left, then we want texture coordinate (0, 0). If we’re on the top right, we want texture coordinate (1, 1). We can get close to this if we consider that the light is a camera. If we multiply the scene’s coordinates by its view matrix, we’ll put them into “light eye space,” which is like regular eye space but with the light at the center of the scene. If we further multiply by the camera’s projection matrix (which describes the light’s “lens”: what field of view does it have, how far does it see, etc.), we’ll land in “light clip space,” which will ultimately put us in a [-1, 1] space. That’s pretty close to the [0, 1] space that we want for texture coordinates. We can further scale by 0.5 to turn [-1, 1] into [-0.5, 0.5]. We can then translate by 0.5 to land in [0, 1]. All told then, we need a transform composed as follows:

  ```
  Matrix4 model_to_tex =
    translate by (0.5, 0.5, 0.5) *
    scale by (0.5, 0.5, 0.5) *
    Matrix4::GetPerspective(fov, texture_aspect_ratio, 0.1f, 1000.0f) *
    light's view matrix *
    object's model-to-world transform;
  ```
  Compute this transform in `OnDraw` for the box. Upload it as a uniform to its vertex shader. The texture is 1024×512, so its aspect ratio is 2. I used 80 for the field of view. In our case, the box’s model coordinates are identical to its world coordinates, so the last matrix in the expression can be omitted.
- In `v.glsl`, compute texture coordinates for the fragment shader by applying the `model_to_texture` matrix to the vertex’s model space coordinates. The result is a `vec4`.
- Receive the texture coordinates in the fragment shader. How do we turn a `vec4` into a 2-D texture coordinate? Recall that with a regular camera/projection transformation, we do a perspective divide by the homogeneous coordinate, which gives us three values: (pixel column, pixel row, depth). So, we can apply the perspective divide ourselves, or we can let the GPU do it for us by using the `texture2DProj` function:

  ```
  texture2DProj(texture, someVec4);
  ```
  Modulate the resulting color by `litness` and add it into the fragment’s color. How do things look?
- Hopefully you said terrible. Fragments outside the light camera’s viewport are getting assigned texture coordinates, and we see too many bat signals! Let’s clamp sampling at the edges of the light’s viewport so that everything outside it gets the (black) border color:

  ```
  bat_signal_texture->Wrap(Texture::CLAMP_TO_BORDER);
  ```

  Now how do things look?
- There are still too many bat signals! There’s a back projection that also sneaks in on the opposing wall. Batman’s Bizarro will appear if we don’t fix this. The back projection happens because of the way we are treating the light source as a camera. With our normal camera, we don’t see this because clipping prevents back-projected geometry from entering our scene. We can do some manual clipping in the fragment shader. If the homogeneous coordinate is negative, our fragment is in the back projection, and we can silence it:

  ```
  vec3 signal = ftexcoord.w < 0.0 ? vec3(0.0) : texture2DProj(...).rgb;
  ```
- How do things look now?
- Email me a screenshot of your projected signal.
Haiku
on “folktography”
Cameras eat souls
But lights generate new ones
So I use my flash