
CS 455 Lecture 25 – Projective Texturing

May 12, 2015. Filed under cs455, lectures, spring 2015.

Agenda

TODO

As a lab exercise:

  1. Sync with the class repository to get the batsignal starter project.
  2. Run it and explore the scene. Notice the cone in the center of the cube. That’s our light source. It will project a bat signal onto the walls of the surrounding cube.
  3. Let’s first make our signal orientable. In a sense, a projective light source is like a camera, but instead of sucking up light on a rectangle in front of it, it emits light from a rectangle into the scene. We can continue to use the camera mechanics to model the light’s orientation. Add a second “light camera” instance of the Camera class. Situate it at the origin, have it look down the negative z-axis, and have it point up.
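
    A minimal sketch of that setup; the Camera constructor shown here (eye, focus, up) and the Vector3 type are assumptions, so adapt them to the class’s actual interface:

    // Hypothetical constructor: eye at the origin, focus down the
    // negative z-axis, up along the positive y-axis.
    Camera light_camera(Vector3(0.0f, 0.0f, 0.0f),
                        Vector3(0.0f, 0.0f, -1.0f),
                        Vector3(0.0f, 1.0f, 0.0f));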
  4. Map cursor key events to change the light camera’s yaw and pitch. LEFT should yaw a positive value and UP should pitch a positive value.
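
    The event plumbing varies with the windowing layer; as a sketch, assuming a key callback, arrow-key constants, and Yaw/Pitch methods on Camera that take degrees:

    void OnKeyDown(int key) {
      const float DEGREES = 2.0f;  // arbitrary step size
      if (key == KEY_LEFT) {
        light_camera.Yaw(DEGREES);
      } else if (key == KEY_RIGHT) {
        light_camera.Yaw(-DEGREES);
      } else if (key == KEY_UP) {
        light_camera.Pitch(DEGREES);
      } else if (key == KEY_DOWN) {
        light_camera.Pitch(-DEGREES);
      }
    }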
  5. Now we can use the light camera’s view matrix to orient the cone. How do we do that? Well, let’s try just multiplying the matrices we have. In OnDraw, alter the cone’s modelview matrix to be the regular camera’s view matrix multiplied by the light camera’s view matrix. Run your code. What happens when you hit the cursor keys?
  6. The light camera’s view matrix seems to act opposite to our intention. But don’t alter your key mapping! To align our cone with its camera, we need to multiply by the transpose of the light camera’s view matrix. Why the transpose? The rotation part of a view matrix is orthonormal, so its transpose is its inverse, and the inverse carries the light’s eye space back out into the world, pointing the cone wherever the light camera looks. The Matrix4 class has a GetTranspose method that yields a matrix’s transpose. Verify that this produces the expected behavior.
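
    In OnDraw, that might look like the following; GetViewMatrix is an assumed accessor name, while GetTranspose comes from Matrix4:

    // The view matrix rotates world space into the light's eye space.
    // Its transpose undoes that rotation, orienting the cone the way
    // the light camera looks.
    Matrix4 modelview =
      camera.GetViewMatrix() * light_camera.GetViewMatrix().GetTranspose();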
  7. Now we’re ready to add the “film” to be projected from our light camera! In OnInitialize, load batsignal.ppm from the MODELS_DIR as an Image.
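
    A sketch of the load; the Image constructor taking a path is an assumption:

    // Assumes MODELS_DIR is a std::string holding the models directory.
    Image *image = new Image(MODELS_DIR + "/batsignal.ppm");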
  8. Create a bat signal texture and upload the image to it. It’ll need three color channels:
    // The PPM has red, green, and blue channels but no alpha.
    bat_signal_texture->Channels(Texture::RGB);
    // Hand the pixel data off to the GPU.
    bat_signal_texture->Upload(image->GetWidth(), image->GetHeight(), image->GetPixels());
  9. Add a uniform for this texture to the box’s fragment shader, and upload it in OnDraw.
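
    On the GLSL side, the uniform is an ordinary sampler declaration (the name bat_signal is just an example); binding it in OnDraw goes through whatever uniform-setting call the framework provides:

    uniform sampler2D bat_signal;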
  10. Now, what about texture coordinates? We need to discover where a vertex lands in the light camera’s “viewport.” If we’re on the bottom left, then we want texture coordinate (0, 0). If we’re on the top right, we want texture coordinate (1, 1). We can get close to this if we consider that the light is a camera. If we multiply the scene’s coordinates by its view matrix, we’ll put them into “light eye space”—which is like regular eye space but which has the light at the center of the scene.

    If we further multiply by the camera’s projection matrix (which describes the light’s “lens”: what field of view does it have, how far does it see, etc.), we’ll land in “light clip space”—which will ultimately land us in a [-1, 1] space. That’s pretty close to the [0, 1] space that we want for texture coordinates.

    We can further scale by 0.5 to turn [-1, 1] into [-0.5, 0.5]. We can then translate by 0.5 to land in [0, 1]. All told then, we need a transform composed as follows:
    Matrix4 model_to_tex =
      translate by (0.5, 0.5, 0.5) *
      scale by (0.5, 0.5, 0.5) *
      Matrix4::GetPerspective(fov, texture_aspect_ratio, 0.1f, 1000.0f) *
      light's view matrix *
      object's model-to-world transform;

    Compute this transform in OnDraw for the box. Upload it as a uniform to its vertex shader. The texture is 1024×512. I used 80 for the field of view. In our case, the box’s model coordinates are identical to its world coordinates, so the last matrix in the expression can be omitted.
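
    If your Matrix4 offers translate and scale factories, the composition might look like the following; GetTranslate and GetScale are assumptions, so substitute whatever your Matrix4 actually provides:

    // The texture is 1024x512, and model coordinates equal world
    // coordinates here, so the model-to-world matrix is omitted.
    float texture_aspect_ratio = 1024.0f / 512.0f;
    Matrix4 model_to_tex =
      Matrix4::GetTranslate(0.5f, 0.5f, 0.5f) *
      Matrix4::GetScale(0.5f, 0.5f, 0.5f) *
      Matrix4::GetPerspective(80.0f, texture_aspect_ratio, 0.1f, 1000.0f) *
      light_camera.GetViewMatrix();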

  11. In v.glsl, compute texture coordinates for the fragment shader by applying the model_to_tex matrix to the vertex’s model space coordinates. The result is a vec4.
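
    A sketch of v.glsl; apart from model_to_tex, the uniform and attribute names are assumptions:

    uniform mat4 projection;
    uniform mat4 modelview;
    uniform mat4 model_to_tex;

    attribute vec3 position;

    varying vec4 ftexcoord;

    void main() {
      gl_Position = projection * modelview * vec4(position, 1.0);
      // Where does this vertex land in the light's [0, 1] viewport?
      ftexcoord = model_to_tex * vec4(position, 1.0);
    }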
  12. Receive the texture coordinates in the fragment shader. How do we turn a vec4 into a 2-D texture coordinate? Recall that with a regular camera/projection transformation, we do a perspective divide by the homogeneous coordinate, which gives us three values: (pixel column, pixel row, depth). So, we can apply the perspective divide ourselves, or we can let the GPU do it for us with the texture2DProj function:
    texture2DProj(texture, someVec4);
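
    The manual equivalent divides through by the homogeneous coordinate ourselves (ftexcoord and bat_signal follow the sketches above):

    vec2 texcoords = ftexcoord.st / ftexcoord.w;
    vec4 signal_color = texture2D(bat_signal, texcoords);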

    Modulate the resulting color by litness and add it into the fragment’s color. How do things look?

  13. Hopefully you said terrible. Fragments outside the light camera’s viewport are getting assigned texture coordinates, and we see too many bat signals! Let’s clamp texture lookups that fall outside the light’s viewport to the texture’s border color:
    bat_signal_texture->Wrap(Texture::CLAMP_TO_BORDER);
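
    In raw OpenGL terms, that wrap mode amounts to the following; the default border color is transparent black, which is what silences the out-of-viewport lookups:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);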

    Now how do things look?

  14. There are still too many bat signals! There’s a back projection that also sneaks in on the opposing wall. Batman’s Bizarro will appear if we don’t fix this. The back projection happens because we’re treating the light source as a camera. With our normal camera, we don’t see this because clipping prevents back-projected geometry from entering our scene. We can do some manual clipping in the fragment shader. If the homogeneous coordinate is negative, our fragment is in the back projection, and we can silence it:
    vec3 signal = ftexcoord.w < 0.0 ? vec3(0.0) : texture2DProj(...).rgb;
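
    Putting steps 12 through 14 together, the fragment shader might look something like this; litness is assumed to arrive from the vertex shader, and base_color stands in for whatever surface color your shader already computes:

    uniform sampler2D bat_signal;

    varying vec4 ftexcoord;
    varying float litness;  // assumed diffuse term

    void main() {
      // A negative homogeneous coordinate marks the back projection.
      vec3 signal = ftexcoord.w < 0.0 ?
        vec3(0.0) : texture2DProj(bat_signal, ftexcoord).rgb;
      vec3 base_color = vec3(0.8);  // stand-in surface color
      gl_FragColor = vec4(litness * (base_color + signal), 1.0);
    }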
  15. How do things look now?
  16. Email me a screenshot of your projected signal.

Haiku

on “folktography”
Cameras eat souls
But lights generate new ones
So I use my flash