teaching machines

CS 455 Lecture 26 – Shadows

May 14, 2015. Filed under cs455, lectures, spring 2015.



As a lab exercise:

  1. Sync with the class repo to get the shadows starter project. Render the scene to find three rotating spheres.
  2. We first need a texture to hold a picture of the scene from the light’s point of view. Instead of storing color, however, we only need to store a texel’s depth from the light source. In OnInitialize, create a depth_from_light_texture. Set its channels to Texture::DEPTH instead of the usual RGB or GRAYSCALE. We don’t have pixel data for it yet, but we need to make space to render into. Allocate it to be FBO_SIZE by FBO_SIZE pixels.
  3. Create a FramebufferObject depth_from_light_fbo. Pass the depth texture to its constructor.
  4. Now we move to OnDraw. Shadow mapping requires two rendering passes. The first renders the scene from our light camera and produces the depth or shadow map in our depth_from_light_texture. This texture records for us which surfaces are closest to the light. The second renders the scene from the regular camera’s point of view. However, during this pass, we project the shadow map onto our geometry. If a fragment’s depth from the camera is greater than what’s recorded in the shadow map, it must be occluded by some intervening geometry and is therefore in shadow. Let’s start with the first pass.
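    In outline, the two passes we are about to build look something like this (a sketch only; the Bind/Unbind method names are assumptions, and the real details are spelled out in the steps that follow):

    // Pass 1: depth from the light's point of view, into the FBO.
    depth_from_light_fbo->Bind();
    glViewport(0, 0, FBO_SIZE, FBO_SIZE);
    glClear(GL_DEPTH_BUFFER_BIT);
    // ... draw the scene with the light camera and shader_programs[2] ...
    depth_from_light_fbo->Unbind();

    // Pass 2: normal render, sampling the depth texture to darken
    // fragments that the light cannot see.
    // ... draw the scene with the regular camera and shaders ...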
  5. Bind the FBO. Since we’re only recording depth, we have to disable color writes. Use these two lines to do this:
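    The two lines appear to have been omitted from this writeup. Assuming standard OpenGL, they are most likely:

    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);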

    Without these, our FBO will be considered incomplete.

  6. Set the viewport with glViewport to span the dimensions of the FBO.
  7. Clear the FBO’s depth buffer (our texture) with:
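    The call is missing from this writeup; assuming standard OpenGL, it is almost certainly:

    glClear(GL_DEPTH_BUFFER_BIT);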
  8. Since we’re not writing color, the vertex and fragment shader for this pass can be very simple. Look at from_light.v.glsl and from_light.f.glsl to see that we only need to transform the model space position into clip space and assign to gl_FragColor anything we please (as it will have no effect). However, the depth will be recorded, and that’s what we care about. These shaders are loaded into shader_programs[2].
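    The provided shaders are the ground truth, but a minimal pair would look roughly like this (the uniform and attribute names here are illustrative, not necessarily those in the repo):

    // from_light.v.glsl (sketch)
    uniform mat4 projection;
    uniform mat4 modelview;
    attribute vec3 position;

    void main() {
      gl_Position = projection * modelview * vec4(position, 1.0);
    }

    // from_light.f.glsl (sketch)
    void main() {
      gl_FragColor = vec4(1.0);  // color is ignored; only depth is written
    }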
  9. Draw the scene as is done from the regular camera, but use the light camera and shader_programs[2] instead. You can also leave out the albedo, light_position_eye, and object_to_tex uniforms. You will also need to upload a projection uniform to describe the chunk of the world that the light projects into. For this, upload the light_projection matrix calculated at the beginning of OnDraw. And finally, instead of drawing ball and terrain, draw the shader_programs[2] versions of these: shadow_ball and shadow_terrain.
  10. Unbind the FBO to resume drawing into the default framebuffer. Now we want to draw our second pass, projecting the depth texture into our scene.
  11. Upload the depth_from_light_texture to a sampler2D uniform in f.glsl.
  12. In v.glsl, we’ve already used the object_to_tex transform to put the model space position into the light camera’s texture space. We can use this value in the fragment shader f.glsl to figure out if another object appears nearer to the light than does this fragment. Since the texture coordinates were arrived at through the light’s perspective projection, we first perform a perspective divide:
    vec3 position_tex = ftexcoords.xyz / ftexcoords.w;
  13. Now, use position_tex.xy to perform a texture lookup. The red channel will tell us the depth of the fragment closest to the light source.
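    In code, the lookup might read like this (the sampler name depth_map is an assumption; use whatever you named the uniform in the previous step):

    float least_depth = texture2D(depth_map, position_tex.xy).r;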
  14. How does this fragment compare? What is its depth from the light source? Just position_tex.z. We can relate these two to determine the fragment’s shadowedness:
    float shadowedness = least_depth < position_tex.z ? 0.5 : 0.0;
  15. Modulate the fragment’s resulting color by its degree of shadowedness:
    gl_FragColor = (1.0 - shadowedness) * vec4(color, 1.0);
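    Steps 13 through 15 boil down to a comparison and a scale. Here is the same arithmetic in plain C++, with least_depth standing in for the texture lookup (a sketch of the math only, not shader code):

```cpp
#include <cassert>

// Mirrors the fragment-shader shadow test: a fragment farther from the
// light than the recorded least depth is occluded, so its color is halved.
float shadow_factor(float least_depth, float fragment_depth) {
  float shadowedness = least_depth < fragment_depth ? 0.5f : 0.0f;
  return 1.0f - shadowedness;  // multiply the fragment's color by this
}
```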
  16. How do things look? Pretty terrible, right? We’ve got a couple of issues to work through. One is the fact that surfaces closest to the light source are going to be recorded as such in the depth texture. When we project the depth texture back onto these surfaces in the second pass, the depth we compute in position_tex.z will be very close to the one recorded in the texture, but we’ll have precision and roundoff errors. The resulting flip-flopping occludedness we see is informally called “shadow acne.” A cheap hack is to bias the computed depth a bit, based on how much we’re facing the light source:
    float bias = 0.01 * tan(acos(litness));
    bias = clamp(bias, 0.0, 0.01);
    position_tex.z -= bias;

    The effect is that if the depths are close, we assume we are not in shadow. How do things look now?

  17. A second problem appears on fragments that lie outside the light’s projection. Their texture coordinates exceed the [0, 1] space of the texture and by default are wrapped back into this space. What we really want is for these fragments not to be shadowed by this light source, since they are outside its jurisdiction. We can set the texture’s border color to the farthest possible depth—meaning that our computed depths will always be less than what’s recorded—and clamp the texture coordinates to the border. Add this setup after you create your texture:
    float border[] = {1.0f, 0.0f, 0.0f, 0.0f};
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
  18. How are things now?
  19. Email me a screenshot of your shadowed terrain.