CS 488: Lecture 11 – Blinn-Phong Illumination
Dear students:
When we first implemented basic diffuse lighting, we considered just three pieces of information: the surface normal, a vector pointing toward the light source, and the surface’s base color or albedo. Today we’re going to complicate our lighting to bring in a lot more information.
Lighting in Eye Space
Last time we added a camera abstraction to our library of code. When we add a camera to our trackball-powered mesh viewer, with a light direction of normalize(vec3(1.0)), our scene probably looks okay. But if we place the camera opposite this light source, we find ourselves on the dark side of the object.
In a scene with an immersive world, we’d expect to be able to get into shadows. But in a mesh viewer, that feels strange. We’d like the light to always be near the camera, just as a miner’s headlamp always sits in a fixed relationship to the miner’s eyes. As the camera moves around the scene, the light moves with it.
To make this happen in our renderer, we first make the decision that our light direction is in eye coordinates. We hadn’t really formally specified what the coordinates were before, but they were world coordinates. Now that they’re in eye coordinates, we need to get the normal in eye space as well with this code in the vertex shader:
fnormal = (eyeFromWorld * worldFromModel * vec4(normal, 0.0)).xyz;
Since these two matrices always appear in succession now, we might as well multiply them together once on the CPU instead of once per vertex on the GPU. Our renderer sends just their product:
shaderProgram.setUniformMatrix('eyeFromModel', camera.matrix.multiplyMatrix(trackball.rotation));
Our vertex shader accordingly accepts just the product as a uniform:
uniform mat4 eyeFromModel;

// ...

void main() {
  gl_Position = clipFromEye * eyeFromModel * vec4(position, 1.0);
  fnormal = (eyeFromModel * vec4(normal, 0.0)).xyz;
}
Now as we move around the model, the surface is always illuminated.
Albedo Meets Light Color
The light that enters our eyes is an interaction between the light and the surface off of which it bounces. Both the light and the surface have a color. So far we’ve only considered the surface’s color, or albedo. Let’s give the light source a color too, which we’ll declare in the fragment shader:
const vec3 lightColor = vec3(1.0, 0.0, 0.0);
How do we combine the light color and the albedo? The albedo represents how much of a color is reflected off of the surface. For example, an albedo of (1, 0.5, 0) means that the surface reflects all red light and half the green light. It fully absorbs the blue. If the light source has color (0.5, 1, 0.5), we want to reflect (0.5, 0.5, 0). To determine the reflected intensity, we compute the component-wise multiplication of the two vectors. That leads to this update in our fragment shader:
const vec3 lightVector = normalize(vec3(1.0));
const vec3 lightColor = vec3(0.5, 1.0, 0.5);

uniform vec3 albedo; // surface base color; assumed here to arrive as a uniform from the renderer

in vec3 fnormal;
out vec4 fragmentColor;

void main() {
  vec3 normal = normalize(fnormal);

  // Diffuse
  float litness = max(0.0, dot(normal, lightVector));
  vec3 diffuse = litness * lightColor * albedo;

  fragmentColor = vec4(diffuse, 1.0);
}
Positional Light Source
When we first implemented basic diffuse lighting, we considered just two pieces of information: the surface normal and a vector pointing toward the light source. The light source, we assumed, was infinitely far away. Let’s remove this assumption for our shading algorithm.
Imagine a light source near a surface. Each location on the surface has a unique vector pointing toward the light. As the light moves away from the surface, those vectors start to align. When the light source is as far away as the sun, we hardly see any variety in these vectors—at least not on a small scale. In such situations we can use a single uniform light vector across the surface. When the light is near, we need to compute each fragment’s light vector individually.
We compute the light vector by subtracting the surface position from the light position. Since we’re lighting in eye space now, we need both positions to be in eye coordinates. The vertex shader is where we have the vertex position, so we add an out variable there to hold the eye position and let it get interpolated between vertices:
out vec3 positionEye;

void main() {
  positionEye = (eyeFromModel * vec4(position, 1.0)).xyz;
  gl_Position = clipFromEye * vec4(positionEye, 1.0);
  // ...
}
In the fragment shader, we receive the interpolated position and compute our light vector:
const vec3 lightPosition = vec3(1.0);

in vec3 positionEye;
// ...

void main() {
  vec3 lightVector = normalize(lightPosition - positionEye);
  // ...
}
A positional light source near the surface has a very different effect than a directional light source. The shadows add intrigue and mystery.
Ambient
The lighting system that we discuss in this course makes no attempt to be physically accurate. Rather, it is a compromise developed by computer graphics researchers who were trying to balance performance and human perception at interactive frame rates. One of the most glaring limitations of our model is that each fragment is shaded in isolation, as if no other surfaces were in the scene. We fire a vector off from the fragment toward the light source and pretend that there are no occluding objects. Nor do we take into account the light that bounces off of other surfaces and illuminates the fragment.
There is a slight hack that we can implement to give an impression of inter-surface bouncing. We add an ambient term to our lighting equation. Ambient light is “background” light, not really having any particular direction or source. It’s just there, giving a baseline amount of illumination to our surfaces. It keeps our objects from becoming unrealistically pure black on their shadowed sides. We calculate the ambient term like this in the fragment shader:
const float ambientWeight = 0.1;
// ...

void main() {
  // ...
  vec3 ambient = lightColor * ambientWeight;
  // ...
}
Here the baseline ambient contribution is 10% of the light intensity. We combine this with the diffuse color as a weighted average:
const float ambientWeight = 0.1;
const vec3 lightPosition = vec3(1.0);
const vec3 lightColor = vec3(1.0);

uniform vec3 albedo; // surface base color; assumed here to arrive as a uniform from the renderer

in vec3 positionEye;
in vec3 fnormal;
out vec4 fragmentColor;

void main() {
  vec3 lightVector = normalize(lightPosition - positionEye);
  vec3 normal = normalize(fnormal);

  // Diffuse
  float litness = max(0.0, dot(normal, lightVector));
  vec3 diffuse = litness * lightColor * (1.0 - ambientWeight);

  // Ambient
  vec3 ambient = lightColor * ambientWeight;

  vec3 rgb = (ambient + diffuse) * albedo;
  fragmentColor = vec4(rgb, 1.0);
}
We use a weighted average to prevent the ambient and diffuse contributions from exceeding pure white.
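To see why, write the two terms with ambient weight $w$ and light color $\mathbf{c}$. Since litness never exceeds 1, the sum never exceeds the light color:

$$w\,\mathbf{c} + (1 - w)\,\mathit{litness}\,\mathbf{c} \le w\,\mathbf{c} + (1 - w)\,\mathbf{c} = \mathbf{c}$$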
If the ambient weight is too high, our scene appears desaturated. That’s an aesthetic that has been popular at various points in gamedev history.
Specular
Another element that has been missing from our lighting is shiny reflections. The diffuse lighting that we have is an approximation of how light behaves on surfaces with a matte finish. Roughness is the cause of a matte finish. The surface is actually made up of very tiny microsurfaces, all pointing in different directions. When light hits a rough surface, it bounces in many directions. We assume that light bounces equally in every direction off a diffuse surface.
Light bounces differently off a perfectly smooth surface, a mirror. Rather than going in all directions equally, it reflects in a single direction. That direction is the reflection of the light vector about the surface’s normal. The closer the eye is to that reflected vector, the more the viewer sees the reflected light.
There are two popular ways of measuring the viewer’s alignment with that reflection. We examine both.
Reflection Vector
To calculate a perfect reflection of the light vector $\mathbf{l}$ about the normal $\mathbf{n}$, we first find a vector that leads from the tip of the light vector straight to the normal, meeting it perpendicularly. If we can find that vector, we can double it to shoot past the normal to its other side.
Before we can find the vector, we need the location $\mathbf{p}$ on the normal where it lands. This is called the light vector’s projection onto the normal. If we think of the three vectors as forming a right triangle, we see that we can use cosine to determine the length along the normal where the intersection appears. Since both vectors are unit length, the cosine is just their dot product. We use that length to scale the normal down to the intersection point:

$$\mathbf{p} = (\mathbf{n} \cdot \mathbf{l})\,\mathbf{n}$$

We find the reflected vector by going from the intersection point along the same vector that took us from the light vector’s tip to the intersection:

$$\mathbf{r} = \mathbf{p} + (\mathbf{p} - \mathbf{l}) = 2(\mathbf{n} \cdot \mathbf{l})\,\mathbf{n} - \mathbf{l}$$
That’s our reflection vector. We compute it in GLSL with this code:
vec3 reflectedLightVector = 2.0 * dot(normal, lightVector) * normal - lightVector;
There’s also a builtin reflect function in GLSL that computes this. It expects the incident vector, which points from the light toward the surface, so we must pass it the negation of lightVector.
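Here’s what that call looks like, a drop-in replacement for the line above:

// reflect expects the incident vector, which points toward the surface.
vec3 reflectedLightVector = reflect(-lightVector, normal);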
Alignment
The amount of light the viewer sees depends on how aligned they are with this reflection vector. That means we need a vector pointing from the fragment back to the eye. Since we are lighting in eye space, we know the eye is at the origin, and we also know the fragment’s eye space position. The vector from the fragment to the eye is vec3(0.0) - positionEye, or just -positionEye.
As with the diffuse term, the amount of alignment is computed using the dot product.
vec3 eyeVector = -normalize(positionEye);
float specularity = max(0.0, dot(reflectedLightVector, eyeVector));
With the diffuse and ambient terms, we considered the light color and albedo. If you think about the highlights you see on shiny objects, what color do you see? Often white, but it depends on the material. Since we’re treating the material as a mirrored surface, we ignore the albedo term. We assume there’s no absorption filtering out any of the light. We can write this first draft of our specular term:
vec3 specular = vec3(1.0) * specularity;
This gives us a very bright highlight.
Shininess
To get more concentrated highlights, we can be pickier about how aligned the viewer must be with the reflection vector by raising the specularity to a higher power:
const float shininess = 90.0;
// ...

void main() {
  // ...
  vec3 specular = vec3(1.0) * pow(specularity, shininess);
  vec3 rgb = (ambient + diffuse) * albedo + specular;
  fragmentColor = vec4(rgb, 1.0);
}
The shininess factor can get very high to achieve certain material effects. At a shininess of 90, for example, a specularity of 0.99 survives as roughly 0.4, while 0.95 decays to about 0.01, so only fragments almost perfectly aligned with the reflection receive a highlight.
This combination of ambient, diffuse, and specular terms is called Phong illumination. It was described in 1975 by graduate student Bui Tuong Phong at the University of Utah.
Half Vector
The method of determining specularity described above has a flaw. If the viewer is near the light source, and both are at grazing angles to the normal, then the angle between the eye vector and the reflected vector might get very large, exceeding 90 degrees. Our method of calculating the angle, the dot product, will go negative and get clamped to 0 in these situations. This might be acceptable if the shininess is high, but if it’s low, the viewer might be surprised by the sudden cutoff when moving across the 90-degree boundary.
Researcher Jim Blinn, who later worked at Microsoft, proposed an alternative that prevents this problem. In place of the actual reflection vector, we use the vector that’s halfway between the light vector and the eye vector, and we measure the alignment between this half vector and the normal. The code is a little simpler:
vec3 halfVector = normalize(lightVector + eyeVector);
float specularity = max(0.0, dot(halfVector, normal));
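To see the cutoff concretely, place the eye exactly at the light source, with both at a grazing 80 degrees from the normal. The reflected vector then lies 160 degrees from the eye vector, so Phong’s specularity clamps to zero:

$$\max(0,\ \mathbf{r} \cdot \mathbf{e}) = \max(0,\ \cos 160^\circ) = 0$$

The half vector, on the other hand, is the light vector itself, and it still produces a small, smoothly varying specularity:

$$\mathbf{h} = \frac{\mathbf{l} + \mathbf{e}}{\lVert\mathbf{l} + \mathbf{e}\rVert} = \mathbf{l} \qquad \mathbf{n} \cdot \mathbf{h} = \cos 80^\circ \approx 0.17$$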
This tweak to the algorithm is called Blinn-Phong illumination.
Altogether
Let’s put all the code above together in one place, combining the ambient, diffuse, and half-vector specular terms.
const float ambientWeight = 0.1;
const float shininess = 20.0;
const vec3 lightPosition = vec3(1.0);
const vec3 lightColor = vec3(1.0);

uniform vec3 albedo; // surface base color; assumed here to arrive as a uniform from the renderer

in vec3 fnormal;
in vec3 positionEye;
out vec4 fragmentColor;

void main() {
  vec3 lightVector = normalize(lightPosition - positionEye);
  vec3 normal = normalize(fnormal);

  // Diffuse
  float litness = max(0.0, dot(normal, lightVector));
  vec3 diffuse = litness * lightColor * (1.0 - ambientWeight);

  // Ambient
  vec3 ambient = lightColor * ambientWeight;

  // Specular
  vec3 eyeVector = -normalize(positionEye);
  // vec3 reflectedLightVector = 2.0 * dot(normal, lightVector) * normal - lightVector;
  // float specularity = max(0.0, dot(reflectedLightVector, eyeVector));
  vec3 halfVector = normalize(lightVector + eyeVector);
  float specularity = max(0.0, dot(halfVector, normal));
  vec3 specular = vec3(1.0) * pow(specularity, shininess);

  vec3 rgb = (ambient + diffuse) * albedo + specular;
  fragmentColor = vec4(rgb, 1.0);
}
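One loose end: since albedo is a uniform rather than a constant, the renderer must upload it, just as it uploads the matrices. A minimal sketch, assuming our shader abstraction has a setUniform3f-style method (a hypothetical name; substitute whatever your library provides):

// Hypothetical uploader; pairs with the albedo uniform above.
shaderProgram.setUniform3f('albedo', 0.8, 0.5, 0.2);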
TODO
Here’s your TODO list:
- Start working on the Gyromesh project. There’s a fair bit of work involved in this project, as you are integrating file parsing, normal generation, a trackball, and lighting. Give yourself plenty of time.
See you next time.
P.S. It’s time for a haiku!
The more something shines
The less it reveals itself
The more someone shines