teaching machines

CS 488: Lecture 6 – Lit Sphere

February 10, 2021. Filed under graphics-3d, lectures, spring-2021.

Dear students:

Our goal today is to add lighting to our renderer. To make this work, we need a new mathematical tool: the vector. First, we’ll take a tour through a handful of operations that we’ll need to work with vectors. Second, we’ll apply these operations to compute the “litness” of every fragment on the surface of a sphere.

Vectors

A while back we introduced the notion of a vector as a collection of numbers, and we use the native vector types of the GPU to get fast and compact transformations. In the world of physics, a vector has a more formal definition: a vector is a direction with a magnitude. A push to the right with 20 units of force would be expressed as [20, 0, 0]. An object that falls from a cliff 100 units on the y-axis to the ground has dropped [0, -100, 0]. In contrast to our use of vectors, there’s no mention of position in this definition. We can talk about how hard we push something or how far something falls without considering its position.

Combining

Suppose we shove something to the right and then it falls off a cliff. The vector that represents the combined offset is [20, -100, 0]. To compute this single vector representing the net effect, we add the components of the two vectors together:

$$\mathbf{a}+\mathbf{b} = \begin{bmatrix} a_x + b_x \\ a_y + b_y \\ \ldots \end{bmatrix}$$

To consolidate matrices into a single matrix, we multiply them. To consolidate vectors into a single vector, we add them.
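
GLSL’s vec types give us this operation directly; the + operator adds component-wise. Here’s a quick sketch of the shove-and-fall example:

vec3 shove = vec3(20.0, 0.0, 0.0);   // push right with 20 units of force
vec3 fall = vec3(0.0, -100.0, 0.0);  // drop 100 units
vec3 net = shove + fall;             // [20, -100, 0]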

Scaling

To double the offset represented by a vector, we multiply every component by 2. We call this operation scalar multiplication:

$$s \cdot \mathbf{v} = \mathbf{v} \cdot s = \begin{bmatrix} s \cdot v_x \\ s \cdot v_y \\ s \cdot v_z \\ \ldots \end{bmatrix}$$

A scaled vector points in the same direction as the unscaled vector if the scale factor is positive. It points in the opposite direction if the scale factor is negative.
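
In GLSL, the * operator between a float and a vec type performs scalar multiplication. A quick sketch:

vec3 v = vec3(3.0, -4.0, 0.0);
vec3 doubled = 2.0 * v;   // [6, -8, 0], same direction
vec3 flipped = -1.0 * v;  // [-3, 4, 0], opposite direction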

Magnitude

We know the magnitude or length of a vector thanks to Pythagoras. The equation you’ve seen for 2D generalizes to higher dimensions:

$$|\mathbf{v}| = \sqrt {v_x^2 + v_y^2 + v_z^2 + \ldots }$$

If we scale a vector by a factor $s$, let’s see what happens to its length:

$$\begin{array}{rcl}s \cdot \mathbf{v} &=& \begin{bmatrix} s \cdot v_x \\ s \cdot v_y \\ s \cdot v_z \\ \ldots \end{bmatrix} \\|s \cdot \mathbf{v}| &=& \sqrt {s^2 \cdot v_x^2 + s^2 \cdot v_y^2 + s^2 \cdot v_z^2 + \ldots } \\ &=& \sqrt {s^2 \cdot (v_x^2 + v_y^2 + v_z^2 + \ldots) } \\ &=& |s| \cdot \sqrt {v_x^2 + v_y^2 + v_z^2 + \ldots } \\ &=& |s| \cdot |\mathbf{v}|\end{array}$$

Scaling a vector by $s$ scales its length by $|s|$.
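
GLSL gives us the magnitude through the built-in length function. A quick check of the scaling claim:

vec3 v = vec3(3.0, -4.0, 0.0);
float len = length(v);              // sqrt(9 + 16 + 0) = 5
float scaledLen = length(2.0 * v);  // 10, twice the length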

If a vector has a magnitude of 1, we say it is normalized. A normalized vector is handy; many of our equations are simpler for normalized vectors, as we’ll see. To make a vector have length 1, we scale it, working out the scale factor $s$ like this:

$$\begin{array}{rcl}|s \cdot \mathbf{v}| &=& 1 \\s \cdot |\mathbf{v}| &=& 1 \\s &=& \dfrac{1}{|\mathbf{v}|} \\\end{array}$$

Once a vector is normalized, it’s easy to give it any length we want: we simply scale it by the desired length. Suppose $\mathbf{v}$ is a normalized vector and $s$ is the desired length. Then we know this:

$$\begin{array}{rcl}|s \cdot \mathbf{v}| &=& s \cdot |\mathbf{v}| \\ &=& s \cdot 1 \\ &=& s \\\end{array}$$
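
Both steps are built into GLSL: normalize scales a vector to length 1, and an ordinary scalar multiplication then gives it any length we want. A quick sketch:

vec3 v = vec3(3.0, -4.0, 0.0);
vec3 unit = normalize(v);  // [0.6, -0.8, 0], length 1
vec3 sized = 7.0 * unit;   // same direction, length 7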

Homogeneous Coordinates

In computer graphics, we tend to blur the lines between vectors and positions, often using the same type to represent both. But they are different. Positions are absolute locations in a coordinate system. Vectors are offsets. They are relative and not anchored to absolute positions. If we hear on the news that the wind is blowing northeast at 30 miles per hour, we have no information about where exactly this wind is blowing.

Because vectors are unrooted, translation is not a meaningful operation on them. If we transform a vector with a transformation matrix, we must cancel out the translation. This can be done without a lot of clamor by setting the vector’s homogeneous coordinate to 0.
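
In shader code, the difference between positions and vectors shows up in the homogeneous coordinate we append before multiplying by a mat4. A minimal sketch, where m stands in for some transformation matrix:

vec3 movedPoint = (m * vec4(p, 1.0)).xyz;     // translation applies
vec3 rotatedVector = (m * vec4(v, 0.0)).xyz;  // translation cancels out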

Dot Product

There are a couple of operations for “multiplying” two vectors. We’ve already seen the dot product:

$$\mathbf{a} \cdot \mathbf{b} = a_x \cdot b_x + a_y \cdot b_y + a_z \cdot b_z + \ldots$$
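
GLSL exposes this as the built-in dot function. A quick sketch with concrete numbers:

vec3 a = vec3(1.0, 2.0, 3.0);
vec3 b = vec3(4.0, 5.0, 6.0);
float d = dot(a, b);  // 1*4 + 2*5 + 3*6 = 32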

There’s also the cross product, but we’ll leave that for another day.

Diffuse Shading

To light a surface, we consider the normal vector at each fragment. The normal vector is perpendicular to the surface, pointing in the direction that the surface faces. We also consider the light vector, which points from the fragment to the light source. The angle between these two vectors gives us a measure of the “litness” of the surface.

When the angle is 0, the surface is fully lit. Let’s say the litness is 1. As the angle gets bigger, the surface turns away from the light. When the angle reaches 90 degrees, the surface is no longer hit by any light rays, and the litness is 0. What function fits this pattern? We could use a linear function, but a physicist named Lambert found that cosine is a better fit. We don’t want the litness to ever go negative, so we use this equation:

$$\mathrm{litness} = \max(0, \cos a)$$

All we need is to find the angle $a$ between the two vectors. Have I got good news for you. It turns out the dot product has a geometric interpretation that we are not going to prove today. If both vectors are normalized, the cosine of the angle $a$ between them is just their dot product:

$$\mathbf{a} \cdot \mathbf{b} = \cos a$$

This is an especially useful result because it turns a messy trigonometric calculation into a series of multiplications and additions.
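
For example, suppose the normal is $\mathbf{n} = [0, 1, 0]$ and the normalized light vector is $\mathbf{l} = \left[\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0\right]$. Their dot product gives us the litness directly:

$$\mathbf{n} \cdot \mathbf{l} = 0 \cdot \tfrac{1}{\sqrt{2}} + 1 \cdot \tfrac{1}{\sqrt{2}} + 0 \cdot 0 = \tfrac{1}{\sqrt{2}} \approx 0.707$$

That’s $\cos 45^\circ$, which matches the 45-degree angle between the two vectors, and we never called a trigonometric function.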

Implementation

This is enough information to implement a very simple lighting or shading scheme. We only need to know two things: the direction of the light source and the normal at each fragment. We can decide the direction of the light source. In general, we must provide the normals in our vertex attributes, just as we provide position and color. However, the normals on the unit sphere are conveniently the same as the positions. That leads us to this vertex shader:

uniform mat4 worldToClip;
uniform mat4 modelToWorld;

in vec3 position;

out vec3 fnormal;

void main() {
  // Project the model-space position into clip space.
  gl_Position = worldToClip * modelToWorld * vec4(position, 1.0);

  // On the unit sphere, the normal is the position itself.
  fnormal = position;
}

The lighting itself happens in the fragment shader. We receive the interpolated normal, which may have lost its normalization in the blending math. We must renormalize it. Here too we define the direction of the light source, which we assume is the same for every fragment. This is only true for a light source that is infinitely far away, as we’ll discuss later. Here’s our fragment shader:

in vec3 fnormal;

out vec4 fragmentColor;

// A directional light whose direction is the same for every fragment.
const vec3 lightDirection = normalize(vec3(1.0, 1.0, 1.0));

void main() {
  // Renormalize, since interpolation may have changed the length.
  vec3 normal = normalize(fnormal);

  // Lambert's cosine law, clamped so litness never goes negative.
  float litness = max(0.0, dot(normal, lightDirection));

  fragmentColor = vec4(vec3(litness), 1.0);
}

As we turn the sphere, we find that the light turns with it, which is kind of odd. That’s because we’re using the normal in model coordinates to determine the lighting. We are lighting in model space. But we probably intend the light to be located in world coordinates. To do that, we need to transform the normal into world space, which we can do by transforming it in the vertex shader:

fnormal = (modelToWorld * vec4(position, 0.0)).xyz;

We set the homogeneous coordinate to 0 to cancel out any translation.

Horizon

There’s a lot more to lighting than just diffuse shading. From here we will go on to examine how to get shiny highlights, spot light effects, multiple light sources, and crazy effects that rely on images.

TODO

Here’s your very first TODO list:

See you next time.

Sincerely,

P.S. It’s time for a haiku!

I stand on the Earth
Rays from the sun meet my own
Just a normal day