# teaching machines

## CS 488: Lecture 16 – Skyboxes and Environment Mapping

March 22, 2021 by . Filed under graphics-3d, lectures, spring-2021.

Dear students:

Today we pull another treasure out of the bag of tricks that is computer graphics. We add a surrounding environment to our scenes that gives an illusion of immersion. The surrounding scene is made entirely of textures pasted on a simple cube, so it’s cheap to implement. We can also make highly reflective objects within the cube pick up color from these textures.

### Skyboxes

Horizons in games and movies situate the small world where the action takes place within the vast world around it. They symbolize hope, remind us of our smallness, and reveal encroaching threats. However, if we were to faithfully model them in 3D, we’d spend a lot of time shaping triangles that are never interacted with and never appear in detail. Instead we model our horizons with textures.

These horizon textures go on geometry that surrounds our entire scene. We could surround our scene with a sphere and texture it, producing a skysphere. If we’re on a terrain, we’ll never see the bottom half of the sphere, in which case it’d be cheaper to surround just the top half of our scene with a hemispherical skydome. Capturing real photographs that can be mapped across these curved shapes requires a fisheye lens. We also have to deal with the distortion that comes from projecting a non-rectangle onto a rectangle. It’s much simpler to surround our scene with a skybox, which is just a cube with each of its six faces textured with a regular rectangular image.

WebGL provides hardware support for textures that go on skyboxes. They are called cube maps and are really just a bundle of six 2D textures. We load a cube map with the following code, which assumes the six images are labeled with the prefix neg or pos; the dimension x, y, or z; and a common extension:

async function loadTexture(directoryUrl, extension, textureUnit = gl.TEXTURE0) {
  const faces = ['posx', 'negx', 'posy', 'negy', 'posz', 'negz'];

  // Load all six face images in parallel.
  const images = await Promise.all(faces.map(async face => {
    const url = `${directoryUrl}/${face}.${extension}`;
    const image = new Image();
    image.src = url;
    await image.decode();
    return image;
  }));

  gl.activeTexture(textureUnit);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);

  gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[0]);
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_X, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[1]);
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_Y, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[2]);
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[3]);
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_Z, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[4]);
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, images[5]);

  gl.generateMipmap(gl.TEXTURE_CUBE_MAP);

  return texture;
}


The images are loaded in parallel, and we use Promise.all to wait for all six of them to fully load before uploading them to the GPU. We’ve made six separate calls to upload the textures. The target enums have consecutive values, so we could condense the calls with this loop:

for (const [i, image] of images.entries()) {
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
}


That takes care of the texture. Now we need the box itself. The box should be a unit cube that appears centered around the eye. Interestingly, it does not need texture coordinates. Nor does it need normals. We can upload an 8-vertex cube whose vertices are shared between faces.
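As a sketch of what that geometry might look like, here is a hypothetical 8-vertex cube with indexed faces. The array names and the particular winding are mine; since we view the cube from the inside, the winding must face inward (or back-face culling must be disabled for the skybox draw).

```javascript
// Eight shared corners of a cube spanning [-1, 1] on each axis.
// Bit pattern of the index picks the sign: bit 0 = x, bit 1 = y, bit 2 = z.
const positions = [
  -1, -1, -1,   // 0
   1, -1, -1,   // 1
  -1,  1, -1,   // 2
   1,  1, -1,   // 3
  -1, -1,  1,   // 4
   1, -1,  1,   // 5
  -1,  1,  1,   // 6
   1,  1,  1,   // 7
];

// Twelve triangles, two per face. Every vertex is shared by three faces,
// which is only possible because we need no per-face normals or texture
// coordinates. Disable culling (or flip the winding) when drawing.
const indices = [
  0, 1, 3,   0, 3, 2,   // z = -1 face
  4, 6, 7,   4, 7, 5,   // z = +1 face
  0, 2, 6,   0, 6, 4,   // x = -1 face
  1, 5, 7,   1, 7, 3,   // x = +1 face
  0, 4, 5,   0, 5, 1,   // y = -1 face
  2, 3, 7,   2, 7, 6,   // y = +1 face
];
```

Only 8 positions go to the GPU instead of the 24 a per-face layout would need.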

In model space, this cube is situated around the origin. We want it to appear around the eye, which is the origin of eye space. That means we need to transform it to the eye’s location. We also want it to turn as the camera turns, just like the rest of the scene. That leads us to this sequence of transformations:

const worldFromModel = Matrix4.translate(camera.from.x, camera.from.y, camera.from.z);
skyboxProgram.setUniformMatrix4('eyeFromModel', camera.matrix.multiplyMatrix(worldFromModel));


The vertex program that renders the box has just two jobs: transform the vertices into clip space like normal and assign texture coordinates that will map the skybox texture onto the cube. In a rare bit of simplicity, the model space coordinates are themselves the texture coordinates. That leads us to this vertex program:

uniform mat4 clipFromEye;
uniform mat4 eyeFromModel;

in vec3 position;

out vec3 ftexcoords;

void main() {
  gl_Position = clipFromEye * eyeFromModel * vec4(position, 1.0);
  ftexcoords = position;
}


The fragment shader receives these texture coordinates and looks them up in the texture, whose type is samplerCube instead of sampler2D:

uniform samplerCube skybox;

in vec3 ftexcoords;

out vec4 fragmentColor;

void main() {
  fragmentColor = texture(skybox, ftexcoords);
}


That’s about it for the skybox. When the camera turns, we find ourselves looking at different faces. As we advance or strafe, we find that the texture never shifts or scales. It’s always infinitely far away. Skyboxes are best suited for environments that one can never reach.

If we add other geometry to our scene, we may not see it. That is probably because the skybox is in the way, since the cube is always situated around the eye. Because the skybox is supposed to be really far away, we can render it first without writing any fragment depths to the depth buffer, then re-enable depth writes and draw the rest of the scene. The scene’s fragments will always overwrite the skybox’s, since the skybox’s depths never got recorded. We use gl.depthMask to achieve this:

gl.depthMask(false);
// draw skybox
gl.depthMask(true);
// draw rest of scene


### Environment Mapping

Earlier we illuminated highly reflective surfaces using specular lighting. The amount of light bouncing off a mirror into the eyes of the viewer depends on the alignment between the viewer and the incoming light vector reflected about the normal. The specular light equation assumes the incoming ray of light is coming directly from the light source. But we can break that assumption. Let’s have the light ray come from the surrounding skybox. This technique is called environment mapping.
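To make the reflection concrete, GLSL’s reflect function computes R = I − 2(N·I)N, where I is the incident vector and N is the unit normal. A minimal sketch of that math in plain JavaScript (the helper names are mine):

```javascript
// Dot product of two 3-vectors.
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Mirrors GLSL's reflect: R = I - 2 * dot(N, I) * N.
// The normal must be normalized; the incident vector need not be.
function reflect(incident, normal) {
  const d = 2 * dot(normal, incident);
  return incident.map((component, i) => component - d * normal[i]);
}

// A ray heading down and to the right strikes a floor whose normal
// points up, and bounces up and to the right.
console.log(reflect([1, -1, 0], [0, 1, 0]));  // → [1, 1, 0]
```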

As with specular lighting, we want to reflect the eye vector around the normal. We’ll do this in eye space, so we send along the fragment’s eye space position from the vertex shader:

// ...

out vec3 positionEye;

void main() {
  // ...
  positionEye = (eyeFromModel * vec4(position, 1.0)).xyz;
}


In the fragment shader, we receive the interpolated eye-space position, which doubles as the vector from the eye to the fragment since the eye sits at the origin of eye space. We reflect it about the normal using GLSL’s reflect function:

uniform samplerCube skybox;

in vec3 positionEye;
in vec3 fnormal;

out vec4 fragmentColor;

void main() {
  vec3 normal = normalize(fnormal);
  vec3 reflection = reflect(positionEye, normal);
  fragmentColor = texture(skybox, reflection);
}


We do not need to normalize the eye position, as the cubemap texture lookup tolerates non-normalized vectors.
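The reason the lookup tolerates unnormalized vectors is that a cube map lookup conceptually picks the face matching the direction’s largest-magnitude component and divides the other components by that magnitude, so any uniform scale cancels out. A rough sketch of the face selection (the function name is mine):

```javascript
// Rough sketch of how a cube map lookup selects a face: the axis with
// the largest-magnitude component wins, and its sign picks pos or neg.
// The remaining components are divided by that magnitude during the
// actual lookup, so the vector's overall length never matters.
function selectFace([x, y, z]) {
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  if (ax >= ay && ax >= az) return x > 0 ? 'posx' : 'negx';
  if (ay >= az) return y > 0 ? 'posy' : 'negy';
  return z > 0 ? 'posz' : 'negz';
}

console.log(selectFace([0.2, 0.9, 0.1]));  // → posy
console.log(selectFace([0.4, 1.8, 0.2]));  // same direction scaled by 2, same face
```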

The result is that our model appears to be plated in chrome. If we have multiple objects in the scene, we will never see them reflecting each other. Only the skybox is reflected.