teaching machines

CS 488: Lecture 25 – THREE.js

April 26, 2021. Filed under graphics-3d, lectures, spring-2021.

Dear students:

Managers and experts like to tell their juniors not to reinvent the wheel. That’s because they are more interested in getting a product to market than in your learning. In an educational setting, our goals are very different. In a classroom, the product is you. If an activity helps you learn, then we do not care if others have done it before. In this course, we’ve looked at graphics at a fairly low level. We wrote our own vector and matrix classes and wrote WebGL directly. Not every computer graphics course does this. Some use a graphics engine like THREE.js in order to get you right into rendering scenes. THREE.js is amazing and popular, but the skills it took to make it do not come from merely using it. In this course, I wanted you to learn skills that would enable you to write your own THREE.js someday.

That said, when we reinvent the wheel, we should really look at other people’s wheels for inspiration. Today we take a quick tour through the THREE.js library and see its take on some of the things we’ve explored at a lower level.

Hello World

THREE.js has many abstractions for putting together a renderer in just a few lines of code. In our renderers, we put a canvas in the HTML file and grabbed its context. In THREE.js, we use WebGLRenderer:

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

We don’t need a canvas in the HTML. The renderer creates one, and we append it to the document ourselves.

There are many shape generators, including boxes, cylinders, tetrahedrons, and toruses. Here we create a torus with a radius of 10 and a cross-sectional radius of 3; the last two parameters are segment counts that control the tessellation:

const geometry = new THREE.TorusGeometry(10, 3, 16, 30);
const material = new THREE.MeshBasicMaterial({
  color: 0xff7700,
});
const mesh = new THREE.Mesh(geometry, material);

One abstraction that we never built an equivalent of is a structure for composing a scene hierarchically. The Scene class acts as the root of the hierarchy:

const scene = new THREE.Scene();
scene.add(mesh);

We must add a camera, which is similar to our own camera class but which rolls in the projection matrix:

const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 25;

With both a scene and a camera, we can render a frame:

renderer.render(scene, camera);

Animation is accomplished with requestAnimationFrame, just as in our own renderers:

function animate() {
  requestAnimationFrame(animate);
  mesh.rotation.x += 0.01;
  renderer.render(scene, camera);
}

animate();

You may have noticed there was something missing in the example code above. We don’t ever need to write shaders when using THREE.js. Rather, we specify the material for a mesh, and the library applies an appropriate ready-made shader. The basic material is used for a flat color. If we want shading, we must use a more advanced material and also add a light source.
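
For intuition, the basic material stands in for roughly the kind of flat-color shader pair we wrote by hand this semester. Here is a sketch of that equivalent, with uniform and variable names of our own choosing rather than whatever THREE.js generates internally:

```javascript
// A sketch of the flat-color shader pair that MeshBasicMaterial spares us
// from writing. The names here are ours; THREE.js generates its own.
const vertexSource = `
uniform mat4 clipFromModel;
in vec3 position;

void main() {
  gl_Position = clipFromModel * vec4(position, 1.0);
}
`;

const fragmentSource = `
precision mediump float;
uniform vec3 color;
out vec4 fragmentColor;

void main() {
  fragmentColor = vec4(color, 1.0);
}
`;
```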

Here we add toon shading:

const material = new THREE.MeshToonMaterial({
  color: 0xff7700,
});

// ...

const light = new THREE.PointLight(0xffffff, 1, 100);
light.position.set(0, 10, 20);
scene.add(light);

Switching to Blinn-Phong illumination is a simple matter of swapping in a different material and overriding its default property values as we wish:

const material = new THREE.MeshPhongMaterial({
  color: 0xff7700,
  shininess: 100,
});

Resize Window

If we resize our window, we find that the image gets distorted and the canvas is the wrong size. We fixed this in our code with a resize event listener, and we do the same thing here:

window.addEventListener('resize', () => {
  renderer.setSize(window.innerWidth, window.innerHeight);
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
});

We must not forget to update the projection matrix. In many graphics APIs, changing the state of an object requires you to actively refresh other state that depends on it.


One thing we did not look at this semester is rendering wireframes. We could have implemented it for a trimesh by iterating through the faces, emitting an index pair for every edge, and rendering as gl.LINES. We’d probably want to do something to prevent shared edges from being rendered twice.
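
That edge extraction might be sketched like this, assuming our trimesh exposes a flat array of three vertex indices per face (the function name is our own):

```javascript
// Sketch: extract the unique edges of a triangle mesh as index pairs,
// suitable for uploading as an index buffer and rendering with gl.LINES.
// faceIndices holds three vertex indices per triangle.
function wireframeEdges(faceIndices) {
  const seen = new Set();
  const edges = [];
  for (let i = 0; i < faceIndices.length; i += 3) {
    const a = faceIndices[i];
    const b = faceIndices[i + 1];
    const c = faceIndices[i + 2];
    for (const [p, q] of [[a, b], [b, c], [c, a]]) {
      // Order each pair so a shared edge hashes identically from both faces.
      const key = Math.min(p, q) + ':' + Math.max(p, q);
      if (!seen.has(key)) {
        seen.add(key);
        edges.push(p, q);
      }
    }
  }
  return edges;
}
```

Two triangles sharing an edge yield five line segments instead of six, since the shared edge is emitted only once.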

THREE.js has utilities for turning geometry into lines and rendering them. We add a skeleton around our torus like this:

const wireframe = new THREE.WireframeGeometry(geometry);
const lines = new THREE.LineSegments(wireframe);
lines.material.depthTest = false;
lines.material.opacity = 0.1;
lines.material.transparent = true;

// ...


Weird stuff happens after adding the wireframe. The solid torus keeps rotating, but not the wireframe. That’s because only the mesh is rotated in the animate function. We could also rotate the wireframe, but a better solution is to group the two objects together using Group. A group has its own transformation that is applied to all of its children, so we can rotate both objects just by rotating the group:

const group = new THREE.Group();
group.add(mesh);
group.add(lines);

const scene = new THREE.Scene();
scene.add(group);

// ...

function animate() {
  requestAnimationFrame(animate);
  group.rotation.x += 0.01;
  renderer.render(scene, camera);
}

Most hierarchical graphics APIs have this behavior. Behind the scenes, an object’s worldFromModel matrix is effectively computed like this:

worldFromModel = grandparent.matrix * parent.matrix * child.matrix
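
We can check that intuition with a tiny self-contained sketch, using hand-rolled row-major 4×4 matrices rather than THREE.js’s Matrix4 (all names here are ours):

```javascript
// Multiply two row-major 4x4 matrices stored as flat 16-element arrays.
function multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) {
      for (let k = 0; k < 4; k++) {
        out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
      }
    }
  }
  return out;
}

// Build a translation matrix (row-major, translation in the last column).
function translate(x, y, z) {
  return [
    1, 0, 0, x,
    0, 1, 0, y,
    0, 0, 1, z,
    0, 0, 0, 1,
  ];
}

// A child translated by 1 inside a parent translated by 2 inside a
// grandparent translated by 4 ends up 7 units from the world origin.
const worldFromModel =
  multiply(multiply(translate(4, 0, 0), translate(2, 0, 0)), translate(1, 0, 0));
```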


Adding shadows in THREE.js requires little knowledge of framebuffer objects, depth textures, or projective texturing. But it does expect you to know how to set up the parameters that influence shadow mapping. We set properties on the renderer and the light, but we also allow each individual mesh to independently contribute to and receive the shadows.

Here we have the torus cast a shadow on itself and the plane:

renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;

light.castShadow = true;
light.shadow.mapSize.width = 512;
light.shadow.mapSize.height = 512;
light.shadow.camera.near = 0.5;
light.shadow.camera.far = 500;

torusMesh.castShadow = true;
torusMesh.receiveShadow = true;
planeMesh.receiveShadow = true;


If we want to spin the scene around with the mouse, we can add a trackball interface much like the one we implemented. The trackball controls are not in the library proper, so we import them a bit differently:

import {TrackballControls} from 'three/examples/jsm/controls/TrackballControls';

We used our trackball interface to alter an object’s transformation to world space. In THREE.js, the transformation is applied to a camera and we must trigger an update call, as we do here:

const controls = new TrackballControls(camera, renderer.domElement);

function animate() {
  // ...
  controls.update();
}

Other properties allow us to change the speed or allow inertial spin:

controls.rotateSpeed = 5;
controls.dynamicDampingFactor = 0.01;


Sometimes we find it useful to add little debugging glyphs into our scenes. Unity calls them gizmos. THREE.js has several such glyphs for plotting arrows, boxes, camera frustums, animation skeletons, and grids. Here we add a little axis helper that shows our scene’s rotation:

const axesHelper = new THREE.AxesHelper(10);
scene.add(axesHelper);

The parameter is the length of the lines.

First-Person Camera

Several first-person control systems are available. We can add pointer-lock controls that allow mouse-looking with code like this:

import {PointerLockControls} from 'three/examples/jsm/controls/PointerLockControls';

const controls = new PointerLockControls(camera, renderer.domElement);
window.addEventListener('mousedown', () => {
  controls.lock();
});

These controls do not handle WASD inputs. We can support them with our own event handler:

window.addEventListener('keydown', event => {
  if (event.key === 'd') {
    controls.moveRight(0.1);
  } else if (event.key === 'a') {
    controls.moveRight(-0.1);
  } else if (event.key === 'w') {
    controls.moveForward(0.1);
  } else if (event.key === 's') {
    controls.moveForward(-0.1);
  }
});

After this change, our renderer exhibits an artifact that our own renderers have too, one that has been bothering me. The movement is blocky and jarring because keyboard events don’t repeat fast enough. Instead of applying instantaneous movement only when events arrive, we want to update the viewer’s location on every frame. We start by setting some flags for the directions in which we are moving:

let moveLeft = false;
let moveRight = false;
let moveForward = false;
let moveBackward = false;

window.addEventListener('keydown', event => {
  if (event.key === 'd') {
    moveRight = true;
  } else if (event.key === 'a') {
    moveLeft = true;
  } else if (event.key === 'w') {
    moveForward = true;
  } else if (event.key === 's') {
    moveBackward = true;
  }
});

window.addEventListener('keyup', event => {
  if (event.key === 'd') {
    moveRight = false;
  } else if (event.key === 'a') {
    moveLeft = false;
  } else if (event.key === 'w') {
    moveForward = false;
  } else if (event.key === 's') {
    moveBackward = false;
  }
});

Then we apply our offsets inside animate, perhaps like this:

function animate() {
  if (moveLeft) controls.moveRight(-0.1);
  if (moveRight) controls.moveRight(0.1);
  if (moveForward) controls.moveForward(0.1);
  if (moveBackward) controls.moveForward(-0.1);
  // ...
}

There’s a problem with this approach. On a machine with a framerate of 60 FPS, we’ll have moved 60 × 0.1 = 6 units after 1 second. On a machine with a framerate of 30 FPS, we’ll have moved only 30 × 0.1 = 3 units after 1 second. This is privilege. We want to enforce a constant speed that’s independent of the framerate. We know this to be true:

$$\mathrm{speed} = \frac{\mathrm{distance}}{\mathrm{time}}$$

To determine how much the camera has moved ($\mathrm{distance}$), we need some measure of how much time has elapsed. That leads us to this code:

let previousTime = performance.now();
const speed = 0.01;

function animate() {
  let currentTime = performance.now();
  let elapsedTime = currentTime - previousTime;

  if (moveLeft) controls.moveRight(-elapsedTime * speed);
  if (moveRight) controls.moveRight(elapsedTime * speed);
  if (moveForward) controls.moveForward(elapsedTime * speed);
  if (moveBackward) controls.moveForward(-elapsedTime * speed);
  previousTime = currentTime;
  // ...
}

Probably we should go back and implement this in our heightmap renderers.


Well, we recapitulated a whole semester in the span of a single lecture with the help of a library. How depressing. THREE.js has support for much more, including textures, mesh loaders, and skeletal animations. I encourage you to use it and learn from it, but nothing beats implementing your own system to maximize that learning.

See you next time.


P.S. It’s time for a haiku!

We reinvent wheels
It’s not that we need new wheels
But new inventors