# teaching machines

## CS 488: Lecture 20 – Perlin Noise

April 5, 2021. Filed under graphics-3d, lectures, spring-2021.

Dear students:

Last time we examined value noise as a means of adding coherent randomness to too-perfect surfaces to make them feel more natural and less algorithmic. We continue that discussion today, but explore a different noise generation scheme invented by Ken Perlin.

### The Rise and Fall of Value Noise

The algorithm for generating value noise is reasonably straightforward, and it does produce coherent randomness. Why should we consider alternative noise algorithms? Examine enough samples of value noise, and you will see artifacts, the computer graphics term for distracting features in surfaces, textures, and animations that reveal their algorithmic nature. The artifacts that we see in value noise are strips of homogeneous color. What makes these regions of low-frequency change offensive is that they are aligned with the grid. When the homogeneity spreads along both axes, our eyes are drawn to the plus signs that appear.

To eliminate the artifacts of value noise, we turn to gradient noise.

The gradient noise algorithm was invented by Ken Perlin in the 1980s. Perlin had been hired by the Mathematical Applications Group, Inc., to work on the movie Tron. The movie was a technical feat for the time, but Perlin said this of his experience:

Working on TRON was a blast, but on some level I was frustrated by the fact that everything looked machine-like. In fact, that became the “look” associated with MAGI in the wake of TRON. So I got to work trying to help us break out of the “machine-look ghetto.”

Gradient noise differs from value noise in that instead of generating a random scalar at every lattice point, we generate a random vector. These vectors lie along the surface of the randomness, each pointing in the direction of a nearby maximum. Such vectors are called gradients, which gives this flavor of noise its name. As the noise surface conforms to these gradients, we see frequent dips above and below the equilibrium point, and we don't see the patches of axis-aligned homogeneity that plague value noise.
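To make the idea concrete before we tackle two dimensions, here's a minimal 1D gradient noise sketch. The names, the table size, and the inlined fade curve are my own choices for illustration, not Perlin's exact formulation. Each integer lattice point gets a random slope, and the noise at p blends the values that the two neighboring slopes predict:

```javascript
// Each integer lattice point gets a random slope in [-1, 1].
const slopes = Array.from({length: 256}, () => Math.random() * 2 - 1);

function gradientNoise1(p) {
  const x0 = Math.floor(p);
  const x1 = x0 + 1;
  const t = p - x0;

  // Each neighbor "predicts" the value at p: its slope times the
  // offset from that neighbor to p.
  const left = slopes[x0 & 255] * t;
  const right = slopes[x1 & 255] * (t - 1);

  // Blend the two predictions with a smoothed weight so the curve
  // flattens out at the lattice points.
  const fade = t * t * t * (t * (t * 6 - 15) + 10);
  return left + (right - left) * fade;
}
```

Note that at every integer p the noise is exactly zero, since both predictions collapse to the lattice point itself. The surface crosses the equilibrium at every lattice point, which is exactly the frequent dipping described above.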

### 2D Noise Implementation

There are thousands of tutorials on how to implement Perlin noise on the internet. One reason for their abundance is that the algorithm is not obvious. It has some mathematical underpinnings that many developers don’t carry around in their heads. Also, Perlin himself has published several versions of the algorithm. Really, though, I think people feel compelled to write because much of what they’ve seen their predecessors write obscures the algorithm with unnecessary optimizations. We are going to throw away all optimizations in our implementation.

First up, we need random gradients. Let’s generate a 2D table of them. We want to ensure the vectors point in all directions with equal likelihood. Generating random x- and y-coordinates in $[-1, 1]$ would not produce a uniform distribution of directions; more vectors would point toward the diagonals than along the axes. Instead, we generate a random angle in $[0, 2\pi]$ and convert the polar coordinates $(1, \mathrm{angle})$ to Cartesian coordinates. Here’s code to produce the table of gradients:

static randomGradients2(width, height) {
  const gradients = [];
  for (let r = 0; r < height; ++r) {
    gradients.push([]);
    for (let c = 0; c < width; ++c) {
      // A random angle gives a uniformly distributed direction.
      const radians = Math.random() * 2 * Math.PI;
      gradients[r].push(new Vector2(Math.cos(radians), Math.sin(radians)));
    }
  }
  return gradients;
}


We want to create just one table and use it over and over again. Let’s lazily evaluate it on the first call to the noise lookup function:

static slowPerlin2(p) {
  this.gradients = this.gradients || this.randomGradients2(256, 256);
  // ...
}


Notice that we hardcode the table size in this implementation to 256×256.

The caller has passed in a position p. We find the lattice points of the cell that surrounds p:

static slowPerlin2(p) {
  // ...
  const base = p.floor();
  const fraction = p.subtract(base);


With the lattice point coordinates in hand, we look up the gradients at the four neighbors:

static slowPerlin2(p) {
  // ...
  const apex = base.add(new Vector2(1, 1));
  const xyGradient = this.gradients[base.y][base.x];
  const XyGradient = this.gradients[base.y][apex.x];
  const xYGradient = this.gradients[apex.y][base.x];
  const XYGradient = this.gradients[apex.y][apex.x];


The naming scheme here is a little funny. A lowercase x means left. An uppercase X means right. The y-axis is treated similarly.

How do we go from gradients to the scalar noise intensity? Perlin proposed taking the dot product between a corner’s gradient and the vector from the corner to the position p, like this:

static slowPerlin2(p) {
  // ...
  const xyDot = p.subtract(new Vector2(base.x, base.y)).dot(xyGradient);
  const XyDot = p.subtract(new Vector2(apex.x, base.y)).dot(XyGradient);
  const xYDot = p.subtract(new Vector2(base.x, apex.y)).dot(xYGradient);
  const XYDot = p.subtract(new Vector2(apex.x, apex.y)).dot(XYGradient);


Now we’ve got four scalars describing how p aligns with the four gradients. We condense these down using a bilinear interpolation, landing us at our scalar noise:

static slowPerlin2(p) {
  // ...
  const lerp = (a, b, t) => a + t * (b - a);

  const below = lerp(xyDot, XyDot, fraction.x);
  const above = lerp(xYDot, XYDot, fraction.x);
  const value = lerp(below, above, fraction.y);

  return value;
}


In our Field2 class, we can generate a raster of Perlin noise like this:

static slowPerlin(dimensions, scale) {
  const field = this.allocate(dimensions, 1);
  for (let p of field) {
    const value = Noise.slowPerlin2(p.multiply(scale));
    field.set(p, value * 0.5 + 0.5);
  }
  return field;
}


When we view this at small scales, we see artifacts, the very thing we were trying to get rid of! The fix is to smooth out the weights we use in the lerp calls. Perlin recommended passing them through this easing function:

static fade(t) {
  return t * t * t * (t * (t * 6 - 15) + 10);
}
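The polynomial here is $6t^5 - 15t^4 + 10t^3$, whose derivative $30t^2(t - 1)^2$ vanishes at both $t = 0$ and $t = 1$. That's why it helps: the blend weight flattens out at the lattice lines instead of creasing there. A quick sanity check of the endpoint behavior, written as standalone functions for the sake of the check:

```javascript
// Perlin's fade curve: 6t^5 - 15t^4 + 10t^3.
function fade(t) {
  return t * t * t * (t * (t * 6 - 15) + 10);
}

// Its derivative, 30t^2(t - 1)^2, is zero at both endpoints, so
// interpolations blended with fade have no creases at lattice lines.
function fadePrime(t) {
  return 30 * t * t * (t - 1) * (t - 1);
}

console.log(fade(0), fade(1));           // the curve still maps 0 to 0 and 1 to 1
console.log(fadePrime(0), fadePrime(1)); // the slope is flat at both ends
```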


Our improved lerp calls look like this:

const below = lerp(xyDot, XyDot, this.fade(fraction.x));
const above = lerp(xYDot, XYDot, this.fade(fraction.x));
const value = lerp(below, above, this.fade(fraction.y));


Now the noise is wonderfully smooth and mostly free of artifacts.

This technique scored Perlin an Academy Award. What have I been doing?

### Nontextural Noise

Perlin had another goal beyond imposing high-frequency changes. The machines he was using at MAGI had limited memory, so he wanted to avoid stored textures. A Perlin noise function is largely procedural, doing most of its computation on the fly. A small amount of static storage may be needed for random gradients, though some implementations optimize this need away.

To compute the noise at position p, we must find the gradients at the lattice points of the surrounding cell. This is tricky if the gradients aren’t stored in a nice grid in RAM. Perlin’s suggestion was to store a fixed set of gradients in a 1D array. Accompanying that is a shuffled 1D array of indices into the gradients array. This shuffled array is used to turn the 2D coordinates into a single index. We need this code to deal with the gradient lookup:

generate random gradients
generate permutation table p

function hash(x, y)
  return p[p[x] + y]
We’d use this code like so:

static fastPerlin2(p) {
  // ...
  const xyGradient = this.gradients[hash(base.x, base.y)];
  const XyGradient = this.gradients[hash(apex.x, base.y)];
  const xYGradient = this.gradients[hash(base.x, apex.y)];
  const XYGradient = this.gradients[hash(apex.x, apex.y)];


Perlin suggested many other optimizations, but we’re not going to discuss them.

### Applications of Noise

Noise can be used to implement many effects, too many for my limited imagination to enumerate. We’ll show just a few examples of how it may be used to disrupt perfect surfaces, perfect texture patterns, and perfect animation.

#### Displacement Mapping

We can turn a perfect sphere into a bumpy blob by pushing each vertex out from the center, as we do in this code:

const amplitude = 0.07;
for (let i = 0; i < trimesh.vertexCount; i += 1) {
  const noise = Noise.fractalPerlinNoise(trimesh.positions[i].scalarMultiply(5), 1);
  trimesh.positions[i] = trimesh.positions[i].scalarMultiply(1 + noise * amplitude);
}


This displacement works better on a subdivided icosahedron (an icosphere) than a lat-lon sphere because the triangles are sized more uniformly.

#### Wood Grain

We can also “dip” 3D models into noise to apply perturbations on their surfaces. For instance, we can make an object appear like it is carved from wood. If we use a 3D noise field, we don’t even have to deal with UV unwrapping.

Let’s assume our model is situated about the origin in model space. We’ll use its model space coordinates as texture coordinates into our 3D noise texture. First, we export these coordinates from the vertex shader:

// ...
out vec3 ftexcoords;

void main() {
  // ...
  ftexcoords = position;
}


In the fragment shader, we figure out how far we are from the y-axis and initially color by that distance:

// ...
in vec3 ftexcoords;

void main() {
  float distanceY = length(ftexcoords.xz);
  vec3 albedo = vec3(distanceY);
  fragmentColor = vec4(albedo, 1.0);
}


We want the color to band into periodic rings, as on a tree, so we call upon sin to make an oscillation:

// ...
in vec3 ftexcoords;

void main() {
  float distanceY = length(ftexcoords.xz);
  float grain = sin(distanceY * someFrequency);
  vec3 albedo = vec3(grain);
  fragmentColor = vec4(albedo, 1.0);
}


Let’s use grain to arbitrate between two earth tones to make this look more like wood:

vec3 albedo = grain > -0.5 ? vec3(1.0, 0.6, 0.2) : vec3(1.0, 0.5, 0.1);


But the rings are too perfect. Let’s call upon noise and apply a phase shift to our oscillation that will be coherently random:

float random = texture(tex, ftexcoords).r;
float grain = sin(length(ftexcoords.xz) * 80.0 + random * 40.0);


See you next time.

Sincerely, 