CS 488: Lecture 23 – Framebuffer Objects and Billboards
Dear students:
Normally we draw our objects directly into the framebuffer that shows up on our screens. Sometimes, however, we’d like to draw into some other raster, perhaps because we’re doing offline rendering or because we want to synthetically generate textures. To change the destination of our pixels in WebGL, we use framebuffer objects (FBOs). Today we explore the framebuffer object API and apply FBOs to accelerate the rendering of many spheres using billboards.
Framebuffer Objects
Framebuffer objects are a generalization of the default framebuffer that we’ve been writing to up till now. They are pixel rasters that are resident on the GPU, and they are created and bound like other OpenGL objects:
create FBO
bind FBO
attach color raster
attach depth raster
draw scene
bind null FBO
Before any drawing will succeed, the FBO must be framebuffer complete, which means that it must have its attachments properly set. Common attachments include a color raster and a depth raster. The rasters may be renderbuffers or textures. Renderbuffers can only be written to by the GPU and are intended to be read back to the CPU. They cannot be sampled by a shader and therefore can't feed into any later rendering. Renderbuffers are handy if you are writing an offline renderer and need to capture its images. Textures, on the other hand, can be written to in one render and read from in future renders. These are handy if you want to put a mirror in your scene. You render the scene from the mirror's point of view to a texture-based FBO in a first pass. In a second pass, you apply the texture to the planar shape of the mirror or portal.
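As a minimal sketch of the renderbuffer path, here's how we might attach a depth renderbuffer to a bound FBO and then confirm completeness. The 256-by-256 size is an arbitrary stand-in:

const depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT24, 256, 256);

// Attach the renderbuffer to whatever FBO is currently bound.
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthBuffer);

// A draw will only succeed if the FBO is framebuffer complete.
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  console.error('The framebuffer is incomplete.');
}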
You are not required to attach both color and depth rasters. If you don’t need a depth test, for example, you can omit the depth attachment. If you only need depth, as will be the case with shadow mapping, you can omit the color attachment.
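For instance, a depth-only FBO for shadow mapping might be assembled like this sketch. It assumes a depth texture has been reserved ahead of time, much like the helper we write later in this lecture. With no color attachment, we tell WebGL not to draw to or read from any color buffer:

const shadowFramebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFramebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);

// There's no color raster to write to or read from.
gl.drawBuffers([gl.NONE]);
gl.readBuffer(gl.NONE);

gl.bindFramebuffer(gl.FRAMEBUFFER, null);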
Rendering a High-poly Sphere
To demonstrate an FBO in action, let’s render a mass of high-poly spheres. In a normal rendering, we’d be processing many vertices per sphere. However, we’ll render just one real sphere to a texture-based FBO. Then we’ll render a bunch of quadrilaterals that have the texture applied.
Our first step is to write a little helper method to reserve the destination texture. There is no source image; we only need the texture allocated.
function reserveColorTexture(width, height, unit = gl.TEXTURE0) {
  gl.activeTexture(unit);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return texture;
}
Our second step is to initialize a framebuffer object that renders to this texture. For the moment, we attach only a color raster, ignoring depth.
function initializeFbo(colorTexture) {
  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return framebuffer;
}
Our third step is to render the sphere to the texture. Not much in our standard drawing routine changes. We bind the framebuffer, set the viewport to the texture dimensions, and adapt the projection matrix to use the texture’s aspect ratio:
function renderToTexture(width, height, framebuffer) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.viewport(0, 0, width, height);
  gl.clearColor(1, 1, 1, 1);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  const clipFromModel = Matrix4.ortho(-1, 1, -1, 1, -1, 1);

  sphereProgram.bind();
  sphereProgram.setUniformMatrix4('clipFromModel', clipFromModel);
  sphereArray.bind();
  sphereArray.drawIndexed(gl.TRIANGLES);
  sphereArray.unbind();
  sphereProgram.unbind();

  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}
Somewhere in our initialization routine, we call all these functions:
const size = 256;
const colorTexture = reserveColorTexture(size, size, gl.TEXTURE0);
const framebuffer = initializeFbo(colorTexture);
renderToTexture(size, size, framebuffer);
After this code, our texture is ready to be applied.
Atoms
Ultimately we want to render a mass of atoms, but let’s start with just one. For our geometry, we only need four vertices, each with just a position and texture coordinates. We don’t need any normals because the lighting has already been done.
function initializeAtoms() {
  const positions = [
    -0.5, -0.5, 0, 1,
     0.5, -0.5, 0, 1,
    -0.5,  0.5, 0, 1,
     0.5,  0.5, 0, 1,
  ];

  const texcoords = [
    0, 0,
    1, 0,
    0, 1,
    1, 1,
  ];

  const faces = [
    0, 1, 2,
    1, 3, 2,
  ];

  const attributes = new VertexAttributes();
  attributes.addAttribute('position', 4, 4, positions);
  attributes.addAttribute('texcoords', 4, 2, texcoords);

  // create shader program
  // create vertex array
}
The vertex shader computes the clip space position and passes along the texture coordinates:
uniform mat4 clipFromEye;
uniform mat4 eyeFromModel;

in vec4 position;
in vec2 texcoords;

out vec2 ftexcoords;

void main() {
  gl_Position = clipFromEye * eyeFromModel * position;
  ftexcoords = texcoords;
}
The fragment shader looks up its color in the texture:
uniform sampler2D sphereColorTexture;

in vec2 ftexcoords;

out vec4 fragmentColor;

void main() {
  fragmentColor = texture(sphereColorTexture, ftexcoords);
}
In our render function, we perform our usual drawing routine. We assume that the default framebuffer is bound, and we use the canvas dimensions to shape our viewport and projection matrix.
function render() {
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.clearColor(1, 1, 1, 1);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  atomProgram.bind();
  atomProgram.setUniform1i('sphereColorTexture', 0);
  atomProgram.setUniformMatrix4('clipFromEye', clipFromEye);
  atomProgram.setUniformMatrix4('eyeFromModel', trackball.rotation);
  atomArray.bind();
  atomArray.drawIndexed(gl.TRIANGLES);
  atomArray.unbind();
  atomProgram.unbind();
}
There’s our sphere. It looks perfectly smooth and only took four vertices and a texture to render. But if we rotate the sphere using our trackball interface, the ruse is revealed. The sphere shows itself to be flat.
Billboards
To maintain the illusion that we have full 3D geometry, we employ a trick called billboarding. We will ensure that the quadrilateral always faces the viewer, just as a billboard is designed to face traffic. One approach to billboarding is to find a rotation matrix that turns the quadrilateral to face the viewer. But each quadrilateral would need its own matrix, and every matrix would need updating whenever the viewer moves. Surely there's a better way.
The better way is to treat the quadrilateral as a point in model and world space. Once we’ve transformed to eye space, we have a clear picture of what it means for a surface to face the viewer. It must span the x- and y-axes of eye space. To make the quadrilateral visible, we push the vertices out along the eye’s x- and y-axes. We can use the texture coordinates to determine the direction in which to push a vertex.
First, we collapse our atom down so that its four vertices are coincident, at least initially:
function initializeAtoms() {
  const positions = [
    0.5, 0, 0, 1,
    0.5, 0, 0, 1,
    0.5, 0, 0, 1,
    0.5, 0, 0, 1,
  ];
  // ...
}
The coordinates now represent the atom’s center. They are shifted off the origin here so that rotation actually moves the atom. The texture coordinates stay the same. In the vertex shader, we head to eye space and push the position out by a displacement vector. Consider this mapping between the texture coordinates and the displacement vector:
texture coordinates | displacement
--- | ---
(0, 0) | (-1, -1)
(1, 0) | (1, -1)
(0, 1) | (-1, 1)
(1, 1) | (1, 1)
The displacement is derived from the texture coordinates by scaling by 2 and translating by -1. That leads to this vertex shader:
uniform mat4 eyeFromModel;
uniform mat4 clipFromEye;
uniform float radius;

in vec4 position;
in vec2 texcoords;

out vec2 ftexcoords;

void main() {
  vec4 positionEye = eyeFromModel * position;

  // Push the vertex away from the center along eye space's x- and y-axes.
  positionEye.xy += (texcoords * 2.0 - 1.0) * radius;

  gl_Position = clipFromEye * positionEye;
  ftexcoords = texcoords;
}
After these changes, we never see the side of our quadrilateral. This particular kind of billboard is called a screen-aligned billboard. The quadrilateral faces the viewport and is aligned to the axes of the viewport. There are other billboard types to align in different ways.
Lots of Atoms
The true test of our render-to-texture speedup is to render many atoms. Here we randomly generate 200 of them in the cube spanning [-1, 1] along each axis:
function initializeAtoms(n) {
  const positions = [];
  const texcoords = [];
  const faces = [];

  for (let i = 0; i < n; ++i) {
    const x = Math.random() * 2 - 1;
    const y = Math.random() * 2 - 1;
    const z = Math.random() * 2 - 1;

    // Make all four positions coincident.
    positions.push(x, y, z, 1);
    positions.push(x, y, z, 1);
    positions.push(x, y, z, 1);
    positions.push(x, y, z, 1);

    texcoords.push(0, 0);
    texcoords.push(1, 0);
    texcoords.push(0, 1);
    texcoords.push(1, 1);

    const base = i * 4;
    faces.push(base + 0, base + 1, base + 2);
    faces.push(base + 1, base + 3, base + 2);
  }

  // ...
}
As we turn the scene, we discover some artifacts. First we see that the atoms have their background color intact. We can eliminate that with an alpha test. When rendering the texture, we give a transparent background color:
gl.clearColor(1, 1, 1, 0);
In the shader program for the atoms (not the shader program for the sphere), we discard any fragment whose color is not opaque:
if (fragmentColor.a < 1.0) discard;
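In context, the atom fragment shader's main function now reads:

void main() {
  fragmentColor = texture(sphereColorTexture, ftexcoords);
  if (fragmentColor.a < 1.0) discard;
}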
This leaves a harsh edge on the atoms. A higher resolution in the texture would help. Or we could resort to blending, but that usually means we have to draw the scene from back to front.
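If we did reach for blending, a minimal sketch would enable it and set the standard over operator before drawing the atoms, though we'd still be responsible for ordering them back to front:

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);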
Fixing Depth
The second artifact is a distracting popping when one quadrilateral suddenly appears in front of a neighbor that it overlaps with. Either we eliminate the overlapping or we make the quadrilaterals behave more like spheres. Let’s make them behave more like real spheres with curvature. Just as we used the texture to determine a fragment’s color from the sphere’s color texture, let’s also look up the fragment’s depth.
First we must capture the depth as a second attachment to our framebuffer object. We need a texture to hold the depth, which we can make with this helper function:
function reserveDepthTexture(width, height, unit = gl.TEXTURE0) {
  gl.activeTexture(unit);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT24, width, height, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_INT, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return texture;
}
This looks similar to reserveColorTexture, but the pixel format is different. Not many formats are legal for depth textures. Additionally, WebGL refuses to interpolate textures made of integers, so we set the filtering to gl.NEAREST. We create the depth texture and send it along with the color texture when we make our FBO:
const colorTexture = reserveColorTexture(size, size, gl.TEXTURE0);
const depthTexture = reserveDepthTexture(size, size, gl.TEXTURE1);
const framebuffer = initializeFbo(colorTexture, depthTexture);
The depth texture is attached much like the color texture:
function initializeFbo(colorTexture, depthTexture) {
  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return framebuffer;
}
Notice there's no number after gl.DEPTH_ATTACHMENT. Many color rasters can be attached, but only one depth raster.
In render, we set the uniform for the depth texture:
atomProgram.setUniform1i('sphereDepthTexture', 1);
In the fragment shader, we pull the depth of the corresponding fragment on the sphere out of the texture’s red channel:
// ...
uniform sampler2D sphereDepthTexture;

void main() {
  float depth = texture(sphereDepthTexture, ftexcoords).r;
  // ...
}
Tweaking the depth of the billboards requires some understanding of how depth is stored by the GPU. First, the depth we pull out of the texture is in [0, 1], even though internally it’s stored as an integer. A depth of 0 means the sphere’s fragment was on the near clipping plane, and a depth of 1 means the fragment was on the far clipping plane. We want to map this depth back to [-1, 1] space, which we accomplish in the usual way:
float depth = texture(sphereDepthTexture, ftexcoords).r * 2.0 - 1.0;
We want to perturb the billboard fragment's incoming depth by the sphere depth. gl_FragCoord is a builtin vec4 that holds the fragment's pixel coordinates in its xy-components and its depth in its z-component. By default, gl_FragCoord.z is assigned to the builtin gl_FragDepth. But we can explicitly assign it ourselves if we want non-standard behavior, as we do here. The two depth ranges are likely different, so we scale our perturbation by their ratio:
gl_FragDepth = gl_FragCoord.z + depth * ratio;
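How might the ratio uniform be computed? Here's one hedged sketch. It assumes the scene's own projection is orthographic with hypothetical near and far planes sceneNear and sceneFar, that the sphere texture's depth of ±1 corresponds to ±radius of eye space in the scene, and that our shader program wrapper has a setUniform1f sibling of setUniform1i:

// One plausible ratio: the billboard's radius relative to the scene's
// total depth range. These names are assumptions, not code from above.
const ratio = radius / (sceneFar - sceneNear);
atomProgram.setUniform1f('ratio', ratio);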
Note that this perturbation applies only when an orthographic projection is being used. A perspective projection maps the z-coordinate to depth in a non-linear way. In such a case, simply adding the scaled displacement would not be accurate. However, it works here. The popping goes away. The rendering is fast. Thanks, framebuffer objects!
Conclusion
We have seen framebuffer objects, a tool for capturing a render so it can be used for other purposes. Using FBOs, we can achieve a variety of effects, including mirrors, portals, synthetic cube maps, and motion blur. FBOs can also be used to implement deferred shading, a two-pass rendering scheme meant to achieve higher frame rates by calculating shading only for visible fragments. In the first pass, we record each fragment's position, normal, texture coordinates, and other properties. The data is stored in one or more textures called gbuffers (geometry buffers). In the second pass, we render a screen-filling quadrilateral textured by the gbuffers. We pull out the properties and run the expensive shading calculations. Only the front-most fragments are considered in this second pass.
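As a taste of that first pass, here's a hedged sketch of attaching two gbuffers to a single bound FBO in WebGL2 and routing the fragment shader's outputs to both at once. The texture names are hypothetical, and storing positions and normals in a float format like RGBA16F requires the EXT_color_buffer_float extension:

// Attach two color rasters to the bound FBO, one gbuffer per attachment.
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, positionTexture, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, normalTexture, 0);

// Tell WebGL that the fragment shader writes to both attachments.
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);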
See you next time.
P.S. It’s time for a haiku!
Rule 5 of acting
Always face the audience
Unless you’re backstage