Welcome to the very last article of this series on WebGL. Here, I’m going to explain how to complete the animation by adding MSAA. I will also explain a final piece of OpenGL’s rendering pipeline. To read last week’s article, click here.
When we drew the rays directly onto the canvas, they looked sharp and crisp, but with the post-processing in place, they no longer do. The reason is that, while the WebGL canvas has antialiasing enabled by default, we lose this ability once we draw to textures instead of the canvas directly. To understand why that happens, we have to better understand the hidden part of the OpenGL pipeline that runs between our vertex and fragment shaders.
Understanding OpenGL’s Rendering Pipeline, Part Three
To recap, our vertex shader transforms screen coordinates into clip coordinates, and our fragment shader receives every pixel that is touched by one of our triangles and provides a color for it. To understand why we suddenly have ragged edges, we need to understand how OpenGL decides whether a triangle actually touches a pixel. To do so, it simply checks whether at least half of the pixel is covered by the triangle. If it is, the pixel is considered “completely covered,” and if it isn’t, the pixel is considered “completely outside” the triangle.
The reason this happens is that a texture only provides a single sample per pixel for this check. To add antialiasing, we have to introduce more samples. Adding more samples simply means that, instead of checking once whether a triangle covers at least half of a pixel, we tell OpenGL to test each pixel four times (or more): first the top-right quadrant of the pixel, then the top-left, then the bottom-right, then the bottom-left. If a triangle covers two of these quadrants, the pixel is considered “half-covered”; if it covers three, it is considered 75% covered. In that case, the fragment shader still receives the pixel and calculates a color as if the pixel were completely covered by the triangle. However, instead of using that color as-is, OpenGL adds another step after our fragment shader that mixes the color with the background color according to how much of the pixel is actually covered. In other words, OpenGL needs a way to remember how much of the full color coming out of the fragment shader it should actually apply.
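To make this concrete, here is a toy sketch of the idea in plain JavaScript. This is not actual GPU code: the sample positions, the `insideTriangle` predicate, and both function names are placeholders I made up to illustrate the quadrant test described above.

```javascript
// Toy model of the 4-sample coverage test: count how many of a pixel's
// sample positions lie inside the triangle, then report the fraction.
// `insideTriangle` stands in for the rasterizer's real point-in-triangle test.
function coverage(samples, insideTriangle) {
  const hits = samples.filter(insideTriangle).length
  return hits / samples.length // 0, 0.25, 0.5, 0.75, or 1 for 4 samples
}

// Blend the fragment shader's full-intensity output with the background
// by the coverage fraction, which is what the resolve step does conceptually.
function resolve(fragColor, bgColor, cov) {
  return fragColor.map((c, i) => c * cov + bgColor[i] * (1 - cov))
}
```

For example, if a hypothetical triangle edge only covers one of the four quadrant samples, `coverage` returns 0.25, and `resolve` mixes a quarter of the fragment color with three quarters of the background.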
LearnOpenGL has a great visual introduction to this sampling, which I recommend you read.
Unfortunately, this kind of multisampling doesn’t work with regular textures in WebGL2. For that, we will need yet another concept called a render buffer. This is the final puzzle piece to understanding OpenGL’s rendering pipeline.
A render buffer works almost like a texture, but not quite: it can serve as a render target just like a texture, but it cannot be used as an input to a fragment shader. To use the contents of a render buffer as input to a fragment shader, we have to copy them into a regular texture and use that one instead. So, in short, you can let a fragment shader write to a render buffer, but you cannot let it read from one. To access the contents of a render buffer for any additional post-processing steps, you need to “blit” them onto a texture.
All of this may sound very abstract, so let us use a real-world example: the iris animation.
Implementing MSAA
To implement MSAA manually, we first have to understand when using MSAA is actually useful. Remember that, for most of the rendering passes, we essentially just have a texture the size of our canvas and modify it progressively to apply blur, bloom, and tone mapping. That means that we copy pixel information between two textures of the same size, merely adjusting each pixel’s color.
No aliasing can happen there. Aliasing can only occur where we convert from vector space to pixel space, and that happens exactly once: when we actually draw our rays.
So, to enable MSAA again, we have to convert our scenetarget to use render buffers. Let’s head back to the rendering engine and change the scene target:
this.scenetarget = {
  fb: gl.createFramebuffer(),
  scene: this.createTexture(),
  fbMSAA: gl.createFramebuffer(),
  rbMSAA: gl.createRenderbuffer()
}
As you can see, instead of replacing the existing scene texture with a render buffer, I have opted to add a second frame buffer/render buffer pair. This way, one can turn MSAA on and off at will: if MSAA is active, we use the newly created frame buffer/render buffer pair, and if it’s disabled, we simply continue to use our existing frame buffer/texture pair.
Next, we have to set up this new pair:
gl.bindFramebuffer(gl.FRAMEBUFFER, this.scenetarget.fbMSAA)
gl.bindRenderbuffer(gl.RENDERBUFFER, this.scenetarget.rbMSAA)
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, gl.getParameter(gl.MAX_SAMPLES), internalFormat, cWidth, cHeight)
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, this.scenetarget.rbMSAA)
As you can see, this code is almost exactly the same as the code that sets up the texture. However, instead of binding a texture, we now call bindRenderbuffer. Furthermore, instead of allocating a texture’s storage, we allocate the render buffer’s storage with renderbufferStorageMultisample. This tells OpenGL that we want to enable multi-sample antialiasing in this render buffer. Technically, it only enables a setting that tells OpenGL: “Whenever you write to this render buffer, check each pixel multiple times to see whether a triangle touches it.” The last line, finally, mirrors attaching a texture to a frame buffer, only that we attach a render buffer.
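One caveat: gl.getParameter(gl.MAX_SAMPLES) differs between devices, and requesting the maximum everywhere costs memory. A hypothetical helper (the name and fallback policy are my own, not part of the article’s engine) could clamp a requested sample count like this:

```javascript
// Hypothetical helper: clamp a requested MSAA sample count to what the
// hardware reports. In real code, `maxSamples` would come from
// gl.getParameter(gl.MAX_SAMPLES).
function chooseSampleCount(requested, maxSamples) {
  if (maxSamples <= 1) return 0 // effectively no MSAA available
  return Math.min(requested, maxSamples)
}
```

You could then pass, say, `chooseSampleCount(4, gl.getParameter(gl.MAX_SAMPLES))` as the second argument to renderbufferStorageMultisample, since four samples is often a reasonable quality/cost trade-off.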
Using this new render buffer is as simple as changing which frame buffer we draw our rays onto:
if (this.msaaEnabled) {
  this.setFramebuffer(this.scenetarget.fbMSAA, cWidth, cHeight)
} else {
  this.setFramebuffer(this.scenetarget.fb, cWidth, cHeight)
}
However, the crux now lies in getting the information back out of this render buffer. Since we probably want to do some post-processing, we need to transfer its contents into a regular texture; otherwise, we won’t be able to do anything with the data. Fortunately, this next step is straightforward:
if (this.msaaEnabled) {
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.scenetarget.fbMSAA)
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, this.scenetarget.fb)
  gl.blitFramebuffer(0, 0, cWidth, cHeight, 0, 0, cWidth, cHeight, gl.COLOR_BUFFER_BIT, gl.LINEAR)
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, null)
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null)
}
It turns out you can explicitly specify whether you want to read from or draw to a frame buffer when you bind it. What this code essentially does is tell OpenGL that we want to read data out of the render buffer and write it into our regular texture (which is attached to the “normal” frame buffer). Then we call gl.blitFramebuffer, and this is where the magic happens.
This function recognizes that the source of the operation is not a texture but a multisampled render buffer, so it looks at the individual samples to determine how strongly a pixel on the edge of an object should be colored. For example, if three of a pixel’s four samples say “yes, this is touched by the triangle,” it will write the color at 75% intensity into the target texture. The gl.LINEAR argument selects the interpolation filter used when the source and target rectangles differ in size; for a multisample resolve like ours, the two rectangles must be identical anyway, so it has no visible effect here.
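Conceptually, the resolve that blitFramebuffer performs boils down to averaging each pixel’s samples into one final color. Here is a minimal sketch of that idea in plain JavaScript (not what the driver literally executes; sample colors are represented as [r, g, b, a] arrays):

```javascript
// Conceptual sketch of a multisample resolve: average all of a pixel's
// sample colors into a single output color.
function resolvePixel(sampleColors) {
  const n = sampleColors.length
  return sampleColors
    .reduce((acc, c) => acc.map((v, i) => v + c[i]), [0, 0, 0, 0])
    .map(v => v / n)
}
```

With three opaque white samples and one black one, this produces a 75%-gray pixel, exactly the “75% intensity” case from the example above.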
Lastly, we unbind the frame buffers for good measure to prevent OpenGL from even thinking about making funny noises.
The beauty of this approach is that all the remaining code can stay unchanged, because regardless of whether MSAA is on or off, the scenetarget.scene-texture at this point will contain the rendered iris. And since no more aliasing can happen after this point (we are only copying pixels one-to-one between equally sized buffers), this concludes adding MSAA to our little project.
Why did MSAA work in the first place…?
One final question you may now have is: why did this whole antialiasing work to begin with, but stop working once we started drawing to textures instead of the canvas? Here’s a revelation: we never really talked about the canvas, other than saying it really is just another frame buffer. But if it’s just another frame buffer, nothing prevents the browser from attaching a render buffer to it instead of a texture. It turns out there is a setting called antialias, which is true by default:
const gl = canvas.getContext("webgl2", { antialias: true })
If this setting is enabled, the browser backs the canvas with a render buffer; if we disable it, the canvas uses a regular texture instead. Furthermore, if we write from a texture that is the same size as the canvas onto the canvas, no antialiasing can happen regardless of this setting. The reason is, again, that we are only transferring pixels: no matter how many samples you use, every pixel will be considered “fully covered,” because there is no geometry information left.
Finally, you may ask how the browser then uses this render buffer. Here are the last two concepts to understand: in 3D rendering, you usually have two buffers, a front buffer and a back buffer. Whenever you draw something “onto the screen,” you aren’t actually drawing onto the screen at all; in reality, you draw onto a hidden back buffer. The contents of this back buffer are what the browser then uses to actually draw onto the canvas (the front buffer).
The reason you can’t draw directly onto the canvas is that the browser needs to composite your 3D rendering with the rest of the website, which can include elements or colors behind the canvas. In addition, drawing onto a back buffer means that, if the browser has to re-draw the entire website before your code is done rendering, it can simply re-use the last frame. This prevents half-drawn frames from ever appearing, because only “finished” buffers are drawn on screen.
This is also where the browser can perform the MSAA: because you only ever write to a render buffer, the browser can wait until your code is done writing and then perform such a “blit” behind the scenes, transferring the rendered information onto the front buffer for you.
For very simple applications, this is great because you don’t have to do any MSAA manually and can just render things onto the canvas directly without having to worry. But, as you have seen, as soon as you need to do any post-processing, you need to use textures. And as soon as the vertex information has been rasterized, there is no way to undo any aliasing. So, as a rule of thumb: whenever you convert vertex information into pixel data, you probably want to use a render buffer.
Final Thoughts
This concludes this … rather lengthy exploration into the realm of WebGL. When I sat down to write a few lines of code to make some triangles dance, I would never have thought that it would take me so long just to get to a barely acceptable state, and so much longer to get to a visually pleasing one.
It was a crazy difficult project, and I am very happy that I won’t have to touch much WebGL code for the near or mid-term future.
What started as “an iris is a simple object, how hard can it be?” turned into an entire odyssey through obscure parts of programming. It really helped to write everything down and tell you about my journey, because I feel a great sense of relief now.
In the end, I produced 1,685 lines of JavaScript, 270 lines of GLSL, and 344 lines of HTML, all just to render a bunch of triangles and make them shine (and give you a way to play with the settings). It’s mind-boggling to think about how much work there is to render things on a computer. And it took me just over 15,000 words to tell you about this journey.
This entire project gave me a whole new appreciation for the 3D artists that produce the movie effects we have come to enjoy; the game developers who allow us to play photorealistic games; and everyone who has to implement all of this nitty-gritty in such an optimized way that we rarely see any stuttering in animated graphics.
But, as for me: I am happy to have learned more about the fundamentals behind my own research; how LLMs calculate their weights; how we turn text into machines; and how mind-bogglingly complex juggling a bunch of numbers can become.
That being said, it feels as if I have freed myself from a curse. Now that I have written down these lines, I am extremely happy to be able to go back to just doing what I enjoy for much longer than two intensive weeks: sociology, and democracy.
I hope you enjoyed this rabbit hole! As always, if you have any questions, ping me on social media. Au revoir!