Attractor Settings

Intro

This post is essentially an add-on to the previous post, which covered implementing a WebGPU-based Chaotic Attractor visualiser that used compute shaders to simulate a high volume of particles and demonstrate their return to an orderly state. After uploading that post, I spent some time comparing the project to what I had already achieved in Unity, and it was immediately clear that without bloom the particle effects felt a little flat. I decided to fix this by implementing bloom support in my WebGPU renderer, which is what this post covers.

To implement bloom I felt I had two immediate options: a raster-based approach, or processing the source image in a compute shader. Whilst the latter sounded fun, I chose the former. Primarily, this was because it gave me a reason to expand the raster-based functionality of my WebGPU renderer, implementing features that’ll be useful for future exploratory projects, whilst also following an approach common to most game engines.

Implementation

The approach I’ve used to implement bloom does the following (a rough sketch of the pass order follows the list):

  1. Prefilters non-contributing values
  2. Downsamples the source image
  3. Additively upsamples the downsampled image
  4. Blends the upsampled result with the initial render buffer
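
To make that order of operations concrete, here is a rough per-frame sketch. The pass helpers and the `levels` count are placeholders I’ve invented for illustration, not the renderer’s actual API.

    // Placeholder pass helpers, standing in for the renderer's real functions.
    declare function prefilterPass(encoder: GPUCommandEncoder): void;
    declare function downsamplePass(encoder: GPUCommandEncoder, level: number): void;
    declare function upsamplePass(encoder: GPUCommandEncoder, level: number): void;
    declare function compositePass(encoder: GPUCommandEncoder): void;
    declare const levels: number; // number of cascade levels

    function renderBloom(encoder: GPUCommandEncoder): void {
        prefilterPass(encoder);          // 1. drop non-contributing pixels
        for (let i = 0; i < levels; i++) {
            downsamplePass(encoder, i);  // 2. walk down the cascade, halving each step
        }
        for (let i = levels - 1; i > 0; i--) {
            upsamplePass(encoder, i);    // 3. walk back up, adding onto each larger level
        }
        compositePass(encoder);          // 4. blend the result with the original render
    }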

As alluded to above, implementing this approach meant adding some new features to my renderer. The most important is the ability to change render targets on the fly. Previously, the renderer was hardcoded to only output to the page’s canvas, limiting us to a single render pass. I expected refactoring this to be tricky, but thankfully I had originally structured the render calls in a way that left them open to being tinkered with. All I had to do was add a new function that changes the render pass descriptor to point at a different texture view prior to the render commands being registered.
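
As a minimal sketch (assuming a single colour attachment, and not the renderer’s exact code), that function only needs to repoint the descriptor’s colour attachment:

    // Point an existing render pass descriptor at an arbitrary texture view
    // instead of the canvas, before the pass is encoded.
    function setRenderTarget(
        descriptor: GPURenderPassDescriptor,
        target: GPUTextureView,
        loadOp: GPULoadOp = "clear"
    ): void {
        const attachments = descriptor.colorAttachments as GPURenderPassColorAttachment[];
        attachments[0].view = target;     // render into this texture instead of the canvas
        attachments[0].loadOp = loadOp;   // 'clear' by default; 'load' comes up later
        attachments[0].storeOp = "store"; // keep the result so the next pass can sample it
    }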

Being able to change the render target is essential to how the whole bloom effect works. Core to the bloom operation is blurring the original source image, which is where the downsampling and upsampling come in. Copying a source image to a render target with half its resolution (downsampling) naturally loses some information. Copying the image back into a render target at the original source resolution (upsampling) then returns a slightly blurred version of the original image. You can test this in most image editing programs by scaling images down and back up. Do this iteratively, repeatedly downsampling until you reach the lowest resolution of a render target cascade and then upsampling back towards the original source resolution, and you’ll get a very blurry version of the original image, as a lot of information has been lost in the process.
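
Building that cascade is just a chain of progressively smaller render targets. A sketch of how it might be created, assuming the device and source resolution are to hand (the format and level count here are my own choices for illustration):

    // Create a chain of textures, each half the resolution of the previous,
    // clamped so no level drops below 1x1.
    function createBloomCascade(
        device: GPUDevice,
        width: number,
        height: number,
        levels: number,
        format: GPUTextureFormat = "rgba16float"
    ): GPUTexture[] {
        const cascade: GPUTexture[] = [];
        for (let i = 1; i <= levels; i++) {
            const w = Math.max(1, width >> i);
            const h = Math.max(1, height >> i);
            cascade.push(device.createTexture({
                size: { width: w, height: h },
                format,
                // rendered into during down/upsampling, sampled by the next pass
                usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
            }));
        }
        return cascade;
    }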

In some cases, this is exactly what you want, but even with bilinear sampling the resultant texture is quite blocky and tends to lose slightly too much information. Using a different sampling method solves this. In this demo I’m using box sampling. This is relatively simple to implement: instead of sampling the source texture from one point, it takes the average of four samples on the edges of a box around the uv position. The size of this box can also be optionally scaled, which affects how focused the final result appears. Eventually I want to look into other sampling methods, as box sampling appears to lack temporal consistency, which is part of why some of the attractors start out a little bit wibbly. I’m curious about randomly distributing sampling points in a circle, since I think that’d create an interesting effect.
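
In WGSL, the box sample can look something like the following. This is a sketch under my own naming, assuming a bilinear sampler and a texelSize value of 1 / resolution, rather than the demo’s exact shader.

    // A WGSL helper for the box sample, held as a string the way WebGPU shader
    // modules usually are. `radius` scales the box; `texelSize` is 1 / resolution.
    const boxSampleWGSL = /* wgsl */ `
        fn boxSample(tex: texture_2d<f32>, samp: sampler, uv: vec2<f32>,
                     texelSize: vec2<f32>, radius: f32) -> vec4<f32> {
            let offset = texelSize * radius;
            // Four bilinear taps offset around uv, averaged together.
            let a = textureSample(tex, samp, uv + vec2<f32>(-offset.x, -offset.y));
            let b = textureSample(tex, samp, uv + vec2<f32>( offset.x, -offset.y));
            let c = textureSample(tex, samp, uv + vec2<f32>(-offset.x,  offset.y));
            let d = textureSample(tex, samp, uv + vec2<f32>( offset.x,  offset.y));
            return (a + b + c + d) * 0.25;
        }
    `;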

Another feature I needed to add to my renderer was the ability to change the ‘load’ operation of the render pass descriptor. The load operation runs at the start of a render pass and by default is set to clear the bound render target. During the upsample phase, we can produce a better quality image by performing the upsample additively. This improves quality because it ensures no information is lost: rather than clearing the current render target, we leave its contents in place and enable additive blending. The upsample is then added on top of the original source’s downsample, which also has the effect of brightening the overall image and can be useful for preventing dark images from eating the bloom. Whilst it’d be nice to use this all the way up the chain to the canvas’ render target, for the final step we have to re-enable clear and manually add the source image and the highest resolution bloom cascade render target together, as otherwise the bloom will propagate each frame and just make your screen white.
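
Concretely, the two pieces involved look roughly like this (a sketch, not the renderer’s exact code): an additive blend state on the upsample pipeline’s colour target, and a ‘load’ load operation on the pass.

    // Additive blend state for the upsample pipeline's colour target,
    // e.g. targets: [{ format: "rgba16float", blend: additiveBlend }].
    const additiveBlend: GPUBlendState = {
        color: { srcFactor: "one", dstFactor: "one", operation: "add" },
        alpha: { srcFactor: "one", dstFactor: "one", operation: "add" },
    };

    // Colour attachment for an upsample pass: load instead of clear, so the
    // upsampled level is blended onto the existing downsample result.
    function upsampleAttachment(target: GPUTextureView): GPURenderPassColorAttachment {
        return {
            view: target,     // the next-larger cascade level
            loadOp: "load",   // keep its existing contents...
            storeOp: "store", // ...so the additive blend accumulates into them
        };
    }

For the final composite to the canvas, the attachment goes back to loadOp: "clear" and the source image and top bloom level are combined explicitly.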

Something I spent a little time dithering over was whether or not the demo should require each pixel to reach some set brightness threshold before it contributes to the bloom effect. In a game this makes sense because you only want the very bright objects contributing to bloom, but for a fancy particle simulation I think it makes sense to overdo the bloom a little bit. As I want to reuse my bloom logic in other parts of the project, I decided to add the feature anyway and set the threshold to 0 for the time being. With the next update I plan on adding a toggle to the main post to allow the viewer to at least disable the bloom if they wish, however I think the bloom does a good job of exposing just how many overlapping particles there are by making the bloom extra blobby at really intense points.
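
The prefilter itself is tiny. A sketch of what it could look like in WGSL follows; the luminance weights and select-based cut-off are my own choice for illustration, not necessarily what the demo does.

    // Prefilter sketch: pixels dimmer than the threshold contribute nothing.
    // With the threshold set to 0, everything contributes.
    const prefilterWGSL = /* wgsl */ `
        fn prefilter(colour: vec3<f32>, threshold: f32) -> vec3<f32> {
            // Approximate perceptual luminance of the pixel.
            let brightness = dot(colour, vec3<f32>(0.2126, 0.7152, 0.0722));
            return select(vec3<f32>(0.0), colour, brightness >= threshold);
        }
    `;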

Thanks for reading! As this was a quick post I may come back to it at some point soon to add more images and flesh it out a bit more, but for now I hope this was an interesting read :)