Volumetrics Introduction – Some Light Theory

What do Volumetric Lighting and Volumetric Fog do? – Graphics Settings Explained (Game-Debate, Red Dead Redemption 2)

Volumetric lighting is a means by which games, interactive 3D applications, animations and the like attempt to provide a sense of depth and atmosphere to a scene. Much of the time, this depth and atmosphere is communicated through clouds, dust, atmospheric debris and fog, just to name a few. Unsurprisingly, the techniques that render these volumes of particles are among the most hardware intensive. And if you think about it intuitively, that makes a ton of sense! If we want to deliver the most realistic-looking clouds in a scene, we have to do our absolute best not to cut corners and to model our clouds as closely as possible to the ones we find in the real world. So we’ll start our journey into Volumetric Lighting here in the real world!

Let’s take a look at clouds, because they’re the most clear (or not so clear! … sorry) examples of a “volume of particles” that gives rise to a particular interaction between particles and waves of light. While it may be obvious, it’s worth noting that a cloud is the sum total of (relatively) larger particulates in a given region of the atmosphere. Depending on the conditions at various points within the atmosphere, we may see big, fluffy cumulus clouds or lighter, wispier cirrus clouds, or a seemingly infinite variety between those ends of the spectrum. So with that in mind, we can begin to construct a more intuitive understanding of how the light that travels through a cloud enters our eyes!

Gases and Pressure
Gas Particles – saylordotorg.github.io

The first concept to cover is how light is understood. And don’t worry, you don’t need any real mathematical or physics background to follow this, because I don’t have either of those! We can think of light as particles, called photons, that are emitted as radiation from the Sun. These particles come in many “sizes”, or wavelengths, and you may remember the term “visible spectrum” from high school science. This spectrum refers to wavelengths of light that fall within a certain range – specifically, wavelengths that we can perceive visually. On one end of the spectrum we have red light, which has longer wavelengths, and on the other, violet, which, as you can imagine, has a much shorter wavelength. Now, the reason wavelength matters for our clouds is that different wavelengths interact differently with the particles that just sort of hang around in the atmosphere. For example, what different interactions can we expect between a photon with a longer wavelength, say red, and an ozone particle vs. that same ozone particle and a photon with a shorter wavelength, say blue? Well, what tends to happen is that photons with shorter wavelengths “scatter”, or collide, with more particles. The reason for this is surprisingly intuitive. Imagine for a moment we have a straight line representing the trajectory of a given photon.

Here we have our red line, a straight line. Next to our red line are two points. Let’s think of these two points as ozone molecules, and let’s think of our red line as a photon with an infinitely long wavelength – in this case, its path is perfectly linear. We can view the y, or “up”, axis as time, and the x axis as the position of the particle. So at any given time, we can track the position of our particle. Given this fictitious scenario, the photon never intersects with an ozone molecule. But what if the wavelength were to change?

I’ve kept our original, linear line but changed its color to black, and I’ve updated our new line to have the color red. We’ll think of this red line as a photon with a “red” wavelength. What’s interesting is that a shorter wavelength creates more opportunities for the photon to intersect with the molecules. Now, in reality, when a photon collides with an ozone particle it doesn’t keep traveling along the same trajectory, but we’ll set that logic aside! Let’s take this one step further and add another photon.

Now we have a third line, representing a photon with a violet wavelength – i.e. much shorter! What we can observe here is that the photon traveling along the violet line intersects with the green ozone particle and comes quite close to the blue one! With that thought experiment behind us, I hope you can see why certain wavelengths of light interact differently with the molecules that take up residence in our atmosphere!

So what determines the color of, well, anything? you might ask. The full answer goes beyond the scope of my knowledge, but to keep it short and hopefully sweet, imagine that our little violet photon bounces around the atmosphere for a few nanoseconds, its trajectory getting jumbled and thrown all over the place, eventually making its way down to planet Earth. And let’s say for a moment that that same little photon collided with something, say a leaf. Let’s also say that YOU were looking at that leaf at roughly the same time. Why isn’t the color of the leaf violet? Well, the color of the leaf is the sum of the parts of the visible spectrum that it DOESN’T absorb. The leaf soaks up every other wavelength of the visible spectrum except for green! So at the exact moment you were looking at the leaf, a whole host of green photons were scattered into your eye, giving you the means by which to interpret the leaf!

This scattering and viewing phenomenon is often referred to as transmittance. Specifically, transmittance is the quantity of light that passes through a solution. What is the solution, you may ask? The leaf? No, it’s the air between you and the leaf! And we’ve already discussed the absorption part of light, which, no surprise, is referred to as absorbance – sometimes also called optical density (OD) – and is the quantity of light absorbed by a solution. Fortunately for us, there have been some pretty smart folks in charge of understanding this phenomenon far better than I! Two gentlemen gave us a law that is now formally known as the Beer-Lambert law, or just Beer’s law. There are many uses for Beer’s law, but the one we’re particularly interested in has to do with stellar radiation as it travels through planetary atmospheres. The law essentially states that the denser a medium, the less light we can expect to travel through it. This is also pretty intuitive – if I shine a flashlight down a relatively clear hallway, I’ll see that the light travels through successfully. However, if I shine a flashlight into a hallway filled with smoke, there’s a chance I won’t be doing anything but lighting up the smoke!
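Just to make that relationship concrete, here’s a tiny sketch (in C#, to match the code later in this blog) of the Beer-Lambert fall-off. The coefficient values are made up purely for illustration:

using System;

public static class BeerLambertDemo
{
    // Beer-Lambert: transmittance T = exp(-sigma * distance), where sigma is
    // the extinction coefficient of the medium (roughly, how "dense" it is)
    // and distance is how far the light has to travel through that medium.
    public static double Transmittance(double sigma, double distance)
    {
        return Math.Exp(-sigma * distance);
    }

    public static void Main()
    {
        // A mostly clear hallway: nearly all of the flashlight's light survives.
        Console.WriteLine(Transmittance(0.01, 10.0)); // ~0.90

        // The same hallway full of smoke: almost nothing makes it through.
        Console.WriteLine(Transmittance(0.5, 10.0));  // ~0.007
    }
}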

Now, the reason that the smoke in the hallway appears darker than the fluffy clouds in the sky has to do with how the molecules of the smoke interact with the light. In the case of the smoke, the molecules “capture”, or absorb, the light, so it is rarely transmitted back to your eyes. And when it is transmitted back, we certainly won’t be seeing the full visible spectrum – i.e. the color white. On the contrary, the clouds that appear white and fluffy, like the ones below, end up scattering more light, which eventually ends up in your eye. But it’s not just that the clouds scatter more light – they scatter all wavelengths of light! This means that the wavelengths the molecules in the cloud scatter span the entire visible spectrum, giving them a white appearance!

Clouds – Google Images, Sky-Scapes

However, the thicker and denser a cloud is, the less light ever gets the chance to exit, giving darker clouds their more ominous appearance! This is because the thicker a cloud gets, the more water droplets and ice crystals gather. And the more these molecules gather within the cloud, the more opportunities there are for larger molecules to scatter the light between each other and for the molecules to absorb this light.

Weather Wise Kids: Why do clouds turn gray before it rains?
https://www.wsav.com/news/weather-wise-kids-why-do-clouds-turn-gray-before-it-rains/

While I haven’t done an in-depth explanation of how rendering engineers model this phenomenon in 3D applications, I hope this brief introduction has given you a taste of what goes into simulating light’s interactions with participating media like clouds and fog. I’ll leave the more technical, rendering explanation for another post! Regardless of whether you’re a rendering nerd like me, I hope the next time you walk outside you spend an extra second appreciating the clouds.

Reflecting

Seeing as how we’re nearing the end of the spring semester, thus bringing an end to yet another stage of my academic development, I figured it would be appropriate to do a bit of reflecting amidst all of the deadlines, projects and assignments that have started to exhibit some notable volume – in that, I’m beginning to drown.

But everything is fine.

Now that we’re all fine, I just want to take a moment to stroll down memory lane and appreciate the crooked line we stumble along. And I say “we” because, as a collective, I would imagine that if one were to compress every experience every human has had into a little ball, it would probably look quite different on the surface, but the deeper you dug, the more uniformity you’d likely find. Before I get too heady, what I’m really getting at is that I have yet to meet someone who has walked a seemingly linear path with no hiccups, curveballs or twists. I’ll stop speaking for the entirety of the human race for a moment and just say that – at the very least – my experience has not been without a tad bit of uncertainty. And to zoom out to just before I started my stay at OSU working towards a Bachelor’s degree in Computer Science: I was pretty unsure of what to do with my life (I’m only a bit closer to knowing that now, just FYI).

The year was 2017. I was likely getting off of work as a personal trainer working with a high-health-risk population in a small studio in Greensboro, North Carolina, and I’m sure all I wanted to do was check out and play a video game. That video game was about a little bug fighting his way through a dark, twisted and mysterious fantasy world filled with sick bugs. The game is Hollow Knight, developed by Team Cherry – a small team of three located in Australia. The game was a hit – an indie gem, if you will. And just as a qualifier, I’ve been playing video games ever since I was a kid, hooked on the stuff since 1994 when I got my first Nintendo 64. And don’t get me wrong, I’ve been inspired by a number of video games in a myriad of ways for as long as I can remember. Super Mario 64: get out there and explore – jump on mushrooms. The Legend of Zelda: Ocarina of Time: pay attention to stories and the people who tell them and do your best to stop the baddie. Halo: Combat Evolved: be the best there is and save the damn world from aliens. Minecraft: relax, explore, build something and stay away from green things. I could go on and on, but something about Hollow Knight did something to me. And it had much less to do with the game itself: the story, the art, the music, the mechanics, the world design – all of which are fantastic, by the way. It inspired that creative part of me that wanted to build something unique. This began my journey into game development.

And let me tell you, just in case you don’t know, making games is hard. Really, really hard. And I don’t even mean the programming or the art or the world design or the story or the audio or any of the stuff we commonly associate with video games – all of that stuff is really hard too. I mean the part that makes a game “a game”. And I’m sure we could call up a game designer and ask them what this “thing” is, but in my opinion, the essence of a game comes down to it being “fun”. And that’s what makes making games so damn hard. What’s fun to me might be horrible for you. And what’s fun for you might be boring to me. So how can we possibly invest time, and often money, into developing a product that might not be considered fun? Well, as a hobbyist, the answer is probably a lot more wholesome than the answer you might get from a AAA game company’s marketing team. We do it for the same reason we tell each other stories, build things out of stuff, like wood, or whatever, paint stuff and turn sand into stuff at the beach. Because it’s another medium to uniquely express creativity. That being said, it’s really hard to turn that awesome, kickass, “procedurally generated universe-scale simulation with dynamic environments that look beautiful and house intelligent life-forms that interface with the player in interesting ways” idea, which ultimately comes from that creative place, into reality.

But that’s how I got started. I was inspired by a piece of art from a small team of folks across the world to try to create my own little piece of art. Now, I have yet to create my own version of Hollow Knight, but I’ve been gifted with some strange, and sometimes masochistic, passion to create things that are meaningful to me and to the people I care about. I don’t fully attribute this sense of purpose, or whatever you might call it, to Hollow Knight, but I think what Hollow Knight represents is the stuff that gives all of us a bit of direction. Even as a kid, playing Crash Bandicoot and Spyro the Dragon, I was taught that your ideas can be materialized, played with and met with a great deal of joy. Some people get that from a book, others from an oil painting, many from music and oftentimes from a good movie. Regardless, there’s something pretty wonderful about coming back to my roots and exploring the stuff that brought me to this place; I imagine what brought others along this journey isn’t too different. Today I love making 3D worlds, simulations and virtual environments that may bring people joy in some capacity or another, and I hope to continue learning so that my potential to deliver something good increases. Despite all of the homework, projects, assignments and “life stuff” that just keeps coming, I’m glad to be reminded that there’s always a source of inspiration lying around somewhere.

Devlog #0: Simulation Challenge

Seeing as how I haven’t done too much in the way of using this platform to write about progress on the Capstone project, I figured now was as good a time as any. My motivation for documenting the development of “Virtual Bartender” isn’t because I’m particularly motivated at this moment; nor is it because I’m really leaning into accountability or transparency with the development of this project. I think it’s because I’d like to flesh out some thoughts that generally tend to come up around this point in a somewhat involved project. Specifically, I’ve noticed that I often face an interesting double-edged sword: a truly inspiring, motivating, imaginative state of mind that marks the entry into a project – characterized by diagrams, drawings, mind-maps, inspirational art, prototypes, you name it – which is swiftly met with the colder reality that a project almost never meets my unrealistic, imaginative expectations. This often leads me to add another body to the “dead-projects” folder left to rot somewhere on my C drive.

Alright, enough of the downer negativity! I really don’t want to be a drag, so bear with me here. Because here’s the thing that I’ve realized: all of that day-dreaming about the projects I come to love (and often hate) is ultimately the lifeblood of being a developer, a creator. I guess what I’m getting at is that the two are inextricably related. I won’t have motivation for a project if I don’t get inspired – or if I need to eat, which is an entirely different topic. However, my inspiration usually comes from an … “out-there” place – movies, music, pictures, dreams, day-dreams, etc. So if my inspiration is born from an imaginary place, which often leads to unrealistic expectations and an inevitable letdown, what’s the solution? Well, I certainly don’t have a one-size-fits-all answer, but I’ve slowly been compiling my experiences, which have guided me to, at the very least, a bit of awareness and a few tools that tend to help.

The first tool that comes to mind may be a bit pessimistic at first glance: knowing when your ideas are dumb. Something about the beginning of an exciting new project brings an unusual amount of motivation, energy and grandiose ideas! For example, when I started on this Simulation Challenge project, our group opted to do a “bartender” simulation. My initial thought wasn’t, “Let’s give the user the ability to make some simple drinks and let them run around and serve folks”, it was, “I’m going to make a physically realistic simulation of fluid to complement a full-scale realistic bartender simulation with PBR materials and photorealistic graphics!” Of course, this isn’t realistic, but it sure is fun to think about! So the first order of business for me is to get those ideas out of my system quickly. The best way I can do that is to either sate them, by prototyping those nutty features in a session or two – this is usually just to prove to myself that they’re unrealistic for now – or to really map out everything that would go into implementing them. Again, these are just sobering techniques for the “honeymoon” phase of a project.

The second tool I like to carry around piggy-backs off of the last: breadcrumbs. Somewhere along the way, I go from “Let’s let the user pour fluid with a simple line primitive” to, “Fluid simulation using particles and Laplacian operators”. Now, it isn’t always so easy to remember where I crossed that line separating reality from AAA game feature, so I’ve found it helpful to keep track of my thoughts during this period of early design. The other benefit to keeping track of the thought train that forks off to “Stuck on a feature for weeks”-town is that some of those unrealistic features are really only unrealistic for now. There have been a number of times that I’ve ended up revisiting an idea well into development when it’s been appropriate to either pivot from a current feature or add something fresh. It’s also good to see months into a project that I was at one point really inspired!

The last simple tool, which I’m absolutely awful at being consistent with, is another really annoying cliché: time management. And I don’t mean the pace-yourself, it’s-a-journey type of management, I mean the you-can’t-allocate-resources-to-save-your-life kind of time management. I overestimate certain aspects of a project and underestimate others; I complain about simple, mundane tasks and turn a blind eye to the elephant in the room on a daily basis (refactor that class, seriously, go do it right now!). While I do think there is something to the starving artist trope, I believe there might be something a bit more to the “organized and boring” artist one we don’t often hear about. The times I’ve been most successful in my short development career have been the times I’ve been willing to shift time and attention away from the shiny features of my application that I think are cool (they really are, I swear!) toward the features and tools that have been gasping for air for weeks. There’s also quite a bit to be said about having a semblance of work-life balance. I have no idea what that is, I just heard that it was valuable.

Ultimately, what I’ve realized is that there’s a beautiful cycle that tends to take place during development that takes longer than a month. Infatuation followed by disinterest, followed by complacency mixed with glimpses of hope, motivation and thoughts of leaving for Mexico, eventually blossoming into the recognition of true progress and the value of hard work, only to be met with more problems to solve, kicking off the cycle anew. But yeah, I’ve been told about that whole work-life-balance thing and how that must fit into that equation somewhere. If you manage to figure out what that looks like, let me know.

Post Processing

In terms of computer graphics, and more specifically post-processing, bloom is a technique used to reproduce a common visual artifact produced by real-world cameras. The effect looks a bit like this:

Light Blooming observed from a picture taken from a mounted camera

What we see here is an overwhelmingly bright area in the photo. This is… let’s just say, generally, undesirable – for obvious reasons. However, there are cases where a bit of this flaring effect adds some intensity to points of interest in a scene, or perhaps creates a bit of dramatic contrast. The same is true for real-time (and offline) renders. Too much of this effect oversaturates the image with brightness, defeating the purpose; however, just enough adds some much-needed realism to the way we capture the effects of lights in the real world. And just to be clear, the “Bloom” effect I’m talking about here is a post-processing effect which enhances the intensity of the values defining a particular pixel’s color on the screen based on that pixel’s relative brightness. That was a bit of a mouthful, so let’s break it down.

First of all, what is a post-processing effect? Well, this is from Wikipedia: “The term post-processing (or postproc for short) is used in the video/film business for quality-improvement image processing (specifically digital image processing) methods used in video playback devices, such as stand-alone DVD-Video players; video playing software; and transcoding software. It is also commonly used in real-time 3D rendering (such as in video games) to add additional effects.” TLDR: post-processing adds additional effects to the final image of a particular shot, or frame. There are many, many examples of post-processing effects. Just to name a few: Blur, DOF, Bokeh, Cel Shading, Film Grain, Dithering, Grayscale, Outlines, Scanlines, Vignette, Bloom, Upscaling, Downscaling, Night Vision, Infrared, HDR (High-Dynamic Range) Rendering… I could go on. But this isn’t shocking if you really think about it; how many ways can you potentially alter a picture? Well, your creativity (and your tech) is your only real limitation! All of those filters on your phone? Post-processing.

Here are a few pictures demonstrating a few commonly seen image effects. No animals were harmed in the making of this blog post – Blue is simply acting as a nice leg rest for my dad.

Black and White
Normal
Vivid/Warm

So now that we have an idea of what post-processing effects are and a few common examples, how does one actually apply these effects to images? Well, in a previous blog post, Compute Shaders, I briefly walk through a simple example of using compute shaders to invert the colors of an image. Inversion is a really simple post-processing effect. All we have to do is look at every pixel in the image and subtract each of its color channels from 1.0. So values that were previously low, perhaps showing a black or grey color, will now look white!
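Just to make the operation concrete, here’s a rough CPU-side sketch of the same idea in Unity C#. This isn’t the exact code behind the benchmark numbers in that post – the names are mine and it’s purely for illustration (the source texture also needs Read/Write enabled in its import settings for GetPixels to work):

using UnityEngine;

public static class CpuInvertExample
{
    // Returns a new texture where every color channel of every pixel has been
    // subtracted from 1.0. Dark values become bright and vice versa; alpha is untouched.
    public static Texture2D Invert(Texture2D source)
    {
        var pixels = source.GetPixels();

        for (int i = 0; i < pixels.Length; i++)
        {
            var p = pixels[i];
            pixels[i] = new Color(1f - p.r, 1f - p.g, 1f - p.b, p.a);
        }

        var result = new Texture2D(source.width, source.height);
        result.SetPixels(pixels);
        result.Apply();
        return result;
    }
}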

Back to my loveable foot rest:

Pixel Inversion

This picture could be produced in a number of ways – I just happen to be using a compute shader to demo the inversion of an image, but a more common technique is the use of Vertex/Fragment shaders that work directly in the rendering pipeline. If you want more info on the pipeline, that same post gives a gentle introduction to the rendering pipeline. To put it simply, whether we’re viewing a picture or a video (a bunch of pictures!), the GPU has sent a 2D grid of pixels to our display. Those 2D pixels are accessible for per-pixel processing within Fragment Shaders. So this is great news for us! Especially if we want to apply these neat effects!

However, if you remember, the rendering pipeline takes a bunch of data about a piece of geometry and works to convert those bits of geometry into triangles that are viewable in 2D screen space so that we can effectively “color” each pixel. One of the most common usages of the rendering pipeline is the submission of a mesh. Think about any video game you’ve ever played and think about a character in that game. Whether they exist in 2D or 3D, that character is made up of some very specific geometry (frequently bundled in an object called a Mesh), and the only way that geometry shows up on your screen with the correct shape, orientation and shading you’re familiar with is because that geometric data ran through the pipeline. So we can effectively render a mesh by sending it through the pipeline, but how do we apply an effect, say Depth of Field, to every part of our scene that made it into our view? Well, this is where the real magic of post-processing effects comes in.

To keep this short, as it could be its own blog post, the final image we view on our screen is shipped by our GPU in what’s known as a Frame Buffer. This Frame Buffer, often called a Color Buffer, is essentially a 2D grid whose dimensions match the resolution of your display – each cell defining the color of one particular pixel on your display. So, first we want to capture the unfiltered, raw contents of the Frame Buffer that make up the picture we see on screen and store that somewhere. A two-dimensional texture is a pretty intuitive choice due to its grid-like nature; not to mention that our GPUs have been optimized to look up values in a texture, so everyone wins here. Next, we take that final image that makes up our scene and apply our post-processing effects to it. Finally, we can take that filtered texture and map it to a more appropriate mesh, or collection of geometry, for a 2D display. A simple quadrilateral works pretty well here (if you’re viewing this on a 2D screen, you’ll notice that its shape is a quadrilateral). And by “map it”, I mean texture mapping, which we won’t get into here. Once the filtered texture has been mapped to our quad, we can simply render that quad every single frame!
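In Unity’s built-in render pipeline, that whole capture-filter-draw-a-quad dance is conveniently wrapped up for you. Here’s a minimal sketch – _effectMaterial is assumed to be a material whose fragment shader applies whatever per-pixel effect you like; it’s a placeholder of my own, not something from the examples above:

using UnityEngine;

// Attach this to a Camera. Unity hands us the finished frame as a texture,
// we run it through a material whose shader applies the effect, and the
// result is written into the destination buffer that reaches the screen.
[RequireComponent(typeof(Camera))]
public class SimplePostEffect : MonoBehaviour
{
    [SerializeField] private Material _effectMaterial;

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Blit draws a full-screen quad textured with 'source' using our
        // material, writing the filtered result into 'destination'.
        Graphics.Blit(source, destination, _effectMaterial);
    }
}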

After doing quite a bit of work to make my C++ engine render some neat post-processing effects, I learned about the varying complexities involved for a number of techniques. Bloom has got to be one of my favorites! I plan on putting together another post explaining a technique to achieve Bloom in real-time applications – but in the meantime, here are a few very simple photos demoing the effect that I’ve produced in the editor of my work-in-progress game engine!

No Bloom Added (personal C++/OpenGL engine – Ohm Engine)
50% Bloom Added (personal C++/OpenGL engine – Ohm Engine)
100% Bloom Added (personal C++/OpenGL engine – Ohm Engine)
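I’ll save the full breakdown for that future post, but just to hint at the idea: the first step of most real-time bloom implementations is a “bright pass” that keeps only the pixels above some brightness threshold – those are what get blurred and added back on top of the image. Here’s a tiny sketch of that step, written per-Color in C# for readability even though real implementations do this in a shader; the threshold value and names are mine:

using UnityEngine;

public static class BloomBrightPass
{
    // Keeps a color only if its perceived brightness exceeds the threshold;
    // everything else is dropped to black. The surviving bright pixels are what
    // a bloom pass would blur and add back on top of the original frame.
    public static Color Extract(Color source, float threshold = 1.0f)
    {
        // Perceptual luminance weights: green contributes the most, blue the least.
        float luminance = 0.2126f * source.r + 0.7152f * source.g + 0.0722f * source.b;

        return luminance > threshold ? source : Color.black;
    }
}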

Compute Shaders

I’ve recently found myself (once again) obsessing over the GPU. Specifically, over a piece of functionality that modern GPUs have: programmable shaders. The goal of the GPU is to take in data from the CPU – namely data regarding, say, a mesh – then run that data through the various stages of what’s known as the Graphics Pipeline to ultimately output colors onto your display. Prior to the programmable shader, this pipeline was fixed, meaning developers were quite limited in their ability to fine-tune the images displayed to their users. While today we’ve been graced with programmable shaders, there are still pieces of the pipeline that remain fixed. This is actually a good thing. Functionality like primitive assembly, rasterization and per-sample operations are better left to the hardware. If you’ve ever done any graphics programming, you’ve probably seen a diagram that looks like this:

The boxes in green are fixed-function, whereas the purple boxes are programmable. There are actually far more stages than this, but for the most part, these are the stages we’re interested in. And just to be clear, pipeline is a really good name for this, as each stage of the process is dependent on the stages prior. The Input Assembly stage is dependent on data being sent to the GPU by the CPU. The Vertex Shader stage is dependent on the input assembler’s triangle/primitive generation so that it can perform transformations on the vertices that make up each primitive, or even perform per-vertex lighting operations. The Rasterization stage needs to figure out which pixels on the screen map to the specified primitives/shapes so that data defined at points on the primitives can be interpolated across them, determine which of these points are actually within the Camera’s view, calculate depth for perspective cameras and ultimately map 3D coordinates to 2D points on the screen. This leads to the Fragment Shader stage, which deals with the fragments generated by the Rasterization stage – specifically, how each of these pixels should be colored. The final Color Blending stage is responsible for performing any “last-minute” processing on all of the visible pixels using the per-pixel shading data from the previous stage, any information that’s specific to the pipeline state, the current contents of the render target and the contents of the depth and stencil buffers. This final stage also performs depth-testing, which just means that any pixels determined to be “behind” another pixel – in other words, pixels that will never be seen – are discarded when blending occurs. Long story long, the pipeline performs quite a few tasks, and they happen unbelievably fast thanks to the power of many, many shader cores and hardware designed to handle these very specific inputs and generate these very specific outputs.

Now, onto our good friend the Compute Shader, who doesn’t really belong in the pipeline. As we’ve already seen, the pipeline’s job is to take data and turn it into colors on your screen. This is all made possible by the fact that the processors responsible for these operations run in parallel, executing the same (hopefully) operations for every single pixel on the screen. This needs to be fast, especially given the increase in screen resolutions over the past decade. A 4K monitor, at 3840 x 2160 pixels, has 8,294,400 pixels! Adding to this insanity, some of the newer 4K monitors boast a 144Hz refresh rate with a 1ms response time! That’s a lot of work for our GPU to perform! Fortunately, it’s been crafted specifically for this duty. But what if we wanted to use this super-duper parallel computing for purposes outside of the traditional render pipeline? Because at the end of the day, the GPU and its tons of processors are generally just doing some simple math.

So, if we want to leverage the GPU’s parallel processing talents outside of sending a buffer of colors to your monitor, we can use Compute Shaders. Directly from Microsoft: “A compute shader provides high-speed general purpose computing and takes advantage of the large numbers of parallel processors on the graphics processing unit (GPU). The compute shader provides memory sharing and thread synchronization features to allow more effective parallel programming methods.” What this means is that we can parallelize operations that would otherwise be impractical to run on the CPU (without some of its own multi-processing capabilities).

So what might benefit from the use of a Compute Shader? Well, some of the more common uses are related to image processing. If you think about the architecture of the GPU as a big grid containing tons of processors, you can think about mapping these processors via their location in the grid. For example, let’s say I have a 512 x 512 pixel image and I want to invert every pixel of the original texture and store those inverted values in a new texture. Fortunately, this is a pretty trivial task for a compute shader.

In Unity, the setup code on the CPU looks something like this:

using UnityEngine;

public class ComputeInvertTest : MonoBehaviour
{
    // We want 16 threads per group.
    private const int ThreadsPerGroup = 16;
    
    // The compute shader.
    [SerializeField] private ComputeShader _testInverter;
    
    // The texture we want to invert.
    [SerializeField] private Texture2D _sourceTexture;
    
    // The mesh renderer that we want to apply the inverted texture to.
    [SerializeField] private MeshRenderer _targetMeshRenderer;
    
    // The texture we're going to store the inverted values for.
    private RenderTexture _writeTexture;
    
    private void Start()
    {
        // Create the destination texture using the source texture's dimensions.
        // A depth buffer isn't needed here, so we request 0 depth bits.
        // Ensure enableRandomWrite is set so the compute shader can write to this texture.
        _writeTexture = new RenderTexture(_sourceTexture.width, _sourceTexture.height, 0) {enableRandomWrite = true};
        _writeTexture.Create();

        // Get the resolution of the main texture - in our case, 512 x 512.
        var resolution = new Vector2Int(_sourceTexture.width, _sourceTexture.height);
        
        // We need to tell the compute shader how many thread groups we want.
        // A good rule of thumb is to figure out how many threads we want per group, then divide
        // the target dimensions by this number.
        // In our case, for our 512 x 512 texture, we want 16 threads per group.
        // This gives us 512 / 16, 512 / 16, or 32 thread groups on both the x and y dimensions.
        var numThreadGroups = resolution / ThreadsPerGroup;

        // Let's find the kernel, or the function, responsible for doing work in the compute shader.
        var inverterKernel = _testInverter.FindKernel("Inverter");
        
        // Set the texture properties for the source texture and destination textures.
        _testInverter.SetTexture(inverterKernel, Shader.PropertyToID("_WriteTexture"), _writeTexture, 0);
        _testInverter.SetTexture(inverterKernel, Shader.PropertyToID("_ReadTexture"), _sourceTexture, 0);
        
        // The Dispatch function executes the compute shader using the specified number of thread groups.
        _testInverter.Dispatch(inverterKernel, numThreadGroups.x, numThreadGroups.y, 1);

        // Finally, after the texture has been updated, apply it to the MeshRenderer's material.
        _targetMeshRenderer.material.mainTexture = _writeTexture;
    }
}

On the GPU, the code is much simpler.

// The name of the kernel the CPU will look for.
#pragma kernel Inverter

// The number of threads we want per work group - this needs to match what we decided on the CPU side.
static const int ThreadsPerGroup = 16;

// The texture we're reading from - no writing allowed here.
Texture2D<float4> _ReadTexture;
// The texture we're writing to, declared as RWTexture2D, or Read/Write Texture2D.
// The <float4> just says that each element in this texture is a 4 component vector, each
// component of type float.
RWTexture2D<float4> _WriteTexture;

// Again, specify the number of threads we want to set per thread group.
[numthreads(ThreadsPerGroup, ThreadsPerGroup, 1)]
void Inverter (uint3 id : SV_DispatchThreadID)
{
    // Write the inverted value to the destination texture.
    _WriteTexture[id.xy] = 1 - _ReadTexture[id.xy]; 
}

The important bit to realize here is that the attribute above the kernel, [numthreads(ThreadsPerGroup, ThreadsPerGroup, 1)], needs to match the number of threads we set on the CPU side. This value needs to be set at compile time, meaning it can’t change while the program is running. You may also notice this peculiar parameter: uint3 id : SV_DispatchThreadID. This is where the magic happens – mapping our threads to our textures. Let’s break down the simple math.

We have a 512 x 512 texture. We want 16 threads per thread group (or work group) on both the x and y axes (because our texture is 2D – if our texture were 3D, it would likely be easier to specify 16 on all three axes) and 1 on the z axis (we have to have at least 1 thread per group). SV_DispatchThreadID maps to the following:

SV_GroupID * ThreadsPerGroup + SV_GroupThreadID = SV_DispatchThreadID

This looks like nonsense, I know. The best way to visualize this mapping is, again, like a grid – Microsoft’s documentation includes a diagram describing this exact calculation.

To relate this to our example, let’s remember that our Dispatch call invoked 32 x 32 x 1 thread groups, in an undefined order. So we can think of a 32 x 32 x 1 grid. Each cell of this grid corresponds to another grid – this time mapping to our 16 x 16 x 1 threads per group. This means that if a particular thread has been assigned, say, an SV_GroupID of (31, 31, 0), the last group (as these groups are zero-indexed), and it happens to have an SV_GroupThreadID of (15, 15, 0), the last thread of that last group, we can calculate its 3D id, SV_DispatchThreadID. Doing the math:

SV_GroupID * ThreadsPerGroup + SV_GroupThreadID = SV_DispatchThreadID

(31, 31, 0) * (16, 16, 1) + (15, 15, 0) = (511, 511, 0).

Not coincidentally, this is the address, or index, of the last pixel of the texture we’re reading from and writing to! So this mapping works out perfectly. There are a ton of tricks that can be played with threads per group and work group counts, but for this case, it’s pretty straightforward. This is the result:

While this is a pretty contrived example, benchmarking this yielded a grand total of 1ms to invert this 512 x 512 image on the GPU. Just for some perspective, this exact same operation on the CPU took 117ms.

Going even further, using a 4k image, 4096 x 4096 pixels, this is the result:

On the CPU, the inversion took 2,026ms, or just over 2 seconds. On the GPU, the inversion took, once again, 1ms. This is a staggering increase in performance! And just to provide a bit of machine specific information, I have an NVIDIA GeForce GTX 1080 GPU and an Intel Core i7-8700k CPU @ 3.70GHz.

I hope this was an enjoyable read and I hope that you’ve learned something about the wonders of modern technology! And maybe, if you’re a rendering engineer, you’ll consider putting your GPU to work a bit more if you don’t already!

Fluid Simulation

Every now and then I run into a feature in a project that I can’t help but become totally obsessed with – it’s a bit like a trip to Best Buy or Walmart when the original intent is to buy a phone charger. Half an hour later, your eyes are about as close to the silicon on the new GPU in the electronics section as possible – all the while, your wallet lets out a puff of smoke reminding you that you’d rather eat this week. While the diehard project manager and realist in me says, “No, no, no, that’s not on the TODO list, that’s good ole fashioned scope creep!”, the tech-obsessed junkie in me lets out a rebellious cry, “I don’t care, it looks cool!” And in most cases, this is, generally speaking, an unproductive tangent in terms of achieving the original goal. However, in some cases, the time I spend ogling some new tech, a particular implementation of an algorithm or a new design philosophy actually works to benefit the greater project. And maybe I’m being a bit too black and white, because the reality is, there has yet to be a project I’m even slightly interested in where I haven’t found something that sparks some passion and takes me on an unexpected journey. So, maybe, the two are inextricably connected – it’s difficult for me to truly engage with a project without – in some capacity – getting sucked into a feature that may or may not see the light of day.

This brings us to today’s random, unbelievably time-consuming tangent: fluid simulation. We regret to interrupt your regularly scheduled, Trello-board-approved application development to introduce the world of Navier-Stokes and computational fluid dynamics. Yes, I love simulations. Especially ones where I can really go, “Hm… that actually looks like ‘insert the real thing here’.” I’ve dabbled with weather simulations and particle simulations, so I figure fluid is the next logical step. To put things in a bit of context before I begin my long-winded spiral into drowning the reader with information about fluid simulations (drowning… see what I – I’ll see myself out), this fork in the road originated from my team’s choice of simulation for the Simulation Challenge project for the CS467 Capstone Project.

Out of all of the possible simulations out there, we picked Virtual Bartender. Why? Well, I’m not entirely sure, but it sounds pretty fun! I’m used to simulating really, really small stuff or really, really big stuff, which isn’t very exciting unless you spent 90% of your childhood building Hot Wheels tracks. So the Virtual Bartender gig sounded neat, a nice change of pace, and in terms of scope, it seemed the most reasonable given the allotted time and our team’s experience with the necessary tools. The tool we’ll be using the most is none other than the Unity game engine. In terms of the simulation itself, the goal is simple – make drinks using tactile, mechanical movements in a VR environment. While this is unbelievably vague, it’s as far as we’ve made it so far. We’ve committed ourselves to restricting our focus to ensuring the main mechanic of this simulation, making the dang drinks, feels wonderful. After that, we can choose whether the theme is “Space Bar” or “Under-Sea Atlantis Bar”… or Canada or something… That being said, I don’t have a lot of info regarding the goal of our project, which isn’t a terrible thing at this point in time. We’ve fleshed out the general requirements pertaining to performance, a platform/device-agnostic API and a general algorithm from which we’ll work to establish the core simulation/gameplay loop. In other words, we have our sand in a truck which we’ll use to build our sand castle – it’s still in the truck, though.

Back to liquids and stuff! You may be two steps ahead of me here, but humor me. We’ve got a Virtual Bar, simulating the experience of a bartender. Now, I’m no bartender and I don’t know the first thing about casual conversation under a dimly lit, stained-glass light fixture from the 80’s while gently swirling a gin and tonic, but I do have a sneaking suspicion there is quite a lot of liquid involved. That being said, how does one simulate liquid? What even is liquid?

Well, to keep things simple, for our purposes, fluid is a volume made up of an enormous collection of infinitesimally small particles that are in constant interaction with one another. And to keep things even simpler, so our poor outdated GPUs don’t vomit, we scale back “infinitesimally small” to “reasonably small” and partition these particles into their own little cells. Each cell is responsible for keeping track of a bunch of things – namely, velocity and density. To be even more specific, the cell needs to keep a “memory” of the velocities and densities that it has already recorded. That way, the cell has some knowledge about how it may interact with its neighbors. However, before we get too deep into the meat and potatoes of the core algorithm, it’s worth noting that there are a few assumptions we need to make about our fluid. The first is that it’s incompressible. This simply means that the amount of fluid that enters the space stays the same throughout the entire simulation – in other words, conservation of mass. The next is that the circulation of velocity and density within the volume is driven by a few processes: diffusion, projection and advection.
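As a rough sketch of what one of those cells might hold (the names and layout here are mine – just an illustration of the bookkeeping, not code from any particular solver):

using UnityEngine;

// One cell of the simulation grid. The whole fluid is just a 2D (or 3D)
// array of these, "reasonably" small rather than infinitesimally small.
public struct FluidCell
{
    // The velocity at this cell, plus the previously recorded velocity - the
    // "memory" the cell keeps so it can exchange momentum with its neighbors.
    public Vector2 Velocity;
    public Vector2 PreviousVelocity;

    // The density at this cell (e.g. how much dye currently occupies it),
    // plus the previously recorded density.
    public float Density;
    public float PreviousDensity;
}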

Diffusion is the means by which the fluid spreads out, governed by the fluid’s viscosity, the mass of the particles that make up the fluid and the temperature (just to name a few variables).

Projection is the process by which we maintain that first assumption, the incompressible nature of the fluid. After we do some work on the velocities that more or less define the rate of diffusion of the liquid, we need to make sure that we didn’t actually lose any liquid in the process.

Finally, Advection is the means by which the fluid particles themselves move due to the bulk motion of the fluid.

So there seems to be a bit of a niche distinction between Diffusion and Advection, right? Well, the most important difference is that Diffusion refers to a transport mechanism that occurs without any motion of the bulk fluid, while Advection refers to a transport mechanism driven by the motion of the bulk fluid. Still a bit confusing, right? A simple example of Diffusion is the transfer of heat and energy between particles in a drink. A simple example of Advection is the transfer of heat and energy by particles in the atmosphere via the general circulation of the Earth and the regional movements of air around areas of high and low pressure. Diffusion: small. Advection: big.

Back to the general process of simulating fluid: diffuse the velocities of each cell of the fluid along every axis, using the viscosity as the leading factor of change. Perform projection to ensure that the velocities aren’t unreasonable values and that the fluid stays within its bounds. Perform advection on each of the cells along every axis, using the diffused velocity values as a delta. Again, perform projection to make sure we haven’t broken anything. Finally, diffuse and advect any differing densities – meaning, if we placed dye in water, the dye is the differing density. This final stage is essentially what creates disturbances in the fluid, producing noticeable changes of density throughout the volume.
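Laid out as code, a single simulation step follows that exact ordering. This is only a skeleton of the kind of solver I’m describing – it uses the FluidCell struct sketched above, the helper bodies are stubs for the math I haven’t implemented yet, and every name here is my own:

// Skeleton of one simulation step over a grid of FluidCells.
public class FluidSolver
{
    private readonly FluidCell[,] _cells;
    private readonly float _viscosity = 0.0001f;     // how "thick" the fluid is
    private readonly float _diffusionRate = 0.0001f; // how quickly dye spreads on its own

    public FluidSolver(int size)
    {
        _cells = new FluidCell[size, size];
    }

    public void Step(float deltaTime)
    {
        // 1. Diffuse the velocities of each cell along every axis, driven by viscosity.
        DiffuseVelocity(_viscosity, deltaTime);

        // 2. Project so the velocity field stays divergence-free (no fluid created or lost).
        Project();

        // 3. Advect the velocity field along the bulk motion of the fluid itself.
        AdvectVelocity(deltaTime);

        // 4. Project again to make sure advection didn't break incompressibility.
        Project();

        // 5. Finally, diffuse and advect the densities (the dye) using the updated velocities.
        DiffuseDensity(_diffusionRate, deltaTime);
        AdvectDensity(deltaTime);
    }

    // Placeholder routines: each reads the Previous* fields of the cells and
    // writes updated values back into the current fields.
    private void DiffuseVelocity(float viscosity, float dt) { /* TODO */ }
    private void Project() { /* TODO */ }
    private void AdvectVelocity(float dt) { /* TODO */ }
    private void DiffuseDensity(float rate, float dt) { /* TODO */ }
    private void AdvectDensity(float dt) { /* TODO */ }
}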

This was a lot, and I’m not quite to the point of implementing it, as there’s just enough math to keep me busy. However, I hope to have a working prototype soon enough that it doesn’t interfere with the development of our Virtual Bar! Worst case scenario, the fluid simulation never sees the light of day and I’ll have had a bunch of fun learning about it.

“Hello, World”, Autonomous Flight, and The Simulation

01000111 01110010 01100101 01100101 01110100 01101001 01101110 01100111 01110011 00101100 00100000 01100101 01100001 01110010 01110100 01101000 01101100 01101001 01101110 01100111 01110011 00101110.

No, but really – welcome!  Thanks for checking out this blog.  It brings me great joy to have a place to chart the many discoveries I stumble upon during this wonderful journey.  Before I dive into my many quirks and related aspirations, I suppose introductions are in order.

My name is Connor Wendt and I was born in Omaha, Nebraska.  For those of you who are unfamiliar with Nebraska, those of us chosen to start our journey from such a magnanimous state are birthed directly from a stalk of corn.  Less than a year and a few tons of corn later, my parents and I traveled around for the next decade – eventually finding respite in the great land of maple syrup and reasonable hospital bills – Ontario, Canada.  Fast forwarding through hockey, Nintendo 64 and all of the snow, we eventually landed in Greensboro, North Carolina, where I currently live.  I spent the majority of my childhood pretending Pokemon were real, watching Saturday morning cartoons and nurturing a growing obsession with all things digital – which brings us to today.  Pokemon are still real, I love Tom and Jerry and I have a passion for yelling at shaders and low-level APIs while my 90lb. lapdog, Blue, is forced into the role of “adorable therapist”.

Sticking with the topic of low-level APIs and graphics, I have been given the honor of working with the Simulation Rendering team on the Wayfinder project at Acubed, Airbus as an intern starting June 13th, 2022.  Wayfinder is a project with a simple, but not easy, goal in mind – automate flight.  How does one “automate flight”, you might ask?  Well, to oversimplify, there are two means by which this is achievable – both requiring an unbelievably large amount of data pertaining to all of the conditions that an aircraft is subjected to and all of the options available to the pilot given the state of the craft and the weather.  The first way we could go about this is to fly millions of planes into the sky, taking precise measurements as we go, then feed that data to a bunch of hungry, little machine learning bots.  A bit expensive, though?  The other option is to realistically simulate all of the relevant conditions, then “fly” millions of “planes” into the simulated sky, taking precise measurements as we go, then feed THAT data to our hungry, little bots.  Much less expensive, yes?  To avoid a barrage of criticism, I’d like to emphasize that I’ve egregiously oversimplified this process.  There’s quite a bit of work between “siMuLate sKy” and “mAke pLaNe fLy bY itSelF”.  Regardless, I’m unbelievably excited to get to work.

Given my future work, I found the “Simulation Challenge” project in the CS 467 project listings unsurprisingly interesting, while also a bit vague.  I may have been a bit hasty to toss in my vote here, because who knows, it could mean anything!  I’ve spent a bit of free time simulating volumetrics, hydraulic erosion and particle interactions, to name a few, so I’m pretty open – unless it requires hours of connecting the Matrix to Simulation Theory (I don’t think my fragile ego could take it).

Tying all of this simulation mumbo-jumbo together, I would absolutely love to work on developing and maintaining the rendering systems of an interactive application one day.  Game Engines are the first types of applications that come to mind, but any interactive, graphical tool needs to be built upon an engine of some kind.  Like many of us, I spent a lot of my early development time building crappy, little games while having an unbelievable amount of fun doing it.  I love stories and I love art – so I naturally love movies and video games.

To wrap things up, I love all things graphics.  I’m not particularly gifted, but I am passionate and love to learn.  I try to keep my humility by frequently reminding myself that I’m not the best, nor am I the worst, but I’m likely right where I need to be.  I love teaching others because that’s how I really learn; I love being taught because that’s how I tend to grow.  And at the end of the day, I hope to contribute to projects that can make someone’s life just a little bit better.

Thanks for reading.

Connor