Specialization

Volumetric Lighting


Introduction

For the specialization part of my education, I chose to delve into a subject I have always been fascinated by: volumetric lighting. I can spend hours in games just admiring graphical fidelity and effects, so this felt like the perfect opportunity to learn something in that field. My goal was to study some of the ways volumetric lighting can be implemented and to bring one of them into our engine.

Implementations

After some research, I found two primary techniques for producing volumetric lighting: mesh marching and volume bound marching. I want to clarify that these might not be the correct terms, as I have yet to find anyone actually distinguishing the techniques by name. My understanding is that both essentially use ray marching to collect data along the ray cast from a given pixel, but they calculate the march itself differently. Mesh marching derives its steps from the front and back faces of a light volume mesh that is dynamically created from shadow map data. Volume bound marching can also use a mesh in the form of a convex light volume, but it does not iterate over the faces; it is only bounded by them. The samples are instead evenly distributed along the ray and individually tested against the shadow map. There are steps one can take to optimize and tune the final result, but they are the same for both techniques.
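To make the volume bound idea more concrete, here is a simplified CPU-side C++ sketch of the core loop. The function and parameter names, the stand-in shadow test, and the flat per-step scattering term are my own illustrations, not code from our engine; in the real implementation this runs per pixel in a shader, the ray is clipped against the light volume, and the visibility test is a shadow map lookup.

#include <algorithm>
#include <cstdio>
#include <functional>

struct Vec3 { float x, y, z; };

static Vec3 Lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Volume bound ray march: samples are spread evenly between the point where
// the view ray enters the light volume and where it exits (or hits geometry),
// and every sample is tested for visibility from the light.
float MarchLightVolume(const Vec3& rayStart,
                       const Vec3& rayEnd,
                       int sampleCount,
                       float scatteringPerStep,
                       const std::function<bool(const Vec3&)>& isLitByShadowMap)
{
    float accumulated = 0.0f;
    for (int i = 0; i < sampleCount; ++i)
    {
        // Evenly distributed positions along the ray inside the volume.
        const float t = (i + 0.5f) / static_cast<float>(sampleCount);
        const Vec3 samplePos = Lerp(rayStart, rayEnd, t);

        // Only samples that the light actually reaches contribute.
        if (isLitByShadowMap(samplePos))
        {
            accumulated += scatteringPerStep;
        }
    }
    return std::min(accumulated, 1.0f);
}

int main()
{
    // Stand-in shadow test: everything above y = 0 counts as lit.
    const auto isLit = [](const Vec3& p) { return p.y > 0.0f; };

    const float light = MarchLightVolume({ 0.0f, -1.0f, 0.0f },
                                         { 0.0f,  1.0f, 10.0f },
                                         64, 1.0f / 64.0f, isLit);
    std::printf("accumulated light: %f\n", light);
    return 0;
}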

Result

I chose to implement the volume bound ray marching technique because I had limited time and wanted to maximize my chances of delivering a functional final product. From what I have seen, the mesh marching technique can be a bit faster and reach farther, but it also seems a lot harder to implement. NVIDIA has created an API for mesh marching and for generating the light volume meshes, but using it would have defeated the purpose of what I wanted to do.

I render the light accumulation data to a half-size texture, and every pixel samples its ray 64 times. I then upscale the result with two bilateral blur passes, one horizontal and one vertical, before adding it to the frame texture. At 2560x1440 fullscreen resolution the volumetric calculation takes about 1.8 ms on an NVIDIA GeForce RTX 2070, and about 0.65 ms at 1920x1080.
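The bilateral weighting is what keeps the upscale from bleeding light across depth edges. A simplified sketch of a single horizontal pass could look something like the code below; the names, the Gaussian spatial falloff, and the depth-sharpness parameter are illustrative choices of my own and not the actual shader, which works on textures rather than plain arrays.

#include <algorithm>
#include <cmath>
#include <vector>

// One horizontal bilateral pass over a half-resolution light buffer.
// A matching vertical pass runs afterwards. The depth weight keeps samples
// on the other side of a depth discontinuity from bleeding into the result.
// Assumes radius >= 1.
std::vector<float> BilateralBlurHorizontal(const std::vector<float>& light,
                                           const std::vector<float>& depth,
                                           int width, int height,
                                           int radius, float depthSharpness)
{
    std::vector<float> result(light.size(), 0.0f);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            const int center = y * width + x;
            float sum = 0.0f;
            float weightSum = 0.0f;

            for (int offset = -radius; offset <= radius; ++offset)
            {
                const int sx = std::clamp(x + offset, 0, width - 1);
                const int index = y * width + sx;

                // Spatial weight: plain Gaussian falloff from the center.
                const float spatial = std::exp(-(offset * offset) /
                                               (2.0f * radius * radius));
                // Range weight: penalize samples at a different depth.
                const float depthDiff = depth[index] - depth[center];
                const float range = std::exp(-depthSharpness *
                                             depthDiff * depthDiff);

                const float w = spatial * range;
                sum += light[index] * w;
                weightSum += w;
            }
            result[center] = sum / weightSum;
        }
    }
    return result;
}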

Thoughts

I feel like I accomplished what I set out to do, although my bilateral blur is not ideal: at low values of accumulated light, the edges get smeared. I found a technique called “nearest depth up-sampling” that apparently alleviates this problem, but I did not have time to look into it further and implement it (I have included a rough sketch of my understanding of it below). I also want to give a special thanks to Jesper Lundgren for making a variable-exposing system that greatly helped me iterate quickly by letting me tweak values at runtime. The system is demonstrated in the video above.
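As far as I understand it, nearest depth up-sampling picks, for each full-resolution pixel, the half-resolution sample whose depth best matches the full-resolution depth, instead of blindly blending the neighbours. A hypothetical sketch of that selection, with names and layout of my own since I never implemented it, could look like this:

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Nearest depth up-sampling (as I understand it): for a full-resolution
// pixel, look at the neighbouring half-resolution texels and take the light
// value whose stored depth is closest to the full-resolution depth, so edges
// are not smeared by blending across depth discontinuities.
float NearestDepthUpsample(const std::vector<float>& halfResLight,
                           const std::vector<float>& halfResDepth,
                           int halfWidth, int halfHeight,
                           int fullX, int fullY, float fullResDepth)
{
    const int baseX = std::min(fullX / 2, halfWidth - 2);
    const int baseY = std::min(fullY / 2, halfHeight - 2);

    float bestLight = 0.0f;
    float bestDiff = std::numeric_limits<float>::max();

    for (int dy = 0; dy < 2; ++dy)
    {
        for (int dx = 0; dx < 2; ++dx)
        {
            const int index = (baseY + dy) * halfWidth + (baseX + dx);
            const float diff = std::fabs(halfResDepth[index] - fullResDepth);
            if (diff < bestDiff)
            {
                bestDiff = diff;
                bestLight = halfResLight[index];
            }
        }
    }
    return bestLight;
}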