Experiments with 'Scalable Real-Time Global Illumination'

Inspired by the ‘Scalable Real-Time Global Illumination for Large Scenes’ GDC talk given by Anton Yudintsev, I set out to implement a similar real-time global illumination solution.

The technique as described in the talk uses multiple cascades of irradiance volume probes around the camera to provide the indirect diffuse lighting. The probes store lighting information in the Half-Life 2 ‘ambient cube’ basis and are updated every frame by casting rays into a low-resolution voxelized representation of the scene (also in multiple cascades). The voxel representation of the scene is initially created with GPU voxelization and then updated every frame by feeding the lit frame back into the voxel scene (screen-space voxelization). This setup makes the technique fairly cheap in terms of both processing and memory.
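For reference, evaluating this basis at a shading point is very cheap: the six axis-aligned irradiance colors are blended using the squared components of the surface normal as weights. A minimal sketch of that evaluation (the struct and function names here are illustrative, not taken from the talk):

    struct Vec3 { float x, y, z; };

    // Evaluates a Half-Life 2 style ambient cube for a unit surface normal n.
    // cube[0..5] hold the irradiance colors for +X, -X, +Y, -Y, +Z, -Z;
    // the squared normal components act as blend weights and sum to 1.
    Vec3 evalAmbientCube(const Vec3 cube[6], Vec3 n)
    {
        Vec3 w = { n.x * n.x, n.y * n.y, n.z * n.z };
        const Vec3& cx = cube[n.x >= 0.0f ? 0 : 1];
        const Vec3& cy = cube[n.y >= 0.0f ? 2 : 3];
        const Vec3& cz = cube[n.z >= 0.0f ? 4 : 5];
        return { w.x * cx.x + w.y * cy.x + w.z * cz.x,
                 w.x * cx.y + w.y * cy.y + w.z * cz.y,
                 w.x * cx.z + w.y * cy.z + w.z * cz.z };
    }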

However, when I implemented it in my hobby render engine (C++/Vulkan), the technique suffered both from light leaking artifacts common to many irradiance volume approaches and from low lighting quality caused by the coarse ambient cube basis used to store the irradiance. Searching for solutions to both problems, I found the ‘Dynamic Diffuse Global Illumination with Ray-Traced Irradiance Fields’ paper by Majercik et al., which uses a similar approach for GI. It differs from the first technique in three key aspects:

  • It uses hardware raytracing against the actual scene geometry instead of raymarching through a voxelized representation of it.

  • Octahedral mapping is used to store 8x8 texels of irradiance per probe, resulting in a higher resolution of irradiance information than the Half-Life 2 ambient cube basis (see the mapping sketch after this list).

  • And most importantly: each probe stores the first two moments of depth in a 16x16 texel mini shadow map. This data is used, similarly in spirit to Variance Shadow Mapping, to determine shading-point-to-probe visibility and combat light leaking (a sketch of the resulting test follows after this list).
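For context, octahedral mapping projects the unit sphere onto an octahedron and unfolds it into a square, so a full sphere of directions fits into a small 2D tile. A minimal sketch of the encoding direction, reusing the Vec3 struct from above (this is the standard construction, not code from the paper):

    #include <cmath>

    struct Vec2 { float x, y; };

    static float signNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

    // Maps a unit direction onto the octahedral [0,1]^2 texture tile.
    Vec2 octEncode(Vec3 n)
    {
        // Project onto the octahedron by L1-normalizing the direction.
        float l1 = std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z);
        float x = n.x / l1, y = n.y / l1;
        if (n.z < 0.0f)
        {
            // Fold the lower hemisphere over the outer triangles of the square.
            float fx = (1.0f - std::fabs(y)) * signNotZero(x);
            float fy = (1.0f - std::fabs(x)) * signNotZero(y);
            x = fx;
            y = fy;
        }
        return { x * 0.5f + 0.5f, y * 0.5f + 0.5f }; // [-1,1] -> [0,1]
    }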
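The visibility test itself then follows Variance Shadow Mapping: the two stored moments yield a Chebyshev upper bound on the probability that the probe sees the shading point. A sketch of that test (the variance floor of 1e-4 is an illustrative value, not one from the paper):

    #include <algorithm>

    // Chebyshev upper bound on shading-point-to-probe visibility.
    // meanDepth / meanDepthSq are the two depth moments sampled from the
    // probe's 16x16 depth tile in the shading-point direction; d is the
    // actual distance from the shading point to the probe.
    float chebyshevVisibility(float d, float meanDepth, float meanDepthSq)
    {
        if (d <= meanDepth)
            return 1.0f; // on average no occluder in between: fully visible
        float variance = std::max(meanDepthSq - meanDepth * meanDepth, 1e-4f);
        float delta = d - meanDepth;
        return variance / (variance + delta * delta);
    }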

Since I had no GPU capable of hardware raytracing and wanted to keep the good performance characteristics of my initial implementation, I kept the voxel representation of the scene. However, adopting the other two aspects drastically improved the visual quality and reduced light leaking. The image below shows what the irradiance data of the probes looks like when packed into a texture:

Irradiance Probe Atlas Texture

The final problem I had to solve was that the depth information stored in each probe was generated by raymarching the low-resolution voxel scene. The resulting inaccuracies made the light leak reduction not effective enough.

I managed to partially solve this problem by increasing the resolution of the voxel scene. To stay within my memory budget, I implemented a sparse voxel grid, where each grid cell holds an index into a pool of voxel blocks. This way, voxel memory only needs to be allocated for cells that actually intersect geometry. Allocation, voxelization, and freeing of voxel blocks are done entirely on the GPU without CPU intervention.
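To make the indirection concrete, here is a simplified CPU-side sketch of the data layout (names and sizes are illustrative placeholders, not the exact values from my engine). On the GPU, the same arrays live in storage buffers; a plausible allocation scheme is bumping an atomic counter during voxelization, and block freeing is omitted here:

    #include <cstdint>
    #include <vector>

    constexpr uint32_t GRID_DIM   = 128;    // top-level grid cells per axis
    constexpr uint32_t BRICK_SIZE = 8;      // voxels per axis inside a block
    constexpr uint32_t MAX_BRICKS = 16384;  // fixed pool size, tuned per scene
    constexpr uint32_t INVALID_BRICK = 0xFFFFFFFFu;

    struct SparseVoxelGrid
    {
        // One index per grid cell; INVALID_BRICK means empty, no memory used.
        std::vector<uint32_t> cellToBrick =
            std::vector<uint32_t>(GRID_DIM * GRID_DIM * GRID_DIM, INVALID_BRICK);
        // Flat pool of voxel blocks; each block holds BRICK_SIZE^3 voxels.
        std::vector<uint32_t> brickPool = std::vector<uint32_t>(
            MAX_BRICKS * BRICK_SIZE * BRICK_SIZE * BRICK_SIZE, 0);
        uint32_t allocatedBricks = 0; // bumped atomically on the GPU
    };

    // Fetches the voxel at integer voxel coordinate (vx, vy, vz).
    // Returns 0 (empty) when the enclosing cell has no block allocated.
    uint32_t fetchVoxel(const SparseVoxelGrid& g,
                        uint32_t vx, uint32_t vy, uint32_t vz)
    {
        uint32_t cx = vx / BRICK_SIZE, cy = vy / BRICK_SIZE, cz = vz / BRICK_SIZE;
        uint32_t cell = (cz * GRID_DIM + cy) * GRID_DIM + cx;
        uint32_t brick = g.cellToBrick[cell];
        if (brick == INVALID_BRICK)
            return 0;
        uint32_t lx = vx % BRICK_SIZE, ly = vy % BRICK_SIZE, lz = vz % BRICK_SIZE;
        uint32_t local = (lz * BRICK_SIZE + ly) * BRICK_SIZE + lx;
        return g.brickPool[brick * (BRICK_SIZE * BRICK_SIZE * BRICK_SIZE) + local];
    }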

While this approach enabled sufficiently high voxel resolutions, it is still slower than the original low-resolution, brute-force way of storing and raymarching voxels. Depending on the scene, my final implementation of the effect also sometimes takes too long to converge to acceptable results (especially in low-light conditions such as the lower Sponza hallways). However, these experiments were still quite educational and showed that the light leak reduction introduced by Majercik et al. can in theory solve most problematic light leaking cases (just not when the depth information is generated by tracing low-resolution voxels).

The following image shows the indirect diffuse lighting term of the scene shown at the start of this post:

Sponza Indirect Diffuse