🤖 AI Summary
Rasterization-based approximations in real-time differentiable rendering struggle to accurately model complex light transport phenomena such as reflection and refraction.
Method: This paper introduces a fully software-based differentiable ray tracing framework built on a volumetric mesh representation. It eschews both rasterization and hardware-accelerated ray tracing units (e.g., NVIDIA RT Cores), relying solely on general-purpose GPU computation. It adapts a classical volumetric mesh ray tracing algorithm to a differentiable scene representation and integrates it with an implicit scene parameterization for end-to-end optimization.
Contribution/Results: The method matches the image quality of Gaussian splatting while rendering in real time (>30 FPS) on standard GPUs. Crucially, it enables accurate gradient computation and stable training, overcoming the longstanding reliance of differentiable rendering on rasterization-based simplifications and marking a significant step toward physically grounded, hardware-agnostic differentiable rendering.
📝 Abstract
Research on differentiable scene representations is consistently moving towards more efficient, real-time models. Recently, this has led to the popularization of splatting methods, which eschew the traditional ray-based rendering of radiance fields in favor of rasterization. This has yielded a significant improvement in rendering speeds due to the efficiency of rasterization algorithms and hardware, but has come at a cost: the approximations that make rasterization efficient also make implementation of light transport phenomena like reflection and refraction much more difficult. We propose a novel scene representation which avoids these approximations, but keeps the efficiency and reconstruction quality of splatting by leveraging a decades-old efficient volumetric mesh ray tracing algorithm which has been largely overlooked in recent computer vision research. The resulting model, which we name Radiant Foam, achieves rendering speed and quality comparable to Gaussian Splatting, without the constraints of rasterization. Unlike ray traced Gaussian models that use hardware ray tracing acceleration, our method requires no special hardware or APIs beyond the standard features of a programmable GPU.
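The ray-based rendering the abstract contrasts with rasterization reduces, per ray, to classical emission-absorption volume rendering accumulated front to back over the mesh cells the ray crosses. Below is a minimal Python sketch of that compositing step under a hypothetical piecewise-constant model: the per-cell segment lengths, densities, and colors are assumed inputs, and the cell-traversal geometry (finding which cells a ray crosses, and in what order) is omitted; the paper's actual integration scheme may differ.

```python
import math

def composite_ray(segments):
    """Front-to-back emission-absorption compositing along one ray.

    segments: list of (length, density, rgb) tuples, one per mesh cell
    the ray crosses, already ordered by ray entry point. Assumes each
    cell is homogeneous (hypothetical simplification for illustration).
    Returns (accumulated RGB color, remaining transmittance).
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for length, density, rgb in segments:
        # Opacity of a homogeneous segment: 1 - exp(-density * length).
        alpha = 1.0 - math.exp(-density * length)
        weight = transmittance * alpha
        for i in range(3):
            color[i] += weight * rgb[i]
        # Light surviving past this cell attenuates later contributions.
        transmittance *= 1.0 - alpha
    return color, transmittance
```

Because every operation here is a smooth function of the per-cell densities and colors, the same accumulation can be differentiated end to end by an autodiff framework, which is what makes this style of renderer suitable for gradient-based scene optimization.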