🤖 AI Summary
Realistic rendering of participating media such as smoke under dynamic illumination has long faced a trade-off between visual fidelity and interactivity. Existing six-way lightmap approaches support only precomputed animation sequences and cannot respond to camera motion or dynamic lighting changes. This work proposes a neural six-way lightmap generation method that uses large-step ray marching to produce guiding images approximating smoke scattering and silhouettes, which in turn drive a lightweight neural network to synthesize high-quality lightmaps in real time. The approach is the first to simultaneously support camera movement, dynamic lighting, and smoke-obstacle interactions, and it integrates seamlessly into mainstream game engines. Experiments demonstrate that the method achieves interactive frame rates while preserving visual realism, making it suitable for real-time applications such as games and VR/AR.
📝 Abstract
Participating media such as smoke are a pervasive and intriguing visual effect in virtual environments. Unfortunately, rendering such phenomena in real time is notoriously difficult due to the computational expense of estimating the volume rendering equation. The six-way lightmaps technique has been widely used in video games to render smoke with a camera-oriented billboard, approximating lighting effects with six precomputed lightmaps and thereby balancing realism and efficiency; however, it is limited to pre-simulated animation sequences and is oblivious to camera movement. In this work, we propose a neural six-way lightmaps method to strike a long-sought balance between dynamics and visual realism. Our approach first generates a guiding map from the camera view using ray marching with a large sampling distance to approximate smoke scattering and silhouettes. Then, given a guiding map, we train a neural network to predict the corresponding six-way lightmaps. The resulting lightmaps can be seamlessly used in existing game engine pipelines. This approach supports visually appealing rendering while enabling real-time user interactivity, including smoke-obstacle interaction, camera movement, and lighting changes. Through a series of comprehensive benchmarks, we demonstrate that our method is well-suited for real-time applications, such as games and VR/AR.
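To make the large-step ray marching idea concrete, the following is a minimal sketch (not the paper's implementation) of computing a coarse per-pixel transmittance map through a smoke density volume. It assumes orthographic rays aligned with the volume's first axis and Beer-Lambert attenuation approximated by a Riemann sum with a large stride; the names `guiding_map`, `sigma_t`, and `step` are illustrative, not from the paper.

```python
import numpy as np

def guiding_map(density, sigma_t=1.0, step=4, voxel_size=0.1):
    """Approximate per-pixel transmittance through a density volume by
    marching along the view axis with a large stride (coarse ray marching).

    density: (D, H, W) array of smoke densities; rays run along axis 0.
    step:    voxels skipped per sample -- larger is faster but coarser.
    Returns an (H, W) map in [0, 1]; 0 = fully occluded, 1 = clear.
    """
    dt = step * voxel_size                # world-space length of each step
    samples = density[::step]             # sample every `step`-th slice
    # Beer-Lambert law: T = exp(-sigma_t * integral of density along the ray),
    # with the integral approximated by the coarse sum below.
    optical_depth = sigma_t * dt * samples.sum(axis=0)
    return np.exp(-optical_depth)

# Toy volume: a dense smoke blob in the middle of an otherwise empty grid.
vol = np.zeros((32, 8, 8))
vol[8:24, 2:6, 2:6] = 0.5
gmap = guiding_map(vol)
```

Because only every fourth slice is sampled, the cost per pixel drops by roughly the stride factor; the resulting coarse silhouette/occlusion image is the kind of cheap signal a network can then refine into full six-way lightmaps.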