🤖 AI Summary
Inverse rendering suffers from physically inconsistent joint decomposition of geometry, material, and illumination, primarily due to the lack of a physically grounded model for the material-opacity relationship. This work introduces, for the first time, an explicit physical dependency of opacity on the optical cross-section coefficient. We propose a differentiable inverse rendering framework based on 3D Gaussian splatting, featuring a material-driven neural opacity module embedded with radiative transfer–inspired physical activation functions to enable dual-path gradient constraints on color and opacity. A lightweight material network and multi-baseline joint optimization strategy are further adopted. Evaluated on three mainstream Gaussian splatting inverse rendering baselines, our method achieves PSNR gains of 1.2–2.4 dB in novel-view synthesis and significantly improves material separation accuracy, enabling more robust and physically consistent real-time high-fidelity material reconstruction.
📝 Abstract
Decomposing geometry, materials and lighting from a set of images, namely inverse rendering, has been a long-standing problem in computer vision and graphics. Recent advances in neural rendering enable photo-realistic and plausible inverse rendering results. The emergence of 3D Gaussian Splatting has boosted it to the next level by showing real-time rendering potentials. An intuitive finding is that the models used for inverse rendering do not take into account the dependency of opacity w.r.t. material properties, namely cross section, as suggested by optics. Therefore, we develop a novel approach that adds this dependency to the modeling itself. Inspired by radiative transfer, we augment the opacity term by introducing a neural network that takes as input material properties to provide modeling of cross section and a physically correct activation function. The gradients for material properties are therefore not only from color but also from opacity, facilitating a constraint for their optimization. Therefore, the proposed method incorporates more accurate physical properties compared to previous works. We implement our method into 3 different baselines that use Gaussian Splatting for inverse rendering and achieve significant improvements universally in terms of novel view synthesis and material modeling.