🤖 AI Summary
This work addresses the longstanding challenge in neural rendering—namely, the inability to disentangle lighting and material parameters and the lack of controllable relighting capability. To this end, we propose a physics-driven neural deferred shading framework. Our method is the first to embed differentiable physical priors—including BRDF constraints and light transport models—into a deferred shading pipeline, while integrating an end-to-end shadow estimation module to achieve explicit geometric, lighting, and material disentanglement. The framework enables generalizable, photorealistic relighting under arbitrary lighting inputs, balancing editability with physical consistency. In multi-illumination relighting tasks, it significantly outperforms classical renderers and state-of-the-art neural shading models: geometric-lighting disentanglement accuracy improves by 23.6%, and cross-scene generalization error decreases by 19.4%.
📝 Abstract
Deep learning based rendering has demonstrated major improvements for photo-realistic image synthesis, applicable to various applications including visual effects in movies and photo-realistic scene building in video games. However, a significant limitation is the difficulty of decomposing the illumination and material parameters, which limits such methods to reconstruct an input scene, without any possibility to control these parameters. This paper introduces a novel physics based neural deferred shading pipeline to decompose the data-driven rendering process, learn a generalizable shading function to produce photo-realistic results for shading and relighting tasks, we also provide a shadow estimator to efficiently mimic shadowing effect. Our model achieves improved performance compared to classical models and a state-of-art neural shading model, and enables generalizable photo-realistic shading from arbitrary illumination input.