📝 Abstract
The bidirectional reflectance distribution function (BRDF) is an essential tool for capturing the complex interaction of light and matter. Recently, several works have employed neural methods for BRDF modeling, following strategies that range from building on existing parametric models to purely neural parametrizations. While all of these methods yield impressive results, a comprehensive comparison of the different approaches is missing from the literature. In this work, we present a thorough evaluation of several approaches, including qualitative and quantitative reconstruction quality as well as an analysis of reciprocity and energy conservation. Moreover, we propose two extensions that can be added to existing approaches: a novel additive combination strategy for neural BRDFs that splits the reflectance into a diffuse and a specular part, and an input mapping that ensures reciprocity exactly by construction, whereas previous approaches enforce it only through soft constraints.
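The idea behind a reciprocity-preserving input mapping can be illustrated with a small sketch: if the network only ever sees features that are invariant under swapping the incoming and outgoing directions, Helmholtz reciprocity f(ωᵢ, ωₒ) = f(ωₒ, ωᵢ) holds exactly for any network weights, with no soft penalty needed. The particular feature choice below (symmetric sum, elementwise product, and dot product) is an illustrative assumption, not the paper's exact mapping.

```python
import numpy as np

def symmetric_features(wi, wo):
    """Map a direction pair (wi, wo) to features invariant under swapping them.

    Any network evaluated on these features satisfies Helmholtz reciprocity
    f(wi, wo) == f(wo, wi) exactly, by construction.
    Note: this feature set is a hypothetical example, not the paper's mapping.
    """
    wi, wo = np.asarray(wi, dtype=float), np.asarray(wo, dtype=float)
    h = wi + wo          # unnormalized half-vector direction: symmetric in (wi, wo)
    s = wi * wo          # elementwise product: symmetric in (wi, wo)
    d = np.dot(wi, wo)   # cosine of the angle between the two directions
    return np.concatenate([h, s, [d]])

# Two unit directions on the upper hemisphere
wi = np.array([0.3, 0.1, 0.95]); wi /= np.linalg.norm(wi)
wo = np.array([-0.2, 0.4, 0.89]); wo /= np.linalg.norm(wo)

# Swapping the arguments yields identical features, hence identical BRDF values
assert np.allclose(symmetric_features(wi, wo), symmetric_features(wo, wi))
```

Because the symmetry is structural rather than learned, it holds at every point of the input domain, in contrast to soft-constraint approaches where the reciprocity residual is only minimized on training samples.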