🤖 AI Summary
In non-line-of-sight (NLOS) imaging, the weak, noise-prone indirect light signals severely degrade the accuracy and robustness of 3D reconstruction. To address this, we propose an end-to-end neural operator learning framework. First, we parameterize a neural operator to approximate the physical inverse mapping, integrated with differentiable light-transport modeling. Second, we introduce a noise-adaptive estimation module that responds dynamically to varying noise levels. Third, we design a spatiotemporal global–local feature fusion mechanism to enhance fine-detail recovery. Built on deep algorithm unrolling, our method enables efficient inference while significantly improving reconstruction accuracy and stability under sparse sampling and fast-scanning conditions, in both simulation and real-world experiments. This work establishes a new paradigm for practical NLOS imaging.
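The global–local fusion idea above, combining coarse structural context with residual fine detail, can be sketched in a simplified 2D form. The box-filter decomposition and the fixed fusion weight `alpha` below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def global_local_fusion(feat, alpha=0.5):
    """Fuse a global (low-frequency, structural) component of a feature
    map with its local (high-frequency, detail) residual.
    `alpha` stands in for a learned fusion weight (hypothetical)."""
    k = 5  # box-filter size for the global branch
    pad = np.pad(feat, k // 2, mode="edge")
    global_part = np.zeros_like(feat)
    H, W = feat.shape
    for i in range(H):
        for j in range(W):
            # Heavy smoothing captures the coarse structure.
            global_part[i, j] = pad[i:i + k, j:j + k].mean()
    # Local branch: what smoothing removed, i.e. the fine detail.
    local_part = feat - global_part
    # Weighted fusion re-balances structure against detail.
    return global_part + alpha * local_part

rng = np.random.default_rng(1)
feat = rng.standard_normal((16, 16))
fused = global_local_fusion(feat, alpha=1.0)
# With alpha = 1 the decomposition is exact, so fused == feat.
print(np.allclose(fused, feat))  # → True
```

In a learned version, `alpha` (or a spatial weight map) would be trained end-to-end so the network decides, per region, how much detail to re-inject.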
📝 Abstract
In computational imaging, and non-line-of-sight (NLOS) imaging in particular, information about obscured or hidden scenes is extracted from indirect light signals produced by multiple reflections or scattering. The inherent weakness of these signals, together with their susceptibility to noise, necessitates incorporating the physical forward process to ensure accurate reconstruction. This paper presents a parameterized inverse-problem framework tailored to large-scale linear problems in 3D image reconstruction. First, a noise estimation module adaptively assesses the noise level in the transient data. A parameterized neural operator is then trained to approximate the inverse mapping, enabling end-to-end rapid image reconstruction. Our operator-learning-based 3D reconstruction framework is constructed through deep algorithm unfolding, which provides good model interpretability and adapts dynamically to varying noise levels in the acquired data, yielding consistently robust and accurate reconstructions. Furthermore, we introduce a novel method for fusing global and local spatiotemporal features; by integrating structural and fine-detail information, it significantly enhances both accuracy and robustness. Comprehensive numerical experiments on simulated and real datasets substantiate the efficacy of the proposed method. It performs well on fast-scanning data and sparse illumination-point data, offering a viable solution for NLOS imaging in complex scenarios.
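The abstract's core pipeline, a noise estimate feeding an unrolled iterative solver for a large-scale linear inverse problem, can be illustrated in miniature. The sketch below is a minimal numpy illustration of deep-unfolding-style reconstruction, not the authors' implementation: it runs a fixed number of ISTA-type gradient/shrinkage iterations for `y = Ax + n`, with a soft-threshold scaled by a crude median-absolute-deviation noise estimate; all function names and parameters here are hypothetical stand-ins for the learned modules.

```python
import numpy as np

def estimate_noise_sigma(y):
    """Crude robust noise-level estimate from first differences via the
    median absolute deviation (a stand-in for a learned noise module)."""
    d = np.diff(y)
    return np.median(np.abs(d - np.median(d))) / 0.6745

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(A, y, n_iters=200, thresh_scale=0.1):
    """Unrolled ISTA: a fixed number of gradient + shrinkage steps.
    In a deep-unfolded network, the step size and threshold scale
    would be trainable per-iteration parameters."""
    # Step size = 1 / Lipschitz constant of grad of 0.5*||Ax - y||^2.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    sigma = estimate_noise_sigma(y)  # noise-adaptive threshold scale
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                     # data-fidelity gradient
        x = soft_threshold(x - step * grad,
                           step * thresh_scale * sigma)
    return x

# Tiny demo: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40)) / np.sqrt(80)
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = unrolled_ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Unrolling fixes the iteration count at training time, which is what gives the method both fast, predictable inference and the interpretability of a classical iterative solver.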