🤖 AI Summary
Inverse problems are often severely ill-posed, causing conventional neural operators to fail. To address this, we propose B²B⁻¹, an inverse basis-to-basis neural operator framework that explicitly decouples functional representation from the inverse map: it learns neural basis functions for the input and output spaces separately, then constructs a deterministic, invertible, or probabilistic inverse model in the resulting coefficient space. This is the first approach to explicitly separate representation learning from inverse modeling, enabling a single architecture to adapt flexibly to varying degrees of ill-posedness. The framework inherently supports implicit denoising, uncertainty quantification, and generalization across instances, domains, and levels of ill-posedness. Evaluated on six inverse PDE tasks, including two newly introduced benchmarks, B²B⁻¹ consistently outperforms existing invertible neural operator baselines, with significant improvements in noise robustness, re-simulation stability, and generalization.
📝 Abstract
Inverse problems challenge existing neural operator architectures because ill-posed inverse maps violate continuity, uniqueness, and stability assumptions. We introduce B²B⁻¹, an inverse basis-to-basis neural operator framework that addresses this limitation. Our key innovation is to decouple function representation from the inverse map. We learn neural basis functions for the input and output spaces, then train inverse models that operate on the resulting coefficient space. This structure allows us to learn deterministic, invertible, and probabilistic models within a single framework, and to choose models based on the degree of ill-posedness. We evaluate our approach on six inverse PDE benchmarks, including two novel datasets, and compare against existing invertible neural operator baselines. We learn probabilistic models that capture uncertainty and input variability, and remain robust to measurement noise due to implicit denoising in the coefficient calculation. Our results show consistent re-simulation performance across varying levels of ill-posedness. By separating representation from inversion, our framework enables scalable surrogate models for inverse problems that generalize across instances, domains, and degrees of ill-posedness.
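To make the decoupling concrete, here is a minimal, hypothetical PyTorch sketch of the basis-to-basis pipeline described above. All names (`NeuralBasis`, `coefficients`, `CoefficientInverseModel`) and architectural details are our own illustrative assumptions, not the paper's API; we assume 1D functions sampled on a shared grid and show only the deterministic variant of the coefficient-space inverse model.

```python
import torch
import torch.nn as nn


class NeuralBasis(nn.Module):
    """Learned basis: maps coordinates x to K basis functions evaluated at x."""

    def __init__(self, num_basis: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, num_basis),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (n_points, 1) -> (n_points, K)


def coefficients(basis: NeuralBasis, x: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
    """Least-squares projection of sampled function values onto the basis."""
    Phi = basis(x)                              # (n_points, K)
    return torch.linalg.lstsq(Phi, f).solution  # (K, n_functions)


class CoefficientInverseModel(nn.Module):
    """Deterministic inverse map acting purely in coefficient space
    (output-space coefficients -> input-space coefficients). The framework
    also admits invertible or probabilistic models in this slot."""

    def __init__(self, k_out: int, k_in: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k_out, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, k_in),
        )

    def forward(self, c_out: torch.Tensor) -> torch.Tensor:
        return self.net(c_out)


# Usage sketch: recover an input function a from a noisy observation of u.
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1)  # shared 1D sample grid
basis_in, basis_out = NeuralBasis(16), NeuralBasis(16)
inverse = CoefficientInverseModel(k_out=16, k_in=16)

u_noisy = torch.sin(torch.pi * x) + 0.05 * torch.randn_like(x)  # stand-in data
c_u = coefficients(basis_out, x, u_noisy)  # (16, 1) coefficients of u
c_a = inverse(c_u.T)                       # inverse map in coefficient space
a_hat = basis_in(x) @ c_a.T                # reconstruct a on the grid, (128, 1)
```

The least-squares projection onto the learned basis is one plausible reading of the abstract's "implicit denoising in the coefficient calculation": noise components outside the span of the K basis functions are filtered out before the inverse model ever sees the data.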