🤖 AI Summary
This work proposes a conditional flow matching framework for Bayesian inverse problems in which the prior and likelihood densities cannot be evaluated explicitly due to physical constraints, but samples from the joint distribution of parameters and measurements are available. By modeling the velocity field of a probability flow ordinary differential equation with neural networks, the method maps a source distribution directly to the posterior conditioned on observed data, bypassing explicit density evaluations. It is applicable to nonlinear, high-dimensional, and non-differentiable forward models. The study identifies two degenerate behaviors, variance collapse and selective memorization, that arise from overfitting with limited training data, and mitigates them through regularization strategies such as early stopping. Experiments demonstrate that the approach accurately captures multimodal posteriors across a range of physical inverse problems with both high fidelity and computational efficiency.
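For reference, the standard conditional flow matching recipe (written here in generic notation that may differ from the paper's formulation) pairs a straight-line interpolation path with a regression loss on the velocity field; posterior samples are then obtained by integrating the learned probability flow ODE for a fixed observation:

```latex
% Generic conditional flow matching recipe; notation is illustrative, not the paper's.
\begin{aligned}
  x_t &= (1 - t)\,x_0 + t\,x_1,
       \qquad x_0 \sim p_{\text{source}},\;\; (x_1, y) \sim p(x, y),\;\; t \sim \mathcal{U}[0, 1],\\
  \mathcal{L}(\theta) &= \mathbb{E}_{t,\,x_0,\,(x_1, y)}
       \bigl\| v_\theta(x_t, t, y) - (x_1 - x_0) \bigr\|^2,\\
  \frac{\mathrm{d}x}{\mathrm{d}t} &= v_\theta(x, t, y^\ast),
       \qquad x(0) \sim p_{\text{source}} \;\Rightarrow\; x(1) \approx p(x \mid y^\ast).
\end{aligned}
```

The conditioning enters only as an extra network input $y$, which is why joint samples suffice and no explicit prior or likelihood evaluations are needed.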
📝 Abstract
This study presents a conditional flow matching framework for solving physics-constrained Bayesian inverse problems. In this setting, samples from the joint distribution of inferred variables and measurements are assumed available, while explicit evaluation of the prior and likelihood densities is not required. We derive a simple and self-contained formulation of both the unconditional and conditional flow matching algorithms, tailored specifically to inverse problems. In the conditional setting, a neural network is trained to learn the velocity field of a probability flow ordinary differential equation that transports samples from a chosen source distribution directly to the posterior distribution conditioned on observed measurements. This black-box formulation accommodates nonlinear, high-dimensional, and potentially non-differentiable forward models without restrictive assumptions on the noise model. We further analyze the behavior of the learned velocity field in the regime of finite training data. Under mild architectural assumptions, we show that overtraining can induce degenerate behavior in the generated conditional distributions, including variance collapse and a phenomenon termed selective memorization, wherein generated samples concentrate around training data points associated with similar observations. A simplified theoretical analysis explains this behavior, and numerical experiments confirm it in practice. We demonstrate that standard early-stopping criteria based on monitoring test loss effectively mitigate such degeneracy. The proposed method is evaluated on several physics-based inverse problems. We investigate the impact of different choices of source distributions, including Gaussian and data-informed priors. Across these examples, conditional flow matching accurately captures complex, multimodal posterior distributions while maintaining computational efficiency.
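As a concrete illustration of the training and sampling procedure described above, the following is a minimal, self-contained PyTorch sketch; the toy forward model, network architecture, and hyperparameters are assumptions for illustration only, not the paper's implementation. The held-out loss computed during training reflects the early-stopping criterion mentioned in the abstract.

```python
# Sketch of conditional flow matching for posterior sampling (illustrative, not the paper's code).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """MLP approximating the conditional velocity field v_theta(x_t, t, y)."""
    def __init__(self, dim_x, dim_y, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim_x),
        )

    def forward(self, x_t, t, y):
        return self.net(torch.cat([x_t, t, y], dim=-1))

def cfm_loss(model, x_tgt, y_obs):
    """Conditional flow matching loss with a straight-line interpolation path."""
    x_src = torch.randn_like(x_tgt)       # samples from the source (here standard Gaussian)
    t = torch.rand(x_tgt.size(0), 1)      # t ~ Uniform[0, 1]
    x_t = (1 - t) * x_src + t * x_tgt     # interpolation between source and data samples
    u_t = x_tgt - x_src                   # target velocity along that path
    return ((model(x_t, t, y_obs) - u_t) ** 2).mean()

# Toy joint samples (x, y): x from a broad prior, y from a nonlinear forward model plus noise.
dim_x, dim_y, n = 2, 2, 4096
x_all = torch.randn(n, dim_x)
y_all = torch.tanh(x_all) + 0.05 * torch.randn(n, dim_y)
x_tr, y_tr, x_te, y_te = x_all[:3500], y_all[:3500], x_all[3500:], y_all[3500:]

model = VelocityNet(dim_x, dim_y)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, x_tr.size(0), (256,))
    loss = cfm_loss(model, x_tr[idx], y_tr[idx])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:
        # Monitoring a held-out loss; stopping once it starts rising is the kind of
        # early-stopping criterion the abstract refers to for avoiding degeneracy.
        with torch.no_grad():
            test_loss = cfm_loss(model, x_te, y_te)
        print(f"step {step}: train {loss.item():.4f}  test {test_loss.item():.4f}")

@torch.no_grad()
def sample_posterior(model, y_star, n_samples=1000, n_steps=100):
    """Integrate the learned probability flow ODE (forward Euler) for a fixed observation."""
    x = torch.randn(n_samples, dim_x)     # draw from the source distribution
    y_rep = y_star.expand(n_samples, -1)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = torch.full((n_samples, 1), k * dt)
        x = x + dt * model(x, t, y_rep)
    return x                              # approximate samples from p(x | y_star)

posterior_samples = sample_posterior(model, y_te[:1])
```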