AI Summary
To address the inherent one-to-many ambiguity in low-light and underwater image enhancement, this paper proposes the Bayesian Enhancement Model (BEM), the first Bayesian modeling framework explicitly designed for one-to-many image enhancement mappings. BEM adopts a two-stage architecture that jointly leverages low-dimensional latent-space modeling and Bayesian neural networks (BNNs) to generate diverse, high-fidelity enhanced outputs. A novel Momentum Prior is introduced to dynamically regularize the BNN's weight distribution, significantly accelerating training convergence and improving uncertainty calibration. Extensive experiments on multiple benchmarks demonstrate that BEM consistently outperforms deterministic methods in both fidelity metrics (PSNR, SSIM) and diversity measures, while maintaining real-time inference speed (>30 FPS). This work establishes a new paradigm for interpretable, controllable image enhancement under epistemic uncertainty.
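The paper's exact formulation of the Momentum Prior is not given here, but a common momentum scheme is to let the prior's mean track an exponential moving average (EMA) of the learned posterior mean, so the KL regularizer stays anchored near the current solution. The sketch below is a hypothetical NumPy illustration under that assumption; `MomentumPrior`, `beta`, and the fixed prior standard deviation are illustrative names, not the paper's API.

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), summed over dimensions."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2)
        - 0.5
    )

class MomentumPrior:
    """Hypothetical sketch: prior mean follows an EMA of the posterior mean."""

    def __init__(self, dim, beta=0.99, sigma_p=1.0):
        self.mu_p = np.zeros(dim)            # prior mean, updated with momentum
        self.sigma_p = np.full(dim, sigma_p) # prior std, kept fixed here
        self.beta = beta                     # momentum coefficient

    def update(self, mu_q):
        """Move the prior mean toward the current posterior mean."""
        self.mu_p = self.beta * self.mu_p + (1.0 - self.beta) * mu_q

    def kl(self, mu_q, sigma_q):
        """KL term that would regularize the BNN's weight distribution."""
        return gaussian_kl(mu_q, sigma_q, self.mu_p, self.sigma_p)
```

As the prior mean catches up with the posterior, the KL penalty shrinks, which is one plausible mechanism for the faster convergence the paper reports.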
Abstract
In image enhancement tasks, such as low-light and underwater image enhancement, a degraded image can correspond to multiple plausible target images due to dynamic photography conditions, such as variations in illumination. This naturally results in a one-to-many mapping challenge. To address this, we propose a Bayesian Enhancement Model (BEM) that incorporates Bayesian Neural Networks (BNNs) to capture data uncertainty and produce diverse outputs. To achieve real-time inference, we introduce a two-stage approach: Stage I employs a BNN to model the one-to-many mappings in the low-dimensional space, while Stage II refines fine-grained image details using a Deterministic Neural Network (DNN). To accelerate BNN training and convergence, we introduce a dynamic Momentum Prior. Extensive experiments on multiple low-light and underwater image enhancement benchmarks demonstrate the superiority of our method over deterministic models.
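The two-stage pipeline described above can be caricatured in a few lines: Stage I draws several weight samples from a Gaussian posterior (a Bayesian linear layer standing in for the BNN), yielding diverse low-dimensional latents for one input, and Stage II maps each latent to an output with fixed deterministic weights. This is a toy NumPy sketch under assumed shapes and random stand-in parameters, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_bnn(x, n_samples=5, d_lat=8):
    """Stage I (toy): Bayesian linear layer; each weight sample gives a
    different latent hypothesis for the same degraded input."""
    d_in = x.shape[-1]
    mu_w = 0.1 * rng.standard_normal((d_in, d_lat))   # posterior mean (stand-in)
    std_w = np.full((d_in, d_lat), 0.05)              # posterior std (stand-in)
    latents = []
    for _ in range(n_samples):
        w = mu_w + std_w * rng.standard_normal(mu_w.shape)  # reparameterized draw
        latents.append(x @ w)
    return np.stack(latents)                          # (n_samples, batch, d_lat)

def stage2_dnn(z, d_out=16):
    """Stage II (toy): deterministic refinement of each latent hypothesis."""
    d_lat = z.shape[-1]
    w = np.ones((d_lat, d_out)) / d_lat               # fixed deterministic weights
    return np.tanh(z @ w)

x = rng.standard_normal((1, 32))        # one flattened "degraded image"
latents = stage1_bnn(x, n_samples=5)    # five diverse latent hypotheses
outputs = stage2_dnn(latents)           # one refined output per hypothesis
```

The point of the split is that stochastic sampling happens only in the cheap low-dimensional stage, while the expensive pixel-level refinement runs once per sample through a deterministic network, which is how real-time inference remains feasible.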