🤖 AI Summary
Image inverse problems, such as parallel MRI reconstruction, suffer from non-unique solutions, unstable optimization, and sensitivity to noise. To address these challenges, the authors propose the Locally Convex Multi-Scale Energy (LC-MuSE) model: the negative log-prior is represented as a CNN-parameterized multi-scale energy whose gradient is constrained to be locally monotone, which makes the energy locally strongly convex in a neighbourhood of the data manifold. Within the maximum a posteriori (MAP) estimation framework, this design yields uniqueness of the solution, convergence guarantees for the resulting optimization, and robustness to input perturbations. Experiments on parallel MR image reconstruction show that LC-MuSE outperforms state-of-the-art convex regularizers and performs comparably to plug-and-play and end-to-end trained deep learning methods, while providing verifiable theoretical guarantees. LC-MuSE thus couples expressive modeling power with practical reliability, offering both strong empirical accuracy and rigorous mathematical foundations.
📝 Abstract
We propose a multi-scale deep energy model that is strongly convex in a local neighbourhood of the data manifold to represent its probability density, with application to inverse problems. In particular, we represent the negative log-prior as a multi-scale energy model parameterized by a Convolutional Neural Network (CNN). We restrict the gradient of the CNN to be locally monotone, which constrains the model as a Locally Convex Multi-Scale Energy (LC-MuSE). We use the learned energy model in image-based inverse problems, where the formulation offers several desirable properties: i) uniqueness of the solution, ii) guaranteed convergence to a minimum of the inverse problem, and iii) robustness to input perturbations. In the context of parallel Magnetic Resonance (MR) image reconstruction, we show that the proposed method outperforms state-of-the-art convex regularizers, while performing comparably to plug-and-play regularizers and end-to-end trained methods.
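The MAP formulation described above can be sketched numerically. The toy example below is a minimal illustration, not the authors' implementation: a simple quadratic energy stands in for the learned CNN energy, and a random matrix stands in for the MR forward operator. The key property it mimics is the locally monotone gradient, i.e. (∇E(x) − ∇E(y))·(x − y) ≥ m‖x − y‖² for some m > 0, which makes the total MAP objective strongly convex and guarantees a unique minimizer and convergence of plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 32                                            # signal length (illustrative)
A = rng.standard_normal((16, n)) / np.sqrt(16)    # stand-in for the undersampled forward operator
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(16)   # noisy measurements

lam = 0.5                                         # prior weight (hypothetical)

def grad_energy(x):
    # Surrogate for the CNN energy gradient; E(x) = 0.5*||x||^2 here,
    # whose gradient is trivially monotone (strongly convex energy).
    return x

def grad_map(x):
    # Gradient of the MAP objective 0.5*||Ax - b||^2 + lam * E(x).
    return A.T @ (A @ x - b) + lam * grad_energy(x)

# Plain gradient descent: strong convexity of the total objective gives a
# unique minimizer and linear convergence for a small enough step size.
x = np.zeros(n)
step = 0.2
for _ in range(500):
    x = x - step * grad_map(x)

print(np.linalg.norm(grad_map(x)))   # gradient norm shrinks toward 0
```

In LC-MuSE the quadratic surrogate is replaced by a multi-scale CNN energy whose gradient is only constrained to be monotone near the data manifold, so the same uniqueness and convergence arguments apply locally rather than globally.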