🤖 AI Summary
A key challenge in image restoration is defining a realistic prior on clean images to fill in the information missing from the observation. While state-of-the-art methods encode this prior with a neural network, typical image distributions are invariant to transformations such as rotations and flips, and most deep architectures are not designed to represent an invariant distribution. Building on recent work that injects equivariance into the Plug-and-Play paradigm, the authors propose Equivariant Regularization by Denoising (ERED), a unified framework that combines equivariant denoisers with stochastic optimization. They analyze the convergence of the resulting algorithm and discuss its practical benefits for restoration tasks.
📝 Abstract
One key ingredient of image restoration is to define a realistic prior on clean images to complete the missing information in the observation. State-of-the-art restoration methods rely on a neural network to encode this prior. Moreover, typical image distributions are invariant to some set of transformations, such as rotations or flips. However, most deep architectures are not designed to represent an invariant image distribution. Recent works have proposed to overcome this difficulty by including equivariance properties within a Plug-and-Play paradigm. In this work, we propose a unified framework named Equivariant Regularization by Denoising (ERED) based on equivariant denoisers and stochastic optimization. We analyze the convergence of this algorithm and discuss its practical benefits.
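To make the idea concrete, here is a minimal sketch (not the paper's implementation) of the standard construction behind equivariant denoisers: averaging a base denoiser D over a symmetry group G, D_eq(x) = (1/|G|) Σ_g g⁻¹ D(g x), here with G taken as the four 90° rotations for illustration. The stochastic variant, which samples a single group element per call and matches the group average in expectation, is the kind of randomized denoiser a stochastic RED-style update can plug in; the function names and the step-size/weight parameters below are illustrative assumptions, not from the paper.

```python
import numpy as np

def equivariant_denoiser(denoiser, x):
    """Group-average a denoiser over 90-degree rotations:
    D_eq(x) = (1/|G|) * sum_g g^{-1} D(g x), with G = C4."""
    out = np.zeros_like(x)
    for k in range(4):
        out += np.rot90(denoiser(np.rot90(x, k)), -k)
    return out / 4.0

def stochastic_equivariant_denoiser(denoiser, x, rng):
    """Sample one group element per call; equals the
    group-averaged denoiser in expectation."""
    k = rng.integers(4)
    return np.rot90(denoiser(np.rot90(x, k)), -k)

def ered_step(x, grad_data, denoiser, rng, tau=0.1, lam=1.0):
    """One stochastic RED-style update (illustrative form):
    x <- x - tau * (grad f(x) + lam * (x - D(x))),
    with D replaced by a randomly transformed denoiser."""
    d = stochastic_equivariant_denoiser(denoiser, x, rng)
    return x - tau * (grad_data(x) + lam * (x - d))
```

Group averaging symmetrizes any base denoiser: even a deliberately non-equivariant D yields a D_eq that commutes with every rotation in the group, which is the invariance property the prior is meant to capture.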