🤖 AI Summary
Existing self-supervised super-resolution and deblurring methods struggle to recover high-frequency details from measurements that contain only low-frequency information, because they rely on translation and rotation invariance while neglecting scale. This work introduces, for the first time, scale-invariance prior modeling, leveraging the approximate scale invariance of natural image distributions as the core self-supervised signal. Methodologically, it proposes a scale-constrained loss, multi-scale data augmentation, and frequency-domain consistency regularization, yielding an end-to-end optimization framework that requires no paired ground truth. Evaluated on real-world datasets, the approach significantly outperforms existing self-supervised methods and matches the performance of fully supervised models. It thereby overcomes the long-standing bottleneck of high-frequency recovery from low-frequency measurements and establishes a new paradigm for label-free medical imaging modalities such as MRI and CT.
📝 Abstract
Self-supervised methods have recently proven to be nearly as effective as supervised methods in various imaging inverse problems, paving the way for learning-based methods in scientific and medical imaging applications where ground truth data is hard or expensive to obtain, such as magnetic resonance imaging and computed tomography. These methods critically rely on invariance of the image distribution to translations and/or rotations in order to learn from incomplete measurement data alone. However, existing approaches fail to achieve competitive performance on image super-resolution and deblurring, which play a key role in most imaging systems. In this work, we show that invariance to translations and rotations is insufficient to learn from measurements that only contain low-frequency information. Instead, we propose a new self-supervised approach that leverages the fact that many image distributions are approximately scale-invariant, enabling the recovery of high-frequency information lost in the measurement process. Through a series of experiments on real datasets, we demonstrate that the proposed method outperforms other self-supervised approaches and obtains performance on par with fully supervised learning.
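The core idea can be sketched in a few lines: combine a measurement-consistency term (the reconstruction must explain the observed low-frequency measurement) with a scale-equivariance term (a downscaled reconstruction, drawn from the approximately scale-invariant image distribution, should itself be recoverable from its own measurement). The following is a minimal 1-D toy sketch, not the paper's implementation; the block-averaging operator `A`, the `downscale` transform, and the placeholder "network" `f` are all illustrative assumptions.

```python
import numpy as np

def A(x, factor=2):
    # Toy forward operator: low-pass measurement by block averaging,
    # a stand-in for the blur/downsampling of a super-resolution problem.
    n = len(x) // factor
    return x[:n * factor].reshape(n, factor).mean(axis=1)

def downscale(x, factor=2):
    # Scale transform: a rescaled image, treated as a new sample from the
    # (approximately) scale-invariant image distribution.
    n = len(x) // factor
    return x[:n * factor].reshape(n, factor).mean(axis=1)

def scale_equivariance_loss(f, y, factor=2):
    """Hypothetical self-supervised objective combining
    (1) measurement consistency: A(f(y)) should match y, and
    (2) scale equivariance: the downscaled reconstruction should be
        recoverable from its own measurement.
    No ground-truth image is used anywhere."""
    x_hat = f(y)                         # reconstruct from the measurement alone
    mc = np.mean((A(x_hat, factor) - y) ** 2)
    x_scaled = downscale(x_hat, factor)  # rescaled image acts as a "clean" target
    se = np.mean((f(A(x_scaled, factor)) - x_scaled) ** 2)
    return mc + se

# Placeholder "network": naive nearest-neighbor upsampling, where a trained
# reconstruction model would normally go.
f = lambda y: np.repeat(y, 2)

y = np.array([1.0, 2.0, 3.0, 4.0])
loss = scale_equivariance_loss(f, y)
print(loss)  # → 0.25
```

In training, `f` would be a neural network and the loss minimized over a dataset of measurements; the equivariance term is what injects high-frequency supervision that the measurements alone cannot provide.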