🤖 AI Summary
To address the degradation of generalization performance that deep models for accelerated MRI reconstruction suffer under distribution shifts, this paper introduces structured pruning into the initialization phase of untrained unfolding networks, which the authors present as the first such application. The proposed one-shot sparsification strategy performs channel-level pruning immediately after weight initialization, guided by importance scoring and without any post-initialization fine-tuning. This significantly improves robustness in out-of-distribution scenarios, including cross-scanner and cross-protocol settings, achieving an average 1.2 dB PSNR gain on multi-center MRI data while matching or slightly exceeding in-distribution performance. Compared with conventional post-training pruning or dense models, the method jointly improves generalizability and computational efficiency, establishing a lightweight, stable paradigm for unfolding-based reconstruction networks that transfers across diverse clinical environments.
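The one-shot, channel-level pruning at initialization described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function name `prune_channels_at_init` and the L1-norm importance criterion are assumptions (L1 filter norm is a common importance proxy; the paper's exact scoring rule may differ).

```python
import numpy as np

def prune_channels_at_init(weight, keep_ratio=0.5):
    """One-shot channel pruning applied right after weight initialization.

    weight: conv kernel of shape (out_ch, in_ch, kH, kW).
    Importance score per output channel = L1 norm of its filter
    (an assumed proxy; the paper's criterion may differ).
    Returns the pruned kernel and the indices of the kept channels.
    """
    out_ch = weight.shape[0]
    # L1 importance score for each output channel
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * out_ch)))
    # keep the top-n_keep channels, preserving their original order
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weight[keep], keep

# Randomly initialized conv weights, pruned once with no fine-tuning step.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 3, 3))
w_pruned, kept = prune_channels_at_init(w, keep_ratio=0.25)
print(w_pruned.shape)  # (4, 8, 3, 3)
```

Because the scoring and pruning happen in a single pass over freshly initialized weights, the subsequent training (or untrained fitting) runs entirely on the smaller network, which is where the computational-efficiency gain comes from.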
📝 Abstract
Deep learning methods are highly effective for many image reconstruction tasks. However, the performance of supervised learned models can degrade when they are applied to distinct experimental settings at test time or in the presence of distribution shifts. In this study, we demonstrate that pruning deep image reconstruction networks at training time can improve their robustness to distribution shifts. In particular, we consider unrolled reconstruction architectures for accelerated magnetic resonance imaging and introduce a method for pruning unrolled networks (PUN) at initialization. Our experiments show that, compared to traditional dense networks, PUN offers improved generalization across a variety of experimental settings and even slight performance gains on in-distribution data.
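For readers unfamiliar with unrolled reconstruction architectures, the basic iteration they unfold can be sketched as below. This is a generic proximal-gradient-style sketch under assumed conventions (single-coil, mask-times-FFT forward operator), not the paper's specific architecture; the `denoise_steps` stand-ins are placeholders for the learned per-iteration regularizers that a real unrolled network would train.

```python
import numpy as np

def unrolled_recon(y, mask, denoise_steps, eta=1.0, n_iters=4):
    """Generic unrolled accelerated-MRI reconstruction sketch.

    y: undersampled k-space measurements (2D complex array).
    mask: binary sampling mask, same shape as y.
    denoise_steps: list of per-iteration "learned" denoisers
        (placeholders here; a network replaces them in practice).
    Each unrolled iteration alternates a data-consistency gradient
    step with a learned regularization (denoising) step.
    """
    x = np.fft.ifft2(y)  # zero-filled initialization
    for k in range(n_iters):
        # data-consistency gradient A^H(Ax - y), with A = mask * FFT
        resid = mask * np.fft.fft2(x) - y
        x = x - eta * np.fft.ifft2(mask * resid)
        # learned regularizer for this unrolled iteration
        x = denoise_steps[k % len(denoise_steps)](x)
    return x

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
mask = (rng.random((32, 32)) < 0.4).astype(float)  # ~2.5x undersampling
y = mask * np.fft.fft2(img)
identity = [lambda z: z]  # trivial stand-in for a learned denoiser
x_rec = unrolled_recon(y, mask, identity)
print(x_rec.shape)  # (32, 32)
```

With the identity "denoiser", the zero-filled image is a fixed point of the data-consistency step (the measured k-space entries are already consistent), so any actual reconstruction improvement comes from the learned regularizers; pruning targets the weights of those learned components.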