🤖 AI Summary
Biomedical image denoising is bottlenecked by high computational cost and by reliance on scarce clean ground-truth data. Method: We propose Noise2Detail, an ultra-lightweight unsupervised multi-stage denoising framework. Instead of supervised learning, it adopts a Noise2Noise-inspired self-supervised training strategy that requires no clean labels. Its core innovation is a noise-decoupled multi-stage pipeline: the first stage breaks spatial noise correlations to generate structural priors, while the second stage reconstructs fine-grained details directly from the noisy input. By combining a highly compact network architecture with stage-wise noise separation, it achieves real-time inference and high-fidelity detail preservation at minimal computational overhead. Contribution/Results: Extensive experiments show that Noise2Detail significantly outperforms existing data-free methods across diverse biomedical imaging tasks, including fluorescence microscopy, electron microscopy, and histopathology, enabling practical deployment in clinical settings where clean training data are scarce.
📝 Abstract
Current self-supervised denoising techniques achieve impressive results, yet their real-world application is frequently constrained by substantial computational and memory demands, forcing a compromise between inference speed and reconstruction quality. In this paper, we present an ultra-lightweight model that addresses this challenge, achieving both fast denoising and high-quality image restoration. Building on the Noise2Noise training framework, which removes the reliance on clean reference images or explicit noise modeling, we introduce an innovative multi-stage denoising pipeline named Noise2Detail (N2D). During inference, this approach disrupts the spatial correlations of noise patterns to produce intermediate smooth structures, which are subsequently refined to recapture fine details directly from the noisy input. Extensive testing shows that Noise2Detail surpasses existing dataset-free techniques in performance while requiring only a fraction of the computational resources. This combination of speed, low computational cost, and independence from clean training data makes it a valuable tool for biomedical imaging, where rare and complex imaging modalities make clean reference data scarce and fast inference is essential for practical use.
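To make the two-stage idea concrete, below is a minimal PyTorch sketch of an inference pipeline in the spirit of the abstract: a first stage that spatially decorrelates the noise (here approximated with pixel-unshuffle sub-sampling, which is an assumption on our part) to produce a smooth structural estimate, and a second stage that recovers fine detail directly from the noisy input conditioned on that estimate. All module names, layer widths, and the decorrelation mechanism are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of a lightweight two-stage denoiser (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(in_ch, out_ch, width=16):
    # Deliberately small network, reflecting the "ultra-lightweight" claim.
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, out_ch, 3, padding=1),
    )

class TwoStageDenoiser(nn.Module):
    def __init__(self, channels=1, factor=2):
        super().__init__()
        self.factor = factor
        # Stage 1: works on a sub-sampled view in which neighbouring noise
        # samples are pulled apart, yielding a smooth structural prior.
        self.structure_net = tiny_cnn(channels * factor ** 2, channels * factor ** 2)
        # Stage 2: refines fine detail from the original noisy input,
        # conditioned on the stage-1 structural prior.
        self.detail_net = tiny_cnn(channels * 2, channels)

    def forward(self, noisy):
        # Break spatial noise correlations via pixel-unshuffle (assumption:
        # one plausible realisation of the "disrupt correlations" step).
        sub = F.pixel_unshuffle(noisy, self.factor)
        smooth = F.pixel_shuffle(self.structure_net(sub), self.factor)
        # Recapture detail directly from the noisy input plus the smooth prior.
        return self.detail_net(torch.cat([noisy, smooth], dim=1))

# Usage: denoise a single-channel 64x64 patch.
model = TwoStageDenoiser(channels=1)
noisy = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    clean_est = model(noisy)
print(clean_est.shape)  # torch.Size([1, 1, 64, 64])
```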