🤖 AI Summary
In medical image segmentation, high-quality pixel-level annotations are costly to obtain, while sparse scribble annotations lead to noisy pseudo-labels and suboptimal model performance. To address this, we propose a weakly supervised learning framework with three key contributions: (1) a region-wise pseudo-label diffusion module that enforces local structural consistency; (2) a dynamic competitive pseudo-label selection mechanism incorporating adaptive thresholding and multi-stage consistency regularization to suppress error accumulation; and (3) scribble-guided contrastive learning to enhance boundary discrimination. On the ACDC and MSCMRseg benchmarks, our method achieves state-of-the-art performance using only scribble annotations, surpassing fully supervised baselines by up to 2.3% in Dice coefficient. The source code is publicly available.
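To make the adaptive-thresholding idea concrete, here is a minimal sketch of confidence-based pseudo-label selection for segmentation. This is an illustration of the general technique, not the paper's actual Dynamic Competitive Selection module: the per-class quantile threshold, the `IGNORE` index, and the function name are all assumptions for the example.

```python
import numpy as np

IGNORE = 255  # conventional index for pixels excluded from the loss (assumption)

def select_pseudo_labels(probs, quantile=0.8):
    """Keep only high-confidence predictions as pseudo-labels.

    probs: (C, H, W) softmax output of a segmentation model.
    The threshold adapts per class: the `quantile`-th confidence
    value among pixels currently predicted as that class, so easy
    and hard classes are filtered on their own scale.
    """
    conf = probs.max(axis=0)     # (H, W) top-class probability per pixel
    pred = probs.argmax(axis=0)  # (H, W) hard prediction per pixel
    pseudo = np.full(pred.shape, IGNORE, dtype=np.int64)
    for c in np.unique(pred):
        mask = pred == c
        thr = np.quantile(conf[mask], quantile)  # adaptive per-class cutoff
        pseudo[mask & (conf >= thr)] = c         # retain confident pixels only
    return pseudo
```

In a weakly supervised pipeline, the retained pixels would supplement the scribble labels in the segmentation loss, while `IGNORE` pixels contribute no gradient.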
📝 Abstract
In clinical medicine, precise image segmentation can provide substantial support to clinicians. However, achieving such precision typically requires large amounts of finely annotated data, which are costly to obtain. Scribble annotation offers a more efficient alternative, but training medical image segmentation models with such sparse supervision poses significant challenges. To address them, we introduce ScribbleVS, a novel framework that learns from scribble annotations. ScribbleVS incorporates a Regional Pseudo Labels Diffusion Module to expand the scope of supervision and reduce the impact of noise in pseudo labels, along with a Dynamic Competitive Selection module for more refined pseudo-label selection. Experiments on the ACDC and MSCMRseg datasets demonstrate promising results, achieving performance that even exceeds fully supervised methods. The code for this study is available at https://github.com/ortonwang/ScribbleVS.