🤖 AI Summary
To address the high cost of dense annotation in 3D medical image segmentation, this paper proposes a probability-aware weakly supervised learning framework that generates high-quality dense segmentation masks from sparse annotations (e.g., points, bounding boxes, or slice-level labels). Methodologically, we introduce a novel probability-driven pseudo-label generation mechanism, design a probabilistic multi-head self-attention network to model uncertainty-aware voxel-wise dependencies, and incorporate a confidence-weighted segmentation loss to explicitly leverage annotation reliability. The framework is inherently modality-agnostic, supporting both CT and MRI without modality-specific adaptation. Evaluated across multiple 3D medical imaging benchmarks, our approach achieves up to an 18.1% Dice score improvement over state-of-the-art weakly supervised methods, matching the performance of fully supervised baselines. The source code is publicly available.
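The summary only names the probability-driven pseudo-label mechanism without defining it. As a hedged illustration of the general idea (not the paper's exact method), one common pattern is to expand sparse point annotations into a dense mask by keeping a model's predicted class wherever its confidence clears a threshold, while always trusting the true sparse labels; the function name, threshold, and `ignore_index` convention below are illustrative assumptions:

```python
import numpy as np

def dense_pseudo_labels(prob_map, point_labels, threshold=0.7, ignore_index=-1):
    """Illustrative pseudo-label generation (assumed, not the paper's exact mechanism).

    prob_map:     (C, D, H, W) softmax class probabilities from a current model
    point_labels: (D, H, W) sparse annotations; ignore_index marks unlabeled voxels

    Returns a dense label volume: the argmax class where model confidence
    >= threshold, the sparse annotation where one exists, ignore_index elsewhere.
    """
    conf = prob_map.max(axis=0)        # per-voxel maximum class probability
    labels = prob_map.argmax(axis=0)   # per-voxel most likely class
    dense = np.where(conf >= threshold, labels, ignore_index)
    annotated = point_labels != ignore_index
    dense[annotated] = point_labels[annotated]  # real annotations always win
    return dense
```

Low-confidence voxels stay at `ignore_index` so a downstream loss can skip or down-weight them, which is the hook the confidence-weighted loss relies on.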
📝 Abstract
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning. Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation, but this paradigm relies heavily on labor-intensive and time-consuming densely annotated ground-truth labels, particularly for 3D volumes. To overcome this limitation, we propose a novel probability-aware weakly supervised learning pipeline specifically designed for 3D medical imaging. Our pipeline integrates three innovative components: a probability-based pseudo-label generation technique for synthesizing dense segmentation masks from sparse annotations, a Probabilistic Multi-head Self-Attention network for robust feature extraction within our Probabilistic Transformer Network, and a Probability-informed Segmentation Loss Function that incorporates annotation confidence into training. Our approach not only rivals the performance of fully supervised methods but also surpasses existing weakly supervised methods on CT and MRI datasets, achieving up to an 18.1% improvement in Dice score for certain organs. The code is available at https://github.com/runminjiang/PW4MedSeg.
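The Probability-informed Segmentation Loss is described above only as leveraging annotation confidence; its exact form is not given here. A minimal sketch of the underlying idea, assuming a per-voxel cross-entropy scaled by a pseudo-label confidence weight (the function name and signature are hypothetical), might look like:

```python
import numpy as np

def confidence_weighted_ce(pred_probs, pseudo_labels, confidence, eps=1e-8):
    """Sketch of a confidence-weighted per-voxel loss (assumed form, not the
    paper's exact definition).

    pred_probs:    (N, C) predicted class probabilities for N voxels
    pseudo_labels: (N,) integer pseudo-labels derived from sparse annotations
    confidence:    (N,) reliability weight in [0, 1] for each pseudo-label
    """
    n = pred_probs.shape[0]
    # cross-entropy of the probability assigned to each voxel's pseudo-label
    ce = -np.log(pred_probs[np.arange(n), pseudo_labels] + eps)
    # unreliable pseudo-labels contribute proportionally less to the loss
    return float((confidence * ce).sum() / (confidence.sum() + eps))
```

Down-weighting low-confidence voxels keeps noisy pseudo-labels from dominating training, which is the stated purpose of incorporating annotation confidence.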