Enhancing Dual Network Based Semi-Supervised Medical Image Segmentation with Uncertainty-Guided Pseudo-Labeling

📅 2025-09-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scarcity of labeled data in medical image segmentation, this paper proposes an uncertainty-guided dual-network semi-supervised framework. Methodologically, it integrates cross-consistency augmentation, KL-divergence-driven dynamic weight adjustment, entropy-based pseudo-label filtering, and feature-space contrastive learning to jointly suppress noisy pseudo-labels and reduce prediction uncertainty. Its key innovation lies in explicitly incorporating uncertainty quantification—via KL divergence—into both pseudo-label generation and weighting, while jointly optimizing feature representations through dual-network cross-supervision and self-supervised contrastive learning. Evaluated on the Left Atrial, NIH Pancreas, and BraTS-2019 datasets, the method achieves a Dice score of 89.95% on the Left Atrial dataset using only 10% labeled data, significantly outperforming existing semi-supervised approaches.
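The two uncertainty mechanisms named in the summary — entropy-based pseudo-label filtering and KL-divergence-driven weighting — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold `tau`, the `exp(-KL)` weighting form, and all function names are assumptions for illustration.

```python
import numpy as np

def entropy_filter_mask(probs, tau=0.5):
    """Keep voxels whose predictive entropy is below tau (low uncertainty).

    probs: (C, N) softmax probabilities, C classes, N voxels.
    """
    eps = 1e-8
    ent = -np.sum(probs * np.log(probs + eps), axis=0)
    return ent < tau

def kl_dynamic_weight(p_a, p_b):
    """Per-voxel weight that down-weights voxels where the two networks disagree.

    Uses KL(p_a || p_b); weight approaches 1 when the predictions agree
    and decays toward 0 as they diverge (illustrative choice of mapping).
    """
    eps = 1e-8
    kl = np.sum(p_a * (np.log(p_a + eps) - np.log(p_b + eps)), axis=0)
    return np.exp(-kl)

# Toy example: 2 classes, 3 voxels, predictions from the two networks.
p_a = np.array([[0.9, 0.5, 0.1],
                [0.1, 0.5, 0.9]])
p_b = np.array([[0.85, 0.4, 0.15],
                [0.15, 0.6, 0.85]])
mask = entropy_filter_mask(p_a, tau=0.5)  # the ambiguous 0.5/0.5 voxel is filtered out
w = kl_dynamic_weight(p_a, p_b)           # weights in (0, 1], lower where networks disagree
```

The mask would gate which voxels contribute pseudo-labels at all, while the weights would scale each surviving voxel's loss term.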

📝 Abstract
Despite the remarkable performance of supervised medical image segmentation models, relying on a large amount of labeled data is impractical in real-world situations. Semi-supervised learning approaches aim to alleviate this challenge by exploiting unlabeled data through pseudo-label generation. Yet, existing semi-supervised segmentation methods still suffer from noisy pseudo-labels and insufficient supervision within the feature space. To address these challenges, this paper proposes a novel semi-supervised 3D medical image segmentation framework based on a dual-network architecture. Specifically, we investigate a Cross Consistency Enhancement module using both cross-pseudo and entropy-filtered supervision to reduce noisy pseudo-labels, and we design a dynamic weighting strategy that adjusts the contributions of pseudo-labels using an uncertainty-aware mechanism (i.e., Kullback-Leibler divergence). In addition, we use a self-supervised contrastive learning mechanism to align uncertain voxel features with reliable class prototypes by effectively differentiating between trustworthy and uncertain predictions, thus reducing prediction uncertainty. Extensive experiments are conducted on three 3D segmentation datasets: Left Atrial, NIH Pancreas, and BraTS-2019. The proposed approach consistently exhibits superior performance across various settings (e.g., 89.95% Dice score on Left Atrial with 10% labeled data) compared to state-of-the-art methods. Furthermore, the usefulness of the proposed modules is validated via ablation experiments.
Problem

Research questions and friction points this paper is trying to address.

Reducing noisy pseudo-labels in semi-supervised medical segmentation
Addressing insufficient supervision in feature space alignment
Improving 3D medical image segmentation with limited labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-network architecture for semi-supervised segmentation
Uncertainty-guided pseudo-labeling with KL divergence
Self-supervised contrastive learning for feature alignment
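The third innovation — pulling uncertain voxel features toward reliable class prototypes — follows the familiar prototype/InfoNCE pattern. A minimal sketch under that assumption; the paper's exact loss, similarity measure, and temperature are not reproduced here, and all names are illustrative:

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Mean feature vector of confidently predicted voxels, one prototype per class.

    feats: (N, D) voxel features; labels: (N,) confident class assignments.
    """
    return np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])

def prototype_alignment_loss(uncertain_feats, protos, targets, temperature=0.1):
    """InfoNCE-style loss pulling each uncertain voxel feature toward its
    class prototype and away from the other prototypes."""
    # cosine similarity between each uncertain feature and every prototype
    f = uncertain_feats / np.linalg.norm(uncertain_feats, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = f @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # negative log-probability of the correct prototype, averaged over voxels
    return -log_probs[np.arange(len(targets)), targets].mean()
```

In this reading, prototypes come from voxels that pass the uncertainty filter, and the loss is applied only to the voxels that did not.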
Yunyao Lu
School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China, 541004
Yihang Wu
School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China, 541004
Ahmad Chaddad
Professor @ School of Artificial Intelligence, GUET; LIVIA-ETS
Artificial intelligence · Radiomics and radiogenomics · Signal & Image Processing · Electrical & Electronic Systems
Tareef Daqqaq
College of Medicine, Taibah University, Al Madinah, Saudi Arabia, 42353; Department of Radiology, Prince Mohammed Bin Abdulaziz Hospital, Ministry of National Guard Health Affairs, Al Madinah, Saudi Arabia, 42324
Reem Kateb
College of Computer Science and Engineering, Taibah University, Madinah, Saudi Arabia, 42353; College of Computer Science and Engineering, Jeddah University, Jeddah, Saudi Arabia, 23445