Efficient Prototype Consistency Learning in Medical Image Segmentation via Joint Uncertainty and Data Augmentation

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient prototype representation caused by scarce annotated data in semi-supervised medical image segmentation, this paper proposes a prototype consistency learning method built upon the Mean-Teacher framework. Our key contributions are: (1) a novel prototype consistency mechanism that jointly leverages uncertainty quantification—via entropy and confidence—and multi-view data augmentation; (2) a lightweight prototype network designed to minimize memory overhead; and (3) a global class prototype construction strategy that integrates both labeled and unlabeled data to enhance semantic robustness. Extensive experiments on three benchmark datasets—Left Atrium, Pancreas-NIH, and Type-B Aortic Dissection—demonstrate consistent and significant improvements over state-of-the-art methods, validating the effectiveness of enhanced prototype representation for segmentation accuracy.
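The joint uncertainty quantification described above can be illustrated with a minimal NumPy sketch. This is an assumed, simplified form of entropy-and-confidence filtering for pseudo-labels; the function name `reliable_mask`, the thresholds, and the array shapes are illustrative choices, not the paper's actual implementation:

```python
import numpy as np

def reliable_mask(probs, entropy_thresh=0.5, conf_thresh=0.8):
    """Flag pixels whose pseudo-labels look reliable by combining two
    uncertainty cues: low per-pixel entropy and high maximum confidence.
    `probs` has shape (C, H, W): per-class softmax probabilities."""
    entropy = -np.sum(probs * np.log(probs + 1e-8), axis=0)  # (H, W)
    confidence = probs.max(axis=0)                           # (H, W)
    return (entropy < entropy_thresh) & (confidence > conf_thresh)
```

Only pixels passing both tests would contribute to the unlabeled-data prototypes, which is one plausible way to realize the "reliable prototypes" mentioned in the summary.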

📝 Abstract
Recently, prototype learning has emerged in semi-supervised medical image segmentation and achieved remarkable performance. However, the scarcity of labeled data limits the expressiveness of prototypes in previous methods, potentially preventing prototypes from fully representing class embeddings. To overcome this issue, we propose an efficient prototype consistency learning method via joint uncertainty quantification and data augmentation (EPCL-JUDA) to enhance the semantic expression of prototypes, built on the Mean-Teacher framework. The concatenation of original and augmented labeled data is fed into the student network to generate expressive prototypes. A joint uncertainty quantification method is then devised to optimize pseudo-labels and generate reliable prototypes for the original and augmented unlabeled data separately. High-quality global prototypes for each class are formed by fusing labeled and unlabeled prototypes, and these are used to generate prototype-to-feature predictions for consistency learning. Notably, a prototype network is proposed to reduce the high memory requirements introduced by the augmented data. Extensive experiments on the Left Atrium, Pancreas-NIH, and Type-B Aortic Dissection datasets demonstrate EPCL-JUDA's superiority over previous state-of-the-art approaches, confirming the effectiveness of our framework. The code will be released soon.
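To make the prototype-to-feature idea in the abstract concrete, here is a minimal NumPy sketch of the standard building blocks such methods typically use: masked average pooling to form a class prototype, then cosine similarity between each pixel feature and that prototype. The function names and tensor shapes are assumptions for illustration, not the paper's released code:

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Class prototype as the mean feature over a class mask.
    features: (D, H, W) feature map; mask: boolean (H, W)."""
    return features[:, mask].mean(axis=1)  # (D,)

def prototype_similarity(features, prototype):
    """Cosine similarity between every pixel feature and one prototype,
    yielding an (H, W) map usable as a prototype-based prediction."""
    D, H, W = features.shape
    flat = features.reshape(D, -1)  # (D, H*W)
    sims = (flat * prototype[:, None]).sum(axis=0) / (
        np.linalg.norm(flat, axis=0) * np.linalg.norm(prototype) + 1e-8)
    return sims.reshape(H, W)
```

A consistency loss would then compare such similarity maps (or the segmentations derived from them) between the student and teacher branches.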
Problem

Research questions and friction points this paper is trying to address.

Enhance prototype expressiveness in medical image segmentation
Address labeled data scarcity via joint uncertainty and augmentation
Reduce memory demands in prototype learning frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint uncertainty and data augmentation for prototype learning
Prototype network reduces memory from augmented data
Fuses labeled and unlabeled data for global prototypes
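The global-prototype fusion in the last bullet might look like the following toy sketch; the weighted-average form and the `alpha` parameter are assumptions for illustration, not the paper's specified fusion rule:

```python
import numpy as np

def fuse_prototypes(p_labeled, p_unlabeled, alpha=0.5):
    """Form a global class prototype as a weighted average of the
    labeled-data prototype and the (uncertainty-filtered) unlabeled one."""
    return alpha * p_labeled + (1 - alpha) * p_unlabeled
```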