🤖 AI Summary
Medical image annotation scarcity severely limits semi-supervised segmentation accuracy, particularly in lesion boundary localization, thereby compromising clinical diagnostic reliability. To address this, we propose C3S3, a novel framework that synergistically combines complementary competition with contrastive selection. Specifically, we design an outcome-driven contrastive learning module to explicitly enhance the discriminability of boundary features, and a dynamic complementary competition module in which two sub-networks compete to generate high-confidence pseudo-labels. The framework integrates semi-supervised learning, dual-branch collaborative training, and multi-modal (MRI/CT) adaptability. Evaluated on public benchmarks, C3S3 achieves state-of-the-art performance: it improves the 95HD and ASD metrics by at least 6% while attaining leading accuracy in both boundary delineation and overall segmentation quality.
📝 Abstract
To address the inherent challenge of insufficiently annotated samples in the medical field, semi-supervised medical image segmentation (SSMIS) offers a promising solution. Despite achieving impressive results in delineating primary target areas, most current methodologies struggle to precisely capture subtle boundary details. This deficiency often leads to significant diagnostic inaccuracies. To tackle this issue, we introduce C3S3, a novel semi-supervised segmentation model that synergistically integrates complementary competition and contrastive selection. This design significantly sharpens boundary delineation and enhances overall precision. Specifically, we develop an $\textit{Outcome-Driven Contrastive Learning}$ module dedicated to refining boundary localization. Additionally, we incorporate a $\textit{Dynamic Complementary Competition}$ module that leverages two high-performing sub-networks to generate pseudo-labels, thereby further improving segmentation quality. The proposed C3S3 undergoes rigorous validation on two publicly accessible datasets, covering both MRI and CT scans. The results demonstrate that our method achieves superior performance compared to previous cutting-edge competitors. In particular, on the 95HD and ASD metrics, our approach achieves a notable improvement of at least $6\%$, highlighting a significant advance. The code is available at https://github.com/Y-TARL/C3S3.
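To make the dynamic complementary competition idea concrete, here is a minimal sketch, not the paper's actual implementation, of one plausible reading of the mechanism: at each pixel, the two sub-networks' softmax outputs compete, and the pseudo-label is taken from whichever branch is more confident there. The function name and the use of peak softmax probability as the confidence score are illustrative assumptions; see the repository for the authors' exact formulation.

```python
import numpy as np

def competitive_pseudo_labels(probs_a, probs_b):
    """Per-pixel competition between two sub-network predictions.

    probs_a, probs_b: (H, W, C) softmax outputs of the two branches.
    Returns (H, W) hard pseudo-labels, taking at each pixel the class
    predicted by whichever branch is more confident there.
    (Illustrative sketch only; confidence = peak softmax probability.)
    """
    conf_a = probs_a.max(axis=-1)       # (H, W) peak confidence, branch A
    conf_b = probs_b.max(axis=-1)       # (H, W) peak confidence, branch B
    labels_a = probs_a.argmax(axis=-1)  # (H, W) hard labels, branch A
    labels_b = probs_b.argmax(axis=-1)  # (H, W) hard labels, branch B
    # Branch with the higher per-pixel confidence wins the competition.
    return np.where(conf_a >= conf_b, labels_a, labels_b)

# Toy 1x2 image, 2 classes: branch A is more confident at pixel 0,
# branch B at pixel 1.
a = np.array([[[0.9, 0.1], [0.4, 0.6]]])
b = np.array([[[0.6, 0.4], [0.2, 0.8]]])
print(competitive_pseudo_labels(a, b))  # [[0 1]]
```

A thresholded variant (keeping only pixels where the winning confidence exceeds some cutoff) would be a natural extension for filtering out low-confidence pseudo-labels.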