BiPVL-Seg: Bidirectional Progressive Vision-Language Fusion with Global-Local Alignment for Medical Image Segmentation

📅 2025-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Medical image segmentation has long overlooked clinical textual information; existing vision-language models suffer from weak cross-modal alignment and significant semantic gaps due to modality-independent processing, inadequate adaptation to medical terminology, and unidirectional fusion. To address these limitations, we propose a bidirectional progressive vision-language fusion architecture featuring a novel global-local contrastive alignment objective that jointly optimizes class-level and concept-level cross-modal embeddings. Our end-to-end trainable framework integrates an enhanced Vision Transformer (ViT) and a domain-specific medical text encoder, incorporating multi-stage cross-attention fusion and contrastive learning loss. Evaluated on CT/MR benchmarks for multi-organ and lesion segmentation, our method consistently surpasses state-of-the-art approaches, achieving Dice score improvements of 2.1–4.7 percentage points—particularly pronounced in complex, multi-class scenarios.

Technology Category

Application Category

📝 Abstract
Medical image segmentation typically relies solely on visual data, overlooking the rich textual information clinicians use for diagnosis. Vision-language models attempt to bridge this gap, but existing approaches often process visual and textual features independently, resulting in weak cross-modal alignment. Simple fusion techniques fail due to the inherent differences between spatial visual features and sequential text embeddings. Additionally, medical terminology deviates from general language, limiting the effectiveness of off-the-shelf text encoders and further hindering vision-language alignment. We propose BiPVL-Seg, an end-to-end framework that integrates vision-language fusion and embedding alignment through architectural and training innovations, where both components reinforce each other to enhance medical image segmentation. BiPVL-Seg introduces bidirectional progressive fusion in the architecture, which facilitates stage-wise information exchange between vision and text encoders. Additionally, it incorporates global-local contrastive alignment, a training objective that enhances the text encoder's comprehension by aligning text and vision embeddings at both class and concept levels. Extensive experiments on diverse medical imaging benchmarks across CT and MR modalities demonstrate BiPVL-Seg's superior performance when compared with state-of-the-art methods in complex multi-class segmentation. Source code is available in this GitHub repository.
Problem

Research questions and friction points this paper is trying to address.

Bridges vision-language gap in medical image segmentation
Enhances cross-modal alignment via bidirectional progressive fusion
Improves text encoder comprehension with global-local contrastive alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional progressive vision-language fusion
Global-local contrastive alignment training
End-to-end framework for medical segmentation
🔎 Similar Papers
No similar papers found.