Representation Learning with Semantic-aware Instance and Sparse Token Alignments

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical limitation in conventional medical vision–language pretraining (VLP), where unpaired samples are indiscriminately treated as negative examples, disregarding semantic similarities across images or reports from different patients. This oversight introduces false-negative interference and degrades representation quality. To mitigate this, the authors propose SISTA, a multi-level alignment framework that, for the first time in medical VLP, jointly models instance-level and fine-grained sparse alignment. Specifically, SISTA leverages report semantic similarity to identify and eliminate false negatives, enabling semantics-aware instance alignment, while simultaneously establishing sparse correspondences between image patches and relevant textual tokens. The method substantially improves transfer performance on downstream tasks—including image classification, segmentation, and object detection—with particularly pronounced gains in fine-grained tasks under limited annotation settings.
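The false-negative elimination described above can be illustrated with a minimal sketch. The function name, the similarity threshold, and the use of a precomputed inter-report similarity matrix are assumptions for illustration; the paper's actual formulation may differ. The idea: in the InfoNCE denominator, mask out unpaired samples whose reports are semantically close to the anchor's report, so they are no longer pushed apart as negatives.

```python
import torch
import torch.nn.functional as F

def semantic_aware_infonce(img_emb, txt_emb, report_sim, tau=0.07, sim_thresh=0.9):
    """Contrastive image-to-text loss that masks likely false negatives.

    img_emb, txt_emb: (N, D) embeddings of paired image-report samples.
    report_sim: (N, N) inter-report similarity (e.g. from a frozen text
        encoder); off-diagonal entries above `sim_thresh` are treated as
        false negatives and excluded from the denominator.
    Note: the threshold-based masking here is a hypothetical stand-in for
    the paper's semantics-aware instance alignment.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau                       # (N, N) similarity logits
    n = logits.size(0)
    # Keep the diagonal (true pairs); mask only off-diagonal near-duplicates.
    false_neg = (report_sim > sim_thresh) & ~torch.eye(n, dtype=torch.bool)
    logits = logits.masked_fill(false_neg, float("-inf"))
    targets = torch.arange(n)                          # i-th image pairs with i-th report
    return F.cross_entropy(logits, targets)
```

With `report_sim` set to the identity matrix, no off-diagonal pair exceeds the threshold and the loss reduces to standard InfoNCE.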

📝 Abstract
Medical contrastive vision-language pre-training (VLP) has demonstrated significant potential in improving performance on downstream tasks. Traditional approaches typically employ contrastive learning, treating paired image-report samples as positives and unpaired ones as negatives. However, in medical datasets, there can be substantial similarities between images or reports from different patients. Rigidly treating all unpaired samples as negatives can disrupt the underlying semantic structure and degrade the quality of the learned representations. In this paper, we propose a multi-level alignment framework, Representation Learning with Semantic-aware Instance and Sparse Token Alignments (SISTA), which exploits the semantic correspondence between medical images and radiology reports at two levels, i.e., the image-report and patch-word levels. Specifically, we improve conventional contrastive learning by incorporating inter-report similarity to eliminate false negatives, and we introduce a method to effectively align image patches with relevant word tokens. Experimental results demonstrate the effectiveness of the proposed framework in improving transfer performance across different datasets on three downstream tasks: image classification, image segmentation, and object detection. Notably, our framework achieves significant improvements in fine-grained tasks even with limited labeled data. Code and pre-trained models will be made available.
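The patch-word level of the framework can likewise be sketched. The function below is a hypothetical illustration of sparse patch-token alignment: each report token is matched only to its top-k most similar image patches rather than attending densely over all patches. The top-k selection and the scoring by mean similarity are assumptions; the paper's actual sparse correspondence mechanism may differ.

```python
import torch
import torch.nn.functional as F

def sparse_token_alignment(patch_emb, word_emb, top_k=3):
    """Sparse alignment score between image patches and report tokens.

    patch_emb: (P, D) patch embeddings for one image.
    word_emb:  (W, D) token embeddings for the paired report.
    For each token, only its top-k most similar patches contribute, so
    each word is grounded in a small, sparse set of image regions.
    """
    p = F.normalize(patch_emb, dim=-1)
    w = F.normalize(word_emb, dim=-1)
    sim = w @ p.t()                                   # (W, P) cosine similarities
    k = min(top_k, sim.size(1))
    topk = sim.topk(k=k, dim=1).values                # keep top-k patches per token
    return topk.mean()                                # scalar alignment score in [-1, 1]
```

In a training objective, a score like this for the true image-report pair would be contrasted against scores for unpaired combinations, encouraging fine-grained patch-word correspondence.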
Problem

Research questions and friction points this paper is trying to address.

medical vision-language pre-training
contrastive learning
false negatives
semantic similarity
representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

contrastive learning
semantic alignment
vision-language pre-training
medical imaging
sparse token matching
👥 Authors
Phuoc-Nguyen Bui (Sungkyunkwan University)
Toan Duc Nguyen (Sungkyunkwan University)
Junghyun Bum (Sungkyunkwan University)
Duc-Tai Le (Sungkyunkwan University)
Hyunseung Choo (Sungkyunkwan University)