🤖 AI Summary
This work addresses a critical limitation in conventional medical vision–language pretraining (VLP), where unpaired samples are indiscriminately treated as negative examples, disregarding semantic similarities between images or reports from different patients. This oversight introduces false-negative interference and degrades representation quality. To mitigate this, the authors propose SISTA, a multi-level alignment framework that, for the first time in medical VLP, jointly models instance-level alignment and fine-grained sparse token alignment. Specifically, SISTA leverages report semantic similarity to identify and eliminate false negatives, enabling semantics-aware instance alignment, while simultaneously establishing sparse correspondences between image patches and relevant textual tokens. The method substantially improves transfer performance on downstream tasks—including image classification, segmentation, and object detection—with particularly pronounced gains on fine-grained tasks under limited-annotation settings.
📝 Abstract
Medical contrastive vision-language pre-training (VLP) has demonstrated significant potential for improving performance on downstream tasks. Traditional approaches typically employ contrastive learning, treating paired image-report samples as positives and unpaired ones as negatives. However, in medical datasets there can be substantial similarity between images or reports from different patients. Rigidly treating all unpaired samples as negatives can disrupt the underlying semantic structure and degrade the quality of the learned representations. In this paper, we propose a multi-level alignment framework, Representation Learning with Semantic-aware Instance and Sparse Token Alignments (SISTA), which exploits the semantic correspondence between medical images and radiology reports at two levels, i.e., the image-report and patch-word levels. Specifically, we improve conventional contrastive learning by incorporating inter-report similarity to eliminate false negatives, and we introduce a method to effectively align image patches with relevant word tokens. Experimental results demonstrate the effectiveness of the proposed framework in improving transfer performance across different datasets on three downstream tasks: image classification, image segmentation, and object detection. Notably, our framework achieves significant improvements on fine-grained tasks even with limited labeled data. Code and pre-trained models will be made available.
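The abstract's core idea at the instance level — removing false negatives from the contrastive denominator when two patients' reports are semantically near-identical — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the threshold value, and the use of report-embedding cosine similarity as the inter-report measure are all illustrative assumptions:

```python
import numpy as np

def semantics_aware_info_nce(img_emb, txt_emb, tau=0.07, sim_thresh=0.9):
    """InfoNCE image-to-report loss with false-negative elimination (sketch).

    Unpaired samples whose report embeddings are more similar than
    `sim_thresh` (an assumed hyperparameter) to the anchor's report are
    treated as false negatives and masked out of the denominator.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=-1, keepdims=True)

    logits = img @ txt.T / tau       # image-to-report similarity logits
    report_sim = txt @ txt.T         # inter-report similarity

    n = logits.shape[0]
    eye = np.eye(n, dtype=bool)
    # False negatives: off-diagonal pairs whose reports are near-duplicates.
    false_neg = (report_sim > sim_thresh) & ~eye
    # Masking with -inf removes them from the softmax denominator.
    logits = np.where(false_neg, -np.inf, logits)

    # Row-wise cross-entropy with the diagonal (paired report) as positive.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), np.arange(n)].mean()
```

A symmetric report-to-image term and the sparse patch-word alignment described in the abstract would sit alongside this loss in the full framework.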