🤖 AI Summary
Biomedical vision-language representation learning faces challenges in capturing fine-grained clinical semantics due to the inherent complexity of radiology reports and the limitations of conventional contrastive learning in modeling nuanced semantic structures.
Method: We propose a novel paradigm integrating perturbed report discrimination and multi-scale attention-guided contrastive learning. Specifically, we introduce a word-preserving, semantics-disrupting token-level perturbation mechanism to enforce deep semantic understanding; we design multi-scale cross-modal attention that contrasts image regions with text subwords to improve fine-grained alignment; and we unify both objectives in an end-to-end vision-language pretraining framework.
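The token-level perturbations keep the report's words intact while breaking its semantic structure. A minimal sketch of one plausible such perturbation, sentence-internal word shuffling, is below; the function name and the choice of shuffling as the perturbation type are illustrative assumptions, not the paper's exact recipe.

```python
import random

def shuffle_perturbation(report: str, seed: int = 0) -> str:
    """Perturb a report by shuffling the words inside each sentence.

    The word multiset is preserved (same tokens), but the semantic
    structure of each sentence is disrupted -- one simple instance of
    a word-preserving, semantics-disrupting perturbation (assumed here
    for illustration).
    """
    rng = random.Random(seed)
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    perturbed = []
    for sent in sentences:
        words = sent.split()
        shuffled = words[:]
        # Reshuffle until the order actually changes (for multi-word sentences).
        while len(words) > 1 and shuffled == words:
            rng.shuffle(shuffled)
        perturbed.append(" ".join(shuffled))
    return ". ".join(perturbed) + "."

original = "The heart is mildly enlarged. No focal consolidation is seen."
print(shuffle_perturbation(original))
```

During pretraining, the model would then be asked to pick the original report over such perturbed variants given the paired image, which forces it to attend to sentence-level semantics rather than bag-of-words statistics.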
Contribution/Results: Our approach outperforms strong baselines across downstream tasks—including report generation, cross-modal retrieval, and disease classification—demonstrating that the learned representations exhibit superior semantic fidelity and robustness.
📝 Abstract
Vision-language models pre-trained on large-scale unlabeled biomedical images and their associated reports learn generalizable semantic representations. These multi-modal representations can benefit various downstream tasks in the biomedical domain. Contrastive learning is widely used to pre-train vision-language models on general natural images and their captions. Despite its popularity, we find that biomedical texts carry complex, domain-specific semantics that common contrastive methods often neglect. To address this issue, we propose a novel method, perturbed report discrimination, for pre-training biomedical vision-language models. First, we curate a set of text perturbation methods that keep the same words but disrupt the semantic structure of the sentence. Next, we apply different types of perturbation to reports, and use the model to distinguish the original report from the perturbed ones given the associated image. In parallel, we make our method sensitive to a finer level of granularity in both modalities by contrasting attention-weighted image sub-regions and sub-words in the image-text pairs. We conduct extensive experiments on multiple downstream tasks, and our method outperforms strong baseline methods. The results demonstrate that our approach learns more semantically meaningful and robust multi-modal representations.
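The fine-grained objective contrasts attention-weighted image sub-regions against report sub-words. A minimal NumPy sketch of one pair's alignment score is given below; the function name, temperature value, and the specific attention direction (sub-words attending over regions) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def region_subword_alignment(regions, subwords, temperature=0.1):
    """Attention-weighted local alignment score for one image-report pair.

    regions:  (R, d) array of image sub-region features.
    subwords: (S, d) array of text sub-word features.
    Each sub-word attends over the image regions; its attended visual
    context is compared back to the sub-word, and the per-sub-word
    similarities are averaged into a single alignment score.
    """
    # Cross-modal attention: sub-words attend over image regions.
    logits = subwords @ regions.T / temperature           # (S, R)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)               # softmax over regions
    context = attn @ regions                              # (S, d) attended visual context
    # Cosine similarity between each sub-word and its visual context.
    num = (context * subwords).sum(axis=1)
    den = np.linalg.norm(context, axis=1) * np.linalg.norm(subwords, axis=1)
    return float((num / (den + 1e-8)).mean())

rng = np.random.default_rng(0)
regions = rng.normal(size=(49, 64))    # e.g. a 7x7 visual feature map
subwords = rng.normal(size=(12, 64))   # tokenized report sub-words
score = region_subword_alignment(regions, subwords)
```

In a contrastive setup, such scores for matched pairs would be pushed above the scores of mismatched image-report pairs in the batch, complementing the global perturbed-report discrimination objective.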