Enhancing Biomedical Multi-modal Representation Learning with Multi-scale Pre-training and Perturbed Report Discrimination

📅 2024-06-25
🏛️ Conference on Algebraic Informatics
📈 Citations: 1
Influential: 0
🤖 AI Summary
Biomedical vision-language representation learning struggles to capture fine-grained clinical semantics: radiology reports are inherently complex, and conventional contrastive learning does not model their nuanced semantic structure. Method: We propose a pre-training paradigm that combines perturbed report discrimination with multi-scale attention-guided contrastive learning. Specifically, we introduce token-level report perturbations that keep the words but disrupt the sentence's semantic structure, and train the model to distinguish the original report from perturbed versions given the associated image; in parallel, we contrast attention-weighted image sub-regions with text subwords to improve fine-grained cross-modal alignment, training both objectives end to end. Contribution/Results: Our approach outperforms strong baselines on downstream tasks—including report generation, cross-modal retrieval, and disease classification—demonstrating that the learned representations exhibit superior semantic fidelity and robustness.

📝 Abstract
Vision-language models pre-trained on large-scale unlabeled biomedical images and associated reports learn generalizable semantic representations. These multi-modal representations can benefit various downstream tasks in the biomedical domain. Contrastive learning is widely used to pre-train vision-language models on general natural images and their captions. Despite its popularity, we found that biomedical texts have complex and domain-specific semantics that common contrastive methods often neglect. To address this issue, we propose a novel method, perturbed report discrimination, for pre-training biomedical vision-language models. First, we curate a set of text perturbation methods that keep the same words but disrupt the semantic structure of the sentence. Next, we apply different types of perturbation to reports and use the model to distinguish the original report from the perturbed ones given the associated image. In parallel, we enhance the sensitivity of our method to a higher level of granularity in both modalities by contrasting attention-weighted image sub-regions and sub-words in the image-text pairs. We conduct extensive experiments on multiple downstream tasks, and our method outperforms strong baseline methods. The results demonstrate that our approach learns more semantically meaningful and robust multi-modal representations.
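The text perturbations described above can be sketched in a few lines. The helper below is a hypothetical illustration (the paper's exact perturbation set is not reproduced here): each mode keeps the report's words but disrupts its structure, producing the hard negatives the model must discriminate from the original.

```python
import random

def perturb_report(report: str, mode: str = "word_swap", seed: int = 0) -> str:
    """Create a hard negative: same words, disrupted semantic structure.

    Hypothetical helper sketching the kind of perturbation described in
    the abstract; not the authors' implementation.
    """
    rng = random.Random(seed)
    if mode == "sentence_shuffle":
        # Reorder sentences so cross-sentence clinical logic is broken.
        sentences = [s.strip() for s in report.split(".") if s.strip()]
        rng.shuffle(sentences)
        return ". ".join(sentences) + "."
    elif mode == "word_swap":
        # Swap randomly chosen pairs of adjacent words within the report.
        words = report.split()
        for _ in range(max(1, len(words) // 5)):
            i = rng.randrange(len(words) - 1)
            words[i], words[i + 1] = words[i + 1], words[i]
        return " ".join(words)
    raise ValueError(f"unknown mode: {mode}")
```

Given an image, the pre-training objective then asks the model to identify the unperturbed report among such variants, which forces it to attend to semantic structure rather than word identity alone.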
Problem

Research questions and friction points this paper is trying to address.

Improving biomedical vision-language models via multi-scale pre-training
Addressing neglected complex semantics in biomedical contrastive learning
Enhancing semantic robustness with perturbed report discrimination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale pre-training for biomedical vision-language models
Perturbed report discrimination for deeper semantic understanding
Attention-weighted fine-grained contrastive alignment
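The attention-weighted fine-grained contrast can be illustrated with a small NumPy sketch. `fine_grained_similarity` is a hypothetical name, and this is one plausible instantiation of region–subword alignment under stated assumptions (L2-normalized embeddings, softmax attention over regions), not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fine_grained_similarity(regions, subwords, tau=0.1):
    """Attention-weighted alignment between image regions and text subwords.

    regions:  (R, d) L2-normalized image sub-region embeddings
    subwords: (S, d) L2-normalized text subword embeddings
    Each subword attends over the regions, and the attended region mixture
    is compared back to that subword; averaging gives a scalar image-text
    similarity usable inside a contrastive loss.
    """
    sim = subwords @ regions.T              # (S, R) subword-to-region affinities
    attn = softmax(sim / tau, axis=-1)      # each subword attends over regions
    attended = attn @ regions               # (S, d) region context per subword
    attended /= np.linalg.norm(attended, axis=-1, keepdims=True)
    # mean cosine similarity between subwords and their attended regions
    return float((attended * subwords).sum(-1).mean())
```

In a contrastive setup, this local score would be computed for matched and mismatched image-text pairs, complementing the global image-report contrast with finer-grained supervision.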
Xinliu Zhong
PhD student in Computer Science and Informatics, Emory University
Artificial Intelligence
K. Batmanghelich
Department of Electrical and Computer Engineering, Boston University, Boston, USA
Li Sun
Department of Electrical and Computer Engineering, Boston University, Boston, USA