Meta-Entity Driven Triplet Mining for Aligning Medical Vision-Language Models

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current medical vision-language models for chest X-ray–report alignment rely on contrastive learning, which only enforces coarse-grained separation of disease categories while neglecting clinically critical fine-grained attributes—such as lesion location, size, and severity—leading to representation bias. To address this, we propose MedTrim: the first meta-entity-driven multimodal triplet learning framework. It integrates ontology-guided pathological meta-entity recognition—explicitly modeling disease categories, descriptive adjectives, and spatial descriptors—and introduces a multi-dimensional similarity scoring mechanism coupled with a cross-modal triplet alignment loss, enabling weakly supervised fine-grained semantic alignment. This end-to-end paradigm significantly outperforms state-of-the-art methods on both retrieval and classification benchmarks, enhancing lesion interpretability and alleviating clinical interpretation burden.

📝 Abstract
Diagnostic imaging relies on interpreting both images and radiology reports, but the growing data volumes place significant pressure on medical experts, yielding increased errors and workflow backlogs. Medical vision-language models (med-VLMs) have emerged as a powerful framework to efficiently process multimodal imaging data, particularly in chest X-ray (CXR) evaluations, albeit their performance hinges on how well image and text representations are aligned. Existing alignment methods, predominantly based on contrastive learning, prioritize separation between disease classes over segregation of fine-grained pathology attributes like location, size or severity, leading to suboptimal representations. Here, we propose MedTrim (Meta-entity-driven Triplet mining), a novel method that enhances image-text alignment through multimodal triplet learning synergistically guided by disease class as well as adjectival and directional pathology descriptors. Unlike common alignment methods that separate broad disease classes, MedTrim leverages structured meta-entity information to preserve subtle but clinically significant intra-class variations. For this purpose, we first introduce an ontology-based entity recognition module that extracts pathology-specific meta-entities from CXR reports, as annotations on pathology attributes are rare in public datasets. For refined sample selection in triplet mining, we then introduce a novel score function that captures an aggregate measure of inter-sample similarity based on disease classes and adjectival/directional descriptors. Lastly, we introduce a multimodal triplet alignment objective for explicit within- and cross-modal alignment between samples sharing detailed pathology characteristics. Our demonstrations indicate that MedTrim improves performance in downstream retrieval and classification tasks compared to state-of-the-art alignment methods.
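The abstract describes two of MedTrim's core pieces: extracting pathology meta-entities (disease classes plus adjectival and directional descriptors) from report text via an ontology, and scoring inter-sample similarity as an aggregate over those entity groups for triplet mining. A minimal sketch of this idea is below; the ontology terms, group weights, substring matching, and weighted-Jaccard score form are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of meta-entity extraction and aggregate similarity
# scoring in the spirit of MedTrim. All terms, weights, and the score
# form are assumptions for illustration only.

def extract_meta_entities(report: str, ontology: dict) -> dict:
    """Collect ontology terms (disease / adjectival / directional) found in a report."""
    text = report.lower()
    return {
        group: {term for term in terms if term in text}
        for group, terms in ontology.items()
    }

def jaccard(a: set, b: set) -> float:
    """Set overlap; 0.0 when both sets are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity_score(e1: dict, e2: dict, weights=None) -> float:
    """Aggregate similarity over entity groups (weights are assumed, not from the paper)."""
    weights = weights or {"disease": 0.5, "adjectival": 0.25, "directional": 0.25}
    return sum(w * jaccard(e1[g], e2[g]) for g, w in weights.items())

# Toy ontology and reports to exercise the score.
ontology = {
    "disease": {"pneumonia", "effusion", "atelectasis"},
    "adjectival": {"mild", "moderate", "severe", "small", "large"},
    "directional": {"left", "right", "bilateral", "upper", "lower"},
}
r1 = "Mild left lower lobe pneumonia."
r2 = "Severe left pneumonia with small effusion."
score = similarity_score(extract_meta_entities(r1, ontology),
                         extract_meta_entities(r2, ontology))
```

In triplet mining, a score like this would rank candidate positives (high aggregate similarity to the anchor, sharing fine-grained descriptors, not just the disease class) and negatives (low similarity) for each anchor sample.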
Problem

Research questions and friction points this paper is trying to address.

Aligning medical image-text representations for better diagnostics
Improving fine-grained pathology attribute segregation in med-VLMs
Enhancing intra-class variation preservation in disease classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-entity-driven triplet mining for alignment
Ontology-based entity recognition from reports
Multimodal triplet alignment with pathology descriptors
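The multimodal triplet alignment objective named above can be sketched with a standard triplet margin loss over cosine similarity, applied cross-modally (e.g., an image embedding as anchor, text embeddings of meta-entity-similar and dissimilar reports as positive and negative). The margin value and this exact loss form are generic assumptions, not the paper's objective.

```python
# Generic cross-modal triplet margin loss sketch (assumed form, not
# MedTrim's exact objective).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_margin_loss(anchor, positive, negative, margin: float = 0.2) -> float:
    """Penalize when the positive is not at least `margin` more similar
    to the anchor than the negative is."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))

# Anchor: image embedding; positive/negative: text embeddings chosen by
# the meta-entity similarity score (toy 2-D vectors here).
img = np.array([1.0, 0.0])
txt_pos = np.array([0.9, 0.1])   # report sharing fine-grained descriptors
txt_neg = np.array([0.1, 0.9])   # report with a different pathology profile
loss = triplet_margin_loss(img, txt_pos, txt_neg)
```

A within-modal term (image-image, text-text) of the same form could be added and summed with the cross-modal term, which is how explicit within- and cross-modal alignment is typically combined.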
Saban Ozturk
Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Management Information Systems, Ankara Haci Bayram Veli University, Ankara 06570, Turkey
M. B. Yilmaz
Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
Muti Kara
Bilkent University
M. T. Yavuz
Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
Aykut Koç
Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
Tolga Çukur
Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Neuroscience, Bilkent University, Ankara 06800, Turkey