🤖 AI Summary
Existing medical multimodal large language models (med-MLLMs) such as LLaVA-Med rely on large-scale instruction-following datasets and autoregressive pretraining, which leads to weak vision–language alignment and high data-curation costs. To address this, LoGra-Med introduces a multi-graph alignment algorithm that enforces triplet correlations across image modalities, conversation-based descriptions, and extended captions, helping the model capture contextual meaning, handle linguistic variability, and build cross-modal associations between visuals and text. An efficient end-to-end learning scheme based on black-box gradient estimation makes training LLaMA-7B practical at scale. Using only 10% of the standard pretraining data, LoGra-Med surpasses LLaVA-Med by 20.13% on VQA-RAD and nearly matches the full-data score (72.52% vs. 72.64%). It also outperforms state-of-the-art models such as BiomedGPT on visual chatbot tasks and RadFM on zero-shot image classification.
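To make the triplet alignment idea concrete, here is a minimal sketch of one way such an objective could look: three pairwise InfoNCE terms pulling together the image, conversation-based description, and extended-caption embeddings of the same case. The function and tensor names (`info_nce`, `img_emb`, `conv_emb`, `cap_emb`) are illustrative placeholders, and this is not the exact LoGra-Med objective, which is formulated as a graph alignment over matched correspondences.

```python
# Sketch only: pairwise InfoNCE terms over three views of the same example
# (image, conversation-based description, extended caption). Assumes each
# encoder has already produced a (B, D) batch of embeddings.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings; matched pairs share a row index."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # positives sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def triplet_alignment_loss(img_emb: torch.Tensor,
                           conv_emb: torch.Tensor,
                           cap_emb: torch.Tensor) -> torch.Tensor:
    """Sum of the three pairwise alignment terms:
    image-conversation, image-caption, conversation-caption."""
    return (info_nce(img_emb, conv_emb)
            + info_nce(img_emb, cap_emb)
            + info_nce(conv_emb, cap_emb))
```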
📝 Abstract
State-of-the-art medical multi-modal large language models (med-MLLMs), such as LLaVA-Med and BiomedGPT, leverage instruction-following data in pre-training. However, these models primarily focus on scaling model size and data volume to boost performance while relying mainly on autoregressive learning objectives. Surprisingly, we reveal that such learning schemes can result in weak alignment between the vision and language modalities, making these models highly reliant on extensive pre-training datasets, a significant challenge in medical domains given the expensive and time-consuming nature of curating high-quality instruction-following instances. We address this with LoGra-Med, a new multi-graph alignment algorithm that enforces triplet correlations across image modalities, conversation-based descriptions, and extended captions. This helps the model capture contextual meaning, handle linguistic variability, and build cross-modal associations between visuals and text. To scale our approach, we designed an efficient end-to-end learning scheme using black-box gradient estimation, enabling faster LLaMA-7B training. Our results show LoGra-Med matches LLaVA-Med's performance when pre-trained on 600K image-text pairs for medical VQA and significantly outperforms it when trained on only 10% of the data. For example, on VQA-RAD, we exceed LLaVA-Med by 20.13% and nearly match the 100% pre-training score (72.52% vs. 72.64%). We also surpass SOTA methods such as BiomedGPT on visual chatbots and RadFM on zero-shot image classification with VQA, highlighting the effectiveness of multi-graph alignment.
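The abstract credits black-box gradient estimation for making end-to-end LLaMA-7B training efficient. As a rough illustration of that general technique (not the paper's specific estimator or where it is applied in the pipeline), the sketch below shows a zeroth-order, SPSA-style gradient estimate built purely from forward evaluations of a non-differentiable loss; `spsa_gradient` and `loss_fn` are hypothetical names introduced here for illustration.

```python
# Illustrative zeroth-order (SPSA-style) gradient estimate for a scalar loss
# that cannot be backpropagated through. This is a generic sketch of
# "black-box gradient estimation", not the estimator used in LoGra-Med.
import torch

def spsa_gradient(loss_fn, params: torch.Tensor,
                  eps: float = 1e-3, n_samples: int = 8) -> torch.Tensor:
    """Estimate d(loss)/d(params) from forward evaluations only."""
    grad = torch.zeros_like(params)
    for _ in range(n_samples):
        # Rademacher (+/-1) perturbation direction
        delta = torch.randint(0, 2, params.shape, device=params.device).float() * 2 - 1
        loss_plus = loss_fn(params + eps * delta)
        loss_minus = loss_fn(params - eps * delta)
        # Two-sided finite difference along the sampled direction
        grad += (loss_plus - loss_minus) / (2 * eps) * delta
    return grad / n_samples
```

Averaging over several random directions trades extra forward passes for a lower-variance estimate, which is the usual knob in this family of methods.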