AI Summary
Medical large vision-language models (Med-LVLMs) suffer from pervasive factual hallucinations in clinical diagnosis, driven by the scarcity of high-quality annotated data, distribution shift between training and deployment, and the weak cross-modal alignment and poor domain generalization of existing retrieval-augmented generation (RAG) methods. To address these challenges, we propose MMed-RAG, the first general-purpose multimodal RAG framework tailored to Med-LVLMs. It integrates domain-aware retrieval, adaptive retrieved-context selection, and provable RAG-based preference fine-tuning to enforce strong factual alignment both across modalities and between model outputs and real-world clinical knowledge. Evaluated on five medical benchmarks spanning radiology, ophthalmology, and pathology, MMed-RAG achieves an average 43.8% improvement in the factual accuracy of medical visual question answering and report generation, significantly outperforming state-of-the-art fine-tuning and RAG baselines.
Abstract
Artificial Intelligence (AI) has demonstrated significant potential in healthcare, particularly in disease diagnosis and treatment planning. Recent progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new possibilities for interactive diagnostic tools. However, these models often suffer from factual hallucination, which can lead to incorrect diagnoses. Fine-tuning and retrieval-augmented generation (RAG) have emerged as methods to address these issues. However, the limited availability of high-quality data and distribution shifts between training and deployment data constrain the applicability of fine-tuning methods. Although RAG is lightweight and effective, existing RAG-based approaches are not sufficiently general across different medical domains and can potentially cause misalignment issues, both between modalities and between the model and the ground truth. In this paper, we propose a versatile multimodal RAG system, MMed-RAG, designed to enhance the factuality of Med-LVLMs. Our approach introduces a domain-aware retrieval mechanism, an adaptive retrieved-context selection method, and a provable RAG-based preference fine-tuning strategy. These innovations make the RAG process sufficiently general and reliable, significantly improving alignment when introducing retrieved contexts. Experimental results across five medical datasets (covering radiology, ophthalmology, and pathology) on medical VQA and report generation demonstrate that MMed-RAG achieves an average improvement of 43.8% in the factual accuracy of Med-LVLMs. Our data and code are available at https://github.com/richard-peng-xia/MMed-RAG.
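To make the first two components concrete, the sketch below is a minimal toy illustration (ours, not the authors' code): every name, the toy embeddings, and the 0.5 similarity threshold are hypothetical stand-ins for the learned encoders and tuned thresholds a real system would use. It routes a query to one domain-specific corpus (domain-aware retrieval), then keeps only retrieved contexts above a similarity cutoff (adaptive context selection).

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical per-domain corpora: (report snippet, toy embedding) pairs.
# A real system would embed images and reports with a learned encoder.
CORPORA = {
    "radiology": [
        ("no acute cardiopulmonary findings", [0.9, 0.1, 0.0]),
        ("mild cardiomegaly without effusion", [0.8, 0.2, 0.1]),
    ],
    "ophthalmology": [("no signs of diabetic retinopathy", [0.1, 0.9, 0.0])],
    "pathology": [("benign tissue, no malignancy", [0.0, 0.1, 0.9])],
}

# Hypothetical domain prototypes used to route a query to one corpus.
DOMAIN_PROTOTYPES = {
    "radiology": [1.0, 0.0, 0.0],
    "ophthalmology": [0.0, 1.0, 0.0],
    "pathology": [0.0, 0.0, 1.0],
}

def classify_domain(query_emb):
    """Domain-aware step: pick the corpus whose prototype is closest."""
    return max(DOMAIN_PROTOTYPES,
               key=lambda d: cosine(query_emb, DOMAIN_PROTOTYPES[d]))

def retrieve(query_emb, k=3, min_sim=0.5):
    """Adaptive selection: rank contexts within the chosen domain, keep at
    most k, and drop any below a similarity threshold so weakly related
    passages are never handed to the generator."""
    domain = classify_domain(query_emb)
    scored = sorted(
        ((cosine(query_emb, emb), text) for text, emb in CORPORA[domain]),
        reverse=True,
    )
    return domain, [text for sim, text in scored[:k] if sim >= min_sim]

# Example: a chest-X-ray-like query embedding routes to radiology and
# retrieves only sufficiently similar report snippets.
domain, contexts = retrieve([0.95, 0.05, 0.0])
```

The threshold-based cutoff is what makes the selection adaptive: the number of contexts passed to the generator varies with retrieval quality rather than being a fixed top-k, which is one way to reduce the misalignment the abstract attributes to irrelevant retrieved contexts.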