🤖 AI Summary
Medical image captioning faces two key challenges: inaccurate modality identification and poor alignment with clinical context. To address these, we propose a dual-prompt enhancement framework: (1) a modality-aware prompt that leverages semi-supervised classification for precise imaging-modality recognition, and (2) a question-guided prompt that integrates biomedical language model embeddings to improve clinical relevance. We further introduce a novel evaluation metric that jointly assesses spatial-semantic alignment and medical narrative quality. Our method combines large vision-language models (LVLMs), multimodal alignment, and prompt engineering. Across multiple benchmark medical datasets, our approach achieves 22% higher modality recognition accuracy than BLIP-3, generates more comprehensive, clinically aligned, and trustworthy radiology reports, and produces captions that can serve as prior knowledge for medical experts and as automatic annotations for downstream vision-language tasks.
📝 Abstract
Medical image captioning via vision-language models has shown promising potential for clinical diagnosis assistance. However, generating contextually relevant descriptions with accurate modality recognition remains challenging. We present DualPrompt-MedCap, a novel dual-prompt enhancement framework that augments Large Vision-Language Models (LVLMs) through two specialized components: (1) a modality-aware prompt derived from a semi-supervised classification model pretrained on medical question-answer pairs, and (2) a question-guided prompt leveraging biomedical language model embeddings. To address the lack of captioning ground truth, we also propose an evaluation framework that jointly considers spatial-semantic relevance and medical narrative quality. Experiments on multiple medical datasets demonstrate that DualPrompt-MedCap outperforms the BLIP-3 baseline, achieving a 22% improvement in modality recognition accuracy while generating more comprehensive and question-aligned descriptions. Our method produces clinically accurate reports that can serve as prior knowledge for medical experts and as automatic annotations for downstream vision-language tasks.