🤖 AI Summary
To address inconsistent modality alignment, insufficient hard negative mining, and noisy knowledge integration in medical visual question answering (Med-VQA), this paper proposes a multi-level unified alignment framework. First, it introduces an optimal transport–based cross-modal alignment mechanism, combined with contrastive learning, to achieve fine-grained matching among image–text–answer triplets. Second, it employs a soft-label–driven hard negative mining strategy to sharpen decision-boundary learning. Third, it designs a gated cross-attention module that selectively injects answer-vocabulary prior knowledge, mitigating noise during knowledge fusion. Evaluated on RAD-VQA, SLAKE, PathVQA, and VQA-2019, the method achieves new state-of-the-art performance, improving average accuracy by 2.1–4.7 percentage points and significantly strengthening the model's robustness to complex medical semantics and its reasoning accuracy.
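The optimal transport mechanism described above can be illustrated with an entropy-regularized Sinkhorn solver: given a cost matrix between image-patch and text-token features, the transport plan gives a soft, fine-grained matching, and the total transport cost can serve as an alignment signal. This is a minimal numpy sketch with uniform marginals and toy random features, not the paper's actual implementation; all names (`sinkhorn`, `alignment_loss`, the feature dimensions) are illustrative assumptions.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=100):
    """Entropy-regularized optimal transport via Sinkhorn iterations
    with uniform marginals (a sketch, not the paper's exact solver)."""
    n, m = cost.shape
    K = np.exp(-cost / eps)                    # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m      # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                   # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)         # transport plan P

# Toy example: cosine-distance cost between 3 image patches and 4 text tokens.
rng = np.random.default_rng(0)
img = rng.normal(size=(3, 8))
txt = rng.normal(size=(4, 8))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
cost = 1.0 - img @ txt.T                       # cosine distance in [0, 2]
P = sinkhorn(cost)
alignment_loss = (P * cost).sum()              # transport cost as alignment signal
```

The plan `P` is nonnegative with rows summing to 1/3 and columns to 1/4, so each patch's "mass" is softly distributed over tokens rather than hard-assigned.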
📝 Abstract
Medical Visual Question Answering (Med-VQA) is a challenging task that requires a deep understanding of both medical images and textual questions. Although recent works leveraging Medical Vision-Language Pre-training (Med-VLP) have shown strong performance on the Med-VQA task, there is still no unified solution for modality alignment, and the issue of hard negatives remains under-explored. Additionally, commonly used knowledge fusion techniques for Med-VQA may introduce irrelevant information. In this work, we propose a framework that addresses these challenges through three key contributions: (1) a unified solution for heterogeneous modality alignment across multiple levels, modalities, views, and stages, leveraging methods such as contrastive learning and optimal transport theory; (2) a hard negative mining method that employs soft labels for multi-modality alignment and enforces discrimination of hard negative pairs; and (3) a Gated Cross-Attention Module for Med-VQA that integrates the answer vocabulary as prior knowledge and selects relevant information from it. Our framework outperforms the previous state-of-the-art on widely used Med-VQA datasets such as RAD-VQA, SLAKE, PathVQA, and VQA-2019.
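The gated cross-attention idea in contribution (3) can be sketched as follows: fused image-question queries attend over answer-vocabulary embeddings, and a sigmoid gate computed from each query decides, per dimension, how much of the retrieved knowledge to inject via a residual connection. This is a simplified single-head numpy sketch under assumed shapes and weight names (`Wq`, `Wk`, `Wv`, `Wg`), not the paper's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(q, kv, Wq, Wk, Wv, Wg):
    """Cross-attend queries to answer-vocabulary embeddings; a sigmoid
    gate controls how much retrieved knowledge each query absorbs."""
    Q, K, V = q @ Wq, kv @ Wk, kv @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # scaled dot-product attention
    retrieved = attn @ V                            # knowledge pulled from vocabulary
    gate = 1.0 / (1.0 + np.exp(-(q @ Wg)))          # per-dimension sigmoid gate
    return q + gate * retrieved                     # gated residual injection

# Toy example: 2 fused image-question queries, 5 answer-vocabulary entries.
rng = np.random.default_rng(1)
d = 16
fused = rng.normal(size=(2, d))                     # hypothetical fused features
vocab = rng.normal(size=(5, d))                     # hypothetical answer embeddings
Wq, Wk, Wv, Wg = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
out = gated_cross_attention(fused, vocab, Wq, Wk, Wv, Wg)
```

When the gate saturates near zero, the module falls back to the original fused features, which is one way such a design can suppress irrelevant prior knowledge during fusion.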