AI Summary
Existing medical visual question answering (VQA) methods suffer from insufficient fine-grained multimodal semantic alignment because they perform modality interaction solely at the large language model (LLM) level, resulting in weak semantic coupling. Method: We propose a dual-level semantic consistency constraint framework that aligns modalities at both the model level and the feature level. Specifically, we introduce conditional visual feature learning for fine-grained cross-modal alignment, design a text-queue-driven cross-modal soft semantic loss, and construct BioVGQ, the first debiased medical visual-grounding question answering dataset with precise image-text localization annotations. Contribution/Results: Our approach achieves significant improvements over state-of-the-art methods across multiple medical VQA benchmarks, enhancing model robustness, generalization, and clinical applicability.
Abstract
Biomedical visual question answering (VQA) has been widely studied and has demonstrated significant application value and potential in fields such as assistive medical diagnosis. Despite this success, current biomedical VQA models perform multimodal information interaction only at the model level within large language models (LLMs), leading to suboptimal multimodal semantic alignment on complex tasks. To address this issue, we propose BioD2C: a novel Dual-level Semantic Consistency Constraint Framework for Biomedical VQA, which achieves semantic interaction alignment at both the model and feature levels, enabling the model to adaptively learn visual features based on the question. Specifically, we first integrate textual features into visual features via an image-text fusion mechanism as feature-level semantic interaction, obtaining visual features conditioned on the given text; we then introduce a text-queue-based cross-modal soft semantic loss function to further align the image semantics with the question semantics. In addition, we establish a new dataset, BioVGQ, which addresses inherent biases in prior datasets by filtering manually altered images and aligning question-answer pairs with their multimodal context, and we train our model on this dataset. Extensive experimental results demonstrate that BioD2C achieves state-of-the-art (SOTA) performance across multiple downstream datasets, showcasing its robustness, generalizability, and potential to advance biomedical VQA research.
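To make the text-queue-based soft semantic loss concrete, here is a minimal, dependency-free sketch of one plausible reading of the idea. It is not the paper's implementation: the function names (`soft_semantic_loss`, `cosine`), the use of cosine similarity, the temperature value, and the choice of cross-entropy between the two queue distributions are all assumptions. The intuition it illustrates: both the image embedding and the question embedding are compared against a shared queue of text embeddings, each comparison is softmaxed into a "soft" distribution over the queue, and the image's distribution is pulled toward the question's distribution.

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def softmax(scores, temperature):
    """Turn similarity scores into a probability distribution over the queue."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def soft_semantic_loss(image_emb, question_emb, text_queue, temperature=0.1):
    """Hypothetical soft semantic loss: cross-entropy between the question's
    and the image's softmax distributions over a shared text queue, so the
    image is pushed to 'agree' with the question about which queued texts
    are semantically close."""
    img_dist = softmax([cosine(image_emb, t) for t in text_queue], temperature)
    qst_dist = softmax([cosine(question_emb, t) for t in text_queue], temperature)
    # Cross-entropy H(question, image): the question distribution acts as
    # the soft target; the epsilon guards against log(0).
    return -sum(q * math.log(p + 1e-12) for q, p in zip(qst_dist, img_dist))
```

As a sanity check, an image embedding identical to the question embedding yields a lower loss than one pointing in an unrelated direction, which is the alignment pressure the loss is meant to exert.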