Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights

📅 2025-07-09
🤖 AI Summary
This study systematically identifies core barriers to clinical deployment of Medical Visual Question Answering (MedVQA) in radiology: weak clinical relevance, inadequate support for multi-view/multi-resolution imaging, lack of integration with electronic health records (EHR), and misaligned evaluation metrics. Guided by the Arksey & O’Malley scoping review framework, we conduct the first mixed-methods assessment—integrating qualitative analysis of 68 technical studies with quantitative and qualitative insights from 50 radiologists—thereby evaluating both technical advancement and clinical applicability. Results reveal that ~60% of QA pairs lack diagnostic utility; only 29.8% of radiologists deem current systems “highly useful”; 89.4% prefer conversational interaction; and 78.7% require multi-view support. We propose three evidence-based pathways for improvement: (1) clinical-task–anchored evaluation reform, (2) seamless integration into multimodal clinical workflows, and (3) enhanced model interpretability—providing empirical grounding and practical guidance for transitioning MedVQA from algorithmic refinement to tangible clinical value.

📝 Abstract
Medical Visual Question Answering (MedVQA) is a promising tool to assist radiologists by automating medical image interpretation through question answering. Despite advances in models and datasets, MedVQA's integration into clinical workflows remains limited. This study systematically reviews 68 publications (2018-2024) and surveys 50 clinicians from India and Thailand to examine MedVQA's practical utility, challenges, and gaps. Following the Arksey and O'Malley scoping review framework, we used a two-pronged approach: (1) reviewing studies to identify key concepts, advancements, and research gaps in radiology workflows, and (2) surveying clinicians to capture their perspectives on MedVQA's clinical relevance. Our review reveals that nearly 60% of QA pairs are non-diagnostic and lack clinical relevance. Most datasets and models do not support multi-view, multi-resolution imaging, EHR integration, or domain knowledge, features essential for clinical diagnosis. Furthermore, there is a clear mismatch between current evaluation metrics and clinical needs. The clinician survey confirms this disconnect: only 29.8% consider MedVQA systems highly useful. Key concerns include the absence of patient history or domain knowledge (87.2%), preference for manually curated datasets (51.1%), and the need for multi-view image support (78.7%). Additionally, 66% favor models focused on specific anatomical regions, and 89.4% prefer dialogue-based interactive systems. While MedVQA shows strong potential, challenges such as limited multimodal analysis, lack of patient context, and misaligned evaluation approaches must be addressed for effective clinical integration.
Problem

Research questions and friction points this paper is trying to address.

Examining barriers to MedVQA integration in radiology workflows
Identifying gaps between MedVQA models and clinical diagnostic needs
Assessing clinician concerns about MedVQA utility and relevance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic review of 68 MedVQA publications
Surveyed 50 clinicians on clinical relevance
Identified gaps in datasets and models
Deepali Mishra (Asian Institute of Technology, Thailand)
Chaklam Silpasuwanchai (Asian Institute of Technology): Machine Learning, Brain-Computer Interfaces, Human-Computer Interfaces
Ashutosh Modi (Indian Institute of Technology Kanpur): Natural Language Processing, Machine and Deep Learning, Artificial Intelligence, Affective Computing, Legal AI
Madhumita Sushil (University of California, San Francisco)
Sorayouth Chumnanvej (Faculty of Medicine Ramathibodi Hospital, Mahidol University)