Route, Retrieve, Reflect, Repair: Self-Improving Agentic Framework for Visual Detection and Linguistic Reasoning in Medical Imaging

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing medical imaging systems—typically black-box models performing single-pass inference—by introducing R⁴, a novel self-improving multi-agent framework for medical vision-language tasks. R⁴ integrates four collaborative agents responsible for routing, retrieval, reflection, and repair, enabling dynamic prompt configuration, joint image-text generation, clinical error detection, and constraint-driven iterative refinement. The framework supports explainable inference, self-diagnosis of errors, and joint spatial-linguistic optimization without requiring model fine-tuning. Evaluated on chest X-ray data, R⁴ achieves a 1.7–2.5 point improvement in LLM-as-a-Judge report generation scores and a 2.5–3.5 percentage point gain in weakly supervised detection mAP50, significantly outperforming single vision-language model baselines.

📝 Abstract
Medical image analysis increasingly relies on large vision-language models (VLMs), yet most systems remain single-pass black boxes that offer limited control over reasoning, safety, and spatial grounding. We propose R⁴, an agentic framework that decomposes medical imaging workflows into four coordinated agents: a Router that configures task- and specialization-aware prompts from the image, patient history, and metadata; a Retriever that uses exemplar memory and pass@k sampling to jointly generate free-text reports and bounding boxes; a Reflector that critiques each draft-box pair for key clinical error modes (negation, laterality, unsupported claims, contradictions, missing findings, and localization errors); and a Repairer that iteratively revises both narrative and spatial outputs under targeted constraints while curating high-quality exemplars for future cases. Instantiated on chest X-ray analysis with multiple modern VLM backbones and evaluated on report generation and weakly supervised detection, R⁴ consistently boosts LLM-as-a-Judge scores by roughly +1.7 to +2.5 points and mAP50 by +2.5 to +3.5 absolute points over strong single-VLM baselines, without any gradient-based fine-tuning. These results show that agentic routing, reflection, and repair can turn strong but brittle VLMs into more reliable and better-grounded tools for clinical image interpretation. Our code can be found at: https://github.com/faiyazabdullah/MultimodalMedAgent
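The abstract's route → retrieve → reflect → repair loop can be sketched as a minimal control flow. This is an illustrative sketch only, not the paper's implementation: all agent internals (VLM calls, exemplar memory, pass@k selection) are stubbed, and every function and field name here is a hypothetical placeholder.

```python
# Hypothetical sketch of the R4 agent loop; all names and stubs are illustrative.

# Clinical error modes the Reflector checks for (taken from the abstract).
ERROR_MODES = ["negation", "laterality", "unsupported claim",
               "contradiction", "missing finding", "localization"]

def route(image_meta):
    """Router: build a task- and specialization-aware prompt from metadata."""
    return f"Task: report+boxes | view: {image_meta['view']} | history: {image_meta['history']}"

def retrieve(prompt, k=3):
    """Retriever: sample k draft (report, boxes) candidates (stubbed VLM call)."""
    return [{"report": f"draft {i} for [{prompt}]", "boxes": [(10 * i, 10 * i, 50, 50)]}
            for i in range(k)]

def reflect(draft):
    """Reflector: flag clinical error modes in a draft-box pair (stubbed critique)."""
    return [mode for mode in ERROR_MODES if mode in draft["report"]]

def repair(draft, errors):
    """Repairer: revise narrative and boxes under targeted constraints (stubbed)."""
    revised = dict(draft)
    for err in errors:
        revised["report"] = revised["report"].replace(err, f"revised({err})")
    return revised

def r4_pipeline(image_meta, max_rounds=2):
    """One pass of the route -> retrieve -> reflect -> repair loop."""
    prompt = route(image_meta)
    draft = retrieve(prompt, k=3)[0]   # pass@k selection stubbed as "take first"
    for _ in range(max_rounds):        # iterative refinement until clean or budget spent
        errors = reflect(draft)
        if not errors:
            break
        draft = repair(draft, errors)
    return draft
```

Because reflection and repair act only on the generated outputs, the loop requires no gradient-based fine-tuning of the backbone, which mirrors the training-free claim in the abstract.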
Problem

Research questions and friction points this paper is trying to address.

medical imaging
vision-language models
spatial grounding
reasoning control
clinical reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic framework
vision-language models
self-improving
medical imaging
iterative refinement
Md. Faiyaz Abdullah Sayeedi
Undergraduate Teaching Assistant, United International University
Computer Vision · Large Language Model · Responsible AI · Multimodal Machine Learning
Rashedur M. Rahman
Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh (IUB)
Siam Tahsin Bhuiyan
Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh (IUB)
Sefatul Wasi
Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh (IUB)
Ashraful Islam
Assistant Professor of Computer Science and Engineering, Independent University, Bangladesh
Human-Computer Interaction · AI for Social Good · AI for Public Health · Pervasive Computing
Saadia Binte Alam
Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh (IUB)
AKM Mahbubur Rahman
Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh (IUB)