🤖 AI Summary
This work addresses the limitations of existing medical imaging systems—typically black-box models performing single-pass inference—by introducing R⁴, a novel self-improving multi-agent framework for medical vision-language tasks. R⁴ integrates four collaborative agents responsible for routing, retrieval, reflection, and repair, enabling dynamic prompt configuration, joint image-text generation, clinical error detection, and constraint-driven iterative refinement. The framework supports explainable inference, self-diagnosis of errors, and joint spatial-linguistic optimization without requiring model fine-tuning. Evaluated on chest X-ray data, R⁴ achieves a 1.7–2.5 point improvement in LLM-as-a-Judge report generation scores and a 2.5–3.5 percentage point gain in weakly supervised detection mAP50, significantly outperforming single vision-language model baselines.
📝 Abstract
Medical image analysis increasingly relies on large vision-language models (VLMs), yet most systems remain single-pass black boxes that offer limited control over reasoning, safety, and spatial grounding. We propose R⁴, an agentic framework that decomposes medical imaging workflows into four coordinated agents: a Router that configures task- and specialization-aware prompts from the image, patient history, and metadata; a Retriever that uses exemplar memory and pass@k sampling to jointly generate free-text reports and bounding boxes; a Reflector that critiques each draft-box pair for key clinical error modes (negation, laterality, unsupported claims, contradictions, missing findings, and localization errors); and a Repairer that iteratively revises both narrative and spatial outputs under targeted constraints while curating high-quality exemplars for future cases. Instantiated on chest X-ray analysis with multiple modern VLM backbones and evaluated on report generation and weakly supervised detection, R⁴ consistently boosts LLM-as-a-Judge scores by roughly +1.7 to +2.5 points and mAP50 by +2.5 to +3.5 absolute points over strong single-VLM baselines, without any gradient-based fine-tuning. These results show that agentic routing, reflection, and repair can turn strong but brittle VLMs into more reliable and better-grounded tools for clinical image interpretation. Our code can be found at: https://github.com/faiyazabdullah/MultimodalMedAgent
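The abstract's route → retrieve → reflect → repair loop can be sketched in a few lines. This is a minimal illustrative skeleton, not the authors' actual implementation: every function name, data structure, and the stubbed agent logic below are assumptions; a real system would back each agent with VLM calls and exemplar memory as described above.

```python
# Hypothetical sketch of the R⁴ agent loop (route -> retrieve -> reflect -> repair).
# All names and stub behaviors are illustrative assumptions, not the paper's API.

ERROR_MODES = [
    "negation", "laterality", "unsupported_claim",
    "contradiction", "missing_finding", "localization",
]

def route(image, history, metadata):
    """Router: build a task- and specialization-aware prompt (stubbed)."""
    return f"Describe findings for a {metadata.get('view', 'frontal')} chest X-ray."

def retrieve(prompt, exemplars, k=3):
    """Retriever: pass@k sampling of (report, boxes) candidates (stubbed).

    A real Retriever would condition a VLM on retrieved exemplars and
    sample k joint report/bounding-box drafts.
    """
    return [("Draft report", [(10, 20, 50, 60)]) for _ in range(k)]

def reflect(report, boxes):
    """Reflector: flag clinical error modes in a draft-box pair (stubbed)."""
    return [m for m in ERROR_MODES if m in report.lower()]

def repair(report, boxes, errors):
    """Repairer: revise narrative and boxes under targeted constraints (stubbed)."""
    for e in errors:
        report = report.replace(e, f"[revised:{e}]")
    return report, boxes

def r4_pipeline(image, history, metadata, exemplars, max_rounds=3):
    """One case through the four agents, with iterative reflect/repair rounds."""
    prompt = route(image, history, metadata)
    report, boxes = retrieve(prompt, exemplars)[0]  # take the top candidate
    for _ in range(max_rounds):
        errors = reflect(report, boxes)
        if not errors:  # converged: no flagged error modes remain
            break
        report, boxes = repair(report, boxes, errors)
    return report, boxes
```

The key design point the sketch captures is that no agent updates model weights: all improvement comes from prompt configuration, candidate sampling, critique, and constrained revision at inference time, which is why the paper reports gains "without any gradient-based fine-tuning".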