🤖 AI Summary
Existing medical visual question answering (VQA) reinforcement learning (RL) approaches are largely confined to closed-ended tasks, and model-based semantic rewards often suffer from "reward collapse," assigning near-identical scores to semantically distinct answers and thereby limiting open-ended clinical reasoning. This work proposes ARMed, the first framework to systematically address reward collapse in medical RL. ARMed integrates domain-specific medical knowledge into an adaptive semantic reward mechanism that discriminates subtle semantic differences among answers. It further combines chain-of-thought supervised fine-tuning with an RL objective that jointly optimizes textual correctness and semantic reward. Evaluated on six medical VQA benchmarks, ARMed achieves a 32.64% in-domain accuracy gain and an 11.65% cross-domain improvement over strong baselines, enhancing both accuracy and generalization in realistic clinical settings.
📝 Abstract
Reinforcement learning (RL) with rule-based rewards has demonstrated strong potential in enhancing the reasoning and generalization capabilities of vision-language models (VLMs) and large language models (LLMs), while reducing computational overhead. However, its application in medical imaging remains underexplored. Existing reinforcement fine-tuning (RFT) approaches in this domain primarily target closed-ended visual question answering (VQA), limiting their applicability to real-world clinical reasoning. In contrast, open-ended medical VQA better reflects clinical practice but has received limited attention. While some efforts have sought to unify both formats via semantically guided RL, we observe that model-based semantic rewards often suffer from reward collapse, where responses with significant semantic differences receive similar scores. To address this, we propose ARMed (Adaptive Reinforcement for Medical Reasoning), a novel RL framework for open-ended medical VQA. ARMed first incorporates domain knowledge through supervised fine-tuning (SFT) on chain-of-thought data, then applies reinforcement learning with textual correctness and adaptive semantic rewards to enhance reasoning quality. We evaluate ARMed on six challenging medical VQA benchmarks. Results show that ARMed consistently boosts both accuracy and generalization, achieving a 32.64% improvement on in-domain tasks and an 11.65% gain on out-of-domain benchmarks. These results highlight the critical role of reward discriminability in medical RL and the promise of semantically guided rewards for enabling robust and clinically meaningful multimodal reasoning.
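To make the reward-collapse problem concrete, here is a minimal, self-contained sketch (the paper does not publish its implementation, so the bag-of-words similarity, the group-wise min-max rescaling, and the reward weights below are all illustrative assumptions, not ARMed's actual mechanism): a raw model-based similarity score can cluster all sampled answers into a narrow band, while rescaling the scores within each sampled group spreads out small semantic differences before they are combined with a textual-correctness reward.

```python
# Illustrative sketch only; similarity function, rescaling rule, and
# weights are assumptions, not the published ARMed implementation.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Toy semantic similarity: cosine over bag-of-words counts
    (a stand-in for a learned, model-based semantic scorer)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def adaptive_semantic_rewards(responses, reference):
    """Rescale raw similarities within a sampled group so that subtle
    semantic differences map to well-separated rewards -- one simple way
    to counteract 'reward collapse' (the min-max rule is an assumption)."""
    raw = [bow_cosine(r, reference) for r in responses]
    lo, hi = min(raw), max(raw)
    if hi - lo < 1e-8:  # scores fully collapsed: no learning signal to spread
        return [0.0] * len(raw)
    return [(s - lo) / (hi - lo) for s in raw]

def combined_reward(response, reference, sem_reward, w_text=0.5, w_sem=0.5):
    """Dual objective: exact-match textual correctness plus the adaptive
    semantic reward (the 0.5/0.5 weighting is illustrative)."""
    text_r = 1.0 if response.strip().lower() == reference.strip().lower() else 0.0
    return w_text * text_r + w_sem * sem_reward
```

For example, given the reference "left lower lobe pneumonia", the candidates "pneumonia in the left lower lobe", "pneumonia", and "normal chest" receive well-separated rewards of 1.0, about 0.61, and 0.0 after group rescaling, whereas their raw similarities are compressed into a narrower range.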