Breaking Reward Collapse: Adaptive Reinforcement for Open-ended Medical Reasoning with Enhanced Semantic Discrimination

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing medical visual question answering (VQA) reinforcement learning (RL) approaches are largely confined to closed-ended tasks, and model-based semantic rewards often suffer from “reward collapse,” failing to discriminate semantically distinct answers—thereby limiting open-ended clinical reasoning. This work proposes ARMed, the first framework to systematically address reward collapse in medical RL. ARMed integrates domain-specific medical knowledge to design an adaptive semantic reward mechanism that finely discriminates subtle semantic differences among answers. It further combines chain-of-thought supervised fine-tuning with a dual-driven RL objective—jointly optimizing for textual correctness and semantic reward. Evaluated on six medical VQA benchmarks, ARMed achieves +32.64% in-domain accuracy gain and +11.65% cross-domain improvement over strong baselines, significantly enhancing both accuracy and generalization in realistic clinical settings.

📝 Abstract
Reinforcement learning (RL) with rule-based rewards has demonstrated strong potential in enhancing the reasoning and generalization capabilities of vision-language models (VLMs) and large language models (LLMs), while reducing computational overhead. However, its application in medical imaging remains underexplored. Existing reinforcement fine-tuning (RFT) approaches in this domain primarily target closed-ended visual question answering (VQA), limiting their applicability to real-world clinical reasoning. In contrast, open-ended medical VQA better reflects clinical practice but has received limited attention. While some efforts have sought to unify both formats via semantically guided RL, we observe that model-based semantic rewards often suffer from reward collapse, where responses with significant semantic differences receive similar scores. To address this, we propose ARMed (Adaptive Reinforcement for Medical Reasoning), a novel RL framework for open-ended medical VQA. ARMed first incorporates domain knowledge through supervised fine-tuning (SFT) on chain-of-thought data, then applies reinforcement learning with textual correctness and adaptive semantic rewards to enhance reasoning quality. We evaluate ARMed on six challenging medical VQA benchmarks. Results show that ARMed consistently boosts both accuracy and generalization, achieving a 32.64% improvement on in-domain tasks and an 11.65% gain on out-of-domain benchmarks. These results highlight the critical role of reward discriminability in medical RL and the promise of semantically guided rewards for enabling robust and clinically meaningful multimodal reasoning.
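The abstract describes a dual-driven objective that mixes a textual-correctness reward with an adaptive semantic reward, motivated by reward collapse: model-based similarity scores for semantically distinct answers cluster in a narrow band. The paper's exact formulation is not reproduced here; the sketch below illustrates one plausible batch-adaptive rescaling. The function names, the min-max scheme, and the mixing weight `alpha` are illustrative assumptions, not ARMed's actual mechanism.

```python
def adaptive_semantic_reward(sims):
    """Rescale raw model-based similarity scores within a batch.

    Raw similarities often cluster in a narrow band (e.g. 0.82-0.95),
    so semantically different answers receive nearly identical rewards
    ("reward collapse"). Min-max normalization over the batch spreads
    the scores back across [0, 1], restoring discriminability.
    """
    lo, hi = min(sims), max(sims)
    if hi - lo < 1e-8:                 # degenerate batch: all answers tied
        return [0.5] * len(sims)
    return [(s - lo) / (hi - lo) for s in sims]


def combined_reward(text_correct, sem_reward, alpha=0.5):
    """Dual objective: mix exact textual correctness (0/1) with the
    rescaled semantic reward. alpha is an assumed mixing weight."""
    return alpha * float(text_correct) + (1.0 - alpha) * sem_reward


# Three candidate answers whose raw similarities are nearly collapsed:
raw = [0.82, 0.84, 0.95]
print(adaptive_semantic_reward(raw))   # spread back over [0, 1]
```

Normalizing within the batch is what makes the reward "adaptive": the same raw similarity can earn a high or low reward depending on how it ranks against the other sampled responses, which keeps the RL gradient informative even when all raw scores are high.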
Problem

Research questions and friction points this paper is trying to address.

Addresses reward collapse in medical reinforcement learning
Enhances open-ended medical VQA reasoning quality
Improves accuracy and generalization in medical tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive semantic rewards prevent reward collapse
Domain knowledge integration via supervised fine-tuning
Enhanced reasoning with textual correctness rewards
Authors
Yizhou Liu (MIT)
Jingwei Wei (Institute of Automation, Chinese Academy of Sciences, Beijing, China)
Zizhi Chen (Fudan University)
Minghao Han (College of Intelligent Robotics and Advanced Manufacturing, Fudan University, Shanghai, China)
Xukun Zhang (Fudan University)
Keliang Liu (College of Intelligent Robotics and Advanced Manufacturing, Fudan University, Shanghai, China)
Lihua Zhang (Wuhan University)