Med-Evo: Test-time Self-evolution for Medical Multimodal Large Language Models

πŸ“… 2026-03-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of limited labeled data for medical multimodal large language models (MLLMs), which hinders performance improvement in settings where annotation is costly and data is sensitive. To overcome this, the authors propose the first test-time self-evolution framework for medical MLLMs, which dynamically refines model predictions during inference without requiring additional annotations. The approach combines feature-driven pseudo-label selection with a hard-soft hierarchical reward mechanism, generating reinforcement signals through semantic clustering, token-level evaluation, and semantic similarity. Evaluated across three medical visual question answering (VQA) benchmarks and two foundation models, the method consistently outperforms existing approaches. Notably, on the SLAKE dataset with Qwen2.5-VL, it achieves a 10.43% absolute gain in accuracy and a 4.68% improvement in recall.
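The feature-driven pseudo-label selection described above can be pictured as picking, among a rollout's candidate responses, the one closest to the group's semantic centroid. The sketch below is illustrative only: it assumes sentence embeddings for the candidates are already available (e.g. from an off-the-shelf encoder), and the function name is not from the paper.

```python
import numpy as np

def select_pseudo_label(responses, embeddings):
    """Pick the candidate response nearest the semantic centroid.

    responses:  list of N candidate answer strings from one rollout
    embeddings: (N, d) array-like of their sentence embeddings
    """
    emb = np.asarray(embeddings, dtype=float)
    # L2-normalize rows so dot products become cosine similarities
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    # Semantic centroid of the candidate set, re-normalized
    centroid = emb.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    # Cosine similarity of each candidate to the centroid
    sims = emb @ centroid
    return responses[int(np.argmax(sims))]
```

Here the most "central" candidate serves as the pseudo label for the rollout; outlier responses far from the centroid are implicitly down-weighted.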

πŸ“ Abstract
Medical Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across diverse healthcare tasks. However, current post-training strategies, such as supervised fine-tuning and reinforcement learning, depend heavily on substantial annotated data while overlooking the potential of unlabeled test data for model enhancement. This limitation is particularly pronounced in medical domains, where acquiring extensive labeled data is difficult due to strict data sensitivity and annotation complexity. Moreover, leveraging test data poses challenges in generating reliable supervision signals from unlabeled samples and maintaining stable self-evolution. To address these limitations, we propose Med-Evo, the first self-evolution framework for medical MLLMs that uses label-free reinforcement learning to improve model performance without requiring additional labeled data. Our framework introduces two key innovations: 1) Feature-driven Pseudo Labeling (FPL), which identifies semantic centroids among all heterogeneous candidate responses to select pseudo labels in each rollout, and 2) Hard-Soft Reward (HSR), which combines exact match with token-level assessment and semantic similarity to provide a hierarchical reward. Experiments on three medical VQA benchmarks and two base MLLMs show clear advantages of our approach over SOTA methods, with significant improvements of 10.43% in accuracy and 4.68% in recall on the SLAKE dataset using Qwen2.5-VL, demonstrating the effectiveness of our method.
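The Hard-Soft Reward described in the abstract can be sketched as a two-tier rule: an exact match earns the full (hard) reward, and otherwise a soft reward blends token-level overlap with semantic similarity. The code below is a minimal illustration under stated assumptions: the blending weight `alpha`, the use of token-level F1 for the token assessment, and the expectation of a precomputed similarity score in [0, 1] are all assumptions, not details taken from the paper.

```python
def hard_soft_reward(pred, pseudo_label, sem_sim, alpha=0.5):
    """Hierarchical reward: exact match first, else a soft blend.

    pred, pseudo_label: answer strings
    sem_sim: precomputed semantic similarity in [0, 1]
             (e.g. cosine similarity of sentence embeddings)
    alpha:   illustrative weight between token-level F1 and sem_sim
    """
    p, g = pred.strip().lower(), pseudo_label.strip().lower()
    if p == g:                      # hard reward: exact string match
        return 1.0
    # soft reward, part 1: token-level F1 against the pseudo label
    pt, gt = p.split(), g.split()
    common = sum(min(pt.count(t), gt.count(t)) for t in set(pt))
    if not pt or not gt or common == 0:
        f1 = 0.0
    else:
        prec, rec = common / len(pt), common / len(gt)
        f1 = 2 * prec * rec / (prec + rec)
    # soft reward, part 2: blend with semantic similarity
    return alpha * f1 + (1 - alpha) * sem_sim
```

The hierarchy ensures that confidently correct answers are rewarded maximally, while near-misses (e.g. "left lung" vs. "lung") still receive a graded signal rather than zero, which helps stabilize label-free reinforcement learning.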
Problem

Research questions and friction points this paper is trying to address.

Medical Multimodal Large Language Models
Test-time Self-evolution
Unlabeled Test Data
Supervision Signal
Model Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-evolution
medical multimodal LLMs
pseudo labeling
reinforcement learning
test-time adaptation
Dunyuan Xu
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
Xikai Yang
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
Juzheng Miao
PhD student, The Chinese University of Hong Kong
Medical image analysis, label-efficient learning, reinforcement learning, causality
Yaoqian Li
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
Jinpeng Li
The Chinese University of Hong Kong
Deep Learning, Medical Image Analysis, Pedestrian Detection
Pheng-Ann Heng
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Hong Kong, China