🤖 AI Summary
This work addresses a limitation of existing medical vision-language models: they rely on static image embeddings and struggle to capture localized clinical visual evidence, which compromises reasoning reliability. To overcome this, the authors propose MedLVR, a framework that introduces a reusable latent visual state into autoregressive decoding, enabling dynamic maintenance and iterative refinement of visual evidence. The method integrates sequential latent reasoning steps within the decoding process and employs a two-stage training strategy comprising region-of-interest (ROI) supervised fine-tuning and vision-latent policy optimization (VLPO). Built upon the Qwen2.5-VL-7B backbone, MedLVR achieves state-of-the-art performance on OmniMedVQA and five external medical VQA benchmarks, improving average accuracy from 48.3% to 53.4% and demonstrating the efficacy of dynamic latent visual reasoning for reliable medical visual question answering.
📝 Abstract
Medical vision-language models (VLMs) have shown strong potential for medical visual question answering (VQA), yet their reasoning remains largely text-centric: images are encoded once as static context, and subsequent inference is dominated by language. This paradigm is fundamentally limited in clinical scenarios, where accurate answers often depend on subtle, localized visual evidence that cannot be reliably preserved in static embeddings. We propose MedLVR, a latent visual reasoning framework that introduces an explicit visual evidence state into autoregressive decoding. Instead of relying solely on text-based intermediate reasoning, MedLVR interleaves a short latent reasoning segment within the decoder by reusing hidden states as continuous latent steps, enabling iterative preservation and refinement of query-relevant visual evidence before answer generation. To support effective visual supervision, we adopt a two-stage training strategy: region-of-interest (ROI) supervised fine-tuning aligns latent states with clinically relevant image evidence, and Visual-Latent Policy Optimization (VLPO) further optimizes latent reasoning and answer generation under outcome-level rewards. Experiments on OmniMedVQA and five external medical VQA benchmarks show that MedLVR consistently outperforms recent reasoning baselines and improves the average score over the Qwen2.5-VL-7B backbone from 48.3% to 53.4%. These results show that latent visual reasoning provides an effective mechanism for preserving diagnostically relevant visual evidence and improving the reliability of medical VQA.
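The core mechanism described above, reusing decoder hidden states as continuous latent steps before answer generation, can be illustrated with a minimal conceptual sketch. This is not the authors' implementation; all module names, dimensions, and the single stand-in transformer layer are illustrative assumptions, and the real system operates on a full Qwen2.5-VL-7B decoder.

```python
import torch
import torch.nn as nn

class LatentReasoningDecoder(nn.Module):
    """Conceptual sketch: interleave continuous latent steps into decoding.

    Instead of sampling discrete tokens during the intermediate reasoning
    segment, the last hidden state of each pass is appended back to the
    input sequence as a continuous "latent token" (names are hypothetical).
    """

    def __init__(self, d_model=64, vocab_size=100, num_latent_steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-in for a full transformer decoder stack.
        self.layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.num_latent_steps = num_latent_steps

    def forward(self, prompt_ids, visual_feats):
        # prompt_ids: (B, T_text); visual_feats: (B, T_img, d_model)
        seq = torch.cat([visual_feats, self.embed(prompt_ids)], dim=1)
        for _ in range(self.num_latent_steps):
            hidden = self.layer(seq)
            # Reuse the final hidden state as a continuous latent step,
            # letting the model refine visual evidence without emitting text.
            latent = hidden[:, -1:, :]
            seq = torch.cat([seq, latent], dim=1)
        hidden = self.layer(seq)
        # Logits for the first answer token, conditioned on the latent segment.
        return self.lm_head(hidden[:, -1, :])

decoder = LatentReasoningDecoder()
logits = decoder(torch.randint(0, 100, (2, 5)), torch.randn(2, 3, 64))
print(logits.shape)  # torch.Size([2, 100])
```

In this sketch the latent segment is differentiable end to end, which is what allows ROI-style supervision on the latent states and outcome-reward optimization (as in VLPO) to shape the latent reasoning rather than only the final answer tokens.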