🤖 AI Summary
Existing reasoning-based multimodal large language models (MLLMs) excel at generating long textual reasoning chains but struggle to dynamically attend to, and iteratively revisit, visual regions, leaving textual reasoning imprecisely aligned with visual evidence. To address this, the paper proposes VLM-R³, a framework trained with Region-Conditioned Reinforcement Policy Optimization (R-GRPO) that enables the model to autonomously decide when to gather additional visual evidence, where to focus spatially, and how to fuse sub-image content into subsequent reasoning. A carefully curated Visuo-Lingual Interleaved Rationale (VLIR) corpus provides step-level supervision on region selection and textual justification to bootstrap the policy. Evaluated on MathVista, ScienceQA, and other benchmarks under zero-shot and few-shot settings, VLM-R³ establishes new state-of-the-art results, with the largest gains on tasks demanding spatial reasoning and fine-grained visual cue identification.
📝 Abstract
Recently, reasoning-based MLLMs have achieved a degree of success in generating long-form textual reasoning chains. However, they still struggle with complex tasks that require dynamically and iteratively focusing on, and revisiting, visual regions to achieve precise grounding of textual reasoning in visual evidence. We introduce **VLM-R³** (**V**isual **L**anguage **M**odel with **R**egion **R**ecognition and **R**easoning), a framework that equips an MLLM with the ability to (i) decide *when* additional visual evidence is needed, (ii) determine *where* to ground within the image, and (iii) seamlessly weave the relevant sub-image content back into an interleaved chain-of-thought. The core of our method is **Region-Conditioned Reinforcement Policy Optimization (R-GRPO)**, a training paradigm that rewards the model for selecting informative regions, formulating appropriate transformations (e.g., crop, zoom), and integrating the resulting visual context into subsequent reasoning steps. To bootstrap this policy, we compile a modest but carefully curated Visuo-Lingual Interleaved Rationale (VLIR) corpus that provides step-level supervision on region selection and textual justification. Extensive experiments on MathVista, ScienceQA, and other benchmarks show that VLM-R³ sets a new state of the art in zero-shot and few-shot settings, with the largest gains appearing on questions demanding subtle spatial reasoning or fine-grained visual cue extraction.
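The inference loop the abstract describes — deciding *when* to fetch more visual evidence, *where* to crop and zoom, and splicing the resulting sub-image back into the chain-of-thought — can be sketched as a toy control loop. This is a minimal illustrative sketch, not the paper's implementation: the `policy` here is a stand-in for the trained MLLM, `RegionAction` is a hypothetical action type, and the image is a plain 2-D pixel grid so the example stays self-contained.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple, Union

@dataclass
class RegionAction:
    """Hypothetical region-grounding action emitted mid-reasoning."""
    box: Tuple[int, int, int, int]  # (top, left, bottom, right) in pixels
    zoom: int                       # integer upscaling factor

def crop_and_zoom(image: List[List[int]], act: RegionAction) -> List[List[int]]:
    """Crop the box out of the image, then nearest-neighbour upscale it."""
    t, l, b, r = act.box
    cropped = [row[l:r] for row in image[t:b]]
    return [
        [px for px in row for _ in range(act.zoom)]  # repeat each pixel
        for row in cropped
        for _ in range(act.zoom)                     # repeat each row
    ]

Step = Union[str, RegionAction]

def interleaved_reasoning(
    image: List[List[int]],
    policy: Callable[[list], Step],
    max_steps: int = 8,
) -> list:
    """Run the policy until it answers, splicing sub-images into the chain."""
    context: list = ["<question>"]
    for _ in range(max_steps):
        step = policy(context)
        if isinstance(step, RegionAction):
            # The model asked for visual evidence: attach the zoomed crop.
            context.append(crop_and_zoom(image, step))
        else:
            context.append(step)
            if step.startswith("<answer"):
                break
    return context

# Usage with a scripted stand-in policy: inspect a region, then answer.
img = [[1, 2],
       [3, 4]]

def scripted_policy(ctx: list) -> Step:
    if len(ctx) == 1:
        return RegionAction(box=(0, 0, 1, 2), zoom=2)  # top row, 2x zoom
    return "<answer>4"

trace = interleaved_reasoning(img, scripted_policy)
# trace interleaves text steps with the zoomed sub-image evidence.
```

The point of the sketch is the control flow: visual evidence enters the context as a first-class reasoning step rather than as a one-shot input, which is what R-GRPO's rewards are meant to shape.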