🤖 AI Summary
Multimodal large language models often suffer from error propagation during complex reasoning because intermediate supervision is absent, yielding noisy optimization signals and suboptimal performance. To mitigate this, we propose the Guided Verifier framework, which pairs a dynamic verifier with the policy model to detect inconsistencies in real time and provide directional guidance throughout the reasoning process, thereby enabling process-level supervision. We introduce a novel dynamic process supervision mechanism; construct the CoRe dataset of process-level negative samples and correctly guided reasoning trajectories; and further incorporate reinforcement learning, multimodal hallucination-aware data synthesis, and an interactive verifier-policy architecture. Our approach achieves significant gains on the MathVista, MathVerse, and MMMU benchmarks, with an 8B-parameter model attaining state-of-the-art results.
📝 Abstract
Reinforcement Learning (RL) has emerged as a pivotal mechanism for enhancing the complex reasoning capabilities of Multimodal Large Language Models (MLLMs). However, prevailing paradigms typically rely on solitary rollout strategies in which the model reasons entirely on its own. This lack of intermediate oversight renders the reasoning process susceptible to error propagation, where early logical deviations cascade into irreversible failures and yield noisy optimization signals. In this paper, we propose the \textbf{Guided Verifier} framework to address these structural limitations. Moving beyond passive terminal rewards, we introduce a dynamic verifier that actively co-solves tasks alongside the policy. During the rollout phase, this verifier interacts with the policy model in real time, detecting inconsistencies and providing directional signals that steer the model toward valid trajectories. To facilitate this, we develop a specialized data synthesis pipeline targeting multimodal hallucinations, constructing the \textbf{CoRe} dataset of process-level negatives and \textbf{Co}rrect-guide \textbf{Re}asoning trajectories to train the guided verifier. Extensive experiments on MathVista, MathVerse, and MMMU show that by allocating compute to collaborative inference and dynamic verification, an 8B-parameter model can achieve strong performance.
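The verifier-in-the-loop rollout described above can be sketched as a simple control loop: the policy proposes a step, the verifier checks it, and on an inconsistency the policy re-generates under a directional hint before the error can propagate. The sketch below is purely illustrative; all names (`policy_step`, `verify_step`, the toy task) are hypothetical stand-ins, since the paper's actual interfaces are not specified here.

```python
# Illustrative sketch of a verifier-guided rollout (assumed interface).
def guided_rollout(policy_step, verify_step, state, max_steps=8):
    trajectory = []
    for _ in range(max_steps):
        step = policy_step(state, hint=None)       # policy proposes a step
        ok, hint = verify_step(state, step)        # verifier checks it
        if not ok:                                 # inconsistency detected:
            step = policy_step(state, hint=hint)   # re-generate with guidance
        trajectory.append(step)
        state = state + [step]
        if step == "ANSWER":                       # terminal step reached
            break
    return trajectory

# Toy demo: the policy drifts into a wrong step unless the verifier hints it.
def toy_policy(state, hint=None):
    if hint == "use step B":
        return "B"
    return ["A", "WRONG", "ANSWER"][len(state)]

def toy_verifier(state, step):
    return (step != "WRONG", "use step B")

print(guided_rollout(toy_policy, toy_verifier, []))  # → ['A', 'B', 'ANSWER']
```

The key design point this illustrates is that supervision happens per step during the rollout itself, rather than as a single terminal reward after the trajectory is complete.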