Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) continue to suffer from hallucination and reasoning-path deviation in multimodal reasoning. To address this, we propose an actor–critic-style decoupled framework: a Reasoner module generates stepwise reasoning paths, while a Critic module delivers fine-grained, context-sensitive, natural-language feedback—marking the first adaptation of reinforcement learning’s critic mechanism to multimodal reasoning and replacing scalar rewards with a text-based, preference-optimized Critic. Our method integrates Direct Preference Optimization (DPO), rule-guided preference data construction, dynamic textual policy fine-tuning, and a dual-module collaborative inference architecture. Evaluated on eight benchmarks, our approach outperforms GPT-4V on five, achieving significant gains in reasoning accuracy, computational efficiency, and real-world reliability—particularly in safety-critical applications such as autonomous driving and embodied AI.

📝 Abstract
Vision-language models (VLMs) have shown remarkable advancements in multimodal reasoning tasks. However, they still often generate inaccurate or irrelevant responses due to issues like hallucinated image understandings or unrefined reasoning paths. To address these challenges, we introduce Critic-V, a novel framework inspired by the Actor-Critic paradigm to boost the reasoning capability of VLMs. This framework decouples the reasoning process and critic process by integrating two independent components: the Reasoner, which generates reasoning paths based on visual and textual inputs, and the Critic, which provides constructive critique to refine these paths. In this approach, the Reasoner generates reasoning responses according to text prompts, which can evolve iteratively as a policy based on feedback from the Critic. This interaction process is theoretically grounded in a reinforcement learning framework in which the Critic offers natural language critiques instead of scalar rewards, enabling more nuanced feedback to boost the Reasoner's capability on complex reasoning tasks. The Critic model is trained using Direct Preference Optimization (DPO), leveraging a preference dataset of critiques ranked by Rule-based Reward (RBR) to enhance its critique capabilities. Evaluation results show that the Critic-V framework significantly outperforms existing methods, including GPT-4V, on 5 out of 8 benchmarks, especially regarding reasoning accuracy and efficiency. Combining a dynamic text-based policy for the Reasoner with constructive feedback from the preference-optimized Critic enables a more reliable and context-sensitive multimodal reasoning process. Our approach provides a promising solution to enhance the reliability of VLMs, improving their performance in real-world reasoning-heavy multimodal applications such as autonomous driving and embodied intelligence.
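The abstract states that the Critic is trained with Direct Preference Optimization on pairs of critiques ranked by the Rule-based Reward. As a hedged illustration of that objective, the sketch below computes the standard DPO loss for a single preference pair; it is not the paper's implementation, and the function name, beta value, and log-probability numbers are made up for the example.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative).

    Arguments are summed log-probabilities of the preferred ("chosen") and
    dispreferred ("rejected") critique under the trainable policy (pi_*)
    and a frozen reference model (ref_*); beta scales how far the policy
    may drift from the reference.
    """
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(logits)), written in a numerically stable form
    if logits > 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))

# With no preference margin the loss sits at log(2); a large margin in
# favor of the chosen critique drives it toward 0.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))
print(dpo_loss(pi_chosen=-5.0, pi_rejected=-50.0, ref_chosen=-20.0, ref_rejected=-20.0))
```

Minimizing this over RBR-ranked critique pairs pushes the Critic to assign higher likelihood to the better-ranked critique, which is how a text-valued reward signal can substitute for a scalar one.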
Problem

Research questions and friction points this paper is trying to address.

Enhance VLM accuracy in multimodal reasoning tasks
Address hallucinated image understandings and unrefined reasoning paths
Improve reliability of VLMs for real-world applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples reasoning and critic processes
Uses reinforcement learning with natural language critiques
Trains Critic with Direct Preference Optimization
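The decoupled interaction described above can be sketched as a simple control loop: the Reasoner re-generates its reasoning path conditioned on accumulated natural-language critiques, stopping when the Critic has nothing left to object to. The stub functions below are toy stand-ins for two independent VLM calls; all names and the stopping rule are assumptions made for illustration, not the paper's exact inference procedure.

```python
def reasoner(image, question, feedback_history):
    """Generate a reasoning path conditioned on prior critiques
    (the text prompt evolving as a policy). Toy stub for a VLM call."""
    hints = " ".join(feedback_history)
    return f"Answer to '{question}' given hints: [{hints}]"

def critic(image, question, reasoning):
    """Return natural-language feedback instead of a scalar reward.
    An empty string signals the reasoning path is acceptable. Toy stub."""
    if "check the chart axes" in reasoning:
        return ""
    return "check the chart axes"

def critic_v_inference(image, question, max_rounds=3):
    """Dual-module collaborative inference: alternate Reasoner and Critic
    until the Critic is satisfied or the round budget runs out."""
    feedback_history = []
    reasoning = ""
    for _ in range(max_rounds):
        reasoning = reasoner(image, question, feedback_history)
        feedback = critic(image, question, reasoning)
        if not feedback:                      # Critic is satisfied: stop refining
            return reasoning
        feedback_history.append(feedback)     # critique folded into the next prompt
    return reasoning

print(critic_v_inference(image=None, question="What does the chart show?"))
```

In the toy run, the first reasoning pass draws a critique, the second pass incorporates it, and the loop halts: the same shape as the iterative refinement the framework describes.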
👥 Authors

Di Zhang: Fudan University; Shanghai Artificial Intelligence Laboratory
Jingdi Lei: PhD student, Nanyang Technological University (Vision Language Models, Language Model Reasoning, Machine Learning, Artificial Intelligence)
Junxian Li: NSEC Lab, Shanghai Jiao Tong University (AI Security, Reasoning, Data Mining)
Xunzhi Wang: Nankai University; Shanghai Artificial Intelligence Laboratory
Yujie Liu: Shanghai University; Shanghai Artificial Intelligence Laboratory
Zonglin Yang: Ph.D. in Computer Science, Nanyang Technological University (Natural Language Processing, LLMs for Scientific Discovery, Large Reasoning Models)
Jiatong Li: PhD candidate, Hong Kong Polytechnic University (Natural Language Processing, Bioinformatics, Molecule Discovery)
Weida Wang: Tongji University; Shanghai Artificial Intelligence Laboratory
Suorong Yang: Nanjing University (Computer Vision, Deep Learning, Multimodal Learning)
Jianbo Wu: University of California, Merced; Shanghai Artificial Intelligence Laboratory
Peng Ye: Chinese University of Hong Kong; Shanghai Artificial Intelligence Laboratory
Wanli Ouyang: Shanghai Artificial Intelligence Laboratory
Dongzhan Zhou: Researcher at Shanghai AI Lab (AI4Science, Computer Vision, Deep Learning)