Benchmarking Multimodal CoT Reward Model Stepwise by Visual Program

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing key challenges in multimodal large language model (MLLM) reward modeling—including high annotation costs, coarse-grained single-step rewards, and the absence of dedicated evaluation benchmarks—this paper proposes SVIP, a novel framework for step-wise, vision-program-guided reward modeling. SVIP introduces the first chain-of-thought (CoT) reward modeling paradigm grounded in executable visual programs: it automatically generates task-specific vision code and parses its execution trace to construct fine-grained, multi-dimensional step-level reward signals. To capture complex reward dependencies across modalities, reasoning steps, and reward dimensions, SVIP designs TriAtt-CoT, a triple-attention mechanism integrating cross-modal, inter-step, and intra-dimension modeling. Furthermore, SVIP establishes the first dedicated benchmark for evaluating multimodal CoT reward models. Experiments demonstrate that SVIP significantly improves MLLM training stability and inference consistency, reduces hallucination rates, and achieves state-of-the-art performance across multiple benchmarks.
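The summary's core pipeline idea — treating each executed block of a generated visual program as one CoT step and deriving a multi-dimensional reward from its execution record — could be sketched roughly as below. The function name, trace schema, and the two reward dimensions (`execution`, `grounding`) are illustrative assumptions, not the paper's actual code or reward taxonomy.

```python
def trace_to_step_rewards(trace):
    """Hypothetical sketch: map a visual program's execution trace to
    step-level, multi-dimensional rewards in [0, 1].

    trace: list of dicts, one per executed code block, e.g.
      {"step": 1, "op": "detect(image, 'dog')", "output": [...], "error": None}
    """
    rewards = []
    for record in trace:
        executed = 1.0 if record["error"] is None else 0.0  # did the block run cleanly?
        grounded = 1.0 if record["output"] else 0.0         # did it yield visual evidence?
        rewards.append({
            "step": record["step"],
            "execution": executed,   # illustrative dimension: code validity
            "grounding": grounded,   # illustrative dimension: evidence found
        })
    return rewards
```

Each step thus gets a vector of scores rather than a single scalar, which is what makes the resulting supervision fine-grained and multi-dimensional.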

📝 Abstract
Recent advancements in reward signal usage for Large Language Models (LLMs) are remarkable. However, significant challenges exist when transitioning reward signals to the multimodal domain, including labor-intensive annotations, over-reliance on one-step rewards, and inadequate evaluation. To address these issues, we propose SVIP, a novel approach to automatically train a step-level, multi-dimensional Chain-of-Thought (CoT) reward model. It generates code for solving visual tasks and transforms the analysis of code blocks into the evaluation of CoT steps as training samples. We then train the SVIP-Reward model using a multi-head attention mechanism called TriAtt-CoT. The advantages of SVIP-Reward are evident throughout the entire MLLM pipeline. We also introduce a benchmark for CoT reward model training and testing. Experimental results demonstrate that SVIP-Reward improves MLLM performance across training and inference-time scaling, yielding better results on benchmarks while reducing hallucinations and enhancing reasoning ability.
Problem

Research questions and friction points this paper is trying to address.

Challenges in transitioning reward signals to the multimodal domain
Automating step-level CoT reward model training
Improving MLLM performance and reducing hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated step-level CoT reward model training
Code generation for visual task solutions
Multi-head attention mechanism TriAtt-CoT
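The triple-attention idea behind TriAtt-CoT (cross-modal, inter-step, and intra-dimension modeling) might be sketched as below. This is a minimal NumPy illustration under stated assumptions: plain scaled dot-product attention without learned projections, a `(steps, dimensions, hidden)` feature layout, and a sequential ordering of the three attention passes — none of which are confirmed details of the paper's architecture.

```python
import numpy as np

def attention(q, k, v):
    """Plain scaled dot-product attention (no learned projections, for brevity)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

def triatt_cot(step_feats, image_tokens):
    """Hypothetical TriAtt-CoT-style pass.

    step_feats:   (S, D, d) — S CoT steps x D reward dimensions x hidden size d
    image_tokens: (T, d)    — visual tokens from the image encoder
    """
    S, D, d = step_feats.shape
    # 1) cross-modal: every step/dimension feature attends to the image tokens
    x = step_feats.reshape(S * D, d)
    x = x + attention(x, image_tokens, image_tokens)
    x = x.reshape(S, D, d)
    # 2) inter-step: within each reward dimension, steps attend to each other
    for j in range(D):
        x[:, j] = x[:, j] + attention(x[:, j], x[:, j], x[:, j])
    # 3) intra-dimension: within each step, reward dimensions attend to each other
    for i in range(S):
        x[i] = x[i] + attention(x[i], x[i], x[i])
    return x
```

The point of factoring attention along three axes is that a step's reward should depend on the visual evidence, on neighboring reasoning steps, and on the other reward dimensions of that same step, rather than being scored in isolation.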
Minghe Gao
Zhejiang University
Machine Learning
Xuqi Liu
Zhejiang University
Zhongqi Yue
Nanyang Technological University
Yang Wu
Ant Group
Shuang Chen
Zhejiang University
Juncheng Li
East China Normal University
Super Resolution, Image Restoration, Computer Vision, Medical Image Analysis
Siliang Tang
Professor of Computer Science, Zhejiang University
Natural Language Processing, Cross-media Analysis, Graph Neural Network
Fei Wu
Zhejiang University
Tat-Seng Chua
National University of Singapore
Yueting Zhuang
Zhejiang University