Video-CoM: Interactive Video Reasoning via Chain of Manipulations

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video-understanding multimodal large language models (MLLMs) encode videos as static context and rely entirely on textual reasoning, which precludes re-examining, refocusing on, or verifying visual evidence and limits them to shallow spatio-temporal reasoning. This work introduces an **interactive video reasoning** paradigm that lets a model actively gather dynamic visual evidence through chained visual operations (e.g., frame jumping, zooming, replaying). Methodologically, the authors propose a reasoning-aware Group Relative Policy Optimization (GRPO) framework that combines multi-step instruction tuning, GRPO-based reinforcement learning, and explicit visual-action modeling. Evaluated on nine video reasoning benchmarks, the approach improves over recent state of the art by 3.6% on average while using only 25K supervised fine-tuning and 3K GRPO samples, and it markedly enhances fine-grained spatio-temporal reasoning, interpretability, and training efficiency.
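The "chained visual operations" loop described above can be sketched as an iterative propose/execute cycle: the model either emits a manipulation (which returns a new observation) or a final answer. This is a minimal illustration; the `Manipulation` type, the `propose`/`execute` interfaces, and the step budget are assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Manipulation:
    # Action names follow the summary (jump, zoom, replay); args are
    # action-specific, e.g. {"t": 12.5} for a frame jump.
    name: str
    args: dict

def interactive_reasoning(question: str,
                          propose: Callable[[str, list], Union[Manipulation, str]],
                          execute: Callable[[Manipulation], str],
                          max_steps: int = 6):
    """Iteratively gather visual evidence until the model emits a textual answer."""
    evidence: list = []
    for _ in range(max_steps):
        step = propose(question, evidence)   # model picks the next action, or answers
        if isinstance(step, str):            # a string is treated as the final answer
            return step, evidence
        evidence.append(execute(step))       # run the manipulation, collect evidence
    return propose(question, evidence), evidence  # answer forced at the step budget
```

In practice `propose` would be the MLLM conditioned on the question plus all evidence so far, and `execute` would render the requested frames or crops back into model-readable tokens.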

📝 Abstract
Recent multimodal large language models (MLLMs) have advanced video understanding, yet most still "think about videos": once a video is encoded, reasoning unfolds entirely in text, treating visual input as a static context. This passive paradigm creates a semantic bottleneck: models cannot rewatch, refocus, or verify evidence, leading to shallow visual reasoning on tasks requiring fine-grained spatio-temporal understanding. In this work, we introduce Interactive Video Reasoning, a new paradigm that transforms video into an active cognitive workspace, enabling models to "think with videos". Our model, Video-CoM, reasons through a Chain of Manipulations (CoM), performing iterative visual actions to gather and refine evidence. To support this behavior, we construct Video-CoM-Instruct, an 18K instruction-tuning dataset curated for multi-step manipulation reasoning. Beyond supervised learning, we further optimize the manipulation policy via reinforcement learning with reasoning-aware Group Relative Policy Optimization (GRPO). Unlike prior work that relies solely on sparse answer rewards, our method introduces step-level reasoning rewards, guiding the model toward grounded and consistent reasoning. Video-CoM achieves strong results across nine video reasoning benchmarks, improving average performance by 3.6% over recent state-of-the-art models, while training on only 25K SFT and 3K GRPO video samples, significantly fewer than comparable large-scale models. Ablation studies demonstrate that reasoning-aware rewards improve both accuracy and interpretability. Code: https://github.com/mbzuai-oryx/Video-CoM
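The abstract's "reasoning-aware GRPO" combines the usual sparse answer reward with dense step-level reasoning rewards before computing group-relative advantages. A minimal sketch of that combination follows; the `step_weight` mixing coefficient and the mean-over-steps aggregation are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def group_relative_advantages(answer_rewards: np.ndarray,
                              step_rewards: list,
                              step_weight: float = 0.5,
                              eps: float = 1e-8) -> np.ndarray:
    """Combine per-rollout answer rewards with mean step-level reasoning
    rewards, then z-score the totals within the sampled group (GRPO-style).

    answer_rewards: shape (G,), one sparse reward per rollout in the group.
    step_rewards:   list of G arrays, one reasoning reward per manipulation step.
    """
    step_means = np.array([s.mean() if s.size else 0.0 for s in step_rewards])
    total = answer_rewards + step_weight * step_means   # dense + sparse signal
    return (total - total.mean()) / (total.std() + eps)  # group-relative advantage
```

Relative to answer-only GRPO, the step term gives two rollouts with the same final answer different advantages when one's intermediate manipulations were better grounded, which is the behavior the ablations attribute the accuracy and interpretability gains to.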
Problem

Research questions and friction points this paper is trying to address.

Overcoming passive video understanding by enabling interactive visual reasoning
Addressing semantic bottlenecks through iterative evidence gathering and refinement
Improving fine-grained spatiotemporal understanding via chain of manipulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive video reasoning with iterative visual actions
Instruction tuning dataset for multi-step manipulation reasoning
Reinforcement learning with step-level reasoning rewards