Motion-Grounded Video Reasoning: Understanding and Perceiving Motion at Pixel Level

📅 2024-11-15
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work introduces the novel task of *pixel-level motion-grounded video reasoning*, which generates video segmentation masks in response to natural language questions and therefore requires implicit spatiotemporal reasoning and motion localization. To address the limitations of existing visual grounding methods, particularly their inability to handle Causal, Sequential, Counterfactual, and Descriptive reasoning, the paper proposes a visual-answer generation paradigm covering these four question types. It introduces GROUNDMORE, the first large-scale benchmark (1,715 video clips, 249K object masks) explicitly designed for deep motion reasoning. It further proposes MORA, a unified architecture integrating a multimodal large language model (MLLM), the Segment Anything Model (SAM), and a lightweight temporal localization head, trained jointly on video segmentation and question answering. On GROUNDMORE, MORA achieves an average relative improvement of 21.5% over the strongest baseline, advancing interpretable, fine-grained video motion understanding.

📝 Abstract
In this paper, we introduce Motion-Grounded Video Reasoning, a new motion understanding task that requires generating visual answers (video segmentation masks) according to the input question, and hence needs implicit spatiotemporal reasoning and grounding. This task extends existing spatiotemporal grounding work, which focuses on explicit action/motion grounding, to a more general format by enabling implicit reasoning via questions. To facilitate the development of the new task, we collect a large-scale dataset called GROUNDMORE, which comprises 1,715 video clips and 249K object masks deliberately designed with 4 question types (Causal, Sequential, Counterfactual, and Descriptive) for benchmarking deep and comprehensive motion reasoning abilities. GROUNDMORE uniquely requires models to generate visual answers, providing a more concrete and visually interpretable response than plain text. It evaluates models on both spatiotemporal grounding and reasoning, fostering efforts to address complex challenges in motion-related video reasoning, temporal perception, and pixel-level understanding. Furthermore, we introduce a novel baseline model named Motion-Grounded Video Reasoning Assistant (MORA). MORA incorporates the multimodal reasoning ability of a Multimodal LLM, the pixel-level perception capability of a grounding model (SAM), and the temporal perception ability of a lightweight localization head. MORA achieves respectable performance on GROUNDMORE, outperforming the best existing visual grounding baseline model by a relative average of 21.5%. We hope this novel and challenging task will pave the way for future advancements in robust and general motion understanding via video reasoning segmentation.
Problem

Research questions and friction points this paper is trying to address.

Develop motion understanding via visual answers to questions
Create dataset for benchmarking motion reasoning abilities
Propose model combining reasoning, perception, and temporal abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion-Grounded Video Reasoning for implicit spatiotemporal understanding
Large-scale GROUNDMORE dataset with diverse question types
MORA model combines an MLLM, SAM, and a lightweight temporal localization head
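The three components listed above can be pictured as a simple pipeline: the MLLM reasons over the video and question and emits a grounding embedding, a SAM-style decoder turns that embedding into per-frame masks, and a temporal head restricts the masks to the relevant span. The sketch below is a hypothetical illustration of that data flow, not the paper's actual API; all function names, the toy embedding, and the middle-half localization heuristic are assumptions made for the demo.

```python
# Hypothetical MORA-style pipeline sketch. Every component here is a
# deterministic stand-in so the control flow is runnable; none of these
# interfaces come from the paper.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ReasoningOutput:
    seg_embedding: List[float]  # stand-in for a special [SEG] token embedding
    answer_text: str


def mllm_reason(video_frames: List[str], question: str) -> ReasoningOutput:
    """Stand-in for the multimodal LLM: reasons over video + question."""
    # Toy embedding derived from input sizes, so the demo is deterministic.
    emb = [float(len(video_frames)), float(len(question))]
    return ReasoningOutput(seg_embedding=emb, answer_text="<SEG>")


def sam_decode(frame: str, seg_embedding: List[float]) -> str:
    """Stand-in for the SAM mask decoder: one mask string per frame."""
    return f"mask({frame})"


def localize(seg_embedding: List[float], num_frames: int) -> range:
    """Stand-in for the lightweight temporal localization head."""
    # Toy heuristic: ground the middle half of the clip.
    start, end = num_frames // 4, 3 * num_frames // 4
    return range(start, end)


def mora_pipeline(video_frames: List[str], question: str) -> Dict[int, str]:
    """Visual answer: masks only for frames inside the predicted span."""
    out = mllm_reason(video_frames, question)
    span = localize(out.seg_embedding, len(video_frames))
    return {i: sam_decode(video_frames[i], out.seg_embedding) for i in span}


masks = mora_pipeline([f"f{i}" for i in range(8)],
                      "Who kicks the ball after the pass?")
```

With 8 frames, the toy localization head grounds frames 2 through 5, so the visual answer is a mask per frame in that span rather than a text reply.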