🤖 AI Summary
Current video multimodal large language models (MLLMs) lack rigorous evaluation of pixel-level visual grounding for motion understanding: existing benchmarks suffer from “static-appearance dominance,” where most motion-descriptive tasks can be solved from a single frame, so they fail to assess genuine temporal reasoning. Method: We systematically investigate language–vision alignment for motion patterns and propose MoCentric-Bench, a benchmark comprising four motion-centric probing tasks that explicitly require motion cues over static appearance. It incorporates a strong single-frame baseline, motion–appearance disentanglement analysis, and motion-aware fine-tuning to isolate and quantify motion comprehension. Contribution/Results: Experiments reveal severe deficiencies in existing MLLMs' motion-localization capability. Our approach achieves state-of-the-art performance on MoCentric-Bench, advancing dense spatiotemporal video grounding toward true temporal awareness.
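To make the single-frame baseline concrete, here is a minimal sketch of the kind of diagnostic the summary describes: compare a model's grounding quality when it sees the full clip versus only one frame. This is illustrative only; `segment` stands in for an arbitrary referring-segmentation model and is not an API from the paper.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / max(float(union), 1.0)

def static_dominance_gap(frames, expression, gt_masks, segment):
    """Gap between full-clip and single-frame grounding accuracy.

    `segment(frames, expression)` is assumed to return one boolean mask
    per input frame. A gap near zero means the motion-descriptive
    expression is effectively solvable from static appearance alone.
    """
    clip_preds = segment(frames, expression)             # model sees all frames
    mid = len(frames) // 2
    single_pred = segment([frames[mid]], expression)[0]  # model sees one frame
    clip_iou = np.mean([iou(p, g) for p, g in zip(clip_preds, gt_masks)])
    # Naive baseline: propagate the single-frame mask to every frame.
    single_iou = np.mean([iou(single_pred, g) for g in gt_masks])
    return clip_iou - single_iou
```

A stronger baseline would propagate the single-frame mask with a tracker rather than copying it verbatim, but the comparison logic stays the same.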
📝 Abstract
Multi-modal large language models (MLLMs) have shown impressive generalization across tasks on image and text modalities. While their extension to video has enabled tasks such as video question answering and video captioning, their pixel-level visual grounding abilities are less studied. In this work, we raise the pertinent question of whether motion is used in pixel-level visual grounding and whether video MLLMs can segment objects based on natural language expressions describing their motion patterns. We identify shortcomings in current benchmarks, showing that a single frame often suffices to ground a motion-referring expression without any temporal reasoning. To address this, we introduce four motion-centric probing techniques, designed specifically for the visual grounding task, to study video MLLMs' ability to distinguish true motion from fake motion and to grasp motion order. Consequently, we provide a motion-centric benchmark, MoCentric-Bench. It ensures that video MLLMs are evaluated on their use of the interaction between motion and language rather than on the static appearance cues that dominate existing visual grounding datasets. We further establish strong single-image baselines that match or outperform prior methods. Finally, we explore simple motion-centric adaptation techniques that achieve state-of-the-art performance on MoCentric-Bench. Our motion-centric benchmark, evaluation, and findings challenge future models to improve dense spatiotemporal grounding and pixel-level understanding within videos. Code and datasets will be made publicly available at https://github.com/MSiam/PixFoundation-2.0.git.
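As one illustration of how such probing could work (a hypothetical sketch, not the paper's released evaluation code), a motion-sensitivity probe can perturb the temporal stream, e.g., by reversing, shuffling, or freezing frames, and then measure how much the predicted masks change. `segment` and `iou` are the same assumed helpers as in the sketch above.

```python
import random

def temporal_probes(n: int):
    """Yield (name, frame-index order) pairs for temporal perturbations."""
    yield "reversed", list(range(n - 1, -1, -1))   # flips motion direction
    idx = list(range(n))
    random.shuffle(idx)
    yield "shuffled", idx                          # destroys motion order
    yield "frozen", [n // 2] * n                   # removes motion entirely

def motion_sensitivity(frames, expression, segment, iou):
    """Mean IoU between base and perturbed predictions per source frame.

    Scores near 1.0 mean predictions are unchanged by the temporal
    perturbation, i.e., the model is not using motion cues to ground
    this expression.
    """
    base = segment(frames, expression)
    n = len(frames)
    scores = {}
    for name, order in temporal_probes(n):
        pred = segment([frames[i] for i in order], expression)
        # Compare each perturbed prediction against the base prediction
        # for the same source frame.
        scores[name] = sum(iou(pred[j], base[order[j]]) for j in range(n)) / n
    return scores
```

A model with genuine temporal grounding should score low on "reversed" and "shuffled" for direction- or order-dependent expressions, while remaining stable for purely appearance-based ones.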