🤖 AI Summary
Existing STVG methods suffer from category bias, oversimplified reasoning, and poor language robustness in complex real-world scenes, primarily due to the narrow coverage of current benchmarks. To address this, we introduce OmniGround: the first large-scale spatio-temporal grounding benchmark explicitly designed for realistic, complex scenarios, comprising 3,475 videos across 81 categories and exposing model bottlenecks in small-object detection, occlusion, and intricate spatial relations. We propose a novel forward-backward-refinement annotation protocol and DeepSTG, a four-dimensional quality evaluation framework. Furthermore, we present PG-TAF, a training-free two-stage method that decouples temporal localization from fine-grained spatio-temporal propagation. On OmniGround, PG-TAF achieves +25.6% m_tIoU and +35.6% m_vIoU over prior methods, with consistent gains across four major benchmarks and significantly improved robustness in complex scenes.
📝 Abstract
Spatio-Temporal Video Grounding (STVG) aims to localize target objects in videos based on natural language descriptions. Despite recent advances in Multimodal Large Language Models, a significant gap remains between current models and real-world demands involving diverse objects and complex queries. We attribute this to limited benchmark scope, which causes models to exhibit category bias, oversimplified reasoning, and poor linguistic robustness. To address these limitations, we introduce OmniGround, a comprehensive benchmark with 3,475 videos spanning 81 categories and complex real-world queries. We propose the Forward-Backward-Refinement annotation pipeline, which combines multi-directional tracking with intelligent error correction to produce high-quality labels. We further introduce DeepSTG, a systematic evaluation framework that quantifies dataset quality across four complementary dimensions beyond superficial statistics. Evaluations reveal an average performance drop of 10.4% on complex real-world scenes, particularly those with small or occluded objects and intricate spatial relations. Motivated by these findings, we propose PG-TAF, a training-free two-stage framework that decomposes STVG into high-level temporal grounding and fine-grained spatio-temporal propagation. Experiments demonstrate that PG-TAF achieves 25.6% and 35.6% improvements in m_tIoU and m_vIoU on OmniGround, with consistent gains across four benchmarks.
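For readers unfamiliar with the reported metrics: m_tIoU and m_vIoU are the mean temporal IoU and mean volumetric IoU commonly used in STVG evaluation. Below is a minimal sketch of how the per-sample scores are typically computed in the STVG literature (function names and the dict-based box representation are illustrative, not taken from the paper):

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def box_iou(a, b):
    """IoU between two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def viou(pred_boxes, gt_boxes):
    """vIoU: spatial IoU summed over frames where prediction and
    ground truth overlap temporally, normalized by the union of
    their frame spans. Inputs: dicts mapping frame index -> box."""
    inter_frames = set(pred_boxes) & set(gt_boxes)
    union_frames = set(pred_boxes) | set(gt_boxes)
    if not union_frames:
        return 0.0
    overlap = sum(box_iou(pred_boxes[t], gt_boxes[t]) for t in inter_frames)
    return overlap / len(union_frames)
```

Averaging these per-sample scores over a test set yields m_tIoU and m_vIoU; a prediction that is temporally accurate but spatially loose is penalized by vIoU even when tIoU is high.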