🤖 AI Summary
Video large language models (video LLMs) lack accuracy in zero-shot temporal grounding, which hinders their practical deployment in video understanding and editing. To address this, we propose VTG-LLM, a framework that explicitly injects timestamp knowledge into visual tokens; it introduces absolute-time tokens to handle timestamps without concept shifts and a lightweight, slot-based token compression mechanism that preserves critical temporal structure while accommodating the large number of frames sampled for VTG tasks. Trained on VTG-IT-120K, a re-annotated collection of publicly available VTG datasets, VTG-LLM significantly outperforms existing video LLMs across multiple temporal grounding benchmarks, achieving more robust and precise zero-shot temporal localization and enabling general-purpose, cross-task video interaction without task-specific fine-tuning.
📝 Abstract
Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a given video using linguistic queries, significantly impacting downstream tasks like video browsing and editing. Unlike traditional task-specific models, video large language models (video LLMs) can handle multiple tasks concurrently in a zero-shot manner. Consequently, exploring the application of video LLMs to VTG tasks has become a burgeoning research area. However, despite considerable advancements in video content understanding, video LLMs often struggle to accurately pinpoint timestamps within videos, limiting their effectiveness on VTG tasks. To address this, we introduce VTG-LLM, a model designed to enhance video LLMs' timestamp localization abilities. Our approach includes: (1) effectively integrating timestamp knowledge into visual tokens; (2) incorporating absolute-time tokens to manage timestamp knowledge without concept shifts; and (3) introducing a lightweight, high-performance, slot-based token compression technique designed to accommodate the large number of frames that must be sampled for VTG tasks. Additionally, we present VTG-IT-120K, a collection of publicly available VTG datasets that we have re-annotated to address low-quality annotations. Our comprehensive experiments demonstrate the superior performance of VTG-LLM compared to other video LLM methods across a variety of VTG tasks.
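To make the slot-based compression idea concrete, here is a minimal sketch of how a fixed set of learnable slot queries can cross-attend to a variable-length sequence of frame tokens, yielding a constant-size output regardless of how many frames are sampled. This is an illustrative assumption, not the paper's actual implementation: the function name `slot_compress`, the single-head attention form, and all shapes are hypothetical.

```python
import numpy as np

def slot_compress(frame_tokens, slot_queries):
    """Compress T visual tokens into K slots via single-head cross-attention.

    Hypothetical sketch of slot-based token compression (not VTG-LLM's code):
    frame_tokens: (T, d) tokens from T sampled frames
    slot_queries: (K, d) learnable slot embeddings, with K << T
    returns:      (K, d) compressed tokens, one per slot
    """
    d = frame_tokens.shape[1]
    # Scaled dot-product attention scores: each slot scores every token.
    scores = slot_queries @ frame_tokens.T / np.sqrt(d)   # (K, T)
    scores -= scores.max(axis=1, keepdims=True)           # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)         # softmax over tokens
    # Each slot is a convex combination of the frame tokens.
    return weights @ frame_tokens                         # (K, d)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((96, 8))   # e.g. 96 frame tokens, dim 8
slots = rng.standard_normal((4, 8))     # 4 slots: fixed output budget
out = slot_compress(tokens, slots)
print(out.shape)  # (4, 8) -- constant size, independent of frame count
```

The key property for VTG-style workloads is that the number of tokens handed to the LLM stays fixed (K) even as the number of sampled frames grows, which is what keeps the cost of dense frame sampling manageable.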