VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding

📅 2024-05-22
🏛️ arXiv.org
📈 Citations: 9
Influential: 3
🤖 AI Summary
Video large language models (video LLMs) suffer from insufficient accuracy in zero-shot temporal grounding, hindering their practical deployment in video understanding and editing. To address this, the authors propose VTG-LLM, a framework that explicitly injects timestamp knowledge into visual tokens; it introduces absolute-time tokens to manage timestamp knowledge without concept shifts, and designs a lightweight, slot-based token compression mechanism that preserves critical temporal structure while accommodating the large number of sampled frames that VTG tasks require. Trained on VTG-IT-120K, a re-annotated collection of public VTG datasets, VTG-LLM significantly outperforms existing video LLMs across multiple temporal grounding benchmarks, achieving more robust and precise zero-shot temporal localization and enabling general-purpose, cross-task video interaction without task-specific fine-tuning.

📝 Abstract
Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing. Unlike traditional task-specific models, Video Large Language Models (video LLMs) can handle multiple tasks concurrently in a zero-shot manner. Consequently, exploring the application of video LLMs for VTG tasks has become a burgeoning research area. However, despite considerable advancements in video content understanding, video LLMs often struggle to accurately pinpoint timestamps within videos, limiting their effectiveness in VTG tasks. To address this, we introduce VTG-LLM, a model designed to enhance video LLMs' timestamp localization abilities. Our approach includes: (1) effectively integrating timestamp knowledge into visual tokens; (2) incorporating absolute-time tokens to manage timestamp knowledge without concept shifts; and (3) introducing a lightweight, high-performance, slot-based token compression technique designed to accommodate the demands of a large number of frames to be sampled for VTG tasks. Additionally, we present VTG-IT-120K, a collection of publicly available VTG datasets that we have re-annotated to improve upon low-quality annotations. Our comprehensive experiments demonstrate the superior performance of VTG-LLM in comparison to other video LLM methods across a variety of VTG tasks.
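The slot-based token compression mentioned in the abstract can be pictured, very roughly, as a single cross-attention step in which a small set of learned slot queries pools a large collection of per-frame visual tokens into a fixed token budget. The sketch below is an illustrative approximation, not the paper's actual implementation; all names, shapes, and the use of random (rather than learned) slot queries are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_compress(frame_tokens, slot_queries):
    """Compress T frames of P patch tokens each (dim D) into K slot tokens.

    Each slot query attends over all T*P frame tokens, so the output size
    is fixed at K regardless of how many frames were sampled.
    """
    T, P, D = frame_tokens.shape
    tokens = frame_tokens.reshape(T * P, D)                # flatten time and space
    scores = slot_queries @ tokens.T / np.sqrt(D)          # (K, T*P) similarity
    attn = softmax(scores, axis=-1)                        # attention weights per slot
    return attn @ tokens                                   # (K, D) compressed tokens

rng = np.random.default_rng(0)
frames = rng.normal(size=(96, 16, 64))   # e.g. 96 sampled frames, 16 patches, dim 64
slots = rng.normal(size=(8, 64))         # 8 slot queries (learned in practice)
compressed = slot_compress(frames, slots)
print(compressed.shape)  # (8, 64)
```

The point of the design is that the LLM's context cost depends only on the number of slots, not on the number of frames, which is what makes dense frame sampling for temporal grounding affordable.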
Problem

Research questions and friction points this paper is trying to address.

Video Large Language Model
Temporal Localization
Video Browsing and Editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal Integration
Video Time Localization
Multi-frame Processing
Yongxin Guo
Tencent PCG, School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen)
Jingyu Liu
Tencent PCG
Mingda Li
Tencent PCG
Xiaoying Tang
School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen), The Shenzhen Institute of Artificial Intelligence and Robotics for Society, The Guangdong Provincial Key Laboratory of Future Networks of Intelligence
Xi Chen
Tencent PCG
Bo Zhao
Tencent PCG