VideoExpert: Augmented LLM for Temporal-Sensitive Video Understanding

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) rely heavily on linguistic priors while neglecting visual temporal cues in time-sensitive video understanding tasks, leading to inaccurate event timestamp localization. To address this, we propose a dual-expert parallel architecture comprising temporally and spatially specialized experts with isolated parameters and synergistic interaction. Our approach introduces a novel spatial compression module that efficiently preserves salient visual information while enabling semantic disentanglement, and integrates high-frame-rate token compression, a lightweight temporal prediction head, spatial token representations, and a collaborative token mechanism. Evaluated on multiple video temporal grounding benchmarks, our method achieves state-of-the-art performance, significantly improving timestamp localization accuracy. The framework demonstrates strong generalizability across diverse video understanding tasks as well as effective task-specific adaptability.

📝 Abstract
The core challenge in video understanding lies in perceiving dynamic content changes over time. However, multimodal large language models struggle with temporal-sensitive video tasks, which require generating timestamps to mark the occurrence of specific events. Existing strategies require MLLMs to generate absolute or relative timestamps directly. We observe that such MLLMs tend to rely more on language patterns than on visual cues when generating timestamps, which degrades their performance. To address this problem, we propose VideoExpert, a general-purpose MLLM suitable for several temporal-sensitive video tasks. Inspired by the expert concept, VideoExpert integrates two parallel modules: the Temporal Expert and the Spatial Expert. The Temporal Expert is responsible for modeling time sequences and performing temporal grounding. It processes high-frame-rate yet compressed tokens to capture dynamic variations in videos and includes a lightweight prediction head for precise event localization. The Spatial Expert focuses on content detail analysis and instruction following. It handles specially designed spatial tokens and language input, aiming to generate content-related responses. These two experts collaborate seamlessly via a special token, ensuring coordinated temporal grounding and content generation. Notably, the Temporal and Spatial Experts maintain independent parameter sets. By offloading temporal grounding from content generation, VideoExpert prevents text-pattern biases in timestamp predictions. Moreover, we introduce a Spatial Compress module to obtain spatial tokens. This module filters and compresses patch tokens while preserving key information, delivering compact yet detail-rich input for the Spatial Expert. Extensive experiments demonstrate the effectiveness and versatility of VideoExpert.
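The Spatial Compress idea described in the abstract (filter patch tokens, keep salient ones, summarize the rest) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the L2-norm saliency score, the keep ratio, and the mean-pooled residual token are all assumptions made for the sketch.

```python
import numpy as np

def spatial_compress(patch_tokens, keep_ratio=0.25):
    """Hypothetical sketch of a Spatial Compress-style module.

    Scores patch tokens by L2 norm (a stand-in saliency proxy; the
    paper's actual scoring is not specified here), keeps the top
    fraction unchanged, and mean-pools the remainder into a single
    summary token, yielding a compact yet detail-preserving sequence.
    """
    n = patch_tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    scores = np.linalg.norm(patch_tokens, axis=1)
    order = np.argsort(-scores)                    # most salient first
    kept = patch_tokens[order[:k]]                 # salient tokens, passed through
    pooled = patch_tokens[order[k:]].mean(axis=0, keepdims=True)  # residual summary
    return np.concatenate([kept, pooled], axis=0)

tokens = np.random.randn(196, 64)   # e.g. a 14x14 ViT patch grid, 64-d features
compressed = spatial_compress(tokens)
print(compressed.shape)  # (50, 64): 49 kept + 1 pooled
```

The point of the sketch is the interface: a long patch-token sequence goes in, a much shorter sequence that still carries the salient detail comes out, which is what the Spatial Expert consumes.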
Problem

Research questions and friction points this paper is trying to address.

MLLMs struggle with temporal-sensitive video tasks
Existing methods rely on language over visual cues
Need precise event localization and content generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Temporal and Spatial Experts modules
Uses high-frame-rate tokens for dynamic variations
Introduces Spatial Compress for detail-rich input
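The collaborative special-token handoff between the two experts can be sketched as below. This is an illustrative toy, not the paper's method: the token id `SEG`, the linear prediction head, and the decode loop are all assumptions standing in for the actual mechanism.

```python
import numpy as np

SEG = -1  # hypothetical id for the special grounding token (illustrative)

def decode_with_grounding(text_ids, temporal_feats, head_w):
    """Sketch of the collaborative-token idea: wherever the content
    stream (Spatial Expert) emits the special token, a lightweight
    linear head over the Temporal Expert's features supplies a
    (start, end) timestamp instead of text, so timestamp values are
    never generated as language tokens."""
    out = []
    for i, tok in enumerate(text_ids):
        if tok == SEG:
            start, end = temporal_feats[i] @ head_w  # temporal prediction head
            out.append((float(start), float(end)))
        else:
            out.append(tok)
    return out

feats = np.ones((3, 4))              # one temporal feature per output position
head = np.full((4, 2), 0.5)          # maps a 4-d feature to (start, end)
result = decode_with_grounding([5, SEG, 7], feats, head)
print(result)  # [5, (2.0, 2.0), 7]
```

The design point this illustrates: because the timestamp comes from a regression head rather than from the text decoder, language-pattern biases cannot leak into the predicted times, which is the parameter-isolation argument the bullets above make.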
Henghao Zhao
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Ge-Peng Ji
Australian National University
Multimodal AI · Medical AI · Computer Vision
Rui Yan
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Huan Xiong
Harbin Institute of Technology
Combinatorics · Machine Learning
Zechao Li
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China