When and What: Diffusion-Grounded VideoLLM with Entity Aware Segmentation for Long Video Understanding

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video large language models (Video LLMs) lack fine-grained temporal awareness for long-video understanding: timestamps are encoded only implicitly, frame-order modeling is weak, and vision–language alignment drifts away from key entities. To address this, we propose three core innovations: (1) diffusion-based temporal latent encoding, which explicitly models event boundaries and continuity; (2) object-anchored representations, which enforce entity-centric cross-modal alignment; and (3) a mixed token architecture that jointly integrates explicit discrete timestamp tokens with semantic tokens. Leveraging entity-aware video segmentation and refined cross-modal alignment, our approach significantly improves temporal localization accuracy and entity-interaction reasoning. Extensive experiments demonstrate state-of-the-art performance on benchmarks including Charades-STA, NExT-GQA, and multiple VideoQA datasets, outperforming leading Video LLMs by substantial margins.

📝 Abstract
Understanding videos requires more than answering open-ended questions; it demands the ability to pinpoint when events occur and how entities interact across time. While recent Video LLMs have achieved remarkable progress in holistic reasoning, they remain coarse in temporal perception: timestamps are encoded only implicitly, frame-level features are weak at capturing continuity, and vision–language alignment often drifts from the entities of interest. In this paper, we present Grounded VideoDiT, a Video LLM designed to overcome these limitations through three key innovations. First, a Diffusion Temporal Latent (DTL) encoder enhances boundary sensitivity and maintains temporal consistency. Second, object-grounded representations explicitly bind query entities to localized visual evidence, strengthening alignment. Third, a mixed token scheme with discrete temporal tokens provides explicit timestamp modeling, enabling fine-grained temporal reasoning. Together, these designs equip Grounded VideoDiT with robust grounding capabilities, as validated by state-of-the-art results on Charades-STA, NExT-GQA, and multiple VideoQA benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing temporal perception in video understanding
Explicitly binding entities to visual evidence
Enabling fine-grained temporal reasoning with timestamps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Temporal Latent (DTL) encoder enhances boundary sensitivity
Object-grounded representations bind entities to visual evidence
Mixed token scheme enables explicit timestamp modeling
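The mixed token idea can be illustrated with a small sketch: quantize each frame's timestamp into one of a fixed number of discrete temporal tokens, then interleave those tokens with the frame's semantic tokens in the input sequence. Note this is a minimal illustration of the general technique; the function names, the bin count, and the token format (`<T_k>`) are assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a mixed token scheme: discrete timestamp tokens
# interleaved with per-frame semantic tokens. All names and quantization
# choices here are hypothetical, not taken from the paper.

def quantize_time(t_sec, video_len_sec, n_bins=100):
    """Map a continuous timestamp to a discrete temporal token <T_k>."""
    k = min(int(t_sec / video_len_sec * n_bins), n_bins - 1)
    return f"<T_{k}>"

def mix_tokens(frame_tokens, frame_times, video_len_sec, n_bins=100):
    """Prefix each frame's semantic tokens with its discrete time token."""
    seq = []
    for toks, t in zip(frame_tokens, frame_times):
        seq.append(quantize_time(t, video_len_sec, n_bins))
        seq.extend(toks)
    return seq

# Two frames of a 30 s clip, each with placeholder semantic tokens.
frames = [["<f0a>", "<f0b>"], ["<f1a>", "<f1b>"]]
times = [0.0, 15.0]
print(mix_tokens(frames, times, video_len_sec=30.0, n_bins=10))
# -> ['<T_0>', '<f0a>', '<f0b>', '<T_5>', '<f1a>', '<f1b>']
```

Because the timestamp tokens come from a small fixed vocabulary, the model can emit them directly when answering "when" questions, which is what makes the temporal grounding explicit rather than implicit in frame order.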