🤖 AI Summary
Current video large language models (Video LLMs) lack fine-grained temporal awareness for long-video understanding: timestamps are encoded only implicitly, frame-order modeling is weak, and vision–language alignment drifts away from the key entities. To address this, the paper proposes three core innovations: (1) a Diffusion Temporal Latent (DTL) encoder that explicitly models event boundaries and temporal continuity; (2) object-grounded representations that enforce entity-centric cross-modal alignment; and (3) a mixed token scheme that jointly integrates explicit discrete timestamp tokens with semantic tokens. By combining entity-aware video representation with refined cross-modal alignment, the approach significantly improves temporal localization accuracy and entity-interaction reasoning. Extensive experiments demonstrate state-of-the-art performance on Charades-STA, NExT-GQA, and multiple VideoQA benchmarks, outperforming leading Video LLMs by substantial margins.
📝 Abstract
Understanding videos requires more than answering open-ended questions; it demands the ability to pinpoint when events occur and how entities interact across time. While recent Video LLMs have achieved remarkable progress in holistic reasoning, they remain coarse in temporal perception: timestamps are encoded only implicitly, frame-level features are weak in capturing continuity, and vision–language alignment often drifts from the entities of interest. In this paper, we present Grounded VideoDiT, a Video LLM designed to overcome these limitations through three key innovations. First, a Diffusion Temporal Latent (DTL) encoder enhances boundary sensitivity and maintains temporal consistency. Second, object-grounded representations explicitly bind query entities to localized visual evidence, strengthening alignment. Third, a mixed token scheme with discrete temporal tokens provides explicit timestamp modeling, enabling fine-grained temporal reasoning. Together, these designs equip Grounded VideoDiT with robust grounding capabilities, as validated by state-of-the-art results on Charades-STA, NExT-GQA, and multiple VideoQA benchmarks.
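To make the third idea concrete, a mixed token sequence can be built by quantizing each frame's timestamp into a discrete bin and interleaving the resulting time token with that frame's semantic (visual) tokens. The sketch below is purely illustrative: the function names, the bin count, and the `time_token_base` vocabulary offset are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of a mixed token scheme: discrete timestamp tokens
# interleaved with per-frame semantic tokens. All names, IDs, and the
# vocabulary offset are hypothetical, not taken from the paper.

def quantize_time(t: float, duration: float, num_bins: int = 100) -> int:
    """Map a continuous timestamp (seconds) to one of num_bins discrete bins."""
    return min(int(t / duration * num_bins), num_bins - 1)

def mix_tokens(frame_tokens, timestamps, duration, time_token_base=32000):
    """Interleave an explicit <time_k> token before each frame's semantic tokens.

    frame_tokens: list of per-frame token-id lists (visual/semantic tokens)
    timestamps:   list of frame timestamps in seconds, same length
    """
    mixed = []
    for toks, t in zip(frame_tokens, timestamps):
        # Explicit timestamp token: lets the model read and emit times directly.
        mixed.append(time_token_base + quantize_time(t, duration))
        # Semantic tokens describing the frame's visual content.
        mixed.extend(toks)
    return mixed

# Example: three frames sampled at 0 s, 5 s, and 9.9 s of a 10 s clip.
seq = mix_tokens([[1, 2], [3, 4], [5, 6]], [0.0, 5.0, 9.9], duration=10.0)
# seq -> [32000, 1, 2, 32050, 3, 4, 32099, 5, 6]
```

Because timestamps become ordinary vocabulary items, temporal grounding reduces to token prediction, which is one plausible way an LLM can localize events without a separate regression head.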