🤖 AI Summary
Current multimodal large language models (MLLMs) for video understanding suffer from two critical limitations: inaccurate temporal alignment and severe cross-scene hallucination. To address these, we propose VideoNarrator, a training-free, zero-shot collaborative pipeline for video narration generation. It dynamically orchestrates off-the-shelf MLLMs and vision-language models (VLMs), assigning them specialized roles (description generation, contextual modeling, and consistency verification) to enable fine-grained temporal alignment in dense video captioning. Crucially, VideoNarrator requires no parameter updates: it relies solely on pre-trained models and modular tools, yet substantially suppresses hallucination and improves temporal fidelity. Experiments demonstrate state-of-the-art performance on video summarization and video question answering benchmarks, with strong robustness on long-duration videos and unseen scenes, making VideoNarrator a readily deployable solution for real-world applications such as advertising analysis.
📝 Abstract
In this paper, we introduce VideoNarrator, a novel training-free pipeline designed to generate dense video captions that offer a structured snapshot of video content. These captions provide detailed narrations with precise timestamps, capturing the nuances present in each segment of the video. Despite advancements in multimodal large language models (MLLMs) for video comprehension, these models often struggle to produce temporally aligned narrations and tend to hallucinate, particularly in unfamiliar scenarios. VideoNarrator addresses these challenges through a flexible pipeline in which off-the-shelf MLLMs and vision-language models (VLMs) can function as caption generators, context providers, or caption verifiers. Our experimental results demonstrate that the synergistic interaction of these components significantly enhances the quality and accuracy of video narrations, effectively reducing hallucinations and improving temporal alignment. This structured approach not only enhances video understanding but also facilitates downstream tasks such as video summarization and video question answering, and can potentially be extended to advertising and marketing applications.
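The generator / context-provider / verifier collaboration described in the abstract can be pictured as a simple orchestration loop over video segments. The sketch below is illustrative only: the model calls (`generate_caption`, `verify_caption`) are hypothetical stand-in stubs, not the paper's actual MLLM/VLM interfaces, and the retry-on-failed-verification policy is an assumption about how such a pipeline might be wired.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float            # segment start time in seconds
    end: float              # segment end time in seconds
    frames: list = field(default_factory=list)  # placeholder for decoded frames

# Hypothetical stand-ins for off-the-shelf MLLM/VLM calls.
def generate_caption(segment, context):
    """Caption generator: describe one segment, conditioned on prior captions."""
    prior = context[-1] if context else "opening scene"
    return f"[{segment.start:.1f}-{segment.end:.1f}s] continues from: {prior}"

def verify_caption(caption, segment):
    """Caption verifier: a toy consistency check on the timestamp."""
    return f"{segment.start:.1f}" in caption

def narrate(segments, max_retries=2):
    """Collaborative loop: accepted captions double as context for later segments."""
    context, narration = [], []
    for seg in segments:
        for _ in range(max_retries + 1):
            caption = generate_caption(seg, context)
            if verify_caption(caption, seg):  # keep only verified captions
                break
        narration.append(caption)
        context.append(caption)  # contextual modeling for subsequent segments
    return narration
```

A call like `narrate([Segment(0.0, 5.0), Segment(5.0, 10.0)])` yields one timestamped caption per segment, with each caption grounded in the accepted captions that precede it.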