VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context Videos

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of applying retrieval-augmented generation (RAG) to extremely long videos. We propose the first RAG framework designed for arbitrarily long video sequences. Methodologically, we introduce a dual-channel architecture: (1) a graph-structured textual knowledge grounding channel that captures cross-video semantic relationships via knowledge graph construction; and (2) a multimodal contextual encoding channel that efficiently preserves fine-grained visual features through hierarchical video encoding. The approach integrates graph neural networks, hierarchical video representation learning, knowledge graph induction, and multimodal retrieval. Evaluated on the LongerVideos benchmark (160+ videos totaling over 134 hours), the framework significantly outperforms existing RAG and long-video understanding methods, and is the first to jointly enable cross-video knowledge reasoning and holistic semantic modeling. The code and dataset are publicly released.

📝 Abstract
Retrieval-Augmented Generation (RAG) has demonstrated remarkable success in enhancing Large Language Models (LLMs) through external knowledge integration, yet its application has primarily focused on textual content, leaving the rich domain of multi-modal video knowledge predominantly unexplored. This paper introduces VideoRAG, the first retrieval-augmented generation framework specifically designed for processing and understanding extremely long-context videos. Our core innovation lies in its dual-channel architecture that seamlessly integrates (i) graph-based textual knowledge grounding for capturing cross-video semantic relationships, and (ii) multi-modal context encoding for efficiently preserving visual features. This novel design empowers VideoRAG to process unlimited-length videos by constructing precise knowledge graphs that span multiple videos while maintaining semantic dependencies through specialized multi-modal retrieval paradigms. Through comprehensive empirical evaluation on our proposed LongerVideos benchmark, comprising over 160 videos totaling 134+ hours across lecture, documentary, and entertainment categories, VideoRAG demonstrates substantial performance gains over existing RAG alternatives and long video understanding methods. The source code of the VideoRAG implementation and the benchmark dataset are openly available at: https://github.com/HKUDS/VideoRAG.
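To make the dual-channel idea concrete, here is a minimal sketch of how graph-based textual retrieval and visual-embedding retrieval could be fused at query time. All names, data structures, and the weighted-sum fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual-channel video retriever; the graph construction,
# toy embeddings, and fusion rule are assumptions for illustration only.
from collections import defaultdict
import math

# Channel 1: graph-structured textual knowledge grounding.
# Entities extracted from transcripts become graph nodes mapped to the
# clips they appear in, allowing relationships to span multiple videos.
knowledge_graph = defaultdict(set)

def add_clip_entities(clip_id, entities):
    for e in entities:
        knowledge_graph[e].add(clip_id)

add_clip_entities("lecture1_clip3", ["transformer", "attention"])
add_clip_entities("lecture2_clip7", ["attention", "retrieval"])

def graph_retrieve(query_entities):
    """Score clips by how many query entities they share in the graph."""
    scores = defaultdict(int)
    for e in query_entities:
        for clip in knowledge_graph.get(e, ()):
            scores[clip] += 1
    return scores

# Channel 2: multi-modal context encoding.
# Each clip keeps a (toy) visual embedding; retrieval is cosine similarity.
visual_index = {
    "lecture1_clip3": [0.9, 0.1, 0.0],
    "lecture2_clip7": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def visual_retrieve(query_vec):
    return {clip: cosine(query_vec, v) for clip, v in visual_index.items()}

def dual_channel_retrieve(query_entities, query_vec, alpha=0.5):
    """Fuse the two channels with a simple weighted sum (assumed rule)."""
    g, v = graph_retrieve(query_entities), visual_retrieve(query_vec)
    clips = set(g) | set(v)
    fused = {c: alpha * g.get(c, 0) + (1 - alpha) * v.get(c, 0.0) for c in clips}
    return sorted(fused, key=fused.get, reverse=True)

ranked = dual_channel_retrieve(["attention", "retrieval"], [0.1, 0.9, 0.0])
print(ranked)  # clips ranked best-first by the fused score
```

The retrieved clips would then be passed to the LLM as context for answer generation; in the real system both channels operate over hierarchical encodings of hours-long video rather than toy vectors.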
Problem

Research questions and friction points this paper is trying to address.

Video Integration
Retrieval-Enhanced Generation
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

VideoRAG
Retrieval-Augmented Generation
Long Video Processing