Seeing Through the MiRAGE: Evaluating Multimodal Retrieval Augmented Generation

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-centric RAG evaluation methods are ill-suited to multimodal and reasoning-intensive scenarios, in particular failing to verify cross-modal information provenance and citation support. Method: We propose MiRAGE, the first evaluation framework tailored to multimodal RAG (covering heterogeneous sources such as audio, video, and images), built on a claim-centric paradigm. It defines two core metrics: InfoF1 (measuring factual accuracy and information coverage) and CiteF1 (assessing citation support and completeness). MiRAGE also introduces automatic variants of itself and of three textual RAG metrics (ALCE, ARGUE, RAGAS) adapted to the multimodal setting, enabling hybrid human-automated evaluation. Results: Experiments show that human-applied MiRAGE scores correlate strongly with extrinsic quality judgments, while systematically exposing the biases of text-only metrics in multimodal settings. MiRAGE establishes a benchmark for multimodal RAG evaluation and is released as an open-source toolkit.

📝 Abstract
We introduce MiRAGE, an evaluation framework for retrieval-augmented generation (RAG) from multimodal sources. As audiovisual media becomes a prevalent source of information online, it is essential for RAG systems to integrate information from these sources into generation. However, existing evaluations for RAG are text-centric, limiting their applicability to multimodal, reasoning-intensive settings because they do not verify information against sources. MiRAGE is a claim-centric approach to multimodal RAG evaluation, consisting of InfoF1, evaluating factuality and information coverage, and CiteF1, measuring citation support and completeness. We show that MiRAGE, when applied by humans, strongly aligns with extrinsic quality judgments. We additionally introduce automatic variants of MiRAGE and three prominent TextRAG metrics -- ALCE, ARGUE, and RAGAS -- demonstrating the limitations of text-centric work and laying the groundwork for automatic evaluation. We release open-source implementations and outline how to assess multimodal RAG.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal retrieval-augmented generation systems
Addressing limitations of text-centric RAG evaluations
Measuring factuality and citation support for audiovisual sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

MiRAGE evaluates multimodal retrieval-augmented generation
Uses InfoF1 and CiteF1 metrics for assessment
Introduces automatic metric variants that expose text-centric limitations
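The claim-centric metrics above are F1-style scores over atomic claims. The paper defines InfoF1 and CiteF1 precisely; as a rough illustration only, the sketch below computes a generic claim-level F1, where the verifier function `supported` and all names are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a claim-level F1 in the spirit of MiRAGE's
# InfoF1/CiteF1. The exact definitions are in the paper; the `supported`
# verifier (human judgment or a model) and these names are assumed here.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def claim_f1(generated_claims, reference_claims, supported) -> float:
    """Claim-centric F1.

    Precision: fraction of generated claims supported by the references
    (factuality). Recall: fraction of reference claims covered by the
    generation (information coverage). `supported(claim, claims)` returns
    True if `claim` is entailed by any claim in `claims`.
    """
    if not generated_claims or not reference_claims:
        return 0.0
    precision = sum(
        supported(c, reference_claims) for c in generated_claims
    ) / len(generated_claims)
    recall = sum(
        supported(c, generated_claims) for c in reference_claims
    ) / len(reference_claims)
    return f1(precision, recall)
```

With exact string matching as a stand-in verifier, two claim sets sharing half their claims score 0.5; a real system would replace the verifier with entailment checks against the (possibly audiovisual) sources.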