On The Reproducibility Limitations of RAG Systems

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
RAG systems suffer from irreproducibility when updating dynamic scientific knowledge, owing to non-determinism in their retrieval components. To address this, we propose ReproRAG, the first end-to-end reproducibility evaluation framework tailored for RAG. It systematically decouples key factors, including embedding models, retrieval algorithms, precision configurations, and hardware environments, and introduces quantitative metrics (Exact Match Rate, Jaccard Similarity, and Kendall's Tau) to measure result uncertainty. A large-scale empirical evaluation reveals that embedding model selection has the most significant impact on reproducibility; cross-hardware and distributed-environment assessments further confirm the critical role of each component. The framework is publicly released as an open-source tool, providing a standardized benchmark and empirical foundation for building trustworthy scientific AI systems.

📝 Abstract
Retrieval-Augmented Generation (RAG) is increasingly employed in generative AI-driven scientific workflows to integrate rapidly evolving scientific knowledge bases, yet its reliability is frequently compromised by non-determinism in its retrieval components. This paper introduces ReproRAG, a comprehensive benchmarking framework designed to systematically measure and quantify the reproducibility of vector-based retrieval systems. ReproRAG investigates sources of uncertainty across the entire pipeline, including embedding models, numerical precision, retrieval algorithms, hardware configurations, and distributed execution environments. Using a suite of metrics such as Exact Match Rate, Jaccard Similarity, and Kendall's Tau, the framework characterizes the trade-offs between reproducibility and performance. Our large-scale empirical study reveals critical insights; for instance, the choice of embedding model has a remarkable impact on RAG reproducibility. The open-sourced ReproRAG framework gives researchers and engineers practical tools to validate deployments, benchmark reproducibility, and make informed design decisions, thereby fostering more trustworthy AI for science.
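The first metric named in the abstract can be illustrated with a minimal sketch. The retrieval runs below are simulated lists of document IDs, not output from ReproRAG (whose API is not shown here); Exact Match Rate is computed, under that assumption, as the fraction of repeated runs whose ranked top-k list is identical to a baseline run.

```python
def exact_match_rate(runs):
    """Fraction of runs whose ranked result list equals the first run's.

    An exact match requires the same documents in the same order, so a
    reordered-but-identical set still counts as a mismatch.
    """
    baseline = runs[0]
    return sum(run == baseline for run in runs) / len(runs)

# Four simulated top-3 retrievals for the same query.
runs = [
    ["doc3", "doc1", "doc7"],
    ["doc3", "doc1", "doc7"],
    ["doc3", "doc7", "doc1"],  # same set, different order: not an exact match
    ["doc3", "doc1", "doc7"],
]
print(exact_match_rate(runs))  # → 0.75
```

Because order matters, this metric is the strictest of the three: it flags any run-to-run perturbation, even one that would not change the documents seen by the generator.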
Problem

Research questions and friction points this paper is trying to address.

Addressing reproducibility limitations in Retrieval-Augmented Generation systems
Systematically measuring reproducibility of vector-based retrieval components
Investigating uncertainty sources across embedding models and algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ReproRAG framework for benchmarking reproducibility
Investigates uncertainty sources across embedding models and algorithms
Uses metrics like Exact Match Rate and Jaccard Similarity
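The two softer metrics listed above can be sketched as follows; the run data and function names are illustrative, not ReproRAG's actual interface. Jaccard Similarity measures set overlap between two top-k result lists regardless of order, while Kendall's Tau measures rank agreement over the items the lists share.

```python
from itertools import combinations

def jaccard(a, b):
    """Set overlap of two top-k result lists, ignoring rank."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def kendall_tau(a, b):
    """Rank agreement over items common to both lists, in [-1, 1].

    Counts item pairs ordered the same way in both lists (concordant)
    versus pairs ordered oppositely (discordant).
    """
    common = [x for x in a if x in b]
    pairs = list(combinations(common, 2))
    if not pairs:
        return 1.0
    concordant = sum(
        (a.index(x) < a.index(y)) == (b.index(x) < b.index(y))
        for x, y in pairs
    )
    return 2 * concordant / len(pairs) - 1

# Two simulated top-4 retrievals for the same query.
run_a = ["doc3", "doc1", "doc7", "doc2"]
run_b = ["doc3", "doc7", "doc1", "doc5"]
print(jaccard(run_a, run_b))      # → 0.6 (3 shared docs out of 5 total)
print(kendall_tau(run_a, run_b))  # → 0.333… (doc1/doc7 swap ranks)
```

Together the three metrics separate failure modes: Exact Match Rate catches any deviation, Jaccard isolates changes in *which* documents are retrieved, and Kendall's Tau isolates changes in *how they are ordered*.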