🤖 AI Summary
Existing RAG systems exhibit weak contextual understanding, insufficient modeling of cross-turn dependencies, and sharp performance degradation in later dialogue turns. Method: We introduce MTRAG—an end-to-end, human-generated multi-turn RAG benchmark—covering four domains, 110 conversations (avg. 7.7 turns), and 842 tasks. It systematically covers four core challenges: non-standalone questions, unanswerable questions, cross-turn semantic dependencies, and cross-domain generalization. We explore an automated evaluation paradigm leveraging LLM-as-a-Judge and synthetic-data generation, integrated with human verification, multi-dimensional manual evaluation protocols, and a standardized RAG pipeline interface. Contribution/Results: Human and automatic evaluations reveal that even state-of-the-art LLM RAG systems struggle, with quality degrading notably in later turns. MTRAG is publicly released, establishing a rigorous, scalable, and reproducible standard for evaluating multi-turn RAG systems.
📝 Abstract
Retrieval-augmented generation (RAG) has recently become a popular application area for Large Language Models (LLMs). Evaluating LLMs on multi-turn RAG conversations, where the system must generate a response to a question in the context of a preceding conversation, is an important and often overlooked task that poses several additional challenges. We present MTRAG: an end-to-end, human-generated multi-turn RAG benchmark that reflects several real-world properties across diverse dimensions for evaluating the full RAG pipeline. MTRAG contains 110 conversations averaging 7.7 turns each across four domains, for a total of 842 tasks. We also explore automation paths via synthetic data and LLM-as-a-Judge evaluation. Our human and automatic evaluations show that even state-of-the-art LLM RAG systems struggle on MTRAG. We demonstrate the need for strong retrieval and generation systems that can handle later turns, unanswerable questions, non-standalone questions, and multiple domains. MTRAG is available at https://github.com/ibm/mt-rag-benchmark.
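To make the evaluation setting concrete, the loop below sketches how a multi-turn RAG conversation of the kind MTRAG contains might be scored turn by turn, with each question interpreted in the context of the preceding history. This is a minimal illustration, not the benchmark's actual API: `Turn`, `Conversation`, `retrieve`, `generate`, and the toy judge are all hypothetical stand-ins (a real setup would use a retriever over a corpus, an LLM generator, and an LLM-as-a-Judge).

```python
# Hypothetical sketch of scoring an MTRAG-style multi-turn RAG
# conversation. All names here are illustrative, not the benchmark's API.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Turn:
    question: str
    reference_answer: Optional[str]  # None marks an unanswerable question


@dataclass
class Conversation:
    domain: str
    turns: List[Turn] = field(default_factory=list)


def retrieve(question: str, history: List[Tuple[str, str]]) -> List[str]:
    # Stub retriever: a real system would rewrite non-standalone
    # questions using the conversation history before querying a corpus.
    return [f"passage about: {question}"]


def generate(question: str, passages: List[str],
             history: List[Tuple[str, str]]) -> str:
    # Stub generator: a real system would call an LLM here, abstaining
    # ("I DON'T KNOW") when the passages do not support an answer.
    return "I DON'T KNOW" if not passages else f"answer to: {question}"


def evaluate(conv: Conversation) -> float:
    """Score each turn in context; later turns depend on earlier ones."""
    history: List[Tuple[str, str]] = []
    correct = 0
    for turn in conv.turns:
        passages = retrieve(turn.question, history)
        answer = generate(turn.question, passages, history)
        # Toy judge: unanswerable turns are "correct" only if the
        # system abstains; an LLM-as-a-Judge would replace this check.
        if turn.reference_answer is None:
            correct += answer == "I DON'T KNOW"
        else:
            correct += answer != "I DON'T KNOW"
        history.append((turn.question, answer))
    return correct / len(conv.turns)
```

The abstention check mirrors one of the benchmark's stated challenges: a system that always answers is penalized on unanswerable questions, while one that always abstains fails the answerable ones.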