RMIT-ADM+S at the SIGIR 2025 LiveRAG Challenge

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited relevance and faithfulness of Retrieval-Augmented Generation (RAG) in real-time scenarios, this paper proposes the Generation-Retrieval-Augmented Generation (GRAG) framework. GRAG first generates a hypothetical answer and uses it, alongside the original question, as a dual input to the retrieval phase; it then employs a large language model for pointwise re-ranking of the retrieved passages before producing the final response. The method integrates query variant generation, question decomposition, and prompt engineering strategies. The authors further introduce a systematic Grid of Points (GoP) experimental design and an N-way ANOVA-based analysis to rigorously evaluate interactions across component configurations. On the private leaderboard of the LiveRAG 2025 Challenge, GRAG achieved a Relevance score of 1.199 and a Faithfulness score of 0.477, placing among the top four finalists. These results support the effectiveness of hypothesis-driven retrieval and multi-strategy optimization for improving RAG performance.
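The pipeline described above can be sketched roughly as follows. This is a minimal illustration of the control flow only: the `llm` and `llm_score` callables, the lexical-overlap retriever, and all function names are stand-ins, not the paper's actual models or index.

```python
# Hedged sketch of a GRAG-style pipeline: generate a hypothetical answer,
# retrieve with both the question and that answer, re-rank pointwise,
# then generate the final response. All components are illustrative.

def generate_hypothetical_answer(question: str, llm) -> str:
    """Step 1: draft an answer before any retrieval."""
    return llm(f"Answer briefly: {question}")

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def pointwise_rerank(question: str, passages: list[str], llm_score) -> list[str]:
    """Step 3: score each passage independently with an LLM-style judge."""
    return sorted(passages, key=lambda p: llm_score(question, p), reverse=True)

def grag(question: str, corpus: list[str], llm, llm_score, k: int = 3) -> str:
    hypo = generate_hypothetical_answer(question, llm)
    # Dual-path retrieval: the question and the hypothetical answer each
    # act as a query; results are merged (deduplicated) before re-ranking.
    candidates = list(dict.fromkeys(retrieve(question, corpus, k)
                                    + retrieve(hypo, corpus, k)))
    ranked = pointwise_rerank(question, candidates, llm_score)
    context = "\n".join(ranked[:k])
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```

In the actual system the retriever, re-ranker, and generator are all LLM-backed; simple lexical overlap stands in here so the four stages stay visible.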

📝 Abstract
This paper presents the RMIT-ADM+S participation in the SIGIR 2025 LiveRAG Challenge. Our Generation-Retrieval-Augmented Generation (GRAG) approach relies on generating a hypothetical answer that is used in the retrieval phase, alongside the original question. GRAG also incorporates a pointwise large language model (LLM)-based re-ranking step prior to final answer generation. We describe the system architecture and the rationale behind our design choices. In particular, a systematic evaluation using the Grid of Points (GoP) framework and N-way ANOVA enabled comparison across multiple configurations, including query variant generation, question decomposition, rank fusion strategies, and prompting techniques for answer generation. Our system achieved a Relevance score of 1.199 and a Faithfulness score of 0.477 on the private leaderboard, placing among the top four finalists in the LiveRAG 2025 Challenge.
Problem

Research questions and friction points this paper is trying to address.

Enhancing retrieval-augmented generation with hypothetical answers
Improving answer relevance via LLM-based re-ranking
Evaluating multiple configurations for optimal RAG performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRAG generates hypothetical answers for retrieval
LLM-based re-ranking before final answer generation
Systematic evaluation using GoP and ANOVA
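The GoP-style evaluation enumerates every combination of component choices, scores each configuration, and attributes score variance to individual factors. A minimal sketch, with invented factor names, levels, and an additive stand-in scoring function (none of which come from the paper), might look like:

```python
# Sketch of a Grid of Points (GoP) sweep: enumerate all configurations,
# score each, then compute per-factor main effects (level mean minus
# grand mean), the basic building block of an N-way ANOVA.
# Factors, levels, and scores are illustrative only.
from itertools import product
from statistics import mean

factors = {
    "query_variants": ["off", "on"],
    "decomposition":  ["off", "on"],
    "reranker":       ["none", "pointwise"],
}

def score(config: dict) -> float:
    """Stand-in for running the RAG system and measuring relevance."""
    s = 1.0
    s += 0.10 if config["query_variants"] == "on" else 0.0
    s += 0.05 if config["decomposition"] == "on" else 0.0
    s += 0.15 if config["reranker"] == "pointwise" else 0.0
    return s

# Grid of Points: the full cross-product of factor levels (2*2*2 = 8 runs).
grid = [dict(zip(factors, levels)) for levels in product(*factors.values())]
results = [(cfg, score(cfg)) for cfg in grid]
grand_mean = mean(s for _, s in results)

# Main effect of each level: how far its average score sits from the
# grand mean across the whole grid.
main_effects = {
    f: {lvl: mean(s for cfg, s in results if cfg[f] == lvl) - grand_mean
        for lvl in levels}
    for f, levels in factors.items()
}
```

With these additive toy scores, the main effect of `reranker = pointwise` comes out to +0.075 (half its 0.15 contribution, since it is active in half the grid); a full N-way ANOVA would additionally test interaction terms for significance.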