AMAQA: A Metadata-based QA Dataset for RAG Systems

📅 2025-05-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing RAG benchmarks lack support for structured metadata (e.g., timestamps, topics, sentiment, toxicity), which hinders evaluation of QA tasks that require multidimensional contextual retrieval. To address this, we propose AMAQA, the first open-domain QA benchmark explicitly designed for metadata-driven question answering. It comprises 1.1 million Telegram messages annotated with fine-grained multidimensional metadata and 450 high-quality single-hop QA pairs, focusing on time-critical, context-sensitive domains such as cybersecurity. The method incorporates metadata as filterable retrieval dimensions in single-hop QA and introduces a metadata-enhanced, noise-aware paradigm for iteratively reconstructing the RAG input, combining metadata-augmented retrieval, LLM-guided context reranking, and dynamic conditional context construction. Experiments show that metadata filtering alone improves accuracy from 0.12 to 0.61; adding iterative optimization yields a further 3-percentage-point gain over the best baseline and a 14-point improvement over naive metadata filtering.
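The "metadata as filterable retrieval dimensions" idea from the summary can be sketched in a few lines: apply the structured constraints (topic, time window, toxicity) first, then rank only the surviving messages. This is a minimal illustration, not the paper's implementation; the `Message` fields mirror the metadata types the dataset describes, while the lexical scorer is a stand-in assumption for a real dense retriever.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    """One Telegram message with the metadata dimensions AMAQA describes."""
    text: str
    timestamp: datetime
    topic: str
    sentiment: str
    toxicity: float
    score: float = 0.0  # relevance score filled in by the retriever

def metadata_filter(corpus, topic=None, after=None, before=None, max_toxicity=None):
    """Keep only messages that satisfy the structured query constraints."""
    hits = []
    for m in corpus:
        if topic is not None and m.topic != topic:
            continue
        if after is not None and m.timestamp < after:
            continue
        if before is not None and m.timestamp > before:
            continue
        if max_toxicity is not None and m.toxicity > max_toxicity:
            continue
        hits.append(m)
    return hits

def retrieve(corpus, query_terms, k=3, **constraints):
    """Metadata filtering first, then a toy lexical relevance ranking."""
    candidates = metadata_filter(corpus, **constraints)
    for m in candidates:
        m.score = sum(t.lower() in m.text.lower() for t in query_terms)
    return sorted(candidates, key=lambda m: m.score, reverse=True)[:k]
```

In a real system the final ranking step would be an embedding-based retriever; the point of the sketch is only the ordering of the two stages, i.e., structured filtering narrows the candidate set before any semantic scoring happens.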

πŸ“ Abstract
Retrieval-augmented generation (RAG) systems are widely used in question-answering (QA) tasks, but current benchmarks lack metadata integration, hindering evaluation in scenarios requiring both textual data and external information. To address this, we present AMAQA, a new open-access QA dataset designed to evaluate tasks combining text and metadata. The integration of metadata is especially important in fields that require rapid analysis of large volumes of data, such as cybersecurity and intelligence, where timely access to relevant information is critical. AMAQA includes about 1.1 million English messages collected from 26 public Telegram groups, enriched with metadata such as timestamps, topics, emotional tones, and toxicity indicators, which enable precise and contextualized queries by filtering documents based on specific criteria. It also includes 450 high-quality QA pairs, making it a valuable resource for advancing research on metadata-driven QA and RAG systems. To the best of our knowledge, AMAQA is the first single-hop QA benchmark to incorporate metadata and labels such as topics covered in the messages. We conduct extensive tests on the benchmark, establishing a new standard for future research. We show that leveraging metadata boosts accuracy from 0.12 to 0.61, highlighting the value of structured context. Building on this, we explore several strategies to refine the LLM input by iterating over provided context and enriching it with noisy documents, achieving a further 3-point gain over the best baseline and a 14-point improvement over simple metadata filtering. The dataset is available at https://anonymous.4open.science/r/AMAQA-5D0D/
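The abstract's strategy of "iterating over provided context and enriching it with noisy documents" might look roughly like the loop below. This is a hedged sketch: `judge` stands in for the LLM-guided relevance check and is a plain predicate here, and the function name, parameters, and stopping rule are illustrative assumptions rather than the paper's algorithm.

```python
def refine_context(initial, pool, judge, k=3, max_rounds=3):
    """Iteratively rebuild the LLM input context.

    Each round keeps only the documents the judge accepts and backfills
    the freed slots from the candidate pool (which may contain noise),
    stopping once the context set stabilizes or rounds run out.
    """
    context = list(initial[:k])
    remaining = [d for d in pool if d not in context]
    for _ in range(max_rounds):
        kept = [d for d in context if judge(d)]
        if len(kept) == len(context):
            break  # nothing was rejected: the context is stable
        while len(kept) < k and remaining:
            kept.append(remaining.pop(0))  # backfill with fresh candidates
        context = kept
    return context
```

The design choice worth noting is that rejected slots are refilled rather than left empty, so the judge gets to vet documents it has not seen yet, which is how deliberately noisy candidates can still contribute useful context.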
Problem

Research questions and friction points this paper is trying to address.

Lack of metadata integration in current QA benchmarks for RAG systems
Need for datasets combining text and metadata in fields like cybersecurity
Improving QA accuracy by leveraging structured metadata and context
Innovation

Methods, ideas, or system contributions that make the work stand out.

AMAQA integrates metadata for enhanced QA evaluation
Leverages Telegram data with rich contextual metadata
Metadata boosts accuracy from 0.12 to 0.61
Davide Bruni
Institute for Informatics and Telematics, National Research Council, Italy
M. Avvenuti
Department of Information Engineering, University of Pisa, Italy
Nicola Tonellotto
Associate Professor, University of Pisa
Information Retrieval · Cloud Computing · Machine Learning
Maurizio Tesconi
Head of Cyber Intelligence Lab - IIT - CNR
Social Media Analysis · Cyber Intelligence · Big Data · Text Mining · Social Mining