Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation

📅 2025-02-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG methods for open-domain QA suffer from three key bottlenecks: (1) retrieval independence leading to noisy and redundant evidence, (2) absence of memory-based summarization over retrieved content, and (3) inability to adaptively refine the retrieval process. This paper proposes Amber, a novel framework featuring agent-coordinated memory updating to enable iterative knowledge integration and dynamic retrieval. Amber supports autonomous retrieval termination, query rewriting, and multi-granularity semantic filtering; it jointly integrates multi-agent memory updating, adaptive retrieval scheduling, and iterative knowledge distillation. Evaluated on multiple open-domain QA benchmarks, Amber achieves significant improvements in accuracy and robustness while effectively mitigating hallucination and noise. The implementation is publicly available.

📝 Abstract
Retrieval-Augmented Generation (RAG), by integrating non-parametric knowledge from external knowledge bases into models, has emerged as a promising approach to enhancing response accuracy while mitigating factual errors and hallucinations. This method has been widely applied in tasks such as Question Answering (QA). However, existing RAG methods struggle with open-domain QA tasks because they perform independent retrieval operations and directly incorporate the retrieved information into generation without maintaining a summarizing memory or using adaptive retrieval strategies, leading to noise from redundant information and insufficient information integration. To address these challenges, we propose Adaptive memory-based optimization for enhanced RAG (Amber) for open-domain QA tasks, which comprises an Agent-based Memory Updater, an Adaptive Information Collector, and a Multi-granular Content Filter, working together within an iterative memory updating paradigm. Specifically, Amber integrates and optimizes the language model's memory through a multi-agent collaborative approach, ensuring comprehensive knowledge integration from previous retrieval steps. It dynamically adjusts retrieval queries and decides when to stop retrieval based on the accumulated knowledge, enhancing retrieval efficiency and effectiveness. Additionally, it reduces noise by filtering irrelevant content at multiple levels, retaining essential information to improve overall model performance. We conduct extensive experiments on several open-domain QA datasets, and the results demonstrate the superiority and effectiveness of our method and its components. The source code is available at https://anonymous.4open.science/r/Amber-B203/.
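The iterative memory-updating paradigm the abstract describes can be sketched as a simple retrieve–filter–update loop. This is a minimal illustration only: all function names and the stub logic below are assumptions, not the authors' implementation (which lives at the linked repository); in Amber the memory update is performed by collaborating LLM agents and the stopping/rewriting decisions are model-driven rather than rule-based.

```python
# Illustrative sketch of Amber's iterative loop (abstract, Sec. overview):
# retrieve -> multi-granular filtering -> memory update -> adaptive stop/rewrite.
# All names here are hypothetical stand-ins, not the paper's actual API.

def filter_content(passages, query):
    """Multi-granular Content Filter (stub): drop passages with no query-term
    overlap, then keep only the overlapping sentences within each passage."""
    terms = set(query.lower().split())
    kept = []
    for passage in passages:
        sentences = [s for s in passage.split(". ")
                     if terms & set(s.lower().split())]
        if sentences:
            kept.append(". ".join(sentences))
    return kept

def update_memory(memory, evidence):
    """Agent-based Memory Updater (stub): the paper uses multi-agent
    summarization; here we just append deduplicated evidence."""
    for item in evidence:
        if item not in memory:
            memory.append(item)
    return memory

def answer(question, retrieve, max_rounds=3):
    """Adaptive Information Collector (stub): loop until the accumulated
    memory looks sufficient, rewriting the query between rounds."""
    memory, query = [], question
    for _ in range(max_rounds):
        passages = retrieve(query)                  # external retriever
        evidence = filter_content(passages, query)  # noise reduction
        memory = update_memory(memory, evidence)    # knowledge integration
        if memory:  # stand-in for a learned retrieval-termination decision
            break
        query = question + " (rephrased)"           # query-rewriting stub
    return " ".join(memory) or "insufficient evidence"
```

In the actual framework each stub above would be an LLM call; the point of the sketch is only the control flow: evidence is filtered and folded into a running memory, and that memory, rather than raw retrievals, drives both the stopping decision and the final answer.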
Problem

Research questions and friction points this paper is trying to address.

Improves retrieval-augmented generation for open-domain QA
Reduces noise from redundant retrieved information
Enhances adaptive retrieval and memory integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-based Memory Updater for knowledge integration
Adaptive Information Collector for dynamic retrieval
Multi-granular Content Filter to reduce noise
🔎 Similar Papers
No similar papers found.
Qitao Qin
University of Science and Technology of China
Yucong Luo
University of Science and Technology of China
Yihang Lu
University of Science and Technology of China, Hefei Institutes of Physical Science
Zhibo Chu
University of Science and Technology of China
Xianwei Meng
Hefei Institutes of Physical Science, Chinese Academy of Sciences