MemEvolve: Meta-Evolution of Agent Memory Systems

πŸ“… 2025-12-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing LLM-based agents rely on static, hand-crafted memory architectures that lack meta-adaptability to task contexts, severely limiting their evolutionary capability. Method: We propose the first meta-evolutionary framework for memory systems, enabling joint co-evolution of memory architecture and experiential knowledge. Our approach defines a modular, comparable memory design space and employs evolutionary algorithms to jointly optimize encoding, storage, retrieval, and management components. We release EvolveLabβ€”a unified open-source experimental platform integrating 12 diverse memory systems. Contribution/Results: Evaluated across four agent benchmarks, our evolved memory architectures achieve up to 17.06% absolute performance gain. Critically, the learned architectures demonstrate strong cross-task and cross-model generalization: they seamlessly transfer to distinct agent frameworks (e.g., SmolAgent, Flash-Searcher) and various LLM backbones. This work marks the first realization of agents capable of *dynamically optimizing how to learn*, establishing a foundational step toward self-improving intelligent agents.

πŸ“ Abstract
Self-evolving memory systems are reshaping the evolutionary paradigm of large language model (LLM)-based agents. Prior work has predominantly relied on manually engineered memory architectures to store trajectories, distill experience, and synthesize reusable tools, enabling agents to evolve on the fly within environment interactions. However, this paradigm is fundamentally constrained by the staticity of the memory system itself: while memory facilitates agent-level evolution, the underlying memory architecture cannot be meta-adapted to diverse task contexts. To address this gap, we propose MemEvolve, a meta-evolutionary framework that jointly evolves agents' experiential knowledge and their memory architecture, allowing agent systems not only to accumulate experience but also to progressively refine how they learn from it. To ground MemEvolve in prior research and foster openness in future self-evolving systems, we introduce EvolveLab, a unified self-evolving memory codebase that distills twelve representative memory systems into a modular design space (encode, store, retrieve, manage), providing both a standardized implementation substrate and a fair experimental arena. Extensive evaluations on four challenging agentic benchmarks demonstrate that MemEvolve achieves (I) substantial performance gains, improving frameworks such as SmolAgent and Flash-Searcher by up to 17.06%; and (II) strong cross-task and cross-LLM generalization, designing memory architectures that transfer effectively across diverse benchmarks and backbone models.
Problem

Research questions and friction points this paper is trying to address.

Evolves memory architecture alongside agent knowledge
Addresses static memory limitations in agent evolution
Enables cross-task and cross-model memory adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-evolutionary framework jointly evolves memory architecture and knowledge
Unified codebase distills memory systems into modular design space
Achieves performance gains and strong cross-task generalization
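The modular design space described above (encode, store, retrieve, manage) lends itself to a simple evolutionary search: an architecture is a choice of one variant per stage, and an evolutionary loop mutates and selects those choices. The sketch below illustrates this idea only; the component names, design-space options, and `evolve` loop are hypothetical stand-ins, not MemEvolve's actual API or search procedure.

```python
import random
from dataclasses import dataclass

# Illustrative design space: one variant per memory stage.
# These option names are invented for the sketch.
DESIGN_SPACE = {
    "encode":   ["raw_trajectory", "summary", "tool_abstraction"],
    "store":    ["flat_list", "vector_index", "graph"],
    "retrieve": ["recency", "similarity", "hybrid"],
    "manage":   ["none", "decay", "merge_duplicates"],
}

@dataclass
class MemoryArchitecture:
    choices: dict  # maps each stage name to the chosen variant

    def mutate(self, rng: random.Random) -> "MemoryArchitecture":
        # Re-sample the variant of one randomly chosen stage.
        stage = rng.choice(list(DESIGN_SPACE))
        new = dict(self.choices)
        new[stage] = rng.choice(DESIGN_SPACE[stage])
        return MemoryArchitecture(new)

def evolve(fitness, generations=10, pop_size=4, seed=0):
    """Toy (mu + lambda)-style loop: keep the best half, refill with mutants."""
    rng = random.Random(seed)
    pop = [
        MemoryArchitecture({s: rng.choice(v) for s, v in DESIGN_SPACE.items()})
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            rng.choice(survivors).mutate(rng)
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

# Toy fitness standing in for benchmark performance: rewards a
# particular store/retrieve combination.
def toy_fitness(arch: MemoryArchitecture) -> int:
    return int(arch.choices["retrieve"] == "similarity") + \
           int(arch.choices["store"] == "vector_index")

best = evolve(toy_fitness)
```

In MemEvolve itself the fitness signal would come from agent benchmark performance rather than a toy scoring function, and the knowledge stored inside the memory co-evolves with these architectural choices.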
Guibin Zhang
National University of Singapore
Multi-Agent System, Efficient AI

Haotian Ren
OPPO AI Agent Team, LV-NUS lab

Chong Zhan
OPPO AI Agent Team, LV-NUS lab

Zhenhong Zhou
Nanyang Technological University
Large Language Model, AI Safety, LLM Safety

Junhao Wang
OPPO AI Agent Team, LV-NUS lab

He Zhu
OPPO AI Agent Team, LV-NUS lab

Wangchunshu Zhou
OPPO & M-A-P
artificial general intelligence, language agents, large language models, natural language processing

Shuicheng Yan
OPPO AI Agent Team, LV-NUS lab