🤖 AI Summary
This work addresses a key limitation of existing large language model agents: inadequate memory mechanisms prevent them from sustaining temporally consistent, multi-hop reasoning and from reusing evidence across extended interactive sessions. To overcome this, the authors propose a tripartite coupled memory architecture that integrates temporal graph memory, experiential memory, and raw textual memory. A dual-channel retrieval strategy jointly accesses structured knowledge and unstructured evidence, producing compact yet traceable reasoning contexts. Evaluated on the LoCoMo benchmark, the approach significantly improves accuracy on multi-hop and temporal reasoning tasks while reducing input length by over 95%, substantially enhancing both computational efficiency and interpretability relative to long-context baselines.
📝 Abstract
Large language model-based agents operating in long-horizon interactions require memory systems that support temporal consistency, multi-hop reasoning, and evidence-grounded reuse across sessions. Existing approaches largely rely on unstructured retrieval or coarse abstractions, which often lead to temporal conflicts, brittle reasoning, and limited traceability. We propose MemWeaver, a unified memory framework that consolidates long-term agent experiences into three interconnected components: a temporally grounded graph memory for structured relational reasoning, an experience memory that abstracts recurring interaction patterns from repeated observations, and a passage memory that preserves original textual evidence. MemWeaver employs a dual-channel retrieval strategy that jointly retrieves structured knowledge and supporting evidence to construct compact yet information-dense contexts for reasoning. Experiments on the LoCoMo benchmark demonstrate that MemWeaver substantially improves multi-hop and temporal reasoning accuracy while reducing input context length by over 95% compared to long-context baselines.
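To make the tripartite design concrete, here is a minimal sketch of how three coupled memories and a dual-channel retriever could fit together. All class and method names are illustrative assumptions, not the paper's actual API, and the toy token-overlap scoring stands in for whatever retrieval strategy MemWeaver actually uses:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical tripartite memory (names are illustrative, not the paper's)."""
    graph: list = field(default_factory=list)        # (subject, relation, object, time) facts
    experiences: dict = field(default_factory=dict)  # recurring token pattern -> observation count
    passages: list = field(default_factory=list)     # raw textual evidence, kept verbatim

    def add_fact(self, subj, rel, obj, t):
        # temporally grounded graph memory: every edge carries a timestamp
        self.graph.append((subj, rel, obj, t))

    def add_passage(self, text):
        # passage memory preserves the original text; the experience memory
        # here just counts repeated tokens as a toy stand-in for pattern abstraction
        self.passages.append(text)
        for tok in set(text.lower().split()):
            self.experiences[tok] = self.experiences.get(tok, 0) + 1

    def retrieve(self, query, k=2):
        """Dual-channel retrieval: structured facts plus supporting raw evidence."""
        q = set(query.lower().split())
        # channel 1: graph facts whose entities overlap the query, newest first,
        # so temporally later facts override stale ones
        facts = sorted(
            (f for f in self.graph if q & {f[0].lower(), f[2].lower()}),
            key=lambda f: f[3], reverse=True)[:k]
        # channel 2: passages ranked by token overlap with the query
        evid = sorted(self.passages,
                      key=lambda p: len(q & set(p.lower().split())),
                      reverse=True)[:k]
        # compact, traceable context: structured facts first, then their evidence
        lines = [f"[{t}] {s} --{r}--> {o}" for s, r, o, t in facts]
        lines += [f"evidence: {p}" for p in evid]
        return "\n".join(lines)

mem = MemoryStore()
mem.add_fact("Alice", "moved_to", "Berlin", 2023)
mem.add_fact("Alice", "moved_to", "Paris", 2024)
mem.add_passage("Alice mentioned she moved to Paris for a new job.")
print(mem.retrieve("Where does Alice live"))
```

The timestamp-ordered facts illustrate why a graph channel helps temporal reasoning (the 2024 move to Paris is surfaced before the stale 2023 fact), while the passage channel keeps the verbatim sentence available for traceability.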