🤖 AI Summary
Existing long-context benchmarks suffer from insufficient narrative coherence, limited domain diversity, and poor cognitive plausibility, hindering effective evaluation of LLMs' long-term memory capabilities. To address this, we propose BEAM, a novel million-token-scale, multi-domain, coherent long-dialogue benchmark, supporting automated construction of dialogues of up to ten million tokens. We further introduce LIGHT, a human-cognition-inspired memory-augmented framework that decouples retrieval-augmented generation, context compression, and memory persistence via three synergistic components: long-term episodic memory, working memory, and a factual scratchpad. Experiments reveal severe performance degradation in state-of-the-art million-token models on long-dialogue tasks; LIGHT consistently improves accuracy by 3.5%–12.69% across diverse backbone models, significantly outperforming strong baselines. Ablation studies confirm the complementary efficacy of LIGHT's tripartite memory architecture.
📝 Abstract
Evaluating the abilities of large language models (LLMs) on tasks that require long-term memory, and thus long-context reasoning, for example in conversational settings, is hampered by existing benchmarks, which often lack narrative coherence, cover narrow domains, and test only simple recall-oriented tasks. This paper introduces a comprehensive solution to these challenges. First, we present a novel framework for automatically generating long (up to 10M tokens), coherent, and topically diverse conversations, accompanied by probing questions targeting a wide range of memory abilities. From this, we construct BEAM, a new benchmark comprising 100 conversations and 2,000 validated questions. Second, to enhance model performance, we propose LIGHT, a framework inspired by human cognition that equips LLMs with three complementary memory systems: a long-term episodic memory, a short-term working memory, and a scratchpad for accumulating salient facts. Our experiments on BEAM reveal that even LLMs with 1M-token context windows (with and without retrieval augmentation) struggle as dialogues lengthen. In contrast, LIGHT consistently improves performance across various models, achieving an average improvement of 3.5%–12.69% over the strongest baselines, depending on the backbone LLM. An ablation study further confirms the contribution of each memory component.
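The abstract describes LIGHT's three complementary memory systems but not their mechanics. As a rough illustration of how such a tripartite store might be wired together, here is a minimal sketch; the class name, the keyword-overlap retrieval, and all internals are assumptions for illustration, not the paper's actual method:

```python
from collections import deque

class TripartiteMemory:
    """Hypothetical sketch of a three-part memory store in the spirit of
    LIGHT's design: episodic memory, working memory, and a fact scratchpad.
    All implementation details here are illustrative assumptions."""

    def __init__(self, working_size=4):
        self.episodic = []                         # long-term: every past turn
        self.working = deque(maxlen=working_size)  # short-term sliding window
        self.scratchpad = set()                    # accumulated salient facts

    def observe(self, turn, facts=()):
        # Record a dialogue turn in all relevant stores.
        self.episodic.append(turn)
        self.working.append(turn)
        self.scratchpad.update(facts)

    def retrieve(self, query, k=2):
        # Naive keyword-overlap retrieval from episodic memory
        # (a real system would use dense embeddings).
        q = set(query.lower().split())
        scored = sorted(self.episodic,
                        key=lambda t: len(q & set(t.lower().split())),
                        reverse=True)
        return scored[:k]

    def context(self, query):
        # Assemble prompt context from all three memory systems.
        return {
            "episodic": self.retrieve(query),
            "working": list(self.working),
            "facts": sorted(self.scratchpad),
        }

mem = TripartiteMemory(working_size=2)
mem.observe("Alice moved to Paris", facts={"Alice lives in Paris"})
mem.observe("Bob likes chess")
mem.observe("Alice bought a bike")
ctx = mem.context("where does Alice live")
```

The point of the sketch is the division of labor the abstract hints at: episodic memory answers "what was said long ago," the working-memory window keeps recent turns verbatim, and the scratchpad distills stable facts that survive context compression.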