RAG or Learning? Understanding the Limits of LLM Adaptation under Continuous Knowledge Drift in the Real World

📅 2026-04-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the challenge that large language models, because their pre-trained knowledge is static, struggle to adapt to dynamically evolving real-world information, often producing outdated predictions and temporally inconsistent reasoning. To this end, the authors introduce a novel evaluation benchmark built from timestamped dynamic events to systematically assess model performance under continuous knowledge drift. They further propose Chronos, a training-free, time-aware retrieval method that leverages event evolution graphs to enhance temporal consistency in reasoning. Experimental results demonstrate that existing approaches commonly suffer from catastrophic forgetting and temporal inconsistency, whereas Chronos significantly improves the model's capacity for temporal reasoning over dynamic knowledge.
πŸ“ Abstract
Large language models (LLMs) acquire most of their knowledge during pretraining, which ties them to a fixed snapshot of the world and makes adaptation to continuously evolving knowledge challenging. As facts, entities, and events change over time, models may experience continuous knowledge drift, resulting not only in outdated predictions but also in temporally inconsistent reasoning. Although existing approaches, such as continual finetuning, knowledge editing, and retrieval-augmented generation (RAG), aim to update or supplement model knowledge, they are rarely evaluated in settings that reflect how real-world knowledge evolves chronologically. In this work, we introduce a new benchmark of real-world dynamic events, constructed from time-stamped evidence that captures how knowledge evolves over time, enabling systematic evaluation of model adaptation under continuous knowledge drift. The benchmark reveals that most existing methods, including vanilla RAG and several learning-based approaches, struggle in this setting, exposing critical limitations such as catastrophic forgetting and temporal inconsistency. To mitigate these limitations, we propose a time-aware retrieval baseline, Chronos, which progressively organizes retrieved evidence into an Event Evolution Graph to enable more temporally consistent understanding in LLMs without additional training. Overall, this work provides a foundation for analyzing and advancing LLM adaptation to continuous knowledge drift in realistic settings.
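The paper does not publish Chronos's implementation details here, but the core idea in the abstract, organizing time-stamped evidence into an Event Evolution Graph so retrieval respects chronology, can be sketched as follows. All names (`EventEvolutionGraph`, `add_evidence`, `retrieve`, the sample facts) are hypothetical illustrations, not the authors' code:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Event:
    """One piece of time-stamped evidence about an entity or topic."""
    timestamp: date
    text: str
    successors: list = field(default_factory=list)


class EventEvolutionGraph:
    """Minimal sketch of time-aware retrieval: evidence about each
    entity is chained into a chronological thread, and queries return
    only what was known up to a cutoff date, oldest first."""

    def __init__(self):
        self.threads = {}  # entity -> list[Event], kept sorted by time

    def add_evidence(self, entity: str, timestamp: date, text: str) -> None:
        events = self.threads.setdefault(entity, [])
        events.append(Event(timestamp, text))
        events.sort(key=lambda e: e.timestamp)
        # Link each event to its chronological successor, forming a chain.
        for prev, nxt in zip(events, events[1:]):
            prev.successors = [nxt]

    def retrieve(self, entity: str, as_of: date) -> list:
        """Return evidence visible at `as_of`, in temporal order, so a
        prompt reflects how the fact evolved rather than an unordered
        bag of snippets (the failure mode of vanilla RAG)."""
        return [e.text for e in self.threads.get(entity, [])
                if e.timestamp <= as_of]


g = EventEvolutionGraph()
g.add_evidence("ACME CEO", date(2023, 1, 5), "Alice named CEO of ACME.")
g.add_evidence("ACME CEO", date(2024, 6, 1), "Bob succeeds Alice as ACME CEO.")
print(g.retrieve("ACME CEO", date(2023, 12, 31)))  # only the pre-2024 fact
```

Querying with `as_of=date(2025, 1, 1)` instead would return both snippets in order, letting the model see the succession rather than two contradictory "current CEO" facts. The real method presumably adds retrieval scoring and richer graph edges; this only illustrates the temporal-ordering principle.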
Problem

Research questions and friction points this paper is trying to address.

continuous knowledge drift
temporal inconsistency
large language models
knowledge adaptation
real-world dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

continuous knowledge drift
time-aware retrieval
Event Evolution Graph
Chronos
temporal consistency