🤖 AI Summary
Research on memory mechanisms in large language models (LLMs) has lagged behind their rapid performance advancements and has largely been confined to single model families, making it difficult to distinguish universal principles from model-specific artifacts. This work presents the first unified framework for cross-family memory analysis, encompassing Pythia, OpenLLaMA, StarCoder, and OLMo. By integrating statistical analysis, intermediate-layer decoding, attention head ablation, and perturbation recovery experiments, the study systematically uncovers common patterns—including a log-linear relationship between memorization rate and model scale, compressibility of memorized sequences, and shared frequency-domain distributions—alongside family-specific attention head allocation strategies. These findings lay the groundwork for a general theory of memory in LLMs.
📝 Abstract
Memorization is a fundamental component of intelligence for both humans and LLMs. However, while LLM performance scales rapidly, our understanding of memorization lags behind. Because access to the pre-training data of LLMs is limited, most previous studies focus on a single model series, leading to isolated observations and making it unclear which findings are general and which are model-specific. In this study, we collect multiple model series (Pythia, OpenLLaMA, StarCoder, OLMo 1/2/3) and analyze their shared and unique memorization behaviors at both the statistical and internal levels, connecting individual observations while presenting new findings. At the statistical level, we show that the memorization rate scales log-linearly with model size and that memorized sequences can be further compressed. Further analysis reveals a shared frequency and domain distribution pattern for memorized sequences, although individual models also exhibit distinct features within these trends. At the internal level, we find that LLMs can remove certain injected perturbations, while memorized sequences are more sensitive to them. Through middle-layer decoding and attention-head ablation, we uncover a general decoding process and a shared set of attention heads important for memorization; however, the distribution of these heads differs across families, revealing a unique family-level feature. By bridging diverse experiments and revealing new findings, this study paves the way for a universal and fundamental understanding of memorization in LLMs.
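To make the log-linear scaling claim concrete, here is a minimal sketch of how such a relationship could be fitted: memorization rate modeled as a linear function of log10(model size). The data points below are purely illustrative placeholders, not results from the paper.

```python
# Hypothetical sketch: fitting a log-linear relationship between
# memorization rate and model size. All numbers are made up for
# illustration; they are NOT measurements from the study.
import math

# (model size in parameters, memorization rate) -- illustrative points
data = [(70e6, 0.010), (160e6, 0.014), (410e6, 0.018),
        (1.0e9, 0.022), (2.8e9, 0.027)]

# Least-squares fit of: rate = a * log10(size) + b
xs = [math.log10(size) for size, _ in data]
ys = [rate for _, rate in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def predicted_rate(size):
    """Extrapolate the fitted memorization rate for a given model size."""
    return a * math.log10(size) + b
```

Under this model, each order-of-magnitude increase in parameter count adds a constant amount to the memorization rate, which is what "log-linear scaling" implies.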