A Comparative Analysis of LLM Memorization at Statistical and Internal Levels: Cross-Model Commonalities and Model-Specific Signatures

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Research on memory mechanisms in large language models (LLMs) has lagged behind their rapid performance advancements and has largely been confined to single model families, making it difficult to distinguish universal principles from model-specific artifacts. This work presents the first unified framework for cross-family memory analysis, encompassing Pythia, OpenLLaMA, StarCoder, and OLMo. By integrating statistical analysis, intermediate-layer decoding, attention head ablation, and perturbation recovery experiments, the study systematically uncovers common patterns—including a log-linear relationship between memorization rate and model scale, compressibility of memorized sequences, and shared frequency-domain distributions—alongside family-specific attention head allocation strategies. These findings lay the groundwork for a general theory of memory in LLMs.

📝 Abstract
Memorization is a fundamental component of intelligence for both humans and LLMs. However, while LLM performance scales rapidly, our understanding of memorization lags behind. Because access to LLM pre-training data is limited, most previous studies focus on a single model series, leading to isolated observations and making it unclear which findings are general and which are model-specific. In this study, we collect multiple model series (Pythia, OpenLLaMA, StarCoder, OLMo 1/2/3) and analyze their shared and unique memorization behavior at both the statistical and internal levels, connecting individual observations while presenting new findings. At the statistical level, we reveal that the memorization rate scales log-linearly with model size and that memorized sequences can be further compressed. Further analysis demonstrates a shared frequency and domain distribution pattern for memorized sequences, although individual models also show distinctive features within these trends. At the internal level, we find that LLMs can recover from certain injected perturbations, while memorized sequences are more sensitive to them. Through middle-layer decoding and attention head ablation, we reveal a general decoding process and shared important heads for memorization; however, the distribution of those important heads differs between families, constituting a unique family-level signature. By bridging various experiments and revealing new findings, this study paves the way for a universal and fundamental understanding of memorization in LLMs.
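The abstract's two statistical claims, that memorization rate scales log-linearly with model size and that memorized sequences are more compressible, can be illustrated with a toy sketch. All numbers below are invented for illustration and are not the paper's measurements:

```python
import math
import random
import zlib

# Hypothetical (parameter_count, memorization_rate) points, invented
# purely to show the shape of a log-linear relationship.
points = [(70e6, 0.010), (410e6, 0.018), (1.4e9, 0.023), (6.9e9, 0.030)]

# Least-squares fit of rate ≈ a * log10(params) + b.
xs = [math.log10(n) for n, _ in points]
ys = [r for _, r in points]
k = len(points)
mx, my = sum(xs) / k, sum(ys) / k
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"fit: rate ≈ {a:.4f} * log10(params) + {b:.4f}")

# Compressibility check: a redundant ("memorizable") byte string
# compresses far better than a pseudo-random one of the same length.
memorized = b"the quick brown fox " * 50          # highly redundant
novel = random.Random(0).randbytes(len(memorized))  # near-incompressible

def ratio(s: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(s)) / len(s)

print(f"redundant: {ratio(memorized):.2f}, random: {ratio(novel):.2f}")
```

The positive slope `a` and the much smaller compression ratio for the redundant string mirror, in miniature, the statistical-level trends the paper reports across model families.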
Problem

Research questions and friction points this paper is trying to address.

LLM memorization
cross-model comparison
model-specific signatures
statistical memorization
internal memorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM memorization
cross-model analysis
attention head ablation
log-linear scaling
memory compression
Bowen Chen
The University of Tokyo
Natural Language Processing; Large Language Models
Namgi Han
Department of Computer Science, The University of Tokyo
Yusuke Miyao
Department of Computer Science, The University of Tokyo; Research and Development Center for Large Language Models, National Institute of Informatics