Early Detection and Reduction of Memorisation for Domain Adaptation and Instruction Tuning

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are prone to memorising training data early during domain adaptation and instruction tuning, posing privacy-leakage and copyright risks; existing mitigation strategies are largely blind to how memorisation evolves over training. This work observes that memorisation surges in the first few epochs of fine-tuning, well before performance converges, and proposes a two-part mitigation framework: (1) a lightweight n-gram overlap-based memorisation score used as an early-stopping trigger, and (2) an n-gram-aware loss regulariser that suppresses overfitting to high-frequency patterns. Evaluated on Pythia, Llama-3, and Mistral models (1.4B–70B parameters), the method reduces memorisation by up to 40% without degrading downstream task performance, outperforming an existing memorisation-mitigation baseline.

📝 Abstract
Although large language models excel across many tasks, they can memorise training data and thereby expose private or copyrighted text. Most defences target the pre-training stage, leaving memorisation during fine-tuning, especially for domain adaptation and instruction tuning, poorly understood. We fine-tune Pythia, Llama-3, and Mistral models spanning 1.4B–70B parameters on common evaluation datasets and track verbatim memorisation throughout training. We find that memorisation increases dramatically in the first few epochs, often significantly before either validation perplexity or evaluation performance is optimised. We use a simple but effective n-gram memorisation score whose rise reliably precedes verbatim memorisation; using it as an early-stopping criterion mitigates memorisation with minimal performance loss. Further, we introduce an n-gram-aware loss regulariser and show that it reduces memorisation across all model families tested by up to 40% while minimising evaluation performance trade-offs when compared to an existing memorisation mitigation strategy. These results yield practical, scalable insights into memorisation dynamics during language model fine-tuning.
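The abstract does not give the exact form of the n-gram memorisation score, but an overlap-based version can be sketched as the fraction of n-grams in a model's generated continuation that occur verbatim in the training text (tokenisation by whitespace here is a simplifying assumption; the paper's tokenisation is not specified):

```python
def ngram_memorisation_score(generated: str, training_text: str, n: int = 8) -> float:
    """Fraction of n-grams in a generated continuation that also occur
    verbatim in the training text. Whitespace tokenisation is an
    illustrative assumption, not the paper's exact procedure."""
    gen_tokens = generated.split()
    train_tokens = training_text.split()
    gen_ngrams = [tuple(gen_tokens[i:i + n]) for i in range(len(gen_tokens) - n + 1)]
    train_ngrams = {tuple(train_tokens[i:i + n]) for i in range(len(train_tokens) - n + 1)}
    if not gen_ngrams:
        return 0.0  # continuation too short to contain any n-gram
    hits = sum(1 for g in gen_ngrams if g in train_ngrams)
    return hits / len(gen_ngrams)
```

A score of 1.0 means every generated n-gram is copied from training data; tracking this per epoch gives the early-warning signal the abstract describes.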
Problem

Research questions and friction points this paper is trying to address.

Detecting and reducing memorisation during domain adaptation and instruction tuning
Understanding memorisation dynamics throughout fine-tuning of large language models
Mitigating training-data memorisation with minimal performance trade-offs
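Since the abstract uses the memorisation score as an early-stopping criterion, the training loop could be wired up as follows (the callbacks, threshold, and function names are all hypothetical, for illustration only):

```python
def train_with_memorisation_early_stop(train_step, score_fn, max_epochs=10, threshold=0.2):
    """Illustrative loop: stop fine-tuning as soon as the n-gram
    memorisation score crosses a threshold, which the paper observes
    happens well before validation metrics converge.

    train_step(epoch) runs one epoch of fine-tuning (assumed callback);
    score_fn(epoch) returns the memorisation score after that epoch.
    """
    history = []
    for epoch in range(max_epochs):
        train_step(epoch)
        score = score_fn(epoch)
        history.append(score)
        if score > threshold:
            break  # memorisation surging: stop before it worsens
    return history
```

The threshold value would need tuning per dataset; the paper reports only that the score's rise precedes verbatim memorisation, not a universal cutoff.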
Innovation

Methods, ideas, or system contributions that make the work stand out.

N-gram memorisation score enables early stopping before verbatim memorisation takes hold
N-gram-aware loss regulariser reduces memorisation by up to 40%
Both techniques preserve evaluation performance while minimising verbatim memorisation
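The paper does not publish the regulariser's exact form, but one plausible reading of "n-gram-aware loss regularisation" is to down-weight the per-token loss on tokens that complete frequent n-grams, so the model overfits less to high-frequency patterns. A minimal sketch under that assumption:

```python
from collections import Counter

def ngram_aware_loss(token_losses, tokens, n=4, alpha=0.5):
    """Assumed form of an n-gram-aware regularised loss: tokens completing
    frequent n-grams get weight 1 / (1 + alpha * (freq - 1)), so repeated
    patterns contribute less to the training signal. alpha is a
    hypothetical strength hyperparameter."""
    counts = Counter(tuple(tokens[i - n + 1:i + 1]) for i in range(n - 1, len(tokens)))
    weights = []
    for i in range(len(tokens)):
        if i < n - 1:
            weights.append(1.0)  # not enough context for a full n-gram
        else:
            freq = counts[tuple(tokens[i - n + 1:i + 1])]
            weights.append(1.0 / (1.0 + alpha * (freq - 1)))
    total = sum(w * l for w, l in zip(weights, token_losses))
    return total / sum(weights)
```

With unique n-grams all weights stay at 1 and the loss reduces to the plain mean, so the regulariser only bites on repeated patterns, matching the stated goal of suppressing overfitting to high-frequency n-grams.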