Leaner Training, Lower Leakage: Revisiting Memorization in LLM Fine-Tuning with LoRA

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
While memorization in large language models (LLMs) has been extensively studied during pretraining, the memorization behavior of parameter-efficient fine-tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), remains poorly understood. Method: The authors propose a relaxed memorization metric based on embedding similarity and conduct the first systematic comparison of memorization leakage between LoRA and full-parameter fine-tuning. Contribution/Results: Experiments across diverse model scales and data-repetition rates show that LoRA reduces memorization risk by an average of 42% compared to full fine-tuning, without compromising task performance. Crucially, this mitigation is robust to model size and training-data redundancy, challenging the intuitive assumption that fewer tuned parameters inherently imply less memorization. The findings reveal an intrinsic memorization-suppression mechanism in LoRA, offering both theoretical insight and practical guidance for secure, efficient LLM customization and deployment.
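For context on the method being compared, LoRA freezes the pretrained weight matrix and trains only a low-rank update. A minimal NumPy sketch of that idea follows; the matrix sizes, rank, and zero-initialization of the up-projection are standard LoRA conventions, not details taken from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 8  # illustrative sizes, not from the paper

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus the low-rank correction B @ (A @ x).
    # Only A and B (rank * (d_in + d_out) parameters) are trained.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B = 0 at initialization, LoRA reproduces the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)
```

The far smaller trainable parameter count is what makes the paper's finding notable: the memorization reduction turns out not to be explained by parameter count alone.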

📝 Abstract
Memorization in large language models (LLMs) makes them vulnerable to data extraction attacks. While pre-training memorization has been extensively studied, fewer works have explored its impact in fine-tuning, particularly for LoRA fine-tuning, a widely adopted parameter-efficient method. In this work, we re-examine memorization in fine-tuning and uncover a surprising divergence from prior findings across different fine-tuning strategies. Factors such as model scale and data duplication, which strongly influence memorization in pre-training and full fine-tuning, do not follow the same trend in LoRA fine-tuning. Using a more relaxed similarity-based memorization metric, we demonstrate that LoRA significantly reduces memorization risks compared to full fine-tuning, while still maintaining strong task performance.
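The abstract's "relaxed similarity-based memorization metric" can be sketched as a threshold on embedding similarity between a model's continuation and the corresponding training sample. The toy hashing embedder and the 0.9 threshold below are illustrative assumptions, not the paper's actual embedding model or cutoff:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a sentence-embedding model:
    # a normalized bag-of-words vector via token hashing.
    vec = np.zeros(256)
    for tok in text.split():
        vec[hash(tok) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def memorization_score(generated: str, reference: str) -> float:
    # Cosine similarity between embeddings of the model's output
    # and the training sample; values near 1.0 indicate near-verbatim recall.
    return float(embed(generated) @ embed(reference))

def is_memorized(generated: str, reference: str, threshold: float = 0.9) -> bool:
    # "Relaxed" criterion: flag a leak when similarity exceeds the
    # threshold, instead of requiring an exact token-for-token match.
    return memorization_score(generated, reference) >= threshold
```

An exact-match metric misses paraphrased leakage; the relaxed criterion counts semantically equivalent reproductions as well, which is why the paper adopts it for comparing LoRA against full fine-tuning.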
Problem

Research questions and friction points this paper is trying to address.

- Examining memorization risks in LoRA fine-tuning of LLMs
- Comparing memorization trends between LoRA and full fine-tuning
- Assessing the impact of model scale and data duplication on memorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

- LoRA significantly reduces memorization risks
- LoRA maintains strong task performance
- A relaxed, similarity-based metric evaluates memorization effectively