Model State Arithmetic for Machine Unlearning

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pretraining data for large language models (LLMs) often contains privacy-sensitive, copyrighted, erroneous, or low-quality content, yet fully retraining a model to forget such data is computationally prohibitive. This paper proposes Model State Arithmetic (MSA), the first forgetting algorithm that precisely estimates and nullifies the influence of individual training examples directly in parameter space, by analyzing intermediate pretraining checkpoints, without fine-tuning or retraining. MSA combines state-difference modeling, reverse influence estimation, and incremental parameter updates to enable efficient and accurate machine unlearning. Evaluated on multiple LLMs (LLaMA-2, Qwen) and benchmarks (ToxiGen, TruthfulQA), MSA substantially outperforms existing methods, improving forgetting success rates by 12.7–28.4% while preserving overall model performance (a +0.3% average accuracy gain).
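The core idea, undoing a datapoint's influence by arithmetic on parameter states captured at different pretraining checkpoints, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`checkpoint_delta`, `apply_state_arithmetic`) and the scaling factor `alpha` are assumptions, and real LLM unlearning would operate on full model state dicts rather than toy arrays.

```python
import numpy as np

def checkpoint_delta(before, after):
    """Parameter-space difference between two pretraining checkpoints,
    attributed to the data seen in between (illustrative)."""
    return {name: after[name] - before[name] for name in after}

def apply_state_arithmetic(final_params, before, after, alpha=1.0):
    """Subtract a scaled checkpoint delta from the final weights to
    approximately nullify the influence of the intervening data."""
    delta = checkpoint_delta(before, after)
    return {name: final_params[name] - alpha * delta[name] for name in final_params}

# Toy example with a single weight tensor:
before = {"w": np.array([0.0, 0.0])}    # checkpoint before the target data
after  = {"w": np.array([0.2, -0.1])}   # checkpoint after the target data
final  = {"w": np.array([1.0, 0.5])}    # fully pretrained model
unlearned = apply_state_arithmetic(final, before, after, alpha=1.0)
# unlearned["w"] == [0.8, 0.6]: the delta attributed to the target data is removed
```

Because the update is a single parameter-space subtraction, the cost is independent of dataset size, which is what makes checkpoint-based unlearning cheap relative to retraining.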

📝 Abstract
Large language models are trained on massive corpora of web data, which may include private data, copyrighted material, factually inaccurate data, or data that degrades model performance. Eliminating the influence of such problematic datapoints through complete retraining -- by repeatedly pretraining the model on datasets that exclude these specific instances -- is computationally prohibitive. For this reason, unlearning algorithms have emerged that aim to eliminate the influence of particular datapoints, while otherwise preserving the model -- at a low computational cost. However, precisely estimating and undoing the influence of individual datapoints has proved to be challenging. In this work, we propose a new algorithm, MSA, for estimating and undoing the influence of datapoints -- by leveraging model checkpoints, i.e., artifacts capturing model states at different stages of pretraining. Our experimental results demonstrate that MSA consistently outperforms existing machine unlearning algorithms across multiple benchmarks, models, and evaluation metrics, suggesting that MSA could be an effective approach towards more flexible large language models that are capable of data erasure.
Problem

Research questions and friction points this paper is trying to address.

Efficiently remove problematic data influence without retraining
Estimate and undo individual datapoint effects accurately
Improve machine unlearning performance across benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses model checkpoints for unlearning
Estimates datapoint influence precisely
Outperforms existing unlearning algorithms