A Survey on Unlearning in Large Language Models

๐Ÿ“… 2025-10-28
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large language models (LLMs) may inadvertently memorize sensitive data, copyrighted material, and harmful knowledge, necessitating precise and efficient knowledge removal under regulatory complianceโ€”e.g., the โ€œright to be forgotten.โ€ This work systematically reviews over 180 papers on machine unlearning published since 2021. We propose, for the first time, a unified taxonomic framework categorizing methods by training phase: training-time, post-training, and inference-time unlearning. Additionally, we establish a critical evaluation framework encompassing dataset characteristics, standardized metrics, and application scenarios. Our analysis identifies fundamental bottlenecks in generalizability, scalability, and evaluation consistency across existing approaches. We further delineate concrete research directions to advance the field. Collectively, this study provides both theoretical foundations and practical guidelines for developing safe, trustworthy LLMs.

๐Ÿ“ Abstract
The advancement of Large Language Models (LLMs) has revolutionized natural language processing, yet their training on massive corpora poses significant risks, including the memorization of sensitive personal data, copyrighted material, and knowledge that could facilitate malicious activities. To mitigate these issues and align with legal and ethical standards such as the "right to be forgotten", machine unlearning has emerged as a critical technique to selectively erase specific knowledge from LLMs without compromising their overall performance. This survey provides a systematic review of over 180 papers on LLM unlearning published since 2021, focusing exclusively on large-scale generative models. Distinct from prior surveys, we introduce novel taxonomies for both unlearning methods and evaluations. We clearly categorize methods into training-time, post-training, and inference-time based on the training stage at which unlearning is applied. For evaluations, we not only systematically compile existing datasets and metrics but also critically analyze their advantages, disadvantages, and applicability, providing practical guidance to the research community. In addition, we discuss key challenges and promising future research directions. Our comprehensive overview aims to inform and guide the ongoing development of secure and reliable LLMs.
Problem

Research questions and friction points this paper is trying to address.

Selectively erase sensitive data from LLMs
Address memorization of copyrighted and malicious content
Enable compliance with the "right to be forgotten"
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic review of 180+ LLM unlearning papers
Novel taxonomies for unlearning methods and evaluations
Categorization of methods by the training stage at which unlearning is applied (training-time, post-training, inference-time)
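Post-training unlearning, one of the three taxonomy categories above, is often built on gradient ascent over a forget set combined with gradient descent on a retain set to limit utility damage. A minimal sketch of that idea on a toy logistic-regression model (the model, data, and learning rates are illustrative assumptions, not taken from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model: logistic regression on 2-D points.
# Retain set: label follows the sign of the first feature.
X_retain = rng.normal(size=(200, 2))
y_retain = (X_retain[:, 0] > 0).astype(float)
# Forget set: "memorized" examples whose labels contradict that rule.
X_forget = rng.normal(size=(20, 2)) + np.array([3.0, 0.0])
y_forget = np.zeros(20)

def loss_and_grad(w, X, y):
    """Mean cross-entropy loss and its gradient for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# 1) Original training on retain + forget data together.
w = np.zeros(2)
X_all = np.vstack([X_retain, X_forget])
y_all = np.concatenate([y_retain, y_forget])
for _ in range(500):
    _, g = loss_and_grad(w, X_all, y_all)
    w -= 0.5 * g

forget_before, _ = loss_and_grad(w, X_forget, y_forget)

# 2) Post-training unlearning: ascend on the forget loss while
#    descending on the retain loss to preserve overall performance.
for _ in range(100):
    _, g_forget = loss_and_grad(w, X_forget, y_forget)
    _, g_retain = loss_and_grad(w, X_retain, y_retain)
    w += 0.1 * (g_forget - g_retain)

forget_after, _ = loss_and_grad(w, X_forget, y_forget)
retain_after, _ = loss_and_grad(w, X_retain, y_retain)
```

After the unlearning loop, the loss on the forget set rises (the memorized labels are no longer predicted) while the retain loss stays low; real LLM unlearning methods apply the same ascend/descend trade-off to the language-modeling loss.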
๐Ÿ”Ž Similar Papers
No similar papers found.
Ruichen Qiu
School of Advanced Interdisciplinary Sciences, UCAS, China
Jiajun Tan
Institute of Computing Technology, CAS
Machine Unlearning
Jiayue Pu
University of Chinese Academy of Sciences, China
Honglin Wang
Institute of Computing Technology, CAS, China
Xiao-Shan Gao
AMSS, CAS
Automated Reasoning · Symbolic Computation · Machine Learning Theory
Fei Sun
Institute of Computing Technology, CAS, China