AI Summary
This work addresses the problem of controllable erasure of specific conceptual knowledge in large language models (LLMs). We propose Erasure of Language Memory (ELM), a method that uses the model itself as an introspective classifier to model concept-conditioned probabilities and identify target knowledge, then applies targeted low-rank parameter updates to achieve precise forgetting. ELM introduces a "self-assessed distribution matching" erasure paradigm that makes concept elimination more thorough while preserving general capabilities. Experiments across biosecurity, cybersecurity, and literary domains demonstrate near-random-level topic removal with no degradation in text coherence or accuracy on unrelated tasks; moreover, ELM remains robust against adversarial prompt attacks. This approach enables concept-level, verifiable, cross-domain controllable forgetting in LLMs.
Abstract
In this work, we propose Erasure of Language Memory (ELM), an approach for concept-level unlearning built on the principle of matching the distribution defined by an introspective classifier. Our key insight is that effective unlearning should leverage the model's ability to evaluate its own knowledge, using the model itself as a classifier to identify and reduce the likelihood of generating content related to undesired concepts. ELM applies this framework to create targeted low-rank updates that reduce generation probabilities for concept-specific content while preserving the model's broader capabilities. We demonstrate ELM's efficacy on biosecurity, cybersecurity, and literary domain erasure tasks. Comparative analysis shows that ELM achieves superior performance across key metrics, including near-random scores on erased topic assessments, maintained coherence in text generation, preserved accuracy on unrelated benchmarks, and robustness under adversarial attacks. Our code, data, and trained models are available at https://elm.baulab.info
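The core idea, matching the distribution induced by an introspective classifier via a low-rank update, can be illustrated with a toy sketch. This is not the paper's implementation: the vocabulary, the binary per-token `concept` mask, the penalty strength `eta`, and the choice to train only one low-rank factor against a frozen random projection are all simplifying assumptions for a single fixed context; ELM itself derives the concept score from the model's own classification ability and applies full low-rank (LoRA-style) updates across contexts.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, r = 8, 4, 2                     # toy vocab size, hidden dim, low-rank rank

W = rng.normal(size=(V, d))           # frozen base "LM head"
B = rng.normal(size=(r, d))           # frozen random down-projection (assumption)
A = np.zeros((V, r))                  # trainable up-projection: delta W = A @ B

x = rng.normal(size=d)                # a fixed context embedding
concept = np.zeros(V)
concept[:3] = 1.0                     # hypothetical: first 3 tokens carry the erased concept

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Target distribution q: base probabilities reweighted to suppress concept tokens,
# q(i) proportional to p(i) * exp(-eta * c(i)) -- a stand-in for the
# classifier-conditioned target that ELM matches.
p_base = softmax(W @ x)
eta = 5.0
q = p_base * np.exp(-eta * concept)
q /= q.sum()

# Fit the low-rank delta by gradient descent on KL(q || p_theta).
# With B frozen this is convex in A; grad of KL w.r.t. the logits is (p - q).
u = B @ x
lr = 0.05
for _ in range(3000):
    p = softmax(W @ x + A @ u)
    A -= lr * np.outer(p - q, u)

p_final = softmax(W @ x + A @ u)
print("concept mass before:", p_base[:3].sum(), "after:", p_final[:3].sum())
```

After training, probability mass on the "concept" tokens collapses toward the reweighted target while the remaining tokens keep their relative proportions, which is the sense in which the update erases the concept without distorting unrelated behavior.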