Erasing Conceptual Knowledge from Language Models

πŸ“… 2024-10-03
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 3
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the problem of controllably erasing specific conceptual knowledge from large language models (LLMs). The authors propose Erasure of Language Memory (ELM), a method that uses the model itself as an introspective classifier, estimating concept-conditioned probabilities to identify target knowledge, and then applies targeted low-rank parameter updates to achieve precise forgetting. ELM frames erasure as matching the distribution defined by this self-assessed classifier, improving the thoroughness of concept elimination while preserving general capabilities. Experiments across biosecurity, cybersecurity, and literary domains show near-random accuracy on erased-topic assessments, no degradation in text coherence or accuracy on unrelated tasks, and strong robustness against adversarial prompt attacks. According to the authors, this is the first approach enabling concept-level, verifiable, cross-domain controllable forgetting in LLMs.
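The introspective-classifier idea can be illustrated with a toy sketch: the same model scores next tokens twice, once plainly and once under a concept-invoking prompt, and the likelihood ratio between the two acts as a self-assessed classifier that reweights the target distribution away from concept-indicative tokens. The function name, the `eta` temperature, and the exact reweighting form below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def erased_target_distribution(logits_plain, logits_concept, eta=1.0):
    """Build a reweighted next-token target distribution.

    logits_plain   -- the model's logits given the bare context
    logits_concept -- logits given a concept-invoking prompt prepended
    The ratio p_plain / p_concept is the model's own (introspective)
    signal for how concept-indicative each token is; raising it to
    eta and multiplying in down-weights concept-related tokens.
    """
    p_plain = softmax(logits_plain)
    p_concept = softmax(logits_concept)
    scores = p_plain * (p_plain / p_concept) ** eta
    return scores / scores.sum()   # renormalize to a distribution
```

A low-rank fine-tune would then push the model's actual next-token distribution toward this reweighted target on concept-related contexts.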

πŸ“ Abstract
In this work, we propose Erasure of Language Memory (ELM), an approach for concept-level unlearning built on the principle of matching the distribution defined by an introspective classifier. Our key insight is that effective unlearning should leverage the model's ability to evaluate its own knowledge, using the model itself as a classifier to identify and reduce the likelihood of generating content related to undesired concepts. ELM applies this framework to create targeted low-rank updates that reduce generation probabilities for concept-specific content while preserving the model's broader capabilities. We demonstrate ELM's efficacy on biosecurity, cybersecurity, and literary domain erasure tasks. Comparative analysis shows that ELM achieves superior performance across key metrics, including near-random scores on erased topic assessments, maintained coherence in text generation, preserved accuracy on unrelated benchmarks, and robustness under adversarial attacks. Our code, data, and trained models are available at https://elm.baulab.info
Problem

Research questions and friction points this paper is trying to address.

Erasing specific conceptual knowledge from language models
Using introspective classifiers for targeted concept unlearning
Maintaining model performance while removing undesired content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses introspective classifier for concept unlearning
Applies targeted low-rank updates selectively
Maintains model performance on unrelated tasks
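The "targeted low-rank updates" point can be sketched with a minimal LoRA-style adapter: a frozen weight matrix plus a trainable rank-r correction, so the edit is confined to a small subspace while the rest of the model (and hence unrelated capabilities) is untouched. The dimensions and zero initialization below are standard illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                               # hidden size and adapter rank (toy values)
W = rng.normal(size=(d, d))               # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))   # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def adapted_forward(x):
    """Forward pass with a rank-r additive update: (W + B @ A) @ x.
    Only A and B would be trained toward the erasure objective;
    W itself stays fixed, keeping the edit low-rank and localized."""
    return W @ x + B @ (A @ x)
```

With B zero-initialized, the adapter starts as an exact no-op, so training only gradually introduces the forgetting behavior.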
πŸ”Ž Similar Papers
No similar papers found.