Sparse-Autoencoder-Guided Internal Representation Unlearning for Large Language Models

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM unlearning methods suffer from the “suppression ≠ true forgetting” issue and model collapse, where suppressing outputs does not eliminate underlying knowledge representations. Method: We propose a *true unlearning* approach based on *internal activation redirection*, which explicitly aligns neural activations associated with target entities to the representation space of semantically unknown entities—thereby enforcing semantic indistinguishability at the knowledge level. Unlike gradient-ascent-based suppression, our method employs sparse autoencoders to identify interpretable latent targets and applies gradient-based optimization to precisely modify activations in critical layers. Results: On QA benchmarks, the method reduces recall of target knowledge by 76.3% on average while preserving non-target knowledge performance (variance <1.2%) and avoiding model collapse. Contribution: This work is the first to formalize unlearning as *semantic alignment in activation space*, achieving simultaneous gains in effectiveness, safety, and interpretability.

📝 Abstract
As large language models (LLMs) are increasingly deployed across various applications, privacy and copyright concerns have heightened the need for more effective LLM unlearning techniques. Many existing unlearning methods aim to suppress undesirable outputs through additional training (e.g., gradient ascent), which reduces the probability of generating such outputs. While such suppression-based approaches can control model outputs, they may not eliminate the underlying knowledge embedded in the model's internal activations; muting a response is not the same as forgetting it. Moreover, such suppression-based methods often suffer from model collapse. To address these issues, we propose a novel unlearning method that directly intervenes in the model's internal activations. In our formulation, forgetting is defined as a state in which the activation of a forgotten target is indistinguishable from that of "unknown" entities. Our method introduces an unlearning objective that modifies the activation of the target entity away from those of known entities and toward those of unknown entities in a sparse autoencoder latent space. By aligning the target's internal activation with those of unknown entities, we shift the model's recognition of the target entity from "known" to "unknown", achieving genuine forgetting while avoiding over-suppression and model collapse. Empirically, we show that our method effectively aligns the internal activations of the forgotten target, a result that the suppression-based approaches do not reliably achieve. Additionally, our method effectively reduces the model's recall of target knowledge in question-answering tasks without significant damage to the non-target knowledge.
Problem

Research questions and friction points this paper is trying to address.

Addressing the ineffectiveness of suppression-based unlearning in LLMs
Eliminating knowledge still embedded in the model's internal activations
Preventing model collapse during the unlearning process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directly intervenes in model's internal activations
Uses sparse autoencoder latent space alignment
Shifts target recognition from known to unknown
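The alignment objective described above can be illustrated with a toy sketch: encode an activation into an SAE latent space, then run gradient descent that pulls the target's latent toward an "unknown"-entity centroid and pushes it away from a "known"-entity centroid. This is a minimal NumPy illustration, not the paper's implementation; the encoder weights, centroids, and the weighting `lam` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 16  # toy sizes: activation dim, SAE latent dim
W = rng.normal(size=(k, d)) / np.sqrt(d)  # hypothetical SAE encoder weights
b = np.zeros(k)

def sae_encode(h):
    """Map an activation into the (toy) sparse-autoencoder latent space."""
    return np.maximum(W @ h + b, 0.0)

# Hypothetical latent centroids for "unknown" vs. "known" entities.
z_unknown = sae_encode(rng.normal(size=d))
z_known = sae_encode(rng.normal(size=d))

def loss_and_grad(h, lam=0.5):
    """Pull the target's latent toward z_unknown, push it away from z_known."""
    pre = W @ h + b
    z = np.maximum(pre, 0.0)
    mask = (pre > 0).astype(float)  # ReLU gradient mask
    d_unk, d_kn = z - z_unknown, z - z_known
    loss = d_unk @ d_unk - lam * (d_kn @ d_kn)
    grad_z = 2.0 * d_unk - 2.0 * lam * d_kn
    return loss, W.T @ (grad_z * mask)

h = rng.normal(size=d)  # stand-in activation of the target entity
start = np.linalg.norm(sae_encode(h) - z_unknown)
for _ in range(300):  # gradient descent directly on the activation
    _, g = loss_and_grad(h)
    h -= 0.05 * g
end = np.linalg.norm(sae_encode(h) - z_unknown)
print(f"latent distance to 'unknown' centroid: {start:.3f} -> {end:.3f}")
```

In the paper the optimization targets the model's parameters or critical-layer activations rather than a free vector, but the sketch shows the core idea: forgetting is cast as moving the target's representation into the "unknown" region of the latent space instead of merely suppressing outputs.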