Understanding Empirical Unlearning with Combinatorial Interpretability

📅 2026-02-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a critical limitation of current machine unlearning methods: they often suppress rather than truly erase target knowledge, leaving it potentially recoverable. Because large models are opaque, verifying the effectiveness of such methods remains challenging. To overcome this, the study implements mainstream unlearning approaches within a fully interpretable two-layer neural network framework, enabling direct analysis of the concept representations encoded in the model weights. The authors systematically evaluate whether these methods genuinely remove the targeted knowledge and assess its recoverability after fine-tuning. Their findings reveal, for the first time in an interpretable setting, that most unlearning techniques merely obscure knowledge rather than eliminate it: "forgotten" information can be efficiently restored through simple fine-tuning, exposing the fragility and fundamental limitations of existing unlearning mechanisms.

📝 Abstract
While many recent methods aim to unlearn or remove knowledge from pretrained models, seemingly erased knowledge often persists and can be recovered in various ways. Because large foundation models are far from interpretable, understanding whether and how such knowledge persists remains a significant challenge. To address this, we turn to the recently developed framework of combinatorial interpretability. This framework, designed for two-layer neural networks, enables direct inspection of the knowledge encoded in the model weights. We reproduce baseline unlearning methods within the combinatorial interpretability setting and examine their behavior along two dimensions: (i) whether they truly remove knowledge of a target concept (the concept we wish to remove) or merely inhibit its expression while retaining the underlying information, and (ii) how easily the supposedly erased knowledge can be recovered through various fine-tuning operations. Our results shed light, within a fully interpretable setting, on how knowledge can persist despite unlearning and when it might resurface.
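The evaluation protocol the abstract describes (train a two-layer network on several concepts, apply an unlearning baseline to one target concept, then probe whether brief fine-tuning restores it) can be illustrated on a toy example. The sketch below is not the paper's code: it uses synthetic Gaussian "concepts", a simple relabeling baseline as a stand-in for suppression-style unlearning, and illustrative hyperparameters throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "concepts": three well-separated Gaussian clusters in 2-D.
centers = np.array([[4.0, 0.0], [-4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + rng.normal(scale=0.5, size=(100, 2)) for c in centers])
y = np.repeat(np.arange(3), 100)

# Two-layer network: ReLU hidden layer + softmax output.
H = 16
W1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)
    z = h @ W2 + b2
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
    return h, p

def sgd_step(X, y, lr):
    """One full-batch cross-entropy SGD step."""
    global W1, b1, W2, b2
    h, p = forward(X)
    g = p.copy(); g[np.arange(len(y)), y] -= 1.0; g /= len(y)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T; gh[h <= 0] = 0.0
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def acc(X, y):
    _, p = forward(X)
    return (p.argmax(1) == y).mean()

# 1) Train on all three concepts.
for _ in range(500):
    sgd_step(X, y, lr=0.5)

target = 2
Xt, yt = X[y == target], y[y == target]
a_trained = acc(Xt, yt)            # target concept learned

# 2) "Unlearn" the target by fine-tuning it toward a wrong label
#    (a relabeling baseline, standing in for suppression-style unlearning).
fake = np.zeros_like(yt)
for _ in range(100):
    sgd_step(Xt, fake, lr=0.5)
a_unlearned = acc(Xt, yt)          # target accuracy collapses

# 3) Recovery probe: brief fine-tuning on just a few target examples.
for _ in range(100):
    sgd_step(Xt[:10], yt[:10], lr=0.5)
a_recovered = acc(Xt, yt)          # "forgotten" concept resurfaces

print(a_trained, a_unlearned, a_recovered)
```

Mirroring the paper's qualitative finding, the suppressed concept typically comes back after only a handful of fine-tuning steps on a small sample, suggesting the underlying knowledge was inhibited rather than erased.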
Problem

Research questions and friction points this paper is trying to address.

unlearning
knowledge persistence
model interpretability
foundation models
combinatorial interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

combinatorial interpretability
model unlearning
knowledge persistence
neural network interpretability
empirical unlearning
Shingo Kodama
Middlebury College
Niv Cohen
Research Scientist at New York University
Anomaly Detection, Representation Learning, Watermarking
Micah Adler
MIT
Nir Shavit
MIT & Red Hat