CoUn: Empowering Machine Unlearning via Contrastive Learning

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine unlearning (MU) aims to eliminate the influence of specific "forget" data on a trained model while preserving its performance on the remaining "retain" data. Existing approaches based on label manipulation or weight perturbation often achieve limited forgetting efficacy. This paper proposes CoUn, a novel MU framework built on contrastive learning. It is motivated by the observation that a model retrained from scratch on retain data alone classifies forget samples by their semantic similarity to the retain data. CoUn emulates this behavior by applying contrastive and supervised learning exclusively to retain data: the contrastive term indirectly shifts forget representations via semantic similarity, while supervised learning keeps retain representations within their class clusters. This joint optimization is efficient, non-intrusive, and requires no access to the original training procedure. Experiments across multiple datasets and architectures show that CoUn consistently outperforms state-of-the-art MU methods, and its contrastive module serves as a plug-and-play enhancement that improves the unlearning effectiveness of diverse baselines.
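The contrastive half of the joint optimization described above pulls same-class retain representations together while pushing different-class ones apart. As a rough illustration, here is a minimal NumPy sketch of a SupCon-style supervised contrastive loss; the function name, temperature default, and batch handling are illustrative assumptions, and the paper's exact formulation may differ:

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.5):
    """SupCon-style supervised contrastive loss: pulls same-label
    embeddings together and pushes different-label ones apart.
    In a CoUn-like setup this would run on retain-data batches only."""
    # L2-normalize so the pairwise dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)  # exclude self-pairs
    # log-softmax of each anchor's similarities over all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_log_prob = np.where(positives, log_prob, 0.0)
    n_pos = positives.sum(axis=1)
    has_pos = n_pos > 0  # skip anchors with no same-label partner
    per_anchor = pos_log_prob[has_pos].sum(axis=1) / n_pos[has_pos]
    return -per_anchor.mean()
```

Because the loss is computed on retain samples only, forget representations are never optimized directly; they drift only through their semantic similarity to the retain-class clusters, which is the indirect adjustment the summary describes.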

📝 Abstract
Machine unlearning (MU) aims to remove the influence of specific "forget" data from a trained model while preserving its knowledge of the remaining "retain" data. Existing MU methods based on label manipulation or model weight perturbations often achieve limited unlearning effectiveness. To address this, we introduce CoUn, a novel MU framework inspired by the observation that a model retrained from scratch using only retain data classifies forget data based on their semantic similarity to the retain data. CoUn emulates this behavior by adjusting learned data representations through contrastive learning (CL) and supervised learning, applied exclusively to retain data. Specifically, CoUn (1) leverages semantic similarity between data samples to indirectly adjust forget representations using CL, and (2) maintains retain representations within their respective clusters through supervised learning. Extensive experiments across various datasets and model architectures show that CoUn consistently outperforms state-of-the-art MU baselines in unlearning effectiveness. Additionally, integrating our CL module into existing baselines further improves their unlearning effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Removing specific data influence from trained models while preserving remaining knowledge
Addressing limited effectiveness of existing unlearning methods based on label manipulation
Adjusting learned representations using contrastive learning for better unlearning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses contrastive learning for representation adjustment
Applies supervised learning to maintain cluster integrity
Leverages semantic similarity for indirect unlearning