Synthetic Forgetting without Access: A Few-shot Zero-glance Framework for Machine Unlearning

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the practical challenge of machine unlearning in the few-shot zero-glance setting, where the target samples to be forgotten are inaccessible, this paper proposes an efficient unlearning framework that operates without access to the original forget data. The method introduces a novel Generative Feedback Network (GFN) that synthesizes Optimal Erasure Samples for class-level knowledge removal, together with a two-phase fine-tuning strategy that jointly constrains the logit and representation layers, enabling a precise trade-off between forgetting efficacy and model utility using only 5% of the retained data. Evaluated on three image classification benchmarks, the approach reduces target-class retention metrics by an average of 82.3% while degrading accuracy on retained classes by less than 1.2%, demonstrating strong compliance with privacy requirements and practical viability.

📝 Abstract
Machine unlearning aims to eliminate the influence of specific data from trained models to ensure privacy compliance. However, most existing methods assume full access to the original training dataset, which is often impractical. We address a more realistic yet challenging setting: few-shot zero-glance, where only a small subset of the retained data is available and the forget set is entirely inaccessible. We introduce GFOES, a novel framework comprising a Generative Feedback Network (GFN) and a two-phase fine-tuning procedure. GFN synthesises Optimal Erasure Samples (OES), which induce high loss on target classes, enabling the model to forget class-specific knowledge without access to the original forget data, while preserving performance on retained classes. The two-phase fine-tuning procedure enables aggressive forgetting in the first phase, followed by utility restoration in the second. Experiments on three image classification datasets demonstrate that GFOES achieves effective forgetting at both logit and representation levels, while maintaining strong performance using only 5% of the original data. Our framework offers a practical and scalable solution for privacy-preserving machine learning under data-constrained conditions.
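The abstract's core objective, synthesizing erasure samples that induce high loss on the target class, can be sketched without the full GFN. The sketch below is a hypothetical simplification, not the paper's method: a tiny fixed linear softmax model stands in for a trained network, and plain gradient ascent on the input stands in for the learned generator. All weights and function names here are illustrative.

```python
import math

W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]  # toy 3-class, 2-feature linear model
b = [0.0, 0.0, 0.0]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def logits(x):
    return [sum(W[c][i] * x[i] for i in range(2)) + b[c] for c in range(3)]

def ce_loss(x, y):
    # cross-entropy of the model's prediction against class y
    return -math.log(softmax(logits(x))[y] + 1e-12)

def input_grad(x, y):
    # analytic gradient w.r.t. the input: dL/dx_i = sum_c (p_c - 1[c==y]) * W[c][i]
    p = softmax(logits(x))
    return [sum((p[c] - (1.0 if c == y else 0.0)) * W[c][i] for c in range(3))
            for i in range(2)]

def synthesize_erasure_sample(y_forget, steps=200, lr=0.5):
    # Gradient ASCENT on the loss for the forget class: the result is an
    # input the current model gets maximally wrong for y_forget.
    x = [0.0, 0.0]
    for _ in range(steps):
        g = input_grad(x, y_forget)
        x = [xi + lr * gi for xi, gi in zip(x, g)]
    return x

y_forget = 0
loss_before = ce_loss([0.0, 0.0], y_forget)
x_oes = synthesize_erasure_sample(y_forget)
loss_after = ce_loss(x_oes, y_forget)
print(f"loss on forget class: {loss_before:.2f} -> {loss_after:.2f}")
```

Because the cross-entropy of a linear model is convex in the input, each ascent step is guaranteed to raise the loss, so the synthesized point reliably carries high loss for the forget class, the property fine-tuning then exploits.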
Problem

Research questions and friction points this paper is trying to address.

Eliminating the influence of specific data from trained models without access to the original forget set
Achieving machine unlearning when only a small subset of the retained data is available
Maintaining performance on retained classes while forgetting class-specific knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Feedback Network (GFN) synthesizes Optimal Erasure Samples that induce high loss on target classes
Two-phase fine-tuning enables aggressive forgetting followed by utility restoration
Achieves unlearning without access to the original forget data
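The two-phase procedure above can be sketched on a toy problem. The sketch below is a hypothetical simplification, assuming a linear softmax classifier, noisy points near the forget class's region standing in for Optimal Erasure Samples, uniform-label fine-tuning as the "aggressive forgetting" phase, and a ~5% retained subset for the restoration phase; none of these choices is claimed to match the paper's actual losses.

```python
import math
import random

random.seed(0)
C, D = 3, 2  # number of classes, input features
FORGET = 0   # class to unlearn

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def logits(W, b, x):
    return [sum(W[c][i] * x[i] for i in range(D)) + b[c] for c in range(C)]

def sgd_step(W, b, x, target, lr):
    # cross-entropy gradient for a soft target: dL/dz_c = p_c - target_c
    p = softmax(logits(W, b, x))
    for c in range(C):
        g = p[c] - target[c]
        for i in range(D):
            W[c][i] -= lr * g * x[i]
        b[c] -= lr * g

def onehot(y):
    return [1.0 if c == y else 0.0 for c in range(C)]

def mean_forget_conf(W, b, data):
    pts = [x for x, y in data if y == FORGET]
    return sum(softmax(logits(W, b, x))[FORGET] for x in pts) / len(pts)

# Toy dataset: three well-separated 2-D blobs.
centers = [(2.0, 0.0), (0.0, 2.0), (-2.0, -2.0)]
data = [([cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3)], y)
        for y, (cx, cy) in enumerate(centers) for _ in range(60)]

# Base training on all classes.
W = [[0.0] * D for _ in range(C)]
b = [0.0] * C
for _ in range(30):
    for x, y in data:
        sgd_step(W, b, x, onehot(y), 0.1)
conf_before = mean_forget_conf(W, b, data)

# Phase 1 -- aggressive forgetting: fine-tune toward a *uniform* label on
# synthetic points near the forget class's region (a crude stand-in for OES).
oes = [[2.0 + random.gauss(0, 0.3), random.gauss(0, 0.3)] for _ in range(60)]
for _ in range(30):
    for x in oes:
        sgd_step(W, b, x, [1.0 / C] * C, 0.5)

# Phase 2 -- utility restoration: fine-tune on ~5% of the retained data only.
retained = [(x, y) for x, y in data if y != FORGET]
subset = random.sample(retained, max(1, len(retained) // 20))
for _ in range(30):
    for x, y in subset:
        sgd_step(W, b, x, onehot(y), 0.05)

conf_after = mean_forget_conf(W, b, data)
retain_acc = sum(1 for x, y in retained
                 if max(range(C), key=lambda c: logits(W, b, x)[c]) == y) / len(retained)
print(f"forget-class confidence: {conf_before:.2f} -> {conf_after:.2f}")
print(f"retained-class accuracy: {retain_acc:.2f}")
```

The point of the schedule is visible even at this scale: phase 1 collapses the model's confidence on the forget class without touching any real forget-set sample, and phase 2 recovers retained-class accuracy from a small retained subset alone.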