🤖 AI Summary
Existing machine unlearning methods, which aim to selectively erase knowledge of target classes from pretrained models, struggle to simultaneously achieve fast deletion, high accuracy retention on the remaining classes, and strong privacy guarantees.
Method: We propose a class-aware soft-pruning framework based on orthogonal convolutional kernel regularization. It identifies class-specific channels via activation difference analysis, then enforces filter decorrelation and channel-level soft pruning to enable millisecond-scale precise unlearning.
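The orthogonality constraint above can be sketched as a soft regularization term that penalizes correlation between flattened convolutional filters. This is a minimal NumPy illustration of the general idea (a Frobenius-norm penalty on the filter Gram matrix); the paper's exact regularizer and weighting are not specified here, and `orthogonality_penalty` is a hypothetical name.

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality penalty ||W W^T - I||_F^2 for one conv layer.

    W: filter bank of shape (out_channels, in_channels, k, k); each filter
    is flattened to a row vector before forming the Gram matrix.
    Illustrative sketch -- the paper's actual regularizer may differ.
    """
    rows = W.reshape(W.shape[0], -1)        # (out_channels, in*k*k)
    gram = rows @ rows.T                     # pairwise filter correlations
    eye = np.eye(W.shape[0])
    return np.sum((gram - eye) ** 2)         # squared Frobenius norm

# A mutually orthonormal filter bank incurs zero penalty.
W = np.zeros((4, 1, 2, 2))
for i in range(4):
    W[i].flat[i] = 1.0                       # one-hot rows, orthonormal
print(orthogonality_penalty(W))              # -> 0.0
```

Driving this penalty toward zero decorrelates filters, which is what lets individual channels later be suppressed without disturbing features used by the retained classes.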
Results: Evaluated on CIFAR-10, CIFAR-100, and TinyImageNet, our method achieves complete forgetting of target classes while degrading accuracy on retained classes by less than 0.5%. It accelerates unlearning by 2–3 orders of magnitude over state-of-the-art approaches and significantly mitigates membership inference attacks, meeting the real-time, regulatory-compliance, and privacy requirements of ML-as-a-Service (MLaaS) deployments.
📝 Abstract
Machine unlearning aims to selectively remove class-specific knowledge from pretrained neural networks to satisfy privacy regulations such as the GDPR. Existing methods typically face a trade-off between unlearning speed and preservation of predictive accuracy, often incurring either high computational overhead or significant performance degradation on retained classes. In this paper, we propose a novel class-aware soft pruning framework leveraging orthogonal convolutional kernel regularization to achieve rapid and precise forgetting with millisecond-level response times. By enforcing orthogonality constraints during training, our method decorrelates convolutional filters and disentangles feature representations, while efficiently identifying class-specific channels through activation difference analysis. Extensive evaluations across multiple architectures and datasets demonstrate stable pruning with near-instant execution, complete forgetting of targeted classes, and minimal accuracy loss on retained data. Experiments on CIFAR-10, CIFAR-100, and TinyImageNet confirm that our approach substantially reduces membership inference attack risks and accelerates unlearning by orders of magnitude compared to state-of-the-art baselines. This framework provides an efficient, practical solution for real-time machine unlearning in Machine Learning as a Service (MLaaS) scenarios.
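The two mechanisms the abstract names, activation difference analysis and channel-level soft pruning, can be sketched together. In this minimal NumPy illustration, channels whose mean activation is much higher on the forget class than on the retained data are selected, then scaled down by a mask rather than structurally removed; the function names (`class_specific_channels`, `soft_prune`) and the simple mean-difference criterion are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def class_specific_channels(acts_target, acts_retain, top_k=3):
    """Rank channels by mean activation gap between the forget class and
    the retained data; return the top_k most target-specific channels.

    acts_*: per-sample channel activations, shape (n_samples, n_channels).
    """
    diff = acts_target.mean(axis=0) - acts_retain.mean(axis=0)
    return np.argsort(diff)[::-1][:top_k]    # largest gap first

def soft_prune(feature_map, channels, alpha=0.0):
    """Scale (rather than delete) the selected channels: alpha=0 fully
    suppresses them while keeping the network shape intact, so the edit
    is applied -- or rolled back -- in milliseconds."""
    mask = np.ones(feature_map.shape[1])
    mask[channels] = alpha
    return feature_map * mask[None, :]

# Synthetic activations: channels 0-2 fire strongly only on the forget class.
rng = np.random.default_rng(0)
acts_t = rng.normal(5.0, 0.1, (32, 8))
acts_t[:, 3:] = 0.1
acts_r = rng.normal(0.1, 0.1, (32, 8))
chans = class_specific_channels(acts_t, acts_r, top_k=3)
pruned = soft_prune(np.ones((4, 8)), chans)  # selected channels zeroed
```

Because pruning is applied as a multiplicative mask, no retraining or weight surgery is needed, which is consistent with the millisecond-level response times claimed above.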