CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning

📅 2025-02-27
🏛️ IEEE Transactions on Dependable and Secure Computing
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address privacy leakage risks arising from data deletion in machine unlearning, such as data reconstruction and membership inference, this paper proposes the Compressive Representation Forgetting Unlearning (CRFU) scheme, which removes identifiable traces of the erased data from model outputs while preserving utility. CRFU introduces the first information bottleneck-based unlearning paradigm: it erases data by minimizing the mutual information between the compressed representation and the erased data, while a remembering constraint and an unlearning rate balance forgetting against retention of previously learned knowledge. On MNIST, CRFU significantly increases reconstruction MSE, improving defense against privacy reconstruction attacks by approximately 200% with only a 1.5% accuracy drop. The core contribution is integrating the information bottleneck principle into the machine unlearning framework, enabling joint optimization of privacy protection and knowledge retention.

📝 Abstract
Machine unlearning allows data owners to erase the impact of their specified data from trained models. Unfortunately, recent studies have shown that adversaries can recover the erased data, posing serious threats to user privacy. An effective unlearning method removes the information of the specified data from the trained model, resulting in different outputs for the same input before and after unlearning. Adversaries can exploit these output differences to conduct privacy leakage attacks, such as reconstruction and membership inference attacks. However, directly applying traditional defenses to unlearning leads to significant model utility degradation. In this paper, we introduce a Compressive Representation Forgetting Unlearning scheme (CRFU), designed to safeguard against privacy leakage on unlearning. CRFU achieves data erasure by minimizing the mutual information between the trained compressive representation (learned through information bottleneck theory) and the erased data, thereby maximizing the distortion of data. This ensures that the model's output contains less information that adversaries can exploit. Furthermore, we introduce a remembering constraint and an unlearning rate to balance the forgetting of erased data with the preservation of previously learned knowledge, thereby reducing accuracy degradation. Theoretical analysis demonstrates that CRFU can effectively defend against privacy leakage attacks. Our experimental results show that CRFU significantly increases the reconstruction mean square error (MSE), achieving a defense effect improvement of approximately 200% against privacy reconstruction attacks with only 1.5% accuracy degradation on MNIST.
Problem

Research questions and friction points this paper is trying to address.

Prevents privacy leakage in machine unlearning
Balances data erasure with model accuracy
Defends against reconstruction and inference attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

CRFU minimizes mutual information for data erasure.
CRFU balances forgetting with knowledge preservation.
CRFU increases reconstruction error, enhancing privacy defense.
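The ideas above (information-bottleneck compression, a remembering constraint on retained data, and a forgetting term on erased data weighted by an unlearning rate) can be sketched as a toy training objective. This is an illustrative reconstruction, not the authors' implementation: the function name, the Gaussian KL compression term, and the use of predictive-entropy maximization as a crude proxy for minimizing mutual information with the erased data are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def crfu_style_loss(z_mean, z_logvar, logits, labels, erased,
                    beta=1e-3, unlearn_rate=0.5):
    """Toy CRFU-style objective (illustrative sketch, not the paper's code).

    - KL(q(z|x) || N(0, I)) compresses the representation (information bottleneck).
    - Cross-entropy on retained samples plays the role of the remembering constraint.
    - On erased samples we *maximize* predictive entropy, a crude stand-in for
      minimizing mutual information between the representation and erased data.
    - `unlearn_rate` trades off forgetting strength against accuracy retention.
    """
    # IB compression term: per-sample KL divergence to a standard normal prior.
    kl = 0.5 * np.sum(np.exp(z_logvar) + z_mean**2 - 1.0 - z_logvar, axis=1)
    probs = softmax(logits)
    n = len(labels)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)     # per-sample cross-entropy
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # predictive entropy
    keep = ~erased
    remember = ce[keep].mean() if keep.any() else 0.0
    forget = -ent[erased].mean() if erased.any() else 0.0  # maximize entropy on erased
    return remember + unlearn_rate * forget + beta * kl.mean()
```

Raising `unlearn_rate` pushes the model's outputs on erased samples toward uninformative (high-entropy) predictions, mirroring the paper's trade-off between the forgetting of erased data and preservation of learned knowledge.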
Weiqi Wang
School of Computer Science, University of Technology Sydney, Australia
Chenhan Zhang
PhD; research interests: deep learning, privacy preservation
Zhiyi Tian
School of Computer Science, University of Technology Sydney, Australia
Shushu Liu
Department of Communication and Networking, Aalto University, Espoo, Finland
Shui Yu
School of Computer Science, University of Technology Sydney, Australia