BlindU: Blind Machine Unlearning without Revealing Erasing Data

📅 2026-01-12
🏛️ IEEE Transactions on Pattern Analysis and Machine Intelligence
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a privacy dilemma in machine unlearning within sensitive settings such as federated learning: most existing methods require users to upload the very data they want deleted, contradicting the privacy principles unlearning is meant to uphold. To resolve this, the paper proposes BlindU, the first blind unlearning framework that enables effective forgetting without exposing raw user data. In BlindU, users locally generate compressed representations via an information bottleneck and transmit only these representations, together with their labels, to the server, which then performs unlearning solely on this sanitized input. The approach integrates dedicated unlearning modules, a multiple gradient descent algorithm, and a noise-free differentially private masking mechanism to simultaneously guarantee strong privacy and high unlearning efficacy. Both theoretical analysis and empirical evaluations demonstrate that BlindU outperforms existing privacy-preserving baselines in both privacy protection and unlearning performance.
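The exchange described above can be sketched end to end. This is an illustrative toy, not the paper's implementation: the linear `encode` map stands in for the learned IB encoder, and `dp_mask` stands in for the paper's noise-free DP masking; all function names, dimensions, and the masking rule are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """User-side encoder: a compressive map standing in for the learned
    IB encoder, so the representation z has far less capacity than x."""
    return np.tanh(W @ x)

def dp_mask(x, scale=0.1):
    """Illustrative stand-in for the paper's noise-free DP masking:
    a random sign perturbation applied to the raw sample before encoding."""
    mask = rng.choice([-1.0, 1.0], size=x.shape)
    return x * (1.0 + scale * mask)

# User side: the raw sample x never leaves the device.
x = rng.normal(size=64)            # raw erasing sample (stays local)
y = 3                              # its label
W = rng.normal(size=(8, 64)) / 8   # shared encoder weights from the FL model
z = encode(dp_mask(x), W)          # masked, compressed representation

# Only (z, y) is uploaded; the server performs unlearning on these alone.
upload = (z, y)
print(upload[0].shape)  # (8,) -- far smaller than the 64-dim input
```

The point of the sketch is the data flow: the server's unlearning procedure only ever sees the 8-dimensional representation and the label, never the 64-dimensional raw input.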

📝 Abstract
Machine unlearning enables data holders to remove the contribution of specified samples from trained models to protect their privacy. Paradoxically, most unlearning methods require unlearning requesters to first upload their data to the server as a prerequisite for unlearning. Such methods are infeasible in privacy-preserving scenarios where servers are prohibited from accessing users' data, such as federated learning (FL). In this paper, we explore how to implement unlearning without revealing the data to be erased to the server. We propose Blind Unlearning (BlindU), which carries out unlearning using compressed representations instead of original inputs. BlindU involves only the server and the unlearning user: the user locally generates privacy-preserving representations, and the server performs unlearning solely on these representations and their labels. For FL model training, we employ the information bottleneck (IB) mechanism. The encoder of the IB-based FL model learns representations that discard as much task-irrelevant information from the inputs as possible, allowing FL users to generate compressed representations locally. For effective unlearning from compressed representations, BlindU integrates two dedicated unlearning modules tailored to IB-based models and uses a multiple gradient descent algorithm to balance forgetting and utility retention. While IB compression already protects the task-irrelevant information of inputs, we further strengthen privacy with a noise-free differential privacy (DP) masking method applied to the raw data to be erased before compression. Theoretical analysis and extensive experimental results demonstrate the superiority of BlindU in privacy protection and unlearning effectiveness over the best existing privacy-preserving unlearning baselines.
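The abstract's "multiple gradient descent algorithm to balance forgetting and utility retaining" can be illustrated with the classic two-objective MGDA closed form, which picks the convex combination of the two gradients with minimum norm. This is a generic MGDA sketch under our own naming, not the paper's specific unlearning objective:

```python
import numpy as np

def mgda_two_task(g_forget, g_retain):
    """Two-objective MGDA: find alpha in [0, 1] minimising
    ||alpha * g_forget + (1 - alpha) * g_retain||, the min-norm point
    of the convex hull of the two gradients. The result is a common
    descent direction that trades off forgetting against retention."""
    diff = g_forget - g_retain
    denom = float(diff @ diff)
    if denom == 0.0:
        alpha = 0.5  # gradients coincide; any weighting is equivalent
    else:
        alpha = float(np.clip((g_retain - g_forget) @ g_retain / denom,
                              0.0, 1.0))
    return alpha * g_forget + (1.0 - alpha) * g_retain

# Orthogonal objectives: the balanced direction splits them evenly.
d = mgda_two_task(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(d)  # [0.5 0.5]

# Aligned objectives: the weight clips and the smaller gradient wins.
d2 = mgda_two_task(np.array([1.0, 0.0]), np.array([2.0, 0.0]))
print(d2)  # [1. 0.]
```

Following this combined direction decreases both losses whenever the two gradients are not directly opposed, which is how an MGDA-style update can forget the erased representations without collapsing utility on retained data.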
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
privacy preservation
federated learning
data erasure
blind unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Blind Unlearning
Information Bottleneck
Federated Learning
Differential Privacy
Machine Unlearning
Weiqi Wang
University of Technology Sydney
Model Security and Data Privacy; Machine Unlearning

Zhiyi Tian
School of Cyber Science and Engineering, Southeast University, China

Chenhan Zhang
PhD
Deep Learning; Privacy-Preserving

Shui Yu
School of Computer Science, University of Technology Sydney, Australia