EVE: Efficient Verification of Data Erasure through Customized Perturbation in Approximate Unlearning

πŸ“… 2026-02-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of efficiently verifying machine unlearning without access to the model’s initial training process. The authors propose a novel verification method that requires neither backdoor insertion nor any intervention during training. By constructing tailored perturbations that induce detectable differences in model predictions before and after unlearning, the approach formulates perturbation generation as an adversarial optimization problem that aligns the gradient of the unlearning objective with the gradient of decision boundary shifts. This formulation enables, for the first time, a fully decoupled and efficient verification mechanism independent of the original training procedure. Experimental results demonstrate that the proposed method significantly outperforms existing techniques in both accuracy and computational efficiency, offering a practical and reliable solution for validating data removal in machine learning models.

πŸ“ Abstract
Verifying whether a machine unlearning process has been properly executed is critical but remains underexplored. Some existing approaches propose unlearning verification methods based on backdooring techniques. However, these methods typically require participation in the model's initial training phase to backdoor the model for later verification, which is inefficient and impractical. In this paper, we propose EVE, an efficient verification-of-erasure method for machine unlearning that requires no involvement in the model's initial training process. The core idea is to perturb the unlearning data so that the model's predictions on specified samples change between before and after unlearning with the perturbed data. Unlearning users can treat these observed changes as a verification signal. Specifically, the perturbations are designed with two key objectives: preserving the unlearning effect and altering the unlearned model's predictions on target samples. We formalize perturbation generation as an adversarial optimization problem and solve it by aligning the unlearning gradient with the gradient of the decision-boundary change for the target samples. We conducted extensive experiments, and the results show that EVE can verify machine unlearning without involving the model's initial training process, unlike backdoor-based methods. Moreover, EVE significantly outperforms state-of-the-art unlearning verification methods, offering a substantial speedup while improving verification accuracy. The source code of EVE is released at https://anonymous.4open.science/r/EVE-C143, providing a novel tool for verification of machine unlearning.
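The gradient-alignment idea described above can be illustrated with a toy sketch. The code below is a hypothetical reconstruction, not the authors' released implementation: it uses a logistic-regression model, a single forget sample, and numerical finite differences to craft a small perturbation whose unlearning gradient anti-aligns with a target sample's boundary direction, so that one approximate-unlearning step (plain gradient ascent on the forget sample's loss, with an assumed step size `eta`) flips the target's prediction. The function and variable names (`craft_perturbation`, `eta`, `eps`) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_grad(w, x, y):
    # Gradient of the logistic loss on (x, y) w.r.t. the weights w.
    # A simple approximate-unlearning step ascends this gradient to
    # cancel the sample's training contribution.
    return (sigmoid(w @ x) - y) * x

def craft_perturbation(w, x_forget, y_forget, x_target,
                       steps=50, lr=0.1, eps=0.5):
    # Hypothetical gradient-alignment loop (a sketch, not EVE's exact
    # algorithm): find a small perturbation delta such that unlearning
    # the perturbed forget sample shifts the decision boundary against
    # the target sample -- the detectable verification signal.
    g_t = x_target / np.linalg.norm(x_target)  # boundary-shift direction

    def alignment(d):
        g = unlearn_grad(w, x_forget + d, y_forget)
        # Cosine between the unlearning gradient and -g_t: large values
        # mean the unlearning update pushes the target's logit down.
        return -(g @ g_t) / (np.linalg.norm(g) + 1e-12)

    delta = np.zeros_like(x_forget)
    h = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for i in range(delta.size):  # central finite differences
            e = np.zeros_like(delta)
            e[i] = h
            grad[i] = (alignment(delta + e) - alignment(delta - e)) / (2 * h)
        delta = np.clip(delta + lr * grad, -eps, eps)  # keep delta small
    return delta

# Toy verification: unlearning the *perturbed* sample flips the target.
w = np.array([1.0, 0.0])                 # current model weights
x_target = np.array([1.0, 0.0])          # target sample, logit = +1.0
x_forget, y_forget = np.array([0.0, 1.0]), 1.0

delta = craft_perturbation(w, x_forget, y_forget, x_target)
eta = 10.0                               # unlearning step size (assumed)
w_after = w + eta * unlearn_grad(w, x_forget + delta, y_forget)
print(np.sign(w @ x_target), np.sign(w_after @ x_target))  # sign flips
```

In this sketch the verification check is simply whether the target sample's predicted label flips after unlearning; a real verifier would aggregate such flips over several target samples before declaring the erasure executed.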
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
unlearning verification
data erasure
efficient verification
model verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

machine unlearning verification
customized perturbation
adversarial optimization
data erasure
gradient alignment
Weiqi Wang
University of Technology Sydney
Model Security and Data Privacy, Machine Unlearning

Zhiyi Tian
University of Technology Sydney

Chenhan Zhang
PhD
Deep Learning, Privacy-Preserving

Luoyu Chen
University of Technology Sydney

Shui Yu
University of Technology Sydney