EFU: Enforcing Federated Unlearning via Functional Encryption

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key limitations of federated unlearning (FU), including reliance on server cooperation, exposure of client intent and identity, and the lack of enforceable guarantees, this paper proposes the first trustless, behaviorally covert, and verifiable FU framework. Methodologically, it binds update and aggregation computations via functional encryption, preventing the server from identifying, skipping, or tampering with unlearning requests; it further employs an adversarial-example-assisted loss and parameter-importance regularization to mask both behavioral shifts and parameter updates in the model without revealing the unlearning target. The framework is agnostic to the client-side unlearning algorithm and enables fully autonomous execution. Experiments across multiple datasets and models show that accuracy on forgotten data approaches random guessing (≈50%), utility degradation remains comparable to full retraining, and the unlearning behavior is provably hidden from the server.

📝 Abstract
Federated unlearning (FU) algorithms allow clients in federated settings to exercise their "right to be forgotten" by removing the influence of their data from a collaboratively trained model. Existing FU methods maintain data privacy by performing unlearning locally on the client side and sending targeted updates to the server without exposing forgotten data; yet they often rely on server-side cooperation, revealing the client's intent and identity without enforcement guarantees, compromising autonomy and unlearning privacy. In this work, we propose EFU (Enforced Federated Unlearning), a cryptographically enforced FU framework that enables clients to initiate unlearning while concealing its occurrence from the server. Specifically, EFU leverages functional encryption to bind encrypted updates to specific aggregation functions, ensuring the server can neither perform unauthorized computations nor detect or skip unlearning requests. To further mask behavioral and parameter shifts in the aggregated model, we incorporate auxiliary unlearning losses based on adversarial examples and parameter importance regularization. Extensive experiments show that EFU achieves near-random accuracy on forgotten data while maintaining performance comparable to full retraining across datasets and neural architectures, all while concealing unlearning intent from the server. Furthermore, we demonstrate that EFU is agnostic to the underlying unlearning algorithm, enabling secure, function-hiding, and verifiable unlearning for any client-side FU mechanism that issues targeted updates.
Problem

Research questions and friction points this paper is trying to address.

Existing FU methods depend on server-side cooperation and offer no enforceable unlearning guarantees
Issuing an unlearning request exposes the client's intent and identity to the server
Removing a client's data influence must not degrade model performance or unlearning privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional encryption binds updates to authorized aggregation functions
Adversarial-example losses and parameter-importance regularization mask behavioral and parameter shifts
Algorithm-agnostic framework that conceals unlearning intent from the server
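The masking objective described in the summary can be sketched as a combined loss: an adversarial-example term drives the model toward near-random behavior on the forget set, while an importance-weighted penalty keeps parameters that matter for retained data close to their pre-unlearning values, hiding the update from the server. The exact loss form and importance estimator (e.g. a Fisher-information-style score) are assumptions for illustration, not the paper's precise formulation.

```python
# Sketch of a masked client-side unlearning objective (assumed form):
#   L = L_adv(forget set) + lam * sum_i w_i * (theta_i - theta_old_i)^2
# where w_i estimates parameter i's importance for retained data.

def masked_unlearning_loss(theta, theta_old, importance, adv_loss, lam=1.0):
    """adv_loss: precomputed loss on adversarial examples of the forget set.
    importance[i]: importance of parameter i for retained data (assumed
    Fisher-style estimate); large values pin the parameter in place."""
    drift_penalty = sum(w * (t - t0) ** 2
                        for w, t, t0 in zip(importance, theta, theta_old))
    return adv_loss + lam * drift_penalty

# Toy usage: two parameters, the first critical for retained data.
theta_old = [1.0, -0.5]        # parameters before unlearning
theta = [1.1, 0.3]             # candidate post-unlearning parameters
importance = [10.0, 0.1]       # first parameter heavily penalized for drift
print(masked_unlearning_loss(theta, theta_old, importance, adv_loss=0.7))
```

The regularizer lets the unimportant parameter move freely to absorb the unlearning update, so the aggregated model's visible parameter shift stays small, which is how behavioral covertness is achieved per the summary.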