REMISVFU: Vertical Federated Unlearning via Representation Misdirection for Intermediate Output Feature

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Enabling the GDPR "right to be forgotten" in vertical federated learning (VFL) remains challenging, as existing unlearning methods are designed for horizontal settings and fail to accommodate feature-partitioned VFL architectures. Method: We propose the first client-level federated unlearning framework for VFL, centered on a representation misdirection mechanism. Specifically: (1) the forgetting client collapses its encoder outputs onto a random anchor point on the unit sphere, explicitly severing statistical dependencies between its features and the global model; (2) we formulate a joint optimization objective comprising a server-side retention loss and an unlearning loss, augmented by gradient orthogonal projection to preserve utility for non-forgetting clients. Results: Evaluated on public benchmarks, our method reduces the backdoor attack success rate to the natural class-prior level while incurring only about 2.5 percentage points of clean-accuracy degradation, substantially outperforming state-of-the-art alternatives.

📝 Abstract
Data-protection regulations such as the GDPR grant every participant in a federated system a right to be forgotten. Federated unlearning has therefore emerged as a research frontier, aiming to remove a specific party's contribution from the learned model while preserving the utility of the remaining parties. However, most unlearning techniques focus on Horizontal Federated Learning (HFL), where data are partitioned by samples. In contrast, Vertical Federated Learning (VFL) allows organizations that possess complementary feature spaces to train a joint model without sharing raw data. The resulting feature-partitioned architecture renders HFL-oriented unlearning methods ineffective. In this paper, we propose REMISVFU, a plug-and-play representation misdirection framework that enables fast, client-level unlearning in splitVFL systems. When a deletion request arrives, the forgetting party collapses its encoder output to a randomly sampled anchor on the unit sphere, severing the statistical link between its features and the global model. To maintain utility for the remaining parties, the server jointly optimizes a retention loss and a forgetting loss, aligning their gradients via orthogonal projection to eliminate destructive interference. Evaluations on public benchmarks show that REMISVFU suppresses the backdoor attack success rate to the natural class-prior level while sacrificing only about 2.5 percentage points of clean accuracy, outperforming state-of-the-art baselines.
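The misdirection step described above can be sketched as a simple loss: sample one random anchor on the unit sphere, then pull every (L2-normalized) embedding from the forgetting client's encoder toward that single point, so the server head can no longer extract anything class-dependent from them. This is a minimal illustration under assumed shapes, not the paper's actual implementation; the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_anchor(dim, rng):
    # A Gaussian sample normalized to unit length is uniform on the sphere.
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def misdirection_loss(encoder_outputs, anchor):
    """Mean squared distance between L2-normalized embeddings and the anchor.

    Driving this toward zero collapses all embeddings onto one random point,
    severing the statistical link between the client's features and the model.
    """
    norms = np.linalg.norm(encoder_outputs, axis=1, keepdims=True)
    z = encoder_outputs / np.clip(norms, 1e-12, None)
    return float(np.mean(np.sum((z - anchor) ** 2, axis=1)))

anchor = random_unit_anchor(8, rng)          # fixed target for unlearning
batch = rng.standard_normal((4, 8))          # stand-in for encoder outputs
loss = misdirection_loss(batch, anchor)
```

In practice this loss would be minimized by gradient descent on the forgetting client's encoder parameters; once all its outputs sit at the anchor, they carry no sample-specific information.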
Problem

Research questions and friction points this paper is trying to address.

Enables client-level unlearning in vertical federated learning systems
Removes a party's contribution while preserving utility for others
Uses representation misdirection to break statistical links for forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representation misdirection for vertical federated unlearning
Encoder output collapsed to random anchor on unit sphere
Server optimizes retention and forgetting losses via orthogonal projection
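The orthogonal-projection idea in the last bullet can be illustrated in a few lines: when the forgetting gradient points against the retention gradient (negative inner product), its conflicting component along the retention direction is removed before the update, so unlearning does not erase the retained clients' utility. This is a generic conflict-projection sketch (in the spirit of gradient surgery), assuming flattened gradient vectors; it is not taken from the paper's code.

```python
import numpy as np

def project_orthogonal(g_forget, g_retain, eps=1e-12):
    """Remove the component of g_forget that conflicts with g_retain.

    If the two gradients disagree (negative dot product), project g_forget
    onto the orthogonal complement of g_retain; otherwise leave it as-is.
    """
    dot = float(np.dot(g_forget, g_retain))
    if dot < 0.0:
        g_forget = g_forget - (dot / (float(np.dot(g_retain, g_retain)) + eps)) * g_retain
    return g_forget

g_retain = np.array([0.0, 1.0])              # retention-loss gradient
g_forget = np.array([1.0, -1.0])             # conflicting forgetting gradient
g_safe = project_orthogonal(g_forget, g_retain)
```

After projection the forgetting update is orthogonal to the retention direction, so a step along it leaves the retention loss unchanged to first order.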
Wenhan Wu
The University of North Carolina at Charlotte
Human Action Recognition · Human Behavior Analysis · Computer Vision
Zhili He
School of Computer Science, Wuhan University, Wuhan, China
Huanghuang Liang
School of Computer Science, Wuhan University, Wuhan, China
Yili Gong
School of Computer Science, Wuhan University, Wuhan, China
Jiawei Jiang
School of Computer Science, Wuhan University, Wuhan, China
Chuang Hu
State Key Laboratory of Internet of Things for Smart City, University of Macau, Macau SAR
Dazhao Cheng
School of Computer Science, Wuhan University, Wuhan, China