🤖 AI Summary
Enabling GDPR “right to be forgotten” in vertical federated learning (VFL) remains challenging, as existing unlearning methods are designed for horizontal settings and fail to accommodate feature-partitioned VFL architectures.
Method: We propose the first client-level federated unlearning framework for VFL, centered on a representation misdirection mechanism. Specifically: (1) the forgetting client collapses its encoder outputs onto random anchor points on the unit sphere, explicitly severing statistical dependencies between its features and the global model; (2) we formulate a joint optimization objective comprising server-side retention loss and unlearning loss, augmented by gradient orthogonal projection to preserve utility for non-forgetting clients.
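The two mechanisms above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the Gaussian-then-normalize anchor sampling, and the mean-squared-error form of the collapse loss are assumptions chosen for illustration; the actual objective may differ.

```python
import math
import random

def random_unit_anchor(dim, rng):
    """Sample a random anchor uniformly on the unit sphere
    (standard Gaussian vector followed by normalization)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def misdirection_loss(embedding, anchor):
    """Illustrative MSE pulling the forgetting client's encoder output
    onto the fixed random anchor, destroying its task-relevant signal."""
    return sum((e - a) ** 2 for e, a in zip(embedding, anchor)) / len(embedding)
```

Driving all of the forgetting client's embeddings toward one random anchor makes them constant and independent of the input, which is what severs the statistical dependency between that client's features and the global model.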
Results: Evaluated on public benchmarks, our method reduces the backdoor attack success rate to the natural class-prior level while incurring only about 2.5 percentage points of clean accuracy degradation—substantially outperforming state-of-the-art alternatives.
📝 Abstract
Data-protection regulations such as the GDPR grant every participant in a federated system a right to be forgotten. Federated unlearning has therefore emerged as a research frontier, aiming to remove a specific party's contribution from the learned model while preserving the utility of the remaining parties. However, most unlearning techniques focus on Horizontal Federated Learning (HFL), where data are partitioned by samples. In contrast, Vertical Federated Learning (VFL) allows organizations that possess complementary feature spaces to train a joint model without sharing raw data. The resulting feature-partitioned architecture renders HFL-oriented unlearning methods ineffective. In this paper, we propose REMISVFU, a plug-and-play representation misdirection framework that enables fast, client-level unlearning in splitVFL systems. When a deletion request arrives, the forgetting party collapses its encoder output to a randomly sampled anchor on the unit sphere, severing the statistical link between its features and the global model. To maintain utility for the remaining parties, the server jointly optimizes a retention loss and a forgetting loss, aligning their gradients via orthogonal projection to eliminate destructive interference. Evaluations on public benchmarks show that REMISVFU suppresses the backdoor attack success rate to the natural class-prior level while sacrificing only about 2.5 percentage points of clean accuracy, outperforming state-of-the-art baselines.
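The gradient-alignment step described in the abstract can be sketched as a conflict-aware projection. This is an assumption-laden illustration, not the paper's algorithm: the conflict test (negative inner product) and the projection onto the orthogonal complement of the retention gradient follow the common gradient-surgery pattern, and the helper name is invented here.

```python
def project_orthogonal(g_forget, g_retain):
    """If the forgetting gradient conflicts with the retention gradient
    (negative inner product), strip its component along g_retain so the
    combined update no longer degrades the remaining parties' utility."""
    dot = sum(a * b for a, b in zip(g_forget, g_retain))
    if dot >= 0:
        return list(g_forget)  # no destructive interference: keep as-is
    sq = sum(b * b for b in g_retain)
    # Subtract the (negative) projection onto g_retain.
    return [a - (dot / sq) * b for a, b in zip(g_forget, g_retain)]
```

After projection, a conflicting forgetting gradient is orthogonal to the retention gradient, so stepping along it is (to first order) neutral for the retention loss.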