🤖 AI Summary
This work addresses the challenge of efficiently and privately removing specific participants' data contributions from a federated learning model without accessing clients' raw data. To this end, the authors propose FOUL, a novel framework that performs federated unlearning through a purely server-side mechanism. FOUL operates in two phases, a preparatory learning-to-unlearn phase and an on-server knowledge aggregation phase, and introduces a new, more transparent evaluation setting for federated unlearning along with a "time-to-forget" metric that quantifies how quickly a model reaches its best unlearning performance. By leveraging gradient conflict mitigation and representation techniques, a two-stage architecture, and a client-free aggregation strategy, FOUL achieves significantly faster unlearning and superior unlearning performance compared to retraining baselines across three datasets, all while incurring lower computational and communication overhead.
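The summary mentions gradient conflict mitigation but does not spell out the mechanism. One common way such conflicts are handled (a PCGrad-style projection, shown here purely as an illustrative sketch, not FOUL's actual algorithm; the function name and the retain/forget gradient split are assumptions) is to remove from the retained clients' gradient any component that points in the same direction as the forget clients' gradient whenever the two conflict:

```python
import numpy as np

def mitigate_conflict(g_retain: np.ndarray, g_forget: np.ndarray) -> np.ndarray:
    """Illustrative PCGrad-style projection (not FOUL's published method).

    If the retain-gradient and forget-gradient conflict (negative inner
    product), project g_retain onto the plane orthogonal to g_forget, so
    the model update stops reinforcing the forget clients' knowledge.
    """
    dot = float(np.dot(g_retain, g_forget))
    if dot < 0.0:  # gradients point in conflicting directions
        g_retain = g_retain - (dot / float(np.dot(g_forget, g_forget))) * g_forget
    return g_retain

# Conflicting case: the projected gradient is orthogonal to g_forget.
g_r = np.array([-1.0, 1.0])
g_f = np.array([1.0, 0.0])
print(mitigate_conflict(g_r, g_f))  # [0. 1.]
```

When the two gradients already agree (non-negative inner product), the retain-gradient is returned unchanged, so the projection only intervenes where the update would otherwise pull the model back toward the forgotten data.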
📝 Abstract
Federated Unlearning (FUL) aims to remove specific participants' data contributions from a trained Federated Learning model, thereby ensuring data privacy and compliance with regulatory requirements. Despite its potential, progress in FUL has been limited by several challenges, including cross-client knowledge inaccessibility and high computational and communication costs. To overcome these challenges, we propose Federated On-server Unlearning (FOUL), a novel framework that comprises two key stages. The learning-to-unlearn stage serves as a preparatory learning phase, during which the model identifies and encodes the key features associated with the forget clients. This stage is communication-efficient and establishes the basis for the subsequent unlearning process. The on-server knowledge aggregation stage then performs unlearning at the server without requiring access to client data, thereby preserving both efficiency and privacy. We introduce a new data setting for FUL that enables a more transparent and rigorous evaluation of unlearning. To highlight the effectiveness of our approach, we also propose a novel evaluation metric, termed time-to-forget, which measures how quickly the model achieves optimal unlearning performance. Extensive experiments on three datasets under various unlearning scenarios demonstrate that FOUL outperforms retraining in FUL. Moreover, FOUL achieves competitive or superior results with a significantly reduced time-to-forget while maintaining low communication and computation costs.
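The abstract defines time-to-forget only informally. A plausible minimal sketch of how such a metric could be computed (an assumption for illustration, not the paper's exact definition: here it counts the first unlearning round at which forget-set accuracy falls to the level of a model retrained without the forget clients):

```python
from typing import Optional, Sequence

def time_to_forget(
    forget_acc_per_round: Sequence[float],
    target_acc: float,
    tolerance: float = 0.01,
) -> Optional[int]:
    """Illustrative time-to-forget sketch (not the paper's exact metric).

    forget_acc_per_round: accuracy on the forget clients' data after each
        unlearning round.
    target_acc: reference accuracy, e.g. that of a model retrained from
        scratch without the forget clients.
    Returns the first round index at which the forget-set accuracy is
    within `tolerance` of the target, or None if it never reaches it.
    """
    for round_idx, acc in enumerate(forget_acc_per_round):
        if acc <= target_acc + tolerance:
            return round_idx
    return None

# A fast unlearner reaches the retrained-model reference by round 3.
print(time_to_forget([0.90, 0.60, 0.30, 0.11], target_acc=0.10))  # 3
```

Under this reading, a lower time-to-forget means the method reaches retraining-level forgetting in fewer rounds, which is how the abstract's claim of a "significantly reduced time-to-forget" would be quantified.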