LiD-FL: Towards List-Decodable Federated Learning

📅 2024-08-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the breakdown of robust federated learning when malicious participants form a majority—exceeding 50% and reaching as high as 70%. To tackle this, the authors propose what they present as the first Byzantine-resilient, list-decodable federated learning framework. Departing from the conventional assumption that honest clients outnumber malicious ones (i.e., an honest fraction of at least 1/2), the approach maintains a candidate list of models at the server and guarantees that at least one converges to a high-quality solution. It requires no prior knowledge of the honest-client fraction and comes with provable convergence for both convex and non-convex loss functions. By co-maintaining multiple models and applying a robust aggregation mechanism, the framework substantially improves resilience against poisoning and gradient-reversal (sign-flipping) attacks. Experiments show that, even with 70% malicious clients, the method achieves substantially higher accuracy than state-of-the-art Byzantine-robust baselines.
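To make the list-decodable idea concrete, here is a minimal toy sketch, not the paper's actual algorithm: the server keeps a list of candidate models so that at least one is driven only by honest updates, even though it cannot tell which clients are honest. All names, the degenerate one-model-per-client assignment, and the toy quadratic loss are illustrative assumptions.

```python
import numpy as np

# Toy problem: minimize ||w - w_star||^2. Honest clients send the true
# gradient; malicious clients send its sign-flipped version (a
# gradient-reversal attack). 70% of clients are malicious.
w_star = np.array([1.0, -2.0])
n_clients = 10
malicious = [True] * 7 + [False] * 3   # malicious majority

def client_gradient(w, is_bad):
    g = 2.0 * (w - w_star)             # gradient of the toy quadratic loss
    return -g if is_bad else g         # sign flip = gradient reversal

# Degenerate list scheme for illustration: one candidate model per client.
# Candidates driven by honest clients converge; the server outputs the whole
# list, so at least one model in the list is good (list decoding) without
# ever identifying which clients are honest.
lr, rounds = 0.1, 200
models = [np.zeros(2) for _ in range(n_clients)]
for _ in range(rounds):
    for i in range(n_clients):
        models[i] = models[i] - lr * client_gradient(models[i], malicious[i])

losses = [float(np.sum((m - w_star) ** 2)) for m in models]
print(f"best loss in list: {min(losses):.6f}")
```

Each honest-driven candidate contracts toward `w_star` (the error shrinks by a factor 0.8 per round), while malicious-driven candidates diverge; the minimum loss over the list is therefore near zero. The paper's actual framework replaces this degenerate assignment with multi-model co-maintenance and robust aggregation over client subsets.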

📝 Abstract
Federated learning is often deployed in environments with many unverified participants, so federated learning under adversarial attacks has received significant attention. This paper proposes an algorithmic framework for list-decodable federated learning, in which a central server maintains a list of models with at least one guaranteed to perform well. The framework places no strict restriction on the fraction of honest workers, extending the applicability of Byzantine federated learning to scenarios where adversaries form more than half of the participants. Under proper assumptions on the loss function, we prove a convergence theorem for our method. Experimental results, including image classification tasks with both convex and non-convex losses, demonstrate that the proposed algorithm can withstand a malicious majority under various attacks.
Problem

Research questions and friction points this paper is trying to address.

Robust federated learning breaks down when malicious clients form a majority (over 50%)
Existing Byzantine-robust methods assume honest workers outnumber adversaries
Prior approaches require knowledge of, or strict bounds on, the honest-client fraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

List-decodable framework: the server maintains a list of models, at least one of which is guaranteed to perform well
No strict restriction on the fraction of honest workers, with provable convergence under suitable loss assumptions
Empirically withstands malicious-majority attacks (up to 70% adversarial clients)