🤖 AI Summary
In federated learning, clients are often resource-constrained and exhibit poor robustness to common data corruptions such as noise, blur, and weather artifacts, while existing robust training methods impose prohibitive computational overhead that hinders practical deployment. To address this, we propose FedERL, the first framework to achieve *zero client-side robustness overhead*: all robustness enhancement is performed exclusively on the server via Data-Agnostic Robust Training (DART), without requiring clients to access raw data or execute any additional robust-training operations. DART applies lightweight, data-agnostic adversarial augmentation and regularization on the server, balancing efficiency and corruption resilience. Experiments show that FedERL cuts average client training time and energy consumption by over 60% while attaining higher robust accuracy than state-of-the-art federated robust methods, particularly under severe resource constraints.
📝 Abstract
Federated learning (FL) accelerates the deployment of deep learning models on edge devices while preserving data privacy. However, FL systems face challenges from client-side constraints on computational resources and from a lack of robustness to common corruptions such as noise, blur, and weather effects. Existing robust training methods are computationally expensive and unsuitable for resource-constrained clients. We propose FedERL, Federated Efficient and Robust Learning, as the first work to explicitly address corruption robustness under client-side time and energy constraints. At its core, FedERL employs a novel data-agnostic robust training (DART) method on the server to enhance robustness without access to the training data. In doing so, FedERL ensures zero robustness overhead for clients. Extensive experiments demonstrate FedERL's ability to handle common corruptions at a fraction of the time and energy cost of traditional robust training methods. Under limited time and energy budgets, FedERL surpasses traditional robust training, establishing it as a practical and scalable solution for real-world FL applications.
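To make the server-side, data-agnostic idea concrete, here is a minimal sketch of one update "in the spirit of" DART. Everything here is an assumption for illustration: the linear model `W`, the Gaussian prior used to draw synthetic inputs, the additive-noise corruption, and the output-consistency regularizer are all hypothetical stand-ins; the abstract does not specify the actual augmentation or regularization used by DART. The point the sketch demonstrates is only the framework's key property: the robustness step touches no client data and runs entirely on the server.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear global model (8 features -> 4 outputs),
# standing in for the aggregated FL model held by the server.
W = rng.normal(size=(8, 4))

def dart_style_step(W, lr=0.5, n=256, sigma=0.5):
    """One illustrative server-side robustness update: no client data is
    used. Synthetic inputs drawn from a generic prior are corrupted with
    noise, and the model is regularized so that its outputs change as
    little as possible under the corruption."""
    x_clean = rng.normal(size=(n, W.shape[0]))       # data-agnostic inputs
    delta = sigma * rng.normal(size=x_clean.shape)   # noise-style corruption
    # Output-consistency loss: ||(x + delta) @ W - x @ W||^2 = ||delta @ W||^2
    out_gap = delta @ W
    loss = float((out_gap ** 2).mean())
    grad = 2.0 * delta.T @ out_gap / out_gap.size    # exact gradient w.r.t. W
    return W - lr * grad, loss

W1, loss_before = dart_style_step(W)
_, loss_after = dart_style_step(W1)
print(loss_before, loss_after)
```

Because the regularizer depends only on random draws, the client never runs this step and incurs no extra cost, which is the "zero robustness overhead" property claimed above; a real implementation would pair this regularizer with a mechanism that preserves accuracy learned from client updates.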