🤖 AI Summary
This paper addresses the challenge of robust federated learning under model heterogeneity and client-side data corruption, including noise and compression artifacts. We propose the first robust federated learning framework tailored to asymmetric heterogeneous settings. Methodologically, we integrate diversity-enhanced supervised contrastive learning with a selective one-way collaboration mechanism: the former strengthens robust cross-architecture feature representations, while the latter lets each client selectively reject low-quality knowledge from less reliable collaborators, facilitating adaptive knowledge transfer. Key technical components include hybrid data augmentation, asymmetric model aggregation, and client selection strategies. Extensive experiments under diverse data corruption and model heterogeneity scenarios demonstrate that our approach significantly improves both global model accuracy and convergence stability, consistently outperforming existing state-of-the-art methods.
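The contrastive component described above can be illustrated with a minimal NumPy sketch. It assumes a standard SupCon-style supervised contrastive loss (Khosla et al.) applied to features of augmented views, with a mixup-style Beta blend standing in for the paper's mixed-data augmentation strategy; the function names and the blending step are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def mixed_augment(x1, x2, alpha=0.2, rng=None):
    """Mixup-style Beta blend of two augmented views of the same sample
    (illustrative stand-in for the paper's mixed-data augmentation)."""
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))
    return lam * x1 + (1.0 - lam) * x2

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon-style) loss over L2-normalized
    features: each anchor is pulled toward all same-label batch members
    and pushed away from all others."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n, total = len(labels), 0.0
    for i in range(n):
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        others = [a for a in range(n) if a != i]
        if not pos:
            continue
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        total += -np.mean(sim[i, pos] - log_denom)
    return total / n

# Two toy views of one sample, combined into a harder mixed view.
view_a, view_b = np.ones(4), np.zeros(4)
x_mix = mixed_augment(view_a, view_b)

# Features that cluster by class score a much lower loss than features
# where same-class samples are pushed apart.
labels = np.array([0, 0, 1, 1])
tight = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0], [0.01, 1.0]])
apart = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
```

Minimizing this loss rewards feature extractors that map corrupted (augmented and mixed) views of the same class close together, which is the robustness property the summary attributes to the contrastive component.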
📝 Abstract
This paper studies a challenging robust federated learning task with model-heterogeneous and data-corrupted clients, where the clients have different local model structures. Data corruption is unavoidable in real-world deployment due to factors such as random noise, compression artifacts, or environmental conditions, and it can drastically cripple the entire federated system. To address these issues, this paper introduces a novel Robust Asymmetric Heterogeneous Federated Learning (RAHFL) framework. We propose a Diversity-enhanced supervised Contrastive Learning technique to enhance the resilience and adaptability of local models under various data corruption patterns. Its basic idea is to use complex augmented samples, obtained via a mixed-data augmentation strategy, for supervised contrastive learning, thereby strengthening the model's ability to learn robust and diverse feature representations. Furthermore, we design an Asymmetric Heterogeneous Federated Learning strategy to resist corrupt feedback from external clients. This strategy allows clients to perform selective one-way learning during the collaborative learning phase, so that each client refrains from incorporating lower-quality information from less robust or underperforming collaborators. Extensive experimental results demonstrate the effectiveness and robustness of our approach in diverse, challenging federated learning environments.
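The selective one-way learning step can be sketched as a simple quality gate: each client scores every collaborator's broadcast predictions on a trusted local batch and distills only from those that pass. A minimal NumPy sketch, assuming a cross-entropy threshold as the quality criterion (a hypothetical stand-in for the paper's exact selection rule):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def selective_one_way_update(local_logits, peer_logits_list, labels, max_ce=1.0):
    """One-way selective collaboration: keep only peers whose predictions
    on a trusted local batch have low cross-entropy; rejected peers
    contribute nothing to the distillation target. The threshold is an
    illustrative quality criterion, not the paper's exact rule."""
    def ce(logits):
        p = softmax(logits)
        return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    accepted = [p for p in peer_logits_list if ce(p) <= max_ce]
    if not accepted:                      # no trustworthy peers: learn alone
        return softmax(local_logits)
    # Distillation target: average of the accepted peers' soft predictions.
    return np.mean([softmax(p) for p in accepted], axis=0)

# A reliable peer (confidently correct) passes the gate; a corrupted peer
# (confidently wrong) is rejected and cannot poison the local model.
labels = np.array([0, 1])
local = np.array([[1.0, 0.0], [0.0, 1.0]])
good_peer = np.array([[5.0, 0.0], [0.0, 5.0]])
bad_peer = np.array([[0.0, 5.0], [5.0, 0.0]])
target = selective_one_way_update(local, [good_peer, bad_peer], labels)
```

The one-way character is what makes the collaboration asymmetric: a client may decline a collaborator's knowledge while that collaborator still learns from others, so corrupt feedback is contained rather than propagated.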