FedEFC: Federated Learning Using Enhanced Forward Correction Against Noisy Labels

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), client data heterogeneity and communication constraints exacerbate the detrimental impact of label noise on model performance. To address this, we propose FedEFC, a novel FL framework that mitigates label noise through two complementary techniques: a prestopping mechanism that halts local training before clients overfit to mislabeled data, and a federated loss correction strategy tailored to non-IID data and sparse communication rounds. Grounded in the composite proper loss property, the paper shows that the FL optimization objective under noisy labels can be aligned with the clean label distribution. Extensive experiments under standard heterogeneous settings demonstrate that FedEFC consistently outperforms state-of-the-art methods, achieving up to a 41.64% relative accuracy improvement over an existing loss correction method.

📝 Abstract
Federated Learning (FL) is a powerful framework for privacy-preserving distributed learning. It enables multiple clients to collaboratively train a global model without sharing raw data. However, handling noisy labels in FL remains a major challenge due to heterogeneous data distributions and communication constraints, which can severely degrade model performance. To address this issue, we propose FedEFC, a novel method designed to tackle the impact of noisy labels in FL. FedEFC mitigates this issue through two key techniques: (1) prestopping, which prevents overfitting to mislabeled data by dynamically halting training at an optimal point, and (2) loss correction, which adjusts model updates to account for label noise. In particular, we develop an effective loss correction tailored to the unique challenges of FL, including data heterogeneity and decentralized training. Furthermore, we provide a theoretical analysis, leveraging the composite proper loss property, to demonstrate that the FL objective function under noisy label distributions can be aligned with the clean label distribution. Extensive experimental results validate the effectiveness of our approach, showing that it consistently outperforms existing FL techniques in mitigating the impact of noisy labels, particularly under heterogeneous data settings (e.g., achieving up to 41.64% relative performance improvement over the existing loss correction method).
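To make the loss-correction idea concrete, here is a minimal sketch of classic forward correction, the building block the paper enhances for FL. The transition-matrix values and class count are hypothetical toy choices, not taken from the paper: the model's predicted clean-class probabilities are pushed through a noise transition matrix T before computing cross-entropy against the observed (possibly noisy) label.

```python
import numpy as np

def forward_corrected_ce(probs, noisy_label, T):
    """Forward-corrected cross-entropy.

    probs       : model's softmax output over clean classes.
    noisy_label : the observed (possibly corrupted) label index.
    T           : noise transition matrix, T[i, j] = P(noisy = j | clean = i).
    """
    noisy_probs = T.T @ probs  # predicted distribution over *noisy* labels
    return -np.log(noisy_probs[noisy_label] + 1e-12)

# Toy example: 3 classes with 20% symmetric label noise (hypothetical values).
eps = 0.2
T = (1 - eps) * np.eye(3) + (eps / 2) * (np.ones((3, 3)) - np.eye(3))
probs = np.array([0.7, 0.2, 0.1])  # model believes class 0 is most likely
loss = forward_corrected_ce(probs, noisy_label=0, T=T)
```

In FedEFC's setting, the added difficulty is that T must be estimated per client under non-IID data and infrequent communication; the sketch above assumes T is known.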
Problem

Research questions and friction points this paper is trying to address.

Mitigating noisy labels in Federated Learning
Addressing data heterogeneity in decentralized training
Improving model performance with loss correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning with Enhanced Forward Correction
Prestopping to prevent overfitting to noisy labels
Loss correction for noisy label distributions