Debunking Optimization Myths in Federated Learning for Medical Image Classification

📅 2025-07-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) for medical image classification often overlooks the decisive impact of local optimization configurations—such as optimizer choice, learning rate, and number of local training epochs—on model performance and deployment robustness, especially on resource-constrained edge devices. Method: We systematically evaluate canonical FL frameworks on colorectal histopathology and blood cell classification tasks, conducting extensive benchmarking across multiple optimizer–learning rate combinations and varying local epoch counts. Contribution/Results: Our analysis reveals that local hyperparameter configuration exerts a stronger influence on convergence and accuracy than algorithmic complexity alone. Crucially, the number of local epochs exhibits a dual effect: moderate increases improve convergence, whereas excessive values degrade global model performance due to client drift and overfitting. By quantifying these effects, we establish the critical role of edge-side optimization tuning and provide a reproducible, empirically grounded guideline—including principled hyperparameter selection protocols—for robust FL deployment in low-resource clinical settings.
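The setup described above can be sketched as a vanilla FedAvg loop in which the edge-side knobs (local optimizer step size and local epoch count) are explicit parameters. This is a minimal illustrative sketch on a toy linear-regression task, not the paper's code; all names (`make_client_data`, `local_train`, `fedavg`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=64):
    # Toy client dataset: linear targets with small noise.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client_data() for _ in range(4)]

def local_train(w, X, y, lr, epochs):
    # Edge-side optimization: full-batch gradient descent on one client.
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg(rounds=20, lr=0.05, local_epochs=5):
    # Server loop: broadcast weights, train locally, average the results.
    w = np.zeros(2)
    for _ in range(rounds):
        local_ws = [local_train(w, X, y, lr, local_epochs) for X, y in clients]
        w = np.mean(local_ws, axis=0)  # server-side weight averaging
    return w

w_global = fedavg()
```

Sweeping `lr` and `local_epochs` in this loop is the shape of the benchmarking protocol the summary describes: the aggregation rule stays fixed while the edge-side configuration varies.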

📝 Abstract
Federated Learning (FL) is a collaborative learning method that enables decentralized model training while preserving data privacy. Despite its promise in medical imaging, recent FL methods are often sensitive to local factors such as optimizers and learning rates, limiting their robustness in practical deployments. In this work, we revisit vanilla FL to clarify the impact of edge-device configurations, benchmarking recent FL methods on colorectal pathology and blood cell classification tasks. We numerically show that the choice of local optimizer and learning rate has a greater effect on performance than the specific FL method. Moreover, we find that increasing local training epochs can either enhance or impair convergence, depending on the FL method. These findings indicate that appropriate edge-specific configuration is more crucial than algorithmic complexity for achieving effective FL.
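The dual effect of local epochs can be reproduced in a toy construction (an illustration of client drift, not the paper's experiment): with heterogeneous clients, a single local epoch per round tracks the global optimum, while excessive local epochs let each client converge to its own optimum, so the averaged model lands between them and the global loss degrades. All quantities below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
# Two heterogeneous clients: different input scalings and different optima.
X1 = rng.normal(size=(n, 2)) * np.array([3.0, 1.0])
y1 = X1 @ np.array([1.0, 0.0])
X2 = rng.normal(size=(n, 2)) * np.array([1.0, 3.0])
y2 = X2 @ np.array([0.0, 1.0])
clients = [(X1, y1), (X2, y2)]

def local_train(w, X, y, lr, epochs):
    # Full-batch gradient descent on one client's MSE loss.
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg(rounds, lr, local_epochs):
    w = np.zeros(2)
    for _ in range(rounds):
        w = np.mean([local_train(w, X, y, lr, local_epochs)
                     for X, y in clients], axis=0)
    return w

def global_loss(w):
    # Average MSE over both clients' data.
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])

# Same step size; only the local epoch count differs.
loss_few = global_loss(fedavg(rounds=300, lr=0.02, local_epochs=1))
loss_many = global_loss(fedavg(rounds=20, lr=0.02, local_epochs=200))
```

Here `loss_few` stays near the global optimum's loss, while `loss_many` is substantially worse because each round's local runs overfit their own client before averaging.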
Problem

Research questions and friction points this paper is trying to address.

Impact of local optimizers and learning rates on FL performance
Effect of local training epochs on FL convergence variability
Whether edge-specific configuration outweighs algorithmic complexity for FL effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Revisiting vanilla FL for edge device impact
Benchmarking FL methods on medical tasks
Optimizing local configurations over complex algorithms