🤖 AI Summary
In heterogeneous federated learning (HFL), collaborative training efficiency suffers from large disparities in client device capabilities, non-independent and identically distributed (non-IID) data, and architectural heterogeneity across local models. To address these challenges, this paper proposes a dual-path knowledge distillation framework that jointly models heterogeneity in both the feature and logit spaces, introducing a bidirectional distillation mechanism that co-optimizes local feature representations and output logits. The method integrates feature alignment, logit calibration, and heterogeneity-aware training, enabling cross-device model collaboration while preserving data locality and privacy. Evaluated on multiple HFL benchmarks, the approach achieves average accuracy improvements of 3.2–5.8%, accelerates convergence by 40%, and reduces communication overhead by 22%.
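The summary does not include the paper's exact formulation, but the described two-term objective (feature alignment plus logit distillation) maps naturally onto a standard dual-path loss. Below is a minimal PyTorch sketch of what such a loss could look like; the class name, the linear projection head, and the temperature/weighting values are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F


class DualPathDistillLoss(nn.Module):
    """Hypothetical dual-path (feature + logit) distillation loss.

    Assumptions for illustration only: a linear projection bridges
    heterogeneous feature dimensions, and the two paths are combined
    with a fixed weight `alpha`. The paper's formulation may differ.
    """

    def __init__(self, student_dim: int, teacher_dim: int,
                 temperature: float = 4.0, alpha: float = 0.5):
        super().__init__()
        # Projection to align features across heterogeneous architectures.
        self.proj = nn.Linear(student_dim, teacher_dim)
        self.temperature = temperature
        self.alpha = alpha  # balance between feature and logit paths

    def forward(self, s_feat, s_logits, t_feat, t_logits):
        # Feature path: match projected student features to teacher features.
        feat_loss = F.mse_loss(self.proj(s_feat), t_feat)
        # Logit path: temperature-scaled KL divergence (standard KD term).
        T = self.temperature
        logit_loss = F.kl_div(
            F.log_softmax(s_logits / T, dim=-1),
            F.softmax(t_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        return self.alpha * feat_loss + (1.0 - self.alpha) * logit_loss
```

In a bidirectional setup, each client could apply this loss in both directions (client-to-server and server-to-client) so that feature representations and logits are co-optimized rather than distilled one way only; again, the exact mechanism here is the summary's description, not a verified implementation.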