🤖 AI Summary
Existing analytic federated learning (AFL) frameworks struggle to support deep neural network (DNN) training and neglect client data heterogeneity, so a single global model generalizes poorly across clients. To address both problems, this work proposes FedACnnL, an AFL framework built on ACnnL, a layer-wise analytic local learning method that trains each layer with a single closed-form least-squares solution, and extends it to pFedACnnL, an analytic personalized federated meta-learning framework combining personalized meta-initialization, data-distribution-aware client clustering, and a shared per-cluster global model that clients adapt to local tasks analytically. FedACnnL is theoretically shown to require far less training time than conventional zeroth-order (gradient-free) FL frameworks, with a 98% empirical reduction in DNN training time, while pFedACnnL attains state-of-the-art accuracy in most tasks under both convex and non-convex settings.
📝 Abstract
Analytic federated learning (AFL), which updates model weights only once using closed-form least-squares (LS) solutions, can substantially reduce training time in gradient-free federated learning (FL). The current AFL framework cannot support deep neural network (DNN) training, which hinders its application to complex machine learning tasks. It also overlooks heterogeneous data distributions across clients, which prevent a single global model from performing well on every client's task. To overcome the first challenge, we propose an AFL framework, namely FedACnnL, in which we resort to a novel local analytic learning method (ACnnL) and model the training of each layer as a distributed LS problem. For the second challenge, we propose an analytic personalized federated meta-learning framework, namely pFedACnnL, which is inherited from FedACnnL. In pFedACnnL, clients with similar data distributions share a common robust global model that they can rapidly adapt to their local tasks in an analytic manner. FedACnnL is theoretically proven to require significantly shorter training time than conventional zeroth-order (i.e., gradient-free) FL frameworks on DNN training, with a $98\%$ reduction observed in our experiments. Meanwhile, pFedACnnL achieves state-of-the-art (SOTA) model performance in most cases under both convex and non-convex settings, compared with previous SOTA frameworks.
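To make the "training of each layer as a distributed LS problem" concrete, the sketch below shows the standard way a federated least-squares problem admits a single closed-form update: each client sends only the sufficient statistics of its local LS problem, and the server aggregates them and solves a regularized normal equation. This is a minimal illustration of the distributed closed-form LS idea, not the paper's exact FedACnnL algorithm; the function names and the ridge regularizer `reg` are assumptions made for the example.

```python
import numpy as np

def client_statistics(X, Y):
    """Each client computes the sufficient statistics of its local
    least-squares problem; only these matrices (not raw data) are shared."""
    return X.T @ X, X.T @ Y

def server_solve(stats, reg=1e-3):
    """Server aggregates per-client statistics and solves the regularized
    LS problem in one shot: W = (sum_k X_k^T X_k + reg*I)^{-1} sum_k X_k^T Y_k."""
    A = sum(s[0] for s in stats)
    B = sum(s[1] for s in stats)
    d = A.shape[0]
    return np.linalg.solve(A + reg * np.eye(d), B)

# Toy run: 3 clients, 64 samples each, 10 input features, 4 targets.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(64, 10)), rng.normal(size=(64, 4)))
           for _ in range(3)]
W = server_solve([client_statistics(X, Y) for X, Y in clients])
print(W.shape)  # (10, 4): one closed-form weight update, no gradient loop
```

Because each layer's update reduces to one such aggregation-and-solve step rather than many gradient rounds, this is the source of the training-time reduction the abstract reports over zeroth-order FL.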