🤖 AI Summary
Federated learning (FL) faces two coupled challenges: high bidirectional communication overhead and severe client data heterogeneity. To address both, we propose pFed1BS, a personalized FL framework that uses one-bit randomized sketching to compress communication in both directions, together with a sign-based regularizer that enforces global consensus while preserving each client's capacity for local personalization. pFed1BS further employs the fast Hadamard transform to keep the cost of sketching low. We establish theoretical convergence guarantees, proving that pFed1BS converges to a stationary neighborhood of the global objective. Empirical evaluations show that pFed1BS reduces per-parameter communication cost to a single bit per update while achieving personalized accuracy competitive with advanced communication-efficient FL methods, jointly balancing communication efficiency, personalization, and computational tractability.
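The one-bit sketching step described above can be illustrated with a small NumPy sketch. This is an assumption-laden reconstruction, not the paper's implementation: it uses a subsampled randomized Hadamard transform (random sign flips, a fast Walsh–Hadamard transform, row subsampling) and keeps only the signs of the projected coordinates, so each transmitted coordinate costs one bit. The function names `fwht` and `one_bit_sketch` and the parameter `m` (sketch length) are illustrative choices.

```python
import numpy as np

def fwht(x):
    # Iterative fast Walsh-Hadamard transform; len(x) must be a power of two.
    y = x.copy()
    n = len(y)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return y

def one_bit_sketch(w, rng, m):
    # Randomized Hadamard projection of the parameter vector w,
    # followed by subsampling m rows and 1-bit (sign) quantization.
    n = len(w)
    d = rng.choice([-1.0, 1.0], size=n)          # random sign flips
    z = fwht(d * w) / np.sqrt(n)                 # normalized Hadamard transform
    idx = rng.choice(n, size=m, replace=False)   # keep m random coordinates
    return np.sign(z[idx]), idx
```

Because only signs are kept, the sketch is invariant to positive rescaling of `w`; the server can aggregate these one-bit vectors (e.g., by majority vote over clients) and broadcast the resulting one-bit consensus at the same cost.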
📝 Abstract
Federated Learning (FL) enables collaborative training across decentralized data, but faces the key challenges of bidirectional communication overhead and client-side data heterogeneity. In personalized FL, the goal shifts from training a single global model to creating a tailored model for each client. To address communication costs while embracing data heterogeneity, we propose pFed1BS, a novel personalized FL framework that achieves extreme communication compression through one-bit random sketching. In our framework, clients transmit highly compressed one-bit sketches, and the server aggregates them and broadcasts a global one-bit consensus. To enable effective personalization, we introduce a sign-based regularizer that guides local models to align with the global consensus while preserving local data characteristics. To mitigate the computational burden of random sketching, we employ the Fast Hadamard Transform for efficient projection. Theoretical analysis guarantees that our algorithm converges to a stationary neighborhood of the global potential function. Numerical simulations demonstrate that pFed1BS substantially reduces communication costs while achieving competitive performance compared with advanced communication-efficient FL algorithms.
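The sign-based regularizer can be sketched as follows. The abstract does not give its exact form, so this is a hedged illustration under one plausible choice: a hinge-style surrogate that penalizes sketch coordinates whose sign disagrees with the broadcast one-bit consensus, added to the client's local loss. The names `sign_alignment_penalty` and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def sign_alignment_penalty(z, consensus, lam=0.1):
    # z: the client's real-valued sketch coordinates (before quantization)
    # consensus: the server-broadcast one-bit consensus, entries in {-1, +1}
    # Hinge-style surrogate: each coordinate contributes only when its sign
    # disagrees with the consensus, nudging local models toward alignment
    # without forcing them to match exactly (local personalization survives).
    return lam * np.mean(np.maximum(0.0, -consensus * z))

def personalized_objective(local_loss, z, consensus, lam=0.1):
    # Illustrative local objective: task loss plus the alignment penalty.
    return local_loss + sign_alignment_penalty(z, consensus, lam)
```

A client would minimize `personalized_objective` on its own data; the penalty vanishes wherever the local sketch already agrees in sign with the consensus, which is one way to reconcile global agreement with client-specific fitting.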