Federated Learning With L0 Constraint Via Probabilistic Gates For Sparsity

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) suffers from model redundancy, poor generalization, and high communication overhead under data and client heterogeneity; existing methods struggle to jointly achieve controllable sparsity and strong statistical performance. This paper proposes the first L₀-norm-constrained federated sparse learning framework: it enables explicit, differentiable control over model density via probabilistic gate reparameterization and Gumbel-Softmax continuous relaxation; and it introduces, for the first time in FL, entropy-maximization-derived L₀ regularization, supporting arbitrary target sparsity levels—including as low as 0.5%. Evaluated on RCV1, MNIST, and EMNIST, the method significantly outperforms pruning-based baselines under ultra-sparse conditions (ρ = 0.005), while maintaining high accuracy and stable convergence for both linear and nonlinear models. It thus achieves a favorable trade-off between communication efficiency and generalization capability.
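The summary's "probabilistic gate reparameterization" can be illustrated with a minimal NumPy sketch of a hard-concrete-style gate. This is an assumption-laden illustration of the general technique, not the paper's code: the constants `BETA`, `GAMMA`, `ZETA` and the function names are ours.

```python
import numpy as np

# Illustrative hard-concrete gate constants (common defaults; assumed, not from the paper).
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_gates(log_alpha, rng):
    """Sample a relaxed binary gate z in [0, 1] per parameter via logistic noise."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = sigmoid((np.log(u) - np.log(1.0 - u) + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA      # stretch to (GAMMA, ZETA)
    return np.clip(s_bar, 0.0, 1.0)        # hard clip yields exact zeros/ones

def expected_density(log_alpha):
    """Closed-form P(z != 0) averaged over gates: a differentiable L0 surrogate."""
    return sigmoid(log_alpha - BETA * np.log(-GAMMA / ZETA)).mean()

rng = np.random.default_rng(0)
log_alpha = rng.normal(0.0, 0.01, size=1000)  # one gate logit per model parameter
z = sample_gates(log_alpha, rng)
rho_hat = expected_density(log_alpha)
```

Because `expected_density` is differentiable in `log_alpha`, gradient descent can push the model toward a chosen target density rather than relying on post-hoc magnitude thresholds.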

📝 Abstract
Federated Learning (FL) is a distributed machine learning setting that requires multiple clients to collaborate on training a model while maintaining data privacy. The unaddressed inherent sparsity in data and models often results in overly dense models and poor generalizability under data and client participation heterogeneity. We propose FL with an L0 constraint on the density of non-zero parameters, achieved through a reparameterization using probabilistic gates and their continuous relaxation, originally proposed for sparsity in centralized machine learning. We show that the objective for L0-constrained stochastic minimization naturally arises from an entropy maximization problem over the stochastic gates, and we propose an algorithm based on federated stochastic gradient descent for distributed learning. We demonstrate that the target density ρ of parameters can be achieved in FL, under data and client participation heterogeneity, with minimal loss in statistical performance for linear and non-linear models: Linear regression (LR), Logistic regression (LG), Softmax multi-class classification (MC), Multi-label classification with logistic units (MLC), and a Convolutional Neural Network (CNN) for multi-class classification (MC). We compare the results with a magnitude-pruning-based thresholding algorithm for sparsity in FL. Experiments on synthetic data with target density down to ρ = 0.05 and on the publicly available RCV1, MNIST, and EMNIST datasets with target density down to ρ = 0.005 demonstrate that our approach is communication-efficient and consistently better in statistical performance.
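The abstract's L0-constrained stochastic objective can be sketched as follows; the notation is our assumption (weights $\theta$, gate distribution $q_\phi$, model dimension $d$), chosen only to make the constraint explicit:

```latex
\min_{\theta,\,\phi}\;
\mathbb{E}_{z \sim q_\phi}\!\left[\mathcal{L}\!\left(\theta \odot z\right)\right]
\quad \text{s.t.} \quad
\frac{1}{d}\,\mathbb{E}_{z \sim q_\phi}\!\left[\lVert z \rVert_0\right] \le \rho ,
```

where $z$ are the stochastic gates, $\odot$ is elementwise masking, and $\rho$ is the target density; per the abstract, the corresponding regularized objective arises from an entropy maximization problem over the gates.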
Problem

Research questions and friction points this paper is trying to address.

Addresses sparsity and generalizability in federated learning models
Implements L0 constraint via probabilistic gates for parameter density control
Ensures communication efficiency and statistical performance across heterogeneous data
Innovation

Methods, ideas, or system contributions that make the work stand out.

L0 constraint via probabilistic gates for sparsity
Federated stochastic gradient descent algorithm for distributed learning
Achieves target parameter density with minimal performance loss
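The federated-SGD-with-gates idea above can be sketched as one simulated aggregation round in which each client applies its sampled gate mask and ships only the surviving coordinates. This is a hedged toy sketch under our own assumptions (Bernoulli stand-in masks, plain coordinate-wise averaging), not the paper's exact algorithm.

```python
import numpy as np

def client_update(w, mask, grad, lr=0.1):
    """One local SGD step on gated weights; masked-out coordinates stay exactly zero."""
    return (w - lr * grad) * mask

def server_aggregate(updates):
    """Coordinate-wise mean of client updates (FedAvg-style; assumed for illustration)."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
d, n_clients, rho = 100, 5, 0.05          # target density rho = 0.05, as in the experiments
w_global = rng.normal(size=d)

updates = []
for _ in range(n_clients):
    # Bernoulli(rho) mask as a stand-in for a sampled probabilistic gate.
    mask = (rng.uniform(size=d) < rho).astype(float)
    grad = rng.normal(size=d)             # placeholder local gradient
    updates.append(client_update(w_global, mask, grad))
    # Uplink cost scales with the mask's non-zeros, not with d.

w_global = server_aggregate(updates)
```

The communication saving is the point: at density ρ each client transmits roughly ρ·d values per round instead of d, which is where the claimed efficiency at ρ = 0.005 comes from.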