LocalKMeans: Convergence of Lloyd's Algorithm with Distributed Local Iterations

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the convergence of distributed K-means in a multi-machine setting with local data: machines run Lloyd's algorithm in parallel on their local data and synchronize centroids only every $L$ iterations. Addressing the non-convex, non-smooth, latent-variable nature of the clustering objective, the authors establish the first local-iteration convergence theory for LocalKMeans. Combining a virtual-iterate analysis, a tight statistical characterization of Lloyd steps, and a robustness analysis under Gaussian mixture models, they prove linear convergence to a neighborhood of the global optimum and precisely quantify the trade-off between local computation and the elevated signal-to-noise ratio threshold required for convergence. The main contribution is to make the impact of local iterations on clustering accuracy explicitly analyzable, providing a rigorous theoretical foundation for the communication–accuracy trade-off in distributed clustering.
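
To make the synchronization pattern concrete, below is a minimal single-process sketch of an $L$-local-step Lloyd scheme, simulating the machines in a loop. The uniform averaging of per-machine centroids at each synchronization, the initialization, and the helper names (`lloyd_step`, `local_kmeans`) are illustrative assumptions, not the paper's exact procedure or aggregation rule.

```python
# Sketch of the LocalKMeans communication pattern: each "machine" runs L local
# Lloyd steps on its own data shard, then centroids are averaged across machines.
# Uniform averaging and the initialization here are assumptions for illustration.
import numpy as np

def lloyd_step(X, centroids):
    """One Lloyd iteration on local data X (n x d) given current centroids (K x d)."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)  # n x K
    labels = dists.argmin(axis=1)
    new_centroids = centroids.copy()
    for k in range(centroids.shape[0]):
        pts = X[labels == k]
        if len(pts) > 0:                       # keep the old centroid if a cluster empties
            new_centroids[k] = pts.mean(axis=0)
    return new_centroids

def local_kmeans(shards, init_centroids, rounds, L):
    """Run `rounds` communication rounds; each round = L local Lloyd steps per machine."""
    centroids = init_centroids.copy()
    for _ in range(rounds):
        local = []
        for X in shards:                       # in a real system these run in parallel
            c = centroids.copy()
            for _ in range(L):
                c = lloyd_step(X, c)
            local.append(c)
        centroids = np.mean(local, axis=0)     # synchronize: average per-machine centroids
    return centroids

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = np.array([[-3.0, 0.0], [3.0, 0.0]])
    X = np.vstack([m + rng.normal(size=(200, 2)) for m in means])
    rng.shuffle(X)
    shards = np.array_split(X, 4)              # 4 machines holding local data
    init = X[rng.choice(len(X), size=2, replace=False)]
    print(local_kmeans(shards, init, rounds=5, L=3))
```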

📝 Abstract
In this paper, we analyze the classical $K$-means alternating-minimization algorithm, also known as Lloyd's algorithm (Lloyd, 1956), for a mixture of Gaussians in a data-distributed setting that incorporates local iteration steps. Assuming unlabeled data distributed across multiple machines, we propose an algorithm, LocalKMeans, that performs Lloyd's algorithm in parallel on the machines by running its iterations on local data, synchronizing only after every $L$ such local steps. We characterize the cost of these local iterations against the non-distributed setting, and show that the price paid for the local steps is a higher required signal-to-noise ratio. While local iterations have been theoretically studied in the past for gradient-based learning methods, the analysis of unsupervised learning methods is more involved than that of iterative gradient-based algorithms, owing to the presence of latent variables, e.g., cluster identities. To obtain our results, we adapt a virtual iterate method to work with a non-convex, non-smooth objective function, in conjunction with a tight statistical analysis of Lloyd steps.
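
For context, in the Gaussian-mixture analyses this line of work builds on, the signal-to-noise ratio is typically the minimum separation between cluster centers measured against the noise level; the notation below ($\theta_k$, $\Delta$, $\sigma$, $r$) is one common convention assumed here for illustration, not quoted from the paper:

$$
\Delta \;=\; \min_{j \neq k} \lVert \theta_j - \theta_k \rVert_2, \qquad r \;=\; \frac{\Delta}{\sigma},
$$

where $\theta_1,\dots,\theta_K$ are the mixture component means and $\sigma$ is the noise standard deviation. In these terms, the abstract's claim is that running $L$ local steps between synchronizations raises the threshold on $r$ needed for convergence, relative to the centralized algorithm.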
Problem

Research questions and friction points this paper is trying to address.

Analyzing convergence of distributed Lloyd's algorithm with local iterations
Studying trade-off between local steps and required signal-to-noise ratio
Adapting virtual iterate method for non-convex unsupervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed local iterations for Lloyd's algorithm
Synchronization every L steps in LocalKMeans
Virtual iterate method for non-convex analysis
Harsh Vardhan
PhD CSE, UC San Diego
Optimization, Learning Theory
Heng Zhu
Electrical and Computer Engineering, University of California, San Diego
Avishek Ghosh
Computer Science and Engineering, Indian Institute of Technology, Bombay
Arya Mazumdar
HDSI Endowed Chair Professor in AI, University of California, San Diego
Information Theory, Coding Theory, Learning Theory, Mathematical Statistics