Online Correlation Clustering: Simultaneously Optimizing All $\ell_p$-norms

πŸ“… 2025-10-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of simultaneously optimizing all $\ell_p$-norms ($p \geq 1$) in online correlation clustering, a problem for which unified approximation algorithms exist offline but only norm-specific online algorithms are known. We propose the first single algorithm for the online-with-a-sample (AOS) model that achieves unified competitive ratios across all $\ell_p$-norms: $O(\log^4 n)$ with high probability for $p < \infty$, $O(\log n)$ for $p = \infty$, and $O(1)$ in expectation for $\ell_1$. These guarantees are shown to be near-optimal via matching lower bounds. Our results uncover an inherent trade-off among multiple objectives in the random-order model and advance the theory of online clustering beyond worst-case analysis.

πŸ“ Abstract
The $\ell_p$-norm objectives for correlation clustering present a fundamental trade-off between minimizing total disagreements (the $\ell_1$-norm) and ensuring fairness to individual nodes (the $\ell_\infty$-norm). Surprisingly, in the offline setting it is possible to simultaneously approximate all $\ell_p$-norms with a single clustering. Can this powerful guarantee be achieved in an online setting? This paper provides the first affirmative answer. We present a single algorithm for the online-with-a-sample (AOS) model that, given a small constant fraction of the input as a sample, produces one clustering that is simultaneously $O(\log^4 n)$-competitive for all $\ell_p$-norms with high probability, $O(\log n)$-competitive for the $\ell_\infty$-norm with high probability, and $O(1)$-competitive for the $\ell_1$-norm in expectation. This work successfully translates the offline "all-norms" guarantee to the online world. Our setting is motivated by a new hardness result that demonstrates a fundamental separation between these objectives in the standard random-order (RO) online model. Namely, while the $\ell_1$-norm is trivially $O(1)$-approximable in the RO model, we prove that any algorithm in the RO model for the fairness-promoting $\ell_\infty$-norm must have a competitive ratio of at least $\Omega(n^{1/3})$. This highlights the necessity of a different beyond-worst-case model. We complement our algorithm with lower bounds, showing our competitive ratios for the $\ell_1$- and $\ell_\infty$-norms are nearly tight in the AOS model.
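As a concrete illustration of the objectives in the abstract, the sketch below computes a clustering's per-node disagreement vector and its $\ell_p$-norms; the function names and the NumPy formulation are our own illustrative choices, not from the paper:

```python
import numpy as np

def disagreement_vector(adj, labels):
    """Per-node disagreements of a clustering.

    adj[i][j] is +1 for a "similar" edge and -1 for a "dissimilar" edge;
    labels[i] is node i's cluster id. A disagreement is a + edge cut by
    the clustering or a - edge kept inside a cluster; each one charges
    both endpoints, so the vector's l1-norm is twice the edge count.
    """
    n = len(labels)
    disagreements = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            same = labels[i] == labels[j]
            if (adj[i][j] == 1 and not same) or (adj[i][j] == -1 and same):
                disagreements[i] += 1
                disagreements[j] += 1
    return disagreements

def lp_objective(disagreements, p):
    """lp-norm of the disagreement vector; p = inf is the fairness (max) norm."""
    if p == float("inf"):
        return float(np.max(disagreements))
    return float(np.sum(disagreements ** p) ** (1.0 / p))
```

For example, putting three nodes with edges +(0,1), +(0,2), -(1,2) into one cluster violates only the - edge, giving the vector (0, 1, 1): the $\ell_1$ objective is 2 while the $\ell_\infty$ objective is 1, which is exactly the total-cost vs. per-node-fairness gap the paper studies.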
Problem

Research questions and friction points this paper is trying to address.

Simultaneously approximating all ℓp-norms in online correlation clustering
Bridging the offline all-norms guarantee to online computational settings
Overcoming the fundamental separation between the ℓ1 and ℓ∞ objectives online
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online algorithm simultaneously optimizes all correlation clustering norms
Uses small input sample to achieve competitive clustering guarantees
Translates offline all-norms approximation to online setting
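The online-with-a-sample workflow named in these bullets can be illustrated with a toy pipeline. Everything below (the greedy pivot rule, the `sample_fraction` default) is a hypothetical placeholder meant only to show the model's shape, a sample revealed upfront followed by irrevocable online decisions; it is not the paper's algorithm:

```python
import random

def aos_clustering(nodes, adj, sample_fraction=0.1, seed=0):
    """Illustrative AOS-model skeleton (not the paper's algorithm).

    A constant fraction of nodes arrives upfront as a sample; the
    remaining nodes arrive one at a time in random order and must be
    assigned to a cluster irrevocably on arrival.
    """
    rng = random.Random(seed)
    nodes = list(nodes)
    rng.shuffle(nodes)  # random-order arrival
    k = max(1, int(sample_fraction * len(nodes)))
    sample, online = nodes[:k], nodes[k:]

    # Offline phase: pick pivots greedily among the sample (placeholder rule).
    labels, pivots = {}, []
    for v in sample:
        home = next((p for p in pivots if adj[v][p] == 1), None)
        if home is None:
            pivots.append(v)
            labels[v] = v
        else:
            labels[v] = home

    # Online phase: each arriving node joins the first agreeing pivot,
    # or opens a singleton cluster; decisions are never revisited.
    for v in online:
        home = next((p for p in pivots if adj[v][p] == 1), None)
        labels[v] = home if home is not None else v
    return labels
```

The point of the sketch is the information structure, not the clustering rule: the sample lets the offline phase fix a stable set of pivots that the online phase can consult, which is what the RO-model lower bound for the ℓ∞-norm rules out without a sample.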
Sami Davies
Department of EECS at UC Berkeley and RelationalAI
Benjamin Moseley
Carnegie Mellon University
Algorithms, Machine Learning, Discrete Optimization
Heather Newman
Department of Computer Science, Vassar College