🤖 AI Summary
Commonly used external clustering evaluation measures (e.g., normalised mutual information, the Fowlkes–Mallows index, the adjusted Rand index) lack monotonicity, fail to identify worst-case scenarios correctly, are hard to interpret, and rest on adjustment-for-chance assumptions, all of which hinders reliable algorithm comparison. To address these limitations, we propose Normalised Clustering Accuracy (NCA), an asymmetric external validity measure grounded in optimal set matching. NCA is simultaneously normalised, monotonic with respect to a similarity relation, scale-invariant, and corrected for imbalanced cluster sizes, while being neither symmetric nor adjusted for chance. We provide a theoretical analysis establishing these properties. Empirical evaluation on benchmark datasets suggests that NCA is more sensitive to low-quality clusterings and yields more consistent algorithm rankings than conventional measures. NCA thus offers a more robust, interpretable, and theoretically grounded standard for external clustering evaluation when ground-truth labels are available.
📝 Abstract
There is no, nor will there ever be, a single best clustering algorithm. Nevertheless, we would still like to be able to distinguish between methods that work well on certain task types and those that systematically underperform. Clustering algorithms are traditionally evaluated using either internal or external validity measures. Internal measures quantify different aspects of the obtained partitions, e.g., the average degree of cluster compactness or point separability. However, their validity is questionable because the clusterings they endorse can sometimes be meaningless. External measures, on the other hand, compare the algorithms’ outputs to fixed ground truth groupings provided by experts. In this paper, we argue that the commonly used classical partition similarity scores, such as the normalised mutual information, the Fowlkes–Mallows index, or the adjusted Rand index, miss some desirable properties. In particular, they do not identify worst-case scenarios correctly, nor are they easily interpretable. As a consequence, the evaluation of clustering algorithms on diverse benchmark datasets can be difficult. To remedy these issues, we propose and analyse a new measure: a version of the optimal set-matching accuracy, which is normalised, monotonic with respect to some similarity relation, scale-invariant, and corrected for the imbalancedness of cluster sizes (but neither symmetric nor adjusted for chance).
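The abstract does not spell out the formula, so the following is only a minimal Python sketch of the ingredients it lists: an optimal one-to-one matching of clusters to ground-truth groups, a per-group normalisation that corrects for imbalanced cluster sizes, and a rescaling so that an uninformative result maps to zero. The function name, the confusion-matrix orientation, and the exact 1/k rescaling are assumptions made here for illustration, not necessarily the paper's definition; SciPy's `linear_sum_assignment` supplies the optimal matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def normalised_clustering_accuracy(y_true, y_pred):
    """Illustrative set-matching accuracy (assumed form, not the paper's).

    y_true, y_pred: 1-D label vectors of equal length.
    Returns 1.0 for a perfect match and 0.0 for the trivial baseline
    in which each true group is recovered at rate 1/k.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    true_ids = np.unique(y_true)   # k ground-truth groups
    pred_ids = np.unique(y_pred)   # clusters found by the algorithm
    k = true_ids.size

    # Confusion matrix: rows = predicted clusters, columns = true groups.
    C = np.array([[np.sum((y_pred == p) & (y_true == t))
                   for t in true_ids] for p in pred_ids], dtype=float)

    # Dividing each column by the true group's size corrects for
    # imbalanced cluster sizes: every group contributes equally.
    R = C / C.sum(axis=0, keepdims=True)

    # Optimal one-to-one matching of predicted clusters to true groups
    # (Hungarian algorithm): the "optimal set matching" step.
    rows, cols = linear_sum_assignment(R, maximize=True)
    avg_recall = R[rows, cols].sum() / k

    # Rescale so the uninformative value 1/k maps to 0 (normalisation).
    return (avg_recall - 1.0 / k) / (1.0 - 1.0 / k)
```

For instance, `normalised_clustering_accuracy([0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 2])` returns 1.0, since the predicted clusters coincide with the true groups up to a relabelling. Note also that, under these assumptions, swapping the two arguments can change the score, because the normalisation uses only the true group sizes, which is consistent with the abstract's remark that the measure is not symmetric.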