Measuring the Validity of Clustering Validation Datasets

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clustering evaluation commonly relies on labeled benchmark datasets, yet class labels may not reflect the underlying cluster structure, which makes validation misleading. This paper addresses the cluster-label matching (CLM) problem by proposing four axioms that require validation measures to be independent of data properties unrelated to cluster structure (e.g., dimensionality, dataset size). Methodologically, the authors develop standardized protocols that convert any internal validation measure (IVM) to satisfy these axioms, and apply them to six widely used IVMs, including Silhouette, yielding Adjusted IVMs whose scores are comparable across datasets. Experiments show that the Adjusted IVMs outperform both the original IVMs and state-of-the-art competitors in single- and multi-dataset CLM evaluation. The work provides theoretical foundations and practical tools for constructing reliable clustering benchmarks.

📝 Abstract
Clustering techniques are often validated using benchmark datasets where class labels are used as ground-truth clusters. However, depending on the dataset, class labels may not align with the actual data clusters, and such misalignment hampers accurate validation. It is therefore essential to evaluate and compare datasets regarding their cluster-label matching (CLM), i.e., how well their class labels match the actual clusters. Internal validation measures (IVMs), like Silhouette, can compare CLM across different labelings of the same dataset, but are not designed to do so across different datasets. We thus introduce Adjusted IVMs as fast and reliable methods to evaluate and compare CLM across datasets. We establish four axioms that require validation measures to be independent of data properties not related to cluster structure (e.g., dimensionality, dataset size). We then develop standardized protocols to convert any IVM to satisfy these axioms, and use these protocols to adjust six widely used IVMs. Quantitative experiments (1) verify the necessity and effectiveness of our protocols and (2) show that the adjusted IVMs outperform competitors, including standard IVMs, in accurately evaluating CLM both within and across datasets. We also show that datasets can be filtered or improved using our method to form more reliable benchmarks for clustering validation.
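The paper's actual adjustment protocols are defined in the full text; as a rough illustration of the underlying idea, comparing an IVM score against a chance baseline so that scores become comparable, here is a sketch in pure Python. The `adjusted_silhouette` function and its shuffled-label baseline are illustrative assumptions in the spirit of "adjusted" measures such as ARI, not the paper's protocol.

```python
import math
import random

def silhouette(points, labels):
    """Mean silhouette coefficient, computed from scratch (Euclidean)."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    def mean_dist(p, group):
        return sum(math.dist(p, q) for q in group) / len(group)

    scores = []
    for p, l in zip(points, labels):
        same = [q for q in clusters[l] if q is not p]
        if not same:
            continue  # singleton clusters contribute no score
        a = mean_dist(p, same)                     # cohesion
        b = min(mean_dist(p, clusters[m])          # separation
                for m in clusters if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def adjusted_silhouette(points, labels, n_perm=50, seed=0):
    """Hypothetical chance adjustment: (s - E[s_rand]) / (1 - E[s_rand]),
    estimating E[s_rand] by shuffling the labels (NOT the paper's method)."""
    rng = random.Random(seed)
    s = silhouette(points, labels)
    shuffled = list(labels)
    baseline = []
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        baseline.append(silhouette(points, shuffled))
    e = sum(baseline) / len(baseline)
    return (s - e) / (1 - e)

# Toy data: two well-separated clusters.
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
          (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
good_labels = [0, 0, 0, 1, 1, 1]  # labels match the clusters: high CLM
bad_labels = [0, 1, 0, 1, 0, 1]   # labels ignore the clusters: low CLM
```

On this toy data, the chance-adjusted score ranks `good_labels` far above `bad_labels`, which is the kind of CLM ranking the abstract describes.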
Problem

Research questions and friction points this paper is trying to address.

Evaluate cluster-label matching across datasets
Develop Adjusted IVMs for accurate CLM assessment
Filter datasets to improve clustering validation benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced Adjusted IVMs for cross-dataset CLM evaluation
Developed standardized protocols to adjust existing IVMs
Established four axioms for validation measure independence
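The independence axioms can be motivated with a quick sanity check: a raw IVM such as Silhouette changes with dimensionality even when the cluster-label structure does not. A minimal sketch in pure Python (toy data and a from-scratch silhouette; not the paper's code or adjustment protocol):

```python
import math
import random

def silhouette(points, labels):
    """Mean silhouette coefficient, computed from scratch (Euclidean)."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    def mean_dist(p, group):
        return sum(math.dist(p, q) for q in group) / len(group)

    scores = []
    for p, l in zip(points, labels):
        same = [q for q in clusters[l] if q is not p]
        a = mean_dist(p, same)
        b = min(mean_dist(p, clusters[m]) for m in clusters if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

rng = random.Random(0)
base = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
        (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
labels = [0, 0, 0, 1, 1, 1]

# Pad every point with 200 irrelevant noise dimensions; the cluster-label
# structure is untouched, yet the raw silhouette score shrinks.
noisy = [p + tuple(rng.random() for _ in range(200)) for p in base]

s_low = silhouette(base, labels)
s_high = silhouette(noisy, labels)
```

Since the raw score differs between the two embeddings of the same labeled structure, raw IVMs cannot be compared across datasets of different dimensionality, which is exactly what the axioms rule out.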