Explainable cluster analysis: a bagging approach

📅 2026-03-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes an unsupervised clustering framework grounded in ensemble learning that enhances interpretability by identifying the key features driving cluster formation, addressing a limitation of traditional clustering methods. Inspired by feature importance in random forests, the approach generates diverse submodels through bootstrap sampling and random feature dropout. It quantifies the relevance between features and the emergent cluster labels using mutual information, and integrates clustering validity measures to weight and aggregate results across submodels. The framework simultaneously yields a consensus clustering assignment and a feature importance score. Evaluated on multiple synthetic and real-world datasets, the method demonstrates markedly improved interpretability, robustness, and stability, particularly under small-sample regimes and noisy conditions.
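
To make one ensemble round concrete, here is a minimal sketch under stated assumptions: k-means as the base clusterer, the silhouette score as the clustering-validity weight, and scikit-learn's `mutual_info_classif` for the feature-label mutual information. The function name `one_bagging_round` and all parameter defaults are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import silhouette_score

def one_bagging_round(X, n_clusters=3, dropout=0.3, rng=None):
    """One submodel: bootstrap rows, drop features, cluster, score features."""
    rng = np.random.default_rng(rng)
    n, p = X.shape

    # Bootstrap resampling of the observations.
    rows = rng.choice(n, size=n, replace=True)

    # Random feature dropout: keep a random subset of the columns.
    n_keep = max(1, int(round(p * (1 - dropout))))
    keep = rng.choice(p, size=n_keep, replace=False)
    Xb = X[np.ix_(rows, keep)]

    # Cluster the perturbed sample (k-means is an assumption here;
    # the framework is agnostic to the base clusterer).
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=int(rng.integers(2**31 - 1))).fit_predict(Xb)

    # Mutual information between each retained feature and the labels.
    mi = mutual_info_classif(Xb, labels, random_state=0)

    # Weight by a clustering-validity measure so that well-formed
    # partitions contribute more (silhouette, clipped at 0, is one choice).
    weight = max(silhouette_score(Xb, labels), 0.0)

    importance = np.zeros(p)
    importance[keep] = weight * mi
    return importance, rows, labels
```

Features dropped in a given round simply receive a zero score for that round, so the aggregation across rounds averages each feature's contribution over the submodels in which it actually appeared.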

πŸ“ Abstract
A major limitation of clustering approaches is their lack of explainability: methods rarely provide insight into which features drive the grouping of similar observations. To address this limitation, we propose an ensemble-based clustering framework that integrates bagging and feature dropout to generate feature importance scores, in analogy with feature importance mechanisms in supervised random forests. By leveraging multiple bootstrap resampling schemes and aggregating the resulting partitions, the method improves stability and robustness of the cluster definition, particularly in small-sample or noisy settings. Feature importance is assessed through an information-theoretic approach: at each step, the mutual information between each feature and the estimated cluster labels is computed and weighted by a measure of clustering validity to emphasize well-formed partitions, before being aggregated into a final score. The method outputs both a consensus partition and a corresponding measure of feature importance, enabling a unified interpretation of clustering structure and variable relevance. Its effectiveness is demonstrated on multiple simulated and real-world datasets.
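
The abstract leaves the exact aggregation scheme open; the sketch below assumes a co-association consensus, a standard ensemble-clustering device, and reuses `one_bagging_round` from the sketch above. The function `bagged_explainable_clustering` and its defaults are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def bagged_explainable_clustering(X, n_rounds=50, n_clusters=3, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    co_assoc = np.zeros((n, n))  # rounds in which two points co-cluster
    counts = np.zeros((n, n))    # rounds in which two points co-occur
    total_importance = np.zeros(p)

    for _ in range(n_rounds):
        importance, rows, labels = one_bagging_round(
            X, n_clusters=n_clusters, rng=rng)
        total_importance += importance

        # Map bootstrap positions back to unique original indices;
        # duplicated draws of a point receive the same k-means label.
        uniq, first = np.unique(rows, return_index=True)
        lab = labels[first]
        same = (lab[:, None] == lab[None, :]).astype(float)
        co_assoc[np.ix_(uniq, uniq)] += same
        counts[np.ix_(uniq, uniq)] += 1.0

    # Consensus partition: average-linkage clustering on the
    # co-association dissimilarity 1 - P(same cluster).
    with np.errstate(invalid="ignore"):
        dissim = 1.0 - np.where(counts > 0, co_assoc / counts, 0.0)
    np.fill_diagonal(dissim, 0.0)
    Z = linkage(squareform(dissim, checks=False), method="average")
    consensus = fcluster(Z, t=n_clusters, criterion="maxclust")

    # Final importance: mean of the validity-weighted per-round scores,
    # normalized to sum to one for readability.
    imp = total_importance / n_rounds
    if imp.sum() > 0:
        imp = imp / imp.sum()
    return consensus, imp
```

On a data matrix `X` of shape `(n, p)`, `consensus` is one integer cluster label per observation and `imp` a length-`p` importance vector, mirroring the paper's joint output of a consensus partition and a feature importance score.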
Problem

Research questions and friction points this paper is trying to address.

explainable clustering
feature importance
cluster interpretability
unsupervised learning
clustering explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

explainable clustering
bagging
feature importance
mutual information
ensemble clustering
Federico Maria Quetti
Department of Mathematics, University of Pavia, Pavia, 27100, Italy.
Elena Ballante
Department of Social and Political Sciences, University of Pavia, Pavia, 27100, Italy.
Silvia Figini
Department of Social and Political Sciences, University of Pavia, Pavia, 27100, Italy.
Paolo Giudici
Professor of Statistics, University of Pavia
Bayesian statistics, Graphical models, Risk management, Explainable AI, Safe AI