Explainable Unsupervised Anomaly Detection with Random Forest

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low detection accuracy, poor interpretability, and sensitivity to missing values and data preprocessing in unsupervised anomaly detection, this paper proposes an Unsupervised Random Forest (URF) framework. URF learns an anisotropic distance metric by training a forest to distinguish real samples from synthetic samples drawn uniformly over the real data bounds; this transformation expands distances near the boundary of the data manifold and sharpens anomaly identification there. The approach uses random forests for unsupervised similarity modeling and localized interpretability: it natively handles missing values, requires no feature standardization or other preprocessing, and provides feature-level attribution explanations via tree-path analysis. Experiments on a large number of benchmark datasets show that URF outperforms other commonly used unsupervised detectors while improving robustness to data perturbations and enabling visualization-based interpretability.

📝 Abstract
We describe the use of an unsupervised Random Forest for similarity learning and improved unsupervised anomaly detection. By training a Random Forest to discriminate between real data and synthetic data sampled from a uniform distribution over the real data bounds, a distance measure is obtained that anisometrically transforms the data, expanding distances at the boundary of the data manifold. We show that using distances recovered from this transformation improves the accuracy of unsupervised anomaly detection, compared to other commonly used detectors, demonstrated over a large number of benchmark datasets. As well as improved performance, this method has advantages over other unsupervised anomaly detection methods, including minimal requirements for data preprocessing, native handling of missing data, and potential for visualizations. By relating outlier scores to partitions of the Random Forest, we develop a method for locally explainable anomaly predictions in terms of feature importance.
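The abstract's core recipe (discriminate real data from uniform synthetic data, recover a forest-proximity distance, score outliers with it) can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact method: the neighborhood size `k`, the scoring rule, and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative "real" data: two dense clusters plus one obvious outlier.
real = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 2)),
    rng.normal(6.0, 1.0, size=(200, 2)),
    [[3.0, 12.0]],  # outlier (last row)
])

# Synthetic contrast class: uniform samples over the real data's bounds.
lo, hi = real.min(axis=0), real.max(axis=0)
synthetic = rng.uniform(lo, hi, size=real.shape)

# Train a forest to discriminate real (1) from synthetic (0).
X = np.vstack([real, synthetic])
y = np.concatenate([np.ones(len(real)), np.zeros(len(synthetic))])
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Forest proximity: fraction of trees where two samples share a leaf.
leaves = forest.apply(real)  # shape (n_samples, n_trees)

def anomaly_score(i, k=10):
    """Average dissimilarity (1 - proximity) to the k most similar samples.
    The choice of k and this scoring rule are assumptions for illustration."""
    prox = np.mean(leaves == leaves[i], axis=1)
    prox[i] = -np.inf  # exclude self-proximity
    return 1.0 - np.sort(prox)[-k:].mean()
```

With this sketch, the isolated point scores higher than points inside the dense clusters, because it rarely shares leaves with other real samples.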
Problem

Research questions and friction points this paper is trying to address.

Improving unsupervised anomaly detection accuracy
Developing explainable anomaly predictions via feature importance
Handling missing data without extensive preprocessing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised Random Forest for similarity learning
Distance measure from synthetic data discrimination
Explainable anomaly predictions via feature importance
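The paper derives local explanations by relating outlier scores to the forest's partitions. One plausible sketch of tree-path analysis, assuming (as an illustration only, not the paper's stated rule) that a feature's attribution is how often it is used to split along a sample's root-to-leaf paths:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Real data with a nearly constant third feature; uniform synthetic contrast.
real = rng.normal(size=(300, 3))
real[:, 2] *= 0.1
lo, hi = real.min(axis=0), real.max(axis=0)
synthetic = rng.uniform(lo, hi, size=real.shape)
X = np.vstack([real, synthetic])
y = np.concatenate([np.ones(300), np.zeros(300)])
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def feature_attribution(x):
    """Fraction of splits per feature on x's root-to-leaf path in each tree
    (a hypothetical attribution rule, for illustration)."""
    counts = np.zeros(x.shape[0])
    for tree in forest.estimators_:
        t = tree.tree_
        node = 0
        while t.children_left[node] != -1:  # descend until a leaf
            f = t.feature[node]
            counts[f] += 1
            node = (t.children_left[node] if x[f] <= t.threshold[node]
                    else t.children_right[node])
    return counts / counts.sum()
```

The result is a per-sample, feature-level attribution vector that sums to one, which can be read as "which features the forest consulted to place this point".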