🤖 AI Summary
To address low detection accuracy, poor interpretability, and sensitivity to missing values and data preprocessing in unsupervised anomaly detection, this paper proposes an Unsupervised Random Forest (URF) framework. URF learns an anisotropic distance measure by training a forest to discriminate real samples from synthetic samples drawn uniformly over the bounds of the real data; the resulting transformation expands distances near the boundary of the data manifold, sharpening the identification of anomalies there. The framework handles missing values natively, requires no feature standardization or other preprocessing, and provides feature-level attribution explanations through analysis of tree paths. Experiments over a large number of benchmark datasets show that URF consistently outperforms commonly used unsupervised detectors while also offering robustness to data perturbations and visualization-friendly interpretability.
📝 Abstract
We describe the use of an unsupervised Random Forest for similarity learning and improved unsupervised anomaly detection. By training a Random Forest to discriminate between real data and synthetic data sampled from a uniform distribution over the real data bounds, we obtain a distance measure that anisometrically transforms the data, expanding distances at the boundary of the data manifold. We show, over a large number of benchmark datasets, that distances recovered from this transformation improve the accuracy of unsupervised anomaly detection relative to other commonly used detectors. Beyond improved performance, this method has advantages over other unsupervised anomaly detection methods, including minimal data preprocessing requirements, native handling of missing data, and potential for visualizations. By relating outlier scores to partitions of the Random Forest, we develop a method for locally explainable anomaly predictions in terms of feature importance.
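The core construction described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses scikit-learn, a Breiman-style leaf co-occurrence proximity (distance = 1 − fraction of trees in which two points share a leaf), and a simple mean-distance-to-k-nearest-neighbours outlier score; the paper's exact proximity and scoring rules may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Real data: a tight Gaussian cluster plus a few injected outliers (last 5 rows).
X_real = rng.normal(0.0, 1.0, size=(300, 2))
X_out = rng.uniform(-6.0, 6.0, size=(5, 2))
X = np.vstack([X_real, X_out])

# Synthetic contrast data: uniform samples over the real data's bounding box.
X_synth = rng.uniform(X.min(axis=0), X.max(axis=0), size=X.shape)

# Train a forest to discriminate real (label 1) from synthetic (label 0).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.vstack([X, X_synth]),
        np.r_[np.ones(len(X)), np.zeros(len(X_synth))])

# Proximity between two real points: fraction of trees in which they fall
# into the same leaf. The induced distance is 1 - proximity.
leaves = clf.apply(X)  # shape (n_samples, n_trees): leaf index per tree
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
dist = 1.0 - prox

# Illustrative outlier score: mean distance to the k nearest neighbours
# (self excluded); higher scores indicate more anomalous points.
k = 10
np.fill_diagonal(dist, np.inf)
scores = np.sort(dist, axis=1)[:, :k].mean(axis=1)
```

Because synthetic uniform samples concentrate relative mass outside the data manifold, boundary regions are split finely by the forest, so points there rarely share leaves with the bulk of the data and receive large distances, which is the "expanding distances at the boundary" effect the abstract describes.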