🤖 AI Summary
Transfer anomaly detection faces significant challenges under target-domain scarcity of anomalies and distributional shift. Method: This paper proposes the first general meta-algorithmic framework for transfer learning grounded in the Neyman–Pearson (NP) decision criterion. It provides theoretical robustness guarantees against shifts in the anomaly distribution, overcoming the failure of conventional balanced-classification transfer methods in highly imbalanced settings. The framework supports differentiable models (e.g., deep neural networks) and enables end-to-end optimization by jointly leveraging labeled source-domain data and unlabeled target-domain data. Results: Evaluated on multiple sparse-anomaly benchmarks, the method achieves average improvements of 12.6% in F1-score and AUROC, empirically validating the consistency between its theoretical guarantees and practical performance.
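For context, the Neyman–Pearson classification criterion referenced above is conventionally stated as the following constrained risk-minimization problem (a standard formulation from the NP classification literature, not reproduced from this paper; the symbols $P_0$, $P_1$, $\phi$, and $\alpha$ are illustrative notation):

```latex
% Neyman--Pearson classification: minimize the type-II error (missed anomalies)
% subject to a hard constraint on the type-I error (false alarms on normal data).
% P_0: normal-class distribution, P_1: anomaly distribution,
% \phi: classifier mapping inputs X to {0 (normal), 1 (anomaly)},
% \alpha: user-specified false-alarm level.
\min_{\phi}\; R_1(\phi) = P_1\!\left(\phi(X) = 0\right)
\quad \text{subject to} \quad
R_0(\phi) = P_0\!\left(\phi(X) = 1\right) \le \alpha
```

Unlike balanced classification, which minimizes a weighted average of the two errors, the NP criterion keeps the false-alarm rate explicitly bounded, which is why it remains meaningful when anomalies are extremely rare.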
📝 Abstract
We consider the problem of transfer learning in outlier detection where target abnormal data is rare. While transfer learning has been studied extensively in traditional balanced classification, the problem of transfer in outlier detection, and more generally in imbalanced classification settings, has received far less attention. We propose a general meta-algorithm which is shown theoretically to yield strong guarantees with respect to a range of changes in the abnormal distribution, while remaining amenable to practical implementation. We then investigate different instantiations of this general meta-algorithm, e.g., based on multi-layer neural networks, and show empirically that they outperform natural extensions of transfer methods designed for traditional balanced classification settings (which are the only solutions currently available).