🤖 AI Summary
Outlier detection (OD) has long been hampered by the absence of anomaly labels in unsupervised settings. To address this, we propose DOUST, the first unsupervised OD method to incorporate test-time training (TTT), enabling online model adaptation using only unlabeled test samples. DOUST combines self-supervised pretraining, a reconstruction loss, and consistency regularization to refine the model dynamically during inference, without requiring any anomaly annotations. Theoretical analysis shows that, given a moderately sized test set, DOUST asymptotically approaches the performance upper bound of fully supervised OD. Extensive experiments on standard benchmarks demonstrate that DOUST significantly outperforms existing unsupervised methods; notably, as the test set grows, its AUC approaches the supervised upper bound, providing the first empirical evidence that pure test-time learning generalizes well in outlier detection.
📝 Abstract
In this paper, we introduce DOUST, our method that applies test-time training to outlier detection and significantly improves detection performance. After thoroughly evaluating the algorithm on common benchmark datasets, we discuss a common failure mode and show that it disappears once the test set is large enough. We therefore conclude that, under reasonable conditions, our algorithm can reach nearly supervised performance even when no labeled outliers are given.
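To make the test-time-training idea concrete, here is a minimal, self-contained sketch of one common way to exploit an unlabeled test set for outlier detection: treat the inlier-only training data as class 0 and the unlabeled test data as class 1, fit a discriminator at test time, and read the predicted "test" probability as an outlier score. Because outliers occur only in the test set, they tend to receive the highest scores. This is an illustrative toy (all names, distributions, and constants are assumptions), not DOUST's exact procedure.

```python
import math
import random

random.seed(0)

def make_data():
    """Synthetic 1-D data: training set has inliers only; the test set
    mixes inliers with a small fraction of shifted outliers."""
    train = [random.gauss(0.0, 1.0) for _ in range(200)]    # inliers only
    test_in = [random.gauss(0.0, 1.0) for _ in range(180)]  # test inliers
    test_out = [random.gauss(4.0, 1.0) for _ in range(20)]  # test outliers
    return train, test_in, test_out

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_discriminator(xs, ys, lr=0.1, epochs=500):
    """Plain logistic regression by batch gradient descent (1-D feature),
    distinguishing training points (y=0) from test points (y=1)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

train, test_in, test_out = make_data()
xs = train + test_in + test_out
ys = [0.0] * len(train) + [1.0] * (len(test_in) + len(test_out))
w, b = fit_discriminator(xs, ys)

def score(x):
    """Outlier score: predicted probability of belonging to the test set."""
    return sigmoid(w * x + b)

mean_in = sum(score(x) for x in test_in) / len(test_in)
mean_out = sum(score(x) for x in test_out) / len(test_out)
print(f"mean inlier score={mean_in:.3f}, mean outlier score={mean_out:.3f}")
```

Note the design choice: no labeled outliers are ever used. The abstract's observation that the method needs a large enough test set shows up here too, since with only a handful of test points the discriminator has too few "class 1" examples to separate outliers from test inliers reliably.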