Can We Ignore Labels In Out of Distribution Detection?

📅 2025-04-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the theoretical feasibility of label-free out-of-distribution (OOD) detection under the "label-blind zone", a regime where the mutual information between the learning objective and the in-distribution labels vanishes, and proves that OOD detection inevitably fails in this setting. To expose security vulnerabilities overlooked by existing benchmarks, the authors propose *adjacent OOD detection*: constructing OOD samples that are semantically close to the in-distribution data yet irrelevant to its labels. Methodologically, the paper provides the first rigorous information-theoretic derivation of failure conditions for label-free OOD detection. Extensive evaluation, including self-supervised learning (SSL)-based assessment and large-scale experiments on CIFAR, SVHN, and ImageNet with ResNet, ViT, and state-of-the-art SSL methods, shows that all evaluated label-free OOD detectors degrade severely under adjacent OOD settings, with average AUROC drops exceeding 40%. These results reveal a fundamental limitation for the real-world deployment of current label-free OOD detection approaches.
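The label-blindness condition hinges on the empirical mutual information between what a learning objective distinguishes and the in-distribution labels being zero. A minimal, self-contained sketch (not the paper's code; the toy "pseudo-label" setup is an illustrative assumption) shows how a label-blind objective yields exactly zero mutual information while a label-aligned one does not:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) in nats over paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy in-distribution labels and two hypothetical objectives:
labels = [i % 2 for i in range(1000)]   # true binary labels
blind = [i % 5 for i in range(1000)]    # "label-blind": independent of labels
aligned = list(labels)                  # objective that recovers the labels

print(mutual_information(labels, blind))    # 0.0 -> label blindness
print(mutual_information(labels, aligned))  # ln 2 ~ 0.693 -> label-aware
```

In the label-blind case every joint cell factorizes, so each log term is log(1) = 0; by the paper's Theorem this is exactly the regime where no detector trained on the blind objective can separate adjacent OOD inputs.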

📝 Abstract
Out-of-distribution (OOD) detection methods have recently become more prominent, serving as a core element in safety-critical autonomous systems. One major purpose of OOD detection is to reject invalid inputs that could lead to unpredictable errors and compromise safety. Due to the cost of labeled data, recent works have investigated the feasibility of self-supervised learning (SSL) OOD detection, unlabeled OOD detection, and zero-shot OOD detection. In this work, we identify a set of conditions for a theoretical guarantee of failure in unlabeled OOD detection algorithms from an information-theoretic perspective. These conditions are present in all OOD tasks dealing with real-world data: I) we provide theoretical proof of unlabeled OOD detection failure when there exists zero mutual information between the learning objective and the in-distribution labels, a.k.a. 'label blindness', II) we define a new OOD task - Adjacent OOD detection - that tests for label blindness and accounts for a previously ignored safety gap in all OOD detection benchmarks, and III) we perform experiments demonstrating that existing unlabeled OOD methods fail under conditions suggested by our label blindness theory and analyze the implications for future research in unlabeled OOD methods.
Problem

Research questions and friction points this paper is trying to address.

Investigates failure conditions for unlabeled OOD detection algorithms
Defines Adjacent OOD detection to address label blindness
Tests existing unlabeled OOD methods under label blindness conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical proof of unlabeled OOD failure
Defines new Adjacent OOD detection task
Analyzes label blindness in OOD methods
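The Adjacent OOD task holds out samples that are semantically close to the training data but carry no information about the training labels. A minimal sketch of such a split, assuming a hypothetical CIFAR-like fine-to-coarse class mapping (the class names and `adjacent_split` helper are illustrative, not the paper's benchmark code):

```python
# Hypothetical fine-grained -> coarse label mapping; the model is trained on
# the coarse task, so fine-grained identity within a coarse class is
# label-irrelevant.
FINE_TO_COARSE = {
    "cat": "animal", "dog": "animal", "deer": "animal",
    "car": "vehicle", "truck": "vehicle", "ship": "vehicle",
}
# One held-out fine class per coarse label: semantically adjacent to the
# in-distribution data, yet uninformative about the coarse labels.
HELD_OUT = {"deer", "ship"}

def adjacent_split(samples):
    """Split (sample_id, fine_label) pairs into in-distribution training
    pairs (relabeled coarsely) and adjacent OOD pairs."""
    ind, ood = [], []
    for sample_id, fine in samples:
        if fine in HELD_OUT:
            ood.append((sample_id, fine))
        else:
            ind.append((sample_id, FINE_TO_COARSE[fine]))
    return ind, ood

samples = [("img0", "cat"), ("img1", "deer"), ("img2", "truck"), ("img3", "ship")]
ind, ood = adjacent_split(samples)
print(ind)  # [('img0', 'animal'), ('img2', 'vehicle')]
print(ood)  # [('img1', 'deer'), ('img3', 'ship')]
```

Because a label-free objective trained on the in-distribution split never sees any signal distinguishing "deer" from "cat", the held-out classes probe exactly the blind spot the theory predicts.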
Hong Yang
Rochester Institute of Technology
Qi Yu
Professor, Rochester Institute of Technology
Machine learning, data mining
Travis Desell
Rochester Institute of Technology