🤖 AI Summary
This work addresses the “simplicity paradox” in out-of-distribution (OOD) detection: existing methods are sensitive to semantically subtle OOD samples yet fail on structurally dissimilar or high-frequency noisy inputs, revealing critical geometric blind spots. The authors propose the D-KNN framework, which uncovers for the first time the phenomenon of “semantic hegemony” in deep feature spaces and explains, through the lens of neural collapse, how spectral concentration bias obscures structural distribution shifts. By orthogonally decomposing features into semantic (principal) and structural (residual) subspaces and introducing a dual-space calibration mechanism, the method restores sensitivity to weak residual signals, yielding a training-free, plug-and-play OOD detector. It achieves state-of-the-art performance on CIFAR and ImageNet, reducing FPR95 from 31.3% to 2.3% and boosting AUROC from 79.7% to 94.9% against sensor failures such as Gaussian noise.
📝 Abstract
While feature-based post-hoc methods have made significant strides in Out-of-Distribution (OOD) detection, we uncover a counter-intuitive Simplicity Paradox in existing state-of-the-art (SOTA) models: they are keenly sensitive to semantically subtle OOD samples but suffer from severe Geometric Blindness when confronted with structurally distinct yet semantically simple samples or high-frequency sensor noise. We attribute this phenomenon to Semantic Hegemony within the deep feature space and reveal its mathematical essence through the lens of Neural Collapse. Theoretical analysis demonstrates that the spectral concentration bias, induced by the high variance of the principal subspace, numerically masks the structural distribution-shift signals that should be significant in the residual subspace. To address this issue, we propose D-KNN, a training-free, plug-and-play geometric decoupling framework. It uses orthogonal decomposition to explicitly separate semantic components from structural residuals and introduces a dual-space calibration mechanism to reactivate the model's sensitivity to weak residual signals. Extensive experiments demonstrate that D-KNN effectively breaks Semantic Hegemony, establishing new SOTA performance on both CIFAR and ImageNet benchmarks. Notably, in resolving the Simplicity Paradox, it reduces FPR95 from 31.3% to 2.3%; on sensor failures such as Gaussian noise, it boosts detection performance (AUROC) from a baseline of 79.7% to 94.9%.
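The abstract does not give implementation details, but the core idea it describes (orthogonally splitting features into a principal "semantic" subspace and its residual "structural" complement, then scoring each subspace separately) can be sketched as follows. This is a minimal illustration, not the authors' method: the PCA-based basis, the `n_principal`, `k`, and `alpha` parameters, and the additive calibration are all assumptions made for the sketch.

```python
import numpy as np

def dknn_score(train_feats, test_feat, n_principal=4, k=5, alpha=1.0):
    """Hypothetical sketch of a dual-space KNN OOD score.

    train_feats: (n, d) in-distribution feature bank.
    test_feat:   (d,) feature of the sample to score.
    Returns a scalar; higher means more OOD-like.
    """
    # Center the feature bank and get principal directions via SVD (PCA).
    mu = train_feats.mean(axis=0)
    Xc = train_feats - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vp = Vt[:n_principal]                         # (n_principal, d) semantic basis

    # Orthogonal decomposition: semantic coordinates and structural residual.
    sem_train = Xc @ Vp.T                         # projection onto principal subspace
    res_train = Xc - sem_train @ Vp               # component in orthogonal complement

    xc = test_feat - mu
    sem_t = xc @ Vp.T
    res_t = xc - sem_t @ Vp

    # k-th nearest-neighbour distance in each subspace.
    d_sem = np.sort(np.linalg.norm(sem_train - sem_t, axis=1))[k - 1]
    d_res = np.sort(np.linalg.norm(res_train - res_t, axis=1))[k - 1]

    # Dual-space calibration (assumed form): reweight the residual distance so
    # weak structural signals are not masked by the high-variance semantic term.
    return d_sem + alpha * d_res
```

Scoring the two subspaces separately is what prevents the "spectral concentration bias" the abstract describes: a plain KNN distance in the full feature space is dominated by the high-variance principal directions, so a shift confined to the residual subspace barely moves the score.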