🤖 AI Summary
This work addresses the lack of correspondence between existing out-of-distribution (OOD) evaluation methods and human perception, which hinders principled analysis of human-model alignment. The authors propose a perceptual-difficulty-centered OOD spectrum framework that quantifies OOD severity through human recognition accuracy on images under varying distortions, partitioning the spectrum into four perceptual challenge intervals. Large-scale behavioral experiments and systematic evaluations across multiple model architectures—including CNNs, Vision Transformers (ViTs), and vision-language models—reveal that vision-language models align most closely with human performance across both near- and far-OOD conditions, while CNNs show better alignment in near-OOD settings and ViTs excel in far-OOD scenarios. These findings demonstrate that human-model alignment is strongly dependent on perceptual difficulty. The proposed framework uniquely maps OOD severity onto a continuous, interpretable dimension grounded in human perception, enabling cross-model and cross-condition alignment comparisons.
📝 Abstract
Determining whether AI systems process information similarly to humans is central to cognitive science and trustworthy AI. While modern AI models match human accuracy on standard tasks, such parity does not guarantee that their underlying decision-making strategies are aligned with human information processing. Assessing performance with i) error-alignment metrics that compare how humans and models fail, and ii) distorted, or otherwise more challenging, stimuli provides a viable pathway toward a finer characterization of model-human alignment. However, existing out-of-distribution (OOD) analyses for challenging stimuli are limited by methodological choices: they define OOD shift relative to model training data or use arbitrary distortion-specific parameters with little correspondence to human perception, hindering principled comparisons. We propose a human-centred framework that redefines the degree of OOD as a spectrum of human perceptual difficulty. By quantifying how much a collection of stimuli deviates from an undistorted reference set based on human accuracy, we construct an OOD spectrum and identify four distinct regimes of perceptual challenge. This approach enables principled model-human comparisons at calibrated difficulty levels. We apply this framework to object recognition and reveal unique, regime-dependent model-human alignment rankings and profiles across deep learning architectures. Vision-language models are the most consistently human-aligned across near- and far-OOD conditions, but CNNs are more aligned than ViTs for near-OOD, and ViTs are more aligned than CNNs for far-OOD conditions. Our work demonstrates the critical importance of accounting for cross-condition differences such as perceptual difficulty for a principled assessment of model-human alignment.
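The core idea of the framework, anchoring OOD severity to the drop in human recognition accuracy relative to an undistorted reference set and then binning conditions into four challenge regimes, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function names, the normalized-drop severity formula, the regime labels, and the equal-width interval edges are all illustrative assumptions.

```python
# Hypothetical sketch: map a distortion condition onto an OOD spectrum using
# human accuracy, then assign it to one of four perceptual-challenge regimes.
# Thresholds and labels are assumptions for illustration only.

def ood_severity(human_acc_condition: float, human_acc_reference: float) -> float:
    """Severity in [0, 1]: 0 = no drop from the undistorted reference,
    1 = human accuracy has collapsed entirely."""
    return max(0.0, (human_acc_reference - human_acc_condition) / human_acc_reference)

def regime(severity: float, edges=(0.25, 0.5, 0.75)) -> str:
    """Partition the severity spectrum into four challenge intervals."""
    labels = ["near-OOD", "mid-near-OOD", "mid-far-OOD", "far-OOD"]
    for edge, label in zip(edges, labels):
        if severity < edge:
            return label
    return labels[-1]

# Example: humans score 0.95 on undistorted images but 0.38 under a heavy
# distortion, giving a normalized accuracy drop of 0.60.
sev = ood_severity(0.38, 0.95)
print(f"severity={sev:.2f} -> {regime(sev)}")  # severity=0.60 -> mid-far-OOD
```

Because every distortion type and strength is projected onto the same human-grounded severity axis, models can then be compared against human error patterns within each regime rather than per distortion parameter.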