Do Machines Fail Like Humans? A Human-Centred Out-of-Distribution Spectrum for Mapping Error Alignment

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of correspondence between existing out-of-distribution (OOD) evaluation methods and human perception, which hinders principled analysis of human-model alignment. The authors propose a perceptual-difficulty-centered OOD spectrum framework that quantifies OOD severity through human recognition accuracy on images under varying distortions, partitioning the spectrum into four perceptual challenge intervals. Large-scale behavioral experiments and systematic evaluations across multiple model architectures—including CNNs, Vision Transformers (ViTs), and vision-language models—reveal that vision-language models align most closely with human performance across both near- and far-OOD conditions, while CNNs show better alignment in near-OOD settings and ViTs excel in far-OOD scenarios. These findings demonstrate that human-model alignment is strongly dependent on perceptual difficulty. The proposed framework uniquely maps OOD severity onto a continuous, interpretable dimension grounded in human perception, enabling cross-model and cross-condition alignment comparisons.

📝 Abstract
Determining whether AI systems process information similarly to humans is central to cognitive science and trustworthy AI. While modern AI models match human accuracy on standard tasks, such parity does not guarantee that their underlying decision-making strategies are aligned with human information processing. Assessing performance using i) error alignment metrics that compare how humans and models fail, and ii) distorted, or otherwise more challenging, stimuli provides a viable pathway toward a finer characterization of model-human alignment. However, existing out-of-distribution (OOD) analyses for challenging stimuli are limited by methodological choices: they define OOD shift relative to model training data or use arbitrary distortion-specific parameters with little correspondence to human perception, hindering principled comparisons. We propose a human-centred framework that redefines the degree of OOD as a spectrum of human perceptual difficulty. By quantifying how much a collection of stimuli deviates from an undistorted reference set based on human accuracy, we construct an OOD spectrum and identify four distinct regimes of perceptual challenge. This approach enables principled model-human comparisons at calibrated difficulty levels. We apply this framework to object recognition and reveal unique, regime-dependent model-human alignment rankings and profiles across deep learning architectures. Vision-language models are the most consistently human-aligned across near- and far-OOD conditions, but CNNs are more aligned than ViTs for near-OOD conditions, and ViTs are more aligned than CNNs for far-OOD conditions. Our work demonstrates the critical importance of accounting for cross-condition differences such as perceptual difficulty for a principled assessment of model-human alignment.
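The abstract's core recipe — grade OOD severity by the drop in human recognition accuracy relative to an undistorted reference set, bin that severity into perceptual-challenge regimes, then compare where humans and models fail — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the four bin edges, the regime names beyond near-/far-OOD, and the choice of Cohen's kappa as the error alignment metric are all assumptions for illustration.

```python
def ood_severity(human_acc_distorted, human_acc_reference):
    """Degree of OOD as the relative drop in human recognition
    accuracy versus an undistorted reference set."""
    return 1.0 - human_acc_distorted / human_acc_reference

def ood_regime(severity):
    """Partition the severity spectrum into four perceptual-challenge
    intervals (bin edges and middle labels are hypothetical)."""
    if severity < 0.25:
        return "near-OOD"
    if severity < 0.50:
        return "mid-OOD"
    if severity < 0.75:
        return "far-OOD"
    return "extreme-OOD"

def error_alignment(human_errors, model_errors):
    """Chance-corrected agreement (Cohen's kappa) between human and
    model trial-level error patterns (1 = error, 0 = correct):
    high values mean they fail on the same stimuli."""
    n = len(human_errors)
    observed = sum(h == m for h, m in zip(human_errors, model_errors)) / n
    p_h = sum(human_errors) / n
    p_m = sum(model_errors) / n
    expected = p_h * p_m + (1 - p_h) * (1 - p_m)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Example: humans drop from 95% on clean images to 55% on a distortion.
sev = ood_severity(human_acc_distorted=0.55, human_acc_reference=0.95)
print(ood_regime(sev))  # → mid-OOD
```

Because severity is defined by human behaviour rather than distortion parameters, two different distortions (e.g., blur vs. noise) that produce the same human accuracy drop land at the same point on the spectrum, which is what makes cross-model and cross-condition comparisons commensurable.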
Problem

Research questions and friction points this paper is trying to address.

out-of-distribution
error alignment
human perception
model-human alignment
perceptual difficulty
Innovation

Methods, ideas, or system contributions that make the work stand out.

human-centred OOD
error alignment
perceptual difficulty spectrum
model-human alignment
out-of-distribution evaluation
Binxia Xu
School of Data Science, Fudan University, Shanghai
Xiaoliang Luo
AI researcher funded by the Foresight Institute; former postdoctoral fellow, UCL
computational neuroscience, deep learning, BrainGPT, deep nets evals
Luke Dickens
Associate Professor in Machine Learning, University College London
Machine Learning, Reinforcement Learning, Computational Neuroscience
Robert M. Mok
Center for Information and Neural Networks, National Institute of Information and Communications Technology, University of Osaka, Osaka