🤖 AI Summary
This study investigates whether vision-language models (VLMs) can effectively emulate human judgments in perceptual image quality assessment, potentially replacing costly psychophysical experiments. For the first time, we systematically evaluate six VLMs (four closed-source and two open-source) against human judgments across three dimensions: contrast, color saturation, and overall preference, using established psychophysical data as a benchmark. Our analysis integrates attribute-weighted evaluation and intra-model consistency metrics. Results show that VLMs achieve up to 0.93 correlation with human judgments on color saturation but perform notably worse on contrast. Most models align with human behavior by prioritizing color saturation in overall preference. A key contribution is revealing the trade-off between model self-consistency and human alignment, and demonstrating that human-model agreement improves as perceptual separability increases.
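To make the reported alignment concrete, the sketch below shows one way to score a model against human data with a Spearman rank correlation, the usual ρ statistic in IQA benchmarking. The arrays `vlm_scores` and `human_scores` are hypothetical placeholders; the study's actual stimuli and prompting pipeline are not reproduced here.

```python
# Minimal sketch: Spearman rank correlation between hypothetical
# per-stimulus VLM ratings and mean human psychophysical scores.
from scipy.stats import spearmanr

# One rating per stimulus for a single attribute (e.g., color saturation).
# These values are illustrative placeholders, not the study's data.
vlm_scores = [3.1, 4.5, 2.0, 4.9, 3.8]
human_scores = [3.0, 4.2, 2.4, 5.0, 3.5]

rho, p_value = spearmanr(vlm_scores, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```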
📝 Abstract
Psychophysical experiments remain the most reliable approach for perceptual image quality assessment (IQA), yet their cost and limited scalability encourage automated approaches. We investigate whether Vision-Language Models (VLMs) can approximate human perceptual judgments across three image quality scales: contrast, colorfulness, and overall preference. Six VLMs (four proprietary and two open-weight) are benchmarked against psychophysical data, yielding a systematic benchmark of VLMs for perceptual IQA through direct comparison with human judgments. The results reveal strong attribute-dependent variability: models with high human alignment for colorfulness (ρ up to 0.93) underperform on contrast, and vice versa. Attribute-weighting analysis further shows that most VLMs assign higher weights to colorfulness than to contrast when evaluating overall preference, mirroring the psychophysical data. Intra-model consistency analysis reveals a counterintuitive trade-off: the most self-consistent models are not necessarily the most human-aligned, suggesting that response variability reflects sensitivity to scene-dependent perceptual cues. Furthermore, human-VLM agreement increases with perceptual separability, indicating that VLMs are more reliable when stimulus differences are clearly expressed.
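As a rough illustration of the attribute-weighting analysis, one can regress overall-preference ratings onto the contrast and colorfulness ratings and compare the fitted weights; a higher colorfulness weight would match the behavior reported above. The least-squares formulation and all variable names here are assumptions for illustration, not necessarily the paper's exact procedure.

```python
# Illustrative sketch: estimate attribute weights by regressing overall
# preference on contrast and colorfulness ratings (all values hypothetical).
import numpy as np

contrast = np.array([2.0, 3.5, 4.0, 1.5, 3.0])      # per-stimulus ratings
colorfulness = np.array([3.0, 4.5, 2.5, 2.0, 4.0])
preference = np.array([2.8, 4.4, 3.0, 1.9, 3.8])    # overall preference

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones_like(contrast), contrast, colorfulness])
coef, *_ = np.linalg.lstsq(X, preference, rcond=None)
intercept, w_contrast, w_color = coef

print(f"w_contrast = {w_contrast:.2f}, w_colorfulness = {w_color:.2f}")
```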