🤖 AI Summary
Deep image quality assessment (IQA) models lack systematic evaluation of perceptual invariance to affine transformations (such as rotation, translation, scaling, and spectral illuminant changes), despite the human visual system's robustness to such geometric and photometric variations.
Method: We introduce a psychophysics-based quantification framework grounded in two-alternative forced-choice (2AFC) experiments to define and measure "imperceptibility thresholds" for affine distortions in a unified metric space, enabling direct comparison between model outputs and human perception.
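To make the threshold-calibration idea concrete, here is a minimal sketch (not the authors' code) of how a 2AFC detection experiment is typically turned into a threshold: fit a psychometric function to proportion-correct data and read off the distance at a criterion performance level. The data values, the cumulative-Gaussian form, and the 75%-correct criterion are illustrative assumptions, not the paper's exact protocol.

```python
# Illustrative sketch: estimating an "imperceptibility threshold" from
# 2AFC responses by fitting a psychometric function. All data and names
# here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(d, mu, sigma):
    """2AFC performance: chance (0.5) near d = 0, saturating at 1.0."""
    return 0.5 + 0.5 * norm.cdf(d, loc=mu, scale=sigma)

# Hypothetical data: distortion magnitudes in the common distance space,
# and the fraction of trials on which observers detected the change.
distances = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
p_correct = np.array([0.50, 0.52, 0.61, 0.78, 0.92, 0.98])

(mu, sigma), _ = curve_fit(psychometric, distances, p_correct, p0=[0.25, 0.1])

# With this functional form, P(correct) = 0.75 exactly at d = mu,
# a common convention for the detection threshold.
threshold_75 = mu
print(f"imperceptibility threshold (75% correct): {threshold_75:.3f}")
```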
Contribution/Results: This is the first work to systematically calibrate and benchmark affine invariance thresholds across leading deep IQA models (e.g., LPIPS, DISTS). Our framework is model-agnostic and generalizable. Experiments reveal that all state-of-the-art deep IQA models exhibit significant deviations from human-level invariance behavior, indicating that optimizing solely for distortion visibility fails to capture the human visual system's essential structural invariance mechanisms. The study establishes a new perceptual alignment benchmark and an interpretable calibration paradigm for IQA model evaluation.
📄 Abstract
Deep architectures are the current state-of-the-art in predicting subjective image quality. Usually, these models are evaluated according to their ability to correlate with human opinion in databases with a range of distortions that may appear in digital media. However, these evaluations overlook affine transformations, which may better represent the changes that actually happen to images in natural conditions. Humans can be particularly invariant to these natural transformations, as opposed to the digital ones. In this work, we evaluate state-of-the-art deep image quality metrics by assessing their invariance to affine transformations, specifically rotation, translation, scaling, and changes in spectral illumination. Here, invariance of a metric means that certain distances should be neglected (considered to be zero) if their values are below a threshold; this is what we call the invisibility threshold of a metric. We propose a methodology to assign such invisibility thresholds for any perceptual metric. This methodology involves transformations to a distance space common to any metric, and psychophysical measurements of thresholds in this common space. By doing so, the analyzed metrics become directly comparable with actual human thresholds. We find that none of the state-of-the-art metrics shows human-like results under this strong test based on invisibility thresholds. This means that tuning models exclusively to predict the visibility of generic distortions may disregard other properties of human vision, such as invariances or invisibility thresholds.
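As a minimal illustration of the invisibility-threshold idea applied to a deep metric, the sketch below computes the LPIPS distance between an image and a slightly rotated copy, then zeroes out distances below a threshold. The threshold value TAU is hypothetical (the paper derives calibrated thresholds via its common distance space and psychophysical measurements, which this sketch does not replicate), and the random tensor stands in for a natural image.

```python
# Illustrative sketch: applying a human-calibrated invisibility threshold
# to a deep metric's raw distance. TAU below is a hypothetical value.
import torch
import lpips  # pip install lpips
import torchvision.transforms.functional as TF

metric = lpips.LPIPS(net='alex')  # LPIPS expects RGB tensors in [-1, 1]

img = torch.rand(1, 3, 256, 256) * 2 - 1   # stand-in for a natural image
rotated = TF.rotate(img, angle=2.0)        # small, barely perceptible rotation

d = metric(img, rotated).item()

TAU = 0.05  # hypothetical invisibility threshold from 2AFC calibration
perceived = 0.0 if d < TAU else d  # distances below threshold count as zero
print(f"raw LPIPS: {d:.4f}  ->  thresholded: {perceived:.4f}")
```

A human-aligned metric would report a raw distance below its calibrated threshold for transformations that observers cannot see; the paper's finding is that current deep metrics fail this test.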