Invariance of deep image quality metrics to affine transformations

📅 2024-07-25
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 0
🤖 AI Summary
Deep image quality assessment (IQA) models lack systematic evaluation of perceptual invariance to affine transformations—such as rotation, translation, scaling, and changes in spectral illumination—despite the human visual system’s robustness to such geometric and photometric variations. Method: We introduce a psychophysics-based quantification framework grounded in two-alternative forced-choice experiments to define and measure “invisibility thresholds” for affine distortions in a unified metric space, enabling direct comparison between model outputs and human perception. Contribution/Results: This is the first work to systematically calibrate and benchmark affine invariance thresholds across leading deep IQA models (e.g., LPIPS, DISTS). Our framework is model-agnostic and generalizable. Experiments reveal that all state-of-the-art deep IQA models exhibit significant deviations from human-level invariance behavior, indicating that optimizing solely for distortion visibility fails to capture the human visual system’s essential structural invariance mechanisms. The study establishes a new perceptual alignment benchmark and an interpretable calibration paradigm for IQA model evaluation.

📝 Abstract
Deep architectures are the current state-of-the-art in predicting subjective image quality. Usually, these models are evaluated according to their ability to correlate with human opinion in databases with a range of distortions that may appear in digital media. However, these evaluations overlook affine transformations, which may better represent the changes that actually happen to images in natural conditions. Humans can be particularly invariant to these natural transformations, as opposed to the digital ones. In this work, we evaluate state-of-the-art deep image quality metrics by assessing their invariance to affine transformations, specifically: rotation, translation, scaling, and changes in spectral illumination. Here invariance of a metric refers to the fact that certain distances should be neglected (considered to be zero) if their values are below a threshold. This is what we call the invisibility threshold of a metric. We propose a methodology to assign such invisibility thresholds for any perceptual metric. This methodology involves transformations to a distance space common to any metric, and psychophysical measurements of thresholds in this common space. By doing so, we allow the analyzed metrics to be directly comparable with actual human thresholds. We find that none of the state-of-the-art metrics shows human-like results under this strong test based on invisibility thresholds. This means that tuning the models exclusively to predict the visibility of generic distortions may disregard other properties of human vision, such as invariances or invisibility thresholds.
Problem

Research questions and friction points this paper is trying to address.

Evaluating image quality metrics' invariance to affine transformations
Assessing human-like invisibility thresholds for natural image changes
Testing metrics for rotation, translation, scaling, and illumination changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating metric invariance to affine transformations
Using psychophysics to determine visibility thresholds
Transducing thresholds to common subjective representation
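The invisibility-threshold test described above can be sketched in a few lines: apply an affine transformation humans barely notice, compute the metric's distance, and check whether it falls below a psychophysically calibrated threshold in the metric's own distance space. This is a minimal illustration, not the authors' implementation: the metric here is plain pixel RMSE standing in for a deep metric such as LPIPS or DISTS, and the threshold value is hypothetical (real thresholds come from the paper's 2AFC measurements).

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # toy grayscale image

def metric_distance(a, b):
    # Stand-in perceptual metric (pixel RMSE); the paper evaluates
    # deep metrics such as LPIPS and DISTS in this role.
    return float(np.sqrt(np.mean((a - b) ** 2)))

def is_invariant(metric, original, transformed, threshold):
    # The metric treats the transformation as "invisible" (distance
    # neglected, i.e. effectively zero) if it falls below the
    # calibrated invisibility threshold.
    return metric(original, transformed) < threshold

# Small translation: an affine change humans are largely invariant to.
shifted = np.roll(img, shift=2, axis=1)

# Hypothetical human-derived threshold in this metric's distance space.
HUMAN_THRESHOLD = 0.1

print(is_invariant(metric_distance, img, shifted, HUMAN_THRESHOLD))
```

For this toy image, pixel RMSE reports a large distance for a barely perceptible shift, so the check fails, mirroring the paper's finding that metrics tuned only for distortion visibility need not reproduce human invariances.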
Nuria Alabau-Bosque
ValgrAI: Valencian Grad. School Research Network of AI, València, 46022, Spain
Paula Daudén-Oliver
Image Processing Lab, Universitat de València, Paterna, 46980, Spain
Jorge Vila-Tomás
Image Processing Lab, Universitat de València
Valero Laparra
Universitat de València
J. Malo
Image Processing Lab, Universitat de València, Paterna, 46980, Spain