🤖 AI Summary
This study addresses the challenge of subjective quality assessment for learning-based high-fidelity image compression (e.g., JPEG AI). Methodologically, it constructs a high-quality subjective dataset comprising 50 multi-source compressed images and 96,200 crowdsourced triplet comparisons; introduces the Meng–Rosenthal–Rubin statistical test—previously unused in QoE research—to rigorously quantify the significance of correlations between objective metrics (e.g., CVVDP) and human perception; and reconstructs the Just-Noticeable-Difference (JND) quality scale using a unified triplet model. Results reveal a pervasive over-optimistic bias among mainstream IQA metrics—including the top-performing CVVDP—in the high-fidelity regime. The study open-sources the complete subjective dataset, including raw triplet responses, establishing the first trustworthy, fine-grained perceptual quality benchmark for learning-based codecs.
📝 Abstract
Learning-based image compression methods have recently emerged as promising alternatives to traditional codecs, offering improved rate-distortion performance and perceptual quality. JPEG AI represents the latest standardized framework in this domain, leveraging deep neural networks for high-fidelity image reconstruction. In this study, we present a comprehensive subjective visual quality assessment of JPEG AI-compressed images using the JPEG AIC-3 methodology, which quantifies perceptual differences in terms of Just Noticeable Difference (JND) units. We generated a dataset of 50 compressed images with fine-grained distortion levels from five diverse sources. A large-scale crowdsourced experiment collected 96,200 triplet responses from 459 participants. We reconstructed JND-based quality scales using a unified model based on boosted and plain triplet comparisons. Additionally, we evaluated the alignment of multiple objective image quality metrics with human perception in the high-fidelity range. The CVVDP metric achieved the overall highest performance; however, most metrics including CVVDP were overly optimistic in predicting the quality of JPEG AI-compressed images. These findings emphasize the necessity for rigorous subjective evaluations in the development and benchmarking of modern image codecs, particularly in the high-fidelity range. Another technical contribution is the introduction of the well-known Meng-Rosenthal-Rubin statistical test to the field of Quality of Experience research. This test can reliably assess the significance of difference in performance of quality metrics in terms of correlation between metrics and ground truth. The complete dataset, including all subjective scores, is publicly available at https://github.com/jpeg-aic/dataset-JPEG-AI-SDR25.