🤖 AI Summary
Existing VLM-based image quality assessment (IQA) methods generalize poorly and are hindered by insufficient dataset scale and quality, limiting their applicability in real-world scenarios. To address these limitations, we propose a unified IQA framework supporting multiple tasks (distortion identification, instant rating, and quality reasoning), multiple granularities (brief vs. detailed responses), and both full-reference and no-reference settings. We introduce DQ-495K, a large-scale, high-quality descriptive IQA dataset, built with three key techniques: ground-truth-informed data construction, native-resolution preservation during training, and confidence-based response filtering. Our framework adopts an end-to-end VLM training paradigm. Extensive experiments demonstrate state-of-the-art performance across all three tasks, outperforming conventional score-based methods, prior VLM-based IQA models, and GPT-4V. Further validation on assessing web-downloaded images and ranking model-processed images confirms strong cross-domain generalization.
📝 Abstract
With the rapid advancement of Vision Language Models (VLMs), VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically, aligning with human expression and capturing the multifaceted nature of IQA tasks. However, current methods remain far from practical usage. First, prior works focus narrowly on specific sub-tasks or settings, which do not align with diverse real-world applications. Second, their performance is sub-optimal due to limitations in dataset coverage, scale, and quality. To overcome these challenges, we introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild). Our method includes a multi-functional IQA task paradigm that encompasses assessment and comparison tasks, brief and detailed responses, and full-reference and no-reference scenarios. We introduce a ground-truth-informed dataset construction approach to enhance data quality, and scale the dataset up to 495K samples under the brief-detail joint framework. Consequently, we construct a comprehensive, large-scale, and high-quality dataset, named DQ-495K. We also retain image resolution during training to better handle resolution-related quality issues, and estimate a confidence score that helps filter out low-quality responses. Experimental results demonstrate that DepictQA-Wild significantly outperforms traditional score-based methods, prior VLM-based IQA models, and proprietary GPT-4V in distortion identification, instant rating, and reasoning tasks. Our advantages are further confirmed by real-world applications, including assessing web-downloaded images and ranking model-processed images. Datasets and code will be released at https://depictqa.github.io/depictqa-wild/.
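The abstract mentions estimating a confidence score to filter out low-quality responses. As a hedged illustration only (the paper's exact formulation is not given here), one common way to derive such a score is from the per-token probabilities of a generated response; the helper names and threshold below are hypothetical:

```python
import math

def confidence_score(token_logprobs):
    # Mean per-token probability as a simple confidence estimate.
    # This is an illustrative choice, not necessarily the paper's method.
    if not token_logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def filter_responses(responses, threshold=0.5):
    # Keep only responses whose estimated confidence meets the threshold;
    # low-confidence generations are discarded as likely low quality.
    return [r for r in responses if confidence_score(r["logprobs"]) >= threshold]

# Toy example with made-up log-probabilities:
responses = [
    {"text": "Moderate motion blur degrades the details.", "logprobs": [-0.1, -0.2, -0.05]},
    {"text": "Unclear, hesitant description.", "logprobs": [-1.5, -2.0, -1.8]},
]
kept = filter_responses(responses)  # only the high-confidence response survives
```

A threshold like this would typically be calibrated on held-out data so that the retained responses match human quality judgments.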