🤖 AI Summary
To address the challenge of jointly modeling quality localization, perceptual assessment, and natural language description in fine-grained, interpretable image quality assessment (IQA), this paper proposes a unified multimodal large language model (MLLM) framework. Methodologically, it introduces task-specific offline augmentation modules and a data-mixing strategy, complemented by an online enhancement mechanism, enabling multi-task co-optimization under heterogeneous supervision sources. Trained and evaluated extensively on the ViDA-UGC benchmark, the approach achieves state-of-the-art performance across all subtasks and ranks first in the ICCV MIPI 2025 Detailed Image Quality Assessment Challenge. The framework improves IQA accuracy, fine-grained spatial localization, and the semantic consistency between predictions and human-understandable natural language descriptions, advancing IQA toward intelligent, interpretable, and human-aligned assessment.
📝 Abstract
Image Quality Assessment (IQA) has progressed from scalar quality prediction to more interpretable, human-aligned evaluation paradigms. In this work, we address the emerging challenge of detailed and explainable IQA by proposing iDETEX, a unified multimodal large language model (MLLM) capable of simultaneously performing three key tasks: quality grounding, perception, and description. To facilitate efficient and generalizable training across these heterogeneous subtasks, we design a suite of task-specific offline augmentation modules and a data-mixing strategy. These are further complemented by online enhancement strategies to fully exploit multi-sourced supervision. We validate our approach on the large-scale ViDA-UGC benchmark, where iDETEX achieves state-of-the-art performance across all subtasks. Our model ranks first in the ICCV MIPI 2025 Detailed Image Quality Assessment Challenge, demonstrating its effectiveness and robustness in delivering accurate and interpretable quality assessments.
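
The abstract names a data-mixing strategy over heterogeneous subtasks without giving its details, so the following is a minimal Python sketch of one plausible way to interleave the three subtasks (grounding, perception, description) into mixed training batches. The sampling ratios, dataset interface, prompt strings, and function names here are illustrative assumptions, not iDETEX's actual pipeline.

```python
import random

# Hypothetical data-mixing sketch for heterogeneous IQA subtasks.
# Ratios and the (image, prompt, target) record layout are assumed for
# illustration only; they are not taken from the paper.
TASK_RATIOS = {"grounding": 0.4, "perception": 0.3, "description": 0.3}


def mix_batches(datasets, batch_size=8, num_batches=100, seed=0):
    """Yield batches whose task composition roughly follows TASK_RATIOS.

    `datasets` maps task name -> list of (image_path, prompt, target) tuples.
    """
    rng = random.Random(seed)
    tasks = list(TASK_RATIOS)
    weights = [TASK_RATIOS[t] for t in tasks]
    for _ in range(num_batches):
        batch = []
        for _ in range(batch_size):
            task = rng.choices(tasks, weights=weights, k=1)[0]
            sample = rng.choice(datasets[task])
            batch.append((task, sample))
        yield batch


if __name__ == "__main__":
    # Toy records standing in for the three supervision sources.
    toy = {
        "grounding": [("img0.png", "Locate the blurred region.", "[12, 40, 88, 120]")],
        "perception": [("img1.png", "Rate the overall quality (1-5).", "3")],
        "description": [("img2.png", "Describe the quality issues.", "Slight noise in shadows.")],
    }
    for task, (img, prompt, target) in next(mix_batches(toy, batch_size=4, num_batches=1)):
        print(task, img, prompt, "->", target)
```

In such a scheme, each batch mixes supervision from all three subtasks so that a single model is co-optimized rather than trained per task; the actual ratios and augmentation steps used by iDETEX are described in the paper itself.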