🤖 AI Summary
This work addresses the long-standing fragmentation between image quality assessment (IQA) and image aesthetic assessment (IAA), characterized by disjoint modeling and a lack of shared visual-semantic representations. To overcome critical bottlenecks—namely, the scarcity of textual descriptions in IQA datasets and pervasive textual noise in IAA datasets—the authors propose the first vision-language unified pretraining framework for both tasks. Methodologically, it (1) leverages multimodal large language models to generate high-fidelity semantic captions and introduces a noise-aware caption purification strategy; (2) formulates a joint vision-language contrastive learning objective to co-optimize IQA- and IAA-aligned representations; and (3) incorporates lightweight adapters for parameter-efficient knowledge transfer. Extensive experiments demonstrate state-of-the-art performance on both IQA and IAA benchmarks, with substantial gains in zero-shot generalization and few-shot fine-tuning. This work establishes the first general-purpose perceptual model capable of jointly assessing image quality and aesthetics.
📝 Abstract
Image Quality Assessment (IQA) and Image Aesthetic Assessment (IAA) aim to simulate human subjective perception of image visual quality and aesthetic appeal. Existing methods typically address these tasks independently due to their distinct learning objectives. However, this neglects the underlying interconnectedness of the two tasks, which hinders the learning of task-agnostic shared representations for human subjective perception. To confront this challenge, we propose Unified vision-language pre-training of Quality and Aesthetics (UniQA), which learns general perceptions shared by the two tasks, thereby benefiting them simultaneously. To address the absence of text in IQA datasets and the presence of textual noise in IAA datasets, (1) we utilize multimodal large language models (MLLMs) to generate high-quality text descriptions, and (2) we use the generated text for IAA as metadata to purify noisy IAA data. To effectively adapt the pre-trained UniQA to downstream tasks, we further propose a lightweight adapter that utilizes versatile cues to fully exploit the extensive knowledge of the pre-trained model. Extensive experiments demonstrate that our approach attains new state-of-the-art performance on both IQA and IAA tasks, while concurrently showcasing exceptional zero-shot and few-label image assessment capabilities. The source code will be available at https://github.com/zht8506/UniQA.
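The unified pre-training described above pairs images with MLLM-generated captions and aligns them in a shared embedding space. The paper does not spell out the loss in this abstract, but vision-language pre-training of this kind is typically driven by a CLIP-style symmetric contrastive (InfoNCE) objective over matched image-text pairs. Below is a minimal numpy sketch of such an objective; the function name and the temperature value are illustrative assumptions, not UniQA's exact implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (B, D) arrays; row i of each is a matched pair.
    A hypothetical sketch of the usual vision-language pre-training loss,
    not the authors' exact objective.
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_emb)
    logits = img @ txt.T / temperature        # (B, B) cosine similarities
    labels = np.arange(logits.shape[0])       # matched pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)            # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(lg.shape[0]), labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each image toward its own caption and pushes it away from the other captions in the batch, which is what lets the resulting embedding space support the zero-shot and few-label assessment reported in the abstract.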