TuningIQA: Fine-Grained Blind Image Quality Assessment for Livestreaming Camera Tuning

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing blind image quality assessment (BIQA) models produce only a single holistic score, limiting their utility for fine-grained camera parameter optimization in livestreaming scenarios. To address this, the authors propose a fine-grained BIQA framework: (1) they introduce FGLive-10K, a large-scale, multi-attribute annotated dataset designed specifically for livestreaming; (2) they develop a human-perception-driven fine-grained quality assessment mechanism that uses graph-based fusion to model inter-parameter dependencies among camera settings, enabling joint multi-attribute quality regression and pairwise preference ranking. The approach moves beyond scalar quality estimation, achieving significant improvements over state-of-the-art BIQA models on both regression and ranking benchmarks. Deployment validation confirms its effectiveness in improving the precision of livestream camera tuning and end-user quality of experience (QoE).

📝 Abstract
Livestreaming has become increasingly prevalent in modern visual communication, where automatic camera quality tuning is essential for delivering superior user Quality of Experience (QoE). Such tuning requires accurate blind image quality assessment (BIQA) to guide parameter optimization decisions. Unfortunately, the existing BIQA models typically only predict an overall coarse-grained quality score, which cannot provide fine-grained perceptual guidance for precise camera parameter tuning. To bridge this gap, we first establish FGLive-10K, a comprehensive fine-grained BIQA database containing 10,185 high-resolution images captured under varying camera parameter configurations across diverse livestreaming scenarios. The dataset features 50,925 multi-attribute quality annotations and 19,234 fine-grained pairwise preference annotations. Based on FGLive-10K, we further develop TuningIQA, a fine-grained BIQA metric for livestreaming camera tuning, which integrates human-aware feature extraction and graph-based camera parameter fusion. Extensive experiments and comparisons demonstrate that TuningIQA significantly outperforms state-of-the-art BIQA methods in both score regression and fine-grained quality ranking, achieving superior performance when deployed for livestreaming camera tuning.
Problem

Research questions and friction points this paper is trying to address.

Develops fine-grained blind image quality assessment for livestreaming camera tuning
Addresses limitation of coarse-grained quality scores in existing BIQA models
Provides perceptual guidance for precise camera parameter optimization decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-aware feature extraction for quality assessment
Graph-based camera parameter fusion technique
Fine-grained BIQA metric for livestreaming optimization
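The graph-based camera parameter fusion and joint regression/ranking described above can be sketched in miniature. Everything below — the parameter set, the dependency edges, the attribute weights, and all function names — is a hypothetical illustration of the general idea, not the authors' implementation:

```python
import math

# Hypothetical sketch of graph-based camera parameter fusion with a
# multi-attribute regression head and a pairwise preference head.
# Parameters, edges, and weights are invented for illustration only.

# Camera parameters as graph nodes: name -> normalized value in [0, 1]
params = {"exposure": 0.6, "gain": 0.3, "white_balance": 0.8, "focus": 0.5}
# Assumed dependency edges between parameters (undirected)
edges = [("exposure", "gain"), ("exposure", "white_balance"), ("focus", "gain")]

def message_pass(values, edges):
    """One round of mean-aggregation message passing over the parameter graph."""
    neighbors = {k: [] for k in values}
    for a, b in edges:
        neighbors[a].append(values[b])
        neighbors[b].append(values[a])
    # Fuse each node's value with the mean of its neighbors' values
    return {
        k: 0.5 * v + 0.5 * (sum(neighbors[k]) / len(neighbors[k]) if neighbors[k] else v)
        for k, v in values.items()
    }

def attribute_scores(fused, image_feat):
    """Toy multi-attribute regression head: one score per quality attribute."""
    weights = [("colorfulness", 0.4), ("sharpness", 0.7), ("noise", 0.2)]
    graph_summary = sum(fused.values()) / len(fused)
    return {attr: image_feat * w + graph_summary for attr, w in weights}

def pairwise_preference(score_a, score_b):
    """Bradley-Terry-style probability that configuration A is preferred over B."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

fused = message_pass(params, edges)
scores = attribute_scores(fused, image_feat=0.9)
pref = pairwise_preference(scores["sharpness"], 0.5)
```

In the paper's full model, the node features would be learned embeddings and the two heads would be trained jointly on FGLive-10K's 50,925 multi-attribute annotations and 19,234 pairwise preference annotations; this sketch only shows the shape of that pipeline.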
Authors

Xiangfei Sheng — School of Artificial Intelligence, Xidian University
Zhichao Duan — Tsinghua University (Natural Language Processing)
Xiaofeng Pan — School of Artificial Intelligence, Xidian University
Yipo Huang — School of Data Science and Artificial Intelligence, Chang’an University
Zhichao Yang — School of Artificial Intelligence, Xidian University
Pengfei Chen — School of Artificial Intelligence, Xidian University
Leida Li — Xidian University, China (Visual quality evaluation, Computational aesthetics, Affective computing)