🤖 AI Summary
To address the lack of perceptual quality assessment for 3D Gaussian Splatting (3DGS) rendering, this paper introduces 3DGS-QA, the first subjective quality dataset designed specifically for 3DGS. It comprises 15 object categories and 225 degraded samples, systematically quantifying the visual impact of key degradations including viewpoint sparsity, noise, and color distortion. The authors further propose the first no-reference quality prediction model that operates directly on 3D Gaussian primitives, eliminating reliance on rendered images or ground-truth references; it jointly encodes spatial-distribution and photometric features to enable structure-aware quality estimation. Extensive experiments demonstrate that the model significantly outperforms both conventional and state-of-the-art learning-based methods across diverse degradation types, exhibiting strong robustness and generalization. Both the 3DGS-QA dataset and the model are publicly released, establishing a reliable, perceptually grounded evaluation benchmark for optimizing 3DGS content generation and rendering pipelines.
📝 Abstract
With the rapid advancement of 3D visualization, 3D Gaussian Splatting (3DGS) has emerged as a leading technique for real-time, high-fidelity rendering. While prior research has emphasized algorithmic performance and visual fidelity, the perceptual quality of 3DGS-rendered content, especially under varying reconstruction conditions, remains largely underexplored. In practice, factors such as viewpoint sparsity, limited training iterations, point downsampling, noise, and color distortions can significantly degrade visual quality, yet their perceptual impact has not been systematically studied. To bridge this gap, we present 3DGS-QA, the first subjective quality assessment dataset for 3DGS. It comprises 225 degraded reconstructions across 15 object types, enabling a controlled investigation of common distortion factors. Based on this dataset, we introduce a no-reference quality prediction model that directly operates on native 3D Gaussian primitives, without requiring rendered images or ground-truth references. Our model extracts spatial and photometric cues from the Gaussian representation to estimate perceived quality in a structure-aware manner. We further benchmark existing quality assessment methods, spanning both traditional and learning-based approaches. Experimental results show that our method consistently achieves superior performance, highlighting its robustness and effectiveness for 3DGS content evaluation. The dataset and code are made publicly available at https://github.com/diaoyn/3DGSQA to facilitate future research in 3DGS quality assessment.
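To make the idea of "operating on native Gaussian primitives" concrete, here is a minimal, purely illustrative sketch (not the paper's actual model): it pools simple spatial and photometric statistics directly from raw 3DGS attributes (positions, per-axis scales, opacities, colors), which a downstream regressor could map to a quality score. The function name and feature choices are hypothetical assumptions for illustration only.

```python
import numpy as np

def gaussian_primitive_features(positions, scales, opacities, colors):
    """Illustrative feature pooling over raw 3DGS primitives.

    positions: (N, 3) Gaussian centers
    scales:    (N, 3) per-axis scales
    opacities: (N,)   opacity values in [0, 1]
    colors:    (N, 3) base RGB colors in [0, 1]
    Returns an 11-dim feature vector; NOT the paper's actual encoder.
    """
    # Spatial cues: global spread of centers and per-primitive anisotropy
    # (elongated, "spiky" Gaussians often correlate with artifacts).
    centroid = positions.mean(axis=0)
    spread = np.linalg.norm(positions - centroid, axis=1).mean()
    aniso = scales.max(axis=1) / np.clip(scales.min(axis=1), 1e-8, None)

    # Photometric cues: opacity and per-channel color statistics.
    return np.concatenate([
        [spread, aniso.mean(), aniso.std()],
        [opacities.mean(), opacities.std()],
        colors.mean(axis=0),
        colors.std(axis=0),
    ])
```

In practice the paper's model learns such cues end-to-end rather than hand-crafting them; this sketch only shows that quality-relevant signals are recoverable from the primitives without rendering any image.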