🤖 AI Summary
This study addresses the lack of objective, reproducible quantitative methods for evaluating color quality in human–computer interface (HCI) design. We propose an end-to-end deep learning framework for automated color quality assessment. Methodologically, we develop a CNN-based multidimensional visual feature extraction model that integrates perceptually relevant features, including hue, brightness, and saturation, to establish a mapping between interface design attributes and users' subjective ratings. Our key contribution is the first fully automated, cross-platform quantitative assessment of interface color quality, overcoming longstanding limitations of expert-dependent evaluation and small-sample subjective studies. Evaluated on a benchmark dataset of mainstream website interfaces, our model achieves a Pearson correlation coefficient of 0.96 with human ratings and significantly outperforms existing baselines in both mean squared error (MSE) and mean absolute error (MAE). These results demonstrate high accuracy, strong generalizability, and practical deployability.
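The three agreement metrics named above (Pearson correlation, MSE, MAE) can be sketched in plain Python; the score vectors below are hypothetical, not data from the study:

```python
import math

def evaluate(pred, truth):
    """Compare model scores against human ratings using Pearson r, MSE, and MAE."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(truth) / n
    # Pearson r: covariance normalized by the two standard deviations.
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sd_p = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sd_t = math.sqrt(sum((t - mt) ** 2 for t in truth))
    pearson = cov / (sd_p * sd_t)
    mse = sum((p - t) ** 2 for p, t in zip(pred, truth)) / n
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / n
    return pearson, mse, mae

# Hypothetical model scores and human ratings on a five-point scale.
model = [4.1, 3.2, 4.8, 2.5, 3.9]
human = [4.0, 3.0, 5.0, 2.4, 4.1]
r, mse, mae = evaluate(model, human)
```

A high r with low MSE/MAE, as reported in the study, indicates that the model's scores track human judgments both in ranking and in absolute magnitude.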
📝 Abstract
In this paper, we propose a quantitative evaluation model for the color quality of human–computer interaction interfaces based on deep convolutional neural networks (CNNs). The model extracts multidimensional features from interface images, including hue, brightness, and purity (saturation), and uses a CNN for efficient feature modeling and quantitative analysis of the relationship between interface design and user perception. Experiments are conducted on datasets of mainstream international website interfaces, covering e-commerce platforms, social media, education platforms, and other categories, and verify the model's evaluation performance on indicators such as contrast, clarity, color harmony, and visual appeal. The results show that the CNN's assessments are highly consistent with user ratings, reaching a correlation coefficient of 0.96, and achieve high accuracy in both mean squared error and mean absolute error. Compared with traditional experience-based evaluation methods, the proposed model captures the visual characteristics of an interface efficiently and objectively, avoiding the influence of subjective factors. Future research could incorporate multimodal data (such as text and interaction behavior) into the model to strengthen the evaluation of dynamic interfaces, and extend the approach to fields such as smart homes, medical systems, and virtual reality. This paper provides new methods and ideas for the scientific evaluation and optimization of interface design.
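As a minimal illustration of the hue/brightness/purity features mentioned in the abstract, the sketch below averages HSV components over an image's pixels using Python's standard `colorsys` module. The function name and the tiny pixel patch are illustrative assumptions; the actual model learns richer representations with a CNN rather than relying on these simple statistics alone:

```python
import colorsys

def color_features(pixels):
    """Mean hue, saturation (purity), and brightness (value) over an image,
    given as a list of (r, g, b) tuples with channels in 0-255."""
    n = len(pixels)
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in pixels:
        # colorsys works on floats in [0, 1]; h, s, v all come back in [0, 1].
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        h_sum += h
        s_sum += s
        v_sum += v
    return h_sum / n, s_sum / n, v_sum / n

# A hypothetical 2x2 interface patch: white, mid-gray, blue, orange.
patch = [(255, 255, 255), (128, 128, 128), (0, 0, 255), (255, 165, 0)]
hue, sat, val = color_features(patch)
```

Summary statistics like these can serve as interpretable side features or sanity checks alongside the CNN's learned representation.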