Beyond Literacy: Predicting Interpretation Correctness of Visualizations with User Traits, Item Difficulty, and Rasch Scores

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limitations of existing visualization literacy assessments, which rely on fixed questions and fail to capture individual differences or enable personalized prediction. The authors model users’ correctness in interpreting visualizations as a binary classification task, integrating 22 features encompassing user characteristics, historical performance, and item difficulty—quantified through both expert ratings and Rasch model–derived metrics. Notably, this work is the first to incorporate the Rasch model to characterize item difficulty for predictive purposes. Evaluations across 32 question subsets employ logistic regression, random forests, and multilayer perceptrons, combined with feature selection and ten repetitions of ten-fold cross-validation. Results show that logistic regression with feature selection achieves the best performance (median AUC = 0.72, Cohen’s κ = 0.32), with Rasch-based difficulty emerging as the strongest predictor, thereby demonstrating the feasibility of dynamically adapting visualization items to user ability and advancing personalized assessment and training.
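The evaluation protocol summarized above (logistic regression with feature selection, scored over ten repetitions of ten-fold cross-validation using AUC and Cohen's kappa) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, the `SelectKBest`/`f_classif` selector, and the `k=8` feature count are assumptions standing in for the paper's 22 real features and its actual selection procedure.

```python
# Sketch of the paper's evaluation protocol on one synthetic "item-specific"
# dataset: logistic regression + feature selection, scored via 10x10-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for one item-specific dataset (22 features, binary label).
X, y = make_classification(n_samples=1000, n_features=22, n_informative=6,
                           random_state=0)

aucs, kappas = [], []
for rep in range(10):                                  # ten repetitions...
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=rep)
    for train_idx, test_idx in cv.split(X, y):         # ...of ten-fold CV
        model = make_pipeline(StandardScaler(),
                              SelectKBest(f_classif, k=8),  # feature selection
                              LogisticRegression(max_iter=1000))
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], proba))
        kappas.append(cohen_kappa_score(y[test_idx],
                                        (proba >= 0.5).astype(int)))

print(f"median AUC = {np.median(aucs):.2f}, "
      f"median kappa = {np.median(kappas):.2f}")
```

In the paper this loop runs separately on each of the 32 item-specific datasets, and the reported 0.72 / 0.32 figures are medians across those runs.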

📝 Abstract
Data Visualization (DV) Literacy assessments are typically administered via fixed sets of DV items, despite substantial heterogeneity in how different people interpret the same visualization. This paper presents and evaluates an approach for predicting Human Interpretation Correctness (P-HIC) of data visualizations, i.e., anticipating whether a specific person will interpret a data visualization correctly before exposure to that DV, enabling more personalized assessment and training. We operationalize P-HIC as a binary classification problem using 22 features spanning Human Profile, Human Performance, and Item Difficulty (including ExpertDifficulty and RaschDifficulty). We evaluate three machine-learning models (Logistic Regression, Random Forest, Multilayer Perceptron) with and without feature selection, using a survey with 1,083 participants who answered 32 DV items (eight data visualizations with four items each), yielding 34,656 item responses. Performance is assessed via ten repetitions of ten-fold cross-validation on each of the 32 item-specific datasets, using AUC and Cohen's kappa. Logistic Regression with feature selection is the best-performing approach, reaching a median AUC of 0.72 and a median kappa of 0.32. Feature analyses show RaschDifficulty as the dominant predictor, followed by experts' ratings and prior correctness (PercCorrect), whose relevance increases across sessions. Profile information contributed little to P-HIC. Our results support the feasibility of anticipating misinterpretations of data visualizations and motivate the runtime selection of DV items tailored to an audience, thereby improving the efficiency of DV Literacy assessment and targeted training.
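RaschDifficulty, the dominant predictor in the abstract, comes from fitting a Rasch (one-parameter logistic) model to the binary response matrix, where P(person p answers item i correctly) = sigmoid(theta_p − b_i). The following minimal sketch estimates item difficulties b_i by joint maximum likelihood on synthetic data; the alternating gradient-ascent scheme, learning rate, and sample sizes are assumptions for illustration, not the authors' estimation procedure.

```python
# Minimal Rasch (1PL) difficulty estimation on synthetic binary responses.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 32
true_theta = rng.normal(0, 1, n_persons)   # latent person abilities
true_b = rng.normal(0, 1, n_items)         # latent item difficulties
prob = 1 / (1 + np.exp(-(true_theta[:, None] - true_b[None, :])))
responses = (rng.random((n_persons, n_items)) < prob).astype(float)

theta = np.zeros(n_persons)
b = np.zeros(n_items)
lr = 0.5
for _ in range(500):                        # joint gradient ascent on log-lik.
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resid = responses - p                   # dL/d(theta_p) = sum_i (y - p)
    theta += lr * resid.mean(axis=1)
    b -= lr * resid.mean(axis=0)            # dL/d(b_i) flips the sign
    b -= b.mean()                           # fix the scale origin (identifiability)

# Estimated difficulties should track the (centered) true ones closely.
r = np.corrcoef(b, true_b)[0, 1]
print(f"correlation with true difficulties: {r:.2f}")
```

Difficulties estimated this way are on the same latent scale as person abilities, which is what lets the paper compare item difficulty directly against user ability for adaptive item selection.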
Problem

Research questions and friction points this paper is trying to address.

Data Visualization Literacy
Interpretation Correctness
Personalized Assessment
Item Difficulty
Rasch Model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human Interpretation Correctness
Rasch Difficulty
Personalized Visualization Assessment
Data Visualization Literacy
Predictive Modeling