Towards Human-Centered Early Prediction Models for Academic Performance in Real-World Contexts

📅 2025-04-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current academic performance prediction models suffer from three key limitations: an overemphasis on accuracy at the expense of interpretability and fairness; reliance on monthly-scale data, delaying timely intervention; and fragmented behavioral data. To address these, we propose the first week-level, person-centered prediction framework that leverages students’ behavioral logs, self-reported surveys, and educational features collected within the first week of the semester. We introduce a multi-task 1D-CNN (MTL-1D-CNN) model explicitly designed for interpretability, fairness, and generalizability, enabling weekly identification of high- and low-performing students and facilitating collaborative interventions. This work is the first to systematically balance these three societal values while overcoming the bottleneck of coarse-grained, monthly data. Evaluated on two real-world academic semesters, our model achieves AUC ≥ 0.82, significantly outperforming baselines. Rigorous validation via SHAP-based interpretation and group fairness assessment confirms its trustworthiness and inclusivity.
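The summary does not give the MTL-1D-CNN architecture in detail, but its core idea (a shared 1D-convolutional trunk over a week of behavioral time series, with one prediction head per task) can be sketched as follows. This is a minimal illustrative forward pass in numpy, not the paper's implementation; the task names, channel counts, and kernel size are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def conv1d(x, w, b):
    """Valid 1D convolution over time. x: (T, C_in); w: (K, C_in, C_out); b: (C_out,)."""
    T = x.shape[0]
    K, _, C_out = w.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        # Contract the (K, C_in) window against the kernel for every output channel.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return relu(out)

def mtl_1d_cnn_forward(x, params):
    """Shared conv trunk + global average pooling, then one sigmoid head per task."""
    h = conv1d(x, params["w_conv"], params["b_conv"])
    pooled = h.mean(axis=0)  # global average pooling over the time axis
    return {task: 1.0 / (1.0 + np.exp(-(pooled @ W + b)))
            for task, (W, b) in params["heads"].items()}

# Hypothetical setup: 7 daily time steps, 4 behavioral channels, two example tasks.
params = {
    "w_conv": rng.normal(scale=0.1, size=(3, 4, 8)),  # kernel size 3, 8 filters
    "b_conv": np.zeros(8),
    "heads": {
        "week1_risk": (rng.normal(scale=0.1, size=8), 0.0),
        "term_performance": (rng.normal(scale=0.1, size=8), 0.0),
    },
}
x = rng.normal(size=(7, 4))  # one student's first-week behavioral sequence
probs = mtl_1d_cnn_forward(x, params)  # one probability per task
```

The shared trunk is what makes the model multi-task: both heads read the same pooled convolutional features, so learning one task can regularize the other.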

📝 Abstract
Supporting student success requires collaboration among multiple stakeholders. Researchers have explored machine learning models for academic performance prediction; yet key challenges remain in ensuring these models are interpretable, equitable, and actionable within real-world educational support systems. First, many models prioritize predictive accuracy but overlook human-centered considerations, limiting trust among students and reducing their usefulness for educators and institutional decision-makers. Second, most models require at least a month of data before making reliable predictions, delaying opportunities for early intervention. Third, current models primarily rely on sporadically collected, classroom-derived data, missing broader behavioral patterns that could provide more continuous and actionable insights. To address these gaps, we present three modeling approaches (LR, 1D-CNN, and MTL-1D-CNN) to classify students as low or high academic performers. We evaluate them based on explainability, fairness, and generalizability to assess their alignment with key social values. Using behavioral and self-reported data collected within the first week of two Spring terms, we demonstrate that these models can identify at-risk students as early as week one. However, trade-offs across human-centered considerations highlight the complexity of designing predictive models that effectively support multi-stakeholder decision-making and intervention strategies. We discuss these trade-offs and their implications for different stakeholders, outlining how predictive models can be integrated into student support systems. Finally, we examine broader socio-technical challenges in deploying these models and propose future directions for advancing human-centered, collaborative academic prediction systems.
Problem

Research questions and friction points this paper is trying to address.

Ensuring academic prediction models are interpretable, equitable, and actionable
Enabling early student performance prediction within the first week
Incorporating broader behavioral data for continuous, actionable insights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Early prediction using first-week behavioral data
Human-centered models with explainability and fairness
Multi-stakeholder actionable insights from diverse data
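The fairness dimension highlighted above is typically checked by comparing model behavior across demographic groups. A minimal sketch of such a group-fairness check is below; the metrics shown (selection-rate gap and true-positive-rate gap) are common illustrative choices, not necessarily the exact assessment protocol used in the paper.

```python
import numpy as np

def group_fairness_gaps(y_true, y_pred, groups):
    """Largest between-group gaps in selection rate and true-positive rate.

    y_true, y_pred: binary arrays (1 = predicted/actual "at risk").
    groups: array of demographic group labels, one per student.
    Returns (selection_rate_gap, tpr_gap); smaller gaps indicate more
    equal treatment across groups.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    sel, tpr = {}, {}
    for g in np.unique(groups):
        m = groups == g
        sel[g] = y_pred[m].mean()              # how often this group is flagged
        pos = m & (y_true == 1)
        tpr[g] = y_pred[pos].mean() if pos.any() else np.nan
    gap = lambda d: float(np.nanmax(list(d.values())) - np.nanmin(list(d.values())))
    return gap(sel), gap(tpr)

# Toy example with two hypothetical groups.
sel_gap, tpr_gap = group_fairness_gaps(
    y_true=[1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 1, 0, 1, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)
```

In the toy example both groups are flagged at the same rate (selection-rate gap 0), but true positives are caught more often in one group than the other, which is exactly the kind of disparity a group fairness assessment is meant to surface.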
Han Zhang
University of Washington, USA
Yiyi Ren
University of Washington, USA
Paula S. Nurius
University of Washington, USA
Jennifer Mankoff
University of Washington
A. Dey
University of Washington, USA