🤖 AI Summary
Current academic performance prediction models suffer from three key limitations: an overemphasis on accuracy at the expense of interpretability and fairness; reliance on monthly-scale data, which delays timely intervention; and fragmented behavioral data. To address these, we propose the first week-level, person-centered prediction framework, leveraging students’ behavioral logs, self-reported surveys, and educational features collected within the first week of the semester. We introduce a multi-task 1D-CNN (MTL-1D-CNN) model explicitly designed for interpretability, fairness, and generalizability, enabling weekly identification of high- and low-performing students and facilitating collaborative interventions. This work is the first to systematically balance these three societal values while overcoming the bottleneck of coarse-grained, monthly data. Evaluated on two real-world academic semesters, our model achieves AUC ≥ 0.82, significantly outperforming baselines. Rigorous validation via SHAP-based interpretation and group fairness assessment confirms its trustworthiness and inclusivity.
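To make the MTL-1D-CNN idea concrete, here is a minimal NumPy sketch of a multi-task 1D-CNN forward pass: a shared 1D convolutional backbone over a week of time-series features feeding two task-specific heads. All layer sizes, the hourly 168-step input, and the auxiliary head are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1D convolution: x is (T, C_in), kernels is (C_out, K, C_in)."""
    c_out, k, _ = kernels.shape
    t_out = x.shape[0] - k + 1
    out = np.empty((t_out, c_out))
    for t in range(t_out):
        window = x[t:t + k]  # (K, C_in) slice of the sequence
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One week of hourly behavioral features: 168 time steps, 4 channels
# (channel count and granularity are placeholders)
x = rng.normal(size=(168, 4))

# Shared backbone: 8 filters of width 5, then global average pooling
kernels = rng.normal(scale=0.1, size=(8, 5, 4))
features = relu(conv1d(x, kernels)).mean(axis=0)  # (8,) shared embedding

# Two task-specific heads (the "multi-task" part): performance class
# plus a hypothetical auxiliary target such as engagement level
w_perf, b_perf = rng.normal(size=8), 0.0
w_aux, b_aux = rng.normal(size=8), 0.0

p_high_performer = sigmoid(features @ w_perf + b_perf)
p_aux = sigmoid(features @ w_aux + b_aux)
print(p_high_performer, p_aux)
```

In training, the two heads would share gradients through the backbone, which is what lets an auxiliary signal regularize the main performance-prediction task.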
📝 Abstract
Supporting student success requires collaboration among multiple stakeholders. Researchers have explored machine learning models for academic performance prediction, yet key challenges remain in ensuring these models are interpretable, equitable, and actionable within real-world educational support systems. First, many models prioritize predictive accuracy but overlook human-centered considerations, limiting trust among students and reducing their usefulness for educators and institutional decision-makers. Second, most models require at least a month of data before making reliable predictions, delaying opportunities for early intervention. Third, current models primarily rely on sporadically collected, classroom-derived data, missing broader behavioral patterns that could provide more continuous and actionable insights. To address these gaps, we present three modeling approaches (LR, 1D-CNN, and MTL-1D-CNN) to classify students as low or high academic performers. We evaluate them on explainability, fairness, and generalizability to assess their alignment with key social values. Using behavioral and self-reported data collected within the first week of two Spring terms, we demonstrate that these models can identify at-risk students as early as week one. However, trade-offs across human-centered considerations highlight the complexity of designing predictive models that effectively support multi-stakeholder decision-making and intervention strategies. We discuss these trade-offs and their implications for different stakeholders, outlining how predictive models can be integrated into student support systems. Finally, we examine broader socio-technical challenges in deploying these models and propose future directions for advancing human-centered, collaborative academic prediction systems.
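The group fairness evaluation referred to above can be illustrated with one common check: comparing true-positive rates across demographic groups (the "equal opportunity" gap). The labels, predictions, and group assignments below are synthetic placeholders, not the study's data, and the metric choice is only one of several the paper's fairness assessment could use.

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN), computed over actual positives only."""
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 1))

# Synthetic binary labels, model predictions, and a binary group attribute
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

tpr_a = true_positive_rate(y_true[group == 0], y_pred[group == 0])
tpr_b = true_positive_rate(y_true[group == 1], y_pred[group == 1])
gap = abs(tpr_a - tpr_b)  # equal-opportunity gap; 0 means parity
print(tpr_a, tpr_b, gap)  # 0.666..., 0.333..., 0.333...
```

A large gap would mean the model finds genuinely high-performing (or at-risk) students in one group more reliably than in another, which is exactly the kind of inequity a human-centered deployment needs to surface before intervention decisions are made.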