🤖 AI Summary
AI bias in K-12 educational recommendation systems leads to inequitable resource access, exacerbating educational disparities across demographic groups.
Method: We propose the first responsible AI recommendation framework deeply integrating fairness awareness, combining graph neural networks with weighted matrix factorization to model heterogeneous student-resource interactions. It incorporates dynamic bias detection, attribution, and correction mechanisms supporting group fairness metrics—including statistical and equal opportunity fairness—and introduces a novel feedback-driven bias auditing module enabling explainable, closed-loop fairness optimization.
Contribution/Results: Evaluated on a real-world K-12 dataset, our method improves recommendation accuracy by 12.3% while reducing resource access disparity across gender and geographic subgroups by 68%. It advances educational AI beyond personalization toward fairness, transparency, and interpretability—establishing a foundational step for equitable, accountable learning technologies.
📝 Abstract
The growth of Educational Technology (EdTech) has enabled highly personalized learning experiences through Artificial Intelligence (AI)-based recommendation systems tailored to each student needs. However, these systems can unintentionally introduce biases, potentially limiting fair access to learning resources. This study presents a recommendation system for K-12 students, combining graph-based modeling and matrix factorization to provide personalized suggestions for extracurricular activities, learning resources, and volunteering opportunities. To address fairness concerns, the system includes a framework to detect and reduce biases by analyzing feedback across protected student groups. This work highlights the need for continuous monitoring in educational recommendation systems to support equitable, transparent, and effective learning opportunities for all students.