🤖 AI Summary
Students often have limited trust in and understanding of AI-based automated scoring systems, which constrains acceptance. This study investigates how transparency mechanisms affect student attitudes toward such systems in an educational context. Combining an NLP-based automated scoring system with a survey experiment, the research evaluates how transparency influences perceived accuracy, trust, and willingness to use. Findings indicate that transparency significantly improved students' perceptions of the system's scoring accuracy and their willingness to discuss it in survey comments, but it did not increase their willingness to be graded by it in formal assessments, possibly because baseline trust was already higher than in prior work. These results offer empirical guidance for designing AI-powered educational tools and highlight both the value and the limits of transparency in fostering explainability.
📝 Abstract
The development of effective autograders is key to scaling assessment and feedback. While NLP-based autograding systems for open-ended response questions have been found to be beneficial for providing immediate feedback, autograders are not always liked, understood, or trusted by students. Our research tested the effect of transparency on students' attitudes towards autograders. Transparent autograders increased students' perceptions of autograder accuracy and their willingness to discuss autograders in survey comments, but did not improve other related attitudes, such as willingness to be graded by them on a test, relative to a control condition without transparency. However, this lack of impact may be due to the higher levels of student trust in autograders measured in this study than in prior work in the field. We briefly discuss possible reasons for this trend.