When LLMs Help -- and Hurt -- Teaching Assistants in Proof-Based Courses

📅 2026-02-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the time-intensive and non-scalable nature of grading and providing feedback in proof-based courses. It presents the first systematic evaluation of the practical utility of large language models (LLMs) for this task, employing a multi-stage case study grounded in a detailed rubric. The research compares LLM-generated scores against those of teaching assistants with varying levels of experience and collects qualitative insights from TAs regarding the usability and perceived value of LLM-generated feedback. Findings reveal significant discrepancies between LLM and human grading; however, the LLM demonstrates practical utility in identifying critical errors. These results highlight the complementary potential of human–LLM collaboration and offer actionable directions for future co-design of AI-assisted educational workflows.

πŸ“ Abstract
Teaching assistants (TAs) are essential to grading and feedback provision in proof-based courses, yet these tasks are time-intensive and difficult to scale. Although Large Language Models (LLMs) have been studied for grading and feedback, their effectiveness in proof-based courses is still unknown. Before designing LLM-based systems for this context, a necessary prerequisite is to understand whether LLMs can meaningfully assist TAs with grading and feedback. As such, we present a multi-part case study functioning as a technology probe in an undergraduate proof-based course. We compare rubric-based grading decisions made by an LLM and TAs with varying levels of expertise and examine TAs'perceptions of feedback generated by an LLM. We find substantial disagreement between LLMs and TAs on grading decisions but that LLM-generated feedback can still be useful to TAs for submissions with major errors. We conclude by discussing design implications for human-AI grading and feedback systems in proof-based courses.
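The abstract centers on comparing rubric-based grading decisions between an LLM and TAs. As a minimal illustrative sketch (not from the paper), such agreement is often quantified with exact-match rates and chance-corrected statistics like Cohen's kappa; the rubric levels, scores, and helper functions below are assumptions for demonstration only.

```python
# Illustrative sketch (assumption, not the authors' method): quantifying agreement
# between LLM-assigned and TA-assigned rubric scores for a set of proof submissions.
from collections import Counter

def exact_agreement(llm_scores, ta_scores):
    """Fraction of submissions where the LLM and TA chose the same rubric level."""
    matches = sum(1 for l, t in zip(llm_scores, ta_scores) if l == t)
    return matches / len(llm_scores)

def cohens_kappa(llm_scores, ta_scores):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(llm_scores)
    p_observed = exact_agreement(llm_scores, ta_scores)
    llm_freq, ta_freq = Counter(llm_scores), Counter(ta_scores)
    labels = set(llm_scores) | set(ta_scores)
    p_chance = sum((llm_freq[c] / n) * (ta_freq[c] / n) for c in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical rubric levels (0 = incorrect, 1 = partial, 2 = correct) for 8 proofs.
llm = [2, 1, 0, 2, 1, 1, 0, 2]
ta  = [2, 2, 0, 1, 1, 0, 0, 2]
print(f"Exact agreement: {exact_agreement(llm, ta):.2f}")  # 0.62
print(f"Cohen's kappa:   {cohens_kappa(llm, ta):.2f}")     # 0.44
```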
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Teaching Assistants
Proof-Based Courses
Grading
Feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Proof-Based Courses
Grading Assistance
AI-Human Collaboration
Automated Feedback
Romina Mahinpei
Princeton University, USA
Sofiia Druchyna
Princeton University, USA
Manoel Horta Ribeiro
Princeton University, USA
Data Science · Social Computing · Computational Social Science