AI Summary
In higher education, providing personalized, timely feedback on large-scale student assignments remains challenging due to low automation, generic feedback content, and limited actionable guidance. To address this, we propose LLM-MATE, a framework that integrates large language models (LLMs), rubric-matching algorithms, and natural language generation, implemented via the ChatGPT API, to deliver fine-grained, task-oriented automated assessment and feedback. Unlike conventional auto-grading systems, LLM-MATE enables deep personalization, generates actionable learning recommendations, and supports transparent tracking of the assessment process. An empirical study in a software architecture course demonstrated significant improvements: instructor grading workload decreased markedly; feedback timeliness improved by 83%; and student engagement and code-revision rates rose by 27% and 35%, respectively. These results validate LLM-MATE's effectiveness in enhancing both pedagogical efficiency and learning outcomes, underscoring its scalability and generalizability across computing education contexts.
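The rubric-matching step described above can be pictured as assembling the marking criteria and a student submission into a single structured prompt, which is then sent to the ChatGPT API for per-criterion feedback. The following is a minimal illustrative sketch, not the authors' implementation; the function name, rubric fields, and example criteria are all hypothetical.

```python
# Hypothetical sketch of the rubric-to-prompt step assumed to precede the
# ChatGPT API call in a system like LLM-MATE. All names are illustrative.

def build_feedback_prompt(rubric: dict[str, str], submission: str) -> str:
    """Combine marking criteria and a student submission into one prompt
    asking the LLM for per-criterion, personalized, actionable feedback."""
    criteria = "\n".join(
        f"- {name}: {description}" for name, description in rubric.items()
    )
    return (
        "You are an assessment assistant. Mark the submission below against "
        "each rubric criterion and give specific, actionable feedback per "
        "criterion, plus concrete suggestions for improvement.\n\n"
        f"Rubric:\n{criteria}\n\n"
        f"Submission:\n{submission}"
    )

# Example rubric for a software architecture assignment (invented for
# illustration; the paper's actual rubric is not reproduced here).
rubric = {
    "Architecture rationale": "Justifies the chosen architectural style",
    "Quality attributes": "Analyses trade-offs such as scalability and security",
}
prompt = build_feedback_prompt(rubric, "My design uses microservices because ...")
# The resulting prompt would then be sent to the ChatGPT API, e.g. via the
# OpenAI Python SDK:
#   client.chat.completions.create(
#       model="gpt-4o", messages=[{"role": "user", "content": prompt}])
```

Keeping prompt construction separate from the API call also makes the assessment process easier to audit: the exact prompt sent for each submission can be logged, supporting the transparent tracking the summary describes.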
Abstract
Evolving pedagogical paradigms are driving educational transformation. One fundamental element of effective learning is relevant, immediate, and constructive feedback, yet providing such feedback to large cohorts remains an ongoing challenge in academia. Academics are therefore moving toward automated assessment to provide immediate feedback. However, current approaches are often limited in scope, offering simplistic responses that fail to give students the personalized feedback needed to guide them toward improvement. This paper addresses this limitation by investigating the performance of Large Language Models (LLMs) in processing students' assessments against predefined rubrics and marking criteria to generate personalized feedback for in-depth learning. We aim to leverage the power of existing LLMs for Marking Assessments, Tracking, and Evaluation (LLM-MATE) with personalized feedback to enhance students' learning. To evaluate the performance of LLM-MATE, we consider a Software Architecture (SA) module as a case study. The LLM-MATE approach can help module leaders overcome the assessment challenges of large cohorts, while helping students improve their learning through timely, personalized feedback. Additionally, the proposed approach facilitates establishing ground truth for automating the generation of student assessment feedback using the ChatGPT API, thereby reducing the overhead associated with large-cohort assessments.