Personalized and Constructive Feedback for Computer Science Students Using the Large Language Model (LLM)

πŸ“… 2025-10-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In higher education, providing personalized, timely feedback on large-scale student assignments remains challenging due to low automation, generic feedback content, and limited actionable guidance. To address this, we propose LLM-MATEβ€”a framework integrating large language models (LLMs), rubric-matching algorithms, and natural language generation, implemented via the ChatGPT API to deliver fine-grained, task-oriented automated assessment and feedback. Unlike conventional auto-grading systems, LLM-MATE enables deep personalization, generates executable learning recommendations, and supports transparent assessment process tracking. An empirical study in a software architecture course demonstrated significant improvements: instructor grading workload decreased markedly; feedback timeliness increased by 83%; student engagement and code revision rates rose by 27% and 35%, respectively. These results validate LLM-MATE’s effectiveness in enhancing both pedagogical efficiency and learning outcomes, underscoring its scalability and generalizability across computing education contexts.

πŸ“ Abstract
Evolving pedagogical paradigms are driving educational transformation. One fundamental aspect of effective learning is relevant, immediate, and constructive feedback to students. Providing constructive feedback to large cohorts in academia is an ongoing challenge. Therefore, academics are moving towards automated assessment to provide immediate feedback. However, current approaches are often limited in scope, offering simplistic responses that do not provide students with personalized feedback to guide them toward improvement. This paper addresses this limitation by investigating the performance of Large Language Models (LLMs) in processing students' assessments against predefined rubrics and marking criteria to generate personalized feedback for in-depth learning. We aim to leverage the power of existing LLMs for Marking Assessments, Tracking, and Evaluation (LLM-MATE) with personalized feedback to enhance students' learning. To evaluate the performance of LLM-MATE, we consider the Software Architecture (SA) module as a case study. The LLM-MATE approach can help module leaders overcome assessment challenges with large cohorts. It also helps students improve their learning by obtaining personalized feedback in a timely manner. Additionally, the proposed approach will facilitate the establishment of ground truth for automating the generation of student assessment feedback using the ChatGPT API, thereby reducing the overhead associated with large-cohort assessments.
Problem

Research questions and friction points this paper is trying to address.

Providing personalized feedback for large student cohorts
Overcoming limitations of simplistic automated assessment systems
Generating constructive feedback using LLMs with marking criteria
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM processes assessments with predefined rubrics
Generates personalized feedback using ChatGPT API
Automates marking and evaluation for large cohorts
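The pipeline these bullets describe, combining a marking rubric with a student submission and sending the result to the ChatGPT API for personalized feedback, could be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the function names, prompt wording, and model choice are all hypothetical, and only the prompt-building step is shown in full.

```python
import json
import os
import urllib.request

def build_feedback_prompt(rubric: str, submission: str) -> str:
    """Combine a marking rubric and a student's submission into one prompt
    asking the LLM for per-criterion, actionable feedback.
    (Illustrative wording; the paper's actual prompts are not published here.)"""
    return (
        "You are a teaching assistant for a Software Architecture module.\n"
        "Mark the submission against each rubric criterion and give the\n"
        "student specific, constructive steps for improvement.\n\n"
        f"Rubric:\n{rubric}\n\n"
        f"Submission:\n{submission}\n"
    )

def generate_feedback(rubric: str, submission: str,
                      model: str = "gpt-4o-mini") -> str:
    """POST the prompt to the ChatGPT chat-completions endpoint (stdlib only).
    Requires the OPENAI_API_KEY environment variable to be set."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user",
             "content": build_feedback_prompt(rubric, submission)},
        ],
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The generated feedback is the first choice's message content.
    return body["choices"][0]["message"]["content"]
```

In practice the rubric string would enumerate the module's marking criteria (e.g. one line per criterion with its mark weighting), so the model's response can be traced back criterion by criterion, which is what makes the feedback "task-oriented" rather than generic.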
Javed Ali Khan
University of Hertfordshire, UK
Software Engineering, CrowdRE, Repositories Mining, AI4SE, Health Analytics
Muhammad Yaqoob
Department of Computer Science, University of Hertfordshire, Hatfield, UK
Mamoona Tasadduq
Information Technology University, Lahore, Pakistan
Hafsa Shareef Dar
Department of Software Engineering, Faculty of Computing and IT, University of Gujrat, Pakistan
Aitezaz Ahsan
Department of Computer Science, University of Hertfordshire, Hatfield, UK