Design of AI-Powered Tool for Self-Regulation Support in Programming Education

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI programming tutors typically run outside Learning Management Systems (LMS), which prevents them from drawing on course materials, problem context, and student code execution output, and often results in pedagogically inappropriate feedback. Moreover, current LLM-based educational tools emphasize knowledge delivery while neglecting self-regulated learning (SRL) skill development. This paper introduces CodeRunner Agent, an LLM-powered programming assistant embedded within the Moodle CodeRunner plugin. Using context-fused prompt engineering and a strategy-aware response generation framework, it delivers context-sensitive feedback grounded in course resources, problem specifications, student submissions, and runtime results. The system supports instructor-customizable interventions and explicitly scaffolds debugging strategies and metacognitive regulation. The authors argue that this yields more relevant feedback, strengthens students' SRL awareness and skills, and moves toward an LLM-driven pedagogical loop with skill-oriented feedback in authentic programming instruction.

📝 Abstract
Large Language Model (LLM) tools have demonstrated their potential to deliver high-quality assistance by providing instant, personalized feedback, which is crucial for effective programming education. However, many of these tools operate independently of institutional Learning Management Systems, creating a significant disconnect: this isolation limits their ability to leverage learning materials and exercise context when generating tailored, context-aware feedback. Furthermore, previous research on self-regulated learning and LLM support has focused mainly on knowledge acquisition rather than the development of self-regulation skills. To address these challenges, we developed CodeRunner Agent, an LLM-based programming assistant that integrates with CodeRunner, a Moodle plugin that executes and automatically grades student-submitted code. CodeRunner Agent empowers educators to customize AI-generated feedback by incorporating detailed context from lecture materials, programming questions, student answers, and execution results. It also enhances students' self-regulated learning by providing strategy-based AI responses. This integrated, context-aware, and skill-focused approach offers promising avenues for data-driven improvement of programming education.
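The context-fused feedback the abstract describes can be sketched as a prompt-assembly step that gathers LMS context before calling an LLM. This is a minimal illustrative sketch, not the paper's implementation: the class, function, and field names below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SubmissionContext:
    """Context gathered from the LMS for one submission (illustrative fields)."""
    lecture_excerpt: str   # relevant course material
    question_text: str     # the CodeRunner problem statement
    student_code: str      # the student's submitted answer
    run_output: str        # execution / automated-grading result

def build_feedback_prompt(ctx: SubmissionContext, instructor_note: str = "") -> str:
    """Fuse course context into a single prompt for the LLM.

    Mirrors the paper's idea of grounding feedback in lecture materials,
    the problem, the student's answer, and runtime output; the exact
    prompt layout here is a hypothetical sketch.
    """
    sections = [
        ("Lecture material", ctx.lecture_excerpt),
        ("Programming question", ctx.question_text),
        ("Student answer", ctx.student_code),
        ("Execution result", ctx.run_output),
    ]
    body = "\n\n".join(f"## {title}\n{text}" for title, text in sections)
    # Instructor-customizable intervention: fall back to a generic SRL-oriented
    # guideline when the educator supplies no note.
    guidance = instructor_note or (
        "Give hints that encourage the student to debug and reflect; "
        "do not reveal a complete solution."
    )
    return f"{body}\n\n## Instructor guidance\n{guidance}"
```

The assembled string would then be sent to whichever LLM backend the deployment uses; the instructor note is what makes the feedback customizable per course.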
Problem

Research questions and friction points this paper is trying to address.

AI tutoring tools typically operate outside Learning Management Systems, cut off from course context
LLM-based feedback emphasizes knowledge delivery over students' self-regulation skills
Feedback is rarely grounded in learning materials and code execution results
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based assistant integrates with Moodle
Customizable AI feedback using lecture context
Strategy-based responses enhance self-regulation
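The strategy-based responses listed above could, for instance, be selected by mapping a self-regulated-learning phase to a feedback template. The phase names and template texts below are illustrative assumptions; the paper's actual strategy set is not specified in this summary.

```python
# Hypothetical mapping from SRL phases to feedback strategies; the real
# CodeRunner Agent strategies may differ.
SRL_STRATEGIES = {
    "planning": "Before coding, restate the problem and outline your steps.",
    "monitoring": "Compare your output with the expected output: where do they first diverge?",
    "debugging": "Add a print or trace at the point of divergence and re-run the failing test.",
    "reflection": "Summarize what the bug was and how you would avoid it next time.",
}

def strategy_hint(phase: str) -> str:
    """Return a strategy-oriented hint for the given SRL phase."""
    if phase not in SRL_STRATEGIES:
        raise ValueError(f"unknown SRL phase: {phase!r}")
    return SRL_STRATEGIES[phase]
```

A table like this keeps the pedagogy inspectable by instructors: changing a strategy means editing a template rather than retraining or re-prompting a model.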