Supervised Fine-Tuning LLMs to Behave as Pedagogical Agents in Programming Education

📅 2025-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) in programming education often over-provide solutions, undermining students' autonomous debugging skills. Method: We propose an education-oriented supervised fine-tuning paradigm to develop GuideLM—a pedagogically guided LLM. Grounded in constructivist learning theory and cognitive load theory, we curate a dataset of 528 student-question/teacher-answer dialogue pairs and design a fine-tuning strategy prioritizing *instructional effectiveness* over general accuracy. GuideLM thus generates conceptual scaffolding, Socratic questioning, and concise error explanations—avoiding direct solution provision. Contribution/Results: We introduce the first theory-driven instructional adaptation framework for LLMs. Evaluation shows GuideLM improves Socratic guidance by 8% and linguistic conciseness by 58% over GPT-4o, with significantly enhanced pedagogical alignment—while incurring only a marginal reduction in general task accuracy.

📝 Abstract
Large language models (LLMs) are increasingly being explored in higher education, yet their effectiveness as teaching agents remains underexamined. In this paper, we present the development of GuideLM, a fine-tuned LLM designed for programming education. GuideLM has been integrated into the Debugging C Compiler (DCC), an educational C compiler that leverages LLMs to generate pedagogically sound error explanations. Previously, DCC relied on off-the-shelf OpenAI models, which, while accurate, often over-assisted students by directly providing solutions despite contrary prompting. To address this, we employed supervised fine-tuning (SFT) on a dataset of 528 student-question/teacher-answer pairs, creating two models: GuideLM and GuideLM-mini, fine-tuned on ChatGPT-4o and 4o-mini, respectively. We conducted an expert analysis of 400 responses per model, comparing their pedagogical effectiveness against base OpenAI models. Our evaluation, grounded in constructivism and cognitive load theory, assessed factors such as conceptual scaffolding, clarity, and Socratic guidance. Results indicate that GuideLM and GuideLM-mini improve pedagogical performance, with an 8% increase in Socratic guidance and a 58% improvement in economy of words compared to GPT-4o. However, this refinement comes at the cost of a slight reduction in general accuracy. While further work is needed, our findings suggest that fine-tuning LLMs with targeted datasets is a promising approach for developing models better suited to educational contexts.
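The paper does not publish its dataset or training configuration. As a minimal sketch of how a student-question/teacher-answer dataset like the one described could be formatted for OpenAI's chat-style supervised fine-tuning (the system prompt, example pair, and file name below are all hypothetical, not taken from the paper):

```python
import json

# Hypothetical pedagogical system prompt: guide the student, don't solve for them.
SYSTEM_PROMPT = (
    "You are a programming tutor. Explain the error conceptually and ask "
    "a guiding question; never provide the corrected code directly."
)

# One invented student-question/teacher-answer pair for illustration;
# the paper's 528-pair dataset is not public.
pairs = [
    {
        "question": 'My C program crashes with a segmentation fault on: '
                    'char *str; scanf("%s", str); Why?',
        "answer": "Think about where str points before scanf writes to it. "
                  "What memory has been set aside for the input? How could "
                  "you reserve space for the string first?",
    },
]

def to_chat_example(pair):
    """Convert one Q/A pair into the chat-format record OpenAI's
    fine-tuning API expects: one JSON object per line of a .jsonl file."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]
    }

# Write the training file in JSON Lines format.
with open("guidelm_train.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(to_chat_example(pair)) + "\n")
```

Launching the actual fine-tune would then go through the `openai` package (uploading the file with `purpose="fine-tune"` and creating a job via `client.fine_tuning.jobs.create`), with hyperparameters the paper does not report.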
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning LLMs for effective programming education.
Improving pedagogical guidance in error explanations.
Balancing accuracy and educational support in LLMs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supervised fine-tuning for educational LLMs
Integration into Debugging C Compiler
Improved Socratic guidance and clarity
Emily Ross
University of New South Wales, Sydney, Australia
Yuval Kansal
ECE PhD Student, Princeton University
AI for Science · NLP · Reasoning in LLMs
Jake Renzella
University of New South Wales
Computer Science Education · Artificial Intelligence · Software Engineering
Alexandra Vassar
University of New South Wales, Sydney, Australia
Andrew Taylor
University of New South Wales, Sydney, Australia