Enhancing Large Language Models for Automated Homework Assessment in Undergraduate Circuit Analysis

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) show insufficient accuracy when grading undergraduate circuit analysis assignments with simple prompts. Method: This paper proposes an optimization framework integrating multi-step reasoning prompting, context-aware data augmentation, and error-directed prompt injection. Unlike conventional single-step prompting, the method employs a structured reasoning chain that guides the LLM through sequential circuit topology analysis, equation formulation, and physical consistency verification, and it dynamically injects corrective contextual cues derived from common student misconception patterns. Contribution/Results: Evaluated on GPT-4o, the framework raises the correct response rate on foundational circuit analysis problems from 74.71% to 97.70%, cutting the error rate from 25.29% to 2.30%. To our knowledge, this is the first work to synergistically combine interpretable prompting mechanisms with domain-specific physical constraints, establishing a paradigm for high-reliability, traceable LLM-based assessment in engineering education.
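The structured reasoning chain described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the step texts, the `grade_submission` function, and the `llm` callable are all assumptions, standing in for whatever chat-API wrapper is actually used.

```python
# Sketch of multi-step reasoning prompting for circuit-analysis grading.
# Each step's model output is appended to the context before the next
# step's instruction, so later steps can build on earlier analysis.

STEPS = [
    "Step 1 - Topology: list the nodes, branches, and sources in the circuit.",
    "Step 2 - Equations: write the KCL/KVL equations for the topology above.",
    "Step 3 - Verify: check the student's answer against the equations, "
    "confirm units and sign conventions, then output CORRECT or INCORRECT.",
]

def grade_submission(problem: str, student_answer: str, llm) -> str:
    """Run the chained prompts; `llm` is any callable prompt -> completion
    (e.g. a thin wrapper around a chat completion API)."""
    context = f"Problem:\n{problem}\n\nStudent answer:\n{student_answer}"
    for instruction in STEPS:
        context += f"\n\n{instruction}"
        context += f"\nLLM: {llm(context)}"  # append model output as new context
    # The final verdict is the last line produced by the verification step.
    return context.splitlines()[-1]
```

The key design point, per the summary, is that topology extraction and equation formulation happen before any correctness judgment, so the verdict is traceable to explicit intermediate reasoning rather than a single opaque answer.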

📝 Abstract
This research full paper presents an enhancement pipeline for large language models (LLMs) in assessing homework for an undergraduate circuit analysis course, aiming to improve LLMs' capacity to provide personalized support to electrical engineering students. Existing evaluations have demonstrated that GPT-4o possesses promising capabilities in assessing student homework in this domain. Building on these findings, we enhance GPT-4o's performance through multi-step prompting, contextual data augmentation, and the incorporation of targeted hints. These strategies effectively address common errors observed in GPT-4o's responses when using simple prompts, leading to a substantial improvement in assessment accuracy. Specifically, the correct response rate for GPT-4o increases from 74.71% to 97.70% after applying the enhanced prompting and augmented data on entry-level circuit analysis topics. This work lays a foundation for the effective integration of LLMs into circuit analysis instruction and, more broadly, into engineering education.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs for automated homework assessment in circuit analysis
Improving GPT-4o's accuracy in electrical engineering student evaluations
Addressing common errors in LLM responses through enhanced prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-step prompting enhances LLM assessment accuracy
Contextual data augmentation improves personalized feedback quality
Targeted hints address common student errors effectively
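The targeted-hint idea above amounts to error-directed prompt injection: hints keyed to known misconception patterns are appended to the grading prompt when a matching pattern is detected. A minimal sketch, assuming hypothetical trigger keywords and hint texts (the paper's actual error taxonomy is not shown here):

```python
# Illustrative error-directed prompt injection: each entry maps a trigger
# keyword (a proxy for a common student misconception) to a corrective
# hint that gets appended to the base grading prompt.

COMMON_ERROR_HINTS = {
    "parallel": "Hint: parallel resistors combine as 1/Req = sum(1/Ri), "
                "not by direct addition.",
    "current source": "Hint: the voltage across an ideal current source is "
                      "determined by the rest of the circuit.",
    "superposition": "Hint: deactivate sources one at a time (short voltage "
                     "sources, open current sources).",
}

def inject_hints(base_prompt: str, problem_text: str) -> str:
    """Append every hint whose trigger keyword appears in the problem."""
    hints = [hint for key, hint in COMMON_ERROR_HINTS.items()
             if key in problem_text.lower()]
    return base_prompt + ("\n" + "\n".join(hints) if hints else "")
```

Because hints are injected only when their trigger fires, the prompt stays short on problems where the misconception cannot arise, which keeps the baseline behavior unchanged outside the targeted error classes.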
Liangliang Chen
Georgia Institute of Technology
Machine Learning · Robotics · Human-in-the-loop Control · AI in Education · Control Theory & Application
Huiru Xie
Georgia Institute of Technology
Zhihao Qin
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, USA
Yiming Guo
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, USA
Jacqueline Rohde
Assessment Coordinator, Georgia Institute of Technology
Engineering Education
Ying Zhang
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, USA