Benchmarking Large Language Models on Homework Assessment in Circuit Analysis

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study evaluates the capabilities and limitations of large language models (LLMs) in automated grading of circuit analysis coursework. Method: We construct the first LaTeX-formatted student solution dataset for this domain and introduce the first structured evaluation benchmark for circuit analysis. We propose a five-dimensional fine-grained assessment framework—covering solution completeness, methodological correctness, answer accuracy, computational errors, and unit conformity—that eliminates reliance on image recognition by uniformly representing solutions in LaTeX and employing zero-shot prompting. Contribution/Results: Experiments show that GPT-4o and Llama 3 70B significantly outperform GPT-3.5 Turbo; however, systematic deficiencies persist in symbolic reasoning and circuit topology comprehension. Our work establishes a reproducible, scalable benchmark for AI teaching assistants in engineering education and identifies concrete avenues for improvement.
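To make the five-dimensional framework concrete, here is a minimal sketch of what such a zero-shot grading prompt might look like. The function name, metric labels, and rubric wording are illustrative assumptions, not the authors' actual template.

```python
# Sketch of a zero-shot grading prompt in the spirit of the paper's
# five-dimensional framework. All names here (build_grading_prompt,
# the exact metric wording) are assumptions, not the authors' template.

METRICS = [
    "completeness",       # does the solution address every part of the problem?
    "method",             # is the chosen analysis technique (e.g., nodal analysis) valid?
    "final_answer",       # does the final result match the reference solution?
    "arithmetic_error",   # are there computational slips along the way?
    "units",              # are physical units present and consistent?
]

def build_grading_prompt(problem_latex: str, reference_latex: str, student_latex: str) -> str:
    """Assemble a single zero-shot prompt: instructions only, no worked examples."""
    rubric = "\n".join(f"- {m}: answer yes/no and justify briefly" for m in METRICS)
    return (
        "You are grading an undergraduate circuit-analysis homework solution.\n\n"
        f"Problem (LaTeX):\n{problem_latex}\n\n"
        f"Reference solution (LaTeX):\n{reference_latex}\n\n"
        f"Student solution (LaTeX):\n{student_latex}\n\n"
        "Evaluate the student solution on each metric below and reply in JSON:\n"
        f"{rubric}"
    )
```

Because everything is plain LaTeX text, the same prompt can be sent unchanged to any of the benchmarked models, which is what makes the comparison across LLMs controlled.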

📝 Abstract
Large language models (LLMs) have the potential to revolutionize various fields, including code development, robotics, finance, and education, due to their extensive prior knowledge and rapid advancements. This paper investigates how LLMs can be leveraged in engineering education. Specifically, we benchmark the capabilities of different LLMs, including GPT-3.5 Turbo, GPT-4o, and Llama 3 70B, in assessing homework for an undergraduate-level circuit analysis course. We have developed a novel dataset consisting of official reference solutions and real student solutions to problems from various topics in circuit analysis. To overcome the limitations of image recognition in current state-of-the-art LLMs, the solutions in the dataset are converted to LaTeX format. Using this dataset, a prompt template is designed to test five metrics of student solutions: completeness, method, final answer, arithmetic error, and units. The results show that GPT-4o and Llama 3 70B perform significantly better than GPT-3.5 Turbo across all five metrics, with GPT-4o and Llama 3 70B each having distinct advantages in different evaluation aspects. Additionally, we present insights into the limitations of current LLMs in several aspects of circuit analysis. Given the paramount importance of ensuring reliability in LLM-generated homework assessment to avoid misleading students, our results establish benchmarks and offer valuable insights for the development of a reliable, personalized tutor for circuit analysis -- a focus of our future work. Furthermore, the proposed evaluation methods can be generalized to a broader range of courses for engineering education in the future.
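The pipeline described in the abstract (LaTeX-encoded solutions scored by GPT-3.5 Turbo, GPT-4o, and Llama 3 70B through a shared prompt) could be wired up roughly as below. This is a sketch under assumptions, not the authors' code: it reuses the hypothetical `build_grading_prompt` from the summary above, and Llama 3 70B could be queried the same way through any OpenAI-compatible serving endpoint.

```python
# Sketch of scoring one student solution with an OpenAI chat model.
# Assumes build_grading_prompt (defined in the earlier sketch) and an
# OPENAI_API_KEY in the environment; model choice and parsing are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_solution(problem: str, reference: str, student: str,
                   model: str = "gpt-4o") -> dict:
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": build_grading_prompt(problem, reference, student),
        }],
    )
    # The prompt requests JSON; real model outputs may need more defensive parsing.
    return json.loads(response.choices[0].message.content)
```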
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs for circuit analysis homework assessment
Comparing GPT-3.5 Turbo, GPT-4o, and Llama 3 70B performance
Developing reliable LLM-based tutors for engineering education
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking LLMs on circuit analysis homework
Converting solutions to LaTeX to sidestep the image-recognition limitations of current LLMs
Designing a prompt template covering five evaluation metrics (an illustrative output record follows below)
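As referenced above, one graded record under the five-metric template might take the following shape. The field names mirror the metrics listed in the abstract; the values are invented purely to illustrate the structure, not drawn from the paper's data.

```python
# Hypothetical shape of a single graded record; values are made up.
example_assessment = {
    "completeness": "yes - all three parts of the problem are attempted",
    "method": "yes - nodal analysis is a valid approach here",
    "final_answer": "no - V_out disagrees with the reference solution",
    "arithmetic_error": "yes - a sign is dropped when summing branch currents",
    "units": "yes - volts and amperes are stated consistently",
}
```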
👥 Authors

Liangliang Chen
Georgia Institute of Technology
Machine Learning · Robotics · Human-in-the-loop Control · AI in Education · Control Theory & Application

Zhihao Qin
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA.

Yiming Guo
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA.

Jacqueline Rohde
Assessment Coordinator, Georgia Institute of Technology
Engineering Education

Ying Zhang
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA.