🤖 AI Summary
This study evaluates the capabilities and limitations of large language models (LLMs) in the automated grading of circuit analysis coursework. Method: We construct the first LaTeX-formatted student solution dataset for this domain and introduce the first structured evaluation benchmark for circuit analysis. We propose a fine-grained assessment framework spanning five dimensions (solution completeness, methodological correctness, answer accuracy, computational errors, and unit conformity) that eliminates reliance on image recognition by uniformly representing solutions in LaTeX and employing zero-shot prompting. Contribution/Results: Experiments show that GPT-4o and Llama 3 70B significantly outperform GPT-3.5 Turbo; however, systematic deficiencies persist in symbolic reasoning and circuit topology comprehension. Our work establishes a reproducible, scalable benchmark for AI teaching assistants in engineering education and identifies concrete avenues for improvement.
📝 Abstract
Large language models (LLMs) have the potential to revolutionize various fields, including code development, robotics, finance, and education, due to their extensive prior knowledge and rapid advancements. This paper investigates how LLMs can be leveraged in engineering education. Specifically, we benchmark the capabilities of different LLMs, including GPT-3.5 Turbo, GPT-4o, and Llama 3 70B, in assessing homework for an undergraduate-level circuit analysis course. We have developed a novel dataset consisting of official reference solutions and real student solutions to problems on various topics in circuit analysis. To overcome the limitations of image recognition in current state-of-the-art LLMs, the solutions in the dataset are converted to LaTeX format. Using this dataset, a prompt template is designed to test five metrics of student solutions: completeness, method, final answer, arithmetic error, and units. The results show that GPT-4o and Llama 3 70B perform significantly better than GPT-3.5 Turbo across all five metrics, with GPT-4o and Llama 3 70B each holding distinct advantages in different evaluation aspects. Additionally, we present insights into the limitations of current LLMs in several aspects of circuit analysis. Given the paramount importance of ensuring reliability in LLM-generated homework assessment to avoid misleading students, our results establish benchmarks and offer valuable insights for the development of a reliable, personalized tutor for circuit analysis, which is a focus of our future work. Furthermore, the proposed evaluation methods can be generalized to a broader range of courses in engineering education.
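To make the evaluation setup concrete, the following is a minimal sketch of how a zero-shot grading prompt covering the five metrics might be assembled. The wording, metric phrasing, and JSON output format here are illustrative assumptions, not the authors' actual prompt template.

```python
# Hypothetical zero-shot grading prompt builder in the spirit of the paper's
# five-metric template. The criterion wording and output schema are
# assumptions for illustration, not the authors' exact prompt.

METRICS = {
    "completeness": "Does the solution address every part of the problem?",
    "method": "Is the chosen analysis method (e.g., nodal or mesh analysis) appropriate and correctly applied?",
    "final_answer": "Does the final answer match the reference solution?",
    "arithmetic_error": "Are there any arithmetic mistakes in the derivation?",
    "units": "Are units stated and consistent throughout?",
}

def build_grading_prompt(problem_latex: str, reference_latex: str,
                         student_latex: str) -> str:
    """Assemble a zero-shot prompt asking an LLM to grade a LaTeX-formatted
    student solution against a reference solution on five metrics."""
    criteria = "\n".join(f"- {name}: {question}"
                         for name, question in METRICS.items())
    return (
        "You are grading an undergraduate circuit-analysis solution.\n"
        "All solutions are given in LaTeX; no figures are provided.\n\n"
        f"Problem:\n{problem_latex}\n\n"
        f"Reference solution:\n{reference_latex}\n\n"
        f"Student solution:\n{student_latex}\n\n"
        "Evaluate the student solution on these criteria:\n"
        f"{criteria}\n\n"
        "Respond in JSON with one boolean per criterion and a brief justification."
    )
```

Because the prompt is zero-shot, the same template would be sent unchanged to each model (GPT-3.5 Turbo, GPT-4o, Llama 3 70B), making the per-metric comparison across models direct.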