🤖 AI Summary
This study addresses the lack of systematic evaluation of open-source large language models (LLMs) for fine-grained assessment of UML class diagrams, a task previously examined primarily with closed-source models. The authors propose a structured scoring protocol in which teaching assistants and six prominent open-source LLMs independently evaluate student-submitted UML class diagrams from a software design course against specific rubric items. Quantitative analysis at the fine-grained level shows that individual models reach up to 88.56% accuracy on certain criteria, with a Pearson correlation coefficient of up to 0.78 relative to human raters. Furthermore, a hybrid strategy that selects the best-performing model per scoring criterion outperforms any single-model approach and attains overall performance comparable to that of human teaching assistants.
📝 Abstract
In this paper, we investigate the potential of open-source Large Language Models (LLMs) for grading Unified Modeling Language (UML) class diagrams. In contrast to existing work, which primarily evaluates proprietary LLMs, we focus on non-proprietary models, making our approach suitable for universities where transparency and cost are critical. Additionally, existing studies assess performance over complete diagrams rather than individual criteria, offering limited insight into how automated grading aligns with human evaluation.
To address these gaps, we propose a grading pipeline in which student-generated UML class diagrams are independently evaluated by both teaching assistants (TAs) and LLMs. Grades are then compared at the level of individual criteria. We evaluate this pipeline through a quantitative study of 92 UML class diagrams from a software design course, comparing TA grades against assessments produced by six popular open-source LLMs. Performance is measured per criterion, highlighting areas where LLMs diverge from human graders. Our results show per-criterion accuracy of up to 88.56% and a Pearson correlation coefficient of up to 0.78, a substantial improvement over previous work while using only open-source models. We also explore the concept of an optimal model that combines the best-performing LLM per criterion. This optimal model achieves performance close to that of a TA, suggesting a possible path toward a mixed-initiative grading system. Our findings demonstrate that open-source LLMs can effectively support UML class diagram grading by making per-criterion alignment with human grading explicit. The proposed pipeline offers a practical way to manage assessment workloads as student numbers grow.
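The per-criterion comparison and the "optimal model" selection described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the criterion names, grade vectors, and helper functions (`accuracy`, `pearson`, `best_model_per_criterion`) are assumptions for demonstration.

```python
# Hypothetical sketch: compare TA and LLM grades per rubric criterion,
# then build a hybrid grader by picking the best model per criterion.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two grade vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def accuracy(model, ta):
    """Fraction of diagrams where the model's grade matches the TA's."""
    return sum(m == t for m, t in zip(model, ta)) / len(ta)

def best_model_per_criterion(ta_grades, model_grades):
    """For each criterion, select the model whose grades agree most
    often with the TA; combining these picks yields the hybrid
    ('optimal') grader discussed above."""
    best = {}
    for criterion, ta in ta_grades.items():
        best[criterion] = max(
            model_grades,
            key=lambda m: accuracy(model_grades[m][criterion], ta),
        )
    return best
```

Under this sketch, the hybrid grader simply routes each rubric item to whichever model historically agrees best with human raters on that item, which is why it can exceed any single model's overall performance.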