AI Summary
This study addresses the challenge of automatically grading handwritten diagram-based assignments in STEM courses. We propose an assessment framework that integrates multimodal meta-learning with vision-language large models (VLLMs). To our knowledge, this is the first systematic comparison of these two paradigms on handwritten graph recognition and classification tasks, and it reveals complementary strengths: meta-learning achieves higher accuracy on binary classification, whereas VLLMs slightly outperform on ternary classification but exhibit lower stability. Methodologically, we jointly leverage image processing and textual understanding techniques to enable end-to-end feature modeling and fine-grained scoring of hybrid handwritten diagram–text submissions. Evaluated on a real-world educational dataset, our approach establishes an interpretable and scalable paradigm for AI-assisted assessment, improving consistency and efficiency in online mathematics education evaluation.
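The joint use of image and text signals mentioned above can be illustrated with a minimal fusion sketch. This is a hypothetical example, not the study's actual pipeline: the encoders, feature dimensions, and fusion strategy below are assumptions for illustration only.

```python
# Hypothetical sketch of fusing image and text features into a single
# joint embedding before scoring. The real encoders and fusion method
# used in the study are not specified in this summary.
import numpy as np

def fuse(image_feat: np.ndarray, text_feat: np.ndarray) -> np.ndarray:
    """L2-normalize each modality, then concatenate into a joint vector."""
    def l2(v: np.ndarray) -> np.ndarray:
        return v / np.linalg.norm(v)
    return np.concatenate([l2(image_feat), l2(text_feat)])

# Stand-in vectors: a graph-image embedding and a text embedding.
img = np.array([3.0, 4.0])
txt = np.array([0.0, 2.0, 0.0])
joint = fuse(img, txt)
print(joint.tolist())  # -> [0.6, 0.8, 0.0, 1.0, 0.0]
```

Normalizing each modality before concatenation keeps one modality from dominating the joint representation purely by feature scale.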
Abstract
With the rise of online learning, the demand for efficient and consistent assessment in mathematics has grown significantly over the past decade. Machine learning (ML), particularly natural language processing (NLP), has been widely used to autograde student responses, especially those involving text and/or mathematical expressions. However, there has been little research on autograding responses that involve students' handwritten graphs, despite their prevalence in Science, Technology, Engineering, and Mathematics (STEM) curricula. In this study, we implement multimodal meta-learning models for autograding images containing students' handwritten graphs and text, and we compare their performance against Vision Large Language Models (VLLMs). Our results, evaluated on a real-world dataset collected at our institution, show that the best-performing meta-learning models outperform VLLMs on 2-way classification tasks. In contrast, on the more complex 3-way classification tasks, the best-performing VLLMs slightly outperform the meta-learning models. While VLLMs show promising results, their reliability and practical applicability remain uncertain and require further investigation.
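The N-way classification setup above can be sketched with a prototypical-network-style episode, one common meta-learning formulation. This is a toy illustration under assumed 2-D embeddings; the study's actual model architecture and features are not described in this abstract.

```python
# Hypothetical sketch of an N-way few-shot episode in the style of
# prototypical networks: average the support embeddings per class, then
# assign each query to its nearest class prototype.
import numpy as np

def prototypes(support: np.ndarray, labels: np.ndarray, n_way: int) -> np.ndarray:
    """Mean embedding per class from the support set (shape: n_way x dim)."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_way)])

def classify(query: np.ndarray, protos: np.ndarray) -> np.ndarray:
    """Label each query embedding by its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way episode: class 0 clustered near the origin, class 1 near (5, 5).
support = np.array([[0.1, 0.0], [0.0, 0.2], [5.0, 4.9], [4.8, 5.1]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_way=2)
query = np.array([[0.05, 0.1], [5.1, 5.0]])
print(classify(query, protos).tolist())  # -> [0, 1]
```

Extending the episode to 3-way classification only requires a third class in the support set; no retraining of the distance rule itself is needed, which is what makes such models attractive for low-data grading tasks.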