🤖 AI Summary
To address the low efficiency of manually grading handwritten open-ended questions in university STEM courses, this paper presents an end-to-end AI-assisted scoring system. The system integrates optical character recognition (OCR) with large language models (LLMs) to achieve high-accuracy transcription of handwritten content, semantics-driven automated scoring, confidence-aware evaluation, and personalized feedback generation. It integrates LLMs deeply across the entire grading pipeline (transcription, scoring, calibration, and feedback), enabling instructor-in-the-loop intervention and dynamic alignment with evolving rubrics. Deployed at more than 20 universities across Computer Science, Mathematics, Physics, and Chemistry, the system reduces grading time by 65% on average, achieves a 95.4% agreement rate with human instructors on high-confidence predictions, and has processed over 300,000 student responses.
📝 Abstract
Grading handwritten, open-ended responses remains a major bottleneck in large university STEM courses. We introduce Pensieve (https://www.pensieve.co), an AI-assisted grading platform that leverages large language models (LLMs) to transcribe and evaluate student work, providing instructors with rubric-aligned scores, transcriptions, and confidence ratings. Unlike prior tools that focus narrowly on specific tasks such as transcription or rubric generation, Pensieve supports the entire grading pipeline, from scanned student submissions to final feedback, within a human-in-the-loop interface.
Pensieve has been deployed in real-world courses at over 20 institutions and has graded more than 300,000 student responses. We present system details and empirical results across four core STEM disciplines: Computer Science, Mathematics, Physics, and Chemistry. Our findings show that Pensieve reduces grading time by an average of 65%, while maintaining a 95.4% agreement rate with instructor-assigned grades for high-confidence predictions.
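To make the confidence-aware, human-in-the-loop workflow described above concrete, here is a minimal sketch of such a pipeline: transcribe a submission, score it against a rubric, and route low-confidence predictions to the instructor. All function names, the keyword-matching scorer, and the confidence heuristic are illustrative assumptions, not Pensieve's actual API or models.

```python
from dataclasses import dataclass

@dataclass
class GradedResponse:
    transcription: str
    score: float       # fraction of rubric credit awarded
    confidence: float  # model's self-reported confidence in [0, 1]

def transcribe(image_bytes: bytes) -> str:
    # Stand-in for an OCR/LLM transcription call on a scanned page.
    return image_bytes.decode("utf-8", errors="ignore")

def score(transcription: str, rubric: list[str]) -> tuple[float, float]:
    # Stand-in for an LLM rubric-alignment call. Here, score is the
    # fraction of rubric items mentioned; confidence is a toy heuristic
    # that is high only for clear-cut (all-or-nothing) cases.
    hits = sum(1 for item in rubric if item.lower() in transcription.lower())
    s = hits / len(rubric)
    conf = 1.0 if s in (0.0, 1.0) else 0.6
    return s, conf

def grade(image_bytes: bytes, rubric: list[str], threshold: float = 0.9):
    text = transcribe(image_bytes)
    s, conf = score(text, rubric)
    # Human-in-the-loop gate: predictions below the confidence
    # threshold are flagged for instructor review rather than
    # auto-accepted, which is what makes the reported agreement
    # rate conditional on "high-confidence predictions".
    needs_review = conf < threshold
    return GradedResponse(text, s, conf), needs_review
```

The key design point is the gate in `grade`: automation handles the confident cases, while ambiguous responses stay with the instructor, so agreement is measured only where the system commits to a score.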