AI Summary
Automatic essay scoring for Arabic is hindered by the language's morphological and syntactic complexity as well as the scarcity of annotated datasets, compounded by the absence of educator-friendly tools. This work proposes and implements the first end-to-end, user-friendly web platform that delivers real-time Arabic essay scoring while abstracting away the technical complexities of underlying model APIs. The system integrates assignment management, batch upload capabilities, configurable multi-model scoring, and granular analytic feedback. By incorporating multiple state-of-the-art Arabic automated essay scoring models, the platform supports both efficient batch processing and programmatic API access. It achieves a balanced trade-off between scoring accuracy and computational efficiency, enhancing scoring consistency, accessibility, and pedagogical utility for educators and learners alike.
Abstract
In recent years, Automated Essay Scoring (AES) systems have gained increasing attention as scalable and consistent solutions for assessing the proficiency of student writing. Despite recent progress, support for Arabic AES remains limited due to linguistic complexity and the scarcity of large, publicly available annotated datasets. In this work, we present Qayyem, a web-based platform designed to support Arabic AES by providing an integrated workflow for assignment creation, batch essay upload, scoring configuration, and per-trait essay evaluation. Qayyem abstracts the technical complexity of interacting with scoring server APIs, allowing instructors to access advanced scoring services through a user-friendly interface. The platform deploys several state-of-the-art Arabic essay scoring models that offer different trade-offs between effectiveness and efficiency.
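To illustrate what the programmatic access mentioned above might look like, the following is a minimal sketch of a client calling a scoring endpoint. The endpoint URL, payload fields, model name, and response shape are illustrative assumptions, not Qayyem's documented API.

```python
import requests

# Hypothetical scoring endpoint -- URL, fields, and trait names below are
# illustrative assumptions, not the platform's actual API.
SCORING_API = "https://example.org/api/v1/score"

def score_essay(essay_text: str, prompt_id: str, model: str = "default-aes") -> dict:
    """Submit one Arabic essay for scoring and return the parsed JSON result."""
    payload = {
        "prompt_id": prompt_id,   # assignment/prompt the essay responds to
        "essay": essay_text,      # raw Arabic essay text
        "model": model,           # which deployed scoring model to use
    }
    response = requests.post(SCORING_API, json=payload, timeout=30)
    response.raise_for_status()
    # Assumed response shape: {"holistic": 4.5, "traits": {"organization": 4, ...}}
    return response.json()

if __name__ == "__main__":
    result = score_essay("نص المقال هنا ...", prompt_id="assignment-01")
    print(result)
```

A batch workflow would wrap the same call in a loop over uploaded essays, which is the kind of repetitive interaction the web interface is meant to hide from instructors.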