🤖 AI Summary
Traditional item difficulty estimation relies on costly, time-consuming field testing followed by classical test theory (CTT) item analysis or item response theory (IRT) calibration. Method: This study systematically reviews text-based automated difficulty prediction methods, synthesizing findings from 37 studies within a unified evaluation framework. It compares classical machine learning models with Transformer architectures, ranging from small to large, that capture syntactic and semantic difficulty cues from item text without manual feature engineering. Contribution/Results: Uniquely, model performance outcomes are aggregated across studies to serve as a benchmark for future research. Across the reviewed studies, text-based methods reach RMSE as low as 0.165, Pearson correlation as high as 0.87, and classification accuracy as high as 0.806. These results indicate that purely text-driven difficulty prediction is feasible, offering gains in efficiency, scalability, and fairness, and pointing toward a new paradigm for intelligent assessment design.
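For a concrete picture of the transformer-based approach the summary describes, here is a minimal sketch, not drawn from any reviewed study, of fine-tuning a small pretrained model as a regressor on raw item stems. The model name (`bert-base-uncased`), the toy items, the IRT b-parameter labels, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: text-only difficulty prediction by fine-tuning a small
# Transformer as a regressor on item stems. All data and settings are
# hypothetical; the reviewed studies use many different setups.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class ItemDataset(Dataset):
    """Pairs each item stem (question text) with a calibrated difficulty."""
    def __init__(self, stems, difficulties, tokenizer, max_len=256):
        self.enc = tokenizer(stems, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(difficulties, dtype=torch.float)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return {"input_ids": self.enc["input_ids"][i],
                "attention_mask": self.enc["attention_mask"][i],
                "labels": self.labels[i]}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 with problem_type="regression" gives a single continuous
# output head trained with MSE loss.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1, problem_type="regression")

# Hypothetical items with IRT b-parameters from a prior field-test calibration.
train_stems = ["Solve for x: 2x + 3 = 11.", "Prove the triangle inequality."]
train_b = [-0.4, 1.2]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="difficulty-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ItemDataset(train_stems, train_b, tokenizer),
)
trainer.train()
```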
📝 Abstract
Item difficulty plays a crucial role in test performance, interpretability of scores, and equity for all test-takers, especially in large-scale assessments. Traditional approaches to item difficulty modeling rely on field testing and classical test theory (CTT)-based item analysis or item response theory (IRT) calibration, which can be time-consuming and costly. To overcome these challenges, text-based approaches leveraging machine learning and language models have emerged as promising alternatives. This paper reviews and synthesizes 37 articles on automated item difficulty prediction in large-scale assessment settings published through May 2025. For each study, we delineate the dataset, difficulty parameter, subject domain, item type, number of items, training and test data split, input, features, model, evaluation criteria, and model performance outcomes. Results showed that although classical machine learning models remain relevant due to their interpretability, state-of-the-art language models, spanning both small and large transformer-based architectures, can capture syntactic and semantic patterns without the need for manual feature engineering. Uniquely, model performance outcomes were summarized to serve as a benchmark for future research; overall, text-based methods have the potential to predict item difficulty with root mean square error (RMSE) as low as 0.165, Pearson correlation as high as 0.87, and accuracy as high as 0.806. The review concludes by discussing implications for practice and outlining future research directions for automated item difficulty modeling.
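As a concrete illustration of the three evaluation criteria named above, the following sketch computes RMSE, Pearson correlation, and, after binning the continuous difficulty scale into classes, accuracy. The difficulty values and cut points are invented for illustration and do not come from the reviewed studies.

```python
# Minimal sketch of the headline evaluation criteria; all values are made up.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, mean_squared_error

y_true = np.array([-0.8, -0.1, 0.4, 1.1, 1.9])   # calibrated difficulties
y_pred = np.array([-0.6, -0.2, 0.5, 0.9, 2.1])   # model predictions

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r, _ = pearsonr(y_true, y_pred)

# Accuracy only applies once continuous difficulty is binned into classes,
# e.g. easy / medium / hard; these cut points are arbitrary.
cuts = [0.0, 1.0]
acc = accuracy_score(np.digitize(y_true, cuts), np.digitize(y_pred, cuts))

print(f"RMSE={rmse:.3f}  Pearson r={r:.3f}  accuracy={acc:.3f}")
```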