Text-Based Approaches to Item Difficulty Modeling in Large-Scale Assessments: A Systematic Review

📅 2025-09-27

🤖 AI Summary
Traditional item difficulty estimation relies on costly field testing and is constrained by classical test theory’s assumptions. Method: This study systematically reviews and empirically evaluates text-based automated difficulty prediction methods, synthesizing findings from 37 studies within a unified evaluation framework. It benchmarks classical machine learning models against Transformer architectures—ranging from small to large—using only item stems as input, without manual feature engineering. Contribution/Results: For the first time, cross-study model benchmarks are aggregated, revealing Transformers’ superior capacity to capture syntactic and semantic difficulty cues. The best-performing model achieves RMSE = 0.165, Pearson correlation = 0.87, and classification accuracy = 0.806. These results demonstrate the feasibility of purely text-driven difficulty prediction, offering substantial gains in efficiency, scalability, and fairness—establishing a novel paradigm for intelligent assessment design.

📝 Abstract
Item difficulty plays a crucial role in test performance, interpretability of scores, and equity for all test-takers, especially in large-scale assessments. Traditional approaches to item difficulty modeling rely on field testing and classical test theory (CTT)-based item analysis or item response theory (IRT) calibration, which can be time-consuming and costly. To overcome these challenges, text-based approaches leveraging machine learning and language models have emerged as promising alternatives. This paper reviews and synthesizes 37 articles on automated item difficulty prediction in large-scale assessment settings published through May 2025. For each study, we delineate the dataset, difficulty parameter, subject domain, item type, number of items, training and test data split, input, features, model, evaluation criteria, and model performance outcomes. Results showed that although classical machine learning models remain relevant due to their interpretability, state-of-the-art language models, using both small and large transformer-based architectures, can capture syntactic and semantic patterns without the need for manual feature engineering. Uniquely, model performance outcomes were summarized to serve as a benchmark for future research; overall, text-based methods have the potential to predict item difficulty with root mean square error (RMSE) as low as 0.165, Pearson correlation as high as 0.87, and accuracy as high as 0.806. The review concludes by discussing implications for practice and outlining future research directions for automated item difficulty modeling.
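The three evaluation criteria the review uses to benchmark models (RMSE, Pearson correlation, and classification accuracy) can be sketched in plain Python as follows. The item difficulty values and category labels below are fabricated for illustration only; they are not data from any of the reviewed studies.

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error between observed and predicted difficulties.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson_r(x, y):
    # Pearson correlation between observed and predicted difficulties.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def accuracy(labels_true, labels_pred):
    # Proportion of items whose difficulty category is predicted correctly.
    return sum(t == p for t, p in zip(labels_true, labels_pred)) / len(labels_true)

# Illustrative (made-up) IRT b-parameters and difficulty categories.
b_true = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_pred = [-1.0, -0.5, 0.3, 0.7, 1.6]
cats_true = ["easy", "easy", "medium", "hard", "hard"]
cats_pred = ["easy", "medium", "medium", "hard", "hard"]

print(round(rmse(b_true, b_pred), 3))
print(round(pearson_r(b_true, b_pred), 3))
print(accuracy(cats_true, cats_pred))
```

In the reviewed studies, RMSE and Pearson correlation apply when difficulty is predicted as a continuous parameter (e.g., an IRT b-parameter or CTT p-value), while accuracy applies when items are classified into discrete difficulty bands.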
Problem

Research questions and friction points this paper is trying to address.

Automating item difficulty prediction using text-based machine learning methods
Overcoming limitations of traditional time-consuming field testing approaches
Benchmarking performance of language models for educational assessment equity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using machine learning for item difficulty prediction
Leveraging transformer models without manual feature engineering
Achieving high accuracy with text-based automated methods
Sydney Peters
University of Maryland
Nan Zhang
University of Maryland
Hong Jiao
University of Maryland, College Park
educational measurement, psychometrics
Ming Li
University of Maryland
Tianyi Zhou
University of Maryland
Robert Lissitz
University of Maryland