AI Summary
This study addresses the challenge of automatically aligning mathematics test items with curriculum standards in large-scale educational assessments. We propose a fine-grained domain- and skill-label annotation method leveraging pre-trained language models (PLMs). Systematic comparisons are conducted among embedding-based models, BERT variants (BERT, RoBERTa, DeBERTa-v3), and ensemble strategies (majority voting, stacking). Results show that fine-tuned PLMs significantly outperform traditional approaches; while dimensionality reduction improves linear classifiers, it remains inferior to end-to-end PLM fine-tuning. DeBERTa-v3-base achieves the best domain alignment performance (weighted F1 = 0.950), and RoBERTa-large attains the highest skill alignment score (F1 = 0.869), constituting the strongest single-model results in this study. Ensemble methods fail to surpass these top-performing single models, suggesting that PLMs sufficiently capture the semantic mapping between standards and test items. The work establishes a high-accuracy, reproducible technical framework for automated assessment interpretation.
Abstract
Accurate alignment of items to content standards is critical for valid score interpretation in large-scale assessments. This study evaluates three automated paradigms for aligning items with four domain and nineteen skill labels. First, we extracted embeddings and trained multiple classical supervised machine learning models, and further investigated the impact of dimensionality reduction on model performance. Second, we fine-tuned eight BERT-based models and variants for both domain and skill alignment. Third, we explored ensemble learning with majority voting and stacking with multiple meta-models. DeBERTa-v3-base achieved the highest weighted-average F1 score of 0.950 for domain alignment, while RoBERTa-large yielded the highest F1 score of 0.869 for skill alignment. Ensemble models did not surpass the best-performing language models. Dimensionality reduction enhanced the embedding-based linear classifiers but did not outperform the language models. This study demonstrates and compares different methods for automated item alignment to content standards.
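The abstract's ensemble and evaluation setup can be illustrated with a minimal, self-contained sketch. The snippet below combines per-item label predictions from several classifiers by majority voting and scores the result with the weighted-average F1 metric reported above. All model predictions, label names, and gold labels here are hypothetical toy data, not the study's actual items or results; ties in the vote are broken by model order as a simplifying assumption.

```python
from collections import Counter

def majority_vote(predictions):
    """Pick the most frequent label among per-model predictions for one item.
    Ties are broken by the order models appear in `predictions` (an assumption)."""
    return Counter(predictions).most_common(1)[0][0]

def weighted_f1(y_true, y_pred):
    """Weighted-average F1: per-label F1 weighted by that label's support."""
    total = len(y_true)
    score = 0.0
    for lab in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (sum(1 for t in y_true if t == lab) / total) * f1
    return score

# Hypothetical domain predictions from three fine-tuned models on four items.
model_preds = [
    ["algebra", "geometry", "statistics", "number"],
    ["algebra", "geometry", "geometry", "number"],
    ["algebra", "number", "statistics", "number"],
]
ensembled = [majority_vote(per_item) for per_item in zip(*model_preds)]
gold = ["algebra", "geometry", "statistics", "number"]
print(ensembled)                               # ['algebra', 'geometry', 'statistics', 'number']
print(round(weighted_f1(gold, ensembled), 3))  # 1.0
```

In the study itself, hard majority voting like this (and stacking with meta-models) did not beat the best single fine-tuned model, which is consistent with the individual PLMs already making highly correlated predictions.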