🤖 AI Summary
Educators exhibit poor discriminative ability when assessing the difficulty of True/False questions on neural networks and machine learning, achieving only near-chance performance (AUC ≈ 0.55).
Method: We propose a novel paradigm for question difficulty prediction leveraging large language models’ (LLMs) intrinsic uncertainty—rather than relying on direct LLM-generated difficulty scores. Our approach extracts uncertainty indicators from the LLM’s reasoning process (e.g., confidence distribution, response consistency) and trains a few-shot supervised model using only 42 human-labeled samples.
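The uncertainty indicators described above can be illustrated with a minimal sketch. Here we assume each question is answered several times by the LLM and that each response comes back as an (answer, confidence) pair; the pair format, function name, and specific features are illustrative, not the paper's exact feature set.

```python
from collections import Counter

def uncertainty_features(responses):
    """Derive simple uncertainty indicators from repeated LLM answers
    to one True/False question.

    `responses`: list of (answer, confidence) pairs, e.g. from sampling
    the model several times (hypothetical format, for illustration).
    """
    answers = [answer for answer, _ in responses]
    _, majority_count = Counter(answers).most_common(1)[0]
    # Response consistency: fraction of samples agreeing with the majority.
    consistency = majority_count / len(responses)
    # Mean self-reported confidence across samples.
    mean_confidence = sum(conf for _, conf in responses) / len(responses)
    return {"consistency": consistency, "mean_confidence": mean_confidence}

features = uncertainty_features([("True", 0.9), ("True", 0.7), ("False", 0.6)])
```

Low consistency or low mean confidence would then signal a question the model finds hard, which the paper uses as a proxy for student difficulty.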
Contribution/Results: (1) Educator judgments show near-chance discrimination; (2) Uncertainty-based features (AUC = 0.82) significantly outperform both educator assessments and prompt-based direct difficulty estimation (ΔAUC = +0.11); (3) We provide the first empirical validation that LLM uncertainty serves as an effective, low-resource proxy for question difficulty—enabling more intelligent, adaptive assessment design.
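The few-shot supervised step can be sketched as fitting a small classifier on uncertainty features and scoring it by AUC. Everything below is a stand-in: logistic regression is an assumed model choice, and the data is synthetic (the paper's actual classifier and 42 labeled samples are not reproduced here).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for 42 labeled questions: two uncertainty
# features per question (e.g. consistency, mean confidence) and a
# binary "difficult" label derived so that low feature values
# correspond to difficult questions.
X_train = rng.random((42, 2))
y_train = (X_train[:, 0] < 0.5).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# Small deterministic held-out set for the AUC comparison.
X_test = np.array([[0.1, 0.5], [0.2, 0.4], [0.8, 0.6], [0.9, 0.3]])
y_test = np.array([1, 1, 0, 0])
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```

In the paper, the same AUC metric is used to compare this supervised model against educator judgments and direct LLM difficulty prompts.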
📝 Abstract
Estimating the difficulty of exam questions is essential for developing good exams, but professors are not always good at this task. We compare various Large Language Model-based methods with three professors in their ability to estimate what percentage of students will answer True/False exam questions correctly in the areas of Neural Networks and Machine Learning. Our results show that the professors have limited ability to distinguish between easy and difficult questions and that they are outperformed by directly asking Gemini 2.5 to solve this task. We obtained even better results, however, by using the uncertainties of the LLMs solving the questions in a supervised learning setting with only 42 training samples. We conclude that supervised learning using LLM uncertainty can help professors better estimate the difficulty of exam questions, improving the quality of assessment.