🤖 AI Summary
This study investigates how judgments of question difficulty align between large language models (LLMs) and humans in Japanese quiz-bowl-style question answering. Using a novel, manually curated Japanese question-answering dataset, the authors employ multiple prompting strategies to elicit responses from LLMs and systematically compare their accuracy against human performance along two key dimensions: whether the answer is covered by a Wikipedia entry and whether the answer is numeric. The results show that LLMs underperform humans on questions whose answers are not covered by Wikipedia and on questions requiring numeric answers, highlighting their reliance on training-data coverage and their limitations in flexible numerical reasoning. To the authors' knowledge, this is the first empirical study to uncover a structural misalignment in difficulty perception between LLMs and humans in a Japanese quiz context. The work provides a reproducible analytical framework for probing model knowledge boundaries and informing robust prompt engineering.
📝 Abstract
LLMs have achieved performance surpassing humans on many NLP tasks. However, it remains unclear whether problems that are difficult for humans are also difficult for LLMs. This study investigates how the difficulty of quizzes in a buzzer setting differs between LLMs and humans. Specifically, we first collect Japanese quiz data including questions, answers, and human correct-response rates; we then prompt LLMs to answer the quizzes under several settings and compare their correct-answer rates to those of humans from two analytical perspectives. The experimental results show that, compared to humans, LLMs struggle more with quizzes whose correct answers are not covered by Wikipedia entries, and also have difficulty with questions that require numerical answers.
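The comparison described in the abstract can be sketched in a few lines: group quizzes by each annotation dimension (Wikipedia coverage, numeric answer) and compare the mean human correct-response rate against the LLM's accuracy within each group. This is a minimal illustrative sketch, not the authors' code; all field names and the sample data are assumptions.

```python
# Hypothetical sketch of the study's analysis. Record fields ("human_rate",
# "llm_correct", "in_wikipedia", "numeric") and the sample data below are
# illustrative assumptions, not taken from the paper.
from statistics import mean

quizzes = [
    {"human_rate": 0.82, "llm_correct": True,  "in_wikipedia": True,  "numeric": False},
    {"human_rate": 0.35, "llm_correct": False, "in_wikipedia": False, "numeric": False},
    {"human_rate": 0.60, "llm_correct": False, "in_wikipedia": True,  "numeric": True},
    {"human_rate": 0.48, "llm_correct": True,  "in_wikipedia": True,  "numeric": False},
]

def accuracy_gap_by(records, key):
    """Human-minus-LLM accuracy gap, split on one boolean annotation.

    A positive gap means humans outperform the LLM on that subset.
    """
    gaps = {}
    for flag in (True, False):
        group = [r for r in records if r[key] == flag]
        if not group:
            continue
        human_acc = mean(r["human_rate"] for r in group)
        llm_acc = mean(1.0 if r["llm_correct"] else 0.0 for r in group)
        gaps[flag] = round(human_acc - llm_acc, 3)
    return gaps

print(accuracy_gap_by(quizzes, "in_wikipedia"))
print(accuracy_gap_by(quizzes, "numeric"))
```

On the toy data above, the gap is largest for quizzes whose answers are outside Wikipedia and for numeric-answer quizzes, mirroring the direction of the paper's reported findings.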