An Expert Schema for Evaluating Large Language Model Errors in Scholarly Question-Answering Systems

📅 2026-02-24
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing automatic evaluation methods struggle to capture the fine-grained contextual nuance and domain-specific judgment that scientific experts apply when assessing large language model outputs. To address this gap, this study derives a structured error schema through thematic analysis and close collaboration with domain scientists, identifying 20 distinct error patterns across seven categories. Validated through contextual inquiries with ten additional domain experts, the schema both captures the error types experts naturally recognize and shows that structured evaluation can surface issues they would otherwise overlook. This work lays the foundation for personalized, schema-driven assistive tools that support expert evaluation in scientific domains.

📝 Abstract
Large Language Models (LLMs) are transforming scholarly tasks like search and summarization, but their reliability remains uncertain. Current evaluation metrics for testing LLM reliability are primarily automated approaches that prioritize efficiency and scalability, but lack contextual nuance and fail to reflect how scientific domain experts assess LLM outputs in practice. We developed and validated a schema for evaluating LLM errors in scholarly question-answering systems that reflects the assessment strategies of practicing scientists. In collaboration with domain experts, we identified 20 error patterns across seven categories through thematic analysis of 68 question-answer pairs. We validated this schema through contextual inquiries with 10 additional scientists, which showed not only which errors experts naturally identify but also how structured evaluation schemas can help them detect previously overlooked issues. Domain experts use systematic assessment strategies, including technical precision testing, value-based evaluation, and meta-evaluation of their own practices. We discuss implications for supporting expert evaluation of LLM outputs, including opportunities for personalized, schema-driven tools that adapt to individual evaluation patterns and expertise levels.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
scholarly question-answering
evaluation metrics
domain experts
error assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

expert schema
LLM error evaluation
scholarly question-answering
thematic analysis
contextual inquiry
Anna Martin-Boyle
University of Minnesota
William Humphreys
NASA Langley Research Center
Martha Brown
NASA Langley Research Center
Cara Leckey
NASA Langley Research Center
Harmanpreet Kaur
University of Minnesota
Human-Computer Interaction
Interpretable ML