PaperTrail: A Claim-Evidence Interface for Grounding Provenance in LLM-based Scholarly Q&A

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models frequently generate unsupported assertions or omit critical information in academic question answering, and existing provenance mechanisms offer only coarse-grained traceability that falls short of scholarly rigor. This work proposes PaperTrail, a system that introduces fine-grained claim-evidence alignment to academic QA: it uses natural language processing to extract claims from model responses and evidence units from source literature, then establishes explicit mappings between them. An interactive visualization interface lets users inspect the support status of each claim (supported, unsupported, or missing). User studies show that PaperTrail significantly reduces users' trust in model outputs yet does not substantially diminish their reliance on them, highlighting the role of cognitive load in human-AI collaboration.

📝 Abstract
Large language models (LLMs) are increasingly used in scholarly question-answering (QA) systems to help researchers synthesize vast amounts of literature. However, these systems often produce subtle errors (e.g., unsupported claims, errors of omission), and current provenance mechanisms like source citations are not granular enough for the rigorous verification that scholarly domains require. To address this, we introduce PaperTrail, a novel interface that decomposes both LLM answers and source documents into discrete claims and evidence, mapping them to reveal supported assertions, unsupported claims, and information omitted from the source texts. We evaluated PaperTrail in a within-subjects study with 26 researchers who performed two scholarly editing tasks using PaperTrail and a baseline interface. Our results show that PaperTrail significantly lowered participants' trust compared to the baseline. However, this increased caution did not translate to behavioral changes, as people continued to rely on LLM-generated scholarly edits to avoid a cognitively burdensome task. We discuss the value of claim-evidence matching for understanding LLM trustworthiness in scholarly settings, and present design implications for cognition-friendly communication of provenance information.
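The claim-evidence mapping described in the abstract can be illustrated with a toy matcher. This is a minimal sketch of the general idea only: it substitutes simple lexical overlap for the paper's actual NLP alignment method, and every function name and threshold here is our own assumption, not PaperTrail's implementation.

```python
# Toy claim-evidence alignment sketch (NOT the paper's method):
# each claim from an LLM answer is matched against evidence sentences
# by Jaccard word overlap; claims are labeled "supported" or
# "unsupported", and evidence never matched by any claim is reported
# as omitted from the answer.
import re


def tokens(text):
    """Lowercase word set for a crude overlap comparison."""
    return set(re.findall(r"[a-z]+", text.lower()))


def overlap(a, b):
    """Jaccard similarity of the two texts' word sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / max(len(ta | tb), 1)


def align(claims, evidence, threshold=0.3):
    """Map each claim to its best-matching evidence sentence.

    Returns (status, omitted): status maps each claim to
    ("supported", evidence) or ("unsupported", None); omitted lists
    evidence sentences no claim drew on. The 0.3 threshold is an
    arbitrary illustrative choice.
    """
    if not evidence:
        return {c: ("unsupported", None) for c in claims}, []
    status, used = {}, set()
    for claim in claims:
        best, idx = max((overlap(claim, e), i) for i, e in enumerate(evidence))
        if best >= threshold:
            status[claim] = ("supported", evidence[idx])
            used.add(idx)
        else:
            status[claim] = ("unsupported", None)
    omitted = [e for i, e in enumerate(evidence) if i not in used]
    return status, omitted
```

A real system would replace the overlap score with entailment or semantic-similarity models; the interface layer would then render the three support statuses rather than return them as a dictionary.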
Problem

Research questions and friction points this paper is trying to address.

large language models
scholarly question-answering
provenance
claim-evidence
trustworthiness
Innovation

Methods, ideas, or system contributions that make the work stand out.

claim-evidence alignment
provenance visualization
scholarly QA
LLM trustworthiness
fine-grained attribution
Anna Martin-Boyle
University of Minnesota
Cara A. C. Leckey
NASA Langley Research Center
Martha C. Brown
NASA Langley Research Center
Harmanpreet Kaur
University of Minnesota
Human-Computer Interaction · Interpretable ML