From Binary Groundedness to Support Relations: Towards a Reader-Centred Taxonomy for Comprehension of AI Output

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses a limitation of prevailing binary support/refutation framings in evaluating AI-generated text: they fail to capture the nuanced semantic relationships between generated content and source documents. Moving beyond the conventional groundedness paradigm, the authors propose developing a reader-centred, fine-grained taxonomy of support relations between generated statements and sources, synthesised from prior work in linguistics and philosophy of language and spanning both syntactic moves (e.g., quotation vs. paraphrase) and interpretive moves (e.g., induction vs. deduction). They outline how such a framework could be evaluated through a benchmark and a human annotation protocol. The resulting taxonomy would enable more transparent and interpretable provenance for AI outputs, supporting both fine-grained evaluation and explainable interfaces in natural language generation systems.
📝 Abstract
Generative AI tools often answer questions using source documents, e.g., through retrieval-augmented generation. Current groundedness and hallucination evaluations largely frame the relationship between an answer and its sources as binary (the answer is either supported or unsupported). However, this obscures both the syntactic moves (e.g., direct quotation vs. paraphrase) and the interpretive moves (e.g., induction vs. deduction) performed when models reformulate evidence into an answer. This limits both benchmarking and user-facing provenance interfaces. We propose the development of a reader-centred taxonomy of grounding as a set of support relations between generated statements and source documents. We explain how this might be synthesised from prior research in linguistics and philosophy of language, and evaluated through a benchmark and human annotation protocol. Such a framework would enable interfaces that communicate not just whether a claim is grounded, but how.
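The binary-vs-fine-grained distinction in the abstract can be sketched as a data structure. This is a minimal illustrative sketch only: the relation labels below (`DIRECT_QUOTATION`, `PARAPHRASE`, `DEDUCTION`, `INDUCTION`, `UNSUPPORTED`) are hypothetical placeholders, not the authors' taxonomy, which they propose to synthesise from linguistics and philosophy of language.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SupportRelation(Enum):
    """Placeholder relation labels standing in for the proposed taxonomy."""
    DIRECT_QUOTATION = auto()   # syntactic move: verbatim reuse of source text
    PARAPHRASE = auto()         # syntactic move: same content, rephrased
    DEDUCTION = auto()          # interpretive move: logically entailed by sources
    INDUCTION = auto()          # interpretive move: generalised from source instances
    UNSUPPORTED = auto()        # no grounding in the cited sources

@dataclass
class GroundedClaim:
    claim: str                  # the generated statement shown to the reader
    source_span: str            # the source passage the claim is linked to
    relation: SupportRelation   # how the claim relates to that passage

def is_grounded(c: GroundedClaim) -> bool:
    """Collapse the fine-grained label back to today's binary judgement."""
    return c.relation is not SupportRelation.UNSUPPORTED

claim = GroundedClaim(
    claim="The study reports a 12% reduction in errors.",
    source_span="Our method reduces errors by 12%.",
    relation=SupportRelation.PARAPHRASE,
)
print(is_grounded(claim), claim.relation.name)  # → True PARAPHRASE
```

The point of the sketch is that `is_grounded` throws away exactly the information a reader-centred interface would want to surface: two claims can both be "grounded" while one quotes its source verbatim and the other reaches it by induction.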
Problem

Research questions and friction points this paper is trying to address.

groundedness
hallucination
support relations
reader-centred taxonomy
generative AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

support relations
reader-centred taxonomy
groundedness
retrieval-augmented generation
provenance interfaces