A Rose by Any Other Name: LLM-Generated Explanations Are Good Proxies for Human Explanations to Collect Label Distributions on NLI

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
This work addresses annotator disagreement in natural language inference (NLI) by investigating whether explanations generated by large language models (LLMs) can substitute for human-written explanations when modeling the human judgment distribution (HJD), especially in explanation-scarce and out-of-distribution (OOD) settings. Methodologically, the authors propose an LLM-driven explanation-generation framework, multi-strategy fusion of labels and explanations, and an automatic explanation-filtering mechanism. They provide the first systematic empirical validation that HJD modeling using only LLM-generated explanations paired with human labels performs comparably to modeling with fully human-annotated explanations. The approach shows strong cross-dataset robustness, substantially improves the efficiency of HJD construction, and retains generalization on OOD evaluation. The core contribution is establishing the validity and practicality of LLM-generated explanations for HJD modeling, offering a resource-efficient paradigm for NLI evaluation with enhanced generalizability.

📝 Abstract
Disagreement in human labeling is ubiquitous, and can be captured in human judgment distributions (HJDs). Recent research has shown that explanations provide valuable information for understanding human label variation (HLV) and that large language models (LLMs) can approximate HJD from a few human-provided label-explanation pairs. However, collecting explanations for every label is still time-consuming. This paper examines whether LLMs can be used to replace humans in generating explanations for approximating HJD. Specifically, we use LLMs as annotators to generate model explanations for a few given human labels. We test ways to obtain and combine these label-explanations with the goal of approximating human judgment distributions. We further compare the resulting human explanations with model-generated ones, and test automatic and human explanation selection. Our experiments show that LLM explanations are promising for NLI: for estimating HJD, generated explanations yield results comparable to human explanations when provided with human labels. Importantly, our results generalize from datasets with human explanations to i) datasets where they are not available and ii) challenging out-of-distribution test sets.
Problem

Research questions and friction points this paper is trying to address.

Can LLM-generated explanations replace human explanations for approximating human judgment distributions?
Collecting a human explanation for every label is time-consuming and costly.
Do LLM explanations transfer to datasets without human explanations and to out-of-distribution test sets?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs as annotators to generate explanations for given human labels
Tests strategies for obtaining and combining label-explanation pairs to approximate HJDs
Compares human and model-generated explanations, including automatic and human explanation selection
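The core idea of approximating an HJD from a few label-explanation pairs can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `score_labels` is a hypothetical stand-in for an LLM that, conditioned on one label-explanation pair, assigns a probability to each NLI label; the per-pair distributions are then averaged into one approximate HJD.

```python
NLI_LABELS = ("entailment", "neutral", "contradiction")

def approximate_hjd(pairs, score_labels):
    """Average per-pair label distributions into one approximate
    human judgment distribution (HJD) over the NLI labels.

    pairs: list of (human_label, explanation) tuples.
    score_labels: callable (label, explanation) -> dict mapping each
        NLI label to a probability; stands in for an LLM scorer.
    """
    totals = {label: 0.0 for label in NLI_LABELS}
    for label, explanation in pairs:
        dist = score_labels(label, explanation)
        for name, prob in dist.items():
            totals[name] += prob
    n = len(pairs)
    return {name: total / n for name, total in totals.items()}

# Toy scorer: puts most mass on the pair's own label (hypothetical).
def toy_scorer(label, explanation):
    dist = {name: 0.1 for name in NLI_LABELS}
    dist[label] = 0.8
    return dist

pairs = [("entailment", "the premise implies the hypothesis"),
         ("neutral", "the hypothesis adds unsupported detail")]
hjd = approximate_hjd(pairs, toy_scorer)
```

With the toy scorer above, the two pairs average to an HJD of 0.45 for entailment, 0.45 for neutral, and 0.1 for contradiction, capturing the annotator split rather than forcing a single label.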