Stress Testing Factual Consistency Metrics for Long-Document Summarization

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Factuality evaluation for long-document summarization is hampered by input-length limits and long-range dependencies, and existing reference-free factual consistency metrics, largely designed for short texts, are of questionable reliability in this setting. This paper systematically benchmarks six widely used metrics under seven factuality-preserving perturbations (paraphrasing, simplification, synonym replacement, logically equivalent negation, vocabulary reduction, compression, and source text insertion) and analyzes their behavior across varying retrieval context lengths and claim information densities. Experiments reveal inconsistent scoring of semantically equivalent summaries and declining reliability on information-dense claims; expanding the retrieval context only partially mitigates the instability. Based on these findings, we propose three principled improvements: multi-span reasoning, context-aware calibration, and training on meaning-preserving variants. Our work provides a systematic diagnostic framework, with released code and perturbed data, for factual consistency assessment in long-document summarization.
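
The core diagnostic is easy to reproduce in outline: score each summary before and after a factuality-preserving perturbation and inspect the score shift, which should be near zero for a robust metric. The Python sketch below illustrates this with a toy token-overlap scorer standing in for the six benchmarked metrics; the example sentences and the `score_factuality` stand-in are illustrative assumptions, not the paper's data or metrics.

```python
import re

# A factuality metric should assign near-equal scores to a summary and a
# meaning-preserving rewrite of it. `score_factuality` is a toy
# token-overlap stand-in for a real reference-free metric.

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def score_factuality(source: str, summary: str) -> float:
    """Toy proxy: fraction of summary tokens that also occur in the source."""
    src = set(tokens(source))
    summ = tokens(summary)
    return sum(t in src for t in summ) / max(len(summ), 1)

source = "The committee approved the budget after a long debate."
pairs = [  # (original summary, meaning-preserving perturbation)
    ("The committee approved the budget.",
     "The budget was approved by the committee."),   # paraphrase
    ("The committee approved the budget.",
     "The committee did not reject the budget."),    # negation variant
]

for original, perturbed in pairs:
    s0 = score_factuality(source, original)
    s1 = score_factuality(source, perturbed)
    print(f"original={s0:.2f}  perturbed={s1:.2f}  delta={s1 - s0:+.2f}")
```

Even the toy scorer exhibits the failure mode the paper documents for real metrics: the meaning-preserving rewrites share fewer surface tokens with the source, so their scores drop although the asserted facts are unchanged.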

📝 Abstract
Evaluating the factual consistency of abstractive text summarization remains a significant challenge, particularly for long documents, where conventional metrics struggle with input length limitations and long-range dependencies. In this work, we systematically evaluate the reliability of six widely used reference-free factuality metrics, originally proposed for short-form summarization, in the long-document setting. We probe metric robustness through seven factuality-preserving perturbations applied to summaries, namely paraphrasing, simplification, synonym replacement, logically equivalent negations, vocabulary reduction, compression, and source text insertion, and further analyze their sensitivity to retrieval context and claim information density. Across three long-form benchmark datasets spanning science fiction, legal, and scientific domains, our results reveal that existing short-form metrics produce inconsistent scores for semantically equivalent summaries and exhibit declining reliability for information-dense claims whose content is semantically similar to many parts of the source document. While expanding the retrieval context improves stability in some domains, no metric consistently maintains factual alignment under long-context conditions. Finally, our results highlight concrete directions for improving factuality evaluation, including multi-span reasoning, context-aware calibration, and training on meaning-preserving variations to enhance robustness in long-form summarization. We release all code, perturbed data, and scripts required to reproduce our results at https://github.com/zainmujahid/metricEval-longSum.
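
The released scripts implement all seven perturbations; as a flavor of what one family looks like, the sketch below applies a rule-based logically equivalent negation using a hand-picked antonym table. The table and the single-pass rewrite rule are demo assumptions on our part, not the paper's actual procedure.

```python
# Illustrative "logically equivalent negation" perturbation: rewrite an
# affirmative verb as its negated antonym ("approved" -> "did not reject").
# ANTONYMS is a hypothetical, hand-picked table for this demo.

ANTONYMS = {"approved": "reject", "increased": "decrease", "won": "lose"}

def negated_equivalent(summary: str) -> str:
    out = []
    for word in summary.split():
        bare = word.rstrip(".,!?").lower()
        if bare in ANTONYMS:
            # keep any trailing punctuation attached to the rewritten verb
            out.append("did not " + ANTONYMS[bare] + word[len(bare):])
        else:
            out.append(word)
    return " ".join(out)

print(negated_equivalent("The committee approved the budget."))
# The committee did not reject the budget.
```

Feeding such variants back through a metric, as in the earlier sketch, turns the perturbation set into a robustness test.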
Problem

Research questions and friction points this paper is trying to address.

Evaluating factual consistency metrics for long-document abstractive summarization
Testing the robustness of short-form metrics under seven factuality-preserving perturbations
Analyzing metric sensitivity to retrieval context and claim information density (a toy density probe follows this list)
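
One way to operationalize the density analysis, under our reading rather than the paper's exact recipe, is to count how many source spans a claim is semantically close to; claims close to many spans are the ones the abstract flags as hardest to score. The sketch below uses the sentence-transformers library, with the model name and threshold as demo assumptions.

```python
# Rough proxy for "claim information density": count how many source
# sentences a claim is semantically close to. Claims close to many spans
# are the ones the paper reports metrics scoring least reliably.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def density(claim: str, source_sentences: list[str], threshold: float = 0.5) -> int:
    claim_emb = model.encode(claim, convert_to_tensor=True)
    src_embs = model.encode(source_sentences, convert_to_tensor=True)
    sims = util.cos_sim(claim_emb, src_embs)[0]  # cosine similarity per sentence
    return int((sims > threshold).sum())

source_sentences = [
    "The spacecraft launched in March.",
    "Its mission is to map the lunar south pole.",
    "Mapping began two weeks after launch.",
]
claim = "The probe started mapping the lunar pole soon after its March launch."
print(density(claim, source_sentences))  # higher count = denser claim
```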
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically evaluates six reference-free factuality metrics on three long-form benchmarks
Probes robustness through seven factuality-preserving perturbations
Analyzes sensitivity to retrieval context and claim information density
Proposes multi-span reasoning, context-aware calibration, and training on meaning-preserving variants (a calibration sketch follows this list)
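
Of the three proposed improvements, context-aware calibration is the most directly sketchable. One plausible instantiation, an assumption on our part rather than the paper's specification, normalizes raw metric scores within retrieval-context-length buckets so that scores stay comparable as the context grows.

```python
# One plausible instantiation of context-aware calibration: raw metric
# scores drift with retrieval-context length, so z-normalize them within
# context-length buckets before comparing summaries. The bucket edges and
# synthetic data are assumptions; this is a sketch, not the paper's method.
import numpy as np

def calibrate(scores: np.ndarray, context_lens: np.ndarray,
              edges=(0, 1_000, 4_000, 16_000)) -> np.ndarray:
    calibrated = np.empty_like(scores, dtype=float)
    buckets = np.digitize(context_lens, edges)
    for b in np.unique(buckets):
        mask = buckets == b
        mu, sigma = scores[mask].mean(), scores[mask].std()
        calibrated[mask] = (scores[mask] - mu) / (sigma + 1e-8)
    return calibrated

rng = np.random.default_rng(0)
lens = rng.integers(100, 20_000, size=200)
raw = 0.9 - 2e-5 * lens + rng.normal(0, 0.02, size=200)  # scores drift with length
print(calibrate(raw, lens)[:5].round(2))
```

Bucketed z-scores are the crudest option; regressing score against context length and subtracting the fit would give a smoother correction.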