🤖 AI Summary
Evaluating the factual accuracy of summaries of long narratives (>100K tokens) remains challenging, particularly for character relationships and states: conventional metrics such as ROUGE and BERTScore are too coarse-grained, and LLM-as-a-Judge approaches have limited awareness of fine-grained factual consistency. To address this, we propose NarrativeFactScore, the first framework to adopt an "Agent-as-a-Judge" paradigm. It constructs a Character Knowledge Graph (CKG), performs multi-hop factual reasoning over the graph, and uses graph-structured cross-text alignment to enable interpretable, localized assessment and correction of character-related facts in summaries. In this way, the method overcomes key limitations of existing evaluators in modeling narrative facts at scale. On mainstream long-document summarization benchmarks, NarrativeFactScore improves factual consistency by 23.6% over strong baselines while providing verifiable, actionable revision feedback.
📝 Abstract
Large Language Models (LLMs) have demonstrated near-human performance on summarization tasks as measured by traditional metrics such as ROUGE and BERTScore. However, these metrics do not adequately capture critical aspects of summarization quality, such as factual accuracy, particularly for long narratives (>100K tokens). Recent advances such as LLM-as-a-Judge address the limitations of lexical-similarity metrics but still exhibit factual inconsistencies, especially in understanding character relationships and states. In this work, we introduce NarrativeFactScore, a novel "Agent-as-a-Judge" framework for evaluating and refining summaries. By leveraging a Character Knowledge Graph (CKG) extracted from the input text and the generated summary, NarrativeFactScore assesses factual consistency and provides actionable guidance for refinement, such as identifying missing or erroneous facts. We demonstrate the effectiveness of NarrativeFactScore through a detailed workflow illustration and extensive validation on widely adopted benchmarks, achieving superior performance compared to competitive methods. Our results highlight the potential of agent-driven evaluation systems to improve the factual reliability of LLM-generated summaries.
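To make the described workflow concrete, here is a minimal sketch of the core idea: represent the Character Knowledge Graph as a set of (subject, relation, object) triples extracted from the source narrative, check each fact in a candidate summary against the graph, and return both a consistency score and refinement feedback. All function names and data structures below are illustrative assumptions, not the paper's actual API; the real system relies on LLM agents for triple extraction and multi-hop verification rather than exact set membership.

```python
def build_ckg(triples):
    """Build a toy Character Knowledge Graph as a set of
    (subject, relation, object) triples. (Illustrative assumption:
    the paper extracts these with an LLM agent from the full text.)"""
    return set(triples)


def narrative_fact_score(summary_facts, ckg):
    """Score a summary's facts against the CKG.

    Returns the fraction of supported facts plus actionable
    feedback naming each unsupported fact, mirroring the
    'identify missing or erroneous facts' guidance described above."""
    if not summary_facts:
        return 1.0, []
    unsupported = [f for f in summary_facts if f not in ckg]
    score = 1.0 - len(unsupported) / len(summary_facts)
    feedback = [f"Unsupported fact: {s} {r} {o}" for (s, r, o) in unsupported]
    return score, feedback


# Hypothetical usage with invented example triples:
ckg = build_ckg([
    ("Elizabeth", "sister_of", "Jane"),
    ("Darcy", "friend_of", "Bingley"),
])
score, feedback = narrative_fact_score(
    [("Elizabeth", "sister_of", "Jane"),   # supported by the CKG
     ("Darcy", "enemy_of", "Bingley")],    # contradicts the CKG
    ckg,
)
# score is 0.5; feedback flags the one unsupported fact for refinement
```

In the full framework, the feedback list is what drives the refinement step: the summarizer is prompted to revise exactly the flagged facts rather than regenerating the whole summary.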