Evaluate Summarization in Fine-Granularity: Auto Evaluation with LLM

📅 2024-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing automatic summarization evaluation metrics (e.g., ROUGE) exhibit low correlation with human judgments, poor interpretability, and high sensitivity to model or prompt variations, hindering fine-grained quality diagnosis. To address these limitations, we propose SumAutoEval—the first large language model–based, fine-grained, multidimensional automatic evaluation framework for summarization. It decouples assessment into four orthogonal dimensions—completeness, correctness, alignment, and readability—integrating structured prompt engineering with empirically calibrated scoring. SumAutoEval significantly improves agreement with human annotations (average Pearson correlation coefficient increased by +0.32), while ensuring high traceability, cross-model robustness, and evaluation transparency. By enabling dimension-specific diagnostics and interpretable score decomposition, it overcomes the fundamental bottlenecks of traditional metrics—namely, their opacity and weak human alignment.
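The dimension-decoupled scoring idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prompt template, `score_dimension` helper, 1–5 scale, and mean aggregation are all assumptions for demonstration purposes.

```python
# Hypothetical sketch of per-dimension LLM scoring with a simple aggregate.
# None of these names come from the SumAutoEval paper itself.

DIMENSIONS = ["completeness", "correctness", "alignment", "readability"]

PROMPT_TEMPLATE = (
    "Rate the {dimension} of the summary below on a 1-5 scale.\n"
    "Source: {source}\nSummary: {summary}\nScore:"
)

def score_dimension(dimension, source, summary, llm):
    """Ask the LLM (a callable: prompt -> text) for one dimension's score."""
    prompt = PROMPT_TEMPLATE.format(
        dimension=dimension, source=source, summary=summary
    )
    raw = llm(prompt)
    # Clamp to the valid 1-5 range in case the model replies out of bounds.
    return max(1, min(5, int(raw.strip())))

def evaluate_summary(source, summary, llm):
    """Return per-dimension scores plus a simple mean as the overall score."""
    scores = {d: score_dimension(d, source, summary, llm) for d in DIMENSIONS}
    scores["overall"] = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return scores

if __name__ == "__main__":
    # Stub LLM so the sketch runs without an API; it always answers "4".
    stub_llm = lambda prompt: "4"
    print(evaluate_summary("long source text", "short summary", stub_llm))
```

Keeping each dimension in its own prompt is what makes the scores traceable: a low overall score can be attributed to the specific failing dimension rather than a single opaque number.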

📝 Abstract
Due to the exponential growth of information and the need for efficient information consumption, the task of summarization has gained paramount importance. Evaluating summarization accurately and objectively presents significant challenges, particularly when dealing with long and unstructured texts rich in content. Existing methods, such as ROUGE (Lin, 2004) and embedding similarities, often yield scores that have low correlation with human judgements and are also not intuitively understandable, making it difficult to gauge the true quality of the summaries. LLMs can mimic humans in giving subjective reviews, but subjective scores are hard to interpret and justify, and they can be easily manipulated by altering the models and the tones of the prompts. In this paper, we introduce a novel evaluation methodology and tooling designed to address these challenges, providing a more comprehensive, accurate, and interpretable assessment of summarization outputs. Our method (SumAutoEval) proposes and evaluates metrics at varying granularity levels, giving objective scores on four key dimensions: completeness, correctness, alignment, and readability. We empirically demonstrate that SumAutoEval enhances the understanding of output quality with better human correlation.
Problem

Research questions and friction points this paper is trying to address.

Summary Evaluation
Information Integrity
Human Perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

SumAutoEval
Text Summary Evaluation
Multi-Dimensional Assessment
Dong Yuan
the University of Sydney
cloud and edge computing, AI, deep learning, internet of things, workflow
Eti Rastogi
DeepScribe Inc.
Fen Zhao
DeepScribe Inc.
Sagar Goyal
DeepScribe Inc.
Gautam Naik
DeepScribe Inc.
Sree Prasanna Rajagopal
DeepScribe Inc.