Can one size fit all?: Measuring Failure in Multi-Document Summarization Domain Transfer

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the failure mechanisms of multi-document summarization (MDS) models under zero-shot cross-domain transfer (news → scientific → conversational domains). We systematically evaluate four dominant paradigms—end-to-end pretraining, chunked summarization, extractive-then-generative, and GPT-style reasoning—using both human judgment and automated metrics. For the first time, we unify and quantify domain-transfer failure across three dimensions: reference similarity, summary quality, and factual consistency. Our analysis reveals that failure stems primarily from semantic structural mismatch between training paradigms and target-domain discourse properties—not merely from distributional shift. Moreover, standard automatic metrics (e.g., ROUGE, BERTScore) exhibit significant miscalibration in cross-domain settings. Based on these findings, we propose principled metric calibration strategies and introduce the first cross-domain benchmark specifically designed for robustness evaluation of MDS models. This benchmark provides empirical grounding and methodological guidance for developing generalizable, domain-agnostic summarization systems.

📝 Abstract
Abstractive multi-document summarization (MDS) is the task of automatically summarizing information in multiple documents, from news articles to conversations with multiple speakers. The training approaches for current MDS models can be grouped into four: end-to-end with special pre-training ("direct"), chunk-then-summarize, extract-then-summarize, and inference with GPT-style models. In this work, we evaluate MDS models across training approaches, domains, and dimensions (reference similarity, quality, and factuality) to analyze how and why models trained on one domain can fail to summarize documents from another (News, Science, and Conversation) in the zero-shot domain transfer setting. We define domain-transfer "failure" as a decrease in factuality, higher deviation from the target, and a general decrease in summary quality. In addition to exploring domain transfer for MDS models, we examine potential issues with applying popular summarization metrics out-of-the-box.
Problem

Research questions and friction points this paper is trying to address.

Evaluates MDS models across domains and training approaches.
Analyzes why summarization models fail under zero-shot domain transfer.
Examines issues with applying standard summarization metrics out-of-the-box.
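The metric issues above center on scores such as ROUGE being applied out-of-the-box across domains. As an illustration of what such a score measures, here is a minimal pure-Python sketch of ROUGE-1 F1 (unigram overlap); the function name and example strings are hypothetical, and production evaluations would use an established implementation with stemming and tokenization options rather than this simplified version.

```python
from collections import Counter


def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between reference and candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate token counts at most as often as it
    # appears in the reference.
    overlap = sum(min(n, ref_counts[tok]) for tok, n in cand_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)


# Hypothetical example: a paraphrase scores well below 1.0 despite preserving
# the meaning, which hints at why surface-overlap metrics can mislead when the
# target domain's phrasing differs from the training domain's.
ref = "the model fails to transfer across domains"
cand = "the model transfers poorly across domains"
score = rouge1_f1(ref, cand)  # ~0.615: high overlap, but paraphrase is penalized
```

Such surface-level overlap is one reason the paper argues for metric calibration in cross-domain settings: a semantically faithful summary in a new domain's register can score poorly against a reference written in a different style.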
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies and quantifies domain-transfer failure across three dimensions: reference similarity, summary quality, and factual consistency.
Proposes principled metric calibration strategies for cross-domain evaluation.
Introduces the first cross-domain benchmark for robustness evaluation of MDS models.
🔎 Similar Papers
No similar papers found.