Multi-document Summarization through Multi-document Event Relation Graph Reasoning in LLMs: a case study in Framing Bias Mitigation

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address media bias mitigation in multi-document summarization, this paper proposes a neutralization-oriented summarization method based on a Multi-Document Event Relation Graph (MERG). MERG jointly models cross-document event coreference, intra-document event relations, and event-level moral perspectives—enabling the first unified characterization of framing, selection, and perspective biases. We introduce a dual-path graph injection mechanism: a hard prompt (textualized graph structure) and a soft prompt (graph embeddings encoded via Graph Attention Networks), both guiding large language models to generate more neutral and factually faithful summaries. Experiments demonstrate significant improvements over baselines across ROUGE scores, BiasScore, and human evaluation, confirming effective mitigation of media bias at both lexical and informational levels while preserving high content fidelity.
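The hard-prompt path above can be sketched in a few lines: serialize the event nodes and their relations into plain sentences and prepend them to the LLM's input. The graph schema (node fields, relation names, moral labels) below is an illustrative assumption, not the paper's exact format.

```python
# Sketch of the "hard prompt" path: render a toy multi-document event
# relation graph as natural-language context for an LLM. The schema and
# relation names here are hypothetical illustrations.

def graph_to_text(events, relations):
    """Render events and their relations as one sentence per line."""
    lines = []
    for eid, info in events.items():
        desc = f'Event {eid}: "{info["mention"]}" (doc {info["doc"]})'
        if info.get("moral"):
            desc += f', moral framing: {info["moral"]}'
        lines.append(desc + ".")
    for src, rel, dst in relations:
        lines.append(f"Event {src} has a {rel} relation with event {dst}.")
    return "\n".join(lines)

events = {
    "e1": {"mention": "protest erupted", "doc": "A", "moral": "harm"},
    "e2": {"mention": "police responded", "doc": "A", "moral": None},
    "e3": {"mention": "demonstration began", "doc": "B", "moral": "fairness"},
}
relations = [
    ("e1", "cross-document coreference", "e3"),  # content selection bias signal
    ("e2", "temporal (after)", "e1"),            # within-document framing signal
]

prompt_context = graph_to_text(events, relations)
print(prompt_context)
```

The resulting text block would be concatenated with the articles and a neutral-summarization instruction before being passed to the model.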

📝 Abstract
Media outlets are becoming more partisan and polarized nowadays. Most previous work focused on detecting media bias. In this paper, we aim to mitigate media bias by generating a neutralized summary given multiple articles presenting different ideological views. Motivated by the critical role of events and event relations in media bias detection, we propose to increase awareness of bias in LLMs via multi-document event reasoning and use a multi-document event relation graph to guide the summarization process. This graph contains rich event information useful for revealing bias: four common types of within-document event relations to reflect content framing bias, cross-document event coreference relations to reveal content selection bias, and event-level moral opinions to highlight opinionated framing bias. We further develop two strategies to incorporate the multi-document event relation graph for neutralized summarization. Firstly, we convert the graph into natural language descriptions and feed the textualized graph into LLMs as part of a hard text prompt. Secondly, we encode the graph with a graph attention network and insert the graph embedding into LLMs as a soft prompt. Both automatic evaluation and human evaluation confirm that our approach effectively mitigates both lexical and informational media bias, while also improving content preservation.
Problem

Research questions and friction points this paper is trying to address.

Mitigate media bias in multi-document summarization
Use event relation graphs to reveal framing bias
Generate neutral summaries from ideologically diverse articles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-document event relation graph reasoning
Graph-to-text hard prompt for LLMs
Graph-attention-based soft prompt for LLMs
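The soft-prompt path can be illustrated with a single graph-attention head: project each event node, compute attention over its neighbors, and aggregate into embeddings that could be prepended to the LLM's input embeddings. A minimal pure-Python sketch follows; the dimensions, random initialization, and toy graph are assumptions for illustration, not the paper's configuration.

```python
import math
import random

random.seed(0)

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_layer(h, adj, W, a):
    """One attention head: h'_i = sum_j alpha_ij * (W h_j) over neighbors j."""
    n = len(h)
    # Linear projection z_i = W h_i for every node
    z = [[sum(W[r][c] * h[i][c] for c in range(len(h[i])))
          for r in range(len(W))] for i in range(n)]
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] or [i]
        # Attention logits e_ij = LeakyReLU(a . [z_i || z_j])
        logits = [leaky_relu(sum(av * zv for av, zv in zip(a, z[i] + z[j])))
                  for j in nbrs]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        s = sum(exps)
        alpha = [e / s for e in exps]  # softmax over the neighborhood
        out.append([sum(alpha[k] * z[j][d] for k, j in enumerate(nbrs))
                    for d in range(len(z[i]))])
    return out

# Toy graph: 3 event nodes, 4-dim input features, 2-dim output embeddings
h = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
adj = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]  # e.g. coreference/temporal edges
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
a = [random.uniform(-1, 1) for _ in range(4)]  # length = 2 * output dim

soft_prompt = gat_layer(h, adj, W, a)
print(len(soft_prompt), len(soft_prompt[0]))  # 3 nodes, 2-dim embeddings each
```

In practice the output vectors would be projected to the LLM's hidden size and inserted as virtual prompt tokens; a real implementation would use a library such as PyTorch Geometric rather than hand-rolled lists.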