Reasoning About Persuasion: Can LLMs Enable Explainable Propaganda Detection?

📅 2025-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing propaganda detection research lacks interpretability support for predictions, primarily due to the absence of human-annotated explanation resources. To address this, we introduce the first Arabic–English bilingual explanation-augmented propaganda detection dataset, featuring fine-grained, manually authored justification annotations. We further propose an integrated multilingual large language model (LLM) that jointly performs propaganda identification and rationale-driven natural language explanation generation. Our approach combines instruction-tuning with explanation-aware fine-tuning (EFT). Experimental results demonstrate that our model achieves detection performance competitive with state-of-the-art methods while generating high-fidelity, annotation-aligned explanations—marking the first work to jointly model discriminative capability and interpretability in multilingual propaganda detection.

📝 Abstract
There has been significant research on propagandistic content detection across different modalities and languages. However, most studies have primarily focused on detection, with little attention given to explanations justifying the predicted label. This is largely due to the lack of resources that provide explanations alongside annotated labels. To address this issue, we propose a multilingual (i.e., Arabic and English) explanation-enhanced dataset, the first of its kind. Additionally, we introduce an explanation-enhanced LLM for both label detection and rationale-based explanation generation. Our findings indicate that the model performs comparably while also generating explanations. We will make the dataset and experimental resources publicly available for the research community.
Problem

Research questions and friction points this paper is trying to address.

How can propagandistic content be detected across multiple languages?
Prior work predicts propaganda labels without justifying them with explanations.
No existing resource pairs propaganda annotations with human-authored explanations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First multilingual (Arabic–English) explanation-enhanced propaganda dataset.
Explanation-enhanced LLM for propaganda label detection.
Rationale-based explanations generated alongside predictions.