Detecting Winning Arguments with Large Language Models and Persuasion Strategies

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenging yet important task of assessing the persuasiveness of argumentative texts in human communication. The authors propose a strategy-guided, structured prompting approach that uses large language models to reason jointly over six persuasion strategies, such as Attack on reputation, Distraction, and Manipulative wording, improving both the interpretability and robustness of persuasiveness evaluation. Key contributions include the release of a topic-annotated version of the Winning Arguments dataset and experiments on three argumentation datasets showing that the proposed method outperforms baseline approaches. The analysis also reveals notable variation in model performance across discussion topics, highlighting the influence of topical context on persuasiveness assessment.
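The core idea, scoring a text once per strategy with a structured prompt and then combining the per-strategy judgments, can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the prompt wording, the 0-10 scale, the mean aggregation, and the model name are all assumptions, and the abstract names only three of the six strategies.

```python
# Minimal sketch of multi-strategy persuasion scoring with an LLM.
# Uses the openai Python client (>=1.0); prompt wording, scale, and
# aggregation are illustrative, not the paper's exact protocol.
from openai import OpenAI

STRATEGIES = [
    "Attack on reputation",
    "Distraction",
    "Manipulative wording",
    # ...plus the paper's remaining three strategies, which the abstract
    # does not name, omitted here rather than guessed.
]

client = OpenAI()

def score_strategy(argument: str, strategy: str) -> float:
    """Ask the model how strongly one strategy contributes to persuasiveness (0-10)."""
    prompt = (
        f"Persuasion strategy: {strategy}\n\n"
        f"Argument:\n{argument}\n\n"
        "On a scale of 0 to 10, how strongly does this argument use the "
        "strategy above to persuade? Reply with a single number."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the paper's models may differ
        messages=[{"role": "user", "content": prompt}],
    )
    return float(reply.choices[0].message.content.strip())  # assumes a numeric reply

def persuasiveness_score(argument: str) -> float:
    """Combine the per-strategy scores; a simple mean stands in for the
    paper's multi-strategy collaborative reasoning step."""
    scores = [score_strategy(argument, s) for s in STRATEGIES]
    return sum(scores) / len(scores)
```

A real implementation would parse the model's reply more defensively and could weight or reason over the strategies jointly rather than averaging them.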

📝 Abstract
Detecting persuasion in argumentative text is a challenging task with important implications for understanding human communication. This work investigates the role of persuasion strategies, such as Attack on reputation, Distraction, and Manipulative wording, in determining the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Our approach leverages large language models (LLMs) with a Multi-Strategy Persuasion Scoring method that guides reasoning over six persuasion strategies. Results show that strategy-guided reasoning improves the prediction of persuasiveness. To better understand the influence of content, we organize the Winning Arguments dataset into broad discussion topics and analyze performance across them. We publicly release this topic-annotated version of the dataset to facilitate future research. Overall, our methodology demonstrates the value of structured, strategy-aware prompting for enhancing interpretability and robustness in argument quality assessment.
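The per-topic analysis described above reduces to grouping predictions by each example's broad discussion topic and comparing accuracy per group. A minimal sketch with pandas, where the file name and the column names (`topic`, `label`, `prediction`) are hypothetical stand-ins for the released topic-annotated data:

```python
# Per-topic accuracy breakdown over topic-annotated Winning Arguments data.
# File and column names are assumed for illustration.
import pandas as pd

df = pd.read_csv("winning_arguments_topics.csv")  # hypothetical file name
per_topic = (
    df.assign(correct=df["label"] == df["prediction"])
      .groupby("topic")["correct"]
      .mean()
      .sort_values(ascending=False)
)
print(per_topic)  # accuracy per discussion topic, highest first
```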
Problem

Research questions and friction points this paper is trying to address.

persuasion detection
argument quality
persuasion strategies
winning arguments
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Persuasion Strategies
Strategy-guided Reasoning
Argument Quality Assessment
Topic-annotated Dataset