Personality-Aware Reinforcement Learning for Persuasive Dialogue with LLM-Driven Simulation

📅 2026-01-11
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a personality-aware reinforcement learning framework to enhance personalization and behavioral consistency in persuasive dialogue systems. The approach dynamically models users' psychological states through turn-level personality embeddings, leverages LLM-generated argumentative dialogue data for training, and introduces a "change-of-mind" penalty to reinforce commitment stability. It employs an 81-dimensional hybrid personality representation, a Maximal Marginal Relevance-based retrieval strategy, and Dueling Double DQN (D3QN) to optimize persuasion policies. Experiments on the PersuasionForGood dataset demonstrate that the method significantly improves cumulative persuasion rewards and generalization to unseen users, effectively reduces commitment reversal, and yields a modest increase in donation amounts.
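The composite reward described above combines agreement intent, donation amount, and a change-of-mind penalty. A minimal sketch of such a reward function, with hypothetical weights chosen for illustration (the paper's actual weights and reward shaping are not given here):

```python
def composite_reward(agree_intent, donation, retracted,
                     w_intent=1.0, w_donation=0.5, penalty=2.0):
    """Illustrative turn-level composite reward.

    agree_intent : 1 if the user expresses agreement to donate this turn, else 0
    donation     : pledged donation amount (e.g., in dollars)
    retracted    : 1 if the user reverses a prior commitment ("change of mind")

    The weights are placeholders, not values from the paper.
    """
    return w_intent * agree_intent + w_donation * donation - penalty * retracted

# An agreement with a $2 pledge and no retraction:
print(composite_reward(1, 2.0, 0))  # → 2.0
# A post-agreement retraction is penalized:
print(composite_reward(0, 0.0, 1))  # → -2.0
```

The penalty term is what discourages the policy from chasing quick agreements that the simulated user later retracts.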

๐Ÿ“ Abstract
Effective persuasive dialogue agents adapt their strategies to individual users, accounting for the evolution of their psychological states and intentions throughout conversations. We present a personality-aware reinforcement learning approach comprising three main modules: (1) a Strategy-Oriented Interaction Framework, which serves as an agenda-based strategy controller that selects strategy-level actions and generates responses via Maximal Marginal Relevance (MMR) retrieval to ensure contextual relevance, diversity, and scalable data generation; (2) Personality-Aware User Representation Learning, which produces an 81-dimensional mixed-type embedding predicted at each turn from recent exchanges and appended to the reinforcement learning state; and (3) a Dueling Double DQN (D3QN) model and Reward Prediction, in which the policy is conditioned on dialogue history and turn-level personality estimates and trained using a composite reward incorporating agreement intent, donation amount, and change-of-mind penalties. We use an agenda-based LLM simulation pipeline to generate diverse interactions, from which personality estimation is inferred from the generated utterances. Experiments on the PersuasionForGood (P4G) dataset augmented with simulated dialogues reveal three main findings: (i) turn-level personality conditioning improves policy adaptability and cumulative persuasion rewards; (ii) LLM-driven simulation enhances generalization to unseen user behaviors; and (iii) incorporating a change-of-mind penalty reduces post-agreement retractions while slightly improving donation outcomes. These results demonstrate that structured interaction, dynamic personality estimation, and behaviorally informed rewards together yield more effective persuasive policies.
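The MMR retrieval step selects responses that balance relevance to the dialogue context against redundancy with already-selected candidates. A self-contained sketch of greedy MMR selection over precomputed similarity scores (the function name and the λ = 0.7 trade-off are illustrative, not taken from the paper):

```python
def mmr_select(query_sim, cand_sims, lam=0.7, k=3):
    """Greedy Maximal Marginal Relevance selection.

    query_sim[i]   : relevance of candidate i to the dialogue context
    cand_sims[i][j]: similarity between candidates i and j
    lam            : trade-off between relevance (lam) and diversity (1 - lam)

    Returns the indices of up to k selected candidates.
    """
    selected, remaining = [], list(range(len(query_sim)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            # Redundancy = highest similarity to anything already chosen.
            redundancy = max((cand_sims[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Candidates 0 and 1 are near-duplicates (similarity 0.99), so MMR
# picks the most relevant one plus the diverse candidate 2:
print(mmr_select([0.9, 0.85, 0.6],
                 [[1.0, 0.99, 0.1],
                  [0.99, 1.0, 0.1],
                  [0.1, 0.1, 1.0]], lam=0.7, k=2))  # → [0, 2]
```

In the paper's framework this selection would feed the agenda-based controller with responses that stay on-strategy without repeating near-identical utterances, which also keeps the simulated training data diverse.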
Problem

Research questions and friction points this paper is trying to address.

persuasive dialogue
personality-aware
reinforcement learning
user adaptation
dialogue strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personality-Aware Reinforcement Learning
LLM-Driven Simulation
Strategy-Oriented Interaction Framework
Dynamic User Representation
Composite Reward Design
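The D3QN policy named among the contributions combines two standard ideas: a dueling head that decomposes Q-values into a state value and action advantages, and a Double DQN target in which the online network picks the next action while the target network evaluates it. A minimal numerical sketch of both computations (function names are illustrative; real implementations use neural networks rather than raw lists):

```python
def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, next_q_online, next_q_target, done):
    """Double DQN target: online net selects the action, target net scores it."""
    if done:
        return reward
    best = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best]
```

Decoupling action selection from evaluation reduces the overestimation bias of vanilla DQN, while the dueling decomposition lets the network learn how good a dialogue state is independently of which persuasion strategy is chosen in it.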