DEBATE: A Large-Scale Benchmark for Role-Playing LLM Agents in Multi-Agent, Long-Form Debates

📅 2025-10-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the critical limitation that single-agent alignment fails to ensure authenticity in multi-agent group dynamics. We propose the first modeling framework for long-term debate scenarios that captures opinion evolution across multiple agents. To support empirical evaluation, we construct DEBATE—a large-scale benchmark comprising nearly 30,000 real-world debate messages and private opinion trajectories—and introduce the first formal definition and quantification of opinion-change authenticity in multi-agent role-playing. Leveraging multi-round debate data, we perform supervised fine-tuning to align LLM behavior with human opinion dynamics and employ a multi-level evaluation combining ROUGE-L, semantic similarity, and authenticity metrics. Results show significant improvement in surface-level text generation, yet persistent bottlenecks in deep semantic alignment, exposing a fundamental limitation of current role-playing paradigms in simulating authentic human group dynamics.
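The multi-level evaluation mentioned above combines surface-level and semantic metrics. As a minimal sketch of what such metrics compute, the snippet below implements ROUGE-L F1 via longest-common-subsequence and a cosine similarity over bag-of-words counts; the paper's semantic-similarity metric presumably uses sentence embeddings rather than word counts, so the cosine function here is only an illustrative stand-in.

```python
from collections import Counter
import math

def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference: str, candidate: str) -> float:
    # ROUGE-L: F1 over the LCS of reference and candidate token sequences.
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

def cosine_similarity(text_a: str, text_b: str) -> float:
    # Stand-in for embedding-based semantic similarity:
    # cosine over bag-of-words count vectors.
    va, vb = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0
```

A model reply can then be scored against the human participant's actual message on both axes, which is how a gap between surface overlap and semantic alignment becomes visible.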

📝 Abstract
Accurately modeling opinion change through social interactions is crucial for addressing issues like misinformation and polarization. While role-playing large language models (LLMs) offer a promising way to simulate human-like interactions, existing research shows that single-agent alignment does not guarantee authentic multi-agent group dynamics. Current LLM role-play setups often produce unnatural dynamics (e.g., premature convergence), without an empirical benchmark to measure authentic human opinion trajectories. To bridge this gap, we introduce DEBATE, the first large-scale empirical benchmark explicitly designed to evaluate the authenticity of interactions among multi-agent role-playing LLMs. DEBATE contains 29,417 messages from multi-round debate conversations among 2,792 U.S.-based participants discussing 107 controversial topics, capturing both publicly-expressed messages and privately-reported opinions. Using DEBATE, we systematically evaluate and identify critical discrepancies between simulated and authentic group dynamics. We further demonstrate DEBATE's utility for aligning LLMs with human behavior through supervised fine-tuning, achieving improvements in surface-level metrics (e.g., ROUGE-L and message length) while highlighting limitations in deeper semantic alignment (e.g., semantic similarity). Our findings highlight both the potential and current limitations of role-playing LLM agents for realistically simulating human-like social dynamics.
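The abstract's pairing of publicly-expressed messages with privately-reported opinions suggests a per-turn record structure. The dataclass below is a hypothetical schema for one such record; all field names and the opinion scale are illustrative assumptions, not the benchmark's actual format.

```python
from dataclasses import dataclass

@dataclass
class DebateTurn:
    # Hypothetical schema for one DEBATE record; field names are illustrative.
    participant_id: str
    topic: str             # one of the 107 controversial topics
    round_index: int       # position within the multi-round conversation
    public_message: str    # message the participant posted to the group
    private_opinion: float # privately-reported stance, e.g. on a Likert-style scale
```

Keeping the private opinion alongside the public message is what makes opinion-trajectory authenticity measurable: a simulated agent can match the messages while diverging on the hidden stance.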
Problem

Research questions and friction points this paper is trying to address.

Modeling opinion change through multi-agent social interactions
Evaluating authenticity of role-playing LLM group dynamics
Bridging discrepancies between simulated and human opinion trajectories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale benchmark for multi-agent role-playing LLMs
Systematic evaluation of simulated versus authentic group dynamics
Supervised fine-tuning to align LLMs with human behavior
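The supervised fine-tuning step above amounts to conditioning the model on a persona and the conversation so far, then training it to reproduce the human participant's actual next message. A minimal sketch of that data formatting, with an assumed prompt/completion layout (the paper's actual template is not specified here):

```python
def to_sft_example(persona: str, history: list[tuple[str, str]], human_reply: str) -> dict:
    # Build one prompt/completion pair: the prompt carries the role-play
    # persona and the (speaker, message) history; the completion is the
    # human's real reply, which the fine-tuned model learns to imitate.
    prompt = f"You are role-playing {persona}.\n"
    prompt += "\n".join(f"{speaker}: {msg}" for speaker, msg in history)
    prompt += "\nYou:"
    return {"prompt": prompt, "completion": " " + human_reply}
```

Fitting this target improves surface metrics such as ROUGE-L and message length, but, as the summary notes, does not by itself close the deeper semantic gap.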