AI Summary
Multi-agent debate (MAD) among large language models (LLMs) tends to amplify biases and homogenize perspectives, as agents share identical architectures and reasoning patterns. Method: We first uncover the intrinsic bias-amplification mechanism in MAD and propose DReaMAD, a single-model diversity-aware reasoning framework that elicits heterogeneous reasoning paths without requiring multiple distinct models. DReaMAD integrates prompt refinement, strategy prior modeling, and diversity-oriented prompt perturbation. Contribution/Results: Evaluated on dynamic adversarial decision-making benchmarks (e.g., MetaNIM Arena), DReaMAD significantly improves decision accuracy (+12.7%), reasoning diversity (3.8× entropy increase), and bias mitigation (41.3% reduction in bias score). It eliminates the conventional reliance on model heterogeneity in MAD, establishing a novel paradigm for robust, diverse, and debiased LLM reasoning within a unified model architecture.
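As a rough illustration of the single-model debate idea summarized above, the Python sketch below runs one underlying model under several deliberately perturbed strategy prompts and aggregates the resulting answers. The `query_llm` function, the `PERSPECTIVES` templates, and the debate loop are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of single-model MAD via diversity-oriented prompt
# perturbation. All names here (query_llm, PERSPECTIVES, the debate loop)
# are illustrative assumptions, not the DReaMAD implementation itself.
from collections import Counter


def query_llm(prompt: str) -> str:
    """Hypothetical call into a single underlying LLM; one model serves
    every debater."""
    raise NotImplementedError("plug in your LLM client here")


# Diversity-oriented prompt perturbation: the same task is framed through
# deliberately different strategic priors so that one model produces
# heterogeneous reasoning paths instead of echoing itself.
PERSPECTIVES = [
    "You are a cautious strategist who verifies every intermediate step.",
    "You are an aggressive player who searches for forcing winning moves.",
    "You are a skeptic whose job is to find flaws in the obvious answer.",
]


def single_model_debate(task: str, rounds: int = 2) -> str:
    # Round 0: each perturbed "agent" answers independently.
    answers = [query_llm(f"{p}\n\nTask: {task}\nAnswer:") for p in PERSPECTIVES]
    for _ in range(rounds):
        # Each agent sees the others' answers, critiques them, and revises.
        answers = [
            query_llm(
                f"{p}\n\nTask: {task}\n"
                f"Other agents answered: "
                f"{[a for j, a in enumerate(answers) if j != i]}\n"
                "Critique them and give your revised answer:"
            )
            for i, p in enumerate(PERSPECTIVES)
        ]
    # Aggregate the final positions by simple majority vote.
    return Counter(answers).most_common(1)[0][0]
```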
Abstract
Large Language Models (LLMs) solve complex problems using training-free methods such as prompt engineering and in-context learning, yet ensuring reasoning correctness remains challenging. While self-correction methods such as self-consistency and self-refinement aim to improve reliability, they often reinforce biases due to the lack of effective feedback mechanisms. Multi-Agent Debate (MAD) has emerged as an alternative, but we identify two key limitations: bias reinforcement, where debate amplifies model biases instead of correcting them, and lack of perspective diversity, where all agents share the same model and reasoning patterns, limiting true debate effectiveness. To systematically evaluate these issues, we introduce MetaNIM Arena, a benchmark designed to assess LLMs in adversarial strategic decision-making, where dynamic interactions influence optimal decisions. To overcome MAD's limitations, we propose DReaMAD (Diverse Reasoning via Multi-Agent Debate with Refined Prompt), a novel framework that (1) refines the LLM's strategic prior knowledge to improve reasoning quality and (2) promotes diverse viewpoints within a single model by systematically modifying prompts, reducing bias. Empirical results show that DReaMAD significantly improves decision accuracy, reasoning diversity, and bias mitigation across multiple strategic tasks, establishing it as a more effective approach for LLM-based decision-making.
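For intuition about why a Nim-style benchmark admits exact scoring of decision accuracy: in classic Nim, optimal play is fully characterized by the nim-sum (XOR) of the pile sizes, so an agent's move can be checked against a perfect oracle. The sketch below assumes classic Nim rules (remove any positive number of objects from one pile); MetaNIM Arena's exact variants may differ.

```python
# Perfect-play oracle for classic Nim. A position is losing for the player
# to move iff the XOR (nim-sum) of the pile sizes is zero; otherwise some
# move zeroes the nim-sum. The rules assumed here are classic Nim, used
# only for illustration of oracle-based scoring.
from functools import reduce
from operator import xor


def optimal_nim_move(piles: list[int]) -> tuple[int, int] | None:
    """Return (pile_index, new_size) for a winning move, or None if the
    position is already lost under optimal play."""
    nim_sum = reduce(xor, piles, 0)
    if nim_sum == 0:
        return None  # every move hands the opponent a winning position
    for i, p in enumerate(piles):
        target = p ^ nim_sum  # pile size that zeroes the overall nim-sum
        if target < p:
            return (i, target)
    return None  # unreachable when nim_sum != 0


# Example: with piles [3, 4, 5] the oracle reduces pile 0 from 3 to 1,
# leaving nim-sum 1 ^ 4 ^ 5 == 0 for the opponent.
print(optimal_nim_move([3, 4, 5]))  # -> (0, 1)
```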