🤖 AI Summary
This work addresses limitations of large language models (LLMs) in natural language bargaining, namely weak strategic reasoning and inconsistent negotiation behavior. To this end, the authors propose AgreeMate, a decoupled (modular) dual-agent framework in which buyer and seller agents bargain over goods in natural language using coarse-grained actions. They show that prompt engineering, supervised fine-tuning, and chain-of-thought prompting each improve negotiation performance, as measured by a set of novel metrics. They further use attention probing to examine how the models attend to semantically related tokens (e.g., prices and offers) during negotiation.
📝 Abstract
We introduce AgreeMate, a framework for training Large Language Models (LLMs) to perform strategic price negotiations through natural language. We apply recent advances in LLMs to a negotiation setting in which two agents, a buyer and a seller, bargain over goods in natural language using coarse actions. Specifically, we report the performance of LLMs used as agents within a decoupled (modular) bargaining architecture. We demonstrate that prompt engineering, fine-tuning, and chain-of-thought prompting each enhance model performance, as measured by novel metrics. Finally, we use attention probing to show how the models attend to semantic relationships between tokens during negotiation.
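To make the decoupled setup concrete, the sketch below shows how a buyer and a seller agent might alternate coarse actions (offer, counter, accept) over a price until agreement. This is a minimal illustrative simulation, not the paper's implementation: the action names, the fixed-step concession rule, and the `Agent`/`negotiate` helpers are all hypothetical stand-ins for the LLM-driven components AgreeMate actually uses.

```python
# Hypothetical sketch of a decoupled bargaining loop with coarse actions.
# All names and the concession policy are illustrative assumptions; in
# AgreeMate each agent's policy would be an LLM, not a fixed-step rule.
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class CoarseAction(Enum):
    OFFER = "offer"      # open with an initial price
    COUNTER = "counter"  # respond with a concession
    ACCEPT = "accept"    # take the price on the table

@dataclass
class Agent:
    role: str      # "buyer" or "seller"
    target: float  # current acceptable price (mutates as the agent concedes)
    step: float    # concession size per round

    def act(self, current_offer: Optional[float]) -> Tuple[CoarseAction, float]:
        """Map the dialogue state to a coarse action plus a price."""
        if current_offer is None:
            return CoarseAction.OFFER, self.target
        # Accept if the offer already meets this agent's current target.
        good_enough = (current_offer <= self.target if self.role == "buyer"
                       else current_offer >= self.target)
        if good_enough:
            return CoarseAction.ACCEPT, current_offer
        # Otherwise concede toward the opponent by a fixed step.
        self.target += self.step if self.role == "buyer" else -self.step
        return CoarseAction.COUNTER, self.target

def negotiate(buyer: Agent, seller: Agent, max_rounds: int = 20) -> Optional[float]:
    """Alternate coarse actions until one side accepts or rounds run out."""
    offer = None
    agents = [seller, buyer]  # seller opens
    for i in range(max_rounds):
        action, price = agents[i % 2].act(offer)
        if action == CoarseAction.ACCEPT:
            return price
        offer = price
    return None  # no agreement reached

deal = negotiate(Agent("buyer", target=80.0, step=5.0),
                 Agent("seller", target=100.0, step=5.0))
```

With these toy concession rules the two agents meet in the middle at 90.0; the point of the decoupled design is that either agent's policy can be swapped out (e.g., for a fine-tuned LLM) without changing the negotiation loop.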