🤖 AI Summary
This work addresses the challenge of detecting linguistic deception induced by "too-good-to-be-true" proposals in negotiation settings. Using the game *Diplomacy* as an experimental testbed, we propose a detection framework that integrates formal protocol logic with counterfactual reinforcement learning (RL). Methodologically, we couple counterfactual RL with logical-form parsing to construct a multi-agent value function model, combining contextual text encoding with a lightweight ensemble classifier. We further introduce a "friction-triggering" mechanism to support human-AI collaborative credibility assessment. Evaluated on real human dialogue segments, our approach achieves an F1 score of 0.86 and reduces the false positive rate by 42% relative to a pure large language model baseline. The framework significantly enhances both the accuracy and the practical deployability of deception detection in strategic interpersonal interactions.
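The summary above describes a lightweight ensemble that combines text-derived features with value-function signals, plus a friction trigger for uncertain cases. A minimal sketch of that pipeline shape is below; the scorers, weights, and thresholds are all illustrative placeholders, not the paper's actual features or model.

```python
import math

# Hypothetical cue-word list for a crude lexical scorer; the paper instead
# uses contextual text encoding.
SUSPICIOUS_WORDS = {"guarantee", "promise", "trust", "free"}

def text_score(message: str) -> float:
    """Fraction of tokens that are suspicious cue words (illustrative)."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in SUSPICIOUS_WORDS)
    return hits / len(tokens)

def value_gap_score(proposer_gain: float, recipient_gain: float) -> float:
    """Squash the gap between the proposer's and recipient's estimated
    gains into (0, 1); large one-sided gains look deceptive."""
    return 1.0 / (1.0 + math.exp(-(proposer_gain - recipient_gain)))

def ensemble_score(message, proposer_gain, recipient_gain, w_text=0.5):
    """Lightweight ensemble: weighted average of the two scorers."""
    return (w_text * text_score(message)
            + (1 - w_text) * value_gap_score(proposer_gain, recipient_gain))

def assess(message, proposer_gain, recipient_gain,
           deceive_thresh=0.6, friction_thresh=0.4):
    """Three-way decision: flag, trigger friction, or pass through."""
    score = ensemble_score(message, proposer_gain, recipient_gain)
    if score >= deceive_thresh:
        return "flag_deceptive"
    if score >= friction_thresh:
        return "trigger_friction"  # prompt the human to interrogate it
    return "pass"
```

The middle band between the two thresholds is where friction would be triggered, giving the human a chance to scrutinize the proposal rather than the system deciding unilaterally.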
📝 Abstract
An increasingly prevalent socio-technical problem is people being taken in by offers that sound "too good to be true", where persuasion and trust shape decision-making. This paper investigates how AI can help detect these deceptive scenarios. We analyze how humans strategically deceive each other in *Diplomacy*, a board game that requires both natural language communication and strategic reasoning. This requires extracting logical forms of proposed agreements in player communications and computing the relative rewards of each proposal using agents' value functions. Combined with text-based features, these signals improve deception detection. Our method detects human deception with high precision compared to a Large Language Model approach that flags many true messages as deceptive. Future human-AI interaction tools can build on our methods for deception detection by triggering *friction* to give users a chance to interrogate suspicious proposals.
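The abstract describes computing the relative rewards of a parsed proposal under each agent's value function. A minimal sketch of that idea follows, assuming a toy state representation and made-up value functions; the `too_good_to_be_true` margin rule is a hypothetical heuristic, not the paper's exact decision criterion.

```python
# Illustrative: score a parsed proposal by how each side's estimated value
# would change if it were executed. Real logical forms and value functions
# would come from parsing Diplomacy messages and from trained agents.

def proposal_delta(value_fn, current_state, proposed_state):
    """Change in an agent's estimated value if the proposal is executed."""
    return value_fn(proposed_state) - value_fn(current_state)

def too_good_to_be_true(proposer_vf, recipient_vf, current, proposed,
                        margin=2.0):
    """Flag proposals whose benefit to the recipient is dwarfed by the
    proposer's own gain (hypothetical threshold rule)."""
    proposer_gain = proposal_delta(proposer_vf, current, proposed)
    recipient_gain = proposal_delta(recipient_vf, current, proposed)
    return proposer_gain - recipient_gain > margin

# Toy value functions over a state counting each power's supply centers.
proposer_vf = lambda s: s["austria"]
recipient_vf = lambda s: s["italy"]

current = {"austria": 4, "italy": 4}
proposed = {"austria": 8, "italy": 5}  # Austria quietly gains far more
print(too_good_to_be_true(proposer_vf, recipient_vf, current, proposed))
```

Here the recipient does gain from the proposal, but the proposer's gain is disproportionately larger, which is the asymmetry the value-function comparison is meant to surface.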