🤖 AI Summary
This study investigates how linguistic framing amplifies strategic coordination biases among communicating agents in multi-agent systems. Using the FAIRGAME framework, we conduct controlled experiments with GPT-4o and Llama 4 Maverick in both one-shot and repeated games, systematically varying linguistic variants, agent personas, and game structures while comparing conditions with and without inter-agent communication. Results demonstrate that communication, while enhancing cooperation, significantly exacerbates language-driven behavioral biases. Crucially, linguistic choice, agent persona, and task structure interact to jointly modulate coordination efficacy. This work provides the first empirical evidence of communication's "double-edged sword" effect on linguistic bias: it facilitates alignment yet simultaneously reinforces systematic deviations rooted in language design. We thereby establish linguistic engineering as a critical intervention dimension for strategic regulation in multi-agent systems.
📝 Abstract
Large Language Model (LLM)-based agents are increasingly deployed in multi-agent scenarios where coordination is crucial but not always assured. Previous studies indicate that the language used to frame strategic scenarios can influence cooperative behavior. This paper explores whether allowing agents to communicate amplifies these language-driven effects. Leveraging the FAIRGAME framework, we simulate one-shot and repeated games across different languages and models, both with and without communication. Our experiments, conducted with two advanced LLMs, GPT-4o and Llama 4 Maverick, reveal that communication significantly influences agent behavior, though its impact varies by language, personality, and game structure. These findings underscore the dual role of communication: it fosters coordination while also reinforcing biases.