FAIRGAME: a Framework for AI Agents Bias Recognition using Game Theory

📅 2025-04-19
📈 Citations: 2
Influential: 0
🤖 AI Summary
Multi-agent interactions exacerbate the opacity and unpredictability of AI decision-making, hindering trustworthy deployment. To address this, we propose FAIRGAME—a novel, standardized, and reproducible interpretability-testing framework that uniquely integrates game-theoretic modeling with large language model (LLM) agent bias analysis. FAIRGAME supports flexible configuration of LLMs, languages, personality traits, and strategic knowledge, enabling systematic quantification of biases across model, language, and strategy dimensions via multi-agent simulation, while predicting emergent behavioral patterns. Its key innovation lies in the deep embedding of formal game-theoretic analysis into the LLM agent evaluation pipeline, thereby enabling interpretable bias quantification and theory-grounded behavioral prediction. Extensive experiments on canonical game-theoretic scenarios demonstrate FAIRGAME’s effectiveness in bias detection, result reproducibility, and theoretical consistency.

📝 Abstract
Letting AI agents interact in multi-agent applications adds a layer of complexity to the interpretability and prediction of AI outcomes, with profound implications for their trustworthy adoption in research and society. Game theory offers powerful models to capture and interpret strategic interaction among agents, but requires the support of reproducible, standardized and user-friendly IT frameworks to enable comparison and interpretation of results. To this end, we present FAIRGAME, a Framework for AI Agents Bias Recognition using Game Theory. We describe its implementation and usage, and we employ it to uncover biased outcomes in popular games among AI agents, depending on the employed Large Language Model (LLM) and used language, as well as on the personality trait or strategic knowledge of the agents. Overall, FAIRGAME allows users to reliably and easily simulate their desired games and scenarios and compare the results across simulation campaigns and with game-theoretic predictions, enabling the systematic discovery of biases, the anticipation of emerging behavior out of strategic interplays, and empowering further research into strategic decision-making using LLM agents.
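The abstract describes comparing simulated agent outcomes against game-theoretic predictions. As a minimal sketch of what such a prediction looks like (not FAIRGAME's actual code), the snippet below computes the pure-strategy Nash equilibria of a one-shot Prisoner's Dilemma, the canonical baseline that agent choices would be compared against; the payoff values are the standard textbook ones, assumed here for illustration.

```python
from itertools import product

# Standard Prisoner's Dilemma payoffs (row player, column player);
# C = cooperate, D = defect. Values are the common textbook choice.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    equilibria = []
    for a, b in product(ACTIONS, repeat=2):
        u_a, u_b = payoffs[(a, b)]
        # (a, b) is an equilibrium if neither player gains by
        # unilaterally deviating to the other action.
        best_a = all(payoffs[(alt, b)][0] <= u_a for alt in ACTIONS)
        best_b = all(payoffs[(a, alt)][1] <= u_b for alt in ACTIONS)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # [('D', 'D')]: mutual defection
```

Deviations of observed LLM agent behavior from this equilibrium prediction (e.g. systematically higher cooperation under certain languages or personality traits) are the kind of signal the framework quantifies as bias.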
Problem

Research questions and friction points this paper is trying to address.

Detecting biases in AI agent interactions using game theory
Standardizing frameworks for comparing AI strategic outcomes
Analyzing LLM-based agent biases across languages and traits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Game theory models strategic AI agent interactions
Standardized framework for bias recognition in AI
Simulates games to compare LLM agent behaviors
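The points above hinge on comparing agent behavior across configurations. The sketch below is purely illustrative: FAIRGAME's real pipeline queries LLM agents, whereas here hypothetical stochastic stand-in policies emulate two personality traits so that a simple bias metric, the gap in cooperation rates, can be shown end to end. All names and probabilities are assumptions, not measured LLM behavior.

```python
import random

def play_round(p_cooperate, rng):
    # Stand-in agent policy: cooperate with a fixed probability.
    return "C" if rng.random() < p_cooperate else "D"

def cooperation_rate(p_cooperate, rounds=1000, seed=0):
    # Seeded RNG keeps the simulation campaign reproducible.
    rng = random.Random(seed)
    plays = [play_round(p_cooperate, rng) for _ in range(rounds)]
    return plays.count("C") / rounds

# Hypothetical trait-conditioned policies (illustrative values only).
rate_cooperative = cooperation_rate(p_cooperate=0.8)
rate_selfish = cooperation_rate(p_cooperate=0.2)
print(f"cooperation-rate gap: {rate_cooperative - rate_selfish:.2f}")
```

In the framework as described, the same comparison would run across LLMs, prompt languages, and strategic-knowledge settings rather than hand-set probabilities, with the gap interpreted against the game's equilibrium prediction.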