Emulating Aggregate Human Choice Behavior and Biases with GPT Conversational Agents

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether LLMs can model individual-level human cognitive biases and their dynamic modulation by contextual factors (such as cognitive load) in interactive dialogue settings. By reframing classic decision-making tasks as conversations, the authors collected behavioral data from 1,100 participants, including demographic information and dialogue transcripts. Using GPT-4 and GPT-5, they simulated each participant's decision behavior under varying levels of conversational complexity. This work presents the first systematic evaluation of large language models' capacity to replicate individual-level cognitive biases within a dialogue framework, showing that both models simulate biases accurately across three distinct decision scenarios. Notably, significant differences in behavioral alignment were observed between GPT-4 and GPT-5, offering an empirical foundation for developing bias-aware, adaptive AI systems.

📝 Abstract
Cognitive biases often shape human decisions. While large language models (LLMs) have been shown to reproduce well-known biases, a more critical question is whether LLMs can predict biases at the individual level and emulate the dynamics of biased human behavior when contextual factors, such as cognitive load, interact with these biases. We adapted three well-established decision scenarios into a conversational setting and conducted a human experiment (N=1100). Participants engaged with a chatbot that facilitated decision-making through simple or complex dialogues. Results revealed robust biases. To evaluate how LLMs emulate human decision-making under similar interactive conditions, we used participant demographics and dialogue transcripts to simulate these conditions with LLMs based on GPT-4 and GPT-5. The LLMs reproduced human biases with precision. We found notable differences between models in how closely they aligned with human behavior. This has important implications for designing and evaluating adaptive, bias-aware LLM-based AI systems in interactive contexts.
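The paper does not release its simulation code; the sketch below is a hypothetical illustration of the setup the abstract describes: conditioning an LLM on a participant's demographics and dialogue transcript, then posing the same decision question. All field names, roles, and prompt wording are assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of the simulation setup described in the abstract:
# replay a participant's session as chat messages so an LLM can be asked
# to make the same decision. Field names and prompt wording are
# illustrative assumptions only.

def build_simulation_prompt(demographics: dict,
                            transcript: list[tuple[str, str]],
                            decision_question: str) -> list[dict]:
    """Assemble chat messages that replay one participant's session."""
    profile = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    system = (
        "You are simulating a human participant in a decision-making study. "
        f"Participant profile: {profile}. "
        "Answer as this participant would, not as an AI assistant."
    )
    messages = [{"role": "system", "content": system}]
    # Replay the chatbot dialogue so the model sees the same simple or
    # complex conversational context the human experienced.
    for speaker, text in transcript:
        role = "assistant" if speaker == "chatbot" else "user"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": decision_question})
    return messages

# Example: a framing-style scenario under a short, simple dialogue.
msgs = build_simulation_prompt(
    demographics={"age": 34, "gender": "female"},
    transcript=[("chatbot", "Here are two treatment programs."),
                ("participant", "Tell me about Program A.")],
    decision_question="Which program do you choose, A or B?",
)
```

The resulting `msgs` list could then be sent to a GPT-4 or GPT-5 model through a chat-completions API, with the model's reply scored against the human participant's recorded choice.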
Problem

Research questions and friction points this paper is trying to address.

cognitive biases
human decision-making
large language models
conversational agents
individual-level prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

conversational agents
cognitive biases
large language models
human decision-making
behavioral emulation