Can LLMs Replace Economic Choice Prediction Labs? The Case of Language-based Persuasion Games

📅 2024-01-30
📈 Citations: 8
Influential: 0
🤖 AI Summary
Scarce human choice data severely constrains the training of behavioral prediction models, particularly in complex experimental-economics settings such as multi-round strategic language interactions. Method: This work investigates whether large language models (LLMs), specifically GPT-series models, can generate synthetic natural-language persuasion-game interactions to substitute for scarce real human data, and evaluates their utility by training behavioral prediction models on the generated data. Contribution/Results: The paper provides a systematic empirical demonstration that models trained on LLM-generated persuasion dialogues can predict human choices effectively, and can even outperform models trained on real human data. These findings point to a low-cost paradigm for behavioral modeling in experimental economics, with implications for other data-scarce behavioral-science applications.
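To make the pipeline concrete, here is a minimal, hypothetical sketch of the idea: a stub stands in for an LLM prompted to play both roles of a persuasion game, the resulting synthetic interactions become training data, and a simple history-conditioned predictor is fit on them. All function names, probabilities, and features are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: train a choice predictor on synthetic persuasion-game
# data instead of scarce human data. synthetic_round() is a stand-in for an
# LLM playing the expert and decision-maker roles; all numbers are made up.
import random
from collections import Counter, defaultdict

random.seed(0)

def synthetic_round(history):
    """Stub for one simulated round of a persuasion game.

    The 'expert' sends a message (truthful or not); the simulated
    decision-maker's willingness to go depends on past betrayals.
    """
    truthful = random.random() < 0.7                      # expert honesty
    betrayed_before = any(not t for t, _ in history)
    go = random.random() < (0.4 if betrayed_before else 0.8)
    return truthful, go

def generate_games(n_games=500, rounds=6):
    """Roll out synthetic games and record (feature, choice) pairs."""
    data = []
    for _ in range(n_games):
        history = []
        for _ in range(rounds):
            truthful, go = synthetic_round(history)
            # History feature: was the decision-maker ever misled so far?
            data.append((any(not t for t, _ in history), go))
            history.append((truthful, go))
    return data

def train(data):
    """Majority-vote predictor conditioned on the history feature."""
    votes = defaultdict(Counter)
    for feature, choice in data:
        votes[feature][choice] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

model = train(generate_games())
print(model)  # learned policy: go until the first betrayal, then stop
```

A real instantiation would replace the stub with LLM-generated natural-language messages and the majority-vote table with a fine-tuned sequence model, but the data flow, synthetic rollouts in, human-choice predictor out, is the same.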

📝 Abstract
Human choice prediction in economic contexts is crucial for applications in marketing, finance, public policy, and more. This task, however, is often constrained by the difficulties in acquiring human choice data. With most experimental economics studies focusing on simple choice settings, the AI community has explored whether LLMs can substitute for humans in these predictions and examined more complex experimental economics settings. However, a key question remains: can LLMs generate training data for human choice prediction? We explore this in language-based persuasion games, a complex economic setting involving natural language in strategic interactions. Our experiments show that models trained on LLM-generated data can effectively predict human behavior in these games and even outperform models trained on actual human data.
Problem

Research questions and friction points this paper is trying to address.

Exploring LLMs as data generators for human choice prediction in economic contexts
Investigating LLMs' dual role in data generation and behavioral prediction
Analyzing how strategic factors influence decision-making in persuasion games
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate training data for human choice prediction
LLMs serve as both data generators and predictors
LLMs capture history-dependent patterns to improve predictions
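One way to picture the history-dependence mentioned above is as a small feature summary computed over a decision-maker's past rounds. The field names and round encoding below are illustrative assumptions, not the paper's actual feature set.

```python
# Hypothetical sketch of history-dependent features for choice prediction.
# Each past round is encoded as (message_was_truthful, went, payoff);
# the exact representation in the paper may differ.
def history_features(rounds):
    """Summarize past rounds into inputs for a choice predictor."""
    n = len(rounds)
    if n == 0:
        return {"rounds": 0, "betrayal_rate": 0.0,
                "go_rate": 0.0, "mean_payoff": 0.0}
    return {
        "rounds": n,
        "betrayal_rate": sum(1 for t, _, _ in rounds if not t) / n,
        "go_rate": sum(1 for _, g, _ in rounds if g) / n,
        "mean_payoff": sum(p for _, _, p in rounds) / n,
    }

# Example: one truthful round that paid off, one betrayal that did not.
print(history_features([(True, True, 1.0), (False, True, -1.0)]))
# {'rounds': 2, 'betrayal_rate': 0.5, 'go_rate': 1.0, 'mean_payoff': 0.0}
```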