Human Choice Prediction in Language-based Persuasion Games: Simulation-based Off-Policy Evaluation

📅 2023-05-17
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
This work addresses the off-policy evaluation (OPE) challenge of predicting human decisions in language-based persuasion games. The authors propose a simulation-based training paradigm that combines interactions generated across the entire agent space with simulated decision-makers, improving generalization to unseen expert agents. Using a dedicated application, they collected a dataset of 87K real human decisions from a repeated decision-making game played against artificial agents. The proposed learning strategy yields significant OPE gains, including a 7.1% improvement in prediction accuracy on the top 15% most challenging cases. The code and dataset are publicly released.
๐Ÿ“ Abstract
Recent advances in Large Language Models (LLMs) have spurred interest in designing LLM-based agents for tasks that involve interaction with human and artificial agents. This paper addresses a key aspect in the design of such agents: predicting human decisions in off-policy evaluation (OPE). We focus on language-based persuasion games, where an expert aims to influence the decision-maker through verbal messages. In our OPE framework, the prediction model is trained on human interaction data collected from encounters with one set of expert agents, and its performance is evaluated on interactions with a different set of experts. Using a dedicated application, we collected a dataset of 87K decisions from humans playing a repeated decision-making game with artificial agents. To enhance off-policy performance, we propose a simulation technique involving interactions across the entire agent space and simulated decision-makers. Our learning strategy yields significant OPE gains, e.g., improving prediction accuracy in the top 15% challenging cases by 7.1%. Our code and the large dataset we collected and generated are submitted as supplementary material and publicly available in our GitHub repository: https://github.com/eilamshapira/HumanChoicePrediction
Problem

Research questions and friction points this paper is trying to address.

Predict human decisions in off-policy evaluation (OPE)
Enhance OPE performance using simulation techniques
Improve prediction accuracy in challenging cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulation-based off-policy evaluation technique
Large dataset of 87K human decisions
Interaction simulation across entire agent space
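The off-policy setup above can be sketched as follows: a prediction model is fit on interactions with one set of expert agents and evaluated on interactions with held-out experts, so the split is by expert rather than by individual interaction. This is a minimal illustrative sketch with synthetic data; all names (`experts`, the record layout, the majority baseline) are assumptions for illustration, not the paper's actual code or API.

```python
import random

random.seed(0)

# Synthetic stand-in for the interaction data:
# each record is (expert_id, message_features, human_decision).
experts = [f"expert_{i}" for i in range(10)]
data = [(e, [random.random()], random.randint(0, 1))
        for e in experts for _ in range(100)]

# Off-policy split: hold out entire experts, not random interactions,
# so the model is evaluated on agents it never saw during training.
train_experts = set(experts[:7])
train_set = [r for r in data if r[0] in train_experts]
test_set = [r for r in data if r[0] not in train_experts]

# A trivial baseline predictor: the majority decision in the training set.
majority = round(sum(d for _, _, d in train_set) / len(train_set))
accuracy = sum(int(majority == d) for _, _, d in test_set) / len(test_set)
print(f"held-out experts: {len(experts) - len(train_experts)}, "
      f"baseline accuracy: {accuracy:.2f}")
```

The key design choice this illustrates is the evaluation boundary: because train and test sets share no expert agents, any accuracy gain reflects generalization across the agent space, which is what the paper's simulation technique targets.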