🤖 AI Summary
Predicting human decision-making under risk and uncertainty remains a persistent interdisciplinary challenge; existing models, including Prospect Theory, show limited predictive accuracy even on simple tasks such as choice between lotteries. To address this, we propose a "theory-guided feature engineering" framework that encodes behavioral theories as interpretable, domain-informed features within supervised machine learning systems. Surprisingly, our analysis reveals that theories built on basic properties of human and animal learning consistently outperform mainstream theories of deviations from rational choice in predictive power. We further establish that feature construction should be driven both by qualitative behavioral insights (e.g., loss aversion) and by quantitative foresights generated by functional descriptive models (e.g., Prospect Theory). Evaluated in an open choice-prediction tournament, our approach delivers the most accurate predictions while remaining explainable, empirically validating the value of embedding behavioral theory directly into predictive modeling.
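A minimal sketch of the framework this summary describes, assuming a toy encoding of lottery problems and invented choice rates (the paper's actual tasks, feature set, and learner may differ): behavioral theories are encoded as interpretable features, and a standard supervised model is trained on top of them.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy lottery-choice problems (invented values): each row is a choice between
# a safe payoff and a risky prospect paying `hi` with probability `p`, else `lo`.
# Columns: [safe, hi, p, lo]
problems = np.array([
    [ 3.0, 10.0, 0.50,  0.0],
    [ 2.0,  8.0, 0.20,  1.0],
    [ 0.0,  5.0, 0.50, -5.0],
    [ 4.0,  4.0, 0.80,  2.0],
    [ 1.0, 20.0, 0.05,  0.0],
    [-1.0,  3.0, 0.40, -4.0],
])
# Observed rates of choosing the risky option (invented for illustration).
risky_rate = np.array([0.45, 0.60, 0.30, 0.55, 0.35, 0.50])

def theory_features(row):
    """Encode one problem as interpretable, theory-informed features."""
    safe, hi, p, lo = row
    ev_risky = p * hi + (1 - p) * lo
    ev_diff = ev_risky - safe                      # rational-choice baseline
    involves_loss = float(min(safe, hi, lo) < 0)   # qualitative cue tied to loss aversion
    variance = p * (hi - ev_risky) ** 2 + (1 - p) * (lo - ev_risky) ** 2  # payoff risk
    return [ev_diff, involves_loss, variance]

X = np.array([theory_features(r) for r in problems])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, risky_rate)
print(model.predict(X[:2]))  # in-sample sanity check on the toy data
```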
📝 Abstract
Behavioral decision theories aim to explain human behavior. Can they help predict it? We present an open tournament for predicting human choices in fundamental economic decision tasks. The results suggest that integrating certain behavioral theories as features in machine learning systems provides the best predictions. Surprisingly, the most useful theories for prediction build on basic properties of human and animal learning and are very different from mainstream decision theories that focus on deviations from rational choice. Moreover, we find that theoretical features should be based not only on qualitative behavioral insights (e.g., loss aversion), but also on quantitative behavioral foresights generated by functional descriptive models (e.g., Prospect Theory). Our analysis prescribes a recipe for deriving explainable, useful predictions of human decisions.
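To make the qualitative/quantitative distinction concrete, here is a hedged sketch assuming the standard Tversky and Kahneman (1992) parameter values and a logit response rule; the paper's actual features may differ. A binary loss flag captures a qualitative insight (loss aversion), while the Prospect Theory valuations, and the choice probability they imply, are quantitative foresights produced by a functional descriptive model.

```python
import numpy as np

# Tversky & Kahneman (1992) median parameter estimates; treated here as fixed.
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def pt_weight(p, gamma=GAMMA):
    """Inverse-S probability weighting function w(p)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def pt_value(x, alpha=ALPHA, lam=LAMBDA):
    """Value function: concave for gains, steeper (loss-averse) for losses."""
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)

def features(safe, hi, p, lo):
    """Theory-derived features for a safe-vs-risky lottery problem."""
    # Qualitative insight: does the problem involve losses at all?
    loss_flag = float(min(safe, hi, lo) < 0)
    # Quantitative foresight: Prospect Theory valuations of both options...
    v_risky = pt_weight(p) * pt_value(hi) + pt_weight(1 - p) * pt_value(lo)
    v_safe = pt_value(safe)
    # ...and the choice probability they imply under a logit response rule
    # (the sensitivity of 1.0 is an assumption made for illustration).
    p_risky = 1.0 / (1.0 + np.exp(-(v_risky - v_safe)))
    return {"loss_flag": loss_flag, "pt_gap": v_risky - v_safe, "pt_choice": p_risky}

print(features(safe=3.0, hi=10.0, p=0.5, lo=0.0))
```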