2-Step Agent: A Framework for the Interaction of a Decision Maker with AI Decision Support

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited understanding of adverse effects arising from human–AI interaction in AI-assisted decision-making. The authors propose the 2-Step Agent, a framework that uses Bayesian causal inference to model how AI predictions update a rational decision-maker's beliefs and how those updated beliefs in turn shape downstream decisions and outcomes. The framework shows that even a single misaligned prior belief can be enough for AI assistance to degrade decision quality relative to the unassisted case. Simulation experiments validate this mechanism and highlight the importance of thorough model documentation and proper user training for effective AI assistance.

📝 Abstract
Across a growing number of fields, human decision making is supported by predictions from AI models. However, we still lack a deep understanding of the effects of adoption of these technologies. In this paper, we introduce a general computational framework, the 2-Step Agent, which models the effects of AI-assisted decision making. Our framework uses Bayesian methods for causal inference to model 1) how a prediction on a new observation affects the beliefs of a rational Bayesian agent, and 2) how this change in beliefs affects the downstream decision and subsequent outcome. Using this framework, we show by simulations how a single misaligned prior belief can be sufficient for decision support to result in worse downstream outcomes compared to no decision support. Our results reveal several potential pitfalls of AI-driven decision support and highlight the need for thorough model documentation and proper user training.
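The abstract's core claim, that a single misaligned prior belief can make decision support worse than no support, can be sketched with a toy simulation. The setup below is illustrative and not taken from the paper: a binary state with a known base rate, an AI whose true accuracy is modest, and a Bayesian agent whose *believed* AI accuracy is miscalibrated upward. All numbers and the decision rule are assumptions for the sketch.

```python
import random

random.seed(0)

BASE_RATE = 0.7         # P(state = 1); the agent's prior is well calibrated to this
TRUE_AI_ACC = 0.55      # the AI's actual accuracy (barely better than chance)
BELIEVED_AI_ACC = 0.95  # the agent's misaligned belief about the AI's accuracy

def posterior(prior, ai_says_one, believed_acc):
    """Bayesian update of P(state = 1) after observing the AI's prediction."""
    like1 = believed_acc if ai_says_one else 1 - believed_acc  # P(ai | state=1)
    like0 = 1 - believed_acc if ai_says_one else believed_acc  # P(ai | state=0)
    return like1 * prior / (like1 * prior + like0 * (1 - prior))

def simulate(n=20000):
    correct_unassisted = correct_assisted = 0
    for _ in range(n):
        state = 1 if random.random() < BASE_RATE else 0
        # Step 0 (no support): act on the prior alone.
        if (1 if BASE_RATE > 0.5 else 0) == state:
            correct_unassisted += 1
        # Step 1: the AI's prediction is correct with probability TRUE_AI_ACC.
        ai = state if random.random() < TRUE_AI_ACC else 1 - state
        # Step 2: the agent updates beliefs using its (wrong) accuracy belief,
        # then acts on the posterior.
        belief = posterior(BASE_RATE, ai == 1, BELIEVED_AI_ACC)
        if (1 if belief > 0.5 else 0) == state:
            correct_assisted += 1
    return correct_unassisted / n, correct_assisted / n

unassisted, assisted = simulate()
print(f"unassisted accuracy: {unassisted:.3f}, assisted: {assisted:.3f}")
```

Because the agent overtrusts the AI, the posterior always crosses the 0.5 threshold in the direction of the AI's prediction, so assisted accuracy collapses toward the AI's true accuracy (~0.55) while the unassisted prior alone achieves ~0.70. A single wrong prior belief, here about the AI's accuracy, is sufficient for support to hurt.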
Problem

Research questions and friction points this paper is trying to address.

AI decision support
human decision making
belief updating
downstream outcomes
causal inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

2-Step Agent
Bayesian causal inference
AI-assisted decision making
belief updating
decision support systems
Otto Nyberg
Department of Medical Informatics, Amsterdam UMC, University of Amsterdam, the Netherlands; Amsterdam Public Health Research Institute, the Netherlands
Fausto Carcassi
University of Amsterdam
quantification, modality, gradability, language evolution, semantics
Giovanni Cinà
Amsterdam University Medical Center | University of Amsterdam
Medical AI, Machine Learning, Mathematical Logic