Biased Error Attribution in Multi-Agent Human-AI Systems Under Delayed Feedback

πŸ“… 2026-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In multi-agent settings with delayed feedback, human decision-makers often misattribute responsibility due to cognitive biases, leading to inefficient learning and suboptimal strategies. This study addresses the issue through controlled game-based experiments, combining behavioral data analysis with causal attribution modeling to systematically identify and name the phenomenon of β€œbiased error attribution under delayed feedback.” The findings show that participants are especially sensitive to negative outcomes yet frequently assign blame to AI agents that did not cause the failure; moreover, delayed feedback significantly exacerbates this bias, undermining the effectiveness of human-AI collaboration. By uncovering systematic patterns of erroneous responsibility attribution in multi-agent environments, the work provides a theoretical foundation for designing cognitive interventions and algorithms that improve human-AI coordination.

πŸ“ Abstract
Human decision-making is strongly influenced by cognitive biases, particularly under conditions of uncertainty and risk. While prior work has examined bias in single-step decisions with immediate outcomes and in human interaction with a single autonomous agent, comparatively little attention has been paid to decision-making under delayed outcomes involving multiple AI agents, where decisions at each step affect subsequent states. In this work, we study how delayed outcomes shape decision-making and responsibility attribution in a multi-agent human-AI task. Using a controlled game-based experiment, we analyze how participants adjust their behavior following positive and negative outcomes. We observe asymmetric responses to gains and losses, with stronger corrective adjustments after negative outcomes. Importantly, participants often fail to identify the actions that caused failure and misattribute responsibility across AI agents, leading to systematic revisions of decisions that are only weakly related to the underlying causes of poor performance. We refer to this phenomenon as a form of attribution bias, manifested as biased error attribution under delayed feedback. Our findings highlight how cognitive biases can be amplified in human-AI systems with delayed outcomes and multiple autonomous agents, underscoring the need for decision-support tools that strengthen causal understanding and learning over time.
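The misattribution pattern described above can be illustrated with a toy simulation. This sketch is not the paper's experimental design or model: the agent count, failure rate, and the recency-based blame heuristic are all illustrative assumptions. It shows how an observer who sees only a delayed outcome, and blames the most recently acting agent, assigns blame that is uncorrelated with the true cause of failure.

```python
import random

random.seed(0)

NUM_AGENTS = 3
FAULTY_AGENT = 1      # assumption: only this agent's actions can cause failures
EPISODES = 1000

blame_counts = [0] * NUM_AGENTS   # who the observer blames
cause_counts = [0] * NUM_AGENTS   # who actually caused each failure

for _ in range(EPISODES):
    # Each agent acts once in sequence; only the faulty agent can err.
    errors = [a == FAULTY_AGENT and random.random() < 0.5
              for a in range(NUM_AGENTS)]
    if not any(errors):
        continue  # positive outcome: no blame is assigned
    cause_counts[FAULTY_AGENT] += 1
    # Delayed feedback: the observer sees only the final outcome and, in
    # this toy model, blames the agent that acted last (a recency
    # heuristic) instead of tracing the failure to its true cause.
    blame_counts[NUM_AGENTS - 1] += 1

print("true causes:   ", cause_counts)
print("assigned blame:", blame_counts)
```

Under these assumptions, all blame lands on the last-acting agent while the actual faulty agent receives none, mirroring the systematic (rather than random) misattribution the abstract reports.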
Problem

Research questions and friction points this paper is trying to address.

attribution bias
delayed feedback
multi-agent systems
human-AI interaction
error attribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

attribution bias
delayed feedback
multi-agent human-AI systems
cognitive bias
error attribution