🤖 AI Summary
Prior human-AI collaborative decision-making experiments suffer from ill-defined “decision problems” and lack rigorous criteria for attributing performance deficits to cognitive biases. Method: We propose a formal decision-problem framing grounded in statistical decision theory and information economics, establishing an information-sufficiency criterion for “attributable-bias experiments”: performance loss may be ascribed to cognitive bias only if participants possess all the information a rational agent would need to identify the normative decision. Contribution/Results: A meta-assessment of 39 AI-assisted decision studies finds that only 10 (26%) satisfy this criterion in at least one treatment condition; the remaining 29 (74%) under-specify the decision problem, so their claims of biased behavior are not well supported. We further introduce an analytically tractable characterization of performance losses, providing a normative methodological foundation for designing valid decision experiments.
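To make the framing concrete, here is a minimal sketch of a decision problem in the statistical-decision-theory sense. All states, payoffs, and probabilities below are illustrative assumptions, not taken from the paper; the point is that the information-sufficiency criterion amounts to giving participants everything `normative_action` consumes: the prior, the signal-generating likelihood, the payoffs, and the observed signal.

```python
import numpy as np

# A hypothetical binary-state decision problem (all numbers illustrative).
states = [0, 1]                      # unknown state of the world
prior = np.array([0.7, 0.3])         # P(state)
actions = [0, 1]                     # available decisions
# utility[action, state]: payoff of each action in each state
utility = np.array([[1.0, -2.0],
                    [-0.5, 1.5]])
# Signal structure: likelihood[signal, state] = P(signal | state)
likelihood = np.array([[0.8, 0.3],
                       [0.2, 0.7]])

def normative_action(signal: int) -> int:
    """Bayes-optimal action: maximize posterior expected utility."""
    posterior = likelihood[signal] * prior   # unnormalized P(state | signal)
    posterior /= posterior.sum()
    expected_utility = utility @ posterior   # one value per action
    return int(np.argmax(expected_utility))

for s in (0, 1):
    print(f"signal={s}: normative action = {normative_action(s)}")
```

If any of these inputs is withheld or left ambiguous, a rational agent cannot identify the normative decision, and a participant's deviation from it cannot be attributed to bias.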
📝 Abstract
Decision-making with information displays is a key focus of research in areas like human-AI collaboration and data visualization. However, what constitutes a decision problem, and what is required for an experiment to conclude that decisions are flawed, remain imprecisely defined. We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics. We claim that to attribute loss in human performance to bias, an experiment must provide the information that a rational agent would need to identify the normative decision. We evaluate whether recent empirical research on AI-assisted decisions achieves this standard, and find that only 10 (26%) of 39 studies claiming to identify biased behavior presented participants with sufficient information to warrant this claim in at least one treatment condition. We motivate the value of studying well-defined decision problems by describing a characterization of performance losses that such problems make it possible to conceive.
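A well-defined problem also makes performance loss measurable against a rational benchmark. Reusing the definitions from the sketch above, here is one illustrative way (not the paper's own characterization) to quantify the loss of an observed decision policy as its expected-utility gap from the normative policy:

```python
def expected_utility(policy) -> float:
    """Expected payoff of a signal -> action policy under the
    data-generating process (prior, likelihood) defined above."""
    total = 0.0
    for state in states:
        for signal in (0, 1):
            p_joint = prior[state] * likelihood[signal, state]
            total += p_joint * utility[policy(signal), state]
    return total

# Performance loss: gap between the rational benchmark and an
# observed policy (here a hypothetical signal-ignoring participant).
rational = expected_utility(normative_action)
observed = expected_utility(lambda signal: 0)
print(f"performance loss = {rational - observed:.3f}")
```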