Underspecified Human Decision Experiments Considered Harmful

📅 2024-01-25
📈 Citations: 5
Influential: 0
🤖 AI Summary
Prior human-AI collaborative decision-making experiments suffer from ill-defined "decision problems" and lack rigorous criteria for attributing performance deficits to cognitive biases. Method: We propose a formal decision-problem framing grounded in statistical decision theory and information economics, establishing an information-sufficiency criterion for bias-attribution experiments: performance loss may be ascribed to cognitive bias only if participants possess all the information a rational agent would need to identify the normative decision. Contribution/Results: A meta-assessment of 39 AI-augmented decision studies finds that only 10 (26%) satisfy this criterion; the remaining 74% risk unreliable conclusions due to problem under-specification. We further introduce an analytically tractable characterization of performance losses, providing a normative methodological foundation for designing valid decision experiments.

📝 Abstract
Decision-making with information displays is a key focus of research in areas like human-AI collaboration and data visualization. However, what constitutes a decision problem, and what is required for an experiment to conclude that decisions are flawed, remain imprecise. We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics. We claim that to attribute loss in human performance to bias, an experiment must provide the information that a rational agent would need to identify the normative decision. We evaluate whether recent empirical research on AI-assisted decisions achieves this standard. We find that only 10 (26%) of 39 studies that claim to identify biased behavior presented participants with sufficient information to make this claim in at least one treatment condition. We motivate the value of studying well-defined decision problems by describing a characterization of the performance losses that such problems allow to be conceived.
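To make the abstract's criterion concrete, here is a minimal sketch of a decision problem in the statistical decision theory sense the paper draws on: a state space, an action space, a payoff rule, and a signal (information) structure, from which the normative decision is the expected-payoff-maximizing action under the Bayesian posterior. The scenario, names, and all numbers below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical decision problem: states, actions, payoffs, and a signal
# structure. A rational agent needs all four components to identify the
# normative decision; an experiment that withholds any of them cannot
# attribute performance loss to bias. All numbers are illustrative.

states = ["disease", "healthy"]
actions = ["treat", "no_treat"]

prior = {"disease": 0.3, "healthy": 0.7}

# Payoff of each (action, state) pair: the scoring rule.
payoff = {
    ("treat", "disease"): 1.0, ("treat", "healthy"): -0.2,
    ("no_treat", "disease"): -1.0, ("no_treat", "healthy"): 0.0,
}

# Signal likelihoods P(signal | state): the information shown to the agent.
likelihood = {
    ("pos", "disease"): 0.9, ("neg", "disease"): 0.1,
    ("pos", "healthy"): 0.2, ("neg", "healthy"): 0.8,
}

def posterior(signal):
    """Bayes update of the prior given an observed signal."""
    joint = {s: prior[s] * likelihood[(signal, s)] for s in states}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

def normative_action(signal):
    """Action maximizing expected payoff under the posterior."""
    post = posterior(signal)
    return max(actions,
               key=lambda a: sum(post[s] * payoff[(a, s)] for s in states))
```

With these numbers, `normative_action("pos")` returns `"treat"` and `normative_action("neg")` returns `"no_treat"`; deviation from this benchmark is only attributable to bias when participants were actually shown the prior, likelihoods, and payoffs.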
Problem

Research questions and friction points this paper is trying to address.

Defining precise criteria for human decision problems in experiments
Assessing if studies provide sufficient information to claim decision bias
Evaluating recent AI-assisted decision research against normative standards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Define decision problem using decision theory
Require sufficient information for bias claims
Evaluate AI studies against normative standards
J. Hullman
Northwestern University, USA
Alex Kale
Assistant Professor of Computer Science and Data Science, University of Chicago
Visualization · uncertainty · HCI
Jason D. Hartline
Northwestern University, USA