🤖 AI Summary
How can belief updating preserve behavioral realism—particularly agents’ departures from Bayesian updating—while ensuring dynamic consistency across intertemporal decisions?
Method: We develop a multi-prior framework that formalizes ambiguity via a benchmark prior and a set of plausible priors; we then introduce a maximum-likelihood selection rule that identifies the optimal prior within that set, followed by standard Bayesian updating—constituting the first integration of maximum-likelihood inference into a dynamic multi-prior structure.
Contribution: We provide a rigorous axiomatic characterization of a preference-driven robust updating mechanism, reconciling theoretical rigor with empirical plausibility. Our model unifies explanations for canonical probabilistic reasoning biases—including conservatism and confirmation bias—thereby bridging a critical gap between normative decision models and experimental evidence on belief formation.
📝 Abstract
There is a large body of evidence that decision makers frequently depart from Bayesian updating. This paper introduces a model, robust maximum likelihood (RML) updating, in which deviations from Bayesian updating are due to multiple priors/ambiguity. Using the decision maker's preferences over acts before and after the arrival of new information as the primitive of the analysis, I axiomatically characterize a representation in which the decision maker's probability assessment can be described by a benchmark prior, which is reflected in her ex ante ranking of acts, and a set of plausible priors, which is revealed by her updated preferences. When new information is received, the decision maker revises her benchmark prior within the set of plausible priors via the maximum likelihood principle in a way that ensures maximally dynamically consistent behavior, and updates the new prior using Bayes' rule. RML updating accommodates the most commonly observed biases in probabilistic reasoning.
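The two-step procedure described above—maximum-likelihood selection of a prior from the plausible set, followed by Bayesian updating—can be sketched in code. This is an illustrative simplification, not the paper's formal representation: the function name `rml_update` and all variable names are hypothetical, the plausible set is taken as a finite list of priors over finitely many states, and the paper's maximally-dynamically-consistent selection criterion is replaced here by a simple tie-break toward the prior closest to the benchmark.

```python
import numpy as np

def rml_update(benchmark, plausible_set, event):
    """Illustrative sketch of robust maximum likelihood (RML) updating.

    benchmark:     benchmark prior over states (1-D probability array);
                   used here only as a tie-breaker, a stand-in for the
                   paper's dynamic-consistency criterion.
    plausible_set: list of plausible priors (1-D probability arrays).
    event:         boolean mask marking the states consistent with the
                   newly received information.
    """
    def likelihood(p):
        # Probability the prior assigns to the observed event.
        return p[event].sum()

    # Step 1: maximum-likelihood selection among the plausible priors,
    # breaking ties in favor of the prior closest (in L1 distance)
    # to the benchmark prior.
    selected = max(
        plausible_set,
        key=lambda p: (likelihood(p), -np.abs(p - benchmark).sum()),
    )

    # Step 2: standard Bayesian updating of the selected prior:
    # zero out states ruled out by the event, then renormalize.
    posterior = np.where(event, selected, 0.0)
    return posterior / posterior.sum()
```

For instance, with three states, a benchmark prior `[0.5, 0.3, 0.2]`, and two plausible priors `[0.6, 0.3, 0.1]` and `[0.2, 0.3, 0.5]`, observing an event that rules out the third state selects the first prior (it assigns likelihood 0.9 to the event versus 0.5) and Bayes-updates it to `[2/3, 1/3, 0]`.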