Algorithmic Assistance with Recommendation-Dependent Preferences

📅 2022-08-16
🏛️ ACM Conference on Economics and Computation
📈 Citations: 9
Influential: 0
🤖 AI Summary
This paper identifies that algorithmic recommendations not only update decision-makers’ beliefs but also reshape their preferences by establishing themselves as default anchors, inducing “recommendation-dependent preferences” that lead to excessive compliance and Pareto inefficiency. To address this, the authors formally model this preference-shaping mechanism and develop a behavioral game-theoretic framework that integrates Bayesian belief updating with counterfactual evaluation, proving that standard recommendation mechanisms reduce social welfare under preference dependence. They then propose a preference-aware recommendation calibration framework in which recommendation intensity is endogenously regulated via mechanism design. Experiments in simulated judicial and medical settings show that the calibration algorithm significantly reduces excessive compliance while improving decision efficiency and social welfare. The stated core contributions are: (i) the first formal identification and modeling of algorithms’ preference-shaping effect, and (ii) the first falsifiable, implementable calibration scheme for mitigating such effects.
📝 Abstract
One important application of algorithms is to turn complex data into simple predictions or recommendations that help decision-makers take better decisions. Examples of this include risk assessments presented to judges or doctors. We typically think of such algorithmic assessments as providing additional information about which choices will lead to better outcomes. But when a decision-maker obtains algorithmic assistance, they may not only react to the information. The decision-maker may view the input of the algorithm as recommending a default action, making it costly for them to deviate. In this article, we consider the effect and design of algorithmic recommendations when they affect choices not just by shifting beliefs, but also by altering preferences. We show that recommendation dependence creates inefficiencies where the decision-maker is overly responsive to the recommendation, and propose changes to the design of recommendation algorithms to counteract this response.
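The mechanism the abstract describes can be illustrated with a small numerical sketch (a toy model, not the paper's formal setup; the payoff structure and the deviation cost `k` are assumptions introduced here). A decision-maker with posterior belief `p` that action 1 is correct normally follows that belief, but when deviating from the algorithm's recommended default carries an extra cost, the compliance threshold shifts and the recommendation is followed even against the decision-maker's own belief:

```python
# Toy model of recommendation-dependent preferences (illustrative only;
# the linear payoffs and the deviation cost are assumptions, not the
# paper's formal model).

def best_action(p, recommendation=None, deviation_cost=0.0):
    """Pick the action with the higher expected payoff.

    p              : posterior probability that action 1 is correct
    recommendation : the algorithm's suggested default (0, 1, or None)
    deviation_cost : extra utility cost of deviating from the default
    """
    payoff = {a: (p if a == 1 else 1 - p) for a in (0, 1)}
    if recommendation is not None:
        for a in (0, 1):
            if a != recommendation:
                payoff[a] -= deviation_cost
    return max(payoff, key=payoff.get)

# Without a default, the decision-maker follows their belief: at p = 0.4,
# action 0 is optimal.
print(best_action(0.4))                                         # 0
# With a recommended default of 1 and deviation cost 0.25, the same
# belief now yields compliance: excessive responsiveness.
print(best_action(0.4, recommendation=1, deviation_cost=0.25))  # 1
```

The over-compliance region is exactly the band of beliefs within `deviation_cost` of the indifference point, which is the inefficiency the paper's design changes aim to counteract.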
Problem

Research questions and friction points this paper is trying to address.

Algorithmic recommendations alter human preferences and create decision-making inefficiencies
Decision-makers become overly responsive to algorithmic defaults due to institutional pressures
Whether strategically withholding recommendations can improve final decision quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model human-machine decision-making with preference effects
Algorithms strategically withhold recommendations to improve decisions
Minimax optimality achieved by issuing recommendations only when the algorithm is sufficiently confident
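The withholding idea in the list above can be sketched as a confidence gate (a hypothetical rule with an assumed threshold, not the paper's exact mechanism): the algorithm stays silent in the uncertain middle region so the human's own judgment is not anchored by a default.

```python
# Illustrative confidence-gated recommendation rule (the threshold value
# and binary-action setup are assumptions for this sketch).

def recommend(confidence, threshold=0.8):
    """Return 1, 0, or None (withheld).

    confidence : the algorithm's estimate of P(action 1 is correct)
    A recommendation is issued only when confidence is extreme enough;
    otherwise it is withheld to avoid anchoring the decision-maker.
    """
    if confidence >= threshold:
        return 1
    if confidence <= 1 - threshold:
        return 0
    return None  # withhold in the uncertain middle region

print(recommend(0.95))  # 1
print(recommend(0.50))  # None
```

Under recommendation-dependent preferences, silence in the middle region removes the deviation cost entirely there, which is what lets withholding improve decisions relative to always recommending.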