🤖 AI Summary
This paper investigates how a planner can intervene in sequential social learning by endogenously controlling the precision of individuals' private signals, without distorting or censoring information. Working in a Bayesian sequential decision-making framework augmented with optimal control, it is the first to treat signal precision as a controllable policy variable for a planner who is fully observable and has no capacity for deception. The analysis characterizes optimal interventions for both an altruistic planner, who maximizes social welfare, and a biased planner, who seeks to induce a particular action. The results show that socially optimal advertising personalization must adapt dynamically to the evolving population belief: even a neutral planner can substantially improve social welfare, whereas a goal-biased planner can systematically impair learning efficiency. Intervention efficacy hinges on the population's prior beliefs and on how well the planner's objective aligns with social welfare. The study establishes fundamental theoretical boundaries for information governance, algorithmic transparency, and platform regulation, and offers actionable policy guidance for designing responsible recommendation systems.
📝 Abstract
We introduce a model of sequential social learning in which a planner may pay a cost to adjust the private signal precision of some agents. This framework poses a new optimization problem for social learning that sheds light on practical policy questions, such as how the socially optimal level of ad personalization varies with current beliefs, or how a biased planner might derail social learning. We then characterize the optimal policies of an altruistic planner, who maximizes social welfare, and of a biased planner, who seeks to induce a specific action. We demonstrate that even a planner who knows no more than any individual, cannot lie or cherry-pick information, and is fully observable can dramatically influence social welfare, in both positive and negative directions. An important area for future exploration is how one might guard against such manipulation of social learning.
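The environment the abstract describes can be illustrated with a minimal simulation of a standard binary-state sequential (herding) model in which a planner raises the signal precision of selected agents. This is a hedged sketch, not the paper's actual model: the function names, the cascade-detection logic, and the `boosted` set of planner-treated agents are illustrative assumptions, and the planner's cost of adjusting precision is not modeled.

```python
import math
import random

def best_action(log_odds, llr, signal):
    """Bayesian agent: combine the public log-odds with the private
    signal's log-likelihood ratio; follow the private signal when
    exactly indifferent."""
    posterior = log_odds + (llr if signal == 1 else -llr)
    if posterior > 0:
        return 1
    if posterior < 0:
        return 0
    return signal

def simulate(theta, n_agents, base_precision, boosted, boost_precision, seed=0):
    """Sequential social learning with planner-adjusted signal precision.

    theta: true binary state (0 or 1).
    boosted: indices of agents whose signal precision the planner has
        raised from base_precision to boost_precision.
    Returns the list of actions taken in sequence.
    """
    rng = random.Random(seed)
    log_odds = 0.0  # public belief, log P(theta=1)/P(theta=0); uniform prior
    actions = []
    for i in range(n_agents):
        p = boost_precision if i in boosted else base_precision
        llr = math.log(p / (1 - p))
        # the private signal matches the true state with probability p
        signal = theta if rng.random() < p else 1 - theta
        a = best_action(log_odds, llr, signal)
        actions.append(a)
        # later agents can invert an action back into the signal only when
        # the action actually depends on the signal; otherwise an
        # information cascade has started and the action is uninformative
        if best_action(log_odds, llr, 1) != best_action(log_odds, llr, 0):
            log_odds += llr if a == 1 else -llr
    return actions
```

In this sketch, boosting the precision of early agents strengthens the informational content of their actions and delays cascades, so the public belief is more likely to settle on the true state, which is the kind of welfare gain a planner would weigh against the cost of intervention.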