Persuasive Prediction via Decision Calibration

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies persuasive forecasting under high-dimensional or infinite state spaces without assuming a common prior: a sender forecasts outcome $Y$ given covariates $X$ and publishes predictions to influence the behavior of a rational receiver. We introduce *decision calibration*—a novel, prior-free notion replacing the common-prior assumption—to ensure predictions are unbiased under the receiver’s optimal response and eliminate swap regret. Our method integrates decision-calibration constraints, randomized predictors, statistical learning theory, and game-theoretic modeling. We establish the first prior-free guarantee that the sender’s utility asymptotically approaches the Bayesian-optimal utility achievable with full prior knowledge. The resulting algorithm is computationally efficient: in the single-receiver setting, it attains the same utility upper bound as in the fully known-prior case, and naturally generalizes to infinite predictor classes and stochastic receiver responses.
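The decision-calibration condition described above can be written formally. The notation below (predictor $f$, best-response map $a^*$) is assumed here for illustration and may differ from the paper's:

$$\mathbb{E}\big[\, Y - f(X) \,\big|\, a^*(f(X)) = a \,\big] = 0 \quad \text{for every receiver action } a,$$

where $a^*(p)$ denotes the receiver's utility-maximizing action given prediction $p$. Under this condition, a receiver who myopically best-responds to $f$ incurs no swap regret: no fixed remapping of actions would improve their payoff in hindsight.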

📝 Abstract
Bayesian persuasion, a central model in information design, studies how a sender, who privately observes a state drawn from a prior distribution, strategically sends a signal to influence a receiver's action. A key assumption is that both sender and receiver share precise knowledge of the prior. Although this prior can be estimated from past data, the assumption breaks down in high-dimensional or infinite state spaces, where learning an accurate prior may require a prohibitive amount of data. In this paper, we study a learning-based variant of persuasion, which we term persuasive prediction. This setting mirrors Bayesian persuasion with large state spaces, but crucially does not assume a common prior: the sender observes covariates $X$, learns to predict a payoff-relevant outcome $Y$ from past data, and releases a prediction to influence a population of receivers. To model rational receiver behavior without a common prior, we adopt a learnable proxy: decision calibration, which requires the prediction to be unbiased conditioned on the receiver's best response to the prediction. This condition guarantees that myopically responding to the prediction yields no swap regret. Assuming the receivers best respond to decision-calibrated predictors, we design a computationally and statistically efficient algorithm that learns, within a randomized predictor class, a decision-calibrated predictor that optimizes the sender's utility. In the commonly studied single-receiver case, our method matches the utility of a Bayesian sender who has full knowledge of the underlying prior distribution. Finally, we extend our algorithmic result to a setting where receivers respond stochastically to predictions and the sender may randomize over an infinite predictor class.
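As a concrete illustration of the decision-calibration condition from the abstract, the sketch below checks, on synthetic data, whether a predictor is unbiased conditioned on the action of a simple threshold receiver. The receiver model, the helper names (`best_response`, `decision_calibration_error`), and the threshold value are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: binary outcome Y whose true probability depends on X.
n = 100_000
x = rng.uniform(size=n)
p_true = 0.2 + 0.6 * x          # P(Y = 1 | X = x)
y = rng.binomial(1, p_true)

def best_response(pred, threshold=0.5):
    """A two-action receiver: act (1) iff the predicted probability >= threshold."""
    return (pred >= threshold).astype(int)

def decision_calibration_error(pred, y, actions):
    """Worst-case bias of the prediction, conditioned on the induced action."""
    errs = []
    for a in np.unique(actions):
        mask = actions == a
        errs.append(abs(np.mean(y[mask] - pred[mask])))
    return max(errs)

# A well-specified predictor is (approximately) decision-calibrated:
# its bias conditioned on each induced action vanishes up to sampling noise...
good = p_true
err_good = decision_calibration_error(good, y, best_response(good))

# ...while a systematically inflated predictor is not.
bad = np.clip(p_true + 0.15, 0, 1)
err_bad = decision_calibration_error(bad, y, best_response(bad))
```

With this construction, `err_good` is small (pure sampling noise), while `err_bad` is roughly the injected 0.15 bias, so the biased predictor visibly violates decision calibration.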
Problem

Research questions and friction points this paper is trying to address.

Learning-based persuasion without common prior knowledge
Decision calibration for rational receiver behavior modeling
Efficient algorithm for sender-optimal decision-calibrated prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses decision calibration for rational receiver behavior
Learns decision-calibrated predictor efficiently
Matches Bayesian sender utility without prior knowledge