🤖 AI Summary
In contextual multi-armed bandits, quantifying uncertainty is challenging because future rewards are unobserved. Method: This paper reformulates Thompson sampling as a missing-data imputation task: unobserved potential outcomes are treated as missing values, and a pre-trained offline generative model imputes them at each round, so that a full "oracle" policy can be reconstructed for action selection. Contribution/Results: The proposed framework achieves a regret upper bound that depends solely on the offline prediction error of the generative model, with no assumptions on model architecture or reward distribution. It naturally accommodates arbitrary policy constraints (e.g., fairness, resource limits). Unlike prior approaches, it preserves Bayesian decision consistency while attaining a state-of-the-art theoretical regret bound. Moreover, it is compatible with any oracle-policy learning algorithm and with general constraint specifications, offering both theoretical rigor and practical flexibility.
📝 Abstract
We introduce a framework for Thompson sampling contextual bandit algorithms, in which the algorithm's ability to quantify uncertainty and make decisions depends on the quality of a generative model that is learned offline. Instead of viewing uncertainty in the environment as arising from unobservable latent parameters, our algorithm treats uncertainty as stemming from missing, but potentially observable, future outcomes. If these future outcomes were all observed, one could simply make decisions using an "oracle" policy fit on the complete dataset. Inspired by this conceptualization, at each decision time, our algorithm uses a generative model to probabilistically impute missing future outcomes, fits a policy using the imputed complete dataset, and uses that policy to select the next action. We formally show that this algorithm is a generative formulation of Thompson Sampling and prove a state-of-the-art regret bound for it. Notably, our regret bound i) depends on the probabilistic generative model only through the quality of its offline prediction loss, and ii) applies to any method of fitting the "oracle" policy, which easily allows one to adapt Thompson sampling to decision-making settings with fairness and/or resource constraints.
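The impute-then-fit loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual method: the "generative model" is stood in for by a per-arm least-squares fit plus Gaussian noise, the "oracle" policy by a greedy argmax over imputed rewards, and all names, dimensions, and the linear-Gaussian environment are assumptions made here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ARMS, DIM, HORIZON = 3, 2, 200
# Hypothetical environment: true per-arm weights, unknown to the learner.
TRUE_W = rng.normal(size=(N_ARMS, DIM))

def generative_impute(history, contexts, rng):
    """Impute a plausible reward for every (context, arm) pair.

    Stand-in for the paper's pre-trained offline generative model:
    a per-arm least-squares fit with Gaussian noise (an assumption
    for illustration only, not the framework's requirement).
    """
    imputed = np.zeros((len(contexts), N_ARMS))
    for a in range(N_ARMS):
        X = np.array([x for x, act, r in history if act == a])
        y = np.array([r for x, act, r in history if act == a])
        if len(y) >= DIM:
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
        else:
            w = rng.normal(size=DIM)  # vague "prior draw" when data is scarce
        imputed[:, a] = contexts @ w + rng.normal(scale=0.1, size=len(contexts))
    return imputed

def fit_oracle_policy(imputed_rewards):
    """'Oracle' policy fit on the imputed complete dataset: greedy argmax."""
    return lambda i: int(np.argmax(imputed_rewards[i]))

history = [(rng.normal(size=DIM), a, 0.0) for a in range(N_ARMS)]  # warm start
regret = 0.0
for t in range(HORIZON):
    x = rng.normal(size=DIM)
    # 1) Probabilistically impute the missing potential outcomes.
    imputed = generative_impute(history, x[None, :], rng)
    # 2) Fit the oracle policy as if the imputed dataset were complete.
    policy = fit_oracle_policy(imputed)
    # 3) Select the next action with that policy and observe the real reward.
    a = policy(0)
    r = x @ TRUE_W[a] + rng.normal(scale=0.1)
    history.append((x, a, r))
    regret += np.max(TRUE_W @ x) - x @ TRUE_W[a]

print(round(regret, 2))
```

Because the imputations are sampled rather than point estimates, action selection inherits the posterior-sampling character of Thompson sampling; swapping the greedy argmax in `fit_oracle_policy` for a constrained fit is where fairness or resource constraints would enter.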