Wasserstein Gradient Flows for Batch Bayesian Optimal Experimental Design

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of optimizing expected information gain in batch Bayesian optimal experimental design under high-dimensional, non-convex settings. The authors propose a relaxation framework formulated in the space of probability measures, in which the entropically regularized objective admits a Gibbs-type optimal design distribution as its unique minimizer. Building on this formulation, they develop a scalable stochastic algorithm by leveraging Wasserstein gradient flows and interacting particle systems. This approach mitigates the multimodality inherent in batch design and consistently yields high-information-gain experimental configurations across multiple numerical experiments, substantially improving optimization performance in complex posterior landscapes.
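The summary above can be made concrete with a toy sketch. Under the i.i.d. product restriction discussed in the paper, sampling a Gibbs design law π(d) ∝ exp(U(d)/λ) reduces to an unadjusted Langevin iteration, the simplest space-time discretization of the corresponding Wasserstein gradient flow. The utility below is a hypothetical double-well stand-in for an EIG surface with two equally informative designs at ±1, not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_utility(d):
    # Gradient of the toy utility U(d) = -(d**2 - 1)**2,
    # a stand-in for an EIG gradient with modes at d = +/- 1.
    return -4.0 * d * (d**2 - 1.0)

def langevin_particles(n_particles=200, n_steps=3000, step=0.01, lam=0.5):
    # Unadjusted Langevin iteration targeting the Gibbs design law
    # pi(d) proportional to exp(U(d) / lam), where lam plays the role
    # of the entropic regularization strength (temperature).
    d = rng.standard_normal(n_particles)
    for _ in range(n_steps):
        d = (d + step * grad_utility(d) / lam
               + np.sqrt(2.0 * step) * rng.standard_normal(n_particles))
    return d

particles = langevin_particles()
# The particle cloud settles around both modes rather than collapsing
# onto a single local optimum, illustrating how the measure-valued
# relaxation copes with multimodality.
```

The stochastic term is what lets particles cross the barrier between modes; a plain gradient ascent on U from the same initialization would split deterministically by sign.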

📝 Abstract
Bayesian optimal experimental design (BOED) provides a powerful, decision-theoretic framework for selecting experiments so as to maximise the expected utility of the data to be collected. In practice, however, its applicability can be limited by the difficulty of optimising the chosen utility. The expected information gain (EIG), for example, is often high-dimensional and strongly non-convex. This challenge is particularly acute in the batch setting, where multiple experiments are to be designed simultaneously. In this paper, we introduce a new approach to batch EIG-based BOED via a probabilistic lifting of the original optimisation problem to the space of probability measures. In particular, we propose to optimise an entropic regularisation of the expected utility over the space of design measures. Under mild conditions, we show that this objective admits a unique minimiser, which can be explicitly characterised in the form of a Gibbs distribution. The resulting design law can be used directly as a randomised batch-design policy, or as a computational relaxation from which a deterministic batch is extracted. To obtain scalable approximations when the batch size is large, we then consider two tractable restrictions of the full batch distribution: a mean-field family, and an i.i.d. product family. For the i.i.d. objective, and formally for its mean-field extension, we derive the corresponding Wasserstein gradient flow, characterise its long-time behaviour, and obtain particle-based algorithms via space-time discretisations. We also introduce doubly stochastic variants that combine interacting particle updates with Monte Carlo estimators of the EIG gradient. Finally, we illustrate the performance of the proposed methods in several numerical experiments, demonstrating their ability to explore multimodal optimisation landscapes and obtain high-utility batches in challenging examples.
Problem

Research questions and friction points this paper is trying to address.

Bayesian optimal experimental design
batch design
expected information gain
non-convex optimization
high-dimensional optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wasserstein gradient flow
Bayesian optimal experimental design
entropic regularization
batch design
particle-based algorithm