A survey of Monte Carlo methods for noisy and costly densities with application to reinforcement learning

📅 2021-08-01
🏛️ International Statistical Review
📈 Citations: 13
Influential: 0
🤖 AI Summary
This paper surveys Monte Carlo methods for settings where each density evaluation is expensive, stochastic, or analytically intractable, as arises in reinforcement learning (RL) and approximate Bayesian computation (ABC). It unifies surrogate-modelling approaches for such densities into three principled classes under a common notation, and presents a modular scheme that balances accuracy, computational cost, and robustness. The framework covers likelihood-free inference and online RL settings, drawing on Bayesian optimization, Gaussian processes, sequential Monte Carlo, importance sampling, and adaptive experimental design. Numerical comparisons characterize the trade-offs among sample efficiency, convergence stability, and noise robustness, providing practical guidance for method selection in RL policy evaluation and hyperparameter optimization.
📝 Abstract
This survey gives an overview of Monte Carlo methodologies using surrogate models, for dealing with densities that are intractable, costly, and/or noisy. This type of problem can be found in numerous real-world scenarios, including stochastic optimisation and reinforcement learning, where each evaluation of a density function may incur some computationally-expensive or even physical (real-world activity) cost, likely to give different results each time. The surrogate model does not incur this cost, but there are important trade-offs and considerations involved in the choice and design of such methodologies. We classify the different methodologies into three main classes and describe specific instances of algorithms under a unified notation. A modular scheme that encompasses the considered methods is also presented. A range of application scenarios is discussed, with special attention to the likelihood-free setting and reinforcement learning. Several numerical comparisons are also provided.
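The core idea the abstract describes, fitting a cheap surrogate to a small number of costly, noisy density evaluations and then sampling from the surrogate instead of the true target, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the toy Gaussian target, the quadratic (least-squares) surrogate, and all point counts are assumptions chosen for brevity; the survey discusses richer surrogates such as Gaussian processes.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_log_target(x, noise=0.3):
    """Stand-in for a costly, noisy evaluation: the true log-density
    of a standard normal, corrupted by additive noise."""
    return -0.5 * x**2 + rng.normal(0.0, noise)

# Step 1: spend the costly-evaluation budget up front on a small design,
# averaging a few noisy calls per point to reduce variance.
design = np.linspace(-4.0, 4.0, 15)
evals = np.array([np.mean([noisy_log_target(x) for _ in range(5)])
                  for x in design])

# Step 2: fit a cheap surrogate of the log-density
# (here: a least-squares quadratic in x).
coefs = np.polyfit(design, evals, deg=2)
surrogate = np.poly1d(coefs)

# Step 3: run Metropolis-Hastings on the surrogate only; no further
# costly evaluations occur inside the sampling loop.
samples = []
x = 0.0
for _ in range(5000):
    prop = x + rng.normal(0.0, 1.0)          # symmetric random-walk proposal
    if np.log(rng.uniform()) < surrogate(prop) - surrogate(x):
        x = prop
    samples.append(x)
samples = np.array(samples[1000:])           # discard burn-in
```

The trade-off the survey emphasizes is visible even here: the sampler's quality is limited by the surrogate's fit, so the design points and the per-point averaging budget directly control the bias introduced by never touching the true density during sampling.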
Problem

Research questions and friction points this paper is trying to address.

Monte Carlo Methods
Reinforcement Learning
Approximate Bayesian Computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Monte Carlo methods
Surrogate modeling
Modular scheme
F. Llorente
Stony Brook University, Stony Brook (USA)
Luca Martino
Associate Professor - University of Catania
Bayesian inference, computational methods (MCMC, particle filters, exact sampling, etc.)
J. Read
École Polytechnique, Palaiseau (France)
D. Delgado
Universidad Carlos III de Madrid, Leganés (Spain)