Bayesian Meta-Reinforcement Learning with Laplace Variational Recurrent Networks

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In meta-reinforcement learning, point-estimate approximations of the task posterior lead to overconfidence and inconsistent uncertainty quantification. To address this, we propose the first integration of the Laplace approximation into standard RNN-based meta-policy frameworks, without architectural modifications, enabling full task-posterior inference before, during, or after training. Compared with variational methods, our approach requires far fewer parameters while performing on par with full-distribution baselines. Crucially, it enables estimation of distributional statistics (e.g., entropy), improving policy robustness and probabilistic calibration. Our core contribution is a principled extension of the classical Laplace approximation to the latent state space of recurrent meta-policies, yielding a lightweight, plug-and-play, stage-agnostic Bayesian uncertainty quantification method for meta-RL.

📝 Abstract
Meta-reinforcement learning trains a single reinforcement learning agent on a distribution of tasks to quickly generalize to new tasks outside of the training set at test time. From a Bayesian perspective, one can interpret this as performing amortized variational inference on the posterior distribution over training tasks. Among the various meta-reinforcement learning approaches, a common method is to represent this distribution with a point estimate using a recurrent neural network. We show how one can augment this point estimate to give full distributions through the Laplace approximation, either at the start of, during, or after learning, without modifying the base model architecture. With our approximation, we are able to estimate distribution statistics (e.g., the entropy) of non-Bayesian agents and observe that point-estimate based methods produce overconfident estimators while not satisfying consistency. Furthermore, when comparing our approach to full-distribution based learning of the task posterior, our method performs on par with variational baselines while having far fewer parameters.
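The core mechanic the abstract describes, turning a point estimate into a full Gaussian posterior via the curvature of the log posterior, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the two-dimensional latent vector, the quadratic toy objective, and the finite-difference Hessian are all assumptions standing in for the RNN's latent task state and its learned negative log posterior.

```python
import numpy as np

def laplace_posterior(neg_log_post, z_map, eps=1e-4):
    """Laplace approximation N(z_map, H^{-1}) around a MAP estimate.

    neg_log_post: callable giving the negative log posterior of a latent vector
                  (hypothetical stand-in for the meta-policy's objective).
    z_map: assumed-precomputed MAP latent, e.g. the RNN's point estimate.
    """
    d = len(z_map)
    H = np.zeros((d, d))
    # Central finite-difference Hessian of the negative log posterior at the MAP.
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (neg_log_post(z_map + e_i + e_j)
                       - neg_log_post(z_map + e_i - e_j)
                       - neg_log_post(z_map - e_i + e_j)
                       + neg_log_post(z_map - e_i - e_j)) / (4 * eps**2)
    cov = np.linalg.inv(H)  # inverse curvature = posterior covariance
    return z_map, cov

def gaussian_entropy(cov):
    """Differential entropy of N(mu, cov): one of the distribution
    statistics the paper extracts from non-Bayesian agents."""
    d = cov.shape[0]
    return 0.5 * (d * (1 + np.log(2 * np.pi)) + np.linalg.slogdet(cov)[1])

# Toy quadratic negative log posterior with known curvature A.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
neg_log_post = lambda z: 0.5 * z @ A @ z
mean, cov = laplace_posterior(neg_log_post, np.zeros(2))
entropy = gaussian_entropy(cov)  # ≈ 2.558 for this toy posterior
```

For this quadratic toy the approximation is exact (the Hessian recovers A and the covariance is A⁻¹); for a real recurrent meta-policy the same recipe applies at the MAP latent state, which is what lets the method be bolted on before, during, or after training.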
Problem

Research questions and friction points this paper is trying to address.

Extends meta-reinforcement learning with Bayesian posterior distributions
Addresses overconfidence in point-estimate task inference methods
Improves parameter efficiency versus full-distribution variational baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Laplace approximation yielding full task-posterior distributions without architectural changes
Estimation of distribution statistics (e.g., entropy) for non-Bayesian agents
Far fewer parameters than full-distribution variational baselines