EVaDE: Event-Based Variational Thompson Sampling for Model-Based Reinforcement Learning

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Model-based reinforcement learning methods such as PSRL suffer from low exploration efficiency and high sample complexity in object-centric visual control tasks with high-dimensional state-action spaces. To address this, the paper proposes an event-driven variational distribution modeling framework. Methodologically, it introduces three types of object-interaction event convolutional layers that encode structured semantic events as learnable convolutional kernels; it applies Gaussian dropout so that each forward pass draws a sample from a variational posterior over the network weights, enabling variational Thompson sampling; and it integrates these components into the SimPLe framework for efficient policy optimization. On the 100K-frame Atari benchmark, the approach improves both sample efficiency and final performance, indicating that event-guided variational exploration strengthens structured environment modeling and generalization.
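The Gaussian-dropout step of the summary can be illustrated with a minimal sketch: multiplying each weight by noise drawn from N(1, p/(1-p)) turns a single point-estimate network into a distribution over networks, and drawing one such perturbed model per episode is the Thompson-sampling move. The function names and the simple weight-list representation below are hypothetical simplifications, not the paper's actual implementation.

```python
import numpy as np

def gaussian_dropout(w, p, rng):
    """Multiply weights elementwise by Gaussian noise ~ N(1, p/(1-p)).

    Each call draws one sample from the variational distribution
    over weights induced by Gaussian dropout (a simplified view
    of the mechanism the summary describes).
    """
    sigma = np.sqrt(p / (1.0 - p))
    return w * rng.normal(1.0, sigma, size=w.shape)

def thompson_sample_model(weights, p=0.1, rng=None):
    """Sample one concrete model (a perturbed copy of every weight
    array) to act greedily against, as in variational Thompson
    sampling: one sampled model per planning/acting round."""
    rng = rng or np.random.default_rng()
    return [gaussian_dropout(w, p, rng) for w in weights]
```

A typical usage pattern would be to call `thompson_sample_model` once at the start of each episode and plan with the sampled weights, rather than re-sampling at every step.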

📝 Abstract
Posterior Sampling for Reinforcement Learning (PSRL) is a well-known algorithm that augments model-based reinforcement learning (MBRL) algorithms with Thompson sampling. PSRL maintains posterior distributions of the environment transition dynamics and the reward function, which are intractable for tasks with high-dimensional state and action spaces. Recent works show that dropout, used in conjunction with neural networks, induces variational distributions that can approximate these posteriors. In this paper, we propose Event-based Variational Distributions for Exploration (EVaDE), which are variational distributions that are useful for MBRL, especially when the underlying domain is object-based. We leverage the general domain knowledge of object-based domains to design three types of event-based convolutional layers to direct exploration. These layers rely on Gaussian dropouts and are inserted between the layers of the deep neural network model to help facilitate variational Thompson sampling. We empirically show the effectiveness of EVaDE-equipped Simulated Policy Learning (EVaDE-SimPLe) on the 100K Atari game suite.
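The abstract's core construction, a convolutional layer whose output passes through Gaussian dropout so that every forward pass is a sample from a variational distribution, can be sketched as follows. This is a single-channel toy version under assumed simplifications: `EventConvLayer`, its kernel shape, and the plain nested-loop convolution are illustrative inventions, not the paper's architecture.

```python
import numpy as np

def conv2d(x, k):
    """Minimal valid-mode 2-D cross-correlation, single channel."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

class EventConvLayer:
    """Hypothetical sketch of an event-based convolutional layer:
    a learnable kernel intended to respond to an object-interaction
    event, with multiplicative Gaussian noise (Gaussian dropout) on
    its output so that each forward pass samples from a variational
    distribution, supporting variational Thompson sampling."""

    def __init__(self, kernel, p=0.1, rng=None):
        self.kernel = kernel
        self.sigma = np.sqrt(p / (1.0 - p))  # dropout-equivalent noise scale
        self.rng = rng or np.random.default_rng(0)

    def forward(self, x):
        y = conv2d(x, self.kernel)
        noise = self.rng.normal(1.0, self.sigma, size=y.shape)
        return y * noise
```

In the paper's setting such layers are inserted between the layers of the world-model network; the sketch above only shows the per-layer mechanics, not that integration.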
Problem

Research questions and friction points this paper is trying to address.

Posterior Sampling Reinforcement Learning
Large State and Action Spaces
Object-based Games
Innovation

Methods, ideas, or system contributions that make the work stand out.

EVaDE
Gaussian Dropout
Custom Convolutional Layers