🤖 AI Summary
Offline reinforcement learning (RL) faces two key bottlenecks: hyperparameter tuning relies on costly or unsafe online interaction, and initial online policy performance is difficult to predict reliably. This work proposes SOReL and TOReL, two complementary, fully offline RL frameworks. SOReL takes a Bayesian approach, inferring a posterior over environment dynamics and using the posterior predictive uncertainty to obtain a reliable estimate of online performance, i.e., of the regret a policy will incur at deployment. TOReL extends this information-rate-based offline hyperparameter tuning to general offline RL methods, enabling hyperparameter optimization without any online interaction. Experiments show that TOReL's purely offline tuning is competitive with the best online tuning methods, while SOReL accurately estimates regret in the Bayesian setting. Together, they mark a step towards safe, practical offline RL in high-stakes domains.
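As a rough illustration of the posterior-predictive idea, the sketch below approximates the dynamics posterior with an ensemble of learned models: rolling a candidate policy out in each posterior sample yields a distribution over returns, whose credible interval plays the role of a confidence bound on online performance. This is a minimal sketch under assumed interfaces; `model.step`, `rollout_return`, and `regret_bound` are hypothetical names, not the paper's API.

```python
import numpy as np

def rollout_return(model, policy, init_state, horizon=100, gamma=0.99):
    """Discounted return of `policy` in one sampled dynamics model."""
    state, total, discount = init_state, 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = model.step(state, action)  # assumed: sampled-model transition
        total += discount * reward
        discount *= gamma
    return total

def regret_bound(posterior_models, policy, init_states, alpha=0.05):
    """Evaluate `policy` under each posterior dynamics sample and report a
    (1 - alpha) credible interval over returns; the interval serves as a
    conservative proxy for how far the offline estimate may sit from the
    true online performance."""
    returns = np.array([
        np.mean([rollout_return(m, policy, s) for s in init_states])
        for m in posterior_models
    ])
    lo, hi = np.quantile(returns, [alpha / 2, 1 - alpha / 2])
    return returns.mean(), (lo, hi)
```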
📝 Abstract
Sample efficiency remains a major obstacle to the real-world adoption of reinforcement learning (RL): success has been limited to settings where simulators provide essentially unlimited environment interactions, which in reality are typically costly or dangerous to obtain. Offline RL in principle offers a solution by exploiting offline data to learn a near-optimal policy before deployment. In practice, however, current offline RL methods rely on extensive online interaction for hyperparameter tuning and provide no reliable bound on their initial online performance. To address these two issues, we introduce two algorithms. First, SOReL: an algorithm for safe offline reinforcement learning. Using only offline data, our Bayesian approach infers a posterior over environment dynamics to obtain a reliable estimate of online performance via the posterior predictive uncertainty. Crucially, all hyperparameters are also tuned fully offline. Second, we introduce TOReL: a tuning for offline reinforcement learning algorithm that extends our information-rate-based offline hyperparameter tuning methods to general offline RL approaches. Our empirical evaluation confirms SOReL's ability to accurately estimate regret in the Bayesian setting, while TOReL's offline hyperparameter tuning achieves performance competitive with the best online hyperparameter tuning methods using only offline data. SOReL and TOReL thus take a significant step towards safe and reliable offline RL, unlocking the potential of RL in the real world. Our implementations are publicly available: https://github.com/CWibault/sorel_torel.
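For concreteness, the sketch below shows one way hyperparameter selection can be carried out with zero online interaction. It does not reproduce the paper's information-rate criterion; as a stand-in, each candidate configuration is trained offline and scored by a pessimistic (lower-quantile) return under the posterior dynamics samples, reusing `rollout_return` from the sketch above. `train_offline` and the candidate loop are illustrative assumptions, not the released API.

```python
import numpy as np

def select_hyperparameters(candidates, train_offline, posterior_models,
                           init_states, quantile=0.1):
    """Pick the config whose trained policy has the best pessimistic
    (lower-quantile) return across posterior dynamics samples, so no
    online interaction is ever required."""
    best_cfg, best_score = None, -np.inf
    for cfg in candidates:
        policy = train_offline(cfg)  # any offline RL algorithm
        returns = [np.mean([rollout_return(m, policy, s)
                            for s in init_states])
                   for m in posterior_models]
        score = np.quantile(returns, quantile)  # pessimistic offline score
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

A lower-quantile score penalizes configurations whose performance is uncertain under the posterior, which is one simple way to keep the selection conservative when no online evaluation is available.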