Guaranteeing Out-Of-Distribution Detection in Deep RL via Transition Estimation

๐Ÿ“… 2025-03-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In deep reinforcement learning deployment, out-of-distribution (OOD) detection becomes unreliable due to state-transition distribution shift. This work formally defines OOD in RL as a statistically significant deviation of the actual state-transition probability from the training distributionโ€”a first such definition in the literature. Method: We propose the first statistically grounded OOD detection framework for RL, leveraging a conditional variational autoencoder (CVAE) to model the conditional transition distribution ( p(s' mid s, a) ), and constructing a conformity-based detector using reconstruction error. Contribution/Results: Our framework achieves significantly improved detection performance over existing RL-OOD methods on standard benchmarks. Crucially, it provides verifiable, theoretically guaranteed detection with user-specified confidence levels (e.g., 95%), thereby filling a critical gap in statistical reliability for OOD detection in RL.

Technology Category

Application Category

๐Ÿ“ Abstract
An issue concerning the use of deep reinforcement learning (RL) agents is whether they can be trusted to perform reliably when deployed, as training environments may not reflect real-life environments. Anticipating instances outside their training scope, learning-enabled systems are often equipped with out-of-distribution (OOD) detectors that alert when a trained system encounters a state it does not recognize or in which it exhibits uncertainty. There exists limited work conducted on the problem of OOD detection within RL, with prior studies being unable to achieve a consensus on the definition of OOD execution within the context of RL. By framing our problem using a Markov Decision Process, we assume there is a transition distribution mapping each state-action pair to another state with some probability. Based on this, we consider the following definition of OOD execution within RL: A transition is OOD if its probability during real-life deployment differs from the transition distribution encountered during training. As such, we utilize conditional variational autoencoders (CVAE) to approximate the transition dynamics of the training environment and implement a conformity-based detector using reconstruction loss that is able to guarantee OOD detection with a pre-determined confidence level. We evaluate our detector by adapting existing benchmarks and compare it with existing OOD detection models for RL.
Problem

Research questions and friction points this paper is trying to address.

Ensuring reliable deep RL agent performance in real-life environments.
Defining and detecting out-of-distribution transitions in RL.
Using CVAE to approximate and detect OOD transitions confidently.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses conditional variational autoencoders (CVAE)
Implements conformity-based OOD detector
Guarantees OOD detection with confidence
๐Ÿ”Ž Similar Papers
No similar papers found.
Mohit Prashant
Mohit Prashant
Nanyang Technological University
AIMLRL
A
A. Easwaran
Nanyang Technological University
Suman Das
Suman Das
Nanyang Technological University
M
Michael Yuhas
Nanyang Technological University