Model-Based Reinforcement Learning in Discrete-Action Non-Markovian Reward Decision Processes

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional Markovian reinforcement learning fails on history-dependent decision tasks, where success depends on the full system trajectory rather than on individual states, while existing non-Markovian reward decision process (NMRDP) methods lack sample-efficiency and near-optimality guarantees. Method: We propose QR-MAX, the first model-based RL framework for NMRDPs with PAC (Probably Approximately Correct) guarantees. It decouples Markovian transition dynamics from non-Markovian rewards via a reward machine, enabling provably efficient learning; for discrete-action NMRDPs, this yields the first polynomial sample-complexity guarantee for convergence to ε-optimal policies. We further design Bucket-QR-MAX, which integrates SimHash-based state discretization to generalize over continuous states without manual binning or function approximation. Results: Experiments across diverse temporal-dependency tasks demonstrate significantly improved sample efficiency, stable convergence to optimal policies, and consistent gains over state-of-the-art model-based RL baselines.
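The core structural idea, decoupling Markovian dynamics from non-Markovian rewards, can be pictured as learning over the product of the environment state and a reward-machine state, which makes the reward Markovian again. Below is a minimal sketch of that pattern, assuming a simple finite-state reward machine interface; all names (`RewardMachine`, `labeler`, `run_episode`) are illustrative and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class RewardMachine:
    """Finite-state machine over high-level event labels (illustrative)."""
    initial: int
    # (rm_state, event) -> (next_rm_state, reward); assumed given by the task spec
    delta: dict = field(default_factory=dict)

    def step(self, u, event):
        # Stay in the same RM state with zero reward on unlisted events.
        return self.delta.get((u, event), (u, 0.0))

def run_episode(env, policy, rm, labeler, horizon=100):
    """Roll out a policy over the product state (env state s, RM state u)."""
    s, u = env.reset(), rm.initial
    total = 0.0
    for _ in range(horizon):
        a = policy(s, u)                          # condition on the product state
        s_next = env.step(s, a)
        u, r = rm.step(u, labeler(s, a, s_next))  # reward comes from the machine
        total += r
        s = s_next
    return total
```

Because rewards depend only on the current product state `(s, u)` and action, standard Markovian machinery (here, any tabular learner) applies over the product space.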

📝 Abstract
Many practical decision-making problems involve tasks whose success depends on the entire system history, rather than on achieving a state with desired properties. Markovian Reinforcement Learning (RL) approaches are not suitable for such tasks, while RL with non-Markovian reward decision processes (NMRDPs) enables agents to tackle temporal-dependency tasks. This approach has long been known to lack formal guarantees on both (near-)optimality and sample efficiency. We contribute to solving both issues with QR-MAX, a novel model-based algorithm for discrete NMRDPs that factorizes Markovian transition learning from non-Markovian reward handling via reward machines. To the best of our knowledge, this is the first model-based RL algorithm for discrete-action NMRDPs that exploits this factorization to obtain PAC convergence to $\varepsilon$-optimal policies with polynomial sample complexity. We then extend QR-MAX to continuous state spaces with Bucket-QR-MAX, a SimHash-based discretizer that preserves the same factorized structure and achieves fast and stable learning without manual gridding or function approximation. We experimentally compare our method with modern state-of-the-art model-based RL approaches on environments of increasing complexity, showing a significant improvement in sample efficiency and increased robustness in finding optimal policies.
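To make the Bucket-QR-MAX idea concrete: SimHash discretization projects a continuous state onto random hyperplanes and uses the sign pattern of the projections as a bucket id, so nearby states tend to share a bucket. A minimal sketch of that general technique follows; the paper's exact scheme may differ, and `n_bits` and the class name are illustrative.

```python
import numpy as np

class SimHashBuckets:
    """Map continuous states to discrete bucket ids via random hyperplane signs."""
    def __init__(self, state_dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        # Each row of A is a random hyperplane; the sign pattern of the
        # projections is the bucket id, so nearby states tend to collide.
        self.A = rng.standard_normal((n_bits, state_dim))

    def bucket(self, s):
        bits = self.A @ np.asarray(s, dtype=float) > 0
        return int("".join("1" if b else "0" for b in bits), 2)

# Usage: turn a continuous observation into an id a tabular learner can index.
h = SimHashBuckets(state_dim=4)
print(h.bucket([0.1, -0.3, 0.7, 0.0]))
```

Raising `n_bits` makes buckets finer (less aliasing, slower learning per bucket); lowering it coarsens the discretization, a trade-off the abstract's "no manual gridding" claim targets.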
Problem

Research questions and friction points this paper is trying to address.

Markovian RL is unsuitable for tasks whose success depends on the entire system history rather than on individual states.
RL with non-Markovian reward decision processes (NMRDPs) has long lacked formal guarantees on near-optimality and sample efficiency.
Discrete NMRDP methods do not extend to continuous state spaces without manual gridding or function approximation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based RL algorithm (QR-MAX) for non-Markovian reward decision processes (see the sketch after this list)
Factorizes Markovian transitions and non-Markovian rewards via reward machines
Extends to continuous states with SimHash-based discretization preserving structure
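This digest does not include QR-MAX's pseudocode, but the name and the PAC guarantees point to an R-MAX-style optimistic model-based pattern applied over product states. The following is a purely illustrative sketch of that bookkeeping, not the paper's algorithm; the visit-count threshold `m` and `r_max` are assumed parameters.

```python
from collections import defaultdict

class OptimisticProductModel:
    """R-MAX-style counts over product states (s, u); illustrative only."""
    def __init__(self, m=10, r_max=1.0):
        self.m, self.r_max = m, r_max          # 'known' threshold, optimism value
        self.visits = defaultdict(int)         # (s, u, a) -> visit count
        self.transitions = defaultdict(int)    # (s, u, a, s', u') -> count

    def observe(self, s, u, a, s_next, u_next):
        self.visits[(s, u, a)] += 1
        self.transitions[(s, u, a, s_next, u_next)] += 1

    def is_known(self, s, u, a):
        # Unknown pairs are planned optimistically (value r_max), which is
        # what drives directed exploration in R-MAX-style analyses.
        return self.visits[(s, u, a)] >= self.m

    def p_hat(self, s, u, a, s_next, u_next):
        n = self.visits[(s, u, a)]
        return self.transitions[(s, u, a, s_next, u_next)] / n if n else 0.0
```

Planning with the estimated model on known pairs and the optimistic value on unknown ones is the standard route to polynomial sample-complexity bounds of the kind the abstract claims.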