Wavelet Predictive Representations for Non-Stationary Reinforcement Learning

📅 2025-10-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Real-world environments exhibit strong non-stationarity, making it difficult for agents to adapt rapidly to dynamically evolving sequences of Markov decision processes (MDPs). Existing non-stationary reinforcement learning (NSRL) methods often assume regular, predictable task evolution and therefore generalize poorly under highly stochastic or abrupt environmental shifts. To address this, we propose WISDOM, a novel NSRL framework that pioneers the integration of wavelet analysis into reinforcement learning. WISDOM employs wavelet transforms to extract multi-scale trends and transient change features from task sequences, yielding predictive task representations. It introduces a wavelet-domain temporal difference update operator, with theoretical convergence guarantees, for fine-grained modeling of MDP evolution, and it unifies autoregressive modeling with multi-scale representation learning. Evaluated on multiple non-stationary benchmarks, WISDOM achieves significant improvements in sample efficiency and asymptotic performance, demonstrating robust adaptation in complex dynamic environments.
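To make the multi-scale decomposition described above concrete, the sketch below applies a single-level Haar discrete wavelet transform to a toy task-representation sequence. This is an illustrative stand-in only: the paper does not specify its wavelet family here, and `haar_dwt` and the toy sequence are assumptions of this sketch, not WISDOM's implementation.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail). Approximation coefficients
    capture the coarse trend; detail coefficients capture transient
    changes -- the two kinds of multi-scale features the summary
    says WISDOM extracts from task-representation sequences.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:
        raise ValueError("this sketch assumes an even-length sequence")
    pairs = x.reshape(-1, 2)
    approx = pairs.sum(axis=1) / np.sqrt(2.0)            # low-pass: global trend
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # high-pass: local change
    return approx, detail

# Toy "task representation" sequence: a slow drift plus one abrupt shift.
t = np.arange(16, dtype=float)
seq = 0.1 * t + np.where(t >= 9, 2.0, 0.0)
approx, detail = haar_dwt(seq)
# |detail| peaks at the pair spanning the abrupt shift, while
# approx still follows the slow drift.
```

A deeper multi-scale decomposition would simply apply `haar_dwt` recursively to the approximation coefficients, each level capturing trends at a coarser time scale.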

πŸ“ Abstract
The real world is inherently non-stationary, with ever-changing factors, such as weather conditions and traffic flows, making it challenging for agents to adapt to varying environmental dynamics. Non-Stationary Reinforcement Learning (NSRL) addresses this challenge by training agents to adapt rapidly to sequences of distinct Markov Decision Processes (MDPs). However, existing NSRL approaches often focus on tasks with regularly evolving patterns, leading to limited adaptability in highly dynamic settings. Inspired by the success of wavelet analysis in time series modeling, specifically its ability to capture signal trends at multiple scales, we propose WISDOM to leverage wavelet-domain predictive task representations to enhance NSRL. WISDOM captures these multi-scale features in evolving MDP sequences by transforming task representation sequences into the wavelet domain, where wavelet coefficients represent both global trends and fine-grained variations of non-stationary changes. In addition to the autoregressive modeling commonly employed in time series forecasting, we devise a wavelet temporal difference (TD) update operator to enhance tracking and prediction of MDP evolution. We theoretically prove the convergence of this operator and demonstrate policy improvement with wavelet task representations. Experiments on diverse benchmarks show that WISDOM significantly outperforms existing baselines in both sample efficiency and asymptotic performance, demonstrating its remarkable adaptability in complex environments characterized by non-stationary and stochastically evolving tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses non-stationary reinforcement learning in dynamic environments
Captures multi-scale environmental changes using wavelet analysis
Enhances adaptability to stochastically evolving Markov Decision Processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet-domain predictive task representations for NSRL
Wavelet temporal difference update operator for MDP evolution
Multi-scale feature capture in non-stationary MDP sequences
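To give a rough feel for what a "wavelet temporal difference update operator" might do, here is a deliberately simplified sketch: a plain TD-style exponential update applied coefficient-wise in the wavelet domain. The function `td_track` and the step size `alpha` are assumptions of this illustration; the paper's actual operator and its convergence proof are more general and are not reproduced here.

```python
import numpy as np

def td_track(pred, observed, alpha=0.5):
    """TD-style exponential update applied coefficient-wise:
    nudge each predicted wavelet coefficient toward its newly
    observed value. With 0 < alpha < 1 this map is a contraction
    (factor 1 - alpha), so repeated updates converge when the
    target is fixed -- the same flavor of guarantee the paper
    proves for its wavelet TD operator.
    """
    pred = np.asarray(pred, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return pred + alpha * (observed - pred)

# Track a coefficient that jumps from 0 to 1 and then stays there.
pred = np.zeros(1)
for obs in (1.0, 1.0, 1.0, 1.0):
    pred = td_track(pred, np.array([obs]))
# pred approaches 1.0 geometrically: 0.5, 0.75, 0.875, 0.9375
```

In this toy form the update is just exponential smoothing; the point of doing it in the wavelet domain is that coarse-trend and transient coefficients can be tracked with different dynamics.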
Min Wang
Beijing Institute of Technology

Xin Li
Beijing Institute of Technology

Ye He
Beijing Institute of Technology

Yao-Hui Li
Beijing Institute of Technology
reinforcement learning

Hasnaa Bennis
Beijing Institute of Technology

Riashat Islam
Microsoft Research NYC
Deep Reinforcement Learning, Deep Learning, Generative Models

Mingzhong Wang
University of the Sunshine Coast
Machine learning, Mobile computing