🤖 AI Summary
Offline reinforcement learning (RL) suffers from policy degradation in multimodal action spaces due to cross-modal averaging: existing methods typically assume unimodal behaviour policies and thus fail to capture high-return action modes. To address this, we propose weighted imitation Learning on One Mode (LOM), the first framework in offline RL to explicitly identify action modes (via Gaussian mixture modelling), select the best mode (based on expected return), and concentrate learning on it, thereby avoiding modal confusion. We provide theoretical guarantees that LOM improves policy performance while keeping the learned policy structurally simple. Evaluated on the D4RL benchmark, LOM consistently outperforms state-of-the-art methods, with particularly notable gains on complex multimodal tasks, where it also demonstrates superior robustness and training stability.
📄 Abstract
Offline reinforcement learning (RL) seeks to learn optimal policies from static datasets without interacting with the environment. A common challenge is handling multi-modal action distributions, where multiple behaviours are represented in the data. Existing methods often assume unimodal behaviour policies, leading to suboptimal performance when this assumption is violated. We propose weighted imitation Learning on One Mode (LOM), a novel approach that focuses on learning from a single, promising mode of the behaviour policy. By using a Gaussian mixture model to identify modes and selecting the best mode based on expected returns, LOM avoids the pitfalls of averaging over conflicting actions. Theoretically, we show that LOM improves performance while maintaining simplicity in policy learning. Empirically, LOM outperforms existing methods on standard D4RL benchmarks and demonstrates its effectiveness in complex, multi-modal scenarios.
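The pipeline described above can be sketched in a toy 1-D setting. This is a minimal illustration, not the paper's implementation: the dataset, the stand-in `q_value` function, and all numeric choices are assumptions made for the example. It shows the three steps: fit a Gaussian mixture to the behaviour actions at a state, pick the mode whose mean action has the highest expected return, then imitate only samples belonging to that mode (weighted by mode membership), so conflicting actions from the other mode are not averaged in.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical bimodal behaviour data at a single state: two action modes,
# one centred near -1 (low return) and one near +1 (high return).
rng = np.random.default_rng(0)
actions = np.concatenate([
    rng.normal(-1.0, 0.1, 500),   # low-return behaviour mode
    rng.normal(+1.0, 0.1, 500),   # high-return behaviour mode
]).reshape(-1, 1)

def q_value(a):
    # Stand-in for a learned Q-function: prefers actions near +1.
    return -(a - 1.0) ** 2

# Step 1: identify modes of the behaviour policy with a Gaussian mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(actions)

# Step 2: select the mode whose mean action has the highest expected return.
means = gmm.means_.ravel()
best = int(np.argmax(q_value(means)))

# Step 3: imitate only the selected mode, weighting each sample by its
# posterior probability of belonging to that mode.
weights = gmm.predict_proba(actions)[:, best]
target_action = float(np.average(actions.ravel(), weights=weights))
print(f"selected mode mean: {means[best]:.2f}, imitation target: {target_action:.2f}")
```

A plain behaviour-cloning fit to this dataset would regress toward the average of the two modes (near 0), an action neither mode actually takes; restricting imitation to the selected mode keeps the target inside the high-return mode.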