Active Policy Improvement from Multiple Black-box Oracles

📅 2023-06-17
🏛️ International Conference on Machine Learning
📈 Citations: 10
Influential: 2
🤖 AI Summary
Policy improvement under multiple suboptimal black-box experts, specifically how to actively select the most suitable expert at each state so as to improve sample efficiency, remains challenging. Method: The paper proposes MAPS and its variant MAPS-SE, a unified algorithmic framework that jointly optimizes state-wise expert selection and uncertainty-driven state exploration. It trains per-oracle value function estimates alongside a behavior-cloning learner, enabling end-to-end active decisions about which oracle to imitate and which states to explore. Contribution/Results: Theoretical analysis shows that MAPS enjoys a sample-efficiency advantage over state-of-the-art policy improvement algorithms. Empirically, on continuous-control tasks from the DeepMind Control Suite, MAPS-SE significantly accelerates policy optimization. The implementation is publicly available.
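The selection step described above picks, at each state, the oracle whose estimated value is highest. A minimal sketch of that idea, where the names and the toy linear value model are hypothetical stand-ins for the learned per-oracle value functions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ORACLES, STATE_DIM = 3, 4

# Hypothetical stand-in for learned per-oracle value functions:
# one linear value head per black-box oracle.
oracle_value_weights = rng.normal(size=(N_ORACLES, STATE_DIM))

def estimated_values(state):
    """Estimated value of each oracle's policy at this state."""
    return oracle_value_weights @ state

def select_oracle(state):
    """Imitate the oracle with the highest value estimate at `state`."""
    return int(np.argmax(estimated_values(state)))

state = rng.normal(size=STATE_DIM)
chosen = select_oracle(state)
```

In the paper the value estimates themselves are refined online as the learner imitates the selected oracle, so which oracle "wins" at a given state can change over training.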
📝 Abstract
Reinforcement learning (RL) has made significant strides in various complex domains. However, identifying an effective policy via RL often necessitates extensive exploration. Imitation learning aims to mitigate this issue by using expert demonstrations to guide exploration. In real-world scenarios, one often has access to multiple suboptimal black-box experts, rather than a single optimal oracle. These experts do not universally outperform each other across all states, presenting a challenge in actively deciding which oracle to use and in which state. We introduce MAPS and MAPS-SE, a class of policy improvement algorithms that perform imitation learning from multiple suboptimal oracles. In particular, MAPS actively selects which of the oracles to imitate and improves their value function estimates, and MAPS-SE additionally leverages an active state exploration criterion to determine which states one should explore. We provide a comprehensive theoretical analysis and demonstrate that MAPS and MAPS-SE enjoy a sample-efficiency advantage over state-of-the-art policy improvement algorithms. Empirical results show that MAPS-SE significantly accelerates policy optimization via state-wise imitation learning from multiple oracles across a broad spectrum of control tasks in the DeepMind Control Suite. Our code is publicly available at: https://github.com/ripl/maps.
Problem

Research questions and friction points this paper is trying to address.

Selecting the best black-box oracle per state for imitation
Improving policy sample efficiency with multiple suboptimal experts
Accelerating policy optimization via active state exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Actively selects among multiple suboptimal oracles at the state level
Improves per-oracle value function estimates through imitation
Leverages an uncertainty-driven active state exploration criterion
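MAPS-SE's exploration criterion is derived in the paper; as an illustrative sketch only, the sort of uncertainty-driven rule it describes can be mimicked with ensemble disagreement as the uncertainty proxy (all names, the linear ensemble, and the threshold here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
ENSEMBLE, STATE_DIM = 5, 4
THRESHOLD = 1.0  # illustrative uncertainty threshold, not from the paper

# Hypothetical ensemble of linear value heads whose disagreement
# serves as a proxy for value-estimate uncertainty at a state.
ensemble_weights = rng.normal(size=(ENSEMBLE, STATE_DIM))

def value_uncertainty(state):
    """Disagreement (std) across ensemble value predictions at `state`."""
    preds = ensemble_weights @ state
    return float(np.std(preds))

def should_explore(state):
    """Explore from states whose value estimate is most uncertain."""
    return value_uncertainty(state) > THRESHOLD
```

The design intuition matches the bullet above: rollouts are spent where the value estimates are least trustworthy, so exploration effort directly reduces the uncertainty that drives oracle selection.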