AI Summary
This work addresses two challenges: demonstration data that is scarce and costly to collect in real-world settings, and the compounding errors that arise in conventional imitation learning from its reliance on the i.i.d. assumption at test time. To overcome these limitations, the paper introduces the "Master Your Own Expertise" (MYOE) framework, which integrates a Queryable Mixture-of-Preferences State-Space Model (QMoP-SSM) with a preference-based regret mechanism. MYOE estimates desired targets at each step and refines a neural control policy through self-imitation from limited demonstrations. By unifying reinforcement learning, imitation learning, state-space modeling, and preference reasoning, MYOE avoids the heavy dependence of existing RLfD approaches on large-scale data and distributional consistency. Experimental results show that MYOE significantly outperforms state-of-the-art methods in robustness, adaptability, and out-of-distribution generalization.
Abstract
Robot reinforcement learning from demonstrations (RLfD) assumes that expert data is abundant, which is usually unrealistic in the real world given data scarcity and high collection costs. Furthermore, imitation learning algorithms assume that the data is independent and identically distributed, so small errors emerge and compound over test-time trajectories, degrading performance. We address these issues by introducing the "master your own expertise" (MYOE) framework, a self-imitation framework that enables robotic agents to learn complex behaviors from a limited number of demonstrations. Inspired by human perception and action, we propose the queryable mixture-of-preferences state-space model (QMoP-SSM), which estimates the desired goal at every time step. These desired goals are used to compute a "preference regret" signal, which in turn is used to optimize the robot control policy. Our experiments demonstrate the robustness, adaptability, and out-of-sample performance of our agent compared with other state-of-the-art RLfD schemes. The GitHub repository that supports this work can be found at: https://github.com/rxng8/neurorobot-preference-regret-learning.
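To make the goal-then-regret loop concrete, here is a minimal toy sketch of the idea that a per-step desired goal is predicted and the deviation of the achieved state from that goal ("preference regret") is accumulated as an optimization signal. All function names (`estimate_desired_goal`, `preference_regret`, `trajectory_regret`) and the naive extrapolating goal estimator are illustrative assumptions, not the paper's QMoP-SSM or its actual API.

```python
import numpy as np

def estimate_desired_goal(history: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned goal estimator (hypothetical).

    The paper uses a queryable mixture-of-preferences state-space model;
    here we merely extrapolate the last observed transition.
    """
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def preference_regret(achieved: np.ndarray, desired: np.ndarray) -> float:
    """Squared-error regret of the achieved state relative to the desired goal
    (an assumed choice of distance; the paper may define regret differently)."""
    return float(np.sum((achieved - desired) ** 2))

def trajectory_regret(states: np.ndarray) -> float:
    """Accumulate per-step preference regret along a trajectory.

    In a full system this scalar would serve as (part of) the loss used to
    update the control policy via self-imitation.
    """
    total = 0.0
    for t in range(2, len(states)):
        desired = estimate_desired_goal(states[:t])
        total += preference_regret(states[t], desired)
    return total

# A constant-velocity trajectory incurs zero regret under this toy
# extrapolating estimator; a trajectory that deviates does not.
straight = np.array([[0.0], [1.0], [2.0], [3.0]])
deviating = np.array([[0.0], [1.0], [2.0], [5.0]])
print(trajectory_regret(straight))   # 0.0
print(trajectory_regret(deviating))  # 4.0
```

The sketch only conveys the control flow: a model queried at each step yields a desired goal, and the gap to the achieved state becomes the learning signal, rather than relying on dense expert supervision.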