🤖 AI Summary
To address the low human-feedback efficiency and poor sample efficiency of preference-based reinforcement learning (PbRL), this paper proposes SENIOR. First, it introduces a Motion-Distinction-based Selection (MDS) scheme that uses kernel density estimation over state distributions to automatically identify easily distinguishable, task-relevant trajectory segment pairs, making queries easier for humans to label. Second, it designs a Preference-Guided Exploration (PGE) module that converts the learned preference model into intrinsic rewards, steering exploration toward high-preference, rarely visited states. SENIOR integrates preference modeling, intrinsic reward shaping, online policy optimization, and trajectory contrastive learning. Evaluated on six simulated and four real-world robot manipulation tasks, SENIOR significantly improves feedback efficiency and policy convergence speed, consistently outperforming five state-of-the-art baselines.
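To make the MDS idea concrete, here is a minimal sketch of density-based query-pair selection. All function names, the Gaussian-kernel density estimate, the motion measure, and the pair-selection heuristic are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def gaussian_kde_logdensity(states, queries, bandwidth=0.5):
    """Log of the average Gaussian-kernel density of `queries` under the
    empirical state distribution `states` (both shaped [N, d])."""
    # Pairwise squared distances between query states and reference states.
    d2 = ((queries[:, None, :] - states[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bandwidth ** 2))
    return np.log(k.mean(axis=1) + 1e-12)

def motion_score(segment):
    """Hypothetical motion measure: total displacement along a segment."""
    return np.linalg.norm(np.diff(segment, axis=0), axis=1).sum()

def select_query_pair(segments, states, bandwidth=0.5):
    """Rough stand-in for MDS-style selection: keep segments with apparent
    motion, then pick the two whose state densities differ the most,
    i.e. the pair that should be easiest for a human to tell apart."""
    # Keep the more-mobile half of the candidate segments.
    scores = np.array([motion_score(s) for s in segments])
    idx = np.argsort(scores)[len(segments) // 2:]
    # Mean log-density of each remaining segment's states.
    dens = np.array([
        gaussian_kde_logdensity(states, segments[i], bandwidth).mean()
        for i in idx
    ])
    # The most- and least-typical segments form the query pair.
    return idx[np.argmin(dens)], idx[np.argmax(dens)]
```

The intuition being illustrated: segments with little motion or near-identical state distributions are hard for annotators to rank, so filtering on motion and density contrast yields more human-friendly queries.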
📝 Abstract
Preference-based Reinforcement Learning (PbRL) methods avoid reward engineering by learning reward models from human preferences. However, poor feedback- and sample-efficiency remain obstacles to the practical application of PbRL. In this paper, we present SENIOR, a novel efficient query selection and preference-guided exploration method that selects meaningful, easy-to-compare behavior segment pairs to improve human feedback-efficiency and accelerates policy learning with designed preference-guided intrinsic rewards. Our key idea is twofold: (1) we design a Motion-Distinction-based Selection scheme (MDS), which selects segment pairs with apparent motion and different directions through kernel density estimation of states; such pairs are more task-related and easier for humans to label; (2) we propose a novel Preference-Guided Exploration method (PGE), which encourages exploration toward states with high preference and low visitation, continuously guiding the agent to collect valuable samples. The synergy between the two mechanisms significantly accelerates reward and policy learning. Our experiments show that SENIOR outperforms five existing methods in both human feedback-efficiency and policy convergence speed on six complex robot manipulation tasks in simulation and four in the real world.
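The PGE idea, rewarding states that score high under the learned preference model but have been visited rarely, can be sketched with a count-based novelty bonus. The grid discretization, the `1/sqrt(n)` bonus shape, and the `beta` weight are all assumptions for illustration; the paper's actual reward shaping may differ:

```python
import numpy as np

def count_visits(visited, state, grid=0.5):
    """Discretize the state onto a grid and count visits to its cell.
    `visited` is a plain dict acting as the count table."""
    cell = tuple(np.floor(np.asarray(state) / grid).astype(int))
    visited[cell] = visited.get(cell, 0) + 1
    return visited[cell]

def intrinsic_reward(pref_score, n_visits, beta=1.0):
    """Hypothetical PGE-style bonus: a high learned preference score and
    low visitation count both increase the exploration reward."""
    return pref_score + beta / np.sqrt(n_visits)
```

In use, `pref_score` would come from the learned preference (reward) model, so the agent is pulled toward states humans have implicitly ranked as valuable while the count term decays the bonus for over-visited regions.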