AI Summary
To address the high cost of human feedback and low sample efficiency in reward function learning for human-in-the-loop reinforcement learning, this paper proposes the Sub-optimal Data Pre-training (SDP) framework. SDP enables cold-start training of reward models without human annotations by leveraging unlabeled, low-quality trajectory data, augmented with pseudo-labels derived from the environment's minimum reward. The method integrates pseudo-labeling, scalar reward modeling, and preference-based learning within a human-in-the-loop RL architecture. Evaluated across diverse simulated robotic tasks, SDP achieves significant improvements over state-of-the-art methods: it attains comparable or superior performance while reducing the number of human interactions by over 50%. Crucially, SDP is compatible with both simulated and real human teachers and, for the first time, enables reward-model pre-training without any manual annotation.
Abstract
To create useful reinforcement learning (RL) agents, step zero is to design a suitable reward function that captures the nuances of the task. However, reward engineering can be a difficult and time-consuming process. Instead, human-in-the-loop RL methods hold the promise of learning reward functions from human feedback. Despite recent successes, many human-in-the-loop RL methods still require numerous human interactions to learn successful reward functions. To improve the feedback efficiency of human-in-the-loop RL methods (i.e., to require less human interaction), this paper introduces Sub-optimal Data Pre-training (SDP), an approach that leverages reward-free, sub-optimal data to improve scalar- and preference-based RL algorithms. In SDP, we start by pseudo-labeling all low-quality data with the minimum environment reward. Through this process, we obtain reward labels to pre-train our reward model without requiring human labels or preferences. This pre-training phase gives the reward model a head start in learning, enabling it to recognize that low-quality transitions should be assigned low rewards. Through extensive experiments with both simulated and human teachers, we find that SDP at least matches, and often significantly exceeds, state-of-the-art human-in-the-loop RL performance across a variety of simulated robotic tasks.
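The pseudo-labeling pre-training step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the minimum environment reward is known, uses random vectors as stand-ins for sub-optimal (state, action) transitions, and fits a linear reward model by least squares in place of the neural reward model a real system would use. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

R_MIN = 0.0  # assumed known minimum environment reward

# Hypothetical reward-free, sub-optimal transitions: (state, action) feature
# vectors, e.g. collected by a random policy before any human feedback.
sub_optimal = rng.normal(size=(500, 8))           # 500 transitions, 8 features

# Pseudo-label every sub-optimal transition with the minimum reward --
# no human labels or preferences are needed for this step.
pseudo_labels = np.full(len(sub_optimal), R_MIN)

# Pre-train a tiny linear reward model r(s, a) = w . x + b on the
# pseudo-labels (least-squares regression; a stand-in for gradient
# training of a neural reward model).
X = np.hstack([sub_optimal, np.ones((len(sub_optimal), 1))])  # bias column
w, *_ = np.linalg.lstsq(X, pseudo_labels, rcond=None)

# After pre-training, the model maps low-quality transitions to ~R_MIN,
# giving subsequent scalar- or preference-based fine-tuning a head start.
pred = X @ w
print(bool(np.allclose(pred, R_MIN, atol=1e-6)))
```

The key point the sketch captures is that the pre-training targets come from the environment's reward floor rather than from a human, so the reward model already discounts low-quality behavior before the first human query is issued.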