Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning

📅 2024-04-30
🏛️ Adaptive Agents and Multi-Agent Systems
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high cost of human feedback and low sample efficiency in reward function learning for human-in-the-loop reinforcement learning, this paper proposes the Sub-optimal Data Pre-training (SDP) framework. SDP enables cold-start training of reward models without human annotations by leveraging unlabeled, low-quality trajectory data, augmented with pseudo-labels set to the environment's minimum reward. The method integrates pseudo-labeling, scalar reward modeling, and preference-based learning within a human-in-the-loop RL architecture. Evaluated across diverse simulated robotic tasks, SDP matches or exceeds state-of-the-art methods while reducing the number of human interactions by over 50%. Crucially, SDP is compatible with both simulated and real human teachers, and its pre-training phase requires no manual annotation at all.

๐Ÿ“ Abstract
To create useful reinforcement learning (RL) agents, step zero is to design a suitable reward function that captures the nuances of the task. However, reward engineering can be a difficult and time-consuming process. Instead, human-in-the-loop RL methods hold the promise of learning reward functions from human feedback. Despite recent successes, many human-in-the-loop RL methods still require numerous human interactions to learn successful reward functions. To improve the feedback efficiency of human-in-the-loop RL methods (i.e., require less human interaction), this paper introduces Sub-optimal Data Pre-training (SDP), an approach that leverages reward-free, sub-optimal data to improve scalar- and preference-based RL algorithms. In SDP, we start by pseudo-labeling all low-quality data with the minimum environment reward. Through this process, we obtain reward labels to pre-train our reward model without requiring human labeling or preferences. This pre-training phase gives the reward model a head start in learning, enabling it to recognize that low-quality transitions should be assigned low rewards. Through extensive experiments with both simulated and human teachers, we find that SDP can at least meet, and often significantly improve upon, state-of-the-art human-in-the-loop RL performance across a variety of simulated robotic tasks.
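The pre-training idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the feature representation, model class (a linear reward model), and all dimensions are assumptions made for the example. It shows how pseudo-labeling sub-optimal transitions with the environment's minimum reward yields supervised targets for a reward model before any human feedback is collected.

```python
import random

random.seed(0)
DIM = 4       # assumed feature dimension (illustrative)
R_MIN = -1.0  # environment's minimum reward, used as the pseudo-label

# Reward-free, sub-optimal transitions represented as feature vectors.
transitions = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(200)]
pseudo_labels = [R_MIN] * len(transitions)  # every low-quality transition gets R_MIN

# Tiny linear reward model r(x) = w.x + b, pre-trained by gradient descent on MSE.
w, b, lr = [0.0] * DIM, 0.0, 0.05
n = len(transitions)
for _ in range(500):
    grad_w, grad_b = [0.0] * DIM, 0.0
    for x, y in zip(transitions, pseudo_labels):
        err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
        for i in range(DIM):
            grad_w[i] += err * x[i]
        grad_b += err
    w = [wi - lr * g / n for wi, g in zip(w, grad_w)]
    b -= lr * grad_b / n

# After pre-training, the model assigns roughly R_MIN to low-quality transitions,
# giving the subsequent human-in-the-loop phase (scalar or preference feedback)
# a warm start instead of a randomly initialized reward model.
mean_pred = sum(sum(wi * xi for wi, xi in zip(w, x)) + b
                for x in transitions) / n
print(mean_pred)
```

In the full method this pre-trained model is then refined with human scalar or preference feedback; the point of the sketch is only that the pseudo-labels supply free supervision for the cold-start phase.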
Problem

Research questions and friction points this paper is trying to address.

Improving feedback efficiency in human-in-the-loop RL
Reducing human interactions for reward function learning
Leveraging sub-optimal data to pre-train reward models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses sub-optimal data for pre-training
Pseudo-labels low-quality data automatically
Improves human-in-the-loop RL efficiency
Calarina Muslimani
University of Alberta
reinforcement learning · reward alignment · human-in-the-loop
M. E. Taylor
University of Alberta, Alberta Machine Intelligence Institute (Amii)