RLZero: Direct Policy Inference from Language Without In-Domain Supervision

📅 2024-12-07
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses a key bottleneck in reinforcement learning (RL): its reliance on handcrafted reward functions or domain-specific supervision. The authors propose a zero-shot language-to-policy mapping method built on an “Imagine–Project–Imitate” framework: (1) a pretrained video generation model transforms a natural language instruction into an imagined sequence of observations; (2) unsupervised cross-domain projection maps these imagined observations, or frames from cross-embodied sources such as YouTube videos, into the target environment's observation space; and (3) an agent pretrained in the target environment with unsupervised RL imitates the projected sequence through a closed-form solution. Crucially, the approach requires no task annotations, reward engineering, test-time environment interaction, or fine-tuning. It achieves zero-shot policy generalization across diverse tasks and environments, including complex humanoid robot settings, demonstrating for the first time purely language-driven embodied policy generation without in-domain supervision.
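To make the framework concrete, here is a minimal sketch of how the three steps compose end to end. All names (video_model, projector, agent, and their methods) are hypothetical stand-ins for the components described above, not APIs from the paper or its released code.

```python
# Minimal sketch of the imagine-project-imitate pipeline (hypothetical interfaces).

def rl_zero(instruction, video_model, projector, agent):
    """Map a natural language instruction directly to a policy, zero-shot."""
    imagined = video_model.generate(instruction)  # 1. Imagine: text -> video frames
    in_domain = projector(imagined)               # 2. Project: frames -> target-domain observations
    return agent.imitate(in_domain)               # 3. Imitate: closed-form policy inference
```

Note that no step queries the environment or updates any weights; the only learned components are pretrained ahead of time.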

📝 Abstract
The reward hypothesis states that all goals and purposes can be understood as the maximization of a received scalar reward signal. However, in practice, defining such a reward signal is notoriously difficult, as humans are often unable to predict the optimal behavior corresponding to a reward function. Natural language offers an intuitive alternative for instructing reinforcement learning (RL) agents, yet previous language-conditioned approaches either require costly supervision or test-time training given a language instruction. In this work, we present a new approach that uses a pretrained RL agent trained using only unlabeled, offline interactions (without task-specific supervision or labeled trajectories) to get zero-shot test-time policy inference from arbitrary natural language instructions. We introduce a framework comprising three steps: imagine, project, and imitate. First, the agent imagines a sequence of observations corresponding to the provided language description using video generative models. Next, these imagined observations are projected into the target environment domain. Finally, an agent pretrained in the target environment with unsupervised RL instantly imitates the projected observation sequence through a closed-form solution. To the best of our knowledge, our method, RLZero, is the first approach to show direct language-to-behavior generation abilities on a variety of tasks and environments without any in-domain supervision. We further show that components of RLZero can be used to generate policies zero-shot from cross-embodied videos, such as those available on YouTube, even for complex embodiments like humanoids.
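The abstract's "closed-form solution" is consistent with behavioral foundation models based on forward-backward (FB) representations from prior zero-shot RL work; whether RLZero uses exactly that parameterization is an assumption here, made only to illustrate how imitation can avoid test-time training. Under an FB model, the policy latent is simply an average of backward embeddings:

```python
import numpy as np

def infer_latent(B, projected_obs):
    """Closed-form imitation under an assumed forward-backward (FB) style model:
    the task latent z is the mean backward embedding of the target observation
    sequence, normalized onto the latent sphere.
    B: callable mapping (T, obs_dim) observations -> (T, d) backward embeddings."""
    z = B(projected_obs).mean(axis=0)
    return z / (np.linalg.norm(z) + 1e-8)

def greedy_action(F, z, obs, candidate_actions):
    """Act with the pretrained forward model: Q^z(s, a) = F(s, a, z) . z,
    maximized over a finite candidate action set for simplicity."""
    q_values = [F(obs, a, z) @ z for a in candidate_actions]
    return candidate_actions[int(np.argmax(q_values))]
```

Because z is computed in one pass over the projected observations, "instant" imitation here means a single forward computation rather than gradient-based fine-tuning.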
Problem

Research questions and friction points this paper is trying to address.

Defining scalar reward signals for RL tasks is notoriously difficult for humans
Existing language-conditioned RL approaches require costly supervision or test-time training
Open question: can policies be inferred zero-shot from language without any in-domain supervision?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reuses an agent pretrained with unsupervised RL on unlabeled, offline interactions
Imagines observation sequences from language instructions using pretrained video generative models
Projects imagined observations into the target domain and imitates them in closed form for zero-shot inference (one plausible projection step is sketched below)
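The projection step must bridge imagined frames (possibly from a different embodiment) and the target environment's observations. As a hedged illustration only, and not the paper's exact method, one simple instantiation is nearest-neighbor retrieval under a shared visual encoder:

```python
import numpy as np

def project_frames(frames, obs_bank, encode):
    """Map each imagined frame to its closest in-domain observation.
    frames:   (T, H, W, C) generated or cross-embodied video frames
    obs_bank: (N, H, W, C) observations from the agent's unlabeled pretraining data
    encode:   callable mapping a batch of images -> (n, d) unit-norm embeddings
    (the encoder itself is an assumed component, e.g. any pretrained visual model)."""
    q = encode(frames)            # (T, d) embeddings of imagined frames
    k = encode(obs_bank)          # (N, d) embeddings of in-domain observations
    sim = q @ k.T                 # (T, N) cosine similarities (unit-norm inputs)
    nearest = sim.argmax(axis=1)  # best in-domain match per frame
    return obs_bank[nearest]      # (T, H, W, C) projected observation sequence
```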
👥 Authors
Harshit S. Sikchi (The University of Texas at Austin)
Siddhant Agarwal (The University of Texas at Austin)
Pranaya Jajoo (University of Alberta)
Samyak Parajuli (The University of Texas at Austin)
Caleb Chuck (The University of Texas at Austin)
Max Rudolph (The University of Texas at Austin)
Peter Stone (The University of Texas at Austin; Sony AI)
Amy Zhang (The University of Texas at Austin; Meta AI)
S. Niekum (UMass Amherst)