Zero-Shot Reinforcement Learning via Function Encoders

📅 2024-01-30
🏛️ International Conference on Machine Learning
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the challenge of zero-shot cross-task transfer in reinforcement learning. We propose the Function Encoder framework, which maps reward and state-transition functions into low-dimensional, semantically consistent task embeddings via weighted combinations of nonlinear basis functions—enabling task alignment and immediate transfer without online fine-tuning. The framework is modular and seamlessly integrates with mainstream RL algorithms including PPO, SAC, and DQN. Experiments across multiple benchmark domains demonstrate substantial improvements in zero-shot generalization, achieving state-of-the-art performance in data efficiency, asymptotic policy quality, and training stability. Our core contribution is the first explicit encoding of task-level functional representations (i.e., reward and dynamics functions) into transferable vector embeddings—departing from conventional paradigms that rely solely on policy- or value-function-based transfer. This paradigm shift enables more principled and scalable cross-task knowledge reuse.

📝 Abstract
Although reinforcement learning (RL) can solve many challenging sequential decision making problems, achieving zero-shot transfer across related tasks remains a challenge. The difficulty lies in finding a good representation for the current task so that the agent understands how it relates to previously seen tasks. To achieve zero-shot transfer, we introduce the function encoder, a representation learning algorithm which represents a function as a weighted combination of learned, non-linear basis functions. By using a function encoder to represent the reward function or the transition function, the agent has information on how the current task relates to previously seen tasks via a coherent vector representation. Thus, the agent is able to achieve transfer between related tasks at run time with no additional training. We demonstrate state-of-the-art data efficiency, asymptotic performance, and training stability in three RL fields by augmenting basic RL algorithms with a function encoder task representation.
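The abstract describes representing a function as a weighted combination of learned, non-linear basis functions, with the coefficient vector serving as the task representation. A minimal numpy sketch of that idea, using random Fourier features as an illustrative stand-in for the learned basis networks (the basis, the sample reward function, and all names below are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for k learned non-linear basis functions g_1..g_k.
# In the paper these are trained neural networks; random Fourier
# features are used here only to keep the sketch self-contained.
k, d = 8, 2
W = rng.normal(size=(d, k))
b = rng.uniform(0.0, 2.0 * np.pi, size=k)

def basis(x):
    """Evaluate all k basis functions at inputs x: (n, d) -> (n, k)."""
    return np.cos(x @ W + b)

# Sample (state, reward) pairs from the current task's reward function
# (a hypothetical reward, chosen only for illustration).
x = rng.normal(size=(100, d))
y = np.sin(x[:, 0]) + 0.5 * x[:, 1]

# Least-squares projection onto the basis yields the coefficient vector c,
# so that reward(x) ≈ basis(x) @ c. This c is the task's vector
# representation, computable at run time with no additional training.
c, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
print(c.shape)  # (8,)
```

The key property is that related tasks yield nearby coefficient vectors under the shared basis, which is what lets the agent judge how a new task relates to previously seen ones.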
Problem

Research questions and friction points this paper is trying to address.

Achieving zero-shot transfer across related RL tasks
Finding task representations that capture how the current task relates to previously seen tasks
Augmenting standard RL algorithms with a function encoder task representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Function encoder for zero-shot transfer learning
Representation of functions as weighted combinations of learned non-linear basis functions
Coherent vector representations that relate the current task to previously seen tasks
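The summary notes that the framework integrates with standard RL algorithms such as PPO, SAC, and DQN. In the simplest form, the task's coefficient vector is appended to the policy's observation so an off-the-shelf algorithm becomes task-conditioned. A hypothetical sketch (function name and shapes are assumptions, not the paper's API):

```python
import numpy as np

def policy_input(state, task_embedding):
    """Concatenate the function-encoder task embedding to the state,
    turning a standard RL policy into a task-conditioned one."""
    return np.concatenate([state, task_embedding])

state = np.zeros(4)        # hypothetical 4-dimensional environment state
c = np.ones(8)             # task embedding produced by the function encoder
obs = policy_input(state, c)
print(obs.shape)           # (12,)
```

Because the embedding is just an extra input vector, no change to the underlying algorithm's update rules is needed, which is what makes the approach modular.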
👥 Authors
Tyler Ingebrand, PhD Student, University of Texas at Austin
Amy Zhang, University of Texas at Austin
U. Topcu, University of Texas at Austin