🤖 AI Summary
To address low task-transfer efficiency in language-conditioned multi-task reinforcement learning (RL), this work proposes the first RL framework to incorporate CLIP's cross-modal alignment principle, constructing a joint embedding space for language instructions and policy representations to achieve a semantically consistent, unified cross-modal representation. Methodologically, it couples a pre-trained language model with a policy network and introduces a contrastive learning objective that aligns language instructions with their corresponding policies, ensuring that semantically similar instruction-policy pairs are embedded close together in the shared vector space. The core contribution is the first differentiable, transferable semantic mapping between natural language and behavioral policies. Experiments on multiple language-conditioned RL benchmarks demonstrate substantial improvements in zero-shot and few-shot transfer performance, with policy reuse rates increasing by 37%–62%, validating the critical role of cross-modal alignment in multi-task generalization.
📝 Abstract
Recently, there has been an increasing need to develop agents capable of solving multiple tasks within the same environment, especially when these tasks are naturally associated with language. In this work, we propose a novel approach that leverages combinations of pre-trained (language, policy) pairs to establish an efficient transfer pipeline. Our algorithm is inspired by the principles of Contrastive Language-Image Pretraining (CLIP) in Computer Vision, which aligns representations across different modalities under the philosophy that "two modalities representing the same concept should have similar representations." The central idea here is that the instruction and the corresponding policy of a task represent the same concept, the task itself, in two different modalities. Therefore, by extending the idea of CLIP to RL, our method creates a unified representation space for natural language and policy embeddings. Experimental results demonstrate the utility of our algorithm in achieving faster transfer across tasks.
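The contrastive alignment described above can be sketched as a symmetric CLIP-style (InfoNCE) loss over a batch of matched instruction-policy embedding pairs. This is a minimal illustration, not the paper's implementation: the encoder architectures, `temperature` value, and the `clip_style_loss` name are assumptions, and the embeddings here stand in for the outputs of the pre-trained language model and the policy network.

```python
import numpy as np

def _normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosines.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def _log_softmax(z, axis):
    # Numerically stable log-softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def clip_style_loss(lang_emb, policy_emb, temperature=0.07):
    """Symmetric contrastive loss aligning instruction and policy embeddings.

    lang_emb, policy_emb: (N, d) arrays; row i of each is a matched pair.
    Matched pairs are pulled together, mismatched pairs pushed apart.
    """
    L = _normalize(lang_emb)
    P = _normalize(policy_emb)
    logits = L @ P.T / temperature          # (N, N) similarity matrix
    idx = np.arange(len(L))
    # Cross-entropy in both directions: each instruction should pick out
    # its own policy, and each policy its own instruction.
    loss_l2p = -_log_softmax(logits, axis=1)[idx, idx].mean()
    loss_p2l = -_log_softmax(logits, axis=0)[idx, idx].mean()
    return 0.5 * (loss_l2p + loss_p2l)
```

As a sanity check, perfectly aligned pairs (identical embeddings) yield a loss near zero, while shuffling the pairing raises it, which is what drives semantically matched instruction-policy pairs toward nearby points in the joint space.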