🤖 AI Summary
To address the limited representational capacity of backbone networks in vision-based reinforcement learning, this paper proposes Semore, a semantic-motion collaborative representation framework. Semore employs a dual-path backbone that processes RGB and optical-flow inputs separately to jointly model semantic and motion features. It integrates commonsense knowledge from a vision-language model (VLM) at the feature level, leveraging CLIP to align text and image embeddings, and enables efficient collaboration between semantic understanding and motion modeling via decoupled supervised and self-supervised interaction mechanisms. Experiments demonstrate that Semore significantly outperforms state-of-the-art methods across multiple visual-RL benchmarks, achieving superior generalization and decision-making efficiency. The implementation is publicly available.
📝 Abstract
The growing exploration of Large Language Models (LLMs) and Vision-Language Models (VLMs) has opened avenues for enhancing the effectiveness of reinforcement learning (RL). However, existing LLM-based RL methods often focus on guiding the control policy and encounter the challenge of limited representational capacity in the backbone networks. To tackle this problem, we introduce Enhanced Semantic Motion Representations (Semore), a new VLM-based framework for visual RL that simultaneously extracts semantic and motion representations from RGB streams through a dual-path backbone. Semore utilizes a VLM with commonsense knowledge to retrieve key information from observations, while using pre-trained CLIP to achieve text-image alignment, thereby embedding ground-truth representations into the backbone. To efficiently fuse semantic and motion representations for decision-making, our method adopts a separately supervised approach that simultaneously guides the extraction of semantics and motion while allowing the two to interact spontaneously. Extensive experiments demonstrate that, under VLM guidance at the feature level, our method achieves greater efficiency and adaptability than state-of-the-art methods. All code is released.
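The abstract's dual-path design and CLIP-style alignment can be illustrated with a minimal sketch. This is not the paper's implementation: the linear encoders, dimensions, and the random stand-ins for CLIP text embeddings and VLM-extracted key information are all assumptions for illustration only; the real model would use convolutional paths over RGB and optical-flow frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear projection as a toy stand-in for one encoder path."""
    return np.tanh(x @ W)

# Hypothetical dimensions: 128-dim flattened observations, 32-dim features.
D_in, D_feat = 128, 32
W_sem = rng.normal(scale=0.1, size=(D_in, D_feat))  # semantic path (RGB input)
W_mot = rng.normal(scale=0.1, size=(D_in, D_feat))  # motion path (optical flow)

rgb = rng.normal(size=(4, D_in))    # batch of 4 RGB observations (flattened)
flow = rng.normal(size=(4, D_in))   # corresponding optical-flow inputs

z_sem = encode(rgb, W_sem)          # semantic representation
z_mot = encode(flow, W_mot)         # motion representation
z = np.concatenate([z_sem, z_mot], axis=-1)  # fused feature fed to the policy

# Separate supervision of the semantic path: pull its features toward
# text embeddings of VLM-retrieved key information (random stand-in here).
text_emb = rng.normal(size=(4, D_feat))

def cosine_alignment_loss(a, b, eps=1e-8):
    """CLIP-style alignment: 1 - cosine similarity, averaged over the batch."""
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

loss_sem = cosine_alignment_loss(z_sem, text_emb)
print(z.shape, loss_sem)
```

The sketch shows only the data flow: two paths encoded independently, concatenated into one decision feature, with an alignment loss applied to the semantic path alone, mirroring the "separately supervised" scheme described above.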