Semore: VLM-guided Enhanced Semantic Motion Representations for Visual Reinforcement Learning

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient representational capacity of backbone networks in vision-based reinforcement learning, this paper proposes Semore, a semantic-motion collaborative representation framework. Methodologically, Semore employs a dual-path backbone that separately processes RGB and optical-flow inputs to jointly model semantic and motion features. It integrates commonsense knowledge—guided by a vision-language model (VLM)—at the feature level, enabling efficient collaboration between semantic understanding and motion modeling via decoupled supervised and self-supervised interaction mechanisms. Additionally, CLIP is leveraged to enhance text-image alignment. Experiments demonstrate that Semore significantly outperforms state-of-the-art methods across multiple visual-RL benchmarks, achieving superior generalization and decision-making efficiency. The implementation is publicly available.
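The dual-path backbone described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the dimensions, the two-layer MLP encoders, and the concatenation fusion are all illustrative assumptions standing in for the real RGB and optical-flow paths.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """Two-layer MLP with ReLU: a stand-in for each path's encoder."""
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

# Hypothetical feature sizes: 12-dim RGB feature, 8-dim optical-flow feature.
d_rgb, d_flow, d_hid, d_out = 12, 8, 16, 4

w1_s, w2_s = rng.normal(size=(d_rgb, d_hid)), rng.normal(size=(d_hid, d_out))
w1_m, w2_m = rng.normal(size=(d_flow, d_hid)), rng.normal(size=(d_hid, d_out))

def dual_path(rgb_feat, flow_feat):
    """Encode semantics (RGB path) and motion (flow path) separately,
    then concatenate the two representations for a downstream policy head."""
    z_sem = mlp(rgb_feat, w1_s, w2_s)    # semantic representation
    z_mot = mlp(flow_feat, w1_m, w2_m)   # motion representation
    return np.concatenate([z_sem, z_mot], axis=-1)

rgb = rng.normal(size=(d_rgb,))
flow = rng.normal(size=(d_flow,))
z = dual_path(rgb, flow)
print(z.shape)  # (8,)
```

In the actual framework each path is separately supervised (semantic supervision from the VLM, motion supervision from flow), which this sketch omits.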

📝 Abstract
The growing exploration of Large Language Models (LLMs) and Vision-Language Models (VLMs) has opened avenues for enhancing the effectiveness of reinforcement learning (RL). However, existing LLM-based RL methods often focus on guiding the control policy and face the challenge of limited representational capacity in the backbone networks. To tackle this problem, we introduce Enhanced Semantic Motion Representations (Semore), a new VLM-based framework for visual RL that simultaneously extracts semantic and motion representations from RGB flows through a dual-path backbone. Semore utilizes a VLM with common-sense knowledge to retrieve key information from observations, while using pre-trained CLIP to achieve text-image alignment, thereby embedding ground-truth representations into the backbone. To efficiently fuse semantic and motion representations for decision-making, our method adopts a separately supervised approach that simultaneously guides the extraction of semantics and motion while allowing them to interact spontaneously. Extensive experiments demonstrate that, under VLM guidance at the feature level, our method exhibits efficient and adaptive ability compared to state-of-the-art methods. All code is released.
Problem

Research questions and friction points this paper is trying to address.

Extracts semantic and motion representations from RGB flows
Fuses semantic and motion representations for decision-making
Enhances visual reinforcement learning with VLM-guided features
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLM extracts semantic and motion from RGB flows
Separately supervised fusion of semantics and motion
Pre-trained CLIP aligns text and image for grounding
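The CLIP-style text-image alignment in the last bullet can be sketched as a symmetric contrastive (InfoNCE) loss over matched image/text embedding pairs. This is a generic CLIP-style objective, not the paper's exact loss; the batch size, embedding dimension, and temperature value are illustrative assumptions.

```python
import numpy as np

def l2norm(x):
    """Project embeddings onto the unit sphere, as CLIP does."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched image/text pairs (CLIP-style).

    The i-th image is treated as the positive for the i-th text and
    vice versa; all other pairs in the batch are negatives.
    """
    img, txt = l2norm(img_emb), l2norm(txt_emb)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(img))                # diagonal = matched pairs

    def xent(lg):
        # numerically stable cross-entropy toward the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(1)
img = rng.normal(size=(4, 6))   # hypothetical batch of 4 image embeddings
txt = rng.normal(size=(4, 6))   # hypothetical batch of 4 text embeddings
loss = alignment_loss(img, txt)
```

Minimizing this loss pulls each observation's embedding toward the VLM-produced text description of that observation, which is how the text side grounds the visual backbone.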
Wentao Wang
Institute for AI Industry Research, Tsinghua University
Chunyang Liu
Didi Chuxing
Kehua Sheng
Didi Chuxing
Bo Zhang
Didi Chuxing
Yan Wang
Institute for AI Industry Research, Tsinghua University