ManipLVM-R1: Reinforcement Learning for Reasoning in Embodied Manipulation with Large Vision-Language Models

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) for embodied manipulation rely heavily on human annotations, generalize poorly, and degrade in out-of-distribution (OOD) scenarios. To address these limitations, this paper proposes ManipLVM-R1, a framework built on Reinforcement Learning with Verifiable Rewards (RLVR) that enables LVLMs to learn interactive reasoning from rule-based physical feedback rather than dense supervision. The key innovation is a pair of rule-driven rewards: an affordance-aware reward that encourages deeper physical reasoning without added annotation, and a trajectory-matching reward that enforces spatial logic and kinematic constraints. Evaluated in a high-fidelity embodied manipulation simulator, ManipLVM-R1 significantly improves OOD generalization, manipulation-localization accuracy, and the physical plausibility of action trajectories, and it consistently outperforms supervised baselines on multi-task benchmarks.

📝 Abstract
Large Vision-Language Models (LVLMs) have recently advanced robotic manipulation by leveraging vision for scene perception and language for instruction following. However, existing methods rely heavily on costly human-annotated training datasets, which limits their generalization and causes them to struggle in out-of-domain (OOD) scenarios, reducing real-world adaptability. To address these challenges, we propose ManipLVM-R1, a novel reinforcement learning framework that replaces traditional supervision with Reinforcement Learning using Verifiable Rewards (RLVR). By directly optimizing for task-aligned outcomes, our method enhances generalization and physical reasoning while removing the dependence on costly annotations. Specifically, we design two rule-based reward functions targeting key robotic manipulation subtasks: an Affordance Perception Reward to enhance localization of interaction regions, and a Trajectory Match Reward to ensure the physical plausibility of action paths. These rewards provide immediate feedback and impose spatial-logical constraints, encouraging the model to go beyond shallow pattern matching and instead learn deeper, more systematic reasoning about physical interactions.
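The abstract does not give the reward formulas, so the sketch below is illustrative only: it assumes the Affordance Perception Reward can be approximated by bounding-box IoU against a reference interaction region, and the Trajectory Match Reward by average waypoint distance to a reference trajectory mapped into (0, 1]. Function names and signatures are hypothetical, not the paper's API.

```python
import numpy as np

def affordance_perception_reward(pred_box, gt_box):
    """IoU-style reward for localizing the interaction region.

    pred_box / gt_box: (x1, y1, x2, y2). The IoU form is an assumption;
    the paper only states that the reward scores affordance localization.
    """
    x1 = max(pred_box[0], gt_box[0])
    y1 = max(pred_box[1], gt_box[1])
    x2 = min(pred_box[2], gt_box[2])
    y2 = min(pred_box[3], gt_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(gt_box) - inter
    return inter / union if union > 0 else 0.0

def trajectory_match_reward(pred_traj, ref_traj):
    """Reward action paths that stay close to a physically valid reference.

    pred_traj / ref_traj: (T, D) waypoint arrays. Mean pointwise distance
    mapped to (0, 1] is an illustrative choice, not the paper's metric.
    """
    pred = np.asarray(pred_traj, dtype=float)
    ref = np.asarray(ref_traj, dtype=float)
    T = min(len(pred), len(ref))
    if T == 0:
        return 0.0
    dist = np.linalg.norm(pred[:T] - ref[:T], axis=1).mean()
    return 1.0 / (1.0 + dist)
```

Both rewards are verifiable in the RLVR sense: they are computed by fixed rules from the model's output, with no learned reward model in the loop.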
Problem

Research questions and friction points this paper is trying to address.

Reducing reliance on costly human-annotated datasets for robotic manipulation
Improving generalization in out-of-domain scenarios for LVLMs
Enhancing physical reasoning and task-aligned outcomes in manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning with Verifiable Rewards (RLVR); a minimal training-loop sketch follows this list
Affordance Perception Reward for interaction localization
Trajectory Match Reward for physical plausibility
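
As a rough illustration of how RLVR could drive learning here, the sketch below applies a group-baseline REINFORCE update with a rule-based reward. The paper does not specify its policy-optimization algorithm in this card, and the `policy.sample` interface and `reward_fn` are hypothetical stand-ins.

```python
import torch

def rlvr_step(policy, optimizer, batch, reward_fn, num_samples=4):
    """One RLVR update: sample candidate outputs, score them with a
    rule-based verifiable reward, and reinforce above-average samples.

    Assumes `policy.sample(obs)` returns (output, log_prob) with
    log_prob a scalar tensor; this interface is hypothetical.
    """
    losses = []
    for obs in batch:
        outs, logps = zip(*[policy.sample(obs) for _ in range(num_samples)])
        rewards = torch.tensor([reward_fn(obs, o) for o in outs])
        # Subtract the group mean so only better-than-average samples
        # are reinforced; no learned value function is needed.
        advantages = rewards - rewards.mean()
        losses.append(-(advantages * torch.stack(logps)).mean())
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `reward_fn` could combine the affordance and trajectory rewards sketched above, giving immediate, annotation-free feedback on each sampled output.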