🤖 AI Summary
This work targets the semantic-physical disconnect in long-horizon structured tasks for heterogeneous multi-agent systems, as well as the high cost of existing modular frameworks, by proposing a unified vision-language model (VLM) controller that integrates policy learning and task execution. The approach embeds physical constraints via e-URDF representations, leverages a sim-to-real topological mapping, and employs a multimodal state accumulation mechanism to enable semantic-physical hierarchical coordination, cross-platform skill transfer, and automated SDK generation. This end-to-end closed-loop framework supports real-time access to heterogeneous robot states, hardware-level validation, and dynamic task allocation, substantially reducing reliance on custom development while improving robustness in multi-policy collaboration and enabling continuous skill refinement and rapid deployment.
📝 Abstract
The integration of large language models (LLMs) with embodied agents has improved high-level reasoning capabilities; however, a critical gap remains between semantic understanding and physical execution. While vision-language-action (VLA) and vision-language-navigation (VLN) systems enable robots to perform manipulation and navigation tasks from natural language instructions, they still struggle with long-horizon sequential and temporally structured tasks. Existing frameworks typically adopt modular pipelines for data collection, skill training, and policy deployment, resulting in high costs for experimental validation and policy optimization. To address these limitations, we propose ROSClaw, an agent framework for heterogeneous robots that integrates policy learning and task execution within a unified vision-language model (VLM) controller. The framework leverages e-URDF representations of heterogeneous robots as physical constraints to construct a sim-to-real topological mapping, enabling real-time access to the physical states of both simulated and real-world agents. We further incorporate a data collection and state accumulation mechanism that stores robot states, multimodal observations, and execution trajectories during real-world execution, enabling subsequent iterative policy optimization. During deployment, a unified agent maintains semantic continuity between reasoning and execution and dynamically assigns task-specific control to different agents, thereby improving robustness in multi-policy execution. By establishing an autonomous closed-loop framework, ROSClaw minimizes reliance on robot-specific development workflows. The framework supports hardware-level validation, automated generation of SDK-level control programs, and tool-based execution, enabling rapid cross-platform transfer and continual improvement of robotic skills. Our project page: https://www.rosclaw.io/.
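The data collection and state accumulation mechanism described in the abstract can be pictured as a per-robot trajectory log that accumulates states, multimodal observations, and actions during real-world execution for later policy optimization. The sketch below is a minimal illustration under assumed field names (`robot_id`, `state`, `observation`, `action`); the paper does not specify ROSClaw's actual data schema or API.

```python
# Hypothetical sketch of a state-accumulation buffer; field names and the
# record schema are assumptions, not ROSClaw's published interface.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ExecutionRecord:
    """One timestep logged during real-world execution (assumed schema)."""
    robot_id: str                # which heterogeneous agent produced it
    state: dict[str, float]      # physical state, e.g. joint positions, base pose
    observation: dict[str, Any]  # multimodal inputs (image refs, text, etc.)
    action: dict[str, float]     # command issued at this step


@dataclass
class TrajectoryBuffer:
    """Accumulates records across robots for iterative policy optimization."""
    records: list[ExecutionRecord] = field(default_factory=list)

    def log(self, record: ExecutionRecord) -> None:
        """Append one timestep to the shared buffer."""
        self.records.append(record)

    def export(self, robot_id: str) -> list[ExecutionRecord]:
        """Return one robot's trajectory, in execution order."""
        return [r for r in self.records if r.robot_id == robot_id]


buf = TrajectoryBuffer()
buf.log(ExecutionRecord("arm_0", {"j1": 0.1}, {"cam": "frame_001"}, {"j1": 0.2}))
buf.log(ExecutionRecord("ugv_1", {"x": 1.0}, {"cam": "frame_002"}, {"vx": 0.5}))
print(len(buf.export("arm_0")))  # → 1
```

A shared buffer keyed by `robot_id` keeps heterogeneous agents' trajectories separable while still allowing joint, cross-platform analysis, which matches the closed-loop optimization role the abstract assigns to this mechanism.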