Maestro: Orchestrating Robotics Modules with Vision-Language Models for Zero-Shot Generalist Robots

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the zero-shot generalization challenge for general-purpose robots. We propose a modular policy generation framework that employs a vision-language model (VLM) as a programmable agent to dynamically orchestrate reusable perception, planning, and control modules, thereby generating executable policy programs. Departing from end-to-end large-model training paradigms, our approach leverages code-driven, closed-loop interaction interfaces to enable task adaptation and rapid deployment. The core contribution is elevating the VLM from a passive perceptual module to an active policy orchestrator—endowing the system with strong editability, cross-platform transferability, and robust real-world adaptability. In evaluation, our method achieves superior zero-shot performance over state-of-the-art vision-language-action (VLA) models across diverse complex manipulation tasks and successfully transfers to novel robotic platforms, including a quadrupedal robot equipped with a manipulator arm.

📝 Abstract
Today's best-explored routes towards generalist robots center on collecting ever larger "observations-in, actions-out" robotics datasets to train large end-to-end models, copying a recipe that has worked for vision-language models (VLMs). We pursue a road less traveled: building generalist policies directly around VLMs by augmenting their general capabilities with specific robot capabilities encapsulated in a carefully curated set of perception, planning, and control modules. In Maestro, a VLM coding agent dynamically composes these modules into a programmatic policy for the current task and scenario. Maestro's architecture benefits from a streamlined closed-loop interface without many manually imposed structural constraints, and a comprehensive and diverse tool repertoire. As a result, it largely surpasses today's VLA models in zero-shot performance on challenging manipulation skills. Further, Maestro is easily extensible to incorporate new modules, easily editable to suit new embodiments such as a quadruped-mounted arm, and even adapts readily from minimal real-world experience through local code edits.
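To make the architecture concrete, here is a minimal sketch of the kind of policy program a VLM coding agent might compose from reusable modules. All module names (`detect_objects`, `plan_grasp`, `execute_trajectory`) and the retry structure are illustrative assumptions, not Maestro's actual API:

```python
# Hypothetical module stubs standing in for a curated perception /
# planning / control toolbox (names are assumptions, not Maestro's API).

def detect_objects(scene):
    # Perception module: returns detected objects with 3D poses.
    return [{"name": "mug", "pose": (0.4, 0.1, 0.02)}]

def plan_grasp(obj):
    # Planning module: proposes a grasp pose above the object.
    x, y, z = obj["pose"]
    return (x, y, z + 0.10)

def execute_trajectory(target, feedback):
    # Control module: moves to the target and reports success
    # via an environment feedback callback.
    return feedback(target)

def policy(scene, feedback):
    """Closed-loop policy program: perceive, plan, act, retry on failure."""
    for _ in range(3):  # bounded retries, driven by execution feedback
        objects = detect_objects(scene)
        if not objects:
            continue
        grasp = plan_grasp(objects[0])
        if execute_trajectory(grasp, feedback):
            return True
    return False
```

The point of the sketch is the control flow, not the stubs: because the policy is ordinary code, it can be inspected, edited per embodiment, and re-composed per task, which is what the abstract means by editability and extensibility.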
Problem

Research questions and friction points this paper is trying to address.

Building generalist robots from vision-language models and modular components
Dynamically composing perception, planning, and control modules per task
Enhancing zero-shot manipulation skills through extensible, programmable policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLM coding agent dynamically composes modular robotics programs
Closed-loop interface with a diverse tool repertoire
Extensible architecture adapts easily to new embodiments