🤖 AI Summary
This work addresses the lack of systematic evaluation and improvement mechanisms in existing Code-as-Policy (CaP) approaches for embodied intelligence by introducing the CaP-X framework, which comprises an interactive environment, CaP-Gym, and a benchmark suite, CaP-Bench. Leveraging program synthesis and execution to control robots, this study is the first to systematically reveal models' reliance on human-provided abstractions. It proposes two methods: CaP-Agent0, a training-free agent, and CaP-RL, a reinforcement learning agent that integrates multi-turn interaction, visual differencing, structured feedback, and verifiable rewards. Operating over low-level primitives, CaP-RL significantly enhances policy robustness. Experiments demonstrate that the approach achieves near-human-level manipulation reliability in both simulation and real-world robotic settings, substantially improving task success rates and enabling nearly lossless sim-to-real transfer.
📝 Abstract
"Code-as-Policy" considers how executable code can complement data-intensive Vision-Language-Action (VLA) methods, yet its effectiveness as an autonomous controller for embodied manipulation remains underexplored. We present CaP-X, an open-access framework for systematically studying Code-as-Policy agents in robot manipulation. At its core is CaP-Gym, an interactive environment in which agents control robots by synthesizing and executing programs that compose perception and control primitives. Building on this foundation, CaP-Bench evaluates frontier language and vision-language models across varying levels of abstraction, interaction, and perceptual grounding. Across 12 models, CaP-Bench reveals a consistent trend: performance improves with human-crafted abstractions but degrades as these priors are removed, exposing a dependence on designer scaffolding. At the same time, we observe that this gap can be mitigated by scaling agentic test-time computation: multi-turn interaction, structured execution feedback, visual differencing, automatic skill synthesis, and ensembled reasoning substantially improve robustness even when agents operate over low-level primitives. These findings allow us to derive CaP-Agent0, a training-free framework that recovers human-level reliability on several manipulation tasks in simulation and on real embodiments. We further introduce CaP-RL, showing that reinforcement learning with verifiable rewards improves success rates and transfers from simulation to the real world with minimal gap. Together, CaP-X provides a principled, open-access platform for advancing embodied coding agents.
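The multi-turn synthesize-execute-feedback loop the abstract describes can be sketched as follows. This is an illustrative toy, not the CaP-Gym API: the primitive names (`move_to`, `grasp`), the stubbed model, and the environment class are all assumptions made for the example.

```python
# Minimal sketch of a Code-as-Policy loop: an agent proposes a program over
# low-level primitives, executes it, and refines it on structured execution
# feedback. All names here are illustrative, not the CaP-Gym API.

class StubEnv:
    """Toy environment exposing two hypothetical low-level primitives."""
    def __init__(self):
        self.holding = None
        self.log = []

    def move_to(self, obj):  # perception/control primitive (stub)
        self.log.append(f"move_to({obj})")

    def grasp(self, obj):    # control primitive (stub)
        self.move_to(obj)
        self.holding = obj
        self.log.append(f"grasp({obj})")


def stub_model(task, feedback):
    """Stand-in for an LLM: emits a program as a Python string.

    The first attempt is deliberately buggy; given the execution error
    as feedback, the 'model' emits a repaired program.
    """
    if feedback and "AttributeError" in feedback:
        return "env.grasp('red_block')"   # repaired program
    return "env.grab('red_block')"        # first attempt (wrong primitive)


def cap_loop(task, env, max_turns=3):
    """Multi-turn interaction: synthesize, execute, feed errors back."""
    feedback = None
    for _ in range(max_turns):
        program = stub_model(task, feedback)
        try:
            exec(program, {"env": env})   # execute the synthesized program
            return True, env.log          # success is directly verifiable
        except Exception as e:            # structured execution feedback
            feedback = f"{type(e).__name__}: {e}"
    return False, env.log


ok, trace = cap_loop("pick up the red block", StubEnv())
```

Here the loop succeeds on the second turn: the first program fails with an `AttributeError`, and that error string is enough for the stub model to pick the correct primitive, mirroring how structured execution feedback lets agents recover even over low-level primitives.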