🤖 AI Summary
Current Vision-Language-Action (VLA) models suffer from limited lookahead capability, susceptibility to error accumulation, and poor adaptability to dynamic environments in long-horizon robotic tasks. To address these limitations, this paper proposes a model-based multi-step planning and reward-driven trajectory selection framework. Its core contribution is the first integration of inference-time compute scaling with Model Predictive Control (MPC) principles, endowing single-step VLA models with explicit lookahead. The authors employ a Transformer-based dynamics model, pre-trained on BridgeV2 and fine-tuned in the SIMPLER simulator to mitigate the sim-to-real discrepancy. Trajectory evaluation and selection are guided by a simulator-defined reward function. Experiments demonstrate that the method raises the average success rate across multiple long-horizon tasks from 48% to 72%, significantly improving planning robustness and execution performance under complex, dynamic conditions.
📝 Abstract
Learning robust robotic control policies remains a major challenge due to the high cost of collecting labeled data, limited generalization to unseen environments, and difficulties in planning over long horizons. While Vision-Language-Action (VLA) models offer a promising solution by grounding natural language instructions into single-step control commands, they often lack mechanisms for lookahead and struggle with compounding errors in dynamic tasks. In this project, we introduce Scaling Inference-Time COMpute for VLAs (SITCOM), a framework that augments any pretrained VLA with model-based rollouts and reward-based trajectory selection, inspired by the Model Predictive Control (MPC) algorithm. SITCOM leverages a learned dynamics model to simulate multi-step action rollouts and select the best candidate plan for real-world execution, transforming one-shot VLAs into robust long-horizon planners. We develop an efficient Transformer-based dynamics model trained on large-scale BridgeV2 data and fine-tuned on SIMPLER environments to bridge the Real2Sim gap, and score candidate rollouts using rewards from the simulator. Through comprehensive evaluation across multiple tasks and settings in the SIMPLER environment, we demonstrate that SITCOM, when combined with a good reward function, significantly improves the task completion rate from 48% to 72% using the trained dynamics model.
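The abstract's core loop — sample candidate multi-step rollouts from the VLA, simulate them with the learned dynamics model, score each with a reward, and execute only the best plan's first action — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names (`vla_policy`, `dynamics_model`, `reward_fn`) and the scalar-state setup are hypothetical stand-ins for the actual VLA, Transformer dynamics model, and SIMPLER reward.

```python
import numpy as np

def select_best_rollout(state, vla_policy, dynamics_model, reward_fn,
                        num_candidates=8, horizon=5, rng=None):
    """MPC-style best-of-N selection: simulate `num_candidates` rollouts of
    length `horizon` in the learned dynamics model and return the first
    action of the trajectory with the highest cumulative reward."""
    rng = rng or np.random.default_rng(0)
    best_return, best_first_action = -np.inf, None
    for _ in range(num_candidates):
        s, total, first_action = state, 0.0, None
        for _ in range(horizon):
            a = vla_policy(s, rng)        # sample a candidate action from the VLA
            if first_action is None:
                first_action = a          # remember the rollout's first action
            s = dynamics_model(s, a)      # predict the next state (no real execution)
            total += reward_fn(s)         # accumulate the trajectory's reward
        if total > best_return:
            best_return, best_first_action = total, first_action
    return best_first_action, best_return
```

In true MPC fashion, only the first action of the winning rollout would be executed on the robot before replanning from the observed next state, which is what limits compounding model error.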