TGRPO: Fine-tuning Vision-Language-Action Model via Trajectory-wise Group Relative Policy Optimization

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address a key limitation of Vision-Language-Action (VLA) models, namely their reliance on static trajectory datasets and their inability to adapt to novel environments through online interactive feedback, this paper proposes a closed-loop, trajectory-level reinforcement learning framework. Methodologically, it introduces Trajectory-wise Group Relative Policy Optimization (TGRPO), a trajectory-granular extension of GRPO that jointly leverages step-wise immediate rewards and trajectory-level success signals to enable more accurate advantage estimation and stable online training. The framework combines vision-language-action joint modeling with dynamic advantage attribution, enabling online sampling and optimization of complete manipulation trajectories. Evaluated on the LIBERO-Object benchmark of ten robotic manipulation tasks, the approach significantly outperforms supervised fine-tuning (SFT) and multiple RL baselines, achieving a 12.7% absolute improvement in task completion rate while also improving policy robustness and cross-task generalization.
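The fused advantage signal described above is not fully specified in this summary, so the following minimal Python sketch only illustrates the idea. It assumes a hypothetical `tgrpo_advantages` helper, a mixing weight `alpha`, reward-to-go as the step-level term, and GRPO-style group normalization of the trajectory-level success signal; none of these specifics are confirmed by the paper.

```python
import numpy as np

def tgrpo_advantages(step_rewards, successes, alpha=0.5):
    """Hypothetical TGRPO-style fused advantages for one sampled group.

    step_rewards : list of 1-D arrays, per-step immediate rewards for each
                   trajectory in the group.
    successes    : array of trajectory-level success signals (e.g. 0/1).
    alpha        : assumed weight balancing step- and trajectory-level terms.
    Returns a list of per-step advantage arrays, one per trajectory.
    """
    successes = np.asarray(successes, dtype=np.float64)
    # GRPO-style group-relative baseline: normalize each trajectory's
    # outcome against the statistics of its sampled group.
    traj_adv = (successes - successes.mean()) / (successes.std() + 1e-8)

    fused = []
    for i, r in enumerate(step_rewards):
        r = np.asarray(r, dtype=np.float64)
        rtg = np.cumsum(r[::-1])[::-1]          # reward-to-go at each step
        step_adv = (rtg - rtg.mean()) / (rtg.std() + 1e-8)
        # Broadcast the trajectory-level signal over all steps and fuse.
        fused.append(alpha * step_adv + (1.0 - alpha) * traj_adv[i])
    return fused
```

With a group of, say, four sampled trajectories of which two succeed, every step of a successful trajectory receives a positive group-relative offset, while the step-level term still separates early from late actions within each trajectory.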

📝 Abstract
Recent advances in Vision-Language-Action (VLA) models have demonstrated strong generalization capabilities across diverse scenes, tasks, and robotic platforms when pretrained on large-scale datasets. However, these models still require task-specific fine-tuning in novel environments, a process that relies almost exclusively on supervised fine-tuning (SFT) with static trajectory datasets. Such approaches neither allow the robot to interact with the environment nor leverage feedback from live execution, and their success depends critically on the size and quality of the collected trajectories. Reinforcement learning (RL) offers a promising alternative by enabling closed-loop interaction and aligning learned policies directly with task objectives. In this work, we draw inspiration from GRPO and propose the Trajectory-wise Group Relative Policy Optimization (TGRPO) method. By fusing step-level and trajectory-level advantage signals, TGRPO improves GRPO's group-level advantage estimation, making the algorithm better suited to online reinforcement learning of VLA models. Experimental results on ten manipulation tasks from the LIBERO-Object benchmark demonstrate that TGRPO consistently outperforms various baseline methods, producing more robust and efficient policies across the tested scenarios. Our source code is available at: https://github.com/hahans/TGRPO
Problem

Research questions and friction points this paper is trying to address.

Improving VLA model fine-tuning via trajectory-wise RL optimization
Enhancing policy robustness in diverse robotic manipulation tasks
Overcoming limitations of static dataset supervised fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning VLA models via the proposed TGRPO method
Fuses step-level and trajectory-level advantage signals
Improves GRPO's group-level advantage estimation (see the policy-update sketch below)
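How these fused advantages drive the policy update is likewise not spelled out in this summary; a plausible reading is a GRPO/PPO-style clipped surrogate applied per action step. The PyTorch sketch below assumes hypothetical per-step log-probabilities under the current and sampling policies, and omits the KL regularizer that GRPO formulations typically add.

```python
import torch

def tgrpo_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate loss over the steps of a sampled trajectory group,
    using fused advantages like those sketched above. All tensors are
    flattened over trajectories and steps.
    """
    ratio = torch.exp(logp_new - logp_old)        # per-step importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Gradient ascent on the surrogate = descent on its negation.
    return -torch.min(unclipped, clipped).mean()
```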
Zengjue Chen
School of Artificial Intelligence, Jilin University

Runliang Niu
Jilin University, China
Natural language processing · Interpretability

He Kong
School of Artificial Intelligence, Jilin University

Qi Wang
School of Artificial Intelligence, Jilin University