🤖 AI Summary
This work addresses the low sample efficiency and limited generalization of training robotic skills from scratch, focusing on the peg-in-hole insertion task. It systematically evaluates policy transfer across different robot platforms by comparing zero-shot transfer, fine-tuning, and training from scratch. The authors show that policy transfer combined with adaptation techniques improves cross-platform generalization without requiring extensive retraining. Experimental results demonstrate that fine-tuning with only a small amount of interaction data substantially outperforms zero-shot transfer, yielding higher success rates and shorter execution times while requiring far fewer training time-steps than learning from scratch. This approach offers a promising pathway toward sustainable and data-efficient robot learning.
📝 Abstract
Learning robot skills from scratch is often time-consuming, while reusing data promotes sustainability and improves sample efficiency. This study investigates policy transfer across different robotic platforms, focusing on the peg-in-hole task using reinforcement learning (RL). Policies are trained on two different robots, then transferred and evaluated under three settings: zero-shot transfer, fine-tuning, and training from scratch. Results indicate that zero-shot transfer leads to lower success rates and longer task execution times, while fine-tuning significantly improves performance with fewer training time-steps. These findings highlight that policy transfer with adaptation techniques improves sample efficiency and generalization, reducing the need for extensive retraining and supporting sustainable robotic learning.
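To make the three evaluated settings concrete, the sketch below outlines one possible way to set up the comparison: train a source policy on robot A, evaluate it zero-shot on robot B, fine-tune it on robot B with a small interaction budget, and train a from-scratch baseline on robot B. The RL library (stable-baselines3), the SAC algorithm, the environment IDs, and the time-step budgets are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the cross-platform policy-transfer comparison, assuming
# stable-baselines3 with SAC and two hypothetical peg-in-hole environments
# (env IDs and budgets are placeholders, not the authors' actual setup).
import gymnasium as gym
from stable_baselines3 import SAC

# Hypothetical peg-in-hole environments for two robot platforms.
env_robot_a = gym.make("PegInHoleRobotA-v0")   # assumed env id
env_robot_b = gym.make("PegInHoleRobotB-v0")   # assumed env id

# 1) Source policy: train from scratch on robot A.
source = SAC("MlpPolicy", env_robot_a, verbose=0)
source.learn(total_timesteps=500_000)          # assumed training budget
source.save("sac_robot_a")

# 2) Zero-shot transfer: run the robot-A policy on robot B without updates.
zero_shot = SAC.load("sac_robot_a", env=env_robot_b)

# 3) Fine-tuning: continue training the transferred policy on robot B
#    with a much smaller interaction budget than training from scratch.
fine_tuned = SAC.load("sac_robot_a", env=env_robot_b)
fine_tuned.learn(total_timesteps=50_000, reset_num_timesteps=False)

# 4) Baseline: train a policy from scratch directly on robot B.
scratch = SAC("MlpPolicy", env_robot_b, verbose=0)
scratch.learn(total_timesteps=500_000)
```

Each of the three robot-B policies (zero-shot, fine-tuned, from-scratch) would then be rolled out in the same evaluation protocol and compared on success rate and task execution time, which are the metrics the abstract reports.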