Learning Dexterous Object Handover

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses dexterous object handover between two multi-finger robotic hands in human-robot collaboration scenarios. To overcome the large rotational errors and poor generalization of conventional rotation representations, the authors propose a reinforcement learning framework grounded in dual quaternions. A novel reward function explicitly encodes coupled position-orientation constraints, significantly reducing rotational distance error. By modeling multi-finger hand dynamics and training across a distribution of objects, the policy gains robustness to unseen objects and to motion disturbances from the collaborating robot. Experiments show a 94% success rate in the best case over 100 trials, with only a 13.8% performance drop when the partner robot moves during the handover, validating strong generalization and robustness. The core contribution is the application of dual quaternions to RL reward design for dexterous handover, unifying high-precision pose control with robustness to novel objects and perturbations.

📝 Abstract
Object handover is an important skill that we use daily when interacting with other humans. To deploy robots in collaborative settings, such as homes, being able to receive and hand over objects safely and efficiently becomes a crucial skill. In this work, we demonstrate the use of Reinforcement Learning (RL) for dexterous object handover between two multi-finger hands. Key to this task is a novel reward function based on dual quaternions that minimizes the rotation distance, outperforming other rotation representations such as Euler angles and rotation matrices. The robustness of the trained policy is evaluated experimentally with objects not included in the training distribution and with perturbations during the handover process. The results demonstrate that the trained policy successfully performs this task, achieving a total success rate of 94% in the best-case scenario over 100 experiments, thereby showing the robustness of our policy to novel objects. In addition, the best-case performance of the policy decreases by only 13.8% when the other robot moves during the handover, proving that our policy is also robust to this type of perturbation, which is common in real-world object handovers.
Problem

Research questions and friction points this paper is trying to address.

Develop RL for dexterous handover between multi-finger hands
Minimize rotation distance using dual quaternions reward function
Test robustness with unseen objects and handover perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning for dexterous handover
Dual quaternions reward minimizes rotation distance
Robust policy tested with novel objects, perturbations
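The central idea above is a reward built on dual quaternions, which represent rotation and translation in a single algebraic object so the pose error can be penalized jointly. A minimal sketch of what such a dual-quaternion pose distance and reward might look like is shown below; the paper's exact formulation may differ, and all function names and the double-cover handling here are illustrative assumptions:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def pose_to_dual_quat(q, t):
    """Pose (unit rotation quaternion q, translation t) -> unit dual quaternion.

    The real part encodes the rotation; the dual part 0.5 * t ⊗ q couples the
    translation to the rotation, which is what makes the pose error 'coupled'.
    """
    t_quat = np.array([0.0, t[0], t[1], t[2]])
    return q, 0.5 * quat_mul(t_quat, q)

def dual_quat_distance(pose_a, pose_b):
    """Distance between two poses via their dual quaternions.

    Accounts for the quaternion double cover (q and -q encode the same
    rotation) by flipping one dual quaternion to the nearer cover.
    """
    qr1, qd1 = pose_to_dual_quat(*pose_a)
    qr2, qd2 = pose_to_dual_quat(*pose_b)
    if np.dot(qr1, qr2) < 0.0:  # pick the closer of the two covers
        qr2, qd2 = -qr2, -qd2
    return np.linalg.norm(qr1 - qr2) + np.linalg.norm(qd1 - qd2)

def handover_reward(pose_hand, pose_target, scale=1.0):
    """Dense reward that grows (toward zero) as the hand reaches the target pose."""
    return -scale * dual_quat_distance(pose_hand, pose_target)
```

One practical reason for this representation: unlike Euler angles, the dual-quaternion distance is smooth and free of gimbal-lock discontinuities, and the double-cover flip ensures equivalent orientations (q and -q) incur zero penalty.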
Daniel Frau-Alfaro
AUROVA Lab, Department of Physics, Systems Engineering, and Signal Theory, University of Alicante, 03690 Alicante, Spain
Julio Castano-Amoros
AUROVA Lab, Department of Physics, Systems Engineering, and Signal Theory, University of Alicante, 03690 Alicante, Spain
Santiago Puente
AUROVA Lab, Department of Physics, Systems Engineering, and Signal Theory, University of Alicante, 03690 Alicante, Spain
Pablo Gil
Full Professor, University of Alicante, Spain
Robotics, Computer Vision, Manipulation, Tactile sensing, Deep learning
Roberto Calandra
LASR Lab, Technische Universität Dresden, Dresden, Germany