HannesImitation: Grasping with the Hannes Prosthetic Hand via Imitation Learning

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prosthetic hands exhibit limited dexterity and adaptability in unstructured environments. Method: This paper introduces HannesImitationPolicy, an imitation learning-based control framework for the Hannes prosthetic hand. The authors build HannesImitationDataset, a vision-based demonstration dataset covering three realistic scenarios (table, shelf, and human-to-prosthesis handover grasping), and train a single diffusion-based end-to-end policy that maps monocular visual input directly to wrist orientation and hand closure commands. Contribution/Results: The approach aims to reduce user cognitive load and increase operational autonomy. Experiments on the physical Hannes prosthesis demonstrate successful grasps across diverse objects and conditions, higher grasp success rates than a segmentation-based visual servoing controller in unstructured scenarios, and cross-scenario generalization.
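
To make the "monocular image in, wrist and hand commands out" idea concrete, the sketch below shows a minimal diffusion-policy-style action head in plain PyTorch: a noise-prediction network conditioned on an image feature, and a DDPM-style reverse-diffusion loop that samples a short chunk of wrist-orientation and hand-closure commands from Gaussian noise. All names, dimensions, and network choices here are illustrative assumptions, not the architecture used in the paper.

```python
# Hedged sketch of a diffusion-policy action head (assumptions: 2 action DoFs,
# 8-step action chunk, toy MLP encoder); not the paper's actual model.
import torch
import torch.nn as nn

ACTION_DIM = 2        # assumed: wrist orientation + hand closure
HORIZON = 8           # assumed action-chunk length
NUM_STEPS = 50        # number of diffusion steps
IMG_SHAPE = (3, 96, 96)

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action chunk, conditioned on the image."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a visual encoder
            nn.Flatten(), nn.Linear(IMG_SHAPE[0] * IMG_SHAPE[1] * IMG_SHAPE[2], feat_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(feat_dim + HORIZON * ACTION_DIM + 1, 256), nn.ReLU(),
            nn.Linear(256, HORIZON * ACTION_DIM))

    def forward(self, image, noisy_actions, t):
        feat = self.backbone(image)
        t_emb = t.float().view(-1, 1) / NUM_STEPS   # crude timestep embedding
        x = torch.cat([feat, noisy_actions.flatten(1), t_emb], dim=-1)
        return self.head(x)

@torch.no_grad()
def sample_actions(model, image):
    """DDPM-style reverse diffusion: denoise an action chunk from pure noise."""
    betas = torch.linspace(1e-4, 0.02, NUM_STEPS)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, HORIZON * ACTION_DIM)        # start from Gaussian noise
    for t in reversed(range(NUM_STEPS)):
        eps = model(image, x, torch.tensor([t]))
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x.view(HORIZON, ACTION_DIM)              # per-step [wrist_orientation, hand_closure]

model = NoisePredictor()
image = torch.rand(1, *IMG_SHAPE)                   # monocular RGB observation
actions = sample_actions(model, image)
```

In a real deployment the encoder would be a pretrained vision backbone and the denoiser a temporal network trained on the demonstration data; the loop structure above is only meant to show how a conditioned diffusion model can emit a short horizon of continuous prosthesis commands.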

📝 Abstract
Recent advancements in control of prosthetic hands have focused on increasing autonomy through the use of cameras and other sensory inputs. These systems aim to reduce the cognitive load on the user by automatically controlling certain degrees of freedom. In robotics, imitation learning has emerged as a promising approach for learning grasping and complex manipulation tasks while simplifying data collection. Its application to the control of prosthetic hands remains, however, largely unexplored. Bridging this gap could enhance dexterity restoration and enable prosthetic devices to operate in more unconstrained scenarios, where tasks are learned from demonstrations rather than relying on manually annotated sequences. To this end, we present HannesImitationPolicy, an imitation learning-based method to control the Hannes prosthetic hand, enabling object grasping in unstructured environments. Moreover, we introduce the HannesImitationDataset comprising grasping demonstrations in table, shelf, and human-to-prosthesis handover scenarios. We leverage such data to train a single diffusion policy and deploy it on the prosthetic hand to predict the wrist orientation and hand closure for grasping. Experimental evaluation demonstrates successful grasps across diverse objects and conditions. Finally, we show that the policy outperforms a segmentation-based visual servo controller in unstructured scenarios. Additional material is provided on our project page: https://hsp-iit.github.io/HannesImitation
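
As a rough illustration of the kind of data such a policy is trained on, the sketch below shows one plausible way a grasping demonstration episode could be organized. The field names (scenario, wrist_orientation, hand_closure, and so on) are assumptions made for illustration; they are not the released HannesImitationDataset schema.

```python
# Hypothetical layout of a single grasping demonstration; field names are
# illustrative assumptions, not the actual HannesImitationDataset format.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class DemonstrationStep:
    image: np.ndarray          # monocular RGB frame, e.g. (H, W, 3) uint8
    wrist_orientation: float   # commanded wrist orientation (e.g. radians)
    hand_closure: float        # normalized hand closure in [0, 1]

@dataclass
class DemonstrationEpisode:
    scenario: str                                  # "table", "shelf", or "handover"
    object_name: str
    steps: List[DemonstrationStep] = field(default_factory=list)

# Example: build a short synthetic episode
episode = DemonstrationEpisode(scenario="table", object_name="mug")
for t in range(5):
    episode.steps.append(DemonstrationStep(
        image=np.zeros((96, 96, 3), dtype=np.uint8),
        wrist_orientation=0.1 * t,
        hand_closure=min(1.0, 0.25 * t),
    ))
```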
Problem

Research questions and friction points this paper is trying to address.

Increase the autonomy of prosthetic hand control to reduce user cognitive load
Restore dexterity in unstructured, unconstrained environments
Learn grasping from demonstrations rather than manually annotated sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Imitation learning-based control of the Hannes prosthetic hand for grasping (HannesImitationPolicy)
A single diffusion policy predicts wrist orientation and hand closure from monocular vision
HannesImitationDataset with demonstrations in table, shelf, and human-to-prosthesis handover scenarios