Testing Human-Hand Segmentation on In-Distribution and Out-of-Distribution Data in Human-Robot Interactions Using a Deep Ensemble Model

๐Ÿ“… 2025-01-13
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
Problem: Hand-segmentation models lack robustness under out-of-distribution (OOD) conditions in human-robot collaboration, e.g., gloved hands, finger-crossing gestures, motion blur, and varying numbers of hands. Method: A dual-perspective (egocentric + static camera) RGB data acquisition protocol is combined with a deep ensemble of UNet and RefineNet base learners; predictive entropy is used to quantify segmentation uncertainty, covering both epistemic and aleatoric sources. Contribution/Results: The work evaluates pre-trained models under a unified ID/OOD protocol on a purpose-built industrial dataset featuring simple and cluttered backgrounds with tools, 0-4 hands, motion blur, and glove wear. Models trained on industrial datasets significantly outperform those trained on non-industrial datasets in OOD scenarios, underscoring the role of context-specific training in robust hand perception for real-world industrial settings.

๐Ÿ“ Abstract
Reliable detection and segmentation of human hands are critical for enhancing safety and facilitating advanced interactions in human-robot collaboration. Current research predominantly evaluates hand segmentation under in-distribution (ID) data, which reflects the training data of deep learning (DL) models. However, this approach fails to address out-of-distribution (OOD) scenarios that often arise in real-world human-robot interactions. In this study, we present a novel approach by evaluating the performance of pre-trained DL models under both ID data and more challenging OOD scenarios. To mimic realistic industrial scenarios, we designed a diverse dataset featuring simple and cluttered backgrounds with industrial tools, varying numbers of hands (0 to 4), and hands with and without gloves. For OOD scenarios, we incorporated unique and rare conditions such as finger-crossing gestures and motion blur from fast-moving hands, addressing both epistemic and aleatoric uncertainties. To ensure multiple points of view (PoVs), we utilized both egocentric cameras, mounted on the operator's head, and static cameras to capture RGB images of human-robot interactions. This approach allowed us to account for multiple camera perspectives while also evaluating the performance of models trained on existing egocentric datasets as well as static-camera datasets. For segmentation, we used a deep ensemble model composed of UNet and RefineNet as base learners. Performance evaluation was conducted using segmentation metrics and uncertainty quantification via predictive entropy. Results revealed that models trained on industrial datasets outperformed those trained on non-industrial datasets, highlighting the importance of context-specific training. Although all models struggled with OOD scenarios, those trained on industrial datasets demonstrated significantly better generalization.
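The abstract's uncertainty-quantification step, predictive entropy over a deep ensemble, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each ensemble member (e.g., a UNet and a RefineNet) emits per-pixel softmax probabilities of shape `(H, W, C)`, averages them, and computes the entropy of the averaged distribution. All array shapes and variable names here are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(member_probs):
    """Per-pixel predictive entropy of a deep ensemble.

    member_probs: array of shape (M, H, W, C) holding the softmax
    outputs of the M ensemble members (illustrative shape; the paper's
    ensemble uses UNet and RefineNet as base learners).
    Returns an (H, W) map: high values mark uncertain pixels.
    """
    mean_probs = member_probs.mean(axis=0)  # (H, W, C) ensemble average
    eps = 1e-12                             # guard against log(0)
    return -np.sum(mean_probs * np.log(mean_probs + eps), axis=-1)

# Toy example: 2 members, a 2x2 image, binary hand/background classes.
p1 = np.array([[[0.9, 0.1], [0.5, 0.5]],
               [[0.8, 0.2], [0.1, 0.9]]])
p2 = np.array([[[0.9, 0.1], [0.5, 0.5]],
               [[0.2, 0.8], [0.1, 0.9]]])
H = predictive_entropy(np.stack([p1, p2]))
# Where the members disagree (e.g. [0.8, 0.2] vs [0.2, 0.8]) the averaged
# distribution is uniform and the entropy approaches log(2); pixels where
# both members are confident and agree get low entropy.
```

Disagreement between members contributes epistemic uncertainty, while a single member's own soft (non-peaked) output reflects aleatoric uncertainty; predictive entropy of the ensemble mean captures both, which matches the abstract's framing of OOD conditions in terms of the two uncertainty types.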
Problem

Research questions and friction points this paper is trying to address.

Human-Robot Interaction
Accuracy Improvement
Out-of-Distribution (OOD)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-Robot Interaction
Uncertainty Quantification
Factory-Specific Training