🤖 AI Summary
This study investigates how nonverbal cues—specifically gestures, body movements, and LED visual feedback—affect collaborative performance and anthropomorphic perception in human-NAO robot interaction during kitchen meal-preparation tasks. We developed a multimodal nonverbal feedback system integrating real-time pose estimation, programmable motion sequences, and LED-based state visualization, and conducted user studies combining subjective evaluations with objective behavioral measurements. Results demonstrate statistically significant improvements in mutual understanding accuracy and safety response latency (p < 0.01) when multimodal nonverbal cues are jointly deployed—constituting the first systematic empirical validation of such effects. Key contributions include: (1) empirical evidence that nonverbal cues enhance interaction predictability and task synchrony; and (2) identification of a perceptual threshold in anthropomorphism—excessive anthropomorphism fails to elicit consistent user acceptance and may undermine instrumental trust. Findings provide empirically grounded design principles and boundary constraints for context-aware nonverbal interaction in domestic service robots.
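The summary describes bundling LED state visualization with programmable motion sequences into joint multimodal cues. As a minimal illustrative sketch only (the state names, colors, and gesture labels below are hypothetical, not taken from the study's actual system), such a cue bundle might be modeled as a lookup from task state to a paired LED color and gesture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonverbalCue:
    """One multimodal feedback bundle: LED color plus a named gesture."""
    led_rgb: tuple   # eye-LED color signalling the robot's state
    gesture: str     # identifier of a pre-programmed motion sequence
    intent: str      # what the cue is meant to convey to the user

# Hypothetical mapping from task state to a jointly deployed cue bundle.
CUE_MAP = {
    "listening": NonverbalCue((0, 0, 255), "nod_slow", "attending to user"),
    "handover":  NonverbalCue((0, 255, 0), "extend_arm", "ready to transfer object"),
    "error":     NonverbalCue((255, 0, 0), "head_shake", "action failed, stand clear"),
}

def cue_for(state: str) -> NonverbalCue:
    """Return the LED + gesture bundle for a task state (falls back to 'listening')."""
    return CUE_MAP.get(state, CUE_MAP["listening"])
```

On a real NAO, the gesture identifier would be dispatched to the motion API and the RGB triple to the LED API; here the mapping is kept platform-free to show only the pairing of cues.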
📝 Abstract
Humanoid robots, particularly NAO, are gaining prominence for their potential to revolutionize human-robot collaboration, especially in domestic settings such as kitchens. Leveraging NAO's strengths, this research explores non-verbal communication's role in enhancing human-robot interaction during meal-preparation tasks. By employing gestures, body movements, and visual cues, NAO provides feedback to users, improving comprehension and safety. Our study investigates user perceptions of NAO's feedback and its anthropomorphic attributes. Findings suggest that combining multiple non-verbal cues enhances communication effectiveness, although achieving full anthropomorphic likeness remains a challenge. Insights from this research inform the design of future robotic systems for improved human-robot collaboration.