🤖 AI Summary
This work addresses the challenge of generating empathetic, semantically coherent, and pedagogically aligned co-speech gestures for humanoid robots in educational settings. The authors propose a Reasoning-Guided Vision–Language–Motion Diffusion framework (RG-VLMD), which integrates multimodal affect estimation, pedagogical intent reasoning, and behavior-conditioned motion synthesis. The approach introduces an emotion-driven pedagogical behavior categorization strategy and a reasoning-guided diffusion generation mechanism. A gated mixture-of-experts model is employed for affect prediction, complemented by auxiliary action-group supervision to enable precise gesture synthesis. The resulting gesture sequences exhibit clear structure and physical plausibility, support real-time execution on the NAO robot, and demonstrate significant improvements over baseline models in structural coherence, discriminability, and pedagogical expressiveness, as validated through motion statistics and distance-based analyses.
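To make the affect-estimation stage concrete, here is a minimal sketch of a gated mixture-of-experts regressor for valence/arousal prediction, assuming pre-extracted per-clip embeddings for each modality. All module names, layer sizes, and embedding dimensions are illustrative assumptions, not the paper's actual architecture or API.

```python
import torch
import torch.nn as nn

class GatedMoEAffect(nn.Module):
    """Illustrative gated mixture-of-experts affect predictor (hypothetical dims)."""

    def __init__(self, text_dim=768, vis_dim=512, aud_dim=128, hidden=256):
        super().__init__()
        # One expert per modality, each projecting into a shared latent space.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            for d in (text_dim, vis_dim, aud_dim)
        ])
        # The gate assigns a softmax weight to each expert from the concatenated inputs.
        self.gate = nn.Sequential(
            nn.Linear(text_dim + vis_dim + aud_dim, len(self.experts)),
            nn.Softmax(dim=-1),
        )
        # Regression head: 2 outputs = (valence, arousal).
        self.head = nn.Linear(hidden, 2)

    def forward(self, text, vis, aud):
        feats = (text, vis, aud)                                  # each (B, dim)
        weights = self.gate(torch.cat(feats, dim=-1))             # (B, 3)
        expert_out = torch.stack(
            [e(f) for e, f in zip(self.experts, feats)], dim=1)   # (B, 3, hidden)
        fused = (weights.unsqueeze(-1) * expert_out).sum(dim=1)   # (B, hidden)
        return torch.tanh(self.head(fused))                       # V/A in [-1, 1]
```

The gating keeps the fusion interpretable: the softmax weights indicate which modality dominates a given prediction, which is one plausible way a system like this could expose its affect reasoning downstream.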
📝 Abstract
This article proposes a reasoning-guided vision-language-motion diffusion framework (RG-VLMD) for generating instruction-aware co-speech gestures for humanoid robots in educational scenarios. The system integrates multimodal affect estimation, pedagogical reasoning, and teaching-act-conditioned motion synthesis to enable adaptive and semantically consistent robot behavior. A gated mixture-of-experts model predicts valence and arousal from textual, visual, and acoustic features, which are then mapped to discrete teaching-act categories through an affect-driven policy. These signals condition a diffusion-based motion generator with clip-level intent and frame-level instructional schedules via additive latent restriction with auxiliary action-group supervision. Compared to a baseline diffusion model, our proposed method produces more structured and distinctive motion patterns, as verified by motion statistics and pairwise distance analysis. Generated motion sequences remain physically plausible and can be retargeted to a NAO robot for real-time execution. The results show that reasoning-guided instructional conditioning improves gesture controllability and pedagogical expressiveness in educational human-robot interaction.
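The sketch below illustrates the conditioning path the abstract describes: an affect-driven policy mapping (valence, arousal) to a discrete teaching act, and an additive injection of the resulting intent embedding into the denoiser's latents with an auxiliary action-group head. The quadrant-based policy, the teaching-act labels, and the latent dimension are all assumptions for illustration; the paper's actual category set, thresholds, and form of "additive latent restriction" may differ.

```python
import torch
import torch.nn as nn

# Hypothetical teaching-act labels, one per valence/arousal quadrant.
TEACHING_ACTS = ["encourage", "explain", "calm", "alert"]

def affect_to_act(valence: torch.Tensor, arousal: torch.Tensor) -> torch.Tensor:
    """Map (V, A) in [-1, 1]^2 to a discrete teaching-act index by quadrant."""
    return (valence < 0).long() * 2 + (arousal < 0).long()  # (B,) in {0..3}

class IntentConditioner(nn.Module):
    """Adds a clip-level intent embedding to the denoiser's latent sequence."""

    def __init__(self, num_acts=len(TEACHING_ACTS), latent_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_acts, latent_dim)
        # Auxiliary head predicting the action group back from the latents,
        # standing in for the auxiliary action-group supervision.
        self.aux_head = nn.Linear(latent_dim, num_acts)

    def forward(self, z, act_idx):
        # z: (B, T, D) latent motion sequence inside the diffusion denoiser.
        z = z + self.embed(act_idx).unsqueeze(1)    # additive latent conditioning
        aux_logits = self.aux_head(z.mean(dim=1))   # pooled auxiliary prediction
        return z, aux_logits
```

During training, `aux_logits` would be penalized against `act_idx` with a cross-entropy term alongside the diffusion loss, and a frame-level instructional schedule could enter the same way as a per-frame additive embedding rather than a single clip-level one.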