🤖 AI Summary
Children with autism spectrum disorder (ASD) commonly exhibit visuomotor integration deficits during mixed-reality dance/movement therapy (MR-DMT), resulting in gaze–action decoupling and impaired imitation learning. To address this, we propose a two-tiered visual guidance system: a perception tier that directs gaze fixation, and a transformation tier that explicitly models the "gaze→action" mapping. Integrated with eye tracking, real-time MR rendering, and dynamic visual feedback, the system forms a closed-loop intervention framework. Experimental results demonstrate significantly stronger gaze–performance coupling (p < 0.01), improved imitation accuracy, and greater learning retention in children with ASD. The core contribution is the first application of a perception–transformation dual-tier guidance mechanism to MR-DMT, establishing a quantifiable, scalable, mechanism-driven paradigm for personalized neurodevelopmental intervention.
📝 Abstract
Autism Spectrum Disorder (ASD) is marked by action-imitation deficits that stem from visuomotor integration impairments, posing challenges for imitation-based interventions such as dance/movement therapy in mixed reality (MR-DMT). Previous gaze-guidance interventions for ASD have mainly optimized gaze in isolation, neglecting the crucial "gaze–performance link". This study investigates how to strengthen this link in MR-DMT for children with ASD. We first experimentally confirmed the weak link: longer gaze durations did not translate into better performance. We then proposed and validated a novel dual-tier visual guidance system that operates at both the perception and transformation tiers: it not only directs attention to task-relevant areas but also explicitly scaffolds the translation from gaze perception to performance execution. Our results demonstrate its effectiveness in strengthening the gaze–performance link, laying a foundation for more precisely tailored and effective MR-DMT interventions for ASD.
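The "gaze–performance link" above can be operationalized as, for example, a correlation between per-trial gaze duration on the model and an imitation-accuracy score; a near-zero coefficient would reflect the weak coupling the study reports. The sketch below is a minimal illustration under that assumption; the function name and the sample data are hypothetical, not taken from the study:

```python
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-trial data: gaze duration on the demonstrated
# movement (seconds) and imitation-accuracy score (0-1). If longer
# looking does not predict better imitation, r stays near zero --
# the "weak link" motivating the dual-tier guidance.
gaze_duration = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
imitation_score = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]

r = pearson_r(gaze_duration, imitation_score)
print(f"gaze-performance coupling r = {r:.2f}")
```

In practice a study would also report a significance test on r (cf. the p < 0.01 coupling result in the summary), e.g. via `scipy.stats.pearsonr`, rather than the bare coefficient.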