Learning Multimodal AI Algorithms for Amplifying Limited User Input into High-dimensional Control Space

📅 2025-05-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Non-invasive high-dimensional motor control remains a critical challenge for severely paralyzed individuals. Method: We propose a context-aware multimodal shared autonomy framework that maps minimal physiological signals—such as weak EMG or gaze dynamics—in real time onto the 3D continuous operational space of dexterous robotic arms. Our approach integrates deep reinforcement learning, multimodal sensing (EMG/EEG/gaze/vision), adaptive shared autonomy, and sim-to-real transfer, establishing the first zero-shot closed-loop human-in-the-loop paradigm bridging simulation and real-world deployment. Results: Evaluated on 23 subjects, the system achieves a 92.88% task success rate, with trajectory smoothness and completion time comparable to state-of-the-art invasive BCIs. It enables high-precision dynamic intent decoding and robust, scalable non-invasive high-dimensional control—the first demonstration of its kind—thereby significantly enhancing clinical applicability and commercial viability.

Technology Category

Application Category

📝 Abstract
Current invasive assistive technologies are designed to infer high-dimensional motor control signals from severely paralyzed patients. However, they face significant challenges, including public acceptance, limited longevity, and barriers to commercialization. Meanwhile, noninvasive alternatives often rely on artifact-prone signals, require lengthy user training, and struggle to deliver robust high-dimensional control for dexterous tasks. To address these issues, this study introduces a novel human-centered multimodal AI approach as intelligent compensatory mechanisms for lost motor functions that could potentially enable patients with severe paralysis to control high-dimensional assistive devices, such as dexterous robotic arms, using limited and noninvasive inputs. In contrast to the current state-of-the-art (SoTA) noninvasive approaches, our context-aware, multimodal shared-autonomy framework integrates deep reinforcement learning algorithms to blend limited low-dimensional user input with real-time environmental perception, enabling adaptive, dynamic, and intelligent interpretation of human intent for complex dexterous manipulation tasks, such as pick-and-place. The results from our ARAS (Adaptive Reinforcement learning for Amplification of limited inputs in Shared autonomy) trained with synthetic users over 50,000 computer simulation episodes demonstrated the first successful implementation of the proposed closed-loop human-in-the-loop paradigm, outperforming the SoTA shared autonomy algorithms. Following a zero-shot sim-to-real transfer, ARAS was evaluated on 23 human subjects, demonstrating high accuracy in dynamic intent detection and smooth, stable 3D trajectory control for dexterous pick-and-place tasks. ARAS user study achieved a high task success rate of 92.88%, with short completion times comparable to those of SoTA invasive assistive technologies.
Problem

Research questions and friction points this paper is trying to address.

Amplifying limited user input into high-dimensional control for paralysis patients
Overcoming challenges of noninvasive high-dimensional motor control
Enhancing dexterous robotic arm control with multimodal AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal AI integrates user input and environment perception
Deep reinforcement learning enables adaptive intent interpretation
Closed-loop human-in-the-loop paradigm for dexterous control
🔎 Similar Papers
No similar papers found.
A
Ali Rabiee
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
Sima Ghafoori
Sima Ghafoori
University of Rhode Island
NeuroroboticsSignal ProcessingMachine/Deep Learning
MH Farhadi
MH Farhadi
PhD Student, University of Rhode Island
RoboticsReinforcement LearningBiomedical Signal ProcessingHuman Computer Interaction
R
Robert Beyer
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA
Xiangyu Bai
Xiangyu Bai
PhD Candidate, Computer Engineering at Northeastern university
Autonomy SimulationComputer VisionVideo DiffusionGenerative Models
D
David J Lin
Department of Neurology, Harvard Medical School, Boston, MA, USA
Sarah Ostadabbas
Sarah Ostadabbas
Electrical & Computer Engineering, Northeastern University
Computer VisionMachine LearningArtificial IntelligenceAugmented Cognition with Medical
R
Reza Abiri
Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA