🤖 AI Summary
Current multi-robot shared control relies heavily on leader-follower architectures and high-workload teleoperation, limiting robot autonomy and increasing operator cognitive load. To address this, we propose a hierarchical shared control framework centered on a Human-Influenced Guiding Vector Field (HI-GVF), replacing rigid leader-follower structures with genuinely collaborative human-robot decision-making. The approach introduces an *intention field* model that fuses multi-source human intent signals, accelerating intent propagation across the robot team. Stability is established through Lyapunov-based analysis, and safety is enforced with Safety Barrier Certificates (SBCs), providing collision avoidance and robust coordination under uncertainty. The framework supports heterogeneous multimodal interfaces, including brain-computer interfaces (BCIs), electromyography (EMG) wristbands, and eye-tracking, and shows clear gains in task completion efficiency, real-time responsiveness, and human-robot collaboration performance in both fire-rescue simulations and physical experiments. The results support the framework's generality and engineering feasibility.
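To make the vector-field idea concrete, below is a minimal Python sketch of a 2D guiding vector field for a circular path, with a placeholder `human_bias` term standing in for the human-influence component of HI-GVF. The construction (a tangential term plus a level-set convergence term) follows the standard GVF path-following literature; the function name, gains, and the way human intent is injected are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gvf_velocity(p, k=1.0, radius=2.0, human_bias=None):
    """Minimal 2D guiding vector field for a circular path.

    The path is the level set phi(p) = ||p||^2 - radius^2 = 0.
    The field combines a tangential term (follow the path) with a
    convergence term (shrink the level-set error), as in standard
    GVF path following. `human_bias` is a hypothetical stand-in for
    the human-influence term of HI-GVF, not the paper's model.
    """
    phi = p[0] ** 2 + p[1] ** 2 - radius ** 2   # level-set error e(p)
    grad = 2.0 * p                              # gradient of phi (normal)
    E = np.array([[0.0, 1.0], [-1.0, 0.0]])     # 90-degree rotation matrix
    v = E @ grad - k * phi * grad               # tangent - k * e * normal
    if human_bias is not None:
        v = v + human_bias                      # inject human-intent bias
    return v / (np.linalg.norm(v) + 1e-9)       # unit-speed direction

# Example: a robot off the path is steered back toward it while circulating
print(gvf_velocity(np.array([3.0, 0.0])))
```

In the actual framework, the human term would be derived from interface signals (BCI, EMG, eye-tracking) rather than passed in directly; this sketch only shows where such a bias enters the field.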
📝 Abstract
Human-multi-robot shared control leverages human decision-making and robot autonomy to enhance human-robot collaboration. While widely studied, existing systems often adopt a leader-follower model, which limits robot autonomy to some extent. Moreover, the human is typically required to participate directly in the motion control of the robots through teleoperation, which places a significant burden on the operator. To alleviate these two issues, we propose a layered shared-control framework based on human-influenced guiding vector fields (HI-GVF) for human-robot collaboration. The HI-GVF steers the multi-robot system along a desired path specified by the human. An intention field is then designed to merge human and robot intentions, accelerating the propagation of the human intention within the multi-robot system. We further provide a stability analysis of the proposed model and use collision avoidance based on safety barrier certificates to fine-tune the velocities. Finally, taking a firefighting task as an example scenario, we conduct simulations and physical experiments with multiple human-robot interfaces (brain-computer interface, myoelectric wristband, eye-tracking); the results demonstrate that our approach improves the effectiveness and performance of the task.
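As a rough illustration of the "fine-tune the velocity" step, here is a minimal safety-barrier-certificate filter for one robot pair under single-integrator dynamics. With a single pairwise constraint, the minimally invasive QP reduces to a closed-form half-space projection; the cubic class-K term follows the common SBC formulation in the literature, and all names and parameter values below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sbc_filter(p_i, p_j, v_nom, d_safe=0.5, gamma=1.0):
    """Minimally invasive velocity correction via a safety barrier
    certificate for one robot pair (single-integrator dynamics).

    Barrier: h = ||p_i - p_j||^2 - d_safe^2 >= 0.
    Treating robot j as static for simplicity, safety is maintained by
    enforcing dh/dt >= -gamma * h^3, i.e. -2 (p_i - p_j)^T v_i <= gamma * h^3.
    With one half-space constraint, the QP  min ||v - v_nom||^2  has the
    closed-form projection used below. Parameter values are illustrative.
    """
    dp = p_i - p_j
    h = dp @ dp - d_safe ** 2
    a = -2.0 * dp                       # constraint in form a^T v <= b
    b = gamma * h ** 3
    if a @ v_nom <= b:                  # nominal velocity is already safe
        return v_nom
    return v_nom - ((a @ v_nom - b) / (a @ a)) * a   # project onto safe set

# Example: a robot heading straight at a neighbor is slowed before impact
print(sbc_filter(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                 v_nom=np.array([1.0, 0.0])))
```

The key property, matching the abstract's description, is that the nominal HI-GVF velocity is modified only when the safety constraint would otherwise be violated, so the barrier filter fine-tunes rather than overrides the shared-control command.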