AI Summary
This work addresses the challenge of enabling safe and efficient collaboration between redundant robots and humans in unknown environments by proposing a decoupled control framework that separates task-space and null-space dynamics. Without requiring prior calibration, the approach employs adaptive visual servoing to ensure high accuracy in primary task execution while leveraging null-space motion to respond flexibly and safely to human interventions. Theoretical analysis based on Lyapunov methods establishes the stability of the closed-loop system and proves convergence of both the task-space tracking error and the null-space damping model. Experimental validation with an augmented reality-guided robotic manipulator demonstrates that the proposed method effectively supports human-robot collaboration without compromising the performance of the primary task.
Abstract
Human-robot collaboration aims to extend human ability through cooperation with robots. This technology is currently helping people with physical disabilities, has transformed manufacturing processes, has improved surgical performance, and will likely reshape everyday life in the future. Enhancing the performance of both sides, such that human-robot collaboration outperforms either a robot or a human alone, remains an open problem. For safer and more effective collaboration, this paper proposes a new control scheme for redundant robots, consisting of an adaptive vision-based control term in task space and an interactive control term in null space. This formulation allows the robot to autonomously carry out tasks in an unknown environment without prior calibration, while also interacting with humans to handle unforeseen changes (e.g., potential collisions, temporary needs) under the redundant configuration. The decoupling between task space and null space enables safe and effective collaboration without affecting the main task of the robot end-effector. The stability of the closed-loop system is rigorously proved with Lyapunov methods, and both the convergence of the position error in task space and that of the damping model in null space are guaranteed. Experimental results on a robot manipulator guided with augmented reality (AR) technology are presented to illustrate the performance of the control scheme.
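The key mechanism behind the decoupling described above can be illustrated with a standard velocity-level redundancy-resolution sketch. This is not the paper's controller (which is adaptive and vision-based); it is a minimal, generic example assuming a Jacobian `J`, a desired task-space velocity `xdot_des`, and a secondary joint velocity `qdot_null` (e.g., induced by human interaction). The null-space projector guarantees that the secondary motion does not disturb the end-effector task.

```python
import numpy as np

def decoupled_joint_velocity(J, xdot_des, qdot_null):
    """Combine a task-space command with a null-space motion.

    J         : (m, n) end-effector Jacobian of a redundant arm (n > m)
    xdot_des  : (m,)   desired end-effector velocity (primary task)
    qdot_null : (n,)   secondary joint velocity (e.g., human interaction)
    """
    J_pinv = np.linalg.pinv(J)                 # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J        # null-space projector of J
    # Task-space term + null-space term; J @ (N @ qdot_null) = 0,
    # so the secondary motion cannot affect the primary task.
    return J_pinv @ xdot_des + N @ qdot_null

# Toy example: 3 joints, 2-D task (one degree of redundancy).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])
xdot_des = np.array([0.1, -0.2])
qdot = decoupled_joint_velocity(J, xdot_des, np.array([0.5, 0.5, 0.5]))
# Since J has full row rank here, J @ qdot recovers xdot_des exactly:
# the human-driven null-space motion leaves the end-effector task intact.
```

The paper's contribution layers adaptive visual servoing (no prior calibration) onto the task-space term and an interactive damping model onto the null-space term, with Lyapunov-based convergence guarantees for both.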