🤖 AI Summary
Rigid-body assumptions in existing haptic grasping methods hinder manipulation of fragile or deformable objects, while single-arm systems cannot effectively coordinate large, heavy objects. Method: This paper proposes a bimanual robotic cooperative grasping framework integrating multi-agent model predictive control (MPC) with tactile-driven state recognition. Real-time object stiffness and surface texture are sensed via Gelsight Mini sensors; a deep learning model enables online grasp-state classification, and a closed-loop force-control policy supports dynamic inter-arm force allocation and real-time strategy adaptation. Results: Experiments demonstrate significantly higher grasping success rates across objects of varying sizes and stiffnesses compared to conventional PD and single-agent MPC baselines. To our knowledge, this is the first approach achieving tactile-feedback-driven adaptive bimanual cooperative grasping, overcoming both physical and algorithmic limitations inherent in single-arm haptic control.
📝 Abstract
Grasping is a core task in robotics with various applications. However, most current implementations are primarily designed for rigid items, and their performance drops considerably when handling fragile or deformable materials that require real-time feedback. Meanwhile, tactile-reactive grasping focuses on a single agent, which limits their ability to grasp and manipulate large, heavy objects. To overcome this, we propose a learning-based, tactile-reactive multi-agent Model Predictive Controller (MPC) for grasping a wide range of objects with different softness and shapes, beyond the capabilities of preexisting single-agent implementations. Our system uses two Gelsight Mini tactile sensors [1] to extract real-time information on object texture and stiffness. This rich tactile feedback is used to estimate contact dynamics and object compliance in real time, enabling the system to adapt its control policy to diverse object geometries and stiffness profiles. The learned controller operates in a closed loop, leveraging tactile encoding to predict grasp stability and adjust force and position accordingly. Our key technical contributions include a multi-agent MPC formulation trained on real contact interactions, a tactile-data driven method for inferring grasping states, and a coordination strategy that enables collaborative control. By combining tactile sensing and a learning-based multi-agent MPC, our method offers a robust, intelligent solution for collaborative grasping in complex environments, significantly advancing the capabilities of multi-agent systems. Our approach is validated through extensive experiments against independent PD and MPC baselines. Our pipeline outperforms the baselines regarding success rates in achieving and maintaining stable grasps across objects of varying sizes and stiffness.