AI Summary
Problem: Mobile manipulators in self-driving laboratories lack robustness when grasping objects with varied surface textures.
Method: This paper proposes a vision-guided mobile manipulator system integrating perception and motion control. First, a texture-invariant 3D pose estimation method is developed, combining feature-based 2D planar pose estimation under homography constraints with depth information. Second, a DH-parameterized kinematic model is constructed, incorporating real-time object detection, depth-enhanced pose estimation, and closed-loop inverse kinematics solving.
Results: Experimental evaluation demonstrates high grasp success rates and stable manipulation across diverse textured objects in dynamic environments. The system significantly improves experimental automation and reproducibility, providing a scalable technical foundation for fully autonomous scientific experimentation platforms.
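The summary's "feature-based 2D planar pose estimation under homography constraints" can be illustrated with the standard decomposition of a planar homography into a rotation and translation. This is a minimal sketch, not the paper's implementation: it assumes known camera intrinsics `K` and a homography `H` already estimated from matched features, and recovers the planar object's pose via the relation H ∝ K·[r1 r2 t].

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover a planar object's pose (R, t) from a homography H ∝ K @ [r1 r2 t].

    H maps points on the object plane (z = 0 in the object frame) to image
    pixels; K holds the camera intrinsics. Returns the full 3x3 rotation and
    the translation of the plane's origin in the camera frame.
    """
    B = np.linalg.inv(K) @ H
    # Fix the unknown homography scale so the first rotation column is unit length.
    lam = 1.0 / np.linalg.norm(B[:, 0])
    if B[2, 2] < 0:          # enforce a pose in front of the camera (t_z > 0)
        lam = -lam
    r1 = lam * B[:, 0]
    r2 = lam * B[:, 1]
    r3 = np.cross(r1, r2)    # third column completes the right-handed frame
    t = lam * B[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Project onto SO(3) (nearest rotation via SVD) to absorb estimation noise.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

In a full pipeline the homography would come from feature matching (e.g. RANSAC over keypoint correspondences), and the recovered translation would be refined with the depth sensor's range measurements, as the summary describes.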
Abstract
Recent advances in robotics and autonomous systems have broadened the use of robots in laboratory settings, including automated synthesis, scalable reaction workflows, and collaborative tasks in self-driving laboratories (SDLs). This paper presents the comprehensive development of a mobile manipulator designed to assist human operators in such autonomous lab environments. Kinematic modeling of the manipulator is carried out based on the Denavit-Hartenberg (DH) convention, and an inverse kinematics solution is derived to enable precise and adaptive manipulation. A key focus of this research is enhancing the manipulator's ability to reliably grasp textured objects, a critical component of autonomous handling tasks. Advanced vision-based algorithms are implemented to perform real-time object detection and pose estimation, guiding the manipulator in dynamic grasping and following tasks. In this work, we integrate a vision method that combines feature-based detection with homography-driven pose estimation, leveraging depth information to represent an object's pose as a 2D planar projection within 3D space. This adaptive capability enables the system to accommodate variations in object orientation and supports robust autonomous manipulation across diverse environments. By enabling autonomous experimentation and human-robot collaboration, this work contributes to the scalability and reproducibility of next-generation chemical laboratories.
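The DH-based kinematic modeling and closed-loop inverse kinematics mentioned in the abstract can be sketched generically. The code below is an illustrative example, not the paper's model: it assumes the standard DH transform and uses damped least-squares iteration with a numerical Jacobian, demonstrated on a hypothetical three-link planar arm.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one joint under the standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(thetas, dh_rows):
    """Chain the per-joint DH transforms; dh_rows holds (d, a, alpha) per joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(thetas, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

def solve_ik(target_xyz, thetas0, dh_rows, iters=300, damping=1e-3):
    """Closed-loop IK on end-effector position via damped least squares.

    The Jacobian is estimated by finite differences, so the same loop works
    for any DH chain without hand-derived derivatives.
    """
    q = np.array(thetas0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        err = target_xyz - forward_kinematics(q, dh_rows)[:3, 3]
        if np.linalg.norm(err) < 1e-9:
            break
        J = np.zeros((3, len(q)))
        p0 = forward_kinematics(q, dh_rows)[:3, 3]
        for i in range(len(q)):
            dq = q.copy()
            dq[i] += eps
            J[:, i] = (forward_kinematics(dq, dh_rows)[:3, 3] - p0) / eps
        # Damped pseudo-inverse step keeps the update stable near singularities.
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), err)
    return q
```

The damping term trades a small amount of convergence speed for robustness when the arm passes near singular configurations, which matters for a mobile platform whose base motion continually changes the reachable workspace.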