Kinematic Analysis and Integration of Vision Algorithms for a Mobile Manipulator Employed Inside a Self-Driving Laboratory

πŸ“… 2025-10-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Mobile manipulators in self-driving laboratories exhibit insufficient robustness when grasping objects with varied textures. Method: This paper proposes a vision-guided mobile manipulator system integrating perception and motion control. First, a texture-invariant 3D pose estimation method is developed, combining feature-based 2D planar pose estimation under homography constraints with depth information. Second, a DH-parameterized kinematic model is constructed, incorporating real-time object detection, depth-enhanced pose estimation, and closed-loop inverse kinematics solving. Results: Experimental evaluation demonstrates high grasp success rates and stable manipulation across diversely textured objects in dynamic environments. The system improves experimental automation and reproducibility, providing a scalable technical foundation for fully autonomous scientific experimentation platforms.
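The summary's first step, planar pose estimation under homography constraints, can be sketched with the standard decomposition of a homography into a plane's rotation and translation. This is a minimal NumPy illustration of that general technique, not the paper's exact formulation; the function name and the camera intrinsics used below are assumptions.

```python
import numpy as np

def plane_pose_from_homography(H, K):
    """Recover rotation R and translation t of a planar object from a
    homography H (model plane -> image) and camera intrinsics K.

    Standard decomposition: K^-1 H ~ [r1 r2 t], with r3 = r1 x r2.
    (Illustrative sketch; not the paper's specific method.)
    """
    M = np.linalg.inv(K) @ H
    # Fix the homography's unknown scale so r1, r2 have ~unit norm.
    lam = 2.0 / (np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1]))
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Project onto SO(3) to correct noise-induced non-orthogonality.
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

In a full pipeline, per-pixel depth from an RGB-D sensor would then resolve the remaining scale ambiguity and lift the planar pose into metric 3D space.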

πŸ“ Abstract
Recent advances in robotics and autonomous systems have broadened the use of robots in laboratory settings, including automated synthesis, scalable reaction workflows, and collaborative tasks in self-driving laboratories (SDLs). This paper presents the comprehensive development of a mobile manipulator designed to assist human operators in such autonomous lab environments. Kinematic modeling of the manipulator is carried out based on the Denavit-Hartenberg (DH) convention, and an inverse kinematics solution is derived to enable precise and adaptive manipulation capabilities. A key focus of this research is enhancing the manipulator's ability to reliably grasp textured objects, a critical component of autonomous handling tasks. Advanced vision-based algorithms are implemented to perform real-time object detection and pose estimation, guiding the manipulator in dynamic grasping and following tasks. In this work, we integrate a vision method that combines feature-based detection with homography-driven pose estimation, leveraging depth information to represent an object's pose as a 2D planar projection within 3D space. This adaptive capability enables the system to accommodate variations in object orientation and supports robust autonomous manipulation across diverse environments. By enabling autonomous experimentation and human-robot collaboration, this work contributes to the scalability and reproducibility of next-generation chemical laboratories.
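The DH convention mentioned in the abstract builds a manipulator's forward kinematics by chaining one homogeneous transform per link, each defined by four parameters (theta, d, a, alpha). A minimal sketch of that convention, verified here on a hypothetical 2-link planar arm rather than the paper's actual robot:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link under the classic DH convention:
    rotate theta about z, translate d along z, translate a along x,
    rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain per-link DH transforms into the base-to-end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T
```

For a 2-link planar arm with link lengths l1, l2 and joint angles t1, t2, the chained result reduces to the familiar closed form x = l1 cos(t1) + l2 cos(t1 + t2), y = l1 sin(t1) + l2 sin(t1 + t2), which makes the convention easy to sanity-check.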
Problem

Research questions and friction points this paper is trying to address.

Kinematic modeling enables precise manipulation in autonomous labs
Vision algorithms detect objects and estimate poses for grasping
Integration supports adaptive manipulation across diverse laboratory environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kinematic modeling using Denavit Hartenberg convention
Vision algorithms combining feature detection with homography
Depth information for 2D planar projection in 3D space
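The closed-loop inverse kinematics solving named in the summary is commonly realized as a Jacobian-based iteration driven by task-space error. A hedged sketch using damped least squares on a hypothetical 2-link planar arm (the paper's actual solver and robot model are not specified here):

```python
import numpy as np

def ik_2link_dls(target, l1=1.0, l2=1.0, q0=(0.3, 0.3),
                 damping=0.1, iters=200):
    """Closed-loop IK for a 2-link planar arm via damped least squares:
    q <- q + J^T (J J^T + lambda^2 I)^-1 e, where e is the Cartesian
    error feeding back into each iteration. Illustrative sketch only."""
    q = np.asarray(q0, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(iters):
        c1, s1 = np.cos(q[0]), np.sin(q[0])
        c12, s12 = np.cos(q[0] + q[1]), np.sin(q[0] + q[1])
        pos = np.array([l1 * c1 + l2 * c12, l1 * s1 + l2 * s12])
        e = target - pos  # task-space error closes the loop
        J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                      [ l1 * c1 + l2 * c12,  l2 * c12]])
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return q
```

The damping term keeps the update bounded near singular configurations (e.g. a fully stretched arm), which is why damped least squares is a common choice for online, closed-loop control.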
Shifa Sulaiman
Department of Electronic Systems, Aalborg University, Fredrik Bajers Vej 7, Aalborg, 9220, North Denmark Region, Denmark
Tobias Busk Jensen
Department of Electronic Systems, Aalborg University, Fredrik Bajers Vej 7, Aalborg, 9220, North Denmark Region, Denmark
Stefan Hein Bengtson
Aalborg University
computer vision, machine learning, robotics, affordance detection, semi-autonomous control
Simon BΓΈgh
Department of Electronic Systems, Aalborg University, Fredrik Bajers Vej 7, Aalborg, 9220, North Denmark Region, Denmark