BioMARS: A Multi-Agent Robotic System for Autonomous Biological Experiments

📅 2025-07-02
🤖 AI Summary
Current biological experimentation automation faces critical bottlenecks, including poor adaptability of large language models (LLMs) and vision-language models (VLMs), limited fault tolerance, and operational complexity. To address these challenges, this work introduces the first autonomous experimental platform integrating LLMs, VLMs, and a modular robotic system, implemented via a hierarchical multi-agent architecture comprising biologist, technician, and inspector agents. The platform enables dynamic experimental design, context-aware optimization, and real-time anomaly detection. Technically, it incorporates retrieval-augmented generation, multimodal perception, robot-oriented pseudocode generation, and a lightweight anomaly identification algorithm, supported by modular hardware and a human-in-the-loop web interface. Evaluated on retinal pigment epithelium organoid differentiation, the system achieves or exceeds human performance in cell viability, morphological integrity, and batch consistency—demonstrating robustness and generalizability in real-world laboratory settings.

📝 Abstract
Large language models (LLMs) and vision-language models (VLMs) have the potential to transform biological research by enabling autonomous experimentation. Yet, their application remains constrained by rigid protocol design, limited adaptability to dynamic lab conditions, inadequate error handling, and high operational complexity. Here we introduce BioMARS (Biological Multi-Agent Robotic System), an intelligent platform that integrates LLMs, VLMs, and modular robotics to autonomously design, plan, and execute biological experiments. BioMARS uses a hierarchical architecture: the Biologist Agent synthesizes protocols via retrieval-augmented generation; the Technician Agent translates them into executable robotic pseudo-code; and the Inspector Agent ensures procedural integrity through multimodal perception and anomaly detection. The system autonomously conducts cell passaging and culture tasks, matching or exceeding manual performance in viability, consistency, and morphological integrity. It also supports context-aware optimization, outperforming conventional strategies in differentiating retinal pigment epithelial cells. A web interface enables real-time human-AI collaboration, while a modular backend allows scalable integration with laboratory hardware. These results highlight the feasibility of generalizable, AI-driven laboratory automation and the transformative role of language-based reasoning in biological research.
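The abstract describes a three-stage hierarchy: a Biologist Agent synthesizes protocols via retrieval-augmented generation, a Technician Agent translates them into robot-oriented pseudo-code, and an Inspector Agent monitors execution for anomalies. A minimal sketch of that control flow is below; all class and method names are illustrative assumptions, not taken from the BioMARS codebase, and the retrieval and inspection logic are stand-ins for the paper's LLM/VLM components.

```python
# Hypothetical sketch of BioMARS's hierarchical agent loop.
# Names and logic are illustrative; the real system uses LLM/VLM calls.
from dataclasses import dataclass


@dataclass
class Protocol:
    steps: list[str]


@dataclass
class BiologistAgent:
    knowledge_base: dict[str, str]  # stand-in for a retrieval corpus

    def design_protocol(self, goal: str) -> Protocol:
        # Retrieval-augmented generation: fetch relevant references, then
        # (in the real system) prompt an LLM with them to draft a protocol.
        refs = [v for k, v in self.knowledge_base.items() if k in goal]
        return Protocol(steps=[f"{goal}: step using {r}" for r in refs] or [goal])


class TechnicianAgent:
    def to_pseudocode(self, protocol: Protocol) -> list[str]:
        # Translate each protocol step into robot-oriented pseudo-code.
        return [f"ROBOT.EXEC({step!r})" for step in protocol.steps]


class InspectorAgent:
    def check(self, observation: float, expected: float, tol: float = 0.1) -> bool:
        # Stand-in for multimodal anomaly detection: flag large deviations
        # between the observed and expected state of the experiment.
        return abs(observation - expected) <= tol


def run_experiment(goal: str, kb: dict[str, str]) -> list[str]:
    protocol = BiologistAgent(kb).design_protocol(goal)
    commands = TechnicianAgent().to_pseudocode(protocol)
    inspector = InspectorAgent()
    executed = []
    for cmd in commands:
        executed.append(cmd)
        if not inspector.check(observation=1.0, expected=1.0):
            break  # real system would trigger recovery or human-in-the-loop review
    return executed
```

The key design point the paper emphasizes is the separation of concerns: protocol reasoning, robot translation, and integrity checking are distinct agents, so a failure at one level can be caught and corrected without restarting the whole pipeline.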
Problem

Research questions and friction points this paper is trying to address.

Overcoming rigid protocol design in autonomous biological experiments
Enhancing adaptability to dynamic lab conditions and error handling
Reducing operational complexity in AI-driven laboratory automation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical multi-agent architecture for autonomy
Retrieval-augmented protocol generation and execution
Multimodal perception for real-time anomaly detection
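The summary mentions a "lightweight anomaly identification algorithm" driving the Inspector Agent's real-time checks. The paper does not specify the algorithm here, so the rolling z-score rule below is purely an assumed example of what a lightweight detector over image-derived metrics (e.g. confluence or viability estimates) could look like.

```python
# Assumed example of a lightweight streaming anomaly detector; the
# rolling z-score rule is an illustration, not the paper's algorithm.
from collections import deque
import statistics


class LightweightAnomalyDetector:
    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        if len(self.history) >= 5:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid div-by-zero
            is_anomaly = abs(value - mean) / stdev > self.threshold
        else:
            is_anomaly = False  # not enough history to judge yet
        self.history.append(value)
        return is_anomaly
```

A detector in this style is cheap enough to run on every frame of a perception stream, which matches the paper's emphasis on real-time fault tolerance.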
👥 Authors
Yibo Qiu, Zan Huang, Zhiyu Wang, Handi Liu, Yiling Qiao, Yifeng Hu, Shu'ang Sun, Hangke Peng, Ronald X Xu
Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, Jiangsu, China; School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
Mingzhai Sun
University of Science and Technology of China
Biomedical Engineering · deep learning · retinal imaging