Safe-ROS: An Architecture for Autonomous Robots in Safety-Critical Domains

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In safety-critical deployments of autonomous robots, reconciling operational effectiveness with rigorous safety compliance remains a fundamental challenge. Method: This paper proposes Safe-ROS, an architecture that pairs a ROS-based intelligent control system with an independent Safety System composed of formally verifiable Safety Instrumented Functions (SIFs). A SIF is derived from a safety requirement, implemented as a cognitive agent, formally verified against that requirement, and integrated into the autonomous system. Contribution/Results: The authors demonstrate a collision-avoidance SIF on an AgileX Scout Mini performing autonomous inspection in a nuclear environment, verifying both the agent and its integration, and validating the full deployment in Gazebo simulation and lab testing. The evaluation, framed by the UK nuclear sector's safety and regulatory demands, shows that Safe-ROS provides verifiable safety oversight and can be extended to additional requirements and applications.

📝 Abstract
Deploying autonomous robots in safety-critical domains requires architectures that ensure operational effectiveness and safety compliance. In this paper, we contribute the Safe-ROS architecture for developing reliable and verifiable autonomous robots in such domains. It features two distinct subsystems: (1) an intelligent control system that is responsible for normal/routine operations, and (2) a Safety System consisting of Safety Instrumented Functions (SIFs) that provide formally verifiable independent oversight. We demonstrate Safe-ROS on an AgileX Scout Mini robot performing autonomous inspection in a nuclear environment. One safety requirement is selected and instantiated as a SIF. To support verification, we implement the SIF as a cognitive agent, programmed to stop the robot whenever it detects that it is too close to an obstacle. We verify that the agent meets the safety requirement and integrate it into the autonomous inspection. This integration is also verified, and the full deployment is validated in a Gazebo simulation and in lab testing. We evaluate this architecture in the context of the UK nuclear sector, where safety and regulation are crucial aspects of deployment. Success criteria include the development of a formal property from the safety requirement, the implementation and verification of the SIF, and the integration of the SIF into the operational robotic autonomous system. Our results demonstrate that the Safe-ROS architecture can provide verifiable safety oversight while deploying autonomous robots in safety-critical domains, offering a robust framework that can be extended to additional requirements and various applications.
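The abstract describes the SIF as an independent supervisory agent that stops the robot whenever it is too close to an obstacle, decoupled from the primary control system. As a rough illustration only (the function names, message shape, and 0.5 m threshold below are invented for this sketch, not taken from the paper's implementation), the supervisory override pattern might look like:

```python
from dataclasses import dataclass

@dataclass
class VelocityCmd:
    """Simplified stand-in for a ROS geometry_msgs/Twist command."""
    linear: float   # forward speed, m/s
    angular: float  # turn rate, rad/s

STOP = VelocityCmd(0.0, 0.0)

def sif_supervise(min_obstacle_dist: float,
                  proposed: VelocityCmd,
                  safety_margin: float = 0.5) -> VelocityCmd:
    """Independent safety check: pass the controller's command through
    unchanged unless the nearest obstacle is within the safety margin,
    in which case override it with a stop command."""
    if min_obstacle_dist < safety_margin:
        return STOP
    return proposed

# The SIF sits between controller output and actuators:
cmd = sif_supervise(min_obstacle_dist=0.3,
                    proposed=VelocityCmd(0.4, 0.0))
```

Keeping this check as a small, pure function with no dependence on the controller's internals is what makes the independent-oversight property amenable to formal verification, which is the architectural point the paper emphasizes.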
Problem

Research questions and friction points this paper is trying to address.

Developing a verifiable safety architecture for autonomous robots in safety-critical domains
Ensuring operational safety through formally verified, independent oversight
Integrating safety requirements into autonomous robotic systems for nuclear environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-subsystem architecture separating intelligent control from safety oversight
Formally verifiable oversight via Safety Instrumented Functions (SIFs)
Cognitive-agent implementation of SIFs to support formal verification
Diana C. Benjumea
University of Manchester, Manchester, UK
Marie Farrell
The University of Manchester
Formal Methods, Autonomous Robotic Systems, Software Verification
Louise A. Dennis
University of Manchester, Manchester, UK