🤖 AI Summary
Autonomous driving system development faces a fundamental tension between safety compliance and continuous delivery. This study conducts a systematic literature review (SLR) following the PRISMA guidelines to address this challenge. It introduces “DevSafeOps” — the first conceptual framework explicitly delineating the interface between continuous integration and delivery (CI/CD) and compliance-driven verification in safety-critical AI systems. The SLR identifies 12 core challenges and 7 corresponding mitigation strategies, revealing three critical gaps: (1) the absence of safety-aware CI/CD pipelines, (2) insufficient verifiability of simulation-to-real-world closed-loop validation, and (3) the lack of automated mechanisms for generating auditable safety evidence. These findings establish a theoretical foundation and a practical roadmap for DevOps practices in autonomous driving that ensure both agility and assurance.
📝 Abstract
Developing autonomous driving (AD) systems is challenging due to the complexity of the systems and the need to assure their safe and reliable operation. The widely adopted DevOps approach appears promising for supporting continuous technological progress in AI and the demand for fast reaction to incidents, both of which necessitate continuous development, deployment, and monitoring. We present a systematic literature review that identifies, analyses, and synthesises a broad range of existing literature on the use of DevOps in autonomous driving development. Our results provide a structured overview of the challenges and solutions arising from applying DevOps to safety-related AI-enabled functions, and indicate that several open topics remain to be addressed to enable safe DevOps for the development of safe AD.