🤖 AI Summary
Simulation and testing of large-scale Cyber-Physical Systems (CPS) face prohibitively high resource consumption and maintenance costs when modeling hardware, software, and physical environments. Method: Through a cross-company workshop with six industrial organizations, this study systematically identifies key engineering bottlenecks and proposes three priority research directions: (1) AI-driven generation of dynamic scenario and environment models; (2) integration of simulators and generative AI into CI/CD pipelines; and (3) trustworthiness assurance mechanisms for generative AI outputs in simulation contexts. Contribution/Results: The work delivers empirically grounded, forward-looking outcomes: a reusable taxonomy of engineering challenges and an industry–academia collaborative research agenda. It provides both theoretical foundations and practical guidelines for the responsible deployment of generative AI in CPS simulation and testing.
📝 Abstract
Quality assurance for large-scale cyber-physical systems relies on sophisticated test activities in complex test environments, which are investigated with the help of numerous types of simulators. As these systems grow, extensive resources are required to develop and maintain simulation models of hardware and software components, as well as of physical environments. Meanwhile, recent advances in generative AI have led to tools that can produce executable test cases for software systems, offering potential benefits such as reduced manual effort or increased test coverage. However, the application of generative AI techniques to simulation-based testing of large-scale cyber-physical systems remains underexplored. To better understand this gap, this study captures practitioners' perspectives on leveraging generative AI, based on a cross-company workshop with six organizations. Our contribution is twofold: (1) detailed, experience-based insights into challenges faced by engineers, and (2) a research agenda comprising three high-priority directions: (a) AI-generated scenarios and environment models, (b) simulators and AI in CI/CD pipelines, and (c) trustworthiness in generative AI for simulation. While participants acknowledged substantial potential, they also highlighted unresolved challenges. By detailing these issues, the paper aims to guide future industry–academia collaboration towards the responsible adoption of generative AI in simulation-based testing.