Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Learning-based autonomous systems operating in open environments face critical challenges in safety assurance and trustworthiness. Method: This paper proposes a bidirectional synergistic framework integrating runtime verification (RV) and large language models (LLMs): RV enforces formal safety constraints on LLM outputs, while LLMs extend RV with specification auto-generation, predictive reasoning, and uncertainty modeling. The framework incorporates predictive monitoring, natural-language-to-formal-specification translation, pattern recognition, and dynamic adaptation, overcoming traditional RV limitations of static assumptions and heavy manual specification effort. Contribution/Results: The paper establishes, for the first time, a systematic technical pathway and theoretical foundation for RV–LLM integration, delivering a scalable, interpretable paradigm for real-time safety monitoring, dynamic verification, and trustworthy certification of learning-based systems.
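
To make the guardrail direction concrete, here is a minimal sketch (not from the paper) of RV filtering LLM action proposals. The `Action` type, state fields, and the `llm_propose`/`fallback` callables are illustrative assumptions; the paper proposes the architecture, not a concrete API.

```python
from dataclasses import dataclass

# Illustrative types -- the paper fixes no concrete interface.
@dataclass
class Action:
    name: str
    speed: float

def safe(state: dict, action: Action) -> bool:
    """Monitor for a safety constraint such as G (obstacle_near -> speed <= 1.0)."""
    return not (state["obstacle_near"] and action.speed > 1.0)

def guarded_step(state: dict, llm_propose, fallback) -> Action:
    """RV as guardrail: execute an LLM proposal only if the monitor accepts it."""
    action = llm_propose(state)      # untrusted, possibly erroneous LLM output
    if safe(state, action):
        return action                # constraint holds: pass the action through
    return fallback(state)           # violation caught: substitute a safe action

# The monitor overrides an unsafe proposal near an obstacle.
state = {"obstacle_near": True}
print(guarded_step(state,
                   llm_propose=lambda s: Action("accelerate", 2.5),
                   fallback=lambda s: Action("brake", 0.0)))
```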

📝 Abstract
Assuring the safety and trustworthiness of autonomous systems is particularly difficult when learning-enabled components and open environments are involved. Formal methods provide strong guarantees but depend on complete models and static assumptions. Runtime verification (RV) complements them by monitoring executions at run time and, in its predictive variants, by anticipating potential violations. Large language models (LLMs), meanwhile, excel at translating natural language into formal artefacts and recognising patterns in data, yet they remain error-prone and lack formal guarantees. This vision paper argues for a symbiotic integration of RV and LLMs. RV can serve as a guardrail for LLM-driven autonomy, while LLMs can extend RV by assisting specification capture, supporting anticipatory reasoning, and helping to handle uncertainty. We outline how this mutual reinforcement differs from existing surveys and roadmaps, discuss challenges and certification implications, and identify future research directions towards dependable autonomy.
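
As a sketch of the specification-capture direction mentioned in the abstract, the snippet below stubs the LLM translation step (no real model call, since the paper names no API) and keeps a purely syntactic well-formedness gate on the formal side, so raw LLM text is never trusted as a specification.

```python
import re

def llm_translate(requirement: str) -> str:
    """Stand-in for the LLM translation step; a real system would prompt a model.
    Stubbed here so the sketch runs without any API."""
    return "G (speed_ok)"

ATOM = re.compile(r"^[a-z_][a-z0-9_]*$")
OPERATORS = {"G", "F", "X", "U", "&", "|", "!", "->"}

def is_wellformed_ltl(formula: str) -> bool:
    """Cheap token-level gate: balanced parentheses, known operators, valid atoms.
    A real pipeline would fully parse the formula; the point is that LLM output
    only becomes a specification after a formal-side check."""
    depth = 0
    for tok in formula.replace("(", " ( ").replace(")", " ) ").split():
        if tok == "(":
            depth += 1
        elif tok == ")":
            depth -= 1
            if depth < 0:
                return False
        elif tok not in OPERATORS and not ATOM.match(tok):
            return False
    return depth == 0

spec = llm_translate("the robot must never exceed the speed limit")
print(spec, "->", "accepted" if is_wellformed_ltl(spec) else "rejected; re-prompt")
```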
Problem

Research questions and friction points this paper is trying to address.

Combining runtime verification with LLMs to enhance autonomous-system safety
Addressing limitations of formal methods through dynamic monitoring and prediction (see the predictive-monitoring sketch after this list)
Mitigating LLM errors while leveraging their pattern-recognition capabilities
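
A toy illustration of the predictive monitoring referenced above: extrapolate the observed trace prefix a few steps and flag a bound violation before it occurs. The linear predictor is a placeholder for the learned (possibly LLM-assisted) models the paper envisions.

```python
def predict(prefix, horizon):
    """Toy predictor: linear extrapolation of the last observed step.
    A learned model (possibly LLM-assisted) would replace this."""
    step = prefix[-1] - prefix[-2] if len(prefix) >= 2 else 0.0
    return [prefix[-1] + step * (i + 1) for i in range(horizon)]

def predictive_monitor(prefix, limit, horizon=5):
    """Anticipate violations of the invariant 'value <= limit' before they occur."""
    if any(v > limit for v in prefix):
        return "violated"                      # invariant already broken
    if any(v > limit for v in predict(prefix, horizon)):
        return "violation predicted"           # time to intervene early
    return "ok so far"

print(predictive_monitor([0.4, 0.7, 1.0], limit=1.5))  # -> violation predicted
```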
Innovation

Methods, ideas, or system contributions that make the work stand out.

RV monitors system executions to enforce runtime safety
LLMs assist specification capture and pattern recognition
Symbiotic integration enhances both verification and autonomy (a minimal trace-monitor sketch follows)
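
And the RV core itself, for completeness: a minimal three-valued monitor for a "p until q" property over a finite trace prefix. This is our own illustrative construction, not a monitor prescribed by the paper.

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "satisfied"
    FALSE = "violated"
    UNKNOWN = "inconclusive"   # a finite prefix cannot yet decide

def monitor_until(trace, p, q):
    """Three-valued monitor for 'p until q' over a finite trace prefix:
    TRUE once q holds (p having held before), FALSE when p fails first,
    UNKNOWN while the prefix is still consistent with both outcomes."""
    for state in trace:
        if q(state):
            return Verdict.TRUE
        if not p(state):
            return Verdict.FALSE
    return Verdict.UNKNOWN

trace = [{"moving": True, "at_goal": False},
         {"moving": True, "at_goal": False}]
print(monitor_until(trace, p=lambda s: s["moving"], q=lambda s: s["at_goal"]))
# -> Verdict.UNKNOWN: keep monitoring as the trace grows
```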