🤖 AI Summary
This work addresses the challenge that Adaptation Managers (AMs) generated by large language models (LLMs) for context-aware systems (CAS) often fail to meet precise functional behavior requirements. To overcome this limitation, the paper proposes an approach that integrates formal verification with iterative LLM-based code generation. The key innovation is Fine-grained Temporal Constraint Logic (FCL), which is embedded into a “vibe coding” feedback loop to guide the LLM toward code that complies with dynamic behavioral specifications. By combining path coverage testing with state trajectory validation, the method generates functionally correct AMs within just a few iterations, as demonstrated in two CAS case studies. This significantly improves the correctness and reliability of the generated code while keeping it aligned with rigorous formal constraints.
📝 Abstract
A central challenge in CAS adaptation is defining the system's dynamic architecture and the changes in its behavior. In implementation terms, this is realized as an adaptation mechanism, typically an Adaptation Manager (AM). With the advances in generative LLMs, generating AM code from the system specification and the desired AM behavior (partially stated in natural language) is a tempting opportunity. The recent introduction of vibe coding suggests a way to address the correctness of generated code through iterative testing and vibe coding feedback loops instead of direct code inspection.
In this paper, we show that generating an AM via vibe coding feedback loops is viable when verification of the generated AM is based on a very precise formulation of the functional requirements. We specify these requirements as constraints in FCL, a novel temporal logic that expresses the behavior of traces at a much finer granularity than classical LTL.
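To make the idea of trace-level constraints concrete, the following is a minimal, hypothetical sketch of checking a bounded-response constraint over a sequence of system states. The `State` fields, the constraint shape, and the checker API are all illustrative assumptions, not the paper's actual FCL syntax or semantics; the point is only that constraints refer to concrete state values and report exactly where a trace violates them.

```python
from dataclasses import dataclass

# Hypothetical system state; the paper's FCL operates on its own state
# model, which is not reproduced here.
@dataclass
class State:
    temperature: float
    mode: str

def check_response_within(trace, trigger, response, window):
    """Check that every state satisfying `trigger` is followed, within
    `window` steps, by a state satisfying `response`.

    Returns (True, None) on success, or (False, index) pointing at the
    first violating trigger position -- the kind of detailed violation
    information a feedback loop could report back to the LLM.
    """
    for i, state in enumerate(trace):
        if trigger(state):
            if not any(response(s) for s in trace[i : i + window + 1]):
                return False, i
    return True, None

trace = [
    State(21.0, "idle"),
    State(32.5, "idle"),      # trigger fires here (index 1)
    State(33.0, "cooling"),   # response arrives within the window
    State(29.0, "cooling"),
]

ok, where = check_response_within(
    trace,
    trigger=lambda s: s.temperature > 30.0,
    response=lambda s: s.mode == "cooling",
    window=2,
)
print(ok, where)  # True None
```

Such a checker yields pass/fail plus a precise violation location, which is finer-grained feedback than a bare LTL verdict over atomic propositions.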
Furthermore, we show that by combining the adaptation and vibe coding feedback loops, where the FCL constraints are evaluated against the current system state, we achieve good results in experiments generating AMs for two example systems from the CAS domain. Typically, only a few feedback loop iterations were necessary, each feeding the LLM with reports describing detailed constraint violations. This AM testing was combined with high run-path coverage achieved through varied initial settings.
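The generate-verify-feedback cycle described above can be sketched as follows. This is a hedged, schematic reconstruction: `llm_generate`, `simulate`, and `check` are hypothetical stand-ins for an LLM client, the CAS simulator, and an FCL constraint checker, none of which are named APIs from the paper.

```python
# Hypothetical sketch of the vibe coding feedback loop: generate AM
# code, simulate it from several initial settings, check constraints,
# and feed violation reports back into the next prompt.

def generate_am(spec, initial_settings, check, llm_generate, simulate,
                max_iterations=5):
    """Iteratively request AM code from the LLM until every run passes
    all constraint checks, or the iteration budget is exhausted."""
    feedback = ""
    for _ in range(max_iterations):
        am_code = llm_generate(spec, feedback)
        violations = []
        # Different initial settings drive the system down different
        # run paths, raising path coverage of the generated AM.
        for settings in initial_settings:
            trace = simulate(am_code, settings)
            violations.extend(check(trace))
        if not violations:
            return am_code
        # A detailed violation report becomes part of the next prompt.
        feedback = "Constraint violations:\n" + "\n".join(violations)
    return None
```

The design mirrors the text: correctness is driven by testing against precise constraints across many runs, not by inspecting the generated code directly.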