🤖 AI Summary
Traditional three-valued abstraction refinement (TVAR) for μ-calculus verification relies on modal transitions in the state space and on intricate mechanisms to ensure monotonicity, limiting scalability and flexibility. Method: We propose an input-driven TVAR paradigm that centers abstraction on abstract inputs rather than abstract states, thereby eliminating the dependence on modal transitions. The approach integrates generalized symbolic trajectory evaluation (GSTE), delayed nondeterminism, three-valued logic, input-guided refinement, and symbolic implementation techniques. Contribution/Results: We formally establish the framework's soundness, monotonicity, and completeness, and demonstrate its support for generic abstraction modeling, agnostic to concrete state representations. A free and open-source implementation substantially mitigates the state-explosion problem, achieving exponential speedups over conventional TVAR methods when verifying complex systems.
📝 Abstract
Unlike Counterexample-Guided Abstraction Refinement (CEGAR), Three-Valued Abstraction Refinement (TVAR) can verify all properties of the μ-calculus. We present a novel algorithmic framework for TVAR that -- unlike previous ones -- does not depend on modal transitions in the state-space formalism. This entirely bypasses the complications introduced to ensure monotonicity in previous frameworks and allows for simpler reasoning. The key idea, inspired by (Generalized) Symbolic Trajectory Evaluation and Delayed Nondeterminism, is to refine using abstract inputs rather than abstract states. We prove that the framework is sound, monotone, and complete, and evaluate a free and open-source implementation of an instantiation of the framework, demonstrating its ability to mitigate exponential state-space explosion.
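The key idea of refining on abstract inputs rather than abstract states can be illustrated with a toy example. The sketch below is purely illustrative and not the paper's implementation; the names `xor3`, `step`, and `check_property` are hypothetical. It models a one-bit system under three-valued logic, where an unknown input `X` makes a property check inconclusive, and refinement proceeds by splitting the abstract input into its concretizations instead of splitting the abstract state:

```python
# Hypothetical sketch of input-guided three-valued refinement.
X = "X"  # the "unknown" value of three-valued (Kleene) logic

def xor3(a, b):
    """Three-valued XOR: the result is unknown if either operand is unknown."""
    if a == X or b == X:
        return X
    return a ^ b

def step(state, inp):
    """Toy transition function: the next state is state XOR input."""
    return xor3(state, inp)

def check_property(state):
    """Three-valued check of 'state == 0': True, False, or X (inconclusive)."""
    if state == X:
        return X
    return state == 0

# Abstract run: with a fully unknown input, the check is inconclusive.
assert check_property(step(0, X)) == X

# Input-guided refinement: split the abstract input X into its
# concretizations {0, 1} and check each resulting trajectory.
results = {inp: check_property(step(0, inp)) for inp in (0, 1)}

# Each refined case now yields a definite verdict.
assert results == {0: True, 1: False}
```

The point of the sketch is that refinement never touches the state representation: only the input is made more precise, which is what lets the framework avoid modal transitions over abstract states.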