🤖 AI Summary
Scalability of formal safety verification for deep neural networks (DNNs) remains limited on large-scale models, because existing constraint-solving approaches are computationally intractable there. Method: This paper introduces a novel mechanism that automatically derives effective conflict clauses directly from UNSAT proofs, bringing Conflict-Driven Clause Learning (CDCL), a technique highly successful in SAT and SMT solving, to DNN verification. The approach combines SMT solving, incremental constraint propagation, and a customized solver interface, enabling modular cooperation between SAT solvers and DNN verifiers. Contribution/Results: The proposed optimizations achieve a 2–3× speedup over a comparable CDCL-based approach on several standard benchmarks, and in specific cases outperform the state of the art, pointing toward more scalable, formally rigorous DNN verification.
📝 Abstract
The widespread adoption of deep neural networks (DNNs) requires efficient techniques for safety verification. Existing methods struggle to scale to real-world DNNs, and tremendous efforts are being put into improving their scalability. In this work, we propose an approach for improving the scalability of DNN verifiers using Conflict-Driven Clause Learning (CDCL) -- an approach that has proven highly successful in SAT and SMT solving. We present a novel algorithm for deriving conflict clauses using UNSAT proofs, and propose several optimizations for expediting it. Our approach allows a modular integration of SAT solvers and DNN verifiers, and we implement it on top of an interface designed for this purpose. The evaluation of our implementation over several benchmarks suggests a 2X--3X improvement over a similar approach, with specific cases outperforming the state of the art.
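To make the cooperation described above concrete, here is a minimal, purely illustrative sketch of a CDCL-style loop between a SAT solver and a DNN verifier. All names (`toy_verifier`, `solve`) and the hard-coded infeasibility are assumptions for illustration, not the paper's actual algorithm or implementation: Boolean variables stand for case splits, a naive enumerator plays the role of the SAT solver, and the "UNSAT proof" is simulated by a small core of literals whose negation becomes a learned conflict clause.

```python
from itertools import product

def toy_verifier(assignment):
    """Stand-in for a DNN verifier checking one case split.

    Returns (status, core): on "UNSAT", core is the small set of literals
    that an UNSAT proof would identify as the reason for infeasibility
    (hard-coded here for illustration; every split is infeasible).
    """
    # Pretend the proof shows that x1=True together with x2=True suffices.
    if assignment[1] and assignment[2]:
        return "UNSAT", {(1, True), (2, True)}
    # Otherwise the "proof" only rules out this exact assignment.
    return "UNSAT", {(v, val) for v, val in assignment.items()}

def satisfies(assignment, clause):
    # A clause is a set of (var, polarity) literals; at least one must hold.
    return any(assignment[v] == pol for v, pol in clause)

def solve(num_vars):
    learned = []  # conflict clauses derived from UNSAT proofs
    while True:
        # Naive "SAT solver": find a case split consistent with all learned
        # clauses (a real solver would do this incrementally).
        for values in product([False, True], repeat=num_vars):
            assignment = dict(enumerate(values))
            if all(satisfies(assignment, c) for c in learned):
                break
        else:
            return "UNSAT", learned  # no case split left: property verified
        status, core = toy_verifier(assignment)
        if status == "SAT":
            return "SAT", assignment  # counterexample region found
        # Negate the proof core: the resulting conflict clause blocks every
        # future assignment sharing this infeasible sub-pattern.
        learned.append({(v, not pol) for v, pol in core})

status, result = solve(num_vars=3)
print(status)  # "UNSAT": every case split was refuted by the verifier
```

The small core from the simulated proof is what makes learning effective here: the clause learned from `{x1=True, x2=True}` prunes several case splits at once, whereas a full-assignment core blocks only one.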