PREMAP: A Unifying PREiMage APproximation Framework for Neural Networks

📅 2024-08-17
🏛️ arXiv.org
📈 Citations: 4
Influential: 2
🤖 AI Summary
Preimage verification for neural networks faces challenges including the difficulty of abstracting high-dimensional input spaces and the coarse-grained characterization of output set boundaries. This paper proposes the first unified preimage abstraction framework supporting arbitrary polyhedral output sets. The method introduces three key innovations: (1) a parameterized linear relaxation scheme that balances precision and computational tractability; (2) a co-partitioning strategy that jointly refines input features and hidden-layer neurons, enabling anytime refinement at arbitrary stages; and (3) an optimization-driven heuristic search guided by the volume of the abstract preimage. Experiments demonstrate substantial improvements in verification efficiency and scalability on high-dimensional image classification models. The framework yields a sound and complete algorithm for quantitative verification, and provides sound certified bounds on adversarial perturbation tolerance for quantitative robustness analysis.
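To make innovation (1) concrete: bound-propagation verifiers typically relax each unstable ReLU (pre-activation range straddling zero) with a linear upper chord and a parameterised linear lower bound whose slope can be optimised. The sketch below illustrates that standard scheme under stated assumptions; the function name and interface are hypothetical and not taken from the paper's implementation.

```python
def relu_relaxation(l, u, alpha):
    """Parameterised linear bounds for y = relu(x) on x in [l, u].

    Returns (a_lo, b_lo, a_up, b_up) such that
        a_lo * x + b_lo <= relu(x) <= a_up * x + b_up  for all x in [l, u].
    `alpha` in [0, 1] is the tunable lower-bound slope for unstable neurons.
    """
    if u <= 0:
        # Inactive neuron: relu(x) == 0 on the whole interval.
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:
        # Active neuron: relu(x) == x on the whole interval.
        return 1.0, 0.0, 1.0, 0.0
    # Unstable neuron: upper bound is the chord from (l, 0) to (u, u);
    # lower bound is the line y = alpha * x through the origin.
    a_up = u / (u - l)
    b_up = -a_up * l
    return alpha, 0.0, a_up, b_up
```

Optimising `alpha` per neuron (e.g. by gradient ascent on the resulting bound) is what lets such schemes trade precision against cost without any partitioning.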

📝 Abstract
Most methods for neural network verification focus on bounding the image, i.e., set of outputs for a given input set. This can be used to, for example, check the robustness of neural network predictions to bounded perturbations of an input. However, verifying properties concerning the preimage, i.e., the set of inputs satisfying an output property, requires abstractions in the input space. We present a general framework for preimage abstraction that produces under- and over-approximations of any polyhedral output set. Our framework employs cheap parameterised linear relaxations of the neural network, together with an anytime refinement procedure that iteratively partitions the input region by splitting on input features and neurons. The effectiveness of our approach relies on carefully designed heuristics and optimization objectives to achieve rapid improvements in the approximation volume. We evaluate our method on a range of tasks, demonstrating significant improvement in efficiency and scalability to high-input-dimensional image classification tasks compared to state-of-the-art techniques. Further, we showcase the application to quantitative verification and robustness analysis, presenting a sound and complete algorithm for the former and providing sound quantitative results for the latter.
Problem

Research questions and friction points this paper is trying to address.

Most verification methods bound the image of a network; verifying preimage properties instead requires abstractions in the input space
High-dimensional input spaces are difficult to abstract precisely and at scale
Coarse-grained characterization of output set boundaries limits quantitative verification and robustness analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameterised linear relaxations that trade precision against computational cost
Anytime refinement by splitting on input features and hidden-layer neurons
Optimisation-driven heuristics guided by approximation volume for rapid improvement
Xiyue Zhang
University of Bristol
Formal Methods · Artificial Intelligence · Trustworthy AI
Benjie Wang
University of California, Los Angeles
Machine Learning · Artificial Intelligence · Causal Inference · Tractable Probabilistic Models
Marta Z. Kwiatkowska
Department of Computer Science, University of Oxford, Oxford, OX1 3QD, UK
Huan Zhang
Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, USA