Branch and Bound for Piecewise Linear Neural Network Verification

📅 2019-09-14
🏛️ Journal of Machine Learning Research
📈 Citations: 193
Influential: 20
🤖 AI Summary
Formal verification of input-output properties for large-scale neural networks—particularly ReLU-based CNNs—in safety-critical applications remains computationally intractable due to exponential complexity. Method: This paper proposes a unified Branch-and-Bound (BaB) framework that integrates mixed-integer programming (MIP), satisfiability modulo theories (SMT) heuristics, and ReLU-specific branching strategies to significantly improve branching efficiency. It also introduces the first comprehensive benchmark suite covering diverse architectures and verification difficulties. Contributions/Results: Experiments demonstrate that our method achieves substantially higher verification success rates and scalability on high-dimensional inputs and convolutional networks compared to state-of-the-art approaches. Furthermore, it systematically identifies key determinants of verification hardness—including network depth, input dimensionality, and the number of local linear regions—thereby providing both theoretical insights and practical tools for provably robust machine learning.
📝 Abstract
The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. In this context, verification involves proving or disproving that an NN model satisfies certain input-output properties. Despite the reputation of learned NN models as black boxes, and the theoretical hardness of proving useful properties about them, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theory. However, these methods are still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we exploit the Mixed Integer Linear Programming (MIP) formulation of verification to propose a family of algorithms based on Branch-and-Bound (BaB). We show that our family contains previous verification methods as special cases. With the help of the BaB framework, we make three key contributions. Firstly, we identify new methods that combine the strengths of multiple existing approaches, accomplishing significant performance improvements over the previous state of the art. Secondly, we introduce an effective branching strategy on ReLU non-linearities. This branching strategy allows us to efficiently and successfully deal with high input-dimensional problems with convolutional network architectures, on which previous methods fail frequently. Finally, we propose comprehensive test data sets and benchmarks, which include a collection of previously released test cases. We use the data sets to conduct a thorough experimental comparison of existing and new algorithms and to provide an inclusive analysis of the factors impacting the hardness of verification problems.
Problem

Research questions and friction points this paper is trying to address.

Verifying neural network input-output properties for safety
Scaling verification methods to realistic neural networks
Improving Branch-and-Bound efficiency for ReLU nonlinearities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Branch-and-Bound for neural network verification
Introduces effective branching on ReLU non-linearities
Leverages Mixed Integer Linear Programming formulation
Rudy Bunel
University of Oxford
Jingyue Lu
University of Oxford
Ilker Turkaslan
University of Oxford
Philip H. S. Torr
University of Oxford
Pushmeet Kohli
DeepMind
AI for Science · Machine Learning · AI Safety · Computer Vision · Program Synthesis
M. Pawan Kumar
Google DeepMind
Machine Learning