SnareNet: Flexible Repair Layers for Neural Networks with Hard Constraints

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that neural networks, when employed as surrogate solvers or control policies, often produce unconstrained outputs that violate hard physical, operational, or safety constraints. To resolve this, the authors propose SnareNet, an architecture incorporating a differentiable correction layer at the network output. This layer iteratively adjusts predictions within the range space of a constraint mapping to satisfy input-dependent nonlinear constraints. A key innovation is the integration of an adaptive relaxation mechanism: during early training, the feasible set is enlarged to encourage exploration, then gradually tightened toward the strict feasible region in later stages, balancing training stability with constraint satisfaction accuracy. Experiments on optimization and trajectory planning benchmarks demonstrate that SnareNet significantly outperforms existing methods, achieving both more reliable constraint adherence and superior objective performance.
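The summary describes a repair layer that iteratively adjusts predictions within the range space of a constraint mapping until the constraints hold to tolerance. A minimal sketch of that idea is a Gauss-Newton-style correction that drives a constraint residual g(y) to zero; the function names, the pseudo-inverse step, and the toy unit-circle constraint below are illustrative assumptions, not the paper's exact layer:

```python
import numpy as np

def repair(y, g, jac, tol=1e-8, max_iter=50):
    """Iteratively correct y until g(y) ~= 0, stepping along the
    range space of the constraint Jacobian (an illustrative sketch)."""
    for _ in range(max_iter):
        r = g(y)
        if np.linalg.norm(r) <= tol:
            break
        J = jac(y)
        # least-squares (Gauss-Newton) step: y <- y - J^+ g(y)
        y = y - np.linalg.pinv(J) @ r
    return y

# toy constraint: outputs must lie on the unit circle, g(y) = ||y||^2 - 1
g = lambda y: np.array([y @ y - 1.0])
jac = lambda y: (2.0 * y)[None, :]

y0 = np.array([2.0, 0.0])   # infeasible raw network output
y_star = repair(y0, g, jac)  # repaired output on the unit circle
```

Each step is a smooth function of the current iterate, which is what makes such a correction layer differentiable and trainable end-to-end.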

📝 Abstract
Neural networks are increasingly used as surrogate solvers and control policies, but unconstrained predictions can violate physical, operational, or safety requirements. We propose SnareNet, a feasibility-controlled architecture for learning mappings whose outputs must satisfy input-dependent nonlinear constraints. SnareNet appends a differentiable repair layer that navigates in the constraint map's range space, steering iterates toward feasibility and producing a repaired output that satisfies constraints to a user-specified tolerance. To stabilize end-to-end training, we introduce adaptive relaxation, which designs a relaxed feasible set that snares the neural network at initialization and shrinks it into the feasible set, enabling early exploration and strict feasibility later in training. On optimization-learning and trajectory planning benchmarks, SnareNet consistently attains improved objective quality while satisfying constraints more reliably than prior work.
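The adaptive relaxation described in the abstract enlarges the feasible set at initialization and shrinks it into the strict feasible region as training proceeds. A minimal sketch is a tolerance schedule for the relaxed set {y : g(y) <= eps_t}; the linear decay law and function name here are assumptions for illustration, as the paper's exact schedule is not given on this page:

```python
def relaxation_schedule(step, total_steps, eps0=1.0):
    """Return the relaxation level eps_t for the set {y : g(y) <= eps_t}.

    Starts at eps0 (a loose set that 'snares' the untrained network)
    and decays linearly to 0 (the strict feasible set) by the end of
    training. Linear decay is an illustrative choice, not the paper's.
    """
    t = min(step / total_steps, 1.0)
    return eps0 * (1.0 - t)

def is_relaxed_feasible(residual, step, total_steps, eps0=1.0):
    """Check a scalar constraint residual against the current relaxed set."""
    return residual <= relaxation_schedule(step, total_steps, eps0)
```

Early in training (`step` near 0) almost any prediction is accepted, which stabilizes gradients; by `step == total_steps` only strictly feasible outputs pass.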
Problem

Research questions and friction points this paper is trying to address.

neural networks
hard constraints
feasibility
constraint satisfaction
safety requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

SnareNet
repair layer
hard constraints
adaptive relaxation
feasibility-controlled architecture
Ya-Chi Chu
Department of Mathematics, Stanford University, CA, United States
Alkiviades Boukas
Institute for Computational and Mathematical Engineering, Stanford University, CA, United States
Madeleine Udell
Assistant Professor, Management Science and Engineering, Stanford University
Optimization · Machine Learning · Data Science