🤖 AI Summary
Graph neural networks (GNNs) must produce outputs that satisfy input-dependent, dynamically varying convex constraints and accommodate variable-dimensional outputs—a challenge unaddressed by standard architectures. Method: This paper proposes ProjNet, a novel framework featuring a differentiable, nonsmooth constraint-projection module that integrates sparse vector clipping with a GPU-accelerated component-averaged Dykstra (CAD) algorithm, coupled with a surrogate gradient technique for end-to-end training. Contribution/Results: ProjNet comes with a theoretical convergence guarantee while substantially improving optimization efficiency and stability on large-scale graph data. Experiments on linear programming, two classes of nonconvex quadratic programs, and wireless power allocation demonstrate high solution accuracy, strong generalization across problem instances, and excellent scalability with graph size.
📝 Abstract
Many machine learning applications require outputs that satisfy complex, dynamic constraints. This task is particularly challenging for Graph Neural Network models due to the variable output sizes of graph-structured data. In this paper, we introduce ProjNet, a Graph Neural Network framework that satisfies input-dependent constraints. ProjNet combines a sparse vector clipping method with the Component-Averaged Dykstra (CAD) algorithm, an iterative scheme for solving the best-approximation problem. We establish a convergence result for CAD and develop a GPU-accelerated implementation capable of handling large-scale inputs efficiently. To enable end-to-end training, we introduce a surrogate gradient for CAD that is both computationally efficient and better suited for optimization than the exact gradient. We validate ProjNet on four classes of constrained optimization problems: linear programming, two classes of non-convex quadratic programs, and radio transmit power optimization, demonstrating its effectiveness across diverse problem settings.
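The best-approximation problem that CAD addresses—finding the point in an intersection of convex sets closest to a given input—is classically solved by Dykstra's algorithm, of which CAD is a component-averaged variant. As a point of reference (not the paper's CAD implementation, whose averaging and GPU details are not shown here), a minimal sketch of classic Dykstra's projection:

```python
import numpy as np

def dykstra(x0, projections, n_iter=200):
    """Classic Dykstra's algorithm for the best-approximation problem:
    return the point in the intersection of convex sets closest to x0.
    `projections` is a list of functions, each projecting onto one set."""
    x = np.asarray(x0, dtype=float)
    # One correction (increment) vector per set, initialised to zero;
    # these distinguish Dykstra from plain alternating projections.
    p = [np.zeros_like(x) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            y = proj(x + p[i])       # project the corrected iterate
            p[i] = x + p[i] - y      # update the correction for set i
            x = y
    return x

# Example: project (1, 1) onto the intersection of the box [0, 1]^2
# and the halfspace {z : z1 + z2 <= 1}; the nearest point is (0.5, 0.5).
box = lambda z: np.clip(z, 0.0, 1.0)
halfspace = lambda z: z - max(z.sum() - 1.0, 0.0) / 2.0 * np.ones(2)
x_star = dykstra([1.0, 1.0], [box, halfspace])  # -> [0.5, 0.5]
```

The per-set corrections `p[i]` are what make the limit the *nearest* feasible point rather than an arbitrary one; CAD's contribution, per the abstract, is averaging such component projections in a form amenable to GPU parallelism and large, variable-sized inputs.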