📝 Abstract
Constraint programming is a general and exact method based on constraint propagation and backtracking search. We provide a function that decomposes a constraint network into a ternary constraint network (TCN) using a reduced set of operators. TCNs are not new and have been used since the inception of constraint programming, notably in constraint logic programming systems. This work aims to formally specify the function decomposing a discrete constraint network into a TCN, along with its preprocessing. We aim to be self-contained and descriptive enough to serve as the basis of an implementation. Our primary use of TCNs is to obtain a regular data layout of constraints so that propagators can be executed efficiently on graphics processing unit (GPU) hardware.
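To make the decomposition idea concrete, here is a minimal sketch of how an n-ary sum constraint can be rewritten into ternary constraints by introducing intermediate variables. The function and variable names (`decompose_sum`, `fresh_names`, the `(out, a, b)` triple encoding `out = a + b`) are illustrative assumptions, not the paper's actual formalization:

```python
def fresh_names(prefix="t"):
    """Generate fresh intermediate variable names t0, t1, ..."""
    i = 0
    while True:
        yield f"{prefix}{i}"
        i += 1

def decompose_sum(result, operands, fresh):
    """Rewrite result = operands[0] + ... + operands[n-1] (n >= 2)
    into a chain of ternary constraints (out, a, b) meaning out = a + b."""
    constraints = []
    acc = operands[0]
    # Each middle operand gets folded in via a fresh intermediate variable.
    for op in operands[1:-1]:
        t = next(fresh)
        constraints.append((t, acc, op))
        acc = t
    # The last addition produces the original result variable directly.
    constraints.append((result, acc, operands[-1]))
    return constraints

# w = x + y + z becomes t0 = x + y, then w = t0 + z
print(decompose_sum("w", ["x", "y", "z"], fresh_names()))
# → [('t0', 'x', 'y'), ('w', 't0', 'z')]
```

Because every resulting constraint has exactly three variables, the constraint store can be laid out as a dense array of fixed-width records, which is the kind of regular layout the abstract targets for GPU execution.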