AI Summary
This work addresses the significant impact that the order in which equations are processed has on the performance of Nielsen-transformation-based solvers for systems of word equations. We propose a graph neural network (GNN)-based approach to learning equation-ordering policies. Our core contributions are threefold: (1) we design a global heterogeneous graph representation for conjunctive word equations that encodes equation structure, variable constraints, and Nielsen transformation rules; (2) we leverage minimal unsatisfiable subsets (MUSes) to generate supervision signals, mitigating label sparsity; (3) we introduce three adaptation mechanisms that convert multi-class classification outputs into robust ranking decisions, enabling effective handling of variable-length conjunctions. Evaluated on the single-variable-occurrence (SVO) benchmark, our method substantially improves solving success rates over state-of-the-art string solvers.
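To make the third contribution concrete, the sketch below shows one plausible way to turn per-conjunct scores into a ranking over a variable-length conjunction: a shared scoring function (here simply `sum`, standing in for a learned head) is applied to each equation embedding independently, the scores are normalized with a softmax over the set, and sorting yields the processing order. This is an illustrative scheme under assumed names (`rank_conjuncts`, `score`); the paper's three concrete adaptation mechanisms are not detailed here and may differ.

```python
import math

def rank_conjuncts(embeddings, score):
    """Rank a variable-length list of equation embeddings.

    A fixed multi-class head cannot handle a varying number of conjuncts,
    so a shared scoring function is applied to each conjunct independently,
    the scores are normalized with a softmax over the whole set, and the
    indices are sorted by probability (highest first).
    """
    raw = [score(e) for e in embeddings]
    m = max(raw)                          # subtract max for numerical stability
    exps = [math.exp(r - m) for r in raw]
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return order, probs
```

Because the scorer is shared across conjuncts, the same network handles conjunctions of any length without retraining a fixed-size output layer.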
Abstract
Nielsen transformation is a standard approach for solving word equations: by repeatedly splitting equations and applying simplification steps, equations are rewritten until a solution is reached. When solving a conjunction of word equations in this way, the performance of the solver depends considerably on the order in which equations are processed. In this work, the use of Graph Neural Networks (GNNs) for ranking word equations before and during the solving process is explored. For this, a novel graph-based representation for word equations is presented that preserves global information across conjuncts, giving the GNN a holistic view during ranking. To handle the variable number of conjuncts, three approaches for adapting a multi-class classification task to the problem of ranking equations are proposed. The training of the GNN is done with the help of minimal unsatisfiable subsets (MUSes) of word equations. The experimental results show that, compared to state-of-the-art string solvers, the new framework solves more problems in benchmarks where each variable appears at most once in each equation.
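The splitting-and-simplification loop described above can be sketched for a single equation. This is a minimal illustrative search, not the paper's implementation: it assumes single-character symbols with uppercase letters as variables (a hypothetical convention), and it bounds the recursion depth, so it is a sound but incomplete sketch of the procedure.

```python
VARS = set("XYZ")  # illustrative convention: uppercase symbols are variables

def solve(u, v, depth=40):
    """Search for a solution of the word equation u = v via Nielsen transformations.

    Returns True if a solution is found within the depth bound, False otherwise
    (so a False result may also mean the bound was exhausted).
    """
    if depth == 0:
        return False
    if not u and not v:
        return True          # both sides empty: trivially satisfied
    if not u or not v:
        # One side empty: solvable only if the other side is all variables,
        # each of which can then be assigned the empty word.
        rest = u or v
        return all(c in VARS for c in rest)
    a, b = u[0], v[0]
    if a == b:
        return solve(u[1:], v[1:], depth - 1)   # identical leading symbols cancel
    if a not in VARS and b not in VARS:
        return False         # two distinct letters at the front: unsatisfiable
    if a not in VARS:
        u, v, a, b = v, u, b, a                 # ensure a is a variable (rules are symmetric)
    # Split 1: substitute a -> empty word everywhere.
    if solve(u.replace(a, ""), v.replace(a, ""), depth - 1):
        return True
    # Split 2: substitute a -> b·a everywhere; both sides now start with b, which cancels.
    u2 = u.replace(a, b + a)
    v2 = v.replace(a, b + a)
    return solve(u2[1:], v2[1:], depth - 1)
```

For a conjunction of such equations, the solver repeatedly picks one conjunct to rewrite; which conjunct is picked first is exactly the ordering decision that the GNN-based ranking is trained to make.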