🤖 AI Summary
This work addresses the challenge of effectively enforcing declarative constraints expressed in symbolic logic during generative modeling. The authors propose the Logic-Guided Vector Field (LGVF) framework, which, for the first time, enables differentiable injection of logical constraints into flow-matching models during both training and inference. During training, the vector field is optimized via a weighted trajectory logic loss; during inference, constraint gradients guide the sampling process, endowing the model with explicit obstacle-avoidance and constraint-aware capabilities—without requiring explicit path planning. Experiments demonstrate that LGVF reduces constraint violation rates by 59%–82% across three constraint scenarios while maintaining or even improving distributional fidelity, thereby achieving a controllable trade-off between feasibility and fidelity.
📝 Abstract
Neuro-symbolic systems aim to combine the expressive structure of symbolic logic with the flexibility of neural learning; yet, generative models typically lack mechanisms to enforce declarative constraints at generation time. We propose Logic-Guided Vector Fields (LGVF), a neuro-symbolic framework that injects symbolic knowledge, specified as differentiable relaxations of logical constraints, into flow matching generative models. LGVF couples two complementary mechanisms: (1) a training-time logic loss that penalizes constraint violations along continuous flow trajectories, with weights that emphasize correctness near the target distribution; and (2) an inference-time adjustment that steers sampling using constraint gradients, acting as a lightweight, logic-informed correction to the learned dynamics. We evaluate LGVF on three constrained generation case studies spanning linear, nonlinear, and multi-region feasibility constraints. Across all settings, LGVF reduces constraint violations by 59-82% compared to standard flow matching and achieves the lowest violation rates in each case. In the linear and ring settings, LGVF also improves distributional fidelity as measured by MMD, while in the multi-obstacle setting, we observe a satisfaction-fidelity trade-off, with improved feasibility but increased MMD. Beyond quantitative gains, LGVF yields constraint-aware vector fields exhibiting emergent obstacle-avoidance behavior, routing samples around forbidden regions without explicit path planning.