Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to learning neural representations of formal specifications either rely on computationally expensive, non-invertible symbolic kernels or employ syntactic embeddings that disregard semantic structure. This work proposes a teacher–student distillation framework that transfers the semantic geometry encoded in symbolic robustness kernels into a Transformer encoder. The student model is trained with a continuous, kernel-weighted geometric alignment loss, so that supervision penalizes errors in proportion to their semantic discrepancy and preserves semantic fidelity. The resulting method achieves, for the first time, a semantics-preserving, computationally efficient, and invertible neural representation of formal logic. The learned embeddings accurately predict the robustness and constraint satisfaction of Signal Temporal Logic (STL) formulae, faithfully maintain semantic similarity, and support efficient formula reconstruction.
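To make the teacher side concrete, here is a minimal sketch of what a symbolic robustness kernel for STL can look like, assuming (as in Monte Carlo treatments of STL kernels) that k(φ, ψ) is estimated from the correlation of the two formulae's robustness values over a bank of sampled signals. The function names, the signal distribution, and the cosine-style normalization are illustrative assumptions, not the paper's actual construction.

```python
# Hedged sketch of an STL robustness kernel estimated by Monte Carlo:
# k(phi, psi) ~ E_x[rho(phi, x) * rho(psi, x)], normalized like a cosine.
import numpy as np

def rob_atomic(signal, threshold):
    """Robustness of the predicate x_t > threshold at every time step."""
    return signal - threshold

def rob_globally(rho, a, b):
    """G_[a,b]: worst-case (min) robustness over the window [a, b]."""
    return rho[a:b + 1].min()

def rob_eventually(rho, a, b):
    """F_[a,b]: best-case (max) robustness over the window [a, b]."""
    return rho[a:b + 1].max()

def stl_kernel(phi, psi, signals):
    """Cosine-style similarity of the two formulae's robustness vectors
    evaluated on the same bank of sampled signals."""
    r_phi = np.array([phi(x) for x in signals])
    r_psi = np.array([psi(x) for x in signals])
    denom = np.linalg.norm(r_phi) * np.linalg.norm(r_psi) + 1e-12
    return r_phi @ r_psi / denom

# Example: compare G_[0,10](x > 0.2) against F_[0,10](x > 0.5)
# on random-walk signals (an illustrative trajectory distribution).
rng = np.random.default_rng(0)
signals = rng.standard_normal((1000, 50)).cumsum(axis=1) * 0.1
phi = lambda x: rob_globally(rob_atomic(x, 0.2), 0, 10)
psi = lambda x: rob_eventually(rob_atomic(x, 0.5), 0, 10)
print(f"k(phi, psi) ≈ {stl_kernel(phi, psi, signals):.3f}")
```

Note that each kernel evaluation requires computing robustness over many sampled signals per formula pair; this per-query cost is exactly what the distilled encoder is meant to amortize into a single forward pass.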

📝 Abstract
We introduce a framework for learning continuous neural representations of formal specifications by distilling the geometry of their semantics into a latent space. Existing approaches rely either on symbolic kernels -- which preserve behavioural semantics but are computationally prohibitive, anchor-dependent, and non-invertible -- or on syntax-based neural embeddings that fail to capture underlying structures. Our method bridges this gap: using a teacher-student setup, we distill a symbolic robustness kernel into a Transformer encoder. Unlike standard contrastive methods, we supervise the model with a continuous, kernel-weighted geometric alignment objective that penalizes errors in proportion to their semantic discrepancies. Once trained, the encoder produces embeddings in a single forward pass, effectively mimicking the kernel's logic at a fraction of its computational cost. We apply our framework to Signal Temporal Logic (STL), demonstrating that the resulting neural representations faithfully preserve the semantic similarity of STL formulae, accurately predict robustness and constraint satisfaction, and remain intrinsically invertible. Our proposed approach enables highly efficient, scalable neuro-symbolic reasoning and formula reconstruction without repeated kernel computation at runtime.
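The kernel-weighted geometric alignment objective described above suggests a training signal along the following lines: match the student's pairwise embedding similarities to the teacher kernel, weighting each pair's error continuously by its kernel value rather than splitting pairs into hard positives and negatives. The PyTorch sketch below is one plausible reading under those assumptions; the paper's exact weighting scheme and similarity measure may differ.

```python
# Hedged sketch of a continuous, kernel-weighted geometric alignment loss.
# `alignment_loss` and its softmax weighting are assumptions for illustration.
import torch
import torch.nn.functional as F

def alignment_loss(embeddings, teacher_K, temperature=1.0):
    """embeddings: (B, d) student encoder outputs; teacher_K: (B, B) kernel."""
    z = F.normalize(embeddings, dim=-1)      # unit-norm embeddings
    student_S = z @ z.T                      # cosine similarity matrix
    err = (student_S - teacher_K) ** 2       # elementwise geometric error
    # Continuous weights: pairs the teacher deems semantically close
    # contribute more, instead of a hard positive/negative split.
    w = torch.softmax(teacher_K / temperature, dim=-1)
    return (w * err).sum(dim=-1).mean()

# Usage with random stand-ins for a batch of formula embeddings.
B, d = 32, 128
z = torch.randn(B, d, requires_grad=True)    # stand-in for Transformer output
K = torch.rand(B, B)
K = (K + K.T) / 2                            # stand-in symmetric teacher kernel
loss = alignment_loss(z, K)
loss.backward()
print(f"loss = {loss.item():.4f}")
```

Once trained under such an objective, the encoder only needs the teacher kernel at training time; at runtime, embedding a formula is a single forward pass.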
Problem

Research questions and friction points this paper is trying to address.

formal logic
neural representations
Signal Temporal Logic
semantic preservation
symbolic kernels
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuro-symbolic reasoning
kernel alignment
Signal Temporal Logic
semantic distillation
invertible embeddings