Bridging Logic and Learning: Decoding Temporal Logic Embeddings via Transformers

📅 2025-07-10
🤖 AI Summary
Semantic embeddings of Signal Temporal Logic (STL) formulas are typically not invertible: recovering a syntactically correct and semantically faithful STL expression from a continuous vector representation remains challenging. Method: We propose a reversible semantic-embedding framework for STL based on a Transformer decoder. It employs a compact STL-specific vocabulary and semantics-aware embeddings to generate grammatically valid and semantically accurate STL formulas end-to-end from latent vectors. Contribution/Results: This work achieves the first efficient, fully reversible decoding of STL formula embeddings and supports semantics-preserving automatic simplification. The model generates valid STL formulas within one training epoch and attains cross-scenario semantic generalization in roughly ten epochs. Empirical evaluation on few-shot trajectory classification demonstrates strong generalization and practical utility, establishing a new paradigm for the deep integration of symbolic knowledge and deep learning.

📝 Abstract
Continuous representations of logic formulae allow us to integrate symbolic knowledge into data-driven learning algorithms. If such embeddings are semantically consistent, i.e. if similar specifications are mapped into nearby vectors, they enable continuous learning and optimization directly in the semantic space of formulae. However, to translate the optimal continuous representation into a concrete requirement, such embeddings must be invertible. We tackle this issue by training a Transformer-based decoder-only model to invert semantic embeddings of Signal Temporal Logic (STL) formulae. STL is a powerful formalism that allows us to describe properties of signals varying over time in an expressive yet concise way. By constructing a small vocabulary from STL syntax, we demonstrate that our proposed model is able to generate valid formulae after only 1 epoch and to generalize to the semantics of the logic in about 10 epochs. Additionally, the model is able to decode a given embedding into formulae that are often simpler in terms of length and nesting while remaining semantically close (or equivalent) to gold references. We show the effectiveness of our methodology across various levels of training formulae complexity to assess the impact of training data on the model's ability to effectively capture the semantic information contained in the embeddings and generalize out-of-distribution. Finally, we deploy our model for solving a requirement mining task, i.e. inferring STL specifications that solve a classification task on trajectories, performing the optimization directly in the semantic space.
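The abstract's "small vocabulary from STL syntax" can be illustrated with a minimal sketch: a compact token set covering Boolean connectives, temporal operators, interval brackets, and digits, plus an encoder mapping a tokenized formula to the integer ids a decoder-only model would consume. The exact token set and helper names below are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a compact STL vocabulary and formula tokenization.
# The token set is illustrative (an assumption), not the paper's actual vocabulary.

STL_VOCAB = [
    "<pad>", "<bos>", "<eos>",            # special tokens for the decoder
    "not", "and", "or",                   # Boolean connectives
    "always", "eventually", "until",      # temporal operators
    "(", ")", "[", "]", ",",              # structural symbols
    "x", "<=", ">=",                      # atomic predicates over a signal
] + [str(d) for d in range(10)] + ["."]   # digits and dot for thresholds/intervals

TOKEN_TO_ID = {tok: i for i, tok in enumerate(STL_VOCAB)}

def encode(tokens):
    """Map a tokenized STL formula to integer ids, wrapped in <bos>/<eos>."""
    return [TOKEN_TO_ID["<bos>"]] + [TOKEN_TO_ID[t] for t in tokens] + [TOKEN_TO_ID["<eos>"]]

# e.g. the formula "eventually[0,5] ( x >= 0.3 )"
formula = ["eventually", "[", "0", ",", "5", "]", "(", "x", ">=", "0", ".", "3", ")"]
ids = encode(formula)
```

Keeping the vocabulary this small is what makes it plausible for a decoder to emit only grammatically valid formulas after very little training, as the abstract reports.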
Problem

Research questions and friction points this paper is trying to address.

Inverting semantic embeddings of Signal Temporal Logic formulae
Generating simpler yet semantically equivalent logic formulae
Solving requirement mining tasks via semantic space optimization
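The problems above hinge on STL's quantitative (robustness) semantics, which is what makes "semantically consistent" embeddings and trajectory classification possible: a formula's robustness on a sampled trajectory is a real number whose sign indicates satisfaction. A minimal sketch of the standard semantics for two operators (function names are illustrative):

```python
# Minimal sketch of STL quantitative semantics (robustness) over a sampled
# trajectory, using the standard min/max definitions; not the paper's code.

def rob_pred(signal, c):
    """Robustness trace of the atomic predicate x >= c."""
    return [x - c for x in signal]

def rob_eventually(rho, a, b):
    """eventually_[a,b]: max of the argument's robustness in each window."""
    return [max(rho[t + a : t + b + 1]) for t in range(len(rho) - b)]

def rob_always(rho, a, b):
    """always_[a,b]: min of the argument's robustness in each window."""
    return [min(rho[t + a : t + b + 1]) for t in range(len(rho) - b)]

signal = [0.1, 0.4, 0.9, 0.2, -0.3, 0.6]
# robustness at time 0 of: eventually_[0,3] (x >= 0.5)
rho = rob_eventually(rob_pred(signal, 0.5), 0, 3)[0]
# rho ≈ 0.4 > 0, so the trajectory satisfies the formula at time 0
```

Embeddings built from such robustness values place formulas with similar satisfaction behavior close together, which is the property the decoder must invert.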
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based decoder inverts STL embeddings
Small STL vocabulary enables rapid formula generation
Optimizes in semantic space for requirement mining
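The requirement-mining loop above can be sketched as search directly in the continuous semantic space: propose a candidate vector, decode it into an STL formula, and score that formula as a classifier on labeled trajectories. Everything below is a hedged toy, assuming a 1-D embedding and a stub `decode` standing in for the trained Transformer decoder.

```python
# Hedged sketch of requirement mining via semantic-space search.
# `decode` is a hypothetical stand-in for the paper's trained decoder: here it
# maps a 1-D embedding to the formula "always (x >= c)".

import random

def decode(z):
    """Toy decoder: embedding z -> ('always', 'x>=', threshold)."""
    return ("always", "x>=", z[0])

def satisfies(traj, formula):
    _, _, c = formula
    return min(traj) >= c        # always (x >= c) over the whole trajectory

def score(formula, pos, neg):
    """Classification accuracy: positives should satisfy, negatives should not."""
    hits = sum(satisfies(t, formula) for t in pos)
    hits += sum(not satisfies(t, formula) for t in neg)
    return hits / (len(pos) + len(neg))

random.seed(0)
pos = [[random.uniform(0.5, 1.0) for _ in range(5)] for _ in range(20)]
neg = [[random.uniform(0.0, 0.6) for _ in range(5)] for _ in range(20)]

# simple random-restart hill climbing in the (here 1-D) semantic space
best_z = [0.0]
best_s = score(decode(best_z), pos, neg)
for _ in range(200):
    z = [best_z[0] + random.gauss(0, 0.1)]
    s = score(decode(z), pos, neg)
    if s > best_s:
        best_z, best_s = z, s
```

In the paper the search happens over high-dimensional semantic embeddings and the decoder returns full STL formulas, but the structure of the loop (perturb, decode, score, keep the best) is the same.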