Native Logical and Hierarchical Representations with Subspace Embeddings

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional neural embeddings represent concepts as point vectors, limiting their ability to model hierarchical structures and asymmetric logical relations. This paper proposes a subspace embedding paradigm, wherein concepts are mapped to linear subspaces: subspace dimension encodes generality, containment relations encode hierarchy, and geometric operations—such as intersection and orthogonal complement—enable differentiable logical reasoning. A key innovation is the introduction of a smooth, relaxed orthogonal projection operator, enabling joint optimization of subspace dimension and orientation. This is the first framework to unify conceptual hierarchy, logical expressivity, and vector-space representation within a single differentiable architecture. Empirically, it achieves state-of-the-art performance on WordNet reconstruction and link prediction tasks, and significantly outperforms dual-encoder baselines on natural language inference benchmarks. The approach provides geometrically grounded, interpretable modeling of semantic entailment.
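The summary's central idea can be made concrete with a small sketch: store each concept as an orthonormal basis of a linear subspace, read generality off the subspace dimension, and test hierarchy by containment (A is a kind of B iff span(A) ⊆ span(B), i.e. projecting A's basis onto B leaves it unchanged). All names and the toy hierarchy below are illustrative, not the paper's actual API.

```python
import numpy as np

def orthonormal_basis(vectors):
    """Orthonormal basis (as columns) for the span of the given row vectors, via QR."""
    q, _ = np.linalg.qr(np.asarray(vectors, dtype=float).T)
    return q

def projector(basis):
    """Orthogonal projector onto the subspace spanned by the basis columns."""
    return basis @ basis.T

def contains(big, small, tol=1e-8):
    """span(small) ⊆ span(big) iff projecting small's basis onto big leaves it fixed."""
    return np.allclose(projector(big) @ small, small, atol=tol)

# Hypothetical toy hierarchy: "animal" is a 2-D subspace, "dog" a 1-D line inside it.
animal = orthonormal_basis([[1, 0, 0], [0, 1, 0]])
dog = orthonormal_basis([[1, 1, 0]])
print(contains(animal, dog))   # True: dog ⊑ animal
print(contains(dog, animal))   # False: the relation is asymmetric
```

The asymmetry of the containment test is exactly what point embeddings with symmetric similarity lack.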

📝 Abstract
Traditional neural embeddings represent concepts as points, excelling at similarity but struggling with higher-level reasoning and asymmetric relationships. We introduce a novel paradigm: embedding concepts as linear subspaces. This framework inherently models generality via subspace dimensionality and hierarchy via subspace inclusion. It naturally supports set-theoretic operations such as intersection (conjunction), linear sum (disjunction), and orthogonal complement (negation), aligning with classical formal semantics. To enable differentiable learning, we propose a smooth relaxation of orthogonal projection operators, allowing both subspace orientation and dimension to be learned. Our method achieves state-of-the-art results in reconstruction and link prediction on WordNet. Furthermore, on natural language inference benchmarks, our subspace embeddings surpass bi-encoder baselines, offering an interpretable formulation of entailment that is both geometrically grounded and amenable to logical operations.
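One plausible instantiation of the smooth relaxation described in the abstract: parameterize a projector as P = U diag(g) Uᵀ, where U holds candidate directions and g = sigmoid(s) softly gates each direction in or out, so both orientation and effective dimension (the sum of the gates) remain differentiable. This is an illustrative construction under assumed parameterization, not necessarily the paper's exact operator.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_projector(U, s):
    """Relaxed orthogonal projector P = U diag(sigmoid(s)) U^T.

    U: (d, d) orthogonal matrix of candidate directions.
    s: (d,) logits; sigmoid(s) softly gates each direction, keeping both
    orientation (U) and effective dimension (sum of gates) differentiable.
    """
    g = sigmoid(s)
    return U @ np.diag(g) @ U.T, g.sum()

# Hypothetical example: gates near 1 select a ~2-D subspace of R^3.
U = np.eye(3)
s = np.array([8.0, 8.0, -8.0])   # first two directions "on", third "off"
P, eff_dim = soft_projector(U, s)
print(round(eff_dim, 3))   # effective dimension close to 2
```

As the gate logits saturate, P converges to a hard rank-2 orthogonal projector, recovering the exact subspace semantics in the limit.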
Problem

Research questions and friction points this paper is trying to address.

Modeling logical and hierarchical relationships in neural embeddings
Enabling set-theoretic operations like conjunction and negation
Providing interpretable geometric formulation of entailment
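The set-theoretic operations listed above have direct linear-algebra counterparts: linear sum for disjunction, orthogonal complement for negation, and intersection via De Morgan's law, A ∩ B = (A⊥ + B⊥)⊥. A minimal sketch, assuming subspaces are stored as orthonormal basis matrices (function names are illustrative):

```python
import numpy as np

def span(vectors):
    """Orthonormal basis (columns) for the span of the given row vectors."""
    return np.linalg.qr(np.asarray(vectors, dtype=float).T)[0]

def linear_sum(A, B):
    """Disjunction: smallest subspace containing both operands."""
    u, sv, _ = np.linalg.svd(np.hstack([A, B]))
    rank = int((sv > 1e-8).sum())
    return u[:, :rank]

def complement(A, d):
    """Negation: orthogonal complement of span(A) in R^d."""
    u, sv, _ = np.linalg.svd(A)
    rank = int((sv > 1e-8).sum())
    return u[:, rank:]

def intersection(A, B, d):
    """Conjunction via De Morgan: A ∩ B = (A⊥ + B⊥)⊥."""
    return complement(linear_sum(complement(A, d), complement(B, d)), d)

# Hypothetical 3-D example: two planes intersect in a line.
A = span([[1, 0, 0], [0, 1, 0]])   # xy-plane
B = span([[0, 1, 0], [0, 0, 1]])   # yz-plane
I = intersection(A, B, 3)
print(I.shape[1])   # dimension of the intersection: 1 (the y-axis)
```

Because each operation returns another subspace, logical expressions compose, which is what distinguishes this representation from similarity-only point embeddings.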
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding concepts as linear subspaces
Smooth relaxation of orthogonal projection operators
State-of-the-art results on WordNet reconstruction, link prediction, and natural language inference benchmarks