AI Summary
Current language models often produce semantic representations that lack interpretability and controllability, hindering localised, quasi-symbolic, and compositional semantic manipulation. To address this limitation, this work proposes a novel approach within the variational autoencoder (VAE) framework that explicitly models the geometric structure of the latent space to achieve disentangled, isolated, and directionally controllable semantic features. By systematically enhancing the interpretability and structural organisation of the latent space, the method enables precise and reliable semantic control in tasks such as sentence generation and explanatory natural language inference (Explanatory NLI). This advancement significantly improves the model's capacity for high-level semantic manipulation while maintaining fidelity to the underlying linguistic structures.
Abstract
This thesis advances semantic representation learning to render language representations and models more semantically and geometrically interpretable, and to enable localised, quasi-symbolic, compositional control through deliberate shaping of their latent space geometry. We pursue this goal within a variational autoencoder (VAE) framework, exploring two complementary research directions: (i) Sentence-level learning and control: disentangling and manipulating specific semantic features in the latent space to guide sentence generation, with explanatory text serving as the testbed; and (ii) Reasoning-level learning and control: isolating and steering inference behaviours in the latent space to control natural language inference (NLI). In this direction, we focus on Explanatory NLI tasks, in which two premises (explanations) are provided to infer a conclusion. The overarching objective is to move toward language models whose internal semantic representations can be systematically interpreted, precisely structured, and reliably directed. Across the thesis, we introduce a set of novel theoretical frameworks and practical methodologies, together with corresponding experiments, to demonstrate that our approaches enhance both the interpretability and controllability of latent spaces for natural language.