Improving Rule-based Reasoning in LLMs via Neurosymbolic Representations

📅 2025-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from low accuracy and poor reliability in rule-guided reasoning tasks, such as mathematical reasoning. To address this, we propose a neuro-symbolic latent state encoding framework that enables end-to-end differentiable mapping from LLM hidden states to a symbolically interpretable vector space, coupled with joint decoding—thereby enhancing reasoning interpretability and efficiency without compromising general-purpose capabilities. Our core contributions are threefold: (1) a differentiable neuro-symbolic representation learning mechanism; (2) a bidirectional mapping between latent state space and symbolic vectors; and (3) a contrastive evaluation framework tailored to mathematical reasoning. Experiments demonstrate that our method reduces cross-entropy loss by 82.86% and improves correct solution rate by 24.5× on mathematical reasoning benchmarks—substantially outperforming chain-of-thought prompting and LoRA fine-tuning—while preserving performance on diverse downstream tasks.

📝 Abstract
Large language models (LLMs) continue to face challenges in reliably solving reasoning tasks, particularly tasks that involve precise rule following, as often found in mathematical reasoning tasks. This paper introduces a novel neurosymbolic method that improves LLM reasoning by encoding hidden states into neurosymbolic vectors, allowing for problem-solving within a neurosymbolic vector space. The results are decoded and combined with the original hidden state, boosting the model's performance on numerical reasoning tasks. By offloading computation through neurosymbolic representations, this method improves efficiency, reliability, and interpretability. Our experimental results demonstrate an average of 82.86% lower cross-entropy loss and 24.50 times more problems correctly solved on a suite of mathematical reasoning problems compared to chain-of-thought prompting and supervised fine-tuning (LoRA), while at the same time not hindering the performance of the LLM on other tasks.
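The abstract describes a pipeline of encoding hidden states into symbolic vectors, solving in that space, then decoding and blending the result back into the hidden state. The paper does not publish its architecture here, so the following is only a minimal sketch under assumed details: the encoder/decoder are stand-in random linear maps (`W_enc`, `W_dec`), the symbolic "rule" is a toy doubling matrix, and the blend weight `alpha` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM, SYM_DIM = 16, 4

# Hypothetical linear maps standing in for the paper's learned
# encoder/decoder between LLM hidden states and symbolic vectors.
W_enc = rng.normal(size=(SYM_DIM, HIDDEN_DIM)) / np.sqrt(HIDDEN_DIM)
W_dec = rng.normal(size=(HIDDEN_DIM, SYM_DIM)) / np.sqrt(SYM_DIM)

def encode(hidden):
    """Map an LLM hidden state to a symbolic vector."""
    return W_enc @ hidden

def solve_symbolically(sym_vec):
    """Placeholder for exact rule application in symbolic space
    (here, a fixed linear 'rule' that doubles the vector)."""
    rule = 2.0 * np.eye(SYM_DIM)
    return rule @ sym_vec

def decode_and_combine(sym_result, hidden, alpha=0.5):
    """Decode the symbolic result and blend it with the original
    hidden state before it re-enters the model."""
    return alpha * (W_dec @ sym_result) + (1.0 - alpha) * hidden

hidden = rng.normal(size=HIDDEN_DIM)
combined = decode_and_combine(solve_symbolically(encode(hidden)), hidden)
print(combined.shape)
```

The key property the paper claims to exploit is that exact rule application happens in the symbolic space rather than through next-token sampling; everything outside that space (the linear maps and blend weight above) is an illustrative assumption, not the authors' implementation.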
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle to reliably follow precise rules in reasoning tasks
Low accuracy on numerical and mathematical reasoning benchmarks
Limited efficiency and interpretability in rule-based computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encodes LLM hidden states into neurosymbolic vectors
Decodes symbolic results back into the hidden state to improve numerical reasoning
Offloads computation to symbolic representations, boosting efficiency and reliability