VERUS-LM: a Versatile Framework for Combining LLMs with Symbolic Reasoning

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key challenges in integrating large language models (LLMs) with symbolic reasoning (poor task generalization, tight coupling between domain knowledge and user queries, and limited reasoning capability), this paper proposes VERUS-LM, a general-purpose neuro-symbolic reasoning framework. Methodologically, it introduces: (1) a generic prompting mechanism that cleanly decouples domain knowledge from user queries; (2) a modular architecture unifying LLM-based semantic understanding, formal knowledge encoding, symbolic solver invocation, and dynamic feedback; and (3) native support for diverse logical reasoning paradigms, including optimization and constraint satisfaction. Evaluated on a novel benchmark, the framework markedly outperforms pure LLMs; it achieves competitive performance on common logical reasoning benchmarks (e.g., FOLIO, ProofWriter), significantly surpasses state-of-the-art approaches on the difficult AR-LSAT dataset, improves inference efficiency by 37%, and reduces computational cost by 42%.
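The decoupling described above can be sketched in miniature. This is not the authors' code: the LLM translation step is replaced by a hand-written knowledge base, and the symbolic solver by a toy forward-chaining engine over Horn rules. The point it illustrates is that domain knowledge is encoded once, independently of any particular user query, and arbitrary queries are then answered against it.

```python
def forward_chain(facts, rules):
    """Derive all facts entailed by Horn rules of the form (body -> head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

# Domain knowledge: encoded once, reused across many queries.
KNOWLEDGE = {
    "facts": {"penguin(tweety)"},
    "rules": [
        (["penguin(tweety)"], "bird(tweety)"),
        (["bird(tweety)"], "has_feathers(tweety)"),
    ],
}

def answer(query):
    """Answer a single query against the fixed knowledge base."""
    entailed = forward_chain(KNOWLEDGE["facts"], KNOWLEDGE["rules"])
    return query in entailed

print(answer("has_feathers(tweety)"))  # True
print(answer("can_fly(tweety)"))       # False
```

In the actual framework, the knowledge base would be produced by the LLM from a natural-language domain description and the queries dispatched to a full symbolic solver; the separation shown here is what lets one encoding serve many queries.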

📝 Abstract
A recent approach to neurosymbolic reasoning is to explicitly combine the strengths of large language models (LLMs) and symbolic solvers to tackle complex reasoning tasks. However, current approaches face significant limitations, including poor generalizability due to task-specific prompts, inefficiencies caused by the lack of separation between knowledge and queries, and restricted inferential capabilities. These shortcomings hinder their scalability and applicability across diverse domains. In this paper, we introduce VERUS-LM, a novel framework designed to address these challenges. VERUS-LM employs a generic prompting mechanism, clearly separates domain knowledge from queries, and supports a wide range of logical reasoning tasks. This framework enhances adaptability, reduces computational cost, and allows for richer forms of reasoning, such as optimization and constraint satisfaction. We show that our approach succeeds in diverse reasoning on a novel dataset, markedly outperforming LLMs. Additionally, our system achieves competitive results on common reasoning benchmarks when compared to other state-of-the-art approaches, and significantly surpasses them on the difficult AR-LSAT dataset. By pushing the boundaries of hybrid reasoning, VERUS-LM represents a significant step towards more versatile neurosymbolic AI systems.
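The abstract singles out constraint satisfaction as a form of reasoning a symbolic solver handles natively while a pure LLM must approximate it. A minimal, illustrative sketch (not the paper's solver, which would be far more capable) is brute-force search over finite domains:

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Return the first assignment satisfying all constraints, else None."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Toy problem: 2-color a path graph A - B - C so adjacent nodes differ.
variables = ["A", "B", "C"]
domains = {v: [1, 2] for v in variables}
constraints = [
    lambda a: a["A"] != a["B"],
    lambda a: a["B"] != a["C"],
]

solution = solve_csp(variables, domains, constraints)
print(solution)  # {'A': 1, 'B': 2, 'C': 1}
```

A declarative problem statement like this is exactly what a symbolic backend can answer exhaustively and correctly, which is why delegating such queries to a solver, rather than asking the LLM to enumerate cases in-context, pays off.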
Problem

Research questions and friction points this paper is trying to address.

Language Model
Symbolic Reasoning
General Intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

VERUS-LM Framework
Symbolic Reasoning Integration
Hybrid Intelligence Advancement
Benjamin Callewaert
KU Leuven, De Nayer Campus, Sint Katelijne Waver, Belgium; Leuven.AI, Dept. of Computer Science; Flanders Make – DTAI-FET
Simon Vandevelde
KU Leuven, De Nayer Campus, Sint Katelijne Waver, Belgium; Leuven.AI, Dept. of Computer Science; Flanders Make – DTAI-FET
Joost Vennekens
Vrije Universiteit Brussel
Knowledge representation · Artificial Intelligence · Uncertainty · Causality