Adaptive LLM-Symbolic Reasoning via Dynamic Logical Solver Composition

📅 2025-10-08
🤖 AI Summary
Existing neural-symbolic NLP approaches rely on static solver integration, limiting adaptability to diverse formal reasoning paradigms. Method: We propose the first adaptive multi-paradigm neural-symbolic reasoning framework that leverages large language models (LLMs) to infer implicit reasoning paradigms—e.g., first-order logic, constraint solving, or inductive reasoning—from natural language questions, and dynamically orchestrates specialized symbolic solvers via an automated formalization interface and multi-logic solver scheduler. Contribution/Results: Experiments demonstrate significant improvements over strong baselines: +27% accuracy over GPT-4o and +6% over DeepSeek-V3.1 on multi-paradigm reasoning tasks. Under zero-shot and chain-of-thought settings, our method boosts GPT-4o’s performance by up to 10%. To our knowledge, this is the first framework enabling end-to-end, natural-language-driven, adaptive formal reasoning.

📝 Abstract
Neuro-symbolic NLP methods aim to leverage the complementary strengths of large language models and formal logical solvers. However, current approaches are mostly static in nature, i.e., the integration of a target solver is predetermined at design time, hindering the ability to employ diverse formal inference strategies. To address this, we introduce an adaptive, multi-paradigm, neuro-symbolic inference framework that: (1) automatically identifies formal reasoning strategies from problems expressed in natural language; and (2) dynamically selects and applies specialized formal logical solvers via autoformalization interfaces. Extensive experiments on individual and multi-paradigm reasoning tasks support the following conclusions: LLMs are effective at predicting the necessary formal reasoning strategies, with an accuracy above 90 percent. This enables flexible integration with formal logical solvers, allowing our framework to outperform GPT-4o and DeepSeek-V3.1 by 27 and 6 percent, respectively. Moreover, adaptive reasoning can even positively impact pure LLM methods, yielding gains of 10, 5, and 6 percent in zero-shot, CoT, and symbolic CoT settings with GPT-4o. Finally, although smaller models struggle with adaptive neuro-symbolic reasoning, post-training offers a viable path to improvement. Overall, this work establishes the foundations for adaptive LLM-symbolic reasoning, offering a path forward for unifying material and formal inferences on heterogeneous reasoning challenges.
Problem

Research questions and friction points this paper is trying to address.

Existing neuro-symbolic methods bind a single target solver at design time, making solver integration static
Static integration prevents employing diverse formal inference strategies (e.g., first-order logic, constraint solving, inductive reasoning) on heterogeneous problems
LLMs and formal solvers lack a flexible, natural-language-driven interface for selecting the appropriate reasoning paradigm per problem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically identifies formal reasoning strategies from natural language
Dynamically selects and applies specialized logical solvers
Enables flexible integration between LLMs and formal solvers
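The adaptive mechanism the bullets above describe can be sketched as a classify-then-dispatch loop: an LLM predicts the reasoning paradigm implied by the question, and a scheduler routes the problem to the matching specialized solver. The sketch below is hypothetical and not the authors' implementation; the paradigm classifier (an LLM call in the real framework) is mocked with a keyword heuristic, and the solver functions are placeholders where autoformalization and an external prover/SMT solver would be invoked.

```python
# Hypothetical sketch of adaptive solver dispatch (not the paper's code).
# In the actual framework, infer_paradigm would be an LLM call and each
# solver would autoformalize the problem and invoke an external engine.

def infer_paradigm(question: str) -> str:
    """Stand-in for the LLM that predicts the formal reasoning paradigm."""
    q = question.lower()
    if "all " in q or "every " in q:
        return "first_order_logic"
    if "schedule" in q or "constraint" in q:
        return "constraint_solving"
    return "inductive_reasoning"

def solve_fol(problem: str) -> str:
    # Placeholder: would autoformalize to FOL and call a theorem prover.
    return f"FOL solver invoked on: {problem}"

def solve_csp(problem: str) -> str:
    # Placeholder: would translate to constraints and call an SMT/CSP solver.
    return f"Constraint solver invoked on: {problem}"

def solve_inductive(problem: str) -> str:
    # Placeholder: would run an inductive generalization procedure.
    return f"Inductive reasoner invoked on: {problem}"

# Multi-logic solver scheduler: paradigm label -> specialized solver.
SOLVERS = {
    "first_order_logic": solve_fol,
    "constraint_solving": solve_csp,
    "inductive_reasoning": solve_inductive,
}

def adaptive_solve(question: str) -> str:
    """End-to-end: infer the paradigm, then dispatch to the matching solver."""
    paradigm = infer_paradigm(question)
    return SOLVERS[paradigm](question)

if __name__ == "__main__":
    print(adaptive_solve("All birds can fly; Tweety is a bird. Can Tweety fly?"))
```

The dictionary-based scheduler is the key design point: adding a new reasoning paradigm means registering one more solver entry, rather than redesigning a statically integrated pipeline.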
Lei Xu
Idiap Research Institute, Switzerland
Pierre Beckmann
EPFL, IDIAP, University of Bern
Philosophy of AI · Deep Learning · Neuro-symbolic AI
Marco Valentino
University of Sheffield
Natural Language Processing · Neurosymbolic AI · Explanation
André Freitas
Department of Computer Science, University of Manchester, United Kingdom