🤖 AI Summary
Existing LLM-based backward-chaining reasoning methods (e.g., Least-to-Most, LAMBADA) lack logical completeness and interpretability because they ignore core mechanisms of SLD resolution—backtracking, subgoal management, and unification-based matching.
Method: We propose a symbolic backward-chaining reasoning framework that systematically integrates the complete algorithmic components of SLD resolution into LLM inference. Our approach orchestrates a symbolic logic solver—which governs inference structure and control flow—with an LLM that dynamically supplies semantic content on demand. It employs dynamic invocation and structured rule application to enable goal-directed, structured proof generation.
Contribution/Results: Evaluated on seven deductive, relational, and arithmetic reasoning benchmarks, our method significantly outperforms mainstream baselines, improving both reasoning accuracy and proof verifiability.
📝 Abstract
To improve the performance and explainability of LLM-based natural language reasoning, structured reasoning can be applied to generate explicitly structured proofs. Among the various methods for structured reasoning, we specifically focus on backward chaining, where the proof goal is recursively decomposed into subgoals by searching for and applying rules. We argue that current LLM-based backward chaining systems (e.g., Least-to-Most prompting and LAMBADA) are incomplete, as they omit crucial algorithmic components identified in the classic backward chaining algorithm of computational logic (SLD resolution). To address this, we propose a novel backward chaining system, SymBa (Symbolic Backward Chaining), which integrates a symbolic solver with an LLM. In SymBa, the solver controls the proof process, and the LLM is called only when the solver requires new information to complete the proof. Empowered by this completeness, SymBa achieves significant improvements over the baselines on seven deductive, relational, and arithmetic reasoning benchmarks.
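The solver-led control flow described above can be sketched in a few lines of Python. This is a minimal propositional illustration, not the paper's implementation: full SLD resolution would use term unification and richer backtracking, and `ask_llm` here is a hypothetical stub standing in for the LLM that SymBa queries when the solver lacks information.

```python
# Sketch of solver-led backward chaining with an on-demand knowledge source.
# A rule is (head, [body subgoals]); a fact is (head, []).

def backward_chain(goal, rules, ask_llm, depth=0, max_depth=10):
    """Return True if `goal` is provable, backtracking over alternative rules.

    When no static rule closes a goal, the solver dynamically invokes
    `ask_llm` (here a stub for the LLM) to supply a new fact.
    """
    if depth > max_depth:          # guard against runaway recursion
        return False
    for head, body in list(rules):
        if head == goal:           # propositional match (SLD would unify terms)
            if all(backward_chain(sub, rules, ask_llm, depth + 1)
                   for sub in body):
                return True
    # Solver is stuck on this goal: ask the knowledge source on demand.
    if ask_llm(goal):
        rules.append((goal, []))   # cache the newly supplied fact
        return True
    return False

rules = [
    ("flies(tweety)", ["bird(tweety)", "not_penguin(tweety)"]),
]
known_facts = {"bird(tweety)", "not_penguin(tweety)"}
ask_llm = lambda g: g in known_facts   # stub: the real system prompts an LLM

print(backward_chain("flies(tweety)", rules, ask_llm))  # True
```

Note the division of labor this sketch mirrors: the solver owns goal decomposition, backtracking, and termination, while the language model is consulted only for individual missing facts, so the overall proof remains a verifiable symbolic derivation.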