Recursive Decomposition of Logical Thoughts: Framework for Superior Reasoning and Knowledge Propagation in Large Language Models

📅 2025-01-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address inefficient reasoning pathways and insufficient knowledge sharing in large language models (LLMs) on complex reasoning tasks, this paper proposes RDoLT (Recursive Decomposition of Logical Thought prompting). RDoLT introduces three key innovations: (1) a recursive task decomposition mechanism that hierarchically breaks complex problems into subproblems of increasing complexity; (2) a confidence-driven selection and scoring strategy that adaptively retains the most promising reasoning thoughts; and (3) a human-inspired knowledge propagation module that tracks both strong and weak thoughts so information from earlier reasoning stages carries forward. The method combines chain-of-thought prompting, multi-level decomposition, and memory-augmented knowledge propagation. Experiments show that RDoLT achieves 90.98% accuracy on GSM8K, surpassing the prior state of the art by 6.28%, and yields consistent gains of 5.5%–6.75% across major mathematical and logical reasoning benchmarks, while also improving reasoning robustness and generalization.
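For a concrete picture of the pipeline described above, the sketch below shows one plausible way to wire the three components together in Python. This is not the authors' code: the callables `decompose`, `generate_thoughts`, and `score_thought` stand in for the paper's prompting, thought-generation, and scoring mechanisms (in practice each would wrap an LLM call), and the `KnowledgePool` structure, the 0.7 acceptance threshold, and the recursion depth are illustrative assumptions rather than values taken from the paper.

```python
# Hypothetical sketch of an RDoLT-style loop: recursive decomposition,
# score-gated thought selection, and propagation of strong/weak thoughts.
# Names, thresholds, and depth limits are placeholders, not the paper's values.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class KnowledgePool:
    """Tracks accepted ("strong") and rejected ("weak") thoughts so later
    sub-tasks can reuse good partial results and avoid known dead ends."""
    strong: List[str] = field(default_factory=list)
    weak: List[str] = field(default_factory=list)

    def as_context(self) -> str:
        # Serialized view of accumulated knowledge, fed back into prompting.
        return (f"Strong thoughts so far: {self.strong}\n"
                f"Weak thoughts to avoid: {self.weak}")


def rdolt_solve(
    task: str,
    decompose: Callable[[str], List[str]],               # task -> simpler sub-tasks
    generate_thoughts: Callable[[str, str], List[str]],  # (task, context) -> candidate thoughts
    score_thought: Callable[[str, str], float],          # (task, thought) -> score in [0, 1]
    pool: Optional[KnowledgePool] = None,
    threshold: float = 0.7,
    depth: int = 0,
    max_depth: int = 2,
) -> KnowledgePool:
    """Recursively decompose `task`, gate candidate thoughts by score, and
    propagate both strong and weak thoughts to subsequent reasoning steps."""
    pool = pool or KnowledgePool()
    if depth < max_depth:
        # (1) Recursive decomposition: solve simpler sub-tasks first.
        for sub in decompose(task):
            rdolt_solve(sub, decompose, generate_thoughts, score_thought,
                        pool, threshold, depth + 1, max_depth)
    # (2) Generate candidate thoughts conditioned on accumulated knowledge,
    # (3) then gate each one by its score before propagating it onward.
    for thought in generate_thoughts(task, pool.as_context()):
        bucket = pool.strong if score_thought(task, thought) >= threshold else pool.weak
        bucket.append(thought)
    return pool
```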

📝 Abstract
Enhancing the reasoning capabilities of Large Language Models remains a critical challenge in artificial intelligence. We introduce RDoLT, Recursive Decomposition of Logical Thought prompting, a novel framework that significantly boosts LLM reasoning performance. RDoLT is built on three key innovations: (1) recursively breaking down complex reasoning tasks into sub-tasks of progressive complexity; (2) employing an advanced selection and scoring mechanism to identify the most promising reasoning thoughts; and (3) integrating a knowledge propagation module that mimics human learning by keeping track of strong and weak thoughts for information propagation. Our approach was evaluated across multiple benchmarks, including GSM8K, SVAMP, MultiArith, LastLetterConcatenation, and Gaokao2023 Math. The results demonstrate that RDoLT consistently outperforms existing state-of-the-art techniques, achieving 90.98 percent accuracy on GSM8K with ChatGPT-4 and surpassing the previous best result by 6.28 percent. Similar improvements were observed on the other benchmarks, with accuracy gains ranging from 5.5 percent to 6.75 percent. These findings highlight RDoLT's potential to advance prompt engineering, offering a more effective and generalizable approach to complex reasoning tasks.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Complex Problem Solving
Knowledge Sharing
Innovation

Methods, ideas, or system contributions that make the work stand out.

RDoLT
Problem Decomposition
Logical Reasoning Enhancement
🔎 Similar Papers
No similar papers found.