Problem-Solving Logic Guided Curriculum In-Context Learning for LLMs Complex Reasoning

📅 2025-02-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit limited complex reasoning capabilities in in-context learning (ICL), primarily due to reliance on superficial input similarities rather than structured logical decomposition. Method: This paper proposes a curriculum-based, logic-aware ICL framework that explicitly models problem-solving logic as parseable instruction sequences. Leveraging the BREAK dataset, we construct logic instruction templates and fine-tune LLMs to recognize stepwise solution structures; difficulty is quantified by step count, enabling systematic, difficulty-ordered prompt curation—from simple to complex—within the ICL context. Contribution/Results: Our approach departs from conventional similarity-driven ICL paradigms by embedding explicit logical scaffolding into demonstration selection and ordering. On rigorous complex reasoning benchmarks—including GSM8K, MultiArith, and AddSub—it consistently outperforms state-of-the-art ICL methods, achieving absolute accuracy gains of 5.2–9.7 percentage points. Moreover, it reduces average reasoning steps and associated computational overhead, demonstrating improved reasoning efficiency and scalability.

Technology Category

Application Category

📝 Abstract
In-context learning (ICL) can significantly enhance the complex reasoning capabilities of large language models (LLMs), with the key lying in the selection and ordering of demonstration examples. Previous methods typically relied on simple features to measure the relevance between examples. We argue that these features are not sufficient to reflect the intrinsic connections between examples. In this study, we propose a curriculum ICL strategy guided by problem-solving logic. We select demonstration examples by analyzing the problem-solving logic and order them based on curriculum learning. Specifically, we constructed a problem-solving logic instruction set based on the BREAK dataset and fine-tuned a language model to analyze the problem-solving logic of examples. Subsequently, we selected appropriate demonstration examples based on problem-solving logic and assessed their difficulty according to the number of problem-solving steps. In accordance with the principles of curriculum learning, we ordered the examples from easy to hard to serve as contextual prompts. Experimental results on multiple benchmarks indicate that our method outperforms previous ICL approaches in terms of performance and efficiency, effectively enhancing the complex reasoning capabilities of LLMs. Our project will be publicly available subsequently.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs' complex reasoning via ICL.
Select examples using problem-solving logic.
Order examples from easy to hard.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curriculum ICL strategy
Problem-solving logic analysis
Difficulty-based example ordering
🔎 Similar Papers
No similar papers found.
X
Xuetao Ma
School of Artificial Intelligence, Beijing Normal University, Beijing, China
Wenbin Jiang
Wenbin Jiang
Hangzhou Dianzi University
Speech ProcessingSpeech EnhancementSpeech Recognition
H
Hua Huang
School of Artificial Intelligence, Beijing Normal University, Beijing, China