🤖 AI Summary
Conventional in-context learning (ICL) struggles with complex mathematical reasoning because it depends heavily on high-quality exemplars and follows opaque, unstructured reasoning paths. Method: HiAR-ICL shifts the paradigm from exemplar-driven prompting to the composition of higher-order cognitive patterns, abstracting context into modular, composable reasoning schemata. The framework introduces (1) five atomic reasoning actions and a structured "thought card" mechanism; (2) Monte Carlo Tree Search (MCTS) for dynamic, interpretable exploration of reasoning paths; and (3) a cognitive-complexity-driven mechanism that matches each problem to an appropriate thought card. Results: With Qwen2.5-7B-Instruct as the base model, HiAR-ICL achieves 79.6% accuracy on the MATH benchmark, surpassing GPT-4o (76.6%) and Claude 3.5 (71.1%) and setting a new state of the art among open-source models.
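To make step (2) concrete, here is a minimal sketch of MCTS searching over chains of the five atomic actions. The action label strings, the tree depth, and the random rollout reward are all assumptions for demonstration; the reward is a stand-in for however the framework actually scores completed chains (e.g., answer-checking on seed problems) before distilling them into thought cards.

```python
import math
import random
from dataclasses import dataclass, field

# Five atomic reasoning actions; these label strings are illustrative
# placeholders, not the authors' exact identifiers.
ACTIONS = [
    "system_analysis",
    "one_step_thought",
    "chain_of_thought",
    "divide_and_conquer",
    "self_reflection",
]

@dataclass
class Node:
    """A search-tree node holding a partial chain of reasoning actions."""
    actions: tuple = ()
    visits: int = 0
    value: float = 0.0
    children: list = field(default_factory=list)

def uct(child: Node, parent: Node, c: float = 1.4) -> float:
    """Standard UCT score balancing exploitation and exploration."""
    if child.visits == 0:
        return float("inf")
    exploit = child.value / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def rollout_reward(actions: tuple) -> float:
    """Stand-in for scoring a finished chain (e.g., by checking answers
    on seed problems); random here purely for demonstration."""
    return random.random()

def mcts(root: Node, iterations: int = 200, max_depth: int = 4) -> tuple:
    for _ in range(iterations):
        # Selection: descend by UCT until an unexpanded node.
        node, path = root, [root]
        while node.children:
            node = max(node.children, key=lambda ch: uct(ch, node))
            path.append(node)
        # Expansion: branch once per atomic action (depth-capped,
        # so nodes at max_depth stay leaves).
        if len(node.actions) < max_depth:
            node.children = [Node(node.actions + (a,)) for a in ACTIONS]
            node = random.choice(node.children)
            path.append(node)
        # Simulation and backpropagation.
        reward = rollout_reward(node.actions)
        for n in path:
            n.visits += 1
            n.value += reward
    # Greedily follow the most-visited branch; the resulting action
    # chain is what would be stored as a reusable "thought card".
    node = root
    while node.children:
        node = max(node.children, key=lambda ch: ch.visits)
    return node.actions

if __name__ == "__main__":
    print("distilled action chain:", mcts(Node()))
```

Because the search is over short action chains rather than raw token sequences, the distilled cards stay compact and human-readable, which is what makes the resulting reasoning paths interpretable.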
📝 Abstract
In-context Learning (ICL) enables large language models (LLMs) to tackle downstream tasks through sophisticated prompting and high-quality demonstrations. However, this traditional ICL paradigm shows limitations when facing complex mathematical reasoning tasks, primarily due to its heavy dependence on example quality and the necessity for human intervention in challenging scenarios. To address these limitations, this paper presents HiAR-ICL, a **Hi**gh-level **A**utomated **R**easoning paradigm in **ICL** that shifts focus from specific examples to abstract thinking patterns, extending the conventional concept of context in ICL. HiAR-ICL introduces five atomic reasoning actions as fundamental components for constructing chain-structured patterns. Using Monte Carlo Tree Search, we explore reasoning paths and construct thought cards to guide subsequent inference. We then develop a cognitive complexity framework that dynamically matches problems with appropriate thought cards. Experimental results demonstrate HiAR-ICL's effectiveness, achieving state-of-the-art accuracy (79.6%) on the MATH benchmark with Qwen2.5-7B-Instruct, surpassing GPT-4o (76.6%) and Claude 3.5 (71.1%).
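The final matching step can be pictured with a small sketch. The complexity heuristic below (sentence and number counts) is purely hypothetical, and `ThoughtCard`, `problem_complexity`, and `match_card` are invented names for illustration; the paper's cognitive complexity framework defines its own criteria.

```python
from dataclasses import dataclass

@dataclass
class ThoughtCard:
    """A reasoning template distilled by MCTS: an ordered chain of atomic
    actions tagged with the complexity level of problems it suits."""
    actions: tuple
    complexity: float

def problem_complexity(problem: str) -> float:
    """Hypothetical stand-in for the paper's cognitive complexity score,
    approximated here by crude text statistics."""
    sentences = max(problem.count(".") + problem.count("?"), 1)
    numbers = sum(tok.strip(".,?!").isdigit() for tok in problem.split())
    return sentences + 0.5 * numbers

def match_card(problem: str, cards: list) -> ThoughtCard:
    """Pick the card whose complexity is closest to the problem's, so
    harder problems receive longer reasoning chains."""
    target = problem_complexity(problem)
    return min(cards, key=lambda c: abs(c.complexity - target))

cards = [
    ThoughtCard(("chain_of_thought",), 1.0),
    ThoughtCard(("system_analysis", "divide_and_conquer",
                 "one_step_thought", "self_reflection"), 4.0),
]
print(match_card("If 3x + 5 = 20, what is x?", cards).actions)
```

The design intuition is that simple problems should not pay the inference cost of long multi-action chains, while hard problems should not be forced through a single chain-of-thought pass.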