Diverge to Induce Prompting: Multi-Rationale Induction for Zero-Shot Reasoning

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of reasoning paths in existing Chain-of-Thought prompting methods, which often stems from insufficient guidance and the limited generalizability of single-strategy approaches across diverse tasks. To overcome these limitations, the authors propose Diverge-to-Induce Prompting (DIP), a framework that first guides large language models to generate multiple high-level reasoning rationales for the same problem, then refines each rationale into a detailed step-by-step draft, and finally integrates the drafts into a unified reasoning plan. DIP introduces a mechanism for generating and fusing multiple reasoning rationales, enhancing the robustness and accuracy of zero-shot reasoning without requiring extensive sampling. Experimental results show that DIP consistently outperforms single-strategy prompting methods across multiple benchmark tasks.

📝 Abstract
To address the instability of unguided reasoning paths in standard Chain-of-Thought prompting, recent methods guide large language models (LLMs) by first eliciting a single reasoning strategy. However, relying on just one strategy for each question can still limit performance across diverse tasks. We propose Diverge-to-Induce Prompting (DIP), a framework that first prompts an LLM to generate multiple diverse high-level rationales for each question. Each rationale is then elaborated into a detailed, step-by-step draft plan. Finally, these draft plans are induced into a final plan. DIP enhances zero-shot reasoning accuracy without reliance on resource-intensive sampling. Experiments show that DIP outperforms single-strategy prompting, demonstrating the effectiveness of multi-plan induction for prompt-based reasoning.
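The diverge-elaborate-induce pipeline described above can be sketched in code. The paper's exact prompts and implementation are not given here, so everything below is an illustrative assumption: `llm` stands in for any text-in/text-out model call, and the prompt wording, the default of three rationales, and the final answer-extraction step are hypothetical choices, not the authors' method.

```python
def dip(question, llm, n_rationales=3):
    """Hypothetical sketch of Diverge-to-Induce Prompting (DIP).

    `llm` is a placeholder callable: prompt string in, completion string out.
    Prompt templates here are illustrative, not taken from the paper.
    """
    # Stage 1: diverge -- elicit several distinct high-level rationales.
    rationales = [
        llm(f"Propose high-level strategy #{i + 1}, different from any "
            f"earlier strategies, for solving: {question}")
        for i in range(n_rationales)
    ]
    # Stage 2: elaborate each rationale into a step-by-step draft plan.
    drafts = [
        llm(f"Question: {question}\nStrategy: {r}\n"
            "Expand this strategy into a detailed step-by-step plan.")
        for r in rationales
    ]
    # Stage 3: induce -- fuse the draft plans into one unified plan,
    # then answer the question by following that plan.
    joined = "\n\n".join(f"Draft plan {i + 1}:\n{d}"
                         for i, d in enumerate(drafts))
    plan = llm(f"Question: {question}\n{joined}\n"
               "Combine the draft plans above into a single coherent plan.")
    return llm(f"Question: {question}\nPlan: {plan}\n"
               "Follow the plan step by step and give the final answer.")
```

Note the call budget: for `n` rationales the pipeline issues `2n + 2` model calls regardless of task, which is the sense in which DIP avoids the resource-intensive sampling of self-consistency-style methods.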
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought prompting
reasoning instability
single reasoning strategy
zero-shot reasoning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverge-to-Induce Prompting
multi-rationale induction
zero-shot reasoning
Chain-of-Thought prompting
large language models