SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited effectiveness of chain-of-thought (CoT) prompting for large language models (LLMs) on complex reasoning tasks, this paper proposes a self-asking sequential question-answering reasoning engine. The method actively decomposes the primary query into auxiliary questions, explores problem dimensions through iterative multi-step generation and answering, and jointly verifies intermediate conclusions—thereby overcoming the unidirectional nature of conventional CoT. Technically, it integrates sequential question generation, stepwise answer derivation, and an ensemble response mechanism, and is compatible with mainstream models including Llama 3 and GPT-4o. Evaluated across multiple question-answering benchmarks, our approach achieves an average 12.7% improvement in reasoning accuracy over standard CoT and rephrasing-based baselines. The implementation is publicly available.

Technology Category

Application Category

📝 Abstract
In the rapidly evolving field of Natural Language Processing, Large Language Models (LLMs) are tasked with increasingly complex reasoning challenges. Traditional methods like chain-of-thought prompting have shown promise but often fall short in fully leveraging a model's reasoning capabilities. This paper introduces SQuARE (Sequential Question Answering Reasoning Engine), a novel prompting technique designed to improve reasoning through a self-interrogation paradigm. Building upon CoT frameworks, SQuARE prompts models to generate and resolve multiple auxiliary questions before tackling the main query, promoting a more thorough exploration of various aspects of a topic. Our expansive evaluations, conducted with Llama 3 and GPT-4o models across multiple question-answering datasets, demonstrate that SQuARE significantly surpasses traditional CoT prompts and existing rephrase-and-respond methods. By systematically decomposing queries, SQuARE advances LLM capabilities in reasoning tasks. The code is publicly available at https://github.com/IntelLabs/RAG-FiT/tree/square.
Problem

Research questions and friction points this paper is trying to address.

Enhances reasoning in large language models
Improves chain-of-thought prompting techniques
Systematically decomposes queries for better exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-interrogation paradigm enhances reasoning
Generates auxiliary questions before main query
Systematically decomposes queries for better exploration
🔎 Similar Papers
No similar papers found.