Table as Thought: Exploring Structured Thoughts in LLM Reasoning

📅 2025-01-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often reason in an unstructured way, producing fragmented thought processes and limited accuracy. Method: The paper proposes Table as Thought, a framework that formalizes reasoning as a two-dimensional table: rows encode sequential reasoning steps, while columns capture multidimensional constraints and contextual information, enabling structured inference through iterative cell filling and self-verification. Unlike chain-of-thought's linear paradigm, this approach embeds multi-faceted constraint representations within each inference step, drawing on cognitive-neuroscience theories of structured cognition. The method relies on prompt-engineered tabular templates, making it compatible with both open- and closed-source LLMs. Contribution/Results: Experiments show substantial gains on planning tasks over strong baselines, with mathematical reasoning accuracy improving by up to 12.7%, indicating that structured tabular representation enhances deep reasoning in LLMs.

📝 Abstract
Large language models' reasoning abilities benefit from methods that organize their thought processes, such as chain-of-thought prompting, which employs a sequential structure to guide the reasoning process step-by-step. However, existing approaches focus primarily on organizing the sequence of thoughts, leaving structure in individual thought steps underexplored. To address this gap, we propose Table as Thought, a framework inspired by cognitive neuroscience theories on human thought. Table as Thought organizes reasoning within a tabular schema, where rows represent sequential thought steps and columns capture critical constraints and contextual information to enhance reasoning. The reasoning process iteratively populates the table until self-verification ensures completeness and correctness. Our experiments show that Table as Thought excels in planning tasks and demonstrates a strong potential for enhancing LLM performance in mathematical reasoning compared to unstructured thought baselines. This work provides a novel exploration of refining thought representation within LLMs, paving the way for advancements in reasoning and AI cognition.
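The tabular schema the abstract describes can be sketched in code. This is a minimal illustration of the idea, not the paper's implementation: the column names, the example task, and the all-cells-filled verification rule are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class ThoughtTable:
    """Rows are sequential thought steps; columns are constraint/context dimensions."""
    columns: list
    rows: list = field(default_factory=list)

    def add_step(self, **cells):
        # Each reasoning step fills one row; cells not supplied stay None.
        self.rows.append({c: cells.get(c) for c in self.columns})

    def verify(self):
        # Self-verification (simplified here): every cell of every step
        # must be populated before the table counts as complete.
        return all(all(v is not None for v in row.values()) for row in self.rows)

# Hypothetical planning example: scheduling a meeting under constraints.
table = ThoughtTable(columns=["step", "constraint", "context", "conclusion"])
table.add_step(step="pick a day", constraint="both participants free",
               context="A free Mon/Tue; B free Tue", conclusion="Tuesday")
table.add_step(step="pick a time", constraint="within work hours",
               context="A prefers mornings", conclusion="10:00")
print(table.verify())  # True: every cell is filled, so the plan passes the check
```

In the paper's setting, the LLM itself populates the cells via prompt-engineered templates and performs the verification; the loop above only mirrors the schema's structure.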
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Structured Thinking
Reasoning Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured Thinking
Tabular Representation
Enhanced Reasoning