🤖 AI Summary
Large language models (LLMs) often reason in an unstructured way, producing fragmented thought processes and limited accuracy. Method: We propose "Table as Thought," a novel framework that formalizes reasoning as a two-dimensional table: rows encode sequential reasoning steps, while columns represent multidimensional constraints and contextual information, enabling structured inference via iterative cell filling and self-verification. Unlike chain-of-thought's linear paradigm, this approach embeds multi-faceted constraint representations within a single inference step, inspired by cognitive neuroscience theories of structured cognition. The method employs prompt-engineered tabular templates, ensuring compatibility with both open- and closed-source LLMs. Contribution/Results: Experiments demonstrate substantial gains on planning tasks over strong baselines, and mathematical reasoning accuracy improves by up to 12.7%, indicating that structured tabular representation significantly enhances deep reasoning capabilities in LLMs.
📝 Abstract
Large language models' reasoning abilities benefit from methods that organize their thought processes, such as chain-of-thought prompting, which employs a sequential structure to guide the reasoning process step-by-step. However, existing approaches focus primarily on organizing the sequence of thoughts, leaving structure in individual thought steps underexplored. To address this gap, we propose Table as Thought, a framework inspired by cognitive neuroscience theories on human thought. Table as Thought organizes reasoning within a tabular schema, where rows represent sequential thought steps and columns capture critical constraints and contextual information to enhance reasoning. The reasoning process iteratively populates the table until self-verification ensures completeness and correctness. Our experiments show that Table as Thought excels in planning tasks and demonstrates a strong potential for enhancing LLM performance in mathematical reasoning compared to unstructured thought baselines. This work provides a novel exploration of refining thought representation within LLMs, paving the way for advancements in reasoning and AI cognition.
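The tabular reasoning loop described above (rows as sequential steps, columns as constraints and context, iterative population followed by self-verification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the column names, the `verify` criterion, and the callable standing in for an LLM are all assumptions.

```python
# Minimal sketch of a Table-as-Thought style loop.
# Column names and the verification rule are illustrative assumptions.
COLUMNS = ["step", "constraint", "context", "conclusion"]

def verify(table):
    """Self-verification stand-in: every cell of every row must be filled."""
    return bool(table) and all(row.get(c) for row in table for c in COLUMNS)

def table_as_thought(model, question, max_steps=10):
    """Iteratively ask `model` (a callable here, an LLM in practice)
    to propose the next table row until it signals completion."""
    table = []
    for _ in range(max_steps):
        row = model(question, table)
        if row is None:  # model indicates no further steps are needed
            break
        table.append(row)
    if not verify(table):  # final self-verification pass
        raise ValueError("table incomplete or inconsistent")
    return table

# Toy stand-in for an LLM planning a two-step arithmetic solution.
def toy_model(question, table):
    plan = [
        {"step": "parse numbers", "constraint": "integers only",
         "context": question, "conclusion": "a=2, b=3"},
        {"step": "add", "constraint": "compute a+b",
         "context": "a=2, b=3", "conclusion": "5"},
    ]
    return plan[len(table)] if len(table) < len(plan) else None

result = table_as_thought(toy_model, "What is 2 + 3?")
```

In a real setting, `model` would format the partial table into a prompt template and parse the LLM's reply back into a row; the stub here only fixes the control flow of filling and verifying the table.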