Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics

📅 2024-10-28
🏛️ International Conference on Learning Representations
📈 Citations: 24
Influential: 1
🤖 AI Summary
How do large language models (LLMs) perform arithmetic reasoning? This study challenges two prevailing accounts, robust algorithmic generalization and training-data memorization, and proposes a third mechanism: a "bag of heuristics". Using causal circuit analysis, neuron-level functional attribution, heuristic-type clustering, and training-dynamics tracking, the authors find that LLMs rely on sparsely activated subsets of neurons encoding diverse numerical heuristics (e.g., range matching, sign sensitivity), whose combination is unordered rather than systematically algorithmic. This mechanism explains over 85% of arithmetic accuracy across multiple LLMs and becomes the dominant source of accuracy early in training. The work provides the first evidence that arithmetic competence arises from the non-algorithmic, cooperative activation of heuristic-specialized neurons, offering a new paradigm for understanding the nature of LLM reasoning.

📝 Abstract
Do large language models (LLMs) solve reasoning tasks by learning robust generalizable algorithms, or do they memorize training data? To investigate this question, we use arithmetic reasoning as a representative task. Using causal analysis, we identify a subset of the model (a circuit) that explains most of the model's behavior for basic arithmetic logic and examine its functionality. By zooming in on the level of individual circuit neurons, we discover a sparse set of important neurons that implement simple heuristics. Each heuristic identifies a numerical input pattern and outputs corresponding answers. We hypothesize that the combination of these heuristic neurons is the mechanism used to produce correct arithmetic answers. To test this, we categorize each neuron into several heuristic types, such as neurons that activate when an operand falls within a certain range, and find that the unordered combination of these heuristic types is the mechanism that explains most of the model's accuracy on arithmetic prompts. Finally, we demonstrate that this mechanism appears as the main source of arithmetic accuracy early in training. Overall, our experimental results across several LLMs show that LLMs perform arithmetic using neither robust algorithms nor memorization; rather, they rely on a "bag of heuristics".
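The mechanism the abstract describes — sparse heuristic neurons that each fire on a numerical input pattern and vote for candidate answers, with their unordered combination producing the final result — can be illustrated with a minimal toy sketch. All function names and the specific heuristics below are hypothetical constructions for illustration; the paper analyzes neurons inside real LLMs, not hand-written rules:

```python
from collections import defaultdict

# Hypothetical sketch of a "bag of heuristics" (not the paper's code).
# Each heuristic fires on a numerical input pattern and votes for a set
# of candidate answers; the unordered combination of votes picks the result.

def operand_range_heuristic(idx, lo, hi, answers):
    """Fires when operand `idx` lies in [lo, hi); votes for `answers`."""
    def h(a, b):
        return answers if lo <= (a, b)[idx] < hi else []
    return h

def units_digit_heuristic(da, db, answers):
    """Fires when the operands' units digits equal (da, db)."""
    def h(a, b):
        return answers if (a % 10, b % 10) == (da, db) else []
    return h

def bag_of_heuristics_predict(a, b, heuristics):
    """Tally votes from all firing heuristics and return the top answer."""
    votes = defaultdict(int)
    for h in heuristics:
        for ans in h(a, b):
            votes[ans] += 1
    return max(votes, key=votes.get) if votes else None

# Toy heuristics relevant to the prompt "3 + 5 =":
heuristics = [
    operand_range_heuristic(0, 2, 5, list(range(2, 15))),  # a in [2,5) -> sum in [2,14]
    operand_range_heuristic(1, 4, 7, list(range(4, 17))),  # b in [4,7) -> sum in [4,16]
    units_digit_heuristic(3, 5, [8, 18]),                  # digits (3,5) -> answer ends in 8
]

print(bag_of_heuristics_predict(3, 5, heuristics))  # -> 8 (= 3 + 5)
```

No single heuristic computes the sum; only 8 satisfies all three patterns at once, mirroring the paper's claim that correct answers emerge from the intersection of many crude, individually insufficient heuristics rather than from an algorithm.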
Problem

Research questions and friction points this paper is trying to address.

Determining whether LLMs solve reasoning tasks via robust, generalizable algorithms or via memorization of training data
Identifying the neuron-level mechanism behind LLMs' arithmetic accuracy
Explaining how LLMs answer arithmetic prompts correctly despite relying on neither robust algorithms nor memorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses causal analysis to isolate the circuit responsible for arithmetic behavior
Discovers a sparse set of neurons that each implement a simple numerical heuristic (e.g., operand-range detection)
Shows that the unordered combination of these heuristic neurons explains most of the model's arithmetic accuracy