🤖 AI Summary
This work investigates how the ordering of reasoning steps in chain-of-thought (CoT) prompting affects how easily Transformer models learn arithmetic tasks, addressing the training instability caused by fixed, arbitrary step sequences.
Method: We propose a two-stage hierarchical reordering framework: (1) dynamically filtering promising step orders based on early-training loss, and (2) efficiently searching among billions of candidates via coordinated inter-block and intra-block optimization.
Contribution/Results: This is the first systematic study to reveal the critical impact of CoT step ordering on learning efficiency. We introduce multi-order mixed training and a loss-driven order identification mechanism. Evaluated on four order-sensitive arithmetic tasks, our method significantly improves convergence speed and generalization. Notably, on multiplication tasks, it automatically rediscovers the empirically optimal reverse-digit ordering pattern—demonstrating both interpretability and effectiveness.
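The two-stage hierarchical search described above can be illustrated with a minimal enumeration sketch. This is an assumption-laden simplification, not the paper's actual coordinated optimization: `hierarchical_orders` and its block partitioning are hypothetical names, and the real method filters candidates by training loss rather than enumerating them all. The sketch only shows why the hierarchy shrinks the factorial search space.

```python
from itertools import permutations, product

def hierarchical_orders(tokens, block_size):
    """Enumerate candidate step orders hierarchically: first permute
    whole blocks (inter-block), then permute tokens within each block
    (intra-block). With b blocks of size s this yields b! * (s!)**b
    candidates instead of the full (b*s)! permutations."""
    blocks = [tokens[i:i + block_size]
              for i in range(0, len(tokens), block_size)]
    for block_order in permutations(blocks):
        # For each block arrangement, permute tokens inside every block.
        for intra in product(*(permutations(blk) for blk in block_order)):
            yield [tok for blk in intra for tok in blk]

orders = list(hierarchical_orders(list(range(6)), block_size=2))
print(len(orders))  # 3! * (2!)**3 = 48, versus 6! = 720 for a flat search
```

For longer sequences the gap widens rapidly, which is what makes billions of flat-search candidates tractable under the hierarchical scheme.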
📝 Abstract
Chain-of-thought reasoning is fundamental to Transformers: it lets them solve problems step by step. Beyond which intermediate steps are included, the order of those steps critically affects the difficulty of the reasoning. This study addresses a novel task of unraveling the chain of thought—reordering decoder input tokens into a learning-friendly sequence for Transformers to learn arithmetic tasks. The proposed pipeline first trains a Transformer on a mixture of target sequences arranged in different orders, then identifies benign orders as those whose loss drops fastest in the early stage of training. Because the search space grows factorially with sequence length, we propose a two-stage hierarchical approach for inter- and intra-block reordering. Experiments on four order-sensitive arithmetic tasks show that our method identifies a learning-friendly order out of a few billion candidates. Notably, on the multiplication task, it recovers the reverse-digit order reported in prior studies.
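The loss-driven order identification can be sketched as follows. This is a toy illustration under stated assumptions: the function names (`early_loss_drop`, `rank_orders`) and the synthetic loss curves are mine, not the paper's; in the actual pipeline the per-order losses come from a single model trained on a mixture of differently ordered target sequences.

```python
def early_loss_drop(losses, warmup=5):
    """Score an order by how far its loss falls in the first few steps."""
    return losses[0] - min(losses[:warmup])

def rank_orders(loss_curves, top_k=2):
    """Keep the top_k candidate step orders with the fastest early
    loss drop (larger drop = more learning-friendly)."""
    scored = sorted(loss_curves.items(),
                    key=lambda kv: early_loss_drop(kv[1]),
                    reverse=True)
    return [order for order, _ in scored[:top_k]]

# Toy early-training loss curves for three digit orders of a 3-digit
# output (tuples give the digit positions, least-significant first).
curves = {
    (2, 1, 0): [2.0, 1.2, 0.6, 0.3, 0.2],  # reverse-digit: fast drop
    (0, 1, 2): [2.0, 1.9, 1.7, 1.6, 1.5],  # forward-digit: slow drop
    (1, 0, 2): [2.0, 1.6, 1.3, 1.1, 1.0],
}
print(rank_orders(curves, top_k=1))  # → [(2, 1, 0)]
```

On this toy data the reverse-digit order wins, mirroring the paper's finding that the method recovers the reverse-digit ordering on multiplication.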