Looking beyond the next token

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional causal language models generate token by token, conditioning only on the preceding context. This is misaligned with how humans write and reason, where goals are typically fixed before the exact content is produced. Method: Trelawney is a data-reordering technique that injects long-horizon goal signals by rearranging training sequences, without changing the model architecture or training procedure, so that standard Transformer training acquires goal-directed generation. Contribution/Results: Trelawney yields interpretable, goal-conditioned generation and supports goal-guided reasoning algorithms, improving performance on planning, algorithmic-reasoning, and story-generation benchmarks. It extends a language model's goal-modeling capacity at no additional training cost while preserving the standard training paradigm.

📝 Abstract
The structure of causal language model training assumes that each token can be accurately predicted from the previous context. This contrasts with humans' natural writing and reasoning process, where goals are typically known before the exact argument or phrasings. While this mismatch has been well studied in the literature, the working assumption has been that architectural changes are needed to address this mismatch. We argue that rearranging and processing the training data sequences can allow models to more accurately imitate the true data-generating process, and does not require any other changes to the architecture or training infrastructure. We demonstrate that this technique, Trelawney, and the inference algorithms derived from it allow us to improve performance on several key benchmarks that span planning, algorithmic reasoning, and story generation tasks. Finally, our method naturally enables the generation of long-term goals at no additional cost. We investigate how using the model's goal-generation capability can further improve planning and reasoning. Additionally, we believe Trelawney could potentially open doors to new capabilities beyond the current language modeling paradigm.
Problem

Research questions and friction points this paper is trying to address.

Addressing mismatch between human writing and token prediction
Improving planning and reasoning without architectural changes
Enabling long-term goal generation in language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rearranges training data sequences to better imitate the true data-generating process
Trelawney technique improves performance on planning, reasoning, and story-generation benchmarks
Enables long-term goal generation at no additional cost
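Based on the description above, the core of the method is a data transformation rather than an architectural change: a span of future text (the "goal") is copied to an earlier point in the training sequence, delimited by marker tokens, so a standard causal LM sees the goal before the text that leads up to it. A minimal sketch follows; the marker tokens `<G>`/`</G>`, the function name, and the splice positions are illustrative assumptions, not the paper's exact scheme.

```python
def reorder_with_goal(tokens, insert_pos, goal_start, goal_end,
                      open_tok="<G>", close_tok="</G>"):
    """Copy a future span (the 'goal') to an earlier point in the
    sequence, wrapped in marker tokens, so a causal LM conditions on
    the goal before generating the tokens that lead up to it."""
    assert 0 <= insert_pos <= goal_start <= goal_end <= len(tokens)
    goal = tokens[goal_start:goal_end]
    # Splice the marked goal span in; the original text is unchanged.
    return (tokens[:insert_pos]
            + [open_tok] + goal + [close_tok]
            + tokens[insert_pos:])

story = ["Once", "upon", "a", "time", "the", "dragon", "was", "defeated"]
example = reorder_with_goal(story, insert_pos=4, goal_start=5, goal_end=8)
# The goal span ("dragon was defeated") now appears, marked, before the
# tokens that build toward it, while the original continuation is intact.
```

Because only the training data changes, existing tokenizers, architectures, and training loops can be reused; at inference time the same markers could, in principle, be used either to condition generation on a user-supplied goal or to let the model emit its own goals.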