Catch Your Breath: Adaptive Computation for Self-Paced Sequence Production

📅 2025-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the accuracy-efficiency trade-off in language-model sequence generation caused by fixed inference step budgets. Methodologically, it introduces learnable `<pause>` and `<don't know>` tokens, frames decoding as a sequential decision process with an explicit time cost, and proposes the CYB family of loss functions: CYB-AP (anytime prediction with time-discounted accuracy), CYB-VA (a variational approach), and CYB-DP (a computational-budget penalty). Together these enable fine-grained, context-aware adaptive pausing and resuming. Key contributions include: (i) the first end-to-end framework enabling language models to autonomously control inference latency; (ii) matching baseline performance with only one third of the training data; and (iii) significantly improved accuracy on complex tokens, with adaptive sensitivity to syntactic ambiguity and morphological structure.

📝 Abstract
We explore a class of supervised training objectives that allow a language model to dynamically and autonomously scale the number of compute steps used for each input token. For any token, the model can request additional compute steps by emitting a <don't know> output. If the model is granted a delay, a specialized <pause> token is inserted at the next input step, providing the model with additional compute resources to generate an output. The model can request multiple pauses. To train the model to use <don't know> outputs judiciously and to calibrate its uncertainty, we frame the selection of each output token as a sequential-decision problem with a time cost. We refer to the class of methods as *Catch Your Breath* losses and we study three methods in this class: CYB-AP frames the model's task as anytime prediction, where an output may be required at any step and accuracy is discounted over time; CYB-VA is a variational approach that aims to maximize prediction accuracy subject to a specified distribution over stopping times; and CYB-DP imposes a penalty based on a computational budget. Through fine-tuning experiments, we identify the best performing loss variant. The CYB model needs only one third as much training data as the baseline (no pause) model needs to achieve the same performance, and half as much data as a model with pauses and a cross-entropy loss. We find that the CYB model requests additional steps when doing so improves accuracy, and the model adapts its processing time to token-level complexity and context. For example, it often pauses after plural nouns like *patients* and *challenges* but never pauses after the first token of contracted words like *wasn* and *didn*, and it shows high variability for ambiguous tokens like *won*, which could function as either a verb or part of a contraction.
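The request-and-grant loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `model_step`, the token spellings, and the `max_pauses` cap are all stand-ins chosen for clarity.

```python
PAUSE = "<pause>"
DONT_KNOW = "<don't know>"

def decode_token(model_step, context, max_pauses=3):
    """Grant extra compute steps while the model emits <don't know>.

    model_step(context) -> output token. Each time the model requests a
    delay, a <pause> token is appended to the input and the model runs
    another step, up to max_pauses times. Returns (output, pauses_used).
    """
    pauses = 0
    output = model_step(context)
    while output == DONT_KNOW and pauses < max_pauses:
        context = context + [PAUSE]  # delay granted: pause token at next input step
        output = model_step(context)
        pauses += 1
    return output, pauses

# Toy stand-in model: only commits to an answer after two pauses.
def toy_model(context):
    return "patients" if context.count(PAUSE) >= 2 else DONT_KNOW

print(decode_token(toy_model, ["the"]))  # -> ('patients', 2)
```

Capping the number of pauses mirrors the fact that compute is bounded at inference time; the CYB losses are what train the model to use such requests judiciously rather than pausing on every token.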
Problem

Research questions and friction points this paper is trying to address.

Enabling language models to dynamically adjust computational steps per token
Training models to request pauses when uncertain about token predictions
Optimizing computational efficiency while maintaining or improving prediction accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic compute scaling per token
Specialized pause tokens for uncertainty
Sequential decision training with time cost
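The "sequential decision with time cost" idea can be illustrated with a toy objective (not the paper's exact loss): the negative log-likelihood of the correct token at a candidate stopping step, plus a linear penalty per extra compute step. Pausing pays off only when the gain in confidence outweighs the time cost.

```python
import math

def time_cost_loss(p_correct_at_step, cost_per_step=0.1):
    """Toy stand-in for a CYB-style objective: pick the stopping step that
    minimizes negative log-likelihood plus a linear compute penalty.

    p_correct_at_step[t] is the model's probability of the correct token
    after t pauses; cost_per_step is an assumed per-pause cost.
    """
    return min(
        -math.log(p) + cost_per_step * t
        for t, p in enumerate(p_correct_at_step)
    )

# Confidence rising from 0.4 to 0.9 over two pauses: the best trade-off
# here is to take both pauses despite the accumulated time cost.
print(time_cost_loss([0.4, 0.7, 0.9]))
```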
Alexandre Galashov
DeepMind
Meta-learning · Transfer Learning · Reinforcement Learning
Matt Jones
Google DeepMind
Rosemary Ke
Google DeepMind
Yuan Cao
Google DeepMind
Vaishnavh Nagarajan
Google
Artificial Intelligence · Language Modeling · Deep Learning · Multi-Token Prediction
Michael C. Mozer
Google DeepMind