Subjective Depth and Timescale Transformers: Learning Where and When to Compute

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Standard Transformers employ a fixed, uniform allocation of computation, limiting efficiency and scalability for large models and long sequences. To address this, we propose two novel architectures—the Subjective Depth Transformer (SDT) and the Subjective Timescale Transformer (STT)—that jointly integrate spatial-depth and temporal-scale conditional computation for the first time, guided by Bayesian surprise signals (i.e., deviations between predicted and observed states) to dynamically determine *where* and *when* to compute. Key innovations include a surprise-driven gating mechanism, alternating Decision/Dynamic layers, Top-K sparse routing, a residual update prediction network, and dynamic KV-cache management. Experiments show that our approach skips 75% of self-attention computation and 50% of KV-cache operations within each compute-skipping layer, improving the compute-accuracy trade-off.
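The gating signal above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: it assumes an L2 deviation between a lightweight "prior" prediction and the full block "posterior" as a proxy for per-token Bayesian surprise (the paper's exact surprise formulation may differ).

```python
import numpy as np

def surprise_scores(prior, posterior):
    """Per-token surprise proxy: L2 deviation between the lightweight
    prior prediction and the full block posterior (illustrative only)."""
    return np.linalg.norm(posterior - prior, axis=-1)

# Toy example: 6 tokens with 4-dim states.
rng = np.random.default_rng(0)
prior = rng.normal(size=(6, 4))
posterior = prior.copy()
posterior[2] += 1.0  # token 2 deviates strongly from the prior

scores = surprise_scores(prior, posterior)
assert scores.argmax() == 2  # the most "surprising" token is selected for compute
```

Tokens whose posterior closely matches the prior score near zero and become candidates for skipping; high-scoring tokens are routed through the full block.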

📝 Abstract
The rigid, uniform allocation of computation in standard Transformer (TF) architectures can limit their efficiency and scalability, particularly for large-scale models and long sequences. Addressing this, we introduce Subjective Depth Transformers (SDT) and Subjective Timescale Transformers (STT), two distinct architectures that leverage Bayesian surprise signals to dynamically route computation, learning where and when to compute within decoder-only TFs. SDT augments a decoder-only stack with alternating Decision and Dynamic layers: a Decision layer computes a full block 'posterior' and a lightweight 'prior,' while a Dynamic layer employs fixed-capacity Top-K routing based on Bayesian surprise (Expected and Unexpected Change), maintaining a static compute graph. STT extends this conditional computation to the temporal domain: a transition network predicts residual updates, forming a temporal 'change hypothesis' that informs a router to dynamically execute or bypass TF blocks for each token, managing KV-cache contributions. Both architectures exhibit the predicted shift from novelty-driven to prediction-driven gating over training, suggesting alignment with surprise-based principles. While operating at reduced capacity, they offer preliminary insights into the compute-accuracy trade-offs of conditional computation. The proposed architectures establish a flexible framework for efficiency, reducing self-attention computation by 75% and KV-cache requirements by 50% within each compute-skipping layer, setting a pathway for more efficient models.
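The fixed-capacity Top-K routing described in the abstract can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's code: `topk_route`, the surprise scores, and the doubling `heavy_fn` are all hypothetical stand-ins; the key property shown is that K is constant, so the compute graph keeps a static shape while only the K most surprising tokens receive the full block.

```python
import numpy as np

def topk_route(x, scores, k, heavy_fn):
    """Fixed-capacity Top-K routing sketch: the k highest-surprise tokens
    go through the full block; all others keep the residual (bypass) path.
    Because k is fixed, the per-layer compute shape never changes."""
    idx = np.argsort(scores)[-k:]   # indices of the k most surprising tokens
    out = x.copy()                  # bypass path: unselected tokens pass through
    out[idx] = heavy_fn(x[idx])     # full computation only on selected tokens
    return out, idx

# Toy example: 8 tokens, capacity k=2 (i.e. 75% of tokens skip the block).
tokens = np.ones((8, 4))
scores = np.arange(8.0)             # tokens 6 and 7 are the most surprising
out, idx = topk_route(tokens, scores, k=2, heavy_fn=lambda t: t * 2.0)

assert sorted(idx) == [6, 7]
assert (out[7] == 2.0).all() and (out[0] == 1.0).all()
```

With k set to a quarter of the sequence, 75% of tokens bypass self-attention in that layer, matching the reduction figure quoted in the abstract; skipped tokens can likewise omit their KV-cache writes.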
Problem

Research questions and friction points this paper is trying to address.

Dynamic computation routing to optimize Transformer efficiency and scalability
Reducing self-attention computation by 75% and KV-cache requirements by 50%
Learning where and when to compute using Bayesian surprise signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic routing using Bayesian surprise signals
Temporal change hypothesis for token-level execution
Reducing computation and KV-cache via conditional skipping
Frederico Wieser
AI Centre, Department of Computer Science, University College London, London, UK
Martin Benfeghoul
Research Engineer, Huawei R&D
machine learning, reinforcement learning, bio-inspired
Haitham Bou Ammar
AI Centre, Department of Computer Science, University College London, London, UK; Huawei, Noah’s Ark Lab, London
Jun Wang
AI Centre, Department of Computer Science, University College London, London, UK
Zafeirios Fountas
Principal Research Scientist, Huawei Technologies, London
Artificial intelligence, Theoretical neuroscience, Machine learning, Memory, Time perception