🤖 AI Summary
Standard Transformers allocate computation uniformly across layers and tokens, limiting efficiency and scalability for large models and long sequences. To address this, we propose two architectures, the Subjective Depth Transformer (SDT) and the Subjective Timescale Transformer (STT), which bring conditional computation to network depth and to time, guided by Bayesian surprise signals (i.e., deviations between predicted and observed states) that determine *where* and *when* to compute. Key components include a surprise-driven gating mechanism, alternating Decision/Dynamic layers, fixed-capacity Top-K sparse routing, a transition network that predicts residual updates, and dynamic KV-cache management. Within each compute-skipping layer, the approach reduces self-attention computation by 75% and KV-cache requirements by 50%, offering preliminary insight into the compute-accuracy trade-offs of conditional computation.
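To make the depth-routing idea concrete, here is a minimal PyTorch sketch of an SDT-style Decision/Dynamic layer pair. It is not the authors' implementation: the module names, the prior/posterior disagreement used as the surprise proxy, and the capacity value are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of surprise-driven Top-K depth routing.
# Assumptions: surprise = prior/posterior disagreement from a Decision layer;
# the following Dynamic layer runs its block only on the Top-K surprising tokens.
import torch
import torch.nn as nn


class DecisionLayer(nn.Module):
    """Computes a full block 'posterior' and a lightweight 'prior'; their
    disagreement serves as a per-token surprise signal for the next layer."""

    def __init__(self, d_model: int, nhead: int = 8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.prior = nn.Linear(d_model, d_model)

    def forward(self, x):
        posterior = self.block(x)
        prior = self.prior(x)
        surprise = (posterior - prior).pow(2).mean(-1)   # (batch, seq)
        return posterior, surprise


class DynamicLayer(nn.Module):
    """Runs its block only on the Top-K most surprising tokens (fixed capacity,
    so the compute graph stays static); the remaining tokens pass through."""

    def __init__(self, d_model: int, nhead: int = 8, capacity: float = 0.25):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.capacity = capacity

    def forward(self, x, surprise):
        b, t, d = x.shape
        k = max(1, int(self.capacity * t))
        idx = surprise.topk(k, dim=-1).indices                   # (b, k)
        gather_idx = idx.unsqueeze(-1).expand(-1, -1, d)
        selected = x.gather(1, gather_idx)                       # (b, k, d)
        updated = self.block(selected)                           # compute on K tokens only
        return x.scatter(1, gather_idx, updated)                 # others are skipped


# Usage: one Decision layer followed by one Dynamic layer (25% capacity)
x = torch.randn(2, 16, 64)
decision, dynamic = DecisionLayer(64), DynamicLayer(64, capacity=0.25)
h, s = decision(x)
out = dynamic(h, s)
```

With a 25% capacity, the Dynamic layer attends over only a quarter of the tokens, which is where the per-layer self-attention savings quoted above would come from.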
📝 Abstract
The rigid, uniform allocation of computation in standard Transformer (TF) architectures can limit their efficiency and scalability, particularly for large-scale models and long sequences. To address this, we introduce Subjective Depth Transformers (SDT) and Subjective Timescale Transformers (STT), two distinct architectures that leverage Bayesian surprise signals to dynamically route computation, learning where and when to compute within decoder-only TFs. SDT augments a decoder-only stack with alternating Decision and Dynamic layers: a Decision layer computes a full block 'posterior' and a lightweight 'prior,' while a Dynamic layer employs fixed-capacity Top-K routing based on Bayesian surprise (Expected and Unexpected Change), maintaining a static compute graph. STT extends this conditional computation to the temporal domain: a transition network predicts residual updates, forming a temporal 'change hypothesis' that informs a router to dynamically execute or bypass TF blocks for each token, managing KV-cache contributions. Both architectures exhibit the predicted shift from novelty-driven to prediction-driven gating over training, suggesting alignment with surprise-based principles. While operating at reduced capacity, they offer preliminary insights into the compute-accuracy trade-offs of conditional computation. The proposed architectures establish a flexible framework for efficiency, reducing self-attention computation by 75% and KV-cache requirements by 50% within each compute-skipping layer, charting a path toward more efficient models.
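For the temporal side, the sketch below illustrates an STT-style layer in PyTorch. It assumes a small transition MLP stands in for the 'change hypothesis' network and a linear router makes a per-token execute/bypass decision; the names, the hard threshold, and the KV-cache handling are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's code) of an STT-style temporal skip layer.
# A transition MLP predicts the residual update a block would produce (the
# 'change hypothesis'); a router decides per token whether to execute the block.
import torch
import torch.nn as nn


class TemporalSkipLayer(nn.Module):
    def __init__(self, d_model: int, nhead: int = 8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.transition = nn.Sequential(               # predicts the block's residual update
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        self.router = nn.Linear(2 * d_model, 1)        # per-token execute / bypass decision

    def forward(self, x):
        # x: (batch, seq, d_model)
        change_hypothesis = self.transition(x)                         # predicted residual
        logits = self.router(torch.cat([x, change_hypothesis], -1))    # (b, t, 1)
        # Hard gate for illustration; training would need a soft or
        # straight-through relaxation of this decision.
        execute = (logits.sigmoid() > 0.5).squeeze(-1)                 # (b, t) bool

        out = x + change_hypothesis                                    # bypassed tokens: cheap update
        if execute.any():
            # In a real decoder, only executed tokens would run attention and
            # write to the KV cache; here the full block is run and merged
            # for simplicity.
            full = self.block(x)
            out = torch.where(execute.unsqueeze(-1), full, out)
        return out, execute                            # 'execute' marks KV-cache contributions


# Usage: per-token execute/bypass over a toy batch
layer = TemporalSkipLayer(d_model=64)
y, kv_mask = layer(torch.randn(2, 16, 64))
```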