Paying Attention to Hybrid Attention: Untangling the Issues with Conversion Methods

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hybrid attention methods suffer from component imbalance when converting pretrained Transformers into linear models: the linear attention branch is effectively bypassed at inference, so the model relies almost entirely on sliding-window softmax attention (SWA) and never achieves genuine linearisation. This work is the first to identify and characterise this attribution bias. It proposes three remedies: (1) inference-time hybridisation that mixes the linear and sliding-window branches, (2) HedgeCATs, which combines attention-weight transfer with targeted LoRA fine-tuning, and (3) Scheduled Sliding-window Dropout (SSD), which stochastically suppresses the softmax branch during training to enforce balanced participation of the linear and window-based components. Component-level diagnostics confirm that the approach preserves O(n) computational complexity while substantially increasing the effective contribution of linear attention, restoring over 90% of the base model's performance. The result is an interpretable, reproducible paradigm for efficient and genuinely linear Transformer conversion.
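A minimal PyTorch-style sketch of what such inference-time mixing of the two branches could look like. The elu+1 feature map, the fixed mixing weight `alpha`, the window size, and the non-causal linear branch are illustrative assumptions for this sketch, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def hybrid_attention(q, k, v, window: int = 64, alpha: float = 0.5):
    """Mix a linear-attention branch with a sliding-window softmax branch.

    q, k, v: (batch, seq_len, dim). The elu+1 feature map, fixed `alpha`,
    and the non-causal linear branch are assumptions made for this sketch.
    """
    # Linear-attention branch: phi(Q) (phi(K)^T V), O(n) in sequence length.
    phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", phi_k, v)              # phi(K)^T V
    z = torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1))  # normaliser
    linear_out = torch.einsum("bnd,bde->bne", phi_q, kv) / (z.unsqueeze(-1) + 1e-6)

    # Sliding-window softmax branch: each query attends to its last `window` keys.
    n, d = q.shape[1], q.shape[-1]
    idx = torch.arange(n, device=q.device)
    visible = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < window)
    scores = (q @ k.transpose(-1, -2)) / d**0.5
    scores = scores.masked_fill(~visible, float("-inf"))
    swa_out = torch.softmax(scores, dim=-1) @ v

    # Inference-time hybridisation: convex combination of the two branches.
    return alpha * linear_out + (1.0 - alpha) * swa_out

# Example call on random activations.
out = hybrid_attention(torch.randn(2, 128, 64), torch.randn(2, 128, 64), torch.randn(2, 128, 64))
```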

📝 Abstract
Transformers' quadratic computational complexity limits their scalability despite remarkable performance. While linear attention reduces this to linear complexity, pre-training such models from scratch remains, in most cases, prohibitively expensive. Recent post-training linearisation methods convert pre-trained Transformers to linear models efficiently, often using hybrid approaches that combine linear attention with sliding-window softmax attention (SWA). We identify a critical flaw: existing hybrid methods inadvertently bypass the linear component, relying almost entirely on SWA. Component-level diagnostics reveal this previously undetected behaviour stems from overlooked evaluation practices on common-sense benchmarks. We propose three solutions to ensure balanced component usage: (i) inference-time hybridisation of linear-only conversions with sliding-window softmax; (ii) HedgeCATs, combining attention-weight transfer with targeted LoRA fine-tuning; and (iii) Scheduled Sliding-window Dropout (SSD), which stochastically suppresses the softmax branch during training to prevent component collapse. Our methods maintain computational efficiency while recovering most base model performance and ensuring genuine linear attention adoption, restoring the validity of performance attributions in hybrid conversions.
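A hedged sketch of what the Scheduled Sliding-window Dropout idea could look like inside a hybrid layer's forward pass. The linear ramp schedule, the maximum drop probability `p_max`, and the per-step coin flip are assumptions made for illustration, not the paper's exact schedule.

```python
import torch

def ssd_output(linear_out: torch.Tensor, swa_out: torch.Tensor,
               step: int, total_steps: int, p_max: float = 0.9,
               training: bool = True) -> torch.Tensor:
    """Scheduled Sliding-window Dropout (SSD), sketched under assumptions.

    With probability p(step) the softmax (SWA) branch is suppressed during
    training, forcing the linear branch to carry the prediction; at inference
    both branches are used. The linear ramp and `p_max` are illustrative.
    """
    if training:
        p_drop = p_max * min(step / max(total_steps, 1), 1.0)  # assumed linear ramp
        if torch.rand(()).item() < p_drop:
            return linear_out                   # SWA branch dropped this step
    return 0.5 * (linear_out + swa_out)         # hybrid output otherwise
```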
Problem

Research questions and friction points this paper is trying to address.

Transformers' quadratic complexity limits model scalability
Hybrid linear attention methods bypass the linear component
Overlooked evaluation practices on common-sense benchmarks mask this component collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybridises linear-only conversions with sliding-window softmax at inference time
Combines attention-weight transfer with targeted LoRA fine-tuning (HedgeCATs)
Applies Scheduled Sliding-window Dropout (SSD) during training to prevent component collapse
Martin Benfeghoul
Research Engineer, Huawei R&D
machine learning, reinforcement learning, bio-inspired
Teresa Delgado
Huawei, Noah’s Ark Lab, London
Adnan Oomerjee
Huawei, Noah’s Ark Lab, London; AI Centre, Department of Computer Science, University College London, London, UK
Haitham Bou Ammar
Huawei, Noah’s Ark Lab, London; AI Centre, Department of Computer Science, University College London, London, UK
Jun Wang
AI Centre, Department of Computer Science, University College London, London, UK
Zafeirios Fountas
Principal Research Scientist, Huawei Technologies, London
Artificial intelligence, theoretical neuroscience, machine learning, memory, time perception