🤖 AI Summary
This work addresses the challenge of deploying Transformer models with full self-attention, whose quadratic complexity hinders efficiency, while purely linear attention variants often compromise performance. To bridge this gap, the authors propose a distillation-and-replacement framework that requires neither retraining from scratch nor neural architecture search. By leveraging block-level local knowledge distillation, the method transfers knowledge from a pretrained full-attention module to a linear attention counterpart, followed by a greedy layer replacement strategy that automatically constructs a task-adaptive hybrid attention model in a single efficient pass. The approach is compatible with any pretrained full-attention backbone and achieves substantial reductions in computational and memory costs while preserving downstream task performance.
📝 Abstract
Transformer architectures deliver state-of-the-art accuracy via dense full-attention, but their quadratic time and memory complexity with respect to sequence length limits practical deployment. Linear attention mechanisms offer linear or near-linear scaling yet often incur performance degradation. Hybrid models that integrate full and linear attention layers promise a balance between efficiency and expressiveness, but face two major challenges: training such hybrid models from scratch is computationally expensive, and manually designing the optimal placement of attention types is highly nontrivial. We address both issues: first, we transfer weights from the pretrained full-attention modules to their linear attention counterparts through blockwise local distillation; second, we introduce a greedy layer replacement strategy that iteratively substitutes full attention blocks with linear ones while monitoring validation performance on the target task. This yields a task-specific hybrid model in a single efficient pass, without costly retraining or neural architecture search, and can be applied to any pretrained full-attention backbone for diverse downstream tasks.
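The greedy replacement step described above can be sketched as a simple validation-guided loop. This is a minimal illustrative sketch, not the authors' implementation: the `evaluate` callback, the per-layer lists, and the `tolerance` threshold are all assumed stand-ins for whatever model, metric, and acceptance criterion a real setup would use.

```python
def greedy_replace(full_layers, linear_layers, evaluate, tolerance=0.01):
    """Greedily swap full-attention layers for their distilled linear
    counterparts, keeping each swap only if the validation score stays
    within `tolerance` of the best score seen so far.

    full_layers / linear_layers: per-layer attention modules (same length);
    evaluate: callable mapping a layer list to a validation score
    (higher is better). All names are hypothetical, for illustration.
    """
    hybrid = list(full_layers)
    best = evaluate(hybrid)
    for i in range(len(hybrid)):
        candidate = list(hybrid)
        candidate[i] = linear_layers[i]   # distilled linear replacement
        score = evaluate(candidate)
        if score >= best - tolerance:     # accept swap if performance holds
            hybrid, best = candidate, score
    return hybrid, best
```

Because each layer is tried exactly once, the hybrid architecture is found in a single pass with one validation run per layer, rather than searching the exponential space of full/linear placements.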