🤖 AI Summary
To address the limitation of Native Sparse Attention (NSA) in capturing long-range dependencies for long-context modeling, this paper proposes a dynamic alternating local-global attention mechanism, coupled with an inter-layer switching architecture that synergistically integrates Multi-Head Latent Attention (MLA) and Group-Head Latent Attention (GLA). The method jointly leverages sliding-window attention, key-value (KV) compression, and selective attention to enhance cross-region information propagation while preserving sparsity. Experiments on models ranging from 340M to 1.3B parameters demonstrate that the proposed approach matches or surpasses full-attention baselines on commonsense reasoning and long-text understanding tasks. Moreover, it reduces KV-cache memory consumption by up to 50%, significantly improving both efficiency and effectiveness in long-sequence modeling.
📝 Abstract
In this work, we conduct a systematic analysis of Native Sparse Attention (NSA) and propose targeted improvements that enhance long-context modeling. A key insight is that alternating between local (sliding-window) and global (compression and selection) attention across layers, rather than using fixed patterns, enables more effective propagation of long-range dependencies and substantially boosts performance on long-sequence tasks. We further refine NSA's branches with latent attention: the sliding-window branch is enhanced with Multi-head Latent Attention (MLA), while the compression and selective branches adopt Group-head Latent Attention (GLA). These changes reduce KV-cache memory by 50% relative to NSA while improving the model's commonsense reasoning and long-text understanding capabilities. Experiments on models from 340M to 1.3B parameters (trained on 15B and 100B tokens) show that our method matches or exceeds both full attention and native sparse attention on commonsense reasoning and long-context understanding tasks.
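To make the alternating-layer idea concrete, here is a minimal sketch of how local (sliding-window + MLA) and global (compression/selection + GLA) branches could be assigned across transformer layers. The function name, branch labels, and the simple even/odd alternation rule are illustrative assumptions for exposition, not the paper's exact schedule.

```python
def layer_attention_plan(num_layers: int) -> list[str]:
    """Assign an attention branch to each layer, alternating between
    local (sliding-window attention with MLA) and global
    (compressed + selective attention with GLA) patterns.

    The even/odd rule here is a hypothetical schedule; the paper only
    specifies that local and global attention alternate across layers.
    """
    plan = []
    for layer_idx in range(num_layers):
        if layer_idx % 2 == 0:
            # Local branch: sliding-window attention, KV cache shrunk via MLA
            plan.append("local:sliding_window+MLA")
        else:
            # Global branch: KV compression + token selection, using GLA
            plan.append("global:compress+select+GLA")
    return plan

print(layer_attention_plan(4))
```

Because each layer runs only one branch family instead of all three NSA branches, a schedule like this is one way the per-layer KV-cache footprint could drop relative to vanilla NSA.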