Long-Context Modeling with Dynamic Hierarchical Sparse Attention for On-Device LLMs

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost of attention in long-context large language models (LLMs) deployed on edge devices, the inflexibility of static sparsity patterns, and the poor generalizability and critical-token loss of existing dynamic methods that rely on predefined templates, this paper proposes Dynamic Hierarchical Sparse Attention (DHSA). DHSA is fully data-driven and template-free: it performs online variable-length sequence chunking, length-normalized embedding aggregation, and hierarchical upsampling from chunk-level to token-level similarity to adaptively retain salient contextual information. Experiments on Gemma2 show that DHSA matches full-attention accuracy while reducing prefill latency by 20-60% and peak memory usage by 35%. Moreover, it outperforms block-sparse attention by 6-18% in relative accuracy, achieving a superior efficiency-accuracy trade-off for edge deployment.

📝 Abstract
The quadratic cost of attention hinders the scalability of long-context LLMs, especially in resource-constrained settings. Existing static sparse methods, such as sliding windows or global tokens, exploit the sparsity of attention to reduce its cost, but adapt poorly to content-dependent variations in attention because their patterns are fixed. While previous work has proposed several dynamic approaches to improve flexibility, they still depend on predefined templates or heuristic mechanisms. Such strategies reduce generality and prune tokens that remain contextually important, limiting accuracy across diverse tasks. To tackle these bottlenecks of existing methods for long-context modeling, we introduce Dynamic Hierarchical Sparse Attention (DHSA), a data-driven framework that predicts attention sparsity online without retraining. DHSA adaptively segments sequences into variable-length chunks, then computes chunk representations by aggregating the token embeddings within each chunk. To avoid the bias introduced by varying chunk lengths, we apply length-normalized aggregation that scales the averaged embeddings by the square root of the chunk size. Finally, DHSA upsamples the chunk-level similarity scores to token-level similarities to compute importance scores that determine which token-level interactions to preserve. Our experiments on Gemma2 with the Needle-in-a-Haystack test and LongBench show that DHSA matches dense attention in accuracy while reducing prefill latency by 20-60% and peak memory usage by 35%. Compared with representative baselines such as block-sparse attention, DHSA achieves consistently higher accuracy (6-18% relative gains) at comparable or lower cost, offering an efficient and adaptable solution for long-context on-device LLMs.
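The pipeline in the abstract (chunking, length-normalized aggregation, chunk-to-token upsampling, top-score selection) can be sketched in a few lines. This is a minimal pure-Python illustration, not the paper's implementation: the dot-product chunk similarity, the top-k selection rule, and all function names are assumptions made for the sketch; only the sqrt-of-chunk-size scaling and the upsampling idea come from the abstract.

```python
import math

def chunk_embedding(chunk):
    """Length-normalized aggregation: mean of the token embeddings in a
    chunk, scaled by sqrt(chunk size) to offset the bias from varying
    chunk lengths (per the abstract's description)."""
    dim = len(chunk[0])
    mean = [sum(tok[d] for tok in chunk) / len(chunk) for d in range(dim)]
    scale = math.sqrt(len(chunk))
    return [m * scale for m in mean]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def token_level_mask(tokens, chunk_bounds, keep_ratio=0.5):
    """Score chunk pairs, upsample each score to all token pairs inside
    the chunk pair, then keep the top `keep_ratio` fraction of token
    interactions. `chunk_bounds` is a list of (start, end) index pairs
    over `tokens`; a real system would produce these bounds with the
    paper's adaptive segmenter, which is not reproduced here."""
    reps = [chunk_embedding(tokens[s:e]) for s, e in chunk_bounds]
    n = len(tokens)
    score = [[0.0] * n for _ in range(n)]
    for i, (si, ei) in enumerate(chunk_bounds):
        for j, (sj, ej) in enumerate(chunk_bounds):
            s = dot(reps[i], reps[j])  # chunk-level similarity
            for a in range(si, ei):    # upsample to token level
                for b in range(sj, ej):
                    score[a][b] = s
    flat = sorted((v for row in score for v in row), reverse=True)
    thresh = flat[max(0, int(keep_ratio * n * n) - 1)]
    return [[v >= thresh for v in row] for row in score]
```

In an actual attention kernel the boolean mask would gate which query-key dot products are computed, which is where the 20-60% prefill-latency savings reported above would come from.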
Problem

Research questions and friction points this paper is trying to address.

Reducing quadratic attention cost for long-context LLMs on devices
Overcoming static sparse attention limitations with dynamic adaptation
Improving accuracy while maintaining efficiency in constrained settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Hierarchical Sparse Attention predicts sparsity online, reducing computation cost without retraining
Adaptive sequence segmentation into variable-length chunks
Length-normalized aggregation prevents chunk size bias
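The length-normalization bullet above can be illustrated numerically: under a toy model where token embeddings are i.i.d. Gaussian (an assumption made purely for this demo), plain averaging shrinks the norm of a chunk representation roughly as 1/sqrt(chunk size), so longer chunks look artificially weaker in dot-product similarity; multiplying by sqrt(chunk size) restores comparable magnitudes.

```python
import math
import random

random.seed(0)

def mean_embedding_norm(n_tokens, dim=64):
    """L2 norm of the plain mean of n_tokens random token embeddings.
    Standard-Gaussian embeddings are a toy assumption for illustration."""
    chunk = [[random.gauss(0.0, 1.0) for _ in range(dim)]
             for _ in range(n_tokens)]
    mean = [sum(tok[d] for tok in chunk) / n_tokens for d in range(dim)]
    return math.sqrt(sum(m * m for m in mean))

short = mean_embedding_norm(4)    # small chunk: larger mean norm
long_ = mean_embedding_norm(64)   # large chunk: mean norm shrinks ~1/sqrt(n)

# sqrt(n)-scaled versions have comparable magnitudes regardless of length,
# which is the bias correction the length-normalized aggregation performs.
scaled_short = math.sqrt(4) * short
scaled_long = math.sqrt(64) * long_
```

This is why unnormalized averaging would systematically down-weight long chunks in the chunk-similarity step.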