Glinthawk: A Two-Tiered Architecture for High-Throughput LLM Inference

📅 2025-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language model (LLM) inference suffers from low GPU utilization and KV cache bottlenecks—particularly for long sequences—leading to constrained throughput and high operational costs. To address this, we propose Glinthawk, a two-tier heterogeneous architecture that decouples attention computation (executed on CPU) from the backbone transformer layers (executed on GPU), breaking the conventional full-stack reliance on high-end accelerators and enabling independent, elastic scaling of compute and memory resources. Implemented on NVIDIA T4 GPUs paired with CPU-based virtual machines, Glinthawk integrates a hierarchical scheduler, a lightweight cross-layer KV cache protocol, and a low-overhead sequence sharding mechanism. Experiments demonstrate a 5.9× throughput improvement and a 2.8× reduction in generation cost versus single-GPU baselines; for long sequences, gains reach 16.3× in throughput at 2.4× lower cost—all while tolerating millisecond-level network latency between the two tiers.

📝 Abstract
Large Language Models (LLMs) have revolutionized natural language processing, but their inference demands substantial resources while under-utilizing high-end accelerators like GPUs. A major bottleneck arises from the attention mechanism, which requires storing large key-value caches, limiting the maximum achievable throughput to well below the available computing resources. Current approaches attempt to mitigate this issue through memory-efficient attention and paging mechanisms, but remain constrained by the assumption that all operations must be performed on high-end accelerators. In this work, we propose Glinthawk, a two-tiered architecture that decouples the attention mechanism from the rest of the Transformer model. This approach allows the memory requirements for attention to scale independently, enabling larger batch sizes and more efficient use of the high-end accelerators. We prototype Glinthawk with NVIDIA T4 GPUs as one tier and standard CPU VMs as the other. Compared to a traditional single-tier setup, it improves throughput by 5.9× and reduces the cost of generation by 2.8×. For longer sequence lengths, it achieves a 16.3× throughput improvement at 2.4× less cost. Our evaluation shows that this architecture can tolerate moderate network latency with minimal performance degradation, making it highly effective for latency-tolerant, throughput-oriented applications such as batch processing. Our prototype is publicly available at https://github.com/microsoft/glinthawk.
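To make the decoupling concrete, here is a minimal sketch of the two-tier split the abstract describes: a stateless "backbone" tier (projections and MLP, the GPU-side work) and an attention tier that owns the growing KV cache (the CPU-side, memory-bound work). All class and function names are illustrative, not the paper's actual API; single head, no batching, NumPy standing in for both device tiers.

```python
import numpy as np

D = 64  # model/head dimension (single head for simplicity)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionTier:
    """CPU-side tier: holds the per-sequence KV cache and runs attention only."""
    def __init__(self):
        self.k_cache = []  # grows with sequence length, in cheap host memory
        self.v_cache = []

    def attend(self, q, k, v):
        # Append the new key/value, then score q against the full history.
        self.k_cache.append(k)
        self.v_cache.append(v)
        K = np.stack(self.k_cache)            # (t, D)
        V = np.stack(self.v_cache)            # (t, D)
        scores = softmax(K @ q / np.sqrt(D))  # (t,)
        return scores @ V                     # (D,)

class BackboneTier:
    """GPU-side tier: dense projections and MLP; no per-sequence state,
    so its memory footprint is independent of sequence length."""
    def __init__(self, rng):
        self.wq, self.wk, self.wv = (
            rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
        self.w_mlp = rng.standard_normal((D, D)) / np.sqrt(D)

    def project(self, x):
        return self.wq @ x, self.wk @ x, self.wv @ x

    def mlp(self, x):
        return np.maximum(self.w_mlp @ x, 0.0)

# One decode loop: the backbone projects on the "GPU" tier, q/k/v cross
# the network to the "CPU" tier, and the attention output crosses back.
rng = np.random.default_rng(0)
backbone, attn = BackboneTier(rng), AttentionTier()
x = rng.standard_normal(D)
for _ in range(3):                 # three decode steps; cache grows each step
    q, k, v = backbone.project(x)  # tier 1: compute-bound, stateless
    ctx = attn.attend(q, k, v)     # tier 2: memory-bound, holds the KV cache
    x = backbone.mlp(ctx)          # tier 1 again
print(len(attn.k_cache))           # cache length equals decode steps
```

The key property the sketch illustrates is that only fixed-size activations (q, k, v, and the context vector) cross the tier boundary each step, while the cache that scales with sequence length never touches accelerator memory; that is what lets batch size on the GPU tier grow independently of sequence length.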
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Resource Consumption
Attention Mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Mechanism Separation
Batch Processing Enhancement
Cost Efficiency Improvement