From Tokens to Layers: Redefining Stall-Free Scheduling for LLM Serving with Layered Prefill

📅 2025-10-09
🤖 AI Summary
To address stringent tail-latency requirements—specifically, Time-To-First-Token (TTFT) and Time-Between-Tokens (TBT)—in large language model (LLM) serving, existing chunked prefill scheduling incurs redundant expert weight loading in Mixture-of-Experts (MoE) models, increasing memory traffic by up to 39% and introducing significant energy overhead. This work proposes a *layered prefill scheduling* paradigm: the model is vertically partitioned into contiguous layer groups, which serve as atomic scheduling units; prefill and decode are then interleaved across these groups, eliminating redundant expert weight loading entirely. By raising the scheduling granularity from the token level to the layer-group level, the approach simultaneously achieves low TTFT/TBT and high throughput. Evaluations under fixed resource constraints show end-to-end latency reduced by 41%, TTFT improved by up to 70%, and per-token energy consumption decreased by up to 22%, yielding substantial gains in both energy efficiency and responsiveness.

📝 Abstract
Large Language Model (LLM) inference in production must meet stringent service-level objectives for both time-to-first-token (TTFT) and time-between-tokens (TBT) while maximizing throughput under fixed compute, memory, and interconnect budgets. Modern serving systems adopt stall-free scheduling techniques such as chunked prefill, which splits long prompt processing along the token dimension and interleaves prefill with ongoing decode iterations. While effective at stabilizing TBT, chunked prefill incurs substantial overhead in Mixture-of-Experts (MoE) models: redundant expert weight loads increase memory traffic by up to 39% and inflate energy consumption. We propose layered prefill, a new scheduling paradigm that treats transformer layer groups as the primary scheduling unit. By vertically partitioning the model into contiguous layer groups and interleaving prefill and decode across the groups, layered prefill sustains stall-free decoding while eliminating chunk-induced MoE weight reloads. It reduces off-chip bandwidth demand, lowering TTFT by up to 70%, end-to-end latency by 41%, and per-token energy by up to 22%. Evaluations show that layered prefill consistently improves the TTFT–TBT Pareto frontier over chunked prefill, reducing expert-load traffic and energy cost while maintaining stall-free decoding. Overall, shifting the scheduling axis from tokens to layers unlocks a new operating regime for high-efficiency, energy-aware LLM serving in co-located environments.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM serving to meet TTFT and TBT objectives efficiently
Reducing overhead from chunked prefill in Mixture-of-Experts models
Eliminating redundant expert weight reloads to save bandwidth and energy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scheduling by transformer layer groups instead of tokens
Eliminates chunk-induced MoE weight reload overhead
Reduces bandwidth demand and improves latency significantly
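The layer-group scheduling idea above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`make_layer_groups`, `layered_prefill_step`, `run_group`) and the exact interleaving order are illustrative assumptions. The key property it demonstrates is that each layer group's weights become resident once per pass and serve both decode and prefill work, instead of being reloaded for every prompt chunk as in chunked prefill.

```python
# Hedged sketch of layered prefill scheduling (illustrative, not the
# paper's code): the model's transformer layers are vertically
# partitioned into contiguous groups, and within one stall-free
# iteration each group runs the ongoing decode batch and then the
# pending prefill work while its (MoE expert) weights are resident.

def make_layer_groups(num_layers, group_size):
    """Partition layers [0, num_layers) into contiguous groups."""
    return [list(range(start, min(start + group_size, num_layers)))
            for start in range(0, num_layers, group_size)]

def layered_prefill_step(groups, prefill_batch, decode_batch, run_group):
    """One scheduling iteration: every group serves the decode batch
    (keeping TBT stall-free) and the prefill batch advances through the
    same groups in the same pass, reusing each group's loaded weights."""
    for group in groups:
        # Weights for `group` are loaded once here and shared by both
        # phases, avoiding chunk-induced expert weight reloads.
        run_group(group, decode_batch, phase="decode")
        if prefill_batch:
            run_group(group, prefill_batch, phase="prefill")

# Usage: count how often each group is touched in one iteration.
calls = []
def run_group(group, batch, phase):
    calls.append((tuple(group), phase))

groups = make_layer_groups(num_layers=6, group_size=2)
layered_prefill_step(groups, prefill_batch=["prompt"],
                     decode_batch=["seq"], run_group=run_group)
# Each group appears back-to-back for decode then prefill, i.e. its
# weights are resident for both phases within a single pass.
```

Under this sketch, the scheduling unit is the layer group rather than a token chunk, which is exactly the shift the paper argues for: decode latency stays bounded because decode runs every iteration, while prefill no longer forces repeated expert loads per chunk.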
Gunjun Lee
Seoul National University, Seoul, South Korea
Jiwon Kim
Seoul National University, Seoul, South Korea
Jaiyoung Park
Seoul National University, Seoul, South Korea
Younjoo Lee
Seoul National University, Seoul, South Korea
Jung Ho Ahn
Seoul National University, Seoul, South Korea
Computer Architecture