AI Summary
In LLM serving, unpredictable request generation lengths hinder efficient scheduling, limit throughput, and cause load imbalance across GPU instances. Existing sequence-level scheduling (SLS) suffers from high latency for short requests due to static batching and first-come first-served (FCFS) policies; iteration-level scheduling (ILS), though enabling dynamic batching, remains constrained by out-of-memory (OOM) risks and fixed concurrency, and fails to jointly optimize throughput and load balancing. This paper proposes slice-level scheduling (SCLS), which partitions the maximum generation length into fixed-length, memory-predictable scheduling units. SCLS introduces the first time–memory joint modeling at the slice level, integrating dynamic batching with GPU memory-aware offloading. It enables fine-grained, request-level scheduling and cross-instance load balancing while avoiding OOM errors. Experiments show SCLS improves throughput by up to 315.8% over SLS and ILS and significantly mitigates load skew.
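The property the summary appeals to can be sketched with a simple bound (the notation here is our own, not the paper's): with slice length $s$, a request with prompt length $p_i$ that has already generated $g_i$ tokens runs for at most $s$ decoding iterations before the next scheduling point, and by the end of the slice it occupies at most the KV cache of $p_i + g_i + s$ tokens. A batch $B$ can therefore be admitted whenever

$$\sum_{i \in B} m_{\mathrm{kv}}\,(p_i + g_i + s) \;\le\; M_{\mathrm{GPU}},$$

where $m_{\mathrm{kv}}$ is the per-token KV-cache footprint and $M_{\mathrm{GPU}}$ is the available memory budget, even though each request's total generation length remains unknown.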
Abstract
Large language models (LLMs) generate text iteratively, token by token, with memory usage growing as the generated sequence lengthens. Since the generation length of a request is generally unpredictable, it is difficult to estimate the time and memory required to process it, which poses a challenge for effective request scheduling. Conventional sequence-level scheduling (SLS) serves requests in a first-come first-served (FCFS) manner with static batching, where requests with short generation lengths are delayed until those with long ones finish. In addition, to avoid out-of-memory (OOM) errors, SLS batches requests with a small batch size, which limits throughput. Recently proposed iteration-level scheduling (ILS) improves on this with continuous batching, completing requests promptly and dynamically adding new ones, but it often limits the number of requests processed in parallel to avoid OOM errors, thus compromising throughput. Moreover, both SLS and ILS fail to effectively balance workload across multiple LLM instances. To tackle these challenges, we propose slice-level scheduling (SCLS). By splitting the predefined maximum generation length into slices and serving batches slice by slice, it provides a precise range of serving time and memory usage for batched requests, laying the foundation for effective scheduling. Experiments confirm that, with the proposed batching and offloading algorithms, SCLS improves throughput by up to 315.8% over SLS and ILS schedulers and greatly mitigates load imbalance.
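To make the slice-by-slice mechanism concrete, below is a minimal, self-contained Python sketch under assumptions of our own (the constants, names, and toy completion model are illustrative and not taken from the paper): a waiting request is admitted only when the batch's worst-case memory at the end of the next slice fits the budget, every running request generates at most one slice of tokens, and slice boundaries are where requests can be retired, offloaded, or rebalanced across instances.

```python
# A minimal, self-contained sketch of slice-by-slice serving with a memory-aware
# admission check. Constants, names, and the toy "generation" below are
# illustrative assumptions, not the paper's implementation.
from collections import deque
from dataclasses import dataclass
import random

PER_TOKEN_KV_BYTES = 2 * 32 * 32 * 128 * 2   # assumed per-token KV-cache footprint (~0.5 MB)
MEM_BUDGET_BYTES = 8 * 1024**3               # assumed GPU memory budget reserved for the KV cache
SLICE_LEN = 128                               # tokens generated per slice
MAX_GEN_LEN = 1024                            # predefined maximum generation length


@dataclass
class Request:
    prompt_len: int
    target_len: int   # unknown to the scheduler; used here only to simulate completion
    generated: int = 0


def worst_case_bytes(batch):
    # Within one slice each request grows by at most SLICE_LEN tokens, so the
    # memory a batch can reach by the end of the slice is known before it starts.
    return sum((r.prompt_len + r.generated + SLICE_LEN) * PER_TOKEN_KV_BYTES
               for r in batch)


def serve(waiting: deque) -> None:
    running = []
    while waiting or running:
        # Dynamic batching: admit waiting requests while the next slice is
        # guaranteed to fit (always admit at least one so the loop makes progress).
        while waiting and (not running or
                           worst_case_bytes(running + [waiting[0]]) <= MEM_BUDGET_BYTES):
            running.append(waiting.popleft())

        # Serve exactly one slice: each running request produces up to SLICE_LEN tokens.
        for r in running:
            r.generated = min(r.generated + SLICE_LEN, r.target_len, MAX_GEN_LEN)

        # Slice boundary: retire finished requests; unfinished ones stay batched,
        # or could be offloaded / rebalanced to another instance when memory is tight.
        running = [r for r in running
                   if r.generated < r.target_len and r.generated < MAX_GEN_LEN]


if __name__ == "__main__":
    requests = deque(Request(prompt_len=random.randint(32, 512),
                             target_len=random.randint(16, MAX_GEN_LEN))
                     for _ in range(64))
    serve(requests)
```

The slice boundary is the natural point to reschedule because both quantities that are normally unpredictable are bounded there: the batch runs for at most one slice of decoding steps, and each request's KV cache grows by at most SLICE_LEN tokens.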