🤖 AI Summary
This work addresses high latency and resource contention in heavily loaded data centers during large language model inference, while under-utilized edge and global computing resources remain largely untapped. The paper presents the first integration of speculative decoding with wide-area distributed computing through a cloud-edge collaborative inference architecture. Using load-aware scheduling, it dynamically offloads the draft model to idle nodes and introduces a redundancy control mechanism to keep latency stable. Experimental results show that this approach reduces the draft model's forward-pass load in high-load data centers by over 50% without increasing end-to-end latency, significantly improving global resource utilization.
📝 Abstract
Data centers capable of running large language models (LLMs) are spread across the globe. Some have high-end GPUs for running the most advanced models (100B+ parameters), while others are only suitable for smaller models (1B parameters). The most capable GPUs are in high demand thanks to the rapidly expanding applications of LLMs. As a result, choosing where to run an LLM inference workload can significantly affect request latency. In this work, we explore options for shifting some aspects of inference to under-utilized data centers. We first measure the varying delays affecting inference in AWS services across regions, demonstrating that load is not spread evenly. We then introduce WANSpec, which offloads part of LLM generation to under-utilized data centers. In doing so, WANSpec can mitigate capacity issues and effectively use on-site compute (e.g., at universities) to augment cloud providers. WANSpec builds on speculative decoding, a widely used technique for accelerating auto-regressive decoding, by moving the draft model to the under-utilized compute resources. Our experiments in simulation and cloud deployments show that WANSpec can judiciously employ redundancy to avoid latency increases while still reducing the forward passes of speculative decoding's draft model in high-demand data centers by over 50%.
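For readers unfamiliar with speculative decoding, the draft-then-verify loop it relies on can be sketched as follows. This is a minimal toy illustration with stand-in "models" (simple deterministic functions, not the paper's code or real LLMs); the key property shown is that greedy speculative decoding produces exactly the same output as decoding with the target model alone, while the target only needs one verification pass per batch of drafted tokens.

```python
# Toy stand-in "models": each maps a token context to the next token.
# In real speculative decoding these are a small draft LLM and a large
# target LLM; the functions below are hypothetical placeholders.
def draft_model(context):
    # Cheap model: usually agrees with the target, but not always.
    return (sum(context) * 3 + 7) % 10

def target_model(context):
    # Expensive model: defines the ground-truth greedy output.
    s = sum(context)
    return (s * 3 + 7) % 10 if s % 4 else (s + 1) % 10

def speculative_step(context, k=4):
    """One round: the draft proposes k tokens auto-regressively; the
    target verifies them (conceptually in a single batched forward
    pass). The matching prefix is accepted; on the first mismatch the
    target's own token is substituted and the round ends."""
    proposal, ctx = [], list(context)
    for _ in range(k):
        t = draft_model(ctx)
        proposal.append(t)
        ctx.append(t)
    accepted, ctx = [], list(context)
    for t in proposal:
        correct = target_model(ctx)
        if t == correct:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(correct)  # target's correction; stop here
            break
    return accepted

def generate_speculative(context, n, k=4):
    out = list(context)
    while len(out) - len(context) < n:
        out.extend(speculative_step(out, k))
    return out[len(context):len(context) + n]

def generate_target_only(context, n):
    ctx = list(context)
    for _ in range(n):
        ctx.append(target_model(ctx))
    return ctx[len(context):]
```

By construction, every accepted token equals the target model's greedy choice for its context, so `generate_speculative` reproduces `generate_target_only` exactly; WANSpec's contribution is running the draft-side loop on under-utilized remote resources rather than in the loaded data center.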