🤖 AI Summary
To address the high inference latency and potential accuracy degradation of large language models (LLMs) in resource-constrained mobile edge computing (MEC) scenarios, this paper proposes a resource-aware parallel speculative decoding framework. The framework is the first to integrate parallel speculative decoding, in which a lightweight draft model collaborates with a target LLM to jointly generate tokens, into the MEC architecture. It employs multi-agent deep reinforcement learning to jointly optimize user association and heterogeneous resource allocation across edge servers and end devices, thereby mitigating both communication overhead and asynchronous execution delays. Evaluated in the Sionna simulator, the proposed method reduces end-to-end inference latency by up to 28.0% (23.7% on average) while preserving the original LLM's accuracy, significantly improving the scalability and real-time responsiveness of LLM inference services in MEC environments.
📝 Abstract
The growing demand for on-device large language model (LLM) inference highlights the need for efficient mobile edge computing (MEC) solutions, especially in resource-constrained settings. Speculative decoding offers a promising approach by partitioning token generation between a lightweight draft model on mobile devices and a powerful target model on edge servers, but it suffers from communication overhead and asynchronous delays. This paper is the first to propose a unified framework that jointly optimizes user association and resource allocation (UARA) to support efficient parallel speculative decoding. We solve the UARA problem using a multi-agent deep reinforcement learning algorithm. To evaluate our approach under realistic conditions, we conduct experiments using the Sionna simulator. Results show that our method reduces end-to-end latency by up to 28.0% (23.7% on average) without compromising inference accuracy, enabling scalable and low-latency LLM services in MEC systems.
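To make the draft/target collaboration concrete, here is a minimal sketch of the greedy speculative-decoding loop the abstract alludes to. It is not the paper's implementation: `draft_next` and `target_next` are hypothetical stand-in functions (deterministic toy "models" over integer tokens), and the framework's UARA optimization is not modeled. The sketch only shows the core mechanism: the draft proposes a block of `k` tokens, the target verifies them in one pass, and the accepted prefix is extended with the target's correction, so the output matches what the target alone would produce.

```python
def target_next(ctx):
    # Hypothetical stand-in for the target LLM's greedy next token
    # (the real system would run a large model on the edge server).
    return (sum(ctx) * 31 + len(ctx) * 7) % 100

def draft_next(ctx):
    # Hypothetical lightweight on-device draft model: agrees with the
    # target most of the time, diverging at every 4th context length.
    t = target_next(ctx)
    return (t + 1) % 100 if len(ctx) % 4 == 0 else t

def greedy_decode(prompt, n_tokens):
    # Baseline: the target model decodes one token at a time.
    ctx = list(prompt)
    for _ in range(n_tokens):
        ctx.append(target_next(ctx))
    return ctx[len(prompt):]

def speculative_decode(prompt, n_tokens, k=4):
    ctx = list(prompt)
    while len(ctx) - len(prompt) < n_tokens:
        # 1) Draft proposes k tokens autoregressively (cheap, on-device).
        proposal, dctx = [], list(ctx)
        for _ in range(k):
            t = draft_next(dctx)
            proposal.append(t)
            dctx.append(t)
        # 2) Target verifies the whole block (one pass on the server):
        #    accept the longest matching prefix, correct the first mismatch.
        accepted = []
        for t in proposal:
            want = target_next(ctx + accepted)
            if t == want:
                accepted.append(t)
            else:
                accepted.append(want)  # target's correction ends the round
                break
        else:
            # All k draft tokens accepted: target emits one bonus token.
            accepted.append(target_next(ctx + accepted))
        ctx.extend(accepted)
    return ctx[len(prompt):len(prompt) + n_tokens]
```

The key property (and the reason accuracy is preserved) is that the speculative output is token-for-token identical to the target's own greedy output; speed comes from the target verifying several draft tokens per round instead of generating one token per step.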