SLIDE: Simultaneous Model Downloading and Inference at the Wireless Network Edge

πŸ“… 2025-12-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the excessive end-to-end latency in edge AI caused by sequentially downloading a model and then running on-device inference, this paper proposes SLIDE, a framework enabling fine-grained parallelism between downloading and inference: users begin inference on already-downloaded layers while concurrently receiving the remaining ones. The core contribution lies in modeling the recursive dependencies across layers and jointly optimizing model provisioning, spectrum bandwidth allocation, and computing resource allocation. The problem is formulated for a multi-user downlink system, and an efficient algorithm obtains the optimal solution in polynomial time. Simulations show that, under latency and communication-resource constraints, SLIDE achieves significantly higher task throughput than conventional serial download-then-infer schemes, supporting the effectiveness and scalability of the parallel download-inference design.

πŸ“ Abstract
To support on-device inference, next-generation mobile networks are expected to provide real-time model downloading services to mobile users. However, powerful AI models typically have large model sizes, resulting in excessive end-to-end (E2E) downloading-and-inference (DAI) latency. To address this issue, we propose a simultaneous model downloading and inference (SLIDE) framework, which allows users to perform inference with downloaded layers while simultaneously receiving the remaining layers of the model. To this end, we formulate a task throughput maximization problem by jointly optimizing model provisioning, spectrum bandwidth allocation, and computing resource allocation for multi-user downlink systems. Unlike traditional DAI frameworks, SLIDE introduces recursive dependencies across layers, where inference latency depends recursively on the downloading bandwidth and computing resource allocation for each of the preceding layers. To solve this challenging problem, we design an efficient algorithm that acquires the optimal solution with polynomial-time complexity. Simulation results demonstrate that the proposed SLIDE framework significantly improves task throughput under latency and communication resource constraints compared with conventional model downloading schemes.
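The recursive layer dependency described in the abstract can be illustrated with a simple latency recursion: layer l's inference can start only once layer l has finished downloading and layer l-1's inference has completed. The sketch below is illustrative only; the per-layer sizes, link rate, device speed, and function names are assumptions, not the paper's system model, which additionally allocates bandwidth and compute across multiple users.

```python
# Sketch (not the paper's algorithm): E2E latency of pipelined
# (SLIDE-style) vs. serial download-and-inference for an L-layer model.
# All numeric values are illustrative assumptions.

def pipelined_latency(layer_bits, layer_flops, rate_bps, speed_flops):
    """Layer l's inference starts once layer l is downloaded AND
    layer l-1's inference has finished (recursive dependency)."""
    dl_done = 0.0   # time the current layer finishes downloading
    inf_done = 0.0  # time the current layer finishes inference
    for bits, flops in zip(layer_bits, layer_flops):
        dl_done += bits / rate_bps                      # layers arrive in order
        inf_done = max(dl_done, inf_done) + flops / speed_flops
    return inf_done

def serial_latency(layer_bits, layer_flops, rate_bps, speed_flops):
    """Conventional scheme: download the whole model, then infer."""
    return sum(layer_bits) / rate_bps + sum(layer_flops) / speed_flops

# Illustrative 4-layer model: sizes in bits, compute in FLOPs.
bits = [8e6, 8e6, 8e6, 8e6]
flops = [2e9, 2e9, 2e9, 2e9]
rate, speed = 1e7, 1e9  # 10 Mbit/s link, 1 GFLOP/s device (assumed)

print(serial_latency(bits, flops, rate, speed))     # 3.2 s download + 8.0 s inference
print(pipelined_latency(bits, flops, rate, speed))  # downloading overlaps inference
```

With these assumed numbers, the serial scheme takes 11.2 s, while overlapping download and inference hides most of the download time behind computation, which is the intuition behind SLIDE's throughput gain.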
Problem

Research questions and friction points this paper is trying to address.

Minimizes end-to-end latency for AI model downloading and inference
Optimizes resource allocation for multi-user wireless edge networks
Enables simultaneous model downloading and inference to improve throughput
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simultaneous downloading and inference of model layers
Joint optimization of model provisioning and resource allocation
Polynomial-time algorithm for maximizing task throughput
Guanqiao Qu
The University of Hong Kong
Artificial Intelligence, Machine Learning, Edge Intelligence, Networking, Wireless Communications
Tao Li
Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong SAR, China
Qian Chen
Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong SAR, China
Xianhao Chen
Assistant Professor, The University of Hong Kong
Wireless networks, mobile edge computing, edge AI, distributed learning
Sheng Zhou
Department of Electronic Engineering, Tsinghua University, Beijing, China