🤖 AI Summary
To address the urgent need for efficient large language model (LLM) deployment on resource-constrained edge devices in 6G ubiquitous intelligent networks, this paper proposes CoEL, a Mixture-of-Experts (MoE)-driven collaborative edge LLM deployment framework. CoEL tackles the challenges of limited computation, memory, and communication capacity through a four-stage closed loop of perception, deployment, compression, and updating, integrating intra-device resource orchestration with inter-device distributed deployment to enable sparse, dynamic scheduling. Key techniques include MoE architecture adaptation, quantization-aware compression, token fusion and pruning, edge resource state broadcasting, and dynamic topology-aware model updating. Experimental results demonstrate that CoEL significantly reduces memory footprint, intermediate data volume, and inference latency while improving deployment flexibility across heterogeneous edge devices, enabling elastic, adaptive AI services in 6G scenarios.
📝 Abstract
The power of LLMs suggests that deploying models of different scales and architectures across end devices, the edge, and the cloud, matched to diverse requirements and heterogeneous hardware, is the critical path to ubiquitous intelligence in 6G. However, the massive parameter scale of LLMs poses significant challenges to deploying them on edge devices due to high computational and storage demands. Since the sparse activation of Mixture of Experts (MoE) enables scalable and dynamic allocation of computational and communication resources at the edge, this paper proposes a novel MoE-empowered collaborative deployment framework for edge LLMs, denoted as CoEL. The framework fully leverages the properties of the MoE architecture and encompasses four key aspects: Perception, Deployment, Compression, and Updating. Edge servers broadcast their resource status and the specific resource requirements of LLMs to their neighbors. Using this data, two deployment strategies are proposed to accommodate varying model scales and ensure that each model is deployed effectively: one deploys an LLM on a single edge device through intra-device resource collaboration, and the other distributes an LLM across multiple edge devices via inter-device resource collaboration. Furthermore, both the models and the intermediate data are compressed: quantization reduces the memory footprint, while token fusion and pruning reduce the volume of intermediate data. Finally, given the dynamics of network topology, resource status, and user requirements, the deployment strategies are regularly updated to maintain their relevance and effectiveness. This paper also delineates the challenges and potential research directions for the deployment of edge LLMs.
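The perception-then-deployment loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the `DeviceState` fields, the `plan_deployment` function, and the greedy memory-based partitioning are all hypothetical stand-ins for CoEL's broadcast resource state and its two deployment strategies (intra-device when one device can host the model, inter-device otherwise).

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    """Resource status an edge server broadcasts to neighbors (hypothetical fields)."""
    name: str
    free_memory_gb: float

def plan_deployment(model_memory_gb: float, devices: list[DeviceState]) -> dict[str, float]:
    """Return a device -> memory-share plan (illustrative stand-in for CoEL's strategies).

    Intra-device collaboration: if one device can host the whole model, place it there.
    Inter-device collaboration: otherwise, greedily split the memory demand across the
    devices with the most free memory (a toy proxy for expert-level partitioning).
    """
    ranked = sorted(devices, key=lambda d: d.free_memory_gb, reverse=True)
    # Case 1: the whole model fits on the best single device.
    if ranked and ranked[0].free_memory_gb >= model_memory_gb:
        return {ranked[0].name: model_memory_gb}
    # Case 2: distribute the remaining demand across multiple devices.
    plan: dict[str, float] = {}
    remaining = model_memory_gb
    for d in ranked:
        if remaining <= 0:
            break
        share = min(d.free_memory_gb, remaining)
        if share > 0:
            plan[d.name] = share
            remaining -= share
    if remaining > 1e-9:
        raise RuntimeError("insufficient aggregate edge memory for this model")
    return plan

devices = [DeviceState("edge-A", 6.0), DeviceState("edge-B", 10.0), DeviceState("edge-C", 4.0)]
print(plan_deployment(8.0, devices))   # fits on edge-B alone (intra-device)
print(plan_deployment(14.0, devices))  # split across edge-B and edge-A (inter-device)
```

A real system would also weigh compute capacity, link bandwidth, and expert activation statistics when partitioning, which this memory-only sketch deliberately omits.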
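The intermediate-data compression step (token fusion and pruning) can be illustrated with a small sketch. The scoring interface and the mean-fusion rule here are assumptions for illustration, not the paper's method: tokens with the highest importance scores are kept, and the remainder are fused into one averaged token rather than discarded, shrinking the data exchanged between edge devices.

```python
def compress_tokens(tokens: list[list[float]],
                    scores: list[float],
                    keep_ratio: float = 0.5) -> list[list[float]]:
    """Token fusion + pruning sketch (hypothetical interface).

    Keep the top `keep_ratio` fraction of tokens by importance score, then fuse
    the pruned tokens into a single mean token appended at the end.
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    # Preserve the original sequence order among the kept tokens.
    kept = [tokens[i] for i in sorted(order[:n_keep])]
    dropped = [tokens[i] for i in order[n_keep:]]
    if dropped:
        # Fusion: represent all pruned tokens by their element-wise mean.
        dim = len(dropped[0])
        fused = [sum(t[d] for t in dropped) / len(dropped) for d in range(dim)]
        kept.append(fused)
    return kept

tokens = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
scores = [0.9, 0.1, 0.8, 0.2]
print(compress_tokens(tokens, scores))  # 4 tokens -> 2 kept + 1 fused
```

Halving the token count roughly halves the inter-device activation traffic in this toy setting; how importance scores are obtained (e.g., from attention weights) is left open here.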