Edge Large AI Models: Collaborative Deployment and IoT Applications

📅 2025-05-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of deploying large AI models (LAMs) on geographically distributed, resource-constrained edge devices for diverse real-time IoT intelligent services, this paper proposes a collaborative training and microservice-based inference framework. We innovatively design an architecture-aware modular decomposition mechanism that enables adaptive decoupling across computation, communication, and modality dimensions; develop a multimodal token mapping scheme coupled with domain-knowledge-guided lightweight fine-tuning; and implement a microservice virtualized inference engine. Evaluated in industrial IoT and smart city scenarios, our framework reduces training communication overhead by 37% and end-to-end inference latency by 42% compared to baselines. It achieves millisecond-level mapping from raw sensor data to semantic tokens and, for the first time, enables end-to-end deployment of generative AI tasks at the edge.

Technology Category

Application Category

📝 Abstract
Large artificial intelligence models (LAMs) emulate human-like problem-solving capabilities across diverse domains, modalities, and tasks. By leveraging the communication and computation resources of geographically distributed edge devices, edge LAMs enable real-time intelligent services at the network edge. Unlike conventional edge AI, which relies on small or moderate-sized models for direct feature-to-prediction mappings, edge LAMs leverage the intricate coordination of modular components to enable context-aware generative tasks and multi-modal inference. We shall propose a collaborative deployment framework for edge LAM by characterizing the LAM intelligent capabilities and limited edge network resources. Specifically, we propose a collaborative training framework over heterogeneous edge networks that adaptively decomposes LAMs according to computation resources, data modalities, and training objectives, reducing communication and computation overheads during the fine-tuning process. Furthermore, we introduce a microservice-based inference framework that virtualizes the functional modules of edge LAMs according to their architectural characteristics, thereby improving resource utilization and reducing inference latency. The developed edge LAM will provide actionable solutions to enable diversified Internet-of-Things (IoT) applications, facilitated by constructing mappings from diverse sensor data to token representations and fine-tuning based on domain knowledge.
Problem

Research questions and friction points this paper is trying to address.

Develop collaborative framework for edge Large AI Models deployment
Optimize resource use in edge networks for AI model training
Enable real-time multi-modal AI inference for IoT applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative training framework for heterogeneous edge networks
Microservice-based inference framework reducing latency
Mapping sensor data to tokens for IoT
🔎 Similar Papers
No similar papers found.