🤖 AI Summary
To address the challenge of deploying large AI models (LAMs) on geographically distributed, resource-constrained edge devices for diverse real-time intelligent IoT services, this paper proposes a collaborative training and microservice-based inference framework. We design an architecture-aware modular decomposition mechanism that adaptively decouples LAMs along the computation, communication, and modality dimensions; a multimodal token mapping scheme coupled with domain-knowledge-guided lightweight fine-tuning; and a microservice-based virtualized inference engine. Evaluated in industrial IoT and smart-city scenarios, the framework reduces training communication overhead by 37% and end-to-end inference latency by 42% relative to the baselines. It achieves millisecond-level mapping from raw sensor data to semantic tokens and, for the first time, enables end-to-end deployment of generative AI tasks at the edge.
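The millisecond-level mapping from raw sensor data to semantic tokens could, in the simplest case, be realized as a codebook lookup: each raw reading is quantized to its nearest codebook entry and emitted as a token ID. The following sketch illustrates that idea only; the codebook, feature dimensions, and function names are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a sensor-to-token mapping via nearest-centroid
# quantization. The codebook and features below are illustrative, not
# the paper's actual scheme.

from typing import List

def nearest_token(reading: List[float], codebook: List[List[float]]) -> int:
    """Return the index of the codebook entry closest to the reading."""
    def sq_dist(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(reading, codebook[i]))

def tokenize_stream(readings: List[List[float]],
                    codebook: List[List[float]]) -> List[int]:
    """Map a stream of raw sensor vectors to a sequence of token IDs."""
    return [nearest_token(r, codebook) for r in readings]

# Toy 4-entry codebook over 2-D sensor features (e.g., temperature, vibration).
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
tokens = tokenize_stream([[0.1, 0.05], [0.9, 0.95]], codebook)  # → [0, 3]
```

In practice the codebook would be learned (e.g., a trained vector-quantization layer per modality), but the lookup structure is what makes per-reading tokenization cheap enough for millisecond latency.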
📝 Abstract
Large artificial intelligence models (LAMs) emulate human-like problem-solving capabilities across diverse domains, modalities, and tasks. By leveraging the communication and computation resources of geographically distributed edge devices, edge LAMs enable real-time intelligent services at the network edge. Unlike conventional edge AI, which relies on small or moderate-sized models for direct feature-to-prediction mappings, edge LAMs leverage the intricate coordination of modular components to enable context-aware generative tasks and multimodal inference. We propose a collaborative deployment framework for edge LAMs by characterizing the intelligent capabilities of LAMs and the limited resources of edge networks. Specifically, we propose a collaborative training framework over heterogeneous edge networks that adaptively decomposes LAMs according to computation resources, data modalities, and training objectives, reducing communication and computation overheads during fine-tuning. Furthermore, we introduce a microservice-based inference framework that virtualizes the functional modules of edge LAMs according to their architectural characteristics, thereby improving resource utilization and reducing inference latency. The developed edge LAMs provide actionable solutions for diverse Internet-of-Things (IoT) applications by constructing mappings from heterogeneous sensor data to token representations and fine-tuning based on domain knowledge.
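One way to picture the adaptive decomposition according to computation resources is pipeline-style layer partitioning: the model's layers are split into contiguous blocks sized in proportion to each device's compute capacity, so no device becomes the bottleneck. This is a minimal sketch under that assumption; the device names, capacities, and layer count are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of compute-aware model decomposition: assign
# contiguous blocks of layers to edge devices in proportion to each
# device's relative compute capacity. All values are illustrative.

from typing import Dict, List

def partition_layers(num_layers: int,
                     capacities: Dict[str, float]) -> Dict[str, List[int]]:
    """Assign contiguous layer blocks to devices proportionally to capacity."""
    total = sum(capacities.values())
    assignment: Dict[str, List[int]] = {}
    start = 0
    devices = list(capacities.items())
    for idx, (dev, cap) in enumerate(devices):
        if idx == len(devices) - 1:
            count = num_layers - start  # last device absorbs rounding remainder
        else:
            count = round(num_layers * cap / total)
        assignment[dev] = list(range(start, start + count))
        start += count
    return assignment

# 12 layers over three devices with 2:1:1 relative compute.
plan = partition_layers(12, {"gateway": 2.0, "camera": 1.0, "sensor_hub": 1.0})
# → gateway gets layers 0-5, camera 6-8, sensor_hub 9-11
```

A full realization would also weigh link bandwidth and data modality when choosing cut points, as the abstract's decomposition does; capacity-proportional splitting is only the computation-resource dimension of that decision.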