🤖 AI Summary
AI workloads impose stringent requirements on networks—ultra-low latency, high throughput, and efficient resource utilization.
Method: This paper proposes an AI-native in-network computing paradigm that executes AI tasks natively on programmable network devices, including switches, routers, and SmartNICs. It presents the first integration of programmable data planes (PDPs), lightweight AI model compression and mapping (via pruning and quantization), distributed in-network aggregation, and edge federated learning into a topology-aware, intelligently co-designed architecture. The design is realized through SDN-driven software-hardware co-optimization using the Planter and Quark frameworks.
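The compression step above (pruning plus quantization) can be illustrated with a minimal sketch. This is a generic example of magnitude pruning followed by symmetric int8 quantization, the kind of reduction needed before a model fits the memory and arithmetic constraints of a programmable switch; the function names and thresholds are illustrative and not taken from Planter or Quark.

```python
# Hypothetical sketch: compress a weight vector so it could be mapped
# onto a memory-constrained programmable data plane.
# Step 1: magnitude pruning (zero out the smallest-magnitude weights).
# Step 2: symmetric int8 quantization (floats -> integers + one scale).

def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    if sparsity <= 0:
        return list(weights)
    ranked = sorted(weights, key=abs)
    cutoff = abs(ranked[int(len(ranked) * sparsity) - 1])
    return [w if abs(w) > cutoff else 0.0 for w in weights]

def quantize_int8(weights):
    """Map floats to int8 codes with a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights on the controller side."""
    return [c * scale for c in codes]

weights = [0.9, -0.04, 0.5, 0.01, -0.7, 0.02]
sparse = prune(weights, sparsity=0.5)      # half the entries become 0.0
codes, scale = quantize_int8(sparse)       # int8 codes fit switch registers
```

Integer codes and a shared scale matter here because PDP targets typically support only integer match-action arithmetic, not floating point.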
Contribution/Results: We establish a comprehensive technical taxonomy spanning architectures, algorithms, frameworks, and applications. Experiments demonstrate significant reductions in communication latency and substantial improvements in throughput and energy efficiency. We also identify three key future directions: runtime programmability, standardized AI-native benchmarks, and novel AI-native application paradigms.
📝 Abstract
In-network computation represents a transformative approach to addressing the escalating demands of Artificial Intelligence (AI) workloads on network infrastructure. By leveraging the processing capabilities of network devices such as switches, routers, and Network Interface Cards (NICs), this paradigm enables AI computations to be performed directly within the network fabric, significantly reducing latency, enhancing throughput, and optimizing resource utilization. This paper provides a comprehensive analysis of optimizing in-network computation for AI, exploring the evolution of programmable network architectures, such as Software-Defined Networking (SDN) and Programmable Data Planes (PDPs), and their convergence with AI. It examines methodologies for mapping AI models onto resource-constrained network devices, addressing challenges like limited memory and computational capability through efficient algorithm design and model compression techniques. The paper also highlights advancements in distributed learning, particularly in-network aggregation, and the potential of federated learning to enhance privacy and scalability. Frameworks like Planter and Quark are discussed for simplifying development, alongside key applications such as intelligent network monitoring, intrusion detection, traffic management, and Edge AI. Future research directions, including runtime programmability, standardized benchmarks, and new application paradigms, are proposed to advance this rapidly evolving field. This survey underscores the potential of in-network AI to create intelligent, efficient, and responsive networks capable of meeting the demands of next-generation AI applications.
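The in-network aggregation idea mentioned above can be sketched abstractly: a switch accumulates fixed-point gradient chunks from each worker and emits the sum once every worker has contributed, so only one aggregated packet traverses the link to the parameter server. This is a simplified, SwitchML-style control sketch in plain Python; the class and method names are hypothetical, not from any specific framework.

```python
# Illustrative sketch of in-network gradient aggregation.
# A switch-resident aggregator holds per-element accumulator "slots";
# when all workers' chunks for a round have arrived, it releases the
# sum and resets for the next round. Real deployments implement this
# in P4 registers with fixed-point integers; names here are made up.

class SwitchAggregator:
    def __init__(self, num_workers, chunk_size):
        self.num_workers = num_workers
        self.slots = [0] * chunk_size   # per-element integer accumulators
        self.seen = 0                   # workers heard from this round

    def on_packet(self, worker_grad):
        """Fold one worker's fixed-point gradient chunk into the slots.

        Returns the aggregated chunk when the round completes,
        otherwise None (the packet is consumed at the switch).
        """
        for i, g in enumerate(worker_grad):
            self.slots[i] += g
        self.seen += 1
        if self.seen == self.num_workers:
            result = self.slots[:]              # chunk to multicast back
            self.slots = [0] * len(self.slots)  # reset for next round
            self.seen = 0
            return result
        return None

agg = SwitchAggregator(num_workers=2, chunk_size=4)
agg.on_packet([1, 2, 3, 4])            # first worker: held at the switch
total = agg.on_packet([10, 20, 30, 40])  # round complete: sum released
```

The bandwidth saving follows directly: N worker-to-server gradient streams collapse into one aggregated stream at the switch, which is where the latency and throughput gains reported for in-network aggregation come from.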