Improving inference time in multi-TPU systems with profiled model segmentation

📅 2023-03-01
🏛️ International Euromicro Conference on Parallel, Distributed and Network-Based Processing
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Edge TPUs suffer from limited on-chip memory, causing frequent host memory accesses and severely constraining end-to-end throughput for multi-model inference. To address this bottleneck, we propose a fine-grained model partitioning and cross-TPU pipelined inference method guided by performance profiling. Our approach tightly integrates layer-wise performance profiling, graph-structure-aware model splitting, multi-device coordinated pipelined scheduling, and Edge TPU compiler optimizations—enabling dynamic, layer-specific computational load distribution and joint optimization of memory and bandwidth resources. Evaluated on four Edge TPUs, our method achieves 46× and 6× end-to-end inference speedup for fully connected and convolutional networks, respectively. It significantly enhances real-time inference throughput for large models at the edge and, for the first time, establishes an efficient, scalable multi-TPU pipelined inference architecture on resource-constrained edge devices.

📝 Abstract
In this paper, we systematically evaluate the inference performance of the Edge TPU by Google for neural networks with different characteristics. Specifically, we determine that, given the limited amount of on-chip memory on the Edge TPU, accesses to external (host) memory rapidly become an important performance bottleneck. We demonstrate how multiple devices can be jointly used to alleviate the bottleneck introduced by accessing the host memory. We propose a solution combining model segmentation and pipelining on up to four TPUs, with remarkable performance improvements that range from 6x for neural networks with convolutional layers to 46x for fully connected layers, compared with single-TPU setups.
Problem

Research questions and friction points this paper is trying to address.

Optimize inference time in multi-TPU systems
Address memory bottleneck in Edge TPU systems
Improve performance using model segmentation and pipelining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model segmentation reduces host memory access.
Pipelining enhances multi-TPU system efficiency.
End-to-end speedups reach 46× for fully connected networks and 6× for convolutional networks versus a single-TPU setup.
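The segmentation-plus-pipelining idea described above can be sketched in a few lines: split the model into per-device segments and connect them with queues, so segment i can start on input k+1 while segment i+1 is still processing input k. This is only an illustration, not the paper's implementation; the names (`run_pipeline`, `make_segment`) and the arithmetic stand-in for a model segment are hypothetical, and each thread merely represents one Edge TPU.

```python
import threading
import queue

def make_segment(offset):
    # Stand-in for one compiled model segment; a real deployment would
    # invoke a per-TPU interpreter here instead of simple arithmetic.
    def segment(x):
        return x + offset
    return segment

def run_pipeline(segments, inputs):
    # One queue between each pair of adjacent stages carries the
    # intermediate activations from one "device" to the next.
    stages = [queue.Queue() for _ in range(len(segments) + 1)]
    SENTINEL = object()  # marks end of the input stream

    def worker(seg, q_in, q_out):
        while True:
            item = q_in.get()
            if item is SENTINEL:
                q_out.put(SENTINEL)  # propagate shutdown downstream
                return
            q_out.put(seg(item))

    threads = [
        threading.Thread(target=worker, args=(seg, stages[i], stages[i + 1]))
        for i, seg in enumerate(segments)
    ]
    for t in threads:
        t.start()

    # Feed all inputs; stages overlap their work on successive inputs.
    for x in inputs:
        stages[0].put(x)
    stages[0].put(SENTINEL)

    results = []
    while True:
        item = stages[-1].get()
        if item is SENTINEL:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    segs = [make_segment(o) for o in (1, 2, 3, 4)]  # four "TPUs"
    print(run_pipeline(segs, range(5)))
```

On real Coral hardware, the `pycoral.pipeline` module offers a comparable `PipelinedModelRunner` that chains per-TPU interpreters over segments produced by the Edge TPU compiler's `--num_segments` option.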
J. Villarrubia
Dept. Arquitectura de Computadores y Automática, Universidad Complutense de Madrid, Madrid, Spain
Luis Costero
Universidad Complutense de Madrid
Parallel programming · resource management · energy efficiency
Francisco D. Igual
Universidad Complutense de Madrid
High Performance Computing · Dense Linear Algebra · GPU · GPGPU · DSP
Katzalin Olcoz
Dept. Arquitectura de Computadores y Automática, Universidad Complutense de Madrid, Madrid, Spain