🤖 AI Summary
Edge TPUs suffer from limited on-chip memory, causing frequent host memory accesses that severely constrain end-to-end throughput for multi-model inference. To address this bottleneck, we propose a fine-grained model partitioning and cross-TPU pipelined inference method guided by performance profiling. Our approach tightly integrates layer-wise performance profiling, graph-structure-aware model splitting, multi-device coordinated pipelined scheduling, and Edge TPU compiler optimizations—enabling dynamic, layer-specific distribution of computational load and joint optimization of memory and bandwidth resources. Evaluated on up to four Edge TPUs, our method achieves 46× and 6× end-to-end inference speedups for fully connected and convolutional networks, respectively. It significantly enhances real-time inference throughput for large models at the edge and, for the first time, establishes an efficient, scalable multi-TPU pipelined inference architecture on resource-constrained edge devices.
📝 Abstract
In this paper, we systematically evaluate the inference performance of Google's Edge TPU for neural networks with different characteristics. Specifically, we determine that, given the limited amount of on-chip memory on the Edge TPU, accesses to external (host) memory quickly become a major performance bottleneck. We demonstrate how multiple devices can be used jointly to alleviate the bottleneck introduced by host memory accesses. We propose a solution combining model segmentation and pipelining on up to four TPUs, with performance improvements ranging from 6x for neural networks with convolutional layers to 46x for fully connected networks, compared with a single-TPU setup.
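The segmentation-plus-pipelining idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the "segments" here are plain Python functions standing in for model partitions, so the sketch runs without hardware. On real devices, each stage would instead invoke a TFLite interpreter bound to a different Edge TPU, so that the stages' compute and host-memory transfers overlap across devices.

```python
import queue
import threading

def run_stage(segment, in_q, out_q):
    """Consume inputs, apply one model segment, forward results downstream."""
    while True:
        item = in_q.get()
        if item is None:          # sentinel: propagate shutdown and exit
            out_q.put(None)
            break
        out_q.put(segment(item))

def pipeline(segments, inputs):
    """Run `inputs` through `segments` as a multi-stage pipeline.

    One thread per segment; queues connect consecutive stages. With one
    device per stage, overlapping stage execution is what hides each
    device's host-memory stalls behind useful work on the others.
    """
    queues = [queue.Queue() for _ in range(len(segments) + 1)]
    threads = [
        threading.Thread(target=run_stage, args=(seg, queues[i], queues[i + 1]))
        for i, seg in enumerate(segments)
    ]
    for t in threads:
        t.start()
    for x in inputs:              # feed the whole batch into stage 0
        queues[0].put(x)
    queues[0].put(None)           # end-of-stream marker
    results = []
    while (r := queues[-1].get()) is not None:
        results.append(r)
    for t in threads:
        t.join()
    return results

# Toy example: a 4-"TPU" pipeline of arithmetic stand-in segments.
segments = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * x]
print(pipeline(segments, [1, 2, 3]))  # → [1, 9, 25]
```

Because each stage has a single worker thread and FIFO queues, output order matches input order; the speedup comes from consecutive inputs occupying different stages (devices) at the same time.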