Collaborative Processing for Multi-Tenant Inference on Memory-Constrained Edge TPUs

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant inference latency incurred by frequent data exchanges between host memory and edge TPUs due to their limited on-chip memory, a problem exacerbated under multi-tenant and dynamic workloads. The authors propose SwapLess, a novel system that introduces the first analytical queueing model jointly accounting for intra- and inter-model swapping overheads as well as CPU and TPU service times, enabling low-overhead, high-fidelity online resource scheduling. SwapLess adaptively partitions compute tasks between CPU and TPU, dynamically optimizing both the partitioning point and CPU core allocation to minimize end-to-end response time. Experimental results on an Edge TPU platform demonstrate that SwapLess reduces average latency by up to 63.8% in single-tenant scenarios and up to 77.4% in multi-tenant settings compared to the default compiler.

📝 Abstract
IoT applications are increasingly relying on on-device AI accelerators to ensure high performance, especially in limited connectivity and safety-critical scenarios. However, the limited on-chip memory of these accelerators forces inference runtimes to swap model segments between host and accelerator memory, substantially inflating latency. While collaborative processing by partitioning the model processing between CPU and accelerator resources can reduce accelerator memory pressure and latency, naive partitioning may worsen end-to-end latency by either shifting excessive computation to the CPU or failing to sufficiently curb swapping, a problem that is further amplified in multi-tenant and dynamic environments. To address these issues, we present SwapLess, a system for adaptive, multi-tenant TPU-CPU collaborative inference for memory-constrained Edge TPUs. SwapLess utilizes an analytic queueing model that captures partition-dependent CPU/TPU service times as well as inter- and intra-model swapping overheads across different workload mixes and request rates. Using this model, SwapLess continuously adjusts both the partition point and CPU core allocation online to minimize end-to-end response time with low decision overhead. An implementation on Edge TPU-equipped platforms demonstrates that SwapLess reduces mean latency by up to 63.8% for single-tenant workloads and up to 77.4% for multi-tenant workloads relative to the default Edge TPU compiler.
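The abstract describes an online search over two knobs, the CPU/TPU partition point and the CPU core allocation, guided by a queueing model of partition-dependent service times and swapping overheads. The paper's actual model is not given here, so the following is only a minimal illustrative sketch: it treats the CPU and TPU stages as independent M/M/1 queues, folds swapping into the TPU service time via a hypothetical `swap_cost` function, and picks the partition point that minimizes the summed sojourn times. All function names, costs, and the queueing discipline are assumptions for illustration, not SwapLess's implementation.

```python
# Illustrative sketch only: the layer costs, the swap_cost model, and the
# M/M/1 assumption are hypothetical stand-ins for the paper's queueing model.

def response_time(p, cores, arrival_rate,
                  cpu_layer_cost, tpu_layer_cost, swap_cost):
    """Estimate mean end-to-end response time when layers [0, p) run on
    the CPU and layers [p, n) run on the TPU, using an M/M/1 sojourn-time
    approximation for each stage."""
    # CPU service time shrinks as more cores are allocated.
    cpu_service = sum(cpu_layer_cost[:p]) / cores
    # TPU service time includes a swapping overhead that depends on how
    # much of the model must reside in scarce on-chip memory.
    tpu_service = sum(tpu_layer_cost[p:]) + swap_cost(p)
    total = 0.0
    for s in (cpu_service, tpu_service):
        rho = arrival_rate * s          # stage utilization
        if rho >= 1.0:                  # unstable stage: unbounded queue
            return float("inf")
        total += s / (1.0 - rho)        # M/M/1 mean sojourn time
    return total

def best_partition(cores, arrival_rate,
                   cpu_layer_cost, tpu_layer_cost, swap_cost):
    """Pick the partition point minimizing modeled response time."""
    n = len(cpu_layer_cost)
    return min(range(n + 1),
               key=lambda p: response_time(p, cores, arrival_rate,
                                           cpu_layer_cost,
                                           tpu_layer_cost, swap_cost))
```

With a toy three-layer model where keeping the whole network on the TPU incurs a large swap penalty, the sketch moves one layer to the CPU rather than shifting everything, mirroring the paper's observation that both extremes (all-TPU swapping and excessive CPU offload) inflate latency.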
Problem

Research questions and friction points this paper is trying to address.

multi-tenant inference
memory-constrained Edge TPUs
model partitioning
latency optimization
collaborative processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

collaborative inference
memory-constrained Edge TPU
multi-tenant
adaptive partitioning
queueing model