🤖 AI Summary
This work addresses the challenge of deploying large-scale or multi-model deep neural network (DNN) inference on memory-constrained mobile GPUs, where conventional full-preloading strategies suffer from excessive memory consumption and high latency. To overcome these limitations, we propose FlashMem, a memory-efficient streaming inference framework tailored for mobile GPUs. FlashMem integrates static scheduling with dynamic on-demand weight streaming and leverages 2.5D texture memory to optimize data layout, thereby minimizing data-conversion overhead and effectively exploiting the GPU memory hierarchy. Experimental results across 11 diverse models demonstrate that FlashMem reduces memory usage by 2.0–8.4× and accelerates inference by 1.7–75.0× compared to state-of-the-art approaches, enabling efficient concurrent inference of large and multiple models on resource-constrained mobile devices.
📝 Abstract
The increasing size and complexity of modern deep neural networks (DNNs) pose significant challenges for on-device inference on mobile GPUs, which have limited memory and computational resources. Existing DNN acceleration frameworks primarily employ a weight-preloading strategy, where all model parameters are loaded into memory before execution on the mobile GPU. We posit that this approach is inadequate for modern DNN workloads, which may comprise very large models and the execution of several distinct models in succession. In this work, we introduce FlashMem, a memory-streaming framework designed to efficiently execute large-scale modern DNNs and multi-DNN workloads while minimizing memory consumption and reducing inference latency. Instead of fully preloading weights, FlashMem statically determines weight-loading schedules and dynamically streams weights on demand, leveraging 2.5D texture memory to minimize data transformations and improve execution efficiency. Experimental results on 11 models demonstrate that FlashMem achieves 2.0x to 8.4x memory reduction and 1.7x to 75.0x speedup compared to existing frameworks, enabling efficient execution of large-scale models and multi-DNN support on resource-constrained mobile GPUs.
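To make the core idea concrete, below is a minimal, illustrative sketch of on-demand weight streaming with double buffering: while the current layer computes, a background thread prefetches the next layer's weights, so at most two layers' weights are resident at once instead of the whole model. This is not FlashMem's implementation; the helpers `load_weights_from_storage` and `compute_layer` are hypothetical placeholders, and the actual framework uses a statically determined schedule and 2.5D texture memory on the GPU.

```python
# Illustrative double-buffered weight-streaming sketch (not FlashMem's API).
import threading

NUM_LAYERS = 8

def load_weights_from_storage(layer_id):
    # Placeholder for reading one layer's weights from flash storage.
    return [float(layer_id)] * 1024

def compute_layer(activations, weights):
    # Placeholder for launching the layer's GPU kernel.
    return [a + w for a, w in zip(activations, weights)]

def streamed_inference(inputs):
    activations = inputs
    # Prime the pipeline with the first layer's weights.
    current = load_weights_from_storage(0)
    for layer in range(NUM_LAYERS):
        prefetched = {}
        prefetcher = None
        if layer + 1 < NUM_LAYERS:
            # Overlap I/O with compute: fetch the next layer's weights
            # while the current layer executes.
            prefetcher = threading.Thread(
                target=lambda l=layer + 1: prefetched.update(
                    {"w": load_weights_from_storage(l)}
                )
            )
            prefetcher.start()
        activations = compute_layer(activations, current)
        if prefetcher is not None:
            prefetcher.join()
            current = prefetched["w"]  # previous layer's weights can now be freed
    return activations

if __name__ == "__main__":
    print(streamed_inference([0.0] * 1024)[:4])
```

The memory benefit follows directly: resident weight footprint drops from the sum over all layers to roughly the two largest consecutive layers, which is why streaming helps most for deep, large models and multi-DNN workloads.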