Fulcrum: Optimizing Concurrent DNN Training and Inferencing on Edge Accelerators

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Edge accelerators such as the Nvidia Jetson lack native GPU sharing, expose thousands of power modes, and make it hard to coordinate concurrent DNN training and inference efficiently. To address this, we propose Fulcrum, an intelligent time-slicing scheduler that interleaves training and inference minibatches while jointly choosing the device power mode and the inference minibatch size. Two search strategies drive this choice: GMD, a multi-dimensional gradient descent that converges to good configurations with very low profiling overhead (about 15 profiled power modes), and ALS, an Active Learning technique that identifies reusable Pareto-optimal power modes at a higher profiling cost (50 to 150 power modes). The scheduler maximizes training throughput under joint latency and power constraints. Evaluated across a configuration space of more than 273,000 possibilities, our solutions satisfy both constraints in over 97% of runs and are, on average, within 7% of the optimal throughput, outperforming both simpler and more heavyweight baselines.
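Read as an optimization problem, the scheduling decision above can be stated roughly as follows (a hedged restatement with illustrative symbols, not the paper's exact notation): choose a power mode p from the device's set of power modes and an inference minibatch size b so as to

\max_{p \in \mathcal{P},\; b \in \mathcal{B}} \; T_{\mathrm{train}}(p, b) \quad \text{s.t.} \quad L_{\mathrm{inf}}(p, b) \le L_{\mathrm{budget}}, \quad W(p, b) \le W_{\mathrm{budget}}

where T_train is the training throughput, L_inf the inference latency, and W the device power draw, each estimated from a small number of profiled configurations.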

📝 Abstract
The proliferation of GPU accelerated edge devices like Nvidia Jetsons and the rise in privacy concerns are placing an emphasis on concurrent DNN training and inferencing on edge devices. Inference and training have different computing and QoS goals. But edge accelerators like Jetson do not support native GPU sharing and expose 1000s of power modes. This requires careful time-sharing of concurrent workloads to meet power–performance goals, while limiting costly profiling. In this paper, we design an intelligent time-slicing approach for concurrent DNN training and inferencing on Jetsons. We formulate an optimization problem to interleave training and inferencing minibatches, and decide the device power mode and inference minibatch size, while maximizing the training throughput and staying within latency and power budgets, with modest profiling costs. We propose GMD, an efficient multi-dimensional gradient descent search which profiles just 15 power modes; and ALS, an Active Learning technique which identifies reusable Pareto-optimal power modes, but profiles 50–150 power modes. We evaluate these within our Fulcrum scheduler for 273,000+ configurations across 15 DNN workloads. We also evaluate our strategies on dynamic arrival inference and concurrent inferences. ALS and GMD outperform simpler and more complex baselines with larger-scale profiling. Their solutions satisfy the latency and power budget for >97% of our runs, and on average are within 7% of the optimal throughput.
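To make the GMD idea concrete, below is a minimal Python sketch of a multi-dimensional descent over Jetson-like power-mode dimensions (CPU core count, CPU/GPU/memory frequencies) plus the inference minibatch size, capped at roughly 15 profiled configurations as in the paper. The dimension levels, budgets, and the toy profile() model are illustrative assumptions, not Fulcrum's actual implementation.

# Assumed discrete dimensions; real Jetson power-mode tables expose many more levels.
DIMENSIONS = {
    "cpu_cores":   [2, 4, 6, 8],
    "cpu_freq":    [730, 1190, 1650, 2200],   # MHz (illustrative)
    "gpu_freq":    [420, 620, 820, 1300],     # MHz
    "mem_freq":    [1600, 2133, 3200],        # MHz
    "infer_batch": [1, 2, 4, 8, 16],
}

LATENCY_BUDGET = 0.10   # s per inference minibatch, assumed
POWER_BUDGET = 20.0     # W, assumed

def profile(cfg):
    """Stand-in for briefly running and measuring one configuration on the device.
    Returns (training throughput, inference latency, power). Toy model only."""
    gpu = cfg["gpu_freq"] / 1300
    cpu = cfg["cpu_freq"] / 2200 * cfg["cpu_cores"] / 8
    mem = cfg["mem_freq"] / 3200
    thr = 100 * gpu * (0.6 + 0.4 * mem)                 # samples/s
    lat = 0.02 * cfg["infer_batch"] / (gpu + 0.2 * cpu) # s
    pwr = 5 + 12 * gpu + 4 * cpu + 2 * mem              # W
    return thr, lat, pwr

def score(metrics):
    # Feasible configs are ranked by training throughput; infeasible ones lose.
    thr, lat, pwr = metrics
    return thr if (lat <= LATENCY_BUDGET and pwr <= POWER_BUDGET) else float("-inf")

def gmd_search(start, max_profiles=15):
    """Greedy descent: step one level up/down along each dimension, keep the best
    feasible neighbour, stop when no move helps or the profiling cap is reached."""
    cfg, best = dict(start), profile(start)
    profiles, improved = 1, True
    while improved and profiles < max_profiles:
        improved = False
        for dim, levels in DIMENSIONS.items():
            i = levels.index(cfg[dim])
            for j in (i - 1, i + 1):
                if 0 <= j < len(levels) and profiles < max_profiles:
                    cand = dict(cfg, **{dim: levels[j]})
                    metrics = profile(cand)
                    profiles += 1
                    if score(metrics) > score(best):
                        cfg, best, improved = cand, metrics, True
    return cfg, best

if __name__ == "__main__":
    start = {"cpu_cores": 4, "cpu_freq": 1190, "gpu_freq": 620,
             "mem_freq": 2133, "infer_batch": 4}
    cfg, (thr, lat, pwr) = gmd_search(start)
    print("config:", cfg)
    print(f"throughput {thr:.1f} samples/s, latency {lat:.3f} s, power {pwr:.1f} W")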
Problem

Research questions and friction points this paper is trying to address.

Optimizing concurrent DNN training and inferencing on edge GPU accelerators
Managing power-performance trade-offs without native GPU sharing support
Minimizing costly profiling while meeting latency and power constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intelligent time-slicing that interleaves training and inference minibatches (see the sketch after this list)
GMD, a multi-dimensional gradient descent search that profiles only about 15 power modes
ALS, an Active Learning technique that identifies reusable Pareto-optimal power modes (50 to 150 profiled modes)
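A minimal sketch of the time-slicing idea, assuming a single GPU that alternates one training minibatch with any queued inference requests; the function names, batch size, and timings below are hypothetical placeholders rather than Fulcrum's scheduler.

import time
from collections import deque

INFER_BATCH_SIZE = 8      # in the paper this is chosen by the GMD/ALS search

def run_train_minibatch():
    time.sleep(0.02)      # placeholder for one training minibatch on the GPU

def run_inference_minibatch(batch):
    time.sleep(0.005)     # placeholder for one inference minibatch

def time_sliced_loop(requests, num_train_minibatches):
    """Alternate one training minibatch with draining queued inference requests
    in minibatches, so neither workload monopolizes the GPU."""
    for _ in range(num_train_minibatches):
        run_train_minibatch()
        while requests:                       # serve pending inferences next
            batch = [requests.popleft()
                     for _ in range(min(INFER_BATCH_SIZE, len(requests)))]
            run_inference_minibatch(batch)

if __name__ == "__main__":
    queue = deque(range(32))                  # 32 queued inference requests
    time_sliced_loop(queue, num_train_minibatches=10)
    print("training done; inference queue empty:", len(queue) == 0)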
Prashanthi S. K.
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Saisamarth Taluri
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Pranav Gupta
Assistant Professor, Gies College of Business, UIUC
Collective Intelligence, Human-AI Teaming, Transactive Attention, Digital Nudging
Amartya Ranjan Saikia
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Kunal Kumar Sahoo
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Atharva Vinay Joshi
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Lakshya Karwa
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Kedar Dhule
Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012 India
Yogesh Simmhan
Associate Professor, Indian Institute of Science
Distributed Systems, Edge Accelerators, Graph Analytics, Cloud Computing, Federated Learning