Makespan Minimization in Split Learning: From Theory to Practice

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the joint client-to-compute-node assignment and task scheduling problem in split learning, with the objective of minimizing the makespan of distributed training. For both homogeneous and heterogeneous task settings, the study establishes the first theoretical results demonstrating that the problem admits no polynomial-time exact or approximation algorithms under several common assumptions. In response, the authors propose the first algorithm with a provable 5-approximation guarantee for the homogeneous case and further extend it into an efficient heuristic tailored for heterogeneous tasks. Extensive large-scale simulations show that the proposed method achieves the theoretical approximation bound in homogeneous scenarios and significantly outperforms existing approaches in heterogeneous environments, thereby confirming its effectiveness and practicality.

📝 Abstract
Split learning has recently emerged as a solution for distributed machine learning with heterogeneous IoT devices, where clients can offload part of their training to computationally powerful helpers. The core challenge in split learning is to minimize the training time by jointly devising the client-helper assignment and the schedule of tasks at the helpers. We first study the model where each helper has a memory cardinality constraint on how many clients it may be assigned, which represents the case of homogeneous tasks. Through complexity theory, we rule out exact polynomial-time algorithms and approximation schemes even for highly restricted instances of this problem. We complement these negative results with a non-trivial polynomial-time 5-approximation algorithm. Building on this, we then focus on the more general heterogeneous task setting considered by Tirana et al. [INFOCOM 2024], where helpers have memory capacity constraints and clients have variable memory costs. In this case, we prove that, unless P=NP, the problem cannot admit a polynomial-time approximation algorithm for any approximation factor. However, by adapting our aforementioned 5-approximation algorithm, we develop a novel heuristic for the heterogeneous task setting and show that it outperforms heuristics from prior works through extensive experiments.
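To make the problem concrete, the homogeneous setting can be sketched as a scheduling instance: each client contributes one offloaded task with a processing time, and each helper may serve at most a fixed number of clients. The sketch below is a simple longest-processing-time-first greedy with a cardinality cap, intended only to illustrate the assignment/makespan model; it is NOT the paper's 5-approximation algorithm, and the function name and data layout are assumptions for illustration.

```python
from heapq import heappush, heappop

def greedy_assign(task_times, num_helpers, capacity):
    """Illustrative LPT-style list scheduling with a per-helper
    cardinality cap (homogeneous-task model from the abstract).
    NOT the paper's 5-approximation algorithm."""
    if len(task_times) > num_helpers * capacity:
        raise ValueError("not enough total helper capacity")
    # Min-heap of (current_load, assigned_count, helper_id).
    heap = [(0.0, 0, h) for h in range(num_helpers)]
    assignment = {}
    # Place longer tasks first, each on the least-loaded helper
    # that still has spare cardinality.
    for client, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        popped = []
        load, cnt, h = heappop(heap)
        while cnt >= capacity:  # skip full helpers
            popped.append((load, cnt, h))
            load, cnt, h = heappop(heap)
        assignment[client] = h
        heappush(heap, (load + t, cnt + 1, h))
        for item in popped:  # restore skipped helpers
            heappush(heap, item)
    makespan = max(load for load, _, _ in heap)
    return assignment, makespan
```

For example, four clients with task times [4, 3, 3, 2] on two helpers of capacity 2 yield a makespan of 6, which happens to be optimal here; in general such greedy rules give no worst-case guarantee once the cardinality constraint binds, which is the gap the paper's 5-approximation addresses.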
Problem

Research questions and friction points this paper is trying to address.

makespan minimization
split learning
client-helper assignment
task scheduling
heterogeneous tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Split Learning
Makespan Minimization
Approximation Algorithm
Heterogeneous Tasks
Computational Complexity