PISA: An Adversarial Approach To Comparing Task Graph Scheduling Algorithms

📅 2024-03-11
📈 Citations: 1
Influential: 0
🤖 AI Summary
Fair evaluation of task-graph scheduling algorithms in heterogeneous distributed systems remains challenging due to insufficient structural coverage of conventional benchmarks. Method: This paper proposes PISA, a Simulated Annealing–based adversarial analysis framework, featuring a novel adversarial instance generation mechanism that systematically explores hard problem instances beyond standard benchmarks. It is complemented by SAGA, an open-source evaluation library enabling systematic comparison of 15 scheduling algorithms across 16 diverse datasets. Contribution/Results: Experiments reveal that algorithms exhibiting comparable performance on standard benchmarks diverge significantly—by up to 3.2× in makespan—on PISA-generated adversarial instances, demonstrating their performance boundaries are highly sensitive to task-graph structure. PISA thus establishes a new paradigm for robustness assessment of scheduling algorithms and provides a reproducible, extensible toolchain for rigorous empirical evaluation.

📝 Abstract
Scheduling a task graph representing an application over a heterogeneous network of computers is a fundamental problem in distributed computing. It is known to be not only NP-hard but also not polynomial-time approximable within a constant factor. As a result, many heuristic algorithms have been proposed over the past few decades. Yet it remains largely unclear how these algorithms compare to each other in terms of the quality of schedules they produce. We identify gaps in the traditional benchmarking approach to comparing task scheduling algorithms and propose a simulated annealing-based adversarial analysis approach called PISA to help address them. We also introduce SAGA, a new open-source library for comparing task scheduling algorithms. We use SAGA to benchmark 15 algorithms on 16 datasets and PISA to compare the algorithms in a pairwise manner. Algorithms that appear to perform similarly on benchmarking datasets are shown to perform very differently on adversarially chosen problem instances. Interestingly, the results indicate that this is true even when the adversarial search is constrained to selecting among well-structured, application-specific problem instances. This work represents an important step towards a more general understanding of the performance boundaries between task scheduling algorithms on different families of problem instances.
Problem

Research questions and friction points this paper is trying to address.

Understanding performance gaps between task graph scheduling algorithms
Addressing limitations of traditional benchmarking methods
Assessing algorithm robustness under adversarially chosen problem instances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulated annealing-based adversarial analysis approach
Open-source library for algorithm comparison
Pairwise adversarial comparison of scheduling algorithms
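The adversarial analysis described above can be pictured as a simulated annealing loop over problem instances: perturb an instance, keep the change if it widens the performance gap between two schedulers, and occasionally accept a regression to escape local optima. The sketch below is a minimal illustration of that idea, not PISA's actual implementation: it assumes independent tasks on unrelated machines (no task-graph dependencies), two toy schedulers (a load-aware greedy and a load-oblivious min-cost heuristic), and hypothetical helper names such as `adversarial_search`.

```python
import math
import random


def makespan(costs, assign, m):
    """Makespan of an assignment: max total load over the m machines."""
    loads = [0.0] * m
    for t, j in enumerate(assign):
        loads[j] += costs[t][j]
    return max(loads)


def greedy_earliest_finish(costs, m):
    """Toy scheduler A: place each task where it finishes earliest."""
    loads = [0.0] * m
    assign = []
    for row in costs:
        j = min(range(m), key=lambda k: loads[k] + row[k])
        loads[j] += row[j]
        assign.append(j)
    return assign


def min_cost(costs, m):
    """Toy scheduler B: cheapest machine per task, ignoring load."""
    return [min(range(m), key=lambda k: row[k]) for row in costs]


def adversarial_search(n_tasks=8, m=3, iters=2000, seed=0):
    """Simulated annealing over cost matrices, maximizing the makespan
    ratio of scheduler A to scheduler B (i.e. hunting for instances
    where A looks unusually bad relative to B)."""
    rng = random.Random(seed)
    costs = [[rng.uniform(1.0, 10.0) for _ in range(m)]
             for _ in range(n_tasks)]

    def ratio(c):
        a = makespan(c, greedy_earliest_finish(c, m), m)
        b = makespan(c, min_cost(c, m), m)
        return a / b

    cur = best = ratio(costs)
    best_costs = [row[:] for row in costs]
    temp = 1.0
    for _ in range(iters):
        # Perturb one random entry of the cost matrix.
        t, j = rng.randrange(n_tasks), rng.randrange(m)
        old = costs[t][j]
        costs[t][j] = max(0.1, old + rng.gauss(0.0, 1.0))
        new = ratio(costs)
        # Accept improvements; accept regressions with Boltzmann probability.
        if new >= cur or rng.random() < math.exp((new - cur) / max(temp, 1e-9)):
            cur = new
            if cur > best:
                best, best_costs = cur, [row[:] for row in costs]
        else:
            costs[t][j] = old  # revert the rejected move
        temp *= 0.999  # geometric cooling
    return best, best_costs
```

PISA applies this kind of search to full task graphs and real scheduling heuristics, and may constrain the perturbations to well-structured, application-specific instance families, as the abstract notes.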