🤖 AI Summary
While multiserver SRPT (SRPT-$k$) is asymptotically optimal in the heavy-traffic limit, no scheduling policy had previously been rigorously proven to outperform it across all loads and job size distributions in the M/G/$k$ queue.
Method: We propose SEK-SMOD, a novel scheduling policy that selectively prioritizes large jobs to improve server utilization. To enable rigorous analysis, we develop a relative deviation framework unifying worst-case and stochastic analysis, and design a lightweight practical variant, Practical-SEK.
Contribution/Results: We provide the first formal proof that SEK-SMOD strictly dominates SRPT-$k$ in mean response time for arbitrary loads and job size distributions. Extensive simulations corroborate the theoretical analysis, and Practical-SEK achieves significant reductions in mean response time under realistic workloads, surpassing SRPT-$k$'s long-standing benchmark.
📝 Abstract
A well-designed scheduling policy can unlock significant performance improvements with no additional resources. Multiserver SRPT (SRPT-$k$) is known to achieve asymptotically optimal mean response time in the heavy-traffic limit, as load approaches capacity, and no better policy is known for the M/G/$k$ queue in any regime.
We introduce a new policy, SRPT-Except-$k+1$ & Modified SRPT (SEK-SMOD), which is the first policy to provably achieve lower mean response time than SRPT-$k$. SEK-SMOD outperforms SRPT-$k$ across all loads and all job size distributions. The key idea behind SEK-SMOD is to prioritize large jobs over small jobs in specific scenarios to improve server utilization, thereby improving the response time of subsequent jobs in expectation. Our proof is a novel application of hybrid worst-case and stochastic techniques to relative analysis, in which we bound the deviations of SEK-SMOD away from the SRPT-$k$ baseline policy. Furthermore, we design Practical-SEK, a simplified variant of SEK-SMOD, and empirically verify its improvement over SRPT-$k$ via simulation.
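To make the SRPT-$k$ baseline concrete, below is a minimal event-driven simulation sketch: at every arrival or departure, the $k$ jobs with the smallest remaining size occupy the servers (preemptively). This is an illustrative sketch under simplifying assumptions of our own (exponential job sizes, i.e., M/M/$k$ rather than general M/G/$k$; all function and variable names are ours), and it does not model SEK-SMOD's exceptions that occasionally favor large jobs.

```python
import random

def simulate_srpt_k(k=2, load=0.8, n_jobs=20000, seed=0):
    """Mean response time of an M/M/k queue under SRPT-k scheduling.

    At each event, the k jobs with smallest remaining size are served;
    preemption happens implicitly by re-sorting at every event.
    """
    rng = random.Random(seed)
    mean_size = 1.0
    lam = load * k / mean_size      # arrival rate giving utilization = load
    t = 0.0
    next_arrival = rng.expovariate(lam)
    jobs = []                       # [remaining_size, arrival_time] per job
    arrivals = completed = 0
    total_resp = 0.0
    while completed < n_jobs:
        jobs.sort(key=lambda j: j[0])           # SRPT priority order
        served = jobs[:k]                       # k shortest-remaining run
        next_done = t + min((j[0] for j in served), default=float("inf"))
        if arrivals < n_jobs and next_arrival <= next_done:
            dt = next_arrival - t               # advance to the arrival
            for j in served:
                j[0] -= dt
            t = next_arrival
            jobs.append([rng.expovariate(1.0 / mean_size), t])
            arrivals += 1
            next_arrival = t + rng.expovariate(lam)
        else:
            dt = next_done - t                  # advance to a completion
            for j in served:
                j[0] -= dt
            t = next_done
            for j in [j for j in served if j[0] <= 1e-12]:
                total_resp += t - j[1]          # response = departure - arrival
                jobs.remove(j)
                completed += 1
    return total_resp / completed
```

One can check, for example, that mean response time grows sharply as load approaches 1, which is exactly the heavy-traffic regime where SRPT-$k$'s asymptotic optimality applies.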