🤖 AI Summary
This work addresses the challenge of efficiently executing irregular fork-join task-parallel applications on GPUs by proposing a persistent-kernel-based runtime architecture. The design employs a state machine to partition task execution into distinct phases and introduces Execution-Path-Aware Queueing (EPAQ) to mitigate warp divergence, while enabling fine-grained task scheduling at both thread-block and thread granularity. Load balancing is achieved through work stealing, and a concise programming interface is provided via Clang frontend extensions using pragma annotations. Experimental results show that the proposed approach outperforms OpenMP task parallelism on a 72-core CPU on many irregular workloads, with EPAQ delivering up to a 1.8× speedup on Fibonacci.
📄 Abstract
Graphics Processing Units (GPUs) excel at regular data-parallel workloads where massive hardware parallelism can be readily exploited. In contrast, many important irregular applications are naturally expressed as task parallelism with a fork-join control structure. While CPU runtimes for fork-join task parallelism are mature, supporting it efficiently on GPUs remains challenging.
We propose GTaP, a GPU-resident runtime that supports fork-join task parallelism. GTaP is based on the persistent kernel model, and supports two worker granularities: thread blocks and individual threads. To realize fork-join on GPUs, GTaP represents joins as continuations and executes each task as a state machine that can be split into multiple execution segments. We also extend Clang's frontend with a pragma-based programming model that enables programmers to express fork-join without exposing low-level mechanisms. GTaP employs work stealing for load balancing, providing better scalability than a global-queue approach. For thread-level workers, we further introduce Execution-Path-Aware Queueing (EPAQ), which allows programmers to partition task queues using user-defined criteria, reducing warp divergence caused by mixing heterogeneous control flows within a warp.
Across representative irregular applications, GTaP outperforms OpenMP task-parallel execution on a 72-core CPU in many cases, especially for large problem sizes with compute-intensive tasks. We also show that GTaP's design choices outperform naive GPU alternatives. The benefit of EPAQ is workload-dependent: it can improve performance for some benchmarks while having little effect on others; on Fibonacci, EPAQ achieves up to a 1.8$\times$ speedup.