Parallel Spawning Strategies for Dynamic-Aware MPI Applications

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address critical challenges in dynamic resource scaling for MPI applications—namely, high expansion overhead and the inability to release nodes during contraction—this paper proposes a collaborative resource adaptation mechanism based on parallel *spawn*. The approach enables communicator-level elastic scaling via coordinated *spawn* across all processes, supporting process reuse and cross-node communication optimization in homogeneous, heterogeneous, and shared-resource environments. Compared to state-of-the-art methods, expansion overhead is bounded by 1.25×, while contraction cost is reduced by at least 20×. This significantly shortens job makespan, accelerates node resource reclamation, and improves overall system utilization. The core innovation lies in overcoming the traditional MPI constraint that prevents node release during contraction, thereby achieving, for the first time, efficient and symmetric bidirectional dynamic scaling.

📝 Abstract
Dynamic resource management is an increasingly important capability of High Performance Computing systems, as it enables jobs to adjust their resource allocation at runtime. This capability has been shown to reduce workload makespan, substantially decrease job waiting times and improve overall system utilization. In this context, malleability refers to the ability of applications to adapt to new resource allocations during execution. Although beneficial, malleability incurs significant reconfiguration costs, making the reduction of these costs an important research topic. Some existing methods for MPI applications respawn the entire application, which is an expensive solution that avoids the reuse of original processes. Other MPI methods reuse them, but fail to fully release unneeded processes when shrinking, since some ranks within the same communicator remain active across nodes, preventing the application from returning those nodes to the system. This work overcomes both limitations by proposing a novel parallel spawning strategy, in which all processes cooperate in spawning before redistribution, thereby reducing execution time. Additionally, it removes shrinkage limitations, allowing better adaptation of parallel systems to workload and reducing their makespan. As a result, it preserves competitive expansion times with at most a $1.25\times$ overhead, while enabling fast shrink operations that reduce their cost by at least $20\times$. This strategy has been validated on both homogeneous and heterogeneous systems and can also be applied in shared-resource environments.
Problem

Research questions and friction points this paper is trying to address.

Reducing reconfiguration costs for malleable MPI applications during runtime
Overcoming limitations in releasing unneeded processes when shrinking resources
Improving resource adaptation and reducing makespan in parallel systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel spawning strategy reduces MPI reconfiguration costs
Cooperative process spawning enables efficient resource redistribution
Fast shrink operations decrease cost by at least 20 times
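The intuition behind cooperative spawning can be sketched with a toy cost model: if a single root rank must launch every new process sequentially, launch time grows with the number of new ranks, whereas if all existing ranks launch an equal share concurrently, wall time is only the largest share. The sketch below is a hypothetical illustration; the constants, function names, and the linear cost assumption are not taken from the paper.

```python
import math

SPAWN_COST = 1.0  # assumed cost to launch one new process (arbitrary units)


def root_only_respawn(old_ranks: int, new_ranks: int) -> float:
    """One rank launches every new process one after another,
    so total wall time scales with the number of new ranks."""
    return new_ranks * SPAWN_COST


def cooperative_spawn(old_ranks: int, new_ranks: int) -> float:
    """All existing ranks split the launches evenly and proceed
    concurrently; wall time is the largest per-rank share."""
    return math.ceil(new_ranks / old_ranks) * SPAWN_COST


if __name__ == "__main__":
    old, new = 8, 64
    seq = root_only_respawn(old, new)
    par = cooperative_spawn(old, new)
    print(f"root-only: {seq:.1f}  cooperative: {par:.1f}  "
          f"speedup: {seq / par:.1f}x")
```

Under this simplified model, expanding from 8 to 64 ranks takes 64 units with a root-only respawn but only 8 units cooperatively; real MPI spawn costs also include connection setup and data redistribution, which the paper's measured bounds account for.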
Iker Martín-Álvarez
Dpto. de Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Av. Vicent Sos Baynat, s/n, Castelló, 12071, Comunitat Valenciana, Spain
J. I. Aliaga
Dpto. de Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Av. Vicent Sos Baynat, s/n, Castelló, 12071, Comunitat Valenciana, Spain
Maribel Castillo
Dpto. de Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Av. Vicent Sos Baynat, s/n, Castelló, 12071, Comunitat Valenciana, Spain
Sergio Iserte
Senior Researcher @ BSC
HPC · Resource Management · Heterogeneous Computing · AI for Scientific Computing