🤖 AI Summary
To address critical challenges in dynamic resource scaling for MPI applications—namely, high expansion overhead and the inability to release nodes during contraction—this paper proposes a collaborative resource adaptation mechanism based on parallel *spawn*. The approach enables communicator-level elastic scaling via a fully process-coordinated *spawn*, supporting process reuse and cross-node communication optimization across homogeneous, heterogeneous, and shared-resource environments. Compared to state-of-the-art methods, expansion overhead is bounded within 1.25×, while contraction cost is reduced by at least 20×. This significantly shortens job makespan, accelerates node resource reclamation, and improves overall system utilization. The core innovation lies in overcoming the traditional MPI constraint that prevents node release during contraction, thereby achieving, for the first time, efficient and symmetric bidirectional dynamic scaling.
📝 Abstract
Dynamic resource management is an increasingly important capability of High Performance Computing systems, as it enables jobs to adjust their resource allocation at runtime. This capability has been shown to reduce workload makespan, substantially decrease job waiting times, and improve overall system utilization. In this context, malleability refers to the ability of applications to adapt to new resource allocations during execution. Although beneficial, malleability incurs significant reconfiguration costs, making the reduction of these costs an important research topic. Some existing methods for MPI applications respawn the entire application, an expensive solution that precludes the reuse of the original processes. Other MPI methods do reuse them, but fail to fully release unneeded processes when shrinking: because some ranks within the same communicator remain active across nodes, the application cannot return those nodes to the system. This work overcomes both limitations by proposing a novel parallel spawning strategy in which all processes cooperate in spawning before redistribution, thereby reducing execution time. It also removes these shrinkage limitations, allowing parallel systems to adapt better to their workload and reducing makespan. As a result, the strategy preserves competitive expansion times with at most a $1.25\times$ overhead, while enabling fast shrink operations whose cost is reduced by at least $20\times$. It has been validated on both homogeneous and heterogeneous systems and can also be applied in shared-resource environments.