🤖 AI Summary
AI-driven ML workloads on HPC systems exhibit a novel I/O pattern, characterized by large volumes of small, random reads spread across many files, that diverges significantly from traditional HPC applications and causes severe performance bottlenecks in parallel file systems (e.g., Lustre, GPFS).
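The contrast between the two access patterns can be sketched in a few lines of Python. This is a minimal illustration, not taken from the paper: the file names, chunk sizes, and one-file-per-sample layout are all illustrative assumptions, chosen only to show how per-epoch shuffling turns a dataset of a given size into many small random reads instead of a few large sequential ones.

```python
import random

def hpc_style_access(num_bytes, chunk=64 * 1024 * 1024):
    """Traditional HPC pattern: a few large, sequential reads of one big file."""
    offsets = range(0, num_bytes, chunk)
    return [("bigfile.dat", off, min(chunk, num_bytes - off)) for off in offsets]

def ml_style_access(num_samples, sample_bytes=128 * 1024, seed=0):
    """ML training pattern: one small read per sample, in shuffled order,
    spread across many files (one file per sample here, for illustration)."""
    order = list(range(num_samples))
    random.Random(seed).shuffle(order)  # a fresh permutation every epoch
    return [(f"sample_{i:06d}.jpg", 0, sample_bytes) for i in order]

hpc = hpc_style_access(1 << 30)  # 1 GiB dataset as one large file
ml = ml_style_access(8192)       # the same 1 GiB as 8192 samples of 128 KiB
print(len(hpc), "large sequential reads vs", len(ml), "small random reads")
```

For the same 1 GiB of data, the traditional pattern issues 16 large sequential requests, while the ML pattern issues 8192 small requests to 8192 distinct files in random order, which is the shift in metadata and read load that stresses parallel file systems.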
Method: Drawing on empirical studies conducted between 2019 and 2024 and a bibliometric analysis of 300+ publications, we develop the first comprehensive analytical framework for ML I/O on HPC systems. Leveraging I/O profiling tools (IOtracer, Darshan, LMT) and runtime logs from PyTorch/TensorFlow, we systematically characterize I/O behavior across the data preprocessing, training, and inference stages.
Contribution/Results: We identify six critical research gaps and propose I/O-aware ML-system co-design principles. This work establishes a foundational theoretical framework and practical guidelines for designing AI-ready HPC storage architectures, bridging the gap between ML workload requirements and HPC I/O system capabilities.
📝 Abstract
Growing interest in Artificial Intelligence (AI) has resulted in a surge in demand for faster methods of Machine Learning (ML) model training and inference. This demand for speed has prompted the use of high-performance computing (HPC) systems, which excel at managing distributed workloads. Because data is the main fuel for AI applications, the performance of the storage and I/O subsystem of HPC systems is critical. Traditionally, HPC applications accessed large, contiguous portions of data: writing output from simulations or experiments, or ingesting data for visualization or analysis tasks. ML workloads, in contrast, perform small reads spread across a large number of files in random order. This shift in I/O access patterns poses several challenges to modern parallel storage systems. In this paper, we survey I/O in ML applications on HPC systems, targeting literature within a six-year window from 2019 to 2024. We define the scope of the survey, provide an overview of the common phases of ML, review available profilers and benchmarks, examine the I/O patterns encountered during offline data preparation, training, and inference, and explore I/O optimizations employed in modern ML frameworks and proposed in recent literature. Lastly, we seek to expose research gaps that could spawn further research and development.