🤖 AI Summary
GPU performance log analysis faces critical bottlenecks—terabyte-scale memory consumption and multi-hour processing latency—severely hindering real-time diagnostics and integration into automated workflows. To address this, we propose the first distributed, parallel analysis framework tailored for high-dimensional GPU trace data. Our method leverages data sharding and decentralized, MPI-based concurrent processing across nodes, drastically reducing per-node memory pressure. It natively supports end-to-end analysis of real-world HPC and AI workloads captured by Nsight Compute, enabling fine-grained quantification of correlations between kernel performance and memory transfer latency. Experiments on production traces demonstrate that the framework accurately identifies memory transfer latency as a key determinant of kernel performance degradation. Moreover, analysis throughput scales nearly linearly with both data volume and available compute resources, reducing TB-scale log analysis time from hours to minutes and lowering memory overhead by an order of magnitude.
📝 Abstract
Analyzing large-scale performance logs from GPU profilers often requires terabytes of memory and hours of runtime, even for basic summaries. These constraints prevent timely insight and hinder the integration of performance analytics into automated workflows. Existing analysis tools typically process data sequentially, making them ill-suited for HPC workflows with growing trace complexity and volume. We introduce a distributed data analysis framework that scales with dataset size and compute availability. Rather than treating the dataset as a single entity, our system partitions it into independently analyzable shards and processes them concurrently across MPI ranks. This design reduces per-node memory pressure, avoids central bottlenecks, and enables low-latency exploration of high-dimensional trace data. We apply the framework to end-to-end Nsight Compute traces from real HPC and AI workloads, demonstrate its ability to diagnose performance variability, and uncover the impact of memory transfer latency on GPU kernel behavior.
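The shard-and-reduce pattern the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: trace records are round-robin sharded across ranks, each rank computes local per-kernel aggregates, and a reduction merges them. Here the ranks are simulated sequentially so the example stays self-contained; in the real system each rank would be an MPI process (e.g. via `mpi4py`, with the merge step replaced by an `MPI_Reduce`-style collective).

```python
# Hypothetical sketch of the shard-and-reduce design, assuming trace records
# of the form (kernel_name, duration_us). Ranks are simulated in a loop.
from collections import defaultdict

def shard(records, n_ranks, rank):
    """Round-robin shard: rank r receives records r, r+n, r+2n, ..."""
    return records[rank::n_ranks]

def local_aggregate(shard_records):
    """Per-rank pass over its shard: accumulate duration sums and counts."""
    acc = defaultdict(lambda: [0.0, 0])
    for kernel, duration_us in shard_records:
        acc[kernel][0] += duration_us
        acc[kernel][1] += 1
    return dict(acc)

def merge(partials):
    """Reduction step (MPI_Reduce analogue): combine per-rank partials."""
    total = defaultdict(lambda: [0.0, 0])
    for part in partials:
        for kernel, (dur_sum, count) in part.items():
            total[kernel][0] += dur_sum
            total[kernel][1] += count
    # Final statistic: mean duration per kernel.
    return {k: dur_sum / count for k, (dur_sum, count) in total.items()}

records = [("gemm", 120.0), ("memcpy", 40.0), ("gemm", 80.0), ("memcpy", 60.0)]
n_ranks = 2
partials = [local_aggregate(shard(records, n_ranks, r)) for r in range(n_ranks)]
print(merge(partials))  # → {'gemm': 100.0, 'memcpy': 50.0}
```

Because each rank only ever holds its own shard, peak memory per node drops roughly in proportion to the rank count, which is the mechanism behind the order-of-magnitude memory reduction the summary claims.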