🤖 AI Summary
To address the twin challenges of intrusive instrumentation and fragmented cross-layer analysis in full-stack performance monitoring of AI/ML systems, this paper proposes eACGM, an eBPF-based non-intrusive monitoring framework. The framework integrates eBPF kernel-level tracing with libnvml-driven process-level GPU telemetry, enabling zero-modification, real-time collection of performance metrics across the GPU hardware, networking, CUDA runtime, Python execution, and PyTorch framework layers. It introduces a multi-source heterogeneous time-series modeling approach, coupled with Gaussian Mixture Model (GMM)-driven unsupervised multidimensional anomaly clustering, to automate the detection and root-cause localization of complex failures, including latency spikes, hardware faults, and communication inefficiencies. Evaluated in multi-node distributed training environments, the framework incurs under 3% runtime overhead while maintaining high detection accuracy, demonstrating production-grade scalability.
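The multi-source modeling step implies fusing event streams sampled on different clocks, e.g. eBPF kernel-level events versus periodic NVML polls. The paper does not spell out its fusion algorithm, so the following is a minimal, hypothetical sketch of nearest-timestamp alignment using only the Python standard library; all field names (`ts`, `kernel`, `gpu_util`) are illustrative assumptions, not eACGM's actual schema:

```python
import bisect

def align_nearest(base_events, other_events, key="ts"):
    """For each base event, attach the other-source sample whose timestamp
    is closest, producing one fused record per base event.

    other_events is assumed to be sorted by timestamp."""
    other_ts = [e[key] for e in other_events]
    fused = []
    for ev in base_events:
        i = bisect.bisect_left(other_ts, ev[key])
        # candidate neighbors: the sample just before and just after ev
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_ts)]
        j = min(candidates, key=lambda j: abs(other_ts[j] - ev[key]))
        # base event fields win on collision, so the event keeps its own ts
        fused.append({**other_events[j], **ev})
    return fused

# Illustrative eBPF-style CUDA launch events fused with NVML utilization polls
cuda_events = [{"ts": 1.00, "kernel": "gemm"}, {"ts": 2.50, "kernel": "allreduce"}]
nvml_polls = [{"ts": 0.9, "gpu_util": 92}, {"ts": 2.4, "gpu_util": 41}]
print(align_nearest(cuda_events, nvml_polls))
```

A production collector would also bound the allowed timestamp gap and interpolate between polls, but nearest-neighbor joining already yields one row per event for downstream clustering.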
📝 Abstract
We present eACGM, a full-stack AI/ML system monitoring framework based on eBPF. eACGM collects real-time performance data from key hardware components, including the GPU and the network communication layer, as well as from major software stacks such as CUDA, Python, and PyTorch, all without requiring any code instrumentation or modification. Additionally, it leverages libnvml to gather process-level GPU resource usage. By applying a Gaussian Mixture Model (GMM) to the collected multidimensional performance metrics for statistical modeling and clustering analysis, eACGM identifies complex failure modes, such as latency anomalies, hardware failures, and communication inefficiencies, enabling rapid diagnosis of system bottlenecks and abnormal behaviors. To evaluate eACGM's effectiveness and practicality, we conducted extensive empirical studies and case analyses in multi-node distributed training scenarios. The results show that eACGM, while remaining non-intrusive and low-overhead, captures critical performance anomalies during model training and inference. Its stable anomaly detection performance and comprehensive monitoring capabilities validate its applicability and scalability in real-world production environments, providing strong support for performance optimization and fault diagnosis in large-scale AI/ML systems.