🤖 AI Summary
Continuous, automated detection of performance regressions in open-source projects remains challenging. Method: This paper integrates the lightweight performance change detection service Nyrkiö into the MooBench benchmarking framework and builds an end-to-end CI pipeline using GitHub Actions. The approach enables low-overhead, reproducible performance monitoring for system components such as tracing agents: measurement data is automatically uploaded to Nyrkiö, which applies robust statistical analysis, including CUSUM and piecewise linear fitting, to detect significant performance deviations. Contribution/Results: The experiments identified and validated a major performance regression (a 37% slowdown) caused by a Linux kernel version change, demonstrating the approach's effectiveness, degree of automation, and reproducibility in a real-world CI environment. This work provides a practical, tracing-oriented workflow for continuous performance regression detection in open-source infrastructure.
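To make the CUSUM idea concrete, the following is a minimal, self-contained sketch of CUSUM-style detection of an upward shift in per-commit benchmark timings. It is an illustrative assumption, not Nyrkiö's actual implementation; the `drift` and `threshold` parameters are hypothetical tuning values.

```python
def cusum_change_point(samples, drift=0.5, threshold=3.0):
    """Return the index at which an upward-shift alarm first fires, or None.

    samples: per-commit mean execution times (e.g. microseconds).
    drift: slack subtracted each step, in units of the sample std dev.
    threshold: alarm level, in units of the sample std dev.
    """
    n = len(samples)
    if n < 2:
        return None
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    std = var ** 0.5 or 1.0  # guard against zero variance
    s_pos = 0.0
    for i, x in enumerate(samples):
        # Accumulate standardized positive deviations from the mean.
        s_pos = max(0.0, s_pos + (x - mean) / std - drift)
        if s_pos > threshold:
            return i  # first index at which the alarm fires
    return None

# Synthetic data: a ~37% slowdown after the 10th "commit".
timings = [100.0] * 10 + [137.0] * 10
print(cusum_change_point(timings))  # alarm fires a few samples after the shift
```

Note that the alarm fires a few samples after the true change point; a production service would additionally estimate the change location and magnitude, e.g. via the piecewise fitting also mentioned above.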
📝 Abstract
GitHub hosts 518 million projects, and performance changes within these projects are highly relevant to their users. Although performance measurement is supported by GitHub CI/CD, performance change detection remains a challenging topic.
In this paper, we demonstrate how we integrated Nyrkiö into MooBench. Prior to this work, MooBench ran continuously on GitHub-hosted virtual machines, measuring the overhead of tracing agents, but without change detection. By automatically uploading the measurements to the Nyrkiö change detection service, we made it possible to detect performance changes. We identified one major performance regression and examined it in depth. We report that (1) it is reproducible with GitHub Actions, and (2) it is caused by a Linux kernel version change.
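Once the change point is localized, the size of a regression can be estimated by fitting two constant segments to the timing series and comparing their means. The sketch below shows this simplest piecewise fit on synthetic data; it is a hypothetical illustration under that assumption, not the algorithm Nyrkiö actually uses.

```python
def best_step_split(samples):
    """Split samples into two constant segments minimizing squared error.

    Returns (index, relative_change): the index of the first sample of the
    second segment and the relative change of its mean vs. the first segment.
    """
    def sse(seg):
        # Sum of squared errors around the segment mean.
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best = None
    for k in range(1, len(samples)):
        cost = sse(samples[:k]) + sse(samples[k:])
        if best is None or cost < best[0]:
            best = (cost, k)
    k = best[1]
    m1 = sum(samples[:k]) / k
    m2 = sum(samples[k:]) / (len(samples) - k)
    return k, (m2 - m1) / m1

# Synthetic timings with a step increase from ~100 to ~137 units.
timings = [100, 101, 99, 100, 102, 98, 137, 138, 136, 137, 139, 135]
k, change = best_step_split(timings)
print(k, round(change * 100))  # → 6 37  (split index, slowdown in percent)
```

The relative change of the two segment means directly yields a slowdown percentage, which is how a 37% regression like the one reported above can be quantified.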