🤖 AI Summary
Statistical misuse is pervasive in empirical software engineering (ESE), severely undermining research credibility and cumulative knowledge building.
Method: This study uniquely integrates large-scale, LLM-driven classification of 27,000 ESE publications with an expert consensus workshop involving 33 domain specialists to systematically assess data analysis practices over the past three decades.
Contribution/Results: Analysis of 30 representative studies reveals widespread statistical misuse; experts identified only 42% of the methodological flaws and showed limited ability to correct them, confirming that "copy-paste" adoption of analytical techniques has eroded scientific reliability. The study calls for an urgent reassessment of the data-analysis foundations of ESE, offering empirically grounded methodological guidance to strengthen rigor and reproducibility.
📝 Abstract
Context: Empirical Software Engineering (ESE) drives innovation in SE through qualitative and quantitative studies. However, concerns about the correct application of empirical methodologies have existed since the 2006 Dagstuhl seminar on SE. Objective: To analyze three decades of SE research, identify mistakes in statistical methods, and evaluate experts' ability to detect and address these issues. Methods: We conducted a literature survey of ~27,000 empirical studies, using LLMs to classify statistical methodologies as adequate or inadequate. Additionally, we selected 30 primary studies and held a workshop with 33 ESE experts to assess their ability to identify and resolve statistical issues. Results: Significant statistical issues were found in the primary studies, and experts showed limited ability to detect and correct these methodological problems, raising concerns about the broader ESE community's proficiency in this area. Conclusions: Despite its potential limitations, our study sheds light on recurring issues arising from the copy-and-paste reuse of analyses from prior publications and from the continued publication of inadequate approaches, which promote dubious results and hinder the spread of correct statistical strategies among researchers. These findings justify further investigation into empirical rigor in software engineering, both to expose these recurring issues and to establish a framework for reassessing how our field applies statistical methodology. This work therefore calls for critically rethinking and reforming data analysis in empirical software engineering, paving the way for future research.