🤖 AI Summary
This paper systematically investigates the root causes of low reproducibility in machine learning (ML) research, identifying key issues including insufficient transparency, non-public code and data, inadequate adherence to standards, and high sensitivity to training conditions. Through a systematic literature review, cross-domain case studies, and surveys of community practice, the authors propose a bidirectional "barrier–driver" mapping framework, spanning methodology, code, data, and experimentation, that jointly models procedural and technical factors. Based on this framework, they introduce a structured intervention roadmap comprising prioritized research directions and actionable reproducibility checklists. The work bridges the gap between reproducibility principles and implementable practices, thereby strengthening the credibility and scientific rigor of ML research.
📝 Abstract
Many research fields are currently reckoning with issues of poor levels of reproducibility. Some label it a "crisis", and research employing or building Machine Learning (ML) models is no exception. Issues including lack of transparency, data, or code, poor adherence to standards, and the sensitivity of ML training conditions mean that many papers are not even reproducible in principle. Where they are, reproducibility experiments have found worryingly low degrees of similarity with original results. Despite previous appeals from ML researchers on this topic and various initiatives, from conference reproducibility tracks to the ACM's new Emerging Interest Group on Reproducibility and Replicability, we contend that the general community continues to take this issue too lightly. Poor reproducibility threatens trust in, and the integrity of, research results. Therefore, in this article, we lay out a new perspective on the key barriers and drivers (both procedural and technical) to increased reproducibility at various levels (methods, code, data, and experiments). We then map the drivers to the barriers to give researchers concrete strategies for mitigating reproducibility issues in their own work, to identify specific areas where further research is needed, and to further ignite discussion on the threat presented by these urgent issues.