Investigating Reproducibility in Deep Learning-Based Software Fault Prediction
Creators
Description
Over the past few years, increasingly complex machine learning methods have been applied to various Software Engineering (SE) tasks, in particular to the important task of automated fault prediction and localization. At the same time, it has become much more difficult for scholars to reproduce the results reported in the literature, especially when the applied deep learning models and the evaluation methodology are not properly documented and when code and data are not shared. Given recent and worrying findings regarding reproducibility and progress in other areas of applied machine learning, this study analyzes to what extent the field of software engineering, and in particular the area of software fault prediction, is plagued by similar problems. We therefore conducted a systematic review of the current literature and examined the level of reproducibility of 56 research articles published between 2019 and 2022 in top-tier software engineering conferences. Our analysis revealed that scholars are largely aware of the reproducibility problem, and about two-thirds of the papers provide code for their proposed deep learning models. However, in the vast majority of cases, crucial elements for reproducibility are missing, such as the code of the compared baselines, the code for data pre-processing, or the code for hyperparameter tuning. In these cases, it remains challenging to exactly reproduce the results reported in the current research literature. Overall, our meta-analysis therefore calls for improved research practices to ensure the reproducibility of machine-learning-based research.
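To illustrate the kind of artifacts the review finds missing, the following is a minimal, hypothetical PyTorch-style sketch (not taken from any of the reviewed papers) of two basic reproducibility practices: fixing random seeds and persisting the hyperparameter configuration next to the results it produced. Function names and parameter values are illustrative assumptions only.

```python
import json
import os
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix the relevant random number generators so a run can be repeated."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def save_run_config(config: dict, out_dir: str) -> None:
    """Store the hyperparameters alongside the results they produced."""
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "config.json"), "w") as f:
        json.dump(config, f, indent=2)


if __name__ == "__main__":
    # Hypothetical hyperparameters for a fault-prediction model.
    config = {"seed": 42, "learning_rate": 1e-4, "batch_size": 32, "epochs": 20}
    set_seed(config["seed"])
    save_run_config(config, out_dir="runs/fault_prediction_example")
```

Sharing such a seed-and-config record together with the baseline, pre-processing, and tuning code is what would allow the reported numbers to be re-derived exactly.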
Files (23.5 kB)

Name | Size | MD5
---|---|---
README.md | 2.3 kB | md5:05314990cbe361209385c2f89dce2602
 | 21.2 kB | md5:41eda936a79d890aec8fc6104be3baac