Abstract
The ongoing development of computer systems requires massive software projects. Running the components of these large projects for testing purposes can be costly, so parameter estimation can be used instead. Software defect prediction models are crucial for software quality assurance. This study investigates the impact of dataset size and feature selection algorithms on software defect prediction models. We use two approaches to build software defect prediction models: a statistical approach and a machine learning approach with support vector machines (SVMs). The fault prediction model was built on four datasets of different sizes, and four feature selection algorithms were applied. We found that applying the SVM defect prediction model to datasets with a reduced number of metrics as features may improve the accuracy of the fault prediction model, and it directs the testing effort toward maintaining the most influential set of metrics. We also found that the running time of the SVM fault prediction model does not scale consistently with dataset size, so having fewer metrics does not guarantee a shorter execution time. The experiments show that dataset size has a direct influence on the SVM fault prediction model; however, the reduced datasets performed the same as, or slightly worse than, the original datasets.
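As a rough illustration of the setup described above (not the authors' exact pipeline), the sketch below trains an SVM defect predictor on a full metric set and on a filter-selected subset, comparing accuracy and running time. The synthetic dataset, the number of metrics, and the choice of SelectKBest with an ANOVA F-score are assumptions made for demonstration only; the paper compares four feature selection algorithms on real defect datasets.

```python
# Minimal sketch, assuming scikit-learn: an SVM defect predictor evaluated
# with and without feature selection. All data below is synthetic stand-in
# material, not the datasets used in the study.
import time

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for a software-metrics dataset:
# rows = modules, columns = static code metrics, label = defective or not.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

def evaluate(pipeline, name):
    """Cross-validate a pipeline and report mean accuracy and wall-clock time."""
    start = time.perf_counter()
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={scores.mean():.3f}, time={elapsed:.2f}s")

# SVM on the full metric set.
full = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])
evaluate(full, "all 20 metrics")

# SVM on a reduced metric set chosen by a univariate filter
# (one of many possible feature selection algorithms).
reduced = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif, k=5)),
    ("svm", SVC(kernel="rbf")),
])
evaluate(reduced, "top 5 metrics")
```

Comparing the two printed lines mirrors the paper's observation that a reduced metric set can match or slightly trail the full set in accuracy while the running time does not necessarily shrink with it.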
| Original language | English |
| --- | --- |
| Pages (from-to) | 72-88 |
| Number of pages | 17 |
| Journal | Inteligencia Artificial |
| Volume | 24 |
| Issue number | 68 |
| DOIs | |
| State | Published - 2021 |
Bibliographical note
Publisher Copyright: © 2021, Asociación Española de Inteligencia Artificial. All rights reserved.
Keywords
- Feature Selection
- Software Defect Prediction
- Support Vector Machine
ASJC Scopus subject areas
- Software
- Artificial Intelligence