SMOTE vs. KNNOR: An evaluation of oversampling techniques in machine learning


ABACI İ., YILDIZ K.

Gümüşhane Üniversitesi Fen Bilimleri Dergisi, vol.13, no.3, pp.767-779, 2023 (Peer-Reviewed Journal)

Abstract

The increasing availability of big data has led to the development of applications that make human life easier. To process this data correctly, useful and valid information must be extracted from large data warehouses through the process of knowledge discovery in databases (KDD). Data mining is an important part of this process: it involves exploring data and building models that uncover previously unknown patterns. The quality of the data used in supervised machine learning algorithms plays a significant role in determining the success of predictions. One factor that improves data quality is a balanced dataset, in which the classes are represented in roughly equal proportions. In practice, however, many datasets are imbalanced. To overcome this problem, oversampling techniques are used to generate synthetic data that is as close to the real data as possible. In this study, we compared the performance of two oversampling techniques, SMOTE and KNNOR, on a variety of datasets using different machine learning algorithms. Our results showed that applying SMOTE or KNNOR did not always improve model accuracy; on many datasets, these techniques actually decreased it, while on certain datasets both were able to increase it. These findings indicate that the effectiveness of oversampling techniques depends on the specific dataset and machine learning algorithm being used. It is therefore crucial to assess these methods on a case-by-case basis to determine the best approach for a given dataset and algorithm.
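
As a rough illustration of the kind of evaluation setup described above, the sketch below compares a classifier trained on raw imbalanced data with one trained on SMOTE-oversampled data. It assumes scikit-learn and imbalanced-learn are installed; the synthetic dataset, random-forest classifier, and metrics are illustrative placeholders, not the datasets or algorithms used in the paper. KNNOR would be substituted at the same oversampling step via its own package.

# Minimal sketch: baseline vs. SMOTE-oversampled training on an imbalanced dataset.
# Assumptions: scikit-learn and imbalanced-learn available; data and model are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced dataset (roughly 90% majority, 10% minority).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=42)

def evaluate(X_tr, y_tr, label):
    # Train on the given (possibly oversampled) data, score on the untouched test split.
    clf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
    pred = clf.predict(X_test)
    print(f"{label:>10}: accuracy={accuracy_score(y_test, pred):.3f}, "
          f"f1={f1_score(y_test, pred):.3f}")

# Baseline: train on the imbalanced data as-is.
evaluate(X_train, y_train, "baseline")

# SMOTE: oversample only the training split, never the test split.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
evaluate(X_res, y_res, "SMOTE")

Comparing the baseline and oversampled scores per dataset and per algorithm, as in this sketch, is what allows the case-by-case assessment the abstract recommends.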