Classification of diabetic retinopathy with feature selection over deep features using nature-inspired wrapper methods


Canayaz M.

APPLIED SOFT COMPUTING, vol. 128, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 128
  • Publication Date: 2022
  • DOI: 10.1016/j.asoc.2022.109462
  • Journal Name: APPLIED SOFT COMPUTING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC
  • Keywords: Diabetic retinopathy, Wrapper methods, Deep learning models, Feature selection, CONVOLUTIONAL NEURAL-NETWORKS, AUTOMATED DETECTION, OPTIC DISC, SEGMENTATION, VALIDATION, ALGORITHM
  • Affiliated with Van Yüzüncü Yıl Üniversitesi: Yes

Abstract

Diabetic retinopathy (DR) is the most common cause of blindness in middle-aged people. The low number of available scans shows that an automatic image evaluation system is needed for the diagnosis of this disease. To meet this need, it is critical that such systems support large-scale, cost-effective, and minimally invasive screening programs. With the use of deep learning techniques, it has become possible to develop these systems faster. In this study, a new approach based on feature selection with wrapper methods applied to fundus images is presented that can be used for the classification of diabetic retinopathy. The fundus images used in the approach were enhanced with image processing techniques, thus eliminating unnecessary dark areas in the images. In this new approach, the most effective features are selected by wrapper methods from 512 deep features obtained from EfficientNet and DenseNet models. The Binary Bat Algorithm (BBA), Equilibrium Optimizer (EO), Gravity Search Algorithm (GSA), and Gray Wolf Optimizer (GWO) were chosen as wrappers for the proposed approach. The selected features are classified with support vector machine and random forest machine learning methods. In terms of performance, the new approach achieves a highest accuracy of 96.32% and a kappa of 0.98. These performance values were obtained with as few as 250 selected features. The Asia Pacific Tele-Ophthalmology Society (APTOS) dataset used to obtain these values was taken from a competition organized on Kaggle. The highest kappa value in that competition was reported as 0.93, which clearly demonstrates the success of our approach. (C) 2022 Elsevier B.V. All rights reserved.
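To illustrate the general wrapper idea described in the abstract (scoring candidate feature subsets by the cross-validated accuracy of the downstream classifier), the minimal sketch below assumes a 512-dimensional deep-feature matrix has already been extracted from a CNN and replaces the paper's nature-inspired search algorithms (BBA, EO, GSA, GWO) with a plain stochastic hill-climbing search over binary feature masks. It is not the authors' implementation; the random data, the search strategy, and all parameter values are illustrative assumptions, using scikit-learn and NumPy.

```python
# Minimal sketch of wrapper-based feature selection over deep features.
# A simple stochastic hill-climbing search stands in for the paper's
# nature-inspired wrappers; only the wrapper principle is illustrated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(X, y, mask, clf):
    """Wrapper fitness: cross-validated accuracy of clf on the selected columns."""
    if mask.sum() == 0:                      # empty subsets are invalid
        return 0.0
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def wrapper_select(X, y, n_iter=100, flip_prob=0.05, seed=0):
    """Search over binary feature masks, keeping subsets that do not hurt accuracy."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    clf = SVC(kernel="rbf")                  # the paper also reports random forest
    mask = rng.random(n_features) < 0.5      # random initial subset
    best_score = fitness(X, y, mask, clf)
    for _ in range(n_iter):
        candidate = mask.copy()
        flips = rng.random(n_features) < flip_prob   # flip a few bits
        candidate[flips] = ~candidate[flips]
        score = fitness(X, y, candidate, clf)
        if score >= best_score:              # accept equal or better subsets
            mask, best_score = candidate, score
    return mask, best_score

if __name__ == "__main__":
    # Stand-in data with the same shape as the paper's setting:
    # N images x 512 deep features (random here, for illustration only).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 512))
    y = rng.integers(0, 5, size=200)         # 5 DR severity grades
    mask, score = wrapper_select(X, y, n_iter=20)
    print(f"selected {mask.sum()} / 512 features, CV accuracy = {score:.3f}")
```

In the paper's pipeline, the hill-climbing step above is where BBA, EO, GSA, or GWO would operate on the binary masks, and the same wrapper fitness is evaluated with either an SVM or a random forest classifier.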