International Symposium on AI-Driven Engineering Systems, Tokat, Türkiye, 19-20 June 2025, p. 5 (Abstract)
In this study, we propose a robust deep learning framework that integrates attention mechanisms into convolutional neural networks to enhance binary image classification performance. The architecture incorporates Convolutional Block Attention Module (CBAM) and Squeeze-and-Excitation (SE) blocks into three DenseNet variants: DenseNet121, DenseNet169, and DenseNet201. Each model was trained on a custom-labeled dataset and evaluated using ROC-AUC, accuracy, F1-score, and confusion-matrix metrics. Grad-CAM visualizations were used to provide spatial interpretability. Feature vectors were extracted from the final dense layers of the trained models and subjected to four feature selection and dimensionality reduction techniques: SHAP, PCA, RFE, and SelectKBest. The reduced feature sets were then classified with five machine learning algorithms: Support Vector Machines, Random Forest, Logistic Regression, Naive Bayes, and XGBoost. Among all configurations, DenseNet169 combined with PCA and XGBoost achieved the highest classification performance, with 96.08% accuracy and a ROC-AUC of 0.9769. SHAP- and RFE-based models also exceeded 96% accuracy. The results indicate that integrating attention mechanisms into DenseNet backbones significantly improves classification outcomes, and that transferring the learned features to traditional classifiers further enhances performance and interpretability. The proposed framework is promising for high-stakes binary classification problems where both accuracy and explainability are critical.
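The sketch below illustrates, under stated assumptions, the pipeline of the top-scoring configuration named in the abstract: an attention-augmented DenseNet169 used as a feature extractor, followed by PCA reduction and an XGBoost classifier. It is not the authors' released code; only the SE block is shown (CBAM is omitted for brevity), and the head size, reduction ratio, PCA component count, and the placeholder data standing in for the custom-labeled dataset are illustrative assumptions.

```python
# Minimal sketch of SE-augmented DenseNet169 features -> PCA -> XGBoost.
# All dataset arrays and hyperparameters below are illustrative placeholders.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

def se_block(x, ratio=16):
    """Squeeze-and-Excitation: channel-wise reweighting of feature maps."""
    channels = x.shape[-1]
    s = tf.keras.layers.GlobalAveragePooling2D()(x)                # squeeze
    s = tf.keras.layers.Dense(channels // ratio, activation="relu")(s)
    s = tf.keras.layers.Dense(channels, activation="sigmoid")(s)   # excitation
    s = tf.keras.layers.Reshape((1, 1, channels))(s)
    return tf.keras.layers.Multiply()([x, s])                      # recalibrate

# Attention-augmented DenseNet169 backbone with a binary classification head.
base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
x = se_block(base.output)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
features = tf.keras.layers.Dense(256, activation="relu", name="feature_layer")(x)
output = tf.keras.layers.Dense(1, activation="sigmoid")(features)
model = tf.keras.Model(base.input, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for the custom-labeled dataset (not public).
rng = np.random.default_rng(0)
train_images = rng.random((32, 224, 224, 3), dtype=np.float32)
train_labels = rng.integers(0, 2, 32)
test_images = rng.random((8, 224, 224, 3), dtype=np.float32)
# model.fit(train_images, train_labels, epochs=..., batch_size=...)  # fine-tune here

# Transfer the learned dense-layer features to a classical classifier.
extractor = tf.keras.Model(model.input, model.get_layer("feature_layer").output)
X_train = extractor.predict(train_images)
X_test = extractor.predict(test_images)

pca = PCA(n_components=16)            # component count is an assumption
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)

clf = XGBClassifier(eval_metric="logloss")
clf.fit(X_train_pca, train_labels)
probs = clf.predict_proba(X_test_pca)[:, 1]  # score with ROC-AUC, accuracy, F1
```

In practice the backbone would be fine-tuned on the real dataset before feature extraction, and the same extracted features could be fed to the other reported selectors (SHAP, RFE, SelectKBest) and classifiers for comparison.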