Towards Robust Monkeypox Diagnosis: Merging Datasets and Evaluating Explainable Deep Learning Models


SALEH R. A. A., Saeed S. H. M., Abualkebash H., KONYAR M. Z., ERTUNÇ H. M.

INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, vol.39, no.15, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 39 Issue: 15
  • Publication Date: 2025
  • DOI: 10.1142/s0218001425400038
  • Journal Name: INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, Metadex, Civil Engineering Abstracts
  • Keywords: Explainable artificial intelligence (XAI), Grad-CAM, monkeypox diagnosis, transfer learning, vision transformer (ViT)
  • Kocaeli University Affiliated: Yes

Abstract

Early and precise identification of monkeypox is crucial for controlling outbreaks and reducing the spread of this re-emerging infectious disease. However, existing deep learning-based diagnostic models face two substantial obstacles: the scarcity of diverse datasets and the lack of model explainability, both of which are critical for clinical adoption. To address these problems, this research first proposes a way to increase dataset diversity by merging two frequently used monkeypox datasets. The merged dataset provides a more comprehensive representation of monkeypox cases and thereby improves the generalization of deep learning models. Second, four fine-tuned deep learning models, namely Vision Transformer (ViT), ConvMixer, Xception, and AlexNet, are thoroughly evaluated for monkeypox detection. The Gradient-weighted Class Activation Mapping (Grad-CAM) method is used to ensure the models' transparency and interpretability, offering visual insight into each model's decision-making process. The results demonstrate that merging the two datasets and integrating explainability into the AI models increase diagnostic accuracy and provide clear justifications for each model's predictions, thereby boosting confidence in AI-driven diagnoses.
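The abstract's two technical steps, pooling the two datasets and applying Grad-CAM, can be illustrated with short sketches. First, a minimal dataset-merging sketch in PyTorch; the directory names ("data/MSLD", "data/MSID") and the 224x224 input size are illustrative assumptions, not details taken from the paper, and the two source folders are assumed to share identical class subfolders so that label indices align across them.

```python
# Hedged sketch: pool two monkeypox image datasets into one training set.
from torch.utils.data import ConcatDataset
from torchvision import datasets, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed common input size for all models
    transforms.ToTensor(),
])

# Placeholder paths; each folder is assumed to hold one subfolder per class,
# with identical class names so ImageFolder assigns matching label indices.
merged = ConcatDataset([
    datasets.ImageFolder("data/MSLD", transform=tf),
    datasets.ImageFolder("data/MSID", transform=tf),
])
```

Second, a minimal Grad-CAM sketch, again an illustration rather than the authors' released code, using torchvision's stock AlexNet as a stand-in for the paper's fine-tuned models. A forward hook captures the feature maps after the final convolutional stage, the gradient of the target class score with respect to those maps is pooled into per-channel weights, and the weighted, ReLU-ed sum of the maps is upsampled into a heatmap.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

# Forward hook: save the activations of the ReLU after AlexNet's last conv.
feature_maps = {}
model.features[11].register_forward_hook(
    lambda _m, _i, out: feature_maps.update(value=out)
)

def grad_cam(image, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for a (1, 3, 224, 224) input."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()   # explain the top prediction
    acts = feature_maps["value"]                  # (1, C, 13, 13) for AlexNet
    grads = torch.autograd.grad(logits[0, class_idx], acts)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)           # channel weights
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted sum
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))   # dummy input for the demo
```

Overlaying such a heatmap on the input lesion image is what provides the visual justification for a prediction that the abstract refers to.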