Biomedical Signal Processing and Control, vol. 108, no. 108, pp. 1-26, 2025 (Scopus)
Skin cancer detection is a critical problem in medical image analysis, requiring accurate classification of distinct lesion types. The existing literature identifies key gaps, notably imbalanced datasets and the limited explainability of model decisions. This study addresses these gaps with a novel architecture that incorporates YOLOv8 as a preprocessing step to improve skin cancer diagnosis: YOLOv8 localizes the region of interest, sharpening the model's focus on critical lesion features. To mitigate dataset imbalance, multiple data augmentation strategies are applied, ensuring that the models are trained effectively across diverse lesion types. Furthermore, the proposed detection framework is made more transparent and reliable through the Grad-CAM and SHAP value methods, which provide detailed insight into the model's decision-making process, improving interpretability and confidence in the results. Eight distinct pre-trained models are fine-tuned to assess the performance of the proposed framework. Among these, the Vision Transformer (ViT) integrated with YOLOv8 shows the largest performance gains, achieving a balanced precision, recall, and F1-score of 93% and outperforming the standalone ViT model. These findings highlight the effectiveness of incorporating YOLOv8 into skin cancer detection, providing a robust and explainable strategy for improving diagnostic accuracy in clinical settings.
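The ROI-localization step described above can be sketched as a simple crop driven by a detector's bounding box. This is a minimal, dependency-free illustration, not the authors' implementation: in the actual pipeline the box would come from a YOLOv8 model (e.g. via the ultralytics package) and the crop would then be resized and fed to the ViT classifier; here the box coordinates are hard-coded and the "image" is a plain 2-D grid.

```python
def crop_roi(image, box, pad=0):
    """Crop a lesion region of interest from an image.

    image: H x W pixel grid (any 2-D indexable, e.g. nested lists)
    box:   (x1, y1, x2, y2) bounding box; in the described framework
           this would be a YOLOv8 detection (hypothetical here)
    pad:   optional margin of surrounding context kept around the box
    """
    x1, y1, x2, y2 = box
    h, w = len(image), len(image[0])
    # Clamp the padded box to the image bounds.
    x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
    x2, y2 = min(w, x2 + pad), min(h, y2 + pad)
    return [row[x1:x2] for row in image[y1:y2]]

# Hypothetical usage: an 8x8 "image" and a detected lesion box.
image = [[r * 8 + c for c in range(8)] for r in range(8)]
roi = crop_roi(image, (2, 1, 6, 5), pad=1)
print(len(roi), len(roi[0]))  # prints "6 6": a 6x6 padded crop
```

Keeping a small padding margin around the detected box is a common design choice, since lesion borders themselves carry diagnostic information.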