Fifth International Congress of Applied Statistics (UYIK-2024), İstanbul, Türkiye, 21-23 May 2024, pp. 1-9
In recent years, deep learning-based approaches
have gained widespread adoption in Earth observation and remote sensing,
mirroring their success in numerous other domains. However, unlike approaches
based on physical models, deep learning methods operate as black boxes,
concealing the internal processes that influence their final decisions. This lack of
transparency poses a challenge, particularly in applications where
interpretability is paramount, as the outputs generated by these approaches cannot
be fully trusted or verified. Explainable Artificial Intelligence (XAI) aims to
make deep learning processes and their outputs more interpretable for
researchers and end users. The purpose of this study is to investigate and
evaluate the performance of various XAI methodologies for post-hoc
explainability of object detection in satellite images using deep learning. Class
activation mapping (CAM)-based XAI methods, namely GradCAM, GradCAM++, EigenCAM, ScoreCAM,
and LayerCAM, are used for post-hoc explainability, following object detection
by the You Only Look Once (YOLO) algorithm.
Experimental results show that the methods produce considerably different saliency maps, which can be used for a qualitative
analysis of the interpretability each method provides. However,
for a large dataset, a qualitative analysis by itself may be subjective and
misleading. As such, an evaluation framework tailored for remote sensing
applications is adopted to quantitatively evaluate the interpretability performance of these
XAI methods.
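As a hedged illustration of what such a quantitative evaluation can look like (the specific framework adopted in the study is tailored to remote sensing and is not reproduced here), a common proxy is the energy-based pointing game, which measures how much of a saliency map's energy falls inside the ground-truth bounding box of the detected object:

    import numpy as np

    def energy_pointing_game(saliency, box):
        # Fraction of total saliency energy inside a ground-truth box
        # (x1, y1, x2, y2); higher means the explanation is better
        # localized on the annotated object.
        x1, y1, x2, y2 = box
        total = saliency.sum()
        if total == 0:
            return 0.0
        return float(saliency[y1:y2, x1:x2].sum() / total)

    # Score one saliency map against one annotated box; both the map
    # and the box coordinates are random stand-ins here.
    saliency = np.random.rand(640, 640)
    score = energy_pointing_game(saliency, (100, 150, 300, 400))
    print(f"energy-based pointing game score: {score:.3f}")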
The findings provide an important step towards understanding the role and effectiveness of these XAI methods for
the interpretability of object detection in remote sensing.