Brain tumor classification using the fused features extracted from expanded tumor region


Öksüz C., Urhan O., Güllü M. K.

Biomedical Signal Processing and Control, vol. 72, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 72
  • Publication Date: 2022
  • DOI: 10.1016/j.bspc.2021.103356
  • Journal Name: Biomedical Signal Processing and Control
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, EMBASE, INSPEC
  • Keywords: Brain tumor classification, Convolutional neural network, Feature extraction, Computer-aided diagnosis, 1p/19q codeletion, MRI images, Grade, Segmentation, Chemotherapy, Information, Deletion
  • Kocaeli University Affiliated: Yes

Abstract

In this study, a brain tumor classification method using the fusion of deep and shallow features is proposed to distinguish among meningioma, glioma, and pituitary tumor types and to predict the 1p/19q co-deletion status of LGG tumors. Brain tumors can be located in different regions of the brain, and the texture of the surrounding tissues may also vary. Therefore, including the surrounding tissues in the tumor region (ROI expansion) can make the features more distinctive. In this work, pre-trained AlexNet, ResNet-18, GoogLeNet, and ShuffleNet networks are used to extract deep features from the tumor regions including their surrounding tissues. Although deep features are very important for classification, some low-level information about the tumor may be lost as the network deepens. Accordingly, a shallow network is designed to learn this low-level information, and the deep and shallow features are fused to compensate for the loss. SVM and k-NN classifiers are then trained on the fused feature sets. Experimental results on two publicly available data sets demonstrate that using feature fusion and ROI expansion together improves the average sensitivity by about 11.72% (ROI expansion: 8.97%, feature fusion: 2.75%). These results confirm the assumption that the tissues surrounding the tumor region carry distinctive information, and that the missing low-level information can be compensated for by feature fusion. Moreover, competitive results are achieved against state-of-the-art studies when ResNet-18 is used as the deep feature extractor of the proposed classification framework.
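The abstract describes a pipeline of deep features from a pre-trained network, low-level features from a shallow network, feature concatenation, and a classical classifier. The sketch below illustrates that general structure with ResNet-18 as the deep extractor and an SVM classifier; the shallow architecture, preprocessing, feature dimensions, and stand-in data are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a fused-feature classification pipeline (assumed details):
# deep features from pretrained ResNet-18, shallow features from a small CNN,
# concatenation, and an SVM classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

# Deep extractor: pretrained ResNet-18 without its final FC layer -> 512-d features.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
deep_extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()

class ShallowNet(nn.Module):
    """Small CNN meant to retain low-level texture information (assumed design)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

shallow_extractor = ShallowNet().eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fused_features(roi_batch):
    """Concatenate deep and shallow features for a batch of expanded tumor ROIs
    (tensor of shape [N, 3, H, W] with values in [0, 1])."""
    x = preprocess(roi_batch)
    with torch.no_grad():
        deep = deep_extractor(x).flatten(1)   # [N, 512]
        shallow = shallow_extractor(x)        # [N, 128]
    return torch.cat([deep, shallow], dim=1).numpy()

# Random stand-in data; in practice the ROIs would be tumor regions expanded
# to include surrounding tissue, with labels meningioma / glioma / pituitary.
X_train = fused_features(torch.rand(8, 3, 224, 224))
y_train = np.array([0, 1, 2, 0, 1, 2, 0, 1])
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict(fused_features(torch.rand(2, 3, 224, 224))))
```

A k-NN classifier (e.g. `sklearn.neighbors.KNeighborsClassifier`) could be dropped in place of the SVM, matching the two classifier choices mentioned in the abstract.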