Evaluation of Deep Transfer Learning Methodologies on the COVID-19 Radiographic Chest Images

Athar Al-azzawi | Saif Al-jumaili* | Adil Deniz Duru | Dilek Göksel Duru | Osman Nuri Uçan 

Department of Electrical and Computer Engineering, Altinbas University, Istanbul 34218, Turkey

Department of Sport Health Sciences, Marmara University, Istanbul 34722, Turkey

Faculty of Science, Turkish-German University, Istanbul 34820, Turkey

Corresponding Author Email: saifabdalrhman@gmail.com

Pages: 407-420 | DOI: https://doi.org/10.18280/ts.400201

Received: 24 October 2022 | Revised: 1 March 2023 | Accepted: 13 March 2023 | Available online: 30 April 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

In 2019, the world was confronted with a severe situation caused by the new SARS-CoV-2 virus, whose disease was later named COVID-19. Artificial intelligence techniques can reduce time consumption and provide safe solutions capable of handling huge amounts of data. In this article, we investigated the classification performance of eight deep transfer learning methodologies (GoogleNet, AlexNet, VGG16, MobileNet-V2, ResNet50, DenseNet201, ResNet18, and Xception). For this purpose, we applied two types of radiographic datasets (X-ray and CT scan), each with two classes: non-COVID and COVID-19. The models were assessed using seven evaluation metrics: accuracy, sensitivity, specificity, precision, negative predictive value (NPV), F1-score, and Matthew's correlation coefficient (MCC). The best accuracy achieved on the X-ray dataset was 99.3%, and the remaining metrics listed above were 98.8%, 99.6%, 99.6%, 99.0%, 99.2%, and 98.5%, respectively. Meanwhile, the best CT scan model classified the images without error. Our results show a remarkable achievement compared with the most recent papers published in the literature. To conclude, this study demonstrates that near-perfect classification of radiographic lung images affected by COVID-19 is achievable.

Keywords: 

deep learning, classification, CNN, deep transfer learning, X-ray, CT scan

1. Introduction

The first detected case of coronavirus was reported in Wuhan, China, in December 2019. The significant spread of the virus around the world and the deaths it caused compelled the World Health Organization to declare a pandemic [1-4]. By the time the outbreak was announced, the virus had spread dramatically throughout the world and deaths had begun to increase sharply, which led to the collapse of the health sector in many countries [5-7]. The symptoms of COVID-19 are very similar to those of the flu; the most common are fever, headache, sore throat, cough, severe pneumonia, septic shock, runny nose, fatigue, muscle pain, diarrhea, hemoptysis, dyspnea, lymphatic distress, and acute respiratory distress syndrome [8-14].

The spread of COVID-19 forced the health sector to start finding methods capable of diagnosing the disease in order to limit its spread. The traditional ways of diagnosing COVID-19 are based on a diversity of data provided by different tools, such as complete blood count (CBC) tests, reverse-transcription polymerase chain reaction (RT-PCR), and clinical imaging. The WHO advised relying on RT-PCR to confirm coronavirus infection [6]. However, RT-PCR is time-consuming, which poses quite a risk for people with COVID-19, in addition to the error rate of the test, which is estimated at about 30% [15]. Thus, X-ray and CT scan techniques that provide a clinical image assist radiologists in reaching an accurate decision [16-18]. Because it is inexpensive, widely available, and poses quite a low risk to health, the X-ray modality became the first method of diagnosis of COVID-19, but at the same time it is considered fairly challenging in some countries that lack medical supplies [19, 20]. Moreover, it may incur a high error rate in the diagnosis of the infection [21, 22].

Scientists resorted to using deep learning techniques and to developing models that show a high ability to diagnose from images, based on the promising results obtained with deep learning. Among the diseases diagnosed with medical imaging techniques are lung infection (pneumonia) [23], asthma [24], chronic obstructive pulmonary disease (COPD) [25], tuberculosis [26], lung cancer [27], and many other conditions associated with breathing problems.

Recently, deep learning with convolutional neural networks has been utilized for the classification of medical images. There are two main popular types of images used to detect COVID-19 with deep learning techniques: X-rays and computed tomography (CT) scans. These images are used to diagnose the effects of COVID-19 infection prior to the early stage of treatment [28, 29]. In the recent literature, different types of pre-trained deep learning models have been used in the detection of COVID-19, namely GoogleNet [30], Xception [31], U-Net [32], AlexNet [33], VGG19 [34], ResNet50 [35], MobileNets [36], DenseNet [37], and SqueezeNet [38].

We summarize the most important aims of this paper as the following:

(1) To investigate the classification accuracy of COVID-19-affected radiological lung images against images of healthy subjects; for this purpose, eight different types of deep learning mechanisms are utilized on both X-ray and CT scans.

(2) To evaluate the deep transfer learning ability to classify COVID-19 infection based on chest radiography, which is a very helpful tool for the health sector for different COVID-19 variants such as Omicron.

(3) To pave the way for using pre-trained models with other diseases, providing rapid classification and accurate results as well as less time consumption compared with developing a model from scratch.

The remainder of this paper is organized as follows: Section 2 reviews previous studies, and the data resources are described in Section 3. Section 4 illustrates the methodology used to implement the classifiers in this paper, and Section 5 explores the deep transfer learning techniques used for the detection of COVID-19 patients. Section 6 presents the evaluation metrics, Section 7 explains the results and discussion, and finally, the conclusion is given in Section 8.

2. Literature Review

Different types of deep learning techniques have been applied to radiography and computed tomography datasets to detect different diseases. In the study by Liu et al. [39], the authors developed an enhanced CNN model that uses a radiographic dataset for tuberculosis detection. Furthermore, random sampling was implemented in the model to address the issue of the unbalanced dataset, and the best accuracy observed was 85.68%. In another study, Dong et al. [40] used an X-ray dataset with various types of pre-trained deep learning models, including ResNet, AlexNet, and VGG16. They fed the pre-trained models with more than 16000 images as input for binary and multilevel classification. The maximum precision obtained was 82% for the binary classification, while the other tasks reached an accuracy of over 90%. Chouhan et al. [41] reported a 96% accuracy value for pneumonia detection from X-ray images after implementing an ensemble of AlexNet, DenseNet121, GoogLeNet, and ResNet18 with deep transfer learning. Similarly, Hemdan et al. [42] developed a new type of CNN model called COVIDX-Net, which was composed of seven types of pre-trained deep learning models, namely VGG19, DenseNet121, InceptionV3, ResNetV2, Inception-ResNet-V2, Xception, and MobileNetV2; in their study, X-ray images were also used as input to COVIDX-Net. On the other hand, Waheed et al. [43] modelled an Auxiliary Classifier Generative Adversarial Network (ACGAN) to enhance X-ray images for better classification accuracy; their method improved the accuracy value obtained using a CNN from 85% to 95%. Likewise, Khan et al. [1] proposed a new deep learning model called CoroNet, which is based on the pre-trained Xception architecture. CoroNet was trained on an X-ray image dataset collected from different publicly available resources for both COVID-19 and pneumonia. Wang et al. [44] designed a new deep convolutional neural network model called COVID-Net that can diagnose the COVID-19 virus from the publicly available COVIDx X-ray image dataset. Mahmud et al. [5] proposed CovXNet to perform several types of classification for the detection of COVID/normal/viral/bacterial pneumonia cases; the highest accuracy value obtained was 97.4%. Furthermore, Apostolopoulos and Mpesiana [45] evaluated the performance of popular state-of-the-art convolutional neural network (CNN) architectures proposed in the recent medical literature, concentrating especially on procedures based on transfer learning. They used two different datasets to examine the CNN models and achieved 96.78% accuracy. The first was composed of 1427 X-ray images, including 224 images of COVID-19 and 700 images of bacterial pneumonia; the second was composed of 1442 X-ray images (224 images of COVID-19, 714 images of bacterial and viral pneumonia, and 504 images of normal cases). Horry et al. [46] used pre-trained models with X-ray input images and highlighted challenges that affect the results when using COVID-19 datasets. Their new method helps to reduce the noise in X-ray images so that the deep learning models concentrate only on identifying COVID-19; as a result, they achieved 80% accuracy using the ResNet, Xception, Inception, and VGG models. Likewise, Sethy et al. [47] used deep learning for feature extraction and then fed the resulting features to a support vector machine (SVM) for classification of X-ray images. The dataset consists of three types of X-ray images: normal, pneumonia, and COVID-19. These images were input to different types of CNN models, including AlexNet, VGG16, VGG19, MobileNet, and ShuffleNet. The features extracted from the images were utilized as input to the SVM classifier, and the best accuracy achieved was 95.38%, with ResNet50+SVM.

A new deep learning architecture based on Generative Adversarial Networks (GANs) was proposed by Sheykhivand et al. [48] for the identification of COVID-19. They compared the performance of their model with several pre-trained models already used in recent studies to detect COVID-19 (including Inception V4, MobileNet, Inception-ResNet V2, and VGG16). The best accuracy results they achieved were 90% with 4 classes and 99% with 2 classes. In addition, Haque et al. [49] proposed a model that can automatically detect COVID-19 with fewer resources. They evaluated the model on 1501 X-ray images, consisting of 70 images of COVID-19 and 1431 images of pneumonia patients; the accuracy reached 97.56% and the precision 95.34%. In another attempt, Jain et al. [50] developed a new CNN model that can detect COVID-19 with 97% accuracy from a dataset of X-ray images (COVID-19/Normal) available in the Kaggle repository. Later, Che Azemin et al. [51] proposed a deep learning model that relies entirely on the pre-trained ResNet101 architecture; more than a thousand X-ray images were used to train the new model, and a 71.9% accuracy was obtained. On the other hand, Abbas et al. [52] proposed a deep learning model called DeTraC that has the ability to detect COVID-19 from X-ray images. In their methodology, they also used a decomposition mechanism that can deal with any irregularities in the dataset by investigating the boundaries of the classes; their highest accuracy value was 93.1%.

3. Materials and Methods

Dataset

Since COVID-19 is a new disease, raw datasets are not easily available or appropriate for deep learning studies. Therefore, we aimed to select publicly available datasets. We collected chest X-ray images from Zenodo [53]. The author collected the images from authenticated sources such as Radiopaedia, the Radiological Society of North America (RSNA) [54], the COVID-19 image data collection [55], COVIDx, Kaggle, etc., from different patients with COVID-19. At the time of this study, the database contained 250 COVID-19 images and 327 non-COVID images, each with a size of 250×250 pixels. In addition to the X-ray images (Figure 1), CT images were also used in this study. The number of CT images was 471, of which 211 belong to COVID-19. The CT images were downloaded from the open-source Kaggle repository, the SARS-COV-2 CT-Scan Dataset. Figure 2 shows a sample of the CT scan images.

4. Methodology

Implementation of the classifiers

We selected different types of pre-trained deep learning models, namely GoogleNet, AlexNet, VGG16, MobileNet-V2, ResNet50, DenseNet201, ResNet18, and Xception. All experiments were carried out in MATLAB (R2021a) on a workstation with a GPU (NVIDIA GeForce GTX 1060 6GB), an Intel Core i7-8700 processor @ 3.6 GHz, and 16 GB of RAM. The last fully connected layer of each network was replaced with a new one to classify only two classes. The parameters were fixed for all pre-trained networks as follows: learning rate 0.00001, validation frequency 5, batch size 20, and maximum number of epochs 30. For each model, we used the stochastic gradient descent with momentum (SGDM) optimizer. We used a 5-fold cross-validation method to avoid over-fitting; the dataset was divided randomly into training and testing sets with a ratio of 7:3 for all five folds. In order to improve the learning of the pre-trained models, we used different types of image augmentation, such as vertical shift, horizontal shift, vertical flip, horizontal flip, rotation, brightness adjustment, and zoom in/out, which in turn positively affected the learning of the models; further augmentation methods applied in this study include cropping, reflection, and resizing of images. Moreover, we used the L2-regularization method alongside dropout. The final performance of each model was computed as the average of the results obtained from the five stratified folds on the test sets. Figure 3 clarifies the method used in this study, and a minimal sketch of this setup is given below.
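For illustration only, the following MATLAB (Deep Learning Toolbox) sketch reproduces the data preparation and training options described above for a single fold; the folder layout ('dataset/COVID-19', 'dataset/NON') and the augmentation ranges are hypothetical placeholders, not values reported in the paper. The modified network lgraph is built in the transfer-learning sketch of Section 5.

% One fold of the described setup: 7:3 random split, augmentation, SGDM options.
imds = imageDatastore('dataset', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');            % classes taken from folder names
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.7, 'randomized');

augmenter = imageDataAugmenter( ...           % shifts, flips, rotation (ranges assumed)
    'RandXTranslation', [-10 10], 'RandYTranslation', [-10 10], ...
    'RandXReflection', true, 'RandYReflection', true, ...
    'RandRotation', [-15 15]);
augTrain = augmentedImageDatastore([224 224], imdsTrain, ...
    'DataAugmentation', augmenter);           % resized to the model's input size
augTest  = augmentedImageDatastore([224 224], imdsTest);

options = trainingOptions('sgdm', ...         % fixed parameters from the text
    'InitialLearnRate', 1e-5, 'MaxEpochs', 30, 'MiniBatchSize', 20, ...
    'ValidationData', augTest, 'ValidationFrequency', 5, ...
    'L2Regularization', 1e-4, 'Shuffle', 'every-epoch');  % L2 weight assumed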

Figure 1. Illustration of the X-ray dataset, a random selection for two types, COVID-19 and non-COVID

Figure 2. Illustration of the CT dataset, a random selection for two types, COVID-19 and non-COVID

Figure 3. Illustration of the methodology used in this study

5. Transfer Learning

Transfer learning is an efficient method for building deep learning models. The pre-training mechanism is performed on a large set of images, and the resulting networks serve as a basis for reuse when only a low number of input images is available. We used a popular transfer learning methodology for pre-trained networks called fine-tuning. This technique replaces the last layers of the model with new layers whose weights are learned on the new dataset; here, the last three layers of each CNN were changed to be appropriate for the dataset used for classification, as in the sketch below.
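As a hedged sketch of this fine-tuning step (using ResNet18 for concreteness; the layer names 'fc1000' and 'ClassificationLayer_predictions' are those of MATLAB's ResNet-18, and the other models use different names):

% Replace the final layers of a pre-trained network for two-class fine-tuning.
net0 = resnet18;                                      % any of the eight models
lgraph = layerGraph(net0);
newFc = fullyConnectedLayer(2, 'Name', 'fc_covid');   % two classes: COVID-19 / NON
lgraph = replaceLayer(lgraph, 'fc1000', newFc);
lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
    classificationLayer('Name', 'covid_output'));

net = trainNetwork(augTrain, lgraph, options);        % data and options from Section 4
[YPred, scores] = classify(net, augTest);             % per-fold test predictions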

5.1 ResNet

The family of deep residual networks is among the most popular convolutional neural networks used in deep learning; it was proposed by He et al. [35] (see also Al-Jumaili et al. [56]). The family comprises different models such as ResNet18, ResNet34, ResNet50, ResNet101, ResNet110, ResNet152, ResNet164, ResNet1202, and so on [57]. The number beside the ResNet name denotes the depth of the model (the number of layers used inside it). However, two major problems hindered the development of deeper CNN models, especially during training: degradation and vanishing gradients. ResNet solves these by adding a skip connection to each block, which prevents loss of information as the network gets deeper. Indeed, the major idea behind ResNet is the residual module, illustrated in Figure 4. In Figure 4a, two convolutional layers are present on the left side; they use 3×3 kernels and preserve the spatial dimensions. The right side is a skip connection that adds the input to the output, which is the method used in the ResNet18 model.

Different residual modules are used, such as the bottleneck residual module illustrated in Figure 4b, where the input passes through convolutional layers with 1×1 and 3×3 kernel sizes, while the right path is a skip connection that joins the module's input, through an addition operation, to the output of the left path; the ResNet101 and ResNet50 models use this method. A deep residual network is built by stacking multiple residual modules on top of each other, together with a number of conventional convolution and pooling layers. In our study, we used two different depths, ResNet18 and ResNet50, each with its own block mechanism; a sketch of the basic block follows.
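The skip connection of Figure 4a can be written, as an illustrative sketch with assumed sizes, as follows:

% Basic residual block: output = relu(F(x) + x), with 3x3 convolutions that
% preserve the spatial dimensions. Input size and filter count are assumptions.
lgraph = layerGraph();
lgraph = addLayers(lgraph, [
    imageInputLayer([56 56 64], 'Name', 'in', 'Normalization', 'none')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv1')
    batchNormalizationLayer('Name', 'bn1')
    reluLayer('Name', 'relu1')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv2')
    batchNormalizationLayer('Name', 'bn2')
    additionLayer(2, 'Name', 'add')                % F(x) + x
    reluLayer('Name', 'relu_out')]);
lgraph = connectLayers(lgraph, 'in', 'add/in2');   % the skip connection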

Figure 4. The basic residual module used in ResNet18 (a) and the bottleneck residual module used in ResNet50 (b); both are explained in detail in the study of He et al. [35]

5.2 Xception

The Xception model is one of the CNN architectures inspired by the Inception model. In other words, it is an extended architecture of Inception with small modifications, replacing the standard Inception modules with depthwise separable convolution layers. A depthwise separable convolution is composed of spatial convolution kernels of a given size (3×3, 5×5, etc.) performed on each of the input channels to capture the spatial correlations, followed by a pointwise (1×1) convolution to capture the cross-channel correlations. The Xception architecture depends fully on depthwise separable convolution layers; the model is composed of 36 convolutional layers organized into 14 modules, and residual connections are used around all modules except the first and last ones. Figure 5 shows the architecture of the Xception model, and a minimal sketch of the depthwise separable convolution follows.
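A minimal sketch of one such depthwise separable convolution (filter counts assumed):

% Depthwise separable convolution: a channel-wise spatial convolution for the
% spatial correlations, followed by a 1x1 pointwise convolution for the
% cross-channel correlations.
layers = [
    groupedConvolution2dLayer(3, 1, 'channel-wise', ...
        'Padding', 'same', 'Name', 'depthwise')        % one 3x3 filter per channel
    convolution2dLayer(1, 128, 'Name', 'pointwise')];  % 1x1 cross-channel mixing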

Figure 5. Xception architecture [58]

5.3 DenseNet201

Densely connected convolutional networks (DenseNets) are among the CNN architectures presented by Huang et al. [37]. DenseNets come with various convincing characteristics, such as high performance, fostering feature reuse, consolidating feature propagation, diminishing the vanishing-gradient problem, and increasing computational efficiency. DenseNet is similar to ResNet in modifying the connections at the output of each layer, but it concatenates the outputs of the convolutional layers in place of summing them up; consequently, the concatenated output is the feature map that forms the input of the next layer. The model is visualized in Figure 6 and sketched below.
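A sketch of this concatenation-based connectivity (sizes assumed):

% Dense connection: the layer output H(x) is concatenated with its input x,
% so the next layer receives the feature maps of all preceding layers.
lgraph = layerGraph();
lgraph = addLayers(lgraph, [
    imageInputLayer([56 56 64], 'Name', 'in', 'Normalization', 'none')
    convolution2dLayer(3, 32, 'Padding', 'same', 'Name', 'conv1')
    depthConcatenationLayer(2, 'Name', 'concat')]);  % [x, H(x)] along channels
lgraph = connectLayers(lgraph, 'in', 'concat/in2');  % feature reuse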

Figure 6. A dense block, in which each layer's concatenated output forms the feature-map input of the next layer [37]

5.4 MobileNet-V2

Figure 7. MobileNetV2 architecture [56]

MobileNetV2 is an improvement over the first-generation MobileNetV1. The most important aspect of the new model is its feature extraction capability at low cost. The basic unit used in this model combines 1) linear bottlenecks between the layers and 2) shortcut connections between the bottlenecks (Figure 7). The architecture of the model is suitable for varying performance requirements with multiple types of input images. The MobileNetV2 model consists of 53 layers with 3.5 million parameters, with an input image size of 224×224 (width multiplier 1). A sketch of the block follows.
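A sketch of one such block, with assumed channel counts (24 input channels, 6x expansion):

% Inverted residual block with linear bottleneck: 1x1 expansion with ReLU6,
% 3x3 depthwise convolution, linear 1x1 projection, and a shortcut between
% the two bottlenecks.
lgraph = layerGraph();
lgraph = addLayers(lgraph, [
    imageInputLayer([56 56 24], 'Name', 'in', 'Normalization', 'none')
    convolution2dLayer(1, 144, 'Name', 'expand')     % 1x1 expansion
    clippedReluLayer(6, 'Name', 'relu6_1')           % ReLU6
    groupedConvolution2dLayer(3, 1, 'channel-wise', ...
        'Padding', 'same', 'Name', 'depthwise')
    clippedReluLayer(6, 'Name', 'relu6_2')
    convolution2dLayer(1, 24, 'Name', 'project')     % linear bottleneck: no ReLU
    additionLayer(2, 'Name', 'add')]);               % shortcut between bottlenecks
lgraph = connectLayers(lgraph, 'in', 'add/in2');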

5.5 GoogleNet

GoogleNet is 22 layers deep (27 layers when the pooling layers are counted), which leads to high performance in the classification of images. In GoogleNet, 9 inception blocks are stacked linearly; in each inception block there are 4 parallel paths, and the end of the last inception block is connected to a global average pooling layer. Figure 8 clarifies the architecture of GoogleNet.

Figure 8. GoogleNet architecture [30]

5.6 AlexNet

AlexNet is one of the best-known CNN models, owing to its fast and high classification performance compared to earlier models. The architecture of AlexNet comprises 8 layers in total: the first 5 are convolutional layers and the remaining 3 are fully connected layers. The first two convolutional layers are each followed by an overlapping max-pooling layer used to extract features from the images, while the third, fourth, and fifth convolutional layers are connected directly, with the fifth followed by a max-pooling layer that feeds the fully connected layers. The output of every convolutional and fully connected layer passes through the ReLU non-linear activation function. The last layer is a SoftMax activation, which in the original network produces 1000 class probabilities.

5.7 VGG16

Figure 9. VGG16 architecture

VGG16 is one of the popular pre-trained models used for classifying multiple types of images. The distinctive property of VGG16 is that its hyper-parameters are kept uniform: the convolution layers use 3×3 filters with stride 1, and the same 2×2 max pooling with stride 2 is used throughout the network, as in the sketch below. The details of the architecture are shown in Figure 9, which illustrates the structure of VGG16. Table 1 lists all the pre-trained models used, their sizes, their numbers of layers and parameters, and the image input dimensions required by each model, which have an impact on the results.
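The repeated conv-pool pattern can be sketched as follows (the filter count of the first stage is assumed):

% One VGG16 stage: stacked 3x3 convolutions with stride 1 and 'same'
% padding, closed by 2x2 max pooling with stride 2.
layers = [
    convolution2dLayer(3, 64, 'Stride', 1, 'Padding', 'same', 'Name', 'conv1_1')
    reluLayer('Name', 'relu1_1')
    convolution2dLayer(3, 64, 'Stride', 1, 'Padding', 'same', 'Name', 'conv1_2')
    reluLayer('Name', 'relu1_2')
    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'pool1')];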

Table 1. Deep learning architectures for the different models

Name | Image Input Size | Size | Layers | Parameters (Millions)
ResNet-18 | 224x224x3 | 44 MB | 18 | 11.7
ResNet-50 | 224x224x3 | 96 MB | 50 | 25.6
Xception | 299x299x3 | 85 MB | 71 | 22.9
MobileNetV2 | 224x224x3 | 13 MB | 53 | 3.5
DenseNet201 | 224x224x3 | 77 MB | 201 | 20
GoogleNet | 224x224x3 | 27 MB | 22 | 7
VGG16 | 224x224x3 | 515 MB | 16 | 138
AlexNet | 224x224x3 | 227 MB | 8 | 61

6. Evaluation Metrics

We used various evaluation metrics to check each model separately, based on the confusion matrix outcomes of the validation tests. The confusion matrix results were used to compute metrics such as accuracy, sensitivity, specificity, precision, negative predictive value (NPV), Matthew's correlation coefficient (MCC), the F1-score, and the receiver operating characteristic (ROC) curve. Accuracy is the proportion of correct predictions over the whole dataset, as shown in Eq. (1). Sensitivity is the number of correct positive predictions divided by the total number of positives, Eq. (2). Specificity, also called the true negative rate (TNR), is the number of true negative predictions divided by the total number of negatives in the dataset, Eq. (3).

Accuracy $=\frac{T P+T N}{T P+F P+T N+F N}$     (1)

Sensitivity $=\frac{T P}{T P+F N}$     (2)

Specificity $=\frac{T N}{T N+F P}$      (3)

Correspondingly, Eq. (4) gives the precision (positive predictive value, PPV) as the ratio of correct positive predictions to the number of overall positive predictions, while the negative predictive value (NPV) is given in Eq. (5). The harmonic mean of precision and sensitivity is known as the F1-score and is computed as shown in Eq. (6). Finally, Matthew's correlation coefficient (MCC), given in Eq. (7), is a correlation coefficient computed from the confusion matrix values.

Precision $=\frac{T P}{T P+F P}$     (4)

Negative predictive value $(N P V)=\frac{T N}{T N+F N}$     (5)

$F 1-$ score $=\frac{2 * T P}{2 * T P+F P+F N}$     (6)

$M C C=\frac{(T P * T N)-(F P * F N)}{\sqrt{(T P+F P)(T P+F N)(T N+F P)(T N+F N)}}$     (7)
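As a compact restatement of Eqs. (1)-(7), the following MATLAB helper function (an illustrative sketch, not part of the original implementation) computes all seven metrics from the four confusion-matrix counts:

% Illustrative helper: the seven metrics of Eqs. (1)-(7) from confusion-matrix counts.
function m = evalMetrics(TP, TN, FP, FN)
    m.accuracy    = (TP + TN) / (TP + FP + TN + FN);   % Eq. (1)
    m.sensitivity = TP / (TP + FN);                    % Eq. (2)
    m.specificity = TN / (TN + FP);                    % Eq. (3)
    m.precision   = TP / (TP + FP);                    % Eq. (4), PPV
    m.npv         = TN / (TN + FN);                    % Eq. (5)
    m.f1          = (2 * TP) / (2 * TP + FP + FN);     % Eq. (6)
    m.mcc         = (TP * TN - FP * FN) / ...          % Eq. (7)
        sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN));
end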

7. Results and Discussion

This section presents the results of all experimental tests and the configuration setups of the deep learning network models. We discuss the results obtained using the different networks on both datasets: X-ray and CT images. The confusion matrix is one of the most important measurement tools used to check classifier performance. The confusion matrix yields four types of counts, namely true positive (TP), true negative (TN), false positive (FP), and false negative (FN), which were used to assess the performance of the models with the different metrics, as in the sketch below.
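For concreteness, a hedged sketch of how the per-fold counts can be extracted in MATLAB (assuming YPred from the Section 5 sketch and the evalMetrics helper sketched in Section 6; the row/column order of confusionmat follows the order of the class labels):

% Sketch: confusion-matrix counts for one fold, 'COVID-19' taken as positive.
C = confusionmat(imdsTest.Labels, YPred);   % rows = actual, columns = predicted
TP = C(1,1); FN = C(1,2);                   % assumes class order {COVID-19, NON}
FP = C(2,1); TN = C(2,2);
m = evalMetrics(TP, TN, FP, FN);            % helper sketched in Section 6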

In the first scenario, as shown in Table 2, the confusion matrices were obtained using X-ray images for the eight types of deep transfer learning: GoogleNet, AlexNet, VGG16, MobileNet-V2, ResNet50, DenseNet201, ResNet18, and Xception. As we observe from Table 2, the DenseNet201 model achieved the highest classification accuracy. It recognized 249 images of the labelled COVID-19 class, and only 3 images were classified as non-COVID; besides, 324 images were correctly labelled as non-COVID, and only one image was classified as COVID-19. The VGG16 model correctly labelled 208 images as COVID-19, while 3 images were wrongly classified; 324 images were accurately classified as non-COVID, while 2 images were misclassified. Furthermore, GoogleNet and ResNet50 were identical in the number of images correctly identified as COVID-19, where 246 images were correctly predicted, with only 3 and 6 mispredicted images labelled as non-COVID, respectively.

The ResNet18 model predicted 245 COVID-19 images correctly, although 4 images were mispredicted and labelled as non-COVID; besides, 323 images were correctly predicted for the non-COVID class, and only 5 were predicted as COVID-19. The AlexNet model predicted 242 images in the COVID-19 class, though 10 images were falsely predicted, while 317 images were predicted as non-COVID. Lastly, the MobileNet-V2 and Xception models showed the lowest predictions for the COVID-19 class compared to the former models. The two models correctly predicted 322 and 324 non-COVID images, respectively; however, 1 and 122 non-COVID images, respectively, were misclassified as COVID-19.

Table 2. Confusion matrices for the different deep learning models. The results are averages of the 5-fold cross-validation on the X-ray dataset

CNN Name | Actual Class | Predicted: COVID-19 | Predicted: NON
AlexNet | COVID-19 | 242 | 10
AlexNet | NON | 8 | 317
DenseNet201 | COVID-19 | 249 | 3
DenseNet201 | NON | 1 | 324
GoogleNet | COVID-19 | 264 | 3
GoogleNet | NON | 4 | 324
MobileNet-V2 | COVID-19 | 209 | 5
MobileNet-V2 | NON | 1 | 322
ResNet18 | COVID-19 | 245 | 4
ResNet18 | NON | 5 | 323
ResNet50 | COVID-19 | 246 | 6
ResNet50 | NON | 4 | 321
VGG16 | COVID-19 | 208 | 3
VGG16 | NON | 2 | 324
Xception | COVID-19 | 128 | 3
Xception | NON | 122 | 324

MobileNet-V2 and ResNet50 were identical in sensitivity score, while the Xception model attained the lowest overall results. The specificity scores of MobileNet-V2 and DenseNet201 were identical, on average approximately 99.6%, and the specificity of VGG16 approached the results of both former models. Among the remaining models, GoogleNet, ResNet50, and ResNet18 obtained fairly acceptable average specificity, whereas the performance of Xception was the lowest. DenseNet201 dominated all the other metrics: its precision, NPV, F1-score, and MCC were 99.6%, 99.0%, 99.2%, and 98.5%, respectively. The DenseNet201 model showed superior performance compared to the other models on all evaluation metrics, as shown in Table 3.

The accuracy obtained from the VGG16 model was 99.0%, the closest value to that of DenseNet201. GoogleNet, MobileNet-V2, ResNet50, and ResNet18 provided similar accuracy values of around 98%. Furthermore, among all models, Xception and AlexNet achieved the lowest results, 78.3% and 96.8%, respectively. The sensitivity of the DenseNet201, GoogleNet, VGG16, and ResNet18 models averaged around 98%.

Table 3. Different types of metrics used to check the performance of the various deep learning models using the X-ray dataset, where Acc refers to accuracy, Sen to sensitivity, Spe to specificity, Pre to precision, NPV to negative predictive value, F1 to F1-score, and MCC to Matthew's correlation coefficient

Model | Acc | Sen | Spe | Pre | NPV | F1-Score | MCC
GoogleNet | 98.7 | 98.7 | 98.7 | 98.4 | 99.0 | 98.5 | 97.5
AlexNet | 96.8 | 96.0 | 97.5 | 96.8 | 96.9 | 96.4 | 93.6
VGG16 | 99.0 | 98.5 | 99.3 | 99.0 | 99.0 | 98.8 | 98.0
MobileNet-V2 | 98.8 | 97.6 | 99.6 | 99.5 | 98.4 | 98.5 | 97.6
ResNet50 | 98.2 | 97.6 | 98.6 | 98.4 | 98.1 | 98.0 | 96.4
DenseNet201 | 99.3 | 98.8 | 99.6 | 99.6 | 99.0 | 99.2 | 98.5
ResNet18 | 98.4 | 98.3 | 98.4 | 98.0 | 98.7 | 98.1 | 96.8
Xception | 78.3 | 97.7 | 72.6 | 51.2 | 99.0 | 69.1 | 59.4

Figure 10. The receiver operating characteristic (ROC) curves of the pre-trained models, including GoogleNet, AlexNet, ResNet50, ResNet18, VGG16, MobileNet-V2, DenseNet201, and Xception, for the X-ray dataset

To make the performance of the different models easier to visualize, we used ROC curves. As presented in Figure 10, it is obvious that DenseNet201 achieved the best performance compared to the other models. Similarly to the first scenario, in the second scenario we utilized CT images as input to the different models, including GoogleNet, AlexNet, VGG16, MobileNet-V2, ResNet50, DenseNet201, ResNet18, and Xception. The confusion matrices of the models are illustrated in Table 4. The DenseNet201 model obtained the highest prediction results: it predicted all the images of both classes (COVID-19 and non-COVID) without error. VGG16 and MobileNet-V2 were identical in the prediction of the COVID-19 class, with 259 images each, while the numbers of images detected in the non-COVID class were 211 and 210, respectively. The error was just one image in the MobileNet-V2 model for each class (COVID-19 and non-COVID), but in the VGG16 model the error was 4 images within the COVID-19 class.

ResNet50 and ResNet18 predicted all images correctly for the non-COVID class, but in the COVID-19 class only one image was misclassified, which made them the closest to DenseNet201. Regarding the GoogleNet model, the number of images correctly predicted for the COVID-19 class was 245, although 3 images were counted in the non-COVID class. The lowest predictions were obtained by AlexNet and Xception for both classes: for AlexNet, only 227 COVID-19 images were correctly predicted, whereas 2 images were labelled as non-COVID; the Xception model, on the other hand, predicted 257 images as COVID-19, and just 3 images were labelled as non-COVID.

DenseNet201 achieved the highest result, 100% on all evaluation metrics, while VGG16, MobileNet-V2, ResNet50, and ResNet18 scored 98-99% on all evaluation metrics. GoogleNet achieved results ranging between 97% and 99% on all assessment measures. The lowest results were obtained by AlexNet and Xception, which did not exceed 92% and 69% accuracy, respectively. Among all the models, DenseNet201 showed superior performance on all evaluation metrics, as shown in Table 5.

Table 4. Confusion matrices for the different deep learning models. The results are averages of the 5-fold cross-validation on the CT scan dataset

CNN Name | Actual Class | Predicted: COVID-19 | Predicted: NON
AlexNet | COVID-19 | 227 | 2
AlexNet | NON | 33 | 209
DenseNet201 | COVID-19 | 260 | 0
DenseNet201 | NON | 0 | 211
GoogleNet | COVID-19 | 259 | 4
GoogleNet | NON | 1 | 207
MobileNet-V2 | COVID-19 | 259 | 1
MobileNet-V2 | NON | 1 | 210
ResNet18 | COVID-19 | 259 | 0
ResNet18 | NON | 1 | 211
ResNet50 | COVID-19 | 259 | 0
ResNet50 | NON | 1 | 211
VGG16 | COVID-19 | 260 | 4
VGG16 | NON | 0 | 207
Xception | COVID-19 | 257 | 142
Xception | NON | 3 | 69

Table 5. Different types of metrics used to check the performance of the various deep learning models using the CT scan dataset, where Acc refers to accuracy, Sen to sensitivity, Spe to specificity, Pre to precision, NPV to negative predictive value, F1 to F1-score, and MCC to Matthew's correlation coefficient

Model | Acc | Sen | Spe | Pre | NPV | F1-Score | MCC
GoogleNet | 98.9 | 98.4 | 99.5 | 99.6 | 98.1 | 99.0 | 97.8
AlexNet | 92.5 | 99.1 | 86.3 | 87.3 | 99.0 | 92.8 | 85.9
VGG16 | 99.1 | 98.4 | 100 | 100 | 98.1 | 99.2 | 98.2
MobileNet-V2 | 99.5 | 99.6 | 99.5 | 99.6 | 99.5 | 99.6 | 99.1
ResNet50 | 99.7 | 100 | 99.5 | 99.6 | 100 | 99.8 | 99.5
DenseNet201 | 100 | 100 | 100 | 100 | 100 | 100 | 100
ResNet18 | 99.7 | 100 | 99.5 | 99.6 | 100 | 99.8 | 99.5
Xception | 69.2 | 64.4 | 95.8 | 98.8 | 32.7 | 77.9 | 43.5

Table 6. Comparison between the state-of-the-art results published for deep learning models and our results using both the X-ray and CT scan datasets, where Sen refers to sensitivity, Spe to specificity, Acc to accuracy, and F1 to F1-score

Ref. | Dataset | Image Types | DL Model | Layers Num. | Classifier | Sen | Spe | F1 | Acc
[59] | Different Datasets | X-ray | WRN | 28 | SoftMax | 97.53 | 88.52 | 94.5 | 93
[60] | Different Datasets | CT | DenseNet | Standard | SoftMax | NA | NA | 90 | 89
[61] | Different Datasets | CT | DenseNet | 169 | NA | NA | NA | 85 | 86
[62] | Different Datasets | X-ray | ResNet18 | Standard | SoftMax | 85 | 96 | 84 | 88
[63] | Different Datasets | CT | VGG | Modified | Weakly supervised | 93 | 93 | NA | 94
[64] | Different Datasets | CT | ResNet-101, Xception | Standard | SoftMax | 98 | 100 | NA | 99
[65] | SIRM | CT | VGG-16, GoogleNet and ResNet-50 | Standard | SVM | 98.93 | 97.60 | 98.28 | 98.27
[66] | UCSD-AI4H | CT | DECAPS | Developed | HAMs | NA | 84.3 | 87.1 | 87.6
[67] | Different Datasets | CT | AlexNet | Modified | SoftMax | 100 | 96 | NA | 98
[68] | Medical | CT | ResNet50 | Modified | Dense Layer | 81.1 | 61.5 | NA | 76
[69] | Medical | CT | DRE-Net | Modified | MLP | 93 | 93 | 93 | 93
[70] | Medical | CT | AFS-DF | Developed | Ensemble | 93.05 | 89.95 | NA | 91.79
[71] | Different Datasets | CT/X-ray | ResNet101 | Standard | SoftMax | 100 | 97.50 | NA | 98.75
[72] | Medical | CT | Inception | Modified | Decision Tree and AdaBoost | 81 | 84 | 77 | 82.9
[73] | Medical | CT | DenseNet | Standard | SoftMax | 97 | 87 | 93 | 92
[74] | Medical | 3D-CT | ResNet18 | 29 | Noisy-OR Bayesian function | NA | NA | 83.9 | 86.7
[75] | Different Datasets | CT | CNN as a feature extractor | 12 | LSTM | NA | NA | NA | 99.68
[76] | COVID-CT-Dataset | CT | ResNet50 | Modified | SoftMax | 80.85 | 91.43 | NA | 82.91
[77] | Different Datasets | CT | ResNet50 as a feature extractor | 14 | SoftMax | 91.45 | 94.77 | NA | 93.01
[78] | Different Datasets | CT | CNN | 7 | NA | 90 | 90 | NA | 92
[79] | COVIDx | X-ray | COVIDiagnosis-Net (SqueezeNet) | Standard | Decision-Making | 95.13 | 95.3 | 96.51 | 98.08
[80] | Different Datasets | X-ray | DarkCovidNet | 39 | Linear | NA | 99 | 98 | 98
[52] | Different Datasets | X-ray | DeTraC (ResNet18) | Standard | SoftMax | NA | NA | NA | 95
[81] | COVIDx | X-ray | COVID-CAPS | 9 | Capsule Layer | 90 | 95.8 | NA | 95.7
[1] | Different Datasets | X-ray | CoroNet | Modified | SoftMax | 89.92 | 96.4 | 89.9 | 89.5
[82] | Different Datasets | X-ray | GSA-DenseNet121-COVID-19 | Modified | SoftMax | 98 | NA | 98 | 98
[83] | Different Datasets | X-ray | DCSL Framework | Modified | SoftMax | 97.09 | NA | 96.98 | 97.01
[84] | Different Datasets | X-ray | CNN | 12 | Grad-CAM | NA | NA | NA | 90.1
[85] | Kaggle | X-ray | CNN | 11 | SoftMax | NA | NA | NA | 93
[86] | COVIDx | X-ray | CoroNet | 2 separate (FPAE) + ResNet18 | SoftMax | 93.50 | NA | 93.51 | 93.50
[87] | COVIDx | X-ray | DenseNet121 | Modified | SoftMax | 92 | NA | 92 | 96.4
[88] | Different Datasets | X-ray, CT | Inception-ResNetV2 | Standard | MLP Classifier | 92.11 | 96.06 | 92.07 | 92.18
[45] | Different Datasets | X-ray | MobileNet | Modified | NA | NA | NA | NA | 96.78
[89] | Different Datasets | X-ray | MobileNetV2, DenseNet121, ResNet (18, 50, 101, 152), DenseNet (169, 201), ResNeXt50, WRN (50, 101), ResNeXt101 | Modified | SoftMax | NA | NA | 0.64 | 89.4
[90] | Different Datasets | X-ray | CAD-based YOLO Predictor | 54 | Tensor of prediction | 85.15 | 99.05 | 84.81 | 97.40
[5] | Clinical (3 Datasets) | X-ray | CovXNet | Various layers | Grad-CAM | 97.8 | 94.7 | 97.1 | 97.4
[91] | Different Datasets | X-ray | VGG16 | Modified | SoftMax | 87.7 | NA | NA | 84.1
[47] | Different Datasets | X-ray | ResNet50 | Standard | SVM | 95.33 | NA | 95.34 | 95.33
Ours | Zenodo | X-ray | DenseNet201 | Modified | SoftMax | 98.8 | 99.6 | 99.2 | 99.3
Ours | Kaggle | CT | DenseNet201 | Modified | SoftMax | 100 | 100 | 100 | 100

The various models were also verified using ROC curves to present their performance in a more transparent and understandable way, as shown in Figure 11. It is undeniable that the DenseNet201 model achieves exceptional performance that exceeds that of its peers. Finally, we compared our results with other leading-edge studies recently published in the COVID-19 literature for X-ray and CT datasets; as shown in Table 6, our results outperformed the previously published ones (last two rows of the table). A sketch of how such ROC curves can be produced is given below.
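As an illustrative sketch, such ROC curves can be produced in MATLAB (Statistics and Machine Learning Toolbox) from the class scores returned by classify in the Section 5 sketch; the positive-class label 'COVID-19' is an assumed folder name:

% ROC curve and AUC for one model, 'COVID-19' as the positive class.
posIdx = find(net.Layers(end).Classes == 'COVID-19');  % score column of the class
[fpr, tpr, ~, auc] = perfcurve(imdsTest.Labels, scores(:, posIdx), 'COVID-19');
plot(fpr, tpr); xlabel('False positive rate'); ylabel('True positive rate');
title(sprintf('ROC, AUC = %.3f', auc));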

Figure 11. The receiver operating characteristic (ROC) curves of the pre-trained models, including GoogleNet, AlexNet, ResNet50, ResNet18, VGG16, MobileNet-V2, DenseNet201, and Xception, for the CT scan dataset

8. Conclusion

Deep learning is one of the most crucial technologies currently available because of its ability to assist and speed up decision-making. Especially under the circumstances the world has experienced due to the COVID-19 pandemic, governments have supported deep learning to accelerate diagnosis. In this study, eight types of pre-trained deep learning models (GoogleNet, AlexNet, VGG16, MobileNet-V2, ResNet50, DenseNet201, ResNet18, and Xception) were implemented using the built-in routines of MATLAB. In the first scenario, X-ray images were adopted: 577 images divided into COVID-19 and non-COVID classes, where DenseNet201 achieved the highest accuracy value of 99.3%. Later, in the second scenario, we used CT images for the classification (471 images); similar to the X-ray input set, DenseNet201 achieved the highest accuracy value, 100%. The major limitation of the current paper is the small dataset, although transfer learning can both fine-tune the weights of pre-trained networks on small datasets and train the weights of networks on big datasets. Despite this limitation, superior results were obtained by applying regularization, learning rate scheduling, and data augmentation, which in turn increased the learning ability for the classification of COVID-19. Pre-trained models have advantages such as reduced time consumption and simplified implementation: there is no need for huge amounts of images (datasets) for training and testing, because the models are already trained, in contrast to developing a model from scratch for a specific type of image. We conclude that applying a pre-trained model allows classification with near-perfect results for both types of datasets (CT and X-ray).

The results obtained in previously published papers were less impressive than the results we achieved, because the number of images needed to train a model developed from scratch must be huge in order to obtain test accuracy sufficient for adoption in hospitals. Based on these limitations found in previous studies, we used pre-trained models to classify COVID-19 with the best possible accuracy.

To conclude, in this study we showed that the classification of COVID-19 and non-COVID radiological images is possible with virtually no error. In terms of applications, based on the results obtained, we can recommend utilizing this methodology for chest-related infectious diseases (radiographs) for which the amount of data does not allow developing (training and testing) a convolutional neural network from scratch.

References

[1] Khan, A.I., Shah, J.L., Bhat, M.M. (2020). CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Computer Methods and Programs in Biomedicine, 196: 105581. https://doi.org/10.1016/j.cmpb.2020.105581

[2] Lu, H., Stratton, C.W., Tang, Y.W. (2020). Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle. Journal of Medical Virology, 92(4): 401. https://doi.org/10.1002/jmv.25678

[3] World Health Organization. (2020). WHO Director-General's remarks at the media briefing on 2019-nCoV on 11 February 2020.

[4] Zhu, N., Zhang, D., Wang, W., Li, X., Yang, B., Song, J., Zhao, X., Huang, B., Shi, W., Lu, R., Niu, P., Zhan, F., Ma, X., Wang, D., Xu, W., Wu, G., Gao, G.F., Tan, W. (2020). A novel coronavirus from patients with pneumonia in China, 2019. New England Journal of Medicine. https://doi.org/10.1056/NEJMoa2001017

[5] Mahmud, T., Rahman, M.A., Fattah, S.A. (2020). CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Computers in Biology and Medicine, 122: 103869. https://doi.org/10.1016/j.compbiomed.2020.103869

[6] Sohrabi, C., Alsafi, Z., O'Neill, N., Khan, M., Kerwan, A., Al-Jabir, A., Iosifidis, C., Agha, R. (2020). World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19). International Journal of Surgery, 76: 71-76. https://doi.org/10.1016/j.ijsu.2020.02.034

[7] Wang, C., Horby, P.W., Hayden, F.G., Gao, G.F. (2020). A novel coronavirus outbreak of global health concern. The Lancet, 395(10223): 470-473. https://doi.org/10.1016/S0140-6736(20)30185-9

[8] Guan, W.J., Ni, Z.Y., Hu, Y., Liang, W.H., Ou, C.Q., He, J.X., Liu, L., Shan, H., Lei, C.L., Hui, D.S.C., Du, B., Li, L.J., Zeng, G., Yuen, K.Y., Chen, R.C., Tang, C.L., Wang, T., Chen, P.Y., Xiang, J., Li, S.Y., Wang, J.L., Liang, Z.J., Peng, Y.X., Wei, L., Liu, Y., Hu, Y.H., Peng, P., Wang, J.M., Liu, J.Y., Chen, Z., Li, G., Zheng, Z.J., Qiu, S.Q., Luo, J., Ye, C.J., Zhu, S.Y., Zhong, N.S. (2020). Clinical characteristics of 2019 novel coronavirus infection in China. MedRxiv. https://doi.org/10.1101/2020.02.06.20020974

[9] Huang, C., Wang, Y., Li, X., Ren, L., Zhao, J., Hu, Y., Zhang, L., Fan, G., Xu, J., Gu, X., Cheng, Z., Yu, T., Xia, J., Wei, Y., Wu, W., Xie, X., Yin, W., Li, H., Liu, M., Xiao, Y., Gao, H., Guo, L., Xie, J., Wang, G., Jiang, R., Gao, Z., Jin, Q., Wang, J., Cao, B. (2020). Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The Lancet, 395(10223): 497-506. https://doi.org/10.1016/S0140-6736(20)30183-5

[10] Singhal, T. (2020). A review of coronavirus disease-2019 (COVID-19). The Indian Journal of Pediatrics, 87(4): 281-286. https://doi.org/10.1007/s12098-020-03263-6

[11] Li, Q., Guan, X., Wu, P., Wang, X., Zhou, L., Tong, Y., Ren, R., Leung, K.S.M., Lau, E.H.Y., Wong, J.Y., Xing, X., Xiang, N., Wu, Y., Li, C., Chen, Q., Li, D., Liu, T., Zhao, J., Liu, M., Tu, W., Chen, C., Jin, L., Yang, R., Wang, Q., Zhou, S., Wang, R., Liu, H., Luo, Y., Liu, Y., Shao, G., Li, H., Tao, Z., Yang, Y., Deng, Z., Liu, B., Ma, Z., Zhang, Y., Shi, G., Lam, T.T.Y., Wu, J.T., Gao, G.F., Cowling, B J., Yang, B., Leung, G.M., Feng, Z. (2020). Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. New England Journal of Medicine. https://doi.org/10.1056/NEJMoa2001316

[12] Al-Jumaili, S., Al-Azzawi, A., Duru, A.D., Ibrahim, A.A. (2021). Covid-19 X-ray image classification using SVM based on local binary pattern. In 2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, pp. 383-387. https://doi.org/10.1109/ISMSIT52890.2021.9604731

[13] Al-Jumaili, S., Duru, A.D., Uçan, O.N. (2021). Covid-19 Ultrasound image classification using SVM based on kernels deduced from convolutional neural network. In 2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, pp. 429-433. https://doi.org/10.1109/ISMSIT52890.2021.9604551

[14] Al-jumaili, S., Duru, D.G., Ucan, B., Uçan, O.N., Duru, A.D. (2022). Classification of Covid-19 effected CT images using a hybrid approach based on deep transfer learning and machine learning. Research Square. https://doi.org/10.21203/rs.3.rs-1541093/v1

[15] Ai, T., Yang, Z., Hou, H., Zhan, C., Chen, C., Lv, W., Tao, Q., Sun, Z., Xia, L. (2020). Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology, 296(2): E32-E40. https://doi.org/10.1148/radiol.2020200642

[16] Mahdy, L.N., Ezzat, K.A., Elmousalami, H.H., Ella, H.A., Hassanien, A.E. (2020). Automatic X-ray COVID-19 lung image classification system based on multi-level thresholding and support vector machine. MedRxiv, 2020-03. https://doi.org/10.1101/2020.03.30.20047787

[17] Kadry, S., Rajinikanth, V., Rho, S., Raja, N.S.M., Rao, V.S., Thanaraj, K.P. (2020). Development of a machine-learning system to classify lung CT scan images into normal/COVID-19 class. arXiv Preprint arXiv, 2004: 13122. https://doi.org/10.48550/arXiv.2004.13122

[18] Pereira, R.M., Bertolini, D., Teixeira, L.O., Silla Jr, C.N., Costa, Y.M. (2020). COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Computer Methods and Programs in Biomedicine, 194: 105532. https://doi.org/10.1016/j.cmpb.2020.105532

[19] Self, W.H., Courtney, D.M., McNaughton, C.D., Wunderink, R.G., Kline, J.A. (2013). High discordance of chest X-ray and computed tomography for detection of pulmonary opacities in ED patients: Implications for diagnosing pneumonia. The American Journal of Emergency Medicine, 31(2): 401-405. https://doi.org/10.1016/j.ajem.2012.08.041

[20] Rubin, G.D., Ryerson, C.J., Haramati, L.B., Sverzellati, N., Kanne, J.P., Raoof, S., Schluger, N.W., Volpi, A., Yim, J.J., Martin, I.B.K., Anderson, D.J., Kong, C., Altes, T., Bush, A., Desai, S.R., Goldin, O., Goo, J.M., Humbert, M., Inoue, Y., Kauczor, H.U., Luo, F., Mazzone, P.J., Prokop, M., Remy-Jardin, M., Richeldi, L., Schaefer-Prokop, C.M., Tomiyama, N., Wells, A.U., Leung, A.N. (2020). The role of chest imaging in patient management during the COVID-19 pandemic: A multinational consensus statement from the fleischner society. Radiology, 296(1): 172-180. https://doi.org/10.1148/radiol.2020201365

[21] Alizadehsani, R., Alizadeh Sani, Z., Behjati, M., Roshanzamir, Z., Hussain, S., Abedini, N., Hasanzadeh, F., Khosravi, A., Shoeibi, A., Roshanzamir, M., Moradnejad, P., Nahavandi, S., Khozeimeh, F., Zare, A., Panahiazar, M., Acharya, U.R., Islam, S.M.S. (2021). Risk factors prediction, clinical outcomes, and mortality in COVID‐19 patients. Journal of Medical Virology, 93(4): 2307-2320. https://doi.org/10.1002/jmv.26699

[22] Kroft, L.J., van der Velden, L., Girón, I.H., Roelofs, J.J., de Roos, A., Geleijns, J. (2019). Added value of ultra-low-dose computed tomography, dose equivalent to chest X-ray radiography, for diagnosing chest pathology. Journal of Thoracic Imaging, 34(3): 179. https://doi.org/10.1097/RTI.0000000000000404

[23] Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C., Shpanskaya, K., Lungren, M.P., Ng, A.Y. (2017). Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv Preprint arXiv:1711.05225. https://doi.org/10.48550/arXiv.1711.05225

[24] Norgeot, B., Glicksberg, B.S., Butte, A.J. (2019). A call for deep-learning healthcare. Nature Medicine, 25(1): 14-15. https://doi.org/10.1038/s41591-018-0320-3

[25] Altan, G., Kutlu, Y., Allahverdi, N. (2019). Deep learning on computerized analysis of chronic obstructive pulmonary disease. IEEE Journal of Biomedical and Health Informatics, 24(5): 1344-1350. https://doi.org/10.1109/JBHI.2019.2931395

[26] Pasa, F., Golkov, V., Pfeiffer, F., Cremers, D., Pfeiffer, D. (2019). Efficient deep network architectures for fast chest X-ray tuberculosis screening and visualization. Scientific Reports, 9(1): 6268. https://doi.org/10.1038/s41598-019-42557-4

[27] Gordienko, Y., Gang, P., Hui, J., Zeng, W., Kochura, Y., Alienin, O., Rokovyi, O., Stirenko, S. (2019). Deep learning with lung segmentation and bone shadow exclusion techniques for chest X-ray analysis of lung cancer. In Advances in Computer Science for Engineering and Education. Springer International Publishing, 13: 638-647. https://doi.org/10.1007/978-3-319-91008-6_63

[28] Zu, Z.Y., Jiang, M.D., Xu, P.P., Chen, W., Ni, Q.Q., Lu, G.M., Zhang, L.J. (2020). Coronavirus disease 2019 (COVID-19): a perspective from China. Radiology, 296(2): E15-E25. https://doi.org/10.1148/radiol.2020200490

[29] Baltruschat, I.M., Nickisch, H., Grass, M., Knopp, T., Saalbach, A. (2019). Comparison of deep learning approaches for multi-label chest X-ray classification. Scientific Reports, 9(1): 1-10. https://doi.org/10.1038/s41598-019-42294-8

[30] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9. https://doi.org/10.1109/CVPR.2015.7298594

[31] Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251-1258.

[32] Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, Springer International Publishing, 18: 234-241. https://doi.org/10.1007/978-3-319-24574-4_28

[33] Krizhevsky, A., Sutskever, I., Hinton, G.E. (2017). Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6): 84-90. https://doi.org/10.1145/3065386

[34] Simonyan, K., Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv Preprint arXiv:1409.1556. https://doi.org/10.48550/arXiv.1409.1556

[35] He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778. https://doi.org/10.1109/CVPR.2016.90

[36] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv Preprint arXiv: 1704.04861. https://doi.org/10.48550/arXiv.1704.04861

[37] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708. https://doi.org/10.1109/CVPR.2017.243

[38] Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size. arXiv Preprint arXiv: 1602.07360. https://doi.org/10.48550/arXiv.1602.07360

[39] Liu, C., Cao, Y., Alcantara, M., Liu, B., Brunette, M., Peinado, J., Curioso, W. (2017). TX-CNN: Detecting tuberculosis in chest X-ray images using convolutional neural network. In 2017 IEEE International Conference on Image Processing (ICIP), pp. 2314-2318. https://doi.org/10.1109/ICIP.2017.8296695

[40] Dong, Y., Pan, Y., Zhang, J., Xu, W. (2017). Learning to read chest X-ray images from 16000+examples using CNN. In 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), pp. 51-57. https://doi.org/10.1109/CHASE.2017.59

[41] Chouhan, V., Singh, S.K., Khamparia, A., Gupta, D., Tiwari, P., Moreira, C., Damaševičius, R., De Albuquerque, V.H.C. (2020). A novel transfer learning based approach for pneumonia detection in chest X-ray images. Applied Sciences, 10(2), 559. https://doi.org/10.3390/app10020559

[42] Hemdan, E.E.D., Shouman, M.A., Karar, M.E. (2020). Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in X-ray images. arXiv Preprint arXiv: 2003.11055. https://doi.org/10.48550/arXiv.2003.11055

[43] Waheed, A., Goyal, M., Gupta, D., Khanna, A., Al-Turjman, F., Pinheiro, P.R. (2020). Covidgan: Data augmentation using auxiliary classifier gan for improved covid-19 detection. IEEE Access, 8: 91916-91923. https://doi.org/10.1109/ACCESS.2020.2994762

[44] Wang, L., Lin, Z.Q., Wong, A. (2020). Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest X-ray images. Scientific Reports, 10(1): 1-12. https://doi.org/10.1038/s41598-020-76550-z

[45] Apostolopoulos, I.D., Mpesiana, T.A. (2020). Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine, 43: 635-640. https://doi.org/10.1007/s13246-020-00865-4

[46] Horry, M.J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., Shukla, N. (2020). X-ray image based COVID-19 detection using pre-trained deep learning models. EngRxiv. https://doi.org/10.31224/osf.io/wx89s

[47] Sethy, P.K., Behera, S.K., Ratha, P.K., Biswas, P. (2020). Detection of coronavirus disease (Covid-19) based on deep features and support vector machine. Preprints.

[48] Sheykhivand, S., Mousavi, Z., Mojtahedi, S., Rezaii, T.Y., Farzamnia, A., Meshgini, S., Saad, I. (2021). Developing an efficient deep neural network for automatic detection of COVID-19 using chest X-ray images. Alexandria Engineering Journal, 60(3): 2885-2903. https://doi.org/10.1016/j.aej.2021.01.011

[49] Haque, K.F., Haque, F.F., Gandy, L., Abdelgawad, A. (2020). Automatic detection of Covid-19 from chest X-ray images with convolutional neural networks. In 2020 International Conference on Computing, Electronics & Communications Engineering (ICCECE). IEEE, pp. 125-130. https://doi.org/10.1109/iCCECE49321.2020.9231235

[50] Jain, R., Gupta, M., Jain, K., Kang, S. (2021). Deep learning based prediction of Covid-19 virus using chest X-Ray. Journal of Interdisciplinary Mathematics, 24(1): 155-173. https://doi.org/10.1080/09720502.2020.1833460

[51] Che Azemin, M.Z., Hassan, R., Mohd Tamrin, M.I., Md Ali, M.A. (2020). Covid-19 deep learning prediction model using publicly available radiologist-adjudicated chest X-ray images as training data: Preliminary findings. International Journal of Biomedical Imaging, 2020. https://doi.org/10.1155/2020/8828855

[52] Abbas, A., Abdelsamea, M.M., Gaber, M.M. (2021). Classification of Covid-19 in chest X-ray images using DeTraC deep convolutional neural network. Applied Intelligence, 51: 854-864. https://doi.org/10.1007/s10489-020-01829-7

[53] Gao, X.H. (2021). Covid19 X-ray Dataset. Available: https://doi.org/10.5281/zenodo.4010718

[54] RSNA. https://www.rsna.org/covid-19/COVID-19-RICORD, accessed on 8 January 2023.

[55] Cohen, J.P., Morrison, P., Dao, L., Roth, K., Duong, T.Q., Ghassemi, M. (2020). Covid-19 image data collection: Prospective predictions are the future. arXiv Preprint arXiv: 2006.11988. https://doi.org/10.48550/arXiv.2006.11988

[56] Al-Jumaili, S., Al-Jumaili, A., Alyassri, S., Duru, A.D., Uçan, O.N. (2022). Recent advances on convolutional architectures in medical applications: Classical or quantum? In 2022 International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, pp. 800-805. https://doi.org/10.1109/ISMSIT56059.2022.9932857

[57] Chandola, Y., Virmani, J., Bhadauria, H., Kumar, P. (2021). Chapter 4 - End-to-end pre-trained CNN-based computer-aided classification system design for chest radiographs. In: Deep Learning for Chest Radiographs, Primers in Biomedical Imaging Devices and Systems, pp. 117-140.

[58] Srinivasan, K., Garg, L., Datta, D., Alaboudi, A.A., Jhanjhi, N.Z., Agarwal, R., Thomas, A.G. (2021). Performance comparison of deep CNN models for detecting driver’s distraction. CMC-Computers, Materials & Continua, 68(3): 4109-4124.

[59] Rajaraman, S., Antani, S. (2020). Training deep learning algorithms with weakly labeled pneumonia chest X-ray data for COVID-19 detection. MedRxiv. https://doi.org/10.1101/2020.05.04.20090803

[60] Zhao, J., Zhang, Y., He, X., Xie, P. (2020). Covid-CT-dataset: A CT scan dataset about covid-19. arXiv.

[61] He, X., Yang, X., Zhang, S., Zhao, J., Zhang, Y., Xing, E., Xie, P. (2020). Sample-efficient deep learning for Covid-19 diagnosis based on CT scans. MedRxiv. https://doi.org/10.1101/2020.04.13.20063941

[62] Oh, Y., Park, S., Ye, J.C. (2020). Deep learning Covid-19 features on CXR using limited training data sets. IEEE Transactions on Medical Imaging, 39(8): 2688-2700. https://doi.org/10.1109/TMI.2020.2993291

[63] Hu, S., Gao, Y., Niu, Z., Jiang, Y., Li, L., Xiao, X., Wang, M., Fang, E.F., Menpes-Smith, W., Xia, J., Ye, H., Yang, G. (2020). Weakly supervised deep learning for Covid-19 infection detection and classification from CT images. IEEE Access, 8: 118869-118883. https://doi.org/10.1109/ACCESS.2020.3005510

[64] Ardakani, A.A., Kanafi, A.R., Acharya, U.R., Khadem, N., Mohammadi, A. (2020). Application of deep learning technique to manage Covid-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Computers in Biology and Medicine, 121: 103795. https://doi.org/10.1016/j.compbiomed.2020.103795

[65] Özkaya, U., Öztürk, Ş., Barstugan, M. (2020). Coronavirus (Covid-19) classification using deep features fusion and ranking technique. Big Data Analytics and Artificial Intelligence Against Covid-19: Innovation Vision and Approach, 281-295. https://doi.org/10.1007/978-3-030-55258-9_17

[66] Mobiny, A., Cicalese, P.A., Zare, S., Yuan, P., Abavisani, M., Wu, C.C., Ahuja, J., de Groot, P.M., Van Nguyen, H. (2020). Radiologist-level covid-19 detection using CT scans with detail-oriented capsule networks. arXiv Preprint arXiv: 2004.07407. https://doi.org/10.48550/arXiv.2004.07407

[67] Maghdid, H.S., Asaad, A.T., Ghafoor, K.Z., Sadiq, A.S., Mirjalili, S., Khan, M.K. (2021). Diagnosing Covid-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. In Multimodal Image Exploitation and Learning 2021, SPIE, 11734: 99-110. https://doi.org/10.1117/12.2588672

[68] Wu, X., Hui, H., Niu, M., Li, L., Wang, L., He, B., Yang, X., Li, L., Li, H., Tian, J., Zha, Y. (2020). Deep learning-based multi-view fusion model for screening 2019 novel coronavirus pneumonia: A multicentre study. European Journal of Radiology, 128: 109041. https://doi.org/10.1016/j.ejrad.2020.109041

[69] Song, Y., Zheng, S., Li, L., Zhang, X., Zhang, X., Huang, Z., Chen, J., Wang, R., Zhao, H., Chong, Y., Shen, J., Zha, Y., Yang, Y. (2021). Deep learning enables accurate diagnosis of novel coronavirus (Covid-19) with CT images. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 18(6): 2775-2780. https://doi.org/10.1109/TCBB.2021.3065361

[70] Sun, L., Mo, Z., Yan, F., Xia, L., Shan, F., Ding, Z., Song, B., Gao, W., Shao, W., Shi, F., Yuan, H., Jiang, H., Wu, D., Wei, Y., Gao, Y., Sui, H., Zhang, D., Shen, D. (2020). Adaptive feature selection guided deep forest for Covid-19 classification with chest CT. IEEE Journal of Biomedical and Health Informatics, 24(10): 2798-2805. https://doi.org/10.1109/JBHI.2020.3019505

[71] Rehman, A., Naz, S., Khan, A., Zaib, A., Razzak, I. (2022). Improving coronavirus (Covid-19) diagnosis using deep transfer learning. In Proceedings of International Conference on Information Technology and Applications: ICITA 2021. Singapore: Springer Nature Singapore, pp. 23-37. https://doi.org/10.1007/978-981-16-7618-5_3

[72] Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X., Xu, B. (2021). A deep learning algorithm using CT images to screen for corona virus disease (Covid-19). European Radiology, 31: 6096-6104. https://doi.org/10.1007/s00330-021-07715-1

[73] Yang, S., Jiang, L., Cao, Z., Wang, L., Cao, J., Feng, R., Zhang, Z., Xue, X., Shi, Y., Shan, F. (2020). Deep learning for detecting corona virus disease 2019 (Covid-19) on high-resolution computed tomography: A pilot study. Annals of Translational Medicine, 8(7). https://doi.org/10.21037/atm.2020.03.132

[74] Brunese, L., Mercaldo, F., Reginelli, A., Santone, A. (2020). Explainable deep learning for pulmonary disease and coronavirus Covid-19 detection from X-rays. Computer Methods and Programs in Biomedicine, 196: 105608. https://doi.org/10.1016/j.cmpb.2020.105608

[75] Hasan, A.M., Al-Jawad, M.M., Jalab, H.A., Shaiba, H., Ibrahim, R.W., AL-Shamasneh, A.A.R. (2020). Classification of Covid-19 coronavirus, pneumonia and healthy lungs in CT scans using Q-deformed entropy and deep learning features. Entropy, 22(5): 517. https://doi.org/10.3390/e22050517

[76] Loey, M., Manogaran, G., Khalifa, N.E.M. (2020). A deep transfer learning model with classical data augmentation and CGAN to detect Covid-19 from chest CT radiography digital images. Neural Computing and Applications, 1-13. https://doi.org/10.1007/s00521-020-05437-x

[77] Pathak, Y., Shukla, P.K., Tiwari, A., Stalin, S., Singh, S. (2022). Deep transfer learning based classification model for Covid-19 disease. IRBM, 43(2): 87-92. https://doi.org/10.1016/j.irbm.2020.05.003

[78] Singh, D., Kumar, V., Kaur, M. (2020). Classification of Covid-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. European Journal of Clinical Microbiology & Infectious Diseases, 39: 1379-1389. https://doi.org/10.1007/s10096-020-03901-z

[79] Ucar, F., Korkmaz, D. (2020). COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (Covid-19) from X-ray images. Medical Hypotheses, 140: 109761. https://doi.org/10.1016/j.mehy.2020.109761

[80] Ozturk, T., Talo, M., Yildirim, E.A., Baloglu, U.B., Yildirim, O., Acharya, U.R. (2020). Automated detection of Covid-19 cases using deep neural networks with X-ray images. Computers in Biology and Medicine, 121: 103792. https://doi.org/10.1016/j.compbiomed.2020.103792

[81] Afshar, P., Heidarian, S., Naderkhani, F., Oikonomou, A., Plataniotis, K.N., Mohammadi, A. (2020). Covid-caps: A capsule network-based framework for identification of Covid-19 cases from X-ray images. Pattern Recognition Letters, 138: 638-643. https://doi.org/10.1016/j.patrec.2020.09.010

[82] Ezzat, D., Ella, H.A. (2020). GSA-DenseNet121-Covid-19: A hybrid deep learning architecture for the diagnosis of Covid-19 disease based on gravitational search optimization algorithm. arXiv Preprint arXiv: 2004.05084. https://doi.org/10.1016/j.asoc.2020.106742

[83] Li, T., Han, Z., Wei, B., Zheng, Y., Hong, Y., Cong, J. (2020). Robust screening of Covid-19 from chest X-ray via discriminative cost-sensitive learning. arXiv Preprint arXiv: 2004.12592. https://doi.org/10.48550/arXiv.2004.12592

[84] Basu, S., Mitra, S., Saha, N. (2020). Deep learning for screening Covid-19 using chest X-ray images. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 2521-2527. https://doi.org/10.1109/SSCI47803.2020.9308571

[85] Medhi, K., Jamil, M., Hussain, M.I. (2020). Automatic detection of Covid-19 infection from chest X-ray using deep learning. MedRxiv, 2020-05. https://doi.org/10.1101/2020.05.10.20097063

[86] Khobahi, S., Agarwal, C., Soltanalian, M. (2020). Coronet: A deep network architecture for semi-supervised task-based identification of Covid-19 from chest X-ray images. MedRxiv, 2020-04. https://doi.org/10.1101/2020.04.14.20065722

[87] Sarker, L., Islam, M.M., Hannan, T., Ahmed, Z. (2020). COVID-DenseNet: A deep learning architecture to detect Covid-19 from chest radiology images. Preprints, 2020050151.

[88] El Asnaoui, K., Chawki, Y. (2021). Using X-ray images and deep learning for automated detection of coronavirus disease. Journal of Biomolecular Structure and Dynamics, 39(10): 3615-3626. https://doi.org/10.1080/07391102.2020.1767212

[89] Goodwin, B.D., Jaskolski, C., Zhong, C., Asmani, H. (2020). Intra-model variability in Covid-19 classification using chest X-ray images. arXiv Preprint arXiv: 2005.02167. https://doi.org/10.48550/arXiv.2005.02167

[90] Al-antari, M.A., Hua, C.H., Lee, S. (2020). Fast deep learning computer-aided diagnosis against the novel Covid-19 pandemic from digital chest X-ray images. Research Square. https://doi.org/10.21203/rs.3.rs-36353/v1

[91] Moutounet-Cartan, P.G. (2020). Deep convolutional neural networks to diagnose Covid-19 and other pneumonia diseases from posteroanterior chest x-rays. arXiv Preprint arXiv: 2005.00845. https://doi.org/10.48550/arXiv.2005.00845