A Scoping Review on COVID-19's Early Detection Using Deep Learning Model and Computed Tomography and Ultrasound


Ali Abdulqader Bin-Salem, Haider Dhia Zubaydi, Mahmood Alzubaidi, Zain Ul Abideen Tariq, Hamad Naeem

School of Computer Science and Technology, Zhoukou Normal University, Zhoukou 466001, China

Department of Telecommunications and Media Informatics (TMIT), Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest 1111, Hungary

College of Science and Engineering, Hamad Bin Khalifa University, Doha 3410, Qatar

Corresponding Author Email: ali@zknu.edu.cn

Pages: 205-219 | DOI: https://doi.org/10.18280/ts.390121

Received: 9 January 2021 | Revised: 2 February 2022 | Accepted: 12 February 2022 | Available online: 28 February 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Since the end of 2019, the COVID-19 outbreak has put healthcare systems worldwide on edge. In rural areas, where traditional testing is unfeasible, innovative computer-aided diagnostic approaches must deliver speedy and cost-effective screenings. Despite several studies on the use of Deep Learning (DL) to combat COVID-19, conducting a full scoping review remains essential for academics. This review examines the application of DL techniques to CT and ULS images for the early detection of COVID-19. The PRISMA literature review approach was followed, and all studies were retrieved from IEEE, ACM, Medline, and Science Direct. Performance metrics were highlighted for each study to measure the performance and conceptualization of the proposed solutions; a set of publicly available datasets was identified; and DL architectures based on more than one image modality, such as CT and ULS, were explored. Out of 32 studies, the combined U-Net segmentation and 3D classification VGG19 network had the best F1 score (98%) on ultrasound images, while ResNet-101 had the best accuracy (99.51%) on CT images for COVID-19 detection. Data augmentation techniques such as rotation, flipping, and shifting were frequently used. Grad-CAM was used in eight studies to identify anomalies on the lung surface. Our research found that transfer learning outperformed all other AI-based prediction approaches. Using a U-Net with a predefined backbone, such as VGG19, a practical computer-assisted COVID-19 screening approach can be developed. More collaboration is required from healthcare professionals and the computer science community to provide an efficient deep learning framework for the early detection of COVID-19.

Keywords: 

COVID-19, deep learning, computed tomography (CT), ultrasound (ULS), early detection

1. Introduction

1.1 Background

According to the World Health Organization, the first case of infection of the novel coronavirus was reported in December 2019, and metagenomic next-generation sequencing was used to identify the virus [1]. Since then, people’s lives have been profoundly altered by the virus, which spread rapidly and triggered an unprecedented public health crisis. COVID-19 has a wide range of symptoms, including fatigue, dry cough, and fever as well as less common symptoms such as skin rashes, headaches, loss of smell or taste, sore throat, and aches and pains [2]. Situation reports were published to monitor the spread of COVID-19 for a specific period, such as [3]. The case fatality rate of COVID-19 is between 8% and 15%, and elderly individuals are deemed to be high risk [4]. To combat COVID-19, scientists and researchers have explored and deployed a wide range of new technologies to stop the spread of the virus.

Researchers are currently exploring the potential application of cutting-edge technologies such as artificial intelligence (AI), big data, and the Internet of Things (IoT) [5]. The IoT is a network of connected devices, from the smartphone on your desk to machines and buildings, fitted with sensors that continuously collect huge amounts of data [6]. Meanwhile, big data are the information gathered by such linked devices over time. Owing to the vast amount of data involved, organizations struggle to realize the potential of big data and how it can improve their business. Data must be analyzed to be useful, which is precisely the function of AI: it uses algorithms to analyze the data created by the devices in the IoT. As this study focuses on the analysis of data rather than on the limitations of IoT communication or big data, its emphasis is on AI.

Although AI is still developing, in the fight against emerging diseases [7, 8] some AI technologies can make predictions and projections before a virus reaches its full potential. For decades, academics have focused on AI owing to its potential to revolutionize healthcare [9]. AI can be applied in a variety of ways to accomplish a wide range of activities and is considered a general-purpose technology. In the healthcare industry, AI is extremely beneficial, as it enables rapid decision making and response procedures, two of the most critical aspects of any system created for the industry [10]. AI can also play a major role in alleviating the burden that large numbers of infected patients place on clinics and hospitals [11].

Machine learning (ML) is a branch of AI that focuses on methods that allow computers to infer complicated correlations or patterns from empirical data without being programmed explicitly. ML is utilized in a variety of healthcare applications, including medical imaging diagnostics, illness diagnosis, smart health records, remote health monitoring, and clinical trials and research. Deep learning (DL), a derivative of ML, is the focus of this study. A specific healthcare task can be accomplished by a vast neural network trained with DL on enormous amounts of historical data. DL is extremely useful when utilizing computed tomography (CT) and ultrasound (ULS), as it can help segment regions of interest, which aids in detecting COVID-19 [12]. Preventing the spread of the virus requires precise knowledge of the host response and dynamics [13], and acquiring such knowledge is the first step in creating an effective DL approach. Learning from existing and simple representations to solve complex problems is one of the most important features of DL [14]. DL learns through deep neural networks and can identify the correct representations needed to produce accurate results. This aspect is important, because it can help predict future COVID-19 patterns [15].

1.2 Research problem and aim

As COVID-19 manifests as a wide range of clinical symptoms, the testing procedure for diagnosing an infection should be able to distinguish the virus from a variety of other viruses that produce similar symptoms. Polymerase chain reaction (PCR) and antibody testing are the two most common procedures used worldwide to screen patients for COVID-19 infection, though both approaches have limitations [16]. The PCR approach is prone to yielding a high number of false negatives and requires a lengthy testing period, whereas antibody testing yields a substantial number of false positives. Thus, scientists were compelled to find alternative means for accurate and fast detection as well as an automated procedure requiring minimal human intervention, with the goal of shortening testing time and enhancing accuracy. CT scans, chest radiograph images, and other clinical methods are also used in assessing the severity of a COVID-19 infection. As a result, hospitals can admit only critically ill individuals and provide them with necessary treatment, such as oxygen and ventilator support [17].

Numerous approaches using AI to combat COVID-19 and clinical healthcare applications are discussed in the studies [18-21]. For additional information on AI applications and DL, refer to the studies [22-25]. Various AI data systems can be managed to obtain accurate forecasts that can benefit the healthcare environment and other public health stakeholders. Moreover, patient data can be analyzed, segmented, augmented, scaled, normalized, sampled, aggregated, and sifted in various stages.

Recently, the speed of DL research has accelerated in response to the COVID-19 pandemic. Hybrid AI models are effective in detecting COVID-19, as demonstrated by the model developed in the study [26]. The effectiveness of AI applications in the medical imaging domain was demonstrated in a variety of studies aiming to diagnose various diseases, including brain tumors from MR images [27], different types of Parkinson’s disease from EEG and medical images [28], breast cancer from mammography exams [29, 30], and pneumonic diseases such as COVID-19 from X-rays and CT scans [31]. Recently, DL raised expectations for many AI image-processing applications by matching human-level precision [32] in a variety of tasks, such as classification, segmentation, and object recognition [33].

Conducting a full scoping review is important for academics despite the publication of several studies on the use of DL to combat COVID-19. In this study, a scoping review is conducted on studies published between April 2020 and December 2020 to examine the application of DL techniques to CT and ULS images for the early detection of COVID-19. Several studies have focused on COVID-19 detection; however, the majority examined X-ray-based procedures [34-38]. Owing to their promising results, this scoping review focuses on DL methods applied to CT and ULS images for detecting COVID-19.

As shown in Figure 1, the DL classification process is divided into three main steps: the preprocessing and enhancement of each input sample, the extraction of input features, and classification. DL enables the processing of large amounts of data while minimizing the need for human intervention and produces accurate conclusions. Transfer learning and convolutional neural networks (CNNs) are the two main elements used in DL to detect COVID-19. Transfer learning is the process of applying knowledge gained from one training task to another. CNNs are multilayer artificial neural networks in which each layer consists of multiple neurons, loosely analogous to neurons in the human brain.
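The transfer-learning idea described above — a frozen, pretrained feature extractor with only a new classification head trained on the target data — can be sketched in a few lines. The example below is a minimal NumPy illustration, not any reviewed study's implementation: the fixed random projection is a hypothetical stand-in for a real pretrained backbone such as VGG19.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone (e.g., VGG19 convolutional
# layers): a fixed projection from raw pixels to feature space.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen feature extractor: never updated during training."""
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(X, y, lr=0.1, epochs=200):
    """Train only the new classification head (the transfer-learning step)."""
    F = extract_features(X)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(F @ w + b)
        w -= lr * (F.T @ (p - y)) / len(y)   # gradient of logistic loss
        b -= lr * np.mean(p - y)
    return w, b

# Toy "images": two classes separated by a mean shift in pixel space.
X = np.vstack([rng.normal(0.0, 1.0, (50, 64)),
               rng.normal(1.0, 1.0, (50, 64))])
y = np.array([0] * 50 + [1] * 50)

w, b = train_head(X, y)
preds = (sigmoid(extract_features(X) @ w + b) > 0.5).astype(int)
accuracy = np.mean(preds == y)
```

In the reviewed studies, the frozen part would be a network such as VGG19 pretrained on a large dataset, and only the head would be retrained on CT or ULS images.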

To produce a complete and unique review, CNNs, transfer learning, datasets, and segmentation, augmentation, visualization, and assessment techniques are examined in this study. Rather than identifying only current research implications, the aforementioned techniques should concentrate on real-world assessments based on large-scale deployment to highlight the limits of AI and DL. As a starting point, this work examines the use of DL for COVID-19 detection.

The paper is organized as follows: Section 2 explains the methods used in this paper: sources, search terms, eligibility, data synthesis and extraction, and the selection process. Section 3 discusses the primary information for all papers, followed by dataset characteristics, data segmentation, augmentation, visualization, evaluation metrics, and validation methods. Section 4 describes the principal results, future work, research implications, strengths, and limitations of this review. Section 5 highlights the research agenda, and the conclusion is presented in Section 6.

Figure 1. Deep learning vs machine learning

2. Methods

This scoping review was performed according to the standards of the PRISMA Extension for Scoping Reviews (PRISMA-ScR) to guarantee its openness and reliability [39]. PRISMA-ScR is the most popular and thorough set of scoping guidelines and is strongly endorsed by Cochrane and the Joanna Briggs Institute (JBI) [40]. The following sections describe the procedure of this review.

2.1 Search strategy through sources and terms

In this review, MEDLINE, IEEE Xplore, Science Direct, and the ACM Digital Library were used as the main databases. Because very few papers discuss ultrasonography, the search focused primarily on computer science databases, and MEDLINE was included as a medical database to address this limitation.

Specified search keywords were utilized to differentiate between related and unrelated research in the target databases. The selected keywords were "artificial intelligence," "AI," "machine learning," "ML," "deep learning," and "DL," combined with "COVID-19" and "Coronavirus" as the target illness. All retrieved studies are listed in Appendix A.

2.2 Study eligibility criteria

Numerous articles were obtained for this review, including related and unrelated studies. The unrelated studies were excluded from the review procedure. The original idea of this review was to collect different publications for each method to create comprehensive research; however, the number of papers on CT and ULS proved sufficient for one review. Thus, the main focus of this review is the use of CT and ULS images for the detection of COVID-19 in its early stages. Only articles published between April 2020 and December 2020 were included, because most studies were published during this period, as shown in the bibliometrics in Figure 2. However, two retrieved papers that were accepted in late 2020 but published in January 2021 were also included in this review. The detailed basic information of the publications is described in the succeeding sections. Peer-reviewed articles and conference proceedings were included; reviews, conference abstracts, proposals, and preprinted studies were excluded. The validation and evaluation methods were described to determine the efficiency of each proposed model. An Excel sheet was used to manage the data synthesis and describe the dataset source used (e.g., public or private). This review focused on DL regardless of whether the model was built from scratch or involved transfer learning. This study answers the following questions:

  1. How much has the use of AI, ML, and DL improved the regular diagnostic procedures for COVID-19?
  2. What modalities may be utilized in conjunction with DL to assist in detecting and diagnosing COVID-19?
  3. Was DL able to address the weaknesses of existing diagnostic methods?
  4. How effectively do the various kinds of DL and their architectures promote the diagnosis of COVID-19 compared with one another?

2.3 Data extraction and data synthesis

The data extraction form is presented in Appendix B. The following information was extracted from the retrieved studies: 1) model type, 2) datasets for model training and testing, and 3) DL model validation and evaluation. The narrative method was used to synthesize the structured data. The DL models in the retrieved studies were classified and defined according to their imaging technique (i.e., CT or ULS), DL branch (e.g., CNN or VGG), and dataset source (e.g., public or private). In addition, the procedures for validating and evaluating each model were provided in order to establish its effectiveness. The data synthesis was managed using an Excel sheet.

Figure 2. Bibliometrics figure of the scope review

3. Results

3.1 Search results

A total of 457 papers were collected from the aforementioned databases. After duplicates and studies with irrelevant sample populations, designs, and publication types were removed, 110 papers remained. Additional papers were excluded after full screening of the abstracts. The final number of articles after full-text screening was 32. The process is described in Figure 3.

Figure 3. Study selection process chart

Table 1. Primary information for all papers

Ref No. | Type | Name of Publisher | Month | Country | Used Model

Computed Tomography (CT)
[41] | Journal | IEEE Transactions on Medical Imaging | August | China | DeCoVNet
[42] | Journal | IEEE Access | June | China | Modified VGG
[43] | Journal | IEEE Transactions on Medical Imaging | August | China | AD3D-MIL
[44] | Journal | IEEE Journal of Biomedical and Health Informatics | October | China | 3D ResNet-18
[45] | Journal | IEEE Journal of Biomedical and Health Informatics | September | China | Redesigned COVID-Net
[46] | Conference | IEEE International Conference on Systems, Man, and Cybernetics (SMC) | October | Australia | Deep Bayesian ensembling framework based on three Bayesian ensembling classifiers
[47] | Journal | Elsevier - Engineering | October | China | Classical ResNet with location-attention mechanism
[48] | Journal | Neural Computing and Applications | October | Egypt | Classical data augmentation techniques along with CGAN
[49] | Journal | IEEE Access | November | Canada | Two-dimensional sparse matrix profile DenseNet
[50] | Journal | IEEE Journal of Biomedical and Health Informatics | August | China | AFS-DF
[51] | Conference | IEEE International Conference on Intelligent Computer Communication and Processing (ICCP) | September | Romania | AGL
[52] | Journal | Computers in Biology and Medicine | June | Iran | CAD approach
[53] | Journal | Chaos, Solitons & Fractals | November | China | Multi-scale convolutional neural network (MSCNN)
[54] | Journal | IEEE Journal of Biomedical and Health Informatics | December | China | Merged model based on significant radiomic features, DL scores, and multivariable logistic regression
[55] | Conference | 2020 International Conference on Computer, Information and Telecommunication Systems (CITS) | October | China | ResNet-18
[56] | Journal | Informatics in Medicine Unlocked | September | Brazil | Voting-based approach (EfficientNet-B0)
[57] | Conference | 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) | November | Turkey | CNN algorithm with VGG-16, ResNet, GoogleNet
[58] | Conference | 2020 International Conference on Information Science, Parallel and Distributed Systems (ISPDS) | November | China | Cross-layer connection neural network based on high-dimensional tensor
[59] | Conference | 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) | November | Turkey | Deep learning used to lessen diagnosis difficulties
[60] | Journal | Scientific Reports | November | China | Identification of viral pneumonia model
[61] | Conference | 2020 IEEE Symposium on Computers and Communications (ISCC) | October | Brazil | CNN and XGBoost
[62] | Conference | 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) | November | Turkey | Several deep learning methods: AlexNet, ResNet-18, ResNet-50, VGG, SqueezeNet, and MobileNet-v2
[63] | Journal | IEEE Transactions on Medical Imaging | May | China | Dual-sampling attention network
[64] | Conference | 2020 5th International Conference on Communication, Image and Signal Processing (CCISP) | December | China | AlexNet network
[65] | Journal | Computers in Biology and Medicine | November | France | New multitask deep learning model
[66] | Journal | Applied Soft Computing Journal | January | China | Ensemble deep learning model

Ultrasound (ULS)
[67] | Journal | IEEE Transactions on Medical Imaging | April | Italy | Reg-STN
[68] | Journal | IEEE Access | August | Australia | VGG19
[69] | Journal | Computer Vision and Pattern Recognition | September | Switzerland | Frame-based models & video-based model
[70] | Conference | 2020 5th International Conference on Communication, Image and Signal Processing (CCISP) | December | Singapore | EfficientNet
[71] | Conference | 2020 IEEE International Ultrasonics Symposium (IUS) | December | Spain | MobileNet
[72] | Journal | Image and Video Processing | May | Switzerland | POCOVID-Net

3.2 Description of the included studies

As shown in Figure 4, 21 (65.62%) papers were published in journals [41-45, 47-50, 52-54, 56, 60, 63, 65-69, 72], whereas the remaining 11 (34.38%) were published in conference proceedings [46, 51, 55, 57-59, 61, 62, 64, 70, 71]. Most of the papers (15) were from China [41-45, 47, 50, 53-55, 58, 60, 63, 64, 66], and the others came from countries such as Australia, Canada, Turkey, and Brazil. Each paper used a different model to design an effective approach, such as DeCoVNet [41], a modified VGG [42], AD3D-MIL [43], and 3D ResNet-18 [44]. However, some of the studies redesigned or improved a previous design, such as a redesigned COVID-Net [45]. Table 1 presents the detailed basic information of each paper.

Figure 4. Publication type

3.3 Characteristics of used datasets for training and testing of DL models

Three types of datasets were used in the retrieved papers: public, private, and combined. As shown in Figure 5, 53.12% of the papers used a public dataset [42, 45, 46, 48-52, 55-57, 59, 61, 62, 68, 71, 72], 37.5% used a private dataset [41, 43, 44, 47, 53, 54, 58, 60, 63, 67, 69, 70], 6.25% used a combined (i.e., public and private) dataset [65, 66], and 3.13% (one paper) did not include information on the dataset [64]. The private datasets consisted of data collected from various hospitals and organized according to each study's own design. The public datasets comprised data accompanying published papers and shared on websites such as GitHub, or released by hospitals that allow the sharing of medical data for research purposes. Most of the data in both the public and private datasets originated from China, as the country stored data during the early phases of the outbreak. AI uses such datasets for training and testing to detect and recognize different types of infection. Specifically, data are stored, labelled, and arranged hierarchically based on the type of fungus, bacteria, or virus. Works based on public datasets typically require a long processing time and high computing capabilities owing to the size of the data. Studies using private datasets have certain limitations, as they cannot be evaluated or improved by other researchers. Detailed information on the datasets used in the retrieved studies is presented in Table 2.

Figure 5. Dataset Source

Table 2. Characteristics of the used datasets

Ref No. | Dataset Source (Public or Private) | Dataset Type | Training Dataset | Testing Dataset

Computed Tomography (CT)
[41] | Private | Local hospital, Union Hospital, Tongji Medical College | 499 | 133
[42] | Public | The Cancer Imaging Archive (TCIA) Public Access [73] | 40 | 20
[43] | Private | Designated COVID-19 hospitals in Shandong | 276 | 184
[44] | Private | 10 medical centers in China | 2028 | 518
[45] | Public | SARS-CoV-2 (Kaggle) [74, 75] | N/A | N/A
[46] | Public | Obtained from [75] | 752 | 188
[47] | Private | 618 transverse-section CT samples from three hospitals | 10161 | 1710
[48] | Public | Collected from bioRxiv and medRxiv | 19050 | 796
[49] | Public | Two publicly available COVID-19 lung CT image datasets | Dataset 1: 329; Dataset 2: 11185 | Dataset 1: 69; Dataset 2: 1398
[50] | Public | Five hospitals and Shanghai Public Health Clinical Center | 2018/2016 subjects | 504/506 subjects
[51] | Public | Obtained from Refs. [75, 76, 74, 77, 78] | Per input: COVID-CT-349a: 425; HCC-Parenchyma-68: 4074 | Per input: COVID-CT-349a: 203; HCC-Parenchyma-68: 1180
[52] | Public | ImageNet dataset [68] | 816 | 102
[53] | Private | Xiangyang Central Hospital, Xiangyang No. 1 People’s Hospital | 16296 CP slices | 3816 CP slices
[54] | Private | Renmin Hospital of Wuhan University, Henan Provincial People’s Hospital, First Affiliated Hospital of Anhui Medical University | 174 | 43
[55] | Public | SARS-CoV-2 CT-scan dataset [74], COVID-CT dataset [75] | 2904 | 323
[56] | Public | SARS-CoV-2 CT-scan dataset [74], COVID-CT dataset [75] | 2635 | 659
[57] | Public | Database of chest X-ray and viral pneumonia images by a research team from Qatar, Bangladesh, Pakistan, Malaysia | Only the total number of images is mentioned: 2905
[58] | Private | N/A | 9722 | 7822
[59] | Public | A set of chest CT datasets from multi-centre hospitals with five categories [79] | 250 | 75
[60] | Private | Renmin Hospital of Wuhan University | 35,355 images selected and split into training and retrospective testing datasets
[61] | Public | CT images developed by [75], collected from medRxiv, bioRxiv, NEJM, JAMA, and Lancet | 708 CT images in total
[62] | Public | A set of chest CT datasets from multi-centre hospitals with five categories [79] | 927 | 112
[63] | Private | Seven hospitals and Shanghai Public Health Clinical Center | 2186 | 2796
[64] | N/A | N/A | N/A | N/A
[65] | Public and private | Three datasets from different hospitals: [75, 80] and the Henri Becquerel Cancer Center (HBCC) in Rouen, France | 1069 | 150
[66] | Public and private | Previous publications, authoritative media reports, and public databases | 6000 | 1500

Ultrasound (ULS)
[67] | Private | Italian COVID-19 Lung Ultrasound Database (ICLUS-DB) from 5 local Italian hospitals; extended, fully annotated version of [81] | 1005 | 426
[68] | Public | POCOVID (GitHub) [82] | N/A | N/A
[69] | Private | Designed by the authors | 139 | 66
[70] | Private | Self-made LUS datasets | 2608 | 288
[71] | Public | GrepMed, The POCUS Atlas, Butterfly iQ, EMCrit project, Twitter | 9539 | 3179
[72] | Public | GitHub (Covid19_ultrasound) [83] | 1103 images | 64 videos

Table 3. Segmentation, augmentation, and visualization methods used

Ref No. | Data Segmentation | Data Augmentation | CAM / Grad-CAM Visualization

Computed Tomography (CT)
[41] | U-Net | Random affine, color jittering | CAM
[42] | U-Net | Cropping, rotation, reflection, contrast adjustment | CAM
[43] | N/A | Random affine, color jittering | CAM
[44] | N/A | Not specified | N/A
[45] | N/A | Cropping, flipping | Grad-CAM
[46] | N/A | Cropping, padding, horizontal flipping, and other minor alterations | N/A
[47] | Three-dimensional (3D) CNN | Generic data-expansion mechanisms: random clipping, left-right flipping, up-down flipping, and mirroring | N/A
[48] | N/A | Rotation, shifting, flipping, zooming, transformation, noise addition | N/A
[49] | N/A | N/A | N/A
[50] | VB-Net | N/A | N/A
[51] | N/A | N/A | Grad-CAM
[52] | N/A | N/A | N/A
[53] | N/A | Rotation, cropping, shifting, flipping, zooming | N/A
[54] | Automated segmentation algorithm | N/A | Grad-CAM
[55] | N/A | Cropping, horizontal flipping | CAM
[56] | N/A | Rotation, horizontal flipping, scaling | N/A
[57] | N/A | N/A | N/A
[58] | N/A | N/A | N/A
[59] | N/A | Rotation, cropping | N/A
[60] | U-Net | N/A | N/A
[61] | N/A | Tree augmentation algorithm | N/A
[62] | N/A | Rotation, cropping | N/A
[63] | VB-Net | N/A | Grad-CAM
[64] | N/A | N/A | N/A
[65] | Automatic classification segmentation tool | Translation and rotation | N/A
[66] | N/A | N/A | N/A

Ultrasound (ULS)
[67] | Ensemble model | Sampling, rotation, scaling, shearing, blurring, flipping, additive noise | Grad-CAM
[68] | Combined U-Net segmentation and 3D classification CNN | Rotation, flipping, shifting | N/A
[69] | Pre-trained segmentation models (Segment-Enc) | Horizontal and vertical flips, rotations up to 10 degrees, translations up to 10% | CAM
[70] | N/A | Random cropping, horizontal flipping | N/A
[71] | CT dataset with fine-grained pixel-level annotations | N/A | N/A
[72] | N/A | Keras ImageDataGenerator (in-place augmentation) | N/A

Table 4. Evaluation and validation methods

Methods | Definition | Number of studies

Evaluation
Accuracy | (TN + TP)/(TN + TP + FN + FP) | N = 30
Precision | TP/(TP + FP) | N = 18
Recall / Sensitivity | TP/(TP + FN) | N = 30
F1 score | 2(Precision × Recall)/(Precision + Recall) | N = 18
Specificity | TN/(TN + FP) | N = 20
Cohen’s kappa | (p0 - pe)/(1 - pe) | N = 2

Validation
k-fold cross-validation | The dataset is split into k folds; each fold serves once as the test set while the remaining k - 1 folds are used for training | N = 13

Abbreviations: TP: True Positive; TN: True Negative; FP: False Positive; FN: False Negative; p0: observed agreement; pe: expected agreement; N: number of studies.

3.4 Data segmentation, augmentation, and visualization

Data segmentation, the process of partitioning a single image into multiple segments, is an important step for achieving high accuracy, as it improves analysis and recognition precision for infected parts. The segmentation techniques employed in the retrieved papers included U-Net [41, 42, 60, 68], VB-Net [50, 63], and other methods such as Segment-Enc, automatic classification segmentation tools, and automated segmentation algorithms. However, 62.5% of the papers did not describe a segmentation method. Augmentation is also useful, as it increases the amount of relevant data in a dataset; augmentation methods include random affine transformation, color jittering, cropping, rotation, flipping, zooming, and noise addition. Among the 32 studies, 20 used at least one data augmentation technique. Finally, visualization techniques can help in identifying infected parts; two such methods appeared in this review, CAM and Grad-CAM. Information on the data segmentation, augmentation, and visualization techniques is listed in Table 3. Among the 32 studies, only four considered the CAM technique, whereas five considered the Grad-CAM technique. The remaining 23 studies did not employ a visualization technique.
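As an illustration, the augmentation operations named above (rotation, flipping, shifting) and a CAM-style heatmap reduce to simple array operations. This NumPy sketch uses a toy image, and the feature maps and class weights are hypothetical stand-ins for the activations of a trained CNN:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)  # toy single-channel "image"

# --- Common augmentations listed in Table 3 ---
flipped = np.flip(img, axis=1)            # horizontal flip
rotated = np.rot90(img, k=1)              # 90-degree rotation
shifted = np.roll(img, shift=1, axis=0)   # vertical shift (with wrap-around)

# --- CAM-style visualization: class-weighted sum of feature maps ---
# In a real model, feature_maps come from the last convolutional layer and
# class_weights from the final dense layer; here both are random stand-ins.
rng = np.random.default_rng(0)
feature_maps = rng.normal(size=(8, 4, 4))  # 8 feature maps, 4x4 each
class_weights = rng.normal(size=8)         # one weight per feature map

cam = np.tensordot(class_weights, feature_maps, axes=1)  # weighted sum
cam = np.maximum(cam, 0.0)                               # keep positive evidence
cam = cam / cam.max() if cam.max() > 0 else cam          # normalize to [0, 1]
```

Libraries used in the reviewed studies (e.g., Keras ImageDataGenerator) wrap exactly this kind of transformation, applied randomly during training.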

3.5 Evaluation metrics and validation

This section explains the metrics and validation methods used in this review. Based on the retrieved papers, only the most relevant and clearest metrics were chosen: accuracy, precision, recall/sensitivity, F1 score, specificity, and Cohen’s kappa. An important factor in the validation process, k-fold cross-validation, was also included. The reasons for the use of k-fold cross-validation in the retrieved papers were as follows: 1) to make predictions on the data for training and testing and for multiclass problems, 2) to obtain other metrics and draw important conclusions on algorithms and data, 3) to work with dependent/grouped data, and 4) to fine-tune parameters. Each of the selected metrics was calculated based on the equations listed in Table 4, with abbreviations for all the terms. The validation method is also defined in the same table. Different k-fold cross-validations were used, such as four folds, 10 folds, and the most commonly used five folds. These assessment measurements were used to ensure the efficiency of the models in detecting COVID-19.
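The formulas in Table 4 translate directly into code. The following sketch (plain Python, with made-up confusion-matrix counts purely for illustration) computes the listed metrics and generates k-fold train/test index splits:

```python
def metrics(tp, tn, fp, fn):
    """Evaluation metrics from Table 4, computed from confusion-matrix counts."""
    accuracy = (tn + tp) / (tn + tp + fn + fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return accuracy, precision, recall, f1, specificity

def cohens_kappa(p0, pe):
    """Cohen's kappa, agreement beyond chance: (p0 - pe)/(1 - pe)."""
    return (p0 - pe) / (1 - pe)

def kfold_indices(n_samples, k):
    """Yield (train, test) index lists; each fold is the test set exactly once,
    with the remaining k-1 folds used for training."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

# Illustrative (hypothetical) counts, not taken from any reviewed study:
acc, prec, rec, f1, spec = metrics(tp=90, tn=85, fp=5, fn=10)
kappa = cohens_kappa(p0=0.9, pe=0.5)
splits = list(kfold_indices(n_samples=10, k=5))
```

With these counts the metrics behave as expected: recall penalizes the 10 false negatives, precision the 5 false positives, and the F1 score balances the two.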

4. Discussion

4.1 Principal results and analysis

This scoping review discusses studies on the detection of COVID-19 using DL based on CT and ULS imaging published between April 2020 and December 2020. It shows that most of the data and papers were obtained from and published in China, as it was the first country to identify and confront the virus.

Most of the papers were published in IEEE and ScienceDirect venues and focused on CT imaging. All the proposed approaches reported excellent results. However, the proposed approaches are at risk of bias: when the designs are implemented in a real-time environment, they are expected to obtain results lower than those described in the papers, because some of the approaches have yet to be tested in a real environment. Most of the datasets used are small and do not reflect ideal results owing to the shortage of public datasets.

Although numerous studies on the early detection of COVID-19 have been published, we have yet to witness real-time experiments. This observation creates doubt about whether the designs will work properly. Furthermore, some researchers chose other disciplines to achieve the same goals owing to misgivings stemming from a lack of understanding of how DL works in real time when dealing with COVID-19. Comparing the different approaches would be nearly impossible because of differences in the datasets used, testing environments, and validation methods. Some of the methods were built on top of other designs, making them comparable to only certain types. Using DL to develop an early detection mechanism requires large amounts of data to obtain acceptable results. Training a DL model is expensive owing to the complexity of the data model, as it requires high computing power and GPUs. Researchers can investigate this issue in future works and include it in their results to create a comprehensive environment for future research in this field. Recurrent neural networks (RNNs) and reinforcement learning can also be considered in future works.

In this review, we find that 87.5% of the retrieved studies disclosed how the training–testing dataset was split, and 40.6% implemented validation methods, whereas 59.4% did not mention how the validation was conducted. In addition, as shown in Table 5, none of the studies covered all six relevant evaluation metrics, 31.25% covered five evaluation metrics, 28.125% covered four evaluation metrics, 37.5% covered three evaluation metrics, none covered two evaluation metrics, and 3.125% covered only one evaluation metric. Furthermore, 93.75%, 62.5%, 62.5%, 96.87%, 62.5%, and 6.25% of the studies used the accuracy, F1 score, precision, recall, specificity, and Cohen’s kappa, respectively, which indicates the significance of recall and accuracy compared with the other metrics. The kappa metric seems to have the least evaluation value for researchers. Several studies also used additional ROC, AUC, and PPV metrics to improve their evaluations.
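Because these six metrics recur throughout the retrieved studies, it may help to recall that all of them can be derived from a single binary confusion matrix (TP, FP, TN, FN). A minimal Python sketch (our own illustration, not code from any reviewed study):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Compute the six evaluation metrics tallied in this review
    from the entries of a binary confusion matrix."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    p_e = p_yes + p_no
    kappa = (accuracy - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "kappa": kappa}
```

For example, a screening model with 90 true positives, 10 false positives, 85 true negatives, and 15 false negatives yields 87.5% accuracy, 90% precision, and a kappa of 0.75.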

The best results were achieved by [68], which employed ULS imaging, with a 98% F1 score, 99% precision, and 97% sensitivity (recall). The study used a VGG-19 model trained and tested on a public dataset named POCOVID from GitHub [82].

The best results with a CT scan approach were achieved by [52], with the following evaluation metric values: 99.51% accuracy, 98.04% sensitivity, and 100% specificity. The study used a ResNet-101 model trained and tested on the ImageNet public dataset [68] using the CAD approach. The results of this model are also the overall best among the models in the retrieved studies. Another study [57] that employed ResNet-50, VGG-16, and GoogleNet also obtained remarkable results, specifically, a 96.91% accuracy, 97% F1 score, 98% precision, 97.73% sensitivity, and 100% specificity. The DL model in the study was trained and tested on a public dataset of chest X-ray images and viral pneumonia images by a research team from Pakistan, Qatar, Bangladesh, and Malaysia. Interestingly, neither of the two studies used k-fold cross-validation. A study [49] that employed a two-dimensional sparse matrix DenseNet-201 model on two publicly available COVID-19 lung CT image datasets came close to the best results, with a 97.88% accuracy, 98.56% F1 score, 97.99% precision, and 99.14% sensitivity. The researchers in [58] showed comparable results by using a cross-layer-connection neural network DenseNet-121 model with a high tensor dimension of 16 on a private dataset, with a 92% accuracy, 95% F1 score, 90% precision, and 99% sensitivity. These findings indicate which DL models to consider in future research in this area: the best models are ResNet-101, ResNet-50, VGG-16, GoogleNet, DenseNet-201, and DenseNet-121. Interestingly, referring to Table 5, it can be seen that none of the four papers that achieved the best results with the CT approach used any data preprocessing techniques [49, 52, 57, 58]. However, the paper that achieved the best results with the ULS approach [68] used a segmentation technique (a combined U-Net segmentation and 3D classification CNN) and data augmentation techniques such as rotation, flipping, and shifting.
These findings also show us that the best results were achieved by the models trained on publicly available datasets.
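The geometric augmentations used by the best ULS approach (rotation, flipping, and shifting) can be illustrated on a toy nested-list image; a real pipeline would apply the same transforms with an image library bundled with a DL framework. This sketch is ours and only shows the geometry:

```python
def rotate90(img):
    """Rotate an image (a list of pixel rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def flip_horizontal(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def shift_right(img, pixels, fill=0):
    """Shift content right by `pixels`, padding vacated columns with `fill`.
    Assumes 0 <= pixels < row width."""
    return [[fill] * pixels + row[:-pixels] if pixels else row[:] for row in img]
```

Each transform yields a new, label-preserving training sample, which is why such augmentation is attractive when, as noted above, public COVID-19 datasets are small.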

4.2 Research implications and future work

To reduce researchers’ confusion, the scientific community and developers should agree on a standard protocol for conducting COVID-19 research, such as gathering appropriate datasets from various medical centers, including a variety of images for each patient. Improvements should be made in the dataset preprocessing phase, such as switching from U-Net to FC-DenseNet103 for segmentation. Furthermore, data augmentation was completely excluded or yielded negligible results in the reviewed studies. Scientists in countries with limited resources should focus on developing lightweight models for COVID-19 detection [84]. Instead of X-rays and CT scans, researchers should pay more attention to ULS images. However, some of the studies showed that X-rays and CT scans can be used successfully, which requires further investigation. In order to detect COVID-19 and create a solid prototype that can detect various types of diseases from images, all the evaluation metrics should be used. Furthermore, RNNs and reinforcement learning have yet to be used in the field of COVID-19 detection, which could be a promising direction for future research.

Image classification and object detection are utilized in ULS and CT procedures for real-time tumor segmentation, disease diagnosis, and prediction. DL models can generate reasonable interpretations by combining several image data components, such as tissue size, volume, and shape, to provide a complete view of a particular medical issue. Such models are capable of highlighting crucial areas in medical images. For instance, they are utilized to diagnose diabetic retinopathy and early onset of Alzheimer’s disease and detect breast lumps in ULS images. Beyond imaging, DL also plays a significant role in drug and vaccine discovery; thus, the contributions of DL models in drug discovery and interaction prediction are becoming increasingly important. DL can analyze genetic, clinical, and demographic data in real time and find potential medication combinations for clinical trials. Pharmaceutical researchers can take advantage of DL toolkits to focus on patterns in massive datasets, which will allow them to make effective decisions. Furthermore, the application of DL models has gained popularity owing to the COVID-19 pandemic. Researchers have begun to investigate DL applications for various purposes, including the detection of COVID-19 through the use of different medical image modalities, prediction of intensive care unit admissions, identification of patients at high risk for COVID-19, calculation of requirements for mechanical ventilation, drug development, and vaccine discovery and testing [85].

4.3 Strengths and limitations

4.3.1 Strengths

To the best of our knowledge, this scoping review is the first to discuss CT and ULS imaging with such findings. We present a comprehensive review that can be used to employ DL for COVID-19 detection. In addition, we provide researchers with detailed results on all the works conducted during the period examined in this research. We summarize the information from the reviewed papers, including their primary information, datasets, and different segmentation, augmentation, and visualization methods. In addition, our review includes different metrics and definitions, with detailed results of all the papers collected from the most common databases. Furthermore, our research follows the PRISMA-ScR guidelines.

4.3.2 Limitations

This scoping review cannot be considered fully up to date, because new publications on COVID-19-related AI and DL models are rapidly entering the medical literature. This work also covers only peer-reviewed articles and conference proceedings, for a total of 32 approaches. It does not mention other works, such as preprinted studies, proposals, and conference abstracts. Other papers may have been accepted but had yet to be published when this paper was written. Moreover, this review covers only papers on Medline, ScienceDirect, IEEE, and the ACM Digital Library, retrieved with specific search terms, which can be useful for finding related studies. Three of the four databases are prominent in the field of computer science, and the remaining database is prominent in the field of medicine. All the information found and analyzed by the authors was based on what the related studies reported; thus, the findings of this review may be affected accordingly.

Table 5. Evaluation metrics and detailed results

| Ref No. | Evaluation Metrics | Accuracy | F1-Score | Precision | Sensitivity or Recall | Specificity | Kappa | K-fold Cross-Validation |
|---|---|---|---|---|---|---|---|---|
| **Computed Tomography (CT)** | | | | | | | | |
| [41] | Accuracy, ROC, precision, recall curve, FLOPs | 90% | N/A | 97% | 95% | 95% | N/A | N/A |
| [42] | Accuracy, precision, sensitivity, specificity | 94% | N/A | 95% | 93% | 93% | N/A | Five-fold |
| [43] | Accuracy, F1 score, precision, recall, Cohen kappa score, ROC, AUC | 97% | 97% | 97% | 97% | N/A | 95% | Five-fold |
| [44] | F1 score, precision, recall | N/A | 90% | 97% | 84% | N/A | N/A | N/A |
| [45] | Accuracy, F1 score, sensitivity, precision, AUC | 90% | 90% | 95% | 85% | N/A | N/A | Four-fold |
| [46] | Accuracy, sensitivity, specificity, precision, F1 score, ROC | Anchored ensembling: 81.4%; regularized: 81.9%; unconstrained: 82.76% | Anchored ensembling: 84.33%; regularized: 84%; unconstrained: 84.33% | Anchored ensembling: 83.66%; regularized: 83.66%; unconstrained: 84.33% | Anchored ensembling: 85.33%; regularized: 83.1%; unconstrained: 88.5% | Anchored ensembling: 81.33%; regularized: 80.33%; unconstrained: 78.33% | N/A | N/A |
| [47] | Accuracy, F1 score, precision, recall | Overall (mean): 86.7% | Overall (mean): 86.7% | Overall (mean): 86.87% | Overall (mean): 86.67% | N/A | N/A | N/A |
| [48] | Accuracy, sensitivity, specificity, precision, F1 score | 82.91% | ResNet50 with augmentation: >80% | ResNet50 with augmentation: 80% | 77.66% | 87.62% | N/A | N/A |
| [49] | Accuracy, precision, recall, AUC, F1 score | DenseNet201, Dataset 1: 78.07%; Dataset 2: 97.88% | DenseNet201, Dataset 1: 71.19%; Dataset 2: 98.56% | DenseNet201, Dataset 1: 69.30%; Dataset 2: 97.99% | DenseNet201, Dataset 1: 73.19%; Dataset 2: 99.14% | N/A | N/A | N/A |
| [50] | Accuracy, sensitivity, specificity, AUC | 91.79% | 93.07% | 93.10% | 93.05% | 89.95% | N/A | Five-fold |
| [51] | Accuracy, precision, recall, F1 score, AUC | Xception on COVID-CT-349a: 87.74% | Xception on COVID-CT-349a: 86.59% | Xception on COVID-CT-349a: 91% | Xception on COVID-CT-349a: 82.60% | N/A | N/A | N/A |
| [52] | Accuracy, sensitivity, specificity, PPV, NPV | ResNet-101: 99.51% | N/A | N/A | ResNet-101: 98.04% | ResNet-101: 100% | N/A | N/A |
| [53] | Accuracy, sensitivity, specificity, AUC | 97.7% | N/A | N/A | 99.5% | 95.6% | N/A | N/A |
| [54] | Threshold, accuracy, sensitivity, specificity, PR-AUC, AUC, CI, Youden index | Merged model: 81.4% | N/A | N/A | Merged model: 87.5% | Merged model: 77.8% | N/A | Five-fold |
| [55] | Accuracy, precision, sensitivity, specificity, F1 score, AUC | 94.3% | 94.2% | 97.1% | 91.4% | 97.3% | N/A | N/A |
| [56] | Accuracy, sensitivity, COVID-19 + PC, F1 score, AUC | EfficientNet-B0 (highest): 87.68% | EfficientNet-B0 (highest): 86.19% | N/A | EfficientNet-B0 (highest): 83.67% | N/A | N/A | Five-fold |
| [57] | Accuracy, sensitivity, specificity, precision, recall, F1 score | ResNet-50 (highest): 96.9% | ResNet-50 (highest): 97% | ResNet-50 (highest): 98% | VGG-16 (highest): 97.73% | ResNet-50 and GoogleNet (highest): 100% | N/A | N/A |
| [58] | Accuracy, precision, recall, F1 score, AUC | DenseNet-121, tensor dimension 16: 92% | DenseNet-121, tensor dimension 16: 95% | DenseNet-121, tensor dimension 16: 90% | DenseNet-121, tensor dimension 16: 99% | N/A | N/A | N/A |
| [59] | AUC, accuracy, sensitivity, specificity | ResNet-18 (highest): 89% | N/A | N/A | ResNet-18 (highest): 98% | ResNet-18 (highest): 86% | N/A | N/A |
| [60] | Accuracy, sensitivity, specificity, PPV, NPV | Mean (retrospective and prospective datasets): 95.67% | N/A | N/A | Mean: 98.08% | Mean: 92.13% | N/A | N/A |
| [61] | Accuracy, precision, recall, F1 score, AUC, kappa index | 95.07% | 95% | 94.99% | 95.09% | N/A | 90% | Five-fold |
| [62] | AUC, accuracy, sensitivity, specificity | Refer to reference | N/A | N/A | Refer to reference | Refer to reference | N/A | N/A |
| [63] | AUC, accuracy, sensitivity, specificity, F1 score | 87.5% | 82% | N/A | 86.9% | 90.1% | N/A | Five-fold |
| [64] | Accuracy, precision, recall | 90.90% (based on 100 training runs) | N/A | 74.36% (based on 100 training runs) | 71.31% (based on 100 training runs) | N/A | N/A | N/A |
| [65] | Dice coefficient, accuracy, sensitivity, specificity, AUC | 94.67% | N/A | N/A | 96% | 92% | N/A | N/A |
| [66] | Accuracy, sensitivity, specificity, F value, Matthews correlation coefficient | Refer to reference | Refer to reference | N/A | Refer to reference | Refer to reference | N/A | Five-fold |
| **Ultrasound (ULS)** | | | | | | | | |
| [67] | Accuracy, F1 score, precision, recall | 96% | N/A | N/A | N/A | N/A | N/A | Five-fold |
| [68] | N/A | N/A | 98% | 99% | 97% | N/A | N/A | N/A |
| [69] | Accuracy, recall, precision, F1 score, specificity, MCC | Frame-based: 90%; mean: 88% | Frame-based: >93% | Frame-based: >93% | Frame-based: >93% | N/A | N/A | Five-fold |
| [70] | Accuracy, feasibility, sensitivity, specificity | 3 clinical stages: 94.62%; 4 stages: 91.18%; 8 stages: 82.75% | 3 clinical stages: 93.2%; 4 stages: 89.9%; 8 stages: 81.6% | 3 clinical stages: 93.2%; 4 stages: 89.9%; 8 stages: 81.6% | 3 clinical stages: 93.2%; 4 stages: 89.9%; 8 stages: 81.6% | 3 clinical stages: 96.6%; 4 stages: 96.6%; 8 stages: 97.4% | N/A | 10-fold |
| [71] | Accuracy, ZeroRule classifier | Mean: 97.07% | N/A | N/A | Mean: 97.20% | Mean: 95.63% | N/A | N/A |
| [72] | Accuracy, sensitivity, specificity, precision, F1 score, frames | 89% | 92% (COVID-19 detection) | 88% (COVID-19 detection) | 96% (COVID-19 detection) | 79% (COVID-19 detection) | N/A | Five-fold |

5. Research Agenda

For this research agenda, we carefully investigated 32 studies. We realized that limitations persisted in the application of DL to COVID-19 detection via CT and ULS images and state-of-the-art techniques. Although the solutions proposed in the studies show promising results, they are currently in the maturation stage, with tenuous outcomes for actual clinical applications. As a contribution, we outlined a preliminary but well-founded research agenda to fill the research gaps, including studies that achieved the following:

  1. Designated objective metrics, which will allow researchers to measure the performance and conceptualization of the proposed solutions.
  2. Appointed a set of publicly available datasets to encourage the creation and sharing of datasets among researchers and healthcare professionals.
  3. Increased the impact of generated COVID-19 images annotated with crowdsourced tools, particularly for individuals involved in medical imaging processes and work.
  4. Explored DL architectures based on more than one image modality, such as models effective for CT or ULS images or a combination of both techniques.
  5. Highlighted several research implications and future work directions for researchers and healthcare professionals.
6. Conclusion

This scoping review included 32 studies on the use of DL for early COVID-19 detection. The findings of our research showed that this field is gaining traction, and some studies demonstrated that the method is highly accurate and effective. Our study examined various aspects of DL, including CNNs and transfer learning, and the primary information of the retrieved papers, including the datasets used; different segmentation, augmentation, and visualization methods; and evaluation metrics, to create a comprehensive and unique review. To highlight the limitations of AI and DL in this field, future approaches should not only identify current research implications but also focus on real-world evaluations based on large-scale deployments. Other limitations include the risk of the review quickly becoming outdated owing to the numerous recent publications on COVID-19-related AI and DL models; the exclusion of preprinted studies, proposals, and conference abstracts; and access to a limited number of databases.

As a starting point, this review discussed the use of DL for COVID-19 detection, which can serve as a guide for future research. Future works can include up-to-date articles on various topics, such as the prediction of outcomes and vaccines, and cover other databases with unrestricted access.

Appendix

APPENDIX A: Search terms used and total number of retrieved studies per database
APPENDIX B: Data extraction form

Search terms and total number of retrieved studies per database

| Database name | Research terms | Number of retrieved studies |
|---|---|---|
| MEDLINE | ("artificial intelligence" OR "machine learning" OR "deep learning") AND ("COVID-19" OR "COVID19" OR "coronavirus") AND ("ultrasound" OR "Computed tomography") | N = 14 |
| Science Direct | ("artificial intelligence" OR "machine learning" OR "deep learning") AND ("COVID-19" OR "COVID19" OR "coronavirus") AND ("ultrasound" OR "Computed tomography") | N = 144 |
| IEEE Xplore | ("artificial intelligence" OR "machine learning" OR "deep learning") AND ("COVID-19" OR "COVID19" OR "coronavirus") AND ("ultrasound" OR "Computed tomography") | N = 277 |
| ACM | ("artificial intelligence" OR "machine learning" OR "deep learning") AND ("COVID-19" OR "COVID19" OR "coronavirus") AND ("ultrasound" OR "Computed tomography") | N = 22 |
| | Total studies (2020) | N = 457 |

Data extraction form

| Concept | Definition |
|---|---|
| Study characteristics | |
| Author | The first author of the study |
| Year of submission | The year in which the study was submitted |
| Country of publication | The country where the study was published |
| Publication type | The paper type (i.e., peer-reviewed, conference, or preprint) |
| AI, ML, and DL technique characteristics | |
| Detection modality | The type of medical images used (CT and ULS) |
| AI, ML, and DL branches | The branches/areas used (e.g., CNN, transfer learning, etc.) |
| Dataset characteristics | |
| Data sources | Source of the data used for the development and validation of the AI models/algorithms (e.g., public databases, clinical settings, government sources) |
| Dataset size | The total number of data points used for the development and validation of the AI models/algorithms |
| Type of validation | How the dataset was split/used to develop and test the proposed models/algorithms (e.g., train–test split, k-fold cross-validation, external validation) |
| Proportion of training set | Percentage of the training set relative to the total dataset |
| Proportion of test set | Percentage of the test set relative to the total dataset |
| Evaluation metrics | Any evaluation method used to check the performance of the model (e.g., accuracy, precision, F1 score, recall, and kappa) |
| Visualization method | The type of visualization method used |

  References

[1] Yang, P., Wang, X. (2020). COVID-19: A new challenge for human beings. Cellular & Molecular Immunology, 17(5): 555-557. https://doi.org/10.1038/s41423-020-0407-x

[2] Kong, W.H., Li, Y., Peng, M.W., Kong, D.G., Yang, X.B., Wang, L., Liu, M.Q. (2020). SARS-CoV-2 detection in patients with influenza-like illness. Nature Microbiology, 5(5): 675-678. https://doi.org/10.1038/s41564-020-0713-1

[3] World Health Organization. (2020). Coronavirus disease 2019 (COVID-19): Situation Report, 73. https://apps.who.int/iris/handle/10665/331686

[4] Del Rio, C., Malani, P.N. (2020). COVID-19—new insights on a rapidly changing epidemic. Jama, 323(14): 1339-1340. https://doi.org/10.1001/jama.2020.3072

[5] Chamola, V., Hassija, V., Gupta, V., Guizani, M. (2020). A comprehensive review of the COVID-19 pandemic and the role of IoT, drones, AI, blockchain, and 5G in managing its impact. IEEE Access, 8: 90225-90265. https://doi.org/10.1109/ACCESS.2020.2992341

[6] Bin-Salem, A., Bindahman, S., Hanshi, S.M., Munassar, W., Aladhal, K. (2019). Efficient power-delay management framework for enhancing the lifetime of IoT devices. In 2019 First International Conference of Intelligent Computing and Engineering (ICOICE), pp. 1-5. https://doi.org/10.1109/ICOICE48418.2019.9035197

[7] Vaishya, R., Javaid, M., Khan, I.H., Haleem, A. (2020). Artificial Intelligence (AI) applications for COVID-19 pandemic. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 14(4): 337-339. https://doi.org/10.1016/j.dsx.2020.04.012

[8] Abd-Alrazaq, A., Alajlani, M., Alhuwail, D., et al. (2020). Artificial intelligence in the fight against COVID-19: scoping review. Journal of Medical Internet Research, 22(12): e20756. https://doi.org/10.2196/20756

[9] Shaw, J., Rudzicz, F., Jamieson, T., Goldfarb, A. (2019). Artificial intelligence and the implementation challenge. Journal of medical Internet Research, 21(7): e13659. https://doi.org/10.2196/13659

[10] Mondal, B. (2020). Artificial intelligence: state of the art. Recent Trends and Advances in Artificial Intelligence and Internet of Things, pp. 389-425. https://doi.org/10.1007/978-3-030-32644-9_32

[11] Ting, D.S.W., Carin, L., Dzau, V., Wong, T.Y. (2020). Digital technology and COVID-19. Nature Medicine, 26(4): 459-461. https://doi.org/10.1038/s41591-020-0824-5

[12] Shi, F., Wang, J., Shi, J., et al. (2020). Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. IEEE Reviews in Biomedical Engineering, 14: 4-15. https://doi.org/10.1109/RBME.2020.2987975

[13] Chen, Y., Li, L. (2020). SARS-CoV-2: Virus dynamics and host response. The Lancet Infectious Diseases, 20(5): 515-516. https://doi.org/10.1016/S1473-3099(20)30235-8

[14] Pham, Q.V., Nguyen, D.C., Huynh-The, T., Hwang, W.J., Pathirana, P.N. (2020). Artificial intelligence (AI) and big data for coronavirus (COVID-19) pandemic: A survey on the state-of-the-arts. IEEE Access, 8: 130820-130839. https://doi.org/10.1109/ACCESS.2020.3009328

[15] Fontanarosa, P.B., Bauchner, H. (2020). COVID-19—looking beyond tomorrow for health care and society. Jama, 323(19): 1907-1908. https://doi.org/10.1001/jama.2020.6582

[16] Siddiqui, M.A., Ali, M.A., Deriche, M. (2021). On the early detection of COVID19 using advanced machine learning techniques: A review. In 2021 18th International Multi-Conference on Systems, Signals & Devices (SSD), pp. 1-7. https://doi.org/10.1109/SSD52085.2021.9429345

[17] Mossa-Basha, M., Meltzer, C.C., Kim, D.C., Tuite, M.J., Kolli, K.P., Tan, B.S. (2020). Radiology department preparedness for COVID-19: Radiology scientific expert review panel. Radiology, 296(2): E106-E112. https://doi.org/10.1148/radiol.2020200988

[18] Bullock, J., Luccioni, A., Pham, K.H., Lam, C.S.N., Luengo-Oroz, M. (2020). Mapping the landscape of artificial intelligence applications against COVID-19. Journal of Artificial Intelligence Research, 69: 807-845. https://doi.org/10.1613/jair.1.12162

[19] Van Hartskamp, M., Consoli, S., Verhaegh, W., Petkovic, M., Van de Stolpe, A. (2019). Artificial intelligence in clinical health care applications. Interactive Journal of Medical Research, 8(2): e12100. https://doi.org/10.2196/12100

[20] Santosh, K.C. (2020). AI-driven tools for coronavirus outbreak: need of active learning and cross-population train/test models on multitudinal/multimodal data. Journal of Medical Systems, 44(5): 1-5. https://doi.org/10.1007/s10916-020-01562-1

[21] Alimadadi, A., Aryal, S., Manandhar, I., Munroe, P.B., Joe, B., Cheng, X. (2020). Artificial intelligence and machine learning to fight COVID-19. Physiological Genomics, 52(4): 200-202. https://doi.org/10.1152/physiolgenomics.00029.2020

[22] Glassner, A. (2019). Deep learning: A crash course. In ACM SIGGRAPH 2019 Courses, pp. 1-550.

[23] Alom, M.Z., Taha, T.M., Yakopcic, C., et al. (2019). A state-of-the-art survey on deep learning theory and architectures. Electronics, 8(3): 292. https://doi.org/10.3390/electronics8030292

[24] Wynants, L., Van Calster, B., Bonten, M.M., et al. (2020). Systematic review and critical appraisal of prediction models for diagnosis and prognosis of COVID-19 infection. MedRxiv. https://doi.org/10.1101/2020.03.24.20041020

[25] Ulhaq, A., Born, J., Khan, A., Gomes, D.P.S., Chakraborty, S., Paul, M. (2020). COVID-19 control by computer vision approaches: A survey. IEEE Access, 8: 179437-179456. https://doi.org/10.1109/ACCESS.2020.3027685

[26] Zheng, N., Du, S., Wang, J., et al. (2020). Predicting COVID-19 in China using hybrid AI model. IEEE Transactions on Cybernetics, 50(7): 2891-2904. https://doi.org/10.1109/TCYB.2020.2990162

[27] Ghassemi, N., Shoeibi, A., Rouhani, M. (2020). Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomedical Signal Processing and Control, 57: 101678. https://doi.org/10.1016/j.bspc.2019.101678

[28] Alzubaidi, M.S., Shah, U., Dhia Zubaydi, H., Dolaat, K., Abd-Alrazaq, A.A., Ahmed, A., Househ, M. (2021). The role of neural network for the detection of Parkinson’s disease: A scoping review. In Healthcare, 9(6): 740. https://doi.org/10.3390/healthcare9060740

[29] Mohammadpoor, M., Shoeibi, A., Shojaee, H. (2016). A hierarchical classification method for breast tumor detection. Iranian Journal of Medical Physics, 13(4): 261-268. http://eprints.gmu.ac.ir/id/eprint/540.

[30] Tiwari, D., Dixit, M., Gupta, K. (2021). Deep multi-view breast cancer detection: A multi-view concatenated infrared thermal images based breast cancer detection system using deep transfer learning. Traitement du Signal, 38(6): 1699-1711. https://doi.org/10.18280/ts.380613

[31] Alzubaidi, M., Zubaydi, H.D., Bin-Salem, A.A., Abd-Alrazaq, A.A., Ahmed, A., Househ, M. (2021). Role of deep learning in early detection of COVID-19: Scoping review. Computer Methods and Programs in Biomedicine Update, 1: 100025. https://doi.org/10.1016/j.cmpbup.2021.100025

[32] Savadjiev, P., Chong, J., Dohan, A., Vakalopoulou, M., Reinhold, C., Paragios, N., Gallix, B. (2019). Demystification of AI-driven medical image interpretation: past, present and future. European Radiology, 29(3): 1616-1624. https://doi.org/10.1007/s00330-018-5674-x

[33] Alzubaidi, M., Agus, M., Alyafei, K., et al. (2022). Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. arXiv preprint arXiv:2201.07935.

[34] Malik, S., Singh, S., Singh, N.M., Panwar, N. (2021). Diagnosis of COVID-19 using chest X-ray. International Journal of Informatics, Information System and Computer Engineering (INJIISCOM), 2(1): 55-64.

[35] Ilyas, M., Rehman, H., Naït-Ali, A. (2020). Detection of covid-19 from chest x-ray images using artificial intelligence: An early review. arXiv preprint arXiv:2004.05436.

[36] Alghamdi, H., Amoudi, G., Elhag, S., Saeedi, K., Nasser, J. (2021). Deep learning approaches for detecting COVID-19 from chest X-ray images: A survey. IEEE Access, pp. 20235-20254. https://doi.org/10.1109/ACCESS.2021.3054484

[37] Nayak, S.R., Nayak, D.R., Sinha, U., Arora, V., Pachori, R.B. (2021). Application of deep learning techniques for detection of COVID-19 cases using chest X-ray images: A comprehensive study. Biomedical Signal Processing and Control, 64: 102365. https://doi.org/10.1016/j.bspc.2020.102365

[38] Sadeghi, M.H., Omidi, H., Sina, S. (2020). A systematic review on the use of artificial intelligence techniques in the diagnosis of COVID-19 from chest X-ray images. Avicenna Journal of Medical Biochemistry, 8(2): 120-127. https://doi.org/10.34172/ajmb.2020.17

[39] Tricco, A.C., Lillie, E., Zarin, W., et al. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7): 467-473. https://doi.org/10.7326/M18-0850

[40] Munn, Z., Peters, M.D., Stern, C., Tufanaru, C., McArthur, A., Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1): 1-7. https://doi.org/10.1186/s12874-018-0611-x

[41] Wang, X., Deng, X., Fu, Q., et al. (2020). A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT. IEEE Transactions on Medical Imaging, 39(8): 2615-2625. https://doi.org/10.1109/TMI.2020.2995965

[42] Hu, S., Gao, Y., Niu, Z., et al. (2020). Weakly supervised deep learning for COVID-19 infection detection and classification from CT images. IEEE Access, 8: 118869-118883. https://doi.org/10.1109/ACCESS.2020.3005510

[43] Han, Z., Wei, B., Hong, Y., et al. (2020). Accurate screening of COVID-19 using attention-based deep 3D multiple instance learning. IEEE Transactions on Medical Imaging, 39(8): 2584-2594. https://doi.org/10.1109/TMI.2020.2996256

[44] Li, Y., Wei, D., Chen, J., et al. (2020). Efficient and effective training of COVID-19 classification networks with self-supervised dual-track learning to rank. IEEE Journal of Biomedical and Health Informatics, 24(10): 2787-2797. https://doi.org/10.1109/JBHI.2020.3018181

[45] Wang, Z., Liu, Q., Dou, Q. (2020). Contrastive cross-site learning with redesigned net for COVID-19 CT classification. IEEE Journal of Biomedical and Health Informatics, 24(10): 2806-2813. https://doi.org/10.1109/JBHI.2020.3023246

[46] Tabarisaadi, P., Khosravi, A., Nahavandi, S. (2020). A deep bayesian ensembling framework for COVID-19 detection using chest CT images. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1584-1589. https://doi.org/10.1109/SMC42975.2020.9283003

[47] Xu, X., Jiang, X., Ma, C., et al. (2020). A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering, 6(10): 1122-1129. https://doi.org/10.1016/j.eng.2020.04.010

[48] Loey, M., Manogaran, G., Khalifa, N.E.M. (2020). A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Computing and Applications, pp. 1-13. https://doi.org/10.1007/s00521-020-05437-x

[49] Liu, Q., Leung, C.K., Hu, P. (2020). A two-dimensional sparse matrix profile DenseNet for COVID-19 diagnosis using chest CT images. IEEE Access, 8: 213718-213728. https://doi.org/10.1109/ACCESS.2020.3040245

[50] Sun, L., Mo, Z., Yan, F., et al. (2020). Adaptive feature selection guided deep forest for covid-19 classification with chest ct. IEEE Journal of Biomedical and Health Informatics, 24(10): 2798-2805. https://doi.org/10.1109/JBHI.2020.3019505

[51] Dan-Sebastian, B., Delia-Alexandrina, M., Sergiu, N., Radu, B. (2020). Adversarial graph learning and deep learning techniques for improving diagnosis within CT and ultrasound images. In 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), pp. 449-456. https://doi.org/10.1109/ICCP51029.2020.9266242

[52] Ardakani, A.A., Kanafi, A.R., Acharya, U.R., Khadem, N., Mohammadi, A. (2020). Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Computers in Biology and Medicine, 121: 103795. https://doi.org/10.1016/j.compbiomed.2020.103795

[53] Yan, T., Wong, P.K., Ren, H., Wang, H., Wang, J., Li, Y. (2020). Automatic distinction between COVID-19 and common pneumonia using multi-scale convolutional neural network on chest CT scans. Chaos, Solitons & Fractals, 140: 110153. https://doi.org/10.1016/j.chaos.2020.110153

[54] Li, C., Dong, D., Li, L., et al. (2020). Classification of severe and critical COVID-19 using deep learning and radiomics. IEEE Journal of Biomedical and Health Informatics, 24(12): 3585-3594. https://doi.org/10.1109/JBHI.2020.3036722

[55] Cai, X., Wang, Y., Sun, X., Liu, W., Tang, Y., Li, W. (2020). Comparing the performance of ResNets on COVID-19 diagnosis using CT scans. In 2020 International Conference on Computer, Information and Telecommunication Systems (CITS), pp. 1-4. https://doi.org/10.1109/CITS49457.2020.9232574

[56] Silva, P., Luz, E., Silva, G., Moreira, G., Silva, R., Lucio, D., Menotti, D. (2020). COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis. Informatics in Medicine Unlocked, 20: 100427. https://doi.org/10.1016/j.imu.2020.100427

[57] Mertyüz, İ., Mertyüz, T., Taşar, B., Yakut, O. (2020). COVID-19 disease diagnosis from radiology data with deep learning algorithms. In 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1-4. https://doi.org/10.1109/ISMSIT50672.2020.9255380

[58] Wang, Q., Wang, W., Chen, X., Chen, L., Chen, W. (2020). Deep learning based on high-dimensional tensor for COVID-19 diagnosis. In 2020 International Conference on Information Science, Parallel and Distributed Systems (ISPDS), pp. 183-188. https://doi.org/10.1109/ISPDS51347.2020.00045

[59] Serener, A., Serte, S. (2020). Deep learning for mycoplasma pneumonia discrimination from pneumonias like COVID-19. In 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1-5. https://doi.org/10.1109/ISMSIT50672.2020.9254561

[60] Chen, J., Wu, L., Zhang, J., et al. (2020). Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Scientific Reports, 10(1): 1-11. https://doi.org/10.1038/s41598-020-76282-0

[61] Carvalho, E.D., Carvalho, E.D., de Carvalho Filho, A.O., De Araújo, F.H.D., Rabêlo, R.D.A.L. (2020). Diagnosis of COVID-19 in CT image using CNN and XGBoost. In 2020 IEEE Symposium on Computers and Communications (ISCC), pp. 1-6. https://doi.org/10.1109/ISCC50000.2020.9219726

[62] Serte, S., Serener, A. (2020). Discerning COVID-19 from mycoplasma and viral pneumonia on CT images via deep learning. In 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1-5. https://doi.org/10.1109/ISMSIT50672.2020.9254970

[63] Ouyang, X., Huo, J., Xia, L., et al. (2020). Dual-sampling attention network for diagnosis of COVID-19 from community acquired pneumonia. IEEE Transactions on Medical Imaging, 39(8): 2595-2605. https://doi.org/10.1109/TMI.2020.2995508

[64] Wang, T., Zhao, Y., Zhu, L., Liu, G., Ma, Z., Zheng, J. (2020). Lung CT image aided detection COVID-19 based on Alexnet network. In 2020 5th International Conference on Communication, Image and Signal Processing (CCISP), pp. 199-203. https://doi.org/10.1109/CCISP51026.2020.9273512

[65] Amyar, A., Modzelewski, R., Li, H., Ruan, S. (2020). Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Computers in Biology and Medicine, 126: 104037. https://doi.org/10.1016/j.compbiomed.2020.104037

[66] Zhou, T., Lu, H., Yang, Z., Qiu, S., Huo, B., Dong, Y. (2021). The ensemble deep learning model for novel COVID-19 on CT images. Applied Soft Computing, 98: 106885. https://doi.org/10.1016/j.asoc.2020.106885

[67] Horry, M.J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., Shukla, N. (2020). COVID-19 detection through transfer learning using multimodal imaging data. IEEE Access, 8: 149808-149824. https://doi.org/10.1109/ACCESS.2020.3016780

[68] Roy, S., Menapace, W., Oei, S., et al. (2020). Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound. IEEE Transactions on Medical Imaging, 39(8): 2676-2687. https://doi.org/10.1109/TMI.2020.2994459

[69] Born, J., Wiedemann, N., Brändle, G., Buhre, C., Rieck, B., Borgwardt, K. (2020). Accelerating COVID-19 differential diagnosis with explainable ultrasound image analysis. arXiv preprint arXiv:2009.06116.

[70] Zhang, J., Chng, C.B., Chen, X., et al. (2020). Detection and classification of pneumonia from lung ultrasound images. In 2020 5th International Conference on Communication, Image and Signal Processing (CCISP), pp. 294-298. https://doi.org/10.1109/CCISP51026.2020.9273469

[71] Almeida, A., Bilbao, A., Ruby, L., et al. (2020). Lung ultrasound for point-of-care COVID-19 pneumonia stratification: Computer-aided diagnostics in a smartphone. First experiences classifying semiology from public datasets. In 2020 IEEE International Ultrasonics Symposium (IUS), pp. 1-4. https://doi.org/10.1109/IUS46767.2020.9251716

[72] Born, J., Brändle, G., Cossio, M., Disdier, M., Goulet, J., Roulin, J., Wiedemann, N. (2020). POCOVID-Net: Automatic detection of COVID-19 from a new lung ultrasound imaging dataset (POCUS). arXiv preprint arXiv:2004.12084.

[73] Yang, J., Veeraraghavan, H., Armato III, S.G., et al. (2018). Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017. Medical Physics, 45(10): 4568-4581. https://doi.org/10.1002/mp.13141

[74] Angelov, P., Almeida Soares, E. (2020). SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. MedRxiv. https://doi.org/10.1101/2020.04.24.20078584

[75] Zhao, J., Zhang, Y., He, X., Xie, P. (2020). COVID-CT-Dataset: A CT scan dataset about COVID-19. arXiv preprint arXiv:2003.13865.

[76] Rahimzadeh, M., Attar, A., Sakhaei, S.M. (2021). A fully automated deep learning-based network for detecting COVID-19 from a new and large lung CT scan dataset. Biomedical Signal Processing and Control, 68: 102588. https://doi.org/10.1016/j.bspc.2021.102588

[77] Bickle, I., Bell, D.J. (2020). COVID-19. Available online: https://radiopaedia.org/articles/covid-19-4?lang=us, accessed on Feb. 1, 2020.

[78] Xia, B., Jiang, H., Liu, H., Yi, D. (2016). A novel hepatocellular carcinoma image classification method based on voting ranking random forests. Computational and Mathematical Methods in Medicine. https://doi.org/10.1155/2016/2628463

[79] Yan, T. (2020). CCAP. Available online: https://ieee-dataport.org/documents/ccap, accessed on Feb. 1, 2020.

[80] COVID-19 CT segmentation dataset. (2020). Available online: http://medicalsegmentation.com/covid19/, accessed on Feb. 1, 2020.

[81] Soldati, G., Smargiassi, A., Inchingolo, R., Buonsenso, D., Perrone, T., Briganti, D.F., Perlini, S., Torri, E., Mariani, A., Mossolani, E.E., Tursi, F., Mento, F., Demi, L. (2020). Proposal for international standardization of the use of lung ultrasound for patients with COVID-19: A simple, quantitative, reproducible method. Journal of Ultrasound in Medicine, 39(7): 1413-1419. https://doi.org/10.1002/jum.15285

[82] Mhorry. (2020). N-Clahe-Medical-Images. Available online: https://github.com/mhorry/N-CLAHE-MEDICAL-IMAGES, accessed on Feb. 1, 2020.

[83] Jannisborn. (2020). Covid19_ultrasound. Available online: https://github.com/jannisborn/covid19_ultrasound, accessed on Feb. 1, 2020.

[84] Özyurt, F. (2021). Automatic detection of COVID-19 disease by using transfer learning of light weight deep learning model. Traitement du Signal, 38(1): 147-153. https://doi.org/10.18280/ts.380115

[85] Bharati, S., Podder, P., Mondal, M., Prasath, V.B. (2021). Medical imaging with deep learning for COVID-19 diagnosis: A comprehensive review. arXiv preprint arXiv:2107.09602.