Automated Identification and Categorization of COVID-19 via X-Ray Imagery Leveraging ROI Segmentation and CART Model

Bayan Alsaaidah* Zaid Mustafa Moh’d Rasoul Al-Hadidi Lubna A. Alharbi

Computer Science Department, Al-Balqa Applied University, Al-Salt 19117, Jordan

Computer Information Systems Department, Al-Balqa Applied University, Al-Salt 19117, Jordan

Electrical Engineering Department, Al-Balqa Applied University, Al-Salt 19117, Jordan

Department of Computer Science, University of Tabuk, Tabuk 71491, Saudi Arabia

Corresponding Author Email: Bayan-saaidah@bau.edu.jo

Page: 2259-2265 | DOI: https://doi.org/10.18280/ts.400543

Received: 8 February 2023 | Revised: 19 July 2023 | Accepted: 1 August 2023 | Available online: 30 October 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

COVID-19, a novel disease first identified in China in December 2019, has rapidly precipitated a global pandemic, impacting public health and the global economy with unprecedented severity. Accurate detection of this virus is of paramount importance, yet current methodologies present significant limitations and challenges. Polymerase Chain Reaction (PCR) diagnostic kits, a commonly utilized detection method, often yield false-negative results. Moreover, the recent strains of the virus elude detection solely by PCR testing. In contrast, imaging techniques such as chest X-rays or Computerized Tomography (CT) scans offer radiologists a higher degree of diagnostic accuracy. However, the vast quantity of required imaging coupled with a shortage of radiologists has underscored the necessity for automated detection methods. This study proposes an integrated system for the automated detection and classification of COVID-19 infection. By utilizing an amalgamation of computer vision tools and machine learning algorithms, this system aims to provide clinicians with rapid and accurate diagnoses without the need for human intervention. This paper, therefore, presents an advancement in the use of medical imaging for the detection and classification of COVID-19, offering a potential solution to the current limitations in testing capabilities.

Keywords: 

CART model, COVID-19, segmentation, superpixel, X-ray images, ROI

1. Introduction

COVID-19, a disease identified amid a series of unexplained pneumonia cases in Wuhan, swiftly propagated worldwide, leading to its declaration as a pandemic by the World Health Organization (WHO) [1, 2]. The total number of confirmed cases up to July 2023 is shown in Figure 1. Its rapid transmission, similar to that of the influenza virus, was facilitated by direct contact [3]. Due to its substantial similarity to the SARS virus, it was named SARS-CoV-2, reflecting their shared family and symptomatology [4]. Like its predecessor, COVID-19 is a zoonotic disease, with origins traceable to bats, a common starting point for coronaviruses [5].

The swift global spread of COVID-19 has been largely attributed to asymptomatic transmission, where individuals exhibiting no symptoms facilitate the virus's propagation, significantly contributing to the pandemic's extent [6]. While the virus typically presents with symptoms such as fever, dry cough, sore throat, shortness of breath, and bilateral lung infiltrates observed on clinical images, other less common symptoms include headache, rhinorrhea, vomiting, skin rash, and sneezing [7].

While most patients recover within a short span, a subset grapples with more severe COVID-19 symptoms, leading to critical complications, including pulmonary edema, cardiac injury, Multi-Organ Failure (MOF), acute kidney injury, and Acute Respiratory Distress Syndrome (ARDS) [8, 9]. To date, no antiviral drug or medicine is available for COVID-19 treatment, making social distancing, travel restrictions, and quarantine necessary to help control the spread [10, 11].

Recently, the emergence of COVID-19 variants, such as the COVID-20 strain first identified in England, has introduced new challenges [11]. The Polymerase Chain Reaction (PCR) test, while commonly used for diagnosis, has shown limited sensitivity, with a significant number of false-negatives [12, 13]. In several instances, patients with negative PCR results were later diagnosed with COVID-19 through chest image examination, underscoring the inadequacy of relying solely on PCR tests for diagnosis [14, 15].

As such, a combined approach, integrating clinical chest image features with PCR test results, has been recommended for early detection of COVID-19 [16-18]. Chest imaging has proven to yield accurate and sufficient information for COVID-19 diagnosis, with features such as nodular opacity of the lung region being pivotal for accurate diagnosis [19].

Artificial intelligence applications have gained traction in the medical field for automatic diagnosis systems [20-23], with machine learning techniques successfully applied to diverse problems ranging from skin cancer detection [24-26], brain tumor detection and classification [27-30], breast cancer detection [31-35], pneumonia detection [20, 36-39], to fundus segmentation [40].

This study proposes an integrated automated system for the detection and classification of COVID-19. The system presents an end-to-end model that processes raw chest images to provide clinicians with diagnoses.

Figure 1. Total confirmed cases of COVID-19 from Jan. 22, 2020-Feb. 7, 2023

2. Related Work

Using chest images in machine learning is not new; many researchers have used them for disease detection and classification. Recently, chest images have been used to detect COVID-19 and to classify lung abnormality as pneumonia or COVID-19/20. Each published study presents a different model with different results using different data sets. Barstugan et al. [41] proposed a machine-learning model to detect COVID-19 in CT images. Several features were extracted from different types of patches, including the Grey Level Co-occurrence Matrix (GLCM), Local Directional Pattern (LDP), Grey Level Run Length Matrix (GLRLM), Grey-Level Size Zone Matrix (GLSZM), and Discrete Wavelet Transform (DWT). These features were provided to a Support Vector Machine (SVM) for training and testing. The system accuracy was 99.68% using the GLSZM feature extraction method.

Using a different approach, Asif et al. [42] proposed a Deep Convolutional Neural Network (DCNN)-based model, Inception V3, to detect coronavirus pneumonia-infected patients using chest X-ray images. The model accuracy in the training stage was 97%, and the validation accuracy was 93%. Most studies have been based on deep convolutional neural networks. Wang et al. [43] designed COVID-Net, a tailored DCNN for COVID-19 detection using chest X-ray images that are publicly available via Kaggle [44].

The study in [45] explores COVID-19 detection results using a small data set of X-ray images and a pre-trained DCNN. The proposed model correctly classified 95% of 20 unseen COVID-19 cases. This study showed that the lack of information on COVID-19 cases reduces the model's reliability [45]. Another DCNN method was proposed in the study of Medhi et al. [46] for COVID-19 diagnosis using X-ray images. This model used a public data set from Kaggle [47] that is available for research purposes. The accuracy of the proposed system was 93% after noise removal and segmentation [46].

A simple Convolutional Neural Network (CNN) and a pre-trained AlexNet model were used for COVID-19 detection on X-ray and CT scan images. The two models achieved different accuracies: the pre-trained network reached 98%, whereas the simple CNN achieved a detection accuracy of 94.1% [48].

3. Material

This study uses X-ray images, the image type most commonly requested by doctors for COVID-19 diagnosis. The data set is available for public use in research, and researchers update it as more images become available to increase its size. Cohen et al. developed this data set from different open-access sources [49, 50]. Figure 2 shows a sample from the data set.

Figure 2. X-ray chest images: a) COVID-19 infected lung b) Non-infected lung

4. Method

The proposed system is an end-to-end model consisting of several stages, starting from the raw image and ending with the detected case being reported to the clinician. Figure 3 summarizes these major steps.

4.1 Pre-processing

The main goal of pre-processing is to prepare the raw images for the next step so that the most important information can be obtained from the Region of Interest (ROI). Focusing only on the ROI supports the proposed system: the image contains various details and objects, and concentrating on the lung area helps detect the defective cells efficiently. Several steps are applied to the raw images, including filtering, histogram equalization, and thresholding. Two mask-based morphological filters (top hat and bottom hat) are applied to the grayscale image, and the result is enhanced based on the image histogram. The image is then binarized using the Otsu algorithm, which selects the threshold from the image histogram, as shown in Figure 4 and Figure 5 [51].

Figure 3. The proposed system stages

Figure 4. The pre-processing operations for COVID-19 infected case: a) Original image b) Top hat filter c) Bottom hat filter d) Histogram equalization e) Thresholded image f) After morphological operations

Figure 5. The pre-processing operations for COVID-19 non-infected case: a) Original image b) Top hat filter c) Bottom hat filter d) Histogram equalization e) Thresholded image f) After morphological operations
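
For illustration, a minimal Python/OpenCV sketch of a pre-processing chain like the one above (top-hat and bottom-hat filtering, histogram equalization, Otsu thresholding, morphological clean-up) is given below. The structuring-element size and shape, the way the filter outputs are combined, and the final opening are assumptions for demonstration, not the authors' exact settings.

```python
import cv2

def preprocess(gray):
    """A minimal sketch of the pre-processing chain described above;
    kernel size/shape and the combination step are assumptions."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    # Top-hat emphasises bright details, bottom-hat (black-hat) emphasises dark ones.
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    bottomhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    enhanced = cv2.subtract(cv2.add(gray, tophat), bottomhat)
    # Histogram equalization spreads the intensity distribution.
    equalized = cv2.equalizeHist(enhanced)
    # Otsu's method selects the binarization threshold from the histogram [51].
    _, binary = cv2.threshold(equalized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # A morphological opening removes small artefacts left after thresholding.
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Usage (hypothetical file name):
# gray = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
# mask = preprocess(gray)
```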

4.2 Segmentation

For COVID-19 diagnosis, the target area for this process is the human lung. Figure 6 shows the infected and the non-infected areas of the lung [52].

To specify the ROI for an efficient feature extraction process and to remove any extraneous parts of the X-ray image, a segmentation process is performed based on the image dimensions and the pixel values inside the image (Figure 7).

Figure 6. Chest X-ray of a patient with COVID-19

Figure 7. Lung segmentation: a) with COVID-19 b) without COVID-19
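
A minimal sketch of such an ROI extraction step is shown below, assuming the lungs correspond to the two largest connected components of the pre-processed binary mask; the component selection and bounding-box crop are illustrative assumptions rather than the authors' exact procedure.

```python
import cv2
import numpy as np

def segment_lung_roi(gray, mask):
    """Crop the lung ROI from a grayscale chest image, given the binary mask
    produced by pre-processing. Keeping the two largest components as 'lungs'
    is an assumption made for this sketch."""
    # Label connected components in the binary mask.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Skip background label 0; keep the two largest components (assumed lungs).
    areas = stats[1:, cv2.CC_STAT_AREA]
    keep = 1 + np.argsort(areas)[::-1][:2]
    lung_mask = (np.isin(labels, keep).astype(np.uint8)) * 255
    # Crop the grayscale image to the bounding box of the retained regions.
    ys, xs = np.nonzero(lung_mask)
    roi = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return roi, lung_mask
```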

4.3 Feature extraction

In artificial intelligence, a model can correctly recognize the cases only if it is trained efficiently on the most important characteristics of the input data. This requires extracting and analyzing the candidate features to determine which ones are the most informative.

In this paper, many features related to gray-level intensity and region dimensions are extracted based on our knowledge of COVID-19 diagnosis. These features are analyzed so that the most important ones are selected and used based on their distribution. The features used in this model are autocorrelation, cluster prominence, energy, entropy, maximum probability, sum of squares, sum average, sum variance, and sum entropy.

The second-order statistics are computed from a matrix $C_d(I_{p1}, I_{p2})$ of relative frequencies, which describes how often two pixels with gray levels $I_{p1}$ and $I_{p2}$ appear as a pair in the image at distance $d$ and a given direction. The number of gray levels $N_g$ is set to 8.

-Autocorrelation:

$a u t=\sum_i \sum_j(i j) C(i, j)$

-Cluster prominence:

$\operatorname{Pro}=\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1}\left(i+j-u_x-u_y\right)^4 C(i, j)$

-Energy:

$A S M=\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} C(i, j)^2$

-Entropy:

$E_n=-\sum_i \sum_j C(i, j) \log (C(i, j))$

where $u_x$ and $u_y$ are the means of the row and column marginals of $C$.

The remaining features, such as maximum probability [53] and the sum of squares, sum average, sum variance, and sum entropy [54], are computed using MATLAB functions.
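
As an illustration of how these second-order descriptors can be computed, the sketch below builds an 8-level co-occurrence matrix with scikit-image and evaluates a few of the listed features; the pixel distance, the directions, the averaging over directions, and the use of scikit-image instead of the MATLAB functions cited above are all assumptions for this sketch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """Second-order (GLCM) features for an 8-level image; distance and
    angles are assumptions, and only a subset of the paper's features is shown."""
    levels = 8
    # Quantize the ROI (uint8, 0-255) to Ng = 8 gray levels, as in the text.
    q = (roi.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    # Co-occurrence matrices for distance d = 1 and four directions, normalized.
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    C = glcm.mean(axis=(2, 3))  # average C(i, j) over distances and angles
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_x, mu_y = (i * C).sum(), (j * C).sum()
    eps = 1e-12  # avoids log(0) in the entropy term
    return {
        "autocorrelation": (i * j * C).sum(),
        "cluster_prominence": (((i + j - mu_x - mu_y) ** 4) * C).sum(),
        "energy": (C ** 2).sum(),
        "entropy": -(C * np.log(C + eps)).sum(),
        "max_probability": C.max(),
        "contrast": graycoprops(glcm, "contrast").mean(),  # built-in property, for comparison
    }
```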

4.4 Classification

Decision trees, or CART models, are intuitive tools that help specialists choose among several options to reach a final decision. They provide a highly effective framework within which decisions can be explained and their potential outcomes explored. A particular advantage of the CART model is its cross-validation feature [55], which attempts to detect over-fitting that would otherwise lead to false future predictions. For these and other reasons, CART has often produced more accurate predictions than other statistical approaches (Kattan and Beck, 1995). A remaining challenge, however, is to better understand the behavior of these methods in order to identify their best use, which is the main challenge of machine learning algorithms.

The main idea of the CART model can be viewed as a sequence of conditions: several questions are posed to the model and answered sequentially by the tree nodes from top to bottom. The extracted features provided to the model determine the questions, and the answers are based on the learning strategy.

CART was chosen for this research because of several advantages of this type of ML algorithm: the tree is simple and its behavior is easy to understand, it can be easily modified, and the model is flexible and efficient.

The two main steps of any learning algorithm are training and testing: the model is trained on a sample of data to learn the main characteristics of the three classes, and it is then tested on unseen data to assess the system's performance. To achieve that, the data set is divided into two subsets, with two-quarters of the data used for the training stage and one-quarter for the testing stage.

After extracting the most important features, these features together with their labels are provided to the CART model for the training stage. The non-labeled data are then used for prediction and classification with the same model after the cross-validation process. This step, together with the training step, is used to compute the performance metrics of the proposed model.

Cross-validation is used to improve the system's performance and reduce overfitting. k-fold cross-validation with ten folds is applied during the training stage. This process aims to make the system more reliable and generalizable to additional data sets and to reach an accurate classification model.
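
A minimal sketch of this training scheme, using scikit-learn's DecisionTreeClassifier as a stand-in CART implementation, is given below. The feature/label file names, the stratified split, and the tree settings are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# X: one row of extracted ROI features per image; y: labels in
# {"COVID-19", "Pneumonia", "Healthy"}. Both are assumed to come from the
# feature-extraction stage described above (file names are hypothetical).
X = np.load("features.npy")
y = np.load("labels.npy", allow_pickle=True)

# Hold out one quarter of the images for testing, as described in the text
# (the handling of the remaining data is an assumption in this sketch).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

cart = DecisionTreeClassifier(criterion="gini", random_state=0)  # CART-style tree

# 10-fold cross-validation on the training data, as in the text.
cv_scores = cross_val_score(cart, X_train, y_train, cv=10)
print("10-fold CV accuracy: %.2f%% (+/- %.2f)"
      % (100 * cv_scores.mean(), 100 * cv_scores.std()))

# Fit on the training split and evaluate on the held-out test images.
cart.fit(X_train, y_train)
print("Test accuracy: %.2f%%" % (100 * cart.score(X_test, y_test)))
```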

5. Experimental Results

The proposed system aims to detect the COVID-19 virus and classify the chest images into three categories (COVID-19, Pneumonia, and Healthy). This is achieved using the extracted features and the CART algorithm.

The proposed system uses the extracted features to train three models, and the overall system uses these models to determine the image class (COVID-19, Pneumonia, or Healthy).

Figure 8. Confusion matrix of the proposed system Fold-5

To verify the system performance, a set of unknown images was provided to the system, and its performance was analyzed. Various metrics can be used for system verification, such as accuracy, sensitivity, and specificity. These metrics are derived from the confusion matrix. Figure 8 shows the confusion matrix for these classes.

Sensitivity is a measure of the system's performance in identifying the true positives whereas specificity is a measure of the system's performance by measuring how well the model can identify the true negatives.

Sensitivity $=\frac{T P}{T P+F N}$

Specificity $=\frac{T N}{T N+F P}$
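
The sketch below shows how these per-class metrics can be derived one-vs-rest from the confusion matrix; the helper function is hypothetical and relies on scikit-learn's confusion_matrix.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_sensitivity_specificity(y_true, y_pred, labels):
    """Sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP),
    computed one-vs-rest for each class from the confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    total = cm.sum()
    metrics = {}
    for k, label in enumerate(labels):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = total - tp - fn - fp
        metrics[label] = {"sensitivity": tp / (tp + fn),
                          "specificity": tn / (tn + fp)}
    return metrics

# Example (continuing the training sketch above):
# metrics = per_class_sensitivity_specificity(
#     y_test, cart.predict(X_test), labels=["COVID-19", "Pneumonia", "Healthy"])
```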

The system accuracy is 93.42%; in more detail, the sensitivity and specificity are 90.31% and 97.37%, respectively. These values are obtained using cross-validation. The system was evaluated for various cross-validation fold values, and the performance differs as shown in Figure 9. Among these favorable and promising values, Fold-5 shows the best performance.

The results obtained by the proposed system are compared with other studies. Table 1 summarizes these studies and compares their performance with that of the proposed model.

As illustrated above, the system performance is promising, and the system can be used effectively for COVID-19 diagnosis. It comprises several stages, from segmentation to feature extraction and finally classification with cross-validation, with no need for manual intervention. Furthermore, the system can be improved and extended by extracting more features, and it can be tested on other image types such as CT scans.

The purpose of the feature extraction process is to isolate the most essential information as efficiently as possible, which is achieved here by restricting processing to the ROI and thereby avoiding unnecessary computation during the training and testing processes. Moreover, ROI segmentation is a convincing and fundamental stage, especially given the high similarity between lung images from the three classes.

Table 1. Comparison of the proposed model with existing machine learning methods

Study              | Method                                   | Accuracy
[55]               | DarkNet                                  | 87.02%
[43]               | COVID-Net                                | 93.3%
[56]               | CNN                                      | 90.54%
[57]               | AlexNet, GoogleNet and ResNet50          | 87.8%
[58]               | DenseNet201, ResNet50V2 and InceptionV3  | 91.62%
[59]               | IKONOS                                   | 89.78%
[60]               | CNN                                      | 91.5%
The proposed model | CART/ROI                                 | 93.42%

The biggest challenge in using a deep learning system is the small number of available images, which does not affect the proposed system thanks to the several processing stages applied. In addition, using a CNN means reading all of the information/features in the images, whereas this system extracts and uses only the most important information for model training and testing, and does so only for the ROI. These factors provide more accurate and reliable results. With the proposed system, manual intervention is limited, and the test time should be shorter than usual.

Figure 9. Model performance with cross-validation

6. Conclusion

The proposed work set out to design and improve an automatic intelligent system that classifies chest X-ray images to detect and identify the COVID-19 virus using machine learning. One of the main motivations for this work is the shortage of radiologists relative to the huge number of required tests, which leads engineers and programmers to design and implement automatic virus detection and classification systems that can serve as an alternative to radiologists and support earlier and more accurate diagnosis than traditional PCR tests.

The proposed system is evaluated using non-labeled images. Furthermore, the same information/features were used to train other machine learning algorithms to determine the most suitable and most accurate algorithm, which here is the CART algorithm.

The main contributions of this work are that it can be used with a small data set, it focuses on the main and most important features, and it saves computational time and storage by working only on the ROI, i.e., the lungs. In contrast, most recent studies use deep learning, in which the whole image is processed and features are extracted indiscriminately, requiring a huge data set, especially for the pneumonia class.

References

[1] Kassania, S.H., Kassanib, P.H., Wesolowskic, M.J., Schneidera, K.A., Detersa, R. (2021). Automatic detection of coronavirus disease (COVID-19) in X-ray and CT images: A machine learning based approach. Biocybernetics and Biomedical Engineering, 41(3): 867-879. https://doi.org/10.1016/j.bbe.2021.05.013

[2] Soares, L.P., Soares, C.P. (2020). Automatic detection of covid-19 cases on x-ray images using convolutional neural networks. arXiv Preprint arXiv: 2007.05494. https://doi.org/10.48550/arXiv.2007.05494

[3] Roosa, K., Lee, Y., Luo, R., Kirpich, A., Rothenberg, R., Hyman, J.M., Yan, P, Chowell, G. (2020). Real-time forecasts of the COVID-19 epidemic in China from February 5th to February 24th, 2020. Infectious Disease Modelling, 5: 256-263. https://doi.org/10.1016/j.idm.2020.02.002

[4] Stoecklin, S.B., Rolland, P., Silue, Y., Mailles, A., Campese, C., Simondon, A., Mechain, M., Meurice, L., Nguyen, M., Bassi, C., Yamani, E., Behillil, S., Ismael, S., Nguyen, D., Malvy, D., Lescure, F.X., Georges, S., Lazarus, C., Tabaï, A., Stempfelet, M., Enouf, V., Coignard, B., Levy-Bruhl, D. (2020). First cases of coronavirus disease 2019 (COVID-19) in France: Surveillance, investigations and control measures. Eurosurveillance, 25(6): 294.

[5] Haider, N., Rothman-Ostrow, P., Osman, A.Y., Arruda, L.B., Macfarlane-Berry, L., Elton, L., Thomason, M.J., Yeboah-Manu, D., Ansumana, R., Kapata, N., Mboera, L., Rushton, J., McHugh, T.D., Heymann, D.L., Zumla, A., Kock, R.A. (2020). COVID-19-zoonosis or emerging infectious disease? Frontiers in Public Health, 8: 763. https://doi.org/10.3389/fpubh.2020.596944

[6] Kupferschmidt, K., Cohen, J. (2020). Will novel virus go pandemic or be contained? Science, 367(6478). https://doi.org/10.1126/science.367.6478.610

[7] WHO coronavirus (COVID-19) dashboard. https://covid19.who.int, accessed on Jun. 18, 2021.

[8] Guo, H., Zhou, Y., Liu, X., Tan, J. (2020). The impact of the COVID-19 epidemic on the utilization of emergency dental services. Journal of Dental Sciences, 15(4): 564-567. https://doi.org/10.1016/j.jds.2020.02.002

[9] Chavez, S., Long, B., Koyfman, A., Liang, S.Y. (2021). Coronavirus disease (COVID-19): A primer for emergency physicians. The American Journal of Emergency Medicine, 44: 220-229. https://doi.org/10.1016/j.ajem.2020.03.036

[10] Coronavirus disease (COVID-19) pandemic. https://www.who.int/emergencies/diseases/novel-coronavirus-2019, accessed on 18 June 2021.

[11] Ruiz Estrada, M.A. (2020). COVID-X. https://doi.org/10.2139/ssrn.3756721

[12] Xie, X., Zhong, Z., Zhao, W., Zheng, C., Wang, F., Liu, J. (2020). Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: Relationship to negative RT-PCR testing. Radiology, 296(2): E41-E45. https://doi.org/10.1148/radiol.2020200343

[13] Kanne, J.P., Little, B.P., Chung, J.H., Elicker, B.M., Ketai, L.H. (2020). Essentials for radiologists on COVID-19: An update-radiology scientific expert panel. Radiology, 296(2): E113-E114.

[14] Bernheim, A., Mei, X., Huang, M., Yang, Y., Fayad, Z.A., Zhang, N., Diao, K., Lin, B., Zhu, X., Li, K., Li, S., Shan, H., Jacobi, A., Chung, M. (2020). Chest CT findings in coronavirus disease-19 (COVID-19): Relationship to duration of infection. Radiology, 295(3): 685-691. https://doi.org/10.1148/radiol.2020200463

[15] Long, C., Xu, H., Shen, Q., Zhang, X., Fan, B., Wang, C., Zeng, B., Li, Z., Li, X., Li, H. (2020). Diagnosis of the coronavirus disease (COVID-19): rRT-PCR or CT. European Journal of Radiology, 126: 108961. https://doi.org/10.1016/j.ejrad.2020.108961

[16] Shi, H., Han, X., Jiang, N., Cao, Y., Alwalid, O., Gu, J., Fan, Y., Zheng, C. (2020). Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: A descriptive study. The Lancet Infectious Diseases, 20(4): 425-434. https://doi.org/10.1016/S1473-3099(20)30086-4

[17] Zhao, W., Zhong, Z., Xie, X., Yu, Q., Liu, J. (2020). Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: A multicenter study. American Journal of Roentgenology, 214(5): 1072-1077. https://doi.org/10.2214/AJR.20.22976

[18] Li, Y. and Xia, L. (2020). Coronavirus disease 2019 (COVID-19): Role of chest CT in diagnosis and management. American Journal of Roentgenology, 214(6): 1280-1286. https://doi.org/10.2214/AJR.20.22954

[19] Yoon, S.H., Lee, K.H., Kim, J.Y., Lee, Y.K., Ko, H., Kim, K.H., Park, C.M. and Kim, Y.H. (2020). Chest radiographic and CT findings of the 2019 novel coronavirus disease (COVID-19): Analysis of nine patients treated in Korea. Korean Journal of Radiology, 21(4): 494. https://doi.org/10.3348/kjr.2020.0132

[20] Moh'd Rasoul, A., Al-Hadidi, D., Razouq, R.S. (2016). Pneumonia identification using organizing map algorithm. ARPN Journal of Engineering and Applied Sciences, 11(5): 1819-6608.

[21] Al-Hadidi, M.D.R., AlSaaidah, B., Al-Gawagzeh, M.Y. (2020). Glioblastomas brain tumour segmentation based on convolutional neural networks. International Journal of Electrical Computer Engineering (2088-8708), 10(5).

[22] Alarabeyyat, A., Alhanahnah, M. (2016). Breast cancer detection using k-nearest neighbor machine learning algorithm. In 2016 9th International Conference on Developments in eSystems Engineering (DeSE). IEEE, pp. 35-39. https://doi.org/10.1109/DeSE.2016.8

[23] Moh’dRasoul, A., Al-Gawagzeh, M.Y., Alsaaidah, B.A. (2012). Solving mammography problems of breast cancer detection using artificial neural networks and image processing techniques. Indian Journal of Science and Technology, 5(4): 2520-2528.

[24] Dai, X., Spasić, I., Meyer, B., Chapman, S., Andres, F. (2019). Machine learning on mobile: An on-device inference app for skin cancer detection. In 2019 Fourth International Conference on Fog and Mobile Edge Computing (FMEC). IEEE, pp. 301-305. https://doi.org/10.1109/FMEC.2019.8795362

[25] Taufiq, M.A., Hameed, N., Anjum, A., Hameed, F. (2017). M-Skin Doctor: A mobile enabled system for early melanoma skin cancer detection using support vector machine. In eHealth 360: International Summit on eHealth, Budapest, Hungary, June 14-16, 2016, Revised Selected Papers. Springer International Publishing, pp. 468-475. https://doi.org/10.1007/978-3-319-49655-9_57

[26] Vijayalakshmi, M.M. (2019). Melanoma skin cancer detection using image processing and machine learning. International Journal of Trend in Scientific Research and Development (IJTSRD), 3(4): 780-784.

[27] García-Gómez, J.M., Tortajada, S., Vicente, J., Sáez, C., Castells, X., Luts, J., Julià-Sapé, M., Juan-Císcar, A., Van Huffel, S., Barceló, A., Ariño, J., Arús, C., Robles, M. (2007). Genomics and metabolomics research for brain tumour diagnosis based on machine learning. In Computational and Ambient Intelligence: 9th International Work-Conference on Artificial Neural Networks, IWANN 2007, San Sebastián, Spain, June 20-22, 2007, Proceedings. Springer Berlin Heidelberg, 9: 1012-1019.

[28] Rehman, Z.U., Zia, M.S., Bojja, G.R., Yaqub, M., Jinchao, F., Arshid, K. (2020). Texture based localization of a brain tumor from MR-images by using a machine learning approach. Medical Hypotheses, 141: 109705. https://doi.org/10.1016/j.mehy.2020.109705

[29] Podnar, S., Kukar, M., Gunčar, G., Notar, M., Gošnjak, N., Notar, M. (2019). Diagnosing brain tumours by routine blood tests using machine learning. Scientific Reports, 9(1): 14481. https://doi.org/10.1038/s41598-019-51147-3

[30] Bonte, S., Goethals, I., Van Holen, R. (2018). Machine learning based brain tumour segmentation on limited data using local texture and abnormality. Computers in Biology and Medicine, 98: 39-47. https://doi.org/10.1016/j.compbiomed.2018.05.005

[31] Asri, H., Mousannif, H., Al Moatassime, H., Noel, T. (2016). Using machine learning algorithms for breast cancer risk prediction and diagnosis. Procedia Computer Science, 83: 1064-1069. https://doi.org/10.1016/j.procs.2016.04.224

[32] Amrane, M., Oukid, S., Gagaoua, I., Ensari, T. (2018). Breast cancer classification using machine learning. In 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT). IEEE, pp. 1-4. https://doi.org/10.1109/EBBT.2018.8391453

[33] Ganggayah, M.D., Taib, N.A., Har, Y.C., Lio, P., Dhillon, S.K. (2019). Predicting factors for survival of breast cancer patients using machine learning techniques. BMC Medical Informatics and Decision Making, 19: 1-17. https://doi.org/10.1186/s12911-019-0801-4

[34] Sharma, S., Aggarwal, A., Choudhury, T. (2018). Breast cancer detection using machine learning algorithms. In 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), IEEE, pp. 114-118. https://doi.org/10.1109/CTEMS.2018.8769187

[35] Alzu’bi, A., Najadat, H., Doulat, W., Al-Shari, O., Zhou, L. (2021). Predicting the recurrence of breast cancer using machine learning algorithms. Multimedia Tools and Applications, 80(9): 13787-13800. https://doi.org/10.1007/s11042-020-10448-w

[36] Chandra, T.B., Verma, K. (2020). Pneumonia detection on chest x-ray using machine learning paradigm. In Proceedings of 3rd International Conference on Computer Vision and Image Processing: CVIP 2018. Springer Singapore, 1: 21-33. https://doi.org/10.1007/978-981-32-9088-4_3

[37] Pankratz, D.G., Choi, Y., Imtiaz, U., Fedorowicz, G.M., Anderson, J.D., Colby, T.V., Myers, J.L., Lynch, D.A., Brown, K.K., Flaherty, K.R., Steele, M.P., Groshong, S.D., Raghu, G., Barth, N.M., Walsh, P.S., Huang, J., Kennedy, G.C., Martinez, F.J. (2017). Usual interstitial pneumonia can be detected in transbronchial biopsies using machine learning. Annals of the American Thoracic Society, 14(11): 1646-1654. https://doi.org/10.1513/AnnalsATS.201612-947OC

[38] Toğaçar, M., Ergen, B., Cömert, Z., Özyurt, F. (2020). A deep feature learning model for pneumonia detection applying a combination of mRMR feature selection and machine learning models. Irbm, 41(4): 212-222. https://doi.org/10.1016/j.irbm.2019.10.006

[39] Jhuo, S.L., Hsieh, M.T., Weng, T.C., Chen, M.J., Yang, C.M., Yeh, C.H. (2019). Trend prediction of influenza and the associated pneumonia in Taiwan using machine learning. In 2019 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), IEEE, pp. 1-2. https://doi.org/10.1109/ISPACS48206.2019.8986244

[40] Celik, Y., Talo, M., Yildirim, O., Karabatak, M., Acharya, U.R. (2020). Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recognition Letters, 133: 232-239. https://doi.org/10.1016/j.patrec.2020.03.011

[41] Barstugan, M., Ozkaya, U., Ozturk, S. (2020). Coronavirus (COVID-19) classification using CT images by machine learning methods. arXiv Preprint arXiv: 2003.09424. https://doi.org/10.48550/arXiv.2003.09424

[42] Asif, S., Wenhui, Y., Jin, H., Jinhai, S. (2020). Classification of COVID-19 from chest X-ray images using deep convolutional neural network. In 2020 IEEE 6th International Conference on Computer and Communications (ICCC), pp. 426-433. https://doi.org/10.1109/ICCC51575.2020.9344870

[43] Wang, L., Lin, Z.Q., Wong, A. (2020). Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Scientific Reports, 10(1): 19549. https://doi.org/10.1038/s41598-020-76550-z

[44] Kaggle. (2020). Kaggle’s chest X-Ray images (Pneumonia) dataset. https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia.

[45] Hall, L.O., Paul, R., Goldgof, D.B., Goldgof, G.M. (2020). Finding covid-19 from chest x-rays using deep learning on a small dataset. arXiv Preprint arXiv: 2004.02060. https://doi.org/10.48550/arXiv.2004.02060

[46] Medhi, K., Jamil, M., Hussain, M.I. (2020). Automatic detection of COVID-19 infection from chest X-ray using deep learning. Medrxiv, 2020(05). https://doi.org/10.1101/2020.05.10.20097063

[47] Kaggle, COVID-19 dataset. https://www.kaggle.com/bachrr/covid-chest-xray.

[48] Maghdid, H.S., Asaad, A.T., Ghafoor, K.Z., Sadiq, A.S., Mirjalili, S., Khan, M.K. (2021). Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. In Multimodal Image Exploitation and Learning. SPIE, 11734: 99-110. https://doi.org/10.1117/12.2588672

[49] Cohen, J.P., Morrison, P., Dao, L., Roth, K., Duong, T.Q., Ghassemi, M. (2020). Covid-19 image data collection: Prospective predictions are the future. arXiv Preprint arXiv: 2006.11988. https://doi.org/10.48550/arXiv.2006.11988

[50] Covid-chestxray-dataset. https://github.com/ieee8023/COVID-chestxray-dataset.

[51] Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1): 62-66.

[52] https://www.iaea.org/bulletin/infectious-diseases/a-window-inside-the-body-and-covid-19, accessed on 25 April 2023.

[53] Soh, L.K., Tsatsoulis, C. (1999). Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Transactions on Geoscience and Remote Sensing, 37(2): 780-795. https://doi.org/10.1109/36.752194

[54] Spomer, W., Pfriem, A., Alshut, R., Just, S., Pylatiuk, C. (2012). High-throughput screening of zebrafish embryos using automated heart detection and imaging. Journal of Laboratory Automation, 17(6): 435-442. https://doi.org/10.1177/2211068212464223

[55] Breiman, L. (1984). Introduction to tree classification. Classification and Regression Trees, 18-55.

[56] Degadwala, S., Vyas, D., Dave, H. (2021). Classification of COVID-19 cases using fine-tune convolution neural network (FT-CNN). In 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), IEEE, pp. 609-613. https://doi.org/10.1109/ICAIS50930.2021.9395864

[57] Bhatti, S., Aziz, D., Nadeem, D., Usmani, I., Aamir, P., Khan, D. (2021). Automatic classification of the severity of covid-19 patients based on CT scans and x-rays using deep learning. European Journal of Molecular Clinical Medicine, 7(10): 1436-1455.

[58] Das, A.K., Ghosh, S., Thunder, S., Dutta, R., Agarwal, S., Chakrabarti, A. (2021). Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network. Pattern Analysis and Applications, 24: 1111-1124. https://doi.org/10.1007/s10044-021-00970-4

[59] Gomes, J.C., Barbosa, V.A.D.F., Santana, M.A., Bandeira, J., Valença, M.J.S., de Souza, R.E., Ismael, A.M., dos Santos, W.P. (2020). IKONOS: An intelligent tool to support diagnosis of COVID-19 by texture analysis of X-ray images. Research on Biomedical Engineering, 1-14. https://doi.org/10.1007/s42600-020-00091-7

[60] Ohata, E.F., Bezerra, G.M., das Chagas, J.V.S., Neto, A.V.L., Albuquerque, A.B., de Albuquerque, V.H.C., Reboucas Filho, P.P. (2020). Automatic detection of COVID-19 infection using chest X-ray images through transfer learning. IEEE/CAA Journal of Automatica Sinica, 8(1): 239-248. https://doi.org/10.1109/JAS.2020.1003393