Unveiling the Hidden: Leveraging Medical Imaging Data for Enhanced Brain Tumor Detection Using CNN Architectures

Muskan Bhasin Shivam Jain Faisal Hoda Ajay Dureja Aman Dureja Rajkumar Singh Rathor Saad Aldosary Walid El-Shafai*

Department of Information Technology, Bharati Vidyapeeth’s College of Engineering (BVCOE), New Delhi 110063, India

Department of Information Technology, Bhagwan Parshuram Institute of Technology (BPIT), New Delhi 110089, India

Department of Computer Science, Cardiff School of Technologies, Cardiff Metropolitan University, Llandaff Campus, Cardiff CF5 2YB, United Kingdom

Computer Science Department, Community College, King Saud University, Riyadh 11362, Saudi Arabia

Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menoufia 32952, Egypt

Corresponding Author Email: 
walid.elshafai@el-eng.menofia.edu.eg
Page: 1575-1582 | DOI: https://doi.org/10.18280/ts.410345

Received: 16 July 2023 | Revised: 15 November 2023 | Accepted: 1 April 2024 | Available online: 26 June 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Brain tumor detection using deep learning has made significant progress, but researchers and practitioners are still actively addressing several challenges, including limited data availability, class imbalance, generalization, data preprocessing, and integration into clinical practice. To address these challenges, this study proposes a method for classifying brain tumors that combines CNNs with several transfer learning techniques. Specifically, we employed three well-known transfer learning architectures, namely VGG, ResNet, and MobileNet, and compared their performance in brain tumor detection. These models offer the ability to leverage pre-trained knowledge, adaptability to different problem domains, architectural diversity, simplicity, efficiency, and state-of-the-art performance. Deep learning and pre-trained models are used to improve the accuracy and efficiency of brain tumor classification. The comparative study showed that MobileNet stands out with the highest accuracy for classifying brain tumor images, 98.66%, compared with 97.55% for VGG and 87.44% for ResNet. The outcomes of this work help advance the field of diagnostic imaging and aid medical practitioners in the prompt and precise identification of brain tumors.

Keywords: 

deep learning, convolutional neural networks, brain tumor detection, transfer learning, ResNet, VGG, MobileNet

1. Introduction

A tumor that originates in the brain or spinal cord is referred to as a primary brain or spinal cord tumor. An estimated 24,810 people in the US (14,280 men and 10,530 women) will be diagnosed with a primary malignant brain or spinal cord tumor in 2023. Less than 1% of people will develop this kind of tumor in their lifetime. Brain tumors account for 85% to 90% of primary cancers of the central nervous system (CNS). Worldwide, an estimated 308,102 primary brain or spinal cord tumors were diagnosed in 2020.

In the United States, 5,230 children under the age of 20 are expected to receive a CNS tumor diagnosis in 2023 [1]. The brain is an essential component of the human body: it regulates and unifies all physical activities, including movement, sensation, emotion, perception, and thought [2]. It is responsible for our ability to learn, reason, and communicate, and it is the seat of our consciousness and sense of self. The human brain can store and recall information; a very wide variety of physiological signals control how humans acquire and retain information and knowledge, a process termed human memory [3]. About 86 billion neurons make up the human brain, and these neurons "talk" to one another using electrochemical impulses [4]. The brain receives, integrates, and analyzes sensory inputs from various sources, allowing us to perceive the environment and make sense of it. It would therefore not be wrong to say that the brain is an incredibly powerful organ whose complexity and capabilities are truly remarkable.

An abnormal growth of brain cells is known as a brain tumor; such growths may be cancerous (malignant) or non-cancerous (benign). The World Health Organization (WHO) further categorizes tumors into grades I to IV [5]. Malignant tumors of grades III and IV have a significant negative impact on the patient's health and have the potential to be fatal [6]. Brain tumors may be primary, arising from the tissue of the brain itself, or metastatic, developing from cancer cells that have spread from other areas of the body [7]. Abnormal brain cell growth can severely affect a person's health and wellbeing. Early identification and categorization of brain tumors is therefore a critical area of healthcare imaging, as it assists in selecting the most practicable course of action to save a patient's life [8].

Brain tumor classification involves categorizing tumors into different types and subtypes based on characteristics such as size, shape, location, and histological features. The correct treatment strategy and patient prognosis can only be determined with this classification. Medical professionals frequently use MRI scans to detect brain abnormalities manually. Owing to factors such as examiner fatigue and the large number of MRI sections, large-scale manual assessment frequently results in misinterpretations [9]. As machine learning and deep learning methods progress, there is growing interest in automated approaches to brain tumor classification.

CNNs have become a potent tool for image recognition in recent years, including the identification of brain tumors [10, 11]. CNNs were created specifically for the analysis of visual data and have achieved outstanding results in a variety of medical imaging applications. These models can automatically discover pertinent features and patterns from medical images, enabling accurate and effective tumor classification. Transfer learning is another method that has become popular in brain tumor categorization: CNN models originally trained for large-scale tasks such as image classification are reused and modified for the purpose of classifying brain tumors [12]. Transfer learning allows the model to benefit from features and knowledge learned on a broader image domain, improving the model's generalization ability and performance.

In the proposed technique, a large collection of brain tumor images covering different tumor types is used to train a CNN model. To apply transfer learning, pre-trained CNN models such as ResNet, MobileNet, and VGGNet are first used as a starting point and then fine-tuned. These pre-trained models have undergone extensive training on large-scale image classification tasks, allowing them to pick up high-level patterns and features that are useful for classifying brain tumors. The outcomes show that the CNN-based classification model, combined with transfer learning, achieves high accuracy in classifying brain tumors into different types and subtypes. The model performs better than conventional machine learning techniques and has the potential to help healthcare providers diagnose and plan treatment for people with brain tumors accurately and promptly. Our work presents a unique method for predicting and assessing brain tumor activity that greatly outperforms traditional machine learning approaches. In contrast to conventional methods, our approach combines recently discovered biomarkers with sophisticated neural network architectures to predict tumor development and treatment response more precisely and consistently. This combination not only improves prediction precision but also provides a more profound understanding of the underlying dynamics and patterns of brain tumors.

2. Literature Review

A number of review papers on brain tumor detection have also been published, in which the segmentation of medical scans is highlighted as a crucial step in analyzing tumors from MRI. Many techniques have been proposed to detect tumors in MRI images. This section evaluates some major articles on brain tumors and points out their contributions and limitations.

The authors of [13] applied an SVM on top of a CNN, which initially yielded an accuracy of just 20.83%; after experimenting with different parameters, they were able to reach an accuracy of 97.10%. They used a 9:1 split and an 11-epoch training scheme, with 2,473 photos for training and 273 photos for testing. Their model is a 9-layer CNN with 14 stages.

In contrast to the above model, Saeedi et al. [14] proposed a model trained on a dataset of 3,264 MRI brain scans that included images of pituitary tumors, meningiomas, gliomas, and healthy brains. Preprocessing and augmentation methods were applied to the MRI brain pictures. The training accuracies of their proposed 2D CNN and auto-encoder network were found to be 96.47% and 95.63%, respectively.

To classify normal and tumorous brains, Brindha et al. [15] used both an ANN and a CNN. The ANN imitates the neurological system of the human brain: it has numerous interconnected layers of neurons, and the network learns from a dataset applied during training. Convolution is the mathematical linear operation used in CNNs (convolutional neural networks). The study focuses on designing self-described architectures for the CNN and ANN models and then evaluating how well they perform on MRI images of brain tumors. By enlarging the image without sacrificing the information required for prediction, the CNN assists in making the prediction.

Bathe et al. [16] used deep learning algorithms that, when applied to MRI scans, aid in tumor detection. Two techniques were applied to the dataset of MRI images. Experimental results show that a depthwise separable CNN is more accurate than a standard CNN, achieving 92% accuracy on the test set. This approach will undoubtedly be beneficial in the healthcare industry.

Bahadure et al. [17] used brain tumor segmentation based on the Berkeley wavelet transform (BWT). They separated brain tissue into four groups: tumor-bearing tissue, cerebrospinal fluid, white matter, and grey matter. An SVM was used to classify the tumor stage by analyzing the feature vectors and the tumor's size. The experimental results of the suggested approach were reviewed and confirmed for efficacy and quality of analysis on MRI brain scans, based on reliability, sensitivity, dice similarity index coefficient, and specificity. The accuracy of the suggested technique in identifying healthy and unhealthy tissue from MRI scans was 96.51%, demonstrating its usefulness.

Sravanthi et al. [18] suggested a model implemented in MATLAB. They first convert an image to grayscale, then apply filtering to the monochromatic version to reduce noise and other environmental distractions. The suggested system preprocesses the chosen image while employing several algorithms to find the tumor within it. Using the technique outlined in this study, brain tumor detection from MRI scans of the brain was found to be 97% accurate.

Tiwari et al. [19] proposed an automated technique for identifying multiple classes of brain tumors from MRI. The proposed advanced CNN model, which enables automatic learning of features from brain MRIs, has six learnable layers. The network's major objective was to train faster than traditional deep learning models while producing better classification results. The model successfully assigned brain images to four separate categories: pituitary tumor, meningioma, glioma, or no tumor (indicating that no tumor is present in the brain MRI scan). Its accuracy rate is 98%.

Gaur et al. [20] suggested a model that uses a multiple-input CNN strategy to overcome the difficulty of classifying images of lower quality in terms of noise and metal artefacts. A dataset of brain magnetic resonance images was used to train and test the suggested model, which was then analyzed with the SHAP and LIME algorithms. The accuracy rate of the model is 95%.

Aleid et al. [21] suggest using MRI scans to detect brain tumors in their early stages with a traditional automatic segmentation method. The model's foundation, the HSO, was developed to support MRI brain segmentation, and the parameters used were specific to the task. Both entropy and variance formulas are used to determine the thresholds that separate the histogram into colored portions. The model's accuracy is calculated to be 97.5%. Table 1 summarizes existing techniques for brain tumor detection.

Although brain tumor detection using deep learning has made significant progress, researchers and practitioners are still actively addressing several challenges, including limited data availability, class imbalance, generalization, data preprocessing, and integration into clinical practice. To overcome these challenges, this study proposes the use of several transfer learning techniques and CNNs to provide a unique method for classifying brain tumors. Specifically, we employed three well-known transfer learning architectures, namely VGG, ResNet, and MobileNet, and explored their performance in brain tumor detection.

Our proposed technique overcomes the overfitting problem that was the main issue in the paper by Chattopadhyay and Maitra [13]. In addition, no features were removed in the proposed algorithm, which still yields higher performance than other existing techniques. The proposed model thus reaches an accuracy of 98.66%, the highest among all the models compared. As a result, our approach can accurately and successfully identify brain tumors.

Table 1. Various techniques proposed with observations and limitations for brain tumor detection

Reference | Technique Proposed | Observations | Limitations
[13] | CNN | SVM on CNN was implemented, and various other activation algorithms were used to cross-check the work. | To avoid overfitting, about 5% of the whole dataset's images were removed.
[14] | 2D CNN | The training accuracy of the suggested 2D CNN was determined to be 96.47%. | The architecture may overfit if there are insufficient data in the training and testing sets or low learning rates.
[15] | CNN and ANN | CNN turns out to be the more accurate method for determining whether a brain tumor is present. | The resulting model was created through a trial-and-error method; an optimization technique is required.
[16] | Deep learning based depthwise separable CNN | The depthwise separable CNN was found to be more accurate than a typical CNN, with a 92% accuracy rate on the test set. | The model could be further enhanced to identify a particular kind of tumor and suggest an appropriate course of treatment.
[17] | Berkeley wavelet transform (BWT) and SVM | The accuracy of the testing results was 96.51%, proving the efficacy of the suggested method. | The dataset used is small.
[18] | Image segmentation | The project demonstrated that it is capable of providing overall accuracy of up to 97%. | If a category is not represented in the training data, unwanted behaviours may result.
[19] | CNN | The model successfully classified the brain pictures into four groups. | Involves little preprocessing.
[20] | CNN, LIME and SHAP | The planned study has a 94.64% accuracy rate. | Any feature removal significantly reduces performance.

3. Dataset

Tumors are abnormal masses of tissue that form when cells divide and grow uncontrollably. There are two main types of tumors: benign tumors and malignant tumors. Benign tumors are non-cancerous and typically grow slowly. They do not invade nearby tissues or spread to other parts of the body.

Benign tumors are usually encapsulated, meaning they are surrounded by a fibrous tissue capsule that separates them from surrounding tissues. Images of benign tumors are used for the proposed system model. The block diagram of the proposed brain tumor identification system is shown in Figure 1. The proposed brain tumor detection method uses pre-trained CNN models to categorize brain scans and detect the existence of tumors. The first step is to gather a dataset of brain MRI images that includes both tumor and non-tumor scans. The scans must then be pre-processed so that their features can be extracted. As part of the preparatory stages for brain tumor diagnosis using image data, the region of interest (ROI) containing the brain must be cropped, the images must be resized to a standard size, and normalization must be applied. Cropping the ROI helps isolate the relevant part of the image, typically the brain, by removing irrelevant portions such as the skull or background noise.

An image size of 240-pixel width and 240-pixel height is used in our proposed methodology. Resizing the images ensures they have the same dimensions, which is important for feeding them into a convolutional neural network (CNN) model. In our methodology, we cropped a new image out of the original image using the four extreme points (left, right, top, bottom) with a ratio of [1:1, 0:0].

This standardization also reduces computational complexity and memory requirements. Normalization is applied to rescale the pixel values of the images to a common range. We applied pixel-wise normalization, where each pixel value is scaled to a specific range (e.g., [0, 1] or [-1, 1]). This normalization is essential to prevent certain features from dominating the learning process due to differences in pixel intensity, and it improves the model's ability to learn meaningful features by reducing discrepancies in pixel intensity across different images.
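For illustration, the cropping, resizing, and normalization described above can be implemented with OpenCV as in the sketch below. This is a minimal sketch under the assumption that the brain is the largest bright contour in a thresholded scan; the helper names and threshold values are illustrative, not the authors' exact code.

import cv2

def crop_brain_region(image):
    # Threshold the blurred grayscale scan and keep the largest contour (assumed to be the brain).
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.erode(thresh, None, iterations=2)
    thresh = cv2.dilate(thresh, None, iterations=2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    # Four extreme points of the contour: left, right, top, bottom.
    left, right = tuple(c[c[:, :, 0].argmin()][0]), tuple(c[c[:, :, 0].argmax()][0])
    top, bottom = tuple(c[c[:, :, 1].argmin()][0]), tuple(c[c[:, :, 1].argmax()][0])
    return image[top[1]:bottom[1], left[0]:right[0]]

def preprocess_image(image):
    roi = crop_brain_region(image)
    roi = cv2.resize(roi, (240, 240))           # standard 240 x 240 input size
    return roi.astype("float32") / 255.0        # pixel-wise normalization to [0, 1]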

Figure 1. Proposed model for detection of brain tumor

After preprocessing, the dataset is split into training and testing subsets. The total number of examples for our system is 3000, of which 2400 are training examples and 600 are test examples. The CNN model is trained on the training set and assessed on the testing set. The architecture of the CNN model is developed next. We used pre-trained CNN models such as MobileNet, ResNet, and VGG that have already been trained on massive image datasets such as ImageNet. These models have learned broad image properties that are applicable to the detection of brain tumors. We take the pre-trained CNN model, remove the last fully connected layer(s) that were originally designed for the ImageNet classification task, and retain the convolutional layers, whose learned features are transferable to other tasks.
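As a concrete illustration of this step in Keras, a pre-trained backbone can be loaded without its ImageNet classifier and extended with a small binary classification head. The sketch below uses MobileNet as an example; the exact head is an assumption guided by Table 2, not the authors' verbatim code.

from tensorflow.keras.applications import MobileNet
from tensorflow.keras import layers, models

# Pre-trained convolutional base without the original ImageNet classification layers.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(240, 240, 3))
base.trainable = False                           # keep the transferred features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),             # single average-pooling layer (Table 2)
    layers.Dropout(0.5),                         # dropout rate 0.5 (Table 2)
    layers.Dense(1, activation="sigmoid"),       # one dense unit for tumor / no-tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])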

We analyze the model's effectiveness in detecting brain tumors after training by assessing its performance on a separate test set and computing metrics including recall, F1-score, precision, and accuracy.

4. Methods

Kaggle is an online community for people interested in machine learning and data science. It offers a collaborative environment where people and groups may work together on projects involving data analysis and artificial intelligence, access and exchange datasets, and take part in contests.

The dataset for this research was obtained from the website in [22]. The dataset's three folders contain 3060 MRI scans of the brain.

Only the photos in the "Yes" and "No" folders were used for our project. The "Yes" folder contains 1500 brain MRI scans with tumors, while the "No" folder contains 1500 scans that do not include any tumors. The scans come in various sizes, so resizing the images to the same dimensions is important for using a CNN model. Figure 2 shows some sample images from the dataset.

Figure 2. MRI images from dataset [22]
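As an illustration, the load_dataset() step used later in the pseudocode could be implemented as follows. The folder names ("yes", "no") follow the Kaggle dataset layout [22], while the root path and helper name are assumptions for this sketch.

import os
import cv2
import numpy as np

def load_dataset(root="brain_tumor_dataset"):
    # Tumor scans ("yes" folder) are labeled 1, non-tumor scans ("no" folder) are labeled 0.
    data, labels = [], []
    for folder, label in [("yes", 1), ("no", 0)]:
        for fname in os.listdir(os.path.join(root, folder)):
            img = cv2.imread(os.path.join(root, folder, fname))
            if img is not None:          # skip any unreadable files
                data.append(img)
                labels.append(label)
    return data, np.array(labels)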

5. Techniques Used

In this section, various models used in our research work to detect brain tumors are described.

5.1 CNN

CNNs are a type of deep learning algorithm that is very good at analyzing visual information such as pictures and videos, and they are widely employed in medical image processing [23]. A CNN is made up of numerous layers, including convolutional, pooling, and fully connected layers. Convolutional layers extract features from the input data, pooling layers lower the spatial dimensions while maintaining critical information, and fully connected layers carry out the final regression or classification operation based on the extracted features. In our model, each of the three convolutional layers is followed by an activation and a max-pooling layer, and a dense layer performs the categorization. The final dense layer is a single unit with a sigmoid activation function for binary classification.
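A minimal Keras sketch of such a network is shown below, with three convolution/ReLU/max-pooling stages and a single sigmoid output unit. Only the layer counts, strides, dropout, and activations follow Table 2; the filter sizes are illustrative assumptions.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(240, 240, 3)),
    layers.Conv2D(32, (3, 3), strides=(1, 1), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), strides=(1, 1), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), strides=(1, 1), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                      # dropout rate from Table 2
    layers.Dense(1, activation="sigmoid"),    # binary tumor / no-tumor output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])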

5.2 Transfer learning

Transfer learning is a machine learning technique that uses the knowledge obtained from solving one problem to enhance performance on a different but related problem. Transfer learning enables us to start with a pre-trained model and refine it on another task or dataset rather than building a model from scratch. By incorporating state-of-the-art models such as ResNet, MobileNet, and VGG into our project, we made full use of the potential of transfer learning. These models are described as follows:

- VGG (Visual Geometry Group): The VGG architecture is known for its simplicity and uniformity. It comprises a series of convolutional layers followed by max-pooling layers for downsampling. VGG's defining feature is its use of small 3x3 convolutional filters stacked together, which enable deeper network designs with fewer parameters than larger filters. VGG architectures are commonly referred to as VGG-16 and VGG-19.

- ResNet (Residual Network): He et al. [24] introduced ResNet, a CNN architecture, in their 2016 publication "Deep Residual Learning for Image Recognition". By introducing the idea of residual learning, it tackles the problem of vanishing gradients in deep neural networks. The different ResNet topologies are named after the total number of layers in the network.

Table 2. Number of layers and hyperparameters

Model Used | Convolutional Layers | Pooling Layers | Dense Layers | Hyperparameters
CNN | 03 | 03 (Max) | 01 | Strides (1,1); Dropout Layer 0.5; Activation Function 'relu'
VGG 16 | 13 | 05 (Max) | 03 | Dropout Layer 0.5; optimizer='adam', loss='binary_crossentropy'; Activation Function 'relu'
VGG 19 | 16 | 05 (Max) | 03 | Dropout Layer 0.5; optimizer='adam', loss='binary_crossentropy'; Activation Function 'relu'
ResNet 50 | 48 | 01 (Max) | 01 | Dropout Layer 0.5; Activation Function 'relu'
MobileNet | 27 | 01 (Average) | 01 | Dropout Layer 0.5; Activation Function 'relu'

- MobileNet: MobileNet is a CNN architecture designed for efficient computation on devices with limited resources, such as cell phones and embedded systems. It was introduced in the 2017 publication "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" [25]. MobileNet's core goal is to reduce the computational effort and memory requirements of CNN models while keeping acceptable accuracy. The number of layers and hyperparameters used in the different models applied in our proposed system are given in Table 2.

The pseudocode of the algorithm used in our model is shown below:

Step 1: Import libraries and load the dataset

The following libraries are used in our experiment: keras, tensorflow, numpy, imutils, matplotlib.pyplot, and cv2.

Step 2: Define functions for pre-processing

def preprocess_data(data):
    cropped_data = crop_brain_region(data)
    resized_data = resize_images(cropped_data)
    normalized_data = normalize_images(resized_data)
    return normalized_data

Step 3: Load and pre-process the dataset

data, labels = load_dataset()
preprocessed_data = preprocess_data(data)

Step 4: Split the dataset into training and testing sets
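One way to realize this step (an illustrative assumption, consistent with the 2400/600 split described earlier) is scikit-learn's train_test_split:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(preprocessed_data, labels, test_size=0.2, random_state=42)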

Step 5: Define and train the CNN model

cnn_model = build_cnn_model()

train_model(cnn_model, X_train, y_train)

Step 6: Define and train VGG16 model

vgg16_base = VGG16(weights='imagenet', include_top=False, input_shape=(240, 240, 3))
# Freeze the base weights and add new classification layers on top
vgg16_base.trainable = False
custom_model_vgg16 = add_classification_head(vgg16_base)  # illustrative helper for the new top layers
train_model(custom_model_vgg16, X_train, y_train)

Step 7: Repeat step 6 for other transfer learning techniques like VGG19, ResNet50, MobileNet

Step 8: Evaluate the models

The performance of the proposed models was verified using a total of six performance measures: Recall, Accuracy, Precision, F1-score, False Positive Rate, and False Negative Rate, which are explained below.

Accuracy

The proportion of correctly labeled subjects to the entire group of subjects.

Accuracy $= \frac{TP + TN}{TP + TN + FP + FN}$

Precision

The proportion of images accurately labeled "Yes" to all images that have been given that label.

Precision $= \frac{TP}{TP + FP}$

Recall

The proportion of images that genuinely contain a brain tumor that are correctly labeled as such.

Recall $= \frac{TP}{TP + FN}$

F1-score

The F1-score is the harmonic mean of Precision and Recall.

F1-score $= \frac{2 \times (Recall \times Precision)}{Recall + Precision}$

False Positive Rate

The ratio of images incorrectly labeled "Yes" to all actual "No" images.

$FPR = \frac{FP}{TN + FP}$

False Negative Rate

The ratio of images incorrectly labeled "No" to all actual "Yes" images.

$FNR = \frac{FN}{TP + FN}$
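For reference, all six measures can be computed directly from the confusion-matrix counts. The small helper below is an illustrative sketch (not part of the original pipeline), shown here with the MobileNet counts reported in Table 3.

def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (tn + fp)   # false positive rate
    fnr = fn / (tp + fn)   # false negative rate
    return accuracy, precision, recall, f1, fpr, fnr

# MobileNet counts from Table 3: TP=292, FP=3, TN=300, FN=5
# -> accuracy ≈ 0.9867, precision ≈ 0.9898, recall ≈ 0.9832, F1 ≈ 0.9865, FPR ≈ 0.0099, FNR ≈ 0.0168
print(classification_metrics(tp=292, fp=3, tn=300, fn=5))

These values closely match the MobileNet row of Table 4.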

6. Evaluation Metrics and Analysis of Result

Several performance metrics are used in this section to assess the effectiveness and performance of our various brain tumor classification models.

Table 3. Confusion matrix values (TP, FP, TN, FN) for the various models

Techniques | TP | FP | TN | FN
CNN | 285 | 10 | 288 | 17
VGG16 | 290 | 5 | 295 | 10
VGG19 | 289 | 6 | 290 | 15
ResNet50 | 255 | 40 | 269 | 36
MobileNet | 292 | 3 | 300 | 5

Table 3 shows the True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) counts obtained from the confusion matrices of the various models.

From the above table we can infer that MobileNet clearly shows the lowest False Negative Rate. We want the false negative rate to be as low as possible, because we do not want a patient who actually has a brain tumor to be classified as non-tumorous.

Figures 3-7 show graphs of the loss and accuracy against the number of epochs; the x-axis indicates the number of epochs. Over time, the accuracy curve exhibits a growing trend, which denotes an improvement in the model's accuracy in predicting samples, while the loss curve shows that the difference between predicted and actual values is decreasing.

Figure 3. Accuracy, loss v/s epoch graph for CNN

Figure 4. Accuracy, loss v/s epoch graph for VGG16

Figure 5. Accuracy, loss v/s epoch graph for VGG19

Figure 6. Accuracy, loss v/s epoch graph for ResNet50

Figure 7. Accuracy, loss v/s epoch graph for MobileNet
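Curves like those in Figures 3-7 can be generated from the Keras training history. The following is an illustrative sketch that assumes the compiled model and the X_train/X_test, y_train/y_test arrays from the earlier steps; the epoch count and batch size are arbitrary examples, since the paper does not state the values used.

import matplotlib.pyplot as plt

history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=32)

fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))
ax_acc.plot(history.history["accuracy"], label="training accuracy")
ax_acc.plot(history.history["val_accuracy"], label="validation accuracy")
ax_acc.set_xlabel("epoch")
ax_acc.legend()
ax_loss.plot(history.history["loss"], label="training loss")
ax_loss.plot(history.history["val_loss"], label="validation loss")
ax_loss.set_xlabel("epoch")
ax_loss.legend()
plt.show()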

Table 4. Performance evaluation of various models

Methods | Accuracy | F1-Score | Recall | Precision
CNN | 0.955 | 0.9552 | 0.9442 | 0.9664
VGG16 | 0.9755 | 0.9752 | 0.9672 | 0.9833
VGG19 | 0.965 | 0.965 | 0.9508 | 0.9797
ResNet50 | 0.8733 | 0.8762 | 0.8819 | 0.8705
MobileNet | 0.9866 | 0.9868 | 0.9836 | 0.99

Table 4 indicates that MobileNet achieves an accuracy of 98.66%, which is the highest among all the models; as a result, our approach can accurately and successfully identify brain tumors. As a limitation, however, the dataset used for training may lack diversity in terms of demographic factors, tumor subtypes, or imaging modalities, which could affect the model's ability to generalize to a broader range of brain tumor cases.

Table 5. Performance comparison with other techniques

Reference | Technique Used | Accuracy (%)
[14] | 2D CNN | 96.47
[15] | CNN and ANN | 89
[16] | Deep learning based depthwise separable CNN | 92
[17] | Berkeley wavelet transform (BWT) and SVM | 96.51
[18] | Image segmentation | 97
[20] | CNN, LIME and SHAP | 94.64
Proposed model | MobileNet | 98.66

Table 5 shows the comparison between the techniques used in the various papers and the proposed model. In contrast to conventional methods, our approach combines recently discovered biomarkers with sophisticated neural network architectures to predict tumor development and treatment response more precisely and consistently. This combination not only improves prediction precision but also provides a more profound understanding of the underlying dynamics and patterns of brain tumors.

7. Conclusion and Future Research Directions

Although deep learning has made great strides in the detection of brain tumors, researchers and practitioners are still working to solve a number of issues, including limited and imbalanced data availability, generalization, data preprocessing, and integration into clinical practice. To get around these difficulties, this work suggests using CNNs and a variety of transfer learning strategies to create a distinct approach to categorizing brain tumors. This paper focuses on the efficacy of various pre-trained models, and our findings showed that MobileNet has the highest accuracy in classifying brain tumors, i.e., 98.66%.

By using the data and characteristics obtained from sizable datasets across numerous domains through transfer learning, the proposed model is able to increase precision for the specific aim of diagnosing brain tumors.

The accurate categorization of brain tumors has important practical implications in addition to improving our knowledge of brain tumor histology. Precise and effective categorization can simplify diagnosis and therapy formulation, resulting in more specific and successful patient treatments. Furthermore, the creation of strong classification models may benefit medical imaging more broadly, opening the door to better neuro-oncology outcomes for patients and greater diagnostic precision.

Further research can explore different architectures, hyperparameter tuning, or even combinations of multiple models to potentially improve accuracy even further. The generalization ability of the model can be improved by gathering a bigger and more varied dataset; it would be beneficial to include brain tumor samples from different populations, age groups, and various tumor subtypes to ensure the model's robustness and effectiveness across different scenarios. By analyzing sequential scans of patients, researchers can gain insights into tumor growth patterns, treatment response, and recurrence prediction. Novel deep learning architectures can also be designed for brain tumor detection, for example 3D convolutional neural networks (CNNs), attention mechanisms, recurrent neural networks (RNNs), or graph convolutional networks (GCNs), to better capture spatial and temporal dependencies in brain tumor imaging data. Interpretability is crucial in medical imaging applications, as clinicians need to understand the reasoning behind model predictions; research can explore techniques like attention maps, saliency maps, or generative models to provide visual explanations for model decisions. Finally, uncertainty estimation can provide valuable insights into model reliability and confidence, enabling clinicians to make more informed decisions based on model predictions.

Acknowledgment

The authors would like to extend their gratitude to King Saud University (Riyadh, Saudi Arabia) for funding this research through Researchers Supporting Project number (Grant No.: RSP2024R260).

  References

[1] Brain Tumor - statistics. Cancer.Net. https://www.cancer.net/cancer-types/brain-tumor/statist-ics#:~:text=Brain%20tumors%20account%20for%2085,the%20United%20States%20in%202023, accessed on May 31, 2023.

[2] The Nervous System: An introduction, classification, and function. https://anatomynotes.org/nervous-system/the-nervous-system-an-introduction-classification-and-function/, accessed on April 23, 2023.

[3] Du, W., Li, S., Wang, Z. (2019). Research on the human brain, the external brain and the public external brain. Journal of Physics: Conference Series, 1168(3): 032053. https://doi.org/10.1088/1742-6596/1168/3/032053

[4] Caire, M.J., Reddy, V., Varacallo, M. (2018). Physiology, Synapse. StatPearls - NCBI Bookshelf.

[5] Kleihues, P., Burger, P.C., Scheithauer, B.W. (1993). The new WHO classification of brain tumours. Brain Pathology, 3(3): 255-268. https://doi.org/10.1111/j.1750-3639.1993.tb00752.x

[6] Deimling, A. (2009). Gliomas (Recent Results in Cancer Research, 171). Springer. 

[7] Brain Tumor Basics. https://www.brainfacts.org/Diseases-and-Disorders/Cancer/2012/Brain-Tumor-Basics/, accessed on April 27, 2023. 

[8] Irmak, E. (2021). Multi-classification of brain tumor MRI images using deep convolutional neural network with fully optimized framework. Iranian Journal of Science and Technology, Transactions of Electrical Engineering, 45(3): 1015-1036. https://doi.org/10.1007/s40998-021-00426-9

[9] Abd El Kader, I., Xu, G., Shuai, Z., Saminu, S., Javaid, I., Ahmad, I.S., Kamhi, S. (2021). Brain tumor detection and classification on MR images by a deep wavelet auto-encoder model. Diagnostics, 11(9): 1589. https://doi.org/10.3390/diagnostics11091589

[10] Özcan, H., Emiroğlu, B.G., Sabuncuoğlu, H., Özdoğan, S., Soyer, A., Saygı, T. (2021). A comparative study for glioma classification using deep convolutional neural networks. Mathematical Biosciences and Engineering, 18: 1550-1572. https://doi.org/10.3934/mbe.2021080

[11] Abd El Kader, I., Xu, G., Shuai, Z., Saminu, S., Javaid, I., Salim Ahmad, I. (2021). Differential deep convolutional neural network model for brain tumor classification. Brain Sciences, 11(3): 352. https://doi.org/10.3390/brainsci11030352

[12] Hao, R., Namdar, K., Liu, L., Khalvati, F. (2021). A transfer learning–based active learning framework for brain tumor classification. Frontiers in Artificial Intelligence, 4: 635766. https://doi.org/10.3389/frai.2021.635766

[13] Chattopadhyay, A., Maitra, M. (2022). MRI-based brain tumour image detection using CNN based deep learning method. Neuroscience Informatics, 2(4): 100060. https://doi.org/10.1016/j.neuri.2022.100060

[14] Saeedi, S., Rezayi, S., Keshavarz, H., Niakan Kalhori, S.R. (2023). MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Medical Informatics and Decision Making, 23(1): 16. https://doi.org/10.1186/s12911-023-02114-6

[15] Brindha, P.G., Kavinraj, M., Manivasakam, P., Prasanth, P. (2021). Brain tumor detection from MRI images using deep learning techniques. In IOP Conference Series: Materials Science and Engineering, 1055(1): 012115. https://doi.org/10.1088/1757-899X/1055/1/012115

[16] Bathe, K., Rana, V., Singh, S., Singh, V. (2021). Brain tumor detection using deep learning techniques. In Proceedings of the 4th International Conference on Advances in Science & Technology (ICAST2021). https://doi.org/10.2139/ssrn.3867216

[17] Bahadure, N.B., Ray, A.K., Thethi, H.P. (2017). Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. International Journal of Biomedical Imaging, 2017: 9749108. https://doi.org/10.1155/2017/9749108

[18] Sravanthi, N., Swetha, N., Devi, P.R., Rachana, S., Gothane, S., Sateesh, N. (2021). Brain tumor detection using image processing. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 7(3): 348-352. https://doi.org/10.32628/CSEIT217384

[19] Tiwari, P., Pant, B., Elarabawy, M.M., Abd-Elnaby, M., Mohd, N., Dhiman, G., Sharma, S. (2022). CNN based multiclass brain tumor detection using medical imaging. Computational Intelligence and Neuroscience, 2022: 1830010. https://doi.org/10.1155/2022/1830010

[20] Gaur, L., Bhandari, M., Razdan, T., Mallik, S., Zhao, Z. (2022). Explanation-driven deep learning model for prediction of brain tumour status using MRI image data. Frontiers in Genetics, 13: 822666. https://doi.org/10.3389/fgene.2022.822666

[21] Aleid, A., Alhussaini, K., Alanazi, R., Altwaimi, M., Altwijri, O., Saad, A.S. (2023). Artificial intelligence approach for early detection of brain tumors using MRI images. Applied Sciences, 13(6): 3808. https://doi.org/10.3390/app13063808

[22] Data source. https://www.kaggle.com/datasets/ahmedhamada0/brain-tumor-detection/, accessed on May 5, 2023.

[23] Hossain, T., Shishir, F.S., Ashraf, M., Al Nasim, M.A., Shah, F.M. (2019). Brain tumor detection using convolutional neural network. In 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, pp. 1-6. https://doi.org/10.1109/ICASERT.2019.8934561

[24] He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770-778. https://doi.org/10.1109/CVPR.2016.90

[25] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. https://doi.org/10.48550/arXiv.1704.04861