Brain Tumor Classification in MRI Using Hybrid ASA-Based Deep Learning and Masi-Entropy Multilayer Thresholding Segmentation with Sunflower Optimization

Kannan Balasuburamani*, Karthigai Lakshmi Shanmugavel

Department of ECE, Ramco Institute of Technology, Rajapalayam 626117, India

Department of ECE, SSM Institute of Engineering and Technology, Dindigul 624002, India

Corresponding Author Email: kannabalasubramani@ritrjpm.ac.in

Pages: 223-241 | DOI: https://doi.org/10.18280/ts.420120
Received: 30 January 2024 | Revised: 25 June 2024 | Accepted: 25 September 2024 | Available online: 28 February 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Accurately diagnosing brain tumors at an early stage is critical for successful therapy and can save the lives of many people worldwide. Magnetic resonance imaging (MRI) scans are frequently employed for tumor detection because of their noninvasive nature, sparing patients the discomfort of undergoing a biopsy. The process of identifying tumors is arduous and time-consuming because of the extensive array of three-dimensional (3D) images generated by an MRI scan of a patient's brain from various angles. Moreover, the diverse sizes, positions, and shapes of brain tumors pose challenges for their identification and classification. Consequently, computer-aided diagnostic (CAD) systems have been suggested as solutions for detecting brain tumors. This paper presents a new unified deep learning (DL) model called enhanced AlexNet for brain tumor detection and classification. Initially, a fast nonlocal means (FNLM) filter was used for preprocessing. The tumor nodule was segmented from the MR images using Masi-entropy-based multilevel thresholding with the sunflower optimization algorithm (MasiEMT-SFO). VGG-19 was used to extract various features in the feature extraction process. The extracted features were then classified using a hybrid classifier called the enhanced AlexNet classifier algorithm with the Anopheles search algorithm (ASA). The proposed hybrid classifier accurately detected brain tumors. The proposed model was implemented in MATLAB using two Kaggle datasets. The experimental results show that the proposed enhanced AlexNet algorithm outperforms the existing methods, providing compelling evidence for its application to other diseases. The proposed model outperformed existing methods in distinguishing abnormal and healthy brain tissue from MRI images, with an F1-score of 97.21%, specificity of 98.76%, precision of 97.61%, sensitivity of 96.73%, and accuracy of 99.76%. These findings confirm the efficacy of the proposed approach.

Keywords: 

MRI, deep learning, brain tumor detection, fast non-local means, MasiEMT-SFO, VGG-19, enhanced AlexNet algorithms, brain tumor classification

1. Introduction

Brain tumors arise from excessive cell mutations or proliferation, leading to abnormal cell clusters that interfere with brain function and damage healthy cells [1]. Common symptoms of brain tumors include memory issues, fatigue, personality shifts, nausea, speech difficulties, and vision problems [2, 3]. Radiologists have traditionally detected and classified malignancies in brain images manually [4]. Magnetic resonance imaging (MRI) and computed tomography (CT) are commonly used to obtain data on various regions of the human body [5, 6].

Physicians use MRI of the brain to perform two types of tasks: (i) determining whether an MRI image is healthy or tumorous [7-9] and (ii) classifying an MRI image into distinct types [10-12]. Moreover, manual identification is time-consuming, inefficient, and unreliable when dealing with a vast amount of imaging data. An inaccurate diagnosis can have serious consequences, possibly leading to patient death. Furthermore, classifying brain tumors into multiple classes presents more challenges than binary classification. There is an urgent need for a reliable CAD system for tumor categorization to assist physicians [13].

Although medical examinations such as MRI, CT, positron emission tomography (PET), and magnetic resonance spectroscopy (MRS) assist healthcare professionals in detecting brain tumors and are regarded as essential tools, these techniques have some limitations, such as failure to diagnose infected tumors, particularly in the early stages, which can result in false negative results. Some medical examinations, such as biopsies and intrusive imaging, can be invasive and unpredictable. Furthermore, these techniques are extremely expensive, and radiation exposure during CT and MRI scan procedures can gradually increase the possibility of developing a brain tumor. Other medical treatments, including angiography, can cause pain. As a result, all the implemented approaches are entirely dependent on the analysis of experts who perform these evaluations according to their expertise.

The algorithms suggested by various researchers have produced excellent outcomes and made significant contributions to the medical profession by assisting in disease identification and classification; however, the possibility of obtaining more accurate, efficient, and improved results when recognizing brain tumors remains. Furthermore, the majority of existing models face difficulties such as underfitting, poor generalization, imbalanced datasets, overfitting, the need for expensive computational resources, and complexity.

Recent developments have greatly improved the ability to identify and classify patterns in medical images, with a focus on MRI data for tumor detection. Although prior studies have demonstrated the potential of ML and DL algorithms in this area, they often have significant drawbacks, including high computational demands. Moreover, biases in training data and the complex, often opaque nature of DL model decision-making can further hinder their effectiveness [14]. To overcome these issues, it is essential to use robust DL models trained on diverse datasets and to apply methods like cross-validation to ensure that the model generalizes well. The approach proposed here addresses these issues by delivering precise and prompt evaluations, utilizing a straightforward, end-to-end DL algorithm that is practical for real-time use.

To address these issues, the proposed enhanced AlexNet classifier with the Anopheles Search Algorithm (ASA) was combined with a DL algorithm in the suggested research to achieve more accurate, efficient, and improved results in the medical field. The enhanced AlexNet classifier, with its deep convolutional layers, excels at extracting high-level features from MRI images that are critical for distinguishing between tumor and nontumor regions. It detects complex patterns and textures that are indicative of brain tumors. ASA integration allows the hybrid system to fine-tune the feature-representation process. The ASA efficiently tunes the AlexNet architecture's hyperparameters, such as the learning rate, batch size, and regularization parameters. This optimization process ensures that the classifier is well suited to the unique features of brain tumor MRI datasets. The hybrid classifier aims to detect and classify brain tumors with high accuracy, sensitivity, and specificity by combining DL capabilities with optimization techniques. The hybrid approach, which takes advantage of both enhanced AlexNet's classifier capabilities and the ASA's optimization, has the potential to achieve cutting-edge results in brain tumor classification tasks. It seeks to outperform traditional methods by incorporating advanced techniques for classification and parameter optimization.

The novelty of the proposed method is summarized as follows.

·The enhanced AlexNet algorithm was optimized using the Anopheles Search Algorithm (ASA) for detecting brain tumors.

·The proposed technique is flexible and can be adapted to variations in tumor size and location.

·To compare the proposed model with existing methods, extensive simulations were performed using publicly available Kaggle datasets: the Brain Tumor Classification (BTC) and CE-MRI Figshare datasets.

The paper is structured as follows. Section 2 reviews existing models and techniques. Section 3 details the methodology. Section 4 presents the experimental findings, and Section 5 concludes with future implications.

2. Literature Survey

Khan et al. [15] used two openly accessible datasets, with 3064 and 152 MRI images respectively, for detecting binary (benign and malignant) and multiclass (glioma, pituitary, and meningioma) brain tumors using two DL techniques. Because a considerable number of MRIs is required for training, this work initially applied a 23-layer convolutional neural network (CNN) to the first dataset to develop the framework. Overfitting is an issue in the suggested "23-layer CNN" framework; transfer learning and reflections of the framework were employed to overcome this challenge. According to the research results, the model's classification accuracy ranges from 97.8% to 100%.

Ranjbarzadeh et al. [16] suggested a strategy that reduces computing duration and resolves the overfitting problem in cascaded DL networks: a simple yet effective cascaded CNN (C-CNN) that analyzes MRI scans of minor areas within every portion and retrieves both global and local characteristics in distinct ways. Younis et al. [17] tested their approach on a brain tumor diagnosis dataset containing 253 MRI scans, 155 of which exhibit tumors. The suggested approach can detect brain tumors in MRI scans and surpasses existing traditional techniques on test data (F1-scores of 91.29%, 91.78%, and 92.6%, and precisions of 98.15%, 96%, and 98.41%, respectively). Tummala et al. [18] proposed an ensemble method that outperforms earlier CNN models for MRI tumor diagnosis at 384 × 384 resolution, with a total assessment accuracy of 98.7% and 99.4% specificity. Using an identical ensemble method, the scan correctly diagnosed the glioma. L/32 was the most accurate individual model, with a total assessment accuracy of 98.2% at 384 × 384 resolution. At the same resolution, each of the four ViT simulations obtained a total assessment accuracy of 98.7%, outperforming a single model with two resolutions and its 224 × 224 resolution set.

ZainEldin et al. [19] proposed a CNN-based technique for classifying brain tumor types. The hyperparameters were configured before constructing the pre-trained model with Inception-ResnetV2. To enhance brain tumor detection, the suggested approach employs this commonly utilized pretrained model, and its output is binary: 0 (normal) or 1 (tumor). Because optimized parametric variables boost CNN efficiency, the research findings reveal that BCM-CNN obtains outstanding outcomes as a classifier. Polat and Güngen [20] showed that the transfer learning method has an impressive level of efficacy compared with previous research, with the best classification performance of 99.02% for ResNet50 using Adadelta.

Asad et al. [21] presented a classification of brain tumors using a deep CNN with a stochastic gradient descent (SGD) optimization technique. Kesav and Jibukumar [22] aimed to use a low-computational-cost framework to minimize the execution time of existing frameworks and to develop an approach for detecting brain cell tumors. A similar framework was then utilized as a feature extraction tool for a region-based CNN (RCNN) to identify brain tumor locations in pre-stage categorized tumor MRI samples and to define tumor regions using bounding boxes. Alsubai et al. [23] proposed an integrated CNN-long short-term memory (CNN-LSTM) DL model evaluated on an MRI brain scan dataset. After preprocessing the input, the CNN retrieves the essential components from the image. The suggested approach accurately predicted brain tumor categorization with 99.1% accuracy and 98.9% recall. Khairandish et al. [24] reported excellent CNN results on openly accessible MRI brain image datasets.

Yoganathan et al. [25] demonstrated that the k-nearest neighbor (KNN) method effectively segments normal cells and tumors in brain MRI images, achieving a mean DSC value of 0.87 for tumor segmentation, surpassing previous methods in both efficiency and accuracy. Öksüz et al. [26] employed pre-trained models like ShuffleNet and ResNet-18 to extract features from tumor regions, leading to an 11.72% improvement in sensitivity through feature fusion and ROI expansion on publicly available datasets. Kokkalla et al. [27] introduced an approach that outperformed existing methods, achieving an average accuracy of 99.69% even in the presence of noisy data. Kadry et al. [28] used resized brain MRI images with various classifiers, obtaining significant classification accuracy with models like ResNet50, VGG16, and VGG-19. Mandle et al. [29] developed a kernel-based SVM for brain tumor classification.

Tiwari et al. [30] proposed a deep CNN model with six learning layers that extracts features from MRI brain images; it generated better evaluation results while learning faster than typical DL models, achieving 99.9% accuracy. Maqsood et al. [31] suggested an approach that includes five main steps: 1) linear contrast stretching, 2) a unique 17-layer deep neural network, 3) an enhanced MobileNetV2 framework, 4) an entropy-based control approach, and 5) multiclass support vector machines. The proposed tumor identification and categorization approach surpasses previous visual and quantitative approaches, with accuracies of 97.47% and 98.92%, respectively. Wang et al. [32] proposed an artificial intelligence (AI)-based screening tool for chest CT images. The proposed CNN employs multiple novel algorithms, including rank-based average clustering and multipass data segmentation, while graph convolutional networks (GCN) are used to learn the related representations; the proposed FGCNet model outperformed all current methods. Zhang et al. [33] proposed an effective multiple sclerosis classification model that uses AlexNet as the base model and transfer learning to adapt it to classifying multiple sclerosis brain images; it outperforms seven advanced MS classification methods [33, 34].

Lamba et al. [35] presented a new integrated technique that employs advanced AI techniques such as DL and supervised learning algorithms. The newly proposed model outperformed the current methods on several metrics, including 98.87% accuracy, 99.09% precision, 98.73% recall, 99.02% specificity, and an F-measure of 98.91%. Appiah et al. [36] proposed a simplified model for analyzing 2D images from MRI scans for precise brain tumor detection that employs proper orthogonal decomposition (POD) and a CNN. Their study, which used explainable AI with SHAP, found that MobileNetV2 had superior predictive capabilities in delineating tumor boundaries. Islam et al. [37] developed an approach that effectively distinguished between brain tumors and healthy data by integrating three different datasets. Agarwal et al. [38] proposed a solution for enhancing contrast in low-quality MRI images: Optimized Dual-Tree Wavelet Contrast Enhancement (ODTWCHE). Yang et al. [39] formed a hybrid Gated Recurrent Unit (GRU) network with the Enhanced Hybrid Dwarf Mongoose Optimization (EHDMO) algorithm, which is effective in handling sequential data, such as natural language and time series. Table 1 summarizes existing tumor classification techniques.

Table 1. Existing brain tumor classification techniques [19]

| Reference | Technique Used | Accuracy | Limitations |
|---|---|---|---|
| [15] | CNN-based DL model | 97.80% | Requires a substantial number of images |
| [17] | VGG-16 deep CNN | 98.50% | Small dataset |
| [18] | ImageNet-based ViT | 98.70% | Tumor size and location need to be considered |
| [19] | BCM-CNN | 99.98% | Additional optimization steps lengthen processing |
| [20] | Transfer learning-based classification | 99.02% | Classification accuracy must be improved |
| [21] | SGD | 99.50% | Generalization difficulties on new datasets |
| [22] | RCNN-based model | 98.21% | Detection is restricted |
| [23] | CNN-LSTM | 99.10% | Performance needs to be improved |
| [24] | Hybrid CNN-SVM | 98.49% | Tumor shape and location must be considered |
| [26] | k-NN and SVM classifiers | 97.25% | Accuracy needs to be improved |
| [27] | IRNet v2 | 99.69% | Large number of parameters |
| [28] | Hybrid DL-based | 96.00% | Accuracy needs to be improved |
| [29] | Kernel-based SVM | 97.00% | The dataset is small; accuracy needs to be improved |
| [30] | Multi-classification model | 99.00% | Accuracy needs to be improved |
| [31] | DL-based multimodal classification | 97.80% | Accuracy needs to be improved |

2.1 Problem statement

Existing studies on brain tumor identification and classification cannot obtain better results because many methods use unbalanced or limited datasets. Tumors vary in size and shape, making them more challenging to recognize and resulting in poor efficiency. Binary classification of tumors was a crucial problem in previous systems, creating more ambiguity for physicians. The lack of data limits physicians' ability to obtain reliable results. Previous DL algorithms enhanced tumor identification efficiency but required vast datasets for analysis. Moreover, brain tumor detection has a high computational cost and requires extensive training.

In previous approaches, analyzing brain MRIs to segment and detect tumor regions has been difficult and time consuming. Earlier methods for detecting tumors using genetic algorithms (GA) require less data, but formulating the objective function is difficult, and GAs are nondeterministic and time consuming.

3. Proposed Methodology

The proposed approach is divided into five stages, as illustrated in Figure 1. Brain tumor images were initially collected from the Kaggle dataset. Each image was converted to grayscale, and the noise was removed using an FNLM filter. For segmentation, MasiEMT-SFO was used; the image was segmented based on histogram values, which increased segmentation accuracy. Subsequently, the features were obtained using VGG-19, and the Minimum Redundancy Maximum Relevance (mRMR) method was used to select the best features from the feature extraction process. Finally, the selected features were passed to a hybrid classifier, the AlexNet classifier improved using the ASA, for classification. This also provides information regarding tumor malignancy.

Figure 1. Proposed methodology block diagram

3.1 Dataset

3.1.1 BTC dataset

The brain tumor classification (BTC) dataset [40] used in this study supports finding appropriate DL classifiers by training and testing multiple learning-based algorithms on MRI data. The dataset contains two sets of MRI brain images: testing and training. This study only utilized MRIs of meningiomas, pituitary gland tumors, and gliomas. Table 2 presents a description of the BTC dataset, and Figure 2 shows sample images.

Table 2. BTC dataset description

| Tumor Type | Number of Slices | Training Set (80%) | Testing Set (20%) |
|---|---|---|---|
| Pituitary | 899 | 827 | 72 |
| Meningioma | 937 | 822 | 115 |
| Glioma | 926 | 826 | 100 |

Figure 2. Sample MRI dataset

3.1.2 CE-MRI Figshare dataset

The publicly available CE-MRI Figshare dataset [41] was the second dataset used to classify tumors. The T1 modality highlights particular features of each multiclass brain tumor. The most recent version of this dataset contains 708 images of meningiomas, 1426 images of gliomas, and 930 images of pituitary tumors. We randomly divided the dataset by allocating 80% for training and 20% for testing.

3.2 Preprocessing

Preprocessing is a critical step in brain tumor detection, as it enhances detection accuracy by removing unnecessary elements from MRI scans, such as surrounding tissues or blood vessels. The images are first converted to grayscale to eliminate extraneous data, simplifying the processing and reducing data demands. Following this, noise is removed using the Fast Non-Local Means (FNLM) filter [42], which is particularly effective in preserving image structure while reducing noise.

3.2.1 Grayscale conversion

Converting to grayscale minimizes storage and processing requirements, making the subsequent analysis more efficient.

3.2.2 Filtering

Unlike conventional filters that may blur the entire image, the FNLM filter retains crucial edge information by weighting pixel values based on their Euclidean distance. This approach, now feasible due to advances in computing power, has proven effective in enhancing image clarity, as demonstrated by qualitative and quantitative analyses.

\mathrm{NL}[i](M)=\sum_{N \in i} W(M, N)\, i(N)    (1)

where W(M, N) is the weight assigned to the pixel pair M and N, with 0 \leq W(M, N) \leq 1 and \sum_{N \in i} W(M, N)=1.

Here, i(N) and i(M) denote the luminance of pixels N and M, respectively, and Eq. (2) establishes weight W(M, N) regarding the similarity between pixels M and N.

W(M, N)=\frac{1}{Z(M)} e^{-\frac{\left\|P\left(X_M\right)-P\left(X_N\right)\right\|^2}{D^2}}    (2)

where XM and XN are the patch vectors for pixels M and N, and Z(M) is the normalization coefficient, determined as in Eq. (3).

Z(M)=\sum_N e^{-\frac{\left\|P\left(X_M\right)-P\left(X_N\right)\right\|^2}{D^2}}    (3)

The coefficient D in Eqs. (2) and (3) controls how much noise is eliminated; this value is typically a constant multiplied by the standard deviation. A larger D improves the efficacy of noise elimination but increases image smoothing.

Algorithm 1: Preprocessing pseudocode

Input: The authenticated input image and its size

Output: Processed image

Steps:

1. Estimate the pixel values.

2. Calculate the intensity difference between adjacent pixels as ||P(XM) - P(XN)||.

3. Obtain the weight value for each region using Eq. (2).

4. Filter the image using Eq. (1).

3.3 Segmentation

Segmenting images is an essential component of medical diagnosis, and image-based segmentation can be challenging. The following steps are used to segment infected brain areas on MRI: the preprocessed MRI brain tumor image is first converted into a binary image, which is then segmented in the next stage. The MasiEMT-SFO approach was employed in this study to segment MRI tumor images efficiently.

3.3.1 MasiEMT-SFO

A new multi-level threshold-based image segmentation approach was developed using Masi entropy as the objective function. While bi-level thresholding is effective for segmenting images with a single object against a background, it often falls short when dealing with complex images containing multiple objects; in such cases, multi-level thresholding provides better segmentation results. To optimize the threshold search and reduce computational costs, metaheuristic algorithms are commonly used. This study combines the sunflower optimization (SFO) algorithm with Masi entropy for multi-level threshold-based image segmentation, significantly lowering computational complexity. The SFO algorithm optimizes the search space to find ideal thresholds, keeping the segmentation process efficient even as the number of thresholds increases. For the segmentation of an image, we assume an array of N thresholds {T1, ..., TN}. These thresholds partition the image into N+1 classes: the pixel groups G0, G1, ..., GN cover the gray-level ranges [0, T1], [T1+1, T2], ..., [TN+1, L-1], respectively. For multilevel thresholding, the Masi entropy is expressed as follows:

\begin{gathered}\mathrm{M}_{\mathrm{R}}\left(\mathrm{I} \mid\left\{\mathrm{T}_1, \ldots, \mathrm{~T}_{\mathrm{N}}\right\}\right) =\mathrm{M}_{\mathrm{R}}\left(\mathrm{G}_0\right)+\mathrm{M}_{\mathrm{R}}\left(\mathrm{G}_1\right)+\ldots+\mathrm{M}_{\mathrm{R}}\left(\mathrm{G}_{\mathrm{N}}\right)\end{gathered}    (4)

where,

\mathrm{M}_{\mathrm{R}}\left(\mathrm{G}_0\right)=\frac{\log \left(1-(1-R) \sum_{i=1}^{T_1} \frac{H_i}{W_0} \log \left(\frac{H_i}{W_0}\right)\right)}{1-R}    (5)

\mathrm{M}_{\mathrm{R}}\left(\mathrm{G}_1\right)=\frac{\log \left(1-(1-R) \sum_{i=T_1+1}^{T_2} \frac{H_i}{W_1} \log \left(\frac{H_i}{W_1}\right)\right)}{1-R}    (6)

\mathrm{M}_{\mathrm{R}}\left(\mathrm{G}_{\mathrm{N}}\right)=\frac{\log \left(1-(1-R) \sum_{i=T_N+1}^{L-1} \frac{H_i}{W_N} \log \left(\frac{H_i}{W_N}\right)\right)}{1-R}    (7)

where, \mathrm{W}_0=\sum_{i=1}^{T_1} H_{\mathrm{i}}, \mathrm{W}_1=\sum_{i=T_1+1}^{T_2} H_{\mathrm{i}}, \ldots, \mathrm{W}_{\mathrm{N}}=\sum_{i=T_N+1}^{L-1} H_{\mathrm{i}}.

The SFO algorithm simulates the pollination process of the two nearest sunflower plants in their movement towards the sun. Sunflowers have a consistent life cycle: they turn with the sun each day, much like the hands of a clock, and at night they face the reverse direction, patiently waiting for the sun to rise in the morning. The inverse square law of radiation is another important element of this nature-based optimization method: the amount of radiation a plant receives falls off with its distance from the sun, so closer plants absorb more solar radiation while longer distances result in less absorption. Hence, similar considerations apply to this study: moving closer to solar sources maximizes the potential for global optimization, as shown in Figure 3. The calorie intake Q of each plant was calculated using Eq. (8):

\mathrm{Q}=\frac{P}{4 \pi d^2}    (8)

where Q is the quantity of heat obtained, P is the power of the sun, and d is the separation between the sun Y* (the best solution) and the sunflower Yi.

Figure 3. Flowchart of the sunflower optimization algorithm

The direction vector of sunflowers is defined as follows:

\vec{S}=\frac{\mathrm{Y}^*-\mathrm{Y}_{\mathrm{i}}}{\left\|\mathrm{Y}^*-\mathrm{Y}_{\mathrm{i}}\right\|} \mathrm{i}=1,2, \ldots \ldots, \mathrm{~m}    (9)

where, Y* represents the global best solution, Yi represents the solution i, and m represents the population size. Eq. (10) shows the calculation of each sunflower Yi’s step size (D).

D=\mu \times p_i\left(\left\|\mathrm{Y}_{\mathrm{i}}-\mathrm{Y}_{\mathrm{i}-1}\right\|\right) \times\left\|\mathrm{Y}_{\mathrm{i}}-\mathrm{Y}_{\mathrm{i}-1}\right\|    (10)

where, µ is a constant value indicating a plant's "inertial" displacement, and p_i\left(\left\|\mathrm{Y}_{\mathrm{i}}-\mathrm{Y}_{\mathrm{i}-1}\right\|\right) is the probability of pollination.

All sunflower steps were limited to the maximum step size of the DMAX to avoid skipping each solution boundary. Eq. (11) shows the computation of the maximum step size:

D_{M A X}=\frac{\left\|Y_{M A X}-Y_{M I N}\right\|}{2 \times \mathrm{m}}    (11)

where YMAX is the upper boundary and YMIN is the lower boundary. Each plant's position in the population is then updated as:

\vec{Y}_{i+1}=\vec{Y}_i+D \times \vec{S}    (12)

This approach begins with creating a uniform or random population of entities. Evaluating each individual helps to determine which one will evolve into the "Sun." Although future versions may manage multiple suns, the current study focused on a single individual. Subsequently, all new entities resembling sunflowers align themselves with the sun and migrate in a controlled random manner. The SFOA was used to identify high-quality solutions in problem-solving scenarios by reducing the fitness function.

Algorithm 2: MasiEMT-SFO Pseudo code

1) Initialize the population size m, pollination rate P, and maximum iteration number MaxI.

2) Set T = 0.

3) Create the population Yi(t) ∈ [L, U] randomly, where i = 1, ..., m.

4) Evaluate the fitness f(Yi(t)) of each individual (sunflower).

5) Assign the population's optimal solution Y*.

6) Orient all individuals (sunflowers) towards the sun using Eq. (9).

7) Repeat the following steps:

8) Determine the direction vector (fitness) of each individual.

9) Eliminate the worst n% of individuals.

10) Calculate each individual's step towards the sun using Eq. (10).

11) Fertilize the best sunflowers in the vicinity of the sun.

12) Limit each individual's step to the maximum given by Eq. (11).

13) Update the individuals using Eq. (12).

14) Evaluate the new individuals.

15) Keep new individuals whose fitness is higher than that of the current ones.

16) Set T = T + 1.

17) Repeat the above steps until T > MaxI.

An optimization algorithm that finds the most optimal solution by analogizing sunflower pollination is shown in Figure 3. This algorithm begins by initializing a population of random agents. A predetermined percentage of flowers that are the furthest from the sun is then eliminated by the algorithm as part of a selection procedure. By taking this step, resources can be directed towards the more promising applicants. Each bloom then goes through a procedural upgrade to improve its pollination behavior or position. The best-performing flowers are chosen to pollinate close to the sun, producing new individuals close to the optimal solution. By concentrating on regions with significant potential, this local exploration seeks to enhance the best solution.

A new population is created once the new individuals have been created. If a better option is found, the algorithm updates the sun or the best solution. Otherwise, it checks to see if this population constitutes a global minimum. The procedure is repeated until an ideal solution is found or the maximum number of iterations (called days) is reached. At this stage, the algorithm comes to an end after repeatedly selecting, orienting, and pollinating the population to get the greatest potential result.

3.4 Feature extraction

Feature extraction is an essential component of classification, and the significant elements of the images have an enormous impact on classification performance. Object characteristics are classified as global or local based on color, size, or structure: texture and color are local characteristics, whereas structure is a global one. Deep and handcrafted features were acquired for image categorization in this study. Feature extraction techniques are critical for tumor detection; the VGG-19 network obtains various features from the segmented images, extracting the specific features required for brain tumor imaging.

3.4.1 VGG-19

The depth of VGG-19 enables it to capture distinctive features in images, which is critical for tasks such as medical image analysis, in which subtle details can have a diagnostic impact. Medical imaging studies, including MRI analysis, have shown that VGG-19 excels in feature extraction tasks because of its ability to effectively capture both low-level and high-level features [43]. This performance reliability is critical for the accurate detection of tumors using MRI. VGG-19 achieves a balance between depth and computational feasibility, as shown in Figure 4.

Figure 4. VGG-19 Architecture model

The feature extraction capabilities of VGG-19 provide a rich set of features for the enhanced AlexNet brain tumor classifier for processing. These features are more abstract and representative than earlier layers, which improves the classifier's ability to detect complex tumor-related patterns. Using VGG-19 as the feature extractor, the enhanced AlexNet classifier can focus on refining these features through additional training. This fine-tuning process improves classification accuracy and robustness.

3.5 Feature selection (FS)

FS is a critical task that reduces the number of features by removing redundant, irrelevant, and noisy data. In this study, the mRMR method is used to select the best features. mRMR-based feature selection is especially effective in detecting brain tumors from MRI images because of its ability to select highly relevant and non-redundant features, optimize subset sizes, robustly handle noise, ensure interpretability, and accommodate complex data structures.

3.5.1 mRMR

mRMR is a type of feature selection method that acts as a filter to identify the optimal feature set by maximizing the correlation with the target variables while minimizing redundancy among features. This classic approach effectively gathers relevant and nonredundant features. mRMR is efficient for selecting feature subsets by emphasizing relevant features and excluding irrelevant features. Mutual information plays a crucial role in assessing the similarities between features. Specifically, for features fi within set S, maximizing mutual information enhances the relevance between features and the target class c:

\max D(S, c), D=\frac{1}{|S|} \sum_{f_i \in S} I_M\left(f_i ; c\right)    (13)

where, IM represents the mutual information between feature fi and class c.

Redundancy among the features in S is simultaneously minimized:

\min R(S), \quad R=\frac{1}{|S|^2} \sum_{f_i, f_j \in S} I_M\left(f_i, f_j\right)    (14)

Feature selection operates on the entire feature vector by using leave-one-out cross-validation with a voting scheme. In this process, each case selected the best features through cross-validation. Each feature receives a vote if selected by case, and those with the highest accumulated votes are chosen as the final features. These selected features are subsequently utilized in the classification stage to categorize each super pixel as tumor or non-tumor. The mRMR-based feature selection is highly beneficial for enhancing the performance of an enhanced AlexNet classifier in brain tumor detection from MRI images. It optimizes feature selection by maximizing relevance and minimizing redundancy, thereby enhancing the classification accuracy and robustness in medical imaging. The integration of mRMR with enhanced AlexNet combines their strengths, providing a potent approach for precise and dependable brain tumor diagnosis using MRIs.

3.6 Classification

After the FS, the selected features are fed into a hybrid AlexNet classifier with the ASA as an input. Conventional algorithms frequently require handwritten functions, which are time consuming and may not fully capture the complexities of medical imagery. DL models such as AlexNet can automatically discover important features and decrease the need for manual feature engineering. The enhanced AlexNet output layer uses a SoftMax classifier to detect brain tumors [44]. Ultimately, the proposed classifier efficiently detects brain tumors while improving the classification accuracy. Figure 5 shows a flowchart of the proposed method.

Figure 5. Process flow of the proposed system

AlexNet, as shown in Figure 6, consists of five convolutional layers, three max-pooling layers, two normalization layers, two fully connected layers, and a softmax layer, with ReLU activation applied after each convolution. The input size is commonly quoted as 224 × 224 × 3, but with padding the effective input is 227 × 227 × 3, and the model contains over 60 million parameters.

Figure 6. AlexNet architecture

3.6.1 Enhanced AlexNet

The architecture includes five convolutional layers, three fully connected (FC) layers, and various pooling and normalization layers, processing input images of size 227 × 227 × 3 [45]. The ReLU activation function is used throughout to address issues like gradient vanishing, allowing for faster convergence during training. Dropout is applied in the FC layers to prevent overfitting by training only a subset of neurons in each iteration, thus improving the network's generalization capability.

The enhancements described in this study differ from the existing AlexNet classification framework in the following ways.

·Convolutional layers were added to the existing AlexNet architecture to maintain responsive local areas and employ max-average pooling methods to improve image classification.

·A global average pooling layer was added, which significantly reduces overfitting while retaining the final features; removing the large fully connected computations does not affect the final result and improves network performance.

·Finally, a local response normalization (LRN) layer was added after the convolutional layer to avoid certain numerical issues, thus eliminating neuron saturation. Following each convolution layer, a BN layer was included and directed to the next network layer, as shown in Figure 7 (an illustrative sketch follows the figure).

Figure 7. Architecture of the enhanced AlexNet model

The output from the convolutional layer is given as:

X_j^{l}=f\left(\sum_{a=1}^N W_j^{l-1} * Y_a^{l-1}+b_j^{l}\right)    (15)

where X_j^l is the j-th feature map in layer l, Y_a^{l-1} is the a-th feature map in layer l-1, W_j^{l-1} is the j-th kernel in layer l-1, N is the total number of feature maps in layer l-1, b_j^l is the bias for the j-th feature map in layer l, and (*) denotes convolution. Pooling layers reduce the feature map size, and the final layer utilizes the SoftMax function for classification:

\operatorname{ReLU}(\mathrm{X})= \begin{cases}0, & X<0 \\ X & X \geq 0,\end{cases}    (16)

\operatorname{SoftMax}\left(X_i\right)=\frac{e^{x_i}}{\sum_{y=1}^m e^{x_y}}    (17)

where, Xi and m are the input data and number of classes, respectively. To normalize features, AlexNet uses parameters to adjust the mean and variance.

P_i=\varphi \tilde{t}_i+\beta    (18)

\tilde{t}_i=\frac{t_i-\mu}{\sqrt{\sigma^2+\gamma}}    (19)

where \mu and \sigma^2 are the mean and variance, respectively, \gamma is a small constant, and \varphi and \beta are the scale and shift parameters for the features. To optimize these parameters and reduce error rates during classification, AlexNet was tuned using the Anopheles Search Algorithm (ASA). This optimization adjusts \beta to increase accuracy and \varphi to minimize error rates.

Algorithm 3: Pseudocode of the Enhanced AlexNet [46]

Initialize the enhanced AlexNet layers

Preprocess the input MRI images (grayscale conversion, filtering)

Partition the data

Initialize and train the model:

Train_Model(Enhanced_AlexNet, Labeled_Data)

Perform data augmentation and generate synthetic samples:

Synthetic_Data = Generate_Synthetic_Data(Labeled_Data)

Fine-tune the model:

Model_Tuned = Fine_Tune_Model(Enhanced_AlexNet, Labeled_Data)

Execute the main training loop:

for each iteration from 1 to Total_Iterations:

Forward_Propagation(Model_Tuned, Unlabeled_Data)

Update the model based on predictions

accuracy = Calculate_Accuracy(Model_Tuned, Unlabeled_Data)

Evaluate the final model:

Final_Accuracy = Calculate_Accuracy(Model_Tuned, Unlabeled_Data)

3.6.2 ASA

Recent studies have uncovered the molecular mechanism by which female Anopheles mosquitoes detect their prey several miles away. Human sweat contains the compound 4-methylphenol, which is detected by insect olfactory receptors (Igor-1), helping the mosquito locate humans. It has been observed that Igor-1 cells are exclusively present in female mosquitoes. When malaria parasites are ready to move from one host to the next, they prompt the infected body to release odors that attract Anopheles mosquitoes. In the algorithm, the search agents within the solution space act like these mosquitoes: they evaluate the optimality of each point by randomly navigating through the solution space, and as the iterations progress, more agents converge towards the optimal points.

\mathrm{D}=\mathrm{a} \log (\mathrm{C})+\mathrm{b}    (20)

where D denotes the odor density, b signifies the tracking constant, and C represents the Weber-Fechner chemical concentration coefficient. If, in Eq. (20), a denotes the inverse of the distance between point Xi and mosquito Ai, then:

\operatorname{Odor}_{A_i X_i}=\frac{1}{\operatorname{Dist}\left(A_i, X_i\right)} \times \log \left(\operatorname{fitness}\left(X_i\right)\right)+b, \quad 0 \leq b \leq 0.5    (21)

where \operatorname{Dist}\left(A_i, X_i\right)=\sqrt{\sum_{j=1}^n\left(x_j\left(A_i\right)-x_j\left(X_i\right)\right)^2}.

The metaheuristic ASA consists of the following steps.

·The optimization problem can be described as follows:

\max _X f(X), \quad X \in x_i, \quad i=1,2, \ldots, N    (22)

where, f(x) denotes the objective function, X represents the set of decision variables, xi denotes a variable with a set of possible values for each decision, and N signifies the number of decision variables. The definition of continuous decision variables includes the upper and lower bounds, as specified in Eq. (23).

l_{x_i} \leq x_i \leq u_{x_i}    (23)

x_i \in\left\{x_1, x_2, \ldots . x_n\right\}    (24)

·The number of Anopheles mosquitoes, number of decision variables, and range of each decision variable were the variables of the search algorithm.

·In this step, mosquitoes are dispersed randomly throughout the search space, and the location of each mosquito's optimal point is determined.

·Each mosquito senses the odor density according to Eq. (21) and, based on the value obtained, travels in the direction of the optimal point. In a two-dimensional space, for instance, we have:

\operatorname{Odor}_{A_i X_i}=\frac{1}{\sqrt{\left(x_{A_i}-x_i\right)^2+\left(y_{A_i}-y_i\right)^2}} \times \log \left(\operatorname{fitness}\left(X_i\right)\right)+b, \quad 0 \leq b \leq 0.5    (25)

x_{\text{new}}=x_{\text{old}} \pm \operatorname{Odor}_{A_i X_i} \times\left|x_{A_i}-x_i\right|    (26)

y_{\text{new}}=y_{\text{old}} \pm \operatorname{Odor}_{A_i X_i} \times\left|y_{A_i}-y_i\right|    (27)

·The stopping conditions are as follows:

·A fixed number of iterations: based on the conducted simulations, a range between 15 and 30 is recommended. This criterion suits problems characterized by intricate objective functions and a substantial number of decision variables.

·Attaining a sufficiently good solution: this criterion is suitable for problems with a defined optimality threshold.

·A number of iterations without change in the optimal function value: this is applicable to problems where achieving solution convergence is crucial.

Algorithm 4: Pseudo code of ASA

% Initialize the Anopheles structure
Anopheles_Struct = struct('Anopheles_Loc', [], 'Anopheles_Fitness', []);

% Generate initial locations of the Anopheles
for i = 1:Anopheles_No
    Anopheles_Struct(i).Anopheles_Loc = random_value;  % replace with the actual random initialization
end

% Evaluate the initial fitness
for i = 1:Anopheles_No
    Anopheles_Struct(i).Anopheles_Fitness = Calculate_Fitness(Anopheles_Struct(i).Anopheles_Loc);
end

% Main loop of the Anopheles Search Algorithm
while ~Stop_Condition
    for i = 1:Anopheles_No
        for j = 1:Anopheles_No
            % Odor sensed between mosquitoes i and j (Eq. (21))
            Odor_i_to_j = 1 / Dist(Anopheles_Struct(i), Anopheles_Struct(j)) ...
                * log(Anopheles_Struct(i).Anopheles_Fitness) + b;

            % Update the position (Eqs. (26)-(27))
            Anopheles_Struct(i).Anopheles_Loc = Update_Location(Anopheles_Struct(i).Anopheles_Loc, Odor_i_to_j);

            % Re-evaluate the fitness
            Anopheles_Struct(i).Anopheles_Fitness = Calculate_Fitness(Anopheles_Struct(i).Anopheles_Loc);
        end
    end
end

3.6.3 Hybrid Enhanced AlexNet with ASA

The ASA is inspired by an observation of Anopheles mosquitoes around three tents. In the first tent was a child infected with malaria whose parasites were ready to spread; in the second, an infected child whose parasites were not yet ready to spread; and in the third, a healthy child. The results indicated that the mosquitoes were twice as attracted to the scent of the child in the first tent as to the others. Figure 8 illustrates the flowchart of the ASA-AlexNet.

Figure 8. Flowchart of proposed hybrid enhanced AlexNet with ASA

Step 1: Initialization.

Let the ASA population be denoted P, the best value B, and the number of iterations I. Each decision variable is bounded by lb (lower bound) and ub (upper bound). The ASA algorithm was operationalized using the following equations:

P_i(1, j)=l b_j+X\left(u b_j-l b_j\right), \quad j=1,2,3, \ldots, N    (28)

P_i=P_i+X\left(P_i\right)    (29)

Eq. (30) calculates the odor from the inverse of the distance between Xi and point Yi, where Yi denotes the optimal point in the solution space based on the location Xi, as follows:

O_{X_i Y_i}=\frac{1}{\operatorname{Dist}\left(X_i, Y_i\right)} \times \log \left(\operatorname{fitness}\left(Y_i\right)\right)+b, \quad 0 \leq b \leq 0.5    (30)

Step 2: Random generation.

Eq. (31) illustrates the calculation of distance.

{Dist}\left(X_i Y_i\right)=\sqrt{\sum_{j=1}^n\left(X_j\left(X_i\right)-X_j\left(Y_i\right)\right)^2}    (31)

If b = 0.5, O increases; b takes its maximum value at the apex of the Anopheles motion. If b = 0, the motion of the Anopheles mosquito is computed from the distance and fitness alone.

Step 3: Fitness function.

The fitness function is represented in Eq. (32):

\text{Fitness}\left(X_i, Y_i\right)=\operatorname{Maximize}(\text{Accuracy}),\ \operatorname{Minimize}(\text{Error rate})    (32)

where, \beta represents the accuracy and \varphi represents the error rate.

Step 4: ASA updates are aimed at enhancing the accuracy and reducing the error rates. This optimization process calculates parameters by assessing the scent density detected by each mosquito and advancing toward the optimal point based on these values. The focus was on improving the accuracy while minimizing the error rates, as depicted in the following maximization equation:

\operatorname{Maximize}(\text{Accuracy})\left(X_i\right), \quad X_i \in x_i,\ i=1,2, \ldots, N    (33)

where Maximize(Accuracy)(Xi) represents the objective function. If the decision variable is continuous, it is bounded by an upper bound U_{X_i} and a lower bound L_{X_i}, as shown in Eqs. (34) and (35).

L_{X i} \leq X_i \leq U_{X i}    (34)

X_i \in\left\{X_1, X_2 \ldots \ldots X_n\right\}    (35)

The minimized error rate is given as:

Minimize ( Error rate )\left(Y_i\right)=X_1^2+\left(X_2-1\right)^2    (36)

Eqs. (33) and (36) are the optimization equations for achieving the desired objective. The odor mechanism of the Anopheles was used to optimize the parameters.

Step 5: Termination.

Finally, the fitness function is employed to achieve optimum accuracy while reducing the computational time. Finally, the ASA mechanism selects the optimal value of the AlexNet classifier to effectively classify input MR images as normal or abnormal.

4. Results and Discussion

Evaluating algorithm efficiency is crucial in computerized system design, particularly for predicting unexpected data. Simulations were performed in MATLAB 2021a on an Intel Core i5-8250U CPU with 8 GB of RAM, as detailed in Table 3. The different performance metrics and the formulas are summarized in Table 4.

Figures 9(a) and 9(b) illustrate the input and filtered images, and Figures 9(c) and 9(d) depict the segmented and categorized MRI images.

Figure 9. Experimental result of tumor detection for MR image

Table 3. Details of experimental setup

| Sl. No. | Name | Value |
|---|---|---|
| 1 | RAM | 8 GB |
| 2 | CPU | Intel (R) Core (TM) i5-5200U |
| 3 | HDD | 500 GB |
| 4 | OS | Windows 10, 64-bit |
| 5 | Implementation Tool | MATLAB 2021a |

Table 4. Performance metrics [19]

| Performance Metric | Formula |
|---|---|
| Sensitivity/Recall | TP/(TP+FN) |
| Precision | TP/(TP+FP) |
| Specificity | TN/(TN+FP) |
| Accuracy | (TP+TN)/(TP+TN+FP+FN) |
| F-measure | 2·TP/(2·TP+FP+FN) |
| False positive rate (FPR) | FP/(FP+TN) |
| Dice similarity coefficient (DSC) | 2·TP/(FP+2·TP+FN) |
| Jaccard similarity index (JSI) | TP/(TP+FP+FN) |

4.1 Comparative analysis using BTC dataset

As shown in Figures 10 and 11 and Table 5, the proposed method achieves an accuracy of 99.76%, the highest among the compared models, reflecting its superior capability in correctly identifying positive cases. Furthermore, the proposed method also demonstrates a marked improvement in sensitivity (96.73%) and specificity (98.76%), indicating its effectiveness in both detecting true positives and correctly identifying negatives. The precision and F-measure values are also notably high, at 97.61% and 97.21% respectively, further supporting the robustness of the proposed method in delivering consistent and reliable classification results.

As presented in Table 6, multiple ML and DL algorithms and their variants were evaluated against several conventional performance measures, namely the DSC, JSI, error rate, and FPR. The table reports these characteristics for the existing approaches and the proposed method. Figure 12 also shows a graphical comparison of the DSC and JSI. Here, the JSI defines the region of interest, and the DSC was used to determine the exact ratio of detectable tumor pixels.

Figure 10. Performance evaluation accuracy, sensitivity, and specificity

Figure 11. Performance evaluation precision and f-measure

Figure 12. Comparison of DSC and JSI

Table 5. Comparison of the proposed algorithm to existing methods

| Approach | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) | F-Measure (%) |
|---|---|---|---|---|---|
| SVM | 92.74 | 90.78 | 92.52 | 91.23 | 91.91 |
| CNN | 94.64 | 92.24 | 94.72 | 93.91 | 93.86 |
| DCNN | 95.60 | 92.16 | 95.30 | 94.41 | 93.71 |
| ResNet-50 | 97.86 | 92.36 | 96.59 | 95.76 | 96.16 |
| VGG-16 | 98.90 | 93.96 | 97.21 | 96.21 | 96.46 |
| Proposed method | 99.76 | 96.73 | 98.76 | 97.61 | 97.21 |

Table 6. Parametric evaluation and comparison

| Approach | DSC | JSI | Error Rate | FPR |
|---|---|---|---|---|
| SVM | 82.74 | 88.10 | 0.088 | 0.2297 |
| CNN | 87.00 | 89.30 | 0.073 | 0.1099 |
| DCNN | 90.10 | 93.21 | 0.076 | 0.1086 |
| ResNet-50 | 92.23 | 95.43 | 0.031 | 0.0588 |
| VGG-16 | 94.63 | 96.52 | 0.020 | 0.0321 |
| Proposed method | 97.32 | 98.50 | 0.010 | 0.0024 |

The fraction of predicted tumor pixels was similar to the expected number of tumor pixels. The tabular and graphical representations show that the suggested method achieves a DSC value of 97.32% and a JSI value of 98.50%. The error rate quantifies the relationship between inaccurate predictions and total evaluation instances; the comparison in Figure 13 shows that the proposed method has a lower error rate (0.010) than the existing techniques. Figure 14 graphically illustrates the FPR comparison: the proposed model's FPR of 0.0024 is lower than that of the other techniques. Figure 15 shows the computational time of each method in seconds using the same dataset and parameters; the SVM method takes the longest to simulate (660 s), whereas the proposed enhanced AlexNet model takes the least time (187.8 s).

In the confusion matrix, the column represents an instance's predicted value, and the row represents the instance's accurate (true) value. This matrix lives up to its name by helping us see whether there are any confused findings or class overlaps. Figures 16 and 17 depict the confusion matrices for the testing and training sets, respectively. Figures 18(a) and 18(b) show the accuracy and loss, and Table 7 shows the performance comparison with existing approaches.

Figure 13. Comparison of error rate

Figure 14. FPR comparison graph

Table 7. Performance comparison of the proposed and existing algorithms

| Tumor Type | Metric | SVM | CNN | DCNN | ResNet-50 | VGG-16 | Proposed |
|---|---|---|---|---|---|---|---|
| Glioma | Precision | 0.85 | 0.90 | 0.92 | 0.93 | 0.91 | 0.95 |
| Glioma | Recall | 0.80 | 0.92 | 0.94 | 0.95 | 0.94 | 0.96 |
| Glioma | F-measure | 0.87 | 0.90 | 0.93 | 0.94 | 0.95 | 0.97 |
| Meningioma | Precision | 0.76 | 0.80 | 0.88 | 0.92 | 0.93 | 0.94 |
| Meningioma | Recall | 0.78 | 0.82 | 0.85 | 0.93 | 0.94 | 0.95 |
| Meningioma | F-measure | 0.82 | 0.86 | 0.88 | 0.90 | 0.92 | 0.93 |
| Pituitary | Precision | 0.80 | 0.82 | 0.87 | 0.90 | 0.93 | 0.95 |
| Pituitary | Recall | 0.82 | 0.83 | 0.89 | 0.91 | 0.94 | 0.96 |
| Pituitary | F-measure | 0.84 | 0.85 | 0.86 | 0.88 | 0.93 | 0.97 |

Figure 15. Comparison of simulation time

Figure 16. Confusion matrix for testing set

Figure 17. Confusion matrix for training set

Figure 18. Training and validation graph: (a) accuracy; (b) loss

Table 8. Performance analysis based on preprocessing

| Filter Applied | PSNR (dB) | MSE | MAE | SSIM |
|---|---|---|---|---|
| Anisotropic diffusion filter | 9.0927 | 14.622 | 0.06783 | 0.598 |
| Averaging filter | 10.022 | 11.635 | 0.05672 | 0.620 |
| Guided filter | 10.2054 | 9.825 | 0.05387 | 0.789 |
| FNLM | 15.3656 | 5.129 | 0.00582 | 0.820 |

Table 8 lists the PSNR, MSE, MAE, and SSIM values for different filters. The FNLM approach achieved the highest mean SSIM of 0.820, while the guided filter, averaging filter, and anisotropic diffusion filter achieved 0.789, 0.620, and 0.598, respectively. The FNLM model yielded a minimal mean MAE of 0.00582, against 0.05387 and 0.05672 for the guided and averaging filters, respectively. The maximum mean PSNR of 15.3656 was obtained by the FNLM, whereas the guided and averaging filters yielded 10.2054 and 10.022, respectively.

The sensitivity, specificity, and accuracy metrics presented in Table 9 were used to analyze multiple automated segmentation algorithms, yielding valid results. Furthermore, we compared three well-known approaches (the watershed algorithm, K-means, and DWT) with the proposed method. According to these data, the proposed MasiEMT-SFO technique was the most efficient for segmenting 3D MRI images, achieving the best accuracy with sensitivity and specificity differences that were positive but not significant. The MasiEMT-SFO method can segment 3D MRI images more accurately and in a shorter time than the other three methods. Consider T2-weighted MRI, for example: the watershed, K-means, and DWT algorithms had accuracies of 62.54% ± 0.04, 79.18% ± 0.138, and 86.54% ± 0.03, respectively, whereas the proposed approach achieved an average accuracy of 90.9% ± 0.052 with a sensitivity of 90.23% ± 0.026.

Table 10 shows that the enhanced AlexNet outperforms the four compared models on practically every performance indicator, with 99% precision, 98% accuracy, 99.8% specificity, 99.67% recall, and a 99.7% F-score for brain tumor diagnosis. GoogLeNet had the second-highest accuracy (96.22%), whereas DenseNet201 had the lowest accuracy (95.00%) among the compared architectures.

Table 9. Performance analysis based on segmentation

| Modality | Segmentation Method | Sensitivity | Specificity | Accuracy |
|---|---|---|---|---|
| T2-weighted | Watershed algorithm | 63.36% ± 0.143 | 61.37% ± 0.136 | 62.54% ± 0.04 |
| T2-weighted | K-means | 72.07% ± 0.43 | 75.8% ± 0.545 | 79.18% ± 0.138 |
| T2-weighted | DWT | 87.08% ± 0.045 | 89.2% ± 0.156 | 86.54% ± 0.03 |
| T2-weighted | Proposed | 90.23% ± 0.026 | 91.4% ± 0.0067 | 90.9% ± 0.052 |
| FLAIR | Watershed algorithm | 66.5% ± 0.0037 | 59.4% ± 0.464 | 70.12% ± 0.038 |
| FLAIR | K-means | 79.4% ± 0.030 | 65.35% ± 0.35 | 78.2% ± 1.118 |
| FLAIR | DWT | 80.5% ± 0.008 | 78.53% ± 0.025 | 82.5% ± 0.026 |
| FLAIR | Proposed | 95.4% ± 0.03 | 90.5% ± 0.0088 | 94.7% ± 0.636 |
| T1-C | Watershed algorithm | 78.4% ± 0.036 | 70.13% ± 0.436 | 77.2% ± 0.0323 |
| T1-C | K-means | 82.5% ± 0.45 | 78.3% ± 0.025 | 80.3% ± 0.0456 |
| T1-C | DWT | 86.3% ± 0.12 | 84.5% ± 0.034 | 85.6% ± 0.046 |
| T1-C | Proposed | 96.3% ± 0.003 | 94.7% ± 0.003 | 95.7% ± 0.018 |

Table 10. Comparison with state-of-the-art DL models

| Algorithms | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) | Specificity (%) |
|---|---|---|---|---|---|
| ResNet50 | 97.06 | 96.20 | 95.21 | 97.26 | 94.54 |
| DenseNet201 | 95.00 | 97.00 | 96.54 | 98.24 | 96.76 |
| DarkNet53 | 95.26 | 98.22 | 97.57 | 97.32 | 96.36 |
| GoogLeNet | 96.22 | 98.50 | 98.22 | 98.40 | 87.00 |
| Proposed | 98.00 | 99.00 | 99.67 | 99.70 | 99.80 |

Table 11. Brain tumor detection and classification comparison with existing approaches

| Reference No. | Algorithm | Accuracy |
|---|---|---|
| [45] | Weighted KNN | 89.80% |
| [47] | BrainMRNet | 96.05% |
| [48] | ResNet-50 | 95.00% |
| [49] | TumorDetNet | 96.08% |
| [50] | Residual-CNN | 96.60% |
| [51] | RCNN | 98.21% |
| [52] | KNN-SVM | 98.30% |
| [53] | DCNN | 98.00% |
| [54] | MLP | 96.50% |
| [55] | Ensemble classifier | 96.40% |
| [56] | RBF | 88.00% |
| [57] | CNN | 98.90% |
| [58] | EfficientNet | 98.86% |
| [59] | VGG-16 CNN | 98.93% |
| [60] | ResNet, AlexNet, UNet, and VGG16 | 99.30% |
| [61] | InceptionV3 | 99.60% |
| Proposed | Enhanced AlexNet | 99.98% |

The experiment shown in Table 11 prioritizes accuracy as the performance metric for the baseline comparison. Additionally, to highlight the method's effectiveness, reliability, and superiority, the study compared it with leading techniques on the same datasets. The proposed framework achieved an accuracy of 99.98%, followed by study [61] with 99.6%, while study [45] reported the lowest accuracy of 89.8% for tumor recognition.

4.2 Comparative analysis using CE-MRI Figshare dataset

Using the CE-MRI Figshare dataset, Figure 19 and Table 12 show that the proposed enhanced AlexNet classifier achieved an accuracy of 98.42%. The SVM model had the lowest average accuracy at 91.67%, while VGG-16 had the highest among the existing models at 95.60%. The enhanced AlexNet algorithm achieved 98.42%, 97.62%, 97.30%, 96.43%, and 96.03% for accuracy, recall, precision, specificity, and F-measure, respectively. Figure 20(a) shows the analysis of the methods with the accuracy parameter: SVM, CNN, DCNN, ResNet-50, VGG-16, and the proposed method have accuracy values of 0.72, 0.73, 0.76, 0.78, 0.81, and 0.83, respectively, for 50% of the training data, and 0.75, 0.77, 0.82, 0.85, 0.86, and 0.88 for 90% of the training data. Figure 20(b) shows the analysis with the sensitivity parameter: the sensitivities of SVM, CNN, DCNN, ResNet-50, VGG-16, and the proposed method were 0.61, 0.625, 0.67, 0.7, 0.75, and 0.78 for 50% of the training data, and 0.65, 0.67, 0.74, 0.76, 0.82, and 0.85 for 90%. Figure 20(c) shows the analysis with the specificity parameter: the specificities were 0.91, 0.92, 0.93, 0.94, 0.947, and 0.96 for 50% of the training data, and 0.95, 0.955, 0.96, 0.96, 0.97, and 0.987 for 90%.

Figure 19. Performance evaluation of the suggested method using CE-MRI Figshare dataset

Table 12. Comparison with existing methods for CE-MRI Figshare dataset

| Approach | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) | F-Measure (%) |
|---|---|---|---|---|---|
| SVM | 91.67 | 89.34 | 90.30 | 90.32 | 90.60 |
| CNN | 92.31 | 90.11 | 91.41 | 91.60 | 92.42 |
| DCNN | 93.30 | 90.44 | 92.03 | 92.30 | 92.40 |
| ResNet-50 | 94.53 | 91.45 | 93.26 | 93.43 | 93.03 |
| VGG-16 | 95.60 | 95.63 | 94.10 | 94.12 | 94.13 |
| Proposed method | 98.42 | 97.62 | 96.43 | 97.30 | 96.03 |

Table 13. Statistical analysis using BTC dataset

| Algorithm | Statistic | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|
| SVM | Mean | 0.9224 | 0.9028 | 0.9202 |
| SVM | Variance | 0.004 | 0.002 | 0.002 |
| CNN | Mean | 0.9414 | 0.9174 | 0.9422 |
| CNN | Variance | 0.003 | 0.005 | 0.003 |
| DCNN | Mean | 0.951 | 0.9166 | 0.952 |
| DCNN | Variance | 0.002 | 0.004 | 0.004 |
| ResNet-50 | Mean | 0.9736 | 0.9186 | 0.9609 |
| ResNet-50 | Variance | 0.004 | 0.005 | 0.005 |
| VGG-16 | Mean | 0.984 | 0.9346 | 0.9671 |
| VGG-16 | Variance | 0.004 | 0.002 | 0.003 |
| Proposed | Mean | 0.9926 | 0.9623 | 0.9826 |
| Proposed | Variance | 0.003 | 0.001 | 0.002 |

As shown in Table 13, the proposed model achieves a mean accuracy of 0.9926 on the BTC dataset, while SVM, CNN, DCNN, ResNet-50, and VGG-16 achieve mean accuracies of 0.9224, 0.9414, 0.951, 0.9736, and 0.984, respectively. The proposed model's accuracy variance is 0.003, whereas SVM, CNN, DCNN, ResNet-50, and VGG-16 exhibit accuracy variances of 0.004, 0.003, 0.002, 0.004, and 0.004, respectively.

The proposed model achieves a mean specificity of 0.9826, while SVM, CNN, DCNN, ResNet-50, and VGG-16 achieve mean specificities of 0.9202, 0.9422, 0.952, 0.9609, and 0.9671, respectively. In terms of variance, the proposed model's specificity variance is 0.002, whereas SVM, CNN, DCNN, ResNet-50, and VGG-16 exhibit specificity variances of 0.002, 0.003, 0.004, 0.005, and 0.003, respectively. Similarly, Table 14 presents the statistical analysis of the CE-MRI Figshare dataset, including the accuracy, sensitivity, and specificity parameters. Here, the proposed model achieves a mean accuracy of 0.9792, while SVM, CNN, DCNN, ResNet-50, and VGG-16 achieve mean accuracies of 0.9117, 0.9181, 0.928, 0.9403, and 0.951, respectively.

Table 14. Statistical analysis using CE-MRI Figshare dataset

| Algorithm | Statistic | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|
| SVM | Mean | 0.9117 | 0.8884 | 0.898 |
| SVM | Variance | 0.003 | 0.002 | 0.004 |
| CNN | Mean | 0.9181 | 0.8961 | 0.9091 |
| CNN | Variance | 0.004 | 0.004 | 0.003 |
| DCNN | Mean | 0.928 | 0.8994 | 0.9153 |
| DCNN | Variance | 0.004 | 0.003 | 0.004 |
| ResNet-50 | Mean | 0.9403 | 0.9095 | 0.9276 |
| ResNet-50 | Variance | 0.002 | 0.002 | 0.003 |
| VGG-16 | Mean | 0.951 | 0.9513 | 0.936 |
| VGG-16 | Variance | 0.001 | 0.004 | 0.002 |
| Proposed | Mean | 0.9792 | 0.9712 | 0.9593 |
| Proposed | Variance | 0.002 | 0.003 | 0.001 |
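
The means and variances in Tables 13 and 14 summarize metric values collected over repeated train/test runs. A minimal sketch of that aggregation, using made-up per-run accuracies rather than the experiments' raw outputs:

```python
import numpy as np

# Hypothetical accuracies from five repeated train/test splits
# (illustrative values only, not the paper's raw data).
runs = {
    "SVM": [0.905, 0.918, 0.910, 0.915, 0.911],
    "Proposed": [0.976, 0.981, 0.979, 0.980, 0.980],
}

for name, scores in runs.items():
    scores = np.asarray(scores)
    # Population variance (ddof=0); swap in ddof=1 if the tables
    # report the sample variance instead.
    print(f"{name}: mean = {scores.mean():.4f}, "
          f"variance = {scores.var(ddof=0):.4f}")
```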

Ethical Considerations:

Ethical approval for this study was granted by the Ethics Committee following local regulations and guidelines. Throughout the study, stringent measures were implemented to safeguard data privacy and confidentiality. When applicable, informed consent was obtained from all patients. Steps were also taken to address potential biases and ensure the ethical and fair application of AI-driven technologies in neurological healthcare.

Clinical Relevance:

Clinical data sourced from imaging repositories were integrated into AI-powered clinical decision support systems (CDSS). These systems deliver evidence-based recommendations directly to clinicians during patient care, assisting with tasks such as diagnosis, treatment selection, and prognosis estimation. The integration included the use of the proposed DL algorithms, predictive analytics models, and expert systems to comprehensively analyze patient data, generate clinical insights, and facilitate informed clinical decision-making.

In this study, the proposed enhanced AlexNet classifier was combined with the Anopheles search algorithm (ASA) to achieve more accurate, efficient, and improved results in the medical field. The hybrid approach, which leverages both the enhanced AlexNet's classification capabilities and the ASA's parameter optimization, has the potential to achieve state-of-the-art results in brain tumor classification tasks. It seeks to outperform traditional methods by incorporating advanced techniques for classification and parameter optimization; a generic sketch of such a population-based search loop follows.
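
Purely as an illustration of how a population-based search can tune a classifier's free parameters, the sketch below uses a generic move-toward-the-best update: the fitness function, bounds, and update rule here are placeholders, not the published ASA equations.

```python
import numpy as np

def population_search(fitness, low, high, n_agents=10, n_iter=30, seed=0):
    """Generic population-based maximization over a hyperparameter box.

    fitness: maps a parameter vector to a score (e.g., validation accuracy).
    low, high: per-parameter lower/upper bounds (1-D arrays).
    """
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    pop = rng.uniform(low, high, size=(n_agents, low.size))
    scores = np.array([fitness(p) for p in pop])
    best, best_score = pop[scores.argmax()].copy(), scores.max()
    for _ in range(n_iter):
        # Pull each agent toward the current best, plus a small random
        # perturbation (a stand-in for the actual ASA position update).
        pull = rng.uniform(0, 1, pop.shape) * (best - pop)
        noise = 0.05 * (high - low) * rng.standard_normal(pop.shape)
        pop = np.clip(pop + pull + noise, low, high)
        scores = np.array([fitness(p) for p in pop])
        if scores.max() > best_score:
            best, best_score = pop[scores.argmax()].copy(), scores.max()
    return best, best_score

# Toy fitness peaking at learning rate 1e-3 and dropout 0.5 (hypothetical).
toy = lambda p: -((np.log10(p[0]) + 3) ** 2) - (p[1] - 0.5) ** 2
best, score = population_search(toy, [1e-5, 0.0], [1e-1, 0.9])
print(f"best learning rate = {best[0]:.2e}, dropout = {best[1]:.2f}")
```

In the paper's setting, the fitness evaluation would train or fine-tune the enhanced AlexNet on the training split and return its validation accuracy, which is what makes such searches expensive but effective.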

Figure 20. Comparison on the CE-MRI Figshare dataset in terms of (a) accuracy, (b) sensitivity, and (c) specificity

5. Conclusions

This paper presented an enhanced AlexNet classifier for the detection of brain tumors. The suggested algorithm accurately detects tumors and categorizes them both into two classes (benign and malignant) and into three classes (pituitary, meningioma, and glioma). The efficiency of the proposed technique was further verified using publicly available Kaggle datasets. The proposed system outperformed traditional algorithms, achieving an accuracy of 99.76% for detecting brain tumors and 97% for classifying them as pituitary tumors, gliomas, or meningiomas. Simulation results show that the proposed architecture surpasses conventional brain tumor detection and classification architectures, providing the highest diagnosis and classification accuracy among the compared methods while requiring minimal preprocessing. Our future goals include optimizing the model to decrease computation time, reduce memory usage, and simplify system complexity. This research can be extended to identify and classify additional diseases and more intricate tumor types.

References

[1] Kavitha, A.R., Chitra, L., Kanaga, R. (2016). Brain tumor segmentation using genetic algorithm with SVM classifier. International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 5(3): 1468-1471.

[2] Sinha, A., Aneesh, R.P., Suresh, M., Nitha Mohan, R., Abinaya, D., Singerji, A.G. (2021). Brain tumour detection using deep learning. In 2021 Seventh International conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, pp. 1-5. https://doi.org/10.1109/ICBSII51839.2021.9445185 

[3] Saeedi, S., Rezayi, S., Keshavarz, H., Niakan Kalhori, S.R. (2023). MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Medical Informatics and Decision Making, 23(1): 16. https://doi.org/10.1186/s12911-023-02114-6

[4] Zhang, Y., Li, A., Peng, C., Wang, M. (2016). Improve glioblastoma multiforme prognosis prediction by using feature selection and multiple kernel learning. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13(5): 825-835. https://doi.org/10.1109/TCBB.2016.2551745

[5] Khambhata, K.G., Panchal, S.R. (2016). Multiclass classification of brain tumor in MR images. International Journal of Innovative Research in Computer and Communication Engineering, 4(5): 8982-8992.

[6] Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., van der Laak, J.A.W.M., van Ginneken, B., Sánchez, C.I. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42: 60-88. https://doi.org/10.1016/j.media.2017.07.005

[7] Dogra, J., Jain, S., Sharma, A., Kumar, R., Sood, M. (2020). Brain tumor detection from MR images employing fuzzy graph cut technique. Recent Advances in Computer Science and Communications (Formerly: Recent Patents on Computer Science), 13(3): 362-369. https://doi.org/10.2174/2213275912666181207152633

[8] Amin, J., Sharif, M., Raza, M., Saba, T., Sial, R., Shad, S.A. (2020). Brain tumor detection: A long short-term memory (LSTM)-based learning model. Neural Computing and Applications, 32: 15965-15973. https://doi.org/10.1007/s00521-019-04650-7

[9] Sharif, M., Amin, J., Raza, M., Anjum, M.A., Afzal, H., Shad, S.A. (2020). Brain tumor detection based on extreme learning. Neural Computing and Applications, 32: 15975-15987. https://doi.org/10.1007/s00521-019-04679-8

[10] Afshar, P., Mohammadi, A., Plataniotis, K.N. (2020). Bayescap: A bayesian approach to brain tumor classification using capsule networks. IEEE Signal Processing Letters, 27: 2024-2028. https://doi.org/10.1109/LSP.2020.3034858

[11] Alqudah, A.M., Alquraan, H., Qasmieh, I.A., Alqudah, A., Al-Sharu, W. (2020). Brain tumor classification using deep learning technique-A comparison between cropped, uncropped, and segmented lesion images with different sizes. arXiv preprint arXiv:2001.08844. https://doi.org/10.48550/arXiv.2001.08844

[12] Kumar, S., Mankame, D.P. (2020). Optimization driven deep convolution neural network for brain tumor classification. Biocybernetics and Biomedical Engineering, 40(3): 1190-1204. https://doi.org/10.1016/j.bbe.2020.05.009

[13] Ullah, N., Khan, J.A., Khan, M.S., Khan, W., Hassan, I., Obayya, M., Negm, N., Salama, A.S. (2022). An effective approach to detect and identify brain tumors using transfer learning. Applied Sciences, 12(11): 5645. https://doi.org/10.3390/app12115645

[14] Gore, S., Jagtap, J. (2022). Radiogenomic analysis: 1p/19q codeletion based subtyping of low-grade glioma by analysing advanced biomedical texture descriptors. Journal of King Saud University-Computer and Information Sciences, 34(10): 8449-8458. https://doi.org/10.1016/j.jksuci.2021.08.024

[15] Khan, M.S.I., Rahman, A., Debnath, T., Karim, M.R., Nasir, M.K., Band, S.S., Mosavi, A., Dehzangi, I. (2022). Accurate brain tumor detection using deep convolutional neural network. Computational and Structural Biotechnology Journal, 20: 4733-4745. https://doi.org/10.1016/j.csbj.2022.08.039

[16] Ranjbarzadeh, R., Bagherian Kasgari, A., Jafarzadeh Ghoushchi, S., Anari, S., Naseri, M., Bendechache, M. (2021). Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Scientific Reports, 11(1): 1-17. https://doi.org/10.1038/s41598-021-90428-8

[17] Younis, A., Qiang, L., Nyatega, C.O., Adamu, M.J., Kawuwa, H.B. (2022). Brain tumor analysis using deep learning and VGG-16 ensembling learning approaches. Applied Sciences, 12(14): 7282. https://doi.org/10.3390/app12147282

[18] Tummala, S., Kadry, S., Bukhari, S.A.C., Rauf, H.T. (2022). Classification of brain tumor from magnetic resonance imaging using vision transformers ensembling. Current Oncology, 29(10): 7498-7511. https://doi.org/10.3390/curroncol29100590

[19] ZainEldin, H., Gamel, S.A., El-Kenawy, E.S.M., Alharbi, A.H., Khafaga, D.S., Ibrahim, A., Talaat, F.M. (2022). Brain tumor detection and classification using deep learning and sine-cosine fitness grey wolf optimization. Bioengineering, 10(1): 18. https://doi.org/10.3390/bioengineering10010018

[20] Polat, Ö., Güngen, C. (2021). Classification of brain tumors from MR images using deep transfer learning. The Journal of Supercomputing, 77(7): 7236-7252. https://doi.org/10.1007/s11227-020-03572-9

[21] Asad, R., Rehman, S.U., Imran, A., Li, J., Almuhaimeed, A., Alzahrani, A. (2023). Computer-aided early melanoma brain-tumor detection using deep-learning approach. Biomedicines, 11(1): 184. https://doi.org/10.3390/biomedicines11010184

[22] Kesav, N., Jibukumar, M.G. (2022). Efficient and low complex architecture for detection and classification of brain tumor using RCNN with two channel CNN. Journal of King Saud University-Computer and Information Sciences, 34(8): 6229-6242. https://doi.org/10.1016/j.jksuci.2021.05.008

[23] Alsubai, S., Khan, H.U., Alqahtani, A., Sha, M., Abbas, S., Mohammad, U.G. (2022). Ensemble deep learning for brain tumor detection. Frontiers in Computational Neuroscience, 16: 1005617. https://doi.org/10.3389/fncom.2022.1005617

[24] Khairandish, M.O., Sharma, M., Jain, V., Chatterjee, J.M., Jhanjhi, N.Z. (2022). A hybrid CNN-SVM threshold segmentation approach for tumor detection and classification of MRI brain images. IRBM, 43(4): 290-299. https://doi.org/10.1016/j.irbm.2021.06.003

[25] Yoganathan, S.A., Zhang, R. (2022). Segmentation of organs and tumor within brain magnetic resonance images using K-nearest neighbor classification. Journal of Medical Physics, 47(1): 40-49. https://doi.org/10.4103/jmp.jmp_87_21

[26] Öksüz, C., Urhan, O., Güllü, M.K. (2022). Brain tumor classification using the fused features extracted from expanded tumor region. Biomedical Signal Processing and Control, 72: 103356. https://doi.org/10.1016/j.bspc.2021.103356

[27] Kokkalla, S., Kakarla, J., Venkateswarlu, I.B., Singh, M. (2021). Three-class brain tumor classification using deep dense inception residual network. Soft Computing, 25(13): 8721-8729. https://doi.org/10.1007/s00500-021-05748-8

[28] Kadry, S., Nam, Y., Rauf, H.T., Rajinikanth, V., Lawal, I.A. (2021). Automated detection of brain abnormality using deep-learning-scheme: A study. In 2021 Seventh International conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, pp. 1-5. https://doi.org/10.1109/icbsii51839.2021.9445122

[29] Mandle, A.K., Sahu, S.P., Gupta, G. (2022). Brain tumor segmentation and classification in MRI using clustering and kernel-based SVM. Biomedical and Pharmacology Journal, 15(2): 699-716. https://doi.org/10.13005/bpj/2409

[30] Tiwari, P., Pant, B., Elarabawy, M.M., Abd-Elnaby, M., Mohd, N., Dhiman, G., Sharma, S. (2022). CNN based multiclass brain tumor detection using medical imaging. Computational Intelligence and Neuroscience, 2022(1): 1830010. https://doi.org/10.1155/2022/1830010

[31] Maqsood, S., Damaševičius, R., Maskeliūnas, R. (2022). Multi-modal brain tumor detection using deep neural network and multiclass SVM. Medicina, 58(8): 1090. https://doi.org/10.3390/medicina58081090

[32] Wang, S.H., Govindaraj, V.V., Górriz, J.M., Zhang, X., Zhang, Y.D. (2021). COVID-19 classification by FGCNet with deep feature fusion from graph convolutional network and convolutional neural network. Information Fusion, 67: 208-229. https://doi.org/10.1016/j.inffus.2020.10.004

[33] Zhang, Y.D., Govindaraj, V.V., Tang, C., Zhu, W., Sun, J. (2019). High performance multiple sclerosis classification by data augmentation and AlexNet transfer learning model. Journal of Medical Imaging and Health Informatics, 9(9): 2012-2021. https://doi.org/10.1166/jmihi.2019.2692

[34] Dutta, T.K., Nayak, D.R., Zhang, Y.D. (2024). Arm-net: Attention-guided residual multiscale CNN for multiclass brain tumor classification using MR images. Biomedical Signal Processing and Control, 87: 105421. https://doi.org/10.1016/j.bspc.2023.105421

[35] Lamba, K., Rani, S., Anand, M., Maguluri, L.P. (2024). An integrated deep learning and supervised learning approach for early detection of brain tumor using magnetic resonance imaging. Healthcare Analytics, 5: 100336. https://doi.org/10.1016/j.health.2024.100336

[36] Appiah, R., Pulletikurthi, V., Esquivel-Puentes, H.A., Cabrera, C., Hasan, N.I., Dharmarathne, S., Gomez, L.J., Castillo, L. (2024). Brain tumor detection using proper orthogonal decomposition integrated with deep learning networks. Computer Methods and Programs in Biomedicine, 250: 108167. https://doi.org/10.1016/j.cmpb.2024.108167

[37] Islam, M.N., Azam, M.S., Islam, M.S., Kanchan, M.H., Parvez, A.S., Islam, M.M. (2024). An improved deep learning-based hybrid model with ensemble techniques for brain tumor detection from MRI image. Informatics in Medicine Unlocked, 47: 101483. https://doi.org/10.1016/j.imu.2024.101483

[38] Agarwal, M., Rani, G., Kumar, A., Kumar, P., Manikandan, R., Gandomi, A.H. (2024). Deep learning for enhanced brain Tumor Detection and classification. Results in Engineering, 22: 102117. https://doi.org/10.1016/j.rineng.2024.102117

[39] Yang, Y., Razmjooy, N. (2024). Early detection of brain tumors: Harnessing the power of GRU networks and hybrid dwarf mongoose optimization algorithm. Biomedical Signal Processing and Control, 91: 106093. https://doi.org/10.1016/j.bspc.2024.106093

[40] Brain Tumor Classification (MRI). https://www.kaggle.com/sartajbhuvaji/brain-tumor-classification-mri.

[41] Cheng, J. (2024). Brain tumor dataset. https://figshare.com/articles/dataset/brain_tumor_dataset/1512427.

[42] Shim, J., Yoon, M., Lee, M.J., Lee, Y. (2021). Utility of fast non-local means (FNLM) filter for detection of pulmonary nodules in chest CT for pediatric patient. Physica Medica, 81: 52-59. https://doi.org/10.1016/j.ejmp.2020.11.038

[43] Wan, X., Zhang, X., Liu, L. (2021). An improved VGG19 transfer learning strip steel surface defect recognition deep neural network based on few samples and imbalanced datasets. Applied Sciences, 11(6): 2606. https://doi.org/10.3390/app11062606

[44] Rohini, A., Praveen, C., Mathivanan, S.K., Muthukumaran, V., Mallik, S., Alqahtani, M.S., Al-Rasheed, A., Soufiene, B.O. (2023). Multimodal hybrid convolutional neural network based brain tumor grade classification. BMC Bioinformatics, 24(1): 382. https://doi.org/10.1186/s12859-023-05518-3

[45] Kiraz, H. (2021). Design and implementation of image processing method on brain tumor detection with machine learning approach.

[46] Azhagiri, M., Rajesh, P. (2024). EAN: Enhanced AlexNet deep learning model to detect brain tumor using magnetic resonance images. Multimedia Tools and Applications, 83(25): 66925-66941. https://doi.org/10.1007/s11042-024-18143-w

[47] Toğaçar, M., Ergen, B., Cömert, Z. (2020). BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Medical Hypotheses, 134: 109531. https://doi.org/10.1016/j.mehy.2019.109531

[48] Saxena, P., Maheshwari, A., Maheshwari, S. (2020). Predictive modeling of brain tumor: A deep learning approach. In Innovations in Computational Intelligence and Computer Vision: Proceedings of ICICV 2020, pp. 275-285. https://doi.org/10.1007/978-981-15-6067-5_30

[49] Ullah, N., Javed, A., Alhazmi, A., Hasnain, S.M., Tahir, A., Ashraf, R. (2023). TumorDetNet: A unified deep learning model for brain tumor detection and classification. Plos One, 18(9): e0291200. https://doi.org/10.1371/journal.pone.0291200

[50] Demir, F., Akbulut, Y. (2022). A new deep technique using R-CNN model and L1NSR feature selection for brain MRI classification. Biomedical Signal Processing and Control, 75: 103625. https://doi.org/10.1016/j.bspc.2022.103625

[51] Kesav, N., Jibukumar, M.G. (2022). Efficient and low complex architecture for detection and classification of Brain Tumor using RCNN with Two Channel CNN. Journal of King Saud University-Computer and Information Sciences, 34(8): 6229-6242. https://doi.org/10.1016/j.jksuci.2021.05.008

[52] Sindhu, A., Radha, V. (2020). An optimal feature selection with whale algorithm and adaboost ensemble model for pancreatic cancer classification in PET/CT images. Bioscience Biotechnology Research Communications, 13(4): 1886-1894.

[53] Mishra, P.K., Satapathy, S.C., Rout, M. (2021). Segmentation of MRI brain tumor image using optimization based deep convolutional neural networks (DCNN). Open Computer Science, 11(1): 380-390. https://doi.org/10.1515/comp-2020-0166

[54] Yin, B., Wang, C., Abza, F. (2020). New brain tumor classification method based on an improved version of whale optimization algorithm. Biomedical Signal Processing and Control, 56: 101728. https://doi.org/10.1016/j.bspc.2019.101728

[55] Moftah, H.M., Hefny, H.A. (2020). Brain diagnoses detection using whale optimization algorithm based on ensemble learning classifier. International Journal of Intelligent Engineering & Systems, 13(2): 40-51.

[56] Gong, S., Gao, W., Abza, F. (2020). Brain tumor diagnosis based on artificial neural network and a chaos whale optimization algorithm. Computational Intelligence, 36(1): 259-275. https://doi.org/10.1111/coin.12259

[57] Ramtekkar, P.K., Pandey, A., Pawar, M.K. (2023). Accurate detection of brain tumor using optimized feature selection based on deep learning techniques. Multimedia Tools and Applications, 82(29): 44623-44653. https://doi.org/10.1007/s11042-023-15239-7

[58] Zulfiqar, F., Bajwa, U.I., Mehmood, Y. (2023). Multi-class classification of brain tumor types from MR images using EfficientNets. Biomedical Signal Processing and Control, 84: 104777. https://doi.org/10.1016/j.bspc.2023.104777

[59] Malla, P.P., Sahu, S., Alutaibi, A.I. (2023). Classification of tumor in brain MR images using deep convolutional neural network and global average pooling. Processes, 11(3): 679. https://doi.org/10.3390/pr11030679

[60] Kumar, S., Choudhary, S., Jain, A., Singh, K., Ahmadian, A., Bajuri, M.Y. (2023). Brain tumor classification using deep neural network and transfer learning. Brain Topography, 36(3): 305-318. https://doi.org/10.1007/s10548-023-00953-0

[61] Islam, M.M., Barua, P., Rahman, M., Ahammed, T., Akter, L., Uddin, J. (2023). Transfer learning architectures with fine-tuning for brain tumor classification using magnetic resonance imaging. Healthcare Analytics, 4: 100270. https://doi.org/10.1016/j.health.2023.100270