BT Detection Using Improved Whale Optimization and Convolutional Neural Networks


Pushparaj Elango Arun Arthanareeswaran*

Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur, Chengalpattu 603203, Tamil Nadu, India

Corresponding Author Email: aruna2@srmist.edu.in

Page: 815-823 | DOI: https://doi.org/10.18280/ria.380308

Received: 31 August 2023 | Revised: 12 October 2023 | Accepted: 22 November 2023 | Available online: 21 June 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Medical image processing is indispensable to the growing need for quick, effective, and systematic Brain Tumour (BT) detection. Pixels are grouped into larger regions via a process called "region growing" that begins at seed locations. In noisy images where edges are hard to identify, region-growing methods perform better than edge-based methods. The dataset of brain MRI images was taken from the Kaggle and URI repositories. The input image undergoes morphological edge detection and is then enhanced by reconstruction through erosion and dilation. In this study, we employ an approach that involves median filtering of the image, automated Otsu segmentation, morphological filtering and dilation, and Improved Whale Optimization – Region-based Convolutional Neural Network (IWO-RCNN) classification. The brain Magnetic Resonance Imaging (MRI) database was prepared and the approach implemented in MATLAB R2015a, with classification performed using the Weka 3.9 tool. We compared our approach with the Brain-Surface Extractor (BSE) and a level-set technique proposed for the mouse brain, and analyzed its performance under increasing Signal-to-Noise Ratio (SNR) and resolution. According to the results, this approach outperforms the competing methods and remains reliable at low resolution, in the presence of partial volume effects, and at low SNR. The system achieves an accuracy of 98.7%, a precision of 96.5%, and a recall of 95.6%.

Keywords: 

BT detection, region growth, region of interest, magnetic resonance imaging, brain-surface extractor, morphological edge detection

1. Introduction

The concept of image processing, particularly artificial neural networks, has a large impact on this research as a result of technological advancements [1]. For effective diagnosis of BT in the medical field, we employ deep learning image processing concepts that have shown promising outcomes in a variety of other fields, including handwritten character recognition, speech recognition, disease detection, and image classification. Benign BT symptoms (signs) are frequently vague [2]. Benign BTs may produce warning signs and symptoms singly or in combination; regrettably, these signs and symptoms can also be seen in many other conditions: vision issues, balance issues, hearing issues, changes in mental function (such as attention, memory, and speech), seizures, muscular jerks, changes in the sense of smell, migraines, nausea/vomiting, facial paralysis, and numbness of the limbs are just a few of the symptoms that can occur [3]. The classification is carried out using a deep convolutional neural network, and the study is centred on the automatic detection of BT.

Determining the boundary of a region of interest is a crucial step in the majority of medical image analysis systems. Potential applications include the identification of BTs in CT or MRI images to schedule radiation treatment, the determination of the epicardial or endocardial boundaries of the left ventricle to examine cardiac function such as volumetric evaluation and the pressure-volume ratio, and the measurement of relative white and grey matter, which is important in studying degenerative disorders [4].

The brain is one of the main areas of interest in MRI. At present, the boundary of a malignancy in a head image is typically delineated manually in many therapeutic applications. When handling huge data sets, this manual method is rendered impractical. For instance, tracing the boundaries of the BT entirely by hand is no longer feasible for a database containing 35 slices intersecting the malignancy. Consequently, a computerized system is very necessary to carry out the task of tumor edge determination [5]. For the semi-automatic or automatic detection of different head structures, a variety of methodologies have been proposed, such as knowledge-based, region-based, edge-based, and combination approaches. Recently, multiple attempts have been made to analyze brain images using neural network topologies [6].

In this research, we present a new technique called IWO-RCNN to autonomously locate the boundaries of BTs using neural networks. Following the active contour approach, the computation is structured as an optimization problem, and the boundary is found by searching for an active contour that minimizes an energy functional. In our method, the energy-reduction procedure is performed on discrete grids to account for the discrete nature of the boundary identification problem. For energy minimization, a neural network design that is an altered version of the proposed framework is employed. By strictly lowering the total energy at each iteration, the network keeps the energy-reduction process on track. The proposed method of edge identification is quicker and more stable since it makes use of the neural network's convergence properties and collective computational power. To demonstrate the viability of our strategy, examples of its application to various MRI data sets are provided.

1.1 Problem statement

The brain is the most important organ in the human body, as it is in charge of all of the body's activities.

A BT is a lethal disease caused by the irregular progression of cells within the brain.

Tumours in the brain have an impact on the nervous system's functioning and can be classified as benign or malignant.

To save the affected person's life, an early and accurate prediction of a brain tumour is required.

The most promising approach used by radiologists for brain diagnosis is MRI.

Tumour segmentation and classification remain difficult due to their likeness in shape, size, and appearance.

Moreover, the image's intensity variability further complicates the processing.

As a result, optimization techniques are still required, enabling further investigation into better classification techniques and accuracy.

1.2 Motivation

From the challenges of BT classification in MRI images, it is evident that BT classification algorithms need further enhancement for effective BT detection. The motivation of this research work is therefore to present an optimized BT classification technique that classifies the tumor from a given MRI image effectively at an early stage. The aim is to introduce effective segmentation and classification techniques using deep learning, with improved accuracy and quality, for BT detection.

2. Literature Survey

The Fuzzy C-Means (FCM) method proposed in [7] is utilized to gather variables such as segmented area, MSE, and PSNR for tumor detection. The MRI wavelet coefficients are extracted using a three-level Discrete Wavelet Transform (DWT) decomposition, and their dimensionality is subsequently reduced using Principal Component Analysis (PCA). A Support Vector Machine (SVM) classifier then classifies 105 MR images of BTs into two categories, malignant and benign, using the test images as input. Performance indicators are evaluated using the SVM classification.

The proposed image segmentation method is separated into two parts. In the first phase, a database of MRI images (grades I to IV) was gathered, and preprocessing is then carried out to enhance the image quality [8]. The second phase consists of three steps: image segmentation, feature extraction, and attribute selection. The approach employs the Gray Level Co-occurrence Matrix (GLCM) for feature extraction. To increase precision, only a subset of characteristics is chosen using a genetic algorithm, and fuzzy membership functions and fuzzy rules were constructed to separate BT and MRI images based on these characteristics [9]. ANFIS is an adaptive network that blends neural networks with fuzzy logic. Finally, a comparison of the specificity, sensitivity, and precision of the Adaptive Neuro-Fuzzy Inference System (ANFIS), neural networks, DWT+PCA+K-NN, DWT+PCA+SVM, DWT+SOM, FCM, texture+ANN, and K-NN is made.

Segmentation, which enables the visualization, characterization, and delineation of regions of interest in any clinical image, can be regarded as the most fundamental step [10]. Despite extensive research, varied image content, grouped objects, non-uniform object texture, image noise, and other characteristics make segmentation a difficult job. Although numerous image segmentation approaches are available, a quick and effective method for segmenting medical images still needs to be developed. An effective image segmentation method combining fuzzy C-means with K-means clustering has been proposed; level-set segmentation and thresholding steps are then applied to achieve accurate BT detection. In terms of low computation time, the proposed method benefits from K-means clustering for image segmentation [11].

Wavelet feature extraction with the wavelet characteristic vector treated as input to FCM clustering has also been proposed. The Wavelet FCM (WFCM) technique is used, and the results are contrasted with those of the Kernelized FCM (KFCM) and standard FCM. Unsupervised clustering, employed with FCM and the kernel method of image segmentation, depends on classifying the data into two or more groups. The kernel function relies on a nonlinear mapping to transform the low-dimensional input space into a higher-dimensional feature space, which helps resolve problems that are not linearly separable in the low-dimensional space [12]. The inner product of the original space is replaced by the kernel function, which maps the space into the higher-dimensional feature space. The KFCM technique modifies the standard FCM methodology by substituting a kernel-induced distance for the Euclidean distance in the objective function [13]. To derive new objective functions and to group non-Euclidean structures in the data, KFCM techniques introduce a family of robust non-Euclidean distance measures for the original data space while maintaining mathematical simplicity. This improves the resistance of the earlier clustering approaches to noise and outliers. When contrasted with the other two techniques, the proposed segmentation method performs satisfactorily.
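For reference, a commonly cited form of the kernelized objective (not necessarily the exact formulation used in [12, 13]) replaces the squared Euclidean distance of standard FCM with a kernel-induced distance, assuming a Gaussian kernel:

$$J_m = 2\sum_{i=1}^{c}\sum_{k=1}^{n} u_{ik}^{m}\bigl(1 - K(x_k, v_i)\bigr), \qquad K(x_k, v_i) = \exp\!\left(-\frac{\lVert x_k - v_i\rVert^{2}}{\sigma^{2}}\right)$$

where $u_{ik}$ is the membership of sample $x_k$ in cluster $i$, $v_i$ is the cluster centre, $m>1$ is the fuzzifier, and the term $1 - K(x_k, v_i)$ plays the role of the distance that the Euclidean norm plays in standard FCM.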

In the recommended wavelet-based approach, the outcome image is reconstructed after enhancement using soft thresholding of the wavelet decomposition [14]. The technology for computerized BT identification in MRI scans aids in extracting the region of interest from the images, which is very important to the doctors' diagnostic and therapeutic processes. The tumor is then segmented and detected from the brain MRI image using FCM clustering and seeded region growing [15]. The Relative Ultimate Measurement Accuracy (RUMA), Standard Deviation (SD), reliability, sensitivity, specificity, and accuracy metrics are used to evaluate the Sobel-based segmentation. The RUMA is determined for the segmented elements; the calculation is based on a comparison between the values of the characteristics retrieved from the segmented image and those of the reference image. The segmentation method has to have a lower RUMA value to perform better. Finally, wavelet-based decomposition is performed to smooth and de-noise the image [16].

The patch-based K-means approach is also used as a preprocessing step for skull stripping. The proposed approach outperformed earlier state-of-the-art algorithms against ground truth based on statistical volume measures. Additionally, it has been observed through experimentation that the Rough-Fuzzy C-Means (RFCM) approach outperforms Hard C-Means (HCM) and FCM in terms of segmentation results and accuracy [17]. A system for automatically segmenting BTs makes use of shape-based topological features and RFCM; overlapping partitions and uncertainty are managed via fuzzy membership in RFCM [18]. A technique for choosing initial centroids was also provided, which speeds up RFCM execution compared to random initial centroids [19, 20].

Several research articles were studied to find efficient brain tumor segmentation methods. Many of the articles used a clustering method for segmentation of the abnormal area of the brain. Before segmenting the abnormal tissue, it is necessary to identify the abnormal image from the database. The abnormality can be identified by extracting various statistical and texture features of the brain images. The extracted features are used to train a classifier network that determines the abnormality of the brain. The abnormal images were then subjected to various clustering algorithms, such as clustering, spatial fuzzy C-means, and hybrid clustering.

3. Proposed System

The tumour's perimeter and rate of growth were determined by the Region Growing (RG) technique. Region growing is a pixel-grouping process, shown in Figure 1. Region growing proceeds by evaluating neighbouring pixels: small areas expand into larger areas. Pixels that share the same set of characteristics can be assigned to a region during the growth process. The shape of each region grows based on the intensity and on the seed points associated with that region. The procedure starts at the seed points, and automated seed selection is carried out. Neighbouring pixels of the seeds are added as the region develops, and the operation halts as soon as no further points can be added to the region.

Figure 1. Proposed architecture
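A minimal sketch of the region-growing step described above is given below, written in Python for illustration rather than the MATLAB used in this work; the function name, the 4-connected neighbourhood, and the intensity tolerance `tol` are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from a seed pixel by adding 4-connected neighbours
    whose intensity lies within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(image[nr, nc]) - seed_val) <= tol:
                mask[nr, nc] = True          # pixel joins the region
                queue.append((nr, nc))
    return mask                              # growth halts when no pixel can be added
```

The loop stops exactly when no neighbouring pixel satisfies the homogeneity criterion, mirroring the halting condition described above.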

3.1 Image acquisition

The proposed technique is intended to accurately extract the tumor. It consists of seven phases, as illustrated in Figure 2, which include image capture, region-of-interest filtering, pre-processing, candidate region detection, edge identification, and dilation, as well as tumor classification. The original RGB image, with a size of 2048 × 1536 pixels, is used as the method's input. The following subsections provide information on the other stages.

Figure 2. Input image

3.2 Noise filtering (NF)

Noise filtering (NF) here uses a median filter, a non-linear digital filter that is excellent at reducing impulse noise (also known as salt-and-pepper noise) and sharp signal fluctuations. The grey level of an impulse-noise pixel differs from that of its neighbours in both directions. The common median procedure is carried out by moving an odd-sized window (such as a 3×3 window) over the image. As demonstrated in Figure 3, for a 3×3 window the sampled signal or image values are sorted at each window position, and the sample in the middle of the window is replaced with the group's median value.

Figure 3. Flow of proposed system
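A short sketch of the 3×3 median filtering step, written in Python for illustration (the paper's implementation is in MATLAB); `median_filter_3x3` is a hypothetical helper, and the commented scipy call shows an equivalent library route.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filter_3x3(image):
    """Slide a 3x3 window over the image and replace each centre pixel
    with the median of the nine samples in the window."""
    padded = np.pad(image, 1, mode='edge')
    out = np.empty_like(image)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            window = padded[r:r + 3, c:c + 3]
            out[r, c] = np.median(window)
    return out

# Equivalent one-liner with scipy:
# denoised = median_filter(noisy_image, size=3)
```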

3.3 Edge detection techniques

Edge detection is a method for locating and recognizing sharp discontinuities in an image. Discontinuities are abrupt changes in pixel intensity that indicate the edges of objects in a scene. Convolving the image with an operator that returns large values at strong gradients and values near zero in constant regions is common to all basic edge detection algorithms. Numerous edge detection algorithms are available, and each is built to be sensitive to a particular class of edges. The operator's geometry determines a characteristic direction in which it is most sensitive to edges; operators are designed to search for vertical, horizontal, or diagonal edges. Because both noise and edges have high-frequency content, edge identification is difficult to achieve in noisy images. Edge detection operators fall into one of two categories: first-order derivative (gradient-based) and second-order derivative (Laplacian-based) operators.
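As a concrete illustration of a first-order (gradient) operator, the Python sketch below applies Sobel kernels for vertical and horizontal edges; this operator choice is an assumption for illustration, not necessarily the operator used in this work.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient_magnitude(image):
    """Convolve with Sobel kernels and return the gradient magnitude;
    large values mark sharp intensity discontinuities (edges)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # responds to vertical edges
    ky = kx.T                                                         # responds to horizontal edges
    img = image.astype(float)
    gx = convolve(img, kx)
    gy = convolve(img, ky)
    return np.hypot(gx, gy)   # threshold this map to obtain a binary edge image
```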

Edge Detection and Segmentation

  1. Select the number of clusters 'C' to identify tumor and non-tumor.
  2. C = (number of clusters)
  3. Identify the region (R) of the tumor
  4. The region R = {r1, r2, r3, …, rn} (set of regions)
  5. Apply morphological operations (dilation and erosion) to segment the tumor
  6. The dilation process identifies the tumor growth
  7. Opening: $A \circ B=(A \ominus B) \oplus B$
  8. The erosion process is used to identify the seed point
  9. Closing: $A \bullet B=(A \oplus B) \ominus B$
  10. Region growing is based on the dilation process (D = dilation)
  11. D = {[Dr] | r = 1, 2, …, n} (set of regions 'R' labelled by the dilation 'D')
  12. Rx ← Cy ∈ D (random selection of a region in the dilation)
  13. Apply the erosion concept to identify the seed point
  14. S = seed point
  15. S = {[Sr] | r = 1, 2, 3, …, n} (seed points of the regions)
  16. Sx ← rx ∈ E (seed point of a region in the erosion E)
  17. Finally, the seed points and regions are selected and segmented
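The listing below sketches steps 5-17 in Python (scipy) under illustrative assumptions: a binary tumour candidate mask as input, a 3×3 structuring element B, and region centroids taken as the seed points. It is a sketch of the morphological idea, not the MATLAB implementation used in the paper.

```python
import numpy as np
from scipy import ndimage

def segment_regions(binary_mask):
    """Opening (erode then dilate) and closing (dilate then erode) clean the
    mask; connected regions r1..rn are then labelled and seed points chosen."""
    B = np.ones((3, 3), dtype=bool)                              # structuring element
    opened = ndimage.binary_opening(binary_mask, structure=B)    # A o B = (A erode B) dilate B
    closed = ndimage.binary_closing(opened, structure=B)         # A . B = (A dilate B) erode B
    labels, n_regions = ndimage.label(closed)                    # set of regions R = {r1, ..., rn}
    seeds = ndimage.center_of_mass(closed, labels, range(1, n_regions + 1))
    return labels, seeds                                         # regions and their seed points
```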

3.4 Image segmentation and morphological operations

Segmentation divides an image into its constituent regions or objects; it is the process of grouping pixels with related characteristics. An image is segmented into non-intersecting regions such that each region is homogeneous and no two adjacent regions, when merged, remain homogeneous. Challenges with segmentation are frequently linked to those of pattern recognition. It can also be referred to as object isolation and is regarded as the initial stage of a pattern recognition process. One typical image processing operation is converting a greyscale image to a monochrome (binary) image, as shown in Figure 4.

Figure 4. Image segmentation

The morphological properties of the image serve as the foundation for morphological operations. The two most often utilized basic morphological operations are dilation and erosion; objects in the image can be enlarged or reduced via dilation or erosion, respectively. The intensity level provides the foundation for edge detection, which uses the image's changing intensity levels. Edge identification with morphological operations is straightforward, and the segmentation results are highly accurate. Dilation and erosion are frequently employed to enhance and reconstruct images, as shown in Figures 5 and 6.

Figure 5. Low-grade tumor evaluation

Figure 6. High-grade tumor evaluation

Otsu thresholding is a crucial component of image segmentation; it is an automatic thresholding method that produces significantly better results than many other segmentation techniques, which is why we employ it here. Additionally, we employ computational morphological processes in our work for dilation, erosion, and extraction of only the affected region, as shown in Figure 7. The performance analysis of the proposed and existing systems is shown in Figures 8 and 9.

Figure 7. Morphological image boundary segmentation

Figure 8. Performance analysis (accuracy) of low and high grade

Figure 9. Performance analysis of low and high grade
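A brief sketch of the automatic Otsu thresholding and morphological extraction of the affected region described in Section 3.4, written in Python with scikit-image/scipy for illustration (the paper's pipeline is in MATLAB); keeping only the largest connected bright component is an assumption.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def otsu_candidate_region(gray_image):
    """Threshold the image automatically with Otsu's method, clean the mask
    morphologically, and keep the largest connected bright region."""
    t = threshold_otsu(gray_image)                 # automatic global threshold
    mask = gray_image > t
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)   # largest region as the candidate
```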

3.5 Classification and optimization

Convolutional neural networks are a particular kind of neural network that has proven highly effective at tasks such as image identification and classification, enabling self-driving cars and robot vision; RCNNs have proved excellent at recognizing faces, objects, illnesses, and traffic signs. As an illustration, such a network can separate an input image into four classes: bird, cat, boat, or dog (the initial LeNet was mostly employed for character identification tasks), and its architecture is similar to that of the original LeNet. Given an image of a boat as input, a well-trained network properly assigns the greatest likelihood (0.94) to the boat class, and the output layer's likelihoods sum to one. The RCNN has four primary operations, as depicted in Figure 10.

An RCNN consists of an input layer, an output layer, and several hidden layers. Instead of being arranged in two dimensions as in a typical neural network, the neurons in each layer of a convolutional network are arranged in three dimensions (width, height, and depth). Owing to this, the RCNN can transform a three-dimensional input volume into an output volume. The hidden layers are composed of convolutional, pooling, normalization, and fully connected layers. RCNNs employ several convolutional layers to filter input volumes to greater levels of abstraction.
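For orientation only, the sketch below shows a small convolutional classifier built from the layer types just described (convolution, ReLU, pooling, fully connected, dropout). It is written in Python/PyTorch and is not the authors' IWO-RCNN; the input size (single-channel 128×128 slices), the channel counts, and the two output classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy convolutional classifier: three conv/ReLU/pool blocks followed by
    fully connected layers, mapping a 1x128x128 slice to two class scores."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# logits = SmallCNN()(torch.randn(4, 1, 128, 128))   # batch of 4 slices -> shape (4, 2)
```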

3.6 IWO-RCNN

IWO-RCNN can be thought of as a binary classification method that iteratively combines neighboring pixels of comparable intensities. The IWO-RCNN method builds on a special neural network framework modeled on the cat's visual cortex. Its main feature is the presence of a supplementary receptive field called the "linking" field, which combines the inputs of neighboring neurons to modulate the main "feeding" field. This neural network differs from ordinary NNs in that no training is necessary.

Each neuron of the network corresponds to a single voxel of the grayscale input image. The neuron receives the voxel itself and the weighted outputs of adjacent voxels as inputs. At each iteration, the "feeding" field, computed from these inputs, is updated. The "feeding" field is modulated by the "linking" field, which only accepts input from adjacent voxels; a linking coefficient regulates the strength of this modulation. The modulated value, known as the neuron's "internal activation", is then compared with a "threshold" field to determine whether the neuron is "switched on". This binary output is passed to the subsequent cycle.

When adjacent voxels have comparable intensity values, neurons are "switched on" faster, enabling stimulation to spread quickly (within a few iterations) through regions of similar intensity. The boundary between the cerebrospinal fluid and the cerebral cortex is one area with strong image gradients, where it requires multiple steps for neighboring voxels to become "switched on". If a continuous boundary is present, the stimulation of voxels beyond the enclosed area pauses temporarily until the "linking" and "feeding" fields reach a certain value; keeping track of the overall number of voxels that are "on" before every iteration therefore helps delineate the enclosed region, even for unusually positioned objects.

Pooling layers provide a certain degree of translation and rotation invariance. By using less memory, pooling also makes it possible to use more convolutional layers. Normalization stages normalize over local input regions by shifting all inputs in a region to zero mean and unit variance. Other regularization techniques can also be used, such as batch normalization, which normalizes the activations of an entire batch, or dropout, which ignores randomly chosen neurons during learning. The neurons of fully connected layers calculate dot products just as the neurons of convolutional layers do, but they differ in that they are connected to all of the activations from the preceding layer.
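The feeding/linking mechanism above can be sketched as a simplified pulse-coupled-style iteration. The Python/NumPy code below is a minimal illustration, not the IWO-RCNN implementation; the linking kernel, linking coefficient, threshold gain, and decay values are all assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def pulse_coupled_fire_map(image, n_iter=10, beta=0.2, v_theta=20.0, decay=0.7):
    """Iterate a simplified feeding/linking update: the stimulus feeds each
    neuron, pulses of neighbours modulate it, and a dynamic threshold decays."""
    S = image.astype(float) / float(image.max())          # normalised feeding stimulus
    link_kernel = np.array([[0.5, 1.0, 0.5],
                            [1.0, 0.0, 1.0],
                            [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)                                  # binary pulse output
    theta = np.ones_like(S)                               # dynamic threshold field
    fired = np.zeros(S.shape, dtype=bool)
    for _ in range(n_iter):
        L = convolve(Y, link_kernel)                      # "linking": neighbouring pulses
        U = S * (1.0 + beta * L)                          # internal activation
        Y = (U > theta).astype(float)                     # neurons switched on this pass
        theta = decay * theta + v_theta * Y               # raise threshold where a pulse fired
        fired |= Y.astype(bool)
    return fired                                          # voxels that pulsed within n_iter steps
```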

Figure 10. RCNN brain-extraction method's workflow

ReLU Layer: A node that uses the rectifier activation function is referred to as a ReLU node. The rectifier function, f(x) = max(0, x), can be employed as the neurons' activation function.

Pooling or Sub-Sampling: In the local or global pooling stages of convolutional systems, the outputs of groups of neurons at one layer are combined into a single neuron at the subsequent layer; in max pooling, for instance, the greatest value of each group of neurons in the preceding layer is used. Classification (Fully Connected Layer): After several convolutional and max pooling layers, the neural network uses fully connected layers to perform high-level reasoning. The RCNN with IWO is shown in Figure 11, and the result is explained in Figure 12.
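The two operations just described reduce to a few lines; the NumPy sketch below is purely illustrative and assumes non-overlapping 2×2 pooling.

```python
import numpy as np

def relu(x):
    """Rectifier activation: f(x) = max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def max_pool_2x2(feature_map):
    """Non-overlapping 2x2 max pooling: each output value is the greatest
    response within its 2x2 block of the input feature map."""
    h, w = feature_map.shape
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))
```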

Figure 11. RCNN with IWO

Weka Simulation: Weka is a collection of machine learning techniques for data mining applications. Weka provides tools for association rules, visualization, classification, regression, clustering, and data pre-processing. Weka version 3.8.3 is used in this research for classification, as shown in Figure 13.

Figure 12. BT identification using IWO-RCNN

Figure 13. Proposed system implementation

4. Results and Discussions

4.1 MRI

All animal procedures were approved by the Institutional Animal Care and Use Committees of the Biomedical Sciences Institutes, Agency for Science, Technology and Research (A*STAR). Ten mature male C57BL/6 mice (weighing between 23 and 28 g) were first anesthetized using 1.5% isoflurane mixed with 2:1 air. While being imaged in a stereotactic holder, the breathing and temperature of the anesthetized mice were monitored.

4.2 Comparison with other automatic methods

The dataset of brain MRI images was taken from the Kaggle and URI repositories. Examples of the four approaches' identification of brain boundaries for three slices from different slice databases are shown in Figure 14.

Figure 14. Experimental results

Figure 15. Performance measures

Figure 16. Tumor detected using proposed system

Of the automated techniques, only IWO-RCNN can accurately segment the olfactory bulb. PCNN and IWO-RCNN perform admirably on coronal slices through the middle of the cerebrum, but BSE and CLS have trouble precisely segmenting the underside of the cerebellum. IWO-RCNN outperforms CNN in the low-contrast regions of the cerebrum for slices displaying the cerebellum and brainstem, as shown in Figure 15.

Our boundary identification technique seeks to identify the tumor's perimeter in each image slice and distinguish it from healthy brain tissue. Once the tumor has been separated, additional processing can be done to estimate its volume and render it in three dimensions. The proposed method's performance was assessed on several MRI data sets. A data set of an axial MRI with 74 slices is used in the first experiment, as shown in Figure 16.
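Once the per-slice tumour masks are available, the volume estimate mentioned above amounts to counting segmented voxels and scaling by the voxel size. The Python sketch below is illustrative only; the pixel spacing and slice thickness are placeholder values, not the acquisition parameters of the data sets used here.

```python
import numpy as np

def tumour_volume_mm3(slice_masks, pixel_spacing_mm=(0.5, 0.5), slice_thickness_mm=1.0):
    """Estimate tumour volume from a stack of binary slice masks by counting
    segmented voxels and multiplying by the volume of one voxel."""
    voxel_volume = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm  # mm^3
    n_voxels = sum(int(np.count_nonzero(m)) for m in slice_masks)
    return n_voxels * voxel_volume  # mm^3
```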

4.3 Limitations of IWO-RCNN

There are a few drawbacks to IWO-RCNN. Like all of the other automatic approaches discussed in this paper, it has difficulty with images that show poor contrast between the cortex and the skull, such as high-field T1-weighted rodent brain scans. In some animals, we additionally captured 3-D MPRAGE T1-weighted images. The contrast of the brain boundaries was, however, fairly poor because of the reduced intensity of grey matter (as opposed to T2-weighted images, where the grey matter has an increased intensity). This may be the reason that the majority of MRI rodent brain atlases to date have been acquired with T2-weighting. Consequently, acquiring T2-weighted data is advised for segmentation and registration.

5. Conclusions

The goal of this paper is to examine the theory underlying the development of the IWO-RCNN autonomous system, examining and addressing the problems of the methods that have been utilized in recent years to identify BT. Researchers are concentrating on the number of iterations (hidden layers) because classification is crucial to tumor identification. The outcome of the proposed procedure demonstrates improved precision in extracting the diseased zone. Early detection of tumors by tumor detection systems is crucial for saving lives. The system, implemented using MATLAB R2014a, achieves an accuracy of 98.7%, a precision of 96.5%, and a recall of 95.6%.

References

[1] Latchoumi, T.P., Ezhilarasi, T.P., Balamurugan, K. (2019). Bio-inspired weighed quantum particle swarm optimization and smooth support vector machine ensembles for identification of abnormalities in medical data. SN Applied Sciences, 1(10): 1137. https://doi.org/10.1007/s42452-019-1179-8

[2] Garikapati, P., Balamurugan, K., Latchoumi, T.P., Malkapuram, R. (2021). A cluster-profile comparative study on machining AlSi7/63% of SiC hybrid composite using agglomerative hierarchical clustering and K-means. Silicon, 13(4): 961-972. https://doi.org/10.1007/s12633-020-00447-9

[3] Latchoumi, T.P., Parthiban, L. (2022). Quasi oppositional dragonfly algorithm for load balancing in cloud computing environment. Wireless Personal Communications, 122(3): 2639-2656. https://doi.org/10.1007/s11277-021-09022-w

[4] Talukder, M.A., Islam, M.M., Uddin, M.A., Akhter, A., Pramanik, M.A.J., Aryal, S., Almoyad, M.A.A., Hasan, K.F., Moni, M.A. (2023). An efficient deep learning model to categorize BT using reconstruction and fine-tuning. Expert Systems with Applications, 2023: 120534. https://doi.org/10.1016/j.eswa.2023.120534

[5] Jyothi, P., Singh, A.R. (2023). Deep learning models and traditional automated techniques for BT segmentation in MRI: A review. Artificial Intelligence Review, 56(4): 2923-2969. https://doi.org/10.1007/s10462-022-10245-x

[6] Liu, Z., Tong, L., Chen, L., Jiang, Z., Zhou, F., Zhang, Q., Zhang, X., Jin, Y., Zhou, H. (2023). Deep learning based BT segmentation: A survey. Complex & Intelligent Systems, 9(1): 1001-1026. https://doi.org/10.1007/s40747-022-00815-5

[7] Saeedi, S., Rezayi, S., Keshavarz, H., Kalhori, S.R.N. (2023). MRI-based BT detection using convolutional deep learning methods and chosen machine learning techniques. BMC Medical Informatics and Decision Making, 23(1): 1-17. https://doi.org/10.1186/s12911-023-02114-6

[8] Demir, F., Akbulut, Y., Taşcı, B., Demir, K. (2023). Improving BT classification performance with an effective approach based on new deep learning model named 3ACL from 3D MRI data. Biomedical Signal Processing and Control, 81: 104424. https://doi.org/10.1016/j.bspc.2022.104424

[9] Mahmud, M.I., Mamun, M., Abdelgawad, A. (2023). A deep analysis of BT detection from mr images using deep learning networks. Algorithms, 16(4): 176. https://doi.org/10.3390/a16040176

[10] Steyaert, S., Qiu, Y. L., Zheng, Y., Mukherjee, P., Vogel, H., Gevaert, O. (2023). Multimodal deep learning to predict prognosis in adult and pediatric brain tumors. Communications Medicine, 3(1): 44. https://doi.org/10.1038/s43856-023-00276-y

[11] Hossain, A., Islam, M.T., Abdul Rahim, S.K., Rahman, M.A., Rahman, T., Arshad, H., Khandakar, A., Ayari, M.A., Chowdhury, M.E. (2023). A lightweight deep learning based microwave brain image network model for BT classification using reconstructed microwave brain (RMB) images. Biosensors, 13(2): 238. https://doi.org/10.3390/bios13020238

[12] Mijwil, M.M., Doshi, R., Hiran, K.K., Unogwu, O.J., Bala, I. (2023). MobileNetV1-based deep learning model for accurate BT classification. Mesopotamian Journal of Computer Science, 2023: 32-41. https://doi.org/10.58496/MJCSC/2023/005

[13] Ali, M.U., Hussain, S.J., Zafar, A., Bhutta, M.R., Lee, S.W. (2023). WBM-DLNets: Wrapper-based metaheuristic deep learning networks feature optimization for enhancing BT detection. Bioengineering, 10(4): 475. https://doi.org/10.3390/bioengineering10040475

[14] Asif, S., Zhao, M., Tang, F., Zhu, Y. (2023). An enhanced deep learning method for multi-class BT classification using deep transfer learning. Multimedia Tools and Applications, 1-28. https://doi.org/10.1007/s11042-023-14828-w

[15] Pacheco, B.M., de Souza e Cassia, G., Silva, D. (2023). Towards fully automated deep-learning-based BT segmentation: Is brain extraction still necessary?. Biomedical Signal Processing and Control, 82: 104514. https://doi.org/10.1016/j.bspc.2022.104514

[16] Ruba, T., Tamilselvi, R., Beham, M.P. (2023). Brain tumor segmentation using JGate-AttResUNet–A novel deep learning approach. Biomedical Signal Processing and Control, 84: 104926. https://doi.org/10.1016/j.bspc.2023.104926

[17] Kalyani, B.J.D., Meena, K., Murali, E., Jayakumar, L., Saravanan, D. (2023). Analysis of MRI brain tumor images using deep learning techniques. Soft Computing, 27: 7535-7542. https://doi.org/10.1007/s00500-023-07921-7

[18] Emam, M.M., Samee, N.A., Jamjoom, M.M., Houssein, E.H. (2023). Optimized deep learning architecture for BT classification using improved hunger games search algorithm. Computers in Biology and Medicine, 160: 106966. https://doi.org/10.1016/j.compbiomed.2023.106966

[19] Wang, B., Zhao, H., Wang, X., Lyu, G., Chen, K., Xu, J., Cui, G.S., Zhong, L.H., Yu, L., Huang, H.B., Sheng, Q. (2024). Bamboo classification based on GEDI, time-series Sentinel-2 images and whale-optimized, dual-channel DenseNet: A case study in Zhejiang province, China. ISPRS Journal of Photogrammetry and Remote Sensing, 209: 312-323. https://doi.org/10.1016/j.isprsjprs.2024.02.002

[20] Tian, H., Wu, H. (2024). Fault distance measurement method based on wavelet energy spectrum and BWO algorithm optimized CNN-GRU hybrid neural network. Academic Journal of Science and Technology, 11(1): 247-256. https://doi.org/10.54097/s30txd46