Optimized Graph-Based Segmentation for Brain Tumor Detection

D. Vamsidhar, Parth Desai, Sagar Joshi, Priyanka V. Deshmukh*, Anjali S. More

Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune 412115, India

Corresponding Author Email: priyanka.deshmukh@sitpune.edu.in
Page: 771-778 | DOI: https://doi.org/10.18280/isi.300321

Received: 13 January 2025 | Revised: 3 March 2025 | Accepted: 14 March 2025 | Available online: 31 March 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Brain tumor segmentation is challenging in medical imaging because misclassifications or incorrect segmentations may cause grave treatment errors. This work normalizes MRI brain images to a standard scale and then performs segmentation with unsupervised machine learning algorithms. The pipeline involves two phases: a preliminary phase, in which the MRI data is preprocessed through methods such as cropping and contrast enhancement, and a segmentation phase, in which multiple algorithms are evaluated, including k-means clustering, fuzzy c-means, t-SNE, expectation-maximization, and graph-based segmentation (Improvised Felzenszwalb's Technique, IFT). Among these, graph-based segmentation outperformed the others, delineating the tumor region precisely based on intensity and spatial proximity. The graph-based algorithm constructs a graph using intensity and spatial features, and regions are segmented through iterative merging based on edge weights and internal differences. This method achieved precise tumor segmentation, visualized using overlaid color maps that delineate tumor regions. Results demonstrate that the graph-based segmentation technique is computationally efficient, lightweight, and outperforms the other approaches, making it a promising step towards enhanced tumor segmentation in brain MRI analysis. However, some edge cases where the segmentation failed highlight areas for improvement. This research provides a pathway towards an efficient pipeline to support clinical decision-making in the diagnosis of brain tumors.

Keywords: 

AI in healthcare, brain tumor segmentation, graph-based segmentation, image pre-processing, contrast enhancement

1. Introduction

1.1 Background

Brain tumors are a significant concern for healthcare professionals and care seekers alike due to their severe effects on patients' mental and physical health and their contribution to mortality. Magnetic Resonance Imaging (MRI) is a revolutionary invention that has improved the quality of diagnosis of various health conditions and helped both doctors and patients irrespective of their economic background [1]. MRI produces high-resolution images of internal organs of the human body such as the brain, pancreas, and kidneys. In particular, MRI helps in detecting brain tumors, which form due to uncontrolled growth of brain cells that absorb nutrients meant for healthy cells, starving the surrounding tissue. Figure 1 shows an illustrative diagram and an MRI image of a brain with a tumor.

Studies show that there are 7 to 11 new cases of brain tumor per 100,000 people per year across age groups. Many surveys and studies indicate that the number of brain tumor patients is increasing rapidly, and some estimate that 227,000 people die of this illness every year. An early diagnosis of a brain tumor may help prevent disability [2]. The use of Artificial Intelligence (AI) in healthcare has had many positive impacts, one of which is the application of pre-trained deep learning models for brain tumor detection [3, 4].

Figure 1. Structure of the brain with a tumor and MRI images of the brain showing the tumor

1.2 Motivation

The main motivation behind working on brain tumor detection is to build a model that accurately detects the tumor in the brain, which helps clinicians diagnose and understand a patient's condition and plan treatment. Tumor detection generally requires human involvement and is therefore prone to potentially dangerous human error [5]; an automated approach helps avoid such consequences. A brain tumor is one of the most dangerous health disorders the human body can manifest. It is typically classified into two types: primary and secondary. A primary brain tumor forms in the brain itself and stays there, whereas a secondary brain tumor starts in a different part of the body and spreads to the brain [6].

2. Literature Survey

The authors used a Multi-Planar U-Net (MPUnet) to segment different subregions of tumors across three challenging datasets: the Paediatrics Tumor Challenge (PED), Brain Metastasis Challenge (MET), and Sub-Sahara-Africa Adult Glioma (SSA). The MPUnet architecture aims to enhance segmentation accuracy by using multi-planar information. The paper shows that brain tumor segmentation is challenging and requires the improvement of the MPUnet for better performance as well as the availability of MRI images [7].

There are papers which proposed unsupervised machine learning techniques, such as singular value decomposition, for image preprocessing and noise removal. The study highlights the application of Multiresolution Singular Value Decomposition (MSVD) to effectively reduce image noise and employs a deep neural network for tumor segmentation [8].

The authors presented a kernel dictionary learning algorithm for brain tumor segmentation. The MRI brain image scans are divided into 3 × 3-pixel patches. The first-order and second-order statistical feature vectors are extracted from the scans. For extracting these feature vectors, a correlation-based sampling technique was developed that enhanced the efficiency of the dictionary and reduced the training time. Furthermore, a subset of feature vectors improves the quality of segmentation [9].

The authors proposed a new pipeline called Brain Radiology Aided by Intelligent Neural NETworks, BRAINNET, that uses a vision transformer model MaskFormer for the robust tumor segmentation masks. It is an ensemble of nine predictions based on three independent models that are trained on one of the three orthogonal 2D slice directions of a 3D brain MRI volume: axial, sagittal, and coronal. The models were trained and tested on the publicly available UPennGBM dataset, which includes 3D multi-parametric MRI (mpMRI) scans from 611 subjects [10].

The authors introduced a novel brain tumor segmentation framework termed the Two-Stage Generative Model (TSGM), which integrates a cycle-based generative adversarial network with a variance-exploding stochastic differential equation using joint probability (VE-JP). Their approach trains CycleGAN on unpaired data to transform healthy images into abnormal ones as prior data; VE-JP is then used to reconstruct healthy images using the synthetic paired abnormal images as a reference. With this method, only pathological regions are modified without affecting the healthy regions [11].

Researchers have investigated whether deep learning applied to multi-modality MRI data can efficiently and accurately segment brain tumors in Sub-Saharan African patients. They used an ensemble-based method consisting of eleven distinct models drawn from three primary architectures, UNet3D, ONet3D, and SphereNet3D, combined with custom loss functions [12].

The author has proposed a method to detect abnormalities in the brain by developing a general unsupervised anomaly detection based only on MRI scans of normal brains. A 3D deep autoencoder network was developed which detects and segments various brain abnormalities without being trained on abnormal brain MRI images. Unsupervised anomaly detection is a powerful approach to detect brain abnormalities without labeled samples [13].

Other work proposed a multi-modal MRI brain tumor segmentation approach that uses a prototype-driven mechanism with multi-expert integration [14]. It focuses on exploiting the features of every tumor sub-region under the guidance of prototypes. To ensure the prototypes capture comprehensive information, the authors presented a mutual transmission mechanism that transfers features between modalities, addressing the problem of feature insufficiency in a single modality [15].

Jiang et al. [16] presented BrainNet which represents the first brain-to-brain interface (BBI) that uses electroencephalography (EEG) for direct communication among multiple participants through transcranial magnetic stimulation (TMS). Three participants successfully used their neural signals to play a Tetris-like game by jointly solving it through a nonverbal and interaction-free communication system. Brain-based interfaces show promising potential in multi-person communication because this research develops new technologies which enable group decision-making and brain-based computing applications.

Nema et al. [17] developed RescueNet as a GAN-based framework for MRI image brain tumor segmentation. RescueNet differs from usual paired techniques because its unpaired GAN architecture produces high-quality results without needing pixel-by-pixel annotations. The model achieved both enhanced accuracy levels and operational stability across multiple tumor kinds thus establishing a strong tool for medical diagnosis automation and radiology needs.

Akbar et al. [18] created a brain tumor segmentation model using a single-level 3D U-Net design that contains multipath residual attention blocks for better performance. The algorithm delivers quick comprehension of distant and surrounding elements which enhances recognition accuracy for intricate tumors. The research assessment proved the model's exceptional performance on benchmark data while providing better results than conventional deep learning methods which indicates its potential for clinical implementation in automatic tumor detection applications.

Pinaya et al. [19] developed a transformer-based approach for unsupervised anomaly detection alongside segmentation tasks in 3D brain imaging. Self-attention mechanisms in this model lead to successful abnormality identification while working without needing any labeled examples. The research validated its usefulness for brain lesion and tumor recognition by performing better than conventional convolutional methods when analyzing medical images which demonstrated transformer technology's potential applications in medical diagnostics.

Peng and Sun [20] introduced AD-Net which represents a deep learning framework that utilizes MRI modalities for brain tumor segmentation. AD-Net enhances its MRI modality processing by employing adaptive functions which improve both segmentation reliability and accuracy. The model surpassed previous methods in tumor localization and boundary refinement while achieving promising results during tests with publicly available datasets thus making it a suitable tool for clinical diagnostics along with personalized treatment planning.

3. Methodology

Figure 2 shows the two phases of the method: the preliminary phase and the segmentation phase.

Figure 2. Block diagram of the method

3.1 Preliminary phase

This is the first phase where tasks like data collection, and data preparation are performed which are essential to ensure that the input data is suitable for segmentation. It includes the following components:

Data collection: In this phase, relevant data is acquired, such as MRI scan images of brains with tumors and of healthy brains.

Data preparation: After data acquisition, it is necessary to perform preprocessing steps such as image cropping and contrast enhancement to ensure the data is suitable for segmentation.

3.2 Segmentation phase

This is the core phase, where the tumor is segmented from the brain. To achieve the desired results, multiple algorithms were evaluated to compare the quality of segmentation across techniques. To explore the capabilities of unsupervised machine learning, techniques such as k-means clustering and fuzzy c-means were applied. The results are shown in Figures 3-7.

Algorithm k-means clustering

Step 1: Import the necessary libraries, such as numpy, cv2, and a k-means clustering implementation.

Step 2: Load the MRI image.

Step 3: Preprocess the image through resizing and flattening which is necessary for k-means clustering.

Step 4: Train k-means clustering algorithm with n_clusters = 3. This will group similar pixels into 3 clusters.

Step 5: Visualize the new labels into a segmented image.
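A minimal sketch of these steps is given below, assuming OpenCV and scikit-learn as the k-means implementation; the file name, resized dimensions, and output path are illustrative assumptions rather than values reported in the paper.

# Sketch of the k-means segmentation steps (assumed libraries: OpenCV, scikit-learn)
import cv2
import numpy as np
from sklearn.cluster import KMeans

image = cv2.imread("brain_mri.jpg", cv2.IMREAD_GRAYSCALE)      # Step 2: load the MRI image
image = cv2.resize(image, (128, 128))                          # Step 3: resize
pixels = image.reshape(-1, 1).astype(np.float32)               # Step 3: flatten to one feature per pixel

kmeans = KMeans(n_clusters=3, random_state=0).fit(pixels)      # Step 4: group similar pixels into 3 clusters
segmented = kmeans.labels_.reshape(image.shape)                # Step 5: map cluster labels back to the image grid
cv2.imwrite("kmeans_segmented.png", (segmented * 127).astype(np.uint8))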

Figure 3 shows the results of using k-means clustering to segment and highlight the brain tumor.

Figure 3. Result of k-means clustering program for segmentation

Figure 4. Results of t-SNE program for segmentation

Algorithm t-SNE

Step 1: Import necessary libraries numpy, cv2, TSNE.

Step 2: Load the MRI image.

Step 3: Normalize the image's pixel values by dividing them by 255.

Step 4: Extract overlapping patches from the images and apply t-SNE. This will result in data being reduced to 2D spaces.

Step 5: Visualize the t-SNE result with matplotlib.pyplot.scatter.
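A minimal sketch of these steps is given below, assuming scikit-learn's TSNE and its patch extraction utility; the patch size, number of patches, and perplexity are illustrative assumptions, not values taken from the paper.

# Sketch of the t-SNE steps (assumed libraries: OpenCV, scikit-learn, matplotlib)
import cv2
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.feature_extraction.image import extract_patches_2d

image = cv2.imread("brain_mri.jpg", cv2.IMREAD_GRAYSCALE) / 255.0              # Steps 2-3: load and normalize
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)  # Step 4: overlapping patches
features = patches.reshape(len(patches), -1)                                   # flatten each patch to a vector

embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)  # reduce to 2D

plt.scatter(embedded[:, 0], embedded[:, 1], c=features.mean(axis=1), cmap="viridis", s=4)  # Step 5: color by patch intensity
plt.title("t-SNE embedding of MRI patches")
plt.show()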

Figure 4 is the result plot obtained by applying t-SNE to highlight the brain tumor.

Algorithm expectation-maximization

Step 1: Import necessary libraries matplotlib.pyplot, sklearn.mixture.

Step 2: Load the MRI image.

Step 3: Normalize the image's pixel values by dividing them by 255 to scale them between 0 and 1.

Step 4: Apply the Expectation-Maximization algorithm to the normalized images in the previous step.

Step 5: Visualize the EM (Expectation-Maximization) algorithm result.
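A minimal sketch of these steps is given below, using scikit-learn's GaussianMixture as the expectation-maximization implementation; the number of mixture components and the file name are illustrative assumptions.

# Sketch of the expectation-maximization steps (assumed libraries: OpenCV, scikit-learn, matplotlib)
import cv2
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture

image = cv2.imread("brain_mri.jpg", cv2.IMREAD_GRAYSCALE) / 255.0   # Steps 2-3: load and scale to [0, 1]
pixels = image.reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels)   # Step 4: EM fitting of a Gaussian mixture
labels = gmm.predict(pixels).reshape(image.shape)                   # per-pixel component assignment

plt.imshow(labels, cmap="viridis")                                  # Step 5: visualize the result
plt.title("EM (Gaussian mixture) segmentation")
plt.show()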

Figure 5 shows the implementation of expectation-maximization algorithm to highlight the brain tumor.

Figure 5. Results of expectation-maximization algorithm for segmentation

Algorithm fuzzy c means

Step 1: Import necessary libraries matplotlib.pyplot, fcmeans.

Step 2: Load the MRI image.

Step 3: Normalize the image's pixel values by dividing them by 255 to scale them between 0 and 1.

Step 4: Apply the Fuzzy C Means algorithm to the normalized images in the previous step.

Step 5: Visualize the Fuzzy C Means algorithm result.
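A minimal sketch of these steps is given below, assuming the fuzzy-c-means package (imported as fcmeans) as in the steps above; the cluster count and file name are illustrative assumptions.

# Sketch of the fuzzy c-means steps (assumed libraries: OpenCV, fcmeans, matplotlib)
import cv2
import matplotlib.pyplot as plt
from fcmeans import FCM

image = cv2.imread("brain_mri.jpg", cv2.IMREAD_GRAYSCALE) / 255.0   # Steps 2-3: load and scale to [0, 1]
pixels = image.reshape(-1, 1)

fcm = FCM(n_clusters=3)
fcm.fit(pixels)                                                     # Step 4: fuzzy clustering of pixel intensities
labels = fcm.predict(pixels).reshape(image.shape)

plt.imshow(labels, cmap="viridis")                                  # Step 5: visualize the result
plt.title("Fuzzy c-means segmentation")
plt.show()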

Figure 6 is the result obtained by implementing fuzzy c means algorithm to highlight the brain tumor.

Algorithm graph-based segmentation with IFT

Step 1: Import necessary libraries matplotlib.pyplot, skimage.segmentation.

Step 2: Load the MRI image.

Step 3: Normalize the image's pixel values by dividing them by 255 to scale them between 0 and 1.

Step 4: Apply IFT algorithm to the normalized images in the previous step.

Step 5: Visualize the IFT algorithm result.
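A minimal sketch of these steps is given below, using scikit-image's felzenszwalb function as the underlying graph-based segmentation; the scale, sigma, and min_size values shown are illustrative defaults, and the adaptive IFT refinements are detailed in Section 3.4.

# Sketch of the graph-based segmentation steps (assumed libraries: OpenCV, scikit-image, matplotlib)
import cv2
import matplotlib.pyplot as plt
from skimage.segmentation import felzenszwalb
from skimage.color import label2rgb

image = cv2.imread("brain_mri.jpg", cv2.IMREAD_GRAYSCALE) / 255.0    # Steps 2-3: load and scale to [0, 1]
segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)    # Step 4: graph construction and region merging

plt.imshow(label2rgb(segments, image, kind="overlay"))               # Step 5: color each segmented region
plt.title("Graph-based (Felzenszwalb) segmentation")
plt.show()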

Figure 7 demonstrates that the graph-based segmentation method has been the most effective among the algorithms tested. Building on this success, we have further refined the technique with improved parameters.

Figure 6. Results of fuzzy c means for segmentation

Figure 7. Results of graph-based segmentation

3.3 Datasets description

The research dataset was obtained from Kaggle and contains 3,000 images. The images are evenly split between the "Yes" (tumor) class with 1,500 images and the "No" (no tumor) class with 1,500 images, creating a balanced dataset, as shown in Figures 8 and 9. A perfectly balanced dataset offers several advantages: it reduces model bias and provides equal representation of the classes, which leads to better classification outcomes.

Image resolution and acquisition conditions: Each dataset image has a horizontal and vertical resolution of 96 dots per inch. The images were collected through standardized acquisition procedures, which minimizes differences in brightness, contrast, and noise levels. These standardized acquisition conditions support better segmentation performance through uniform image quality and format.

Preprocessing requirements: The images undergo basic processing steps, including dimension standardization through cropping and resizing, before they can be used. The pixel intensity values within the dataset also benefit from intensity normalization to create consistent values. This essential step improves generalization and the effectiveness of pattern detection.

An even distribution of images across classes reduces the effort needed to manage the data compared with situations where classes are imbalanced. An unbalanced dataset biases model predictions because the model learns to recognize one class, such as "No Tumor", more frequently than the other, such as "Tumor". This dataset maintains a balanced structure, which supports trustworthy and unbiased results for segmentation.

Figure 8. Brain MRI images that have tumor

Figure 9. Brain MRI images that do not have tumor

3.4 Algorithm and mathematical modelling

The proposed IFT algorithm builds on graph-based segmentation, a widely used unsupervised machine-learning technique for partitioning images into meaningful regions. The method segments the tumor from the brain based on the intensity and spatial proximity of the tumor-infected region. The tumor-infected region becomes distinguishable at the pixel level because its average intensity differs from the surrounding tissue, a difference accentuated by preprocessing techniques such as contrast enhancement.

Hardware/Software Requirements

•Hardware used:

OS: Windows 11

CPU: AMD Ryzen 5 5600H, clock speed 3301 MHz

GPU: NVIDIA GeForce GTX 1650

RAM: 16 GB

•Software/tech stack used:

Language: Python

IDE: VS Code

Technologies: Unsupervised machine learning models

The provided code implements brain tumor segmentation for MRI images using the IFT. The method begins by loading an MRI image, converting it to grayscale, normalizing it, and reducing noise with a Gaussian filter. Adaptive parameter estimation then follows: an image entropy calculation is used to adjust the segmentation parameters.

3.5 Improvements of IFT over the original Felzenszwalb algorithm

The IFT contains several improvements over the original Felzenszwalb algorithm. Unlike the original method, IFT's segmentation parameters are adaptive: their values are determined from the image entropy, so segmentation output is optimized for MRI scans regardless of their contrast or intensity patterns. The fusion procedure in IFT combines multiple segmentation scales, preserving fine detail while producing correct global segmentation. The GLCM (Gray Level Co-occurrence Matrix) step selects tumor regions by analysing texture and intensity differences, which improves the separation of tumor areas.

In addition, IFT performs two refinement functions: it removes tiny, ambiguous segments unrelated to the tumor and then applies edge-smoothing techniques to optimize the result.

These developments yield more precise tumor segmentation, greater robustness to image noise, and better adaptability to variations across MRI images. In the final output, the segmented tumor region is displayed as an overlay on the original MRI image.

Algorithm graph-based segmentation (IFT)

Step 1: Take an image as input.

Step 2: Mention the parameters k (controls the granularity of segmentation), σ (Standard deviation of Gaussian smoothing), and min_size (minimum size of segments).

Step 3: Construction of graph using pixels as nodes, and edge weights which represent intensity differences.

Step 4: Edges are sorted in an ascending order.

Step 5: Merge regions across edges while the internal difference condition is satisfied.

Step 6: Plotting the segmented image with each region represented with a different color.

The code provided implements a brain tumor segmentation technique for MRI images using the optimized Felzenszwalb method. It begins by loading the MRI image and preprocessing it: converting the image to grayscale, normalizing it, and reducing noise using a Gaussian filter. Adaptive segmentation parameter estimation is then performed, whereby the image entropy is calculated so that segmentation parameters are adjusted efficiently. Core segmentation uses a region fusion approach that enhances tumor region detection, together with GLCM (Gray Level Co-occurrence Matrix) based texture-intensity selection to isolate the tumor region in question. Small segments are removed and binary closing is applied to smooth the edges, further refining the segmentation. Finally, the results are shown with the tumor region overlaid on the original MRI image. Adaptive tuning combined with multiscale fusion and texture-based refinement ensures reliable tumor segmentation.

# Imports required by this snippet
import numpy as np
from skimage import filters, morphology, feature
from skimage.segmentation import felzenszwalb

# Adaptive parameter tuning based on image entropy
def adaptive_parameters(image):
    entropy = filters.rank.entropy((image * 255).astype(np.uint8), morphology.disk(5))
    avg_entropy = np.mean(entropy)
    scale = 150 if avg_entropy > 4 else 100
    sigma = 0.6 if avg_entropy > 4 else 0.8
    min_size = 75 if avg_entropy > 4 else 50
    return scale, sigma, min_size

# Multiscale segmentation with fusion
def apply_multiscale_segmentation(image):
    scales = [50, 100, 150]
    sigma = 0.8
    min_size = 50
    fused_mask = np.zeros_like(image, dtype=bool)
    for scale in scales:
        segmented_image = felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)
        mask = isolate_tumor_segment(segmented_image, image)
        fused_mask |= mask  # Union of masks from different scales
    return fused_mask

# Isolate tumor region based on intensity and texture analysis
def isolate_tumor_segment(segmented_image, original_image):
    unique_segments, counts = np.unique(segmented_image, return_counts=True)
    tumor_segment = None
    max_score = -np.inf
    for segment in unique_segments:
        mask = (segmented_image == segment)
        segment_intensity = np.mean(original_image[mask])
        # Apply GLCM on a small patch surrounding the segment
        coords = np.argwhere(mask)
        if coords.shape[0] > 0:
            min_row, min_col = coords.min(axis=0)
            max_row, max_col = coords.max(axis=0)
            # Extract the patch with a 2-pixel margin, clipped at the image border
            patch = original_image[max(min_row - 2, 0):max_row + 3, max(min_col - 2, 0):max_col + 3]
            if patch.ndim == 2 and patch.size > 0:
                glcm = feature.graycomatrix((patch * 255).astype(np.uint8), [1], [0], symmetric=True, normed=True)
                contrast = feature.graycoprops(glcm, 'contrast')[0, 0]
            else:
                contrast = 0
        else:
            contrast = 0
        score = segment_intensity + contrast  # Combined intensity-texture score
        if score > max_score:
            max_score = score
            tumor_segment = segment
    tumor_mask = (segmented_image == tumor_segment)
    return tumor_mask

Figure 10. Workflow of a graph-based algorithm for segmentation

Figure 10 gives the workflow of graph-based algorithm for segmentation by IFT.

3.6 Mathematical concepts

Weight of the edge between pixels $p_1$ and $p_2$:

$w(p_1, p_2)=\left|I(p_1)-I(p_2)\right|$     (1)

Internal difference of a segment $C$:

$Int(C)=\max_{e \in C} w(e)$    (2)

Merging condition:

$w(e) \leq \min\left(Int(C_1)+k/|C_1|,\; Int(C_2)+k/|C_2|\right)$       (3)

Gaussian smoothing:

$I_{smooth}=G_{\sigma} * I$    (4)

Graph representation:

$G=(V, E)$   (5)
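As an illustrative check of the merging condition in Eq. (3), with numbers assumed purely for demonstration: suppose two adjacent segments have $Int(C_1)=0.10$ with $|C_1|=200$ pixels and $Int(C_2)=0.15$ with $|C_2|=100$ pixels, and the granularity parameter is $k=150$. The two thresholds are $0.10+150/200=0.85$ and $0.15+150/100=1.65$, so their minimum is $0.85$; an edge of weight $w(e)=0.5$ between the segments satisfies the condition and triggers a merge, whereas an edge of weight $0.9$ would keep them separate.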

4. Results and Discussion

The primary objective was to highlight and segment the tumor in brain MRI images. The best-performing algorithm was graph-based segmentation, which highlighted the tumor region accurately and produced precise segmentation of the brain tumor (Table 1). The dataset used in this project does not have ground-truth labels, so the algorithm was evaluated over the whole dataset using unsupervised metrics such as region contrast measurement, compactness, and circularity.

Table 1. Performance of the algorithm on the full dataset

Evaluation Metric | Evaluation Score
Average Region Contrast | 0.2186
Average Circularity | 0.2271
Average Edge Alignment Score | 0.0156
Average SSIM (Structural Similarity Index) | 0.1736

Figure 11. Results of graph-based algorithm for segmentation

Figure 12. Results of the graph-based segmentation algorithm that were unsuccessful

Figure 11 shows successful segmentation results where the algorithm accurately highlighted the tumor region, achieving near-perfect delineation of the tumor boundaries. To illustrate the limitations of the technique, Figure 12 shows MRI images on which the segmentation performed poorly and the technique failed to highlight and segment the tumor region.

The complexity analysis of the algorithm is discussed below:

Time Complexity: O(nm log(nm)).

Space Complexity: O(nm), where n × m is the image size.

5. Conclusion

Graph-based segmentation by IFT turned out to be the best unsupervised technique for brain tumor segmentation from MRI images because of its simplicity, its lack of any training requirement, and its ability to delineate tumor boundaries with high accuracy in most cases. Despite these advantages, the algorithm struggled with low-contrast or noisy MRI images and could not handle overlapping tissue intensities. These challenges call for further improvements, such as pre-processing techniques to enhance image quality, hybrid methods that combine graph-based segmentation with machine learning or deep learning, and automated tumor-size measurement tools for proper quantitative analysis in clinical settings. Future work will address the current limitations in collaboration with domain experts to develop reliable, accurate, and clinically applicable techniques for brain tumor segmentation, furthering progress in diagnostics and treatment planning.

  References

[1] Hilabi, B.S., Alghamdi, S.A., Almanaa, M., Alghamdi Sr, S.A. (2023). Impact of magnetic resonance imaging on healthcare in low-and middle-income countries. Cureus, 15(4): e37698. https://doi.org/10.7759/cureus.37698

[2] Anantharajan, S., Gunasekaran, S., Subramanian, T. (2024). MRI brain tumor detection using deep learning and machine learning approaches. Measurement: Sensors, 31: 101026. https://doi.org/10.1016/j.measen.2024.101026

[3] Leung, J.H., Karmakar, R., Mukundan, A., Lin, W.S., Anwar, F., Wang, H.C. (2024). Technological frontiers in brain cancer: A systematic review and meta-analysis of hyperspectral imaging in computer-aided diagnosis systems. Diagnostics, 14(17): 1888. https://doi.org/10.3390/diagnostics14171888

[4] Kumar, G.S., Nivaashini, M., Maheshwari, G.U., Rasi, D., Kumar, B.M., Arun, R. (2023). Convolution neural network with unsupervised machine learning approach for feature extraction and brain tumor detection in human beings. In 2023 International Conference on Emerging Research in Computational Science (ICERCS), Coimbatore, India, pp. 1-8. https://doi.org/10.1109/ICERCS57948.2023.10434117

[5] Brindha, P.G., Kavinraj, M., Manivasakam, P., Prasanth, P. (2021). Brain tumor detection from MRI images using deep learning techniques. IOP Conference Series: Materials Science and Engineering, 1055(1): 012115. https://doi.org/10.1088/1757-899X/1055/1/012115

[6] ZainEldin, H., Gamel, S.A., El-Kenawy, E.S.M., Alharbi, A.H., Khafaga, D.S., Ibrahim, A., Talaat, F.M. (2022). Brain tumor detection and classification using deep learning and sine-cosine fitness grey wolf optimization. Bioengineering, 10(1): 18. https://doi.org/10.3390/bioengineering10010018

[7] Ramtekkar, P.K., Pandey, A., Pawar, M.K. (2023). Accurate detection of brain tumor using optimized feature selection based on deep learning techniques. Multimedia Tools and Applications, 82(29): 44623-44653. https://doi.org/10.1007/s11042-023-15239-7

[8] Ahadian, P., Babaei, M., Parand, K. (2024). Using singular value decomposition in a convolutional neural network to improve brain tumor segmentation accuracy. arXiv preprint arXiv:2401.02537. https://doi.org/10.5121/csit.2022.121717

[9] Wang, W., Cui, Z.X., Cheng, G., Cao, C., Xu, X., Liu, Z. (2024). A two-stage generative model with CycleGAN and joint diffusion for MRI-based brain tumor detection. IEEE Journal of Biomedical and Health Informatics, 28(6): 3534-3544. https://doi.org/10.1109/JBHI.2024.3373018

[10] Luo, G., Xie, W., Gao, R., Zheng, T., Chen, L., Sun, H. (2023). Unsupervised anomaly detection in brain MRI: Learning abstract distribution from massive healthy brains. Computers in Biology and Medicine, 154: 106610. https://doi.org/10.1016/j.compbiomed.2023.106610

[11] Zhang, Y., Li, Z., Li, H., Tao, D. (2024). Prototype-driven and multi-expert integrated multi-modal MR brain tumor image segmentation. IEEE Transactions on Instrumentation and Measurement, 74: 2500614. https://doi.org/10.1109/TIM.2024.3500067

[12] Felzenszwalb, P.F., Huttenlocher, D.P. (2004). Efficient graph-based image segmentation. International Journal of Computer Vision, 59: 167-181. https://doi.org/10.1023/B:VISI.0000022288.19776.77

[13] Guimarães, S., Kenmochi, Y., Cousty, J., Patrocínio Jr, Z., Najman, L. (2017). Hierarchizing graph-based image segmentation algorithms relying on region dissimilarity: The case of the Felzenszwalb-Huttenlocher method. Mathematical Morphology-Theory and Applications, 2(1): 55-75. https://doi.org/10.1515/mathm-2017-0004

[14] Ehsaeyan, E. (2023). An efficient image segmentation method based on expectation maximization and Salp swarm algorithm. Multimedia Tools and Applications, 82(26): 40625-40655. https://doi.org/10.1007/s11042-023-15149-8

[15] Amin, J., Sharif, M., Haldorai, A., Yasmin, M., Nayak, R.S. (2022). Brain tumor detection and classification using machine learning: A comprehensive survey. Complex & Intelligent Systems, 8(4): 3161-3183. https://doi.org/10.1007/s40747-021-00563-y

[16] Jiang, L., Stocco, A., Losey, D.M., Abernethy, J.A., Prat, C.S., Rao, R.P. (2019). BrainNet: A multi-person brain-to-brain interface for direct collaboration between brains. Scientific Reports, 9(1): 6115. https://doi.org/10.1038/s41598-019-41895-7

[17] Nema, S., Dudhane, A., Murala, S., Naidu, S. (2020). RescueNet: An unpaired GAN for brain tumor segmentation. Biomedical Signal Processing and Control, 55: 101641. https://doi.org/10.1016/j.bspc.2019.101641

[18] Akbar, A.S., Fatichah, C., Suciati, N. (2022). Single level UNet3D with multipath residual attention block for brain tumor segmentation. Journal of King Saud University-Computer and Information Sciences, 34(6): 3247-3258. https://doi.org/10.1016/j.jksuci.2022.03.022

[19] Pinaya, W.H., Tudosiu, P.D., Gray, R., Rees, G., Nachev, P., Ourselin, S., Cardoso, M.J. (2022). Unsupervised brain imaging 3D anomaly detection and segmentation with transformers. Medical Image Analysis, 79: 102475. https://doi.org/10.1016/j.media.2022.102475

[20] Peng, Y., Sun, J. (2023). The multimodal MRI brain tumor segmentation based on AD-Net. Biomedical Signal Processing and Control, 80: 104336. https://doi.org/10.1016/j.bspc.2022.104336