Classification of Brain Tumors Using Convolutional Neural Network over Various SVM Methods


Venkata Ramakrishna Sajja, Hemantha Kumar Kalluri

Department of CSE, Vignan’s Foundation for Science Technology and Research deemed to be University, Vadlamudi 522213, India

Corresponding Author Email: svrk_cse@vignan.ac.in

Page: 489-495 | DOI: https://doi.org/10.18280/isi.250412

Received: 21 May 2020 | Revised: 15 July 2020 | Accepted: 23 July 2020 | Available online: 20 September 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

A computer-based method is presented in this paper to detect brain tumors using MRI images. The main classification goal is, given a patient's MRI images, to decide whether the brain is healthy or contains a tumor. Magnetic Resonance Imaging (MRI) is one of the most common imaging procedures; it presents more detailed information for brain tumor identification and provides more detailed pictures of the inside of the body than computed tomography (CT). Currently, CNNs are a popular technique for most image classification problems because they provide greater accuracy than other classifiers. A hybridized CNN has been used in this work; it consists of three convolution layers and three max pooling layers, which provide superior performance. The model was tested on brain MRI images from open databases such as BRATS. The proposed model gives improved performance over the existing models, with an accuracy of 96.15%.

Keywords: 

magnetic resonance imaging (MRI), brain tumor, convolutional neural network (CNN), convolution layer, max pooling

1. Introduction

Brain tumors were rare in the past but are now growing at a very fast rate. Medical doctors can detect brain tumors using MRI scans. Images generated by Magnetic Resonance Imaging are a rich source of information for the diagnosis and treatment of brain tumors. A tumor is formed when cells in the human body do not grow, divide, or die as normal cells do. The brain plays a key role in controlling all voluntary and involuntary processes in the human body, so maintaining a healthy brain is essential for a long life. However, tumors can develop in the brain due to environmental and genetic factors [1]. Brain tumor signs and symptoms vary depending on the type and location of the tumor. Because different areas of the brain control different body functions, some tumors do not produce symptoms until they are quite large and cause a severe, rapid decline in health. Some major symptoms include hearing problems, trouble with balance, impaired speech, changes in vision, memory problems, difficulty walking, changes in personality, lack of concentration, and weakness in one part of the body. The most frequent MRI sequences are the T1-weighted image and the T2-weighted image. T1-weighted images are obtained with short TE and TR times, and the T1 properties of the tissue determine the contrast and brightness of the image. T2-weighted images are obtained with longer TE and TR times; in these images, the tissue's T2 properties determine the contrast and brightness. Tumors are classified by region and cell type as follows: minor risk-oriented brain tumors and major risk-oriented brain tumors.

Minor risk-oriented brain tumors originate within the brain. Some of these brain tumors are malignant, and others are benign. Major risk-oriented brain tumors initially occur in some other part of the body as a primary tumor and later spread to the brain as well. If unwanted growths in the lung, colon, breast, skin, and kidney are not treated at the right time, they can spread to the brain quickly. These are also known as metastatic brain tumors, and they can be referred to as brain cancers because they are malignant [2]. Usually, benign brain tumors have clearly defined boundaries and are not normally deeply rooted in brain tissue. This makes it simpler to remove them surgically, provided they are in a brain area that can be operated on safely. However, they can still return after they have been removed, although benign tumors are less likely than malignant ones to recur. Figure 1 depicts the difference between a healthy brain and a brain with a tumor.

 

Figure 1. Sample images of the brain: (a) Healthy brain; (b) Brain with tumor

The WHO report says that molecular genetic properties and histology are helpful in the classification of tumors located in the brain. According to the latest categories, gliomas are classified mainly based on their behavior, growth rate, and inherited mutations [2].

According to the WHO classification of tumor stages, gliomas are categorized into three types: Oligodendroglioma, Astrocytoma, and Glioblastoma. Oligodendroglioma tumors originate from oligodendrocyte cells; such tumors are found most frequently in the cerebral hemispheres. Astrocytoma tumors arise from the cells that form the supportive tissue of the brain. Glioblastomas are called stage IV astrocytoma tumors; they are found in the brain's cerebral hemispheres, and their incidence increases quickly with age. As Glioblastomas contain various cell types, they are considered the most difficult tumor category to diagnose and treat. Approximately 2%, 7%, and 17% of major tumors are Oligodendrogliomas, Astrocytomas, and Glioblastomas, respectively [3].

If tumors are not diagnosed at an early stage, they can transform into malignant tumors. Following the neurological examination, the tumor is scanned to determine low-level details of the brain, such as its size and location. MRI scans are preferable to CT scans: a CT scan produces less detailed information, and the patient is exposed to radiation during a CT scan. Tumor cells are distinguished from healthy cells by injecting a dye during MRI scanning. Grey matter, white matter, and cerebrospinal fluid are three important components of any normal brain image. A diseased brain has edema, necrosis, and tumor regions. Figure 2 depicts the three imaging planes of MRI: the coronal plane, the sagittal plane, and the axial plane. Coronal images are taken from the back of the head towards the face. Sagittal images are taken from the side, moving from one ear to the other. Finally, axial images are taken from the chin towards the top of the head. Proton density-weighted, T1-weighted, and T2-weighted images can be obtained by selecting the weighting during MRI scanning [4].

Figure 2. (a) Coronal image; (b) Sagittal image; (c) Axial image

2. Related Work

Saxena et al. [5] implemented hybrid models that can classify brain tumors efficiently. The models used in their work are ResNet50, VGG16, and InceptionV3. In their study, they achieved 95% accuracy with ResNet50.

Ramakrishna et al. [6] proposed a model that classifies brain tumors using SVM. In this study, they performed FCM segmentation to isolate the tumor region and then extracted LBP features. The extracted feature set was given to the SVM classifier as input, which classifies the image as benign or malignant. They achieved 94.8% accuracy.

Mohsen et al. [7] devised an innovative approach using the Discrete Wavelet Transform with deep learning models to classify a tumor as benign or malignant. They obtained 93.94% accuracy with their proposed model. Citak et al. [8] utilized three different computer-aided techniques to classify tumors: Support Vector Machine, logistic regression, and multilayer perceptron. They achieved 93% accuracy with this hybrid and improved method. Vani et al. [9] used a Support Vector Machine to classify tumors and obtained 81.48% accuracy in their study.

Padma et al. [10] worked on brain tumor image segmentation and classification. Texture features are extracted by applying discrete wavelet decomposition and SGLDM methods without the wavelet transform. Tumor grades are classified using classification methods such as SVM and BPN. They observed that the accuracy of the SVM classification is 96%.

Balasooriya [11] focused on the classification and grading of MRI tumor images. For classification, they used convolutional neural networks; for grading, backpropagation neural networks and convolutional neural networks are used.

Zhao [12] applied N4ITK to correct the bias field of every image. A fully convolutional neural network (FCNN) and a conditional random field are used to perform segmentation. As one image can produce many patches, training the FCNN on patches avoids the problem of having too few samples, which would otherwise lead to poor training. This technique can also help avoid the learning-sample imbalance problem, because the number and position of training samples for each class can easily be controlled by using different patch sampling schemes. The segmentation results are post-processed by removing small 3D-connected regions and correcting some pixel labels using a simple thresholding technique.

Badran [13] used the adaptive threshold method, region growing approaches, and Canny edge detection as segmentation methods. Among these strategies, the adaptive threshold method and Canny edge detection were found to be more suitable than the others for identifying the tumor region in the brain. For feature extraction, the LOG-Lindeberg algorithm was used, and for classification, neural networks were used.

Pan [14] compared grading performance of backpropagation neural networks and convolutional neural networks. The results show that convolutional neural networks achieve the best performance in terms of specificity and sensitivity when compared with the other neural networks.

Sharma [15] proposed a technique with a preprocessing step that removes noise in the images by applying a median filter. In the feature extraction step, texture features are derived from the segmented image via the GLCM matrix. For classification, SVM, a binary classifier based on supervised learning, was trained successfully and gave higher overall classification performance.

Glotsos et al. [16] used a probabilistic neural network-based clustering algorithm for segmentation. After image segmentation, morphological and texture features were extracted; for image classification, a support vector machine-based decision tree procedure was used, obtaining different classification accuracies: 95% for low grade, 91% for high grade, 83.3% for doubtful cases, and 92.1% overall.

Li et al. [17] used a feature selection wrapper method based on SVMs combined with the backward floating method to select relevant features. Support vector machines with feature selection produced higher accuracy, with the additional benefits of rule extraction and redundant feature elimination, compared to the backward floating method alone. The researchers obtained 83.21% accuracy.

3. Architecture and Proposed Model

3.1 Architecture of CNN

Convolutional neural networks are widely used to classify brain tumors in medical image processing. A CNN contains advanced features to solve the problem with less time complexity. In this study, a new CNN is proposed to classify whether the input image holds a tumor or not. Primarily, a CNN contains three important layers: the convolutional layer, the max-pooling layer, and the fully connected layer. The RGB or grayscale image is given as input to the first layer, which is called the convolutional layer. Once the input is given to a convolutional layer, it produces the output by computing a cross-product between the weights and the image regions. If the size of the input image is 64x64, the resulting output image is forwarded to the max-pooling layer. The pooling layer reduces the output of the convolutional layer by a given factor. Finally, the required number of output classes is computed from the class scores of the fully connected layer. The architecture of a CNN, which consists of these three important layers, is shown in Figure 3.

Figure 3. Architecture of the convolutional neural network

A classification model has been developed to categorize an input MRI image according to whether it holds a tumor or not. Two categories of brain MRI images are used in this proposed work: one set of healthy brain MRI images and another set of MRI images with tumors. The performance of the classifier is estimated based on the two output classes. In this process, training plays an important role in predicting the output class.

3.2 Proposed CNN model

A deep convolutional neural network [18, 19] has been proposed to classify the input MRI image with higher accuracy. The structure of the network is depicted in Figure 4. It was designed mainly through experiments on MRI pictures of the brain. The network consists of ten layers: three convolutional layers, three max-pooling layers, one flatten layer, two fully connected layers, and an output layer. Adding convolutional layers to a neural network improves the accuracy of the classifier significantly; it also reduces the noise in the input image and transforms the image into a form that is more interpretable by the system. Increasing the number of pooling layers helps to both emphasize features and reduce the size of the image, which improves the training time.

Figure 4. Proposed convolutional neural networks
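The following is a minimal tf.keras sketch of a network with the layer layout described above; the paper's implementation used TFLearn, so the filter counts and dense-layer sizes shown here are illustrative assumptions rather than the authors' exact values.

# Minimal tf.keras sketch of the ten-layer layout: three convolution +
# max-pooling pairs, a flatten layer, two fully connected layers, and a
# two-class output. Filter counts and dense sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_brain_tumor_cnn(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), padding='same', activation='relu'),   # conv layer 1
        layers.MaxPooling2D(pool_size=(2, 2)),                          # 64x64 -> 32x32
        layers.Conv2D(64, (3, 3), padding='same', activation='relu'),   # conv layer 2
        layers.MaxPooling2D(pool_size=(2, 2)),                          # 32x32 -> 16x16
        layers.Conv2D(128, (3, 3), padding='same', activation='relu'),  # conv layer 3
        layers.MaxPooling2D(pool_size=(2, 2)),                          # 16x16 -> 8x8
        layers.Flatten(),                                               # multi-dim tensor -> 1-D
        layers.Dense(128, activation='relu'),                           # fully connected 1
        layers.Dense(64, activation='relu'),                            # fully connected 2
        layers.Dense(2, activation='softmax'),                          # normal vs. tumor
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

if __name__ == '__main__':
    build_brain_tumor_cnn().summary()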

The processing performed by each layer of the design is described below.

3.2.1 Input layer

The first layer is the input layer; it reads the original image and forwards it to the next layer for extracting a rich feature set.

3.2.2 Convolutional layer

The layer following the input layer is the 2D convolutional layer. Here, the necessary number of filters is applied to the original images to extract a feature set from them. The extracted feature set is then employed during testing to calculate similarity matches. In general, the convolution operation is mathematically defined for two functions i and j. The convolution of the two functions i and j over the interval [0, k] is given in Eq. (1).

$[i * j](k)=\int_{0}^{k} i(\tau)\, j(k-\tau)\, d\tau$      (1)

where [i*j](k) denotes the convolution of i and j.

In the proposed architecture, 3x3 filters with stride 2 are applied on a 64x64x3 RGB image. After performing the convolution function, the size of the output image is 64x64. The size of the output image is calculated using Eq. (2).

$\left[\frac{wi - fi + 2\,pa}{st}\right]+1$       (2)

where wi×he is 64×64, the filter (fi) is 3×3, the stride (st) is 2, and the padding (pa) is 0. Hence the resulting image of size 64x64 is given to the max-pooling layer. Likewise, the output sizes of all convolution layers in the proposed network are calculated.
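As a quick check, a small helper can evaluate the output-size formula of Eq. (2) (and of Eq. (4) below); the max-pooling case of Section 3.2.4, with input 64, kernel 2, stride 2, and no padding, gives 32, matching the text.

# Helper implementing the output-size formula of Eqs. (2) and (4):
# floor((wi - fi + 2*pa) / st) + 1. The values below are the max-pooling
# case from Section 3.2.4, which the text works out to 32.
def conv_output_size(wi: int, fi: int, pa: int, st: int) -> int:
    """Spatial output size of a convolution or pooling layer."""
    return (wi - fi + 2 * pa) // st + 1

print(conv_output_size(wi=64, fi=2, pa=0, st=2))  # -> 32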

3.2.3 ReLU activation function

The rectified linear activation function directly outputs the input if it is positive. It has become the default activation function for many kinds of neural networks because a model that uses it is easier to train and often performs better. The function is linear for values greater than zero, which means that when training a neural network using backpropagation, it has many of the desirable properties of a linear activation function. Yet, as negative values are always output as zero, it is a nonlinear function. Its mathematical derivative is shown in Eq. (3).

$f^{\prime}(x)=\begin{cases}0, & x<0 \\ 1, & x \geq 0\end{cases}$        (3)

The sigmoid and hyperbolic tangent activation functions cannot be used in networks with many layers due to the vanishing gradient problem. The rectified linear activation function overcomes the vanishing gradient problem, allowing models to learn faster and achieve better results.
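A minimal NumPy sketch of the rectified linear activation and the piecewise derivative of Eq. (3) is given below.

# Rectified linear activation and the piecewise derivative of Eq. (3):
# 0 for x < 0 and 1 for x >= 0.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)         # outputs the input when positive, else 0

def relu_derivative(x):
    return np.where(x < 0, 0.0, 1.0)  # Eq. (3)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))             # [0.  0.  0.  1.5]
print(relu_derivative(x))  # [0. 0. 1. 1.]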

3.2.4 Max pooling layer

In a CNN, the pooling layer plays a large role in reducing the feature dimension. To decrease the number of output neurons of the convolutional layer, pooling algorithms are applied to combine the adjacent elements in the output matrices of the convolution. Max-pooling and average-pooling are the most commonly used pooling algorithms.

In this work, to generate one element of the output matrix, the max-pooling layer with a 2x2 kernel selects the maximum value from the four adjacent elements of the input matrix. The output of the 2D convolution layer is given as input to the max-pooling layer. The output size of the image produced by the max-pooling layer is computed using Eq. (4).

$\left[\frac{O + 2\,pa - 2}{st}\right]+1$       (4)

where O is 64×64, the pooling kernel size is 2×2, the stride (st) is 2, and the padding (pa) is 0. So the size of the image generated by the max-pooling layer is 32×32 (i.e. $\left[\frac{64+0-2}{2}\right]$ + 1). The same approach is employed for the remaining max-pooling layers in the proposed architecture.
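The following toy NumPy example illustrates 2x2 max pooling with stride 2, where each output element is the maximum of four adjacent input elements.

# 2x2 max pooling with stride 2 on a small matrix: each output element is
# the maximum of a 2x2 block of the input.
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [7, 0, 3, 3],
              [1, 2, 8, 6]])
print(max_pool_2x2(x))
# [[4 5]
#  [7 8]]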

3.2.5 Flatten layer

The output after the convolution and max-pooling operations is a multi-dimensional tensor. We need to convert it into a one-dimensional tensor; this is done in the flatten layer. Its output is given as input to the fully connected layer.

3.2.6 Fully connected layer

Fully connected means that each node of a layer is connected to all the nodes of the next layer, as in feed-forward neural networks; the activation function is used to produce the output values for the two tumor classes.

In a convolutional neural network, the number of epochs is increased to train the classifier adequately, which results in better performance. While moving from epoch to epoch, the weights are updated using the weight-update function of the CNN backpropagation algorithm, shown in Eq. (5).

$w_{i j}=w_{i j}+\Delta w_{i j}$       (5)

where,

$\Delta w_{i j}=(l) \operatorname{Err}_{j} O_{i}$        (6)

with l the learning rate, Err$_j$ the error of unit j, and O$_i$ the output of unit i.
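A small NumPy sketch of the update rule in Eqs. (5)-(6) is shown below; the weight matrix, outputs, and errors used here are hypothetical toy values.

# Sketch of the weight update of Eqs. (5)-(6): delta_w_ij = l * Err_j * O_i,
# then w_ij += delta_w_ij. Here l is the learning rate, err holds the unit
# errors Err_j, and out holds the unit outputs O_i (toy values).
import numpy as np

def update_weights(w: np.ndarray, out: np.ndarray, err: np.ndarray, l: float) -> np.ndarray:
    delta_w = l * np.outer(out, err)   # delta_w[i, j] = l * Err_j * O_i
    return w + delta_w                 # Eq. (5)

w = np.zeros((3, 2))                   # weights from 3 units i to 2 units j
out = np.array([0.2, 0.5, 0.9])        # O_i
err = np.array([0.1, -0.3])            # Err_j
print(update_weights(w, out, err, l=0.05))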

4. Experimental Setup and Result Analysis

In this study, the BRATS dataset [20-22] has been used to test the proposed architecture. Initially, MRI images are collected from the BRATS database. The collected images were in .nii format, a raster format that stores 3-dimensional data, so the VV tool is used to convert the .nii files into the required 2-dimensional .png format.
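The conversion itself was done with the VV tool; purely as an illustration of the same step, a sketch using the nibabel and Pillow libraries is shown below (the file names and the choice of the axial slicing axis are assumptions, not part of the original pipeline).

# Alternative sketch of the .nii -> .png step using nibabel and Pillow.
import numpy as np
import nibabel as nib
from PIL import Image

def nii_to_png_slices(nii_path: str, out_prefix: str) -> None:
    volume = nib.load(nii_path).get_fdata()              # 3-D array (x, y, z)
    for z in range(volume.shape[2]):                     # export each axial slice
        sl = volume[:, :, z]
        sl = 255 * (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)  # scale to 0-255
        Image.fromarray(sl.astype(np.uint8)).save(f"{out_prefix}_{z:03d}.png")

# nii_to_png_slices("BRATS_case.nii", "case01")          # hypothetical file name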

We have considered a total of 577 images from the BRATS database. The dataset includes 290 normal MRI images and 287 abnormal MRI images. The proposed classifier has been trained with 210 normal and 185 abnormal MRI images and tested with 80 normal and 102 abnormal MRI images. The data distribution for training and testing is given in Table 1.

Table 1. Distribution of the BRATS dataset to test the proposed model

BRATS       Normal MRI Images    Abnormal MRI Images    Total
Training    210                  185                    395
Testing     80                   102                    182
Total       290                  287                    577

4.1 Anaconda navigator

Anaconda Navigator is a desktop graphical user interface (GUI) included in the Anaconda distribution that allows users to launch applications and manage conda packages, environments, and channels without using command-line commands. Navigator can search the Anaconda Cloud repository for packages, install them in an environment, run them, and update them. It is available for Windows, macOS, and Linux. The Jupyter notebook is a powerful tool for developing and presenting data science projects interactively. A notebook integrates code and its output into a single file, combining visualizations, narrative text, math equations, and other rich media. This intuitive workflow fosters rapid, iterative development, making notebooks an increasingly popular choice at the heart of modern data science and analysis. The proposed model has been developed and trained using TensorFlow [23], TFLearn, scikit-learn, and other machine learning Python libraries. The model's layers were programmed using TFLearn. Once the model was constructed, two-fold cross-validation was used to train and test the models by dividing the entire dataset into two folds. Each data partition of the complete dataset was trained for 50 epochs. The number of epochs was chosen on the basis of overcoming the underfitting problem and the time taken to train the network in a single epoch.
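A sketch of this evaluation protocol, two folds with 50 epochs each, is shown below; it reuses the build_brain_tumor_cnn() builder from the Section 3.2 sketch and assumes the images and integer labels (0 = normal, 1 = abnormal) have already been loaded into NumPy arrays.

# Two-fold cross-validation sketch: train a fresh model on each fold for
# 50 epochs and average the test accuracies. Assumes build_brain_tumor_cnn()
# from the Section 3.2 sketch is in scope.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def two_fold_evaluation(images: np.ndarray, labels: np.ndarray, epochs: int = 50) -> float:
    scores = []
    splitter = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)
    for train_idx, test_idx in splitter.split(images, labels):
        model = build_brain_tumor_cnn()                      # fresh model per fold
        model.fit(images[train_idx], labels[train_idx],
                  epochs=epochs, batch_size=32, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        scores.append(acc)
    return float(np.mean(scores))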

The performance metrics we considered are accuracy, the confusion matrix, precision, recall, and F1 score. A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. It allows the performance of an algorithm to be visualized and enables confusion between classes to be easily identified, e.g., when one class is frequently mislabeled as the other. Most performance measures are calculated from the confusion matrix.

TP, TN, FP, and FN are the four basic building blocks that can be used to calculate the performance evaluation metrics of any classifier. These building blocks are explained in detail in Table 2. Actual values are taken on the Y-axis and predicted values are taken on the X-axis.

Table 2. Building blocks of the classifier in the confusion matrix

                       Predicted
CLASS                  Positive    Negative    TOTAL
Actual    Positive     TP          FN          P
          Negative     FP          TN          N
TOTAL                  P′          N′          P+N

TP means true positive; it returns the count of positive cases that are correctly classified. TN means true negative; it returns the count of negative cases that are correctly classified. FP means false positive; it returns the count of negative cases that are misclassified. FN means false negative; it returns the count of positive cases that are misclassified.

The performance of the classifier is estimated by accuracy, which computes the percentage of both positive and negative cases that are correctly classified from the dataset, using Eq. (7).

Accuracy $=\frac{T N+T P}{T P+F N+F P+T N}$       (7)

The misclassification rate is estimated by the error rate, which computes the percentage of both positive and negative cases that are misclassified from the dataset, using Eq. (8).

Error Rate $=\frac{F N+F P}{T P+F N+F P+T N}$       (8)

The true positive rate (TPR) computes the percentage of positive cases that are correctly classified out of the actual positive cases of the dataset, using Eq. (9).

$\mathrm{TPR}=\frac{T P}{F N+T P}$          (9)

The true negative rate (TNR) computes the percentage of negative cases that are correctly classified out of the actual negative cases of the dataset, using Eq. (10).

$\mathrm{TNR}=\frac{T N}{F P+T N}$        (10)

The correct prediction rate is estimated by precision, which computes the percentage of positive cases that are correctly classified out of the predicted positive cases of the dataset, using Eq. (11).

Precision $=\frac{T P}{F P+T P}$        (11)

The F1 score is a measure that combines both precision and recall into a single measure. It is determined using Eq. (12).

F1 score $(F)=\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}}$         (12)
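A minimal Python helper that evaluates Eqs. (7)-(12) directly from the four confusion-matrix counts could look as follows.

# Direct implementation of Eqs. (7)-(12): all measures are computed from the
# four confusion-matrix counts TP, TN, FP, FN.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    total = tp + tn + fp + fn
    tpr = tp / (tp + fn)                      # Eq. (9), recall
    precision = tp / (tp + fp)                # Eq. (11)
    return {
        "accuracy": (tp + tn) / total,        # Eq. (7)
        "error_rate": (fp + fn) / total,      # Eq. (8)
        "tpr": tpr,
        "tnr": tn / (tn + fp),                # Eq. (10)
        "precision": precision,
        "f1": 2 * precision * tpr / (precision + tpr),  # Eq. (12)
    }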

We performed the experimentation in Python (Jupyter Notebook) with the downloaded BRATS dataset. The resulting counts returned by the building blocks of the classifier are given in Figure 5.

Figure 5. Contingency matrix for the proposed model

This study achieved an accuracy of 96.15%, an error rate of 3.85%, a TPR of 97.05%, a TNR of 95%, a precision of 96.1%, and an F1 score of 96.53%.

The test data results are summarized in the confusion matrix. After testing the 182 images with the CNN classifier, which was trained with 395 images, 99 positive cases (TP) and 76 negative cases (TN) were correctly classified, while 3 positive cases (FN) and 4 negative cases (FP) were misclassified. The evaluation measures of the different models are given in Tables 3-6.
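As a quick check, substituting these counts into Eqs. (7) and (8) reproduces the reported rates:

$\text{Accuracy}=\frac{76+99}{99+3+4+76}=\frac{175}{182}\approx 0.9615, \qquad \text{Error Rate}=\frac{3+4}{182}\approx 0.0385$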

Table 3. Performance criteria of CNN

Confusion Matrix    99 (TP)    3 (FN)
                    4 (FP)     76 (TN)
Accuracy            96.15
Error rate          03.85
TPR                 97.05
TNR                 95.00
Precision           96.11
F1 score            96.53

Table 4. Performance criteria of FCM+SVM

Confusion Matrix    97 (TP)    4 (FN)
                    6 (FP)     75 (TN)
Accuracy            94.50
Error rate          05.50
TPR                 96.03
TNR                 92.59
Precision           94.17
F1 score            95.09

Table 5. Performance criteria of K Means + SVM

Confusion Matrix    95 (TP)    6 (FN)
                    8 (FP)     73 (TN)
Accuracy            92.30
Error rate          07.69
TPR                 94.05
TNR                 90.12
Precision           92.23
F1 score            93.13

Table 6. Performance criteria of all models

Evaluation Measures    CNN      FCM + SVM    K Means + SVM
Accuracy               96.15    94.50        92.30
Error rate             03.85    05.50        07.69
TPR                    97.05    96.03        94.05
TNR                    95.00    92.59        90.12
Precision              96.11    94.17        92.23
F1 score               96.53    95.09        93.13

In the FCM+SVM method, we set the convergence criterion of the FCM segmentation to ε = 0.01 and used a linear kernel in the SVM classification to classify the tumor. In the K Means+SVM method, we ran the K Means segmentation algorithm until there was no change in the cluster centroids and used a linear kernel in the SVM classification to classify the tumor.
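A sketch of the K Means+SVM baseline under these settings is shown below; the per-cluster mean-intensity features used here are an illustrative assumption, not the authors' exact pipeline.

# K-Means segmentation run until the centroids stop changing (tol=0 in
# scikit-learn), followed by a linear-kernel SVM for the normal/abnormal
# decision. The mean-intensity-per-cluster features are an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def kmeans_features(image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Segment a 2-D grayscale slice and return sorted per-cluster mean intensities."""
    km = KMeans(n_clusters=n_clusters, tol=0.0, n_init=10, random_state=0)
    labels = km.fit_predict(image.reshape(-1, 1))
    means = [image.reshape(-1)[labels == c].mean() for c in range(n_clusters)]
    return np.sort(np.array(means))           # order-invariant feature vector

def train_kmeans_svm(images, targets):
    X = np.array([kmeans_features(img) for img in images])
    clf = SVC(kernel='linear')                # linear kernel, as stated above
    clf.fit(X, targets)
    return clf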

Figure 6. Performance analysis of all models

Figure 7. (a) Accuracy of proposed method; (b) Loss of proposed method

A lower error rate and higher accuracy, TPR, TNR, precision, and F1 score are achieved by the proposed method compared to the SVM classifiers. Because of its inherently deep architecture, the CNN gives better results than the existing classifiers. The performance rates of the proposed classifier are compared with those of the existing classifiers in Figure 6. The proposed classifier achieves an accuracy of 96.15%, compared to the accuracies of 94.5% and 92.3% achieved by the two existing models.

Figures 7a and 7b show the accuracy and loss rate on both the training data and the testing data. Figure 7a shows that the accuracy on the testing data exceeds that on the training data, and Figure 7b shows that the loss rate on the testing data is lower than that on the training data. It has been observed that as the number of epochs increases, the accuracy on both the training and testing data increases gradually, while the loss rate on both decreases gradually. Hence, it is concluded that if the classifier is trained adequately over a sufficient number of epochs, the accuracy will increase and the loss rate will decrease.

Table 7 summarizes various studies performed to classify whether a brain MRI image is normal or abnormal.

Table 7. Various brain tumor studies

Authors/Year                     Method                         Performance (Accuracy)
Saxena et al. [5] / 2019         Resnet50                       95.00%
Ramakrishna et al. [6] / 2019    FCM+LBP+SVM                    94.80%
Mohsen et al. [7] / 2018         DWT                            93.94%
Citak et al. [8] / 2018          SVM and logistic regression    93.00%
Vani et al. [9] / 2017           SVM                            81.47%
Proposed Method / 2020           CNN                            96.15%

5. Conclusions

In this study, we have used the BRATS dataset to test the performance of the classifier. The hybridized CNN classifier was trained with 395 images, comprising 210 normal and 185 abnormal images, and tested with 182 images, comprising 80 normal and 102 abnormal images. The experimentation targeted the classification of brain tumors using convolutional neural networks. With a feedforward neural network as a seed, a hybrid convolutional neural network model has been presented. The CNN produces better results compared to other classifiers. By increasing the number of epochs, more accuracy can be reached, as the network is better trained; an epoch is one complete pass through the architecture, starting from the input layer and moving to the output layer. The proposed model is evaluated using parameters such as accuracy, error rate, TPR, TNR, precision, and F1 score. The accuracy of the CNN is 96.15%, against 94.5% for the FCM+SVM classifier and 92.3% for the K Means+SVM classifier. The advantage of the proposed classifier is that it classifies the tumor more accurately than the SVM classifiers; its disadvantage is that it takes more time to classify the tumor when the dataset size is large.

  References

[1] Shiminski-Maher, T., Woodman, C., Keene, N. (2014). Childhood Brain & Spinal Cord Tumors: A Guide for Families, Friends & Caregivers. Childhood Cancer Guides. https://www.barnesandnoble.com/w/childhood-brain-spinal-cord-tumors-tania-shiminski-maher/1118621737.

[2] Louis, D.N., Perry, A., Reifenberger, G., Von Deimling, A., Figarella-Branger, D., Cavenee, W.K., Ohgaki, H., Wiestler, O.D., Kleihues, P., Ellison, D.W. (2016). The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta neuropathologica, 131(6): 803-820. https://doi.org/10.1007/s00401-016-1545-1

[3] Kanter, C., D’Agostino, N.M., Daniels, M., Stone, A., Edelstein, K. (2014). Together and apart: providing psychosocial support for patients and families living with brain tumors. Supportive Care in Cancer, 22(1): 43-52. https://doi.org/10.1007/s00520-013-1933-1

[4] El-Dahshan, E.S.A., Mohsen, H.M., Revett, K., Salem, A.B.M. (2014). Computer-aided diagnosis of human brain tumor through MRI: A survey and a new algorithm. Expert systems with Applications, 41(11): 5526-5545. https://doi.org/10.1016/j.eswa.2014.01.021

[5] Saxena, P., Maheshwari, A., Maheshwari, S. (2019). Predictive modeling of brain tumor: A deep learning approach. arXiv preprint arXiv:1911.02265.

[6] Ramakrishna Sajja, V., Kalluri, H.K. (2019). Brain tumor segmentation using fuzzy C-means and classification using SVM. Lecture Notes in Networks and Systems, Proceedings of Smart Technologies in Data Science and Communication, 105: 197-204. https://doi.org/10.1007/978-981-15-2407-3_24

[7] Mohsen, H., El-Dahshan, E.S.A., El-Horbaty, E.S.M., Salem, A.B.M. (2018). Classification using deep learning neural networks for brain tumors. Future Computing and Informatics Journal, 3(1): 68-71. https://doi.org/10.1016/j.fcij.2017.12.001

[8] Citak-Er, F., Firat, Z., Kovanlikaya, I., Ture, U., Ozturk-Isik, E. (2018). Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T. Computers in Biology and Medicine, 99: 154-160. https://doi.org/10.1016/j.compbiomed.2018.06.009

[9] Vani, N., Sowmya, A., Jayamma, N. (2017). Brain tumor classification using support vector machine. International Research Journal of Engineering and Technology (IRJET), 4. https://www.irjet.net/archives/V4/i7/IRJET-V4I7367.pdf.

[10] Padma, A., Sukanesh, D.R. (2011). Automatic diagnosis of abnormal tumor region from brain computed tomography images using wavelet based statistical texture features. arXiv preprint arXiv:1109.1067.

[11] Balasooriya, N.M., Nawarathna, R.D. (2017). A sophisticated convolutional neural network model for brain tumor classification. In 2017 IEEE International Conference on Industrial and Information Systems (ICIIS), Peradeniya, Sri Lanka, pp. 1-5. https://doi.org/10.1109/ICIINFS.2017.8300364

[12] Zhao, X., Wu, Y., Song, G., Li, Z., Fan, Y., Zhang, Y. (2016). Brain tumor segmentation using a fully convolutional neural network with conditional random fields. In International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pp. 75-87. https://doi.org/10.1007/978-3-319-55524-9_8

[13] Badran, E.F., Mahmoud, E.G., Hamdy, N. (2010). An algorithm for detecting brain tumors in MRI images. In the 2010 International Conference on Computer Engineering & Systems, Cairo, Egypt, pp. 368-373. https://doi.org/10.1109/ICCES.2010.5674887

[14] Pan, Y., Huang, W., Lin, Z., Zhu, W., Zhou, J., Wong, J., Ding, Z. (2015). Brain tumor grading based on neural networks and convolutional neural networks. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, pp. 699-702. https://doi.org/10.1109/EMBC.2015.7318458

[15] Sharma, B., Mitra, P. (2018). Abnormality detection in brain CT image using support vector machine. Proceedings on International Conference on Emerging Trends in Expert Applications & Security, 2: 74-80. https://doi.org/10.29007/jsfg

[16] Glotsos, D., Tohka, J., Ravazoula, P., Cavouras, D., Nikiforidis, G. (2005). Automated diagnosis of brain tumours astrocytomas using probabilistic neural network clustering and support vector machines. International Journal of Neural Systems, 15(01n02): 1-11. https://doi.org/10.1142/S0129065705000013

[17] Li, G.Z., Yang, J., Ye, C.Z., Geng, D.Y. (2006). Degree prediction of malignancy in brain glioma using support vector machines. Computers in Biology and Medicine, 36(3): 313-325. https://doi.org/10.1016/j.compbiomed.2004.11.003

[18] Sajja, T.K., Devarapalli, R.M., Kalluri, H.K. (2019). Lung cancer detection based on CT scan images by using deep transfer learning. Traitement du Signal, 36(4): 339-344. https://doi.org/10.18280/ts.360406

[19] Sajja, T.K., Kalluri, H.K. (2019). Deep learning and transfer learning approaches for image classification. International Journal of Recent Technology and Engineering (IJRTE), 7(5S4): 427-432. 

[20] Brats 2013. https://www.smir.ch/BRATS/Start2013, accessed on May 16, 2019.

[21] Brats 2015. https://www.smir.ch/BRATS/Start2015, accessed on May 16, 2019.

[22] Radiopaedia. https://radiopaedia.org/articles/brain-tumours?lang=us, accessed on May 16, 2019.

[23] Tensorflow. https://www.tensorflow.org/tutorials/keras/basic_classification, accessed on May 16, 2019.