Dense Hierarchical CNN – A Unified Approach for Brain Tumor Segmentation

Roohi Sille, Tanupriya Choudhury*, Piyush Chauhan, Durgansh Sharma

Research Scholar, School of Computer Science, University of Petroleum & Energy Studies (UPES), Dehradun 248007, India

Department of Systematics, School of Computer Science, University of Petroleum & Energy Studies (UPES), Dehradun 248007, India

Department of Informatics, School of Computer Science, University of Petroleum & Energy Studies (UPES), Dehradun 248007, India

Department of Cybernatics, School of Computer Science, University of Petroleum & Energy Studies (UPES), Dehradun 248007, India

Corresponding Author Email: tanupriya1986@gmail.com

Page: 223-233 | DOI: https://doi.org/10.18280/ria.350306

Received: 6 April 2021 | Revised: 2 June 2021 | Accepted: 11 June 2021 | Available online: 30 June 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Brain tumor segmentation is an essential and challenging task because of the heterogeneous nature of neoplastic tissue across spatial locations and imaging modalities. Manual segmentation of tumors in MRI images is error-prone and time-consuming. An efficient segmentation mechanism is therefore vital to the accurate classification and segmentation of tumorous cells. This study presents an efficient hierarchical clustering-based dense CNN approach for accurately classifying and segmenting brain tumor cells in MRI images. The research focuses on improving the efficiency of segmentation algorithms by considering qualitative measures such as the dice score coefficient alongside quantitative parameters such as mean square error and peak signal to noise ratio. The experimental analysis demonstrates the efficacy of the proposed technique, and comparisons with other models are tabulated within the paper.

Keywords: 

brain tumor segmentation, dense convolution neural networks (CNN), hierarchical clustering, MRI, deep learning

1. Introduction

1.1 Background of the research

Brain tumor segmentation is vital for medical image analysis. Cells inside the human brain may grow abnormally, and this condition of aggravated cell growth is identified as a brain tumor. The excess cell growth can extend from the brain cells into the surrounding nerves and blood vessels [1]. Tumors are of two types, malignant and benign, distinguished as cancerous and non-cancerous respectively. Non-cancerous tumors do not grow abruptly and do not spread to adjacent tissues, but they apply harmful pressure on the affected cells. In contrast, cancerous tumors are known for their fast-spreading tendency and are capable of spreading rapidly inside the brain [2]. Because of the inflammation and the enormous pressure they exert on the affected cells, carcinogenic tumors damage healthier cells in the brain. Tumor classification and segmentation is a challenging task because neoplastic tissue is heterogeneous across spatial locations and imaging modalities [3]. Previous studies have adopted MR spectroscopy to classify brain tumors. In particular, spectroscopic and traditional MRI techniques were developed to distinguish cancerous and non-cancerous brain neoplasms using enhanced deep learning algorithms. The spectroscopic MRI technique evaluated the heterogeneity characteristics of brain neoplasms by describing different regions of interest in the tumoral and peritumoral regions [4]. In brain tumor segmentation, existing approaches adopt a parametric or a non-parametric probabilistic model for the underlying data.

The advancements in MRI systems have opened up many possibilities in the field of preventive medicine. A preliminary diagnosis can be very helpful in identifying the disease and providing quality medication, and in these cases image-processing techniques applied to MRI always play a vital role. The segmentation of brain tumors in MRI images is performed to detect and delineate the cancerous tissue [5]. Brain tumor segmentation is of great help in clinical diagnosis, tumor identification, radiotherapy, and drug selection. Automated segmentation techniques improve the diagnostic ability of experts and lessen the time taken to provide a precise diagnosis, whereas segmentation involving human effort requires considerably more time for the same process [6]. Despite significant initiatives and encouraging results in image processing, reproducible segmentation and tumor classification remain difficult tasks [7, 8].

1.2 Brain tumor segmentation

Various algorithms have been developed to perform similar segmentation tasks, but frequently used classifiers are unable to provide an upfront solution for image optimization. Reported results show that algorithms such as SVM, random forest, and boosted trees give the best insight into the precise probability after calibration [9]. The main purpose of image segmentation is to split an image into mutually exclusive and exhaustive regions so that every region of interest (ROI) is spatially contiguous and the pixels within it are homogeneous with respect to pre-defined criteria. Commonly adopted homogeneity criteria involve intensity values, image texture, color, surface curvature, and range. In color-based segmentation, the K-means clustering algorithm is widely used for detecting brain tumors [10] and has yielded better results than edge-detection algorithms. Another perspective on image segmentation concerns the clustering problems that occur within it: clustering deals with grouping the pixels of an image that appropriately belong together. There is a large body of existing work on brain tumor segmentation using different clustering techniques such as the K-means algorithm [11]. These techniques describe the clustering process in two ways: by partitioning or by aggregating pixels. In partitioning, an entire image is divided into small regions that are identified as suitable based on a few restrictions, whereas in pixel aggregation, pixels are grouped based on a few considerations that determine group membership. The image segmentation process can be improved by integrating the K-means clustering algorithm with the Fuzzy C-means algorithm [12].

Artificial Neural Networks (ANNs) are among the primarily adopted image classifiers. The significant drawback of conventional classifiers is their ineffectiveness in classifying images precisely, while traditional probabilistic classifiers suffer from the technical challenges involved in evaluating conditional probabilities [13]. Many previous works on brain tumor segmentation make it evident that ANNs significantly outperform other basic classifiers. ANNs are superior to their peers because of their reliability, scalability, accuracy, quick adaptation, and high fault tolerance [14]. Quantitative analysis of the different components of a healthy brain has been performed using ANNs. In the conventional classification process, classifiers often treat all the different segments of an image as equal and independent. However, there are variations and distinctions between these segments: a few images possess weak inter-category features and can easily be classified, while a few images with strong attributes are difficult to classify [15]. The latest advancements in deep learning (DL) have demonstrated accelerated progress, and deep convolutional neural networks (CNNs) have excelled in brain tumor classification and segmentation [16]. In particular, CNNs have achieved exemplary success in image segmentation owing to their efficacy in employing a hierarchical classification structure.
Deep CNN is a well-designed image segmentation technique that enhances the feature representation of the classifier by learning both low- and high-level information, with complex features extracted directly from the data at increasing levels of the hierarchy. In the proposed research, an enhanced deep learning technique based on the convolutional neural network (CNN) is presented. A CNN is a feed-forward neural network mainly used for image recognition and processing; here, the CNN is trained for the automatic segmentation of brain tumors [17, 18]. A hierarchical clustering segmentation effectively performs multi-category segmentation tasks. In brain tumor segmentation, not all the images obtained from the MRI dataset are of the same intensity: a few images possess strong inter-category similarity while others do not. Hence, it is not practically feasible to treat all these categories as equal, and in such contexts hierarchical classification offers a potential solution [19].

In summary, this paper makes the following contributions: i) a dense hierarchical convolutional neural network (DH-CNN) is proposed to fully extract high-level features from brain magnetic resonance images; ii) a two-stage architecture, consisting of a T-net for transfer learning and an S-net for segmentation with different dropout rates, is proposed; iii) the performance of DH-CNN is investigated on the brain tumor segmentation dataset, and the results show that the hierarchical dense convolutional neural network significantly outperforms a deep CNN.

The organization of the paper is as follows: Section 2 gives a brief overview of existing research on brain tumors and their classification through MRI images. Section 3 describes the problem statement on which the current research on brain tumor segmentation is based. Section 4 presents the framework of the proposed technique for brain tumor segmentation using automatic classification of MRI images, together with details of the CNN. Section 5 provides the analysis of the simulation results and the performance evaluation of the proposed methodology. Section 6 concludes the proposed research and outlines future work.

2. Literature Review

Considerable recent research has addressed brain tumor segmentation; this literature survey focuses on work published from 2012 to 2019. The process of brain tumor identification and segmentation is challenging because neoplastic tissue is heterogeneous across spatial locations and imaging modalities [20]. In several methods, the detected tumor regions frequently overlap with healthier tissue. Texture analysis is a useful technique for evaluating structural orientation, regularity, roughness, and smoothness, and for distinguishing between different regions of an image. It provides convincing results as an image classifier for detecting visible and non-visible lesions, with multiple applications in MRI. Traditional classification methods employ grey-level, pixel-based analytical attributes; in addition, different systematic and machine learning (ML) approaches have been selected as image classifiers [21]. Various algorithms have been developed to perform automatic brain tumor segmentation on MRI images, but frequently used classifiers are unable to provide a straightforward solution for image optimization. A K-means clustering algorithm was proposed for performing brain tumor segmentation in MRI images [22]. In that study, a DICOM MRI image was considered as input and the tumor cells were extracted from it. The input image was first pre-processed to eliminate noise; a K-means clustering algorithm was then employed and the skull was removed through morphological operations so that tumor cells could be detected and characterized easily from the clustered image. Lastly, image thresholding and level-set segmentation were incorporated to extract the tumor cells. The performance of the clustering algorithm was evaluated using metrics such as precision and recall to validate its accuracy. A hybrid Fuzzy K-Means Self-Organizing Map (FKM-SOM) was applied for classifying tumor images for segmentation [23]. Image classification and feature extraction were performed to enhance the accuracy of the classifier: the Discrete Wavelet Transform (DWT) was used for feature extraction, wherein 13 features from every image were classified using a Support Vector Machine (SVM) with different kernels (RBF, linear, polynomial). The experimental evaluation demonstrated the prominence of the proposed system with respect to F-score, recall, accuracy, and specificity. An ensemble classifier for brain tumor segmentation was also proposed [24]. The system incorporated preprocessing, segmentation, feature extraction, and classification stages. Preprocessing was carried out using a median filtering algorithm, segmentation was done through a Fuzzy C-means (FCM) algorithm, and the image features were extracted using the Gray Level Co-occurrence Matrix (GLCM) technique. Automatic segmentation of the brain tumor images was then performed using an ensemble classifier, in which the brain images were classified into tumor and non-tumor classes by a feedforward artificial neural network. Experimental analysis shows the efficacy of the proposed system in terms of computation speed and accuracy. Among primary tumors, gliomas are the most prominent and possess the highest mortality rate.
An enhanced brain tumor segmentation technique was proposed to detect diverse tumor cells for both high-grade and low-grade gliomas in MRI images based on gradient and context-sensitive attributes [25]. The study also incorporated two- and three-dimensional gradient data for analyzing gradient change. A total of 62 features were classified and optimized using random forest algorithms, and the results proved the competitiveness of the proposed system. Mushtaq and Singh [26] discussed different types of segmentation techniques such as amplitude segmentation, region-based segmentation, edge-based segmentation, and hybrid segmentation techniques. The existing literature shows that efficient techniques to segment tumors from MRI images are limited; every technique has an edge over the others, but because of over- or under-segmentation these techniques are less trustworthy. Lather and Singh [27] investigated various works carried out to automate brain tumor segmentation and also focused on future work in medical image processing for the timely detection of brain tumors and appropriate diagnosis. It is observed from the existing approaches in Table 1 that methods employing only pre-processing and segmentation stages for detecting brain tumors are not capable of distinguishing whether the segmented region is normal or abnormal. Existing segmentation techniques with enhanced attributes such as image classification and feature extraction are capable of classifying regions as normal or abnormal; however, their accuracy is not satisfactory. Hence, the study stresses the need for an enhanced, supervised technique to automate segmentation with improved accuracy and higher precision. Hierarchical classification has gained prominence in recent times in machine learning and deep learning [28]. It is advantageous compared to conventional classification in terms of efficiency and its capability to handle imbalanced data. Hierarchical classifiers achieve higher classification accuracy than binary classifiers and require far fewer classification stages during testing than flat classifiers [29, 30]. The robust structural attributes of the hierarchical classifier enable it to achieve higher classification accuracy when segmenting structured data. A branch convolutional neural network (B-CNN) that can perform multiple predictions using the hierarchical classification process was proposed by Zhu and Bain [31].

Table 1. Literature survey of deep learning based brain tumor segmentation algorithms

Technique | Image Modality | Computation Time | Methodology | Evaluation
Milletari et al. [32] | NR | NR | Hough-voting to map from CNN features to full patch segmentations | Dice Score: 0.83; Sensitivity: 0.82; Specificity: 0.79
Milletari et al. [33] | NR | NR | 3D volumetric medical image segmentation with CNN | Dice Score: 0.869 ± 0.033
Kleesiek et al. [34] | T1, T2, T1C, and FLAIR | 30 times faster | 3D fully convolutional neural network | Dice Score: 0.85; Sensitivity: 0.87; Specificity: 0.89
Kamnitsas et al. [35] | MPRAGE, axial FLAIR, T2 | NR | 3D 11-layer deep CNN with fully connected CRF | Dice Score: 89.8; Precision: 91.5; Sensitivity: 89.1
Ejaz et al. [23] | T2 | NR | Hybrid Fuzzy K-Means Self-Organizing Map (FKM-SOM) | Accuracy: 66.67; Specificity: 66.67; Precision: 66.67; Recall: 66.67; F-score: 66.67
Kumar and VijayKumar [24] | NR | NR | Ensemble classifier consisting of median filtering and Fuzzy C-means algorithms | Accuracy: 91.17; Precision: 94.17; Sensitivity: 95.47; F1-Score: 94.81
Zhao et al. [25] | BRATS 2015 | NR | Random forest algorithms | Dice Score: 0.91; Sensitivity: 0.89; Specificity: 0.75
Shakeri et al. [36] | T1-weighted | NR | Fully convolutional network followed by Markov random fields | Dice Score: 0.87
Zhu and Bain [31] | MNIST, CIFAR | NR | Branch CNN | Accuracy: MNIST 99.40%, CIFAR100 59%
Yan et al. [37] | CIFAR | NR | Hierarchical deep CNN | NR
Moeskops et al. [38] | T1 | NR | CNN trained on multiple patch sizes | Dice Score: 0.7353
Zhao et al. [39] | T1, T1-enhanced, T2, and FLAIR | NR | Multiscale CNN tumor segmentation | Accuracy: 0.81; Variance: 0.99

*NR - Not reported

The B-CNN provides superior classification accuracy by employing a branch training strategy. However, it requires an efficient tree structure for enhancing prediction accuracy. Yan et al. [37] proposed a hierarchical deep CNN (HD-CNN) by integrating CNN with the tree structure.

The above literature survey clearly indicates that hierarchical dense CNN architectures have proven to be the state of the art, which highlights the requirement for the proposed model. For this reason, the proposed model comprises a hierarchical dense convolutional neural network. DH-CNN is a two-level network consisting of a transfer-learning model (T-net) and a segmentation model (S-net). In the proposed DH-CNN, two levels of the hierarchy are evaluated, namely coarse and fine categories.

3. Research Significance

Classification of brain tumors from MRI images uses different techniques such as block-wise fine-tuning and transfer learning. Classification and segmentation are essential for tumor detection in MRI images. Manual detection of brain tumors is time-consuming, and it can be challenging for healthcare professionals to provide timely assistance in critical conditions. In such cases, an advanced and efficient classifier is required to classify the tumors and ease the process of brain tumor detection. This research proposes an enhanced deep learning-based CNN algorithm to perform brain tumor segmentation in MRI images. Block-wise fine-tuning and transfer learning are used to enhance the deep CNN for small datasets, and the results are superior compared to conventional methods. To improve the performance of the CNN, its parameters are trained so as to ensure an automatic segmentation process with faster convergence. This paper evaluates the concept of a hierarchical clustering algorithm in dense convolutional neural networks (CNNs). One of the prominent challenges associated with segmentation algorithms is obtaining the desired dice score coefficient.

4. Hierarchical Dense Convolutional Neural Network

The primary objective of this research is to perform brain tumor segmentation in MRI images using the DH-CNN technique. The proposed hierarchical clustering dense CNN model is a two-stage model: the initial stage is a transfer-learning network (T-Net) and the later stage is a segmentation network (S-Net). The CNN is trained to perform automatic segmentation of tumor images using hierarchical classification to improve the accuracy and efficiency of the convolutional network. The segmentation process follows a hierarchical clustering method instead of conventional single-level clustering. As a first step, the shape and location of the brain tumor are determined, and the tumors are then classified into various categories of tumorous tissue. Later, the data is augmented based on image intensity to balance the training dataset. The images obtained from the dataset belong to different inter-categories, and considering all these images as equal affects the classification accuracy. Hence, this study performs hierarchical clustering segmentation to achieve higher classification accuracy for multi-category segmentation and to enhance the efficiency of the segmentation process by segmenting intricate regions. This research considers qualitative measures such as accuracy, sensitivity, and precision together with quantitative parameters such as entropy, mean square error, and peak signal to noise ratio to determine the effectiveness of the proposed approach.

4.1 Hierarchical clustering segmentation using Convolutional Neural Network (CNN)

In a CNN, the surface layers capture low-level attributes, whereas the inner layers encode domain-specific qualities; the last segments of the network are therefore fine-tuned by performing shallow fine-tuning. Retraining all the features is not required and would affect the performance of a deeply tuned CNN, so in this research fine-tuning is performed only for the last layers to increase the efficiency of the CNN in brain tumor segmentation. In MR images, intensity values have no fixed meaning: previous works show that these values vary significantly within subjects and are highly sensitive to the conditions of acquisition. CNN techniques therefore require normalization of the inputs; otherwise, the network is ill-conditioned. In this research, the functionality of the CNN is divided into four key areas [40].

The input layer comprises the pixel values of the image.

The convolutional layer determines the output neurons by computing the scalar product between the input regions and the neuron weights. Afterward, the pooling layer reduces the number of parameters by downsampling the input.

The fully connected layer then produces scores for the various categories, which are utilized for the segmentation process.
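Returning to the shallow fine-tuning mentioned above, the following is a minimal sketch of freezing the earlier layers of a pre-trained network and retraining only its tail; it assumes a Keras/TensorFlow implementation, and base_model is a hypothetical pre-trained model handle, since the paper does not name a framework.

    # Minimal sketch of shallow fine-tuning, assuming Keras/TensorFlow;
    # `base_model` is a hypothetical pre-trained network handle.
    import tensorflow as tf

    def shallow_fine_tune(base_model, num_classes, trainable_tail=2):
        # Freeze every layer except the last `trainable_tail` layers, so only
        # the domain-specific tail is updated during training.
        for layer in base_model.layers[:-trainable_tail]:
            layer.trainable = False

        # Attach a new classification head for the tumor tissue categories.
        outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base_model.output)
        model = tf.keras.Model(inputs=base_model.input, outputs=outputs)
        model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
        return model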

The hierarchical clustering process is illustrated in Figure 1.

Table 2. Specifications of hierarchical dense CNN

Stage | Layer | Features
Convolution | 33 * 36 layers | 36
T-Net | 33 * 48, 33 * 60, 33 * 84, 33 * 96, 33 * 108 layers | 108
S-Net | 33 * 120, 33 * 132, 33 * 144, 33 * 156, 33 * 168, 33 * 180 layers | 180

The images obtained from the datasets are processed using a hierarchical clustering technique, wherein a cascaded CNN architecture processes the coarse and fine layers of an image by considering the pixel-wise probability of the initial weights. The main block of the CNN is treated as a pre-trained model, in which a stack of convolutions generates a hierarchy of features to optimize the initial weights of the training-category images during the training phase, as shown in Figure 1. Classification of the different MRI sequences (T1, T1c, T2, FLAIR) involves a layer-by-layer classification. The convolution process for the 33 * 36 layers was performed in two stages, stage 1 and stage 2, as shown in Figure 2. The non-dense architecture of the hierarchical CNN is shown in Table 2. The two-stage classification process of the CNN methodology is shown in Figure 2.

To depict the improvement brought by the dense architecture, a six-layer CNN, as in label 4, in a linear-chain topology, in which each successive layer has 12 more kernels, was utilized as the non-dense baseline; its other components are the same as in the proposed model. A sketch of this baseline is given below.
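As an illustration of this non-dense baseline, the following minimal sketch builds the six-layer linear-chain CNN with 12 additional kernels per successive layer; the Keras/TensorFlow framework, the 3 x 3 kernel size, the starting kernel count of 36, and the input patch shape are assumptions made for illustration.

    # Sketch of the six-layer linear-chain (non-dense) baseline; the framework,
    # starting kernel count, and kernel size are assumptions.
    import tensorflow as tf

    def build_linear_chain_baseline(input_shape=(33, 33, 4)):
        inputs = tf.keras.Input(shape=input_shape)
        x = inputs
        kernels = 36
        for _ in range(6):          # six convolutional layers in a simple chain
            x = tf.keras.layers.Conv2D(kernels, (3, 3), padding="same",
                                       activation="relu")(x)
            kernels += 12           # each successive layer has 12 more kernels
        return tf.keras.Model(inputs, x)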

The different stages involved in the segmentation process are discussed below:

4.1.1 Image-preprocessing

Image preprocessing is the basic step of image processing; its main operations are noise removal, contrast enhancement, and illumination equalization. In this research, preprocessing comprises intensity normalization, data augmentation, and transfer learning. For processing the aggregated images, the following MRI sequences are used: T1, T1c, T2, and FLAIR. The sequence of the MRI images is illustrated in Figure 3.

Brain tumor images are categorized into four different types: enhancing, non-enhancing, necrosis, and edema tissues. Edema surrounds the core of the tumor. In the MRI sequence illustrated in Figure 3, the T1 sequence captures the structural data of the brain tissue; however, it is difficult to distinguish tumorous tissue in it. In the T1c sequence, the enhancing region of the brain tissue exhibits more contrast and appears with high intensity, so it provides a better categorization of tumorous tissues. In the T2 sequence, the edema tissue surrounding the tumor appears with high intensity. These two sequences, alongside FLAIR, provide better detection of the tumor. Additionally, bias field correction and intensity normalization were performed as part of the image preprocessing.

4.1.2 Bias field correction

MRI images are often affected by bias field distortion, which causes the intensity of the same tissue to vary across the image. Such inhomogeneities arise from imperfections and distortions in the magnetic field generated by the MRI scanner during acquisition. They can be reduced by applying the N4ITK method [41]. However, this technique cannot ensure an equal intensity distribution for a tissue across different regions of the same MRI sequence, an assumption that is either explicit or implicit in most segmentation techniques [42, 43].
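For concreteness, a bias field correction along these lines could be performed with the N4ITK algorithm as exposed by SimpleITK; the sketch below is illustrative, and the Otsu-based foreground mask and the file paths are assumptions rather than the authors' exact pipeline.

    # Illustrative N4ITK bias field correction using SimpleITK; the Otsu
    # foreground mask and the file paths are assumptions for illustration.
    import SimpleITK as sitk

    def correct_bias_field(input_path, output_path):
        image = sitk.ReadImage(input_path, sitk.sitkFloat32)
        # Rough foreground mask; a proper brain mask could be used instead.
        mask = sitk.OtsuThreshold(image, 0, 1, 200)
        corrected = sitk.N4BiasFieldCorrection(image, mask)
        sitk.WriteImage(corrected, output_path)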

Figure 1. Hierarchical clustering process

Figure 2. Hierarchical dense CNN

Figure 3. MRI image sequence

4.1.3 Intensity normalization

The tissue intensity varies even when images of the same patient are captured with the same scanner at different times. To maintain uniform contrast and intensity across different patients, the intensity normalization technique proposed by Nyúl et al. [44] is applied to every image sequence. In this process, a set of intensity landmarks is identified for every sequence from the training dataset; these landmarks represent intensity percentiles. After the training process, intensity normalization is performed by linearly transforming the original intensities between two adjacent landmarks onto the corresponding learned landmarks. This makes the histogram of every sequence uniform across various regions.

After intensity normalization of the MRI images, the histogram of each sequence is evaluated for all training patches extracted from that sequence, and these patches are normalized to zero mean and unit variance. During image transformation, the landmark locations obtained in the training phase define a piecewise linear mapping from the first landmark to the last, and during patch extraction a sub-image of the original 240 x 240 image is obtained, as shown in Figure 4.
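A minimal numpy sketch of this landmark-based normalization is given below; the specific percentile landmarks are illustrative assumptions, not the exact configuration of Nyúl et al. [44].

    # Minimal numpy sketch of landmark-based intensity normalization in the
    # spirit of Nyul et al. [44]; the percentile choices are illustrative.
    import numpy as np

    def learn_landmarks(volumes, percentiles=(1, 10, 25, 50, 75, 90, 99)):
        # Average the per-volume intensity landmarks over the training set.
        marks = [np.percentile(v[v > 0], percentiles) for v in volumes]
        return np.mean(marks, axis=0)

    def normalize_to_landmarks(volume, learned_marks, percentiles=(1, 10, 25, 50, 75, 90, 99)):
        # Piecewise-linear mapping of this volume's landmarks onto the learned ones.
        own_marks = np.percentile(volume[volume > 0], percentiles)
        return np.interp(volume, own_marks, learned_marks)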

Figure 4. Patch extraction

The size of each slice image is 240 x 240. Training a CNN on the entire image increases the number of parameters, which in turn requires a large amount of data. Because the available dataset is small, the model is instead trained on patches. The patch size considered is 33 x 33, which works effectively with the obtained dataset.
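The sketch below illustrates extracting 33 x 33 patches from a 240 x 240 slice; the stride and the dense grid sampling are assumptions, since the paper does not state how the patch centres are chosen.

    # Illustrative extraction of 33 x 33 patches from a 240 x 240 slice;
    # the stride / sampling grid is an assumption.
    import numpy as np

    def extract_patches(slice_2d, patch_size=33, stride=33):
        patches = []
        rows, cols = slice_2d.shape[:2]
        for r in range(0, rows - patch_size + 1, stride):
            for c in range(0, cols - patch_size + 1, stride):
                patches.append(slice_2d[r:r + patch_size, c:c + patch_size])
        return np.stack(patches)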

4.1.4 Image data augmentation

Augmentation is performed when the training dataset is small and the number of parameters to train is significantly large; it can increase the size of the training set tenfold or more, and it is applied to smaller training datasets because it becomes ineffective for large ones. In this study, rotation and flipping are performed randomly on the training dataset at run time to create new data. Some samples (t) are selected from a batch of 128; to these samples, one of three operations (rotation, horizontal flip, or vertical flip), or a random combination of the three, is applied to obtain modified samples. These modified samples are merged with the remaining samples to form a new batch of 128 samples, which is used to train the model.
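The following is a minimal sketch of this run-time augmentation; the number of modified samples t and the use of 90-degree rotations are assumptions made for illustration.

    # Sketch of the run-time augmentation: t samples from a batch of 128 are
    # given a random rotation and/or flip; t and the rotation angles are assumptions.
    import numpy as np

    def augment_batch(batch, t=32, rng=None):
        rng = rng or np.random.default_rng()
        idx = rng.choice(len(batch), size=t, replace=False)
        for i in idx:
            sample = batch[i]
            if rng.random() < 0.5:
                sample = np.rot90(sample, k=int(rng.integers(1, 4)))  # random rotation
            if rng.random() < 0.5:
                sample = np.fliplr(sample)                            # horizontal flip
            if rng.random() < 0.5:
                sample = np.flipud(sample)                            # vertical flip
            batch[i] = sample
        return batch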

Dense Hierarchical CNN (DH-CNN)

Start

Input: patches of size 33 x 33 fetched from the image dataset X
Output: segmented tumor image of size 240 x 240

Initialisation:

  1. The image is normalized with batch normalization:
         def norm(image):
             return (image - image_nonzero.mean()) / image_nonzero.std()
  2. Voxels are generated after image normalization and preprocessed with:
         def vox_preprocess(vox):
             return np.reshape(vox, vox_shape)

Procedure:

  1. Segmentation loss = ground truth image - predicted image
  2. The voxels are preprocessed by reshaping and rescaling.
  3. The dice coefficients are computed with:
         def dice_coef_np(y_true, y_pred, num_classes):
             return dice coefficient values
  4. The dense hierarchical two-stage model is trained with the preprocessed images:
         def train():
             a small patch x_i is fed to the convolution block of 33 * 36 layers; 36 features are fetched
             the output is fed to stage 1 (T-Net), comprising 5 layers and extracting 108 features
             the output is then fed to stage 2 (S-Net), comprising 6 layers and extracting 180 features
             return the segmented tumors
  5. A segmented tumor image of size 240 x 240 is the output of the proposed model.
  6. Dice score coefficients are calculated for the voxels trained with the two-stage model.

End
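A runnable numpy version of the dice_coef_np helper named in the listing above might look as follows; the per-class formulation and the smoothing constant are assumptions.

    # Runnable sketch of dice_coef_np for integer label maps; the smoothing
    # constant and per-class formulation are assumptions.
    import numpy as np

    def dice_coef_np(y_true, y_pred, num_classes, smooth=1e-6):
        scores = []
        for cls in range(num_classes):
            t = (y_true == cls).astype(np.float64)
            p = (y_pred == cls).astype(np.float64)
            intersection = np.sum(t * p)
            scores.append((2.0 * intersection + smooth) /
                          (np.sum(t) + np.sum(p) + smooth))
        return scores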

4.2 Training neural network using transfer learning

In this study, the CNN is trained using transfer learning. Transfer learning improves learning on a new task by transferring knowledge from a model that has already been trained on a similar task [45]. Consider, for example, the knowledge obtained while learning to recognize humans: this knowledge can be applied to recognizing humans at various age levels [46]. In this machine-learning process, additional training information from one or more related tasks is used alongside the basic training data.

The proposed model utilizes hierarchical clustering along with the dense CNN shown in Figure 5. In the dense CNN, input patches of size 33 x 33 x 4 are fed to the T-net, i.e. the transfer-learning model, which contains six layers including convolutional, max-pooling, and convolutional layers with a dropout rate of 0.8. The output from the T-net, a feature volume of size 128 x 16 x 16, is fed to the S-net, the segmentation stage, which includes five layers, i.e. a convolutional layer, max-pooling, and fully connected layers with a dropout rate of 0.5. The best weights learned earlier for tumor segmentation are then loaded. After extracting these feature vectors for all the training patches of the patients, the CNN is trained for the other patients using these vectors. The proposed model combining the T-net and S-net is illustrated in Figure 5.
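As a rough sketch, the two-stage arrangement described above could be expressed as follows in Keras; the framework choice, kernel counts, and exact layer ordering are assumptions made for illustration and are not the authors' released configuration.

    # Hedged Keras sketch of the two-stage T-net / S-net patch model; kernel
    # counts, activations, and layer ordering are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers

    def build_dh_cnn(num_classes=5):
        inputs = layers.Input(shape=(33, 33, 4))       # multi-sequence 33 x 33 patch

        # T-net: transfer-learning stage, dropout rate 0.8.
        x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(inputs)
        x = layers.Conv2D(128, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)             # roughly 16 x 16 x 128 features
        x = layers.Dropout(0.8)(x)

        # S-net: segmentation stage, dropout rate 0.5.
        x = layers.Conv2D(128, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)
        x = layers.Flatten()(x)
        x = layers.Dense(256, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
        outputs = layers.Dense(num_classes, activation="softmax")(x)

        return tf.keras.Model(inputs, outputs)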

The CNN model with layer-wise comparison is presented in Table 3.

Figure 5. Dense CNN

Table 3. CNN model chart

Layer | Input | Type | Filters | Filter Size | Stride | FC Units
Layer 1 | 128*16*16 | Conv | 128 | 3*3 | 1*1 | NA
Layer 2 | 128*16*16 | Max-pool | NA | 3*3 | 2*2 | NA
Layer 3 | 6272 | FC + drop | NA | NA | NA | 256
Layer 4 | 256 | FC + drop | NA | NA | NA | 256
Layer 5 | 256 | FC | NA | NA | NA | 5
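Read as code, Table 3 corresponds roughly to the following Keras Sequential model; the activations, dropout rates, softmax output, and channels-last input shape feeding Layer 1 are assumptions for illustration.

    # Hedged translation of Table 3 into a Keras Sequential model; activations,
    # dropout rates, and the input shape are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers

    table3_model = tf.keras.Sequential([
        tf.keras.Input(shape=(16, 16, 128)),                    # 128*16*16 feature input
        layers.Conv2D(128, (3, 3), strides=(1, 1), padding="same",
                      activation="relu"),                       # Layer 1: Conv, 128 filters
        layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),  # Layer 2: Max-pool 3*3, stride 2*2
        layers.Flatten(),                                       # 7 * 7 * 128 = 6272 units
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                                    # Layer 3: FC + drop, 256 units
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                                    # Layer 4: FC + drop, 256 units
        layers.Dense(5, activation="softmax"),                  # Layer 5: FC, 5 units
    ])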

5. Results and Discussions

Images showing the difference between the automatic segmentation and the actual ground truth are shown in Figure 6. From these images, it can be seen that the proposed model precisely detects the location, shape, and size of the tumor for each patient.

A comparison of the proposed model with the deep learning method [43] is shown in Figure 7.

Table 4 reports various performance parameters, such as the dice score, structural similarity index, and peak signal-to-noise ratio, calculated over 150 image datasets; the images that showed the best results are indicated in the table.
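These measures could be computed per image pair roughly as follows; the use of scikit-image, the data range, and the binarization used for the dice score are assumptions, not the authors' exact evaluation code.

    # Sketch of the quantitative measures in Table 4 using scikit-image; the
    # data range and the binarization for the dice score are assumptions.
    import numpy as np
    from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                                 structural_similarity)

    def evaluate_pair(ground_truth, segmented, data_range=255):
        mse = mean_squared_error(ground_truth, segmented)
        psnr = peak_signal_noise_ratio(ground_truth, segmented, data_range=data_range)
        ssim = structural_similarity(ground_truth, segmented, data_range=data_range)
        gt, seg = ground_truth > 0, segmented > 0        # binarized tumor masks
        dice = 2.0 * np.logical_and(gt, seg).sum() / (gt.sum() + seg.sum() + 1e-6)
        return mse, psnr, ssim, dice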

Figure 6. Ground truth vs automated segmentation

Table 4. Parameters for segmented tissues and their performance analysis

Images | Mean Square Error (MSE) | Peak Signal to Noise Ratio (PSNR) | Structural Similarity Index (SSIM) | Dice Score Coefficient (DSC)
Img1 | 1.957 | 56.45 dB | 0.8951 | 0.85
Img2 | 0.65 | 68.91 dB | 0.900 | 0.87
Img3 | 5.9 | 58.21 dB | 0.8168 | 0.92
Img4 | 6.01 | 60.34 dB | 0.8266 | 0.92
Img5 | 2.23 | 60.55 dB | 0.901 | 0.79
Img6 | 4.16 | 57.61 dB | 0.9561 | 0.80
Img7 | 5.42 | 60.60 dB | 0.822 | 0.91

Figure 7. Performance evaluation with different benchmark models

The performance of different benchmark models relative to the proposed hybrid model is detailed in Figure 7. The predictions of the sub-regions are combined to produce the whole tumor (WT), tumor core (CT), and active tumor (AT). Zhao et al. [39] and Pereira et al. [43] used 2D CNN architectures that take 33 * 33 patches as inputs and predict the label of the central voxel. The work proposed by Kamnitsas et al. [35] used the receptive field of the voxel in the map layer. In the present work, multi-scale contextual information is incorporated by generating different predictions at different levels of the network, making the model more efficient. The dice scores achieved by the model are shown in Figure 7.

6. Conclusion & Future Scope

In the proposed research methodology, a hierarchical dense CNN is proposed for brain tumor segmentation in MRI images. The proposed model incorporates a pre-processing stage with bias field correction and intensity normalization. While training the CNN model, the number of training patches was augmented because the available dataset was very small and the number of parameters to be trained was considerably larger. After preprocessing, small patches are extracted and passed to the hierarchical structure of the CNN model. The hierarchical dense CNN consists of two stages, i.e. the transfer-learning model (T-net) and the segmentation model (S-net). The dropout layers added to both the T-net and the S-net resulted in a higher dice score coefficient when segmenting the tumorous tissues. The CNN was modeled with small 3 * 3 kernels to allow deeper architectures. The primary objective of the methodology was to enhance the efficiency of segmentation algorithms by considering qualitative measures such as the dice score coefficient together with quantitative parameters such as mean square error and peak signal to noise ratio. The performance of the proposed approach was validated by comparing it with other existing approaches, and the observations are tabulated. From the results, it can be observed that the proposed approach achieved dice scores of 0.71 for the whole tumor (WT), 0.80 for the core tumor (CT), and 0.82 for the active tumor (AT) compared to other approaches; the parameters for the segmented tissues and their performance analysis were evaluated, and the obtained results show the efficacy of the proposed approach [46-52].

Since efficient segmentation algorithms help reduce the mortality rate, it is essential to enhance their accuracy. For further research, an enhanced deep learning algorithm with parameter optimization could increase the computational speed and efficiency of the brain tumor segmentation process. Combining this research with a prediction model would support the early diagnosis and treatment of brain tumors.

Nomenclature

SSIM    Structural Similarity Index
DSC     Dice Score Coefficient
PSNR    Peak Signal to Noise Ratio
MSE     Mean Square Error
MRI     Magnetic Resonance Imaging
CNN     Convolutional Neural Networks
DBN     Deep Belief Networks
RBM     Restricted Boltzmann Machine
SAE     Stacked Auto-encoders
CT      Core Tumor
WT      Whole Tumor
AT      Active Tumor
HGG     High-Grade Gliomas
LGG     Low-Grade Gliomas
FCNN    Fully Convolutional Neural Network

  References

[1] Kaur, R., Doegar, A. (2019). Localization and classification of brain tumor using machine learning & deep learning techniques. International Journal of Innovative Technology and Exploring Engineering (IJITEE), 8(9): 2278-3075. 

[2] Anitha, V., Murugavalli, S. (2015). A literature survey on brain tumour classification techniques. International Journal of Applied Engineering Research (IJAER), 10(6). 

[3] Bahadure, N.B., Ray, A.K., Thethi, H.P. (2017). Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. International Journal of Biomedical Imaging, 2017: 1-12. https://doi.org/10.1155/2017/9749108

[4] Zacharaki, E.I., Wang, S., Chawla, S., Soo Yoo, D., Wolf, R., Melhem, E.R., Davatzikos, C. (2009). Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 62(6): 1609-1618. https://doi.org/10.1002/mrm.22147

[5] Rathi, V.P., Palani, S. (2012). Brain tumor MRI image classification with feature selection and extraction using linear discriminant analysis. arXiv preprint arXiv:1208.2128. 

[6] Rajini, N.H. (2019). Brain tumor image classification and grading using convolutional neural network and particle swarm optimization algorithm. Int. J. Eng. Adv. Technol., 8(3): 42-48. 

[7] Goswami, A., Dixit, M. (2020). An analysis of image segmentation methods for brain tumour detection on MRI images. In 2020 IEEE 9th International Conference on Communication Systems and Network Technologies (CSNT), pp. 318-322. https://doi.org/10.1109/CSNT48778.2020.9115791

[8] Sobhaninia, Z., Rezaei, S., Karimi, N., Emami, A., Samavi, S. (2020). Brain tumor segmentation by cascaded deep neural networks using multiple image scales. In 2020 28th Iranian Conference on Electrical Engineering (ICEE), pp. 1-4. https://doi.org/10.1109/ICEE50131.2020.9260876

[9] Lefkovits, L., Lefkovits, S., Vaida, M.F., Emerich, S., Măluțan, R. (2017). Comparison of classifiers for brain tumor segmentation. In International Conference on Advancements of Medicine and Health Care through Technology, 59: 195-200. https://doi.org/10.1007/978-3-319-52875-5_43

[10] Dahab, D.A., Ghoniemy, S.S., Selim, G.M. (2012). Automated brain tumor detection and identification using image processing and probabilistic neural network techniques. International Journal of Image Processing and Visual Communication, 1(2): 1-8. 

[11] Nayak, L.P., Mishra, S., Pattnaik, P., Rana, D. (2018). Automatic classification and detection of brain tumor using hybrid k-means radial basis function neural network and fast fuzzy C-Means algorithm. International Journal of Engineering, Science and Mathematics, 7(4): 105-118.

[12] Abdel-Maksoud, E., Elmogy, M., Al-Awadi, R. (2015). Brain tumor segmentation based on a hybrid clustering technique. Egyptian Informatics Journal, 16(1): 71-81. https://doi.org/10.1016/j.eij.2015.01.003

[13] Kothavari, K., Keerthana, R., Mariselvam, M., Kaveya, S., Mekala, L. (2013). A hybrid for PNN-based MRI brain tumor classification and patient detail authentication using separable reversible hiding. International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2(4). 

[14] Hemanth, D.J., Vijila, C.K.S., Anitha, J. (2010). Performance improved PSO-based modified counter propagation neural network for abnormal MR brain image classification. Int. J. Advance. Soft Comput. Appl, 2(1): 65-84. 

[15] He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778. https://doi.org/10.1109/CVPR.2016.90

[16] Rajesh Sharma, R., Marikkannu, P. (2015). Hybrid RGSA and support vector machine framework for three-dimensional magnetic resonance brain tumor classification. The Scientific World Journal, 2015: 1-14. https://doi.org/10.1155/2015/184350

[17] Sharma, M., Mittal, R., Choudhury, T., Vishnu, K. (2018). Classification of Brain simulation using artificial intelligence cognitive science and neuroscience. 2018 International Conference on Communication Computing and Internet of Things (IC3IoT), pp. 403-407.

[18] Arora, B., Choudhury, T., Kumar, P., Mukherjee. (2016). An intelligent way to play music by brain activity using brain computer interface. 2016 2nd International Conference on Next Generation Computing Technologies (NGCT), pp. 223-228. https://doi.org/10.1109/NGCT.2016.7877419

[19] Ren, Z., Qian, K., Zhang, Z., Pandit, V., Baird, A., Schuller, B. (2018). Deep scalogram representations for acoustic scene classification. IEEE/CAA Journal of Automatica Sinica, 5(3): 662-669. https://doi.org/10.1109/JAS.2018.7511066

[20] Angulakshmi, M., Lakshmi Priya, G.G. (2017). Automated brain tumour segmentation techniques—a review. International Journal of Imaging Systems and Technology, 27(1): 66-77. https://doi.org/10.1002/ima.22211

[21] Nachimuthu, D.S., Baladhandapani, A. (2014). Multidimensional texture characterization: On analysis for brain tumor tissues using MRS and MRI. Journal of Digital Imaging, 27(4): 496-506. https://doi.org/10.1007/s10278-013-9669-5

[22] Reddy, D., Bhavana, V., Krishnappa, H.K. (2018). Brain tumor detection using image segmentation techniques. In 2018 International Conference on Communication and Signal Processing (ICCSP), pp. 0018-0022. https://doi.org/10.1109/ICCSP.2018.8524235

[23] Ejaz, K., Rahim, M.S.M., Rehman, A., Chaudhry, H., Saba, T., Ejaz, A. (2018). Segmentation method for a pathological brain tumor and accurate detection using MRI. International Journal of Advanced Computer Science and Applications, 9(8): 394-401.

[24] Kumar, P., VijayKumar, B. (2019). Brain tumor MRI segmentation and classification using ensemble classifier. International Journal of Recent Technology and Engineering (IJRTE), 8(1S4). 

[25] Zhao, J., Meng, Z., Wei, L., Sun, C., Zou, Q., Su, R. (2019). Supervised brain tumor segmentation based on gradient and context-sensitive features. Frontiers in Neuroscience, 13: 144. https://doi.org/10.3389/fnins.2019.00144

[26] Mushtaq, U., Singh, S.K. (2018). Brain tumor segmentation techniques in MRI images an analysis. In 2018 International Conference on Intelligent Circuits and Systems (ICICS), pp. 81-86. https://doi.org/10.1109/ICICS.2018.00029

[27] Lather, M., Singh, P. (2020). Investigating brain tumor segmentation and detection techniques. Procedia Computer Science, 167: 121-130. https://doi.org/10.1016/j.procs.2020.03.189

[28] Zhao, T., Chen, Q., Kuang, Z., Yu, J., Zhang, W., Fan, J. (2018). A deep mixture of diverse experts for large-scale visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(5): 1072-1087. https://doi.org/10.1109/TPAMI.2018.2828821

[29] Yao, C., Liu, Y. F., Jiang, B., Han, J., Han, J. (2017). LLE score: A new filter-based unsupervised feature selection method based on nonlinear manifold embedding and its application to image recognition. IEEE Transactions on Image Processing, 26(11): 5257-5269. https://doi.org/10.1109/TIP.2017.2733200

[30] Chen, S., Yang, J., Luo, L., Wei, Y., Zhang, K., Tai, Y. (2017). Low-rank latent pattern approximation with applications to robust image classification. IEEE Transactions on Image Processing, 26(11): 5519-5530. https://doi.org/10.1109/TIP.2017.2738560

[31] Zhu, X., Bain, M. (2017). B-CNN: Branch convolutional neural network for hierarchical classification. arXiv preprint arXiv:1709.09890. 

[32] Milletari, F., Ahmadi, S.A., Kroll, C., Plate, A., Rozanski, V., Maiostre, J., Navab, N. (2017). Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Computer Vision and Image Understanding, 164: 92-102. https://doi.org/10.1016/j.cviu.2017.04.002

[33] Milletari, F., Navab, N., Ahmadi, S.A. (2016). V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565-571. https://doi.org/10.1109/3DV.2016.79

[34] Kleesiek, J., Urban, G., Hubert, A., Schwarz, D., Maier-Hein, K., Bendszus, M., Biller, A. (2016). Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. Neuroimage, 129: 460-469. https://doi.org/10.1016/j.neuroimage.2016.01.024

[35] Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Glocker, B. (2017). Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36: 61-78. https://doi.org/10.1016/j.media.2016.10.004

[36] Shakeri, M., Tsogkas, S., Ferrante, E., Lippe, S., Kadoury, S., Paragios, N., Kokkinos, I. (2016). Sub-cortical brain structure segmentation using F-CNN's. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 269-272. https://doi.org/10.1109/ISBI.2016.7493261

[37] Yan, Z., Zhang, H., Piramuthu, R., Jagadeesh, V., DeCoste, D., Di, W., Yu, Y. (2015). HD-CNN: Hierarchical deep convolutional neural networks for large-scale visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2740-2748.

[38] Moeskops, P., Viergever, M.A., Mendrik, A.M., De Vries, L.S., Benders, M.J., Išgum, I. (2016). Automatic segmentation of MR brain images with a convolutional neural network. IEEE Transactions on Medical Imaging, 35(5): 1252-1261. https://doi.org/10.1109/TMI.2016.2548501

[39] Zhao, X., Wu, Y., Song, G., Li, Z., Fan, Y., Zhang, Y. (2016). Brain tumor segmentation using a fully convolutional neural network with conditional random fields. In International Workshop on Brain lesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 10154: 75-87. https://doi.org/10.1007/978-3-319-55524-9_8

[40] Hu, W., Huang, Y., Wei, L., Zhang, F., Li, H. (2015). Deep convolutional neural networks for hyperspectral image classification. Journal of Sensors, 2015: 1-12. https://doi.org/10.1155/2015/258619

[41] Tustison, N.J., Avants, B.B., Cook, P.A., Zheng, Y., Egan, A., Yushkevich, P.A., Gee, J.C. (2010). N4ITK: improved N3 bias correction. IEEE Transactions on Medical Imaging, 29(6): 1310-1320. https://doi.org/10.1109/TMI.2010.2046908

[42] Shah, M., Xiao, Y., Subbanna, N., Francis, S., Arnold, D. L., Collins, D.L., Arbel, T. (2011). Evaluating intensity normalization on MRIs of the human brain with multiple sclerosis. Medical Image Analysis, 15(2): 267-282. https://doi.org/10.1016/j.media.2010.12.003

[43] Pereira, S., Pinto, A., Alves, V., Silva, C.A. (2016). Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Transactions on Medical Imaging, 35(5): 1240-1251. https://doi.org/10.1109/TMI.2016.2538465

[44] Nyúl, L.G., Udupa, J.K., Zhang, X. (2000). New variants of a method of MRI scale standardization. IEEE Transactions on Medical Imaging, 19(2): 143-150. https://doi.org/10.1109/42.836373

[45] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61: 85-117. https://doi.org/10.1016/j.neunet.2014.09.003

[46] Dridi, M., Bouallegue, B., Hajjaji, M.A., Mtibaa, A. (2016). An enhancement medical image compression algorithm based on neural network. International Journal of Advanced Computer Science and Applications, 7(5): 484-489. 

[47] Choi, H., Jin, K.H. (2016). Fast and robust segmentation of the striatum using deep convolutional neural networks. Journal of Neuroscience Methods, 274: 146-153. https://doi.org/10.1016/j.jneumeth.2016.10.007

[48] Chen, H., Dou, Q., Yu, L., Qin, J., Heng, P.A. (2018). VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. NeuroImage, 170: 446-455. https://doi.org/10.1016/j.neuroimage.2017.04.041

[49] Ghafoorian, M., Karssemeijer, N., Heskes, T., Bergkamp, M., Wissink, J., Obels, J., Platel, B. (2017). Deep multi-scale location-aware 3D convolutional neural networks for automated detection of lacunes of presumed vascular origin. NeuroImage: Clinical, 14: 391-399. https://doi.org/10.1016/j.nicl.2017.01.033

[50] Ghafoorian, M., Karssemeijer, N., Heskes, T., van Uden, I.W., Sanchez, C.I., Litjens, G., Platel, B. (2017). Location sensitive deep convolutional neural networks for segmentation of white matter hyperintensities. Scientific Reports, 7(1): 1-12. https://doi.org/10.1038/s41598-017-05300-5

[51] Bao, S., Chung, A.C. (2018). Multi-scale structured CNN with label consistency for brain MR image segmentation. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 6(1): 113-117. https://doi.org/10.1080/21681163.2016.1182072

[52] Zhang, W., Li, R., Deng, H., Wang, L., Lin, W., Ji, S., Shen, D. (2015). Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage, 108: 214-224. https://doi.org/10.1016/j.neuroimage.2014.12.061