MRI and CT Image Based Breast Tumor Detection Framework with Boundary Detection Technique


Velpula Nagi Reddy, Peram Subba Rao

Dept of IT, VFSTR Deemed to be University, Vadlamudi, Guntur 522213, India

Corresponding Author Email: vnreddy834@gmail.com

Page: 1797-1805 | DOI: https://doi.org/10.18280/ts.390539

Received: 11 August 2022 | Revised: 25 September 2022 | Accepted: 5 October 2022 | Available online: 30 November 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Image segmentation is vital in image processing and computer vision, and it is also regarded as a bottleneck in the development of image processing technology. Image segmentation is the process of dividing an image into a set of disjoint regions with uniform and homogeneous characteristics. Before proceeding with the various statistical methods for analysing tumor segmentation, one has to understand the labels that make up the MR image. Because of the high variability in tumor morphology and the low signal-to-noise ratio inherent to mammography, manual reading of mammograms results in a significant number of patients being recalled, and consequently a large number of biopsies being performed to reduce the risk of missing a malignancy. The convolutional neural network (CNN) is a mainstream deep learning architecture used in image classification, and it has achieved major advances in large-scale image classification challenges in recent years. In this study, more than 3000 high-quality original mammograms were acquired with approval from an institutional review board at the University of Kentucky. Several classifiers based on CNNs were built, and each classifier was evaluated on its performance relative to ground-truth values derived from biopsy histology results and two-year negative mammogram follow-up confirmed by experts. In this paper, a classification method based on training convolutional neural networks (CNN) is proposed, and the preliminary classification performance obtained when this CNN automatically learns and categorises the images is presented. For this four-class classification task, the transfer learning technique known as fine-tuning is proposed, which reuses layers learnt on the ImageNet dataset to discover the optimal design.

Keywords: 

convolutional neural network, deep learning, transfer learning, ImageNet

1. Introduction

Convolutional neural networks (ConvNets or CNNs) are among the most common neural network architectures for image recognition and classification [1]. CNNs are widely used in a variety of areas, including object detection, face recognition, and so on. Convolutional networks perceive images as volumes, such as three-dimensional objects [2], rather than as flat canvases measured only by their width and height. This is because digital colour images use a red-green-blue (RGB) encoding, which blends those three colours to produce the colour range that people perceive. A convolutional network [3] ingests such images as three separate layers of colour stacked one on top of the other.

A convolutional network therefore receives a typical colour image as a rectangular box whose width and height are measured by the number of pixels along those dimensions, and whose depth is three layers deep, one for each letter in RGB [4]. Those depth layers are referred to as channels.

A CNN takes a visual representation of data, categorises it, and presents the results to the user. A computer sees an image as a collection of pixels [5], and how it interprets the image depends on its purpose. To achieve its visual objectives, the computer processes data in the form h x w x d (h for height, w for width, and d for depth). For example, a 4 x 4 x 1 array represents a grayscale image, whereas a 6 x 6 x 3 array represents an RGB image (3 refers to the RGB channels) [6]. Figure 1 depicts an array representing an RGB image.

Figure 1. RGB network
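
To make the h x w x d representation concrete, the NumPy sketch below loads a colour image and inspects its dimensions; the file name is a hypothetical placeholder.

```python
import numpy as np
from PIL import Image

# Load a colour image; "sample_mammogram.png" is a hypothetical file name.
img = np.array(Image.open("sample_mammogram.png").convert("RGB"))

h, w, d = img.shape          # height, width, depth (d = 3 for the R, G, B channels)
print(f"image volume: {h} x {w} x {d}")

# A grayscale version of the same image has depth 1 (h x w x 1).
gray = np.array(Image.open("sample_mammogram.png").convert("L"))[..., np.newaxis]
print(f"grayscale volume: {gray.shape}")
```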

In practice, when developing and evaluating CNN models in deep learning, every input image passes through a series of convolution layers with filters (kernels), pooling, fully connected layers, and a softmax function that assigns probabilistic values between 0 and 1 to classify the object. CNN follows this procedure to process an input image and classify objects according to their values. Figure 2 depicts the basic CNN framework.

Figure 2. Architecture of CNN

The structure of the deep CNN model is given explicitly. The convolutional layer, max-pooling operation, and fully connected layer are represented by the notation "conv.", "max.", and "full", respectively; "relu" denotes the rectified linear unit activation [7].

This work employs deep CNNs with a structure similar to that of Mnih (2013), with minor modifications. The number of filters is increased to 128 in each of the first three convolutional layers. Extensive trials show that this change improves the accuracy of object extraction across a number of tasks without significantly increasing the amount of computation required. Parameter tuning and structural improvements may yield further incremental gains in the reliability of object extraction, but that goes beyond the scope of this research [8].

The model has five layers in total: the first three are convolutional and the last two are fully connected. The output of each lower layer becomes the input of the layer above it in this five-layer hierarchy. In the first layer, the input is the image patch. Every intermediate layer generates its output as feature maps, arrays with a fixed dimension [9]. In the last layer, a logistic regression predicts the probability of each pixel's label from the feature maps. Furthermore, each layer can have multiple stages, with each stage performing a different operation depending on its depth. For instance, the first convolutional layer has three stages: it convolves the entire image with a fixed set of linear filters, applies a non-linear transformation [10] to the result of the convolution, and finishes with max pooling. The operations performed at the various layers are described below:

A. Convolutional layer

In Figure 2, a convolutional layer labelled "Conv.1" is shown for illustration. A convolutional layer typically consists of three stages (convolution, non-linear transformation, and spatial pooling), but in some circumstances only the first two are used. Let X denote the convolutional layer's input, a three-dimensional array of size sx × sx × cx, where sx is the spatial dimension and cx is the channel dimension. Let Y denote the convolutional layer's output, a three-dimensional array of size sy × sy × cy, where sy is the spatial dimension and cy is the channel dimension [11]. Let W denote the linear filter weights, represented as a four-dimensional tensor of size sw × sw × cx × cy, which contains the weights of a fixed set of sw × sw two-dimensional filters mapping input X to output Y. The output of a standard three-stage convolutional layer can be represented as:

$Y_j=\operatorname{pool}\left(g\left(b_j+\sum_{i=1}^{c_x} W_{ij} * X_i\right)\right)$   (1)

where b_j is the bias of the j-th output channel, * denotes two-dimensional convolution, and g(·) is the non-linear activation function. Although spatial pooling is not applied in some cases (for example, in the second and third convolutional layers of the deep CNNs employed in this research), a typical convolutional layer comprises all three stages: convolution, non-linear transformation, and spatial pooling [12]. The non-linear activation g(·) used here is the rectified linear function [13]:

$g(x)=\max (x, 0)$   (2)

and pool(·) in Eq. (1) denotes spatial pooling of the activations.
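
The sketch below is a minimal NumPy illustration of the three stages in Eqs. (1) and (2) (convolution, rectified linear activation, and max pooling), assuming 'valid' convolution, a stride of 1, and non-overlapping pooling; the shapes are illustrative only.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(X, W, b, pool_size=2):
    """One convolutional layer: convolution, ReLU, then max pooling.

    X: input volume of shape (c_x, s_x, s_x)
    W: filter bank of shape (s_w, s_w, c_x, c_y)
    b: biases of shape (c_y,)
    """
    c_x, s_x, _ = X.shape
    s_w, _, _, c_y = W.shape
    s_conv = s_x - s_w + 1                       # 'valid' convolution output size
    Y = np.zeros((c_y, s_conv, s_conv))

    # Eq. (1): Y_j = pool(g(b_j + sum_i W_ij * X_i))
    for j in range(c_y):
        acc = b[j]
        for i in range(c_x):
            acc = acc + correlate2d(X[i], W[:, :, i, j], mode="valid")
        Y[j] = np.maximum(acc, 0)                # Eq. (2): g(x) = max(x, 0)

    # Spatial max pooling over non-overlapping pool_size x pool_size windows.
    s_pool = s_conv // pool_size
    pooled = Y[:, :s_pool * pool_size, :s_pool * pool_size]
    pooled = pooled.reshape(c_y, s_pool, pool_size, s_pool, pool_size).max(axis=(2, 4))
    return pooled

# Illustrative shapes only: a 3-channel 16x16 patch and 8 filters of size 3x3.
X = np.random.rand(3, 16, 16)
W = np.random.rand(3, 3, 3, 8) * 0.1
b = np.zeros(8)
print(conv_layer(X, W, b).shape)                 # (8, 7, 7)
```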

B. Fully connected layer

A fully connected layer is represented by the "Full2" layer in Figure 2. Let X denote the input to a fully connected layer, a vector of size sx, and let W denote a weight matrix of size sy × sx. The output Y of the layer can be written as:

$Y=g(b+W X)$   (3)

where b is the bias vector and g(x) is the non-linear activation function.
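
As a minimal illustration of Eq. (3), the following NumPy sketch assumes the ReLU of Eq. (2) as the activation; the layer sizes are illustrative only.

```python
import numpy as np

def fully_connected(X, W, b):
    """Eq. (3): Y = g(b + W X), with g taken as the ReLU of Eq. (2)."""
    return np.maximum(b + W @ X, 0)

# Illustrative sizes: map a 1152-dimensional feature vector to 256 hidden units.
X = np.random.rand(1152)
W = np.random.rand(256, 1152) * 0.01
b = np.zeros(256)
print(fully_connected(X, W, b).shape)   # (256,)
```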

C. Full model

Figure 2 presents the full procedure of deep CNN prediction, outlining the architecture and operations of each layer in detail. As input, the deep CNN receives a 64 × 64 image patch containing the R, G, and B channel values within the patch. First, the input image patch is convolved with 128 spatial filters of 16 × 16 pixels using a 4-pixel stride, and a rectified linear transformation is applied, producing 128 feature maps of 13 × 13 pixels. The output is then subjected to max pooling with a pooling size of 2 × 2 pixels and a stride of 1 pixel [14], giving 128 feature maps of 12 × 12 pixels each. The second layer convolves the output of the first layer with 128 × 128 filters of 4 × 4 pixels at a stride of 1 pixel and rectifies the result linearly [15]. The third layer then receives 128 feature maps of 9 × 9 pixels as input, convolves them with 128 × 128 filters of 2 × 2 pixels at a stride of 1 pixel, and applies a rectified linear transformation [16], generating 128 feature maps of 3 × 3 pixels. Fourth, the 1152-dimensional feature vector obtained by concatenating these feature maps is fed into the fully connected part of the network.
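
A Keras sketch of the five-layer network described above (three convolutional layers followed by two fully connected layers) is given below; the filter counts, kernel sizes, and strides follow the description where legible, while the widths of the fully connected layers are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of the described network: 64x64 RGB patch -> 3 conv layers -> 2 dense layers.
# Filter sizes and strides follow the description above and are assumptions, not a spec.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(128, kernel_size=16, strides=4, activation="relu"),   # conv.1
    layers.MaxPooling2D(pool_size=2, strides=1),                        # max pooling
    layers.Conv2D(128, kernel_size=4, strides=1, activation="relu"),    # conv.2
    layers.Conv2D(128, kernel_size=2, strides=1, activation="relu"),    # conv.3
    layers.Flatten(),
    layers.Dense(256, activation="relu"),                               # full.1 (width assumed)
    layers.Dense(4, activation="softmax"),                              # full.2 (4 classes)
])
model.summary()
```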

1.1 Data set

The dataset contains mammographic images of the abnormalities most commonly encountered. The MIAS database of mammograms represents a valuable and practical contribution to computer-vision research in mammography [17]. It consists of images digitised at a roughly 200-micron pixel edge and clipped/padded to 1024 × 1024 pixels. The mammogram images are obtained from the publicly available Mammographic Image Analysis Society (MIAS) database, an organisation of UK research groups [1]. The X-ray films in the database were carefully selected from the United Kingdom National Breast Screening Programme and digitised with a Joyce-Loebl scanning microdensitometer (optical density range 0-3.2) to a resolution of 50 μm × 50 μm, with each pixel represented by an 8-bit word. The database contains a total of 322 images of size 1024 × 1024, comprising left and right breast images for 161 patients. Among the 322 images, 210 are normal, 61 benign and 51 malignant [18]. Information is provided on the character of the background tissue (fatty, fatty-glandular, and dense-glandular), the class of abnormality (for example calcification, circumscribed masses, spiculated masses, ill-defined masses, architectural distortion, asymmetry, and normal) and, if an abnormality is present, its severity, the coordinates of the centre of the abnormality [19], and the radius of a circle enclosing the abnormality. Figure 3 below shows sample images with tissue types.

Figure 3. (a) Normal (b) fatty (c) glandular (d) dense glandular
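
For illustration, the following sketch parses a MIAS ground-truth listing, assuming the column layout of the publicly distributed info file (reference number, background tissue, abnormality class, severity, and, where present, the x and y coordinates and radius of the abnormality); the file path is a placeholder.

```python
# Parse the MIAS ground-truth listing; the column layout assumed here
# (ref, tissue, class, severity, x, y, radius) is an assumption about the released file.
records = []
with open("mias_info.txt") as f:                    # placeholder path
    for line in f:
        parts = line.split()
        if len(parts) < 3:
            continue                                # skip blank or malformed lines
        rec = {"ref": parts[0],                     # e.g. "mdb001"
               "tissue": parts[1],                  # F, G or D background tissue
               "class": parts[2],                   # CALC, CIRC, SPIC, MISC, ARCH, ASYM, NORM
               "severity": parts[3] if len(parts) > 3 else None}   # B or M, if present
        if len(parts) >= 7:                         # abnormality centre and radius
            try:
                rec["x"], rec["y"], rec["radius"] = (int(parts[4]), int(parts[5]), int(parts[6]))
            except ValueError:
                pass                                # some entries carry notes instead of numbers
        records.append(rec)
print(len(records), "entries parsed")
```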

1.2 Training on a small dataset

Due to the high cost and critical nature of the work performed by radiology specialists, a wealth of clearly labelled data in medical imaging is desirable yet rarely available. Methods such as data augmentation and transfer learning can be used to train a model efficiently on a smaller dataset. Since data augmentation was only covered briefly in the preceding section, the focus here is on transfer learning.
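
As a sketch of the data augmentation mentioned above, the following Keras snippet generates randomly transformed copies of image patches; the particular transformations, their ranges, and the placeholder arrays are illustrative assumptions, not the settings used in this study.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation settings; the transformations and ranges are illustrative.
augmenter = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)

# Placeholder mammogram patches and labels standing in for the real data.
X = np.random.rand(32, 64, 64, 3)
y = np.random.randint(0, 4, size=32)

# Each pass over the generator yields a batch of randomly transformed copies.
batches = augmenter.flow(X, y, batch_size=8)
X_aug, y_aug = next(batches)
print(X_aug.shape)   # (8, 64, 64, 3)
```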

A common and effective method for training a network on a small dataset is transfer learning. In this scenario, a network is first pretrained on a massive dataset, such as ImageNet, which has 1.4 million images in 1000 classes, and is then applied to the task at hand. The fundamental assumption of transfer learning is that generic features learnt on a sufficiently large dataset can be transferred to seemingly dissimilar datasets. This transferability of learnt features is a unique advantage of deep learning that makes it useful in a variety of specialised tasks despite relatively small datasets. AlexNet [20], VGG [21], ResNet [22], Inception [23], and DenseNet [24] are just a few of the many models that have been pretrained on the ImageNet challenge dataset and are publicly and immediately available, together with their learnt weights. The two common ways of using a pretrained network are fixed feature extraction and fine-tuning [25].

The term "fixed element extraction methodology" refers to a method for removing totally associated levels from a network pretrained on ImageNet while still using the remaining system, which consists of a series of convolution and pooling layers (the "convolutional base"), as an extractor. This scenario-io allows for the addition of any AI classifier on top of the fixed component extractor, such as irregular woodlands and bolster vector machines, or the standard fully connected layers in CNNs, resulting in training limited to the additional classifier on the given dataset of interest. Due of the dissimilarity between ImageNet and the provided therapeutic images, this approach is not typically used in deep learning research on medical photographs.

A common way of fine-tuning, especially in radiology research, involves not only replacing the fully connected layers of the pretrained model with a new set and retraining them, but also adjusting all or some of the layers in the pretrained model's convolutional base via backpropagation. It is possible to fine-tune all of the convolutional base's layers, or to keep some of the earlier layers fixed while fine-tuning the remaining deeper layers. Figure 4 depicts the training process.
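
The snippet below is a hedged Keras sketch of this fine-tuning strategy, using VGG16 as the pretrained ImageNet convolutional base (any of the models listed above could be substituted); the choice of which layers to freeze and the size of the new fully connected head are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Pretrained convolutional base (ImageNet weights, classifier head removed).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the earlier layers and fine-tune only the last few layers;
# the split point chosen here is illustrative.
for layer in base.layers[:-4]:
    layer.trainable = False

# New fully connected head for the four-class mammogram task.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```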

Figure 4. Training process structure

As shown in the diagram above, the available data are commonly split into three sets: a training, a validation, and a test set. The training set is used to train the network, where loss values are calculated via forward propagation and learnable parameters are updated via backpropagation. The validation set is used to monitor model performance during training, tune hyperparameters, and perform model selection. The test set is used only once, at the end of the project, to evaluate the performance of the final model that was tuned and selected during training with the training and validation sets.
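
A brief scikit-learn sketch of such a three-way split is given below; the 70/15/15 proportions and the placeholder arrays are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for MIAS image patches and their labels.
X = np.random.rand(322, 64, 64, 3)
y = np.random.randint(0, 4, size=322)

# 70% training, 15% validation, 15% test (proportions are illustrative).
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)
print(len(X_train), len(X_val), len(X_test))
```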

1.3 Classification

In medical image analysis, classification with deep learning normally uses target lesions delineated in medical images, and these lesions are grouped into at least four classes, such as binary classification, multi-class classification, multi-label classification and imbalanced classification. Deep learning is frequently used for this kind of classification. Several classifiers based on CNNs were built, and each classifier was evaluated on its performance relative to ground-truth values derived from biopsy histology results and two-year negative mammogram follow-up confirmed by expert radiologists. Our results indicate that the CNN model we built and improved by means of data augmentation and transfer learning has great potential for automatic breast tumor detection using mammograms.

1.4 Implementation of classification with CNN

The convolutional neural networks were implemented using TensorFlow/Keras. All experiments were performed on a machine with four groups of images of size 1024 × 1024 pixels. The dataset was randomly partitioned into training and testing datasets. The training set was used to train the model; the results of predictions made on the testing set were used to evaluate the performance of the model. The same training-testing ratio was used in all validation experiments. Figure 5 shows the neural network with its layers.
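
The following is a minimal, self-contained TensorFlow/Keras sketch of this train-then-evaluate workflow; the placeholder arrays, the small network, and the epoch and batch settings are illustrative assumptions rather than the configuration used in the study.

```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholder data standing in for the randomly partitioned mammogram patches.
X_train = np.random.rand(256, 64, 64, 3)
y_train = np.random.randint(0, 4, size=256)
X_test = np.random.rand(64, 64, 64, 3)
y_test = np.random.randint(0, 4, size=64)

# Small illustrative CNN; the deeper fine-tuned network sketched earlier would be used instead.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on the training split, then evaluate predictions on the held-out test split.
model.fit(X_train, y_train, validation_split=0.2, epochs=5, batch_size=32)
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"test accuracy: {test_acc:.3f}")
```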

Figure 5. Neural network with many convolutional layers

The classification with CNN is shown in Figure 6.

Figure 6. Classification with CNN

Figure 7. Data flow diagram

Figure 8. Tumor analysis based on Input Image

Figure 9. Tumor analysis based on Input Image

Figure 10. Tumor analysis based on Input Image

The data flow process for the classification system is shown in Figure 7. Figures 8, 9, 10 and 11 represent the tumor analysis based on the input images.

Figure 11. Tumor analysis based on Input Image

2. Existing Method

Segmentation is the process by which the representation of an image is transformed into something more useful and meaningful, making analysis of the image much easier [29]. The diseased region is separated from the rest of the image. After the region of interest (ROI) is applied, that part is further classified. The result of image segmentation is a set of segments that collectively cover the entire image [30], or a set of contours extracted from the image.

2.1 Region of interest (ROI) extraction

The region of interest is extracted by mapping the coordinates of the pixels in the segmented part obtained in the preceding step to those of the original input image. This isolates the affected portion of the original image before the features are extracted. The computation is based on the location of interest. Using pixel mapping between the segmented images, the ROI is extracted from the original image.
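
A NumPy sketch of this pixel-mapping step under simple assumptions is shown below: the binary segmentation mask indexes the original image, and a bounding box around the region of interest is cropped; the array names and sizes are placeholders.

```python
import numpy as np

def extract_roi(original, mask):
    """Map the coordinates of the segmented region back onto the original image.

    original: 2-D grayscale image array
    mask:     binary array of the same shape, 1 inside the segmented region
    """
    ys, xs = np.nonzero(mask)                     # pixel coordinates of the region
    if ys.size == 0:
        return None                               # no region was segmented
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    roi = original[y0:y1 + 1, x0:x1 + 1].copy()   # bounding-box crop
    roi[mask[y0:y1 + 1, x0:x1 + 1] == 0] = 0      # suppress pixels outside the region
    return roi

# Placeholder image and mask for illustration.
image = np.random.randint(0, 256, size=(1024, 1024), dtype=np.uint8)
mask = np.zeros_like(image)
mask[300:360, 400:480] = 1
print(extract_roi(image, mask).shape)             # (60, 80)
```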

Steps for the existing method:

Step 1: Input an image.

Step 2: Select one of the tumor's pixels.

Step 3: Take its four neighbouring pixels as new candidates.

Step 4: Any pixel that falls within the thin/thick parts of the tumour is assigned a brightness according to the specified code.

Step 5: Next, compute the mean of that region until the tumor's boundary is located.

Step 6: Save the pixel's x and y coordinates.

Region mean = I(x, y), where (x, y) is the pixel of maximum intensity.

Step 7: Return to Step 2.

2.2 Disadvantages

1. The computation is costly in terms of both time and power.

2. Intensity variation may result in holes or over-segmentation.

3. This method may fail to handle the shading present in real images.

These problems can be effectively overcome by using a mask to filter out the holes and outliers; with such a mask, the problem is eliminated in the proposed approach.

3. Proposed Method

Our strategy is straightforward: only a few pixels are needed to represent the property of interest. The objective of this method is to separate the tumor region present in the image. The technique identifies the region by detecting changes in gray levels and treats them as boundaries. Similarity-based segmentation is used to merge the pixels within a region. Region-based segmentation is formulated as follows.

(a) Within a diseased area, every pixel must be connected, i.e., $\bigcup_{i=1}^n R_i=R$, where each $R_i$ is a connected region, $i=1,2, \ldots, n$.

(b) $R_i \cap R_j=\emptyset$ for all $i \neq j$, $i, j=1,2, \ldots, n$.

(c) $P\left(R_i \cup R_j\right)=$ FALSE for any adjacent regions $R_i$ and $R_j$.

(d) $P\left(R_i\right)$ is a logical predicate defined over the points in $R_i$.

(e) $\emptyset$ denotes the empty set.

Algorithm:

Step 1: Input image: (x, y) = point of maximum intensity; t = threshold value; region mean = I(x, y).

Step 2: Continue to grow the region until the difference between the region's mean intensity and the new pixel's intensity exceeds the threshold t.

Step 3: Generate the four neighbouring pixels.

Step 4: Add a neighbour if it lies inside the image but is not yet part of the segmented region.

Step 5: Add to the region the pixel whose intensity is closest to the region's mean.

Step 6: Compute the region's new mean.

Step 7: Save the pixel's x and y coordinates.

Step 8: Return to Step 2.

The above algorithm is implemented with the help of MATLAB (R2016). In this project the dataset was taken from MIAS, and the entire dataset is classified with the help of the TensorFlow/Keras platform in machine learning. This classification process is explained above; a sketch of the region-growing step is given below.
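
The following NumPy sketch implements the seeded region growing of Steps 1-8 under stated assumptions (4-connectivity and a fixed intensity threshold t); it is an illustrative Python re-implementation, not the authors' MATLAB code.

```python
import numpy as np
from heapq import heappush, heappop

def region_grow(image, seed, t):
    """Seeded region growing (Steps 1-8): grow from `seed` while the candidate
    pixel's intensity stays within `t` of the current region mean."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    mean = float(image[seed])                     # Step 1: region mean = I(x, y)
    count = 1
    frontier = []                                 # candidates ordered by |I - mean|

    def push_neighbours(y, x):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):        # Step 3: 4 neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:   # Step 4
                heappush(frontier, (abs(float(image[ny, nx]) - mean), ny, nx))

    push_neighbours(*seed)
    while frontier:
        _, y, x = heappop(frontier)               # Step 5: pixel closest to region mean
        if region[y, x]:
            continue
        if abs(float(image[y, x]) - mean) > t:    # Step 2: stop when difference exceeds t
            break
        region[y, x] = True                       # Step 7: record the pixel
        count += 1
        mean += (float(image[y, x]) - mean) / count   # Step 6: update the region mean
        push_neighbours(y, x)                     # Step 8: repeat from Step 2
    return region

# Illustrative use: seed at the brightest pixel, threshold chosen arbitrarily.
img = np.random.randint(0, 256, size=(64, 64)).astype(float)
seed = np.unravel_index(np.argmax(img), img.shape)
mask = region_grow(img, seed, t=20.0)
print(mask.sum(), "pixels in the grown region")
```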

Figure 12. Growing process

In this case, the above strategy is the opposite of the split-and-merge strategy. A group of small regions is iteratively merged on the basis of similarity constraints. We begin by selecting a random seed pixel and comparing it with its neighbouring pixels. The size of the region is increased by adding neighbouring pixels that are similar to the seed pixel. When the growth of one region comes to a halt, a new seed pixel that does not yet belong to any region is selected, and the process begins again. This is repeated until all pixels are assigned to a region. Boundary detection methods often give very good segmentations that correspond well to the observed edges. Figure 12 illustrates the growing process.

4. Results and Discussion

Here, the categorised images serve as the input. A series of resulting images is displayed, combining the existing control ROI and boundary identification models with the proposed one. These breast images were taken from the simulated mammary database and the Government Medical Center in Guntur. The set includes a large number of original mammography images, labelled with standard voxel-spacing structures. We also determine the true tissue label associated with each pixel, so it is straightforward to evaluate the terms involved in the likelihood energy function. The effectiveness of our strategy was assessed on a wide variety of imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and mammography. The proposed method works well for segmenting any of these image types.

Figure 13 shows the MIAS dataset grayscale images converted to binary and the images obtained using our proposed approach.

4.1 Performance evaluation metrics

Objective segmentation assessment measures based on the Jaccard Coefficient (JC) and Dice Coefficient (DC) formulations are used to evaluate the effectiveness of the new segmentation,

where x and y denote the ground-truth tumour region and the segmented tumour region produced by the method:

$\mathrm{JC}=\frac{|x \cap y|}{|x \cup y|}=\frac{a}{a+b+c}, \quad \mathrm{DC}=\frac{2 \cdot \mathrm{JC}(x, y)}{1+\mathrm{JC}(x, y)}$    (4)

Figure 13. Tumor prediction

4.2 Jaccard coefficient

The Jaccard index (or Jaccard similarity coefficient) was one of the first measures of the degree to which two sets of data are alike, and it is a popular similarity index for binary data. Using the formula, we can calculate how much of the segmented image Bj is contained within the reference image Gj. The measure is 1 if the segmented object and the gold-standard image Gj are identical and 0 if they are completely different; the higher the value, the closer the two objects are. We compare the Jaccard index of the proposed approach against those of K-means and Fuzzy C-means. Table 1 and Figure 14 illustrate that the proposed approach is the best option.
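
A short NumPy sketch of Eq. (4), computing JC and DC for two binary masks (ground truth x and segmented result y), is given below; the masks are placeholders.

```python
import numpy as np

def jaccard(x, y):
    """Eq. (4): JC = |x ∩ y| / |x ∪ y| for binary masks x and y."""
    intersection = np.logical_and(x, y).sum()
    union = np.logical_or(x, y).sum()
    return intersection / union if union else 1.0

def dice(x, y):
    """Eq. (4): DC = 2 * JC / (1 + JC)."""
    jc = jaccard(x, y)
    return 2.0 * jc / (1.0 + jc)

# Placeholder ground-truth and segmented masks for illustration.
x = np.zeros((64, 64), bool); x[20:40, 20:40] = True
y = np.zeros((64, 64), bool); y[25:45, 25:45] = True
print(f"JC = {jaccard(x, y):.4f}, DC = {dice(x, y):.4f}")
```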

Table 1. Tumor Jaccard coefficients

Category    Image     Jaccard coefficient
                      K-means     RG          Proposed
Tumours     img-1     0.6745      0.7961      0.8059
            img-2     0.651       0.7686      0.7957
            img-3     0.7098      0.7961      0.651
            img-4     0.8         0.6784      0.8249
            img-5     nil         0.5843      0.8213

Figure 14. Tumor category accuracy levels

Table 2. Benign Jaccard coefficients

Category    Image     Jaccard coefficient
                      K-means     RG          Proposed
Benign      img-1     0.498       0.5451      0.6667
            img-2     0.6         0.6157      0.7412
            img-3     0.7098      0.698       0.702
            img-4     0.5412      0.5529      0.6745
            img-5     0.6275      0.5608      0.6314

Figure 15. Benign classification levels

Table 3. Malignant Jaccard coefficient levels

Category    Image     Jaccard coefficient
                      K-means     RG          Proposed
Malignant   img-1     0.6784      0.6627      0.6784
            img-2     0.6314      0.6588      0.6783
            img-3     0.6157      0.3961      0.6863
            img-4     0.6         0.5765      0.651
            img-5     0.6863      0.6314      0.6863

Figure 16. Malignant classification accuracy levels

Table 2 and Figure 15 indicate the benign Jaccard coefficients and the benign classification levels of the proposed and traditional models.

Table 3 and Figure 16 indicate the malignant Jaccard coefficient levels and the malignant classification accuracy levels of the proposed and existing models.

Table 4. Tumor-less Jaccard coefficient levels

Category      Image     Jaccard coefficient
                        K-means     RG          Proposed
Tumour-less   img-1     0.3843      error       0.4941
              img-2     0.4078      error       0.5255
              img-3     0.4118      0.5373      0.6588
              img-4     0.3176      0.2863      0.6314
              img-5     0.5294      0.5843      0.7725

Figure 17. Tumor less classification accuracy levels

Table 4 and Figure 17 represent the tumor-less Jaccard coefficient levels and the tumor-less classification accuracy levels of the proposed and existing models.

4.3 Dice coefficient

To avoid this limitation, an alternative to the Jaccard index, the Dice coefficient, may also be used; its formulation is given in Eq. (4). The equation combines the two indicators to evaluate effectiveness, measuring how close or different the segmentation result is from the reference image. Table 5 and Figure 18 indicate the tumor Dice coefficients and the tumor Dice classification accuracy levels of the proposed and existing models.

Table 6 and Figure 19 indicate the benign Dice coefficients and the benign Dice classification accuracy levels of the existing and proposed models.

Table 5. Tumor Dice coefficients

Category    Image     Dice coefficient
                      K-means     RG          Proposed
Tumours     img-1     1.349       1.5922      1.6118
            img-2     1.302       1.5372      1.5914
            img-3     1.4196      1.5922      1.302
            img-4     1.6         1.3568      1.6498
            img-5     1.08        1.1686      1.6426

Figure 18. Tumor dice classification accuracy levels

Table 6. Benign Dice coefficients

Category    Image     Dice coefficient
                      K-means     RG          Proposed
Benign      img-1     0.996       1.0902      1.3334
            img-2     1.2         1.2314      1.4824
            img-3     1.4196      1.396       1.404
            img-4     1.0824      1.1058      1.349
            img-5     1.255       1.1216      1.2628

Figure 19. Dice benign classification accuracy levels

Table 7. Malignant Dice coefficients

Category    Image     Dice coefficient
                      K-means     RG          Proposed
Malignant   img-1     1.3568      1.3254      1.3568
            img-2     1.2628      1.3176      1.3566
            img-3     1.2314      0.7922      1.3726
            img-4     1.2         1.153       1.302
            img-5     1.3726      1.2628      1.3726

Figure 20. Malignant classification accuracy levels

Table 8. Tumor-less Dice coefficients

Category      Image     Dice coefficient
                        K-means     RG          Proposed
Tumour-less   img-1     0.7686      error       0.9882
              img-2     0.8156      error       1.051
              img-3     0.8236      1.0746      1.3176
              img-4     0.6352      0.5726      1.2628
              img-5     1.0588      1.1686      1.545

Figure 21. Tumor less dice classification accuracy levels

Table 7 and Figure 20 represent the malignant Dice coefficients and the malignant classification accuracy levels of the proposed and existing models.

Table 8 and Figure 21 represent the tumor-less Dice coefficients and the tumor-less Dice classification accuracy levels of the proposed and existing models.

5. Conclusions

The primary preliminary results of this paper suggest that the extracted properties such as area, edge, measurement, thinness, and minimum point extent play a significant role in the recognition of microcalcification. The paper examines whether a limited number of shape parameters is enough to detect microcalcification. An accuracy of 92.5% was achieved thanks to the unusually clear extents of the shape measures Ta and Tb. In this study, a method for identifying the microcalcification region using manual data is developed. With this forward-looking approach, an ANN will be able to support an automated framework for recognising microcalcification, which is an important diagnostic step.

Acknowledgment

The authors are very thankful to the management of VFSTR Deemed to be University, for providing the necessary facilities to carry out this research work.

  References

[1] Zhou, L.Q., Wu, X.L., Huang, S.Y., Wu, G.G., Ye, H.R., Wei, Q., Bao, L.Y., Deng, Y.B., Li, X.R., Cui, X.W., Dietrich, C.F. (2020). Lymph node metastasis prediction from primary breast cancer US images using deep learning. Radiology, 294(1): 19-28. https://doi.org/10.1148/radiol.2019190372

[2] Tan, W., Tiwari, P., Pandey, H.M., Moreira, C., Jaiswal, A.K. (2020). Multimodal medical image fusion algorithm in the era of big data. Neural Computing and Applications, 1-21. https://doi.org/10.1007/s00521-020-05173-2

[3] Singh, V.K., Rashwan, H.A., Romani, S., Akram, F., Pandey, N., Sarker, M.M.K., Sarker, M.M.K., Saleh, A., Arenas, M., Arquez, M., Puig, D., Torrents-Barrena, J. (2020). Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Systems with Applications, 139: 112855. https://doi.org/10.1016/j.eswa.2019.112855

[4] Reddy, A.V., Krishna, C., Mallick, P.K., Satapathy, S.K., Tiwari, P., Zymbler, M., Kumar, S. (2020). Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks. Journal of Big Data, 7(1): 1-17. https://doi.org/10.1186/s40537-020-00311-y

[5] Qian, J., Tiwari, P., Gochhayat, S.P., Pandey, H.M. (2020). A noble double-dictionary-based ECG compression technique for IoTH. IEEE Internet of Things Journal, 7(10): 10160-10170. https://doi.org/10.1109/JIOT.2020.2974678

[6] Mondal, M.R.H., Bharati, S., Podder, P., Podder, P. (2020). Data analytics for novel coronavirus disease. Informatics in Medicine Unlocked, 20: 100374. https://doi.org/10.1016/j.imu.2020.100374

[7] Vreemann, S., Gubern-Mérida, A., Schlooz-Vries, M.S., Bult, P., van Gils, C.H., Hoogerbrugge, N., Karssemeijer, N., Mann, R.M. (2018). Influence of risk category and screening round on the performance of an MR imaging and mammography screening program in carriers of the BRCA mutation and other women at increased risk. Radiology, 286(2): 443-451. https://doi.org/10.1148/radiol.2017170458

[8] Kuhl, C.K., Strobel, K., Bieling, H., Leutner, C., Schild, H.H., Schrading, S. (2017). Supplemental breast MR imaging screening of women with average risk of breast cancer. Radiology, 283(2): 361-370. https://doi.org/10.1148/radiol.2016161444

[9] Houssami, N. (2017). Overdiagnosis of breast cancer in population screening: Does it make breast screening worthless? Cancer Biol Med., 14(1): 1-8. https://doi.org/10.20892/j.issn.2095-3941.2016.0050

[10] Eelbode, T., Bertels, J., Berman, M., Vandermeulen, D., Maes, F., Bisschops, R., Blaschko, M.B. (2020). Optimization for medical image segmentation: Theory and practice when evaluating with dice score or Jaccard index. IEEE Transactions on Medical Imaging, 39(11): 3679-3690. https://doi.org/10.1109/TMI.2020.3002417

[11] Schettino, C.J., Kramer, E.L., Noz, M.E., Taneja, S., Padmanabhan, P., Lepor, H. (2004). Impact of fusion of indium-111 capromab pendetide volume data sets with those from MRI or CT in patients with recurrent prostate cancer. American Journal of Roentgenology, 183(2): 519-524. https://doi.org/10.2214/ajr.183.2.1830519

[12] Behrenbruch, C., Marias, K., Armitage, P., Moore, N., Clarke, J., Brady, J.M. (2001). Prone-supine breast MRI registration for surgical visualisation. In Medical Image Understanding and Analysis. University of Birmingham. http://www.cs.bham.ac.uk/research/proceedings/miua2001/.

[13] Pfluger, T., Vollmar, C., Wismüller, A., Dresel, S., Berger, F., Suntheim, P., Leinsinger, G., Hahn, K. (2000). Quantitative comparison of automatic and interactive methods for MRI–SPECT image registration of the brain based on 3-dimensional calculation of error. Journal of Nuclear Medicine, 41(11): 1823-1829.

[14] Maguire, G.Q., Jaeger, J., Farde, L., Noz, M.E. (1987). Use of graphical techniques for error evaluation. Journal of Medical Systems, 11(4): 277-286. https://doi.org/10.1007/BF00994013

[15] Noz, M.E., Maguire, G.Q., Zeleznik, M.P., Kramer, E.L., Mahmoud, F., Crafoord, J. (2001). A versatile functional–anatomic image fusion method for volume data sets. Journal of Medical Systems, 25(5): 297-307. https://doi.org/10.1023/A:1010633123512

[16] DeWyngaert, J., Noz, M., Ellerin, B., Murphy-Walcott, A., Kramer, E. (2002). Procedure for unmasking localization information from ProstaScint scans for prostate radiation therapy treatment planning. International Journal of Radiation Oncology, Biology, Physics, 54(2): 32.

[17] Maguire Jr, G.Q., Noz, M.E., Rusinek, H., Jaeger, J., Kramer, E.L., Sanger, J.J., Smith, G. (1991). Graphics applied to medical image registration. IEEE Computer Graphics and Applications, 11(2): 20-28. https://doi.ieeecomputersociety.org/10.1109/38.75587

[18] Dinsha, D. (2014). Breast tumor Segmentation and classification using SVM and Bayesian from thremogram images. Unique Journal of Engineering and Advanced Sciences, 2: 147-151.

[19] Khokher, M.R., Ghafoor, A., Siddiqui, A.M. (2013). Image segmentation using multilevel graph cuts and graph development using fuzzy rule‐based system. IET Image Processing, 7(3): 201-211. https://doi.org/10.1049/iet-ipr.2012.0082

[20] Shanthi, S., Bhaskaran, V.M. (2014). Modified artificial bee colony based feature selection: A new method in the application of mammogram image classification. International Journal of Science, Engineering and Technology Research, 3(6): 1664-1667.

[21] Ibrahim, S.I., Abuchaiba, Elfarra, B.K. (2013). New feature extraction method for mammogram CAD diagnosis. International Journal of Signal Processing, 6(1).

[22] Tripathi, S., Kumar, K., Singh, B.K., Singh, R.P. (2012). Image segmentation: A review. International Journal of Computer Science and Management Research, 1(4): 838-843.

[23] Monticciolo, D.L., Newell, M.S., Moy, L., Niell, B., Monsees, B., Sickles, E.A. (2018). Breast cancer screening in women at higher-than-average risk: Recommendations from the ACR. J Am Coll Radiol., 15(3 Pt A): 408-414. https://doi.org/10.1016/j.jacr.2017.11.034

[24] Basheer, N.M., Mohammed, M.H. (2013). Segmentation of breast masses in digital mammograms using adaptive median filtering and texture analysis. Int. J. Recent Technol. Eng.(IJRTE), 2(1): 2277-3878.

[25] Singh, I., Kumar, D. (2014). A review on different image segmentation techniques. Indian Journal of Applied Research, 4(4): 1-3. https://doi.org/10.15373/2249555X/APR2014/200