Diagnosis of Melanoma Lesion Using Neutrosophic and Deep Learning

Shubhendu Banerjee, Sumit Kumar Singh, Avishek Chakraborty, Sharmistha Basu, Atanu Das, Rajib Bag

Department of CSE, Narula Institute of Technology, Kolkata 700109, India

Department of CSE, University of Essex, Colchester CO43SQ, UK

Department of Basic Science and Humanities, Narula Institute of Technology, Kolkata 700109, India

Department of MCA, Netaji Subhash Engineering College, Kolkata 700152, India

Indas Mahavidyalaya, Indas, Bankura 722205, India

Corresponding Author Email: avishek.chakraborty@nit.ac.in
Page: 1327-1338 | DOI: https://doi.org/10.18280/ts.380507

Received: 29 August 2021 | Revised: 3 October 2021 | Accepted: 12 October 2021 | Available online: 31 October 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Melanoma is a kind of skin cancer that occurs when melanocyte cells, damaged by excessive exposure to harmful UV radiation, multiply uncontrollably. Popularly known as malignant melanoma, it is comparatively less common than certain other types of skin cancer; however, it can be more detrimental, as it spreads swiftly if not detected and treated at an early stage. The differentiation between benign and melanocytic lesions can sometimes be confusing, but the symptoms of the disease can reasonably be discriminated by a profound investigation of its histopathological and clinical characteristics. In the recent past, Deep Convolutional Neural Networks (DCNNs) have advanced to accomplish far better results. The need of the present day is for faster and more computationally efficient mechanisms for diagnosing this deadly disease. This paper showcases a deep learning-based ‘Keras’ algorithm, built on DCNNs, to investigate melanoma in dermoscopic and digital pictures and provide swifter and more accurate results than standard CNNs. The main highlight of this paper lies in its incorporation of certain ambitious notions, such as segmentation performed by combining a moving straight line with a sequence of points, and the application of the triangular neutrosophic number to uncertain parameters. The experiment was done on a total of 40,676 images obtained from four publicly available datasets: International Symposium on Biomedical Imaging (ISBI) 2017, International Skin Imaging Collaboration (ISIC) 2018, ISIC 2019 and ISIC 2020, and the end results were indeed motivating. The method attained a Jaccard index of 86.81% on the ISIC 2020 dataset and 95.98%, 95.66% and 94.42% on the ISBI 2017, ISIC 2018 and ISIC 2019 datasets, respectively.
The present research yielded phenomenal output in most instances when compared, on the pre-defined parameters, with similar works in this field.

Keywords: 

skin cancer, melanoma, skin lesion segmentation, Keras, deep learning, neutrosophic

1. Introduction

The past two decades have witnessed several research activities on the treatment of different forms of cancer; however, many areas still remain unexplored. The skin, being a crucial organ of the human body, is vulnerable to various diseases, especially skin cancer. The annual statistics published by the American Cancer Society in 2019 report about 96,480 new cases of melanoma, of which 7,230 were fatal. The number of patients suffering from skin cancer is increasing at a rapid rate, and it accounts for one-third of the total cancer cases diagnosed worldwide, as per the report of the World Health Organization (WHO) [1]. The skin consists of various layers, the epidermis, dermis and hypodermis, each layer with a set of different cells with their defined characteristics and functions. The epidermal and dermal cells are designed to protect the layers from harsh exposures; however, increasing levels of growth hormone might result in DNA damage and eventually lead to several skin diseases [2]. The abnormal surge in skin cancer over time is an effect of the depletion of the ozone layer, resulting in greater exposure of the human body to UV-A and UV-B radiation [3]. Besides radiation acting as one of its causes, an increased count of dysplastic nevi and benign melanocytic nevi, the hair colour of individuals, and a family history of carcinogenic genes are the other contributing factors for skin cancer. Of the three frequently diagnosed skin cancers, Squamous Cell Carcinoma (SCC) and Basal Cell Carcinoma (BCC) fall under the non-melanocytic category [4, 5]. These account for most skin cancer cases, whereas Malignant Melanoma (MM), caused by the uncontrolled multiplication of melanocyte cells, is the deadliest form, responsible for a high share of skin cancer deaths, about 75%.
Melanocytes, the cells responsible for producing melanin within the body, are found in the eyes, skin, mucosal epithelia and meninges, and determine the colour of these tissues in every individual [6]. These cells are mainly accountable for pigmentation and photo-protection from UV radiation by producing two types of pigment: brown/black eumelanin and red pheomelanin. There is no precise evidence of the photo-protective feature of the melanocytes, but the malignant transformation of these cells gives rise to melanoma. As eumelanin happens to be the darker pigment, it acts more as a UV-protective layer in comparison to pheomelanin, which is a lighter pigment [7, 8]. Melanoma originates on the skin with an appearance similar to that of a mole or pigmentation. A pre-developed mole can change its feel, colour, shape and size over a certain period to give rise to melanoma; even a newly formed, oddly looking mole can also initiate melanoma. It is, however, difficult to differentiate between a melanoma and a non-melanoma. The progress of melanoma is believed to originate in the epidermis of the skin tissue, when the radial growth phase takes place, but the prognosis of the cancer becomes difficult after it spreads deep into the dermis and further metastasizes from there to other body parts, which even include the cerebrum and bones. However, if detected accurately at an early stage and treated accordingly, the rate of survival of a malignant patient increases to 95%, though the possibility of detection at a later stage causing death cannot be ruled out [9, 10].

Motivated by mathematical modeling and uncertainty theory, we incorporate the idea of coordinate geometry to evaluate the image segmentation portion. Here, we developed an algorithm based on a straight line and consider a series of points after a finite number of iterations, which gives us the approximated segmented area of the image. But we want to capture the exact location of the segmented image. Thus, we have applied the concept of the neutrosophic set here to capture the exact threshold value of the segmented portion. Normally, a neutrosophic number can capture the truth, falsity and indeterminacy components of an uncertain parameter very prominently. So, we utilized the triangular neutrosophic number [11] and its de-neutrosophication value to fix the exact threshold value of the uncertain parameter. Further, we have applied Manhattan distance measure computation for the modification of the segmented portions. Our primary aim in this paper was detecting melanomatic lesions, which could be achieved to a great extent by the integration of the above-mentioned features. A wide range of techniques and algorithms exploring skin lesion segmentation was introduced in ISIC 2018. In our previous paper, along with conventional techniques, a fuzzy set theory-based method was used to obtain improved and precise results [12]. For efficient and trustworthy segmentation of dermoscopic images, intelligence-based segmentation techniques such as deep learning algorithms, Artificial Neural Networks (ANNs), genetic algorithms and fuzzy logic have been employed. In the present study, the prediction of the lesion, along with model training and validation, was performed by the Keras classifier, owing to its integrated approach towards deep neural networks. The Keras library is preferred over the rest as it provides faster results with improved extensibility and minimal tactics.
Using this classifier with a TensorFlow backend, model training and validation were conducted on test images of melanomatic and non-melanomatic lesions. We chose this classifier for our work as we noted that it extensively speeds up classification and thus further minimises the error in comparison to other CNNs. We have divided our entire work primarily into six sections: literature review, mathematical preliminaries, proposed methodology, result analysis, discussion with the established articles, and conclusion. These sections are further subdivided into subsections to focus on and elaborate the specific areas.

2. Literature Review

During the twentieth century, malignant lesions were detected exclusively by the naked eye on the basis of the size and bleeding characteristics of a mole. For further analysis, the unusual lesions were sent for biopsy and clinically correlated. Traditionally, manual screening and visual inspection were carried out to detect skin cancer. These methods of screening and visual inspection of dermoscopic skin images by dermatologists were complicated, subjective, time consuming and not accurate [13]. Over the years, newer technologies developed which led to more reliable detection. The concept of deep learning, also known as image-based machine learning, has emerged as a highly effective approach in medical imaging. Deep learning methods have also attained futuristic heights for the analysis of skin lesions in recent times, such that digital photographs are used for the diagnosis of skin diseases [14]. A popular deep learning method, namely Deep Convolutional Neural Networks (DCNNs), possesses the ability to process common and highly variable tasks in fine-grained objects, which is an important characteristic of skin lesion images [15, 16].

Deep learning, being a perpetual task in the domain of computer vision, has achieved significantly good results for the segmentation and classification of dermoscopic images. Various deep learning models have been introduced by researchers across the globe. Some of the popular deep learning models are U-Net, Deep Fully Convolutional Residual Network (FRCN), Fully Convolutional Network (FCN) and Convolutional De-Convolutional Neural Network (CDCNN). Bi et al. [17] proposed a parallel integration technique for the segmentation of dermoscopic images, which was used to intensify the edges of suspected lesions by developing a multi-staged FCN. This algorithm scores 91.18% on the Dice coefficient and 95.51% on accuracy when evaluated on the International Symposium on Biomedical Imaging 2016 (ISBI 16) dataset. Al-masni et al. [18] proposed an FRCN model for the segmentation of dermoscopic images; it is an extension of the FCN architecture. The U-Net architecture was introduced for medical research and is extensively employed by various researchers for the segmentation and classification of skin lesions. Berseth [19] made a probability estimate for each pixel by employing the U-Net model for image segmentation. Vesal et al. [20] modified the U-Net architecture to develop a novel model called SkinNet, where dilated convolutions are introduced into the convolutional sublayers, thereby forming a modified version of the U-Net architecture. Notwithstanding this, a weak decoding module for recovering feature maps, along with the weak minimal path connections of the model (which lose important details of the dermoscopic image), does not make U-Net an optimal deep learning model for the segmentation and classification of skin lesions. Therefore, Keras, which is highly regarded in the biomedical field, is employed in the proposed article.

After the invention of the neutrosophic set theory concept [21] in 1998, it has been widely used in various sectors of engineering, science and technical fields. Further, Wang et al. [22] manifested the design of the single-valued neutrosophic set, which plays an essential role in NS theory. Recently, Chakraborty et al. [23-25] germinated the perception of triangular, trapezoidal and pentagonal neutrosophic numbers respectively, along with their categorization according to the dependency of the membership functions, and applied them in various kinds of decision-making problems. Also, Karaaslan and Hayat [26] focused on the single-valued neutrosophic matrix concept and utilized it in the decision-making domain; Abdel-Basset et al. [27] proposed type-2 NS; Deli et al. [28] propounded the thought of bipolar NS; Ali and Smarandache [29] focused on the complex neutrosophic set; Nabeeh et al. [30] categorized neutrosophic AHP skill; Mullai and Broumi [31] developed an NS-based minimum spanning tree problem; and Pal and Chakraborty [32] created an EOQ model using NS.

Uncertainty theory has become a burning topic nowadays for grasping the vagueness and ambiguity of an imprecise model in any sectional area. Several image segmentation works based on neutrosophic theory [33-36] have already been published in various journals and books. Also, Sengur and Guo [37] proposed neutrosophic-based colour texture image segmentation, Sengur et al. [38] presented a survey on neutrosophic-based medical image segmentation, Guo et al. [39] introduced neutrosophic-based clustering, Dhingra et al. [40] described leaf disease classification based on neutrosophic theory, and Ali et al. [41] focused on the segmentation of X-ray images based on neutrosophic theory. Recently, researchers have developed several useful theories and algorithms in neutrosophic fields [42-44] which are very much essential in image segmentation problems. Also, a few researchers have worked on the structural portion of the neutrosophic set, incorporated the idea of triangular, trapezoidal and pentagonal neutrosophic sets, and made a classification based on the dependency and independency of the membership functions [45]. These developments are crucial, effective and useful for future work on neutrosophic theory and its applications in computing fields.

3. Mathematical Preliminaries

3.1 Neutrosophic set [21]

A set $\widetilde{A}_{Neu}$ in the universal discourse $X$, generically denoted by $x$, is termed a neutrosophic set if $\widetilde{A}_{Neu}=\left\{\left\langle x ;\left[\omega_{\widetilde{A}_{Neu}}(x), \sigma_{\widetilde{A}_{Neu}}(x), £_{\widetilde{A}_{Neu}}(x)\right]\right\rangle : x \in X\right\}$, where $\omega_{\widetilde{A}_{Neu}}(x): X \rightarrow\, ]0^{-}, 1^{+}[$ is termed the truth membership function, which represents the index of truthness; $\sigma_{\widetilde{A}_{Neu}}(x): X \rightarrow\, ]0^{-}, 1^{+}[$ is called the hesitant membership function, which depicts the degree of uncertainty; and $£_{\widetilde{A}_{Neu}}(x): X \rightarrow\, ]0^{-}, 1^{+}[$ is called the inaccuracy membership function, which depicts the falsity in the decision-making procedure.

Here, $\omega_{\widetilde{A}_{Neu}}(x)$, $\sigma_{\widetilde{A}_{Neu}}(x)$ and $£_{\widetilde{A}_{Neu}}(x)$ satisfy the following relation:

$0^{-} \leq \operatorname{Sup}\left\{\omega_{\widetilde{A}_{Neu}}(x)\right\}+\operatorname{Sup}\left\{\sigma_{\widetilde{A}_{Neu}}(x)\right\}+\operatorname{Sup}\left\{£_{\widetilde{A}_{Neu}}(x)\right\} \leq 3^{+}$.    (1)

3.2 Triangular Neutrosophic Number (TNN) [23]

A triangular neutrosophic number is defined as: $\widetilde{\mathrm{A}}_{\mathrm{Ne}}=\left(\mathrm{a}_{1}, \mathrm{a}_{2}, \mathrm{a}_{3} ; \mathrm{b}_{1}, \mathrm{~b}_{2}, \mathrm{~b}_{3} ; \mathrm{c}_{1}, \mathrm{c}_{2}, \mathrm{c}_{3}\right)$, whose truth, indeterminacy and falsity membership functions are defined as follows:

$\mathrm{T}_{\widetilde{A}_{\mathrm{Ne}}}(\mathrm{x})= \begin{cases}\frac{\mathrm{x}-\mathrm{a}_{1}}{\mathrm{a}_{2}-\mathrm{a}_{1}} & \text { when } \mathrm{a}_{1} \leq \mathrm{x}<\mathrm{a}_{2} \\ {1} & \text { when } \mathrm{x}=\mathrm{a}_{2} \\ \frac{\mathrm{a}_{3}-\mathrm{x}}{\mathrm{a}_{3}-\mathrm{a}_{2}} & \text { when } \mathrm{a}_{2}<\mathrm{x} \leq \mathrm{a}_{3} \\ 0 & \text { otherwise }\end{cases}$    (2)

$I_{{\widetilde{\mathrm{A}}_{\mathrm{Ne}}}}(\mathrm{x})= \begin{cases}\frac{\mathrm{b}_{2}-\mathrm{x}}{\mathrm{b}_{2}-\mathrm{b}_{1}} & \text { when } \mathrm{b}_{1} \leq \mathrm{x}<\mathrm{b}_{2} \\ 0 & \text { when } \mathrm{x}=\mathrm{b}_{2} \\ \frac{\mathrm{x}-\mathrm{b}_{2}}{\mathrm{~b}_{3}-\mathrm{b}_{2}} & \text { when } \mathrm{b}_{2}<\mathrm{x} \leq \mathrm{b}_{3} \\ {1} & \text { otherwise }\end{cases}$    (3)

$F_{\widetilde{A}_{\mathrm{Ne}}}(\mathrm{x})= \begin{cases}\frac{\mathrm{c}_{2}-\mathrm{x}}{\mathrm{c}_{2}-\mathrm{c}_{1}} & \text { when } \mathrm{c}_{1} \leq \mathrm{x}<\mathrm{c}_{2} \\ 0 & \text { when } \mathrm{x}=\mathrm{c}_{2} \\ \frac{\mathrm{x}-\mathrm{c}_{2}}{\mathrm{c}_{3}-\mathrm{c}_{2}} & \text { when } \mathrm{c}_{2}<\mathrm{x} \leq \mathrm{c}_{3} \\ {1} & \text { otherwise }\end{cases}$    (4)

where, $0^{-} \leq \mathrm{T}_{\widetilde{A}_{\mathrm{Ne}}}(\mathrm{x})+\mathrm{I}_{\widetilde{A}_{\mathrm{Ne}}}(\mathrm{x})+\mathrm{F}_{\widetilde{\mathrm{A}}_{\mathrm{Ne}}}(\mathrm{x}) \leq 3^{+}$, $x \in \widetilde{A}_{\mathrm{Ne}}$.
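The piecewise definitions of Eqs. (2)-(4) translate directly into code; a minimal illustrative sketch (the function names are ours, the parameter names follow the text):

```python
# Triangular neutrosophic number (TNN) membership functions, Eqs. (2)-(4).

def truth(x, a1, a2, a3):
    """Truth membership T(x) of the TNN with truth triplet (a1, a2, a3)."""
    if a1 <= x < a2:
        return (x - a1) / (a2 - a1)
    if x == a2:
        return 1.0
    if a2 < x <= a3:
        return (a3 - x) / (a3 - a2)
    return 0.0

def indeterminacy(x, b1, b2, b3):
    """Indeterminacy membership I(x) with triplet (b1, b2, b3)."""
    if b1 <= x < b2:
        return (b2 - x) / (b2 - b1)
    if x == b2:
        return 0.0
    if b2 < x <= b3:
        return (x - b2) / (b3 - b2)
    return 1.0

def falsity(x, c1, c2, c3):
    """Falsity membership F(x) with triplet (c1, c2, c3)."""
    if c1 <= x < c2:
        return (c2 - x) / (c2 - c1)
    if x == c2:
        return 0.0
    if c2 < x <= c3:
        return (x - c2) / (c3 - c2)
    return 1.0
```

For the symmetric triplet (1, 2, 3), truth peaks at 1 when x = 2, while indeterminacy and falsity both reach their minimum 0 there, consistent with the bound of Eq. (1).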

3.3 Generalised Single Valued Triangular neutrosophic number (GSVTNN) [11]

A generalised single valued triangular neutrosophic number (GSVTNN) is defined as: $\tilde{A}=\left(a_{1}, a_{2}, a_{3} ; \theta, \varphi, \pi\right)$, whose truth, indeterminacy and falsity membership functions are defined as follows:

$T(x)=\left\{\begin{array}{cc}\theta \frac{x-a_{1}}{a_{2}-a_{1}} & \text { when } a_{1} \leq x<a_{2} \\ \theta & \text { when } x=a_{2}\\ \theta{\frac{a_{3}-x}{a_{3}-a_{2}}} & \text { when } a_{2}<x \leq a_{3} \\ 0  & \text { otherwise }\end{array}\right.$    (5)

$I(x)=\left\{\begin{array}{cc}\frac{a_{2}-x+\varphi\left(x-a_{1}\right)}{a_{2}-a_{1}} & \text { when } a_{1} \leq x<a_{2} \\ \varphi & \text { when } x=a_{2} \\ \frac{x-a_{2}+\varphi\left(a_{3}-x\right)}{a_{3}-a_{2}} & \text { when } a_{2}<x \leq a_{3} \\ 1 & \text { otherwise }\end{array}\right.$    (6)

$F(x)=\left\{\begin{array}{cc}\frac{a_{2}-x+\pi\left(x-a_{1}\right)}{a_{2}-a_{1}} & \text { when } a_{1} \leq x<a_{2} \\ \pi & \text { when } x=a_{2} \\ \frac{x-a_{2}+\pi\left(a_{3}-x\right)}{a_{3}-a_{2}} & \text { when } a_{2}<x \leq a_{3} \\ 1 & \text { otherwise }\end{array}\right.$    (7)

where, $T(x), I(x), F(x) \in[0,1]$.

4. Proposed Methodology

4.1 Training Keras with ISBI 2017, ISIC 2018, ISIC 2019 and ISIC 2020 Dataset

In recent times, medical imaging has witnessed a radical change in the area of melanoma lesion detection. Moreover, training the proposed algorithm with relevant lesion images is a strenuous task. Our proposed method is trained and tested on publicly accessible datasets: ISBI 2017 and ISIC 2018, 2019 and 2020. A total of 40,676 dermoscopic images are used to train and test the algorithm, thereby enhancing the accuracy of the proposed module. The datasets vary in testing and training images, and the testing data alone, for both melanomatic and non-melanomatic images, sum to a total of 7,030 images. The ISBI 2017 dataset was generated by ISBI in the year 2017; it contains 2000 images for training the classifier and an additional 600 and 150 dermoscopic images for testing and validation, respectively. This dataset contains three different lesion categories, namely Benign Nevus (BN), Seborrhoeic Keratosis (SK) and Malignant Melanoma (MM). The images are 8-bit RGB images with resolutions ranging from 1022 × 767 px to 6748 × 4499 px. The distribution of dermoscopic images for each category in the ISBI 2017 dataset is shown in Table 1.

Table 1. Detailed description of ISIC 2017 dataset

| Dataset   | Category   | MM  | SK  | BN   | Total |
|-----------|------------|-----|-----|------|-------|
| ISBI 2017 | Training   | 374 | 254 | 1372 | 2000  |
| ISBI 2017 | Testing    | 117 | 90  | 393  | 600   |
| ISBI 2017 | Validation | 30  | 42  | 78   | 150   |
| ISBI 2017 | Total      | 521 | 386 | 1843 | 2750  |

The ISIC 2018 dataset was originally sourced from HAM10000 and contains 10015 images for training the classifier. It is constituted of seven different categories, namely Melanoma, Melanocytic Nevus (MN), Basal Cell Carcinoma (BCC), Actinic Keratosis (AK), Benign Keratosis (BK), Dermatofibroma and Vascular Lesion, which are encapsulated in Table 2.

Table 2. Tabulated representation of different categories of ISIC 2018 dataset

| Class           | No. of Images |
|-----------------|---------------|
| Melanoma        | 1113          |
| MN              | 6705          |
| BCC             | 514           |
| AK              | 327           |
| BK              | 1099          |
| Dermatofibroma  | 115           |
| Vascular lesion | 142           |
| Total           | 10015         |

The MSK, BCN20000 and HAM10000 datasets were accumulated to create the ISIC 2019 dataset, which has 25,331 RGB images along with their ground truths drawn by expert dermatologists. Additionally, it also contains metadata storing personal data such as sex, age and the anatomic site of the suspected lesion. It is constituted of eight different classes, namely Malignant Melanoma (MM), Melanocytic Nevus (NV), Basal Cell Carcinoma (BCC), Actinic Keratosis (AK), Benign Keratosis (BKL), Dermatofibroma (DF), Vascular Lesion (VL) and Squamous Cell Carcinoma (SCC). Table 3 presents a detailed description of the ISIC 2019 dataset.

Table 3. Detailed description of ISIC 2019 dataset

| MM   | NV    | AK  | DF  | VL  | SCC | BCC  | BKL  | Total |
|------|-------|-----|-----|-----|-----|------|------|-------|
| 4522 | 12875 | 867 | 239 | 253 | 628 | 3323 | 2624 | 25331 |

The ISIC 2020 dataset is the most recently published dataset, with 33,126 dermoscopic images for training and 10,982 test images. The dataset was developed by ISIC by accumulating skin lesion images of over 2,000 patients from the Hospital Clínic de Barcelona, University of Athens Medical School, University of Queensland, Medical University of Vienna, Melanoma Institute Australia and Memorial Sloan Kettering Cancer Centre. Along with the skin lesion images and respective ground truths, the dataset also contains metadata for each lesion storing patient details such as age, sex and site of the lesion. Ground truth images are drawn by expert histopathologists by mutual agreement among the panel, which increases the legitimacy of the result. Table 4 illustrates the division of dermoscopic images in the ISIC 2020 dataset. In this proposed work we use ISIC 2020 only for testing purposes; for that reason we take 584 melanoma and 1996 non-melanomatic images, as described in Table 5.

Table 4. Detailed illustration of ISIC 2020 dataset

| Categories | Malignant | Benign | Total |
|------------|-----------|--------|-------|
| Training   | 584       | 32542  | 33126 |

Table 5. ISIC 2020 dataset used in the proposed work

| Categories | Malignant | Non-Malignant | Total |
|------------|-----------|---------------|-------|
| Testing    | 584       | 1996          | 2580  |

Keras is a deep learning library that runs on backends such as Theano and TensorFlow. It is extensively used in medical vision for the diagnosis of suspected lesions of the human anatomy. It uses a sequential model of fully connected neural networks, max pooling layers, activation layers and dropout layers [46]. Figure 1 shows a pictorial representation of the Keras classifier. The network consists of two blocks of convolutional detectors, each followed by a max pooling layer, and two layers of densely connected neural networks. The kernel size is taken to be 3 × 3, whereas the dimensionality of the pooling operation is taken to be 2 × 2. Dropout layers are used for regularization and operate by randomly turning off a certain fraction of units. A flatten layer is employed to convert input of any dimension into 1 × n. Moreover, a softmax activation with a categorical cross-entropy loss function is used for the multi-class classification problem.

To make the images appropriate for Keras training, they were resized to 512 × 512 before any additional changes were applied. After proper conversion to the desired configuration, Keras was trained using the following parameters: batch size = 16, subdivisions = 16, momentum = 0.9, decay = 0.0005 and a learning rate of 0.0001. Keras was trained through 70,000 epochs, saving weights after every 10,000 epochs; the results conclusively proved that the trained model efficiently detects the location of the desired lesion in the dermoscopic images provided.
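As a hedged illustration of the architecture and settings described above (not the authors' exact configuration: the filter counts 32 and 64 and the dense width 128 are assumptions), a Keras Sequential model of this shape might look like:

```python
# Sketch of a two-block convolutional Keras classifier: 3x3 kernels,
# 2x2 max pooling, dropout regularization, flatten, two dense layers
# and a softmax output, compiled with the quoted lr and momentum.
from tensorflow.keras import layers, models, optimizers

def build_model(input_shape=(512, 512, 3), n_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: 3x3 convolution followed by 2x2 max pooling
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Block 2
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),          # regularization by random unit drop
        layers.Flatten(),              # reshape feature maps to 1 x n
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # Learning rate 1e-4 and momentum 0.9 as quoted in the text
    model.compile(optimizer=optimizers.SGD(learning_rate=1e-4, momentum=0.9),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

The weight decay of 0.0005 quoted above is omitted from the sketch because the corresponding optimizer argument differs across Keras versions.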

4.2 Pre-processing

The identification of malignant melanoma with the naked eye alone can be erroneous and confusing at times, hence medical professionals often prefer dermatoscopy. Dermatoscopy is an expensive alternative; however, researchers have been quite successful in maintaining the quality of the images even after cutting down on the high cost of the process. Here, we have pre-processed the obtained image in three different steps. The first step is the removal of hair from the lesion region, for which we have employed the DullRazor algorithm [47]. This algorithm, with the aid of a grayscale morphological closing operation, begins its work by detecting the locations of the hair follicles in the targeted area. Secondly, on the basis of the length and thickness of each follicle, its location is verified by examining the recognized pixels. A bilinear interpolation method is incorporated to replace these pixels, and with the assistance of an adaptive median filter, the image is subsequently smoothened. Finally, the image contrast is enhanced by means of histogram equalization. The output of the above sequence of steps is demonstrated in Figure 2.

Figure 1. Keras model architecture

Figure 2. Pre-processing approach (a) Test image (ISIC_0000115) (b) Digitally removal of hair by employing DullRazor algorithm (c) Modified image

4.3 Segmentation

Let us consider the equation of the principal diagonal having extremities at $\left(\mathrm{x}_{1}, \mathrm{y}_{1}\right)$ and $\left(x_{n}, y_{n}\right)$ as follows,

$\frac{\mathrm{y}-\mathrm{y}_{1}}{\mathrm{y}_{\mathrm{n}}-\mathrm{y}_{1}}=\frac{\mathrm{x}-\mathrm{x}_{1}}{\mathrm{x}_{\mathrm{n}}-\mathrm{x}_{1}}$    (8)

Thus,

$y=y_{1}+\frac{\left(y_{n}-y_{1}\right)\left(x-x_{1}\right)}{\left(x_{n}-x_{1}\right)}$    (9)

All points on the diagonal satisfy Eq. (8). Now we assume that (i, j) is an arbitrary point on the diagonal, so we can write the modified generalized equation of the diagonal as,

$\mathrm{j}=\mathrm{y}_{1}+\frac{\left(\mathrm{y}_{\mathrm{n}}-\mathrm{y}_{1}\right)\left(\mathrm{i}-\mathrm{x}_{1}\right)}{\left(\mathrm{x}_{\mathrm{n}}-\mathrm{x}_{1}\right)}$    (10)

where, $\mathrm{i}=\mathrm{x}_{1}, \mathrm{x}_{2}, \mathrm{x}_{3} \ldots \ldots \ldots \ldots \mathrm{x}_{\mathrm{n}}$ and $\mathrm{j}=\mathrm{j}_{1}, \mathrm{j}_{2}, \mathrm{j}_{3} \ldots \ldots \ldots \ldots \mathrm{j}_{\mathrm{n}}$.
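Eq. (10) can be sampled directly: walking i from x1 to xn and computing the matching j gives the sequence of diagonal points. A small illustrative sketch (rounding to integer pixel rows is our choice):

```python
# Sample integer (i, j) points along the principal diagonal of Eq. (10).

def diagonal_points(x1, y1, xn, yn):
    """Points on the line joining (x1, y1) and (xn, yn), one per column."""
    points = []
    for i in range(x1, xn + 1):
        j = y1 + (yn - y1) * (i - x1) / (xn - x1)
        points.append((i, round(j)))
    return points
```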

Now, the equation of the perpendiculars of this principal diagonal can be written as,

$\mathrm{j}=-\frac{\left(\mathrm{x}_{\mathrm{n}}-\mathrm{x}_{1}\right)}{\left(\mathrm{y}_{\mathrm{n}}-\mathrm{y}_1\right)} \mathrm{i}+\mathrm{C}$    (11)

Thus,

$C=j+\frac{\left(x_{n}-x_{1}\right)}{\left(y_{n}-y_{1}\right)} i=y_{1}+\frac{\left(y_{n}-y_{1}\right)\left(i-x_{1}\right)}{\left(x_{n}-x_{1}\right)}+\frac{\left(x_{n}-x_{1}\right)}{\left(y_{n}-y_{1}\right)} i$

$=\frac{y_{1}\left(x_{n}-x_{1}\right)\left(y_{n}-y_{1}\right)+\left\{\left(y_{n}-y_{1}\right)^{2}+\left(x_{n}-x_{1}\right)^{2}\right\} i-x_{1}\left(y_{n}-y_{1}\right)^{2}}{\left(x_{n}-x_{1}\right)\left(y_{n}-y_{1}\right)}$    (12)

So,

$\mathrm{j}=-\frac{\left(\mathrm{x}_{\mathrm{n}}-\mathrm{x}_{1}\right)}{\left(\mathrm{y}_{\mathrm{n}}-\mathrm{y}_{1}\right)} \mathrm{i}+\frac{y_{1}\left(x_{n}-x_{1}\right)\left(y_{n}-y_{1}\right)+\left\{\left(y_{n}-y_{1}\right)^{2}+\left(x_{n}-x_{1}\right)^{2}\right\} i-x_{1}\left(y_{n}-y_{1}\right)^{2}}{\left(x_{n}-x_{1}\right)\left(y_{n}-y_{1}\right)}$    (13)

Let us consider two arbitrary neighbouring points $X_{R}=\left(x_{r}, y_{r}\right)$ and $X_{R+1}=\left(x_{r+1}, y_{r+1}\right)$ which satisfy Eq. (13). Then we can say that these two points will lie on the boundary of the segmented portion. Now, we focus on the following condition (14) to judge whether a point belongs on the segmented boundary.

$\left|X_{R}-X_{R+1}\right|<\theta \quad \text{or} \quad \left|X_{R}-X_{R+1}\right| \geq \theta$    (14)

Here θ is called the threshold value.

Case-1: If $\left|\mathrm{X}_{\mathrm{R}}-\mathrm{X}_{\mathrm{R}+1}\right|<\theta$, then neither point is taken to lie on the segmented boundary.

Case-2: If $\left|X_{R}-X_{R+1}\right| \geq \theta$, then we can say that $\mathrm{X}_{\mathrm{R}+1}$ will lie on the proposed segmented boundary and it will be the required intersecting point on the boundary.

Here we consider a threshold value θ on account of the colour fluctuation of the image, and we use the idea of neutrosophy to compute the exact threshold value. In reality, the colour-fluctuation threshold cannot be set properly using a general crisp-domain idea, since several uncertainty factors are correlated with it and, as we know, a general crisp number is not able to capture the imprecise portion properly. For this reason, we propose the idea of the neutrosophic number here to tackle this uncertain portion and set the threshold value θ logically and efficiently. A neutrosophic number is able to describe all three components of an uncertain parameter, (i) truth, (ii) falsity and (iii) hesitation, very prominently, so in this circumstance we propose a Triangular Neutrosophic Number (TNN) to tackle the threshold value computation. Sahin et al. [11] proposed a score value technique to compute the crispification value of a triangular neutrosophic number. Here we apply the same concept to set the threshold value θ.

Figure 3. Graphical representation of triangular neutrosophic number

Thus, θ = $\frac{1}{8}[(\mathrm{a}+\mathrm{b}+\mathrm{c}) \times(2+\mathrm{T}-\mathrm{I}-\mathrm{F})]$ such that T, I, F $\in$ [0,1], where the triplet (a, b, c) represents the triangular neutrosophic number, and T denotes the truth, I the indeterminacy and F the falsity part of the membership function. Here, we also considered the asymmetrical TNN, since in practice we cannot assume that the TNN used to find the threshold value is always symmetrical. Thus, we can fluctuate the position of b and check a dynamic threshold value; this is the biggest advantage of this computation. The proposed technique is shown graphically in Figure 3.
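The de-neutrosophication score above is a one-line computation; a minimal sketch (the function name is ours):

```python
# Crisp threshold from a TNN (a, b, c; T, I, F) via the score formula:
# theta = (1/8) * (a + b + c) * (2 + T - I - F)

def tnn_threshold(a, b, c, T, I, F):
    """De-neutrosophicated threshold value of a triangular neutrosophic number."""
    assert 0.0 <= T <= 1.0 and 0.0 <= I <= 1.0 and 0.0 <= F <= 1.0
    return (a + b + c) * (2.0 + T - I - F) / 8.0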

After setting the value of the threshold using the conception of the TNN, we now focus on the approach for choosing proper boundary points. Since the points are placed very densely, several candidate points can arise after the computation of the above process. Now, we need to find the exact points that detect the boundary line, that is, the segmented region, very clearly. To do this we apply Manhattan distance measure computation and take the minimum distance to evaluate the estimated chosen points. Suppose the approximated boundary points are $\left(\mathrm{x}_{1}, \mathrm{y}_{1}\right),\left(\mathrm{x}_{2}, \mathrm{y}_{2}\right),\left(\mathrm{x}_{3}, \mathrm{y}_{3}\right), \ldots,\left(x_{n}, y_{n}\right)$. Initially, we fix a point, say $\left(\mathrm{x}_{\mathrm{k}}, \mathrm{y}_{\mathrm{k}}\right)$, on the boundary among these points. We then compute the following distance measure to set the next fixed point.

$\mathrm{D}_{\mathrm{M}}=$ Minimum $\left\{\left|\mathrm{x}_{\mathrm{k}}-\mathrm{x}_{\mathrm{i}}\right|+\left|\mathrm{y}_{\mathrm{k}}-\mathrm{y}_{\mathrm{i}}\right|\right.$, for $\left.\mathrm{i}=1,2, \ldots \ldots . \mathrm{n} ; \mathrm{i} \neq \mathrm{k}\right\}$    (15)

After computing $\mathrm{D}_{\mathrm{M}}$ we select the next fixed point after $\left(\mathrm{x}_{\mathrm{k}}, \mathrm{y}_{\mathrm{k}}\right)$; if this point is $\left(x_{m}, y_{m}\right)$, we compute the same minimum distance measure with respect to $\left(\mathrm{x}_{\mathrm{m}}, \mathrm{y}_{\mathrm{m}}\right)$ and the other nearby points to identify the next fixed point. In case of a tie in $\mathrm{D}_{\mathrm{M}}$ we choose any one of the tied points at random and continue the process.

$D_{M}=\operatorname{Minimum}\left\{\left|x_{m}-x_{p}\right|+\left|y_{m}-y_{p}\right|\right.$, for $\left.p=1,2, \ldots \ldots n ; p \neq m\right\}$    (16)
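The point-ordering procedure of Eqs. (15)-(16) can be sketched as a greedy nearest-neighbour walk. This is a minimal sketch, not the authors' implementation; the function name is hypothetical, and ties are broken deterministically here (the paper picks a tied point at random).

```python
# Illustrative sketch of Eqs. (15)-(16): order candidate boundary points
# by repeatedly hopping to the Manhattan-nearest unvisited point.
def trace_boundary(points, start=0):
    """Return candidate boundary points ordered by minimum Manhattan hops."""
    remaining = list(points)
    current = remaining.pop(start)
    ordered = [current]
    while remaining:
        # D_M = min(|x_k - x_i| + |y_k - y_i|) over unvisited points;
        # min() keeps the first occurrence on ties (paper: random choice).
        nxt = min(remaining,
                  key=lambda p: abs(current[0] - p[0]) + abs(current[1] - p[1]))
        remaining.remove(nxt)
        ordered.append(nxt)
        current = nxt
    return ordered

print(trace_boundary([(0, 0), (5, 5), (1, 0), (1, 1)]))
```

Starting from (0, 0), the walk visits (1, 0) and (1, 1) before the distant (5, 5), yielding an ordering that follows the boundary rather than the input order.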

5. Performance Evaluations

The proposed melanoma skin lesion detection method was assessed in two stages. First, lesion localization was evaluated using the IOU metric, with a score above 80% considered notable. Second, the pre-defined parameters Sensitivity (Sen), Accuracy (Acc), Specificity (Spe), Jaccard index (Jac) and Dice coefficient (Dic) were used to evaluate the performance of the proposed methodology. These are computed from the false negative (FN), true negative (TN), false positive (FP) and true positive (TP) counts: accuracy measures overall correct detection, sensitivity measures true-lesion detection, specificity measures correct rejection of non-lesion pixels, the Jaccard index is the ratio of the overlap between the achieved segmentation and the ground truth to their union, and the Dice coefficient quantifies the agreement between the segmented lesion and the ground truth. Overall pixel-wise segmentation performance is expressed by accuracy. The formulas for these evaluation parameters are given below.

$I O U=\frac{\text { Area of Overlap }}{\text { Area of Union }}$    (17)

$\operatorname{Sen}=\frac{T P}{T P+F N}$    (18)

$S p e=\frac{T N}{T N+F P}$    (19)

$D i c=\frac{2 \times T P}{(2 \times T P)+F P+F N}$    (20)

$J a c=\frac{T P}{T P+F N+F P}$    (21)

$A c c=\frac{T P+T N}{T P+T N+F N+F P}$    (22)
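Eqs. (18)-(22) can be computed directly from the pixel-wise confusion counts. A minimal sketch follows; the function name and the example counts are hypothetical, chosen only to illustrate the formulas.

```python
# Minimal sketch of Eqs. (18)-(22): segmentation metrics from pixel-wise
# confusion counts (TP, TN, FP, FN).
def seg_metrics(tp, tn, fp, fn):
    return {
        "Sen": tp / (tp + fn),                      # Eq. (18)
        "Spe": tn / (tn + fp),                      # Eq. (19)
        "Dic": 2 * tp / (2 * tp + fp + fn),         # Eq. (20)
        "Jac": tp / (tp + fn + fp),                 # Eq. (21)
        "Acc": (tp + tn) / (tp + tn + fn + fp),     # Eq. (22)
    }

# Hypothetical counts for a 100x100 mask with a small lesion region.
m = seg_metrics(tp=900, tn=8900, fp=100, fn=100)
print({k: round(v, 4) for k, v in m.items()})
```

Note that Jac is always less than or equal to Dic for the same counts, which is why the Jaccard scores in the tables below are consistently lower than the Dice scores.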

6. Result Analysis

Here we describe the complete performance analysis of our proposed method, recorded with regard to four different aspects: lesion location detection capacity, segmentation performance, accuracy of feature extraction and computational time. After identifying the location of the lesion, the segmentation performance and the validation of the method were calculated using metrics based on the Jaccard index, Dice coefficient, specificity, sensitivity and accuracy. Our entire research work is focused on four publicly accessible datasets: ISBI 2017, ISIC 2018, ISIC 2019 and ISIC 2020. A PC powered by an i7 processor with 32 GB RAM, a 4 GB GPU and Ubuntu 18.04 as the operating system was used for detection and segmentation on all the datasets. We developed the entire image processing and classification system on the OpenCV framework using the Python programming language. Localization was evaluated using three metrics (IOU, specificity and sensitivity), which helped to locate the lesion accurately. In the localization phase, the ISBI 2017 dataset yielded an IOU score of 94, a specificity of 99.26% and a sensitivity of 97.97%. The proposed system scored an IOU of 93 on the ISIC 2018 dataset, with 97.77% specificity and 98.20% sensitivity, while for ISIC 2019 the IOU was 91, specificity 98.28% and sensitivity 97.56%. ISIC 2020 exhibited an IOU of 90, 98.05% specificity and 96.75% sensitivity. Table 6 summarizes the localization performance of the model on the four datasets.

After evaluating lesion location detection, the segmentation performance of the proposed process was assessed on the four datasets using the Dic, Jac, specificity, sensitivity and accuracy metrics. The segmentation output is reported in Table 7, and the results compare favourably with recent research. The step-by-step output obtained from the proposed model is depicted in Figure 4.

The model applied in this study produced a remarkable output, which we discuss in detail in the next section.

Table 6. Analysis of performance (in %) for localization of skin lesion using Keras classifier

| Datasets | Sen | Spe | IOU |
|---|---|---|---|
| ISBI 2017 | 97.97 | 99.26 | 94 |
| ISIC 2018 | 98.20 | 97.77 | 93 |
| ISIC 2019 | 97.56 | 98.28 | 91 |
| ISIC 2020 | 96.75 | 98.05 | 90 |

Table 7. Segmentation performance (in %)

| Datasets | Acc | Sen | Spe | Jac | Dic |
|---|---|---|---|---|---|
| ISBI 2017 | 98.67 | 96.95 | 99.50 | 95.98 | 97.95 |
| ISIC 2018 | 98.86 | 99.10 | 98.78 | 95.66 | 97.78 |
| ISIC 2019 | 98.50 | 97.78 | 98.75 | 94.42 | 97.13 |
| ISIC 2020 | 96.74 | 94.69 | 97.34 | 86.81 | 92.94 |

Figure 4. (a) Input images (ISIC_0014800, ISIC_0034813, ISIC_0061442 and ISIC_0599605), (b) outcome of pre-processing phase, (c) localization by Keras, (d) segmentation result

7. Discussion

Researchers have made significant contributions to lesion segmentation over the last few years. Our work was validated on the four widely accepted and popular datasets ISBI 2017, ISIC 2018, ISIC 2019 and ISIC 2020. The proposed skin lesion segmentation process was examined against segmentation frameworks based on the FrCN method [48], You Only Look Once (YOLO) with the GrabCut algorithm [49], automatic segmentation [50] and a bootstrapping technique [51]. Moreover, our study referred to some exquisite recent work on lesion segmentation, such as a semantic segmentation technique [52] and a full resolution convolutional network (FrCN) combined with a convolutional neural network classifier for concurrent segmentation and classification [53]. In addition, a segmentation method combining graph theory with L-type fuzzy numbers was proposed [45]. An encoder-decoder network architecture for skin lesion segmentation based on PSP-Net and DeepLab was designed, with ResNet101 used for feature extraction [54]. In recent years several successful lesion segmentation methods have been introduced: BCDU-Net improved the performance of U-Net with ConvLSTM [55], and MCGU-Net improved segmentation performance by combining Squeeze-and-Excitation blocks with U-Net [56]. More advanced segmentation and feature extraction methods include transfer learning with AlexNet [57], Difficulty-Guided Curriculum Learning (DGCL) [58] and a deep saliency segmentation method with a custom ten-layer convolutional neural network (CNN) [59]. Comparative studies of the aforesaid works are presented in Table 8, Table 9 and Table 10. The performance of the proposed segmentation method on ISIC 2020 is depicted in Table 11.
All performances were evaluated on the well-known metrics of Dic, Jac, specificity, sensitivity and accuracy, and the FN, FP, TN and TP counts are projected in Figure 5.

Table 8. Comparison of proposed segmentation method with state-of-art techniques on ISBI 2017 dataset

| References | Year | Acc | Sen | Spe | Jac | Dic |
|---|---|---|---|---|---|---|
| Li and Shen [48] | 2018 | 93.20 | 82.00 | 97.80 | 76.20 | 84.70 |
| Unver and Ayan [49] | 2019 | 93.39 | 90.82 | 92.68 | 74.81 | 84.26 |
| Bi et al. [50] | 2019 | 95.06 | 86.05 | 95.95 | 79.15 | 88.95 |
| Xie et al. [51] | 2020 | 94.70 | 87.40 | 96.80 | 80.40 | 87.80 |
| Hasan et al. [52] | 2020 | 95.30 | 87.50 | 85.50 | * | * |
| Al-Masni et al. [53] | 2020 | 81.57 | 75.67 | 80.62 | * | * |
| Banerjee et al. [12] | 2020 | 97.33 | 91.45 | 98.76 | 86.99 | 93.04 |
| Proposed Method | 2021 | 98.67 | 96.95 | 99.50 | 95.98 | 97.95 |

Table 9. Comparison of proposed segmentation method with state-of-art techniques on ISIC 2018 dataset

| References | Year | Acc | Sen | Spe | Jac | Dic |
|---|---|---|---|---|---|---|
| Qian et al. [54] | 2018 | 94.20 | 90.60 | 96.30 | 83.80 | 89.80 |
| Azad et al. [55] | 2019 | 93.70 | 78.50 | 98.20 | 93.70 | * |
| Asadi-Aghbolaghi et al. [56] | 2020 | 95.50 | 84.80 | 98.60 | 95.50 | * |
| Hosny et al. [57] | 2020 | 98.70 | 95.60 | 99.27 | * | * |
| Al-Masni et al. [53] | 2020 | 81.79 | 81.80 | 71.40 | * | * |
| Tang et al. [58] | 2021 | 94.80 | 89.10 | 96.40 | 80.70 | 88.10 |
| Khan et al. [59] | 2021 | 92.69 | * | * | * | * |
| Proposed Method | 2021 | 98.86 | 99.10 | 98.78 | 95.66 | 97.78 |

Figure 5. Analysis of the proposed segmentation method based on True positive (TP), true negative (TN), false positive (FP), false negative (FN)

Table 10. Comparison of proposed segmentation method with state-of-art techniques on ISIC 2019

| References | Year | Acc | Sen | Spe | Jac | Dic |
|---|---|---|---|---|---|---|
| Banerjee et al. [12] | 2020 | 93.98 | 91.55 | 94.84 | 79.84 | 88.79 |
| Proposed Method | 2021 | 98.50 | 97.78 | 98.75 | 94.42 | 97.13 |

Table 11. Comparison of proposed segmentation method with state-of-art techniques on ISIC 2020

| References | Year | Acc | Sen | Spe | Jac | Dic |
|---|---|---|---|---|---|---|
| Proposed Method | 2021 | 96.74 | 94.69 | 97.34 | 86.81 | 92.94 |

Comparing the proposed method's results with these trending segmentation methods clearly shows that it outperforms the available deep-learning techniques. Evaluation on the ISBI 2017 dataset shows that it surpassed the best existing contributions, scoring 99.50% in specificity with a Jac score of 95.98%.

Bi et al. [17] proposed automated skin lesion segmentation via a novel deep class-specific learning approach, which learns the important visual characteristics of each individual lesion class (melanoma vs. non-melanoma). A new probability-based scheme was introduced that employs step-wise integration to combine complementary segmentation results derived from individual class-specific learning models. It uses the PH2, ISIC 2016 and ISIC 2017 datasets for training the deep learning model and segmenting skin lesions. However, owing to insufficient training data and training parameters, the model does not predict the segmentation outcome accurately; it is therefore unable to reach high segmentation accuracy and may miss the lesion boundary in the presence of dense artefacts. A diagnostic framework combining segmentation and classification stages was proposed by Al-Masni et al. [53] in 2020, in which FrCN is used to segment the skin lesion from the field of view (FOV); however, the FrCN model is not as promising as Keras for segmenting dermoscopic images. Banerjee et al. [12] used graph theory and fuzzy logic for lesion segmentation, but applying graph theory to a fixed window size and discarding windows that fall outside the threshold range may lose important features, thereby decreasing classification efficiency. Khan et al. [59] estimated saliency with a deep saliency segmentation method in which a ten-layer CNN segments the image and the generated heat map is then converted into a binary image using a thresholding function; nevertheless, the heat-map conversion may retain unnecessary features and segment parts of the FOV along with the region of interest (ROI).

Moreover, the segmentation results for the ISIC 2018 dataset show that the suggested methodology outperformed the rest by a remarkable margin, second only in specificity to the motivating research of Hosny et al. [57], who attained a staggering 99.27%. On the ISIC 2019 and ISIC 2020 datasets we also obtained satisfactory scores on all parameters. The efficiency of the proposed method over other published works on all four datasets is represented by receiver operating characteristic (ROC) curves, in which the true positive rate is plotted against the false positive rate; the area under the curve is directly proportional to the effectiveness of the algorithm. Figure 6(a) presents the ROC curve for the ISBI 2017 dataset, where the curve for the proposed method lies closer to the y-axis than those of other published works, clearly indicating its superior accuracy for melanoma detection. Comparative ROC curves for melanoma lesion segmentation on the ISIC 2018, ISIC 2019 and ISIC 2020 datasets are plotted in Figure 6(b), Figure 6(c) and Figure 6(d), respectively; all of them show a larger area under the curve and higher detection accuracy for the proposed method than for pre-existing algorithms. A curve approaching a 90° angle is considered to indicate the most effective algorithm, and all the plotted graphs showcase the superiority of the proposed method.
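The ROC/AUC relationship described above can be sketched without any plotting library, using the rank interpretation of AUC as the probability that a randomly chosen positive scores above a randomly chosen negative. The function name and the example scores are hypothetical and not the paper's data; this is only an illustration of the metric.

```python
# Illustrative sketch: AUC via its rank/probability interpretation,
# AUC = P(score_pos > score_neg), with ties counted as half a win.
# The ROC curve itself is traced by sweeping the decision threshold
# over the scores and plotting TPR against FPR at each threshold.
def roc_auc(scores, labels):
    """AUC for binary labels (1 = positive, 0 = negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect separator gives AUC = 1.0; random scoring tends toward 0.5,
# matching the "larger area under the curve is better" reading above.
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))
```

An AUC of 1.0 corresponds to the ideal curve hugging the y-axis and top edge, i.e., the 90° shape mentioned above.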

Figure 6. ROC diagrams of the proposed method for four datasets (a) ISBI 2017, (b) ISIC 2018, (c) ISIC2019 and (d) ISIC 2020

Table 12. Comparison of well-known classifiers with proposed classifier for classification of skin lesion images of ISBI 2017 dataset

| Classifier | Acc | Sen | Spec | Prec | Time |
|---|---|---|---|---|---|
| Keras | 98.83 | 97.97 | 99.26 | 98.47 | 9.91 |
| YOLO | 98.67 | 96.95 | 99.50 | 98.96 | 7.54 |
| KNN | 97.00 | 94.42 | 98.26 | 96.37 | 13.76 |
| MP | 95.33 | 91.88 | 97.02 | 93.78 | 12.63 |
| MG-SVM | 93.33 | 89.34 | 95.29 | 90.26 | 21.43 |
| Random Forest | 93.33 | 87.82 | 96.03 | 91.53 | 29.89 |
| Naïve Bayes | 91.33 | 87.31 | 93.30 | 86.43 | 19.78 |
| Linear SVM | 88.83 | 85.79 | 90.32 | 81.25 | 22.31 |
| Logistic | 90.00 | 85.79 | 92.06 | 84.08 | 39.74 |
| Decision Tree | 88.33 | 81.73 | 91.56 | 82.56 | 42.28 |
| Bayesian Network | 87.00 | 79.70 | 90.57 | 80.51 | 48.19 |

Table 13. Comparison of well-known classifiers with proposed classifier for classification of skin lesion images of ISIC 2018 dataset

| Classifier | Acc | Sen | Spec | Prec | Time |
|---|---|---|---|---|---|
| Keras | 98.33 | 97.31 | 98.68 | 96.15 | 16.41 |
| YOLO | 97.80 | 96.41 | 98.28 | 94.99 | 12.90 |
| KNN | 96.89 | 94.61 | 97.67 | 93.22 | 19.89 |
| MP | 95.91 | 90.12 | 97.87 | 93.48 | 21.83 |
| MG-SVM | 93.86 | 86.23 | 96.45 | 89.16 | 29.16 |
| Random Forest | 93.79 | 87.13 | 96.04 | 88.18 | 26.59 |
| Naïve Bayes | 92.35 | 86.53 | 94.32 | 83.77 | 31.48 |
| Linear SVM | 88.94 | 81.74 | 91.38 | 76.26 | 30.26 |
| Logistic | 89.39 | 84.13 | 91.18 | 76.36 | 48.72 |
| Decision Tree | 87.35 | 82.04 | 89.15 | 71.92 | 45.76 |
| Bayesian Network | 86.52 | 78.14 | 89.35 | 71.31 | 58.21 |

Table 14. Comparison of well-known classifiers with proposed classifier for classification of skin lesion images of ISIC 2019 dataset

| Classifier | Acc | Sen | Spec | Prec | Time |
|---|---|---|---|---|---|
| Keras | 97.86 | 96.89 | 98.20 | 94.99 | 18.92 |
| YOLO | 97.40 | 96.44 | 97.73 | 93.74 | 14.19 |
| KNN | 96.65 | 94.22 | 97.50 | 92.98 | 21.74 |
| MP | 95.84 | 92.67 | 96.95 | 91.45 | 24.25 |
| MG-SVM | 94.62 | 89.56 | 96.41 | 89.76 | 35.12 |
| Random Forest | 92.60 | 86.89 | 94.61 | 85.00 | 30.77 |
| Naïve Bayes | 90.87 | 86.44 | 92.42 | 80.04 | 39.71 |
| Linear SVM | 89.36 | 83.78 | 91.33 | 77.25 | 52.49 |
| Logistic | 88.32 | 84.67 | 89.61 | 74.12 | 59.10 |
| Decision Tree | 87.86 | 80.89 | 90.31 | 74.59 | 49.21 |
| Bayesian Network | 85.61 | 76.00 | 88.98 | 70.81 | 72.46 |

Table 15. Comparison of well-known classifiers with proposed classifier for classification of skin lesion images of ISIC 2020 dataset

| Classifier | Acc | Sen | Spec | Prec | Time |
|---|---|---|---|---|---|
| Keras | 97.05 | 96.06 | 97.34 | 91.37 | 20.54 |
| YOLO | 96.98 | 95.03 | 97.55 | 91.89 | 15.29 |
| KNN | 96.24 | 94.35 | 96.79 | 89.59 | 26.78 |
| MP | 94.96 | 92.29 | 95.74 | 86.38 | 30.83 |
| MG-SVM | 95.54 | 93.84 | 96.04 | 87.40 | 40.26 |
| Random Forest | 92.25 | 86.99 | 93.79 | 80.38 | 37.51 |
| Naïve Bayes | 91.20 | 84.93 | 93.04 | 78.11 | 45.52 |
| Linear SVM | 90.16 | 84.08 | 91.93 | 75.31 | 59.72 |
| Logistic | 89.07 | 83.22 | 90.78 | 72.54 | 58.14 |
| Decision Tree | 87.75 | 80.65 | 89.83 | 69.88 | 52.44 |
| Bayesian Network | 85.58 | 76.03 | 88.38 | 65.68 | 82.19 |

The proposed classifier (Keras) is compared with several recent classifiers: You Only Look Once (YOLO), K-Nearest Neighbors (KNN), Multilayer Perceptron (MP), Medium Gaussian Support Vector Machine (MG-SVM), Random Forest, Naïve Bayes, Linear SVM, Logistic, Decision Tree and Bayesian Network. Comparisons are based on accuracy, precision, specificity, sensitivity and time (in seconds). Table 12, Table 13, Table 14 and Table 15 present this comparison of well-established classifiers on the ISBI 2017, ISIC 2018, ISIC 2019 and ISIC 2020 datasets.

The comparisons clearly illustrate the superiority of the proposed classifier over other well-known classifiers for the classification of dermoscopic skin lesion images. The proposed classification method not only attains high scores across all evaluation metrics compared with other established classifiers but also delivers fast classification results, diagnosing a lesion more efficiently and quickly than the other classifiers. Selecting Keras for the classification of melanoma skin cancer has both increased diagnostic efficiency and reduced detection time. Employing preprocessing techniques for digital hair removal, followed by image refinement and efficient segmentation, contributed to the enhanced accuracy of the method.

8. Conclusions

In this article, an effective mathematical model is demonstrated for the purpose of segmentation. The studies were carried out on four widely accepted datasets: ISBI 2017, ISIC 2018, ISIC 2019 and ISIC 2020. The test results across numerous criteria show that the proposed method using Keras achieved promising results in comparison with other deep learning-based methodologies. We analyzed the computational steps needed to scrutinize cancer using different digital and dermatological images from the aforementioned datasets. The segmentation technique, which combines a moving straight line with a succession of points and applies the theory of triangular neutrosophic numbers, amplified the segmentation results, which in turn improved the precision of the recognition and classification process. The proposed characteristics of this study have brought appreciable effectiveness to the comprehensive process of cancer diagnosis, although much remains to be unveiled, examined and realized in this field. In the future, extensive training of the system on a varied range of datasets containing several lesion types, and classification of lesions by upgraded CAD techniques or clinical testing, could help achieve more pronounced output.

  References

[1] Feng, J., Isern, N.G., Burton, S.D., Hu, J.Z. (2013). Studies of secondary melanoma on C57BL/6J mouse liver using 1H NMR metabolomics. Metabolites, 3(4): 1011-1035. https://doi.org/10.3390/metabo3041011

[2] Abuzaghleh, O., Faezipour, M., Barkana, B.D. (2015). Skincure: An innovative smart phone-based application to assist in melanoma early detection and prevention. arXiv preprint arXiv:1501.01075. https://doi.org/10.5121/sipij.2014.5601

[3] D'Orazio, J., Jarrett, S., Amaro-Ortiz, A., Scott, T. (2013). UV radiation and the skin. International Journal of Molecular Sciences, 14(6): 12222-12248. https://doi.org/10.3390/ijms140612222

[4] Karimkhani, C., Green, A.C., Nijsten, T., Weinstock, M.A., Dellavalle, R.P., Naghavi, M., Fitzmaurice, C. (2017). The global burden of melanoma: results from the Global Burden of Disease Study 2015. British Journal of Dermatology, 177(1): 134-140. https://doi.org/10.1111/bjd.15510

[5] Gandhi, S.A., Kampp, J. (2015). Skin cancer epidemiology, detection, and management. The Medical Clinics of North America, 99(6): 1323-1335. https://doi.org/10.1016/j.mcna.2015.06.002

[6] Mayer, J.E., Swetter, S.M., Fu, T., Geller, A.C. (2014). Screening, early detection, education, and trends for melanoma: current status (2007-2013) and future directions: Part I. Epidemiology, high-risk groups, clinical strategies, and diagnostic technology. Journal of the American Academy of Dermatology, 71(4): 599-e1. https://doi.org/10.1016/j.jaad.2014.05.046

[7] Brochez, L., Verhaeghe, E., Grosshans, E., Haneke, E., Piérard, G., Ruiter, D., Naeyaert, J.M. (2002). Inter‐observer variation in the histopathological diagnosis of clinically suspicious pigmented skin lesions. The Journal of Pathology: A Journal of the Pathological Society of Great Britain and Ireland, 196(4): 459-466. https://doi.org/10.1002/path.1061

[8] Dadzie, O.E., Goerig, R., Bhawan, J. (2008). Incidental microscopic foci of nevic aggregates in skin. The American Journal of Dermatopathology, 30(1): 45-50.  https://doi.org/10.1097/DAD.0b013e31815f9854

[9] Giotis, I., Molders, N., Land, S., Biehl, M., Jonkman, M. F., Petkov, N. (2015). MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Systems with Applications, 42(19): 6578-6585. https://doi.org/10.1016/j.eswa.2015.04.034

[10] Abbas, Q., Celebi, M.E., García, I.F. (2011). Hair removal methods: A comparative study for dermoscopy images. Biomedical Signal Processing and Control, 6(4): 395-404. https://doi.org/10.1016/j.bspc.2011.01.003

[11] Şahin, M., Kargın, A., Smarandache, F. (2018). Generalized Single Valued Triangular Neutrosophic Numbers and Aggregation Operators for Application to Multi-attribute Group Decision Making. Infinite Study.

[12] Banerjee, S., Singh, S.K., Chakraborty, A., Das, A., Bag, R. (2020). Melanoma diagnosis using deep learning and fuzzy logic. Diagnostics, 10(8): 577. https://doi.org/10.3390/diagnostics10080577

[13] Vestergaard, M.E., Macaskill, P., Holt, P.E., Menzies, S.W. (2008). Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: A meta-analysis of studies performed in a clinical setting. British Journal of Dermatology, 159(3): 669-676. https://doi.org/10.1111/j.1365-2133.2008.08713.x

[14] Goceri, E. (2019). Skin disease diagnosis from photographs using deep learning. In ECCOMAS thematic conference on computational vision and medical image processing (pp. 239-246). Springer, Cham. https://doi.org/10.1007/978-3-030-32040-9_25

[15] Goceri, E., Songül, C. (2017, October). Automated detection and extraction of skull from MR head images: preliminary results. In 2017 International Conference on Computer Science and Engineering (UBMK), pp. 171-176. https://doi.org/10.1109/UBMK.2017.8093370

[16] Milton, M.A.A. (2019). Automated skin lesion classification using ensemble of deep neural networks in ISIC 2018: Skin lesion analysis towards melanoma detection challenge. arXiv preprint arXiv:1901.10802.

[17] Bi, L., Kim, J., Ahn, E., Kumar, A., Fulham, M., Feng, D. (2017). Dermoscopic image segmentation via multistage fully convolutional networks. IEEE Transactions on Biomedical Engineering, 64(9): 2065-2074. https://doi.org/10.1109/TBME.2017.2712771

[18] Al-Masni, M.A., Al-Antari, M.A., Choi, M.T., Han, S.M., Kim, T.S. (2018). Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Computer Methods and Programs in Biomedicine, 162: 221-231. https://doi.org/10.1016/j.cmpb.2018.05.027

[19] Berseth, M. (2017). ISIC 2017-skin lesion analysis towards melanoma detection. arXiv preprint arXiv:1703.00523.

[20] Vesal, S., Ravikumar, N., Maier, A. (2018). Skinnet: A deep learning framework for skin lesion segmentation. In 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC), pp. 1-3. https://doi.org/10.1109/NSSMIC.2018.8824732

[21] Smarandache, F. (1998). Neutrosophy: neutrosophic probability, set, and logic: Analytic synthesis & synthetic analysis.

[22] Wang, H., Smarandache, F., Zhang, Y., Sunderraman, R. (2010). Single valued neutrosophic sets. Infinite study.

[23] Chakraborty, A., Mondal, S. P., Ahmadian, A., Senu, N., Alam, S., Salahshour, S. (2018). Different forms of triangular neutrosophic numbers, de-neutrosophication techniques, and their applications. Symmetry, 10(8): 327. https://doi.org/10.3390/sym10080327

[24] Chakraborty, A., Mondal, S.P., Mahata, A., Alam, S. (2021). Different linear and non-linear form of trapezoidal neutrosophic numbers, de-neutrosophication techniques and its application in time-cost optimization technique, sequencing problem. RAIRO-Operations Research, 55: S97-S118. https://doi.org/10.1051/ro/2019090

[25] Chakraborty, A., Mondal, S., Broumi, S. (2019). De-neutrosophication technique of pentagonal neutrosophic number and application in minimal spanning tree. Infinite Study.

[26] Karaaslan, F., Hayat, K. (2018). Some new operations on single-valued neutrosophic matrices and their applications in multi-criteria group decision making. Applied Intelligence, 48(12): 4594-4614. https://doi.org/10.1007/s10489-018-1226-y

[27] Abdel-Basset, M., Saleh, M., Gamal, A., Smarandache, F. (2019). An approach of TOPSIS technique for developing supplier selection with group decision making under type-2 neutrosophic number. Applied Soft Computing, 77:  438-452. https://doi.org/10.1016/j.asoc.2019.01.035

[28] Deli, I., Ali, M., Smarandache, F. (2015). Bipolar neutrosophic sets and their application based on multi-criteria decision making problems. In 2015 International Conference on Advanced Mechatronic Systems (ICAMechS), pp. 249-254. https://doi.org/10.1109/ICAMechS.2015.7287068

[29] Ali, M., Smarandache, F. (2017). Complex neutrosophic set. Neural Computing and Applications, 28(7): 1817-1834. https://doi.org/10.1007/s00521-015-2154-y

[30] Nabeeh N.A., Abdel-Basset M., El-Ghareeb H.A., Aboelfetouh, A. (2019) Neutrosophic multi-criteria decision making approach for IoT-based enterprises. IEEE Access, 7: 59559-59574. https://doi.org/10.1109/ACCESS.2019.2908919

[31] Mullai, M., Broumi, S. (2018) Neutrosophic inventory model without shortages. Asian Journal of Mathematics and Computer Research, 23(4): 214-219. 

[32] Pal, S., Chakraborty, A. (2020). Triangular neutrosophic based production reliability model of deteriorating item with ramp type demand under shortages and time discounting. Neutrosophic Sets and Systems, 35: 347-367. 

[33] Guo, Y., Cheng, H.D. (2009). New neutrosophic approach to image segmentation. Pattern Recognition, 42(5): 587-595. https://doi.org/10.1016/j.patcog.2008.10.002

[34] Jha, S., Kumar, R., Priyadarshini, I., Smarandache, F., Long, H.V. (2019). Neutrosophic image segmentation with dice coefficients. Measurement, 134: 762-772. https://doi.org/10.1016/j.measurement.2018.11.006

[35] Zhang, M., Zhang, L., Cheng, H.D. (2010). A neutrosophic approach to image segmentation based on watershed method. Signal Processing, 90(5): 1510-1517. https://doi.org/10.1016/j.sigpro.2009.10.021

[36] Guo, Y., Şengür, A. (2014). A novel image segmentation algorithm based on neutrosophic similarity clustering. Applied Soft Computing, 25: 391-398. https://doi.org/10.1016/j.asoc.2014.08.066

[37] Sengur, A., Guo, Y. (2011). Color texture image segmentation based on neutrosophic set and wavelet transformation. Computer Vision and Image Understanding, 115(8): 1134-1144. https://doi.org/10.1016/j.cviu.2011.04.001

[38] Sengur, A., Budak, U., Akbulut, Y., Karabatak, M., Tanyildizi, E. (2019). A Survey on Neutrosophic Medical Image Segmentation. In Neutrosophic Set in Medical Image Analysis (pp. 145-165). Academic Press. https://doi.org/10.1016/B978-0-12-818148-5.00007-2

[39] Guo, Y., Şengür, A., Akbulut, Y., Shipley, A. (2018). An effective color image segmentation approach using neutrosophic adaptive mean shift clustering. Measurement, 119: 28-40. https://doi.org/10.1016/j.measurement.2018.01.025

[40] Dhingra, G., Kumar, V., Joshi, H.D. (2019). A novel computer vision based neutrosophic approach for leaf disease identification and classification. Measurement, 135: 782-794. https://doi.org/10.1016/j.measurement.2018.12.027

[41] Ali, M., Khan, M., Tung, N.T. (2018). Segmentation of dental X-ray images in medical imaging using neutrosophic orthogonal matrices. Expert Systems with Applications, 91: 434-441. https://doi.org/10.1016/j.eswa.2017.09.027

[42] Dhar, S., Kundu, M.K. (2021). Accurate multi-class image segmentation using weak continuity constraints and neutrosophic set. Applied Soft Computing, 112: 107759. https://doi.org/10.1016/j.asoc.2021.107759

[43] Lu, Z., Qiu, Y., Zhan, T. (2019). Neutrosophic C-means clustering with local information and noise distance-based kernel metric image segmentation. Journal of Visual Communication and Image Representation, 58: 269-276. https://doi.org/10.1016/j.jvcir.2018.11.045

[44] Qureshi, M.N., Ahamad, M.V. (2018). An improved method for image segmentation using K-means clustering with neutrosophic logic. Procedia Computer Science, 132, 534-540. https://doi.org/10.1016/j.procs.2018.05.006

[45] Chakraborty, A., Broumi, S., Singh, P.K. (2019). Some properties of pentagonal neutrosophic numbers and its applications in transportation problem environment. Infinite Study.

[46] Ketkar, N. (2017). Introduction to keras. In Deep learning with Python (pp. 97-111). Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-2766-4_7

[47] Kiani, K., Sharafat, A.R. (2011). E-shaver: An improved DullRazor® for digitally removing dark and light-colored hairs in dermoscopic images. Computers in Biology and Medicine, 41(3): 139-145. https://doi.org/10.1016/j.compbiomed.2011.01.003

[48] Li, Y., Shen, L. (2018). Skin lesion analysis towards melanoma detection using deep learning network. Sensors, 18(2): 556. https://doi.org/10.3390/s18020556

[49] Ünver, H.M., Ayan, E. (2019). Skin lesion segmentation in dermoscopic images with combination of YOLO and grabcut algorithm. Diagnostics, 9(3): 72. https://doi.org/10.3390/diagnostics9030072

[50] Bi, L., Kim, J., Ahn, E., Kumar, A., Feng, D., Fulham, M. (2019). Step-wise integration of deep class-specific learning for dermoscopic image segmentation. Pattern Recognition, 85: 78-89. https://doi.org/10.1016/j.patcog.2018.08.001

[51] Xie, Y., Zhang, J., Xia, Y., Shen, C. (2020). A mutual bootstrapping model for automated skin lesion segmentation and classification. IEEE Transactions on Medical Imaging, 39(7): 2482-2493. https://doi.org/10.1109/TMI.2020.2972964

[52] Hasan, M.K., Dahal, L., Samarakoon, P.N., Tushar, F.I., Martí, R. (2020). DSNet: Automatic dermoscopic skin lesion segmentation. Computers in Biology and Medicine, 120: 103738. https://doi.org/10.1016/j.compbiomed.2020.103738

[53] Al-Masni, M.A., Kim, D.H., Kim, T.S. (2020). Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Computer Methods and Programs in Biomedicine, 190: 105351. https://doi.org/10.1016/j.cmpb.2020.105351

[54] Qian, C., Liu, T., Jiang, H., Wang, Z., Wang, P., Guan, M., Sun, B. (2018). A detection and segmentation architecture for skin lesion segmentation on dermoscopy images. arXiv preprint arXiv:1809.03917.

[55] Azad, R., Asadi-Aghbolaghi, M., Fathy, M., Escalera, S. (2019). Bi-directional ConvLSTM U-Net with densely connected convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. https://doi.org/10.1109/ICCVW.2019.00052

[56] Asadi-Aghbolaghi, M., Azad, R., Fathy, M., Escalera, S. (2020). Multi-level context gating of embedded collective knowledge for medical image segmentation. arXiv preprint arXiv:2003.05056.

[57] Hosny, K.M., Kassem, M.A., Fouad, M.M. (2020). Classification of skin lesions into seven classes using transfer learning with AlexNet. Journal of Digital Imaging, 33(5): 1325-1334. https://doi.org/10.1007/s10278-020-00371-9

[58] Tang, P., Yan, X., Liang, Q., Zhang, D. (2021). AFLN-DGCL: Adaptive Feature Learning Network with Difficulty-Guided Curriculum Learning for skin lesion segmentation. Applied Soft Computing, 110: 107656. https://doi.org/10.1016/j.asoc.2021.107656

[59] Khan, M.A., Sharif, M., Akram, T., Damaševičius, R., Maskeliūnas, R. (2021). Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics, 11(5): 811. https://doi.org/10.3390/diagnostics11050811