An Automatic Region Based Optimal Segmentation and Detection of Features on Dermoscopy Images Using V-Shaped Waterfall and Water Ridges


Annapoorani Gopal, Prabhu Chakkaravarthy Alagarsamy*, Uma Maheswari Kalivaradhan, Lathaselvi Gandhimaruthian

Department of CSE & IT, College of Engineering, Anna University, Tiruchirappalli 620024, India

Department of Networking and Communications, School of Computing, College of Engineering and Technology, SRM Institute of Science & Technology, Kanchipuram 603203, India

Department of Information Technology, St. Joseph's College of Engineering, Chennai 600119, India

Corresponding Author Email: drprabhucse@gmail.com

Pages: 511-522 | DOI: https://doi.org/10.18280/ts.400210

Received: 10 May 2022 | Revised: 15 January 2023 | Accepted: 28 January 2023 | Available online: 30 April 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Among all cancers, skin cancer is one of the most devastating, and it initially develops in the epidermis of the human body. Computerized analysis of the image strongly supports an accurate evaluation of skin cancer, which affects many people around the world on different parts of the body. To make a reliable diagnosis, the dermatologist should examine the pigment on the skin image using a computational method, which can serve as a pre-screening system for early diagnosis. The proposed work reports the segmentation of the lesion from dermoscopy images through the fundamental steps of pre-processing, segmentation and post-processing. In this work, a set of patterns is extracted from uneven borders using watershed segmentation, and level set and active contour detection fit a precise boundary curve to segment the affected region. The proposed simulation measures the agreement between the ground truth image and the segmented image and confirms accuracies of up to 94.79% on the PH2 dataset and 90.658% on the DermQuest dataset.

Keywords: 

watershed segmentation, image gradient, Gaussian filter, level set, Neumann boundary, Dirac

1. Introduction

Melanoma is the most serious form of skin cancer. It can be identified by a change in the dimensions of an existing mole or by the appearance of a new mole on the body, and its shape is irregular in most cases. Melanoma can be tracked by a dermatologist or by a computational method; the computational method provides the most accurate diagnosis, and its initial step is segmentation of the skin lesion.

An automatic system for segmentation and classification that identifies pigmented skin lesions through a decision system was given in [1]. The segmentation process is done with edge detection and the HSV model, separating foreground from background, where the histogram of the background is represented by a Gaussian distribution. Segmentation then proceeds with a threshold on the foreground image using HSV segmentation. An FCN-based initial segmentation with superpixel-based fine-tuning was developed for deep convolutional learning [2]. The image is segmented by the fine-tuning method to extract the lesion border; the boundary is acquired by fuzzy conditions over complex textures, which extract some local contextual information along the boundary. To label each pixel in semantic segmentation, feed-forward computation and back-propagation are used. The research was carried out on the ISBI 2016 dataset.

Structural features such as the wavelet and curvelet transforms were fused with texture features such as the local binary pattern operator [3]. These features are extracted after segmentation and classified with an SVM. This work was done on the public PH2 dataset, with a sensitivity of 78.93%, specificity of 93.25% and accuracy of up to 86.07%. Beyond structure and texture, features such as streaks, dots and borders are also identified. Lesion identification with a global method and two different systems was used to classify the skin lesion with a trained set of images [4]; the local features feed a bag-of-features classifier that is used to test the lesion. Classification is done by verifying color and texture features. The global method reaches a sensitivity of 96% and a specificity of 80%; the local method reaches a sensitivity of up to 100% and a specificity of up to 75%.

A fully convolutional U-Net architecture is used for segmenting the skin lesion [5]. The segmentation divides the task in two: the initial part extracts the foreground pattern and the final part extracts the pattern in the background. A Gaussian filter is used to remove the artifacts in the lesion image. The performance of FCUA is reported as accuracy up to 94%, specificity up to 96.3% and sensitivity up to 91.4%. A deep convolutional neural network was proposed for the training and testing phases with the ISIC 2016 dataset [6]. Hand-crafted features are extracted using unsupervised feature training. After a fine segmentation with regular globules and a typical network, the foreground and background are separated and the foreground is converted into a segmented contour image.

An automated system was developed to recognize melanoma by segmenting the image using a fusion strategy. A set of features is extracted to describe the correctness of the segmentation, which continues with global thresholding and dynamic thresholding. The thresholded images are clustered on the basis of colour, and shape, radiometric, line, area and perimeter features are extracted from the segmented image. The performance of this algorithm is 87% sensitivity and 92% specificity. A novel texture segmentation was developed for texture analysis based on texture distributions [7]; building on the texture distribution, the TGLS algorithm computes the TD metric, which describes the performance measure. The algorithm is evaluated on the DermQuest dataset.

An automated system was developed to detect and segment the vascular structure of a lesion from a dermoscopy image using independent component segmentation [8]. Skin and erythema were extracted from the image by decomposing the pixels using K-means clustering; the system reaches a sensitivity of 84.4% and a specificity of 98.8%. An automatic system was developed to find psoriasis, which accounts for 40% of skin disease [9]. To detect psoriasis, area and severity indices are identified using an automated seeded region growing algorithm: if the output of the algorithm is 1, the image is infected; if the output is 0, it is not. The image quality is further improved by a Markov Random Field (MRF), and a hyperplane is derived from an SVM.

A 2D wavelet packet decomposition was developed for fractal texture analysis [10]. The borders of skin lesions are always irregular; to analyze this irregularity, the irregular texture pattern is extracted by organizing the boundary on the basis of colour and texture. Other colors are eliminated with recursive feature selection and an SVM classifier, and pixels are eliminated automatically by selecting a correlation threshold and comparing pixels recursively to remove correlation bias. Unwanted pixels and artifacts are removed using a median filter and morphological bottom-hat filtering, and features such as area, eccentricity and perimeter are extracted for classification. A novel segmentation technique was proposed using a threshold region-based active contour [11]: the region-based segmentation forms a boundary under a vector growing condition over the regions, while the active contour generates a snake that surrounds the boundary. The work discusses edge-based segmentation, threshold segmentation, region-based segmentation and active contours.

To enhance texture features, Chan-Vese model segmentation is applied to the RGB image, and features such as asymmetry, border, color and texture are extracted [12]. A morphological filter removes unwanted pixels in the grayscale image, and an anisotropic diffusion filter compares pixel by pixel to remove unwanted artifacts. Delaunay triangulation is a technique used to extract the binary mask of the lesion, with the boundary of the skin lesion identified by morphological closing. The image quality is improved by increasing the contrast through an equalization technique. In this segmentation process, the lesion regions of interest are extracted by filtering, and the edge is detected and segmented using Delaunay triangulation. The image enhancement is done on the PH2 dataset, and the performance of this work reaches a specificity of up to 97.6% and a sensitivity of up to 87.17% [13]. To extract the lesion, a non-negative matrix factorization (NMF) technique was developed for segmenting dermoscopy images [14]. The lesion region was identified by extracting the region of the image covered with texture, i.e., the texture laid over the lesion region. Artifact pixels are removed using a Gaussian filter. To improve the quality of the NMF, an information-theoretic dictionary learning tool is used to compress and classify the pixel intensities, and the learned information is held in the NMF as a temporary processing unit.

1.1 Objectives

(1) To segment the dermoscopy images to find the lesion binary image.

(2) To identify the irregular shape of boundaries by Asymmetry method.

(3) To identify the irregular edge of boundaries by Border method.

The rest of the paper is organized as follows: Section 2 describes the methodology, Section 3 the proposed work, Section 4 the results and discussion, Section 5 the performance measures, and Section 6 the feature extraction; Section 7 concludes the paper.

2. Methodology

The optimal segmentation tool developed for diagnosing melanoma consists of three stages: pre-processing, segmentation and post-processing. The following sections explain the method proposed for each stage.

2.1 Gaussian flattening

Gaussian flattening is a 2D convolution operator that smooths image pixels to suppress noise. The Gaussian distribution in 1D is given in Eq. (1):

$G(x)=\frac{1}{\sqrt{2 \pi}\, \sigma} e^{-\frac{x^2}{2 \sigma^2}}$                     (1)

where σ is the standard deviation of the Gaussian distribution [15], and the mean is zero (x=0). The Gaussian distribution in 2D is given in Eq. (2):

$G(x, y)=\frac{1}{2 \pi \sigma^2} e^{-\frac{x^2+y^2}{2 \sigma^2}}$                        (2)

This has mean (x, y)=(0, 0) and σ=1. Gaussian flattening uses the 2D distribution as a point-spread function applied through convolution. Since the image stores discrete pixels, the Gaussian kernel is likewise discretized, producing a discrete estimate of the continuous function.
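As a concrete illustration of Gaussian flattening (a sketch, not the authors' implementation), the following Python code discretizes Eq. (2) into a kernel and applies the equivalent SciPy filter; the kernel size and σ values are illustrative assumptions.

```python
# Sketch of Gaussian flattening; sigma and kernel size are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_flatten(image, sigma=1.0):
    """Suppress noise by convolving the image with a 2D Gaussian (Eq. (2))."""
    return gaussian_filter(image.astype(float), sigma=sigma)

def gaussian_kernel(size=5, sigma=1.0):
    """Discrete estimate of G(x, y): the point-spread function used above."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # renormalize so the discrete kernel sums to 1
```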

2.2 2D convolution

The convolution integral in 1D is as in Eq. (3):

$g(t)=\int_{-\infty}^{\infty} f(T)\, h(t-T)\, d T$                    (3)

f(t) is a time-varying signal and h(t) is the impulse response; T is the shift variable of one function relative to the other [6]. The convolution theorem pairs a product in one domain with a convolution in the other, as in Eq. (4) and Eq. (5):

$f(t)\, h(t) \leftrightarrow F(w) \otimes H(w)$                   (4)

$f(t) \otimes h(t) \leftrightarrow F(w)\, H(w)$                  (5)

In 2D continuous convolution the integral process is expressed as in Eq. (6):

$g(x, y)=f(x, y) \otimes h(x, y)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(T_u, T_v)\, h(x-T_u, y-T_v)\, d T_u \, d T_v$                   (6)

The function h(0-Tu, 0-Tv) is the image function h(Tu, Tv) rotated by 180 degrees about the origin [16].

The function h(x-Tu, y-Tv) is further translated to move the origin of the image function g to (x, y) in the (m, n) plane. The integral is then transformed into discrete summations over the image dimensions m and n, as in Eq. (7):

$g(x, y)=\sum_{T u} \sum_{T v} f(T u, T v) h(x-T u, y-T v)$                     (7)

The number of required multiply-and-add operations equals the number of pixels in h(x, y) times the number of pixels in f(x, y) [17, 18].
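A minimal sketch of the discrete convolution in Eq. (7), assuming a small averaging kernel; `scipy.signal.convolve2d` performs the 180-degree kernel rotation described above internally.

```python
# Discrete 2D convolution (Eq. (7)); the image and kernel here are examples.
import numpy as np
from scipy.signal import convolve2d

f = np.random.rand(64, 64)        # image f(x, y)
h = np.ones((3, 3)) / 9.0         # 3x3 averaging kernel h(x, y)
g = convolve2d(f, h, mode='same', boundary='symm')

# Cost estimate from the text: multiply-adds = pixels in h times pixels in f.
print(h.size * f.size)            # 9 * 4096 = 36864 operations
```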

2.3 Image gradients

The direction and magnitude of the gradient are encoded as a vector: the vector length gives the magnitude, while the direction of flow gives the gradient direction [19]. The partial derivatives in x and y are combined to form the gradient vector, as in Eq. (8):

$\nabla I=\begin{bmatrix} g_x \\ g_y \end{bmatrix}=\begin{bmatrix} \partial I / \partial x \\ \partial I / \partial y \end{bmatrix}$                      (8)

The direction of image gradient is given in Eq. (9):

$\emptyset=\tan ^{-1}\left[\frac{g_y}{g_x}\right]$                        (9)

The magnitude of image gradient is computed as in Eq. (10):

$\sqrt{g_x^2+g_y^2}$                     (10)
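The gradient vector, direction and magnitude of Eqs. (8)-(10) can be sketched as follows; the Sobel operator is one common choice for the partial derivatives and is an assumption here, not a detail fixed by the text.

```python
# Gradient magnitude (Eq. (10)) and direction (Eq. (9)) via Sobel derivatives.
import numpy as np
from scipy.ndimage import sobel

def image_gradient(image):
    gx = sobel(image.astype(float), axis=1)  # partial derivative dI/dx
    gy = sobel(image.astype(float), axis=0)  # partial derivative dI/dy
    magnitude = np.hypot(gx, gy)             # sqrt(gx^2 + gy^2)
    direction = np.arctan2(gy, gx)           # tan^-1(gy / gx)
    return magnitude, direction
```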

2.4 Building the watershed

The initial part of the watershed algorithm uses mathematical morphology operators that describe the flow of the segmentation between pixels. The watershed transformation is a flooding of the rearranged sections of the function f [20, 21]. Consider Zi(f) at level i, the height the flood has reached, and Zi+1(f), the flood performed on the components of Zi(f) in Zi+1(f). Some regions are not reached by the flood and are therefore minima at level i+1. Wi(f) denotes the section at level i, as in Eq. (11) and Eq. (12), with the iterative algorithm as in Eq. (13):

$W_{i+1}(f)=\left[I Z_{Z_{i+1}(f)}\left(W_i(f)\right)\right] \cup m_{i+1}(f)$                      (11)

at i+1,

$m_{i+1}(f)=Z_{i+1}(f) \,/\, R_{Z_{i+1}(f)}\left(Z_i(f)\right)$                   (12)

The iteration is initialized with $W_{-1}(f)=\emptyset$.

$D L(f)=W_N^c(f)$, with $\max (f)=N$                       (13)

The individual basins are generated by the segmentation along the watershed lines: region edges are high watersheds, and low-gradient regions are catchment basins [16, 22]. Pixels belonging to the same catchment basin are homogeneously connected to the basin region by a simple path of pixels.
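A compact marker-based watershed in the spirit of Eqs. (11)-(13) can be sketched with scikit-image, which floods the gradient image from labelled minima; the seed thresholds `low` and `high` are illustrative assumptions, not values from the paper.

```python
# Watershed over the gradient image; thresholds are illustrative choices.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_basins(gray, low=0.3, high=0.7):
    gradient = sobel(gray)              # high watershed lines sit on region edges
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < low] = 1             # assumed dark lesion seed (catchment basin)
    markers[gray > high] = 2            # assumed bright background seed
    return watershed(gradient, markers) # flood basins from the markers
```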

2.5 Region growing

Skin lesion borders are easily detected by the region growing segmentation method. Segmenting the border is not easy in a noisy image [9, 23], but region growing overcomes this with better results by exploiting the homogeneity of regions. Adjacent regions are merged if their common edges are weak; otherwise the boundary between the regions is kept. The edge significance is analyzed [24, 25] as in Eq. (14) and Eq. (15):

$V_{i j}=0$ if $S_{i j}<T 1$                       (14)

$V_{i j}=1\,\, otherwise$                (15)

where, Vij=1 indicates a significant edge, Vij=0 a weak edge, T1 is a preset threshold and Sij is the crack edge value.
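The edge-significance test of Eqs. (14)-(15) can be turned into a small region-growing sketch: a neighbour joins the region while the crack-edge value against the seed stays below T1. This is a generic sketch under that assumption, not the authors' code.

```python
# Region growing with the weak-edge criterion |I(p) - I(seed)| < T1.
import numpy as np
from collections import deque

def region_grow(image, seed, t1):
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    seed_val = float(image[seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x]:
            continue
        grown[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbourhood
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not grown[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) < t1):
                queue.append((ny, nx))   # weak edge: merge into the region
    return grown
```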

2.6 Active contour

The active contour model is also known as the snake model. It acts as a framework for describing the outline of any object, and its advantage is that it works on noisy 2D images [13, 26]. To detect the lesion, the border must be identified; the affected layers have an uneven boundary structure, so a flexible tool is needed to reach all possible edges. The active contour model is a popular tool for framing objects, recognizing shapes and detecting edges [27].

To draw curves or splines with constraints on images that converge to object contours, the active contour model obtains its edge through a point distribution model [28]. The active contour can also be called a mutable snake, organized as a set of m points Sj, where j=0, 1, 2, …, m-1.

The core mutable energy of snake is defined as Ecore and peripheral edge energy is defined as Epheri.

Ecore – Control deformation made to snake.

Epheri – Contour image fitting control.

Econ – Constraint forces.

The snake energy combines these forces on the image, as in Eq. (16), Eq. (17) and Eq. (18):

Snake Energy function $=\mathrm{E}_{\text {pheri }}+\mathrm{E}_{\text {core }}$.                   (16)

$E_{\text {Mutable Snake }} \quad=E_{\text {core }}+E_{\text {img }}+E_{\text {con }}$                       (17)

Internal Energy:

$E_{\text {core }}=E_{\text {cont }}+E_{\text {curv }}$                       (18)

where, Econt governs the continuity of the contour and Ecurv its smoothness.
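A brief snake sketch using scikit-image's `active_contour`; the α, β and γ weights stand in for E_cont, E_curv and the step size, and the circular initialization and weight values are illustrative assumptions.

```python
# Snake model: a closed spline relaxes onto the lesion edge.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def snake_boundary(gray, cx, cy, r):
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([cy + r * np.sin(s), cx + r * np.cos(s)])  # (row, col)
    return active_contour(gaussian(gray, sigma=3.0), init,
                          alpha=0.015,   # continuity weight (E_cont)
                          beta=10.0,     # curvature/smoothness weight (E_curv)
                          gamma=0.001)   # time step of the evolution
```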

2.7 Levelset method

The level set method describes shapes numerically on a Cartesian grid. It allows fast computation without parameterization, improves efficiency, and thus describes dynamic variations of objects [29]. Low-complexity, noise-free images are good for DRM; the method consumes more resources per calculation otherwise, which is not preferable. The level set is a surface whose intersection with the plane gives a contour; during image segmentation, the surface is updated with a force derived from the image [7, 23].
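As a stand-in for level set evolution (not the paper's exact scheme), scikit-image's morphological geodesic active contour evolves an implicit surface whose zero level set is pulled toward image edges; the iteration count and balloon force below are assumptions.

```python
# Level-set style evolution via a morphological geodesic active contour.
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

def levelset_segment(gray, iterations=100):
    gimage = inverse_gaussian_gradient(gray)   # edge-stopping map (low on edges)
    init = disk_level_set(gray.shape)          # initial implicit surface
    return morphological_geodesic_active_contour(
        gimage, num_iter=iterations, init_level_set=init,
        smoothing=1, balloon=1)                # balloon > 0 inflates outward
```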

2.8 Neumann boundary conditions

The Neumann boundary condition (NBC) specifies the normal derivative $\frac{\partial \emptyset}{\partial n}$ on the boundary surface; in the zero NBC this derivative is 0. The zero NBC arises from symmetry, where potentials at mirrored points across a plane are identical [10]: for the plane x=0, symmetry gives $\emptyset$(x,y,z)≡$\emptyset$(-x,y,z) for all points (x, y, z).

The zero Neumann boundary condition, $\frac{\partial \emptyset}{\partial n}=0$, holds at every point of a symmetry surface where the derivative $\frac{\partial \emptyset}{\partial n}$ exists. The derivative need not exist on parts of the symmetry surface carrying Dirichlet boundary conditions, since there the potential gradient may not be continuous [11, 16].
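In numerical level set codes, a zero Neumann condition is commonly enforced by copying interior values onto the array border so that the normal derivative vanishes there; the helper below is a sketch of that standard trick, not code from the paper.

```python
# Enforce a zero Neumann boundary condition on a 2D level set array phi.
import numpy as np

def neumann_boundary(phi):
    g = phi.copy()
    g[0, :], g[-1, :] = g[2, :], g[-3, :]      # top/bottom rows mirror interior
    g[:, 0], g[:, -1] = g[:, 2], g[:, -3]      # left/right columns mirror interior
    g[0, 0], g[0, -1] = g[2, 2], g[2, -3]      # corners
    g[-1, 0], g[-1, -1] = g[-3, 2], g[-3, -3]
    return g                                   # d(phi)/dn ~ 0 on the border
```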

2.9 Dataset

The PH² dataset has been developed for research and benchmarking purposes, in order to facilitate comparative studies on both segmentation and classification algorithms of dermoscopic images. PH² is a dermoscopic image database acquired at the Dermatology Service of Hospital Pedro Hispano, Matosinhos, Portugal.

DermQuest dataset contains the lesion tags. Unlike the diagnosis, which is unique for each image, multiple lesion tags may be associated with a dermatology image. There are a total of 134 lesion tags for the 22,082 dermatology images from DermQuest.

3. Proposed Work

The input RGB image is taken from the publicly available PH2 and DermQuest datasets. The proposed system is divided into the three stages shown in Figure 1. The system starts with pre-processing to remove noise, followed by segmentation, which separates foreground from background; the third stage, post-processing, extracts boundaries and filters unwanted regions. From the segmented binary image, features such as asymmetry and border are extracted. The experiments use the PH2 dataset, which contains 200 dermoscopy images, and the DermQuest dataset, which contains 100 dermoscopy images; a few samples are displayed.

3.1 Algorithm (Region optimal segmentation)

Input: Lesion RGB Image

Output: Optimal Segmented Binary Image

Figure 1. Architecture diagram of region optimal segmentation

1. Segment the RGB Image

$\mathrm{I}(\mathrm{i}, \mathrm{j}) \leftarrow$ RGB to Optimal Segmentation

$\mathrm{I}_{\circ}(\mathrm{i}, \mathrm{j}) \leftarrow$ Segmented Lesion Binary Image

$\mathrm{H}_{\circ}[$ range, thickness, depth $] \leftarrow$ Histogram $\left(\mathrm{I}_{\circ}(\mathrm{i}, \mathrm{j})\right)$

2. Sequestering with Filter

$N(i, j) \leftarrow$ Noise Image

$\mathrm{G}(\mathrm{x}, \mathrm{y})=\mathrm{I}_{\circ}(\mathrm{x}, \mathrm{y})+\mathrm{N}(\mathrm{x}, \mathrm{y})$

$\mathrm{G}(\mathrm{x}, \mathrm{y}) \leftarrow$ Salt and Pepper Noise Image

$\mathrm{G}_{\mathrm{a}}(\mathrm{x}, \mathrm{y})=$ Gaussian Filter $(\mathrm{G}(\mathrm{x}, \mathrm{y}))$

3. Generate Convolution Matrix

$\begin{aligned} & \mathrm{H}_{\mathrm{r}}(\mathrm{x}, \mathrm{y})=\mathrm{H}(\mathrm{x}) * \mathrm{H}(\mathrm{y}) \\ & \mathrm{H}(\mathrm{y}) \leftarrow(\mathrm{H}(\mathrm{x}))^{\mathrm{T}} \\ & \mathrm{G}_{\mathrm{r}}(\mathrm{x}, \mathrm{y})=\operatorname{Sobel}\left(\mathrm{H}_{\mathrm{r}}(\mathrm{x}, \mathrm{y}), \mathrm{G}_{\mathrm{a}}(\mathrm{x}, \mathrm{y})\right)\end{aligned}$

4. Generate Gradient Matrix

$\mathrm{G}_{\mathrm{r}}(\mathrm{x}, \mathrm{y}) \leftarrow$ Input image for Gradient Image

$\mathrm{G}_{\mathrm{ra}}(\mathrm{x}, \mathrm{y}) \leftarrow$ Derive (Ix, Iy)

5. Build Watershed

$\begin{aligned} & W(x, y) \leftarrow \text { Watershed_} \operatorname{Segment}\left(G_{\mathrm{ra}}(\mathrm{x}, \mathrm{y})\right) \\ & M(x, y)=\text { Morphological_Open }(W(\mathrm{x}, \mathrm{y}))\end{aligned}$

6. Set Boundaries

$\mathrm{R}(\mathrm{x}, \mathrm{y}) \leftarrow \operatorname{resize}(\mathrm{M}(\mathrm{x}, \mathrm{y}), 60 * 80)$

$\operatorname{Dirac}(\mathrm{x}, \mathrm{y}) \leftarrow$ Gaussian(Delta(R $(\mathrm{x}, \mathrm{y})))$

Distance_reg= Single Well? Double Well Potential

Outline $(x, y) \leftarrow(\operatorname{Dirac}(x, y)$, distance_reg$)$
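The algorithm above can be sketched end to end as follows. This is a hedged outline only: `gaussian_flatten`, `watershed_basins` and `levelset_segment` are the illustrative helpers from the earlier sections, and the thresholds, structuring element and lesion label are assumptions.

```python
# End-to-end sketch of the region optimal segmentation pipeline.
from skimage import io, color, transform, morphology

def region_optimal_segmentation(path):
    rgb = io.imread(path)                                  # step 1: RGB input
    gray = color.rgb2gray(rgb)
    smooth = gaussian_flatten(gray, sigma=1.0)             # step 2: filter noise
    labels = watershed_basins(smooth)                      # steps 3-5: watershed
    mask = morphology.binary_opening(labels == 1,          # morphological open
                                     morphology.disk(3))
    small = transform.resize(mask.astype(float), (60, 80)) # step 6: 60x80 resize
    return levelset_segment(small) > 0                     # final binary contour
```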

Figure 2. Provided set of input image a) Name b) Test Image c) RGB Histogram d) Ground Truth

Figure 3. Morphological dilation with histogram a) Name b) Test image c) Morphological open d) Histogram

4. Results and Discussion

The input RGB dermoscopy image and the ground truth image are obtained from the public dataset. The ground truth image, shown in Figure 2, is drawn by an expert dermatologist: the lesion parts of the RGB images are extracted through their regions of interest by the dermatologist and given as the ground truth image. The input image may contain plenty of noise such as artifacts and hairs, which should be removed before segmentation. To balance the artifacts and hairs, a noise object is added to equalize them, so the pre-processing stage starts with adding noise to the input image. The object used here is salt and pepper noise, whose black and white values mix with the input image and dilute it. Because the intensity values of the salt and pepper noise and of the artifacts are roughly equal, it becomes very easy to remove the artifacts and hairs.

Gaussian filtering with the best standard deviation is used to remove the noise from the input image; its task is to smooth the image and encapsulate the foreground and background. The original image and the 3x3 kernel convolution image are compared, and the intersecting pixels are taken as the Sobel-segmented image. The convolution image contains a distributed set of pixels on the boundaries; to reorganize them, the gradient magnitude is used to rearrange the pixels in a unique direction and filter out unwanted pixels. The image gradient channels the image flow in a single direction through the magnitude, which makes it very easy to filter the pixel regions.

The collection of pixels in the gradient image is then passed to watershed segmentation, which divides the gradient image into different color clusters; the clusters differentiate the formation of edges for the region of interest, and the image is labelled with color variations to organize the boundaries. The segmented image is fed to a morphological open to obtain the monochrome image shown in Figure 3, since the level set can be initiated only from a morphological monochrome image. The morphological image obtained from watershed segmentation is resized to 60x80 pixels for better results; the resized image is stretched to its maximum extent, which makes it very easy for the level set to identify the boundary. The level set initializes a growing rectangular object at the centre of the morphological image.

From the centre of the ROI, the rectangle radius is increased towards the boundary for edge detection. The contours identify the edge by a threshold value from the ROI; the rectangular region is grown pixel by pixel until the threshold value reaches the boundary shown in Figure 4. The contour stretches the rectangle like a flood of water, checking its threshold at each level, and detects the edges precisely as it reaches the boundary. The contour is designed as a snake that can bend to any extent until it reaches a neighbouring pixel with the same threshold, completing the outer boundary layer. The contour identifies the exact boundary by an average threshold value, because when the image is stretched the boundary-layer threshold is reduced; hence the average threshold is set to extend the rectangle and derive the uneven shape of the boundary. To identify the edge and to enhance the region-based model simultaneously, the double-well potential function is used.

Figure 4. Initial levelset test, final contour, final set function and segmented a) Name b) Initial level-set function c) Final level set contour d) Final level set function e) Segmented image

Figure 5. Test image, segmented image and ground truth image a) Name b) Test image c) Ground truth d) Segmented image

The final stage of gradient magnitude with the Neumann boundary condition derives the contour area shown in Figure 5. During segmentation, the blurriness of the image is eradicated by the zero boundary condition: the Neumann boundary condition drives the current pixels to organize the boundary of the domain and removes blurriness with the help of the gradient magnitude, which puts the direction of the ROI pixels into proper validation. The distance of each pixel is finalized by the gradient magnitude, and the final binary images are obtained as shown in Figure 5.

5. Performance Measure

The lesion binary image of the proposed region optimal segmentation is compared with the expert's ground truth image to show the results. The performance measures describe the exact outcomes of the proposed work in terms of certain parameter values, calculated using True Positives, True Negatives, False Positives and False Negatives; from these values the exact solution is derived. Table 1 describes the outcomes of our proposed system on the PH2 and DermQuest datasets. There is not much difference between the accuracy values of the individual images in the proposed system.

The mathematical features of performance measure are given below:

Jaccard Index $=\mathrm{TP} /(\mathrm{FP}+\mathrm{TP}+\mathrm{FN})$                  (19)

Average Dice Coefficient $=2 * \mathrm{TP} /((\mathrm{FP}+\mathrm{TP})+(\mathrm{TP}+\mathrm{FN}))$                          (20)

Average Sensitivity $=\mathrm{TP} / \mathrm{P}$                          (21)

Average Specificity $=\mathrm{TN} / \mathrm{N}$                       (22)

Average Accuracy $=(\mathrm{TP}+\mathrm{TN}) /(\mathrm{P}+\mathrm{N})$                     (23)

Average Error Rate $=(\mathrm{FP}+\mathrm{FN}) /(\mathrm{TP}+\mathrm{FN}+\mathrm{FP}+\mathrm{TN})$                     (24)
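Given a segmented mask and the expert ground truth as boolean arrays, Eqs. (19)-(24) reduce to a few lines; this sketch assumes both masks share the same shape.

```python
# Pixel-wise performance measures from Eqs. (19)-(24).
import numpy as np

def segmentation_metrics(seg, gt):
    tp = np.sum(seg & gt);  tn = np.sum(~seg & ~gt)
    fp = np.sum(seg & ~gt); fn = np.sum(~seg & gt)
    p, n = tp + fn, tn + fp                    # positives and negatives in GT
    return {
        'jaccard':     tp / (fp + tp + fn),
        'dice':        2 * tp / ((fp + tp) + (tp + fn)),
        'sensitivity': tp / p,
        'specificity': tn / n,
        'accuracy':    (tp + tn) / (p + n),
        'error_rate':  (fp + fn) / (tp + fn + fp + tn),
    }
```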

Table 1. Performance measurement of optimal segmentation

| Performances | PH2 (IMD016) | PH2 (IMD175) | DermQuest (SSM7) | DermQuest (SSM19) |
|---|---|---|---|---|
| Accuracy (%) | 93.91 | 95.67 | 90.47 | 90.83 |
| Dice Coefficient | 93.92 | 95.69 | 90.66 | 90.39 |
| Jaccard Index (%) | 88.53 | 91.74 | 82.92 | 82.46 |
| Sensitivity (%) | 95.16 | 94.28 | 88.29 | 82.61 |
| Specificity (%) | 92.68 | 97.13 | 92.87 | 99.80 |
| Error Rate (%) | 06.08 | 04.32 | 09.52 | 09.16 |

The proposed Region Optimal Segmentation system is applicable to any dermoscopy dataset for extracting a binary contour image. From the obtained results, the proposed system is compared with the accuracy of existing systems to describe its efficiency. The review covered several methods, whose accuracies are compared with our proposed system in Table 2; in comparison to all existing systems, our proposed system has the highest accuracy.

Table 2. Comparison of existing accuracy with the proposed accuracy value

| Methods | Accuracy |
|---|---|
| Our Method | 94% |
| Super pixel Based Fine-Tuning [3] | 92.3% |
| FCN [22] | 91.6% |
| FCN + MRF [22] | 87.7% |
| SegNet (VGG) [2] | 82.1% |
| SegNet (Basic) [2] | 70.4% |

6. Feature Extraction

The input RGB image is converted into a morphological image using watershed segmentation. This morphological image, in grayscale form, is taken as input to the level set and contour segmentation. After a fine segmentation with the Dirac and Neumann boundary conditions, the segmented binary image is obtained.

6.1 Asymmetry feature

The asymmetry feature is used to determine whether the given input image is melanoma or not. It is computed by comparing the radii of all quadrants, where the major axis and minor axis lengths are taken as radii. The radii of the four quadrants are identified and compared with each other to test for symmetry, as shown in Figure 6.

Algorithm (Asymmetry Diffusion)

Input: Lesion RGB Image

Output: Shape Asymmetry Index

1. Segment the RGB Image

$\mathrm{S}(\mathrm{i}, \mathrm{j}) \leftarrow \mathrm{RGB}$ to Segmentation

$\mathrm{I}_{\mathrm{p}}(\mathrm{i}, \mathrm{j}) \leftarrow$ Segmented Lesion Binary Image

2. Aligning the center of lesion

$[$ Row, Column $]=\operatorname{Size}\left(\mathrm{I}_{\mathrm{p}}(\mathrm{i}, \mathrm{j})\right)$

Centroid(x) = Row/2

Centroid(y) = Column/2

Center(x) = (x1+x2)/2

Center(y) = (y1+y2)/2

$\nabla \mathrm{xy}=\{\{$ Center(x) - Centroid(x) $\},\{$ Center(y) - Centroid(y) $\}\}$

Translate Ip(i,j) using $\nabla$xy to IA(x,y)

3. Compute Asymmetric Index

$\mathrm{I}_{\mathrm{A}}(\mathrm{x}, \mathrm{y}) \leftarrow$ Aligned and Centered Segmented Image

[Row, Column]= Size(IA(i,j))

Centroid(x) = Row/2

Centroid(y) = Column/2

a. Symmetry over Upper Limit

Divide IA(i,j) to Upper-Left and Upper-Right

$\begin{aligned} & \mathrm{I}_{\mathrm{A}}(\mathrm{i}, \mathrm{j})=\left[\mathrm{ULeft}\left(\mathrm{I}_{\mathrm{A}}(\mathrm{i}, \mathrm{j})\right), \mathrm{URight}\left(\mathrm{I}_{\mathrm{A}}(\mathrm{i}, \mathrm{j})\right)\right] \\ & \mathrm{I}_{\text {ULeft }}=\text { Gather(InitialPoint, Centroid }\left(\mathrm{X}_{\mathrm{c}}, \mathrm{Y}_{\mathrm{c}}\right) \text { ) } \\ & \mathrm{I}_{\mathrm{URight}}=\mathrm{Gather}\left(\text { Centroid }\left(\mathrm{X}_{\text {max }}, \mathrm{Y}_{\text {max }}\right), \text { MaximumPoint }\right) \\ & \end{aligned}$

b. Symmetry over Lower Limit

Divide IA(i,j) to Lower-Left and Lower-Right

​$\begin{aligned} & \mathrm{I}_{\mathrm{A}}(\mathrm{i}, \mathrm{j})=\left[\mathrm{LLeft}\left(\mathrm{I}_{\mathrm{A}}(\mathrm{i}, \mathrm{j})\right), \operatorname{LRight}\left(\mathrm{I}_{\mathrm{A}}(\mathrm{i}, \mathrm{j})\right)\right] \\ & \mathrm{I}_{\text {LLeft }}=\mathrm{Gather}\left(\text { Centroid }\left(\mathrm{X}_{\text {min }}, \mathrm{Y}_{\text {min }}\right) \text {, Centroid }\left(\mathrm{X}_{\text {mid }}, \mathrm{Y}_{\text {mid }}\right)\right) \\ & \mathrm{I}_{\mathrm{LRight}}=\text { Gather(Centroid(} \mathrm{X}_{\text {mid }} ,\mathrm{Y}_{\text {mid }} \text { ), MaximumPoint) } \\ & \end{aligned}$

4. Calculate Symmetry Index

Upper = (IULeft+ IURight)/2

Lower = (ILLeft+ ILRight)/2

Asymmetry Index:

If Associate (Upper, Lower) = Symmetry? Asymmetry

Figure 6. Architecture diagram for asymmetry diffusion

The segmented binary image cannot be fed directly into feature extraction because it is not centered. To find the perfect center, the centroid of the segmented binary image must be identified so that the image can be translated, as shown in Figure 7.

After a fine translation, the image is rotated about the centroid to center it. From the centroid the image is divided into four quadrants across the upper and lower regions: the upper region contains the upper-right and upper-left quadrants, and the lower region the lower-right and lower-left quadrants. All four quadrants are compared using the major and minor axis lengths of each quadrant image. The average of the upper-left and upper-right regions and the average of the lower-left and lower-right regions are compared to decide whether the regions are symmetric, as shown in Figure 8 and sketched in the code below.
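A hedged sketch of this quadrant comparison: the mask is split at its centroid, each quadrant's major and minor axis lengths are read from `regionprops`, and the upper/lower averages are compared. The tolerance `tol` is an illustrative assumption; the paper decides symmetry by associating the averaged values.

```python
# Quadrant-wise axis comparison for the asymmetry feature.
import numpy as np
from skimage.measure import label, regionprops

def quadrant_axes(mask):
    cy, cx = np.argwhere(mask).mean(axis=0).astype(int)    # lesion centroid
    quads = [mask[:cy, :cx], mask[:cy, cx:],               # UL, UR
             mask[cy:, :cx], mask[cy:, cx:]]               # LL, LR
    axes = []
    for q in quads:                                        # assumes each quadrant
        props = regionprops(label(q.astype(int)))[0]       # contains lesion pixels
        axes.append((props.major_axis_length, props.minor_axis_length))
    return axes

def is_asymmetric(mask, tol=2.0):
    ul, ur, ll, lr = quadrant_axes(mask)
    ma_upper, ma_lower = (ul[0] + ur[0]) / 2, (ll[0] + lr[0]) / 2
    mi_upper, mi_lower = (ul[1] + ur[1]) / 2, (ll[1] + lr[1]) / 2
    return abs(ma_upper - ma_lower) > tol or abs(mi_upper - mi_lower) > tol
```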

Figure 7. Analysis of quadrants radius

Figure 8. Distribution of upper and lower regions

Table 3. Extraction of radius values for all regions

| Region | Major Axis Length (Ma) | Minor Axis Length (Mi) |
|---|---|---|
| Upper Left (UL) | 186.61 | 106.95 |
| Upper Right (UR) | 184.75 | 104.56 |
| Lower Left (LL) | 185.53 | 105.80 |
| Lower Right (LR) | 181.53 | 103.86 |

Ma Upper =(Ma (UL) + Ma (UR))/2=(186.61+184.75)/2=185.68

Ma Lower = (Ma (LL) + Ma (LR))/2=(185.53+181.53)/2=183.53

Mi Upper = (Mi (UL) + Mi (UR))/2=(106.95+104.56)/2=105.76

Mi Lower = (Mi (LL) + Mi (LR))/2=(105.80+103.86)/2=104.83

Table 3 lists all quadrant regions with their obtained radius values. The average upper major axis and the average lower major axis are compared, showing a variation between the upper and lower major-axis regions. After this verification, the minor axes are considered: the average upper minor axis and the average lower minor axis are compared, and they also vary between the upper and lower regions. Hence the sample input image shown here is melanoma.

6.2 Border feature

The border feature is used to determine whether the given input image is melanoma or not. It is computed by measuring the radius from the centroid to all possible edges. The outer layer, or boundary, of the binary image must be traced carefully, because the difference between a mole and melanoma lies only in the border region: any deviation in the border suggests melanoma. Melanoma borders in general are uneven and may have scalloped or notched edges, while ordinary moles have smoother borders, as shown in Figure 9.

Input: Lesion RGB Image

Output: Irregular Border Index

1. Segment the RGB Image

IS(i,j)$\leftarrow$RGB to Segmentation

IS(i,j)=0.2989* Red + 0.5870* Green + 0.1140*Blue

2. Finding the center of Segmented Image

[Row, Column]= Size (IS(i,j))

Centroid(x) = Row/2

Centroid(y) = Column/2

Center(x) = (x1+x2)/2

Center(y) = (y1+y2)/2

$\nabla$xy={ {Center(x) - Centroid(x)},{Center(y) - Centroid(y)} }

Translate IS(i,j) using $\nabla$xy to IB(x,y)

3. Identify Border from Center

c. Border over X-Axis

Line(IB(Center(x)), IB(Center(y)), IB(Right(x)), TruePixels(x))

Line(IB(Center(x)), IB(Center(y)), IB(Left(x)), TruePixels(x))

d. Border over Y-Axis

Line(IB(Center(x)), IB(Center(y)), IB(Top(y)), TruePixels(y))

Line(IB(Center(x)), IB(Center(y)), IB(Bottom(y)), TruePixels(y))

4. Marking Borders and Calculate Symmetry Index

PB = Segmented_Perimeter(IB(i,j))

Drawline(PB,IB(i,j))

Border Asymmetric Index:

X Radius = Radius(IB(Right(x))) + Radius(IB(Left(x)))

Y Radius = Radius(IB(Top(x))) + Radius(IB(Bottom(x)))

Asymmetry Index:

If Associate (X Radius, Y Radius)=Symmetry? Asymmetry.

6.2.1 Architecture diagram for border recognition

Each radius value is compared with the other radius values. The segmented binary image is centered, and the perimeter and area of the image are identified to form the boundaries of the lesion image. To find the exact radii, the image is divided into four directions from the centroid: right along the x axis, left along the x axis, top along the y axis and bottom along the y axis. Each radius is measured from the centroid to the furthest boundary edge. The average of the right and left radii over the x axis and the average of the top and bottom radii over the y axis are compared to decide whether the border is symmetric, as shown in Figure 10 and sketched below.
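This border check can be sketched as follows: radii are measured from the centroid to the last lesion pixel along each axis direction, and the averaged x and y radii are then compared. The symmetry tolerance is an illustrative assumption.

```python
# Radii from the centroid to the boundary along the four axis directions.
import numpy as np

def _last_true(line):
    idx = np.flatnonzero(line)
    return int(idx.max()) if idx.size else 0   # distance to the furthest pixel

def border_radii(mask):
    cy, cx = np.argwhere(mask).mean(axis=0).astype(int)
    right  = _last_true(mask[cy, cx:])
    left   = _last_true(mask[cy, :cx][::-1])
    top    = _last_true(mask[:cy, cx][::-1])
    bottom = _last_true(mask[cy:, cx])
    return (right + left) / 2.0, (top + bottom) / 2.0  # averaged x and y radii

def border_is_symmetric(mask, tol=5.0):
    x_radius, y_radius = border_radii(mask)
    return abs(x_radius - y_radius) < tol
```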

Figure 9. Architecture diagram for border recognition

Figure 10. Radius of possible edges to extract border

7. Conclusions

The input RGB image is purified of noise by a Gaussian filter and segmented with watershed segmentation, after which the segmented image is converted by morphological dilation. With the help of the Neumann boundary condition, the borders are marked from the centre of the ROI. In earlier methodology, region growing does not cover the entire affected region; in the proposed methodology, the Dirac and Neumann methods are introduced into level set region growing to fix the borders of the affected region based on color complexity, and a 3D level set structure is constructed to analyze the borders in three dimensions. The accuracy, sensitivity and specificity quantify how well the segmented image matches the expert's ground truth image. An average accuracy of 94% and average specificity of 97% are obtained on the PH2 dataset, and an average accuracy of 90% and average specificity of 94.88% on the DermQuest dataset. The automatic segmentation with level set will be highly useful for dermatologists in identifying the lesion area, and features such as asymmetry and border are extracted to decide whether the input dermoscopy image is melanoma or not.

Acknowledgment

Data used in preparation of this article were obtained from the PH2 database. (https://www.fc.up.pt/addi/ph2%20database.html).

DermIS database (https://www.dermis.net/dermisroot/en/home/index.htm). A well-known benchmark dataset to study the Skin Melanoma and its related disease.

References

[1] Prabhu Chakkaravarthy, A., Chandrasekar, A. (2019). An automatic threshold segmentation and mining optimum credential features by using HSV model. 3D Research, 10(2): 1-17. https://doi.org/10.1007/s13319-019-0229-8

[2] Bozorgtabar, B., Sedai, S., Roy, P.K., Garnavi, R. (2017). Skin lesion segmentation using deep convolution networks guided by local unsupervised learning. IBM Journal of Research and Development, 61(4/5): 6-1. https://doi.org/10.1147/JRD.2017.2708283

[3] Adjed, F., Safdar Gardezi, S.J., Ababsa, F., Faye, I., Chandra Dass, S. (2018). Fusion of structural and textural features for melanoma recognition. IET Computer Vision, 12(2): 185-195. https://doi.org/10.1049/iet-cvi.2017.0193

[4] Barata, C., Ruela, M., Francisco, M., Mendonça, T., Marques, J.S. (2013). Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Systems Journal, 8(3): 965-979. https://doi.org/10.1109/JSYST.2013.2271540

[5] Codella, N.C., Nguyen, Q.B., Pankanti, S., Gutman, D.A., Helba, B., Halpern, A.C., Smith, J.R. (2017). Deep learning ensembles for melanoma recognition in dermoscopy images. IBM Journal of Research and Development, 61(4/5): 5-1. https://doi.org/10.1147/JRD.2017.2708299

[6] Demyanov, S., Chakravorty, R., Abedini, M., Halpern, A., Garnavi, R. (2016). Classification of dermoscopy patterns using deep convolutional neural networks. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 364-368. https://doi.org/10.1109/ISBI.2016.7493284

[7] Glaister, J., Wong, A., Clausi, D.A. (2014). Segmentation of skin lesions from digital images using joint statistical texture distinctiveness. IEEE Transactions on Biomedical Engineering, 61(4): 1220-1230. https://doi.org/10.1109/TBME.2013.2297622

[8] Kharazmi, P., AlJasser, M.I., Lui, H., Wang, Z.J., Lee, T.K. (2016). Automated detection and segmentation of vascular structures of skin lesions seen in Dermoscopy, with an application to basal cell carcinoma classification. IEEE Journal of Biomedical and Health Informatics, 21(6): 1675-1684. https://doi.org/10.1109/JBHI.2016.2637342

[9] Kodeeswari, C., Sunitha, S., Pavithra, M., Santhia, K.R. (2014). Automatic segmentation of scaling in 2-D psoriasis skin images using SVM and MRF. International Journal of Advances in Science Engineering and Technology, 2(2): 31-35.

[10] Chatterjee, S., Dey, D., Munshi, S. (2018). Optimal selection of features using wavelet fractal descriptors and automatic correlation bias reduction for classifying skin lesions. Biomedical Signal Processing and Control, 40: 252-262. https://doi.org/10.1016/j.bspc.2017.09.028

[11] Oliveira, R.B., Marranghello, N., Pereira, A.S., Tavares, J.M.R. (2016). A computational approach for detecting pigmented skin lesions in macroscopic images. Expert Systems with Applications, 61: 53-63. https://doi.org/10.1016/j.eswa.2016.05.017

[12] Oliveira, R.B., Mercedes Filho, E., Ma, Z., Papa, J.P., Pereira, A.S., Tavares, J.M.R. (2016). Computational methods for the image segmentation of pigmented skin lesions: A review. Computer Methods and Programs in Biomedicine, 131: 127-141. https://doi.org/10.1016/j.cmpb.2016.03.032

[13] Pennisi, A., Bloisi, D.D., Nardi, D., Giampetruzzi, A.R., Mondino, C., Facchiano, A. (2016). Skin lesion image segmentation using Delaunay Triangulation for melanoma detection. Computerized Medical Imaging and Graphics, 52: 89-103. https://doi.org/10.1016/j.compmedimag.2016.05.002

[14] Flores, E., Scharcanski, J. (2016). Segmentation of melanocytic skin lesions using feature learning and dictionaries. Expert Systems with Applications, 56: 300-309. https://doi.org/10.1016/j.eswa.2016.02.044

[15] Cao, G., Ruan, S., Peng, Y., Huang, S., Kwok, N. (2018). Large-complex-surface defect detection by hybrid gradient threshold segmentation and image registration. IEEE Access, 6: 36235-36246. https://doi.org/10.1109/ACCESS.2018.2842028

[16] Gaetano, R., Masi, G., Poggi, G., Verdoliva, L., Scarpa, G. (2014). Marker-controlled watershed-based segmentation of multiresolution remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 53(6): 2987-3004. https://doi.org/10.1109/TGRS.2014.2367129

[17] Jalba, A.C., Westenberg, M.A., Roerdink, J.B. (2015). Interactive segmentation and visualization of DTI data using a hierarchical watershed representation. IEEE Transactions on Image Processing, 24(3): 1025-1035. https://doi.org/10.1109/TIP.2015.2390139

[18] Hodneland, E., Tai, X.C., Kalisch, H. (2015). PDE based algorithms for smooth watersheds. IEEE Transactions on Medical Imaging, 35(4): 957-966. https://doi.org/10.1109/TMI.2015.2503328

[19] Nou-Shene, T., Pudi, V., Sridharan, K., Thomas, V., Arthi, J. (2015). Very large-scale integration architecture for video stabilisation and implementation on a field programmable gate array-based autonomous vehicle. IET Computer Vision, 9(4): 559-569. https://doi.org/10.1049/iet-cvi.2014.0120

[20] Cavalcanti, P.G., Scharcanski, J., Lopes, C.B. (2010). Shading attenuation in human skin color images. In International Symposium on Visual Computing, pp. 190-198. https://doi.org/10.1007/978-3-642-17289-2_19

[21] Sghaier, M.O., Lepage, R. (2015). Road extraction from very high resolution remote sensing optical images based on texture analysis and beamlet transform. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(5): 1946-1958. https://doi.org/10.1109/JSTARS.2015.2449296

[22] Guang, H., Ji, L., Shi, Y. (2018). Focal vibration stretches muscle fibers by producing muscle waves. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4): 839-846. https://doi.org/10.1109/TNSRE.2018.2816953

[23] Lee, J., Tang, H., Park, J. (2016). Energy efficient canny edge detector for advanced mobile vision applications. IEEE Transactions on Circuits and Systems for Video Technology, 28(4): 1037-1046. https://doi.org/10.1109/TCSVT.2016.2640038

[24] Chang, L., Li, K. (2017). Unified form for the robust Gaussian information filtering based on M-estimate. IEEE Signal Processing Letters, 24(4): 412-416. https://doi.org/10.1109/LSP.2017.2669238

[25] Tang, H., Liu, Y., Wang, H. (2018). Constraint gaussian filter with virtual measurement for on-line camera-odometry calibration. IEEE Transactions on Robotics, 34(3): 630-644. https://doi.org/10.1109/TRO.2018.2805312

[26] Vitetta, G.M., Sirignano, E., Montorsi, F. (2018). Particle smoothing for conditionally linear Gaussian models as message passing over factor graphs. IEEE Transactions on Signal Processing, 66(14): 3633-3648. https://doi.org/10.1109/TSP.2018.2835379

[27] Chow, C.W., Chen, C.Y., Chen, S.H. (2015). Enhancement of signal performance in LED visible light communications using mobile phone camera. IEEE Photonics Journal, 7(5): 1-7. https://doi.org/10.1109/JPHOT.2015.2476757

[28] Rodrigues, M.B., Da Nobrega, R.V.M., Alves, S.S.A., Rebouças Filho, P.P., Duarte, J.B.F., Sangaiah, A.K., De Albuquerque, V.H.C. (2018). Health of things algorithms for malignancy level classification of lung nodules. IEEE Access, 6: 18592-18601. https://doi.org/10.1109/ACCESS.2018.2817614

[29] Cavalcanti, P.G., Scharcanski, J. (2011). Automated prescreening of pigmented skin lesions using standard cameras. Computerized Medical Imaging and Graphics, 35(6): 481-491. https://doi.org/10.1016/j.compmedimag.2011.02.007