A Multi Range Morphological Model on Dermoscopy Images with Edge Based Segmentation for Image Quality Enhancement for Skin Lesion Classification

Deepthi Rapeti*, Vivekananda D. Reddy

Department of Computer Science and Engineering, SVUCE, Sri Venkateswara University, Tirupati 517502, Andhra Pradesh, India

Corresponding Author Email: deepthirapeti3@gmail.com

Pages: 331-339 | DOI: https://doi.org/10.18280/ria.370211

Received: 12 January 2023 | Revised: 3 February 2023 | Accepted: 8 February 2023 | Available online: 30 April 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Skin cancer is the most frequent form of cancer worldwide. Compared with unaided visual inspection, dermoscopy image processing improves diagnostic accuracy for detecting malignant melanoma and other pigmented skin lesions, and computer-based methods that assist diagnosis are of great interest to medical professionals. Considerable research has therefore targeted the detection of the deadliest skin diseases, above all melanoma. Melanoma, a skin cancer that originates in the pigment-producing cells known as melanocytes, is the most severe form of the disease; its evolution may involve changes in size, shape, and colour, as well as irritation and even skin disintegration. A novel strategy is therefore needed to enhance the contrast and fine details of dermoscopy images. This study proposes a multi-range morphological approach to reduce the detrimental impact of low contrast on image quality; the rise in the number of skin cancer cases prompted the creation of such techniques. Characterising skin lesions requires precise segmentation, whose output allows a classifier or a dermatologist to label a lesion more accurately and more easily. Segmentation issues arise when images are obtained in an unsystematic and uncontrolled manner. Automatically delineating the lesion border in dermoscopy skin lesion images supports the computer-aided diagnosis of melanoma, yet segmenting skin lesions remains challenging because of the wide range of imaging conditions in dermoscopy. The proposed model introduces a Multi Range Morphological Model on Dermoscopy Images with Edge based Segmentation (MRMM-DI-EbS) for image quality enhancement for lesion classification. The proposed model is compared with the traditional model, and the results show that the proposed model achieves higher performance in image quality enhancement.

Keywords: 

dermoscopy images, skin lesions, morphology operations, image quality, noise, lesion classification, edge-based segmentation

1. Introduction

Skin cancers are the most prevalent type of human cancer. Melanoma, squamous cell carcinoma, and basal cell carcinoma are the most frequent malignant skin lesions. In Asian countries, over 9,500 people were expected to die of melanoma in 2021, out of an estimated 102,315 cases [1]. Lives are saved when melanoma is detected early, yet it is sometimes hard to tell the difference between a benign lesion and a skin cancer. Skin cancer specialists use handheld dermoscopy to aid the visual evaluation of skin lesions on their patients. Digital macro (macroscopic) and micro (dermoscopic) images may both be considered for the prediction of skin cancer [2]. Dermoscopy is a method of examining the skin that uses polarised light or immersion fluid to minimise surface reflection. It has been widely used for the past 20 years and has greatly increased the diagnostic success rate compared to simple visual examination [3]. To distinguish common benign melanocytic naevi from melanoma, the ABCD criteria can be used for screening purposes [4]. ABCD-based images are shown in Figure 1.

Melanoma can be screened for using the ABCD rule, which takes into account a mole's asymmetry, border, colour, and diameter.

The asymmetry (A) property compares the two halves of a skin lesion to see whether they match in colour, form, and margins. Melanoma typically presents with an asymmetrical appearance.

The border property is denoted by the letter B. It determines whether or not a skin lesion has sharp, clearly delineated borders. Melanoma often has irregular, fuzzy, and jagged borders.

The third letter, C, stands for colour. A melanoma's colour can range from light tan to dark brown to red to black, depending on its location.

The fourth property, D, is diameter, a rough gauge of the lesion's size. Typically, the diameter of a melanoma is larger than 6 mm. A minimal sketch of how these four measurements can be quantified follows.
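To make the four criteria concrete, the following Python sketch shows one plausible way to quantify them from a binary lesion mask and an RGB image. The helper logic and the parameter `mm_per_px` are illustrative assumptions, not the implementation used in this paper.

```python
import numpy as np
import cv2

def abcd_features(mask, rgb, mm_per_px):
    """Rough ABCD measurements from a binary lesion mask (illustrative sketch)."""
    m = mask > 0  # assumed non-empty lesion mask

    # A: asymmetry -- mismatch between the mask and its 180-degree rotation
    asymmetry = 1.0 - np.logical_and(m, m[::-1, ::-1]).sum() / max(m.sum(), 1)

    # B: border irregularity -- compactness (1.0 for a perfect circle)
    contours, _ = cv2.findContours(m.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    compactness = cv2.arcLength(c, True) ** 2 / (4 * np.pi * max(cv2.contourArea(c), 1))

    # C: colour variation -- spread of the RGB values inside the lesion
    colour_var = rgb[m].std(axis=0).mean()

    # D: diameter -- minimum enclosing circle, converted to millimetres
    _, radius = cv2.minEnclosingCircle(c)
    diameter_mm = 2 * radius * mm_per_px

    return asymmetry, compactness, colour_var, diameter_mm
```

Under this reading, a lesion with asymmetry near 1, compactness well above 1, a large colour spread, and a diameter above 6 mm would score highly on all four ABCD warning signs.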

To successfully apply the clinical ABCD rule, it is preferable to have an end-to-end computerised system that can accurately segment skin lesions of any shape. Important performance indicators for medical image segmentation algorithms [5] include the Jaccard Similarity Index (JSI), as well as specificity and sensitivity; these measures are crucial for the success of computerised approaches. The bulk of current state-of-the-art dermoscopy-based CAD systems are multi-stage [6], involving image pre-processing, segmentation, feature extraction, and classification. Using manually constructed feature descriptors, it has been found that benign naevi are typically small and somewhat spherical in form. Previous efforts have also made use of asymmetry, colour, and texture features [7]. The dermoscopic appearance of skin lesions is often described using pattern analysis, and many computer methods have been developed to categorise lesion types using feature descriptors and pattern analysis based on image processing and traditional machine learning methodologies [8].

Figure 1. ABCD rules for skin lesion diagnosis

The dermatoscope, a handheld instrument used in the dermoscopy examination, allows a close study of a skin lesion. The most common application of dermoscopy is helping to detect skin cancer. The procedure causes no discomfort or harm, and it is completely non-invasive. A strong light source and a high-quality magnifying lens are necessities for dermoscopy, paving the way for the study of cutaneous architecture and patterning. A wide variety of portable, lightweight, battery-operated devices is available. Because it can be difficult to distinguish cancer from noncancerous lesions such as seborrheic keratoses, hemangiomas, atypical moles, and benign lentigines, dermoscopy is performed to aid the diagnosis. Early melanoma is notoriously challenging to diagnose because it often looks like a benign mole, or nevus.

Skin cancer is among the most common forms of the disease. Basal cell carcinoma, melanoma, intraepithelial carcinoma, squamous cell carcinoma, etc. are all forms of skin cancer. Human skin is made up of three layers: the dermis, the epidermis, and the hypodermis [9]. Melanocytes, which are found in the epidermis, are capable of producing melanin at a very high pace in certain situations. For instance, melanin formation is triggered by prolonged exposure to the sun's powerful UV light. Melanoma is an aggressive form of skin cancer caused by the uncontrolled development of melanocytes [10]. Statistics show that melanoma has the highest fatality rate of all skin malignancies, at 2.45%. In order to effectively treat melanoma, an early diagnosis is crucial. Early detection of melanoma greatly improves prognosis; the relative survival rate after five years is 92%. However, the greatest difficulty in recognising melanoma is the apparent similarity between benign and malignant skin lesions. This means that even a highly skilled specialist may have trouble making a definitive diagnosis of melanoma [11].

Manually classifying lesions is a laborious technique fraught with difficulties. Therefore, numerous imaging techniques, such as dermoscopy, have been adopted over time [12]. Dermoscopy, in which a lighted magnifying device and immersion fluid are used to examine the skin, is a noninvasive imaging procedure [13]. Besides being one of the most used imaging methods in dermatology, its use has been linked to a 50 percent improvement in the success rate of diagnosing malignant patients. However, because it relies on the dermatologist's experience, relying solely on human eyesight for the diagnosis of melanoma in dermoscopic images may be erroneous, subjective, or irreproducible [14]. A less-experienced professional can diagnose melanoma from dermoscopy images with only 75%-84% accuracy.

Millions of individuals throughout the world suffer from skin lesions, a common condition with potentially severe effects [15]. Diagnoses of skin diseases are difficult to reproduce, and lesions can only be correctly identified by dermatologists with extensive clinical expertise. Misdiagnosis by a less-experienced dermatologist is common and can delay or prevent effective therapy. As a result, a dependable and speedy way to aid data processing and dermatologists' judgement is required [16].

Inspired by the structure and function of the human nervous system, recent developments in deep learning have had a far-reaching impact across a wide range of scientific and industrial disciplines. Many experts have begun using deep learning to handle biomedical data because of its quick development and ability to obtain more exact and accurate information [17]. Deep learning has had great success in a number of medical image processing challenges, owing in large part to the exponential growth in the quantity of available biomedical data such as images, medical records, and omics. Deep learning is therefore anticipated to affect the roles of image experts in biomedical diagnosis as a result of its speed and accuracy in making diagnoses. This study describes the features of skin lesions, provides an overview of imaging approaches, summarises recent advances in deep learning for the classification of skin diseases, and reflects on the potential and pitfalls of automatic diagnosis.

Expert diagnosticians need computer-aided diagnosis (CAD) technologies to help them overcome the challenges of diagnosing melanoma. Preprocessing, segmentation, feature extraction, and classification are the four stages of CAD systems for determining whether or not a lesion is melanoma, and lesion segmentation is a crucial part of accurate melanoma detection. The considerable variation among skin lesions in dermoscopic images in colour, texture, position, and size makes this segmentation stage challenging. More importantly, a lack of contrast can make it impossible to distinguish a lesion from adjacent tissue. Lesion segmentation is complicated by the lesions themselves, and artefacts such as air bubbles, hair, dark frames, ruler markings, blood vessels, and colour illumination make things worse. Examples of lesions seen during dermoscopy are presented in Figure 2.

A number of approaches have been suggested for the segmentation of skin lesions. Convolutional neural networks (CNNs), a deep learning technique, have seen recent success in segmenting skin lesions. However, CNNs typically operate on reduced-quality images to cut down on computation and network parameters, so some image details may be lost. The goal of this study is to create a technique for segmenting skin lesions in dermoscopic images that does not depend on the image quality, by first enhancing the image quality using morphological operations [18].

Figure 2. Lesions in Dermoscopy Images

Images obtained using dermoscopy of the skin often suffer from a lack of contrast due to the wide range of available lighting. Dermoscopy images of melanoma frequently show poor contrast, making the disease difficult to distinguish from healthy skin; the lack of contrast also obscures some visual details [19]. Therefore, it is important to devise a method that improves the contrast and details of dermoscopic images. To mitigate the effects of low contrast and improve image quality, a multi-range morphological strategy is proposed in this work. The image is processed by adding the regional bright features and subtracting the local dark features [20], yielding images with improved contrast and clarity. In recent decades, novel methods for contrast enhancement based on morphological operations have emerged [21]. Owing to its capacity to extract dark and bright information from images using structuring elements of various sizes and shapes, the top-hat transformation has received much attention [22]. When preprocessing images containing melanocytic lesions for a feature extraction study, the top-hat transformation was used to adjust the lighting. A sketch of this add-bright/subtract-dark enhancement follows.
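A common way to realise this add-bright/subtract-dark scheme is the top-hat/bottom-hat combination. The sketch below assumes OpenCV, a grayscale input, a hypothetical file name, and an arbitrarily chosen 15x15 elliptical structuring element; it is an illustration of the idea, not the paper's exact procedure.

```python
import cv2

gray = cv2.imread("dermoscopy.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, se)      # regional bright features
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se)  # local dark features

# Enhanced image: original + bright details - dark details (saturating arithmetic)
enhanced = cv2.subtract(cv2.add(gray, tophat), blackhat)
```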

Benign cancer is less systemic than malignant cancer, but it can still have local effects if it presses on nearby nerves or blood vessels [23]. Benign cancers progress more slowly than malignant ones, and negative health consequences may result from ineffective treatment of certain tumours [24]. This means that finding skin cancer early is the most important aspect of a correct diagnosis. A biopsy is an invasive procedure that can be quite distressing for cancer patients. Dermoscopy is an imaging procedure that uses magnification and specialised illumination equipment to examine the skin in great detail in order to determine whether or not a biopsy is necessary [25]. The detection and classification of skin lesion types in dermoscopy images presents the primary challenge. Segmentation, feature extraction, and classification are only a few of the stages needed for skin lesion detection [26]. For the purpose of classifying skin lesions, this research proposes an automatic segmentation technique [27]. The lesion regions can be identified and located automatically, aiding dermatologists in the detection of skin cancer. The proposed model introduces a Multi Range Morphological Model on Dermoscopy Images with Edge based Segmentation (MRMM-DI-EbS) for image quality enhancement for lesion classification.

2. Literature Review

Riaz et al. [2] offered a unique approach to image segmentation that makes use of the object and background probability distributions. A fresh region-based term is added to the standard variational level-sets formulation, providing a complementary functional that can lead to accurate image segmentation. The core idea rests on the observation that in most medical imaging circumstances, objects can be classified according to universal features like colour, texture, and shape. As a result, the distributions of the background and the object in an image can be described as Gaussian mixtures. During curve evolution, a new term is added to the segmentation framework that measures the difference between the GMMs representing the object and the background. The proposed approach has been applied to the segmentation of images acquired using MRI, dermoscopy, and chromoendoscopy.

Diagnosing skin cancer relies heavily on accurate identification and classification of skin lesions. For difficult skin lesions with complicated properties such as fuzzy boundaries, the presence of artefacts, low contrast with the background, and limited training datasets, previous deep learning-based computer-aided diagnostic (CAD) algorithms still perform badly. They also require careful adjustment of millions of parameters, which can cause over-fitting, bad generalisation, and excessive computational load. To automate the identification of skin cancer, Adegun et al. [3] suggested a new architecture that can segment and classify skin lesions. First, an encoder-decoder fully convolutional network (FCN) is used to learn the complicated and inhomogeneous features of skin lesions, with the encoding stage capturing the coarse appearance and the decoding stage learning the lesion boundary details. The FCN is built so that the sub-networks are linked by a number of skip routes that use both long skip and short-cut connections, as opposed to the long skip connections alone used in the conventional FCN, to facilitate efficient residual learning and training.

To evaluate the morphology of the vessels under examination and to identify potential atherosclerotic lesions, automatic lumen contour segmentation plays a crucial role in diagnostic imaging and diagnosis. New approaches that work well with intravascular optical coherence tomography (IVOCT) images have emerged, since quantitative measures can only be retrieved through segmentation. Tang et al. [5] presented a novel multi-scale-feature deep neural network (N-Net) for automatic lumen segmentation in IVOCT images. With a multi-scale input layer, an N-type convolution network layer, and a cross-entropy loss, the N-Net is a powerful machine learning tool. The N-Net's multi-scale input layer is meant to prevent the data loss associated with pooling in conventional U-Nets, while also enriching the granularity of the data in each layer. An N-type convolution network is proposed as the backbone of the suggested deep architecture. Finally, the loss function measures how closely the method's results match those obtained through human labelling. Data augmentation is also implemented to extend the size of the training set.

One of the difficulties of robot-assisted automatic fruit sorting is the timely and accurate detection of grasping points based on machine vision. An enhanced image segmentation technique that relies on adaptive morphology is suggested by Zhang and Gao [6] as an answer to the low accuracy of existing morphology algorithms in classifying branch candidates for randomly positioned fruit clusters. The adaptive convolution kernel is built with the help of edge distances, defined as the minimal distance between edge points and disconnected components in the minimum domain. In addition, the time required to calculate edge distances is minimised through a run-analysis technique that uses multiple, unordered labels. A better region classification approach based on the principal components of various features is proposed to address the challenge of describing and classifying unconstrained stalks using existing features. Descriptors of the object region's characteristics are constructed, and principal components of multiple features are extracted according to their variance contributions, improving the speed and accuracy of stalk extraction. Experiments with grape clusters on a parallel robot sorting system confirm the effectiveness of the suggested grasping-point identification approach for randomly positioned fruit clusters based on enhanced morphological image segmentation and region classification.

The size of the bubbles in a flotation cell is a key indicator that may be used to assess the quality of the flotation process and the conditions under which it is produced. Foam segmentation is a challenging problem that cannot be solved by current segmentation methods because of the poor contrast and noise in bubble images. An enhanced watershed algorithm is proposed by Peng et al. [9], one that makes use of optimal labelling in addition to edge constraints. The mixed foreground tag is the result of fusing the content of three separate tags obtained using three different methods. The edge operator is used to extract the bubble boundaries, and the bounding prior condition is applied as a constraint to correct the delineation line, reducing the overall offset of the segmented line. Finally, combining foreground markers with edge constraints yields the best possible segmentation line.

Duan et al. [10] offered a new method for classifying hyperspectral images (HSIs) that relies on the fusion of various edge-preserving operations (EPOs). First, the dimensionally reduced HSI undergoes two types of EPOs, local edge-preserving filtering and global edge-preserving smoothing, to generate the edge-preserving features. These features are then enhanced with a superpixel segmentation method that takes into account the intra- and inter-spectrum properties of superpixels. Finally, a support vector machine (SVM) is fed the combined kernel of spectral and edge-preserving features, using a majority-vote fusion procedure.

Classifying high-resolution remotely sensed images using an object-oriented convolutional neural network (CNN) has shown promising results. By combining the benefits of image segmentation with those of a deep network, better accuracy and better edge preservation can be achieved. Work remains to address the discrepancy between the predicted and actual boundaries of ground objects, and a CNN model tailored to learning superior feature representations is also crucial to enhancing classification precision. Li et al. [14] suggested a modified version of simple linear iterative clustering (SLIC) that produces sharper segmentation boundaries. By incorporating more characteristics, this technique boosts boundary performance beyond what was previously possible, removing a key bottleneck in SLIC's otherwise promising approach. Additionally, a novel CNN model was created that can fully leverage spectral data to extract first-order and second-order fused features for classification, yielding improved feature representations. Four real remote sensing images were used to validate this strategy, and the proposed method outperforms competing approaches in terms of both edge quality and accuracy rate.

When it comes to processing images from remote sensors, semantic segmentation is a crucial step, and numerous implementations at the pixel or object level have been proposed. Pixel-based methods are typically capable of extracting finer details and edges, whereas object-based methods are able to preserve the internal consistency of each land-use/land-cover object. The Markov random field (MRF) model is a statistical method that brings together the benefits of both the pixel and object levels of granularity. However, a challenge remains for existing MRF-based techniques: how to guarantee that benefits from varying granularities supplement one another rather than cancel one another out. This issue is addressed by the novel multigranularity edge-preservation optimization designed by Zheng et al. [15]. The proposed method starts by down-sampling the image from the object level to the pixel level; the MRF paradigm is then defined at each level of detail. Since the method defines an edge set for every granularity, it can continually fix edges while preserving intra-class consistency during down-sampling.

3. Proposed Methodology

Due to the high success rate of surgical removal of tumour cells in the initial stage of treatment, early detection is a critical element in melanoma prognosis. Dermoscopy images are often examined by a dermatologist for melanoma diagnosis, and this examination plays a significant role in reaching a proper diagnosis. While time-consuming and tedious, it also lacks precision, is difficult to reproduce, and calls for a high level of expertise due to its heavy reliance on individual judgement [28]. It is a recognised fact that inexperienced dermatologists have a reduced dermoscopic diagnosis accuracy.

Important properties, such as texture and border, are affected by every hair pixel, making segmentation of lesion images extremely challenging; the presence of hairs complicates even the feature extraction procedure. The adaptive Canny edge detector and thinning by a morphological operator are used to separate out the various hair types and colours [29]. To remove hair, segmentation is performed first; the lines are then segmented and refined properly. In this work, a refined strategy that uses threshold and morphological procedures to remove this kind of artifact is proposed; a sketch of one such procedure follows.
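As an illustration of threshold-plus-morphology hair removal, the DullRazor-style sketch below detects thin dark hairs with a bottom-hat (blackhat) transform, thresholds them into a mask, and inpaints the masked pixels. The kernel size and threshold are illustrative choices, not values taken from this paper.

```python
import cv2

def remove_hair(bgr):
    """Bottom-hat -> threshold -> inpaint hair removal (illustrative sketch)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # thin dark hairs
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(bgr, hair_mask, 3, cv2.INPAINT_TELEA)  # fill hair pixels
```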

Skin lesion segmentation uses an automated learning method with two stages: pre-processing and segmentation. Segmentation is defined as the technique of detecting the skin lesion boundary within the dermoscopy image [30]. This region of the dermoscopy image is particularly significant: diagnosis requires a close inspection of the precisely delineated lesion area, which is then used to classify the various skin cancers and as input to feature extraction. Segmentation can be accomplished with a wide variety of approaches, including edge-based, region-based, and boundary-based methods; the edge-based method is considered in this research.

The global population of people with cancer has skyrocketed in recent decades due to a number of factors, including longer life expectancy and more time spent outdoors in the sun. Cancers of the skin can be either malignant or benign; what distinguishes the former is their capacity for metastasis, the spread to other regions of the body. Malignant cancer can metastasise, spreading to and killing healthy tissue in its immediate surroundings. While benign tumours are less likely to spread throughout the body, they can nevertheless have local effects by pressing on nearby nerves or blood vessels. Most benign tumours progress more slowly than their malignant counterparts, and if certain malignancies are not adequately treated, they may harm the human body. As a result, spotting skin cancer early is crucial for making the right diagnosis. Cancer patients often feel anxious and fearful during biopsies because of the intrusive nature of the process. To decide whether a biopsy is necessary, the top layers of skin are imaged using magnification and specialised illumination equipment in a process known as dermoscopy. The detection and categorisation of skin abnormalities in dermoscopy images is a serious challenge.

Mathematical morphology on dermoscopy images is one strategy for tackling the image enhancement challenge. The process selects a new grey level between two patterns for each pixel in the processed image, based on some proximity criterion. Despite extensive research into morphological contrast, there are currently no methods that can both normalise and enhance the contrast in low-light images. By contrast, nonlinear functions like logarithms and powers are frequently used in image processing to brighten shadowy areas, and the homomorphic filter is a similar technique operating in the frequency domain. Unfortunately, histogram equalisation often fails to preserve fine detail because it applies the image's global statistics in a way that is not meaningful at the local level.

The proposed method for improving contrast finds the optimal solution to an optimisation problem that maximises the average local contrast of an image. Small images employed as building blocks, the structuring elements, are at the heart of mathematical morphology. The structuring element functions like a roving probe, probing each pixel in turn. However, complex images may not be processed correctly, since the structuring element moves in a fixed pattern throughout the image; as a result, an artefact in the shape of the structuring element may be produced near an object's periphery. This limitation is especially problematic because the objects in biological images have fine structural details.

To perform edge-based segmentation, multiple edge detection operators are used to locate and highlight edges within an image. These edges mark where the image changes in grayscale, colour, texture, and so on; the shade of grey may shift as we move from one region to the next. Morphological operations take an input image and produce a new image of the same dimensions, assigning each output pixel a value determined by comparing it with its neighbours in the original image; a minimal sketch of such neighbourhood operations follows.
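The sketch below illustrates these neighbourhood-comparison operations: erosion takes the minimum over each pixel's neighbourhood, dilation the maximum, and their difference (the morphological gradient) highlights edges. The input file name and structuring-element size are assumptions for illustration.

```python
import cv2
import numpy as np

img = cv2.imread("lesion.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
se = np.ones((5, 5), np.uint8)                        # 5x5 square structuring element

eroded = cv2.erode(img, se)     # output pixel = minimum of its 5x5 neighbourhood
dilated = cv2.dilate(img, se)   # output pixel = maximum of its 5x5 neighbourhood
gradient = cv2.subtract(dilated, eroded)  # morphological gradient: edge strength
```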

Skin lesion identification requires a small number of processes, including segmentation, feature extraction, and a classification procedure. In this research, an automatic segmentation technique as a first step toward classifying skin lesions is proposed. This automated technique aids dermatologists in the detection of skin cancer by identifying and pinpointing the places where lesions have occurred on the skin. Segmenting skin lesions is a challenge because of the wide range of photography techniques used in Dermoscopy Images. The proposed model introduced a Multi Range Morphological Model on Dermoscopy Images with Edge based Segmentation (MRMM-DI-EbS) for image quality enhancement for lesion classification.

Algorithm MRMM-DI-EbS

{

Input: Dermoscopy Image Dataset {DISet}

Output: Segmented Lesion Classification Set {SLCSet}

Step-1: Initially, each image in DISet is loaded for analysis, and segmentation is performed on every image. Image loading is performed and the basic image properties are calculated as:

$\operatorname{Img}\left(DI_{Set}(L)\right)=\sum_{L=1}^{M} \operatorname{getImage}\left(L \in DI_{Set}\right)+\operatorname{getSize}(L)+\operatorname{count}\left(DI_{Set}\right)$

$Contrast=\sum_{x, y=0}^{M} Img_{x, y}(A-B)^{2}+\max Intensity(A, B)$

$Dissimilarity=\sum_{x, y=0}^{M} \operatorname{Img}|x-y|+G(B, A)-\min (Contrast)$

$Entropy=\sum_{x, y=0}^{M}-\ln \left(A_{x, y}\right) \cdot B_{x, y}-\min (Dissimilarity)$

Here G is the function that identifies the dull and bright pixels of the considered image.
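The contrast, dissimilarity, and entropy statistics above resemble standard grey-level co-occurrence matrix (GLCM) texture measures. The sketch below shows the conventional scikit-image computation of those three quantities (assuming a recent scikit-image where the functions are named graycomatrix/graycoprops); it illustrates the standard measures, not the paper's exact formulas.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_stats(gray):
    """Conventional GLCM contrast/dissimilarity/entropy for an 8-bit image."""
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    dissimilarity = graycoprops(glcm, "dissimilarity")[0, 0]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))  # Shannon entropy of co-occurrences
    return contrast, dissimilarity, entropy
```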

Step-2: The pixels of each image are extracted to process the values for lesion classification, and are stored in an n x n matrix. Pixel extraction is performed as:

$R\left(IMG_{L}\right)=\sum_{L=1,\, x, y=1}^{M} \operatorname{maxIntensity}(L(A, B))+\frac{\operatorname{mean}(B, B+1)}{\operatorname{mean}(A, A+1)}+\exp \left(\frac{\left(A_{L}(x)-B(y)\right)^{2}}{(\lambda / 2)(x+y)}\right)+Th$

Here λ is the count of pixels with the maximum intensity value, Th is the threshold used in the pixel extraction process, A, A+1, B, and B+1 are the current and neighbouring pixel sets, and x and y are the pixel coordinates.
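A minimal reading of this step in Python, assuming the pixels are held as an n x n matrix and that Th suppresses low-intensity pixels; the resize target and threshold value below are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def extract_pixels(path, n=256, th=25):
    """Load an image and store its pixels as an n x n matrix (illustrative sketch)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    pixels = cv2.resize(gray, (n, n))                  # n x n pixel matrix
    lam = np.count_nonzero(pixels == pixels.max())     # lambda: max-intensity pixel count
    pixels = np.where(pixels >= th, pixels, 0)         # Th: drop pixels below threshold
    return pixels, lam
```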

Step-3: Morphological operations take an input image and, using a structuring element as a template, produce an identically sized output image. In a morphological operation, each pixel's value is determined by the values of the pixels around it. The initial morphological operations are applied to the image for quality enhancement as:

$\operatorname{Morph}(\operatorname{Img}(x, y))=\frac{\lambda}{\min (Contrast(\operatorname{Img}(L)))}+\sum_{L=1}^{M} \min \left(\operatorname{Intensity}\left(\left(Img_{i}-Img_{j}\right)^{2}\right)\right)+\frac{M}{\lambda} \sum_{L=1}^{M} F(\operatorname{Img}(B)-\operatorname{Img}(A))$

$BA=\begin{cases}0, & \text{if } 180<\operatorname{pix}(I(x, y))<255 \\ 1, & \text{otherwise}\end{cases}$

Here F is the function that adds the threshold intensity to pixels that are dull and have poor intensity levels.
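Interpreting BA literally, pixels with intensity strictly between 180 and 255 map to 0 and all others to 1, while F lifts dull pixels by a threshold amount. The dull cut-off and boost value below are assumptions for illustration.

```python
import numpy as np

def morph_enhance(img, boost=30, dull_cutoff=100):
    """Sketch of Step-3: lift dull pixels (F) and compute the BA binary mask."""
    out = img.astype(np.int32)
    out[out < dull_cutoff] += boost             # F: add threshold intensity to dull pixels
    out = np.clip(out, 0, 255).astype(np.uint8)

    # BA: 0 where 180 < pixel < 255, 1 otherwise
    ba = np.where((out > 180) & (out < 255), 0, 1).astype(np.uint8)
    return out, ba
```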

Step-4: Segmentation using edges is based on the results of applying several edge detection operators to the input image. The proposed model identifies the accurate edge of the lesion for processing and for deciding whether the lesion is cancerous. The edge-based segmentation process is applied as:

$\operatorname{ISeg}(\operatorname{Img}(A, B))=\sum_{L, x, y=0}^{M} \frac{\sum_{x, y=0}^{M} B_{x, y}((y+1)-x)^{2}}{\lambda-\delta}+\sum_{x, y=0}^{M} \sqrt{\frac{\sum_{j=0}^{N} \operatorname{sim}(\operatorname{minIntensity}(x, y), \operatorname{maxIntensity}(x+1, y+1))}{\operatorname{size}\left(DI_{Set}\right)+\lambda}}$

Here δ is the set of pixels that have maximum intensity levels within the pixel set considered.
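One way to realise edge-based lesion segmentation is sketched below, with assumed Canny thresholds and kernel size: detect edges, close gaps morphologically, and keep the largest closed contour as the lesion border. This is an illustrative pipeline, not the paper's tuned implementation.

```python
import cv2
import numpy as np

def segment_lesion(gray):
    """Edge-based segmentation sketch: Canny -> morphological closing -> largest contour."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 30, 90)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # join broken edge runs
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    if contours:
        lesion = max(contours, key=cv2.contourArea)  # assume lesion is the largest region
        cv2.drawContours(mask, [lesion], -1, 255, thickness=cv2.FILLED)
    return mask
```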

Step-5: A variety of techniques, including mathematical morphology, can be applied to the digital image enhancement challenge for segmentation. Operators of this type select a new grayscale value between two patterns for each pixel in the evaluated image, based on a predetermined proximity criterion. The multi-range morphological operations are performed on the Dermoscopy Image as:

$\lambda_{(x+1, y+1)}=\frac{\max \left(Intensity(L)_{(x, y)}\right)}{2} \quad \forall x, y=1,2, \ldots, n, \quad \text{where } L_{x, y}=\frac{255-\lambda}{\log (255)} \quad \forall i=1,2, \ldots, n$

$\operatorname{Mmorph}(\lambda)=\max (\operatorname{ISeg}(x, y))+\sum_{L=1}^{M} \operatorname{simm}(\delta(B, B+1))+\sum_{L=L+1}^{M} \operatorname{simm}(\delta(A, A+1))+\frac{\operatorname{maxIntensity}(\lambda)}{\operatorname{minIntensity}(\delta)}$
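One plausible reading of "multi-range" is that the morphological detail extraction is repeated at several structuring-element scales and the results accumulated; the sketch below works under that assumption, with the scales chosen arbitrarily.

```python
import cv2

def multi_range_enhance(gray, sizes=(5, 11, 21)):
    """Accumulate top-hat/bottom-hat detail over several scales (illustrative sketch)."""
    out = gray.copy()
    for s in sizes:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, se)      # bright detail at scale s
        blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se)  # dark detail at scale s
        out = cv2.subtract(cv2.add(out, tophat), blackhat)
    return out
```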

Step-6: The feature extraction process is applied on the enhanced image and the complete feature set is generated for lesion classification. The feature extraction process is performed and the feature set is maintained as:

$\operatorname{FeSet}(\operatorname{ISeg}(L))=\sum_{L=0} \operatorname{getrange}(\operatorname{ISeg}(L), \operatorname{ISeg}(L+1))+\operatorname{Corr}(\max (\operatorname{Intensity}(A, B)))+\frac{\sqrt{\operatorname{getrange}(\operatorname{Mmorph}(L))^{2}}}{\operatorname{Size}\left(DI_{Set}\right)}$

}
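For Step-6, a minimal sketch of assembling a feature vector from the segmented lesion region is given below. The particular statistics (intensity range, mean, spread, median) are illustrative stand-ins for the paper's getrange/Corr terms, not its exact feature set.

```python
import numpy as np

def build_feature_set(seg_gray):
    """Assemble a simple feature vector from a segmented lesion image (illustrative)."""
    pix = seg_gray[seg_gray > 0].astype(np.float64)  # pixels inside the lesion mask
    if pix.size == 0:
        return np.zeros(4)
    return np.array([
        pix.max() - pix.min(),  # intensity range (cf. getrange)
        pix.mean(),             # average lesion intensity
        pix.std(),              # intensity spread (stand-in for a correlation statistic)
        np.median(pix),         # robust central intensity
    ])
```

Such a vector can then be fed to a conventional classifier for the melanoma/non-melanoma decision.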

4. Results

Identifying the skin lesion within a digital image is a significant hurdle to the successful implementation of such a system. The majority of currently available skin lesion segmentation techniques are geared toward dermatoscope-captured images, and the presence of lighting variation, such as shadows, makes locating the lesion in digital photographs more difficult. Creating a system to automatically correct and segment the skin lesion in an input photograph is the focus of this study. First, a multi-stage illumination modelling technique is proposed and used to estimate illumination variation, which is then used to correct the original image. Second, a texture distinctiveness measure is computed for each of a collection of typical texture distributions acquired from the corrected image. Lastly, a texture-based segmentation method uses the presence of typical texture distributions to label locations in the image as normal skin or lesion. This partitioning can then be fed into independent feature extraction and melanoma classification methods. To evaluate the efficacy of the proposed segmentation framework, the outcomes of lesion segmentation and melanoma classification are compared with various alternative state-of-the-art techniques.

The suggested framework outperforms previously tested algorithms in terms of segmentation accuracy. Lesions are classified as melanoma or non-melanoma based on the segmentation results generated by the tested techniques, which are then used to train an existing classification system. With the suggested framework, classification accuracy is maximised, and sensitivity and specificity are maximised to an equal degree. The proposed model introduces a Multi Range Morphological Model on Dermoscopy Images with Edge based Segmentation (MRMM-DI-EbS) for image quality enhancement for lesion classification. The proposed model considered dermoscopy images from https://www.fc.up.pt/addi/ph2%20database.html. The proposed model is compared with the traditional Skin Lesion Segmentation in Dermoscopic Images With Ensemble Deep Learning Methods (Ensemble-A) [1], and the results show that the proposed model's segmentation performance for skin cancer detection is higher.

The pixel is the smallest unit used to display, depict, and analyse an image. In computer graphics, a pixel is the fundamental building block of an image; the pixels on a computer screen add up to an entire picture, video, or text. Pixel extraction is performed to analyse the image completely. The pixel extraction accuracy of the proposed model is higher than that of the traditional models. Figure 3 shows the pixel extraction accuracy levels of the proposed and traditional models.

Figure 3. Pixel extraction accuracy levels

An edge, in the context of image processing, is a sequence of neighbouring pixels whose intensity values shift suddenly. The edges of an object denote its separation from its background. Due to insufficient intensity difference, the edge-pixel sequence may be broken at times. Edge detection in image processing is about finding the boundaries of objects in an image. There are several mathematical approaches to detecting edges, which are defined as sharp changes in brightness or discontinuities along curves in a digital image. The image edge detection time levels of the proposed and existing models are shown in Figure 4.

Figure 4. Image edge detection time levels

Image quality enhancement is used in many different areas, including medical imaging, filmmaking, and autonomous vehicles, with the goal of recovering lost features in degraded images. Image quality improvement has developed rapidly with the advent of deep convolutional neural networks. The term image enhancement refers to the process of editing digital images to improve their quality for presentation or analysis. The image quality enhancement accuracy of the proposed model is higher than that of the traditional models, as depicted in Figure 5.

Image enhancement aims to better present the information in images, either for human interpretation or as better input for other automated image processing methods. Image colour enhancement refers to a set of procedures designed to increase an image's aesthetic appeal or transform it into a format more conducive to human or automated inspection. Images can be better understood and interpreted with the help of enhancements; being able to change the digital pixel values of an image is a major benefit of digital photography. The image quality enhancement time levels of the proposed and existing models are shown in Figure 6.

Figure 5. Image quality enhancement accuracy levels

Figure 6. Image quality enhancement time levels

Figure 7. Edge based Segmentation accuracy levels

Edge-based segmentation involves applying an edge filter to the image, classifying pixels as edge or non-edge depending on the filter output, and grouping pixels that are not separated by an edge into the same category. Whether edges in an edge image are relevant is typically determined by a threshold; separation is the breaking up of an edge contour caused by the operator's output oscillating above and below that threshold. Figure 7 shows the edge-based segmentation accuracy levels of the proposed and existing models.

In edge-based segmentation, the regions' edges are sufficiently dissimilar from one another and from the background to permit boundary identification based on local discontinuities in intensity. Edge-based segmentation is the act of seeking out and identifying edges in an image; knowing that edges carry relevant features and crucial information is a major step toward comprehending image content. The edge-based segmentation time of the proposed model is much lower than that of the existing model, as depicted in Figure 8.

Figure 8. Edge based Segmentation time levels

By focusing on the most significant variables and omitting those that are less important, feature selection enhances machine learning and boosts the predictive potential of machine learning algorithms. A feature set is the collection of characteristics that apply to a given domain; features and their properties are described in a feature set, together with any necessary constraints. The skin lesion feature set classification accuracy levels of the proposed and traditional models are shown in Figure 9.

Figure 9. Skin lesion feature set classification accuracy levels

5. Conclusion

For accurate melanoma lesion detection, it is crucial to have access to robust end-to-end skin segmentation technologies, as indicated by the ABCD rule system. The pipeline further applies pre-processing, a colour constancy algorithm to standardise the data, and finally morphological image operations in post-processing to refine the segmentation outcome. With the aim of improving image quality for lesion detection, the proposed model implemented a Multi Range Morphological Model on Dermoscopy Images with Edge based Segmentation. This method aids dermatologists in locating skin lesions in dermoscopy images. Both pre-processing and segmentation are required for the suggested method. An improved technique based on thresholds and morphological operations is used to minimise noise like hairs and markers during the pre-processing stage. The processed image is used to successfully isolate the area containing the skin lesion, and the suggested algorithm is applied to the pre-processed image to segment the lesion region, resulting in a region with more precise borders. In future work, hybrid denoising filter models should be implemented for complete noise removal from the images, and pixel normalisation techniques can be applied to smooth the images for a better accuracy rate.

References

[1] Goyal, M., Oakley, A., Bansal, P., Dancey, D., Yap, M.H. (2020). Skin lesion segmentation in dermoscopic images with ensemble deep learning methods. IEEE Access, 8: 4171-4181. https://doi.org/10.1109/ACCESS.2019.2960504

[2] Riaz, F., Rehman, S., Ajmal, M., Hafiz, R., Hassan, A., Aljohani, N.R., Nawaz, R., Young, R., Coimbra, M. (2020). Gaussian mixture model based probabilistic modeling of images for medical image segmentation. IEEE Access, 8: 16846-16856. https://doi.org/10.1109/ACCESS.2020.2967676

[3] Adegun, A.A., Viriri, S. (2020). FCN-based densenet framework for automated detection and classification of skin lesions in Dermoscopy Images. IEEE Access, 8: 150377-150396. https://doi.org/10.1109/ACCESS.2020.3016651

[4] Ji, J., Lu, X., Luo, M., Yin, M., Miao, Q., Liu, X. (2021). Parallel fully convolutional network for semantic segmentation. IEEE Access, 9: 673-682. https://doi.org/10.1109/ACCESS.2020.3042254

[5] Tang, J.J., Lan, Y.S., Chen, S.R., Zhong, Y.S., Huang, C.X., Peng, Y.H., Liu, Q.Y., Cheng, Y.Q., Chen, F., Che, W.L. (2019). Lumen contour segmentation in IVOCT based on n-type CNN. IEEE Access, 7: 135573-135581. https://doi.org/10.1109/ACCESS.2019.2941899

[6] Zhang, Q., Gao, G. (2019). Grasping point detection of randomly placed fruit cluster using adaptive morphology segmentation and principal component classification of multiple features. IEEE Access, 7: 158035-158050. https://doi.org/10.1109/ACCESS.2019.2946267

[7] Hao, Q.B., Pei, Y., Hou, R., Sun, B., Sun, J., Li, S.T., Kang, X.D. (2021). Fusing multiple deep models for in vivo human brain hyperspectral image classification to identify glioblastoma tumor. IEEE Transactions on Instrumentation and Measurement, 70: 1-14. https://doi.org/10.1109/TIM.2021.3117634

[8] Ahammad, S.H., Rahman, M.Z.U., Rao, L.K., Sulthana, A., Gupta, N., Lay-Ekuakille, A. (2021). A multi-level sensor-based spinal cord disorder classification model for patient wellness and remote monitoring. IEEE Sensors Journal, 21(13): 14253-14262. https://doi.org/10.1109/JSEN.2020.3012578

[9] Peng, C., Liu, Y., Gui, W., Tang, Z., Chen, Q. (2022). Bubble image segmentation based on a novel watershed algorithm with an optimized mark and edge constraint. IEEE Transactions on Instrumentation and Measurement, 71: 1-10. https://doi.org/10.1109/TIM.2021.3129873

[10] Duan, P., Kang, X., Li, S., Ghamisi, P., Benediktsson, J.A. (2019). Fusion of multiple edge-preserving operations for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 57(12): 10336-10349. https://doi.org/10.1109/TGRS.2019.2933588

[11] Chen, X., Zou, Q., Xu, X., Wang, N. (2022). A stronger baseline for seismic facies classification with less data. IEEE Transactions on Geoscience and Remote Sensing, 60: 1-10. https://doi.org/10.1109/TGRS.2022.3171694

[12] Riaz, F., Naeem, S., Nawaz, R., Coimbra, M. (2019). Active contours based segmentation and lesion periphery analysis for characterization of skin lesions in Dermoscopy Images. IEEE Journal of Biomedical and Health Informatics, 23(2): 489-500. https://doi.org/10.1109/JBHI.2018.2832455

[13] Tu, B., Liao, X., Zhou, C., Chen, S., He, W. (2021). Feature extraction using multitask superpixel auxiliary learning for hyperspectral classification. IEEE Transactions on Instrumentation and Measurement, 70: 1-16. https://doi.org/10.1109/TIM.2021.3116289

[14] Li, Z., Li, E., Samat, A., Xu, T., Liu, W., Zhu, Y. (2022). An object-oriented CNN model based on improved superpixel segmentation for high-resolution remote sensing image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15: 4782-4796. https://doi.org/10.1109/JSTARS.2022.3181744

[15] Zheng, C., Chen, Y., Shao, J., Wang, L. (2022). An MRF-based multigranularity edge-preservation optimization for semantic segmentation of remote sensing images. IEEE Geoscience and Remote Sensing Letters, 19: 1-5. https://doi.org/10.1109/LGRS.2021.3058939

[16] He, Q., Zou, B.J., Hu, C.Z., Liu, X.Y., Fu, H.P., Wang, L. (2021). Multi-label classification scheme based on local regression for retinal vessel segmentation. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 18(6): 2586-2597. https://doi.org/10.1109/TCBB.2020.2980233

[17] Yue, J., Zhu, D., Fang, L., Ghamisi, P., Wang, Y. (2022). Adaptive spatial pyramid constraint for hyperspectral image classification with limited training samples. IEEE Transactions on Geoscience and Remote Sensing, 60: 1-14. https://doi.org/10.1109/TGRS.2021.3095056

[18] Wang, Y., Chen, Q., Chen, S., Wu, J. (2020). Multi-scale convolutional features network for semantic segmentation in indoor scenes. IEEE Access, 8: 89575-89583. https://doi.org/10.1109/ACCESS.2020.2993570

[19] Wang, S., Pan, Y., Chen, M., Zhang, Y., Wu, X. (2020). FCN-SFW: Steel structure crack segmentation using a fully convolutional network and structured forests. IEEE Access, 8: 214358-214373. https://doi.org/10.1109/ACCESS.2020.3040939

[20] Mubashar, M., Khan, N., Sajid, A.R., Javed, M.H., Hassan, N.U. (2022). Have we solved edge detection? A Review of state-of-the-art datasets and DNN based techniques. IEEE Access, 10: 70541-70552. https://doi.org/10.1109/ACCESS.2022.3187838

[21] Pan, C., Jia, X., Li, J., Gao, X. (2021). Adaptive edge preserving maps in markov random fields for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 59(10): 8568-8583. https://doi.org/10.1109/TGRS.2020.3035642

[22] Wang, B., Si, S.Z., Cui, E., Zhao, H., Yang, D.X., Dou, S.C., Zhu, J. (2020). A fast and efficient CAD system for improving the performance of malignancy level classification on lung nodules. IEEE Access, 8: 40151-40170. https://doi.org/10.1109/ACCESS.2020.2976575

[23] Zhao, L., Wang, Y., Duan, Z., Chen, D., Liu, S. (2021). Multi-source fusion image semantic segmentation model of generative adversarial networks based on FCN. IEEE Access, 9: 101985-101993. https://doi.org/10.1109/ACCESS.2021.3097054

[24] Jung, H., Choi, H.S., Kang, M. (2022). Boundary enhancement semantic segmentation for building extraction from remote sensed image. IEEE Transactions on Geoscience and Remote Sensing, 60: 1-12. https://doi.org/10.1109/TGRS.2021.3108781

[25] Contreras, J., Sickert, S., Denzler, J. (2020). Region-based edge convolutions with geometric attributes for the semantic segmentation of large-scale 3-D point clouds. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13: 2598-2609. https://doi.org/10.1109/JSTARS.2020.2998037

[26] Quan, S., Xiang, D., Wang, W., Xiong, B., Kuang, G. (2021). Scattering feature-driven superpixel segmentation for polarimetric SAR images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14: 2173-2183. https://doi.org/10.1109/JSTARS.2021.3053161

[27] Chattopadhyay, S., Kak, A.C. (2022). Uncertainty, edge, and reverse-attention guided generative adversarial network for automatic building detection in remotely sensed images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15: 3146-3167. https://doi.org/10.1109/JSTARS.2022.3166929

[28] Liu, Y., Wei, Y., Wang, C. (2019). Subcortical brain segmentation based on atlas registration and linearized kernel sparse representative classifier. IEEE Access, 7: 31547-31557. https://doi.org/10.1109/ACCESS.2019.2902463

[29] Lu, X., Zha, Y., Qiao, Y., Wang, D. (2019). Feature-based deformable registration using minimal spanning tree for prostate MR segmentation. IEEE Access, 7: 138645-138656. https://doi.org/10.1109/ACCESS.2019.2943485

[30] Bi, H., Sun, J., Xu, Z. (2019). A graph-based semisupervised deep learning model for PolSAR image classification. IEEE Transactions on Geoscience and Remote Sensing, 57(4): 2116-2132. https://doi.org/10.1109/TGRS.2018.2871504