Dermoscopy images of melanoma frequently show low contrast, making the lesion appear very similar to the surrounding skin and obscuring fine visual details. This reduces segmentation accuracy and can cause early signs of melanoma to be overlooked or misclassified, so a method is needed to improve the contrast of dermoscopy images. To mitigate the effects of low contrast and improve image quality, this research proposes a multi-scale morphological method that enhances an image by adding local bright features and removing dark ones. It presents a multi-level pre-processing technique for dermoscopy images that improves raw image quality and makes the images more suitable for skin lesion detection; automated skin lesion segmentation benefits directly from this strategy. The pipeline begins with de-noising, followed by illumination correction, contrast augmentation, sharpening, and reflection removal. On this basis, a Multi-Level Image Quality Enhancement Model using an Enhanced Morphology Model with Edge-Based Segmentation (MLIQE-EMM-ES) is proposed for accurate detection of melanoma in dermoscopy images. Although MLIQE-EMM-ES demonstrates superior contrast enhancement and segmentation, it has not yet been benchmarked against a broad range of current models. In the comparisons reported here, the proposed model performs better in image quality enhancement and segmentation.
melanoma, dermoscopy images, enhanced morphology operations, image quality enhancement, segmentation
Tumors form when human skin cells divide and grow abnormally; melanoma, squamous cell carcinoma, and basal cell carcinoma are the three main types of skin cancer [1]. Reported figures put the predicted incidence of skin cancer at around 35 percent, and the severity of cancer in general is underscored by the fact that half of the 20 million cancer cases reported globally in 2023 were fatal. Melanoma, a form of skin cancer that starts in melanocytes, grows quickly and is very dangerous [2]. Melanocytes are specialized cells whose primary role is to produce melanin; during their differentiation, the proliferative potential of the cell is drastically reduced.
Examining a skin lesion has traditionally involved manually measuring its size, shape, and formation [3]. Because professionals and dermatologists must perform these operations by hand, they are less precise and more time-consuming. Prioritizing the early detection of skin cancer is crucial for controlling the patient fatality rate [4]. These limitations are gradually being addressed by Artificial Intelligence (AI) models. In recent years, new machine and deep learning approaches based on computer vision have been established. These methods allow medical professionals to diagnose and categorize these ailments with the help of a machine [5]. As a result, a computer-aided prognosis is crucial for obtaining better and more accurate results.
Each skin lesion must first be preprocessed and its boundaries estimated before features can be extracted and used for lesion classification [6]. For the last ten years, researchers have prepared medical images using a variety of methods; artefact removal, color normalization, and contrast stretching are all examples of preprocessing approaches [7]. While these preprocessing models do their jobs well, they add significant processing time to the algorithm, which affects both the training and testing phases [8]. To improve the classifier's performance, the dermoscopy images are preprocessed with a boundary estimation procedure that delineates the borders of the lesions. A segmentation model is required to recover the Region of Interest (RoI) from the surrounding background. This method also aids in the recognition [9] of the intrinsic clinical aspects of skin lesions. The accuracy of melanoma prognoses is correlated with the precision of segmentation algorithms, which is why segmentation approaches are typically the backbone of most research models aiming to improve the classifier's overall accuracy [10].
Because there are many databases of skin lesion types, each with its own complexity, the segmentation problem remains open, leaving plenty of room for future research even after complex segmentation models have been proposed [11]. Asymmetric lesion features cause false classifications and accuracy issues that make segmentation models more difficult to implement [12]. Variations in brightness, contrast, distortion, and lightness in dermoscopy images also drastically reduce segmentation accuracy [13]. More effective ways to deal with these practical challenges are therefore greatly needed. Skin cancer is characterized by the aberrant proliferation of certain cells caused by changes in gene expression in the skin layer, and these malignant cells spread to neighboring cells. Because the human body is exposed to a wide range of influences in the modern world, including increased life expectancy and exposure to ultraviolet radiation, the number of cancer patients has risen dramatically [14].
Skin cancers fall into two main categories, malignant and benign, distinguished by metastasis, the process by which cancer cells travel to other parts of the body. Malignant cancers can penetrate and destroy nearby tissues, and they can travel through the lymphatic system and bloodstream to distant organs and tissues [15]. A large percentage of the population is affected by this disease. Even benign cancer, which is more limited, can affect its surroundings by putting pressure on nearby nerves or blood vessels; benign cancers also tend to grow more slowly than malignant ones [16]. Neglecting treatment of these malignancies may have devastating consequences for human health, so preliminary testing for skin cancer is essential for an accurate diagnosis. The biopsy is an intrusive procedure that causes pain and discomfort for cancer patients [17]. Dermoscopy imaging examines the skin layers in great detail using magnification and specialized lighting equipment in order to prevent the need for an unnecessary biopsy. The most challenging aspect of dermoscopy is identifying skin lesion types and confirming their presence in images. Segmentation, feature extraction, and classification are the main stages of skin lesion detection [18]. This research proposes an automatic segmentation method that can serve as a first step in skin lesion classification. Dermatologists find this automated approach useful for detecting skin cancer since it identifies and locates the areas of the skin lesion.
Figure 1. Dermoscopy images
Researchers have paid considerable attention to computer-aided technologies that can analyze medical images for diagnostic purposes. These are specifically designed and tuned to support tasks such as segmenting and classifying the RoI, which in this case includes cancerous areas. Cancer typically has a delayed clinical onset [19], so early detection and delineation of lesion boundaries are crucial for effective treatment, especially in the early stages. Nearly 17 million individuals are affected by cancer each year, with approximately 9.6 million losing their lives, in part as a result of treatment delays; cancer is now the leading cause of death globally. Skin cancer, one of the most common malignancies in both children and adults, develops in the skin's outer layer, the epidermis [20]. Several computer-assisted methods have been suggested for detecting cancer boundaries in dermoscopy images. Example dermoscopy images are shown in Figure 1.
Melanoma is the most aggressive and lethal form of skin cancer because of its high metastatic rate. Melanoma occurs when melanocytes [21], the skin's pigmented cells, grow irregularly. It can start anywhere on the skin's surface, including the chest or back, and spread to other parts of the body. This skin cancer has the highest fatality rate of all skin cancers, and its incidence has been rising steadily by 4-6% every year. With an early diagnosis, the five-year survival rate can reach as high as 98%. Given the statistics on melanoma incidence and mortality, it is crucial to diagnose patients promptly in order to provide them with effective therapy [22].
Among the many possible approaches to digital image enhancement, mathematical morphology stands out. For every pixel in the processed image, morphological operators choose a new grayscale value between two patterns based on a proximity metric [23]. Alternatively, the homomorphic filter operates in the frequency domain, and nonlinear functions such as the logarithm or power functions are among the most widely used approaches for improving dark areas of an image [24]. One major drawback of histogram equalization is that it often fails to preserve details because it applies the image's global attributes in a local context. Normal and melanoma images are shown in Figure 2.
Figure 2. Normal and melanoma images
Figure 3. a) Normal image; b) Segmented image; c) Extracted portion
Segmenting images is an essential step in most computer vision, video, and image applications. It is commonly used to partition an image into sections that should ideally represent distinct objects in the real world, and content analysis and image comprehension rely on it. Despite the development of numerous segmentation methods, it remains difficult to compare several of them, or even alternative parameterizations of a single approach, since there is still no adequate performance measure [25]. Image segmentation is commonly described as dividing an image into its component parts and then extracting the objects of interest. The dermoscopy image segmentation process is shown in Figure 3.
The outcomes of segmentation have a profound impact on all subsequent steps of image analysis, including object representation and description, feature measurement, and even higher-level tasks such as object classification and scene interpretation, making it a crucial component of automatic image analysis [26]. The process essentially entails dividing a digital image into various zones that correspond to specific surfaces, objects, or intrinsic object features: each pixel is classified, and clusters of pixels that share certain visual traits, or regions that are similar to one another, are identified. Ideally, each of these regions represents an object or pattern in the image. This research proposes a Multi-Level Image Quality Enhancement Model using an Enhanced Morphology Model with Edge-Based Segmentation (MLIQE-EMM-ES) for accurate detection of melanoma in dermoscopy images.
While traditional grayscale morphology-based image segmentation algorithms are capable of accurately segmenting images with varying degrees of illumination, real-time issues become apparent when the amount of image data increases. Liu et al. [1] presented a quantum image segmentation technique that uses a quantum mechanism to swiftly convert a grayscale image into a binary image by performing morphological operations on all of the pixels in the image at the same time. Furthermore, comprehensive quantum circuits for segmenting the new enhanced quantum representation images are constructed by combining various individually created quantum circuit components, such as dilation, erosion, bottom-hat transformation, top-hat transformation, etc.
The fusion of numerous remote sensors has garnered significant interest in the field of ground observation due to advancements in sensor technology. Cao et al. [2] offered a mathematical morphological approach to combining the complementary data from hyperspectral images (HSI) and infrared images (IFI). Although HSI provides extensive spatial and spectral information, methods that rely solely on hyperspectral data still have several limitations. IFI can pick up infrared light emitted by an object, but it is not very good at classifying complicated terrain. The information that HSI and IFI acquire about objects is distinct yet highly complementary. To fully utilize the information provided by both modalities, an HSI and IFI collaborative classification framework based on a Threshold-based Local Contain Profile (TLCP) is proposed in that work; TLCP is a novel design for suppressing interference within spatial feature extraction.
When left untreated, skin cancer can metastasize to other organs, making it one of the deadliest forms of the disease. Recent years have seen a watershed moment in health care with the application of deep learning to skin cancer, with dermoscopy images serving as the nerve center of this technological revolution. Deep learning-based automated skin cancer detection from dermoscopy images is the main topic analyzed by Nie et al. [3]. The current state of deep learning and its potential uses in dermoscopy image diagnosis are investigated in depth in that study. The authors explained and reviewed the most recent approaches to melanoma categorization and methods for improving them, went over recent developments in deep learning-based skin cancer diagnostic solutions, and discussed obstacles and potential avenues for improvement, with the aim of fortifying these automated systems to bolster dermatologists' diagnostic capabilities.
Accurate disease diagnosis and therapy planning rely on medical image segmentation. Recent years have shown outstanding success on medical image segmentation tasks using CNN-based approaches, specifically U-Net and its derivatives. However, they are not always reliable on images with intricate architecture and a wide range of ROIs, which can be due to the information loss of repeated down-sampling or the fixed geometric structure of the feature extraction receptive fields. Alam et al. [4] addressed these issues by modifying the standard U-Net architecture. Specifically, the convolution block is replaced with a dilated convolution block, which allows the extraction of multi-scale context features with varying receptive field sizes, and a dilated inception block is added between the encoder and decoder paths to address information recession and the semantic gap between features. Along with re-weighting the channel-wise feature responses to improve overall feature representation, a squeeze-and-excitation unit adds the input of each dilated convolution block to its output, which alleviates the vanishing gradient problem. To obtain a bigger receptive field, the authors modified the original inception block by shrinking the spatial filter and adding dilated convolution. The suggested network was tested on three difficult medical image segmentation tasks with ROIs of varied sizes: segmenting nuclei in microscopy cell images, skin lesions in dermoscopy images, and lungs in chest X-ray (CXR) images.
The alarmingly high rates of both melanoma and non-melanoma skin cancers highlight the critical need to address this public health concern. Automated categorization and diagnosis methods may be hindered when examining dermoscopy images of these lesions because hairs and their shadows on the skin can obscure important diagnostic information about the lesion. Using deep learning techniques, Talavera-Martínez et al. [5] introduced a novel method for dermoscopy hair removal. The suggested algorithm finds hair pixels in images and subsequently restores them using an encoder-decoder architecture based on convolutional neural networks. Furthermore, during the training phase of the network, the authors implemented a novel combined loss function that integrates the L1 distance, total variation loss, and a loss derived from the structural similarity index metric. To test this model quantitatively, the authors assembled a dataset that includes both haired and hairless images, which did not previously exist. Using similarity metrics that compare the hairless reference image with its artificially haired counterpart, the authors compared the outcomes with six state-of-the-art systems built on conventional computer vision approaches.
The surface of the skin is made up of wrinkles that form a network-like microstructure. Observing and studying the skin microstructure, which changes with skin condition and age, is a simple, consistent, and effective evaluation procedure for skin diagnosis. However, several topological and morphological changes occur on the skin's surface as a person ages, and accurately extracting and analyzing a skin microstructure that includes these changes is challenging. Because of this, Moon and Lee [6] used CNN models to segment skin microstructure and analyze skin aging. They first suggested a fusion U-Net model for microstructure extraction and assessed its segmentation performance against image processing techniques and other deep learning models. The skin microstructure is then used to categorize skin aging: four mobile CNN models (NASNet-Mobile, MobileNetV2, MobileNetV3-Small, and EfficientNet-B0) were selected to perform the classification, and their classification performance was assessed and compared. According to the results, the fusion U-Net model is able to detect tiny creases that are hard to see with the human eye, and its segmentation images are the most faithful representations of the real microstructure. With an accuracy of 94%, MobileNetV3-Small demonstrates the best performance in the microstructure-based classification of skin aging. The suggested technique enables an objective and quantitative examination of a skin surface exhibiting a wide range of aging traits, confirming that changes in skin microstructure accompany skin aging.
Computer vision tasks pertaining to medical imaging heavily rely on CNNs. Nevertheless, the CNN model may not be able to learn the target object's features if the dataset's raw images are of sufficiently poor quality. When the input image has a complicated background, the CNN model fixates on structures in the backdrop and other areas that are irrelevant to lesion recognition, and the accuracy of the output prediction suffers. Kimori [7] demonstrated that a mathematical morphology-based image preprocessing approach, which makes use of a priori information regarding the lesion shape, can effectively resolve this issue. To selectively enhance the lesion region relative to the background, the suggested method uses an h-dome transformation based on the geometrical shape information of the lesion region, followed by image histogram-modification operations. This makes it possible to generate images that highlight the crucial area the CNN model needs to learn.
Table 1. Limitations of traditional models
| Author(s) | Year | Proposed Model | Advantages | Limitations | 
| Liu et al. [1] | 2022 | Quantum Image Segmentation Based on Grayscale Morphology | Performs morphological operations on all pixels simultaneously using quantum mechanisms for fast grayscale-to-binary conversion. | Quantum implementation is complex; limited by current quantum hardware scalability. | 
| Cao et al. [2] | 2021 | HSI and IFI Collaborative Classification Based on Morphology Feature Extraction | Combines hyperspectral and infrared data for complementary feature extraction; improves classification of complex terrains. | High computational cost; dependent on availability of both HSI and IFI data. | 
| Nie et al. [3] | 2022 | Deep Learning Approaches for Skin Lesion Diagnosis | Comprehensive review of deep learning methods improving melanoma classification and detection. | Lack of unified evaluation datasets; variations in methods make direct comparison difficult. | 
| Alam et al. [4] | 2023 | Multi-Scale Context Aware Attention Model | Uses dilated convolutions and attention to capture multi-scale features and reduce information loss. | Still struggles with very small or irregular lesions; requires high computational resources. | 
| Talavera-Martínez et al. [5] | 2021 | Hair Segmentation and Removal Using Deep Learning | Effectively detects and removes hair artefacts in dermoscopic images using encoder-decoder CNN. | Model trained on limited dataset; performance may drop with unseen hair patterns. | 
| Moon and Lee [6] | 2022 | Skin Microstructure Segmentation and Aging Classification with Fusion U-Net | Accurately segments microstructures and classifies skin aging with high accuracy using lightweight CNN models. | Limited generalization across different ethnicities and skin conditions. | 
| Kimori [7] | 2022 | Morphological Image Preprocessing Based on Lesion Geometry | Enhances lesion regions selectively using h-dome transformation and histogram modification. | May fail with highly irregular lesion boundaries or noisy backgrounds. | 
| Olmez et al. [8] | 2024 | Improved PSO with Visit Table and Multi-Direction Search | Reduces redundant searches and improves segmentation performance for skin cancer images. | Optimization process may still converge to local optima in complex cases. | 
| Gururaj et al. [9] | 2023 | DeepSkin: Deep Learning for Skin Cancer Classification | Uses DenseNet169 and ResNet50 with transfer learning for multi-class lesion classification. | Relies heavily on pre-trained models; requires large annotated datasets for optimal performance. | 
Skin professionals rely on automated screening to help them discover skin abnormalities early and accurately. When it comes to improving the classification of skin cancer images, multilevel thresholding is a popular and effective strategy. To enhance the effectiveness of multilevel thresholding, Olmez et al. [8] suggested enhancing Particle Swarm Optimization (PSO) with a unique visit table and a multiple-direction search algorithm. By steering the search toward new points and away from frequently visited locations and their neighbors, the visit table method keeps the original PSO algorithm from conducting needless searches. Additionally, to improve exploration capabilities and boost population diversity, a multiple-direction search technique has been implemented for the PSO, helping it overcome the problem of being stuck at a local optimum. Fifty benchmark functions were used to conduct the qualitative, quantitative, and scalability analyses of the improved PSO (IPSO) approach, and the suggested method attained the best performance on the majority of them. 2D non-local means histograms, the improved PSO, and Renyi's entropy are employed in a multilevel image segmentation application demonstrated on skin cancer images, and a number of performance evaluation indicators are used to segment images from the ISIC 2017 skin cancer image dataset. Table 1 summarizes the limitations of traditional models.
Because of the scarcity of resources, skin cancer is a disease that spreads at an alarming rate, and accurate identification through early detection is essential for preventative measures. Since it is difficult for dermatologists to detect skin cancer at an early stage, deep learning has in recent years been widely used for both supervised and unsupervised learning tasks in this area. Among these models, the Convolutional Neural Network has proven superior in object recognition and classification tests. Gururaj et al. [9] used the MNIST: HAM10000 dataset, which contains 10015 samples covering seven distinct types of skin lesions. Data pre-processing methods were utilized, including sampling, segmentation using an autoencoder and decoder, and the dull razor technique. The model was trained using transfer learning techniques such as DenseNet169 and ResNet50.
One goal of automated visual assessment in medical imaging is the early detection of certain disorders. The key to successful treatment and, more importantly, a full recovery from potentially fatal diseases is early detection; in the absence of prompt medical attention, many disorders can progress and even kill. Because skin lesions are initially too small to be reliably identified by human vision, automated visual inspections with computer-aided diagnostic systems supplement a human expert examination, which is crucial for the early detection, diagnosis, and treatment of abnormal cell development in human skin [27]. Cancers of the skin that arise from aberrant skin cell development fall into three main categories: melanoma, basal cell carcinoma, and squamous cell carcinoma. Although it is uncommon, melanoma is the leading cause of death among skin cancer patients, and the consequences can be catastrophic if it spreads to deeper tissues and organs. Dermoscopy is a technique for skin lesion segmentation, pattern detection, and classification that uses magnification and the elimination of skin reflection [28].
Figure 4. Dermoscopy image segmentation
Because benign and malignant skin lesions look so similar, they can be difficult to distinguish. Imperfect borders, artificial markings, hair artifacts, and background fluctuations are only a few of the complicating factors [29]. Basic thresholding methods with simple objective functions can be utilized for segmentation when working with images that have clear borders and no artifacts. When images display a great deal of fluctuation, however, finding the optimal thresholding value requires a well-defined objective function [30]. It is worth noting that prior studies have mostly concentrated on various machine learning approaches, ignoring thresholding methods. A segmented dermoscopy image is shown in Figure 4.
When diagnosing a condition and tracking the efficacy of a treatment plan, medical imaging plays a crucial role. Even though acquisition methods are becoming more advanced, the resulting images may not be of sufficient quality for a precise diagnosis. Images may have inferior quality for a variety of reasons, including technical limitations of imaging instruments, patient-specific conditions, surrounding noise, and emergency scenarios. When reimaging is not an option, image enhancement techniques can be a lifesaver: damaged images are repaired and their quality and contrast are enhanced. Morphology is a method of image processing that relies on an object's shape. To generate a new image of the same size from an input image, morphological approaches apply a structuring element; the value of each pixel in the output image is determined by comparing the corresponding pixel in the input image with its neighbors. By adjusting the size and shape of this neighborhood, a morphological procedure can be constructed that identifies and responds to particular shapes in the input image. Morphological procedures can first be established on grayscale images with a planar single-channel source image and afterwards broadened to encompass full-color images. The morphology operations on an image are shown in Figure 5.
Figure 5. Morphology operations
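As a concrete illustration of these operations, the short sketch below applies the four basic grayscale morphology operators to a dermoscopy image with scikit-image; the file name and the disk radius are illustrative assumptions, not values prescribed by the model.

```python
import numpy as np
from skimage import io, color, morphology, util

# Load a dermoscopy image (the path is a placeholder) and convert to grayscale.
img = io.imread("dermoscopy_sample.jpg")
gray = util.img_as_float(color.rgb2gray(img))

se = morphology.disk(3)  # flat, disk-shaped structuring element of radius 3

eroded = morphology.erosion(gray, se)    # local minimum: bright specks shrink
dilated = morphology.dilation(gray, se)  # local maximum: bright regions grow
opened = morphology.opening(gray, se)    # erosion then dilation: removes small bright artifacts
closed = morphology.closing(gray, se)    # dilation then erosion: fills small dark imperfections
```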
Figure 6. Proposed model framework
One crucial stage in the computer-assisted diagnosis of melanoma is the automatic segmentation of skin lesions in dermoscopy images. However, there are large individual differences in the appearance of lesions, making this a difficult process, and the problem becomes even more complex when working with massive amounts of image data. To further increase segmentation performance, color information from several color spaces is explicitly used, which helps with network training. Segmenting the image is the initial step of image analysis: the process of segmentation splits an image into its individual elements, and the problem at hand dictates the extent to which this subdivision is pursued. Two segmentation techniques, the discontinuity detection approach and the similarity detection technique, are available when an object needs to be separated from the background in order to read the image properly and determine its content. The first partitions an image according to sudden changes in gray level; the second is founded on thresholding and region growing. The proposed model framework is shown in Figure 6.
Combining erosion and dilation with edge-based segmentation is an additional route to precise melanoma diagnosis in dermoscopy images. As effective preprocessing procedures, morphological erosion can eliminate minor bright artifacts such as residual hair, specular reflections, and noise, while dilation can fill in tiny gaps and rejoin broken lesion edges. Applied in conjunction as opening and closing operations, these procedures smooth rough edges around lesions, suppress imperfections in the background, and protect the lesion's overall structure. This preprocessing step guarantees a clearly defined lesion region before more accurate border detection techniques are applied. Next, the cleaned image is subjected to edge-based segmentation, which identifies distinct changes in intensity between the lesion and the surrounding skin. With much of the unnecessary high-frequency noise removed by morphological filtering, edge detection yields more precise and clearer boundary maps. The combination of morphology's lesion isolation and edge detection's fine shape refinement yields pixel-level segmentation precision, creating a synergistic effect.
The enhanced morphological procedures differ from classical ones in two key ways: flexibility and the ability to integrate with other processing stages. Conventional morphology applies global, fixed structuring elements in operations such as opening, closing, dilation, or erosion and ignores local image features. In contrast, the MLIQE-EMM-ES model's enhanced morphological approach uses multi-scale structuring elements with different radii to capture both coarse and fine lesion features. To concentrate the improvement where it is most required, it uses pixel-intensity analysis to preferentially target low-contrast pixels rather than applying operations uniformly. In addition, the enhanced morphology uses local pixel normalization to dynamically adjust intensity ranges, which improves contrast at poorly defined lesion edges. The technique improves lesion-to-background separation by integrating white top-hat operations, which enhance bright features, with bottom-hat operations, which emphasize dark features. Especially for difficult low-contrast melanoma images, this method preserves lesion information, suppresses noise, and generates high-quality segmentation outputs when integrated into a multi-level processing pipeline that includes edge detection.
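A minimal sketch of this multi-scale top-hat/bottom-hat enhancement is given below, assuming scikit-image; the radii follow the SE_sizes set used in the pseudocode, while the equal weights for the bright and dark components are illustrative assumptions.

```python
import numpy as np
from skimage import morphology

def multiscale_morph_enhance(gray, radii=(1, 2, 3, 5, 7), lam1=0.5, lam2=0.5):
    """Enhance contrast by accumulating bright detail (white top-hat) and
    dark detail (black top-hat, i.e. bottom-hat) over several SE radii."""
    wth_sum = np.zeros_like(gray, dtype=float)
    bth_sum = np.zeros_like(gray, dtype=float)
    for r in radii:
        se = morphology.disk(r)
        wth_sum += morphology.white_tophat(gray, se)  # bright features finer than se
        bth_sum += morphology.black_tophat(gray, se)  # dark features finer than se
    # Add bright detail, subtract dark detail, and keep intensities in range.
    return np.clip(gray + lam1 * wth_sum - lam2 * bth_sum, 0.0, 1.0)
```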
The pseudocode for the proposed model is presented below.
```
Pseudocode: MLIQE-EMM-ES
INPUT:
  DIset ← Set of dermoscopy images
  SE_sizes ← {1, 2, 3, 5, 7}   // Structuring element radii for multi-scale morphology
  Hair_SEs ← Line SEs with lengths {9, 15} and angles {0°, 45°, 90°, 135°}
  λ1, λ2 ← Weights for top-hat/bottom-hat combination
  τ ← Threshold for hair detection (Otsu or percentile-based)
  Edge_threshold ← Threshold for edge detection
  Output_dir ← Path to save processed images
OUTPUT:
  FEset ← Set of extracted features from segmented lesion regions
BEGIN
1. Initialize FEset ← ∅
2. FOR each Image in DIset DO
   a. Read image: f ← Load(Image)
   b. Convert to grayscale:
      IF f is RGB THEN f_gray ← RGB2Gray(f) ELSE f_gray ← f ENDIF
   c. Hair detection and removal:
      hair_mask ← 0
      FOR each SE in Hair_SEs DO
         bth ← BottomHat(f_gray, SE)
         hair_mask ← hair_mask OR (bth > τ)
      ENDFOR
      IF any pixel in hair_mask = 1 THEN
         f_gray ← Inpaint(f_gray, hair_mask)
      ENDIF
   d. Multi-scale morphological contrast enhancement:
      wth_sum ← 0
      bth_sum ← 0
      FOR each radius s in SE_sizes DO
         SE ← DiskStructuringElement(s)
         wth_sum ← wth_sum + WhiteTopHat(f_gray, SE)
         bth_sum ← bth_sum + BottomHat(f_gray, SE)
      ENDFOR
      f_enh ← f_gray + λ1 * wth_sum - λ2 * bth_sum
   e. Edge detection:
      grad_max ← 0
      FOR each radius s in SE_sizes DO
         SE ← DiskStructuringElement(s)
         grad ← Dilate(f_enh, SE) - Erode(f_enh, SE)
         grad_max ← max(grad_max, grad)
      ENDFOR
      E ← Normalize(grad_max)
      edge_map ← E > Edge_threshold
   f. Segmentation:
      lesion_mask ← ActiveContourOrRegionGrow(f_enh, seed=edge_map)
      lesion_region ← ApplyMask(f_enh, lesion_mask)
   g. Feature extraction:
      features ← ExtractFeatures(lesion_region)
      FEset ← FEset ∪ {features}
   h. Save processed results:
      SaveImage(Output_dir, Image_ID + "_enhanced.png", f_enh)
      SaveImage(Output_dir, Image_ID + "_segmented.png", lesion_region)
3. ENDFOR
4. RETURN FEset
END
```
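Step (c) of the pseudocode can be sketched as follows with scikit-image, where BottomHat corresponds to black_tophat and the inpainting uses the biharmonic method; the fixed threshold standing in for τ and the slight dilation of the hair mask are assumptions.

```python
import numpy as np
from skimage import morphology, restoration
from skimage.draw import line

def line_footprint(length, angle_deg):
    """Binary line-shaped structuring element with the given length and angle."""
    angle = np.deg2rad(angle_deg)
    dx = int(round((length - 1) / 2 * np.cos(angle)))
    dy = int(round((length - 1) / 2 * np.sin(angle)))
    size = 2 * max(abs(dx), abs(dy)) + 1
    fp = np.zeros((size, size), dtype=bool)
    c = size // 2
    rr, cc = line(c - dy, c - dx, c + dy, c + dx)
    fp[rr, cc] = True
    return fp

def remove_hair(f_gray, lengths=(9, 15), angles=(0, 45, 90, 135), tau=0.06):
    """Detect dark, elongated structures (hair) with oriented bottom-hats,
    then inpaint the masked pixels. tau is an assumed fixed threshold."""
    hair_mask = np.zeros(f_gray.shape, dtype=bool)
    for length in lengths:
        for angle in angles:
            bth = morphology.black_tophat(f_gray, line_footprint(length, angle))
            hair_mask |= bth > tau
    if hair_mask.any():
        hair_mask = morphology.binary_dilation(hair_mask, morphology.disk(1))
        f_gray = restoration.inpaint_biharmonic(f_gray, hair_mask)
    return f_gray
```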
First, the model-based design employed for image pre-processing is an aspect of the edge detection technique. For subsequent processing, the 2D data needs to be translated to 1D, and a color space conversion block does just that, turning RGB into a grayscale image. The second method is morphological operations segmentation, which is different from the first. Morphological processes examine the shapes and forms of objects. Image segmentation (separating related objects), object extraction (removing small objects or noise from an image), and measurement operations (texture analysis and shape description) all fall under the purview of morphological image analysis. Blocks from the Image Processing Blockset can be used to execute morphological operations such as opening, closing, dilation, and erosion, and a combination of these blocks is needed for morphological image analysis, with which users can filter, segment, and quantify images. Using dermoscopy image segmentation, skin degradation can be more accurately recognized. Reliable skin lesion detection is difficult for a number of reasons, including the image capture method, the characteristics of the lesion, and the skin's texture. Currently, most segmentation datasets rely on noisy expert annotations to represent skin lesion borders because accurate annotations are expensive and time-consuming to create. For dermoscopy-based lesion localization and accurate skin lesion type identification, segmenting the lesion border is crucial. This research proposes MLIQE-EMM-ES for accurate detection of melanoma in dermoscopy images.
Algorithm MLIQE-EMM-ES
{
Input: Dermoscopy Image Dataset {DIset}
Output: Features Extracted Set {FEset}
Step-1: Initially, the images from the dermoscopy dataset are considered, and every pixel of each image is processed. The pixels in the image are analyzed, and their contrast levels are considered for image processing. The image pixel contrast processing is performed as
$\tau[\mathrm{M}]=\sum_{\mathrm{img}=1}^{\mathrm{M}} \frac{\sum \operatorname{imgattr}(\mathrm{img})}{\mathrm{M}}+\operatorname{getminInten}(\mathrm{img})+\operatorname{getmaxInten}(\mathrm{img})+\operatorname{diff}(\mathrm{img}(x, y), \mathrm{img}(x+1, y+1))$
$\operatorname{IPcontr}[\mathrm{M}]=\sum_{\mathrm{img}=1}^{\mathrm{M}} \operatorname{getImgattr}(\mathrm{img})+\tau(\mathrm{img})+\operatorname{minInten}(\mathrm{img})$
where, τ is the term that captures the pixel contrast, minInten() selects the pixels with the minimum intensity levels, maxInten() selects the pixels with the maximum intensity levels, and getImgattr() extracts image attributes such as color, texture, and shape.
Step-2: The process of dilation enlarges objects and fills in minute imperfections, while erosion removes insignificant particles, leaving behind only the actual objects. Dilation and erosion are the two most fundamental morphological operators: dilation selects the brightest value in the neighborhood defined by the structuring element, whereas erosion selects the darkest. For a grayscale image f and a flat structuring element b, the morphological operations are applied to the image as

$\operatorname{Dilation}(\mathrm{img})=(f \oplus b)(x, y)=\max_{(s, t) \in b} f(x-s, y-t)$

$\operatorname{Erosion}(\mathrm{img})=(f \ominus b)(x, y)=\min_{(s, t) \in b} f(x+s, y+t)$
Step-3: After the morphological operations have been applied, each image is analyzed for pixel contrast dissimilarities, and only such pixels are considered, so that the enhanced morphology operations are applied to them. The enhanced morphology operations are applied as
$\begin{aligned} \operatorname{Emorp}[\mathrm{M}]=\prod_{\mathrm{img}=1}^{\mathrm{M}} & \operatorname{getattr}(\operatorname{Erosion}(\mathrm{img}))+\operatorname{getattr}(\operatorname{Dilation}(\mathrm{img}))+\gamma(\operatorname{Erosion}(\mathrm{img}))+\beta(\operatorname{Dilation}(\mathrm{img})) \\ & +\max(\operatorname{diff}(x, x+1))+\max(\operatorname{diff}(y, y+1))+\gamma(\operatorname{Opening}(\mathrm{img}))+\beta(\operatorname{Closing}(\mathrm{img})) \end{aligned}$
Here, γ is the term that captures pixels with poor contrast in the processed image and β captures pixels with high contrast, so that pixel normalization can be performed to improve the image quality.
Step-4: After the image quality has been enhanced using the enhanced morphology operations, edge detection is performed on the images to recover the exact shape of the skin lesion. Exact edges support easy and accurate detection of melanoma in the images. The edge detection is performed as
$\operatorname{EdgeSet}[\mathrm{M}]=\frac{\operatorname{getminrange}(\operatorname{Img}(i))+\beta(\mathrm{img})}{\max\left(\sum_{i} \operatorname{Img}-\operatorname{Emorp}(\operatorname{Img}(i))\right)}+\sum_{i=1}^{N}\left\{\frac{\left|\operatorname{maxrange}(\gamma(\operatorname{Img}+\operatorname{Emorp}(\operatorname{img}(x, y))))\right|}{\mathrm{M}}\right\}^{x \cdot y}$
Step-5: Segmentation is applied to the images after edge detection. Segmentation divides the image into multiple portions so that only the relevant portion is extracted, and the features from the relevant regions are passed on for feature processing in melanoma detection. The segmentation is performed as
$\operatorname{Iseg}[\mathrm{M}]=\sum_{\mathrm{img}=1}^{\mathrm{M}} \sqrt{\frac{\sum_{\mathrm{img}=1}^{\mathrm{M}} \operatorname{getattr}(\operatorname{EdgeSet}(\mathrm{img}))+\operatorname{maxPix}(\mathrm{img})+\operatorname{minPix}(\mathrm{img})}{x \cdot y}}+\prod_{\mathrm{img}=1}^{\mathrm{M}} \max(\operatorname{simm}(\operatorname{EdgeSet}(x, x+1)))+\max(\operatorname{simm}(\operatorname{EdgeSet}(y, y+1)))$
Here, the simm() model checks the similarity levels of the edge feature patterns; a code sketch of the edge detection and segmentation steps follows the algorithm.
}
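Steps 4 and 5 can be sketched together as follows: a multi-scale morphological gradient (dilation minus erosion) yields the edge map, and a thresholding-plus-morphology pipeline stands in for the contour-evolution routine, which the text does not pin down. The radii, the edge threshold, the use of Otsu thresholding, and the clean-up parameters are all assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology

def morph_gradient_edges(f_enh, radii=(1, 2, 3), edge_threshold=0.15):
    """Step-4 sketch: multi-scale morphological gradient, then thresholding."""
    grad_max = np.zeros_like(f_enh, dtype=float)
    for r in radii:
        se = morphology.disk(r)
        grad = morphology.dilation(f_enh, se) - morphology.erosion(f_enh, se)
        grad_max = np.maximum(grad_max, grad)  # strongest response per pixel
    grad_max = (grad_max - grad_max.min()) / (np.ptp(grad_max) + 1e-8)
    return grad_max > edge_threshold  # binary edge map

def segment_lesion(f_enh, min_size=500):
    """Step-5 sketch: Otsu threshold (lesions are darker than skin) plus
    morphological clean-up, as a simplified stand-in for active contours."""
    mask = f_enh < filters.threshold_otsu(f_enh)
    mask = morphology.binary_closing(mask, morphology.disk(5))  # seal edge gaps
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    return ndi.binary_fill_holes(mask)

# lesion_region = np.where(segment_lesion(f_enh), f_enh, 0.0)
```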
To automatically identify areas of skin lesions in dermoscopy images, the proposed approach is helpful for dermatologists. The suggested method comprises two steps: pre-processing and segmentation. During pre-processing, an improved method based on thresholding and morphological operations is used to reduce artefacts such as hairs and markers. This pre-processed image is then used for accurate skin lesion region segmentation, yielding the lesion region with enhanced boundaries. The term medical imaging refers to a wide range of techniques used to create images of the inside of a patient's body for the purposes of diagnosis and therapy. Poor contrast and noise, however, are typical degradations in medical images, and the diagnostic task becomes quite challenging when multiple objects are present and neighboring pixel values are very close together. The basic premise of image enhancement methods is to raise the image quality. Medical image enhancement is an integral part of the pre-processing phase of medical image segmentation and is crucial for accurate melanoma detection; the outcome of the final processed image is determined by the contrast enhancement procedure. In medical images, noise and edges both appear as high-frequency components, rendering conventional edge detection algorithms such as the Sobel operator, the Prewitt operator, and the Laplacian of Gaussian less applicable. Medical images used in real-life settings often include noise, shadows, and object boundaries, so it can be difficult to differentiate between the precise edge and background noise or insignificant geometric details. Mathematical morphology is a mathematical theory that can be applied to process and analyze images; using the idea of shapes, it offers a new way to process them.
The proposed model is implemented using Python and executed in Google Colab. The dataset is taken from Kaggle and is available at https://www.kaggle.com/datasets/hasnainjaved/melanoma-skin-cancer-dataset-of-10000-images. The Melanoma Skin Cancer Dataset contains 10,000 images. Many lives can be saved if melanoma skin cancer is detected and treated early, and this dataset supports the development of deep learning models for reliable melanoma categorization. A total of 8000 images were used for training the model, while 2000 images were used for evaluation.
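A sketch of the 80/20 split described above is shown below; the local directory layout and file extension are assumptions about an unpacked copy of the Kaggle dataset, not its documented structure.

```python
import glob
import os
from sklearn.model_selection import train_test_split

# Collect all image paths from a local copy of the Kaggle melanoma dataset.
paths = sorted(glob.glob(os.path.join("melanoma_skin_cancer", "**", "*.jpg"),
                         recursive=True))

# 80/20 split: 8000 images for training, 2000 for evaluation (given 10,000 total).
train_paths, test_paths = train_test_split(paths, test_size=0.2, random_state=42)
print(len(train_paths), len(test_paths))
```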
With the advent of cutting-edge medical technology, medical image enhancement technologies have become a hot topic. Noise, data-gathering devices, lighting conditions, and other factors can degrade medical image quality, so clinicians seek out enhanced images to aid in diagnosis and interpretation. This research proposes MLIQE-EMM-ES for accurate detection of melanoma in dermoscopy images. The proposed model is compared with traditional Quantum Image Segmentation Based on Grayscale Morphology (QISGM) and Hyperspectral and Infrared Image Collaborative Classification Based on Morphology Feature Extraction (TLCP). Compared with these traditional models, the proposed model performs better in image enhancement and segmentation.
Contrast is the ratio between an image's highest and lowest pixel intensities; widening the gap between the brightest and darkest pixels therefore boosts an image's contrast. Each pixel value in an image is identified, and pixel normalization is then performed to balance the contrast. The image pixel contrast processing accuracy levels of the proposed and existing models are shown in Table 2 and Figure 7.
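The contrast definition and the normalization step above can be written compactly as follows; this is a generic min-max formulation, not the paper's exact IPcontr computation.

```python
import numpy as np

def michelson_contrast(gray):
    """Contrast as the spread between the brightest and darkest pixels."""
    i_max, i_min = float(gray.max()), float(gray.min())
    return (i_max - i_min) / (i_max + i_min + 1e-8)

def minmax_normalize(gray):
    """Pixel normalization: stretch intensities to the full [0, 1] range."""
    return (gray - gray.min()) / (np.ptp(gray) + 1e-8)
```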
Table 2. Image pixel contrast processing accuracy levels
| Images Considered | MLIQE-EMM-ES Model | QISGM Model | TLCP Model |
| 100 | 97.5 | 92.3 | 94.7 | 
| 200 | 97.7 | 92.5 | 94.9 | 
| 300 | 97.9 | 92.7 | 95.1 | 
| 400 | 98.0 | 92.9 | 95.3 | 
| 500 | 98.2 | 93.1 | 95.5 | 
| 600 | 98.4 | 93.3 | 95.7 | 
Figure 7. Image pixel contrast processing accuracy levels
Three models were tested across varied numbers of dermoscopy images: MLIQE-EMM-ES, QISGM, and TLCP. Table 2 displays the image pixel contrast processing accuracy levels for each model. The numbers show the percentage accuracy that each model obtained when processing the contrast of image pixels, an important step in making melanoma lesions easier to see for segmentation. With 100 images, MLIQE-EMM-ES achieves an accuracy of 97.5%; with 600 images, it reaches 98.4%, showing that performance remains consistent as the dataset size increases. Maintaining excellent accuracy even with increasing data variability, this pattern demonstrates the model's robustness and adaptability to larger image sets.
Accuracy for the QISGM model begins at 92.3% with 100 images and rises to 93.3% with 600 images. Although it exhibits a similar rising trend, its performance is still significantly lower than MLIQE-EMM-ES, suggesting less effective contrast processing, particularly in low-contrast circumstances. The TLCP model achieves 94.7% accuracy with 100 images, improving to 95.7% with 600 images, but still trails MLIQE-EMM-ES. While TLCP's collaborative categorization methodology is useful, it falls short of the proposed strategy when it comes to contrast enhancement quality.
Morphology refers to a broad category of image processing techniques that analyze images according to their shapes. By applying a structuring element to an input image, morphological procedures generate an identically sized output image. The process of dilation enlarges objects and fills in minute imperfections, while erosion removes insignificant particles, leaving behind only the actual objects. Table 3 and Figure 8 show the morphology image processing accuracy levels.
Table 3. Morphology image processing accuracy levels
| Images Considered | MLIQE-EMM-ES Model | QISGM Model | TLCP Model |
| 100 | 97.3 | 95.5 | 94.1 | 
| 200 | 97.5 | 95.7 | 94.3 | 
| 300 | 97.7 | 95.9 | 94.5 | 
| 400 | 97.9 | 96.1 | 94.7 | 
| 500 | 98.0 | 96.3 | 94.9 | 
| 600 | 98.2 | 96.4 | 95.1 | 
Figure 8. Morphology image processing accuracy levels
With an accuracy of 97.3% for 100 images and a steady increase to 98.2% with 600 images, the MLIQE-EMM-ES model shows the highest accuracy across all dataset sizes. This steady progress reflects the model's capability to keep morphological enhancement at a high level of accuracy even as input image volume and variability grow. The second-best performer is the QISGM model, which goes from 95.5% with 100 images to 96.4% with 600 images. QISGM demonstrates a similarly gradual rising trend, but the 1%-1.8% gap across all cases suggests that it is not as successful at enhancing lesion details, especially in complicated or noisy images. Starting at 94.1% with 100 images and increasing to 95.1% with 600 images, the TLCP model consistently achieves the lowest accuracy of the three. Its collaborative classification method is useful, but it appears to have less dermoscopy-optimized morphological processing capabilities, which could hinder its performance on downstream segmentation tasks.
By applying a structuring element to an input image, morphological approaches generate an output image of the same size, with the value of each output pixel determined by comparing the corresponding pixel in the input image with its neighbors. Erosion and dilation are the two most fundamental morphological processes: dilating an image adds pixels to the boundaries of objects, while eroding it removes pixels from those boundaries. The image quality enhancement time levels are indicated in Table 4 and Figure 9.
Table 4. Image quality enhancement time levels (seconds)
| Images Considered | MLIQE-EMM-ES Model | QISGM Model | TLCP Model |
| 100 | 7.7 | 15.5 | 18.3 | 
| 200 | 7.9 | 15.7 | 18.5 | 
| 300 | 8.0 | 15.9 | 18.7 | 
| 400 | 8.2 | 16.1 | 18.9 | 
| 500 | 8.4 | 16.3 | 19.1 | 
| 600 | 8.6 | 16.5 | 19.3 | 
Figure 9. Image quality enhancement time levels
Starting at 7.7 seconds for 100 images and rising only slightly to 8.6 seconds for 600 images, the MLIQE-EMM-ES model consistently shows the fastest processing times among the three approaches. Its computational efficiency and scalability, demonstrated by the minor rise in processing time on bigger datasets, make the model suitable for real-time or high-throughput melanoma screening applications.
By comparison, the QISGM model takes much longer: 15.5 seconds for 100 images and 16.5 seconds for 600 images. Its processing time is roughly double that of MLIQE-EMM-ES, suggesting a larger computing cost; although the increase remains gradual, this could be a problem in time-sensitive clinical contexts. With processing times ranging from 18.3 seconds for 100 images to 19.3 seconds for 600 images, the TLCP model consistently reports the longest processing times. Its slower execution is likely caused by its more resource-intensive collaborative categorization and feature extraction stages, even though it achieves competitive accuracy on certain metrics.
Table 5. Enhanced morphology image processing accuracy levels
| Images Considered | MLIQE-EMM-ES Model | QISGM Model | TLCP Model |
| 100 | 97.7 | 94.3 | 93.7 | 
| 200 | 97.9 | 94.5 | 93.9 | 
| 300 | 98.1 | 94.7 | 94.1 | 
| 400 | 98.3 | 94.9 | 94.3 | 
| 500 | 98.5 | 95.1 | 94.5 | 
| 600 | 98.8 | 95.3 | 94.7 | 
Figure 10. Enhanced morphology image processing accuracy levels
In the realm of image processing, the classical theoretical notion of local contrast enhancement via mathematical morphology has been put into practice. The overall goal is to keep the speckle region unaltered while improving the tissue boundaries. The criterion for identifying the speckle region is a similarity value derived from histogram matching between the processing window's histogram and a reference histogram obtained from a speckle area. To achieve local contrast enhancement, the intensity values of the scale-specific features of the tissue boundary areas, obtained through the multiscale top-hat transformation, are adjusted. Finally, the locally enhanced features are blended together to create the final enhanced image. Table 5 and Figure 10 show the enhanced morphology image processing accuracy levels.
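As a rough illustration of the speckle-similarity criterion described above, the fragment below compares a local window's histogram with a reference speckle histogram using histogram intersection; the bin count and the similarity threshold are assumptions, not values specified by the method.

```python
import numpy as np

def histogram_similarity(window, reference, bins=32):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    h1, _ = np.histogram(window, bins=bins, range=(0.0, 1.0), density=True)
    h2, _ = np.histogram(reference, bins=bins, range=(0.0, 1.0), density=True)
    h1, h2 = h1 / (h1.sum() + 1e-8), h2 / (h2.sum() + 1e-8)
    return float(np.minimum(h1, h2).sum())

# A window is treated as speckle (left unaltered) when its histogram closely
# matches the reference speckle patch, e.g. similarity > 0.8 (assumed cutoff).
```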
In an image, the boundary between two objects is called an edge, so edge detection is helpful for measuring objects, recognizing them, or dividing up images. By examining changes in the image's brightness, edge detection identifies the existence and position of edges. Many procedures in image processing make use of edge detection; it can pick up subtle changes in grayscale, although its response can be affected by noise. The image edge detection time levels are represented in Table 6 and Figure 11.
Table 6. Image edge detection time levels (seconds)
| Images Considered | MLIQE-EMM-ES Model | QISGM Model | TLCP Model |
| 100 | 11.0 | 17.6 | 19.5 | 
| 200 | 11.2 | 17.8 | 19.8 | 
| 300 | 11.4 | 17.9 | 19.9 | 
| 400 | 11.7 | 18.1 | 20.1 | 
| 500 | 11.9 | 18.3 | 20.2 | 
| 600 | 12.0 | 18.5 | 20.4 | 
Figure 11. Image edge detection time levels
The MLIQE-EMM-ES model finds edges the fastest for all dataset sizes, starting at 11.0 seconds and rising only slightly to 12.0 seconds. This small increase as the dataset grows shows how efficient and scalable the model is, allowing it to handle bigger datasets without major delays. The QISGM model takes longer to find edges, starting at 17.6 seconds for 100 images and going up to 18.5 seconds for 600 images. The increase is gradual, but its consistently longer timings compared to MLIQE-EMM-ES indicate that its processing pipeline is not as optimized for speed, which could be a problem for diagnostic processes that need to be fast. The TLCP model has the slowest edge detection performance, starting at 19.5 seconds for 100 images and going up to 20.4 seconds for 600 images. Its slower processing speed is probably due to its more complicated feature extraction and classification methods, which require more computing power.
Figure 12. Dermoscopy images segmentation
The dermoscopy image segmentation process on numerous images is indicated in Figure 12. The proposed model applies the segmentation process accurately and extracts the relevant region from the dermoscopy images.
The goal of image segmentation, a computer vision approach, is to facilitate object detection and associated tasks by dividing a digital image into distinct groups of pixels. To identify potentially relevant areas for additional processing, image segmentation divides an image into sections, each with its own unique shape and boundary. The segmentation accuracy levels are indicated in Table 7 and Figure 13.
Table 7. Segmentation accuracy levels
| Images Considered | MLIQE-EMM-ES Model | QISGM Model | TLCP Model |
| 100 | 98.3 | 92.7 | 93.3 | 
| 200 | 98.5 | 92.9 | 93.5 | 
| 300 | 98.7 | 93.0 | 93.7 | 
| 400 | 98.9 | 93.2 | 93.9 | 
| 500 | 99.0 | 93.5 | 94.1 | 
| 600 | 99.2 | 93.7 | 94.3 | 
Figure 13. Segmentation accuracy levels
Figure 14. ROC curve
Figure 14 shows the ROC curve, which plots the True Positive Rate (sensitivity) against the False Positive Rate (1 − specificity) at different classification thresholds, to demonstrate the diagnostic performance of the melanoma detection model. The curve closely follows the upper-left border of the plot, denoting almost flawless classification. An Area Under the Curve (AUC) value of 0.99 indicates a near-perfect separation of melanoma from non-melanoma instances, with minimal overlap in prediction scores. The red dot, the point nearest the upper left corner of the graph, marks the best cutoff value of 0.72 for balancing sensitivity and specificity. At this threshold, the model's sensitivity and specificity are both 0.99, meaning it detects nearly all real melanoma instances and correctly identifies nearly all non-melanoma cases. While near-flawless results are unusual in practice and can signal that the model is being tested on a small or well-separated dataset, this performance indicates that the model achieves near-optimal classification on the examined dataset.
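The curve, its AUC, and the reported operating point can be reproduced from any set of predictions as sketched below; y_true and y_score are synthetic placeholders for the evaluation labels and model probabilities, and the best threshold is picked with Youden's J statistic (the point nearest the upper left corner).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic placeholders for ground-truth labels (1 = melanoma) and scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)
y_score = np.clip(0.8 * y_true + rng.normal(0.1, 0.15, size=2000), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

best = np.argmax(tpr - fpr)  # Youden's J: maximize sensitivity + specificity - 1
print(f"AUC={auc:.2f}  threshold={thresholds[best]:.2f}  "
      f"sensitivity={tpr[best]:.2f}  specificity={1 - fpr[best]:.2f}")
```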
Figure 15 compares the melanoma detection model's sensitivity and specificity. The blue region represents the sensitivity: 99.3% of real melanoma cases were accurately recognized by the model, with only 0.7% missed (false negatives). As the green specificity metric shows, the model properly identified 99.1% of non-melanoma cases, with just 0.9% incorrectly tagged as melanoma (false positives).
Figure 15. Sensitivity and specificity levels
The suggested MLIQE-EMM-ES model outperforms its competitors because it tackles a major problem with dermoscopy image analysis: the lack of contrast between lesions and the surrounding skin. The model sharpens otherwise undetectable lesion borders through improved multi-scale morphological procedures while eliminating irrelevant background noise. When lesions have slight color fluctuations or fuzzy edges, which classical morphology or thresholding methods do not always pick up on, this selective augmentation makes segmentation far more accurate. The substantial improvement over baseline techniques can be explained by the use of edge-based segmentation following morphological augmentation, which enables accurate boundary detection. The model works best on reasonably well-defined lesions under inconsistent lighting, tiny artifacts, or slight hair occlusions; in these cases, the harmony between morphological filtering and contrast normalization allows precise localization of lesions without masking important diagnostic characteristics.
It is important to give serious thought to the method's limits, notwithstanding its virtues. Lesion architecture can be altered during the inpainting process if the lesion is hairy with dense occlusions. Similarly, the contrast enhancement method may not work as well on patients with dark skin or atypical melanoma subtypes, which could result in false negatives. Many benchmark datasets have a disproportionate amount of lighter skin tones and clearer lesions acquired under standardized conditions, which raises issues of dataset bias, and the dependence on a small number of dermoscopy datasets is problematic: it raises the possibility that the reported accuracy of the model overstates its practical usefulness. In addition, although the morphological approaches are computationally efficient, they may not work as well in highly variable imaging settings such as low-resolution images, overexposed areas, or darkened areas. Examining these aspects highlights the suggested method's strengths and weaknesses and the necessity of more extensive validation.
In order to fully assess the MLIQE-EMM-ES model, it is important to consider all possible failure situations; these help to identify its limitations and provide direction for future development. Although the model performs admirably on the tested datasets, it is susceptible to performance degradation under a number of difficult circumstances. A major difficulty is low-resolution images, in which important lesion boundaries and small color variations become blurry or pixelated; if morphological processes fail to capture precise structural information in such circumstances, incomplete segmentation or incorrect edge localization can result. Hairy lesions also introduce occlusions, which can lead inpainting to misrepresent lesion structures or leave residual artefacts after hair removal preprocessing, especially in cases where hair density is high or lesion margins overlap.
Another drawback is that melanoma is less easily distinguished from the surrounding skin on darker skin tones, which can reduce the efficacy of top-hat/bottom-hat contrast enhancement and raise the risk of false negatives. Lighting fluctuations or shadows introduced during image acquisition make lesion detection in these cases even more challenging. Atypical lesion presentations, such as lesions with unusual textures or amelanotic melanomas with low pigmentation, may also fall outside the patterns the model has learned, causing misclassification or reduced confidence.
Explicitly recognizing and testing against these circumstances would not only strengthen the model's resilience but also shed light on its practical clinical utility. Future versions could incorporate adaptive preprocessing that accounts for skin tone, more sophisticated hair-removal algorithms for dense occlusions, and super-resolution approaches for low-quality inputs; one possible form of such adaptive preprocessing is sketched below.
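As one hypothetical direction only, adaptive preprocessing could replace a fixed global contrast gain with locally adaptive histogram equalization on the lightness channel, as in the sketch below (the clipLimit and tileGridSize values are illustrative assumptions):

import cv2

def adaptive_contrast(bgr):
    # Work in LAB so only lightness is equalized, preserving lesion color.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)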
This study addressed the serious problem of poor contrast in melanoma dermoscopy images, which obscures lesion borders and hampers segmentation accuracy. Hairs and their shadows on the skin can likewise obscure important diagnostic information about the lesion, hindering automated classification and diagnosis. This research proposed an image quality enhancement model, MLIQE-EMM-ES, that combines enhanced mathematical morphology operators with an edge-based segmentation technique for accurate detection of melanoma in dermoscopy images. Because morphological erosion and dilation alone proved insufficient, a new way of estimating the image background using morphologically related transformations was introduced, together with modifications to improve morphological contrast. The proposed model achieved 99.2% accuracy in segmentation and 98.8% accuracy in enhanced morphology image processing. Future work will focus on improving the adaptability and generalizability of the MLIQE-EMM-ES model. To reduce the possibility of dataset bias, the validation pool must be enlarged and diversified to include different skin tones, unusual melanoma subtypes, and images acquired with different dermoscopy systems. Hybrid image processing techniques can also be applied in the future for further image quality enhancement and more accurate segmentation.
[1] Liu, W., Wang, L., Cui, M. (2022). Quantum image segmentation based on grayscale morphology. IEEE Transactions on Quantum Engineering, 3: 1-12. https://doi.org/10.1109/TQE.2022.3223368
[2] Cao, D., Zhang, M., Li, W., Ran, Q. (2021). Hyperspectral and infrared image collaborative classification based on morphology feature extraction. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14: 4405-4416. https://doi.org/10.1109/JSTARS.2021.3072843
[3] Nie, Y., Sommella, P., Carratù, M., Ferro, M., O’Nils, M., Lundgren, J. (2022). Recent advances in diagnosis of skin lesions using dermoscopic images based on deep learning. IEEE Access, 10: 95716-95747. https://doi.org/10.1109/ACCESS.2022.3199613
[4] Alam, M.S., Wang, D., Liao, Q., Sowmya, A. (2023). A multi-scale context aware attention model for medical image segmentation. IEEE Journal of Biomedical and Health Informatics, 27(8): 3731-3739. https://doi.org/10.1109/JBHI.2022.3227540
[5] Talavera-Martínez, L., Bibiloni, P., González-Hidalgo, M. (2021). Hair segmentation and removal in dermoscopic images using deep learning. IEEE Access, 9: 2694-2704. https://doi.org/10.1109/ACCESS.2020.3047258
[6] Moon, C.I., Lee, O. (2022). Skin microstructure segmentation and aging classification using CNN-based models. IEEE Access, 10: 4948-4956. https://doi.org/10.1109/ACCESS.2021.3140031
[7] Kimori, Y. (2022). A morphological image preprocessing method based on the geometrical shape of lesions to improve the lesion recognition performance of convolutional neural networks. IEEE Access, 10: 70919-70936. https://doi.org/10.1109/ACCESS.2022.3187507
[8] Olmez, Y., Koca, G.O., Sengür, A., Acharya, U.R., Mir, H. (2024). Improved PSO with visit table and multiple direction search strategies for skin cancer image segmentation. IEEE Access, 12: 840-867. https://doi.org/10.1109/ACCESS.2023.3347587
[9] Gururaj, H.L., Manju, N., Nagarjun, A., Aradhya, V.N.M., Flammini, F. (2023). DeepSkin: A deep learning approach for skin cancer classification. IEEE Access, 11: 50205-50214. https://doi.org/10.1109/ACCESS.2023.3274848
[10] Siegel, R.L., Miller, K.D., Fuchs, H.E., Jemal, A. (2021). Cancer statistics, 2021. CA: A Cancer Journal for Clinicians, 71(1): 7-33. https://doi.org/10.3322/caac.21654
[11] Adegun, A., Viriri, S. (2020). Deep learning techniques for skin lesion analysis and melanoma cancer detection: A survey of state-of-the-art. Artificial Intelligence Review, 54: 811-841. https://doi.org/10.1007/s10462-020-09888-5
[12] Mahmud, A., Azam, S., Khan, I.U., Montaha, S., et al. (2024). SkinNet-14: A deep learning framework for accurate skin cancer classification using low-resolution dermoscopy images with optimized training time. Neural Computing and Applications, 36(30): 18935-18959. https://doi.org/10.1007/s00521-024-10225-y
[13] Razzaq, O.A. (2024). Investigation of machine learning on gene expression data for cancer detection. Mathematical Modelling of Engineering Problems, 11(9): 2368-2376. https://doi.org/10.18280/mmep.110910
[14] Pérez, E., Reyes, O., Ventura, S. (2021). Convolutional neural networks for the automatic diagnosis of melanoma: An extensive experimental study. Medical Image Analysis, 67: 101858. https://doi.org/10.1016/j.media.2020.101858
[15] Lakshman Narayana, V., Lakshmi Patibandla, R.S.M., Pavani, V., Radhika, P. (2023). Optimized nature-inspired computing algorithms for lung disorder detection. In Nature-Inspired Intelligent Computing Techniques in Bioinformatics. Studies in Computational Intelligence, Springer, Singapore. https://doi.org/10.1007/978-981-19-6379-7_6
[16] Nigar, N., Umar, M., Shahzad, M.K., Islam, S., Abalo, D. (2022). A deep learning approach based on explainable artificial intelligence for skin lesion classification. IEEE Access, 10: 113715-113725. https://doi.org/10.1109/ACCESS.2022.3217723
[17] Narayana, V.L., Sirisha, S., Divya, G., Pooja, N.L.S., Nouf, S.A. (2022). Mall customer segmentation using machine learning. In 2022 International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, pp. 1280-1288. https://doi.org/10.1109/ICEARS53579.2022.9752447
[18] Mahbod, A., Schaefer, G., Wang, C., Dorffner, G., Ecker, R., Ellinger, I. (2020). Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Computer Methods and Programs in Biomedicine, 193: 105475. https://doi.org/10.1016/j.cmpb.2020.105475
[19] Patibandla, R.S.M.L., Narayana, V.L. (2021). Computational intelligence approach for prediction of COVID-19 using particle swarm optimization. In Computational Intelligence Methods in COVID-19: Surveillance, Prevention, Prediction and Diagnosis. Studies in Computational Intelligence, Springer, Singapore. https://doi.org/10.1007/978-981-15-8534-0_9
[20] Liu, Y., Jain, A., Eng, C., Way, D.H., et al. (2020). A deep learning system for differential diagnosis of skin diseases. Nature Medicine, 26(6): 900-908. https://doi.org/10.1038/s41591-020-0842-3
[21] Cai, L., Hou, K., Zhou, S. (2024). Intelligent skin lesion segmentation using deformable attention transformer U-net with bidirectional attention mechanism in skin cancer images. Skin Research and Technology, 30(8): e13783. https://doi.org/10.1111/srt.13783
[22] Hassan, H.F., Ozer, S.T. (2025). A comparative study of deep learning models for skin cancer detection: Leveraging transfer learning. Mathematical Modelling of Engineering Problems, 12(1): 166-180. https://doi.org/10.18280/mmep.120119
[23] Maurya, A., Stanley, R.J., Lama, N., Jagannathan, S. (2022). A deep learning approach to detect blood vessels in basal cell carcinoma. Skin Research and Technology, 28(4): 571-576. https://doi.org/10.1111/srt.13159
[24] Narayana, V.L., Sujatha, V., Sri, K.S., Pavani, V., Prasanna, T.V.N., Ranganarayana, K. (2023). Computer tomography image based interconnected antecedence clustering model using deep convolution neural network for prediction of COVID-19. Traitement du Signal, 40(4): 1689-1696. https://doi.org/10.18280/ts.400437
[25] Gouda, W., Sama, N.U., Al-Waakid, G. (2022). Detection of skin cancer based on skin lesion images using deep learning. Healthcare, 10(7): 1183. https://doi.org/10.3390/healthcare10071183
[26] Al-masni, M.A., Kim, D.H., Kim, T.S. (2020). Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Computer Methods and Programs in Biomedicine, 190: 105351. https://doi.org/10.1016/j.cmpb.2020.105351
[27] Patibandla, R.S.M.L., Vejendla, L.N. (2022). Significance of blockchain technologies in industry. In Blockchain Security in Cloud Computing. EAI/Springer Innovations in Communication and Computing. Springer, Cham. https://doi.org/10.1007/978-3-030-70501-5_2
[28] Srinivasu, P.N., SivaSai, J.G., Ijaz, M.F., Bhoi, A.K., Kim, W., Kang, J.J. (2021). Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors, 21(8): 2852. https://doi.org/10.3390/s21082852
[29] Shelatkar, T., Urvashi, D., Shorfuzzaman, M., Alsufyani, A., Lakshmanna, K. (2022). Diagnosis of brain tumor using light weight deep learning model with fine-tuning approach. Computational and Mathematical Methods in Medicine, 2022(1): 2858845. https://doi.org/10.1155/2022/2858845
[30] Sarwar, N., Irshad, A., Naith, Q.H., Alsufiani, K.D., Almalki, F.A. (2024). Skin lesion segmentation using deep learning algorithm with ant colony optimization. BMC Medical Informatics and Decision Making, 24(1): 265. https://doi.org/10.1186/s12911-024-02686-x