Adaptive Fine-Tuned AdaBoost and Improved Firefly Algorithm for Skin Cancer Detection

Anupama Damarla, Sumathi Doraikannan*

School of Computer Science and Engineering, VIT-AP University, Amaravati 522237, India

Corresponding Author Email: sumathi.d@vitap.ac.in

Pages: 1583-1596 | DOI: https://doi.org/10.18280/ts.410346

Received: 22 February 2023 | Revised: 12 September 2023 | Accepted: 31 October 2023 | Available online: 26 June 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Skin cancer is a major malignancy caused by exposure to the sun's ultraviolet radiation, and patients are often oblivious to its early stages of development. Computer Vision Systems (CVS) that evaluate digital images of skin lesions are being used in research to accomplish an early diagnosis of melanoma; these methods give an automated analytical model for a precise and quick assessment of the lesions. In this study, we propose a Median Filter (MF) and Contour-Based Image Enhancement (CIE) for pre-processing, an Inception v3 Clustering Algorithm (IV3-CA) for data segmentation, and the Inception ResNet v2 (IRV2) approach for feature extraction. Furthermore, the efficiency of the CNN was enhanced using an Improved Firefly Algorithm (IFFA), and classification was performed with the Adaptive Fine-Tuned AdaBoost algorithm. The performance is investigated on the ISIC-2017 dataset using 2000 images. According to our assessments, the modified model has remarkable identification benefits and achieves an accuracy of 97.14 percent, outperforming current approaches.

Keywords: 

skin cancer, Median Filter (MF), Contour-Based Image Enhancement (CIE), Inception v3 Clustering Algorithm (IV3-CA), Inception ResNet v2 (IRV2), Adaptive Fine Tuned AdaBoost Algorithm (AFTAA), Improved Firefly Algorithm (IFFA)

1. Introduction

Melanoma is among the most frequently diagnosed and extremely dangerous carcinomas currently being investigated. Skin cancer is the irregular development of skin cells; if left untreated, the cancerous cells might spread to the surrounding living tissue. Many morphological characteristics, including color systems, dots, streaks, blue-white regions, and blotches, can be identified. Squamous cell carcinoma (SCC), basal cell carcinoma (BCC), and malignant melanoma (MM) are the three most prevalent kinds of skin tumors. Malignant melanoma is the worst kind of skin cancer: it typically begins as a mole and then develops into a large dark area on the skin. Early diagnosis and treatment are critical in the worst cases [1].

After the excision of a lesion, there is a possibility of its recurrence. To precisely recognize skin conditions for cancer diagnosis, a mere visual examination of the skin is inadequate; the inclusion of an automated scheme in the diagnostic test for cancer detection is crucial. This scheme is responsible for various tasks: picture pre-processing to improve contrast by removing artifacts and noise, segmentation to separate the lesion from healthy tissue through boundary detection, feature extraction to enhance the speed and accuracy of supervised methods, and categorization by labeling image data into predefined classes. The uneven margins, structure, thickness, and color variation of the lesion make it hard to distinguish from healthy skin in image data, and the appearance of artifacts on the lesions makes segmentation that much more difficult. Artifacts, including bubbles, hair, ink marks, spots, and pores, produce abnormalities in dermoscopic pictures, leading to improper segmentation during skin lesion assessment. As a result, preparing dermoscopy pictures to eliminate image capture distortion, texture and color variations, and unwanted structures like hairs, air bubbles, and blurs surrounding the lesion is critical in determining the quality of lesion diagnosis. The non-invasive nature of image recognition has made it a popular method for identifying melanoma skin cancer, so fast and suitable therapy can be given to the patient.

As a result, identifying new or changing lesions that could be malignant is critical; skin tumors that are discovered and removed early on are virtually always curable. Pre-processing is necessary for skin cancer testing, and several well-established methods exist [2]. Segmentation is defined as the process of fragmenting out the desired object region. Feature extraction plays a crucial role in acquiring a discerning representation of skin lesions; identifying valuable attributes is a challenging endeavor, and extensive research in this field has enabled the recognition of a diverse array of features that define skin lesions [3]. Feature selection is the problem of deciding which features are essential and adequate to represent a concept. Feature classification is a difficult task that necessitates the careful involvement of multiple variables: selecting training data, processing images, extracting features, choosing appropriate classification techniques, and then assessing performance are all significant steps in image classification [4].

The following are the contributions of this work:

  • The skin images are preprocessed using MF and CIE techniques to homogenize all true skin images and improve their quality, respectively.
  • During the segmentation process, the data is broken up into smaller groups called "subclasses." IV3-CA provides sparse representations with discriminative light codes to characterize skin lesion features.
  • In the feature extraction stage, IRV2 is implemented to extract the proportions of skin lesions.
  • To identify dependable feature subsets, the proposed IFFA, which follows the behavior of fireflies, is employed.
  • Finally, to achieve the optimal classification of cutaneous tumors, the recommended AFTAA technique is implemented.

The remainder of this paper is organized as follows: Part 2 explains the literature survey and problem statement. Part 3 explains the presented approach. Part 4 displays the findings and performance investigation. Part 5 summarizes the overall research.

2. Literature Survey

Standard 24-bit color images often encompass a vast array of colors, rendering them challenging to manage directly. For this reason, color quantization is frequently employed as a preliminary procedure for color image segmentation. Celebi et al. [5] demonstrated that, to achieve accurate quantization for skin lesions, it is recommended to employ a color quantization technique that reduces the number of colors in the image to 20. Lack of contrast is one issue that makes it challenging to identify borders in dermoscopy images; the image contrast is increased to enhance the visibility of the lesion's edges. Celebi et al. [6, 7] suggested an iterative technique that eliminates the black frames created during the digitization process, based on the lightness component of the HSL (Hue-Saturation-Lightness) color space. Gomez et al. [8] have recently introduced a contrast enhancement technique that employs Independent Histogram Pursuit (IHP). Some effective methods for improving image contrast are histogram stretching and histogram equalization: the former maps the pixel values onto the range [0, 255], and the latter modifies pixel values to achieve an equal distribution. To obtain high-contrast lesion images, homomorphic filtering [9], the Fast Fourier Transform (FFT), and high-pass filters can be utilized to correct for specular reflection fluctuations or uneven lighting. The adaptive and recursive weighted MF developed by Sweet [10] is a highly effective filtering technique that can be employed to eliminate air bubbles and dermoscopic gels. A line-finding method that uses the derivative of Gaussian (DOG) in two dimensions [11] and an exemplar-based object removal algorithm [12] can be employed to eliminate dark lines, such as ruler markings. Hair is a frequently encountered undesirable element in dermatoscopic images. Schmid [13] employed mathematical morphology in their research. In the study conducted by Fleming et al. [14], the detection of curvilinear structures was carried out in combination with multiple constraints, followed by gap-filling. The implementation of erosion/dilation techniques involving straight-line segments has been found to effectively eliminate or weaken the presence of hairs. Schmid et al. [15, 16] proposed a method based on applying the morphological closing operator [17] to the 3 different components of the L*u*v* uniform color space [18]. Zhou et al. [19] and Wighton et al. [20] have suggested advanced methodologies that apply in-painting techniques. However, many of these techniques frequently result in undesirable blurring, disruption of tumor texture, and color bleeding.

Soft computing has several applications, and image segmentation is one of the most promising [21]. To separate skin lesions in dermoscopic pictures, a picture segmentation process relying on perceptual color difference saliency (PCDS) was proposed [22]. Some difficult dermoscopic pictures from the ISIC 2016 and PH2 datasets were used to evaluate PCDS. The calculation of the average rating of background color pixels and the average score of object color pixels is an essential part of the PCDS method that requires additional research, and color indicators must be paired with other signals, such as texture, to maximize the PCDS system's effectiveness. Manually segmenting dermoscopic skin images is both laborious and vulnerable to errors. In the work [23], a unique network-in-network CNN technique was offered for lesion separation. Before cropping and feeding the segmentation model with the lesion masking, the picture is pre-processed with a Faster RCNN. The segmentation design combines UNet and Hourglass. The model was implemented on the ISIC 2018 dataset and also cross-validated on the PH2 and ISBI 2017 datasets. A melanoma recognition system was proposed and implemented on the ISBI 2016 dataset. The system uses a Fully Convolutional Residual Network (FCRN) to precisely segment the lesion, aiming to enhance the accuracy of melanoma detection, and a Deep Residual Network (DRN) was used to classify melanoma and non-melanoma patches. In the work [24], R2U-Net (Recurrent Residual Convolutional (RRN) U-Net), an improved version of the U-Net model, was proposed and tested for skin cancer segmentation on ISIC-2017. The results showed significant improvement over the SegNet and ResU-Net models, and the model was robust to the impact of noise, artifacts, and background inhomogeneity on the segmentation outcome. Li et al. [25, 26] proposed 2 distinct methods for addressing this limitation in liver tumor segmentation: the first involves morphological operations, and the second incorporates spatial information into fuzzy c-means clustering.

Based on image data, a CNN relying on the DenseNet design was developed and tested for the automatic detection of 7 skin diseases [27]. It was intended to produce a series of specialized structures with highly discriminative characteristics using a unique multilevel fine-tuning algorithm. In the work [28], the ideal combination of Wiener filter coefficients enabling denoising, as well as the mean PSF computation, is determined. Features were extracted based on highly perceptive characteristics like color and boundaries, the correlation coefficient was used on the feature set for cell distinction, and an efficiency of 95% and sensitivity of 93.3% were achieved. Malignant melanoma (A431) and benign (HaCaT) samples were differentiated using electrical impedance spectroscopy (EIS) with high sensitivity and reusability in the work [29]. After five days of cell growth in the EIS apparatus, the two categories' NI curves at 1465 Hz (the optimal frequency for discrimination) showed that NI levels had risen significantly, and concurrent microscope scanning indicated that A431 and HaCaT cells have diverse growth patterns. A unique core-shell nanofiber drug delivery method for skin cancers was studied in the work [30]: a PCL/PVP hybrid nanofiber core-shell architecture carries CS and 5-FU in the shell and core layers, respectively. The PCL-CS/PVP-5-FU fibers had a diameter of 503 nanometers, superior mechanical qualities, and excellent drug-encapsulating performance. Radiomic analysis, including the evaluation of size, shape, and textural features that carry useful spatial information on pixel distribution and pattern groupings, is included in the work [31]. Top-level features acquired by ResNet50, DenseNet201, and DarkNet53, and bottom-level features from the Discrete Wavelet Transform (DWT) and Local Binary Pattern (LBP), were also employed.

Limited data and the need to reduce the potential for over-fitting are common limitations; dimensionality reduction is an important criterion for feature selection that improves accuracy. A new hierarchical system for the detection of skin disease using microscopic images was presented in the work [32]. In the given method, after denoising the source microscopic images, the relevant region was segmented using basic Otsu thresholding. Next, to recover significant characteristics from the photos, feature extraction was applied to the filtered image. The best characteristics were chosen via a modified metaheuristic technique known as the Modified Thermal Exchange Optimization procedure, improving the system's accuracy and reliability to produce an exact solution. An innovative two-stage genetic programming (GP) [33] strategy for feature selection and feature construction in melanoma image categorization was developed; the dermoscopy data are described with local binary patterns. Less commonly employed classification techniques score higher when using the GP-selected and GP-constructed features. The authors of the work [34] created a novel CNN optimization technique for detecting skin cancer from input data: an enhanced version of the whale optimization technique was used to optimize the CNN, identifying the network's ideal weights and biases to lower the discrepancy between the system's output and the true output.

A deep learning approach for multi-class melanoma histologic picture categorization was demonstrated in the work [35]. They suggest employing the effective Inception v3 model for the initial 4-class categorization to take advantage of recent improvements in computer recognition, and for image-wise categorization they present a novel ensemble method for combining patch probabilities. With base classifiers, a two-step approach showed some promising results on the ISIC 2019 dataset [36]: the initial stage uses a stacking system instead of simple averaging, and the CS-KSU Module Set performed well in identifying unique categories, achieving a BMA AUC score of 59.1 percent for the unidentified category, which is the crux of this task. The work [37] provides a comparative analysis of dermoscopy image categorization of skin lesion malignancy. These methods improve visual quality and generalization ability by correcting brightness and contrast and removing artifacts; the researchers used random contrast and brightness adjustments to augment the skin disease data. Augmentation increased the training dataset size and reduced overfitting. The classification accuracy reached a high of 92.08 percent, with an F-score of 92.74 percent. Apart from the previous discussions, Table 1 shows a literature summary of various methods. There are a few more significant limitations associated with Fuzzy C-Means (FCM)-based algorithms when utilized for image segmentation: firstly, they lack spatial details; secondly, they primarily focus on intensity information without considering other factors. The presence of textured or differently colored regions on the skin can result in over-segmentation. All of these issues can escalate segmentation errors, which motivated us to propose this model.

2.1 Research gap

Firstly, previous works suffer from over-segmentation and lack spatial detail, which makes it difficult to identify the parts of an image. Hence one purpose of this work is to segment the data spatially, which is the primary purpose of the inception model. Secondly, reduced classifier complexity for better generalization behavior also motivated this work.

The main objective of this work is to determine the perimeter of the cutaneous lesion in digital dermatoscopic images and to locate lesions that correspond to melanoma, rather than to determine the disease's prognosis. Accurate execution of this step is crucial because numerous features utilized in the evaluation of melanoma risk are derived from the lesion border. The aim is to develop an Adaptive Fine-Tuned AdaBoost Algorithm (AFTAA) and IFFA model that effectively diagnoses and classifies skin cancer into different categories using an optimized set of features.

3. Proposed Work

The suggested method involves preprocessing before segmenting the lesion from the healthy skin. In the pre-processing phase, the MF is used for de-noising and CIE for data enrichment. In the segmentation phase, the IV3-CA is utilized on the pre-processed data to segment the images, and the IRV2 methodology is applied in the extraction stage to extract features from the segmented data. Finally, the selected data is classified using the proposed AFTAA for skin cancer classification, as shown in Figure 1.

3.1 Problem statement

More people get epidermal cancer than all other types of cancer combined, and skin cancer death rates for both melanoma and non-melanoma keep rising. Skin cancer is a poorly understood illness that primarily affects women but also affects a wide range of people. The accuracy and speed of treatment can both be significantly improved by computer-assisted testing, which is capable of extracting characteristics like color variation, asymmetry, and texture qualities that are not easily apparent to human vision. Several approaches and techniques have been implemented to enhance and detect skin cancers, including the 7-point checklist, the ABCD guideline, and the Menzies approach. Consequently, we present the adaptive fine-tuned AdaBoost algorithm (AFTAA), which offers both good performance and speed for skin cancer detection.

3.2 Dataset

In this research, we collected nearly 2000 dermoscopic images from the ISIC-2017 dataset to assess the effectiveness of the proposed approach. ISIC-2017 is a widely used, publicly accessible dataset. It contains three classes of raw dermoscopic images, namely Nevus (Nev, 1843), Seborrheic Keratosis (SK, 386), and Melanoma (Mel, 521), utilized to detect pigmented skin lesions. A total of 70% of the images are used for training, 15% for validation, and the remaining 15% for testing. Figure 2 displays examples from the skin cancer dataset, including benign and malignant cases. The dataset comes from a variety of demographics and ages, and may be used for segmentation, feature extraction, selection, and classification. Table 2 shows the division of the dataset.
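
For reproducibility, the 70/15/15 split can be produced with a two-stage stratified split. The sketch below is one possible realization, assuming the image paths and labels are already loaded into memory (the function name and random seed are illustrative, not from the paper).

    # A two-stage stratified 70/15/15 split (hypothetical helper).
    from sklearn.model_selection import train_test_split

    def split_dataset(paths, labels, seed=42):
        # First carve out 70% of the images for training.
        train_p, rest_p, train_y, rest_y = train_test_split(
            paths, labels, train_size=0.70, stratify=labels, random_state=seed)
        # Split the remaining 30% evenly into validation and test (15% each).
        val_p, test_p, val_y, test_y = train_test_split(
            rest_p, rest_y, train_size=0.50, stratify=rest_y, random_state=seed)
        return (train_p, train_y), (val_p, val_y), (test_p, test_y)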

Table 1. Literature summary of various methods

| Ref | Dataset | Model | Highlights | Limitations | Acc (%) |
|---|---|---|---|---|---|
| [38] | ISIC Archive | VGG | Investigated four distinct data augmentation techniques and a multi-layer augmentation technique for melanoma classification. | The examined augmentation techniques have certain limitations and have not been tested on a sizable number of datasets. | 82.9 |
| [39] | Private dataset | ResNet34, ResNet50, ResNet101, ResNet152 | Methods for enhancing dermoscopy categorization using deep learning and for producing datasets were suggested. | Information from further modalities, such as the patient's medical background or details on additional symptoms, is not taken into account. | 85.0 |
| [40] | IAD dataset | VGG19 | The VGG-19 architecture was initially used to assess melanoma thickness. | The clinical significance of accurately predicting melanoma thickness has led to a shift away from using pre-training approaches for comparison. | 87.2 |
| [41] | HAM10k, ISIC2019 | ResNeXt, SeResNeXt, DenseNet, Xception, ResNet | A grid search strategy was used to determine the best ensemble learning algorithms for skin cancer categorization. | The quantity of training data remains inadequate, and most networks used in ensemble learning derive from identical models. | 88.0 |
| [42] | PAD-UFES | App linked to CNN | Two evolutionary algorithms were developed to address data imbalance, together with a weighted loss function and oversampling. | A bigger dataset was required to boost performance further. | 92.0 |
| [43] | ISIC-2018 | GAN | CGANs were used to extract important data from all layers and create skin lesion images with varied textures and shapes while keeping training stable. | The quantity of data utilized for training was relatively restricted. | 94.1 |
| [44] | ISIC-2018 | GAN | The GAN framework was customized for skin lesions; by tweaking the progressive growth structure of the generator and discriminator, it can generate higher-resolution and more diversified skin disease images. | The synthetic dataset generated by the GAN lacked sufficient complexity and diversity compared to the original dataset. | 95.2 |
| [45] | ISBI-2017, PH2 | DenseUnet network | An adversarial training strategy combined with an attention module is proposed to improve the model's robustness in skin disease classification and segmentation. | The model still exhibits under- and over-segmentation due to sparse training data and hazy boundaries of skin disease images. | 96.8 |

Figure 1. Displays the proposed framework

Figure 2. ISIC skin dataset

Table 2. Dataset ratio

| Split Ratio | Count |
|---|---|
| Training set | 1400 |
| Validation set | 300 |
| Testing set | 300 |

3.3 Pre-processing with MF and CIE

The preprocessing stage improves an image's clarity and precision. Noise in the collected dataset is removed using the MF, and the brightness and contrast are improved using CIE.

3.3.1 MF

The MF is a highly effective non-linear filtering technique renowned for its ability to preserve image information. The performance of the MF is influenced by the size of the filter window. While a smaller window retains the features, it also results in a decrease in noise reduction. Larger windows possess a considerable capacity for noise suppression, albeit at the expense of maintaining image quality. In this, the neighboring noisy pixels' median value is used to replace the targeted noisy pixels as shown in Eq. (1) and in Figure 3.

$I[i, j]=\operatorname{median}\{p(x, y),(x, y) \in w\}$                    (1)

where $w$ denotes the window of surrounding pixels and $[i, j]$ is the pixel at the center of the window. The primary function of the filter is to sort the pixel values within the image slice in increasing order and then replace the selected point with the middle value. (If the window contains an even number of pixels, the mean of the two center values is used.) The pseudocode is as follows:

    /* Sort the n window pixels in ascending order (exchange sort),
       then take the middle element as the filter output. */
    for (p1 = 0; p1 < n - 1; p1++)
        for (p2 = p1 + 1; p2 < n; p2++)
            if (I[p1] > I[p2])
            {
                w = I[p1];      /* swap so that I[p1] <= I[p2] */
                I[p1] = I[p2];
                I[p2] = w;
            }
    median = I[n / 2];          /* middle value of the sorted window */
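
An equivalent filter is available off the shelf; since OpenCV is listed among the technologies in Table 3, a minimal sketch is given below. The input path and the 5×5 window size are illustrative assumptions, not values from the paper.

    # Median filtering with OpenCV; the kernel size (5) is an assumed choice.
    import cv2

    image = cv2.imread("lesion.jpg")        # hypothetical input image
    denoised = cv2.medianBlur(image, 5)     # replace each pixel by the 5x5 median
    cv2.imwrite("lesion_denoised.jpg", denoised)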

Figure 3. Median filter

3.3.2 CIE

Contour lines are drawn to represent the shape of a region on a two-dimensional map, and the borders of the melanoma are captured using CIE. The lesion area is delineated by the contour of the binary image. The digital images of the affected parts and the original images are then combined to create the exact images of the skin lesions. The contour can evolve in both spatial and temporal directions. Since it helps to increase contrast, this method is especially helpful in medical imaging, particularly when the ROI and its surroundings have similar contrast. The disparity of the image is quantified using the contrast augmentation index (CAI) in Eq. (2) and the image area contrast in Eq. (3).

$\mathrm{CAI}=\frac{c_{\text {processed }}}{c_{\text {actual }}}$                   (2)

where,

Cprocessed = contrast of the processed image

Cactual = contrast of the original image

$\mathrm{IAC}=\frac{m-s}{m+s}$                    (3)

where,

IAC = Image Area Contrast

m = gray-level value of the foreground

s = gray-level value of the background
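
Eqs. (2) and (3) translate directly into code. The helper below is a minimal sketch, assuming the contrast values and the foreground/background gray levels are computed elsewhere; the function names are illustrative.

    # Contrast measures from Eqs. (2) and (3).
    def contrast_augmentation_index(c_processed, c_actual):
        """CAI: contrast of the processed image over that of the original."""
        return c_processed / c_actual

    def image_area_contrast(m, s):
        """IAC = (m - s) / (m + s), with foreground m and background s gray levels."""
        return (m - s) / (m + s)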

3.4 Image segmentation using Inception v3 Clustering Algorithm (IV3-CA)

Image segmentation is the technique of separating an image into its main parts. The primary purpose of segmentation is to separate regions of strong correlation and regions of interest (ROI). A crucial phase for skin lesion dermoscopy images after preprocessing is ROI extraction. For skin segmentation to be successfully categorized, good extracted features and ROI extraction are necessary.

Certain clustering methods, such as k-means, exhibit subpar performance in high-dimensional spaces, such as those found in real-world images, where data points lie on complex submanifolds and different clusters cannot be distinguished by color and image composition alone. Initially, the images are processed using a trained iteration of the Inception v3 neural network, known for its inception modules designed to learn a combination of local and global features from the input data. These extracted features are then used for clustering. The Inception architecture hypothesized clustering similar sparse nodes into a dense structure. This dimensionality reduction facilitates clustering by simplifying the task. The network produces activations with significant semantic value, and the neural activations within this layer are utilized for the final clustering.
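
The idea can be sketched as follows: a pre-trained Inception v3 acts purely as a feature extractor, and its pooled activations are clustered. The use of scikit-learn's KMeans and the cluster count are illustrative assumptions, not details from the paper.

    # Cluster Inception v3 activations (a minimal sketch of the IV3-CA idea).
    import tensorflow as tf
    from sklearn.cluster import KMeans

    extractor = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(299, 299, 3))

    def cluster_images(batch, n_clusters=3):
        # batch: array of shape (N, 299, 299, 3), already preprocessed for Inception.
        feats = extractor.predict(batch, verbose=0)   # (N, 2048) pooled activations
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)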

By utilizing various filter sizes at the same stage, the Inception v1 network [30] (the first edition of Inception) handles variation in the scale and position of important components in an image, applying a technique of "widening" rather than "deepening". Many of the advantages of v1 are present in IV3-CA, along with improvements from v2: label smoothing and factorized 7×7 convolutions minimize computational effort, and auxiliary classifiers limit overfitting. We used ImageNet pre-trained weights for the IV3-CA framework. The fine-tuning technique is largely identical to that of VGG-16, with some minor changes: unlike VGG-16, it employs a functional API, which allows for more flexibility when creating complicated architectures with various inputs and outputs. After the base network was established, we added a convolutional layer with a Rectified Linear Unit (ReLU) activation, a global spatial average pooling layer, a fully connected layer (also with ReLU activation), and finally a logistic layer (with sigmoid activation), much like a supermodel. The supermodel was trained with an input image size of (299, 299), which is compatible with the architecture. The learning rate (lr) was set to 0.0001 using the RMSprop optimizer. The dataset was split into train, validation, and test sets in a 70:15:15 ratio. Dropout was set to 0.5. We trained the model for 30 epochs with a batch size of 32 and were then able to fine-tune the convolutional layers of IV3-CA. For these adjustments to take effect, the network was updated using SGD with a low learning rate and momentum. Figure 4 displays the IV3-CA design.
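
As a concrete reference, the Keras sketch below assembles the head described above with the stated hyperparameters (RMSprop at lr 0.0001, dropout 0.5, 30 epochs, batch size 32, sigmoid output). The layer widths (256 and 128) are assumptions, since the paper does not specify them.

    # Fine-tuning head on Inception v3 (layer widths are illustrative assumptions).
    import tensorflow as tf
    from tensorflow.keras import layers, models, optimizers

    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))

    x = layers.Conv2D(256, 3, activation="relu")(base.output)  # conv + ReLU
    x = layers.GlobalAveragePooling2D()(x)                     # global average pooling
    x = layers.Dense(128, activation="relu")(x)                # fully connected + ReLU
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)             # logistic layer

    model = models.Model(base.input, out)
    model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=30, batch_size=32)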

Softmax analysis was used to retrain the final layer of Inception, generating probabilities from the evidence (Eq. (4)) extracted from the pictures. The evidence is calculated as a weighted sum over pixel intensities, with an added bias.

$Evidence_r=\sum_s W_{r s} M_s+b_r$                     (4)

where, $M=$ input, $r=$ class, $W=$ weight, $b=$ bias, and $s=$ index. After that, we estimate the chances by running the evidence through the softmax function shown in Eq. (5).

$Outcome (y)= softmax ( Evidence )$                       (5)
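
Eqs. (4) and (5) amount to a weighted sum followed by a softmax; a minimal numpy sketch, with illustrative shapes, is:

    # Evidence and class probabilities per Eqs. (4)-(5).
    import numpy as np

    def class_probabilities(M, W, b):
        evidence = W @ M + b                    # Eq. (4): weighted sum plus bias
        e = np.exp(evidence - evidence.max())   # subtract max for numerical stability
        return e / e.sum()                      # Eq. (5): softmax over classes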

Figure 4. Inception v3 clustering architecture

Figure 5. Inception ResNet v2 (IRV2) architecture

3.5 Feature extraction using inception ResNet v2 (IRV2)

The Inception ResNet v2 architecture was trained on our dataset, with all trainable parameters fine-tuned across all layers.

This was achieved by fine-tuning the model across all layers and replacing the top layers with a global average pooling layer, a fully connected layer, and a softmax layer. These modifications enabled classification of the data into two diagnostic categories. All input images were resized to (299, 299) to be compatible with this model. The learning rate was set to 0.0001, and ReLU was used as the activation function. The Soft Attention (SA) block is introduced at the Inception ResNet C block of IRV2, where the feature map size is (8×8). This mechanism highlights the important parts of an input image: it increases the value of crucial characteristics and reduces the impact of disruptive features [41].

In this configuration, the soft attention phase is defined by a (2×2) max-pooling layer, which is then combined with the Inception block's filter-concatenate layer. After the concatenate layer, there is a ReLU activation block, followed by a dropout layer (0.5) to regulate the output of the attention layer, as depicted in Figure 5. The batch size is set to 16. Features are extracted by the convolution of the input data, calculated as in Eq. (6):

$V_s^l=f\left(\sum_{r \in m_s}\left(K_{r s}^l \times V_r^{l-1}+b_s^l\right)\right)$                    (6)

where $K$ = kernel, $V$ = feature vectors, $b$ = bias, and $m_s$ = the collection of feature maps of each layer.
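
A simplified sketch of this arrangement is given below, built on a pre-trained Inception ResNet v2 backbone. Forming the attention map with a 1×1 sigmoid convolution is an assumption for illustration; the paper's exact SA block may differ.

    # IRV2 with a simplified soft-attention branch (assumed SA realization).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    base = tf.keras.applications.InceptionResNetV2(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))

    feat = base.output                                        # (8, 8, 1536) map
    attn = layers.Conv2D(1, 1, activation="sigmoid")(feat)    # per-pixel weights
    weighted = layers.Lambda(lambda t: t[0] * t[1])([feat, attn])
    pooled = layers.MaxPooling2D(2)(weighted)                 # (2x2) max pooling
    skip = layers.MaxPooling2D(2)(feat)
    x = layers.Concatenate()([pooled, skip])                  # filter-concatenate
    x = layers.ReLU()(x)
    x = layers.Dropout(0.5)(x)                                # regulate attention output
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(2, activation="softmax")(x)

    model = models.Model(base.input, out)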

The summary model of IRV2 with SA is shown in Figure 6.

Figure 6. Summary model of IRV2

3.6 Feature selection using Meta-heuristic approach (FSMH)

The benefits of utilizing a feature selection strategy, as shown in Figure 1, include increasing the classifier's ability to forecast, obtaining a fast and very efficient learner, and offering a simpler classification approach. The proposed IFFA model with adaptive boosting (AdaBoost) outperforms previous state-of-the-art Firefly versions in solving several challenging uni-modal, multi-modal, and ensemble minimization problems. Additionally, the quality of the generated ensemble classifier is superior to earlier ensemble models. Using feature selection in classification thus has the major benefit of producing models that are easier to comprehend. In this work, a feature subset for the prediction of skin cancer was created utilizing the feature selection using meta-heuristics (FSMH) method; the technique utilized for feature selection is the IFFA.

3.6.1 IFFA

One of the newest optimization methods, the FFA was created by studying how fireflies behave; it is a meta-heuristic technique with naturalistic origins. The algorithm is based on three key traits of fireflies. First, fireflies are unisex, so any firefly can be attracted to any other. Second, there is an inverse relationship between the distance between fireflies and their attractiveness: the attraction between two fireflies is strongest when the distance between them is small. Third, a firefly's brightness is judged by the objective function, which varies with the problem, as illustrated in Eq. (7) and in IFFA Algorithm 1.

$B(d)=\beta_0 e^{-\gamma d^2}$                     (7)

Eq. (7) gives the attractiveness of fireflies, where $\gamma$ is the light absorption coefficient and $d$ is the distance between two fireflies. When the distance is 0, the attraction is the fixed coefficient $\beta_0$, which is commonly set to 1. Eq. (8) expresses the attractiveness in its general form, with $\beta_0$ representing the value at $d = 0$, and Eq. (9) gives the position update.

$B=\beta_0 \exp \left(-\gamma d^m\right), m \geq 1$                      (8)

$X_i=X_i+B(d)\left(X_p-X_i\right)+\alpha\left(rand-\tfrac{1}{2}\right)$                        (9)

Using the attraction formula in Eq. (8), the movement of the less-bright fireflies toward the brighter ones is calculated. In Eq. (9), $\alpha$ is a randomization parameter and $rand$ is a uniformly distributed random number in the range [0, 1]. The $i$-th and $p$-th fireflies in the samples are $x_i$ and $x_p$. The Euclidean Distance (ED) formula, depicted in Eq. (10), is used to determine the distance between two fireflies.

Euclidean distance formula

$d_{i j}=\left\|x_i-x_j\right\|=\sqrt{\sum_{p=1}^d\left(x_{i, p}-x_{j, p}\right)^2}$                  (10)

The proposed IFFA targets two important aspects:

• Reduce the number of evaluations to shorten the optimization period.

• A strict focus on local optima, to converge on feasible solutions as quickly as possible.

Algorithm 1: IFFA [42]

Step 1: Start

Step 2: Define the objective function f(x), where x = (x1, ..., xd)T

Step 3: Initialize the population with n fireflies.

Step 4: Compute the light intensity I, which is related to f(x).

Step 5: While (s <= MaxGen)

Step 6: Determine the light absorption coefficient

for C1 = 1 : C, for every cluster 1, ..., C

for C2 = 1 : R, execute IFFA for R repetitions (with n <= s)

for i = 1 : n (n fireflies)

for j = 1 : n (n fireflies)

Step 7: If (Ij > Ii), move firefly i in the direction of j; end if

Step 8: Vary the attractiveness with distance d via exp($-\gamma d^2$)

Step 9: Evaluate new solutions and update the light intensity

end for j

end for i

Step 10: Evaluate new solutions and update the light intensities (I1, ..., Im′) of every firefly (X1, ..., Xm′)

end for loop C2

Step 11: The fireflies congregate around, and depart with, the local optimum X local

end for loop C1

Step 12: Rank all fireflies and find the current global best.

end while
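
A compact sketch of the core firefly update (Eqs. (7)-(10)) is given below. Every firefly moves toward each brighter one, with attractiveness decaying in the squared distance; the population parameters and the α(rand − 0.5) random-step form are standard defaults assumed for illustration.

    # One IFFA-style update sweep over the population (illustrative defaults).
    import numpy as np

    def firefly_step(X, brightness, beta0=1.0, gamma=1.0, alpha=0.1):
        # X: (n, dim) firefly positions; brightness: objective value per firefly.
        n = len(X)
        for i in range(n):
            for j in range(n):
                if brightness[j] > brightness[i]:        # Step 7: Ij > Ii
                    d2 = np.sum((X[i] - X[j]) ** 2)      # squared Eq. (10) distance
                    beta = beta0 * np.exp(-gamma * d2)   # Eq. (7) attractiveness
                    X[i] = (X[i] + beta * (X[j] - X[i])  # Eq. (9) movement
                            + alpha * (np.random.rand(X.shape[1]) - 0.5))
        return X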

3.7 Ensemble skin cancer classification

An ensemble classifier is generally a weighted or averaged combination of predictions. An ensemble is a form of supervised learning system that can be constructed and shown to predict outcomes, and it typically performs better than a single classifier. Bagging and Boosting are two popular ensemble algorithms.

3.7.1 Bagging

Bagging (bootstrap aggregating) trains each base classifier on a different bootstrap sample of the training set and combines their predictions by averaging or voting, which reduces variance and often outperforms a single classifier.

3.7.2 Boosting

The performance of a classifier is assessed by how well it can boost the corresponding F-measure. The AFTAA technique is an improved boosting model. The basic idea behind it is to train many weak classifiers with general abilities on the training set, then merge the weak classifiers using a combination approach to create an active (strong) classifier with better identification capabilities. The most well-known boosting algorithm [20] is "AdaBoost", which uses decision trees as its weak classifiers. More specifically, AdaBoost works as follows. Samples for a training set are formulated in Eq. (11).

$T_d=\left\{\left(i_1, t_1\right),\left(i_2, t_2\right), \ldots\left(i_n, t_n\right)\right\}$                  (11)

where, $T_d$  = training data, i = input data, and t = data type.

Step 1: Initialize the weights for every training data point (N points in total). The weights are given by the AFTAA, as shown in Eq. (12); initially, every component is given a constant (uniform) weight.

$d(1)=\left(w_{11}, w_{12}, w_{13}, \ldots, w_{1 n}\right)$                     (12)

Step 2: Train a weak classifier with the prior (existing) weights. The weak classifier being trained should possess an accuracy exceeding 0.5, indicating better-than-chance performance. The weak classifier $A_r(a)$ shown in Eq. (13) is obtained.

$A_r(a): a \rightarrow\{-1,+1\}$                    (13)

where, r = 1, 2, . . .n represents the r-th run(cycle).

Step 3: Compute the error rate $e_r$ and the importance of each weak model $A_r(a)$, as shown in Eq. (14):

$e_r=P\left(A_r\left(i_j\right) \neq t_j\right)=\sum_{j=1}^n w_{r j} I\left(A_r\left(i_j\right) \neq t_j\right)$                    (14)

where $e_r$ = the weighted fraction of samples misclassified by $A_r(a)$. The class labels are encoded as +1 (true) and -1 (false), and the indicator function flags misclassified samples, as shown in Eq. (15).

$I\left(A_r\left(i_j\right) \neq t_j\right)= \begin{cases}1 & A_r\left(i_j\right) \neq t_j \\ 0 & A_r\left(i_j\right)=t_j\end{cases}$                   (15)

A weight is updated for each observation based on $e_r$. Eq. (16) gives the coefficient of the weak classifier.

$f_r=\frac{1}{2} \log \left(\frac{1-e_r}{e_r}\right)$                  (16)

The importance of $A_r(i)$ in the final active (strong) classifier is given by $f_r$; the error $e_r$ is inversely related to $f_r$.

Step 4: Update and normalize the weight distribution d(r + 1) of the training samples according to $f_r$, and repeat this process until the number of iterations reaches the predetermined value "n", as in Eq. (17):

$d(r+1)=\left(w_{r+1,1}, w_{r+1,2}, \ldots w_{r+1, n}\right)$                   (17)

where, $w_{r+1, j}=\left(w_{r, j} / N_r\right) \exp \left(-f_r t_j A_r\left(i_j\right)\right)$, $j=1,2, \ldots, n$, and $N_r$ is a normalization factor, denoted in Eq. (18).

$N_r=\sum_{j=1}^n w_{r j} \exp \left(-f_r t_j A_r\left(i_j\right)\right)$                  (18)

Step 5: Merge the weak classifier with a fusion technique to produce the final active(strong) classifier Aa as shown in Eq. (19).

$A_a=\operatorname{sign}\left(f_a\right)=\operatorname{sign}\left(\sum_{r=1}^n f_r A_r(a)\right)+T(f)$                    (19)

where $f_a$ is the weighted combination of the weak classifiers. The coefficient of the r-th cycle is denoted by $f_r$, and each weak classifier cycle is denoted $A_r(a)$. $T(f)$ is a fine-tuning parameter used to refine the classified outcome to a higher degree of accuracy.
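
The loop in Eqs. (11)-(19) can be sketched as follows, using decision stumps as the weak classifiers and labels in {-1, +1}. The fine-tuning term T(f) of Eq. (19) is omitted, since its exact form is specific to the proposed AFTAA.

    # AdaBoost per Eqs. (11)-(18); labels t must be in {-1, +1}.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_fit(I, t, n_rounds=50):
        n = len(t)
        w = np.full(n, 1.0 / n)                  # Eq. (12): uniform initial weights
        stumps, coeffs = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(I, t, sample_weight=w)     # train with current weights
            pred = stump.predict(I)
            e = np.sum(w * (pred != t))          # Eq. (14): weighted error
            if e >= 0.5:                         # weak learner must beat chance
                break
            f = 0.5 * np.log((1 - e) / e)        # Eq. (16): classifier coefficient
            w = w * np.exp(-f * t * pred)        # Eqs. (17)-(18): reweight and
            w /= w.sum()                         # normalize the sample weights
            stumps.append(stump)
            coeffs.append(f)
        return stumps, coeffs

    def adaboost_predict(stumps, coeffs, I):
        # Eq. (19) without T(f): sign of the weighted sum of weak classifiers.
        votes = sum(f * s.predict(I) for s, f in zip(stumps, coeffs))
        return np.sign(votes)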

4. Experimental Results

This section evaluates the proposed optimization method and AdaBoost classifier, comparing their effectiveness with state-of-the-art results. The experimental configuration for this study is given in Table 3.

Table 3. Configuration setting

| Parameters | Specifications |
|---|---|
| Input Image Size | 224×224×3 |
| Dataset Split Ratio | 70-15-15 (Tr-V-T) |
| Epochs Count | 30 |
| Activation Functions | ReLU, Softmax |
| Optimizer | Adam, RMSprop |
| Loss Function | Categorical Cross-entropy |
| Dropout | 0.5 |
| Batch Size | 16 |
| Learning Rate | 0.00001 |
| Technologies | TensorFlow, Keras, NumPy, OpenCV |
| Scripting Language | Python |

4.1 Performance metrics

A crucial element for understanding the effectiveness of the classifier is the confusion matrix. $S_{11}$ denotes correctly predicted positive instances (TP), $S_{22}$ correctly predicted negative instances (TN), and $S_{12}$ and $S_{21}$ the classifier errors (FN and FP, respectively). Table 4 is analyzed in terms of TP (true positive), TN (true negative), FP (false positive), and FN (false negative). Eqs. (20)-(24) define the following metrics: $Acc$, TrueLesion, FalseLesion, Specificity, and Precision.

Table 4. Confusion matrix

| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | S11 (TP) | S12 (FN) |
| Actual Negative | S21 (FP) | S22 (TN) |

$A c c=\frac{S_{11}+S_{22}}{S_{11}+S_{12}+S_{21}+S_{22}}$                (20)

$TrueLesion =\frac{S_{11}}{S_{11}+S_{12}}$                   (21)

$FalseLesion=\frac{S_{21}}{S_{21}+S_{22}}$                   (22)

$Specificity =\frac{S_{22}}{S_{21}+S_{22}}$                  (23)

$Precision=\frac{S_{11}}{S_{11}+S_{21}}$                (24)
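
Eqs. (20)-(24) follow directly from the confusion-matrix entries; a minimal helper:

    # Metrics from the confusion matrix (S11=TP, S12=FN, S21=FP, S22=TN).
    def metrics(s11, s12, s21, s22):
        acc = (s11 + s22) / (s11 + s12 + s21 + s22)   # Eq. (20)
        true_lesion = s11 / (s11 + s12)               # Eq. (21): sensitivity
        false_lesion = s21 / (s21 + s22)              # Eq. (22): false-alarm rate
        specificity = s22 / (s21 + s22)               # Eq. (23)
        precision = s11 / (s11 + s21)                 # Eq. (24)
        return acc, true_lesion, false_lesion, specificity, precision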

4.2 Analytical results

Figures 7 and 8 below show the accuracy and loss graphs of the neural network during training.

The analytical results are shown in Figures 7 and 8. Initially, the dataset is pre-trained on the Inception ResNet v2 (IRV2) architecture; it is then processed with the proposed model using the configuration settings in Table 3. The outcomes comprise loss, accuracy, and AUC graphs, based on the dataset split ratio and simulated in Python. The graphs obtained post-implementation indicate a positive correlation between the number of epochs and the accuracy.

Ensemble classification techniques, like boosting and bagging, are used in these experimental setups. In the following stage, an FSMH (Feature selection using Meta-heuristic) algorithm was used. The optimization includes the IFFA technique. Finally, the effectiveness of the suggested model strategies is evaluated.

Figure 7. Accuracy graph of training and validation

Figure 8. Loss graph of training and validation

4.2.1 Ensemble classification performance

Various evaluation parameters have been used to rate the effectiveness of the procedure just discussed. In the suggested work, average computing precision, recall/sensitivity, specificity, and F1-score are used to gauge how well the classification algorithm performs. Table 5 and Figure 9 show the classifiers' performance on the given dataset without IFFA. Feature selection affects the performance of the classifiers on the dataset; without it, a classifier can give biased results by focusing on the most common classes. The number of iterations divided by the number of samples is equivalent to the accuracy value of boosting approaches.

Table 5. Classifiers without IFFA

| Metric | Bagging % | Boosting % |
|---|---|---|
| True Lesions | 92.33 | 94.07 |
| False Lesions | 4.31 | 1.72 |
| Accuracy | 89.01 | 94.89 |
| Specificity | 93.93 | 96.11 |
| Precision | 93.13 | 94.22 |

By choosing the appropriate feature subset, the classification method embedded with feature selection achieved the highest accuracy. Many qualities are determined by the output of the optimization methods. The trained model was produced, the testing samples were examined, and predictions were formed using the model developed by the classifiers under examination. Table 6 and Figure 10 show the classifiers' performance on the given dataset with IFFA.

Figure 9. Ensemble classifiers performance without IFFA

Figure 10. Ensemble classifiers performance with IFFA

Table 6. Classifier performance with IFFA

| Metric | Bagging % | Boosting % |
|---|---|---|
| True Lesions | 93.33 | 96.61 |
| False Lesions | 6.89 | 2.53 |
| Accuracy | 92.24 | 97.14 |
| Specificity | 92.16 | 94.89 |
| Precision | 93.32 | 94.81 |

The results of evaluating several meta-heuristic-based feature selection algorithms on the supplied dataset are shown in Table 7. ACO, GA, PSO, and IFFA are examples of the optimization strategies compared. Finally, the effectiveness of the suggested model is evaluated against currently used methods: the accuracy obtained via boosting with the Improved Firefly Algorithm is 97.14%, the highest among the compared models.

Figure 11. ROC curve using skin dataset

The proposed deep learning-based model outperformed prior work in a comparative analysis based on training and validation accuracy/loss. During training, the accuracy is 0.98 with a loss of 0.293; during validation, the accuracy is 0.97 with a loss of 0.339. The ROC curve in Figure 11 shows the performance of the proposed method: the area under the ROC curve is 0.96, with a cutoff value of 0.50.

Figure 12 displays the outcome of the skin cancer classification utilizing the proposed approach, classifying the skin cancer as benign or malignant.

4.2.2 Performance comparison with other existing models

In this section, various existing methods are compared. Table 8 and Figure 13 list and analyze the methodologies used, with the outcomes listed below. All the existing models focused on different facets of classification; the proposed model outperformed them on the improved task.

Figure 12. AFTAA classification outcome

Table 7. Evaluation results with respect to the ISIC-2017 dataset

| Sno | Author Name | Preprocessing | Segmentation | Classification | Acc (%) |
|---|---|---|---|---|---|
| 1 | Proposed Method | MF + CIE | IV3-CA | PSO + AdaBoost | 92.53 |
| | | | | ACO + AdaBoost | 96.51 |
| | | | | GA + AdaBoost | 92.51 |
| | | | | FFA + AdaBoost | 93.56 |
| | | | | IFFA + AdaBoost | 97.14 |

Table 8. Recent studies focused on skin lesion classification

| Ref | Segmentation | Feature Selection | Classifier | Classification | Accuracy (%) |
|---|---|---|---|---|---|
| [46] | FCM, thresholding | PCA (Principal Component Analysis) | Ensemble methods: SVM, KNN, GML | Malignant (M)/Benign (B)/Dysplastic | 75.69 |
| [47] | K-means clustering | -NA- | Ensemble method | Melanoma (Mn)/Nevus (Nv) | 81.0 |
| [48] | Thresholding | CFS (correlation-based feature selection) | LMT (Logistic Model Tree) | Melanoma (Mn)/Nevus (Nv) | 86.0 |
| [49] | Dynamic programming | SFFS (sequential floating feature selection) | SVM (Support Vector Machine) | Melanoma (Mn)/Nevus (Nv) | 88.2 |
| [50] | Threshold (T) | Incremental stepwise | Linear classifier | Melanoma/Nevus (Nv)/BCC/SK | 90.48 |
| [51] | Thresholding | GRFS (gain ratio feature selection) | RF (Random Forest) | Malignant (M)/Benign (B) | 91.26 |
| [52] | Thresholding, region growth and merging | FCBF (fast correlation-based feature filter) | SVM (Support Vector Machine) | Melanoma (Mn)/Benign (B) | 93.83 |
| [53] | Threshold (T), region growing | Incremental stepwise | ANN (Artificial Neural Network) | Melanoma (Mn)/Nevus (Nv) | 94.10 |
| [54] | Thresholding | -NA- | KNN-DT/KNN | Malignant (M)/Benign (B) | 96.71 |
| Proposed | IV3-CA | IFFA | Ensemble method (AdaBoost) | Malignant (M)/Benign (B) | 97.14 |

Figure 13. Accuracy comparison of recent studies

5. Conclusion

Early detection of cancer progression is essential for classifying and treating it, and may even save a life. As a result, expert computer algorithms may diagnose cancer from a wide variety of medical data without human intervention. To identify skin cancer using computer vision, the AFTAA approach is proposed in this research. Nearly 2000 images from the ISIC-2017 dataset were gathered and analyzed using the proposed approach, and the dataset was correctly classified into benign and malignant cases using the suggested technique.

It has been demonstrated that the methods employed in classification can deal with the common problem of noisy data (a) by learning descriptive archetypal examples of the two classes (malignant and benign), (b) by extracting features using segmentation, key point detection, and other appropriate localization techniques, and (c) by using enhanced procedures (selection, classification) to obtain the desired results. Finally, the performance of the suggested technique was examined and compared to cutting-edge procedures, and it has been determined that the IFFA with AFTAA outperformed the other methods.

Compared with the current approaches, the suggested method showed higher accuracy (97.14%), specificity (94.89%), and sensitivity (93.4%). This method, which includes improved pre-processing, may be applied in the future to several datasets for lesion identification and medical diagnostics. The dataset used is publicly available. Future research will focus on expanding to the 3D imaging domain, which could render the approach beneficial in other medical disciplines as well.

Availability of Data and Material (Data Transparency)

The data are publicly available at https://challenge.isic-archive.com/data/.

Conflict of Interest

The authors declare that they have no conflicts of interest.

Nomenclature

| Symbol | Meaning |
|---|---|
| B | Initial attractiveness |
| w | Weights |
| C | Classifier |
| er | Error rate |
| V | Vectors |
| K | Kernel |
| b | Bias |
| C1, C2 | Clusters |
| R | Iterations |

Greek symbols

| Symbol | Meaning |
|---|---|
| $\alpha$ | Randomization parameter |
| $\beta$ | Attractiveness coefficient |
| $\gamma$ | Light absorption coefficient |

Subscripts

| Symbol | Meaning |
|---|---|
| r | Class |
| s | Index |
| d | Dimension |
| l | Layers |
| n | Count of fireflies |
| m | Feature maps |

  References

[1] Choudhary, P., Singhai, J., Yadav, J.S. (2021). Curvelet and fast marching method-based technique for efficient artifact detection and removal in dermoscopic images. International Journal of Imaging Systems and Technology, 31(4): 2334-2345. https://doi.org/10.1002/ima.22633

[2] Masoud Abdulhamid, I.A., Sahiner, A., Rahebi, J. (2020). New auxiliary function with properties in nonsmooth global optimization for melanoma skin cancer segmentation. BioMed Research International, 2020: 5345923. https://doi.org/10.1155/2020/5345923

[3] Thanh, D.N., Prasath, V.S., Hieu, L.M., Hien, N.N. (2020). Melanoma skin cancer detection method based on adaptive principal curvature, colour normalisation and feature extraction with the ABCD rule. Journal of Digital Imaging, 33: 574-585. https://doi.org/10.1007/s10278-019-00316-x

[4] Khamparia, A., Singh, P.K., Rani, P., Samanta, D., Khanna, A., Bhushan, B. (2021). An internet of health things-driven deep learning framework for detection and classification of skin cancer using transfer learning. Transactions on Emerging Telecommunications Technologies, 32(7): e3963. https://doi.org/10.1002/ett.3963

[5] Celebi, M.E., Aslandogan, Y.A., Bergstresser, P.R. (2005). Unsupervised border detection of skin lesion images. In International Conference on Information Technology: Coding and Computing (ITCC'05)-Volume II, Las Vegas, NV, USA, pp. 123-128. https://doi.org/10.1109/ITCC.2005.283

[6] Celebi, M.E., Kingravi, H.A., Iyatomi, H., Lee, J., Aslandogan, Y.A., Van Stoecker, W., Moss, R., Malters, J.M., Marghoob, A.A. (2007). Fast and accurate border detection in dermoscopy images using statistical region merging. Medical Imaging 2007: Image Processing, 6512: 1297-1306. https://doi.org/10.1117/12.709073

[7] Emre Celebi, M., Kingravi, H.A., Iyatomi, H., Alp Aslandogan, Y., Stoecker, W.V., Moss, R.H., Malters, J.M., Grichnik, J.M., Marghoob, A.A., Rabinovitz, H.S., Menzies, S.W. (2008). Border detection in dermoscopy images using statistical region merging. Skin Research and Technology, 14(3): 347-353. https://doi.org/10.1111/j.1600-0846.2008.00301.x

[8] Gomez, D.D., Butakoff, C., Ersboll, B.K., Stoecker, W. (2007). Independent histogram pursuit for segmentation of skin lesions. IEEE Transactions on Biomedical Engineering, 55(1): 157-161. https://doi.org/10.1109/TBME.2007.910651

[9] Adelmann, H.G. (1998). Butterworth equations for homomorphic filtering of images. Computers in Biology and Medicine, 28(2): 169-181. https://doi.org/10.1016/S0010-4825(98)00004-3

[10] Sweet, M.R. Adaptive and recursive median filtering. Available from: http://www.easysw.com/~mike/gimp/despeckle.html. 

[11] Li, Q., Zhang, L., You, J., Zhang, D., Bhattacharya, P. (2008). Dark line detection with line width extraction. In 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, pp. 621-624. https://doi.org/10.1109/ICIP.2008.4711831

[12] Criminisi, A., Pérez, P., Toyama, K. (2004). Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on Image Processing, 13(9): 1200-1212. https://doi.org/10.1109/TIP.2004.833105

[13] Schmid, P. (1999). Segmentation of digitized dermatoscopic images by two-dimensional color clustering. IEEE Transactions on Medical Imaging, 18(2): 164-171. https://doi.org/10.1109/42.759124

[14] Fleming, M.G., Steger, C., Zhang, J., Gao, J., Cognetta, A.B., Dyer, C.R. (1998). Techniques for a structural analysis of dermatoscopic imagery. Computerized Medical Imaging and Graphics, 22(5): 375-389. https://doi.org/10.1016/S0895-6111(98)00048-2

[15] Schmid, P. (1999). Lesion detection in dermatoscopic images using anisotropic diffusion and morphological flooding. In Proceedings 1999 International Conference on Image Processing (Cat. 99CH36348), Kobe, Japan, pp. 449-453. https://doi.org/10.1109/ICIP.1999.817154

[16] Schmid-Saugeona, P., Guillodb, J., Thirana, J.P. (2003). Towards a computer-aided diagnosis system for pigmented skin lesions. Computerized Medical Imaging and Graphics, 27(1): 65-78. https://doi.org/10.1016/S0895-6111(02)00048-4

[17] Gonzalez, R.C. (2002). Digital Image Processing. 2nd ed, New Jersey: Prentice Hall.

[18] Wyszecki, G. (1983). Color Science: Concepts and Methods, Quantitative Data and Formulae. New York: Wiley.

[19] Zhou, H., Chen, M., Gass, R., Rehg, J.M., Ferris, L., Ho, J., Drogowski, L. (2008). Feature-preserving artifact removal from dermoscopy images. In Medical Imaging 2008: Image Processing, 6914: 439-447. https://doi.org/10.1117/12.770824

[20] Wighton, P., Lee, T.K., Atkins, M.S. (2008). Dermascopic hair disocclusion using inpainting. Medical Imaging 2008: Image Processing, 6914: 735-742. https://doi.org/10.1117/12.770776

[21] Senthilkumaran, N., Rajesh, R. (2009). Image segmentation-a survey of soft computing approaches. In 2009 International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, India, pp. 844-846. https://doi.org/10.1109/ARTCom.2009.219

[22] Olugbara, O.O., Taiwo, T.B., Heukelman, D. (2018). Segmentation of melanoma skin lesion using perceptual color difference saliency with morphological analysis. Mathematical Problems in Engineering, 2018: 1-19. https://doi.org/10.1155/2018/1524286

[23] Saini, S., Gupta, D., Tiwari, A.K. (2020). Detector-SegMentor Network for Skin Lesion Localization and Segmentation. In: Babu, R.V., Prasanna, M., Namboodiri, V.P. (eds) Computer Vision, Pattern Recognition, Image Processing, and Graphics. NCVPRIPG 2019. Communications in Computer and Information Science, Springer, Singapore. https://doi.org/10.1007/978-981-15-8697-2_55

[24] Alom, M.Z., Aspiras, T., Taha, T.M., Asari, V.K. (2019). Skin cancer segmentation and classification with NABLA-N and inception recurrent residual convolutional networks. arXiv preprint arXiv:1904.11126. https://doi.org/10.48550/arXiv.1904.11126

[25] Li, B.N., Chui, C.K., Ong, S.H., Chang, S. (2009). Integrating FCM and Level Sets for Liver Tumor Segmentation. In: Lim, C.T., Goh, J.C.H. (eds) 13th International Conference on Biomedical Engineering. IFMBE Proceedings, vol 23. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92841-6_49

[26] Li, B.N., Chui, C.K., Chang, S., Ong, S.H. (2011). Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation. Computers in Biology and Medicine, 41(1): 1-10. https://doi.org/10.1016/j.compbiomed.2010.10.007

[27] Carcagnì, P., Leo, M., Cuna, A., Mazzeo, P.L., Spagnolo, P., Celeste, G., Distance, C. (2019). Classification of skin lesions by combining multilevel learnings in a DenseNet architecture. In: Ricci, E., Rota Bulò, S., Snoek, C., Lanz, O., Messelodi, S., Sebe, N. (eds) Image Analysis and Processing – ICIAP 2019. ICIAP 2019. Lecture Notes in Computer Science, vol 11751. Springer, Cham. https://doi.org/10.1007/978-3-030-30642-7_30

[28] Srividhya, V., Sujatha, K., Ponmagal, R. S., Durgadevi, G., Madheshwaran, L. (2020). Vision based detection and categorization of skin lesions using deep learning neural networks. Procedia Computer Science, 171: 1726-1735. https://doi.org/10.1016/j.procs.2020.04.185

[29] Zhang, F., Jin, T., Hu, Q., He, P. (2018). Distinguishing skin cancer cells and normal cells using electrical impedance spectroscopy. Journal of Electroanalytical Chemistry, 823: 531-536. https://doi.org/10.1016/j.jelechem.2018.06.021

[30] Zhu, L.F., Zheng, Y., Fan, J., Yao, Y., Ahmad, Z., Chang, M.W. (2019). A novel core-shell nanofiber drug delivery system intended for the synergistic treatment of melanoma. European Journal of Pharmaceutical Sciences, 137: 105002. https://doi.org/10.1016/j.ejps.2019.105002

[31] Attallah, O., Sharkas, M. (2021). Intelligent dermatologist tool for classifying multiple skin cancer subtypes by incorporating manifold radiomics features categories. Contrast Media & Molecular Imaging, 2021: 7192016. https://doi.org/10.1155/2021/7192016

[32] Wei, L., Pan, S.X., Nanehkaran, Y.A., Rajinikanth, V. (2021). An optimized method for skin cancer diagnosis using modified thermal exchange optimization algorithm. Computational and Mathematical Methods in Medicine, 2021: 5527698. https://doi.org/10.1155/2021/5527698

[33] Ain, Q.U., Xue, B., Al-Sahaf, H., Zhang, M. (2018). Genetic programming for feature selection and feature construction in skin cancer image classification. In: Geng, X., Kang, BH. (eds) PRICAI 2018: Trends in Artificial Intelligence. PRICAI 2018. Lecture Notes in Computer Science, vol 11012. Springer, Cham. https://doi.org/10.1007/978-3-319-97304-3_56

[34] Zhang, N., Cai, Y.X., Wang, Y.Y., Tian, Y.T., Wang, X.L., Badami, B. (2020). Skin cancer diagnosis based on optimized convolutional neural network. Artificial Intelligence in Medicine, 102: 101756. https://doi.org/10.1016/j.artmed.2019.101756

[35] Wang, M., Zhang, X., Niu, X., Wang, F., Zhang, X. (2019). Scene classification of high-resolution remotely sensed image based on ResNet. Journal of Geovisualization and Spatial Analysis, 3: 1-9. https://doi.org/10.1007/s41651-019-0039-9

[36] Bagchi, S., Banerjee, A., Bathula, D.R. (2020). Learning a meta-ensemble technique for skin lesion classification and novel class detection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, pp. 3221-3228. https://doi.org/10.1109/CVPRW50498.2020.00381 

[37] Kassani, S.H., Kassani, P.H. (2019). A comparative study of deep learning architectures on melanoma detection. Tissue and Cell, 58: 76-83. https://doi.org/10.1016/j.tice.2019.04.009

[38] Lee, K.W., Chin, R.K.Y. (2020). The effectiveness of data augmentation for melanoma skin cancer prediction using convolutional neural networks. In 2020 IEEE 2nd International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, pp. 1-6. https://doi.org/10.1109/IICAIET49801.2020.9257859

[39] Mishra, S., Imaizumi, H., Yamasaki, T. (2019). Interpreting fine-grained dermatological classification by deep learning. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, pp. 2729-2737. https://doi.org/10.1109/CVPRW.2019.00331

[40] Jaworek-Korjakowska, J., Kleczek, P., Gorgon, M. (2019). Melanoma thickness prediction based on convolutional neural network with VGG-19 model transfer learning. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, pp. 2748-2756. https://doi.org/10.1109/CVPRW.2019.00333

[41] Rahman, Z., Hossain, M.S., Islam, M.R., Hasan, M.M., Hridhee, R.A. (2021). An approach for multiclass skin lesion classification based on ensemble learning. Informatics in Medicine Unlocked, 25: 100659. https://doi.org/10.1016/j.imu.2021.100659

[42] Castro, P.B., Krohling, B., Pacheco, A.G., Krohling, R.A. (2020). An app to detect melanoma using deep learning: An approach to handle imbalanced data based on evolutionary algorithms. In 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, pp. 1-6. https://doi.org/10.1109/IJCNN48605.2020.9207552

[43] Kaur, R., GholamHosseini, H., Sinha, R. (2021). Synthetic images generation using conditional generative adversarial network for skin cancer classification. In TENCON 2021-2021 IEEE Region 10 Conference (TENCON), Auckland, New Zealand, pp. 381-386. https://doi.org/10.1109/TENCON54134.2021.9707291

[44] Qin, Z., Liu, Z., Zhu, P., Xue, Y. (2020). A GAN-based image synthesis method for skin lesion classification. Computer Methods and Programs in Biomedicine, 195: 105568. https://doi.org/10.1016/j.cmpb.2020.105568

[45] Wei, Z., Song, H., Chen, L., Li, Q., Han, G. (2019). Attention-based DenseUnet network with adversarial training for skin lesion segmentation. IEEE Access, 7: 136616-136629. https://doi.org/10.1109/ACCESS.2019.2940794

[46] Rahman, M.M., Bhattacharya, P., Desai, B.C. (2008). A multiple expert-based melanoma recognition system for dermoscopic images of pigmented skin lesions. In 2008 8th IEEE International Conference on BioInformatics and BioEngineering, Athens, Greece, pp. 1-6. https://doi.org/10.1109/BIBE.2008.4696799

[47] Giotis, I., Molders, N., Land, S., Biehl, M., Jonkman, M.F., Petkov, N. (2015). MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Systems with Applications, 42(19): 6578-6585. https://doi.org/10.1016/j.eswa.2015.04.034

[48] Alcón, J.F., Ciuhu, C., Ten Kate, W., Heinrich, A., Uzunbajakava, N., Krekels, G., Siem, D., de Haan, G. (2009). Automatic imaging system with decision support for inspection of pigmented skin lesions and melanoma diagnosis. IEEE Journal of Selected Topics in Signal Processing, 3(1): 14-25. https://doi.org/10.1109/JSTSP.2008.2011156

[49] Abbas, Q., Emre Celebi, M., Garcia, I.F., Ahmad, W. (2013). Melanoma recognition framework based on expert definition of ABCD for dermoscopic images. Skin Research and Technology, 19(1): e93-e102. https://doi.org/10.1111/j.1600-0846.2012.00614.x

[50] Shimizu, K., Iyatomi, H., Celebi, M.E., Norton, K.A., Tanaka, M. (2014). Four-class classification of skin lesions with task decomposition strategy. IEEE Transactions on Biomedical Engineering, 62(1): 274-283. https://doi.org/10.1109/TBME.2014.2348323

[51] Garnavi, R., Aldeen, M., Bailey, J. (2012). Computer-aided diagnosis of melanoma using border-and wavelet-based texture analysis. IEEE Transactions on Information Technology in Biomedicine, 16(6): 1239-1252. https://doi.org/10.1109/TITB.2012.2212282

[52] Schaefer, G., Krawczyk, B., Celebi, M.E., Iyatomi, H. (2014). An ensemble classification approach for melanoma diagnosis. Memetic Computing, 6: 233-240. https://doi.org/10.1007/s12293-014-0144-8

[53] Iyatomi, H., Oka, H., Celebi, M.E., Hashimoto, M., Hagiwara, M., Tanaka, M., Ogawa, K. (2008). An improved internet-based melanoma screening system with dermatologist-like tumor area extraction algorithm. Computerized Medical Imaging and Graphics, 32(7): 566-579. https://doi.org/10.1016/j.compmedimag.2008.06.005

[54] Cavalcanti, P.G., Scharcanski, J. (2011). Automated prescreening of pigmented skin lesions using standard cameras. Computerized Medical Imaging and Graphics, 35(6): 481-491. https://doi.org/10.1016/j.compmedimag.2011.02.007