A Novel Deep Convolutional Neural Network for Diagnosis of Skin Disease

Koteswara Rao Kodepogu, Jagadeeswara Rao Annam, Aruna Vipparla, Bala Venkata Naga Vudata, Suresh Krishna Naresh Kumar, Ramya Viswanathan, Lalitha Kumari Gaddala, Suresh Kumar Chandanapalli

Dept. of CSE, PVP Siddhartha Institute of Technology, Vijayawada 520007, India

Dept. of CSE, CVR College of Engineering, Hyderabad, Telangana 501510, India

Dept. of CSE, NRI Institute of Technology Agiripalli, Vijayawada 521211, India

Dept. of CSE, NRI Institute of Technology Visadala, Guntur 522438, India

Dept. of CSE, Maharaja Surajmal Institute of Technology, New Delhi 110058, India

Dept. of ECE, Dhanalakshmi College of Engineering, Chennai 602109, Tamilnadu, India

Dept. of CSE, SR Gudlavalleru College of Engineering, Gudlavalleru 521356, India

Corresponding Author Email: kkrao@pvpsiddhartha.ac.in

Pages: 1873-1877 | DOI: https://doi.org/10.18280/ts.390548

Received: 13 July 2022 | Revised: 3 September 2022 | Accepted: 12 September 2022 | Available online: 30 November 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Owing to its intricacy, dermatology is one of the most challenging and uncertain domains for diagnosis. Skin conditions such as carcinoma and melanoma are often very difficult to identify in their early stages and even more challenging to characterize independently. Several authors have studied the use of pattern recognition models to automate detection. This paper describes a novel Deep Convolutional Neural Network (DCNN) for skin disease detection. Skin photographs are first processed to remove unwanted noise and to enhance image quality, since the pixel values of a picture strongly influence classification performance. Features are then extracted with the DCNN and the image is classified with a softmax classifier, producing a diagnosis report as the output. Compared with more classical approaches such as KNN (K-Nearest Neighbour) and a plain CNN, this methodology delivers results faster and with improved accuracy, precision, and recall: the DCNN achieved accuracy, precision, and recall of 98.4%, 96.3%, and 97.2%, respectively, with a detection time of 10,000 milliseconds.

Keywords: 

image processing, deep convolutional neural network, skin images, skin diseases, melanoma and carcinoma

1. Introduction

About 10 to 12% of the Indian population suffers from skin problems [1]. The skin, the largest organ in the human body, is made up of seven layers of ectodermic tissue; it protects the underlying muscles and internal organs and takes in sensory input from the environment.

Increased population, harmful UV (Ultra-Violet) rays, poor hygiene, and global warming are factors that stimulate skin disease. Dermatological conditions are among the most common illnesses in the world [2].

Owing to the scarcity of dermatological specialists and to the complexity and variation of skin disorders, quick and accurate dermatological diagnosis remains difficult in both developed and developing nations [3].

Additionally, it is well known that, in many circumstances, early identification lowers the likelihood of serious outcomes. For some skin illnesses, however, recent environmental conditions merely act as a trigger. When conditions such as melanoma, psoriasis, eczema, and herpes are discovered in their early stages, a person's life is saved from imminent risk.

Skin infections are considered fundamental in most diseases and are risk factors for the majority of malignancies [4]. Many different skin problems can afflict people. These conditions can be recognised and diagnosed by their symptoms, and they can all be treated effectively by a skin expert. Traditionally, a pathologist would perform a prognostic study on the affected skin area in order to classify the skin infection.

The process is carried out with biopsies, which may involve removing the afflicted skin area and sending it to a laboratory to determine whether cancer is present. Histopathology can then be performed on the skin sample to grade the disease and determine the most appropriate course of treatment for the category into which the infection is later classified.

Dermatology education at the undergraduate level is inconsistent and typically limited, which suggests that practitioners need to re-evaluate their present knowledge and abilities in this particular area. Nearly 90% of skin conditions are now treated only in primary care. This implies that many questions about skin problems might be answered if care were taken at an earlier stage.

The quality of life of people with skin conditions may be considerably impacted. As the prevalence of skin problems has increased, earlier-stage detection is becoming more important for successful treatment. Early identification of skin conditions is crucial, and general practitioners (GPs) play a key role in this [5]. There are several initiatives for integrating traditional medicine across the world, especially in less technologically developed nations, where attempts are made to overcome challenges including the absence of affordable medical tools and equipment and of medical expertise.

The pixel is the smallest controllable component of a dermatological skin image: in digital imaging it is the compact addressable element of the display system through which every point of the image is addressed. Each pixel thus contributes to how the real scene is represented.

Changes in pixel values represent the colour intensity present in a particular area of the image. New images can be produced by scaling picture pixels without loss of image quality [6]. Scaling the pixel information removes unwanted pixels from the image, after which the image is ready for pre-processing. Data normalization is a crucial step during image pre-processing: its primary goal is to convert the numerical values in a dataset to a common basic scale, especially when the data features span different ranges.
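As a minimal sketch of the pixel-value normalization just described (assuming 8-bit images held as NumPy arrays, not the authors' exact code), min-max scaling maps every image onto the same [0, 1] range:

```python
import numpy as np

def normalize_pixels(image: np.ndarray) -> np.ndarray:
    """Rescale 8-bit pixel values (0-255) onto the common [0, 1] range."""
    image = image.astype(np.float32)
    # Min-max normalization brings features recorded on different ranges
    # onto the same basic scale, as described above.
    min_val, max_val = image.min(), image.max()
    if max_val == min_val:          # avoid division by zero for flat images
        return np.zeros_like(image)
    return (image - min_val) / (max_val - min_val)
```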

Machine learning (ML) and its subfield deep learning (DL) have made major strides recently, and ongoing research will lead to technological innovations that billions of people might use [7]. Convolutional Neural Networks (CNNs) are used here to detect skin problems, and the outcomes provide clear evidence that the DCNN accurately detects skin diseases. The rest of this analysis is structured as follows: Section 2 reviews the literature; Section 3 describes the Deep Convolutional Neural Network (DCNN) for diagnosis of skin disease; Section 4 presents the results; and Section 5 concludes the analysis.

2. Literature Study

Mustafa et al. [8] studied colour-space utilization by conducting an experiment with luminance to enhance visualization and improve GrabCut segmentation accuracy. According to recent reports, the prevalence of melanoma skin cancer is rising worldwide as a result of increasing UV (Ultra-Violet) radiation and of populations with darker skin. Pre-processing, image segmentation, and feature extraction are essential steps for accurate categorization of segmented skin lesions in computer-aided diagnosis systems used to diagnose melanoma. Corner and geometric characteristics are extracted to improve the training of the SVM (Support Vector Machine) classifier, and the work explains how machine learning algorithms can perform better in disease detection.

Platiša et al. [9] explored the requirement to define the clinical task in digital pathology image quality evaluation. In this study, six pathologists took part in three separate experiments carried out on identical pathology material and designed around different methods. The outcomes of Experiments 2 and 3, however, prevented the use of the full set of questions. The results showed that the pathologists did not distinguish between Mega-none, Mega-JPG, and JPEG (Joint Photographic Experts Group) imperfections, and the observers in Experiment 3 failed to notice what had been seen earlier in Experiment 1. The experiment was not algorithmic, and the authors used samples of animal skin without any pre-processing techniques in order to guarantee complete control over picture modification; the paper also gives a concise summary of digital pathology and image quality.

Cheerla and Frazier [10] presented an automatic technique for lesion segmentation. Otsu thresholding and LBP (Local Binary Pattern) are used to segment the texture. Using a neural network classifier, classification was accomplished with 93% specificity and 97% sensitivity. Other forms of lesions, such as non-melanocytic skin lesions (NoMSLs), were not taken into account in this study. A categorization method for skin cancer was presented by Abu Mahmoud et al. [11] to distinguish benign lesions from malignant melanoma. NoMSLs are likewise not considered in that study, nor is additional emphasis given to the measured parameters.

Barata et al. [12] demonstrated local and global techniques to identify melanoma in dermoscopy images. The first system is used to classify skin lesions, and the second system extracts characteristics using a bag-of-features approach. When colour features were executed and compared against texture properties, the global technique achieved 96% sensitivity and the local approach achieved 80% specificity. Only melanoma is addressed; other lesion forms, such as NoMSLs, are not. Arifin et al. [13] employed skin colour as the primary pattern for categorization to distinguish between benign and malignant tumours. The appearance of skin colour varies with the condition, yet it may follow a consistent pattern; finding this pattern makes categorization more efficient and straightforward.

Kalinski et al. [14] carried out research on the impact of lossless Joint Photographic Experts Group (JPEG) 2000 compression on diagnostic virtual microscopy. Virtual 3-dimensional microscopy was used to evaluate whole-slide images of stomach biopsy specimens compressed with JPEG2000. According to the pathologists' findings, a significant difference appears in Consultant B's observations when classifying the density of Helicobacter pylori (H. pylori gastritis), but no significant difference is seen in its detection. This study used statistical analysis to show the relative performance of human diagnostics. Chang et al. [15] present a technique for automatically detecting facial skin abnormalities; skin analysis is the most important step before beginning medical cosmetology. The system first identifies a human face in a facial picture; based on the identified face, facial traits are retrieved to identify the region of interest (ROI). A pattern recognition model is then used to identify facial skin flaws such as wrinkles and spots within the ROI, with each classifier designed to recognise a specific type of defect. The algorithm effectively identified skin flaws using characteristics retrieved from the ROI, and the results showed that it is quite effective.

3. DCNN for Diagnosis of Skin Disease

The block diagram of the Deep Convolutional Neural Network (DCNN) for diagnosis of skin disease is shown in Figure 1 below.

The dermatological HAM10000 (Human Against Machine) dataset is used in this work to demonstrate the suggested approach. The Medical University of Vienna and Harvard have published this image collection, which is used to classify dermatological skin diseases into seven different categories. The collection consists of 10,015 skin image samples from diverse population backgrounds. The photographs were manually cropped with the lesion centred, at 800 x 600 pixels and 72 DPI, and manual histogram adjustments were applied to improve colour reproduction and visual contrast. The dataset is split into training and testing sets, with 20% of the images reserved for testing.

Figure 1. Block diagram of skin disease diagnosis
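As a rough illustration of the dataset handling described above (not the authors' exact procedure), the HAM10000 metadata file can be split 80/20 with stratification over the seven diagnostic categories; the file name and the image_id/dx column names follow the public HAM10000 release.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# The HAM10000 release ships a metadata CSV listing image_id and dx
# (one of seven diagnostic categories); the path is an assumption.
meta = pd.read_csv("HAM10000_metadata.csv")

# Hold out 20% of the 10,015 samples for testing, stratified by class
# so that each of the seven categories appears in both splits.
train_df, test_df = train_test_split(
    meta, test_size=0.20, stratify=meta["dx"], random_state=42
)
print(len(train_df), len(test_df))   # roughly 8012 / 2003
```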

New images may be produced by rescaling with fewer or more pixels without compromising image quality. Scaling the image pixel information is crucial in order to remove unwanted pixels from the image and prepare it for pre-processing. One of the key steps in pre-processing the photos is feature extraction, which reduces processing time by removing unneeded image content and improves model performance as the dataset grows. When preparing the data, the pixel values of the skin picture are retrieved and certain statistics are computed over them; the classifiers are then trained on both scaled and unscaled (raw) pixel values. A denoising filter is applied to the pictures under consideration: in many cases salt-and-pepper noise is present, and it is removed using an average filter.
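A minimal sketch of the denoising step, assuming OpenCV is available: the text specifies an average filter for salt-and-pepper noise, which cv2.blur implements as a normalized box filter (a median filter is the more usual choice for this noise type, but the text's choice is followed here).

```python
import cv2

def denoise_skin_image(path: str, size: int = 3):
    """Apply the averaging (mean) filter mentioned above to suppress
    salt-and-pepper noise in a skin photograph."""
    image = cv2.imread(path)                 # BGR uint8 image
    if image is None:
        raise FileNotFoundError(path)
    # cv2.blur convolves the image with a normalized size x size box kernel.
    return cv2.blur(image, (size, size))
```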

Data taken from the picture database are examined for their propensity to indicate a particular illness. The picture database may already have undergone enhancements such as hair removal, image centering, and softening.

For these pre-enhanced photos, the images are read in RGB (Red, Green, Blue) format and the pixel values are converted to 1-D (one-dimensional) arrays, with the observed pixel intensities scaled between 0 and 1. Here, 64 x 64 pixels are considered, so around 12,288 values (64 x 64 x 3 colour channels) are recorded per image.
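The 64 x 64 RGB flattening just described can be sketched as follows (file handling via Pillow is an assumption; only the 12,288-value arithmetic comes from the text):

```python
import numpy as np
from PIL import Image

def image_to_vector(path: str) -> np.ndarray:
    """Convert a pre-enhanced skin photo to a scaled 1-D RGB vector."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = np.asarray(img, dtype=np.float32) / 255.0   # scale to [0, 1]
    return pixels.reshape(-1)     # 64 * 64 * 3 = 12,288 values per image
```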

The image pre-processing unit improves the picture by removing noise and unwanted skin components. The image is then split into separate segments to distinguish diseased regions from ordinary skin, and image attributes are extracted to determine whether or not the skin is infected. The image enhancement and segmentation stages concentrate on the diseased region, growing it and dividing it into segments that separate the lesion from healthy skin. The segmentation step is the most crucial, since its precision affects the subsequent phases. The range of lesion forms, colours, and shapes, together with the varied skin types and textures, makes segmentation challenging. Certain lesions may also have uneven borders, and a smooth transition develops only occasionally. Further difficulties arise from dark hair covering the lesions and from specular reflections.
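The paper does not name a specific segmentation algorithm; the sketch below uses Otsu thresholding purely as one common illustrative choice for separating a darker lesion region from the surrounding skin.

```python
import cv2

def segment_lesion(image_bgr):
    """Separate the (typically darker) lesion region from surrounding skin.
    Otsu thresholding is an illustrative choice, not the authors' method."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress hair/speckle edges
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    lesion_only = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    return lesion_only, mask
```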

Feature extraction is the main stage in most classification-oriented problems. The extracted characteristics are essential for training and testing, as they carry crucial information about the image that is used to identify diseases. The pixel intensities of the full photos are collected into 1-D arrays and organised in database form; this database of image pixel intensities is then used to train the DCNN.

The DCNN consists of a stack of neural network layers, as seen in Figure 2. Convolutional and pooling layers alternate one after the other, and the filter depth increases from left to right across the network. The final stage typically consists of one or more fully connected layers. Since the CNN receives an image as input, the neurons in these layers are organised in three dimensions: depth, width, and height. The network is made up of several layers, each of which uses a distinct function to transform the activations from one layer to the next. Three types of layers make up the majority of the design: the convolutional layer, the pooling layer, and the fully connected layer.

Figure 2. DCNN architecture
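The exact layer counts and filter sizes are not given in the paper; the following Keras sketch is only one plausible instance of the alternating convolution/pooling stack with increasing filter depth and a fully connected softmax head described above.

```python
from tensorflow.keras import layers, models

def build_dcnn(input_shape=(64, 64, 3), num_classes=7):
    """Illustrative DCNN: alternating conv/pool layers with filter depth
    increasing from left to right, followed by fully connected layers."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                      # 3-D feature map -> 1-D vector
        layers.Dense(256, activation="relu"),  # fully connected layer
        layers.Dense(num_classes, activation="softmax"),  # softmax classifier
    ])
    return model
```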

Convolution: In a CNN, convolution is mainly used to identify the relevant properties of the image that serves as input to the first layer. The spatial connection between pixels is preserved by the convolution.

Pooling or sub-sampling: Spatial pooling, often referred to as down-sampling or sub-sampling, helps to reduce the size of each feature map while retaining most of the map's significant data. After pooling, the 3-D feature map is transformed into a 1-D feature vector.

Classification (Fully Connected layer): The outputs of the pooling and convolution operations provide the salient characteristics gleaned from a picture. These characteristics are then used by the fully connected layer to assign the input picture to one of the classes learned over the training dataset. The Skin Disease Identification unit ultimately identifies the diseased skin.
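Continuing the hedged sketch above, the fully connected softmax head can be compiled and used to map an input picture to one of the seven HAM10000 categories; the optimizer, loss, and class-name list are assumptions rather than details taken from the paper.

```python
import numpy as np

# Standard HAM10000 diagnostic abbreviations (assumed ordering).
CLASS_NAMES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

model = build_dcnn()                     # sketch defined earlier
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def diagnose(x):
    """Map a batch of preprocessed images shaped (n, 64, 64, 3) to labels
    by taking the argmax of the softmax probabilities."""
    probs = model.predict(x)
    return [CLASS_NAMES[i] for i in probs.argmax(axis=1)]
```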

4. Result Analysis

The presented DCNN can be implemented on any platform; Python is recommended here because it gives the developer access to a wider variety of neural network and machine learning frameworks.

The photographs are first pre-processed and resized to a standard 120 x 120 pixels. To fit a larger number of images into the dataset, photos are rotated in every direction (in steps of 90 degrees) and also flipped. The resulting image is used as the input to the first network layer. The CNN is applied until the network yields higher-level properties such as colour, border, and edge; this is done using the different ConvNet (Convolutional Network) operations, such as convolution and max pooling, until the picture is flattened into a feature vector.
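A hedged sketch of the augmentation just described: resizing to 120 x 120, then generating the 90-degree rotations and flips so that one photograph yields several training samples. The use of NumPy and Pillow here is an implementation assumption.

```python
import numpy as np
from PIL import Image

def augment(path: str) -> np.ndarray:
    """Resize to 120 x 120 and generate the 90-degree rotations and
    horizontal flips described above."""
    img = Image.open(path).convert("RGB").resize((120, 120))
    base = np.asarray(img)
    samples = [np.rot90(base, k) for k in range(4)]   # 0/90/180/270 degrees
    samples += [np.fliplr(s) for s in samples]        # flipped copies
    return np.stack(samples)                          # shape (8, 120, 120, 3)
```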

These vectors are used in the classification process because they contain the data necessary to identify high-level qualities. The batch size is set to 20 and the number of epochs to 25. Once the features have been extracted, the trained model is saved. Model performance is determined using performance metrics including accuracy, precision, recall, and detection time, given by the following equations:

Accuracy $=\frac{T P+T N}{T P+T N+F N+F P}$                (1)

Recall $=\frac{T P}{T P+F N}$                (2)

Precision $=\frac{T P}{T P+F P}$                (3)

where TP = True Positive, FP = False Positive, TN = True Negative, and FN = False Negative.
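Equations (1)-(3) can be evaluated directly from the confusion-matrix counts; the snippet below simply restates those formulas, and the counts in the usage line are made up purely to show the calculation.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    """Evaluate Eqs. (1)-(3) from the counts of true/false positives
    and negatives obtained on the test split."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, precision, recall

# Purely illustrative counts (not results from the paper):
print(classification_metrics(tp=950, tn=34, fp=10, fn=8))
```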

The skin disease detection performance of the KNN, CNN, and DCNN classifiers is evaluated in terms of the required duration and accuracy. The comparative values are presented in Table 1 below.

Table 1. Performance comparison table

Parameters | KNN | CNN | DCNN
Accuracy (%) | 87.2 | 92.4 | 98.4
Precision (%) | 86.1 | 90.2 | 96.3
Recall (%) | 91.0 | 94.3 | 97.2
Detection time (ms) | 14000 | 12000 | 10000

In scientific research it is imperative to minimise bias and error and to be both precise and accurate in the collection of data; measurements are best when they are both precise and accurate. Precision can be thought of as a measure of quality and recall as a measure of quantity: higher precision means that an algorithm returns more relevant results than irrelevant ones, while higher recall means that the algorithm returns most of the relevant results. Table 1 clearly illustrates how, in our experimental studies, the DCNN outperforms KNN and CNN in terms of accuracy, precision, and recall.

Figure 3 compares the KNN, CNN, and DCNN algorithms in terms of accuracy, precision, and recall. The X-axis displays the classifiers, while the Y-axis displays the percentage (%). Compared to KNN and CNN, the DCNN produced the most accurate predictions.

Figure 4 illustrates the detection time comparison between the DCNN, CNN, and KNN algorithms for various dataset sizes. The Y-axis represents the time in ms (milliseconds) and the X-axis represents the algorithms. The DCNN requires less time than CNN and KNN to classify the largest dataset.

Detection time is very important for any algorithm to achieve its objective; as Table 1 shows, among the DCNN, KNN, and CNN, the DCNN takes the least time.

Therefore, the DCNN classification technique efficiently and effectively detects skin diseases, with a high accuracy of 98.4%, precision of 96.3%, recall of 97.2%, and a low detection time of 10,000 ms.

Figure 3. Comparative graph for accuracy, precision and recall values

Figure 4. Detection time comparison graph

5. Conclusion

A Deep Convolutional Neural Network (DCNN) for skin disease diagnosis is discussed in this paper. Diagnosis of skin disorders with the DCNN involves several phases. Skin pictures are extracted while the data are being prepared for processing; the dermatological HAM10000 dataset, which includes 10,015 skin image samples taken from people of various origins, was used in this study. The pre-enhanced photos are converted to RGB format, and the RGB pixel values are then converted to 1-D arrays. Finally, the detection time, precision, recall, and accuracy of the DCNN, CNN, and KNN algorithms are compared. Compared to the CNN and KNN classifiers, the DCNN achieves higher accuracy and takes less time: the obtained accuracy, precision, and recall of the DCNN are 98.4%, 96.3%, and 97.2% respectively, with a detection time of 10,000 ms. Therefore, the DCNN classification technique efficiently diagnoses skin diseases in terms of time, accuracy, recall, and precision. In future work, this approach can be extended with the use of ADHCNN to detect skin disease.

References

[1] Borade, S., Kalbande, D. (2021). Survey paper based critical reviews for cosmetic skin diseases. In 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), pp. 580-585. https://doi.org/10.1109/ICAIS50930.2021.9395803

[2] Adeyemo, A.A., Bashir, S.A., Mohammed, A.D., Abisoye, O.O. (2021). Impact of pixel scaling on classification accuracy of dermatological skin disease detection. In 2020 IEEE 2nd International Conference on Cyberspac (CYBER NIGERIA), pp. 36-43. https://doi.org/10.1109/CYBERNIGERIA51635.2021.9428813

[3] Chowdhary, G., Toppo, N.K., Das, D. (2020). Skin lesion diagnosis in healthcare-cyber physical system. In 2020 IEEE International Conference for Innovation in Technology (INOCON), pp. 1-6. https://doi.org/10.1109/INOCON50539.2020.9298433

[4] Yılmaz, F., Edizkan, R. (2020). Improvement of skin cancer detection performance using deep learning technique. In 2020 28th Signal Processing and Communications Applications Conference (SIU), pp. 1-4. https://doi.org/10.1109/SIU49456.2020.9302339

[5] Uriawan, W., Atmadja, A.R., Irfan, M., Taufik, I., Luhung, N.J. (2018). Comparison of certainty factor and forward chaining for early diagnosis of cats skin diseases. In 2018 6th International Conference on Cyber and IT Service Management (CITSM), pp. 1-7. https://doi.org/10.1109/CITSM.2018.8674381

[6] San Pedro, J.M.M., Barfeh, D.P.Y. (2018). Tuberculin skin test checker using digital image processing. In 2018 3rd International Conference on Control and Robotics Engineering (ICCRE), pp. 233-237. https://doi.org/10.1109/ICCRE.2018.8376471

[7] Hameed, N., Shabut, A., Hossain, M.A. (2018). A Computer-aided diagnosis system for classifying prominent skin lesions using machine learning. In 2018 10th Computer Science and Electronic Engineering (CEEC), pp. 186-191. https://doi.org/10.1109/CEEC.2018.8674183

[8] Mustafa, S., Dauda, A.B., Dauda, M. (2017). Image processing and SVM classification for melanoma detection. In 2017 International Conference on Computing Networking and Informatics (ICCNI), pp. 1-5. https://doi.org/10.1109/ICCNI.2017.8123777

[9] Platiša, L., Van Brantegem, L., Kumcu, A., Ducatelle, R., Philips, W. (2017). Influence of study design on digital pathology image quality evaluation: The need to define a clinical task. Journal of Medical Imaging, 4(2): 021108. https://doi.org/10.1117/1.JMI.4.2.021108

[10] Cheerla, N., Frazier, D. (2014). Automatic melanoma detection using multi-stage neural networks. International Journal of Innovative Research in Science, Engineering and Technology, 3(2): 9164-9183.

[11] Abu Mahmoud, M., Al-Jumaily, A., Takruri, M.S. (2013). Wavelet and curvelet analysis for automatic identification of melanoma based on neural network classification. International Journal of Computer Information Systems and Industrial Management (IJCISIM), 5: 606-614. http://hdl.handle.net/10453/28022.

[12] Barata, C., Ruela, M., Francisco, M., Mendonça, T., Marques, J.S. (2013). Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Systems Journal, 8(3): 965-979. https://doi.org/10.1109/JSYST.2013.2271540

[13] Arifin, M.S., Kibria, M.G., Firoze, A., Amini, M.A., Yan, H. (2012). Dermatological disease diagnosis using color-skin images. In 2012 International Conference on Machine Learning and Cybernetics, 5: 1675-1680. https://doi.org/10.1109/ICMLC.2012.6359626

[14] Kalinski, T., Zwönitzer, R., Grabellus, F., Sheu, S.Y., Sel, S., Hofmann, H., Roessner, A. (2011). Lossless compression of JPEG2000 whole slide images is not required for diagnostic virtual microscopy. American Journal of Clinical Pathology, 136(6): 889-895. https://doi.org/10.1309/AJCPYI1Z3TGGAIEP

[15] Chang, C.Y., Li, S.C., Chung, P.C., Kuo, J.Y., Tu, Y.C. (2010). Automatic facial skin defect detection system. In 2010 International Conference on Broadband, Wireless Computing, Communication and Applications, pp. 527-532. https://doi.org/10.1109/BWCCA.2010.126