A Meta-Learning Approach for Diabetic Retinopathy Severity Grading

ABSTRACT


INTRODUCTION
According to projections by the World Health Organization, there will be 300 million individuals living with diabetes by the year 2025. One of the most severe effects that diabetes can have on the eyes is a condition known as diabetic retinopathy [1]. Diabetic retinopathy (DR) is the leading cause of visual impairment and blindness in diabetics [2]. According to "Prevent Blindness," the number of people with age-related eye diseases and visual impairment will triple over the next three decades. Diabetes mellitus damages the retina's tiny blood vessels, which can lead to blindness if left untreated [3]. Diabetic retinopathy affects 80% of individuals who have had diabetes for at least ten years, making it the leading cause of new blindness among people aged 25 to 74.
The two forms of diabetic retinopathy are non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). NPDR is the first and most common stage of diabetic retinopathy. Because blood vessels in the retina have burst, this condition causes small amounts of blood and other fluid to leak into the eye [4]. PDR develops when the blood vessels in the retina tighten and restrict proper blood flow, which decreases the retinal blood volume. Many technologies have been developed to detect DR automatically from RGB fundus images; however, those approaches are limited in estimating the severity level owing to the large-scale unavailability of datasets. Over the past several years, many diagnostic tools have been developed [5] to detect DR and classify fundus images automatically. The extraction of handcrafted features is a time-consuming process; therefore, technologies that can learn features directly from images have evolved and attracted researchers' attention. DR can be treated adequately only when it is diagnosed early. As a result, early detection through routine retinal screening is crucial for people with diabetes.
Consequently, early diagnosis and screening of people suffering from retinopathy can contribute to the prevention of vision loss. Medical image analysis is a burgeoning field of research that is gaining favor among researchers and practitioners in the medical field [6]. A large number of studies using different neural network methods are now being carried out to identify and grade DR. A CNN method for diagnosing and categorizing DR from fundus pictures by severity was shown to have an average accuracy of 75% after being tested on 5,000 validation images [7]. This model, however, has a high time complexity, which may affect the classifier's performance. To address these challenges, this work developed a weighted stacking-based ensemble classifier (WSEC) with a meta-learning (ML) technique and multipath CNN-based feature extraction to improve diabetic retinopathy detection performance. The purpose of this research is to devise an automated, suitable, and state-of-the-art strategy that uses image processing to detect DR at an early stage and reduce retinal damage.
The remainder of this work is organized as follows. Section 2 reviews related work recently published in the literature. Section 3 discusses the proposed methodology and classification strategy for diabetic retinopathy diagnosis in detail. Section 4 presents the effectiveness analysis of the suggested technique. Concluding remarks are given in Section 5.

LITERATURE REVIEW
This section reviews some of the more recent methods that have been developed to identify diabetic retinopathy.
Gao et al. [8] created an automated diabetic retinopathy (DR) diagnosis system in which deep convolutional neural network models were built to assess the severity of DR fundus images, achieving an accuracy of 88.72% on a four-degree classification test.
Roshini et al. [9] used three key stages to construct an automatic DR detection model: (a) image preprocessing, (b) blood vessel segmentation, and (c) classification. The suggested strategy is referred to as FP-CSO since the traditional CSO is based on a fitness probability in the upgraded algorithm. Tariq et al. [10] developed a reliable, automatic, computer-based DR diagnosis. The most accurate classification is achieved by the pre-trained Se-ResNeXt-50 model, which outperforms all other pre-trained models on the dataset with a score of 97.53 percent.
Niemeijer et al. [11] created several computer-aided detection or diagnosis (CAD) algorithms that operate on the multiple photographs that make up an exam in order to determine the probability of a condition that often affects diabetics. The area under the receiver operating characteristic curve was 0.881 for the best-performing fusion method.
Chakraborty et al. [12] developed a supervised-learning method that uses an artificial neural network (ANN) to deliver more accurate diagnostic results for diabetic retinopathy patients; the method's accuracy was determined to be 97.13 percent. Gatti et al. [13] used a Probabilistic Neural Network (PNN) together with other machine learning techniques, SVM and Bayesian classification, to diagnose early DR. 3,000 images were processed for training and testing; features were extracted using image processing techniques, and an accuracy of 84.12 percent was achieved.
Khalil et al. [14] developed a Convolutional Neural Network (CNN) classification of retinal fundus images into three categories: normal, background, and pre-proliferative retinopathy. Five convolutional layers are used in the model, followed by five max-pooling layers, and finally a global average pooling is applied, attaining an accuracy of 95.23% on this task. Kalyani et al. [15] described a reformed capsule network (CapsNet) to detect and classify diabetic retinopathy: class capsule and softmax layers were used to find the probability of an image belonging to a particular class. According to the experimental results, the proposed method attains an accuracy of 94.30%.
Gayathri et al. [16] combined deep learning and machine learning (ML) techniques: an M-CNN feature extractor and an ML classifier were developed to extract significant features from fundus pictures and categorize lesions according to the severity levels they exhibit. This model, however, has a high time complexity, which may affect the classifier's performance. From the studies above, it can be seen that current models have both advantages and disadvantages. The different state-of-the-art methods are summarized in Table 1.

PROPOSED METHODOLOGY
This research suggests combining a weighted stacking-based ensemble classifier (WSEC) with meta-learning (ML) to improve diabetic retinopathy detection performance. The work can be broken down into four sections: preprocessing, feature extraction, meta-learner-based prediction using the weighted stacking-based ensemble classifier (WSEC), and the classification module.
• Initially, the preprocessing step is performed.
• Next, feature extraction is carried out with a Multipath Convolutional Neural Network (M-CNN), which extracts global and local features from the images.
• Third, the weighted stacking-based ensemble classifier (WSEC) prediction is processed; the ensemble classifiers are a Support Vector Machine (SVM), a Fuzzy Neural Network (FNN), and an AdaBoost classifier.
• Finally, an Enhanced Recurrent Neural Network (ML-ERNN) based meta-model is developed and implemented to improve the overall efficiency of the categorization during diabetic retinopathy detection.

Figure 1 illustrates the process of the proposed methodology. A machine learning classifier is used for DR multiclass classification/severity grading. The M-CNN was trained using 31,029 DR-category fundus images from the Kaggle and APTOS datasets (not utilized for performance evaluation of the system).

Data augmentation
In many classification situations, the supplied data are insufficient to train classifiers with high accuracy and robustness. Data augmentation is a common strategy for coping with the limited amount of data relative to the number of adjustable parameters in a classifier. Figure 4 shows the transformation and rotation of images during augmentation for both datasets.

In the M-CNN, the feature maps from the previous layer are passed to the second route, which carries out 9×9 convolutional operations with 32 kernels followed by a max-pooling layer. The primary route leads to a max-pooling layer and then back to a 5×5 convolutional layer. The number of kernels in the network is determined by trial and error; the M-CNN architecture is designed with kernel settings that yield a 0.97 percent error rate. In the max-pooling operation, the input is down-sampled using the highest value among each cluster of neurons in the preceding layer. Down-sampling reduces the spatial resolution of the succeeding layers while preserving the essential local structures. The feature maps then go through an integration process in the weighted transform layer (1-1, without bias) before being passed to the Fully Connected Layer (FCL).

The first and second FCLs have 128 and 64 hidden neurons, respectively. The network's prediction performance is hindered when the softmax classifier is used. After the first fully connected layer, a dropout of 0.5 (randomly deactivating units in the network) is applied before extracting features from the second fully connected layer; this prevents overfitting during ML classifier training. Consequently, integrating a CNN with an ML classifier can significantly boost the efficiency of the classification system as a whole.
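As a minimal sketch (not the authors' exact pipeline, whose transformation parameters are not stated), the rotation, flipping, and brightness transformations typical of such augmentation can be expressed with NumPy:

```python
import numpy as np

def augment(image: np.ndarray, seed: int = 0) -> list:
    """Generate simple augmented variants of an H x W x C image:
    90/180-degree rotations, horizontal/vertical flips, and a
    small random brightness jitter."""
    rng = np.random.default_rng(seed)
    variants = [
        np.rot90(image, k=1),   # rotate 90 degrees
        np.rot90(image, k=2),   # rotate 180 degrees
        image[:, ::-1, :],      # horizontal flip
        image[::-1, :, :],      # vertical flip
    ]
    # brightness jitter in [0.9, 1.1], clipped to the valid pixel range
    jitter = np.clip(image.astype(float) * rng.uniform(0.9, 1.1), 0, 255)
    variants.append(jitter.astype(image.dtype))
    return variants

img = np.zeros((224, 224, 3), dtype=np.uint8)  # a preprocessed fundus image
aug = augment(img)
print(len(aug))  # 5 augmented variants
```

Each transformation preserves the 224×224×3 input shape expected by the downstream network.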

Procedure of feature extraction using M-CNN
The features produced by a standard neural network may lose global structure if the simple route is either too short or too long; shortcut pathways can fix this. The multipath extracted features help preserve the global structure, so the M-CNN produces more meaningful global and local features.

In the second iteration, the resulting feature matrices used for categorization are gathered from the FCL after concatenating the feature maps from the two routes.

Two primary challenges are encountered in multipath feature extraction: (1) moving features from the currently active layer to the shortcut route can deceive the CNN in its evaluation of global structures, and (2) the network is prone to global noise interference if the image resolution is inadequate. These difficulties are overcome because the shortcut route has a sufficient number of convolutional layers. The proposed M-CNN achieves its highest performance for DR classification when the shortcut route consists of a 9×9 convolutional layer with 32 kernels.
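To make the two-route layout concrete, the following sketch traces feature-map shapes through the M-CNN routes. It assumes 'same' padding and stride-2 pooling, since the paper does not state these details:

```python
def conv_same(h, w, c_out):
    # A 'same'-padded convolution preserves the spatial size and
    # sets the channel count to the number of kernels.
    return (h, w, c_out)

def maxpool2(h, w, c):
    # 2x2 max pooling halves the spatial resolution.
    return (h // 2, w // 2, c)

# Input image after preprocessing (196 x 196, as stated in Section 3.4)
shape = conv_same(196, 196, 8)                       # first 5x5 conv, 8 kernels
# Route 2 (shortcut): 9x9 conv with 32 kernels, then max pooling
route2 = maxpool2(*conv_same(shape[0], shape[1], 32))
# Route 1 (primary): max pooling, then a 5x5 conv
route1 = conv_same(*maxpool2(*shape)[:2], 8)
print("route 1:", route1)  # (98, 98, 8)
print("route 2:", route2)  # (98, 98, 32)
# Both routes end with matching spatial size, so their feature maps can
# be concatenated along the channel axis (8 + 32 = 40 channels) before
# the fully connected layers.
```

Matching spatial sizes at the merge point is what allows the concatenation step described above.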

Feature extraction using pre-trained networks
Pre-trained networks are currently available that can be used to solve classification challenges. Deep convolutional neural networks (CNNs) such as these can also serve as feature extractors; in such cases, the features are taken from the intermediate layers. In this research, the integrated networks ResNet-50 [17] and VGG-16 are used to extract DR features from retinal fundus images.

Feature extraction using grey level co-occurrence matrix (GLCM)
The gray-level co-occurrence matrix (GLCM) is a tool that analyses the spatial relationship of pixels in a texture. The idea is to calculate the co-occurrence matrix for small image patches and then use that matrix to derive statistics such as contrast, correlation, uniformity, homogeneity, and entropy. This research implements the Angular Second Moment (energy), Inverse Difference Moment, Entropy, and Correlation.

b) Inverse difference moment

The inverse difference moment (IDM) denotes local homogeneity. It has a high value when the local gray level is uniform, and the inverse GLCM value is correspondingly high:

IDM = Σ_i Σ_j P(i, j) / (1 + (i − j)²)  (2)

The IDM weight is the inverse of the contrast weight.

c) Entropy

Entropy measures the amount of information in an image that must be sacrificed to achieve a given level of compression; it applies both to the image information itself and to the information loss that occurs during signal transmission:

Entropy = −Σ_i Σ_j P(i, j) log P(i, j)  (3)

d) Correlation

Correlation measures the linear relationship between the gray levels of neighboring pixels:

Correlation = Σ_i Σ_j (i − μ_i)(j − μ_j) P(i, j) / (σ_i σ_j)  (4)

(The related optical technique of digital image correlation uses tracking and image registration to provide precise 2D and 3D measurements of changes in images, most often for deformation, displacement, strain, and optical flow; here, however, correlation refers to the GLCM statistic.)
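A minimal sketch of the GLCM statistics discussed above, using a horizontal pixel offset and P(i, j) normalized to probabilities (the quantization level count is an assumption for illustration):

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int = 8):
    """Compute a horizontal-offset GLCM and the texture statistics
    above: ASM (energy), IDM, entropy, and correlation."""
    # quantize the image to `levels` gray levels
    q = np.rint(img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                          # count horizontal neighbor pairs
    p = glcm / glcm.sum()                        # normalize to probabilities
    i_idx, j_idx = np.indices(p.shape)
    asm = (p ** 2).sum()                         # angular second moment (energy)
    idm = (p / (1 + (i_idx - j_idx) ** 2)).sum() # inverse difference moment
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    mu_i, mu_j = (i_idx * p).sum(), (j_idx * p).sum()
    var_i = ((i_idx - mu_i) ** 2 * p).sum()
    var_j = ((j_idx - mu_j) ** 2 * p).sum()
    corr = (((i_idx - mu_i) * (j_idx - mu_j) * p).sum()
            / (np.sqrt(var_i * var_j) + 1e-9))
    return asm, idm, entropy, corr

# a smooth horizontal gradient patch: neighbors always differ by one level
img = np.tile(np.arange(8, dtype=np.uint8) * 32, (8, 1))
asm, idm, entropy, corr = glcm_features(img)
print(round(corr, 2))  # 1.0 — a perfect linear neighbor relationship
```

On the gradient patch, every neighbor pair differs by exactly one gray level, so IDM evaluates to 0.5 and the correlation is 1, matching the intuition behind Eqs. (2)–(4).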

Weighted stacking ensemble learning model
For the final prediction, a weighted stacking ensemble classifier based on majority voting is used in this study. The extracted M-CNN features are used to train and evaluate the classifiers. An ensemble classifier is preferred over individual classifiers because of the restrictions imposed by each individual classifier; as a result, the errors made by one classifier differ from those of the others. The proposed ensemble classification model is shown in Figure 6.
The most straightforward kind of majority voting, known as hard voting, predicts the final class label according to the predictions of the base classifiers.

Y = Max_{j=1..C} Σ_{i=1}^{N} Δ(ŷ_i = j)  (5)

where Δ(·) is 1 when base classifier i predicts class j and 0 otherwise, N is the number of base classifiers, and C denotes the number of classes; C = 2 and N = 3 in this work.
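A minimal sketch of this hard-voting rule:

```python
from collections import Counter

def hard_vote(predictions):
    """Hard majority voting over base-classifier outputs (Eq. (5)):
    the final label is the class predicted most often."""
    return Counter(predictions).most_common(1)[0][0]

# N = 3 base classifiers (SVM, FNN, AdaBoost) and C = 2 classes, as in this work
print(hard_vote([1, 0, 1]))  # -> 1
print(hard_vote([0, 0, 1]))  # -> 0
```

With N = 3 and C = 2 a tie is impossible, so the majority class is always well defined.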

K(X_i, X_j) = Φ(X_i) · Φ(X_j)  (6)

Kernels calculate dot products in high-dimensional spaces without explicitly mapping the data. The decision function is:

f(x) = Σ_{i=1}^{N} α_i y_i K(x, X_i) + b  (7)

This work explores two commonly used kernel functions. The radial basis function (RBF) kernel, a data-dependent kernel, has evolved as a powerful alternative tool; RBF kernels converge more slowly than polynomial kernels but provide superior results. Dot products are used in the classification process, and the number of support vectors scales linearly with the classification task. For non-separable data, soft-margin classifiers are used, with slack variables introduced to relax the separation requirements.
w · X_i + b ≤ −1 + ξ_i,  for y_i = −1  (11)

According to the preceding analysis, the proposed hybrid feature-selection methodology is a very efficient method for assessing the data.
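As an illustrative sketch of Eqs. (6) and (7) (the support vectors, coefficients, and γ below are hypothetical, chosen only to show the mechanics):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=0.5):
    """RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2):
    an implicit dot product in a high-dimensional feature space."""
    diff = np.asarray(xi, float) - np.asarray(xj, float)
    return np.exp(-gamma * np.dot(diff, diff))

def decision(x, support_vectors, alphas, labels, b, gamma=0.5):
    """SVM decision function of Eq. (7):
    f(x) = sum_i alpha_i * y_i * K(x, X_i) + b, thresholded by sign."""
    s = sum(a * y * rbf_kernel(x, sv, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return np.sign(s + b)

# Two hypothetical support vectors for a two-class problem
svs = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
label = decision([0.9, 1.1], svs, alphas=[1.0, 1.0], labels=[+1, -1], b=0.0)
print(label)  # 1.0 — the query point lies near the positive support vector
```

Because the kernel is evaluated directly between input points, the mapping Φ never needs to be computed explicitly.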

Fuzzy neural network (FNN)
There are three layers in a fuzzy neural network: fuzzy input, fuzzy rules, and fuzzy output [19]. A popular neural network training approach uses Back Propagation (BP) least-mean-square learning. The structure is shown in Figure 7. Edges link the neuron processing units, and each neuron input is weighted to indicate its relevance. The net value is found by linearly combining the weighted inputs of the neuron. Fuzzy Backpropagation (FBP) estimates the net value using fuzzy LR numbers and does not assume criterion independence. A further benefit is that the FBP algorithm neither oscillates nor falls into a local minimum [20].

Fuzzy Back Propagation algorithm (FBP algorithm)
Step 1: Randomly generate the initial weight set W for the input-hidden layer such that each w_ji = (wm_ji, wα_ji, wβ_ji) is an LR-type fuzzy number. Likewise, produce the weight set W' for the hidden-output layer, where each w'_kj = (wm'_kj, wα'_kj, wβ'_kj) is an LR-type fuzzy number.

Step 2: Let (Ip, Dp), p = 1, 2, ..., N, be the input-output pattern set required to train the fuzzy backpropagation method. Here Ip = (Ip0, Ip1, ..., Ipn), where each Ipi is an LR-type fuzzy number.
Step 7: Compute the change of weights ∆W'(t) for the hidden-output layer: compute the gradient components (∂E/∂wm', ∂E/∂wα', ∂E/∂wβ'), then set ∆W'(t) = −η∇E(t) + α∆W'(t − 1). The updated hidden-to-output weight is W'(t) = W'(t − 1) + ∆W'(t).

AdaBoost classifier

Let Y represent the set of potential class labels and X the input data space. We analyze the two-class case, Y = {−1, +1}, and assume X = R^n. The objective is to create a mapping function F: X → Y that, given a feature vector x ∈ X, computes the correct class label y. We also consider the scenario in which a collection of labeled training data is available. All AdaBoost-based strategies can be viewed as a greedy optimization strategy for minimizing the exponential error function of the classifier decision F(x). Boosting's strong approximation capability has been demonstrated theoretically, but its remarkable generalization capability has only been validated empirically, leaving room for future advancements. Unlike ours, other boosting techniques lack a natural stopping criterion. The results are compared using Gentle AdaBoost in Matlab: the boosting strategy surpasses Gentle AdaBoost in generalization error (measured on a control subset) but lowers the training error more slowly.
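As an illustrative sketch of the exponential-loss view described above (a minimal discrete AdaBoost with decision stumps, not the authors' Matlab implementation), for Y = {−1, +1}:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost: greedily adds threshold stumps, each round
    reweighting samples according to the exponential error of the
    combined decision F(x)."""
    n = len(y)
    w = np.ones(n) / n                 # sample weights
    ensemble = []                      # (alpha, feature, threshold, sign)
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):    # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for s in (+1, -1):
                    pred = s * np.where(X[:, f] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, s, pred)
        err, f, thr, s, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # stump weight
        w *= np.exp(-alpha * y * pred)           # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, f, thr, s))
    return ensemble

def predict(ensemble, X):
    F = sum(a * s * np.where(X[:, f] <= thr, 1, -1)
            for a, f, thr, s in ensemble)
    return np.sign(F)                   # classifier decision F(x)

# Toy 1-D data, separable at x = 0
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([-1, -1, +1, +1])
model = adaboost_stumps(X, y, rounds=3)
print(predict(model, X))
```

Each round adds the stump that minimizes the weighted error, which is exactly the greedy step of minimizing the exponential loss of F(x).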

Classification using meta-learning model
Meta-learning is a machine learning technique that explores the learning process itself and identifies its mechanisms in order to reuse them in the future. The goal is to build a flexible automatic learning machine that can solve different learning problems using meta-data, such as learning-algorithm properties, learning-problem traits, or previously discovered patterns derived from the relationship between learning problems and the efficacy of different learning algorithms, thereby improving the learning algorithms.

Enhanced recurrent neural network (ERNN)
Recurrent Neural Networks (RNNs), which are used to handle sequential input, form one of the most significant subfields of deep learning. Because input is processed sequentially, RNNs are naturally deep in time. In tasks such as language translation and natural language processing, RNNs have outperformed other sequence-learning algorithms [21].
This paper proposes an Enhanced Recurrent Neural Network (ERNN), which overcomes the drawbacks of the RNN by leveraging the absolute value of the recurrent-layer output. The proposed network gains nonlinearity from the absolute value while maintaining information flow during forward propagation and backpropagation. The approach uses two recurrent layers; the mean of the moduli of these two layers is taken as the recurrent layer's final output at each time step.

Backpropagation with an adaptive learning rate is used for training. Although the ML-ERNN has more parameters than the ERNN, the computations of the two channels in the ML-ERNN are independent of one another and can be performed in parallel. In the ERNN, h_t depends on h_{t−1} at time step t through the identity matrix W_hh.

With a ReLU nonlinearity, the hidden-layer activation h_t = 0 whenever its input is negative, so information about the prior hidden state h_{t−1} is lost. In the suggested method, input information is never lost as it passes through both h_t^(1) and h_t^(2). The ML-ERNN uses the channels' absolute values because, under absolute values, a large negative value dominates a small positive one. In addition, a weight-updating factor was devised in this research to solve this problem; the Gaussian-distribution-factor-based improvement for the prediction rate is described in the section below.
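A minimal forward-pass sketch of the two-channel idea, under assumptions the paper leaves open (identity W_hh as stated, per-channel input weights, random initialization):

```python
import numpy as np

def ernn_step(x_t, h1_prev, h2_prev, Wxh1, Wxh2):
    """One time step of a two-channel ERNN sketch: each channel keeps the
    identity recurrence and applies |.| as the nonlinearity, and the layer
    output is the mean of the two channels' moduli, so negative
    pre-activations are never zeroed out (unlike ReLU)."""
    h1 = np.abs(Wxh1 @ x_t + h1_prev)   # channel 1, W_hh = I
    h2 = np.abs(Wxh2 @ x_t + h2_prev)   # channel 2, W_hh = I
    out = 0.5 * (h1 + h2)               # mean modulus of the two channels
    return h1, h2, out

rng = np.random.default_rng(0)
Wxh1, Wxh2 = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
h1 = h2 = np.zeros(4)
for x_t in rng.normal(size=(5, 3)):     # a length-5 input sequence
    h1, h2, out = ernn_step(x_t, h1, h2, Wxh1, Wxh2)
print(out.shape)  # (4,)
# |.| preserves magnitude information that ReLU would discard, and the two
# channel updates are independent, so they can run in parallel.
```

The sketch only illustrates the forward information flow; training with adaptive-learning-rate backpropagation is as described in the text.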
Gaussian Distribution Factor (GDF): This paper proposes a modified technique to overcome these disadvantages. A Gaussian-distribution relationship is used to factor each data point's neighbors toward the data point itself. The Gaussian distribution factor is influenced by two terms: the data intensities, or feature attraction λ (0 < λ < 1), and the spatial position of the neighbors, or distance attraction ξ (0 < ξ < 1), which also depends on the neighborhood structure. In the definition of the Gaussian distribution factor, H_ij denotes the feature attraction and F_ij the distance attraction; the underlying parameters λ and ξ rebalance the proportions of the two attractions in the neighborhood. Finally, the ERNN-based meta-learning model yields the highest-quality outcomes.

RESULTS AND DISCUSSION
The suggested ML-ERNN approach has been tested on two benchmark datasets, APTOS and Kaggle [22], to assess its performance. The different classifiers are trained and evaluated on these datasets, which are separated into training and test sets.
The training and testing splits are 80 and 20 percent, respectively. Table 3 provides details of the confusion matrices for both the Kaggle and APTOS datasets. All existing classifiers and the proposed meta-learning model are run in a static environment where the entire dataset is analyzed at once, and the accuracies are measured using 10-fold cross-validation. Besides classification accuracy, the statistical measures shown in Eqs. (20)-(23), averaged over the classifiers, are used to evaluate the classifiers. Precision is the proportion of correctly identified positive observations relative to the total number of predicted positive observations:

Precision = TP / (TP + FP)  (20)

Recall = TP / (TP + FN)  (21)

The F1 score is the weighted average of precision and recall; it therefore takes both false positives and false negatives into account:

F1 Score = 2 × (Recall × Precision) / (Recall + Precision)  (22)

Accuracy evaluates performance in terms of positives and negatives:

Accuracy = (TP + TN) / (TP + TN + FP + FN)  (23)

The precision comparison in Figure 8 shows that the suggested ML-ERNN is superior to the existing approaches. The ML-ERNN technique achieves a precision of 98% on the APTOS dataset, whereas M-CNN achieves 97.34%, CNN 90.86%, PNN 86.76%, and ANN 83.11%. On the Kaggle dataset, ML-ERNN achieves a precision of 98%, whereas M-CNN achieves 96%, CNN 90.67%, PNN 87%, and ANN 83.94%. The proposed strategy thus produces superior precision. The recall comparison in Figure 9 likewise favors the suggested ML-ERNN. On the APTOS dataset, ML-ERNN produces a recall of 99.93%, compared to 98.04% for M-CNN, 91% for CNN, 88.78% for PNN, and 84.34% for ANN. On the Kaggle dataset, ML-ERNN produces a recall of 99.96%, compared to 97.61% for M-CNN, 94% for CNN, 89.64% for PNN, and 84.13% for ANN. The recommended strategy therefore produces superior recall.
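The metrics of Eqs. (20)-(23) can be computed directly from the confusion-matrix counts (the counts below are illustrative, not taken from Table 3):

```python
def metrics(tp, tn, fp, fn):
    """Precision, recall, F1, and accuracy from confusion-matrix
    counts, matching Eqs. (20)-(23)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * recall * precision / (recall + precision)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy

p, r, f1, acc = metrics(tp=90, tn=85, fp=10, fn=15)
print(round(p, 2), round(r, 2), round(f1, 2), round(acc, 3))
# 0.9 0.86 0.88 0.875
```

Note that accuracy counts both correct classes (TP and TN) in the numerator, unlike precision and recall, which each focus on the positive class.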
The F-measure comparison in Figure 10 shows that the suggested ML-ERNN is better than the existing approaches. The ML-ERNN approach gives a higher F-measure of 98.98% on the APTOS dataset, whereas M-CNN achieves 97.89%, CNN 90.86%, PNN 87.71%, and ANN 83.23%. On the Kaggle dataset, ML-ERNN produces an F-measure of 98.98%, compared to 97.04% for M-CNN, 92.73% for CNN, 88.45% for PNN, and 83.33% for ANN. The proposed strategy thus gives a better F-measure.
The suggested ML-ERNN outperforms the previous approaches in terms of accuracy, as shown in Figure 11. The ML-ERNN approach produces a higher accuracy of 99.93% on the APTOS dataset, whereas M-CNN achieves 98%, CNN 92.94%, PNN 88%, and ANN 84.12%. Table 4 shows the performance comparison for the proposed and existing methods. On the Kaggle dataset, ML-ERNN produces an accuracy of 99.96%, compared to 98.17% for M-CNN, 91.16% for CNN, 85.32% for PNN, and 82.12% for ANN. It is concluded that the proposed method is more accurate.

CONCLUSIONS
In this study, the authors developed a meta-learning-based Enhanced Recurrent Neural Network (ML-ERNN) with the purpose of improving the overall efficiency of the categorization during diabetic retinopathy detection. M-CNN extraction was used to extract essential properties from fundus photographs and classify lesions according to varying degrees of severity. SVM, FNN, and AdaBoost were the ensemble classifiers used in the trials. Finally, a meta-model-based classifier was developed to improve the accuracy of diabetic retinopathy identification. The attributes retrieved using pre-trained networks are also employed in the assessment process. The multipath network can be adapted to predict a variety of retinal illnesses, which will improve systems for monitoring retinal health. The proposed approach yielded a high accuracy of 99.96% on the testing set of the Kaggle dataset and 99.93% on the testing set of the APTOS dataset. A series of experiments showed that the M-CNN features perform most effectively when combined with the ML-ERNN classifier. Because only strict-majority voting was considered here, one way to extend this work is to use a more flexible majority-voting scheme. For ophthalmologists, the proposed model will be immensely useful in improving patient outcomes by leveraging insights gained from a broad range of patient data and treatment experiences; the model can analyze historical data to identify patterns associated with patients at higher risk of developing severe diabetic retinopathy.

Figure 1. The overall process of the proposed methodology

3.1 Dataset description
APTOS and Kaggle (for DR detection) are the datasets used in this study. APTOS has 3,662 images, while the Kaggle dataset has 35,126. Table 2 contains a description of the datasets, and Figure 2 gives the class-wise categorization. The diabetic retinopathy (DR) features are retrieved independently from the images in these datasets and utilized to train the Meta-Learning (ML) classifiers. The M-CNN is used for feature extraction, and a machine learning classifier for DR multiclass classification/severity grading.

Figure 2. Datasets analysis on class-wise categorization

3.2 Preprocessing
Before feature extraction on a retinal fundus image, preprocessing enhances the image's desirable features. The common strategies for preparing retinal images introduced in this work are detailed in depth. The image is resized in this step to aid the subsequent procedure for detecting the different stages of diabetic retinopathy. Image resizing: resizing an image alters its proportions, affecting file size and image quality; the most common reason for scaling an image is to reduce the size and complexity of large files. Two datasets, APTOS and Kaggle, are used in this research. The input image size in APTOS is (2848×4288×3) (Figure 3(a)), and the image is downsized to (224×224×3) (Figure 3(b)) after preprocessing, as shown in Figure 3.
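A minimal sketch of the resizing step (nearest-neighbor index sampling is used for illustration; the text does not state which interpolation method is applied):

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize of an H x W x C image via index sampling."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]

# APTOS-sized input (2848 x 4288 x 3) downsized to the network input size
img = np.zeros((2848, 4288, 3), dtype=np.uint8)
small = resize_nearest(img, 224, 224)
print(small.shape)  # (224, 224, 3)
```

Downsizing to a fixed 224×224×3 shape gives every image the same dimensions before feature extraction.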

Figure 3. The preprocessing process of images

Figure 4. Transformation and rotation of the image in augmentation

3.4 Feature extraction
The disease is graded using popular meta-machine-learning classifiers and a unique M-CNN architecture for extracting features from retinal fundus images. The transfer learning via feature extraction method is used. Figure 5 depicts the M-CNN architecture. The CNN's input image is 196×196 pixels. Two feed-forward paths are presented.

Figure 5. M-CNN architecture

First, there is the mainstream CNN path; second, multipath features are extracted. After the initial convolutional layer, both feature maps are concatenated before linking. ReLU is a superior activation function to sigmoid and tanh with respect to gradient behavior. The first convolutional layer is composed of a 5×5 convolutional operation with eight kernels, after which the path divides into the two routes described above.

a) Angular second moment
The Angular Second Moment (ASM), also known as Uniformity or Energy, is the sum of the squared GLCM entries and assesses image homogeneity. A high Angular Second Moment indicates visual homogeneity, i.e., similar pixels.

Figure 6. Proposed weighted stacking ensemble model

3.5.1 Support vector machine (SVM)
The support vector machine (SVM) is a machine that directly predicts decision surfaces and has excellent performance indicators. The distance between the hyperplanes is called the margin [18]. An SVM classifier is defined using the training examples. Real-world classification involves data that can only be separated by a nonlinear decision surface; in this situation, a kernel-based transformation is included in the optimization of the input data.

Figure 7. Example neural network with a single output neuron and a hidden layer

Table 3 .
Confusion matrix details for the Kaggle and APTOS datasets

Table 4 .
Performance comparison of the proposed and existing methods