One of the more dangerous and prevalent malignancies in women is cervical cancer. Pap smear evaluation by a pathologist is the preferred method for cervical cancer diagnosis. However, it suffers from the subjectivity of pathologists and is limited by their expertise. Although numerous studies have proposed automatic diagnosis of cervical cancer from Pap smear images, more precise and effective methods are still required. In this paper, an automated computer-aided diagnosis system based on a hybrid approach of transfer learning and an unbounded FMNN is proposed. The proposed system combines pre-trained deep learning models with an unbounded FMNN classifier. Its performance is evaluated by experimenting with different pretrained models on the benchmark Pap smear datasets Herlev and Sipakmed. According to the experimental results, the hybrid system based on AlexNet and the unbounded FMNN achieves the highest accuracy of 90%.
transfer learning, fuzzy min max neural network (FMMN), pre-trained models, cytology, image classification, convolutional neural networks
Cervical cancer is the most prevalent malignancy among Indian women. The disease has a long latency period of almost 10-15 years, progressing slowly from mild dysplasia to cervical cancer. During this period it can therefore be detected early and prevented. The disease is highly prevalent, especially in rural areas where the majority of women are socio-economically weak and illiterate, and where other risk factors exist, such as unhygienic conditions, multiple pregnancies, early marriage, and lack of medical facilities. In addition to awareness camps that discuss different cervical cancer prevention strategies, the availability of cervical cancer screening in rural hospitals with limited resources is crucial. Automated cervical cancer diagnosis will be useful in these circumstances.
Several screening techniques are available to diagnose cervical cancer at an early stage [1]. Visualization of the Pap smear test and the colposcopy test is shown in Figure 1. All these techniques need the involvement of doctors and, in some cases, pathology laboratories, which are not easily available in most rural parts of India. Intelligent screening systems can be helpful in such situations. Such systems can act as surrogates for expert doctors and can also guide novice doctors by providing a second opinion. Many efforts have been made in this direction; still, the development of cost-effective and accurate systems remains under research and development.
Figure 1. (a) Pap smear and (b) Colposcopy test for cervical cancer diagnosis
Many researchers have investigated the potential of machine learning and deep learning techniques for developing computer-aided diagnosis (CAD) systems to classify Pap smear images. Most of these techniques focus on segmentation of the nucleus and cytoplasm to classify the image into cancerous and non-cancerous categories [2]. However, the cells in the images overlap heavily and cannot be distinctly separated from their background and other image contents, because many artifacts are introduced during slide preparation. Segmentation is followed by handcrafted feature extraction that focuses on the morphological structure of the image [3]. Feature extraction is followed by feature selection and classification by a machine learning classifier [4, 5]. This is an extensive, multistep process that requires unambiguous and clear input images. In contrast, deep learning models can classify these images automatically and accurately [6, 7]. However, deep learning techniques are data-hungry methods that require tremendous amounts of data to train a model [8]. The transfer learning approach helps solve this problem to some extent by using pre-trained models such as AlexNet, GoogleNet, and ResNet [9]. These models are already trained on very large datasets, and some of their feature extraction layers can be used directly for any image classification problem [10]. The extracted features are robust because of the large amount of training data and can be given directly to a suitable classifier.
Many classifiers exist in the literature, but the FMMN proposed by Simpson [11] has many desirable properties. Since then, many researchers have modified the original FMMN to address its drawbacks and have proposed new variants. One of the major drawbacks of all FMMN variants is their sensitivity to the maximum hyperbox size θ. This drawback is removed in the unbounded recurrent FMNN (URFMN) proposed by Waghmare and Kulkarni [12]. In URFMN, several other modifications make it suitable for online learning, a very useful feature of any classifier, thereby removing the need for retraining.
This paper proposes a hybrid system consisting of two parts: feature extraction by pretrained models followed by classification by URFMN. It thus combines the advantages of pretrained models for accurate feature extraction with the efficiency of URFMN for classification. The proposed system makes the following contributions:
1) The proposed system is the first hybrid model that combines deep learning models and FMMN for biomedical image classification.
2) The proposed system focuses on a novel application of Pap smear pathology image-based cervical cancer diagnosis with a fuzzy min-max neural network.
3) Only two benchmark datasets of Pap smear images are available online, and both are used for experimentation in this paper.
4) Instead of extracting handcrafted features and inputting them to the FMMN, the proposed system extracts deep image features using pretrained models and feeds them to the FMMN.
5) The proposed system experiments with different variants of FMMN, including the basic FMMN, EFMMN, and URFMN.
6) Feature extraction using different pre-trained models and classification using different FMMN classifiers are experimented with, and their performance is compared.
7) Features extracted by the proposed system are also passed to different machine learning classifiers, and their performance is evaluated.
8) As the FMMN can learn in a single pass over the data, it is computationally efficient compared to fully connected networks.
9) URFMN is independent of the value of θ, which makes it a more efficient and non-parametric classifier.
The FMNN is a neuro-fuzzy pattern classification algorithm that divides the whole n-dimensional pattern space into sets of hyperbox regions, where each set of hyperboxes corresponds to one class in the original dataset (a minimal sketch of the hyperbox representation is given after the list below). The basic structure of all fuzzy min-max neural networks considers the following desirable properties of any classifier:
1) Ability to learn online
2) Learning Non-linear classification boundaries
3) Dealing with overlapping classes
4) Support for both hard and soft decisions
5) Nonparametric classification
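To make the hyperbox idea concrete, the following is a minimal Python sketch (an illustration, not code from the paper) of a hyperbox with a min point V, a max point W, a class label, and the standard FMNN-style expansion that stretches the box to cover a new pattern.

```python
# Illustrative sketch only: a hyperbox as used by FMNN-family classifiers.
import numpy as np
from dataclasses import dataclass

@dataclass
class Hyperbox:
    V: np.ndarray   # min point of the hyperbox
    W: np.ndarray   # max point of the hyperbox
    label: int      # class this hyperbox represents

    def expand(self, x: np.ndarray) -> None:
        """Standard FMNN-style expansion: stretch the box just enough to cover x."""
        self.V = np.minimum(self.V, x)
        self.W = np.maximum(self.W, x)

# Example: a class-0 box grown around two 2-D patterns in [0, 1]^2.
box = Hyperbox(V=np.array([0.2, 0.3]), W=np.array([0.2, 0.3]), label=0)
box.expand(np.array([0.4, 0.1]))   # V -> [0.2, 0.1], W -> [0.4, 0.3]
```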
This algorithm was originally proposed by Patrick Simpson in 1992; since then, it has been continuously enhanced by many researchers to remove its weaknesses and extend it further. One of the major weaknesses of the fuzzy min-max network is the contraction step in the learning algorithm. Some researchers have replaced the contraction step with a modified architecture, while others have retained this step and proposed new additions. We therefore categorize the existing literature into two broad categories, with and without contraction. Table 1 lists the FMMN variants with contraction [11-21] and Table 2 lists those without contraction [22-26]. Table 3 lists FMMN research publications categorized by broad application area [27-79].
Table 1. FMMN variants with contraction
FMMN with Contraction | Year | Task | Description
FMMN [11] | 1992 | Classification | First fuzzy min-max neural network; proposed for the classification task; introduces the hyperbox concept
GFMNN [13] | 2000 | Classification and clustering | Hybrid model that handles both supervised and unsupervised learning; input is in terms of hyperboxes
Weighted FMNN (WFMNN) [14] | 2004 | Classification | A weight in terms of feature frequency is added; useful for feature extraction
Modified FMNN (MFNN) [15] | 2008 | Classification | Uses the Euclidean distance concept; pruning is used to reduce complexity
FMM-GA [16] | 2010 | Pattern classification and rule extraction | Pruning is used; a GA is used for rule extraction; a don't-care condition makes fewer features appear in the rules
Adaptive FMMN (AFMMN) [17] | 2012 | Classification | PCA is used for pre-processing; an adaptive GA is used for parameter optimization
Enhanced FMMN (EFMMN) [18] | 2015 | Classification | Decreases the rate of overlap during growth by defining a new set of hyperbox expansion rules; expands the overlap test rules of FMM to identify new overlapping situations and defines new contraction rules to address them; some researchers added a pruning approach to remove less effective hyperboxes for further improvement
KNN-based EFMNN (KNEFMNN) [19] | 2017 | Classification | The k-nearest expansion rule prevents an excessive number of tiny hyperboxes from forming
Semi-supervised FMNN (SS-FMM) [20] | 2020 | Classification | Handles labeled as well as unlabeled data; dynamic hyperbox pruning and relabeling with a staged feedback process
Refined FMMN [21] | 2019 | Classification | A new expansion process and overlap test are defined to remove existing overlap leniency and irregularity; prediction is based on the membership value and a distance-based metric
Table 2. FMMN variants without contraction
FMMN without Contraction | Year | Task | Description
Inclusion/exclusion fuzzy hyperbox classifier [22] | 2004 | Classification | Two new kinds of hyperboxes, inclusion and exclusion: inclusion hyperboxes for patterns of the same class and exclusion hyperboxes to represent the overlap region; each class is represented by subtracting the exclusion hyperboxes from the union of inclusion hyperboxes; a problem occurs when exclusion hyperboxes are larger than inclusion hyperboxes
Fuzzy min-max neural network with compensatory neurons (FMCN) [23] | 2007 | Classification | Three kinds of neurons: classifying neuron (CLN), overlap compensation neuron (OCN), and containment compensation neuron (CCN); two different activation functions for the OCN and CCN are defined; problems are the complexity of the network and incorrect decisions in the overlap region
GRFMN [24] | 2007 | Classification and clustering | Handles the overlap problem of FMCN; performs both classification and clustering
Data-core-based FMM neural network (DCFMN) [25] | 2011 | Classification | Classifying and overlapping neurons are the two types of neurons
Multi-level FMM neural network [26] | 2014 | Classification | Improves accuracy in the overlap region; each node is an independent classifier with two types of nodes, hyperbox segment and overlap hyperbox segment; high accuracy and low sensitivity to the expansion parameter
Table 3. Application wise distribution of FMMN research publications
Industrial: Power generation & cooling [27-29]; cooling systems [30]; robotics motion [31]; fault detection and diagnosis of induction motors [32-34]; electrical motors [35]; vehicle suspension systems [36]; water leakage detection [13]; oil leakage detection [11]; cellular manufacturing [37]; business intelligence [38]
Healthcare: Heart disease diagnosis [39]; acute coronary syndrome [18, 40]; acute stroke patient diagnosis [41]; cervical cancer diagnosis [42]; lung disease detection [43-45]; brain tumor detection [46]; patient admission prediction [47]; classification of medical data such as diabetes and mammographic mass data [48, 49]; gene expression data [50]; fall detection systems [51, 52]; glaucoma image classification [53]; liver disease diagnosis [liver 2018]
Biometrics and Security: Face detection [54, 55]; emotion recognition [56]; human action recognition [57]; iris recognition [58]; object recognition [59-61]; signature recognition [62, 63]; speech recognition [64]; speaker identification [65]; user authentication [66]; intrusion detection [67-69]; attack intention [70]; software reliability prediction [71]
Character Recognition: Chinese handwritten characters [72]; printed English characters [73]; printed Persian numerals [74]
Image Processing: Image retrieval [75]; shadow detection and removal [76]; image segmentation [77]; color image segmentation [78, 79]
In the studies by Ye et al. [80] and Wang et al. [81], a rule-based approach utilizing a fuzzy min-max neural network is proposed for the diagnosis of brain glioma. In the study by Kumar et al. [82], breast cancer diagnosis based on histopathological images is performed using fuzzy min-max neural networks.
In some studies [82-85], a feature extractor based on the Gray Level Co-occurrence Matrix (GLCM) is used on histopathological images of breast cancer. Subsequently, the Fuzzy Min-Max Neural Network (FMMN), Enhanced Fuzzy Min-Max Neural Network (EFMNN), and K-nearest Hyperbox Selection Rule (Kh-FMNN) are applied for classification, respectively.
In the study by Chinnasamy and Shashikumar [86], segmentation using fuzzy c-means clustering followed by statistical and semantic feature extraction is performed on breast mammogram images. Only a single paper [42] is found in the literature in which cervical cancer classification based on fuzzy min-max is performed. In that paper, cervical cell segmentation using adaptive fuzzy moving k-means is followed by handcrafted feature extraction; in the second stage, feature extraction and then classification using an FMNN with a genetic algorithm are performed. However, all of these methods use handcrafted features extracted from the images. Such features need to be optimized using different approaches, as given in some studies [87-89].
In the diagnostic study presented by Holmström et al. [90], 740 Papanicolaou test results from women in a rural clinic in Kenya were digitized and analysed using a deep learning algorithm. The algorithm demonstrated high sensitivity (96%-100%) in detecting atypical samples. It showed greater specificity for high-grade lesions (93%-99%) compared to low-grade lesions (82%-86%). Importantly, the algorithm did not misclassify any slides manually identified as high grade as negative.
In these studies, handcrafted GLCM features are extracted from histopathological images of breast cancer, and FMMN variants are used for classification. Compared with handcrafted features, deep learning features are more effective. No existing work combines deep learning features with the FMMN classifier. The proposed methodology therefore focuses on deep feature extraction followed by an advanced FMMN.
In summary, previous studies, such as those on the Adaptive Fuzzy Min-Max Neural Network (AFMMN) and the Enhanced Fuzzy Min-Max Neural Network (EFMMN), improved classification accuracy in medical diagnosis. This study introduces a new model, the Cervi-Unbounded Recurrent Fuzzy Min-Max Neural Network (C-URFMN), for cervical cancer diagnosis. The URFMN solves issues in earlier models by allowing unbounded hyperbox expansion, which improves scalability and pattern recognition in complex data such as cervical cancer images. The study builds on previous work by refining fuzzy hyperbox structures, making it a valuable extension of past research.
Given an image dataset, there are mainly two workflows for developing an intelligent model:
1) Machine learning-based approach: extract handcrafted features from the image dataset and train a suitable machine learning algorithm on these features.
2) Deep learning-based approach: (a) train a deep convolutional network end-to-end on the image dataset, or (b) extract deep features with a pre-trained model and train a separate classifier on them.
In this paper, approach 2(b) is used for the classification of cervical cancer images. Features of Pap smear images are extracted using a pre-trained model called AlexNet, and these features are given as input to different variants of the fuzzy min-max neural network.
The proposed method is shown in Figure 2. The input is a dataset of Pap smear images. These input images are augmented and passed to fine-tuned pre-trained models for feature extraction. The extracted features are normalized to convert them into an acceptable form for the FMMN. The FMMN used here for classification is the URFMN, chosen for its insensitivity to the maximum hyperbox size parameter θ and its improved accuracy. All other FMMN variants need to be fine-tuned to the value of θ for which training accuracy is highest. The URFMN classifies the input test image into either the normal or the abnormal class. After testing, performance is evaluated in terms of different evaluation parameters. These steps of the proposed methodology are explained in detail in the following subsections.
Figure 2. C-URFMN architecture
3.1 Fine-tuning of pretrained models for feature extraction
Transfer learning with pretrained models is used here for feature extraction, because pretrained models have proven to be more accurate for feature extraction and image classification tasks, particularly for medical images [9, 10]. As pretrained models are already trained on larger datasets, more useful features can be extracted from them. This mitigates the problem of limited data availability [91].
In this paper, initial experiments are performed using four pretrained models, namely AlexNet, ResNet-50, ResNet-18, and GoogleNet. The features extracted by all four models are given to the classifiers. For both datasets, AlexNet and ResNet-50 give higher classification accuracy, so further experiments are performed on the feature matrices of AlexNet and ResNet-50.
These pretrained models are fine-tuned by freezing the earlier layers, and the later layers are used to extract higher-level abstract features. Figure 3 shows the feature extraction process used.
Figure 3. Feature extraction and classification of input image dataset
3.2 Feature normalization
All variants of FMMN, including URFMN, accept input data in the range of 0 to 1, as the maximum hyperbox size is 1 in each dimension of the pattern space. So, the features extracted from the pre-trained models are normalized to the range of 0 to 1 using the min-max normalization given by Eq. (1).
x_{\text{new}}=\frac{x-x_{\min}}{x_{\max}-x_{\min}}\left(\text{new}_{\max}-\text{new}_{\min}\right)+\text{new}_{\min} (1)
where, \text{new}_{\max}=1 and \text{new}_{\min}=0; x_{\min} and x_{\max} are the old minimum and maximum values of x, and x_{\text{new}} is the new converted value of x.
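A short Python sketch of the min-max normalization of Eq. (1) is given below; the column-wise scaling and the guard for constant features are implementation choices, not details taken from the paper.

```python
import numpy as np

def min_max_normalize(features, new_min=0.0, new_max=1.0):
    """Scale each column of a (samples x features) matrix into [new_min, new_max], as in Eq. (1)."""
    x_min = features.min(axis=0)
    x_max = features.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)   # avoid division by zero for constant columns
    return (features - x_min) / span * (new_max - new_min) + new_min

X = np.array([[10.0, 200.0], [20.0, 400.0], [15.0, 300.0]])
print(min_max_normalize(X))   # every column now lies in [0, 1]
```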
3.3 Unbounded recurrent FMNN
The architecture of the URFMN is shown in Figure 3. The URFMN defines three kinds of node structures: fuzzy set hyperbox nodes, discrete hyperbox nodes, and nested fuzzy set hyperbox nodes.
Fuzzy set hyperbox node: These are the nodes created during the offline learning phase.
The membership function for this category of nodes is given by Eq. (2):
b_j\left(X_h, V_j, W_j\right)=\min _{i=1, \ldots, n}\left(\min \left(\left[1-f\left(x_{h i}-w_{j i}, \gamma\right)\right],\left[1-f\left(v_{j i}-x_{h i}, \gamma\right)\right]\right)\right) (2)
where, V_j and W_j are the min and max points of the j-th hyperbox B_j, and \gamma is the sensitivity parameter. In this equation, f(r, \gamma) is the ramp threshold function defined by Eq. (3).
f(r, \gamma)=\left\{\begin{array}{c}1 \text { if } r \gamma>1 \\ r \gamma \text { if } 0 \leq r \gamma \leq 1 \\ 0 \text { if } r \gamma<0\end{array}\right. (3)
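The following Python sketch illustrates Eqs. (2)-(3): per-dimension ramp penalties for lying above the max point or below the min point, combined by a minimum over dimensions (the value γ = 4 is only an example).

```python
import numpy as np

def ramp(r, gamma):
    """Ramp threshold function f(r, gamma) of Eq. (3)."""
    return np.clip(r * gamma, 0.0, 1.0)

def hyperbox_membership(x, V, W, gamma=4.0):
    """Membership b_j(X_h, V_j, W_j) of pattern x in hyperbox [V, W], Eq. (2)."""
    above = 1.0 - ramp(x - W, gamma)   # penalty for exceeding the max point
    below = 1.0 - ramp(V - x, gamma)   # penalty for falling under the min point
    return float(np.min(np.minimum(above, below)))

V, W = np.array([0.2, 0.3]), np.array([0.5, 0.6])
print(hyperbox_membership(np.array([0.3, 0.4]), V, W))   # 1.0: the pattern lies inside the box
print(hyperbox_membership(np.array([0.7, 0.4]), V, W))   # 0.2: penalized along dimension 0
```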
Discrete hyperbox node: These nodes are created during online training when an input pattern falls inside a hyperbox of another class. The membership function for this category of node is given by Eq. (4).
b_j\left(X_h\right)=\left\{\begin{array}{lr}1 & \text { if } X_h=V_j=W_j \\ 0 & \text { Otherwise }\end{array}\right. (4)
The output of a discrete hyperbox node is always binary.
Nested fuzzy set hyperbox node: During online training, whenever a discrete hyperbox node falls inside a fuzzy set hyperbox node, the fuzzy set hyperbox node is converted into a nested fuzzy set hyperbox node. The input given to the nested fuzzy set neuron is the input pattern along with the outputs of all discrete neurons. If the output of a discrete neuron is 0, the fuzzy set hyperbox membership function given in Eq. (2) is used; if it is 1, the following membership function, Eq. (5), is used.
b_j\left(X_h, V_k, W_k, C_j\right)=1-f(l, \varphi) (5)
where, C_j is the centroid of the j-th nested hyperbox and \varphi is the kindness parameter,
f(l, \varphi)=\left\{\begin{array}{cl}0 & \text { if } l=0 \\ \varphi l & \text { if } l>0\end{array}\right.
and
l=\left[\sum_{i=1}^n\left(c_{j i}-v_{k i}\right)^2\right]^{1 / 2}, \quad c_{j i}=\frac{v_{j i}+w_{j i}}{2}, \quad \forall i=1,2, \ldots, n.
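A small sketch of the nested membership of Eq. (5), as written above, follows; note that no clamping of the result is applied here because the text does not specify one, and φ = 1 is an arbitrary example value.

```python
import numpy as np

def nested_membership(V_j, W_j, V_k, phi=1.0):
    """b_j = 1 - f(l, phi), with l the distance between the centroid of box j and the min point of box k."""
    c_j = (V_j + W_j) / 2.0              # centroid c_ji = (v_ji + w_ji) / 2
    l = np.linalg.norm(c_j - V_k)        # Euclidean distance of Eq. (5)
    f = 0.0 if l == 0 else phi * l       # kindness-scaled distance
    return 1.0 - f

print(nested_membership(np.array([0.2, 0.2]), np.array([0.4, 0.4]),
                        np.array([0.3, 0.3])))   # 1.0: centroid coincides with V_k
```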
The learning of URFMN has two phases: offline and online. During offline learning, a static dataset is used and fuzzy set hyperboxes are created. Offline learning takes place in three steps: expansion, inclusion, and exclusion, as defined by Bargiela et al. [22]. Online learning takes the network of fuzzy set hyperboxes formed during offline learning and handles each input either for prediction or for adaptation. Online training takes place in three steps: expansion, overlap test, and hyperbox contraction, as defined by Waghmare and Kulkarni [12].
The hyperbox layer has the three kinds of nodes defined above. During offline training, fuzzy set hyperbox nodes are created as in the original FMMN, and during online training the other two categories of nodes, discrete and nested, are created by transforming the architecture into a recurrent neural network. The weights of the feedback links from discrete to nested nodes are binary and are represented by matrix Z, given by Eq. (6):
z_{i j}=\left\{\begin{array}{lr}1 & \text { if } B_j \text { is contained in } B_i \\ 0 & \text { Otherwise }\end{array}\right. (6)
where, B_j is the j-th discrete hyperbox and B_i is the i-th nested fuzzy set hyperbox.
The weights of the links from the second to the third layer are represented by matrix U, given by Eq. (7).
u_{j k}=\left\{\begin{array}{lr}1 & \text { if } B_j \text { is a hyperbox of class } c_k \\ 0 & \text { Otherwise }\end{array}\right. (7)
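The sketch below shows how the output layer defined by Eq. (7) can be read: with U marking which hyperbox belongs to which class, each class score is the maximum membership over that class's hyperboxes and the predicted label is the argmax (an illustrative reading, not the paper's code).

```python
import numpy as np

def classify(memberships, U):
    """memberships: (num_hyperboxes,) vector; U: (num_hyperboxes, num_classes) 0/1 matrix of Eq. (7)."""
    class_scores = np.max(memberships[:, None] * U, axis=0)   # max membership per class
    return int(np.argmax(class_scores)), class_scores

b = np.array([0.9, 0.4, 0.7])              # memberships of three hyperboxes
U = np.array([[1, 0], [1, 0], [0, 1]])     # boxes 0 and 1 belong to class 0, box 2 to class 1
print(classify(b, U))                       # (0, array([0.9, 0.7]))
```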
This section describes the cervical cancer image datasets, followed by the results.
4.1 Pap smear image datasets details
The following two pap-smear image benchmark datasets of cervical cancer are available on the internet and are used for experimentation in some studies [87, 88].
Almost all research papers on Pap smear-based cervical cancer diagnosis available in the literature have used these datasets. Each dataset is described in the following subsections.
The Herlev and SIPaKMeD Pap smear datasets are standard benchmarks for training deep learning models to detect cervical cancer. The Herlev dataset contains 917 cell images categorized into seven types, while the SIPaKMeD dataset contains 4,049 images categorized into five classes. Both datasets offer high-resolution, manually segmented images, reducing overfitting and ensuring consistency, making them suitable for detecting different stages of cervical abnormalities. The constraints of both datasets include their emphasis on individual cells, possible demographic and source biases, and rather limited sample sizes.
The Herlev dataset has 917 images in seven classes, which are broadly grouped into two classes, normal and abnormal; details are given in Table 4.
Table 4. Herlev dataset details
Seven Classes | No. of Images | Two Classes | Total Images
Superficial Squamous Epithelial | 74 | Normal Class | 242
Intermediate Squamous Epithelial | 70 | Normal Class |
Columnar Epithelial | 98 | Normal Class |
Mild Dysplasia | 182 | Abnormal Class | 675
Moderate Dysplasia | 146 | Abnormal Class |
Severe Dysplasia | 197 | Abnormal Class |
Carcinoma In Situ | 150 | Abnormal Class |
Total | 917 | |
Figure 4 shows sample images from the seven classes of the Herlev dataset.
Figure 4. Sample images of Herlev dataset
The Sipakmed dataset has 4049 images in five classes, which are broadly grouped into normal and abnormal categories. The five classes are superficial-intermediate, parabasal, koilocytotic, metaplastic, and dyskeratotic, as shown in Table 5. Figure 5 shows sample images from the Sipakmed dataset.
Table 5. Sipakmed dataset details
Five Classes | No. of Images | Three Classes | Total Images
Superficial | 831 | Normal Class | 1618
Parabasal | 787 | Normal Class |
Koilocytotic | 825 | Abnormal Class | 1638
Dyskeratotic | 813 | Abnormal Class |
Metaplastic | 793 | Benign | 793
Total | 4049 | |
Figure 5. Sample images of Sipakmed dataset
6.1 Implementation details
Implementation and coding of the pretrained models and URFMN are done in MATLAB. For training the machine learning algorithms, the Weka tool [89] is used; Weka is a free tool in which most machine learning algorithms are implemented.
Algorithm: Proposed Feature Extraction and Classification Model
Input: Pap smear images of cervical cancer from the Herlev and Sipakmed datasets
Output: Class of the input image (normal or abnormal)
begin
1. Augment the input Pap smear images.
2. Extract deep features from each image using the fine-tuned pretrained model (AlexNet or ResNet-50).
3. Normalize the extracted features to the range [0, 1] using Eq. (1).
4. Train the URFMN classifier on the normalized training features.
5. Classify each test image as normal or abnormal using the trained URFMN and evaluate the performance.
end
Image augmentation enhances training data diversity, improving model robustness and reducing overfitting. For the datasets used, the augmentations performed are rotation (-90 to 90 degrees), scaling (0.5 to 0.9), and shearing (-2 to 2 degrees) to introduce variation in image orientation and size. Zooming (15%) alters object proximity, while horizontal and vertical flips mirror the images for better orientation recognition. Gaussian blur (sigma = 1.22) is applied to simulate out-of-focus conditions, and adjustments to hue and saturation (0.5 to 1.5) create color variability.
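A minimal torchvision sketch of these augmentation settings is shown below; the paper's pipeline is implemented in MATLAB, so the mapping of ranges to torchvision parameters is approximate and the hue jitter is omitted because torchvision restricts its range.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(degrees=90),                           # rotation sampled from [-90, 90] degrees
    T.RandomAffine(degrees=0, scale=(0.5, 0.9), shear=2),   # scaling 0.5-0.9, shear in [-2, 2] degrees
    T.RandomHorizontalFlip(p=0.5),                          # mirror horizontally
    T.RandomVerticalFlip(p=0.5),                            # mirror vertically
    T.GaussianBlur(kernel_size=5, sigma=1.22),              # simulate out-of-focus slides
    T.ColorJitter(saturation=(0.5, 1.5)),                   # color variability (zoom handled via scale above)
])
```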
Image preprocessing resizes the input to match the dimensions required by the network: 227×227 for AlexNet and 224×224 for ResNet50. When choosing layers for feature extraction, the fully connected layer 'fc7' in AlexNet is used for high-level features, and in ResNet50 the 'avg_pool' layer is used to extract global features from the image.
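The sketch below uses torchvision models as stand-ins for the MATLAB pretrained networks: the AlexNet classifier is truncated before its final layer to obtain 4096-dimensional activations analogous to 'fc7', and ResNet-50 is cut after global average pooling (which yields 2048-dimensional features in this implementation).

```python
import torch
import torchvision.models as models

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
resnet50 = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
resnet_body = torch.nn.Sequential(*list(resnet50.children())[:-1])   # drop the final fc layer

def alexnet_fc7_features(x):
    """4096-d activations analogous to AlexNet's 'fc7' layer."""
    with torch.no_grad():
        f = alexnet.avgpool(alexnet.features(x)).flatten(1)
        return alexnet.classifier[:6](f)            # stop before the final 1000-way layer

def resnet50_pool_features(x):
    """Globally pooled features analogous to ResNet-50's 'avg_pool' layer."""
    with torch.no_grad():
        return resnet_body(x).flatten(1)

x = torch.rand(1, 3, 227, 227)                      # AlexNet input size used in the paper
print(alexnet_fc7_features(x).shape)                # torch.Size([1, 4096])
```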
6.2 Experimentation results
The spatial features of the Pap smear images are extracted using two pre-trained CNNs, AlexNet and ResNet-50. Table 6 shows the number of features extracted by each pre-trained model. The efficiency of the proposed method for the classification of cervical cancer Pap smear images is evaluated using the two datasets: Section A describes the performance evaluation on the Herlev dataset, and Section B describes the results obtained on the Sipakmed dataset. The different combinations of features and classification models used in this work are given below:
Table 6. Number of features
Dataset | No. of Features (AlexNet) | No. of Features (ResNet-50)
Herlev | 4096 | 1000
Sipakmed | 4096 | 1000
1) Pretrained CNNs for feature extraction and proposed C-URFMN model for classification
2) Pretrained CNNs for feature extraction and fuzzy min-max neural network for classification.
3) Pretrained CNNs for feature extraction and machine learning classifiers for classification
All the implementations are performed in a Google Colab notebook with Python, and the benchmark datasets are used for experimentation (a sketch of combination 3 is given below).
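As an illustration of combination 3, the following scikit-learn sketch (an assumed stand-in for the Weka classifiers actually used) trains two classical classifiers on placeholder feature matrices shaped like the normalized AlexNet features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(200, 4096)              # placeholder for normalized deep features in [0, 1]
y = np.random.randint(0, 2, size=200)      # placeholder labels: 0 = normal, 1 = abnormal
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```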
A. Result Analysis on Herlev Dataset
The accuracy obtained by the proposed C-URFMN classification model when AlexNet features are used on the Herlev dataset is 88.32%; the other performance parameters, recall, precision, and F1-score, used to evaluate the proposed model are shown in Table 7. With ResNet-50 features, the classification accuracy obtained is 88.77%. Comparing the two deep learning pre-trained models, ResNet-50 gives better accuracy than AlexNet.
Table 7. Classification accuracy of Alexnet and ResNet model with proposed C-URFMN on Herlev dataset
Parameter | AlexNet Model (Herlev) | ResNet50 Model (Herlev)
Total Testing | 276*4096 | 276*1001
No of Correctly Classified Patterns | 241 | 245
No of Misclassified Patterns | 35 | 31
Accuracy | 241 | 245
PA | 88.32 | 88.77
HBCount | 4 | 5
Recall | 0.8261 | 0.8140
Precision | 0.8416 | 0.8881
F1-score | 0.8338 | 0.8494
Table 8 and Table 9 show the detailed results of the fuzzy min-max neural network (FMMN), with the expansion parameter varied from 0 to 1. AlexNet features with FMMN give the highest accuracy of 89.13%, with an expansion parameter value of θ = 0.5. ResNet-50 features give the highest accuracy of 88.40%, with a θ value of 0.6. Comparing the two models with FMMN, AlexNet features yield higher accuracy than ResNet-50 features.
Table 8. AlexNet pre-trained model performance evaluation with FMNN on Herlev dataset
Theta (θ) | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
Accuracy | 87.32 | 80.07 | 41.67 | 59.42 | 89.13 | 89.13 | 87.31 | 79.71 | 76.08 | 42.75 | 36.23
Precision | 83.30 | 75.24 | 63.26 | 62.51 | 90.30 | 90.30 | 87.01 | 74.53 | 72.11 | 64.65 | 64.65
F1 Score | 84.26 | 77.28 | 61.30 | 64.13 | 85.51 | 85.51 | 82.87 | 76.17 | 74.55 | 62.59 | 60.39
Recall | 85.23 | 79.43 | 59.47 | 65.83 | 81.20 | 81.20 | 79.09 | 77.87 | 77.16 | 60.64 | 56.65
Table 9. Results of ResNet-50 with FMNN on Herlev dataset
Theta (θ) | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
Accuracy | 88.76 | 77.17 | 77.17 | 67.75 | 87.32 | 87.32 | 88.40 | 88.04 | 80.79 | 82.24 | 85.86
Precision | 0.85 | 0.72 | 0.73 | 0.68 | 0.89 | 0.89 | 0.89 | 0.87 | 0.76 | 0.77 | 0.82
F1 Score | 0.86 | 0.72 | 0.75 | 0.71 | 0.83 | 0.83 | 0.84 | 0.84 | 0.78 | 0.78 | 0.81
Recall | 0.87 | 0.77 | 0.78 | 0.73 | 0.78 | 0.78 | 0.80 | 0.81 | 0.80 | 0.80 | 0.80
Table 10 shows the comparison of experimental results of the machine learning classifiers, FMMN, and C-URFMN when AlexNet is used as the feature extractor on the Herlev dataset. The analysis shows that FMMN gives the best accuracy of 89.13% among these classification models.
Table 10. Results of AlexNet with ML classifiers, FMNN, and C-URFMN Herlev dataset
Classifier | BayesNet | Naive Bayes | Random Forest | Random Tree | Decision Table | Part | FMMN | URFMN
Testing Accuracy (%) | 83.33 | 82.24 | 87.68 | 81.8 | 88.04 | 86.59 | 89.13 | 88.32
Similarly, with the ResNet-50 pre-trained model, comparing the neural network classifiers, FMMN achieves a classification accuracy of 88.40% and C-URFMN achieves 88.77%. Among the different machine learning classifiers, the highest classification accuracy is 89.13%, obtained by the Naïve Bayes classifier. Table 11 shows the results of the ResNet-50 pre-trained model on the Herlev dataset.
Table 11. Results of ResNet-50 with ML classifiers, FMNN, and C-URFMN on Herlev dataset
Classifier | BayesNet | Naive Bayes | Random Forest | Random Tree | Decision Table | Part | FMMN | URFMN
Testing Accuracy (%) | 88.04 | 89.13 | 88.04 | 78.62 | 86.23 | 81.88 | 88.40 | 88.77
In all the following comparison tables, green color represents the highest accuracy and orange color represents the next to highest accuracy.
B. Result Analysis on Sipakmed Dataset
Experimentation results obtained on the Sipakmed dataset are discussed next. The results of the proposed C-URFMN on the Sipakmed dataset are shown in Table 12. AlexNet features give 92.54% accuracy, whereas ResNet-50 features give 88.85% accuracy.
Table 12. Classification accuracy of AlexNet and ResNet model with proposed C-URFMN on Sipakmed dataset
Parameter | AlexNet Model (Sipakmed) | ResNet50 Model (Sipakmed)
Total Testing | 1220*4097 | 1220*1000
No of Correctly Classified Patterns | 1129 | 1048
No of Misclassified Patterns | 91 | 136
Accuracy | 1129 | 1048
PA | 92.54 | 88.85
HBCount | 118 | 125
Recall | 0.9175 | 0.8792
Precision | 0.9264 | 0.8869
F1-score | 0.9219 | 0.8830
Table 13 shows the classification accuracy obtained by varying the expansion parameter in FMMN. On the Sipakmed dataset, the highest classification accuracy when AlexNet features are used is 91.96%, with an expansion value of 0.6.
Table 13. Results of AlexNet with FMMN on Sipakmed dataset
Theta (θ) | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
Accuracy | 91.80 | 88.52 | 71.96 | 89.34 | 91.72 | 91.72 | 91.96 | 91.80 | 91.47 | 88.93 | 81.55
Precision | 92.09 | 88.06 | 71.19 | 89.46 | 91.35 | 91.23 | 91.48 | 91.57 | 91.13 | 90.29 | 86.77
F1 Score | 91.94 | 88.01 | 71.54 | 88.82 | 91.36 | 91.41 | 91.67 | 91.42 | 91.10 | 88.57 | 81.70
Recall | 91.79 | 87.96 | 71.90 | 88.18 | 91.38 | 91.59 | 91.86 | 91.27 | 91.07 | 86.91 | 77.19
Table 14 shows the performance of FMMN when ResNet-50 features are used for classification. The highest classification accuracy obtained is 91.32%, with a θ value of 0. With θ = 0, the number of hyperboxes in FMMN equals the number of input samples, and FMMN behaves like the K-nearest neighbor (K-NN) algorithm. Consequently, the computational complexity of FMMN becomes higher because it must process this large number of hyperboxes.
Table 15 displays the results of AlexNet features with the fuzzy min-max neural network, machine learning classifiers, and the proposed C-URFMN on the Sipakmed dataset, whereas Table 16 displays the corresponding results for ResNet-50 features on the same dataset.
Table 14. Results of ResNet-50 with FMNN on Sipakmed dataset
Theta (θ) | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
Accuracy | 91.32 | 87.54 | 82.45 | 79.83 | 84.50 | 85.81 | 88.19 | 85.90 | 83.44 | 84.67 | 63.27
Precision | 91.72 | 87.08 | 82.55 | 79.63 | 83.94 | 85.59 | 87.60 | 85.81 | 84.72 | 83.97 | 81.05
F1 Score | 91.70 | 86.97 | 81.49 | 78.66 | 83.78 | 86.34 | 88.27 | 86.57 | 85.30 | 84.05 | 64.75
Recall | 91.69 | 86.86 | 80.45 | 77.72 | 83.61 | 87.10 | 88.94 | 87.34 | 85.89 | 84.13 | 53.91
Table 15. Results of AlexNet with ML classifiers, FMNN, C-URFMN on SIPaKMeD dataset
Classifier | BayesNet | Naive Bayes | Random Forest | Random Tree | Decision Table | Part | FMMN | URFMN
Testing Accuracy (%) | 91.2 | 91.6 | 91.2 | 90.70 | 93.23 | 89.5 | 91.96 | 92.54
Table 16. Results of ResNet-50 with ML classifiers, FMNN, proposed C-URFMN on SIPaKMeD dataset
Classifier | BayesNet | Naive Bayes | Random Forest | Random Tree | Decision Table | Part | FMMN | URFMN
Testing Accuracy (%) | 89.67 | 88.19 | 89.83 | 81.8 | 84.75 | 90 | 87.54 | 88.85
Table 17 summarizes the classification accuracy obtained on the two datasets with the proposed C-URFMN method. The comparative results of the table are also represented in Figure 6.
Table 17. Classification accuracy of proposed C-URFMN
Dataset | AlexNet | ResNet50
Herlev | 88.32 | 88.77
Sipakmed | 92.54 | 88.85
Figure 6. Classification accuracy of proposed C-URFMN
In summary, it can be clearly seen from the comparative Tables 10, 11, 15 and 16 that different classifiers give the highest accuracy in different cases (highlighted in green), while URFMN consistently gives the second-highest performance, which is very close to the highest (highlighted in orange).
Also, URFMN is the only classifier among all FMNN variants that is independent of the expansion coefficient θ. It is therefore a stable classifier that gives consistently good performance without any θ tuning. All other FMNN variants need to be tuned to reach their best performance, which requires many passes over the data, whereas URFMN can learn in a single pass without tuning the θ parameter.
The proposed computer-aided diagnostic system C-URFMN presented in this paper is an application of deep learning to cancer image analysis. It has two stages: feature extraction and classification. AlexNet and ResNet models are used for feature extraction, and these features are given to various classifiers, including URFMN, FMMN, and the machine learning classifiers BayesNet, Naive Bayes, Random Forest, Random Tree, Decision Table, and Part. The advantages of fuzzy min-max classifiers over machine learning classifiers include online learning, nonlinear boundary learning, hard and soft decisions, and non-parametric classification. The only disadvantage of the fuzzy min-max neural network is its sensitivity to the value of the expansion coefficient θ. URFMN is an FMNN that is not sensitive to the θ value; therefore, in this paper, URFMN receives the extracted features and provides good accuracy. In conclusion, the proposed C-URFMN accepts Pap smear images and classifies them into normal and abnormal categories. In addition to classification accuracy, C-URFMN leverages the additional benefits of FMMN.
The proposed C-URFMN framework can be simplified in terms of its architecture. Additionally, future research could focus on applying the model to multicell image analysis. Since there is a limited number of publicly available datasets, data collection from hospitals is also necessary to advance research in this area. Furthermore, an extended form of Pap smear imaging, known as liquid-based cytology, offers potential for further exploration and development.
[1] Rubin, R. (2019). Artificial intelligence for cervical precancer screening. JAMA, 321(8): 734. https://doi.org/10.1001/jama.2019.0888
[2] Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42: 60-88. https://doi.org/10.1016/j.media.2017.07.005
[3] Jantzen, J., Norup, J., Dounias, G., Bjerregaard, B. (2005). Pap-smear benchmark data for pattern classification. Nature Inspired Smart Information Systems (NiSIS 2005), 1-9.
[4] Liu, X., Liu, F., Li, Y., Shen, H., Lim, E.T., Tan, C.W. (2021). Image Analytics: A consolidation of visual feature extraction methods. Journal of Management Analytics, 8(4): 569-597. https://doi.org/10.1080/23270012.2021.1998801
[5] Pradhan, K., Chawla, P. (2020). Medical internet of things using machine learning algorithms for lung cancer detection. Journal of Management Analytics, 7(4): 591-623. https://doi.org/10.1080/23270012.2020.1811789
[6] Goodfellow, I., Bengio, Y., Courville, A. (2016). Deep Learning. MIT Press. https://www.deeplearningbook.org/.
[7] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M. (2015). Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3): 211-252. https://doi.org/10.1007/s11263-015-0816-y
[8] Landau, M.S., Pantanowitz, L. (2019). Artificial intelligence in cytopathology: A review of the literature and overview of commercial landscape. Journal of the American Society of Cytopathology, 8(4): 230-241. https://doi.org/10.1016/j.jasc.2019.03.003
[9] Shinde, S., Kulkarni, U., Mane, D., Sapkal, A. (2021). Deep learning-based medical image analysis using transfer learning. In Patgiri, R., Biswas, A., & Roy, P. (Eds.), Health Informatics: A Computational Perspective in Healthcare, Studies in Computational Intelligence, Springer, Singapore, 932: 19-42. https://doi.org/10.1007/978-981-15-9735-0_2
[10] Shinde, S.V., Mane, D.T. (2022). Deep learning for COVID-19: COVID-19 Detection based on chest X-ray images by the fusion of deep learning and machine learning techniques. Understanding COVID-19: The Role of Computational Intelligence, Springer, Cham, 963: 471-500. https://doi.org/10.1007/978-3-030-74761-9_21
[11] Simpson, P.K. (1992). Fuzzy min-max neural networks-part 1: Classification. IEEE Transactions on Neural Networks, 3(5): 776-786.
[12] Waghmare, J.M., Kulkarni, U.V. (2019). Unbounded recurrent fuzzy min-max neural network for pattern classification. In 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, IEEE, pp. 1-8. https://doi.org/10.1109/IJCNN.2019.8852310
[13] Gabrys, B., Bargiela, A. (2000). General fuzzy min-max neural network for clustering and classification. IEEE Transactions on Neural Networks, 11(3): 769-783. https://doi.org/10.1109/72.846747
[14] Kim, H.J., Ryu, T.W., Nguyen, T.T., Lim, J.S., Gupta, S. (2004). A weighted fuzzy min-max neural network for pattern classification and feature extraction. In Computational Science and Its Applications-ICCSA 2004: International Conference, Assisi, Italy, pp. 791-798. https://doi.org/10.1007/978-3-540-24768-5_85
[15] Quteishat, A., Lim, C.P. (2008). A modified fuzzy min-max neural network with rule extraction and its application to fault detection and classification. Applied Soft Computing, 8(2): 985-995. https://doi.org/10.1016/j.asoc.2007.07.013
[16] Quteishat, A., Lim, C.P., Tan, K.S. (2010). A modified fuzzy min-max neural network with a genetic-algorithm-based rule extractor for pattern classification. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 40(3): 641-650. https://doi.org/10.1109/TSMCA.2010.2043948
[17] Liu, J., Yu, Z., Ma, D. (2012). An adaptive fuzzy min-max neural network classifier based on principle component analysis and adaptive genetic algorithm. Mathematical Problems in Engineering, 2012(1): 483535. https://doi.org/10.1155/2012/483535
[18] Mohammed, M.F., Lim, C.P. (2014). An enhanced fuzzy min–max neural network for pattern classification. IEEE Transactions on Neural Networks and Learning Systems, 26(3): 417-429. https://doi.org/10.1109/TNNLS.2014.2315214
[19] Song, Y., Huang, J., Zhou, D., Zha, H., Giles, C.L. (2007). Iknn: Informative k-nearest neighbor pattern classification. In European Conference on Principles of Data Mining and Knowledge Discovery, Berlin, pp. 248-264. https://doi.org/10.1007/978-3-540-74976-9_25
[20] Liu, J., Ma, Y., Qu, F., Zang, D. (2020). Semi-supervised fuzzy min-max neural network for data classification. Neural Processing Letters, 51: 1445-1464. https://doi.org/10.1007/s11063-019-10142-5
[21] Sayaydeh, O.N.A., Mohammed, M.F., Alhroob, E., Tao, H., Lim, C.P. (2020). A refined fuzzy min-Max neural network with new learning procedures for pattern classification. IEEE Transactions on Fuzzy Systems, 28(10): 2480-2494. https://doi.org/10.1109/TFUZZ.2019.2939975
[22] Bargiela, A., Pedrycz, W., Tanaka, M. (2004). An inclusion/exclusion fuzzy hyperbox classifier. International Journal of Knowledge-Based and Intelligent Engineering Systems, 8(2): 91-98. https://doi.org/10.3233/KES-2004-8204
[23] Nandedkar, A.V., Biswas, P.K. (2007). A fuzzy min-Max neural network classifier with compensatory neuron architecture. IEEE Transactions on Neural Networks, 18(1): 42-54. https://doi.org/10.1109/TNN.2006.882811
[24] Nandedkar, A.V., Biswas, P.K. (2007). A general reflex fuzzy min-Max neural network. Engineering Letters, 29: 1-11.
[25] Zhang, H., Liu, J., Ma, D., Wang, Z. (2011). Data-core-based fuzzy min-max neural network for pattern classification. IEEE Transactions on Neural Networks, 22(12): 2339-2352. https://doi.org/10.1109/TNN.2011.2175748
[26] Davtalab, R., Dezfoulian, M.H., Mansoorizadeh, M. (2013). Multi-level fuzzy min-max neural network classifier. IEEE Transactions on Neural Networks and Learning Systems, 25(3): 470-482. https://doi.org/10.1109/TNNLS.2013.2275937
[27] Ge, Z., Song, Z., Ding, S.X., Huang, B. (2017). Data mining and analytics in the process industry: The role of machine learning. IEEE Access, 5: 20590-20616. https://doi.org/10.1109/ACCESS.2017.2756872
[28] Wang, Y., Huang, W., Wang, J. (2021). Redefined fuzzy min-max neural network. In 2021 International joint conference on neural networks (IJCNN), pp. 1-8. https://doi.org/10.1109/IJCNN52387.2021.9533765
[29] Chen, K.Y., Lim, C.P., Lai, W.K. (2004). Fault detection and diagnosis using the fuzzy min-max neural network with rule extraction. In Knowledge-Based Intelligent Information and Engineering Systems: 8th International Conference, KES 2004, Wellington, New Zealand, pp. 357-364. https://doi.org/10.1007/978-3-540-30134-9_48
[30] Meneganti, M., Saviello, F.S., Tagliaferri, R. (1998). Fuzzy neural networks for classification and detection of anomalies. IEEE Transactions on Neural Networks, 9(5): 848-861. https://doi.org/10.1109/72.712157
[31] Duan, Y., Cui, B., Xu, X. (2007). State space partition for reinforcement learning based on fuzzy min-max neural network. In International Symposium on Neural Networks, Berlin, pp. 160-169. https://doi.org/10.1007/978-3-540-72393-6_21
[32] Seera, M., Lim, C.P., Ishak, D., Singh, H. (2012). Fault detection and diagnosis of induction motors using motor current signature analysis and a hybrid FMM-CART model. IEEE Transactions on Neural Networks and Learning Systems, 23(1): 97-108. https://doi.org/10.1109/TNNLS.2011.2178443
[33] Seera, M., Lim, C.P., Ishak, D., Singh, H. (2013). Application of the fuzzy min-max neural network to fault detection and diagnosis of induction motors. Neural Computing and Applications, 23: 191-200. https://doi.org/10.1007/s00521-012-1310-x
[34] Hussain, M., Memon, T.D., Hussain, I., AhmedMemon, Z., Kumar, D. (2022). Fault detection and identification using deep learning algorithms in InductionMotors. CMES-Computer Modeling in Engineering & Sciences, 133(2): 435-470. https://doi.org/10.32604/cmes.2022.020583
[35] Singh, H., Abdullah, M.Z., Qutieshat, A. (2011). Detection and classification of electrical supply voltage quality to electrical motors using the fuzzy-min-max neural network. In 2011 IEEE International Electric Machines & Drives Conference (IEMDC), Niagara Falls, ON, Canada, pp. 961-965. https://doi.org/10.1109/IEMDC.2011.5994946
[36] Lv, Y., Wei, X., Guo, S. (2015). Research on fault isolation of rail vehicle suspension system. In The 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, China, pp. 929-934. https://doi.org/10.1109/CCDC.2015.7162052
[37] Dobado, D., Lozano, S., Bueno, J.M., Larraneta, J. (2002). Cell formation using a fuzzy min-max neural network. International Journal of Production Research, 40(1): 93-107. https://doi.org/10.1080/00207540110073064
[38] Susan, S., Khowal, S.K., Kumar, A., Kumar, A., Yadav, A.S. (2013). Fuzzy min-max neural networks for business intelligence. In 2013 International Symposium on Computational and Business Intelligence, New Delhi, India, pp. 115-118. https://doi.org/10.1109/ISCBI.2013.31
[39] Rajakumar, B.R., George, A. (2013). On hybridizing fuzzy min max neural network and firefly algorithm for automated heart disease diagnosis. In 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), Tiruchengode, India, pp. 1-5. https://doi.org/10.1109/ICCCNT.2013.6726611
[40] Al Sayaydeh, O.N., Mohammed, M.F., Lim, C.P. (2018). Survey of fuzzy min–max neural network for pattern classification variants and applications. IEEE Transactions on Fuzzy Systems, 27(4): 635-645. https://doi.org/10.1109/TFUZZ.2018.2865950
[41] Alhroob, E., Mohammed, M.F., Al Sayaydeh, O.N., Hujainah, F., Ab Ghani, N., Lim, C.P. (2024). A flexible enhanced fuzzy min-max neural network for pattern classification. Expert Systems with Applications, 251: 124030. https://doi.org/10.1016/j.eswa.2024.124030
[42] Alnabelsi, S.H. (2013). Cervical cancer diagnostic system using adaptive fuzzy moving k-means algorithm and fuzzy min-max neural network. Journal of Theoretical and Applied Information Technology, 57(1): 48-53.
[43] Darne, K.S., Panicker, S.S. (2013). Use of fuzzy C-mean and fuzzy min-max neural network in lung cancer detection. International Journal of Soft Computing and Engineering (IJSCE), 3(3): 265-269.
[44] Zhai, Z., Shi, D., Cheng, Y., Guo, H. (2014). Computer-aided detection of lung nodules with fuzzy min-max neural network for false positive reduction. In 2014 Sixth International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, pp. 66-69. https://doi.org/10.1109/IHMSC.2014.24
[45] Deshmukh, S., Shinde, S. (2016). Diagnosis of lung cancer using pruned fuzzy min-max neural network. In 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (Icacdot), Pune, India, pp. 398-402. https://doi.org/10.1109/ICACDOT.2016.7877616
[46] Vasuki, M., Geethapriya, N., Brindha, M. (2017). An efficient classification scheme based on common reflex fuzzy min-max neural network for brain tumor detection. International Journal of Advanced Information Science and Technology (IJAIST), 6: 24-31. https://doi.org/10.15693/ijaist/2017.v6i4.24-31
[47] Wang, J., Lim, C. P., Creighton, D., Khorsavi, A., Nahavandi, S., Ugon, J., Vamplew, P., Stranieri, A., Martin, L., Freischmidt, A. (2015). Patient admission prediction using a pruned fuzzy min-max neural network with rule extraction. Neural Computing and Applications, 26: 277-289. https://doi.org/10.1007/s00521-014-1631-z
[48] Rafi, D.M., Bharathi, C.R. (2016). Optimal fuzzy min-max neural network (fmmnn) for medical data classification using modified group search optimizer algorithm. International Journal of Intelligent Engineering and Systems, 9(3): 1-10. https://doi.org/10.22266/ijies2016.0930.01
[49] Kulkarni, A. (2020). Fuzzy neural network for pattern classification. Procedia Computer Science, 167: 2606-2616. https://doi.org/10.1016/j.procs.2020.03.321
[50] Juan, L., Fei, L., Yongqiong, Z. (2007). An improved fmm neural network for classification of gene expression data. In Fuzzy Information and Engineering: Proceedings of the Second International Conference of Fuzzy Information and Engineering (ICFIE). Springer Berlin Heidelberg, pp. 65-74. https://doi.org/10.1007/978-3-540-71441-5_8
[51] Xi, X., Tang, M., Miran, S.M., Luo, Z. (2017). Evaluation of feature extraction and recognition for activity monitoring and fall detection based on wearable sEMG sensors. Sensors, 17(6): 1229. https://doi.org/10.3390/s17061229
[52] Jahanjoo, A., Tahan, M.N., Rashti, M.J. (2017). Accurate fall detection using 3-axis accelerometer sensor and MLF algorithm. In 2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA), Shahrekord, Iran, pp. 90-95. https://doi.org/10.1109/PRIA.2017.7983024
[53] Abirami, S.S., Shoba, S.G. (2013). Glaucoma images classification using fuzzy min-max neural network based on data-core. International Journal of Science and Modern Engineering (IJISME), 1(7): 9-15.
[54] Kim, H.J., Lee, J., Yang, H.S. (2006). A weighted FMM neural network and its application to face detection. In International Conference on Neural Information Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 177-186. https://doi.org/10.1007/11893257_20
[55] Khuat, T.T., Gabrys, B. (2020). A comparative study of general fuzzy min-max neural networks for pattern classification problems. Neurocomputing, 386: 110-125. https://doi.org/10.1016/j.neucom.2019.12.090
[56] Jawarkar, N.P. (2007). Emotion recognition using prosody features and a fuzzy min-max neural classifier. IETE Technical Review, 24(5): 369-373. https://doi.org/10.4103/02564602.10876619
[57] Kim, H.J., Lee, J.S., Yang, H.S. (2007). Human action recognition using a modified convolutional neural network. In International Symposium on Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 715-723. https://doi.org/10.1007/978-3-540-72393-6_85
[58] Lee, C.M., Narayanan, S., Pieraccini, R. (2001). Recognition of negative emotions from the speech signal. In IEEE Workshop on Automatic Speech Recognition and Understanding, Madonna di Campiglio, Italy, pp. 240-243. https://doi.org/10.1109/ASRU.2001.1034632
[59] Pawar, D. (2015). Fuzzy min-max neural network with compensatory neuron architecture for invariant object recognition. In 2015 International Conference on Computer, Communication and Control (IC4), Indore, India, pp. 1-5. https://doi.org/10.1109/IC4.2015.7375660
[60] Tolia, P.A. (2013). Compensatory fuzzy min-Max neural network for object recognition. International Journal of Computer Science and Network, 2(3): 42-47.
[61] Hou, T., Ding, W., Huang, J., Jiang, S., Yao, H., Zhou, T., Ju, H. (2025). Quality-aware fuzzy min–max neural networks for dynamic brain network analysis and its application to schizophrenia identification. Applied Soft Computing, 169: 112538. https://doi.org/10.1016/j.asoc.2024.112538
[62] Chaudhari, B.M., Barhate, A.A., Bhole, A.A. (2009). Signature recognition using fuzzy min-max neural network. In 2009 International Conference on Control, Automation, Communication and Energy Conservation, Perundurai, India, pp. 1-7.
[63] Chaudhari, B.M., Patil, R.S., Rane, K.P., Shinde, U.B. (2010). Online signature classification using modified fuzzy min-max neural network with compensatory neuron topology. In International Conference on Contemporary Computing. Berlin, pp. 467-478. https://doi.org/10.1007/978-3-642-14834-7_44
[64] Doye, D., Sontakke, T. (2002). Speech recognition using modular general fuzzy min-max neural network. IETE Journal of Research, 48(2): 99-103. https://doi.org/10.1080/03772063.2002.11416263
[65] Jawarkar, N.P., Holambe, R.S., Basu, T.K. (2011). Use of fuzzy min-max neural network for speaker identification. In 2011 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, pp. 178-182. https://doi.org/10.1109/ICRTIT.2011.5972455
[66] Zhang, Y., Zhang, Y., Zhang, H. (2023). A PPG-based biometric system using fuzzy min-max model. Third International Conference on Computer Vision and Data Mining (ICCVDM 2022), 12511: 250-257. https://doi.org/10.1117/12.2660113
[67] Azad, C., Jha, V.K. (2017). Fuzzy min-Max neural network and particle swarm optimization based intrusion detection system. Microsystem Technologies, 23: 907-918. https://doi.org/10.1007/s00542-016-2873-8
[68] Azad, C., Jha, V.K. (2016). A novel fuzzy min-max neural network and genetic algorithm-based intrusion detection system. In Proceedings of the Second International Conference on Computer and Communication Technologies, India, pp. 429-439. https://doi.org/10.1007/978-81-322-2523-2_41
[69] Akramifard, H., Khanli, L.M., Balafar, M.A., Davtalab, R. (2015). Intrusion detection in the cloud environment using multi-level fuzzy neural networks. In Proceedings of the International Conference on Security and Management (SAM). The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), p. 75.
[70] Ahmed, A.A., Mohammed, M.F. (2018). SAIRF: A similarity approach for attack intention recognition using fuzzy min-max neural network. Journal of Computational Science, 25: 467-473. https://doi.org/10.1016/j.jocs.2017.09.007
[71] Bhuyan, M.K., Mohapatra, D.P., Sethi, S. (2016). Software reliability prediction using fuzzy min-Max algorithm and recurrent neural network approach. International Journal of Electrical & Computer Engineering, 6(4): 1929-1938. https://doi.org/10.11591/ijece.v6i4.9991
[72] Chiu, H.P., Tseng, D.C. (1997). Invariant handwritten Chinese character recognition using fuzzy min-max neural networks. Pattern Recognition Letters, 18(5): 481-491. https://doi.org/10.1016/S0167-8655(97)00029-9
[73] Vijayasree, R., Bhaskar, P.V., Reddy, P. (2018). Moment of inertia based radial coding features of invariant character recognition using fuzzy min-max neural networks. International Journal of Computer Engineering and Science Research (IJCESR), 5(2): 26-31. https://troindia.in/journal/ijcesr/vol5iss2part8/6.%2026-31.pdf.
[74] Boveiri, H.R. (2010). Persian printed numeral characters recognition using geometrical central moments and fuzzy min-max neural network. International Journal of Computer and Information Engineering, 4(1): 76-82. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=673eb10a2cfa6151d962665ec76dfc528fb85ec2
[75] Kshirsagar, D.B., Kulkarni, U.V. (2016). A generalized neuro-fuzzy based image retrieval system with modified colour coherence vector and texture element patterns. In 2016 IEEE International Conference on Advances in Electronics, Communication and Computer Technology (ICAECCT), Pune, India, pp. 68-75. https://doi.org/10.1109/ICAECCT.2016.7942558
[76] Huang, W., Sun, M., Zhu, L., Oh, S.K., Pedrycz, W. (2022). Deep fuzzy min–max neural network: Analysis and design. IEEE Transactions on Neural Networks and Learning Systems, 35(6): 8229-8240. https://doi.org/10.1109/TNNLS.2022.3226040
[77] Pham, D., Eldukhri, E., Soroka, A., Ruza’b, G., Estéveza, P. (2005). Image segmentation using fuzzy min-max neural networks for wood defect detection. In Intelligent Production Machines and Systems Conference, Vol. 183.
[78] Liao, W., Zhang, Q., Xie, Q., Gao, M., Jin, P. (2024). A new adaptive and effective granular ball generation method for classification. International Journal of Machine Learning and Cybernetics, 1-20. https://doi.org/10.1007/s13042-024-02463-2
[79] Deshmukh, K., Shinde, G. (2006). Adaptive color image segmentation using fuzzy min-Max clustering. Engineering Letters, 13(3): 57-64.
[80] Ye, C.Z., Yang, J., Geng, D.Y., Zhou, Y., Chen, N.Y. (2002). Fuzzy rules to predict degree of malignancy in brain glioma. Medical & Biological Engineering & Computing, 40(2): 145-152. https://doi.org/10.1007/BF02348118
[81] Wang, X., Yang, J., Jensen, R., Liu, X. (2006). Rough set feature selection and rule induction for prediction of malignancy degree in brain glioma. Computer Methods and Programs in Biomedicine, 83(2): 147-156. https://doi.org/10.1016/j.cmpb.2006.06.007
[82] Kumar, A.S., Kumar, A., Bajaj, V., Singh, G.K., Kuldeep, B. (2019). A fuzzy min-Max neural network based classification of histopathology images. In 2019 International Conference on Signal Processing and Communication (ICSC), pp. 143-146.
[83] Chu, F., Xie, W., Fazayeli, F., Wang, L. (2008). Assisting cancer diagnosis with fuzzy neural networks. Computational Intelligence in Biomedicine and Bioinformatics, pp. 223-235. https://doi.org/10.1007/978-3-540-70778-3_9
[84] Kumar, A.S., Kumar, A., Bajaj, V., Singh, G.K. (2019). K-Highest fuzzy min-Max network to classify histopathological images. In 2019 International Conference on Communication and Signal Processing (ICCSP), pp. 0240-0244.
[85] Kumar, A.S., Kumar, A., Bajaj, V., Singh, G.K. (2021). Class label altering fuzzy min-max network and its application to histopathology image database. Expert Systems with Applications, 176: 114880. https://doi.org/10.1016/j.eswa.2021.114880
[86] Chinnasamy, V.A., Shashikumar, D.R. (2020). Breast cancer detection in mammogram image with segmentation of tumour region. International Journal of Medical Engineering and Informatics, 12(1): 77-94. https://doi.org/10.1504/IJMEI.2020.105658
[87] Li, H.X., Xu, L.D. (2001). Feature space theory-A mathematical foundation for data mining. Knowledge-Based Systems, 14(5-6): 253-257. https://doi.org/10.1016/S0950-7051(01)00103-4
[88] Das, S., Mishra, S., Senapati, M. (2021). Improving time series forecasting using elephant herd optimization with feature selection methods. Journal of Management Analytics, 8(1): 113-133. https://doi.org/10.1080/23270012.2020.1818321
[89] Li, H.X., Xu, L.D., Wang, J.Y., Mo, Z.W. (2003). Feature space theory in data mining: transformations between extensions and intensions in knowledge representation. Expert Systems, 20(2): 60-71. https://doi.org/10.1111/1468-0394.00226
[90] Holmström, O., Linder, N., Kaingu, H., Mbuuko, N., Mbete, J., Kinyua, F., Tornquist, S., Muinde, M., Krogerus, L., Lundin, M., Diwan, V., Lundin, J. (2021). Point-of-care digital cytology with artificial intelligence for cervical cancer screening in a resource-limited setting. JAMA Network Open, 4(3): e211740-e211740. https://doi.org/10.1001/jamanetworkopen.2021.1740
[91] Kalbhor, M., Shinde, S., Popescu, D.E., Hemanth, D.J. (2023). Hybridization of deep learning pre-trained models with machine learning classifiers and fuzzy min-Max neural network for cervical cancer diagnosis. Diagnostics, 13(7): 1363. https://doi.org/10.3390/diagnostics13071363