Recognizing bird eggs in the nest without having to crack them, which is a destructive process, makes the phenomenon of bird egg recognition of great interest to scientists. Because of brood parasitism, a significant number of future birds disappear, and the behavior continues. In fact, the visual features that colonial birds use as visual signals of identity would allow parents to identify their eggs and avoid the fitness costs associated with providing care to others' offspring. The majority of studies have concentrated on an individual's capacity to perform certain behavioral tasks, paying less attention to the egg itself and how its signals may change over time to improve its identification. Given the success of machine learning models in image classification tasks, many studies have classified eggs into their most likely clutches based on the morphological characteristics of the eggshell. In this work, we built an automatic system that assigns the eggs of the slender-billed gull to their most likely clutches, avoiding the need for genetic testing. We present an ensemble voting classifier that uses Fast Beta Wavelet Network (FBWN) features. Our model achieves an egg recognition accuracy of 90%, outperforming the state-of-the-art method and demonstrating the efficiency and robustness of our system.
Keywords: Fast Beta Wavelet Network (FBWN), egg recognition, ensemble voting classifier, visual feature
One of the fundamental cognitive abilities of animals is the capacity to recognize other individuals: to distinguish mates, one's offspring, rivals, predators, or prey. Identification is also crucial in other animal interactions. There are numerous methods by which individuals recognize one another based on their unique physical or chemical cues [1]. This behavior is known in many animal families.
Egg-laying parasitism, which consists of laying eggs in the nest of another breeding pair, is a reproductive strategy frequently encountered in birds. This strategy, adopted by a large number of species, can be interspecific or intraspecific, and it transfers the cost of rearing the parasite's offspring to its host [2, 3].
The majority of research on brood parasitism has focused on the hosts' capacity for individual recognition and the consequent egg-rejection behavior. Since host decisions depend on egg features, characterizing the features extracted from the egg is a necessary first step in studying this behavior. Traditionally, eggs have been categorized simply by one or more phenotypic characteristics, frequently focusing solely on eggshell coloration. This method might work for plain eggs, but it is insufficient for species with intricate eggshell patterns (color pattern, pigments, etc.) [4, 5].
To assign the appropriate eggs to each female, scientists have for several years relied on genetics, extracting the embryos so that they can be genotyped. In parallel, other methods have been developed to overcome the drawbacks of genetic testing. Given the technological evolution of artificial intelligence in several fields, an approach has been considered to solve the problem of identifying eggs based on their visual characteristics [6-11].
The species chosen as a model for the study is the slender-billed gull. Indeed, due to its low level of aggressiveness and its high degree of coloniality, the slender-billed gull meets all the criteria for intraspecific parasitism. Intraspecific parasitism in this species has recently been detected by the presence of four or five eggs in some clutches, whereas females lay a maximum of three eggs [12].
We hypothesize that eggshell appearance is very similar between eggs of the same parentage and different between eggs of different clutches (not sharing the same parentage). Under this hypothesis, eggs of the same parentage are expected to lie close to each other in the multivariate feature space, while parasitic eggs are expected to be isolated from the clutches in which they were found.
The objective of our work is to determine if it is possible to classify the eggs in their corresponding clutches solely from the image of an eggshell.
In this work, the principle of the proposed approach is as follows.
(1) Use the Fast Beta Wavelet Network (FBWN) to extract features.
(2) Classify the features with a voting ensemble, also known as a "majority voting ensemble": an ensemble machine learning model that combines the predictions from several different models for classification tasks.
Our paper is organized as follows. Section 2 presents the methods used to identify eggs and to extract features. Section 3 explains the principle of the proposed approach, treating each phase separately. Section 4 is dedicated to testing the effectiveness of the proposed method, and we end with a conclusion.
For several years, scientists have used genetic tests to determine with certainty the species and the degree of relatedness of each egg in a clutch. This requires extracting the embryos from the eggs and genotyping them using genetic markers specific to the species. Research has reported progress in overcoming the limitations of these methods. In addition, several methods have been developed that use the visual features of eggshells to classify eggs into their corresponding clutches. In this section, several feature extraction methods are presented.
A so-called granulometric method, which is based on Fourier transformations, has been presented to extract information on shell colouration patterns from photographs of eggs [13]. In addition, two programs that implement this method—España [14] and SpotEgg [15]—have recently been developed.
These tools were designed to assess egg similarity using morphological and shell-color parameters extracted from photographs. More recently, Bulla et al. [1] used photographs of gull eggs to extract quantitative visual information on staining patterns with the SpotEgg image-analysis software and to quantify the similarity or dissimilarity of eggs within a single clutch. They also performed genetic analyses on the same eggs to identify parasitic eggs accurately and to determine whether specific egg-coloration characteristics are reliable indicators of intraspecific brood parasitism [16].
Wegmann et al. [17] introduced a spectrophotometric method that measures the absorbance of the background colour and shell spots; it can be implemented in the field with a portable spectrophotometer. Because the technique provides no information on the size, number, or distribution of spots, it is poorly suited for parasitism studies, but it may be valuable for other questions. For example, it has been shown that in herring gulls egg coloration can serve as a bio-indicator of environmental contaminant load.
Finally, a destructive chromatographic technique (HPLC) has been used since 2012 to quantify the total pigment content of the shell [18]. Although highly accurate, the method destroys the shell and therefore precludes analysis of the staining pattern, making it unsuitable for parasitism research.
To our knowledge, these methods have not been used to recognize eggs of this species. On the other hand, Caves et al. [6] offered an image classification method for the egg classification problem. Using calibrated digital photos, their system employs SpotEgg to characterize eggs using 27 properties, including color, spots, shape, and size. A supervised machine learning method then divided the eggs into multiple classes, placing each egg in the clutch that fits it best out of all the clutches. The result was a low recognition accuracy of 53%.
The major interest of deep learning lies in feature learning, given the importance of this phase for the classification result. Since the structure of eggs is very particular with respect to feature extraction, in this work we chose a dedicated feature extraction method.
Nowadays, many methods focused on feature extraction have been developed and have significant applications. Feature extraction methods have been documented in several articles in recent years [7, 9, 19-23].
Wiem et al. [8] proposed an egg classification system based on convolutional neural networks (CNNs), in which the deep CNN model automatically extracts the characteristics of the egg and uses a softmax layer for classification. Many factors were experimented with in this work (number of filters, number of iterations, and size of filters). Evaluated on the slender-billed gull dataset, the method performed admirably, reaching an accuracy of 87%.
Subsequently, a system combining a discrete wavelet transform with a CNN architecture, whose outputs are fed to softmax and SVM classifiers, was used to classify egg images of the slender-billed gull [9]. Evaluated on the slender-billed gull dataset, it achieved an accuracy of 91% with the softmax classifier and 93% with the SVM classifier. The multi-resolution analysis provided by this technique helps extract more details from the processed image; it constitutes a multi-resolution extension of the CNN architecture.
Then, Wiem et al. [11] proposed a system to identify the parasitic egg of the slender-billed gull. In this work, an FBWN is used for the feature extraction phase and a stacked auto-encoder for classification. This work achieved an accuracy of 89.9%.
Several feature extraction methods have not yet been tested on egg datasets, but these methods have promising results for image classification.
To effectively identify each slender-billed gull female's eggs, we built an ensemble voting model [24] trained on the eggs' visually distinctive characteristics. The goal of this work is to classify the eggs of the slender-billed gull species into their corresponding clutches solely from the egg image. We hypothesize that the eggshell pattern changes more between eggs from different clutches than within a single clutch. The proposed method consists of a feature extraction phase using FBWN to extract three descriptors (shape, texture, and color) and fuse them into a single descriptor vector. We then focus on improving the model's ability to generalize by applying feature augmentation. Finally, a voting ensemble classifies the features extracted from the egg image. Figure 1 depicts the steps of the proposed approach.
Figure 1. Pipeline for the suggested approach
3.1 Image preprocessing
To obtain precise visual characteristics of the eggs, a pre-processing step was performed to improve the quality and luminosity of the egg images. Additionally, to obtain uniformly sized egg photos, all of the RGB egg images were resized to 1024 × 2048 × 3 [9].
Our dataset is quite limited in size, which makes it challenging to train machine learning models effectively. To address this, we applied data augmentation techniques such as rotation, translation, and zooming to artificially expand the training set. These transformations introduce variability in the data, helping to reduce overfitting and improving the model’s ability to generalize to new, unseen samples.
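As an illustration, the sketch below applies these three transformations with SciPy; the parameter ranges and the output handling are assumptions for demonstration purposes, not values taken from our pipeline.

```python
# A minimal sketch of the rotation / translation / zoom augmentation, assuming
# illustrative parameter ranges (these are not specified in the paper).
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return one randomly rotated, shifted, and zoomed copy of an RGB image."""
    angle = rng.uniform(-15, 15)              # rotation in degrees
    shift = rng.uniform(-10, 10, size=2)      # translation in pixels (rows, cols)
    scale = rng.uniform(0.9, 1.1)             # zoom factor

    out = ndimage.rotate(image, angle, axes=(0, 1), reshape=False, mode="nearest")
    out = ndimage.shift(out, shift=(*shift, 0), mode="nearest")
    # Zoom resizes the array; a real pipeline would crop or pad back to size.
    out = ndimage.zoom(out, zoom=(scale, scale, 1), order=1)
    return out

rng = np.random.default_rng(0)
egg = np.zeros((256, 256, 3))                          # placeholder for an egg image
extra_samples = [augment(egg, rng) for _ in range(5)]  # five augmented copies
```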
3.2 Feature extraction method
The FBWN is a specialized neural architecture that employs wavelet transforms as its fundamental components. By leveraging these transforms, the network is able to capture and retain the most informative features required for accurately reconstructing the input data [9, 19].
Indeed, the FBWN decomposes the input image into several frequency sub-bands using horizontal $(\psi H_i)$, vertical $(\psi V_i)$, and diagonal $(\psi D_i)$ wavelets in combination with a scaling function $\varphi_i$. This decomposition enables the capture of both directional and structural information at multiple resolutions. Convolutional operations are applied to these sub-bands, and the most significant coefficients are retained. The selected wavelet coefficients $(\omega H_i, \omega V_i, \omega D_i)$ represent shape and texture characteristics, while the scaling-function coefficients $(v_i)$ capture additional structural details. In parallel, color descriptors are extracted from the reconstructed image by computing the first and second statistical moments in the HSV color space, which aligns more closely with human visual perception than the standard RGB representation. Finally, the three groups of descriptors (shape, texture, and color) are concatenated into a single feature vector that serves as the input for the classification stage. The process of FBWN-based feature extraction is presented in Figure 2, where, for illustration purposes, a simplified configuration with a random number of neurons and coefficients is depicted.
Figure 2. Application of the FBWN for feature extraction [9]
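As a rough illustration of this idea, the sketch below performs a one-level 2-D wavelet decomposition and computes the six HSV color moments. PyWavelets provides no beta wavelet, so a standard Daubechies wavelet ('db2') stands in for the FBWN's beta wavelet, and the file name is hypothetical; this is a sketch of the principle, not the FBWN implementation itself.

```python
# Illustrative sketch only: 'db2' approximates the beta wavelet (an assumption),
# and the colour descriptor is the 6 HSV moments described in the text.
import cv2
import numpy as np
import pywt

def color_moments_hsv(rgb: np.ndarray) -> np.ndarray:
    """First and second statistical moments per HSV channel -> 6 colour features."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float64)
    return np.array([f(hsv[:, :, c]) for c in range(3) for f in (np.mean, np.std)])

def wavelet_subbands(gray: np.ndarray):
    """One-level 2-D decomposition into approximation plus H/V/D detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(gray, "db2")
    return cA, cH, cV, cD

rgb = cv2.cvtColor(cv2.imread("egg.jpg"), cv2.COLOR_BGR2RGB)   # hypothetical file
cA, cH, cV, cD = wavelet_subbands(cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY))
color_vec = color_moments_hsv(rgb)                              # 6 colour features
```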
For each image, we obtained three descriptors, comprising 6 color features, 6 shape features, and 64 texture features, which are merged into one vector per image as shown in Figure 3.
Figure 3. Three-descriptor extraction method using FBWN
3.3 Fusion of descriptors
The three resulting descriptors for each image are combined into a single unified descriptor in the fusion phase. This step merges the individual features to capture the unique characteristics of each image, as sketched below.
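Concretely, the fusion is a simple concatenation; a minimal sketch, assuming the 6/6/64 feature counts reported above:

```python
import numpy as np

def fuse(shape_vec, color_vec, texture_vec) -> np.ndarray:
    """Concatenate the per-image descriptors into one 76-D vector
    (6 shape + 6 color + 64 texture)."""
    v = np.concatenate([np.ravel(shape_vec), np.ravel(color_vec),
                        np.ravel(texture_vec)])
    assert v.shape == (76,)
    return v
```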
Indeed, this process employs feature concatenation, which helps integrate complementary information while preserving the diversity of the data. We evaluated each descriptor individually, but the findings demonstrated that combining the descriptors led to better performance than using them separately [9]. The resulting fused vector is then refined using deep learning techniques, such as a stacked autoencoder, to highlight the most relevant features for classification. To further enhance the model's robustness against image variability and improve its generalization ability, data augmentation techniques are applied afterward.
3.4 Classification voting ensemble
In this paper, we propose the application of a classification voting ensemble, a powerful ensemble learning technique, to enhance the robustness and accuracy of image classification tasks as shown in Figure 4. The ensemble is constructed by aggregating the predictions of diverse base classifiers, each employing distinct machine learning algorithms and architectures.
Figure 4. Architecture of ensemble voting technique
Our ensemble includes a combination of Support Vector Machines (SVMs) [25] with different kernel functions and SVMs with different polynomial degrees. The diversity of the underlying classifiers aims to capture a broader range of features and patterns within the image data. Through a majority voting mechanism, where the final prediction is determined by the class label receiving the most votes across the ensemble, we exploit the collective intelligence of the individual classifiers. Hard voting and soft voting are the two methods used to predict the majority vote for classification.
Hard voting selects the class that receives the most votes across the models, while soft voting selects the class with the highest summed predicted probability.
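A hedged sketch of such an ensemble with scikit-learn follows; the dummy stand-in data is shaped like our dataset (8 clutches × 7 images, 76 fused features), and the kernels and hyperparameters shown are illustrative assumptions.

```python
# Sketch of SVM-based voting ensembles; hyperparameters are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

# Dummy stand-in for the fused 76-D FBWN descriptors and clutch labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(56, 76)), np.repeat(np.arange(8), 7)

base = [
    ("rbf", SVC(kernel="rbf", probability=True)),
    ("poly", SVC(kernel="poly", degree=3, probability=True)),
    ("sigmoid", SVC(kernel="sigmoid", probability=True)),
    ("linear", SVC(kernel="linear", probability=True)),
]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

soft = VotingClassifier(estimators=base, voting="soft").fit(X_tr, y_tr)  # sums probabilities
hard = VotingClassifier(estimators=base, voting="hard").fit(X_tr, y_tr)  # counts votes
print(soft.score(X_te, y_te), hard.score(X_te, y_te))
```

Note that `probability=True` is required on each SVC so that soft voting can sum the per-class probability estimates.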
Experimental results demonstrate the efficacy of the proposed classification voting ensemble in improving classification accuracy, robustness to variations in input data, and overall generalization performance compared to individual classifiers. The findings underscore the potential of ensemble methods as a valuable strategy for advancing the state-of-the-art in image classification tasks, offering a promising avenue for future research in the field.
4.1 Study sites and image collection
The biologist gathered the gull clutches in the salt flats of Sfax, Tunisia, in 2015 from three small colonies that were doomed to failure. All eggs were then genotyped using 13 microsatellite markers specifically developed for this species. In parallel, the eggs were photographed under standard conditions to analyze shell coloration patterns.
For the evaluation, we relied on a dataset consisting of 8 complete clutches, each comprising 7 egg images captured from different viewing angles. As the dataset was relatively small, additional steps were required to enrich its diversity and strengthen the robustness of the learning process. To this end, we applied a two-level augmentation strategy. At the image level, transformations such as rotation, translation, and zooming were introduced to generate variations in egg orientation and perspective. At the feature level, we performed augmentation after extracting descriptors with the FBWN method, thereby expanding the variability within the feature space and reducing the risk of overfitting. We allocated 70% of the dataset for training and the remaining 30% for testing, ensuring a reliable assessment of both model learning and generalization capability.
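The feature-level augmentation and the 70/30 split might look as follows; the paper does not specify the feature-space transform, so Gaussian jitter on the descriptor vectors is an assumed stand-in used purely for illustration.

```python
# Feature-level augmentation sketch: Gaussian jitter is an assumption, not the
# transform used in the paper. Augmentation is applied to the training set only.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = rng.normal(size=(56, 76)), np.repeat(np.arange(8), 7)  # stand-in descriptors

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)  # 70/30 split

def jitter(X, y, copies=3, sigma=0.01):
    """Create noisy copies of each training vector to expand the feature space."""
    X_aug = np.vstack([X] + [X + rng.normal(0.0, sigma, X.shape)
                             for _ in range(copies)])
    return X_aug, np.tile(y, copies + 1)

X_tr, y_tr = jitter(X_tr, y_tr)
```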
Each input image is a 1024 × 1024 × 3 RGB array (the 3 refers to the RGB channels). Figure 5 provides a sample of egg images of the slender-billed gull.
Figure 5. Egg images of slender-billed gull
To evaluate the classification performance of our model, we used the accuracy metric. Accuracy is the percentage of predictions that our model identifies correctly, and it is defined in Eq. (1):
Accuracy $=\frac{\text{Correct Predictions}}{\text{Total Predictions}}$ (1)
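In code, Eq. (1) corresponds directly to scikit-learn's accuracy_score; here it is applied to the soft-voting ensemble and test split from the earlier sketches.

```python
from sklearn.metrics import accuracy_score

# Eq. (1): correct predictions / total predictions.
y_pred = soft.predict(X_te)     # 'soft', X_te, y_te come from the sketches above
print(accuracy_score(y_te, y_pred))
```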
4.2 Result and discussion
Each female induces a distinct trait in her eggs. Based on this property, we propose to extract descriptors (texture, color, and shape) from each eggshell image in the dataset. Our contribution is the fusion of the three feature vectors extracted from the RGB egg image. These attributes are then classified using a voting ensemble classifier.
Specifically, we combined beta wavelet analysis with statistical methods such as Hu moments and energy calculation to develop suitable shape, color, and texture descriptors. We then extracted three descriptor vectors for each image, containing 6, 6, and 64 attributes, respectively.
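The two statistics named above could be computed as follows. OpenCV's Hu moments number seven, so keeping six is an assumption made here to match the reported feature count, and the energy statistic is the standard sub-band energy used for texture.

```python
import cv2
import numpy as np

def shape_descriptor(gray: np.ndarray) -> np.ndarray:
    """Hu moments of the egg image; OpenCV returns 7, so truncating to the
    first 6 is an assumption to match the paper's 6 shape features."""
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    return hu[:6]

def subband_energy(coeffs: np.ndarray) -> float:
    """Energy of one wavelet sub-band, a standard texture statistic."""
    c = coeffs.astype(np.float64)
    return float(np.mean(c * c))
```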
Once the attributes have been derived, and before moving to the training stage, it is necessary to merge them into a single descriptor vector to be fed to the ensemble majority voting classifier. A voting ensemble operates by combining the predictions of several variants of the same model, and it applies to classification.
Table 1 reports the results of ensemble voting over SVM models with different kernel functions: soft voting achieved an accuracy of 90% and hard voting 88.4%. Table 2 reports the results of ensemble voting over SVMs with different polynomial degrees: soft voting achieved an accuracy of 89.5% and hard voting 84.6%.
Table 1. Accuracy of SVM models using ensemble voting with different kernel functions
| Kernel Type | Accuracy (%) |
|---|---|
| Radial Basis (RBF) | 89.4 |
| Polynomial | 89.1 |
| Sigmoid | 71.7 |
| Linear | 75.5 |
| Precomputed | 77.3 |
| Soft-voting ensemble | 90 |
| Hard-voting ensemble | 88.4 |
Table 2. Accuracy of SVM models using ensemble voting across polynomial degree variations
| Polynomial Degree | Accuracy (%) |
|---|---|
| 1 | 83.9 |
| 2 | 83.2 |
| 3 | 84.1 |
| 4 | 68.9 |
| 5 | 71.7 |
| Ensemble (Soft Voting) | 89.5 |
| Ensemble (Hard Voting) | 84.6 |
Because there were no state-of-the-art results on the same dataset, we tested different classifiers to evaluate the performance of our model. The performance comparison results for the various classifiers are displayed in Table 3. We tested the SVM linear-kernel classifier with our FBWN features, and it attained an accuracy of 75.5%. The same features fed to a stacked auto-encoder classifier reached an accuracy of 89.9%. Deep learning models, by contrast, are applied directly to images, and the CNN achieved an accuracy of 87%.
Although these methods were not re-evaluated in the current study, our previous research [10] applied several ensemble techniques to the same FBWN features derived from RGB images. The results indicated that Adaboost reached 43.5%, Gradient Boosting achieved 86.3%, and Bagging with KNN obtained 88.2% accuracy. In comparison, the Voting Classifier in this study achieved 90%, demonstrating competitive or better performance while also offering advantages in terms of simplicity and stability. These outcomes reinforce our choice to adopt the Voting Classifier as the final classification method.
Table 3. Performance comparison of different classifiers
| Classifier | Input Feature Type | Accuracy (%) |
|---|---|---|
| Support Vector Machine (SVM) | FBWN features from RGB image | 75.5 |
| Convolutional Neural Network (CNN) [8] | RGB image | 87 |
| Stacked Auto-Encoder (SAE) [9] | FBWN features from RGB image | 89.9 |
| Adaboost [10] | FBWN features from RGB image | 43.5 |
| Gradient Boosting [10] | FBWN features from RGB image | 86.3 |
| Bagging (KNN) [10] | FBWN features from RGB image | 88.2 |
| Ensemble Voting | FBWN features from RGB image | 90 |
Among the criteria exploited in our work is the pattern of coloration: not the colors themselves, but how the spots are organized on the egg (stronger or weaker concentration at the bottom, more or less dispersion, and more or less heterogeneity in the size and color of the spots). There are potentially many characteristics to extract, and they influence the results of the shape and texture descriptors. Specifically, we combined beta wavelet analysis with statistical methods, namely Hu moments and energy calculation, to develop suitable shape and texture descriptors, respectively.
Despite the use of data augmentation to balance the dataset and enhance the model's generalization capabilities, some classification errors still occur. Upon analysis, these errors are mainly attributed to the high visual similarity between certain parasitic egg types. This highlights the intrinsic difficulty of the task and suggests that more discriminative features or attention-based models might be beneficial. Future work may also incorporate explainability methods to further investigate the decision-making process of the classifier.
We considered testing various classifiers to assess the effectiveness of our model with clean samples. Table 3 illustrates the performance comparison results for different classifiers.
Several studies have shown that females of some bird species lay eggs with the same color and pattern throughout their lives, but here, to our knowledge, we have provided the first demonstration that this repeatability can indeed be exploited for individual egg recognition. Alongside egg shape, the features most important for individual visual recognition are those of egg coloration.
Thus, in order to classify the eggs from the slender-billed gull's data, our suggested method has taken advantage of the FBWN's ability to extract precise visual features and the ensemble voting system for classifying these features. The outcomes of the experiment validate the efficacy of our approach.
However, because our architecture requires multiple steps to identify the parasitic egg, it takes a fair amount of time. This is not a major issue, though: the biologist does not need to identify eggs in real time, so the time constraint is not a pressing concern.
The algorithm in our approach aimed to represent the sensory and cognitive tasks of a bird to identify parasitic eggs. Interestingly, our algorithm achieved a discrimination ability similar to that of birds.
While the proposed method has demonstrated strong performance under controlled conditions, its deployment in real-world biological contexts presents additional challenges. From a scalability standpoint, the method is adaptable to larger datasets, especially when combined with parallel computing and GPU-based acceleration. Nonetheless, factors such as inconsistent image quality, varying lighting conditions, and complex natural backgrounds in field environments could affect classification accuracy. To mitigate these issues, future work will focus on incorporating more diverse and representative data, along with robust preprocessing strategies. Furthermore, the current reliance on manual annotation poses a constraint on scalability; exploring semi-supervised or active learning approaches could help reduce this dependency and improve practicality.
We have demonstrated that machine learning ensemble voting is effective for egg recognition. During this work, we identified two essential points: first, the fusion of the three descriptors yields a substantial gain for the identification of eggs; second, enlarging the dataset improves the results. Building on these results, our future work will focus on improving the system by integrating more advanced deep learning techniques, such as EfficientNet, Residual Networks (ResNets), and Vision Transformers (ViTs). We also plan to explore more efficient feature extraction methods, particularly through hybrid approaches that combine handcrafted descriptors with deep feature representations. Additionally, enriching the dataset with more intra-specific image variations will be a priority to further improve the model's robustness and generalization capabilities. In the longer term, we aim to extend this methodology to the identification of other biological species, with the goal of providing biologists with automated tools that support their efforts in classification and ecological monitoring.
This study was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R759), Princess Nourah bint Abdulrahman University.
[1] Bulla, M., Šálek, M., Gosler, A.G. (2012). Eggshell spotting does not predict male incubation but marks thinner areas of a shorebird's shells. The Auk, 129(1): 26-35. https://doi.org/10.1525/auk.2012.11090
[2] Del Hoyo, J., Elliott, A., Sargatal, J. (1992). Handbook of the Birds of the World. Barcelona: Lynx Edicions.
[3] Besnard, A., Gimenez, O., Lebreton, J.D. (2002). A model for the evolution of crèching behaviour in gulls. Evolutionary Ecology, 16(5): 489-503. https://doi.org/10.1023/A:1020809528816
[4] Barbosa, A., Litman, L., Hanlon, R.T. (2008). Changeable cuttlefish camouflage is influenced by horizontal and vertical aspects of the visual background. Journal of Comparative Physiology A, 194(4): 405-413. https://doi.org/10.1007/s00359-007-0311-1
[5] Cassey, P., Mikšík, I., Portugal, S.J., Maurer, G., et al. (2012). Avian eggshell pigments are not consistently correlated with colour measurements or egg constituents in two Turdus thrushes. Journal of Avian Biology, 43(6): 503-512. https://doi.org/10.1111/j.1600-048X.2012.05576.x
[6] Caves, E.M., Stevens, M., Iversen, E.S., Spottiswoode, C.N. (2015). Hosts of avian brood parasites have evolved egg signatures with elevated information content. Proceedings of the Royal Society B: Biological Sciences, 282(1810): 20150598. https://doi.org/10.1098/rspb.2015.0598
[7] Mahmoudi, S., Nhidi, W., Bennour, C., Ben Belgacem, A., Ejbali, R. (2022). An intelligent approach to identify the eggs of the insect bemisia tabaci. Lecture Notes in Networks and Systems, 717: 62-70. https://doi.org/10.1007/978-3-031-35510-3_7
[8] Wiem, N., Ejbali, R., Hassen, D. (2020). An intelligent approach to identify parasitic eggs from a slender-billed’s nest. In Twelfth International Conference on Machine Vision (ICMV 2019), Amsterdam, Netherlands, pp. 1143309. https://doi.org/10.1117/12.2558685
[9] Nhidi, W., Aoun, N.B., Ejbali, R. (2023). Deep learning-based parasitic egg identification from a slender-billed gull’s nest. IEEE Access, 11: 37194-37202. https://doi.org/10.1109/ACCESS.2023.3267083
[10] Nhidi, W., Ben Aoun, N., Ejbali, R. (2023). Ensemble machine learning-based egg parasitism identification for endangered bird conservation. Communications in Computer and Information Science, 1864: 364-375. https://doi.org/10.1007/978-3-031-41774-0_29
[11] Wiem, N., Ali, C.M., Ridha, E. (2020). Wavelet feature with CNN for identifying parasitic egg from a slender-Billed’s nest. In International Conference on Hybrid Intelligent Systems, pp. 365-374. https://doi.org/10.1007/978-3-030-73050-5_37
[12] Chokri, M.A., Selmi, S. (2012). Nesting phenology and breeding performance of the Slender-billed Gull Chroicocephalus genei in Sfax salina, Tunisia. Ostrich, 83(1): 13-18. https://doi.org/10.2989/00306525.2012.659226
[13] Gómez, J., Liñán-Cembrano, G. (2017). SpotEgg: An image-processing tool for automatised analysis of colouration and spottiness. Journal of Avian Biology, 48(4): 502-512. https://doi.org/10.1111/jav.01117
[14] Hanley, D., Doucet, S.M. (2012). Does environmental contamination influence egg coloration? A long-term study in herring gulls. Journal of Applied Ecology, 49(5): 1055-1063. https://doi.org/10.1111/j.1365-2664.2012.02184.x
[15] Gómez, J., Pereira, A.I., Pérez-Hurtado, A., Castro, M., Ramo, C., Amat, J.A. (2016). A trade-off between overheating and camouflage on shorebird eggshell colouration. Journal of Avian Biology, 47(3): 346-353. https://doi.org/10.1111/jav.00736
[16] Gomez, J., Gordo, O., Minias, P. (2021). Egg recognition: The importance of quantifying multiple repeatable features as visual identity signals. PLoS One, 16(3): e0248021. https://doi.org/10.1371/journal.pone.0248021
[17] Wegmann, M., Vallat-Michel, A., Richner, H. (2015). An evaluation of different methods for assessing eggshell pigmentation and pigment concentration using great tit eggs. Journal of Avian Biology, 46(6): 597-607. https://doi.org/10.1111/jav.00495
[18] Ornés, A.S., Herbst, A., Spillner, A., Mewes, W., Rauch, M. (2014). A standardized method for quantifying eggshell spot patterns. Journal of Field Ornithology, 85(4): 397-407. https://doi.org/10.1111/jofo.12079
[19] El Adel, A., Ejbali, R., Zaied, M., Amar, C.B. (2014). A new system for image retrieval using beta wavelet network for descriptors extraction and fuzzy decision support. In 2014 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR), Tunis, Tunisia, pp. 232-236. https://doi.org/10.1109/SOCPAR.2014.7008011
[20] ElAdel, A., Ejbali, R., Zaied, M., Amar, C.B. (2016). A hybrid approach for content-based image retrieval based on fast beta wavelet network and fuzzy decision support system. Machine Vision and Applications, 27(6): 781-799. https://doi.org/10.1007/s00138-016-0789-z
[21] Hassairi, S., Ejbali, R., Zaied, M. (2018). A deep stacked wavelet auto-encoders to supervised feature extraction to pattern classification. Multimedia Tools and Applications, 77(5): 5443-5459. https://doi.org/10.1007/s11042-017-4461-z
[22] Ben Ali, R., Ejbali, R., Zaied, M. (2020). Classification of medical images based on deep stacked patched auto-encoders. Multimedia Tools and Applications, 79(35): 25237-25257. https://doi.org/10.1007/s11042-020-09056-5
[23] Ben Aoun, N., Nhidi, W., Ejbali, R. (2025). Automatic avian parasitic egg identification from pertinent visual features using hybrid machine learning models. International Journal of Machine Learning and Cybernetics. https://doi.org/10.1007/s13042-025-02762-2
[24] Saqlain, M., Jargalsaikhan, B., Lee, J.Y. (2019). A voting ensemble classifier for wafer map defect patterns identification in semiconductor manufacturing. IEEE Transactions on Semiconductor Manufacturing, 32(2): 171-182. https://doi.org/10.1109/TSM.2019.2904306
[25] Chandra, M.A., Bedi, S.S. (2021). Survey on SVM and their application in image classification. International Journal of Information Technology, 13(5): 1-11. https://doi.org/10.1007/s41870-017-0080-1