Developing Chicken Health Classification Model Using a Convolutional Neural Network and Support Vector Machine (CNN-SVM) Approach


Eko Supriyanto*, R. Rizal Isnanto, Sutrisno Hadi Purnomo

Doctoral Program of Information System, School of Postgraduate Studies, Diponegoro University, Semarang 50275, Indonesia

Department of Electrical Engineering, Politeknik Negeri Semarang, Semarang 50275, Indonesia

Department of Animal Science, Faculty of Animal Science, Universitas Sebelas Maret, Surakarta 57126, Indonesia

Corresponding Author Email: ekosupriyanto@students.undip.ac.id

Pages: 2525-2532 | DOI: https://doi.org/10.18280/isi.290637

Received: 24 July 2024 | Revised: 9 August 2024 | Accepted: 3 September 2024 | Available online: 25 December 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

There is a critical need for the early detection of disease in order to mitigate its effects on chicken populations and prevent its transmission to other chickens. Nevertheless, manual monitoring of chicken health by farmers often lacks efficiency and accuracy. Especially on large-scale farms, visual observation of disease symptoms is frequently either inaccurate or too late. Therefore, this research aims to develop a system for non-invasive monitoring of chicken health conditions using CNN-SVM image classification techniques based on RGB and infrared images in the chicken coop. Test results indicate that the DenseNet121 architecture yielded the most promising feature extraction results, with accuracies of 93-98%. Furthermore, during evaluation the model identified and classified chicken health images with an accuracy of 93.6%.

Keywords: 

chicken health classification, Convolutional Neural Network (CNN), image classification, feature extraction, support vector machine, DenseNet121

1. Introduction

The chicken farming industry plays a crucial role in fulfilling the global demand for animal protein. The Indonesian National Food Agency estimated per capita chicken meat consumption in Indonesia at approximately 7.46 kilograms in 2023.

The chicken farming industry requires meticulous attention to ensure the chickens' well-being, particularly in light of the growing demand for premium chicken meat. Chicken farmers frequently encounter various health issues, including infectious diseases such as avian influenza, fowl cholera, Newcastle disease, and others [1].

Monitoring the health of chickens has significant economic and social implications at both the individual and global levels.

According to data gathered by the Medion Technical Education & Consultation Team in 2023, the incidence of chicken illnesses fluctuated throughout the year, following a pattern similar to the previous year's forecast. Respiratory conditions such as Chronic Respiratory Disease (CRD), Colibacillosis, and complex CRD primarily contribute to the prevalence of bacterial diseases. The emergence of CRD cases in Indonesia can be attributed to inadequate livestock health control, a wide range of livestock ages within the population, high population density, and challenges in livestock management.

It is crucial to identify diseases early to mitigate their effects on chicken populations and prevent their transmission to other chickens [2]. Nevertheless, farmers' manual monitoring of chicken health is often neither accurate nor efficient. In large-scale farms, visual observation of disease symptoms is frequently either inaccurate or too late. This underscores the significance of developing automated systems that can identify and categorize health conditions in chickens [3]. Manual methods of disease diagnosis in chickens are time-consuming and exhausting, and they frequently fail to identify infections accurately. Automated models may instead concentrate on direct observations of the chicken, including body temperature, pulse, feather or skin colour, eating and drinking behavior, and other physical symptoms that may suggest health issues.

The CNN approach to chicken health classification is promising because it can automatically extract significant features from images and employ a hierarchy of layers to identify intricate patterns [4, 5]. CNN models can achieve a high level of accuracy in recognizing disease signs when trained with an adequate dataset that encompasses a variety of chicken health conditions. Interest in various medical fields, particularly imaging, is rising due to the continued advancement of computer vision technology in object detection; these techniques can facilitate the diagnosis of a variety of diseases [6]. Poultry monitoring systems employing computer vision use a combination of hardware and software components that perform a variety of image processing, analysis, and decision-making tasks [7]. Computer vision systems can analyze a bird's behavior, posture, and movement patterns to detect indications of stress or illness [8-10]. Early detection of health issues enables immediate intervention, enhancing the flock's overall health.

Numerous researchers have applied diverse methodologies to chicken health monitoring. Aydin [11] introduced a novel approach to automatically monitoring the health of broiler chickens using 3D cameras; by knowing the level of inactivity, farmers can detect health problems in their chickens earlier. Wang et al. [12] offered an automated way to analyze the health of broiler chickens through their feces, classifying images of chicken feces into normal and abnormal categories using a deep Convolutional Neural Network (CNN). Meanwhile, Suthagar et al. [13] classified chicken diseases based on faecal images using deep learning techniques; their proposed model achieved an accuracy of up to 97%. In addition, Dwicahyo et al. [14] showed that the CNN method could correctly detect and recognize early disease in chicks in 15 out of 50 test images, reporting an accuracy value of 81.82%.

Degu and Simegn [15] employed two core algorithms, YOLO-V3 and ResNet50, to detect and classify poultry health conditions into four categories: Healthy, Coccidiosis, Salmonella, and New Castle Disease. Their research findings indicate that the YOLO-V3 object detection model, implemented in Darknet, achieved an average precision of 87.48% for detecting regions of interest (ROI). Conversely, the ResNet50 image model demonstrated a classification accuracy of 98.7%.

The most prevalent diseases affecting chickens can be easily identified by analyzing images of chicken droppings. A deep learning model based on CNN was proposed by Mbelwa et al. [16], which achieved an accuracy rate of 94% in identifying hidden patterns in various chicken feces (faecal) images.

An early warning algorithm was created by Zhuang et al. [17] to identify sick broiler chickens. In their research, they compared a variety of machine learning algorithms. The Support Vector Machine (SVM) model achieved an accuracy level of 99.469%, as indicated by the results.

It is crucial to detect chicken diseases with precision and accuracy to mitigate economic losses and prevent disease transmission. Aulia et al. [18] proposed a method for classifying chicken diseases that employs convolutional neural networks (CNN). The dataset in their investigation comprises images of chickens affected by three distinct diseases: Newcastle disease, bird flu, and infectious bursal disease. The results demonstrate the efficacy of CNN in the classification of chicken diseases, as evidenced by an overall F1 score of 99.04%.

The primary objective of the research we are proposing is to create a system for the non-invasive monitoring of chicken health conditions in the chicken coop. This system will be based on CNN-SVM image classification techniques, which will be applied to RGB and infrared images.

Our proposed research stands out from previous studies through the following distinctive contributions:

(1) Use of combined RGB and infrared imaging: The proposed study uses a combination of RGB and infrared imagery for chicken health monitoring. This is a novelty because many previous studies, such as those conducted by Aydin [11] or Wang et al. [12], focused on images from one spectrum, either RGB or chicken feces images. The integration of RGB and infrared images can provide more complete and accurate information about chicken health by capturing variations in light conditions and body temperature that may not be visible in RGB images alone.

(2) Non-invasive monitoring: This study highlights the importance of non-invasive monitoring, i.e., methods that do not involve direct interaction with or sampling from the chickens. Prior research, such as that of Wang et al. [12] and Suthagar et al. [13], frequently used chicken droppings or other techniques that could involve direct sampling or handling. The non-invasive method enables regular health monitoring and reduces stress for the chickens.

(3) A hybrid CNN-SVM classification approach: This study proposes a combination of CNN and SVM for chicken health classification. Despite the widespread use of CNN in previous studies [14, 18], the application of CNN for feature extraction with SVM as the final classifier in chicken health monitoring has not been widespread. This approach leverages the strengths of CNN in extracting complex features and of SVM in accurate classification.

(4) Accuracy improvement: This study aims to improve the accuracy of chicken health classification. Although several previous studies, such as those of Suthagar et al. [13] and Aulia et al. [18], have achieved high accuracy, this study seeks a more accurate system by integrating information from both image spectra (RGB and infrared), which can improve the accuracy of both detection and classification.

2. Material and Methods

2.1 Chicken Health Monitoring System (CHMS)

The CHMS is implemented in this investigation as an automated system for monitoring chicken health. It employs a variety of technologies, including artificial intelligence (AI), cameras, and sensors, to gather information regarding chicken health. This data can then be examined to identify indicators of illness or other health issues. Numerous previous studies have developed a variety of technologies to monitor the health of chickens, predominantly including:

(1) Sensor-based systems, which monitor environmental parameters using sensors. They are typically inexpensive and straightforward to install, though they may not recognize all signs of disease [19].

(2) Camera-based systems, which use cameras to observe the behavior of chickens. They can identify a greater number of disease indicators than sensor-based systems, but may be more costly and more challenging to install [20].

(3) AI-based systems, which employ AI to analyze data obtained from sensors and cameras. They can detect disease signs with remarkable precision but may be the most costly and challenging to install [21, 22].

Figure 1 presents the general architecture of a CHMS.

Figure 1. General architecture of a CHMS

As seen in Figure 1, the CHMS architecture consists of four main components:

(1) Sensor Module

Sensors: Measure temperature, humidity, chicken activity, and health parameters.

Actuators: Control the environment, such as heating or watering.

(2) Communication Module

Wi-Fi, Zigbee, and LoRa are used to transfer data and connect the sensors to a central server; a gateway handles the initial data.

(3) Data Processing Module

Receives and processes data from the gateway and analyzes it to detect health problems (a minimal sketch follows Figure 2).

(4) User Interface Module

Dashboard: Interface for monitoring data in real-time.

The CHMS is a comprehensive solution that utilizes sensors and Internet of Things (IoT) technology to oversee the well-being of chickens and their surroundings. This system employs an architecture comprising sensors, gateways, servers, and user interfaces, enabling farmers to actively monitor chicken health and enhance production efficiency. Figure 2 presents an example of a model for implementing a CHMS.

Figure 2. One example of a CHMS model [23]
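As a purely illustrative sketch of the data processing module described above, the following hypothetical Python snippet checks one incoming sensor reading against safe ranges; the field names and thresholds are assumptions, not values from the cited systems.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    coop_id: str
    temperature_c: float   # ambient temperature, deg C
    humidity_pct: float    # relative humidity, %

def check_reading(r: SensorReading) -> list:
    """Flag readings outside assumed safe ranges for a broiler coop."""
    alerts = []
    if not 20.0 <= r.temperature_c <= 32.0:   # hypothetical safe range
        alerts.append(f"{r.coop_id}: temperature out of range")
    if not 50.0 <= r.humidity_pct <= 70.0:    # hypothetical safe range
        alerts.append(f"{r.coop_id}: humidity out of range")
    return alerts

print(check_reading(SensorReading("coop-1", 35.2, 64.0)))
```

In a full CHMS, such alerts would be pushed to the user interface module, e.g., the dashboard or an SMS reminder as in Budiarto et al. [25].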

Several prior researchers have successfully developed a CHMS utilizing Internet of Things (IoT) technology. In their study, Sasirekha et al. [24] employed IoT technology to automate various management tasks in chicken farms. Budiarto et al. [25] developed an intelligent chicken farming system that allows farmers to easily monitor and assess the status of food supplies by logging onto a website. If the food supplies are running low, farmers will receive a direct SMS reminder.

These prior studies focus primarily on the use of IoT technology for management and automation. Our study, in contrast, provides a more precise and comprehensive method for monitoring the health of chickens. By employing CNN and SVM for image analysis, we can enhance the precision and comprehensiveness of chicken health detection and classification. This approach overcomes the limitations of conventional monitoring systems, which primarily focus on managerial aspects.

2.2 CNN-SVM approach

Convolutional Neural Network (CNN) is a subset of Artificial Neural Network (ANN) that is most frequently employed in the analysis of visual images in the context of deep learning [26]. A CNN is a deep learning architecture consisting of an input layer, numerous hidden layers, and an output layer; its hidden layers comprise blocks such as convolution, pooling, and fully connected layers [27]. The Support Vector Machine, meanwhile, is a supervised learning model that employs algorithms to analyze data for classification and regression analysis [28]. The combination of the CNN and SVM methods is also referred to as a hybrid model. This hybrid model offers enhanced classification capabilities and more efficient methodologies; a hybrid CNN-SVM has been reported to achieve an overall accuracy of 98.4959% [29]. Figure 3 illustrates the structure of the hybrid CNN-SVM model.

Figure 3. The structure of hybrid CNN-SVM [29]
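To make the structure in Figure 3 concrete, the following is a minimal sketch of the hybrid idea, assuming a Keras DenseNet121 backbone (pretrained on ImageNet) as a frozen feature extractor and a scikit-learn SVM as the final classifier; the input size, kernel, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input

# Frozen CNN backbone: global average pooling turns each image into a
# single 1024-dimensional feature vector.
backbone = DenseNet121(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data standing in for the chicken images and their labels
# (0 = healthy, 1 = sick, 2 = dead).
X_train = np.random.rand(32, 224, 224, 3) * 255.0
y_train = np.random.randint(0, 3, size=32)

clf = SVC(kernel="rbf", C=1.0)             # SVM as the final classifier
clf.fit(extract_features(X_train), y_train)

X_new = np.random.rand(4, 224, 224, 3) * 255.0
print(clf.predict(extract_features(X_new)))   # predicted class ids
```

Keeping the backbone frozen decouples feature extraction from classification, which is the defining trait of the hybrid approach.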

2.3 Model evaluation

Following the design, training, and testing of the model with test data, several calculation stages are conducted to evaluate the accuracy and quality of the developed model, ensuring that it produces the best possible results with the lowest possible error values. The evaluation process also verifies the model's functionality. The confusion matrix method is employed to compute accuracy, precision, recall, and F1-score [30, 31]. The performance evaluation parameters are given by the following equations:

$Accuracy=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}$             (1)

$Recall=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$             (2)

$Precision=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$             (3)

$F1\text{-}Score=\frac{2 \times Recall \times Precision}{Recall+Precision}$             (4)
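As a sketch, these four metrics can be computed directly from predictions with scikit-learn; the labels below are placeholders rather than results from this study.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [0, 0, 1, 1, 2, 2, 2, 1]   # 0 = healthy, 1 = sick, 2 = dead
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]

print(confusion_matrix(y_true, y_pred))
print("accuracy :", accuracy_score(y_true, y_pred))
# Macro averaging treats the three classes equally.
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1-score :", f1_score(y_true, y_pred, average="macro"))
```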

2.4 K-Fold validation

K-Fold validation is a validation technique that is frequently employed in the field of Machine Learning to assess the performance of models more accurately and to mitigate potential biases in the evaluation process. Essentially, this method divides the data into multiple folds and trains the model repeatedly using a variety of combinations of test and training data as input [32, 33].
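A minimal sketch of k-fold cross-validation, assuming k=10 (as used later in Table 3) and placeholder feature vectors in place of the CNN features:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

X = np.random.rand(120, 1024)            # placeholder CNN feature vectors
y = np.random.randint(0, 3, size=120)    # 0 = healthy, 1 = sick, 2 = dead

kf = KFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in kf.split(X):
    clf = SVC().fit(X[train_idx], y[train_idx])      # retrain on each fold
    scores.append(clf.score(X[test_idx], y[test_idx]))

print("per-fold accuracy:", np.round(scores, 3))
print("mean accuracy    :", round(float(np.mean(scores)), 3))
```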

2.5 Proposed methodology architecture

This study employs a hybrid CNN-SVM approach to categorize the health status of chickens into three classes: healthy, sick, and dead. CNN and SVM play distinct yet complementary roles. The CNN performs feature extraction: its convolution, pooling, and activation layers transform input images into more abstract and informative feature representations. After the CNN extracts features from an image, the SVM serves as the final classifier, determining the object class based on these features.

The research process consists of several stages: data collection, pre-processing, segmentation, feature extraction using CNN, classification using SVM, testing on both training and test data, and finally, performance evaluation and validation. The study utilized a total of 12,000 images: 2,000 RGB images each of healthy, sick, and dead chickens, and 2,000 infrared images each of healthy, sick, and dead chickens. The chicken images undergo initial processing through resizing and normalization before entering the YOLO model; this study uses YOLOv8. Figure 4 presents the proposed approach.

Figure 4. Proposed methodology architecture
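As a hedged sketch of the detection step in this pipeline, the following assumes the ultralytics package, a pretrained yolov8n.pt checkpoint, and an illustrative confidence threshold; it locates chickens in a frame and resizes each crop for the downstream feature extractor.

```python
import cv2
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")            # assumed pretrained weights file
image = cv2.imread("coop_frame.jpg")     # one RGB or infrared frame

crops = []
for box in detector(image, conf=0.5)[0].boxes.xyxy:  # (x1, y1, x2, y2)
    x1, y1, x2, y2 = map(int, box)
    chicken = image[y1:y2, x1:x2]
    crops.append(cv2.resize(chicken, (70, 70)))  # size used in Sec. 3.1.2
print(f"{len(crops)} chicken crops prepared for feature extraction")
```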

3. Results and Discussion

3.1 Experimental results

3.1.1 Data collection

A bi-spectrum (RGB and infrared) IP camera is used to capture images. The camera is powered by a 12-volt DC (direct current) supply and is equipped with an RJ45 connector, connected to a computer via a LAN (local area network) data cable, as well as a voice interface. It is employed to record RGB and infrared image data and is positioned on a support or tripod at a height of 100 cm with a tilt angle of 30°. Each captured image contains chicken objects, including both individual chickens and groups of chickens. The camera generates two output images per capture: one infrared and one RGB. The dataset, comprising both image types, is divided into training and test data. A computer receives, stores, and processes the chicken image data. Figure 5 illustrates the image capture technique schematically.

Furthermore, Table 1 presents examples of image capture results.

Figure 5. Image capture technique

Table 1. Example of image capture results (RGB & Infrared)

RGB Image                          Infrared Image
RGB image of a dead chicken        Infrared image of a dead chicken
RGB image of a healthy chicken     Infrared image of a healthy chicken
RGB image of a sick chicken        Infrared image of a sick chicken

3.1.2 Preprocessing

The data augmentation process is employed in the preprocessing stage to increase variation in the training dataset, thereby reducing overfitting. This is achieved through rotation, cropping, and image brightening [34]. In addition, each image is resized from 640×480 pixels to 70×70 pixels before CNN feature extraction. The image transformation process in this research was executed using YOLO, which required several critical steps: reading the image, adjusting the image size to meet the model's requirements, and annotating the image. The StandardScaler class from Python's scikit-learn library is employed for standardization. Figure 6 illustrates an example of the preprocessing results.
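A minimal sketch of these preprocessing steps, assuming Keras for augmentation and scikit-learn's StandardScaler; the augmentation parameter values are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation: rotation, zoom (approximating random cropping), and
# brightness shifts, as described above; parameter values are assumptions.
augmenter = ImageDataGenerator(rotation_range=20,
                               zoom_range=0.1,
                               brightness_range=(0.8, 1.2))

frames = np.random.rand(8, 480, 640, 3) * 255.0   # stand-ins for 640x480 frames
batch = next(augmenter.flow(frames, batch_size=8, shuffle=False))

# Resize to 70x70 (the size used before CNN feature extraction), then
# standardize each flattened feature to zero mean and unit variance.
small = tf.image.resize(batch, (70, 70)).numpy()
scaled = StandardScaler().fit_transform(small.reshape(len(small), -1))
print(scaled.shape)   # (8, 14700) -> 70 * 70 * 3 features per image
```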

3.1.3 Feature extraction

The feature extraction process in the CHMS uses Convolutional Neural Network (CNN) models such as Xception, ResNet, DenseNet121, MobileNetV2, and InceptionV3. Table 2 presents the feature extraction results from the various CNN models.

Figure 6. Image preprocessing results: (a) RGB image; (b) infrared image

Table 2. Results of chicken health image feature extraction (Xception, ResNet50, DenseNet121, MobileNetV2, and InceptionV3)

Condition   Xception   ResNet50   DenseNet121   MobileNetV2   InceptionV3
Healthy     0.80       0.85       0.93          0.90          0.87
Sick        0.71       0.83       0.94          0.75          0.66
Dead        0.83       0.80       0.98          0.86          0.74

The results of extracting chicken health image features using the CNN models Xception, ResNet50, DenseNet121, MobileNetV2, and InceptionV3 are presented in Table 2. Compared to the other models, DenseNet121 exhibits exceptional performance, with a 93-98% success rate. This is consistent with the findings of Shazia et al. [35], who demonstrated that DenseNet121 can achieve an accuracy of 99.48%, outperforming other CNN methods. Additionally, Kherraki and El Ouazzani [36] demonstrated that DenseNet121 can generate highly accurate classification results, with a precision of 95.14%. Babu and Atluri [37] obtained an accuracy of 96.99% by combining features derived from different CNN architectures and then training a Support Vector Machine (SVM) classifier. Syafaah et al. [38] obtained accuracies of 71.25% for dead chickens and 98.25% for sick and healthy chickens.
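As an illustration of how such a backbone comparison might be run, the following hedged sketch scores a linear SVM on features from each pretrained backbone; dataset loading is replaced with placeholders, and the setup is an assumption rather than the authors' exact protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from tensorflow.keras.applications import (DenseNet121, InceptionV3,
                                           MobileNetV2, ResNet50, Xception)

backbones = {
    "Xception": Xception,
    "ResNet50": ResNet50,
    "DenseNet121": DenseNet121,
    "MobileNetV2": MobileNetV2,
    "InceptionV3": InceptionV3,
}

# Placeholder batch; in practice each backbone's own preprocess_input
# should be applied to the real chicken images first.
X_img = np.random.rand(60, 224, 224, 3)
y = np.random.randint(0, 3, size=60)   # 0 = healthy, 1 = sick, 2 = dead

for name, ctor in backbones.items():
    model = ctor(weights="imagenet", include_top=False, pooling="avg")
    feats = model.predict(X_img, verbose=0)
    acc = cross_val_score(SVC(kernel="linear"), feats, y, cv=3).mean()
    print(f"{name:12s} mean cross-validated accuracy: {acc:.2f}")
```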

3.1.4 Model classification and testing

The classification process is the subsequent step after feature extraction. It entails using the previously extracted features to differentiate and classify objects or data into predefined classes. The data must be partitioned into training and test sets before classification. In this study, the model's ability to classify the predetermined classes (healthy, sick, dead) is evaluated using 960 training samples and 240 test samples, an 80:20 split. Figure 7 illustrates the results of the chicken health image classification (healthy, sick, dead).

Figure 7. Chicken health image classification results (healthy, sick, dead)
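A minimal sketch of this split-and-classify step, assuming scikit-learn and placeholder feature vectors in place of the CNN features:

```python
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(1200, 1024)           # placeholder CNN feature vectors
y = np.random.randint(0, 3, size=1200)   # 0 = healthy, 1 = sick, 2 = dead

# 80:20 split -> 960 training samples and 240 test samples.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["healthy", "sick", "dead"]))
```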

3.1.5 Model evaluation

Two distinct scenarios were implemented during the evaluation process. The first scenario employs a collection of image data categorized by class (healthy, sick, dead), while the second employs a randomly selected data set. The aim is to determine how well the proposed model can correctly classify and detect various conditions in the data. The first scenario demonstrates the model's capacity to identify and distinguish between the predetermined classes or categories (healthy, sick, dead). In contrast, the second scenario evaluates the model's capacity to adapt to a broader range of data without a grouping process. Table 3 presents the model evaluation results using the grouped data set.

Table 3. Model evaluation results (grouped data collection)

Fold    Accuracy (Healthy)   Accuracy (Sick)   Accuracy (Dead)
1       0.923                0.907             0.935
2       0.961                0.895             0.898
3       0.915                0.908             0.948
4       0.941                0.911             0.914
5       0.917                0.904             0.909
6       0.905                0.899             0.903
7       0.943                0.914             0.947
8       0.901                0.922             0.929
9       0.975                0.952             0.971
10      0.988                0.973             0.981
Avg.    0.936                0.918             0.933

The model evaluation results on the grouped data set show satisfactory performance. The highest accuracy was achieved at fold 10 (0.988) for images of healthy chickens, with an average accuracy of 0.936. This demonstrates that the model performs well when identifying chicken images in the healthy category. Additionally, the accuracy value can be influenced and enhanced by using a larger number of folds in the simulation process [39].

Following the evaluation with the grouped image data set, the second scenario conducts evaluation measurements with a varied (random) data set. Table 4 presents the results of the model evaluation with a random data set.

The model evaluation, conducted using 20 chicken image samples drawn from the various categories, reveals the model's propensity to misclassify samples from each category; the results are presented in Table 4. The sick category exhibited the most significant prediction error, with an accuracy of only 33%. This suggests that the model's performance can be enhanced by concentrating on classes that are challenging to predict. Figure 8 illustrates the prediction results of the random data set evaluation as a confusion matrix.

Table 4. Model evaluation with a random data set

Category   Number of Samples   Correct Predictions   Accuracy
Healthy    7                   6                     85%
Sick       6                   2                     33%
Dead       7                   4                     57%
Total      20                  12                    60%

Figure 8. Confusion matrix

4. Conclusions

This article has analysed a chicken health classification model using a CNN-SVM approach. In the feature extraction process, the DenseNet121 CNN architecture provided the best results, with accuracies ranging from 93-98% for chicken health images in the various categories prior to the classification stage.

The SVM's performance depends on the quality of the feature representation provided by DenseNet121. If DenseNet121 successfully extracts relevant and meaningful features from the chicken health images, the SVM can produce classification results with a high level of accuracy.

Developing a chicken health classification model using a CNN-SVM approach offers substantial practical advantages for the chicken farming sector. These advantages encompass enhanced operational efficiency, decreased expenses, improved chicken health and productivity, and the utilization of state-of-the-art technologies that provide a competitive edge. By adopting these technologies, the livestock industry can better manage chicken health, optimize production output, and ensure compliance with industry standards.

Acknowledgment

This research was supported by the Doctoral Program of Information Systems, School of Postgraduate Studies, Diponegoro University.

Nomenclature

CNN-SVM: Convolutional Neural Network and Support Vector Machine
RGB: Red, Green, Blue
LAN: Local Area Network
TP: True Positive
TN: True Negative
FP: False Positive
FN: False Negative
CHMS: Chicken Health Monitoring System

References

[1] Wang, Y., Jiang, Z., Jin, Z., Tan, H., Xu, B. (2013). Risk factors for infectious diseases in backyard poultry farms in the Poyang Lake area, China. PloS One, 8(6): e67366. https://doi.org/10.1371/journal.pone.0067366

[2] Grace, D., Knight-Jones, T.J., Melaku, A., Alders, R., Jemberu, W.T. (2024). The public health importance and management of infectious poultry diseases in smallholder systems in Africa. Foods, 13(3): 411. https://doi.org/10.3390/foods13030411

[3] Liu, Y., Johar, M.G.M., Hajamydeen, A.I. (2023). Poultry disease early detection methods using deep learning technology. Indonesian Journal of Electrical Engineering and Computer Science, 32(3): 1712-1723. https://doi.org/10.11591/IJEECS.V32.I3.PP1712-1723

[4] Kucukkara, Z., Ozkan, I.A., Tasdemir, S. (2022). Identification of chicken Eimeria species from microscopic images by using convolutional neural network method. Selcuk University Journal of Engineering Sciences, 21(2): 69-74.

[5] Patil, A., Rane, M. (2021). Convolutional neural networks: An overview and its applications in pattern recognition. Information and Communication Technology for Intelligent Systems: Proceedings of ICTIS 2020, pp. 21-30. https://doi.org/10.1007/978-981-15-7078-0_3

[6] Ahsan, M.S., Ariatmanto, D. (2023). Chicken disease classification based on inception V3 algorithm for data imbalance. Sinkron: Jurnal Dan Penelitian Teknik Informatika, 7(3): 1875-1882. https://doi.org/10.33395/sinkron.v8i3.12737

[7] Okinda, C., Nyalala, I., Korohou, T., Okinda, C., Wang, J., Achieng, T., Wamalwa, P., Mang, T., Shen, M. (2020). A review on computer vision systems in monitoring of poultry: A welfare perspective. Artificial Intelligence in Agriculture, 4: 184-208. https://doi.org/10.1016/j.aiia.2020.09.002

[8] Okinda, C., Lu, M., Liu, L., Nyalala, I., Muneri, C., Wang, J., Zhang, H., Shen, M. (2019). A machine vision system for early detection and prediction of sick birds: A broiler chicken model. Biosystems Engineering, 188: 229-242. https://doi.org/10.1016/j.biosystemseng.2019.09.015

[9] McDowell, R. (1997). Red foxes. Hudson Review, 49(4): 569-577. https://doi.org/10.2307/3851889

[10] Bhuiyan, M.R., Wree, P. (2023). Animal behavior for chicken identification and monitoring the health condition using computer vision: A systematic review. IEEE Access, 11: 126601-126610. https://doi.org/10.1109/ACCESS.2023.3331092

[11] Aydin, A. (2017). Using 3D vision camera system to automatically assess the level of inactivity in broiler chickens. Computers and Electronics in Agriculture, 135: 4-10. https://doi.org/10.1016/j.compag.2017.01.024

[12] Wang, J., Shen, M., Liu, L., Xu, Y., Okinda, C. (2019). Recognition and classification of broiler droppings based on deep convolutional neural network. Journal of Sensors, 2019(1): 3823515. https://doi.org/10.1155/2019/3823515

[13] Suthagar, S., Mageshkumar, G., Ayyadurai, M., Snegha, C., Sureka, M., Velmurugan, S. (2023). Faecal image-based chicken disease classification using deep learning techniques. In Inventive Computation and Information Technologies: Proceedings of ICICIT 2022, pp. 903-917. https://doi.org/10.1007/978-981-19-7402-1_64

[14] Dwicahyo, A., Mufandi, I., Nurfadila, A.R., Ardani, M. T., Dzilhilmi, U. (2024). Early detection of disease in chicks using CNN on bangkok chicken health. Buletin Ilmiah Sarjana Teknik Elektro, 6(2): 126-141. https://doi.org/10.12928/biste.v6i2.10245

[15] Degu, M.Z., Simegn, G.L. (2023). Smartphone based detection and classification of poultry diseases from chicken fecal images using deep learning techniques. Smart Agricultural Technology, 4: 100221. https://doi.org/10.1016/j.atech.2023.100221

[16] Mbelwa, H., Machuve, D., Mbelwa, J. (2021). Deep convolutional neural network for chicken diseases detection. International Journal of Advanced Computer Science and Applications (IJACSA), 12(2): 759-765. https://doi.org/10.14569/IJACSA.2021.0120295

[17] Zhuang, X., Bi, M., Guo, J., Wu, S., Zhang, T. (2018). Development of an early warning algorithm to detect sick broilers. Computers and Electronics in Agriculture, 144: 102-113. https://doi.org/10.1016/j.compag.2017.11.032

[18] Aulia, S.N., Haekal, M., Endarko, E. (2023). Classification of thorax diseases using deep learning. In AIP Conference Proceedings, 2604(1): 9-17. https://doi.org/10.1063/5.0114172

[19] Pereira, W.F., da Silva Fonseca, L., Putti, F.F., Góes, B.C., de Paula Naves, L. (2020). Environmental monitoring in a poultry farm using an instrument developed with the internet of things concept. Computers and electronics in agriculture, 170: 105257. https://doi.org/10.1016/j.compag.2020.105257

[20] Colles, F.M., Cain, R.J., Nickson, T., Smith, A.L., Roberts, S.J., Maiden, M.C., Lunn, D., Dawkins, M.S. (2016). Monitoring chicken flock behaviour provides early warning of infection by human pathogen Campylobacter. Proceedings of the Royal Society B: Biological Sciences, 283(1822): 20152323. https://doi.org/10.1098/rspb.2015.2323

[21] Sadeghi, M., Banakar, A., Minaei, S., Orooji, M., Shoushtari, A., Li, G. (2023). Early detection of avian diseases based on thermography and artificial intelligence. Animals, 13(14): 2348. https://doi.org/10.3390/ani13142348

[22] Jebari, H., Mechkouri, M.H., Rekiek, S., Reklaoui, K. (2023). Poultry-Edge-AI-IoT system for real-time monitoring and predicting by using artificial intelligence. International Journal of Interactive Mobile Technologies (iJIM), 17(12): 149-170. https://doi.org/10.3991/ijim.v17i12.38095

[23] Zheng, H., Zhang, T., Fang, C., Zeng, J., Yang, X. (2021). Design and implementation of poultry farming information management system based on cloud database. Animals, 11(3): 900. https://doi.org/10.3390/ani11030900

[24] Sasirekha, R., Kaviya, R., Saranya, G., Mohamed, A., Iroda, U. (2023). Smart poultry house monitoring system using IoT. In International Conference on Newer Engineering Concepts and Technology (ICONNECT-2023), 399: 04055. https://doi.org/10.1051/e3sconf/202339904055

[25] Budiarto, R., Gunawan, N.K., Nugroho, B.A. (2020). Smart chicken farming: monitoring system for temperature, ammonia levels, feed in chicken farms. In IOP Conference Series: Materials Science and Engineering, 852(1): 012175. https://doi.org/10.1088/1757-899X/852/1/012175

[26] Taye, M.M. (2023). Theoretical understanding of convolutional neural network: Concepts, architectures, applications, future directions. Computation, 11(3): 52. https://doi.org/10.3390/computation11030052

[27] Galety, M., Al Mukthar, F.H., Maaroof, R.J., Rofoo, F. (2021). Deep neural network concepts for classification using convolutional neural network: A systematic review and evaluation. Sustainable Future and Technology Development, 3(8): 58-70. https://doi.org/10.47577/technium.v3i8.4554

[28] Yennimar, Y., Kelvin, K., Suwandi, S., Amir, A. (2022). Comparison analysis of SVM algorithm with linear regression in predicting used car prices. Jurnal Mantik, 5(4): 2720-2028. https://iocscience.org/ejournal/index.php/mantik/article/view/2067

[29] Khairandish, M.O., Sharma, M., Jain, V., Chatterjee, J. M., Jhanjhi, N.Z. (2022). A hybrid CNN-SVM threshold segmentation approach for tumor detection and classification of MRI brain images. IRBM, 43(4): 290-299. https://doi.org/10.1016/j.irbm.2021.06.003

[30] Powers, D.M. (2020). Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061. https://doi.org/10.48550/arXiv.2010.16061

[31] Li, G., Gates, R.S., Ramirez, B.C. (2023). An on-site feces image classifier system for chicken health assessment: A proof of concept. Applied Engineering in Agriculture, 39(4): 417-426. https://doi.org/10.13031/aea.15607

[32] Yadav, S., Shukla, S. (2016). Analysis of k-fold cross-validation over hold-out validation on colossal datasets for quality classification. In 2016 IEEE 6th International Conference on Advanced Computing (IACC), Bhimavaram, India, pp. 78-83. https://doi.org/10.1109/IACC.2016.25

[33] Rana, S., Gautam, A.K. (2023). K-Fold cross-validation through identification of the opinion classification algorithm for the satisfaction of university students. International Journal of Online & Biomedical Engineering, 19(9): 122-130. http://arxiv.org/abs/2401.16407

[34] Mumuni, A., Mumuni, F. (2022). Data augmentation: A comprehensive survey of modern approaches. Array, 16: 100258. https://doi.org/10.1016/j.array.2022.100258

[35] Shazia, A., Xuan, T.Z., Chuah, J.H., Usman, J., Qian, P., Lai, K.W. (2021). A comparative study of multiple neural network for detection of COVID-19 on chest X-ray. EURASIP Journal on Advances in Signal Processing, 2021: 1-16. https://doi.org/10.1186/s13634-021-00755-1

[36] Kherraki, A., El Ouazzani, R. (2022). Deep convolutional neural networks architecture for an efficient emergency vehicle classification in real-time traffic monitoring. IAES International Journal of Artificial Intelligence, 11(1): 110-120. https://doi.org/10.11591/ijai.v11.i1.pp110-120 

[37] Babu, P.R., Atluri, S.K. (2023). Deep learning-assisted SVMs for efficacious diagnosis of tomato leaf diseases: a comparative study of GoogLeNet, AlexNet, and ResNet-50. Ingenierie des Systemes d'Information, 28(3): 639-645. https://doi.org/10.18280/isi.280312 

[38] Syafaah, L., Faruq, A., Setyawan, N., Khair, M.I. (2024). Sick and dead chicken detection system based on YOLO algorithm. Ingenierie des Systemes d'Information, 29(5): 1723-1729. https://doi.org/10.18280/isi.290506

[39] Wong, T.T., Yeh, P.Y. (2019). Reliable accuracy estimates from k-fold cross validation. IEEE Transactions on Knowledge and Data Engineering, 32(8): 1586-1594. https://doi.org/10.1109/TKDE.2019.2912815