Poultry Meat Classification Using MobileNetV2 Pretrained Model

Sekhra Salma*, Mohammed Habib, Adil Tannouche, Youssef Ounejjar

Faculty of Sciences, Spectrometry, Materials and Archeomaterials Laboratory (LASMAR), Moulay Ismail University, Meknes 50000, Morocco

Ecole Supérieure de Technologie de Béni Mellal, Laboratoire de l'Ingénierie et de Technologies Appliquées (LITA), Université Sultan Moulay Slimane, Beni Mellal 23000, Morocco

Corresponding Author Email: sekhrasalma3@gmail.com

Page: 275-280 | DOI: https://doi.org/10.18280/ria.370204

Received: 16 November 2022 | Revised: 6 February 2023 | Accepted: 11 February 2023 | Available online: 30 April 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

In Morocco, the meat industry is exposed to fraud and adulteration, leading customers to question the authenticity of the meat they buy. The traditional methods for verifying meat types are expensive and time-consuming. In this work, we propose a method based on computer vision and deep learning that classifies and discriminates between turkey, chicken, Fayoumi, and farm chicken meat. We built a model on top of the pre-trained MobileNetV2 model and trained it on a dataset containing collected images of the four poultry types. The evaluation of this model gave satisfactory results and demonstrated that the model is able to predict the meat class with an accuracy of over 98%. The algorithm can be generalized to distinguish between authentic and adulterated meat.

Keywords: 

computer vision, deep learning, poultry meat classification, meat, chicken, authentication

1. Introduction

Moroccans are among the peoples who consume the most meat: each inhabitant consumes on average 30 kg per year, and in Morocco, as in many other countries, meat is a socially highly valued food [1]. The type of meat most consumed by Moroccans is poultry, because of its lower price [2].

The economic importance of this commodity and its turnover have led to fraud and to the adulteration of consumable meat with meat derived from other species. Consumers' evolving expectations of this product lead them to look for authenticity and sanitary quality. In response, several techniques based on microbiological and chemical analyses have been developed, among them the detection of biological material by PCR-RFLP [3] and DNA hybridization for the identification of chicken meat [4]. These traditional methods are reliable and accurate, but they are expensive and time-consuming [5, 6].

In parallel, computer vision tries to solve this problem with its own techniques. Several research studies have proposed solutions, among them the evaluation of the color of poultry meat using a computer vision system [7, 8] and digital image analysis as an alternative tool for chicken quality evaluation [9]. These methods are limited to the type of meat being processed and cannot be generalized.

Building on work previously carried out in our research team [10, 11], we propose a method based on computer vision and deep learning [12] that classifies four types of poultry with very high accuracy by retraining the MobileNetV2 model [13]. MobileNetV2 is a highly efficient deep learning model designed for mobile and embedded vision applications. It uses depthwise separable convolutions and inverted residuals to achieve high accuracy while maintaining low computational complexity and a small model size, and it has been shown to achieve state-of-the-art performance on several benchmark datasets.

2. Materials and Methods

Our method classifies four different classes of poultry (chicken, turkey, Fayoumi, and farm chicken) using the pre-trained MobileNetV2 model. We reuse its trained feature maps instead of starting from scratch and training a large model on a large dataset. The resulting model is able to visually classify chicken, turkey, Fayoumi, and farm chicken with very high accuracy.

2.1 Images acquisition

In Morocco, the most popular poultry types are chicken, turkey, and farm chicken, which is why we chose them; we added Fayoumi in order to work with four types of poultry.

This image collection was created by buying multiple pieces of the four different meats (turkey, chicken, Fayoumi, and farm chicken) in a market located in Meknes, Morocco, and photographing them with a 16 MP phone camera (Huawei Y9 Prime) inside the LED photography box shown in Figure 1; a cropping program coded in .NET was then used to cut the images and augment the data.

Pictures were taken of different parts of the bird (thigh, drumstick, wing, breast, and neck), because the human eye can already recognize full-size photos, whereas the aim of this work is to scan the small details and attributes of the meat pieces: we ultimately need to recognize poultry types in minced meat.

These images were taken over 20 days. The original dimensions of the pictures are 4608×3456 pixels; they were resized to 408×306 pixels to reach a size acceptable to the model, to respect the storage limits of Google Drive, and to concentrate on the color and texture of the meats [14] by zooming in on the features and attributes of the images. Samples of the captured images are shown in Figure 2.

Figure 1. Image acquisition device (camera about 25 cm from the bottom)

Figure 2. Dataset samples

2.2 Dataset augmentation

The training data contains 746 images divided into four classes. To enrich the dataset, we augmented the data using a program coded in .NET: from a single image we generated 8 different images by rotation and horizontal mirroring, and with these methods we obtained 7,614 images, as shown in Figure 3.
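The original augmentation was performed by a custom .NET program; as a rough illustration only, the following Python sketch (using the Pillow library) produces eight variants per image, four rotations each with and without horizontal mirroring. The folder layout and the exact transform set are assumptions.

```python
# Hedged augmentation sketch; the paper used a custom .NET tool, so the
# folder names and the exact eight transforms below are assumptions.
from pathlib import Path
from PIL import Image

SRC = Path("dataset/raw")        # hypothetical input folder, one subfolder per class
DST = Path("dataset/augmented")  # hypothetical output folder

for img_path in SRC.glob("*/*.jpg"):
    image = Image.open(img_path).resize((408, 306))  # target size reported in the paper
    out_dir = DST / img_path.parent.name
    out_dir.mkdir(parents=True, exist_ok=True)
    variant = 0
    for angle in (0, 90, 180, 270):        # four rotations...
        for mirror in (False, True):       # ...each with and without mirroring = 8 variants
            im = image.rotate(angle, expand=True)
            if mirror:
                im = im.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
            im.save(out_dir / f"{img_path.stem}_{variant}.jpg")
            variant += 1
```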

2.3 The MobileNetV2 model

MobileNetV2 is a convolutional neural network that is 53 layers deep. The network has learned rich feature representations for a wide range of images and has an image input size of 224×224 pixels.

MobileNetV2 is very similar to the original MobileNet, except that it uses inverted residual blocks with linear bottlenecks. It has a drastically lower parameter count than the original MobileNet. MobileNets support any input size greater than 32×32, with larger image sizes offering better performance.

We use the pre-trained MobileNetV2 model, a popular network for image-based classification, trained on the 1,000 classes of the ImageNet dataset [15, 16].

MobileNetV2 is a significant improvement over MobileNetV1 and pushes the state of the art for mobile visual recognition, including classification, object detection, and semantic segmentation. MobileNetV2 is released as part of the TensorFlow-Slim Image Classification Library, and it can also be explored directly in Colaboratory. Figure 4 shows the structures of (a) MobileNetV1 and (b) MobileNetV2.

Overall, the MobileNetV2 models are faster for the same accuracy across the entire latency spectrum. In particular, the new models use 2x fewer operations, need 30% fewer parameters and are about 30-40% faster on a Pixel phone than MobileNetV1 models, all while achieving higher accuracy.

MobileNetV2 is a very effective feature extractor for object detection and segmentation. Figure 5 gives an idea about the structure of the MobileNet V2 model.

For model creation and training, we used a Google Colab [17] virtual machine, which provides all the necessary libraries as well as an integrated GPU [18] that speeds up the learning process. We imported the dataset into Google Drive to allow fast synchronization with the training code.
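For completeness, a minimal Colab setup sketch is shown below; the Drive folder name is an assumption.

```python
# Minimal Colab setup sketch: mount Google Drive so the training code can
# read the dataset; the folder name below is a hypothetical example.
from google.colab import drive

drive.mount("/content/drive")
DATA_DIR = "/content/drive/MyDrive/poultry_dataset"  # assumed dataset location
```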

Figure 3. Example of augmentation of a single image

Figure 4. Structures of (a) MobileNetV1 and (b) MobileNetV2

Figure 5. The basic structure of the MobileNetV2 model [19]

3. Results and Discussion

The code that we have created follows the general machine learning workflow:

  • Read and process the dataset; in this context we divided the images into 80% for training and 20% for validation (a loading sketch is given after this list)
  • Create the model
  • Load in the pretrained base model (and pretrained weights)
  • Add the classification layers on top
  • Train the model
  • Evaluate the model
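As a rough illustration of the first step, the following sketch loads the images from Drive and performs the 80/20 split with a standard Keras utility; the directory layout, seed, and label encoding are assumptions.

```python
# Dataset loading / 80-20 split sketch using tf.keras.utils.image_dataset_from_directory.
import tensorflow as tf

IMG_SIZE = (224, 224)  # MobileNetV2's expected input size
BATCH_SIZE = 64        # batch size used in the paper

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,              # one subfolder per class (chicken, turkey, Fayoumi, farm chicken)
    validation_split=0.2,  # 80% training / 20% validation, as in the paper
    subset="training",
    seed=42,               # assumed seed, only to keep the two subsets disjoint
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE)

val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE)
```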

3.1 Loading and adapting Mobile Net V2

MobileNetV2 is an open-source model that can be downloaded directly in Colab. We preloaded it with ImageNet-trained weights and excluded the classification layers at the top, which allows for better feature extraction. It is a deep learning model designed for mobile and embedded vision applications [20-22]. It uses depthwise separable convolutions and linear bottleneck layers to achieve high accuracy while maintaining low computational complexity and a small model size. The model also uses inverted residuals to improve performance and is available in several pre-trained versions with different input sizes. MobileNetV2 has been shown to achieve state-of-the-art performance on several benchmark datasets while being highly efficient in terms of computational resources and memory usage.

We use MobileNetV2 on our dataset, which has four different classes of meat, and we use the Keras functional API for the remaining code.
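A minimal model-creation sketch with the Keras functional API follows. Table 1 reports a TensorFlow Hub KerasLayer with a 1280-dimensional output; the equivalent tf.keras.applications backbone is used here, under the assumption that both expose the same MobileNetV2 feature extractor (2,257,984 parameters), so that the fine-tuning step can later freeze individual layers.

```python
# Hedged model-creation sketch: frozen MobileNetV2 feature extractor plus a
# four-class softmax head, built with the Keras functional API.
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base_model.trainable = False  # first training pass: backbone stays frozen

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)  # scale pixels to [-1, 1]
x = base_model(x, training=False)                 # (None, 7, 7, 1280) feature maps
x = tf.keras.layers.GlobalAveragePooling2D()(x)   # (None, 1280), matching Table 1
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # four poultry classes
model = tf.keras.Model(inputs, outputs)

model.summary()  # 2,257,984 backbone params + 5,124 trainable head params
```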

3.2 First training

We set the batch size to 64; this relatively large batch size speeds up training and makes more efficient use of the hardware resources when training a MobileNetV2 model. The training started with a loss of 0.67 and an accuracy of 87%. After 10 epochs, the validation loss settled at around 0.38 and the validation accuracy at 92% (Figure 6).
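A sketch of this first training pass follows, under the assumption of an Adam optimizer (the paper does not name the optimizer or learning rate):

```python
# First training pass: only the Dense head is trainable; optimizer and
# learning rate are assumptions, epochs and batch size follow the paper.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",  # integer labels from image_dataset_from_directory
    metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=10)
```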

Figure 6. The first training results

The first training results are not satisfactory; this is because we only trained the top layers. The weights of the pre-trained model were not updated during the training.

3.3 Fine tuning

To gain performance, we augmented the data and trained (fine-tuned) the weights of the top layers of the pre-trained model in parallel with the training of the classifier. This forces the weights to adapt from generic feature maps to features specifically associated with our dataset.

Before recompiling the model, we unfroze the base model and declared its lower layers as untrainable, then resumed learning. The fine-tuned model is summarized in Table 1.
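A minimal sketch of the fine-tuning step described above; the layer cut-off and the lower learning rate are assumptions.

```python
# Fine-tuning sketch: unfreeze the backbone, keep its lower layers frozen,
# recompile with a smaller learning rate, and resume training.
base_model.trainable = True

FINE_TUNE_AT = 100  # hypothetical boundary; the Keras MobileNetV2 backbone has 154 layers
for layer in base_model.layers[:FINE_TUNE_AT]:
    layer.trainable = False  # lower layers stay untrainable, as described above

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # assumed lower learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

history_fine = model.fit(train_ds, validation_data=val_ds, epochs=10)  # 10 more epochs
```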

Figure 7. Results after fine-tuning the MobileNetV2 model

Table 1. Model summary

Layer (type)                 Output shape   Param #
keras_layer_2 (KerasLayer)   (None, 1280)   2,257,984
dense_2 (Dense)              (None, 4)      5,124

Total params: 2,263,108
Trainable params: 5,124
Non-trainable params: 2,257,984

After these modifications, we resumed the training for 10 more epochs.

Looking at the training results, it can be seen that the fine-tuned MobileNetV2 model gives good results (Figure 7): the loss dropped roughly tenfold, and the accuracy reached 98%.

3.4 Evaluation and prediction

We evaluated the performance of the created model on other images and found that it predicts the classification and identification of chicken, turkey, farm chicken, and Fayoumi meat well.

The results show that the model is able to recognize the type of meat in each image, so the trained model identifies the meat with high-confidence predictions. Samples of the evaluated images are shown in Figures 8-11.
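A single-image prediction sketch under the same assumptions as the previous snippets; the file name and the class folder names are hypothetical (Keras orders class folders alphabetically).

```python
# Hedged single-image prediction sketch; "sample_meat.jpg" and the folder
# names in CLASS_NAMES are hypothetical placeholders.
import numpy as np

CLASS_NAMES = sorted(["chicken", "fayoumi", "farm_chicken", "turkey"])  # assumed folder names

img = tf.keras.utils.load_img("sample_meat.jpg", target_size=(224, 224))
batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)  # shape (1, 224, 224, 3)
probs = model.predict(batch)[0]  # softmax scores over the four classes

print(CLASS_NAMES[int(np.argmax(probs))], f"{probs.max():.2%}")
```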

Figure 8. Evaluation of the model on a chicken (poulet) image

Figure 9. Evaluation of the model on a Fayoumi image

Figure 10. Evaluation of the model on a farm chicken (poulet fermier) image

Figure 11. Evaluation of the model on a turkey (dinde) image

4. Conclusions

In this research, we developed a deep-learning-based algorithm for the classification and differentiation of chicken, turkey, farm chicken, and Fayoumi meat. The model was created on top of the pre-trained MobileNetV2 model and was trained on a dataset that we collected and processed. The resulting model performed the classification with an accuracy of up to 98%. Since we succeeded in differentiating four types of meat, our idea can be generalized to verify the authenticity of any kind of consumable meat; this can be achieved by enriching the dataset with images of authentic meat and counterpart images of adulterated meat. The satisfactory results we obtained show the feasibility of a deep-learning-based algorithm to fight fraud in consumable meat. A future goal of this work is to create a mobile application that helps customers identify meat types easily and quickly, without needing the traditional methods.

References

[1] Sarter, G. (2006). Manger et élever des moutons au Maroc: Sociologie des préférences et des pratiques de consommation et de production de viande (Doctoral dissertation, Université Panthéon-Sorbonne-Paris I). https://theses.hal.science/tel-00273344, accessed on Jan. 8, 2023.

[2] Cohen, N., Ennaji, H., Bouchrif, B., Hassar, M., Karib, H. (2007). Comparative study of microbiological quality of raw poultry meat at various seasons and for different slaughtering processes in Casablanca (Morocco). Journal of Applied Poultry Research, 16(4): 502-508. https://doi.org/10.3382/japr.2006-00061

[3] Girish, P.S., Anjaneyulu, A.S.R., Viswas, K.N., Santhosh, F.H., Bhilegaonkar, K.N., Agarwal, R.K., Nagappa, K. (2007). Polymerase chain reaction–restriction fragment length polymorphism of mitochondrial 12S rRNA gene: A simple method for identification of poultry meat species. Veterinary Research Communications, 31: 447-455. https://doi.org/10.1007/s11259-006-3390-5

[4] Chikuni, K., Ozutsumi, K., Koishikawa, T., Kato, S. (1990). Species identification of cooked meats by DNA hybridization assay. Meat Science, 27(2): 119-128. https://doi.org/10.1016/0309-1740(90)90060-J

[5] Alom, M.Z., Taha, T.M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M.S., Van Essen, B.C., Awwal, A.A.S., Asari, V.K. (2018). The history began from AlexNet: A comprehensive survey on deep learning approaches. arXiv preprint arXiv:1803.01164. https://doi.org/10.48550/arXiv.1803.01164

[6] Tisza, Á., Csikós, Á., Simon, Á., Gulyás, G., Jávor, A., Czeglédi, L. (2016). Identification of poultry species using polymerase chain reaction-single strand conformation polymorphism (PCR-SSCP) and capillary electrophoresis-single strand conformation polymorphism (CE-SSCP) methods. Food Control, 59: 430-438. https://doi.org/10.1016/j.foodcont.2015.06.006

[7] Tomasevic, I., Tomovic, V., Ikonic, P., Lorenzo Rodriguez, J.M., Barba, F.J., Djekic, I., Nastasijevic, I., Stajic, S., Zivkovic, D. (2019). Evaluation of poultry meat colour using computer vision system and colourimeter: Is there a difference? British Food Journal, 121(5): 1078-1087. https://doi.org/10.1108/BFJ-06-2018-0376

[8] Wideman, N., O'bryan, C.A., Crandall, P.G. (2016). Factors affecting poultry meat colour and consumer preferences-A review. World's Poultry Science Journal, 72(2): 353-366. https://doi.org/10.1017/S0043933916000015

[9] Barbin, D.F., Mastelini, S.M., Barbon Jr, S., Campos, G.F., Barbon, A.P.A., Shimokomaki, M. (2016). Digital image analyses as an alternative tool for chicken quality assessment. Biosystems Engineering, 144: 85-93. https://doi.org/10.1016/j.biosystemseng.2016.01.015

[10] Tannouche, A., Sbai, K., Miloud, R., Agounoun, R., Abdelhai, R., Rahmani, A. (2016). Real time weed detection using a boosted cascade of simple features. International Journal of Electrical and Computer Engineering (IJECE), 6(6): 2755-2765. http://dx.doi.org/10.11591/ijece.v6i6.pp2755-2765

[11] Mohammed, H., Tannouche, A., Ounejjar, Y. (2022). Weed detection in pea cultivation with the faster RCNN ResNet 50 convolutional neural network. Revue d'Intelligence Artificielle, 36(1): 13-18. https://doi.org/10.18280/ria.360102

[12] Liu, C., Cao, Y., Luo, Y., Chen, G., Vokkarane, V., Ma, Y. (2016). Deepfood: Deep learning-based food image recognition for computer-aided dietary assessment. In: Inclusive Smart Cities and Digital Health: 14th International Conference on Smart Homes and Health Telematics, ICOST, Wuhan, China, pp. 37-48. https://doi.org/10.1007/978-3-319-39601-9_4

[13] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. https://doi.org/10.48550/arXiv.1704.04861

[14] Tomasevic, I., Tomovic, V., Milovanovic, B., Lorenzo, J., Đorđević, V., Karabasil, N., Djekic, I. (2019). Comparison of a computer vision system vs. traditional colorimeter for color evaluation of meat products with various physical properties. Meat Science, 148: 5-12. https://doi.org/10.1016/j.meatsci.2018.09.015

[15] Krizhevsky, A., Sutskever, I., Hinton, G.E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6): 84-90. https://doi.org/10.1145/3065386

[16] He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778. https://doi.org/10.1109/CVPR.2016.90

[17] Bisong, E. (2019). Building Machine Learning and Deep Learning Models on Google Cloud Platform. Apress, Berkeley, CA, pp. 59-64. http://dx.doi.org/10.1007/978-1-4842-4470-8

[18] Carneiro, T., Da NóBrega, R.V.M., Nepomuceno, T., Bian, G.B., De Albuquerque, V.H.C., Rebouças Filho, P.P. (2018). Performance analysis of google colaboratory as a tool for accelerating deep learning applications. IEEE Access, 6: 61677-61685. https://doi.org/10.1109/ACCESS.2018.2874767

[19] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L. C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510-4520. https://arxiv.org/abs/1801.04381

[20] Zhang, L., Wang, J., Li, B., Liu, Y., Zhang, H., Duan, Q. (2022). A MobileNetV2-SENet-based method for identifying fish school feeding behavior. Aquacultural Engineering, 99: 102288. https://doi.org/10.1016/j.aquaeng.2022.102288

[21] Phiphiphatphaisit, S., Surinta, O. (2020, Mar). Food image classification with improved MobileNet architecture and data augmentation. In: Proceedings of the 3rd International Conference on Information Science and Systems, Cambridge, UK, pp. 51-56. https://doi.org/10.1145/3388176.3388179

[22] Zaki, S.Z.M., Zulkifley, M.A., Stofa, M.M., Kamari, N.A.M., Mohamed, N.A. (2020). Classification of tomato leaf diseases using MobileNet v2. IAES International Journal of Artificial Intelligence, 9(2): 290. https://doi.org/10.11591/ijai.v9.i2.pp290-296