Enhanced Local Line Binary Patterns for Palmprint Images

Arwa Hamid Salih Hamdany Lubab H. Albak Rabei Raad Ali*

Technical College of Engineering for Computer and AI, Northern Technical University, Mosul 41001, Iraq

Corresponding Author Email: rabei@ntu.edu.iq

Pages: 2105-2111 | DOI: https://doi.org/10.18280/isi.300816

Received: 4 June 2025 | Revised: 5 August 2025 | Accepted: 25 August 2025 | Available online: 31 August 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Palmprint recognition remains a central focus in biometric research due to the uniqueness and richness of palmprint features. A significant challenge is developing effective feature extraction methods that consistently identify distinctive patterns for accurate recognition. This paper introduces a new technique called Modified Local Line Binary Patterns (MLLBP), which enhances traditional Local Binary Patterns (LBP) by analyzing multiple lines of pixels along important directions. The MLLBP method was tested on the contact-free “3D/2D” hand image dataset from the Hong Kong Polytechnic University (PolyU), achieving an impressive equal error rate (EER) of 0.4%. This performance surpasses several existing methods, including the traditional LBP, LLBP, and ELLBP. The results show that the MLLBP method is highly effective for palmprint recognition, offering superior accuracy and robustness.

Keywords: 

feature extraction, binary pattern, palmprint, palmprint recognition, neural network, equal error rate, biometrics

1. Introduction

Today, fingerprint-based biometric systems are widely used in various applications for personal verification. Many other biometrics can also be observed, such as the iris [1, 2], face [3, 4], voice [5-7], ear [8], and hand dorsal [9]. Most currently available fingerprint-based biometric systems share similar content-based feature extraction methods [10, 11]. However, fingerprint images are not always clear and can show irregular shading, depending on the condition of the finger's surface, so identification errors may occur in the feature extraction stage due to low image quality [6]. In contrast, the main features of the palmprint can be captured using inexpensive, low-resolution devices [12].

In 1996, LBP was first introduced as a texture analysis technique in reference [13]. Since then, it has been developed with various analysis methods. In 2008, two approaches—the Three-Patch LBP (TPLBP) and Four-Patch Local Binary Pattern (FPLBP)—were introduced to analyze pixel patches [14].

In 2009, a Local Line Binary Pattern (LLBP) was also described. LLBP focuses on analyzing only vertical and horizontal line features [15]. In the study [16], the author proposed Image Feature Enhancement (IFE) as a novel feature extraction method. It enhances the appearance of images.

Palmprints have significant characteristics, mainly wrinkles and visible principal lines. They can provide remarkable recognition performance and are more distinctive than fingerprints. Figure 1 shows some biometric traits classified into two groups: physiological characteristics and behavioral characteristics.

Figure 1. Common biometric examples

In this paper, palmprints are examined in detail, especially regarding feature extraction. LBP, proposed in 1996, is one of the most effective feature extraction methods: a simple yet highly efficient texture operator [6]. It labels the pixels of an image by analyzing the 3×3 neighborhood around each pixel.

This paper proposes a new technique named Modified Local Line Binary Patterns (MLLBP), which extends the traditional Local Binary Patterns (LBP). The MLLBP analyzes multiple lines of pixels in crucial directions.

The MLLBP method was evaluated using the contact-free “3D/2D” hand image dataset collected by the Hong Kong Polytechnic University.

2. Related Work

In reference [17], an enhanced local LBP (ELLBP) was introduced. This method improves the performance of LLBP; however, it focuses solely on vertical and horizontal line features. In reference [18], an innovative feature extraction method called the Surrounded Patterns Code (SPC) was employed. This approach effectively considers the surrounding patterns of the main features.

In reference [19], the authors proposed an Entropy Controlled Tiger Optimization (ENcTO) approach using Convolutional Neural Networks (CNNs) for image identification. They applied Entropy Controlled Tiger Optimization with a multi-scale deep neural network to recognize palmprints and hand gestures. In the processing flow, feature extraction and finger recognition are performed with a multi-scale deep CNN identifier. The results demonstrate that the ENcTO approach outperforms state-of-the-art methods for gesture identification, achieving a recognition rate of 96.72%.

In reference [20], the authors applied machine learning with content-based feature extraction methods to optimize the identification of file cluster types. Three content-based feature extraction methods (Rate of Change, Entropy, and Byte Frequency Distribution) were used to produce an image cluster histogram. An Extreme Learning Machine (ELM) classifier evaluated the entropy, RoC, and BFD features to decide whether a cluster belongs to an image or a non-image file type. The results demonstrate that combining the three feature methods yields high identification accuracy (93.46%) in classifying the file type.

In reference [21], the author proposed a support vector machine (SVM) identification method to identify image and non-image file clusters. The SVM is applied to three content-based features: entropy, byte frequency distribution, and rate of change. Radial basis and polynomial kernel functions are used to optimize the identification of image cluster content. The results show an accuracy of 96.21% for the SVM with the polynomial kernel and 57.58% with the radial basis kernel.

In reference [22], the authors proposed a new approach using the LBP method and Gabor content-based features of the finger vein, together with a K-nearest-neighbor classifier. Three datasets were used for the experiments. The results show that the proposed method outperforms state-of-the-art approaches, achieving the lowest error rates of 0.039% on SDUMLA, 0.064% on FV-USM, and 0.037% on MMCBNU_6000, without using any enhancements.

In reference [23], the author presented a new approach using the LBP operator for face recognition, distributing the face image into non-overlapping regions. The results show a recognition rate of 97.5% on the Olivetti Research Laboratory database.

In reference [24], a new approach based on the LBP method and the support vector machine is presented. The results show that the LBP achieves a plant classification accuracy of 91.85% over four subclasses (background, canola, corn, and radish) on a 24,000-image dataset.

In reference [25], the author proposed an automatic approach to detect the suspicious regions of interest for breast dynamic contrast-enhanced (DCE) MRI based on region growing. The results show that the new approach is able to identify suspicious regions on the PIDER breast MRI dataset.

In reference [26], the authors proposed the VisGIN approach, which takes Visibility Graphs (VG) as input and uses GINConv for the convolutional layers. The results show that the approach achieves 99.76% accuracy in the grant-access decision evaluation, demonstrating the effectiveness of graph machine learning methods for time-series binary identification tasks.

In reference [27], the authors proposed a novel method named the VGG16-PCA-NN approach. The main objective of the proposed approach is to improve the precision of facial authentication. To extract features, the author used the VGG16 model as a pre-trained model on the ImageNet dataset. The method improves reliability under varying conditions by handling environmental and physical challenges. Its high accuracy across multiple modalities supports secure, real-time biometric identification.

3. Local Binary Patterns (LBP) Overview

In image classification, the Local Binary Pattern (LBP) method is widely used to extract meaningful patterns from image regions. The LBP was developed to extract texture content features effectively at low computational cost [1]. Its main idea is to compare each pixel with its neighboring pixels and encode the local texture into a binary pattern. The binary values resulting from these comparisons are concatenated in a circular order into an 8-bit binary code for each pixel. For classification tasks, a histogram serves as the feature vector, representing the frequency of each LBP pattern across all pixels in the image, as shown in Figure 2.

As shown in Figure 2, the LBP operator labels the image pixels with decimal codes that encode the local structure around each pixel. Figure 3 shows an example of computing LBP codes, in which the center pixel is compared with its 8 neighbors in a 3×3 neighborhood [24].

Figure 2. The basic LBP operator [23]

Figure 3. Computing LBP codes [24]

As shown in Figure 3, every pixel in a 3×3 neighborhood is compared with its eight neighbors. Negative comparison results are encoded as “0”, while the others are encoded as “1”. The binary values are then concatenated in clockwise order, starting from the top-left neighbor, to form an 8-bit binary number.
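As an illustration of the encoding described above, the following is a minimal Python sketch of the basic 3×3 LBP operator; the function name and the sample patch are ours, not from the paper:

```python
import numpy as np

def lbp_code(patch):
    """Compute the basic LBP code of a 3x3 patch.

    Neighbors are read clockwise starting from the top-left pixel;
    a neighbor >= the center contributes '1', otherwise '0'.
    """
    center = patch[1, 1]
    # Clockwise order starting at the top-left neighbor.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2],
                 patch[1, 2], patch[2, 2], patch[2, 1],
                 patch[2, 0], patch[1, 0]]
    bits = ''.join('1' if n >= center else '0' for n in neighbors)
    return int(bits, 2)  # decimal label in [0, 255]

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # prints 143
```

Applying this over every pixel and histogramming the resulting codes yields the LBP feature vector described above.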

4. Proposed Method

This paper introduces a new feature extraction method called MLLBP. This approach analyzes the texture features of the palmprint by handling multiple directions of palmprint lines. Specifically, it first examines the vertical and horizontal directions using the equations outlined in reference [17].

$\begin{aligned} L L B P_H= & \sum_{n=1}^{c-1} s\left(H_n-H_c\right) 2^{(c-n-1)} +\sum_{n=c+1}^N s\left(H_n-H_c\right) 2^{(n-c-1)}\end{aligned}$            (1)

$\begin{aligned} L L B P_V= & \sum_{n=1}^{c-1} s\left(V_n-V_c\right) 2^{(c-n-1)} +\sum_{n=c+1}^N s\left(V_n-V_c\right) 2^{(n-c-1)}\end{aligned}$            (2)

where, $LLBP_H$ is the value for the horizontal direction, $H_n$ denotes the pixels of a horizontal vector, $H_c$ is its center pixel, $c$ indicates the position of the center pixel, and $N$ is the length of the vector in pixels. Similarly, $LLBP_V$ is the value for the vertical direction, $V_n$ denotes the pixels of a vertical vector, and $V_c$ is its center pixel. The thresholding function $s(x)$ can be defined as follows:

$s(x)= \begin{cases}1, & x \geq 0 \\ 0, & x<0\end{cases}$            (3)
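Eqs. (1)-(3) can be sketched in Python as follows; `llbp_line` is an illustrative helper (not from the paper) that works for either a horizontal or a vertical pixel line, with a 1-based center index $c$ as in the equations:

```python
def s(x):
    # Thresholding function of Eq. (3).
    return 1 if x >= 0 else 0

def llbp_line(vec, c):
    """LLBP value of a 1-D pixel line per Eqs. (1)-(2).

    vec: pixel line of length N; c: 1-based index of the center pixel.
    """
    N = len(vec)
    center = vec[c - 1]
    # Left side of the center: weights 2^(c-n-1) for n = 1 .. c-1.
    left = sum(s(vec[n - 1] - center) * 2 ** (c - n - 1)
               for n in range(1, c))
    # Right side of the center: weights 2^(n-c-1) for n = c+1 .. N.
    right = sum(s(vec[n - 1] - center) * 2 ** (n - c - 1)
                for n in range(c + 1, N + 1))
    return left + right

print(llbp_line([3, 9, 4, 7, 2, 8, 5], 4))  # prints 4
```

The same function applied to a column of pixels yields $LLBP_V$, and applied to a diagonal it yields the diagonal values of Eqs. (4)-(5) below.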

Secondly, the proposed MLLBP method considers the primary and secondary diagonals according to the following Eqs. (4) and (5):

$\begin{aligned} L L B P_{D 1}= & \sum_{n=1}^{c-1} s\left(D 1_n-D 1_c\right) 2^{(c-n-1)} +\sum_{n=c+1}^N s\left(D 1_n-D 1_c\right) 2^{(n-c-1)}\end{aligned}$            (4)

$\begin{aligned} L L B P_{D 2}= & \sum_{n=1}^{c-1} s\left(D 2_n-D 2_c\right) 2^{(c-n-1)} \quad+\sum_{n=c+1}^N s\left(D 2_n-D 2_c\right) 2^{(n-c-1)}\end{aligned}$              (5)

where, $L L B P_{D 1}$ is a value of a primary diagonal direction, D1n is pixels of a primary diagonal vector, D1c represents the center pixel of a primary diagonal vector, $L L B P_{D 2}$ is the value of a secondary diagonal direction, D2n is pixels of a secondary diagonal vector, and D2c represents the center pixel of a secondary diagonal vector.

Consequently, combinations are applied between the horizontal and vertical values, and between the primary and secondary diagonal values. The combination is implemented according to the following calculations:

$E L L B P_1=v_1 \times L L B P_H+v_2 \times L L B P_V$         (6)

$E L L B P_2=v_1 \times L L B P_{D 1}+v_2 \times L L B P_{D 2}$            (7)

where, $E L L B P_1$ represents the combination of the horizontal and vertical values, $E L L B P_2$ represents the combination of the primary diagonal and secondary diagonal values, and $v_1$ and $v_2$ are weighted summation parameters [19].

Hence, to attain an effective feature extraction value from $E L L B P_1$ and $E L L B P_2$, a competitive method is applied between them. Empirically, the following equation is used:

$I L L B P_T=\min \left(E L L B P_1, E L L B P_2\right)$         (8)

where, $I L L B P_T$ represents the effective feature extraction value, and min represents a minimum operation selection.
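The combination and competitive steps of Eqs. (6)-(8) can be sketched as follows; the equal weights v1 = v2 = 0.5 are an assumption, since the paper does not state the values of the weighted summation parameters:

```python
def ellbp(val_a, val_b, v1=0.5, v2=0.5):
    # Weighted summation of two directional LLBP values, Eqs. (6)-(7).
    # v1 = v2 = 0.5 is an assumed setting; the weights are tunable.
    return v1 * val_a + v2 * val_b

def mllbp_pixel(llbp_h, llbp_v, llbp_d1, llbp_d2):
    # Competitive (minimum) selection between the two combined
    # values, per Eq. (8).
    return min(ellbp(llbp_h, llbp_v), ellbp(llbp_d1, llbp_d2))

print(mllbp_pixel(10, 20, 30, 40))  # prints 15.0
```

Per Table 1, the minimum operation is the competitive choice that gave the best EER; summation, average, and maximum are the alternative fusions that were also examined.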

An example of the main MLLBP processes is given in Figure 4. All of the above calculations are applied at each pixel of the palmprint image. Afterwards, the featured palmprint images are collected. To gather the data variance information of each featured palmprint image, the following equations are considered [20] after partitioning each featured image into multiple blocks:

$M_{b l}=\frac{1}{j} \sum_{i=1}^j b l_i$             (9)

$S T D_{b l}=\sqrt{\frac{1}{j-1} \sum_{i=1}^j\left(b l_i-M_{b l}\right)^2}$           (10)

$C O V_{b l}=\frac{S T D_{b l}}{M_{b l}}$              (11)

where, $j$ is the number of pixels in each block, $b l$ represents a block of pixels ($5 \times 5$ pixels as in references [21, 22]), $M_{b l}$ is the mean value of each block, $S T D_{b l}$ is the standard deviation of each block, and $C O V_{b l}$ is the resulting Coefficient of Variance (COV) value. COV values have many benefits.
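Eqs. (9)-(11) over non-overlapping 5×5 blocks can be sketched as follows; the function names are illustrative, not from the paper:

```python
import numpy as np

def block_cov(block):
    """Coefficient of Variance of one block, per Eqs. (9)-(11)."""
    pixels = block.ravel().astype(float)
    j = pixels.size
    mean = pixels.sum() / j                                # Eq. (9)
    std = np.sqrt(((pixels - mean) ** 2).sum() / (j - 1))  # Eq. (10)
    return std / mean                                      # Eq. (11)

def cov_vector(image, size=5):
    """COV value for each non-overlapping size x size block."""
    h, w = image.shape
    return [block_cov(image[r:r + size, c:c + size])
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

img = np.arange(100, dtype=float).reshape(10, 10)
print(len(cov_vector(img)))  # prints 4, one COV per 5x5 block
```

Note the $j-1$ denominator in Eq. (10) (the sample standard deviation), matching the equation above.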

Figure 4. An example of the main MLLBP processes

To illustrate, it efficiently captures variances among pixel values; its computations are simple; it can decrease the size of the image [17]; all computed values are positive; and the variances among image features are efficiently defined for different subjects [26-34]. Thus, a vector of COV values can be collected for each palmprint image. These values are used as inputs to a Probabilistic Neural Network (PNN). The main architecture of the PNN is shown in Figure 4.

It consists of input, hidden, summation, and decision layers. Also, this network, like other neural networks, works in two phases: training and testing.

Important benefits of using a PNN can be observed [27]: it is very fast during training; it needs no more than one epoch; it does not suffer from the local-minima problem, unlike the Multi-Layer Perceptron (MLP) neural network; its architecture is flexible, as training data can easily be added or removed; and the input training information is stored in the training weights.
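A minimal sketch of PNN classification, assuming the standard Gaussian Parzen-window formulation; the smoothing parameter `sigma` and the function name are assumptions, not values reported in the paper:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.1):
    """Classify x with a Probabilistic Neural Network.

    Pattern layer: one Gaussian kernel per stored training sample
    (training is just storing the samples, hence the one-epoch,
    no-local-minima properties). Summation layer: kernel responses
    are accumulated per class. Decision layer: argmax over classes.
    """
    x = np.asarray(x, dtype=float)
    scores = {}
    for xi, yi in zip(train_X, train_y):
        d2 = np.sum((x - np.asarray(xi, dtype=float)) ** 2)
        k = np.exp(-d2 / (2 * sigma ** 2))   # pattern-layer activation
        scores[yi] = scores.get(yi, 0.0) + k  # summation layer
    return max(scores, key=scores.get)        # decision layer

X = [[0, 0], [0.1, 0], [1, 1], [1, 0.9]]
y = ['a', 'a', 'b', 'b']
print(pnn_classify([0.05, 0], X, y))  # prints a
```

In the paper's pipeline, each training sample would be a COV feature vector and each class label a subject identity.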

5. Results and Discussions

This paper used a Contact-free 3D/2D Hand Images Database (Version 1.0). The palm side is considered during acquisition. Ten images are captured for each hand in two sessions (five in each). The elapsed time ranges from one week (for 27 individuals only) to three months. Participants' ages range from 18 to 50 years. Students and staff of various ethnic backgrounds and genders participated.

Participants were asked to slightly change their hand positions after each capture and remove their jewelry or rings. Each participant contributed 10 images. No fixed pegs were used to determine the exact hand position. A black background in an indoor environment was used for image collection. All images are bitmap files. Each hand image has a resolution of 640x480x3 pixels and was captured approximately 0.7 meters from the scanner. Segmented 2D palmprint images are provided, each in grayscale and 128x128 pixels in size [28].

The 2D hand images in this database are considered very low resolution [19]. A total of 1000 2D palmprint images are used in this study—500 for training and 500 for testing, covering 100 subjects.

The PNN is trained on the COV values of MLLBP features obtained from these 2D palmprint images. Afterwards, the PNN is evaluated using other COV values of MLLBP features that were not previously seen. First, palmprint images are shown before and after the MLLBP process: Figure 5 displays various palmprint images before and after applying the MLLBP.

Figure 5. The images before and after the MLLBP

As shown in Figure 5, the first and third columns display original palmprint images, while the second and fourth columns show the corresponding MLLBP images. The left two columns present samples from one subject, whereas the right two columns depict samples from another subject. The MLLBP therefore effectively captures palmprint features, even at very low resolution.

Additionally, the MLLBP is illumination invariant, allowing it to efficiently describe palmprint line features. It provides reasonable similarity measures between samples of the same subject and reasonable dissimilarity measures between samples of different subjects. It is worth noting that the competitive method in Eq. (8) has been examined using various techniques.

Table 1 presents the results of the various methods applied.

Table 1. The results of the MLLBP for different applied methods

Feature Extraction    Method       EER
MLLBP                 Summation    0.80%
                      Average      0.60%
                      Maximum      0.60%
                      Minimum      0.40%

From Table 1, the minimum operation attained the best competitive performance among the applied methods, achieving the best EER of 0.40%, a remarkable value in the context of palmprint-based human verification. Moreover, comparisons with different feature extraction methods are presented in Table 2.
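For reference, an EER such as the ones in Table 1 can be estimated from sets of genuine and impostor match scores roughly as follows; this is a generic sketch, not the authors' evaluation code, and it assumes higher scores indicate greater similarity:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the operating point where the false accept
    rate (FAR) and false reject rate (FRR) are closest to equal."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = 1.0, None
    for t in thresholds:
        frr = np.mean(genuine < t)    # genuine pairs rejected
        far = np.mean(impostor >= t)  # impostor pairs accepted
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

genuine = np.array([0.9, 0.8, 0.85, 0.95])
impostor = np.array([0.1, 0.2, 0.3, 0.15])
print(equal_error_rate(genuine, impostor))  # prints 0.0
```

With perfectly separated score distributions, as in this toy example, the EER is 0; overlapping distributions yield the kind of nonzero EERs compared in Tables 1 and 2.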

Additionally, Table 2 shows that the proposed method achieved an impressive EER value. The FPLBP and TPLBP [14] recorded high error rates of 6.40% and 2.60%, respectively, because they use patches of sub-blocks instead of individual pixels, which discards fine texture information.

Table 2. Comparisons with different feature extraction

Ref.          Method    EER
[14]          FPLBP     6.40%
[16]          IFE       5.60%
[14]          TPLBP     2.60%
[18]          SPC       1.60%
[13]          LBP       1.40%
[15]          LLBP      0.80%
[17]          ELLBP     0.80%
This paper    MLLBP     0.40%

The IFE [16] also produced high error values, with an EER of 5.60%. This method improves the image appearance but does not focus on its essential features. The SPC method [18] achieved a value of 1.60% because it concentrates only on surrounding features. Although these features are useful, they are not as effective as the main features. The LBP [13] achieved 1.40% because it can analyze micro textures of palmprint images, but cannot analyze their line features.

The LLBP [15] and ELLBP [17] both achieved 0.80%. Although this is an interesting result, both can analyze line patterns, but only vertical and horizontal lines.

The proposed MLLBP attained the best EER of 0.40%, indicating that this method is efficient and superior. Figure 6 shows the Receiver Operating Characteristic (ROC) curves obtained with the different feature extraction methods.

Figure 6. ROC curves of different feature extraction

6. Conclusion

In this article, we introduced the MLLBP as a novel feature extraction method for contactless palmprint recognition. By expanding traditional Local Binary Patterns to analyze not only horizontal and vertical directions but also primary and secondary diagonals, MLLBP captures more detailed textural information. Our competitive matching strategy, which selects the minimum response among the horizontal, vertical, diagonal, and antidiagonal analyses, proved more effective at highlighting the most discriminative linear features. An experimental evaluation on the 3D/2D PolyU hand image database demonstrated the effectiveness of MLLBP. Using coefficient of variance (COV) feature vectors fed into a probabilistic neural network, we achieved an equal error rate (EER) of just 0.40%. This result not only exceeds previous linear methods such as LLBP and ELLBP (each with an EER of 0.80%) but also significantly outperforms patch, histogram, and surrounding pattern approaches, whose EERs range from 1.40% to 6.40%. Besides offering superior verification accuracy, MLLBP is robust to changes in illumination and to low image resolution, making it especially suitable for practical and cost-effective biometric systems. The simple block-based COV computation ensures compact feature representation and quick processing, while the PNN classifier allows fast training and avoids issues with local minima.

In the future, we plan to extend MLLBP to multispectral or three-dimensional palmprint data and integrate it with deep learning frameworks for end-to-end feature learning. Specifically, Convolutional Neural Networks (CNNs) can automatically learn hierarchical spatial features from palmprint textures, while a Siamese CNN architecture may be effective for one-shot learning in verification tasks. Additionally, large-scale testing on more diverse populations and real-world capture conditions will help verify the method's generalizability and feasibility for deployment.

Acknowledgment

The authors acknowledge the Hong Kong Polytechnic University Contact-free 3D/2D Hand Images Database (Version 1.0).

References

[1] Hebbache, K., Aiadi, O., Khaldi, B., Benziane, A. (2025). Blind medical image watermarking based on LBP-DWT for telemedicine applications. Circuits, Systems, and Signal Processing, 44: 4939–4964. https://doi.org/10.1007/s00034-025-03023-x

[2] Dheepak, G., Vaithe Shali, D. (2024). Brain tumor classification: A novel approach integrating GLCM, LBP and composite features. Frontiers in Oncology, 13: 1248452.‏ https://doi.org/10.3389/fonc.2023.1248452

[3] Yang, W., Wang, S., Hu, J., Zheng, G., Chaudhry, J., Adi, E., Valli, C. (2018). Securing mobile healthcare data: A smart card based cancelable finger-vein bio-cryptosystem. IEEE Access, 6: 36939-36947. https://doi.org/10.1109/ACCESS.2018.2844182

[4] Chang, M., Ji, L., Zhu, J. (2024). Multi-scale LBP fusion with the contours from deep CellNNs for texture classification. Expert Systems with Applications, 238: 122100.‏ https://doi.org/10.1016/j.eswa.2023.122100

[5] Wang, H., Cao, X. (2025). Palmprint recognition using bifurcation line direction coding. IEEE Access.‏ https://doi.org/10.1109/ACCESS.2025.3562648

[6] Saleema, A., Thampi, S.M. (2021). Speaker identification approach for the post-pandemic era of internet of things. In Advances in Computing and Network Communications: Proceedings of CoCoNet 2020. Singapore: Springer Singapore, 1: 573-592. https://doi.org/10.1007/978-981-33-6977-1_42

[7] Salman, A.S., Salman, A.S., Salman, O.S. (2021). Using behavioral biometrics of fingerprint authentication to investigate physical and emotional user states. In Proceedings of The Future Technologies Conference. Cham: Springer International Publishing, pp. 240-256. https://doi.org/10.1007/978-3-030-89880-9_19

[8] Nalamothu, A., Rayachoti, E. (2025). An effective feature selection and classification technique for palmprint biometric identification systems. Knowledge and Information Systems, 1-28.‏ https://doi.org/10.1007/s10115-025-02478-3

[9] He, Z., Nguyen, H., Vu, T.H., Zhou, J., Asteris, P.G., Mammou, A. (2022). Novel integrated approaches for predicting the compressibility of clay using cascade forward neural networks optimized by swarm-and evolution-based algorithms. Acta Geotechnica, 17(4): 1257-1272. ‏https://doi.org/10.1007/s11440-021-01358-8

[10] Shakil, S., Arora, D., Zaidi, T. (2023). Feature identification and classification of hand based biometrics through ensemble learning approach. Measurement: Sensors, 25: 100593.‏ https://doi.org/10.1016/j.measen.2022.100593

[11] Albak, L.H., Al-Nima, R.R.O., Salih, A.H. (2021). Palm print verification based deep learning. TELKOMNIKA (Telecommunication Computing Electronics and Control), 19(3): 851-857.‏ http://doi.org/10.12928/telkomnika.v19i3.16573

[12] Montaño, J.J., Gervilla, E., Jiménez, R., Sesé, A. (2025). From acute to chronic low back pain: The role of negative emotions. Psychology, Health & Medicine, 1-14. https://doi.org/10.1080/13548506.2025.2478657

[13] Kaplan, K., Kaya, Y., Kuncan, M., Minaz, M.R., Ertunç, H.M. (2020). An improved feature extraction method using texture analysis with LBP for bearing fault diagnosis. Applied Soft Computing, 87: 106019. https://doi.org/10.1016/j.asoc.2019.106019

[14] Krig, S. (2025). Computer vision metrics: Survey, taxonomy, and analysis of computer vision, visual neuroscience, and visual AI. Springer Nature. Singapore: Springer Nature Singapore, pp. 141-193. https://doi.org/10.1007/978-981-99-3393-8

[15] Khare, V., Kumar, R. (2025). Classification and analysis of Parkinson’s disease based on gabor and wavelet filters. Ingénierie des Systèmes d’Information, 30(2): 367-373. https://doi.org/10.18280/isi.300208

[16] Youssef, D., Atef, H., Gamal, S., El-Azab, J., Ismail, T. (2025). Early breast cancer prediction using thermal images and hybrid feature extraction based system. IEEE Access. https://doi.org/10.1109/ACCESS.2025.3541051

[17] Al-Kaltakchi, M.T., Al-Hussein, M.A.S., Al-Nima, R.R.O. (2025). Identifying three-dimensional palmprints with modified four-patch local binary pattern (MFPLBP). International Journal of Electronics and Telecommunications, 555-559.

[18] Mieres-Perez, J., Almeida-Hernandez, Y., Sander, W., Sanchez-Garcia, E. (2025). A computational perspective to intermolecular interactions and the role of the solvent on regulating protein properties. Chemical Reviews, 125(15): 7023-7056. https://doi.org/10.1021/acs.chemrev.4c00807

[19] Kaliaperumal, R., Anandan, S. (2025). Performance analysis of neural network based human palmprint and hand gesture recognition techniques. Traitement du Signal, 42(1): 541. https://doi.org/10.18280/ts.420146

[20] Ali, R.R., Al-Dayyeni, W.S., Gunasekaran, S.S., Mostafa, S.A., Abdulkader, A.H., Rachmawanto, E.H. (2022). Content-based feature extraction and extreme learning machine for optimizing file cluster types identification. In Future of Information and Communication Conference. Cham: Springer International Publishing. Springer, Cham, pp. 314-325. https://doi.org/10.1007/978-3-030-98015-3_21

[21] Ali, R.R., Waisi, N.Z., Saeed, Y.Y., Noori, M.S., Rachmawanto, E.H. (2024). Intelligent classification of JPEG files by support vector machines with content-based feature extraction. Journal of Intelligent Systems & Internet of Things, 11(1). https://doi.org/10.54216/JISIoT.110101

[22] Manzoor, H., Khursheed, F., Hafiz, A.M. (2025). Finger vein recognition using an ensemble of KNN classifiers based on robust image features. Signal, Image and Video Processing, 19(10): 1-12. https://doi.org/10.1007/s11760-025-04369-0

[23] Aizan, J., Ezin, E.C., Motamed, C. (2016). A face recognition approach based on nearest neighbor interpolation and local binary pattern. In 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Naples, Italy, IEEE, pp. 76-81.‏ https://doi.org/10.1109/SITIS.2016.21

[24] Le, V.N.T., Apopei, B., Alameh, K. (2019). Effective plant discrimination based on the combination of local binary pattern operators and multiclass support vector machine methods. Information Processing in Agriculture, 6(1): 116-131.‏ https://doi.org/10.1016/j.inpa.2018.08.002

[25] Salehi, L., Azmi, R. (2014). A novel method based on learning automata for automatic lesion detection in breast magnetic resonance imaging. Journal of Medical Signals & Sensors, 4(3): 202-210.

[26] Aslan, H.İ., Choi, C. (2024). VisGIN: Visibility graph neural network on one-dimensional data for biometric authentication. Expert Systems with Applications, 237: 121323. https://doi.org/10.1016/j.eswa.2023.121323

[27] Abdul-Al, M., Kyeremeh, G.K., Qahwaji, R., Ali, N.T., Abd-Alhameed, R.A. (2024). A novel approach to enhancing multi-modal facial recognition: Integrating convolutional neural networks, principal component analysis, and sequential neural networks. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3467151

[28] Pottluarai, B.D., Kasinathan, S. (2022). Thinning algorithms analysis minutiae extraction with terminations and bifurcation extraction from the single-pixeled thinned biometric image. Instrumentation Mesure Métrologie, 21(6): 225-230. https://doi.org/10.18280/i2m.210603

[29] Chyad, H.S., Abbes, T. (2025). Robust inner knuckle print recognition system using DenseNet201 and InceptionV3 models. Iraqi Journal for Computer Science and Mathematics, 6(2): 4. https://doi.org/10.52866/2788-7421.1245

[30] Kodepogu, K.R., Patnala, E., Annam, J.R., Gorintla, S., Krishna, V.V.R., Aruna, V., Manjeti, V.B., Pallikonda, A.K. (2025). Ensemble machine learning for the classification and prediction of mellitus diabetes. Ingénierie des Systèmes d’Information, 30(7): 1685-1691. https://doi.org/10.18280/isi.300701

[31] Zhao, S., Zhang, B., Yang, J., Zhou, J., Xu, Y. (2024). Linear discriminant analysis. Nature Reviews Methods Primers, 4(1): 70. https://doi.org/10.1038/s43586-024-00346-y

[32] Kaç, S.B., Eken, S., Balta, D.D., Balta, M., Iskefiyeli, M., Özçelik, I. (2024). Image-based security techniques for water critical infrastructure surveillance. Applied Soft Computing, 161: 111730. https://doi.org/10.1016/j.asoc.2024.111730

[33] Kaur, B., Saini, S.S. (2024). Integrating handcrafted features with deep convolutional neural network and BWOA optimization for improved postmortem iris recognition system. Soft Computing, 28(21): 13009-13023. https://doi.org/10.1007/s00500-024-10316-x

[34] Bashi, O.I.D., Hasan, W.W., Azis, N., Shafie, S., Wagatsuma, H. (2018). Autonomous quadcopter altitude for measuring risky gases in hazard area. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 10(2-5): 31-34. https://jtec.utem.edu.my/jtec/article/view/4345.