© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
The Multimodal Biometric System (MBS) is widely utilized in security due to its superior performance compared to Unimodal Biometric Systems (UBSs). However, developing an MBS with high accuracy and acceptable complexity is still of prime interest. This paper proposes an innovative MBS based on advanced feature extraction and selection methods to improve face-iris recognition. The proposed method introduces three algorithms for Local Feature Extraction (LFE) of both faces and irises, effectively capturing detailed image characteristics. These extracted local features are fused in a unified matrix to provide a better description of the modalities. In addition, the concatenated data undergoes dimensionality reduction by using a binary bat algorithm (BBA) intended for selecting the most significant features required for iris-face recognition. This contributes to improving the recognition accuracy and computational efficiency. The BBA is adopted due to its robust global optimization capabilities and adaptive exploration-exploitation balance. For classification at the score level, the extreme learning machine (ELM) is suggested, which demonstrates superior performance over the support vector machine (SVM) and genetic algorithm (GA). The system's robustness is validated using the CASIA-Iris-Distance database, containing high-resolution images of both left and right eyes. The experimental results show significant improvements over UBSs, underscoring the effectiveness of the designed MBS.
Multimodal Biometric System (MBS), face and iris recognition, local feature extraction (LFE), feature selection, binary bat algorithm (BBA), extreme learning machine (ELM)
Biometric systems (BSs) have been adopted in multiple real-world applications to ensure people's security. Multimodal BSs (MBSs), which handle multiple biometric traits, have attracted much research attention compared to unimodal BSs (UBSs), due to their capability to improve the accuracy and reliability of recognition systems [1-3]. MBSs can be built using either physical characteristics, such as the face, iris, and fingerprint modalities, or behavioral characteristics, such as the signature or footstep. For example, the combination of the face and iris traits, which are popular and complement each other, is an effective solution that has been widely applied to enhance person identification [4, 5]. However, the tradeoff between system complexity and accuracy is still an issue that should be dealt with. In this regard, the development of an MBS that ensures high face-iris recognition accuracy and acceptable complexity remains challenging in real-world applications.
Extensive research has been conducted on creating MBSs that offer high recognition accuracy and simplicity of implementation in real-world applications. Advancing feature extraction, feature fusion, and feature selection has been one of the main research directions for reaching adequate performance. In MBSs, feature extraction plays a critical role, as it involves identifying and capturing the most relevant features of each biometric modality. Indeed, effective feature extraction ensures that the information from all modalities is utilized optimally. Common feature extraction methods include principal component analysis (PCA) [6], linear discriminant analysis (LDA) [7], local binary pattern (LBP) [8], Zernike moments (ZM) [9], the LOG Gabor filter (LGF) [10], multilinear PCA (M-PCA) [11], and convolutional neural networks (CNNs) [12]. Combining two algorithms is an effective solution for the extraction of local features. For instance, a combination of ZM and LGF is proposed for feature extraction in the study by Bouzouina and Hamami [13]; this method contributed to a significant increase in the recognition rate. On the other hand, the fusion of the extracted features is essential to combine information from multiple biometric modalities. In multimodal face-iris recognition systems, some methods fuse the face with the iris of a single eye [14-16]. Unfortunately, with such a fusion scheme, if one of the biometric modalities is unavailable, the recognition accuracy decreases [17]. Thus, the fusion of the face and both the left and right irises has been proposed to improve recognition accuracy [17-19]. However, fusing the features of the face and both irises results in over-dimensionality and data redundancy. To this end, feature selection methods are suggested not only to remove data redundancy but also to keep only the data that improve the recognition rate. Special efforts have been made to select the best features from the original data according to an objective function. For example, the genetic algorithm (GA) proposed by Bouzouina and Hamami [13] is defined by a chromosome encoding and a fitness function; in the binary encoding, '1' is assigned to a selected feature and '0' to a rejected feature during chromosome evaluation. In addition, the particle swarm optimization (PSO) method has been widely used for feature selection [6, 20, 21]. This method is initialized with a population of random solutions and aims to find the position and velocity giving the best fitness. However, the major drawback of these methods is the possibility of failing to reach the global optimum. To this end, advanced metaheuristic optimization algorithms have been adopted. Sharifi and Eskandari [22] employed a backtracking search algorithm (BSA) to optimize the feature sets of the face and the left/right irises. In addition, a modified chaotic binary PSO (MCBPSO) algorithm has been proposed to select features within a face-iris multimodal biometric identification system [23]. However, the created MBSs still suffer from the loss of some feature information during feature fusion, which may decrease the face-iris recognition rate.
All in all, this literature reveals that the advancement in feature extraction, fusion, and selection is essential for improving the performance of multimodal face-iris identification systems. More particularly, it highlights that the combination of multiple algorithms to extract features from different modalities and the fusion of the face and both left and right iris features are essential to improve the system recognition performance. Furthermore, the adoption of a proficient optimization algorithm for feature selection is vital to decrease system complexity and enhance recognition accuracy.
In this regard, an innovative MBS considering the face and both iris modalities, together with advances in feature extraction, fusion, and selection, is proposed in this paper. It is developed to improve face-iris recognition accuracy while keeping the complexity acceptable. The main contributions made in this paper are:
Considering three modalities, the face, the left iris, and the right iris, to ensure that all traits are involved. The detection of the right and left eyes from the face is achieved using a trained cascade object detector (TCOD);
Proposing local feature extraction (LFE) based on three algorithms, the LOG Gabor filter (LGF), ZM, and LBP, and concatenating the extracted features of the face and both irises in a unified matrix;
Adopting a binary bat algorithm (BBA) for feature selection. The BBA is used to select the data subsets that train the extreme learning machine (ELM)-based feature classification until the best recognition rate is achieved;
Validating the effectiveness of the MBS based on the suggested techniques over the UBS through experimental tests.
The remainder of the paper is structured as follows: Section 2 presents the description of the designed MBS. Section 3 describes the proposed feature extraction and the BBA-based feature selection; the BBA and ELM algorithms are also discussed in this section. Section 4 presents the experimental results, and Section 5 provides the main conclusions.
Figure 1 depicts the structure of the designed MBS, including the proposed LFE-selection technique intended for the face and both iris traits. It is well known that the face is a common way to recognize people owing to its good ratio between accuracy and cost. In this light, the first step in the designed system, as Figure 1 shows, is uploading the full-face image. Second, a trained cascade object detector (TCOD) is used to extract the face, left eye, and right eye images from the captured image, as Figure 2 shows. The iris modality is chosen for the accuracy it provides and because its features do not change over time. In the third step, an effective LFE is adopted to improve system performance. Three algorithms, the LOG Gabor filter (LGF), ZM, and local binary pattern (LBP), are introduced for the LFE of both irises and the face modality. These algorithms offer a good characterization of the image. In addition, the concatenated data are curtailed to reduce the feature space. This is achieved by using a binary bat algorithm (BBA) for feature selection. The BBA is the binary version of the BA, a metaheuristic algorithm inspired by the echolocation behavior of bats, and is adopted because the optimization problem is binary.
Figure 1. Proposed iris-face multimodal recognition system
Figure 2. Right and left eye extraction using TCOD [25]
The BBA is chosen because it can automatically shift from the exploration stage to the exploitation stage, providing good results in global optimization. Further, it can provide better results than PSO and GA [24]. At the final stage, the ELM is considered for feature classification, while the robustness of the designed MBS is tested on the CASIA-Iris-Distance database.
The proposed LFE based on the three algorithms and feature selection optimization process based on the BBA are described in detail in the subsequent section.
In this section, the proposed LFE based on LGF, ZM, and LBP is presented. In addition, the adopted feature selection based on BBA considering the ELM classifier model training is described. These two proposed techniques are highlighted in Figure 3.
Figure 3. Proposed feature extraction, fusion, and selection
3.1 Iris-face LFE
The proposed structure for LFE is shown in Figure 3. At this level, for a given image A, a block window of size (p×q) is first predetermined. Then, by sliding the window along the image from left to right and from top to bottom, the Ci blocks of the image are obtained. In our case, Cif, Cil, and Cir, the characteristic blocks of the face, left eye, and right eye images, are determined. Next, the characteristic blocks of each modality are handled by three local feature extractors based on the LGF, ZM, and LBP algorithms. Each extractor provides a feature matrix corresponding to the applied algorithm for one image block. The feature fusion results in a unified matrix, Mi, which includes the features of the three transformations, i.e., LGF, ZM, and LBP, of the left iris, right iris, and face. This matrix can be expressed as follows:
$M_i=\left[G_i, Z_i, L_i\right]$ (1)
where, Gi=[Gif, Gil, Gir], Zi=[Zif, Zil, Zir], and Li=[Lif, Lil, Lir] are, respectively, the LGF, ZM, and LBP feature matrices of one image block. More precisely, Gi, Zi, and Li correspond to the LGF, ZM, and LBP vectors of the ith image block. Notice that a dynamic block-based algorithm is considered for extracting the local characteristics, and each of the three algorithms is adopted for its specific strengths.
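For illustration, the following minimal Python sketch shows how the block decomposition and the fusion of the per-block features into one row of the unified matrix could be organized. The function names, the non-overlapping sliding stride, and the placeholder extractors are assumptions of the sketch only, not the authors' implementation.

```python
import numpy as np

def image_blocks(img, p, q):
    """Slide a (p x q) window left-to-right, top-to-bottom and
    return the list of blocks C_i (non-overlapping stride assumed)."""
    h, w = img.shape
    return [img[r:r + p, c:c + q]
            for r in range(0, h - p + 1, p)
            for c in range(0, w - q + 1, q)]

def local_features(img, p, q, extractors):
    """Apply each extractor (e.g. LGF, ZM, LBP) to every block and
    concatenate the per-block vectors into one long row vector."""
    feats = [np.asarray(ext(block), dtype=float).ravel()
             for block in image_blocks(img, p, q)
             for ext in extractors]
    return np.concatenate(feats)

def fuse_modalities(face, left_iris, right_iris, p, q, extractors):
    """Unified row M_i = [G_i, Z_i, L_i] over the face and both irises."""
    return np.concatenate([local_features(m, p, q, extractors)
                           for m in (face, left_iris, right_iris)])

if __name__ == "__main__":
    # Trivial placeholder extractors (mean and variance); the real
    # LGF / ZM / LBP extractors would be plugged in here instead.
    rng = np.random.default_rng(0)
    face, l_iris, r_iris = (rng.random((128, 128)) for _ in range(3))
    row = fuse_modalities(face, l_iris, r_iris, 16, 16,
                          extractors=[np.mean, np.var])
    print(row.shape)  # one row of the fused matrix M (one person)
```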
(a) Log Gabor filter
(b) ZM
Figure 4. LGF and ZM transformation images
LOG Gabor 1D Filter (LGF): The LGF is adopted for feature extraction due to its direction and frequency selectivity [6]. For each block, just one scale and one direction are used to avoid redundancy. In addition, it offers robustness against noise by focusing on the relevant frequencies, which contributes to maintaining accurate feature extraction. Figure 4(a) illustrates face traits extracted using the LGF.
ZM: The magnitude of the ZMs is a descriptor invariant to image rotation. The localized ZM features are computed on the image blocks extracted from the normalized images with order n and repetition m, where the best (n, m) pair is found by plotting the recognition accuracy as a function of (n, m); here, m = n = 5. The ZM is chosen as a powerful tool that extracts the relevant features effectively and ensures high robustness to small distortions and noise. Figure 4(b) portrays an example of face traits extracted using ZM.
Local Binary Pattern (LBP): LBP is an efficient feature extractor due to its invariance to non-uniform or changing illumination of the image [26].
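As a hedged illustration of the three per-block extractors, the sketch below uses a 1-D log-Gabor filter built in the frequency domain, Zernike moment magnitudes via the mahotas library, and a uniform LBP histogram via scikit-image. The filter parameters (f0, sigma ratio), the Zernike radius, and the histogram binning are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern   # LBP codes
from mahotas.features import zernike_moments       # |Z_nm| magnitudes

def log_gabor_1d(block, f0=0.1, sigma_ratio=0.55):
    """1-D log-Gabor response of a block: filter each row in the
    frequency domain with G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma_ratio)^2))
    (one scale, one direction) and return the magnitude features."""
    _, cols = block.shape
    f = np.fft.fftfreq(cols)
    g = np.zeros(cols)
    pos = f > 0                                   # one-sided filter
    g[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) /
                    (2 * np.log(sigma_ratio) ** 2))
    resp = np.fft.ifft(np.fft.fft(block, axis=1) * g, axis=1)
    return np.abs(resp).ravel()

def zernike_feats(block, degree=5):
    """Localized Zernike moment magnitudes (order = repetition = 5 in
    the paper); mahotas is one possible implementation."""
    radius = min(block.shape) // 2
    return zernike_moments(block, radius, degree=degree)

def lbp_feats(block, P=8, R=1):
    """Uniform LBP histogram of the block (robust to illumination)."""
    codes = local_binary_pattern(block, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                           density=True)
    return hist

# These three callables can be plugged into the fusion sketch above.
extractors = [log_gabor_1d, zernike_feats, lbp_feats]
```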
On the other hand, it is worth mentioning that the reason for collecting all the features in a single matrix is to simplify feature selection and ELM training, so that every row represents one person. However, the collected data may include undesirable feature redundancy; thus, the BBA is introduced to select the most significant features, hence enhancing the MBS performance in terms of dimensionality reduction and recognition accuracy.
3.2 BBA-based feature selection
The structure of the BBA optimization process for feature selection is depicted in Figure 3. As seen, the BBA is introduced to select precisely the most significant features by training the ELM while considering the classification accuracy as the fitness function. Particularly, in our case, the BBA retains the training and testing subsets of the most significant features, M', that minimize the equal error rate (EER). The adopted BBA-based feature selection reduces overfitting on the dataset and improves system accuracy by eliminating noise in the database. In addition, as it is used to train the ELM classifier, the ELM benefits from a reduced training time.
The following subsections describe in detail the BBA and the ELM algorithms and provide the pseudo-code of the BBA-based optimization process.
3.2.1 BBA
The flowchart of the BBA, the binary version of the bat algorithm (BA), is illustrated in Figure 5. The first step consists of the initialization of the parameters and conditions, including the initial position xi, the initial velocity vi, and the initial frequency fi. The movement of each bat is determined by updating its position and velocity. The bat movement is governed by the following update equations [27]:
$v_i(t+1)=v_i(t)+f_i\left(x_i(t)-x b_i\right)$ (2)
$x_i(t+1)=x_i(t)+v_i(t+1)$ (3)
$f_i=f_{\min }+\left(f_{\max }-f_{\min }\right) \beta$ (4)
where, xbi is the current global best position, fmin and fmax are the lower and upper frequency bounds, and $\beta$ is a random number drawn uniformly from [0, 1].
Figure 5. Flowchart of the BBA
To improve the exploitation of the solutions, a random local movement is performed as follows:
$x_{\text {new }}=x_{\text {old }}+\varepsilon A$ (5)
where, $\varepsilon$ is a uniform random number in the range [-1, 1], and A denotes the loudness of the emitted sound. The loudness, A, and the pulse emission rate, r, can be updated using:
$A_i(t+1)=\alpha A_i(t)$ (6)
$r_i(t+1)=r_i(0)\left[1-\mathrm{e}^{-\lambda t}\right]$ (7)
with $\alpha$ and $\lambda$ constants.
In binary space, the position of each particle is updated by switching between '0' and '1' with a probability determined by its velocity. Therefore, a sigmoid transfer function, S(vi(t)), is employed in this paper to move in the binary space.
$S\left(v_i(t)\right)=\frac{1}{1+\mathrm{e}^{-v_i(t)}}$ (8)
Accordingly, the position of the particles yields:
$x_i= \begin{cases}1 & \text { if } S\left(v_i\right)>\text { rand } \\ 0 & \text { otherwise }\end{cases}$ (9)
In this regard, the BBA bats' positions are initialized randomly with binary values based on Eq. (9); each bit indicates whether the corresponding feature is selected for building the new data set. After that, Eqs. (2)-(4) are used to generate new positions for training and evaluation of the data set, and then the loudness Ai and the pulse emission rate ri are updated based on Eqs. (6) and (7) whenever a new solution is accepted.
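A minimal NumPy sketch of one BBA iteration over a population of binary feature masks is given below, following Eqs. (2)-(9). The fitness callback (meant to return the EER of the classifier trained on the selected features), the parameter values, and the binary adaptation of the random walk in Eq. (5) are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def bba_step(x, v, A, r, best, fitness, fmin=0.0, fmax=2.0,
             alpha=0.9, lam=0.9, r0=0.5, t=1):
    """One BBA iteration. x: (m, N) binary positions, v: velocities,
    A: loudness values, r: pulse rates, best: current global best mask.
    fitness(mask) should return the EER of the classifier trained on
    the selected features (lower is better)."""
    m, N = x.shape
    for i in range(m):
        beta = rng.random()
        f = fmin + (fmax - fmin) * beta                  # Eq. (4)
        v[i] += (x[i] - best) * f                        # Eq. (2)
        sig = 1.0 / (1.0 + np.exp(-v[i]))                # Eq. (8)
        cand = (sig > rng.random(N)).astype(int)         # Eq. (9)
        if rng.random() > r[i]:
            # Binary adaptation of the random walk of Eq. (5):
            # flip a few bits with a probability tied to the loudness.
            flip = rng.random(N) < (rng.random() * A.mean())
            cand = np.where(flip, 1 - cand, cand)
        if fitness(cand) < fitness(best) and rng.random() < A[i]:
            x[i] = cand
            A[i] *= alpha                                # Eq. (6)
            r[i] = r0 * (1 - np.exp(-lam * t))           # Eq. (7)
            best = cand.copy()
    return x, v, A, r, best
```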
3.2.2 ELM algorithm
The ELM algorithm is based on a neural network (NN) with a single hidden layer and a high learning speed [28]. Compared to a conventional NN, the ELM benefits from a much higher speed of data classification and regression, since the weights of the input-to-hidden layer are randomly defined.
Considering the N ELM input-output pairs $\left\{x_i, y_i\right\}_{i=1}^N$, with input vectors of dimension n and m hidden nodes, the network output is:
$x_i=\left[x_{i 1}, x_{i 2}, \cdots, x_{i n}\right]^T ; \quad y_i=\sum_{j=1}^m \beta_j f\left(w_j \cdot x_i+b_j\right), \quad i=1, \ldots, N$ (10)
where, wj are the input weights and bj is the bias of the jth hidden node. The matrix form of the single-hidden-layer feedforward NN is written as follows:
$Y=F \beta$ (11)
where, F is the hidden-layer output matrix obtained with the activation function f, defined as:
$F=\left(\begin{array}{ccc}f\left(w_1 \cdot x_1+b_1\right) & \cdots & f\left(w_m \cdot x_1+b_m\right) \\ \vdots & \ddots & \vdots \\ f\left(w_1 \cdot x_N+b_1\right) & \cdots & f\left(w_m \cdot x_N+b_m\right)\end{array}\right)$ (12)
with
$\beta=\left[\beta_1, \beta_2, \cdots, \beta_m\right]^T ; \quad Y=\left[y_1, y_2, \cdots, y_N\right]^T$ (13)
To train the ELM, the input weights wj and biases bj are randomly chosen, and then the output weights β are calculated using Eq. (14) below:
$\beta=F^{+} Y ; \quad F^{+}=\left(F^T F\right)^{-1} F^T$ (14)
where, F+ denotes the Moore-Penrose pseudo-inverse of the matrix F.
On the other hand, the sigmoid function chosen as the activation function is defined as follows:
$S(w, b, x)=\frac{1}{1+e^{-(w \cdot x+b)}}$ (15)
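The ELM training described by Eqs. (10)-(15) can be sketched in a few lines of NumPy. The class name, the Gaussian initialization of the random weights, and the one-hot target encoding in the usage note are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random input weights and biases, output
    weights obtained with the Moore-Penrose pseudo-inverse (Eq. (14))."""

    def __init__(self, n_hidden=5000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activation of the hidden layer, Eq. (15)
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, Y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        F = self._hidden(X)                 # hidden-layer matrix, Eq. (12)
        self.beta = np.linalg.pinv(F) @ Y   # beta = F^+ Y, Eq. (14)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta  # Eq. (11)

# Usage (one-hot class targets assumed):
# elm = ELM(n_hidden=5000).fit(X_train, Y_train_onehot)
# scores = elm.predict(X_test)  # matching scores; argmax gives the class
```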
3.2.3 Pseudo-code of BBA-based ELM training
Algorithm 1 presents the pseudo-code of the BBA-based optimization process used to train the ELM classifier. First, the bats' population is initialized randomly with binary values, each indicating whether a feature is selected or not. Then, each bat is trained and evaluated to update its fitness value, its loudness Ai, and its pulse emission rate ri. The Global Fit function returns the minimum fitness value found during the training of the m bats at each iteration t. Afterward, the positions of the bats are updated via the new loudness, while the loudness decreases until a bat finds its prey. The bats' positions are also updated using the velocity and the frequency. Finally, the new feature matrix, M', is generated from the selection of the best features. The input data are the training subsets M1, M2, …, M9, the validation subset M10, and the evaluation subset M11. The output is the data subset M' that gives the maximum accuracy. Note that N and T denote the number of features and iterations, and $r, \varepsilon, \alpha$, and $\lambda$ are the loudness and pulse emission parameters.
Algorithm 1: BBA-based ELM training pseudo-code [13]

For each bat i = 1:m
    For each feature j = 1:N
        x_i^j = Random[0, 1]; v_i^j = 0; A_i = 0; r_i = 0;
    end For
end For
For each iteration t = 1:T
    For each bat i = 1:m
        For k = 1:9
            Divide M into folds with an equal number of persons;
            Create M' such that it contains only the features with x_i^j = 1;
            Train the classifier over the folds M_1:M_(k-1);
            Evaluate over M_10;
        end For
        if (rand < A_i and Fit < EER_i)
            Fit_i = EER_i; A_i = alpha*A_i; r_i = r_i(0)*[1 - exp(-lambda*t)];
        end if
    end For
    Global Fit = min(EER_i);
    For each bat i = 1:m
        beta = Random[0, 1];
        if (rand > r_i)
            For j = 1:N
                epsilon = Random[-1, 1]; x_i^j = x_i^j + epsilon*A_mean;
                sigma = Random[0, 1];
                if (sigma < 1/(1 + exp(-x_i^j))) x_i^j = 1; else x_i^j = 0;
            end For
        end if
        if (rand < A_i and Fit < EER_i)
            For each feature j = 1:N
                xhat^j = x^j_(minEER);
                f_i = f_min + (f_max - f_min)*beta;
                v_i^j = v_i^j + (xhat^j - x_i^j)*f_i;
                sigma = Random[0, 1];
                if (sigma < 1/(1 + exp(-v_i^j))) x_i^j = 1; else x_i^j = 0;
            end For
        end if
    end For
end For
For each feature j = 1:N
    M'^j = xhat^j;
end For
Return M'
To train the ELM algorithm, we initialize the network and fix its parameters: the size of the input layer equals the dimension of the BBA-optimized matrix, M', reshaped into a vector; the number of hidden neurons is set to 5,000; and the input weights and biases are randomly initialized. Furthermore, the sigmoid function given by Eq. (15) is used as the activation function.
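To connect Algorithm 1 with the ELM classifier, a possible fitness wrapper is sketched below; it selects the feature columns indicated by a bat's binary mask, trains the ELM, and scores it on the validation subset M10. Using the validation misclassification rate as a stand-in for the EER, as well as the helper names (make_fitness, elm_factory), are assumptions of this sketch.

```python
import numpy as np

def make_fitness(M_train, y_train, M_val, y_val, elm_factory):
    """Return a fitness(mask) callable for the BBA: keep only the
    columns of M selected by the binary mask, train an ELM on the
    training folds, and score it on the validation subset (M10).
    The validation error is used here as a stand-in for the EER."""
    def fitness(mask):
        cols = np.flatnonzero(mask)
        if cols.size == 0:                  # empty subsets are penalized
            return 1.0
        elm = elm_factory().fit(M_train[:, cols], y_train)
        pred = elm.predict(M_val[:, cols]).argmax(axis=1)
        return float(np.mean(pred != y_val.argmax(axis=1)))
    return fitness
```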
The performance of the designed MBS based on the proposed feature extraction-selection technique is evaluated in this section. Experiments are carried out based on the multimodal CASIA-Iris-Distance database. This database contains 2576 images with a resolution of 2352×1728 pixels.
The irises are captured at a distance of about 3 meters with a high-resolution camera. It is important to note that the well-known 10-fold cross-validation protocol is used for validation: one fold is used for testing and the remaining nine folds are used for training. The process is repeated 10 times and the results are then combined.
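A short sketch of this 10-fold protocol, assuming scikit-learn's KFold and a user-supplied train_and_eval callable that returns one scalar metric per fold, is given below.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate_10fold(M, labels, train_and_eval, seed=0):
    """Nine folds train, one fold tests; repeated over all 10 folds,
    and the per-fold metrics are averaged."""
    kf = KFold(n_splits=10, shuffle=True, random_state=seed)
    metrics = []
    for train_idx, test_idx in kf.split(M):
        metrics.append(train_and_eval(M[train_idx], labels[train_idx],
                                      M[test_idx], labels[test_idx]))
    return float(np.mean(metrics))
```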
4.1 Feature extraction performance assessment
In this subsection, the effectiveness of the proposed feature extraction technique based on the combination of local features is assessed in comparison with single-algorithm extraction. Note that the block window size is chosen according to Figure 6, which shows the EER obtained for different window sizes (p, q); the test is realized by varying the pair (p, q) and plotting the curve EER(p, q) before starting the quantification stage. The extraction of the left and right eye images from the face is carried out using the Viola-Jones algorithm implemented in the TCOD toolbox of MATLAB. Notice that this system detects the eyes by sliding a detection window over the face and then uses a cascade classifier to decide whether the window corresponds to an eye or not. In this process, the chosen window size is 320×280 pixels, in accordance with the eye dimensions of the CASIA V3 database.
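The eye extraction itself relies on MATLAB's trained cascade object detector; as a hedged, roughly equivalent Python sketch, the stock OpenCV Viola-Jones cascades could be used as follows. The cascade file names come from the OpenCV distribution, the 320×280 crop size follows the text, and the left/right assignment by horizontal position is an assumption.

```python
import cv2

# Viola-Jones cascades shipped with OpenCV (stand-in for MATLAB's TCOD)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_face_and_eyes(image_path, eye_size=(320, 280)):
    """Detect the face, then the two eyes inside it, and return the
    face crop plus fixed-size eye crops (left/right decided by
    horizontal position in the image). Assumes one face and both
    eyes are actually detected."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    fx, fy, fw, fh = face_cascade.detectMultiScale(gray, 1.1, 5)[0]
    face = gray[fy:fy + fh, fx:fx + fw]
    eyes = eye_cascade.detectMultiScale(face, 1.1, 5)
    eyes = sorted(eyes, key=lambda e: e[0])[:2]      # left-most first
    crops = [cv2.resize(face[y:y + h, x:x + w], eye_size)
             for (x, y, w, h) in eyes]
    left_eye, right_eye = crops
    return face, left_eye, right_eye
```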
Figure 6. EER of the proposed method
Figure 7. ROC of the proposed method for LFE
Table 1. LFE results
Feature Extraction Method | EER (%) | FAR (%) | GAR (%)
LGF | 1.0 | 2.55 | 75.1
ZM | 0.88 | 2.66 | 76.4
LBP | 0.77 | 2.6 | 76.7
Three algorithms-based LFE | 0.68 | 1.35 | 77.3
Figure 7 depicts the receiver operating characteristic (ROC) curve obtained with the adopted method. From this figure, the verification rate at 0.01 is about 77.3%. Table 1 summarizes the numerical results of the adopted local feature combination compared with those of the LGF, ZM, and LBP algorithms taken individually. Note that this table reports the EER, false acceptance rate (FAR), and genuine acceptance rate (GAR) as evaluation metrics. The findings show that the MBS based on the adopted method ensures a significant reduction of the EER and FAR, with 0.68% and 1.35%, and the highest GAR of about 77.3%, compared with the systems based only on the LGF, ZM, or LBP algorithm. Therefore, it can be concluded that it is judicious to fuse the features extracted by the three algorithms in a unified matrix to benefit from each algorithm.
4.2 Feature selection performance assessment
In this subsection, the performance of the BBA-based feature selection is compared with that of the GA-based one. In addition, a comparative study is conducted to investigate the performance of the proposed MBS with BBA-based ELM classifier training against a BBA-based SVM training system and a UBS based on the BBA optimizer.
Figure 8. Performance of the BBA-based feature selection
Table 2. Optimization results
Feature Selection Method | EER (%) | FAR (%) | GAR (%)
GA | 0.66 | 1.6 | 80.2
BBA | 0.68 | 1.35 | 86.1
Figure 8 depicts the ROC curve obtained by the BBA-based feature selection, while Table 2 reports the numerical results of the BBA- and GA-based feature selection. The results show that the BBA provides better accuracy, with a GAR of about 86.1% at 0.01% compared with the GA's 80.2%. In addition, the BBA ensures a FAR of 1.35%, lower than the GA's 1.6%, while the two algorithms give almost equal EERs.
Figure 9. Performance of the BBA-based ELM and SVM algorithms
Table 3. Classification results
Method | EER (%) | FAR (%) | GAR (%)
UBS based on BBA optimizer | 0.62 | 1.1 | 85.2
BBA-based SVM classifier training | 0.65 | 1.2 | 88.2
Proposed method (BBA-based ELM classifier training) | 0.6 | 0.9 | 91.6
Figure 10. ROC curve for combination and optimized features
Figure 9 portrays the ROC curves achieved by the BBA-based ELM and SVM algorithms. From this figure, it can be observed that the ELM-based MBS provides better accuracy than the SVM-based one. The numerical results are summarized in Table 3, which compares the performance of the UBS and of the MBSs based on SVM and ELM in terms of the EER, FAR, and GAR metrics. This table demonstrates that the ELM provides the lowest EER and FAR, 0.6% and 0.9%, and the best GAR, 91.6%, compared with the SVM and the UBS. Further, the SVM-based MBS offers a better GAR, 88.2%, than the UBS, 85.2%, while the UBS offers a lower EER, 0.62%, than the SVM-based MBS, 0.65%. In Figure 10, the ROC of the feature combination and optimization is presented; it highlights that the adopted methods achieve a ROC enhancement. Moreover, the performed experiments show that the ELM consumes less time than the SVM, owing to the fast learning of the ELM. This is observed for each classification algorithm at the matching-score level.
Overall, the results show that the ELM method outperforms the SVM due to its high data regression capability and high learning speed.
Figure 11. Selected ZM features corresponding to the favourable results
Figure 11 displays the ZM selection corresponding to the favourable results.
In this paper, an innovative MBS is proposed for face-iris recognition. This system first adopts the face and both the right and left iris features to enhance the recognition rate. Second, it adopts three algorithms, LGF, ZM, and LBP, for LFE, and all the extracted features are fused in a unified matrix to give a better description of the modalities. Third, a BBA is used for feature selection optimization, removing the redundancy due to the concatenation of the LFE data. The optimized subsets are obtained through the training of the ELM classifier, which offers a high learning speed. The experimental results reveal that the suggested LFE provides better performance, with an EER of 0.68%, a FAR of 1.35%, and a GAR of 77.3%. In addition, by applying the BBA, an enhancement of the GAR from 80.2% with the GA to 86.1% with the BBA is achieved. Furthermore, the results demonstrate that the proposed method, including BBA-based ELM training, offers high recognition performance and improves the system's accuracy, with an EER of about 0.6% and a GAR of about 91.6%, compared to the BBA-based SVM training method, with a 0.65% EER and an 88.2% GAR. They also highlight the superiority of the MBS over the UBS. In future work, the use of CNNs will be considered in the proposed MBS.
[1] Abdul-Al, M., Kyeremeh, G.K., Qahwaji, R., Ali, N.T., Abd-Alhameed, R.A. (2024). A deep dive into multi-modal facial recognition: A review case study. IEEE Access, 12: 179010-179038. https://doi.org/10.1109/ACCESS.2024.3486552
[2] Alghamdi, S.M., Jarraya, S.K., Kateb, F. (2024). Enhancing security in multimodal biometric fusion: Analyzing adversarial attacks. IEEE Access, 12: 106133-106145. https://doi.org/10.1109/ACCESS.2024.3435527
[3] Singh, M.K., Kumar, S., Nandan, D. (2024). Biometric face identification: Utilizing soft computing methods for feature-Based recognition. Traitement du Signal, 41(5): 2721-2728. https://doi.org/10.18280/ts.410545
[4] Kumar, M., Bhola, A., Tiwari, A., Gulhane, M. (2023). Face and Iris‐Based secured authorization model using CNN. Multimodal Biometric and Machine Learning Technologies: Applications for Computer Vision, Wiley Online Library, pp. 283-299. https://doi.org/10.1002/9781119785491.ch14
[5] Vannurswamy, K., Shekar, B.H., Pilar, B., Kotegar, A.K., Jiang, F. (2024). Enhanced multimodal biometric fusion with DWT, LSTM, and attention mechanism for face and iris recognition. In 2024 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI), Prayagraj, India, pp. 1-6. https://doi.org/10.1109/CVMI61877.2024.10781994
[6] Eskandari, M., Toygar, Ö. (2015). Selection of optimized features and weights on face-Iris fusion using distance images. Computer Vision and Image Understanding, 137: 63-75. https://doi.org/10.1016/j.cviu.2015.02.011
[7] Abdulhasan, R.A., Abd Al-latief, S.T., Kadhim, S.M. (2024). Instant learning based on deep neural network with linear discriminant analysis features extraction for accurate iris recognition system. Multimedia Tools and Applications, 83(11): 32099-32122. https://doi.org/10.1007/s11042-023-16751-6
[8] Eskandari, M., Toygar, Ö. (2014). Fusion of face and iris biometrics using local and global feature extraction methods. Signal, Image and Video Processing, 8: 995-1006. https://doi.org/10.1007/s11760-012-0411-4
[9] Sarıyanidi, E., Dağlı, V., Tek, S.C., Tunc, B., Gökmen, M. (2012). Local zernike moments: A new representation for face recognition. In 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, pp. 585-588. https://doi.org/10.1109/ICIP.2012.6466927
[10] Ammour, B., Bouden, T., Boubchir, L. (2018). Face-Iris multi‐Modal biometric system using multi‐resolution Log‐Gabor filter with spectral regression kernel discriminant analysis. IET Biometrics, 7(5): 482-489. https://doi.org/10.1049/iet-bmt.2017.0251
[11] Nelson, R.A., Roberts, R.G. (2021). Some multilinear variants of principal component analysis: Examples in grayscale image recognition and reconstruction. IEEE Systems, Man, and Cybernetics Magazine, 7(1): 25-35. https://doi.org/10.1109/MSMC.2020.3012304
[12] Abdul-Al, M., Kyeremeh, G.K., Qahwaji, R., Ali, N.T., Abd-Alhameed, R.A. (2024). A novel approach to enhancing multi-Modal facial recognition: Integrating convolutional neural networks, principal component analysis, and sequential neural networks. IEEE Access. 12: 140823-140846. https://doi.org/10.1109/ACCESS.2024.3467151
[13] Bouzouina, Y., Hamami, L. (2017). Multimodal biometric: Iris and face recognition based on feature selection of iris with GA and scores level fusion with SVM. In 2017 2nd International Conference on Bio-engineering for Smart Technologies (BioSMART), Paris, France, pp. 1-7. https://doi.org/10.1109/BIOSMART.2017.8095312
[14] Najah Kadhim, O., Hasan Abdulameer, M., Mahdi Hadi Al-Mayali, Y. (2024). A multimodal biometric system for Iris and face traits based on hybrid approaches and score level fusion. BIO Web of Conferences, 97: 00016. https://doi.org/10.1051/bioconf/20249700016
[15] Wang, Z., Wang, E., Wang, S., Ding, Q. (2011). Multimodal biometric system using face-Iris fusion feature. Journal of Computers, 6(5): 931-938. https://doi.org/10.4304/jcp.6.5.931-938
[16] Huo, G., Liu, Y., Zhu, X., Dong, H., He, F. (2015). Face-iris multimodal biometric scheme based on feature level fusion. Journal of Electronic Imaging, 24(6): 063020-063020. https://doi.org/10.1117/1.JEI.24.6.063020
[17] Al-Waisy, A.S., Qahwaji, R., Ipson, S., Al-Fahdawi, S. (2017). A multimodal biometric system for personal identification based on deep learning approaches. In 2017 Seventh International Conference on Emerging Security Technologies (EST), Canterbury, UK, pp. 163-168. https://doi.org/10.1109/EST.2017.8090417
[18] Ammour, B., Boubchir, L., Bouden, T., Ramdani, M. (2020). Face-Iris multimodal biometric identification system. Electronics, 9(1): 85. https://doi.org/10.3390/electronics9010085
[19] Hattab, A., Behloul, A. (2024). Face-Iris multimodal biometric recognition system based on deep learning. Multimedia Tools and Applications, 83(14): 43349-43376. https://doi.org/10.1007/s11042-023-17337-y
[20] Subban, R., Susitha, N., Mankame, D.P. (2018). Efficient iris recognition using Haralick features based extraction and fuzzy particle swarm optimization. Cluster Computing, 21: 79-90. https://doi.org/10.1007/s10586-017-0934-0
[21] Tawhid, M.A., Dsouza, K.B. (2018). Hybrid binary dragonfly enhanced particle swarm optimization algorithm for solving feature selection problems. Mathematical Foundations of Computing, 1(2): 181-200. https://doi.org/10.3934/MFC.2018009
[22] Sharifi, O., Eskandari, M. (2016). Optimal face-Iris multimodal fusion scheme. Symmetry, 8(6): 48. https://doi.org/10.3390/sym8060048
[23] Xiong, Q., Zhang, X., Xu, X., He, S. (2021). A modified chaotic binary particle swarm optimization scheme and its application in face-Iris multimodal biometric identification. Electronics, 10(2): 217. https://doi.org/10.3390/electronics10020217
[24] Nakamura, R.Y.M., Pereira, L.A.M., Rodrigues, D., Costa, K.A.P., Papa, J.P., Yang, X.S. (2013). Binary bat algorithm for feature selection. In Swarm Intelligence and Bio-Inspired Computation. Elsevier, pp. 225-237. https://doi.org/10.1016/B978-0-12-405163-8.00009-0
[25] BIT, IRIS. http://biometrics.idealtest.org/findTotalDbByMode.do?mode=Iris#/, accessed on Jun. 20, 2024.
[26] Lei, L., Kim, D.H., Park, W.J., Ko, S.J. (2014). Face recognition using LBP Eigenfaces. IEICE TRANSACTIONS on Information and Systems, 97(7): 1930-1932. https://doi.org/10.1587/transinf.E97.D.1930
[27] Mirjalili, S., Mirjalili, S.M., Yang, X.S. (2014). Binary bat algorithm. Neural Computing and Applications, 25: 663-681. https://doi.org/10.1007/s00521-013-1525-5
[28] Bhardwaj, I., Londhe, N.D., Kopparapu, S.K. (2019). Performance evaluation of fingerprint dynamics in machine learning and score level fusion framework. IETE Technical Review, 36(2): 178-189. https://doi.org/10.1080/02564602.2018.1444515