Proposed Iraqi License Plates Detection System Using LAB Color Space, CNN and YOLOv8-Seg Models

Mamoun Jassim Mohammed* | Mustafa Salam Kadhm | Ammar A. Al-Hamadani | Sufyan Othman Zaben

Department of Computer Engineering, College of Engineering, Al-Iraqia University, Baghdad 10011, Iraq

Department of Computer, College of Basic Education, Mustansiriyah University, Baghdad 10011, Iraq

Corresponding Author Email: mamoun.jassim@aliraqia.edu.iq

Page: 521-528 | DOI: https://doi.org/10.18280/isi.310220

Received: 17 November 2025 | Revised: 20 January 2026 | Accepted: 18 February 2026 | Available online: 28 February 2026

© 2026 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Automatic license plate detection is a fundamental task in intelligent transportation systems, particularly under challenging real-world conditions involving complex backgrounds, varying illumination, and diverse plate structures. This study presents a hybrid framework for Iraqi license plate detection that integrates LAB color space enhancement with deep learning-based detection and segmentation. The LAB color space is employed to improve contrast and visibility under varying lighting conditions, while a convolutional neural network (CNN) is used for feature extraction. A YOLOv8-based segmentation model is further incorporated to accurately localize and segment license plate regions through multi-scale feature fusion. The proposed framework effectively combines color-based preprocessing with advanced deep learning architectures to enhance detection robustness. Experimental results demonstrate that the method achieves high performance, with a precision of 94.6%, recall of 92.3%, and mean average precision (mAP@0.5) of 91.2%. Additional evaluations confirm improvements in localization accuracy and segmentation quality compared with conventional RGB-based and CNN-only approaches. The proposed method provides an efficient and reliable solution for real-time license plate detection in complex traffic environments.

Keywords: 

license plate detection, LAB color space, convolutional neural networks, YOLOv8 segmentation, intelligent transportation systems, image preprocessing, object detection, computer vision

1. Introduction

The steadily increasing number of vehicles in cities around the world has driven the development of modern traffic surveillance systems. This growth stems largely from the demand for transportation in sprawling metropolitan areas that expand every year, increasing their populations [1, 2]. Current transportation and traffic management systems face several challenges, including traffic jams, accidents, traffic violations, and vehicle theft. A number of solutions have therefore been proposed to address these issues, such as self-driving cars [3], smart traffic surveillance systems [4], vehicle-tracking systems, and vehicle speed detection systems [5]. Automatic License Plate Recognition (ALPR) is an essential task in intelligent transportation and surveillance, with several useful applications: automated traffic law enforcement, identification of stolen vehicles, toll violation detection, traffic flow control, etc. [6]. Although many methods exist, the problem remains challenging because license plate (LP) layouts and image acquisition conditions (lighting, capture angle, distance from the camera, and so on) vary widely from country to country [7, 8]. Multinational ALPR is particularly difficult because of the dissimilarities in LP layouts between countries and the unavailability of a public international LP dataset. The limited amount of research on international ALPR stems from these differing layouts and the lack of publicly available multi-country datasets [9]. A small number of studies have proposed multinational ALPR systems intended to operate on LPs from various nations; however, these approaches were validated on datasets sharing a common LP architecture. Our analysis indicates that the majority of LPs worldwide fall into one of two general categories: single row or double row.
The datasets used in other studies contain only single-row LPs, so extra procedures may be needed to identify double-row LPs [10, 11]. Numerous methods have been successfully devised and deployed over the years. Modern traffic surveillance systems employ advanced technologies: neural networks, classifiers, support vector machines (SVMs), optical character recognition (OCR), and various image processing methods [12]. License plate detection and segmentation are recognized in this field as crucial stages for many computer vision and image processing applications [1], because their accuracy and efficiency directly affect overall system performance.

In this work, an integrated methodology for locating and segmenting license plates is proposed, merging the advantages of color inspection through the LAB model, area segmentation with K-means color clustering [13, 14], and contour analysis with YOLOv8-seg, which is partly inspired by the segmentation approach of YOLACT [15]. After loading the input images, the first stage extracts features using a convolutional neural network (CNN); a Feature Pyramid Network (FPN) is then employed to fuse the multi-scale features. The location and corners of the license plate are subsequently detected by the detection and segmentation stage using YOLOv8-seg [16, 17]. The desired system must be fast and robust enough to detect and segment Iraqi plates from images and video streams, while maintaining high accuracy and reliability in challenging real-world situations. The LAB color model offers distinct advantages over other color spaces, providing perceptually uniform color representation and enhanced performance under varied lighting conditions and environments; we leverage these benefits to improve the accuracy of our license plate detection process. The rest of this paper is organized as follows: Section 2 reviews related work, Section 3 describes the proposed system, Section 4 presents results and discussion, and Section 5 concludes and outlines future work.

2. Related Work

Both conventional and contemporary approaches have contributed notable advances in license plate detection, identification, recognition, and tracking. Early methods mostly relied on edge-based algorithms and template matching to identify license plates from their structural features [18, 19]. However, these techniques produced high false positive rates and struggled with changing illumination conditions. Advances in motion segmentation algorithms have improved license plate tracking and detection [20, 21]. Motion segmentation, the process of recognizing and separating moving objects, enables accurate license plate recognition in difficult circumstances. Frame differencing [22], which computes pixel-wise differences between consecutive frames, was one of the earliest strategies. Background subtraction, another motion segmentation method, models the static background to facilitate the extraction of foreground objects [23]. Optical flow, an essential component of motion analysis, computes pixel-level motion vectors and simplifies the detection of moving objects [24].

Appearance-based techniques, especially deep learning models, have revolutionized license plate identification and recognition [25]. By learning discriminative features from data autonomously, CNNs significantly improve detection accuracy [26-28]. The scientific community has investigated several facets of license plate identification and classification. Color recognition has been studied as a means of identifying license plates by their hues [29]. Vehicle tracking in surveillance systems is an important topic that has been approached in a number of ways. Shape-based tracking strategies use shape features for dependable tracking [30], while feature-based tracking exploits distinctive visual traits to enhance tracking performance. The accuracy and efficiency of several tracking methods have been computed and compared with other recent tracking methods [31]. CNNs have also been successfully applied in several papers to recognize seven basic emotional states (anger, disgust, fear, happiness, neutral, sadness, and surprise), with results compared against other methods [32]. An Iraqi vehicle license plate recognition system has been proposed that uses two main steps: first, HSV color properties identify the vehicle class from the color of the right-hand rectangular bar of the license plate; second, the Arabic (Hindi numeral system) characters of the plate are recognized by analyzing the alphanumeric region [33]. That work was implemented by employing a support vector machine (SVM) to classify the input license plate images using convolutional-layer features.

The authors of the study [34] used machine learning for Iraqi plate recognition. KNN was used to match and detect the desired plates in different scenarios, and a Gaussian filter was used for preprocessing; the method achieved 90% accuracy on a collected dataset.

In the study by Al-Arbo et al. [35], an Iraqi license plate recognition framework is presented. The framework detects the desired plates using a Detection Transformer (DETR), which is then integrated with a Convolutional Recurrent Neural Network (CRNN) to recognize the characters of the detected plates. The framework was tested and evaluated on a newly collected dataset of 1,000 Iraqi license plate images and achieved a detection accuracy of 93%, showing that DETR outperformed other models such as SSD, R-CNN, and YOLOv5.

In the study by Hasan et al. [36], the authors proposed a fast and accurate license plate recognition system using a Modified Bidirectional Associative Memory (MBAM). The system recognizes the number plate in two phases: learning and convergence. It addresses the problems of noisy images and of memory speed caused by the number of stored images. The system achieved 99.6% accuracy for license plate recognition, 98% for character segmentation, and 100% for character recognition on newly collected license plate images.

Shuriji et al. [37] proposed a license plate recognition system based on two phases. In the first phase, the deep learning model MobileNets is trained on the input images; in the second phase, a matching method pairs Arabic letters and numerals, which are then trained together. The two phases are then combined into a single training phase to achieve the best possible results. The approach achieved 90.81% and 95.42% for CR and LR, respectively.

Recent works still suffer from low accuracy, harsh capture environments, and the limited number of Iraqi license plate images used. Therefore, a hybrid approach combining deep learning algorithms with traditional methods is adopted in this work to overcome these research gaps. The proposed license plate detection system is described in the next section.

3. Methodology

The proposed system aims to develop an effective automated Iraqi license plate detection system by combining several methods: LAB color conversion as preprocessing, followed by normalization, localization, segmentation, and character recognition using a CNN, YOLOv8-seg, and Tesseract for reliable results. The main stages of the proposed system are data collection, preprocessing, feature extraction, and license plate detection. Figure 1 shows the proposed block diagram.

Figure 1. Proposed block diagram

3.1 Data collection

License plate images were collected under various lighting and environmental conditions. The experiments were conducted on 1,400 RGB images of vehicles with Iraqi license plates, each of size 380 × 520 pixels. As shown in Figure 2, these images were obtained from various sources to represent diverse conditions such as different lighting, angles, and distances. The data collection process involved capturing images at varying camera-to-vehicle distances, as well as from different angles and under different intensities of light and shade.

Figure 2. Samples of collected dataset

3.2 Image preprocessing

The second stage of the proposed system is preprocessing. The first step is to convert the input image from RGB to Lab color space using the following formulas:

1- Normalize the RGB values: convert the RGB values from the range 0-255 to 0-1 by dividing by 255, as shown in Eq. (1).

$R^{\prime}=\frac{R}{255}, G^{\prime}=\frac{G}{255}, B^{\prime}=\frac{B}{255}$                  (1)

2- Apply the inverse sRGB gamma correction: if a normalized value is less than or equal to 0.04045, divide it by 12.92; if it is greater than 0.04045, apply the correction in Eq. (2):

$C_{linear}= \begin{cases}\frac{C}{12.92}, & C \leq 0.04045 \\ \left(\frac{C+0.055}{1.055}\right)^{2.4}, & C>0.04045\end{cases}$                 (2)
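Eqs. (1)-(2) can be sketched as small helpers; this is a minimal pure-Python illustration of the normalization and inverse sRGB gamma step (function names are illustrative, not from the paper):

```python
def normalize_rgb(r, g, b):
    """Eq. (1): map 8-bit RGB values to the [0, 1] range."""
    return r / 255.0, g / 255.0, b / 255.0

def srgb_to_linear(c):
    """Eq. (2): inverse sRGB gamma correction for a value c in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4
```

Note that both branches agree at the breakpoint 0.04045, and pure white (c = 1.0) maps back to 1.0, as the sRGB transfer function requires.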

3- Use the L (luminance) channel for contrast adjustment and the a/b channels for color segmentation to isolate license plates, as shown in Eq. (3).

$Y_{adjusted}=Y \times \text{gain} + \text{offset}$               (3)

4- Remove noise using filters (e.g., Gaussian or median filters), as shown in Eq. (4).

$G(x, y)=\frac{1}{2 \pi \sigma^2} e^{-\frac{x^2+y^2}{2 \sigma^2}}$                 (4)

5- Apply adaptive thresholding based on the luminance, as shown in Eq. (5).

$T(x, y)=\mu(x, y)-C$                  (5)

  • μ(x, y): The mean (or Gaussian-weighted sum) of the pixel intensities in a local neighborhood around (x, y).
  • C: A constant subtracted from the mean to fine-tune the threshold.
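As an illustration, the adaptive mean thresholding of Eq. (5) can be sketched in pure Python on a small grayscale grid (the block size and constant C below are illustrative defaults, not values from the paper):

```python
def adaptive_threshold(img, block=3, C=2):
    """Adaptive mean thresholding (Eq. 5): T(x, y) = mu(x, y) - C.
    img is a 2D list of grayscale values; returns a binary 0/255 map."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # local mean over the block neighborhood (clipped at the borders)
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            t = sum(vals) / len(vals) - C
            out[y][x] = 255 if img[y][x] > t else 0
    return out
```

Because the threshold is computed per pixel from its neighborhood, a plate region under shadow and one in direct sunlight are binarized consistently, which a single global threshold cannot achieve.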

To improve the contrast of the pixel intensities, histogram equalization is applied to the L channel using the Cumulative Distribution Function (CDF), as shown in Eq. (6).

$I_{eq}(x, y)=T(I(x, y))$                    (6)

where:

  • T: Transformation function computed by Eq. (7).

$T(I)=\operatorname{round}\left(\frac{CDF(I)-CDF_{min}}{M \times N-CDF_{min}} \times (L-1)\right)$             (7)

  • CDF(I): Cumulative distribution function of the intensity I.
  • CDFmin: Minimum value of the CDF.
  • M×N: Total number of pixels in the image.
  • L: Number of possible intensity levels (e.g., 256 for 8-bit images).
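The equalization in Eqs. (6)-(7) can be sketched as follows; this is a minimal pure-Python version operating on one channel stored as a 2D list (the function name is illustrative):

```python
def equalize(channel, levels=256):
    """Histogram equalization of one channel per Eqs. (6)-(7)."""
    flat = [p for row in channel for p in row]
    n = len(flat)  # M x N, total number of pixels
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    def T(i):  # Eq. (7)
        if n == cdf_min:
            return i  # degenerate single-intensity image
        return round((cdf[i] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[T(p) for p in row] for row in channel]
```

Applied to the L channel only, this stretches luminance contrast while leaving the a/b chroma channels, and hence the plate colors, untouched.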

The next preprocessing step removes unwanted noise using a 3 × 3 median filter; removing image noise enhances the overall system results. In addition, all images are resized to a common size: 320 × 320 was selected as the best input size for the license plate images.
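The 3 × 3 median filtering step can be sketched as below; this is a simplified pure-Python version (border pixels are left unchanged here, which is one of several common border policies and an assumption of this sketch):

```python
def median3x3(img):
    """3 x 3 median filter for impulse-noise removal on a 2D list;
    border pixels are copied through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[j][i]
                            for j in (y - 1, y, y + 1)
                            for i in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the 9 neighborhood values
    return out
```

Unlike a Gaussian blur, the median replaces an isolated salt-and-pepper pixel with a typical neighbor value while preserving the sharp character edges that later segmentation depends on.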

3.3 Region of Interest (ROI) extraction

An essential stage of the license plate detection system is Region of Interest (ROI) extraction. The ROI stage extracts only the most important section of the image, the one containing the informative features of the license plate. The main morphological operations applied during ROI preprocessing are dilation and erosion. These operations present the license plate area more clearly, allowing the subsequent stages to work accurately. Dilation joins the edges of bright regions in the plate by enlarging them, while erosion removes minor noise from the ROI, sharpening its features. Applying these operations therefore yields a cleaner and more prominent ROI. Dilation and erosion together efficiently handle shadows, uneven lighting, and partial occlusion.

After that, ROIs are extracted either through bounding boxes obtained via contour detection or through connected component analysis. The extracted ROIs are then passed to further stages such as character segmentation and recognition. Focusing only on the necessary ROIs in the vehicle images reduces the overall processing time and increases accuracy, since computation is limited to the relevant regions.
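As a simplified stand-in for the bounding-box step (full contour detection or connected component analysis, e.g. via OpenCV, is more involved), the axis-aligned box around a single binarized plate region can be computed like this; the function name is illustrative and the sketch assumes one foreground region:

```python
def bounding_box(mask):
    """Axis-aligned bounding box of the foreground (nonzero) pixels
    in a binary ROI mask. Returns (x_min, y_min, x_max, y_max),
    or None when the mask is empty."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)
```

Cropping the image to this box is what lets the later CNN and YOLOv8-seg stages operate on a small, plate-centered patch instead of the full frame.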

3.4 Convolutional neural networks model design

The Iraqi license plate recognition system takes the ROI images as input and produces accurate detection and classification results. A CNN extracts the spatial features of the ROI images through a number of convolutional layers. These features capture both the fine-grained edges and the large structural patterns of the license plate images. During training, the optimal features are selected through self-adjustment of the filters in each convolutional layer. The selected features then pass through pooling layers, which reduce the feature dimensions via feature mapping. Reducing the feature dimensions lowers the computational cost and improves robustness to changes in position and scale, while preserving the essential ROI information.

Following this, the condensed feature maps are fed to the fully connected layers, where the extracted features are integrated into a single representation. The network then determines whether a license plate is present and its exact location inside the ROI, outputting the probability of presence and the bounding box coordinates relative to the center.

Table 1. Configuration of convolutional neural networks (CNN)

Layer    | Output Shape      | Param
(Conv2d) | (1, 16, 320, 320) | 448
(SiLU)   | (1, 16, 320, 320) | 0
(Conv2d) | (1, 32, 160, 160) | 4640
(SiLU)   | (1, 32, 160, 160) | 0
Act.     | (1, 128, 20, 20)  | 0
Max.     | (1, 128, 20, 20)  | 0
Upsample | (1, 128, 80, 80)  | 0
Concat   | (1, 192, 80, 80)  | 0
(Conv2d) | (1, 1, 4, 8400)   | 16

The CNN training loss consists of an IoU loss and a classification loss. The IoU loss aligns the predicted bounding box with the ground truth, while the classification loss drives recognition accuracy. The optimization procedure combines Adam and SGD: Adam provides adaptive learning rates and fast convergence, while SGD is used during fine-tuning to enhance generalization. The configuration of the CNN is illustrated in Table 1.
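The IoU term of the loss can be sketched directly from its definition; this is a minimal pure-Python version for axis-aligned boxes (the `1 - IoU` form is one common IoU loss variant, assumed here since the paper does not specify which it uses):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def iou_loss(pred, gt):
    """IoU loss: 0 for a perfect box, approaching 1 as overlap vanishes."""
    return 1.0 - iou(pred, gt)
```

Minimizing this term pushes the predicted plate box toward full overlap with the annotated ground-truth box.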

3.5 Proposed architecture

The proposed system accepts an input image and uses an advanced segmentation and detection framework with hierarchical feature map extraction by a CNN to detect the license plate, as shown in Figure 3. In this setting, a deep convolutional architecture such as VGG, EfficientNet, or ResNet serves as the backbone of the system. The backbone extracts multi-scale features for better image representation, carrying both higher-level semantic information and low-level fine visual details. The detect and mask branches of the system are designed on top of these multi-scale features. The steps of the proposed architecture are listed in Algorithm 1.

Algorithm 1: CNN and YOLOv8-seg

1- Load the input image

2- Split the image into training, testing, and validation

3- For each input image

     4- Apply pre-processing stage

     5- Extract the feature using CNN

     6- Fuse the extracted features via PAN

     7- Design the detect and mask branches

     8- Generate the mask coefficients and the detection outputs via ProtoNet

     9- Refining the spatial details using the generated coefficients

     10- Identify the bounding box coordinates

     11- Identify class label

12- Discard overlapping detections using NMS

13- Apply the threshold filter

14- Crop the detected ROI

15- Return the detected ROI

In the detect branch, the mask coefficients and the detection outputs are generated using ProtoNet applied to the multi-scale feature maps. The mask coefficients refine the spatial details of the detected objects, while the detection outputs provide the bounding box coordinates and class labels. Non-Maximum Suppression (NMS) is then applied to discard overlapping and duplicate detections. In addition to NMS, threshold filters are applied to form the final detection results, after which ROI cropping takes place. Object locations are thereby determined accurately, making the output appropriate for detection and tracking purposes.
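The NMS step above can be sketched as the standard greedy procedure; this is a minimal pure-Python version (the 0.5 default threshold is illustrative, not the paper's setting):

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy Non-Maximum Suppression: visit boxes in descending score
    order and keep one only if it overlaps every already-kept box by at
    most iou_thresh. boxes: (x1, y1, x2, y2); returns kept indices."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Two near-identical detections of the same plate thus collapse to the single highest-scoring box, while plates of different vehicles in the same frame survive untouched.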

For every object found in the detect branch, the mask branch creates a pixel-level segmentation mask. Using the mask coefficients generated by ProtoNet, it produces a detailed, instance-specific mask that traces objects precisely along their boundaries. The proposed system thus relies on two main factors: the exact object contours and the object locations.

Figure 3. Proposed convolutional neural networks (CNN) and YOLOv8-seg architecture

The point-prompt mechanism is one of the distinguishing features of the proposed architecture, in which a combination of prompting methods is applied to boost adaptability and robustness. These include box prompting (which directs the model by specifying a region of interest), text prompting (which conditions the outputs on natural language queries), and multi-modal encoders such as CLIP, whose embeddings are learned jointly from visual and textual information to make predictions in context. The configuration of the proposed architecture is illustrated in Table 2.

Table 2. Configuration of convolutional neural networks (CNN) and YOLOv8-seg architecture

Section  | Output Shape      | Param
Backbone | (1, 32, 160, 160) | 7264
Backbone | (1, 64, 80, 80)   | 49408
Backbone | (1, 128, 40, 40)  | 197120
Backbone | (1, 256, 20, 20)  | 459520
Backbone | (1, 256, 20, 20)  | 164224
Neck     | (1, 128, 40, 40)  | 147840
Neck     | (1, 64, 80, 80)   | 37056
Neck     | (1, 128, 40, 40)  | 123264
Neck     | (1, 256, 20, 20)  | 492288
Head     | [(1, 5, 8400), {'boxes': (1, 64, 8400), 'scores': (1, 1, 8400), 'feats': [(1, 64, 80, 80), (1, 128, 40, 40), (1, 256, 20, 20)]}] | 750739

With a powerful CNN backbone, the dual-branch, multi-modal prompt system offers high versatility and accuracy. The system can undertake complex visual tasks, from instance segmentation to object detection and interactive segmentation, with well-grounded capabilities in changing and challenging environments. This integrated design for extraction, detection, segmentation, and cross-modal reasoning places the system among contemporary approaches to visual understanding and analysis.

4. Experiments and Evaluation

4.1 Experimental setup

In this paper, an accurate Iraqi license plate detection system using the LAB color space, YOLOv8-seg, and a CNN is proposed. The system was tested on real data gathered from Iraqi vehicle images, annotated manually and captured under various conditions of light, background, and angle. The collected dataset was split into three parts for evaluating the proposed system: 70% for training, 10% for validation, and 20% for testing.

All models were trained using PyTorch on a system equipped with an NVIDIA RTX 4090 GPU, 64GB RAM, and Intel Core i9 processor. Data augmentation techniques including scaling, rotation, and brightness adjustments were applied to improve model generalization.

4.2 Evaluation metrics

We adopted standard object detection and segmentation metrics to quantitatively assess the accuracy and robustness of our model:

  • Precision (P): Measures the proportion of correctly predicted license plates among all detected.
  • Recall (R): Indicates the proportion of ground truth license plates that were successfully detected.
  • F1-Score: Harmonic mean of precision and recall to provide a balance between false positives and false negatives.
  • Intersection over Union (IoU): Evaluates the overlap between the predicted bounding box and the ground truth.
  • Mean Average Precision (mAP): Calculated at IoU thresholds of 0.5 and 0.5:0.95.
  • Dice Coefficient and Jaccard Index: Used to evaluate the segmentation quality of license plate regions.
  • Detection Latency: Average time (ms) required for processing a single image, to assess the real-time capability of the system.
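The count-based metrics above follow directly from their definitions; this pure-Python sketch computes precision, recall, F1, and the two mask-overlap scores (function names are illustrative):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def dice_jaccard(pred, gt):
    """Dice coefficient and Jaccard index of two binary masks,
    given as flat 0/1 lists of equal length."""
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    dice = 2 * inter / total if total else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard
```

Dice weights the intersection twice against the combined mask sizes, so it is always at least as large as Jaccard, which matches the gap between the two scores reported in Table 3.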

4.3 Results and discussion

The proposed model achieved strong performance on the test dataset. Table 3 summarizes the evaluation results:

The high mAP and IoU scores demonstrate that our LAB color-based preprocessing enhances the localization of license plates under varying illumination and contrast conditions, particularly in challenging real-world scenarios common in Iraqi road environments.

Furthermore, the use of YOLOv8-seg contributed to improved boundary precision during plate segmentation, as evident from the Dice and Jaccard scores. The model maintains real-time processing capability with sub-25ms latency per image, making it suitable for deployment in surveillance and smart traffic monitoring systems.

Table 3. Detection and segmentation performance

Metric                 | Value
Precision              | 94.6%
Recall                 | 92.3%
F1-Score               | 93.4%
IoU (avg.)             | 87.8%
mAP@0.5                | 91.2%
mAP@[0.5:0.95]         | 82.6%
Dice Coefficient       | 89.5%
Jaccard Index          | 81.4%
Average Detection Time | 23.7 ms

4.4 Qualitative results

Figure 4 illustrates sample outputs of the proposed system on the test dataset. Each image displays the predicted bounding box or segmentation mask of the detected license plate, overlaid on the original image. The results demonstrate the model’s ability to accurately localize plates under various conditions, including low light, occlusions, shadows, and complex backgrounds. The extracted license plate regions from the detected results are shown in Figure 5.

Figure 4. Sample license plate detection results using convolutional neural networks (CNN) + YOLOv8-seg with LAB color space preprocessing

Figure 5. Sample of extracted license plate

4.5 Ablation study

To evaluate the effectiveness of the individual components of our approach, we performed an ablation study by systematically removing or modifying key components:

  1. RGB vs LAB Color Space: We compared the model’s performance with standard RGB input and LAB color space conversion.
  2. CNN-only vs YOLOv8-seg: We compared the standalone CNN-based detection with the full YOLOv8-seg integration.
  3. LAB + CNN-only vs LAB + YOLOv8-seg: To assess the synergy between LAB preprocessing and segmentation.

Table 4. Ablation study results

Configuration               | Precision | Recall | F1-Score | mAP@0.5 | IoU
CNN (RGB)                   | 81.2%     | 78.4%  | 79.7%    | 76.5%   | 69.3%
CNN (LAB)                   | 85.6%     | 82.1%  | 83.8%    | 81.7%   | 74.6%
YOLOv8-seg (RGB)            | 90.1%     | 88.3%  | 89.2%    | 88.4%   | 83.7%
YOLOv8-seg (LAB) (proposed) | 94.6%     | 92.3%  | 93.4%    | 91.2%   | 87.8%

Note: CNN = convolutional neural networks.

From the results in Table 4, it can be inferred that the LAB color space and YOLOv8-seg segmentation each improve detection performance. The challenge posed by the foreground and background color combinations of Iraqi license plates (white-black, yellow-black, and white-blue) is addressed by enhancing image contrast and color distinction through LAB preprocessing, while the YOLOv8-seg segmentation features yield improved boundary accuracy and higher IoU and mAP scores.

On the other hand, the proposed system outperforms the existing systems that target Iraqi license plates. Table 5 compares the results of the proposed system with those of other systems.

Table 5. Comparison of the obtained results and other results

Work     | Accuracy
[34]     | 90%
[33]     | 75%
[35]     | 93%
[37]     | 90.81-95.42%
Proposed | 94.6%

5. Conclusion and Future Work

An accurate Iraqi license plate detection system based on the LAB color space, YOLOv8-seg, and a CNN is proposed in this paper. Contrast enhancement through LAB color conversion helps handle practical Iraqi traffic conditions such as complex image backgrounds and shadow interference.

The evaluation metrics show satisfactory results, with 94.6% precision, 92.3% recall, and 91.2% mAP at the 0.5 IoU threshold. In addition, combining LAB preprocessing with YOLOv8-seg segmentation increased localization precision compared with CNN-only or traditional RGB-based detection methods. The system achieves real-time performance, processing a single input image in 23.7 ms.

In future work, additional Iraqi license plate datasets may be applied within a real-time traffic surveillance system. Moreover, a recognition stage could be included, using OCR to recognize the Arabic numerals on the plate.

References

[1] Jawale, M.A., William, P., Pawar, A.B., Marriwala, N. (2023). Implementation of number plate detection system for vehicle registration using IoT and recognition using CNN. Measurement: Sensors, 27: 100761. https://doi.org/10.1016/j.measen.2023.100761

[2] Enad, M.T. (2023). The main causes of traffic accidents on the (old) Baghdad-Ramadi road for the period (2010-2019). Journal of the College of Basic Education, 1(SI): 726-743. https://doi.org/10.35950/cbej.v1iSI.10648

[3] Shivakumara, P., Tang, D., Asadzadehkaljahi, M., Lu, T., Pal, U., Anisi, M.H. (2018). CNN-RNN based method for license plate recognition. CAAI Transactions on Intelligence Technology, 3(3): 169-175. https://doi.org/10.1049/trit.2018.1015

[4] Wang, W.H., Tu, J.Y. (2020). Research on license plate recognition algorithms based on deep learning in complex environment. IEEE Access, 8: 91661-91675. https://doi.org/10.1109/ACCESS.2020.2994287

[5] William, P., Shrivastava, A., Chauhan, P.S., Raja, M., Ojha, S.B., Kumar, K. (2023). Natural language processing implementation for sentiment analysis on tweets. In Mobile Radio Communications and 5G Networks. Lecture Notes in Networks and Systems, pp. 317-327. https://doi.org/10.1007/978-981-19-7982-8_26 

[6] Yang, L.X., Chen, H.P., Zhang, W. (2012). A license plate recognition algorithm for community monitor. In 2012 Fifth International Conference on Intelligent Networks and Intelligent Systems, Tianjin, China, pp. 45-48. https://doi.org/10.1109/ICINIS.2012.17 

[7] Silva, S.M., Jung, C.R. (2020). Real-time license plate detection and recognition using deep convolutional neural networks. Journal of Visual Communication and Image Representation, 71: 102773. https://doi.org/10.1016/j.jvcir.2020.102773

[8] Selmi, Z., Ben Halima, M., Pal, U., Alimi, M.A. (2020). DELP-DAR system for license plate detection and recognition. Pattern Recognition Letters, 129: 213-223. https://doi.org/10.1016/j.patrec.2019.11.007

[9] Henry, C., Ahn, S.Y., Lee, S.W. (2020). Multinational license plate recognition using generalized character sequence detection. IEEE Access, 8: 35185-35199. https://doi.org/10.1109/ACCESS.2020.2974973

[10] Li, H., Wang, P., Shen, C.H. (2019). Toward end-to-end car license plate detection and recognition with deep neural networks. IEEE Transactions on Intelligent Transportation Systems, 20(3): 1126-1136. https://doi.org/10.1109/TITS.2018.2847291

[11] Asif, M.R., Chun, Q., Hussain, S., Fareed, M.S., Khan, S. (2017). Multinational vehicle license plate detection in complex backgrounds. Journal of Visual Communication and Image Representation, 46: 176-186. https://doi.org/10.1016/j.jvcir.2017.03.020

[12] Singh, S. (2021). Automatic car license plate detection system: A review. IOP Conference Series: Materials Science and Engineering, 1116(1): 012198. https://doi.org/10.1088/1757-899X/1116/1/012198

[13] Wang, H., Sun, S.X., Ren, P. (2023). Underwater color disparities: Cues for enhancing underwater images toward natural color consistencies. IEEE Transactions on Circuits and Systems for Video Technology, 34(2): 738-753. https://doi.org/10.1109/TCSVT.2023.3289566

[14] Kim, D., Kim, E., Kim, H., Hwang, E. (2024). Enhanced image harmonization scheme using LAB color space-based loss function and data preprocessing. Journal of KIISE, 51(8): 729-735. https://doi.org/10.5626/JOK.2024.51.8.729

[15] Bolya, D., Zhou, C., Xiao, F.Y., Lee, Y.J. (2019). Yolact: Real-time instance segmentation. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), pp. 9156-9165. https://doi.org/10.1109/ICCV.2019.00925

[16] Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S. (2017). Feature pyramid networks for object detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 936-944. https://doi.org/10.1109/CVPR.2017.106

[17] Badr, A., Abdelwahab, M.M., Thabet, A.M., Abdelsadek, A.M. (2011). Automatic number plate recognition system. Annals of the University of Craiova-Mathematics and Computer Science Series, 38(1): 62-71. https://doi.org/10.52846/ami.v38i1.388

[18] Lee, N.S., Tarudin, N.F., Badrillah, M.I.M., Nusa, F.N.M., Latif, A.A., Anuar, M.M. (2023). Vehicle number plate standardization: Assessing towards its non-compliance. Malaysian Journal of Computing (MJoC), 8(2): 1495-1504.

[19] Garg, G., Agarwal, T., Kumar, A., Jaiswal, A.K. (2024). Implementing a robust license plate detection system – A review. In 2024 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, pp. 546-551. https://doi.org/10.1109/ICICT60155.2024.10544939

[20] Safran, M., Alajmi, A., Alfarhood, S. (2024). Efficient multistage license plate detection and recognition using YOLOv8 and CNN for smart parking systems. Journal of Sensors, 2024(1): 4917097. https://doi.org/10.1155/2024/4917097

[21] Moussaoui, H., El Akkad, N., Benslimane, M., El-Shafai, W., Baihan, A., Hewage, C., Rathore, R.S. (2024). Enhancing automated vehicle identification by integrating YOLO v8 and OCR techniques for high-precision license plate detection and recognition. Scientific Reports, 14: 14389. https://doi.org/10.1038/s41598-024-65272-1

[22] Kim, D., Kim, J., Park, E. (2024). AFA-Net: Adaptive Feature Attention Network in image deblurring and super-resolution for improving license plate recognition. Computer Vision and Image Understanding, 238: 103879. https://doi.org/10.1016/j.cviu.2023.103879

[23] Sridhar Gujjeti, D.R., Ganesan, D. (2024). Optimized background subtraction for high-performance video text detection. Journal of Theoretical and Applied Information Technology, 102(7): 2783-2795. https://www.jatit.org/volumes/Vol102No7/3Vol102No7.pdf

[24] Wang, C.H. (2024). Using super-resolution imaging for recognition of low-resolution blurred license plates: A comparative study of Real-ESRGAN, A-ESRGAN, and StarSRGAN. arXiv preprint arXiv:2403.15466. https://doi.org/10.48550/arXiv.2403.15466

[25] Kothai, G., Povammal, E., Amutha, S., Deepa, V. (2024). An efficient deep learning approach for automatic license plate detection with novel feature extraction. Procedia Computer Science, 235: 2822-2832. https://doi.org/10.1016/j.procs.2024.04.267

[26] Zhang, L., Zhang, R. (2024). PLATEMASTER-X: Advanced neural network for personalized license plate recognition in the United States. In the 30th International Scientific and Practical Conference "Youth, Education and Science Through Today's Challenges", Porto, Portugal, p. 146.

[27] Huang, H., Li, Z. (2021). FAFNet: A false alarm filter algorithm for license plate detection based on deep neural network. Traitement du Signal, 38(5): 1495-1501. https://doi.org/10.18280/ts.380525

[28] Omar, N. (2022). ResNet and LSTM based accurate approach for license plate detection and recognition. Traitement du Signal, 39(5): 1577-1583. https://doi.org/10.18280/ts.390514

[29] Johnson, H.R., Scofield, J.E., Kostic, B. (2024). The effect of color on license plate recall. Applied Ergonomics, 118: 104279. https://doi.org/10.1016/j.apergo.2024.104279

[30] Ramajo-Ballester, Á., Moreno, J.M.A., de la Escalera Hueso, A. (2024). Dual license plate recognition and visual features encoding for vehicle identification. Robotics and Autonomous Systems, 172: 104608. https://doi.org/10.1016/j.robot.2023.104608

[31] Murugan, V., Sowmyayani, S., Kavitha, J., Meenakshi, S. (2024). AI driven smart number plate identification for automatic identification. In 2024 IEEE International Conference on Computing, Power and Communication Technologies (IC2PCT), Greater Noida, India, pp. 1193-1197. https://doi.org/10.1109/IC2PCT60090.2024.10486444

[32] Naveen, P., Sivakumar, P. (2021). A deep convolution neural network for facial expression recognition. Journal of Current Science and Technology, 11(3): 402-410. https://doi.org/10.14456/jcst.2021.40

[33] Alexander Aremice, G. (2015). Iraqi vehicle new license plate style recognition. Journal of Engineering and Sustainable Development, 19(5): 1-15. https://jeasd.uomustansiriyah.edu.iq/index.php/jeasd/article/view/798

[34] Abd Alhamza, D.A., Alaythawy, A.D. (2020). Iraqi license plate recognition based on machine learning. Iraqi Journal of Information and Communications Technology, 3(4): 1-10. https://doi.org/10.31987/ijict.3.4.94

[35] Al-Arbo, Y., Mahmood, H.F., Alqassab, A. (2025). End-to-end license plate detection and recognition in Iraq using a detection transformer and OCR. Mesopotamian Journal of Computer Science, 2025: 329-340. https://doi.org/10.58496/MJCSC/2025/021

[36] Hasan, R.H., Aboud, I.S., Hassoon, R.M., Khioon, A.S.A. (2024). Hetero-associative memory based new Iraqi license plate recognition. Baghdad Science Journal, 21(9): 3026-3036. https://doi.org/10.21123/bsj.2024.8823

[37] Shuriji, M.A., Al-Behadili, H., Hussain, H.A. (2025). Iraqi’s car license plate recognition based on deep learning. Iraqi Journal for Computer Science and Mathematics, 6(3): 305-312. https://doi.org/10.52866/2788-7421.1294