Research on Vehicle Identification Based on Multi-modal Fusion

Andi Gao*, Guanglei Zhang

International College, Krirk University, Bangkok 10220, Thailand

Corresponding Author Email: 
csandygao@gmail.com
Page: 371-382 | DOI: https://doi.org/10.18280/ijtdi.080301

Received: 29 May 2024 | Revised: 13 August 2024 | Accepted: 11 September 2024 | Available online: 27 September 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

In the realm of expressway development, the importance of vehicle identification technology is steadily rising. However, conventional systems often struggle to maintain recognition accuracy in adverse weather conditions such as heavy rain, snow, or haze. This study proposes an enhanced approach to license plate recognition that utilizes radar fusion technology to synthesize multiple information bands and improve image visibility. By addressing the challenges posed by inclement weather, the algorithm aims to overcome the limitations of conventional de-fogging methods. Through cooperative image processing, the proposed algorithm automatically identifies and enhances license plate regions, then applies a character recognition model to the segmented characters. This technique effectively mitigates the impact of complex backgrounds and noise, boosting recognition accuracy. Simulation analyses conducted in MATLAB validate the efficacy of the approach: the recognition accuracy rate reaches 97%, demonstrating superior accuracy even in adverse weather conditions.

Keywords: 

vehicle identification, multi-modal fusion, MATLAB, image enhancement processing

1. Introduction

Intelligent roadside sensing systems face limitations due to the high cost of LiDAR and its performance degradation under adverse weather and low-light conditions. Unlike LiDAR or cameras alone, which cannot directly provide target speed information, the combination of millimeter-wave radar and cameras offers significant advantages: it enables the acquisition of rich semantic information alongside accurate measurements of object distance and speed.

Millimeter-wave radar is less affected by extreme weather and lighting conditions, allowing for continuous operation, day or night, with lower processing power requirements. As a result, it delivers more precise estimates of distance and radial velocity. In contrast, cameras offer superior angular resolution and detection accuracy, providing valuable semantic information such as target contours, textures, and color distributions, thereby enhancing target classification capabilities.

As a result, millimeter-wave radar and monocular cameras have emerged as the primary sensing devices in vehicle identification systems, while license plate recognition itself tends to rely on the camera.

Traditional vehicle recognition methods, reliant on visual observation of road conditions, offer wide visual coverage, good visual fidelity, and strong target identification capabilities at a low equipment cost. Pirgazi et al. [1] detected vehicles in video frames and input images using a deep learning model that employs a single-shot detector (SSD) approach. He and Hao [2] performed affine transformation and presented a technique capable of identifying and rectifying numerous license plates with significant distortion or skew within a single photograph.

However, these traditional methods are prone to environmental and illumination interference, resulting in low acquisition accuracy and limited sensing distance. Chowdhury et al. [3] introduced a new Fractal Series Expansion (FSE) model for license plate enhancement and validated its effectiveness through qualitative and quantitative assessments, including recognition rate comparisons before and after enhancement. In addition, researchers still struggle to achieve precise depth sensing and continuous real-time motion state identification. Liu and Liu [4] harnessed Structure from Motion (SFM) to precisely determine the pose of a moving camera, thereby refining the accuracy of ground plane estimation. Leveraging multi-task 2D detection and 9-degrees-of-freedom 3D head positioning, Chen et al. [5] introduced a comprehensive perception system tailored to heavy-duty mining trucks operating in the challenging environments of open-pit mines. Wang et al. [6] argued that environmental perception technology serves as the cornerstone and prerequisite for intelligent vehicle decision-making and control, and offered insights into the current development status and control strategies of pivotal sensors, including machine vision, laser radar, and millimeter-wave radar.

In contrast, millimeter-wave radar technology offers large bandwidth, strong penetration, and anti-interference capability, along with high identification accuracy and a long detection range. A study discussed license plate detection and recognition techniques using various image datasets and emphasized the advantages and limitations of artificial neural network technology [7]. A system based on millimeter-wave radar for large-scale real-time tracking of vehicle trajectories has been proposed [8], which significantly increases the total number of captured vehicles compared to video clips. Wang [9] validated the effectiveness of a new labeling method and its related algorithms by processing millimeter-wave data and introducing a real-time decoding YOLOv5-StrongSORT-TMC algorithm to process the labeling information. Aziz et al. [10] proposed a radar and camera sensor fusion framework capable of automatically detecting, tracking, and classifying various targets on the road. However, there is room for improvement in both vehicle recognition efficiency and accuracy. Leveraging millimeter-wave radar in combination with video streams and radar fusion technology, this paper proposes a real-time target detection approach for obtaining vehicle trajectories. By employing model tracking and transmission, our method enables effective monitoring of road-area movement and real-time vehicle recognition.

2. Methodology

2.1 System architecture of radar and video fusion technology

Optic fusion technology is shown in Figure 1, and the principle of radar-video fusion technology is shown in Figure 2. The radar-video fusion technology system architecture encompasses several crucial modules, each contributing to the system's effectiveness [11, 12]:

1. Data acquisition module: This module primarily manages thermal infrared and visual images, proposing a radar imaging-based infrared imaging technique for comprehensive data acquisition.

2. Image preparation module: This module enhances image fusion by refining acquired pictures. Techniques such as noise reduction, contrast enhancement, and clarity adjustment are employed to optimize image quality.

3. Feature extraction module: Extracting relevant features (e.g., color, texture, shape) from both visible and thermal infrared image bands is essential for subsequent analysis.

4. Fusion algorithm module: Central to the paper is the fusion module, which achieves picture fusion by accurately merging thermal infrared and visible light images. Techniques like the weighted average method are commonly utilized for seamless integration (a minimal sketch follows this list).

5. Feature fusion module: This module further enhances algorithm performance by combining features from multiple sources, providing a comprehensive representation of image information. The fused features enhance the subsequent processing methods, leading to more thorough information reflection.

6. Object detection and recognition module: Integrated images and features are utilized for object identification. In the context of license plate recognition, deep learning algorithms are employed for precise detection and recognition of license plates.

7. Background processing module: This module undertakes additional processing and optimization of identified targets. Tasks such as license plate recognition and misreading correction are performed to enhance recognition accuracy.
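As referenced in module 4 above, the weighted-average fusion step can be sketched in a few lines. The sketch below is illustrative only: OpenCV, the weight w = 0.6, 3-channel visible input, and the file names visible.png/thermal.png are all assumptions, not specifics from the paper.

```python
import cv2

def fuse_weighted(visible, thermal, w=0.6):
    """Weighted-average fusion of a visible-light and a thermal-infrared image.

    w weights the visible band; (1 - w) weights the thermal band.
    """
    # Match channel count and size so cv2.addWeighted can combine the inputs.
    if thermal.ndim == 2:
        thermal = cv2.cvtColor(thermal, cv2.COLOR_GRAY2BGR)
    if thermal.shape[:2] != visible.shape[:2]:
        thermal = cv2.resize(thermal, (visible.shape[1], visible.shape[0]))
    # cv2.addWeighted computes w*visible + (1 - w)*thermal, with saturation.
    return cv2.addWeighted(visible, w, thermal, 1.0 - w, 0)

# Hypothetical file names, for illustration only.
visible = cv2.imread("visible.png")
thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
fused = fuse_weighted(visible, thermal, w=0.6)
cv2.imwrite("fused.png", fused)
```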

Our experiments demonstrate the effectiveness of the proposed data fusion approach. We found that:

Improved image quality: The fusion of thermal infrared and visible light images significantly enhances image quality, particularly in adverse weather conditions such as fog or rain.

Enhanced object detection: The multi-modal feature extraction and fusion lead to a significant improvement in object detection accuracy, especially for vehicles.

Accurate license plate recognition: The deep learning-based license plate recognition module achieves high accuracy rates, even in challenging lighting and weather conditions.

Through the collaborative efforts of these modules, the proposed method significantly enhances its utility in real-world scenarios, particularly on highways. By improving imaging quality and meeting application requirements such as vehicle license plate recognition in challenging environmental conditions, the system plays a vital role in obtaining necessary vehicle information on the road.

Figure 1. Optic fusion technology

Figure 2. Structure of radar fusion system

2.2 Main equipment

2.2.1 Millimeter-wave radar

A cornerstone of radar-video fusion technology is the millimeter-wave radar [13], a radio wave radar system operating within the millimeter wave frequency band. This radar system excels in detecting and ranging targets, even in adverse weather conditions such as rain, snow, and haze [14]. Key components of the millimeter-wave radar sensor include a high-frequency transmission device, a receiving device, a data processing unit, and a communication device. Its data processing module utilizes a multi-threaded high-speed processing chip, enabling real-time tracking and positioning of objects. The radar detector provides multiple independent information streams, including real-time speed, direction, target type (e.g., large, medium, small car, pedestrian), and azimuth of the target [15, 16].

2.2.2 Video surveillance devices

Millimeter-wave radar, as mentioned earlier, plays a pivotal role in radar-video fusion technology. Alongside radar systems, video surveillance devices are essential components [17]. These devices operate in tandem with the radar, which penetrates adverse weather conditions to obtain precise target location and movement information while the camera supplies the image stream.

Video recorder: A video recorder stores camera-received image signals on a hard disk for subsequent playback and analysis [18]. Traditional video recorders utilize tape or DVD media for image recording, while modern Digital Video Recorders (DVRs) or Network Video Recorders (NVRs) convert image data into digital format for storage on a hard disk, offering increased storage capacity and flexibility [19].

Monitoring server: The monitoring server serves as the central management and processing hub for cameras and captured images. This system facilitates real-time processing, storage, and distribution of video, supporting multiple channel transmission for remote image access and control [20].

2.3 Research on license plate recognition methods

Current license plate recognition technology has demonstrated the capability to enhance vehicle speed and traffic flow. However, in challenging conditions such as haze, recognition rates typically hover between 50 and 60 percent [21, 22]. Consequently, there is a pressing need to improve road sign recognition efficiency under hazy conditions. Traditional foggy images and their processing are shown in Figures 3 and 4.

2.3.1 Traditional denoising algorithm

Figure 3 shows images of vehicles in foggy weather; Figure 4 shows images captured at night. The foundation of early image enhancement techniques lies in the analysis of various photos or data. For instance, polarization-based algorithms have been employed to reduce noise and enhance visibility in polarized images to varying degrees, particularly in scenes with similar characteristics. While effective, these techniques are not suitable for real-time image restoration due to challenges in determining optimal polarization levels in rapidly changing scenes, such as those involving vehicles. Other image processing methods, such as histogram-based and Retinex-based algorithms, have also been utilized to mitigate fog effects and improve image contrast and detail, as in the studies [23, 24]. However, these algorithms often neglect image degradation, leading to suboptimal results.

Figure 3. Images of vehicles in foggy weather

Figure 4. Images of night

2.3.2 Atmospheric scattering model algorithm

The Atmospheric Scattering Model Algorithm represents a common approach for image de-fogging and de-noising, aimed at restoring image clarity and contrast degraded by atmospheric scattering [25]. This algorithm operates on the premise that every pixel in an image is affected by atmospheric scattering, resulting in blurring and limited brightness range. The fundamental principle of the Atmospheric Scattering Model Algorithm is as follows:

1. Estimate global atmospheric light:

Initially, the total atmospheric light is estimated. Typically, pixels with the highest brightness in an image represent atmospheric light, as they are directly illuminated by ambient light in the atmosphere.

$A=\max _{\mathrm{x} \in \mathrm{I}}(\mathrm{I}(\mathrm{x}))$                 (1)

where, I(x) represents the pixel value at position x in the image, and I is the pixel matrix of the entire image.

2. Estimate transmittance:

Estimate the transmittance of each pixel to understand the degree to which atmospheric scattering affects it. Transmittance describes the attenuation of light as it passes through the atmosphere and is related to atmospheric concentration and distance. Typically, transmittance is estimated from the ratio between the brightness of each pixel in the image and the global atmospheric light [26].

$\mathrm{t}(\mathrm{x})=1-\omega \min _{\mathrm{c} \in \mathrm{I}_{\mathrm{s}}}\left(\frac{\mathrm{I}_{\mathrm{c}}}{\mathrm{A}}\right)$                   (2)

where, x represents a pixel position in the image, Ic is the set of pixel values in the neighborhood centered on x, I is the pixel matrix of the entire image, and ω is the parameter controlling the strength of the haze removal.

3. Restore the de-fogged image:

Using the estimated global atmospheric light and transmittance, the de-fogged image is recovered. This is done by subtracting the atmospheric light from each pixel's original brightness and dividing the result by the transmittance before adding the atmospheric light back. This operation offsets the loss of brightness caused by atmospheric scattering and improves the contrast and sharpness of the image [27].

$J(x)=\frac{I(x)-A}{\max \left(t(x), t_0\right)}+A$                    (3)

where, t0 is a parameter that bounds the transmittance from below. This method is widely used at present, but the atmospheric scattering model algorithm performs poorly when the scene contains strong background-light variation.
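As a concrete illustration of Eqs. (1)-(3), the sketch below implements a dark-channel-style estimate of A and t followed by the recovery step. The window size, ω = 0.95, t0 = 0.1, the top-0.1% rule for selecting A, and the file name are all assumptions, since the text does not fix them.

```python
import cv2
import numpy as np

def dehaze(I, omega=0.95, t0=0.1, win=15):
    """Atmospheric-scattering-model de-fogging sketch of Eqs. (1)-(3).

    I: float32 BGR image scaled to [0, 1].
    """
    # Dark channel: per-pixel minimum over channels, then a local minimum filter.
    dark = cv2.erode(I.min(axis=2), np.ones((win, win), np.uint8))
    # Eq. (1): take the atmospheric light A from the brightest dark-channel pixels.
    flat = dark.reshape(-1)
    idx = flat.argsort()[-max(1, flat.size // 1000):]   # brightest 0.1% (assumed)
    A = I.reshape(-1, 3)[idx].max(axis=0)
    # Eq. (2): transmittance estimate t(x) = 1 - omega * min_c(I_c / A_c).
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2),
                                np.ones((win, win), np.uint8))
    # Eq. (3): recover the scene radiance J(x) with t bounded below by t0.
    t = np.maximum(t, t0)[..., None]
    return np.clip((I - A) / t + A, 0.0, 1.0)

img = cv2.imread("foggy_plate.png").astype(np.float32) / 255.0  # hypothetical file
out = (dehaze(img) * 255).astype(np.uint8)
```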

(1) Median filter

Median filtering is a simple and efficient denoising method. The basic idea is to replace the current pixel value with the median of the pixels around it. Median filtering is especially effective at removing impulse noise such as salt-and-pepper noise. In implementation, the median filter proceeds as follows:

1) Fix the size of the filter window.

2) Sort the pixels in the window, and use their median as the new value of the current pixel.

3) Slide the window across the image until all pixels are processed.

(2) Mean filtering

Mean filtering is one of the most commonly used filtering methods. It replaces the current pixel value with the average of the surrounding pixels. For zero-mean noise such as Gaussian noise, mean filtering performs well. Mean filtering proceeds as follows [28]:

1) Fix the size of the filter window.

2) Sum the values of all pixels in the window, then divide by the number of pixels in the window to obtain the new value.

3) Slide the window across the image until all pixels are processed.

(3) Gaussian filter

The Gaussian filter is a smoothing method based on the Gaussian function. It replaces the current pixel value with a Gaussian-weighted average of its neighborhood. The Gaussian filter can remove Gaussian white noise while preserving the information in the image. Gaussian filtering proceeds as follows:

1) Determine the neighborhood size and generate the Gaussian kernel from the Gaussian function.

2) Multiply the value of each pixel in the window by the corresponding Gaussian kernel value, and sum the products to obtain the new value of the current pixel.

3) Slide the window pixel by pixel, repeating step 2, until all pixels are processed.

(4) Bilateral filtering

Bilateral filtering is a nonlinear filtering method based on both the spatial distance between pixels and the similarity of their values. It can effectively remove noise while keeping boundaries sharp. Bilateral filtering proceeds as follows:

1) A fixed-size window is selected and two Gaussian kernel functions are used, one spatial and one for pixel-value similarity.

2) For each pixel in the window, compute its spatial weight and its similarity to the current pixel, and multiply the two to obtain its total weight.

3) Multiply the value of each pixel in the window by its total weight and sum the products to obtain the new pixel value.

4) Slide the window across the image until all pixels are processed.

(5) Wavelet denoising

Wavelet denoising is a nonlinear denoising method based on the wavelet transform. The algorithm decomposes the image in the wavelet domain, smooths the low-frequency part, and preserves the high-frequency information. While removing noise, it keeps image details and boundaries largely unchanged. Wavelet denoising proceeds as follows:

1) Apply the wavelet transform to the image to obtain the wavelet coefficients.

2) Threshold the wavelet coefficients: set coefficients below the threshold to zero and keep those above it.

3) Apply the inverse wavelet transform to the processed coefficients to obtain the denoised image.
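The classical filters described above map directly onto standard OpenCV and PyWavelets calls; the sketch below applies each in turn. The kernel sizes, σ values, wavelet basis, and threshold are assumed values, not parameters taken from the paper.

```python
import cv2
import numpy as np
import pywt  # PyWavelets, assumed available for the wavelet step

img = cv2.imread("noisy_plate.png")  # hypothetical input file

median   = cv2.medianBlur(img, 5)                      # median of each 5x5 window
mean     = cv2.blur(img, (5, 5))                       # arithmetic mean of each window
gaussian = cv2.GaussianBlur(img, (5, 5), sigmaX=1.5)   # Gaussian-weighted average
# Bilateral filtering: weights combine spatial distance and intensity similarity,
# so edges stay sharp while flat regions are smoothed.
bilateral = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# Wavelet denoising with soft thresholding of the detail coefficients.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
coeffs = pywt.wavedec2(gray, "db4", level=2)           # step 1: decompose
thr = 10.0                                             # assumed threshold
coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, "soft") for c in lvl)
                        for lvl in coeffs[1:]]         # step 2: threshold details
denoised = pywt.waverec2(coeffs, "db4")                # step 3: reconstruct
```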

2.4 Improved license plate recognition method

Liu et al. [29] proposed using the spatial distribution characteristics of gray spots to find the optimal near-normal distribution and calculate the distribution of gray values in the sky, improving surface detail. Building on the SSR and MSR algorithms, Vijayalakshmi et al. [30] used conventional truncation and stretching methods to process blurred images. Tao et al. [31] applied automatic de-fogging techniques and suggested a method for equalizing the histogram, and Anilkumar et al. [32] proposed a related histogram-balancing approach.

In foggy weather, highlights are lost in the image, and color distortion is more severe than a conventional histogram method can correct. After the mist is removed, the brightness, contrast, and clarity of the image still need to be restored, along with the color balance. Based on this, a new method is presented that splits the RGB image into its channels and uses an edge detection algorithm to balance the equalized result.

The RGB channels are equalized independently: the input image I is split into the R, G, and B channels, each channel is equalized by the histogram method, and the equalized results R', G', and B' are obtained.

The improved algorithm derivation process used in this paper [33]:

(1) Denoising

$\mathrm{I}(\mathrm{x})=\mathrm{J}(\mathrm{x}) \mathrm{t}(\mathrm{x})+\mathrm{A}(1-\mathrm{t}(\mathrm{x}))$                     (4)

(2) Histogram equalization

$r_k=\frac{s_{\mathrm{k}}-s_{\min }}{s_{\max }-s_{\min }} \cdot(L-1)$                    (5)

(3) Binarization

$\operatorname{LBP}(x, y)=\sum_{i=0}^7 s\left(I\left(x_i, y_i\right)-I(x, y)\right) \cdot 2^i$                       (6)

(4) Color channel equalization

$\mathrm{R}^{\prime}, \mathrm{G}^{\prime}, \mathrm{B}^{\prime}=$ histogram equalization $(\mathrm{R}, \mathrm{G}, \mathrm{B})$                     (7)

Equalize the brightness channel: Resynthesize the result of the equalized RGB channel into image J:

$\mathrm{J}=\operatorname{merge}\left(\mathrm{R}^{\prime}, \mathrm{G}^{\prime}, \mathrm{B}^{\prime}\right)$                         (8)

(5) Marginal computing

$\mathrm{G}=\sqrt{\mathrm{G}_{\mathrm{x}}^2+\mathrm{G}_{\mathrm{y}}^2}$                     (9)

$\theta=\arctan \left(\frac{\mathrm{G}_{\mathrm{y}}}{\mathrm{G}_{\mathrm{x}}}\right)$                     (10)

$E(x, y)=\left\{\begin{array}{l}1, \text { if } G(x, y)>T_1 \text { and } \theta(x, y) \in\left[\theta_1, \theta_2\right] \\ 0, \text { otherwise }\end{array}\right.$                (11)

(6) Character recognition algorithm

$c_i=\operatorname{CNN}\left(\mathrm{f}_{\mathrm{i}}\right)$                       (12)

where, ci represents the recognition result of the i-th character, and fi is the feature vector. By inputting the feature vector into the convolutional neural network, the recognition result of the corresponding character is obtained.
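A minimal sketch of the per-channel equalization and edge computation of Eqs. (7)-(11) follows. The threshold T1 and the omission of the direction window [θ1, θ2] (whose bounds the text does not specify) are simplifying assumptions.

```python
import cv2
import numpy as np

def enhance_and_edge(img_bgr, t1=100.0):
    """Sketch of Eqs. (7)-(11): per-channel histogram equalization,
    channel merge, then gradient magnitude/direction thresholding."""
    # Eqs. (7)-(8): equalize each color channel independently and re-merge.
    b, g, r = cv2.split(img_bgr)
    J = cv2.merge([cv2.equalizeHist(b), cv2.equalizeHist(g), cv2.equalizeHist(r)])
    # Eqs. (9)-(10): Sobel gradients, magnitude G and direction theta.
    gray = cv2.cvtColor(J, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    G = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.arctan2(gy, gx)  # available for an Eq. (11) direction window
    # Eq. (11): keep pixels whose magnitude exceeds T1 (direction test omitted).
    E = (G > t1).astype(np.uint8)
    return J, E
```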

Because the algorithm above is assembled from techniques in the literature, it does not yet reflect the core idea of this paper; the formulas are therefore refined in the following subsections.

2.4.1 Character segmentation based on improved edge detection

Traditional character segmentation algorithms, such as those based on edge detection, are often affected by noise and illumination, resulting in inaccurate segmentation results. In order to improve the robustness of character segmentation, this paper proposes an improved edge detection algorithm, which combines the advantages of Gaussian filter and Canny edge detection operator, and can effectively suppress noise and extract the edge of license plate characters.

Gaussian filter formula

$G(x, y)=\frac{1}{2 \pi \sigma^2} \exp \left(-\frac{x^2+y^2}{2 \sigma^2}\right)$                    (13)

where, G(x,y) is the pixel value of the filtered image, (x,y) is the pixel coordinate, and σ is the standard deviation of the Gaussian filter.

Canny edge detection operator

$L(x, y)=|G x(x, y)|^2+|G y(x, y)|^2$                    (14)

where, L(x,y) is the edge intensity, and Gx(x,y) and Gy(x,y) are the gradients of the image in the x direction and y direction, respectively.

Non-maximum suppression

$N(x, y)=\max (G(x, a), G(x, b))$                     (15)

where, N(x,y) is the edge intensity after non-maximum suppression, and G(x,a) and G(x,b) are the adjacent pixel values of the edge intensity L(x,y) in two directions, respectively.

Double threshold

$E(x, y)= \begin{cases}1, & \text { if } N(x, y) \geq T_1 \text { and } L(x, y) \geq T_2 \\ 0, & \text { otherwise }\end{cases}$                     (16)

where, E(x,y) is the final edge pixel value, and T1 and T2 are the upper and lower thresholds, respectively.
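Eqs. (13)-(16) amount to Gaussian smoothing followed by Canny detection, which OpenCV packages in two calls. A sketch under assumed parameters (5×5 kernel, σ = 1.4, thresholds 50/150, none of which the text specifies):

```python
import cv2

def detect_plate_edges(gray, sigma=1.4, t_low=50, t_high=150):
    """Improved edge detection: Gaussian smoothing (Eq. 13) followed by the
    Canny operator (Eqs. 14-16)."""
    # Gaussian filtering suppresses noise before gradient computation.
    blurred = cv2.GaussianBlur(gray, (5, 5), sigma)
    # cv2.Canny performs gradient computation, non-maximum suppression (Eq. 15),
    # and double-threshold hysteresis (Eq. 16) internally.
    return cv2.Canny(blurred, t_low, t_high)
```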

2.4.2 Character segmentation based on morphology

Morphology is an image analysis method based on set theory and topology, which can be used to extract shape features from images. In order to further improve the accuracy of character segmentation, this paper proposes a morphological character segmentation method, which uses the morphological operators of erosion, dilation, opening, and closing to extract the connected regions of license plate characters [34].

Erosion formula

$A_1 = B \ominus C$              (17)

where, A1 is the image after erosion, B is the original image, and C is the structuring element.

Dilation formula

$A_2 = B \oplus C$              (18)

where, A2 is the dilated image, B is the original image, and C is the structuring element.

Opening formula

$A_3 = (B \ominus C) \oplus C$              (19)

where, A3 is the image after the opening operation (erosion followed by dilation), B is the original image, and C is the structuring element.

Closing formula

$A_4 = (B \oplus C) \ominus C$              (20)

where, A4 is the image after the closing operation (dilation followed by erosion), B is the original image, and C is the structuring element.
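A sketch of Eqs. (17)-(20) applied to a binarized plate image, followed by connected-component extraction of candidate character regions; the 3×3 structuring element is an assumed choice.

```python
import cv2
import numpy as np

def extract_char_regions(binary_plate):
    """Morphological cleanup of a binarized plate image per Eqs. (17)-(20)."""
    C = np.ones((3, 3), np.uint8)            # structuring element C (assumed size)
    eroded  = cv2.erode(binary_plate, C)     # Eq. (17): B eroded by C
    dilated = cv2.dilate(binary_plate, C)    # Eq. (18): B dilated by C
    # Eq. (19): opening = erosion then dilation; removes small noise specks.
    opened = cv2.morphologyEx(binary_plate, cv2.MORPH_OPEN, C)
    # Eq. (20): closing = dilation then erosion; fills small gaps in strokes.
    closed = cv2.morphologyEx(binary_plate, cv2.MORPH_CLOSE, C)
    # Connected components on the opened image give candidate character regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(opened)
    return [stats[i] for i in range(1, n)]   # skip background label 0
```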

2.4.3 Improved character recognition algorithm

Character recognition based on deep learning

Deep learning is a machine learning method that automatically learns features from large amounts of data. In order to improve the accuracy of character recognition, this paper proposes a character recognition algorithm based on deep learning, which uses convolutional neural network (CNN) to extract character features and long short-term memory network (LSTM) to recognize character sequences [35].

Convolution kernel update formula

$W_{t+1}=W_t-a \nabla E\left(W_t\right)$                    (21)

where, Wt is the convolution kernel at time t, a is the learning rate, and ∇E(Wt) is the gradient of the loss function.

Pooling operation formula

$\mathrm{y}_{\mathrm{i}}=\max \left(\mathrm{x}_{\mathrm{i} 1}, \mathrm{x}_{\mathrm{i} 2}, \ldots, \mathrm{x}_{\mathrm{in}}\right)$                      (22)

where, yi is the feature value after pooling, and xi1, …, xin are the values in the i-th pooling window.

In order to improve the robustness of character recognition, this paper proposes a character recognition method based on synthetic data, which uses the data augmentation techniques of random cropping, flipping, and color jitter to generate a large number of character samples with different illuminations, angles, and positions, improving the model's robustness to noise and deformation [21].

Random cropping formula

$x_n=x[i 1: i 2, j 1: j 2]$                       (23)

where, xn is the cropped image, x is the original image, and (i1,j1) and (i2,j2) are the top-left and bottom-right coordinates of the cropped area.

Random flipping formula

$x_n=\operatorname{cv2.flip}(x, \text{axis})$                    (24)

where, xn is the flipped image, x is the original image, and axis is the flip code: 1 flips the image horizontally (around the vertical axis) and 0 flips it vertically (around the horizontal axis).

Color jitter formula

$x_n=x+\operatorname{np.random.uniform}(-\delta, \delta, x.\text{shape})$                  (25)

where, xn is the image after color jitter, x is the original image, and δ is the color jitter amplitude.
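The three augmentations of Eqs. (23)-(25) can be sketched as follows; the 90% crop ratio, the 0.5 flip probability, and the jitter amplitude δ = 16 are assumed values.

```python
import cv2
import numpy as np

def augment(x, delta=16):
    """Random crop, flip, and color jitter per Eqs. (23)-(25)."""
    h, w = x.shape[:2]
    # Eq. (23): random crop — pick a top-left corner (i1, j1), keep a 90% patch.
    ch, cw = int(h * 0.9), int(w * 0.9)
    i1 = np.random.randint(0, h - ch + 1)
    j1 = np.random.randint(0, w - cw + 1)
    x = x[i1:i1 + ch, j1:j1 + cw]
    # Eq. (24): random flip — flip code 1 flips horizontally, 0 vertically.
    if np.random.rand() < 0.5:
        x = cv2.flip(x, 1)
    # Eq. (25): color jitter — add uniform noise in [-delta, delta] per pixel.
    noise = np.random.uniform(-delta, delta, x.shape)
    return np.clip(x.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```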

2.5 Improved inclement weather image enhancement algorithm

Fog is one of the main factors that lead to blur and distortion of license plate image under bad weather conditions. Retinex algorithm is a classical image de-fogging algorithm, which restores clear images by estimating the illumination and reflection components of images. In this paper, the improved Retinex algorithm is used to de-fog license plate images in bad weather conditions [36].

Improved Retinex algorithm

$L(x, y)=\log (I(x, y))-\log (A(x, y))$                   (26)

In this formula, the clear de-fogged image is recovered by separating the image into the illumination component A(x,y) and the log-domain reflectance component L(x,y), where I(x,y) is the input foggy image.

Gaussian filter

$A(x, y)=G(x, y) * I(x, y)$                  (27)

First, the input foggy image I(x,y) is convolved with a Gaussian kernel G(x,y) to obtain the smoothed illumination estimate A(x,y). Gaussian filtering effectively removes noise in the image while retaining its large-scale structure.

Color constant

$\mathrm{C}=\operatorname{mean}(\mathrm{I}(\mathrm{x}, \mathrm{y}))$                  (28)

C represents the average gray value of the image, which can be used to adjust the brightness of the image after removing the fog. The mean function is used to compute the average of all pixels of I(x,y).

Image after fog removal

$I_{\text{defoggy}}(x, y)=\exp (L(x, y)+C)$                     (29)

The estimated reflectance component L(x,y) and the color constant C are added, and the result is exponentiated to obtain the de-fogged image Idefoggy(x,y).
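A single-scale Retinex sketch of Eqs. (26)-(29) follows. Because adding the raw mean gray value C inside the exponential of Eq. (29) would overflow the intensity range, the constant is taken in the log domain here; that adjustment, the scale σ = 30, and the ε guard are assumptions rather than specifics from the paper.

```python
import cv2
import numpy as np

def retinex_defog(I, sigma=30.0, eps=1.0):
    """Single-scale Retinex sketch of Eqs. (26)-(29).

    I: float32 grayscale image in [0, 255]; sigma is an assumed scale.
    """
    # Eq. (27): Gaussian-smoothed illumination estimate A(x, y).
    A = cv2.GaussianBlur(I, (0, 0), sigma)
    # Eq. (26): log-domain separation L = log(I) - log(A); eps avoids log(0).
    L = np.log(I + eps) - np.log(A + eps)
    # Eq. (28): color constant C from the mean gray value (log domain here).
    C = np.log(I.mean() + eps)
    # Eq. (29): exponentiate L + C to recover the de-fogged image.
    out = np.exp(L + C)
    return np.clip(out, 0, 255).astype(np.uint8)
```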

2.5.1 Image denoising based on bilateral filtering

Bad weather conditions such as rain and snow will cause noise in the license plate image and reduce the image quality. Bilateral filtering is an effective image denoising method, which can calculate the filtering weight according to the gray value and spatial distance of pixels, effectively retain the image edge information, and remove the noise. In this paper, an improved bilateral filtering algorithm is used to denoise license plate images in bad weather conditions.

Improved bilateral filtering

$I_{\text{denoised}}(x, y)=\frac{1}{Z(x, y)} \sum_{i=-n}^n \sum_{j=-n}^n G_s(i, j)\, G_r(I(x, y), I(x+i, y+j))\, I(x+i, y+j)$                    (30)

Z(x,y) is the normalization factor, and n is the filter window size.

Spatial gaussian kernel

$G_s(i, j)=\exp \left(-\left(i^2+j^2\right) /\left(2 \sigma_s^2\right)\right)$                     (31)

The spatial Gaussian kernel controls the relationship between the filter weight and the spatial distance between pixels. σs is the spatial standard deviation; the larger its value, the smoother the filtering result.

Improved contrast stretching

$I_{\max }(x, y)=\min \left(255, \frac{(I(x, y)-\min (I)) \cdot 255}{\max (I)-\min (I)}\right)$                (32)

where, Imax(x,y) represents the enhanced image, I(x,y)represents the input image, min(I) represents the minimum gray value of the image, and max(I) represents the maximum gray value of the image.

Minimum gray value

$\min (\mathrm{I})=\min _{\mathrm{x}, \mathrm{y}} \mathrm{I}(\mathrm{x}, \mathrm{y})$                 (33)

The min function is used to compute the minimum value of all pixels of I(x,y).

Maximum gray value

$\max (\mathrm{I})=\max _{\mathrm{x}, \mathrm{y}} \mathrm{I}(\mathrm{x}, \mathrm{y})$                        (34)

The max function is used to compute the maximum value of all pixels of I(x,y).
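The contrast stretch of Eqs. (32)-(34) reduces to a linear remapping of [min(I), max(I)] onto [0, 255]; a sketch (the flat-image guard is an added assumption):

```python
import numpy as np

def contrast_stretch(I):
    """Linear contrast stretch per Eqs. (32)-(34)."""
    lo, hi = float(I.min()), float(I.max())   # Eqs. (33)-(34)
    if hi == lo:                              # flat image: nothing to stretch
        return I.copy()
    stretched = (I.astype(np.float32) - lo) * 255.0 / (hi - lo)
    return np.minimum(255, stretched).astype(np.uint8)  # Eq. (32)
```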

2.5.2 Image enhancement based on histogram equalization

Histogram equalization can adjust the histogram of the image to make the gray value distribution more uniform, so as to improve the contrast and information of the image. In this paper, an improved histogram equalization algorithm is used to enhance the license plate image under bad weather conditions.

Improved histogram equalization

$\mathrm{I}_{\mathrm{m}}(\mathrm{x}, \mathrm{y})=\mathrm{N} * \operatorname{cdf}(\mathrm{I}(\mathrm{x}, \mathrm{y}))$                   (35)

where, Im(x,y) represents the equalized image, N represents the total number of pixels in the image, and cdf(I(x,y)) represents the cumulative distribution function.

Cumulative distribution function

$\operatorname{cdf}(I(x, y))=\frac{1}{N} \sum_{i=0}^{k} n_i$              (36)

where, k = I(x,y) is the gray value of the current pixel, nk represents the number of pixels with gray value k, and ni represents the number of pixels with gray value i.
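Eqs. (35)-(36) can be realized with a lookup table built from the cumulative histogram. The sketch below uses the conventional scaling by (L − 1) = 255 in place of the pixel count N, and assumes an 8-bit grayscale input.

```python
import numpy as np

def hist_equalize(gray):
    """Histogram equalization via the cumulative distribution (Eqs. 35-36)."""
    # n_k: count of pixels at each of the 256 gray levels (uint8 input assumed).
    hist, _ = np.histogram(gray.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / gray.size             # Eq. (36): (1/N) * sum of n_i
    lut = np.round(255 * cdf).astype(np.uint8)  # map each gray level through the cdf
    return lut[gray]                            # Eq. (35), scaled by 255
```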

2.5.3 Image enhancement based on local adaptive histogram equalization

Local adaptive histogram equalization is an improved method of histogram equalization, which can adjust the image histogram according to the local information of the image, so as to retain the details of the image better. In this paper, an improved local adaptive histogram equalization algorithm is used to enhance the license plate image under bad weather conditions.

Improved local adaptive histogram equalization

$\mathrm{I}_{\mathrm{a}}(\mathrm{x}, \mathrm{y})=\mathrm{N} * \operatorname{cdf}_{\mathrm{k}}(\mathrm{I}(\mathrm{x}, \mathrm{y}))$                    (37)

where, Ia(x,y) represents the image after adaptive equalization, N represents the total number of pixels in the image, and cdfk(I(x,y)) represents the local cumulative distribution function.
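Local adaptive histogram equalization is commonly realized with CLAHE (contrast-limited adaptive histogram equalization); the sketch below uses OpenCV's implementation with an assumed tile size and clip limit, and a hypothetical input file.

```python
import cv2

# Tile size and clip limit are assumed values, not taken from the paper.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
enhanced = clahe.apply(gray)  # Eq. (37): per-tile cdf mapping with clipping
```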

3. Results

This article uses the VRV data files. Using an infrared camera to photograph the car, a clear picture of the license plate can be obtained at night or in low light. The infrared imaging technique effectively overcomes the lack of lighting and enhances the image quality of the license plate.

The basic functions and components of the LPR system are: (1) image acquisition, which collects real-time vehicle images; (2) pre-treatment, which includes image binarization, edge detection, and edge tracking; (3) license plate location, which includes locating the plate region and character segmentation; and (4) character separation and recognition. When the vehicle reaches the trigger point, the trigger unit sends the captured real-time image to the vehicle recognition module. The vehicle recognition module analyzes the image content, determines the position of the license plate, segments the characters on the plate for identification, and finally outputs the license plate. Figure 5 illustrates the process of license plate recognition:

Figure 5. Flow chart of license plate recognition system

A color image consists of three components, representing red, green, and blue; graying is the process of making these components equal. Pixels with larger gray values are brighter (the maximum pixel value is 255, which is white), and pixels with smaller values are darker (the minimum pixel value is 0, which is black).

There are three main algorithms for image graying:

1. Maximum method: The converted values of R, G, and B all equal the largest of the three original values, that is, R=G=B=max(R, G, B). The grayscale image produced by this method is very bright.

2. Average method: The converted values of R, G, and B equal the average of the original values, that is, R=G=B=(R+G+B)/3. The grayscale image produced by this method is relatively soft.

3. Weighted average method: The values of R, G, and B are averaged with weights, namely R=G=B=ωRR+ωGG+ωBB, where ωR, ωG, and ωB are the weights of R, G, and B, respectively, and sum to 1. Different weights yield different grayscale images. Because the human eye is most sensitive to green, then red, and least sensitive to blue, setting ωG > ωR > ωB gives a grayscale image that is easy to recognize. In general, ωR=0.299, ωG=0.587, ωB=0.114 gives the best result.

This paper mainly uses the weighted average method for image graying.
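A sketch of weighted-average graying with the weights given above (the function name and the BGR channel order are implementation assumptions; OpenCV's cv2.cvtColor applies the same 0.299/0.587/0.114 weights):

```python
import cv2
import numpy as np

def to_gray_weighted(img_bgr):
    """Weighted-average graying: gray = 0.299 R + 0.587 G + 0.114 B."""
    w = np.array([0.114, 0.587, 0.299], dtype=np.float32)  # B, G, R weights
    gray = img_bgr.astype(np.float32) @ w                  # per-pixel dot product
    return gray.astype(np.uint8)

# Equivalent built-in: cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
```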

3.1 Radar point cloud data preprocessing

Millimeter-wave radar obtains target information by transmitting radar beams and receiving the radar waves reflected back from target objects. Typically, the radar beam is transmitted via multiple antennas, and the corresponding radar wave signals are received by multiple antennas as well. Analysis of the radar signals received by these antennas enables the extraction of crucial information such as the distance and speed of the target object. Subsequently, this information complements the image captured by the camera, providing a comprehensive understanding of the target object's characteristics. Figure 6 shows the application process of the point cloud algorithm.

Figure 6(a) shows the image before millimeter-wave radar processing. Figure 6(b) is the processing result of millimeter-wave radar point cloud image. As depicted in the figure, the blue point represents the license plate information of the vehicle, while the red point denotes the specific location information of the vehicle. During vehicle operation, millimeter-wave radar can effectively mitigate the adverse effects of inclement weather by transmitting and receiving radar waves to accurately determine the vehicle's position. Subsequently, the camera captures images of the vehicle, facilitating further license plate recognition. This process effectively addresses the challenge of capturing clear images under unfavorable weather conditions, ensuring robust performance in license plate recognition tasks.

Figure 6. Image before millimeter-wave radar processing and processing results of millimeter-wave radar point cloud image

3.2 Image preprocessing

To ensure robustness in license plate recognition, a diverse dataset comprising images captured under varying conditions—nighttime and foggy weather included—is essential. This diverse dataset ensures that license plates exhibit clarity amidst different background and lighting conditions. Simultaneously, perform the following image processing steps: Resize the images to a uniform size. This standardization simplifies subsequent processing, enabling consistent analysis across all images. Figure 7 shows the complete license plate image processing flow.

Figure 7. License plate processing flow chart

Figure 7(a) displays the raw, unprocessed input image captured for license plate recognition;

Figure 7(b) shows the result of applying a denoising algorithm to the original image (a), reducing noise and enhancing clarity;

Figure 7(c) presents the license plate image converted to pure grayscale, simplifying further processing;

Figure 7(d) highlights the edges detected in the grayscale image (c) using the Sobel edge detection algorithm, emphasizing boundaries between regions;

Figure 7(e) shows the edges detected using the Roberts edge detection algorithm, offering an alternative to the Sobel result (d);

Figure 7(f) depicts the localization of the license plate within the image, achieved through edge analysis and shape recognition;

Figure 7(g) illustrates the segmentation of the located license plate (f) into individual characters, isolating each character for recognition;

Figure 7(h) presents the final output of the license plate recognition system, showcasing the recognized characters from the segmented image (g).

3.3 Noise removal

Images captured in adverse weather conditions often suffer from noise interference, such as raindrops or snowflakes. To mitigate this interference and enhance subsequent processing, this paper proposes a novel joint image denoising algorithm. By effectively removing noise from the images, this algorithm significantly improves the recognition rate of license plates under challenging weather conditions.

3.4 Contrast enhancement

Addressing the challenges encountered in license plate recognition, enhanced joint image processing methods, such as adaptive histogram equalization, offer a solution to enhance image contrast. By adjusting the histogram dynamically, this technique improves the visibility of license plates, facilitating easier license plate localization and recognition in subsequent stages.

3.5 Edge enhancement

The edge features of license plates play a pivotal role in recognition tasks. However, in adverse weather conditions, these edges may lack clarity due to factors such as light scattering or blurring. To enhance the edge features of license plates, employing edge enhancement algorithms such as Sobel or Canny proves beneficial. These algorithms effectively extract edge information from the image, thereby aiding in subsequent license plate localization and character segmentation tasks.

3.6 License plate location

In challenging meteorological conditions, factors like light, rain, and fog often obscure the license plate within the image area. Consequently, prevailing methods for license plate location primarily rely on the shape and other distinctive characteristics of the license plate itself. Leveraging features such as patterns present in the image, this method accurately identifies the license plate's location, providing a precise area for subsequent segmentation and recognition processes.

3.7 Character segmentation

Once the license plate location is determined, character segmentation within the plate's boundaries becomes imperative. The objective is to isolate individual characters for subsequent recognition. Utilizing techniques such as connected-component analysis and projection analysis, characters are segmented based on inter-character gaps and projection distribution. This approach ensures accurate and complete segmentation, facilitating efficient character recognition.

3.8 Character recognition

Following character segmentation, each isolated character undergoes recognition. Character identification, the concluding stage of license plate recognition, aims to recognize the segmented characters accurately. In this study, Convolutional Neural Networks (CNNs) are employed for character recognition. Leveraging shape, texture, and other distinctive features, CNNs excel at recognizing Chinese characters, ensuring precise recognition outcomes.

Figure 8. Recognition results of the improved license plate image enhancement algorithm

Figure 8 shows the impact of the improved license plate image enhancement algorithm, presenting a direct comparison between a license plate image before processing and the same image after processing.

Figure 8(a) Before processing: the image suffers from imperfections such as poor lighting, blur, noise, and low contrast, making the license plate characters difficult to discern.

Figure 8(b) After processing: the image shows the output of the improved enhancement algorithm, with significant improvements in clarity, contrast, and sharpness that make the license plate characters easily readable and suitable for accurate character recognition. The specific improvements depend on the algorithm's focus, but the overall result is a significantly enhanced license plate image compared to its unprocessed counterpart.

3.9 Chapter summary

This section summarizes the key findings of our research.

Improved accuracy: Table 1 shows that our proposed method achieved a significantly higher accuracy rate (97%) than traditional methods (75%), demonstrating the effectiveness of our approach in challenging weather conditions.

Enhanced efficiency: The proposed method reduced processing time by over 50%, indicating improved efficiency.

Increased robustness: As shown in Table 2, our method demonstrated higher robustness to fog, enabling accurate license plate recognition even in adverse weather conditions.

Our findings demonstrate the effectiveness of the proposed method for license plate recognition in adverse weather conditions. The combination of image preprocessing techniques, robust feature extraction, and deep learning-based character recognition significantly improves accuracy and efficiency.

Table 1. Performance comparison

Metric | Traditional Method | Proposed Method
Accuracy | 0.75 | 0.97
Processing Time | 2.5 seconds | 1.2 seconds
Robustness to Fog | Low | High

Table 2. Performance under varying weather conditions

Weather Condition | Accuracy
Clear Weather | 0.95
Light Fog | 0.9
Heavy Fog | 0.85

4. Discussion

In the realm of license plate identification, common algorithms are examined and their efficacy in foggy weather is enhanced. While traditional vehicle license plate recognition algorithms perform well under uniform lighting conditions, their efficiency diminishes in adverse weather scenarios. Recognizing the challenges posed by foggy weather on license plate identification, this paper proposes improvements to enhance recognition accuracy. As shown in Figure 8, the recognition accuracy rate of the inclement weather license plate recognition system studied in this paper reaches 97%, enabling effective license plate recognition in inclement weather.

Various image de-noising algorithms, atmospheric scattering model algorithms, and filtering methods are explored, alongside an analysis of their limitations. Traditional de-noising and atmospheric scattering techniques often fall short in adverse weather conditions, necessitating the utilization of diverse filtering algorithms for image enhancement.

The key contributions of this research lie in:

Enhanced image preprocessing: By incorporating advanced denoising algorithms and atmospheric scattering models, we effectively mitigate the impact of fog on image quality. This preprocessing step significantly improves the clarity and contrast of license plate images, facilitating subsequent processing stages.

Robust feature extraction: Our method utilizes edge detection techniques and shape analysis to accurately locate and segment license plates even in challenging conditions. This robust feature extraction ensures reliable character recognition despite image degradation.

Deep learning for character recognition: The implementation of Convolutional Neural Networks (CNNs) for character recognition leverages their ability to learn complex patterns and features. This approach significantly enhances recognition accuracy compared to traditional methods.

Building upon this groundwork, an enhanced license plate recognition algorithm is introduced. By leveraging image processing technology in conjunction with edge detection methods, automatic recognition of license plate images in haze weather is achieved. This algorithm enhances image brightness, contrast, and resolution amidst foggy conditions, subsequently aligning and integrating the image with RGB brightness to yield optimal results. Simulation experiments are meticulously designed, and the algorithm's efficacy is thoroughly evaluated and refined. Experimental results demonstrate the algorithm's effectiveness in improving license plate recognition accuracy under haze weather conditions, thereby aiding in license plate location, character segmentation, and character recognition tasks.

4.1 Limitations and future work

While our proposed method achieves promising results, there are limitations that warrant further investigation:

The use of advanced image processing techniques and deep learning models can lead to increased computational complexity. Future work could focus on optimizing the algorithm for real-time applications. The performance of the algorithm may vary depending on the specific characteristics of the fog and the diversity of license plate designs. Further research is needed to ensure robust performance across different environments and license plate types.

Future research directions include: investigating the potential of advanced CNN architectures, such as residual networks or attention mechanisms, to further improve recognition accuracy; designing algorithms that can dynamically adjust to varying weather conditions and lighting scenarios; and combining data from multiple sensors, such as radar and LiDAR, to enhance license plate detection and recognition in challenging environments.

4.2 Impact and applications

The advancements presented in this research have significant implications for various applications. Improved license plate recognition in adverse weather can enhance traffic monitoring and enforcement systems. Robust license plate recognition is crucial for security applications, such as identifying vehicles involved in criminal activities. Accurate license plate recognition is also essential for autonomous vehicles to navigate and interact with their surroundings safely. By addressing the challenges of license plate recognition in adverse weather, this research contributes to the development of more reliable and efficient intelligent transportation systems.

5. Conclusion

In summary, this paper presents a practical and widely applicable scheme for vehicle license plate recognition in adverse weather environments. Future endeavors could focus on further optimizing the proposed method and enhancing system stability to accommodate more complex environmental conditions. Additionally, integrating cutting-edge technologies such as deep learning holds promise for further improving the accuracy of license plate recognition systems, advancing the development and practical application of this technology. It is hoped that the findings of this research will offer valuable insights for the advancement and real-world implementation of license plate recognition technology.

Acknowledgment

The authors would like to express their thanks for the training and support provided by Krirk University and Shandong University.

References

[1] Pirgazi, J., Sorkhi, A.G., Kallehbasti, M.M.P. (2021). An efficient robust method for accurate and real-time vehicle plate recognition. Journal of Real-Time Image Processing, 18(5): 1759-1772. https://doi.org/10.1007/s11554-021-01118-7

[2] He, M.X., Hao, P. (2020). Robust automatic recognition of Chinese license plates in natural scenes. IEEE Access, 8: 173804-173814. https://doi.org/10.1109/access.2020.3026181

[3] Chowdhury, P.N., Shivakumara, P., Jalab, H.A., Ibrahim, R.W., Pal, U., Lu, T. (2020). A new Fractal Series Expansion based enhancement model for license plate recognition. Signal Processing: Image Communication, 89: 115958. https://doi.org/10.1016/j.image.2020.115958

[4] Liu, T., Liu, Y. (2021). Moving camera-based object tracking using adaptive ground plane estimation and constrained multiple kernels. Journal of Advanced Transportation, 2021: 8153474. https://doi.org/10.1155/2021/8153474

[5] Chen, L., Li, Y.C., Li, L.X., Qi, S.Y., Zhou, J., Tang, Y.C., Yang, J.J., Xin, J.M. (2024). High-precision positioning, perception and safe navigation for automated heavy-duty mining trucks. IEEE Transactions on Intelligent Vehicles, 9(4): 4644-4656. https://doi.org/10.1109/tiv.2024.3375273

[6] Wang, B.Y., Han, Y., Tian, D., Guan, T. (2021). Sensor-based environmental perception technology for intelligent vehicles. Journal of Sensors, 2021: 8199361. https://doi.org/10.1155/2021/8199361

[7] Aruna, V.S., Ravi, S., Suruthi, M. (2024). License plate recognition using artificial neural network techniques. In Proceedings of the 6th International Conference on Communications and Cyber Physical Engineering, Hyderabad, India, pp. 297-304. https://doi.org/10.1007/978-981-99-7137-4_29

[8] Wang, J.H., Fu, T., Xue, J.T., Li, C.M., Song, H., Xu, W.X., Shangguan, Q.Q. (2023). Realtime wide-area vehicle trajectory tracking using millimeter-wave radar sensors and the open TJRD TS dataset. International Journal of Transportation Science and Technology, 12(1): 273-290. https://doi.org/10.1016/j.ijtst.2022.02.006

[9] Wang, Y. (2023). Integrated smart pavement systems for environment monitoring, localization, and traffic data collection. Doctoral dissertation, Hong Kong Polytechnic University. https://research.polyu.edu.hk.

[10] Aziz, K., De Greef, E., Rykunov, M., Bourdoux, A., Sahli, H. (2020). Radar-camera fusion for road target classification. In 2020 IEEE Radar Conference (RadarConf20), Florence, Italy, pp. 1-6. https://doi.org/10.1109/radarconf2043947.2020.9266510

[11] Tang, X.L., Zhang, Z.Q., Qin, Y.C. (2022). On-road object detection and tracking based on radar and vision fusion: A review. IEEE Intelligent Transportation Systems Magazine, 14(5): 103-128. https://doi.org/10.1109/MITS.2021.3093379

[12] Zhou, Y., Dong, Y.Y., Hou, F.J., Wu, J.Q. (2022). Review on millimeter-wave radar and camera fusion technology. Sustainability, 14(9): 5114. https://doi.org/10.3390/su14095114

[13] Jha, U.S. (2018). The millimeter Wave (mmW) radar characterization, testing, verification challenges and opportunities. In 2018 IEEE AUTOTESTCON, National Harbor, MD, USA, pp. 1-5. https://doi.org/10.1109/autest.2018.8532561

[14] Huang, R.Y., Zhu, K.T., Chen, S.T., Xiao, T., Yang, M., Zheng, N.N. (2021). A high-precision and robust odometry based on sparse MMW radar data and a large-range and long-distance radar positioning data set. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, pp. 98-105. https://doi.org/10.1109/itsc48978.2021.9565129

[15] Kundrotas, M., Janutėnaitė-Bogdanienė, J., Šešok, D. (2023). Two-step algorithm for license plate identification using deep neural networks. Applied Sciences, 13(8): 4902. https://doi.org/10.3390/app13084902

[16] Gautam, A., Rana, D., Aggarwal, S., Bhosle, S., Sharma, H. (2023). Deep learning approach to automatically recognise license number plates. Multimedia Tools and Applications, 82: 31487-31504. https://doi.org/10.1007/s11042-023-15020-w

[17] Elharrouss, O., Almaadeed, N., Al-Maadeed, S. (2021). A review of video surveillance systems. Journal of Visual Communication and Image Representation, 77: 103116. https://doi.org/10.1016/j.jvcir.2021.103116

[18] Hasan, M.K., Ali, M.O., Rahman, M.H., Chowdhury, M.Z., Jang, Y.M. (2021). Optical camera communication in vehicular applications: A review. IEEE Transactions on Intelligent Transportation Systems, 23(7): 6260-6281. https://doi.org/10.1109/tits.2021.3086409

[19] Tsakanikas, V., Dagiuklas, T. (2018). Video surveillance systems-current status and future trends. Computers & Electrical Engineering, 70: 736-753. https://doi.org/10.1016/j.compeleceng.2017.11.011

[20] Renita, J., Elizabeth, N.E. (2017). Network's server monitoring and analysis using Nagios. In 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, India, pp. 1904-1909. https://doi.org/10.1109/wispnet.2017.8300092

[21] Abdellatif, M.M., Elshabasy, N.H., Elashmawy, A.E., AbdelRaheem, M. (2023). A low cost IoT-based Arabic license plate recognition model for smart parking systems. Ain Shams Engineering Journal, 14(6): 102178. https://doi.org/10.1016/j.asej.2023.102178

[22] Alharbi, F., Zakariah, M., Alshahrani, R., Albakri, A., Viriyasitavat, W., Alghamdi, A.A. (2023). Intelligent transportation using wireless sensor networks blockchain and license plate recognition. Sensors, 23(5): 2670. https://doi.org/10.3390/s23052670

[23] Li, N., Teurnier, B.L., Boffety, M., Goudail, F., Zhao, Y., Pan, Q. (2021). No-reference physics-based quality assessment of polarization images and its application to demosaicking. IEEE Transactions on Image Processing, 30: 8983-8998. https://doi.org/10.1109/tip.2021.3122085

[24] Rio-Alvarez, A., De Andres-Suarez, J., Gonzalez-Rodriguez, M., Fernandez-Lanvin, D., Pérez, B.L. (2019). Effects of challenging weather and illumination on learning-based license plate detection in noncontrolled environments. Scientific Programming, 2019: 6897345. https://doi.org/10.1155/2019/6897345

[25] Pattanaik, A., Balabantaray, R.C. (2022). Enhancement of license plate recognition performance using Xception with Mish activation function. Multimedia Tools and Applications, 82(11): 16793-16815. https://doi.org/10.1007/s11042-022-13922-9

[26] Yu, H., Wang, X.Q., Shao, Y.L., Qin, F.W., Chen, B., Gong, S.L. (2022). Research on license plate location and recognition in complex environment. Journal of Real-Time Image Processing, 19(4): 823-837. https://doi.org/10.1007/s11554-022-01225-z

[27] Tao, T., Dong, D.C., Huang, S.Y., Chen, W., Yang, L.G. (2020). Object detection-based license plate localization and recognition in complex environments. Transportation Research Record Journal of the Transportation Research Board, 2674(12): 212-223. https://doi.org/10.1177/0361198120954202

[28] He, M.X., Hao, P. (2020b). Robust automatic recognition of Chinese license plates in natural scenes. IEEE Access, 8: 173804-173814. https://doi.org/10.1109/access.2020.3026181

[29] Liu, Q., Chen, S.L., Chen, Y.X., Yin, X.C. (2024). Improving license plate recognition via diverse stylistic plate generation. Pattern Recognition Letters, 183: 117-124. https://doi.org/10.1016/j.patrec.2024.05.005

[30] Vijayalakshmi, K., Dhanamalar, M., Lepakshi, V.A., Jamtsho, S. (2024). Smart checkpoint management system for automatic number plate recognition in Bhutan vehicles using OCR technique. SN Computer Science, 5: 579. https://doi.org/10.1007/s42979-024-02905-2

[31] Tao, L.B., Hong, S.H., Lin, Y.X., Chen, Y.B., He, P.G., Tie, Z.X. (2024). A real-time license plate detection and recognition model in unconstrained scenarios. Sensors, 24(9): 2791. https://doi.org/10.3390/s24092791

[32] Anilkumar, C., Rani, M.S., Venkatesh, B., Rao, G.S. (2024). Automated license plate recognition for non-helmeted motor riders using YOLO and OCR. Journal of Mobile Multimedia, 20(2): 239-266. https://doi.org/10.13052/jmm1550-4646.2021

[33] Wang, W.H., Tu, J.Y. (2020). Research on license plate recognition algorithms based on deep learning in complex environment. IEEE Access, 8: 91661-91675. https://doi.org/10.1109/access.2020.2994287

[34] Li, R.M., Wang, S., Jiao, P.P., Lin, S.C. (2023). Traffic control optimization strategy based on license plate recognition data. Journal of Traffic and Transportation Engineering (English Edition), 10(1): 45-57. https://doi.org/10.1016/j.jtte.2021.12.004

[35] Alharbi, F., Zakariah, M., Alshahrani, R., Albakri, A., Viriyasitavat, W., Alghamdi, A.A. (2023). Intelligent transportation using wireless sensor networks blockchain and license plate recognition. Sensors, 23(5): 2670. https://doi.org/10.3390/s23052670

[36] Pan, S., Chen, S.B., Luo, B. (2023). A super-resolution-based license plate recognition method for remote surveillance. Journal of Visual Communication and Image Representation, 94: 103844. https://doi.org/10.1016/j.jvcir.2023.103844