Comparative Analysis of LiDAR and CCTV Sensor Accuracy at Signalized Intersections Under Varied Weather Conditions

Alireza Ansariyar*, Abolfazl Taherpour, Di Yang, Mansoureh Jeihani

Department of Transportation and Urban Infrastructure Systems (TUIS), Morgan State University, Baltimore 21251, USA

Corresponding Author Email: alans2@morgan.edu
Page: 237-249 | DOI: https://doi.org/10.18280/ijtdi.080203

Received: 28 March 2024 | Revised: 6 May 2024 | Accepted: 20 May 2024 | Available online: 30 June 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

The study evaluates the accuracy of LiDAR and CCTV technologies for vehicle and pedestrian count collection at a signalized intersection under varied weather conditions. Data collection occurred over two-hour periods during peak morning and evening hours using both technologies. Trajectory identification, entry- and exit-point determination, and anomaly filtering were used to analyze the vehicle counts. Pedestrian counts were analyzed using LiDAR point cloud data and CCTV footage to monitor movements across the intersection approaches. Analysis of the data showed differences in vehicle and pedestrian counts depending on the weather conditions: rainy weather produced the most variations, sunny conditions also showed differences, and snowy weather had the fewest discrepancies. Notably, the southbound through and eastbound right movements exhibited the largest variations in both vehicle and pedestrian counts. Despite challenges such as blind spots and weather impacts, both LiDAR and CCTV technologies hold promise for collecting traffic data. This study highlights the limitations of current traffic control systems; assessing the integrity of these systems and improving them is essential for traffic monitoring and for enhancing safety measures at signalized intersections.

Keywords: 

LiDAR technology, Closed-Circuit Television (CCTV), signalized intersections, intersection safety, limits of agreement (LoA)

1. Introduction

City transportation systems depend heavily on signalized intersections for the safe and effective movement of vehicles and pedestrians [1-3]. Emerging technologies have the potential to significantly enhance travel safety. If these technologies are widely adopted and integrated into smart transportation systems, they could lead to a substantial reduction in the frequency and severity of conflicts and crashes [4]. The purpose of this research is to examine and compare the effectiveness of Light Detection and Ranging (LiDAR) sensors and Closed-Circuit Television (CCTV) cameras at signalized intersections in different weather conditions. LiDAR sensors have gained attention for their potential in traffic control [5-7]. These sensors emit laser pulses and measure the time taken for the pulses to return, enabling distance and speed calculations. The advantages of LiDAR sensors include accurate object detection, the ability to create 3D maps of surroundings, and consistent performance regardless of lighting conditions [8, 9]. Accordingly, LiDAR sensors hold promise for operation in challenging weather conditions by providing real-time data on vehicle, pedestrian, and cyclist presence and movement, potentially reducing collision risks through alerts and enabling efficient traffic rerouting [10].

On the other hand, CCTV cameras have been widely utilized in traffic monitoring systems, providing the benefit of capturing data across a broad area [11, 12]. Additionally, CCTV cameras play a key role in law enforcement and efficient traffic control. However, they may encounter challenges in adverse weather conditions and low-light settings. Because of their 2D image output, CCTV cameras can hinder the assessment of object height and distance, impacting decision-making during bad weather. This study aims to examine the strengths and limitations of LiDAR sensors and CCTV cameras when monitoring traffic at signalized intersections under varied weather conditions. By conducting tests and comparative evaluations, this research seeks to offer insights into the real-world effectiveness of these technologies.

LiDAR sensors and CCTV cameras are often compared in the field of traffic management because each system has different capabilities and ways of operating [4]. It is important to note that both LiDAR and CCTV technologies are efficient for traffic monitoring at signalized intersections. In terms of hardware, LiDAR sensors are highly versatile, handling object detection, distance measurement, and speed estimation [13]. LiDAR sensors and CCTV cameras need to be compared in terms of their coverage, characteristics, advantages, and disadvantages. In traffic signal management, for instance, it might be found that LiDAR sensors are better at determining when vehicles are present at an intersection. However, this needs to be established in a systematic, experimental way.

By understanding how LiDAR and CCTV sensors perform under varying weather conditions, practitioners can proactively address risks. This study therefore seeks to provide information on how well LiDAR sensors and CCTV cameras perform in different weather conditions, with the goal of assisting decision-making and improving the reliability of transportation systems.

The significance of this study lies in its ability to provide insights for traffic engineers, policymakers, and practitioners involved in intersection safety and traffic management. Through an assessment of how both sensor technologies perform under varied weather conditions, this research not only enhances the understanding of their operational differences but also offers guidance on how they can be effectively utilized in real-world situations. Additionally, by highlighting the strengths and limitations of LiDAR sensors and CCTV cameras during adverse weather, this study contributes to the development of more robust and effective urban transportation systems. Furthermore, the comparative analysis carried out here helps decision makers make informed choices on sensor usage, placement, and configuration to optimize resource allocation and enhance intersection safety and efficiency. The remainder of this paper is structured as follows: Section 2: Literature Review, Section 3: Research Methodology, Section 4: Data Analysis Results, Section 5: Discussion, Section 6: Conclusion, and Section 7: References.

2. Literature Review

Signalized intersections play a key role in transportation systems, requiring data gathering to effectively manage traffic and enhance safety [2, 3, 14, 15]. With the development of sensing technologies such as LiDAR and CCTV, there is a growing interest in comparing their accuracy under a variety of weather conditions [4, 16-22]. Accurate data collection at signalized intersections is crucial for several reasons. Firstly, it helps improve safety by identifying risks and implementing countermeasures, especially during challenging weather situations. Secondly, accurate data supports traffic management efforts by optimizing signal timing and detecting incidents, ultimately leading to smoother traffic flow and less congestion [23-27]. Thirdly, it informs infrastructure planning decisions by forecasting future traffic demands and enabling effective prioritization of infrastructure investments [28, 29]. Lastly, it helps protect the environment by reducing fuel usage and emissions through better traffic control [30-32].

In recent years, researchers have developed approaches to evaluate the precision of LiDAR and CCTV data gathering, specifically looking at how they work in different weather situations [4, 33-35].

One efficient approach is the utilization of hybrid sensor fusion techniques [27, 36], which integrate LiDAR and CCTV data to exploit the strengths of each technology. LiDAR is an effective tool that provides three-dimensional data [37-39], while CCTV provides detailed images for monitoring objects [40, 41]. Combining these data types makes traffic monitoring and analysis both precise and reliable. In addition, machine learning techniques can be developed to examine and compare the detailed data obtained from LiDAR systems and CCTV cameras. Convolutional Neural Networks (CNNs) [42-44] and Recurrent Neural Networks (RNNs) [45-47] are among the most widely utilized techniques for extracting effective patterns and features. Techniques such as regression analysis and hypothesis testing [10, 48], error modeling [49], and Generalized Additive Models (GAMs) [50] have been widely applied to investigate the effects of weather conditions on sensor accuracy.

The Weather Research and Forecasting (WRF) model is designed to simulate a wide range of weather events with fine spatial and temporal detail [51, 52]; it can simulate weather patterns on both small and large scales.

Additionally, the Advanced Weather Interactive Processing System (AWIPS) [53] is widely utilized in assessing the effects of a broad range of weather conditions on sensor performance.

Previous studies have investigated the accuracy of LiDAR and CCTV sensors and their performance under various weather conditions at signalized intersections. While earlier research has explored comparisons of these sensors, this study addresses gaps in the state of the art by analyzing their performance across a wide range of weather scenarios.

3. Research Methodology

Real-time vehicle and pedestrian movement data were collected at a selected signalized intersection. To do this, a LiDAR sensor was installed to capture objects' movements.

Detecting and recognizing objects using a LiDAR sensor requires a series of tasks. The laser pulses emitted by a LiDAR sensor can be used to create a dense 3D point cloud. This point cloud is a large volumetric representation of the environment around the sensor, including the surrounding structures (such as trees or buildings) and any objects in its path (such as vehicles, cyclists, or pedestrians). Processing this large point cloud to obtain useful information, such as recognizing the types of objects at the intersection, is the next step.
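As a minimal illustration of this processing step, the Python sketch below removes near-ground returns and groups the remaining points into candidate objects; the height threshold and DBSCAN parameters are illustrative assumptions, not values taken from the deployed system.

```python
# Minimal sketch: isolating candidate objects in one LiDAR frame.
# Assumes `points` is an (N, 3) array of x, y, z returns in meters.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_objects(points, ground_z=0.2, eps=0.8, min_samples=10):
    """Drop near-ground returns, then cluster the rest into objects."""
    elevated = points[points[:, 2] > ground_z]          # crude ground removal
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(elevated)
    objects = []
    for label in set(labels) - {-1}:                    # -1 marks noise points
        cluster = elevated[labels == label]
        objects.append({
            "centroid": cluster.mean(axis=0),
            "size": cluster.max(axis=0) - cluster.min(axis=0),  # bounding box
            "n_points": len(cluster),
        })
    return objects
```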

SVM and CNN machine-learning algorithms are utilized [10] to distinguish between different objects based on their size, shape, and movement patterns. Both models help identify characteristics for detecting and classifying objects. With this method of extracting features, CNNs can learn representations directly and detect and classify objects based on their unique traits. Through training and optimization, these techniques help LiDAR sensors adapt to changes in the environment, ensuring consistent performance in monitoring traffic. Figure 1 illustrates the positioning of the LiDAR sensor as it sends out signals toward objects moving through the intersection.
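For the SVM side of this classification stage, a hedged sketch is given below. The feature set (bounding-box dimensions, point count, speed) and the toy training rows are assumptions for illustration only, not the authors' actual training pipeline.

```python
# Illustrative sketch: classifying clustered LiDAR objects with an SVM.
# Each row: [length_m, width_m, height_m, n_points, speed_mps] (assumed features).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X_train = [[4.5, 1.8, 1.5, 240, 8.0],   # vehicle
           [0.6, 0.6, 1.7, 35, 1.4],    # pedestrian
           [1.8, 0.6, 1.6, 60, 4.5]]    # cyclist
y_train = ["vehicle", "pedestrian", "cyclist"]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(clf.predict([[4.2, 1.7, 1.4, 210, 7.2]]))  # -> ['vehicle']
```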

The data collection process was conducted under three distinct weather conditions. A key aspect of the method involved using point cloud data to extract traffic details. Vehicle counts, pedestrian counts, and trajectories emerged as focal points of analysis, each delineated through careful processing of the acquired point cloud data. Through this rigorous processing, the movement of vehicles, pedestrians, and other objects was tracked within the intersection.
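The turning-movement labels used throughout the results tables (e.g., NS for southbound through, WS for eastbound right) appear to encode a trajectory's entry arm followed by its exit arm; under that inferred convention, the sketch below shows how a tracked trajectory could be mapped to a movement code. The half-plane arm zones are a hypothetical stand-in for the real intersection geometry.

```python
# Hedged sketch: deriving a turning-movement label from a trajectory's
# entry and exit arms (convention inferred from the tables, not stated).
def movement_code(entry_arm, exit_arm):
    """entry_arm/exit_arm are one of 'N', 'E', 'S', 'W'."""
    return entry_arm + exit_arm

def classify_trajectory(trajectory, arm_zones):
    """trajectory: ordered (x, y) points; arm_zones: arm -> membership test."""
    entry = next(a for a, inside in arm_zones.items() if inside(trajectory[0]))
    exit_ = next(a for a, inside in arm_zones.items() if inside(trajectory[-1]))
    return movement_code(entry, exit_)

# Example arm zones as simple half-plane tests around the intersection center.
arm_zones = {
    "N": lambda p: p[1] > 20, "S": lambda p: p[1] < -20,
    "E": lambda p: p[0] > 20, "W": lambda p: p[0] < -20,
}
traj = [(-30, 2), (-10, 1), (5, -8), (8, -30)]   # enters west, exits south
print(classify_trajectory(traj, arm_zones))       # -> "WS" (eastbound right)
```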

On the other hand, image processing algorithms such as clustering and segmentation techniques [54] were employed for object detection on the recorded videos from CCTV cameras to collect traffic data on three days with snowy, sunny, and rainy conditions, respectively. Traffic parameters including vehicle counts, pedestrian counts, and trajectories were then extracted. Figure 2 illustrates the CCTV camera views.

Figure 1. LiDAR perspective view of the intersection (left figure) and installed LiDAR location at the intersection of the case study (right figure)

Figure 2. CCTV camera view

Regarding CCTV cameras, strategically positioned cameras capture visual data of vehicular and pedestrian activities within the intersection. High-resolution video footage is recorded continuously to capture real-time events. Image processing algorithms, such as background subtraction or object tracking [55, 56], detect and track moving objects, including vehicles and pedestrians. Various features extracted from the detected objects, such as size, shape, and motion characteristics, facilitate classification into vehicles and pedestrians. Vehicle and pedestrian counts are estimated from the video data using these features, and machine learning algorithms, including ensemble methods and deep learning models, are employed to enhance the accuracy and precision of these count estimations.
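For illustration, a minimal OpenCV sketch of the background-subtraction stage is given below; the file name, thresholds, and per-frame blob counting are simplifying assumptions rather than the system actually deployed at the study site.

```python
# Minimal sketch: background subtraction on CCTV video with OpenCV,
# followed by contour-based detection of candidate vehicles.
import cv2

cap = cv2.VideoCapture("intersection_cctv.mp4")   # hypothetical file name
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be vehicles (area threshold is a guess).
    blobs = [c for c in contours if cv2.contourArea(c) > 1500]
    count += len(blobs)  # a real counter would track blobs across frames
cap.release()
```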

Attention was given to the specifics of the sensors, the mounting setups, and the necessary calibration procedures to ensure the system functioned as intended. The chosen sensors provided the appropriate resolution to identify the features of interest for this study. Sensors were placed to cover the four corners within the LiDAR's visual range, ensuring no impediments to visibility. CCTV cameras used for video-based traffic monitoring were selected based on their imaging capabilities. They were required to have high resolution, a fast frame rate, and a sufficient dynamic range to capture traffic scenes under various lighting and weather conditions. Drawing on practical experience, the cameras were positioned to provide a comprehensive view of traffic flow while minimizing blind spots and obstructions.

As can be seen in Figure 3, the LiDAR sensor was installed at the E Cold Spring Ln – Hillen Rd intersection [4, 10], the case study site for this research.

Figure 3. E Cold Spring Ln – Hillen Rd intersection [10]

Several important reasons underpin the selection of the E Cold Spring Ln – Hillen Rd intersection in Baltimore City as the site for studying how effectively LiDAR and CCTV sensors work in different weather conditions to monitor traffic—especially pedestrian and bicycle traffic—that might move through or close to the intersection. Notably, the E Cold Spring intersection is in close proximity to Morgan State University. It is a busy urban intersection where the research team could obtain the traffic volumes the study needed. The intersection also had a crash history, which was essential for conducting a before–after study. This history enabled a thorough evaluation of the sensors' effectiveness in collision prevention in different weather conditions.

Through analytical, visualization, and machine learning methods, the study aimed to uncover correlations, patterns, and irregularities in the collected data. This holistic approach supports accurate estimation of traffic parameters, thereby supporting traffic management strategies. The physical layout of the intersection is illustrated in Figure 3. The location of the LiDAR sensor is shown by a purple circle, while the purple rectangle illustrates the location of two CCTV cameras.

A Bland-Altman analysis was conducted [57] to compare the data collected by the LiDAR sensor and the CCTV cameras. This method assesses the agreement [57, 58] between the two datasets by plotting the difference between vehicle/pedestrian counts obtained from LiDAR ($L_i$) and CCTV ($C_i$) against their mean count ($M_i = \frac{L_i+C_i}{2}$) for each movement and time period.

Based on the Bland-Altman analysis, the difference between each pair of counts is calculated ($D_i = L_i - C_i$), and then the average of these differences is determined by Eq. (1).

$D^*=\frac{1}{n} \sum_{i=1}^n D_i$        (1)

The limits of agreement (LoA) are calculated as the mean difference (D*) ± 1.96 times the standard deviation of the differences (SD(D)) as can be seen in Eq. (2).

$\mathrm{LoA}=\mathrm{D}^* \pm 1.96 \times \mathrm{SD}(\mathrm{D})$     (2)

where SD(D) is the standard deviation of the differences $D_i$. The LoA represents the range within which the differences between measurements from the two methods are expected to fall with a given level of confidence (here, 95%). To calculate the LoA, the differences between paired measurements are computed, and then 1.96 times the standard deviation of these differences is added to and subtracted from the mean difference. A narrower LoA indicates agreement between the two methods, while a wider LoA shows variability or disagreement [58]. In other words, a narrower LoA means that measurements from both methods lie close to the mean difference, showing agreement; a wider LoA indicates greater spread or variability in the measurements, and thus increased disagreement between the two methods.
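As a concrete check of Eqs. (1) and (2), the short sketch below computes D* and the LoA for one example movement, using the NE column of Table 1 as the paired counts; the sample standard deviation (ddof=1) is an assumption, since the paper does not state which estimator was used.

```python
# Sketch of the Bland-Altman quantities defined in Eqs. (1)-(2).
import numpy as np

L = np.array([30, 35, 34, 34, 35])   # LiDAR counts (Table 1, NE column)
C = np.array([29, 34, 34, 37, 38])   # matching CCTV counts

D = L - C                  # per-interval differences D_i
D_star = D.mean()          # Eq. (1): mean difference D*
loa = (D_star - 1.96 * D.std(ddof=1),   # Eq. (2): lower limit
       D_star + 1.96 * D.std(ddof=1))   # upper limit
print(f"D* = {D_star:.2f}, LoA = [{loa[0]:.2f}, {loa[1]:.2f}]")
```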

4. Data Analysis Results – LiDAR and CCTV Camera Footage Analysis

4.1 Snowy weather (Friday, January 19th, 2024 - 10:30 AM to 12:30 PM)

Snowy weather poses challenges for traffic flow, such as reduced visibility, slippery roads, and longer stopping distances, increasing the dangers for drivers and pedestrians. Traditional methods of monitoring traffic, like CCTV cameras, may struggle to track and analyze traffic patterns in harsh conditions. In contrast, LiDAR technology shows promise as a solution by using laser pulses to create 3D maps. During snowy weather, CCTV cameras face risks that can significantly affect their performance in monitoring traffic. The reduced visibility caused by snowfall can result in degraded image quality and restricted detection capabilities. Additionally, snow buildup on camera lenses or mounts may block the camera's view. Furthermore, freezing temperatures can lead to icing on the lens or housing of the camera, distorting images and causing issues with mechanical parts.

LiDAR sensors face similar risks. Snow accumulation on the sensor's housing or emitter/receiver units may block laser beams, leading to erroneous readings or complete signal loss. Subzero temperatures might cause icing on the sensor's surfaces, further affecting its operation. Moreover, snowflakes or ice particles in the atmosphere can scatter laser pulses, causing noise or disruptions in the data captured by the LiDAR sensor. Additionally, extreme cold can influence the sensor's components, potentially resulting in malfunctions or operational issues.

For accurate analysis, a two-hour interval was selected for data analysis. Vehicle count data were collected to investigate the traffic conditions at the intersection. Table 1 displays the vehicle counts at the intersection under snowy weather conditions.

As can be seen in Table 1, there is a significant difference between the LiDAR and CCTV vehicle counts in the eastbound right (WS) direction. Blind spots near where the LiDAR sensor is installed can affect how well it works, especially in snowy weather. Buildings, trees, or other objects in proximity to the sensor's location can block its view, and in snowy conditions snow cover can make it even harder for the sensor to detect objects. This means that vehicles passing through these blind spots might not be picked up accurately by the LiDAR sensor, causing differences in vehicle counts compared to CCTV cameras. The difference in vehicle counts for vehicles moving eastbound toward the WS exit could thus be due to blind spots: placing the LiDAR sensor close to this area raises the chances of blind spots causing inaccuracies in data collection, especially when visibility is low due to snow.

Moreover, the angle and orientation of the LiDAR sensor relative to the movement direction may exacerbate blind spot effects, particularly during snowy conditions when obstructions are more prevalent.

To assess the accuracy of LiDAR compared to CCTV cameras during snowy weather conditions, Figure 4 illustrates the disparities in vehicle counts between the two technologies. In Figure 4, the vehicle counts recorded by CCTV cameras were subtracted from those recorded by the LiDAR sensor.
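The subtraction behind Figure 4 can be reproduced directly from Table 1; the sketch below does so for the 10:30-11:30 interval.

```python
# Sketch of the discrepancy calculation behind Figure 4: CCTV counts are
# subtracted from LiDAR counts for each movement (values: Table 1, row 1).
import pandas as pd

movements = ["NE", "NS", "NW", "ES", "EW", "EN",
             "SW", "SN", "SE", "WN", "WE", "WS"]
lidar = pd.Series([30, 193, 56, 14, 88, 36, 10, 193, 9, 52, 78, 4],
                  index=movements)
cctv = pd.Series([29, 202, 57, 15, 94, 34, 9, 207, 9, 48, 76, 20],
                 index=movements)
discrepancy = lidar - cctv   # negative => CCTV counted more vehicles
print(discrepancy)
```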

Figure 4. Discrepancies in vehicle counts between LiDAR and CCTV cameras across intersection movements during snowy weather

Table 1. Assessing vehicle counts across intersection movements in snowy weather

| Hourly Counts | Technology | NE | NS | NW | ES | EW | EN | SW | SN | SE | WN | WE | WS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10:30 – 11:30 | LiDAR | 30 | 193 | 56 | 14 | 88 | 36 | 10 | 193 | 9 | 52 | 78 | 4 |
| | CCTV | 29 | 202 | 57 | 15 | 94 | 34 | 9 | 207 | 9 | 48 | 76 | 20 |
| 10:45 – 11:45 | LiDAR | 35 | 199 | 56 | 15 | 81 | 39 | 10 | 210 | 15 | 52 | 77 | 4 |
| | CCTV | 34 | 208 | 56 | 17 | 88 | 39 | 10 | 216 | 17 | 47 | 76 | 23 |
| 11:00 – 12:00 | LiDAR | 34 | 194 | 60 | 16 | 77 | 33 | 11 | 216 | 18 | 53 | 77 | 5 |
| | CCTV | 34 | 199 | 60 | 17 | 89 | 33 | 11 | 226 | 21 | 49 | 71 | 21 |
| 11:15 – 12:15 | LiDAR | 34 | 189 | 60 | 21 | 83 | 35 | 7 | 209 | 24 | 53 | 91 | 3 |
| | CCTV | 37 | 202 | 59 | 22 | 91 | 36 | 7 | 226 | 27 | 50 | 89 | 22 |
| 11:30 – 12:30 | LiDAR | 35 | 191 | 59 | 22 | 88 | 31 | 7 | 222 | 22 | 51 | 94 | 2 |
| | CCTV | 38 | 203 | 59 | 22 | 98 | 32 | 7 | 237 | 26 | 50 | 96 | 23 |

Table 2. The percentage (%) differences of CCTV and LiDAR vehicle count in snowy weather

| Hourly Counts | NE | NS | NW | ES | EW | EN | SW | SN | SE | WN | WE | WS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10:30 – 11:30 | 3.4 | 4.5 | 1.8 | 7 | 6.4 | 5.9 | 11.1 | 6.8 | 0 | 8.3 | 2.6 | 80 |
| 10:45 – 11:45 | 2.9 | 4.3 | 0 | 11.8 | 8.0 | 0 | 0 | 2.8 | 11.8 | 10.6 | 1.3 | 82.6 |
| 11:00 – 12:00 | 0 | 2.5 | 0 | 5.9 | 13.5 | 0 | 0 | 4.4 | 14.3 | 8.2 | 8.5 | 76.2 |
| 11:15 – 12:15 | 8.1 | 6.4 | 1.7 | 5 | 8.8 | 2.8 | 0 | 7.5 | 11 | 6 | 2 | 86.4 |
| 11:30 – 12:30 | 7.9 | 5.9 | 0 | 0 | 10.2 | 3.1 | 0 | 6.3 | 15.4 | 2 | 2.1 | 91.3 |

Table 3. Assessing pedestrian counts across intersection approaches in snowy weather

| Hourly Counts | Technology | N | E | S | W |
|---|---|---|---|---|---|
| 10:30 – 11:30 | LiDAR | 1 | 0 | 3 | 0 |
| | CCTV | 2 | 0 | 3 | 0 |
| 10:45 – 11:45 | LiDAR | 1 | 2 | 2 | 0 |
| | CCTV | 2 | 1 | 2 | 0 |
| 11:00 – 12:00 | LiDAR | 1 | 2 | 2 | 0 |
| | CCTV | 3 | 1 | 2 | 0 |
| 11:15 – 12:15 | LiDAR | 2 | 3 | 2 | 0 |
| | CCTV | 4 | 2 | 2 | 0 |
| 11:30 – 12:30 | LiDAR | 1 | 3 | 2 | 0 |
| | CCTV | 2 | 2 | 2 | 0 |

Figure 5. The mean difference (D*) of vehicle counts of all time intervals for each movement in snowy weather conditions

Figure 5 illustrates the mean difference (D*) of all intervals for each movement at the intersection in snowy weather conditions.

Table 2 highlights the percentage differences between CCTV and LiDAR across the time intervals in snowy weather conditions.
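The tabulated percentages are consistent with the absolute LiDAR-CCTV difference expressed relative to the CCTV count; that normalization is inferred from the numbers rather than stated in the text. The sketch below (with a hypothetical helper name) reproduces two Table 2 entries under this assumption.

```python
# Sketch of the percentage differences reported in Table 2, assuming
# normalization by the CCTV count (inferred, not stated).
def pct_difference(lidar_count, cctv_count):
    return abs(lidar_count - cctv_count) / cctv_count * 100

print(round(pct_difference(4, 20), 1))     # WS, 10:30-11:30 -> 80.0
print(round(pct_difference(193, 202), 1))  # NS, 10:30-11:30 -> 4.5
```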

Gathering data on pedestrians in snowy conditions presents challenges for both CCTV and LiDAR technologies. One major obstacle is the reduced visibility caused by snowfall, which can make it difficult for CCTV cameras and LiDAR sensors to spot pedestrians. Snow buildup on camera lenses or LiDAR equipment can also hinder visibility and impede pedestrian detection. Moreover, freezing temperatures may lead to ice formation on the surfaces of cameras and LiDAR sensors, impacting their ability to accurately detect pedestrians.

In snowy weather, pedestrians may alter their behaviour and become more cautious, particularly when visibility on walkways is poor. Some pedestrians might cross at hazardous locations, which poses a challenge for data collection. Pedestrian counts across intersection approaches in snowy weather can be seen in Table 3. Figure 6 shows that the pedestrian counts from LiDAR and CCTV were consistent across all directions. The findings demonstrated that both LiDAR and CCTV systems could accurately track human movements at the intersection corners during inclement weather, and both provide reliable alternatives to traditional pedestrian counters. These tools effectively capture the patterns and quantities of human movement through high-traffic areas. When used properly, they offer a robust method for monitoring pedestrian flow in various conditions.

Figure 6 demonstrates that there is no discernible difference in the pedestrian count across the intersection. Both surveillance methods are equally adept at calculating how many pedestrians cross the intersection.

Figure 6. Discrepancies in pedestrian counts between LiDAR and CCTV cameras during snowy weather

4.2 Sunny weather (Friday, January 26th, 2024 - 10:30 AM to 12:30 PM)

Sunny conditions provide clear visibility, which greatly benefits the accuracy of data collected by both technologies. Table 4 and Table 5 show the vehicle counts across intersection movements in sunny weather and the percentage differences between CCTV and LiDAR during the corresponding intervals, respectively. The pedestrian count data were examined at different times throughout the day under sunny conditions, as indicated in Table 6.

Figure 7 demonstrates the discrepancies in vehicle counts between LiDAR and CCTV cameras across intersection movements during sunny weather. Additionally, Figure 8 illustrates the mean difference (D*) of vehicle counts for each movement in sunny weather conditions.

Pedestrian counts in sunny weather are depicted in Figure 9. The comparison of pedestrian counts from LiDAR and CCTV consistently yielded close outcomes, indicating only minor variations between the two technologies. This consistency highlights the effectiveness of both LiDAR and CCTV in tracking pedestrian movements at the intersection under sunny weather conditions. The dependable performance of these technologies suggests their suitability for collecting pedestrian data, offering insights to transportation planners and authorities for improving pedestrian safety and infrastructure planning.

Table 4. Assessing vehicle counts across intersection movements in sunny weather

| Hourly Counts | Technology | NE | NS | NW | ES | EW | EN | SW | SN | SE | WN | WE | WS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10:30 – 11:30 | LiDAR | 101 | 520 | 143 | 64 | 205 | 114 | 28 | 475 | 74 | 156 | 234 | 2 |
| | CCTV | 103 | 550 | 141 | 64 | 207 | 117 | 32 | 473 | 76 | 147 | 240 | 76 |
| 10:45 – 11:45 | LiDAR | 107 | 526 | 137 | 63 | 224 | 114 | 32 | 501 | 75 | 172 | 273 | 4 |
| | CCTV | 108 | 550 | 133 | 64 | 229 | 118 | 35 | 503 | 78 | 165 | 280 | 79 |
| 11:00 – 12:00 | LiDAR | 108 | 552 | 135 | 64 | 238 | 118 | 31 | 508 | 69 | 190 | 290 | 5 |
| | CCTV | 111 | 579 | 129 | 65 | 240 | 118 | 34 | 503 | 71 | 189 | 292 | 78 |
| 11:15 – 12:15 | LiDAR | 111 | 583 | 138 | 68 | 241 | 121 | 33 | 564 | 73 | 189 | 269 | 4 |
| | CCTV | 116 | 605 | 131 | 68 | 240 | 117 | 36 | 558 | 73 | 191 | 269 | 70 |
| 11:30 – 12:30 | LiDAR | 121 | 602 | 124 | 68 | 252 | 122 | 37 | 557 | 71 | 180 | 277 | 4 |
| | CCTV | 124 | 618 | 117 | 68 | 251 | 118 | 40 | 552 | 72 | 186 | 279 | 66 |

Table 5. The percentage (%) differences of CCTV and LiDAR vehicle count in sunny weather

| Hourly Counts | NE | NS | NW | ES | EW | EN | SW | SN | SE | WN | WE | WS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10:30 – 11:30 | 1.9 | 5.5 | 1.4 | 0 | 1 | 2.6 | 12.5 | 0.4 | 2.6 | 6.1 | 2.5 | 97.4 |
| 10:45 – 11:45 | 0.9 | 4.4 | 3 | 1.6 | 2.2 | 3.4 | 8.6 | 0.4 | 3.8 | 4.2 | 2.5 | 94.9 |
| 11:00 – 12:00 | 2.7 | 4.7 | 4.7 | 1.5 | 0.8 | 0 | 8.8 | 1 | 2.8 | 0.5 | 0.7 | 93.6 |
| 11:15 – 12:15 | 4.3 | 3.6 | 5.3 | 0 | 0.4 | 3.4 | 8.3 | 1.1 | 0 | 1 | 0 | 94.3 |
| 11:30 – 12:30 | 2.4 | 2.6 | 6 | 0 | 0.4 | 3.4 | 7.5 | 0.9 | 1.4 | 3.2 | 0.7 | 93.9 |

Table 6. Assessing pedestrian counts across intersection approaches in sunny weather

| Hourly Counts | Technology | N | E | S | W |
|---|---|---|---|---|---|
| 10:30 – 11:30 | LiDAR | 55 | 15 | 27 | 7 |
| | CCTV | 60 | 19 | 26 | 9 |
| 10:45 – 11:45 | LiDAR | 51 | 14 | 24 | 8 |
| | CCTV | 55 | 17 | 23 | 10 |
| 11:00 – 12:00 | LiDAR | 39 | 15 | 20 | 6 |
| | CCTV | 38 | 14 | 19 | 6 |
| 11:15 – 12:15 | LiDAR | 34 | 14 | 17 | 9 |
| | CCTV | 32 | 13 | 16 | 8 |
| 11:30 – 12:30 | LiDAR | 38 | 15 | 22 | 12 |
| | CCTV | 36 | 14 | 21 | 11 |

Figure 7. Discrepancies in vehicle counts between LiDAR and CCTV cameras across intersection movements during sunny weather

Figure 8. The mean difference (D*) of vehicle counts of all time intervals for each movement in sunny weather conditions

Figure 9. Discrepancies in pedestrian counts between LiDAR and CCTV cameras during sunny weather

4.3 Rainy weather (Wednesday, February 28th, 2024 - 17:00 to 19:00)

Rainy weather presents challenges like decreased visibility, slippery roads, and changes in traffic patterns, and it can increase the likelihood of traffic crashes and congestion. Understanding how LiDAR and CCTV function in rainy conditions is essential for developing strategies to reduce risks and improve traffic management systems, thus making transportation networks safer and more robust. Rain impairs visibility for cameras, making it difficult to capture clear footage for counting pedestrians and vehicles. Additionally, rain affects LiDAR systems by interfering with data collection: laser beams are less effective in rain, and LiDAR sensors can become obstructed and dirty. Baltimore City experienced inconsistent rainy weather, especially on Fridays, making data collection a real challenge. To ensure that the credibility of this research would not be compromised by these conditions, the authors decided to collect data not only on Fridays but also on one Wednesday. This decision was made to incorporate the target weather conditions for a comprehensive analysis, enhancing the reliability of the study's results. Despite collecting data on different days of the week, the primary focus remains on assessing sensor accuracy under varying weather conditions rather than comparing driver behavior across weekdays. By conducting the analysis and accounting for the specific weather conditions during data collection, the study aims to provide valuable insights into sensor performance beyond daily traffic variations. This approach ensures that the conclusions of the research are based on evidence and make a valuable contribution to the field of sensor technologies despite variations in data collection timing.

Table 7 presents the data collected by the LiDAR sensor and CCTV cameras in rainy weather conditions.

To assess the accuracy of LiDAR compared to CCTV cameras under rainy conditions, Figure 10 illustrates the disparities in vehicle counts recorded by the two technologies.

Table 7. Assessing vehicle counts across intersection movements in rainy weather

| Hourly Counts | Technology | NE | NS | NW | ES | EW | EN | SW | SN | SE | WN | WE | WS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:00 – 18:00 | LiDAR | 109 | 573 | 150 | 68 | 353 | 174 | 60 | 1055 | 63 | 226 | 398 | 41 |
| | CCTV | 119 | 609 | 147 | 68 | 353 | 176 | 60 | 1061 | 66 | 220 | 401 | 79 |
| 17:15 – 18:15 | LiDAR | 106 | 583 | 152 | 64 | 328 | 153 | 53 | 1001 | 57 | 225 | 387 | 44 |
| | CCTV | 114 | 612 | 149 | 60 | 328 | 155 | 53 | 1009 | 63 | 229 | 392 | 81 |
| 17:30 – 18:30 | LiDAR | 109 | 552 | 160 | 62 | 284 | 151 | 48 | 840 | 50 | 219 | 357 | 53 |
| | CCTV | 125 | 577 | 155 | 57 | 282 | 155 | 53 | 851 | 58 | 222 | 358 | 77 |
| 17:45 – 18:45 | LiDAR | 101 | 526 | 152 | 62 | 288 | 148 | 41 | 723 | 48 | 193 | 331 | 43 |
| | CCTV | 117 | 548 | 148 | 57 | 284 | 152 | 46 | 729 | 49 | 197 | 328 | 57 |
| 18:00 – 19:00 | LiDAR | 100 | 533 | 142 | 56 | 278 | 127 | 32 | 620 | 45 | 183 | 309 | 34 |
| | CCTV | 104 | 547 | 150 | 49 | 282 | 132 | 37 | 635 | 49 | 188 | 308 | 46 |

Table 8. The percentage (%) differences of CCTV and LiDAR vehicle count in rainy weather

| Hourly Counts | NE | NS | NW | ES | EW | EN | SW | SN | SE | WN | WE | WS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:00 – 18:00 | 8.4 | 5.9 | 2 | 0 | 0 | 1.1 | 0 | 0.6 | 4.5 | 2.7 | 0.7 | 48.1 |
| 17:15 – 18:15 | 7 | 4.7 | 2 | 6.7 | 0 | 1.3 | 0 | 0.8 | 9.5 | 1.7 | 1.3 | 45.7 |
| 17:30 – 18:30 | 12.8 | 4.3 | 3.2 | 8.8 | 0.7 | 3 | 9.4 | 1.3 | 13.8 | 1.4 | 0.3 | 31.2 |
| 17:45 – 18:45 | 13.7 | 4.0 | 2.7 | 9 | 1.4 | 2.6 | 10.9 | 0.8 | 2 | 2 | 1 | 24.6 |
| 18:00 – 19:00 | 3.8 | 2.6 | 5.3 | 14 | 1.4 | 3.8 | 13.5 | 2.4 | 8.2 | 2.7 | 0.3 | 26.1 |

Table 9. Assessing pedestrian counts across intersection approaches in rainy weather

| Hourly Counts | Technology | N | E | S | W |
|---|---|---|---|---|---|
| 17:00 – 18:00 | LiDAR | 38 | 13 | 29 | 8 |
| | CCTV | 48 | 15 | 32 | 11 |
| 17:15 – 18:15 | LiDAR | 26 | 12 | 19 | 7 |
| | CCTV | 32 | 12 | 21 | 7 |
| 17:30 – 18:30 | LiDAR | 22 | 8 | 15 | 8 |
| | CCTV | 28 | 8 | 16 | 8 |
| 17:45 – 18:45 | LiDAR | 20 | 6 | 13 | 9 |
| | CCTV | 31 | 7 | 16 | 9 |
| 18:00 – 19:00 | LiDAR | 17 | 7 | 11 | 6 |
| | CCTV | 33 | 8 | 14 | 6 |

Figure 10. Discrepancies in vehicle counts between LiDAR and CCTV cameras across intersection movements during rainy weather

Figure 11. The mean difference (D*) of vehicle counts of all time intervals for each movement in rainy weather conditions

Figure 12. Discrepancies in pedestrian counts between LiDAR and CCTV cameras during rainy weather

As can be seen in Figure 10, and similar to the results obtained in snowy and sunny weather conditions, the discrepancy between LiDAR and CCTV vehicle counts is considerable in the southbound through (NS) and eastbound right (WS) directions. Figure 11 illustrates the mean difference (D*) of all intervals for each movement at the intersection in rainy weather conditions.

Table 8 highlights the percentage differences between CCTV and LiDAR across the hourly intervals in rainy weather conditions.

The pedestrian count data were analyzed at different hourly intervals in rainy weather, as can be seen in Table 9.

Assessing pedestrian count data during rainy weather, as depicted in Figure 12, presents challenges for both LiDAR and CCTV technologies. While these technologies have shown acceptable accuracy in optimal conditions, the comparison between pedestrian counts obtained by LiDAR and CCTV reveals discrepancies during rainy weather, particularly in the northern approach of the intersection (southbound). Rainy weather introduces complexities such as decreased visibility and signal distortion, leading to reduced accuracy in pedestrian counting. LiDAR, reliant on laser beams for detection, may experience scattering or absorption of beams by raindrops, affecting its ability to accurately capture pedestrian movements, especially in areas with considerable gradients like the southbound approach of the intersection. Similarly, CCTV cameras may have trouble capturing images and identifying objects when raindrops block the lens or blur the picture, leading to decreased accuracy.

One primary reason for selecting these timeframes was to observe a range of traffic patterns across the week. By conducting counts on a Wednesday and on Fridays, the research aimed to capture the changes in traffic flow that typically occur on different days of the week, taking into account factors like routines, work schedules, and leisure activities. Additionally, spanning mid-morning to early afternoon on some days and late afternoon to early evening on another helps capture a range of traffic conditions, giving a view of intersection activity throughout the day. Moreover, selecting Fridays allows for considering fluctuations in traffic volume and behavior as people transition from weekday routines to weekend plans; Fridays often show distinctive traffic patterns due to early closures, increased weekend trips, and the anticipation leading up to the weekend.

4.4 Quantitative comparisons between LiDAR and CCTV technologies

As shown in Table 2, in snowy conditions, comparing the LiDAR and CCTV counts at the E Cold Spring Ln – Hillen Rd intersection shows differences in vehicle counts across the hourly periods. The percentage gaps indicate varying vehicle counts detected by the two sensor technologies, ranging from minor variations to substantial differences. For instance, between 10:30 AM and 11:30 AM, the percentage differences vary from 0% to 11.1% for most movements, showing modest disparities in vehicle counts between the LiDAR and CCTV systems. However, as snowfall intensity rose, which was especially evident in the later intervals, the percentage gaps increased notably, reaching up to 91.3%. These results imply that LiDAR sensors perform consistently well in snowy conditions by detecting and counting vehicles, while CCTV systems show more noticeable discrepancies due to visibility challenges caused by heavy snowfall. The differences observed in vehicle counts between LiDAR and CCTV during snowy weather stem from the limitations of each sensor technology. LiDAR sensors use emitted laser pulses to detect and measure the distances and speeds of surrounding objects, maintaining performance irrespective of weather conditions.

On the other side, CCTV cameras might encounter difficulties in recording vehicle activities and differentiating them from falling snow in the background. This could result in either underestimating or overestimating the number of vehicles. The intricate relationship among snow buildup, decreased visibility, and road conditions can affect how CCTV cameras work, causing variations in vehicle counts when compared to LiDAR sensors.

As can be seen in Table 5, in sunny weather and during the 10:30-11:30 AM interval, percentage differences range from 0.4% to 12.5% for most movements, indicating minor to moderate differences in the number of vehicles detected by LiDAR and the CCTV cameras. Notably, the largest gap of 97.4% is seen in the WS direction, indicating a substantial discrepancy in vehicle counts between CCTV and LiDAR. These discrepancies could be due to factors like objects being partially or fully hidden from the CCTV camera's view, leading to inaccuracies in counting vehicles. Moreover, variations in lighting and camera placement can affect the precision of CCTV counts, contributing to the observed differences. On the other hand, LiDAR sensors consistently performed well under sunny conditions, using laser technology to accurately track vehicle presence and movement and producing more dependable vehicle counts.

The results highlighted that inclement weather, such as snow or rain, presents significant challenges for the CCTV cameras. The results, however, indicate that LiDAR sensors can collect data more accurately under these conditions. Indeed, LiDAR sensors maintain performance in bad weather by using laser-based detection to overcome weather-related disruptions and provide reliable vehicle count data. Overall, these findings emphasize that LiDAR sensors are technically superior to CCTV cameras for dependable traffic monitoring in challenging weather conditions. Accordingly, LiDAR technology could enhance safety at intersections and improve traffic management strategies in areas with challenging weather.

5. Discussion

The limits of agreement (LoA) represent the extent of variability between two measurement techniques [58]. Essentially, the LoA defines the range within which the discrepancies between measurements obtained from the two methods are anticipated to lie with a given degree of certainty. The subsequent sections provide an interpretation of the results.

5.1 Vehicle counts comparison from LiDAR and CCTV technologies

In the context of comparing vehicle and pedestrian count data collected by LiDAR and CCTV technologies, the LoA serves as a benchmark for comparing the two datasets. The mean difference, i.e., the average difference between the counts from LiDAR and CCTV, acts as the central point of the LoA. The upper and lower boundaries are then set by adding and subtracting 1.96 times the standard deviation of the differences from this mean difference. This range captures about 95% of the variation between the datasets, giving insight into their alignment.
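A Bland-Altman plot makes this range visible at a glance; the sketch below draws one for the example NE counts used in the earlier LoA sketch, assuming matplotlib is available.

```python
# Sketch of a Bland-Altman plot for one movement (NE counts, Table 1).
import matplotlib.pyplot as plt
import numpy as np

L = np.array([30, 35, 34, 34, 35])   # LiDAR counts
C = np.array([29, 34, 34, 37, 38])   # CCTV counts
mean = (L + C) / 2                   # M_i, the per-interval mean count
diff = L - C                         # D_i, the per-interval difference
d_star = diff.mean()
sd = diff.std(ddof=1)

plt.scatter(mean, diff)
plt.axhline(d_star, linestyle="--", label="mean difference D*")
plt.axhline(d_star + 1.96 * sd, color="grey", label="upper LoA")
plt.axhline(d_star - 1.96 * sd, color="grey", label="lower LoA")
plt.xlabel("Mean of LiDAR and CCTV counts")
plt.ylabel("LiDAR - CCTV")
plt.legend()
plt.show()
```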

The LoA plays a key role in assessing the reliability and consistency of measurements collected by the two technologies. A tight LoA range suggests agreement between the datasets, reflecting high accuracy and precision in the measurements. On the other hand, a broad LoA range could indicate differences between the datasets, which may raise doubts about the reliability of one or both measurement methods. Figure 13 provides a representation of the LoA values for vehicle counts.

Figure 13. LoA values for vehicle counts in different weather conditions

As can be seen in Figure 13, the LoA analysis involved determining the mean difference between the counts obtained by LiDAR and CCTV, which serves as the central point of the agreement range. Subsequently, the upper and lower limits of the LoA were established by adding and subtracting 1.96 times the standard deviation of the differences from the mean difference, respectively. Upon examination of the chart depicting the LoA for different weather conditions—snowy, sunny, and rainy—it is evident that significant values are observed for the NS (southbound through) and WS (eastbound right) approaches. In the case of the NS movement, the significant LoA values suggest a considerable disparity between the vehicle counts recorded by LiDAR and CCTV, irrespective of the weather condition. The significant LoA observed in the WS approach indicates that the frequency of vehicles counted by each technology varies in sunny and rainy conditions. These discrepancies could be due to limitations in the detection range or technical constraints inherent in both LiDAR and CCTV systems. Significant LoA values for the WS approach might be attributed to technical issues with certain LiDAR systems, which can affect accuracy. These inaccuracies are likely due to the complexity of real-world intersections and the various technical challenges faced by LiDAR systems in urban environments. Moreover, heavy traffic volumes and densities in these movements contribute to congestion that obstructs the LiDAR's line of sight and can completely hide vehicles. Unpredictable driver actions, such as speeding or disregard for traffic signals, further complicate vehicle tracking in these areas, heightening the disparities in vehicle count data collected by LiDAR technology.

Additionally, challenging weather conditions like rain or snow worsen these constraints by decreasing visibility and impacting the efficiency of LiDAR sensors in areas with inclines, such as the southbound direction.

5.2 Pedestrian counts comparison from LiDAR and CCTV technologies

Figure 14. LoA values for pedestrian counts in different weather conditions

Figure 14 illustrates the LoA values for pedestrian counts. As shown in Figure 14, the noticeable LoA values seen in the southbound part of the intersection on rainy days stem from a mix of factors tied to how pedestrians behave and the surrounding environment. In particular, Morgan State University's campus affects pedestrian traffic and behavior in the southbound area. As a hub with classes, events, and housing options, the university draws in a lot of activity that can lead to increased pedestrian traffic and varied behaviors. This influx of people moving around the university for different purposes can cause fluctuations in pedestrian numbers and changes in behavior, especially in rainy weather. Rainy conditions can bring about shifts in how pedestrians move, such as changing their paths or seeking shelter, which can affect their detectability by technologies like LiDAR and CCTV. Additionally, factors like reduced visibility, slippery surfaces, and weather-related discomfort may prompt pedestrians to take risks like jaywalking or crossing where they should not.

The changes in pedestrian behavior exacerbated by inclement weather highlight the difficulties in counting pedestrian traffic near the southern leg of the intersection adjacent to Morgan State University's campus. The study effectively evaluates the agreement boundaries between the two technologies; however, direct references to prior research would provide better context for the discussion and emphasize how this study adds to existing knowledge. Furthermore, contrasting the methodologies and insights of this research with previous studies underscores the importance of the findings.

The study investigates the agreement boundaries using the mean difference between LiDAR and CCTV counts as the midpoint and establishing upper and lower limits of agreement based on standard deviations. Notably, there are differences in the limits of agreement for both the NS and WS approaches, regardless of weather conditions. These variations indicate differing detection abilities or environmental factors affecting how well the two technologies perform.

6. Conclusions

In recent years, the integration of advanced technologies into traffic surveillance has led to the emergence of LiDAR sensors and CCTV cameras as key tools. Understanding the advantages of LiDAR technology is essential for modernizing traffic data collection and improving decision-making in planning and traffic management. To this end, two data collection methods, LiDAR sensors and traditional CCTV cameras, were assessed in relation to their accuracy, dependability, and simplicity. This study evaluated the effectiveness and benefits of using LiDAR sensors versus CCTV cameras to collect data on vehicle and pedestrian counts at a signalized intersection in Baltimore City known for its high crash rate.

The results highlighted that CCTV cameras often struggle in snowy conditions, as snow on the lenses can block views and distort images, which challenges the accurate detection of vehicles and pedestrians.

In sunny weather, both LiDAR sensors and CCTV cameras are effective in collecting data on vehicle and pedestrian counts. Data collected during rainy conditions revealed disparities in vehicle and pedestrian counts, particularly for northbound and eastbound movements, due to factors such as intersection gradient and geometry. For northbound movements, inclines or declines can influence traffic flow and pedestrian activity, while the proximity to Morgan State University likely contributes to increased pedestrian traffic around the intersection.

Inconsistencies in the data may be caused in part by the different types of vehicles and their abilities to handle grades. For example, a tractor trailer's ability to climb a 4.5 percent incline is quite different from that of a sedan or SUV, and this difference can affect traffic flow and speed patterns. To tackle this issue, refining the vehicle classification, distinguishing passenger cars from buses, trucks, and trailers, could help reduce inaccuracies in sensor readings. By standardizing vehicle classifications in this way, it is possible to lessen the influence of vehicle-specific performance traits on data precision, leading to more reliable results for analyzing intersections.

To ensure an accurate analysis of the collected count data, the Limits of Agreement (LoA) values were investigated. The LoA results confirmed acceptable agreement between the two datasets.

This research focused on three weather conditions: sunny, snowy, and rainy. To gain a fuller understanding of sensor performance across environmental contexts, it would be beneficial to expand data collection to cover a wider range of seasons and time intervals. Baltimore City was specifically selected as the study area due to its weather patterns and its relevance to traffic management issues; choosing Baltimore City also allowed the study to utilize existing infrastructure and partnerships with transportation authorities for smooth data collection and collaboration. Nonetheless, this study recognizes the importance of extending the research to additional seasons and times of day in order to capture a complete picture of sensor behavior and address potential variations in traffic patterns and environmental conditions over time.

Future studies should further investigate the accuracy of LiDAR and CCTV technologies in detecting vehicles and pedestrians across diverse weather conditions. New methods could involve combining sensor technologies, such as radar or infrared sensors, to improve detection accuracy in harsh weather conditions that LiDAR and CCTV may struggle with. Moreover, advancements in data fusion methods and machine learning algorithms show potential for enhancing the reliability and robustness of vehicle and pedestrian detection systems, allowing for real-time traffic analysis.

Despite the examination carried out in this study, it is important to note some limitations. Firstly, the study concentrated on one intersection, which might limit how broadly its findings can be applied to intersections with different layouts, traffic flows, and environmental factors. Also, the study mainly relied on data from LiDAR and CCTV technologies without considering factors such as pedestrian demographics, driver behaviors, or road infrastructure features that could affect the accuracy of vehicle and pedestrian tracking. Finally, although attempts were made to account for the weather conditions, the research may not have completely captured how weather impacts the accuracy of traffic data collection. Broadening the study to include a variety of intersections, weather situations, and operational scenarios would strengthen the knowledge needed to create more robust intersection safety measures.

It is worth noting that observing one day of snowfall might not fully capture all types of snowy weather, from light snow to blizzard conditions, since each affects sensor performance and intersection safety in unique ways. Similarly, variations in rainfall intensity, ranging from light rain to thunderstorms, can have an impact on sensor accuracy and data collection results. During the specified period in Baltimore City, there were no days with foggy conditions, owing to the region's sporadic fog occurrences; this absence prevented an examination of the effects of fog and glare during the study. Expanding the study in these directions would improve the relevance and reliability of the research findings for real-world traffic management situations.

Acknowledgment

This research study was supported by the SMARTER Center, a Tier 1 University Transportation Center of the U.S. Department of Transportation University Transportation Centers Program at Morgan State University.

References

[1] Varaiya, P. (2013). Max pressure control of a network of signalized intersections. Transportation Research Part C: Emerging Technologies, 36: 177-195. https://doi.org/10.1016/j.trc.2013.08.014

[2] Rodegerdts, L.A., Nevers, B.L., Robinson, B., Ringert, J., Koonce, P., Bansen, J., Nguyen, T., McGill, J., Stewart, D., Suggett, J., Neuman, T., Antonucci, N., Hardy, K., Courage, K.G. (2004). Signalized intersections: Informational guide (No. FHWA-HRT-04-091). United States. Federal Highway Administration. https://rosap.ntl.bts.gov/view/dot/39968/dot_39968_DS1.pdf.

[3] Eom, M., Kim, B.I. (2020). The traffic signal control problem for intersections: A review. European Transport Research Review, 12: 1-20. https://doi.org/10.1186/s12544-020-00440-8

[4] Ansariyar, A. (2023). Evaluating sensor accuracy: A comparative study of LiDAR and CCTV technologies performance at signalized intersections. OSF Preprints, 1-16. https://doi.org/10.31219/osf.io/fz7jt 

[5] Guerrero-Ibáñez, J., Zeadally, S., Contreras-Castillo, J. (2018). Sensor technologies for intelligent transportation systems. Sensors, 18(4): 1212. https://doi.org/10.3390/s18041212

[6] Yeong, D.J., Velasco-Hernandez, G., Barry, J., Walsh, J. (2021). Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors, 21(6): 2140. https://doi.org/10.3390/s21062140

[7] Ansariyar, A. (2023). Providing a comprehensive traffic safety analysis collected by two LiDAR sensors at a signalized intersection. Preprints.Org. https://doi.org/10.20944/preprints202310.1401.v1

[8] Wu, Y.T., Wang, Y.Y., Zhang, S.W., Ogai, H. (2020). Deep 3D object detection networks using LiDAR data: A review. IEEE Sensors Journal, 21(2): 1152-1171. https://doi.org/10.1109/JSEN.2020.3020626

[9] Villa, F., Severini, F., Madonini, F., Zappa, F. (2021). SPADs and SiPMs arrays for long-range high-speed light detection and ranging (LiDAR). Sensors, 21(11): 3839. https://doi.org/10.3390/s21113839

[10] Ansariyar, A., Taherpour, A. (2023). Statistical analysis of vehicle-vehicle conflicts with a LIDAR sensor in a signalized intersection. Advances in Transportation Studies, 60: 87-102. https://doi.org/10.53136/97912218074246

[11] Kastrinaki, V., Zervakis, M., Kalaitzakis, K. (2003). A survey of video processing techniques for traffic applications. Image and Vision Computing, 21(4): 359-381. https://doi.org/10.1016/S0262-8856(03)00004-0

[12] Valera, M., Velastin, S.A. (2005). Intelligent distributed surveillance systems: A review. IEE Proceedings-Vision, Image and Signal Processing, 152(2): 192-204. https://doi.org/10.1049/ip-vis:20041147

[13] Cross, C., Farhadmanesh, M., Rashidi, A. (2020). Assessing close-range photogrammetry as an alternative for LiDAR technology at UDOT divisions (No. UT-20.18). Utah Department of Transportation Research & Innovation Division. https://rosap.ntl.bts.gov/view/dot/54923/dot_54923_DS1.pdf.

[14] Chen, L., Englund, C. (2015). Cooperative intersection management: A survey. IEEE Transactions on Intelligent Transportation Systems, 17(2): 570-586. https://doi.org/10.1109/TITS.2015.2471812

[15] Essa, M., Sayed, T. (2018). Traffic conflict models to evaluate the safety of signalized intersections at the cycle level. Transportation Research Part C: Emerging Technologies, 89: 289-302. https://doi.org/10.1016/j.trc.2018.02.014

[16] Babari, R., Hautière, N., Dumont, É., Paparoditis, N., Misener, J. (2012). Visibility monitoring using conventional roadside cameras - Emerging applications. Transportation Research Part C: Emerging Technologies, 22: 17-28. https://doi.org/10.1016/j.trc.2011.11.012

[17] Debnath, R., Singha, A., Saha, B., Bhowmik, M.K. (2020). A comparative study of background segmentation approaches in detection of person with gun under adverse weather conditions. In 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, pp. 1-7. https://doi.org/10.1109/ICCCNT49239.2020.9225409

[18] Mohammed, A.S., Amamou, A., Ayevide, F.K., Kelouwani, S., Agbossou, K., Zioui, N. (2020). The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review. Sensors, 20(22): 6532. https://doi.org/10.3390/s20226532

[19] Tang, L., Shi, Y.P., He, Q., Sadek, A.W., Qiao, C.M. (2020). Performance test of autonomous vehicle lidar sensors under different weather conditions. Transportation Research Record, 2674(1): 319-329. https://doi.org/10.1177/0361198120901681

[20] Wu, J.Q., Xu, H., Zheng, J.Y., Zhao, J.X. (2020). Automatic vehicle detection with roadside LiDAR data under rainy and snowy conditions. IEEE Intelligent Transportation Systems Magazine, 13(1): 197-209. https://doi.org/10.1109/MITS.2019.2926362

[21] Kutila, M., Pyykönen, P., Ritter, W., Sawade, O., Schäufele, B. (2016). Automotive LiDAR sensor development scenarios for harsh weather conditions. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, pp. 265-270. https://doi.org/10.1109/ITSC.2016.7795565

[22] Yoneda, K., Suganuma, N., Yanase, R., Aldibaja, M. (2019). Automated driving recognition technologies for adverse weather conditions. IATSS Research, 43(4): 253-262. https://doi.org/10.1016/j.iatssr.2019.11.005

[23] Shirazi, M.S., Morris, B.T. (2016). Looking at intersections: A survey of intersection monitoring, behavior and safety analysis of recent studies. IEEE Transactions on Intelligent Transportation Systems, 18(1): 4-24. https://doi.org/10.1109/TITS.2016.2568920

[24] Lu, Z.Y. (2016). Modelling, simulation and control of signalized intersections under adverse weather conditions. Master's thesis, University of Waterloo. https://core.ac.uk/download/pdf/144149192.pdf.

[25] Naveed, Q.N., Alqahtani, H., Khan, R.U., Almakdi, S., Alshehri, M., Abdul Rasheed, M.A. (2022). An intelligent traffic surveillance system using integrated wireless sensor network and improved phase timing optimization. Sensors, 22(9): 3333. https://doi.org/10.3390/s22093333

[26] Lilhore, U.K., Imoize, A.L., Li, C.T., Simaiya, S., Pani, S.K., Goyal, N., Kumar, A., Lee, C.C. (2022). Design and implementation of an ML and IoT based adaptive traffic-management system for smart cities. Sensors, 22(8): 2908. https://doi.org/10.3390/s22082908

[27] Ounoughi, C., Yahia, S.B. (2023). Data fusion for ITS: A systematic literature review. Information Fusion, 89: 267-291. https://doi.org/10.1016/j.inffus.2022.08.016

[28] Bibri, S.E. (2023). Data-driven smart eco-cities of the future: an empirically informed integrated model for strategic sustainable urban development. World Futures, 79(7-8): 703-746. https://doi.org/10.1080/02604027.2021.1969877

[29] Rane, N. (2023). Integrating leading-edge artificial intelligence (AI), internet of things (IoT), and big data technologies for smart and sustainable architecture, engineering and construction (AEC) industry: Challenges and future directions. Engineering and Construction (AEC) Industry: Challenges and Future Directions. SSRN. https://doi.org/10.2139/ssrn.4616049

[30] Wu, J.S., Guo, S., Li, J., Zeng, D.Z. (2016). Big data meet green challenges: Big data toward green applications. IEEE Systems Journal, 10(3): 888-900. https://doi.org/10.1109/JSYST.2016.2550530

[31] Bibri, S.E., Krogstie, J. (2020). Environmentally data-driven smart sustainable cities: Applied innovative solutions for energy efficiency, pollution reduction, and urban metabolism. Energy Informatics, 3: 29. https://doi.org/10.1186/s42162-020-00130-8

[32] Ansariyar, A., Tahmasebi, M. (2022). Investigating the effects of gradual deployment of market penetration rates (MPR) of connected vehicles on delay time and fuel consumption. Journal of Intelligent and Connected Vehicles, 5(3): 188-198. https://doi.org/10.1108/JICV-12-2021-0018

[33] Bernas, M., Płaczek, B., Korski, W., Loska, P., Smyła, J., Szymała, P. (2018). A survey and comparison of low-cost sensing technologies for road traffic monitoring. Sensors, 18(10): 3243. https://doi.org/10.3390/s18103243

[34] Zhao, J.X., Xu, H., Liu, H.C., Wu, J.Q, Zheng, Y.C., Wu, D.Y. (2019). Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors. Transportation Research Part C: Emerging Technologies, 100: 68-87. https://doi.org/10.1016/j.trc.2019.01.007

[35] Zhang, L.L., Yu, X., Daud, A., Mussah, A.R., Adu-Gyamfi, Y. (2024). Application of 2D homography for high resolution traffic data collection using CCTV cameras. arXiv preprint arXiv:2401.07220. https://doi.org/10.48550/arXiv.2401.07220

[36] Li, X.Z., Yu, Q., Alzahrani, B., Barnawi, A., Alhindi, A., Alghazzawi, D., Miao, Y.M. (2021). Data fusion for intelligent crowd monitoring and management systems: A survey. IEEE Access, 9: 47069-47083. https://doi.org/10.1109/ACCESS.2021.3060631

[37] Wu, J.Q., Xu, H., Sun, Y., Zheng, J.Y., Yue, R. (2018). Automatic background filtering method for roadside LiDAR data. Transportation Research Record, 2672(45): 106-114. https://doi.org/10.1177/0361198118775841

[38] Wang, Y.C., Wang, W.C., Liu, J.Z., Chen, T.H., Wang, S.Y., Yu, B., Qin, X.C. (2023). Framework for geometric information extraction and digital modeling from LiDAR data of road scenarios. Remote Sensing, 15(3): 576. https://doi.org/10.3390/rs15030576

[39] Ansariyar, A., Ardeshiri, A., Jeihani, M. (2023). Investigating the collected vehicle-pedestrian conflicts by a LIDAR sensor based on a new Post Encroachment Time Threshold (PET) classification at signalized intersections. Advances in transportation studies, 61: 103-118. https://doi.org/10.53136/97912218091907

[40] Murthy, C.B., Hashmi, M.F., Bokde, N.D., Geem, Z.W. (2020). Investigations of object detection in images/videos using various deep learning techniques and embedded platforms—A comprehensive review. Applied sciences, 10(9): 3280. https://doi.org/10.3390/app10093280

[41] Khare, Y., Ramesh, A., Chandran, V., Veerasamy, S., Singh, P., Adarsh, S., Anjali, T. (2022). Intelligent CCTV footage analysis with sound source separation, object detection and super resolution. In: Smys, S., Balas, V.E., Palanisamy, R. (eds) Inventive Computation and Information Technologies. Lecture Notes in Networks and Systems, 336: 107-118. Springer, Singapore. https://doi.org/10.1007/978-981-16-6723-7_9

[42] Fan, Y.C., Yelamandala, C.M., Chen, T.W., Huang, C.J. (2021). Real-time object detection for LiDAR based on LS-R-YOLOv4 neural network. Journal of Sensors, 2021: 1-11. https://doi.org/10.1155/2021/5576262

[43] Kim, J., Cho, J. (2021). RGDiNet: Efficient onboard object detection with faster R-CNN for air-to-ground surveillance. Sensors, 21(5): 1677. https://doi.org/10.3390/s21051677

[44] Alfred Daniel, J., Chandru Vignesh, C., Muthu, B.A., Senthil Kumar, R., Sivaparthipan, C.B., Marin, C.E.M. (2023). Fully convolutional neural networks for LIDAR–camera fusion for pedestrian detection in autonomous vehicle. Multimedia Tools and Applications, 82(16): 25107-25130. https://doi.org/10.1007/s11042-023-14417-x

[45] Lotfi, F., Ajallooeian, V., Taghirad, H.D. (2018). Robust object tracking based on recurrent neural networks. In 2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM), Tehran, Iran, pp. 507-511. http://doi.org/10.1109/ICRoM.2018.8657608

[46] Liu, F., Chen, Z.G., Wang, J. (2019). Video image target monitoring based on RNN-LSTM. Multimedia Tools and Applications, 78: 4527-4544. https://doi.org/10.1007/s11042-018-6058-6

[47] Alaba, S.Y., Ball, J.E. (2022). A survey on deep-learning-based LiDAR 3D object detection for autonomous driving. Sensors, 22(24): 9577. https://doi.org/10.3390/s22249577

[48] Zhang, Z.Y., Zheng, J.Y., Xu, H., Wang, X. (2019). Vehicle detection and tracking in complex traffic circumstances with roadside LiDAR. Transportation Research Record, 2673(9): 62-71. https://doi.org/10.1177/0361198119844457

[49] Anisha, A.M., Abdel-Aty, M., Abdelraouf, A., Islam, Z., Zheng, O. (2023). Automated vehicle to vehicle conflict analysis at signalized intersections by camera and LiDAR sensor fusion. Transportation Research Record, 2677(5): 117-132. https://doi.org/10.1177/03611981221128806

[50] Guan, F., Xu, H., Tian, Y. (2023). Evaluation of roadside LiDAR-based and vision-based multi-model all-traffic trajectory data. Sensors, 23(12): 5377. https://doi.org/10.3390/s23125377

[51] Powers, J.G., Klemp, J.B., Skamarock, W.C., et al. (2017). The weather research and forecasting model: Overview, system efforts, and future directions. Bulletin of the American Meteorological Society, 98(8): 1717-1737. https://doi.org/10.1175/BAMS-D-15-00308.1

[52] Hafeez, M.A., Nakamura, Y., Suzuki, T., Inoue, T., Matsuzaki, Y., Wang, K.N., Moiz, A. (2021). Integration of Weather Research and Forecasting (WRF) model with regional coastal ecosystem model to simulate the hypoxic conditions. Science of the Total Environment, 771: 145290. https://doi.org/10.1016/j.scitotenv.2021.145290

[53] Uccellini, L.W., Ten Hoeve, J.E. (2019). Evolving the National Weather Service to build a weather-ready nation: Connecting observations, forecasts, and warnings to decision-makers through impact-based decision support services. Bulletin of the American Meteorological Society, 100(10): 1923-1942. https://doi.org/10.1175/BAMS-D-18-0159.1

[54] Kaur, D., Kaur, Y. (2014). Various image segmentation techniques: A review. International Journal of Computer Science and Mobile Computing, 3(5): 809-814. 

[55] Ojha, S., Sakhare, S. (2015). Image processing techniques for object tracking in video surveillance-A survey. In 2015 International Conference on Pervasive Computing (ICPC), Pune, India, pp. 1-6. https://doi.org/10.1109/PERVASIVE.2015.7087180

[56] Sharma, R.D., Gupta, S.K. (2018). A survey on moving object detection and tracking based on background subtraction. Oxford Journal of Intelligent Decision and Data Science, 2018(1): 55-62.

[57] Giavarina, D. (2015). Understanding Bland Altman analysis. Biochemia Medica, 25(2): 141-151. https://doi.org/10.11613/BM.2015.015

[58] Zou, G.Y. (2013). Confidence interval estimation for the Bland–Altman limits of agreement with multiple observations per individual. Statistical Methods in Medical Research, 22(6): 630-642. https://doi.org/10.1177/0962280211402548