- Open Access
- Authors : Rambarki Sai Akshit, J. Uday Shankar Rao, V. Pavan Pranesh, Konduru Hema Pushpika, Rambarki Sai Aashik, Dr. Manda Rama Narasinga Rao
- Paper ID : IJERTV13IS120135
- Volume & Issue : Volume 13, Issue 12 (December 2024)
- Published (First Online): 06-01-2025
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Automated Traffic Ticket Generation System for Speed Violations using YOLOv9 and DeepSORT
Rambarki Sai Akshit, J. Uday Shankar Rao, V. Pavan Pranesh
UG Student, Department of Computer Science Engineering, GITAM University, Visakhapatnam, India
Konduru Hema Pushpika
M.S Student, Department of Data Science, University of Maryland Baltimore County, Maryland, USA
Rambarki Sai Aashik
Master of Technology, VLSI Design, GITAM University, Visakhapatnam, India
Dr. Manda Rama Narasinga Rao
Professor, Department of Computer Science Engineering, GITAM University, Visakhapatnam, India
Abstract: This paper introduces an Automated Traffic Ticket Generation System for Speed Violations, leveraging cutting-edge technologies to enhance the enforcement of traffic regulations. The system employs YOLO (You Only Look Once), a robust deep learning-based object detection algorithm, for accurate and real-time vehicle detection. It integrates Deep SORT for multi-object tracking, ensuring consistent identification of vehicles across video frames. To reliably extract vehicle identity, OpenALPR (Automatic License Plate Recognition) is incorporated, offering superior performance in recognizing license plates under diverse conditions, such as varying speeds, lighting, and camera angles. The system processes real-time video feeds from traffic surveillance cameras, detecting and tracking vehicles as they pass through predefined reference lines. By measuring the time taken for a vehicle to cross these lines, the system calculates its speed. If the speed exceeds the specified limit, an automated traffic ticket is generated. This ticket includes key details such as the vehicle type, speed, license plate number, and the exact timestamp of the violation. Experimental results demonstrate the system's high accuracy and efficiency in detecting violations, tracking vehicles, and recognizing license plates. By automating traffic law enforcement, this solution reduces human errors, enhances accuracy, and provides a scalable approach to managing traffic violations.
Keywords: YOLOv9, DeepSORT, OpenALPR, Vehicle Detection, Speed Violation Detection, License Plate Recognition, Traffic Law Enforcement, Real-Time Traffic Monitoring
- INTRODUCTION
With the exponential increase in urbanization and the growing number of vehicles on the road, managing traffic flow and ensuring compliance with speed regulations have become crucial challenges for city planners and law enforcement agencies worldwide. Traditional methods of traffic enforcement, such as radar guns and manual speed checks, while useful, are limited by various factors including their
dependency on human operators, limited coverage area, and potential for human error. As the need for more efficient and reliable traffic management systems rises, intelligent systems driven by advanced technologies such as computer vision, machine learning, and automation offer promising alternatives to overcome these challenges.
This research introduces an Automated Traffic Ticket Generation System designed to significantly streamline the process of monitoring and enforcing speed limits in real-time. The core objective of this system is to leverage state-of-the-art technologies to automatically detect traffic violations, particularly speeding, without the need for human intervention. The proposed system integrates multiple cutting-edge technologies, including YOLO (You Only Look Once) for vehicle detection, DeepSORT for object tracking, and OpenALPR (Automatic License Plate Recognition) for license plate identification, to create an efficient and effective solution to enforce traffic rules and improve road safety.
At the heart of the system is YOLO, a real-time object detection algorithm that quickly identifies vehicles within a video frame and generates accurate bounding boxes around each vehicle. YOLO's high-speed processing capability allows for near-instantaneous vehicle detection, even in busy or dynamic traffic conditions. YOLO's precision ensures that the vehicles are identified correctly, reducing the likelihood of false positives or missed detections.
Building upon YOLO, DeepSORT (Simple Online and Realtime Tracking with a deep association metric) is employed to enhance the tracking performance of the system. DeepSORT is a robust tracking algorithm that assigns unique identifiers to detected vehicles and maintains these identities across multiple frames. This is particularly useful in complex traffic situations where vehicles may overlap or occlude each other. By tracking the movement of each vehicle across frames, DeepSORT ensures that the system can
accurately monitor the trajectory of each vehicle, even in scenarios with high traffic congestion.
To estimate vehicle speeds, the system employs a novel approach based on central coordinates and predefined reference lines in the video footage. The speed estimation method works by calculating the distance a vehicle travels between reference points within a fixed time frame. This provides an accurate estimation of the vehicle’s speed, which is then compared against predefined speed limits to determine whether a violation has occurred.
Once a speed violation is detected, the system utilizes OpenALPR technology to capture and read the vehicle’s license plate number. OpenALPR, a widely used optical character recognition (OCR) system, extracts the alphanumeric characters of the license plate with high accuracy, even in low-light conditions or when the plate is partially obscured. The integration of OpenALPR into the system enables seamless ticket generation, automatically associating the detected violation with the corresponding vehicle’s registration details.
This research aims to demonstrate the feasibility and effectiveness of this fully automated system for traffic enforcement. By automating vehicle detection, tracking, speed estimation, and ticket generation, the system eliminates human errors, reduces administrative workload, and enhances the overall efficiency of traffic monitoring. Additionally, it offers the potential to improve road safety by ensuring consistent and objective enforcement of speed regulations, thereby contributing to reduced traffic accidents and safer roadways. The system's ability to function in real-time, with minimal manual intervention, represents a significant advancement in traffic enforcement technologies, paving the way for smarter and more reliable urban traffic management solutions.
- LITERATURE REVIEW
In recent years, the rapid advancements in deep learning have revolutionized the field of computer vision, significantly improving object detection capabilities for applications such as traffic surveillance. One of the most widely used techniques
for vehicle detection is YOLO (You Only Look Once), a real-time object detection model that has demonstrated remarkable performance in various applications. D. Balakrishnan et al. [2] introduced an improved version of YOLOv3, enhancing its real-time vehicle detection ability by optimizing the balance between speed and accuracy. Their model excels at detecting vehicles even in dense traffic, offering high frame processing rates and good detection accuracy. However, while YOLOv3 performs exceptionally in vehicle detection, it primarily focuses on identifying objects in the frame and does not address the additional requirements of tracking or speed estimation, which are critical for automated traffic violation detection. This gap in the model's capabilities highlights the need for an integrated solution that combines vehicle detection with tracking and speed estimation, which is precisely what this research aims to address. By incorporating DeepSORT for tracking and a speed estimation mechanism, this study aims to create a more comprehensive end-to-end solution for automated traffic violation detection.
Vehicle tracking is another critical component in building a robust traffic monitoring system. DeepSORT is one of the most reliable object tracking algorithms available, known for its ability to maintain vehicle identities across video frames in dynamic environments. S. A. U et al. [4] present a robust system for urban traffic management that combines YOLOv5 for real-time vehicle recognition and DeepSORT for vehicle tracking, with a focus on alleviating traffic congestion through direction analysis. Their system, applied to Hyderabad's complex road network, categorizes vehicles into four types (bike, car, bus, and truck) and achieves an accuracy of 87.93% in vehicle tracking and direction analysis. While their approach significantly enhances traffic flow and congestion management, it primarily focuses on movement direction and vehicle classification. In contrast, our research provides a more comprehensive solution by extending beyond traffic flow optimization to include automated detection and enforcement of speed violations. Leveraging YOLOv9 for higher detection accuracy, DeepSORT for precise tracking, and OpenALPR for license plate recognition, our system not only monitors traffic but also ensures compliance with speed regulations. This added functionality addresses a critical aspect of road safety, offering a more holistic and scalable framework for intelligent traffic management in smart cities.
T. Thapliyal et al. [5] present a promising approach to Automatic License Plate Recognition (ALPR) by utilizing the YOLOv5 model for vehicle and plate detection and Tesseract OCR for extracting text from license plates. While their system addresses the challenges of high vehicle speed, non-uniform plate designs, and multilingual text, its reliance on Tesseract OCR limits its accuracy in handling distorted or low-resolution plates, especially under challenging lighting conditions. In contrast, our research leverages YOLOv9 for enhanced detection accuracy and OpenALPR, which is better optimized for real-world scenarios, providing more reliable license plate recognition even in high-speed traffic or poor-quality video feeds. Additionally, our system integrates functionalities for speed violation detection and automated ticket generation, making it a more comprehensive and scalable solution for traffic law enforcement. This positions our work as an advancement over T. Thapliyal et al.'s ALPR
framework, addressing broader traffic management challenges while maintaining robust performance in diverse environments.
P. N. Huu et al. [6] proposed a robust system for tracking and calculating the speed of mixed vehicles using YOLOv4 for vehicle detection and DeepSORT for object tracking. Their algorithm demonstrated impressive accuracy, achieving up to 95%, and proved its potential for real-world applications in urban traffic management. However, their focus was limited to speed tracking without addressing other critical aspects, such as vehicle identification for traffic violation enforcement. Our research advances this framework by adopting YOLOv9 for improved detection accuracy, integrating OpenALPR for seamless license plate recognition, and incorporating automated ticket generation for speeding violations. This end-to-end system not only monitors and tracks vehicles but also provides actionable outputs for traffic law enforcement, making it a more comprehensive and practical solution for real-time traffic management challenges.
- METHODOLOGY
This section outlines the structured methodology used in the development of the Automated Traffic Ticket Generation System designed to detect and enforce speed violations using YOLOv9, DeepSORT, and OpenALPR. The system integrates multiple technologies to ensure a seamless workflow from vehicle detection to the generation of traffic tickets.
- System Architecture
The architecture of the Automated Traffic Ticket Generation System is designed to handle the complex tasks involved in detecting, tracking, and penalizing vehicles that violate speed regulations using real-time video surveillance. This system is a multi-module, integrated solution that processes video frames to monitor traffic, detect violations, and generate automated tickets for offenders.
- Input Module: The system starts by capturing live video footage from traffic surveillance cameras. These cameras are strategically positioned along roads to provide continuous coverage of traffic flow. The video feed is processed frame by frame to detect vehicles, track their movement, and analyze their speed. The system ensures that no vehicle is overlooked, and violations are detected promptly.
- YOLOv9 Vehicle Detection: The YOLOv9 (You Only Look Once) algorithm is used for real-time, high-accuracy vehicle detection. YOLOv9 is designed to identify and classify objects within video frames with remarkable speed and precision, making it ideal for traffic monitoring tasks where quick processing is essential. The algorithm detects vehicles in each frame and creates bounding boxes around them, marking their positions for further analysis. YOLOv9's efficiency enables it to handle large amounts of video data and detect vehicles even in congested traffic scenarios.
- Deep SORT Tracking: After detecting the vehicles, the system employs the Deep SORT (Simple Online and Realtime Tracking with a deep association metric) algorithm to track each vehicle across successive frames. Deep SORT ensures that each vehicle is assigned a unique identifier and maintains this identity across the video stream, even when vehicles overlap, change lanes, or are momentarily obscured. Deep SORT uses Kalman filters to predict the movement of vehicles and correct any tracking errors. This continuous tracking is crucial for calculating the vehicles' speed and ensuring accurate identification throughout the video sequence.
- Speed Estimation Module: The Speed Estimation Module calculates the speed of each vehicle based on its displacement across frames and the known frame rate of the video. By utilizing the vehicle's position at two different points in time and the distance between those points (which can be inferred from the camera's field of view and calibration), the system estimates the vehicle's speed. The system accounts for factors such as camera calibration and frame rate to ensure that the speed estimates are accurate. If a vehicle exceeds the predefined speed limit for that road segment, it is flagged for a violation.
- OpenALPR License Plate Recognition: For vehicles that exceed the speed limit, the system uses OpenALPR (Automatic License Plate Recognition) technology to capture and recognize the vehicle's license plate number. OpenALPR uses advanced optical character recognition (OCR) techniques to accurately read and decode license plates, even under challenging conditions such as low light or motion blur. This process allows the system to associate the speeding vehicle with a specific registration number, ensuring that the violator is correctly identified for ticketing.
- Violation Detection and Ticket Generation: When the system detects a speeding violation, it records key details about the offense, including the vehicle’s speed, license plate number, time of the violation, and location. This information is stored in a secure database, which can be accessed for later reference or legal action. The system can automatically generate a traffic ticket based on the stored data, complete with relevant information such as the violation type, the violator’s details, and the fine amount. This automated process eliminates human error and speeds up the enforcement of traffic regulations, making the system more efficient and reliable than traditional manual ticketing methods.
Fig. 1. Flowchart illustrating proposed methodology
This modular approach ensures that each aspect of the violation detection and ticketing process is handled efficiently, from vehicle identification to final ticket generation. The use of advanced detection, tracking, and recognition technologies ensures high accuracy and reliability in traffic monitoring and enforcement.
- Vehicle Detection
The initial phase involves the detection of vehicles in real-time using YOLOv9. The detection process is as follows (a minimal code sketch of this step is shown after Fig. 2):
- Input Preparation: This phase begins with capturing video frames from traffic surveillance cameras, which are then resized and normalized to meet the input requirements of the YOLOv9 model. The frames are resized to ensure they match the resolution expected by YOLOv9, which is crucial for maintaining optimal detection accuracy. The resizing process ensures that the video can be processed efficiently, balancing detail with computational performance. Additionally, the frames are normalized, adjusting pixel values to a standard range to eliminate discrepancies caused by varying lighting conditions and camera settings. This step is essential for ensuring consistent and accurate vehicle detection, which is key to the overall effectiveness of the system.
- Bounding Box Generation: After the video frames are processed and prepared, YOLOv9 generates bounding boxes around each detected vehicle. A bounding box is a rectangular frame that identifies and isolates the location of a vehicle within the image frame. Each bounding box is represented in the format (x_min, y_min, x_max, y_max), where:
- x_min, x_max: the x-coordinates of the top-left and bottom-right corners of the bounding box, respectively.
- y_min, y_max: the y-coordinates of the top-left and bottom-right corners of the bounding box, respectively.
This rectangular box allows the YOLOv9 model to localize the detected vehicle precisely within the frame. The bounding box is essential for the subsequent steps in the system, such as vehicle tracking and speed estimation, as it helps to define the region of interest (ROI) for further analysis.
- Central Coordinate Calculation: Once the bounding boxes are generated around the detected vehicles, the central coordinates of each bounding box are calculated to aid in tracking and speed estimation. The center coordinates are essential for maintaining accurate vehicle tracking over time and for calculating the vehicle's movement across frames. The center of the bounding box is computed using the following formulas:
x_center = (x_min + x_max) / 2
y_center = (y_min + y_max) / 2
Fig. 2. Illustration of vehicle detection, from video frame capture to bounding box generation
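The sketch below illustrates how this detection step could be wired together in Python, assuming the Ultralytics package is used to load YOLOv9 weights; the weight file name, video source, and COCO class IDs are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch: per-frame vehicle detection and center-coordinate
# calculation. Assumes the Ultralytics package exposes YOLOv9 weights
# (e.g. "yolov9c.pt") and that OpenCV supplies the video frames.
import cv2
from ultralytics import YOLO

model = YOLO("yolov9c.pt")             # assumed weight file name
VEHICLE_CLASSES = {2, 3, 5, 7}         # COCO ids: car, motorcycle, bus, truck

cap = cv2.VideoCapture("traffic_feed.mp4")   # placeholder video source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Ultralytics resizes and normalizes the frame internally.
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        if int(box.cls) not in VEHICLE_CLASSES:
            continue
        x_min, y_min, x_max, y_max = box.xyxy[0].tolist()
        # Central coordinates used later for tracking and speed estimation.
        x_center = (x_min + x_max) / 2
        y_center = (y_min + y_max) / 2
cap.release()
```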
- Continuous Vehicle Tracking using DeepSORT
After the initial vehicle detection, the next step is the continuous tracking of vehicles across video frames. This is achieved through the DeepSORT algorithm, which significantly improves the tracking performance by integrating both appearance and motion information. DeepSORT is widely used for robust object tracking in dynamic environments like traffic monitoring.
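As an illustration of this tracking step, the sketch below uses the open-source deep-sort-realtime package; the paper does not name a specific DeepSORT implementation, so the package choice, its parameters, and the detection tuple format are assumptions.

```python
# Hedged sketch of identity tracking with the deep-sort-realtime package;
# the package and its parameters are assumptions, not the paper's code.
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30)   # drop identities unseen for ~30 frames

def update_tracks(detections, frame):
    """detections: list of ([left, top, width, height], confidence, class_name)
    tuples built from the YOLOv9 bounding boxes of the current frame."""
    tracks = tracker.update_tracks(detections, frame=frame)
    centers = {}
    for track in tracks:
        if not track.is_confirmed():
            continue
        x_min, y_min, x_max, y_max = track.to_ltrb()
        # Persistent track_id plus the box center, used for speed estimation.
        centers[track.track_id] = ((x_min + x_max) / 2, (y_min + y_max) / 2)
    return centers
```

Each confirmed track exposes a persistent track_id, which is what the speed-estimation step described next keys its timestamps on.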
- Speed Estimation
The Speed Estimation phase is crucial for determining whether a vehicle is exceeding the established speed limit. This process involves calculating the time it takes for a vehicle to traverse a known distance between two predefined
reference lines placed in the field of view of the camera. Typically, these reference lines are set 10 meters apart, providing a standardized measurement for speed calculation. To improve the accuracy of detection and reduce errors caused by vehicles moving at different angles or speeds, the reference lines are not defined as narrow lines but as broader stripes. This approach ensures that vehicles passing through the reference area are detected reliably, even if their bounding boxes only partially intersect the lines. The broader stripes help accommodate slight misalignments and ensure that the system captures the vehicle’s motion more accurately.
When the vehicle crosses the first reference line, the system logs the current timestamp as t_start. Then, as the vehicle crosses the second reference line, the timestamp t_finish is recorded. The time difference between these two timestamps, time_diff = t_finish - t_start, is calculated, and the vehicle's speed is estimated using the formula:
speed = distance / time_diff
where the distance is set to 10 meters. The resulting speed is expressed in meters per second, providing a clear measure for evaluating compliance with speed regulations.
This speed estimation process allows the system to determine whether a vehicle is violating speed regulations. If the estimated speed exceeds the speed limit, the system triggers a violation detection, which is crucial for the automatic generation of a traffic ticket.
Fig. 3. Representation of the reference lines and the vehicle’s path, illustrating how intersections are detected
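A hedged sketch of the reference-stripe timing and speed check is given below. Only the 10-meter reference distance comes from the text above; the stripe pixel ranges, frame rate, and speed limit are illustrative assumptions, and timing is derived from frame indices and the frame rate rather than wall-clock timestamps.

```python
# Hedged sketch of reference-stripe timing and the speed check; stripe
# pixel bands, frame rate, and the speed limit are illustrative assumptions.
REFERENCE_DISTANCE_M = 10.0   # distance between the two stripes (from the text)
SPEED_LIMIT_MPS = 16.7        # ~60 km/h, assumed limit for the segment
STRIPE_1 = (400, 430)         # y-pixel band of the first reference stripe
STRIPE_2 = (600, 630)         # y-pixel band of the second reference stripe
FPS = 30                      # assumed video frame rate

start_frames = {}             # track_id -> frame index at the first stripe

def check_speed(track_id, y_center, frame_idx):
    """Return the estimated speed in m/s once the vehicle clears the second stripe."""
    if STRIPE_1[0] <= y_center <= STRIPE_1[1] and track_id not in start_frames:
        start_frames[track_id] = frame_idx                  # records t_start
    elif STRIPE_2[0] <= y_center <= STRIPE_2[1] and track_id in start_frames:
        time_diff = (frame_idx - start_frames.pop(track_id)) / FPS
        if time_diff <= 0:
            return None
        speed = REFERENCE_DISTANCE_M / time_diff            # speed = distance / time_diff
        if speed > SPEED_LIMIT_MPS:
            print(f"Violation: track {track_id} at {speed:.1f} m/s")
        return speed
    return None
```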
- License Plate Recognition
License Plate Recognition (LPR) is a key element of automated traffic monitoring systems, designed to identify vehicles exceeding predefined speed thresholds. When a vehicle's speed surpasses the input threshold set for the road, the system triggers a camera to capture an image of the violating vehicle. This image is processed to extract the portion containing the license plate, ensuring precise localization even under varied environmental conditions.
The isolated license plate is then analyzed using Optical Character Recognition (OCR) technology, such as OpenALPR, to convert the visual characters into machine-readable text. The recognized license plate number is linked to violation data, including the timestamp, location, and recorded speed. This automated approach enables accurate, real-time enforcement of speed regulations, providing reliable evidence for issuing digital traffic tickets. The system is designed to handle diverse license plate formats and ensures consistent performance across different lighting and weather scenarios.
Fig. 4. Illustration of OpenALPR in action
- Automated ticket generation using OpenALPR
The final phase of the proposed methodology is the automated generation of a digital ticket. This process leverages OpenALPR, a robust license plate recognition system, to efficiently identify vehicles involved in traffic violations. When a vehicle is detected exceeding the predetermined speed limit, the system captures a high-resolution image of the vehicle at the precise moment of the violation.
The bounding box coordinates isolate the segment of the image containing the vehicle's license plate. This cropped section is processed by OpenALPR, which extracts the alphanumeric license plate number with high accuracy. The recognized number is then linked to the violation details recorded by the system, including the timestamp of the violation and location coordinates.
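A minimal sketch of this recognition step, using the OpenALPR Python bindings, is shown below; the country code and the configuration/runtime paths are installation-dependent assumptions rather than values from the paper.

```python
# Hedged sketch of plate reading with the OpenALPR Python bindings; the
# country code and config/runtime paths depend on the local installation.
from openalpr import Alpr

alpr = Alpr("eu", "/etc/openalpr/openalpr.conf",
            "/usr/share/openalpr/runtime_data")

def read_plate(cropped_plate_path):
    """Return (plate_text, confidence) for the cropped violation image."""
    if not alpr.is_loaded():
        raise RuntimeError("OpenALPR failed to load")
    result = alpr.recognize_file(cropped_plate_path)
    if not result["results"]:
        return None, 0.0
    best = result["results"][0]            # highest-confidence candidate
    return best["plate"], best["confidence"]
```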
To generate a complete and reliable ticket, the system compiles all the violation data into a structured digital format. The ticket includes essential fields such as:
- Ticket Number: A unique identifier for each violation.
- Violation Date and Time: The exact moment the incident occurred.
- Vehicle Number: The extracted license plate details.
- Violation Penalty: The monetary fine associated with the offense.
- Location Coordinates: GPS data pinpointing where the violation took place.
- Offense Description: A summary of the violation (e.g., overspeeding).
- Evidence: The image(s) of the vehicle captured during the violation.
The digital ticket serves as official documentation for the traffic violation, ensuring the enforcement process is transparent and supported by verifiable data. Furthermore, the cropped image of the license plate and supporting evidence are stored securely for future verification, should any disputes arise. By automating ticket generation, the system streamlines traffic law enforcement, reduces human error, and enhances overall compliance with speed regulations. A minimal sketch of such a ticket record is shown below.
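The following sketch compiles the fields listed above into a single record; the field names, the UUID-based ticket number, and the flat fine amount are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical ticket record compiling the fields listed above; field names,
# the UUID-based ticket number, and the flat fine are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
import uuid

@dataclass
class TrafficTicket:
    vehicle_number: str        # plate text returned by OpenALPR
    speed_mps: float           # estimated speed at the time of violation
    speed_limit_mps: float     # limit for the monitored road segment
    location: tuple            # (latitude, longitude) of the camera
    evidence_paths: list       # image files captured during the violation
    offense: str = "Overspeeding"
    ticket_number: str = field(default_factory=lambda: uuid.uuid4().hex[:10].upper())
    violation_time: datetime = field(default_factory=datetime.now)

    @property
    def penalty(self) -> int:
        # Flat fine used purely for illustration; real schedules vary by jurisdiction.
        return 1000 if self.speed_mps > self.speed_limit_mps else 0
```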
Fig. 5. Sample Digital Ticket generated
This comprehensive methodology integrates advanced technologies in object detection, tracking, speed estimation, and license plate recognition to create an efficient and automated traffic violation detection system. By leveraging YOLOv9, DeepSORT, and OpenALPR, the proposed system not only enhances traffic monitoring but also streamlines the process of ticket generation, contributing to improved traffic regulation compliance.
- RESULTS AND ANALYSIS
Table I presents a detailed analysis of the performance metrics for the Automated Traffic Ticket Generation System across multiple video feeds. The metrics include the number of actual violations, detected violations, violations where the number plate was successfully detected, and the error rates in violation detection and number plate recognition.
- Violation Detection Accuracy
The system demonstrated varying levels of accuracy across different videos. For instance:
- In Video 1, all 5 actual violations were accurately detected, resulting in a 0% error rate.
- In Video 6, only 3 out of 4 violations were detected, leading to the highest error rate of 25.00% in violation detection.
- Number Plate Detection Accuracy
Similar to violation detection, the performance of number plate recognition varied. For example:
- In Video 4, number plates for all violations were detected, achieving a 0% error rate.
- In Video 3, out of 20 detected violations, number plates for 18 vehicles were accurately recognized, resulting in a 10.00% error rate.
- Correlation Between Frame Rate and Performance
- Higher frame rates (e.g., Video 8, with 56 FPS) appear to improve detection and recognition capabilities, as evidenced by relatively lower error rates in both violation detection (13.64%) and number plate recognition (11.11%).
- Lower frame rates (e.g., Video 2, with 24 FPS) correspond to higher error rates, indicating that frame rate plays a critical role in ensuring the system captures sufficient data for accurate analysis.
- Overall System Efficiency and Performance
- Across all videos, the system recorded 93 actual violations, out of which it successfully detected 83 violations. This corresponds to a detection success rate of approximately 89.25%.
- For number plate detection, out of the 83 detected violations, the system accurately recognized the number plates for 79 vehicles, achieving a recognition success rate of approximately 95.18%.
These results indicate that the proposed methodology significantly enhances the capabilities of traditional traffic enforcement systems, ensuring accurate detection, reliable tracking, and efficient ticket generation, ultimately contributing to improved road safety and compliance with traffic regulations.
TABLE I. Violations and Number Plate Detection Feed Results
Video | FPS | Actual Violations | Detected Violations | No. Plates Detected | Error in Violation Detection (%) | Error in Plate Detection (%)
Video 1 | 24 | 5 | 5 | 5 | 0.00 | 0.00
Video 2 | 24 | 13 | 11 | 10 | 15.38 | 9.09
Video 3 | 24 | 22 | 20 | 18 | 9.09 | 10.00
Video 4 | 36 | 1 | 1 | 1 | 0.00 | 0.00
Video 5 | 34 | 3 | 3 | 3 | 0.00 | 0.00
Video 6 | 44 | 13 | 11 | 11 | 15.38 | 0.00
Video 7 | 43 | 14 | 14 | 13 | 0.00 | 7.14
Video 8 | 56 | 22 | 18 | 13 | 13.64 | 11.11
Total | - | 93 | 83 | 79 | - | -
- CONCLUSION
The increasing complexities of urban traffic management necessitate innovative and efficient solutions to ensure road safety and enforce compliance with speed regulations. This research presents a comprehensive Automated Traffic Ticket Generation System, leveraging advanced technologies such as YOLOv9 for real-time vehicle detection, DeepSORT for robust tracking, and OpenALPR for accurate number plate recognition. By integrating these components, the system addresses key challenges in traffic enforcement, including real-time detection of speed violations and automated ticket generation, with minimal human intervention.
The results of this research highlight the effectiveness and feasibility of the proposed system. The system demonstrated a high degree of accuracy in violation detection (89.25%) and number plate recognition (95.18%), showcasing its potential for deployment in real-world scenarios. Error rates observed in specific instances were primarily attributed to factors such as low frame rates, complex traffic conditions, and motion blur. These findings underscore the importance of optimizing camera hardware to enhance the system's reliability further. One of the significant contributions of this research is the use of broader reference lines for speed estimation, which improves detection accuracy by reducing false negatives in intersection detection. Additionally, the adoption of the bounding-box central-coordinate technique mitigates inaccuracies caused by bounding box variations in speed calculations, ensuring consistent and reliable performance.
In conclusion, this research demonstrates that the proposed system can significantly streamline traffic enforcement by automating violation detection and ticket generation processes. Its deployment in urban settings can lead to improved compliance with traffic laws, reduced human workload, and ultimately safer roadways. This study lays a strong foundation for future advancements in intelligent traffic management systems.
REFERENCES
- Kevin P. Murphy. 2012. Machine Learning: A Probabilistic Perspective. The MIT Press.
- D. Balakrishnan, S. Manideep Kumar Reddy, R. Lakshmi Venkatesh, K. Aadith, R. Jebi Nalatharaj and M. Arshath, "Object Detection on Traffic Data Using Yolo," 2023 International Conference on Data Science and Network Security (ICDSNS), Tiptur, India, 2023, pp. 1-5, doi: 10.1109/ICDSNS58469.2023.10245691.
- Andrew Trask. 2019. Grokking Deep Learning (1st. ed.). Manning Publications Co., USA.
- S. A. U, K. Koushik, K. V. Reddy and G. B. K. Goud, "Smart Junctions: Yolov5, DeepSORT, and Direction Analysis for Traffic Enhancement," 2024 1st International Conference on Trends in Engineering Systems and Technologies (ICTEST), Kochi, India, 2024, pp. 1-6, doi: 10.1109/ICTEST60614.2024.10576081.
- T. Thapliyal, S. Bhatt, V. Rawat and S. Maurya, "Automatic License Plate Recognition (ALPR) using YOLOv5 model and Tesseract OCR engine," 2023 First International Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI), Tiruchengode, India, 2023, pp. 1-5, doi: 10.1109/ICAEECI58247.2023.10370919.
- P. N. Huu, B. N. Anh, V. T. N. Nam, T. M. Hoang, T. D. Nguyen and Q. T. Minh, "Tracking and Calculating Speed of Mixing Vehicles Using YOLOv4 and DeepSORT," 2022 9th NAFOSTED Conference on Information and Computer Science (NICS), Ho Chi Minh City, Vietnam, 2022, pp. 105-110, doi: 10.1109/NICS56915.2022.10013396.
- Mohammed A. A. Al-qaness, Aaqif Afzaal Abbasi, Hong Fan, Rehab Ali Ibrahim, Saeed H. Alsamhi, and Ammar Hawbani. 2021. An improved YOLO-based road traffic monitoring system. Computing 103, 2 (Feb 2021), 211-230. https://doi.org/10.1007/s00607-020-00869-8
- R. Carreon Reyes, E. M. Cepe, N. D. Guerrero, R. V. Sevilla and D. L. Montesines, "Deep Inference Localization Approach of License Plate Recognition: A 2014 Series Philippine Vehicle License Plate," 2021 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates, 2021, pp. 257-260, doi: 10.1109/ICCIKE51210.2021.9410799.
- Yibing Kong, Yuhan Li, Xinyuan Zhang, and Zhiguo Zhou. 2024. YY-YOLO: Improved YOLOv5 for Object Detection on Traffic Signs. In Proceedings of the 2024 3rd International Symposium on Intelligent Unmanned Systems and Artificial Intelligence (SIUSAI '24). Association for Computing Machinery, New York, NY, USA, 221-226. https://doi.org/10.1145/3669721.3669724
- Agung Yuwono Sugiyono, Kendricko Adrio, Kevin Tanuwijaya, and Kristien Margi Suryaningrum. 2023. Extracting Information from Vehicle Registration Plate using OCR Tesseract. Procedia Comput. Sci. 227, C (2023), 932-938. https://doi.org/10.1016/j.procs.2023.10.600
- L. Bao, Q. Wang, S. Zuo, Y. Jiang and X. Mo, "Research on highway night traffic event detection method based on video processing," 2021 2nd International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Shanghai, China, 2021, pp. 248-251, doi: 10.1109/AINIT54228.2021.00056.
- Huiran Zhao and Lei Shi. 2021. Application of license plate image recognition technology in intelligent parking lot. In Proceedings of the 2020 International Conference on Cyberspace Innovation of Advanced Technologies (CIAT 2020). Association for Computing Machinery, New York, NY, USA, 75-79. https://doi.org/10.1145/3444370.3444551
- Gollapalli, Parwateeswar & Muthyala, Neha & Godugu, Prashanth & Didikadi, Nikitha & Ankem, Pavan. (2023). Speed sense: Smart traffic analysis with deep learning and machine learning. World Journal of Advanced Research and Reviews, 21(3), 2240-2247. doi: 10.30574/wjarr.2024.21.3.0896.
- Ji, B.; Hong, E.J. Deep-Learning-Based Real-Time Road Traffic Prediction Using Long-Term Evolution Access Data. Sensors 2019, 19, 5327. https://doi.org/10.3390/s19235327
- Min, J. H., Ham, S. W., Kim, D.-K., & Lee, E. H. (2023). Deep Multimodal Learning for Traffic Speed Estimation Combining Dedicated Short-Range Communication and Vehicle Detection System Data. Transportation Research Record, 2677(5), 247-259. https://doi.org/10.1177/03611981221130026
- S. Du, M. Ibrahim, M. Shehata and W. Badawy, "Automatic License Plate Recognition (ALPR): A State-of-the-Art Review," in IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 2, pp. 311-325, Feb. 2013, doi: 10.1109/TCSVT.2012.2203741.
- Vishal Jain, Zitha Sasindran, Anoop Rajagopal, Soma Biswas, Harish S Bharadwaj, and K R Ramakrishnan. 2016. Deep automatic license plate recognition system. In Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP '16). Association for Computing Machinery, New York, NY, USA, Article 6, 1-8. https://doi.org/10.1145/3009977.3010052
- Akshay Bakshi and Sandeep S. Udmale. 2023. ALPR: A Method for Identifying License Plates Using Sequential Information. In Computer Analysis of Images and Patterns: 20th International Conference, CAIP 2023, Limassol, Cyprus, September 25-28, 2023, Proceedings, Part I. Springer-Verlag, Berlin, Heidelberg, 284-294. https://doi.org/10.1007/978-3-031-44237-7_27
- Zhiquan Jiao and Hongri Fan. 2019. License Plate Recognition in Unconstrained Scenarios Based on ALPR System. In Proceedings of the 2019 International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI '19). Association for Computing Machinery, New York, NY, USA, 540-544. https://doi.org/10.1145/3366194.3366290
- Gamma Kosala, Agus Harjoko, and Sri Hartati. 2017. License Plate Detection Based on Convolutional Neural Network: Support Vector Machine (CNN-SVM). In Proceedings of the International Conference on Video and Image Processing (ICVIP '17). Association for Computing Machinery, New York, NY, USA, 1-5. https://doi.org/10.1145/3177404.3177436
- Song, W.; Suandi, S.A. TSR-YOLO: A Chinese Traffic Sign Recognition Algorithm for Intelligent Vehicles in Complex Scenes. Sensors 2023, 23, 749. https://doi.org/10.3390/s23020749
- Kunekar, Pankaj & Narule, Yogita & Mahajan, Richa & Mandlapure, Shantanu & Mehendale, Eshan & Meshram, Yashashri. (2024). Traffic Management System Using YOLO Algorithm. 210. doi: 10.3390/engproc2023059210.
- Mou, Afsana & Milanova, Mariofanna & Piya, Sumaiya. (2023). Intelligent Traffic Control System Using YOLO Algorithm for Traffic Congested Cities. pp. 2014-2019. doi: 10.1109/CSCE60160.2023.00331.
- D. A. Subhahan, S. R. Divya, U. K. Sree, T. Kiriti and Y. Sarthik, "An Efficient and Robust ALPR Model Using YOLOv8 and LPRNet," 2023 International Conference on Recent Advances in Information Technology for Sustainable Development (ICRAIS), Manipal, India, 2023, pp. 260-265, doi: 10.1109/ICRAIS59684.2023.10367051.