- Open Access
- Authors : Sowmya C V, Prabhat Kumar Yadav, Pappu Kumar Karn, Navin Shahi, Anup Raj
- Paper ID : IJERTCONV10IS12019
- Volume & Issue : RTCSIT – 2022 (Volume 10 – Issue 12)
- Published (First Online): 03-09-2022
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Automated Parking Management System
Prof. Sowmya C V (1), Prabhat Kumar Yadav (2), Pappu Kumar Karn (3), Navin Shahi (4), Anup Raj (5)
1 Faculty, CSE Department, Sri Krishna Institute of Technology, Bangalore-560090, India
2,3,4,5 CSE Department, Sri Krishna Institute of Technology, Bangalore-560090, India
Abstract:- Nowadays, vehicle parking is a severe problem in many multiplexes. There are many lanes for vehicle parking, and one has to search all of them for a free slot, which requires a lot of manual labour. There is therefore a need for a system that directly indicates which parking slot is vacant without human intervention. This project implements an automatic vehicle parking system by detecting the vehicle number on the number plate.
The project can be deployed at mall parking tolls, where the system is installed on the existing cameras; the camera automatically identifies the vehicle number and records it on the corresponding parking ticket. The proposed work employs several image processing techniques, such as morphological transformation, Gaussian smoothing, and Gaussian thresholding, in the pre-processing stage so that it can deal with noisy, poorly illuminated, non-standard-font number plates. Next, for number plate segmentation, contours are obtained by border following and are filtered based on character dimensions and spatial localization. Finally, the recognized numbers are filtered based on the standard number-plate entries.
I. INTRODUCTION
The purpose of this system is to create a real-time application of number plate detection and tagging for vehicle parking systems. The objective is to extract and recognize vehicle registration numbers from vehicle images, process the image data, and finally use it to record access and prepare an electronic bill. Electronic vehicle parking payment is one of the major research topics in intelligent transportation systems (ITS). As the vehicle arrives at the parking toll, the camera pointing towards the vehicle recognizes the vehicle number from the number plate. After recognizing the number, the system checks whether the vehicle number is registered in the system; if the vehicle is registered, a parking slot is allocated, and if not, the user should register in the system.
The following three scenarios can occur:
- If the vehicle number is already registered, i.e., if the system finds the respective number, then it will allow the vehicle to pass, noting the date and time of entry.
- If the vehicle is NOT registered, then the driver has two options: either he can register his vehicle in the system by providing all the required details, OR
- he can proceed into the parking by selecting Proceed as Guest and providing just his name and number.
II. METHODOLOGY
The proposed methodology consists of three major phases, namely pre-processing, detection, and recognition, as shown in Figure 1.
Fig. 1. The proposed number plate recognition system.
2.1. Phase 1: Pre-processing
An image or a video can be used as the input. Because video is composed of a sequence of images/frames, the image source must be prepared for further processing before number plate detection can begin. Figure 2 (a) shows a sample input image used to demonstrate the technique. The image refinement methods are applied in the following order:
i. Under-Sampling of Images
The method for detecting and recognising licence plates is designed to run at a constant frame rate. Image processing techniques on high-resolution photographs are, unsurprisingly, slow; in fact, processing images at such high resolution is unnecessary. If the resolution exceeds a predetermined threshold, this stage reduces it. For video input, every 25th frame is under-sampled.
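A minimal sketch of this under-sampling step, assuming OpenCV; the resolution threshold (MAX_WIDTH), the helper names, and the use of cv2.VideoCapture are illustrative assumptions rather than the paper's actual code:

```python
import cv2

MAX_WIDTH = 1024   # assumed resolution threshold in pixels
FRAME_STEP = 25    # the paper processes every 25th video frame

def undersample(image, max_width=MAX_WIDTH):
    """Downscale the image if its width exceeds the threshold."""
    h, w = image.shape[:2]
    if w > max_width:
        scale = max_width / w
        image = cv2.resize(image, (int(w * scale), int(h * scale)),
                           interpolation=cv2.INTER_AREA)
    return image

def frames_from_video(path, step=FRAME_STEP):
    """Yield every `step`-th frame of a video, downscaled as needed."""
    cap = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield undersample(frame)
        index += 1
    cap.release()
```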
Fig. 2. (a) Input image; (b) Converted and cropped image; (c) HSV converted image.
ii. RGB to HSV Conversion
Fig. 3. (a) RGB grayscale images; (b) HSV channels.
The following OpenCV method converts the RGB input image to HSV channels and returns the converted image. The image is held internally as a NumPy array, and cv2.COLOR_BGR2HSV is the flag that converts BGR to HSV; reference [15] covers everything needed to know about changing colour spaces. HSV channels also have the benefit of decoupling the colour description from brightness, which ensures that the algorithm works for photographs with a wide range of lighting conditions. The output of HSV conversion is shown in Figures 2 (b) and 2 (c).
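A minimal sketch of the conversion, assuming OpenCV reads the image as BGR (its default); the file name and variable names are illustrative:

```python
import cv2

image = cv2.imread("car.jpg")                 # loaded as a BGR NumPy array
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)  # convert BGR to HSV channels
h, s, v = cv2.split(hsv)                      # v is the brightness (Value) channel
```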
iii. Extraction in Grayscale
To obtain the grayscale image, the brightness or Value channel of the transformed HSV image is retrieved.
The black-hat operation (also known as the bottom-hat) is used to highlight dark objects of interest against a relatively bright background, whereas the top-hat operation is used to carry out the reverse. The top-hat is the difference between the image and its opening, and the black-hat is the difference between the closing of the image and the image itself. In this work, the top-hat result is added to the original image, whereas the black-hat result is subtracted [16].
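A sketch of these morphological corrections, assuming OpenCV and continuing from the HSV sketch above (v is the Value channel); the 5 x 5 rectangular structuring element is an illustrative choice, not taken from the paper:

```python
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))   # assumed structuring element

top_hat = cv2.morphologyEx(v, cv2.MORPH_TOPHAT, kernel)      # image minus its opening
black_hat = cv2.morphologyEx(v, cv2.MORPH_BLACKHAT, kernel)  # closing minus the image

# As described above: add the top-hat result and subtract the black-hat result.
corrected = cv2.add(v, top_hat)
corrected = cv2.subtract(corrected, black_hat)
```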
iv. Gaussian Smoothing
A continuous Gaussian function is used in Gaussian filtering, or Gaussian smoothing. Gaussian filtering is used to minimise noise and fine detail, which is sufficient for the image refinement needed here, and it has the additional benefit of suppressing high-frequency components before thresholding. The two-dimensional Gaussian function is
$$G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}} \qquad (1)$$
where x and y are the horizontal and vertical distances from the origin, respectively, and the standard deviation of the Gaussian distribution is denoted by σ. To create a two-dimensional kernel, a one-dimensional Gaussian is first applied across one axis and the process is then repeated across the other; compared to the full two-dimensional version, this results in a decrease in computational complexity [17]. The following OpenCV function may be used to apply Gaussian smoothing: the argument (5, 5) refers to the size of the Gaussian kernel, and the larger the kernel, the stronger the smoothing. The Gaussian kernel's standard deviation is given by the third argument; a value of 0 lets OpenCV compute it from the kernel size [18].
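A one-line sketch of this call, continuing from the previous steps (the smoothed image is used by the thresholding stage below); the input name is illustrative:

```python
import cv2

# `corrected` is the grayscale image after the morphological corrections above.
blurred = cv2.GaussianBlur(corrected, (5, 5), 0)  # 5x5 kernel; sigma derived from the kernel size
```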
v. Adaptive Gaussian Thresholding with Inversion
The goal of thresholding, which forms part of image segmentation, is to convert a grayscale image into a binary image; image binarization is the common name for this procedure. The simplest thresholding procedure applies a single global threshold to the whole image, but under non-uniform illumination this strategy may not be applicable. In adaptive thresholding, a window of preset size is chosen and a weighted sum of the neighbouring pixel values determines the threshold for each pixel. An 'inversion' (a mathematical negation) is then applied to suit the needs of the subsequent stages. Each pixel of the output thresholded image can be expressed mathematically as

$$\mathrm{dst}(x, y) = \begin{cases} 0, & \text{if } \mathrm{src}(x, y) > T(x, y) \\ 255, & \text{otherwise} \end{cases}$$

where T(x, y) is the thresholding function that computes the threshold for each pixel individually. Adaptive Gaussian thresholding is a useful function provided by OpenCV: the size of the threshold window is BLOCK SIZE, and the weighted sum of the values in the neighbourhood is computed using WEIGHT [19].
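A sketch of the inverted adaptive Gaussian thresholding, continuing from the smoothing sketch above; the block size and weight values are illustrative assumptions, not taken from the paper:

```python
import cv2

BLOCK_SIZE = 19   # size of the threshold window (assumed value; must be odd)
WEIGHT = 9        # constant subtracted from the weighted neighbourhood sum (assumed value)

binary = cv2.adaptiveThreshold(
    blurred,                          # smoothed grayscale image from the previous step
    255,                              # value assigned where src(x, y) <= T(x, y)
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # Gaussian-weighted neighbourhood sum
    cv2.THRESH_BINARY_INV,            # inversion: 0 where src(x, y) > T(x, y)
    BLOCK_SIZE,
    WEIGHT,
)
```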
Fig. 4. (a) Contour application; (b) Contour filtering; (c) Contour grouping.
2.2. Phase 2: Detection of Licence Plates
Inverted adaptive Gaussian thresholding returns a binarized image with values of 0 or 255 at the end of the preceding image pre-processing stage. In both the detection and recognition phases, the binarized image is used as the input to the subsequent steps.
2.2.1. Applying Contours
The algorithm for producing contours is contour search, also known as border following. A contour is a line that connects points of equal intensity along a boundary. Locating contours in OpenCV is akin to finding a white object against a black backdrop, which is why the inversion applied during the adaptive Gaussian thresholding step is necessary. The effect of applying contours to the binarized image is shown in Figure 4 (a).
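A sketch of the contour search on the binarized image, assuming OpenCV 4.x (which returns contours and hierarchy) and continuing from the thresholding sketch; the retrieval and approximation modes are illustrative choices:

```python
import cv2

# `binary` is the inverted, adaptively thresholded image from the pre-processing phase.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Draw the contours on a colour copy for visualisation (compare Figure 4 (a)).
preview = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
cv2.drawContours(preview, contours, -1, (0, 255, 0), 1)
```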
2.2.2. Plate Angle Correction
At this point, each number plate candidate is given a bounding box. If a plate exhibits angular distortion, an affine rotation is used. An affine transformation, which is a mapping between two spaces, preserves points and lines; after the transformation, parallel lines remain parallel, and the ratio of distances between points on a straight line is maintained, although the angles and lengths of the lines are not preserved. To fix the plate's angle, OpenCV's getRotationMatrix2D [20] method is used:
Here, center is the axis of rotation, angle is the rotation angle in degrees, and scale is the isotropic scale factor. The function returns the following affine matrix:
$$M = \begin{bmatrix} \alpha & \beta & (1-\alpha)\, x - \beta\, y \\ -\beta & \alpha & \beta\, x + (1-\alpha)\, y \end{bmatrix}$$

where x is the x-coordinate of the plate's centre, y is the y-coordinate of the plate's centre, α = scale · cos(angle), and β = scale · sin(angle). This matrix is supplied as an input to warpAffine [21]:
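A sketch of the angle correction using these two OpenCV calls; the plate region, its centre, and the skew angle are placeholders for values the detection stage would supply:

```python
import cv2

(h, w) = plate.shape[:2]          # `plate` is the cropped plate region (assumed)
center = (w / 2, h / 2)           # axis of rotation
angle = 7.5                       # detected skew angle in degrees (placeholder)

M = cv2.getRotationMatrix2D(center, angle, 1.0)   # scale = 1.0
deskewed = cv2.warpAffine(plate, M, (w, h))       # apply the affine matrix to the plate
```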
The dimensions supplied are the width and height of the number plate.
Contours are then filtered and grouped. Filtering removes contours from limited regions, specifically sharp edges and noise outliers. Although a human observer may immediately recognise that such contours are superfluous, this must be factored into the software. To begin, each contour is given a bounding box, and the character dimensions are then examined for each contour; as a result, most of the unneeded contours are filtered out, bringing the system closer to its goal of detecting the number plate. The second stage of filtering compares each contour against neighbouring contours using criteria such as contour distance and delta angle (Figure 4 (b) and (c)).
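A sketch of the first filtering stage, continuing from the contour sketch above: only contours whose bounding boxes have plausible character dimensions are kept. The width, height, and ratio thresholds are illustrative assumptions, not values from the paper:

```python
import cv2

MIN_W, MAX_W = 5, 60              # assumed character width range in pixels
MIN_H, MAX_H = 15, 80             # assumed character height range in pixels
MIN_RATIO, MAX_RATIO = 0.2, 1.0   # assumed width/height ratio range

candidates = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    ratio = w / float(h)
    if MIN_W <= w <= MAX_W and MIN_H <= h <= MAX_H and MIN_RATIO <= ratio <= MAX_RATIO:
        candidates.append((x, y, w, h))
```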
2.2.3. Removal of Overlapping Characters
Because of characters such as the digit 'zero', it is conceivable for two or more contours to completely overlap one another: if the inner contour is recognised during the contouring process, it may reside entirely within the outer contour. Both contours could then be labelled as independent characters during the recognition process. This step is therefore applied to guarantee that such overlapping characters are removed.
2.3. Phase 3: Recognition of characters
2.3.1. Character Transformation and Prediction
Each contour, which represents a character on the number plate, is resized to a 20 × 30 image after overlapping characters are eliminated. This is done to guarantee that the data supplied to the classifier is consistent, and the trained model then predicts a character for each contour. During prediction, the surplus contour groups shown in Figure 6 that do not correspond to the number plate are discarded; the number plate is identified as the group with the most predicted characters.
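A sketch of the character transformation step: each character region is cropped from the binarized image, resized to 20 × 30, and flattened before being passed to the classifier. The variable and model names are illustrative assumptions:

```python
import cv2
import numpy as np

CHAR_W, CHAR_H = 20, 30   # target character size used in the paper

def to_feature_vector(binary, box):
    """Crop one character from the binarized plate and flatten it to a fixed-size vector."""
    x, y, w, h = box
    char_img = binary[y:y + h, x:x + w]
    char_img = cv2.resize(char_img, (CHAR_W, CHAR_H))
    return char_img.reshape(1, CHAR_W * CHAR_H).astype(np.float32)

# `knn_model` is the trained classifier from the next subsection (assumed name);
# the predicted characters of the winning contour group form the plate string, e.g.:
# plate_text = "".join(knn_model.predict(to_feature_vector(binary, box))[0]
#                      for box in plate_group_boxes)
```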
2.2.4. Training the Model
The K-Nearest Neighbours (KNN) technique was used to train the model. Several other models were evaluated, including Decision Tree and Gradient Boosting, but K-Nearest Neighbours outperformed them all. Randomized search was used to extract the best possible hyper-parameters for the model; in randomized search, which is an optimized version of a parameter sweep or grid search, candidate hyper-parameter settings are sampled from a deliberately chosen subset of the hyper-parameter space and evaluated against the training data. Performance metrics refer to the set of measures used to evaluate the model. The fonts used are shown in Figure 6 (a), and the extracted images for the character 'P' are shown in Figure 6 (b).
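A sketch of the training step with scikit-learn's KNeighborsClassifier [12] and randomized hyper-parameter search; the feature files, labels, and parameter ranges are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

# X: flattened 20x30 character images, y: character labels (built from the training fonts).
X = np.load("char_features.npy")   # placeholder file names
y = np.load("char_labels.npy")

param_distributions = {
    "n_neighbors": list(range(1, 15)),
    "weights": ["uniform", "distance"],
    "p": [1, 2],                   # Manhattan or Euclidean distance
}

search = RandomizedSearchCV(
    KNeighborsClassifier(),
    param_distributions,
    n_iter=20,
    cv=5,
    random_state=42,
)
search.fit(X, y)
knn_model = search.best_estimator_
```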
III. DISCUSSION AND CONCLUSIONS
The tests were carried out on a Windows 10 computer with 8GB of RAM and a 2.4 GHz i5 processor. The image processing tools are implemented using the Python OpenCV package.
The system has been tested with both photos and videos. All of the previously specified situations, such as unevenly lit plates, plates with stylized fonts, plates in close-up, plates at a distance, and angularly skewed plates, were included in the system test cases. For testing, images captured under various ambient conditions were obtained. Figure 7 (a) depicts a picture used to test the case of irregular lighting.
Fig. 6. (a) Fonts used for training; (b) Extracted images for the character 'P'.
A test scenario for a non-standard, stylized number plate is shown in Figure 8 (a). A representative test image for a distorted, skewed number plate is shown in Figure 8 (b). Figure 9 shows a test image for a blurred licence plate. The results of number plate detection and recognition for all of the test images are shown in Figure 10. The system correctly detected 98 percent of the number plates in a sample of 101 plates, containing both Indian and foreign plates, and it correctly recognized over 96.2 percent of the characters on the plates. The accuracy of number plate detection and recognition achieved by the system presented in this paper is reported in Table 1, and the results are also compared with similar studies on number plate detection and recognition in India.
Fig. 7. (a) Irregular lighting and a barely visible plate; (b) A partially worn number plate.
Detection of License Plates
Number plate detection based on image processing is simpler and faster than machine learning-based and other complex methods because it only demands plain numerical calculations. It will, however, take considerable work to cover a wide range of plate variations and real-world scenarios, and machine learning-based models are likely to perform better in certain situations. Any machine learning-based algorithm relies heavily on data, and not much data was available for this project. Image processing approaches, on the other hand, have been shown to outperform their machine learning-based equivalents in cases where data is scarce. As previously stated, having a trustworthy input source, particularly a still camera, is critical for strong detection and recognition accuracy.
Fig. 8. (a) A number plate with a non-standard stylized font; (b) A number plate with a crooked angle.
Fig. 9. A blurry number plate.
Fig. 10. Number plates detected and recognized
Recognition of License Plates
To some extent, recognition of characters from various typefaces was accomplished. When training on various typefaces, it is crucial to keep in mind that a suitable balance must be struck: overfitting the model with too many typefaces might result in poor generalization and biased prediction, while too few fonts will underfit the model, resulting in poor prediction. Due to the lack of data, advanced models such as Gradient Boosting tended to overfit the data, resulting in poor predictions. As a result, simpler models like K-Nearest Neighbours appeared to do well.
CONCLUSION
REFERENCES
[1] R. A. Lotufo, A. D. Morgan and A. S. Johnson. (1990) Automatic number-plate recognition, IEE Colloquium on Image Analysis for Transport Applications, London.
[2] J. A. G. Nijhuis, M. H. Ter Brugge, K. A. Helmholt, J. P. W. Pluim, L. Spaanenburg, R. S. Venema and M. A. Westenberg. (1995) Car license plate recognition with neural networks and fuzzy logic, Proceedings of ICNN'95 – International Conference on Neural Networks, Perth.
[3] Sang Kyoon Kim, D. W. Kim and Hang Joon Kim. (1996) A recognition of vehicle license plate using a genetic algorithm based segmentation, Proceedings of 3rd IEEE International Conference on Image Processing, Lausanne.
[4] Eun Ryung Lee, Pyeoung Kee Kim and Hang Joon Kim. (1994) Automatic recognition of a car license plate using color image processing, Proceedings of 1st International Conference on Image Processing, Austin.
[5] Ching-Tang Hsieh, Yu-Shan Juan and Kuo-Ming Hung. (2005) Multiple License Plate Detection for Complex Background, 19th International Conference on Advanced Information Networking and Applications (AINA'05), Taipei.
[6] Xifan Shi, Weizhong Zhao and Yonghang Shen. (2005) Automatic License Plate Recognition System Based on Color Image Processing, in Gervasi O. et al. (eds) Computational Science and Its Applications – ICCSA 2005, Lecture Notes in Computer Science, vol 3483, Singapore.
[7] Shyang-Lih Chang, Li-Shien Chen, Yun-Chung Chung and Sei-Wan Chen. (2004) Automatic License Plate Recognition, IEEE Transactions on Intelligent Transportation Systems, 5 (1): 42-53.
[8] K Tejas, K Ashok Reddy, D Pradeep Reddy, K P Bharath, R Karthik and M. R. Kumar. (2018) Efficient License Plate Recognition System with Smarter Interpretation Through IoT, in Bansal J., Das K., Nagar A., Deep K., Ojha A. (eds) Soft Computing for Problem Solving, Advances in Intelligent Systems and Computing, 817: 207-220.
[9] Md Yeasir Arafat, Anis Salwa Mohd Khairuddin, Uswah Khairuddin and Raveendran Paramesran. (2019) Systematic review on vehicular licence plate recognition framework in intelligent transport systems, IET Intelligent Transport Systems.
[10] getRotationMatrix2D – Geometric Image Transformations – OpenCV, [Online]. Available: https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#getrotationmatrix2d.
[11] warpAffine – Geometric Image Transformations – OpenCV, [Online]. Available: https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#warpaffine.
[12] sklearn.neighbors.KNeighborsClassifier – scikit-learn docs, [Online]. Available: http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html.
[13] Nearest Neighbours – scikit-learn docs, [Online]. Available: http://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-classification.
[14] M M Shidore and S P Narote. (2011) Number Plate Recognition for Indian Vehicles, International Journal of Computer Science and Network Security, 11 (2): 143-146.
[15] S. Kaur. (2016) An Automatic Number Plate Recognition System under Image Processing, International Journal of Intelligent Systems and Applications, 8 (3): 14-25.
[16] P Surekha, Pavan Gurudath, R Prithvi and V G Ritesh Ananth. (2018) Automatic license plate recognition using image processing and neural networks, ICTACT Journal on Image and Video Processing (IJIVP), 8 (4): 1786-1792.