- Open Access
- Authors : Lakshmi J, Roopa S, Rashmi C R
- Paper ID : IJERTV4IS070365
- Volume & Issue : Volume 04, Issue 07 (July 2015)
- DOI : http://dx.doi.org/10.17577/IJERTV4IS070365
- Published (First Online): 15-07-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Pedestrian Detection System for Night Vision Application to Avoid Pedestrian Vehicle Related Accidents
Lakshmi J
Department of Electronics and Communication, Channabasaveshwara Institute of Technology, Gubbi-572216, Karnataka, India
Rashmi C R
Department of Computer Science and Engineering, Channabasaveshwara Institute of Technology, Gubbi-572216, Karnataka, India
Roopa S
Department of Electronics and Communication, Channabasaveshwara Institute of Technology, Gubbi-572216, Karnataka, India
Abstract: In this paper a pedestrian detection and tracking system is developed that consists of a simple segmentation scheme for detecting and tracking pedestrians at night time, in order to avoid pedestrian-vehicle-related accidents. The system uses images or video already available, captured with a near-infrared (NIR) camera. Based on segmentation, a new vertical edge detection technique is developed and used to identify edges due to pedestrians. Blob detection and merging are then performed using connected component labeling to form potential pedestrian image blocks, called candidate blocks. Two different tracking schemes are described: one based on template matching and the other based on image segmentation. Candidate blocks are rejected according to certain criteria in order to reduce false positives. The system is simulated in MATLAB and has been tested on various pedestrian videos and on images of different sizes and situations. System accuracy is computed as the ratio of correctly detected pedestrians to the number of pedestrian scenes in the video. The proposed algorithm matches the robustness of the reference approach while reducing the number of operations and the processing time, satisfying the requirements of real-time applications.
Keywords: Segmentation, Vertical edge detection, Blob detection, Blob merging, Tracking.
INTRODUCTION
Pedestrian detection is an important field of research in both commercial and government organizations. Night vision is made possible by a combination of two approaches: sufficient spectral range and sufficient intensity range. Humans have poor night vision compared to many animals because the human eye lacks a tapetum lucidum, a layer of tissue lying immediately behind the retina in many vertebrates that aids night vision. Night vision is the ability to see in low-light or completely dark conditions. When people drive at night in low-light or completely dark conditions, the driver may not be able to detect pedestrians who appear suddenly, even though the vehicle has headlights. In daylight it is much easier for a driver to drive safely, but the problem becomes critical at night or in bad weather. The number of traffic accidents has increased over the years, and millions of people are injured in accidents; pedestrians are among the major victims. For this reason a pedestrian detection system is introduced to detect pedestrians and reduce the probability of pedestrian-vehicle-related accidents. In past years, pedestrian detection has included segmentation, detection, and tracking of humans in thermal images (only at day time) [1]. Various types of sensors, such as ultrasonic sensors, laser scanners, microwave radar, optical sensors, and different kinds of cameras, have been used to capture pedestrians. Segmentation algorithms include shape-based techniques [4], thresholding based on pedestrian brightness [6], [7], edge-based techniques [4], region-based techniques, and so on. Selecting a more efficient segmentation algorithm also reduces the number of blocks passed to the next stage for detection, further reducing complexity; for this reason an edge detection algorithm is used here. Pedestrian detection is a challenging and difficult task in image processing and is widely used in robotics, surveillance, and intelligent vehicles.
This paper describes a simple pedestrian detection system for monocular NIR images. The algorithm consists of four modules: image capture, segmentation, tracking, and warning and display. The pedestrian detection system alerts the driver to pedestrians on the road, both in daytime and at night, with the help of a camera and its display unit. Figure 1 gives the overall structure of the proposed algorithm.
Fig. 1. Block diagram of the proposed algorithm

Here we use an available dataset in which the videos are captured using an NIR camera; in such videos, pedestrians generally appear uniform and slightly brighter than the background and other objects in the image. From each video, frames (electronically coded still images) are extracted. The number of frames scanned per second gives the frame rate, and a higher frame rate gives a better sense of motion. The standard frame rates are 25 frames per second and 29.97 fps, depending on the video.
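As an illustration of the image capture module, the following is a minimal MATLAB sketch (not the authors' code) of extracting frames from an NIR video; the file name night_scene.avi is a placeholder.

```matlab
% Minimal sketch of frame extraction; 'night_scene.avi' is a placeholder name.
v = VideoReader('night_scene.avi');        % open the video
fprintf('Frame rate: %.2f fps\n', v.FrameRate);
for k = 1:v.NumberOfFrames                 % one electronically coded still image per step
    frame = read(v, k);
    if size(frame, 3) == 3
        frame = rgb2gray(frame);           % NIR frames are processed as grayscale
    end
    % ... pass 'frame' to the segmentation stage ...
end
```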
SEGMENTATION
Segmentation is the process of partitioning a digital image into multiple segments; in simple terms, segmentation is divide and analyse. The output of image segmentation is a set of segments that completely covers the entire image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ significantly with respect to the same characteristic. In this paper an edge-based segmentation is used to detect potential pedestrian image blocks in the frames. Segmentation consists of the following steps: edge detection, grouping of edges to form blocks, and merging.
Edge detection
Applying an edge detection algorithm to the image reduces the quantity of data to be processed while retaining the important information about the shapes of pedestrians. The steps involved in edge detection are: 1. Smoothing: any noise present in the image is reduced using a suitable filter; here smoothing is achieved with a Gaussian filter without eliminating the true edges of the image. 2. Enhancement: pixels where the intensity changes are highlighted by calculating the gradient magnitude. 3. Detection: non-edge points are discarded as noise and the remaining points are retained as edges. In this system a 3×3 Sobel filter is used to obtain the vertical and diagonal edges of the pedestrian in the image. The Sobel filter takes an input image and a kernel and produces an image containing only edges (based on the filter). Typically it is applied to a grayscale input image to determine the absolute gradient magnitude at each point.
3×3 Sobel mask for vertical edge (90°) detection:

$$S_{90} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \quad (1)$$

3×3 Sobel mask for diagonal edge (45°) detection:

$$S_{45} = \begin{bmatrix} -2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 2 \end{bmatrix} \quad (2)$$

3×3 Sobel mask for diagonal edge (135°) detection:

$$S_{135} = \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{bmatrix} \quad (3)$$

S90, S45, and S135 are the kernels used for obtaining edges along the vertical and diagonal directions. A convolution operation is performed between the input image and each mask, giving the edges of the image as output. After detecting the vertical and diagonal edges, the edge values are compared: if any diagonal edge value is greater than the vertical edge value, the pixel is discarded; otherwise it is retained as a vertical edge, as shown in equation (4):

$$V_{OUT} = \begin{cases} S_{90}, & \text{if } S_{90} > S_{45} \text{ and } S_{90} > S_{135} \text{ and } S_{90} > S_{thr} \\ 0, & \text{otherwise} \end{cases} \quad (4)$$

VOUT is the output edge pixel value, S90 is the vertical edge pixel value, S45 and S135 are the diagonal edge pixel values, and Sthr is the threshold value used to remove noise. Here the value of Sthr is set to 230, determined experimentally.
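As an illustration, a minimal MATLAB sketch of the convolution and comparison of equation (4) is given below; it assumes a grayscale frame named frame and uses the standard Sobel kernel coefficients.

```matlab
% Minimal sketch of the edge detection and comparison of Eq. (4).
S90  = [-1 0 1; -2 0 2; -1 0 1];           % vertical (90 degree) mask
S45  = [-2 -1 0; -1 0 1;  0 1 2];          % diagonal (45 degree) mask
S135 = [ 0 1 2; -1 0 1; -2 -1 0];          % diagonal (135 degree) mask
Sthr = 230;                                % experimentally determined noise threshold

g    = double(frame);
e90  = abs(conv2(g, S90,  'same'));        % vertical edge strength
e45  = abs(conv2(g, S45,  'same'));        % 45-degree edge strength
e135 = abs(conv2(g, S135, 'same'));        % 135-degree edge strength

% Retain a pixel only when its vertical response dominates both diagonal
% responses and exceeds the threshold; otherwise it is set to zero (Eq. 4).
Vout = e90 .* (e90 > e45 & e90 > e135 & e90 > Sthr);
```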
After detecting the vertical edges in an image using the 3×3 Sobel kernel, a morphological opening operation is performed on the vertical edges using a vertical kernel (as structuring element) of rectangular shape, 4 pixels high and 1 pixel wide, in order to remove the small edges that usually occur due to noise. The morphological opening operation is simply erosion followed by dilation. Fig. 2 shows a sample input and the output after the edge detection process along with the morphological opening operation.
Fig. 2. Input and corresponding output after edge comparison and morphological opening operation
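A minimal MATLAB sketch of this opening step is given below; it assumes the thresholded vertical edge image Vout from the previous stage and uses the 4×1 vertical structuring element described above.

```matlab
% Minimal sketch of the morphological opening step on the vertical edges.
bw = Vout > 0;                             % binarise the retained vertical edges
se = strel('rectangle', [4 1]);            % vertical kernel: 4 pixels high, 1 pixel wide
bwOpened = imopen(bw, se);                 % opening = erosion followed by dilation
```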
Detection of blob
The output of the vertical edge detector is then passed to blob detection followed by merging, which is used to identify blocks. Blob detection is a fast and simple method that can be used for many machine vision tasks, such as tracking a red ball, finding a blue marker, or detecting a person's skin. Blob detection identifies pedestrian-like objects in an image, i.e., pedestrian image blocks that differ in properties such as brightness or color from the surrounding regions of the image. A blob contains a region of connected pixels, and these connected pixels are determined using connected component labeling. In this paper 8-connectivity is used because it labels the complete blob at a time and avoids label redundancies; connected pedestrian pixels belonging to the same area are labeled with the same color, whereas different areas are labeled with different colors, and a data structure is generated to save the blob information. In this stage noise pixels, which occur as small blobs, are removed. The connected edges are detected and a rectangular block is placed around each pedestrian, as shown in figure 3.
Fig. 3. Output of blob detection: labeling of each pixel and rectangular blocks around pedestrians
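The blob detection stage can be sketched in MATLAB as follows; the variable bwOpened is the opened edge image from the previous stage, and the minimum blob area used to reject noise is an assumed value.

```matlab
% Minimal sketch of blob detection with 8-connectivity via connected
% component labelling.
cc      = bwconncomp(bwOpened, 8);                       % 8-connected labelling
stats   = regionprops(cc, 'Area', 'BoundingBox', 'Centroid');
minArea = 20;                                            % assumed noise-area threshold (pixels)
blobs   = stats([stats.Area] >= minArea);                % candidate pedestrian blocks
% Each entry of 'blobs' stores one block: BoundingBox = [x y w h], Centroid = [cx cy].
```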
Merging of blob
After finding blobs using connected component labeling, the vertical edges that appear close to each other usually belong to the same pedestrian in the image. Blob merging is necessary because, when the pedestrian is very close to the camera or the vehicle is moving, the edges of the same pedestrian are separated by a distance and end up in separate blobs. As a result there is a possibility of losing pedestrians, because the individual blobs may not singly satisfy the height and width criteria; the pixels are also relabeled. If two or more blobs contain the same information, those blobs are merged. Blob merging combines all edges belonging to a single pedestrian under one label, and the other labeled edges are discarded. This improves detection accuracy and reduces missed detections.
Merging is performed as follows:
Dv < Vthr and Dh < Hthr (5)
Merging is performed by calculating the distances between the centroids of adjacent blobs and comparing these distances with threshold values. Let Dv and Dh be the vertical and horizontal distances between the centroids of adjacent blobs, respectively, and let Vthr and Hthr be the vertical and horizontal distance thresholds; here Vthr = 20 and Hthr = 50, set experimentally. As given in equation (5), if the vertical distance is less than the vertical threshold and the horizontal distance is less than the horizontal threshold, the blobs are merged and the pixels are relabeled along with the rectangular blocks, as shown in figure 4.
Fig. 4. Output of merging: relabeling of pixels and rectangular blocks around pedestrians
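A minimal MATLAB sketch of the merging rule in equation (5) is given below, assuming the blobs array produced by blob detection and the thresholds Vthr = 20 and Hthr = 50 stated above.

```matlab
% Minimal sketch of the centroid-distance merging rule of Eq. (5).
Vthr = 20;  Hthr = 50;
mergePair = false(numel(blobs));                         % pairs that belong to one pedestrian
for i = 1:numel(blobs)
    for j = i+1:numel(blobs)
        d  = abs(blobs(i).Centroid - blobs(j).Centroid);
        Dh = d(1);  Dv = d(2);                           % horizontal / vertical distances
        if Dv < Vthr && Dh < Hthr
            mergePair(i, j) = true;                      % blobs i and j are merged
        end
    end
end
% Merged blobs are then given a single label and one enclosing rectangular block.
```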
TRACKING
The position of a pedestrian varies from frame to frame because of the motion of the vehicle, the movement of the pedestrian, or both. Pedestrian tracking is commonly used in visual surveillance applications, and tracking is a key component of various systems whose aim is behavior understanding and activity recognition. In previous years various methods have been used for tracking single or multiple pedestrians, whether static or moving, such as the Kalman filter [3], the particle filter [9], background subtraction, and action recognition; all these approaches are computationally expensive and may not satisfy real-time requirements. In this paper we propose a new method with an accurate dynamical model for tracking that is computationally inexpensive and meets real-time requirements. A search window, formed by enlarging the height and width of the block selected from the previous frame in the horizontal and vertical directions, is used for tracking. Tracking can be done in two ways: by template matching or by segmentation.
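A minimal sketch of forming the search window is given below; the block prevBox and the enlargement pad are illustrative values, since the exact enlargement is an assumption here.

```matlab
% Minimal sketch of enlarging the previous block to form the search window.
prevBox = [120 80 30 60];                  % hypothetical block from the previous frame: [x y w h]
pad = 10;                                  % assumed enlargement in pixels
searchBox = [prevBox(1)-pad, prevBox(2)-pad, prevBox(3)+2*pad, prevBox(4)+2*pad];
searchBox(1:2) = max(searchBox(1:2), 1);   % keep the window inside the image (left/top edge)
```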
Tracking based on template matching
The aim of template matching is to find the presence of a template in a larger image, that is, to find the portion of the image that best matches the template. In the template-based tracking method, the enlarged block in the present frame is compared with the block taken from the previous frame, which is treated as a template within the target search window. A template is a small image or sub-image. It is used either in its direct form or in a derived form, e.g., by histogram matching or by matching a pattern generated from the unknown image against standard image patterns. A maximum-correlation search is applied and the coordinates of the block selected in the previous frame are updated with the coordinates of the block in the present frame. Template matching is applied only for a few frames, because the size of the pedestrian may change in subsequent frames while template matching assumes that the size of the pedestrian remains constant. The drawback of template matching is that if a false positive occurs in a frame, it continues the process and keeps giving a positive output for that block until the pedestrian detection scheme is applied again; it also has a high computation cost.
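The maximum-correlation search can be sketched in MATLAB using normalised cross-correlation; templ (the block cropped from the previous frame) and searchWin (the enlarged window cropped from the present frame) are assumed inputs.

```matlab
% Minimal sketch of the maximum-correlation search for template-based tracking.
c = normxcorr2(templ, searchWin);          % normalised cross-correlation surface
[~, idx]       = max(c(:));                % location of the maximum correlation
[ypeak, xpeak] = ind2sub(size(c), idx);
yoff = ypeak - size(templ, 1) + 1;         % top-left corner of the best match
xoff = xpeak - size(templ, 2) + 1;         % inside the search window
% The block coordinates from the previous frame are then updated with
% (xoff, yoff) expressed in full-frame coordinates.
```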
Pedestrian tracking based on segmentation
In this pedestrian detection system we also perform tracking of pedestrians based on segmentation. In certain frames a fine segmentation of the pedestrian is attained, which makes a reliable initialization for tracking in the next frame; in this method segmentation is performed in the same way as explained earlier. Segmentation uses the edge-based technique with the 3×3 Sobel filter: the vertical (90°) and diagonal (45°, 135°) edges of the pedestrian blocks are detected and compared to keep only the vertical (90°) edges. The morphological opening operation (erosion followed by dilation) is performed with the same vertical kernel on the blocks to remove small edges caused by noise. Segmentation for tracking differs from the earlier segmentation in that only the blocks are segmented instead of the entire image, and each pedestrian in the present frame has only one search window. This segmentation also helps in discarding any false detection that occurs in the frame. Fig. 5 shows the tracking of pedestrians using the segmentation method.
Fig. 5. Keeping track of the pedestrian
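A minimal MATLAB sketch of this window-restricted segmentation is given below; it reuses the masks S90, S45, S135, the threshold Sthr, and the search window searchBox introduced earlier, and assumes frame is the current grayscale frame.

```matlab
% Minimal sketch of segmentation-based tracking inside the search window only.
win  = double(imcrop(frame, searchBox));            % segment only the block, not the whole image
e90  = abs(conv2(win, S90,  'same'));
e45  = abs(conv2(win, S45,  'same'));
e135 = abs(conv2(win, S135, 'same'));
bwWin = imopen(e90 .* (e90 > e45 & e90 > e135 & e90 > Sthr) > 0, ...
               strel('rectangle', [4 1]));
% Blob detection and merging on 'bwWin' then re-locate the pedestrian
% inside the search window.
```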
REDUCTION OF FALSE POSITIVES
A false positive is an error in which the test result wrongly indicates the presence of a condition. In our system, even though a pedestrian is not in the frame, the system may give a positive output (detect a presence) and simply show a rectangular block. False positives occur due to improper segmentation or because some objects other than pedestrians have higher intensity values. A detected false positive does not appear as continuously as a pedestrian does. These false positives are reduced based on a display criterion: among the blocks obtained after segmentation, a false positive typically occurs in only two or three frames out of the five consecutive previous frames. This criterion also accounts for occasional missed detections. Figure 6 shows an example of false positive reduction over five frames: in figures 6(a), 6(b), and 6(c), which are three consecutive frames, the arrow indicates the presence of a false positive, whereas in 6(d) and 6(e) the arrow indicates that the false positive is not detected. Such a block is therefore considered a false detection, discarded, and not passed on to the display module.
Fig. 6. Reducing the occurrence of false positives
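A minimal sketch of this display criterion is given below; the detection-history vector hits and the threshold of four detections out of five frames are assumptions consistent with the rule described above.

```matlab
% Minimal sketch of the temporal display criterion for false positive reduction.
hits = [true true false true true];        % hypothetical detection history over five frames
if nnz(hits) >= 4                          % assumed threshold: at least 4 of 5 frames
    display_block = true;                  % stable detection: show the block and warn the driver
else
    display_block = false;                 % likely false positive: discard the block
end
```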
After detection and tracking of pedestrians, if pedestrians are present in the frames the system alerts the driver and helps avoid pedestrian-vehicle-related accidents. The pedestrian detection system places a rectangular block on the pedestrian and displays a warning to help the driver.
EXPERIMENTAL RESULTS AND ANALYSIS
A series of experiments was carried out on different pedestrian images, covering the segmentation algorithm, the morphological opening operation, the connected component labeling algorithm, tracking based on segmentation, and false positive reduction. Segmentation finds the edges of pedestrians present in the image, and the morphological opening operation removes noise. Using the connected component labeling algorithm, the connected edges are determined; based on the detected edges, a rectangular block is placed on the pedestrian and a warning is displayed to the driver. The algorithm was developed in MATLAB R2014a and tested on a desktop with the following configuration: Intel(R) Atom(TM) CPU N550 @ 1.50 GHz processor, 2 GB RAM. The pedestrian detection system was tested on multiple videos. The range of pedestrian detection from the host vehicle is 20 m to 100 m. Since the speed of the algorithm depends on the number of edges detected in an image, the average time taken to process a single image frame varies from 10 to 15 ms: 10 ms when there are very few edges (say only a few pedestrians and few vertical edges in the background) and 15 ms when there are more edges (for example, 5-6 pedestrians and many objects like lampposts and cars in the background). The system accuracy is 80% to 85%. Since a dark background is a very important requirement of the algorithm, it gives better accuracy for highway-type scenarios. The algorithm also depends on the vanishing point of the road and gives better accuracy in such scenarios.
Fig. 7. Pedestrians are detected correctly
Fig. 8. Output showing (a) a missed pedestrian and (b) a false positive
Accuracy of the system for different videos is shown in the table and graph below, and is calculated as:

$$\text{Accuracy} = \frac{\text{Correct detections of pedestrians}}{\text{Total number of pedestrian scenes}} \times 100$$
Fig. 9. Accuracy of the system
TABLE I. RESULTS FOR DIFFERENT VIDEOS

| Video (AVI) | Duration (s) | Number of frames | Number of pedestrian scenes | Correct detections of pedestrians | False detections | Accuracy of system |
|---|---|---|---|---|---|---|
| Video1 | 26 | 667 | 8 | 7 | 2 | 87% |
| Video2 | 4 | 125 | 1 | 1 | 2 | 100% |
| Video3 | 16 | 481 | 15 | 12 | 6 | 80% |
| Video4 | 39 | 1196 | 32 | 17 | 15 | 53% |
| Video5 | 21 | 526 | 18 | 14 | 10 | 77% |
| Video6 | 5 | 138 | 14 | 10 | 5 | 71% |
Fig. 10. Videos used in verification; pedestrians are detected by white boxes
One limitation of this method is that the detection accuracy is reduced if an external light source appears in the background or when the edges are weak, as shown in Fig. 11.
Fig.11: Missed detection due to external light source
CONCLUSION
In this paper a new approach is developed to detect pedestrians at night time using an edge detection algorithm. Such systems are usually developed to reduce the cost of implementation. Since this system is mainly based on edge detection, it can be further extended to other object detection problems. A dark background is an important requirement of this pedestrian detection algorithm, so it gives better accuracy for highway-type scenarios. The results reveal its effectiveness and robustness, and it can be applied in systems aimed at the detection and tracking of pedestrians and at displaying a warning to the driver. The problem of pedestrian detection for reducing pedestrian-vehicle-related accidents has been approached using the vertical edge information of the pedestrian in the image. A new set of algorithms has been developed and evaluated on a wide range of datasets, exhibiting good and robust performance. The success of the proposed system opens new frontiers for further research. Future work includes using a classifier to decrease the number of false positives and modifying the algorithm to suit city conditions.
REFERENCES
[1] Tarun Kancharla, Pallavi Kharade, Sanjyot Gindi, Krishnan Kutty, and Vinay G. Vaidya, "Edge based segmentation for pedestrian detection using NIR camera," 2011, doi: 10.1109/ICIIP.2011.6108965.
[2] A. Broggi, R. Fedriga, A. Tagliati, T. Graf, and M.-M. Meinecke, "Pedestrian Detection on a Moving Vehicle: An Investigation about Near Infra-Red Images," Proc. IEEE Intelligent Vehicles Symp., pp. 431-436, 2006.
[3] F. Xu, X. Liu, and K. Fujimura, "Pedestrian Detection and Tracking with Night Vision," IEEE Trans. Intelligent Transportation Systems, vol. 6, no. 1, pp. 63-71, Mar. 2005.
[4] A. Broggi, M. Bertozzi, A. Fascioli, and M. Sechi, "Shape-Based Pedestrian Detection," Proc. IEEE Intelligent Vehicles Symp., pp. 215-220, 2000.
[5] F. Suard, A. Rakotomamonjy, A. Bensrhair, and A. Broggi, "Pedestrian Detection Using Infrared Images and Histograms of Oriented Gradients," Proc. IEEE Intelligent Vehicles Symp., pp. 206-212, 2006.
[6] Q. M. Tian, Y. P. Luo, and D. C. Hu, "Pedestrian Detection in Nighttime Driving," Proc. IEEE Int. Conf. Image Graph., pp. 116-119, 2004.
[7] Xian-Bin Cao and Hong Qiao, "A Low-Cost Pedestrian-Detection System with a Single Optical Camera," IEEE Trans. Intelligent Transportation Systems, vol. 9, no. 1, Mar. 2008.
[8] Chia-Yuan Ho and Chiung-Yao Fang, "Infrared Night Vision Based Pedestrian Detection System," Department of Computer Science & Information Engineering, National Taiwan Normal University.
[9] Xia Liu and Kikuo Fujimura, "Pedestrian Detection Using Stereo Night Vision," IEEE Trans. Vehicular Technology, vol. 53, no. 6, Nov. 2004.
[10] Prasad S. Halgaonkar, "Connected Component Analysis and Change Detection for Images," International Journal of Computer Trends and Technology, May-June 2011.