A Basic Approach To Automatic Vehicle Guidance

DOI: 10.17577/IJERTV2IS4970


Vinod B1, PG Scholar, ECE Department, Younus College of Engineering and Technology, Kerala

Ms. Divya R2, Asst. Professor, ECE Department, Younus College of Engineering and Technology, Kerala

Mr. Rajeev S K3, Head of the Department, ECE Department, Younus College of Engineering and Technology, Kerala

Abstract- Sensing vehicles, traffic signs, and traffic situations while driving is an important aspect of safe driving, accident avoidance, and automatic driving. We designed a system capable of identifying vehicles ahead, moving in the same direction as our car, by tracking them continuously with an in-car video camera. This paper describes a comprehensive approach to localizing target vehicles in video under various environmental conditions. The paper also deals with the detection and recognition of traffic signs in real-time video. In the first step, a suitable method searches for a sign in the frame; in the second step, the found area is compared with the sign templates. The detected and recognized sign is marked in the video.

Keywords: In-car video, vehicle detection, traffic signs, sign templates, traffic sign recognition, line segment detection, intensity peak detection, traffic sign detection, video and image processing.

  1. INTRODUCTION

    Intelligent vehicles use various sensors that can provide the driver with relevant information about the surroundings and even perform simple vehicle control tasks. One of the mounted devices is a digital camera, which provides a realistic image of the scene. Such camera vision has been applied to a wide variety of applications for smarter transportation systems.

    An important competence of more advanced vision systems is the ability to detect objects, for example vehicles and traffic signs. Every vehicle on public roads must obey the rules of the road, and many of these rules are presented through traffic road signs. A driver must therefore be able to detect and recognize signs and adjust his behaviour accordingly.

    Sensing vehicles, traffic signs, and traffic situations while driving is an important aspect of safe driving, accident avoidance, and automatic driving. We designed a system capable of identifying vehicles ahead, moving in the same direction as our car, by tracking them continuously with an in-car video camera. The fundamental problem here is to identify vehicles under changing environments and illumination.

    Although there have been numerous publications on general object recognition and tracking, or a combination of the two, few of these techniques have been successfully applied in real time to in-car video, which must process the input on the fly during vehicle movement. This paper introduces an effort to design and implement such real-time-oriented algorithms and systems that are highly adaptive to road and traffic scenes, based on domain-specific knowledge of the road, the vehicle, and its control.

    The in-car video comes from a forward-facing camera, the simplest and most widely deployed configuration on police cars. It records various traffic and road situations ahead and is particularly important for safe driving, traffic recording, and vehicle pursuit.

    Fig 1: In-car video frame.

    Our objective is to detect vehicles ahead, or those being pursued, and continuously track them on video. It is not easy for a single moving camera to quickly extract this information from dynamic scenes without stereo or other sensor assistance [1]. The main difficulties are, first, the numerous variations of vehicles in colour, shape, and type; the vast number of vehicle samples is difficult to model or learn [2]. Second, vehicle detection must be done automatically along with the video tracking, whereas many tracking algorithms assume easily detectable targets or known initial positions [3]. Our novel method first selects and detects the most common low-level features on vehicles that are robust to changes of illumination, shape, and occlusion. We focus on the horizontal scene movement for fast processing, based on the configuration of the camera and the vehicle driving mechanism. One-dimensional profiles are created from video frames to detect and track vehicles. Many related works using in-car video or images to identify cars are shape-based methods [7], [8], which usually suffer from brittleness due to the variety of vehicles and backgrounds. For example, the symmetric property is employed in [7] and [9]–[11].
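    The paper does not spell out the profile construction in code; as a minimal sketch (ours, not the authors' implementation) of how a one-dimensional horizontal profile could be built from a frame in MATLAB:

    % Sketch: build a 1-D horizontal intensity profile from an RGB frame.
    % Assumes MATLAB with the Image Processing Toolbox.
    gray = im2double(rgb2gray(frame));   % luminance image, values in [0, 1]
    hProfile = sum(gray, 1);             % collapse rows: one value per column
    % Tracking how hProfile shifts between consecutive frames reveals the
    % horizontal scene movement the method exploits.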

    Detection and recognition are the two major steps in determining the types of traffic signs. Detection refers to the task of locating the traffic signs in the images; it is common to call the image region that potentially contains a traffic sign the region of interest [4], [12]. A traffic sign detection algorithm makes use of the special characteristics of traffic signs, typically relying on the colour and geometric information in the images to detect regions of interest. The important underlying technologies are therefore colour segmentation and edge and corner detection.

    The detected sign lies within the region of interest. After identifying this region, we extract its important features and classify it. Several classification techniques can be used; our method compares the region of interest with the traffic warning sign recognition templates. If they match, the program considers the candidate region to be an actual traffic warning sign.

  2. VEHICLE DETECTION IN CAR VIDEO

    By showing human subjects the continuous motion of points extracted from in-car video, without colour or shape information, we have confirmed that humans are capable of separating vehicles from the background once they know where the video comes from. As the observer vehicle moves on the road, the relative background motion is determined, and the motion projected onto the camera frame is further determined by object distances. Such motion shows unique properties coherent with the vehicle's ego-motion.

    The target vehicles moving in the same direction as the camera have a different motion from the background and thus show a different optical flow in the video. Unlike many other works that put more effort into vehicle shape analysis in individual video frames, this paper extracts only low-level features such as intensity peaks and horizontal line segments, as described in Section 3. These features are used to detect vehicles. The algorithm for traffic sign detection and recognition is described in Section 4.

  3. FEATURE EXTRACTION

    The segmentation of vehicles from the background is difficult due to the complex nature of the scenes, including occlusion by other moving vehicles and complicated shapes and textures, coupled with the ever-changing background. The presence of specular reflections on metallic surfaces and rear windows, and of shadows on most cars (deformed and spread), makes the colour and shape information unreliable. To cope with these variations, we select two types of low-level features for reliable vehicle detection: horizontal line segments and intensity peaks.

    1. Line Segment Detection

      We have noticed that the backs of vehicles typically contain many horizontal edges formed by vehicle tops, windows, bumpers, and shadows. Most of them are visible during daylight, which indicates the existence of a vehicle. For vehicles with partially occluded backs, the detection of horizontal line segments is still stable. Vertical line segments, however, are not guaranteed to be visible due to curved vehicle bodies, frequent occlusion by other cars, and occlusion against a changing background. The line segmentation algorithm may initially recognize multiple vehicles as a single group; as the vehicles move apart, their horizontal line segments disperse, and the algorithm separates them.
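      As an illustration only (not the authors' exact implementation), horizontal edge evidence can be gathered with MATLAB's edge detector restricted to horizontal gradients; the 0.3 fraction below is an assumed tuning parameter:

      % Sketch: collect horizontal-edge evidence on a grayscale frame.
      % 'gray' is a [0, 1] double image as above.
      E = edge(gray, 'sobel', [], 'horizontal');  % keep horizontal edges only
      rowStrength = sum(E, 2);                    % edge pixels per image row
      % Rows whose edge count exceeds a fraction of the image width are kept
      % as candidate line segments belonging to a vehicle back.
      rows = find(rowStrength > 0.3 * size(E, 2));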

    2. Intensity Peak Detection for Night Scenes

    Intensity peaks from the tail and head lights of moving vehicles and from street lamps are used as features when the lighting conditions are poor. We detect intensity peaks from vehicle lights to obtain additional evidence of vehicle presence. These peaks are then tracked across frames to estimate the direction and speed of the moving targets.
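    A minimal sketch of such peak detection, assuming MATLAB's Image Processing Toolbox (imgaussfilt requires R2015a or later; the 0.9 brightness threshold is an assumed value):

    % Sketch: find bright intensity peaks (vehicle lights) in a night frame.
    smoothed = imgaussfilt(gray, 2);                 % suppress sensor noise
    peaks = imregionalmax(smoothed) & (gray > 0.9);  % bright local maxima
    [pr, pc] = find(peaks);                          % candidate light positions
    % The (pr, pc) positions are tracked across frames to estimate the
    % direction and speed of the moving targets.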

  4. ALGORITHM FOR TRAFFIC SIGN DETECTION AND RECOGNITION

    The algorithm works in several steps. Each step forms an important part of the complete system and is essential to its function.

    1. Load real-time video frame

      The frame is loaded into three matrices representing the red, green, and blue planes. The size of the loaded image determines the size of these matrices.

    2. Convert colour space

      For further processing, the RGB planes must be converted to the luma (Y) and the two chrominance (Cb, Cr) components of the image. The Color Space Conversion block performs the required conversion. The luma component of the image is needed by the recognition function, and the Cr chrominance component is needed by the detection function.
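      Steps 1 and 2 together correspond to a few lines of MATLAB; a minimal sketch (the input file name is hypothetical):

      % Sketch of steps 1-2: read a frame and convert it to YCbCr.
      v = VideoReader('traffic.avi');       % hypothetical input file
      frame = im2double(readFrame(v));      % RGB planes, values in [0, 1]
      ycbcr = rgb2ycbcr(frame);             % Color Space Conversion block
      Y  = ycbcr(:,:,1);                    % luma, used by recognition
      Cr = ycbcr(:,:,3);                    % Cr chrominance, used by detection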

    3. Separate required colour

      The red colour of the traffic signs is the main detection feature for localizing the region of interest, i.e., the region where a traffic sign may lie. This red colour is most visible in the Cr component of the image. The Cr component matrix contains numerical values in the interval [0, 1]. The Relational Operator block compares the values of the Cr component matrix with the constant 0.62.
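      In code form this comparison is a single logical operation; a sketch using the constant from the text:

      % Sketch of step 3: keep pixels whose Cr value exceeds 0.62.
      redMask = Cr > 0.62;                  % Relational Operator block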

    4. Close found areas

      The Closing block merges the selected points into compact areas. A suitable closing range is set in this block.
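      A sketch using imclose; the structuring-element size is an assumed tuning parameter, not a value given in the text:

      % Sketch of step 4: close the thresholded points into compact areas.
      se = strel('rectangle', [5 5]);       % closing range (assumed value)
      redAreas = imclose(redMask, se);      % Closing block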

    5. Choose and frame the region of interest

      The Blob Analysis block separates the closed areas according to the set parameters. Areas that fulfil the required conditions are framed and labelled. The block's BBox output gives the position and size of the selected regions of interest, and its Count output gives the number of selected regions. The results of this block are passed to the Detection block, which determines whether the selected regions of interest contain the searched traffic warning sign.
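      With the Image Processing Toolbox this corresponds roughly to regionprops; the minimum-area condition below is an assumed parameter:

      % Sketch of step 5: extract bounding boxes of acceptable red areas.
      stats = regionprops(redAreas, 'BoundingBox', 'Area');
      stats = stats([stats.Area] > 100);    % assumed minimum-area condition
      BBox  = vertcat(stats.BoundingBox);   % [x y w h] per region of interest
      Count = numel(stats);                 % number of selected regions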

    6. Detection function

      The Detection block is implemented as a MATLAB Function block. It decides whether a region of interest contains the searched traffic warning sign. The Cr component of the image enters the block as a matrix I(rowi, coli) with numerical values in the interval [0, 1]. The detection function extracts the matrix F(rowbox, colbox) that is marked by the BBox output of Blob Analysis. Matrix F is then resized to the template size; in our case, F has size 12 × 12. In the next step, the mean value of matrix F is subtracted from its values:

      F = F − mean(F(:)).

      After this step, the numerical values of matrix F lie approximately in the interval [−0.5, 0.5], where uninteresting colours are found around 0, red tints are negative, and white tints are positive. Each pixel of matrix F is multiplied by the corresponding pixel of the detection template matrix T, and the result is added to the parameter S:

      S = S + T(row, col, iTmp) · F(row, col).

      The greatest correspondence between matrix F and the detection template matrix T gives the highest parameter S. In this way, the template iTmp with the best similarity is determined, and if the parameter S is higher than the minSimilarity value, the detection function marks this region of interest as a region with a traffic sign. The detection template consists of 6 traffic signs, each of which includes the red area necessary for the detection operation. Every traffic sign is saved in a detection template matrix of size 12 × 12. The matrices contain numerical values in the interval [−1, 1], where uninteresting colours are found around 0, red tints are positive, and white tints are negative [12].
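      The detection computation above amounts to a template correlation with mean removal. A minimal MATLAB sketch, assuming a 12 × 12 × N template stack T (function and variable names are ours, not the original implementation):

      function iBest = detectSign(Cr, bbox, T, minSimilarity)
      % Sketch of step 6: match one region of interest against the
      % detection templates. T is 12x12xN with values in [-1, 1].
      F = imresize(imcrop(Cr, bbox), [12 12]); % region at template size
      F = F - mean(F(:));                      % values roughly in [-0.5, 0.5]
      S = zeros(size(T, 3), 1);
      for iTmp = 1:size(T, 3)
          S(iTmp) = sum(sum(T(:,:,iTmp) .* F)); % pixel-wise correlation score
      end
      [Smax, iBest] = max(S);
      if Smax <= minSimilarity
          iBest = 0;                           % no traffic sign in this region
      end
      end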

    7. Highlight the traffic signs

      The regions of interest where the detection function finds a parameter S greater than the minSimilarity value are highlighted in the loaded image with green rectangles. This is realized in the Draw Shapes block. The found regions with traffic signs are also passed to the Recognition block as the variable matrix BBox.

    8. Tracking

      This function matches the targets found in the current video frame with those found in the previous frame. Targets in two video frames match when their bounding boxes overlap.
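      A minimal sketch of this matching, using MATLAB's rectint on [x y w h] boxes (variable names are ours):

      % Sketch of step 8: match current targets to previous ones by overlap.
      % currBoxes and prevBoxes are Nx4 and Mx4 [x y w h] matrices.
      match = zeros(size(currBoxes, 1), 1);
      for i = 1:size(currBoxes, 1)
          for j = 1:size(prevBoxes, 1)
              if rectint(currBoxes(i,:), prevBoxes(j,:)) > 0  % boxes overlap
                  match(i) = j;               % target i continues track j
                  break
              end
          end
      end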

    9. Recognition function

      The Recognition block is similar to the detection function. It is also implemented as a MATLAB Function block and determines which traffic sign the marked region contains. The Recognition block works with the luma component of the image, the recognition template, the template names, and the matrix BBox that defines the regions with traffic signs. The luma component of the image enters as a matrix I(rowl, coll) with numerical values in the interval [0, 1], where values close to 0 represent dark shades and values close to 1 represent light shades. The recognition function extracts the matrix F(rowbox, colbox) that is marked by BBox. Matrix F is resized to the recognition template size (in our case, 18 × 18). In the next step, the mean value of matrix F is subtracted from its values:

      F = F − mean(F(:)).

      This step moves the values of matrix F approximately into the interval [−0.5, 0.5], with neutral grey around 0; dark shades become negative and light shades positive. Every pixel of matrix F is multiplied by the corresponding pixel of the recognition template matrix R, and the result is added to the parameter C (confidence):

      C = C + R(row, col, iTmp) · F(row, col).

      The greatest correspondence between matrix F and the recognition template matrix R gives the highest parameter C. If the parameter C is higher than the minSimilarity value, the recognition function labels this traffic sign with the corresponding iTmp and name. The output of the recognition function is the matrix Message, which contains the name of the recognized traffic sign. The recognition template is necessary for the correct operation of the recognition function. The recognition template and the detection template consist of identical traffic signs: No entry; No entry for vehicular traffic; Stop and give way; Give way to traffic on major road (yield); Give priority to vehicles from opposite direction; and No motor vehicles. However, the detection template consists of the Cr component of each traffic sign, while the recognition template consists of its luma component.

      Every traffic sign is saved in a matrix of size 18 × 18. The matrices contain numerical values in the interval [−1, 1], where neutral grey is found around 0, dark shades take negative values, and light shades take positive values. Determining the sort of traffic sign requires high similarity between the image region (with the traffic sign) and the recognition template; therefore, the matrix dimension of the recognition template is larger than that of the detection template. The traffic sign does not always stand perpendicular to the camera and can appear at an angle in the image. The recognition function converts the BBox area to an 18 × 18 area, which eliminates rotation about two axes. Rotation about the third axis is partly handled in the recognition template: each traffic sign is saved in three positions (rotations of 0° and ±8°).
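      By analogy with the detection sketch, the recognition function can be outlined as follows (R holds the 18 × 18 luma templates at 0° and ±8°, names maps template indices to sign names; both names are ours):

      function name = recognizeSign(Y, bbox, R, names, minSimilarity)
      % Sketch of step 9: identify which sign a marked region contains.
      F = imresize(imcrop(Y, bbox), [18 18]);  % region at template size
      F = F - mean(F(:));                      % neutral grey moves to about 0
      C = squeeze(sum(sum(R .* F, 1), 2));     % confidence per template
      [Cmax, iTmp] = max(C);                   % (implicit expansion, R2016b+)
      if Cmax > minSimilarity
          name = names{iTmp};                  % Message output: sign name
      else
          name = '';
      end
      end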

    10. Name the traffic signs

      Several numerical Simulink blocks determine the position of the traffic sign from the BBox matrix and highlight this area in the image. Below this area, a black box with a white notice is placed, containing the traffic sign name and the sign order. The sign order is important when more than one traffic sign is present in one image.

  5. EXPERIMENTAL RESULTS

    The computer program was created in MATLAB using the Video and Image Processing Blockset. It was debugged and tested on various videos with the expected results. The line segmentation algorithm may initially recognize several vehicles as a single group; as the vehicles move apart, their horizontal line segments disperse, and the algorithm separates them. Moreover, the shadow beneath a vehicle body also creates edges in the video; we simply consider it part of the vehicle. Fig. 2 shows the vehicle detection results and Fig. 3 shows the traffic sign detection results.

    Fig 2: Vehicle detection results.

    Fig 3: Traffic sign detection results.

  6. CONCLUSION

This paper has focused on the important task of detecting and tracking vehicles ahead with an in-car video camera. Several general features that characterize the vehicles ahead are robustly extracted from the video. Experiments show the effectiveness of the system design and implementation. The computation runs in real time and is easy to embed into hardware for real vehicle-borne video. Locating traffic signs in the image is another important feature. The computer program was created in MATLAB using the Video and Image Processing Blockset and was debugged and tested on a group of various video frames with the expected results.

ACKNOWLEDGMENT

The authors would like to thank the management and faculty members of the Department of Electronics and Communication Engineering, Younus College of Engineering and Technology, Kollam, for many insightful discussions and for the facilities extended to us for completing this task.

REFERENCES

  1. H. Takizawa, K. Yamada, and T. Ito, "Vehicles detection using sensor fusion," in Proc. IEEE Intell. Vehicle, 2004, pp. 238–243.

  2. H. Schneiderman and T. Kanade, "A statistical method for 3D object detection applied to faces and cars," in Proc. IEEE CVPR, 2000, pp. 746–751.

  3. R. Lakaemper, S. S. Li, and M. Sobel, "Correspondences of point sets using Particle Filters," in Proc. ICPR, Dec. 2008, pp. 1–5.

  4. R. K. Jurgen, Object Detection, Collision Warning and Avoidance Systems, SAE International, Warrendale, 2007, ISBN-13 978-0-07680-1810-3.

  5. R. Bishop, Intelligent Vehicle Technology and Trends, Artech House, Norwood, USA, 2005, ISBN 1-58053-911-4.

  6. Hsiu-Ming Yang, Chao-Lin Liu, Kun-Hao Liu, and Shang-Ming Huang, "Traffic sign recognition in disturbing environments," IEEE Computer Society International Conference on Computer Vision and Pattern Recognition, June 2005, National Chengchi University, Taipei, Taiwan.

  7. J. Chu, L. Ji, L. Guo, B. Li, and R. Wang, "Study on method of detecting preceding vehicle based on monocular camera," in Proc. IEEE Intell. Vehicle, 2004, pp. 750–755.

  8. D. Alonso, L. Salgado, and M. Nieto, "Robust vehicle detection through multidimensional classification for on board video based systems," in Proc. IEEE ICIP, Sep. 2007, vol. 4, pp. 321–324.

  9. P. Parodi and G. Piccioli, "A feature-based recognition scheme for traffic scenes," in Proc. IEEE Intell. Vehicle, 1995, pp. 229–234.

  10. C. Hoffman, T. Dang, and C. Stiller, "Vehicle detection fusing 2D visual features," in Proc. IEEE Intell. Vehicle, 2004, pp. 280–285.

  11. L. Gao, C. Li, T. Fang, and Z. Xiong, "Vehicle detection based on color and edge information," in Proc. Image Anal. Recog., vol. 5112, Lect. Notes Comput. Sci., 2008, pp. 142–150.

  12. J. Vavrik, P. Bartak, and R. Cermak, "Traffic signs detection in image data," Advanced Engineering, vol. 4, no. 2, 2010, ISSN 1846-5900.

  13. M. Betke and N. C. Makris, "Information-conserving object recognition," IEEE Computer Society International Conference on Computer Vision and Pattern Recognition, June 2002, University of Maryland, Maryland, USA.

  14. D. M. Gavrila, "Traffic sign recognition revisited," in Proc. 21st DAGM Symposium fuer Mustererkennung, pp. 86–93, Springer Verlag, 1999, Ulm, Germany.

  15. A. Jazayeri, H. Cai, J. Y. Zheng, and M. Tuceryan, "Vehicle detection and tracking in car video based on motion model," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, June 2011.
