- Authors : R. Shiva Shankar, Vmnssvkr Gupta, Kvssr Murthy, D. Ravibabu
- Paper ID : IJERTV3IS071221
- Volume & Issue : Volume 03, Issue 07 (July 2014)
- Published (First Online): 26-07-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
An Approach to Identify Automatic Vehicle System for Aerial Surveillance
R Shiva Shankar1, Vmnssvkr Gupta2, Kvssr Murthy3, D Ravibabu4
1-4 Department of CSE, S.R.K.R. Engineering College, Affiliated to Andhra University, Bhimavaram, W.G. District, Pin-534 204, A.P., India
Abstract : In this paper, we propose an automatic vehicle detection system for aerial surveillance based on a pixel-wise classification approach. Unlike existing frameworks for vehicle detection in aerial surveillance, region-based and sliding-window classifications are avoided. Their main disadvantage is that a single vehicle tends to be split into several regions when it contains multiple colors, while neighboring vehicles of similar color may be merged into a single region.
To overcome this problem, a dynamic Bayesian network (DBN) for vehicle detection in aerial surveillance is implemented, based on the pixel-wise classification approach. The pixel-wise classification is coupled with a feature extraction process; the features comprise vehicle colors and local features. After these features are extracted, a DBN is constructed for classification, transforming the regional local features into quantitative detections.
The experiments were conducted on several aerial videos, and the developed technique handles aerial surveillance images taken at various heights and under different camera angles.
INTRODUCTION
Aerial surveillance has a long history in the military for detecting enemy activities [1] and in the commercial world for monitoring resources [2] such as forests and crops. Similar imaging approaches are used in aerial news gathering and in search and rescue. Aerial surveillance was initially performed using film or electronic framing cameras. The main goal is to collect high-resolution still images of an area under surveillance that can afterwards be examined by human or machine analysts to derive information of interest. Presently, there is increasing interest in using video cameras for these tasks. Video captures dynamic events that cannot be understood from aerial still images. It permits feedback and triggering of actions based on dynamic events, and it offers critical and timely intelligence and understanding that is not otherwise available. Video detection is used to perceive and geo-locate moving objects in real time and to control the camera, for instance, in order to follow identified vehicles or continuously monitor a site. However, video also brings new technical challenges. Video cameras have lower resolution than framing cameras; in order to obtain the resolution essential to detect objects on the ground, it is usually necessary to use a telephoto lens with a narrow field of view.

This leads to the most serious shortcoming of video in surveillance: it offers only a soda-straw view of the scene. The camera must therefore be scanned to cover extended regions of interest. A viewer watching this video must pay continuous attention, as objects of interest move quickly in and out of the camera's field of view. The video also lacks broader visual context: the viewer has difficulty relating the positions of objects seen at one instant to objects seen moments earlier. Furthermore, geodetic coordinates for objects of interest seen in the video are not available.
In this paper, a new vehicle detection framework is developed [5] that preserves the advantages of existing works and avoids their disadvantages. The framework can be divided into a training phase and a detection phase. During the training phase, we extract multiple features, including local edge and corner features along with vehicle colors, to train a dynamic Bayesian network (DBN). Consequently, the extracted features include not only pixel-level information but also relationships among adjacent pixels in a region. Such a design is more effective and efficient than region-based or multiscale sliding-window detection methods.
RELATED WORK
One of the major topics in aerial image analysis is scene registration and orientation [3]. The analysis starts from the problem of image capture: when airborne helicopter video is used for estimating traffic parameters, the helicopter's motion makes the video unsteady, hard to view, and the extracted parameters less precise. To correct this, a frame-by-frame video-registration approach is adopted that uses a feature tracker to automatically determine control-point correspondences. This transforms the spatio-temporal video into purely temporal information, rectifying the airborne platform's motion and attitude errors. The registration is robust, with residual jitter of less than a few pixels over hundreds of frames. A simple vehicle detection scheme finds vehicle positions in the video, which are then followed by the feature tracker, permitting the average velocity, instantaneous velocity, and other parameters to be estimated automatically to within 10% of manual calculations. The whole procedure of registration, detection, tracking, and estimation needs only a few seconds per frame. A sample multimedia geographic information system (GIS) is generated as a visualization tool for viewing the registered video, other airborne or satellite imagery, and data relating to geo-referenced positions within a base map.
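As a rough illustration of this registration idea, the following sketch tracks feature points between consecutive frames and warps each frame into a common coordinate system. It is a minimal sketch assuming OpenCV and NumPy are available; the function and variable names are ours, not from the cited work.

```python
import cv2
import numpy as np

def register_frame(prev_gray, curr_gray):
    """Estimate a transform aligning curr_gray to prev_gray via tracked corners."""
    # Detect control-point candidates in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=7)
    # Track them into the current frame with pyramidal Lucas-Kanade flow.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    # Robustly fit an affine motion model to the surviving correspondences.
    matrix, _ = cv2.estimateAffinePartial2D(pts_curr[good], pts_prev[good],
                                            method=cv2.RANSAC)
    h, w = prev_gray.shape
    # Warp the current frame back, cancelling platform motion (jitter).
    return cv2.warpAffine(curr_gray, matrix, (w, h))
```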
Another useful topic in intelligent aerial surveillance [4] is vehicle identification and tracking. The challenge of vehicle identification in aerial surveillance includes camera motions such as panning, tilting, and rotation. Furthermore, airborne platforms at various heights result in various sizes of target objects. The adaptive ROI estimation algorithm first determines how the camera is being used, i.e., the pan, tilt, and zoom control parameters, from the ego-motion calculated from the aerial surveillance video. Using a frame-based camera-operator attention model, it can approximate the region of interest (ROI) of the camera operator not only at the frame level but also at the sequence level. Experimental results using video taken by a Predator unmanned aerial vehicle (UAV) show that the proposed adaptive ROI estimation algorithm is effective and at the same time efficient.
Another important concept is removing the background color of every frame and then refining vehicle candidate regions by imposing vehicle size restrictions [7]. A moving vehicle detection technique called MVD-RD was proposed for airborne urban traffic surveillance. First, the non-road regions are extracted using a road identification technique. Second, the non-road regions containing no vehicles are discarded based on their size. As a result of this two-stage region shrinkage, the detection area decreases considerably. Finally, image subtraction is applied to the reduced area in order to acquire all moving regions, so that all moving vehicles can be filtered exactly and simply.
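A minimal sketch of the final subtraction step described above might look as follows. This is our own simplification, assuming the frames have already been registered and a binary road mask is available; the thresholds and size limits are illustrative, not taken from the cited work.

```python
import cv2

def moving_regions(reg_prev, reg_curr, road_mask, min_area=40, max_area=2000):
    """Frame differencing restricted to the road mask, with vehicle-size filtering."""
    diff = cv2.absdiff(reg_prev, reg_curr)              # registered grayscale frames
    diff = cv2.bitwise_and(diff, diff, mask=road_mask)  # keep only road regions
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Impose size restrictions so only vehicle-sized blobs survive.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(moving)
    vehicle_ids = [i for i in range(1, n)
                   if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
    return labels, vehicle_ids
```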
Another useful approach is a hierarchical model that describes the prominent vehicle features at various levels of detail [5]. No specific vehicle model is assumed, which makes the approach flexible. Besides the object properties, the model also encloses contextual knowledge, such as relations between a vehicle and other objects.
Another line of work considers multiple cues and uses a Mixture-of-Experts (MoE) region and object segmentation algorithm for aerial surveillance video [6]. The MoE segmentation algorithm carefully combines the results of a set of region segmentation and object identification and recognition algorithms. The initial implementation consists of three experts:

- Moving Object Detection and Segmentation: As its name suggests, this expert identifies and then segments moving objects. Its output is a binary mask that distinguishes image regions that move, produced either by the motion of an object or by the parallax effect of camera motion, from regions that do not move.
- Mean Shift Unsupervised Segmentation: As mentioned above, this expert divides an image into regions depending on their color, edge profile, and texture. In the present setting it is used to enforce spatial, color, and texture constraints on region and object construction (a one-call usage sketch is given after this list). For instance, by forcing the object boundary to be a subset of the region borders that this expert calculates, speckle noise, which regularly plagues texture-based segmentation, is significantly reduced.
- TSMAP Supervised Segmentation: Trainable Sequential MAP (TSMAP) is an expert trained to differentiate among a group of classes (i.e., regions or objects with a semantic meaning, such as cars, buildings, or trees), and it therefore attaches a well-founded semantic meaning to a region or an object extracted by the MoE algorithm. The final results of the MoE segmentation algorithm are measured by combining the segmentation results of all experts.
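For reference, the mean-shift step is available off the shelf. Below is a one-call sketch using OpenCV; the file name and the spatial and color radii are illustrative values of ours, not parameters from the cited work.

```python
import cv2

img = cv2.imread("aerial_frame.png")
# Spatial radius 21 and color radius 30 are illustrative; the filtering
# groups neighboring pixels of similar color into smooth regions.
segmented = cv2.pyrMeanShiftFiltering(img, sp=21, sr=30)
```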
SYSTEM ARCHITECTURE

In this paper, we propose a new vehicle detection framework that preserves the benefits of existing works and avoids their disadvantages. The modules of the proposed system framework are depicted in Fig. 3.1. The framework can be divided into a training phase and a detection phase. In the training phase, we extract multiple features, including local edge and corner features as well as vehicle colors, to train a dynamic Bayesian network (DBN). In the detection phase, we first perform background color subtraction, following the procedure proposed in [9]. Then, the same feature extraction procedure as in the training phase is applied. The extracted features serve as the evidence to infer the unknown state of the trained DBN, which in turn indicates whether a pixel belongs to a vehicle or not. We do not apply region-based classification, which would depend strongly on the outcome of color segmentation algorithms such as mean shift, and there is no need to generate multiscale sliding windows. The distinguishing feature of the proposed framework is that the detection task is based on pixel-wise classification. However, the features are extracted in a neighborhood of each pixel, so they include not only pixel-level information but also relationships among adjacent pixels in a region. Such a design is more effective and efficient than region-based [8], [12] or multiscale sliding-window detection methods [11].

Fig. 3.1: Block diagram of the proposed system framework: image frames pass through background color removal and feature extraction (edge detection, corner detection, color transform, and color classification), followed by classification with a dynamic Bayesian network and post-processing.
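As a hedged sketch of the per-pixel feature extraction just described, the snippet below stacks edge, corner, and color features for every pixel. The choice of operators is ours: OpenCV's Canny and Harris detectors stand in for the paper's edge and corner features, and an HSV transform stands in for the color transform.

```python
import cv2
import numpy as np

def extract_pixel_features(frame_bgr):
    """Stack per-pixel edge, corner, and color features into an H x W x C array."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # local edge feature
    corners = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)  # local corner feature
    # Color transform: HSV separates chromaticity from intensity,
    # which helps vehicle-color classification.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    corners = cv2.normalize(corners, None, 0, 255, cv2.NORM_MINMAX)
    return np.dstack([edges, corners, hsv])
```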
The rest of this paper is structured as follows. Section IV details the modules of the proposed automatic vehicle detection mechanism. Section V presents and analyzes the experimental results. Lastly, conclusions are drawn in Section VI.
MODULES
In this system, vehicle detection is based on the following modules:
- Frame Extraction
- Background Color Removal
- Feature Extraction
- Classification
- Post Processing
Module 1: Frame Extraction
In this module, an .avi video is given as input, and a frame extraction operation is carried out on the video. After frame extraction, multiple frames are produced dynamically and stored in the corresponding frames folder.
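A minimal frame-extraction sketch using OpenCV; the file and folder names are illustrative.

```python
import cv2
import os

cap = cv2.VideoCapture("input.avi")      # illustrative input path
os.makedirs("frames", exist_ok=True)     # illustrative frames folder
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                            # end of video reached
    cv2.imwrite(os.path.join("frames", f"frame_{idx:05d}.png"), frame)
    idx += 1
cap.release()
```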
Module 2: Background Color Removal

In this module, the background color is eliminated from the frames, and color classification is applied to distinguish vehicle colors from the various non-vehicle colors; all the processed images are maintained in the background frames folder.

Module 3: Feature Extraction

In this module, the local features and color features are extracted; it depends on the two modules described above. For each frame image, edges and corners are detected and the color space is transformed, and the results are stored in the detection folder.

Module 4: Classification

In this module, pixel-wise classification for vehicle detection is performed using the DBN. In the detection phase, the Bayesian rule is used to obtain the probability that a pixel belongs to a vehicle (a sketch of this rule and of the subsequent post-processing follows Module 5).

Module 5: Post Processing

In this module, morphological operations are applied to refine the detection mask, and connected-component labeling is performed to obtain the vehicle objects.
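To make Modules 4 and 5 concrete, here is a hedged sketch of pixel-wise classification via Bayes' rule followed by morphological post-processing. The lookup-table feature model is our own stand-in: the paper's DBN additionally conditions on the state of the previous frame, which is omitted here.

```python
import cv2
import numpy as np

def classify_pixels(feats, p_feat_given_v, p_feat_given_bg, prior_v=0.05):
    """Posterior P(vehicle | feature) via Bayes' rule, applied per pixel.

    feats: H x W array of quantized feature indices; the two lookup tables
    hold the learned likelihoods for vehicle and background pixels."""
    likelihood_v = p_feat_given_v[feats]
    likelihood_bg = p_feat_given_bg[feats]
    post = likelihood_v * prior_v / (
        likelihood_v * prior_v + likelihood_bg * (1 - prior_v))
    return (post > 0.5).astype(np.uint8) * 255   # binary detection mask

def postprocess(mask):
    """Morphological cleanup, then connected components as vehicle objects."""
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    n, labels = cv2.connectedComponents(mask)
    return labels, n - 1     # label image and number of detected vehicles
```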
ALGORITHM
The Canny Edge Detection Algorithm
The Canny algorithm essentially finds edges where the grayscale intensity of the image changes the most. These areas are found by determining the gradients of the image. Gradients at each pixel of the smoothed image are determined by applying what is known as the Sobel operator: the first step is to approximate the gradients in the x- and y-directions, respectively, by convolving the image with the corresponding kernels, shown below.
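The kernels referred to here are the standard Sobel kernels, shown in one common sign convention (orientations vary across references):

$$K_{Gx} = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \qquad K_{Gy} = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$

The gradient magnitude is then $|G| = \sqrt{G_x^{2} + G_y^{2}}$, often approximated as $|G| \approx |G_x| + |G_y|$.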
The algorithm runs in five separate steps (a usage sketch follows the list):
Step 1: Smoothing. The image is blurred to eliminate noise. It is inevitable that all images captured by a camera contain some amount of noise; to prevent noise from being mistaken for edges, it must be reduced.

Step 2: Finding gradients. Edges should be marked where the gradients of the image have large magnitudes.

Step 3: Non-maximum suppression. Only local maxima of the gradient magnitude should be marked as edges.

Step 4: Double thresholding. Potential edges are determined by thresholding with a high and a low threshold.

Step 5: Edge tracking by hysteresis. Final edges are determined by suppressing all edges that are not connected to a very strong edge.
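All five steps are bundled in OpenCV's implementation; a minimal usage sketch follows, where the blur parameters and hysteresis thresholds are illustrative values of ours.

```python
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)       # Step 1: smoothing
# Steps 2-5: gradients, non-maximum suppression, double
# thresholding, and edge tracking by hysteresis.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
```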
RESULT ANALYSIS
The analysis of the proposed system and its performance is described in this section. The assessment begins with a comparison of several vehicle detection methods. The moving vehicle detection with road detection method requires setting many more parameters to impose the size restrictions needed to reduce false alarms. The analysis then proceeds through various scenarios, such as without background removal, without the enhanced edge detector, and without background removal but with the enhanced edge detector. The performance is compared against earlier detection techniques as well as our proposed DBN. The table and figure below summarize the performance of the different vehicle detection techniques.
Method                  Hit ratio (%)   False positives per frame
MVD-RD                  72.09           0.499
Symmetric properties    74.96           0.450
Cascade classifier      78.08           0.399
Proposed BN             92.31           0.297
Proposed DBN            92.35           0.278
Table 5.1: Detection accuracy of the different vehicle detection methods.
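The two reported metrics can be computed as follows; this is our own sketch, where a "hit" counts a ground-truth vehicle matched by a detection.

```python
def evaluate(per_frame_hits, per_frame_gt, per_frame_fps):
    """per_frame_hits: matched detections; per_frame_gt: ground-truth vehicles;
    per_frame_fps: false positives; one list entry per frame."""
    hit_ratio = 100.0 * sum(per_frame_hits) / sum(per_frame_gt)
    fp_per_frame = sum(per_frame_fps) / len(per_frame_fps)
    return hit_ratio, fp_per_frame

print(evaluate([9, 8], [10, 10], [1, 0]))   # -> (85.0, 0.5)
```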
Fig. 5.1: Comparison of the different vehicle detection methods.
In this assessment, the analysis considers both the hit ratio and the number of false positives per frame. The DBN outperforms the BN. When observing detection results over successive frames, the results obtained via the DBN are more consistent. The reason is that in aerial surveillance the aircraft moves the camera so as to effectively follow the vehicles on the ground, and therefore the locations of the vehicles do not change dramatically in the scene even when the vehicles move at high speed.
The analysis then turns to the average processing speeds of the various vehicle detection methods. The experiments were performed on a personal computer. Although the proposed framework using the DBN and BN cannot attain the frame rate of the surveillance videos, it is sufficient to complete vehicle detection once every 50 to 100 frames.
REFERENCES
[3] A. C. Shastry and R. A. Schowengerdt, "Airborne video registration and traffic-flow parameter estimation," IEEE Trans. Intell. Transp. Syst., vol. 6, no. 4, pp. 391-405, Dec. 2005.
[4] H. Cheng and J. Wu, "Adaptive region of interest estimation for aerial surveillance video," in Proc. IEEE Int. Conf. Image Process., 2005, vol. 3, pp. 860-863.
[5] S. Hinz and A. Baumgartner, "Vehicle detection in aerial images using generic features, grouping, and context," in Proc. DAGM-Symp., Sep. 2001, vol. 2191, Lecture Notes in Computer Science, pp. 45-52.
[6] H. Cheng and D. Butler, "Segmentation of aerial surveillance video using a mixture of experts," in Proc. IEEE Digit. Imaging Comput. Tech. Appl., 2005, p. 66.
[7] R. Lin, X. Cao, Y. Xu, C. Wu, and H. Qiao, "Airborne moving vehicle detection for urban traffic surveillance," in Proc. 11th Int. IEEE Conf. Intell. Transp. Syst., Oct. 2008, pp. 163-167.
[8] L. D. Chou, J. Y. Yang, Y. C. Hsieh, D. C. Chang, and C. F. Tung, "Intersection-based routing protocol for VANETs," Wirel. Pers. Commun., vol. 60, no. 1, pp. 105-124, Sep. 2011.
[9] S. Srinivasan, H. Latchman, J. Shea, T. Wong, and J. McNair, "Airborne traffic surveillance systems: Video surveillance of highway traffic," in Proc. ACM 2nd Int. Workshop Video Surveillance Sens. Netw., 2004, pp. 131-135.
[10] L. Hong, Y. Ruan, W. Li, D. Wicker, and J. Layne, "Energy-based video tracking using joint target density processing with an application to unmanned aerial vehicle surveillance," IET Comput. Vis., vol. 2, no. 1, pp. 1-12, 2008.
[11] R. Lin, X. Cao, Y. Xu, C. Wu, and H. Qiao, "Airborne moving vehicle detection for video surveillance of urban traffic," in Proc. IEEE Intell. Veh. Symp., 2009, pp. 203-208.