- Open Access
- Authors : Kalpesh M Limbasiya, Pratik Ratanpara
- Paper ID : IJERTV3IS041587
- Volume & Issue : Volume 03, Issue 04 (April 2014)
- Published (First Online): 26-04-2014
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Comprehensive Study on Motion Detection in Video with Surveillance System
Kalpesh Limbasiya
Computer Engineering Department
V.V.P Engineering College, GTU Rajkot, India
Pratik Ratanpara
Computer Engineering Department
V.V.P Engineering College, GTU Rajkot, India
Abstract - Nowadays, video surveillance is an important security mechanism for many applications, whether for personal or commercial use. Using a surveillance system, we can establish security without a physical presence. In this paper we present a comprehensive study of motion detection in video with a surveillance system. In such a system, detection and tracking are done through several steps: background modeling, foreground and feature extraction, object detection, object modeling, and analysis of the object. In this continuous process, there are different algorithms for background modeling, such as W4 (What? Where? Who? When?), the median based algorithm, and the HRR (Highest Redundancy Ratio) algorithm. For motion analysis there are different methods, such as Eigen Gait, template matching, the baseline algorithm, and the star skeleton model using human gait. In this paper we explain these algorithms and methods with the aim of achieving the best recognition rate and the lowest computational cost for human motion detection in video. We found that HRR is best for background modeling, with less computational time, and that the star skeleton model is best for human recognition, with less computational cost.
Keywords - Motion Detection, Surveillance System, HRR (Highest Redundancy Ratio), Star Skeleton Model, Recognition Rate, Computational Cost
I. INTRODUCTION
In any surveillance system, automatic detection and recognition of objects is of prime importance for this type of security system. Automated video surveillance addresses real-time observation of objects within a constrained environment. Outdoor surveillance systems must be able to detect and track objects moving in their field of view, classify those objects, and detect some of their activities.
Surveillance systems deal with the monitoring of stationary and moving objects through a specific scene in real time. These systems aim to provide an autonomous way to track and understand the motion of objects or humans observed by surveillance cameras at different places. Intelligent visual surveillance is broken into the following steps: background modeling, foreground and feature extraction, object tracking, human modeling, and human motion analysis.
In this continuous process there are several factors which affect the system, such as a changing environment, illumination variation, and one object being occluded by another object. All of these factors should be considered in the system.
This paper focuses on several different algorithms and methods and highlights where they are suited. We have studied and compared W4 [1] (What? Where? Who? When?), the median based algorithm, and the HRR [2] (Highest Redundancy Ratio) algorithm, which are used for background modeling, as well as Eigen Gait, template matching, the baseline algorithm, and the star skeleton model using human gait, which are used for human motion analysis.
This paper is organized as follows. In section II we give a basic overview of motion detection. Section III gives a brief introduction to background modeling algorithms, and section IV compares these algorithms. Section V introduces motion analysis methods, and section VI compares those methods. In the last section we draw conclusions about all the discussed algorithms and methods.
II. OVERVIEW OF MOTION DETECTION
If an unusual event occurs in a surveillance video, we have to detect the object involved and analyze its activity. This is done through several steps, as shown in the figure below (see figure 1). First of all, the background is modeled. Then the foreground and features are extracted, in which background and foreground pixels are located and the foreground object is separated from the background. The object is then tracked and identified, a bounding box is drawn over the detected object, and its contour is drawn. Finally, human modeling is performed and motion analysis is carried out.
Fig 1: Basic Steps of Motion Detection
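The steps above can be illustrated with a deliberately simplified, self-contained sketch; this is not the method of any of the surveyed papers, and the array shapes and threshold value are assumptions made only for the example. A fixed background frame is subtracted, the difference is thresholded into a foreground mask, and a bounding box is fitted around the detected pixels.

```python
import numpy as np

def detect_moving_object(background, frame, threshold=25):
    """Toy version of the pipeline in Fig. 1: subtract a background model,
    threshold to obtain a foreground mask, and return the bounding box of
    the foreground pixels. Both inputs are 2-D grayscale arrays."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold                      # foreground / background split
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return mask, None                        # nothing moved
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())   # (x0, y0, x1, y1)
    return mask, bbox

# Usage with synthetic data: a bright square "object" on a dark background.
background = np.zeros((120, 160), dtype=np.uint8)
frame = background.copy()
frame[40:60, 70:90] = 200
mask, bbox = detect_moving_object(background, frame)
print("foreground pixels:", mask.sum(), "bounding box:", bbox)
```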
III. BACKGROUND MODELING ALGORITHMS
Background modeling is the first basic step of this system. In this procedure, foreground pixels and background pixels are identified so that the background frame can be modeled. Some well-known algorithms are:
- W4 (What? Where? Who? When?)
- Median based Algorithm
- HRR (Highest Redundancy Ratio)
A. W4 (What? Where? Who? When?)
This algorithm combines shape analysis and robust tracking techniques to detect people and to locate and track their body parts, such as the head, hands, feet, and torso. These models are designed to allow W4 to determine the types of interactions between people and objects. [1] W4 uses dynamic appearance models for tracking people. Single persons and groups are distinguished using projection histograms, and each person in a group is tracked by tracking that person's head. [1]
The limitations of this algorithm are that W4 cannot track people individually and that it has been designed to work only with visible monochromatic video sources taken from a stationary camera. [1]
B. Median based Algorithm
This background model initialization algorithm is based on the assumption that the background at every pixel must be visible more than fifty percent of the time during the training sequence. [2] The background is calculated accordingly at each pixel. The limitation of this algorithm is that it gives a wrong background intensity value, especially when a moving object stops for more than fifty percent of the training sequence time. [2]
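A minimal sketch of this idea, assuming grayscale training frames stacked in a NumPy array; this is an illustration of the principle, not the code evaluated in [2]. The temporal median at each pixel recovers the background as long as the background is visible in more than half of the frames at that pixel.

```python
import numpy as np

def median_background(frames):
    """Estimate the background as the per-pixel temporal median of the
    training frames (correct whenever the true background is visible in
    more than half of the frames at that pixel).

    frames: array-like of shape (T, H, W), grayscale training frames."""
    return np.median(np.asarray(frames), axis=0).astype(np.uint8)

# Usage with synthetic data: a static scene plus a bright blob that is
# present in fewer than half of the 50 training frames.
rng = np.random.default_rng(0)
frames = rng.integers(90, 110, size=(50, 120, 160), dtype=np.uint8)
frames[:20, 30:50, 30:50] = 250
background = median_background(frames)
print(background[40, 40])   # close to the true scene intensity (~100)
```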
C. HRR (Highest Redundancy Ratio)
In this algorithm, the basic idea of the background model initialization is that the stationary pixel intensity value is the brightness value which has the highest redundancy ratio among the intensity values taken from a training sequence. This value is calculated as h(x) = Highest_Redundant{N(F_t(x))}, where N(F_t(x)) indicates the number of occurrences (redundancy) of the intensity value at pixel location x over all images in F, and F is an array containing T consecutive images. [2]
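Read this way, the initialization amounts to taking, at each pixel location, the intensity value that occurs most often over the T training frames. A minimal sketch under that reading, again assuming grayscale frames in a NumPy array and not the authors' implementation:

```python
import numpy as np

def hrr_background(frames):
    """Background initialization following h(x) = Highest_Redundant{N(F_t(x))}:
    for every pixel location x, keep the intensity value that occurs most
    often (highest redundancy) over the T training frames.

    frames: uint8 array of shape (T, H, W)."""
    F = np.asarray(frames)
    T, H, W = F.shape
    background = np.empty((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            counts = np.bincount(F[:, y, x], minlength=256)
            background[y, x] = counts.argmax()   # most redundant intensity
    return background
```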
IV. COMPARING RESULTS OF DIFFERENT BACKGROUND MODELING ALGORITHMS
According to Ismail Haritaoglu, David Harwood, and Larry S. Davis for the W4 algorithm, and Murat Ekinci and Eyup Gedijli for HRR, a comparison of the intensity assigned per pixel location and the computation time for 50 frames is shown in Table I.
TABLE I. COMPARISON OF BACKGROUND MODELING ALGORITHMS

Algorithm | Intensity assigned for pixel location | Computational Time (sec)
W4        | 56                                    | 2595
Median    | 68                                    | 2283
HRR       | 105                                   | 571
V. MOTION ANALYSIS METHODS
A. Eigen Gait
Motion-based recognition of people using image self-similarity is done using the following procedure. Moving objects are tracked in each frame based on spatial and temporal image coherence. [3] An image template at time t, denoted by O_t, is extracted for each tracked object, consisting of the image region enclosed within the bounding box of its motion blob in the current frame. Deciding whether a moving object corresponds to a walking person is currently done based on simple shape cues, such as the aspect ratio of the bounding box and the blob size, together with periodicity cues. [3]
Once a person has been tracked for N consecutive frames, its N image templates are scaled to the same dimensions H x W, as their sizes may vary due to changes in camera viewpoint and segmentation errors. The image self-similarity, S, of the person is then computed. The self-similarity plot (SP) is useful for characterizing and recognizing individual gaits, so a gait pattern classifier is built that takes an SP as its input feature vector. For this, an eigenface approach [4] is taken, in which a similarity plot is treated the same way that a face image is used in a face recognizer. [3]
In summary, this approach extracts the relevant information from the input feature vectors (face images or SPs) by finding the principal components of the distribution of the feature space, and then applies standard pattern classification to new feature vectors in the lower-dimensional space spanned by the principal components. [3] A simple non-parametric pattern classification technique is used for recognition. In gait classification, the input is normalized to account for different starting poses and walking poses, using the same frequency and the same number of cycles. [5] The normalized similarity plots are labeled, and the principal component space is computed from the eigenvalue decomposition of their covariance matrix. This space is called the Eigen gait. [3]
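A compact sketch of the two main ingredients, the self-similarity plot and the eigengait projection with nearest-neighbour matching, is given below. It is only an illustration of the recipe described above; the array shapes, the mean-absolute-difference similarity measure, and the number of components k are assumptions, not values taken from [3].

```python
import numpy as np

def self_similarity_plot(templates):
    """templates: array (N, H, W) of a tracked person's scaled image
    templates. Returns the N x N self-similarity plot S, where S[i, j]
    is the mean absolute difference between templates i and j."""
    X = np.asarray(templates, dtype=np.float32).reshape(len(templates), -1)
    return np.abs(X[:, None, :] - X[None, :, :]).mean(axis=2)

def eigen_gait_classify(train_plots, train_labels, test_plot, k=10):
    """PCA ('eigengait') projection of flattened self-similarity plots
    followed by nearest-neighbour matching, mirroring the eigenface
    recipe of [4]. All plots must share the same (normalized) size."""
    A = np.stack([p.ravel() for p in train_plots]).astype(np.float32)
    mean = A.mean(axis=0)
    A0 = A - mean
    # Principal components via SVD of the centred training matrix.
    _, _, Vt = np.linalg.svd(A0, full_matrices=False)
    k = min(k, Vt.shape[0])
    basis = Vt[:k]                                  # top-k eigengaits
    train_coeffs = A0 @ basis.T
    test_coeff = (test_plot.ravel().astype(np.float32) - mean) @ basis.T
    dists = np.linalg.norm(train_coeffs - test_coeff, axis=1)
    return train_labels[int(dists.argmin())]       # label of nearest SP
```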
B. Baseline Algorithm
The baseline algorithm, which is designed to be simple and fast, is composed of three parts. First, using a Java based GUI, bounding boxes are semi-automatically defined around the moving person or object in each frame of a sequence. In the second part, the silhouette of the person is extracted by processing only the portion within the bounding boxes. [6] For this step, the background statistics are first estimated in terms of the means and covariances of the RGB channels at each pixel, using the pixel values outside the bounding boxes for each frame and each pixel within the corresponding bounding boxes. The Mahalanobis distance of each pixel value from the estimated background is then computed. It was found that if the distance image is smoothed using a 9 by 9 pyramidal averaging filter, the resulting silhouettes have smooth boundaries. Foreground pixels are then specified using a threshold value, connected regions smaller than N pixels are removed, and the silhouette is scaled to occupy a 128 by 88 sized block. The scaling offers some amount of scale invariance and facilitates fast computation of the similarity measure. This is the third step of the processing. [6]
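A minimal sketch of the Mahalanobis-distance step within a bounding box is shown below; the per-pixel background statistics are assumed to be already estimated, and the 9 by 9 smoothing and small-region removal described above are omitted. This is an illustration, not the code released with [6].

```python
import numpy as np

def baseline_silhouette(frame, bg_mean, bg_cov_inv, threshold=3.0):
    """Silhouette extraction in the spirit of the baseline algorithm:
    threshold the Mahalanobis distance of each RGB pixel from the
    estimated background statistics.

    frame:      (H, W, 3) float array, restricted to the bounding box
    bg_mean:    (H, W, 3) per-pixel background mean of the RGB channels
    bg_cov_inv: (H, W, 3, 3) per-pixel inverse covariance of the channels"""
    d = frame - bg_mean                                    # (H, W, 3)
    # Per-pixel Mahalanobis distance sqrt(d^T * Sigma^-1 * d).
    m = np.einsum('hwi,hwij,hwj->hw', d, bg_cov_inv, d)
    dist = np.sqrt(np.maximum(m, 0.0))
    return dist > threshold                                # foreground mask
```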
C. Star Skeleton Model
The internal motion of a moving object is the change in its boundary shape over time, which is determined by skeletonization. There are a number of standard techniques for skeletonization, such as thinning and distance transformation, but these techniques are computationally expensive and highly sensitive to noise. [7] In the star skeleton model, the extremal points on the boundary of the target are calculated instead, and the target is represented in a star fashion by joining these points to its centroid. The procedure is as follows: calculate the centroid of the target boundary, calculate the distance from the centroid to each border point, and find the local maxima of this distance function. The advantage of this method is that it is not iterative, and that is why it is computationally cheap. [7]
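A short sketch of that procedure is given below. It follows the three steps listed above; the simple moving-average smoothing of the distance signal is an assumption added only to make local-maximum detection less sensitive to boundary noise, and is not taken from [7].

```python
import numpy as np

def star_skeleton(boundary, smooth=5):
    """Star skeleton in the spirit of [7]: boundary is an (N, 2) array of
    (x, y) border points in order around the target. The skeleton joins
    the centroid to the local maxima of the centroid-to-border distance."""
    boundary = np.asarray(boundary, dtype=np.float32)
    centroid = boundary.mean(axis=0)
    dist = np.linalg.norm(boundary - centroid, axis=1)
    # Light circular smoothing of the distance signal before peak picking.
    kernel = np.ones(smooth) / smooth
    padded = np.concatenate([dist[-smooth:], dist, dist[:smooth]])
    d = np.convolve(padded, kernel, mode='same')[smooth:-smooth]
    maxima = [i for i in range(len(d))
              if d[i] >= d[i - 1] and d[i] >= d[(i + 1) % len(d)]]
    return centroid, boundary[maxima]     # star centre and extremal points
```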
VI. COMPARING RESULTS OF DIFFERENT MOTION ANALYSIS METHODS
Based on all the methods and authors discussed above, the results below are compared as reported in the referenced research papers. The table below reports recognition rate and computational time.
TABLE II. COMPARISON OF SEVERAL RECENT METHODS

Algorithm     | Recognition Rate (%) | Computational Time (min)
Eigen Gait    | 88.75                | 8.44
Baseline      | 91.25                | 20
Star Skeleton | 97.50                | 2.05
VII. CONCLUSION
In this paper, several background modeling algorithms and human motion identification methods have been discussed and compared. The background modeling algorithms were compared in terms of the intensity assigned per pixel location and computational time; HRR has the highest pixel intensity assignment and a low computational time. The recognition rate of the star skeleton model using human gait is the highest, and its computational time is the lowest.
ACKNOWLEDGMENT
There are many people who helped and inspired us in making this research successful. We thank the faculty of V.V.P Engineering College, Rajkot, for providing us with valuable guidance about research and publication and for encouraging, advising, and supporting us. Last, but not least, our special thanks go to our institute, V.V.P Engineering College, Rajkot, for giving us this opportunity in a great environment.
REFERENCES
[1] Ismail Haritaoglu, David Harwood and Larry S. Davis, "W4: A Real Time System for Detecting and Tracking People," University of Maryland, College Park, MD.
[2] Murat Ekinci and Eyup Gedijli, "Silhouette Based Human Motion Detection and Analysis for Real-Time Automated Video Surveillance," Turk J Elec Engin, Vol. 13, No. 2, 2005, Dept. of Computer Engineering, Karadeniz Technical University, Trabzon, 61080, Turkey.
[3] Chiraz BenAbdelkader, Ross Cutler, Harsh Nanda and Larry Davis, "EigenGait: Motion-based Recognition of People using Image Self-Similarity," University of Maryland.
[4] M. Turk and A. Pentland, "Face Recognition using Eigenfaces," in Proceedings of the Computer Vision and Pattern Recognition, 1991.
[5] R. Cutler and L. Davis, "Robust Real-time Periodic Motion Detection, Analysis and Applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 2, pp. 129-155, 2000.
[6] P. Jonathon Phillips, Sudeep Sarkar, Isidro Robledo, Patrick Grother and Kevin Bowyer, "The Gait Identification Challenge Problem: Data Sets and Baseline Algorithm," Computer Science and Engineering, University of South Florida, Tampa, Florida 33620-5399; Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556.
[7] Hironobu Fujiyoshi and Alan J. Lipton, "Real-time human motion analysis by image skeletonization," The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213.