Adaptive Moving Foreground Object Detection in Real Time Videos Using Frame Based Approach

DOI : 10.17577/IJERTV3IS031313


Vanathi B

P.G. Scholar ECE Department

Coimbatore Institute of Technology Coimbatore, India

Dr. S. Uma Maheswari Associate Professor ECE Department

Coimbatore Institute of Technology Coimbatore, India

Abstract— This paper proposes a real-time moving object detection technique for monitoring purposes. The method of foreground object detection requires no user data or training information. Successive frames of the video sequence are compared to detect the moving object. Events occurring in an area are captured by a camera, whose output is provided as input to the moving object detection system, which alerts the user. The adaptive frame matching algorithm identifies the mobile object present in each frame at that instant, so this frame-based approach gives an instantaneous indication of a moving object in the video sequence. The proposed foreground object detection process also performs well in the presence of multiple moving objects.

The computational time for extracting the foreground object from live video using the frame matching algorithm in an indoor environment is approximately 0.1 second.

Keywords— moving object detection; monitoring; frame matching algorithm; foreground object detection.

  1. INTRODUCTION

    Moving object detection and tracking are vital for monitoring purposes. In real time, human eyes can spot a moving object quickly, but continuous monitoring by humans is impossible, so computer vision is used for vigilant monitoring. Computer vision methods still find it very difficult to extract a moving object against the background in real-time video. Since training the system on the object to be detected is impossible in live video, the algorithm should be adaptive in nature.

    In general, the factors influencing the process of moving object detection are:

    1. Type of moving object and number of moving objects in a video.

    2. Motion exhibited by the moving object.

    3. Ambiguities between the moving object and the background.

    In the proposed system, a frame-based approach is used to detect the moving object. Consecutive frames are compared to find the presence or absence of a moving object, and thus the foreground object is obtained without the background.

    For surveillance, an intelligent visual surveillance system is used to assist human operators in identifying important events in a scene. Such a system requires fast and robust methods for moving object detection, object tracking and event analysis.

    1. Moving object detection is the fundamental step in video analysis. It aims at extracting the moving object of interest from the video sequence. The performance of this step is significant because all subsequent processing depends on it.

    2. Object tracking is an important step in a vision system. It creates temporal correspondence among the detected objects from frame to frame, aiming for good resolution while keeping data transmission and computation as low as possible.

    3. Event analysis examines the video content and identifies important events, which are used in the decision-making process.

      In general, moving object detection is performed by techniques such as background elimination, temporal differencing, simple region analysis and optical flow. Of these, background subtraction (elimination) is the most widely used.

      Background subtraction is done by comparing incoming video frames with a reference frame, a static frame without any foreground object, as proposed in [3] and [4]. For robust tracking of a mobile object, an effective background subtraction unit is required. The challenges in developing background subtraction algorithms are illumination changes, moving backgrounds and shadows cast by the objects.
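The reference-frame comparison described above can be written in a few lines of array arithmetic. This is a minimal NumPy sketch, not the cited papers' implementations; the threshold value and the toy frames are illustrative assumptions:

```python
import numpy as np

def background_subtract(frame, reference, threshold=25):
    """Classify pixels as foreground where they differ from a static
    reference frame by more than `threshold` (grayscale intensities)."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = foreground, 0 = background

# Example: an empty reference frame and one bright 2x2 "object"
reference = np.zeros((4, 4), dtype=np.uint8)
frame = reference.copy()
frame[1:3, 1:3] = 200
mask = background_subtract(frame, reference)
print(mask.sum())  # 4 foreground pixels
```

The cast to a signed type before subtracting avoids unsigned-integer wraparound, which is the usual pitfall in naive frame differencing.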

      The work in [5] used optical flow analysis to find the mobile object. Optical flow is the motion of the brightness pattern, and additional hardware support is required to compute it. The moving object is detected by analyzing optical flow information obtained from non-stationary images. This method requires the velocity of the moving object to be estimated beforehand, and the computational time to retrieve the moving object is high.

      Region analysis techniques, which use features such as region area, region height/width and region movement, can also detect foreground objects.

      The temporal difference or frame difference method [6] operates by comparing captured image frames. It also requires a reference frame, which is chosen by template matching. The template matching technique adopted in that work is dynamic and adaptive.

      The proposed system compares the captured image frames in the video sequence without any reference frame. Instead, successive incoming frames are compared, so that even a small change in pixel intensity is noticed, which in turn indicates the presence of a moving object.

  2. EXISTING WORK

    A moving object can be detected and extracted using two approaches, namely unsupervised or supervised. The supervised approach requires advance knowledge of the object to be detected, or interaction from the user. The unsupervised approach, on the other hand, requires no training data. The unsupervised method of moving object detection in stored videos achieved successful detection by maintaining spatial continuity and temporal consistency, as proposed in [1]. That work uses visual and motion saliency information from the input video sequence. However, the method takes considerable time to process the video frames, as the frames are not real time.

    Mostly, the foreground object is detected by background elimination on each video frame of the sequence. In live video, as the scenario changes, it is difficult to extract the moving object. To track the moving object efficiently, color histogram analysis is performed as in [2]. That work shows that motion detection analysis is highly accurate for gray-scale images. Though the quality of tracking is high, the accuracy in detecting and counting moving objects is lower for low-resolution images.

    In the case of a complex background, a novel framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR) is used in [3] to detect the moving object and to estimate the background without any training data. This method segments the objects using motion information and formulates a background modeling component. The framework is not suitable for live videos.

  3. PROPOSED WORK

    Visual surveillance has growing importance in many fields such as medicine, security and the military. The large amount of data involved makes it impossible for a human operator to guarantee vigilant monitoring over long durations. Robustness and low computational time are the major design goals of the proposed work.

    The proposed system utilizes the frame matching technique to detect the moving object against the background in live videos. Determining the moving object in a real-world scenario is difficult for humans; in such cases a vigilant surveillance system should be installed at the required places. The system designed should therefore work on live videos.

    The three important factors influencing the process of moving object detection are:

      1. Resolution of the camera.

      2. Area covered by the camera.

      3. Distance between the camera and the moving object.


    The events occurring in the required area are captured by the camera. The video sequence is given as input to the software tool after a connection is established between the image acquisition device driver and the software. The captured video is in the RGB color space, because all standard cameras capture images or videos in RGB by default. This is followed by the frame matching process, in which subsequent frames are compared and the locations of differences are plotted to indicate moving pixels. From the difference values, the background-subtracted image is obtained.
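Since the frames arrive in RGB, one plausible per-pixel comparison is to flag a pixel as changed when any of its channels differs. This is a minimal NumPy sketch of that reading; the per-channel rule and toy frames are assumptions, not details stated above:

```python
import numpy as np

def rgb_change_mask(prev_rgb, curr_rgb):
    """Flag a pixel as moving if ANY of its R, G, B channels differs
    between the previous and current frames."""
    diff = np.abs(curr_rgb.astype(np.int16) - prev_rgb.astype(np.int16))
    return (diff != 0).any(axis=2).astype(np.uint8)

# Example: a single pixel changes, and only in its green channel
prev_rgb = np.zeros((2, 2, 3), dtype=np.uint8)
curr_rgb = prev_rgb.copy()
curr_rgb[0, 1, 1] = 90
print(rgb_change_mask(prev_rgb, curr_rgb))
```

Collapsing across the channel axis keeps the mask two-dimensional, matching the binary image that the rest of the pipeline consumes.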

    Camera → Connection establishment with software → Frame splitting → RGB image → Frame Matching Algorithm → Binary image → Background subtracted image → Alert

    Fig 1: Block diagram for the proposed object extraction algorithm

    1. Frame Matching Algorithm

      The frame matching technique compares each and every pixel in subsequent frames. At each time instant two frames, the current frame and the previous frame, are considered, and corresponding pixel locations are compared to find the mobile object.

      The frame matching algorithm computes the difference between two frames. Images at two different instants, the previous image and the current image, are subtracted to detect the moving object, as specified in [7]. Here the process is performed on real-time video sequences and the absolute difference is calculated.

      Frame matching is followed by binarization. The resulting pixel value B(i, j) is binary, where a non-zero value represents a change in pixel intensity between successive frames, indicating the presence of a moving object.

      Binarization of the difference frame:

      B(i, j) = 0, if |F(n-1)(i, j) − F(n)(i, j)| = 0 (1)
      B(i, j) = 1, otherwise

      where i, j index the pixel position by row and column, F(n)(i, j) is the pixel intensity of the current frame, and F(n-1)(i, j) is the pixel intensity of the previous frame. B(i, j) = 0 means no change in pixel value, hence no moving object; B(i, j) = 1 means a change in pixel value, hence a moving object.

      With the pixel intensity changes, the objects in motion are detected.
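Eq. (1) maps directly onto array operations. The following is a minimal NumPy sketch of the binarization step; the centroid computation at the end mirrors the indication step described in the process flow and is an illustrative addition, not the paper's exact code:

```python
import numpy as np

def binarize_difference(prev_frame, curr_frame):
    """Eq. (1): B(i, j) = 0 where the (n-1)th and nth frames agree,
    1 wherever any intensity change occurred."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff != 0).astype(np.uint8)

# Example: exactly one pixel changes between consecutive frames
prev_f = np.zeros((5, 5), dtype=np.uint8)
curr_f = prev_f.copy()
curr_f[2, 3] = 128
B = binarize_difference(prev_f, curr_f)
ys, xs = np.nonzero(B)
print(B.sum(), (ys.mean(), xs.mean()))  # 1 moving pixel, centroid (2.0, 3.0)
```

The centroid of the non-zero pixels gives a single point that can be overlaid on the RGB frame to mark the detected motion.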

    2. Process Flow

      The flow of the process is: Start → Choosing the video source → Specifying the number of frames and the frame grab interval → Connecting to the hardware → Video capturing begins → If frames acquired <= frames specified (if not, the process is stopped) → Considering the (n-1)th and the nth frame → Computing the absolute difference of the nth and (n-1)th frame → Binarization → If B(i,j) = 0, no indication; otherwise the moving pixels are highlighted in the RGB image → Background subtracted image → Alert for moving object → Time calculation.

    3. Process Flow Explanation

    Step 1: The video source is chosen and the number of frames is specified.

    Step 2: Connection with the hardware is established.

    Step 3: The video capturing process begins.

    Step 4: While the number of frames acquired is less than or equal to the specified number of frames:

    Successive frames, the (n-1)th and the nth, are compared to find pixel intensity changes.

    The absolute difference between the current and the previous frame is computed.

    Binarization of the difference frame is done.

    If the pixel intensity of the binary image is 1, the pixel intensity change is indicated by a centroid in the RGB image.

    The moving object alone is extracted.

    An alert is generated for the moving object.

    The time for the entire process is calculated.

    Step 5: The process is stopped once the frames acquired exceed the specified number of frames.

    This method works well in both indoor and outdoor environments, under varying illumination conditions and in the presence of background dynamics.
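The five steps above can be sketched end to end. In this minimal Python sketch, synthetic frames stand in for a live camera (with OpenCV one would read from cv2.VideoCapture instead), so the frame count, frame size and motion pattern are assumptions for illustration only:

```python
import time
import numpy as np

def synthetic_frames(n, h=480, w=640):
    """Stand-in for live capture (Steps 1-3); a bright block drifts right."""
    for k in range(n):
        f = np.zeros((h, w), dtype=np.uint8)
        f[100:150, 10 * k:10 * k + 50] = 255
        yield f

def detect_loop(frames):
    """Step 4: compare successive frames, binarize, and count alerts."""
    alerts, prev = 0, None
    t0 = time.perf_counter()
    for frame in frames:
        if prev is not None:
            changed = np.abs(frame.astype(np.int16) - prev.astype(np.int16)) != 0
            if changed.any():  # any moving pixel triggers an alert
                alerts += 1
        prev = frame
    return alerts, time.perf_counter() - t0  # elapsed time, as in Step 4

alerts, elapsed = detect_loop(synthetic_frames(5))
print(alerts)  # 4 consecutive frame pairs, each showing motion
```

Holding only the previous frame keeps memory constant regardless of how many frames are specified, which matters for long live captures.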

  4. EXPERIMENTAL RESULTS

The experiment is conducted under different scenarios:

The figures below each show the input image, the binary image and the retrieved image for the corresponding scenario:

Fig 2: Background image without a foreground object

Fig 3: Retrieved image when no moving object is present

Fig 4: Image retrieval with a single moving object

Fig 5: Image retrieval with multiple moving objects

The time taken for the moving object detection process is around 0.1 second per frame at a resolution of 640 × 480.

REFERENCES

  1. Wei-Te Li, Haw-Shiuan Chang, Kuo-Chin Lien, Hui-Tang Chang and Yu-Chiang Frank Wang, "Exploring Visual and Motion Saliency for Automatic Video Object Extraction", IEEE Transactions on Image Processing, Vol. 22, No. 7, July 2013, pp. 2600-2610.

  2. Dutta R, Mitra K, Mukherjee S, Sharma P, "Real Time Edge Detected Advanced Image Acquisition System Using RGB Analysis", International Conference on Intelligent Systems and Signal Processing (ISSP), March 2013, pp. 87-91.

  3. Xiaowei Zhou, Can Yang and Weichuan Yu, "Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 3, March 2013, pp. 597-610.

  4. Songyin Fu, Gangyi Jiang, Mei Yu, "An Effective Background Subtraction Method Based on Pixel Change Classification", International Conference on Electrical and Control Engineering, June 2010, pp. 4634-4637.

  5. Ho Gi Jung, Jae Kyu Suhr, Kwanghyuk Bae and Jaihie Kim, "Free Parking Detection Using Optical Flow Based Euclidean 3D Reconstruction", MVA 2007 Conference on Machine Vision Applications, May 16-18, 2007, Tokyo, Japan.

  6. Widyawan, Muhammad Ishan Zul, "Adaptive Motion Detection Algorithm Using Frame Differences and Dynamic Template Matching Method", The 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2012), Nov. 26-28, 2012, Daejeon Convention Center (DCC), Daejeon, Korea.

  7. Raj Bharath, Dhivya, "Object Detection, Classification in Videos and its Parametric Evaluation Using Matlab", International Journal of Advance Research in Computer Science and Management Studies, Vol. 2, Issue 1, Jan 2014, pp. 525-533.
