Face Tracking Techniques in Color Images: A Study and Review

DOI : 10.17577/IJERTV2IS121322


Ravi Subban, Muthukumar S., Pasupathi P.

Dept. of Computer Science, Pondicherry University, Pondicherry, India

Centre for Info. Tech & Engg., Manonmaniam Sundaranar University, Tirunelveli, India

Arun Benedict, J. Jayapal

Department of Computer Science, St. Joseph College of Arts and Science, Cuddalore, India

    Abstract— Face tracking has become an increasingly important research topic in the computer vision field, mainly due to the large number of real-world applications and situations where such methods can be applied, such as video surveillance, criminal identification, terrorist identification and border intrusion detection. It is very difficult to come up with a robust solution due to variations in illumination, pose, appearance, etc. This paper presents a comparative study and analysis of face tracking techniques. Of the face tracking methods surveyed, the memory-based particle filter and stereo methods among the motion-based approaches, and the Kanade-Lucas-Tomasi with Scale-invariant Feature Transform approach among the model-based approaches, produce close to 100% tracking results.

    Index Terms— Kalman Filters, Gabor Wavelets, Active Contour Model, Bayesian Network.

    1. INTRODUCTION

      In order to analyse and recognize the faces of people in realistic, unconstrained environments, robust tracking and segmentation are a prerequisite: they provide a sequence of face images normalized with respect to scale and image-plane translation. Robust real-time tracking and segmentation of a moving face in image sequences is a fundamental step in many computer vision systems, including automated visual surveillance, human-machine interfaces, and very low-bandwidth telecommunications. An advanced surveillance interface may use face tracking and detection techniques to direct the attention of the computer to a human being and maintain the face in the camera's field of view; in consequence, it reduces the communication bandwidth as well as the memory space by transmitting or storing only the fraction of a video frame which contains the area of interest (the tracked face). Monitoring the human face continuously may be necessary to accomplish tasks such as recognizing the user's face [14]. Nowadays intelligent service robots provide services for human beings; they operate in dynamic, unstructured environments and interact with people through user-friendly interfaces in a natural and efficient way. Mobile agents can aid surveillance tasks and provide useful information about human activity [15].

    2. FACE TRACKING SYSTEMS

      Face tracking has been one of the most challenging research areas in recent years. It is a difficult problem due to the large appearance variability of a face. A human can track a face easily because we already know where the face was in the previous frame and what it looks like. A particular scenario where face tracking can be used is face modelling, which relies on interpolation and therefore becomes slow on high-resolution images; among the strengths of such a tracker, the more important one is its ability to keep tracking a face even when small occlusions occur. An algorithm was developed by Claudio A. Perez, Alvaro Palma, Carlos A. Holzmann and Christian Pena [9] for face detection and eye tracking on frontal faces with no restrictions on the background. The questionable observer detection problem is introduced and defined by Jeremiah R. Barr, Kevin W. Bowyer and Patrick J. Flynn [48] as: given a collection of videos of crowds, determine which individuals appear unusually often across the set of videos.

      An efficient foreground/background video coding algorithm was proposed by Kwok-Wai Wong, Kin-Man Lam, Wan-Chi Siu and Kai-Ming Tse [10]. A vision system that tracks a human face in 3D was presented by Bogdan Kwolek [13]. He combined color and stereo cues to find likely image regions where a face may exist; a greedy search algorithm was used that checked for a face candidate, focusing the search around the previous position of the face. A novel object tracking algorithm for video sequences was proposed by Mohand Said Allili and Djemel Ziou [24]. The formulation of their tracking model is based on variational calculus, where region and boundary information cooperate for object boundary localization using active contours. A real-time system called DRUIDE (Detection-Recognition-Unification-Interpretation-Decision-Evolution) was proposed by Jean-Christophe Terrillon, Arnaud Pilpré, Yoshinori Niwa and Kazuhiko Yamamoto [21]; it is intended for robust simultaneous face detection or face tracking and for the recognition of multiple hand postures of the Japanese Sign Language (JSL) in color video sequences. A method that merges face detection and face tracking into a single probabilistic framework was proposed by Sachin Gangaputra et al. [27].

      A 3-tier framework for hardware implementation of a Dynamic Face Tracking System (DFTS) based on Gabor wavelets was presented by Eustace Painkras and Charayaphan Charoensak [28]. Amine Iraqui H., Yohan Dupuis, Rémi Boutteau, Jean-Yves Ertaud and Xavier Savatier [50] described a dual-camera vision system capable of automatically detecting and tracking regions of interest at a higher zoom level. A unified system for segmentation and tracking of face and hands in a sign language recognition application using a single camera was presented by George Awad, Junwei Han and Alistair Sutherland [30]. Unlike much related work that uses color gloves, they detect skin by combining three useful features: color, motion and position. A multiple-stage face detection and tracking system designed for implementation on the NICTA high-resolution (5 MP) smart camera was presented by Y. M. Mustafah, T. Shan, A. W. Azman, A. Bigdeli, and B. C. Lovell [41].

      A spatial-temporal mutual feedback scheme aimed at face detection and tracking in video sequences was proposed by Xuchao Li and Xiaofang Zhou [42]. The beauty of the algorithm is its ability to form a closed-loop negative feedback between spatial detection and temporal tracking, which decreases the error of both detection and tracking. A novel particle filter, called M-PF, was proposed by Dan Mikami, Kazuhiro Otsuka and Junji Yamato [45] for the visual tracking of human face pose. Omni-directional cameras have an obvious drawback: they capture only low-resolution images, so objects far from the camera cannot be correctly identified. To overcome this problem, Chin-Shyurng Fahn and Chin-Sung Lo [46] proposed a high-definition human face tracking system using the fusion of omni-directional and pan-tilt-zoom (PTZ) cameras. Based on existing digital image processing algorithms, Zhao Wenge and He Huiming [49] proposed a hardware/software partitioning for realizing face tracking functions on an FPGA platform.

      An image tracking strategy was developed by Ching-Kuo Wang, Yuan-Chang Chang and Cheng-Hang Shieh [51] to perform different facial emotions on a robot skull and to analyze the neck dynamics. Visual tracking and servoing of a human face are implemented through image processing by A. A. Shafie, A. Iqbal and M. R. Khan [52]. Zdenek Kalal, Krystian Mikolajczyk and Jiri Matas [56] designed a novel system for long-term tracking of a human face in unconstrained videos based on the Tracking-Learning-Detection (TLD) approach.

      Two broad approaches to the representation and tracking of moving objects are motion-based and model-based. Both methods have their relative strengths and weaknesses and seem to be complementary [3]. Motion-based approaches depend on a robust method for grouping visual motions consistently over time [4]. They tend to be fast but do not guarantee that the tracked regions have any meaning. Model-based approaches, on the other hand, can impose high-level semantic knowledge more readily but suffer from being computationally expensive due to the need to cope with scaling, translation, rotation and deformation [1]. Di Xie et al. [60] proposed a novel system that acquires a clear human face or head in the video stream from a single surveillance camera and applies several state-of-the-art computer vision algorithms to generate real-time human head detection and tracking results.

      1. Motion Based Face Tracking Systems

        1. Kalman Filters

          Stephen McKenna and Shaogang Gong [1] implemented a real-time multi-motion tracker using Kalman filters to track objects as groups of temporal zero crossings. A new near-real-time technique for 3D face pose tracking from a monocular image sequence obtained from an uncalibrated camera was proposed by Zhiwei Zhu and Qiang Ji [19]. The basic idea behind their approach is that instead of treating 2D face detection and 3D face pose estimation separately, they perform simultaneous 2D face detection and 3D face pose tracking. The 3D face pose at a time instant is constrained by the face dynamics using Kalman filtering and by the face appearance in the image.
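As an illustration of how a Kalman filter constrains a tracked face by its dynamics, the following is a minimal constant-velocity sketch in Python; all matrix and noise values are illustrative assumptions, not taken from the cited systems.

```python
# Minimal constant-velocity Kalman filter for a face centre (x, y) in image
# coordinates. Noise levels and initial uncertainty are assumed values.
import numpy as np

class FaceKalman:
    def __init__(self, x0, y0, dt=1.0):
        # State: [x, y, vx, vy]; measurement: [x, y]
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.P = np.eye(4) * 100.0   # initial state uncertainty (assumed)
        self.Q = np.eye(4) * 0.01    # process noise (assumed)
        self.R = np.eye(2) * 4.0     # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]            # predicted face centre

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]            # corrected face centre
```

In a tracker of this kind, `predict()` is called once per frame and `update()` is called whenever the face detector supplies a new measurement.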

        2. Active Contour Models

          An improvement to the conventional active contour model, called Snuke, was proposed by Xiong Bing, Yu Wei and Charayaphan Charoensak [20] and applied to face biometrics, i.e. automatic detection and tracking of the complex face contour. Fast and efficient motion detection and estimation methods are introduced, and the obtained motion information is used to reduce the number of sub-images to search.
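For reference, a conventional snake can be fitted to a rough face region with scikit-image as sketched below; the Snuke variant above adds its own energy terms, which are not reproduced here, and the circular initialisation and parameter values are assumptions.

```python
# Sketch: fit a closed snake (active contour) around an approximate face
# location. Parameters and the circular initialisation are assumptions.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def fit_face_contour(rgb_frame, cx, cy, radius):
    """Fit a closed snake around a rough face position (cx, cy)."""
    gray = gaussian(rgb2gray(rgb_frame), sigma=3)
    # Initialise the snake as a circle around the rough face position,
    # in (row, col) coordinates.
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([cy + radius * np.sin(theta),
                            cx + radius * np.cos(theta)])
    snake = active_contour(gray, init, alpha=0.015, beta=10.0, gamma=0.001)
    return snake  # array of (row, col) contour points
```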

        3. Particle Filter

          A system was developed by Lukasz Stasiak and Andrzej Pacut [40] to detect and track individuals and groups of people in real time, designed as a first screening stage for iris-based access control. The particle filtering was used in the Conditional Density Propagation framework of Isard and Blake, and the face detection was carried out using skin color in the HSV color space. An automatic approach to tracking fiducial points across various facial expressions was described by Tie Yun and Ling Guan [54]. This approach combines a color-based kernel correlation technique for the observation likelihood with DE-MC particle filtering for tracking multiple points. An efficient face detection and tracking system for mobile interaction was proposed by Yeong Nam Chae, Jaewon Ha and Hyun S. Yang [55]. To detect faces rapidly, the proposed system adopts an efficient color-filtering-based region selection method.
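A minimal Condensation-style particle filter with a skin-colour likelihood evaluated in HSV space might look as follows; the colour thresholds, motion noise and patch size are illustrative assumptions rather than values from the cited systems.

```python
# Sketch: particle filter over the face centre with an HSV skin-colour
# likelihood. Thresholds and noise levels are assumed, not from the papers.
import cv2
import numpy as np

def skin_likelihood(hsv, x, y, half=15):
    """Fraction of skin-coloured pixels in a patch around (x, y)."""
    h, w = hsv.shape[:2]
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    patch = hsv[y0:y1, x0:x1]
    mask = cv2.inRange(patch, (0, 40, 60), (25, 180, 255))  # rough skin range
    return 1e-3 + mask.mean() / 255.0

def track_step(frame_bgr, particles, motion_std=8.0):
    """One predict/weight/resample step; particles is an (N, 2) array of (x, y)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    # Predict: diffuse particles with Gaussian motion noise.
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
    # Weight: evaluate the skin-colour likelihood at each particle.
    weights = np.array([skin_likelihood(hsv, int(px), int(py))
                        for px, py in particles])
    weights /= weights.sum()
    estimate = weights @ particles            # weighted mean face position
    # Resample: draw particles proportionally to their weights.
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], estimate
```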

        4. Dynamic sound beam Algorithm

          A method of visually steerable sound beam forming was presented by Kensuke Shinoda et al. [18]. The method combines face detection and tracking by motion image processing with sound beam forming by a speaker array.

        5. Stereo methods

          Stephen J. Krotosky, Shinko Y. Cheng and Mohan M. Trivedi [22] review both a stereo-based and a long-wave infrared-based system for smart airbag deployment; vision-based systems for smart airbags aim to give precise information about occupant pose and location, which can be used to make intelligent airbag deployment decisions.

        6. Wavelet Transform

          An efficient algorithm was presented by Bardia Mohabbati and Shohreh Kasaei [31] to detect and track faces in color image sequences in the wavelet domain. The algorithm first utilizes a nonlinear skin color model and a face model to extract face candidate regions.
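As a rough illustration of skin-colour filtering in the wavelet domain, the sketch below thresholds the chrominance LL sub-bands of a one-level Haar decomposition; the use of PyWavelets, the colour bounds and the area threshold are assumptions, and the cited algorithm's nonlinear skin and face models are not reproduced.

```python
# Sketch: coarse face-candidate extraction from the LL (approximation)
# sub-bands of a Haar wavelet decomposition of the chrominance channels.
import cv2
import numpy as np
import pywt

def face_candidates_wavelet(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(float)
    # One-level Haar decomposition per chrominance channel.
    cr_ll, _ = pywt.dwt2(ycrcb[:, :, 1], 'haar')
    cb_ll, _ = pywt.dwt2(ycrcb[:, :, 2], 'haar')
    # The Haar LL band is roughly twice the local mean; rescale before thresholding.
    cr_ll, cb_ll = cr_ll / 2.0, cb_ll / 2.0
    # Simple skin test on the smoothed chrominance sub-bands (assumed bounds).
    skin = (cr_ll > 135) & (cr_ll < 180) & (cb_ll > 85) & (cb_ll < 135)
    mask = skin.astype(np.uint8) * 255
    # Connected components at half resolution give coarse face candidates.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [stats[i, :4] * 2 for i in range(1, n) if stats[i, 4] > 100]
    return boxes  # (x, y, w, h) boxes scaled back to full resolution
```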

        7. De-Identification

          Prachi Agrawal and P. J. Narayanan [59] presented an approach to de-identify individuals from videos.

      2. Model Based Face Tracking Systems

        1. Kalman Filtering

          A novel algorithm for detecting facial features and an implementation of a real-time system for tracking multiple faces and facial features was presented by Antonio Colmenarez et al. [5]. E. Loutas, C. Nikou and I. Pitas [11] proposed an information-theoretic approach to joint probabilistic face detection and face tracking. The likelihood estimation is performed using a set of automatically generated feature points, while the prior probability estimation is based on a mutual information tracking cue and a Gaussian temporal model. A trainable system for face detection and tracking was described by Augusto Destrero et al. [37]. The structure of the system is based on multiple cues that discard non-face areas as soon as possible; it combines motion, skin, and face detection. Vincent Girondel et al. [25] presented a fast and efficient algorithm based on the combination of partial Kalman filtering and face pursuit to track multiple persons even under occlusions. This method can be used for indoor video sequences.

        2. Continuously Adaptive Mean Shift Tracking

          A novel probabilistic approach to unify face detection and tracking was presented by Ji Tao and Yap-Peng Tan [26]. Using preliminarily detected FORs, a hypothesis sequence can be constructed to recover the missing faces by maximizing the probability score of a graphical chain model. A system for multiple-object tracking and multi-view face detection and recognition was proposed by Han-Pang Huang and Chun-Ting Lin [32] using a novel method called Multi-CAMSHIFT.

          Prahlad Vadakkepat et al. [44] addressed a scenario where a robot tracks and follows a human. A neural network is utilized to learn skin and non-skin colors, and the resulting skin-color probability map is used for skin classification together with morphology-based preprocessing. A heuristic rule is used for face-ratio analysis, and Bayesian cost analysis is used for label classification.
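A CAMSHIFT tracker driven by a hue-histogram back-projection, in the spirit of the mean-shift and CAMSHIFT variants above, can be sketched with OpenCV as follows; the histogram settings, initial window and termination criteria are assumptions.

```python
# Sketch: CAMSHIFT face tracking from a skin/appearance back-projection map.
import cv2
import numpy as np

def camshift_track(video_path, init_box):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = init_box                      # face box from a detector
    roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    # Hue histogram of the detected face acts as the appearance model.
    hist = cv2.calcHist([roi_hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Back-project the hue histogram to get a skin-probability map.
        prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        # CAMSHIFT adapts the window size and orientation every frame.
        rot_box, window = cv2.CamShift(prob, window, term)
        yield rot_box                          # rotated rectangle of the face
    cap.release()
```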

        3. Condensation Algorithm

          Xia Siyu, Li Jiuxian and Xia Liangzheng [35] addressed the problem of human face tracking in color image sequences; they proposed a method that integrates a human skin color feature with the Condensation algorithm. Wenlong Zheng et al. [29] proposed the Boosted Adaptive Particle Filter (BAPF) algorithm for face detection and tracking in video sequences; the BAPF is designed to obtain more accurate estimates of the proposal and posterior distributions, improving tracking accuracy on the input video sequences. Frank Wallhoff et al. [36] presented a multi-modal approach for finding and tracking a face and estimating the head's gaze as well as the eyes' view direction.

        4. Active Contour Model

          A new approach for automatically segmenting and tracking faces in color images was presented by Karin Sobottka and Ioannis Pitas [2]. The segmentation of faces is based on color and shape information, and face hypotheses are verified by searching for facial features.

        5. Top-Down Algorithm

          Shinjiro Kawato and Jun Ohya [6] proposed a method that automatically extracts a skin color distribution model for face detection systems. They applied this method to their face detection and tracking system.

        6. Dynamic Programming

          Zhu Liu and Yao Wang [7] proposed a new face detection method based on template matching. Recognizing that the actual face in the test image can be stretched non-uniformly compared to the face template, they developed an algorithm that uses dynamic programming to test stretching in both horizontal and vertical directions and to search for the best matching region.
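The stretching idea can be illustrated with a small dynamic-programming alignment between the row-intensity profile of the template and that of a candidate region; this is a generic DTW-style sketch under assumed cost definitions, not the cited algorithm.

```python
# Sketch: dynamic-programming alignment of two 1-D intensity profiles,
# allowing rows to stretch (repeat) or compress (skip).
import numpy as np

def dp_profile_match(template_rows, region_rows):
    """Minimum alignment cost between two 1-D row-mean intensity profiles."""
    t = np.asarray(template_rows, dtype=float)
    r = np.asarray(region_rows, dtype=float)
    n, m = len(t), len(r)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(t[i - 1] - r[j - 1])
            # Match, stretch (repeat template row), or compress (skip region row).
            D[i, j] = cost + min(D[i - 1, j - 1], D[i, j - 1], D[i - 1, j])
    return D[n, m] / (n + m)   # normalised matching cost (lower is better)

# Example usage with hypothetical template and region images:
# cost = dp_profile_match(template_img.mean(axis=1), region_img.mean(axis=1))
```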

        7. Bayesian Network

          A joint probabilistic face detection and face tracking algorithm was proposed by E. Loutas, C. Nikou and I. Pitas [11]. Face tracking is achieved by a Bayesian network. The likelihood estimation is based on statistical training of a set of automatically generated feature points.

        8. Gaussian Temporal Model

          Weimin Huang et al. [12] presented a novel algorithm to detect the face and eyes in a reliable manner with a stereo camera. A joint probabilistic face detection and tracking algorithm combining likelihood estimation and a prior probability was proposed by Evangelos Loutas, Ioannis Pitas and Christophoros Nikou [23]. The likelihood estimation scheme is based on the statistical training of sets of automatically generated feature points and a mutual information tracking cue, while the prior probability estimation is based on a Gaussian temporal model. Usman Qayyum and Muhammad Younus Javed [33] dealt with the real-time implementation of face detection, tracking and facial feature localization in video sequences, invariant to scale, translation, and (±45°) rotation transformations. The proposed system contains two parts, visual guidance and face/non-face classification.

        9. Adaboost Classifiers

          Tilo Burghardt and Janko Calic [34] presented a real-time method for extracting information about the locomotive activity of animals in wildlife videos by detecting and tracking the animals' faces. As an example application, the system is trained on lions. The underlying detection strategy is based on the concepts used in the Viola-Jones detector, an algorithm originally used for human face detection utilising Haar-like features and AdaBoost classifiers.
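For reference, Viola-Jones-style detection with a pre-trained cascade of boosted Haar classifiers is available in OpenCV, as sketched below; the wildlife system above trains its own cascades (e.g. on lion faces), and the parameter values here are typical defaults rather than tuned settings.

```python
# Sketch: Haar cascade (boosted classifiers over Haar-like features) for
# frontal human face detection with OpenCV's bundled model.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)              # reduce illumination variation
    # scaleFactor and minNeighbors are typical values, not tuned settings.
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(40, 40))
```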

        10. Hierarchical Multi-Resolution LP Model

          Eng-Jon Ong and Richard Bowden [38] proposed a learnt, data-driven approach for accurate, real-time tracking of facial features using only intensity information. The task of automatic facial feature tracking is non-trivial since the face is a highly deformable object with large textural variations and motion in certain regions.

        11. Scale-Invariant Feature Transform

          Michail Krinidis, Nikos Nikolaidis, and Ioannis Pitas [39] presented a novel approach for selecting and tracking feature points in video sequences. In this approach, the image intensity is represented by a 3-D deformable surface model. The proposed approach relies on selecting and tracking feature points by exploiting the so-called generalized displacement vector that appears in the explicit surface deformation governing equations.
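Generic Kanade-Lucas-Tomasi (KLT) feature selection and tracking, which the comparison table pairs with SIFT for this approach, can be sketched with OpenCV as follows; this is not the authors' deformable-surface formulation, and the feature and optical-flow parameters are assumptions.

```python
# Sketch: select good features inside a face box and track them to the next
# frame with pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

def klt_track(prev_gray, next_gray, face_box):
    x, y, w, h = face_box
    mask = np.zeros_like(prev_gray)
    mask[y:y+h, x:x+w] = 255                   # restrict features to the face
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                  winSize=(15, 15), maxLevel=2)
    ok = status.ravel() == 1
    # Return the matched point pairs (previous frame, next frame).
    return pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)
```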

        12. Color Histograms

          Soufiane Ammouri and Guillaume-Alexandre Bilodeau [43] presented detection and tracking methods for users' body parts in video sequences. They used a technique based on color and shape to detect the body parts and the medication bottles. A systematic discussion of the pros and cons of two well-known traditional approaches for image contrast enhancement is conducted by K. Kyamakya, J. C. Chedjou, M. A. Latif and U. A. Khan [47]. The first approach is based on the CNN paradigm and the second on the coupled nonlinear oscillators paradigm for image processing.
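A region signature combining a colour histogram with Hu moments of the region silhouette, in the spirit of the colour-and-shape cues above, might be computed as follows; the bin counts, the similarity measures and their weighting are assumptions rather than the cited method.

```python
# Sketch: describe a tracked region by a colour histogram plus Hu moments of
# its binary mask, and compare two such signatures.
import cv2
import numpy as np

def region_signature(frame_bgr, mask):
    """mask: uint8 image, 255 inside the region and 0 elsewhere."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, [16, 16], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).ravel()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress dynamic range
    return hist, hu

def signature_distance(sig_a, sig_b):
    hist_d = cv2.compareHist(sig_a[0], sig_b[0], cv2.HISTCMP_BHATTACHARYYA)
    shape_d = np.linalg.norm(sig_a[1] - sig_b[1])
    return hist_d + 0.1 * shape_d              # weighted combination (assumed)
```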

        13. Nonintrusive System

          A nonintrusive system that detects driver fatigue and issues a timely warning was developed by Hardeep Singh, J. S. Bhatia and Jasbir Kaur [57], since a large number of road accidents occur due to driver drowsiness.

        14. Hybrid Fourier-Based Facial Feature

          A robust face recognition system for large-scale data sets taken under uncontrolled illumination variations was proposed by Wonjun Hwang, Haitao Wang, Hyunwoo Kim, Seok-Cheol Kee, and Junmo Kim [58]. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme.

        15. Statistical Skin Color Model

          De-Jiao Niu, Yongzhao Zhan, Shun-Ling Song [16] presented a method for face detection, tracking and privacy protection. According to skin-color distribution in the color space, they developed a statistical skin color model through interactive sample training.
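A minimal statistical skin-colour model of this kind can be fitted as a single Gaussian over chrominance from labelled skin samples and thresholded on the Mahalanobis distance, as sketched below; the colour space, the single-Gaussian form and the threshold are assumptions rather than the cited model.

```python
# Sketch: single-Gaussian skin-colour model over (Cr, Cb), fitted from
# labelled skin pixels, applied per pixel via the Mahalanobis distance.
import cv2
import numpy as np

def fit_skin_model(skin_pixels_bgr):
    """skin_pixels_bgr: (N, 3) array of interactively labelled skin pixels."""
    samples = skin_pixels_bgr.reshape(-1, 1, 3).astype(np.uint8)
    ycrcb = cv2.cvtColor(samples, cv2.COLOR_BGR2YCrCb)
    crcb = ycrcb.reshape(-1, 3)[:, 1:].astype(float)      # keep (Cr, Cb)
    mean = crcb.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(crcb, rowvar=False))
    return mean, cov_inv

def skin_probability_mask(frame_bgr, mean, cov_inv, thresh=6.0):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    crcb = ycrcb[:, :, 1:].reshape(-1, 2).astype(float) - mean
    # Squared Mahalanobis distance of every pixel to the skin-colour mean.
    d2 = np.einsum('ij,jk,ik->i', crcb, cov_inv, crcb)
    return (d2 < thresh).reshape(frame_bgr.shape[:2]).astype(np.uint8) * 255
```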

        16. Probabilistic Method

      A new probabilistic method for detecting and tracking multiple faces in a video sequence was presented by Ragini Choudhury Verma, Cordelia Schmid and Krystian Mikolajczyk [17]. The proposed method integrates the face probabilities provided by the detector with the temporal information provided by the tracker to produce a method superior to the available detection and tracking methods.

    3. DISCUSSION

      Robust real-time tracking and segmentation of a moving face in image sequences is a fundamental step in many vision systems, including automated visual surveillance, human-machine interfaces, and very low-bandwidth telecommunications. An advanced surveillance interface may use face tracking and detection techniques to direct the attention of the computer to a human being and maintain the face in the camera's field of view; in consequence, it reduces the communication bandwidth as well as the memory space by transmitting or storing only the fraction of a video frame which contains the area of interest (the tracked face).

      The results obtained by the surveyed researchers are shown in Tables I and II. There are two types of face tracking systems, namely motion-based and model-based face tracking systems. The motion-based face tracking systems use Kalman filters, dynamic sound beam forming, the active contour model, stereo methods, the wavelet transform, Differential Evolution Markov Chain (DE-MC) particle filters, particle filters, de-identification, feature point tracking, principal component analysis, mean-shift, genetic algorithms with principal component analysis, Markov models, Gabor wavelets, the Boosted Adaptive Particle Filter (BAPF), Kalman filtering, the memory-based particle filter, hierarchical agglomerative clustering, real-time image processing systems, AdaBoost, pan-tilt-zoom cameras, visual tracking, online boosting OC, Tracking-Learning-Detection and digital image processing. All of these methods produce good face tracking results, but the stereo methods, digital image processing and the memory-based particle filter produce 100% face tracking results. The model-based face tracking systems use Kalman filters, the active contour model, Information-Based Maximum Discrimination (IBMD) classifiers, the top-down algorithm, clustering methods, Bayesian networks, stereo tracking, Continuously Adaptive Mean Shift, Multi-Continuously Adaptive Mean Shift, the wavelet transform, AdaBoost, the Condensation algorithm, biased linear predictors, Kanade-Lucas-Tomasi with Scale-invariant Feature Transform, the modified X-Means algorithm, color histograms with their second-order Hu moments, coupled nonlinear oscillators and hybrid Fourier features. Among these, Kanade-Lucas-Tomasi with Scale-invariant Feature Transform produces the best face tracking results, at 99%.

    4. CONCLUSION

      Most of the time, a video sequence of the scene is available from which a person may have to be recognized. Hence, a robust system that detects and tracks a face is necessary. Face detection and tracking become important tasks with the growing demand for content-based image functionality. This paper provides a comparative analysis of face detection and face tracking. Of the face tracking methods surveyed, the memory-based particle filter and stereo methods among the motion-based approaches, and the Kanade-Lucas-Tomasi with Scale-invariant Feature Transform approach among the model-based approaches, produce close to 100% tracking results.

      TABLE I. MOTION BASED FACE TRACKING SYSTEMS

      Authors Name | Face Tracking Method Used | Detection Rate
      Stephen McKenna et al. [1] | Kalman filters | —
      Kensuke Shinoda et al. [18] | Dynamic sound beam | —
      Xiong Bing, Yu Wei et al. [20] | Active Contour Model | 90%
      Stephen J. Krotosky et al. [22] | Stereo methods | 100%
      Bardia Mohabbati et al. [31] | Wavelet Transform | 85%
      Tie Yun et al. [54] | Differential Evolution Markov Chain (DE-MC) particle filters | 88%
      Yeong Nam Chae et al. [55] | Particle Filter | 88%
      Prachi Agrawal et al. [59] | De-Identification | 90%
      Claudio A. Perez et al. [9] | Digital Image Processing | 100%
      Kwok-Wai Wong et al. [10] | Feature Point Tracking | —
      Bogdan Kwolek [13] | Principal Component Analysis | —
      Jean-Christophe Terrillon et al. [21] | Mean-Shift | 96%
      Kwok-Wai Wong et al. [24] | Genetic Algorithm and Principal Component Analysis | —
      Sachin Gangaputra et al. [27] | Markov Model | —
      Eustace Painkras et al. [28] | Gabor Wavelets | —
      Wenlong Zheng et al. [29] | Boosted Adaptive Particle Filter (BAPF) | —
      George Awad et al. [30] | Kalman Filtering | —
      Y. M. Mustafah et al. [41] | Background Subtraction | —
      Xuchao Li et al. [42] | Kalman Filtering | —
      Dan Mikami et al. [45] | Memory-based Particle Filter | 100%
      Chin-Shyurng Fahn et al. [46] | Particle Filter | —
      Jeremiah R. Barr et al. [48] | Hierarchical Agglomerative Clustering | —
      Zhao Wenge et al. [49] | Real-time Image Processing System | —
      Amine Iraqui et al. [50] | Adaboost | —
      Ching-Kuo Wang et al. [51] | Pan-Tilt Zoom | —
      A. A. Shafie et al. [52] | Visual Tracking | —
      Hongwen Huo et al. [53] | Online boosting OC | —
      Zdenek Kalal et al. [56] | Tracking-Learning-Detection | 99%

      TABLE II. MODEL BASED FACE TRACKING SYSTEMS

      Authors Name | Face Tracking Method Used | Detection Rate
      Stephen McKenna et al. [1] | Kalman filters | —
      Karin Sobottka et al. [2] | Active Contour Model | —
      Antonio Colmenarez et al. [5] | Information-Based Maximum Discrimination (IBMD) classifiers | 98%
      Shinjiro Kawato et al. [6] | Top-down algorithm | —
      Zhu Liu et al. [7] | Clustering Method | 82%
      E. Loutas et al. [11] | Bayesian network | —
      Weimin Huang et al. [12] | Stereo Tracking | 96%
      Ji Tao et al. [26] | Continuously Adaptive Mean Shift | 94%
      Han-Pang Huang et al. [32] | Multi-Continuously Adaptive Mean Shift | 91%
      Usman Qayyum et al. [33] | Wavelet Transform | 90%
      Tilo Burghardt et al. [34] | Adaboost | —
      Xia Siyu et al. [35] | Condensation algorithm | —
      Frank Wallhoff et al. [36] | Condensation algorithm | 90%
      Augusto Destrero et al. [37] | Kalman Filtering | 94%
      Eng-Jon Ong et al. [38] | Biased-Linear Predictor | —
      Michail Krinidis et al. [39] | Kanade-Lucas-Tomasi and Scale-invariant Feature Transform | 99%
      Lukasz Stasiak et al. [40] | Modified X-Means algorithm | —
      Soufiane Ammouri et al. [43] | Color Histograms and their Second Order Hu Moments | 97%
      Prahlad Vadakkepat et al. [44] | Continuously Adaptive Mean Shift Tracking | —
      K. Kyamakya et al. [47] | Coupled Nonlinear Oscillator | —
      Hardeep Singh et al. [57] | Eye tracking system | —
      Wonjun Hwang et al. [58] | Hybrid Fourier features | 95%

      ACKNOWLEDGMENT

      This work is supported and funded by the University Grants Commission (UGC), India under a major research project awarded to the Department of Computer Science, Pondicherry University, Puducherry, India.



      REFERENCES

      1. Stephen McKenna et. al, Tracking Faces, Proceedings of IEEE Conference, ISBN: 0-8186- 7713, pp:271-276, 1996.

      2. Karin Sobottka and Ioannis Pitas, Segmentation and Tracking of Faces in Color Images, Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (FG '96), ISBN: 0-8186-7713, pp: 236-341, 1996.

      3. S. Gil et. al, Combining multiple motion estimates for vehicle tracking, In ECCV, volume 11, pp: 307-320, 1996.

      4. S. Gong et. al, Bayesian nets for mapping contextual knowledge to computational constraints in motion segmentation and tracking In BMVC, 1993.

      5. Antonio Colmenarez et. al, Detection and Tracking of Faces and Facial Features, Proceedings of IEEE Conference, ISBN: 0-7803-5467, pp: 657 661, 1999.

      6. Shinjiro Kawato et. al, Automatic Skin-color Distribution Extraction for Face Detection and Tracking, Proceedings of ICSP 2000, ISBN: 0-7803-5747, pp: 1415-1418, 2000.

      7. Zhu Liu et. al, Face Detection and Tracking in Video Using Dynamic Programming, Proceedings of IEEE Conference, ISBN: 0-7803-6297, pp: 53 56, 2000.

      8. Bernd Menser et. al, Face Detection and Tracking, for Video Coding Applications, Proceedings of IEEE Conference, ISBN: 0-7803-6514, pp: 49 53, 2000.

      9. Claudio A.Perez et. al, Face and Eye Tracking Algorithm based on Digital Image Processing, Proceedings of IEEE Conference, ISBN: 0-7803-7087, pp: 1178 1183, 2001.

      10. Kwok- Wai Wong et. al, Face Segmentation and Facial Feature Tracking for Videophone Applications, Proceedings of 2001 International Symposium on Intelligent Multimedia,Vvideo and Speech Processing, pp: 518 521, May 24 2001, Hong Kong.

      11. E.Loutas et. al, An Information Theoretic Approach to Joint Probabilitistic Face Detection and Face Tracking, Proceedings of IEEE Conference, ISBN: 0-7803-7622, pp: I505 – I508, 2002.

      12. Wcimin Huang et. al, Real Time Head Tracking and Face Eyes Detection, Proceedings of IEEE Conference TENCON02, ISBN: 0-7803-7490, pp: 507 – 510, 2002.

      13. Bogdan Kwolek, Face Tracking System Based on Color, Stereovision and Elliptical Shape Features, Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS03) ISBN: 0-7695-1971, 2003.

      14. R. Chellappa, S. Zhou, and B. Li. Bayesian methods for face recognition from video. In Int. Conf. on Acoustics Speech and Signal Processing, Orlando, Florida, 2002.

      15. L. Davis et. al. Visual surveillance of human activity. In Asian Conf. on Computer Vision, pages 267274, 1998.

      16. De-Jiao Niu et. al, Research and Implementation of Real-Time Face Detection, Tracking And Protection, Proceedings of the Second International Conference on Machine Learning and Cybernetics, ISBN: 0-7803-786, pp: 2765-2770, 2-5 November 2003.

      17. Ragini Choudhury Verma et. al, Face Detection and Tracking in a Video by Propagating Detection Probabilities, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 10, Pp: 1215 1228, October 2003.

      18. Kensuke Shinoda et. al, Visually Steerable Sound Beam Forming Method Possible to Track Target Person by Real-Time Visual Face Tracking and Speaker Array, Proceedings of IEEE Conference, ISBN: 0-7803-7952, pp: 2199 2204, 2003.

      19. Zhiwei Zhu, Qiang Ji, 3D Face Pose Tracking From an Uncalibrated Monocular Camera, Proceedings of the 17th International Conference on Pattern Recognition (ICPR04), ISBN: 1051-4651, 2004.

      20. Xiong Bing, Yu Wei et. al, Automatic Focusing Technique for Face Detection and Face Contour Tracking , 2004 IEEE International Workshop on Biomedical Circuits & Systems, BioCM2004, ISBN: 0-7803-8665, pp: S3.2-9 – S3.2-I2, 2004.

      21. Jean-Christophe Terrillon et. al, DRUIDE: A Real-Time System for Robust Multiple Face Detection, Tracking and Hand Posture Recognition in Color Video sequences, Proceedings of the 17th International Conference on Pattern Recognition (ICPR04), ISBN: 1051-4651, 2004.

      22. Stephen J. Krotosky et. al, Face Detection and Head Tracking using Stereo and Thermal Infrared Cameras for Smart Airbags: A Comparative Analysis, 2004 IEEE Intelligent Transportation Systems Conference Washington. D.C., USA, ISBN: 0-7803-8500, pp: 17 22, 2004.

      23. Evangelos Loutas et. al, Probabilistic Multiple Face Detection and Tracking Using Entropy Measures, IEEE Transactions On Circuits And Systems For Video Technology, Vol. 14, No. 1, ISBN: 1051-8215, January 2004.

      24. Mohand Saïd Allili et. al, A Robust Video Object Tracking by Using Active Contours, Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW06), ISBN: 0-7695-2646-2/06, 2006.

      25. Vincent Girondel et. al, Real Time Tracking of Multiple Persons by Kalman Filtering and Face Pursuit for Multimedia Applications, ISBN: 0-7803-8387, pp: 201 205, 2004.

      26. Ji Tao et. al, A Unified Probabilistic Approach to Face Detection and Tracking, Proceedings of IEEE Conference, ISBN: 0-7803-8834, pp: 3797 3800, 2005.

      27. Sachin Gangaputra et. al, A Unified Stochastic Model for Detecting and Tracking Faces, Proceedings of the Second Canadian Conference on Computer and Robot Vision (CRV05), ISBN: 0-7695-2319, 2005.

      28. Eustace Painkras et. al, FaceProcessor: A Framework for Hardware Design and Implementation of a Dynamic Face Tracking System, Proceedings of IEEE Conference, ICICS 2005, ISBN: 0-7803-9282, pp: 172-176, 2005.

      29. Wenlong Zheng et. al, A Boosted Adaptive Particle Filter for Face Detection and Tracking, Proceedings of IEEE Conference, ICIP 2006, ISBN: 1-4244-0481, pp: 2821- 2824, 2006.

      30. George Awad et. al, A Unified System for Segmentation and Tracking of Face and Hands in Sign Language Recognition, Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), ISBN: 0-7695-2521, 2006.

      31. Bardia Mohabbati et. al, Face Localization and Versatile Tracking in Wavelet Domain, Proceedings of IEEE Conference, ISBN: 0-7803-9521, 1552 1556, 2006.

      32. Han-Pang Huang' et. al, Multi-CAMSHIFT for Multi-View Faces Tracking and Recognition, Proceedings of the 2006 IEEE International Conference on Robotics and Biomimetics, December 17 – 20, 2006, ISBN: 1-4244-0571, pp: 1334 1339, Kunming, China, 2006.

      33. Usman Qayyum et. al, Real Time Notch Based Face Detection, Tracking and Facial Feature Localization, IEEE-ICET 2006, 2nd International Conference on Emerging Technologies Peshawar, Pakistan, 13-14 November, ISBN: 1-4244-0502, pp: 70 75, 2006.

      34. Tilo Burghardt et. al, Real-time Face Detection and Tracking of Animals, 8th Seminar on Neural Network Applications in Electrical Engineering, NEUREL-2006, Faculty of Electrical Engineering, University of Belgrade, Serbia, September 25-27, ISBN: 1 -4244-0433, 37 32, 2006.

      35. XIA Siyu et. al, Robust Face Tracking Using Self-Skin Color Segmentation, ICSP2006 Proceedings, ISBN: 0-7803-9737, 2006.

      36. Frank Wallhoff et. al, Multimodal Face Detection, Head Orientation and Eye Gaze Tracking, 2006 IEEE International Conference on Multi sensor Fusion and Integration for Intelligent Systems September 3-6, 2006, ISBN: 1-4244-0567, pp: 13 18, Heidelberg, Germany.

      37. Augusto Destrero et. al, A system for face detection and tracking in unconstrained environments, ISBN: 978-1-4244-196, pp: 499-504, 2007.

      38. Eng-Jon Ong and Richard Bowden, Robust Facial Feature Tracking using Shape-Constrained Multi-Resolution Selected Linear Predictors, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 6, No. 1, January 2007.

      39. Michail Krinidis et. al, 2-D Feature-Point Selection and Tracking using 3-D Physics-Based Deformable Surfaces, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 17, No. 7, pp: 876 888, July 2007.

      40. Lukasz Stasiak et. al, Particle filters for multi-face detection and tracking with automatic clustering, IEEE International Workshop on Imaging Systems and Techniques – IST 2007, Krakow, Poland, May 4-5, ISBN: 1-4244-0965, pp: 1-6, 2007.

      41. Y. M. Mustafah et. al, Real-Time Face Detection and Tracking for High Resolution Smart Camera System, Proceedings of IEEE Conference on Digital Image Computing Techniques and Applications, DICTA.2007, ISBN: 0-7695-3067, pp: 387 393, 2007.

      42. Xuchao Li et. al, Automatic Real-Time Face Detection and Tracking Based on Space-Temporal Mutual Feedback for Video Sequence, Proceedings of IEEE Conference ICALIP 2008, ISBN: 978-1-4244-1724, pp: 1650 1654, 2008.

      43. Soufiane Ammouri et. al, Face and hands detection and tracking applied to the monitoring of medication intake, Canadian Conference on Computer and Robot Vision, CRV 2008, ISBN: 978-0-7695-3153, pp: 147 – 154, 2008.

      44. Prahlad Vadakkepat et. al, Multimodal Approach to Human- Face Detection and Tracking, IEEE Transactions on Industrial Electronics, Vol. 55, No. 3, ISBN: 0278-0046, pp: 1385 1393, March 2008.

      45. Dan Mikami et. al, Memory-based Particle Filter for Face Pose Tracking Robust under Complex Dynamics, Proceedings of IEEE Conference, ISBN: 978-1-4244-3991, pp: 999 1006, 2009.

      46. Chin-Shyurng Fahn et. al, A High-Definition Human Face Tracking System Using the Fusion of Omni-directional and PTZ Cameras Mounted on a Mobile Robot, 2010 5th IEEE Conference on Industrial Electronics and Applications, ISBN: 978-1-4244-5046, pp: 6 11, 2010.

      47. K. Kyamakya et. al, A Novel Image Processing Approach Combining a Coupled Nonlinear Oscillators-based Paradigm with Cellular Neural Networks for Dynamic Robust Contrast Enhancement, 2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA), ISBN: 978-1-4244-6678, 2010.

      48. Jeremiah R. Barr et. al, Detecting Questionable Observers Using Face Track Clustering, Proceedings of IEEE Conference, ISBN: 978-1-4244, pp: 182 189, 2010.

      49. Zhao Wenge et. al, FPGA-based Video Image Processing System Research, Proceedings of IEEE Conference, ISBN: 978-1-4244-5540, pp: 680 682, 2010.

      50. Amine Iraqui.H et. al, Fusion of Omnidirectional and PTZ cameras for face detection & tracking, 2010 International Conference on Emerging Security Technologies (IEEE), ISBN: 978-0-7695-4175, pp: 18 23, 2010.

      51. Ching-Kuo Wang et. al, Analysis and Implementation of the PTZ-Class Facial Tracking on Humanoid Robot, Proceedings of the Ninth International Conference on Machine Learning and Cybernetics (IEEE), ISBN: 978-1-4244-6527, pp: 2565 2570, Qingdao, 11-14 July 2010.

      52. A. A. Shafie et. al, Visual Tracking and Servoing of Human Face for Robotic Head Amir-II, IEEE International Conference on Computer and Communication Engineering (ICCCE 2010), 978-1-4244-6235, 11-13 May 2010, Kuala Lumpur, Malaysia.

      53. Hongwen Huo et. al, Online Boosting OC for Face Recognition in Continuous Video Stream, 2010 International Conference on Pattern Recognition, IPCR 2010, ISBN: 1051-4651, pp: 1233 1236, 2010.

      54. Tie Yun and Ling Guan, Fiducially Point Tracking For Facial Expression Using Multiple Particle Filters with Kernel Correlation Analysis, Proceedings of 2010 IEEE 17th International Conference on Image Processing, ISBN: 978-1- 4244-7994, pp: 373 376, September 26-29, 2010, Hong Kong.

      55. Yeong Nam Chae et. al, Development of an Efficient Face Detection and Tracking System for Mobile Devices, ISBN: 978-1-4244-9026, pp: 192 196, 2010.

      56. Zdenek Kalal et. al , FACE-TLD: Tracking-Learning- Detection Applied to Faces, Proceedings of 2010, IEEE 17th International Conference on Image Processing, (ICIP 2010), ISBN: 978-1-4244-7994, pp: 3789 3792, September 26-29, 2010, Hong Kong.

      57. Hardeep Singh et. al, Eye Tracking based Driver Fatigue Monitoring and Warning System, ISBN: 978-1-4244-7882, 2011.

      58. Wonjun Hwang et. al, Face Recognition System using Multiple Face Model of Hybrid Fourier Feature Under Uncontrolled Illumination Variation IEEE Transactions on Image Processing, Vol. 20, No. 4, ISBN: 1057-7149, pp: 1152 1165, April 2011.

      59. Prachi Agrawal et. al, Person De-Identification in Videos, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 3, ISBN: 1051-8215, pp: 299 310, March 2011.

      60. Di Xie, Lu Dang, Ruofeng Tong, Video Based Head Detection and Tracking Surveillance System, Proceedings of IEEE conference, 2012.
